Reflections on the Assessment and Feedback Literature Review (2016-2021)

05 May 2022 | Kathleen M. Quinlan and Edd Pitt

Authors of a new literature review, Impacts of Higher Education Assessment and Feedback Policy and Practice on Students: A Review of the Literature 2016-2021, Kathleen M. Quinlan and Edd Pitt give us a glimpse inside their process.

How do you get from 3,091 articles about assessment, feedback and peer feedback in higher education to 21 recommendations in four months? Here, we reflect on our processes, the decisions we took along the way, and how we overcame four key challenges.   

Challenge 1: Volume of articles 

We knew this was a big field when we started. Nonetheless, we hadn’t appreciated just HOW big or how international in scope it was until we got underway. The previous review (Jackel, Pearce, Radloff and Edwards, 2017) limited itself mainly to articles appearing in Assessment and Evaluation in Higher Education, plus selected articles from a few other key higher education journals. We cast our net much more broadly, considering any peer-reviewed articles written in English.

In selecting our search terms, we were deliberately mindful of different nomenclature in different countries. We also searched abstracts rather than keywords, knowing how idiosyncratic keywords can be. These choices put the onus on us to screen 3,091 abstracts to identify 481 empirical articles that addressed our research question. In the end, our articles came from 71 different countries across six continents.  

Separating articles into the three broad headings of assessment, feedback and peer assessment/feedback, consistent with the Advance HE Assessment Framework, generally worked well. We also quickly began coding abstracts into themes that became headings and sub-headings in the overall review. This coding helped us divide the task into smaller chunks, allowing us to treat each section as a mini-review within a larger review.

Challenge 2: Defining and focusing on high quality research 

By focusing on peer-reviewed articles, we could depend initially on reviewers’ assessments of the quality of the evidence. However, we found there was still considerable variety in methods, the type of evidence generated, and the ability to make claims about the “demonstrable impact” we sought in the review. We considered different published evidence standards and debated how stringent we should be, and even whether the most stringent standards (randomised controlled trials) were the best approach for complex classroom-based research.

We were aware of the longstanding philosophical and methodological debate in education about what “quality” means in educational research. In the end, we found England’s Office for Students’ evidence standards for evaluating outreach activities to be helpful. We used their definitions of the evidence needed for causal claims to select studies to showcase in our review. We referenced, but did not elaborate on, other studies that offered empirical but not strong causal evidence. We offered commentary in areas where methodological issues were critical to interpretation.

Challenge 3: Drawing conclusions about what works 

Evidence was often limited during the five-year timeframe of the review. Sometimes studies reported contradictory results: a practice that worked in one context or study might not work in another. We took two main approaches to this challenge.

First, we looked for mechanisms and principles that underpinned what were necessarily complex, multi-component, real-world practices. To do so, we needed to apply broader knowledge of educational principles. Taking a mechanisms-based approach helped us explain sometimes contradictory findings in terms of the presence or absence of these underpinning processes, such as scaffolding of new practices or peer collaboration. Because every context is different, having a set of principles and an awareness of the key processes that define a given practice gives practitioners the tools they need to adapt designs for their own settings. In effect, practitioners need to know what the key ingredients are; if you leave out a key ingredient, the cake may not rise.

Second, we bounced ideas off each other, keeping each other “honest”. Kathleen led on assessment, while Edd led on feedback. We each first-authored our respective sections, but once we had solid drafts and were still refining the conclusions, the second author provided a fresh eye: helping us see the wood for the trees, noticing contradictions and confusing statements, and suggesting summaries and conclusions. While the second author didn’t always get it right, the ensuing discussion helped us clarify our messages and ensure our conclusions matched the evidence presented.

Challenge 4: Offering recommendations 

This challenge builds on challenges 2 and 3. We approached recommendations with a strong sense of responsibility, knowing that some people would jump straight from the executive summary to the recommendations without reading the lengthy review of the literature in between. Therefore, for each recommendation, we offered a high-level headline, typically in the imperative, and then nuanced each one with additional detail and explanation.

We also continued the emphasis on underlying mechanisms that we outlined under Challenge 3. Thinking about recommendations for different groups, particularly practitioners and policymakers, helped us consider levers of change such as evaluations of teaching. Our aim was to identify different ways key findings could be embedded to create aligned systems that promote good practice. Ultimately, we want to have a positive impact on educational practice and student learning. The Summits based on this report should foster discussion of ways to translate the findings into enhanced practice.

We hope this reflection on process will help you contextualise your reading of the full review and consider how to put its key findings into practice in your own role.  

Impacts of Higher Education Assessment and Feedback Policy and Practice on Students: A Review of the Literature 2016-2021 is part of the Connect Benefit Series - Student Success. Following publication, outputs for the Assessment and Feedback project will include an interactive webinar with the review's authors, virtual summits for Advance HE members and a series of podcasts. Find out more here.

Find out more about the Connect Benefit Series during 2021-22 here.

