The Impact of GAI Tools on Evaluative Judgement in Higher Education

Rachel Forsyth

What is evaluative judgement?

Evaluative judgement is a key skill developed in all higher education courses, even if it does not always appear explicitly in programme learning outcomes. If you have sound evaluative judgement, you are able to assess your own work and that of others against relevant criteria and standards (Panadero et al., 2019; Tai et al., 2018). This enables you to carry out a variety of higher-order tasks, such as identifying low-quality resources, comparing different situations, and making decisions in a professional context. The skill is also linked to self-regulated learning, as it enables you to monitor and improve your own performance, and to give and receive constructive feedback (Yan & Carless, 2021).

How do we assess evaluative judgement?

Evaluative judgement is tested implicitly through learning outcomes which use verbs such as ‘choose’, ‘analyse’, ‘compare’, ‘recommend’, and, of course, ‘evaluate’. The addition of adverbs like ‘critically’ may indicate the level at which the outcome has been achieved. But assessing the ability to make evaluative judgements is usually somewhat hidden within the tasks which test these outcomes. Making judgements requires self-awareness, reflection, comparison, analysis, synthesis, and communication, which can be difficult to assess directly. So we have tended to assume that certain types of output, such as dissertations, case studies, clinical practice reports, and portfolios, act as proxies for evaluative judgement.

Evaluative judgement can be assessed to some degree in compressed periods such as a traditional unseen, supervised examination, but this type of assessment does not allow all of the underlying skills to be expressed, so we have tended to use longer, staged tasks to assess them. In these assessment tasks, the process of making judgements may be recorded, but it has generally not been given as much weighting as the final output: we tend to look at whether the case study made convincing recommendations, whether the treatment plan was similar to one a professional would have made, or whether the dissertation analysed sufficient literature.

What impact will AI have on the assessment of evaluative judgement?

The existence of generative AI (GAI) tools means that rapidly creating plausible outputs which seem to show evaluative judgement will be relatively straightforward for most learners. Without changes to the assessment process, it will be difficult to be sure that students have achieved the intended learning outcomes themselves. Reverting to controlled examinations instead of take-home tasks is not a good option, since we cannot test all the skills of evaluative judgement in that way.

What can teachers do?

Teachers will need to consider ways to explain what evaluative judgement is, why it is important, and how it relates to particular assessment tasks. Assessors will therefore have to think even more carefully about what they actually want students to demonstrate, and it seems likely that there will be a renewed focus on process and on the steps students take to reach the final product. Teaching time might be given over to more discussion and demonstration of critical thinking processes, and grading might involve more questioning of why a particular decision was taken and how the criteria for that decision were developed. Teachers can discuss their own processes for self- and peer assessment, perhaps using the example of research outputs, as well as explaining their own approaches to marking and giving feedback on student work. It will be important to do this work both with and without AI tools, to show how human brains approach it, and to help students evaluate what the benefits and threats of these tools might be in relation to decision-making.

Essentially, teachers will need to make explicit their expectations and their personal evaluative judgement processes, which have sometimes been hidden away in the setting of assignments: essays, reports, and summaries which acted as proxies for the demonstration of higher-order skills. This is potentially good news for students who find it difficult to decode teacher expectations, but teachers will need time and support to learn to live with the ways these tools may change assessment.

What does GAI suggest?

I should note that Microsoft Copilot for Office, a GAI tool, takes a positive view of this topic, suggesting that AI will “transform the assessment of evaluative judgement in several ways, such as:

  • Providing automated and immediate feedback on students’ work, using natural language processing and machine learning techniques.
  • Generating adaptive and personalized learning pathways and interventions, based on students’ strengths and weaknesses in evaluative judgement.
  • Creating realistic and authentic scenarios and simulations, where students can practice and demonstrate their evaluative judgement skills in a safe and controlled environment.
  • Enhancing the validity and reliability of assessment methods, by reducing human error and bias, and by using large-scale data and analytics to inform assessment design and evaluation.”

These suggestions are reasonable enough, but they still depend on teachers being able to articulate what they are looking for when they ask to see evidence of evaluative judgement, which is the key starting point.

References

Panadero, E., Broadbent, J., Boud, D., & Lodge, J. M. (2019). Using formative assessment to influence self- and co-regulated learning: the role of evaluative judgement. European Journal of Psychology of Education, 34(3), 535-557. https://doi.org/10.1007/s10212-018-0407-8

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76(3), 467-481. https://doi.org/10.1007/s10734-017-0220-3

Yan, Z., & Carless, D. (2021). Self-assessment is about more than self: the enabling role of feedback literacy. Assessment & Evaluation in Higher Education, 1-13. https://doi.org/10.1080/02602938.2021.2001431

Bio

Rachel Forsyth is a senior educational developer at U21 partner Lund University, in Sweden, and a Principal Fellow of AdvanceHE. She is particularly interested in curriculum design, digital learning, and assessment design and management, and is currently researching how trust is built in teaching situations. Her recent book, Confident Assessment in Higher Education (Sage, 2022), is a practical guide for anyone working in higher education to understand and improve assessments and examinations.
