Like those of our academic and Academic Development colleagues across the sector, our past nine months have been dominated by the spectre of generative AI – from initial fears (the world is over! Skynet is here!) to philosophical questions (what knowledge and skills do we value when an AI can write an essay?) and practical problems (how do I AI-proof my assessment?!). As an institution, we started with a workshop in February for staff to discuss the impact, the opportunities and the challenges presented by ChatGPT and generative AI, followed by an institutional response in March outlining our commitment to the ethical and responsible use of generative AI tools in our education practices.
And for the Academic Development team, this was where the fun began.
Because alongside learning how to use this technology and considering its potential impact, we needed to develop training, resources and support for our academic colleagues on how to incorporate generative AI into their teaching practice, how to teach their students critical AI literacy and – of course – how to prepare for assessment in the upcoming exam period, the next academic year and beyond.
On our existing Education Toolkit, we started developing an AI Hub with training and resources on using AI in the classroom, challenges and ethical issues, AI and inclusivity, and reimagining assessment in the age of AI. On the topic of assessment, we employed a range of creative, and often playful, approaches, including:
- An evaluation of ChatGPT in action, otherwise known as Loki the Chihuahua's one-dog crusade against AI (developed and published on THE Campus by Dr. Karen Kenny, Senior Academic Developer)
- A series of prompts to get ChatGPT to develop ideas for making your assessments ‘AI resilient’, and screencasts of them in action (my attempt at a ‘quick fix’)
- AI assessment re-writing retreats
- A range of workshops, talks and discussion panels as part of our EduExe Festival of teaching and learning in June
As the events at the EduExe Festival unfolded, it became clear that the desire for a quick fix or silver bullet for assessment had not been satiated. At one of our events, an audience member expressed the need for a quick reference guide on different assessment methods, their ‘weaknesses’ to AI-related misconduct and tips to address these. Afterwards, my colleague Professor Alex Janes shared the University of Reading’s A-Z of Assessment Methods and suggested we put together an ‘expert group’ of academic and professional services colleagues to develop something similar for AI and assessment. And so the AI and assessment matrix was born!
This matrix was developed in discussion with Professor Kevin Brandom, Professor Barrie Cooper, Professor Alex Janes, Dr. Edward Mills, Dr. Annabel Watson, myself, and Dr. Eleanor Hodgson and Dr. Karen Kenny, who are Senior Academic Developers in my team. We also drew on support from our supposed nemesis – ChatGPT – and incorporated ideas developed using the following prompts:
- How are XXX assessments susceptible to misconduct using generative AI?
- What skills do XXX assessments test?
- How could you design XXX assessments so they are less susceptible to cheating using generative AI?
- How could you include the use of generative AI in XXX assessment tasks?
The matrix outlines common assessment methods, each followed by:
- How susceptible it is to generative-AI-related misconduct
- Key skills being assessed
- Elements to consider including in your task to make it more resilient to AI-related misconduct
- How you could use AI as part of the assignment task
This then becomes a quick reference guide for situations where the method of assessment is fixed in a module descriptor (e.g., an essay) but the assessment task (e.g., the essay question) can be adapted.
The suggestions we provide are neither complete nor intended to be static. They represent our knowledge and experience now – and we know how quickly things go out of date as the technology develops at pace. But we hope it will nonetheless be a useful tool for the upcoming academic year.