Artificial Intelligence (AI) is changing the way we work and learn. Increasingly sophisticated natural language processing (NLP) tools can competently complete tasks currently the remit of professional roles. Kay Hack and Charles Knight, from Advance HE’s Knowledge and Innovation team, consider the implications for the sector and the need for a nuanced response.
What is ChatGPT and why is it unsettling the academic community?
ChatGPT is a sophisticated chatbot capable of Generating new text through its Pre-training on vast amounts of human-written content. It uses natural language processing to Transform this data into text that resembles human writing, with accurate grammar, punctuation and spelling. ChatGPT can also learn and adapt to the writing style of its user – representing a significant advance in the development of AI writing tools. It is just one example of a range of AI tools coming to market.
Authentic assessment in the era of AI: avoiding the deficit model
The ability of ChatGPT to generate or summarise text, provide accurate responses to questions, write computer code and complete an array of other tasks commonly found in assignments raises questions about the authenticity of student work. Since its launch in November 2022, academics from diverse subject areas have been ‘testing’ ChatGPT with exam questions and coursework – incidentally supplying it with more training data! The consensus view is that its responses are broadly competent.
The answers may not be as detailed or complete as those provided by a human expert in the field, but students with a strong understanding of the topic could use the AI's response as a starting point and then enhance it with their own knowledge and research. The ability to generate unique responses that elude plagiarism detection tools will expand opportunities for academic dishonesty. Unsurprisingly, the rapid pace of technological change means we have entered a period in which policies and regulations lag behind practice, leading to highly individualised responses. Take the following example of an attempt to detect the use of ChatGPT:
“I have determined that … your submission was written by AI. I’ve made this determination on my own, as well as having used ChatGPTZero that detects AI-written papers... you have been given a zero for this assignment”
Professor (US, anonymised)
Anecdotally, false positive rates for ChatGPTZero vary between 2% and 20%, although this and other detection tools have yet to be subjected to independent verification. Moreover, the act of uploading student work to such tools may breach data protection legislation or institutional policies, raising both legal and ethical questions. It is also relatively straightforward to evade detection using parameters within ChatGPT itself, such as increasing randomness or deploying different writing styles. Given the tool's intrinsic ability to learn, the poacher will always be a step ahead of the gamekeeper.
Therefore, higher education providers need to think beyond detection to fundamental questions of what we teach and how we assess learning. Authentic assessment should provide students with opportunities to demonstrate the skills and knowledge required to work with, and make responsible use of, AI. We need to prepare students for employability by supporting them to critique the content produced by AI and to appreciate the underlying algorithms and data sets on which such tools are built, including the biases present in that data. ChatGPT itself provides useful guidance on how to interpret the output it produces:
“It's important to note that as a machine, I do not have personal biases or opinions, but I can inadvertently reproduce the biases present in the data I was trained on. I strive to provide accurate and unbiased information, but it is important for users to critically evaluate the information I provide and consider multiple sources.”
Having an institutional viewpoint
Higher education providers therefore need to think systematically about how the increased use of these tools will change their practices. It is easy to fixate on a deficit model in which students use these tools to commit academic malpractice, but there is a range of interesting and legitimate use cases that need to be considered. This raises important questions for quality assurance and institutional policies, as well as broader questions about employability and the skills required to thrive in an AI-dominated world of work.