
Is AI making students less critical thinkers?

22 Sep 2025 | Dr Elizabeth Kaplunov and Athina Ntasioti

Dr Elizabeth Kaplunov, Senior Lecturer in Health, and Athina Ntasioti, Lecturer in Health at Regent College London, explore whether generative AI tools are helping or hindering students’ ability to think critically.

Artificial intelligence is changing the way students learn - but is it also stopping them from thinking for themselves? 

Research suggests that without the right guidance, AI use can lead to reduced engagement in the mental processes that underpin critical thinking. 

A recent UK study found that 80% of students are aware their university has an AI policy, and nearly half feel staff are somewhat prepared to manage AI use. Yet students continue to rely heavily on generative AI (GenAI) tools like ChatGPT to help with essays, coding tasks and even exam preparation. 

While this can speed up research and spark ideas, it raises important questions. Are students learning less deeply? Are they losing the ability - or motivation - to question, analyse and reflect? And what can educators do to keep critical thinking alive? 

Shortcuts come at a cost 

Psychologist Daniel Kahneman famously explained that humans have two systems for thinking.  

“System 1” is fast, intuitive and emotional - great for reacting quickly, but prone to bias and error. “System 2” is slow, deliberate and logical - the foundation of critical thinking. 

AI often appeals to System 1. It autocompletes answers, suggests ideas instantly and gives users the illusion of expertise. It mimics smart thinking but doesn’t encourage users to think for themselves. In fact, AI tools often become what researchers call “black boxes” - they give polished outputs, but don’t show how decisions were made or where information came from (Aydın et al., 2025). This makes it harder for students to evaluate, challenge or build on what they see. 

What the research shows 

A recent study used EEG scans to monitor students’ brain activity while writing essays. Those who used ChatGPT showed significantly lower engagement in memory, attention and executive control areas of the brain compared to students who worked without AI. The researchers called this “metacognitive laziness.” Not only did these students think less deeply, they also remembered less of what they wrote. 

Other research paints a similar picture. A study of programming students found that those using AI often accepted solutions without questioning them. In contrast, students working with human tutors spent more time refining their ideas and thinking critically about the task. 

When students rely too heavily on AI, they may skip the uncomfortable but essential process of struggling, questioning and refining. Instead of developing their own voice or solving complex problems, they become passive consumers of machine-generated content. 

Creativity can also suffer. Research shows that essays written with ChatGPT were less original and more uniform. Other evidence echoed this: students who relied on AI produced fewer creative variations in their coding projects than those who collaborated with peers. 

Reflection - a key part of deep learning - was another casualty. Students who used AI were less likely to revisit or critique their work unless specifically asked to do so. Without structured reflection, learning becomes shallow and transactional. 

But it doesn’t have to be this way. 

Rethinking how students use AI 

Several studies show that with the right guidance, students can use AI without sacrificing their critical thinking. 

In one experiment, students were asked to pause during AI-supported tasks and reflect on prompts like, “What assumptions are being made?” or, “Can you think of another perspective?” The results were promising: students who used these prompts asked better questions, evaluated responses more carefully and explored more angles. 

Further research found that when students were given “provocations” - small nudges to challenge AI-generated content - they were more likely to amend or critique the material rather than accept it blindly. 

Similarly, educational chatbots that gave students real-time metacognitive feedback were tested in a biology class. Students who received these prompts performed better on knowledge tests, felt more motivated and believed more in their own abilities. 

In all these cases, students benefited from one key shift: moving from passive use to active engagement. When students are encouraged to ask “why,” “how,” or “what if,” they begin to reclaim the cognitive effort that AI tools often displace. 

So what can universities do? 

The role of educators 

Teachers and course designers have a crucial role to play. Assignments can include clear instructions about how AI is (and isn’t) expected to be used. Students can be asked to annotate how they used AI, explain changes they made, or compare outputs with their own thinking. 

Group discussions, peer reviews and class debates can also bring AI use out into the open, helping students reflect on their choices and learn from each other. 

One study showed that when students used AI within carefully designed tasks, they developed stronger conceptual understanding and remained in control of their reasoning. In other words, AI worked best when it was a support tool, not a substitute for thought. 

Some universities have already created open-access guidance to help students use AI thoughtfully. The University of Edinburgh, University of Sydney, and University of Manchester share annotated assignments, AI-use statements and reflection prompts to encourage deeper engagement. Others, like King’s College London, UCL, Oxford and Liverpool, offer practical tools from “golden rules” and AI-use templates to prompting tips, citation guidance and digital fluency toolkits. 

Final thoughts 

AI isn’t going away, nor should it. When used thoughtfully, it can spark ideas, break through creative blocks and make learning more efficient. But if students treat AI as a shortcut to answers, rather than a partner in thinking, essential academic skills may be lost. 

Universities need to go beyond policies and focus on practice: helping students learn not just how to use AI, but how to question it. That means designing assignments that reward reflection, creating space for discussion and modelling curiosity rather than convenience. 

In the end, the challenge is simple but urgent: to ensure that in a world of fast answers, students don’t lose the ability to think deeply. 

 

Dr Elizabeth Kaplunov is a senior lecturer in health at RCL. She is also a research lead at RCL. Her research focuses on culture, motivation, health promotion, disability and behaviour change. 

Athina Ntasioti is a lecturer in health at RCL, whose extensive background in teaching and research has been demonstrated through leadership in health and psychology modules, inclusive education and inquiry into mental health and AI. 



We feel it is important for voices to be heard to stimulate debate and share good practice. Blogs on our website are the views of the author and don’t necessarily represent those of Advance HE.
