Illustration: Drawing of brain split in half with AI questions falling out
An arms race is under way in classrooms. Since the launch of ChatGPT a little over two years ago, enterprising students have become some of the chatbot’s most enthusiastic users, relying on the tool to write essays they would otherwise have had to labour over.
Teachers, in turn, have resorted to unreliable online checkers, with one university professor failing an entire class after his screening tool incorrectly accused every student of using ChatGPT. Others have hidden tiny white text within assignments, instructing chatbots to use words such as “banana” and “Frankenstein” in their essays in an attempt to catch students out.
The teachers appear to be losing. A quarter of 13 to 17-year-olds recently admitted to the US Pew Research Center that they use ChatGPT to write their homework, double the proportion found a year earlier. Last year, the Higher Education Policy Institute found that one in eight undergraduates – 13pc – were using AI to write assessments, and 3pc were handing in the chatbot’s output without checking it.
Cheating is a problem as old as schoolwork. But as artificial intelligence bots become increasingly ubiquitous and capable, researchers are now starting to wonder if the technology is affecting how we learn and think – not only for lazy students, but for the rest of us.
Last week, researchers at Microsoft and Carnegie Mellon provided evidence for what many have already suspected: we are offloading our brains on to generative AI systems.
In a study of 319 “knowledge workers”, people working in fields such as computer science, education, business and administration, they found use of the tools was associated with lower levels of critical thinking – the ability to comprehend and question ideas and statements, rather than merely absorb them.
In other words, workers were less likely to engage their brains if they felt comfortable leaving the thinking to AI.
“GenAI tools appear to reduce the perceived effort required for critical thinking tasks among knowledge workers,” the study said. “Confidence in AI is associated with reduced critical thinking effort, while self-confidence is associated with increased critical thinking.”
The researchers noted that “used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved”. Ominously, they warned that reliance on automation risks leaving our cognitive muscles “atrophied and unprepared” for when they are needed.
The authors wrote: “While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.”
‘I can think for you’
New technology has always been accompanied by warnings that it will break our brains through “cognitive offloading”.
Socrates feared that reliance on writing would erode our memories and lead to a mere surface-level understanding of important arguments. The rise of the pocket calculator in the 1970s led to a classroom panic as teachers and parents worried that children would no longer learn arithmetic.
The arrival of search engines and the convenience of accessing them at any time through smartphones has led to concerns about “digital amnesia”, the idea that we forget things as soon as we are told them, confident that our devices will be able to bail us out later. Yet few of us would choose to give these advanced tools up.
However, researchers say AI is different. While Google and smartphones might allow us to store information elsewhere, freeing up our brains for other tasks, AI promises to think on our behalf.
“In the past, I would offload information somewhere else. But when you can offload your whole thinking process to technology, and the technology says ‘I can think for you’, this is where the difference comes,” says Professor Michael Gerlich, a behavioural psychologist at the Swiss Business School in Zurich.
“It’s so convenient, you can ask a question and you get the answer directly.”
A study by Gerlich published in January reached similar conclusions to the Microsoft research, observing “a significant negative correlation between frequent AI tool usage and critical thinking abilities”. It found that younger users – those aged between 17 and 25 – were the most dependent on AI tools, and scored lower on critical thinking markers.
In the professional field that has been most affected by AI chatbots – computer programming – some workers say they now feel helpless without the tools. One programmer says he recently gave up a coding task on a flight with no WiFi connection, leaving him unable to query his virtual assistant. “I just cannot make myself code any more,” he said.
Gerlich says that existing research does not prove that AI is breaking our brains, merely that those less confident in their abilities are more likely to lean on them. “But when you put all these studies together,” he adds, “there is some indication that there could be a causality”.
New generation of ‘reasoning’ systems
Researchers say the effects of the technology on our brains depend on how people use chatbots.
Lev Tankelevitch, one of the authors of the Microsoft paper, says that people were more likely to offload thinking to AI when they saw the answer as less important, and would still apply critical thinking to higher stakes queries. He adds that a new generation of “reasoning” systems, which describe how they arrive at answers, is more likely to encourage critical evaluation.
“When AI challenges us, it doesn’t just boost productivity – it drives better decisions and stronger outcomes,” he argues.
The researchers outlined a series of changes that could ease the risk of offloading our thinking to chatbots. One study has suggested that AI should ask people follow-up questions when delivering answers, forcing users to think about what they have just been told.
Another said “cognitive forcing” functions, such as giving people a multiple choice of answers before showing the correct one, or simply taking longer to produce answers, could jolt people into engaging their brain.
There was just one problem – users hated it. “People assigned the least favourable subjective ratings to the designs that reduced the over-reliance the most,” the study noted.
Gerlich says: “Humans appreciate comfort. And if you have a tool that takes difficult things away from you and makes your life a lot easier, then we tend to use it.”
Microsoft’s Tankelevitch says that the company’s research will help it “design tools that promote critical thinking”. But with tech companies burning through billions to achieve artificial general intelligence – systems that are as capable as humans – concerns are growing that our own brainpower may be collateral damage.