One look at the headlines and you would think civilisation is either ending, ascending, or about to be permanently automated. Apparently, AI will destroy work, create infinite prosperity, democratise knowledge, eliminate expertise, replace teachers, save education, and possibly run the world sometime between Year 10 exams and the next university curriculum review. Not bad for a technology most schools were barely discussing seriously three years ago.
I am not especially interested in the apocalypse narrative or the utopian one. We have been here before. Calculators were going to destroy mathematics. Google was going to destroy memory. Wikipedia was going to destroy scholarship. Grammarly was going to destroy writing. Every educational technology arrives with revolutionary rhetoric and then quietly settles into a more predictable role. It simply amplifies what was already there.
AI fits the same pattern, only at higher intensity: it is rocket fuel for the engaged student and a sedative for the disengaged. It raises the ceiling and lowers the floor at the same time. That is not a glitch. That is how tools interact with human motivation. Cognitive science has been very boringly consistent about this for decades. Learning requires effort, retrieval, feedback, struggle, and time. Remove the effort entirely and you remove the learning. Support the effort and you accelerate it.
The high-functioning student already knows how to use AI. They treat it like a slightly overconfident tutor who occasionally needs fact-checking. They ask it for alternative explanations, generate practice questions, stress-test their arguments, and iterate on drafts. Importantly, they still do the thinking. They just get more: more feedback and more cognitive reps. If expertise is built through deliberate practice, AI can be a decent training partner.
Then there is the other use case. Essays that read smoothly but say very little. Reflections that sound thoughtful but feel emotionally outsourced. Reports that are immaculate until you ask a follow-up question and discover the intellectual scaffolding is mostly decorative. Students asking AI to summarise readings they never opened, generate citations they never checked, explain calculations they never attempted, and paraphrase until plagiarism software stops complaining. The academic equivalent of instant coffee. Technically coffee, occasionally useful, but nobody serious pretends it’s the real thing.
The institutional reaction has been a mixture of panic, denial, and PowerPoint. Some schools banned it, which worked about as well as banning smartphones by confiscating chargers. Universities rushed to detection software, discovered false positives, worried about legal exposure, and started quietly muttering about assessment redesign instead. Regulators now talk about “secure assessment,” which is polite bureaucratic language for “we can no longer assume a polished assignment means the student actually thought about it.”
But focusing only on AI misses something bigger that has been happening for years. Long before ChatGPT, classrooms were already drifting toward less thinking happening in class. More content pressure, larger classes, administrative creep, accountability metrics, wellbeing mandates, reporting requirements. The easiest adjustment was predictable. Explain faster. Assign practice for home. Assume engagement. Hope for completion. In other words, we were outsourcing learning long before AI arrived. ChatGPT just industrialised the outsourcing.
Homework used to mean independent effort. Now it means independent effort plus algorithmic assistance plus whatever level of motivation the student happens to bring that evening. Which means the cognitive quality of the work varies wildly, even when the presentation quality does not.
And while this might sound like an argument for the full flipped classroom orthodoxy, where students supposedly watch lectures at home and arrive ready for deep discussion, it is not. In practice, the flipped classroom often results in many watching nothing, skimming summaries, or asking AI to explain the lecture they did not watch. Something else we have known for a long time: efficiency models tend to work best for already efficient learners.
The argument here is simpler and slightly more uncomfortable. If thinking matters, more thinking needs to happen in the room, with the teacher present.
Classrooms should be cognitively busy again. Not administratively busy. Not performatively busy. Cognitively busy. Students reading in class. Writing in class. Solving problems in class. Explaining ideas in class. Struggling in class. Getting feedback immediately. Technology used deliberately, not continuously. Laptops out when needed. Away when not. Which, if we are honest, should be most of the time. Technology is an excellent tool. It is a terrible default environment. Like dessert. Lovely occasionally, but problematic as a staple diet.
Cognitive science has not changed here. Working memory is limited. Attention is fragile. Multitasking degrades learning. Retrieval strengthens memory. Feedback accelerates expertise. Struggle is not a design flaw. It is the engine of learning. None of this is controversial. It just becomes inconvenient in busy classrooms.
There is also a stamina issue nobody likes talking about. Cognitive endurance is built through sustained effort. If AI gradually removes the need for sustained effort, students may become very good at producing outputs while becoming less comfortable with prolonged thinking. That matters because automation does not eliminate cognition. It concentrates it. Fewer people will need to think deeply, but those who do will need to think very well.
Education has always had a slightly aspirational relationship with effort. We talk about resilience, grit, and intellectual curiosity. Then we structure classrooms to minimise cognitive load for efficiency. AI exposes that contradiction brutally.
Right now, many classrooms look like this. Quick explanation. Partial attention. Significant work assigned for later. Devices open constantly “just in case.” A hopeful assumption that learning will somehow crystallise between dismissal and bedtime. That model worked tolerably when independent work meant independent cognition. That assumption is now, at best, optimistic.
If anything, AI should push us the other way. More thinking where we can see it. More dialogue. More process visibility. Less faith that unsupervised digital environments produce deep understanding. Not because technology is evil, but because cognition still requires effort. Does anyone remember when tests and essays were written in class? Today, the overwhelming majority of assessments seem to be take-home tasks. Why?
Yes, it sounds suspiciously old-fashioned. Sometimes progress is rediscovering what worked before we became distracted by what was new and shiny, and nothing exemplifies that more right now than AI.
Here is the paradox nobody wants to state plainly. The future may involve less routine cognitive labour, much the same way that Google reduced the need to remember things while increasing the penalty for not understanding them. Information became cheap. Judgment did not. AI is likely to do the same for thinking. Routine cognition gets automated. The remaining cognition becomes more valuable. Which makes it slightly odd that some classrooms are drifting toward less visible thinking just as the world is beginning to demand more of it. AI will draft, summarise, code, analyse, predict. But that makes human cognition more valuable, not less. Judgment, creativity, synthesis, ethical reasoning, intellectual independence. Those do not develop through convenience. They develop through effort.
Students should finish the day mentally tired. Not drained from compliance, but tired from thinking. That used to be normal. Increasingly, it seems optional. And AI makes optional thinking dangerously attractive. None of this makes AI the villain. Used well, it is extraordinary. Used lazily, it is intellectual junk food. The technology is neutral. Human motivation rarely is.
Which brings us back to the uncomfortable core argument.
AI will produce some exceptional learners. Students who already lean into challenge now have unprecedented leverage. But it will also produce more students who look competent while leaning heavily on tools they may not fully understand.
Education systems are beginning to sense this, even if public discussion still oscillates between panic and hype. The real issue is not whether AI belongs in education. The issue is whether classrooms remain places where thinking happens visibly, effort is normal, and cognition is developed deliberately. Eventually, someone has to ask the blunt question:
If a student cannot think without the tool, what exactly are schools for?