Among the general public, generative AI’s most enthusiastic early adopters have been students. Surveys conducted a year ago revealed that nearly 90 percent of college students and more than 50 percent of high-schoolers were regularly using chatbots for schoolwork. Those numbers are certainly higher now. AI may be the most rapidly adopted educational tool since the pencil.
Because text-generating bots like ChatGPT offer an easy way to cheat on papers and other assignments, students’ embrace of the technology has stirred uneasiness, and sometimes despair, among educators. Teachers and pupils now find themselves playing an algorithmic cat-and-mouse game, with no winners. But cheating is a symptom of a deeper, more insidious problem. The real threat AI poses to education isn’t that it encourages cheating. It’s that it discourages learning.
To understand why, it’s important to recognize that generative AI is an automation technology. You can speculate all you want about computers eventually attaining human-level intelligence or even “superintelligence,” but for the time being AI is doing something that has a long precedent in human affairs. Whether it’s engaged in research or summarization, writing words or creating charts, it is replacing human labor with machine labor.
Thanks to human-factors researchers and the mountain of evidence they’ve compiled on the consequences of automation for workers,1 we know that one of three things happens when people use a machine to automate a task they would otherwise have done themselves:
Their skill in the activity grows.
Their skill in the activity atrophies.
Their skill in the activity never develops.
Which scenario plays out hinges on the level of mastery a person brings to the job. If a worker has already mastered the activity being automated, the machine can become an aid to further skill development. It takes over a routine but time-consuming task, allowing the person to tackle and master harder challenges. In the hands of an experienced mathematician, for instance, a slide rule or a calculator becomes an intelligence amplifier.
If, however, the maintenance of the skill in question requires frequent practice — as is the case with most manual skills and many skills requiring a combination of manual and mental dexterity — then automation can threaten the talent of even a master practitioner. We see this in aviation. When skilled pilots become so dependent on autopilot systems that they rarely practice manual flying, they suffer what researchers term “skill fade.” They lose situational awareness, and their reactions slow. They get rusty.
Automation is most pernicious in the third scenario: when a machine takes command of a job before the person using the machine has gained any direct experience doing the work. Without experience, without practice, talent is stillborn. That was the story of the “deskilling” phenomenon of the early Industrial Revolution. Skilled craftsmen were replaced by unskilled machine operators. The work sped up, but the only skill the machine operators developed was the skill of operating the machine, which in most cases was hardly any skill at all. Take away the machine, and the work stops.
Because generative AI is a general-purpose technology that can be used to automate all sorts of tasks and jobs, we’re likely to see plenty of examples of each of the three skill scenarios in the years to come. But AI’s use by high-school and college students to complete written assignments, to ease or avoid the work of reading and writing, is a special case. It puts the process of deskilling at education’s core. To automate learning is to subvert learning.
Unlike carpentry or calculus, learning is not a skill that can be “mastered.” It’s true that the more research you do, the better you’ll get at doing research, and the more papers you write, the better you’ll get at writing papers, but the pedagogical value of a writing assignment doesn’t lie in the tangible product of the work — the paper that gets handed in at the assignment’s end. It lies in the work itself: the critical reading of source materials, the synthesis of evidence and ideas, the formulation of a thesis and an argument, and the expression of thought in a coherent piece of writing. The paper is a proxy that the instructor uses to evaluate the success of the work the student has done — the work of learning. Once graded and returned to the student, the paper can be thrown away.
Generative AI enables students to produce the product without doing the work. Rather than reading and making sense of difficult source texts, they can ask a chatbot to gin up simplified summaries. Rather than synthesizing various ideas and perspectives through concerted thinking, they can ask the chatbot for a generic synthesis. And rather than expressing (and refining) their thoughts through the composition of sentences and paragraphs, they can get the bot to spit out a first draft or even a final one. The paper a student hands in no longer provides evidence of the work of learning its creation entailed. It is a substitute for the work.2
In a recent Chronicle of Higher Education article, Clay Shirky, a long-time analyst of digital media who serves as Vice Provost for AI and Technology in Education at New York University, drew on his extensive work with professors and students to explain how this new dynamic is eating away at one of education’s foundations:
Every year, 15 million or so undergraduates in the United States produce papers and exams running to billions of words. While the output of any given course is student assignments — papers, exams, research projects, and so on — the product of that course is student experience. “Learning results from what the student does and thinks,” as the great educational theorist Herbert Simon once noted, “and only as a result of what the student does and thinks.” . . .
The utility of written assignments relies on two assumptions: The first is that to write about something, the student has to understand the subject and organize their thoughts. The second is that grading student writing amounts to assessing the effort and thought that went into it. At the end of 2022, the logic of this proposition — never ironclad — began to fall apart completely. The writing a student produces and the experience they have can now be decoupled as easily as typing a prompt, which means that grading student writing might now be unrelated to assessing what the student has learned to comprehend or express.
The work of learning is hard by design — unchallenged, the mind learns nothing — and trying to alleviate or avoid it is nothing new. Time-pressed students have always sought shortcuts (CliffsNotes built a business serving them), and unscrupulous students have always found ways to cheat. But generative AI is something different, not just in scale but in kind. AI’s speed, ease of use, flexibility, and, most important, wide adoption throughout society are making it feel normal and even necessary to automate reading and writing and bypass the work of learning. Struggling with words and the ideas they represent is starting to feel old-fashioned and even foolish, like struggling to navigate a city with a paper map. Why bother, when a machine can do the heavy lifting for you?
What AI too often produces is the illusion of learning. Students may well be able to write better papers with a chatbot than they could on their own, but they end up learning less. The problem doesn’t seem to be limited to writing assignments. An extensive 2024 University of Pennsylvania study of the effects of AI on high-school math students found, as its authors write in a forthcoming PNAS article, that “access to GPT-4 significantly improves performance [as measured by grades],” but when access to the technology is taken away, “students actually perform worse than those who never had access.” Armed with generative AI, a B student can produce A work while turning into a C student.3
An ironic consequence of the loss of learning is that it prevents students from using AI adeptly. Writing a good prompt requires an understanding of the subject being explored. The prompter needs to know the context of the prompt. The development of that kind of understanding is exactly what a reliance on AI impedes. “The most useful deployment of current and near-future generative AI in research and expression absolutely requires that you already know a great deal,” writes Swarthmore history professor Timothy Burke, but the way the technology is actually being used “is brutally short-circuiting the processes by which people gain enough knowledge and expressive proficiency to be able to use the potential of generative AI correctly.” The tool’s deskilling effect extends to the use of the tool itself.
Shirky senses a growing “sadness” among students as they become more dependent on AI. They feel compelled to use the technology even though they know it’s sapping their learning — and foreclosing the intellectual possibilities that learning opens, the satisfactions that come with doing or grasping something hard. He quotes some undergraduates:
“I’ve become lazier. AI makes reading easier, but it slowly causes my brain to lose the ability to think critically or understand every word.”
“I literally can’t even go 10 seconds without using Chat when I am doing my assignments. I hate what I have become because I know I am learning NOTHING, but I am too far behind now to get by without using it . . . my motivation is gone.”
“Everyone is doing it.”
We’ve been focused on how students use AI to cheat. What we should be more concerned about is how AI cheats students.
1. I explore this evidence in my book on automation, The Glass Cage.

2. One might argue that, in adopting generative AI, students are bringing to its logical conclusion a long-running trend in education that was set in motion by parents, politicians, and school administrators: stressing quantitative measures of performance over actual learning.

3. This cycle of dependency is good for AI companies. In March, with evidence of AI's disruptive effects on education mounting, OpenAI announced that it was giving students free access to the premium version of its service, ChatGPT Plus, through the end of the school year. For AI companies, students aren't learners. They're customers.
"What we should be more concerned about is how AI cheats students." Fully agree with your emphasis on what students are losing in the break-neck adoption of AI tools. My husband and I recently attended a talk at the Perimeter Institute on AI and education, and it was the most hopeful perspective that I have come across: https://schooloftheunconformed.substack.com/p/learning-fast-and-slow-why-ai-will
Thanks for your writing!
Well done! I hope no AI was hurt in the production of this post.