“Just use ChatGPT.”
That’s the whisper in lecture halls, the temptation in browser tabs, and the callow counsel passed among today’s undergraduates as though it were an indispensable life hack, or even a godsend.
More and more, large language models (LLMs) are becoming the silent co-authors behind too many “perfectly fine” paragraphs that look fluent and polished on the surface but are weightless, insipid, and shallow in substance. Put differently, generative AI tempts students to stop thinking after the initial prompt, producing vapid prose that sounds like thought while saying almost nothing at all.
This view shouldn’t be read as alarmism. It’s prudence. Nor does it fall into the trap of personifying AI, as if the machine itself had motives or malice. It is simply an acknowledgment of consequences now plain to anyone who reads students’ written work for a living. When we outsource the agency of young learners, whose central task is curiosity and growth rather than meeting workplace productivity quotas, we also outsource the formation of the mind.
LLMs are undeniably useful. Tools like ChatGPT, Claude, and Gemini are accessible, fast, and relatively frictionless, handling tasks like summarization, translation, outlining, practice-question generation, and troubleshooting with ease. In the right place, they offer real utility. But they ought not gain a strong foothold in pedagogy, for a tool’s value depends on what it displaces. When it displaces the very work education is meant to train — attention, judgment, synthesis, revision, and the willingness to sit with uncertainty — it doesn’t help at all. It hollows.
Educators don’t get to be neutral about that, especially those of us who insist that education is not simply a product, but a public good. In our classrooms, we can either defend the habits of mind a robust democracy requires or preside over the automation of thinking and the training of automatons.
The reckoning has already begun. In a Washington Post article published on December 12, 2025, professors explain how they are returning to oral exams and in-class, handwritten exams because take-home assessments have become so easy to counterfeit with generative AI. One professor quoted in the piece tells students that using AI to do the intellectual lifting is like “bringing a forklift to the gym when your goal is to build muscle.” She describes the classroom as a “gymnasium” and wants students “to lift the weights” themselves. Assessments conducted in live settings force students to own their understanding of course material, explain claims and concepts in their own words, and make their own connections, without a hidden prompt smoothing over gaps in comprehension.
The gymnasium metaphor deserves to stick, because it names the real danger of AI in education: not wrong answers, but atrophied faculties. You can still “lift the weight” with a forklift. You just won’t get stronger. And the more we, as educators, normalize the forklift, the more we quietly redefine the purpose of schooling, shifting the focus from cultivating creativity and critical thinking to producing quantifiable outputs that never probe beyond the superficial. Worse yet, our inaction will train students into a kind of learned helplessness: if the forklift is ever unavailable — offline, paywalled, restricted, or simply wrong — many will discover they no longer know how to lift at all.
Research is beginning to validate what teachers have been sensing. A Microsoft Research study published in April 2025 surveyed 319 “knowledge workers” and reported that higher confidence in generative AI (as opposed to higher confidence in one’s own abilities) is associated with reduced critical-thinking effort: people set aside the cognitive work of discovery and synthesis and instead supervise and verify an LLM’s output.
This finding is not a moral indictment of those who use AI tools and benefit from productivity boosts. But efficiency is not the same thing as education. A polished paragraph is not the same thing as a formed mind.
There’s another loss we should name plainly: the detriment to the craft of writing itself. Students already tend to treat revising as a tedious task — something that “takes time” rather than something that clarifies thought and strengthens reasoned argumentation. This is a loss precisely because drafting, editing, and rewriting are not cosmetic steps that merely dress up the real work of getting one’s ideas down. They are the real work.
We write to find out what we think. We revise to discover what our claims can actually bear. The sentence you cut, the transition you refine, the example you sharpen — none of these are mere ornaments on good writing. They are the mind training itself through reflection. A tool that promises to eliminate revision might save time and labor, but at what cost? For students, it can short-circuit the organic process by which ideas become arguments, and arguments become convictions disciplined by evidence.
Creativity is vulnerable, too, in a subtler way. The promise is that AI will “supercharge” imagination. The cost is that it can standardize it. A paper published this year in the peer-reviewed journal Nature Human Behaviour reports that ChatGPT-assisted brainstorming tends to reduce idea diversity across groups — narrowing and homogenizing the range of available sources and solutions, even when individual outputs appear clever. Similarly, a Science Advances study published last year found that generative AI can be useful for “creative writing” while still making the overall pool of stories more similar. In other words, outputs are better-looking but less novel. This screams inauthenticity.
The danger isn’t that students will stop producing. It’s that they’ll sow their seeds of thought in a terrain that can yield only rearrangements of the same thing, mistaking fluency for originality and coherence for creativity and understanding.
So yes: returning to tried-and-true methods is one option. In-class writing. Oral exams. Live defense. Not because we long to grade blue book exams, but because we’re serious about the mission and purpose of education.
However, educators shouldn’t respond by fortifying the old walls alone. We should also build smarter, more innovative classrooms that encourage students to interact with one another and actively grapple with the objects of knowledge before them.
This is especially urgent for those of us in political science, history, philosophy, and humanities departments, for we are explicitly charged with the duty of forming responsible, inquisitive, and informed citizens.
Our aim is not only for students to recite civics. It’s for them to appreciate and practice it. They should leave our courses with the ability to weigh evidence, interpret institutions, argue in good faith, and reach compromise, while concomitantly recognizing trade-offs, resisting manipulation, and revising their views with a budding sense of intellectual maturity. Democracies do not fail only because people lack information. They fail because people lose the habits of thought that underlie deliberation and informed consent.
That’s why, when designing my undergraduate Introduction to American Government course, I’ve opted to replace the midterm and final exams with a moot court and a mock Congress. The pedagogical aim of these exercises is active learning and interactive immersion: moving students from rote memorization and recitation to collectively inhabiting the process and stakes of governance. For instance, in the mock Congress, students will certainly learn how a bill becomes a law, for we need a common vocabulary to conduct the exercise in the first place. The idea is that they will then apply what they have learned from our readings, not on paper but through first-hand experience with agenda-setting, coalition-building, bargaining, and the hard calculus of compromise.
This functions like an oral exam with an interactive twist that transforms the classroom into an animated learning community. It makes thinking public, accountable, and alive. Students cannot prompt their way through discussion or automate their intellectual courage.
The classroom should be one of the few places where students are expected to try, experiment, take intellectual risks, and fail with low stakes. These are all elements of personal and intellectual growth. When AI becomes the ever-present escape hatch, students lose permission to participate in this process. They stop drafting and start generating. They stop thinking and start submitting.
So here’s the call, educators. Fight back! Neither panic nor resignation will take us far. But thoughtful, creative design can. Bring back assessments that make thinking visible and accountable. Create assignments that involve trial and error and demand spontaneity. Defend the classroom as a civic workshop, a space where minds are trained for free inquiry rather than optimized for productivity.
If we don’t protect critical thinking now, we won’t merely see worse essays. We’ll see a weaker, incurious citizenry.
Robert T.F. Downes is a PhD candidate in the Department of Political Science at the University of Connecticut. He is also an Adjunct Professor at Rhode Island College, where he teaches Introduction to American Government. The views expressed here are his alone.

