Sep 2, 2025

Opinion

A.I. isn't the problem. Our idea of learning is.

Clay Shirky argues that only in-class paper exams can solve A.I. cheating. We suggest a better path: redesign assessment to value the thinking process, using A.I. as a scaffold for inquiry rather than a shortcut.

Haley Moller

Co-founder & CEO


In his piece, “Students Hate Them. Universities Need Them. The Only Real Solution to the A.I. Cheating Crisis,” Clay Shirky, a vice provost at N.Y.U., argues that encouraging “good” uses of A.I. has failed and that traditional fixes like detectors or redesigned assignments don't solve the problem. Instead, he calls for a shift from take-home essays to in-class, paper assessments — framing this change as a return to older academic traditions of direct, oral engagement.

Shirky underscores a troubling point: Even the “good students” (the ones who come prepared, care about the material, and contribute to class discussion) are still tempted to use A.I. to sidestep work outside of class. The problem, in his telling, is not limited to disengaged or struggling students but is nearly universal; if even motivated learners are outsourcing their effort, what does that say about the role of writing and critical thinking in education?

We think Shirky is right to identify the problem but wrong in his claim about its origin. The problem is not inherent to A.I. itself but lies instead in the way we have been conditioned to think about technology: A.I. has been marketed almost exclusively as a shortcut allowing people to save time, cut costs, and optimize efficiency. Students don't have to be cynical to see the appeal. Under pressure, with deadlines looming, who wouldn't reach for a tool that promises to do the hard work for you?

But A.I., like most technologies, is not all bad. Its value — like that of all tools — is not fixed; it is determined by how it is used. The printing press, the calculator, and the search engine were once accused of making students lazy, but over time they were absorbed into education in ways that enhanced rather than diminished learning. The danger with A.I. is not the technology itself, but the attitude we bring to it. If we see it only as a means of cutting corners, it will corrode learning. If we find ways to use it as a scaffold for deeper inquiry, it can become something else entirely.

This is why banning A.I. from classrooms is a dead end. Students will continue to use it in the shadows as a shortcut. What we need is a reframing of what education is asking students to do. Instead of doubling down on timed essays and proctored exams, we should be asking: Might this new technology be incorporated in a way that encourages students to slow down, to question, and to reflect? Shirky says no: “We cannot simply redesign our assignments to prevent lazy A.I. use. (We've tried.)” We would argue we have not tried hard enough; the problem is new, but so are the design possibilities.

One promising path is to rethink the very purpose of assessment. Exams are traditionally designed to measure what students know by demanding the right answer under pressure. But what if assessment focused less on outcomes and more on process? Imagine students being guided through a dialogue where they must ask questions, test ideas, and reason step by step. Imagine teachers not just seeing the final product, but the transcript of the student's thinking, including false starts, clarifications, and moments of insight. That record would reveal not only whether the student got to the right answer, but how they got there.

This kind of assessment changes the role of A.I. Instead of providing shortcuts, the technology becomes a conversational partner that pushes students to explore, explain, and reconsider. The machine isn't there to do the work for them; it's there to hold up a mirror to their reasoning. The teacher remains at the center, able to evaluate not only the correctness of answers but the quality of thought that produced them.

A.I. can be a powerful tool for learning, but it requires good faith from the user. Developing the ability to use A.I. thoughtfully as a means of self-teaching will only grow more valuable in the workforce and beyond. Banning it in schools seems short-sighted. Instead, we should focus on teaching students how to use A.I. responsibly and productively while keeping teachers at the center of the process rather than at the margins.

The approach we are advocating for is not about nostalgia for medieval oral exams, nor about panic over new tools. It is about recognizing that education has always been shaped by its technologies and that each generation has had to decide how to use them. With A.I., the choice before us is clear: Either A.I. becomes just another shortcut that hollows out learning, or we seize the chance to redefine it as a tool for cultivating curiosity and reasoning. The future of education depends on that choice.
