By Chris Kneifl, OSU College of Liberal Arts

Resilient Teaching Voices Series
Editor’s note: Student names in this post have been changed to protect privacy.
I hadn’t planned to talk about AI with Rachel. She came to my office hours for a casual chat, but we ended up talking about the state of academia in general. Rachel came to OSU, in part, to become a better writer. She’s not looking to cut corners either. She reads for enjoyment, and she recently bought a (non-required) book about English grammar so that she could better understand things like the semicolon.
A couple of weeks later, I spoke with another student, Santiago. He is studying languages to better connect with his family. Like Rachel, he wants to be able to use language with precision and to communicate well in both English and Spanish. Learning has not always been easy for him. In many ways, he feels like just being here is an accomplishment. But now that he’s here, he wants to make the most of it.
Rachel and Santiago are the kind of students we all love to teach. They both feel conflicted, however, about certain aspects of the learning environment. Rachel laments the mindless acceptance of AI and other digital tools that rob students of their agency. Santiago echoes these concerns, adding that he feels distracted by the onslaught of tech tools that impede real learning. At the same time, they realize that AI is here to stay, and they worry that they’ll be left behind professionally if they don’t know how to use it.
These concerns raise difficult questions. Is it possible to use AI responsibly? Or is it a Pandora’s box that, once opened, leads to educational ruin? And as somebody who occasionally uses ChatGPT myself, am I a hypocrite for prohibiting my students from doing the same?
It’s tempting to adopt an absolutist view on these questions. In a recent article about AI use among college students, Anastasia Berg, a professor of philosophy at UC Irvine, describes how even limited use of AI is harmful. Berg warns of a generation of students who could become subcognitive:
At stake are not just specialized academic skills or refined habits of mind but also the most basic form of cognitive fluency. To leave our students to their own devices — which is to say, to the devices of A.I. companies — is to deprive them of indispensable opportunities to develop their linguistic mastery, and with it their most elementary powers of thought.
These concerns are not new. For the last 10 years or so, I’ve seen tools like Google Translate creep into my students’ work. I’ve witnessed the loss of learning that occurs when students adopt technologies that do too much of the work for them.
What does seem new, though, is the institutional embrace of these tools. In his recent essay on the topic, Jonathan Malesic, who teaches writing at SMU, describes how universities themselves are to blame for creating an environment in which the irresponsible use of AI can flourish. In this world, humanistic skills like reading and critical thinking take a back seat to a process that is more hurried and less contemplative. In such an environment, educational “success” is mostly a matter of writing a good prompt.
Is there a way to harness the upside of AI without leaving students incapable of nuanced, sophisticated reasoning? Many argue that if our guidelines are sufficiently clear, we can create an environment in which AI serves an important role but is ultimately limited by sound judgment. The problem with this argument is that AI diminishes the very mental faculties needed to make such judgments. It does too much of the “thinking” for us, and its results are sometimes wildly misleading (or worse).
Resilient teaching goes beyond the responsible use of AI, if there is such a thing. It’s about creating an environment in which students can develop the fundamental skills they’ll need in a complex world. It’s about maintaining honest standards and expectations that encourage students to reach their full learning potential. And it’s about fostering meaningful, human interactions in a world that feels increasingly disconnected.
Rachel and Santiago seem to get this. They both came here to learn the basic skills of language. We can serve them best by teaching them skills that won’t be obsolete after the next update to ChatGPT.
References
Berg, Anastasia (2025, October 29). Why even basic A.I. use is so bad for students. New York Times. https://www.nytimes.com/2025/10/29/opinion/ai-students-thinking-school-reading.html
Guinzburg, Amanda (2025, June 1). Diabolus ex machina. Everything Is a Wave. https://amandaguinzburg.substack.com/p/diabolus-ex-machina
Malesic, Jonathan (2024, October 25). There’s a very good reason why college students don’t read anymore. New York Times. https://www.nytimes.com/2024/10/25/opinion/college-university-students-reading.html
Sankaran, Vishwam (2023, April 6). ChatGPT cooks up fake sexual harassment scandal and names real professor as accused. The Independent. https://www.the-independent.com/tech/chatgpt-sexual-harassment-law-professor-b2315160.html

About the author: Chris Kneifl is a Senior Instructor of Spanish in the School of Language, Culture, and Society. He enjoys teaching language through music and through the use of comprehensible input. Outside of work, he likes to play the piano and explore the PNW on two wheels.
Editor’s note: This is part of a series of guest posts about resilience and teaching strategies by members of the Fall ’25 Resilient Teaching Faculty Learning Community facilitated by CTL. The opinions expressed in guest posts are solely those of the author.
Top image generated with Microsoft Copilot.