By Demian Hommel, CTL AI in Teaching and Learning Fellow in partnership with the AI Literacy Center

For the final installment of the Intentional AI at OSU series, I sat down with Inara Scott, senior advisor for strategy and innovation in the College of Business. While many of our previous spotlights focused on specific classroom strategies or innovations, Inara’s vantage point allows her to think broadly about the architecture of higher education. As a land-grant university, Oregon State University has a mission to serve the public good, and Inara is at the forefront of asking how AI reshapes our responsibility to society, policy, and the economy.
The challenge: Education in the “in-between” space
According to Inara, we are in a liminal period: AI is capable enough to disrupt traditional labor and academic models, yet it cannot replace human judgment, context, or those deeply human attributes of empathy, connection, and care. There are also gaps in its abilities that may be visible only to experts in a field, something Ethan Mollick calls the “jagged frontier”: AI is remarkably good at some complex tasks while failing at other, much simpler ones. For an institution dedicated to building student success, this creates a unique strategic tension:
- The content knowledge risk: While some tasks need a human in the loop who is at least as capable as the AI, others can be effectively offloaded to it. Given this, how do we decide what to teach? Do students need to know how to format a document or design a PowerPoint? In five years, will they need to know how to write a formula in Excel? How do we teach writing in an environment where the written word is constantly mediated by AI tools?
- The foundational risk: Learning requires productive struggle. If we know students are learning with AI, how do we prevent a long-term atrophy of the critical thinking skills required for democratic participation and economic leadership? Can we expect students to “figure it out” and protect their own cognitive development, or do we have a responsibility to teach them those skills?
- The paradox of efficiency: In a world where AI handles the “average” task with ease, the university’s value shifts: how do our roles evolve beyond information provision so that we truly become architects of human flourishing? How do we teach students to provide the “extra 10%”: the empathy, creativity, and lived experience that machines cannot reach?
- The institutional mission: As a public-serving body, we must decide whether we are training people to be “good enough” or to be intentional architects of a better world. How does our mission evolve in the face of fast-moving societal change?
The innovation: Scaling human wisdom
Inara’s work in AI-responsive pedagogy isn’t just about models or prompting; it’s about institutional evolution. She views the “expert-in-the-loop” as a strategic imperative for a land-grant university.
- Socio-technical stewardship: Much like my conversation with Quincy Clark, Inara emphasizes that we must teach students to view AI as a system that interacts with values and ethics. In her own classroom, she uses a subject’s technical dimensions as a springboard for deeper discussion of its social and ethical contexts.
- The inclusive scaffold: To sustain our mission of broad participation, Inara uses AI to lower barriers to entry for complex tasks, enabling broader student engagement with synthesis and critique.
- Strategic pedagogy: She advocates for moving from “teaching to the test” toward “teaching to the person.” This means redesigning curricula to focus on the transformational power of mentorship, the one thing that an algorithm cannot scale.
Reflection: The institutional inflection point
Inara believes we are at a crossroads for the professoriate and the institution as a whole. If we ignore AI, we fail to prepare students for their future careers and risk their ability to think critically in an AI-integrated world; if we embrace it without intent, we fail our mission as a public-serving body. This “in-between” space is an opportunity to reclaim the university’s human heart.
“AI is a moving target, but our commitment to student success is static. We have to help them find their voice in this new landscape so they can flourish, not just function.” — Inara Scott
Key advice for faculty
- Vary the rhythm of learning: Just as a good writer varies sentence length to create music, we should vary our pedagogical approaches. Use short, automated tasks for efficiency, but also engage students with projects of “considerable” length, ones that burn with energy and demand a deeply human impetus.
- Lead with strategic transparency: Be explicit about why you are using—or not using—AI. Lay out shared expectations on day one so that the technology remains a partner in learning, not a replacement for thinking.
- Prioritize human resilience: One of the university’s goals is to develop socially conscious individuals. Use this technological shift to double down on human-centered pedagogy, helping students develop the intellectual flexibility to navigate an unpredictable economic landscape.
- Focus on the “so what?”: AI can give an answer, but it can’t explain why that answer matters to a community. Always encourage students to consider the social and ethical implications of their work.
This concludes our Intentional AI series. Thanks for reading!

About the author: Demian Hommel is a professor of geography and environmental science in the College of Earth, Ocean, and Atmospheric Sciences and is an AI in Teaching and Learning Fellow with the OSU Center for Teaching and Learning. When he isn’t exploring the societal and environmental impacts of AI, you can find him DJing under the alias Dr. Gonzo or trying to graft citrus trees in his greenhouse.
Editor’s note: This is the final installment in the 10-part Spring 2026 series of Intentional AI Spotlights on the Oregon State University CTL blog. You can find the whole series at Stories of AI @ OSU.
Top image generated with Microsoft Copilot.