About Raul Moreno

Equity-driven instructional designer | writing & humanities instructor | English/Creative Ph.D. (University of South Dakota, 2020) | Returned Peace Corps Volunteer (Kyrgyzstan, 2010) | former NPR producer

This post is adapted from a panel talk for AI Week, “Empowering OSU: Stories of Harnessing Generative AI for Impact in Staff and Faculty Work.”

This past spring marked one year in my role as an instructional designer for Ecampus. Like many of our readers, I started conversing with AI in the early months of 2023, following OpenAI’s rollout of ChatGPT. Or as one colleague noted in recapping news of the past year, “generative AI happened.” Later, I wrote a couple of posts for this blog on AI and media literacy. A few things became clear from this work. Perhaps most significantly, in the words of research professor Ethan Mollick: “You will need to check it all.”

As the range of courses I support began to expand, so did my everyday use of LLM-powered tools. Here are some of my prompts to ChatGPT from last year, edited for clarity:

  • What is the total listening time of the Phish album Sigma Oasis?
    • Answer: 66 minutes and 57 seconds
  • How many lines are in the following list of special education acronyms (ranging from Section 504 of the Rehabilitation Act to TBI, Traumatic Brain Injury)?
    • Answer: 27 lines
  • Where is the ancient city of Carthage today?
    • Answer: Today, Carthage is an archaeological site and historical attraction in the suburbs of the Tunisian capital, Tunis.
  • What is the name of the Roman equivalent of the Greek god Zeus?
    • Answer: Jupiter, king of the gods and the god of the sky and thunder
  • What’s the difference between colors D73F09 and DC4405?
    • Answer: In terms of appearance, … 09 will likely have a slightly darker, more orange-red hue compared to … 05, which might appear brighter. (Readers might also know these hues as variations on Beaver Orange.)

And almost every day:

  • Please create an (APA or MLA) citation of the following …

The answers were often on point but always in need of fact checking or another iteration of the prompt. Early LLMs were infamously prone to hallucinations. Factual errors and tendencies toward bias are still not uncommon.
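
That checking can often be done without another trip to the chatbot. The color comparison above, for example, takes only a few lines of Python to verify; here is a minimal sketch that decodes both hex values and reports the per-channel difference:

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """Convert a hex string like 'D73F09' into an (R, G, B) tuple."""
    return tuple(int(hex_color[i:i + 2], 16) for i in range(0, 6, 2))

# The two Beaver Orange variants from the prompt above
first = hex_to_rgb("D73F09")   # (215, 63, 9)
second = hex_to_rgb("DC4405")  # (220, 68, 5)

# Positive deltas mean the second color has more of that channel
for channel, a, b in zip("RGB", first, second):
    print(f"{channel}: {a} -> {b} ({b - a:+d})")
```

Running it confirms the gist of the answer: DC4405 carries slightly more red and green and slightly less blue than D73F09, so it reads as a touch brighter.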

As you can sense from my early prompts, I was mostly using AI as either a kind of smart calculator or an uber-encyclopedia. But in recent months, my colleagues and I here at Course Development and Training (CDT)—along with other units in OSU’s Division of Educational Ventures (DEV)—have been using AI in more creative and collaborative ways. And that’s where I want to focus this post.

The Partnership

First, some context for the work we do at DEV. Online course development is both a journey and a partnership between the instructor or faculty member and any number of support staff, from training to multimedia and beyond. Anchoring this partnership is the instructor’s working relationship with the instructional designer—an expert in online pedagogy and educational technology, but also a creative partner in developing the online or hybrid course.

[Image: infographic showing the online course development process, from set up, to terms 1-2 in collaboration with the instructional designer, to launch and refresh.]
Fig. 1. Collaboration anchors the story of online course development at OSU (credit: Ecampus).

Ecampus now offers more than 1,800 courses in more than 100 subjects. Every course results from a custom build that must maintain our strong reputation for quality (see fig. 1). This post is focused on that big circle in the middle—collaboration with the instructional designer. That’s where I see incredible potential for support or “augmentation” from generative AI tools.

As Yong Bakos, a senior instructor with the College of Engineering, recently reminded Faculty Forum, modern forms of this technology have been around since the 1940s, starting with the programmable computers that shaped World War II. But now, he added, in challenging faculty to use AI for rapid, personalized feedback to learners: “we speak the same language.”

Through continued partnership, how do we make such processes more nimble, more efficient? What do augmentation and collaboration look like when we add tools like Copilot or a custom GPT? Many instructional designers have been wrestling with these questions of late.

“Human Guided, but AI Assisted”

Here are a few answers from educators Wesley Kinsey and Page Durham at Germanna Community College in Virginia (see fig. 2). Generative AI—also known as GAI—is a powerful tool, says Kinsey. “But the real magic happens when it is paired with a framework that ensures course quality.”

[Image: webinar slide]
Fig. 2. From a recent QM webinar on “unleashing” generative AI (CC BY-NC-ND).

Take this line of inquiry a little further, and one starts to wonder: How might educators track or evaluate progress toward such use cases?

Funneling Toward Augmentation

As a thought experiment, I offer the following criteria and inventory—a kind of self-assessment of my own “human guided” journey through course development with generative AI (see fig. 3).

Criteria for Augmenting Development with Generative AI

  • ESTABLISHED – Regular, refined practice in course development
  • EMERGING – Irregular and/or unrefined practice; could be improved
  • ENVISION – Under consideration or imagined; not yet practiced

Faculty with experience teaching online may find my suggested criteria familiar; “established, emerging, envision” is adapted from an Ecampus checklist used in course redevelopment.

[Image: funnel-shaped infographic with five augmentations: (1) from set up to intake; (2) course content; (3) suggested revisions; (4) discussion, planning, and review; (5) building and rebuilding.]
Fig. 3. Self-assessment of augmenting development with generative AI (CC BY-NC-SA).

Augmentation 1: From Set Up to Intake

Broadly speaking, I’m only starting to use chatbots in kicking off a course development—to capture a bulleted summary of an intake over Zoom, for example. Or with these kinds of level-setting prompts:

  • Remind me, what is linear regression analysis?
  • What fields are important to physical hydrology?
  • Explain to a college professor the migration of a social annotation learning tool from LTI 1.1 to 1.3.

Augmentation 2: Course Content

In my experience, instructors are only now beginning to envision how they might propose a course or develop its learning materials and activities with support from tools like Copilot, which is increasingly adept at helping us with this kind of iterative brainstorming work. The key here will be getting comfortable with practice: engaging in sustained conversations with defined parameters, often in scenarios that build on existing content. In recent practice with building assignments, I’m finding Claude 3 Sonnet helpful; its responses are more nuanced, and you can upload brief documents at no cost and revisit previous chats.

[Image: screenshot of a conversation with Copilot, starting with a request to create an MLA citation of a lecture by Liam Callanan at the Bread Loaf Writers' Conference.]
Fig. 4. From a “more precise” conversation on citation generation. Can you spot Copilot’s errors in applying MLA style?
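
For readers taking up that challenge, here is a rough Python sketch of the general MLA pattern for a conference presentation. Every field value below is a placeholder for illustration, not a detail pulled from the actual lecture:

```python
def mla_presentation(speaker: str, title: str, event: str,
                     date: str, venue: str) -> str:
    """Assemble an MLA-style works-cited entry for a conference presentation."""
    return f'{speaker}. "{title}." {event}, {date}, {venue}.'

# Placeholder field values for illustration only; verify each against the source
print(mla_presentation(
    speaker="Callanan, Liam",
    title="Title of Lecture",
    event="Bread Loaf Writers' Conference",
    date="Day Month Year",
    venue="Middlebury College, Ripton, VT",
))
```

Comparing a generated citation against a trusted template like this makes slips in punctuation, element order, or invented details easier to spot.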

Augmentation 3: Suggested Revisions

Once course content begins rolling in, I apply more established practices for augmentation. For building citations of learning materials, I’m using Copilot’s “more precise” mode for its more robust abilities to read the open web and draw on various style guides (see fig. 4). With activities, often the germ of an idea for interaction needs enlargement—a statement of purpose or more detailed instructions. Here are a few more examples from working with the School of Psychological Science, with prompts edited for brevity:

  • What would be the purpose of practicing rebus puzzles in a lower division course on general psychology?
  • Please analyze the content of the following exam study guide, excerpted in HTML. Then, suggest a two-sentence statement of purpose that should replace the phrase lorem ipsum.
  • How should college students think about exploring Rorschach tests with inkblots? Please suggest two prompts for reflection (see fig. 5).

[Image: screenshot of a Week 6 reflection activity on the Rorschach inkblot test, including a warning about the limitations of Rorschach tests and prompts for reflection.]
Fig. 5. From an augmented reflection activity in PSY 202H, General Psychology (credit: Juan Hu).

Augmentation 4: Discussion, Planning & Review

As with course planning, I’m not quite there yet with using generative AI to shape module templates and collect preferred settings for the building I do in Canvas. But by next year, armed perhaps with a desktop license for Copilot, I can imagine using AI to offer instructors custom templates or prompts to accelerate the design process. One more note on annotating augmentation: it’s incredibly important to let my faculty partners know, with consistent labeling, when I’m suggesting course content adapted from a conversation with AI. Most often, I’m not the subject matter expert; they are. That rule of thumb from Ethan Mollick still holds true: “You will need to check it all.”

Augmentation 5: Building & Rebuilding—More Efficiently

Finally, I look forward to exploring opportunities to write and revise the code behind everything we do more efficiently, with support from generative AI. Just imagine if the designer or instructor could ask a bot to suggest ways to strengthen module learning outcomes or update a task list, right there in Canvas.
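
To make that concrete, here is a hypothetical sketch against the Canvas REST API. The host, course ID, and token are placeholders; the idea is simply to pull a plain-text module inventory that a designer could hand to a chatbot for suggested revisions:

```python
import requests

# Placeholder configuration; a real script would load these securely
BASE_URL = "https://canvas.example.edu/api/v1"   # your institution's Canvas host
COURSE_ID = "12345"                              # hypothetical course ID
TOKEN = "canvas-api-token-here"                  # personal access token

headers = {"Authorization": f"Bearer {TOKEN}"}

# Canvas's modules endpoint can include each module's items in one call
response = requests.get(
    f"{BASE_URL}/courses/{COURSE_ID}/modules",
    headers=headers,
    params={"include[]": "items", "per_page": 50},
)
response.raise_for_status()

# Print a plain-text inventory a designer (or a chatbot) could review
for module in response.json():
    print(module["name"])
    for item in module.get("items", []):
        print(f'  - {item["type"]}: {item["title"]}')
```

Whatever suggestions came back would, of course, face the same “check it all” review as any other AI-assisted content.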

Your Turn

With the above inventory in mind, let’s pause to reflect. To what extent are you comfortable using generative AI as a course developer? In what ways could this technology supplement new partnerships with instructional designers—or other colleagues involved in the discipline you teach? Together, how would you assess “augmentation” at each stage of the course development process?

Looking back on my own year of “human guidance with AI assistance,” I now turn more reflexively to AI for help with frontline design work—even as our team considers, for example, the ethical dimensions of asking chatbots to deliver custom graphics for illustrating weekly modules. In other stages, I’m still finding my footing in leveraging new tools, particularly during set up, refresh, and redesign. As we continue to partner with faculty, I remain open to navigating the evolving intersection of AI and course development.

(And now, for fun: Can you spot the augmentation? How much of that last sentence was crafted with support from a “creative” conversation with Copilot? Find the answer below.)

Resources, etc.

The following resources may be helpful in exploring generative AI tools, becoming more fluent with their applications, and considering their role in your teaching and learning practices.

For the first part of this post, please see Media Literacy in the Age of AI, Part I: “You Will Need to Check It All.”

Just how, exactly, we’re supposed to follow Ethan Mollick’s caution to “check it all” happens to be the subject of a lively, forthcoming collaboration from two education researchers who have been following the intersection of new media and misinformation for decades.

In Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online (University of Chicago Press, November 2023), Mike Caulfield and Sam Wineburg provide a kind of user’s manual to the modern internet. The authors’ central concern is that students—and, by extension, their teachers—have been going about the process of verifying online claims and sources all wrong—usually by applying the same rhetorical skills activated in reading a deep-dive on Elon Musk or Yevgeny Prigozhin, to borrow from last month’s headlines. Academic readers, that is, traditionally keep their attention fixed on the text—applying comprehension strategies such as activating prior knowledge, persisting through moments of confusion, and analyzing the narrative and its various claims about technological innovation or armed rebellion in discipline-specific ways.

The Problem with Checklists

Now, anyone who has tried to hold a dialogue on more than a few pages of assigned reading at the college level knows that sustained focus and critical thinking can be challenging, even for experienced readers. (A majority of high school seniors are not prepared for reading in college, according to 2019 data.) And so instructors, partnering with librarians, have long championed checklists as one antidote to passive consumption, first among them the CRAAP test, which stands for currency, relevance, authority, accuracy, and purpose. (Flashbacks to English 101, anyone?) The problem with checklists, argue Caulfield and Wineburg, is that in today’s media landscape—awash in questionable sources—they’re a waste of time. Such routines might easily keep a reader focused on critically evaluating “gameable signals of credibility” such as functional hyperlinks, a well-designed homepage, airtight prose, digital badges, and other supposedly telling markers of authority that can be manufactured with minimal effort or purchased at little expense, right down to the blue checkmark made infamous by Musk’s platform-formerly-known-as-Twitter.

Three Contexts for Lateral Reading

One of the delights in reading Verified is drawing back the curtains on a parade of little-known hoaxes, rumors, actors, and half-truths at work in the shadows of the information age—ranging from a sugar industry front group posing as a scientific think tank to headlines in mid-2022 warning that clouds of “palm-sized flying spiders” were about to descend on the East Coast. In the face of such wild ideas, Caulfield and Wineburg offer a helpful, three-point heuristic for navigating the web—and a sharp rejoinder to the source-specific checklists of the early aughts. (You will have to read the book to fact-check the spider story, or as the authors encourage, you can do it yourself after reading, say, the first chapter!) “The first task when confronted with the unfamiliar is not analysis. It is the gathering of context” (p. 10). More specifically:

  • The context of the source — What’s the reputation of the source of information that you arrive at, whether through a social feed, a shared link, or a Google search result?
  • The context of the claim — What have others said about the claim? If it’s a story, what’s the larger story? If a statistic, what’s the larger context?
  • Finally, the context of you — What is your level of expertise in the area? What is your interest in the claim? What makes such a claim or source compelling to you, and what could change that?
“The Three Contexts” from Verified (2023)

At a regional conference of librarians in May, Wineburg shared video clips from his scenario-based research, juxtaposing student sleuths with professional fact checkers. His conclusion? By simply trying to gather the necessary context, learners with supposedly low media literacy can be quickly transformed into “strong critical thinkers, without any additional training in logic or analysis” (Caulfield and Wineburg, p. 10). What does this look like in practice? Wineburg describes a shift from “vertical” to “lateral reading” or “using the web to read the web” (p. 81). To investigate a source like a pro, readers must first leave the source, often by opening new browser tabs, running nuanced searches about its contents, and pausing to reflect on the results. Again, such findings hold significant implications for how we train students in verification and, more broadly, in media literacy. Successful information gathering, in other words, depends not only on keywords and critical perspective but also on the ability to engage in metacognitive conversations with the web and its architecture. Or, channeling our eight-legged friends again: “If you wanted to understand how spiders catch their prey, you wouldn’t just look at a single strand” (p. 87).

[Image: SIFT graphic by Mike Caulfield, with icons for stop, investigate the source, find better coverage, and trace claims, quotes, and media to the original context.]

Image 2: Mike Caulfield’s “four moves”

Reconstructing Context

Much of Verified is devoted to unpacking how to gain such perspective while also building self-awareness of our relationships with the information we seek. As a companion to Wineburg’s research on lateral reading, Caulfield has refined a series of higher-order tasks for vetting sources called SIFT, or “The Four Moves” (see Image 2). By (1) Stopping to take a breath and get a look around, (2) Investigating the source and its reputation, (3) Finding better sources of journalism or research, and (4) Tracing surprising claims or other rhetorical artifacts back to their origins, readers can more quickly make decisions about how to manage their time online. You can learn more about the why behind “reconstructing context” at Caulfield’s blog, Hapgood, and as part of the OSU Libraries’ guide to media literacy. (Full disclosure: Mike is a former colleague from Washington State University Vancouver.)

If I have one complaint about Caulfield and Wineburg’s book, it’s that it dwells at length on the particulars of analyzing Google search results, which fill pages of accompanying figures and a whole chapter on the search engine as “the bestie you thought you knew” (p. 49). To be sure, Google still occupies a large share of the time students and faculty spend online. But as in my quest for learning norms protocols, readers are already turning to large language model tools for help in deciding what to believe online. In that respect, I find other chapters in Verified (on scholarly sources, the rise of Wikipedia, deceptive videos, and so-called native advertising) more useful. And if you go there, don’t miss the authors’ final take on the power of emotion in finding the truth, a line that sounds counterintuitive but, in context, adds another, rather moving dimension to the case against checklists.

Given the acceleration of machine learning, will lateral reading and SIFTing hold up in the age of AI? Caulfield and Wineburg certainly think so. Building out context becomes all the more necessary, they write in a postscript on the future of verification, “when the prose on the other side is crafted by a convincing machine” (p. 221). On that note, I invite you and your students to try out some of these moves on your favorite chatbot.

Another Postscript

The other day, I gave Microsoft’s AI-powered search engine a few versions of the same prompt I had put to ChatGPT. In “balanced” mode, Bing dutifully recommended resources from Stanford, Cornell, and Harvard on introducing norms for learning in online college classes. Over in “creative” mode, Bing’s synthesis was slightly more offbeat—including an early-pandemic blog post on setting norms for middle school faculty meetings in rural Vermont. More importantly, the bot wasn’t hallucinating. Most of the sources it suggested seemed worth investigating. Pausing before each rabbit hole, I took a deep breath.

Related Resource

Oregon State Ecampus recently rolled out its own AI toolkit for faculty, based on an emerging consensus that developing capacities for using this technology will be necessary in many areas of life. Of particular relevance to this post is a section on AI literacy, conceptualized as “a broad set of skills that is not confined to technical disciplines.” As with Verified, I find the toolkit’s frameworks and recommendations on teaching AI literacy particularly helpful. For instance, if students are allowed to use ChatGPT or Bing to brainstorm and evaluate possible topics for a writing assignment, “faculty might provide an effective example of how to ask an AI tool to help, ideally situating explanation in the context of what would be appropriate and ethical in that discipline or profession.”

References

Caulfield, M., & Wineburg, S. (2023). Verified: How to think straight, get duped less, and make better decisions about what to believe online. University of Chicago Press.

Mollick, E. (2023, July 15). How to use AI to do stuff: An opinionated guide. One Useful Thing.

Oregon State Ecampus. (2023). Artificial Intelligence Tools.

Have you found yourself worried or overwhelmed in thinking about the implications of artificial intelligence for your discipline? Worried whether, for example, your department’s approaches to teaching basic skills such as library research and source evaluation still hold up? You’re not alone. As we enter another school year, many educators continue to think deeply about questions of truth and misinformation, creativity, and how large language model (LLM) tools such as chatbots are reshaping higher education. Along with our students, faculty (oh, and instructional designers) must consider new paradigms for our collective media literacy.

Here’s a quick backstory for this two-part post. In late spring, shortly after the “stable release” of ChatGPT to iOS, I started chatting with GPT-3.5, the model that innovator Ethan Mollick describes as “very fast and pretty solid at writing and coding tasks,” if a bit lacking in personality. Other internet-connected models, such as Bing, have made headlines for their resourcefulness and their darker, erratic tendencies. But so far, access to GPT-4 remains limited, and I wanted to better understand the more popular engine’s capabilities. At the time, I was preparing a workshop for a creative writing conference. So, I asked ChatGPT to write a short story in the modern style of George Saunders, based in part on historical events. The chatbot’s response, a brief burst of prose it titled “Language Unleashed,” read almost nothing like Saunders. Still, it got my participants talking about questions of authorship, originality, representation, etc. Check, check, check.

The next time I sat down with GPT-3.5, things went a little more off-script.

One faculty developer working with Ecampus had asked our team about establishing learning norms in a 200-level course dealing with sensitive subject matter. As a writing instructor, I had bookmarked a few resources in this vein, including strategies from the University of Colorado Boulder. So, I asked ChatGPT to create a bibliographic citation of Creating Collaborative Classroom Norms, which it did with the usual lightning speed. Then I got curious about what else this AI model could do, as my colleagues Philip Chambers and Nadia Jaramillo Cherrez have been exploring. Could ChatGPT point me to some good resources for faculty on setting norms for learning in online college classes?

“Certainly!” came the cheery reply, along with a summary of five sources that would provide me with “valuable information and guidance” (see Image 1). Noting OpenAI’s fine-print caveat (“ChatGPT may produce inaccurate information about people, places, or facts”), I began opening each link, expecting to be teleported to university teaching centers across the country. Except none of the tabs would load properly.

“Sorry we can’t find what you’re looking for,” reported Inside Higher Ed. “Try these resources instead,” suggested Stanford’s Teaching Commons. A closer look with Internet Archive’s Wayback Machine confirmed that the five sources in question were, like “Language Unleashed,” entirely fictitious.

[Image: an early chat with ChatGPT-3.5, asking whether the chatbot can point the author to some good resources for faculty on setting classroom norms for learning in online college classes. “Certainly,” replies ChatGPT, recommending five sources that “should provide you with valuable information and guidance.”]

Image 1: An early, hallucinatory chat with ChatGPT-3.5
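
In hindsight, even that spot check is easy to script. Here is a minimal Python sketch, assuming you have gathered a chatbot’s suggested links into a list (the URLs below are placeholders, not the fictitious five):

```python
import requests

# Placeholder URLs standing in for the chatbot's suggestions
suggested_sources = [
    "https://example.edu/teaching/setting-classroom-norms",
    "https://example.org/blog/online-learning-communities",
]

for url in suggested_sources:
    try:
        # GET rather than HEAD, since some servers reject HEAD requests
        response = requests.get(url, timeout=10, allow_redirects=True)
        verdict = response.status_code
    except requests.RequestException as error:
        verdict = f"error ({type(error).__name__})"
    print(f"{verdict}: {url}")
```

A 404 is an obvious red flag, but as my experiment showed, a fabricated link can also resolve to a live page with a polite “not found” message, so even a 200 status deserves a human look.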

As Mollick would explain months later: “it is very easy for the AI to ‘hallucinate’ and generate plausible facts. It can generate entirely false content that is utterly convincing. Let me emphasize that: AI lies continuously and well. Every fact or piece of information it tells you may be incorrect. You will need to check it all.”

The fabrications and limitations of chatbots lacking real-time access to the ever-expanding web have by now been well documented. But as an early adopter, I found the speed and confidence ChatGPT brought to the task of inventing and describing fake sources unnerving. And without better guideposts for verification, I expect students less familiar with the evolution of AI will continue to experience confusion, or worse. As the Washington Post recently reported, chatbots can easily say offensive things and act in culturally biased ways, “a reminder that they’ve ingested some of the ugliest material the internet has to offer, and they lack the independent judgment to filter that out.”

Just how, exactly, we’re supposed to “check it all” happens to be the subject of a lively, forthcoming collaboration from two education researchers who have been following the intersection of new media and misinformation for decades.

Stay tuned for an upcoming post with the second installment of “Media Literacy in the Age of AI,” a review of Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online by Mike Caulfield and Sam Wineburg (University of Chicago Press, November 2023).

References

Mollick, E. (2023, July 15). How to use AI to do stuff: An opinionated guide. One Useful Thing.

Wroe, T., & Volckens, J. (2022, January). Creating collaborative classroom norms. Office of Faculty Affairs, University of Colorado Boulder.

Yu Chen, S., Tenjarla, R., Oremus, W., & Harris, T. (2023, August 31). How to talk to an AI chatbot. The Washington Post.