This past spring marked one year in my role as an instructional designer for Ecampus. Like many of our readers, I started conversing with AI in the early months of 2023, following OpenAI’s rollout of ChatGPT. Or as one colleague noted in recapping news of the past year, “generative AI happened.” Later, I wrote a couple of posts for this blog on AI and media literacy. A few things became clear from this work. Perhaps most significantly, in the words of research professor Ethan Mollick: “You will need to check it all.”
As the range of courses I support began to expand, so did my everyday use of LLM-powered tools. Here are some of my prompts to ChatGPT from last year, edited for clarity:
What is the total listening time of the Phish album Sigma Oasis?
Answer: 66 minutes and 57 seconds
How many lines are in the following list of special education acronyms (ranging from Section 504 – the Rehabilitation Act – to TBI – Traumatic Brain Injury)?
Answer: 27 lines
Where is the ancient city of Carthage today?
Answer: Today, Carthage is an archaeological site and historical attraction in the suburbs of the Tunisian capital, Tunis.
What is the name of the Roman equivalent of the Greek god Zeus?
Answer: Jupiter, king of the gods and the god of the sky and thunder
What’s the difference between colors D73F09 and DC4405?
Answer: In terms of appearance, … 09 will likely have a slightly darker, more orange-red hue compared to … 05, which might appear brighter. (Readers might also know these hues as variations on Beaver Orange.)
And almost every day:
Please create an (APA or MLA) citation of the following …
The answers were often on point but always in need of fact checking or another iteration of the prompt. Early LLMs were infamously prone to hallucinations. Factual errors and tendencies toward bias are still not uncommon.
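As a small illustration of that fact-checking habit, the hex-color answer above is easy to verify without trusting the bot at all. Here is a minimal Python sketch (my own example, not part of the original chats) that converts each code to its red, green, and blue components and compares approximate brightness:

```python
# Spot-check of the chatbot's color comparison: convert each hex code
# to (R, G, B) and estimate perceived brightness.

def hex_to_rgb(code: str) -> tuple:
    """Convert a six-digit hex color such as 'D73F09' to an (R, G, B) tuple."""
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

for code in ("D73F09", "DC4405"):
    r, g, b = hex_to_rgb(code)
    # Approximate perceived brightness using the ITU-R BT.601 luma weights.
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    print(f"#{code}: R={r} G={g} B={b}, brightness ~ {luma:.1f}")

# #D73F09 (215, 63, 9) scores about 102.3 and #DC4405 (220, 68, 5) about
# 106.3, so the first is indeed slightly darker, as the chatbot suggested.
```

A few lines of arithmetic will not settle every claim, of course, but the habit generalizes: wherever an answer can be recomputed, recompute it.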
As you can sense from my early prompts, I was mostly using AI as either a kind of smart calculator or an uber-encyclopedia. But in recent months, my colleagues and I here at Course Development and Training (CDT)—along with other units in the Division of Educational Ventures (DEV)—have been using AI in more creative and collaborative ways. And that’s where I want to focus this post.
The Partnership
First, some context for the work we do at DEV. Online course development is both a journey and a partnership between the instructor or faculty member and any number of support staff, from training to multimedia and beyond. Anchoring this partnership is the instructor’s working relationship with the instructional designer—an expert in online pedagogy and educational technology, but also a creative partner in developing the online or hybrid course.
Ecampus now offers more than 1,800 courses in more than 100 subjects. Every course results from a custom build that must maintain our strong reputation for quality (see fig. 1). This post is focused on that big circle in the middle—collaboration with the instructional designer. That’s where I see incredible potential for support or “augmentation” from generative AI tools.
As Yong Bakos, a senior instructor with the College of Engineering, recently reminded Faculty Forum, modern forms of this technology have been around since the 1940s, starting with the influence of programmable computers on World War II. But now, he added—in challenging faculty to use AI to figure out rapid, personalized feedback for learners—"we speak the same language."
Through continued partnership, how do we make such processes more nimble, more efficient? What do augmentation and collaboration look like when we add tools like Copilot or a custom GPT? Many instructional designers have been wrestling with these questions of late.
“Human Guided, but AI Assisted”
Here are a few answers from educators Wesley Kinsey and Page Durham at Germanna Community College in Virginia (see fig. 2). Generative AI—also known as GAI—is a powerful tool, says Kinsey. “But the real magic happens when it is paired with a framework that ensures course quality.”
Take this line of inquiry a little further, and one starts to wonder: How might educators track or evaluate progress toward such use cases?
Funneling Toward Augmentation
As a thought experiment, I offer the following criteria and inventory—a kind of self-assessment of my own “human guided” journey through course development with generative AI (see fig. 3).
Criteria for Augmenting Development with Generative AI
ESTABLISHED – Regular, refined practice in course development
EMERGING – Irregular and/or unrefined practice, could be improved
ENVISION – Under consideration or imagined, not yet practiced
Faculty with experience teaching online may find my suggested criteria familiar; “established, emerging, envision” is adapted from an Ecampus checklist used in course redevelopment.
Augmentation 1: From Set Up to Intake
Broadly speaking, I’m only starting to use chatbots in kicking off a course development—to capture a bulleted summary of an intake over Zoom, for example. Or with these kinds of level-setting prompts:
Remind me, what is linear regression analysis?
What fields are important to physical hydrology?
Explain to a college professor the migration of a social annotation learning tool from LTI 1.1 to 1.3.
Augmentation 2: Course Content
In my experience, instructors are only now beginning to envision how they might propose a course or develop its learning materials and activities with support from tools like Copilot—which is increasingly adept at helping us with this kind of iterative brainstorming work. The key here will be getting comfortable with practice: engaging in sustained conversations with defined parameters, often in scenarios that build on existing content. In recent practice with building assignments, I'm finding Claude 3 Sonnet helpful: its responses are more nuanced, you can upload brief documents at no cost, and you can revisit previous chats.
Augmentation 3: Suggested Revisions
Once course content begins rolling in, I apply more established practices for augmentation. For building citations of learning materials, I’m using Copilot’s “more precise” mode for its more robust abilities to read the open web and draw on various style guides (see fig. 4). With activities, often the germ of an idea for interaction needs enlargement—a statement of purpose or more detailed instructions. Here are a few more examples from working with the School of Psychological Science, with prompts edited for brevity:
What would be the purpose of practicing rebus puzzles in a lower division course on general psychology?
Please analyze the content of the following exam study guide, excerpted in HTML. Then, suggest a two-sentence statement of purpose that should replace the phrase lorem ipsum.
How should college students think about exploring Rorschach tests with inkblots? Please suggest two prompts for reflection (see fig. 5).
Augmentation 4: Discussion, Planning & Review
As with course planning, I’m not quite there yet with using generative AI to shape module templates and collect preferred settings for the building I do in Canvas. But by next year—armed perhaps with a desktop license for Copilot—I can imagine using AI to offer instructors custom templates or prompts to accelerate the design process. One more note on annotating augmentation—it’s incredibly important to let my faculty partners know—with consistent labeling—when I’m suggesting course content adapted from a conversation with AI. Most often, I’m not the subject matter expert—they are. That rule of thumb from Ethan Mollick still holds true: “You will need to check it all.”
Augmentation 5: Building & Rebuilding—More Efficiently
Finally, I look forward to exploring opportunities for more efficiently writing and revising the code behind everything we do with support from generative AI. Just imagine if the designer or instructor could ask a bot to suggest ways to strengthen module learning outcomes or update a task list, right there in Canvas.
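To make that daydream a little more concrete, here is a speculative Python sketch. Canvas really does expose page content through its REST API (GET /api/v1/courses/:course_id/pages/:url), but the instance URL, course ID, page slug, and model choice below are all hypothetical placeholders, and Mollick's rule would still apply to every suggestion such a script returns.

```python
# Speculative sketch: fetch a Canvas module page and ask an LLM for
# suggestions on strengthening its learning outcomes. All names below
# (instance URL, course ID, page slug, model) are hypothetical.
import os

import requests
from openai import OpenAI

BASE_URL = "https://canvas.example.edu/api/v1"  # hypothetical Canvas instance
COURSE_ID = "12345"                             # hypothetical course
PAGE_SLUG = "week-1-overview"                   # hypothetical module page

# Canvas REST API: GET /api/v1/courses/:course_id/pages/:url_or_id
resp = requests.get(
    f"{BASE_URL}/courses/{COURSE_ID}/pages/{PAGE_SLUG}",
    headers={"Authorization": f"Bearer {os.environ['CANVAS_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
page_html = resp.json()["body"]  # the page content as HTML

# Hand the page to the model; the human designer still reviews everything.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": "You are an instructional design assistant. Suggest "
                       "ways to strengthen the module learning outcomes and "
                       "update the task list in the following Canvas page.",
        },
        {"role": "user", "content": page_html},
    ],
)
print(completion.choices[0].message.content)
```

Even in a sketch like this, the labeling rule from above holds: any suggestion the model produces would be flagged as AI-adapted before it reaches a faculty partner.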
Your Turn
With the above inventory in mind, let’s pause to reflect. To what extent are you comfortable using generative AI as a course developer? In what ways could this technology supplement new partnerships with instructional designers—or other colleagues involved in the discipline you teach? Together, how would you assess “augmentation” at each stage of the course development process?
Looking back on my own year of “human guidance with AI assistance,” I now turn more reflexively to AI for help with frontline design work—even as our team considers, for example, the ethical dimensions of asking chatbots to deliver custom graphics for illustrating weekly modules. In other stages, I’m still finding my footing in leveraging new tools, particularly during set up, refresh, and redesign. As we continue to partner with faculty, I remain open to navigating the evolving intersection of AI and course development.
(And now, for fun: Can you spot the augmentation? How much of that last sentence was crafted with support from a “creative” conversation with Copilot? Find the answer below.)
Resources, etc.
The following resources may be helpful in exploring generative AI tools, becoming more fluent with their applications, and considering their role in your teaching and learning practices.
Just how, exactly, we’re supposed to follow Ethan Mollick’s caution to “check it all” happens to be the subject of a lively, forthcoming collaboration from two education researchers who have been following the intersection of new media and misinformation for decades.
In Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online (University of Chicago Press, November 2023), Mike Caulfield and Sam Wineburg provide a kind of user's manual to the modern internet. The authors' central concern is that students—and, by extension, their teachers—have been going about the process of verifying online claims and sources all wrong—usually by applying the same rhetorical skills activated in reading a deep-dive on Elon Musk or Yevgeny Prigozhin, to borrow from last month's headlines. Academic readers, that is, traditionally keep their attention fixed on the text—applying comprehension strategies such as activating prior knowledge, persisting through moments of confusion, and analyzing the narrative and its various claims about technological innovation or armed rebellion in discipline-specific ways.
The Problem with Checklists
Now, anyone who has tried to hold a dialogue on more than a few pages of assigned reading at the college level knows that sustained focus and critical thinking can be challenging, even for experienced readers. (A majority of high school seniors are not prepared for reading in college, according to 2019 data.) And so instructors, partnering with librarians, have long championed checklists as one antidote to passive consumption, first among them the CRAAP test, which stands for currency, relevance, authority, accuracy, and purpose. (Flashbacks to English 101, anyone?) The problem with checklists, argue Caulfield and Wineburg, is that in today’s media landscape—awash in questionable sources—they’re a waste of time. Such routines might easily keep a reader focused on critically evaluating “gameable signals of credibility” such as functional hyperlinks, a well-designed homepage, airtight prose, digital badges, and other supposedly telling markers of authority that can be manufactured with minimal effort or purchased at little expense, right down to the blue checkmark made infamous by Musk’s platform-formerly-known-as-Twitter.
Three Contexts for Lateral Reading
One of the delights in reading Verified is drawing back the curtains on a parade of little-known hoaxes, rumors, actors, and half-truths at work in the shadows of the information age—ranging from a sugar industry front group posing as a scientific think tank to headlines in mid-2022 warning that clouds of “palm-sized flying spiders” were about to descend on the East Coast. In the face of such wild ideas, Caulfield and Wineburg offer a helpful, three-point heuristic for navigating the web—and a sharp rejoinder to the source-specific checklists of the early aughts. (You will have to read the book to fact-check the spider story, or as the authors encourage, you can do it yourself after reading, say, the first chapter!) “The first task when confronted with the unfamiliar is not analysis. It is the gathering of context” (p. 10). More specifically:
The context of the source — What’s the reputation of the source of information that you arrive at, whether through a social feed, a shared link, or a Google search result?
The context of the claim — What have others said about the claim? If it’s a story, what’s the larger story? If a statistic, what’s the larger context?
Finally, the context of you — What is your level of expertise in the area? What is your interest in the claim? What makes such a claim or source compelling to you, and what could change that?
“The Three Contexts” from Verified (2023)
At a regional conference of librarians in May, Wineburg shared video clips from his scenario-based research, juxtaposing student sleuths with professional fact checkers. His conclusion? By simply trying to gather the necessary context, learners with supposedly low media literacy can be quickly transformed into “strong critical thinkers, without any additional training in logic or analysis” (Caulfield and Wineburg, p. 10). What does this look like in practice? Wineburg describes a shift from “vertical” to “lateral reading” or “using the web to read the web” (p. 81). To investigate a source like a pro, readers must first leave the source, often by opening new browser tabs, running nuanced searches about its contents, and pausing to reflect on the results. Again, such findings hold significant implications for how we train students in verification and, more broadly, in media literacy. Successful information gathering, in other words, depends not only on keywords and critical perspective but also on the ability to engage in metacognitive conversations with the web and its architecture. Or, channeling our eight-legged friends again: “If you wanted to understand how spiders catch their prey, you wouldn’t just look at a single strand” (p. 87).
Image 2: Mike Caulfield’s “four moves”
Reconstructing Context
Much of Verified is devoted to unpacking how to gain such perspective while also building self-awareness of our relationships with the information we seek. As a companion to Wineburg’s research on lateral reading, Caulfield has refined a series of higher-order tasks for vetting sources called SIFT, or “The Four Moves” (see Image 2). By (1) Stopping to take a breath and get a look around, (2) Investigating the source and its reputation, (3) Finding better sources of journalism or research, and (4) Tracing surprising claims or other rhetorical artifacts back to their origins, readers can more quickly make decisions about how to manage their time online. You can learn more about the why behind “reconstructing context” at Caulfield’s blog, Hapgood, and as part of the OSU Libraries’ guide to media literacy. (Full disclosure: Mike is a former colleague from Washington State University Vancouver.)
If I have one complaint about Caulfield and Wineburg's book, it's that it dwells at length on the particulars of analyzing Google search results, which fill pages of accompanying figures and a whole chapter on the search engine as "the bestie you thought you knew" (p. 49). To be sure, Google still occupies a large share of the time students and faculty spend online. But as in my quest for learning norms protocols, readers are already turning to large language model tools for help in deciding what to believe online. In that respect, I find other chapters in Verified (on scholarly sources, the rise of Wikipedia, deceptive videos, and so-called native advertising) more useful. And if you go there, don't miss the authors' final take on the power of emotion in finding the truth—a line that sounds counterintuitive, but in context adds another, rather moving dimension to the case against checklists.
Given the acceleration of machine learning, will lateral reading and SIFTing hold up in the age of AI? Caulfield and Wineburg certainly think so. Building out context becomes all the more necessary, they write in a postscript on the future of verification, “when the prose on the other side is crafted by a convincing machine” (p. 221). On that note, I invite you and your students to try out some of these moves on your favorite chatbot.
Another Postscript
The other day, I gave Microsoft’s AI-powered search engine a few versions of the same prompt I had put to ChatGPT. In “balanced” mode, Bing dutifully recommended resources from Stanford, Cornell, and Harvard on introducing norms for learning in online college classes. Over in “creative” mode, Bing’s synthesis was slightly more offbeat—including an early-pandemic blog post on setting norms for middle school faculty meetings in rural Vermont. More importantly, the bot wasn’t hallucinating. Most of the sources it suggested seemed worth investigating. Pausing before each rabbit hole, I took a deep breath.
Related Resource
Oregon State Ecampus recently rolled out its own AI toolkit for faculty, based on an emerging consensus that developing capacities for using this technology will be necessary in many areas of life. Of particular relevance to this post is a section on AI literacy, conceptualized as “a broad set of skills that is not confined to technical disciplines.” As with Verified, I find the toolkit’s frameworks and recommendations on teaching AI literacy particularly helpful. For instance, if students are allowed to use ChatGPT or Bing to brainstorm and evaluate possible topics for a writing assignment, “faculty might provide an effective example of how to ask an AI tool to help, ideally situating explanation in the context of what would be appropriate and ethical in that discipline or profession.”
References
Caulfield, M., & Wineburg, S. (2023). Verified: How to think straight, get duped less, and make better decisions about what to believe online. University of Chicago Press.
Have you found yourself worried or overwhelmed in thinking about the implications of artificial intelligence for your discipline? Wondering whether, for example, your department's approaches to teaching basic skills such as library research and source evaluation still hold up? You're not alone. As we enter another school year, many educators continue to think deeply about questions of truth and misinformation, creativity, and how large language model (LLM) tools such as chatbots are reshaping higher education. Along with our students, faculty (oh, and instructional designers) must consider new paradigms for our collective media literacy.
Here's a quick backstory for this two-part post. In late spring, shortly after the "stable release" of ChatGPT to iOS, I started chatting with the GPT-3.5 model, which innovator Ethan Mollick describes as "very fast and pretty solid at writing and coding tasks," if a bit lacking in personality. Other, internet-connected models, such as Bing, have made headlines for their resourcefulness and darker, erratic tendencies. But so far, access to GPT-4 remains limited, and I wanted to better understand the more popular engine's capabilities. At the time, I was preparing a workshop for a creative writing conference. So, I asked ChatGPT to write a short story in the modern style of George Saunders, based in part on historical events. The chatbot's response, a brief burst of prose it titled "Language Unleashed," read almost nothing like Saunders. Still, it got my participants talking about questions of authorship, originality, representation, etc. Check, check, check.
The next time I sat down with GPT-3.5, things went a little more off-script.
One faculty developer working with Ecampus had asked our team about establishing learning norms in a 200-level course dealing with sensitive subject matter. As a writing instructor, I had bookmarked a few resources in this vein, including strategies from the University of Colorado Boulder. So, I asked ChatGPT to create a bibliographic citation of Creating Collaborative Classroom Norms, which it did with the usual lightning speed. Then I got curious about what else this AI model could do, as my colleagues Philip Chambers and Nadia Jaramillo Cherrez have been exploring. Could ChatGPT point me to some good resources for faculty on setting norms for learning in online college classes?
“Certainly!” came the cheery reply, along with a summary of five sources that would provide me with “valuable information and guidance” (see Image 1). Noting OpenAI’s fine-print caveat (“ChatGPT may produce inaccurate information about people, places, or facts”), I began opening each link, expecting to be teleported to university teaching centers across the country. Except none of the tabs would load properly.
“Sorry we can’t find what you’re looking for,” reported Inside Higher Ed. “Try these resources instead,” suggested Stanford’s Teaching Commons. A closer look with Internet Archive’s Wayback Machine confirmed that the five sources in question were, like “Language Unleashed,” entirely fictitious.
Image 1: An early, hallucinatory chat with ChatGPT-3.5
As Mollick would explain months later: “it is very easy for the AI to ‘hallucinate’ and generate plausible facts. It can generate entirely false content that is utterly convincing. Let me emphasize that: AI lies continuously and well. Every fact or piece of information it tells you may be incorrect. You will need to check it all.”
The fabrications and limitations of chatbots lacking real-time access to the ever-expanding web have by now been well documented. But as an early adopter, I found the speed and confidence ChatGPT brought to the task of inventing and describing fake sources unnerving. And without better guideposts for verification, I expect students less familiar with the evolution of AI will continue to experience confusion, or worse. As the Post recently reported, chatbots can easily say offensive things and act in culturally biased ways—"a reminder that they've ingested some of the ugliest material the internet has to offer, and they lack the independent judgment to filter that out."
Just how, exactly, we’re supposed to “check it all” happens to be the subject of a lively, forthcoming collaboration from two education researchers who have been following the intersection of new media and misinformation for decades.
Stay tuned for an upcoming post with the second installment of “Media Literacy in the Age of AI,” a review of Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online by Mike Caulfield and Sam Wineburg (University of Chicago Press, November 2023).
ChatGPT has been featured everywhere recently, with some educators particularly worried about its ability to craft essays, answer quiz questions, and bypass plagiarism detection software. Teachers are trying to find ways to detect AI-generated content, but are we being too hasty to associate AI in education with academic misconduct? Could the tool being used to cheat on courses actually help build them? I put this to the test by asking ChatGPT some questions.
Note: The purpose of this was to see how much of the course development process could be generated by the chatbot.
How to begin?
Let's start at the beginning: how do we design a course? Well, keeping the process brief for this post, we want to start with some course learning outcomes, align them with some assessments, and generate some activities that help students practice for those assessments. I have a background in linguistics and language education, so I wanted to see how this chatbot would respond to some prompts to create a generic English language course.
Chat Transcript
The following is the verbatim chat transcript from ChatGPT. Screenshots of the conversation are available below. Anything under a "ChatGPT Answer" heading was written by the AI; my comments follow in the "Comments" subsection.
Question 1
Gathering the learning outcomes
Could you generate 4-5 course learning outcomes for an intermediate English as a second language course?
ChatGPT Answer
Students will be able to understand and use complex grammatical structures in spoken and written English, including verb tenses, modals, and conditional sentences.
Students will be able to participate in discussions and debates on a variety of topics, using appropriate vocabulary and idiomatic expressions.
Students will be able to read and comprehend authentic English texts, such as news articles, short stories, and literary works, and be able to analyze and interpret them.
Students will be able to write clear, well-organized, and grammatically correct essays, using appropriate transitional words and phrases to develop their ideas.
Students will be able to give presentations in English, using appropriate pronunciation, intonation, and stress to convey their message effectively.
Comments
These are standard course learning outcomes for an ESL course. As I simply asked for learning outcomes with no context, the chatbot had no idea how detailed they should be or how long it would take to achieve these outcomes, though that was tested in the next question.
Question 2
Basic weekly topics
Using those outcomes, what would the topics for a 10-week course look like?
ChatGPT Answer
Week 1-2: Review of basic grammatical structures, introduction to verb tenses, including present perfect and past continuous.
Week 3-4: Introduction to modals, including can, could, may, might, must, should, and will. Practice using them in context.
Week 5-6: Conditional sentences, including zero, first, second, and third conditionals. Practice using them in context.
Week 7-8: Vocabulary building and idiomatic expressions. Practice using them in discussion and debates.
Week 9-10: Reading comprehension and analysis. Practice with a variety of texts and interpreting them.
Week 11-12: Writing skills, including clear organization, development of ideas and use of transitional words and phrases.
Week 13-14: Presentation skills, including pronunciation, intonation and stress.
Note: These topics might vary depending on the level of proficiency of the students in the class and the specific course design.
Comments
Interestingly, the chatbot gave me 14 weeks of topics rather than the 10 I asked for. I believe this is due to the structure of my question, as the more precise wording of a later question (see Question 10) yielded the correct number. This was the only glaring mistake during our entire "conversation".
Question 3
Assessments
What kinds of assessments could be used for these outcomes and topics?
ChatGPT Answer
Grammar and vocabulary quizzes or tests to assess students’ understanding and mastery of grammatical structures, verb tenses, modals, and idiomatic expressions.
Oral presentations, debates, or discussions to assess students’ ability to participate in spoken interactions using appropriate vocabulary and idiomatic expressions.
Written assessments such as essays, short answer questions or summary writing to assess students’ ability to understand, analyze and interpret texts and convey their ideas in writing.
Reading comprehension quizzes or tests to assess students’ ability to understand and interpret authentic English texts.
Writing assessments such as essays, reports, or research papers to evaluate students’ ability to organize their thoughts, develop their ideas and use appropriate transitional words and phrases.
Oral presentations or speeches to assess students’ ability to convey information effectively using appropriate pronunciation, intonation, and stress.
Self-reflection or peer-evaluation could be used as well to evaluate students’ progress and areas of improvement.
Comments
A fairly wide range of skills is being tested here, which is also down to my extremely generic requirement of an "intermediate English language" course. I thought the assessments contained enough options that someone designing a course could pick a few from each set provided.
Question 4
Alignment
Can you map the above assessments to the previously mentioned course learning outcomes? How would they align?
ChatGPT Answer
Sure, here’s how the assessments could align with the previously mentioned course learning outcomes:
1. Grammar and vocabulary quizzes or tests align with the outcome of being able to understand and use complex grammatical structures in spoken and written English, including verb tenses, modals, and conditional sentences.
2. Oral presentations, debates, or discussions align with the outcome of being able to participate in discussions and debates on a variety of topics, using appropriate vocabulary and idiomatic expressions.
3. Written assessments such as essays, short answer questions or summary writing align with the outcome of being able to read and comprehend authentic English texts, such as news articles, short stories, and literary works, and be able to analyze and interpret them, and also the outcome of being able to write clear, well-organized, and grammatically correct essays, using appropriate transitional words and phrases to develop their ideas.
4. Reading comprehension quizzes or tests align with the outcome of being able to read and comprehend authentic English texts, such as news articles, short stories, and literary works, and be able to analyze and interpret them.
5. Writing assessments such as essays, reports, or research papers align with the outcome of being able to write clear, well-organized, and grammatically correct essays, using appropriate transitional words and phrases to develop their ideas.
6. Oral presentations or speeches align with the outcome of being able to give presentations in English, using appropriate pronunciation, intonation, and stress to convey their message effectively.
7. Self-reflection or peer-evaluation align with the outcome of evaluating students’ progress and areas of improvement.
It’s important to note that the assessments should be designed to be both formative and summative, in order to provide ongoing feedback to the student and help them to track their progress and identify areas for improvement.
Comments
Alignment of assessments to outcomes is one of the most important parts of designing an online course. I was impressed with the pairing of each outcome to an assessment, as explaining why an assessment is being given to students is quite common practice.
Question 5
How could multimedia help students?
Can you think of how multimedia could be used in this course to promote effective language learning?
ChatGPT Answer
There are several ways that multimedia could be used in an intermediate English as a second language course to promote effective language learning:
Video and audio recordings of native English speakers can be used to expose students to authentic English, including different accents and dialects. This will help students to improve their listening comprehension and pronunciation skills.
Interactive online exercises and activities, such as quizzes, games, and simulations can be used to provide students with opportunities to practice and reinforce their grammar, vocabulary, and idiomatic expressions in a fun and engaging way.
Online discussion boards, chat rooms, and virtual meetings can be used to provide students with opportunities to interact with their classmates and teachers in real-time, thus promoting their speaking and writing skills.
Online videos, documentaries, and TED talks can be used to expose students to a variety of interesting and engaging topics, and to provide students with opportunities to practice their listening comprehension and critical thinking skills.
Online flashcards, quizzes, and other tools can be used to help students to memorize and retain new vocabulary words, and to improve their spelling and word recognition skills.
Online or offline multimedia-based learning materials, such as interactive e-books, mobile apps, and podcasts, can be used as additional resources for students to explore and practice their language skills on their own time.
Online resources such as subtitles for films, TV shows or podcasts can be used to improve students’ listening comprehension, vocabulary and grammar skills.
Overall, multimedia can be a great tool to make language learning more interactive, engaging, and personalized, and can be used to support the traditional teaching methods and assessments.
Comments
The multimedia selection here is fairly common in ESL classrooms. Selecting a combination of these would enhance the learning experience. It is also helpful that an explanation of purpose is given for each item on the list.
Question 6
Diversity and inclusion
Students on this course will be from diverse backgrounds and cultures. Can you anticipate any topics that would best be avoided?
ChatGPT Answer
When teaching students from diverse backgrounds and cultures, it’s important to be sensitive to cultural differences and to avoid topics that may be offensive or controversial. Some potential topics that might be best avoided in an intermediate English as a second language course include:
Religion: It’s best to avoid discussing religious beliefs, practices, or controversies, as they can be deeply personal and may offend some students or create unnecessary tension among the class.
Politics: Avoid discussing or debating controversial political issues, as they may be divisive and could create tension among the class.
Cultural stereotypes: Avoid making generalizations or negative comments about any particular culture or group of people, as this can be offensive and perpetuate stereotypes.
Racism and discrimination: Avoid discussing these topics in a way that might make any student feel targeted, uncomfortable or offended.
History: Some historical events can be sensitive and can cause discomfort to some students. It is important to be aware of different perspectives and be sensitive.
It is important to remember that every student is unique and has their own personal experiences and perspectives, and topics that may be acceptable for one student may not be for another. It’s best to use common sense, and if you are unsure, it’s always good to ask your students if they are comfortable discussing a particular topic.
It’s also important to create a safe and inclusive environment, where all students feel respected and valued, and where they can express themselves freely without fear of judgement or discrimination.
Comments
I included this question because of the importance of diversity and inclusion in the classroom; with an ESL course in particular, students will be from diverse backgrounds. It was good to see that the chatbot could provide not only a list of topics in answer to my question but also additional advice on the learning environment.
Question 7
Converting this into an online course
How could everything mentioned so far be developed as an online, asynchronous course? Student interaction might be limited to discussion boards as they are likely to be in other countries with different time zones.
ChatGPT Answer
An intermediate English as a second language course can be developed as an online, asynchronous course by using a variety of tools and resources to create a self-paced and interactive learning experience. Some potential strategies include:
Use a Learning Management System (LMS) such as Canvas, Blackboard, or Moodle to create a centralized location for all course materials, assignments, and communication. This will provide students with easy access to all the resources they need to complete the course. Use video and audio recordings, as well as interactive online exercises and activities to provide students with opportunities to practice and reinforce their grammar, vocabulary, and idiomatic expressions in a fun and engaging way.
Use discussion boards, chat rooms, and virtual meetings to provide students with opportunities to interact with their classmates and teachers in real-time, thus promoting their speaking and writing skills. These tools can also be used to create a sense of community among the students, despite their geographic distance and time zones.
Create and provide online multimedia-based learning materials, such as interactive e-books, mobile apps, and podcasts, as additional resources for students to explore and practice their language skills on their own time.
Use online quizzes, tests and assessments to evaluate student’s understanding, and use the results to provide them with feedback and guidance on areas where they need to improve.
Use Self-reflection or peer-evaluation to evaluate students’ progress, and areas of improvement.
Provide detailed and clear instructions for all assignments, activities, and assessments, and make sure that the deadlines are flexible enough to accommodate students in different time zones.
Encourage students to take an active role in their own learning by regularly checking in with them, providing them with feedback, and giving them opportunities to reflect on their progress and set goals for themselves.
Overall, an online, asynchronous course can be designed to be flexible, interactive, and engaging, and can provide students with opportunities to learn and practice their language skills at their own pace and on their own schedule.
Comments
I work in the field of online and hybrid learning and often adapt existing in-person courses to online versions. I wanted to see if the chatbot could also do this, given that all of the previous answers were based on the assumption of an in-person class. This was not the fault of ChatGPT; I never specified whether I wanted it to be an online-specific course. An important lesson here is that imprecise input will not result in precise output (see GIGO for more information, or this article on tailoring your prompts for ChatGPT).
Still, given the updated prompt, it was able to provide both synchronous and asynchronous activities to accommodate differences in time zones and locations.
Question 8
Asking directly for an opinion
Can you think of anything else that might be important to consider when building this course?
ChatGPT Answer
Here are a few additional things that might be important to consider when building an intermediate English as a second language course:
Assessing students’ prior knowledge and language proficiency: It’s important to assess students’ prior knowledge and language proficiency before the course begins, in order to tailor the course content to their needs and to ensure that they are placed in the appropriate level.
Differentiating instruction: Since students in an intermediate English as a second language course will have varying levels of proficiency, it’s important to differentiate instruction and provide multiple means of representation, expression, and engagement to meet the needs of all students.
Scaffolding and modeling: Provide students with the necessary support, guidance and modeling to complete the activities and assignments.
Provide opportunities for students to practice and apply their language skills in authentic and meaningful contexts.
Encourage students to use English as much as possible: Encourage students to use English as much as possible, both inside and outside of class, by providing them with opportunities to interact with native speakers, read authentic English texts, and listen to English-language music and podcasts.
Use technology to enhance language learning: Use technology to enhance language learning by providing students with access to online resources, such as grammar exercises, vocabulary lists, and pronunciation guides.
Provide regular and constructive feedback: Provide regular and constructive feedback on students’ progress and areas for improvement. This can be done through formative assessments, individual conferences, or through online tools such as rubrics and self-reflection.
Be flexible and responsive: Be flexible and responsive to students’ needs and be open to suggestions and feedback to improve the course.
Cultural sensitivity: Be culturally sensitive, and be aware of the cultural backgrounds of the students and make sure that the course content is inclusive and respectful of different cultures.
Keep in mind that the course is not just about language, but also about culture. Provide students with opportunities to learn about the cultures of English-speaking countries in order to help them to understand and appreciate the language and its context better.
Comments
For this question, I wanted to see if the chatbot could provide additional considerations for teaching and learning. These are all great suggestions, which is probably why the AI selected them – they likely appear in many examples of educational content that it had been trained on.
Question 9
Rewriting course learning outcomes to be more measurable
Can you take the aforementioned course learning outcomes and rewrite them with strong, measurable verbs? For example, “understand” is not easy to measure.
ChatGPT Answer
Sure, here are the course learning outcomes rephrased with strong, measurable verbs:
Students will be able to identify and use complex grammatical structures in spoken and written English, including verb tenses, modals, and conditional sentences with 80% accuracy on grammar quizzes and tests.
Students will be able to initiate and participate in discussions and debates on a variety of topics, using appropriate vocabulary and idiomatic expressions with 70% accuracy on oral presentations, debates, or discussions.
Students will be able to read and comprehend authentic English texts, such as news articles, short stories, and literary works, and analyze and interpret them with 80% accuracy on reading comprehension quizzes or tests.
Students will be able to write clear, well-organized, and grammatically correct essays, using appropriate transitional words and phrases to develop their ideas with 80% accuracy on writing assessments such as essays, reports, or research papers.
Students will be able to give presentations in English, using appropriate pronunciation, intonation, and stress to convey their message effectively with 80% accuracy on oral presentations or speeches.
These objectives are now specific, measurable and have a clear criteria of success. They are also aligned with the assessments and activities that you have previously mentioned.
Comments
I conducted an experiment that mimics the process instructional designers routinely encounter when designing online courses. There is often a set of learning outcomes given at the start of the development process that uses verbs that are hard to measure, such as "understand" or "know". This makes it difficult to align assessments to outcomes, because what does it really mean to "understand" something? How can we apply a rubric or grade to it?
Note that, in a real development beginning from scratch, we would want to design the course learning outcomes to be measurable before anything else. In practice, however, I usually work with faculty on courses with predetermined outcomes, which are either modified into more measurable ones or linked to more measurable weekly learning outcomes.
ChatGPT modified the one verb I called out – “understand” – but left “comprehend”. Perhaps this also required a specific mention in the initial question, and is another example of how your initial prompt influences the results.
Question 10
Creating a weekly schedule
Can you make a weekly schedule, from week 1 to week 10, with an overview of activities for that particular week, incorporating the aforementioned assessments where appropriate and aligning all of it with the course learning outcomes?
ChatGPT Answer
Sure, here’s a possible weekly schedule for an intermediate English as a second language course:
Week 1:
Overview of the course and introduction to the course learning outcomes
Assessment of students’ prior knowledge and language proficiency
Introduction to verb tenses, including present perfect and past continuous
Interactive online exercises and activities to practice verb tenses
Vocabulary building and idiomatic expressions
Week 2:
Review and reinforcement of verb tenses
Quiz or test on verb tenses
Introduction to modals, including can, could, may, might, must, should, and will
Practice using modals in context
Week 3:
Review and reinforcement of modals
Quiz or test on modals
Introduction to conditional sentences, including zero, first, second, and third conditionals
Practice using conditional sentences in context
Week 4:
Review and reinforcement of conditional sentences
Quiz or test on conditional sentences
Vocabulary building and idiomatic expressions
Practice using vocabulary and idiomatic expressions in discussions and debates
Week 5:
Reading comprehension and analysis
Practice with a variety of texts and interpreting them
Writing skills, including clear organization, development of ideas and use of transitional words and phrases
Week 6:
Review and reinforcement of writing skills
Writing assessment such as essays, reports, or research papers
Presentation skills, including pronunciation, intonation and stress
Week 7:
Review and reinforcement of presentation skills
Practice oral presentations or speeches
Self-reflection or peer-evaluation
Vocabulary building and idiomatic expressions
Week 8:
Reading comprehension and analysis
Practice with a variety of texts and interpreting them
Writing skills, including clear organization, development of ideas and use of transitional words and phrases
Week 9:
Review and reinforcement of all language skills
Final assessment, which can include a combination of quizzes, tests, essays, oral presentations, or other forms of assessment.
Feedback on final assessment
Week 10:
Review of key concepts and skills
Feedback on the course, and suggestions for improvement
Final self-reflection or peer-evaluation
Note: This is just a sample schedule, and it can be adjusted depending on the specific needs of the students and the course design. Also, it’s important to remember that the assessments should be designed to be both formative and summative, in order to provide ongoing feedback to the student and help them to track their progress and identify areas for improvement.
Comments
The language I used in this question yielded the correct number of weeks from the AI. Each week is structured to include a review of the previous week’s content, which is a common teaching technique. This is asking a lot from the chatbot! It managed to incorporate most of my requests though, which again is impressive.
Question 11
Academic Integrity
What measures could be taken to ensure academic integrity in the online version of this course?
ChatGPT Answer
Academic integrity is a critical aspect of any course, and it’s important to take measures to ensure that students are completing their work independently and honestly. Here are a few measures that could be taken to ensure academic integrity in an online version of an intermediate English as a second language course:
Use plagiarism detection software: Use plagiarism detection software, such as Turnitin, to check students’ work for originality. This will help to detect instances of plagiarism and ensure that students are completing their own work.
Provide clear guidelines on academic integrity: Provide students with clear guidelines on academic integrity, including definitions of plagiarism, cheating, and academic dishonesty.
Monitor online activities: Monitor online activities, such as discussion board posts, chat room conversations, and virtual meetings, to ensure that students are participating in the course honestly and independently.
Use proctoring software: Use proctoring software, such as ProctorU or Respondus Monitor, to monitor students while they take online quizzes, tests, or exams.
Use randomized questions: Use randomized questions, or different versions of the same test to prevent students from sharing answers with each other.
Encourage self-regulation: Encourage self-regulation, and make it clear that the students are responsible for maintaining academic integrity in their work.
Create a culture of integrity: Create a culture of integrity in the class, by emphasizing the importance of honesty, trust, and responsibility.
Have clear consequences: Have clear consequences for academic dishonesty, and communicate them to the students.
Overall, it’s important to establish clear expectations for academic integrity and
ChatGPT froze here; the servers were experiencing high volumes of traffic.
Comments
Again, these are solid solutions that are used in both in-person and online courses. A quick web search would yield similar results, as these are shared across most educational institutions.
Thoughts on further usage
It is clear that AI is set to change the way we interact with technology in education (or already has, depending on how much you are using it). For this test, I simply provided the most generic input to see what the chatbot would generate. For an intermediate ESL course (however "intermediate" was interpreted by the chatbot), the learning outcomes were solid once measurable outcomes were requested, and I was impressed by the generated weekly schedule, including high-level overviews of activities and assessments.
With more precise input requirements for course content and outcomes, one would expect an even more accurate output.
There are many ways that this kind of technology can be used to enhance courses, and I can see it being a very useful tool for quick questions, such as brainstorming potential assignments for certain learning outcomes or figuring out whether an assignment prompt might result in unintended consequences. For example, see the following:
Bonus Question
What could go wrong?
Can you think of any potential negative consequences of using the following prompt in a classroom?
“Make a presentation on current events in your country.”
ChatGPT Answer
There are several potential negative consequences of using the prompt “Make a presentation on current events in your country” in a classroom setting:
Bias: Students may only focus on certain events or perspectives, leading to a biased understanding of current events.
Limited perspective: Students may only present events happening in their immediate community, leading to a limited perspective on current events in the country as a whole.
Inappropriate content: Students may present on sensitive or controversial topics that may be inappropriate for a classroom setting.
Lack of critical thinking: By only presenting current events, students may not develop the critical thinking skills needed to analyze and understand the underlying causes and consequences of the events.
Limited access to information: Some students may not have access to current events information, especially if they come from low-income or marginalized communities.
It’s important to consider these potential negative consequences and take steps to mitigate them, such as providing guidance on how to present information in a balanced and fair manner and encouraging critical thinking and analysis of current events.
Watching for potential bias
During my "conversation" with the bot, I was impressed with the response to the diversity and inclusion question I posed. The points it presented were, like most answers in this experiment, fairly common and expected responses for such a course ("don't talk about politics, religion, etc."). A phrase that kept cropping up is "it is important to…" followed by a value statement. Those presented in this conversation were benign, and actually good advice from an educational standpoint (e.g., every student is unique, you should create a safe learning environment, and so on). It seems a lot of work has been done to minimize bias; responses from the bot on many issues come with disclaimers and things that are "important to remember". It is also, however, important to remember that biases can creep in, and to keep an eye out for them, especially when taking advice from AI.
Copyright Issues
Obviously, the chatbot is pulling this information from somewhere (more on this can be found on the Wikipedia page for ChatGPT). Even though it is generative AI, one has to wonder how much of an issue this will become in the future if generated content is too similar to an existing program or course, not to mention in all the other subjects it is capable of generating content for.