We all remember the warning from math class:

“You won’t always have a calculator in your pocket!”

How we laugh now: calculators did arrive in our pockets, and eventually smartphones put one in our hands at all times.

I have seen many comparisons across the Internet [1][2][3] between artificial intelligence (AI) and those mathematics classes of yesteryear. The idea is that AI is but the newest embodiment of this same concern, which ended up being overblown.

But is this an apt comparison to make? After all, we did not replace math lessons and teachers with pocket calculators, nor even with smartphones. The kindergarten student is not simply handed a Casio and told to figure it out. The quote we all remember has a deeper meaning, hidden among the exasperated response to the question so often asked by students: “Why are we learning this?”

The response

It was never about the calculator itself, but about knowing how, when, and why to use it. A calculator speeds up the arithmetic, but the core cognitive process remains the same. The key distinction is between pressing the = button and understanding the result of the = button. A student who can set up the equation, interpret the answer, and explain the steps behind the screen will retain the mathematical insight long after the device is switched off.

The new situation – Enter AI

Scenario

Pressed for time and juggling multiple commitments, a student turns to an AI tool to help finish an essay they might otherwise have written on their own. The result is a polished, well-structured piece that earns them a strong grade. On the surface, it looks like a success, but because the heavy lifting was outsourced, the student misses out on the deeper process of grappling with ideas, making connections, and building understanding.

This kind of situation highlights a broader concern: while AI can provide short-term relief for students under pressure, it also risks creating long-term gaps in learning. The issue is not simply that these tools exist, but that uncritical use of them can still produce passing grades without the student engaging in the meaningful reflection prior cohorts had to practice. Additionally, when AI-generated content contains inaccuracies or outright hallucinations, a student’s grade can suffer, revealing the importance of reviewing and verifying the material themselves. The rapid, widespread uptake of these tools stresses the need to move beyond use alone and toward cultivating the critical habits that ensure AI supports, rather than supplants, genuine learning.

Some background studies

In a 2024 study on Generative AI Usage and Exam Performance, Wecks et al. (2024) report:

Employing multivariate regression analysis, we find that students using GenAI tools score on average 6.71 (out of 100) points lower than non-users. While GenAI may offer benefits for learning and engagement, the way students actually use it correlates with diminished exam outcomes

Another study (Ju, 2023) found that:

After adjusting for background knowledge and demographic factors, complete reliance on AI for writing tasks led to a 25.1% reduction in accuracy. In contrast, AI-assisted reading resulted in a 12% decline.

In the same study, Ju (2023) noted that while using AI to summarize texts improved both output quality and comprehension, those with a ‘robust background in the reading topic and superior reading/writing skills’ benefited the most.

Ironically, the students who would benefit most from critical reflection on AI use are often the ones using it most heavily, demonstrating the importance of embedding AI literacy into the curriculum. For example, a recent Wall Street Journal article by Heidi Mitchell (Mitchell, 2025) cites a study showing that the “less you know about AI, the more you are likely to use it”, describing AI as seeming “magical to those with low AI literacy”.

Finally, Kosmyna et al. (2025), testing how LLM usage affects cognitive processes and neural engagement in essay writing, assembled groups of LLM users, search engine users, and those without these tools (dubbed “brain-only” users). The authors recorded weaker performance over time in students with AI assistance, a lower sense of ownership over their work along with an inability to recall it, and even seemingly reduced neural connectivity in LLM users compared to the brain-only group, which scored better on all of the above.

The takeaway from these studies is that unstructured AI use acts as a shortcut that erodes retention. While AI assistance can be beneficial, outright replacement of thinking is harmful. In other words, AI amplifies existing competence but rarely builds it from scratch.

Undetected

Many people believe themselves to be fully capable of detecting AI usage:

Most of the writing professors I spoke to told me that it’s abundantly clear when their students use AI. Sometimes there’s a smoothness to the language, a flattened syntax; other times, it’s clumsy and mechanical. The arguments are too evenhanded — counterpoints tend to be presented just as rigorously as the paper’s central thesis. Words like multifaceted and context pop up more than they might normally. On occasion, the evidence is more obvious, as when last year a teacher reported reading a paper that opened with “As an AI, I have been programmed …” Usually, though, the evidence is more subtle, which makes nailing an AI plagiarist harder than identifying the deed. (Walsh, 2025).

In the same NY Mag article, however, Walsh (2025) cites another study, showing that it might not be as clear who is using AI and who is not (emphasis added):

[…] while professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent.

The two quotes are not contradictory; they describe different layers of the same phenomenon. Teachers feel they can spot AI because memorable extremes stick in their minds, yet systematic testing proves that intuition alone misses the overwhelming majority of AI-generated work.

This should not be surprising, though, as most faculty have never been taught systematic ways to audit AI-generated text (e.g., checking provenance metadata, probing for factual inconsistencies, or using stylometric analysis; a toy illustration of the latter follows below). Nor do most people, let alone faculty grading hundreds of papers per week, have the time to audit every student. Without a shared, college-wide rubric of sorts, detection remains an ad-hoc, intuition-driven activity.

Faulty detection risks causing undue stress to students, and can foster a climate of mistrust by assuming that AI use is constant or inherently dishonest rather than an occasional tool in the learning process. Even with a rubric, instructors must weigh practical caveats: large-enrollment courses cannot sustain intensive auditing, some students may resist AI-required tasks, and disparities in access to tools raise equity concerns. For such approaches to work, they must be lightweight, flexible, and clearly framed as supporting learning rather than policing it.
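To make “stylometric analysis” concrete (and to show why it is no silver bullet), here is a toy sketch that compares a few crude style features between a known writing sample and a new submission. Everything in it is an illustrative assumption rather than a real detector: genuine stylometry uses far richer features and baselines, and nothing this simple should ever drive an accusation.

```python
import re
from statistics import mean

def style_features(text: str) -> dict:
    """Compute a few crude stylometric features for one writing sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        # Average sentence length, in words.
        "avg_sentence_len": mean(len(s.split()) for s in sentences),
        # Type-token ratio: vocabulary variety (unique words / total words).
        "type_token_ratio": len(set(words)) / len(words),
        # Rate (per 1,000 words) of words flagged earlier as possible AI tells.
        "tell_word_rate": 1000 * sum(words.count(w) for w in ("multifaceted", "context")) / len(words),
    }

def compare(known_sample: str, new_submission: str) -> None:
    """Print features side by side; a human still has to judge the gap."""
    known, new = style_features(known_sample), style_features(new_submission)
    for key in known:
        print(f"{key:>17}: known={known[key]:7.2f}  new={new[key]:7.2f}")

compare(
    "I wrote this essay myself. It rambles a little, but it is mine.",
    "The multifaceted context demands a multifaceted reading of the context.",
)
```

Even this toy version makes the practical point: someone has to collect baseline samples, compute the features, and judge the gaps, which is exactly the time and training most faculty do not have.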

This nuance is especially important when considering how widespread AI adoption has been. Walsh (2025) observed that “just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.” While this figure might seem to justify the use of AI detectors, it could simply reflect the novelty of the tool at the time rather than widespread intent to circumvent learning. In other words, high usage does not automatically equal cheating, showing the importance of measured, thoughtful approaches to AI in education rather than reactionary ones.

What to do…?

The main issue here is not that AI is magically writing better essays than humans can muster; it is that students are slipping past the very moments where they would normally grapple with concepts, evaluate evidence, and argue a position. Many institutions are now taking a proactive role rather than a reactive one, and I want to offer one such suggestion going forward.

Embracing the situation: The reflective AI honor log

It is a fact that large language models have become ubiquitous. They are embedded in web browsers, word processors, and even mobile keyboards. Trying to ban them outright creates a cat‑and‑mouse game; it also sends the message that the classroom is out of sync with the outside world.

Instead of fighting against a technology that is already embedded in our lives, invite students to declare when they use it and to reflect on what they learned from that interaction.

For this post, I recommend using an “AI Honor-Log Document” and deeply embedding it into courses, with the goal of increasing AI literacy.

What is it?

As assignments vary across departments and even within courses, a one-size-fits-all approach is unlikely to be effective. To support thoughtful AI use without creating extra work for students, faculty could select an approach that best aligns with their course design:

  1. Built-in reflection: Students note when and how they used AI, paired with brief reflections integrated into their normal workflow.
  2. Optional, just-in-time logging: Students quickly log AI use and jot a short note only when it feels helpful, requiring minimal time.
  3. Embedded in assignments: Reflection is incorporated directly into the work, so students engage with it as part of the regular writing or research process.
  4. Low-effort annotations: Students add brief notes alongside tasks they are already completing, making reflection simple and natural.

These options aim to cultivate critical thinking around AI without imposing additional burdens or creating the perception of punishment, particularly for students who may not be using AI at all.

AI literacy is a massive topic, so let’s only address a few things here: 

  • Mechanics Awareness: Ability to explain the model architecture, training data, limits, and known biases.
  • Critical Evaluation: Requiring fact-checking, citation retrieval, and bias spotting.
  • Orchestration Skills: Understanding how to craft precise prompts, edit outputs, and add original analysis.

Note: you might want to go further and incorporate these into an assignment-level learning outcome. Something like “Identifies at least two potential biases in AI-generated text” could be enough on a rubric to gather interesting student responses.

Log layout example

| # | Assignment/Activity | Date | AI Model | Exact Prompt | AI Output | What You Changed/Added | Why You Edited | Confidence (1-5) | Link to Final Submission |
|---|---------------------|------|----------|--------------|-----------|------------------------|----------------|------------------|--------------------------|
| 1 | Essay #2 – Digital-privacy law | 2025-09-14 | GPT-5 | “Write a 250-word overview of GDPR’s extraterritorial reach and give two recent cases” | [pastes AI text] | Added citation to 2023 policy ruling; re-phrased a vague sentence. | AI omitted the latest case; needed up-to-date reference | 4 | https://canvas.oregonstate.edu/… |
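If you distribute the template as a spreadsheet (step 2 of the deployment tasks below), generating the blank file programmatically keeps every section’s copy consistent. Here is a minimal sketch using only Python’s standard library; the file name and row count are arbitrary choices, not part of any official template:

```python
import csv

# Column headers mirroring the honor-log layout shown above.
COLUMNS = [
    "#", "Assignment/Activity", "Date", "AI Model", "Exact Prompt",
    "AI Output", "What You Changed/Added", "Why You Edited",
    "Confidence (1-5)", "Link to Final Submission",
]

def write_blank_log(path: str = "ai_honor_log.csv", rows: int = 10) -> None:
    """Write a blank honor-log template with numbered, empty rows."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        for i in range(1, rows + 1):
            writer.writerow([i] + [""] * (len(COLUMNS) - 1))

if __name__ == "__main__":
    write_blank_log()  # import the CSV into Google Sheets or post it to the LMS
```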

Potential deployment tasks (and things to look out for)

It need not take much time to model this to students or deploy it in your course. That said, there are practical and pedagogical limits depending on course size, discipline, and student attitudes toward AI. The notes below highlight possible issues and ways to adjust.

  1. Introduce the three AI-literacy areas above (either in text form or as a video, if you have more time and want to make a multimedia item).
    Caveat: Some students may be skeptical of AI-required work.
    Solution: Frame this as a reflection skill that can also be done without AI, offering an alternative if needed.
  2. Distribute the template to students: post a Google-Sheet link (or similar) in the LMS.
    Caveat: Students with limited internet access or comfort with spreadsheets may struggle.
    Solution: Provide a simple Word/PDF version or allow handwritten reflections as a backup.
  3. Model the process in the first week: Submit a sample log entry like the one above but related to your class and required assignment reflection type.
    Caveat: In large-enrollment courses, individualized modeling is difficult.
    Solution: Share one well-designed example for the whole class, or record a short screencast that students can revisit.
  4. Require the link with each AI-assisted assignment (or as and when you believe AI will be used).
    Caveat: Students may feel burdened by repeated uploads or object to mandatory AI use.
    Solution: Keep the log lightweight (one or two lines per assignment) and permit opt-outs where students reflect without AI.
  5. Provide periodic feedback: scan the logs, highlight common hallucinations or errors that students logged, and give a “spot the error” mini lecture/check-in/office hour.
    Caveat: In large classes, it’s not realistic to read every log closely.
    Solution: Sample a subset of entries for themes, then share aggregated insights with the whole class during office hours, or post in weekly announcements or discussion boards designed for this kind of two-way feedback.
  6. (Optional) Student sharing session in a discussion board: allow volunteers or require class to submit sanitized prompts (i.e., any personal data removed) and edits for peer learning.
    Caveat: Privacy concerns or reluctance to share work may arise.
    Solution: Keep sharing optional, encourage anonymization, and provide opt-outs to respect comfort levels.

Important considerations when planning AI-tasks

Faculty should be aware of several practical and pedagogical considerations when implementing AI-reflective logs. Large-enrollment courses may make detailed feedback or close monitoring of every log infeasible, requiring sampling or aggregated feedback. Some students may object to AI-required assignments for ethical, accessibility, or personal reasons, so alternatives should be available (i.e., the option to declare that a student did not use AI should be present). Unequal access to AI tools or internet connectivity can create equity concerns, and privacy issues may arise when students share prompts or work publicly. To address these challenges, any approach should remain lightweight, flexible, and clearly framed as a tool to support learning rather than as a policing mechanism.

Conclusion

While some students may feel tempted to rely on AI, passing an assignment in this manner can also pass over the critical thinking, analytical reasoning, and reflective judgment that go beyond content mastery to true intellectual growth. Incorporating a reflective AI-usage log based not on an assumption of cheating, but on the ubiquitous availability of this now-common tool, reintroduces one of the evidence-based steps for learning and mastery that has fallen out of favor in the last 2-3 years. By encouraging students to pause, articulate, and evaluate their process, reflection helps them internalize knowledge, spot errors, and build the judgment skills that AI alone cannot provide.

Footnotes

  1. https://www.reddit.com/r/ArtificialInteligence/comments/1ewh2ji/i_remember_when/ ↩︎
  2. https://www.uwa.edu.au/news/article/2025/august/generative-ai-is-not-a-calculator-for-words-5-reasons-why-this-idea-is-misleading ↩︎
  3. https://medium.com/%40josh_tucker/why-not-using-ai-is-like-refusing-to-use-a-calculator-in-a-maths-test-093b860d7b45 ↩︎

References

Fu, Y. and Hiniker, A. (2025). Supporting Students’ Reading and Cognition with AI. In Proceedings of Workshop on Tools for Thought (CHI ’25 Workshop on Tools for Thought). ACM, New York, NY, USA, 5 pages. https://arxiv.org/pdf/2504.13900v1

Ju, Q. (2023). Experimental Evidence on Negative Impact of Generative AI on Scientific Learning Outcomes. https://doi.org/10.48550/arXiv.2311.05629

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. https://arxiv.org/abs/2506.08872

Mitchell, H. (2025). The Less You Know About AI, the More You Are Likely to Use It. Wall Street Journal. Accessed September 3, 2025: https://www.wsj.com/tech/ai/ai-adoption-study-7219d0a1

Walsh, J. (2025, May 7). Everyone Is Cheating Their Way Through College: ChatGPT has unraveled the entire academic project. Intelligencer. https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html

Wecks, J. O., Voshaar, J., Plate, B. J., & Zimmermann, J. (2024). Generative AI Usage and Exam Performance. https://doi.org/10.48550/arXiv.2404.19699

Over the past few years, Higher Education (HE) has been called to action in response to the rise of Generative Artificial Intelligence (GenAI) tools. As Artificial Intelligence (AI) becomes more autonomous and capable, proactive steps are needed to preserve academic and learning integrity. This article will explore tangible strategies educators can apply to their unique program and course contexts. Only slight adjustments may be necessary to support learning processes and capture evidence of learning, as changes will build upon the excellent work that is already occurring.

Initially, the focus in HE was on understanding the potential impact these tools would have on teaching and learning, and GenAI capabilities, limitations, and risks were examined with great care. Today, the tools are being tested, and educators are envisioning how to use them for various purposes (e.g., productivity, creativity). Integration of these tools has begun with the aim of supplementing and enhancing human learning. As we move forward, concerns with regard to academic and learning integrity become increasingly prominent.

Meet Agentic AI

Recently, I had the opportunity to attend the Quality Matters Quality in Action conference, where I joined the session Ensuring Academic Integrity and Quality Course Design in the Age of AI. The presenter, Robert Gibson, Director of Instructional Design at WSU Tech, shared an Artificial Intelligence (AI) innovation now available to the public (and our students)… meet Agentic AI!

Your new Agentic AI assistant no longer requires you to be an expert prompt engineer. These tools are designed to achieve specific, clear goals with minimal human supervision or oversight. They can autonomously engage in complex reasoning, decision making, problem solving, learning from new information, and adapting to their environment (Gibson, 2025; Schroeder, 2025; Marr, 2025). These new Agentic AIs can even work together to form what is known as an Orchestrated AI. Think of this as an AI team working collaboratively to accomplish complex tasks. Agentic AI has already demonstrated the capability to create and complete online courses. What does this mean for Higher Education?

Now more than ever, we need to come together to collectively reinforce academic and learning integrity in online and hybrid courses. Preserving the quality of our institutional products and credentials is essential. Equally important are the students who will apply their OSU-acquired knowledge and skills in the real world. The time to be proactive is now.

Where and how should I start?

A good starting point is to evaluate assessments that AI can complete. Running an assignment through a GenAI tool to see if it can complete the task, with relative accuracy, can produce helpful insights. Next, consider modifications to pedagogical approaches and assessment methods. The goal is to design assessments to produce and capture evidence that learning is taking place. This could include assessments that are process-oriented, focus on skill mastery, are personalized, incorporate visual demonstrations (e.g., video), and/or integrate real-time engagement (Gibson, 2025).

What might a reimagined activity look like?

For example, let us say an instructor uses case-based learning in their course, and small groups discuss real-world scenarios on a discussion board. This activity could be reimagined by having students meet virtually and record their discussion. During their real-time interaction, they examine a real-world scenario, identify associated evidence, present examples, and share their lived experiences, much as students conduct group presentations currently. The approach could be enhanced by shifting the focus to the learning process, such as arriving at ideas and cultivating perspectives (i.e., learning, growth, development), in lieu of having students find a right or wrong answer (Gibson, 2025). This encourages students to engage substantively, co-construct knowledge, and work together to demonstrate learning.

After participating in the activity, each student could create an individual video presentation to synthesize their learning. A synthesis video could include discussing their initial perspectives (Where did I start? – prior knowledge activation), how those initial perspectives evolved (What was my cognitive process? – metacognition), what new knowledge is needed (gap analysis), and how their perspectives and knowledge changed (learning reflection). This method reinforces academic and learning integrity by validating that students are learning and achieving outcomes (Bertram-Gallant, 2017).

Reflect! Take a moment to reflect on how you know that students are learning in your course(s). What evidence do you have?

While the potential for academic dishonesty cannot be entirely controlled, there should not be an assumption that students will use these tools in their coursework just because they are available. Take a moment to examine the Ecampus Research Unit’s research, “Student Perceptions of Generative AI Tools in Online Courses.” This research study explores online students’ perceptions, understanding, and use of GenAI tools. The study found that most students had not been using GenAI tools in their courses; rather, they were primarily using GenAI tools within professional contexts. Students noted that they understood that using AI in their careers would be necessary. However, strong concerns were articulated around inaccuracies, biases, lack of reliability, propagation of misinformation, and use of the tools that is not in alignment with their personal values and ethics (Dello Stritto, Underhill, & Aguiar, 2024).

How can academic and learning integrity be reinforced?

Educators can foster academic integrity in a way that drives students’ internal motivation, self-determination, and desire to demonstrate their learning because they value the work they are doing. A multifaceted developmental approach that fosters a culture of academic integrity using various strategies in concert with one another is key (Bertram-Gallant, 2025), as no single approach can serve as a definitive solution.

  • Integrity teaching – Taking on the role of an active guide during course delivery and meeting students where they are developmentally is essential. This may include teaching students how to engage in critical thinking around the use of AI tools, how to connect the value of academic and learning integrity to their future profession, how to make well-informed decisions, and how to leverage metacognitive strategies when engaging with AI.
  • Integrity messaging – This approach is one that can be most effective if holistically integrated into a course. The content communicates that integrity, values, and ethics are normative within the course and will be held at the forefront of the learning community. Staged and timed messaging can be most helpful when targeted at different points in a course and as the complexity of academic work increases.
  • Transformative real time experiential learning – Transformative experiential learning involves designing opportunities that generate new ideas for action, which can be applied to other experiences. These activities may include, but are not limited to, service learning, internships, hands-on collaborative activities (e.g., role play, point-counterpoint discussions), and demonstrations. By focusing on real-time engagement, this approach demonstrates learning and thereby reinforces academic and learning integrity.
  • Deep learning – Learning opportunities focused on skill mastery and demonstration through staged attempts. This approach may necessitate a pedagogical shift toward development and growth (Bertram-Gallant, 2025).

Agentic AI brings exciting opportunities for the world but tangible challenges for HE. By intentionally designing assessments that lead students to demonstrate evidence of their learning and using facilitation strategies that foster a culture of academic integrity, we can harness the potential of AI to supplement learning. What is the end goal? To ensure that educational opportunities are designed to preserve and enhance learners’ critical skills and knowledge needed to thrive in their professional pursuits. Will you accept this challenge?

Ecampus Artificial Intelligence Tools Resource Inventory

  • Trying to decide when and how to incorporate AI into your work? Take a look at the AI Decision Tree!
  • Need a few quick, practical strategies to get started? These recommendations aim to improve learning for both teachers and students.
  • Are you ready to evaluate and enhance the resiliency (i.e., flexibility, adaptability) of your course within the context of AI? Check out the new Course AI Resilience Tracker (CART) interactive tool, which can help you reflect on various course elements and will share personalized resources to help you get started.
  • Review Bloom’s Taxonomy Revisited to explore how to emphasize distinctive human skills and/or integrate AI tools to supplement the learning process.
  • Explore our AI Assessment Examples Library for assessment ideas designed to incorporate AI tools and strategies in your course and/or create more human-centric assessments.


About the Author


Ashlee M. C. Foster, MSEd, is a seasoned Instructional Designer with the Oregon State University Ecampus Course Development and Training Team. With a profound commitment to supporting faculty and students in online teaching and learning, Ashlee’s mission is to design high-quality and innovative educational opportunities that foster transformational learning, development, and growth. Ashlee’s learning design approaches are grounded in research-based insights, foundational learning theories, and the thoughtful integration of industry-led practices. This ensures that each educational experience is not only effective but also engaging and relevant.

Background

“In the Winter Term 2024, the Ecampus Research Unit conducted a survey study of 669 students who had taken online courses at OSU. The 40-item survey was designed to assess students’ knowledge and use of generative AI tools, as well as their perceptions of their use in their courses and careers. A full report of this study is available on the Ecampus Research Unit website. Based on the results of this study, several recommendations were developed to guide decision making about generative AI tools in online courses.”

Dello Stritto, Underhill, & Aguiar (2024).

This recent study highlighted three key recommendations for faculty seeking to integrate generative AI into their courses effectively:

  • Recommendation 1
    • Write a course policy about generative AI that is clearly explained.
  • Recommendation 2
    • Consider a wide range of student emotions and concerns when integrating generative AI in your online courses.
  • Recommendation 3
    • Educate students on generative AI tools.

Applying data to design

To apply these recommendations in practice, we can reorganize them into instructional design categories that foster AI resiliency in course design: Course Learning Outcomes, Learner Profiles, Learning Materials, Activities and Assessments, and Course Policies. These categories offer a comprehensive framework for integrating AI while addressing students’ concerns and enhancing learning experiences.

Course Policies: Establish Clear Guidelines for AI Usage

Reflecting Recommendation 1, developing a clear, transparent policy on AI usage is key. Faculty should articulate when and how students can use AI tools, providing specific examples of ethical use. By defining these expectations early in the course, instructors help students understand the role AI can play in their learning process, promoting academic integrity. 

Learner Profiles: Address Emotional and Academic Concerns

In line with Recommendation 2, it is essential to consider students’ diverse reactions to AI—ranging from excitement to anxiety—when designing a course. This is where understanding Learner Profiles becomes critical. 

Learning Materials and Activities: Ensure Relevance and Adaptability with AI

Recommendation 3 emphasizes the importance of educating students about generative AI, which can be achieved through thoughtful integration into learning materials, activities, and assessments.

Course Learning Outcomes: Integrate AI with Intentional Learning Design

The integration of generative AI tools into course design necessitates an examination of their impact on student mastery of the Course Learning Outcomes. It is vital to ensure that student use of AI tools supplement and enhance the learning process rather than bypass cognitive engagement.

With these four considerations in mind, we can now introduce a tool to help assess and improve course resilience against generative AI, while providing learners with clear policy decisions and explanations.

Introducing CART: Course AI Resiliency Tracker 

In response to the clear need for effective integration of generative AI in educational settings, a new tool has been developed (as part of a wider suite of artificial intelligence tools) to assist faculty in navigating this complex landscape. The tool is designed to support instructors in evaluating how generative AI could respond to their course learning outcomes by highlighting its current capabilities to address and complete those outcomes. It facilitates a detailed understanding of learner profiles to ensure that AI applications are relevant and accessible to all students. Additionally, the tool encourages faculty to reflect on the currency and relevance of their learning materials and to assess how AI might be incorporated into activities and assignments. By examining existing course policies on AI usage and offering actionable steps for course development, this resource aims to demystify generative AI for both educators and students, promoting a thoughtful and strategic approach to its integration, or to the decision to restrict AI.

Getting Started

Upon accessing the landing page, you will be prompted to input your Course ID, after which you may proceed by selecting the “Start” button.

[Screenshot: Course AI Resilience Tracker tool – Getting Started page]

Learning Outcomes

The first step in the tool involves a reflection on your Course Learning Outcomes (CLOs). At this stage, you will have the option to choose from a list of commonly used learning outcome verbs, organized by the general categories of Bloom’s Taxonomy. Note that there is currently a limit of five CLOs at one time; if the verbs in your own CLOs are absent from the list, you are encouraged to select the closest matches so that the feedback feels as transferable as possible.

[Screenshot: Course AI Resilience Tracker – Learning Outcomes page]

After selecting the appropriate verbs that align with your outcomes, click on the “Test Resiliency” button. This will display feedback on how generative AI may already be able to meet expectations for common tasks associated with those action verbs.
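I can only guess at how the tool produces this feedback internally, but conceptually the “Test Resiliency” step behaves like a lookup from outcome verb to resilience note. The sketch below is purely hypothetical: the verbs, messages, and five-item cap echo the description above, not CART’s actual data or code.

```python
# Hypothetical sketch of a verb-to-feedback lookup; not CART's actual data or code.
RESILIENCE_NOTES = {
    "remember": "GenAI completes recall tasks easily; pair them with in-class checks.",
    "summarize": "Summaries are a GenAI strength; ask for personal or local examples instead.",
    "apply": "Application tasks resist AI better when tied to student-specific data.",
    "evaluate": "Require cited evidence and stated criteria; verify a sample of citations.",
    "create": "Original artifacts with process logs are the hardest to fully outsource.",
}

def test_resiliency(clo_verbs: list[str]) -> list[str]:
    """Return one feedback line per selected CLO verb (capped at five, as in the tool)."""
    feedback = []
    for verb in clo_verbs[:5]:  # the tool limits selection to five CLOs at a time
        note = RESILIENCE_NOTES.get(verb.lower(), "No close match; pick the nearest verb.")
        feedback.append(f"{verb}: {note}")
    return feedback

print("\n".join(test_resiliency(["Summarize", "Evaluate"])))
```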

Your Learners

Following the assessment of CLOs, the next step encourages you to consider your learners. In this section, you are invited to input relevant details about your students, including their backgrounds, career aspirations, prior knowledge, or any other contextual information that could inform your generative AI course policies. We are aware that this question might feel challenging, especially for faculty who teach all kinds of learners as part of a general education course. In this case, consider this as a more general introduction to the wide variety of learner profiles that may take the course, and how generative AI may be used from their perspective.

Your responses here, as with all inputs in the tool, will be temporarily stored and displayed on the Summary Page for your future reference.

[Screenshot: Course AI Resilience Tracker – Your Learners page]

Learning Materials

Next, the tool asks you to evaluate the relevance and adaptability of your learning materials. You may choose from the pre-set options provided, or alternatively, you can select “Other” to add customized choices based on your specific course materials.

[Screenshot: Course AI Resilience Tracker – Learning Materials page]

Activities and Assessments

Next, you will be prompted to reflect on your course activities and assessments. This section includes three key questions. Two of the questions are straightforward yes-or-no inquiries, while the third invites you to select one or more methods that you currently employ to promote academic integrity in your assessments. Including this information alongside activities and assessments helps your learners understand expected GenAI usage and why the choice has been made, and it enhances academic integrity across the entire course.

[Screenshot: Course AI Resilience Tracker – Activities and Assessments page]

Course Policies

You will then be prompted to consider an important question: does your syllabus currently include a policy on generative AI? This reflection is crucial for ensuring transparency and consistency in how AI is addressed throughout your course design. After choosing one of the answers, you will be able to select from some key elements to include in your AI usage policy.

[Screenshot: Course AI Resilience Tracker – Course Policies page]

Next Steps

Finally, the tool concludes by prompting you to consider the next steps in your course development, offering guidance on how to proceed with integrating generative AI effectively. Each choice offers different recommendations as automatic feedback, and you are encouraged to read through them all before moving on to the final summary.

[Screenshot: Course AI Resilience Tracker – Next Steps page]

Summary Page

At the conclusion of the tool, you will be directed to a Summary Page that consolidates all your previous inputs, along with the guidance and recommendations provided throughout the process. This comprehensive summary can be printed or saved as a PDF for future reference and review.

The benefits of using the tool

Recommendation 1: A clearly explained course policy

The new tool supports this recommendation by guiding instructors to design course policies that offer clear instructions to learners on what is allowed and disallowed and, most importantly, give rationales behind these policy decisions.

Recommendation 2: Considering learner profiles

The tool helps instructors map these profiles to ensure that generative AI is integrated in ways that are accessible, equitable, and aligned with the emotional and cognitive needs of different students. By anticipating student concerns, instructors can provide thoughtful guidance on how AI will or will not be used in various course activities and assessments.

Recommendation 3: Ensure Relevance and Adaptability with AI

The tool helps instructors evaluate the relevance and adaptability of their current materials by offering pre-set options or the ability to add customized choices. This process ensures that course content remains up-to-date and flexible enough to incorporate generative AI effectively or, alternatively, provides avenues to secure assessments against AI-generated content.

Course Learning Outcomes: Integrate AI with Intentional Learning Design

The tool supports this by guiding instructors through a reflection on their CLOs, offering a selection of commonly used learning outcome verbs categorized by Bloom’s Taxonomy. It also helps educators recognize the extent to which generative AI can currently accomplish many of these learning outcomes, providing valuable insights into the specific areas where AI might enhance or support course goals. The purpose of this is to ensure that AI integration choices are not just incidental, but strategically aligned with fostering critical thinking, creativity, and problem-solving skills within the broader context of your course objectives.

Conclusion

In response to the growing need for effective AI integration, this new tool helps faculty navigate the complexities of incorporating generative AI into course design. By addressing Course Learning Outcomes, Learner Profiles, Learning Materials, Activities and Assessments, and Course Policies, the tool promotes a strategic approach that aims to demystify AI for both educators and students. With thoughtful integration, well-designed generative AI policies can enhance learning experiences, help prepare students for the future, teach learners to avoid potential pitfalls, and maintain the academic integrity of online courses.

License and Attribution

License

Course AI Resilience Tracker Tool, created by Oregon State University Ecampus, is licensed under Creative Commons Attribution-NonCommercial 4.0 International

Text Content and Guidance

Ashlee Foster, Dana Simionescu, Philip Chambers, Katherine McAlvage, and Cub Kahn

HTML/JavaScript Development

Philip Chambers

References

Dello Stritto, M. E., Underhill, G. R., & Aguiar, N. R. (2024). Online Students’ Perceptions of Generative AI. Oregon State University Ecampus Research Unit. https://ecampus.oregonstate.edu/research/publications/


Introduction

We’ve all heard by now of ChatGPT, the large language model-based chatbot that can seemingly answer almost any question you present it. What if there were a way to provide this functionality to students on their learning management system, and it could answer questions they had about course content? Sure, this would not completely replace the instructor, nor would it be intended to. Instead, for quick course content questions, a chatbot with access to all course materials could provide students with speedy feedback and clarifications in far less time than the standard turnaround required through the usual channels. Of course, more involved questions about assignments and course content questions outside the scope of course materials would be better suited to the instructor, and the exact usage of a tool like this would need to be explained, as with anything.

Such a tool could be a useful addition to an online course: not only could it potentially save a lot of time, but it could also keep students on the learning platform instead of turning to a third-party solution to answer questions, as is suspected to be the case right now with currently available chatbots.

To find out what this would look like, I researched potential LLM chatbot candidates and came up with a plan to integrate one into a Canvas page.

Disclaimer!
This is simply a proof of concept, and is not in production due to certain unknowns such as origin of the initial training data, CPU-bound performance, and pedagogical implications. See the Limitations and Considerations section for more details.

How it works

The main powerhouse behind this is an open-source tool called privateGPT, which runs a Large Language Model (LLM) locally. privateGPT is designed to let you “ask questions to your documents” offline, with privacy as the goal. It therefore seemed like the best way to test this concept out. The owner of the privateGPT repository, Iván Martínez, notes that privacy is prioritized over accuracy. To quote the ReadMe file from GitHub:

100% private, no data leaves your execution environment at any point. You can ingest documents and ask questions without an internet connection!

privateGPT, GitHub Site

privateGPT, at the time of writing, was licensed under the Apache-2.0 license, but no modifications were made to the privateGPT code during this test. Initially, when you run privateGPT, ingest your documents, and ask it questions, you are doing all of this locally through a command-line interface in a terminal window. This obviously will not do if we want to integrate it into something like Canvas, so additional tools needed to be built to bridge the gap.

I therefore set about making two additional pieces of software: a web-interface chat box that would later be embedded into a Canvas page, and a small application to connect what the student would type in the chat box to privateGPT, then strip irrelevant data from its response (such as redundant words like “answer” or listing the source documents for the answer) and push that back to the chat box.
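To make that bridge concrete, here is a minimal sketch of what the small connector application could look like, written with Flask. This is not the code used in the proof of concept: ask_private_gpt() is a hypothetical stand-in for privateGPT’s query step (the 2023 release was a command-line script, so a real version might shell out to it or import its internals), and the response-cleaning patterns are illustrative.

```python
import re
from flask import Flask, jsonify, request

app = Flask(__name__)

def ask_private_gpt(question: str) -> str:
    """Hypothetical stand-in for privateGPT's query step.

    Returns a canned reply shaped like privateGPT's output so the sketch runs
    end to end; a real version would query the local model here instead.
    """
    return f"Answer: (model output for: {question})\n> source_documents/example.pdf"

def clean_response(raw: str) -> str:
    """Strip boilerplate: a leading 'Answer:' label and trailing source listings."""
    text = re.sub(r"^\s*Answer:\s*", "", raw)
    text = re.split(r"\n>?\s*source", text, flags=re.IGNORECASE)[0]
    return text.strip()

@app.post("/chat")
def chat():
    """Receive the chat-box message, query the model, and return cleaned JSON."""
    question = request.get_json(force=True).get("message", "")
    return jsonify({"reply": clean_response(ask_private_gpt(question))})

if __name__ == "__main__":
    app.run(port=5000)  # the embedded chat box POSTs each typed message to /chat
```

The web-interface half is then just a small page embedded in Canvas (for example, via an iframe) that sends each typed message to this endpoint and renders the JSON reply.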

[Diagram: how the front end of the system (what the user sees) interacts with the back end (what the user does not see). Self-created.]

Once created, the web interface portion, running locally, allows us to plug it into a Canvas page, like so:

[Screenshot: regular Canvas text on the left, and the chat-box interface on the right, connected to the LLM.]

Testing how it works

To begin, I let the LLM ‘ingest’ the Ecampus Essentials document provided to course developers on the Ecampus website. Then I asked some questions to test it out, one of which was: “What are the Ecampus Essentials?”

I am not sure what I expected here, as it is quite an open-ended question; I only knew that it would scan its trained model data and the ingested files looking for an answer. After a while (edited for time), the bot responded:

[Video: the result of asking the bot “What are the Ecampus Essentials?”]

A successful result! It has indeed pulled text from the Ecampus Essentials document, but interestingly has also paraphrased certain parts of it as well. Perhaps this is down to the amount of text it is capable of generating, along with the model that was initially selected.

A longer text example

So what happens if you give it a longer text, such as an OpenStax textbook? Would it be able to answer questions students might have about course content inside the book?

To find out, I gave the chatbot the OpenStax textbook Calculus 1, which you can download for free at the OpenStax website. No modifications were made to this text.

Then I asked the chatbot some calculus questions to see what it came up with:

[Screenshot: asking two questions about certain topics in the OpenStax Calculus 1 book.]

It would appear that if students had any questions about mathematical theory, they could get a nice (and potentially accurate) summary from a chatbot such as this. Though this brings up some pedagogical considerations: would this make students less likely to read textbooks? Would this be able to search for answers to quiz questions and/or assignment problems? It is already common to ask ChatGPT to provide summaries and discussion board replies; would this bot function in much the same way?

Asking the chatbot to calculate things, however, is where one would run into the current limitations of the program, as it is not designed for that. Simple sums such as “1 + 1” return the correct answer, as this is part of the training data or otherwise common knowledge. Asking it to do something like calculate the hypotenuse of a triangle using Pythagoras’ theorem will not be successful (even using a textbook example of 3² + 4² = c²). The bot will attempt to give an answer, but its accuracy will vary wildly based on the data given to it. I could not get it to give me the correct response, but that was expected, as this was not in the ingested documentation.

Limitations and Considerations

OK, so it’s not all perfect – far from it, in fact! The version of privateGPT I was using, while impressive, had some interesting quirks in certain responses. Responses were never identical either, but perhaps that is to be expected from a generative LLM. Still, this would require further investigation and testing in a production-ready model.

How regular and substantive interaction (RSI) might be affected is an important point to consider, as a more capable chatbot could impact the student-instructor Q&A discussion board side of things without prior planning on intended usage.

A major technical issue was that I was limited to using the central processing unit (CPU) instead of the much faster graphics processing unit (GPU) used in other LLMs and generative AI tools. This meant that the time between the question being sent and the answer being generated was far higher than desired. As of writing, there appears to be a way to switch privateGPT to GPU instead, which would greatly increase performance on systems with a modern GPU. The processing power required for a chatbot that more than one user would interact with simultaneously would be substantial.

Additionally, the incorporation of a chatbot like this has some other pedagogical implications, such as how the program would respond to questions related to assignment answers, which would need to be researched.

We also need to consider the technical skill required to create and maintain a chatbot. Despite going through all of this, I am no Artificial Intelligence or Machine Learning expert; a dedicated team would be required to maintain the chatbot’s functionality to a high enough standard.

Conclusion

In the end, the purpose of this little project was to test if this could be a tool students might find useful and could help them with content questions faster than contacting the instructor. From the small number of tests I conducted, it is very promising, and perhaps a properly built version could be used as a private alternative to ChatGPT, which is already being used by students for this very purpose. A major limitation was running the program from a single computer with consumer components made 3 years ago. With modern hardware and software – perhaps a first-party integrated version built directly into a learning management system like Canvas – students could be provided with their own course- or platform-specific chatbot for course documents and texts.

If you can see any additional uses, or potential benefits or downsides to something like this, leave a comment!

Notes

  1. Martínez Toro, I., Gallego Vico, D., & Orgaz, P. (2023). PrivateGPT [Computer software]. https://github.com/imartinez/privateGPT.
  2. “Calculus 1” is copyrighted by Rice University and licensed under an Attribution-NonCommercial-Sharealike 4.0 International License (CC BY-NC-SA).

Note: This was conducted using the Jan 9 Pre Research Preview version of ChatGPT.

Introduction

ChatGPT has been featured everywhere recently, with some educators being particularly worried because of its ability to craft essays, provide answers to quiz questions, and bypass plagiarism detection software. Teachers are trying to find ways to detect AI generated content, but are we being too hasty to associate AI in education with academic misconduct? Could the tool being used to cheat on courses actually help build them? I put this to the test by asking ChatGPT some questions.

Note: The purpose of this was to see how much of the course development process could be generated by the chatbot.

How to begin?

Let’s start at the beginning: how do we design a course? Well, keeping the process brief for this post, we want to start with some course learning outcomes, align them with some assessments, and generate some activities to help students practice for those assessments. I have a background in linguistics and language education, so I wanted to see how this chatbot would respond to some prompts to create a generic English language course.

Chat Transcript

The following is the chat transcript from ChatGPT, verbatim. Screenshots of the conversation are available below. Anything under a “ChatGPT Answer” heading was written by the AI, with my comments in the “Comments” subsection under it.

Question 1

Gathering the learning outcomes

Could you generate 4-5 course learning outcomes for an intermediate English as a second language course?

ChatGPT Answer

  1. Students will be able to understand and use complex grammatical structures in spoken and written English, including verb tenses, modals, and conditional sentences.
  2. Students will be able to participate in discussions and debates on a variety of topics, using appropriate vocabulary and idiomatic expressions.
  3. Students will be able to read and comprehend authentic English texts, such as news articles, short stories, and literary works, and be able to analyze and interpret them.
  4. Students will be able to write clear, well-organized, and grammatically correct essays, using appropriate transitional words and phrases to develop their ideas.
  5. Students will be able to give presentations in English, using appropriate pronunciation, intonation, and stress to convey their message effectively.

Comments

These are standard course learning outcomes for an ESL course. As I simply asked for learning outcomes with no context, the chatbot had no idea how detailed they should be or how long it would take to achieve these outcomes, though that was tested in the next question.

Question 2

Basic weekly topics

Using those outcomes, what would the topics for a 10-week course look like?

ChatGPT Answer

  1. Week 1-2: Review of basic grammatical structures, introduction to verb tenses, including present perfect and past continuous.
  2. Week 3-4: Introduction to modals, including can, could, may, might, must, should, and will. Practice using them in context.
  3. Week 5-6: Conditional sentences, including zero, first, second, and third conditionals. Practice using them in context.
  4. Week 7-8: Vocabulary building and idiomatic expressions. Practice using them in discussion and debates.
  5. Week 9-10: Reading comprehension and analysis. Practice with a variety of texts and interpreting them.
  6. Week 11-12: Writing skills, including clear organization, development of ideas and use of transitional words and phrases.
  7. Week 13-14: Presentation skills, including pronunciation, intonation and stress.

Note: These topics might vary depending on the level of proficiency of the students in the class and the specific course design.

Comments

Interestingly, the chatbot gave me 14 weeks of topics rather than the 10 I asked for. I believe this is because of the structure of my question, as this was fixed in a later question. This was the only glaring mistake during our entire “conversation”.

Question 3 

Assessments

What kinds of assessments could be used for these outcomes and topics?

ChatGPT Answer

  1. Grammar and vocabulary quizzes or tests to assess students’ understanding and mastery of grammatical structures, verb tenses, modals, and idiomatic expressions.
  2. Oral presentations, debates, or discussions to assess students’ ability to participate in spoken interactions using appropriate vocabulary and idiomatic expressions.
  3. Written assessments such as essays, short answer questions or summary writing to assess students’ ability to understand, analyze and interpret texts and convey their ideas in writing.
  4. Reading comprehension quizzes or tests to assess students’ ability to understand and interpret authentic English texts.
  5. Writing assessments such as essays, reports, or research papers to evaluate students’ ability to organize their thoughts, develop their ideas and use appropriate transitional words and phrases.
  6. Oral presentations or speeches to assess students’ ability to convey information effectively using appropriate pronunciation, intonation, and stress.
  7. Self-reflection or peer-evaluation could be used as well to evaluate students’ progress and areas of improvement.

Comments

A fairly wide range of skills is being tested here – which is also down to my extremely generic requirement of “intermediate English language”. I thought the assessments here contained enough that someone designing a course could pick a few of them from each set provided.

Question 4

Alignment

Can you map the above assessments to the previously mentioned course learning outcomes? How would they align?

ChatGPT Answer

Sure, here’s how the assessments could align with the previously mentioned course learning outcomes:

1. Grammar and vocabulary quizzes or tests align with the outcome of being able to understand and use complex grammatical structures in spoken and written English, including verb tenses, modals, and conditional sentences.

2. Oral presentations, debates, or discussions align with the outcome of being able to participate in discussions and debates on a variety of topics, using appropriate vocabulary and idiomatic expressions.

3. Written assessments such as essays, short answer questions or summary writing align with the outcome of being able to read and comprehend authentic English texts, such as news articles, short stories, and literary works, and be able to analyze and interpret them, and also the outcome of being able to write clear, well-organized, and grammatically correct essays, using appropriate transitional words and phrases to develop their ideas.

4. Reading comprehension quizzes or tests align with the outcome of being able to read and comprehend authentic English texts, such as news articles, short stories, and literary works, and be able to analyze and interpret them.

5. Writing assessments such as essays, reports, or research papers align with the outcome of being able to write clear, well-organized, and grammatically correct essays, using appropriate transitional words and phrases to develop their ideas.

6. Oral presentations or speeches align with the outcome of being able to give presentations in English, using appropriate pronunciation, intonation, and stress to convey their message effectively.

7. Self-reflection or peer-evaluation align with the outcome of evaluating students’ progress and areas of improvement.

It’s important to note that the assessments should be designed to be both formative and summative, in order to provide ongoing feedback to the student and help them to track their progress and identify areas for improvement.

Comments

Alignment of assessments to outcomes is one of the most important parts of designing an online course. I was impressed with the mapping of each assessment to an outcome, as explaining why an assessment is being given to students is quite common practice.

Question 5 

How could multimedia help students?

Can you think of how multimedia could be used in this course to promote effective language learning?

ChatGPT Answer

There are several ways that multimedia could be used in an intermediate English as a second language course to promote effective language learning:

  1. Video and audio recordings of native English speakers can be used to expose students to authentic English, including different accents and dialects. This will help students to improve their listening comprehension and pronunciation skills.
  2. Interactive online exercises and activities, such as quizzes, games, and simulations can be used to provide students with opportunities to practice and reinforce their grammar, vocabulary, and idiomatic expressions in a fun and engaging way.
  3. Online discussion boards, chat rooms, and virtual meetings can be used to provide students with opportunities to interact with their classmates and teachers in real-time, thus promoting their speaking and writing skills.
  4. Online videos, documentaries, and TED talks can be used to expose students to a variety of interesting and engaging topics, and to provide students with opportunities to practice their listening comprehension and critical thinking skills.
  5. Online flashcards, quizzes, and other tools can be used to help students to memorize and retain new vocabulary words, and to improve their spelling and word recognition skills.
  6. Online or offline multimedia-based learning materials, such as interactive e-books, mobile apps, and podcasts, can be used as additional resources for students to explore and practice their language skills on their own time.
  7. Online resources such as subtitles for films, TV shows or podcasts can be used to improve students’ listening comprehension, vocabulary and grammar skills.

Overall, multimedia can be a great tool to make language learning more interactive, engaging, and personalized, and can be used to support the traditional teaching methods and assessments.

Comments

The multimedia selection here is fairly common in ESL classrooms. Selecting a combination of these would enhance the learning experience. It is also helpful that an explanation of purpose is given for each item on the list.

Question 6

Diversity and inclusion

Students on this course will be from diverse backgrounds and cultures. Can you anticipate any topics that would best be avoided?

ChatGPT Answer

When teaching students from diverse backgrounds and cultures, it’s important to be sensitive to cultural differences and to avoid topics that may be offensive or controversial. Some potential topics that might be best avoided in an intermediate English as a second language course include:

  1. Religion: It’s best to avoid discussing religious beliefs, practices, or controversies, as they can be deeply personal and may offend some students or create unnecessary tension among the class.
  2. Politics: Avoid discussing or debating controversial political issues, as they may be divisive and could create tension among the class.
  3. Cultural stereotypes: Avoid making generalizations or negative comments about any particular culture or group of people, as this can be offensive and perpetuate stereotypes.
  4. Racism and discrimination: Avoid discussing these topics in a way that might make any student feel targeted, uncomfortable or offended.
  5. History: Some historical events can be sensitive and can cause discomfort to some students. It is important to be aware of different perspectives and be sensitive.

It is important to remember that every student is unique and has their own personal experiences and perspectives, and topics that may be acceptable for one student may not be for another. It’s best to use common sense, and if you are unsure, it’s always good to ask your students if they are comfortable discussing a particular topic.

It’s also important to create a safe and inclusive environment, where all students feel respected and valued, and where they can express themselves freely without fear of judgement or discrimination.

Comments

I included this question because of the importance of diversity and inclusion in the classroom; in an ESL course in particular, students will come from diverse backgrounds. It was good to see that the chatbot could not only provide a list of topics in answer to my question but also offer additional advice on the learning environment.

Question 7

Converting this into an online course

How could everything mentioned so far be developed as an online, asynchronous course? Student interaction might be limited to discussion boards as they are likely to be in other countries with different time zones.

ChatGPT Answer

An intermediate English as a second language course can be developed as an online, asynchronous course by using a variety of tools and resources to create a self-paced and interactive learning experience. Some potential strategies include:

  1. Use a Learning Management System (LMS) such as Canvas, Blackboard, or Moodle to create a centralized location for all course materials, assignments, and communication. This will provide students with easy access to all the resources they need to complete the course. Use video and audio recordings, as well as interactive online exercises and activities to provide students with opportunities to practice and reinforce their grammar, vocabulary, and idiomatic expressions in a fun and engaging way.
  2. Use discussion boards, chat rooms, and virtual meetings to provide students with opportunities to interact with their classmates and teachers in real-time, thus promoting their speaking and writing skills. These tools can also be used to create a sense of community among the students, despite their geographic distance and time zones.
  3. Create and provide online multimedia-based learning materials, such as interactive e-books, mobile apps, and podcasts, as additional resources for students to explore and practice their language skills on their own time.
  4. Use online quizzes, tests and assessments to evaluate student’s understanding, and use the results to provide them with feedback and guidance on areas where they need to improve.
  5. Use Self-reflection or peer-evaluation to evaluate students’ progress, and areas of improvement.
  6. Provide detailed and clear instructions for all assignments, activities, and assessments, and make sure that the deadlines are flexible enough to accommodate students in different time zones.
  7. Encourage students to take an active role in their own learning by regularly checking in with them, providing them with feedback, and giving them opportunities to reflect on their progress and set goals for themselves.

Overall, an online, asynchronous course can be designed to be flexible, interactive, and engaging, and can provide students with opportunities to learn and practice their language skills at their own pace and on their own schedule.

Comments

I work in the field of online and hybrid learning, and often adapt existing in-person courses to online versions. I wanted to see if the chatbot could also do this, given that all of the previous answers were based on the assumption of an in-person class. This was not ChatGPT's fault: I never specified whether I wanted it to be an online-specific course. An important lesson here is that imprecise input will not result in precise output (see: GIGO for more information, or this article on tailoring your prompts for ChatGPT).

Still, given the updated prompt, it was able to provide both synchronous and asynchronous activities to accommodate differences in time zones and locations.
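To make the GIGO point concrete, here is a minimal sketch using the OpenAI Python SDK. The model name and both prompts are illustrative assumptions, not the ones from this conversation; the point is that only the input changes, yet the specificity of the output differs dramatically:

```python
# Sketch: vague vs. precise prompting. Assumes the OpenAI Python SDK
# and an OPENAI_API_KEY in the environment; model name and prompt text
# are illustrative, not from the original conversation.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Design an English course."
precise_prompt = (
    "Design a 10-week, online, asynchronous course for intermediate "
    "English-as-a-second-language learners spread across time zones. "
    "List measurable learning outcomes and one aligned assessment per outcome."
)

for prompt in (vague_prompt, precise_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the opening of each answer to compare specificity
    print(response.choices[0].message.content[:300], "\n---")
```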

Question 8

Asking directly for an opinion

Can you think of anything else that might be important to consider when building this course?

ChatGPT Answer

Here are a few additional things that might be important to consider when building an intermediate English as a second language course:

  1. Assessing students’ prior knowledge and language proficiency: It’s important to assess students’ prior knowledge and language proficiency before the course begins, in order to tailor the course content to their needs and to ensure that they are placed in the appropriate level.
  2. Differentiating instruction: Since students in an intermediate English as a second language course will have varying levels of proficiency, it’s important to differentiate instruction and provide multiple means of representation, expression, and engagement to meet the needs of all students.
  3. Scaffolding and modeling: Provide students with the necessary support, guidance and modeling to complete the activities and assignments.
  4. Provide opportunities for students to practice and apply their language skills in authentic and meaningful contexts.
  5. Encourage students to use English as much as possible: Encourage students to use English as much as possible, both inside and outside of class, by providing them with opportunities to interact with native speakers, read authentic English texts, and listen to English-language music and podcasts.
  6. Use technology to enhance language learning: Use technology to enhance language learning by providing students with access to online resources, such as grammar exercises, vocabulary lists, and pronunciation guides.
  7. Provide regular and constructive feedback: Provide regular and constructive feedback on students’ progress and areas for improvement. This can be done through formative assessments, individual conferences, or through online tools such as rubrics and self-reflection.
  8. Be flexible and responsive: Be flexible and responsive to students’ needs and be open to suggestions and feedback to improve the course.
  9. Cultural sensitivity: Be culturally sensitive, and be aware of the cultural backgrounds of the students and make sure that the course content is inclusive and respectful of different cultures.
  10. Keep in mind that the course is not just about language, but also about culture. Provide students with opportunities to learn about the cultures of English-speaking countries in order to help them to understand and appreciate the language and its context better.

Comments

For this question, I wanted to see if the chatbot could provide additional considerations for teaching and learning. These are all great suggestions, which is probably why the AI selected them – they likely appear in many examples of educational content that it had been trained on.

Question 9

Rewriting course learning outcomes to be more measurable

Can you take the aforementioned course learning outcomes and rewrite them with strong, measurable verbs? For example, “understand” is not easy to measure.

ChatGPT Answer

Sure, here are the course learning outcomes rephrased with strong, measurable verbs:

  1. Students will be able to identify and use complex grammatical structures in spoken and written English, including verb tenses, modals, and conditional sentences with 80% accuracy on grammar quizzes and tests.
  2. Students will be able to initiate and participate in discussions and debates on a variety of topics, using appropriate vocabulary and idiomatic expressions with 70% accuracy on oral presentations, debates, or discussions.
  3. Students will be able to read and comprehend authentic English texts, such as news articles, short stories, and literary works, and analyze and interpret them with 80% accuracy on reading comprehension quizzes or tests.
  4. Students will be able to write clear, well-organized, and grammatically correct essays, using appropriate transitional words and phrases to develop their ideas with 80% accuracy on writing assessments such as essays, reports, or research papers.
  5. Students will be able to give presentations in English, using appropriate pronunciation, intonation, and stress to convey their message effectively with 80% accuracy on oral presentations or speeches.

These objectives are now specific, measurable and have a clear criteria of success. They are also aligned with the assessments and activities that you have previously mentioned.

Comments

I conducted an experiment that mimics the process instructional designers routinely encounter when designing online courses. There is often a set of learning outcomes given at the start of the development process, and these regularly use verbs that are hard to measure, such as “understand” or “know”. This makes it difficult to align assessments to outcomes: what does it really mean to “understand” something? How can we apply a rubric or grade to it?

Note that, in a real development beginning from scratch, we would want to design the course learning outcomes to be measurable before anything else. In practice, however, I usually work with faculty on courses with predetermined outcomes, which are either modified into more measurable ones or linked to more measurable weekly learning outcomes.

ChatGPT modified the one verb I called out, “understand”, but left “comprehend”. Perhaps that also required a specific mention in the initial question; it is another example of how your initial prompt influences the results.
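One practical payoff of measurable verbs is that the criterion becomes checkable as plain data. A minimal sketch, using the thresholds from the rewritten outcomes above but with hypothetical outcome labels and made-up scores:

```python
# Sketch: representing measurable outcomes as data so a grade can be
# checked against the stated criterion. Labels and scores are hypothetical;
# thresholds mirror the rewritten outcomes (80%, 70%, 80%).
outcomes = {
    "grammar":    {"assessment": "grammar quizzes and tests",    "threshold": 80},
    "discussion": {"assessment": "oral presentations or debates", "threshold": 70},
    "reading":    {"assessment": "reading comprehension tests",  "threshold": 80},
}

student_scores = {"grammar": 84, "discussion": 65, "reading": 91}

for outcome, spec in outcomes.items():
    met = student_scores[outcome] >= spec["threshold"]
    print(f"{outcome}: {student_scores[outcome]}% on {spec['assessment']} "
          f"-> {'met' if met else 'not met'}")
```

No comparable check can be written for “understand”, which is the whole point of the rewrite.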

Question 10

Creating a weekly schedule

Can you make a weekly schedule, from week 1 to week 10, with an overview of activities for that particular week, incorporating the aforementioned assessments where appropriate and aligning all of it with the course learning outcomes?

ChatGPT Answer

Sure, here’s a possible weekly schedule for an intermediate English as a second language course:

Week 1:

  • Overview of the course and introduction to the course learning outcomes
  • Assessment of students’ prior knowledge and language proficiency
  • Introduction to verb tenses, including present perfect and past continuous
  • Interactive online exercises and activities to practice verb tenses
  • Vocabulary building and idiomatic expressions

Week 2:

  • Review and reinforcement of verb tenses
  • Quiz or test on verb tenses
  • Introduction to modals, including can, could, may, might, must, should, and will
  • Practice using modals in context

Week 3:

  • Review and reinforcement of modals
  • Quiz or test on modals
  • Introduction to conditional sentences, including zero, first, second, and third conditionals
  • Practice using conditional sentences in context

Week 4:

  • Review and reinforcement of conditional sentences
  • Quiz or test on conditional sentences
  • Vocabulary building and idiomatic expressions
  • Practice using vocabulary and idiomatic expressions in discussions and debates

Week 5:

  • Reading comprehension and analysis
  • Practice with a variety of texts and interpreting them
  • Writing skills, including clear organization, development of ideas and use of transitional words and phrases

Week 6:

  • Review and reinforcement of writing skills
  • Writing assessment such as essays, reports, or research papers
  • Presentation skills, including pronunciation, intonation and stress

Week 7:

  • Review and reinforcement of presentation skills
  • Practice oral presentations or speeches
  • Self-reflection or peer-evaluation
  • Vocabulary building and idiomatic expressions

Week 8:

  • Reading comprehension and analysis
  • Practice with a variety of texts and interpreting them
  • Writing skills, including clear organization, development of ideas and use of transitional words and phrases

Week 9:

  • Review and reinforcement of all language skills
  • Final assessment, which can include a combination of quizzes, tests, essays, oral presentations, or other forms of assessment.
  • Feedback on final assessment

Week 10:

  • Review of key concepts and skills
  • Feedback on the course, and suggestions for improvement
  • Final self-reflection or peer-evaluation

Note: This is just a sample schedule, and it can be adjusted depending on the specific needs of the students and the course design. Also, it’s important to remember that the assessments should be designed to be both formative and summative, in order to provide ongoing feedback to the student and help them to track their progress and identify areas for improvement.

Comments

The language I used in this question yielded the correct number of weeks from the AI. Each week is structured to include a review of the previous week’s content, which is a common teaching technique. This is asking a lot from the chatbot! It managed to incorporate most of my requests though, which again is impressive. 

Question 11

Academic Integrity

What measures could be taken to ensure academic integrity in the online version of this course?

ChatGPT Answer

Academic integrity is a critical aspect of any course, and it’s important to take measures to ensure that students are completing their work independently and honestly. Here are a few measures that could be taken to ensure academic integrity in an online version of an intermediate English as a second language course:

  1. Use plagiarism detection software: Use plagiarism detection software, such as Turnitin, to check students’ work for originality. This will help to detect instances of plagiarism and ensure that students are completing their own work.
  2. Provide clear guidelines on academic integrity: Provide students with clear guidelines on academic integrity, including definitions of plagiarism, cheating, and academic dishonesty.
  3. Monitor online activities: Monitor online activities, such as discussion board posts, chat room conversations, and virtual meetings, to ensure that students are participating in the course honestly and independently.
  4. Use proctoring software: Use proctoring software, such as ProctorU or Respondus Monitor, to monitor students while they take online quizzes, tests, or exams.
  5. Use randomized questions: Use randomized questions, or different versions of the same test to prevent students from sharing answers with each other.
  6. Encourage self-regulation: Encourage self-regulation, and make it clear that the students are responsible for maintaining academic integrity in their work.
  7. Create a culture of integrity: Create a culture of integrity in the class, by emphasizing the importance of honesty, trust, and responsibility.
  8. Have clear consequences: Have clear consequences for academic dishonesty, and communicate them to the students.

Overall, it’s important to establish clear expectations for academic integrity and

ChatGPT froze here; the servers were experiencing high volumes of traffic.

Comments

Again, these are solid measures, used in both in-person and online courses; a quick web search would yield similar results, as they are shared across most educational institutions. The fifth suggestion, randomized questions, is also mechanical enough to sketch in code below.
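A minimal sketch of that randomization, assuming a simple in-memory question bank; the bank, function name, and per-student seeding are hypothetical details of my own, not anything ChatGPT proposed:

```python
# Sketch: drawing a distinct random quiz version per student from a
# question bank, so no two students share an identical test.
import random

question_bank = [f"Question {i}" for i in range(1, 31)]  # hypothetical bank

def make_quiz(student_id: str, num_questions: int = 10) -> list[str]:
    # Seed per student: the same student always sees the same version,
    # but different students get different questions in a different order.
    rng = random.Random(student_id)
    return rng.sample(question_bank, k=num_questions)

print(make_quiz("alice")[:3])
print(make_quiz("bob")[:3])
```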

Thoughts on further usage

It is clear that AI is set to change the way we interact with technology in education (or already has, depending on how much you use it). For this test, I simply provided the most generic input to see what the chatbot would generate. For an intermediate ESL course (however “intermediate” was interpreted by the chatbot), the learning outcomes were solid once measurable outcomes were requested, and I was impressed that the generated weekly schedule included high-level overviews of activities and assessments.

With more precise input requirements for course content and outcomes, one would expect an even more accurate output.

There are many ways this kind of technology can be used to enhance courses, and I can see it being a very useful tool for quick tasks, such as brainstorming potential assignments for certain learning outcomes or checking whether an assignment prompt might have unintended consequences. For example, see the following:

Bonus Question

What could go wrong?

Can you think of any potential negative consequences of using the following prompt in a classroom?

“Make a presentation on current events in your country.”

ChatGPT Answer

There are several potential negative consequences of using the prompt “Make a presentation on current events in your country” in a classroom setting:

  1. Bias: Students may only focus on certain events or perspectives, leading to a biased understanding of current events.
  2. Limited perspective: Students may only present events happening in their immediate community, leading to a limited perspective on current events in the country as a whole.
  3. Inappropriate content: Students may present on sensitive or controversial topics that may be inappropriate for a classroom setting.
  4. Lack of critical thinking: By only presenting current events, students may not develop the critical thinking skills needed to analyze and understand the underlying causes and consequences of the events.
  5. Limited access to information: Some students may not have access to current events information, especially if they come from low-income or marginalized communities.

It’s important to consider these potential negative consequences and take steps to mitigate them, such as providing guidance on how to present information in a balanced and fair manner and encouraging critical thinking and analysis of current events.

Watching for potential bias

During my “conversation” with the bot, I was impressed with the response to the diversity and inclusion question I posed. The points it presented were, like most answers in this experiment, fairly common and expected responses for such a course (‘don’t talk about politics, religion, etc.’). A phrase that kept cropping up was ‘it is important to…’ followed by a value statement. Those presented in this conversation were benign, and actually good advice from an educational standpoint (e.g. every student is unique, you should create a safe learning environment, and so on). It seems a lot of work has been done to minimize bias: responses from the bot on many issues come with disclaimers and things that are ‘important to remember’. It is, however, also important to remember that biases can still creep in, and to keep an eye out for them, especially when taking advice from AI.

Copyright Issues

Obviously the chatbot is pulling this information from somewhere (more on this is contained on the Wikipedia page for ChatGPT), and even though it is generative AI, one has to wonder how much of an issue this will become if generated content is too similar to an existing program or course, in this subject or any other it is capable of generating.

References and Sources

  1. ‘ChatGPT’. Wikipedia. [Accessed Jan 19 2023].
  2. ‘ChatGPT Jan 9 Version Just Released: Learn What’s New!’. ChatGPT. [Accessed Jan 19 2023].
  3. ‘ChatGPT Success Completely Depends on Your Prompts’. Forbes. [Accessed Jan 19 2023].
  4. ‘Garbage in, garbage out’. Wikipedia. [Accessed Jan 19 2023].
  5. Lee, J. W. (2023). ‘A new tool helps teachers detect if AI wrote an assignment’. NPR. [Accessed Jan 19 2023].
  6. OpenAI: ChatGPT. [Accessed Jan 19 2023].

Conversation Screenshots

[Image: “Man Reading Touchscreen” by Karolina Grabowska from Pixabay]

Background

Five years ago, I wrote a small entry in the ORTESOL Newsletter about the then state of “adaptive software capable of teaching, testing, giving feedback, and most importantly, adjusting to student needs” (Chambers, 2015: 13). I mentioned a set of technologies colloquially referred to as the “Digital Aristotle”, or ‘Project Halo’ (Friedland et al., 2004), and the update to this six years later (Gunning et al., 2010). The Digital Aristotle was described as ‘an application that will encompass much of the world’s scientific knowledge and be capable of applying sophisticated problem-solving to answer novel questions’ (Friedland et al., 2004).

At the time, I was writing about a more grandiose piece of software that might one day replace the repetitive tasks of an ESOL classroom. The idea, or perhaps the concern, this technology raised for existing teachers was where they would fit in once a set of algorithms could replicate much of the day-to-day learning of a language course.

Five years on, I turn to how learning designers might be able to incorporate AI into course design.

Potential

First, a question: is there currently a program capable of teaching a course and adapting to student needs like an instructor? The answer is still no. Certain technology has, however, progressed to the point that portions of an online course can be enhanced by AI.

Adaptive Learning

Quite possibly the most exciting development in artificial intelligence for learning is adaptive learning. This concept has been in the works for a while, with certain platforms utilizing algorithms to produce content that adapts to specific learner needs.

Instructure’s Canvas, for instance, allows Instructional Designers to set up ‘Differentiated Assignments’ (Canvas Doc Team, 2017) and ‘MasteryPaths’ (Canvas Doc Team, 2020), which ‘allows targeted learning activities to be assigned to different users and sections’ (‘MasteryPaths’). Currently this is a manual process, with course designers creating every assessment beforehand; the strongest students might never see the additional activities. It is not ‘intelligent’ in the sense of adapting course content to a specific learner’s needs on the fly. Doing that requires large amounts of data and, most importantly, AI training to spot patterns, strengths, and weaknesses for a particular learner.
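The decision rule behind this kind of conditional release is simple to express; what makes it manual is authoring every branch in advance. A sketch of the rule only, with hypothetical thresholds and module names (this is not Canvas’s actual API):

```python
# Sketch of MasteryPaths-style conditional release: route a learner to a
# next module based on a mastery score. Thresholds and module names are
# hypothetical; this shows the decision rule, not Canvas's API.
def next_module(score: float) -> str:
    if score >= 90:
        return "enrichment-activities"  # strong students skip remediation
    elif score >= 70:
        return "standard-practice"
    else:
        return "targeted-review"

for score in (95, 78, 55):
    print(score, "->", next_module(score))
```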

This is why Duolingo records progress at every step and offers learners a review of concepts they struggled with in previous activities. Platforms that provide an automated review often use large question banks and flag questions that learners answer incorrectly, whether initially or repeatedly. An intelligent AI could create novel questions based on learner goals, data from prior students, and information about the subject matter; an analogue is Google’s or Amazon’s ability to predict and offer products or suggestions based on the vast amount of information provided to them every single day. Certain training providers are currently working with application developers to produce tools capable of this on-the-fly feedback and adaptation.
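That flag-and-review pattern can be approximated in a few lines: count errors per concept from an activity log and resurface the most-missed concepts. A minimal sketch with hypothetical data:

```python
# Sketch: flag concepts a learner answers incorrectly, initially or
# repeatedly, and queue the most-missed ones for review. Data hypothetical.
from collections import Counter

# (concept, answered_correctly) pairs from a learner's activity log
attempts = [
    ("past perfect", False), ("modals", True), ("past perfect", False),
    ("conditionals", False), ("modals", True), ("past perfect", True),
]

errors = Counter(concept for concept, correct in attempts if not correct)
review_queue = [concept for concept, _ in errors.most_common(3)]
print("Review next:", review_queue)  # most-missed concepts first
```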

Discussions

At the OLC Innovate 2020 conference, Kasey Gandham from the ed tech company Packback and Kim A. Scalzo, Executive Director of Open SUNY, demonstrated how Packback’s AI is being used with online discussions to help students write higher-quality posts. As students write their discussion posts, the AI program checks for ‘close-ended questions, plagiarism, insufficient length, content about class logistics, profanity and abuse’ (Gandham & Scalzo, 2020). After this, if required, the post is moderated and the learner receives email feedback explaining why it was flagged and how to revise it. The AI is also capable of suggesting posts to feature as the best of the week by analyzing, among other things, sentence depth and ‘curiosity score’.
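Parts of such a pipeline are straightforward rules rather than deep learning. A sketch of just a rule-based first pass; the thresholds and phrase list are my own hypothetical stand-ins, not Packback’s system:

```python
# Sketch: a rule-based first pass over a discussion post, flagging
# insufficient length and close-ended phrasing before human moderation.
# Thresholds and word lists are hypothetical stand-ins.
MIN_WORDS = 50
CLOSED_OPENERS = ("is ", "are ", "do ", "does ", "can ", "did ")

def review_post(post: str) -> list[str]:
    flags = []
    if len(post.split()) < MIN_WORDS:
        flags.append("insufficient length")
    if post.strip().lower().startswith(CLOSED_OPENERS):
        flags.append("close-ended question")
    return flags  # an empty list means the post passes this layer

print(review_post("Is grammar important?"))  # -> both flags raised
```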

More than a Quiz

The role and importance of ‘big data’ in online learning cannot be overstated. Technology already exists that records the time learners spend interacting with learning materials; it knows where they are clicking or tapping on the screen and how long activities hold learner attention. Using this data, AI could suggest, or even craft, assessments adapted specifically to a single learner’s usage habits. Traditional quizzes that assess information retention could become just one part of a larger formative assessment running throughout the entire course, without the learner even realizing any of this is happening.
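The raw ingredient here is unglamorous: timestamped interaction events. A minimal sketch of summarizing attention per activity from such a log (the event format is hypothetical):

```python
# Sketch: summarizing learner attention per activity from interaction
# events. The event format is a hypothetical stand-in for LMS analytics.
from collections import defaultdict

events = [  # (activity, seconds spent in one interaction)
    ("reading-1", 240), ("quiz-1", 90), ("reading-1", 120), ("video-2", 310),
]

time_per_activity = defaultdict(int)
for activity, seconds in events:
    time_per_activity[activity] += seconds

# Report activities by total attention, highest first
for activity, total in sorted(time_per_activity.items(), key=lambda x: -x[1]):
    print(f"{activity}: {total}s of attention")
```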

Future

AI-enhanced design has the potential to transform the work of Instructional Designers like never before. It could help us modify our own design practices based on how learners respond to course content. Through learner feedback, it could demonstrate which activities are most appealing and most conducive to personalized learning goals. Big data’s record of learner interactions with content can provide insights into preferred learning styles and methods of instruction. Instructional Designers will have to continue adapting with the technology, just as we have done in our everyday lives.

References

Canvas Doc Team. (2017, April 19). How do I view differentiated assignments with different due dates in a course? Retrieved July 02, 2020, from https://community.canvaslms.com/docs/DOC-10036-how-do-i-view-differentiated-assignments-with-different-due-dates-in-a-course

Canvas Doc Team. (2020, April 6). How do I use MasteryPaths in course modules? Retrieved July 02, 2020, from https://community.canvaslms.com/docs/DOC-26231-how-do-i-use-masterypaths-in-course-modules 

Chambers, P. (2015, Spring). “Digital Aristotle” and ESL: What does it mean for us?. ORTESOL Quarterly Newsletter, 38(1), pp. 12-13.

Friedland, N.S. et al. (2004). ‘Project Halo: Towards a Digital Aristotle’, American Association for Artificial Intelligence, 25(4), pp. 29-47. DOI: http://dx.doi.org/10.1609/aimag.v25i4.1783.

Gandham, K. and Scalzo, K. A. (2020, June). Using AI in discussion to scale access to quality online education. OLC Innovate 2020, online presentation. https://onlinelearningconsortium.org/olc-innovate-2020-session-page/?session=8486

Gunning, D. et al. (2010). ‘Project Halo Update – Progress Toward Digital Aristotle’, Association for the Advancement of Artificial Intelligence, 31(3), pp. 33-58. DOI: http://dx.doi.org/10.1609/aimag.v31i3.2302.