We all remember the warning from math class:

“You won’t always have a calculator in your pocket!”

How we laugh now: calculators did arrive in our pockets, and eventually smartphones put one in our hands at all times.

I have seen many comparisons 1 2 3 across the Internet between artificial intelligence (AI) and those mathematics classes of yesteryear, the idea being that AI is merely the newest embodiment of the same concern, which turned out to be overblown.

But is this an apt comparison to make? After all, we did not replace math lessons and teachers with pocket calculators, nor even with smartphones. The kindergarten student is not simply given a Casio and told to figure it out. The quote we all remember has a deeper meaning, hidden in the exasperated response to the question so often asked by students: “Why are we learning this?”

The response

It was never about the calculator itself, but about knowing how, when, and why to use it. A calculator speeds up the arithmetic, but the core cognitive process remains the same. The key distinction is between pressing the = button and understanding the result of the = button. A student who can set up the equation, interpret the answer, and explain the steps behind the screen will retain the mathematical insight long after the device is switched off.

The new situation – Enter AI

Scenario

Pressed for time and juggling multiple commitments, a student turns to an AI tool to help finish an essay they might otherwise have written on their own. The result is a polished, well-structured piece that earns them a strong grade. On the surface, it looks like a success, but because the heavy lifting was outsourced, the student misses out on the deeper process of grappling with ideas, making connections, and building understanding.

This kind of situation highlights a broader concern: while AI can provide short-term relief for students under pressure, it also risks creating long-term gaps in learning. The issue is not simply that these tools exist, but that uncritical use of them can still produce passing grades without the student engaging in the meaningful reflection that prior cohorts gained by doing the work themselves. Additionally, when AI-generated content contains inaccuracies or outright hallucinations, a student’s grade can suffer, underscoring the importance of reviewing and verifying the material themselves. The rapid, widespread uptake of these tools stresses the need to move beyond use alone and toward cultivating the critical habits that ensure AI supports, rather than supplants, genuine learning.

Some background studies

In a 2024 study on Generative AI Usage and Exam Performance, Wecks et al. (2024) report that:

Employing multivariate regression analysis, we find that students using GenAI tools score on average 6.71 (out of 100) points lower than non-users. While GenAI may offer benefits for learning and engagement, the way students actually use it correlates with diminished exam outcomes

Another study (Ju, 2023) found that:

After adjusting for background knowledge and demographic factors, complete reliance on AI for writing tasks led to a 25.1% reduction in accuracy. In contrast, AI-assisted reading resulted in a 12% decline. Ju (2023).

In this same study, Ju (2023) noted that while using AI to summarize texts improved both the quality and output of comprehension, those who had a ‘robust background in the reading topic and superior reading/writing skills’ benefited the most.

Ironically, the students who would benefit most from critical reflection on AI use are often the ones using it most heavily, demonstrating the importance of embedding AI literacy into the curriculum. For example, a recent Wall Street Journal article by Heidi Mitchell (Mitchell, 2025) cites a study showing that the “less you know about AI, the more you are likely to use it” and describing AI as seemingly “magical to those with low AI literacy”.

Finally, Kosmyna et al. (2025), testing how LLM usage affects cognitive processes and neural engagement in essay writing, assembled groups of LLM users, search engine users, and those without these tools (dubbed “brain-only” users). The authors recorded weaker performance over time in students with AI assistance, a lower sense of ownership of their work along with an inability to recall it, and even seemingly reduced neural connectivity in LLM users compared to the brain-only group, which scored better on all of the above.

The takeaways from these studies are that unstructured AI use acts as a shortcut that erodes retention, and that while AI assistance can be beneficial, outright replacement of thinking with it is harmful. In other words, AI amplifies existing competence but rarely builds it from scratch.

Undetected

Many people believe themselves to be fully capable of detecting AI usage:

Most of the writing professors I spoke to told me that it’s abundantly clear when their students use AI. Sometimes there’s a smoothness to the language, a flattened syntax; other times, it’s clumsy and mechanical. The arguments are too evenhanded — counterpoints tend to be presented just as rigorously as the paper’s central thesis. Words like multifaceted and context pop up more than they might normally. On occasion, the evidence is more obvious, as when last year a teacher reported reading a paper that opened with “As an AI, I have been programmed …” Usually, though, the evidence is more subtle, which makes nailing an AI plagiarist harder than identifying the deed. (Walsh, 2025).

In the same NY Mag article, however, Walsh (2025) cites another study, showing that it might not be as clear who is using AI and who is not (emphasis added):

[…] while professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent.

The two quotes are not contradictory; they describe different layers of the same phenomenon. Teachers feel they can spot AI because memorable extremes stick in their minds, yet systematic testing proves that intuition alone misses the overwhelming majority of AI‑generated work. This should not be surprising, though, as most faculty have never been taught systematic ways to audit AI‑generated text (e.g., checking provenance metadata, probing for factual inconsistencies, or using stylometric analysis). Nor do most people, let alone faculty grading hundreds of papers per week, have the time to audit every student. Without a shared, college-wide rubric of sorts, detection remains an ad‑hoc, intuition‑driven activity.

Faulty detection risks causing undue stress to students, and it can foster a climate of mistrust by assuming that AI use is constant or inherently dishonest rather than an occasional tool in the learning process. Even with a rubric, instructors must weigh practical caveats: large-enrollment courses cannot sustain intensive auditing, some students may resist AI-required tasks, and disparities in access to tools raise equity concerns. For such approaches to work, they must be lightweight, flexible, and clearly framed as supporting learning rather than policing it.

This nuance is especially important when considering how widespread AI adoption has been. Walsh (2025) observed that “just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.” While this figure might seem to justify the use of AI detectors, it could simply reflect the novelty of the tool at the time rather than widespread intent to circumvent learning. In other words, high usage does not automatically equal cheating, showing the importance of measured, thoughtful approaches to AI in education rather than reactionary ones.

What to do…?

The main issue here is not that AI is magically writing better essays than humans can muster; it is that students are slipping past the very moments where they would normally grapple with concepts, evaluate evidence, and argue a position. Many institutions are now taking a proactive role rather than a reactive one, and I want to offer one such suggestion going forward.

Embracing the situation: The reflective AI honor log

It is a fact that large language models have become ubiquitous. They are embedded in web browsers, word processors, and even mobile keyboards. Trying to ban them outright creates a cat‑and‑mouse game; it also sends the message that the classroom is out of sync with the outside world.

Instead of fighting against a technology that is already embedded in our lives, invite students to declare when they use it and to reflect on what they learned from that interaction.

For this post, I recommend using an “AI Honor-Log Document” and embedding it deeply into courses, with the goal of increasing AI literacy.

What is it?

As assignments vary across departments and even within courses, a one-size-fits-all approach is unlikely to be effective. To support thoughtful AI use without creating extra work for students, faculty could select an approach that best aligns with their course design:

  1. Built-in reflection: Students note when and how they used AI, paired with brief reflections integrated into their normal workflow.
  2. Optional, just-in-time logging: Students quickly log AI use and jot a short note only when it feels helpful, requiring minimal time.
  3. Embedded in assignments: Reflection is incorporated directly into the work, so students engage with it as part of the regular writing or research process.
  4. Low-effort annotations: Students add brief notes alongside tasks they are already completing, making reflection simple and natural.

These options aim to cultivate critical thinking around AI without imposing additional burdens or creating the perception of punishment, particularly for students who may not be using AI at all.

AI literacy is a massive topic, so let’s only address a few things here: 

  • Mechanics Awareness: Ability to explain the model architecture, training data, limits, and known biases.
  • Critical Evaluation: Requiring fact-checking, citation retrieval, and bias spotting.
  • Orchestration Skills: Understanding how to craft precise prompts, edit outputs, and add original analysis.

Note: you might want to go further and incorporate these into an assignment-level learning outcome. Something like “Identifies at least two potential biases in AI-generated text” could be enough on a rubric to gather interesting student responses.

Log layout example

Column headings: # | Assignment/Activity | Date | AI Model | Exact Prompt | AI Output | What You Changed/Added | Why You Edited | Confidence (1-5) | Link to Final Submission

Example entry:

  • #: 1
  • Assignment/Activity: Essay #2 – Digital-privacy law
  • Date: 2025-09-14
  • AI Model: GPT-5
  • Exact Prompt: “Write a 250-word overview of GDPR’s extraterritorial reach and give two recent cases”
  • AI Output: [pastes AI text]
  • What You Changed/Added: Added citation to 2023 policy ruling; re-phrased a vague sentence.
  • Why You Edited: AI omitted the latest case; needed up-to-date reference
  • Confidence (1-5): 4
  • Link to Final Submission: https://canvas.oregonstate.edu/…
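
If you would rather keep the log as a shared spreadsheet or CSV export, the same columns translate directly, and a few lines of code can help with the periodic-feedback step described below. The sketch that follows is purely illustrative: the file name, column names, and summary logic are my own assumptions, not part of any official template.

```python
# Minimal sketch: summarizing AI honor-log entries exported as a CSV.
# The file name and column names below are hypothetical; adapt them to
# whatever template you actually distribute to students.
import csv
from collections import Counter

def summarize(path="ai_honor_log.csv"):
    models = Counter()
    edit_reasons = []
    confidences = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            models[row.get("ai_model", "unknown")] += 1
            edit_reasons.append(row.get("why_edited", ""))
            try:
                confidences.append(int(row["confidence_1_to_5"]))
            except (KeyError, ValueError):
                pass  # skip blank or malformed confidence ratings

    print("Models used:", dict(models))
    if confidences:
        print("Average self-reported confidence:",
              round(sum(confidences) / len(confidences), 2))
    # Skim the stated reasons for editing to find themes (hallucinated
    # citations, outdated facts, tone) worth raising in a "spot the error"
    # check-in.
    for reason in edit_reasons[:10]:
        print("-", reason)

if __name__ == "__main__":
    summarize()
```

A spreadsheet filter on the “Why You Edited” column accomplishes much the same thing; the point is to surface recurring error types to discuss with the class, not to grade the log itself.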

Potential deployment tasks (and things to look out for)

It need not take much time to model this to students or deploy it in your course. That said, there are practical and pedagogical limits depending on course size, discipline, and student attitudes toward AI. The notes below highlight possible issues and ways to adjust.

  1. Introduce the three AI literacy areas above (either in text form or as a video, if you have more time and want to make a multimedia item).
    Caveat: Some students may be skeptical of AI-required work.
    Solution: Frame this as a reflection skill that can also be done without AI, offering an alternative if needed.
  2. Distribute the template to students: post a Google-Sheet link (or similar) in the LMS.
    Caveat: Students with limited internet access or comfort with spreadsheets may struggle.
    Solution: Provide a simple Word/PDF version or allow handwritten reflections as a backup.
  3. Model the process in the first week: Submit a sample log entry like the one above but related to your class and required assignment reflection type.
    Caveat: In large-enrollment courses, individualized modeling is difficult.
    Solution: Share one well-designed example for the whole class, or record a short screencast that students can revisit.
  4. Require the link with each AI-assisted assignment (or as and when you believe AI will be used).
    Caveat: Students may feel burdened by repeated uploads or object to mandatory AI use.
    Solution: Keep the log lightweight (one or two lines per assignment) and permit opt-outs where students reflect without AI.
  5. Provide periodic feedback: scan the logs, highlight common hallucinations or errors reported by students, and give a “spot the error” mini lecture, check-in, or office hour.
    Caveat: In large classes, it’s not realistic to read every log closely.
    Solution: Sample a subset of entries for themes, then share aggregated insights with the whole class during office hours, or post in weekly announcements or discussion boards designed for this kind of two-way feedback.
  6. (Optional) Student sharing session in a discussion board: allow volunteers or require class to submit sanitized prompts (i.e., any personal data removed) and edits for peer learning.
    Caveat: Privacy concerns or reluctance to share work may arise.
    Solution: Keep sharing optional, encourage anonymization, and provide opt-outs to respect comfort levels.

Important considerations when planning AI-tasks

Faculty should be aware of several practical and pedagogical considerations when implementing AI-reflective logs. Large-enrollment courses may make detailed feedback or close monitoring of every log infeasible, requiring sampling or aggregated feedback. Some students may object to AI-required assignments for ethical, accessibility, or personal reasons, so alternatives should be available (at a minimum, the option to declare that no AI was used). Unequal access to AI tools or internet connectivity can create equity concerns, and privacy issues may arise when students share prompts or work publicly. To address these challenges, any approach should remain lightweight, flexible, and clearly framed as a tool to support learning rather than as a policing mechanism.

Conclusion

While some students may feel tempted to rely on AI, passing an assignment in this manner can also pass over the critical thinking, analytical reasoning, and reflective judgment that go beyond content mastery to true intellectual growth. Incorporating a reflective AI-usage log based not on an assumption of cheating, but on the ubiquitous availability of this now-common tool, reintroduces one of the evidence-based steps for learning and mastery that has fallen out of favor in the last two to three years. By encouraging students to pause, articulate, and evaluate their process, reflection helps them internalize knowledge, spot errors, and build the judgment skills that AI alone cannot provide.

Footnotes

  1. https://www.reddit.com/r/ArtificialInteligence/comments/1ewh2ji/i_remember_when/
  2. https://www.uwa.edu.au/news/article/2025/august/generative-ai-is-not-a-calculator-for-words-5-reasons-why-this-idea-is-misleading
  3. https://medium.com/%40josh_tucker/why-not-using-ai-is-like-refusing-to-use-a-calculator-in-a-maths-test-093b860d7b45

References

Fu, Y. and Hiniker, A. (2025). Supporting Students’ Reading and Cognition with AI. In Proceedings of Workshop on Tools for Thought (CHI ’25 Workshop on Tools for Thought). ACM, New York, NY, USA, 5 pages. https://arxiv.org/pdf/2504.13900v1

Ju, Q. (2023). Experimental Evidence on Negative Impact of Generative AI on Scientific Learning Outcomes. https://doi.org/10.48550/arXiv.2311.05629

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. https://arxiv.org/abs/2506.08872

Mitchell, H. (2025). The Less You Know About AI, the More You Are Likely to Use It. Wall Street Journal. Accessed September 3, 2025: https://www.wsj.com/tech/ai/ai-adoption-study-7219d0a1

Wecks, J. O., Voshaar, J., Plate, B. J., & Zimmermann, J. (2024). Generative AI Usage and Exam Performance. https://doi.org/10.48550/arXiv.2404.19699

Walsh, J. (2025, May 7). Everyone Is Cheating Their Way Through College: ChatGPT Has Unraveled the Entire Academic Project. Intelligencer. https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html

Too often, online courses struggle with communication that feels slow and one-sided. Students swap ideas in discussion boards, but collaboration stops there. Integrating Microsoft Teams into Canvas changes that. It brings real-time conversation, file sharing, and group spaces directly into the LMS, helping students connect more naturally and giving instructors new ways to guide and engage. This integration not only boosts collaboration, it also provides more opportunities for Regular and Substantive Interaction (RSI) between students and instructors—structured, faculty-initiated engagement that is required in online courses under federal guidelines.


Seamless Collaboration Across Projects and Courses

Integrating Teams into Canvas ensures that group work and peer review move beyond static discussion boards into dynamic, asynchronous interactions. Students can download the app on their mobile devices, which allows them to have more consistent and real-time access to the comments and work shared by their peers. Teams allows for:

  • Dedicated channels for individual projects or study groups
  • Tagging teammates so each member of a channel knows when they are needed
  • File sharing by both team members and instructors

This unified workspace helps teams stay organized, accountable, and focused on shared learning outcomes. Teams has both course-level and group-level integrations. This allows instructors flexibility in how they would like to use the app. These different levels allow Teams to be used for the entire course or just for specific group projects (or both). Regardless of the level of integration and use, instructors can see how students are collaborating and completing a task or group assignment. This gives them a space to quickly jump in if students are struggling or off track. 


Enhanced Communication and Community Building

Canvas announcements and emails can feel one-sided; within Teams, conversations become two-way forums where ideas flow instantly. Notifications appear directly inside Canvas (and on mobile devices if students and instructors allow), ensuring students never miss critical updates. Meanwhile, professors can host Q&A chats without scheduling hurdles by simply creating a channel in Teams. This fluid interaction nurtures a vibrant learning community, fostering peer support and timely faculty feedback. Additionally, it helps instructors meet their Regular and Substantive Interaction goals, nurtures a collaborative online community, and directly addresses the Ecampus Essentials standard requiring all three forms of interaction and engagement (student–student, student–instructor, student–content) in a course.


Easy Oversight for Seeking Solutions Courses

One of the new CoreEd (Core Education, OSU’s state-of-the-art, 21st-century-focused general education program) categories being implemented this year is Seeking Solutions. These courses require students to work in interdisciplinary groups and “wrestle with complex, multifaceted problems, and evaluate potential solutions from multiple points of view” (from the Seeking Solutions OSU page). This means students must complete group assignments and projects while instructors mentor and monitor each group individually.

With a fully asynchronous OSU Ecampus course, this can be difficult. One way this can be accomplished is through Teams channels. If each group has its own Teams channel and the instructor requires that they use Teams to communicate and collaborate for their project, then instructors can use this space to share resources, mentor the students, and facilitate hard conversations. 


Conclusion

Integrating Microsoft Teams into Canvas reshapes the university experience by uniting collaboration and communication within a single resource. Students benefit from real-time teamwork features and greater access to their instructors, while professors enjoy streamlined group work oversight and the ability to intervene whenever necessary. Adopting this integrated approach not only enhances the quality of instruction but also fosters a more engaged and connected learning community. For more information on how to integrate Teams into your Canvas site, read the Canvas: Create linked Teams from Canvas page. 

graphic image of the five steps in the feedback cycle

Giving and receiving feedback effectively is a key skill we all develop as we grow, and it helps us reflect on our performance, guide our future behavior, and fine-tune our practices. Feedback continues to be vital later in life as we move into work and careers and receive it from the people we work for and with. For teachers, the most important aspect of the job is giving feedback that shows students how to improve and meet the learning outcomes needed to pass our courses. We soon learn, however, that giving feedback can be difficult for several reasons. Despite it being one of our primary job duties as educators, we may have received little training on how to give feedback or what effective feedback looks like. We also realize how time-consuming it can be to provide the detailed feedback students need to improve. To make matters worse, we may find that students don’t do much with the feedback we spend so much time providing. Additionally, students may not respond well to feedback: they might become defensive, feel misunderstood, or worse, ignore the feedback altogether. This can set us up for an ineffective feedback process, which can be frustrating for both sides.

I taught ESL to international students from around the world for more than 10 years and have given a fair amount of feedback. Over many cycles, I developed a detailed and systematic approach for providing feedback that looked like this.

Gaps in this cycle can lead to frustration from both sides. Each step in the cycle is essential, so we’ll look at each in greater depth in this blog series. Today, we will focus on starting strong by preparing students to receive feedback, a crucial beginning that sets the stage for a healthy cycle.

Step 1: Prepare Students to Receive Feedback

An effective feedback cycle starts before the feedback is given by laying careful groundwork. The first and often-overlooked step in the cycle is preparing students to receive feedback, which takes planned, ongoing work. Various factors may influence whether students welcome feedback, including their self-confidence going into your course, their self-concept and mindset as learners, their working memory and learning capacity, how they view your feedback, and whether they feel they can trust you. Outside factors such as motivation and working memory are often beyond our control, but creating an atmosphere of trust and safety in the classroom can positively support students. Student confidence and mindset are areas in which teachers can play a crucial supporting role.

Researcher Carol Dweck coined the term “growth mindset” after noticing that some students showed remarkable resilience when faced with hardship or failure, while others became easily frustrated and angry and tended to give up on tasks. She developed her theory of growth versus fixed mindsets to explain and expound on the differences between these two responses. The chart below shows some of the features of each extreme, and we can easily see how a fixed mindset can limit students’ resilience and persistence when faced with difficulties.

graphic of brain with growth mindset hallmarks on the left and fixed mindset ideas on the right.

Mindset directly impacts how students receive feedback. Research has shown that students who believe that their intelligence and abilities can be developed through hard work and dedication are more likely to put in the effort and persist through difficult tasks, while those who see intelligence as a fixed, unchangeable quality are more likely to see feedback as criticism and give up. 

Developing a growth mindset can have transformative results for students, especially if they have grown up in a particularly fixed mindset environment. People with a growth mindset are more likely to seek out feedback and use it to improve their performance, while those with a fixed mindset may be more likely to ignore feedback or become defensive when receiving it. Those who receive praise for their effort and hard work, rather than just their innate abilities, are more likely to develop a growth mindset. This is because they come to see themselves as capable of improving through their own efforts, rather than just relying on their natural talents. A growth mindset also helps students learn to deal with failure and reframe it positively. It can be very difficult to receive a critique without tying our performance to our identity. Students must  have some level of assurance that they will be safe taking risks and trying, without fear of being punished for failing. 

Additionally, our own mindset affects how we view student effort, and we often, purposefully or not, convey those messages to students. Teachers’ growth mindsets show a positive, statistically significant association with the development of their students’ growth mindsets. Our own mindset also affects the type of feedback we are likely to provide, the amount of time we spend on giving feedback, and the way we view the abilities of our students.

These findings suggest that taking the time to learn about and foster a growth mindset in ourselves and our students benefits everyone. Teachers need to address the value of feedback early in the learning process and repeatedly throughout the term or year, and couching our messaging to students in positive, growth-oriented language can bolster the feedback process and start students off on the right foot, prepared to improve.

Here are some concrete steps you can take to improve how your students will receive feedback:

  • Model a growth mindset through language and actions 
  • Include growth-oriented statements in early messaging
  • Provide resources for students to learn more about growth vs. fixed mindsets
  • Discuss the value of feedback and incorporate it into lessons
  • Create an atmosphere of trust and safety that helps students feel comfortable trying new things 
  • Teach that feedback is NOT a judgment of the person, but rather a judgment on the product or process
  • Ensure the feedback we give focuses on the product or process rather than the individual
  • Praise effort rather than intelligence
  • Make it clear that failure is part of learning and that feedback helps improve performance
  • Provide students with tools and strategies to plan, monitor, and evaluate their learning 

Resources for learning more about growth mindset and how it relates to feedback:


Stay tuned for part 2, covering the remaining steps in the feedback cycle. 

Background

“In the Winter Term 2024, the Ecampus Research Unit conducted a survey study of 669 students who had taken online courses at OSU. The 40-item survey was designed to assess students’ knowledge and use of generative AI tools, as well as their perceptions of their use in their courses and careers. A full report of this study is available on the Ecampus Research Unit website. Based on the results of this study, several recommendations were developed to guide decision making about generative AI tools in online courses.”

Dello Stritto, Underhill, & Aguiar (2024).

This recent study highlighted three key recommendations for faculty seeking to integrate generative AI into their courses effectively:

  • Recommendation 1
    • Write a course policy about generative AI that is clearly explained.
  • Recommendation 2
    • Consider a wide range of student emotions and concerns when integrating generative AI in your online courses.
  • Recommendation 3
    • Educate students on generative AI tools.

Applying data to design

To apply these recommendations in practice, we can reorganize them into instructional design categories that foster AI resiliency in course design: Course Learning Outcomes, Learner Profiles, Learning Materials, Activities and Assessments, and Course Policies. These categories offer a comprehensive framework for integrating AI while addressing students’ concerns and enhancing learning experiences.

Course Policies: Establish Clear Guidelines for AI Usage

Reflecting Recommendation 1, developing a clear, transparent policy on AI usage is key. Faculty should articulate when and how students can use AI tools, providing specific examples of ethical use. By defining these expectations early in the course, instructors help students understand the role AI can play in their learning process, promoting academic integrity. 

Learner Profiles: Address Emotional and Academic Concerns

In line with Recommendation 2, it is essential to consider students’ diverse reactions to AI—ranging from excitement to anxiety—when designing a course. This is where understanding Learner Profiles becomes critical. 

Learning Materials and Activities: Ensure Relevance and Adaptability with AI

Recommendation 3 emphasizes the importance of educating students about generative AI, which can be achieved through thoughtful integration into learning materials, activities, and assessments.

Course Learning Outcomes: Integrate AI with Intentional Learning Design

The integration of generative AI tools into course design necessitates an examination of their impact on student mastery of the Course Learning Outcomes. It is vital to ensure that students’ use of AI tools supplements and enhances the learning process rather than bypasses cognitive engagement.

With these four considerations in mind, we can now introduce a tool to help assess and improve course resilience against generative AI, while providing learners with clear policy decisions and explanations.

Introducing CART: Course AI Resiliency Tracker 

In response to the clear need for effective integration of generative AI in educational settings, a new tool has been developed (as part of a wider suite of artificial intelligence tools) to assist faculty in navigating this complex landscape. This tool is designed to support instructors in evaluating how generative AI could respond to their course learning outcomes by highlighting its current capability to address and complete those outcomes. It facilitates a detailed understanding of learner profiles to ensure that AI applications are relevant and accessible to all students. Additionally, the tool encourages faculty to reflect on the currency and relevance of their learning materials and to assess how AI might be incorporated into activities and assignments. By examining existing course policies on AI usage and offering actionable steps for course development, this resource aims to demystify generative AI for both educators and students, promoting a thoughtful and strategic approach to integrating AI, or to the decision to restrict it.

Getting Started

Upon accessing the landing page, you will be prompted to input your Course ID, after which you may proceed by selecting the “Start” button.

Course AI Resilience Tracker Tool Getting Started Page

Learning Outcomes

The first step in the tool involves a reflection on your Course Learning Outcomes (CLOs). At this stage, you will have the option to choose from a list of commonly used learning outcome verbs, organized by the general categories of Bloom’s Taxonomy. Note that there is currently a limit of five CLOs at one time, and faculty whose verbs are absent from this list are encouraged to select the verbs most like those in their own CLOs to get feedback that feels most transferable.

Course AI Resilience Tracker Learning Outcomes Page

After selecting the appropriate verbs that align with your outcomes, click on the “Test Resiliency” button. This will display feedback on how generative AI may already be able to meet expectations for common tasks associated with those action verbs.

Your Learners

Following the assessment of CLOs, the next step encourages you to consider your learners. In this section, you are invited to input relevant details about your students, including their backgrounds, career aspirations, prior knowledge, or any other contextual information that could inform your generative AI course policies. We are aware that this question might feel challenging, especially for faculty who teach all kinds of learners as part of a general education course. In this case, consider this as a more general introduction to the wide variety of learner profiles that may take the course, and how generative AI may be used from their perspective.

Your responses here, as with all inputs in the tool, will be temporarily stored and displayed on the Summary Page for your future reference.

Course AI Resilience Tracker Your Learners Page

Learning Materials

Next, the tool asks you to evaluate the relevance and adaptability of your learning materials. You may choose from the pre-set options provided, or alternatively, you can select “Other” to add customized choices based on your specific course materials.

Course AI Resilience Tracker Learning Materials Page

Activities and Assessments

Next, you will be prompted to reflect on your course activities and assessments. This section includes three key questions. Two of the questions are straightforward yes-or-no inquiries, while the third invites you to select one or more methods that you currently employ to promote academic integrity in your assessments. Including this information alongside activities and assessments helps learners understand the expected generative AI usage and why those choices were made, and it strengthens academic integrity across the entire course.

Course AI Resilience Tracker Activities and Assessments Page

Course Policies

You will then be prompted to consider an important question: does your syllabus currently include a policy on generative AI? This reflection is crucial for ensuring transparency and consistency in how AI is addressed throughout your course design. After choosing one of the answers, you will be able to select from some key elements to include in your AI usage policy.

Course AI Resilience Tracker Course Policies Page

Next Steps

Finally, the tool concludes by prompting you to consider the next steps in your course development, offering guidance on how to proceed with integrating generative AI effectively. Each choice offers different recommendations as automatic feedback, and you are encouraged to read through them all before moving on to the final summary.

Course AI Resilience Tracker Next Steps Page

Summary Page

At the conclusion of the tool, you will be directed to a Summary Page that consolidates all your previous inputs, along with the guidance and recommendations provided throughout the process. This comprehensive summary can be printed or saved as a PDF for future reference and review.

The benefits of using the tool

Recommendation 1: A clearly explained course policy

The new tool supports this recommendation by guiding instructors to design course policies that offer clear instructions to learners on what is allowed and disallowed and, most importantly, to give rationales behind these policy decisions.

Recommendation 2: Considering learner profiles

The tool helps instructors map these profiles to ensure that generative AI is integrated in ways that are accessible, equitable, and aligned with the emotional and cognitive needs of different students. By anticipating student concerns, instructors can provide thoughtful guidance on how AI will or will not be used in various course activities and assessments.

Recommendation 3: Ensure Relevance and Adaptability with AI

The tool helps instructors evaluate the relevance and adaptability of their current materials by offering pre-set options or the ability to add customized choices. This process ensures that course content remains up-to-date and flexible enough to incorporate generative AI effectively or, alternatively, provides avenues to secure assessments against AI-generated content.

Course Learning Outcomes: Integrate AI with Intentional Learning Design

The tool supports this by guiding instructors through a reflection on their CLOs, offering a selection of commonly used learning outcome verbs categorized by Bloom’s Taxonomy. It also helps educators recognize the extent to which generative AI can currently accomplish many of these learning outcomes, providing valuable insights into the specific areas where AI might enhance or support course goals. The purpose of this is to ensure that AI integration choices are not just incidental, but strategically aligned with fostering critical thinking, creativity, and problem-solving skills within the broader context of your course objectives.

Conclusion

In response to the growing need for effective AI integration, this new tool helps faculty navigate the complexities of incorporating generative AI into course design. By addressing Course Learning Outcomes, Learner Profiles, Learning Materials, Activities and Assessments, and Course Policies, the tool promotes a strategic approach that aims to demystify AI for both educators and students. With thoughtful integration, well-designed generative AI policies can enhance learning experiences, help prepare students for the future, teach learners to avoid potential pitfalls, and maintain the academic integrity of online courses.

License and Attribution

License

Course AI Resilience Tracker Tool, created by Oregon State University Ecampus, is licensed under Creative Commons Attribution-NonCommercial 4.0 International

Text Content and Guidance

Ashlee Foster, Dana Simionescu, Philip Chambers, Katherine McAlvage, and Cub Kahn

HTML/JavaScript Development

Philip Chambers

References

Dello Stritto, M. E., Underhill G. R., & Aguiar, N. R. (2024). Online Students’ Perceptions of Generative AI. Oregon State University Ecampus Research Unit. https://ecampus.oregonstate.edu/research/publications/

Helpful Links

I’d like to share a recent experience highlighting the crucial role of collecting and using feedback to enhance our online course materials. As faculty course developers and instructional designers, we understand the importance of well-designed courses. However, even minor errors can diminish the quality of an otherwise outstanding online course.

This layered paper art depicts the Heceta Head Lighthouse amid colorful hills, a flowing river, and tall green trees. Whimsical clouds and birds add depth, creating a vibrant and detailed handcrafted scene.
A lighthouse on the Oregon coast, where student feedback and technological tools act as the guiding light. Image generated with Midjourney.

A Student’s Perspective

Recently, a piece of feedback submitted by an online student enrolled in a course I had helped develop was forwarded to me.

He praised the overall design of the courses and the instructors’ responsiveness, but he pointed out some typographic and grammatical errors that caused confusion. He mentioned issues like quiz answers not matching the questions and contradictory examples.

What stood out to me was his statement:

“These courses are well-designed and enjoyable. Their instructors are great. They deserve written material to match.”

Proactive Steps for Quality Improvement

This feedback got me thinking about how we can proactively address such concerns and ensure our course materials meet the high standards our students deserve. Here are a few ideas that might help:

Implement a Feedback Mechanism

Incentivize students to hunt for flaws. Reward sharp eyes for spotting typos and grammar slips. Bonus points could spark enthusiasm, turning proofreading into a game of linguistic detective work. For example:

  • Weekly Surveys: Add a question to the weekly surveys asking students to report any errors they encounter, specifying the location (e.g., page number, section, or assignment).
    • “Did you encounter any typographic or grammatical errors in the course materials this week? If so, please describe them here, including the specific location (e.g., page number, section, or assignment).”
  • Assignment Feedback: Include a text-field option for students to report errors alongside their file uploads in each assignment submission.

Utilize Technology Tools

Consider using technology tools to streamline the review process and help identify typographic, grammatical, or factual errors.

AI tools

The latest advanced AI tools can assist in identifying grammatical errors, suggesting more precise phrasing, and improving overall readability. They can also highlight potential inconsistencies or areas needing clarification, ensuring the materials are more accessible to students. Beyond that, they can help format documents consistently, create summary points for complex topics, and even generate quiz questions based on the content.

(Oregon State University employees and currently enrolled students have access to the Data Protected version of Copilot. By logging in with their OSU credentials, users can use Copilot with commercial data protection, ensuring their conversations are secure and that Microsoft cannot access any customer data.)

Many powerful AI tools exist, but always verify their information for accuracy. Use them as a helper, not your only guide. AI tools complement human judgment but can’t replace it. Your oversight is essential: it ensures that AI-suggested changes align with the learning goals, and it preserves your voice and expertise.

Tools for content help

Some tools can be used to target different areas of content improvement:

  • Grammar and Style Checkers:
  • Fact-Checking Tools:
    • Google Scholar: This can be used to verify academic sources and find citations and references.
    • Snopes.com: Checks common misconceptions and urban legends
    • FactCheck.org: Verifies political claims and statements
  • Language Translation Tools:
    • Google Translate: Offers quick translations for various languages
    • DeepL: Provides accurate translations for multi-language content
  • Text-to-Speech and Proofreading:
  • Collaborative Editing Platforms:
    • Google Docs: Allows real-time collaboration and suggesting mode
    • Microsoft Word (with Track Changes): Enables collaborative editing

Request Targeted Assistance

If specific content requires a closer review, ask for help from other SMEs, your instructional designer, colleagues, or even students. Collaboration can provide fresh perspectives and help catch errors that might have been overlooked.

Encourage Open Communication

Foster an environment where students feel comfortable reporting errors and providing feedback. Make it clear that their input is valued and will be used to improve the course.

Embrace Constructive Criticism

It’s natural to feel defensive when receiving critical feedback (I always do!), but view it as an opportunity for potential improvement. By addressing these concerns, you can enhance the quality of your course materials and ultimately improve your students’ learning experience.

This post is adapted from a panel talk for AI Week, Empowering OSU: Stories of Harnessing Generative AI for Impact in Staff and Faculty Work

This past spring marked one year in my role as an instructional designer for Ecampus. Like many of our readers, I started conversing with AI in the early months of 2023, following OpenAI’s rollout of ChatGPT. Or as one colleague noted in recapping news of the past year, “generative AI happened.” Later, I wrote a couple of posts for this blog on AI and media literacy. A few things became clear from this work. Perhaps most significantly, in the words of research professor Ethan Mollick: “You will need to check it all.”

As the range of courses I support began to expand, so did my everyday use of LLM-powered tools. Here are some of my prompts to ChatGPT from last year, edited for clarity:

  • What is the total listening time of the Phish album Sigma Oasis?
    • Answer: 66 minutes and 57 seconds
  • How many lines are in the following list of special education acronyms (ranging from Section 504 – the Rehabilitation Act – to TBI – Traumatic Brain Injury)?
    • Answer: 27 lines
  • Where is the ancient city of Carthage today?
    • Answer: Today, Carthage is an archaeological site and historical attraction in the suburbs of the Tunisian capital, Tunis.
  • What is the name of the Roman equivalent of the Greek god Zeus?
    • Answer: Jupiter, king of the gods and the god of the sky and thunder
  • What’s the difference between colors D73F09 and DC4405?
    • Answer: In terms of appearance, … 09 will likely have a slightly darker, more orange-red hue compared to … 05, which might appear brighter. (Readers might also know these hues as variations on Beaver Orange.)

And almost every day:

  • Please create an (APA or MLA) citation of the following …

The answers were often on point but always in need of fact checking or another iteration of the prompt. Early LLMs were infamously prone to hallucinations. Factual errors and tendencies toward bias are still not uncommon.
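
That habit of verification sometimes needs nothing more than a few lines of code. As a purely illustrative example of my own (not part of the original prompts), the color comparison above can be checked directly:

```python
# Quick check of the hex-color comparison above: parse the two codes and
# compare their RGB components and approximate perceived brightness.
def rgb(hex_code):
    return tuple(int(hex_code[i:i + 2], 16) for i in (0, 2, 4))

def brightness(r, g, b):
    # Rough perceptual brightness using ITU-R BT.601 luma weights.
    return 0.299 * r + 0.587 * g + 0.114 * b

for code in ("D73F09", "DC4405"):
    r, g, b = rgb(code)
    print(code, (r, g, b), round(brightness(r, g, b), 1))

# Prints:
# D73F09 (215, 63, 9) 102.3
# DC4405 (220, 68, 5) 106.3
# i.e., D73F09 is indeed slightly darker, as the chatbot suggested.
```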

As you can sense from my early prompts, I was mostly using AI as either a kind of smart calculator or an uber-encyclopedia. But in recent months, my colleagues and I here at Course Development and Training (CDT)—along with other units in the Division of Educational Ventures (DEV)—have been using AI in more creative and collaborative ways. And that’s where I want to focus this post.

The Partnership

First, some context for the work we do at DEV. Online course development is both a journey and a partnership between the instructor or faculty member and any number of support staff, from training to multimedia and beyond. Anchoring this partnership is the instructor’s working relationship with the instructional designer—an expert in online pedagogy and educational technology, but also a creative partner in developing the online or hybrid course.

Infographic showing the online course development process, from set up, to terms 1-2 in collaboration with the instructional designer, to launch and refresh.
Fig. 1. Collaboration anchors the story of online course development at OSU (credit: Ecampus).

Ecampus now offers more than 1,800 courses in more than 100 subjects. Every course results from a custom build that must maintain our strong reputation for quality (see fig. 1). This post is focused on that big circle in the middle—collaboration with the instructional designer. That’s where I see incredible potential for support or “augmentation” from generative AI tools.

As Yong Bakos, a senior instructor with the College of Engineering, recently reminded Faculty Forum, modern forms of this technology have been around since the 1940s, starting with the influence of programmable computers on World War II. But now, he added—in challenging faculty using AI to figure out rapid, personalized feedback for learners—”we speak the same language.”

Through continued partnership, how do we make such processes more nimble, more efficient? What does augmentation and collaboration look like when we add tools like Copilot or a custom GPT? Many instructional designers have been wrestling with these questions as of late.

“Human Guided, but AI Assisted”

Here are a few answers from educators Wesley Kinsey and Page Durham at Germanna Community College in Virginia (see fig. 2). Generative AI—also known as GAI—is a powerful tool, says Kinsey. “But the real magic happens when it is paired with a framework that ensures course quality.”

Fig. 2. From a recent QM webinar on “unleashing” generative AI (CC BY-NC-ND).

Take this line of inquiry a little farther, and one starts to wonder: How might educators track or evaluate progress toward such use cases?

Funneling Toward Augmentation

As a thought experiment, I offer the following criteria and inventory—a kind of self-assessment of my own “human guided” journey through course development with generative AI (see fig. 3).

Criteria for Augmenting Development with Generative AI

ESTABLISHED – Regular, refined practice in course development
EMERGING – Irregular and/or unrefined practice, could be improved
ENVISION – Under consideration or imagined, not yet practiced

Faculty with experience teaching online may find my suggested criteria familiar; “established, emerging, envision” is adapted from an Ecampus checklist used in course redevelopment.

Funnel-shaped infographic with five augmentations: (1) From set up to intake; (2) Course content; (3) Suggested revisions; (4) Discussion, planning, and review; (5) Building and rebuilding
Fig. 3. Self-assessment of augmenting development with generative AI (CC BY-NC-SA).

Augmentation 1: From Set Up to Intake

Broadly speaking, I’m only starting to use chatbots in kicking off a course development—to capture a bulleted summary of an intake over Zoom, for example. Or with these kinds of level-setting prompts:

  • Remind me, what is linear regression analysis?
  • What fields are important to physical hydrology?
  • Explain to a college professor the migration of a social annotation learning tool from LTI 1.1 to 1.3.

Augmentation 2: Course Content

In my experience, instructors are only now beginning to envision how they might propose a course or develop its learning materials and activities with support from tools like Copilot—which is increasingly adept at helping us with this kind of iterative brainstorming work. The key here will be getting comfortable with practice, engaging in sustained conversations with defined parameters, often in scenarios that build on existing content. In recent practice with building assignments, I’m finding Claude 3 Sonnet helpful: it is more nuanced in its responses, you can upload brief documents at no cost, and you can revisit previous chats.

Screenshot of conversation with Copilot, starting with a request to create an MLA citation of a lecture by Liam Callanan at the Bread Loaf Writers' Conference
Fig. 4. From a “more precise” conversation on citation generation. Can you spot Copilot’s errors in applying MLA style?

Augmentation 3: Suggested Revisions

Once course content begins rolling in, I apply more established practices for augmentation. For building citations of learning materials, I’m using Copilot’s “more precise” mode for its more robust abilities to read the open web and draw on various style guides (see fig. 4). With activities, often the germ of an idea for interaction needs enlargement—a statement of purpose or more detailed instructions. Here are a few more examples from working with the School of Psychological Science, with prompts edited for brevity:

  • What would be the purpose of practicing rebus puzzles in a lower division course on general psychology?
  • Please analyze the content of the following exam study guide, excerpted in HTML. Then, suggest a two-sentence statement of purpose that should replace the phrase lorem ipsum.
  • How should college students think about exploring Rorschach tests with inkblots? Please suggest two prompts for reflection (see fig. 5).
Screenshot of Week 6 - Reflection Activity - Rorschach Inkblot Test, including a warning about the limitations of Rorschach tests and prompts for reflection
Fig. 5. From an augmented reflection activity in PSY 202H, General Psychology (credit: Juan Hu).

Augmentation 4: Discussion, Planning & Review

As with course planning, I’m not quite there yet with using generative AI to shape module templates and collect preferred settings for the building I do in Canvas. But by next year—armed perhaps with a desktop license for Copilot—I can imagine using AI to offer instructors custom templates or prompts to accelerate the design process. One more note on annotating augmentation—it’s incredibly important to let my faculty partners know—with consistent labeling—when I’m suggesting course content adapted from a conversation with AI. Most often, I’m not the subject matter expert—they are. That rule of thumb from Ethan Mollick still holds true: “You will need to check it all.”

Augmentation 5: Building & Rebuilding—More Efficiently

Finally, I look forward to exploring opportunities for more efficiently writing and revising the code behind everything we do with support from generative AI. Just imagine if the designer or instructor could ask a bot to suggest ways to strengthen module learning outcomes or update a task list, right there in Canvas.
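
To make that idea a little more concrete, here is a rough sketch of what a first step could look like today, outside of Canvas itself. It uses the Canvas REST API endpoint for listing a course’s modules; the base URL, course ID, token, and the notion of handing the result to a chatbot for outcome suggestions are all illustrative assumptions on my part, not a description of an existing workflow.

```python
# Illustrative only: pull a course's module names from the Canvas REST API
# and turn them into a prompt you could hand to a chatbot for suggestions
# on strengthening module-level outcomes. The base URL, course ID, and
# token below are placeholders, not working values.
import requests

CANVAS_BASE = "https://canvas.oregonstate.edu"  # your institution's Canvas URL
COURSE_ID = "12345"                             # placeholder course ID
TOKEN = "YOUR_CANVAS_API_TOKEN"                 # generated under Canvas account settings

def list_module_names():
    url = f"{CANVAS_BASE}/api/v1/courses/{COURSE_ID}/modules"
    resp = requests.get(url,
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        params={"per_page": 50})
    resp.raise_for_status()
    return [module["name"] for module in resp.json()]

if __name__ == "__main__":
    names = list_module_names()
    prompt = ("Here are the module titles for an online course:\n- "
              + "\n- ".join(names)
              + "\nSuggest one measurable learning outcome for each module.")
    print(prompt)  # paste into your preferred (data-protected) chatbot
```

Whatever a sketch like this (or a future in-Canvas assistant) produces would still be subject to the same rule of thumb: you will need to check it all.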

Your Turn

With the above inventory in mind, let’s pause to reflect. To what extent are you comfortable using generative AI as a course developer? In what ways could this technology supplement new partnerships with instructional designers—or other colleagues involved in the discipline you teach? Together, how would you assess “augmentation” at each stage of the course development process?

Looking back on my own year of “human guidance with AI assistance,” I now turn more reflexively to AI for help with frontline design work—even as our team considers, for example, the ethical dimensions of asking chatbots to deliver custom graphics for illustrating weekly modules. In other stages, I’m still finding my footing in leveraging new tools, particularly during set up, refresh, and redesign. As we continue to partner with faculty, I remain open to navigating the evolving intersection of AI and course development.

(And now, for fun: Can you spot the augmentation? How much of that last sentence was crafted with support from a “creative” conversation with Copilot? Find the answer below.)

Resources, etc.

The following resources may be helpful in exploring generative AI tools, becoming more fluent with their applications, and considering their role in your teaching and learning practices.

This image is part of the Transformation Projects at the Ars Electronica Kepler's Garden at the JUK. The installation AI Truth Machine deals with the chances and challenges of finding truth through a machine.

All the buzz recently has been about Generative AI, and for good reason. These new tools are reshaping the way we learn and work. Within the many conversations about Artificial Intelligence in Higher Ed, a common thread has been appearing regarding the other AI: Academic Integrity. Creating and maintaining academic integrity in online courses is a crucial part of quality online education. It ensures that learners are held to ethical standards and encourages a fair, honest, and respectful learning environment. Here are some strategies to promote academic integrity and foster a culture of ethical behavior throughout your online courses, even in the age of generative AI.

Create an Academic Integrity Plan

Having a clear academic integrity plan is essential for any course. Create an instructor-only page within your course that details a clear strategy for maintaining academic integrity. This plan might include a schedule for revising exam question banks to prevent cheating, as well as specific measures to detect and address academic dishonesty (such as plagiarism detection or proctoring software). In this guide, make note of other assignments or places in the course where academic integrity is mentioned (in the syllabus and/or particular assignments), so these pages can be easily located and updated as needed. By having a plan, you can ensure a consistent approach across the course.

Exemplify Integrity Throughout the Course

It is important to weave academic integrity into the fabric of your course. Begin by introducing the concept in your Start Here module. Provide an overview of what integrity means in your course, including specific examples of acceptable and unacceptable behavior. This sets the tone for the rest of the course and establishes clear expectations. On this page, you might:

  • Offer resources and educational materials on academic integrity for learners, such as guides on proper citation and paraphrasing.
  • Include definitions of academic dishonesty, such as plagiarism, cheating, and falsification.
  • Provide guidance on how learners might use generative AI within the class, including what is and is not considered acceptable.
  • Add scenarios or case studies that allow learners to discuss and understand academic integrity issues, specifically related to the use of generative AI.
  • Connect academic integrity with ethical behavior in the larger field.
  • Provide a place for learners to reflect on what it means for them to participate in the course in a way that maximizes their learning while maintaining academic integrity.

Throughout the course, continue to reinforce these ideas. Reminders about academic integrity can be integrated into various lessons and modules. By articulating the integrity expectations at the activity and assignment level, you provide learners with a deeper understanding of how these principles apply to their work. 

Set Clear Expectations for Assignments

When designing assignments, it is important to be explicit about your expectations for academic integrity. Outline what learners should and should not do when completing the task. For instance, if you do not want them to collaborate on a particular assignment, state that clearly. Provide examples and resources to guide learners on how to properly cite sources and avoid plagiarism. Be specific with your expectations and share why you have particular policies in place. For instance, if you want to discourage the use of generative AI in certain assignments, call out the ways it can and cannot be used. As an example, you might tell learners they can use generative AI to help form an outline or check the grammar of their finished assignment, but not to generate the body text. Then share the purpose behind the policy; in this case, you might explain that the writing assignment is their opportunity to synthesize their learning and cement specific course concepts. This kind of transparency shows respect for the tools and the learning process, while also clearly outlining for learners what is acceptable.

Encourage Conversations About Integrity

Creating opportunities for learners to engage in discussions about academic integrity can help solidify these concepts in their minds. You can incorporate forums or discussion boards where learners can share their thoughts and experiences related to integrity. This also gives them a chance to ask questions and seek clarification on any concerns they may have. Encourage open dialogue between instructors and learners regarding academic integrity and any related concerns. These conversations can also extend beyond the classroom, exploring how integrity applies in your field or career paths. By connecting academic integrity to real-world scenarios, you help learners understand its relevance and importance in their professional lives.

Foster a Supportive Learning Environment

A supportive learning environment can help reinforce academic integrity by making learners feel comfortable asking questions and seeking guidance. Offer resources like definitions, guides, or access to mentors who can provide additional support. When learners know they have access to help, they are more likely to adhere to integrity standards. With generative AI in the learning landscape, we will inevitably encounter more “gray areas” in academic integrity. Be honest with your learners about your concerns and your hopes. Being open to conversations can only enhance the learning experience and the integrity in your courses.

We all play a role in cultivating a culture of academic integrity in online courses. By documenting a clear plan, weaving integrity into the course content, setting clear expectations, encouraging conversations, and providing support, you can create an environment where honesty and ethical behavior are valued and upheld. This not only benefits learners during their academic journey but also helps them develop skills and values that will serve them well in their future careers.

I was recently reminded of a conference keynote that I attended a few years ago, and the beginning of an academic term seems like an appropriate time to revisit it on this blog.

In 2019, Dan Heath, a bestselling author and senior fellow at Duke University’s CASE Center, gave a presentation at InstructureCon, a conference for Canvas users, where he talked about how memories are formed. He explained that memories are composed of moments. Moments, according to Heath, are “mostly forgettable and occasionally remarkable.” To illustrate, most of what I’ve done today–dropping my kids off at spring break camp, replying to emails, going to a lunchtime yoga class, and writing this blog post–will largely be forgotten by next month. There is nothing remarkable about today. Unremarkable is often a desirable state because it means that an experience occurred without any hiccups or challenges.

Heath went on to describe what it is that makes great experiences memorable. His answer: Great experiences consist of “peaks,” and peaks consist of at least one of the following elements: elevation, insight, pride, or connection. He argued that we need to create more academic peaks in education. Creating peaks, he contends, will lead to more memorable learning experiences.

So, how do we create these peaks that will lead to memorable experiences? Let’s explore some ideas through the four approaches outlined by Heath.

Elevation. Elevation refers to moments that bring us joy and make us feel good. You might bring this element into your course by directly asking students to share what is bringing them joy, perhaps as an icebreaker. Sharing their experiences might also lead to connection, which is another way (see below) to create peaks that lead to memorable experiences. 

Insight. Insight occurs when new knowledge allows us to see something differently. Moments of insight are often sparked by reflection. You might consider making space for reflection in your courses. Creativity is another way to spark new insights. How might students engage with course concepts in new, creative ways? To list off a few ideas, perhaps students can create a meme, record a podcast, engage in a role play, or write a poem.

Pride. People often feel a sense of pride when their accomplishments are celebrated. To spark feelings of accomplishment in your students, I encourage you to go beyond offering positive feedback and consider sharing particularly strong examples of student work with the class (after getting permission, of course!). Showcasing the hard work of students can help them feel proud of their efforts and may even lead to moments of joyful elevation.

Connection. Connection refers to our ties with other people. Experiencing connection with others can feel deeply rewarding. As I mentioned above, asking students to share their experiences with peers is one way to foster connection. In Ecampus courses, we aim to foster student-student and student-teacher connection, but I encourage you to explore other opportunities for students to make meaningful connections. Perhaps students can get involved with their communities or with colleagues, if they happen to have a job outside of classes. Students could connect with their academic advisors or the writing center to support their work in a course. There are many ways to foster connections that support students in their learning!

It’s easy to focus on delivering content, especially in online courses. This was one of Heath’s overarching points. The key, however, to creating memorable learning experiences is to take a student-centered approach to designing and facilitating your course. 

I invite you to start the term off by asking yourself: How can I create more moments of elevation, insight, pride, and connection for my students? It might be easier than you think.

References:

Heath, D. (2019, July 10). Keynote. InstructureCon. Long Beach, CA.

By Greta Underhill

In my last post, I outlined my search for a computer-assisted qualitative data analysis software (CAQDAS) program that would fit our Research Unit’s needs. We needed a program that would enable our team to collaborate across operating systems, easily adding in new team members as needed, while providing a user-friendly experience without a high learning curve. We also needed something that would adhere to our institution’s IRB requirements for data security and preferred a program that didn’t require a subscription. However, the programs I examined were either subscription-based, too cumbersome, or did not meet our institution’s IRB requirements for data security. It seemed that there just wasn’t a program out there to suit our team’s needs.

However, after weeks of continued searching, I found a YouTube video entitled “Coding Text Using Microsoft Word” (Harold Peach, 2014). At first, I assumed this would show me how to use Word comments to highlight certain text in a transcript, which is a handy function, but what about collating those codes into a table or Excel file? What about tracking which member of the team codes certain text? I assumed this would be an explanation of manual coding using Word, which works fine for some projects, but not for our team.

Picture of a dummy transcript using Lorem Ipsum placeholder text. Sentences are highlighted in red or blue depending upon the user. Highlighted passages have an associated “comment” where users have written codes.

Fortunately, my assumption was wrong. Dr. Harold Peach, Associate Professor of Education at Georgetown College, had developed a Word macro to identify and pull all comments from the Word document into a table (Peach, n.d.). A macro is “a series of commands and instructions that you group together as a single command to accomplish a task automatically” (Create or Run a Macro – Microsoft Support, n.d.). Once downloaded, the “Extract Comments to New Document” macro opens a template and produces a table of the coded information, as shown in the image below. The macro identifies the following properties:

  • Page: the page on which the text can be found
  • Comment scope: the text that was coded
  • Comment text: the text contained in the comment; for the purpose of our projects, the code title
  • Author: which member of the team coded the information
  • Date: the date on which the text was coded

Picture of a table of dummy text that was generated from the “Extract Comments to New Document” Macro. The table features the following columns: Page, Comment Scope, Comment Text, Author, and Date.

You can move the data from the Word table into an Excel sheet, where you can sort codes for patterns or frequencies, a function our team was looking for in a program, as shown below:

A picture of the dummy text table in an Excel sheet where codes have been sorted and grouped together by code name to establish frequencies.
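For teams who later want to script this step themselves, here is a minimal Python sketch of the same idea (not Dr. Peach's macro, which is written in VBA): a .docx file is a zip archive, and comments live in word/comments.xml, so their author, date, and text can be pulled into a CSV with the standard library alone. The file names and columns below are illustrative, and recovering the commented passage and page number, as the macro does, would require additional parsing of word/document.xml that is omitted here.

```python
import csv
import zipfile
import xml.etree.ElementTree as ET
from collections import Counter

# WordprocessingML namespace used throughout a .docx package.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_comments(docx_path):
    """Return a list of dicts with the author, date, and text of each comment."""
    with zipfile.ZipFile(docx_path) as docx:
        # word/comments.xml exists only if the document actually has comments.
        root = ET.fromstring(docx.read("word/comments.xml"))
    rows = []
    for c in root.iter(f"{W}comment"):
        text = "".join(t.text or "" for t in c.iter(f"{W}t"))
        rows.append({
            "author": c.get(f"{W}author"),
            "date": c.get(f"{W}date"),
            "code": text,  # in this workflow, the comment text is the code label
        })
    return rows

if __name__ == "__main__":
    rows = extract_comments("interview_01.docx")  # hypothetical transcript file
    with open("codes.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["author", "date", "code"])
        writer.writeheader()
        writer.writerows(rows)
    # Quick frequency check, similar to sorting the table in Excel.
    for code, count in Counter(row["code"] for row in rows).most_common():
        print(code, count)
```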

This Word macro was a good fit for our team for many reasons. First, our members could create comments on a Word document, regardless of their operating system. Second, we could continue to house our data on our institution’s servers, ensuring our projects meet strict IRB data security measures. Third, the Word macro allowed for basic coding features (coding multiple passages multiple times, highlighting coded text, etc.) and had a very low learning curve: teaching someone how to use Word comments. Lastly, our institution provides access to the complete Microsoft Suite, so all team members, including students who would be working on projects, already had access to Word. We contacted our IT department to verify that the macro was safe and to help us download it.

Testing the Word Macro       

Once installed, I tested out the macro with our undergraduate research assistant on a qualitative project and found it to be intuitive and helpful. We coded independently and met multiple times to discuss our work. Eventually we ran the macro, pulled all comments from our data, and moved the macro tables into Excel where we manually merged our work. Through this process, we found some potential drawbacks that could impact certain teams.

First, researchers can view all previous comments, which might impact how teammates code or how second-cycle coding is performed; other programs let you hide previous codes so researchers can come to the text fresh.

Second, coding across paragraphs can create issues with the resulting table; cells merge in ways that make it difficult to sort and filter if moved to Excel, but a quick cleaning of the data took care of this issue.

Lastly, we manually merged our work, negotiating codes and content, as our codes were inductively generated; researchers working on deductive projects may bypass this negotiation and find the process of merging much faster.

Despite these potential drawbacks, we found this macro sufficient for our project, as it was free to use, easy to learn, and a helpful way to organize our data. The following table summarizes the pros and cons of this macro.

Pros and Cons of the “Extract Comments to New Document” Word Macro

Pros

  • Easy to learn and use: simply providing comments in a Word document and running the macro
  • Program tracks team member codes which can be helpful in discussions of analysis
  • Team members can code separately by generating separate Word documents, then merge the documents to consensus code
  • Copying Word table to Excel provides a more nuanced look at the data
  • Program works across operating systems
  • Members can house their data in existing structures, not on cloud infrastructures
  • Macro is free to download

Cons

  • Previous comments are visible through the coding process which might impact other members’ coding or second round coding
  • Coding across paragraph breaks creates cell breaks in the resulting table that can make it hard to sort
  • Team members must manually merge their codes and negotiate code labels, overlapping data, etc.

Scientific work can be enhanced and advanced by the right tools; however, it can be difficult to distinguish which computer-assisted qualitative data analysis software program is right for a team or a project. Any of the programs mentioned in my previous post would be good options for individuals who do not need to collaborate or for those working with publicly available data that requires different data security protocols. However, the Word macro highlighted here is a great option for many research teams. In all, although there are many powerful computer-assisted qualitative data analysis software programs out there, our team found the simplest option was the best option for our projects and our needs.

References 

Create or run a macro—Microsoft Support. (n.d.). Retrieved July 17, 2023, from https://support.microsoft.com/en-us/office/create-or-run-a-macro-c6b99036-905c-49a6-818a-dfb98b7c3c9c

Harold Peach (Director). (2014, June 30). Coding text using Microsoft Word. https://www.youtube.com/watch?v=TbjfpEe4j5Y

Peach, H. (n.d.). Extract comments to new document – Word macros and tips – Work smarter and save time in Word. Retrieved July 17, 2023, from https://www.thedoctools.com/word-macros-tips/word-macros/extract-comments-to-new-document/

For the first part of this post, please see Media Literacy in the Age of AI, Part I: “You Will Need to Check It All.”

Just how, exactly, we’re supposed to follow Ethan Mollick’s caution to “check it all” happens to be the subject of a lively, forthcoming collaboration from two education researchers who have been following the intersection of new media and misinformation for decades.

In Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online (University of Chicago Press, November 2023), Mike Caulfield and Sam Wineburg provide a kind of user’s manual to the modern internet. The authors’ central concern is that students—and, by extension, their teachers—have been going about the process of verifying online claims and sources all wrong—usually by applying the same rhetorical skills activated in reading a deep-dive on Elon Musk or Yevgeny Prigozhin, to borrow from last month’s headlines. Academic readers, that is, traditionally keep their attention fixed on the text—applying comprehension strategies such as prior knowledge, persisting through moments of confusion, and analyzing the narrative and its various claims about technological innovation or armed rebellion in discipline-specific ways.

The Problem with Checklists

Now, anyone who has tried to hold a dialogue on more than a few pages of assigned reading at the college level knows that sustained focus and critical thinking can be challenging, even for experienced readers. (A majority of high school seniors are not prepared for reading in college, according to 2019 data.) And so instructors, partnering with librarians, have long championed checklists as one antidote to passive consumption, first among them the CRAAP test, which stands for currency, relevance, authority, accuracy, and purpose. (Flashbacks to English 101, anyone?) The problem with checklists, argue Caulfield and Wineburg, is that in today’s media landscape—awash in questionable sources—they’re a waste of time. Such routines might easily keep a reader focused on critically evaluating “gameable signals of credibility” such as functional hyperlinks, a well-designed homepage, airtight prose, digital badges, and other supposedly telling markers of authority that can be manufactured with minimal effort or purchased at little expense, right down to the blue checkmark made infamous by Musk’s platform-formerly-known-as-Twitter.

Three Contexts for Lateral Reading

One of the delights in reading Verified is drawing back the curtains on a parade of little-known hoaxes, rumors, actors, and half-truths at work in the shadows of the information age—ranging from a sugar industry front group posing as a scientific think tank to headlines in mid-2022 warning that clouds of “palm-sized flying spiders” were about to descend on the East Coast. In the face of such wild ideas, Caulfield and Wineburg offer a helpful, three-point heuristic for navigating the web—and a sharp rejoinder to the source-specific checklists of the early aughts. (You will have to read the book to fact-check the spider story, or as the authors encourage, you can do it yourself after reading, say, the first chapter!) “The first task when confronted with the unfamiliar is not analysis. It is the gathering of context” (p. 10). More specifically:

  • The context of the source — What’s the reputation of the source of information that you arrive at, whether through a social feed, a shared link, or a Google search result?
  • The context of the claim — What have others said about the claim? If it’s a story, what’s the larger story? If a statistic, what’s the larger context?
  • Finally, the context of you — What is your level of expertise in the area? What is your interest in the claim? What makes such a claim or source compelling to you, and what could change that?
“The Three Contexts” from Verified (2023)

At a regional conference of librarians in May, Wineburg shared video clips from his scenario-based research, juxtaposing student sleuths with professional fact checkers. His conclusion? By simply trying to gather the necessary context, learners with supposedly low media literacy can be quickly transformed into “strong critical thinkers, without any additional training in logic or analysis” (Caulfield and Wineburg, p. 10). What does this look like in practice? Wineburg describes a shift from “vertical” to “lateral reading” or “using the web to read the web” (p. 81). To investigate a source like a pro, readers must first leave the source, often by opening new browser tabs, running nuanced searches about its contents, and pausing to reflect on the results. Again, such findings hold significant implications for how we train students in verification and, more broadly, in media literacy. Successful information gathering, in other words, depends not only on keywords and critical perspective but also on the ability to engage in metacognitive conversations with the web and its architecture. Or, channeling our eight-legged friends again: “If you wanted to understand how spiders catch their prey, you wouldn’t just look at a single strand” (p. 87).

Image 2: Mike Caulfield’s SIFT graphic, with icons for the “four moves”: stop, investigate the source, find better coverage, and trace claims, quotes, and media to the original context.

Reconstructing Context

Much of Verified is devoted to unpacking how to gain such perspective while also building self-awareness of our relationships with the information we seek. As a companion to Wineburg’s research on lateral reading, Caulfield has refined a series of higher-order tasks for vetting sources called SIFT, or “The Four Moves” (see Image 2). By (1) Stopping to take a breath and get a look around, (2) Investigating the source and its reputation, (3) Finding better sources of journalism or research, and (4) Tracing surprising claims or other rhetorical artifacts back to their origins, readers can more quickly make decisions about how to manage their time online. You can learn more about the why behind “reconstructing context” at Caulfield’s blog, Hapgood, and as part of the OSU Libraries’ guide to media literacy. (Full disclosure: Mike is a former colleague from Washington State University Vancouver.)

If I have one complaint about Caulfield and Wineburg’s book, it’s that it dwells at length on the particulars of analyzing Google search results, which fill pages of accompanying figures and a whole chapter on the search engine as “the bestie you thought you knew” (p. 49). To be sure, Google still occupies a large share of the time students and faculty spend online. But as in my quest for learning norms protocols, readers are already turning to large language model tools for help in deciding what to believe online. In that respect, I find other chapters in Verified (on scholarly sources, the rise of Wikipedia, deceptive videos, and so-called native advertising) more useful. And if you go there, don’t miss the authors’ final take on the power of emotion in finding the truth—a line that sounds counterintuitive but in context adds another, rather moving dimension to the case against checklists.

Given the acceleration of machine learning, will lateral reading and SIFTing hold up in the age of AI? Caulfield and Wineburg certainly think so. Building out context becomes all the more necessary, they write in a postscript on the future of verification, “when the prose on the other side is crafted by a convincing machine” (p. 221). On that note, I invite you and your students to try out some of these moves on your favorite chatbot.

Another Postscript

The other day, I gave Microsoft’s AI-powered search engine a few versions of the same prompt I had put to ChatGPT. In “balanced” mode, Bing dutifully recommended resources from Stanford, Cornell, and Harvard on introducing norms for learning in online college classes. Over in “creative” mode, Bing’s synthesis was slightly more offbeat—including an early-pandemic blog post on setting norms for middle school faculty meetings in rural Vermont. More importantly, the bot wasn’t hallucinating. Most of the sources it suggested seemed worth investigating. Pausing before each rabbit hole, I took a deep breath.

Related Resource

Oregon State Ecampus recently rolled out its own AI toolkit for faculty, based on an emerging consensus that developing capacities for using this technology will be necessary in many areas of life. Of particular relevance to this post is a section on AI literacy, conceptualized as “a broad set of skills that is not confined to technical disciplines.” As with Verified, I find the toolkit’s frameworks and recommendations on teaching AI literacy particularly helpful. For instance, if students are allowed to use ChatGPT or Bing to brainstorm and evaluate possible topics for a writing assignment, “faculty might provide an effective example of how to ask an AI tool to help, ideally situating explanation in the context of what would be appropriate and ethical in that discipline or profession.”

References

Caulfield, M., & Wineburg, S. (2023). Verified: How to think straight, get duped less, and make better decisions about what to believe online. University of Chicago Press.

Mollick, E. (2023, July 15). How to use AI to do stuff: An opinionated guide. One Useful Thing.

Oregon State Ecampus. (2023). Artificial Intelligence Tools.