This blog post is a continuation of “Refining Rubrics & Assessments: AI as Design Support – Part 1”.

Using AI to Refine Rubric Language

In the previous post, I gave an assignment prompt to Copilot (as that’s the recommended tool at Oregon State University) and asked it to complete the task. For reference, here is the task.

Rubrics are often the weakest link in assessment design, particularly when descriptors rely on vague phrases like “meets expectations” or “demonstrates understanding.” One way to evaluate rubric clarity is to ask AI to self-assess its own response using the rubric criteria.

If the model can plausibly justify a high score despite shallow reasoning or inconsistent logic, the rubric may not be clearly distinguishing levels of performance. More precise rubrics specify what evidence matters and how quality differs, emphasizing reasoning, coherence, and alignment with course concepts rather than polish or length. Clear criteria benefit students, but they also make it harder for superficially strong work to masquerade as deep learning.


Rubric Analysis Prompt

You are now acting as an external assessment reviewer, not a student.
You will be given:

  1. An assignment prompt
  2. A grading rubric
  3. A model-generated student submission (your own prior response)

Your task is not to grade the submission.
Instead, critically evaluate the rubric itself by answering the following:

  1. Rubric Vulnerabilities
    • Identify specific rubric criteria or descriptors that allow a high score to be justified through fluent but shallow reasoning.
    • For each vulnerability, explain what kind of weak or superficial evidence could still plausibly receive a high score under the current wording.
  2. Distinguishing Performance Levels
    • For at least three rubric categories, explain why the difference between “Excellent” and “Good” (or “Good” and “Satisfactory”) may be ambiguous in practice.
    • Describe what concrete evidence a human grader would need to reliably distinguish between those levels.
  3. AI Self-Assessment Stress Test
    • Using your own generated submission as an example, explain how it could convincingly argue for a high score even if underlying understanding were limited.
    • Point to specific rubric language that enables this justification.
  4. Rubric Strengthening Recommendations
    • Propose revised rubric language that makes expectations more explicit and evidence-based.
    • Emphasize observable reasoning, causal explanation, constraint awareness, or conceptual boundaries rather than general phrases such as “demonstrates understanding” or “well-justified.”

Constraints:

  • Do not rewrite the assignment prompt.
  • Do not assume access to course-specific lectures or materials.

Focus on how the rubric functions as an assessment instrument, not on pedagogy or student motivation.

Tone:
Analytical, critical, and concrete. Avoid generic advice.



You could use this prompt directly by attaching a rubric, assessment prompt, and “submission,” or modify it to fit your own situation.

Here is a section of the results it gave, along with the “thinking” section expanded to see the process of the generated answer:


(Copilot gave me an enormous amount of feedback, as expected, since the rubric included a lot of generic language.)


Rethinking “Higher-Order Thinking” in an AI-Rich Environment

Frameworks like Bloom’s Taxonomy remain useful, but AI complicates the assumption that higher-order tasks are automatically more resistant to outsourcing. AI can analyze, evaluate, and even create convincing responses if prompts are static and unconstrained.

What remains more difficult to outsource is judgment. Assignments that require students to choose among approaches, justify those choices, identify uncertainty, or explain when a method would fail tend to surface understanding more reliably than tasks that simply ask for analysis or synthesis. When reviewing AI-generated responses, a helpful question is: What would a human need to know to trust this answer? Designing assessments around that question shifts the focus from output to accountability.

Instructors can strengthen authenticity by introducing underspecified scenarios, realistic limitations, or prompts that require students to articulate how they would evaluate the reliability of their own results. These design choices don’t prevent AI use, but they make it harder to succeed without understanding when and why an answer might be wrong.


An Iterative Design Loop for Assessments and Rubrics

Using AI as an assessment design diagnostic and refinement tool can work best as an iterative process. Draft the assignment and rubric, test them with AI, analyze how success is achieved, and revise accordingly. The goal is not to reach a point where AI “fails,” but rather a point where success requires engagement with disciplinary concepts and reasoning. This mirrors quality-assurance practices in other domains: catching misalignment early, refining specifications, and retesting until the design reliably produces the intended outcome. Importantly, this loop should be finite and purposeful, not an endless escalation.

Conclusion

Using AI in assessment design is not about surveillance or enforcement. It is a transparency tool. When instructors acknowledge that AI exists and design accordingly, they reduce the incentive for adversarial behavior and increase clarity around expectations. Being open with students about the role of AI (what is permitted, what responsibility cannot be delegated, and how understanding will be evaluated) helps maintain trust while preserving academic standards. The credibility of online and in-person education alike depends not on stopping students from using tools, but on ensuring that passing a course still signifies meaningful learning.

Takeaway Cheat Sheet

  • Think of AI as support, not a villain.
  • Stress‑test early: run the rubric through a model for verification before you hand it to students.
  • Refine granularity: precise descriptors = clearer expectations.
  • Target higher‑order thinking: embed authentic scenarios.
  • Iterate, don’t stagnate: keep the loop tight but finite.
  • Mind ethics: disclose, de‑bias, and set realistic limits.

For centuries, knowledge and access to education were restricted to just a few. In today’s world, almost anybody can access information through the web and, more recently, through AI tools. However, it is important to recognize that these tools, while offering expansive access to content of varied nature, also pose challenges. Generative AI has fundamentally changed how students interact with assignments, but it has also given instructors a powerful new lens for examining their own assessment design. Rather than treating AI solely as a threat to academic integrity, we can use it as a diagnostic tool – one that quickly reveals whether our assignments and rubrics are actually measuring what we think they are. If an AI can complete an assignment and meet the stated criteria for success without engaging course-specific learning, is it really a student problem, or a signal to modify the design?


A small shift in perspective from “they’re using this to cheat” to “how can this help me prevent cheating” is especially important in online and hybrid environments, where traditional academic integrity controls like proctored exams are either unavailable or undesirable. Instead of trying to outmaneuver AI or police its use, instructors can ask a more productive question: What does success on this assignment actually require?


Why AI Is a Helpful Design Tool


AI can function as an unusually honest “devil’s advocate.” It doesn’t get tired, anxious, or confused about instructions, and it excels at finding the most efficient path to meeting stated requirements. When an instructor gives an AI model an assignment prompt and a rubric, the resulting output can expose whether the rubric rewards deep engagement or simply fluent compliance.


If an AI can generate a response that appears to meet expectations without referencing key course concepts, grappling with assumptions, or making meaningful decisions, then students can likely do the same. In this way, AI acts less like a cheating student and more like a mirror held up to our assessment design.

An example using Copilot:


Stress-Testing Assignments Before Students Ever See Them

One practical workflow to test the resilience of your assignments is to run them through AI before they are deployed. Provide the model with the prompt and the rubric (nothing else) and ask it to produce a strong submission. Then evaluate that response using your own grading criteria.

The point is not to judge whether the AI’s answer is “good,” but to analyze why it succeeds in meeting the set requirements so easily and, at first glance, flawlessly. If the response earns high marks through generic explanations, surface-level analysis, or broadly applicable reasoning, that’s evidence that the assessment may not be tightly aligned with course learning outcomes, may not demand deeper thinking and analysis, or may not elicit students’ own creativity. This kind of stress-testing takes minutes and often surfaces issues that would otherwise only become visible after grading a full cohort.
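If you run this stress test repeatedly, it can help to package the materials the same way each time. The sketch below is a hypothetical helper (the function name and prompt template are my own, not part of Copilot or any tool's API) that assembles an assignment prompt and rubric into a single stress-test request you could paste into a chat model:

```python
def build_stress_test_prompt(assignment: str, rubric: str) -> str:
    """Package an assignment prompt and grading rubric into one
    stress-test request for a chat model. The model receives nothing
    else -- no lectures or course materials -- mirroring what an
    outside reader would see."""
    return (
        "You are a student completing the assignment below. "
        "Produce the strongest submission you can using only the "
        "assignment prompt and the grading rubric provided.\n\n"
        "=== ASSIGNMENT PROMPT ===\n"
        f"{assignment.strip()}\n\n"
        "=== GRADING RUBRIC ===\n"
        f"{rubric.strip()}\n"
    )

# Example usage with placeholder materials:
request = build_stress_test_prompt(
    assignment="Conceptual design and analysis of a chemical reactor...",
    rubric="Understanding of Chemical Engineering Principles: ...",
)
print(request)
```

The helper only does string assembly, so it works with any model or interface; the point is simply that the AI sees exactly what a student would see, and nothing more.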


The Task

Assignment Prompt

Subject: Chemical Engineering
Level: Upper-level undergraduate (3rd year)
Topic: Reactor Design & Engineering Judgment

Assignment: Conceptual Design and Analysis of a Chemical Reactor

You are tasked with the preliminary design and analysis of a chemical reactor for the production of a commodity chemical of your choice (e.g., ammonia, methanol, ethylene oxide, sulfuric acid, or another well-established industrial product).

Your analysis should address the following:

  1. Process Overview
    • Briefly describe the selected chemical process and its industrial relevance.
    • Identify the primary reaction(s) involved and classify the reaction type(s) (e.g., exothermic/endothermic, reversible/irreversible, catalytic/non-catalytic).
  2. Reactor Selection
    • Propose an appropriate reactor type (e.g., CSTR, PFR, batch, packed bed).
    • Justify your selection based on reaction kinetics, heat transfer considerations, conversion goals, and operational constraints.
  3. Operating Conditions
    • Discuss key operating variables such as temperature, pressure, residence time, and feed composition.
    • Explain how these variables influence conversion, selectivity, and safety.
  4. Engineering Trade-Offs
    • Identify at least two major design trade-offs (e.g., conversion vs. selectivity, energy efficiency vs. safety, capital cost vs. operating cost).
    • Explain how an engineer might balance these trade-offs in practice.
  5. Limitations and Assumptions
    • Clearly state any simplifying assumptions made in your analysis.
    • Discuss the limitations of your proposed design at this preliminary stage.

Your response should demonstrate clear engineering reasoning rather than detailed numerical calculations. Where appropriate, qualitative trends, simplified relationships, or order-of-magnitude reasoning may be used.

Length: ~1,000–1,200 words
References: Not required, but accepted if used appropriately

The Rubric

Understanding of Chemical Engineering Principles
  • Excellent (A): Demonstrates strong understanding of reaction engineering concepts and correctly applies them to the chosen process
  • Good (B): Demonstrates general understanding with minor conceptual gaps
  • Satisfactory (C): Shows basic familiarity but with notable misunderstandings or oversimplifications
  • Unsatisfactory (D/F): Demonstrates weak or incorrect understanding of core concepts

Reactor Selection & Justification
  • Excellent (A): Reactor choice is well-justified using multiple relevant criteria (kinetics, heat transfer, safety, operability)
  • Good (B): Reactor choice is reasonable but justification lacks depth or completeness
  • Satisfactory (C): Reactor choice is weakly justified or based on limited reasoning
  • Unsatisfactory (D/F): Reactor choice is inappropriate or unjustified

Analysis of Operating Conditions
  • Excellent (A): Clearly explains how operating variables affect performance, safety, and efficiency
  • Good (B): Explains effects of variables with minor omissions or inaccuracies
  • Satisfactory (C): Provides limited or superficial discussion of operating conditions
  • Unsatisfactory (D/F): Fails to meaningfully analyze operating variables

Engineering Trade-Offs
  • Excellent (A): Insightfully identifies and explains realistic trade-offs, demonstrating engineering judgment
  • Good (B): Identifies trade-offs but discussion lacks nuance or integration
  • Satisfactory (C): Trade-offs are mentioned but poorly explained or generic
  • Unsatisfactory (D/F): Trade-offs are absent or incorrect

Assumptions & Limitations
  • Excellent (A): Assumptions are clearly stated and critically evaluated
  • Good (B): Assumptions are stated but not fully examined
  • Satisfactory (C): Assumptions are implicit or weakly articulated
  • Unsatisfactory (D/F): Assumptions are missing or inappropriate

Clarity & Organization
  • Excellent (A): Response is well-structured, clear, and professional
  • Good (B): Generally clear with minor organizational issues
  • Satisfactory (C): Organization or clarity interferes with understanding
  • Unsatisfactory (D/F): Poorly organized or difficult to follow



Identifying Gaps in What We’re Measuring

AI performs particularly well on tasks that rely on recognition, pattern matching, and general world knowledge. This means it can easily succeed on assessments that emphasize recall, procedural execution, or elimination of obviously wrong answers. When that happens, the assessment may be measuring familiarity rather than understanding.

Revising these tasks does not require making them longer or more complex. Instead, instructors can focus on higher-order thinking and metacognition, for example requiring students to articulate why a particular approach applies, what assumptions are being made, or how results should be interpreted. These shifts move the assessment away from answer production and toward critical and disciplinary thinking – without assuming that AI use can or should be eliminated. Identifying these gaps can also help you revisit the structure of the assignment and determine how each of its elements (purpose, instructions/task/prompt, and criteria for success) connects cohesively to strengthen the assignment.

In the second part of this blog, I take the same task above and work with the AI to refine a rubric.

This post was written in collaboration with Mary Ellen Dello Stritto, Director of Ecampus Research Unit.

Quality Matters standards are supported by extensive research on effective learning. Oregon State University’s own Ecampus Essentials build upon these standards, incorporating OSU-specific quality criteria for ongoing course development. But what do students themselves think about the elements that constitute a well-designed online course?

The Study

The Ecampus Research Unit took part in a national research study with Penn State and Boise State universities that sought student insight into what elements of design and course management contribute to quality in an online course. Data were collected from six universities across the US, including Oregon State, in Fall 2024. Students who chose to participate completed a 73-item online survey that asked about course design elements from the updated version of the Quality Matters Rubric. Students responded to each question on the following scale: 0 = Not Important, 1 = Important, 2 = Very Important, 3 = Essential. A total of 124 students completed the survey, including 15 OSU Ecampus students. The findings reveal a remarkable alignment between research-based best practices and student preferences, validating the approach taken in OSU’s Ecampus Essentials.
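For readers curious how "Essential + Very Important" percentages of the kind reported below are derived from the 0–3 scale, here is a minimal sketch in Python (the sample ratings are made up for demonstration, not the study's actual data):

```python
# Ratings use the survey scale: 0 = Not Important, 1 = Important,
# 2 = Very Important, 3 = Essential.
def pct_top_two(ratings):
    """Percent of respondents rating an item Very Important (2)
    or Essential (3), rounded to the nearest whole percent."""
    top_two = sum(1 for r in ratings if r >= 2)
    return round(100 * top_two / len(ratings))

# Hypothetical ratings for one survey item from ten students:
sample = [3, 3, 2, 2, 3, 1, 2, 0, 3, 2]
print(pct_top_two(sample))  # 8 of 10 rated it 2 or 3 -> prints 80
```

The same collapse of the top two scale points underlies each of the percentages in the tables that follow.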

See the findings in data visualization form below, followed by a detailed description.

[Image: data visualization of the findings. See detailed description after the image.]

What Students Consider Most Important

Students clearly value practical, research-backed features that make online courses easier to navigate, more accessible, and more supportive of learning. The following items received the most ratings of “Essential” + “Very Important”:

Accessibility and Usability (QM Standards 8.2, 8.3, 8.4, 8.5, 8.6)
  • Study findings: Every OSU student rated course readability and accessible text as “Very Important” or “Essential” (100%). Nationally, this was also a top priority (96% and 91%, respectively). Accessibility of multimedia, like captions and user-friendly video/audio, was also highly rated (100% OSU, 90% nationally).
  • Related Ecampus Essentials: Text in the course site is accessible. Images in the course are accessible (e.g., alt text or long description for images). The course design facilitates readability. All video content is accurately captioned.

Clear Navigation and Getting Started (QM Standards 1.1, 8.1)
  • Study findings: 93% of OSU students and 94% of the national sample rated easy navigation highly, while 89% of OSU students and 96% nationally said clear instructions for how to get started and where to find things were essential.
  • Related Ecampus Essentials: Course is structured into intuitive sections (weeks, units, etc.) with all materials for each section housed within that section (e.g., one page with that week’s learning materials rather than a long list of files in the module). Course is organized with student-centered navigation, and it is clear to students how to get started in the course.

Meaningful Feedback and Instructor Presence (QM Standards 3.5, 5.3)
  • Study findings: Students placed high importance on receiving detailed feedback that connects directly to course content (100% OSU, 94% nationally). The ability to ask questions of instructors was also essential (100% OSU, 96% nationally).
  • Related Ecampus Essentials: Assessments are sequenced in a way to give students an opportunity to build knowledge and learn from instructor feedback. The instructor’s plan for regular interaction with students in substantive ways during the course is clearly articulated. Information about student support specific to the course (e.g., links to the Writing Center in a writing course, information about TA open office hours, etc.) is provided.

Clear Grading Criteria (QM Standards 3.2, 3.3)
  • Study findings: 93% of OSU students and the full sample found clear, detailed grading rules to be essential.
  • Related Ecampus Essentials: Specific and descriptive grading information for each assessment is provided (e.g., detailed grading criteria and/or rubrics).

Instructional Materials (QM Standard 4.1)
  • Study findings: All OSU students and 92% nationally rated high-quality materials that support learning outcomes as very important or essential.
  • Related Ecampus Essentials: Instructional materials align with the course and weekly outcomes. A variety of instructional materials are used to appeal to many learning preferences (readings, audio, visual, multimedia, etc.). When pre-recorded lectures are utilized, content is brief and integrated into course learning activities, such as with interactive components, discussion questions, or quiz questions. Longer lectures should be shortened to less than 20 min. chunks.

What Students Consider Less Important

The study also revealed areas where students expressed less enthusiasm:

Self-Introductions (QM Standard 1.9)
  • Study findings: Over half of OSU students (56%) and a third nationally (33%) rated opportunities to introduce themselves as “Not Important”.
  • Related Ecampus Essentials: No specific EE.

Peer Interaction (QM Standard 5.2)
  • Study findings: Students were lukewarm about peer-to-peer learning activities. Nearly half said that working in small groups is not important (47% OSU, 46% nationally). About a quarter didn’t value sharing ideas in public forums (27% OSU, 24% nationally) or having learning activities that encourage them to interact with other students (27% OSU, 23% nationally).
  • Related Ecampus Essentials: Three forms of interaction are present, in some form, in the course (student/content, student/instructor, student/student).

Technology Variety and Data Privacy Info (QM Standards 6.3, 6.4)
  • Study findings: Some students questioned the value of using a variety of tech tools (20% OSU, 23% nationally rated this as “Not Important”) or being given info about protecting personal data (20% OSU, 22% nationally).
  • Related Ecampus Essentials: Privacy policies for any tools used outside of Canvas are provided.

Student Comments

Here are a few comments from Ecampus students that illustrate their opinions on what makes a quality course:

  • “Accessible instructional staff who will speak to students in synchronous environments. Staff who will guide students toward the answer rather than either treating it like cheating to ask for help at all or simply giving out the answer.”
  • “A lack of communication/response from teachers and no sense of community” was cited as a barrier.
  • “Mild reliance on e-book/publisher content, out-weighed by individual faculty created content that matches student deliverables. In particular, short video content guiding through the material in short, digestible amounts (not more than 20 minutes at a go).”
  • “When there aren’t a variety of materials, it makes it hard to successfully understand the materials. For example, I prefer there to be lectures or videos associated with readings so that I understand the material to the professor’s standards. When I only have reading materials, I can sometimes misinterpret the information.”
  • “Knock it off with the discussion boards, and the ‘reply to 2 other posts’ business. This is not how effective discourse takes place, nor is it how collaborative learning/learning community is built.”

Conclusion and Recommendations

The takeaways? This research shows that students recognize and value the same quality elements emphasized in OSU’s Ecampus Essentials:

  1. Student preferences align with research-based standards – Students consistently value accessibility, clear structure, meaningful feedback, and purposeful content.
  2. Universal design benefits everyone – Students’ strong preference for accessible, well-designed courses supports the universal design principles embedded in the Ecampus Essentials.

However, there is always room for improvement, and these data provide some hints. Many students don’t immediately see value in peer interactions and collaborative activities, even though extensive educational research shows these are among the most effective learning strategies. Collaborative learning is recognized as a High Impact Practice that significantly improves student outcomes and critical thinking. This disconnect suggests we need to design these experiences more thoughtfully to help students recognize their benefits. Here are some suggestions:

  • Frame introductions purposefully: Instead of generic “tell us about yourself” posts, connect introductions to course content (“Introduce yourself and share an experience related to the topic of this course”).
  • Design meaningful group work: Create projects that genuinely require collaboration and produce something students couldn’t create alone.
  • Show the connection: Explicitly explain how peer interactions help students learn and retain information better, and the value of teamwork for their future jobs.
  • Start small: Begin with low-stakes peer activities before moving to more complex collaborations.
[Image: the five steps in the feedback cycle]

Giving and receiving feedback effectively is a key skill we all develop as we grow, and it helps us reflect on our performance, guide our future behavior, and fine-tune our practices. Later in life, feedback continues to be vital as we move into work and careers, getting feedback from the people we work for and with. As teachers, the most important aspect of our job is giving feedback that informs students how to improve and meet the learning outcomes to pass our courses. We soon learn, however, that giving feedback can be difficult for several reasons. Despite it being one of our primary job duties as educators, we may have received little training on how to give feedback or what effective feedback looks like. We also realize how time-consuming it can be to provide the detailed feedback students need to improve. To make matters worse, we may find that students don’t do much with the feedback we spend so much time providing. Additionally, students may not respond well to feedback: they might become defensive, feel misunderstood, or worse, ignore the feedback altogether. This can set us up for an ineffective feedback process, which can be frustrating for both sides.

I taught ESL to international students from around the world for more than 10 years and have given a fair amount of feedback. Over many cycles, I developed a detailed and systematic approach for providing feedback that looked like this.

Gaps in this cycle can lead to frustration from both sides. Each step in the cycle is essential, so we’ll look at each in greater depth in this blog series. Today, we will focus on starting strong by preparing students to receive feedback, a crucial beginning that sets the stage for a healthy cycle.

Step 1: Prepare Students to Receive Feedback

An effective feedback cycle starts before the feedback is given by laying careful groundwork. The first and often-overlooked step in the cycle is preparing students to receive feedback, which takes planned, ongoing work. Various factors may influence whether students welcome feedback, including their self-confidence going into your course, their own self-concept and mindset as a learner, their working memory and learning capacity, how they view your feedback, and whether they feel they can trust you. Outside factors such as motivation and working memory are often beyond our control, but creating an atmosphere of trust and safety in the classroom can positively support students. Student confidence and mindset are areas in which  teachers can play a crucial supporting role. 

Researcher Carol Dweck coined the term “growth mindset” after noticing that some students showed remarkable resilience when faced with hardship or failure, while others became easily frustrated and angry and tended to give up on tasks. She developed her theory of growth vs. fixed mindsets to explain and expound on the differences between the two. The chart below shows some of the features of each extreme, and we can easily see how a fixed mindset can limit students’ resilience and persistence when faced with difficulties.

[Image: brain with growth mindset hallmarks on the left and fixed mindset ideas on the right]

Mindset directly impacts how students receive feedback. Research has shown that students who believe that their intelligence and abilities can be developed through hard work and dedication are more likely to put in the effort and persist through difficult tasks, while those who see intelligence as a fixed, unchangeable quality are more likely to see feedback as criticism and give up. 

Developing a growth mindset can have transformative results for students, especially if they have grown up in a particularly fixed mindset environment. People with a growth mindset are more likely to seek out feedback and use it to improve their performance, while those with a fixed mindset may be more likely to ignore feedback or become defensive when receiving it. Those who receive praise for their effort and hard work, rather than just their innate abilities, are more likely to develop a growth mindset. This is because they come to see themselves as capable of improving through their own efforts, rather than just relying on their natural talents. A growth mindset also helps students learn to deal with failure and reframe it positively. It can be very difficult to receive a critique without tying our performance to our identity. Students must  have some level of assurance that they will be safe taking risks and trying, without fear of being punished for failing. 

Additionally, our own mindset affects how we view student effort, and we often, purposefully or not, convey those messages to students. Teachers with growth mindsets have a positive and statistically significant association with the development of their students’ growth mindsets. Our own mindset affects the type of feedback we are likely to provide, the amount of time we spend on giving feedback, and the way we view the abilities of our students. 

These data suggest that taking the time to learn about and foster a growth mindset in ourselves and our students results in benefits for all. Teachers need to address the value of feedback early on in the learning process and repeatedly throughout the term or year, and couching our messaging to students in positive, growth-oriented language can bolster the feedback process and start students off on the right foot, prepared to improve. 

Here are some concrete steps you can take to improve how your students will receive feedback:

  • Model a growth mindset through language and actions 
  • Include growth-oriented statements in early messaging
  • Provide resources for students to learn more about growth vs. fixed mindsets
  • Discuss the value of feedback and incorporate it into lessons
  • Create an atmosphere of trust and safety that helps students feel comfortable trying new things 
  • Teach that feedback is NOT a judgment of the person, but rather a judgment on the product or process
  • Ensure the feedback we give focuses on the product or process rather than the individual
  • Praise effort rather than intelligence
  • Make it clear that failure is part of learning and that feedback helps improve performance
  • Provide students with tools and strategies to plan, monitor, and evaluate their learning 

Resources for learning more about growth mindset and how it relates to feedback:


Stay tuned for part 2, covering the remaining steps in the feedback cycle. 

[Image caption: This image is part of the Transformation Projects at the Ars Electronica Kepler's Garden at the JUK. The installation AI Truth Machine deals with the chances and challenges of finding truth through a machine.]

All the buzz recently has been about Generative AI, and for good reason. These new tools are reshaping the way we learn and work. Within the many conversations about Artificial Intelligence in Higher Ed, a common thread has been appearing regarding the other AI: Academic Integrity. Creating and maintaining academic integrity in online courses is a crucial part of quality online education. It ensures that learners are held to ethical standards and encourages a fair, honest, and respectful learning environment. Here are some strategies to promote academic integrity and foster a culture of ethical behavior throughout your online courses, even in the age of generative AI.

Create an Academic Integrity Plan

Having a clear academic integrity plan is essential for any course. Create an instructor-only page within your course that details a clear strategy for maintaining academic integrity. This plan might include a schedule for revising exam question banks to prevent cheating, as well as specific measures to detect and address academic dishonesty (e.g., plagiarism detection or proctoring software). In this guide, make note of other assignments or places in the course where academic integrity is mentioned (in the syllabus and/or particular assignments), so these pages can be easily located and updated as needed. By having a plan, you can ensure a consistent approach across the course.

Exemplify Integrity Throughout the Course

It is important to weave academic integrity into the fabric of your course. Begin by introducing the concept in your Start Here module. Provide an overview of what integrity means in your course, including specific examples of acceptable and unacceptable behavior. This sets the tone for the rest of the course and establishes clear expectations. On this page, you might:

  • Offer resources and educational materials on academic integrity for learners, such as guides on proper citation and paraphrasing.
  • Include definitions of academic dishonesty, such as plagiarism, cheating, and falsification.
  • Provide guidance on how learners might use generative AI within the class, including what is and is not considered acceptable.
  • Add scenarios or case studies that allow learners to discuss and understand academic integrity issues, specifically related to the use of generative AI.
  • Connect academic integrity with ethical behavior in the larger field.
  • Provide a place for learners to reflect on what it means for them to participate in the course in a way that maximizes their learning while maintaining academic integrity.

Throughout the course, continue to reinforce these ideas. Reminders about academic integrity can be integrated into various lessons and modules. By articulating the integrity expectations at the activity and assignment level, you provide learners with a deeper understanding of how these principles apply to their work. 

Set Clear Expectations for Assignments

When designing assignments, it is important to be explicit about your expectations for academic integrity. Outline what learners should and should not do when completing the task. For instance, if you do not want them to collaborate on a particular assignment, state that clearly. Provide examples and resources to guide learners on how to properly cite sources or avoid plagiarism. Be specific with your expectations and share why you have those policies in place. For instance, if you want to discourage the use of generative AI in particular assignments, call out the ways it can and cannot be used. As an example, you might tell learners they can use generative AI to help form an outline or check the grammar in their finished assignment, but not to generate the body text. Share the purpose behind the policy; in this case, it might be that a writing assignment is their opportunity to synthesize their learning and cement specific course concepts. This kind of transparency shows respect for the tools and the learning process, while also clearly outlining for learners what is acceptable.

Encourage Conversations About Integrity

Creating opportunities for learners to engage in discussions about academic integrity can help solidify these concepts in their minds. You can incorporate forums or discussion boards where learners can share their thoughts and experiences related to integrity. This also gives them a chance to ask questions and seek clarification on any concerns they may have. Encourage open dialogue between instructors and learners regarding academic integrity and any related concerns. These conversations can also extend beyond the classroom, exploring how integrity applies in your field or career paths. By connecting academic integrity to real-world scenarios, you help learners understand its relevance and importance in their professional lives.

Foster a Supportive Learning Environment

A supportive learning environment can help reinforce academic integrity by making learners feel comfortable asking questions and seeking guidance. Offer resources like definitions, guides, or access to mentors who can provide additional support. When learners know they have access to help, they are more likely to adhere to integrity standards. With generative AI in the learning landscape, we will inevitably encounter more “gray areas” in academic integrity. Be honest with your learners about your concerns and your hopes. Being open to conversations can only enhance the learning experience and the integrity in your courses.

We all play a role in cultivating a culture of academic integrity in online courses. By documenting a clear plan, weaving integrity into the course content, setting clear expectations, encouraging conversations, and providing support, you can create an environment where honesty and ethical behavior are valued and upheld. This not only benefits learners during their academic journey but also helps them develop skills and values that will serve them well in their future careers.

As a follow-up to discussing equity in grading and group work, Feldman (2019) offers a compelling case against the use of extra credit. “But wait a minute,” I can hear you saying, “Extra credit is optional—students have to opt-in if they want to do it! And it can be fun! What’s wrong with that?” Many instructors may think of extra credit as a way to benefit students and give them extra opportunities in a course, especially at the end of a term, to improve their grade, take on additional challenges, and demonstrate additional skills they have learned. (I know I thought about extra credit that way at one time!) However, there is more at play with extra credit than you might think. Let’s return to Feldman’s three pillars of equitable grades:

  1. “They are mathematically accurate, validly reflecting a student’s academic performance.
  2. They are bias-resistant, preventing biased subjectivity from infecting our grades.
  3. They motivate students to strive for academic success, persevere, accept struggles and setbacks, and to gain critical lifelong skills” (Feldman, p. 71).

With these three pillars in mind, let’s examine some potential issues with extra credit:

  1. Accuracy: There are many ways extra credit can obscure what information a grade includes. First, it can be used to incentivize certain behaviors, which obscures a grade by not assessing academic performance or learning. (For example, extra credit for turning things in on time.) Second, it can obscure whether a grade reflects what students know by turning grades into a commodity (more about this below). In this way, grades are a reflection of how many points students are able to accumulate, not necessarily how much they have learned or whether they have met all of a course’s learning outcomes. This kind of extra credit can unintentionally signal to students that their behavior and non-academic performance in a course is more important than their learning.
  2. Bias: Sometimes extra credit is awarded to incentivize students to participate in extra events or opportunities, like attending a webinar, guest lecture, local event, etc. However, in addition to treating grades like a commodity, this kind of incentive also makes it difficult for students without outside resources or help to engage. What about students without the money for event tickets, transportation, child or family care, and/or without the time away from work, family, etc.? They are unable to participate, even if they want to, due to external factors outside of their control. And often these are the students who could potentially benefit the most from additional points if they are already struggling because of these exact conditions. For extra credit that provides extra challenges beyond the course materials, only the students already doing well will be able to participate and benefit from the opportunity, additionally shutting out students who are already behind.
  3. Motivation: Having extra credit, especially at the end of the course, can also be damaging to student motivation, as it places an emphasis on grades and points instead of learning. For example, some students may prioritize obtaining a desired grade above learning important content, while other students may use extra credit to bolster a weak area they were unable to fully grasp, thereby giving up on learning that material entirely. Both of these potential mindsets set students up to focus on a product (grade) more than learning and any future perspectives they might have about their learning.

One additional consideration is the extra work and time that extra credit demands of instructors, who must both design the additional assignments and grade the extra work, especially at the end of a term when there is usually a plethora of assignments, exams, and projects to grade.

“If the work is important, require it; if it’s not, don’t include it in the grade.”

Feldman, p. 122.

So, what more equitable options can we give students as an alternative to extra credit? Instead of creating additional assignments, allow students to revise and resubmit work. This shift supports students by encouraging them to learn from past mistakes, build on their learning, and see their growth over time. Revisions and resubmissions don’t have to happen only at the end of the term, so instructors can time them based on course design, formative and summative assessment timing, and their own workloads. Revision opportunities also help students who may be struggling with outside barriers by giving them additional attempts to complete work they may have missed, and they ensure that students cannot opt out of important work or concepts, because they cannot substitute those points from other areas of the course. Lastly, this approach saves instructors from designing and implementing additional assignments and complicated grading setups at the end of a term, when they are often busiest. While the use of extra credit often comes from a place of good intentions, I hope this brief outline helps recontextualize how it may have a larger, negative impact in your course than you initially thought, and offers a strategy for replacing it in your course designs.

References

Feldman, J. (2019). Grading for equity: What it is, why it matters, and how it can transform schools and classrooms. Thousand Oaks, CA: Corwin.

The following is a guest blog post from Julia DeViney. Julia completed an Instructional Design internship with OSU Ecampus during the Spring of 2023.

What are student perceptions of VoiceThread? I observed the pros and cons of VoiceThread (VT) as both a student in my final term of a cohort-structured program and, on the instructor side, as an Ecampus intern. The purpose of this post is to synthesize my experiences with research on VT. Integrated with Canvas as a cloud-based external web tool, VT is an interactive platform that allows instructors and students to create video, audio, and text posts and responses asynchronously. It is used widely at OSU and available for use in all Ecampus courses.

My unique role as both current student using the tool and intern seeing the tool from the instructor’s perspective allowed me to get a thorough understanding of VT. While I was challenged by time requirements and experienced diminishing value with more frequent discussions, used strategically, VT can be a worthwhile tool for instructors and students.

The strengths of VT include fostering dense interaction and strong social presence, and ease of use; drawbacks can be avoided by considering the audience, use frequency, and purpose of using VT in a learning environment.

VT allows users to upload premade slides or images and record text, audio, or video comments to their own and peers’ slides, allowing for a rich back and forth dialogue that fosters dense social presence and interaction in a learning environment.

In my course, students used this video feature exclusively for initial posts, and occasionally used audio recordings for peer responses. Hearing vocal inflection and seeing each other on screen in natural environments helped us witness emotions, interact authentically, and build on each other’s ideas to create richer learning. Delmas (2017) and Ching and Hsu (2013) found similar results in their respective studies of using VT to build online community and support collaborative learning.

Another strength of VT is ease of use. Brief VT navigation instructions provided by the instructor shortened the learning curve for students new to this tool. Making a video slide or commenting on peers’ slides was straightforward and simple. VT automatically previews submitter-created slides or comments prior to saving, which allows students to redo their slide or comment if they are not satisfied with their first attempt. I found this feature particularly helpful.

Students’ prior interactions and frequency of use are considerations for instructors’ use of VT. As a student who already intensely engaged with most of the peers in my cohort through discussions, group projects, presentations, and peer feedback assignments, dense social presence was not as valuable to me in my final term. However, this course included a few students from other disciplines, and I appreciated quickly getting to know them through their posts and responses. This class utilized VT intermittently; in later-term posts, I found myself less motivated to respond as robustly as in the beginning of the term. Chen and Bogachenko (2022) echoed my experience: mandated minimum posting requirements and prompt frequency may influence social presence density results.

Student connection may not in itself increase student engagement, and VT is best suited to certain types of knowledge construction. Responding to the minimum required number of students was common practice among graduate students in a 2013 study by Ching and Hsu; this differs from findings from a study by Kidd (2012), which focused on student-instructor interactions. Student obligations outside of school are cited as the primary reason for meeting minimum requirements only (Ching & Hsu, 2013). In my experience, a few classmates responded to more than the minimum required responses, as time allowed. Students tended to develop a stronger consensus of ideas shared in video-based interactions than in text-based interactions; future research is needed to evaluate the degree of critical or summarizing skills developed in video-based forums (Guo et al., 2022). In my course, VT discussion prompts were largely reflective, and that maximized the strengths of the tool.

Time may be another drawback for some students. While many of my classmates created unscripted video posts and responses to discussion prompts, a few of us spent extra time scripting posts and responses, which added time to assignments. Ching and Hsu (2013) found that for contemplative or anxious students who “structure their ideas prior to making their ideas public,” the time requirement is a disadvantage (p. 309). I did not experience technological glitches, but that has been mentioned as an additional time consideration.

For instructors, the time needed to learn to set up and use VT themselves was cited as a major drawback (Salas & Moller, 2015). However, the instructors studied used VT outside of their institution’s learning management system. At OSU, VT is seamlessly integrated into Canvas and SpeedGrader. Easy-to-follow guides and Ecampus support significantly lower the barrier to adoption for faculty. VT is a superb tool for creating dense social presence in hybrid or online courses for collaborative assignments or consensus-building discussions.

From the instructor side, I recommend carefully considering the pros and cons of assignment type: a) create, b) comment, or c) watch. Remember that “create” assignments require students to post at least one comment and create a slide. The “comment” assignment type still allows students to create a slide, and instructors have more flexibility in establishing minimum slide and/or comment requirements, provided those minimums match the Canvas assignments. “Watch” assignments could work well for crucial announcements or video-based instruction. For all assignments, I also recommend communicating in both Canvas and VT that clicking the “Submit Assignment” button is a very important step (for continuity with SpeedGrader). Setting up assignments in VT was simple and straightforward once I understood the assignment types.

In short, VT powerfully facilitates dense social presence and community through asynchronous video, audio, and text-based interactions among instructors and students. When used as a tool for reflection or consensus-building, students benefit from VT interactions. Overuse and time constraints may compromise its value, particularly for students with anxiety or those needing extra preparation. OSU Ecampus offers support and guides to assist instructors with incorporating VT into Canvas. To reap the benefits of this fantastic tool, I recommend exploring the practical uses of VT in hybrid and online courses.

References

Chen, J., & Bogachenko, T. (2022). Online community building in distance education: The case of social presence in the Blackboard discussion board versus multimodal VoiceThread interaction. Journal of Educational Technology & Society, 25(2), 62-75. https://oregonstate.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/online-community-building-distance-education-case/docview/2652525579/se-2
Ching, Y.-H., & Hsu, Y.-C. (2013). Collaborative learning using VoiceThread in an online graduate course. Knowledge Management & E-Learning, 5(3), 298-314. https://oregonstate.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/collaborative-learning-using-voicethread-online/docview/1955098489/se-2
Delmas, P. M. (2017). Using VoiceThread to create community in online learning. TechTrends, 61(6), 595-602. https://doi.org/10.1007/s11528-017-0195-z
Guo, C., Shea, P., & Chen, X. (2022). Investigation on graduate students’ social presence and social knowledge construction in two online discussion settings. Education and Information Technologies, 27(2), 2751-2769. https://link-gale-com.oregonstate.idm.oclc.org/apps/doc/A706502995/CDB?u=s8405248&sid=bookmark-CDB&xid=7f135a22
Salas, A., & Moller, L. (2015). The value of VoiceThread in online learning: Faculty perceptions of usefulness. Quarterly Review of Distance Education, 16(1), 11-24. https://link-gale.com.oregonstate.idm.oclc.org/apps/doc/A436983171/PROF?u=s8405248&sid=bookmark-PROF&xid=2759f021

Image by Benjamin Abara from Pixabay 

My family and I were preparing for a move. We packed up some of our things, removing extraneous items from our walls and surfaces and preparing our house to list and show. Not willing to part with these things, we rented a small storage unit to temporarily warehouse all this extra “stuff.” Well, as it turned out, we ended up not moving at all, and after a few months went to clear out the storage unit and retrieve our extra things. The funny thing was, we could hardly remember what had gone in there, and as it turns out, we did not miss most of the items we had packed away. We ended up selling most of what was in that storage unit, and shortly thereafter, we did even more “spring cleaning.” One of the bedrooms, which also doubles as an office, needed particular attention. The space was dysfunctional, in that multiple doors and drawers were blocked from fully opening. After a little purging and reorganization, this room now functions beautifully, with enough space to open every door and drawer. I have been calling this process “moving back into our own house,” and it’s been a joy to rethink, reorganize, and reclaim our living spaces.

Course Design Connection

As I have been working with more instructors who are redeveloping existing courses, I have been trying to bring this mindset into my instructional design work. How can we reclaim our online learning spaces and make them more inviting and functional? How can we help learners open all the proverbial doors and operate fully within the learning environment? You guessed it: While our first instinct might be to add more to the course, the answer might lie in the other direction. With a little editing and a keen eye on alignment, we can very intentionally remove things from our courses that might be needless or even distracting. We can also rearrange our pages and modules to maximize our learners’ attention.

Memory and Course Design

Our working memories, according to Cowan (2010), can only store 3-5 meaningful items at a time. Thus, it becomes essential to consider what is genuinely necessary on any given LMS page. If we focus on helping learners to achieve the learning outcomes when choosing the content to keep in each module, we can intentionally remove distractors. There can be a place for tangential or supplemental information, but those items should not live in the limelight. To help get us started on this “cleaning process,” we can ask ourselves a few simple questions. Are there big-ticket items (assignments, discussions, readings) that are not directly helping learners reach the outcomes? Are we formatting pages and arranging content in beneficial and progressive ways? Might we express longer bodies of text more concisely and clearly? Can we break text up with related visuals? Below are some tips to help guide your process as you “clean up” your course and direct your learners where to focus.

Cut out the Bigger Extraneous Content

It is easy to assume that for your learners to meet the course outcomes, they must read and comprehend many things and complete a wide variety of assignments. When planning your learning activities, it’s crucial to keep in mind the limits of the brain, and also that giving learners opportunities to practice applying content will be more successful than asking them to memorize and restate it. For courses with dense content, lean into your course outcomes to guide your editing process. Focusing on the objectives can help you remove extraneous readings and activities, allowing your learners to concentrate on the key points (Cowden & Sze, 2012).

Review Instructions

For the items you choose to keep in your course, it is helpful to review assignment instructions and discussion prompts. Consider inviting a non-expert to read these items. An outside eye might help you simplify what you are asking your learners to accomplish by calling to your attention any points of confusion. You may be tempted to add more detail, but try to figure out where you can remove text when possible. Why use a paragraph to explain something that only needs a few sentences? Simplifying your language can enable learners to get to the point faster. (For more on this, see the post by intern Aimee L. Lomeli Garcia about Improving Readability). When reviewing your instructions and prompts, think about what learners want to know:

  • What should they pay attention to?
  • Where do they start?
  • What do they do next?
  • What is expected?
  • How are they being assessed/graded?

(Grennan, 2018)

Utilize Best Practices for Formatting

Use native formatting tools like styles, headers, and lists to help visually break up content and make it more approachable. Here are some examples:

If I were to list my favorite animals here without a list, it would look like this: dogs, turtles, hummingbirds, frogs, elephants, and cheetahs. 

Suppose I give you that same list using a header and number list format. In that case, it becomes much easier to digest mentally, and it looks nicer on the page:

Julie’s Favorite Animals

  1. Dogs
  2. Turtles
  3. Hummingbirds
  4. Frogs
  5. Elephants
  6. Cheetahs

Provide High-Level Overviews

If an assignment does need a more thorough explanation, and your instructions are running long, you can always create a high-level overview, calling out the main points of the page. You could place this in a call-out box or its own section (preferably at the top). This is where learners can quickly look for reminders about what to do next and how to do it. Providing a high-level overview alongside detailed instructions will cater to a variety of learning preferences and help set up your learners for success.

Module Organization

Scaling up beyond single pages and assignments to module organization, consider the order you want learners to encounter ideas and accomplish tasks. Don’t be afraid to move pages around within your modules to help learners find the most efficient and helpful pathway through your material (Shift Elearning, n.d.).

Wrapping It Up

The culture of “more is better” is pervasive, and it’s almost always easier to add rather than to remove information. In online learning, when we buy into the “culture of more,” we can impede the success of our learners. But more isn’t always better; sometimes more is just more. Instead, don’t be afraid to dust off that delete button and start reclaiming and reorganizing your course for ultimate learner success. Sometimes less is best. For more on the art of subtraction, see Elisabeth McBrien’s blog post from February of 2022.

References

Cowan, N. (2010). The magical mystery four. Current Directions in Psychological Science, 19(1), 51–57. https://doi.org/10.1177/0963721409359277

Cowden, P., & Sze, S. (2012). Online learning: The concept of less is more. Allied Academies International Conference. Academy of Information and Management Sciences. Proceedings, 16(2), 1-6. https://oregonstate.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/online-learning-concept-less-is-more/docview/1272095325/se-2

Grennan, H. (2018, April 30). Why less is more in Elearning. Belvista Studios – eLearning Blog. Retrieved April 4, 2023, from http://blog.belvistastudios.com/2018/04/why-less-is-more-in-elearning.html

Lomeli Garcia, A. L. (2023, January 17). Five Tips on Improving Readability in Your Courses. Ecampus Course Development and training. Retrieved April 4, 2023, from https://blogs.oregonstate.edu/inspire/2023/01/17/five-tips-on-improving-readability-in-your-courses/

McBrien, E. (2022, February 24). Course design challenge: Try subtraction. Ecampus Course Development and training. Retrieved April 4, 2023, from https://blogs.oregonstate.edu/inspire/2022/02/24/course-design-challenge-try-subtraction/

Parker, R. (2022, June 30). Why less is more for e-learning course materials. Synergy Learning. Retrieved April 4, 2023, from https://synergy-learning.com/blog/why-less-is-sometimes-more-when-it-comes-to-your-e-learning-course-materials/

Shift Elearning. (n.d.). The art of simplification in Elearning Design. The Art of Simplification in eLearning Design. Retrieved April 4, 2023, from https://www.shiftelearning.com/blog/the-art-of-simplification-in-elearning-design

University of Waterloo, Queen’s University, University of Toronto, & Conestoga College. (n.d.). Module 3: Quality course structure and content. In High quality online courses. Pressbooks Open Library. https://ecampusontario.pressbooks.pub/hqoc/chapter/3-1-module-overview/


Ecampus students have access to a number of online resources to support their academic success at OSU. Receiving guidance and feedback on their writing assignments can be helpful across courses, throughout their planning and revision process. In this post, we will share more information about the current writing resources available to students, no matter where they are located, along with resources for faculty.

OSU Writing Center

The OSU Writing Center supports any type of writing project, during any stage of the writing process. Instructors can share this resource with students, or even integrate the writing center’s support as a step to receive guidance and feedback from a consultant in coordination with a class assignment.

Online Writing Support (OWS)

According to the OWS website, both written feedback and virtual support (held over Zoom) are available to all OSU community members, including Ecampus students.

Any OSU community member can submit writing for written feedback or schedule a Zoom appointment. This includes students, faculty, staff, and alumni. However, graduate students working on dissertations, theses, IRB applications, grant applications, manuscripts, and other advanced graduate projects should connect with the Graduate Writing Center for support.

Students can choose one of the following appointment types when they submit their request online:

  • Consultation (50 minutes, Zoom)
  • Written Feedback (via email; replies usually arrive within 24 hours)
Image of the appointment options on the OWS website. One is a writing consultation over Zoom and the other is written feedback via Email.
Scheduling options for Online Writing Support (OWS)

The Writing Center’s website includes answers to common questions. Here are some of the responses to questions students might have about this resource:

  1. How often can I use Online Writing Support?
    • You can request written feedback on up to three writing projects (or three drafts of the same project) per week. You can make Zoom appointments as often as you like. We welcome repeat writers as we enjoy being a part of your writing process. You cannot schedule an appointment more than two weeks in advance, but we invite you to work with us often. 
  2. What kind of writing can I submit for written feedback?
    • You can submit any kind of writing, as long as it doesn’t exceed 25 double-spaced pages (around 6,250 words). Ideally, for longer projects, you should be prepared to request several written feedback consultations, each focusing on a different section of the project.
  3. How can I provide my instructor with confirmation that I used Online Writing Support?
    • All OWS consultations will receive an email confirmation after the appointment occurs or after the feedback has been sent to you—usually the next morning. If your instructor requests confirmation that you sought assistance from the OWS, you may forward or capture a screen shot of the confirmation email.

For more information about the type of support the Writing Center provides, please see their overview video below.

An overview of the resources provided by the OSU Writing Center and how to submit requests via the website

Academic Success Center – Writing Resources

Student Resources

  • Academic Success Workshop Series – Each term the ASC hosts a series of workshops on a variety of topics. Their remote series is available for online registration and hosted via Zoom.
    • For the Spring 2023 term, the workshop schedule is listed below and features a writing-focused workshop in Week 6.
    • The details of the workshop series, along with links to register, are available on the Remote Workshop Series website.
  • The Learning Corner – The Learning Corner provides a number of online tools, such as guides and fillable worksheets, to support students in reaching their academic goals.
  • Services & Programs – Supplemental Instruction (SI) is available for certain courses via Zoom, as well as academic coaching support.

Faculty Resources

A number of faculty support options are offered on the Faculty Resources page, including an optional Canvas module, PowerPoint slides, and a sample Syllabus statement. The Online Writing Support group and Academic Success Center partner with faculty to collaborate on assignments and course-specific tips for implementing writing support for their online students.

Instructors can email writing.center@oregonstate.edu to discuss ideas for implementation in their course.

Four students working together on a project

A term paper is a common final assignment, but does the final assignment have to be a paper? The answer depends on the type of course and the learning outcomes. If the final assignment can be an alternative to the term paper, we can consider other types of assignments that allow students not only to accomplish the learning outcomes, as expected, but also to engage more deeply with the content and exercise critical thinking. A caveat related to the discipline is important here. Fields that require a writing component may necessarily rely on the term paper, which can be scaffolded through a set of stages. For assignments with a sequence of tasks, refer to staged assignments (Loftin, 2018) for details on how to design them.

A first step in moving towards considering other types of assessments is to self-reflect on the purpose of the course and what role it will play in students’ learning journeys. You can use some of the following questions as a guide to self-reflect:

Course-level questions:

  • What is the nature of the course (e.g., practice-based, reading-intense, general education, writing-intensive, labs, etc.)?
  • What are the outcomes?
  • What level do the outcomes target (e.g., recall, analysis, evaluation)?

Discipline-level questions:

  • What do people in the discipline I teach regularly do in the work environment? Do they write grants? Develop lesson plans? Write technical reports? Write articles or white papers? Build portfolios? Demonstrate skills?
  • Do all students need to complete the final assignment in the same format, or can the format vary (e.g., paper, presentation, podcast)?

Taking some time to reevaluate the assessment practices in your course can benefit students who seek meaningful learning opportunities and expect relevant assignments (Jones, 2012). Students might also welcome variety and flexibility in how they learn and are evaluated (ASU Prep Digital, 2021; Jopp & Cohen, 2022; Soffer et al., 2019).

Let’s explore alternative and authentic assessments next.

Alternative assessments

Alternative, authentic assessments tend to focus on higher-order and critical thinking skills that are much in demand these days. These assessments aim to provide more effective methods of building knowledge, fostering learning, and analyzing learning (Anderson, 2016; Gehr, 2021). Research also suggests that authentic assessments can increase students’ employability skills (Sotiriadou et al., 2020). However, the implementation of alternative assessments needs to transcend the status quo and become a critical element that allows instructors and students to focus on societal issues, acknowledge the value of assessment tasks, and embrace these assessments as vehicles for transforming society (McArthur, 2022). A student-centered environment also challenges educators to search for alternative assessments that make the learning experience more meaningful and lasting, fostering student agency and lifelong learning (Sambell & Brown, 2021).

Authentic assessments

I recall that when I was learning English, some of the practice activities and assessments did not really equip me to use the language outside the classroom. I knew I would not go around the world selecting answer choices from my interlocutors, as I used to do on the language quizzes in class. The Task-Based Language Teaching framework has motivated me to design tasks (for both learning and assessment) that help students use their knowledge and skills beyond the classroom: more useful, realistic tasks.

Authentic assessments provide students with opportunities to apply what they learn to situations they are likely to encounter in daily life. These situations will not be well-structured, easy to solve, or formulaic (like the English language practices I had); on the contrary, they will be complex, involve a real audience, require judgment, and demand that students use a repertoire of skills to solve the problems and tasks at hand (Wiley University Services, n.d.).

As you can see, alternative and authentic assessments overlap, giving educators options to innovate their teaching and giving students opportunities to take greater interest in, and engage more deeply with, their learning process. Below you will find a collection of ideas for assessments that go beyond the term paper and make room for course innovation, learning exploration, and student agency.

Examples of Alternative and Authentic Assessments

You can select one or more assessments and create a sequence of assignments that build the foundation, give students an opportunity to reflect, and engage students in the active application of concepts. Diversifying the types of assessment practices can also serve as an inclusive teaching approach for your students to engage with the course in multiple ways (McVitty, 2022).

Introduction to New Concepts

You can introduce students to new concepts by designing simple and direct tasks such as:

  • Listen to podcasts, watch documentaries/films: write summaries or reviews
  • Conduct field observations: report what was observed, thoughts, and feelings
  • Create fact sheets and posters: share them with peers and provide comments
  • Study a case: write a report, design a visual abstract, create a data visualization or presentation
  • Create an infographic or digital prototype: present it to peers for feedback
  • Write a short newspaper article: contribute to the class blog, post it on the class digital board
  • Provide insights and comments: contribute with annotations and posts (e.g., Perusall, VoiceThread)

Reflective Practice

Reflection allows students to think further about their own learning process. If you are looking for activities that build students’ higher-order thinking and metacognitive skills, consider designing one of the tasks below. Remember to provide students with guiding questions for the reflection process.

  • Review assignments and describe the learning journey: Create a portfolio with reflective notes
  • Develop an understanding of concepts by identifying areas of difficulty and feedforward goals: write a weekly learning log, create a learning journey map/graph 
  • Describe your learning experience through personal reflection: write an autoethnography
  • Connect course concepts and activities to learning experiences: create a think-out-loud presentation, podcast, or paper
  • Self-assess learning and progress: take a quiz, write a journal, create a “from here to there” learning map

Theory Application

  • Demonstrate a solid understanding of key elements, theory strengths, and weaknesses: write an application paper to explore lines of inquiry, create an infographic connecting theory and examples, write an article or artifact critique through the lens of the theory
  • Dissect a theory by identifying and organizing the key components of theoretical frameworks: develop a theory profile document or presentation (the instructor can provide a theory-dissection template)
  • Anchor course concepts in the literature: write a position paper, a response paper, or a commentary for a journal. 

Application Tasks

  • Guided interviews with professionals
  • Digital and augmented reality assets
  • Grant/funding applications
  • Project/conference proposals
  • Annotated bibliographies, article critiques
  • Reviews (e.g., music, videos, films, books, articles, media)
  • Oral discussion group exam (e.g., cases, scenarios, problem-solving) w/reflection
  • Conduct Failure Mode and Effect Analysis studies/simulations
  • Book newsletter, blog, and book live event Q&A (e.g., students plan the Q&A)
  • Create a student-led OER
  • Patchwork Screencast Assessment (PASTA) Reflections

The list of alternative and authentic assessments above is not exhaustive, and I welcome your comments and suggestions for activities you have designed or researched for your online or hybrid courses. I would love to hear more about your approaches and thoughts on alternative and authentic assessments.

References

Anderson, M. (2016). Learning to choose, choosing to learn: The key to student motivation and achievement. ASCD.

ASU Prep Digital. (2021). Why do students prefer online learning? https://www.asuprepdigital.org/why-do-students-prefer-online-learning/

Gehr, L. (2021, March 11). How using authentic digital assessments can benefit students. Edutopia. https://www.edutopia.org/article/how-using-authentic-digital-assessments-can-benefit-students/

Jones, S. J. (2012). Reading between the lines of online course evaluations: Identifiable actions that improve student perceptions of teaching effectiveness and course value. Journal of Asynchronous Learning Networks, 16(1), 49-58. http://dx.doi.org/10.24059/olj.v16i1.227

Jopp, R., & Cohen, J. (2022). Choose your own assessment–assessment choice for students in online higher education. Teaching in Higher Education, 27(6), 738-755. https://doi.org/10.1080/13562517.2020.1742680

Loftin, D. (2018, April 24). Staged assignments. [Oregon State University Ecampus blog post] https://blogs.oregonstate.edu/inspire/2018/04/24/staged-assignments/

McArthur, J. (2022). Rethinking authentic assessment: Work, well-being, and society. Higher Education, 1-17. https://doi.org/10.1007/s10734-022-00822-y

McVitty, D. (2022). Building back learning and teaching means changing assessment. Wonkhe Ltd.  

Soffer, T., Kahan, T., & Nachmias, R. (2019). Patterns of students’ utilization of flexibility in online academic courses and their relation to course achievement. International Review of Research in Open and Distributed Learning, 20(4). https://doi.org/10.19173/irrodl.v20i4.3949

Sotiriadou, P., Logan, D., Daly, A., & Guest, R. (2020). The role of authentic assessment to preserve academic integrity and promote skill development and employability. Studies in Higher Education, 45(11), 2132-2148. https://doi.org/10.1080/03075079.2019.1582015

Sambell, K., & Brown, S. (2021). Covid-19 assessment collection. Assessment, Learning, and Teaching in Higher Education. https://sally-brown.net/kay-sambell-and-sally-brown-covid-19-assessment-collection/

Wiley University Services. (n.d.). Authentic assessment in the online classroom. https://ctl.wiley.com/authentic-assessment-in-the-online-classroom/