This is it, the week you’ve been waiting for!

Open Education Week is an annual celebration that raises awareness of global efforts to make learning more “open” — that is, more affordable and accessible to students everywhere. Every March, this weeklong, online event gives educators and students an opportunity to learn more about open educational practices and be inspired by the work being developed around the world, including by Oregon State University faculty.

Please join us for Open Education Week this year to learn how you can get involved and make a meaningful difference in the lives of OSU students.

  • What: Open Education Week
  • When: March 2-6, 2026
  • Where: Fully online
  • Who: Higher education faculty, students and thought leaders

Research on rubrics has often focused on validity and reliability (Matshedisho, 2020), but more recent work explores how students actually interpret and use rubrics (Brookhart, 2015; Matshedisho, 2020; Taylor, 2024; Tessier, 2021). This emerging scholarship consistently shows a gap between instructor intention and student interpretation. For example, Matshedisho (2020) found that “students expected procedural and declarative guidance, while instructors expected conceptual, reflective work” (p. 175).

If students understand rubrics differently than we intend, rubrics cannot fully support learning. Below are key reasons this mismatch occurs—and strategies to close the gap.

Tacit Knowledge and Language

Students bring varied backgrounds, disciplinary exposure, and assumptions to their learning (Brookhart, 2015; Matshedisho, 2020). Many do not enter college knowing what a rubric is or how to apply one (Tessier, 2021).

Key issues include:

  • Unfamiliar terms or disciplinary jargon
    Early‑year students may lack field‑specific language. In Matshedisho’s (2020) study, first‑year medical students struggled with the sociology‑specific criteria required for a reflective assignment.
  • Different meanings across disciplines
    Terms like “concept,” “analysis,” or “argument” shift across fields, confusing students taking multiple general‑education courses.
  • Ambiguous or subjective labels
    Students struggle to distinguish between labels like “good” and “very good,” and terms such as “critical analysis” can feel subjective (Taylor, 2024).
  • Minimal differentiation between performance levels
    When descriptors are too similar, students cannot discern differences between the ratings or see how to progress.

How Students Use Rubrics

Students often approach rubrics differently than instructors expect:

  • They treat the rubric as separate from course content, starting with the criteria column and reading each cell in isolation (Matshedisho, 2020).
  • They search for procedural instructions, expecting the rubric to tell them how to complete the assignment (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).
  • Many prefer hard‑copy rubrics over digital versions (Tessier, 2021; Panadero, 2025).

Bridging the Gap Through Instruction

Rubrics only support learning when students understand them as instructors intend (Brookhart, 2015). Effective strategies include:

Build Shared Understanding

  • Explain key terms and check for tacit knowledge—especially discipline‑specific language (Taylor, 2024).
  • Explicitly teach what a rubric is and how to use one; don’t assume prior knowledge (Tessier, 2021).
  • Calibrate expectations by discussing examples and rating sample work with students (Taylor, 2024).

Integrate Rubrics Into the Course

  • Refer to the rubric during lectures and discussions (Tessier, 2021).
  • Provide feedback that directly connects to rubric criteria (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).
  • Celebrate or reinforce active rubric use (Tessier, 2021).
  • Provide hard copies of the rubric whenever possible (Tessier, 2021; Panadero, 2025).

Support Instructors

  • Offer training in rubric design and student‑centered implementation (Brookhart, 2015; Taylor, 2024).
  • Use shared rubrics for multi‑section courses to support consistency.
  • Meet as a teaching team to create and calibrate the common rubric.
  • Recognize limitations of online rubric platforms; include clarifying hyperlinks or exemplars when possible (Panadero, 2025).

Clarify Task Expectations

Students often want a checklist. Provide procedural instructions separately, and use the rubric for conceptual evaluation (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).

Conclusion

Research shows that students respond favorably when asked about a rubric’s validity and reliability, but when the focus shifts to how students interact with, understand, and apply rubrics, it is clear we still have a long way to go. The suggestions above can get you started on creating and applying your rubrics even more effectively.

References

Brookhart, S. M. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343–368. doi:10.1080/00131911.2014.929565

Matshedisho, K. R. (2020). Straddling rows and columns: Students’ (mis)conceptions of an assessment rubric. Assessment & Evaluation in Higher Education, 169–179. doi:10.1080/02602938.2019.1616671

Panadero, E. O. (2025). Analysis of online rubric platforms: Advancing toward erubrics. Assessment & Evaluation in Higher Education, 31–49. doi:10.1080/02602938.2024.2345657

Taylor, B. K. (2024). Rubrics in higher education: An exploration of undergraduate students’ understanding and perspectives. Assessment & Evaluation in Higher Education, 799–809. doi:10.1080/02602938.2023.2299330

Tessier, L. (2021). Listening to student perspectives of rubrics: Perceptions, uses, and grades. Journal on Excellence in College Teaching, 32(3), 133–168.

This blog post is a continuation of “Refining Rubrics & Assessments: AI as Design Support – Part 1”.

Using AI to Refine Rubric Language

In the previous post, I gave an assignment prompt to Copilot (as that’s the recommended tool at Oregon State University) and asked it to complete the task. For reference, here is the task.

Rubrics are often the weakest link in assessment design, particularly when descriptors rely on vague phrases like “meets expectations” or “demonstrates understanding.” One way to evaluate rubric clarity is to ask AI to self-assess its own response using the rubric criteria.

If the model can plausibly justify a high score despite shallow reasoning or inconsistent logic, the rubric may not be clearly distinguishing levels of performance. More precise rubrics specify what evidence matters and how quality differs, emphasizing reasoning, coherence, and alignment with course concepts rather than polish or length. Clear criteria benefit students, but they also make it harder for superficially strong work to masquerade as deep learning.


Rubric Analysis Prompt

You are now acting as an external assessment reviewer, not a student.
You will be given:

  1. An assignment prompt
  2. A grading rubric
  3. A model-generated student submission (your own prior response)

Your task is not to grade the submission.
Instead, critically evaluate the rubric itself by answering the following:

  1. Rubric Vulnerabilities
    • Identify specific rubric criteria or descriptors that allow a high score to be justified through fluent but shallow reasoning.
    • For each vulnerability, explain what kind of weak or superficial evidence could still plausibly receive a high score under the current wording.
  2. Distinguishing Performance Levels
    • For at least three rubric categories, explain why the difference between “Excellent” and “Good” (or “Good” and “Satisfactory”) may be ambiguous in practice.
    • Describe what concrete evidence a human grader would need to reliably distinguish between those levels.
  3. AI Self-Assessment Stress Test
    • Using your own generated submission as an example, explain how it could convincingly argue for a high score even if underlying understanding were limited.
    • Point to specific rubric language that enables this justification.
  4. Rubric Strengthening Recommendations
    • Propose revised rubric language that makes expectations more explicit and evidence-based.
    • Emphasize observable reasoning, causal explanation, constraint awareness, or conceptual boundaries rather than general phrases such as “demonstrates understanding” or “well-justified.”

Constraints:

  • Do not rewrite the assignment prompt.
  • Do not assume access to course-specific lectures or materials.

Focus on how the rubric functions as an assessment instrument, not on pedagogy or student motivation.

Tone:
Analytical, critical, and concrete. Avoid generic advice.



You could use this prompt directly by attaching a rubric, an assessment prompt, and a “submission,” or modify it to fit your own situation.

Here is a section of the results it gave, with the “thinking” section expanded to show the process behind the generated answer:


(Copilot gave me an enormous amount of feedback, as expected, since the rubric included a lot of generic language.)


Rethinking “Higher-Order Thinking” in an AI-Rich Environment

Frameworks like Bloom’s Taxonomy remain useful, but AI complicates the assumption that higher-order tasks are automatically more resistant to outsourcing. AI can analyze, evaluate, and even create convincing responses if prompts are static and unconstrained.

What remains more difficult to outsource is judgment. Assignments that require students to choose among approaches, justify those choices, identify uncertainty, or explain when a method would fail tend to surface understanding more reliably than tasks that simply ask for analysis or synthesis. When reviewing AI-generated responses, a helpful question is: What would a human need to know to trust this answer? Designing assessments around that question shifts the focus from output to accountability.

Instructors can strengthen authenticity by introducing underspecified scenarios, realistic limitations, or prompts that require students to articulate how they would evaluate the reliability of their own results. These design choices don’t prevent AI use, but they make it harder to succeed without understanding when and why an answer might be wrong.


An Iterative Design Loop for Assessments and Rubrics

Using AI as an assessment design diagnostic and refinement tool can work best as an iterative process. Draft the assignment and rubric, test them with AI, analyze how success is achieved, and revise accordingly. The goal is not to reach a point where AI “fails,” but rather a point where success requires engagement with disciplinary concepts and reasoning. This mirrors quality-assurance practices in other domains: catching misalignment early, refining specifications, and retesting until the design reliably produces the intended outcome. Importantly, this loop should be finite and purposeful, not an endless escalation.

Conclusion

Using AI in assessment design is not about surveillance or enforcement; it is a transparency tool. When instructors acknowledge that AI exists and design accordingly, they reduce the incentive for adversarial behavior and increase clarity around expectations. Being open with students about the role of AI (what is permitted, what responsibility cannot be delegated, and how understanding will be evaluated) helps maintain trust while preserving academic standards. The credibility of online and in-person education alike depends not on stopping students from using tools, but on ensuring that passing a course still signifies meaningful learning.

Takeaway Cheat Sheet

  • Think of AI as support, not a villain.
  • Stress‑test early: run the rubric through a model for verification before you hand it to students.
  • Refine granularity: precise descriptors = clearer expectations.
  • Target higher‑order thinking: embed authentic scenarios.
  • Iterate, don’t stagnate: keep the loop tight but finite.
  • Mind ethics: disclose, de‑bias, and set realistic limits.

For centuries, knowledge and access to education were restricted to just a few. In today’s world, almost anybody can access information through the web and, more recently, through AI tools. However, it is important to recognize that these tools, while offering expansive access to content of all kinds, also pose challenges. Generative AI has fundamentally changed how students interact with assignments, but it has also given instructors a powerful new lens for examining their own assessment design. Rather than treating AI solely as a threat to academic integrity, we can use it as a diagnostic tool – one that quickly reveals whether our assignments and rubrics are actually measuring what we think they are. If an AI can complete an assignment and meet the stated criteria for success without engaging course-specific learning, is it really a student problem, or a signal to modify the design?


A small shift in perspective from “they’re using this to cheat” to “how can this help me prevent cheating” is especially important in online and hybrid environments, where traditional academic integrity controls like proctored exams are either unavailable or undesirable. Instead of trying to outmaneuver AI or police its use, instructors can ask a more productive question: What does success on this assignment actually require?


Why AI Is a Helpful Design Tool


AI can function as an unusually honest “devil’s advocate.” It doesn’t get tired, anxious, or confused about instructions, and it excels at finding the most efficient path to meeting stated requirements. When an instructor gives an AI model an assignment prompt and a rubric, the resulting output can expose whether the rubric rewards deep engagement or simply fluent compliance.


If an AI can generate a response that appears to meet expectations without referencing key course concepts, grappling with assumptions, or making meaningful decisions, then students can likely do the same. In this way, AI acts less like a cheating student and more like a mirror held up to our assessment design.

An example using Copilot:


Stress-Testing Assignments Before Students Ever See Them

One practical workflow to test the resilience of your assignments is to run them through AI before they are deployed. Provide the model with the prompt and the rubric (nothing else) and ask it to produce a strong submission. Then evaluate that response using your own grading criteria.

The point is not to judge whether the AI’s answer is “good,” but to analyze why it succeeds so easily in meeting the stated requirements (at first sight). If the response earns high marks through generic explanations, surface-level analysis, or broadly applicable reasoning, that’s evidence that the assessment may not be tightly aligned with course learning outcomes, may not demand deeper thinking and analysis, or may not elicit students’ own creativity. This kind of stress-testing takes minutes and often surfaces issues that would otherwise only become visible after grading a full cohort.


The Task

Assignment Prompt

Subject: Chemical Engineering
Level: Upper-level undergraduate (3rd year)
Topic: Reactor Design & Engineering Judgment

Assignment: Conceptual Design and Analysis of a Chemical Reactor

You are tasked with the preliminary design and analysis of a chemical reactor for the production of a commodity chemical of your choice (e.g., ammonia, methanol, ethylene oxide, sulfuric acid, or another well-established industrial product).

Your analysis should address the following:

  1. Process Overview
    • Briefly describe the selected chemical process and its industrial relevance.
    • Identify the primary reaction(s) involved and classify the reaction type(s) (e.g., exothermic/endothermic, reversible/irreversible, catalytic/non-catalytic).
  2. Reactor Selection
    • Propose an appropriate reactor type (e.g., CSTR, PFR, batch, packed bed).
    • Justify your selection based on reaction kinetics, heat transfer considerations, conversion goals, and operational constraints.
  3. Operating Conditions
    • Discuss key operating variables such as temperature, pressure, residence time, and feed composition.
    • Explain how these variables influence conversion, selectivity, and safety.
  4. Engineering Trade-Offs
    • Identify at least two major design trade-offs (e.g., conversion vs. selectivity, energy efficiency vs. safety, capital cost vs. operating cost).
    • Explain how an engineer might balance these trade-offs in practice.
  5. Limitations and Assumptions
    • Clearly state any simplifying assumptions made in your analysis.
    • Discuss the limitations of your proposed design at this preliminary stage.

Your response should demonstrate clear engineering reasoning rather than detailed numerical calculations. Where appropriate, qualitative trends, simplified relationships, or order-of-magnitude reasoning may be used.

Length: ~1,000–1,200 words
References: Not required, but accepted if used appropriately

The Rubric

Understanding of Chemical Engineering Principles

  • Excellent (A): Demonstrates strong understanding of reaction engineering concepts and correctly applies them to the chosen process
  • Good (B): Demonstrates general understanding with minor conceptual gaps
  • Satisfactory (C): Shows basic familiarity but with notable misunderstandings or oversimplifications
  • Unsatisfactory (D/F): Demonstrates weak or incorrect understanding of core concepts

Reactor Selection & Justification

  • Excellent (A): Reactor choice is well-justified using multiple relevant criteria (kinetics, heat transfer, safety, operability)
  • Good (B): Reactor choice is reasonable but justification lacks depth or completeness
  • Satisfactory (C): Reactor choice is weakly justified or based on limited reasoning
  • Unsatisfactory (D/F): Reactor choice is inappropriate or unjustified

Analysis of Operating Conditions

  • Excellent (A): Clearly explains how operating variables affect performance, safety, and efficiency
  • Good (B): Explains effects of variables with minor omissions or inaccuracies
  • Satisfactory (C): Provides limited or superficial discussion of operating conditions
  • Unsatisfactory (D/F): Fails to meaningfully analyze operating variables

Engineering Trade-Offs

  • Excellent (A): Insightfully identifies and explains realistic trade-offs, demonstrating engineering judgment
  • Good (B): Identifies trade-offs but discussion lacks nuance or integration
  • Satisfactory (C): Trade-offs are mentioned but poorly explained or generic
  • Unsatisfactory (D/F): Trade-offs are absent or incorrect

Assumptions & Limitations

  • Excellent (A): Assumptions are clearly stated and critically evaluated
  • Good (B): Assumptions are stated but not fully examined
  • Satisfactory (C): Assumptions are implicit or weakly articulated
  • Unsatisfactory (D/F): Assumptions are missing or inappropriate

Clarity & Organization

  • Excellent (A): Response is well-structured, clear, and professional
  • Good (B): Generally clear with minor organizational issues
  • Satisfactory (C): Organization or clarity interferes with understanding
  • Unsatisfactory (D/F): Poorly organized or difficult to follow



Identifying Gaps in What We’re Measuring

AI performs particularly well on tasks that rely on recognition, pattern matching, and general world knowledge. This means it can easily succeed on assessments that emphasize recall, procedural execution, or elimination of obviously wrong answers. When that happens, the assessment may be measuring familiarity rather than understanding.

Revising these tasks does not require making them longer or more complex. Instead, instructors can focus on higher-order thinking and metacognition, for example requiring students to articulate why a particular approach applies, what assumptions are being made, or how results should be interpreted. These shifts move the assessment away from answer production and toward critical and disciplinary thinking – without assuming that AI use can or should be eliminated. Identifying these gaps can also prompt you to revisit the structure of the assignment and consider how its elements (purpose, instructions/task/prompt, and criteria for success) connect cohesively to strengthen it.

In the second part of this blog, I take the same task above and work with the AI to refine a rubric.

What is an OER?

OER, or open educational resources, are openly licensed educational materials. What makes them different from other educational materials is that they carry a Creative Commons (CC) license. This means that the person who created the OER (which could be a textbook, assessments, media, a course syllabus, etc.) has made it possible for others to reuse, revise, remix, redistribute, and retain the work without needing to ask for permission. And, even better, OERs are FREE!

How does this work in practice? Here’s an example. A professor at OSU writes a textbook on cell biology specifically for the course and gives it a Creative Commons license. Their students now have access to a free textbook on cell biology, tailored to the course, saving them hundreds of dollars. The students can keep it as long as they want (no rental returns or use limits). A professor at another university can take that same cell biology textbook and, without worrying about copyright violations or fair use evaluations, reorder the contents to better fit their course syllabus. They can add new, updated content, like a recent discovery in gene therapy, or they can remove content that does not meet their course needs. Then they can release this work under a Creative Commons license, providing their students with a free textbook (also saving them oodles of money). It’s a win-win.

Here at Oregon State University, since 2019, our students have saved more than $20 million thanks to OSU faculty who use free textbooks or other free and low-cost learning materials in their classes.

Why is this important?

  • Students have access to their course materials on day one and everyone has equal access to the course content.
  • Students don’t have to choose between buying textbooks and paying for rent or food, and they don’t have to take fewer courses because they can’t afford the course materials.
  • Students report feeling less stressed and a stronger sense of belonging when they don’t have to worry about affording their course materials.
  • Faculty can customize the course materials, aligning them with course learning outcomes, and making them more relevant to local circumstances or current events.
  • Faculty can support students as active creators of knowledge by having them contribute to and even create OER materials (open pedagogy).
  • Faculty can increase their own teaching impact by creating OER that are used across the globe.
  • Studies have shown that students using OER course materials achieve the same or better learning outcomes as those using commercial course materials.


In a 2022 survey of Oregon State University students, 61% reported not purchasing at least one textbook because of its high cost. By using low-cost ($40 or less) and no-cost resources like OER, you can have a huge impact on our students. Instead of deciding between food, rent, and buying a textbook, students have immediate access to their course texts, which is significant in a 10-week term. This often leads to better performance because students have their textbook from day one and aren’t trying to “get by” without it. Students can also take the number of credits they need to stay on track with their degree completion goals, because textbook affordability is no longer a concern.

Where do I start?

Oregon State University has a growing collection of open, free-to-use textbooks across several disciplines. Check out the Oregon State University OER Commons to see if there’s a resource you could use. If you don’t find what you’re looking for there, many more resources exist; start with the OER Commons main site. But wait, there’s more!

In addition to our own OER Commons, there is a great list of other places to search for open educational resources for your class. Oregon State University has a curated collection of trainings, tutorials, and webinars if you’d like to dive deeper into the world of OER. If you need help navigating or just feel overwhelmed by all the options, feel free to contact our OER unit for a consultation.

What is OER Week?

Open Education Week is an annual celebration that raises awareness about OER. In past years, participants have shared success stories, highlighted tools, and shown how to get involved in adopting or adapting OER for use in classes.

Keep an eye out for more details about Oregon State University’s activities during Open Education Week 2026, happening March 2-6. Whether you’re a faculty member curious about open textbooks or a student interested in more affordable learning materials, there will be plenty of ways to participate and learn more.

Accessibility is a hot topic these days, and alt text is one of its most significant building blocks. There are many comprehensive resources and tutorials out there, so I won’t get into what alt text is or how to write it (if you need an intro, start here: OSU Digital Accessibility – Alternative Text for Images). In this post, I’ll address a few issues where guidance is less clear-cut and that have come up in my conversations with instructors.

Does alt text have a character limit?

You’ve done the work and written a detailed alt text that you’re proud of. You hit “done” and, much to your frustration, the Canvas editor is flagging your image and saying: “Alt attribute text should not contain more than 120 characters.” What’s going on here? Is there really a limit, and why is it so?

Well, this is one of those things where you’ll find lots of conflicting information. Some people say that assistive devices only read the first 140 characters; others, the first 150; yet others argue there are no such limits with modern tech. See this article: 100, 150, or 200? Debunking the Alt text character limit, which has more info and references, including a nod to NASA’s famous alt text for the James Webb telescope images.

One thing is clear though: alt text should be short and sweet, to make it easy on the users. Keep the purpose in mind and address it as succinctly as you can. However, if your carefully written alt text still exceeds Canvas’s limit of 120 characters, don’t fret – that constraint is probably too restrictive anyway. But if the image is complex and needs a much longer description, use a different method (see more options below).

How should I use a long description?

When you have an image that contains a lot of information, such as a graph or a map, you need both alt text and a long description. The alt text is short (e.g., “Graph of employment trends 2025”), while the long description is detailed (e.g., describing the axes, bars, numbers, etc.). The W3C Web Accessibility Initiative (WAI) – Complex Images Tutorial explains a few ways you can add a long description. The most common ones (and the ones I would recommend) are:

  • Put the long description on a separate page or in a file and add a link to it next to the image.
  • Put the long description on the same page in the text (under a special heading or simply in the main content) and include its location in the alt text (e.g., “Graph of employment trends 2025. Described under the heading Employment Trends”.)

The advantage of these methods is that everyone, not just people using assistive technologies, can access them. The description can benefit people with other disabilities or those who simply need more help understanding complex graphics.
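For example, a minimal sketch of the first method might look like this in HTML (the file names and link text here are placeholders for illustration, not a prescribed convention):

  <!-- Short alt text on the image; the full description lives on a
       separate page, linked right next to the image. -->
  <img src="employment-trends-2025.png"
       alt="Graph of employment trends 2025. Full description linked after the image.">
  <p><a href="employment-trends-2025-long-description.html">Long description of the employment trends graph</a></p>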

But wait, what about image captions? Do they duplicate alt text?

Image captions can be used in various ways: as a short title for the picture, as related commentary, or as a full explanation (see an example of alt text vs. caption). In any case, avoid duplicating content between the caption and the alt text. If the caption doesn’t include a sufficient description, make sure you have that in the alt text. Alternatively, you can keep the alt text very short and use the caption for a longer description that everyone can read (I wouldn’t recommend very long ones, though – those may be better placed elsewhere, as described above).

For web pages, it’s best to add the caption using the <figcaption> element. This ensures that your caption is semantically linked to its image. If you like editing the HTML in your LMS, check out the W3Schools tutorial on the HTML <figcaption> Tag.
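Here is a minimal sketch (the image, alt text, and caption are invented for illustration):

  <figure>
    <img src="employment-trends-2025.png"
         alt="Bar graph of employment trends in 2025">
    <figcaption>
      Employment trends in 2025. A full description appears under the
      heading Employment Trends.
    </figcaption>
  </figure>

Because the caption is visible to everyone, the alt text can stay short without repeating what the caption already says.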

It’s 2026! Can’t I just get AI to write the alt text?

Yes, AI tools can be a great help in writing alt text or long descriptions! We often recommend ASU’s Image Accessibility Creator. But, as you’re likely aware, LLMs are not always correct. Moreover, they don’t know exactly what you want your students to get from an image (well, you could tell them, but that may be as much effort as writing the alt text yourself…). Make sure you always check the output for accuracy and revise it to fit your purpose and context.

Once the term begins, you and your students move into the full swing of course activities—connecting with one another and moving along the educational journey together. Then, before you know it, the term is over! I have heard many instructors say things like “I can’t believe how fast this term has gone!” or “It’s already week 10, and I don’t know where the time went!” With the term at its conclusion, there is an opportunity to debrief, reflect, and take time for self-kindness, for both instructors and instructional designers.

Debrief

A debrief is an activity that helps close out the course development project. It can help instructors more intentionally discuss how the course development process worked in a particular course, identify the challenges that arose while teaching, and outline future improvements and more effective course design approaches (Chatterjee, Juvale, & Jaramillo Cherrez, 2023). If you are an instructor who worked with an instructional designer to develop the course you just finished teaching, it is important to meet with them to discuss how the course went: what worked well, what presented challenges for students and instructors (so that immediate changes or improvements can be addressed while they are fresh in mind), and what major updates or changes are required before the course is taught again. These debriefs can take place during the last weeks of the term (e.g., finals week or the week after) and can be initiated by the instructional designer as a way to close out the course development project, or by the instructor to seek additional instructional design assistance for improvements.

Reflection

Why would you want to reflect as an instructor? Generally speaking, reflection can serve as a mechanism to deliberately process and examine your actions, thoughts, and experiences in developing and teaching the course. For reflection after the term, we will focus on reflection-on-action, which is engaging in this deliberate process after the fact—after you have taught the course (Brookfield, 2017; Schön, 1987). In reflecting on your course development and teaching experience at the end of the term, you have the opportunity not only to describe what those experiences were like but also to question and evaluate design and teaching choices, identify additional challenges presented by the context of the course, and review student feedback to better understand which instructional design decisions were successful and which failed to accomplish your goals and the goals of the course. Reflection can be part of the debrief, but it can also be a regular practice of looking back at the course development and teaching experience for future improvements.

Self-Kindness

Self-kindness is not a new concept, but it may well be new in the context of education. Applying this concept to your online course development and teaching experience means that you engage in kind actions toward yourself—actions to treat yourself with care, compassion, and consideration (Denial, 2024). At the end of the term, as you debrief and/or reflect, think about the teaching actions that went well and consider how they made you feel. Give yourself grace and compassion because you are a human being capable of so many great things, and because you have created an excellent online course and elevated its quality through your teaching presence, while acknowledging that context and experiences shape us in multiple ways. In exercising self-kindness, you may feel vulnerable as you start recognizing the challenges and struggles in your academic and personal lives. Consider giving yourself the same compassion that you would give a loved one or a close friend, recognizing that challenges, struggles, and failures are part of the human experience—even in teaching. Self-kindness is a way to direct your attention and actions away from judgments and shortcomings. Take care.

I’m curious, how do you conclude a term? Are there specific self-care actions that you take besides grading and submitting final grades?  

References

Brookfield, S. (2017). Becoming a critically reflective teacher (2nd ed.). Jossey-Bass.

Chatterjee, R., Juvale, D., & Jaramillo Cherrez, N. (2023). What the debriefs unfold: A multicase study of the experiences of higher education faculty in designing and teaching their asynchronous online courses. The Quarterly Review of Distance Education, 24(1), 25–41.

Denial, C. J. (2024). A pedagogy of kindness. University of Oklahoma Press.

Schön, D. A. (1987). Educating the reflective practitioner: Toward a new design for teaching and learning in the professions. Jossey-Bass.

Customizing your Canvas course is a simple way to create a smoother, more intuitive learning experience for both you and your students. With a few strategic adjustments, you can build a polished, efficient Canvas setup that gives students a clean, streamlined learning environment.


Personalize Your Home Page

Instructors can choose which page students see first when clicking into their Canvas course, and a well-designed homepage can act as a dashboard for the entire course. Most Ecampus courses have a home page that includes a custom banner with the course code and name as well as links to the modules, Start Here module, and Syllabus page. You might want to add links to external tools or websites your students will use frequently for easy access. You can also pin up to five of the most recent announcements to the home page, which can help students quickly see important information (see “Course Details” below for instructions).

If you want to change which page your course defaults to, click the “pages” tab in the course menu, then click “view all pages”. Your current course home page will appear with the tag “front page”. To change it, click the three dots and choose “remove as front page”. Then choose the three dots menu next to the page you want and you’ll see a new option, “use as front page”.


The Gradebook

The next area you will want to check is your gradebook, as there are many options you can set to help streamline grading. Click the settings gear icon to pop out the gradebook settings menu. The first option is choosing a default grade or percentage deduction for late assignment submissions. Automating this calculation is extremely helpful for both instructors and students, especially if you want grades to decrease by a certain percentage (say, 10%) for each day an assignment is late.

The next tab allows you to choose whether you want grades to be posted automatically or manually as your default setting. The third tab, “advanced”, enables an instructor to manually override the final grades calculated by Canvas.

The last tab, “view options”, contains several ways to tweak the appearance of your gradebook. The first option is determining how the gradebook displays assignments, defaulting to the order they appear on the assignments page. You can change that if you prefer to see assignments in one of the other possible arrangements (see image below).

You can choose which columns you want to see when you launch the gradebook, with the option to add a notes column, visible only to instructors, which appears to the right of student names. Many instructors use the notes column as a field where they can track interactions and keep important information about students. You can also change the default gradebook colors that indicate whether a submission was late, missing or excused.


The Settings Tab

The settings tab in your Canvas course hides some customization features you might not know you have access to. Let’s look more closely at three of the sections you’ll see there: course details, navigation, and feature options.

Course Details

There are a few options you can change under the course details section, though it is important to note that there are settings here that you should NOT adjust, including the time zone, participation, start and end dates, language, visibility, and any other setting besides the specific ones described below. These settings are put in place by OSU and should not be changed.

At the top of this section, there is a place to upload or change your course image, which is mirrored on the dashboard view for both you and students. Adding an image here that represents your course content can help students visually find your course quickly on their Canvas dashboard.

The next section of interest is the grading scheme. Canvas has set a default grading scheme, shown in the chart below, so if the default scheme works, you do not need to adjust it. However, if your department or course syllabus uses different score ranges than the default scheme, you can create your own.

Another area in this section you may want to consider is the bottom set of options, seen in the image below. Here, you can show up to five of the most recent announcements on the course homepage, which helps ensure students see important messages when they navigate to your course. Click the checkbox to show recent announcements and choose how many you’d like students to see.

There are some other options here, giving instructors the choice to allow students to create their own discussion boards, edit or delete their discussion replies, or attach files to discussions. There is also the option to allow students to form their own groups. Additionally, instructors can hide totals in the student grade summary, hide the grade distribution graphs from student view, and disable or enable comments on announcements. Be sure to click the “update course details” button when you are done to save any changes you make.

Navigation

The next section instructors may want to explore is navigation, which controls the Canvas course links that appear in the left-hand menu. This simple interface lets you enable or disable links to customize what students see in the left-side navigation menu. We recommend checking your course to be sure the tabs students need, such as syllabus and assignments, are enabled, and that instructor-only areas, like pages and files, are hidden from students. Navigation items including OSU Instructor Tools, Ally Course Accessibility Report, and UDOIT Accessibility never show to students and should be left enabled. You can also enable links to any external tools, like Perusall or Peerceptiv, you may be using in your course.

Disabled links will not be visible to students; you will still see them, marked with a crossed-out eye icon denoting that they are hidden. To enable or disable a menu item, use the three dots menu or simply drag menu items to the top (enabled) or bottom (disabled) section, and remember to click save at the bottom of the screen. You will immediately see the change in your course menu.

Feature Options

The final section you might want to explore is Feature Options, which lists features that you can turn on or off. This usually includes previews of features that Instructure is beta testing. Clicking the arrow icon next to each shows a brief description of the option. You’ll see disabled features marked with a red X, while enabled ones are marked with a green checkmark; you can toggle these on and off with a click.

Some features you might be interested in testing out include the following:

  • Assignment enhancements (improves the assignment interface and submission workflow for students)
  • Enhanced rubrics (a new, more robust tool for creating and managing rubrics)
  • Ignite AI discussion tools (uses AI to garner insights from, summarize, or translate a discussion)
  • Speedgrader upgrades (particularly useful for high-enrollment courses)
  • Smart search (uses AI to improve searchability within a course; currently searches content pages, announcements, discussion prompts and assignment descriptions)
  • Submission stickers (a fun one you can add if you enable assignment enhancement)

While these may seem like small changes individually, customizing the look and feel of your Canvas course can have a big effect on your students’ learning experience. Contact Ecampus faculty support if you have any questions or need assistance personalizing your course.

The saying “students may forget what we teach, but they’ll remember how we made them feel” captures an important idea that can help us start thinking about how to make online learning a space that fosters belonging, connection, respect for individual differences, and authentic participation. Online course design and teaching can be transformative for students (and instructors) when human-centered strategies are built into courses: strategies that go beyond cognitive tasks and challenges and attend to the well-being of students and instructors alike. What if these practices and approaches were built upon kindness? Cate Denial’s (2024) “A Pedagogy of Kindness” invites us to explore ways in which we can do that.

In this first blog, I will share the foundational notion of “A Pedagogy of Kindness” and its main components. I will also offer a brief example to illustrate a step in applying this kind of pedagogy in an online course.

Denial’s View of Academia and Teaching

Kindness and Pedagogy

But what does kindness have to do with pedagogy and with online teaching and learning? The short answer is “everything.” Kindness helps direct our attention, actions, and thoughts toward being compassionate and considerate to those around us, physically and virtually. But what is kindness in the first place? Denial defines it in terms of what it is not: it is not about being nice, because being nice is hypocritical, underscores power imbalances, presupposes a traditional view of rigor, and leads to burnout—all because of the effort to disguise actions and thoughts as pleasant when they are less so. Kindness, instead, is a more genuine way to care for others—an approach that helps us all see opportunities to bring equity, embrace accountability for our actions and thoughts, and remind ourselves that we live and thrive through compassion even in darker times.

Pedagogy of Kindness Framework

In the Pedagogy of Kindness framework, Denial presents three intersecting elements: a vision for justice, believing students, and believing in students. These elements point to considering students’ personal responsibilities, work commitments, financial obligations, disabilities, and attitudes toward the world around them (e.g., politics, climate change), and to meeting them where they are. In addition, Denial posits that “[w]e must take a hard look at what we’re asking students to do and then identify if there is value in it…. If there is, we need to be able to explain that value to students as clearly and directly as we can” (p. 17). The framework rests on three intersecting principles:

Justice: In this element, we should:

  • Consider who our students are
  • Give students the benefit of the doubt when something outlandish occurs
  • Believe what students share about their educational experiences

Believe Students:

  • Cultivate trust
  • Be ready to deal with a situation or crisis rather than putting everyone under suspicion
  • Discuss ethical decisions with students

Believe in Students:

  • Students want to learn—they have creativity, capacity, and thoughtfulness
  • Establish collaboration for mutual learning

Pedagogy of Kindness in Online Education

This framework can be integrated into a course in multiple ways: through the teaching approach, syllabus, class practices, and overall interactions with students. For instance, in one online course I helped design, the language in the Artificial Intelligence policy said something to the effect of “If the assignment is written with AI, you will receive a zero.” To cultivate trust and help the instructor prepare to deal with any academic misconduct, we revisited the description of the policy, highlighting the value of students’ own work, ideas, and reflections. The policy language also acknowledged the challenges in completing the assignment and reinforced the importance of the learning process over the product of the assignment. This example speaks to the believe students and believe in students elements of the pedagogy of kindness.

Now, how might you make kindness visible in your own online teaching?

A second part of this blog will provide examples and practical ways in which the Pedagogy of Kindness has grounded the design and facilitation of online courses.

References

Denial, C. J. (2024). A pedagogy of kindness. University of Oklahoma Press. 

Many educators are grappling with questions about AI detection. Yet, AI detection tools are unreliable, biased, and distressing. False positives can harm students’ academic standing and well-being, with marginalized groups often disproportionately affected, while detectors still miss significant portions of AI-generated text (Lurye, 2025; Encouraging Academic Integrity, 2025; Hirsch, 2024). And detection tools assume students simply copy and paste AI outputs, when in reality many use these tools more fluidly – taking suggestions, rewriting, or iterating through prompts – making their work indistinguishable from original writing. As one student noted, “it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own…” (Terry, 2023).

What Students Want

Most students believe institutions should address academic integrity concerns related to generative AI, but they largely prefer proactive and educational approaches over punitive measures. A significant number of students want clear rules about when and how AI tools can be used, as well as a voice in shaping them (Flaherty, 2025).

From Policing to Partnership

Given the inherent risks of detection and bans — tools that can unfairly penalize students and policies that do little to promote ethical use — the better path forward is not more surveillance, but more collaboration. Faculty-written policies risk missing the mark if they ignore how students actually engage with AI. Instead of policing AI through punitive measures, faculty can create space for students to help define appropriate guidelines. Policies crafted together shift the dynamic from rules imposed to standards co-owned, building trust and relevance.

Why Co-Creation Works

Self-Determination Theory suggests that students are more likely to internalize and adhere to guidelines when they have a hand in creating them. Involving students in developing AI usage policies communicates that their perspectives are valued and supports their need for autonomy, turning compliance into genuine commitment. Co-created rules feel less like authoritarian decrees and more like shared standards, which in turn fosters ownership, clarity, and consistency in how those policies are understood and followed (Guay, 2022; Kuo et al., 2025).

Practical Approaches to Co-Create AI Policies

Research makes it clear: students are more likely to respect and follow policies they help shape. But theory alone won’t change your syllabus. The real shift happens when faculty move from principle to practice. The good news? There are straightforward, adaptable activities you can use right now to bring students into the conversation and co-create meaningful AI usage policies. For best results, implement these activities within the first week or two of the term, or before your first major assignment.

Document-Based Collaboration

1. Shared Policy Google Doc

A structured Google Doc provides policy headings (Assessment, Collaboration, Academic Integrity). Students co-edit the text under each section, adding suggestions in comments. As comments are resolved, the document evolves into a finalized class-wide AI usage policy.

Tool: Google Docs

2. Scenario Response Wiki

Students use a wiki page to respond to realistic AI-use scenarios (e.g., “AI writing feedback on essays”). Small groups draft responses for each scenario, and peers edit for consistency. Over time, pages become a collective guide to what counts as acceptable AI use, directly forming a policy.

Tool: Canvas Wiki Pages (or equivalent LMS wiki feature)

3. Crowd-Sourced Glossary

Students collaboratively define AI-related terms and practices in a shared glossary tool (wiki or Google Doc). Each entry includes “permitted uses” and “restricted uses.” The glossary doubles as both a vocabulary aid and a concrete class AI policy.

Tools: Canvas Wiki Pages, Google Docs

4. Policy Charter Pad

Using a template, students co-author a charter with structured sections: Purpose, Guidelines, Responsibilities, and Consequences. Each section is drafted collaboratively, with rotating editors refining language. The final product is a polished class AI usage charter.

Tools: Google Docs

Discussion & Forum-Based Activities

5. Canvas Discussion Draft

The instructor seeds a discussion with prompts for different policy areas. Students propose clauses and debate wording in threads. A moderator (instructor or rotating student role) synthesizes the top ideas into a consensus policy posted back to the group.

Tool: Canvas Discussions

6. Draft & Vote Forum

The instructor posts draft clauses in a forum. Students propose alternatives as replies. A class-wide vote (via Canvas poll or Google Form) determines the preferred wording. The winning clauses are compiled into the final AI policy.

Tools: Canvas Discussions, Google Forms

7. Policy Blog Chain

Students write sequential short blog posts on a shared course blog. Each post revises or critiques the prior entry, building momentum toward consensus. The chain of posts is later synthesized into a cohesive AI usage policy.

Tools: WordPress, Edublogs

Visual & Interactive Tools

8. Miro Collaborative Map

On a Miro board, students build a shared mind map with branches like “Learning Support,” “Integrity,” and “Assessment.” They attach notes or examples under each branch. The class then translates the map’s structure into a written, shared policy document.

Tool: Miro

9. Perusall Policy Annotation

An instructor uploads an external AI policy (e.g., from a university or journal) into Perusall. Students highlight passages and comment on what they agree with or want to adapt. Annotations are collected and distilled into a tailored class policy.

Tool: Perusall

Media & Feedback Tools

10. Media Roundtable & Podcast

Students record short reflections (video or audio) on what should or shouldn’t be in the AI policy. Using Kaltura in Canvas, Microsoft Teams, or Canvas Discussions with media replies, they share contributions and respond to peers. The instructor (or a student group) compiles clips and/or transcripts into a single artifact. This collective media product is then distilled into draft clauses for the shared AI usage policy.

Tools: Kaltura, Microsoft Teams, Canvas Discussions

11. AI Policy Survey & Summary

The instructor creates a Qualtrics survey with items such as: “Is it acceptable to use AI to generate code for an assignment?” Students select Acceptable, Unacceptable, or Conditional and provide a brief rationale. Qualtrics automatically aggregates results into tables and charts, making consensus and disagreements easy to spot. The class then uses these visual summaries to draft clear, evidence-based clauses for the shared AI usage policy.

Tool: Qualtrics

12. Peer-Reviewed Policy Exchange

Each student drafts a mini-policy document and submits it to a shared folder or assignment space. Using a structured rubric, peers review at least two classmates’ drafts, either through Peerceptiv or LMS assignment tools. The strongest and most frequently endorsed ideas are integrated into a composite class policy authored by the group.

Tools: Peerceptiv, Google Drive, Canvas Assignments

Bringing It All Together

Once students have contributed through these activities, the instructor’s role is to bring the pieces together. Compiling the results, highlighting areas of consensus, and drafting a clear, shareable policy ensures that the final guidelines reflect the class’s input. Sharing this draft back with students not only closes the loop but also reinforces that their voices shaped the outcome.

Before you drop a boilerplate AI statement in your syllabus, try one of these toolkit activities. Start small – maybe a survey or a media roundtable – and see how co-writing changes the game.

References

Encouraging academic integrity – University Center for Teaching and Learning. (2025). University of Pittsburgh. https://teaching.pitt.edu/resources/encouraging-academic-integrity/

Flaherty, C. (2025, August 29). How AI is changing—not ‘killing’—college. Inside Higher Ed. https://www.insidehighered.com/news/students/academics/2025/08/29/survey-college-students-views-ai

Guay, F. (2022). Applying self-determination theory to education: Regulations types, psychological needs, and autonomy supporting behaviors. Canadian Journal of School Psychology, 37(1), 75–92. https://doi.org/10.1177/08295735211055355

Hirsch, A. (2024, December 12). AI detectors: An ethical minefield. Center for Innovative Teaching and Learning. https://citl.news.niu.edu/2024/12/12/ai-detectors-an-ethical-minefield/

Kuo, T.-S., Chen, Q. Z., Zhang, A. X., Hsieh, J., Zhu, H., & Holstein, K. (2025). PolicyCraft: Supporting collaborative and participatory policy design through case-grounded deliberation. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–24). Association for Computing Machinery. https://doi.org/10.1145/3706598.3713865

Lurye, S. (2025, August 7). Students have been called to the office — and even arrested — for AI surveillance false alarms. AP News. https://apnews.com/article/ai-school-surveillance-gaggle-goguardian-bark-8c531cde8f9aee0b1ef06cfce109724a

Terry, O. K. (2023, May 12). Opinion | I’m a student. You have no idea how much we’re using ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt