The golden rule of link accessibility: links should be descriptive! (For foundational information on the why and the how, see OSU Digital Accessibility – Links.) Let’s dig deeper into a few common questions:
Can I use “click here” or “this” for my link text?
It’s best to avoid this practice. While WCAG does permit it when the surrounding context provides enough information, it doesn’t create a good experience for your audience. That kind of text isn’t descriptive enough to show the user where the link will go, and it’s especially problematic when it appears multiple times on a page! Think of people skimming the content – whether visually or via assistive technologies. It’s much more helpful when the text clearly conveys the link’s function or destination. See an example below.
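If you edit your pages in HTML, the contrast looks something like this (a minimal sketch; the URL and page are placeholders):

```html
<!-- Vague: the link text gives no clue about the destination -->
<p>To review the lab safety guidelines, <a href="https://example.edu/lab-safety">click here</a>.</p>

<!-- Descriptive: the link text conveys the destination on its own -->
<p>Review the <a href="https://example.edu/lab-safety">lab safety guidelines</a> before your first lab session.</p>
```

A screen-reader user who pulls up a list of all links on the page will hear “lab safety guidelines” instead of a string of identical “click here”s.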
Can I link an image?
Yes, you can use an image directly as a link or button. But! If the image serves as a link on its own, make sure to write alt text that describes the action initiated by the link. The example image below is linked to an interactive lesson about cat behavior. Therefore, you would use the alt text “Cat Behavior Interactive Lesson”, NOT describe the image. See more explanations and examples on the W3C WAI Functional Images page.
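In HTML, a functional image like that might look as follows (a sketch; the file name and URL are placeholders):

```html
<!-- The image itself is the link, so the alt text names the destination, not the picture -->
<a href="https://example.edu/cat-behavior-lesson">
  <img src="tabby-cat.jpg" alt="Cat Behavior Interactive Lesson">
</a>
```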
Proper citations include URLs. How do we make those accessible?
Citation styles may be strict, but they do allow some flexibility for online-only resources and materials outside of formal papers. The recommended practice is to link the work title and ditch the DOI or URL, like in the example below. Check out more examples and explanations for APA and for MLA.
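For example, an APA-style entry with the title linked in place of the DOI might look like this in HTML (a sketch using a real journal article):

```html
<!-- The work title carries the link; no raw DOI or URL is spelled out -->
<p>Brookhart, S. M. (2015). <a href="https://doi.org/10.1080/00131911.2014.929565">The quality
and effectiveness of descriptive rubrics</a>. <em>Educational Review, 67</em>(3), 343–368.</p>
```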
Is it ok to repeat a link multiple times on a page?
Canvas is flagging some links that don’t seem to exist!
You may have noticed, on occasion, “ghost links” in Canvas. The link validator or accessibility checker says there’s a broken or duplicate link, but when you look at the text, there’s nothing there. However, if you switch to the HTML editor, you’ll find the link lurking underneath. In the example below, you can see that there are actually two links instead of one: the Assignment 1 link was not completely deleted when I replaced it with Assignment 2.
Sometimes, if you delete linked text without unlinking it first, the link markup persists. To avoid this situation, make sure to remove the links before deleting or pasting in text.
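In the HTML editor, a ghost link might look something like this (a simplified sketch; the URLs are placeholders):

```html
<!-- Leftover from the old link: an empty <a> element with no visible text -->
<a href="https://canvas.oregonstate.edu/courses/123/assignments/1"></a>
<!-- The replacement link the author intended to keep -->
<a href="https://canvas.oregonstate.edu/courses/123/assignments/2">Assignment 2</a>
```

Deleting the empty `<a>` element in the HTML editor should clear the flag.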
BONUS link-related tip: Don’t underline regular text
Links are usually underlined, so most people assume underlined text is a link. It’s confusing when they try to click it and nothing happens. Besides, underlining is just not a good way of highlighting information. For more information, see an article and video from Boise State University: Underlined text.
These practices make your course more readable, easier to navigate, and, overall, more enjoyable for your students!
Research on rubrics has often focused on validity and reliability (Matshedisho, 2020), but more recent work explores how students actually interpret and use rubrics (Brookhart, 2015; Matshedisho, 2020; Taylor, 2024; Tessier, 2021). This emerging scholarship consistently shows a gap between instructor intention and student interpretation. For example, Matshedisho (2020) found that “students expected procedural and declarative guidance, while instructors expected conceptual, reflective work” (p. 175).
If students understand rubrics differently than we intend, rubrics cannot fully support learning. Below are key reasons this mismatch occurs—and strategies to close the gap.
Tacit Knowledge and Language
Students bring varied backgrounds, disciplinary exposure, and assumptions to their learning (Brookhart, 2015; Matshedisho, 2020). Many do not enter college knowing what a rubric is or how to apply one (Tessier, 2021).
Key issues include:
Unfamiliar terms or disciplinary jargon: Early‑year students may lack field‑specific language. In Matshedisho’s (2020) study, first‑year medical students struggled with the sociology-specific criteria required for a reflective assignment.
Different meanings across disciplines: Terms like “concept,” “analysis,” or “argument” shift across fields, confusing students taking multiple general‑education courses.
Ambiguous or subjective labels: Students struggle to distinguish between words like “good” and “very good,” and terms such as “critical analysis” can feel subjective (Taylor, 2024).
Minimal differentiation between performance levels: When descriptors are too similar, students, unable to discern differences between the ratings, cannot see how to progress.
How Students Use Rubrics
Students often approach rubrics differently than instructors expect:
They treat the rubric as separate from course content, starting with the criteria column and reading each cell in isolation (Matshedisho, 2020).
They search for procedural instructions, expecting the rubric to tell them how to complete the assignment (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).
Many prefer hard‑copy rubrics over digital versions (Tessier, 2021; Panadero, 2025).
Bridging the Gap Through Instruction
Rubrics only support learning when students understand them as instructors intend (Brookhart, 2015). Effective strategies include:
Build Shared Understanding
Explain key terms and check for tacit knowledge—especially discipline‑specific language (Taylor, 2024).
Explicitly teach what a rubric is and how to use one; don’t assume prior knowledge (Tessier, 2021).
Calibrate expectations by discussing examples and rating sample work with students (Taylor, 2024).
Integrate Rubrics Into the Course
Refer to the rubric during lectures and discussions (Tessier, 2021).
Provide feedback that directly connects to rubric criteria (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).
Celebrate or reinforce active rubric use (Tessier, 2021).
Provide hard copies of the rubric whenever possible (Tessier, 2021; Panadero, 2025).
Support Instructors
Offer training in rubric design and student‑centered implementation (Brookhart, 2015; Taylor, 2024).
Use shared rubrics for multi‑section courses to support consistency.
Meet as a teaching team to create and calibrate the common rubric.
Recognize limitations of online rubric platforms; include clarifying hyperlinks or exemplars when possible (Panadero, 2025).
Clarify Task Expectations
Students often want a checklist. Provide procedural instructions separately, and use the rubric for conceptual evaluation (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).
Conclusion
Research shows that students respond favorably to questions about a rubric’s validity and reliability, but when the focus shifts to how students interact with, understand, and apply rubrics, it is clear we still have a long way to go. Hopefully, the suggestions above will get you started on the road to even better creation and application of your rubrics.
References
Brookhart, S. M. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343–368. doi:10.1080/00131911.2014.929565
Matshedisho, K. R. (2020). Straddling rows and columns: Students’ (mis)conceptions of an assessment rubric. Assessment & Evaluation in Higher Education, 169–179. doi:10.1080/02602938.2019.1616671
Panadero, E. O. (2025). Analysis of online rubric platforms: Advancing toward erubrics. Assessment & Evaluation in Higher Education, 31–49. doi:10.1080/02602938.2024.2345657
Taylor, B. K. (2024). Rubrics in higher education: An exploration of undergraduate students’ understanding and perspectives. Assessment & Evaluation in Higher Education, 799–809. doi:10.1080/02602938.2023.2299330
Tessier, L. (2021). Listening to student perspectives of rubrics: Perceptions, uses, and grades. Journal on Excellence in College Teaching, 32(3), 133–168.
In the previous post, I gave an assignment prompt to Copilot (as that’s the recommended tool at Oregon State University) and asked it to complete the task. For reference, here is the task.
Rubrics are often the weakest link in assessment design, particularly when descriptors rely on vague phrases like “meets expectations” or “demonstrates understanding.” One way to evaluate rubric clarity is to ask AI to self-assess its own response using the rubric criteria.
If the model can plausibly justify a high score despite shallow reasoning or inconsistent logic, the rubric may not be clearly distinguishing levels of performance. More precise rubrics specify what evidence matters and how quality differs, emphasizing reasoning, coherence, and alignment with course concepts rather than polish or length. Clear criteria benefit students, but they also make it harder for superficially strong work to masquerade as deep learning.
Rubric Analysis Prompt
You are now acting as an external assessment reviewer, not a student. You will be given:
An assignment prompt
A grading rubric
A model-generated student submission (your own prior response)
Your task is not to grade the submission. Instead, critically evaluate the rubric itself by answering the following:
Rubric Vulnerabilities
Identify specific rubric criteria or descriptors that allow a high score to be justified through fluent but shallow reasoning.
For each vulnerability, explain what kind of weak or superficial evidence could still plausibly receive a high score under the current wording.
Distinguishing Performance Levels
For at least three rubric categories, explain why the difference between “Excellent” and “Good” (or “Good” and “Satisfactory”) may be ambiguous in practice.
Describe what concrete evidence a human grader would need to reliably distinguish between those levels.
AI Self-Assessment Stress Test
Using your own generated submission as an example, explain how it could convincingly argue for a high score even if underlying understanding were limited.
Point to specific rubric language that enables this justification.
Rubric Strengthening Recommendations
Propose revised rubric language that makes expectations more explicit and evidence-based.
Emphasize observable reasoning, causal explanation, constraint awareness, or conceptual boundaries rather than general phrases such as “demonstrates understanding” or “well-justified.”
Constraints:
Do not rewrite the assignment prompt.
Do not assume access to course-specific lectures or materials.
Focus on how the rubric functions as an assessment instrument, not on pedagogy or student motivation.
Tone: Analytical, critical, and concrete. Avoid generic advice.
You could use this directly by attaching a rubric, assessment prompt, and “submission,” or modify it to fit your own situation.
Here is a section of the results it gave, with the “thinking” section expanded to show the process behind the generated answer:
(Copilot gave me an enormous amount of feedback, as expected because the rubric included a lot of generic language.)
Rethinking “Higher-Order Thinking” in an AI-Rich Environment
Frameworks like Bloom’s Taxonomy remain useful, but AI complicates the assumption that higher-order tasks are automatically more resistant to outsourcing. AI can analyze, evaluate, and even create convincing responses if prompts are static and unconstrained.
What remains more difficult to outsource is judgment. Assignments that require students to choose among approaches, justify those choices, identify uncertainty, or explain when a method would fail tend to surface understanding more reliably than tasks that simply ask for analysis or synthesis. When reviewing AI-generated responses, a helpful question is: What would a human need to know to trust this answer? Designing assessments around that question shifts the focus from output to accountability.
Instructors can strengthen authenticity by introducing underspecified scenarios, realistic limitations, or prompts that require students to articulate how they would evaluate the reliability of their own results. These design choices don’t prevent AI use, but they make it harder to succeed without understanding when and why an answer might be wrong.
An Iterative Design Loop for Assessments and Rubrics
Using AI as an assessment design diagnostic and refinement tool can work best as an iterative process. Draft the assignment and rubric, test them with AI, analyze how success is achieved, and revise accordingly. The goal is not to reach a point where AI “fails,” but rather a point where success requires engagement with disciplinary concepts and reasoning. This mirrors quality-assurance practices in other domains: catching misalignment early, refining specifications, and retesting until the design reliably produces the intended outcome. Importantly, this loop should be finite and purposeful, not an endless escalation.
Conclusion
Using AI in assessment design is not about surveillance or enforcement. It is a transparency tool. When instructors acknowledge that AI exists and design accordingly, they reduce the incentive for adversarial behavior and increase clarity around expectations. Being open with students about the role of AI (what is permitted, what responsibility cannot be delegated, and how understanding will be evaluated) helps maintain trust while preserving academic standards. The credibility of online and in-person education alike depends not on stopping students from using tools, but on ensuring that passing a course still signifies meaningful learning.
Takeaway Cheat Sheet
Think of AI as support, not a villain.
Stress‑test early: run the rubric through a model for verification before you hand it to students.
For centuries, knowledge and access to education were restricted to just a few. In today’s world, almost anybody can access information through the web and, more recently, through AI tools. However, it is important to recognize that these tools, while offering expansive access to content of varied nature, also pose challenges. Generative AI has fundamentally changed how students interact with assignments, but it has also given instructors a powerful new lens for examining their own assessment design. Rather than treating AI solely as a threat to academic integrity, we can use it as a diagnostic tool – one that quickly reveals whether our assignments and rubrics are actually measuring what we think they are. If an AI can complete an assignment and meet the stated criteria for success without engaging course-specific learning, is it really a student problem, or a signal to modify the design?
A small shift in perspective, from “they’re using this to cheat” to “how can this help me prevent cheating?”, is especially important in online and hybrid environments, where traditional academic integrity controls like proctored exams are either unavailable or undesirable. Instead of trying to outmaneuver AI or police its use, instructors can ask a more productive question: What does success on this assignment actually require?
Why AI Is a Helpful Design Tool
AI can function as an unusually honest “devil’s advocate.” It doesn’t get tired, anxious, or confused about instructions, and it excels at finding the most efficient path to meeting stated requirements. When an instructor gives an AI model an assignment prompt and a rubric, the resulting output can expose whether the rubric rewards deep engagement or simply fluent compliance.
If an AI can generate a response that appears to meet expectations without referencing key course concepts, grappling with assumptions, or making meaningful decisions, then students can likely do the same. In this way, AI acts less like a cheating student and more like a mirror held up to our assessment design.
An example using Copilot:
Stress-Testing Assignments Before Students Ever See Them
One practical workflow to test the resilience of your assignments is to run them through AI before they are deployed. Provide the model with the prompt and the rubric (nothing else) and ask it to produce a strong submission. Then evaluate that response using your own grading criteria.
The point is not to judge whether the AI’s answer is “good,” but to analyze why it succeeds in meeting the set requirements easily and (at first sight) flawlessly. If the response earns high marks through generic explanations, surface-level analysis, or broadly applicable reasoning, that’s evidence that the assessment may not be tightly aligned with course learning outcomes, may not demand deeper thinking and analysis, or may not elicit students’ own creativity. This kind of stress-testing takes minutes and often surfaces issues that would otherwise only become visible after grading a full cohort.
Assignment: Conceptual Design and Analysis of a Chemical Reactor
You are tasked with the preliminary design and analysis of a chemical reactor for the production of a commodity chemical of your choice (e.g., ammonia, methanol, ethylene oxide, sulfuric acid, or another well-established industrial product).
Your analysis should address the following:
Process Overview
Briefly describe the selected chemical process and its industrial relevance.
Identify the primary reaction(s) involved and classify the reaction type(s) (e.g., exothermic/endothermic, reversible/irreversible, catalytic/non-catalytic).
Reactor Selection
Propose an appropriate reactor type (e.g., CSTR, PFR, batch, packed bed).
Justify your selection based on reaction kinetics, heat transfer considerations, conversion goals, and operational constraints.
Operating Conditions
Discuss key operating variables such as temperature, pressure, residence time, and feed composition.
Explain how these variables influence conversion, selectivity, and safety.
Engineering Trade-Offs
Identify at least two major design trade-offs (e.g., conversion vs. selectivity, energy efficiency vs. safety, capital cost vs. operating cost).
Explain how an engineer might balance these trade-offs in practice.
Limitations and Assumptions
Clearly state any simplifying assumptions made in your analysis.
Discuss the limitations of your proposed design at this preliminary stage.
Your response should demonstrate clear engineering reasoning rather than detailed numerical calculations. Where appropriate, qualitative trends, simplified relationships, or order-of-magnitude reasoning may be used.
Length: ~1,000–1,200 words
References: Not required, but accepted if used appropriately
The Rubric
Understanding of Chemical Engineering Principles
Excellent (A): Demonstrates strong understanding of reaction engineering concepts and correctly applies them to the chosen process.
Good (B): Demonstrates general understanding with minor conceptual gaps.
Satisfactory (C): Shows basic familiarity but with notable misunderstandings or oversimplifications.
Unsatisfactory (D/F): Demonstrates weak or incorrect understanding of core concepts.

Reactor Selection & Justification
Excellent (A): Reactor choice is well-justified using multiple relevant criteria (kinetics, heat transfer, safety, operability).
Good (B): Reactor choice is reasonable but justification lacks depth or completeness.
Satisfactory (C): Reactor choice is weakly justified or based on limited reasoning.
Unsatisfactory (D/F): Reactor choice is inappropriate or unjustified.

Analysis of Operating Conditions
Excellent (A): Clearly explains how operating variables affect performance, safety, and efficiency.
Good (B): Explains effects of variables with minor omissions or inaccuracies.
Satisfactory (C): Provides limited or superficial discussion of operating conditions.
Unsatisfactory (D/F): Fails to meaningfully analyze operating variables.

Engineering Trade-Offs
Excellent (A): Insightfully identifies and explains realistic trade-offs, demonstrating engineering judgment.
Good (B): Identifies trade-offs but discussion lacks nuance or integration.
Satisfactory (C): Trade-offs are mentioned but poorly explained or generic.
Unsatisfactory (D/F): Trade-offs are absent or incorrect.

Assumptions & Limitations
Excellent (A): Assumptions are clearly stated and critically evaluated.
Good (B): Assumptions are stated but not fully examined.
Satisfactory (C): Assumptions are implicit or weakly articulated.
Unsatisfactory (D/F): Assumptions are missing or inappropriate.

Clarity & Organization
Excellent (A): Response is well-structured, clear, and professional.
Good (B): Generally clear with minor organizational issues.
Satisfactory (C): Organization or clarity interferes with understanding.
Unsatisfactory (D/F): Poorly organized or difficult to follow.
Identifying Gaps in What We’re Measuring
AI performs particularly well on tasks that rely on recognition, pattern matching, and general world knowledge. This means it can easily succeed on assessments that emphasize recall, procedural execution, or elimination of obviously wrong answers. When that happens, the assessment may be measuring familiarity rather than understanding.
Revising these tasks does not require making them longer or more complex. Instead, instructors can focus on higher-order thinking and metacognition, for example requiring students to articulate why a particular approach applies, what assumptions are being made, or how results should be interpreted. These shifts move the assessment away from answer production and toward critical and disciplinary thinking – without assuming that AI use can or should be eliminated. Identifying these gaps can also help you revisit the structure of the assignment to check that its elements (purpose, instructions/task/prompt, and criteria for success) are cohesively connected, strengthening the assignment.
In the second part of this blog, I take the same task above, and work with the AI to refine a rubric.
Accessibility is a hot topic these days, and alt text is one of its most significant building blocks. There are many comprehensive resources and tutorials out there, so I won’t get into what alt text is or how to write it (if you need an intro, start here: OSU Digital Accessibility – Alternative Text for Images). In this post, I’ll address a few issues where guidance is less clear-cut and that have come up in my conversations with instructors.
Does alt text have a character limit?
You’ve done the work and written a detailed alt text that you’re proud of. You hit “done” and, much to your frustration, the Canvas editor is flagging your image and saying: “Alt attribute text should not contain more than 120 characters.” What’s going on here? Is there really a limit, and why is it so?
Well, this is one of those things where you’ll find lots of conflicting information. Some people say that assistive devices only read the first 140 characters; others, the first 150; yet others argue there are no such limits with modern tech. See this article: 100, 150, or 200? Debunking the Alt text character limit, which has more info and references, including a nod to NASA’s famous alt text for the James Webb telescope images.
One thing is clear though: alt text should be short and sweet, to make it easy on the users. Keep the purpose in mind and address it as succinctly as you can. However, if your carefully written alt text still exceeds Canvas’s limit of 120 characters, don’t fret – that constraint is probably too restrictive anyway. But if the image is complex and needs a much longer description, use a different method (see more options below).
How should I use a long description?
When you have an image that contains a lot of information, such as a graph or a map, you need both alt text and a long description. The alt text is short (e.g., “Graph of employment trends 2025”), while the long description is detailed (e.g., it would describe the axes, bars, numbers, etc.). The W3C Web Accessibility Initiative (WAI) – Complex Images Tutorial explains a few ways you can add a long description. The most common ones (and the ones I would recommend) are:
Put the long description on a separate page or in a file and add a link to it next to the image.
Put the long description on the same page in the text (under a special heading or simply in the main content) and include its location in the alt text (e.g., “Graph of employment trends 2025. Described under the heading Employment Trends.”).
The advantage of these methods is that everyone, not just people using assistive technologies, can access them. The description can benefit people with other disabilities or those who simply need more help understanding complex graphics.
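As a sketch, the second method might look like this in HTML (the file name is a placeholder; the alt text points to the heading that holds the description):

```html
<!-- Short alt text that tells users where to find the full description -->
<img src="employment-trends-2025.png"
     alt="Graph of employment trends 2025. Described under the heading Employment Trends.">

<h3>Employment Trends</h3>
<!-- Placeholder description: in practice, spell out the axes, bars, and key values -->
<p>The bar graph shows employment by sector for 2025, with the horizontal axis listing
sectors and the vertical axis showing the number of jobs…</p>
```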
But wait, what about image captions? Do they duplicate alt text?
Image captions can be used in various ways: as a short title for the picture, as related commentary, or as a full explanation (see an example of alt text vs. caption). In any case, avoid duplicating content between the caption and the alt text. If the caption doesn’t include a sufficient description, make sure you have that in the alt text. Alternatively, you can keep the alt text very short and use the caption for a longer description that everyone can read (I wouldn’t recommend very long ones, though – those may be better placed elsewhere, as described above).
For web pages, it’s best to add the caption using the <figcaption> element. This ensures that your caption is semantically linked to its image. If you like editing the HTML in your LMS, check out the W3Schools tutorial on the HTML <figcaption> Tag.
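A minimal sketch (the image, alt text, and caption text are placeholders):

```html
<!-- <figure> semantically ties the image to its caption -->
<figure>
  <img src="employment-trends-2025.png" alt="Graph of employment trends 2025">
  <figcaption>Figure 1. Employment trends across major sectors in 2025.</figcaption>
</figure>
```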
Should the alt text describe people’s gender, race, age, etc.?
It really depends on what you are trying to convey and how much you know about the individuals in the image. Are those details significant? If yes, you should include them. Are you making any assumptions? Make sure not to project your own ideas about who the person is. This guide from University of Colorado Boulder: Identity and Inclusion in Alt Text is a great resource to refer to when faced with these decisions.
It’s 2026! Can’t I just get AI to write the alt text?
You’re right that AI tools can be a great help in writing alt text or long descriptions! We often recommend ASU’s Image Accessibility Creator. But, as you’re aware, LLMs are not always correct. Moreover, they don’t know what exactly you want your students to get from that image (well, you could tell them, but that may be as much effort as writing the alt text yourself…). Make sure you always check the output for accuracy and revise it to fit your purpose and context.
Customizing your Canvas course is a simple way to create a smoother, more intuitive learning experience—for both you and your students. With a few strategic adjustments, you can create a polished, efficient Canvas set-up that gives students a clean, streamlined learning environment.
Personalize Your Home Page
Instructors can choose which page students see first when clicking into their Canvas course, and a well-designed homepage can act as a dashboard for the entire course. Most Ecampus courses have a home page that includes a custom banner with the course code and name as well as links to the modules, Start Here module, and Syllabus page. You might want to add links to external tools or websites your students will use frequently for easy access. You can also pin up to five of the most recent announcements to the home page, which can help students quickly see important information (see “Course Details” below for instructions).
If you want to change which page your course defaults to, click the “pages” tab in the course menu, then click “view all pages”. Your current course home page will appear with the tag “front page”. To change it, click the three dots and choose “remove as front page”. Then choose the three dots menu next to the page you want and you’ll see a new option, “use as front page”.
The Gradebook
The next area you will want to check is your gradebook, as there are many options you can set to help streamline grading. Click the settings gear icon to pop out the gradebook settings menu. The first option is choosing a default grade or percentage deduction for late assignment submissions. Automating this calculation is extremely helpful for both instructors and students, especially if you want grades to decrease a certain percentage each day.
The next tab allows you to choose whether you want grades to be posted automatically or manually as your default setting. The third tab, “advanced”, enables an instructor to manually override the final grades calculated by Canvas.
The last tab, “view options”, contains several ways to tweak the appearance of your gradebook. The first option is determining how the gradebook displays assignments, defaulting to the order they appear on the assignments page. You can change that if you prefer to see assignments in one of the other possible arrangements (see image below).
You can choose which columns you want to see when you launch the gradebook, with the option to add a notes column, visible only to instructors, which appears to the right of student names. Many instructors use the notes column as a field where they can track interactions and keep important information about students. You can also change the default gradebook colors that indicate whether a submission was late, missing or excused.
The Settings Tab
The settings tab in your Canvas course is hiding some features you might not know you have access to that allow you to customize your course. Let’s look more closely at three of the sections you’ll see there: course details, navigation, and feature options.
Course Details
There are a few options you can change under the course details section, though it is important to note that there are settings here that you should NOT adjust, including the time zone, participation, start and end dates, language, visibility, and any other setting besides the specific ones described below. These settings are put in place by OSU and should not be changed.
At the top of this section, there is a place to upload or change your course image, which is mirrored on the dashboard view for both you and students. Adding an image here that represents your course content can help students visually find your course quickly on their Canvas dashboard.
The next section of interest is the grading scheme. Canvas has set a default grading scheme, shown in the chart below, so if the default scheme works, you do not need to adjust it. However, if your department or course syllabus uses different score ranges than the default scheme, you can create your own.
Another area in this section you may want to consider is the bottom set of options, seen in the image below. Here, you can show up to five of the most recent announcements on the course homepage, which helps ensure students see important messages when they navigate to your course. Click the checkbox to show recent announcements and choose how many you’d like students to see.
There are some other options here, giving instructors the choice to allow students to create their own discussion boards, edit or delete their discussion replies, or attach files to discussions. There is also the option to allow students to form their own groups. Additionally, instructors can hide totals in the student grade summary, hide the grade distribution graphs from student view, and disable or enable comments on announcements. Be sure to click the “update course details” button when editing course details to save any changes you make.
Navigation
The next section instructors may want to explore is navigation, which controls the Canvas course links that appear in the left-hand menu. This simple interface lets you enable or disable links to customize what students see in the left-side navigation menu. We recommend checking your course to be sure that the tabs students need, such as syllabus and assignments, are enabled, and that instructor-only areas, like pages and files, are hidden from students. Navigation items including OSU Instructor Tools, Ally Course Accessibility Report, and UDOIT Accessibility never show to students and should be left enabled. You can also enable links to any external tools, like Perusall or Peerceptiv, you may be using in your course.
Disabled links are not visible to students; you will still see them in your course, marked with a crossed-out eye icon denoting that they are hidden. To enable or disable a menu item, use the three-dots menu, or simply grab and drag menu items to the top (enabled) or bottom (disabled) section, and remember to click save at the bottom of the screen. You will immediately see the change in your course menu.
Feature Options
The final section you might want to explore is Feature Options, which lists features that you can turn on or off. This usually includes previews of features that Instructure is beta testing. Clicking the arrow icon next to each shows a brief description of the option. Disabled features are marked with a red X and enabled ones with a green checkmark; you can toggle these on and off with a click.
Some features you might be interested in testing out include the following:
Assignment enhancements (improves the assignment interface and submission workflow for students)
Enhanced rubrics (a new, more robust tool for creating and managing rubrics)
Smart search (uses AI to improve searchability within a course; currently searches content pages, announcements, discussion prompts and assignment descriptions)
Submission stickers (a fun one you can add if you enable assignment enhancement)
While these may seem like small changes individually, customizing the look and feel of your Canvas course can have a big effect on your students’ learning experience. Contact Ecampus faculty support if you have any questions or need assistance personalizing your course.
Special Edition: Guest Blog by Assistant Professor of Practice (Urban Forestry), Jennifer Killian
When I was asked to create a new course for Oregon State University’s Ecampus program, my first reaction was a mix of sheer excitement… and, well, a little terror. I’ve built workshops, presentations, and even all-day trainings, but assembling ten weeks of graduate-level content from scratch? That felt like wandering through a haunted house to me. Dark, empty, and full of unknowns. Adding to the surrealness, I realized that thirteen years ago, I was a graduate student here, taking several Ecampus courses myself, including an early version of the very class I would now be teaching. The idea that I could bring my professional experience back to this institution and shape this course? Thrilling, humbling… and, yes, definitely a little spooky.
The course, FES 454/554: Forestry in the Wildland-Urban Interface, explores the complex challenges of managing forests where communities and wildlands meet. Students dive into forest health, urban forestry, land-use planning, wildfire, and natural resource management through social, ecological, economic, and political lenses. It’s a “slash course,” meaning both undergraduates and graduate students can enroll, so I knew the content needed to speak to a broad spectrum of learners. And I had to build it all from the ground up.
Enter the magical world of Ecampus Instructional Design. My Instructional Design partner was way more than support. To me, she was a friendly ghost guiding me through every room of this haunted course house. There were moments when I was convinced I had hit a dead-end, only to have a creative solution appear almost instantly. From turning complex assignments into clear, engaging experiences to keeping me on track and motivated, the team transformed my raw ideas into a cohesive, polished course. I honestly cannot say enough about the skill, creativity, and dedication they bring to the table.
One lesson I carried from my own hiking adventures literally proved invaluable during the course build. Years ago, I was struggling up a 14,000-foot peak in Colorado, staring at the distant summit, more than ready to quit. My hiking buddy simply said, “Don’t look at the summit. Pick a rock a few feet ahead and walk to that. Then take a break, and pick another rock.” That became my metaphor for course development. Instead of being paralyzed by the enormity of a ten-week course, I focused on the next “rock.” Some of my rocks included simply finishing the syllabus, creating the first assignment, securing a guest lecture, or finding a key reading. By breaking the work into manageable pieces, the haunted hallways of that blank course shell became far less intimidating and actually surprisingly rewarding.
Another highlight of building this course was connecting students with the people shaping forestry in the field. Reaching out to industry professionals for guest lectures and insights brought this material to life and grounded it in examples. It also reminded me how much real-world perspectives enrich student learning. Two colleagues from my department contributed individual weeks of material, which helped broaden the course and gave students a chance to see the WUI topic through multiple professional lenses. I was grateful for their contributions too! Seeing the course evolve into a bridge between theory and practice was incredibly rewarding, and it reinforced a key principle I’d learned over the years through my various roles: that collaboration amplifies impact. Never has this resonated more with me!
For anyone stepping into a course development role for the first time, my advice is simple: lean on the resources around you. The Ecampus team offers an incredible array of tools, templates, and guidance. Don’t hesitate to ask questions, tap into expertise, and stick to timelines. Above all, remember the “next rock” approach: the mountain is climbed one step at a time. Celebrate small wins along the way because they add up faster than you think.
Looking back, building this course has been a career highlight. From the panic of staring at a totally blank syllabus to the thrill of seeing assignments, discussions, and modules come alive, I’ve learned that teaching online is truly a team sport. The course may be called Forestry in the Wildland-Urban Interface, but what I really learned was how humans, collaboration, and thoughtful design intersect to create something extraordinary. I hope my story encourages other first-time developers to embrace the process, trust their teams, and find joy in the climb. After all, even a haunted course house is easier to navigate when you have friendly ghosts guiding the way and every “next rock” brings you closer to the summit. And as the crisp autumn air settles in and the leaves turn, I’m reminded that even the spookiest, most intimidating challenges can reveal unexpected magic when you face them step-by-step.
“You won’t always have a calculator in your pocket!”
How we laugh now, with calculators first arriving in our pockets and, eventually, smartphones putting one in our hands at all times.
I have seen a lot of comparisons across the Internet between artificial intelligence (AI) and these mathematics classes of yesteryear. The idea is that AI is but the newest embodiment of this same concern, which ended up being overblown.
But is this an apt comparison to make? After all, we did not replace math lessons and teachers with pocket calculators, nor even with smartphones. The kindergarten student is not simply given a Casio and told to figure it out. The quote we all remember has a deeper meaning, hidden in the exasperated response to the question so often asked by students: “Why are we learning this?”
The response
It was never about the calculator itself, but about knowing how, when, and why to use it. A calculator speeds up the arithmetic, but the core cognitive process remains the same. The key distinction is between pressing the = button and understanding the result of the = button. A student who can set up the equation, interpret the answer, and explain the steps behind the screen will retain the mathematical insight long after the device is switched off.
The new situation – Enter AI
Scenario
Pressed for time and juggling multiple commitments, a student turns to an AI tool to help finish an essay they might otherwise have written on their own. The result is a polished, well-structured piece that earns them a strong grade. On the surface, it looks like a success, but because the heavy lifting was outsourced, the student misses out on the deeper process of grappling with ideas, making connections, and building understanding.
This kind of situation highlights a broader concern: while AI can provide short-term relief for students under pressure, it also risks creating long-term gaps in learning. The issue is not simply that these tools exist, but that uncritical use of them can still produce passing grades without the student engaging in the meaningful reflection that prior cohorts gained by doing the work themselves. Additionally, when AI-generated content contains inaccuracies or outright hallucinations, a student’s grade can suffer, revealing the importance of reviewing and verifying the material themselves. The rapid, widespread uptake of these tools underscores the need to move beyond use alone and toward cultivating the critical habits that ensure AI supports, rather than supplants, genuine learning.
One study found that:
Employing multivariate regression analysis, we find that students using GenAI tools score on average 6.71 (out of 100) points lower than non-users. While GenAI may offer benefits for learning and engagement, the way students actually use it correlates with diminished exam outcomes.
Another study (Ju, 2023) found that:
After adjusting for background knowledge and demographic factors, complete reliance on AI for writing tasks led to a 25.1% reduction in accuracy. In contrast, AI-assisted reading resulted in a 12% decline (Ju, 2023).
In this same study, Ju (2023) noted that while using AI to summarize texts improved both quality and output of comprehension, those who had a ‘robust background in the reading topic and superior reading/writing skills’ benefited the most.
Ironically, the students who would benefit most from critical reflection on AI use are often the ones using it most heavily, demonstrating the importance of embedding AI literacy into the curriculum. For example, a recent Wall Street Journal article by Heidi Mitchell (Mitchell, 2025) cites a study showing that the “less you know about AI, the more you are likely to use it” and describes AI as seemingly “magical to those with low AI literacy”.
Finally, Kosmyna et al. (2025), testing how LLM usage affects cognitive processes and neural engagement in essay writing, assembled groups of LLM users, search engine users, and those without these tools (dubbed “brain-only” users). The authors recorded weaker performance in students with AI assistance over time, a lower sense of ownership of work with inability to recall work, and even seemingly reduced neural connectivity in LLM users compared to the brain-only group, which scored better in all of the above.
The takeaway from these studies is that unstructured AI use acts as a shortcut that erodes retention. While AI assistance can be beneficial, outright replacement of thinking is harmful. In other words, AI amplifies existing competence but rarely builds it from scratch.
Undetected
Many people believe themselves to be fully capable of detecting AI-usage:
Most of the writing professors I spoke to told me that it’s abundantly clear when their students use AI. Sometimes there’s a smoothness to the language, a flattened syntax; other times, it’s clumsy and mechanical. The arguments are too evenhanded — counterpoints tend to be presented just as rigorously as the paper’s central thesis. Words like multifaceted and context pop up more than they might normally. On occasion, the evidence is more obvious, as when last year a teacher reported reading a paper that opened with “As an AI, I have been programmed …” Usually, though, the evidence is more subtle, which makes nailing an AI plagiarist harder than identifying the deed. (Walsh, 2025).
In the same NY Mag article, however, Walsh (2025) cites another study, showing that it might not be as clear who is using AI and who is not (emphasis added):
[…] while professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent.
The two quotes are not contradictory; they describe different layers of the same phenomenon. Teachers feel they can spot AI because memorable extremes stick in their minds, yet systematic testing proves that intuition alone misses the overwhelming majority of AI‑generated work. This should not be surprising though, as most faculty have never been taught systematic ways to audit AI‑generated text (e.g., checking provenance metadata, probing for factual inconsistencies, or using stylometric analysis). Nor do most people, let alone faculty grading hundreds of papers per week, have the time to audit every student. Without a shared, college-wide rubric of sorts, detection remains an ad‑hoc, intuition‑driven activity.

Faulty detection risks causing undue stress to students, and can foster a climate of mistrust by assuming that AI use is constant or inherently dishonest rather than an occasional tool in the learning process. Even with a rubric, instructors must weigh practical caveats: large-enrollment courses cannot sustain intensive auditing, some students may resist AI-required tasks, and disparities in access to tools raise equity concerns. For such approaches to work, they must be lightweight, flexible, and clearly framed as supporting learning rather than policing it.
This nuance is especially important when considering how widespread AI adoption has been. Walsh (2025) observed that “just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.” While this figure might seem to justify the use of AI detectors, it could simply reflect the novelty of the tool at the time rather than widespread intent to circumvent learning. In other words, high usage does not automatically equal cheating, showing the importance of measured, thoughtful approaches to AI in education rather than reactionary ones.
What to do…?
The main issue here is not that AI is magically writing better essays than humans can muster; it is that students are slipping past the very moments where they would normally grapple with concepts, evaluate evidence, and argue a position. Many institutions are now taking a proactive role rather than a reactive one, and I want to offer such a suggestion going forward.
Embracing the situation: The reflective AI honor log
It is a fact that large language models have become ubiquitous. They are embedded in web browsers, word processors, and even mobile keyboards. Trying to ban them outright creates a cat‑and‑mouse game; it also sends the message that the classroom is out of sync with the outside world.
Instead of fighting against a technology that is already embedded in our lives, invite students to declare when they use it and to reflect on what they learned from that interaction.
For this post, I recommend using an “AI Honor-Log Document” and embedding it deeply into courses, with the goal of increasing AI literacy.
What is it?
As assignments vary across departments and even within courses, a one-size-fits-all approach is unlikely to be effective. To support thoughtful AI use without creating extra work for students, faculty could select an approach that best aligns with their course design:
Built-in reflection: Students note when and how they used AI, paired with brief reflections integrated into their normal workflow.
Optional, just-in-time logging: Students quickly log AI use and jot a short note only when it feels helpful, requiring minimal time.
Embedded in assignments: Reflection is incorporated directly into the work, so students engage with it as part of the regular writing or research process.
Low-effort annotations: Students add brief notes alongside tasks they are already completing, making reflection simple and natural.
These options aim to cultivate critical thinking around AI without imposing additional burdens or creating the perception of punishment, particularly for students who may not be using AI at all.
AI literacy is a massive topic, so let’s only address a few things here:
Mechanics Awareness: Ability to explain the model architecture, training data, limits, and known biases.
Critical Evaluation: Requiring fact-checking, citation retrieval, and bias spotting.
Orchestration Skills: Understanding how to craft precise prompts, edit outputs, and add original analysis.
Note: you might want to go further and incorporate these into an assignment-level learning outcome. Something like “Identifies at least two potential biases in AI-generated text” could be enough on a rubric to gather interesting student responses.
Log layout example
Columns: #, Assignment/Activity, Date, AI Model, Exact Prompt, AI Output, What You Changed/Added, Why You Edited, Confidence (1–5), Link to Final Submission.

Example entry:
#: 1
Assignment/Activity: Essay #2 – Digital-privacy law
Date: 2025-09-14
AI Model: GPT-5
Exact Prompt: “Write a 250-word overview of GDPR’s extraterritorial reach and give two recent cases”
AI Output: [pastes AI text]
What You Changed/Added: Added citation to 2023 policy ruling; re-phrased a vague sentence.
Why You Edited: AI omitted the latest case; needed an up-to-date reference.
Confidence (1–5): 4
Link to Final Submission: https://canvas.oregonstate.edu/……
Potential deployment tasks (and things to look out for)
It need not take much time to model this to students or deploy it in your course. That said, there are practical and pedagogical limits depending on course size, discipline, and student attitudes toward AI. The notes below highlight possible issues and ways to adjust.
Introduce the three AI-literacy components above (either in text form or as a video, if you have more time and want to make a multimedia item). Caveat: Some students may be skeptical of AI-required work. Solution: Frame this as a reflection skill that can also be practiced without AI, offering an alternative if needed.
Distribute the template to students: post a Google-Sheet link (or similar) in the LMS. Caveat: Students with limited internet access or comfort with spreadsheets may struggle. Solution: Provide a simple Word/PDF version or allow handwritten reflections as a backup.
Model the process in the first week: Submit a sample log entry like the one above but related to your class and required assignment reflection type. Caveat: In large-enrollment courses, individualized modeling is difficult. Solution: Share one well-designed example for the whole class, or record a short screencast that students can revisit.
Require the link with each AI-assisted assignment (or as and when you believe AI will be used). Caveat: Students may feel burdened by repeated uploads or object to mandatory AI use. Solution: Keep the log lightweight (one or two lines per assignment) and permit opt-outs where students reflect without AI.
Provide periodic feedback: scan the logs, highlight common hallucinations or errors that appear in students’ entries, and give a “spot the error” mini lecture/check-in/office hour. Caveat: In large classes, it’s not realistic to read every log closely. Solution: Sample a subset of entries for themes, then share aggregated insights with the whole class during office hours, or post in weekly announcements or discussion boards designed for this kind of two-way feedback.
(Optional) Student sharing session in a discussion board: allow volunteers or require class to submit sanitized prompts (i.e., any personal data removed) and edits for peer learning. Caveat: Privacy concerns or reluctance to share work may arise. Solution: Keep sharing optional, encourage anonymization, and provide opt-outs to respect comfort levels.
Important considerations when planning AI-tasks
Faculty should be aware of several practical and pedagogical considerations when implementing AI-reflective logs. Large-enrollment courses may make detailed feedback or close monitoring of every log infeasible, requiring sampling or aggregated feedback. Some students may object to AI-required assignments for ethical, accessibility, or personal reasons, so alternatives should be available (for example, students should be able to declare that they did not use AI). Unequal access to AI tools or internet connectivity can create equity concerns, and privacy issues may arise when students share prompts or work publicly. To address these challenges, any approach should remain lightweight, flexible, and clearly framed as a tool to support learning rather than as a policing mechanism.
Conclusion
While some students may feel tempted to rely on AI, passing an assignment in this manner can also pass over the critical thinking, analytical reasoning, and reflective judgment that go beyond content mastery to true intellectual growth. Incorporating a reflective AI-usage log based not on assumption of cheating, but on the ubiquitous availability of this now-common tool, reintroduces one of the evidence-based steps for learning and mastery that has fallen out of favor in the last 2-3 years. By encouraging students to pause, articulate, and evaluate their process, reflection helps them internalize knowledge, spot errors, and build the judgment skills that AI alone cannot provide.
Fu, Y. and Hiniker, A. (2025). Supporting Students’ Reading and Cognition with AI. In Proceedings of Workshop on Tools for Thought (CHI ’25 Workshop on Tools for Thought). ACM, New York, NY, USA, 5 pages. https://arxiv.org/pdf/2504.13900v1
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. https://arxiv.org/abs/2506.08872
In part one of this two-part blog series, we focused on setting the stage for a better feedback cycle by preparing students to receive feedback. In part two, we’ll discuss the remaining steps of the cycle: how to deliver feedback effectively and ensure students use it to improve.
In part one, we learned about the benefits of adding a preliminary step to your feedback system by preparing students to receive suggestions and view them as helpful and valuable rather than as criticism. If you haven’t read part one, I recommend doing so before continuing. This first crucial but often overlooked step involves fostering a growth mindset and creating an environment where students understand the value of feedback and learn to view it as a tool for improvement rather than criticism.
Step 2: Write Clear Learning Outcomes
The next step in the cycle is likely more familiar to teachers, as much focus in recent decades has been placed on developing and communicating clear, measurable learning outcomes when designing and delivering courses. Bloom’s Taxonomy is commonly used as a reference when determining learning outcomes and is often a starting point in backwards design strategy. Instructors and course designers must consider how a lesson, module, or course aligns with the learning objectives so that students are well-equipped to meet these outcomes via course content and activities. Sharing these expected outcomes with students, in the form of CLOs and rubrics, can help them to focus on what matters most and be better informed about the importance of each criterion. These outcomes should also inform instructors’ overall course map and lesson planning.
Another important consideration is ensuring that learning outcomes are measurable, which requires rewriting unmeasurable ones that begin with verbs such as understand, learn, appreciate, or grasp. A plethora of resources are available online to assist instructors and course designers who want to improve the measurability of their learning outcomes. These include our own Ecampus-created Bloom’s Taxonomy Revisited and a chart of active and measurable verbs from the OSU Center for Teaching and Learning that fit each taxonomy level.
Step 3: Provide Formative Practice & Assessments
The third step reminds us that student learning is also a cycle, overlapping and informing our feedback cycle. When Ecampus instructional designers build courses, we try to ensure instructors provide active learning opportunities that engage students and teach the content and skills needed to meet our learning objectives. We need to follow that up with ample practice assignments and assessments, such as low-stakes quizzes, discussions, and other activities to allow students to apply what they have learned. This in turn allows instructors to provide formative feedback that should ideally inform our students’ study time and guide them to correct errors or revisit content before being formally or summatively graded. Giving preliminary feedback also gives us time to adjust our teaching based on how students perform and home in on what to review before assessments. Providing practice tests or assignments, or using exam wrappers, exit cards, or “muddiest point” surveys to collect your students’ feedback, can also be an important practice that helps us improve our teaching.
Step 4: Make Feedback Timely and Actionable
Step four is two-fold, as both the timeliness and the quality of the feedback we give matter. The best time to give feedback is while the student can still use it to improve future performance. When planning your term schedule, it can be useful to block off time to provide feedback on crucial assignments and quizzes, since a delay for the instructor is a delay for students. Setting clear due dates, reminding students of them, and sticking to the timetable by returning feedback promptly all help keep feedback timely.
To be effective, feedback must focus on moving learning forward. It should target the identified learning gap and suggest specific steps the student can take to improve. To be actionable, a suggestion should describe actions that will help the student do better without overloading them with information: choose a few actionable areas to focus on each time. Comments that praise students' abilities, attitudes, or personalities are not as helpful as ones that give concrete ways to improve the work.
Step 5: Give Time to Use Feedback and Incentivize It
The last step in the cycle, giving students time to use the feedback provided, is often relegated to homework or ignored altogether. Feedback is most useful when students are required to view it and, preferably, do something with it; if we skip this step, the feedback may be glanced at perfunctorily and promptly forgotten. To close the loop, students must put the feedback to use. This is the point where your feedback cycle can sputter out, so be sure to make time to prioritize this final step. Students may need assistance in applying your feedback, so guiding them through the process and providing scaffolds and models can be beneficial, especially during their initial attempts.
In my experience, it never hurts to incentivize this step: this can be as simple as awarding points for reflecting on the feedback given or offering extra credit for revised work. As a writing teacher, I required rewrites of any essay that scored below passing and offered to regrade rewritten essays that incorporated my detailed feedback. This proved to be a good solution, and while marking essays was definitely labor intensive, I was rewarded with very positive evaluations from my students, who often commented that they learned a lot and improved significantly in my courses.
Considerations
A robust feedback cycle often includes opportunities for students to develop their own feedback skills through self-assessment and peer review. Self-assessment promotes metacognition and helps students learn to identify their own strengths and weaknesses; it also lets them reflect on their study habits and motivation, manage self-directed learning, and develop transferable skills. Peer review gives students valuable practice honing evaluative skills, applying feedback techniques, and giving and receiving feedback, all of which they will find useful throughout adulthood. Both practices also give students a deeper understanding of the criteria teachers use to evaluate work, which can help them fine-tune their performance.
Giving and receiving feedback effectively is a key skill we all develop as we grow; it helps us reflect on our performance, guide our future behavior, and fine-tune our practices. Later in life, feedback continues to be vital as we move into work and careers, where we receive it from the people we work for and with. As teachers, the most important aspect of our job is giving feedback that shows students how to improve and meet the learning outcomes of our courses. We soon learn, however, that giving feedback can be difficult for several reasons. Despite it being one of our primary duties as educators, we may have received little training on how to give feedback or what effective feedback looks like. Providing the detailed feedback students need to improve is also time-consuming, and we may find that students do little with the feedback we spend so much time writing. Worse, students may not respond well to feedback: they might become defensive, feel misunderstood, or ignore it altogether. All of this can set us up for an ineffective feedback process, which is frustrating for both sides.
I taught ESL to international students from around the world for more than 10 years and gave a fair amount of feedback along the way. Over many cycles, I developed a detailed, systematic approach to providing feedback, which looked like this.
Gaps in this cycle can lead to frustration on both sides. Each step is essential, so we'll look at each one in greater depth in this blog series. Today, we'll focus on starting strong by preparing students to receive feedback, a crucial beginning that sets the stage for a healthy cycle.
Step 1: Prepare Students to Receive Feedback
An effective feedback cycle starts before any feedback is given, with careful groundwork. The first and often-overlooked step in the cycle is preparing students to receive feedback, which takes planned, ongoing work. Many factors influence whether students welcome feedback: their self-confidence coming into your course, their self-concept and mindset as learners, their working memory and learning capacity, how they view your feedback, and whether they feel they can trust you. Some of these factors, such as motivation and working memory, are often beyond our control, but creating an atmosphere of trust and safety in the classroom can still support students. Student confidence and mindset, in particular, are areas where teachers can play a crucial supporting role.
Researcher Carol Dweck coined the term "growth mindset" after noticing that some students showed remarkable resilience when faced with hardship or failure, while others easily became frustrated and angry and tended to give up on tasks. She developed her theory of growth vs. fixed mindsets to explain the differences between these two orientations. The chart below shows some features of each extreme, and we can easily see how a fixed mindset can limit students' resilience and persistence when they face difficulties.
Mindset directly impacts how students receive feedback. Research has shown that students who believe that their intelligence and abilities can be developed through hard work and dedication are more likely to put in the effort and persist through difficult tasks, while those who see intelligence as a fixed, unchangeable quality are more likely to see feedback as criticism and give up.
Developing a growth mindset can be transformative for students, especially if they have grown up in a strongly fixed-mindset environment. People with a growth mindset are more likely to seek out feedback and use it to improve their performance, while those with a fixed mindset are more likely to ignore feedback or become defensive when receiving it. Students who are praised for their effort and hard work, rather than for innate ability, are more likely to develop a growth mindset because they come to see themselves as capable of improving through their own efforts rather than relying on natural talent. A growth mindset also helps students deal with failure and reframe it positively. It can be very difficult to receive a critique without tying our performance to our identity, so students need some assurance that they can safely take risks and try without fear of being punished for failing.
Additionally, our own mindset affects how we view student effort, and we often convey those messages to students, purposefully or not. Teachers' growth mindsets show a positive, statistically significant association with the development of their students' growth mindsets. Our mindset also shapes the type of feedback we are likely to provide, the time we spend giving it, and the way we view our students' abilities.
This research suggests that taking the time to learn about and foster a growth mindset, in ourselves and in our students, benefits everyone. Teachers should address the value of feedback early in the learning process and repeatedly throughout the term or year, and couching our messaging in positive, growth-oriented language can bolster the feedback process and start students off on the right foot, prepared to improve.
Here are some concrete steps you can take to improve how your students will receive feedback:
Model a growth mindset through language and actions
Include growth-oriented statements in early messaging
Provide resources for students to learn more about growth vs. fixed mindsets
Discuss the value of feedback and incorporate it into lessons
Create an atmosphere of trust and safety that helps students feel comfortable trying new things
Teach that feedback is NOT a judgment of the person, but rather of the product or process
Ensure the feedback we give focuses on the product or process rather than the individual
Praise effort rather than intelligence
Make it clear that failure is part of learning and that feedback helps improve performance
Provide students with tools and strategies to plan, monitor, and evaluate their learning
Resources for learning more about growth mindset and how it relates to feedback: