For centuries, knowledge and access to education were restricted to a privileged few. In today’s world, almost anyone can access information through the web and, more recently, through AI tools. However, it is important to recognize that these tools, while offering expansive access to content of varied nature, also pose challenges. Generative AI has fundamentally changed how students interact with assignments, but it has also given instructors a powerful new lens for examining their own assessment design. Rather than treating AI solely as a threat to academic integrity, we can use it as a diagnostic tool – one that quickly reveals whether our assignments and rubrics are actually measuring what we think they are. If an AI can complete an assignment and meet the stated criteria for success without engaging course-specific learning, is that really a student problem, or a signal to modify the design?


A small shift in perspective – from “they’re using this to cheat” to “how can this help me prevent cheating?” – is especially important in online and hybrid environments, where traditional academic integrity controls like proctored exams are either unavailable or undesirable. Instead of trying to outmaneuver AI or police its use, instructors can ask a more productive question: What does success on this assignment actually require?


Why AI Is a Helpful Design Tool


AI can function as an unusually honest “devil’s advocate.” It doesn’t get tired, anxious, or confused about instructions, and it excels at finding the most efficient path to meeting stated requirements. When an instructor gives an AI model an assignment prompt and a rubric, the resulting output can expose whether the rubric rewards deep engagement or simply fluent compliance.


If an AI can generate a response that appears to meet expectations without referencing key course concepts, grappling with assumptions, or making meaningful decisions, then students can likely do the same. In this way, AI acts less like a cheating student and more like a mirror held up to our assessment design.

An example using Copilot:


Stress-Testing Assignments Before Students Ever See Them

One practical workflow to test the resilience of your assignments is to run them through AI before they are deployed. Provide the model with the prompt and the rubric (nothing else) and ask it to produce a strong submission. Then evaluate that response using your own grading criteria.

The point is not to judge whether the AI’s answer is “good,” but to analyze why it meets the stated requirements so easily and (at first sight) flawlessly. If the response earns high marks through generic explanations, surface-level analysis, or broadly applicable reasoning, that’s evidence that the assessment may not be tightly aligned with course learning outcomes, may not demand deeper thinking and analysis, or may not elicit students’ own creativity. This kind of stress-testing takes minutes, and it often surfaces issues that would otherwise only become visible after grading a full cohort.
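For instructors who like to script repetitive steps, the assembly part of this workflow can be sketched in a few lines. This is a minimal sketch, not a prescribed tool: the function name `build_stress_test_prompt` is my own invention, and the actual submission of the prompt to an AI model (Copilot, ChatGPT, or another tool) is deliberately left out, since that choice is up to the instructor.

```python
def build_stress_test_prompt(assignment: str, rubric: str) -> str:
    """Combine the assignment prompt and the rubric -- and nothing else --
    into a single request asking an AI model for a strong submission."""
    return (
        "You are a student completing the assignment below. "
        "Produce the strongest submission you can, using only the "
        "assignment prompt and rubric provided.\n\n"
        f"--- Assignment ---\n{assignment}\n\n"
        f"--- Rubric ---\n{rubric}\n"
    )


if __name__ == "__main__":
    # Hypothetical excerpts stand in for the full prompt and rubric.
    prompt = build_stress_test_prompt(
        "Conceptual Design and Analysis of a Chemical Reactor ...",
        "Understanding of Chemical Engineering Principles ...",
    )
    print(prompt)
    # Next step (not shown): paste the prompt into your AI tool of
    # choice, then grade the response against your own rubric.
```

The deliberate constraint here mirrors the workflow above: the model sees only what a student would see, so whatever it produces reflects what the prompt and rubric alone reward.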


The Task

Assignment Prompt

Subject: Chemical Engineering
Level: Upper-level undergraduate (3rd year)
Topic: Reactor Design & Engineering Judgment

Assignment: Conceptual Design and Analysis of a Chemical Reactor

You are tasked with the preliminary design and analysis of a chemical reactor for the production of a commodity chemical of your choice (e.g., ammonia, methanol, ethylene oxide, sulfuric acid, or another well-established industrial product).

Your analysis should address the following:

  1. Process Overview
    • Briefly describe the selected chemical process and its industrial relevance.
    • Identify the primary reaction(s) involved and classify the reaction type(s) (e.g., exothermic/endothermic, reversible/irreversible, catalytic/non-catalytic).
  2. Reactor Selection
    • Propose an appropriate reactor type (e.g., CSTR, PFR, batch, packed bed).
    • Justify your selection based on reaction kinetics, heat transfer considerations, conversion goals, and operational constraints.
  3. Operating Conditions
    • Discuss key operating variables such as temperature, pressure, residence time, and feed composition.
    • Explain how these variables influence conversion, selectivity, and safety.
  4. Engineering Trade-Offs
    • Identify at least two major design trade-offs (e.g., conversion vs. selectivity, energy efficiency vs. safety, capital cost vs. operating cost).
    • Explain how an engineer might balance these trade-offs in practice.
  5. Limitations and Assumptions
    • Clearly state any simplifying assumptions made in your analysis.
    • Discuss the limitations of your proposed design at this preliminary stage.

Your response should demonstrate clear engineering reasoning rather than detailed numerical calculations. Where appropriate, qualitative trends, simplified relationships, or order-of-magnitude reasoning may be used.

Length: ~1,000–1,200 words
References: Not required, but accepted if used appropriately

The Rubric

  1. Understanding of Chemical Engineering Principles
    • Excellent (A): Demonstrates strong understanding of reaction engineering concepts and correctly applies them to the chosen process
    • Good (B): Demonstrates general understanding with minor conceptual gaps
    • Satisfactory (C): Shows basic familiarity but with notable misunderstandings or oversimplifications
    • Unsatisfactory (D/F): Demonstrates weak or incorrect understanding of core concepts
  2. Reactor Selection & Justification
    • Excellent (A): Reactor choice is well-justified using multiple relevant criteria (kinetics, heat transfer, safety, operability)
    • Good (B): Reactor choice is reasonable but justification lacks depth or completeness
    • Satisfactory (C): Reactor choice is weakly justified or based on limited reasoning
    • Unsatisfactory (D/F): Reactor choice is inappropriate or unjustified
  3. Analysis of Operating Conditions
    • Excellent (A): Clearly explains how operating variables affect performance, safety, and efficiency
    • Good (B): Explains effects of variables with minor omissions or inaccuracies
    • Satisfactory (C): Provides limited or superficial discussion of operating conditions
    • Unsatisfactory (D/F): Fails to meaningfully analyze operating variables
  4. Engineering Trade-Offs
    • Excellent (A): Insightfully identifies and explains realistic trade-offs, demonstrating engineering judgment
    • Good (B): Identifies trade-offs but discussion lacks nuance or integration
    • Satisfactory (C): Trade-offs are mentioned but poorly explained or generic
    • Unsatisfactory (D/F): Trade-offs are absent or incorrect
  5. Assumptions & Limitations
    • Excellent (A): Assumptions are clearly stated and critically evaluated
    • Good (B): Assumptions are stated but not fully examined
    • Satisfactory (C): Assumptions are implicit or weakly articulated
    • Unsatisfactory (D/F): Assumptions are missing or inappropriate
  6. Clarity & Organization
    • Excellent (A): Response is well-structured, clear, and professional
    • Good (B): Generally clear with minor organizational issues
    • Satisfactory (C): Organization or clarity interferes with understanding
    • Unsatisfactory (D/F): Poorly organized or difficult to follow



Identifying Gaps in What We’re Measuring

AI performs particularly well on tasks that rely on recognition, pattern matching, and general world knowledge. This means it can easily succeed on assessments that emphasize recall, procedural execution, or elimination of obviously wrong answers. When that happens, the assessment may be measuring familiarity rather than understanding.

Revising these tasks does not require making them longer or more complex. Instead, instructors can focus on higher-order thinking and metacognition, for example requiring students to articulate why a particular approach applies, what assumptions are being made, or how results should be interpreted. These shifts move the assessment away from answer production and toward critical and disciplinary thinking – without assuming that AI use can or should be eliminated. Identifying these gaps can also help you revisit the structure of the assignment to determine how its elements (purpose, instructions/task/prompt, and criteria for success) connect cohesively to strengthen the assignment.

In the second part of this blog, I take the same task above and work with the AI to refine a rubric.

There are many benefits to using rubrics for both instructors and students, as discussed in Rubrics Markers of Quality Part 1 – Unlock the Benefits. Effective rubrics serve as a tool to foster excellence in teaching and learning, so let’s take a look at some best practices and tips to get you started.

Best Practices

Alignment

Rubrics should articulate a clear connection between how students demonstrate learning and the Course Learning Outcomes (CLOs). Solely scoring gateway criteria – the minimum expectations for a task (e.g., word count, number of discussion responses) – can be alluring. Consider designing rubrics that move past minimum expectations and assess what students should be able to do after completing a task.

Detailed, Measurable, and Observable

Clear and specific rubrics communicate how to demonstrate learning, how performance will be measured, and what excellence looks like. These details provide students with a tool to self-assess their progress and level up their performance autonomously.

Language Use

Rubrics create an opportunity to foster an inclusive learning environment. Clear and consistent language takes a diverse student body into consideration: online students hail from around the world and speak various native languages, and learners may interpret the same words differently. Use simple terms with specific, detailed descriptions. Doing so creates space for students to focus on learning instead of decoding expectations. Additionally, apply parallel language consistently. Using similar language (e.g., demonstrates, mostly demonstrates, doesn’t demonstrate) across each criterion helps differentiate between performance levels.

Tips of the Trade!

Suitability

Consider the instructional aim, learning outcomes, and the purpose of a task when choosing the best rubric for your course.

  • Analytic Rubrics: The hallmark of an analytic rubric is that it evaluates each performance criterion separately. Characteristically, this rubric takes the form of a grid, with performance scored along a continuum of levels. Analytic rubrics are detailed, specific, measurable, and observable, which makes this rubric type an excellent tool for formative feedback and assessment of learning outcomes.
  • Holistic Rubrics: Holistic rubrics evaluate all criteria together in one general description for each performance level. Ideally, this rubric design evaluates the overall quality of a task. Consider applying a holistic rubric when an exact answer isn’t needed, when deviation or errors are allowed, and for interpretive or exploratory activities.
  • General Rubrics: Generalized rubrics can be leveraged to assess multiple tasks that share the same learning outcomes (e.g., reflection paper, journal). Performance dimensions focus solely on outcomes rather than discrete task features.

Explicit Expectations

Demystifying expectations can be challenging. Consider articulating performance expectations in the task description before deploying a learning task, and refrain from using rubrics as a standalone vehicle to communicate expectations – students may miss the rubric altogether and fail to meet expectations. Secondly, make the implicit explicit! Be transparent. Provide students with all the information and tools they need to be successful from the outset.

Iterate

A continuous improvement process is key to developing high-quality assessment rubrics, so plan for multiple tests and revisions. There are several strategies for testing a rubric: 1) ask students, teaching assistants, or professional colleagues to score a range of work samples with the rubric; 2) integrate opportunities for students to conduct self-assessments; 3) assess a task with the same rubric across course sections and academic terms. After testing is complete, reflect on how effectively and accurately the rubric performed. Revise and redeploy as needed.

Customize

Save some time and don’t reinvent the wheel: leverage existing samples and templates. Keep in mind, though, that existing resources weren’t designed with your course in mind, so customization will be needed to ensure the accuracy and effectiveness of the rubric.

Are you interested in learning more about rubrics and how they can enrich your course? Your Instructional Designer can help you craft effective rubrics that will be the best fit for your unique course.


Would you like to save time grading, accurately assess student learning, provide timely feedback, track student progress, demonstrate teaching and learning excellence, foster communication, and much more? If you answered yes, then rubrics are for you! Let’s explore why the intentional use of rubrics can be a valuable tool for instructors and students.

Value for instructors

  • Time management: Have you ever found yourself drowning in a sea of student assignments that need to be graded ASAP (like last week)? Grading with a rubric can quicken the process because each student is graded in the same way using the same criteria. Rubrics that are detailed, specific, organized, and measurable clearly communicate expectations. As you become familiar with how students commonly respond to an assessment, feedback can be easily personalized and readily deployed.
  • Timely and meaningful feedback: Research has shown that there are several factors that enhance student motivation. One factor is obtaining feedback that is shared often, detailed, timely, and useful. When students receive relevant, meaningful, and useful feedback quickly they have an opportunity to self-assess their progress, course correct (if necessary), and level up their performance.
  • Data! Data! Data! Not only can rubrics provide a panoramic view of student progress, but they can also help identify teaching and learning gaps. Instructors can see whether students are improving, struggling, remaining consistent, or missing the mark completely. The information gleaned from rubrics can be used to compare student performance within a course, between course sections, or even across time, and it can serve as feedback to the instructor regarding the effectiveness of the assessment.
  • Effectiveness: When a rubric is designed from the outset to measure the course learning outcomes, it can serve as a tool for effective and accurate assessment. Tip! Refrain from solely scoring gateway criteria (i.e., organization, mechanics, and grammar). This matters because students will interpret meeting those criteria as a demonstration that they have met the learning outcomes even if they haven’t. If learning gaps are consistently identified, consider evaluating the task and rubric to ensure instructions, expectations, and performance dimensions are clear and aligned.
  • Shareable: As academic programs begin to develop courses for various modalities (i.e. on campus, hybrid, online) consistently assessing student learning can be a challenge. The advantage of rubrics is they can be easily shared and applied between course sections and modalities. Doing so can be especially valuable when the same course is taught by multiple instructors and teaching assistants.
  • Fosters communication: Instructors can clearly articulate performance expectations and outcomes to key stakeholders such as teaching assistants, instructors, academic programs, and student service representatives (e.g., Ecampus Student Success Team, Writing Center). Rubrics provide additional context above and beyond what is outlined in the course syllabus. A rubric can communicate how students will be assessed, what students should attend to, and how institutional representatives can best support students. Imagine a scenario where a student contacts the Writing Center to review a draft term paper, and the representative asks for the grading criteria or rubric. The grading criteria furnished by the instructor only outline the requirements for word length, formatting, and citation conventions. None of these criteria communicate the learning outcomes or make any reference to the quality of the work. In this example, the representative might find it challenging to effectively support the student without understanding the instructor’s implicit expectations.
  • Justification: Have you ever been tasked with justifying a contested grade? Rubrics can help you through the process! Rubrics that are detailed, specific, measurable, complete, and aligned can be used to explain why a grade was awarded. A rubric can quickly and accurately highlight where a student failed to meet specific performance dimensions and/or the learning outcomes.
  • Evidence of teaching improvement: The values of continuous improvement, lifelong learning, and ongoing professional development are woven into the very fabric of academia. Curating effective assessment tools and methods can provide a means of demonstrating performance and providing evidence to support professional advancement.

Value for students

  • Equity: Using rubrics creates an opportunity for consistent and fair grading for all students. Each student is assessed on the same criteria and in the same way. If performance criteria are not clearly communicated from the outset, evaluations may be based on implicit expectations, which students do not know or understand, and this can create an unfair assessment structure.
  • Clarity: Ambiguity is decreased by using student-centered language. The student body is highly diverse, and many students speak different native languages, so learners may interpret words (e.g., critical thinking) differently. Using clear, simple language can mitigate unintended barriers and decrease confusion.
  • Expectations: Students know exactly what they need to do to demonstrate learning, what instructors are looking for, how to meet the instructor’s expectations, and how to level up their performance. A challenge can be to ensure that all expectations (implicit and explicit) are clearly communicated to students. Tip! Consider explaining expectations in the description of the task as well.
  • Skill development: Rubrics can introduce new concepts and terminology and help students develop authentic skills (e.g., critical thinking) that can be applied outside of their academic life.
  • Promotes metacognition and self-regulatory behavior: Guidance and feedback help students reflect on their thought processes, self-assess, and foster positive learning behaviors.

As an Ecampus course developer, you have a wide array of support services and experts available to you. Are you interested in learning more about rubric design, development, and implementation? Contact your Instructional Designer today to begin exploring best-fit options for your course. Stay tuned for Rubrics: Markers of Quality (Part 2) – Tips & Best Practices.

References:

  • Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. Alexandria, VA: ASCD.
  • Richter, D., & Ehlers, U.-D. (2013). Open learning cultures: A guide to quality, evaluation, and assessment for future learning (1st ed.). Berlin, Heidelberg: Springer.
  • Stevens, D. D., & Levi, A. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (2nd ed.). Sterling, VA: Stylus.
  • Walvoord, B. E. F., & Anderson, V. J. (2010). Effective grading: A tool for learning and assessment in college (2nd ed.). San Francisco, CA: Jossey-Bass.