Research on rubrics has often focused on validity and reliability (Matshedisho, 2020), but more recent work explores how students actually interpret and use rubrics (Brookhart, 2015; Matshedisho, 2020; Taylor, 2024; Tessier, 2021). This emerging scholarship consistently shows a gap between instructor intention and student interpretation. For example, Matshedisho (2020) found that “students expected procedural and declarative guidance, while instructors expected conceptual, reflective work” (p. 175).

If students understand rubrics differently than we intend, rubrics cannot fully support learning. Below are key reasons this mismatch occurs—and strategies to close the gap.

Tacit Knowledge and Language

Students bring varied backgrounds, disciplinary exposure, and assumptions to their learning (Brookhart, 2015; Matshedisho, 2020). Many do not enter college knowing what a rubric is or how to apply one (Tessier, 2021).

Key issues include:

  • Unfamiliar terms or disciplinary jargon
    Early‑year students may lack field‑specific language. In Matshedisho’s (2020) study, first‑year medical students struggled with the sociology‑specific criteria required for a reflective assignment.
  • Different meanings across disciplines
    Terms like “concept,” “analysis,” or “argument” shift across fields, confusing students taking multiple general‑education courses.
  • Ambiguous or subjective labels
    Students struggle to distinguish between labels like “good” and “very good,” and terms such as “critical analysis” can feel subjective (Taylor, 2024).
  • Minimal differentiation between performance levels
    When descriptors at adjacent levels are too similar, students cannot tell the ratings apart or see how to progress from one level to the next.

How Students Use Rubrics

Students often approach rubrics differently than instructors expect:

  • They treat the rubric as separate from course content, starting with the criteria column and reading each cell in isolation (Matshedisho, 2020).
  • They search for procedural instructions, expecting the rubric to tell them how to complete the assignment (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).
  • Many prefer hard‑copy rubrics over digital versions (Tessier, 2021; Panadero, 2025).

Bridging the Gap Through Instruction

Rubrics only support learning when students understand them as instructors intend (Brookhart, 2015). Effective strategies include:

Build Shared Understanding

  • Explain key terms and check for tacit knowledge—especially discipline‑specific language (Taylor, 2024).
  • Explicitly teach what a rubric is and how to use one; don’t assume prior knowledge (Tessier, 2021).
  • Calibrate expectations by discussing examples and rating sample work with students (Taylor, 2024).

Integrate Rubrics Into the Course

  • Refer to the rubric during lectures and discussions (Tessier, 2021).
  • Provide feedback that directly connects to rubric criteria (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).
  • Celebrate or reinforce active rubric use (Tessier, 2021).
  • Provide hard copies of the rubric whenever possible (Tessier, 2021; Panadero, 2025).

Support Instructors

  • Offer training in rubric design and student‑centered implementation (Brookhart, 2015; Taylor, 2024).
  • Use shared rubrics for multi‑section courses to support consistency.
  • Meet as a teaching team to create and calibrate the common rubric.
  • Recognize limitations of online rubric platforms; include clarifying hyperlinks or exemplars when possible (Panadero, 2025).

Clarify Task Expectations

Students often want a checklist. Provide procedural instructions separately, and use the rubric for conceptual evaluation (Matshedisho, 2020; Taylor, 2024; Tessier, 2021).

Conclusion

Research shows that students respond favorably when asked about a rubric’s validity and reliability, but when the focus shifts to how students interact with, understand, and apply rubrics, it is clear we still have a long way to go. The strategies above offer a starting point for designing and using rubrics that students can actually interpret as intended.

References

Brookhart, S. M. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343–368. doi:10.1080/00131911.2014.929565

Matshedisho, K. R. (2020). Straddling rows and columns: Students’ (mis)conceptions of an assessment rubric. Assessment & Evaluation in Higher Education, 169–179. doi:10.1080/02602938.2019.1616671

Panadero, E. O. (2025). Analysis of online rubric platforms: Advancing toward erubrics. Assessment & Evaluation in Higher Education, 31–49. doi:10.1080/02602938.2024.2345657

Taylor, B. K. (2024). Rubrics in higher education: An exploration of undergraduate students’ understanding and perspectives. Assessment & Evaluation in Higher Education, 799–809. doi:10.1080/02602938.2023.2299330

Tessier, L. (2021). Listening to student perspectives of rubrics: Perceptions, uses, and grades. Journal on Excellence in College Teaching, 32(3), 133–168.