Using Rubric Criteria and Levels to Ensure Accuracy in Peer Assessment

The idea of peer assessment can be daunting for professors who worry about the accuracy and fairness of peer-given marks. To address this concern, professors should provide a rubric for every activity. Rubrics are well-established tools for guiding peer-to-peer evaluations because they help reviewers evaluate with both consistency and accuracy, and they are essential in helping students understand the assignment expectations and the key learning goals. Here at Kritik, we understand the importance of rubrics in peer assessment, which is why we make rubric building easier with customizable rubrics at your disposal.

"I am a big believer in rubrics; they harmonize everyone's expectations on an assignment. Kritik's features include pre-built rubrics for any type of activity you plan on assigning, so I use those and tweak them to ensure that they apply to my class' learning objectives." -Dr. Jeff Boggs, see more here

An effective rubric should clearly list the criteria and have a range of rating levels with detailed descriptions (Brookhart, 2018). But what's the optimal number of criteria and levels to have?

How Many Criteria Should My Rubric Have?

There is no universal optimal number of criteria, but “the general consensus is that less is more” (Suskie, 2017). Research suggests that once rubrics get too lengthy, it becomes difficult for students to understand the main focus of the assignment and the key skills they need to demonstrate (Lane, 2010). Thus, Quinlan (2012) suggests starting with 3-4 criteria, and Stevens and Levi (2012) recommend a maximum of 7. Overall, the number of criteria should be tied to the learning outcome(s) the assignment assesses (Brookhart, 2018), and the complexity of the assignment will also be a factor.

Narrowing Down Criteria

To help determine the number of criteria to include, MIT suggests considering the following questions:

  • What is the learning goal of the activity?
  • How will students demonstrate that they have achieved the learning goal(s)?
  • What knowledge and skills are required to succeed? 
  • What characteristics should the final product have? 

After answering these questions and using them to develop a list of potential criteria, the next step is to prioritize the most important traits. Eliminate unnecessary criteria and group similar ones together to pare the rubric down to its essentials.

How Many Levels Should My Rubric Have? 

Like criteria, the optimal number will vary, but a high-quality rubric should typically consist of 3-5 levels (Suskie, 2009). A minimum of three is recommended so that there are enough levels to represent adequate work, inadequate work, and an exceptional level that motivates students to go above and beyond (Suskie, 2009). On the other hand, no more than 5 is recommended because having too many performance levels makes it harder to distinguish between them (Suskie, 2009).

"They see the rubric when they submit their assignments, and they use the rubrics to evaluate each other, so it trains them to respond to objectives. It also manages a student's expectation of what they need to do to achieve the grade they want" - Dr. Erin Panda, see more here

Using The Number of Levels to Improve Grading Consistency

1.  Ensure the Number of Levels Is Linked to the Number of Relevant Distinctions

Strong rubrics typically have 3-5 levels, but most importantly, the number of levels should be linked to the number of relevant distinctions in a criterion (Suskie, 2009). This rule makes it easier to assign ratings consistently because each level covers characteristics that are easily distinguishable from the others. For example, if the criterion were simply "include 3 properly cited sources", there would be 3 relevant distinctions:

  • 1 properly cited source
  • 2 properly cited sources
  • 3 properly cited sources 

The same approach applies to more subjective criteria, such as creativity, where the relevant distinctions could be (both examples are modeled in the brief sketch after this list):

  • The assignment lacked creativity 
  • There is some originality and creativity 
  • There is a good amount of creativity and originality 
  • There is an exceptional amount of creativity and originality
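To make the pairing of levels and distinctions concrete, here is a minimal sketch in Python. It is purely illustrative and not part of Kritik's platform; the Criterion class and the wording of the levels are hypothetical. Each criterion carries exactly as many levels as it has relevant distinctions, so the counts are allowed to differ between criteria.

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    """One rubric criterion with its performance levels, ordered lowest to highest."""
    name: str
    levels: tuple[str, ...]  # one entry per relevant distinction


# The citation criterion has 3 relevant distinctions, so it gets 3 levels.
citations = Criterion(
    name="Properly cited sources",
    levels=(
        "1 properly cited source",
        "2 properly cited sources",
        "3 properly cited sources",
    ),
)

# The creativity criterion has 4 relevant distinctions, so it gets 4 levels.
creativity = Criterion(
    name="Creativity",
    levels=(
        "The assignment lacked creativity",
        "There is some originality and creativity",
        "There is a good amount of creativity and originality",
        "There is an exceptional amount of creativity and originality",
    ),
)

rubric = [citations, creativity]
```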

2.  Vary the Number of Levels for Each Criterion

Most rubrics are made so that each criterion has the same number of levels, but Humphry & Heldsinger (2014) found that this structure can influence an evaluator's judgment: reviewers tend to assign students similar levels across all criteria. For example, “if a student scores in the top category on one element, the student is likely to receive scores in the top category on all other elements, even when performance across elements is uneven” (Stanny, n.d.).

This tendency arises when every criterion is given an equal number of levels, unrelated to the number of relevant distinctions each criterion actually has (Humphry & Heldsinger, 2014). If the first tip is followed, the rubric should not impair the reviewer's ability to mark each criterion independently, even when every criterion happens to use the same number of levels. Even so, rubrics with varying numbers of levels across criteria have been shown to help reviewers give more independent scores, improving accuracy and consistency in student grading (Humphry & Heldsinger, 2014).
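Continuing in the same illustrative spirit, one simple way to see that varying level counts do not complicate scoring is to normalize each criterion by its own top level before combining ratings. The sketch below is an assumption, not a method drawn from Humphry & Heldsinger (2014) or from Kritik's grading system.

```python
def combine_ratings(level_counts: dict[str, int], ratings: dict[str, int]) -> float:
    """Combine per-criterion ratings into a single score between 0 and 1.

    level_counts maps each criterion to its number of levels (which may differ
    per criterion); ratings maps each criterion to the 0-based level the
    reviewer chose. Each criterion is normalized by its own top level, so a
    criterion never dominates simply because it has more levels.
    """
    per_criterion = [
        ratings[name] / (levels - 1) for name, levels in level_counts.items()
    ]
    return sum(per_criterion) / len(per_criterion)


# Citations has 3 levels and creativity has 4; the reviewer rates each
# criterion on its own scale, so an uneven profile is easy to record.
print(combine_ratings(
    {"Properly cited sources": 3, "Creativity": 4},
    {"Properly cited sources": 2, "Creativity": 1},
))  # prints ~0.667
```

Because each criterion is scaled independently, a reviewer can give the top level on one criterion and a low level on another without the rubric's structure nudging the scores toward each other.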

Conclusion 

Making effective rubrics can be tricky at first, but they are incredibly valuable tools for peer evaluation, helping ensure that students mark accurately and analyze the key points. For ideas on how to implement rubric-based assessments, we recommend checking out Professor Jeff Boggs's webinar!

References 

Brookhart, S. M. (2018, April 10). Appropriate criteria: Key to effective rubrics. Frontiers. Retrieved February 7, 2022, from https://www.frontiersin.org/articles/10.3389/feduc.2018.00022/full 

Humphry, S. M., & Heldsinger, S. A. (2014, June 1). Common Structural Design Features of Rubrics May Represent a Threat to Validity. Sage Journals. Retrieved February 8, 2022, from https://journals.sagepub.com/stoken/rbtfl/yrfv29x8TlcH./full

Lane, S. (2010). Performance assessment: The state of the art. (SCOPE Student Performance Assessment Series). Stanford, CA: Stanford University, Stanford Center for Opportunity Policy in Education.

Massachusetts Institute of Technology. (n.d.). How to use Rubrics. Teaching + Learning Lab. Retrieved March 27, 2022, from https://tll.mit.edu/teaching-resources/assess-learning/how-to-use-rubrics/

Quinlan, A. M. (2012). A complete guide to rubrics: Assessment made easy for teachers of K–college (2nd ed.). Lanham, MD: Rowman & Littlefield.

Stanny, C. (n.d.). How many levels of quality should we represent in a rubric? Retrieved February 7, 2022, from https://www.bellarmine.edu/docs/default-source/faculty-development-docs/17-how-many-levels-of-quality-should-we-represent-in-a-rubric.pdf?sfvrsn=963e9081_0 

Suskie, L. (2009). Using a scoring guide or rubric to plan and evaluate an assessment. In Assessing student learning: A common sense guide (2nd edition, pp. 137-154). Jossey-Bass.

Suskie, L. (2017). Rubric Development. In Handbook on Measurement, Assessment, and Evaluation in Higher Education (pp. 549–550). 

Sara Chen
Education Researcher
