Rubrics for Oral Presentations

Introduction

Many instructors require students to give oral presentations, which they evaluate and count in students’ grades. It is important that instructors clarify their goals for these presentations as well as the student learning objectives to which they are related. Embedding the assignment in course goals and learning objectives allows instructors to be clear with students about their expectations and to develop a rubric for evaluating the presentations.

A rubric is a scoring guide that articulates and assesses specific components and expectations for an assignment. Rubrics identify the various criteria relevant to an assignment and then explicitly state the possible levels of achievement along a continuum, so that an effective rubric accurately reflects the expectations of an assignment. Using a rubric to evaluate student performance has advantages for both instructors and students.

Rubrics can be either analytic or holistic. An analytic rubric comprises a set of specific criteria, with each one evaluated separately and receiving a separate score. The template resembles a grid with the criteria listed in the left column and levels of performance listed across the top row, using numbers and/or descriptors. The cells within the center of the rubric contain descriptions of what expected performance looks like for each level of performance.

A holistic rubric consists of a set of descriptors that generate a single, global score for the entire work. The single score is based on raters’ overall perception of the quality of the performance. Often, sentence- or paragraph-length descriptions of different levels of competencies are provided.

When applied to an oral presentation, rubrics should reflect the elements of the presentation that will be evaluated as well as their relative importance. Thus, the instructor must decide whether to include dimensions relevant to both form and content and, if so, which ones. Additionally, the instructor must decide how to weight each of the dimensions – are they all equally important, or are some more important than others? Finally, if the presentation represents a group project, the instructor must decide how to balance grading individual and group contributions.

Creating Rubrics

The steps for creating an analytic rubric include the following:

1. Clarify the purpose of the assignment. What learning objectives are associated with the assignment?

2. Look for existing rubrics that can be adopted or adapted for the specific assignment.

3. Define the criteria to be evaluated.

4. Choose the rating scale to measure levels of performance.

5. Write descriptions for each criterion at each performance level of the rating scale.

6. Test and revise the rubric.

Examples of criteria that have been included in rubrics for evaluating oral presentations include:

  • Knowledge of content
  • Organization of content
  • Presentation of ideas
  • Research/sources
  • Visual aids/handouts
  • Language clarity
  • Grammatical correctness
  • Time management
  • Volume of speech
  • Rate/pacing of speech
  • Mannerisms/gestures
  • Eye contact/audience engagement

Examples of scales/ratings that have been used to rate student performance include:

  • Strong, Satisfactory, Weak
  • Beginning, Intermediate, High
  • Exemplary, Competent, Developing
  • Excellent, Competent, Needs Work
  • Exceeds Standard, Meets Standard, Approaching Standard, Below Standard
  • Exemplary, Proficient, Developing, Novice
  • Excellent, Good, Marginal, Unacceptable
  • Advanced, Intermediate High, Intermediate, Developing
  • Exceptional, Above Average, Sufficient, Minimal, Poor
  • Master, Distinguished, Proficient, Intermediate, Novice
  • Excellent, Good, Satisfactory, Poor, Unacceptable
  • Always, Often, Sometimes, Rarely, Never
  • Exemplary, Accomplished, Acceptable, Minimally Acceptable, Emerging, Unacceptable
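
The weighting decision mentioned earlier can be sketched in code. The following is a minimal, hypothetical illustration (the criterion names, weights, and labels are invented for the example, not prescribed by any rubric referenced here) of how an analytic rubric combines per-criterion ratings into a single weighted score:

```python
# Hypothetical sketch: an analytic rubric as a weighted set of criteria.
# Criterion names, weights, and labels are illustrative only.

RATING_LABELS = {4: "Exemplary", 3: "Proficient", 2: "Developing", 1: "Novice"}

# criterion -> weight (weights sum to 1.0 so the total stays on the 1-4 scale)
RUBRIC = {
    "Knowledge of content": 0.30,
    "Organization of content": 0.25,
    "Visual aids/handouts": 0.15,
    "Eye contact/audience engagement": 0.15,
    "Time management": 0.15,
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings (1-4) into one weighted score."""
    for criterion, level in ratings.items():
        if criterion not in RUBRIC or level not in RATING_LABELS:
            raise ValueError(f"unknown criterion or level: {criterion}, {level}")
    return sum(RUBRIC[c] * ratings[c] for c in RUBRIC)

ratings = {
    "Knowledge of content": 4,
    "Organization of content": 3,
    "Visual aids/handouts": 3,
    "Eye contact/audience engagement": 2,
    "Time management": 4,
}
print(round(weighted_score(ratings), 2))  # 3.3 on the 1-4 scale
```

If all dimensions are equally important, the weights are simply equal; unequal weights make the relative importance of each dimension explicit to students before they present.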

Grading and Performance Rubrics Carnegie Mellon University Eberly Center for Teaching Excellence & Educational Innovation

Creating and Using Rubrics Carnegie Mellon University Eberly Center for Teaching Excellence & Educational Innovation

Using Rubrics Cornell University Center for Teaching Innovation

Rubrics DePaul University Teaching Commons

Building a Rubric University of Texas/Austin Faculty Innovation Center

Building a Rubric Columbia University Center for Teaching and Learning

Rubric Development University of West Florida Center for University Teaching, Learning, and Assessment

Creating and Using Rubrics Yale University Poorvu Center for Teaching and Learning

  • Designing Grading Rubrics Brown University Sheridan Center for Teaching and Learning

Examples of Oral Presentation Rubrics

Oral Presentation Rubric Pomona College Teaching and Learning Center

Oral Presentation Evaluation Rubric University of Michigan

Oral Presentation Rubric Roanoke College

Oral Presentation: Scoring Guide Fresno State University Office of Institutional Effectiveness

Presentation Skills Rubric State University of New York/New Paltz School of Business

Oral Presentation Rubric Oregon State University Center for Teaching and Learning

Oral Presentation Rubric Purdue University College of Science

Group Class Presentation Sample Rubric Pepperdine University Graziadio Business School

Eberly Center for Teaching Excellence & Educational Innovation, Carnegie Mellon University

What Are Rubrics?

A rubric is a scoring tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery. Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both.

Advantages of Using Rubrics

Using a rubric provides several advantages to both instructors and students. Grading according to an explicit and descriptive set of criteria that is designed to reflect the weighted importance of the objectives of the assignment helps ensure that the instructor’s grading standards don’t change over time. Grading consistency is difficult to maintain over time because of fatigue, shifting standards based on prior experience, or intrusion of other criteria. Furthermore, rubrics can reduce the time spent grading by reducing uncertainty and by allowing instructors to refer to the rubric description associated with a score rather than having to write long comments. Finally, grading rubrics are invaluable in large courses that have multiple graders (other instructors, teaching assistants, etc.) because they can help ensure consistency across graders and reduce the systematic bias that can be introduced between graders.

Used more formatively, rubrics can help instructors get a clearer picture of the strengths and weaknesses of their class. By recording the component scores and tallying up the number of students scoring below an acceptable level on each component, instructors can identify those skills or concepts that need more instructional time and student effort.

Grading rubrics are also valuable to students. A rubric can help instructors communicate to students the specific requirements and acceptable performance standards of an assignment. When rubrics are given to students with the assignment description, they can help students monitor and assess their progress as they work toward clearly indicated goals. When assignments are scored and returned with the rubric, students can more easily recognize the strengths and weaknesses of their work and direct their efforts accordingly.

Examples of Rubrics

Here are links to a diverse set of rubrics designed by Carnegie Mellon faculty and faculty at other institutions. Although your particular field of study and type of assessment activity may not be represented currently, viewing a rubric that is designed for a similar activity may provide you with ideas on how to divide your task into components and how to describe the varying levels of mastery.

Paper Assignments

  • Example 1: Philosophy Paper This rubric was designed for student papers in a range of philosophy courses, CMU.
  • Example 2: Psychology Assignment Short, concept application homework assignment in cognitive psychology, CMU.
  • Example 3: Anthropology Writing Assignments This rubric was designed for a series of short writing assignments in anthropology, CMU.
  • Example 4: History Research Paper This rubric was designed for essays and research papers in history, CMU.

Projects

  • Example 1: Capstone Project in Design This rubric describes the components and standard of performance from the research phase to the final presentation for a senior capstone project in the School of Design, CMU.
  • Example 2: Engineering Design Project This rubric describes performance standards on three aspects of a team project: Research and Design, Communication, and Team Work.

Oral Presentations

  • Example 1: Oral Exam This rubric describes a set of components and standards for assessing performance on an oral exam in an upper-division history course, CMU.
  • Example 2: Oral Communication
  • Example 3: Group Presentations This rubric describes a set of components and standards for assessing group presentations in a history course, CMU.

Class Participation/Contributions

  • Example 1: Discussion Class This rubric assesses the quality of student contributions to class discussions. This is appropriate for an undergraduate-level course, CMU.
  • Example 2: Advanced Seminar This rubric is designed for assessing discussion performance in an advanced undergraduate or graduate seminar. 

creative commons image

Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started


Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into different or smaller tasks? Are these tasks as important as the main assignment?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric considers all the criteria (such as clarity, organization, mechanics, etc.) together in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Students may be more likely to read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove the focus from grades/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work from instructors when writing feedback
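
As a rough sketch of the structure just described (the criterion names and descriptors below are hypothetical), a single-point rubric can be represented as one proficient descriptor per criterion, with open-ended feedback fields on either side:

```python
# Hypothetical sketch of a single-point rubric: only the "proficient" level is
# described; graders fill in open-ended concerns and areas of excellence.

PROFICIENT = {
    "Organization": "Ideas follow a clear, logical sequence with smooth transitions.",
    "Delivery": "Speaks audibly, at an even pace, with regular eye contact.",
}

def make_feedback_form(rubric: dict) -> dict:
    """Build an empty feedback form with one row per criterion."""
    return {
        criterion: {"concerns": "", "proficient": descriptor, "excellence": ""}
        for criterion, descriptor in rubric.items()
    }

form = make_feedback_form(PROFICIENT)
# The grader writes individualized comments around the fixed descriptor.
form["Delivery"]["concerns"] = "Pacing rushed during the conclusion."
print(form["Organization"]["proficient"])
```

The "concerns" and "excellence" fields stay empty until the grader fills them in, which is exactly where the extra instructor workload noted above comes from.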

Step 3 (Optional): Look for templates and examples.

You might Google, “Rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but consider steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc., for help.

Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Test each criterion: Can it be observed and measured? Is it important and essential? Is it distinct from the other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include (more levels means more detailed descriptions)?
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT have proven to be useful for creating rubrics. You will want to engineer the prompt you provide the AI assistant to ensure you get what you want. For example, you might include the assignment description, the criteria you feel are important, and the number of performance levels in your prompt. Use the results as a starting point, and adjust the descriptions as needed.

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient,” i.e., B-level work. You might also include suggestions for students, outside of the actual rubric, about how they might surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If an indicator is described at one level, it will need to be described at each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric, do this for each criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met
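
To make the payoff of Step 6 concrete, here is a small, hypothetical sketch (the criteria, levels, and wording are invented for illustration) of how level descriptions, once written, can double as reusable written feedback: the grader records a level, and the matching descriptor is pulled out verbatim instead of being rewritten as a long comment.

```python
# Hypothetical sketch: analytic-rubric cell descriptions stored per criterion
# and per level, so the descriptor tied to a score can be reused as feedback.
# Criteria, levels, and wording below are invented for illustration.

DESCRIPTORS = {
    "Organization": {
        3: "Ideas follow a clear, logical sequence with smooth transitions.",
        2: "Most ideas are ordered logically, but some transitions are abrupt.",
        1: "Ideas appear in no discernible order; the audience must supply structure.",
    },
    "Eye contact": {
        3: "Maintains eye contact with all parts of the audience throughout.",
        2: "Makes intermittent eye contact but often returns to notes.",
        1: "Reads from notes or slides with little or no eye contact.",
    },
}

def feedback(scores: dict) -> list:
    """Return the written descriptor for each criterion at the level earned."""
    return [f"{crit} ({lvl}): {DESCRIPTORS[crit][lvl]}" for crit, lvl in scores.items()]

for line in feedback({"Organization": 2, "Eye contact": 3}):
    print(line)
```

Because every cell of the table is filled in, the same lookup works for any combination of criterion and level, which is one reason well-written descriptions save grading time.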

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: RubiStar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric in a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language . Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language . Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students . Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying, “uses excellent sources,” you might describe what makes a source excellent so that students will know. You might also consider reducing the reliance on quantity, such as the number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Example of a single-point rubric

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics .
  • Gonzalez, J. (2014). Know your terms: Holistic, Analytic, and Single-Point Rubrics . Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Teaching for Authentic Student Performance, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: tips for designing and using rubrics.
  • Ragupathi, K., Lee, A. (2020). Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In: Sanger, C., Gleason, N. (eds) Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

Center for Teaching Innovation


Using rubrics

A rubric is a type of scoring guide that assesses and articulates specific components and expectations for an assignment. Rubrics can be used for a variety of assignments: research papers, group projects, portfolios, and presentations.  

Why use rubrics? 

Rubrics help instructors: 

  • Assess assignments consistently from student to student.
  • Save time in grading, both short-term and long-term. 
  • Give timely, effective feedback and promote student learning in a sustainable way. 
  • Clarify expectations and components of an assignment for both students and course teaching assistants (TAs). 
  • Refine teaching methods by evaluating rubric results. 

Rubrics help students: 

  • Understand expectations and components of an assignment. 
  • Become more aware of their learning process and progress. 
  • Improve work through timely and detailed feedback. 

Considerations for using rubrics 

When developing rubrics consider the following:

  • Although it takes time to build a rubric, time will be saved in the long run as grading and providing feedback on student work will become more streamlined.  
  • A rubric can be a fillable PDF that can easily be emailed to students. 
  • They can be used for oral presentations. 
  • They are a great tool to evaluate teamwork and individual contribution to group tasks. 
  • Rubrics facilitate peer-review by setting evaluation standards. Have students use the rubric to provide peer assessment on various drafts. 
  • Students can use them for self-assessment to improve personal performance and learning. Encourage students to use the rubrics to assess their own work. 
  • Motivate students to improve their work by using rubric feedback to resubmit their work incorporating the feedback. 

Getting Started with Rubrics 

  • Start small by creating one rubric for one assignment in a semester.  
  • Ask colleagues if they have developed rubrics for similar assignments, or adapt rubrics that are available online. For example, the AACU has rubrics for topics such as written and oral communication, critical thinking, and creative thinking. RubiStar helps you develop your rubric based on templates.
  • Examine an assignment for your course. Outline the elements or critical attributes to be evaluated (these attributes must be objectively measurable). 
  • Create an evaluative range for performance quality under each element; for instance, “excellent,” “good,” “unsatisfactory.” 
  • Avoid using subjective or vague criteria such as “interesting” or “creative.” Instead, outline objective indicators that would fall under these categories. 
  • The criteria must clearly differentiate one performance level from another. 
  • Assign a numerical scale to each level. 
  • Give a draft of the rubric to your colleagues and/or TAs for feedback. 
  • Train students to use your rubric and solicit feedback. This will help you judge whether the rubric is clear to them and will identify any weaknesses. 
  • Rework the rubric based on the feedback. 

Poorvu Center for Teaching and Learning, Yale University

Creating and Using Rubrics

A rubric describes the criteria that will be used to evaluate a specific task, such as a student writing assignment, poster, oral presentation, or other project. Rubrics allow instructors to communicate expectations to students, allow students to check in on their progress mid-assignment, and can increase the reliability of scores. Research suggests that when rubrics are used on an instructional basis (for instance, included with an assignment prompt for reference), students tend to utilize and appreciate them (Reddy and Andrade, 2010).

Rubrics generally exist in tabular form and are composed of:

  • A description of the task that is being evaluated,
  • The criteria that are being evaluated (row headings),
  • A rating scale that demonstrates different levels of performance (column headings), and
  • A description of each level of performance for each criterion (within each box of the table).

When multiple individuals are grading, rubrics also help improve the consistency of scoring across all graders. Instructors should ensure that the structure, presentation, consistency, and use of their rubrics pass rigorous standards of validity, reliability, and fairness (Andrade, 2005).

Major Types of Rubrics

There are two major categories of rubrics:

  • Holistic : In this type of rubric, a single score is provided based on raters’ overall perception of the quality of the performance. Holistic rubrics are useful when only one attribute is being evaluated, as they detail different levels of performance within a single attribute. This category of rubric is designed for quick scoring but does not provide detailed feedback. For these rubrics, the criteria may be the same as the description of the task.
  • Analytic : In this type of rubric, scores are provided for several different criteria that are being evaluated. Analytic rubrics provide more detailed feedback to students and instructors about their performance. Scoring is usually more consistent across students and graders with analytic rubrics.

Rubrics utilize a scale that denotes level of success with a particular assignment, usually a 3-, 4-, or 5-category grid:


Figure 1: Grading Rubrics: Sample Scales (Brown Sheridan Center)

Sample Rubrics

Instructors can consider a sample holistic rubric developed for an English Writing Seminar course at Yale.

The Association of American Colleges and Universities also has a number of free analytic rubrics (a free account is required) that can be downloaded and modified by instructors. These 16 VALUE rubrics enable instructors to measure items such as inquiry and analysis, critical thinking, written communication, oral communication, quantitative literacy, teamwork, problem-solving, and more.

Recommendations

The following provides a procedure for developing a rubric, adapted from Brown’s Sheridan Center for Teaching and Learning :

  • Define the goal and purpose of the task that is being evaluated - Before constructing a rubric, instructors should review the learning outcomes associated with a given assignment. Are skills, content, and deeper conceptual knowledge clearly defined in the syllabus, and do class activities and assignments work towards the intended outcomes? The rubric can only function effectively if goals are clear and student work progresses towards them.
  • Decide what kind of rubric to use - The kind of rubric used may depend on the nature of the assignment, intended learning outcomes (for instance, does the task require the demonstration of several different skills?), and the amount and kind of feedback students will receive (for instance, is the task a formative or a summative assessment?). Instructors can read the above, or consider “Additional Resources” for kinds of rubrics.
  • Define the criteria - Instructors can review their learning outcomes and assessment parameters to determine specific criteria for the rubric to cover. Instructors should consider what knowledge and skills are required for successful completion, and create a list of criteria that assess outcomes across different vectors (comprehensiveness, maturity of thought, revisions, presentation, timeliness, etc). Criteria should be distinct and clearly described, and ideally, not surpass seven in number.
  • Define the rating scale to measure levels of performance - Whatever rating scale instructors choose, they should ensure that it is clear, and review it in class to field student questions and concerns. Instructors can consider whether the scale will include descriptors or only be numerical, and might include prompts on the rubric for achieving higher achievement levels. Rubrics typically include 3-5 levels in their rating scales (see Figure 1 above).
  • Write descriptions for each performance level of the rating scale - Each level should be accompanied by a descriptive paragraph that outlines ideals for each level, lists or names all performance expectations within the level, and if possible, provides a detail or example of ideal performance within each level. Across the rubric, descriptions should be parallel, observable, and measurable.
  • Test and revise the rubric - The rubric can be tested before implementation, by arranging for writing or testing conditions with several graders or TFs who can use the rubric together. After grading with the rubric, graders might grade a similar set of materials without the rubric to assure consistency. Instructors can consider discrepancies, share the rubric and results with faculty colleagues for further opinions, and revise the rubric for use in class. Instructors might also seek out colleagues’ rubrics as well, for comparison. Regarding course implementation, instructors might consider passing rubrics out during the first class, in order to make grading expectations clear as early as possible. Rubrics should fit on one page, so that descriptions and criteria are viewable quickly and simultaneously. During and after a class or course, instructors can collect feedback on the rubric’s clarity and effectiveness from TFs and even students through anonymous surveys. Comparing scores and quality of assignments with parallel or previous assignments that did not include a rubric can reveal effectiveness as well. Instructors should feel free to revise a rubric following a course too, based on student performance and areas of confusion.

Additional Resources

Cox, G. C., Brathwaite, B. H., & Morrison, J. (2015). The Rubric: An assessment tool to guide students and markers. Advances in Higher Education, 149-163.

Creating and Using Rubrics - Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation

Creating a Rubric - UC Denver Center for Faculty Development

Grading Rubric Design - Brown University Sheridan Center for Teaching and Learning

Moskal, B. M. (2000). Scoring rubrics: What, when and how? Practical Assessment, Research & Evaluation 7(3).

Quinlan, A. M. (2011). A complete guide to rubrics: Assessment made easy for teachers of K-college (2nd ed.). Rowman & Littlefield Education.

Andrade, H. (2005). Teaching with Rubrics: The Good, the Bad, and the Ugly. College Teaching 53(1):27-30.

Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448.


Rubrics are a set of criteria to evaluate performance on an assignment or assessment. Rubrics can communicate expectations regarding the quality of work to students and provide a standardized framework for instructors to assess work. Rubrics can be used for both formative and summative assessment. They are also crucial in encouraging self-assessment of work and structuring peer-assessments. 

Why use rubrics?

Rubrics are an important tool to assess learning in an equitable and just manner. This is because they enable:

  • A common set of standards and criteria to be uniformly applied, which can mitigate bias
  • Transparency regarding the standards and criteria on which students are evaluated
  • Efficient grading with timely and actionable feedback 
  • Identifying areas in which students need additional support and guidance 
  • The use of objective, criterion-referenced metrics for evaluation 

Some instructors may be reluctant to provide a rubric under the perception that it stifles student creativity (Haugnes & Russell, 2016). However, sharing the purpose of an assessment and the criteria for success in the form of a rubric, along with relevant examples, has been shown to particularly improve the success of BIPOC, multiracial, and first-generation students (Jonsson, 2014; Winkelmes, 2016). Improved success in assessments is generally associated with an increased sense of belonging which, in turn, leads to higher student retention and more equitable outcomes in the classroom (Calkins & Winkelmes, 2018; Weisz et al., 2023). By not providing a rubric, faculty risk leaving students to guess the criteria on which they will be evaluated. When students have to guess what the expectations are, it may unfairly disadvantage those who are first-generation, BIPOC, international, or otherwise have not been exposed to the cultural norms that have dominated higher-ed institutions in the U.S. (Shapiro et al., 2023). Moreover, in such cases, criteria may be applied inconsistently, leading to bias in the grades awarded to students.

Steps for Creating a Rubric

Clearly state the purpose of the assessment: which topic(s) learners are being tested on, the type of assessment (e.g., a presentation, essay, or group project), the skills being assessed (e.g., writing, comprehension, presentation, collaboration), and the instructor’s goal for the assessment (e.g., gauging formative or summative understanding of the topic).

Determine the specific criteria or dimensions to assess in the assessment. These criteria should align with the learning objectives or outcomes to be evaluated. These criteria typically form the rows in a rubric grid and describe the skills, knowledge, or behavior to be demonstrated. The set of criteria may include, for example, the idea/content, quality of arguments, organization, grammar, citations and/or creativity in writing. These criteria may form separate rows or be compiled in a single row depending on the type of rubric.

(See row headers of Figure 1)

Create a scale of performance levels that describe the degree of proficiency attained for each criterion. The scale typically has 4 to 5 levels (although there may be fewer depending on the type of rubric used). The levels should also have meaningful labels (e.g., not meeting expectations, approaching expectations, meeting expectations, exceeding expectations). When assigning levels of performance, use inclusive language that can foster a growth mindset among students, especially when work may otherwise be deemed not to meet the mark. Some examples include “Does not yet meet expectations,” “Considerable room for improvement,” “Progressing,” “Approaching,” “Emerging,” or “Needs more work,” instead of terms like “Unacceptable,” “Fails,” “Poor,” or “Below Average.”

(See column headers of Figure 1)

Develop a clear and concise descriptor for each combination of criterion and performance level. These descriptors should provide examples or explanations of what constitutes each level of performance for each criterion. Typically, instructors should start by describing the highest and lowest levels of performance for a criterion and then describe the intermediate levels. It is important to keep the language uniform across all columns, e.g., use syntax and words that are aligned in each column for a given criterion.

(See cells of Figure 1)

It is important to consider how each criterion is weighted, and for each criterion to reflect the importance of the learning objectives being tested. For example, if the primary goal of a research proposal is to test mastery of content and application of knowledge, these criteria should be weighted more heavily than others (e.g., grammar, style of presentation). This can be done by associating a different scoring system with each criterion (e.g., following a scale of 8-6-4-2 points for each level of performance in higher-weight criteria and 4-3-2-1 points in lower-weight criteria). Further, the number of points awarded across levels of performance should be evenly spaced (e.g., 10-8-6-4 instead of 10-6-3-1). Finally, if there is a letter grade associated with a particular assessment, consider how it relates to scores. For example, instead of having students receive an A only if they achieve the highest level of performance on every criterion, consider assigning an A to a range of scores (e.g., 28-30 total points) or a combination of levels of performance (e.g., exceeds expectations on higher-weight criteria and meets expectations on other criteria).

(See the numerical values in the column headers of Figure 1)
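To make the weighting arithmetic concrete, here is a minimal sketch in Python. It is an illustration only, not part of any rubric tool: the criterion names, tiers, and levels achieved are hypothetical, and the point scales are the 8-6-4-2 and 4-3-2-1 examples above.

```python
# Sketch: computing a weighted rubric score with two weight tiers
# (8-6-4-2 points per level for high-weight criteria,
#  4-3-2-1 points per level for low-weight criteria).

# Points awarded per performance level (index 0 = lowest, 3 = highest).
HIGH_WEIGHT_POINTS = [2, 4, 6, 8]
LOW_WEIGHT_POINTS = [1, 2, 3, 4]

# Hypothetical criteria: name -> (weight tier, level achieved 0-3).
scores = {
    "content mastery": ("high", 3),           # exceeds expectations
    "application of knowledge": ("high", 2),  # meets expectations
    "grammar": ("low", 2),
    "style of presentation": ("low", 1),
}

def total_score(scores):
    total = 0
    for criterion, (tier, level) in scores.items():
        points = HIGH_WEIGHT_POINTS if tier == "high" else LOW_WEIGHT_POINTS
        total += points[level]
    return total

print(total_score(scores))  # 8 + 6 + 3 + 2 = 19
```

With this scheme the maximum possible score is 24 (8 + 8 + 4 + 4), so a grade band over the total (e.g., an A for 21-24 points) can stand in for requiring top marks on every single criterion.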


Figure 1:  Graphic describing the five basic elements of a rubric

Note: Consider using a template rubric that can be used to evaluate similar activities in the classroom, to avoid the fatigue of developing multiple rubrics. Some tools, such as Rubistar or iRubric, provide suggested wording for each criterion depending on the type of assessment. Additionally, the above format can be incorporated into rubrics added directly in Canvas, or into the grid view of rubrics in Gradescope, both common grading tools. Alternatively, tables within a word processor or spreadsheet may also be used to build a rubric. You may also adapt the example rubrics provided below to the specific learning goals of the assessment, using the blank template rubrics provided alongside each type of rubric. Watch the linked video for a quick introduction to designing a rubric. Word document (docx) files linked below will automatically download to your device, whereas pdf files will open in a new tab.

Types of Rubrics

Analytic Rubrics

In these rubrics, one specifies at least two criteria and provides a separate score for each criterion. The steps outlined above for creating a rubric are typical for an analytic-style rubric. Analytic rubrics are used to provide detailed feedback to students and help identify strengths as well as particular areas in need of improvement. They can be particularly useful when providing formative feedback to students, for student peer assessments and self-assessments, or for project-based summative assessments that evaluate student learning across multiple criteria. You may use a blank analytic rubric template (docx) or adapt an existing sample of an analytic rubric (pdf).


Fig 2: Graphic describing a sample analytic rubric (adapted from George Mason University, 2013)

Developmental Rubrics

These are a subset of analytic rubrics, typically used to assess student performance and engagement during a learning period rather than the end product. Such rubrics are generally used to assess soft skills and behaviors that are less tangible (e.g., intercultural maturity, empathy, collaboration skills), and they are useful in assessing the extent to which students develop a particular skill, ability, or value in experiential learning programs. They are grounded in developmental theory (King, 2005). Examples include an intercultural knowledge and competence rubric (docx) and a global learning rubric (docx).

Holistic Rubrics

These rubrics consider all criteria on one scale, providing a single score that gives an overall impression of a student’s performance on an assessment. They emphasize the overall quality of a student’s work rather than delineating its shortfalls. However, a limitation of holistic rubrics is that they are not useful for providing specific, nuanced feedback or for identifying areas of improvement. Thus, they might be best used when grading summative assessments for which students have previously received detailed feedback through analytic or single-point rubrics. They may also be used to provide quick formative feedback on smaller assignments where no more than 2-3 criteria are being tested at once. Try using our blank holistic rubric template (docx) or adapt an existing sample of a holistic rubric (pdf).


Fig 3: Graphic describing a sample holistic rubric (adapted from Teaching Commons, DePaul University)

Checklist Rubrics

These rubrics contain only two levels of performance (e.g., yes/no, present/absent) across a longer list of criteria (often more than five). Checklist rubrics have the advantage of providing a quick assessment, since each criterion is either met or not met. Consequently, they are preferable when introducing self- or peer-assessment of learning: evaluations become more objective, and each criterion can elicit only one of two responses, allowing uniform and quick grading. For similar reasons, such rubrics are useful for faculty in providing quick formative feedback, since they immediately highlight the specific criteria to improve on. Checklist rubrics are also used to grade summative assessments in courses utilizing alternative grading systems, such as specifications grading, contract grading, or a credit/no-credit grading system, wherein a minimum threshold of performance has to be met for the assessment. That said, developing checklist rubrics from existing analytic rubrics may require considerable upfront investment, since criteria have to be phrased in a way that can elicit only binary responses. Here is a link to the checklist rubric template (docx).


Fig. 4: Graphic describing a sample checklist rubric
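Because each criterion in a checklist rubric is simply met or not met, the grading logic reduces to counting. Here is a small sketch of how such a rubric might drive a credit/no-credit decision; the criteria and the credit threshold are invented for illustration.

```python
# Sketch: a checklist rubric where each criterion is met (True) or not
# (False), and credit is awarded if a minimum number of criteria are
# satisfied, as in specifications or credit/no-credit grading.

checklist = {
    "states a clear thesis": True,
    "cites at least three sources": True,
    "stays within the page limit": False,
    "includes a reference list": True,
    "uses required formatting": True,
}

MIN_CRITERIA_MET = 4  # hypothetical threshold for credit

met = sum(checklist.values())  # True counts as 1, False as 0
print(f"{met}/{len(checklist)} criteria met ->",
      "credit" if met >= MIN_CRITERIA_MET else "no credit")
```

Because every cell is binary, multiple graders applying this rubric to the same work should reach the same total, which is exactly the uniformity the paragraph above describes.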

Single-Point Rubrics

A single-point rubric is a modified version of a checklist-style rubric in that it specifies a single column of criteria. However, rather than only indicating whether expectations are met, as in a checklist rubric, a single-point rubric allows instructors to specify the ways in which work exceeds or falls short of expectations. Here, the criteria to be tested are laid out in a central column describing the average expectation for the assignment. Instructors indicate areas of improvement on the left side of the criteria, and areas of strength in student performance on the right side. These rubrics provide flexibility in scoring and are typically used in courses with alternative grading systems such as ungrading or contract grading. However, they require instructors to provide detailed feedback for each student, which can be infeasible for assessments in large classes. Here is a link to the single point rubric template (docx).

Fig. 5 Graphic describing a single point rubric (adopted from Teaching Commons, DePaul University)


Best Practices for Designing and Implementing Rubrics

When designing the rubric format, descriptors and criteria should be presented in a way that is compatible with screen readers and assistive reading technology. For example, avoid using only color, jargon, or complex terminology to convey information. If you do use color, pictures, or graphics, try providing alternative formats for rubrics, such as plain-text documents. Explore resources from the CU Digital Accessibility Office to learn more.

Co-creating rubrics can help students engage in higher-order thinking skills such as analysis and evaluation. Further, it allows students to take ownership of their own learning by determining the criteria they aspire to in their work. For graduate classes or upper-level students, one approach is to provide the learning outcomes of the project and let students develop the rubric on their own. Students in introductory classes, however, may need more scaffolding, such as providing a draft and leaving room for modification (Stevens & Levi, 2013). Watch the linked video for tips on co-creating rubrics with students. Further, involving teaching assistants in designing a rubric can help in getting feedback on expectations for an assessment prior to implementing and norming the rubric.

When first designing a rubric, it is important to compare grades awarded for the same assessment by multiple graders to make sure the criteria are applied uniformly and reliably for the same level of performance. Further, ensure that the levels of performance in student work can be adequately distinguished using the rubric. It is particularly important to repeat such a norming protocol at the start of any course in which multiple graders use the same rubric to grade an assessment (e.g., recitation sections, lab sections, teaching teams). Here, instructors may select a subset of assignments that all graders evaluate using the same rubric, followed by a discussion to identify any discrepancies in the criteria applied and ways to address them. Such strategies can make rubrics more reliable, effective, and clear.

Sharing the rubric with students prior to an assessment can familiarize them with the instructor’s expectations. This can help students meet the learning outcomes by guiding their work in the appropriate direction, and it can increase student motivation. Further, providing the rubric to students can encourage metacognition and the ability to self-assess their learning.

Sample Rubrics

Below are links to rubric templates designed by a team of experts assembled by the Association of American Colleges and Universities (AAC&U) to assess 16 major learning goals. These goals are a part of the Valid Assessment of Learning in Undergraduate Education (VALUE) program. All of these examples are analytic rubrics and have detailed criteria to test specific skills. However, since any given assessment typically tests multiple skills, instructors are encouraged to develop their own rubric by utilizing criteria picked from a combination of the rubrics linked below.

  • Civic knowledge and engagement-local and global
  • Creative thinking
  • Critical thinking
  • Ethical reasoning
  • Foundations and skills for lifelong learning
  • Information literacy
  • Integrative and applied learning
  • Intercultural knowledge and competence
  • Inquiry and analysis
  • Oral communication
  • Problem solving
  • Quantitative literacy
  • Written Communication

Note: Clicking on the above links will automatically download them to your device in Microsoft Word format. These links have been created and are hosted by Kansas State University. Additional information regarding the VALUE Rubrics may be found on the AAC&U homepage.

Below are links to sample rubrics that have been developed for different types of assessments. These rubrics follow the analytical rubric template, unless mentioned otherwise. However, these rubrics can be modified into other types of rubrics (e.g., checklist, holistic or single point rubrics) based on the grading system and goal of assessment (e.g., formative or summative). As mentioned previously, these rubrics can be modified using the blank template provided.

  • Oral presentations  
  • Painting Portfolio (single-point rubric)
  • Research Paper
  • Video Storyboard

Additional information:

Office of Assessment and Curriculum Support. (n.d.). Creating and using rubrics . University of Hawai’i, Mānoa

Calkins, C., & Winkelmes, M. A. (2018). A teaching method that boosts UNLV student retention . UNLV Best Teaching Practices Expo , 3.

Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69-76.

Haugnes, N., & Russell, J. L. (2016). Don’t box me in: Rubrics for artists and designers. To Improve the Academy, 35(2), 249–283.

Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840-852.

McCartin, L. (2022, February 1). Rubrics! an equity-minded practice . University of Northern Colorado

Shapiro, S., Farrelly, R., & Tomaš, Z. (2023). Chapter 4: Effective and equitable assignments and assessments. Fostering international student success in higher education (pp. 61-87, 2nd ed.). TESOL Press.

Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (second edition). Sterling, VA: Stylus.

Teaching Commons (n.d.). Types of Rubrics . DePaul University

Teaching Resources (n.d.). Rubric best practices, examples, and templates . NC State University 

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K. H. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18(1/2), 31-36.

Weisz, C., Richard, D., Oleson, K., Winkelmes, M.A., Powley, C., Sadik, A., & Stone, B. (in progress, 2023). Transparency, confidence, belonging and skill development among 400 community college students in the state of Washington . 

Association of American Colleges and Universities. (2009). Valid Assessment of Learning in Undergraduate Education (VALUE) . 

Canvas Community. (2021, August 24). How do I add a rubric in a course? Canvas LMS Community.

 Center for Teaching & Learning. (2021, March 03). Overview of Rubrics . University of Colorado, Boulder

 Center for Teaching & Learning. (2021, March 18). Best practices to co-create rubrics with students . University of Colorado, Boulder.

Chase, D., Ferguson, J. L., & Hoey, J. J. (2014). Assessment in creative disciplines: Quantifying and qualifying the aesthetic . Common Ground Publishing.

Feldman, J. (2018). Grading for equity: What it is, why it matters, and how it can transform schools and classrooms . Corwin Press, CA.

Gradescope (n.d.). Instructor: Assignment - Grade Submissions . Gradescope Help Center. 

Henning, G., Baker, G., Jankowski, N., Lundquist, A., & Montenegro, E. (Eds.). (2022). Reframing assessment to center equity . Stylus Publishing. 

King, P. M., & Baxter Magolda, M. B. (2005). A developmental model of intercultural maturity. Journal of College Student Development, 46(2), 571-592.

Selke, M. J. G. (2013). Rubric assessment goes to college: Objective, comprehensive evaluation of student work. Lanham, MD: Rowman & Littlefield.

The Institute for Habits of Mind. (2023, January 9). Creativity Rubrics - The Institute for Habits of Mind . 


Presentation Rubric for a College Project

We seem to have an unavoidable relationship with public speaking throughout our lives. From our kindergarten years, when our presentations are nothing more than a few seconds of reciting cute words in front of our class…

Image contains kids singing

...till our grown up years, when things get a little more serious, and the success of our presentations may determine getting funds for our business, or obtaining an academic degree when defending our thesis.

Image contains a person speaking with a microphone

By the time we reach our mid-20s, we become worryingly used to evaluations based on our presentations. Yet, for some reason, we’re rarely told the traits upon which we are being evaluated. Most colleges and business schools, for instance, use a PowerPoint presentation rubric to evaluate their students. Funny thing is, they’re not usually that open about sharing it with their students (as if that would do any harm!).

What is a presentation rubric?

A presentation rubric is a systematic and standardized tool used to evaluate and assess the quality and effectiveness of a presentation. It provides a structured framework for instructors, evaluators, or peers to assess various aspects of a presentation, such as content, delivery, organization, and overall performance. Presentation rubrics are commonly used in educational settings, business environments, and other contexts where presentations are a key form of communication.

A typical presentation rubric includes a set of criteria and a scale for rating or scoring each criterion. The criteria are specific aspects or elements of the presentation that are considered essential for a successful presentation. The scale assigns a numerical value or descriptive level to each criterion, ranging from poor or unsatisfactory to excellent or outstanding.

Common criteria found in presentation rubrics may include:

  • Content: This criterion assesses the quality and relevance of the information presented. It looks at factors like accuracy, depth of knowledge, use of evidence, and the clarity of key messages.
  • Organization: Organization evaluates the structure and flow of the presentation. It considers how well the introduction, body, and conclusion are structured and whether transitions between sections are smooth.
  • Delivery: Delivery assesses the presenter's speaking skills, including vocal tone, pace, clarity, and engagement with the audience. It also looks at nonverbal communication, such as body language and eye contact.
  • Visual Aids: If visual aids like slides or props are used, this criterion evaluates their effectiveness, relevance, and clarity. It may also assess the design and layout of visual materials.
  • Audience Engagement: This criterion measures the presenter's ability to connect with the audience, maintain their interest, and respond to questions or feedback.
  • Time Management: Time management assesses whether the presenter stayed within the allotted time for the presentation. Going significantly over or under the time limit can affect the overall effectiveness of the presentation.
  • Creativity and Innovation: In some cases, rubrics may include criteria related to the creative and innovative aspects of the presentation, encouraging presenters to think outside the box.
  • Overall Impact: This criterion provides an overall assessment of the presentation's impact on the audience, considering how well it achieved its intended purpose and whether it left a lasting impression.


Well, we don’t believe in shutting down information. Quite the contrary: we think the best way to practice your speech is to know exactly what is being tested! By evaluating each trait separately, you can:

  • Acknowledge the complexity of public speaking, which goes far beyond subject knowledge.
  • Address your weaker spots, and work on them to improve your presentation as a whole.

I’ve assembled a simple Presentation Rubric, based on a great document by NC State University, and I’ve also added a few rows of my own, so you can evaluate your presentation in pretty much any scenario!


What is tested in this PowerPoint presentation rubric?

The Rubric covers seven traits: organization, subject knowledge, mechanics, eye contact, poise, elocution, and enthusiasm.

Now let's break down each trait so you can understand what they mean, and how to assess each one:

Presentation Rubric

Image contains the presentation rubric

How to use this Rubric?

The Rubric is pretty self-explanatory, so I’m just gonna give you some ideas as to how to use it. The ideal scenario is to ask someone else to listen to your presentation and evaluate you with it. The less that person knows you, or what your presentation is about, the better.

WONDERING WHAT YOUR SCORE MAY INDICATE?

  • 22-28 Fan-bloody-tastic!
  • 14-21 Looking good, but you can do better
  • 7-13 Uhmmm, you ain't at all ready
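Mapping a total score to these bands is mechanical. The sketch below (Python, with the band boundaries treated as non-overlapping) shows one way to automate the lookup when scoring yourself:

```python
# Sketch: mapping a total rubric score (7 traits, up to 4 points each,
# i.e., 7-28 overall) to the feedback bands above, treating the band
# boundaries as non-overlapping.

def feedback(score):
    if score >= 22:
        return "Fan-bloody-tastic!"
    elif score >= 14:
        return "Looking good, but you can do better"
    else:
        return "You ain't at all ready"

print(feedback(25))  # Fan-bloody-tastic!
```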

As we don't always have someone to rehearse our presentations with, a great way to use the Rubric is to record yourself (this is not Hollywood material so an iPhone video will do!), watching the video afterwards, and evaluating your presentation on your own. You'll be surprised by how different your perception of yourself is, in comparison to how you see yourself on video.

Image contains a person using a whiteboard

Related read: Webinar - Public Speaking and Stage Presence: How to wow?

It will be fairly easy to evaluate each trait! The mere exercise of reading the Presentation Rubric is an excellent study on presenting best practices.

If you're struggling with any particular trait, I suggest you take a look at our Academy Channel where we discuss how to improve each trait in detail!

It's not always easy to objectively assess our own speaking skills. So the next time you have a big presentation coming up, use this Rubric to put yourself to the test!

Need support for your presentation? Build awesome slides using our very own Slidebean .


How to Use Rubrics


A rubric is a document that describes the criteria by which students’ assignments are graded. Rubrics can be helpful for:

  • Making grading faster and more consistent (reducing potential bias). 
  • Communicating your expectations for an assignment to students before they begin. 

Moreover, for assignments whose criteria are more subjective, the process of creating a rubric and articulating what it looks like to succeed at an assignment provides an opportunity to check for alignment with the intended learning outcomes and modify the assignment prompt, as needed.

Why rubrics?

Rubrics are best for assignments or projects that require evaluation on multiple dimensions. Creating a rubric makes the instructor’s standards explicit to both students and other teaching staff for the class, showing students how to meet expectations.

Additionally, the more comprehensive a rubric is, the more it allows for grading to be streamlined—students will get informative feedback about their performance from the rubric, even if they don’t have as many individualized comments. Grading can be more standardized and efficient across graders.

Finally, rubrics allow for reflection, as the instructor has to think about their standards and outcomes for the students. Using rubrics can help with self-directed learning in students as well, especially if rubrics are used to review students’ own work or their peers’, or if students are involved in creating the rubric.

How to design a rubric

1. Consider the desired learning outcomes

What learning outcomes is this assignment reinforcing and assessing? If the learning outcome seems “fuzzy,” iterate on the outcome by thinking about the expected student work product. This may help you more clearly articulate the learning outcome in a way that is measurable.  

2. Define criteria

What does a successful assignment submission look like? As described by Allen and Tanner (2006), it can help to develop an initial list of categories in which the student should demonstrate proficiency by completing the assignment. These categories should correlate with the intended learning outcomes you identified in Step 1, although they may be more granular in some cases. For example, if the task assesses students’ ability to formulate an effective communication strategy, what components of their communication strategy will you be looking for? Talking with colleagues or looking at existing rubrics for similar tasks may give you ideas for categories to consider for evaluation.

If you have assigned this task to students before and have samples of student work, it can be helpful to create a qualitative observation guide. This approach is described in Linda Suskie’s book Assessing Student Learning, where she suggests thinking about what made you decide to give one assignment an A and another a C, as well as taking notes while grading and looking for common patterns. The themes you comment on repeatedly may reveal what your goals and expectations for students are. An example of an observation guide used to take notes on predetermined areas of an assignment is shown here.

In summary, consider the following list of questions when defining criteria for a rubric (O’Reilly and Cyr, 2006):

  • What do you want students to learn from the task?
  • How will students demonstrate that they have learned?
  • What knowledge, skills, and behaviors are required for the task?
  • What steps are required for the task?
  • What are the characteristics of the final product?

After developing an initial list of criteria, prioritize the most important skills you want to target, and eliminate unessential criteria or combine similar skills into one group. Most rubrics have between 3 and 8 criteria. Rubrics that are too lengthy are difficult to grade with and make it challenging for students to grasp the key skills they need to demonstrate for the given assignment.

3. Create the rating scale

According to Suskie, you will want at least 3 performance levels: adequate and inadequate performance at a minimum, plus an exemplary level to motivate students to strive for even better work. Rubrics often contain 5 levels, adding a level between adequate and exemplary and another between adequate and inadequate. Usually no more than 5 levels are needed, as too many rating levels make it hard to distinguish consistently which rating to give an assignment (such as between a 6 or a 7 out of 10). Suskie also suggests labeling each level with a name to clarify which level represents the minimum acceptable performance. Labels will vary by assignment and subject, but some examples are:

  • Exceeds standard, meets standard, approaching standard, below standard
  • Complete evidence, partial evidence, minimal evidence, no evidence

4. Fill in descriptors

Fill in descriptors for each criterion at each performance level. Expand on the list of criteria you developed in Step 2. Begin to write full descriptions, thinking about what an exemplary example would look like for students to strive towards. Avoid vague terms like “good” and make sure to use explicit, concrete terms to describe what would make a criterion good. For instance, a criterion called “organization and structure” would be more descriptive than “writing quality.” Describe measurable behavior and use parallel language for clarity; the wording for each criterion should be very similar, except for the degree to which standards are met. For example, in a sample rubric from Chapter 9 of Suskie’s book, the criterion of “persuasiveness” has the following descriptors:

  • Well Done (5): Motivating questions and advance organizers convey the main idea. Information is accurate.
  • Satisfactory (3-4): Includes persuasive information.
  • Needs Improvement (1-2): Includes persuasive information with few facts.
  • Incomplete (0): Information is incomplete, out of date, or incorrect.

These sample descriptors share the same sentence structure, which provides consistent language across performance levels and shows the degree to which each standard is met.

5. Test your rubric

Test your rubric using a range of student work to see if the rubric is realistic. You may also consider leaving room for aspects of the assignment, such as effort, originality, and creativity, to encourage students to go beyond the rubric. If multiple instructors will be grading, it is important to calibrate the scoring by having all graders use the rubric to grade a selected set of student work and then discuss any differences in the scores. This process develops consistency in grading and makes the grading more valid and reliable.
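The calibration step above can be sketched as a small script. The criterion names and scores below are invented for illustration, assuming two graders who scored the same sample submission on the same 5-level rubric:

```python
# A small sketch of a calibration check between two graders. All criterion
# names and scores below are hypothetical examples.

grader_a = {"thesis": 4, "evidence": 3, "style": 5}
grader_b = {"thesis": 4, "evidence": 1, "style": 4}

# Flag criteria where the graders differ by more than one level; these are
# the descriptors to discuss and tighten before grading the full class.
to_discuss = sorted(c for c in grader_a if abs(grader_a[c] - grader_b[c]) > 1)
print(to_discuss)  # -> ['evidence']
```

A one-level difference is treated here as acceptable drift; the threshold is a judgment call each grading team would set for itself.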

Types of Rubrics

If you would like to dive deeper into rubric terminology, this section is dedicated to discussing some of the different types of rubrics. However, regardless of the type of rubric you use, it’s still most important to focus first on your learning goals and think about how the rubric will help clarify students’ expectations and measure student progress towards those learning goals.

Depending on the nature of the assignment, rubrics can come in several varieties (Suskie, 2009):

Checklist Rubric

This is the simplest kind of rubric: it lists specific features or aspects of the assignment that may be present or absent. A checklist rubric does not involve the creation of a rating scale with descriptors. See example from 18.821 project-based math class.

Rating Scale Rubric

This is like a checklist rubric, but instead of merely noting the presence or absence of a feature or aspect of the assignment, the grader also rates its quality (often on a graded or Likert-style scale). See example from 6.811 assistive technology class.

Descriptive Rubric

A descriptive rubric is like a rating scale rubric, but it includes descriptions of what performing at a certain level on each scale looks like. Descriptive rubrics are particularly useful in communicating instructors’ expectations of performance to students and in creating consistency when multiple graders assess an assignment. This kind of rubric is probably what most people think of when they imagine a rubric. See example from 15.279 communications class.

Holistic Scoring Guide

Unlike the first 3 types of rubrics, a holistic scoring guide describes performance at different levels (e.g., A-level performance, B-level performance) holistically, without breaking the assignment down into several different scales. This kind of rubric is particularly useful when there are many assignments to grade and a moderate to high degree of subjectivity in the assessment of quality. It can be difficult to achieve consistency across scores, so holistic scoring guides are most helpful for making decisions quickly rather than for providing detailed feedback to students. See example from 11.229 advanced writing seminar.

The kind of rubric that is most appropriate will depend on the assignment in question.

Implementation tips

Rubrics can also be used for Canvas assignments. See this resource from Boston College for more details, as well as guides from Canvas Instructure.

Allen, D., & Tanner, K. (2006). Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners. CBE—Life Sciences Education, 5(3), 197-203. doi:10.1187/cbe.06-06-0168

Cherie Miot Abbanat. 11.229 Advanced Writing Seminar. Spring 2004. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.

Haynes Miller, Nat Stapleton, Saul Glasman, and Susan Ruff. 18.821 Project Laboratory in Mathematics. Spring 2013. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.

Lori Breslow, and Terence Heagney. 15.279 Management Communication for Undergraduates. Fall 2012. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.

O’Reilly, L., & Cyr, T. (2006). Creating a Rubric: An Online Tutorial for Faculty. Retrieved from https://www.ucdenver.edu/faculty_staff/faculty/center-for-faculty-development/Documents/Tutorials/Rubrics/index.htm

Suskie, L. (2009). Using a scoring guide or rubric to plan and evaluate an assessment. In Assessing student learning: A common sense guide (2nd edition, pp. 137-154). Jossey-Bass.

William Li, Grace Teo, and Robert Miller. 6.811 Principles and Practice of Assistive Technology. Fall 2014. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.


How to design effective rubrics

Rubrics can be effective assessment tools when constructed using methods that incorporate four main criteria: validity, reliability, fairness, and efficiency. For a rubric to be valid and reliable, it must grade only the work presented (reducing the influence of instructor biases) so that anyone using the rubric would obtain the same grade (Felder and Brent 2016). Fairness ensures that the grading is transparent by giving students access to the rubric at the beginning of the assessment, while efficiency is evident when students receive detailed, timely feedback from the rubric after grading has occurred (Felder and Brent 2016). Because the most informative rubrics for student learning are analytical rubrics (Brookhart 2013), the steps below explain how to construct an analytical rubric.

Five Steps to Design Effective Rubrics

The first step in designing a rubric is determining the content, skills, or tasks you want students to be able to accomplish by completing an assessment (Wormeli 2006). Thus, two main questions need to be answered:

  • What do students need to know or do? and
  • How will the instructor know when the students know or can do it?

Another way to think about this is to decide which learning objectives for the course are being evaluated using this assessment (Allen and Tanner 2006, Wormeli 2006). (More information on learning objectives can be found at Teaching@UNL.) For most projects or similar assessments, more than one area of content or skill is involved, so most rubrics assess more than one learning objective. For example, a project may require students to research a topic (content knowledge learning objective) using digital literacy skills (research learning objective) and to present their findings (communication learning objective). Therefore, it is important to think through all the tasks or skills students will need to complete during an assessment to meet the learning objectives. Additionally, it is advisable to review examples of rubrics for a specific discipline or task, looking for grade-level-appropriate rubrics to aid in preparing a list of tasks and activities that are essential to meeting the learning objectives (Allen and Tanner 2006).

Once the learning objectives are identified and a list of essential student tasks is compiled and aligned to them, the next step is to determine the number of criteria for the rubric. Most rubrics have at least three criteria and fewer than a dozen. It is important to remember that as more criteria are added to a rubric, a student’s cognitive load increases, making it more difficult for students to remember all the assessment requirements (Allen and Tanner 2006, Wolf et al. 2008). Thus, 3-10 criteria are usually recommended for a rubric (if an assessment has fewer than 3 criteria, a different format (e.g., a grade sheet) can be used to convey grading expectations, and if a rubric has more than ten criteria, some criteria can be consolidated into a single larger category; Wolf et al. 2008). Once the number of criteria is established, the final step for the criteria aspect of a rubric is creating descriptive titles for each criterion and determining whether some criteria will be weighted and thus be more influential on the grade for the assessment. Once this is accomplished, the criteria column of the rubric can be designed (Table 1).
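If some criteria are weighted, the grade is a weighted average of the level awarded on each criterion. A minimal sketch of that arithmetic, with invented criterion names, weights, and a 5-level scale:

```python
# Minimal sketch of weighted rubric scoring. The criteria, weights, and
# 5-level scale are hypothetical examples, not a prescribed rubric.

def weighted_score(ratings, weights, max_level):
    """Return a percentage: weighted levels earned / weighted levels possible."""
    earned = sum(ratings[c] * weights[c] for c in ratings)
    possible = max_level * sum(weights.values())
    return 100 * earned / possible

ratings = {"content": 4, "organization": 3, "delivery": 5}  # levels on a 0-5 scale
weights = {"content": 2, "organization": 1, "delivery": 1}  # content counts double
print(weighted_score(ratings, weights, max_level=5))  # -> 80.0
```

Here content contributes twice as much to the grade as either other criterion, which is exactly the weighting decision the instructor must make explicit.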

The third aspect of rubric design is the levels of performance and the labels for each level in the rubric. It is recommended to have 3-6 levels of performance (Allen and Tanner 2006, Wormeli 2006, Wolf et al. 2008). The key to determining the number of performance levels is how easily the levels can be distinguished (Allen and Tanner 2006). Can the difference in student performance between a “3” and a “4” be readily seen on a five-level rubric? If not, only four levels should be used for all criteria. If most of the criteria can easily be differentiated with five levels but a single criterion is difficult to discern, then two levels could be left blank for that criterion (see “Research Skills” criterion in Table 1). It is also important to note that having fewer levels makes constructing the rubric faster but may result in ambiguous expectations and difficulty providing feedback to students.

Once the number of performance levels is set for the rubric, assign each level a name or title that indicates the level of performance. When creating the naming system for the performance levels, it is important to use terms that are not subjective, overly negative, or judgmental (e.g., “Excellent”, “Good”, and “Bad”; Allen and Tanner 2006, Stevens and Levi 2013) and to ensure the terms use the same part of speech (all nouns, all verbs ending in “-ing”, all adjectives, etc.; Wormeli 2006). Examples of different performance level naming systems include:

  • Exemplary, Competent, Not yet competent
  • Proficient, Intermediate, Novice
  • Strong, Satisfactory, Not yet satisfactory
  • Exceeds Expectations, Meets Expectations, Below Expectations
  • Proficient, Capable, Adequate, Limited
  • Exemplary, Proficient, Acceptable, Unacceptable
  • Mastery, Proficient, Apprentice, Novice, Absent

Additionally, the order of the levels needs to be determined, with some rubrics designed to increase in proficiency across the levels (lowest, middle, highest performance) and others designed to start with the highest performance level and move toward the lowest (highest, middle, lowest performance).

It is essential to evaluate how well a rubric works for grading and for providing feedback to students. If possible, use previous student work to test a rubric and determine how well it functions for grading the assessment before giving the rubric to students (Wormeli 2006). After using the rubric in a class, evaluate how well students met the criteria and how easy the rubric was to use in grading (Allen and Tanner 2006). If a specific criterion has low grades associated with it, determine whether the language was too subjective or confusing for students. This can be done by asking students to critique the rubric or by using a student survey for the overall assessment. Alternatively, the instructor can ask a colleague or instructional designer for feedback on the rubric. If more than one instructor is using the rubric, determine whether all instructors are seeing lower grades on certain criteria. Analyzing the grades can often show where students are failing to understand the content, the assessment format, or the requirements.

Next, look at how well the rubric reflects the work turned in by the students (Allen and Tanner 2006, Wormeli 2006). Does the grade based on the rubric reflect what the instructor would expect for the student’s assignment? Or does the rubric result in some students receiving a higher or lower grade? If the latter is occurring, determine which aspect of the rubric needs to be “fudged” to obtain the correct grade for the assessment, and update the criteria that are problematic. Alternatively, the instructor may find that the rubric works for all criteria but that some aspects of the assessment are under- or overvalued (Allen and Tanner 2006). For example, if the main learning objective is the content, but 40% of the assessment grade comes from writing skills, the rubric may need to be weighted to allow content criteria to have a stronger influence on the grade than writing criteria.

Finally, analyze how well the rubric worked for grading the assessment overall. If the instructor needed to modify the interpretation of the rubric while grading, then the levels of performance or the number of criteria may need to be edited to better align with the learning objectives and the evidence shown in the assessment (Allen and Tanner 2006). For example, if the rubric has only three performance levels but the instructor often had to give partial credit on a criterion, this may indicate that the rubric needs to be expanded to more levels of performance. If, instead, a specific criterion is difficult to grade or to distinguish between adjacent performance levels, this may indicate that too much is being assessed in the criterion (which should then be divided into two or more criteria) or that the criterion is not well written and needs to be explained in more detail. Reflecting on the effectiveness of a rubric should be done each time the rubric is used to ensure it is well designed and accurately represents student learning.

Rubric Examples & Resources

UNCW College of Arts & Science “Scoring Rubrics” contains links to discipline-specific rubrics designed by faculty from many institutions. Most of these rubrics are downloadable Word files that can be edited for use in courses.

Syracuse University “Examples of Rubrics” also organizes rubrics by discipline, with some as downloadable Word files that can be edited for use in courses.

University of Illinois – Springfield has PDF files of different types of rubrics on its “Rubric Examples” page. These rubrics cover many different types of tasks (presenting, participation, critical thinking, etc.) from a variety of institutions.

If you are building a rubric in Canvas, the rubric guide in Canvas 101 provides detailed information including video instructions: Using Rubrics: Canvas 101 (unl.edu)

Allen, D. and K. Tanner (2006). Rubrics: Tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE – Life Sciences Education 5: 197-203.

Stevens, D. D., and A. J. Levi (2013). Introduction to Rubrics: an assessment tool to save grading time, convey effective feedback, and promote student learning. Stylus Publishing, Sterling, VA, USA.

Wolf, K., M. Connelly, and A. Komara (2008). A tale of two rubrics: improving teaching and learning across the content areas through assessment. Journal of Effective Teaching 8: 21-32.

Wormeli, R. (2006). Fair isn’t always equal: assessing and grading in the differentiated classroom. Stenhouse Publishers, Portland, ME, USA.

This page was authored by Michele Larson and last updated September 15, 2022.


Center for Excellence in Teaching

Group presentation rubric

This is a grading rubric an instructor uses to assess students’ work on this type of assignment. It is a sample rubric that needs to be edited to reflect the specifics of a particular assignment. Students can self-assess using the rubric as a checklist before submitting their assignment.


Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners

  • Deborah Allen
  • Kimberly Tanner

Department of Biological Sciences, University of Delaware, Newark, DE 19716

Department of Biology, San Francisco State University, San Francisco, CA 94132

INTRODUCTION

Introduction of new teaching strategies often expands the expectations for student learning, creating a parallel need to redefine how we collect the evidence that assures both us and our students that these expectations are in fact being met. The default assessment strategy of the typical large, introductory, college-level science course, the multiple-choice (fixed response) exam, when used to best advantage can provide feedback about what students know and recall about key concepts. Leaving aside the difficulty inherent in designing a multiple-choice exam that captures deeper understandings of course material, its limitations become particularly notable when learning objectives include what students are able to do as well as know as the result of time spent in a course. If we want students to build their skill at conducting guided laboratory investigations, developing reasoned arguments, or communicating their ideas, other means of assessment, such as papers, demonstrations (the “practical exam”), other demonstrations of problem solving, model building, debates, or oral presentations, to name a few, must be enlisted to serve as benchmarks of progress and/or in the assignment of grades. What happens, however, when students are novices at responding to these performance prompts when they are used in the context of science learning, and faculty are novices at communicating to students what their expectations for a high-level performance are? The more familiar terrain of the multiple-choice exam can lull both students and instructors into a false sense of security about the clarity and objectivity of the evaluation criteria (Wiggins, 1989) and make these other types of assessment strategies seem subjective and unreliable (and sometimes downright unfair) by comparison. In a worst-case scenario, the use of alternatives to the conventional exam to assess student learning can lead students to feel that there is an implicit or hidden curriculum—the private curriculum that seems to exist only in the mind's eye of a course instructor.

Use of rubrics provides one way to address these issues. Rubrics not only can be designed to formulate standards for levels of accomplishment and used to guide and improve performance, but they can also be used to make these standards clear and explicit to students. Although the use of rubrics has become common practice in the K–12 setting (Luft, 1999), the good news for those instructors who find the idea attractive is that more and more examples of the use of rubrics are being noted at the college and university level, with a variety of applications (Ebert-May, undated; Ebert-May et al., 1997; Wright and Boggs, 2002; Moni et al., 2005; Porter, 2005; Lynd-Balta, 2006).

WHAT IS A RUBRIC?

Although definitions for the word “rubric” abound, for the purposes of this feature article we use the word to denote a type of matrix that provides scaled levels of achievement or understanding for a set of criteria or dimensions of quality for a given type of performance, for example, a paper, an oral presentation, or use of teamwork skills. In this type of rubric, the scaled levels of achievement (gradations of quality) are indexed to a desired or appropriate standard (e.g., to the performance of an expert or to the highest level of accomplishment evidenced by a particular cohort of students). The descriptions of the possible levels of attainment for each of the criteria or dimensions of performance are described fully enough to make them useful for judgment of, or reflection on, progress toward valued objectives (Huba and Freed, 2000).

A good way to think about what distinguishes a rubric from an explanation of an assignment is to compare it with a more common practice. When communicating to students our expectations for writing a lab report, for example, we often start with a list of the qualities of an excellent report to guide their efforts toward successful completion; we may have drawn on our knowledge of how scientists report their findings in peer-reviewed journals to develop the list. This checklist of criteria is easily turned into a scoring sheet (to return with the evaluated assignment) by the addition of checkboxes for indicating either a “yes-no” decision about whether each criterion has been met or the extent to which it has been met. Such a checklist in fact has a number of fundamental features in common with a rubric (Bresciani et al., 2004), and it is a good starting point for beginning to construct a rubric. Figure 1 gives an example of such a scoring checklist that could be used to judge a high school student poster competition.

Figure 1. An example of a scoring checklist that could be used to judge a high school student poster competition.

However, what is referred to as a “full rubric” is distinguished from the scoring checklist by its more extensive definition and description of the criteria or dimensions of quality that characterize each level of accomplishment. Table 1 provides one example of a full rubric (of the analytical type, as defined in the paragraph below) that was developed from the checklist in Figure 1. This example uses the typical grid format in which the performance criteria or dimensions of quality are listed in the rows, and the successive cells across the three columns describe a specific level of performance for each criterion. The full rubric in Table 1, in contrast to the checklist that only indicates whether a criterion exists (Figure 1), makes it far clearer to a student presenter what the instructor is looking for when evaluating student work.

DESIGNING A RUBRIC

A more challenging aspect of using a rubric can be finding a rubric to use that provides a close enough match to a particular assignment with a specific set of content and process objectives. This challenge is particularly true of so-called analytical rubrics. Analytical rubrics use discrete criteria to set forth more than one measure of the levels of accomplishment for a particular task, as distinguished from holistic rubrics, which provide more general, uncategorized (“lumped together”) descriptions of overall dimensions of quality for different levels of mastery. Many users of analytical rubrics resort to developing their own rubric to achieve the best match between an assignment and its objectives for a particular course.

As an example, examine the two rubrics presented in Tables 2 and 3: Table 2 shows a holistic rubric and Table 3 shows an analytical rubric. These two versions of a rubric were developed to evaluate student essay responses to a particular assessment prompt. In this case the prompt is a challenge in which students are to respond to the statement, “Plants get their food from the soil. What about this statement do you agree with? What about this statement do you disagree with? Support your position with as much detail as possible.” This assessment prompt can serve both as a preassessment, to establish what ideas students bring to the teaching unit, and as a postassessment in conjunction with the study of photosynthesis. As such, the rubric is designed to evaluate student understanding of the process of photosynthesis, the role of soil in plant growth, and the nature of food for plants. The maximum score using either the holistic or the analytical rubric would be 10, with 2 points possible for each of five criteria. The holistic rubric outlines five criteria by which student responses are evaluated, puts a 3-point scale on each of these criteria, and holistically describes what a 0-, 1-, or 2-point answer would contain. However, this holistic rubric stops short of defining in detail the specific concepts that would qualify an answer for 0, 1, or 2 points on each criterion scale. The analytical rubric shown in Table 3 does define these concepts for each criterion, and it is in fact a fuller development of the holistic rubric shown in Table 2. As mentioned, the development of an analytical rubric is challenging in that it pushes the instructor to define specifically the language and depth of knowledge that students need to demonstrate competency, and it is an attempt to make discrete what is fundamentally a fuzzy, continuous distribution of ways an individual could construct a response. As such, informal analysis of student responses can often play a large role in shaping and revising an analytical rubric, because student answers may hold conceptions and misconceptions that have not been anticipated by the instructor.

The various approaches to constructing rubrics in a sense also can be characterized as holistic or analytical. Those who offer recommendations about how to build rubrics often approach the task from the perspective of describing the essential features of rubrics (Huba and Freed, 2000; Arter and McTighe, 2001), or by outlining a discrete series of steps to follow one by one (Moskal, 2000; Mettler, 2002; Bresciani et al., 2004; MacKenzie, 2004). Regardless of the recommended approach, there is general agreement that a rubric designer must approach the task with a clear idea of the desired student learning outcomes (Luft, 1999) and, perhaps more importantly, with a clear picture of what meeting each outcome “looks like” (Luft, 1999; Bresciani et al., 2004). If this picture remains fuzzy, perhaps the outcome is not observable or measurable and thus not “rubric-worthy.”

Reflection on one's particular answer to two critical questions—“What do I want students to know and be able to do?” and “How will I know when they know it and can do it well?”—is not only essential to beginning construction of a rubric but also can help confirm the choice of a particular assessment task as being the best way to collect evidence about how the outcomes have been met. A first step in designing a rubric, the development of a list of qualities that the learner should demonstrate proficiency in by completing an assessment task, naturally flows from this prior rumination on outcomes and on ways of collecting evidence that students have met the outcome goal. A good way to get started with compiling this list is to view existing rubrics for a similar task, even if the rubric was designed for younger or older learners or for different subject areas. For example, if one sets out to develop a rubric for a class presentation, it is helpful to review the criteria used in a rubric for oral communication in a graduate program (organization, style, use of communication aids, depth and accuracy of content, use of language, personal appearance, responsiveness to audience; Huba and Freed, 2000) to stimulate reflection on and analysis of what criteria (dimensions of quality) align with one's own desired learning outcomes. There is technically no limit to the number of criteria that can be included in a rubric, other than presumptions about the learners' ability to digest and thus make use of the information that is provided. In the example in Table 1, only three criteria were used, as judged appropriate for the desired outcomes of the high school poster competition.

After this list of criteria is honed and pruned, the dimensions of quality and proficiency will need to be separately described (as in Table 1), and not just listed. The extent and nature of this commentary depends upon the type of rubric—analytical or holistic. Expanding the criteria in this way is an inherently difficult task, because it requires a thorough familiarity with both the elements comprising the highest standard of performance for the chosen task and the range of capabilities of learners at a particular developmental level. A good way to get started is to think about how the attributes of a truly superb performance could be characterized in each of the important dimensions—the level of work for students to aspire to. Common advice (Moskal, 2000) is to avoid words that connote value judgments in these commentaries, such as “creative” or “good” (as in “the use of scientific terminology is ‘good’”). These terms are essentially so general as to be valueless in their ability to guide a learner to emulate specific standards for a task, and although it is admittedly difficult, they need to be defined in a rubric. Again, perusal of existing examples is a good way to get started with writing the full descriptions of criteria. Fortunately, there are a number of data banks that can be searched for rubric templates of virtually all types (Chicago Public Schools, 2000; Arter and McTighe, 2001; Shrock, 2006; Advanced Learning Technologies, 2006; University of Wisconsin-Stout, 2006).

Commonly used rating scales for labeling the levels of mastery include:

  • Scale 1: Exemplary, Proficient, Acceptable, Unacceptable
  • Scale 2: Substantially Developed, Mostly Developed, Developed, Underdeveloped
  • Scale 3: Distinguished, Proficient, Apprentice, Novice
  • Scale 4: Exemplary, Accomplished, Developing, Beginning

Huba and Freed (2000) offer the interesting recommendation that the descriptions for each level of performance provide a “real-world” connection by stating the implications for accomplishment at that level. This description of the consequences could be included in a criterion called “professionalism.” For example, in a rubric for writing a lab report, at the highest level of mastery the rubric could state, “this report of your study would persuade your peers of the validity of your findings and would be publishable in a peer-reviewed journal.” Following this recommendation in the construction of a rubric might help steer students toward the perception that the rubric represents the standards of a profession, and away from the perception that a rubric is just another way to give a particular teacher what he or she wants (Andrade and Du, 2005).

As a further aid for beginning instructors, a number of Web sites, both commercial and open access, offer tools for online construction of rubrics from templates, for example, Rubistar (Advanced Learning Technologies, 2006) and TeAch-nology (TeAch-nology, undated). These tools allow the would-be “rubrician” to select from among the various types of rubrics, criteria, and rating scales (levels of mastery). Once these choices are made, editable descriptions fall into place in the proper cells of the rubric grid. The rubrics are stored in the site databases, but they can typically be downloaded with conventional word processing or spreadsheet software. Further editing can yield a rubric uniquely suited to your teaching and learning goals.

ANALYZING AND REPORTING INFORMATION GATHERED FROM A RUBRIC

Whether used with students to set learning goals, as scoring devices for grading purposes, to give formative feedback to students about their progress toward important course outcomes, or to assess curricular and course innovations, rubrics allow both quantitative and qualitative analysis of student performance. Qualitative analysis can yield narrative accounts of where students in general fell within the cells of the rubric, along with interpretations, conclusions, and recommendations related to student learning and development. For quantitative analysis, the various levels of mastery can be assigned numerical scores to yield quantitative rankings, as has been done for the sample rubric in Table 1. If desired, the criteria can be given different weightings (again, as in the poster presentation rubric in Table 1) when they are not considered equally important outcomes for a particular purpose. The total score given to each example of student work on the basis of the rubric can then be converted to a grading scale, and the overall performance of the class can be analyzed for each of the criteria.
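The weighted scoring and grade conversion described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a procedure from the article: the criterion names, the weights (unequal, in the spirit of the poster rubric in Table 1), the 1–4 level scores, and the grade cutoffs are all invented for the example.

```python
# Illustrative weights for three criteria; values and names are assumptions.
WEIGHTS = {"scientific_approach": 3, "data_presentation": 2, "visual_appeal": 1}

def weighted_total(level_scores):
    """Combine per-criterion level scores (e.g., 1-4) into one weighted total."""
    return sum(WEIGHTS[criterion] * score for criterion, score in level_scores.items())

def to_grade(total, max_total):
    """Convert a weighted rubric total to a letter grade (cutoffs are illustrative)."""
    pct = total / max_total
    for cutoff, grade in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

# Highest level of mastery (4) on every criterion defines the maximum total.
max_total = weighted_total({criterion: 4 for criterion in WEIGHTS})  # 24

student = {"scientific_approach": 4, "data_presentation": 3, "visual_appeal": 2}
total = weighted_total(student)  # 3*4 + 2*3 + 1*2 = 20
print(total, to_grade(total, max_total))
```

Collecting these totals for every student in the class would then let an instructor report both individual grades and class-wide performance per criterion.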

Multiple-choice exams have the advantage that they can be computer or machine scored, allowing analysis and storage of more specific information about different content understandings (particularly misconceptions) for each item, even for large numbers of students. The standard rubric-referenced assessment is not designed to easily provide this type of analysis about specific details of content understanding; for the types of tasks for which rubrics are designed, content understanding is typically displayed through some form of narrative, free-choice expression. To capture both the benefits of free-choice narrative and an in-depth analysis of students' content understanding, particularly for large numbers of students, a special type of rubric, called the double-digit rubric, can be used. A large-scale example of the use of this type of scoring rubric is given by the Trends in International Mathematics and Science Study (1999), in which double-digit rubrics were used to code and analyze student responses to short essay prompts.

To better understand how and why these rubrics are constructed and used, refer to the example provided in Figure 2. This double-digit rubric was used to score and analyze student responses to an essay prompt about ecosystems that was accompanied by the standard “sun-tree-bird” diagram (a drawing of the sun, a tree, and other plants; various primary and secondary consumers; and some not well-identifiable decomposers, with interconnecting arrows that could be interpreted as energy flow or cycling of matter). A brief narrative summarizing the “big ideas” that could be included in a complete response, along with a sample response that captures many of these big ideas, accompanies the actual rubric. The rubric itself specifies major categories of student responses, from complete to various levels of incompleteness. Each level is assigned one of the first digits of the scoring code, which can correspond to a conventional point total awarded for a particular response. In the example in Figure 2, a complete response is awarded a maximum of 4 points, and the levels of partially complete answers receive successively lower points. Here, the “incomplete” and “no response” categories are assigned first digits of 7 and 9, respectively, rather than 0, for clarity in coding; these can be converted to zeroes for averaging and reporting of scores.

Figure 2. A double-digit rubric used to score and analyze student responses to an essay prompt about ecosystems.

The second digit is assigned to types of student responses in each category, including the common approaches and misconceptions. For example, code 31 under the first partial-response category denotes a student response that “talks about energy flow and matter cycling, but does not mention loss of energy from the system in the form of heat.” The sample double-digit rubric in Figure 2 shows the code numbers that were assigned after a “first pass” through a relatively small number of sample responses; additional codes were assigned later, as more responses were reviewed and the full variety of student responses was revealed. In both cases, the second digit of 9 was reserved for a general description that could be assigned to a response that might be unique to one or only a few students but nevertheless belonged in a particular category. When refined through several assessments of student work by a number of reviewers, this type of rubric can support very specific quantitative and qualitative understanding, analysis, and reporting of trends in student understanding of important concepts. A high number of 31 scores, for example, could provide a major clue about deficiencies in past instruction and thus goals for future efforts. However, this type of analysis remains expensive, in that scores must be assigned and entered into a database, rather than simply collected by machine as with a multiple-choice test.
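The coding conventions just described lend themselves to a short computational sketch. The following Python is a hypothetical illustration of the Figure 2 scheme (first digit = completeness level, with 7 and 9 converted to zero for averaging; second digit = response type); apart from code 31, which the text describes, the codes in the sample data are invented.

```python
from collections import Counter

def points(code):
    """First digit gives the point value; 7 (incomplete) and 9 (no response) count as 0."""
    first_digit = code // 10
    return 0 if first_digit in (7, 9) else first_digit

def analyze(codes):
    """Return the class mean score and the frequency of each full double-digit code."""
    mean = sum(points(code) for code in codes) / len(codes)
    return mean, Counter(codes)

# Invented class data: one complete response (4x), three code-31 responses,
# one weaker partial response, one incomplete, and one blank.
codes = [40, 31, 31, 31, 20, 79, 90]
mean, freqs = analyze(codes)

# A high count for code 31 (energy flow/matter cycling described without heat
# loss) would flag a specific misconception to target in future instruction.
print(round(mean, 2), freqs[31])
```

The same tally, accumulated across sections or years, is what makes the double-digit approach informative: the mean tracks overall mastery, while the code frequencies expose which specific misconceptions drive the score.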

WHY USE RUBRICS?

When used as teaching tools, rubrics not only make the instructor's standards and resulting grading explicit, but they can give students a clear sense of what the expectations are for a high level of performance on a given assignment, and how they can be met. This use of rubrics can be most important when the students are novices with respect to a particular task or type of expression ( Bresciani et al ., 2004 ).

From the instructor's perspective, although the time expended in developing a rubric can be considerable, once rubrics are in place they can streamline the grading process. The more specific the rubric, the less the requirement for spontaneous written feedback for each piece of student work—the type that is usually used to explain and justify the grade. Although provided with fewer written comments that are individualized for their work, students nevertheless receive informative feedback. When information from rubrics is analyzed, a detailed record of students' progress toward meeting desired outcomes can be monitored and then provided to students so that they may also chart their own progress and improvement. With team-taught courses or multiple sections of the same course, rubrics can be used to make faculty standards explicit to one another, and to calibrate subsequent expectations. Good rubrics can be critically important when student work in a large class is being graded by teaching assistants.

Finally, by their very nature, rubrics encourage reflective practice on the part of both students and teachers. In particular, the act of developing a rubric, whether or not it is subsequently used, instigates a powerful consideration of one's values and expectations for student learning, and the extent to which these expectations are reflected in actual classroom practices. If rubrics are used in the context of students' peer review of their own work or that of others, or if students are involved in the process of developing the rubric, these processes can spur the development of their ability to become self-directed and help them develop insight into how they and others learn ( Luft, 1999 ).

ACKNOWLEDGMENTS

We gratefully acknowledge the contribution of Richard Donham (Mathematics and Science Education Resource Center, University of Delaware) for development of the double-digit rubric in Figure 2 .

  • Advanced Learning Technologies, University of Kansas (2006). Rubistar. http://rubistar.4teachers.org/index.php (accessed 28 May 2006).
  • Andrade H., Du Y. (2005). Student perspectives on rubric-referenced assessment. Pract. Assess. Res. Eval. 10(3). http://pareonline.net/pdf/v10n3.pdf (accessed 18 May 2006).
  • Arter J. A., McTighe J. (2001). Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance. Thousand Oaks, CA: Corwin Press.
  • Bresciani M. J., Zelna C. L., Anderson J. A. (2004). Criteria and rubrics. In: Assessing Student Learning and Development: A Handbook for Practitioners. Washington, DC: National Association of Student Personnel Administrators, 29-37.
  • Chicago Public Schools (2000). The Rubric Bank. http://intranet.cps.k12.il.us/Assessments/Ideas_and_Rubrics/Rubric_Bank/rubric_bank.html (accessed 18 May 2006).
  • Ebert-May D., Brewer C., Allred S. (1997). Innovation in large lectures—teaching for active learning. Bioscience 47, 601-607.
  • Ebert-May D. (undated). Scoring Rubrics. Field-tested Learning Assessment Guide. http://www.wcer.wisc.edu/archive/cl1/flag/cat/catframe.htm (accessed 18 May 2006).
  • Huba M. E., Freed J. E. (2000). Using rubrics to provide feedback to students. In: Learner-Centered Assessment on College Campuses. Boston: Allyn and Bacon, 151-200.
  • Luft J. A. (1999). Rubrics: design and use in science teacher education. J. Sci. Teach. Educ. 10, 107-121.
  • Lynd-Balta E. (2006). Using literature and innovative assessments to ignite interest and cultivate critical thinking skills in an undergraduate neuroscience course. CBE Life Sci. Educ. 5, 167-174.
  • MacKenzie W. (2004). Constructing a rubric. In: NETS●S Curriculum Series: Social Studies Units for Grades 9–12. Washington, DC: International Society for Technology in Education, 24-30.
  • Mettler C. A. (2002). Designing scoring rubrics for your classroom. In: Understanding Scoring Rubrics: A Guide for Teachers, ed. C. Boston. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, 72-81.
  • Moni R., Beswick W., Moni K. B. (2005). Using student feedback to construct an assessment rubric for a concept map in physiology. Adv. Physiol. Educ. 29, 197-203.
  • Moskal B. M. (2000). Scoring Rubrics Part II: How? ERIC/AE Digest, ERIC Clearinghouse on Assessment and Evaluation. ERIC identifier ED446111. http://www.eric.ed.gov (accessed 21 April 2006).
  • Porter J. R. (2005). Information literacy in biology education: an example from an advanced cell biology course. Cell Biol. Educ. 4, 335-343.
  • Shrock K. (2006). Kathy Shrock's Guide for Educators. http://school.discovery.com/schrockguide/assess.html#rubrics (accessed 5 June 2006).
  • TeAch-nology, Inc. (undated). TeAch-nology. http://teach-nology.com/web_tools/rubrics (accessed 7 June 2006).
  • Trends in International Mathematics and Science Study (1999). Science Benchmarking Report, 8th Grade, Appendix A: TIMSS Design and Procedures. http://timss.bc.edu/timss1999b/sciencebench_report/t99bscience_A.html (accessed 9 June 2006).
  • University of Wisconsin–Stout (2006). Teacher Created Rubrics for Assessment. http://www.uwstout.edu/soe/profdev/rubrics.shtml (accessed 7 June 2006).
  • Wiggins G. (1989). A true test: toward more authentic and equitable assessment. Phi Delta Kappan 49, 703-713.
  • Wright R., Boggs J. (2002). Learning cell biology as a team: a project-based approach to upper-division cell biology. Cell Biol. Educ. 1, 145-153.

© 2006 by The American Society for Cell Biology

We gratefully acknowledge the contribution of Richard Donham (Mathematics and Science Education Resource Center, University of Delaware) for development of the double-digit rubric in Figure 2.

Created by the Great Schools Partnership, the GLOSSARY OF EDUCATION REFORM is a comprehensive online resource that describes widely used school-improvement terms, concepts, and strategies for journalists, parents, and community members.


A rubric is typically an evaluation tool or set of guidelines used to promote the consistent application of learning expectations, learning objectives, or learning standards in the classroom, or to measure their attainment against a consistent set of criteria. In instructional settings, rubrics clearly define academic expectations for students and help to ensure consistency in the evaluation of academic work from student to student, assignment to assignment, or course to course. Rubrics are also used as scoring instruments to determine grades or the degree to which learning standards have been demonstrated or attained by students.

In courses, rubrics may be provided and explained to students before they begin an assignment to ensure that learning expectations have been clearly communicated to and understood by students, and, by extension, parents or other adults involved in supporting a student’s education. Rubrics may take many forms, but they typically include the following information:

  • The educational purpose of an assignment, the rationale behind it, or how it connects to larger concepts or themes in a course.
  • The specific criteria or learning objectives that students must show proficiency in to successfully complete an assignment or meet expected standards. An oral-presentation rubric, for example, will establish the criteria—e.g., speak clearly, make eye contact, or include a description of the main characters, setting, and plot—on which students will be graded.
  • The specific quality standards the teacher will use when evaluating, scoring, or grading an assignment. For example, if the teacher is grading an assignment on a scale of 1 to 4, the rubric may detail what students need to do or demonstrate to earn a 1, 2, 3, or 4. Other rubrics will use descriptive language—does not meet, partially meets, meets, or exceeds the standard, for example—instead of a numerical score.
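
As a concrete illustration of the structure described above, an analytic rubric is essentially a criteria-by-levels table whose awarded levels can be summed into a score. The following sketch is hypothetical—the criterion names and level labels are illustrative, not taken from any specific published rubric:

```python
# Hypothetical sketch of a 1-4 analytic rubric for an oral presentation.
# Criterion names and level labels are illustrative assumptions.
LEVELS = {4: "exceeds the standard", 3: "meets the standard",
          2: "partially meets the standard", 1: "does not meet the standard"}

CRITERIA = ["speaks clearly", "makes eye contact",
            "describes main characters, setting, and plot"]

def total_score(awarded):
    """Sum the 1-4 level awarded for each criterion."""
    assert set(awarded) == set(CRITERIA), "every criterion must be scored"
    assert all(level in LEVELS for level in awarded.values())
    return sum(awarded.values())

awarded = {"speaks clearly": 3,
           "makes eye contact": 4,
           "describes main characters, setting, and plot": 2}
print(total_score(awarded), "out of a possible", 4 * len(CRITERIA))
```

The same table could instead map each cell to a full descriptor ("Speaks with fluctuation in volume and inflection…"), which is what distinguishes a rubric from a bare point scale.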

Rubrics are generally designed to be simple, explicit, and easily understood. Rubrics may help students see connections between learning (what will be taught) and assessment (what will be evaluated) by making the feedback they receive from teachers clearer, more detailed, and more useful in terms of identifying and communicating what students have learned or what they may still need to learn. Educators may use rubrics midway through an assignment to help students assess what they still need to do or demonstrate before submitting a final product. Rubrics may also encourage students to reflect on their own learning progress and help teachers to tailor instruction, academic support, or future assignments to address distinct learning needs or learning gaps. In some cases, students are involved in the co-creation of rubrics for a class project or for the purposes of evaluating their own work or that of their peers.

Since rubrics are used to establish a consistent set of learning expectations that all students need to demonstrate, they may also be used by school leaders and teachers as a way to maintain consistency and objectivity when teaching or assessing learning across grade levels, courses, or assignments. While some schools give individual teachers the discretion to create and use their own rubrics, other schools utilize “common rubrics” or “common assessments” to promote greater consistency in the application and evaluation of learning throughout a school. In most cases, common rubrics are collaboratively developed by a school faculty, academic department, or team. Some schools have common rubrics for academic subjects, while in other schools the rubrics are utilized across all the academic disciplines. Common rubrics and assessments can also help schools, departments, and teaching teams refine their lessons and instructional practices to target specific learning areas in which their students tend to struggle. Rubrics are often locally designed by a district or school, but they may be provided by outside organizations as part of a specific program or improvement model.

For related discussions, see coherent curriculum and high expectations.


Assessment Rubrics

A rubric is commonly defined as a tool that articulates the expectations for an assignment by listing criteria and, for each criterion, describing levels of quality (Andrade, 2000; Arter & Chappuis, 2007; Stiggins, 2001). Criteria are used in determining the level at which student work meets expectations. Markers of quality give students a clear idea of what must be done to demonstrate a certain level of mastery, understanding, or proficiency (e.g., "Exceeds Expectations" does xyz, "Meets Expectations" does only xy or yz, "Developing" does only x or y or z). Rubrics can be used for any assignment in a course, or for any way in which students are asked to demonstrate what they've learned. They can also be used to facilitate self- and peer-review of student work.

Rubrics aren't just for summative evaluation. They can be used as a teaching tool as well. When used as part of a formative assessment, they can help students understand both the overall character and the specific components of the learning expected, gauge the level of learning expected, and then make decisions about their current level of learning to inform revision and improvement (Reddy & Andrade, 2010).

Why use rubrics?

Rubrics help instructors:

  • Provide students with feedback that is clear, directed, and focused on ways to improve learning.

  • Demystify assignment expectations so students can focus on the work instead of guessing "what the instructor wants."

  • Reduce time spent on grading and develop consistency in how you evaluate student learning across students and throughout a class.

Rubrics help students:

  • Focus their efforts on completing assignments in line with clearly set expectations.

  • Self- and peer-reflect on their learning, making informed changes to achieve the desired level of learning.

Developing a Rubric

During the process of developing a rubric, instructors might:

  • Select an assignment for your course, ideally one you identify as time-intensive to grade or one students report as having unclear expectations.

  • Decide what you want students to demonstrate about their learning through that assignment. These are your criteria.

  • Identify the markers of quality on which you feel comfortable evaluating students' level of learning, often along with a numerical scale (e.g., "Accomplished," "Emerging," "Beginning" for a developmental approach).

  • Give students the rubric ahead of time. Advise them to use it in guiding their completion of the assignment.

It can be overwhelming to create a rubric for every assignment in a class at once, so start by creating one rubric for one assignment. See how it goes and develop more from there! Also, do not reinvent the wheel. Rubric templates and examples exist all over the Internet, or consider asking colleagues if they have developed rubrics for similar assignments. 
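
Once criteria and markers of quality are chosen, a further decision (noted in the introduction) is whether all criteria count equally or some matter more than others. As a minimal sketch, with hypothetical criteria and weights, a weighted rubric score might be computed like this:

```python
# Hypothetical sketch: combining per-criterion levels (1-4) into a single
# weighted percentage grade. The criteria and weights are illustrative
# assumptions; the weights must sum to 1.
WEIGHTS = {"content": 0.5, "organization": 0.3, "delivery": 0.2}
MAX_LEVEL = 4

def weighted_grade(levels):
    """Return a 0-100 grade from per-criterion levels on a 1-4 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return 100 * sum(WEIGHTS[c] * levels[c] / MAX_LEVEL for c in WEIGHTS)

grade = weighted_grade({"content": 4, "organization": 3, "delivery": 2})
print(round(grade, 1))  # 82.5
```

Making the weights explicit in the rubric itself, rather than applying them silently at grading time, keeps the scoring transparent to students.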

Sample Rubrics

Examples of holistic and analytic rubrics: see Tables 2 & 3 in “Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners” (Allen & Tanner, 2006)

Examples across assessment types: see “Creating and Using Rubrics,” Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation

“VALUE Rubrics”: see the Association of American Colleges and Universities set of free, downloadable rubrics, with foci including creative thinking, problem solving, and information literacy.

Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13–18.

Arter, J., & Chappuis, J. (2007). Creating and recognizing quality rubrics. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Stiggins, R. J. (2001). Student-involved classroom assessment (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Reddy, Y., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.

Florida State University

FSU | Center for the Advancement of Teaching


What’s a Rubric? + Upcoming Teaching Showcase


Clarifying the Qualities of Successful Work

Those of us who assign final projects (papers, presentations, videos, reports, etc.) have probably all had the experience of feeling perplexed by a lackluster batch of student work. Sometimes we may wonder if students even really knew what they were trying to accomplish. While the issue may sometimes be that they have procrastinated, or had to juggle a mountain of projects for all their various courses, other times their work doesn’t meet our expectations because we haven’t been as transparent as we could about what we’re expecting.

Transparent assignment descriptions make three important aspects of any assignment very clear and concrete for students: the purpose (i.e., why we ask them to do it); the task itself, with all its component parts; and the criteria for success. In particular, clarifying this last aspect can help our students have a better idea of when they’re meeting the expectations of the project and when they need to keep working on it. Limited time is one reason why many students don’t complete multiple drafts of their work, but another is that they don’t know how to gauge whether or not they’re meeting the goals of the assignment, so often they give up after a first pass.

If you have a rubric or a set of specifications, it’s best to provide it to students when you launch the project—before they try to tackle it. But having a rubric is not the same as being able to make sense of it. Evaluation criteria that seem clear and meaningful to us may not be so for students, and they need our support to interpret and practice applying them.

Students benefit a great deal from opportunities to work with examples, so that they have some models in their minds for each criterion. For example, if having a strong, arguable thesis is a criterion, students first need to understand what a thesis is and isn’t before they can move on to evaluating the quality of one. If effectively using evidence from primary sources to support claims is a criterion, students need to be able to see examples of how this might be done at all before they move on to evaluating whether or not it’s been done well. If they don’t have any models in their minds, their work might look like a person trying to draw an elephant without ever having seen one.

Once students have developed an understanding of the criteria, they can practice using them to evaluate sample work . It’s worth spending some class time helping them to do this, but you could also make it a homework assignment. Students don’t necessarily need to see exceptional samples—in fact, they will learn more from evaluating a few examples of projects that met some of the goals but not others. Students need to learn how to assess their own learning and success, so it helps them to identify where a sample fell short of the mark, what strengths it has and how it could build on them, and what it lacks entirely. It also shows them there is more than one way to accomplish the goals.

If you’re facilitating an activity in class, it can be very useful to have students read or watch a few examples individually and attempt to evaluate them by the standards you’ve given them, then compare notes in small groups. You can end by asking a few groups to share their observations, so that you can calibrate and ensure that they’re identifying and interpreting the criteria correctly. Helping students develop a better understanding of the criteria and practice applying them can also be a great way to set them up for a successful peer-review activity or assignment. They will be better prepared to give one another accurate and detailed feedback that they can use to revise their work before they submit it to be graded.

Taking the time to help students understand the qualities of successful work is worth it; it helps them learn, and it saves you time on providing lots of feedback later. Plus, you get the pleasure of seeing their good work. If you’d like support to make your evaluation criteria more transparent, or to plan an activity that will help students interpret and apply them, we’d be happy to help! You can email us at [email protected] for a consultation. We look forward to working with you!

UPCOMING…

Provost’s Showcase of Scholarly Teaching

Friday, April 5th | 1:00 – 4:00 p.m. | SSB 201/203

We are delighted to invite all faculty, staff, and teaching assistants to attend the inaugural Provost’s Showcase of Scholarly Teaching hosted this spring by CAT and FSU Libraries. This FSU-based mini conference offers opportunities to attend poster sessions where you’ll learn more about your colleagues’ innovative teaching practices, and to participate in roundtable discussions on a variety of topics relevant to teaching across disciplines and contexts. We hope you’ll be inspired and enjoy building community around teaching with colleagues you might not otherwise have a chance to meet. Please join us to celebrate FSU colleagues’ outstanding teaching and energize yours, too! Refreshments will be provided.
