A rubric clearly sets out the criteria and standards for assessing different levels of performance. Rubrics have often been used in education for grading student work, and in recent years have been applied in evaluation to make transparent the process of synthesising evidence into an overall evaluative judgement.
A single rubric can be developed for overall performance or a number of rubrics can be developed, each for an aspect of performance.
A rubric consists of a rating scale for performance, which can be generic (eg from 'Very poor' to 'Excellent') or customised (eg from 'Detrimental' to 'Highly effective'). A rubric is also sometimes known as a global assessment scale.
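The structure described above can be sketched as a simple data structure: an ordered list of rating labels, each paired with a performance descriptor. This is a minimal illustration only; the labels and descriptors below are invented for the sketch, not taken from any published rubric.

```python
# A rubric as an ordered mapping from rating labels (best to worst)
# to performance descriptors. Labels and descriptors are illustrative.
RUBRIC = [
    ("Excellent", "All criteria met to a high standard"),
    ("Good", "Most criteria met"),
    ("Adequate", "Minimum criteria met"),
    ("Poor", "Few criteria met"),
    ("Very poor", "Criteria largely unmet"),
]

def describe(rating: str) -> str:
    """Return the descriptor for a given rating label."""
    for label, descriptor in RUBRIC:
        if label == rating:
            return descriptor
    raise KeyError(rating)
```

Keeping the levels in an ordered list (rather than an unordered mapping) preserves the ranking from highest to lowest performance, which matters when levels are compared or visualised.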
The Victorian Department of Natural Resources and Environment developed a rubric (called a Global Assessment Scale) as part of its evaluation of a project to reduce dryland salinity. It developed a rating scale for the community groups created through the project, to track their progress and to focus planning for the next stage of the project. The scale was initially developed by five staff members independently creating items and then combining them. Piloting of the scale showed considerable reliability in how different people rated groups using it.
Rubric for evaluating community groups
Level 5 (highest performance): Most members of the community are contributing to the group and recognise they play an integral part in achieving holistic, long term and agreed community objectives. The group has its own identity and strives for excellence. They are able to identify and implement innovative solutions to problems with little or no government support. Members are willing to accept leadership, responsibilities and different roles. All members are implementing on-ground works and attending regular meetings. The group is exceeding salinity tree and pasture establishment targets and will be able to halt salinity within 30 years.

Level 4: Most members of the community have an interest in the group and are working towards a shared long-term strategy. Most members have a holistic and regional vision, but others are still grappling with the concept. All activities are planned carefully by the group and attract significant interest. Government specialists may be invited to provide technical advice. There is a strong committee commitment and other sub-committees are completing specialised roles. The group is meeting salinity targets every year and will significantly slow the spread of salinity in the next 30 years.

Level 3: The group shares common medium term goals and is developing a team culture and cohesion. There is a commitment from about 40% of the community to attend meetings regularly and complete on-ground works. New members are encouraged and there is an effort to conduct interesting meetings and activities. Government agencies assist with technical advice and organising activities at the group's initiation. Salinity targets may or may not be met, depending on economic conditions, but there is a significant amount of on-ground works completed each year.

Level 2: The group looks to government to set directions and activities. A small group of dedicated members have held leadership roles for long periods and are experiencing 'burn-out'. The group may compete with other organisations for membership, or members may consist of people with specific agendas. There is no long term planning to assist direction setting and goals are strictly short term and self-centred. On-ground works are completed by a small, dedicated core through government funding. Salinity targets are not being met although small areas of salinity may be remediated with time.

Level 1 (lowest performance): The group is totally dependent on government for funding, support and leadership. There is a reluctance of members to assume any leadership roles or responsibilities, and there is apathy towards attracting new members. Meetings are irregular with few core members present, or meetings are non-existent. There are no agreed goals, and members may not share common problems to bring them closer together. There is little or no evidence of on-ground works occurring. The salinity problem will continue growing.
Source: Dart, J., Petheram, J. and Straw, W. (1998). Review of Evaluation in Agricultural Extension. Rural Industries Research and Development Corporation. Publication No 98/136.
Advice for CHOOSING this option (tips and traps)
- Where possible, pilot or field test rubrics and discuss them with the projects or programs being assessed, in order to establish a shared understanding of expectations early on.
- Check that the criteria at each level are defined clearly enough to ensure that scoring is accurate, unbiased and consistent. Make sure several evaluators can use the rubric and would score performance within the same range.
- Ensure that the criteria and expectations in the rubric are directly aligned with the overall objectives of the project and organisation. Rubrics upon which people are judged, especially if performance-related pay is linked to the assessment, can have a big influence on the way people work. This can create perverse incentives to focus on rubric scores rather than project objectives, unless the two are carefully aligned.
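The consistency check suggested above (that several evaluators would score the same performance within the same range) can be sketched as a simple agreement calculation during piloting. The function name and the one-level tolerance are assumptions for this sketch; a formal study might instead use a statistic such as Cohen's kappa.

```python
# Illustrative inter-rater consistency check for a piloted rubric:
# the fraction of rated items on which two evaluators' ordinal scores
# (e.g. rubric levels 1-5) differ by at most one level.
def within_one_level_agreement(scores_a, scores_b):
    """Fraction of items where two raters' scores differ by <= 1 level."""
    if len(scores_a) != len(scores_b):
        raise ValueError("Both raters must score the same items")
    close = sum(1 for a, b in zip(scores_a, scores_b) if abs(a - b) <= 1)
    return close / len(scores_a)
```

For example, if two raters score four groups as [5, 4, 3, 2] and [5, 3, 3, 4], they agree within one level on three of the four groups (0.75). A low agreement rate is a signal that level descriptors need clearer wording before the rubric is used for real assessments.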
Advice for USING this option (tips and traps)
- Involve the right mix of people in developing the rubrics to ensure that all important criteria have been included and the rubric is seen as legitimate by those who will be using its results.
- Consider using interim assessments with projects or programs to check on progress. This can avoid problematic surprises for both the evaluator and the projects being assessed when the final assessment is made, and can address performance issues early, thus improving overall outcomes.
- The Rubric Revolution: Jane Davidson, Nan Wehipeihana & Kate McKegg present a view of rubrics based on their use in giving voice to Indigenous values in New Zealand.
- The River Chart: a useful tool for visualising data compiled from multiple rubrics for comparison.
- Review of Evaluation in Agricultural Extension: This report from the Rural Industries Research and Development Corporation has an example of two different global assessment scales on pages 62 - 63.
- Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence. Judy Oakden.
Center for Advanced Research on Language Acquisition (CARLA). (2012). Process: Creating rubrics. Retrieved from http://www.carla.umn.edu/assessment/vac/Evaluation/p_7.html
Davidson, E.J. (2004). Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation. Beverly Hills, CA: Sage Publications.