Judy Oakden is an independent evaluator from Aotearoa New Zealand who runs her own consultancy and is a member of the Kinnect Group. She was one of ten participants in the BetterEvaluation writeshop initiative, led by Irene Guijt, which facilitated evaluation practitioners to write up their valuable experiences. Judy's paper is the first in the series to be published.
In Aotearoa New Zealand, rubrics have been adopted across a number of institutions to help ensure assessment is clear and transparent, and respects and includes diverse lines of evidence in evaluation. This case, written as part of the BetterEvaluation writeshop process, discusses how the use of rubrics was helpful throughout all stages of an evaluation of the First-time principals’ Induction Programme.
[Editor's note: see also Patricia Rogers' recent blog post for an introduction to rubrics]
Why we used rubrics in the evaluation
The Ministry of Education required this evaluation on a short time-frame, with a tight budget. This case describes how the use of rubrics helped us undertake the evaluation in that context. In particular, we chose to use rubrics for this project because we believed the process of developing them would help us reach a shared understanding with key stakeholders, at the start of the evaluation, of which aspects of performance matter to them and what the levels of performance (for instance, poor, good or excellent) might look like. We also expected the rubrics to help us identify and collect credible evidence that answers the important evaluation questions, and to provide a framework for synthesising data and reporting on results in an efficient and effective manner that is useful to the client.
The paper uses the BetterEvaluation Rainbow Framework to describe how we developed the rubrics for this evaluation and how they were used with other evaluation methods to make relevant and meaningful assessments.
Lessons learned about rubrics
Since undertaking this evaluation back in 2008 I've completed a number of other evaluations using this or a similar approach and new ‘lessons learned’ in the use of rubrics have emerged. These are:
Rubrics can help frame the evaluation: At the start of an evaluation the development of evaluative criteria and rubrics can help frame the evaluation and set the boundaries, particularly in complex evaluations.
Rubrics are not set in stone: It’s important to set client expectations that the rubrics may not be ‘finalised’ until near the end of the evaluation. At times rubrics need to be amended as we learn more during an evaluation. Sometimes additional evaluative criteria need to be added as we learn more about the programme or service.
Rubrics can aid in the development of shared understanding amongst stakeholders: Where stakeholders are involved in the development of rubrics, they appear to have a greater understanding of what the evaluation will cover and what will constitute ‘good’ or ‘poor’ levels of performance, i.e. the basis on which judgements of performance or effectiveness will be made.
Rubrics can aid in the development of efficient and effective data collection strategies: For the evaluator, once developed, rubrics can enable a more integrated approach to data collection to answer the evaluation questions. It becomes clear where existing data can be used and where new data collection and/or interviews are needed.
Rubrics can aid data synthesis: Data synthesis can be efficient when evidence is mapped against the evaluative criteria. The rubrics can be an effective tool to help layer and interpret the evidence. Clients can have the opportunity to be involved in the judgement-making stage, and hence gain a better understanding of how the process is undertaken, aiding transparency.
Rubrics can provide a useful reporting framework that answers the important questions: Reporting can be developed specifically to answer the evaluation questions. Clients have told me that a report framed by the evaluative criteria is very focused and actionable. Clients have also told me they like the way transparent judgements are made; they are not left trying to figure out for themselves whether a result is ‘good enough’ or not.
Challenges with using rubrics
While rubrics have mostly been helpful there are times when their use can be challenging:
Not all stakeholder groups can work effectively with rubrics: Sometimes it is hard to get agreement amongst stakeholders on the key aspects of performance, or on what constitutes ‘good’ performance.
Rubrics support a participatory process and not all stakeholders want to engage in this manner: Not all stakeholders have the time or inclination to work with evaluators in a participatory manner to develop the rubrics for their evaluation. It is still possible to develop rubrics from the literature and other sources, but these still need to be signed off by the client.
At times it can be challenging to prioritise the sources of data that are considered the most credible for the evaluation: Sometimes there is considerable data which can be used, and it is not always easy to prioritise it or determine the most credible sources. With large amounts of data, synthesis can be complex and time consuming.
Share your experiences, comments and questions
So those are my thoughts. For those of you who have also used evaluation rubrics, I’d be keen to hear what you have learned:
When do you find evaluation rubrics work well?
What are some of the tips and traps you have discovered in the use of evaluation rubrics?
Are there times when you might not use evaluation rubrics?