As part of a project with an Australian state government agency, I am developing a rubric for people with little or no evaluation experience who may need to judge the quality of an evaluation report. This sits within a larger project in which an evidence base of past evaluation reports is being made available to program designers. We want users to access these reports, but also to have some support in judging their quality.
I am keen to understand people's experience with rubrics in program monitoring, and whether the progress expected in a program and its detailed indicators can be accommodated in a rubric. I like the idea of consolidating results into a rubric by using indicator performance aligned with KEQs and levels (Excellent, Good, Poor, etc.) to synthesise results back up from indicator to outcome. My struggle is with the sheer quantity of lower-level indicators that need to be tracked, particularly early in a program before any outcomes are possible.
The term "rubric" is often used in education to refer to a systematic way of setting out expectations for students in terms of what would constitute poor, good, and excellent performance.