Part of our commitment to better evaluation is making sure that evaluation itself is evaluated better. Like any intervention, evaluations can be evaluated in different ways.
There can be evaluative thinking early on in the planning stages, to identify priorities and potential consequences of doing an evaluation and of doing particular types of evaluation. There can be evaluation along the way to check that implementation is proceeding well and to identify whether any changes are needed. And there can be evaluation at the end, before acceptance and use of the report, and as a reflective process to identify lessons to be learned for future evaluations.
The BetterEvaluation site has resources in two different places to help you plan and implement evaluations of evaluations (sometimes referred to as meta-evaluation).
The task page “Review evaluation (do meta-evaluation)” has a range of options for processes:
- Beneficiary exchange: seeking feedback from the beneficiaries by discussing the findings with them.
- Expert review for meta-evaluation: reviewing the evaluation by using subject experts, either individually or as a panel.
- Group critical reflection: facilitating a group stakeholder feedback session.
- Individual critical reflection: asking particular individual stakeholders for their independent feedback.
- Peer review for meta-evaluation: reviewing the evaluation by using peers from within the organisation or outside of the organisation.
- Institutional review board: A committee set up by an organization or institution to monitor the ethical and technical research and evaluation conducted by its members. Sometimes the label “Ethics Committee” is used, especially in a university context.
The task page “Define ethical and quality standards for evaluation” has a range of options for reference documents to use for evaluating evaluation:
- Cultural Competency: ensuring cultural context is adequately taken into consideration during the evaluation.
- Ethical guidelines: Institutional or organizational rules or norms that guide evaluation practice, especially regarding vulnerable populations.
- Evaluation standards: Descriptions of good evaluation practice in terms of a number of different dimensions.
- Evaluation checklist: A systematic process for assessing different aspects of an evaluation. (We're currently creating this new option page, and will be adding Dan Stufflebeam's meta-evaluation checklist, based on the Joint Committee Program Evaluation Standards, and Michael Scriven's Meta-Evaluation Checklist.)
There are some important decisions to be made when planning a meta-evaluation.
Who will do the meta-evaluation, when and how?
Will it be done by the evaluator and the evaluation decision-makers (e.g. a steering group or designated stakeholders) to review the evaluation at key stages, especially during the planning stage? Will there be scope to make changes on the basis of the meta-evaluation? By the time a draft report is evaluated, there is often little opportunity to correct gaps and errors in data collection or analysis.
Will it be done by an external meta-evaluator (evaluator of an evaluation) when there is a draft evaluation report – or as a retrospective view of a completed evaluation? If the evaluation has already been completed, what will be the intended use of the meta-evaluation?
Will it be done by the community or by representatives of the intended beneficiaries of programmes to hold evaluators and evaluation managers accountable?
Will the focus be on evaluating an individual evaluation or a portfolio of evaluations? If it's the latter, then there will also need to be questions about organisational-level issues, in particular, what gets evaluated and what does not.
Will meta-evaluation be done by a community of practice of evaluators to identify and discuss challenges in meeting standards for good evaluation and strategies for doing so?
Should the meta-evaluation be based on existing standards or other reference documents, or would it be better to develop a customised reference document?
Particular standards might be important to use because of the organisational context, which either requires them or makes them likely to be relevant. For example, an evaluation of a UN program should pay attention to the UNEG norms and standards. An evaluation of a South African government policy should make reference to the SA Govt standards. An evaluation of a project operating in Latin America could draw on the Standards for Latin America and the Caribbean.
Are there issues not covered in existing standards that it would be useful to refer to explicitly? For example, UNICEF has a particular child focus. While UNICEF evaluations would be expected to be covered by UNEG standards, the issue of a child focus, and what this means for evaluation, might be something specific that UNICEF would want to add.
Q – What standards, principles, or checklists have you used for evaluating evaluation? Why did you choose them? When were they used, and by whom? How well did they work?
Let us know in the comments below or contact us!