How well do we evaluate evaluation?


Part of our commitment to better evaluation is making sure that evaluation itself is better evaluated. Like any intervention, evaluations can be evaluated in different ways.

There can be evaluative thinking early on, in the planning stages, to identify priorities and the potential consequences of doing an evaluation, and of doing particular types of evaluation. There can be evaluation along the way, to check that implementation is proceeding well and to identify whether any changes are needed. And there can be evaluation at the end, before acceptance and use of the report, and as a reflective process to identify lessons for future evaluations.

The BetterEvaluation site has resources in two different places to help you plan and implement evaluations of evaluations (sometimes referred to as meta-evaluation). 

The task page “Review evaluation (do meta-evaluation)” sets out a range of options for processes.

The task page “Define ethical and quality standards for evaluation” sets out a range of options for reference documents to use when evaluating evaluation.

There are some important decisions to be made when planning a meta-evaluation.

Who will do the meta-evaluation, when and how? 

Will it be done by the evaluator and the evaluation decision-makers (e.g. a steering group or designated stakeholders) to review the evaluation at key stages – especially during the planning stage? Will there be scope to make changes on the basis of the meta-evaluation? By the time a draft report is evaluated, there is often little opportunity to correct gaps and errors in data collection or analysis.

Will it be done by an external meta-evaluator (evaluator of an evaluation) when there is a draft evaluation report – or as a retrospective view of a completed evaluation?  If the evaluation has already been completed, what will be the intended use of the meta-evaluation?

Will it be done by the community or by representatives of the intended beneficiaries of programmes to hold evaluators and evaluation managers accountable?

Will the focus be on evaluating an individual evaluation or a portfolio of evaluations? If it’s the latter, then there need to be questions about organisational-level issues – in particular, what gets evaluated and what does not.

Will meta-evaluation be done by a community of practice of evaluators to identify and discuss challenges in meeting standards for good evaluation and strategies for doing so? 

Should the meta-evaluation be based on existing standards or other reference documents, or would it be better to develop a customised reference document?

Particular standards might be important to use because of the organisational context, which either requires them or makes them likely to be relevant. For example, an evaluation of a UN programme should pay attention to the UNEG norms and standards. An evaluation of a South African government policy should make reference to the South African Government standards. An evaluation of a project operating in Latin America could draw on the Standards for Latin America and the Caribbean.

Are there issues not covered in existing standards that it would be useful to refer to explicitly? For example, UNICEF has a particular child focus. While UNICEF evaluations would be expected to be covered by the UNEG standards, the issue of a child focus, and what it means for evaluation, might be something specific that UNICEF would want to add.