Evaluations can be conducted by a range of different actors, including external contractors, internal staff, those involved in delivering services, peers, the community, or a combined group. It is therefore important to decide who is best placed to conduct the evaluation.
Consider the relative importance of different types of expertise. Relevant expertise may include skills and knowledge in evaluation, in the specific domain (e.g. education) or program (e.g. delivering health services), or in the local culture and context.
Consider the balance of distance and involvement that will be most suitable and that will best support use of the evaluation findings. An external, unaligned evaluator may be viewed as more (or less) credible by different stakeholders. Involving staff and communities may be important for supporting cultural change, building knowledge, and encouraging utilization of the evaluation findings.
Different management tasks arise depending on who is involved in which evaluative activities. For example, when using an external evaluator you will need to develop a process for selecting and managing them. If internal staff and/or intended beneficiaries are involved, there may be a need to ensure that processes are well documented and that relevant training in specific evaluation options is conducted so that quality and ethical standards are maintained.
Decisions about who will conduct an evaluation, or components of an evaluation, will also be informed by timelines, resources, and the purpose of the evaluation.
- Community: conducting an evaluation using the broader community or groups of intended beneficiaries.
- Expert Review: conducting an evaluation using someone with specific content knowledge and professional expertise to provide expert judgment.
- External Consultant: contracting an external consultant to conduct the evaluation.
- Hybrid - Internal and External Evaluation: a combination of internal staff and an external (usually expert) evaluator jointly conducting an evaluation.
- Internal Staff: conducting an evaluation using staff from the implementing agency.
- Learning Alliances: bringing together different groups to conduct the evaluation.
- Peer Review: conducting an evaluation using individuals or organizations who are working on similar projects.
- Horizontal Evaluation: conducting an evaluation through a structured approach to peer learning.
- Positive deviance: involves intended evaluation users identifying ‘outliers’ (those with exceptionally good outcomes) and understanding how they have achieved these.
- Participatory evaluation: involving key stakeholders in the evaluation process.
- NSW Government Evaluation Toolkit: Part 3 of this web-based toolkit, developed by BetterEvaluation for the NSW Government, focuses on choosing the right consultant for an evaluation. It provides detailed advice about what to look for in a consultant and how to go about choosing the right one for the job.
- Key Considerations for Managing Evaluations: Part 4 of this guide from Pact South Africa focuses on selecting an evaluation team. It includes an outline of the skills required by different team members, what to look for when forming a selection committee and how to develop key selection criteria that ensure transparency and consistency.
- Who Should Conduct Your Evaluation? Chapter 3 of the Program Manager's Guide to Evaluation describes the makeup of different types of evaluation teams and the positive and negative aspects of using them.
- Is independence always a good thing? This blog post from Howard White (May 1, 2014) argues that the benefits of an independent evaluation team can sometimes be overstated. He presents three arguments to support this contention: institutional independence does not necessarily safeguard against bias toward positive evaluations; independence comes at a cost; and what agency evaluation departments do is only a small part of the evaluation story.