An assessment of the extent to which an intervention can be evaluated in a reliable and credible fashion.
This overview is based on a literature review of Evaluability Assessment commissioned by the UK Department of International Development (DFID) in 2012 and published as DFID Working Paper (Davies 2013). The review identified 133 documents including journal articles, books, reports and web pages, published from 1979 onwards. Approximately half of the documents were produced by international development agencies; most of the remaining documents covered American domestic agency experience with Evaluability Assessments (the latter has been more recently summarised by Trevisan and Walser, 2014).
Amongst international development agencies there appears to be widespread agreement on the meaning of the term “evaluability”. The following definition from the Organisation for Economic Co-operation and Development-Development Assistance Committee (OECD-DAC) is widely quoted and used:
“The extent to which an activity or project can be evaluated in a reliable and credible fashion” (OECD-DAC 2010; p.21)
Evaluability Assessments have been used since the 1970s, initially by government agencies in the United States, and subsequently by a wider range of domestic organisations. International development agencies have been using Evaluability Assessments since 2000. Although the most common focus of an Evaluability Assessment is a single project, Evaluability Assessments have also been carried out on sets of projects, policy areas, country strategies, strategic plans, work plans, and partnerships.
The DFID Working Paper (Davies 2013) on Evaluability Assessment identified these dimensions of evaluability:
- Evaluability “in principle”, given the nature of the project's theory of change
- Evaluability “in practice”, given the availability of relevant data and the capacity of management systems to provide it
- The utility and practicality of an evaluation, given the views and availability of relevant stakeholders
The overall purpose of an Evaluability Assessment is to inform the timing of an evaluation and to improve the prospects of an evaluation producing useful results. However, the focus and results of an Evaluability Assessment will depend on its timing, as shown below. Early assessments may have wider effects on long-term evaluability, but later assessments may provide the most up-to-date picture of evaluability.
| Evaluability Assessment focus | Evaluability Assessment results |
| --- | --- |
| Theory of change (ToC) | Improved project design |
| ToC & data availability | Improved M&E framework |
| ToC & data availability & stakeholders | Improved evaluation terms of reference (ToRs) |
Two forms of advice are commonly provided. The first is about sequencing of activities, given in the form of various stage models. The second is about the contents of inquiries, often structured in the form of checklists.
Stage models include largely predictable (but often iterated) steps involving planning, consultation, data gathering, analysis, report writing and dissemination. Two of these are worth commenting on here:
The first relates to the planning stage. An important early step in an Evaluability Assessment is the reaching of an agreement on the boundaries of the task, which has two aspects:
- The extent to which the Evaluability Assessment should proceed from a diagnosis of evaluability on to a prescription and then implementation of changes that are needed to address evaluability problems. For example, revision of a theory of change or development of an M&E framework.
- The range of project documents and stakeholders that need to be identified and then examined and interviewed respectively. These choices have direct consequences for the scale and duration of the work that needs to be done.
The second relates to the analysis stage, where two tasks can be identified:
- At the base is the synthesis of answers from multiple documents and interviews in response to a specific checklist question. Here, the assessment needs to: (a) assess the validity and reliability of the data; and then (b) identify consensus and outlier views.
- At the next level is the synthesis of answers across multiple questions within a given evaluability dimension. Here, the assessment needs to: (a) identify any “obstacle” problems that must be removed before any other progress can be made; and then (b) assess the relative importance of all other problems.
Checklists are used by many international agencies, with varying degrees of rigor and flexibility. At best, their use provides an accountable means of ensuring systematic coverage of all relevant issues. The DFID Working Paper synthesised the checklists used by 11 different agencies into a set of three checklists that cover the dimensions of evaluability listed above. These can provide a useful “starter pack” which can be adapted according to circumstances. If an aggregate score on evaluability (or on multiple aspects of evaluability) needs to be calculated, then explicit attention needs to be given to the weighting given to each item on a checklist. It is unlikely that all items will be of equal importance.
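Where an aggregate score is wanted, a weighted average makes the relative importance of each checklist item explicit. The sketch below is illustrative only: the item names, ratings, and weights are invented for the example and are not drawn from any agency's checklist.

```python
# Illustrative weighted scoring of evaluability checklist items.
# Item names, ratings (0-4 scale) and weights are hypothetical examples,
# not taken from any published agency checklist.
items = {
    # item description: (rating, weight)
    "Clear theory of change": (3, 0.4),
    "Baseline data available": (1, 0.3),
    "Stakeholders support the evaluation": (4, 0.2),
    "Evaluation budget identified": (2, 0.1),
}

def weighted_score(items):
    """Return the weight-normalised average rating across all items."""
    total_weight = sum(w for _, w in items.values())
    return sum(r * w for r, w in items.values()) / total_weight

print(f"Aggregate evaluability score: {weighted_score(items):.2f}")  # 2.50
```

Making the weights explicit in this way also makes it easier to accompany the aggregate score with the text explanation of each weighting that transparency requires.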
The time required to complete an Evaluability Assessment can range from a few days to a month or more. A key determinant is the extent to which stakeholder consultations are required and whether multiple projects are involved. Evaluability Assessments at the design stage may be carried out largely through desk-based work, whereas Evaluability Assessments prior to a proposed evaluation are much more likely to require extensive stakeholder consultation.
It is the relationship between the cost of an Evaluability Assessment and the cost of an evaluation that is important, rather than its absolute cost. When the proportionate cost of an Evaluability Assessment is high then, correspondingly, large improvements in evaluation results will be needed to justify those costs.
Some project designs are manifestly unevaluable and some M&E frameworks are manifestly inadequate at first glance. In these circumstances, an Evaluability Assessment would not be needed to make a decision about whether to go ahead with an evaluation. Efforts need to focus on the more immediate tasks of improving project design and/or the M&E framework.
In other circumstances, the cost of a proposed evaluation may be quite small, and thus the cost-effectiveness of making an additional investment in an Evaluability Assessment may be questionable. On the other hand, with large projects, even those that appear relatively evaluable, investment in an Evaluability Assessment could still deliver cost-effective changes.
At the design and approval stages of a project, the associated quality assurance processes can include evaluability-oriented questions. The process of Evaluability Assessment can in effect be institutionalised within existing systems rather than contracted as a special event.
At the inception stage, some organisations may routinely commission the development of an M&E framework which should intrinsically address evaluability questions. Or, they may have established procedures for reviewing the M&E system which are more purpose-specific than a generic Evaluability Assessment tool of the kind provided by the DFID working paper.
Prior to a proposed evaluation, some organisations may commission preparatory work that takes on a wider ambit than an Evaluability Assessment. Approach Papers may cover issues listed in Evaluability Assessment checklists but also scan a much wider literature for evidence for and against the relevance and effectiveness of the type(s) of interventions being evaluated.
In 2000, ITAD, a UK consultancy firm, carried out an Evaluability Assessment of 28 human rights and governance projects, funded by the Swedish International Development Cooperation Agency (Sida) in four countries in Africa and Latin America (Poate et al. 2000). This assessment is impressive in a number of respects. Analysis was done with the aid of a structured checklist that helped minimise divergences of treatment by the consultants who worked on the study. Nineteen evaluation criteria were investigated by means of subsidiary questions, and a score given for each criterion. The most common evaluability problems found related to unavailability of data, followed by issues of project design, including insufficient clarity of purpose and the difficulty of causal attribution. Nevertheless, the authors were able to spell out a range of evaluation options that could be explored, along with the type of capacity building needed to address the identified issues. Their report includes a full data set of checklist ratings of all projects on all criteria, thus enabling others to analyse this experience further with other research or evaluation purposes in mind.
The design of checklists can be usefully informed by theory, not just ad hoc or experience-based conjecture. Sources can include relevant evaluation standards, codes of ethics, and syntheses of studies of evaluation use.
Checklist weightings have been used by a number of agencies. Because of the diversity of possible approaches to evaluation and of specific evaluation contexts, it is hard to justify any universally applicable set of weightings for a given checklist. However, weightings can be assigned “after the fact” (i.e., after a specific Evaluability Assessment has been carried out for a particular project in a given context). Like all good weightings, their use needs to be accompanied by text explanations.
The attached tools can be further adapted to specific needs/contexts. Please feel free to share your experience with Evaluability Assessment in the comments to this page or recommend additional resources.
- Outline structure for a Terms of Reference for an Evaluability Assessment
- An Evaluability Assessment Checklist
Davies R (2013). Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations. DFID Working Paper 40. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/248656/wp40-planning-eval-assessments.pdf
OECD-DAC (2010). Glossary of key terms in evaluation and results based management. Paris: OECD-DAC. Available at: http://www.oecd.org/development/peer-reviews/2754804.pdf
Poate D, Riddell R, Curran T, Chapman N (2000). The Evaluability of Democracy and Human Rights Projects Volume 1 & 2. Available at: www.oecd.org/derec/sweden/46223163.pdf
Trevisan M, Walser T (2014). Evaluability Assessment: Improving Evaluation Quality and Use. SAGE Publications. See: http://www.uk.sagepub.com/textbooks/Book240728
This document provides an overview of the utility of evaluability assessment, along with specific guidance and a tool for implementing one before an impact evaluation is undertaken. The guide was specifically developed for conducting evaluability assessments as part of the Methods Lab for Impact Evaluation – an action learning collaboration between the Overseas Development Institute (ODI), BetterEvaluation (BE) and the Australian Department of Foreign Affairs and Trade (DFAT). It was piloted and further refined during assessments in Afghanistan, Nepal, and the Pacific, and based on feedback from a range of evaluators and commissioners of evaluation.
Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations. (Davies R, 2013)
This report presents a synthesis of the literature on Evaluability Assessments up to 2012. The main focus of the synthesis is on the experience of international agencies and on recommendations relevant to their field of work. The synthesis provides recommendations about the use of evaluability assessments.
A bibliography on evaluability assessment. (Davies R, 2012).
This bibliography includes links to a range of literature related to evaluability assessment.
Evaluability Assessment: Improving Evaluation Quality and Use. (Trevisan M, Walser T, 2014), Sage Publications.
This book summarises a wealth of American domestic agency experience. Stages of an Evaluability Assessment process are described by individual chapters, each of which includes a checklist of issues to examine, along with case examples.
Evaluability Assessment: Examining the Readiness of a Program for Evaluation: This guide from the Juvenile Justice Evaluation Center is aimed at providing juvenile justice program managers with a guide to implementing evaluability assessment in order to ensure that programs are ready for evaluation.
Guidance Note on Carrying Out an Evaluability Assessment: This guide from the United Nations Development Fund for Women (UNIFEM) was developed to ensure program managers understand the key concepts behind evaluability assessment.
Evaluability Assessments and Choice of Evaluation Methods: This webinar highlights the importance of evaluability assessments for development projects, as well as discussing the suitability of various evaluation methods that are available to a manager.