Evaluability assessments are an essential new tool for managers

In this blog, Rick Davies and Keith Child discuss the new CGIAR guidelines for evaluability assessment. This blog post was originally published on the CGIAR website as "Evaluability assessments are an essential new tool for CGIAR managers" and is republished here with permission. Minor edits have been made to the original post.

The evaluation report has been finalized, recommendations have been made, the findings have been presented to management and funders, and then … nothing happens.

For one reason or another, the evaluation failed in some critical aspect that would have made it useful for stakeholders. As experienced evaluators with almost 50 years between us, this is close to a worst-case scenario. Unfortunately, it is something we know happens more often than is generally acknowledged. The good news is that this scenario is avoidable. By conducting an evaluability assessment (EA), evaluations can become the management, accountability and learning tools they were intended to be.

What is an evaluability assessment?

During an evaluability assessment, judgements are not made about an intervention and what has been achieved, but about the possibility of making such judgements and their likely utility. This involves an assessment of the clarity of an intervention's design, the verifiability of the results, stakeholders' expectations of an evaluation and potential institutional constraints on the evaluation. An EA can be a very formalized process, led by an independent assessor, or it can be something much more modest, conducted in-house by CGIAR staff. The one thing an EA is not is a one-size-fits-all tool! Traditionally, an EA is undertaken in preparation for an evaluation, but more recently, EAs have become part and parcel of a professionalized evaluation approach and can be conducted at any time in the intervention cycle (Figure 1). Evaluability assessments serve as a quality assurance mechanism for an intervention, including a health check on its monitoring, evaluation, learning and impact assessment (MELIA) components related to performance and impact.

Evaluability assessments are nothing new; in fact, EAs have been used for over 35 years.¹ In other organizations, evaluability assessments are well-established practice. Bilateral and multilateral organizations like the United States Agency for International Development (USAID), the Foreign, Commonwealth and Development Office (FCDO), the World Health Organization (WHO), the World Bank Group (WBG) and many others have mainstreamed EAs into their evaluation workstreams.


Figure 1: An evaluability assessment can be conducted at every phase of the project cycle
Source: CGIAR Evaluation Guidelines: Conducting and Using Evaluability Assessments within CGIAR.


Example: Evaluability assessment in CGIAR

In 2022, the CGIAR System Council and Board approved the CGIAR Evaluation Framework and a revised Evaluation Policy. The Framework and Policy made evaluation foundational to CGIAR's effort to inform the design of interventions, provide actionable evidence to support management and governance decisions, and ensure a high level of accountability to funders. Meeting this potential, however, requires more than business as usual: it requires advance planning, high-quality evaluation inputs and a genuine commitment from CGIAR staff. Recognizing this, evaluability is included as one of 15 standards and principles championed in the CGIAR Evaluation Framework.

To turn good practice into an operational reality, the Independent Advisory and Evaluation Service (IAES) has recently published detailed guidelines that explain how and why evaluability assessments are used in CGIAR to improve evaluability, increase the cost-effectiveness of evaluations and foster a culture of continuous learning (Figure 2). The guidelines introduce an evaluability assessment framework generally applicable to the entire CGIAR portfolio of investments (e.g., Initiatives, Regional Initiatives, Impact Area Platforms) and set forth a six-step process for conducting an EA at any stage of the intervention cycle. Since a one-size-fits-all approach is not practical in CGIAR, the guidelines give managers considerable flexibility in conducting an EA, so that they can commission the EA that is most useful to them.


Figure 2: Purpose of the CGIAR evaluability assessment guidelines
Source: CGIAR Evaluation Guidelines: Conducting and Using Evaluability Assessments within CGIAR.

Evaluability assessments are an important tool for managers

Experience has taught us that even under the best of circumstances, evaluation readiness should never be assumed. Whether it is a lack of demand from stakeholders or something more profound like low-quality performance data, a hundred things can turn an evaluation into a ritualized tick-box endeavour. The best way to avoid this scenario is to conduct a high-quality and timely EA. 

For managers, monitoring, evaluation and learning professionals, and stakeholders, EAs mean that problem areas can be fixed before they become insurmountable during an evaluation. In most cases, an EA will identify problems and propose a remedy. In other cases, the EA may help narrow the scope of the evaluation or suggest key evaluation questions that need extra attention. There are undoubtedly many good reasons to conduct an EA. The CGIAR Guidelines outline a number of these reasons, which are relevant beyond the context of CGIAR. Our top four reasons (the guidelines go further and point to twelve) why managers should welcome an EA are:

  1. Theory of change (ToC): ToCs have been part of CGIAR for some time, but the new Performance and Results Management Framework (PRMF) places enormous importance on ToC design and use for everything from annual reporting and iterative management to mid-level theory testing. The Synthesis of Learning from a Decade of CGIAR Research Programs (2021) revealed that the ToCs of the CGIAR Research Programs were not well articulated or comprehensive. While this critique was widely acknowledged at the time, the design process for CGIAR Research Initiative ToCs was, in many cases, rushed. Managers who see room to improve their ToCs should consider an EA to help identify how to do so.
     
  2. Indicators: CGIAR collects data against portfolio-wide output, outcome, and impact indicators (e.g., common reporting indicators), all the way up to Action Areas. While these data provide funders with a reasonable sense of the portfolio's overall health, individual Initiatives need to devise tailored indicators to capture results not included in the common reporting indicators and to evidence their ToC. Doing so can be challenging, both in terms of indicator selection and maintaining data quality. An EA can help provide a preliminary indication of indicator gaps and data issues. 
     
  3. Baselines: The CGIAR Evaluation Policy (2022) outlines seven evaluation criteria that serve as the basis upon which evaluative judgements are made. To apply these criteria (effectiveness, efficiency, coherence, relevance, sustainability, quality of science and impact), evaluators need a clearly defined, time-bound 'object of evaluation', within which the lines of inquiry against the criteria are examined. For this, in addition to indicators, a baseline against which progress on targets can be measured is critical. For process and performance evaluations in CGIAR, baseline data establish a clear starting point, not only in the temporal sense but also in terms of the situation at inception against which progress can be assessed. An EA helps to assess the availability and quality of these baseline data or, when they are not available, the strategies for establishing baselines. This is pertinent to future evaluations, so that evaluative judgements can be made about the evaluation criteria of interest.
     
  4. Engagement with stakeholders: The impact of an evaluation will depend heavily on the extent to which stakeholders are engaged with the evaluation design, implementation, and results dissemination. Their interests, e.g., in aspects of the ToC, in the data being generated and in the questions being asked, will affect how relevant and credible they perceive the evaluation process and results to be. This, in turn, directly affects stakeholders' willingness to take up and follow through with relevant actions. An EA can help identify the diversity and commonalities among stakeholders' interests and suggest what could feasibly be within the scope of an evaluation, and what might be best left outside it.

The 2022 CGIAR Evaluation Framework and Policy ushered in 'evaluability' as a key evaluation principle and standard. There are many good reasons to celebrate the arrival of evaluability assessments in CGIAR, and to encourage their use in other organizations. Most importantly, evaluability assessments are core to a professionalized evaluation approach to enhance evidence-based decision-making that will help organizations achieve their missions.

¹Wholey (1979) is generally cited as introducing EAs.
