Deciding to evaluate

To evaluate or not?

With the exception of external program reviews, which are managed centrally by the Policy and Evaluation Division, evaluation at IDRC is strategic rather than routine. This means that evaluations managed by IDRC program staff and grantees are undertaken selectively, in cases where the rationale for the evaluation is clear.

Within IDRC’s decentralized evaluation system, program staff and grantees generally have jurisdiction over evaluation decisions at the project level and, to a certain extent, at the program level.

At the project level, project evaluations are normally conducted under the direction of program officers or project partners. Not all projects are routinely evaluated.

The decision to evaluate a project is usually motivated by one of the following “evaluation triggers”:

  • Significant materiality (investment);
  • High risk;
  • Novel / innovative approach;
  • High political priority and/or scrutiny;
  • Advanced phase or maturity of the partnership.

At the program level, program-led evaluations are defined and carried out by a program in accordance with its needs. Program-led evaluations often focus on burning questions or learning needs at the program level. They strategically assess any important defined aspect within the program’s portfolio (e.g., project(s), organization(s), issues, modality, etc.). Program-led evaluations can be conducted internally, externally, or via a hybrid approach. The primary intended users are usually the program team or its partners (e.g., collaborating donors, project partners, like-minded organizations, etc.).

When NOT to evaluate

In general, there are some circumstances in which undertaking a project or program evaluation is not advisable:

  • Constant change. When a project or program has experienced constant upheaval and change, evaluation runs the risk of being premature and inconclusive;
  • Premature timing. Some projects and programs are simply too young and too new to evaluate, unless the evaluation is designed as an accompaniment and/or developmental evaluation;
  • Lack of clarity and consensus on objectives. This makes it difficult for the evaluator to establish what they are evaluating;
  • Primarily promotional purposes. Although “success stories” and “best practices” are often a welcome byproduct of an evaluation, it is problematic to embark on an evaluation if unearthing “success stories” is the primary goal. Evaluations should systematically seek to uncover both what did and what did not work.

A key question is: Realistically, do you have enough time to undertake an evaluation?

When the timelines for planning and/or conducting an evaluation are such that they compromise the credibility of the evaluation findings, it is better not to proceed but to focus efforts on how to make the conditions more conducive. 

Roles and responsibilities for evaluation

Within IDRC’s decentralized evaluation system, responsibility for conducting and using evaluation is shared:

  • Senior management actively promotes a culture of learning, creating incentives for evaluation and learning from failures and disappointing results. It allots resources for evaluation and incorporates evaluation findings into its decision-making.
  • Program staff and project partners engage in and support high-quality, use-oriented evaluations. They seek opportunities to build their evaluation capacities, think evaluatively, and develop evaluation approaches and methods relevant to development research.

IDRC resources

  • For IDRC's overall approach to evaluation, including its guiding principles, components, and roles within our decentralized system, read Evaluation at IDRC


Back to IDRC Evaluation Commissioners' guide