Reporting

Evaluation reports should include relevant and comprehensive information, structured in a manner that facilitates use, and should also provide transparency about the methods used and the evidence obtained to substantiate the conclusions and recommendations.

Evaluation, by definition, answers evaluative questions, that is, questions about ‘quality’ (how good something is) and ‘value’ (taking into account the specific situation such as the resources used to produce the results and the needs it was supposed to address). Evaluative reasoning is required to synthesize dimensions of quality and value to formulate defensible (i.e., well reasoned and well evidenced) answers to the evaluative questions.

The structure of an evaluation report can do a great deal to encourage the succinct reporting of direct answers to evaluative questions, backed up by enough detail about the evaluative reasoning and methodology to allow the reader to follow the logic and clearly see the evidence base.

The following recommendations will help to set clear expectations for evaluation reports that are strong on evaluative reasoning:

  1. The executive summary must contain direct and explicitly evaluative answers to the key evaluation questions (KEQs) used to guide the whole evaluation.
  2. Explicitly evaluative language must be used when presenting findings (rather than value-neutral language that merely describes findings). Examples should be provided.
  3. Clear and simple data visualization must be used to present easy-to-understand ‘snapshots’ of how the intervention has performed on the various dimensions of merit.
  4. The findings section must be structured using KEQs as subheadings (rather than types and sources of evidence, as is frequently done).
  5. There must be clarity and transparency about the evaluative reasoning used, with the explanations clearly understandable to both non-evaluators and readers without deep content expertise in the subject matter. These explanations should be broad and brief in the main body of the report, with more detail available in annexes.
  6. If evaluative rubrics are relatively small, they should be included in the main body of the report. If they are large, a brief summary of at least one or two should be included in the main body, with all rubrics included in full in an annex.

A hallmark of great evaluative reasoning is how succinctly and clearly key points can be conveyed without glossing over important details.

[Source: Davidson, J. Evaluative Reasoning. UNICEF Methodological Briefs: Impact Evaluation No. 4. Florence: UNICEF.]

Products

The following items are potential outputs from this step. Where possible, it may also be useful to identify other deliverables that have proven effective for similar evaluations.

  • Evaluation report
  • Products tailored to different audiences: evaluation summary, policy brief, newsletter, conference presentation, etc.

IDRC-specific information

The IDRC evaluation manager is responsible for:

  • Identifying what report(s) will be needed and the agreed format. This should be done early in the evaluation process.
  • Providing feedback on the draft evaluation reports to ensure they are in line with the IDRC guideline. Only reports that have been approved on the basis of a quality assessment are accepted as final deliverables and released for use.

IDRC staff members, partners, interns, or consultants doing evaluation work for IDRC should use the guideline Formatting Evaluation Reports at IDRC to structure the main evaluation report.

IDRC Evaluation Report Template:

  1. Cover Page
  • Title
  • Evaluator(s) name and organizational affiliation
  • Date
  • Name of the IDRC team, branch, unit, or person commissioning the evaluation
  • IDRC Project or Research Support Project numbers of all the projects covered in the assessment (if applicable) 
  2. Executive Summary

A brief 1-2 page description of the main findings, methodological approach, and recommendations or conclusions of the evaluation.

  3. Body of the Evaluation Report
  • Background of the study:

This should detail the intended user(s) and use(s) of the evaluation process and/or product; what led to the evaluation (e.g. need, purpose, etc.); the specific evaluation questions or issues addressed; the values and principles guiding the evaluation process; and, any capacity building intentions.

  • Description of the methodology employed:

This should include an analysis of the strengths and weaknesses of the research design, tools and methods used, the process followed, data sources, and people interviewed. It should describe how the project/program stakeholders and the intended user(s) of the evaluation participated in the process. It should also comment on the validity of the evidence and any ethical considerations.

  • Evaluation Findings:

This section should be formulated according to the evaluation plan and the terms of reference (TOR) of the evaluation study. 

  4. Annexes
  • List of acronyms.
  • List of people interviewed, with full contact details where appropriate and not in breach of confidentiality.
  • Bibliography of all documents reviewed.
  • TOR for the evaluation and/or evaluator.
  • Biography of the evaluator(s). This should include the name, sex, organizational affiliation, and contact information for the evaluator(s).

The IDRC guideline on data visualization (PDF, 774KB) provides useful tips for making data easier to understand and use.

The quality of the evaluation report is judged by IDRC’s Evaluation Unit against four internationally recognized standards: utility, feasibility, accuracy, and propriety. A copy of “Quality Assessment of IDRC Evaluation Reports” should be given to the evaluator(s) to ensure they understand how the quality of the evaluation report will be assessed.
