Conversations to have when designing a program: Fostering evaluative thinking


The first step in evaluating a program is knowing whether you can evaluate it – that the program is ‘evaluable’.

An evaluability assessment should be the first step for any evaluator conducting an evaluation. Usually this is done informally, but it may also be undertaken as a formal task.

There have been long-running discussions about the role of evaluative thinking and evaluators in program design – for example, the Australasian Evaluation Society established a Special Interest Group on Design and Evaluation, and the 2016 conference of the American Evaluation Association focused on the theme of Design + Evaluation. However, this conversation still often happens too late. Too often, an evaluator is brought in after the intervention has been designed and established and the data reporting systems are operational, which creates stress and barriers for all parties. Evaluators struggle to provide meaningful answers to their clients’ or colleagues’ questions with the information available to them, while program staff feel like they are swimming in a sea of reporting.

At the heart of this is a disconnect between intervention designers and evaluators. Some organisations address this by embedding evaluators in the program design team, but even where this is not done, great strides can be made by incorporating some of the theory, knowledge and thinking from evaluation practice into intervention design.

This doesn’t need to be a complicated process. Based on our own experiences, and the learnings from evaluability assessment, Dr Caroline Tomiczek and I developed a ‘toolkit’ of questions we believe program designers should bring to the design table. These questions should support the development of a program design that enables evaluators to enter the picture at any stage of program design and implementation. But even if the program never gets evaluated, we believe these questions support improved program design practice.

Five key questions:

1. What is the intervention?

It sounds simple! But make sure you have a clear scope that all stakeholders understand and can see their role in, and document it so that everyone understands the logic of your intervention.

This process allows you to ask questions about the intervention and agree on common definitions. Discussing it with your stakeholders helps you to identify areas where you might not have common ground, and to agree on an approach to addressing them.

See the Rainbow Framework task Develop Initial Description for some options on how to do this, and the task Understand and engage stakeholders for ideas on how to approach these discussions.

2. What would success look like?

Your intervention won’t be able to achieve everything no matter how hard you try, so what would success look like to you and your stakeholders? This has two elements – setting your criteria and standards for success.

Establishing criteria for success helps you to pin down which areas of your program you want your intervention to have an impact on. Who do you want to be impacted, and in what timeframe do you want to see success? Setting standards allows you to work out how good is ‘good enough’ for your intervention.

The Determine what ‘success’ looks like task page has a number of options and processes that can be used to work with stakeholders to clarify which criteria will be used to judge success. Having these conversations during program design helps to establish shared program expectations and priorities.

3. Are you collecting data on what you want to achieve?

When you’re setting up your intervention and reporting systems, it’s important to think about what data will need to be collected or retrieved in order to adequately describe what you want to achieve.

This includes thinking about whether there are measures, indicators or metrics that will be important to include, what sorts of data will be useful and appropriate to collect, and how you might do this (see the task: Collect and/or retrieve data). This might include setting up new measures or reporting systems during the implementation phase so you can track progress, or might involve communicating to program users that you will want to speak with them down the track.

It’s also important to make an effort to uncover potential unintended results. The task page Identify unintended results has options that can be used both before program implementation and during data collection.

4. Are your expectations realistic?

Make sure you’re realistic about what your intervention can have an impact on. This will help you to work out how you might demonstrate whether success has occurred, and attribute it to your intervention appropriately. At what scale will change occur? What other actors or contextual factors are likely to contribute to any changes?

The Understand Causes cluster contains tasks and options for investigating the causal relationship between your intervention and the observed changes, and for investigating possible alternative explanations. It’s also a good idea to get comfortable with being somewhat conservative about attributing change to your intervention. This might involve some difficult conversations with program funders, but having these conversations is likely to lead to more robust implementation and to evaluations that are better able to demonstrate genuine impact.

5. What’s your evaluation capability?

Find out who within your organisation understands evaluation. Do you have staff who know what questions to ask, how to collect data, and how to analyse data? Working with your internal ‘evaluators’ can help support program implementation, drawing on the knowledge of program design and outcomes that evaluation practitioners bring.

You might have capability gaps in one or more of the program design and evaluation stages. If so, think about bringing in external resources to build capability, such as an evaluation coach, or engaging an evaluator to set up a program’s M&E systems. The task page Develop evaluation capacity has a range of options for strengthening the evaluation capacity of individuals and organisations.

Building evaluability into intervention design means asking these questions up-front and embedding evaluative thinking into the set-up of your program. In the short term it might feel like more work, but I firmly believe that doing so will improve both the feasibility and quality of any evaluation, and the quality of the intervention in the first place.

This post is based on a paper by Joanna Farmer and Dr Caroline Tomiczek (Associate Director, Urbis), presented at the AES International Evaluation Conference in Canberra on 6 Sept 2017.
