52 weeks of BetterEvaluation: Week 26: Understanding causes
If you are doing any kind of outcome or impact evaluation, you need to know something about whether the changes observed (or prevented) had anything to do with the program or policy being evaluated. After all, the word “outcome” implies something that “comes out of” the program – right?
This has been a hot topic for discussion around the world, and my theme topic for the year. I’ve recently run three workshops on this in Australia (for the AES), one in Canada (at CES), am about to run another here in New Zealand next month (for anzea) and am working on an ebook on the issue.
One thing I am finding in my travels – and in my work generally – is that most people seem to think the options for saying anything about causation are fairly limited. It’s common to assume you need a large dataset of quantitative evidence and some highly sophisticated statistics to tease out causal links.
In a similar vein, a recent discussion on impact evaluation in The Guardian included the following statement, which assumed that control groups are the only way to establish causal attribution:
Use control groups for evaluation: While measuring inputs into development programmes is important, unless we measure their impact we will not be able to learn to be good implementers. In my opinion, measuring attribution is critical, and we can't do that unless we use control groups to compare them to.
-- Rose Mary Garcia, director of the monitoring and evaluation practice, Crown Agents, Washington, DC, USA
Contrary to popular opinion, there are actually a lot of options. They range from the high-powered ones to some fairly low-tech common-sense options you can use even in small-scale community projects, and even if ALL of your evidence is qualitative. Yes, really!
Did you know that the BetterEvaluation site has 29 (yes, twenty-nine!) different options for inferring causation?
No matter what kind of evaluation you are working on, from a small-scale qualitative project through a large-scale mixed methods evaluation, you have options!
Better still, the BetterEvaluation site has some fantastic guides for how to choose among those options.
A few highlights:
- Download the excellent 2-page guide on Understanding Causes (PDF) covering all 29 methods plus some approaches that blend several of them.
- Check out the new guide, developed by Angela Mbroz and Marc Shotland from the Abdul Latif Jameel Poverty Action Lab, on when and how best to use RCTs (randomized controlled trials).
- Watch the 20-minute coffee break webinar: Understand Causes of Outcomes and Impacts, where I walk you through highlights from the BetterEvaluation site, plus some common sense examples of causal inference with non-experimental methods - see below for the video.
Finally, here are the three key messages from the Understand Causes webinar:
- All outcome/impact evaluation needs causal inference
- Going qualitative doesn't let you off the causal hook!
- The real “gold standard” is not any single method, but sound causal reasoning, backed by the right mix of evidence to convincingly make the case.
For more details, check out BetterEvaluation’s section on Understanding Causes.
This blog post is part of a series of eight posts covering the BetterEvaluation Framework and presenting the recordings of eight corresponding webinars hosted by the American Evaluation Association. The full series of posts is below.
1. Using the Rainbow Framework, Irene Guijt
2. Defining what needs to be evaluated, Simon Hearn
3. Framing the evaluation, Patricia Rogers
4. Choosing methods to describe activities, results and context, Irene Guijt
5. Understanding causes, Jane Davidson
6. Weighing the data for an overall evaluative judgment, Patricia Rogers
7. How can evaluation make a difference?, Simon Hearn
8. Manage an evaluation or evaluation system, Kerry Bruce