52 weeks of BetterEvaluation: Week 26: Understanding causes
If you are doing any kind of outcome or impact evaluation, you need to know something about whether the changes observed (or prevented) had anything to do with the program or policy being evaluated. After all, the word "outcome" implies something that "comes out of" the program – right?
This has been a hot topic for discussion around the world, and my theme topic for the year. I've recently run three workshops on this in Australia (for the AES), one in Canada (at CES), am about to run another here in New Zealand next month (for anzea) and am working on an ebook on the issue.
One thing I am finding in my travels – and in my work generally – is that most people seem to think the methods for saying anything about causation are fairly limited. It’s common to assume you need a large dataset of quantitative evidence and some highly sophisticated statistics to tease out causal links.
In a similar vein, a recent discussion on impact evaluation in The Guardian included the following statement, which assumed that control groups are the only route to causal attribution:
"Use control groups for evaluation: While measuring inputs into development programmes is important, unless we measure their impact we will not be able to learn to be good implementers. In my opinion, measuring attribution is critical, and we can't do that unless we use control groups to compare them to."
Contrary to popular opinion, there are actually a lot of methods. They range from high-powered statistical techniques to fairly low-tech, common-sense methods you can use even in small-scale community projects, even if ALL of your evidence is qualitative. Yes, really!
Did you know that the BetterEvaluation site has 29 (yes, twenty-nine!) different methods for inferring causation?
No matter what kind of evaluation you are working on, from a small-scale qualitative project through a large-scale mixed methods evaluation, you have methods!
Better still, the BetterEvaluation site has some fantastic guides for how to choose among those methods.
A few highlights:
- This excellent 2-page guide on Understanding Causes covers all 29 methods plus some approaches that blend several of them.
- Check out this guide, developed by Angela Mbroz and Marc Shotland from the Abdul Latif Jameel Poverty Action Lab, on when and how best to use RCTs (randomized controlled trials).
- Understand causes of outcomes and impacts coffee-break webinar: I walk you through highlights from the BetterEvaluation site, plus some common-sense examples of causal inference with non-experimental methods – see below for the video.
Finally, here are the three key messages from the Understand Causes webinar:
- All outcome/impact evaluation needs causal inference
- Going qualitative doesn't let you off the causal hook!
- The real “gold standard” is not any single method, but sound causal reasoning, backed by the right mix of evidence to convincingly make the case.
For more details, check out BetterEvaluation’s section on Understanding Causes.
This blog post is part of a series of eight posts covering the BetterEvaluation Rainbow Framework and presenting the recordings of eight corresponding webinars hosted by the American Evaluation Association. The full series of posts is below.