52 weeks of BetterEvaluation: Week 26: Understanding causes

By E. Jane Davidson

If you are doing any kind of outcome or impact evaluation, you need to know something about whether the changes observed (or prevented) had anything to do with the program or policy being evaluated. After all, the word "outcome" implies something that "comes out of" the program – right?

This has been a hot topic for discussion around the world, and my theme topic for the year. I've recently run three workshops on this in Australia (for the AES) and one in Canada (at CES), am about to run another here in New Zealand next month (for anzea), and am working on an ebook on the issue.

One thing I am finding in my travels – and in my work generally – is that most people seem to think the methods for saying anything about causation are fairly limited. It’s common to assume you need a large dataset of quantitative evidence and some highly sophisticated statistics to tease out causal links.

In a similar vein, a recent discussion of impact evaluation in The Guardian included the following statement, which assumed that control groups are the only way to get to causal attribution:

"Use control groups for evaluation: While measuring inputs into development programmes is important, unless we measure their impact we will not be able to learn to be good implementers. In my opinion, measuring attribution is critical, and we can't do that unless we use control groups to compare them to."

Rose Mary Garcia, director of the monitoring and evaluation practice, Crown Agents, Washington, DC, USA

Contrary to popular opinion, there are actually a lot of methods. They range from high-powered statistical designs to some fairly low-tech, common-sense approaches you can use even in small-scale community projects, even if ALL of your evidence is qualitative. Yes, really!

Did you know that the BetterEvaluation site has 29 (yes, twenty-nine!) different methods for inferring causation?

No matter what kind of evaluation you are working on, from a small-scale qualitative project through to a large-scale mixed-methods evaluation, you have methods!

Better still, the BetterEvaluation site has some fantastic guides for how to choose among those methods.

Finally, here are the three key messages from the Understand Causes webinar:

  1. All outcome/impact evaluation needs causal inference
  2. Going qualitative doesn't let you off the causal hook!
  3. The real “gold standard” is not any single method, but sound causal reasoning, backed by the right mix of evidence to convincingly make the case.

For more details, check out BetterEvaluation’s section on Understanding Causes.
