How do we know if a program made a difference? A guide to statistical methods for program impact evaluation

This guide, written by Peter M. Lance, David K. Guilkey, Aiko Hattori and Gustavo Angeles for MEASURE Evaluation, outlines core statistical and econometric methods for program impact evaluation.

Aimed at those seeking to evaluate the impact of programs on human welfare, the guide discusses the complexities of evaluation estimators in a manner accessible to a broad audience.

Excerpt

"A dizzying array of programs seek to influence health, wealth, education, employment and other channels of human welfare. An accurate understanding of what these programs actually achieve would allow society to focus scarce resources on those programs that most efficiently and effectively improve welfare. The aim of program impact evaluation is to learn whether and to what degree a program altered outcomes from what otherwise might have prevailed.

Measuring what might "otherwise have prevailed" is a challenging task. The phrase suggests an appeal to history's unrevealed alternatives. In the abstract, one might consider comparing the outcome of interest for an individual in circumstances under which they participate in a program or under which they do not. Specifically, we might seek to measure differences in outcomes for an individual as their program participation, and only their program participation, varies. The use of the word only is important: if the only thing that varies is their participation in the program, then that must be the driving force behind differences in outcomes that we might observe when they participate and when they do not do so." (Lance, Guilkey, Hattori and Angeles, 2014)
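The excerpt's point, that we can never observe the same individual both with and without the program, is the core of the potential-outcomes framework the guide builds on. The following minimal simulation sketches the idea under assumed, illustrative numbers (a constant program effect of 5 and random 50/50 assignment, neither taken from the guide): each person has two potential outcomes but reveals only one, yet random participation lets a simple difference in group means recover the average effect.

```python
import random

random.seed(0)

# Each individual has two potential outcomes: y0 (without the program)
# and y1 (with it). The individual effect is y1 - y0, but only one of
# the two is ever observed -- the challenge the excerpt describes.
n = 100_000
people = []
for _ in range(n):
    y0 = random.gauss(50, 10)  # outcome if they do not participate
    y1 = y0 + 5                # assumed: program raises the outcome by 5
    people.append((y0, y1))

true_effect = sum(y1 - y0 for y0, y1 in people) / n

# With random participation, comparing observed means of participants
# and non-participants estimates the true average effect, because only
# participation varies systematically between the two groups.
treated, control = [], []
for y0, y1 in people:
    if random.random() < 0.5:
        treated.append(y1)     # participant: we observe y1 only
    else:
        control.append(y0)     # non-participant: we observe y0 only

estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"true effect: {true_effect:.2f}, estimated: {estimate:.2f}")
```

If instead people with better baseline outcomes were more likely to participate, the same comparison of means would confound the program's effect with that pre-existing difference, which is why the guide devotes later chapters to estimators that handle non-random selection.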

Contents

  • The Program Impact Evaluation Challenge
    • Basic Concepts
    • The Estimation Challenge: Basic Ideas
    • The Estimation Challenge: Some Common Estimators
    • Other Considerations
  • Randomization
    • Randomization: The Basics
    • Experimental Evaluations: Some Specific Examples
    • The Case for Randomization
    • Randomization and Its Discontents
    • Estimation Methods
    • Some Closing Thoughts
  • Selection on Observables
    • Regression
    • Matching
  • Within Estimators
    • Classic Models
    • The Difference-in-Differences Model
  • Instrumental Variables
    • Instrumental Variables Basics
    • Local Average Treatment Effects
    • Regression Discontinuity Designs
    • Some Closing Thoughts

Sources

Lance, P., D. Guilkey, A. Hattori and G. Angeles. (2014). How do we know if a program made a difference? A guide to statistical methods for program impact evaluation. Chapel Hill, North Carolina: MEASURE Evaluation. Retrieved from: https://www.measureevaluation.org/resources/publications/ms-14-87-en.html
