Impact Evaluation: Best Practices Aren’t (MQP rumination #4)
Designating something a “best practice” is a marketing ploy, not a scientific conclusion. Calling something “best” is a political and ideological assertion dressed up in research-sounding terminology. Yet the habit of designating one’s preferred model a “best practice” has swept like wildfire through all sectors of society.
Do an Internet search and prepare to be astounded by how much best-ness there is in the world. Governments and international agencies publish best practices for education, health, highways, welfare reform, and on and on it goes. The business of disseminating “best practices” is thriving worldwide. Philanthropic foundations are eager to discover, fund, and disseminate best practices. Corporations advertise that they follow best practices. Management consultants teach best practices. Identifying, following, and promoting “best practices” has led to the creation of Best Practices databases. “Best practices” are not just effective, better, promising, evidence-based, or smart, but the best.
Why There Can’t Be a Best Practice
The connotation embedded in the phrase best practice is that there is a single best way to do something. That means context doesn’t matter. But context does matter.
Moreover, “best” is inevitably a matter of perspective and criteria. Like beauty, what is best resides in the eye and mind of the beholder, and in the criteria, comparisons, and evidence that the beholder finds credible. In a world of vast diversity, many paths exist for reaching a desired destination; some may be more difficult and some more costly, but such differences only underscore the importance of asking, “‘Best’ from whose perspective, using what criteria?”
From a systems point of view, a major problem with many “best practices” is that they are offered without attention to context. Suppose automobile engineers identified the best fuel-injection system, the best transmission, the best engine-cooling system, the best suspension system, and so forth. Suppose further, as is likely, that these best subsystems come from different car models (Lexus, Infiniti, Audi, Mercedes, etc.). Once all the “best” subsystems from all the best cars were assembled, they would not constitute a working car. Each “best” part was designed to work with the other parts of one specific model; the parts are not interchangeable. Yet a great deal of best-practices rhetoric presumes exactly this kind of context-free adoption.
Best Practices and Evidence-Based Medicine
Gary Klein, a psychologist who has studied evidence-based medicine (EBM), warns against the notion that “best practices” can serve as the foundation of EBM (Klein, 2014):
The concept behind EBM is certainly admirable: a set of best practices validated by rigorous experiments. EBM seeks to provide healthcare practitioners with treatments they can trust, treatments that have been evaluated by randomized controlled trials, preferably blinded. EBM seeks to transform medicine into a scientific discipline rather than an art form. What’s not to like? We don’t want to return to the days of quack fads and unverified anecdotes.
But we should only trust EBM if the science behind best practices is infallible and comprehensive, and that’s certainly not the case. Medical science is not infallible. Practitioners shouldn’t believe a published study just because it meets the criteria of randomized controlled trial design. Too many of these studies cannot be replicated….
And medical science is not comprehensive. Best practices often take the form of simple rules to follow, but practitioners work in complex situations. EBM relies on controlled studies that vary one thing at a time, rarely more than two or three. Many patients suffer from multiple medical problems, such as Type 2 diabetes compounded with asthma. The protocol that works for one problem may be inappropriate for the others. EBM formulates best practices for general populations, but practitioners treat individuals and need to take individual differences into account. A treatment that is generally ineffective might still be useful for a subset of patients. ...
Worse, reliance on EBM can impede scientific progress. If hospitals and insurance companies mandate EBM, backed up by the threat of lawsuits if adverse outcomes are accompanied by any departure from best practices, physicians will become reluctant to try alternative treatment strategies.
Embrace Humility and Acknowledge Uncertainty
Commenting on the proliferation of supposed “best practices” in evaluation reports, Harvard-based evaluation research pioneer Carol Weiss (2002) wisely advised that evaluators exercise restraint and demonstrate “a little more humility” about what we conclude and report to our sponsors and clients:
What works in a poverty neighborhood in Chicago may not stand a ghost of a chance in Appalachia. We need to understand the conditions under which programs succeed and the interior components that actually constitute the program in operation, as well as the criteria of effectiveness applied. We need to look across . . . programs in different places under different conditions and with different features, in an attempt to tease out the factors that matter. . . . But even then, I have some skepticism about lessons learned, particularly in the presumptuous “best practices” mode. With the most elegant tools at our disposal, can we really confidently identify the elements that distinguished those program realizations that had good results from those that did not? (Weiss, 2002, pp. 229–230)
What to Do? Five Ideas for Your Consideration
- Avoid either asking or entertaining the question “Which is best?”
As is so often the case, the problem begins with the wrong question. Ask a more nuanced question to guide your inquiry, one whose very framing undermines the notion of a single best. Ask what works, for whom, in what ways, with what results, under what conditions, in what contexts, and over what period of time.
- Eschew the label “best practice.”
Don’t use it even casually, much less professionally.
- When you hear others use the term, inquire into the supporting evidence.
It will usually turn out to be flimsy: opinion masquerading as research. Even where findings are substantially credible, they will still not rise to the standard of certainty and universality required by the designation “best.” I then offer that the only best practice in which I have complete confidence is avoiding the label “best practice.”
- When there is credible evidence of effectiveness, use less hyperbolic terms.
Prefer terms like better practices, effective practices, or promising practices, which tend less toward overgeneralization. (Note, in this regard, the designation of this website as BetterEvaluation, not BestEvaluation.)
- Instead of supporting the search for best-ness, foster dialogue about and deliberation on multiple interpretations and perspectives.
Qualitative data are especially useful in portraying and contextualizing diversity.
Why This Matters
The allure and seduction of best-practice thinking poisons genuine dialogue about both what we know and the limitations of what we know. What is at stake is the extent to which researchers and evaluators model the dialogic processes that support and nurture ongoing scientific discovery and the generation of new knowledge. As scholars, we contribute not just through the findings we generate but, more crucially and with longer-lasting effect, through the way we facilitate engagement with those findings, fostering mutual respect among those with different perspectives and interpretations. That modeling and nurturing of deliberative, inclusive, and, yes, humble dialogue may make a greater contribution to societal welfare than the search for generalizable “best-practice” findings, conclusions that risk becoming the latest rigid orthodoxies even as they become outdated. At least that is the history of science so far.
As part of our January focus on impact evaluation, Michael Quinn Patton shares a rumination on best practices from the new 4th edition of Qualitative Research and Evaluation Methods. In these ruminations, Patton reflects on issues that, he explains, have “persistently engaged, sometimes annoyed, occasionally haunted, and often amused me over more than 40 years of qualitative research and evaluation practice.” Because these issues have global relevance, Patton has agreed to post his ruminations on BetterEvaluation to stimulate further reflection among evaluators generally.
References
Klein, G. (2014). Evidence-based medicine. http://edge.org/responses/what-scientific-idea-is-ready-for-retirement
Weiss, C. H. (2002). “Lessons learned”: A comment. American Journal of Evaluation, 23(2), 229–230.