Evaluation is a powerful tool that can provide useful, evidence-based information to help inform and influence policy and practice.
In Week 19, back in May, we blogged about ways of framing the difference between research and evaluation. We received terrific feedback on this topic from the international BetterEvaluation community, and this update shares the results.
Recently, I had the good fortune to begin a collaboration with The MasterCard Foundation, which is strongly committed to what it calls ‘listening deeply and elevating voices’. This organisation is one of an increasing number in international development expressing more than a superficial interest in ‘client feedback’.
Tiina Pasanen is a Research Officer for the Research and Policy in Development (RAPID) Programme at the Overseas Development Institute (ODI). In this blog, Tiina shares her top three realist ‘take-aways’ from the 1st International Conference on Realist Approaches to Evaluation and reflects on when and how realist evaluation may be most useful.
Rhonda Schlangen and Jim Coe are independent consultants who work with social change organisations and funders to develop and evaluate advocacy and campaigns. In ‘The Value Iceberg’, a Discussion Paper published by BetterEvaluation, they look at how concepts of 'value' and 'results' are being applied to advocacy and campaigning and present some alternative strategies for assessing advocacy.
The 4th edition of Qualitative Research and Evaluation Methods by Michael Quinn Patton will be published in mid-November 2014. A new feature is a personal “rumination” in each chapter.
Scorecards are used in many different types of evaluation, and can have influence by informing decisions and by making performance visible. This week's guest blogger, Jennie Aylward, describes a scorecard used to report on an advocacy program aimed at members of the U.S. Congress.
[Editor's note, 02/12/14: a previous version of this blog was published without images; this has now been corrected.]