
Search results

  1. Week 21: Know your measures – picking outcomes to monitor policy change

    24th May, 2014

    Simon Hearn continues BetterEvaluation’s theme on the monitoring and evaluation of policy change by suggesting a set of measures to help those struggling to monitor the slippery area of policy influence and advocacy. For more on this theme, see Josephine Tsui’s blog on attribution and contribution in the M&E of advocacy and Julia Coffman’s on innovations in advocacy evaluation.

  2. 52 weeks of BetterEvaluation: Week 41: Recommended content from the BetterEvaluation community

    15th October, 2013

    You'll find hundreds of evaluation resources on the BetterEvaluation site. Some have come from recommendations by stewards. Some have come from our writeshop project or design clinics. And there are great resources that have been recommended by BetterEvaluation users. This week we are highlighting some of these user-recommended resources, how you can find the latest new material, and how you can tell us your recommendations.

  3. Evaluation of Humanitarian Action: A new page

    ALNAP is delighted to launch the ‘Evaluation of Humanitarian Action’ theme page in partnership with BetterEvaluation. We hope that this page will serve as a useful directory for evaluators and commissioners alike who are looking for guidance and help with navigating the choppy waters of Evaluation of Humanitarian Action (EHA). We welcome you to explore!

  4. 52 weeks of BetterEvaluation: Week 33: Monitoring policy influence part 2 – like measuring smoke?

    13th August, 2013

    In the second part of our mini-series on monitoring and evaluating policy influence, Arnaldo Pellini, Research Fellow at the Overseas Development Institute, explores a project supporting research centres in Australia in monitoring their impact on health policy in Southeast Asia and the Pacific. Arnaldo examines the main challenges and makes some recommendations for others looking at the M&E of policy influence.

  5. What can we learn from New Year's Resolutions to improve evaluation?

    In this first blog of 2019, Patricia Rogers, Greet Peersman and Alice Macfarlan examine how New Year's resolutions are similar to many evaluation practices.

  6. 52 weeks of BetterEvaluation: Week 44: How can monitoring data support impact evaluations?

    1st November, 2013

    Maren Duvendack and Tiina Pasanen explore the issue of using monitoring data in impact evaluations. Maren and Tiina work on the Methods Lab, a programme aiming to develop and test flexible and affordable approaches to impact evaluation.

  7. Week 24: New site features and upcoming developments

    18th June, 2014

    If you've visited BetterEvaluation since March this year, you've probably noticed the site looks a little different to how it used to. But have you noticed the other changes? Here are some useful features you may not have noticed, and new features to expect in the next few months.

  8. Evaluating C4D Resource Hub: Launch at the Social and Behaviour Change Summit

    11th April, 2018

    On April 16, over a thousand communication for development (C4D) researchers and practitioners will descend on Indonesia for the Social and Behaviour Change Communication Summit (SBCC). Among them will be members of the Evaluating C4D research team: Professor Jo Tacchi (Loughborough University), Dr Jessica Noske-Turner (University of Leicester), Dr Linje Manyozo (RMIT University), and Rafael Obregon and Ketan Chitnis (UNICEF C4D). Together we will launch the new Evaluating C4D Resource Hub.

  9. Week 18: Is there a 'right' approach to establishing causation in advocacy evaluation?

    6th May, 2014

    We’ve talked before on this blog about evaluating advocacy interventions. One of the hottest debates is how, and to what extent, it is possible to establish causation in advocacy programmes. Here, Josephine Tsui, a Research Officer at ODI and co-author of a new report, ‘Monitoring and evaluation of policy influence and advocacy’, explains how we can approach this thorny issue.

  10. Mixed methods in evaluation part 2: exploring the case of a mixed-method outcome evaluation

    31st July, 2013

    We continue our mini-series on mixed methods in evaluation with an interview with the three authors of the recently published paper ‘Mixing methods for rich and meaningful insight’.

    Willy Pradel and Gordon Prain from the International Potato Centre in Lima, Peru, and Donald Cole from the University of Toronto discuss the evaluation they recently conducted, which applied a mixed-methods approach to capture and understand a wide variety of changes to organic markets in the Central Andes region. This case demonstrates a good rationale for choosing a mixed-methods design, as well as an authentic implementation that effectively mixes quantitative and qualitative data to enhance the value of each.