Today we start a series on "visionary evaluation" - the theme of the 2014 American Evaluation Association conference in October.
Shared knowledge and experience from the global evaluation community.
Being able to compare alternatives is essential when designing an evaluation. This post looks at some alternatives to transcribing interviews.
Case studies are often used in evaluations – but not always in ways that realise their full potential.
Last week I was lucky enough to be involved in a series of workshops by Stephanie Evergreen on presenting data effectively.
This week’s blog is from Jonny Morell, editor of Evaluation and Program Planning and author of Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. He blogs at http://evaluationuncertainty.com/.
Unfortunately, I believe so. Last year I met a group of Brazilian evaluators at a conference, and learned from them about the growing demand for good evaluation studies in Brazil, but also about the need for more capacity building initiatives in this area, besides t
One of the challenges of working in evaluation is that important terms (like "evaluation", "impact", "indicators", "monitoring" and so on) are defined and used in very different ways by different people.
What is more important to you: a good education or a good healthcare system? Or perhaps employment or security is at the forefront of your mind at the moment. What about the environment or human rights?
Our blogger this week is Jesper Johnsøn, Senior Advisor to the U4 Anti-Corruption Resource Centre.
Stephen Porter is Results and Evaluation Advisor for the Education and Partnerships team at DFID.
Simon Hearn continues BetterEvaluation’s theme on the monitoring and evaluation of policy change by suggesting a set of measures to help those struggling to monitor the slippery area of policy influence and advocacy.
Unfortunately, I believe so.
One of the challenges of working in evaluation is that important terms (like ‘evaluation’, ‘impact’, ‘indicators’, ‘monitoring’ and so on) are defined and used in very different ways by different people.
We’ve talked before on this blog about evaluating advocacy interventions. One of the hottest debates is how and to what extent it is possible to establish causation in advocacy programmes.
In February, BetterEvaluation hosted a webinar on working with children in evaluation.
Continuing our season of blogs on presenting evaluation findings in ways that will get them read (and hopefully used), Joitske Hulsebosch, an independent consultant, contributes her ideas on how to present your findings in the form of an infographic.