Last week I was lucky enough to be involved in a series of workshops by Stephanie Evergreen on presenting data effectively. I've walked away with a wealth of knowledge on how to choose the most appropriate chart, which tool will create it, and how to improve the chart's design to more effectively communicate my message.
Conferences are a great way to connect and learn with the evaluation community. Earlier this year we shared a listing of conferences, which received useful feedback from our users. This week we're highlighting a new page which lists upcoming evaluation conferences from around the world. Check out the new page here. Are we missing any? Let us know in the comments below. We'll be updating this page as new conferences emerge.
At the recent 35th conference of the Canadian Evaluation Society in Ottawa I shared my favourite Canadian contributions to evaluation which could be useful more broadly for addressing global challenges in evaluation.
What is more important to you: a good education or a good healthcare system? Or perhaps employment or security is at the forefront of your mind at the moment. What about the environment or human rights? We all have different priorities in life and different sets of values with which we make judgements on things around us. Evaluations attempting to understand effects on people’s lives should at least try to understand the values of those people rather than imposing an external set of values. This week’s guest blog is from Laura Rodriguez Takeuchi, a researcher at the Overseas Development Institute. She introduces some practical ways that evaluators can begin to weigh people’s values as they relate to desired outcomes and the distribution of benefits.
At BetterEvaluation we get many suggestions from users of new resources to add to the site. This is an essential element of our aim to improve evaluation practice and theory by sharing information about options (methods or tools) and approaches. In this week's blog I will be highlighting some of the many resources that users have suggested to us over the last month. If you would like to contribute to BE, it is as simple as clicking here and letting us know what you would like to suggest.
If you visited BetterEvaluation before March this year, you've probably noticed the site looks a little different now. But have you noticed the other changes? Here are some useful features you may not have noticed, and new features to expect in the next few months.
Our blogger this week is Jesper Johnsøn, Senior Advisor to the U4 Anti-Corruption Resource Centre. Jesper highlights a frequent confusion among anti-corruption practitioners between the difficulty of measuring levels of corruption and the evaluability of anti-corruption initiatives, and urges us not to give up on rigorous evaluation.
Stephen Porter is Results and Evaluation Advisor for the Education and Partnerships team at DFID. In this blog he gives us a valuable insight into what a funder might be thinking as they review a development programme proposal and how he uses evaluation evidence to make funding decisions. In comparing the information that comes from (traditional) systematic reviews to that which comes from a realist synthesis, he urges us all to think hard about the ‘how’ of development interventions, particularly in livelihoods interventions.
Simon Hearn continues BetterEvaluation’s theme on the monitoring and evaluation of policy change by suggesting a set of measures to help those struggling to monitor the slippery area of policy influence and advocacy. For more on this theme, see Josephine Tsui’s blog on attribution and contribution in the M&E of advocacy and Julia Coffman’s on innovations in advocacy evaluation.