Week 11: BetterEvaluation at AfrEA 2014


BetterEvaluation was privileged to sponsor the Methodological Innovation stream at the African Evaluation Association (AfrEA) conference, held 3–7 March. What did we learn?

We set out with three questions to focus the panel sessions and workshops in this stream:

  • What are examples of effective innovation in evaluation?  

  • What kinds of innovation are needed to address unmet challenges?  

  • How do we support effective innovation - in terms of both inventing and adopting new methods and processes? 

Moshood Folorunsho presenting at AfrEA

Some innovations relate to the use of new technologies, such as social media and Web 2.0. Some address common challenges in evaluation, such as working with decision-makers or addressing wider issues of power in evaluation. Some have been developed in particular programmatic areas, such as education and training, and programmes that aim to promote sustainability and resilience. Other innovations have been developed for use in particular situations – such as conducting evaluation in conflict areas, or evaluating programmes that operate in situations of complexity where traditional approaches to evaluation are inappropriate or ineffective. Read about some of the sessions focusing on innovation:

Evaluating training and education

Chaired by: Cris Sette

Three presenters shared their evaluations with a diverse group of attendees from 15 countries. 

The three presentations had in common the use of mixed methods for evaluation that fosters learning and engagement.

Evaluation methodologies for aid in conflict

Chaired by: Patricia Rogers

This roundtable provided a terrific final session before the closing ceremonies, where participants with an interest in the topic shared their questions and experiences.  The issues and strategies shared related to five main areas of evaluation:

  1. Collecting data – strategies to collect data in dangerous areas, including the use of decentralised data collectors, mobile data collection and remote sensing.
  2. Reporting findings – strategies to check interpretations and implications.
  3. Measuring hard-to-measure intended outcomes, such as peace, conflict and social cohesion.
  4. Monitoring the interconnections between the program and the conflict – how each can and does influence the other – for all programs in conflict areas, not only peacebuilding programs.
  5. Mainstreaming security risk management.

We are currently finalising a summary of the conversation and will be making this available on the BetterEvaluation site as well as sharing it with participants at the session. 

Useful resources related to this topic:

The Learning Portal for Design, Monitoring & Evaluation (DM&E) for Peacebuilding (http://dmeforpeace.org/) - a community of practice for DM&E peacebuilding professionals, providing a transparent and collaborative space for sharing.

Diverse innovations showcase

Chaired by: Simon Hearn

This session was an opportunity to showcase diverse innovations: two innovations and one challenge in need of innovation were presented.

Innovation 1: Theatre for development, presented by Moshood Folorunsho from Nigeria.

Moshood described how theatre can be used to engage community members in discussing preliminary results from an evaluation, with actors replaying scenarios relating to a programme which were uncovered through prior data collection. By involving community members in role play, the early findings can be verified or corrected based on the participants' experiences of the programme, and new perspectives can be recorded which might otherwise have gone unheard.

 

Innovation 2: Most Significant Change, presented by Bernah Namutebi from Uganda.

MSC has been around for quite a while now, but there are still relatively few documented examples of its use. Bernah provided an overview of the approach and some examples of its use by Plan to document the effects of a land rights training programme in Uganda.

 

And a challenge, presented by Karen Odhiambo from Kenya.

Karen described how, in many programme evaluations she has seen, particularly in the water sector, evaluators use impact criteria derived from formalised programme objectives and fail to consider what other stakeholders and targeted beneficiaries might consider to be important criteria for impact. Tools and approaches that can help evaluators identify ‘non-hypothesised’ impacts are therefore required.

We think one way to address this challenge would be to use different options to identify different people's notions of 'what success looks like' - in particular, through the participatory development of rubrics.

 

Chasing the wrong dream, counting the wrong miles? Sustainability vs resilience in a complex world

Chaired by: Patricia Rogers

Lindie Botha presented a paper outlining her journey from traditional goal-focused evaluation to newer approaches that are suitable for adaptive programmes that are responsive to emerging needs and opportunities - such as developmental evaluation.

Related resources: 

Power and evaluation, and evaluating social and behavioural change communications

Chaired by: Greet Peersman

John Njovu from the University of Lusaka (Zambia) talked about governance evaluations, which are typically carried out by government in partnership with civil society. Some of the major challenges included: maintaining sufficient independence for evaluators; ensuring adherence to ethical standards; and increasing the capacity of civil society to be active evaluation partners. His presentation pointed to the need for trying out innovations in Zambia that have been tested elsewhere, including the use of social media to spark the interest of young people in local governance performance.

 

Opportunities for integrating a diverse range of monitoring and evaluation activities at all levels of the decentralised government structures in Kenya were discussed by Lucy Gaithi (government, Kenya). Accountability, equitable sharing of resources, and citizen participation were noted as hallmarks of improved social and economic development. Social intelligence reporting is being piloted for assessing health, water and education services through participatory capturing not only of weaknesses but also of agreed solutions and follow-up actions.

 

Marc Boulay (Johns Hopkins School of Public Health, USA) closed the session with an evaluation of health communications for increasing bednet ownership to prevent malaria in Tanzania. He demonstrated that the combined use of two common analytic approaches (i.e., treatment effects model; mediation analysis) greatly increased the utility of the evaluation results: not only whether the communications worked but also which targeted messages were instrumental in achieving positive results.

 

Away from the conference sessions, the BetterEvaluation team had an exhibition stand where they shared the website, introduced new members, gave away books and materials, and discussed possible collaborations. Highlights included:

  • Over 200 people passed by the stand to ask questions, share the work they are doing, and ask how they could engage with BetterEvaluation.
  • We gave away five books to five very thankful participants through a daily prize draw.
  • We gave away about 20 evaluation journals which will be reviewed by the recipients and the reviews added to the BE website.
  • We had nearly 70 people sign up to our newsletter.

The most discussed theme among attendees was capacity for evaluation: the need to strengthen evaluators' knowledge of processes and methods. The BetterEvaluation team will pursue some opportunities discussed during AfrEA, in relation to developing materials and activities with partners in Africa.

Were you at AfrEA? What was the most interesting evaluation innovation you heard about?