Learning from gLOCAL conversations

Patricia Rogers
gLOCAL Evaluation Week 2023

The week before last, we were treated to over 300 diverse, live events on evaluation during the annual gLOCAL Evaluation Week – a week of locally hosted, globally accessible webinars, presentations and hybrid sessions convened by the Global Evaluation Initiative (GEI).

Once again, the scale and range of events was dazzling and inspiring. Sessions were held in different languages (including English, Spanish, French, Portuguese and Mongolian) and with participants from many different countries and organizations, with different areas of expertise, experience and perspective.

Improving evaluations, systems and capacity strengthening

Many sessions focused on ways of improving the quality of individual evaluations through the types of processes used or the forms of evidence drawn on. These included sessions on contribution analysis, big data, remote sensing technologies such as satellite imagery, Indigenous evaluation approaches, and ethical principles.

Other sessions focused on improving whole systems of monitoring and evaluation at the national, sub-national or organizational level – not only in terms of the validity, relevance, timeliness and accessibility of evidence but also what is needed to support its use in decision-making and action. These included sessions on strengthening evaluation capacity through transformational partnerships, the development and use of cross-organizational monitoring systems, and engaging civil society in developing national evaluation policy.

These sessions can be useful for individual professional development and shared learning within and across organizations.

With so much on offer across different time zones, it is impossible to see everything of interest. Where available, recordings and other materials will be uploaded to individual event pages on the gLOCAL site. You can also check social media for #gLOCAL2023 links and keep an eye on the gLOCAL website's page for any event you're interested in.

Some of the sessions you can catch up on right now include:

The future of monitoring & evaluation: Context, culture and collaboration

The opening event, which addressed the theme for this year's gLOCAL – "The Future of M&E: Context, Culture and Collaboration" – showed the richness of these sessions. Moderated by Jozef Vaessen (Methods Advisor, GEI), five panellists each made informed and passionate presentations about how to move M&E from being technical systems of compliance to actually supporting organizations and communities to meet human needs in a changing and uncertain world.

  • Dugan Fraser, Program Manager of the GEI, spoke about the need to move from seeing M&E systems as systems of pipes for data to something more like food gardens – planned and tended to meet ongoing and emergent needs, and relying on effective relationships.
  • Candice Merkel, Director of the CLEAR Centre for Anglophone Africa, shared insights into five factors that have been observed to lead to the failure of national M&E systems, including a failure to design them for real use in the first place.
  • Marie Gaarder, Executive Director of 3IE, the International Initiative for Impact Evaluation, talked about the need to improve the accessibility and quality of evidence being used to inform policy and the need for evidence literacy among users.
  • Sanjeev Sridharan, Professor of Health Policy Evaluation at the University of Hawaii, talked about the need to go beyond project-centric M&E to develop M&E that will support 'systems of care' that involve organizations that need to work together to address large, complex needs such as reducing homelessness.
  • Estelle Raimondo, Senior Evaluation Officer at the Independent Evaluation Group of the World Bank, presented the processes used in a large country-level evaluation where a combination of big data, GIS data, and local data gathering was used to overcome travel constraints during COVID.

While each of the presentations had useful ideas and examples, what I found particularly thought-provoking were the connections between the different presentations.

For example, Dugan Fraser talked about the importance of relationships and trust in making M&E systems actually work (rather than just tokenistic compliance) – and Candice Merkel warned about over-dependence on a few champion individuals or departments, which can make the system vulnerable when political or other change happens. This suggests a need for wide and diverse relationships and engagement, not just a few.

Marie Gaarder talked about the importance of including cost data in evaluations and in evidence syntheses – and Estelle Raimondo showed the importance of understanding investments by other organizations that contribute to outcomes and change the context. It seems, then, that evaluations and evidence syntheses cannot count only the dollars contributed by the major funding organization, but must include all the funding that contributed.

Sanjeev Sridharan's emphasis on the need for evaluation to be serious about context – including important variations across participants and across implementation sites – also raises challenges for drawing valid conclusions about value for money. For example, a training program that only serves readily accessible participants with a high level of existing skills might have a lower cost per successful completion. A comparable program that serves rural and remote populations, or participants without foundational skills, might have higher costs per successful work placement – but not necessarily worse value for money. Evaluations and comparative cost analyses will therefore need to take context seriously to avoid discouraging investment in interventions that are effective for hard-to-reach populations or for people facing higher levels of disadvantage.

Check out this session, and ideally, talk about it with your colleagues.


Panel Discussion On The Global Directory Of Academic Training Programs In Evaluation

This GEI event presented the first-ever global directory of academic training programs in evaluation and discussed challenges and perspectives on evaluation education. The event first introduced the directory, including its scope and the methodological approach used to develop it, and gave an overview of the main characteristics and patterns in academic training in evaluation across the globe.

Following this, four evaluation scholars and representatives of academic training programs from different regions gave concise lightning talks highlighting their experiences in evaluation education. The panel then discussed several forward-looking strategic questions:

  • How has academic training in evaluation been evolving?
  • As we advance to optimally prepare students for a career in the field of evaluation, what are the main issues and challenges for academic training programs?
  • Using the directory as a platform, (how) can a stronger network of academic training programs in evaluation help strengthen academic training in evaluation?

Speakers included John LaVelle (Assistant Professor, University of Minnesota-Twin Cities), Nathalie Holvoet (Professor, Institute of Development Policy and Management, University of Antwerp), Lauren Wildschut (Director of Evaluation Programs, Stellenbosch University), Marcia Joppert (Research and Operations Associate, The Evaluators' Institute, Claremont Graduate University), Christina Cuonz (Director, Centre for Continuing Education, University of Bern (ZUW)), and Jos Vaessen (Methods Advisor, GEI) as moderator.


Human Rights And Gender Equality In Evaluation: A New Guidance For A Rights-Based Lens On The OECD Evaluation Criteria

This event combined the launch of new guidance on using the OECD evaluation criteria with a human rights lens and an international panel discussion of the role of evaluators in addressing issues of human rights and gender equality in today's context. Speakers provided practical information on how to adapt evaluation questions with a human rights and gender equality lens and touched on the broader debates that arise, including questions about evaluators bringing their own biases to the table and how to create space for exploration of human rights effects when evaluating interventions that do not explicitly target rights.

Speakers included Martin Bruder (Head of Evaluation Department III, DEVal), Mayanka Vij (Policy Analyst, OECD DAC Network on Development Evaluation Secretariat), Shravanti Reddy (Senior Evaluation Specialist), Daniel Jacobo Orea (Technical Advisor at the Spanish Ministry of Foreign Affairs), Joseph Sewedo Akoro (Senior Evaluator), and Megan Kennedy-Chouane (OECD/DCD Head of Evaluation, Secretariat of the OECD/DAC Network on Development Evaluation (EvalNet)) as moderator.


Towards embedding African methodologies in evaluations: Context Matters

This session explored the question: "Is there hope for African countries to have evaluations with an 'African face' that are contextually relevant?" The discussion focused on the following:

  • Where have African methods been used in research and/or evaluation studies?
  • What challenges can be envisaged in the pursuit of decolonizing evaluation, Made in Africa approaches, or the use of Indigenous Knowledge Systems (IKS)?
  • How can African-rooted methods be integrated into evaluation studies and/or M&E systems?
  • Raising awareness of the African Evaluation Blog website hosted by SAMEA.

Speakers included Dr Umali Saidi (Postgraduate Studies Manager (Research & Innovation Division), Senior Lecturer and Editor-in-Chief of The Dyke Journal (multidisciplinary academic journal) at Midlands State University, Zimbabwe), Ms Sandra Kokera (PhD Candidate in Programme Evaluation at the University of Cape Town, focusing on the Made in Africa Evaluation approach, and currently employed at the Zimbabwe Technical Assistance, Training and Education Centre for Health), Mr John Njovu (Zambian economist, honorary member of the Zambia Monitoring and Evaluation Association, founding member of the network of Indigenous Evaluators (EvalIndigenous), and one of the leaders of its Africa Chapter), Mr Mokgophana Ramasobana (Independent Evaluator and former Chairperson of the South African Monitoring and Evaluation Association), Mr Shadrick Mbata (Chief Evaluation Analyst at the Department of Agriculture, Land Reform and Rural Development (DALRRD)), and Dr Taku Chirau (Deputy Director for CLEAR-AA).


Culturally responsive evaluation: How do different regions approach it?

In this session, young and emerging evaluators (YEE) from across the world presented regional perspectives and innovations on culturally responsive evaluation (CRE) and helped provoke the audience's thinking about their own evaluation practice. The discussion focused on the value of local evaluators' firsthand knowledge of the complex and nuanced cultural and social dynamics at the local level, and the insights they can provide into the challenges and opportunities of incorporating these dynamics into evaluation practice. Speakers included Gabriela Renteria Flores (Chair, EvalYouth – moderator), Dugan Fraser (Program Manager, Global Evaluation Initiative), Claudia Olavarria (Consultant, Global Evaluation Initiative, former Co-Chair of EvalYouth LAC, and gender expert), Dr Mercy Fanadzo (Programme Administrative Officer, CLEAR Anglophone Africa), Stephen Porter (Senior Evaluation Specialist, IEG), and Rai Sengupta (Senior Monitoring and Evaluation Consultant, Ecorys London).

This event was part of the IEG@50 celebrations and launched a new competition for young and emerging evaluators to explore innovative and practical approaches to CRE.


The science behind data collection: How to choose the best tools and approach to collect data considering the culture, context, and existing partnerships

This webinar by Khulisa Management Services outlined how to design data collection efforts through five considerations: the budget, respondents and accessibility, the kind of data being collected, challenges to data collection and how the data will be used. 


How and why do Communities of Practice for Evaluators deliver? The case of EvalForward

Drawing on the recent Independent Review of the EvalForward Community of Practice, this session highlighted how peer-to-peer capacity development strengthens culture and collaboration within the evaluation community, the novel cross-agency dedicated facilitation model of EvalForward, and the role of institutional sponsorship of Communities of Practice in delivering global public goods where there would otherwise be gaps. The session included a presentation of the EvalForward Review by Carl Jackson (knowledge management and evaluation consultant, UK); interventions by Masahiro Igarashi (EvalPartners co-chair and former FAO evaluation director) and EvalForward Steering Group representatives Svetlana Negroustoueva (Evaluation Lead, CGIAR) and Aurelie Larmoyer (Senior Evaluator, WFP); and testimonials and change stories by members of the community: Nayeli Almanza (Mexico), Gordon Wanzare (Kenya) and Anna Maria Augustyn (Poland).


Footprint evaluation sessions

In other sessions, I was delighted to see my colleagues from the Footprint Evaluation Initiative, Jane Davidson and Andy Rowe, in two sessions on embedding environmental sustainability in all evaluations.

When The World Is In Crisis, Evaluation Can Provide Answers

The first Footprint session, "When The World Is In Crisis, Evaluation Can Provide Answers. New Approaches And Criteria To Promoting Climate And Ecosystems Health And Meaningful Equity", conducted in association with colleagues from SAMEA and Twende Mbele, explored different approaches to effectively address ecosystems health and just transition in evaluations, and to design evaluations to support decision-making that promotes environmental sustainability and regeneration. 

Todo sobre Footprint Evaluation, la Evaluación de la Huella Ecológica

The second session, "Todo sobre Footprint Evaluation, la Evaluación de la Huella Ecológica / All About Footprint Evaluation, the Ecological Footprint Assessment", conducted in association with colleagues from DEVal and with simultaneous Spanish/English interpretation, presented key ideas from the Footprint Evaluation Initiative on how environmental sustainability can be embedded in how evaluations are commissioned, designed and conducted. It also launched new Spanish-language versions of key footprint evaluation materials, which are now available through the RELAC website and BetterEvaluation. These include the Footprint Evaluation Key Evaluation Questions and the Footprint Evaluation thematic page.

Guidance for practice

There were so many examples of evaluations and evaluation systems that provide guidance for practice and food for thought.

And much, much more.


So do check out the gLOCAL event database and search for #gLOCAL2023 on Twitter and LinkedIn to find relevant events. And plan to clear your calendar for a week next year so you can participate in more events in real time.

And if you hosted an event at this year's gLOCAL, please add the recording to your event page on the gLOCAL website and share it with us!
