Moving organisations from “having to” to “wanting to” evaluate: Five recommendations for practice
Organisations can foster a culture where evaluation is embraced as a valuable tool for learning and decision-making rather than a compliance requirement.
Drawing on insights from the evolution of UNICEF's evaluation function in East Asia and the Pacific and ongoing work with the WHO’s evaluation function, Riccardo Polastro identifies five recommendations for strengthening evaluation systems. By applying these strategies, organisations can shift from “having to evaluate” to “wanting to evaluate” and ensure evaluation becomes part of their DNA.
This blog is written in a personal capacity and does not reflect the institutional position of WHO or UNICEF.
Organizations with strong learning cultures recognize the value of evaluation as a tool for generating knowledge, ensuring accountability, and guiding evidence-based decisions. When evaluation is strategically embedded, part of a shared vision, and adequately resourced, it shifts from being a compliance requirement to becoming a process that leadership actively champions.
Between 2015 and 2023, I witnessed this shift firsthand at UNICEF, particularly in the East Asia and Pacific and Latin America and Caribbean regions. Drawing on our experience with the evolution of the UNICEF East Asia and Pacific evaluation function from 2015 to 2020, I have identified five key recommendations that could help other organizations strengthen their evaluation systems and foster a learning culture. These recommendations are intended for colleagues leading and managing independent evaluations, whether in national and sub-national governments or in development partners, including bilateral and multilateral donors, United Nations agencies and non-governmental organisations.
1. Embed evaluation and build trust to support learning and decision making
Organisations such as UNICEF have demonstrated how embedding evaluation in organisational processes can foster a culture of learning and adaptation. By combining critical reflection with system strengthening, we were able to position evaluation as a strategic function within the organization, significantly expanding its reach and influence.
Operating in more than 190 countries, UNICEF is among the most decentralized organizations globally. As Karin Hulshof, UNICEF East Asia and the Pacific Regional Director, notes, “UNICEF’s heartbeat is at the country office level” (Polastro, 2022). This extensive field presence relies on the efforts of regional evaluators and staff, enabling UNICEF to adapt to local contexts and respond to shifting needs. In East Asia and the Pacific, evaluators and staff are spread across 14 country offices and one regional office, allowing them to respond to rapidly shifting needs in complex settings and deliver timely, actionable recommendations that support context-specific decision-making.
However, physical presence does not guarantee a strong learning culture. Factors such as staff capacity, incentives, leadership, and vision all influence how and why evaluations are conducted. We worked to build trust with stakeholders and create an environment where senior management and staff actively engaged with evidence. This approach integrated learning and adaptation into planning and implementation processes, reinforcing evaluation’s strategic value.
2. Co-create a shared vision for evaluation
The UNICEF Evaluation Policy and the 2018–2021 Regional Evaluation Strategy require country offices to conduct a minimum number of evaluations through their cooperation programmes with national governments, promoting consistent integration of evidence into decision-making processes (Polastro, 2022). The strategy further supported this by helping senior management prioritize evaluations that could produce high-quality evidence to inform policy, programmes, and advocacy efforts (UNICEF, 2017). It aimed to strengthen evaluation processes at every stage – from planning and budgeting to implementation, dissemination, and the application of findings.
As Bamberger (2008) and others have observed, involving stakeholders in the evaluation process is crucial for its use. By partnering with senior management, we set shared goals and milestones, grounded in thorough contextual analysis, which enabled collaborative priority-setting. This partnership anchored the evaluation function, clarified political commitment, and guided resource planning, prioritization, and allocation. The regional office provided guidance and technical assistance to country offices, sharing good practices across the region to lay the foundations for higher-quality evaluations.
3. Invest in resources to support independent evaluation
Strengthening evaluation independence required targeted changes in staffing, including the creation of four new multi-country specialist positions. Reporting jointly to their respective country office Representatives and the Regional Evaluation Advisor, these specialists enhanced country-office evaluation capacity, promoted efficiencies through cost-sharing arrangements, and facilitated the exchange of learning and good practices across the region.
Establishing expenditure targets, senior management commitments, and an evaluation pooled fund led to increased funding for evaluation, with the budget roughly doubling between 2018 and 2020. Despite some disparities among country offices, there was steady progress in resource commitments for evaluation, often reflected in bilateral donor funding contracts in the region. This increased investment highlights the value senior management places on the evaluation function, emphasizing the need for evaluations to deliver good value for money.
4. Strengthen capacity for credible and useful evaluations
To achieve value for money, improving the quality of evaluations was a key priority. High-quality evaluations are independent, credible and relevant to decision-making processes. In the East Asia and Pacific region, UNICEF introduced standardised procedures to ensure evaluations met consistent quality standards; these were piloted and adapted for the Latin America and Caribbean region in 2020 and for WHO in 2023. The procedures established clear responsibilities at each step, from preparing well-defined terms of reference to recruiting strong teams, allocating adequate resources, and employing robust, context-appropriate methodologies that drew on insights from diverse stakeholders.
The focus on quality extended to use: stakeholders were engaged throughout the evaluation process so that evaluations produced relevant findings and actionable recommendations grounded in strong evidence. By embedding these practices, UNICEF aimed to deliver reports that were not only credible and relevant but also responsive to the needs of their audiences.
To support this process, the regional office developed and implemented systems and processes for ensuring high standards. These included reviewing deliverables and ensuring evaluation teams focused on and were accountable for facilitating decision-making, rather than making decisions independently. A regularly updated roster of consultants and firms ensured access to experts with the right competency mix of technical skills and experience, contextual knowledge and local language proficiency.
Collaboration between headquarters, regional and country office teams further strengthened capacity by enabling the exchange of knowledge and good practices. Through webinars, videos, and international conferences, teams shared practical insights and successful approaches, ensuring lessons were widely disseminated across the region and beyond.
5. Align evaluations with user needs and intended impact
Effective evaluations are tailored to the needs of their users and designed with a clear understanding of their intended impact. Evaluation leaders need to act as facilitators, both partners and critical friends, supporting stakeholders to clarify the purpose of an evaluation and the specific decisions it is intended to inform. Key strategies to support this stakeholder engagement and promote evaluation use include tailoring messages and findings to stakeholder priorities, fostering constructive dialogue and establishing safe spaces where learning and reflection can take place.
In the East Asia and Pacific region, improvements in the credibility and relevance of evaluations helped elevate their value among senior leadership. This was reinforced by evaluation leaders acting as champions, further promoting the value of evidence in decision-making. Evidence from evaluations became central to the development of strategies, partnerships, and position papers at the regional level. At the same time, country offices used these insights to guide programme planning, implementation and reporting cycles. This evidence base also supported governments in scaling up successful pilot initiatives, while joint evaluations and support for country-led evaluations fed into national policies, implementation plans, and resource allocations.
The strengthened focus on end-user needs and the enhanced quality of the evaluation function allowed evaluation to play a crucial role in informing decisions in a region often grappling with competing priorities between economic growth and social equality. During challenges such as the COVID-19 pandemic, natural disasters and conflicts, evaluation processes demonstrated their utility by helping UNICEF adapt to rapidly shifting circumstances. By identifying effective solutions, evaluation has not only deepened UNICEF's organizational learning and strengthened its institutional credibility but has also emerged as a catalyst for change.
“Mutatis mutandis”: New directions
In 2023, I joined the World Health Organization (WHO) Evaluation Office as Chief Evaluation Officer to support the implementation of its evaluation policy and biennial work plan. WHO’s evaluation function was established in 2014 and has since increased its coverage, although there is a need for further improvement: the 2024 MOPAN assessment notes that “WHO needs to strengthen its evaluation function in line with its own and UN norms to further improve both accountability and corporate learning” (MOPAN, 2024). Efforts are now underway to further build credibility, strengthen independence, and enhance the use of findings. The strategies discussed above – building trust, co-creating a shared vision, investing in resources, strengthening capacity, and focusing on user needs – are just as relevant to this work as they were in UNICEF East Asia and Pacific.
In mid-2024, following the comparative study of the WHO evaluation function with selected entities of the United Nations system (World Health Organization, 2024), our Executive Board decided to update the 2018 evaluation policy and requested a roadmap. The comparative study provided recommendations for strengthening WHO’s evaluation function, including expanding the policy’s coverage to reflect WHO’s core areas of work, establishing a process to track progress annually, and introducing a dedicated budget line for evaluation activities. These steps are designed to ensure evaluations play a more prominent role in guiding organisational strategy and decisions – moving us from “having to evaluate” to “wanting to evaluate.”
By implementing these recommendations, WHO aims to foster a stronger culture of learning and accountability, ensuring evaluation becomes a valued and integral part of its work. Watch this space as we continue this journey toward embedding evaluation at the heart of organisational decision-making.
Sources
Bamberger, M. (2008). Enhancing the utilization of evaluations for evidence-based policy making. In M. Segone (Ed.), Bridging the gap: The role of monitoring and evaluation in evidence-based policy making (pp. 120–142). Geneva, Switzerland: UNICEF.
Multilateral Organisation Performance Assessment Network (MOPAN). (2024). MOPAN Assessment Report: World Health Organization (WHO). Paris: MOPAN. https://www.mopanonline.org/assessments/who2024/MOPAN_2024_WHO_Part1.pdf
Polastro, R. (2022). Anchoring UNICEF’s evaluation function in East Asia and the Pacific. Evaluation and Program Planning, 91, 102003. https://doi.org/10.1016/j.evalprogplan.2021.102003
World Health Organization. (2018). WHO Evaluation Policy. https://www.who.int/publications/m/item/evaluation-policy-and-frameworks
World Health Organization. (2024). Comparative study of WHO evaluation function with selected UN entities: Report. World Health Organization. https://cdn.who.int/media/docs/default-source/evaluation-office/comparative-study-of-who-evaluation-function-with-selected-un-entities-report.pdf?sfvrsn=4d6d89ff_8&download=true
UNICEF. (2017). Regional Evaluation Strategy 2018–2021: UNICEF East Asia and the Pacific. Bangkok: Regional Office Evaluation Section. https://www.unicef.org/eap/sites/unicef.org.eap/files/2019-11/Regional%20Evaluation%20Strategy%20and%20Action%20Plan_latest.pdf
United Nations Children’s Fund. (2018). Revised evaluation policy of UNICEF (E/ICEF/2018/14). United Nations Economic and Social Council. https://www.unicef.org/evaluation/media/1411/file/Revised%20Policy%202018%20(Interactive).pdf
United Nations Children’s Fund. (2023). Revised evaluation policy of UNICEF (E/ICEF/2023/27). United Nations Economic and Social Council. https://www.unicef.org/executiveboard/media/18416/file/2023-27-Revised-evaluation-policy-EN-ODS.pdf