Monitoring

Monitoring is a process to periodically collect, analyse and use information to actively manage performance, maximise positive impacts and minimise the risk of adverse impacts.

It is an important part of effective management because it can provide early and ongoing information to help shape implementation in advance of evaluations.

What is monitoring?

Monitoring processes can monitor change and progress in different aspects: needs, the operating context, activities, and the results of activities, projects, programmes and policies.

Monitoring is best thought of as part of an integrated Monitoring and Evaluation (M&E) system that brings together various activities relating to gathering and using data.

How does monitoring differ from evaluation?

Monitoring is typically:

Periodic and ongoing during implementation

Monitoring brings evaluative thinking into the periodic collection, analysis and use of information during implementation, as distinct from single discrete evaluation events or even several linked discrete evaluation events (such as a mid-term and final evaluation). Newer forms of evaluation, such as developmental evaluation and real-time evaluation, have blurred this distinction, as they involve ongoing collection, interpretation and use of evaluative data.

Integrated with other management functions and monitoring systems

Monitoring systems often need to be integrated into the ongoing internal management functions of organisations. These include performance management, risk management, financial management, fundraising, and accountability reporting to donors or program participants. This integration can make monitoring a more complicated management endeavour than evaluation.

Operating at different levels

Monitoring systems often need to operate at levels beyond an individual project, such as the program, organisation, sector or country level. Monitoring systems also sometimes need to work across these boundaries, for example through joint monitoring by two or more organisations or by supporting partner organisations' systems, such as government systems. Working across systems, levels and boundaries can make monitoring more complicated due to different understandings, cultures and time-frames.

Inclusive of systems of ongoing reflection

Another distinction between monitoring and discrete evaluations is that monitoring uses information to manage performance actively and therefore includes deliberate and ongoing reflection to inform implementation decisions.

Why monitor?

M&E systems need to consider and balance the information needs of different users. Therefore, it is essential to be clear on how various primary intended users will use monitoring information.

Primary intended users of monitoring information can include project participants, government department staff, project staff, senior management in implementing organisations and government departments, fundraising staff, donor organisations, politicians, and members of the public (as individuals or as part of community-based groups).

Some of the different uses of monitoring information include:

  • Program management to inform changes in activities or resourcing – Using monitoring to inform program management can involve:
    • Informing changes to the pace, nature or substance of activities to better contribute to positive outcomes and minimise negative outcomes
    • Promoting equity and non-discrimination by disaggregating information and measuring progress in terms of different groups' access, or in terms of results such as changes in laws, behaviour, attitudes or norms (see the disaggregation sketch after this list)
    • Assessing pilots to guide decisions to accelerate, scale or close activities
    • Identifying gaps, problems and bottlenecks to allow for timely corrective action
  • Compliance and verification to track policy implementation, activity implementation or expenditure against budgets
  • Accountability and transparency to report to various audiences and stakeholders, such as program participants, partners, governments, management and governance bodies, donors and the public
  • Ownership and legitimacy to build local support through participatory monitoring processes
  • Learning, for example, to gain a deeper understanding of the pathways through which desired change comes about – Some organisations are keen to emphasise this use of monitoring information and call their systems Monitoring, Evaluation and Learning (MEL) systems
  • Risk management to assess risk and inform risk mitigation measures – This includes monitoring the risk of not achieving outcomes, which is a part of program management, and the risk of negative effects for participants or the environment
  • Data for future evaluation to collect, quality assure, and store information for future evaluations
  • Public relations and demonstrating value to present a program to the public and donors
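
To make the disaggregation point above concrete, here is a minimal sketch in Python, assuming monitoring records are held in a simple tabular form; the field names, groups and the `completed_training` indicator are illustrative assumptions rather than a prescribed format.

```python
from collections import Counter

# Illustrative monitoring records; field names and values are assumed for the example.
participants = [
    {"id": 1, "gender": "female", "disability": True,  "completed_training": True},
    {"id": 2, "gender": "male",   "disability": False, "completed_training": True},
    {"id": 3, "gender": "female", "disability": False, "completed_training": False},
    {"id": 4, "gender": "male",   "disability": True,  "completed_training": True},
]

def disaggregate(records, group_field):
    """Count reach and completion for each value of a grouping field."""
    reached = Counter(r[group_field] for r in records)
    completed = Counter(r[group_field] for r in records if r["completed_training"])
    return {group: (completed[group], reached[group]) for group in reached}

# Report access and results separately for each group to surface inequities.
for field in ("gender", "disability"):
    for group, (done, total) in disaggregate(participants, field).items():
        print(f"{field}={group}: {done}/{total} completed training")
```

The same grouping logic can be applied to any equity-relevant characteristic the monitoring system records, such as age group, location or language.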

What to monitor?

Monitoring of activities, outputs, and outcomes can be conducted at different levels and across multiple entities. For example, monitoring could focus on a single project or on a more extensive program or sector that includes numerous projects delivered by the same organisation or multiple organisations.

Similarly, conditions, progress towards goals, and contextual factors can be monitored within or across local areas, regions, or countries.

The following shows some examples of what might be monitored at different levels.

Examples of what might be monitored at different levels

Single intervention or project

  • Activities and outputs
  • Quality of activities and outputs
  • Inputs
  • Expenditure against budget
  • Gender equality and social inclusion
  • Partnership quality
  • Risks and policy compliance
  • Cost efficiency
  • Staffing levels
  • Operating context
  • Intermediate outcomes
  • Change pathways
  • Contribution to higher-order outcomes and impact

Across organisations

  • Contributions to jointly agreed outcomes, such as sector level outcomes and strategies

Country level

  • Progress towards achievement of the Sustainable Development Goals (SDGs)
  • Contribution of interventions towards those SDGs
  • Progress towards human rights goals

Subnational or regional level

  • Contribution to sub-national or regional strategies

Across countries

  • Trans-border issues, such as refugees and migration, drug trafficking, pandemic disease, environmental degradation, organised crime, cybersecurity, and water supply

Global level

  • Global issues, such as carbon emissions, climate change, immunisation rates
  • Progress towards achievement of the SDGs

Organisational level

  • Progress towards program objectives, which might be at the country, multi-country, regional or global level
  • Progress towards organisational objectives and strategies
  • Progress towards policy implementation
  • Project and program level information for donor reporting
  • Achievements of the intervention and human interest stories for fundraising and annual reporting

The BetterEvaluation blog Demonstrating outcomes and impact across different scales discusses how evidence of outcomes and impact can be better captured, integrated and reported on across different scales of work.

Framing the boundaries of a monitoring system

Often when people think about designing a monitoring system, they focus on choosing which indicators to use. However, there is also important upfront framing work to do for the monitoring system. It's important to understand:

  • the users and uses of monitoring information
  • the nature of what is being monitored, including how the monitoring will integrate with other management functions
  • the resources available for the monitoring system

Boundaries in terms of projects, organisations and geographic areas

Organisations often need to draw together information from multiple sources to help understand progress towards organisational or joint objectives or strategies. This kind of synthesis sometimes needs to be across organisations, at a sector, sub-national or country level. Thinking systemically about M&E extends beyond individual projects and activities.

Understanding the various stakeholders and their needs for monitoring information is essential in designing or refining a monitoring and evaluation system. Diving in without conscious and strategic design and proper resourcing can result in too much emphasis on certain aspects of monitoring at the expense of others.

For example, some organisations primarily see monitoring as an accountability and reporting function. This narrow focus neglects the use of monitoring information to inform management decisions and learn about how change comes about.

Another example is organisations that prioritise their own monitoring needs instead of looking for ways to use and strengthen their partners' monitoring systems. This approach overlooks the potential benefits of drawing on monitoring led by partner governments and communities, which can include:

  • fostering partnerships
  • strengthening ownership
  • strengthening national data and information systems
  • improving the ability of partners to learn from implementation

Boundaries in terms of other organisational functions

A good monitoring and evaluation system involves integrating the monitoring function with the evaluation function. Working systematically also means that monitoring should integrate with other management functions, including making timely adjustments to implementation, strategies and plans at the various levels.

Further reading on what to monitor

Systemic Thinking for Monitoring: Attending to Interrelationships, Perspectives, and Boundaries: This discussion note by Bob Williams and Heather Britt discusses attending to interrelationships, multiple perspectives, and boundaries – one of the three key principles underlying complexity-aware monitoring. This principle emphasises the importance of using systems concepts when monitoring, regardless of whether the monitoring method is drawn from the systems field or is a more traditional monitoring method.

Core concepts in developing monitoring and evaluation frameworks: This guide by Anne Markiewicz and Associates defines the parameters of routine monitoring and periodic evaluation over the life of an initiative or program, identifying the data to be generated, aggregated and analysed on a routine basis for formative and summative evaluation processes that inform organisational decision making and learning.

Linking Monitoring and Evaluation to Impact Evaluation: This guidance note from InterAction outlines the relationship between regular monitoring and evaluation and impact evaluation, with particular reference to how M&E activities can support meaningful and valid impact evaluation.

Who monitors?

There are many options as to who undertakes monitoring activities – sometimes, a combination of these is most appropriate.

Options for who should monitor

Local monitoring

In local monitoring, the implementing organisation monitors implementation, compliance, risks and results as part of managing implementation. Local monitoring can be participatory, with communities helping to decide what to monitor and to collect and analyse information.

'Head office' monitoring

At a country, regional or head office level, staff from the same organisation will often monitor implementation, compliance, risks and results as part of managing a portfolio of interventions.

Intermediary partner monitoring

(for example, a UN Agency or international NGO)

Staff from the intermediary organisation will often monitor implementation, compliance, risks and results as part of overseeing a portfolio of interventions at a program, country, regional or global level.

Funding partner monitoring

(for example, government, bilateral organisation or foundation)

Staff from the funding organisation will often monitor implementation, compliance, risks and results as part of overseeing a portfolio of interventions at a program, country, regional or global level.

Considerations when thinking about who monitors

Regardless of who carries out the monitoring function, there are several overlapping aspects and considerations. For example, local teams may make changes in implementation based on monitoring data to improve performance. Monitoring from multiple projects may need to inform other monitoring systems. It might also need to be synthesised to give an overall picture of progress and, therefore, will need to integrate with the overarching monitoring system of the funding organisation. Monitoring may also need to meet the information needs of the funding organisation to comply with funding arrangements and potentially also meet the information needs of the government.

For example, the Australian Government might fund an organisation such as UNICEF to implement a part of a particular UNICEF program that operates in countries of high priority for the Australian Government. UNICEF may then provide grants to local organisations and technical assistance to the government of those countries to implement. The local organisation, the partner government, UNICEF and the funding organisation (the Australian Government) may all undertake monitoring activities. Within each of these organisations, the monitoring information may contribute to other monitoring systems at different levels. Often the various monitoring systems are not harmonised to integrate easily with each other. This already complicated situation is further exacerbated when the local organisation, UNICEF or the partner government receives additional funds from a different funding organisation (such as the EU), which has a different monitoring system again.

When to monitor?

When to monitor is an important consideration in designing a fit-for-purpose monitoring system.

One of the characteristics of monitoring is that it is undertaken periodically during implementation. In this way, monitoring can provide real-time information to inform implementation. In contrast, future evaluations may have to try and reconstruct past situations or conditions. But how often and when should monitoring happen?

There is no simple answer as to when the best time to monitor is. It is likely to involve a balance between different time-frames for different users to ensure the timeliness of information to inform decision making.

The following are some helpful questions to consider in choosing when to monitor for which pieces of information in an M&E system (taken from Ramalingam et al., 2019):

  • How can trade-offs between differing time pressures for an M&E system be managed?
  • What decisions need to be made and when?
  • When is it plausible to observe changes?
  • How are the M&E components and sequencing expected to contribute to decision-making and increase the likelihood of intended outcomes?

Further reading on when to monitor

Synchronizing monitoring with the pace of change in complexity: This USAID discussion note by Richard Hummelbrunner and Heather Britt argues for synchronizing monitoring with the pace of change as a key principle underlying complexity-aware monitoring.

Yemen: 2021 Humanitarian Response Plan Periodic Monitoring Report, January - June 2021 (Issued October 2021): This example from the UN Office for the Coordination of Humanitarian Affairs is a six-monthly periodic monitoring report from the ongoing crisis in Yemen, documenting the changing circumstances, the activities undertaken and results achieved, the current assessment of different governorates, and funding needs.

Systems of reflection and use

As described above, there are a variety of purposes for monitoring information, many of which require systems of reflection.

Systems of reflection often consider successes and failures and the reasons behind them and determine specific actions or steps to be taken as a result. Reflection systems can also extend to reflecting on the overall project strategy and whether this needs revision based on new information.

Systems of reflection can include facilitated conversations, such as after-action reviews or retrospects, and strategy testing exercises. Monitoring visits (sometimes called site visits or field visits) and regular meetings of the core implementation team and potentially others to review a range of evidence can also be a part of systems of reflection.

Further reading on systems of reflection and use

Strategy testing: This paper by Debra Ladner describes an innovative approach to monitoring highly flexible aid programs: Strategy Testing. Developed by The Asia Foundation, the approach involves reviewing and adjusting the theory of change about every four months in light of monitoring information. The paper shares examples, insights and reflections on the process.

Revised site-visit standards: A quality-assurance framework: In this journal article, Michael Quinn Patton proposes 12 quality standards for site visits.

The UNICEF guidance on field monitoring includes some practical steps for planning and using field monitoring.

After Action Review (method page): The After Action Review is a simple method for facilitating reflection on successes and failures and supporting learning. It works by bringing together a team to discuss a task, event, activity or project, in an open and honest fashion.

Approaches to monitoring

It can be helpful to decide whether the system to be monitored is complicated or complex to inform the choice of approach to monitoring.

Complicated systems involve multiple components, levels or organisations but are relatively stable and well-understood with the right expertise.

Complex systems involve many diverse components, which interact in adaptive and nonlinear ways that are fundamentally unpredictable. This means that there are ongoing changes in the understanding of how these systems work, how interventions might best work, and what monitoring is needed.

Monitoring in simple or complicated but straightforward contexts

An intervention might be considered simple or complicated when there is a largely straightforward or well-understood relationship between the intervention and its results. Results-Based Management can be a useful approach for monitoring in these relatively stable and predictable contexts.

Results-Based Management

Results-Based Management was designed to shift the emphasis from monitoring activities to monitoring results, and to use what is learned to adjust activities to achieve better results. It generally answers three types of questions:

  1. Are we implementing as planned?
  2. Are we achieving the expected results?
  3. Do we need to make adjustments?
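
As a rough illustration (not drawn from the Results-Based Management literature itself), the first two questions amount to comparing actual activities and results against plans and targets. The short Python sketch below, using invented indicator names, targets and a threshold, shows the kind of simple variance check a monitoring system might run to flag where adjustments may be needed.

```python
# Invented indicators, targets and actuals for illustration only.
plan = {
    "health_workers_trained": 200,
    "clinics_renovated": 10,
    "children_vaccinated": 5000,
}
actuals = {
    "health_workers_trained": 150,
    "clinics_renovated": 10,
    "children_vaccinated": 5400,
}

THRESHOLD = 0.9  # flag anything below 90% of plan; the threshold is a management choice

for indicator, planned in plan.items():
    achieved = actuals.get(indicator, 0)
    ratio = achieved / planned if planned else 0.0
    status = "on track" if ratio >= THRESHOLD else "review needed"
    print(f"{indicator}: {achieved}/{planned} ({ratio:.0%}) - {status}")
```

A check like this only speaks to the first two questions; deciding whether and how to adjust (the third question) still requires judgement about context, quality and the reasons behind any variance.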

Further reading on Results-Based Management

See Mayne (2007) on the challenges and lessons in implementing results-based management, and the UNICEF (2017) Results Based Management Handbook, both listed in the resources below.

Monitoring in complex contexts

Contemporary monitoring systems increasingly incorporate systems thinking and complexity science to respond to situations that involve considerable uncertainty or that are changing rapidly.

These approaches include:

Complexity-aware monitoring

USAID has developed the complexity-aware monitoring approach for monitoring programs that contain some complex aspects. Complexity-aware monitoring is appropriate for aspects of strategies, projects or activities where:

  • Cause-and-effect relationships are uncertain
  • Stakeholders bring diverse perspectives to the situation, making consensus impractical
  • Contextual factors are likely to influence programming
  • New opportunities or new needs continue to arise
  • The pace of change is unpredictable

For more information on complexity-aware monitoring, see Heather Britt's Discussion Note: Complexity-aware monitoring (2013).

Adaptive management

To some extent, all management needs to be adaptive; implementation does not simply involve enacting plans but also modifying them when circumstances or understandings change. However, 'adaptive management' goes beyond normal levels of adaptation. Adaptive management involves deliberately taking actions to learn and adapt as needed under conditions of ongoing uncertainty and complexity.

BetterEvaluation has developed a series of working papers on Monitoring and Evaluation for Adaptive Management. The series explores how monitoring and evaluation can support good adaptive management of programs. While focused on international development, it is relevant to wider areas of public good activity, particularly in a time of global pandemic, uncertainty and an increasing need for adaptive management.

Working Paper #1 is an overview of monitoring and evaluation for adaptive management. Working Paper #2 explores the history, various definitions and forms of adaptive management, including Doing Development Differently (DDD), Thinking and Working Politically (TWP), Problem-Driven Iterative Adaptation (PDIA), and Collaborating, Learning and Adapting (CLA). It also explores what is needed for adaptive management to work.

For more information, see BetterEvaluation's adaptive management thematic page.

Further reading on adaptive management

Making adaptive rigour work: Principles and practices for strengthening monitoring, evaluation and learning for adaptive management: This paper by Ben Ramalingam, Leni Wild and Anne L. Buffardi sets out three key elements of an 'adaptive rigour' approach: strengthening the quality of monitoring, evaluation and learning data and systems; ensuring appropriate investment in monitoring, evaluation and learning across the programme cycle; and strengthening capacities and incentives to ensure the effective use of evidence and learning as part of decision-making, leading ultimately to improved effectiveness. The short adaptive management annex is an inventory that presents the three elements as a series of questions to be asked by those involved in designing, developing, implementing and improving monitoring, evaluation and learning systems for adaptive programmes.

Systems concepts in action: A practitioner's toolkit: This book, authored by Bob Williams and Richard Hummelbrunner, is focused on the practical use of systems ideas. It describes 19 commonly used systems approaches and outlines a range of tools that can be used to implement them.

Challenges and pitfalls

Monitoring is difficult to do well. Some of the common challenges and pitfalls include the following:

Focusing only on intended outcomes

Many M&E systems focus only on progress towards the achievement of the intended outcomes of specific projects. However, it is also important to make sure your data collection remains open to unintended results, including unexpected negative and positive outcomes and impacts. Wider positive outcomes beyond the project can also be important to monitor.

Not focusing on meaningful outcomes

Many M&E systems focus on outcomes that do not reflect the true intent of the intervention. One example is focusing on the number of people reached by a particular program, when a more meaningful outcome might be whether the intended change has taken place.

A useful resource is The Donor Committee for Enterprise Development Standard for Results Measurement, which articulates outcomes in terms of change and also includes a checklist for auditing monitoring systems.

Over-reliance on quantitative indicators

Many M&E systems focus data collection on quantitative indicators. However, qualitative data, such as participant stories, observations or other forms of evidence, can often be more valid and useful.

Beyond the Numbers: How qualitative approaches can improve monitoring of humanitarian action by A. Sundberg includes more information on using qualitative evidence.

Presenting quantitative results without context

A common pitfall in reporting monitoring activities is presenting quantitative results without context, which renders them meaningless. For example, to report that 456 people (50% women) were trained in a topic tells the reader nothing about the quality of the training, the results of the training, whether this was above or below performance expectations, the context in which it took place, or why the training was important. Alternatively, narrative reporting can draw on quantitative and qualitative results to tell a meaningful performance story that explains the significance of the numbers in context and the implications of the results.
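
As a rough sketch of the contrast (with invented figures, target and reporting period), the snippet below shows the same count reported bare and then reported against a target and period, which is the minimum context a reader needs to interpret it; quality and outcome evidence would still need to be reported alongside.

```python
# Invented figures; the point is the contrast in reporting, not the numbers themselves.
trained = 456
women_share = 0.50
target = 600
period = "January to June"

bare = f"{trained} people ({women_share:.0%} women) were trained."

contextualised = (
    f"{trained} of a targeted {target} people ({trained / target:.0%} of target) "
    f"were trained in {period}; {women_share:.0%} were women. "
    "Evidence on training quality and on whether trainees apply the skills is reported separately."
)

print(bare)
print(contextualised)
```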

John Mayne's paper Reporting on outcomes: Setting performance expectations and telling performance stories contains useful ideas for telling performance stories.

Dr Alice Roughley and Dr Jess Dart have also prepared a helpful user guide for the Australian Government: Developing a performance story report. Although prepared for evaluation in the context of natural resource management, it is useful for monitoring and in other contexts. Chapter 6 is particularly helpful in guiding users in how to pull together different types of evidence to write a performance story.

Using the wrong synthesis approach

Simply adding up indicators from smaller units to larger units or a whole organisation often does not produce meaningful performance information as those indicators will usually have come about in different contexts. For example, constructing 10km of road in remote Papua New Guinea cannot meaningfully be added to 100km of road construction in an urban setting in a different country.
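
To make this concrete, here is a minimal sketch using the road-construction example with invented targets: naively summing kilometres across very different contexts yields a single figure that hides performance, whereas reporting each unit against its own context-specific target is one simple alternative form of synthesis.

```python
# Invented figures; contexts and targets are illustrative assumptions.
results = [
    {"context": "remote Papua New Guinea", "km_built": 10, "km_target": 12},
    {"context": "urban setting in another country", "km_built": 100, "km_target": 150},
]

naive_total = sum(r["km_built"] for r in results)
print(f"Naive sum: {naive_total} km built")  # hides that the contexts are not comparable

# Reporting each unit against its own target keeps performance and context visible.
for r in results:
    achievement = r["km_built"] / r["km_target"]
    print(f"{r['context']}: {r['km_built']} of {r['km_target']} km ({achievement:.0%} of target)")
```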

Other forms of synthesis, and other boundaries beyond the organisation, might be more meaningful and useful. The BetterEvaluation blog on Demonstrating outcomes and impact across different scales includes different methods of synthesis.

Causing harm through data collection methods

There are many ways in which data collection can cause harm to individuals, households or communities. When determining the ethical standards for a monitoring system, the risks, burdens and benefits of data collection methods need to be fully considered. It's important to ensure the informed consent of participants is gained and to think through alternatives when data collection methods are not appropriate. For example, repeated and 'extractive' household surveys can cause inconvenience or discomfort to participants. Alternatives might include linking into existing M&E systems, such as census data collection, rather than running parallel activities.

Monitoring function not being valued or sufficiently resourced within organisations

While monitoring and evaluation are intrinsically linked, monitoring has historically not been as highly valued or resourced as evaluation. As a result, monitoring is not always recognised as an essential function within organisations. It is often delegated to specialised units which are divorced from management and budget decisions. Elevating the monitoring function in organisations usually needs strong leadership and culture change. Demonstrating the benefits of good monitoring practice, finding champions within the organisation to roll out good practice and advocating for monitoring, including the use of monitoring information, can be effective strategies to help bring about such change.

For further reading on some of these challenges, see the resources listed below.

Resources

Australian Department of Foreign Affairs and Trade. (2017). Monitoring and Evaluation Standards. Retrieved from: https://www.dfat.gov.au/about-us/publications/Pages/dfat-monitoring-and-evaluation-standards

Britt, H. (2013). Discussion note: Complexity aware monitoring. US Agency for International Development (USAID), Bureau for Policy, Planning and Learning. Retrieved from: https://usaidlearninglab.org/library/complexity-aware-monitoring-discussion-note-brief

Clark, L., & Apgar, J. M. (2019). Unpacking the Impact of International Development: Resource Guide 1. Introduction to Theory of Change. IDS, University of Edinburgh and CDI. Retrieved from: http://archive.ids.ac.uk/cdi/publications/unpacking-impact-international-development-resource-guide-1-introduction-theory-change.html

Clark, L., & Apgar, J. M. (2019). Unpacking the Impact of International Development: Resource Guide 2. Seven Steps to a Theory of Change. IDS, University of Edinburgh and CDI. Retrieved from: http://archive.ids.ac.uk/cdi/publications/unpacking-impact-international-development-resource-guide-2-seven-steps-theory-change.html

Clark, L., & Apgar, J.M. (2019). Unpacking the Impact of International Development: Resource Guide 4. Developing a MEL Approach. IDS, University of Edinburgh and CDI. Retrieved from: http://archive.ids.ac.uk/cdi/publications/unpacking-impact-international-development-resource-guide-4-developing-mel-approach.html

Clark, L., & Small, E. (2019). Unpacking the Impact of International Development: Resource Guide 3. Introduction to Logframes. IDS, University of Edinburgh and CDI. Retrieved from: http://archive.ids.ac.uk/cdi/publications/unpacking-impact-international-development-resource-guide-3-introduction-logframes.html

Dillon, N. (2019). Breaking the Mould: Alternative approaches to monitoring and evaluation. ALNAP Paper. London: ODI/ALNAP. Retrieved from: https://www.alnap.org/help-library/breaking-the-mould-alternative-approaches-to-monitoring-and-evaluation

Dillon, N., & Sundberg, A. (2019). Back to the Drawing Board: How to improve monitoring of outcomes. ALNAP Paper. London: ODI/ALNAP. Retrieved from: https://www.alnap.org/help-library/back-to-the-drawing-board-how-to-improve-monitoring-of-outcomes

The Donor Committee for Enterprise Development. (n.d.). DCED Standard for results measurement. Retrieved from: https://www.enterprise-development.org/measuring-results-the-dced-standard/

Guijt, I., & Woodhill, J. (2002). Managing for Impact in Rural Development, A Guide for Project M&E. IFAD. Retrieved from: https://www.ifad.org/documents/38714182/39723245/Section_2-3DEF.pdf/114b7daa-0949-412b-baeb-a7bd98294f1e

Hummelbrunner, R., & Britt, H. (2014). Synchronising Monitoring with the Pace of Change in Complexity. US Agency for International Development (USAID), Bureau for Policy, Planning and Learning. Retrieved from: https://www.betterevaluation.org/en/resources/synchronizing-monitoring-pace-change-complexity

Mayne, J. (2007). Challenges and lessons in implementing results-based management. Evaluation, 13(1), 87-109. https://journals.sagepub.com/doi/abs/10.1177/1356389007073683?journalCode=evia

Mayne, J. (2004). Reporting on outcomes: Setting performance expectations and telling performance stories. Canadian Journal of Program Evaluation, 19(1), 31-60. Retrieved from: https://evaluationcanada.ca/secure/19-1-031.pdf

Patton, M. Q. (2017). Revised site-visit standards: A quality-assurance framework. In R. K. Nelson, & D. L. Roseland (Eds.), Conducting and Using Evaluative Site Visits. New Directions for Evaluation, 156, 83–102. https://onlinelibrary.wiley.com/doi/abs/10.1002/ev.20267

Peersman, G., Rogers, P., Guijt, I., Hearn, S., Pasanen, T., & Buffardi, A. (2016). 'When and how to develop an impact-oriented monitoring and evaluation system'. A Methods Lab publication. Overseas Development Institute. Retrieved from: https://www.betterevaluation.org/en/resource/discussion-paper/ML-impact-oriented-ME-system

Ramalingam, B., Wild, L., & Buffardi, A. (2019). Annex | Making adaptive rigour work: the adaptive rigour inventory – version 1.0. Overseas Development Institute. Retrieved from: https://odi.org/en/publications/making-adaptive-rigour-work-principles-and-practices-for-strengthening-mel-for-adaptive-management/

Ramalingam, B., Wild, L., & Buffardi, A. (2019). Making adaptive rigour work: Principles and practices for strengthening monitoring, evaluation and learning for adaptive management. Overseas Development Institute. Retrieved from: https://odi.org/en/publications/making-adaptive-rigour-work-principles-and-practices-for-strengthening-mel-for-adaptive-management/

Rogers, P. (2020). Real-time evaluation. Monitoring and Evaluation for Adaptive Management Working Paper Series, Number 4, September. Retrieved from: https://www.betterevaluation.org/en/resources/real-time-evaluation-working-paper-4

Rogers, P., & Macfarlan, A. (2020). An overview of monitoring and evaluation for adaptive management. Monitoring and Evaluation for Adaptive Management Working Paper Series, Number 1, September. Retrieved from: https://www.betterevaluation.org/resources/overview-monitoring-and-evaluation-adaptive-management-working-paper-1

Rogers, P., & Macfarlan, A. (2020). What is adaptive management and how does it work? Monitoring and Evaluation for Adaptive Management Working Paper Series, Number 2, September. Retrieved from: https://www.betterevaluation.org/en/resources/what-adaptive-management-and-how-does-it-work-working-paper-2

Sundberg, A. (2019). Beyond the Numbers: How qualitative approaches can improve monitoring of humanitarian action. ALNAP Paper. London: ODI/ALNAP. Retrieved from: https://www.alnap.org/help-library/beyond-the-numbers-how-qualitative-approaches-can-improve-monitoring-of-humanitarian

UNICEF. (2017). Results Based Management Handbook: Working Together for Children. Retrieved from: https://www.unicef.org/rosa/media/10356/file

UNOCHA. (2021). Yemen: 2021 Humanitarian Response Plan Periodic Monitoring Report, January - June 2021. https://yemen.un.org/en/156745-yemen-2021-humanitarian-response-plan-periodic-monitoring-report-january-june-2021-issued

Williams, B., & Britt, H. (2014). Systemic Thinking for Monitoring: Attending to Interrelationships, Perspectives, and Boundaries. US Agency for International Development (USAID), Bureau for Policy, Planning and Learning. Retrieved from: https://usaidlearninglab.org/sites/default/files/resource/files/systemic_monitoring_ipb_2014-09-25_final-ak_1.pdf

Williams, B., & Hummelbrunner, R. (2010). Systems concepts in action: A practitioner's toolkit. Stanford University Press. https://www.sup.org/books/cite/?id=18331
