What is it?
A Monitoring and Evaluation (M&E) Framework outlines the overall RM&E plan for monitoring and evaluation across an entire program, or across different programs. It should specify the monitoring strategies and any studies, reviews or evaluations to be done, with details about data sources, timing and management processes, as well as an overall program theory/logic model.
1. Groundwork tasks
The M&E Framework should be informed by several other important decisions and tasks. The C4D Evaluation framework approach would suggest consideration of the following aspects as preparation for undertaking this task:
2. Deciding on which option to use to create an M&E Framework
Here are three options for developing an M&E Framework that are recommended for C4D.
A questions-led M&E Framework
A questions-led M&E Framework starts with thinking about the information needs (questions) of the primary intended users, and builds a plan for answering those questions. This is a good option for C4D and is consistent with the C4D Evaluation Framework in the following ways:
- Participatory: The focus of the M&E is on the potential uses identified by stakeholders, especially the primary intended users. These stakeholders and users should be involved in deciding on the purpose and questions, and in selecting options for answering the questions.
- Holistic: The key M&E questions drive the direction of the framework. These questions should go beyond 'what happened' to also examine the causes of change, how good the programs and results are, and what to do next.
- Critical: A questions-led M&E Framework encourages mixed methods to build a rich understanding of what is working, and what is not working, for different groups.
- Realistic: A questions-led M&E Framework prioritises efforts around the questions that matter most to users. It does not try to measure everything. If primary intended users want to know about the impact of C4D initiatives, this implies certain types of strategies, which should be planned for as part of the M&E Framework. If there are many uncertainties about what might work, an M&E Framework can be built to allow for trialling and comparison of different strategies, investigated through smaller studies that inform an emergent approach.
- Learning-based: A questions-led M&E Framework takes learning from RM&E seriously, beyond a list of recommendations at the end. If key users prioritise understanding how to make improvements during implementation, this implies certain strategies. Further, learning structures, events and processes (such as committees, annual reviews etc.) can be built into the M&E Framework.
- Accountable: A questions-led M&E Framework supports true accountability, beyond compliance-oriented reporting against indicators, by building a rigorous, mixed-methods M&E Framework that can be designed to answer questions about effectiveness, impact, relevance and other quality standard criteria.
- Complex: A questions-led M&E Framework is much easier to design around complicated and complex types of C4D initiatives and problems. Depending on the framing of the key questions, a questions-led M&E Framework can be designed to support emergent and responsive implementation, using methods and strategies suited to understanding uncertainty. The focus on questions means it remains realistic, rather than trying to measure every single thing that might possibly be measured.
This approach represents an innovation in the way C4D M&E Frameworks can be created. An outline of the steps, with reference to the C4D guidance on the Rainbow Framework tasks, is provided here.
The Vietnam CO and RMIT University researchers followed these steps with counterparts to co-develop an M&E Framework and Plan for the VAC campaign. See how they used these matrices to document their decisions here: NationalProgramforChildProtectionCommunicationMEPlan.docx
A Results Framework
Results Frameworks are common in agencies using Results-Based Management approaches. A Results Framework uses a logic model as the basis for selecting or creating indicators for inputs, outputs and outcomes. A Results Framework brings the following benefits:
- Accountable: Results Frameworks are designed for upwards reporting against agreed performance indicators. It is easy for managers to aggregate these and get a quick, composite picture of progress.
- Critical: Results Frameworks can specify the data disaggregations required to enable an understanding of results for different groups, including vulnerable groups. Further, Results Frameworks generally include targets, which can specify whether improvements in indicators should be targeted for specific groups or geographical locations, and can set different expectations for harder-to-reach groups compared with easier-to-reach groups.
Results Frameworks also have a number of weaknesses to be aware of. These include:
- Participatory: Logical Frameworks and Results Frameworks can be inaccessible, foreign and difficult to understand, especially for local NGO partners, who are usually not part of the process of designing the frameworks.
- Holistic: Results Frameworks mainly rely on the selection of indicators to provide an indication of what is happening. A Results Framework generally does not set programs up well to understand the causes of, or contributions to, changes in indicators. If you are using a Results Framework, ensure that you consider methods and strategies that help you understand contributions and causes, how good the program is, and how it can be improved.
- Complex: A Results Framework is based on the assumption that change happens in linear ways (inputs lead to outputs, which lead to outcomes). Complicated and complex change trajectories (e.g. something gets worse before it gets better, or things improve and then suddenly decline) and other contradictions and uncertainties remain largely invisible.
- Learning-based: Results Frameworks are premised on a high degree of upfront planning followed by implementation of that plan. Although it is sometimes possible to adjust a Results Framework at certain times, it is generally not easy to build one in a way that allows for adaptive and learning-based implementation.
Results Frameworks can be adapted to be more in keeping with the C4D Evaluation Framework by considering what additional monitoring might be needed, and what additional small-scale research, studies, evaluations and reviews can be included.
|Specify C4D inputs, outputs and outcomes at each level of the program theory|Develop program theory or logic model|
|Select indicators and other monitoring strategies| |
Results Based Management Training Slides (UNICEF): These easy-to-follow slides provide detailed steps on developing a Results Framework. They include particularly useful guidance on problem analysis, the outcome chain (or program theory), and the strategies, risks and assumptions that are built into the Results Framework.
It is consistent with the C4D Evaluation Framework in the following ways:
- Accountable: Results-Based Management is typically an accountability-focused mechanism, used to guide upward reporting and ensure a results focus.
- Holistic (or complex?): This particular training package includes several useful processes for creating a robust theory of change, taking into account assumptions, risks, priorities, and an explicit change theory, which is then used as the basis for a Results Framework.
Monitoring and Evaluation of Participatory Theatre for Change (PTC) - Table 2 on page 17 includes a sample monitoring plan. This guide demonstrates how a strong theory of change can inform the design of monitoring and evaluation plans. Although it is written with reference to Participatory Theatre, the resource can be easily adapted to a range of C4D approaches, especially participatory ones. Click here to go directly to this resource, or here to read a summary and review of this resource. This resource is consistent with the C4D Evaluation Framework in relation to this task in the following ways:
- Complex: The strong use of a theory of change, based on three high-level principles that can be adaptively applied to suit emerging conditions.
- Realistic: The 'Reach, Resonance, Response' framework is simple enough to understand, useful as a guiding framework, and captures the important aspects of C4D outputs and outcomes.
Outcome Mapping to Develop an M&E System
The Outcome Mapping process includes the development of a Performance Monitoring Framework and an Evaluation Plan. Outcome Mapping was developed as an alternative to the kinds of M&E Frameworks associated with Results-Based Management, and is particularly intended for social and behavioural change and social transformation initiatives. The Performance Monitoring Framework sets out how actions and progress towards goals will be monitored, building on the progress markers (based on what you would 'expect to see', 'like to see' and 'love to see' in boundary partners), the strategies, and the organizational practices, all mapped out in the intentional design stage (similar to a theory of change). Not everything is monitored, and there are 'light' options. There are three main data collection tools for monitoring: an outcome journal, a strategy journal and a performance journal. The Evaluation Plan in Outcome Mapping is based on the identified uses of primary intended users and their questions. This approach is consistent with the C4D Evaluation Framework in the following ways:
- Participatory: Outcome Mapping is based on a participatory approach, with many of the planning and mapping decisions intended to be made in workshop settings.
- Complex: Outcome Mapping focuses on changes in the behaviours, relationships, actions or activities of the people, groups and organizations with whom a development program works directly, rather than on the development impact of a program in terms of changes in the state or situation, such as poverty alleviation or reduced child marriage.
- Learning-based: Outcome Mapping builds a monitoring and evaluation system for continual learning and improvement.
- Realistic: Outcome Mapping uses group processes to prioritise what will be monitored, recognising that the resources for monitoring and evaluation are limited. The available resources are channelled into efforts to better understand the influence of a program's work on change, and to use this understanding to improve performance.
It is important to keep in mind:
- Accountable: While Outcome Mapping resources point to ways of using Outcome Mapping for accountability and reporting, the focus is more on mutual learning and improvement. The monitoring methods used are generally based on self-assessment and self-reporting, which may not be considered rigorous enough in some contexts. Adaptations using alternative methods could address this problem.
BetterEvaluation page on Outcome Mapping - This page includes a concise overview and relates the approach to the Rainbow Framework tasks.
Outcome Mapping Learning Community - A hub of information on Outcome Mapping, including guides, manuals, video tutorials, and examples. Available in English,