BetterMonitoring draft framework - September 2021
BetterEvaluation is working with UNICEF to improve our collective understanding and practice of the monitoring function. The BetterMonitoring initiative aims to elevate the monitoring function, making it more visible, and to provide guidance on how to plan, conduct and use monitoring activities well.
As part of the Global Partnership for BetterMonitoring project, a draft framework has been created that sorts the main tasks associated with monitoring into nine clusters. It is modelled on the BetterEvaluation Rainbow Framework but adapted specifically to the monitoring function. It is not intended to be a step-by-step guide. Instead, it aims to provide an overview of the tasks related to monitoring and to provide links to methods, processes and resources that can be used to undertake each task.
BetterMonitoring draft framework
This draft framework groups the tasks associated with the monitoring function into nine clusters:
- Determine ownership of the monitoring system
- Manage the ongoing design and operations of a monitoring system
- Define what is to be monitored
- Frame the boundaries for a monitoring system
- Answer descriptive questions
- Answer causal questions
- Answer evaluative questions
- Synthesise evidence at different levels and scales
- Use and support use of monitoring information
DETERMINE OWNERSHIP of the monitoring system
This cluster of tasks determines the degree to which monitoring will be owned nationally, locally or within organisations.
Understand and engage stakeholders
Who will be the primary owners of the monitoring system? Whose needs are intended to be met? Who needs to be involved in monitoring? How can they be identified and engaged?
To what degree will the monitoring system be integrated with or support existing monitoring and management systems? How can the monitoring system support country ownership of interventions and information? Is more than one system needed?
Establish decision-making processes
Who will have the authority to make various types of decisions about monitoring? Who will provide advice or make recommendations about monitoring? What processes will be used for making decisions? How will these decisions be documented?
Decide who will collect, manage and analyse the monitoring information and who will use it
Who will design, implement and use the monitoring system and the information it generates?
MANAGE the ongoing design and operations of a monitoring system
This cluster of tasks relates to designing and managing a monitoring system, including securing resources, setting ethical and quality standards, setting up the system and developing monitoring capacity.
Determine and secure resources
What resources (time, money and expertise) will be needed to design, implement and manage the monitoring system, and how can they be obtained? Consider internal (e.g. staff time) and external (e.g. participants’ and partners’ time) resources.
Define ethical and quality standards
What will be considered high quality and ethical data collection, storage and use of information? What processes and structures need to be established or used to address ethical issues?
Design the monitoring system
What will be the overall approach to gathering, analysing and reporting data to meet the priority information needs? (This task brings together choices from several other tasks and may need to be revisited as circumstances or needs change.)
Operationalise the monitoring system
What needs to be done to implement the monitoring system and integrate it into existing systems (e.g. a monitoring work plan, a monitoring budget)? How will you establish team responsibilities? What processes will be developed or used to ensure the information generated through monitoring informs management decisions and regular reporting and learning events?
Develop monitoring capacity
How can the ability of individuals, groups and organisations to conduct data collection, analyse data and use monitoring information be strengthened?
Review the appropriateness of the monitoring system
Is the system providing the information that is needed? Is different monitoring information needed? Should the frequency of different aspects of monitoring change?
DEFINE what is to be monitored
This cluster of tasks relates to developing a description of what is to be monitored and how it is understood to work.
Develop initial description
What exactly is being monitored? Is it a project, program, policy, organisation, sector, country level program, country?
Develop theories of change if appropriate
Depending on what is being monitored, are there theories of change that would help identify relevant data or support its interpretation and use?
FRAME the boundaries for a monitoring system
This cluster of tasks sets the parameters of the monitoring system – its purposes, key monitoring questions and the criteria and standards to be used.
Identify primary intended users
Who are the primary intended users of the monitoring information?
Understand the different organisational or country levels at which monitoring is needed
At what level(s) will monitoring take place? How will the different needs of different users be met?
Decide the purposes of the monitoring
What are the primary purposes and intended uses of the monitoring information? To what degree will monitoring information be used for accountability (including to partners, participants, donors and organisationally), informing management decisions and programme implementation, learning purposes, and/or public relations purposes?
Decide what aspects need to be monitored
Will this include budget expenditure; activities and workplan implementation; policy implementation; achievement of outputs; progress towards outcomes and impacts; risks; contextual factors; gender equality and social inclusion?
Specify the key monitoring questions
What are the questions the monitoring will seek to answer? How can these be developed? How do they relate to evaluation questions? Are they sensitive to gender, disability, and other factors?
Decide the frequency of data collection
How often will each piece of information be collected?
ANSWER DESCRIPTIVE QUESTIONS
This cluster of tasks relates to collecting and analysing data to answer descriptive questions, such as questions about the activities of the project, program or policy, its results, the context in which it has been implemented, and the risks associated with implementation.
Collect information or data
How will you collect data about activities, outcomes, context and other factors? Who will collect the information? How will monitoring visits be integrated into the monitoring system?
Decide on sampling strategies
What sampling strategies will you use for collecting data or information?
Ensure ethical and quality standards are met
How will you ensure ethical standards are met in data collection and analysis? How will you ensure data quality?
Manage and store data
How will you organise and store monitoring information? How will you make sure it is accessible for a later evaluation?
Combine qualitative and quantitative data
How will you combine qualitative and quantitative data?
Analyse data
How will you draw meaning from, or investigate patterns in, the numeric or textual data? How will you use the information to answer the monitoring questions? How will you visualise data for analysis purposes?
ANSWER CAUSAL QUESTIONS
This cluster of tasks involves collecting and analysing data to answer causal questions about what has produced the outcomes and impacts that have been observed.
Check that the results support causal contribution
How will you assess whether the results are consistent with the theory that the intervention produced them?
Predict likely outcomes
Given the current situation, are you on track to achieve higher order results?
ANSWER EVALUATIVE QUESTIONS
This cluster of tasks relates to using data to draw evaluative conclusions regarding merit, worth or significance.
Determine what ‘success’ looks like
What should be the criteria and standards for judging performance? Whose criteria and standards matter? What process should be used to develop agreement about these?
Develop systematic evidence for answering evaluative questions
Should evaluative conclusions be based on specific indicators, comparison with targets or benchmarks, or global scales (also known as rubrics)?
Develop systematic processes for drawing evaluative conclusions
How should performance across a number of dimensions or different sites be summarised into an overall conclusion?
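As an illustration only, one common way to summarise performance across several dimensions is a weighted rubric. The sketch below is purely hypothetical — the dimension names, weights and four-point scale are assumptions for the example, not part of the framework:

```python
# Hypothetical four-point global scale (rubric levels mapped to scores)
LEVELS = {"poor": 1, "adequate": 2, "good": 3, "excellent": 4}

def synthesise(ratings, weights):
    """Combine per-dimension rubric ratings into a weighted overall score,
    then map that score back to the nearest rubric level."""
    total = sum(LEVELS[ratings[dim]] * w for dim, w in weights.items())
    score = total / sum(weights.values())
    # Choose the level label whose score is closest to the weighted average
    label = min(LEVELS, key=lambda lvl: abs(LEVELS[lvl] - score))
    return label, round(score, 2)

# Example: three dimensions with different weights (all values hypothetical)
ratings = {"relevance": "good", "efficiency": "adequate", "equity": "excellent"}
weights = {"relevance": 2, "efficiency": 1, "equity": 2}
print(synthesise(ratings, weights))  # -> ('good', 3.2)
```

In practice the criteria, weights and cut-off rules would come from the agreement developed with stakeholders in the previous task, and might be qualitative rather than numeric.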
Interpret the monitoring information to answer the monitoring questions
For example: is the program theory, theory of change or logic model working as expected at this stage? Is the budget on track? Are risks being realised? Are activities being implemented to the expected quality? Is the country on track to achieve the SDGs?
SYNTHESISE EVIDENCE at different levels and scales
This cluster relates to the use of appropriate methods to combine data to draw meaningful conclusions at different levels and scales.
Synthesise monitoring information at different levels and scales
Do you need to synthesise data across multiple interventions, or at different levels and scales? If so, how should this be done?
USE AND SUPPORT USE OF MONITORING INFORMATION
This cluster of tasks relates to using the information for the purposes of the monitoring.
Consider how different users will engage with the monitoring information
What can immediately be adjusted to improve implementation? What further processes are needed in program management meetings, learning sessions, partner or community workshops and/or reporting?
Make monitoring information accessible
How can the monitoring information be made easy to access for different users? This can include the use of data visualisation to communicate key messages.
Develop reporting media
What types of reporting formats will be appropriate for the intended users?
Respond to monitoring information
Are changes needed to improve performance?