This guest blog is by Anne Markiewicz, Director of Anne Markiewicz and Associates, a consultancy that specialises in developing Monitoring and Evaluation Frameworks. Anne is the co-author, with Ian Patrick, of the textbook ‘Developing Monitoring and Evaluation Frameworks’ (Sage 2016). She has extensive experience in the design and implementation of monitoring and evaluation frameworks for a wide range of initiatives, and in building the capacity of organisations to plan for monitoring and evaluation.
The benefit of being intentional about program delivery and its contribution to intended outcomes
Programs that invest in routine monitoring and periodic evaluation have a better chance of being effective and delivering the intended outcomes that can improve the circumstances the program was developed to address. Such investment is particularly sound for programs attempting to address long-standing and perplexing social issues.
Developing a monitoring and evaluation framework is an essential first step in ensuring that a program is consciously and consistently monitored and evaluated over its lifespan. Monitoring and evaluation frameworks embody the up-front thinking and planning required to determine exactly what will be monitored on an ongoing basis and what will be evaluated from time to time, how often these activities will take place, and who will be responsible for which functions. The monitoring and evaluation framework also indicates how the information will be collected, analysed, reported and used to guide program focus, priority-setting and future directions.
The value of supporting learning for program improvement
Monitoring and evaluation frameworks are most useful when they serve a variety of functions and are not developed solely for the purpose of accountability to funders. They should support a process of regular program-focused reflection and learning about what has worked and what has not, for whom and under what circumstances, leading to program improvement over time. The operation of a monitoring and evaluation framework should also generate knowledge about good practices that can be disseminated and shared. The Department of Finance Resource Management Guide No. 131 ‘Developing Good Performance Information’ (April 2015) affirms the principle that there are multiple purposes involved in collecting good performance information and that there is benefit in telling a cohesive performance story.
The Place of Program Theory, Program Logic, Evaluation Questions and Indicators in the Identification of Purposeful Data Collection Processes
A theory-based and evaluation-led monitoring and evaluation framework will map out the program theory and program logic and use these conceptual models to guide the development of a set of evaluation questions and performance indicators. The evaluation questions and performance indicators are then used to determine the range of data that needs to be collected in order to answer the evaluation questions and address the agreed indicators. In a step-by-step, sequential process, a monitoring and evaluation framework thus uses the mapping of the intended causal connections between a program’s efforts and its intended results to outline the areas of investigation to be undertaken. It then develops a monitoring plan outlining what will be monitored over the life of the program, and an evaluation plan outlining what will be evaluated, and when. A data collection, management and analysis plan and a reporting and communication strategy follow on. (See the relevant tasks and options in the Rainbow Framework for more information on these areas: Collect and/or Retrieve Data, Manage Data, Analyse Data, Understand Causes, Report and Support Use.)
The aim is to strive for proportionality in the size and scope of the monitoring and evaluation framework relative to the resources, scale and funding of the program. Logically, the larger, more complex and more far-reaching the program, the more ambitious the scope of the Monitoring and Evaluation Framework.
This approach to developing Monitoring and Evaluation Frameworks has proved suitable and effective in application across a variety of programs operating in a range of different contexts.
Australasian Evaluation Society Workshop: Developing Monitoring and Evaluation Frameworks - Melbourne: 13-14 November 2017
Delivered by Anne Markiewicz, this workshop follows the structure of the textbook ‘Developing Monitoring and Evaluation Frameworks’ (Anne Markiewicz and Ian Patrick, Sage 2016). It will present a clear, staged conceptual model for the systematic development of an M&E Framework. It will examine a range of steps and techniques involved in the design and implementation of the framework, and explore potential design issues and implementation barriers. Topics covered include the development of Program Theory and Program Logic; the identification of key evaluation questions; the development of performance indicators; and the identification of processes for multi-method data collection and ongoing analysis and reflection based on the data generated.