AES Workshop: Developing Monitoring and Evaluation Frameworks

This workshop follows the structure of the textbook ‘Developing Monitoring and Evaluation Frameworks’ by Dr Ian Patrick and Anne Markiewicz. It presents a clear and staged conceptual model for the systematic development of an M&E Framework. It examines the steps and techniques involved in designing and implementing the framework; explores potential design issues and implementation barriers; and covers the development of a program logic, the identification of key evaluation questions, the development of performance indicators, and the identification of processes for data collection, ongoing analysis and reflection based on the data generated. The facilitator will encourage interactive peer-to-peer dialogue to share experiences and learning, and will draw on case studies to encourage application of knowledge and skills to evaluation contexts.

AES Workshop: Performance Story Reports

Performance story reports aim to strike a good balance between depth of information and brevity. They aim to be written in accessible language and to help build a credible case about the contribution a program has made towards outcomes or targets. They help teams and organisations to focus on results and also provide a common language for discussing different programs. This workshop will explore different approaches to performance story reporting and how performance story reports are developed. It will outline the steps to building a report and explore the role of program logic and evidence in developing it. It will be an interactive and engaging workshop involving case studies and group process.

What would an evaluation conference look like if it were run by people who know and care about presenting information to support use? (Hint: that should be us.)

2nd March 2017 by Patricia Rogers

All too often, conferences fail to make good use of the experience and knowledge of the people attending: most time is spent presenting prepared material that could be better delivered in other ways, and not enough on discussion and active learning. With closing dates for two evaluation conferences fast approaching (the Australasian Evaluation Society and the American Evaluation Association), could you propose something more useful that would demonstrate how much we know and care about communicating and using information?

Complexity Thinking for Complex Evaluations

Experience tells us that our working environments are made up of constantly changing circumstances set against a backdrop of diverse, and often contrasting, stakeholder opinions and goals. This perennial reality is challenging enough without being handed down evaluation and management tools that are ill-equipped to deal with dynamic and contested worlds. The purpose of this highly interactive discussion is first to shine a light on how ill-equipped mainstream evaluation tools and techniques are, and second to explore ways to adapt the tools and techniques each of us already uses today. The main aim of the seminar is that each attendee leaves with at least one concrete means of making their future evaluations more compatible with the dynamism and contestation that characterise our complex world.

AES Workshop: Introduction to Program Logic

Program logic is a simplified model of the expected cause-and-effect relationships between activities, immediate changes, intermediate outcomes and final outcomes. This workshop introduces the concept of program logic and lays out a step-by-step process for creating a logic model. The workshop concludes with an overview of how the logic model can be used for program design and as the spine of a monitoring, evaluation, reporting and improvement framework.

AES Seminar: Understanding evaluation reports and applying the findings to policy and practice (Brisbane)

The power of evaluation is its ability to provide meaningful information for use in decisions about programs and policies. This power is diminished both by non-use and by misuse of evaluation. Based on a survey conducted by the American Evaluation Association in 2006, it is estimated that fewer than one-third of evaluation results are used. A more serious problem, however, is misevaluation: evaluation with flawed methodology or faulty sampling, data collection, analysis or reporting.

AES Workshop: Move beyond ‘grassroots rhetoric and top down practice’ in monitoring and evaluation of community development projects

The workshop will work with a real situation through which you will experience first-hand the usefulness of this methodology on two levels: data collection for monitoring and evaluation, and learning by all involved about the project and its issues. The methodology involves sound community engagement, using participatory rapid appraisal (PRA) methods to establish anticipated outcomes and indicators of the change required to achieve those outcomes. Two easy-to-use, inclusive, ethically sound tools (Ten Seed Technique, Pocket Chart) are then used to collect data on these indicators. The data are presented in an easily understood chart format, enabling participants to engage in decisions using monitoring data and to contribute to the design and conduct of subsequent evaluations.

AES Workshop: What does the evaluator mean? Building the skills to understand evaluation reports and apply the findings to policy and practice

Evidence-based practice depends on skilled practitioners and policy makers being able to extract, interpret and apply key information from evaluation reports. Understanding evaluation reports, their limitations and their language, and then making good use of the findings, are essential skills for gaining the benefit of the investment in the evaluation process. This workshop provides the link between the evaluator and the evaluation user: users will learn what to look for and what questions to ask, and evaluators will learn how to make their reports more accessible.