There is an ongoing and urgent need to move towards culturally safe, appropriate and relevant evaluations that contribute to better outcomes for Indigenous peoples. There is an increasing appetite from funders to explore how innovation can solve some of our most complex social and environmental challenges. Governments are embracing these new approaches and demanding real-time evidence of what is working. Philanthropy and not-for-profits are transforming too. At the same time, new players are doing the work traditionally associated with evaluation, in innovative ways, with access to more data than ever before. In this context, it is critical that evaluation transforms to remain relevant, meet the market and meaningfully contribute to the changing face of Australasia. We invite you to explore this changing context and what it might mean for you and the areas in which you work through our aes2018 conference theme: transformations. As we explore what these transformations mean for the role of evaluation and evaluators, we hope to equip you with new questions, networks and ways of thinking. Join us in helping shape the future of evaluation in this changing context.
Are quantitative or qualitative methods better for undertaking impact evaluations? What about true experiments? Is contribution analysis the new 'state of the art' in impact evaluation or should I just do a survey and use statistical methods to create comparison groups?
Determining one's plan for an impact evaluation occurs within the constraints of a specific context. Since method choices must always be context-specific, debates in the professional literature about impact methods can at best provide only partial guidance to evaluation practitioners. The way to break out of this methods impasse is by focusing on the evidentiary requirements for assessing causal impacts.
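One of the options mentioned above is using statistical methods to create comparison groups. As a hedged illustration only (the data, function name and single-covariate matching rule are all invented for this sketch, not taken from the text), nearest-neighbour matching on one covariate might look like this:

```python
# Hypothetical sketch: constructing a statistical comparison group by
# nearest-neighbour matching on a single covariate, without replacement.
# All data below are fabricated for demonstration purposes.

def match_comparison_group(treated, untreated, key):
    """For each treated unit, select the untreated unit whose covariate
    value is closest; each untreated unit is used at most once."""
    pool = list(untreated)
    matches = []
    for t in treated:
        best = min(pool, key=lambda u: abs(u[key] - t[key]))
        pool.remove(best)
        matches.append((t, best))
    return matches

# Fabricated example: program participants matched to non-participants on age.
treated = [{"id": "T1", "age": 34}, {"id": "T2", "age": 52}]
untreated = [{"id": "U1", "age": 50}, {"id": "U2", "age": 33}, {"id": "U3", "age": 61}]

pairs = match_comparison_group(treated, untreated, "age")
for t, u in pairs:
    print(t["id"], "matched with", u["id"])  # T1-U2, then T2-U1
```

In practice, matching is usually done on many covariates at once (for example via a propensity score) and with diagnostics for balance; this greedy one-covariate version only conveys the basic idea of building a comparison group from observational data.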
We're thrilled to be able to join the Australasian Evaluation Society at their 2017 International Conference in Canberra. We'll have a booth set up in the conference exhibition area and we'd love you to come say hello and join in the fun as we use our time at the AES to work with our members, website users, and the wider evaluation community to co-create and share knowledge about evaluation.
We're delighted to be able to share the news that the Australasian Evaluation Society (AES) has extended the deadline for the Emerging Indigenous Evaluators Support Grants for the AES17 International Evaluation Conference and workshops in Canberra.
Applications are now due on July 11, 2017.
This workshop follows the structure of the textbook ‘Developing Monitoring and Evaluation Frameworks’, authored by Dr Ian Patrick and Anne Markiewicz. It will present a clear and staged conceptual model for the systematic development of an M&E Framework. It will examine the steps and techniques involved in designing and implementing the framework; explore potential design issues and implementation barriers; and cover the development of a program logic, the identification of key evaluation questions, the development of performance indicators, and the identification of processes for data collection and for ongoing analysis and reflection based on the data generated. The facilitator will encourage interactive peer-to-peer dialogue to share experiences and learning, and will also draw on case studies to encourage the application of knowledge and skills to evaluation contexts.
Performance story reports aim to strike a good balance between depth of information and brevity. They aim to be written in accessible language and help build a credible case about the contribution a program has made towards outcomes or targets. They help teams and organisations to focus on results and also provide a common language for discussing different programs. This workshop will explore different approaches to performance story, and how performance story reports are developed. It will outline steps to building a report and explore the role of program logic and evidence in developing the report. It will be an interactive and engaging workshop involving case studies and group process.
All too often, conferences fail to make good use of the experience and knowledge of the people attending: most time is spent presenting prepared material that could be better delivered in other ways, and not enough is spent on discussion and active learning. With closing dates for two evaluation conferences fast approaching (the Australasian Evaluation Society and the American Evaluation Association), could you propose something more useful, something that would demonstrate how much we know and care about communicating and using information?
Experience tells us that our working environments are made up of constantly changing circumstances set against a backdrop of diverse – and often contrasting – stakeholder opinions and goals. This perennial reality is challenging enough without being handed evaluation and management tools that are ill-equipped to deal with dynamic and contested worlds. The purpose of this highly interactive discussion is first to shine a light on how ill-equipped mainstream evaluation tools and techniques are, and second to explore ways to adapt the tools and techniques each of us already uses today. The main aim of the seminar is that each attendee leaves with at least one concrete means of making their future evaluations more compatible with the dynamism and contestation that characterise our complex world.
Program logic is a simplified model of the expected cause-and-effect relationships between activities, immediate changes, intermediate outcomes and final outcomes. This workshop introduces the program logic concept and lays out a step-by-step process for creating a logic model. The workshop concludes with an overview of how the logic model can be used for program design and as the spine of a monitoring, evaluation, reporting and improvement framework.
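The four-stage chain described above can be sketched as a simple data structure. This is only an illustrative sketch: the example program, its entries and the function names are invented here, not drawn from the workshop.

```python
# Hypothetical sketch of a program logic chain as a plain dictionary.
# Stages follow the four levels named in the text; entries are invented.

STAGES = ["activities", "immediate_changes",
          "intermediate_outcomes", "final_outcomes"]

logic_model = {
    "activities": ["Deliver mentoring sessions"],
    "immediate_changes": ["Participants gain job-search skills"],
    "intermediate_outcomes": ["Participants apply for more jobs"],
    "final_outcomes": ["Increased employment among participants"],
}

def describe_chain(model):
    """Render the expected cause-and-effect chain, stage by stage."""
    return " -> ".join(model[stage][0] for stage in STAGES)

print(describe_chain(logic_model))
```

A structure like this is deliberately minimal; in practice each link in the chain would also carry the evaluation questions, indicators and data-collection processes that a full M&E framework hangs off it.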
AES Seminar: Understanding evaluation reports and applying the findings to policy and practice (Brisbane)
The power of evaluation is its ability to provide meaningful information for use in decisions about programs and policies. This power is diminished by both the lack of use and the misuse of evaluation. Based on a survey conducted by the American Evaluation Association in 2006, it is estimated that fewer than one-third of evaluation results are used. A more serious problem, however, is misevaluation: evaluation with flawed methodology, faulty sampling, data collection, analysis or reporting.