Search: 26 results

The development and utility of a program theory: Lessons from an evaluation
This article, written by Tim Clement and Christine Bigby, looks at the use and development of program theory in human service organisations. [Resource]

Causal Attribution Video
This video guide, produced by UNICEF, outlines three broad strategies for causal attribution: 1) estimating the counterfactual; 2) checking the consistency of evidence for the causal relationships… [Resource]

Storyboard Logic Models Activity
This worksheet by the Action Evaluation Collective gives a step-by-step rundown of how to use storyboards to engage people in telling their stories. Its focus is on working with and engaging young people in a participatory process. [Resource]

Program Evaluation: a Plain English Guide
This 11-step guide defines program evaluation, what it is used for, the different types, and when they should be used. It also covers how to plan a program evaluation, monitor performance, communicate findings, and deliver… [Resource]

Rethinking Social Inquiry: Diverse Tools, Shared Standards (Second Edition)
This second edition of Rethinking Social Inquiry aims to redirect ongoing discussions of methodology in social and political science. [Resource]

Purposeful program theory: Effective use of theories of change and logic models
This book, by Sue Funnell and Patricia Rogers, discusses ways of developing, representing, and using programme theory and theories of change to suit particular situations. [Resource]

Contemporary thinking about causation in evaluation
This paper was produced following a discussion between Thomas Cook and Michael Scriven held at an Evaluation Café event jointly hosted by The Evaluation Center and Western Michigan University's Interdisciplinary PhD in Evaluation program. [Resource]

Week 19: Ways of framing the difference between research and evaluation
One of the challenges of working in evaluation is that important terms (like 'evaluation', 'impact', 'indicators', 'monitoring', and so on) are defined and used in very different ways by different people. [Blog]

Semana 19: Formas de descrever a diferença entre pesquisa e avaliação (Week 19: Ways of describing the difference between research and evaluation)
One of the challenges of working in evaluation is that important terms (like "evaluation", "impact", "indicators", "monitoring", and so on) are defined and used in very different ways by different people. [Blog]

BetterEvaluation community's views on the difference between evaluation and research
In May we blogged about ways of framing the difference between research and evaluation. We had terrific feedback on this issue from the international BetterEvaluation community, and this update shares the results. [Blog]

What are some methods and processes to help stakeholders articulate how they think a program works? (AES17 co-creation challenge #1)
The material from BetterEvaluation comes from a combination of curating existing material and co-creating new material. This blog is part of an ongoing series about material that we have co-created with BetterEvaluation users. [Blog]

User feedback on the difference between evaluation and research
This page contains thoughts from the BetterEvaluation community provided in response to the blog post on… [Blog]

Bradford Hill criteria for causal inference
Based on a presentation at the 2015 ANZEA Conference, this free downloadable book presents the Bradford Hill criteria and discusses some ways of using them in practice to draw causal conclusions. [Resource]

The environment and disease: Association or causation?
In this original article from 1965, Sir Austin Bradford Hill, Professor Emeritus of Medical Statistics, lays out what would ultimately come to be known as the Bradford Hill criteria. [Resource]

Environmental flows monitoring and assessment framework
This resource from the Cooperative Research Centre for Freshwater Ecology provides a framework for assessing environmental flow management plans. [Resource]

Enhancing program performance with logic models
Developed by the University of Wisconsin Extension service, this resource provides an introduction to developing and using a particular version of the results chain. [Resource]

Making causal claims
This brief, authored by John Mayne for the Institutional Learning and Change (ILAC) Initiative, argues the need for a different perspective on causality. [Resource]

Impact evaluation: A guide for commissioners and managers
This guide, written by Elliot Stern, aims to support managers and commissioners in gaining a deeper and broader understanding of impact evaluation. [Resource]

The rigor of case-based causal analysis: Busting myths through a demonstration
This paper focuses on the use of case-based designs for conducting causal analysis, dispelling two misconceptions about their use in the context of evaluation. [Resource]

Theory of change
This guide, written by Patricia Rogers for UNICEF, looks at the use of theory of change in an impact evaluation. [Resource]

UNICEF webinar: Overview: strategies for causal inference
What is causal attribution? Do you need a counterfactual to determine if something has caused a change? Professor Patricia Rogers provides an overview of how to determine causal attribution in impact evaluations. [Resource]

BetterEvaluation FAQ: How do you use program theory for evaluating systems?
Although it's sometimes referred to as program theory or program logic, theories of change can be used for interventions at any scale, including policies, whole-of-government initiatives, and systems. [Blog]

Using logic models and theories of change better in evaluation
Many evaluations include a process of developing… [Blog]

Mapping change: Using a theory of change to guide planning and evaluation
This guide, written by Anne MacKinnon and Natasha Arnott for GrantCraft, describes the process of developing a theory of change to support planning and evaluation. [Resource]

Theory maker
This free, open-source, web-based tool was made by Steve Powell as a quick and simple way of creating a theory of change. The information provided was supplied by Steve Powell. [Resource]

Monitoring and evaluation for thinking and working politically
This article explores the challenges of monitoring and evaluating politically informed and adaptive programmes in the international development field. Authors: Thomas Aston, Chris Roche, Marta Schaaf, and Sue Cant. [Resource]