25 results
Do health improvement programmes fit with MRC guidance on evaluating complex interventions?
This article, authored by MacKenzie, O'Donnell, Halliday, E. … (Resource)

What scientific idea is ready for retirement: Large randomized controlled trials
This comment, written by Dean Ornish and published on the Edge.org blog What scientific idea is ready for retirement, argues that larger studies do not always equate to more rigorous or definitive re… (Resource)

IDEAS book launch: Randomized control trials in the field of development, a critical perspective [Webinar]
This webinar to launch the book Randomized Control Trials in the Field of Development: A Critical Perspective brings together five representatives of the book's editors and authors for a discussion around some of the key… (Resource)

Contemporary thinking about causation in evaluation
This paper was produced following a discussion between Thomas Cook and Michael Scriven held at The Evaluation Center and Western Michigan University’s Interdisciplinary PhD in Evaluation program at a jointly hosted Evaluation Café event on… (Resource)

Do labor market policies have displacement effects? Evidence from a clustered randomized experiment
This resource reports the results from a randomized experiment designed to evaluate the direct and indirect (displacement) impacts of job placement assistance on the labor market outcomes of young, educated job seekers in France. (Resource)

Un-boxing evaluation through developmental and agile approaches
Guest author Nerida Buckley discusses how un-boxing evaluation can benefit from looking at practices from developmental and agile approaches. (Blog)

Beyond the evaluation box – Social innovation with Ingrid Burkett
This blog is the sixth in our series about un-boxing evaluation – the theme of aes19 in Sydney. (Blog)

Week 47: Rumination #3: Fools' gold: the widely touted methodological "gold standard" is neither golden nor a standard
This week's post is an abbreviated version of a "rumination" from the… (Blog)

Pathways to professionalisation - Part 1: Professionalisation within the context of the AES
In part 1 of this two-part blog series, Greet Peersman and Patricia Rogers introduce the ‘Pathways to advance professionalisation within the context of the AES’ project and report. (Blog)

Pathways to professionalisation - Part 2: Options for professionalisation
In the previous blog in this series, Greet Peersman and Patricia Rogers introduced the ‘Pathways to advance professionalisation within the context of the AES’ project and report. (Blog)

AES 2018 conference reflections: Power, values, and food
In this guest blog, Fran Demetriou (Lirata Consulting and volunteer M&E advisor for the Asylum Seeker Resource Centre’s Mentoring Program) shares her reflections from the recent Australasian Evaluation Society (AES) 2018 conference… (Blog)

What does it mean to ‘un-box’ evaluation?
This guest blog by Jade Maloney is the first in a series about un-boxing evaluation – the theme of aes19 in Sydney, Australia. (Blog)

Un-boxing NGO evaluation
This blog is the fourth in our series about un-boxing evaluation – the theme of aes19 in Sydney, Australia. (Blog)

What would an evaluation conference look like if it was run by people who know and care about presenting information to support use? (hint - that should be us)
All too often, conferences fail to make good use of the experience and knowledge of the people attending, with most time spent presenting prepared material that could be better delivered in other ways, and not enough time spent on discussions and… (Blog)

The rubric revolution
Three linked presentations from Jane Davidson, Nan Wehipeihana & Kate McKegg explaining how rubrics can be used to ensure evaluations validly answer evaluative questions. (Resource)

Conditions to consider in the use of randomized experimental designs in evaluation
This paper, written by George Julnes, University of New Mexico, Melvin M. Mark, Penn State University, and Stephanie Shipman, U.S. … (Resource)

Expectation of ongoing competency development
An expectation that members of an association or organisation will engage in ongoing competency development. (Method)

Randomised control trials for the impact evaluation of development initiatives: a statistician's point of view
This paper from the Institutional Learning and Change (ILAC) Initiative provides a range of technical and practical reflections on the use of randomised control trials in impact evaluation. (Resource)

Introduction to randomized control trials
This video lecture given by Dr Annette Brown for the Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie) describes how to create a valid counterfactual using randomize… (Resource)

Randomized controlled trials (RCTs) video guide
This video guide, produced by UNICEF, summarises the key features of RCTs with a particular emphasis on their use in impact evaluation. (Resource)

Randomized controlled trials (RCTs)
This guide, written by Howard White, Shagun Sabarwal and Thomas de Hoop for UNICEF, looks at the use of randomized control trials (RCTs) in impact evaluation. (Resource)

UNICEF webinar: Randomized controlled trials
What are the key features of an RCT? Are RCTs really the gold standard? What ethical and practical issues do I need to consider before deciding to do an RCT? (Resource)

6: Sample size and power calculations
This presentation explores methods for identifying the right sample size for randomized evaluations so that results are defensible. (Resource)

Case study: QuIP & RCT to evaluate a cash transfer and gender training programme in Malawi
This case study discusses the combination of the Qualitative Impact Assessment Protocol (QuIP) and Randomised Control Trial (RCT) approaches in the evaluation of Concern Worldwide's "Graduation" programme. (Resource)

Impact evaluation in 7 or 8 steps
This slideshow takes the viewer through the process of designing an impact evaluation. (Resource)