19 results
Data collection & analysis video
This video guide from UNICEF looks at the issues involved in choosing and using data collection and analysis methods for impact evaluations. [Resource]

Webinar recording: When the ‘field’ is online – qualitative data collection
This NVivo webinar explores ways that researchers can adapt their research approach using online data collection when face-to-face fieldwork isn’t possible. [Resource]

Collecting data on sensitive issues
Image: Polling Booths, by PetroleumJelliffe on Flickr [Blog]

Big data for development: challenges & opportunities
This white paper by UN Global Pulse examines the use of Big Data in development contexts. [Resource]

Objectives-Based Evaluation (OBE) for impact investing
Bob Picciotto is a former Director General of the Independent Evaluation Group, which oversees evaluation in the International Finance Corporation, an agency dedicated to the promotion of private sector development in developing… [Blog]

L’évaluation en contexte de développement
This manual is intended for people who want an introduction to program evaluation, particularly in the context of development and international cooperation. To that end, while following the thread of a classic evaluative process, it presents… [Resource]

Best of AEA365: Approaching document review in a systematic way
In this blog post, Linda Cabral discusses document reviews and highlights the importance of conducting them systematically. [Resource]

Creating Rubrics
This web page gives detailed, guided assistance in creating rubrics. [Resource]

The rubric revolution
Three linked presentations from Jane Davidson, Nan Wehipeihana & Kate McKegg explaining how rubrics can be used to ensure evaluations validly answer evaluative questions. [Resource]

Data collection methods for evaluation: Document review
This resource from the Centers for Disease Control and Prevention (CDC) provides a brief guide to using document review as a data collection method for evaluation. [Resource]

Regression discontinuity
Regression Discontinuity Design (RDD) is a quasi-experimental evaluation option that measures the impact of an intervention, or treatment, by applying a treatment assignment mechanism based on a continuous eligibility index, which is a variable… [Method]

Quasi-experimental methods for impact evaluations
This video lecture, given by Dr Jyotsna Puri for the Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie), demonstrates how the use of quasi-experimental methods can circumvent the challenge of creating… [Resource]

Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence
This report provides a detailed description of an evaluation, written by Judy Oakden as part of the first BetterEvaluation writeshop process, led by Irene Guijt. [Resource]

Quasi-experimental design and methods
This guide, written by Howard White and Shagun Sabarwal for UNICEF, looks at the use of quasi-experimental design and methods in impact evaluation. [Resource]

UNICEF webinar: Overview of data collection and analysis methods in Impact Evaluation
What is the value of using mixed methods in impact evaluation? What methods and designs are appropriate for answering descriptive, causal and evaluative questions? [Resource]

UNICEF webinar: Quasi-experimental design and methods
What is the main difference between quasi-experiments and RCTs? How can I measure impact when establishing a control group is not an option? [Resource]

Rubrics
A rubric is a framework that sets out criteria and standards for different levels of performance and describes what performance would look like at each level. [Method]

Guest blog: Why rubrics are useful in evaluations
In Aotearoa New Zealand, the use of rubrics has been adopted across a number… [Blog]

52 weeks of BetterEvaluation: Week 11: Using rubrics
The term "rubric" is often used in education to refer to a systematic way of setting out the expectations for students in terms of what would constitute poor, good and excellent performance. [Blog]
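Several of the results above describe rubrics as frameworks that pair criteria with described performance levels. As a minimal sketch of that idea, the structure can be expressed in code; note that the criterion, labels, and descriptions below are illustrative assumptions, not taken from any of the listed resources.

```python
# A rubric sketched as data: each criterion maps to ordered levels,
# and each level pairs a score with a label and a performance description.
# All names, scores, and wording here are hypothetical examples.

RUBRIC = {
    "community engagement": [
        (1, "poor", "Little or no consultation with stakeholders."),
        (2, "good", "Stakeholders consulted at key decision points."),
        (3, "excellent", "Stakeholders help design and review the work."),
    ],
}

def describe_level(criterion: str, score: int) -> str:
    """Return the label and description matching a score on a criterion."""
    for level_score, label, description in RUBRIC[criterion]:
        if score == level_score:
            return f"{label}: {description}"
    raise ValueError(f"No rubric level for score {score}")

# Example: describe_level("community engagement", 2)
```

The point of the sketch is that a rubric makes judgments transparent: a numeric score is always accompanied by the agreed description of what that level of performance looks like.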