Search: 13 results

Using Mobile Data for Development
This guide, written by Ed Naef, Philipp Muelbert, Syed Raza, Raquel Frederick, Jake Kendall and Nirant Gupta for Cartesian and the … (Resource)

Technology for Evaluation in Fragile and Conflict Affected States: An Introduction for the Digital Immigrant Evaluator
This paper aims to help evaluators working in fragile and conflict affected states (FCAS) to determine which technology may be useful in various phases of an evaluation. (Resource)

Poimapper
Poimapper is a mobile data collection solution designed for monitoring the status and progress of field work in any specific area. (Resource)

Discussion Paper: Innovations in Monitoring and Evaluation
This discussion paper, produced by the United Nations Development Programme, discusses various innovations occurring in M&E and the advantages and disadvantages of these methods. (Resource)

Big data for development: challenges & opportunities
This white paper by UN Global Pulse examines the use of Big Data in development contexts. (Resource)

Week 19: Ways of framing the difference between research and evaluation
One of the challenges of working in evaluation is that important terms (like ‘evaluation’, ‘impact’, ‘indicators’, ‘monitoring’ and so on) are defined and used in very different ways by different people. (Blog)

Semana 19: Formas de descrever a diferença entre pesquisa e avaliação
One of the challenges of working in evaluation is that important terms (like ‘evaluation’, ‘impact’, ‘indicators’, ‘monitoring’ and so on) are defined and used in very different ways by different people. (Blog)

BetterEvaluation community's views on the difference between evaluation and research
In May we blogged about ways of framing the difference between research and evaluation. We had terrific feedback on this issue from the international BetterEvaluation community and this update shares the results. (Blog)

User feedback on the difference between evaluation and research
This page contains thoughts from the BetterEvaluation community provided in response to the blog post on … (Blog)

Machine learning and meta-ethnography: Seven steps to synthesising 578 evaluations into four themes
This paper documents a case study using machine learning and meta-ethnography techniques to synthesise and draw lessons from 578 evaluations. This paper is part of the BetterEvaluation Innovation Working Paper series. (Resource)

Machine learning in evaluative synthesis: Lessons from private sector evaluation in the World Bank Group
An exploration of the potential to use machine learning techniques to enhance the efficiency of analyzing, classifying, and synthesizing extensive amounts of text in evaluation research. (Resource)

Advanced content analysis: Can artificial intelligence accelerate theory-driven complex program evaluation?
This paper presents the methodology and results of an assessment of the applicability and utility of artificial intelligence for advanced theory-based content analysis. (Resource)

Leveraging imagery data in evaluations: Applications of remote-sensing and streetscape imagery analysis
This paper discusses using imagery data in evaluations and the advantages and limitations of relevant methodologies. (Resource)