Five misunderstandings about case-study research
This article, written by Bent Flyvbjerg (Aalborg University, Denmark), examines five common misunderstandings about case-study research. (Resource)

RQ+ Research Quality Plus. A Holistic Approach to Evaluating Research
This report describes a holistic approach and assessment framework for evaluating 'research' that goes beyond traditional deliberative means (e.g., peer review) and often-used analytics (e.g., bibliometrics). (Resource)

Case Study Research: Design and Methods
Providing a complete portal to the world of case study research, the Fifth Edition of Robert K. Yin's bestselling text offers comprehensive coverage of the design and use of the case study method as a valid research tool. (Resource)

Excel for evaluation
This website, created by Ann Emery, provides a series of short videos on using Microsoft Excel to analyze data. (Resource)

Positioning participation on the power spectrum
In the second blog in this four-part series about participation in evaluation, Irene Guijt and Leslie Groves focus on making power relationships and values in 'participatory' evaluation processes explicit, to avoid tokenistic participation. (Blog)

Handbook on monitoring, evaluating and managing knowledge for policy influence
This handbook from the Center for the Implementation of Public Policies Promoting Equity and Growth (CIPPEC) is designed to support research institutions in developing monitoring, evaluation and knowledge management for policy influence. (Resource)

Regression discontinuity
Regression Discontinuity Design (RDD) is a quasi-experimental evaluation option that measures the impact of an intervention, or treatment, by applying a treatment assignment mechanism based on a continuous eligibility index: a variable with a cut-off point that determines who is eligible for the treatment. (Method)

Tools for knowledge and learning: A guide for development and humanitarian organisations
This toolkit presents entry points and references to the wide range of tools and methods that have been used to facilitate improved knowledge and learning in the development and humanitarian sectors. (Resource)

Knowledge management and organizational learning
This article provides an overview of knowledge management and its role in organisational learning. (Resource)

Quasi-experimental methods for impact evaluations
This video lecture, given by Dr Jyotsna Puri for the Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie), demonstrates how the use of quasi-experimental methods can circumvent the challenge of creating a counterfactual. (Resource)

Quasi-experimental design and methods
This guide, written by Howard White and Shagun Sabarwal for UNICEF, looks at the use of quasi-experimental design and methods in impact evaluation. (Resource)

UNICEF webinar: Quasi-experimental design and methods
What is the main difference between quasi-experiments and RCTs? How can I measure impact when establishing a control group is not an option? (Resource)

Ethics framework and guidelines: A guide for research funding organizations implementing participatory activities
This framework supports the ethical preparation, implementation, and evaluation of participatory processes in research funding and (applied) research & innovation (R&I). (Resource)

Multiple lines and levels of evidence
Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of that evidence (levels of evidence). (Method)

Journals and logs
Journals and logs are record-keeping tools that can be used to capture information about activities, results, conditions, or personal perspectives on how change occurred over a period of time. (Method)

SAVE Toolkit: Technologies for monitoring in insecure environments
In this toolkit from the SAVE research programme, users can find a detailed summary of technologies suited to monitoring in insecure environments, including applications, their pros and cons, as well as many links to more detailed information. (Resource)

Integrity
Integrity refers to ensuring honesty, transparency, and adherence to ethical behaviour by all those involved in the evaluation process. (Method)

Cultural competency
Cultural competency involves ensuring that evaluators have the skills, knowledge, and experience necessary to work respectfully and safely in cultural contexts different from their own. (Method)

Feasibility
Feasibility refers to ensuring that an evaluation can be realistically and effectively implemented, considering factors such as practicality, resource use, and responsiveness to the programme's context, including culture and politics. (Method)

Inclusion of diverse perspectives
Inclusion of diverse perspectives requires attention to ensure that marginalised people and communities are adequately engaged in the evaluation. (Method)

Independence
Independence can include organisational independence, where an evaluator or evaluation team can independently set a work plan and finalise reports without undue interference, and behavioural independence, where evaluators can conduct and report on an evaluation without undue influence. (Method)

Evaluation accountability
Evaluation accountability relates to processes in place to ensure the evaluation is carried out transparently and to a high-quality standard. (Method)

Transferability
Transferability involves presenting findings so that they can be applied in other contexts or settings, considering the local culture and context to enhance the utility and reach of evaluation insights. (Method)

Utility
Utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs. (Method)

Professionalism
Professionalism within evaluation is largely understood in terms of high levels of competence and ethical practice. (Method)

Propriety
Propriety refers to ensuring that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in it and those affected by its results. (Method)

Systematic inquiry
Systematic inquiry involves thorough, methodical, contextually relevant and empirical inquiry into evaluation questions. It is one of the guiding principles of the American Evaluation Association. (Method)

Transparency
Transparency refers to evaluation processes and conclusions being open to scrutiny. (Method)

Ethical practice
Ethical practice in evaluation can be understood in terms of designing and conducting an evaluation to minimise any potential for harm and to maximise the value of the evaluation. (Method)

Accuracy
Accuracy refers to the correctness of the evidence and conclusions in an evaluation; it may also carry an implication of precision. (Method)

Accessibility
Accessibility of evaluation products includes consideration of the format and access options for reports, including plain language, inclusive print design, material in multiple languages, and material in alternative formats (such as online formats). (Method)

Competence
Competence refers to ensuring that the evaluation team has, or can draw on, the skills, knowledge and experience needed to undertake the evaluation. (Method)