Search

153 results

  • RQ+ Research Quality Plus. A Holistic Approach to Evaluating Research

    This report describes a holistic approach and assessment framework for evaluating 'research' that goes beyond the traditional deliberative means (e.g., peer review) and commonly used analytics (e.g., bibliometrics).
    Resource
  • Excel for evaluation

    This website, created by Ann Emery, provides a series of short videos on using Microsoft Excel to analyze data.
    Resource
  • Positioning participation on the power spectrum

    In the second blog in the 4-part series about participation in evaluation, Irene Guijt and Leslie Groves focus on making power relationships and values in 'participatory' evaluation processes explicit to avoid tokenistic participation.
    Blog
  • Joint after-action review of our humanitarian response to the tsunami crisis

    This paper outlines the findings from the Joint After Action Review conducted to evaluate the response to the 2004 Indian Ocean tsunami.
    Resource
  • Handbook on monitoring, evaluating and managing knowledge for policy influence

    This handbook from the Center for the Implementation of Public Policies Promoting Equity and Growth (CIPPEC) is designed to support research institutions in developing monitoring, evaluation, and knowledge management practices for policy influence.
    Resource
  • Regression discontinuity

    Regression Discontinuity Design (RDD) is a quasi-experimental evaluation option that measures the impact of an intervention, or treatment, by applying a treatment assignment mechanism based on a continuous eligibility index: a variable with a clearly defined cut-off that determines who receives the treatment (see the sketch after this list).
    Method
  • Tools for knowledge and learning: A guide for development and humanitarian organisations

    This tool kit presents entry points and references to the wide range of tools and methods that have been used to facilitate improved knowledge and learning in the development and humanitarian sectors.
    Resource
  • Knowledge management and organizational learning

    This article provides an overview of knowledge management and its role in organisational learning.
    Resource
  • Quasi-experimental methods for impact evaluations

    This video lecture, given by Dr Jyotsna Puri for the Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie), demonstrates how the use of quasi-experimental methods can circumvent the challenge of creating a credible counterfactual when randomisation is not possible.
    Resource
  • Quasi-experimental design and methods

    This guide, written by Howard White and Shagun Sabarwal for UNICEF, looks at the use of quasi-experimental design and methods in impact evaluation.
    Resource
  • UNICEF webinar: Quasi-experimental design and methods

    What is the main difference between quasi-experiments and RCTs? How can I measure impact when establishing a control group is not an option? (The difference-in-differences sketch after this list illustrates one answer.)
    Resource
  • Ethics framework and guidelines: A guide for research funding organizations implementing participatory activities

    This framework supports the ethical preparation, implementation, and evaluation of participatory processes in research funding and (applied) research & innovation (R&I).
    Resource
  • Multiple lines and levels of evidence

    Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of the evidence in terms of different indicators of causality (levels of evidence).
    Method
  • Journals and logs

    Journals and logs are forms of record-keeping tools that can be used to capture information about activities, results, conditions, or personal perspectives on how change occurred over a period of time.
    Method
  • SAVE Toolkit: Technologies for monitoring in insecure environments

    In this toolkit from the SAVE research programme, users can find a detailed summary of technologies suited to monitoring in insecure environments, including applications, their pros and cons, as well as many links to more detailed information.
    Resource
  • Integrity

    Integrity refers to ensuring honesty, transparency, and adherence to ethical behaviour by all those involved in the evaluation process.
    Method
  • Cultural competency

    Cultural competency involves ensuring that evaluators have the skills, knowledge, and experience necessary to work respectfully and safely in cultural contexts different from their own.
    Method
  • Feasibility

    Feasibility refers to ensuring that an evaluation can be realistically and effectively implemented, considering practicality, resource use, and responsiveness to the programme's context, including its culture and politics.
    Method
  • Inclusion of diverse perspectives

    Inclusion of diverse perspectives requires attention to ensure that marginalised people and communities are adequately engaged in the evaluation.
    Method
  • Independence

    Independence can include organisational independence, where an evaluator or evaluation team can independently set a work plan and finalise reports without undue interference, and behavioural independence, where evaluators can conduct and report on evaluations free from undue pressure or influence.
    Method
  • Evaluation accountability

    Evaluation accountability relates to processes in place to ensure the evaluation is carried out transparently and to a high-quality standard.
    Method
  • Transferability

    Transferability involves presenting findings so that they can be applied in other contexts or settings, taking local culture and context into account to enhance the utility and reach of evaluation insights.
    Method
  • Utility

    Utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.
    Method
  • Professionalism

    Professionalism within evaluation is largely understood in terms of high levels of competence and ethical practice.
    Method
  • Propriety

    Propriety refers to ensuring that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in it and those affected by its results.
    Method
  • Systematic inquiry

    Systematic inquiry involves thorough, methodical, contextually relevant, and empirical investigation of evaluation questions. It is one of the guiding principles of the American Evaluation Association.
    Method
  • Transparency

    Transparency refers to ensuring that evaluation processes and conclusions are open to scrutiny.
    Method
  • Ethical practice

    Ethical practice in evaluation can be understood in terms of designing and conducting an evaluation to minimise any potential for harm and to maximise the value of the evaluation.
    Method
  • Accuracy

    Accuracy refers to the correctness of the evidence and conclusions in an evaluation; it may also carry an implication of precision.
    Method
  • Accessibility

    Accessibility of evaluation products includes consideration of the format and access options for reports, including plain language, inclusive print design, material in multiple languages, and material in alternative formats (such as online, audio, and braille versions).
    Method
  • Competence

    Competence refers to ensuring that the evaluation team has or can draw on the skills, knowledge and experience needed to undertake the evaluation.
    Method
  • Outcome harvesting

    Outcome Harvesting collects (“harvests”) evidence of what has changed (“outcomes”) and, working backwards, determines whether and how an intervention has contributed to these changes.
    Approach
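
The regression discontinuity entry above describes treatment assignment by a cut-off on a continuous eligibility index. Below is a minimal sketch of how a sharp RDD estimate works, using simulated data and an assumed cut-off of 50; all variable names and numbers are illustrative and not drawn from the resources listed above.

    # Minimal sharp RDD sketch: units scoring below a hypothetical
    # cut-off of 50 on a continuous eligibility index receive treatment.
    import numpy as np

    rng = np.random.default_rng(0)
    n, cutoff = 1_000, 50.0

    # Simulate an eligibility index and an outcome with a true jump
    # of 5.0 at the cut-off.
    score = rng.uniform(0, 100, n)
    treated = score < cutoff                      # sharp assignment rule
    outcome = 0.3 * score + 5.0 * treated + rng.normal(0, 2, n)

    # Local linear regressions on each side of the cut-off,
    # restricted to a bandwidth around it.
    bandwidth = 10.0
    left = (score >= cutoff - bandwidth) & (score < cutoff)
    right = (score >= cutoff) & (score <= cutoff + bandwidth)

    # Fit a line to each side and compare the two predictions at the
    # cut-off itself: the gap is the estimated local treatment effect.
    b_left = np.polyfit(score[left], outcome[left], 1)
    b_right = np.polyfit(score[right], outcome[right], 1)
    effect = np.polyval(b_left, cutoff) - np.polyval(b_right, cutoff)
    print(f"Estimated effect at the cut-off: {effect:.2f}")  # ~5.0

The bandwidth choice is the key design decision here: a narrower window reduces bias from curvature in the outcome-score relationship at the cost of using fewer observations.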
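The quasi-experimental entries above ask how impact can be measured when a randomised control group is not an option. One common answer is difference-in-differences, sketched below on hypothetical two-period data; this is purely illustrative and not code from any of the listed resources.

    # Minimal difference-in-differences sketch: a comparison group is
    # constructed from untreated units rather than by random assignment.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Baseline and follow-up outcomes for a treated group and a
    # non-randomised comparison group sharing the same underlying
    # trend (+2 per period) but starting from different levels.
    base_t = rng.normal(10, 1, n)                     # treated, before
    base_c = rng.normal(12, 1, n)                     # comparison, before
    follow_t = base_t + 2 + 3 + rng.normal(0, 1, n)   # trend + true effect 3
    follow_c = base_c + 2 + rng.normal(0, 1, n)       # trend only

    # Change in the treated group minus change in the comparison group
    # removes both the level gap and the common trend.
    did = (follow_t.mean() - base_t.mean()) - (follow_c.mean() - base_c.mean())
    print(f"Difference-in-differences estimate: {did:.2f}")  # ~3.0

The estimate is only as credible as the parallel-trends assumption built into the simulation: if the two groups would have trended differently without the intervention, the difference-in-differences no longer isolates the treatment effect.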