Search

155 results

  • How to design an M&E framework for a policy research project

    This Methods Lab guidance note focuses on the design and structure of a monitoring and evaluation framework for policy research projects.
    Resource
  • Developing monitoring and evaluation frameworks + framework template

    This book, written by Anne Markiewicz and Ian Patrick, offers a step-by-step guide to developing a monitoring and evaluation framework.
    Resource
  • Top tips for young and emerging evaluators - Blog series

    This blog series shares advice for young and emerging evaluators from a range of experienced evaluation practitioners. The tips range from methodological advice to personal tips on building resilience and relationships.
    Resource
  • DIY M&E: A step-by-step guide to building a monitoring and evaluation framework

    This guide, written by Dana Cross of Grosvenor Management Consulting, gives an overview of how to create an M&E framework.
    Resource
  • Australian Volunteers program monitoring, evaluation and learning framework

    This example of a monitoring, evaluation and learning framework sets out the approach to assessing the performance of the Australian Volunteers Program. This resource and the following information was contributed by Jo Hall.
    Resource
  • Why do programs benefit from developing monitoring and evaluation frameworks?

    This guest blog is by Anne Markiewicz, Director of Anne Markiewicz and Associates, a consultancy that specialises in developing Monitoring and Evaluation Frameworks.
    Blog
  • From paper to practice: Supporting the uptake of high-level M&E frameworks

    Evaluation frameworks are often developed to provide a common reference point for evaluations of different projects that form a program, or different types of evaluations of a single program. 
    Blog
  • Pathways to professionalisation - Part 1: Professionalisation within the context of the AES

    In part 1 of this two-part blog series, Greet Peersman and Patricia Rogers introduce the ‘Pathways to advance professionalisation within the context of the AES’ project and report.
    Blog
  • Un-boxing the expert label

    This guest blog is the third in our series about un-boxing evaluation – the theme of aes19 in Sydney, Australia.
    Blog
  • 4 tips for planning your policy research M&E

    In this guest blog post, Tiina Pasanen, from the Overseas Development Institute (ODI), lays out four key ideas to keep in mind when designing an M&E framework for a policy research project.
    Blog
  • Health Policy Project: Strengthening capacity in policy, advocacy, governance, and finance: A facilitator guide for organizational capacity assessments

    The resource, developed by the Health Policy Project, is a self-assessment tool designed to align with an organization's mission concerning health policy, though the tool is useful more broadly outside the health sector.
    Resource
  • Evaluation framework

    An evaluation framework (sometimes called a Monitoring and Evaluation framework, or more recently a Monitoring, Evaluation and Learning framework) provides an overall framework for evaluations across different programs or different evaluations of a single program.
    Method
  • Pathways to advance professionalisation within the context of the AES

    This report by Greet Peersman and Patricia Rogers for the Australasian Evaluation Society (AES) identifies four potential pathways towards professionalisation within the context of the AES.
    Resource
  • Public impact fundamentals and observatory

    The Public Impact Fundamentals are a framework developed by the Centre for Public Impact to assess what makes a successful policy outcome and describe what can be done to maximise the chances of achieving public impact.
    Resource
  • Evaluation journals

    Evaluation journals play an important role in documenting, developing, and sharing theory and practice. They are an important component in strengthening evaluation capacity.
    Method
  • Multiple lines and levels of evidence

    Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of the evidence in terms of different indicators.
    Method
  • Journals and logs

    Journals and logs are forms of record-keeping tools that can be used to capture information about activities, results, conditions, or personal perspectives on how change occurred over a period of time.
    Method
  • Integrity

    Integrity refers to ensuring honesty, transparency, and adherence to ethical behaviour by all those involved in the evaluation process.
    Method
  • Cultural competency

    Cultural competency involves ensuring that evaluators have the skills, knowledge, and experience necessary to work respectfully and safely in cultural contexts different from their own.
    Method
  • Feasibility

    Feasibility refers to ensuring that an evaluation can be realistically and effectively implemented, considering factors such as practicality, resource use, and responsiveness to the programme's context, including its culture.
    Method
  • Inclusion of diverse perspectives

    Inclusion of diverse perspectives requires attention to ensure that marginalised people and communities are adequately engaged in the evaluation.
    Method
  • Independence

    Independence can include organisational independence, where an evaluator or evaluation team can independently set a work plan and finalise reports without undue interference, and behavioural independence, where evaluators can conduct and re
    Method
  • Evaluation accountability

    Evaluation accountability relates to processes in place to ensure the evaluation is carried out transparently and to a high-quality standard.
    Method
  • Transferability

    Transferability involves presenting findings in a way that they can be applied in other contexts or settings, considering the local culture and context to enhance the utility and reach of evaluation insights.
    Method
  • Utility

    Utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.
    Method
  • Professionalism

    Professionalism within evaluation is largely understood in terms of high levels of competence and ethical practice.
    Method
  • Propriety

    Propriety refers to ensuring that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in it and those affected by its results.
    Method
  • Systematic inquiry

    Systematic inquiry involves thorough, methodical, contextually relevant and empirical inquiry into evaluation questions. Systematic inquiry is one of the guiding principles of the American Evaluation Association.
    Method
  • Transparency

    Transparency refers to the evaluation processes and conclusions being able to be scrutinised.
    Method
  • Ethical practice

    Ethical practice in evaluation can be understood in terms of designing and conducting an evaluation to minimise any potential for harm and to maximise the value of the evaluation.
    Method
  • Accuracy

    Accuracy refers to the correctness of the evidence and conclusions in an evaluation. It may have an implication of precision.
    Method
  • Accessibility

    Accessibility of evaluation products includes consideration of the format and access options for reports, including plain language, inclusive print design, material in multiple languages, and material in alternative formats (such as online formats).
    Method