This report from the Open Data Institute discusses key concepts including ‘data stewardship’, ‘data ethics’, ‘data justice’, ‘data for good’, ‘responsible data’, ‘data sovereignty’, ‘Indigenous data sovereignty’, and ‘data feminism’.
Patricia Rogers

Contributed by this member
- This set of webpages and video from the Department of Education in New South Wales, Australia, provides background information on evaluative thinking and its use.
- This resource, from the Footprint Evaluation Initiative, discusses how the six evaluation criteria of the OECD DAC (Organisation for Economic Co-operation and Development – Development Assistance Committee) can be used to get environmental sustainability considered in evaluations.
- In this recording of the 2022 NEC Conference's session on environmental sustainability, the Footprint Evaluation Initiative’s Patricia Rogers and Andy Rowe joined Joana Varela (Ministry of Finance and Blue Economy Plan, São Tomé and Príncipe).
- In part one of this three-part webinar series, Andy Rowe and Patricia Rogers discuss what was learnt during the Footprint Evaluation Initiative's first phase of work.
- This blog provides guidance and examples on co-evaluating with lived and living experience (LLE) researchers.
- This paper provides guidance on Environmental Approach for Generational Impact – a proposed approach to designing and evaluating programmes in a way that includes consideration of environment and biodiversity issues.
- In the last part of this three-part webinar series, Andy Rowe and Patricia Rogers introduce a typology being developed that will assist a wide range of evaluations in assessing the effect of interventions on natural systems and sustainability.
- In this EvalSDGs Insight paper, Scott Chaplowe and Juha Uitto argue that there is an urgent need to mainstream environmental sustainability in evaluation.
- In part two of this three-part webinar series, Jane Davidson and Patricia Rogers discuss several ways to get sustainability on the evaluation agenda, even for projects that have no explicit environmental objectives and where there is no mention of the environment.
- This blog is the second of a 2-part series on the issues raised about COP26 in papers published in the journal Evaluation.
- This blog is the first of a 2-part series on the issues raised about COP26 in papers published in the journal Evaluation.
- This chapter from Transformational Evaluation for the Global Crises of Our Times argues for the need to transform evaluation in the light of current environmental crises and sets out the major ways this needs to happen.
- This guide sets out the rationale for why transformative equity needs to be addressed by all evaluations, especially in the South African context of high inequality, and how this might be done during the commissioning, design and conduct of an evaluation.
- This guide sets out the rationale for why climate and ecosystem health need to be addressed by all evaluations and how this might be done during the commissioning, design and conduct of an evaluation.
- Happy Holidays! As we head into 2022 we thought we'd share a list of resources for you to peruse in the new year.
- This blog by Jo Hall and Patricia Rogers provides an update on the Global Partnership for Better Monitoring project.
- This webpage provides 13 separate rubrics developed for different aspects of the Measurable Gains Framework in the Māori Education Strategy.
- We’re currently going through a global period of rapid change and adaptation, due in large part to the effects of the COVID-19 pandemic on our lives and work.
- While we’re happy to wish the year 2020 farewell, many of the challenges and difficulties that arose over the past 12 months are still with us, as is the sadness over the many different forms of loss we’ve all experienced.
- This paper provides detailed guidance on using big data to fill data gaps in impact evaluations.
- We’re continuing our series, sharing ideas and resources on ways of ensuring that evaluation adequately responds to the new challenges during the pandemic.
- Given the numerous interconnected environmental crises the world faces, there is an urgent need to include consideration of environmental impacts into all evaluations.
- We’re excited to be involved in the 2020 IPDET Hackathon – a week-long event in which hundreds of people from around the world bring together their skills,
- The Covid-19 pandemic has led to rapid changes in the activities and goals of many organisations, whether these relate to addressing direct health impacts, the consequential economic and social impacts or to the need to change the way thing
- Organisations around the world are quickly having to adapt their programme and project activities to respond to the COVID-19 pandemic and its consequences. We’re starting a new blog series to help support these efforts.
- This paper is the first in the BetterEvaluation Monitoring and Evaluation for Adaptive Management working paper series.
- Drawing on interviews with 19 UK evaluation commissioners and contractors, this paper investigates the role of evaluation commissioning in hindering the take-up of complexity-appropriate evaluation methods and explores ways of improving this.
- In this first blog of 2019, Patricia Rogers, Greet Peersman and Alice Macfarlan examine how New Year's resolutions are similar to many evaluation practices.
- This special issue of New Directions in Evaluation includes discussions of different types of sustainability – sustainable environment, sustainable development, sustainable programs, and sustainable evaluation systems – and a synthesis of the issues raised.
- This blog introduces the 'Context Matters' framework - a living tool that builds on and contributes to learning and thinking on evidence-informed policy making, by providing a lens through which to examine the context (internal and external) of evidence use.
- This freely available, online book brings together case studies using an impact evaluation approach, the Qualitative Impact Protocol (QUIP), without a control group, using narrative causal statements elicited directly from intended project beneficiaries.
- This collection of resources - toolkit, short guide and cheat sheet - sets out the challenges of talking about climate change and presents effective strategies to address them.
- In part 1 of this two-part blog series, Greet Peersman and Patricia Rogers introduce the ‘Pathways to advance professionalisation within the context of the AES’ project and report.
- In the previous blog in this series, Greet Peersman and Patricia Rogers introduced the ‘Pathways to advance professionalisation within the context of the AES’ project and report.
- This blog is an abridged version of the brief Innovations in evaluation: How to choose, develop and support them, written by Patricia Rogers and Alice Macfarlan.
- This is the second of a two-part blog on strategies to support the use of evaluation, building on a session the BetterEvaluation team facilitated at the American Evaluation Association conference last year.
- What can be done to support the use of evaluation? How can evaluators, evaluation managers and others involved in or affected by evaluations support the constructive use of findings and evaluation processes?
- This webinar (recorded January 23, 2018) features an interview with Michael Quinn Patton and discusses his latest book - Principles-Focused Evaluation: The GUIDE - and the principles-focused evaluation (P-FE) approach, exploring its relevance.
- A clear and well-informed guide to evaluating value for money which addresses important issues including the limitations of using indicators, especially for complex interventions, and the need to address unintended impacts and complicated causal pathways.
- This toolkit, developed by the Victorian Department of Health and Human Services, provides a step-by-step guide to developing and implementing a successful stakeholder engagement plan.
- How can programs and organizations ensure they are adhering to core principles—and assess whether doing so is yielding desired results?
- Written by Mary Marcussen, this chapter from the online resource, Principal Investigator's Guide (produced by Informal Science), is about locating an evaluator well matched to your project needs.
- This visual overview was developed through a research process that identified, defined and illustrated 16 key features of complex systems.
- This short guide from the Poverty Action Lab presents six rules very clearly, with helpful diagrams to explain or reinforce the points.
- These examples have been contributed for discussion at the 'flipped conference' session of the American Evaluation Association to be held Saturday (November 11, 2017) 09:15 AM - 10:00 AM in the Thurgood Marshall North Room.
- In this flipped conference session, we invite participants (evaluators, evaluation managers and evaluation capacity developers around the world) to build and share knowledge about what can be done to support the use of evaluation findings after they've been delivered.
- We have all been there. You dive into a new book or head to a conference/workshop/course and come out all fired up about a new evaluation method. But when you get back to the real world, applying it turns out to be harder than you thought! What next?
- Although sometimes referred to as program theory or program logic, a theory of change can be used for interventions at any scale, including policies, whole-of-government initiatives, and systems.
- Many evaluations include a process of developing a theory of change.
- Part of our commitment to better evaluation is making sure that evaluation itself is evaluated better. Like any intervention, evaluations can be evaluated in different ways.
- Adaptive management is usually understood to refer to an iterative process of reviewing and making changes to programmes and projects throughout implementation.
- All too often conferences fail to make good use of the experience and knowledge of the people attending, with most time spent presenting prepared material that could be better delivered in other ways, and not enough time spent on discussion and interaction.
- This article presents an example of a rigorous non-counterfactual causal analysis that describes how different evidence and methods were used together for causal inference without a control group or comparison group.
- The Potent Presentations Initiative (p2i) (sponsored by the American Evaluation Association) was created to help evaluators improve their presentation skills, both at conferences and in their everyday evaluation practice.
- In this edition of the BE FAQ blog, we address a question that comes up quite often: How do you go about analysing data that has been collected from respondents via a questionnaire?
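As a generic illustration (not taken from the blog itself), a common first step with closed-ended questionnaire items is a simple frequency tabulation; the response data below are hypothetical:

```python
# Illustrative sketch: tabulate frequencies and percentages for one
# closed-ended questionnaire item. The responses list is hypothetical data.
from collections import Counter

responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]

counts = Counter(responses)
total = len(responses)
for option, n in counts.most_common():
    print(f"{option}: {n} ({n / total:.0%})")
```

The same tally approach extends naturally to cross-tabulations once responses are grouped by another variable.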
- A theory of change can be very useful in designing an impact evaluation, but what kinds of theories should we use?
- The updated 2016 UNEG Norms and Standards for Evaluation, a UNEG foundational document, is intended to apply to all United Nations evaluations.
- Many development programme staff have had the experience of commissioning an impact evaluation towards the end of a project or programme only to find that the monitoring system did not provide adequate data about implementation, context, or baselines.
- Are there particular examples of evaluations that you return to think about often?
- Impact evaluation, like many areas of evaluation, is under-researched. Doing systematic research about evaluation takes considerable resources, and is often constrained by the availability of information about evaluation practice.
- In development, government and philanthropy, there is increasing recognition of the potential value of impact evaluation.
- Welcome to the International Year of Evaluation!
- In May we blogged about ways of framing the difference between research and evaluation. We had terrific feedback on this issue from the international BetterEvaluation community and this update shares the results.
- Two years ago, during the European Evaluation Society conference in Helsinki, the BetterEvaluation.org website went live for public access.
- Being able to compare alternatives is essential when designing an evaluation.
- Case studies are often used in evaluations – but not always in ways that use their real potential.
- One of the challenges of working in evaluation is that important terms (like ‘evaluation’, ‘impact’, ‘indicators’, ‘monitoring’ and so on) are defined and used in very different ways by different people.
- This week we start the first in an ongoing series of Real-Time Evaluation Queries, where BetterEvaluation members ask for advice and assistance with something they are working on, and together we suggest some strategies and useful resources.
- BetterEvaluation was privileged to sponsor the Methodological Innovation stream at the African Evaluation Association (AfREA) conference from 3-7 March. What did we learn?
- This is the first in a series of blogs on innovation which includes contributions from Thomas Winderl and Julia Coffman.
- Realist impact evaluation is an approach to impact evaluation that emphasises the importance of context for programme outcomes.
- Happy New Year! As you start filling in your diaries and calendars (desk, wall or electronic), make some space for at least one of the evaluation conferences listed below.
- In this e-book, Judy Oakden discusses the use of Rich Pictures in evaluation. In particular, she addresses why (and when) you should use rich pictures, and answers some of the common questions around the use of rich pictures.
- While most of the evaluation resources on the BetterEvaluation site are in English, we're keen to provide access to resources in other languages. In 2014, making the site more accessible in different languages will be one of our priorities.
- This week we're celebrating the first year of BetterEvaluation since it went live to the public in October 2012. Thank you to everyone who has contributed material, reviewed content, developed the website, and participated in live events.
- You'll find hundreds of evaluation resources on the BetterEvaluation site. Some have come from recommendations by stewards. Some have come from our writeshop project or design clinics.
- How do you balance the different dimensions of an evaluation?
- What's one of the most common mistakes in planning an evaluation? Going straight to deciding data collection methods. Before you choose data collection methods, you need a good understanding of why the evaluation is being done.
- Evaluation journals play an important role in documenting, developing, and sharing theory and practice.
- How do we ensure our evaluations are conducted ethically? Where do we go for advice and guidance, especially when we don't have a formal process for ethical review?
- Many organisations are having to find ways of doing more for less – including doing evaluation with fewer resources.
- Many evaluations use a theory of change approach, which identifies how activities are understood to contribute to a series of outcomes and impacts. These can help guide data collection, analysis and reporting.
- The term "rubric" is often used in education to refer to a systematic way of setting out the expectations for students in terms of what would constitute poor, good and excellent performance.
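As a minimal sketch of the idea (the levels and cut-off scores below are hypothetical, not drawn from any particular framework), a rubric can be represented as ordered score thresholds mapped to performance descriptions:

```python
# Hypothetical rubric: minimum scores paired with performance descriptions.
RUBRIC = [
    (0, "poor: expectations not met"),
    (3, "good: expectations substantially met"),
    (5, "excellent: expectations exceeded"),
]

def rate(score: int) -> str:
    """Return the highest performance level whose minimum score is met."""
    level = RUBRIC[0][1]
    for threshold, label in RUBRIC:
        if score >= threshold:
            level = label
    return level

print(rate(4))  # good: expectations substantially met
```

In practice a rubric would usually have one such scale per dimension of performance, with the descriptions agreed with stakeholders.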
- There is increasing recognition that a theory of change can be useful when planning an evaluation. A theory of change is an explanation of how activities are understood to contribute to a series of outcomes and impacts.
- There is increasing discussion about the potential relevance of ideas and methods for addressing complexity in evaluation. But what does this mean? And is it the same as addressing complication?
- Across the world evaluation associations provide a supportive community of practice for evaluators, evaluation managers and those who do evaluation as part of their service delivery or management job.
- There are many decisions to be made in an evaluation – its purpose and scope; the key evaluation questions; how different values will be negotiated; what should be the research design and methods for data collection and analysis; how
- This week on BetterEvaluation we're presenting Questions and Answers about logic models. A logic model represents a program theory - how an intervention (such as a program, project or policy) is understood to contribute to its impacts.
- This book, co-edited by Marlène Läubli Loud and John Mayne, offers invaluable insights from real evaluators who share strategies they have adopted through their own experiences in evaluation.
- This framework has been developed to guide the consistent and transparent evaluation of government programs in the New South Wales (Australia) State Government to inform decision making on policy directions, program design and implementation.
- A webpage and blog site of Professor Ray Calabrese, from Ohio State University USA.
- This resource discusses interactive, face-to-face techniques to use to achieve particular evaluation goals, especially in formal meetings of an evaluation.
- The following example is an appendix from the Final Evaluation Report of the Recognised Seasonal Employer Policy, which details merit determination rubrics involving a rating, worker dimensions and employer dimensions.
- Many problems with evaluations can be traced back to the Terms of Reference (ToR) - the statement of what is required in an evaluation. Many ToRs are too vague, too ambitious, inaccurate or not appropriate.
- In a recent workshop on 'Designs for Performance Evaluation', which Patricia Rogers conducted with program officers from USAID, we looked at seven methods and strategies which might be usefully added to the repertoire for collecting, analysing and using data.
- This article, written by Emotional Intelligence Coach Andy Smith, describes the anticipatory principle which is one of the underpinnings of Appreciative Inquiry (AI).
- This website, created by Ann Emery, provides a series of short videos on using Microsoft Excel to analyze data.
- This slide show provides an overview of this option and lists the advantages and disadvantages of its use. There are also a number of examples of covariance and linear regression equations.
- This article, "Evaluation, valuation, negotiation: some reflections towards a culture of evaluation" explores the issues of developing standards for an evaluation, when these have not previously been agreed, in a rural development program.
- The fully interactive site will go live at this URL later in 2012 [Updated - now scheduled for September 2012].
- As part of the 'Impact Evaluation in Action' session at last week's InterAction 2011 Forum, participants were invited to consider a recent impact evaluation they had been involved in, and to identify one thing that worked well and one thing that did not.
- "The IASC Gender Marker is a tool that codes, on a 0-2 scale, whether or not a humanitarian project is designed well enough to ensure that women/girls and men/boys will benefit equally from it or that it will advance gender equality in another way."
- This article summarizes an extensive literature review addressing the question, How can we spread and sustain innovations in health service delivery and organization?
- Designing and conducting health systems research projects Volume 2: Data analyses and report writing. This guide provides 13 modules designed to demonstrate aspects of data analysis and report writing.
- Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of the evidence in terms of different indicators.
- This paper by Rick Davies from the Centre for Development Studies describes the use of hierarchical card sorting as a way to elicit the views of development sector staff to gain an understanding of their perceptions of the world around them.
- Monitoring is a process to periodically collect, analyse and use information to actively manage performance, maximise positive impacts and minimise the risk of adverse impacts. It is an important part of effective management because it can provide early and ongoing information to help shape implementation in advance of evaluations
- An impact evaluation provides information about the observed changes or 'impacts' produced by an intervention. These observed changes can be positive and negative, intended and unintended, direct and indirect. An impact evaluation must establish the cause of the observed changes. Identifying the cause is known as 'causal attribution' or 'causal inference'.
- Impact investment aims to create positive social change alongside financial returns, thereby creating blended value. Assessing the intended and actual blended value created is an important part of impact investing.
- Different types of evaluation are used in humanitarian action for different purposes, including rapid internal reviews to improve implementation in real time and discrete external evaluations intended to draw out lessons learned with the broader aim of improving policy and practice, and enhancing accountability.
- An environmental footprint calculator estimates the environmental impact of specific activities, such as transport and energy use, food consumption, and production and use of products.
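The calculation behind such a calculator can be sketched as activity amounts multiplied by per-unit emission factors and summed; the factor values below are illustrative placeholders, not authoritative data:

```python
# Minimal footprint-calculator sketch. Emission factors are illustrative
# placeholders only; a real calculator would use published, sourced factors.
EMISSION_FACTORS_KG_CO2E = {
    "car_km": 0.17,          # kg CO2e per km driven (illustrative)
    "electricity_kwh": 0.4,  # kg CO2e per kWh (illustrative)
    "beef_kg": 27.0,         # kg CO2e per kg of beef (illustrative)
}

def footprint(activities: dict) -> float:
    """Estimate total kg CO2e for a mapping of {activity: amount}."""
    return sum(
        amount * EMISSION_FACTORS_KG_CO2E[name]
        for name, amount in activities.items()
    )

weekly = {"car_km": 100, "electricity_kwh": 35, "beef_kg": 0.5}
print(round(footprint(weekly), 1))  # 100*0.17 + 35*0.4 + 0.5*27 = 44.5
```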
- Sustained and emerging impact evaluation (SEIE) evaluates the enduring results of an intervention some time after it has ended, or after a long period of implementation.
- The term 'adaptive management' refers to adaptation that goes beyond the usual adaptation involved in good management - modifying plans in response to changes in circumstances or understanding, and using information to inform these decisions.
- Footprint evaluation aims to embed consideration of environmental sustainability in all evaluations (and all monitoring systems), not only those with explicit environmental objectives.