This guidance from the Footprint Evaluation Initiative aims to support those doing or overseeing evaluations to include environmental sustainability in feasible and useful ways.
Patricia Rogers

Contributed by this member
- In this blog, Patricia Rogers explores how you can make the most of evaluation journals.
- This 90-minute webinar organised by the Canberra Regional Network of the Australian Evaluation Society focuses on the use of evaluation maturity models, which provide a roadmap for improving evaluation capability and culture within an organisation.
- The week before last, we were treated to over 300 diverse, live events on evaluation in the annual event that is gLOCAL Evaluation Week – a week of locally hosted, globally accessible webinars, presentations and hybrid sessions convened by the Global Evaluation Initiative.
- In this working paper for the Center for International Development at Harvard University, Patricia Rogers and Michael Woolcock argue that implementation and process evaluations serve the vital purpose of jointly promoting accountability and learning.
- This report from the Open Data Institute discusses key concepts including ‘data stewardship’, ‘data ethics’, ‘data justice’, ‘data for good’, ‘responsible data’, ‘data sovereignty’, ‘Indigenous data sovereignty’, and ‘data feminism’.
- This set of webpages and video from the Department of Education in New South Wales, Australia, provides background information on evaluative thinking and its use.
- This paper was written by George Julnes (University of New Mexico), Melvin M. Mark (Penn State University), and Stephanie Shipman (U.S. Government Accountability Office).
- This resource, from the Footprint Evaluation Initiative, discusses how the six evaluation criteria of the OECD DAC (Organisation for Economic Co-operation and Development – Development Assistance Committee) can be used to bring environmental sustainability into evaluations.
- In this recording of the 2022 NEC Conference's session on environmental sustainability, the Footprint Evaluation Initiative’s Patricia Rogers and Andy Rowe joined Joana Varela (Ministry of Finance and Blue Economy Plan, São Tomé and Príncipe).
- In part one of this three-part webinar series, Andy Rowe and Patricia Rogers discuss what was learnt during the Footprint Evaluation Initiative's 'thought experiments'.
- This blog provides guidance and examples on co-evaluating with lived and living experience (LLE) researchers.
- This paper provides guidance on Environmental Approach for Generational Impact – a proposed approach to designing and evaluating programmes in a way that includes consideration of environment and biodiversity issues.
- In the last part of this three-part webinar series, Andy Rowe and Patricia Rogers introduce a typology being developed that will assist a wide range of evaluations in assessing the effect of interventions on natural systems and sustainability.
- In this EvalSDGs Insight paper, Scott Chaplowe and Juha Uitto argue that there is an urgent need to mainstream environmental sustainability in evaluation.
- This Footprint Evaluation Initiative report describes four 'thought experiments' undertaken as part of this project.
- In part two of this three-part webinar series, Jane Davidson and Patricia Rogers discuss several ways to get sustainability on the evaluation agenda, even for projects that have no explicit environmental objectives and where there is no mention of the environment in the terms of reference.
- This blog is the first of a 2-part series on the issues raised about COP26 in papers published in the journal Evaluation.
- This blog is the second of a 2-part series on the issues raised about COP26 in papers published in the journal Evaluation.
- This chapter from Transformational Evaluation for the Global Crises of Our Times argues for the need to transform evaluation in the light of current environmental crises and sets out the major ways this needs to happen.
- This guide sets out the rationale for why transformative equity needs to be addressed by all evaluations, especially in the South African context of high inequality, and how this might be done during the commissioning, design and conduct of an evaluation.
- This guide sets out the rationale for why climate and ecosystem health need to be addressed by all evaluations and how this might be done during the commissioning, design and conduct of an evaluation.
- Happy Holidays! As we head into 2022 we thought we'd share a list of resources for you to peruse in the new year.
- This blog by Jo Hall and Patricia Rogers provides an update on the Global Partnership for Better Monitoring project.
- This webpage provides 13 separate rubrics developed for different aspects of the Measurable Gains Framework in the Maori Education Strategy.
- We’re currently going through a global period of rapid change and adaptation, due in large part to the effects of the COVID-19 pandemic on our lives and work.
- While we’re happy to wish the year 2020 farewell, many of the challenges and difficulties that arose over the past 12 months are still with us, as is the sadness over the many different forms of loss we’ve all experienced.
- This paper provides detailed guidance on using big data to fill data gaps in impact evaluations.
- We’re continuing our series, sharing ideas and resources on ways of ensuring that evaluation adequately responds to the new challenges during the pandemic.
- Given the numerous interconnected environmental crises the world faces, there is an urgent need to include consideration of environmental impacts in all evaluations.
- We’re excited to be involved in the 2020 IPDET Hackathon – a week-long event in which hundreds of people from around the world bring together their skills and expertise to work on shared evaluation challenges.
- The Covid-19 pandemic has led to rapid changes in the activities and goals of many organisations, whether these relate to addressing direct health impacts, the consequential economic and social impacts or to the need to change the way things are done.
- The report sets out research findings on the "digital dividends" of various types of technology on natural resource management in low and middle-income countries.
- Organisations around the world are quickly having to adapt their programme and project activities to respond to the COVID-19 pandemic and its consequences. We’re starting a new blog series to help support these efforts.
- This paper is the first in the BetterEvaluation Monitoring and Evaluation for Adaptive Management working paper series.
- Drawing on interviews with 19 UK evaluation commissioners and contractors, this paper investigates the role of evaluation commissioning in hindering the take-up of complexity-appropriate evaluation methods and explores ways of improving this.
- In this first blog of 2019, Patricia Rogers, Greet Peersman and Alice Macfarlan examine how New Year's resolutions are similar to many evaluation practices.
- This special issue of New Directions in Evaluation includes discussions of different types of sustainability – sustainable environment, sustainable development, sustainable programs, and sustainable evaluation systems – and a synthesis of these different perspectives.
- This blog introduces the 'Context Matters' framework - a living tool that builds on and contributes to learning and thinking on evidence-informed policy making, by providing a lens through which to examine the context (internal and external) in which evidence is used in policy making.
- This freely available, online book brings together case studies using an impact evaluation approach, the Qualitative Impact Protocol (QUIP), without a control group, that uses narrative causal statements elicited directly from intended project beneficiaries.
- This collection of resources - toolkit, short guide and cheat sheet - sets out the challenges of talking about climate change and presents effective strategies to address them.
- In part 1 of this two-part blog series, Greet Peersman and Patricia Rogers introduce the ‘Pathways to advance professionalisation within the context of the AES’ project and report.
- In the previous blog in this series, Greet Peersman and Patricia Rogers introduced the ‘Pathways to advance professionalisation within the context of the AES’ project and report.
- This blog is an abridged version of the brief Innovations in evaluation: How to choose, develop and support them, written by Patricia Rogers and Alice Macfarlan.
- This is the second of a two-part blog on strategies to support the use of evaluation, building on a session the BetterEvaluation team facilitated at the American Evaluation Association conference last year.
- What can be done to support the use of evaluation? How can evaluators, evaluation managers and others involved in or affected by evaluations support the constructive use of findings and evaluation processes?
- This webinar (recorded January 23, 2018) features an interview with Michael Quinn Patton and discusses his latest book - Principles-Focused Evaluation: The GUIDE - and the principles-focused evaluation (P-FE) approach, exploring its relevance for evaluation practice.
- A clear and well-informed guide to evaluating value for money, which addresses important issues including the limitations of using indicators, especially for complex interventions, and the need to address unintended impacts and complicated causal pathways.
- This toolkit, developed by the Victorian Department of Health and Human Services, provides a step-by-step guide to developing and implementing a successful stakeholder engagement plan.
- Written by Mary Marcussen, this chapter from the online resource, Principal Investigator's Guide (produced by Informal Science), is about locating an evaluator well matched to your project needs.
- How can programs and organizations ensure they are adhering to core principles—and assess whether doing so is yielding desired results?
- This short guide from the Poverty Action Lab presents six rules very clearly, with helpful diagrams to explain or reinforce the points.
- This visual overview was developed through a research process that identified, defined and illustrated 16 key features of complex systems.
- These examples have been contributed for discussion at the 'flipped conference' session of the American Evaluation Association to be held Saturday (November 11, 2017) 09:15 AM - 10:00 AM in the Thurgood Marshall North Room.
- We have all been there. You dive into a new book or head to a conference/workshop/course and come out all fired up about a new evaluation method. But when you get back to the real world, applying it turns out to be harder than you thought! What next?
- In this flipped conference session, we invite participants and evaluators, evaluation managers and evaluation capacity developers around the world to build and share knowledge about what can be done to support the use of evaluation findings after they've been delivered.
- Although sometimes referred to as program theory or program logic, a theory of change can be used for interventions at any scale, including policies, whole-of-government initiatives, and systems.
- Many evaluations include a process of developing
- Part of our commitment to better evaluation is making sure that evaluation itself is evaluated better. Like any intervention, evaluations can be evaluated in different ways.
- Adaptive management is usually understood to refer to an iterative process of reviewing and making changes to programmes and projects throughout implementation.
- All too often conferences fail to make good use of the experience and knowledge of people attending, with most time spent presenting prepared material that could be better delivered in other ways, and not enough time spent on discussions and activities that draw on participants' own expertise.
- This article presents an example of a rigorous non-counterfactual causal analysis that describes how different evidence and methods were used together for causal inference without a control group or comparison group.
- The Potent Presentations Initiative (p2i) (sponsored by the American Evaluation Association) was created to help evaluators improve their presentation skills, both at conferences and in their everyday evaluation practice.
- In this edition of the BE FAQ blog, we address a question that comes up quite often: How do you go about analysing data that has been collected from respondents via a questionnaire?
- A theory of change can be very useful in designing an impact evaluation, but what kinds of theories should we use?
- The updated 2016 UNEG Norms and Standards for Evaluation, a UNEG foundational document, is intended to be applied to all United Nations evaluations.
- Many development programme staff have had the experience of commissioning an impact evaluation towards the end of a project or programme only to find that the monitoring system did not provide adequate data about implementation, context and baselines.
- Are there particular examples of evaluations that you return to think about often?
- Impact evaluation, like many areas of evaluation, is under-researched. Doing systematic research about evaluation takes considerable resources, and is often constrained by the availability of information about evaluation practice.
- In development, government and philanthropy, there is increasing recognition of the potential value of impact evaluation.
- Welcome to the International Year of Evaluation!
- In May we blogged about ways of framing the difference between research and evaluation. We had terrific feedback on this issue from the international BetterEvaluation community and this update shares the results.
- Two years ago, during the European Evaluation Society conference in Helsinki, the BetterEvaluation.org website went live for public access.
- Being able to compare alternatives is essential when designing an evaluation.
- Case studies are often used in evaluations – but not always in ways that use their real potential.
- One of the challenges of working in evaluation is that important terms (like ‘evaluation’, ‘impact’, ‘indicators’, ‘monitoring’ and so on ) are defined and used in very different ways by different people.
- This week we start the first in an ongoing series of Real-Time Evaluation Queries, where BetterEvaluation members ask for advice and assistance with something they are working on, and together we suggest some strategies and useful resources.
- BetterEvaluation was privileged to sponsor the Methodological Innovation stream at the African Evaluation Association (AfREA) conference from 3-7 March. What did we learn?
- This is the first in a series of blogs on innovation which includes contributions from Thomas Winderl and Julia Coffman.
- This guide, written by Patricia Rogers for UNICEF, looks at the use of theory of change in an impact evaluation.
- Realist impact evaluation is an approach to impact evaluation that emphasises the importance of context for programme outcomes.
- Happy New Year! As you start filling in your diaries and calendars (desk, wall or electronic), make some space for at least one of the evaluation conferences listed below.
- In this e-book, Judy Oakden discusses the use of Rich Pictures in evaluation. In particular, she addresses why (and when) you should use rich pictures, and answers some of the common questions around the use of rich pictures.
- While most of the evaluation resources on the BetterEvaluation site are in English, we're keen to provide access to resources in other languages. In 2014, making the site more accessible in different languages will be one of our priorities.
- This week we're celebrating the first year of BetterEvaluation since it went live to the public in October 2012. Thank you to everyone who has contributed material, reviewed content, developed the website, and participated in live events.
- You'll find hundreds of evaluation resources on the BetterEvaluation site. Some have come from recommendations by stewards. Some have come from our writeshop project or design clinics.
- How do you balance the different dimensions of an evaluation?
- What's one of the most common mistakes in planning an evaluation? Going straight to deciding data collection methods. Before you choose data collection methods, you need a good understanding of why the evaluation is being done.
- Evaluation journals play an important role in documenting, developing, and sharing theory and practice.
- How do we ensure our evaluations are conducted ethically? Where do we go for advice and guidance, especially when we don't have a formal process for ethical review?
- Many organisations are having to find ways of doing more for less – including doing evaluation with fewer resources.
- Many evaluations use a theory of change approach, which identifies how activities are understood to contribute to a series of outcomes and impacts. These can help guide data collection, analysis and reporting.
- The term "rubric" is often used in education to refer to a systematic way of setting out the expectations for students in terms of what would constitute poor, good and excellent performance.
- There is increasing recognition that a theory of change can be useful when planning an evaluation. A theory of change is an explanation of how activities are understood to contribute to a series of outcomes and impacts.
- There is increasing discussion about the potential relevance of ideas and methods for addressing complexity in evaluation. But what does this mean? And is it the same as addressing complication?
- Across the world evaluation associations provide a supportive community of practice for evaluators, evaluation managers and those who do evaluation as part of their service delivery or management job.
- There are many decisions to be made in an evaluation – its purpose and scope; the key evaluation questions; how different values will be negotiated; what should be the research design and methods for data collection and analysis; and how findings will be reported and used.
- This week on BetterEvaluation we're presenting Questions and Answers about logic models. A logic model represents a program theory - how an intervention (such as a program, project or policy) is understood to contribute to its impacts.
- This book, co-edited by Marlène Läubli Loud and John Mayne, offers invaluable insights from real evaluators who share strategies they have adopted through their own experiences in evaluation.
- A webpage and blog site of Professor Ray Calabrese, from Ohio State University USA.
- This framework has been developed to guide the consistent and transparent evaluation of government programs in the New South Wales (Australia) State Government to inform decision making on policy directions, program design and implementation.
- This resource discusses interactive, face-to-face techniques that can be used to achieve particular evaluation goals, especially in formal meetings held as part of an evaluation.
- The following example can be found in Appendix H of the Final Evaluation Report of the Recognised Seasonal Employer Policy, which details merit determination rubrics involving a rating, worker dimensions and employer dimensions.
- Many problems with evaluations can be traced back to the Terms of Reference (ToR) - the statement of what is required in an evaluation. Many ToRs are too vague, too ambitious, inaccurate or not appropriate.
- In a recent workshop on 'Designs for Performance Evaluation', which Patricia Rogers conducted with program officers from USAID, we looked at seven methods and strategies which might be usefully added to the repertoire for collecting, analysing and using data.
- This article, written by Emotional Intelligence Coach Andy Smith, describes the anticipatory principle which is one of the underpinnings of Appreciative Inquiry (AI).
- This website, created by Ann Emery, provides a series of short videos on using Microsoft Excel to analyze data.
- This slide show provides an overview of this option and lists the advantages and disadvantages of its use. There are also a number of examples of covariance and linear regression equations.
- This article, "Evaluation, valuation, negotiation: some reflections towards a culture of evaluation", explores the issues of developing standards for an evaluation, when these have not previously been agreed, in a rural development program.
- The fully interactive site will go live at this URL later in 2012 [Updated - now scheduled for September 2012].
- As part of the 'Impact Evaluation in Action' session at last week's InterAction 2011 Forum, participants were invited to consider a recent impact evaluation they had been involved in, and to identify one thing that worked well and one thing that could have been done better.
- "The IASC Gender Marker is a tool that codes, on a 0-2 scale, whether or not a humanitarian project is designed well enough to ensure that women/girls and men/boys will benefit equally from it or that it will advance gender equality in another way."
- This article summarizes an extensive literature review addressing the question, How can we spread and sustain innovations in health service delivery and organization?
- Designing and conducting health systems research projects, Volume 2: Data analyses and report writing. This guide provides 13 modules designed to demonstrate aspects of data analysis and report writing.
- Expert review involves an identified expert providing a review of draft documents at specified stages of a process and/or planned processes.
- Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of the evidence in terms of different indicators.
- An important part of evaluation capacity strengthening is providing a clear definition or explanation of evaluation in online and printed materials.
- An expert review involves experts reviewing the evaluation, drawing in part on their expertise and experience of the particular type of program or project.
- Multivariate descriptive statistics involves analysing relationships between more than two variables.
- A mural, a large drawing on the wall, can be used to collect data from a group of people about the current situation, their experiences using a service, or their perspectives on the outcomes from a project.
- Different types of evaluation are used in humanitarian action for different purposes, including rapid internal reviews to improve implementation in real time and discrete external evaluations intended to draw out lessons learned with the broader aim of improving policy and practice, and enhancing accountability.
- ‘Enriching’ is achieved by using qualitative work to identify issues or obtain information on variables not obtained by quantitative surveys.
- Best evidence synthesis is a synthesis that, like a realist synthesis, draws on a wide range of evidence (including single case studies) and explores the impact of context.
- Evaluation associations can leverage their membership to engage in knowledge construction through research and development.
- Footprint evaluation aims to embed consideration of environmental sustainability in all evaluations and monitoring systems, not only those with explicit environmental objectives.
- For evaluation to be truly useful it needs to engage in public discussions about relevant issues.
- Reviewing documents produced as part of the implementation of the evaluand can provide useful background information and be beneficial in understanding the alignment between planned and actual implementation.
- A bar chart plots the number of times a particular value or category occurs in a data set, with the length of the bar representing the number of observations with that score or in that category (see the sketch after this list).
- Evaluation management often involves a steering group, which makes the decisions about the evaluation.
- An impact evaluation provides information about the observed changes or 'impacts' produced by an intervention. These observed changes can be positive and negative, intended and unintended, direct and indirect.
- A formal contract is needed to engage an external evaluator and a written agreement covering similar issues can also be used to document agreements about an internal evaluator.
- Self-assessment is an individual reflection on one's skills, knowledge and attitudes related to evaluation competencies.
- An environmental footprint calculator estimates the environmental impact of specific activities, such as transport and energy use, food consumption, and production and use of products (see the sketch after this list).
- An award is a formal recognition by peers of outstanding individuals or practice. Some awards are made for cumulative good practice, and others are for exemplars of good practice, such as awards for the best evaluation.
- Peer assessment can provide additional benefits beyond self-assessment – in particular, the opportunity for peer learning through the review process.
- A frequency table provides collected data values arranged in ascending order of magnitude, along with their corresponding frequencies (see the sketch after this list).
- A time series is a collection of observations of well-defined data items obtained through repeated measurements over time.
- Impact investment aims to create positive social change alongside financial returns, thereby creating blended value. Assessing the intended and actual blended value created is an important part of impact investing.
- Component design is an approach to mixed methods evaluation that conducts qualitative components of the evaluation separately to quantitative components and then combines the data at the time of report writing.
- A hybrid evaluation involves both internal and external staff working together.
- Viewing learning materials, such as previously recorded webinars, at your own pace.
- An internship is a paid or unpaid entry-level position that provides work experience and some professional development.
- As part of its public advocacy role, a professional association can provide potential clients with information about engaging with evaluators effectively.
- A brief (4-page) overview that presents a statement from the American Evaluation Association defining evaluation as "a systematic process to determine merit, worth, value or significance".
- Projective techniques, originally developed for use in psychology, can be used in an evaluation to provide a prompt for interviews.
- Parametric inferential tests are carried out on data that are assumed to follow a particular distribution (usually the normal distribution) described by parameters such as the mean and variance (see the sketch after this list).
- Most programme theories, logic models and theories of change show how an intervention is expected to contribute to positive impacts; Negative programme theory, a technique developed by Carol Weiss, shows how it might produce negative impacts.
- Sustained and emerging impact evaluation (SEIE) evaluates the enduring results of an intervention some time after it has ended, or after a long period of implementation.
- ‘Examining’ refers to generating hypotheses from qualitative work to be tested through the quantitative approach.
- Social media refers to a range of internet-based applications that support the creation and exchange of user-generated content - including Facebook, Twitter, Instagram, Pinterest and LinkedIn.
- A distinct occupational category or role title recognised at a national or organisational level.
- Evaluation journals play an important role in documenting, developing, and sharing theory and practice. They are an important component in strengthening evaluation capacity.
- Vote counting is a simple but limited method for synthesizing evidence from multiple evaluations and involves comparing the number of positive studies (studies showing benefit) with the number of negative studies (studies showing harm) (see the sketch after this list).
- A histogram is a graphical way of presenting a frequency distribution of quantitative data organised into a number of equally spaced intervals or bins (e.g. 1-10, 11-20…) (see the sketch after this list).
- A systematic review is an approach to synthesising evidence from multiple studies.
- Professional development courses can be a useful way to develop people’s knowledge and skills in conducting and/or managing an evaluation.
- Dialogues refer to a range of learning conversations that go beyond knowledge transfer to include knowledge articulation and translation.
- Expert advice involves an expert responding to specific queries. It might include a process to clarify and reframe the question that is being asked.
- Fellow is a category of membership of an association or society, often awarded to an individual based on their contributions to evaluation.
- This paper by Rick Davies from the Centre for Development Studies describes the use of hierarchical card sorting as a way to elicit the views of development sector staff to gain an understanding of their perceptions of the world around them
- Standards, evaluative criteria, or benchmarks refer to the criteria by which an evaluand will be judged during an evaluation.
- Crosstabulation (or crosstab) is a basic part of survey research in which researchers can get an indication of how frequently two variables occur together (see the sketch after this list).
- Process tracing is a case-based approach to causal inference which focuses on the use of clues within a case (causal-process observations, CPOs) to adjudicate between alternative possible explanations.
- The term 'adaptive management' refers to adaptation that goes beyond the usual adaptation involved in good management - modifying plans in response to changes in circumstances or understanding, and using information to inform these decisions.
- Integrated design is an approach to mixed methods evaluation where qualitative and quantitative data are integrated into an overall design.
- Value for money is a term used in different ways, including as a synonym for cost-effectiveness, and as a systematic approach to considering these issues throughout planning and implementation, not only in evaluation.
- Learning partnerships involve structured processes over several years to support learning between a defined number of organisations working on similar programs, usually facilitated by a third party organisation.
- For evaluation to be truly useful it needs to be embedded in organisational processes.
- A concept map shows how different ideas relate to each other - sometimes this is called a mind map or a cluster map.
- Inferential statistics suggest statements or make predictions about a population based on a sample from that population. Non-parametric tests are used when the data cannot be assumed to follow a particular distribution, such as the normal distribution (see the sketch after this list).
- An outcomes hierarchy shows all the outcomes (from short-term to longer-term) required to bring about the ultimate goal of an intervention.
- Monitoring is a process to periodically collect, analyse and use information to actively manage performance, maximise positive impacts and minimise the risk of adverse impacts. It is an important part of effective management because it can provide early and ongoing information to help shape implementation in advance of evaluations.
- ‘Explaining’ involves using qualitative work to understand unanticipated results from quantitative data.
- A data party is a time-limited event of several hours where diverse stakeholders come together to collectively analyse data that have been collected.
- An expectation that members of an association or organisation will engage in ongoing competency development.
- Associations from different but related sectors and fields can be good places to find useful events and training, network connections, and ideas.
- A realist synthesis is the synthesis of a wide range of evidence that seeks to identify underlying causal mechanisms and explore how they work under what conditions, answering the question "what works for whom under what circumstances?" rather than simply "what works?".
- A pie chart is a divided circle, in which each slice of the pie represents a part of the whole.
- A rubric is a framework that sets out criteria and standards for different levels of performance and describes what performance would look like at each level.
- Mobile Data Collection (MDC) is the use of mobile phones, tablets or personal digital assistants (PDAs) for programming or data collection.
- Peer learning refers to a practitioner-to-practitioner approach in which the transfer of tacit knowledge is particularly important (Andrews and Manning 2016).
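The following sketches illustrate several of the data-analysis entries above. They are not drawn from the listed resources; they are rough, illustrative examples only, written in Python, assuming the pandas, matplotlib and SciPy libraries are available, and using small made-up datasets. The first sketch shows a frequency table: values arranged in ascending order with their corresponding frequencies.

```python
import pandas as pd

# Made-up ratings from a hypothetical feedback form (illustrative only)
ratings = pd.Series([3, 5, 4, 3, 2, 5, 4, 4, 3, 5])

# Frequency table: values in ascending order, each with its frequency
frequency_table = ratings.value_counts().sort_index().rename("frequency")
print(frequency_table)
```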
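A bar chart, as described above, can be sketched along the same lines; the categories and counts here are made up purely for illustration.

```python
import matplotlib.pyplot as plt

# Made-up category counts (illustrative only)
categories = ["Strongly agree", "Agree", "Disagree"]
counts = [12, 7, 3]

# The length of each bar represents the number of observations in that category
plt.bar(categories, counts)
plt.ylabel("Number of responses")
plt.title("Responses by category (illustrative data)")
plt.show()
```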
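A histogram groups quantitative data into equally spaced bins, as in the entry above; this sketch uses a made-up list of scores and lets matplotlib divide the range into ten bins.

```python
import matplotlib.pyplot as plt

# Made-up scores between 0 and 100 (illustrative only)
scores = [12, 35, 47, 52, 58, 61, 63, 70, 74, 78, 81, 88, 90, 93]

# Histogram: the data are grouped into ten equally spaced bins across their range
plt.hist(scores, bins=10)
plt.xlabel("Score")
plt.ylabel("Frequency")
plt.show()
```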
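A crosstabulation shows how often two variables occur together; pandas provides a crosstab function for this. The group and satisfaction values below are made up for illustration.

```python
import pandas as pd

# Made-up survey responses (illustrative only)
responses = pd.DataFrame({
    "group": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "satisfied": ["yes", "no", "yes", "yes", "yes", "no", "no", "yes"],
})

# Crosstabulation: joint frequency of the two variables
print(pd.crosstab(responses["group"], responses["satisfied"]))
```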
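A common parametric inferential test is the independent-samples t-test, which assumes the data in each group are roughly normally distributed. The scores below are made up for illustration.

```python
from scipy import stats

# Made-up outcome scores for two groups (illustrative only)
group_a = [52, 55, 60, 58, 62, 57, 54, 59]
group_b = [48, 50, 53, 47, 51, 49, 52, 50]

# Independent-samples t-test: a parametric test comparing the two group means
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```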
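When the data cannot be assumed to follow a normal distribution, a non-parametric alternative such as the Mann-Whitney U test can be used instead; again, the ratings are made up for illustration.

```python
from scipy import stats

# Made-up ordinal ratings from two groups, no normality assumed (illustrative only)
group_a = [3, 4, 2, 5, 4, 3, 5, 4]
group_b = [2, 3, 1, 3, 2, 4, 2, 3]

# Mann-Whitney U test: a non-parametric comparison of the two groups
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```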
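Vote counting, described above, simply compares the number of studies showing benefit with the number showing harm. The study codings below are made up; the final comment notes why the method is limited.

```python
# Made-up synthesis: each study coded as showing benefit, harm, or no clear effect
study_findings = ["positive", "positive", "negative", "positive", "no effect", "negative"]

positive = study_findings.count("positive")   # studies showing benefit
negative = study_findings.count("negative")   # studies showing harm

print(f"Studies showing benefit: {positive}")
print(f"Studies showing harm: {negative}")
# Vote counting ignores study size and effect size, which is why it is a limited method.
```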
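An environmental footprint calculator works by multiplying recorded activity amounts by emission factors and summing the results. The factors, activity figures and function name below are placeholders chosen only for this sketch, not real values or a real tool.

```python
# Placeholder emission factors in kg CO2-equivalent per unit of activity
# (illustrative values only, not real factors)
EMISSION_FACTORS = {
    "car_km": 0.2,            # per km driven
    "flight_km": 0.25,        # per km flown
    "electricity_kwh": 0.5,   # per kWh used
}

def estimate_footprint(activities):
    """Multiply each activity amount by its placeholder factor and sum the results."""
    return sum(amount * EMISSION_FACTORS[name] for name, amount in activities.items())

# Made-up activity data for a hypothetical workshop
workshop = {"car_km": 120, "flight_km": 1600, "electricity_kwh": 300}
print(f"Estimated footprint: {estimate_footprint(workshop):.0f} kg CO2e (illustrative)")
```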