Patricia Rogers

Contributed by this member

Method

  • Reviewing documents produced during the implementation of the evaluand can provide useful background information and help in understanding how closely actual implementation aligned with what was planned.
  • A realist synthesis is the synthesis of a wide range of evidence that seeks to identify underlying causal mechanisms and explore how they work and under what conditions, answering the question "what works for whom under what circumstances?" rather than simply "what works?"
  • Crosstabulation (or crosstab) is a basic part of survey research in which researchers can get an indication of how often the values of two variables occur together (e.g. gender and service use); a minimal sketch after this list illustrates crosstabs alongside frequency tables and histogram-style binning.
  • Parametric inferential tests are carried out on data that are assumed to follow a known distribution, typically the normal distribution, described by a set of parameters; a sketch after this list contrasts a parametric test with a non-parametric alternative.
  • A bar chart plots the number of times a particular value or category occurs in a data set, with the length of the bar representing the number of observations with that score or in that category.
  • A pie chart is a divided circle, in which each slice of the pie represents a part of the whole.
  • Process tracing is a case-based and theory-driven method for causal inference that applies specific types of tests to assess the strength of evidence for concluding that an intervention has contributed to changes that have been observed.
  • Most programme theories, logic models and theories of change show how an intervention is expected to contribute to positive impacts; negative programme theory, a technique developed by Carol Weiss, shows how it might produce negative impacts.
  • Evaluation management often involves a steering group, which makes the decisions about the evaluation.
  • A rubric is a framework that sets out criteria and standards for different levels of performance and describes what performance would look like at each level.
  • Integrated design is an approach to mixed methods evaluation in which qualitative and quantitative data are integrated into an overall design.
  • ‘Examining’ refers to generating hypotheses from qualitative work to be tested through the quantitative approach.
  • A formal contract is needed to engage an external evaluator and a written agreement covering similar issues can also be used to document agreements about an internal evaluator.
  • Mobile Data Collection (MDC) is the use of mobile phones, tablets or personal digital assistants (PDAs) for programming or data collection.
  • Value for money is a term used in different ways, including as a synonym for cost-effectiveness and as a systematic approach to considering costs and benefits throughout planning and implementation, not only in evaluation.
  • Social media refers to a range of internet-based applications that support the creation and exchange of user-generated content - including Facebook, Twitter, Instagram, Pinterest and LinkedIn.
  • Self-assessment is an individual reflection on one's skills, knowledge and attitudes related to evaluation competencies.
  • Peer learning refers to a practitioner-to-practitioner approach in which the transfer of tacit knowledge is particularly important (Andrews and Manning 2016).
  • Expert advice is provided in response to specific queries; it might include a process to clarify and reframe the question that is being asked.
  • An internship is a paid or unpaid entry-level position that provides work experience and some professional development.
  • Evaluation associations can leverage their membership to engage in knowledge construction through research and development.
  • Members of an association or organisation may be expected to engage in ongoing competency development.
  • Fellow is a category of membership of an association or society, often awarded to an individual based on their contributions to evaluation.
  • As part of its public advocacy role, a professional association can provide potential clients with information about engaging with evaluators effectively.
  • For evaluation to be truly useful it needs to engage in public discussions about relevant issues.
  • Associations from different but related sectors and fields can be good places to find useful events and training, network connections, and ideas.
  • Standards, evaluative criteria, or benchmarks refer to the criteria by which an evaluand will be judged during an evaluation.
  • Projective techniques, originally developed for use in psychology, can be used in an evaluation to provide a prompt for interviews.
  • An environmental footprint calculator estimates the environmental impact of specific activities, such as transport and energy use, food consumption, and production and use of products; a minimal calculation sketch appears after this list.
  • Multiple lines and levels of evidence (MLLE) is a systematic approach to causal inference that involves bringing together different types of evidence (lines of evidence) and considering the strength of each line of evidence.
  • A concept map shows how different ideas relate to each other - sometimes this is called a mind map or a cluster map.
  • Vote counting is a simple but limited method for synthesising evidence from multiple evaluations: it compares the number of positive studies (studies showing benefit) with the number of negative studies (studies showing harm); a minimal sketch appears after this list.
  • A frequency table provides collected data values arranged in ascending order of magnitude, along with their corresponding frequencies.
  • Multivariate descriptive statistics involves analysing relationships between more than two variables.
  • Inferential statistics make generalisations or predictions about a population based on a sample from that population. Non-parametric tests are used with data that do not follow a particular distribution, such as the normal distribution.
  • A histogram is a graphical way of presenting a frequency distribution of quantitative data organised into a number of equally spaced intervals or bins (e.g. 1-10, 11-20…).
  • A time series is a collection of observations of well-defined data items obtained through repeated measurements over time.
  • A mural, a large drawing on the wall, can be used to collect data from a group of people about the current situation, their experiences using a service, or their perspectives on the outcomes from a project.
  • An outcomes hierarchy shows all the outcomes (from short-term to longer-term) required to bring about the ultimate goal of an intervention.  
  • A systematic review is an approach to synthesising evidence from multiple studies.
  • Component design is an approach to mixed methods evaluation in which the qualitative components of the evaluation are conducted separately from the quantitative components, with the data combined at the report-writing stage.
  • ‘Enriching’ is achieved by using qualitative work to identify issues or obtain information on variables not obtained by quantitative surveys. 
  • ‘Explaining’ involves using qualitative work to understand unanticipated results from quantitative data.  
  • Professional development courses can be a useful way to develop people’s knowledge and skills in conducting and/or managing an evaluation.
  • A hybrid evaluation involves both internal and external staff working together.   
  • Best evidence synthesis is a synthesis that, like a realist synthesis, draws on a wide range of evidence (including single case studies) and explores the impact of context.
  • A data party is a time-limited event of several hours where diverse stakeholders come together to collectively analyse data that have been collected.
  • Dialogues refer to a range of learning conversations that go beyond knowledge transfer to include knowledge articulation and translation.
  • Self-paced learning involves viewing learning materials, such as previously recorded webinars, at your own pace.
  • Expert review involves an identified expert providing a review of draft documents at specified stages of a process and/or planned processes.
  • Learning partnerships involve structured processes over several years to support learning between a defined number of organisations working on similar programs, usually facilitated by a third party organisation.
  • Evaluation can be recognised as a distinct occupational category or role title at a national or organisational level.
  • An award is a formal recognition by peers of outstanding individuals or practice. Some awards are made for cumulative good practice, and others are for exemplars of good practice, such as awards for the best evaluation.
  • An important part of evaluation capacity strengthening is providing a clear definition or explanation of evaluation in online and printed materials.
  • For evaluation to be truly useful it needs to be embedded in organisational processes.
  • Evaluation journals play an important role in documenting, developing, and sharing theory and practice. They are an important component in strengthening evaluation capacity.
  • Peer assessment can provide additional benefits beyond self-assessment – in particular, the opportunity for peer learning through the review process.
  • An expert review involves experts reviewing the evaluation, drawing in part on their expertise and experience of the particular type of program or project.
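
A minimal sketch in Python of three of the basic quantitative methods above: a frequency table, a crosstabulation, and histogram-style binning. It assumes the pandas library is available; the survey data, column names and score bands are hypothetical, illustrative values only.

    import pandas as pd

    # Hypothetical survey responses: a satisfaction score (1-10) per respondent group.
    data = pd.DataFrame({
        "group": ["A", "A", "B", "B", "A", "B", "A", "B", "A", "B"],
        "score": [3, 7, 8, 6, 4, 9, 5, 8, 6, 7],
    })

    # Frequency table: each observed value in ascending order of magnitude,
    # with its corresponding frequency.
    freq_table = data["score"].value_counts().sort_index()
    print(freq_table)

    # Histogram-style binning: group scores into equally spaced intervals (bins).
    data["band"] = pd.cut(data["score"], bins=[0, 5, 10], labels=["low", "high"])

    # Crosstabulation: how often each combination of two variables occurs together.
    print(pd.crosstab(data["group"], data["band"]))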
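
The next sketch contrasts a parametric inferential test (an independent-samples t-test, which assumes approximately normally distributed data) with a non-parametric alternative (the Mann-Whitney U test, which makes no such assumption). It assumes scipy is available; the two groups of outcome scores are hypothetical.

    from scipy import stats

    # Hypothetical outcome scores for two independent groups.
    treatment = [12, 15, 14, 16, 13, 17, 15]
    comparison = [10, 11, 13, 9, 12, 11, 10]

    # Parametric test: assumes the scores follow a normal distribution.
    t_stat, t_p = stats.ttest_ind(treatment, comparison)

    # Non-parametric alternative: no normality assumption.
    u_stat, u_p = stats.mannwhitneyu(treatment, comparison)

    print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
    print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.3f}")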
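
Vote counting, mentioned above, amounts to simple tallying, which this minimal sketch makes concrete. The list of study findings is hypothetical; note that the tally ignores study size and effect magnitude, which is why vote counting is described as a limited synthesis method.

    # Hypothetical bottom-line findings from a set of evaluations.
    findings = ["benefit", "benefit", "harm", "no effect", "benefit", "harm"]

    positive = findings.count("benefit")   # studies showing benefit
    negative = findings.count("harm")      # studies showing harm

    print(f"{positive} positive vs {negative} negative studies")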
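
Finally, an environmental footprint calculator essentially multiplies activity data by emission factors and sums the results, as in this minimal sketch. The factor values and activity amounts below are hypothetical placeholders, not authoritative figures.

    # Hypothetical emission factors in kg CO2-equivalent per unit of activity.
    EMISSION_FACTORS = {
        "car_km": 0.2,
        "flight_km": 0.25,
        "electricity_kwh": 0.5,
    }

    # Hypothetical activity data for the period being assessed.
    activities = {"car_km": 120, "flight_km": 800, "electricity_kwh": 300}

    # Footprint = sum over activities of (amount x emission factor).
    footprint = sum(EMISSION_FACTORS[k] * amount for k, amount in activities.items())
    print(f"Estimated footprint: {footprint:.0f} kg CO2e")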

Approach

  • The Qualitative Impact Assessment Protocol (QuIP) is an impact evaluation approach that collects and documents narrative causal statements directly from those affected by an intervention.
  • Realist evaluation aims to identify the underlying generative causal mechanisms that explain how outcomes were caused and how context influences these.
  • Outcome Mapping is an approach that helps unpack an initiative’s theory of change and provides a framework to collect data on the immediate, basic changes that lead to longer, more transformative change. This allows for the plausible assessment of the initiative’s contribution to results.
  • Qualitative Comparative Analysis (QCA) is an evaluation approach that supports causal reasoning by examining how different conditions contribute to an outcome.
  • Outcome Harvesting collects (“harvests”) evidence of what has changed (“outcomes”) and, working backwards, determines whether and how an intervention has contributed to these changes.

Theme

  • Impact investment aims to create positive social change alongside financial returns, thereby creating blended value. Assessing the intended and actual blended value created is an important part of impact investing.
  • Different types of evaluation are used in humanitarian action for different purposes, including rapid internal reviews to improve implementation in real time and discrete external evaluations intended to draw out lessons learned with the broader aim of improving policy and practice, and enhancing accountability.
  • Footprint evaluation aims to embed consideration of environmental sustainability in all evaluations and monitoring systems, not only those with explicit environmental objectives.
  • Monitoring is a process to periodically collect, analyse and use information to actively manage performance, maximise positive impacts and minimise the risk of adverse impacts. It is an important part of effective management because it can provide early and ongoing information to help shape implementation in advance of evaluations.
  • The term 'adaptive management' refers to adaptation that goes beyond the usual adaptation involved in good management - modifying plans in response to changes in circumstances or understanding, and using information to inform these decisions.
  • Sustained and emerging impact evaluation (SEIE) evaluates the enduring results of an intervention some time after it has ended, or after a long period of implementation.
  • An impact evaluation provides information about the observed changes or 'impacts' produced by an intervention. These observed changes can be positive and negative, intended and unintended, direct and indirect.