Adapting evaluation in the time of COVID-19 – Part 4: Describe


We’re continuing our series, sharing ideas and resources on ways of ensuring that evaluation adequately responds to the new challenges during the pandemic.

This is the first of a number of blogs on the evaluation tasks in the DESCRIBE cluster of the BetterEvaluation Rainbow Framework. This cluster covers methods and processes for answering descriptive questions such as what the situation is, how it has changed, and what activities have been undertaken. A later blog in this series will focus on methods for collecting data, given the restrictions on, and risks of, commonly used methods – a major focus of discussions about changing evaluation practices during the pandemic. This blog looks at the other tasks involved in answering descriptive questions: sampling and managing data.

Overarching issues to consider

There are particular implications for answering descriptive questions in evaluation during the pandemic. These include:

Needing to describe new things.  

With organisations changing service delivery to provide new services, or existing services in different ways, evaluations are likely to need information about new activities, outcomes or contextual factors. There are unlikely to be systems already in place to effectively collect, manage and analyse data about these.

Barriers to in-person and other usual data collection methods (such as interviews and observation).

With many restrictions in place globally that limit travel and face-to-face contact, there has been increasing use of technology, third-party collection, remote collection, and secondary data. These changes have important implications for data management and data analysis, as well as for equity and for who is involved in collecting and analysing data.

Increasing awareness of the disproportionate impact of the pandemic and the lockdown measures on certain communities.

This has implications for how data are analysed and visualised – and hence for how they are collected – so that the experiences and perspectives of the most marginalised are captured, and the data can be disaggregated to show patterns in service access and outcomes (a small worked example of such disaggregation follows this list).

Needing to provide timely information to inform decision-making.

Where evaluation seeks to inform rapid adaptations of implementation, speed is of the essence. Quality in evaluation has always involved balancing comprehensiveness and timeliness, but the increased need for speed challenges many traditional approaches to ensuring quality – at the very time when new processes need to be developed to gather and analyse data about new things.
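
To make the disaggregation point above concrete, here is a minimal sketch in Python with pandas. The records, column names and groupings are hypothetical – in practice they would come from your service data and the disaggregation categories that matter in your context.

```python
import pandas as pd

# Hypothetical service-access records; columns and values are illustrative only.
records = pd.DataFrame({
    "community":  ["urban", "urban", "rural", "rural", "remote", "remote"],
    "accessed":   [1, 1, 1, 0, 1, 0],          # 1 = accessed the service
    "outcome_ok": [1, 0, 1, 0, 0, 0],          # 1 = positive outcome
})

# Disaggregate access and outcome rates by community type, so that
# disproportionate impacts stay visible rather than being averaged away.
by_group = records.groupby("community")[["accessed", "outcome_ok"]].mean()
overall = records[["accessed", "outcome_ok"]].mean()

print(by_group)
print("Overall:", overall.to_dict())
```

Reporting group-level rates alongside the overall average keeps disproportionate impacts visible instead of buried in an aggregate figure.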

What to think about when sampling or managing data

Sample

What sampling strategies will you use for collecting data?

Given barriers to traditional data collection, the need for speedy results, and often a desire to document and learn from “bright light” innovations that seem to be working, purposeful sampling is of increased relevance for many evaluations.

For example, Eleanor Williams’ rapid evaluations of COVID-19-related service and practice changes in the Victorian Department of Health and Human Services (Australia) focused on service changes that were perceived to be working well and, in addition to checking that this perception of success was well founded, sought to understand what had been done and which aspects of the change should be kept or extended.

Whereas random sampling uses statistical generalisation to extrapolate findings from a sample to the population, purposeful sampling uses analytical generalisation to extrapolate findings to other cases and sites. To do this well, it is important to understand any contextual factors in the success cases that might not be present more widely. For example, service delivery options that worked in rural sites because they drew on strong local networks might work well in well-connected urban communities, but not in urban communities without strong local networks.
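
As a concrete illustration of the difference, the following Python sketch contrasts the two strategies over a hypothetical set of sites; the site names, attributes and selection criteria are assumptions for illustration only.

```python
import random

# Hypothetical site records; names and attributes are illustrative only.
sites = [
    {"name": "Site A", "setting": "rural", "success": True,  "strong_networks": True},
    {"name": "Site B", "setting": "urban", "success": False, "strong_networks": False},
    {"name": "Site C", "setting": "urban", "success": True,  "strong_networks": True},
    {"name": "Site D", "setting": "rural", "success": False, "strong_networks": True},
]

# Random sampling: supports statistical generalisation from sample to population.
random_sample = random.sample(sites, k=2)

# Purposeful sampling: deliberately select perceived success cases, keeping
# contextual factors (e.g. strong local networks) visible so that findings
# can be extrapolated analytically, and with care, to comparable contexts.
success_cases = [s for s in sites if s["success"]]

print("Random:", [s["name"] for s in random_sample])
print("Purposeful:", [(s["name"], s["strong_networks"]) for s in success_cases])
```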

The other issue that has arisen in terms of sampling is the risk of relying on convenience sampling – gathering data from those who are readily accessible, or who volunteer to be engaged. Sometimes convenience sampling is deliberate, such as simply interviewing whoever happens to be available. It can also happen unintentionally: a scattergun survey invitation (untargeted invitations, such as sending thousands to everyone on a list or publishing the link publicly) risks such a poor response rate that the result is effectively a convenience sample – not because others were unable to participate, but because they chose not to. Telephone opinion polls that do not reach people without a telephone or who screen their calls, and questionnaires mailed to thousands of potential respondents but completed by only a few hundred, are familiar examples. During the pandemic, restrictions on fieldwork and travel make an unintended convenience sample even more of a risk.

However, a convenience sample cannot be statistically generalised to the larger population of interest, and this has both a practical and an ethical dimension.

In practical terms, there is the risk of drawing the wrong conclusion about the situation, based on a biased sample.  Last week, David Hill, writing in The Washington Post, referred to the “dirty little secret” of response rates that is affecting the reliability of opinion polls.

In ethical terms, a convenience sample is likely to exclude people and groups who are more geographically remote or otherwise marginalised, meaning that their concerns and experiences are not included in the data or even considered.  
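
One practical safeguard is to compare the composition of the achieved sample with the population of interest and flag groups that are badly under-represented. Here is a minimal sketch in Python with pandas; the group names and counts are hypothetical.

```python
import pandas as pd

# Hypothetical counts; in practice these come from the sampling frame
# (population) and the achieved responses (respondents).
population  = pd.Series({"urban": 6000, "rural": 3000, "remote": 1000})
respondents = pd.Series({"urban": 520,  "rural": 70,   "remote": 10})

response_rate = respondents / population
overall_rate = respondents.sum() / population.sum()

# Flag groups whose response rate falls well below the overall rate,
# a sign that the achieved sample is drifting toward a convenience sample.
under_represented = response_rate[response_rate < 0.5 * overall_rate]

print(f"Overall response rate: {overall_rate:.1%}")
print(under_represented)
```

A check like this does not fix the bias, but it makes clear whose experiences are missing and where targeted follow-up is needed.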

Manage data

How will you organise and store data and ensure its quality?

Appropriate data management becomes more challenging when new data are being collected and the management processes also need to be developed.
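
Where data about new activities are being collected without established systems, even a lightweight automated quality check can help catch problems early. The sketch below illustrates the idea in Python; the field names and validation rules are hypothetical.

```python
# Hypothetical required fields and allowed values for newly collected records.
REQUIRED_FIELDS = {"respondent_id", "date", "service_mode"}
VALID_MODES = {"phone", "online", "in_person"}

def check_record(record: dict) -> list:
    """Return a list of data-quality problems found in one record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    mode = record.get("service_mode")
    if mode is not None and mode not in VALID_MODES:
        problems.append(f"unknown service_mode: {mode!r}")
    return problems

# Example: one field missing, one value outside the agreed codes.
print(check_record({"respondent_id": "R001", "date": "2020-06-01",
                    "service_mode": "telehealth"}))
```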

Another aspect of managing data is processing it after it has been collected. Some of the resources below, such as the USAID guidance, look at this. For qualitative data – often a tricky type to process – a good resource is the ALNAP guide to using qualitative data in the humanitarian sector, which discusses processing, including how to reduce the time it takes and how to plan for use.

A number of recent resources provide detailed guidance and examples on issues around managing data responsibly during the pandemic, including processes that address data quality, retention and sharing. Many of these also consider ethical issues in terms of how data are collected. 

  • Using data responsibly during the COVID-19 crisis: This webinar, held as part of a three-part webinar series by CLEAR-AA and MERL Tech during gLOCAL week, features Korstiaan Wapenaar (Genesis Analytics), Jerusha Govender (Data Innovator), and Teki Akkueteh (Africa Digital Rights Hub). In this 30-minute session, they discuss data as a necessary and critical part of COVID-19 prevention and response efforts, as well as the potential harm that can come from not managing data responsibly. Key points from the session include: MERL practitioners have clear responsibilities when sharing, presenting, consuming and interpreting data; contextual information and guidance should be provided when sharing data so that it can be used and interpreted in the right way; misuse of data can be extremely harmful; and ethical and legal principles should not be overridden in the rush to collect data.

  • Considerations for Using Data Responsibly at USAID:  This USAID paper provides a framework for identifying and understanding risks associated with development data, though the considerations it covers will be applicable in a wide range of sectors. Focused on enabling users to have better conversations about data, especially in terms of “balancing the tremendous opportunity presented by data with the associated risks,” this document covers good data policy and planning for data use, informed consent, data collection and protection, data quality, retention and sharing.

  • Considerations for USAID Mission Staff for Programmatic COVID-19 Preparedness and Response: Digital Technologies and Data Systems: This brief USAID guidance draws on lessons from the 2014 Ebola outbreak in West Africa to suggest steps that can be taken to aid coordination of information and data among multiple stakeholders in different sectors. While it is written for a USAID context, it covers a number of practical issues that can otherwise impede effective and quick decision-making from data.

  • Responsible data resource list: This resource list is curated and maintained by MERL Tech and The Engine Room. It’s a useful, up-to-date list of resources on a number of important topics, including data policy, standards and frameworks; data security, management and deletion; big data and open data; and data platforms. It also includes a number of tools, templates and case studies.

  • Free tools to anonymise photos: This TechCrunch article lists some free tools that can be used to strip photos of their metadata and blur faces (a minimal Python sketch of the metadata-stripping step follows this list).
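
For illustration, stripping metadata can also be done in a few lines of Python with the Pillow library. The sketch below re-saves only the pixel data, which drops EXIF and GPS information; the file names are hypothetical, and it does not blur faces – the tools in the article above cover that.

```python
from PIL import Image  # Pillow

def strip_metadata(src_path, dst_path):
    """Re-save an image using pixel data only, dropping EXIF/GPS metadata."""
    with Image.open(src_path) as img:
        rgb = img.convert("RGB")            # normalise the mode
        clean = Image.new("RGB", rgb.size)  # fresh image: empty metadata
        clean.putdata(list(rgb.getdata()))  # copy pixels, not metadata
        clean.save(dst_path)

strip_metadata("interview_photo.jpg", "interview_photo_clean.jpg")
```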

Stay tuned

There's a lot to cover in the DESCRIBE cluster, so the next blog will be out shortly, focusing on the remaining tasks: using measures, indicators or metrics; collecting data; combining qualitative and quantitative data; analysing data; and visualising data.
