Rapid evaluation

Eleanor Williams is the Director of the Centre for Evaluation and Research Evidence at the Victorian Department of Health. In this role, she leads the department's evaluation and research strategy.

She is also the co-convenor of the Victorian Public Sector Evaluation Network and of the Victorian Committee of the Australian Evaluation Society. Eleanor holds a Master of Public Policy and Management and a Master of Evaluation from the University of Melbourne. As part of her studies, Eleanor conducted an extensive literature review of rapid evaluation methods, which informed both this article and rapid evaluations of COVID-19 service and practice changes in health and human services in Victoria.

During the COVID-19 pandemic, decision-makers, particularly in the health sector, have required a fast turnaround of evidence to inform service and practice changes. This context has challenged established norms and standards for evaluation, which generally operate over longer timeframes. Government departments and research units internationally, including the Victorian Department of Health's Centre for Evaluation and Research Evidence, have responded to this call to provide evaluations and evidence faster, and ‘rapid evaluations’ have become not only more common but also more of a necessity.

Rapid evaluations can be deployed within a few days through to a few months and are particularly helpful in unexpected or unprecedented events, such as the COVID-19 pandemic. But while interest in rapid evaluation methods has accelerated during this recent period, there is a long history behind rapid methods that we can learn from. This blog steps through the common components of rapid evaluation and where these methods come from, to shed some light on where they can add value.

Current context and challenges

The current demand for rapid methods responds to concerns that standard research and evaluation methods are too slow to translate into practice, not only in emergency contexts but also in standard government policy-making.

In experimental design, for example, randomized trials take around 5.5 years from initiation to publication, and 7 years or longer once the time from grant application submission is added. Even outside of emergency management scenarios, these timeframes are often inadequate to inform decision-making, such as for governments working to 3- or 4-year election cycles, or for decision-makers facing immediate decisions on whether a program should be extended or expanded. More importantly, over these long timeframes the context can change, making evaluation findings irrelevant or obsolete.

While the current spotlight on this problem is focused on the pandemic, this is not a new issue – and evaluators have long been exploring ways of getting appropriate evidence quickly enough to inform decisions.

Where did Rapid Evaluation come from, and where is it going?

Expedited research and evaluation methods have a long history, particularly in international development and emergency management. After the initial introduction of rapid methods in the 1980s, there was a further push for real-time evaluation promoted by the United Nations in the 1990s in response to an increase in humanitarian crises and the need for quick, evidence-based evaluations.

An International Conference on Rapid Assessment Methodologies for Health Related Programmes (PDF) was organized by the United Nations University and held in November 1990; it raised interest and expectations for anthropologically based methodologies for the design, evaluation, and improvement of programmes of nutrition and primary health care. Rapid methods then expanded in the field of public health in the early 2000s, alongside other methodological advances, moving away from ‘quick and dirty’ methods towards more robust and standardised rapid approaches.

More recently, rapid methods have been used to respond to government needs for faster and earlier evidence to inform decision-making on programs and projects while they are being implemented, rather than years after. This is due both to pressure for fast decision-making during the COVID-19 crisis and to a broader increase in the speed and volume of decision-making activities in government (archived link) (PDF). To illustrate, the Department of Planning, Monitoring and Evaluation (DPME) in South Africa released guidelines and a toolkit (PDF) for rapid evaluation in 2020, explicitly responding to the need for responsiveness to senior management requests and urgent demands for information.

What do rapid evaluation methods involve?

Since their beginnings, rapid methods have been referred to under a broad range of terms and approaches including:

  • rapid appraisals,
  • rapid ethnographic assessments (REA),
  • rapid qualitative inquiry (RQI),
  • rapid assessment procedures (RAPs),
  • the rapid assessment, response and evaluation model (RARE model),
  • quick, focused or short-term ethnographies,
  • real-time evaluations (RTE),
  • rapid feedback evaluations (RFE),
  • rapid evaluation methods (REM) and
  • rapid-cycle evaluations (RCE). 

While each of the rapid approaches listed above has a distinct background, methods, and context of practice, rapid methods generally share a similar set of techniques for delivering actionable information to decision-makers at critical times.

Most rapid evaluation models involve similar elements that reflect both the need to respond to an emerging situation and the desire for robust and reliable findings:

  • iterative/flexible design;
  • multiple methods and data sources (though often relying more heavily on qualitative data);
  • expedited data collection and analysis processes, with concurrent streams of work;
  • action-oriented findings and recommendations;
  • tailored communications products;
  • multi-disciplinary and highly skilled teams; and
  • a participatory approach.

Some of the specific strategies used to speed up evaluations include conducting data collection and analysis in parallel, eliminating the use of transcripts, and utilizing larger evaluation teams to share the workload. 

Even though there have been significant advances in these approaches, rapid methods are still sometimes considered lower quality than longer-term approaches. Dr Cecilia Vindrola-Padros from RREAL has speculated that this is partly due to the lack of explicit quality standards for rapid methods and the lack of consensus on terminology. There is also likely a misunderstanding about the different types of rapid evaluation, when and how they are used, and how evaluators can mitigate the methodological trade-offs necessary to meet an evaluation’s tight timeframe.

Types of rapid evaluation

Rapid evaluation models generally fall into one of three types, based on the reason speed is needed:

  • Rapid evaluation for near-term or frequent decision making
  • Rapid evaluation due to resource constraints
  • Rapid evaluation due to short-term impacts

The reason behind the evaluation will determine the resourcing approach and the level of data analysis and participation that is possible. For example, in a health crisis, it is important to deploy a well-resourced and experienced team to deliver the rapid evaluation, as timely and accurate findings are critical to project success. However, in other situations, evaluations are rapid due to resource constraints, and a rapid approach may be deployed by a single evaluator who limits the scope of their data collection and consultation to deliver findings as cost-efficiently as possible.

Rapid evaluation for near-term or frequent decision making is conducted to meet the needs of decision-makers working to tight timeframes. This could take the form of a short-term evaluation to inform a longer-term process, or a model where rapid methods are deployed at multiple time points over a long-term project to provide feedback loops throughout program design and implementation. This rapid evaluation type incorporates all of the common features listed above, such as a multidisciplinary team and multiple methods to achieve a robust design.

By contrast, rapid evaluations designed for situations where time and resources are restricted necessarily involve sacrifices in the evaluation design. In these situations, there are more likely to be single evaluators rather than teams, and single rather than mixed methods. Rapidness will be driven by how much time can be afforded to data collection, analysis and reporting, rather than by the contextual needs of the topic. This is not to suggest that the products of this type of rapid evaluation cannot be of value, and Bamberger (2004) offers useful, practical suggestions on how to tailor methods when time, money and staffing are in short supply, such as sensitively reducing the scope and scale of data collection, managing the related threats to validity, and reconstructing baseline data where it is not readily available.

Finally, there is a smaller category of rapid evaluations where substantive change is expected to occur within the timeframe of the evaluation itself, for example in a health crisis where you are hoping to see transmission rates reduce during the course of the project. Like other rapid evaluations, these require multiple team members and multiple methods to meet the demands of short timeframes and the level of robustness required for decision-making. They will also often require a multidisciplinary team with a strong participatory element, to bring multiple perspectives to bear, ensure project findings are robust, and secure sufficient stakeholder buy-in to lead to program adaptations or decisions.

Before launching into a rapid evaluation, it is worth reflecting on the reason behind the rapidity and what that means for the methods you will use. If near-term or high-stakes decision-making is involved, for example, it is worth seeking appropriate resourcing to ensure you can deliver robust findings in the timeframe available. Alternatively, if the driver is a lack of time and resources, you will need to think carefully about how you can appropriately cut back on data collection, analysis and/or reporting.

Final thoughts

In the context of the current pandemic environment and the increasing speed of government decision-making, there is a clear need for evaluations to be done more rapidly. Work is still required to bring together the long history of rapid evaluation and agree on what meets an acceptable threshold for evidence to inform real-life decisions and outcomes when working rapidly. It’s important to keep an open mind, particularly as new frontiers are forged in what can be achieved in a short timeframe. A robust, rapid evaluation is a powerful tool for adding a little more evidence to decision-making in these uncertain times.

Further reading

Bamberger, M. (2004). Shoestring evaluation: Designing impact evaluations under budget, time and data constraints. American Journal of Evaluation, 25(1), 5–37.

Department of Planning, Monitoring and Evaluation (South Africa). (2020). Evaluation Guideline No 2.2.21: How to undertake rapid evaluations. Republic of South Africa.

McNall, M., & Foster-Fishman, P. G. (2007). Methods of rapid evaluation, assessment and appraisal. American Journal of Evaluation, 28(2), 151–168.

Messer, E. (1991). International conference on rapid assessment methodologies for planning and evaluation of health related programmes: Interpretative summary. Food and Nutrition Bulletin, 13(4), 287–292.

Vindrola-Padros, C., Brage, E., & Johnson, G. A. (2021). Rapid, responsive, and relevant? A systematic review of rapid evaluations in health care. American Journal of Evaluation, 42(1), 13–27.