52 weeks of BetterEvaluation: Week 13: Evaluation on a shoestring

Patricia Rogers

Many organisations are having to find ways of doing more for less – including doing evaluation with fewer resources. This can mean little money (or no money) to engage external expertise and a need to rely on resources internal to an organisation – specifically people who might also have less time to devote to evaluation.

This week’s post draws on issues and strategies from a recent session with the South Australian branch of the Australasian Evaluation Society, whose members were interested in exploring how BetterEvaluation could support ‘evaluation on a shoestring’.

Do you have other tips, strategies or resources you've found useful in this situation?

Types of shoestring evaluation

Doing an evaluation ‘on a shoestring’ refers to conducting it within limited resources. There are different levels of limited resources.

Doing evaluation with "less string" might mean reducing the budget for an external evaluation, and/or the degree of support available from internal sources such as an evaluation unit. The key challenge here is reducing the scope appropriately. Don't keep the scope of the evaluation the same and expect it to be done for a fraction of the budget. Don’t try to take shortcuts with the initial scoping of the evaluation, or in the active management of the evaluation. Be clear about the trade-offs between depth and breadth, timeliness and comprehensiveness in the design and keep focused on the most important priorities. In this scenario, pay particular attention to suggestions 6, 7 and 8 below.

Doing evaluation with "very little string" might mean having no budget to contract an evaluation externally, but having some scope to access resources (either internally or externally) for advice and quality assurance. The key challenge here is making best use of these. In this scenario, use all the suggestions to plan the evaluation effectively and then use available resources for meta-evaluation (see suggestion 12 below) at critical stages - when finalising the evaluation brief, the design and the report. Use the BetterEvaluation Rainbow Framework, which outlines 32 tasks in an evaluation, and related material on the BetterEvaluation site for guidance on doing all of these.

Doing evaluation with "no string" has the same challenges as “very little string” but without any scope to engage external expertise for advice and quality assurance. In addition to the advice above, try to identify a peer with whom you can do reciprocal peer review at critical stages. Having to articulate and explain the decisions made for each of the 32 tasks can improve the quality. See if you can secure some pro bono advice at critical stages.

12 suggestions for evaluation on a shoestring

The suggestions relate to four different stages of an evaluation.

The four stages, the product of each, who typically does it, and where to look on the BetterEvaluation site:

1. Scoping – Evaluation brief: clarifying what the evaluation needs to do – purpose, intended uses, Key Evaluation Questions, standards for the evaluation, and what ‘success’ looks like for what is being evaluated. Done by commissioners, possibly with external advice and facilitation. BetterEvaluation site: Manage, Frame.

2. Designing – Evaluation design: how the evaluation will answer the Key Evaluation Questions – data collection, analysis and reporting. Can be done by commissioners (as part of an RFP) or by evaluators (as part of a proposal, as the first stage of the evaluation, or as a separate project). BetterEvaluation site: Describe, Understand Causes, Synthesize (for choosing methods and approaches).

3. Conducting – Evaluation report: collecting, analysing and reporting data in terms of the Key Evaluation Questions. Done by evaluators, sometimes with support from commissioners. BetterEvaluation site: Describe, Understand Causes, Synthesize (for using methods and approaches well).

4. Using – Evaluation use: disseminating findings, developing recommendations (if not done as part of the report), and developing and tracking plans. Done by commissioners, possibly with support from evaluators. BetterEvaluation site: Report and Support Use.

Evaluation Brief

1. Purpose – Identify and address the priority intended uses of primary intended users

Primary intended users are the specific, identified people who will use the findings from the evaluation. Ideally they should be actively engaged in the evaluation decision making process to ensure it will be relevant and credible to them.

Identifying the intended user(s) and use(s) of an evaluation

This guideline from the International Development Research Centre (IDRC) highlights the importance of identifying the primary intended user(s) and the intended use(s) of an evaluation and outlines a variety of methods that can be used to achieve this.

2. Focus – Develop a small set of clear, answerable and useful Key Evaluation Questions

KEQs are not interview questions but the high level questions an evaluation is intended to answer. They are usually a combination of Descriptive Questions (What is the situation? What has happened?), Causal Questions (Did the program contribute to or cause the observed outcomes?), and Evaluative Questions (Was it good, good enough, better than before, better than alternatives?).

Articulate a small number of broad evaluation questions that the evaluation will be designed to answer. These are different to the specific questions you might ask in an interview or a questionnaire.

Advice for commissioners of evaluation to get maximum value from external evaluators (PDF)

A presentation to the ANZEA conference (Aotearoa/New Zealand Evaluation Association) by E. Jane Davidson & Nan Wehipeihana (2010) which outlines some generic evaluation questions.

3. Resources – Clarify what resources will be available for the evaluation

Resources include the time of internal staff, especially those with evaluation expertise and/or content knowledge, and funding to engage external expertise, and for costs incurred in data collection and analysis (e.g. software). It also includes the available time before reports are needed. If no funding is available for external resources, and internal resources are not sufficient, investigate options for reciprocal help with evaluations in other programs or organisations, or options to get assistance from universities or groups.

Identify what resources (time, money, expertise, equipment, etc.) will be needed and available for the evaluation. Consider both internal resources (e.g. staff time) and external resources (e.g. participants' time to attend meetings to provide feedback).

4. Standards – Be clear about the level of accuracy and generalizability needed

What will be ‘good enough’ data?

Clarify what will be considered appropriate quality and ethical standards for the evaluation and what will need to be done to ensure these standards are achieved.

Evaluation Design

5. Program Theory – Develop an explanation of how the program is understood to contribute to its intended outcomes and impacts

This can be in the form of inputs → processes → outputs → outcomes → impacts if the program is fairly simple and all activities are done at the beginning of the causal chain. Otherwise an ‘outcomes hierarchy’ format might work better to show how different activities contribute to particular interim outcomes.

Learn how to develop a Theory of Change and Programme Theory. Understand the impacts and outcomes of interventions using logic models.

6. Coverage – Plan data collection and analysis in terms of a matrix of options

A matrix with Key Evaluation Questions down one side, and possible data sources across the top will make it easier to plan for efficient data collection that covers all questions.

The Evaluation Matrix (archived link)
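As a rough sketch of this idea, the matrix can be represented as a simple grid and checked for coverage gaps before fieldwork begins. The Key Evaluation Questions, data sources, and helper function below are invented for illustration; they are not part of the BetterEvaluation framework, and in practice many evaluators would build the same grid in a spreadsheet.

```python
# Hypothetical evaluation matrix: Key Evaluation Questions (KEQs) down one
# side, candidate data sources across the top. All names are illustrative.

KEQS = [
    "What has happened since the program began?",            # descriptive
    "Did the program contribute to the observed outcomes?",  # causal
    "Were the results good enough to justify continuing?",   # evaluative
]

# 1 = this source is planned to help answer this KEQ, 0 = not used.
matrix = {
    KEQS[0]: {"Project documents": 1, "Performance indicators": 1,
              "Group interviews": 0, "Partner-organisation data": 1},
    KEQS[1]: {"Project documents": 0, "Performance indicators": 1,
              "Group interviews": 1, "Partner-organisation data": 1},
    KEQS[2]: {"Project documents": 1, "Performance indicators": 0,
              "Group interviews": 1, "Partner-organisation data": 0},
}

def coverage_gaps(matrix, minimum=2):
    """Flag KEQs with fewer than `minimum` planned data sources,
    since triangulating across sources improves validity."""
    return [keq for keq, sources in matrix.items()
            if sum(sources.values()) < minimum]

print(coverage_gaps(matrix))  # -> [] (every KEQ here has at least 2 sources)
```

Laying the plan out this way makes it easy to spot a question with no planned data source, or a data source that answers no question and can be dropped to save money.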

7. Existing Data – Make maximum possible use of existing data if quality is adequate

This includes project documentation, performance indicators, documented observations, social indicators, and findings and methods from other relevant evaluations.

8. Short Cuts – Identify where it will be possible to take shortcuts in data collection and analysis

For example, it might be possible to reduce the cost of an evaluation by:

- conducting group interviews instead of individual interviews
- reconstructing baseline data through retrospective recall
- reducing sample sizes
- using email questionnaires instead of interviews
- using a small purposeful sample instead of a large random sample
- using volunteer interviewers (either staff or community members) instead of professionals
- accessing data or respondents through links with partner organisations
- using existing general-purpose software for analysis (e.g. Excel and Word) rather than specialist software not available in the organisation (e.g. SPSS and NVivo).

Michael Bamberger (2005)

Simplifying Qualitative Data Analysis Using General Purpose Software Tools (PDF)

La Pelle, Nancy (2004)

9. Risk Management – Identify possible risks and trial data collection, analysis and reporting

Any design can have unforeseen problems in implementation. Build in a short cycle of collecting, analysing and discussing some real data before finalising the design, and always trial and pilot data collection tools and analysis strategies with hypothetical or early data. Back up all data and store securely. Check for unrepresentative samples (especially if you have a low response rate) and triangulate data to improve validity.

Mistakes not to make. International suggestions on Genuine Evaluation

A blog by Patricia J Rogers and E Jane Davidson about real, genuine, authentic, practical evaluation.

Evaluation Report

10. Messaging – Develop a report outline early on and negotiate agreement about format

Clarify what is required in the evaluation report well before starting to write it.

Produce appropriate written, visual, and/or verbal products that communicate the findings.

Evaluation reports can be read by many different audiences, ranging from individuals in government departments, donor and partner staff, and development professionals working with similar projects or programmes, to students and community groups.

Improving evaluation questions and answers: Getting actionable answers for real-world decision makers

A presentation from E. Jane Davidson (2009) to the AEA (American Evaluation Association) conference which explains how to use a skeleton report to negotiate the format of a final report. Available through the AEA elibrary.

Supporting Use

11. Actively Plan for Use – Implement specific activities to support users to understand and use findings

Simply releasing a report is not sufficient. Use different avenues, processes and formats to make the findings readily available to intended primary users.

Plan processes to support primary intended users to make decisions and take action on the basis of the findings.


12. Meta-evaluate – Build in formative and summative evaluation of the evaluation

Formative evaluation can improve the evaluation brief, the evaluation design, and the evaluation report. If you have limited scope to engage external expertise, focus it here. If you have no budget for external review of the evaluation, see if you can engage with partners, colleagues or others to review the evaluation at key stages. Finish the process by documenting learnings about doing evaluation – perhaps through an after action review.

The after action review (AAR) is a simple method for facilitating an assessment of organisational performance by bringing together a team to discuss a task, event, activity or project in an open and honest fashion.
