52 weeks of BetterEvaluation: Week 24: Choosing methods to describe activities, results and context
Which methods do you usually see in evaluation reports as having been used to collect data?
Chances are you'll see project document review, key informant interviews, surveys of some kind, and perhaps group interviews with intended beneficiaries. These methods are all useful for describing what has happened, the outcomes, and the context in which change occurred.
It is not difficult to go beyond the usual suspects. Many other methods exist to collect data that can enrich the analysis and improve the quality of the evaluation.
For example, polling booth and keypad technology are more appropriate than a group interview for gathering information about sensitive issues. Methods such as PhotoVoice and PhotoLanguage can help people articulate what they value in ways that go beyond what they might be able or willing to share in an interview. Participatory methods, such as dotmocracy and murals, can support a group in discussing the different perspectives among its members. Or what about a reputational monitoring dashboard to automatically track what is being said about an organization or a project on social media?
The Describe cluster guides you on the methods for collecting or retrieving data that are available, when to choose which ones and how to use them well.
But data collection is only one part of the Describe cluster. It also includes options for critical tasks such as how to sample people, sites or time periods, how to combine qualitative and quantitative data, and how to use existing measures and indicators. Once you have the data, you can turn to this cluster for ideas about how to manage, clean, and analyze the data, including a section on data visualization to facilitate analysis.
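To make the cleaning task concrete, here is a minimal sketch of what cleaning a raw survey export might look like in code. The file layout, column names, and cleaning rules are illustrative assumptions, not BetterEvaluation recommendations; it uses only the Python standard library:

```python
import csv
import io
import statistics

# Hypothetical raw survey export: 1-5 ratings with messy entries
# (stray whitespace, a blank answer, a non-numeric answer).
raw = """respondent,rating
r1, 4
r2,5
r3,
r4,not sure
r5,3
"""

def clean_ratings(csv_text):
    """Parse a CSV export, strip whitespace from each rating,
    and drop missing or non-numeric answers."""
    ratings = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        value = (row["rating"] or "").strip()
        if value.isdigit():  # keep only valid numeric answers
            ratings.append(int(value))
    return ratings

ratings = clean_ratings(raw)   # keeps [4, 5, 3]
average = statistics.mean(ratings)
```

In practice, the cleaning rules (for instance, whether "not sure" is dropped or coded separately) are an analytic decision that should be documented alongside the results.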
In the recent AEA CoffeeBreak webinar series, I provided an overview of these tasks and some of the available methods.
You can watch the webinar, access downloadable slides, and get a full overview of the webinar series below:
Questions from the webinar
1) Does this cluster of tasks refer to planning for describing the data collection plan and presenting findings?
The Describe cluster covers planning and implementing data collection and facilitating analysis, including how to choose and use the available options well. The cluster on Report and Support Use has information about different options for presenting findings.
2) What software package do you use for analysing qualitative data?
There are a number of different packages available. The CAQDAS site (Computer Assisted Qualitative Data Analysis Software) provides an overview of different packages, including NVivo, HyperRESEARCH, and ATLAS.ti.
3) You mentioned 'combining qualitative and quantitative data'. Please explain what you mean by 'combining'.
It can help to start by considering why you want to combine qualitative and quantitative data. Is it intended to triangulate data, to check interim findings from different perspectives and using different methods with complementary strengths? Is it primarily intended to elaborate or explain findings? Do you need to elaborate on what lies behind a simple quantitative change, or do you want to quantify a described change?
Once you are clearer on the purpose, there are different ways of combining quantitative and qualitative data.
Sometimes they are combined sequentially. For example, an evaluation might start with a few open-ended interviews and a review of project documentation to identify core issues, and then use these to develop a structured questionnaire to collect quantitative data. Or it might start by analyzing existing quantitative data on service delivery, and then explore issues through interviews that generate qualitative data.
Sometimes the two types of data are collected simultaneously with one method. For example, a questionnaire might collect both quantitative and qualitative data, and might be combined with quantitative and qualitative data gathered through observation.
Find more information and examples about combining qualitative and quantitative data.
This blog post is part of a series of eight posts covering the BetterEvaluation Rainbow Framework and presenting the recordings of eight corresponding webinars hosted by the American Evaluation Association. The full series of posts is below.