Stories

Synonyms: Anecdotes, Narrative Techniques, Storytelling

Personal stories provide a glimpse into how people experience their lives and have long been an important part of evaluations, for example in case studies. The process of collecting stories usually begins with an interview, whether in groups (e.g. group interviews or “story circles”) or with individuals. There are different ways of recording the information, including standardised questionnaires and open-ended notes, and different narrative techniques for drawing out, coding and aggregating fragments of data, such as the selection processes used in the Most Significant Change approach (of which voting is one*) and the self-signification used in the SenseMaker™ software programme. New techniques make it possible to conduct quantitative analyses based on qualitative, emotional and open-ended narratives.

(*Scroll down to the comments at the end of this article to see Theo Nabben's insightful discussion of the MSC approach)

Storytelling is a powerful mode of human expression that helps people make sense of the past and understand possible futures. While coming together to exchange stories is an ancient tradition, evaluations also make use of personal stories, through different narrative techniques, to gather information on the impact of development initiatives.

An individual narrative is like a fragment of data that provides a perspective at one point in time from a particular point of view. Personal stories provide qualitative information that is not easily classified, categorised, calculated or analysed, but in recent years more value has been placed on narrative and anecdotal information. As Sole and Wilson (2002) explain, the tacit, experience-based knowledge that surfaces more easily in stories can be more important in problem-solving than information obtained through more formal options. Stories are used to provide insights into programme processes, to show impact, to demonstrate innovation and to support numerical data. They have been used to identify issues, support project development and facilitate reflection on experiences. More recently, software programmes facilitate the categorisation of story fragments, allowing analysis of patterns that can yield quantitative information.

According to McClintock (2004), personal stories are useful for evaluation because of the following attributes:

  • Storytelling lends itself to participatory change processes because it relies on people to make sense of their own experiences and environments.
  • Stories can be used to focus on particular interventions while also reflecting on the array of contextual factors that influence outcomes.
  • Stories can be systematically gathered and their claims verified from independent sources.
  • Narrative data can be analysed using existing conceptual frameworks or assessed for emergent themes.
  • Narrative options can be integrated into on-going organisational processes to aid in programme planning, decision making, and strategic management.

Besides contributing another dimension to evaluation, stories can be shaped to target different audiences, from funders and policymakers to the media and the general public. For example, the use of “success stories” can help to communicate with stakeholders about a programme's achievements. They can take many forms, from oral and written narratives to art, music, drama, photographs and films, and can be digitised and made available as “digital stories” around the web. They are an effective way of highlighting programme progress, as many programmes (e.g. prevention programmes) are unable to demonstrate outcomes for several years.

Examples of topics for stories (adapted from Krueger):

  • Organisational topics:
      • How I perceive the functioning of the team
      • A major change and how we handled it
      • A time when I needed help and couldn’t get it
      • A time when I was delighted with the help I received
  • Programme topics:
      • Something wonderful that happened was...
      • The best/worst thing about the programme was...
  • Learning and change topics:
      • I learned something that changed how I work
      • The biggest change I've ever made was...
      • The most important thing I've ever learned was...

Example

Example from GlobalGiving in Kenya 

(Sources for this summary: http://cognitive-edge.com/library/more/case-studies/globalgiving-narrative-pilot-project-narrative-analysis-final-report/ and http://www.globalgiving.org/jcr-content/gg/landing-pages/story-tools/files/-story-real-book--2010.pdf)

GlobalGiving is a non-profit foundation that runs a website (www.globalgiving.org) which seeks to connect donors to non-profit organisations around the world. In 2010, GlobalGiving trained local “scribes” who collected 2,600 stories about 200 organisations (not all funded by GlobalGiving) from communities around Kenya. The stories were prompted by open-ended questions about either a project run by an NGO or the work of an individual to improve life in the community. After each story was told, the scribes asked for more information and for self-signification of different themes, in order to identify larger-scale trends. The storytellers were also prompted to put their stories into context. For example, they were asked to rate the benefits of a community effort for different actors (e.g. leaders, people or outsiders), indicating their rating by placing a dot in a triangle whose three corners represented the three actors. Once the interviews were completed, a software package (SenseMaker™) analysed the story fragments to identify patterns in the findings.
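The triad rating described above can be expressed numerically: a dot placed inside a triangle maps to three weights that sum to 1, one per corner (barycentric coordinates). The sketch below illustrates the idea only; the corner labels and coordinates are invented for illustration and are not taken from the GlobalGiving dataset or the SenseMaker™ software.

```python
def triad_weights(p, a, b, c):
    """Convert a dot position p inside triangle (a, b, c) into three
    weights that sum to 1 -- one per corner (barycentric coordinates)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    # Denominator is twice the signed area of the whole triangle.
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    # Each weight is the relative area of the sub-triangle opposite a corner.
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    wc = 1.0 - wa - wb
    return wa, wb, wc

# Illustrative corners for the three actors.
leaders, people, outsiders = (0.0, 0.0), (1.0, 0.0), (0.5, 1.0)

# A dot at the centroid rates the benefit as equal for all three actors.
w = triad_weights((0.5, 1.0 / 3.0), leaders, people, outsiders)
```

A dot nearer one corner produces a higher weight for that actor, which is what makes the placements comparable and aggregatable across many stories.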

Some conclusions from this exercise on using stories for evaluation: 

  • It provides valuable information and feedback on the community’s perceptions about local organisations’ efforts
  • The stories led to different conclusions and patterns from those coming out of partner organisations’ progress reports. For example, evaluation experts and implementers could not predict the most frequently mentioned themes:
      • Implementers and evaluators both predicted food (#1) and shelter (#2), whereas
      • Local stories emphasised social relations (#1) and safety (#2).
    The community therefore looked at the underlying root causes of the problems rather than at their tangible outcomes: in Kenya, lack of food and shelter results from a breakdown in social relations, leading to a lack of security.
  • By moving to continuous capture, the data can be used as a management tool as well as for analysis, and will improve data quality (by allowing follow-up on questionable data).

Advice

Advice for CHOOSING this option (tips and traps)

  • If you truly want to understand how people experience a situation, open-ended, exploratory narrative techniques offer a good approach.
  • Stories work well in adaptive environments: this approach works well if there is a willingness to make changes along the way, in response to obstacles and “surprises”.
  • Realise the limitations of the story: complement the stories you include in your evaluation report with other sources of information. Stories should be combined with surveys, focus groups, observations and other evaluation options. Providing multiple forms of data and including the perspectives of the full range of your participants will enhance the quality of your programme evaluation as well as the stories’ impact.
  • Match the narrative technique to the situation: in a project environment that is more fixed and limited, a standardised survey will be a better technique than an exploratory, open-ended technique such as SenseMaker.

Advice for USING this option (tips and traps)

When recording:

  • Be consistent and systematic: keep notes, record your stories in a particular place, document sources and be careful with your data.
  • Make backups, and do not rely on only one form of documentation – especially do not count on computer systems working in all contexts.
  • It’s easier to record stories while they are fresh in your mind than to go back and reconstruct them.
  • Consider the ethical implications: using storytelling as an evaluation tool requires a high degree of trust and openness between the storytellers and the evaluator.
      • Always ask participants for permission before recording their stories.
      • Also get their approval to share with others the stories they tell you, explaining that their real name will not be used in connection with their stories if they so prefer.
      • Apply responsible and ethical research practices to protect the human rights, dignity and welfare of storytellers (Sukop, 2007).
      • Take care with confidentiality and “protect the storyteller from direct and indirect harm” (Krueger).

When collecting data:

  • Check the circumstances and sources of the story: Look into the background and circumstances of the story to get clues about whether it is typical or extreme, if the story is authentic, if the story has been changed over time and other factors relating to the story. Verify the sources (Krueger).
  • Consider the use of incentives to get stories: Incentives can improve data quality, as storytellers will provide honest feedback (both praise and criticism) about organisations when honesty comes with rewards, and dishonesty results in a loss of economic opportunity for the storyteller (Seah and Webster, 2010).
  • Hone your story-listening skills: Stories can give deep clues about tacit fears and “undiscussable assumptions”. Listening “below the surface” of the complaints, challenges, successes and general anecdotes of others can reveal guiding principles and vital clues about (collective) attitudes and feelings in a programme or organisation (Sole and Wilson, 2002).
  • To get people to tell their stories, do one or more of the following (from Krueger):
      • Let people be comfortable and relaxed.
      • Food and beverages help.
      • Take your time – there will be many "dead ends".
      • Have several "provocative" questions (e.g. best and worst moments).
      • Use cues to stimulate memory, such as timelines, photographs and objects.
      • Let people listen to the stories of others.
      • Show interest in their stories – smile and make eye contact.
      • Tell a story to help get people started.
  • Take the opportunity for further analysis: ask probing questions (e.g. What was done? Why was this done? What was accomplished/what happened? What can be learned from this?).
  • Coding data for aggregation is a major challenge for this approach: different techniques offer different ways to ease aggregation. The SenseMaker programme, for example, uses different ways to visualise story themes to help code information for analysis. The Most Significant Change option does this by tracking stories of changes related to certain “critical domains”.
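The coding-and-aggregation step can be illustrated with a toy sketch: once each story fragment has been tagged with themes (whether by self-signification or by a coder), counting tag frequencies turns a pile of qualitative fragments into a quantitative pattern, of the kind the GlobalGiving example surfaced. The stories and theme labels below are invented for illustration and do not reproduce any particular tool's coding scheme.

```python
from collections import Counter

# Invented examples: each story fragment carries the themes its teller
# (or a coder) assigned to it.
coded_stories = [
    {"id": 1, "themes": ["social relations", "safety"]},
    {"id": 2, "themes": ["food", "social relations"]},
    {"id": 3, "themes": ["safety", "social relations"]},
    {"id": 4, "themes": ["shelter"]},
]

# Aggregate: count how often each theme appears across all stories.
counts = Counter(t for story in coded_stories for t in story["themes"])

for theme, n in counts.most_common():
    print(f"{theme}: {n}")
```

However the themes are attached, the hard part remains the coding itself; once the tags exist, the aggregation is mechanical, and patterns (here, the dominance of "social relations") fall out of a simple count.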

When communicating about the data:

  • Use the stories to connect with your intended audiences (e.g. donors, staff, wider public): Personal stories provide a human face to evaluation data which can strengthen messages you would like to present about your programme. 
  • Design stories to deliberately incorporate the perspective(s) that speak most to the concerns of the target group. Each story represents a single point of view, so it may be necessary to incorporate multiple perspectives into the final story. 
  • Shape the information to reach your target group (e.g. case studies in reports to donors; videos, press releases, photos on website for general public).
  • The tasks in the “Report and support use” evaluation component provide relevant information and resources.
  • Avoid “static-ness” in story messages: The impact of a personal story varies depending on when and how it is presented. Especially in today’s fast-moving virtual world, it is necessary to regularly revisit and update your story messages to reconnect them with the language and issues of the present.

Resources

Sources

Field, J. (2004). Evaluation through storytelling. The Higher Education Academy, York, UK. http://www.heacademy.ac.uk/assets/documents/resources/resourcedatabase/id473_valuation_through_storytelling.pdf

Krueger, R. A. (website accessed 20-02-2012). Storytelling. University of Minnesota: http://www.tc.umn.edu/~rkrueger/story.html

McClintock, C. (2004). Using Narrative Methods to Link Program Evaluation and Organization Development. The Evaluation Exchange, IX(4), Winter 2003/2004. http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/using-narrative-methods-to-link-program-evaluation-and-organization-development

Seah, A and Webster, L (2010). GlobalGiving narrative pilot project, narrative analysis final report. Cognitive Edge Pte. Ltd. and GlobalGiving.

Sole, D. and Wilson, D. (2002). Storytelling in organizations: The power and traps of using story to share knowledge in organizations. Harvard Learning Innovations Laboratory. Presidents and Fellows of Harvard College, Cambridge, USA. http://providersedge.com/docs/km_articles/Storytelling_in_Organizations.pdf

Sukop, S. (2007). Storytelling Approaches to Program Evaluation: An Introduction (Pamphlet adapted from an original report by Joseph Tobin and Gustavo E. Fischman). The California Endowment. http://www.tcfv.org/pdf/prevention/Storytelling%20Approaches%20to%20Program%20Evaluation%20-%20CA%20Endowment.pdf

Image: AH-BJ-100920-5574 World Bank, photo taken by Arne Hoel

Updated: 9th August 2016 - 12:47am
A special thanks to this page's contributors:
Author: Banana hill, Wageningen.
Reviewer: Research Fellow, RMIT University, Melbourne.

Comments

Theo Nabben

Great that you are highlighting the value of stories (and visuals) within evaluation. I'd like to add a few extra points and a point of clarification.

For MSC, your article reads as if voting is the only tool used in the selection process within MSC. This is perhaps a grammatical mistake - unfortunately it gives an incorrect picture of the story selection process. Firstly, the key point of the selection process (an essential step in MSC) is to have a dialogue with a number of people. Voting is only one of many ways to come to a decision once the dialogue has occurred. Often consensus discussion, along with other tools, is used to make the final decision on the most significant story.

Collecting MSC stories, I have sometimes come across a greater sense of empowerment and willingness to share from the storyteller (including from some in more marginalised communities) compared to other qualitative interview techniques. I suspect it's because I am asking them to tell me what is important from their perspective - rather than answering my questions about what the project thinks is important.

A wonderful spin-off from storytelling is the connection it allows people to make (between storyteller and interviewer).

With MSC I have frequently found a side benefit has been increased motivation among staff and storytellers (wow - when I tell my story I reflect back on what I have achieved as a community member, and you as a staff member hear about the difference you made to my life).


Alice Macfarlan

Thanks Theo, these are great comments. The Most Significant Change page has been linked in the article so readers who are interested can read more about the approach, and on your advice I've changed the wording slightly to capture that voting is just one option for MSC.

I've also added a note in the text that readers should scroll down to view your comment as your experiences in using story-telling in evaluations are really valuable so thank you for sharing.

Best,

Alice

rick davies

A few comments about SenseMaker

1. The use of self-signifiers is an important option to consider with _any_ story collecting process

2. The simplest options for self-signifiers are: (a) lists of items that people can choose from (one or more), and (b) bi-polar scales (where people select a point on a scale). The use of triads is not necessary.

3. It is not necessary to use the proprietary Sensemaker software. Many (free) social network visualisation software packages will enable you to identify clusters of stories (with similar attributes) and clusters of attributes (with similar sets of stories).

4. While the use of self-signifiers puts some power in the hands of the story teller a lot of power still remains in the hands of the researcher / evaluator, when they then look for patterns in the use of these self-signifiers across multiple stories. That process of analysis needs to be as transparent as possible.

