Lessons from a trial of the Success Case Method

28th July 2017 by Liz McGuinness

The Success Case Method approach is useful for documenting stories of impact and for understanding the factors that help or hinder impact. It is particularly useful for uncovering the contextual forces that influence impact. Originally designed for evaluating corporate training programs, the Success Case Method is now being applied to other programs including international development interventions.

Last year, I provided technical assistance to a pilot of the Success Case Method as part of the USAID-funded Complexity-Aware M&E Trials. The purpose of the trials was to address two challenges. The first is the lack of monitoring and evaluation (M&E) methods that can accommodate complexity and provide data to support adaptive management of development interventions. The second, which holds across the evaluation field generally, is the lack of research on the effectiveness of evaluation methods (see the Editor’s Notes in the Winter 2015 edition of New Directions for Evaluation: Research on Evaluation). The lessons learned from the Success Case Method trial were intended to contribute to USAID’s guidance on the use of complexity-aware M&E methods.

The trial, although short-lived, surfaced several key lessons for those considering this approach. Originally developed to quickly evaluate, and in some cases put a monetary value on, professional training programs, the Success Case Method was adapted and applied to a multi-country capacity building project funded by USAID. The program provided training and support to a small number of professionals who were placed in new positions within government departments. The objectives for using this method were to support the donor and implementer in adaptively managing the project and to discover the as-yet-unknown development pathways from project activities to the desired outcomes.

About the Success Case Method

The Success Case Method has two steps. The first is to survey all program participants (or a representative sample) and their supervisors to identify the extreme success and non-success cases, based on the outcomes that the training program seeks to achieve. The participants’ survey focuses narrowly on whether the participant made use of the program inputs (i.e., did they use what they learned in the training?) and whether they achieved a positive program outcome as a result. The supervisors’ survey is intended to verify the participants’ reported results (i.e., did the participant do what they said they did, and did it result in a positive program outcome?).

Once success case participants are identified, the second step is to interview them and their supervisors in order to learn more about how they were successful and which conditions enabled or inhibited this success. Though this wasn’t the focus of our study, the method can also be used to identify unsuccessful cases in order to learn why, and under what conditions, they were not successful.
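The screening logic of the first step can be sketched in code. This is a minimal illustration rather than part of the method itself: the field names and the exact decision rule are assumptions based on the description above.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SurveyResponse:
    participant_id: str
    applied_training: bool                # did they use what they learned?
    reported_outcome: bool                # did they report a desired program outcome?
    supervisor_confirmed: Optional[bool]  # None if the supervisor did not respond

def screen(responses: List[SurveyResponse]) -> Tuple[List[str], List[str], List[str]]:
    """Split respondents into candidate success cases, non-success cases,
    and unverified cases that need corroboration at the interview stage."""
    success, non_success, unverified = [], [], []
    for r in responses:
        if r.applied_training and r.reported_outcome:
            if r.supervisor_confirmed:
                success.append(r.participant_id)
            else:
                unverified.append(r.participant_id)
        else:
            non_success.append(r.participant_id)
    return success, non_success, unverified
```

As the trial below shows, self-reported outcomes often do not hold up under interview, so the "success" list produced here should be treated only as a candidate list.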

Contextual factors

Our conditions differed from the conditions in which the Success Case Method has traditionally been used in several ways:

  1. It was applied in an international development context in a multi-country project.
  2. The study was conducted in English, a foreign language for everyone involved, as it was the common language across all trainees and supervisors. None of the study participants were native English speakers, nor were two of the three M&E team members conducting the research.
  3. The M&E team was not part of the organization in which the training participants and their supervisors were located and had no real authority or leverage over them.
  4. Participants were not only geographically dispersed but also separated by difficult communications links, in particular, weak internet accessibility, which limited the options for administering the survey and the individual interviews.

Implementation challenges

We encountered a number of challenges when implementing this approach, including these:

  1. Identifying success cases
  2. Keeping the focus on the primary purposes
  3. Setting up appropriate analysis processes

Identifying success cases

At the survey stage, clearly identifying success and non-success cases was more difficult than we anticipated. Somewhat surprisingly, participants claimed achievements that did not qualify as the “desired outcomes” of the program. That is, they claimed to be success cases but were not. Broadly speaking, there was a tendency to mistake actions for outcomes. For similar reasons, we were unable to identify any non-success cases. In some instances, it was obvious to us that participants were not true success cases; in others, it did not become obvious until the individual’s interview.

The interview questions (How did you apply the capacity building support you received? Why are the results you achieved important?) required participants to describe their results, and the implications for achieving program objectives, in more detail. While the team interviewed three participants who had been identified as “success cases” through the survey, the interviews showed that only one of them qualified as such according to our criteria. Better orientation of participants and their supervisors to the SCM approach and its purpose might have averted the over-reporting of success we encountered. It may be beneficial to emphasize to program participants that the Success Case Method exercise is not a performance evaluation and that the results will have no repercussions for them personally.

Lack of clarity on participants’ success was compounded by the large proportion of supervisors who did not respond to the survey. While we don’t have information on why this was the case, we can speculate that some may have lacked confidence in their English language skills, or that they were simply too busy: the supervisors were often senior government officials. One solution to this challenge is to investigate whether other individuals can corroborate the participant-reported achievements.

When supervisors did respond to the survey, their responses did not always improve our ability to identify success or non-success cases. This is a problem because it takes more time to truly identify successes and makes it nearly impossible to identify non-success cases. Supervisors seemed reluctant to report any negative findings about their trainee. To address this, as mentioned earlier, supervisors should be oriented at the start of the program on the importance of the evaluation to the success and sustainability of the intervention.

Keeping the focus on the primary purposes

Administering and analyzing the participant and supervisor surveys became bogged down because the surveys had too many questions, particularly open-ended ones. The method calls for a limited number of closed questions; however, both donors and implementers sought to add survey questions to gather more contextual information and to obtain data for use in regular M&E reports. While the qualitative data provided a trove of information about the program as experienced by the trainees, which was valuable for improving the program, the volume of data, combined with the team’s weak qualitative analysis skills, delayed the identification of the success cases. When the timeline between the survey step and the individual interviews is short, users of this methodology should resist pressure from donors and program implementers to add extraneous, “nice to have” questions and should keep the survey focused on the primary purpose of the evaluation.

Setting up appropriate analysis processes

The M&E team was time-constrained because the Success Case Method evaluation was added on to their normal workload of preparing routine M&E reports. Due to the small number of participants in the evaluation and the lack of reliable internet connections, the survey was administered as a Word-based questionnaire emailed to participants. To save time and reduce the burden on the team, we created an Excel database to consolidate the survey data and to automatically identify success and non-success cases. It also easily produced analyses and charts that fed into the report. (A report was not called for in the original methodology, but we created one as a vehicle for packaging and communicating the contextual and participant information gathered.) The data collection and consolidation tasks could have been more efficient had we used an online survey platform such as SurveyMonkey or SurveyGizmo. Programs considering the Success Case Method should ensure staff have the bandwidth before engaging in the study.
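The consolidation step could equally be done with a short script rather than a hand-built Excel database. Below is a sketch, assuming the answers from the emailed Word questionnaires have already been transcribed into a simple CSV; the participant IDs, column names, and yes/no coding are all hypothetical.

```python
import csv
from collections import Counter
from io import StringIO

# Responses transcribed from the emailed Word questionnaires
# (IDs, columns, and coding are illustrative only).
raw = """participant,used_training,achieved_outcome
P01,yes,yes
P02,yes,no
P03,no,no
"""

rows = list(csv.DictReader(StringIO(raw)))

# Automatically flag candidate success cases, as the Excel database did.
success_flags = {
    r["participant"]: r["used_training"] == "yes" and r["achieved_outcome"] == "yes"
    for r in rows
}

# Tally the closed questions to feed summary tables and charts in a report.
tally = Counter((r["used_training"], r["achieved_outcome"]) for r in rows)
```

An online survey platform would replace the transcription step entirely, exporting a CSV like the one assumed here directly from respondents.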

Monitoring and evaluation teams in the field are often highly skilled in quantitative survey techniques and working with indicators, but they may have difficulty analyzing qualitative data quickly and well. Capacity building and role modeling are needed to bring qualitative data analysis skills up to a level that can support timely use of this method. We provided distance training over Skype and supplied templates for analyzing both the survey data and the individual interview data. We believe in-person training would have been more effective for building these analytical skills. If staff lack the skills and there is insufficient time to train them, the number of open or qualitative questions on the survey should be kept to a minimum to ensure timely data analysis.

Benefits of using the Success Case Method

The survey provided rich information about the participants’ activities, their progress, and the contexts in which they operate. It highlighted barriers to progress for some participants, which program implementers should address. The individual interview was a successful technique for fully understanding the nature and importance of the reported outcome and for uncovering the conditions necessary to achieving it. Together, the survey and interview data allowed the donor and implementer to begin building out a theory of change for the intervention.

The data collected on the one true success case highlighted very well the significant results that the capacity building program can achieve. The success case was written up based on the individual interviews with the success case participant and their supervisor. The narrative approach recommended by the method provided the foundation for a success story that can be used to promote the program and, importantly, serves as a case study for understanding some of the factors that enable program success.

Final thoughts

Liz's final advice for using the Success Case Method:

When to use the Success Case Method

  • When you know the long-term objectives of your program and you have identified your program activities but you do not know the causal pathway from activities to impact.
  • When you can identify both the program participant and at least one individual who oversees their work and can vouch for both the participant’s actions and the outcomes of these actions.
  • When the M&E team can be given authority to directly communicate with, and collect data from, both the participant and supervisor subjects.

Tips for implementing the Success Case Method

  • The M&E team who are to conduct SCM evaluations should include at least one member who can design, administer and analyze surveys and at least one member who has strong qualitative data analysis skills.
  • Ensure that the evaluation is included in the project design, workplan and budget so that it does not become added work for the staff.
  • Use an online survey platform (e.g., SurveyMonkey, SurveyGizmo) to make data collection and consolidation more efficient. Find out what kind of internet access your respondents have before deciding on the survey approach.
  • Create an Excel database to consolidate the survey data, to analyze the closed questions, and to automatically identify success and non-success cases.
  • Create templates for analyzing the open questions to serve as guides.
  • If the Success Case Method results are to be used to support adaptive management of your project, ensure that the schedules of the evaluation and of management decision-making are in sync. That is, make sure your results are reported sufficiently far ahead of annual workplan decisions to be taken into account.

Over to you

Looking for more information about the Success Case Method? Head over to our approach page for an overview and some resources. The page is currently in Stub mode and we'd love your help expanding it:

Have you used the Success Case Method in an evaluation? What did you use it for? What worked well and what could be improved? Do you have any advice or resources you would like to share with the BetterEvaluation community? Let us know in the comments or send us a message.

A special thanks to this page's contributor: Liz McGuinness, President, LMG Consulting LLC, Washington, DC, United States of America.


Rick Davies

One alternative to asking for the most and least successful cases (e.g. projects) is to ask respondents to name X cases and then sort these named cases into two groups of as equal size as possible: one representing the "more successful" cases, the other representing the "less successful" cases.

I did this recently in a SurveyGizmo survey of 14 organisations that were partners in a large project. The survey then asked respondents to tick yes/no to a list of possible attributes of working relationships with the named organisations, first for the "more successful" group as a whole, and then for the "less successful" group. With a few exceptions, this worked well, and this was in Indonesia.
