Horizontal Evaluation

Synonyms: Horizontal Learning

Horizontal evaluation is an approach that combines self-assessment by local participants with external review by peers. Originally developed to evaluate new methodologies for agricultural research and development, horizontal evaluation has wider potential for application. In its original setting, the focus of horizontal evaluation is the R&D methodology itself rather than the project per se or the team or organization that developed it.

The involvement of peers neutralizes the lopsided power relations that prevail in traditional external evaluations, creating a more favourable atmosphere for learning and improvement.

The central element of any horizontal evaluation is a professionally facilitated, three-day workshop that includes all of the steps and processes essential to this approach. The workshop brings together a group of 10–15 ‘local participants’ who are developing a new R&D methodology and a similar-sized group of ‘visitors’ or ‘peers’ who are also interested in the methodology. The workshop is organized from start to finish by a small group known as the “workshop organizers” (a sub-group of the local participants). It combines presentations about the methodology with field visits, small group work and plenary discussions, and it elicits and compares the perceptions of the two groups concerning the strengths and weaknesses of the methodology.

The perceptions of the groups are captured in an evaluation matrix, which is a key tool in this approach. The matrix is used to collect data, site by site, on a limited, pre-agreed number of relevant, highly focused criteria. The analysis phase yields practical suggestions for improvement, which arise from the strengths and weaknesses observed by the ‘peer group’ and discussed in the workshop. These recommendations are intended to be put to use immediately, so horizontal evaluation is essentially formative/developmental.
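As a rough illustration of how the evaluation matrix can be organised, the sketch below represents it as a simple data structure keyed by criterion and site, with each group's observations recorded separately. All criterion names, sites and entries here are invented for illustration; they are not taken from any actual horizontal evaluation.

```python
# Minimal sketch of an evaluation matrix: for each (criterion, site) pair,
# the strengths and weaknesses noted by each group are kept side by side,
# ready for the comparative analysis on Day 3.
evaluation_matrix = {
    ("Farmer participation", "Site A"): {
        "local participants": {"strengths": ["High attendance at trials"],
                               "weaknesses": ["Few women involved"]},
        "visitors":           {"strengths": ["High attendance at trials"],
                               "weaknesses": ["Facilitators dominate discussion"]},
    },
}

def entries_for(criterion, site):
    """Return both groups' observations for one criterion at one site."""
    return evaluation_matrix[(criterion, site)]
```

Keeping the two groups' perceptions in the same structure, rather than in separate reports, is what makes the later side-by-side comparison straightforward.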

The processes employed during the workshop and field-visits serve to promote social learning among the different groups involved. Experience to date suggests the approach stimulates further experimentation with and development of the methodology in other settings. The authors believe that horizontal evaluation can be applied in different types of projects and programmes and is especially suited to those that operate in a multi-site, network mode.

The primary responsibility for planning and executing the details of the workshop lies with the “workshop organizers” – a sub-group of the ‘local participants’. The workshop organizers are responsible for the following tasks:

  1. Identifying the appropriate object for evaluation (in the cases the authors have supported, this has been an R&D methodology of broad regional interest).
  2. Ensuring the recruitment and participation of an appropriate group of local participants and visitors. The latter should have an interest in learning about and perhaps using the methodology or a revised and improved version of it.
  3. Designing the three-day workshop in detail.
  4. Finding a facilitator who is already familiar with, or willing to master, the horizontal evaluation approach.
  5. Developing, in a participatory manner, the preliminary evaluation criteria. These are often based on the criteria of the organization or project using the methodology.
  6. Arranging the field visits that will demonstrate the application of the methodology.
  7. Sending both sets of participants – local and visiting – all necessary background information prior to the workshop.
  8. Arranging a ‘dress rehearsal’ of key moments and presentations for the workshop.
  9. Making provisions for writing up and promoting the use of the workshop’s findings.

Planning the workshop: The professional facilitator, selected by the workshop organizers, works with the local group to (i) identify the appropriate methodology to be evaluated, (ii) select participants (both local and visiting) and (iii) contribute to the workshop organizers’ preparations for the event. 

Day 1– Introducing the methodology: At the start of the event, the facilitator introduces the objectives of the workshop and the procedures to be followed. The facilitator stresses that the workshop is intended to evaluate ONLY the methodology selected, not the project as a whole, nor the executing agency as an organization. S/he encourages the visitors to be critical but constructive by identifying the strengths and positive aspects of the methodology as well as its weaknesses. S/he also encourages the local participants to be open and receptive to comments and suggestions.

Day 2 – Field visits: Field visits provide the opportunity for visitors to see at first hand the methodology under development and to talk with those whose livelihoods are directly affected by it. Visitors conduct semi-structured interviews, observe carefully and, as far as possible, triangulate different sources of information.

Day 3 – Comparative analysis and closure: For each evaluation criterion, the two groups (visitors and local participants) separately identify strengths, weaknesses and suggestions for improvement. Each group is asked to limit itself to identifying no more than six strengths, six weaknesses and six suggestions for each evaluation criterion. Following this group work, visitors and local participants present their findings in separate plenary sessions. The facilitator then helps all participants to identify convergent and divergent ideas. Where the strengths converge, local participants can feel confident that they are on the right track. Where weaknesses coincide for both groups, the need for corrective action is indicated. Where the two groups’ assessments of strengths or weaknesses diverge, the reasons must be explored together in detail in order to reach a shared understanding of the differences of opinion.
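The Day 3 comparison can be sketched as a simple set operation: findings raised by both groups are convergent, and findings raised by only one group are divergent. The example findings below are invented, and exact string matching is a deliberate simplification; in a real workshop, deciding whether two differently worded findings "match" is a facilitated judgement, not a mechanical one.

```python
def compare_findings(local, visitors):
    """Classify each finding as convergent (raised by both groups) or
    divergent (raised by only one group), mirroring the Day 3 plenary
    comparison. `local` and `visitors` are lists of short finding statements."""
    local_set, visitor_set = set(local), set(visitors)
    return {
        "convergent": sorted(local_set & visitor_set),
        "divergent_local_only": sorted(local_set - visitor_set),
        "divergent_visitors_only": sorted(visitor_set - local_set),
    }

# Invented example: weaknesses noted for one evaluation criterion.
result = compare_findings(
    local=["Too few field staff", "Unclear sampling"],
    visitors=["Unclear sampling", "No baseline data"],
)
```

Convergent weaknesses ("Unclear sampling" here) signal a clear need for corrective action, while the divergent items are exactly the points the facilitator would ask the two groups to explore together.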

Strengths

  • Overcomes the lack of clear outcomes and follow-up that typically result from mere site visits;
  • Avoids the limitations of traditional ‘external expert-led’ evaluations, which restrict participation and learning and may result in poorly implemented recommendations;
  • Flexible, in that it can be applied in a range of settings and evaluations, including fairly complex R&D methodologies;
  • Facilitates the sharing of information, experiences and knowledge, and interactive learning;
  • Facilitates the building of trust and a sense of community;
  • Promotes ownership of results, which in turn encourages the adoption of the corrective action needed to improve R&D methodologies;
  • Creates the conditions for the adaptation and wider use of the R&D methodologies being evaluated;
  • It is enjoyable for participants, who, as part of the process, learn a great deal in a dynamic yet structured environment;
  • Local participants accept critical feedback and observations more easily from peers than from external evaluators;
  • It fosters social learning, as local participants and visitors are actively engaged throughout the review process, which guides analysis and synthesis and generates new knowledge and proposals for action;
  • It stimulates experimentation with and further development of the methodology in other locations;
  • It can be used in conjunction with a more traditional external evaluation to generate additional information and insights.

Critical Success Factors

The authors identify the following factors as critical for the success of an application of the horizontal evaluation approach:

  • Select the right moment for the workshop – one when the new R&D methodology is sufficiently advanced so that there is real substance to review but not so finished that there is little scope for modification;
  • Select visitors with care to ensure that they have diverse perspectives, possess adequate knowledge and experience, and are perceived as peers rather than superiors by the members of the team whose methodology is being evaluated;
  • Ensure good facilitation so as to create an environment of trust, focus the attention of participants and manage time efficiently;
  • Identify and employ a limited number of clearly defined evaluation criteria;
  • Ensure presentations and field visits are well planned and prepared, so that the visitors have all the information they need to understand the methodology.

Resources

Guides

  • Horizontal evaluation: Stimulating social learning among peers

References and Further Reading

  • Bernet, T., Devaux, A., Ortiz, O. and Thiele, G. 2005. Participatory Market Chain Approach. BeraterInnen News, 1. Downloaded 26 December 2005 from the website of the Swiss Center for Agricultural Extension (LBL): http://www.ifad.org/innovation/presentations/andes.pdf
  • Papa Andina. 2004. Memoria Taller de Evaluación Horizontal: Articulando demanda y oferta tecnológica, la experiencia del proyecto Innova-Bolivia. Lima, Peru: CIP.
  • Papa Andina. 2005. Final Report – 3rd PMCA Workshop in Uganda, 13–15 December 2005. Lima, Peru: CIP.

Do you have an example of using a horizontal evaluation approach that you'd like to share with us? Let us know using our contact form.


Comments

Claude Kasonka

Are there practical experiences and reports from those that have used this approach before? Sounds like an interesting concept

Alice Macfarlan

Hi Claude,

The best resource we have on the site is the guide Horizontal evaluation: Stimulating social learning among peers. However if you are interested and find some further examples in your own research we'd be very grateful if you could share them with us, as we'd like to be able to expand our examples and resources for this method.

Best,

Alice
