What is it?
Research, monitoring and evaluation (R,M&E) tasks, such as developing an M&E framework or undertaking small studies and evaluations, can be done internally by existing staff (within one organisation, or as a joint activity involving several implementing partners), externally by a consultant, or through a hybrid of the two (combining internal staff and stakeholders with external researchers and evaluators). They could also be done by peers or by community groups. The types of expertise required, the need for fresh outsider perspectives, cost, and time are key issues in deciding who will conduct some or all of the evaluation tasks.
The Steps in Planning and Managing Evaluation provide a comparison of the strengths and trade-offs of internal and external evaluation options, and help decision makers be explicit about the reasons for their decisions. The Steps also provide guidance on qualities to consider when recruiting external evaluators or researchers. This page also provides an overview of key options and approaches. These pages are recommended background reading before considering which options to apply to C4D.
Internal, external and hybrid options for conducting R,M&E in C4D
Applying the C4D principles
One important decision is who will conduct the R,M&E. This might mean involving internal staff, partners, community groups and other stakeholders in the R,M&E process. External consultants may still have a role in participatory R,M&E:
Sometimes there are very few local evaluators with the skills and knowledge to undertake C4D evaluations and studies. In these cases, partnerships between international/regional consultants, local consultants, and local community groups and organisations can be considered. In these situations you can state explicitly that mentoring and capacity development of the local partner are expected.
What are the assumptions about who should conduct the R,M&E? What alternatives are there, and how might they be more or less inclusive of diverse voices? What kinds of qualities are important for a facilitator/evaluator? How might different facilitators influence power dynamics?
Recommended options and adaptations for C4D
Several options that would work well for C4D are listed on this page, with more information on each: internal options, hybrid options, community-based options, external consultants, expert review and peer review. Relevant approaches include horizontal evaluation and participatory evaluation.
Mentoring role descriptions
If mentoring roles will be part of your plan, consider this as part of determining consultant qualities (click here for general information on consultant qualities), and include it in the EOI (see Document management processes and agreements).
A participatory matrix is one way to work through how different stakeholders might be involved in R,M&E. Working through this matrix with the key decision makers (see Establish Decision making processes) can help in thinking through who might conduct the R,M&E and what kind of role this will entail (facilitation, independent evaluation, etc.).
Community Radio Continuous Improvement Toolkit - This toolkit is premised on a mix of self-assessment and peer review aimed at co-learning and horizontal evaluation. In this case, fellow community radio station staff and volunteers undertake the assessment. It was created in the context of community radio in India, but, with some adaptation of the questions, the processes and guidance could be applied to support peer assessment between organisations doing a range of different types of C4D. Click here to read a summary about when and why you could use this resource.
The Ruka Juu Impact Evaluation was undertaken as a partnership between C4D NGO Femina HIP's Monitoring and Evaluation (M&E) department, international consultants and two local partners. Click here to read more.
My Rights My Voice Completion Report was led by a team of independent evaluators. Youth familiar with the programme were included in the field research as ‘peer evaluators’ in three out of the four countries. After initial training and the development of appropriate data collection tools, they independently carried out evaluation research with peers, parents and teachers, and presented the findings to Oxfam staff and partners. Click here to read a summary page about this example. This example is consistent with the C4D Evaluation framework in relation to this task in the following ways:
- Participatory: the report's background section (52-53) provides an example of how an evaluation can be designed to incorporate both professional evaluators and young people in conducting evaluation tasks.
- Learning-based: the participation of young people in peer evaluation was intended to support mutual learning, and depended on adequate training in the data collection tools.