Frame the boundaries of the evaluation or M&E system in FCV settings

In fragile, conflict-affected, and violent (FCV) contexts, framing the boundaries of an intervention is crucial to ensure that monitoring and evaluation (M&E) activities are contextually appropriate, ethically sound, and adaptable to changing conditions.

This involves identifying primary users, clarifying the purpose, and defining the scope in a way that reflects the complexities of the environment.

Overarching principles

Particularly relevant principles for framing the boundaries of M&E activities include:

  • Do no harm and ensure conflict sensitivity: In FCV settings, considering local tensions when identifying users and defining purposes and scope is crucial to reducing the risk of exacerbating issues and causing unintended harm. Understanding how different groups might react to the M&E process and findings helps ensure that the evaluation remains sensitive to local conflicts and doesn’t inadvertently increase tensions.
  • Prioritise safety and security: Safety considerations are critical when defining who will be involved and what the evaluation will cover. For instance, identifying users in ways that minimise exposure to risk, especially when dealing with sensitive data or topics, is essential in FCV contexts. Safety also matters when specifying key evaluation questions (KEQs): questions should be relevant and aligned with the evaluation's purpose, but also tailored to minimise risks.
  • Respect local contexts and involve communities: This principle connects to Clarify who will actually use the evaluation and Specify the key evaluation questions. Respecting local contexts involves engaging with community members to better understand their perspectives, needs, and values. This makes the evaluation more relevant to local users and ensures the evaluation questions align with community concerns and cultural norms.
  • Build trust, transparency, and accountability: Transparency and clear communication with stakeholders about who the evaluation is for and what it aims to achieve helps build trust with communities, which is critical in FCV contexts where relationships are often fragile. This principle supports involving local stakeholders in defining values and success criteria to ensure the evaluation is fair and accountable. It’s important to note that transparency should not be prioritised above safety, security, and conflict-sensitivity.
  • Foster flexibility and adaptability: Given the dynamic nature of FCV environments, new information or shifting priorities may necessitate adjustment to the focus of M&E activities. Establishing adaptable boundaries means setting goals and scopes that can accommodate new information or shifts in the environment, supporting an iterative approach to M&E. For example, success criteria and evaluation questions may need adjustment as the situation evolves to support the adaptive management of programs. In FCV contexts, being flexible with scope and boundaries ensures the evaluation remains relevant even as local conditions change.
  • Act strategically to decide where to focus evaluation efforts: Setting strategic priorities to ensure resources are used efficiently is critical in FCV environments. Identifying which user needs and questions are most critical, and feasible to answer safely, allows for a focused approach, ensuring that the evaluation addresses high-impact areas that contribute to both short-term and long-term goals. Strategically defining boundaries helps ensure that M&E activities remain practical, efficient, and responsive to immediate needs.

View the full list of overarching principles here.

Identify intended users

When identifying intended users for M&E activities, it's important to consider that different user groups may have different agendas and intended uses for the findings.

  • Consider the needs of both primary and secondary users: Understanding the information needs of all key users is essential, as these needs can significantly influence the evaluation design.
  • Balance trade-offs between design approaches and uses: Different user groups may benefit from different types of insights. For instance, longitudinal studies can be valuable for policymakers seeking long-term trends, but may not suit program managers who need timely data to inform immediate, context-specific decisions, particularly in complex environments like FCV settings.

Decide purposes

Resource and security constraints can limit the scope of evaluations in FCV contexts. In these environments, clearly defining the primary and secondary uses of an evaluation is critical to ensuring it has maximum utility. In some cases, addressing multiple purposes within a single evaluation can save time and money by consolidating efforts and leveraging existing data collection processes; in other cases, it will be important to select a few purposes and frame the evaluation around these.

Primary intended uses of findings that are particularly relevant for FCV contexts are:

  • Inform decision-making aimed at improvement (formative): Formative evaluations can inform the adaptation of strategies to improve performance or to respond to changes in context.
  • Inform decision-making aimed at selection, continuation or termination (summative): Summative evaluations can inform decisions about whether to continue, scale, or terminate programs in highly resource-constrained environments.
  • Contribute to broader evidence base: In addition to their immediate, local use, M&E can contribute to understanding what works in these challenging environments, helping to shape future interventions by other organisations and influencing broader strategies in conflict-sensitive development and peacebuilding.

Primary intended uses of processes that are particularly relevant for FCV contexts are:

  • In FCV contexts, where trust is fragile and power imbalances are prevalent, the M&E process can be a vital tool for building legitimacy by demonstrating transparency, inclusiveness, and responsiveness to local needs and expectations.  
  • Conducting and using M&E activities in decision-making can also help strengthen a culture of evaluation, fostering norms around identifying and checking assumptions, listening to community voices, and using evidence to inform decisions.  
  • This process can also be used to strengthen capacity by building skills in creating, conducting, managing, and using evidence.  
  • Moreover, M&E plays a crucial role in demonstrating accountability, both to funders and to communities, signalling that inclusive engagement, competent management of resources and risks, and transparent decision-making are in place.

Resources

  • Clarify the intended uses of this evaluation—is it to support improvement, for accountability, for knowledge building? Is there a specific timeframe required (for example, to inform a specific decision or funding allocations)? If there are multiple purposes, decide how you will balance these.
  • Supporting the use of M&E in a national M&E system means actively facilitating and encouraging the application of monitoring and evaluation findings to inform policy decisions, improve programs, and guide strategic planning.

  • What is evaluation?

    There are many different ways that people use the term 'evaluation'. At BetterEvaluation, when we talk about evaluation, we mean: "any systematic process to judge merit, worth or significance by combining evidence and values."

Specify the key evaluation questions (KEQs)

When developing key evaluation questions (KEQs) in FCV contexts, it is essential to align them with the evaluation’s purpose and context. This ensures that the questions guide data collection and analysis while addressing the unique challenges of limited resources, equity, and evolving conditions.

Align KEQs with the evaluation’s purpose and scope

In FCV contexts, where resources are often limited, KEQs should be guided by the evaluation’s purpose and scope. KEQs play a critical role in shaping data collection, analysis, and the resulting recommendations, and therefore must be crafted to meet the needs of primary intended users and provide actionable information.

  • Aligning KEQs with the evaluation’s scope is critical—questions that are not aligned or could be answered by other means (such as through audits or thematic studies) should be avoided.
  • Limiting the KEQs to ‘must-have’ questions helps to ensure depth and relevance. Too many KEQs can lead to shallow evaluations and make it harder to generate actionable findings.

Read more on how types of evaluation and purpose influence questions.

Consider the state of current knowledge

The level of existing, agreed-upon knowledge should play an important role in shaping KEQs and, by extension, the design of M&E activities (adapted from the BetterEvaluation Manager’s Guide):

  • Established knowledge: When there is established, agreed knowledge about what works and why, focus on whether processes follow agreed standards and align with proven approaches to ensure relevance in resource-constrained FCV settings.
  • Uncertainty about effectiveness: In complex and rapidly changing FCV contexts, KEQs should examine processes, outcomes and impacts and use an appropriate causal inference design to test the underlying theory of the intervention, particularly as assumptions may not hold true given the complexity of FCV contexts.
  • No clear best approach: When there are multiple potential strategies but no clear best approach, it is essential to document the processes and context of each strategy. Comparing their performance in terms of outcomes, efficiency, and adaptability helps identify which methods are most effective in particular FCV settings, where conditions can change rapidly and standard practices frequently need to be adjusted to better fit the specific context.
  • Limited knowledge about solutions: When little is known about potential solutions, KEQs should explore early signs of success or failure to support ongoing learning and adaptation, which are key to navigating uncertainty and responding to emerging challenges in FCV contexts.

There will likely be different levels of knowledge and confidence in different aspects of an intervention. Therefore, a range of different question types can be used.

Influence on design and methods

KEQs determine the evaluation design and appropriate methods. For example:

  • Causal KEQs: The design must allow for establishing attribution or contribution.
  • Budget implications: KEQs requiring resource-intensive methods (e.g., surveys) may increase costs. Therefore, KEQs must be realistic and feasible within the evaluation’s resource constraints (ALNAP, 2016, p.106).

Frame KEQs in an action-oriented manner

KEQs should be framed in a way that directly supports decision-making and future actions. Action-oriented KEQs ensure that the evaluation produces findings that are directly applicable and lead to clear recommendations. This is particularly important in FCV settings, where interventions must remain flexible and responsive to changing conditions. For example, instead of just asking what has happened, an action-oriented question would focus on how future interventions can be improved (ALNAP, 2016).

Include equity-focused KEQs

Consider how different groups are impacted by the crisis:

  • Marginalised populations are likely to be disproportionately impacted, and it's important to recognise and investigate these different impacts so that policies and programmes can be designed or revised appropriately.
  • Incorporate questions that address equity, gender, and environmental sustainability to capture the impact on vulnerable populations and the environment.

Involve affected groups in developing KEQs

It’s important to incorporate input from those affected by the evaluation's results, ensuring that KEQs reflect local priorities, even if they differ from funder interests.

As with any form of participation in FCV settings, it is important to consider risks to participants, prioritise safety, and take a conflict-sensitive approach.

Determine what ‘success’ looks like

In FCV contexts, defining 'success' requires flexibility and sensitivity to the specific context. Success is not only about meeting predefined objectives or measurable outcomes, but also about reflecting the agreed standards, values, and priorities of the stakeholders involved.

Adopt a participatory and conflict-sensitive approach

  • Defining success in FCV contexts should involve collaboration with stakeholders and community members, but conflicting interests and environmental complexities can make this challenging.
  • A conflict-sensitive approach should be used to avoid exacerbating tensions or reinforcing power imbalances. Prioritise equity by ensuring that vulnerable groups have a voice in the process.
  • Full agreement may be hard to achieve, but the aim is to create a definition of success that is inclusive, contextually appropriate, and mindful of potential impacts on conflicts and power dynamics.

Ensure flexibility and adaptability

Success criteria need to remain flexible, as conditions in FCV settings can change rapidly. Regularly revisit what ‘success’ looks like and adapt as the situation evolves, to ensure that success is measured in ways that remain relevant and meaningful.

Go beyond stated objectives

Evaluations should go beyond assessing whether stated objectives were met and consider a broader range of criteria, including:

  • Positive and negative unintended consequences of interventions, such as effects on equity and the environment
  • The sustainability of outcomes
  • Contributions to peacebuilding, resilience, and social justice.

Include cross-cutting themes

Cross-cutting themes such as local context, human resources, protection, participation of primary stakeholders, coping strategies, resilience, gender equality, HIV/AIDS, and the environment should be considered.

While not all themes will apply to every evaluation, evaluators should provide a clear rationale for excluding any themes. (Beck, 2006)

Apply standard criteria thoughtfully

By using standard criteria as a flexible guide rather than a strict framework, evaluators can ensure that no major issues are overlooked while still allowing for the flexibility needed in these challenging settings (Beck, 2006).

Standard criteria, such as the OECD-DAC criteria and those developed for the Evaluation of Humanitarian Action (EHA), provide a widely agreed-upon structure for evaluations in FCV contexts. These criteria include:

  • relevance
  • effectiveness
  • sustainability
  • impact
  • efficiency
  • coverage
  • coherence
  • appropriateness
  • connectedness
  • coordination
  • protection.

The DAC criteria work best in combination, so that effectiveness can be considered in line with criteria of relevance, efficiency and sustainability, for example.

Not all DAC criteria are applicable in every evaluation. Coherence, for example, may be less relevant to single-agency projects. Use of the criteria should be tailored to the specific organisational context.

Example: Everyday Peace Indicators in Sri Lanka

The Everyday Peace Indicator (EPI) project, developed by peacebuilding researchers and evaluators, employs a participatory approach to defining and measuring peace in conflict-affected communities. One such project in Sri Lanka demonstrates how participatory methods can redefine the measurement of reconciliation in post-conflict societies. Recognising that reconciliation is a subjective concept varying across communities, EPI engaged 30 diverse Grama Niladhari divisions across six districts, encompassing Sinhala, Tamil, and Muslim communities. Instead of imposing pre-existing metrics, EPI asked ordinary citizens—including schoolteachers, students, farmers, and businesspersons—to identify indicators that reflect reconciliation in their daily lives. This approach revealed significant variations in understanding and priorities for reconciliation across different ethnic communities and regions.

The participatory numbers generated through this process illuminated nuanced local perspectives on what "success" in reconciliation looks like. For instance, in war-affected Tamil communities in the North, indicators focused on addressing issues of missing persons and improving regional development. In contrast, the more ethnically diverse Eastern region prioritised indicators such as increased business relationships across ethnic lines and inter-community participation in social events like weddings and funerals. Even within similar regions, communities developed distinct metrics—one might measure reconciliation progress through road repairs while another through installing streetlights. By quantifying these locally defined indicators, EPI provided a more granular and relevant measure of reconciliation progress. It empowered communities to define and track success on their terms rather than relying solely on national-level or externally imposed metrics.
