C4D Hub: Use measures, indicators or metrics

What are measures, indicators and metrics?

Measures, indicators or metrics are used to succinctly describe the context, implementation and/or results of an intervention (project, program, policy) such as inputs, processes or activities, outputs, outcomes and impacts. The terms are often used in different ways in different organisations, so it is important to check their meaning in a specific setting or context. In this guidance, we use the term ‘indicator’ to refer to all of these terms and make a distinction only where it is important to do so – in particular, to distinguish between a direct and accurate ‘measure’ of something and a partial, approximate ‘indicator’.

General information 

The use measures and indicators page provides detailed information about these concepts, along with a range of resources and examples of how they have been used in practice across different topics and sectors. It is highly recommended that you read that page before considering options to apply to C4D interventions.

Indicators and C4D

Applying the C4D principles


The selection and creation of outcome and impact indicators is a tricky area for C4D since emergent outcomes are hard to predict and are different in each context.


Indicators should reflect local ways of looking at and measuring the world. Ideally, those funding, managing, planning, implementing, collecting and using the data should be involved in selecting indicators. In C4D, this includes community groups and partners. Participatory numbers (see the working paper by Robert Chambers) are one option for generating quantitative measures in participatory ways.


Indicator selection should focus on the type of ‘summary’ information that can tell us whether or not the intervention is ‘on track’ in terms of its implementation and anticipated results. In the first instance, check whether appropriate indicators already exist rather than developing new ones. Existing indicators come with a track record: we can draw on others’ experience of their usefulness, and of the feasibility of collecting and interpreting the data on a regular basis, to decide whether an indicator suits our particular purposes, resources and context. Where the intervention content or implementation needs to be highly adaptive and/or the results cannot be fully defined in advance (such as in complex situations), different indicators may need to be selected at different times during the intervention period. The indicators should help to answer the ‘key learning questions’ posed at various times.


Indicators are concise, partial aggregates of information – the opposite of holistic, in-depth information. Indicators can ‘indicate’ areas that might need further, more in-depth investigation (e.g., negative and positive outliers, or a lack of change where change was expected). They should be used in combination with other, more holistic methods to understand situations deeply.


We usually think about indicators as being useful for reporting and accountability to managers and donors. Indicators should also be used to provide partners, community groups and others participating in the intervention with information about what was and was not achieved, and about the importance of the indicators for their community. When using indicator data in this way, it is important to acknowledge that the information is simplified and partial, and that other types of information are usually needed to make informed decisions about the intervention.


Indicators should specify the required data disaggregations (often this needs to include age, sex, income, levels of vulnerability, etc.). Local groups and institutions should be meaningfully involved in the process of developing and using indicators. This inclusion of local perspectives and attention to equity reduces the risk that indicators incentivise reaching only easier-to-reach populations in order to achieve targets.
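As an illustration of building disaggregation into an indicator from the start, the sketch below computes a simple ‘coverage’ indicator broken down by sex and age group. The record structure and field names are hypothetical, for illustration only:

```python
from collections import defaultdict

# Hypothetical survey records; field names are illustrative only.
records = [
    {"sex": "F", "age_group": "15-24", "reached": True},
    {"sex": "F", "age_group": "25+", "reached": False},
    {"sex": "M", "age_group": "15-24", "reached": True},
    {"sex": "M", "age_group": "15-24", "reached": False},
]

def disaggregated_coverage(records, keys=("sex", "age_group")):
    """Share of people reached, broken down by the given dimensions."""
    totals = defaultdict(lambda: [0, 0])  # group -> [reached, total]
    for r in records:
        group = tuple(r[k] for k in keys)
        totals[group][0] += int(r["reached"])
        totals[group][1] += 1
    return {g: reached / total for g, (reached, total) in totals.items()}

print(disaggregated_coverage(records))
# e.g. young women fully reached, young men only half reached
```

Reporting the disaggregated figures rather than a single aggregate makes inequitable reach visible, which is precisely what an equity-focused indicator set needs to surface.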

Important considerations in selecting indicators for C4D

Indicators can be useful when recognised for what they are: partial information that can provide alerts of things not going as planned and signs of important changes (or lack thereof) which may trigger further investigation. It is important to select an appropriate ‘set’ of indicators – usually consisting of different types (input, process/activity, output, outcome, impact) – which can be interpreted together to get a more complete picture of what has happened. It might be useful to undertake a ‘data rehearsal’, where primary intended users of indicator data are presented with different scenarios of data and asked to discuss how they could use these to inform their decisions – and to identify what changes need to be made to their content or presentation to make them more useful. It is recommended to do this as part of the process of selecting or developing indicators.

Collecting, analysing and interpreting longer-term results (outcomes and impacts) is often expensive and difficult to do well. As noted above, these results are also most likely due to a range of interventions, not just C4D. Hence, it is advisable to partner with others (such as those funding or implementing other interventions with similar goals) to ensure this information is collected – where appropriate – at regular intervals and with high quality.

It is also critically important that indicators are not only about results but also about the quality and quantity of implementation (e.g., making sure that a C4D intervention adheres to the principles of ‘participation’ or that implementation of the C4D strategy is done to the extent needed to expect results). 

Given that quality assurance has many elements, it is often hard to capture through just a handful of indicators. Rubrics may therefore be particularly useful to cover the different dimensions of what is considered ‘success’. Rubrics can complement indicators, incorporate indicators, or be used as an alternative to indicators (see below).

Characteristics of good indicators and good indicator sets: 

For most indicators, we are particularly interested in assessing changes over time (i.e., looking at trends in the indicator data) so it is crucially important to be able to collect, analyse and interpret the data regularly (the frequency will depend on the type of indicator) and with good quality. Indicator data that is of low quality can mislead decision making.

Developing a good indicator can be quite hard. One has to ensure, among other things, that:

  • the indicator is fully defined, so that those collecting, analysing, interpreting and using the indicator data know exactly what is being measured, how, with what frequency, etc.;
  • it actually measures what it intends to measure, or is a reasonable indicator of it (referred to as its ‘validity’);
  • data can be collected consistently by different people and at different times (referred to as its ‘reliability’);
  • it is affordable and feasible to collect the data regularly and with high quality.

The use measures and indicators page on the BetterEvaluation website has more information on the common characteristics of a good indicator or good indicator set, such as affordable, comparable, feasible, measurable, operational, reliable, sensitive and specific.

For these reasons, it is usually much better to use an existing indicator that ticks most, if not all, of the boxes of a good indicator and has been used in a real-life context by others. As noted above, a good ‘set’ of indicators reflects different dimensions of the intervention and the anticipated results along the pathway to ultimate outcomes or impact.

If you need to craft a new indicator, your written guidelines need to include:

  • Title (indicator label)
  • Purpose (rationale)
  • Method of measurement
  • Denominator (where relevant)
  • Data collection method
  • Data collection tools
  • Data collection frequency
  • Data disaggregation
  • Information to interpret and use the data

You will also need to pilot test data collection and revise the indicator where needed, and provide training for those collecting, managing, analysing and using the data. This may include data rehearsal as described above.
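One way to keep an indicator ‘fully defined’ is to capture the guideline fields listed above as a structured record, so that no element is forgotten before rollout. The sketch below is a minimal illustration; the example indicator and all its values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class IndicatorGuideline:
    # Fields mirror the written-guideline elements listed above.
    title: str
    purpose: str
    method_of_measurement: str
    data_collection_method: str
    data_collection_tools: str
    data_collection_frequency: str
    data_disaggregation: list
    interpretation_notes: str
    denominator: str = ""  # optional: only where relevant

# Hypothetical example for illustration only.
example = IndicatorGuideline(
    title="Share of caregivers recalling key radio-drama messages",
    purpose="Track whether the C4D broadcast is reaching and resonating",
    method_of_measurement="Correct recall of 2 of 3 key messages",
    data_collection_method="Household survey",
    data_collection_tools="Structured questionnaire",
    data_collection_frequency="Every 6 months",
    data_disaggregation=["sex", "age group", "income"],
    interpretation_notes="Interpret alongside broadcast reach data",
    denominator="All surveyed caregivers in broadcast area",
)
print(example.title)
```

A template like this also makes it easy to review a whole indicator set at once, e.g. to check that every indicator specifies its disaggregations and collection frequency.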

Rubrics: a complementary or alternative way of capturing key information

Different stakeholder groups often have different views on:

  • ‘what is important’ in terms of what the intervention provides, how it is done and what the results are intended to be;

  • ‘how well’ the program is performing on the things that matter.

This is especially the case for interventions that are complex in nature or operate in a complex environment. Defining ‘success’ needs to go beyond just selecting a handful of results indicators.

Rubrics can be used to assess and judge performance along various dimensions. A rubric has two core aspects:

(1) evaluative criteria that define ‘what is important’ in terms of what the intervention provides, how it is done and what the results are intended to be; and,

(2) descriptions of levels of performance in terms of what constitutes ‘excellent’, ‘very good’, ‘good’, ‘adequate’, or ‘poor’ performance.

Rubrics can incorporate qualitative and quantitative indicators (including important ones that are already in use) and other types of evidence (including emergent) plus specific guidance about the synthesis of this evidence (such as hurdle requirements or benchmarks).
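As a minimal sketch of these two aspects – evaluative criteria, performance-level descriptions, and a simple synthesis rule with a hurdle requirement – consider the following. All criteria, descriptions and the synthesis rule are hypothetical illustrations, not a recommended rubric:

```python
# Performance levels, ordered from worst to best.
LEVELS = ["poor", "adequate", "good", "very good", "excellent"]

# Evaluative criteria with (abridged) level descriptions.
rubric = {
    "participation": {
        "poor": "Community not consulted",
        "adequate": "Community consulted on pre-set questions",
        "excellent": "Community co-designed and co-analysed",
    },
    "reach": {
        "poor": "Under 25% of intended audience reached",
        "adequate": "25-75% of intended audience reached",
        "excellent": "Over 75% reached, incl. marginalised groups",
    },
}

def synthesise(ratings, hurdle=("participation", "adequate")):
    """Overall rating = lowest-scoring criterion, but fail outright
    if the hurdle criterion falls below its required level."""
    criterion, required = hurdle
    if LEVELS.index(ratings[criterion]) < LEVELS.index(required):
        return "poor"  # hurdle requirement not met
    return min(ratings.values(), key=LEVELS.index)

print(synthesise({"participation": "excellent", "reach": "adequate"}))
```

The hurdle rule illustrates the kind of explicit synthesis guidance mentioned above: strong reach cannot compensate for inadequate participation.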

The page on rubrics on the BetterEvaluation website provides further information, resources and examples of rubrics.

Recommended steps for selecting and using indicators for C4D

In collaboration with key stakeholders (at minimum, primary intended users of the data, which usually includes partners and community groups):

  • Use the intervention’s theory of change (see Develop program theory or logic model) to identify key questions about the C4D components of implementation and their anticipated contribution to expected results. Clarify which of these key C4D questions might be answered (partially or in full) by using indicators.
  • Select – from existing indicator sources – different types of indicators (inputs, activities/processes, outputs, outcomes, impacts) at different levels of the system where relevant (such as individual, community, society) to obtain a ‘set’ of indicators that matches the identified information needs. (A C4D Registry of Indicators is under development.)
  • Critically reflect on the gaps and assumptions, and consider how well the available indicators reflect local perspectives, realities and priorities. 
  • Where needed, develop new indicators (ideally, only if existing good indicators do not serve your information needs) using a collaborative process for indicator development. Consider the common standards for good indicators. Then, pilot-test them and revise them as needed before rolling them out for use.
  • As part of rolling them out for use, make sure they are fully defined and described (indicator guidelines) and train people in how to collect the data, how to store and manage the data, and how to interpret and use the data.
  • Periodically re-assess the utility of the indicator and continue using it (as is), stop using it, or revise it (you need to weigh up the pros and cons of a disruption in trend data before you stop using or revise the indicator).


The Monitoring and Evaluation for Participatory Theatre for Change resource outlines suggested indicators (pages 21-27), which are tied to the theory of change (pages 11-14), and includes methods to collect the information. See Table 2, page 17, for a sample of indicators with timing and methods. Although it was developed for participatory theatre, the 'Reach, Resonance and Response' framing could be adapted to a range of C4D initiatives. A summary and review of this resource is also available. This resource is consistent with the C4D Evaluation Framework in relation to this task in the following ways:

  • Complex: the indicators relate directly to the six different, interconnected theories in the Theory of Change.
  • Realistic: the 'Reach, Resonance and Response' framing provides a powerful yet manageable way to think through groups of indicators. The tools suggested to collect the information are as simple as possible while still achieving rigour and sensitivity. The resource also requires creating a plan for the timing of data collection.
  • Holistic: the guide makes specific reference to the importance of thinking about timing, especially for longer-term changes, which should not be measured immediately after implementation.

Participatory Numbers (or parti-numbers)

Indicators tend to require quantitative data. 'Participatory numbers' refers to a collection of methods that involve communities in the process of generating statistically valid and reliable quantitative data. Some of the strategies include: mapping, modelling, pile sorting, pie diagrams, card writing and sorting, matrix ranking and scoring, and linkage diagramming. With planning and testing, these methods could be used to inform and define indicators in C4D, with repeated cycles of data collection to assess trends and changes. For more information, see Robert Chambers' 2007 working paper. See also Who Counts? The power of participatory statistics, a book edited by Jeremy Holland with chapters on the use of different methods from contexts around the world.
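As a sketch of how repeated cycles of participatory scoring could feed an indicator and reveal trends, consider matrix scoring repeated annually. The issues, scores and cycle structure below are entirely hypothetical:

```python
# Hypothetical matrix-scoring results: in each annual cycle, several
# community groups place up to 10 counters against each issue.
cycles = {
    "2021": {"water access": [7, 8, 6], "girls' schooling": [4, 5, 3]},
    "2022": {"water access": [8, 9, 8], "girls' schooling": [6, 6, 5]},
}

def indicator_trend(cycles, issue):
    """Mean score per cycle for one issue, to track change over time."""
    return {
        year: sum(scores[issue]) / len(scores[issue])
        for year, scores in sorted(cycles.items())
    }

print(indicator_trend(cycles, "water access"))
```

Because the scoring method and counters stay the same across cycles, the means are comparable over time, which is what turns a participatory exercise into usable indicator data.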

Participatory Rural Communication Appraisal (Chapters 5 & 6)

The Participatory Rural Communication Appraisal Handbook (a summary of this resource is also available) provides guidance on how to plan and undertake a baseline study, building on the situation analysis framework (used in a similar way to a program theory) to develop a questionnaire or survey design, including pre-testing and sharing results with the community. The resource is consistent with the C4D Evaluation Framework in the following ways:

  • Participatory: PRCA allows rural people to participate in everything from information collection and analysis, through problem identification and prioritisation, to decision-making about how best to tackle the issues revealed.
  • Critical: PRCA brings attention to the common biases that can distort study findings.
  • Complex: the baseline process encouraged through PRCA makes strong reference to the understandings of underlying causes and contextual factors developed through the situation analysis.
  • Learning-based: PRCA emphasises information sharing, including of the findings from the baseline study.


Measuring Empowerment? Ask Them – Quantifying qualitative outcomes from people's own analysis, by Jupp, D. & Ali, S. I. (with contributions from C. Barahona). Although it differs from the usual process of selecting indicators and the advice above, this is a strong example of a participatory process to develop and measure progress against indicators (in this case, of community empowerment). It is consistent with the C4D Evaluation Framework in the following ways:

  • Complex: the program's intended outcomes included difficult-to-measure things such as empowerment, realisation of rights and good governance. The mix of qualitative self-assessments and quantified indicators rooted in the local context makes this a strong example of evaluating complex initiatives.
  • Participatory: the program used participatory techniques to arrive at quantified indicators of progress towards empowerment. The process begins with PRA, drama, story-telling, songs, picture-making, conversations and debate. The descriptive statements generated are then clustered. Every year the participants meet to review each indicator and mark it with a happy face or sad face.
  • Holistic: the indicators are generated by participants using rich, participatory techniques to define what success looks like in that context.
  • Critical: the participatory techniques were cleverly used to ensure that women's and men's perspectives were given space to be included, such as through different dramas.
  • Accountable: for the most part this process was part of a community learning process. An additional process was undertaken by external evaluators to meet the needs of results-based management; it used the community monitoring data (with the community's permission).
