What is specific about evaluating research?

The International Development Research Centre (IDRC) in Canada primarily funds and facilitates global South-based research for development (R4D). Its mandate is: “To initiate, encourage, support, and conduct research into the problems of the developing regions of the world and into the means for applying and adapting scientific, technical, and other knowledge to the economic and social advancement of those regions.”

Evaluating research for development (R4D) has several features that distinguish it from evaluating other international development interventions. It also differs from evaluating other areas of research. These differences are described below, along with tools for evaluating R4D programming that may be relevant to the evaluation you are commissioning.

Long, non-linear results chains: Consider three different projects that all aim to improve health. Building a hospital has a clear link to improved health; training health care workers also has a plausible connection to improved health outcomes, though there are more links in the causal chain between the training and improved health; and, finally, a research project that studies food consumption and nutrition in children has the ultimate aim of improving health, but many intermediary links stand between that research and any difference in the health of the children. Evaluating research for development starts with deciding which results in that long causal chain you want to evaluate, and then assessing the contribution of the research to the outcomes sought. To evaluate the results of R4D programming, one must accept that results pathways are typically non-linear, context is crucial, and complexity is the norm.
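
To illustrate the idea of a long causal chain, here is a minimal sketch in Python using the nutrition study above. The specific links are invented for illustration; a real evaluation would derive them from the program's own theory of change.

```python
# Illustrative sketch: an R4D results chain as an ordered list of
# hypothetical links. These link names are invented for illustration,
# not drawn from an actual IDRC program.
results_chain = [
    "study of food consumption and nutrition in children",  # research output
    "findings shared with health ministry and practitioners",  # dissemination
    "ministry revises child nutrition guidelines",  # policy influence
    "clinics and schools adopt the revised guidelines",  # practice change
    "children's nutrition and health improve",  # development outcome
]

# Evaluating R4D means choosing which link(s) in the chain to assess and
# asking what the research plausibly contributed at that point, rather than
# assuming a straight line from the first link to the last.
for step, link in enumerate(results_chain, start=1):
    print(f"{step}. {link}")
```
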

The types of outcomes of R4D also differ from those of other development interventions. They can include, for example, increased capacity of the individuals, organizations and networks doing and using the research. Outcome evaluations might focus on the influence of research on technological development, innovation, or changes in policy and practice, and they may also cover efforts to scale up the influence of research. The following resources can help evaluate R4D outcomes:

  • The Knowledge Translation Toolkit: Bridging the Know–Do Gap – A Resource for Researchers gives an overview of what knowledge translation entails and how to use it effectively to bridge the “know–do” gap between research, policy, practice, and people. It describes underlying theories and provides strategies and tools to encourage and enable evidence-informed decision-making.
  • Evaluating policy influence is the subject of Fred Carden’s freely available book Knowledge to Policy: Making the Most of Development Research. The book starts from a sophisticated understanding of how research influences public policy and decision-making. It shows how research can contribute to better governance in at least three ways: by encouraging open inquiry and debate, by empowering people with the knowledge to hold governments accountable, and by enlarging the array of policy options and solutions available to the policy process.
  • The Overseas Development Institute has several useful guides for evaluating policy influence, including the RAPID Outcome Mapping Approach (ROMA) and shorter guides such as Monitoring and evaluation of policy influence and advocacy.
  • Tools to evaluate capacity development include the framework used in IDRC’s capacity development evaluation for individuals, organizations and networks and the European Centre for Development Policy Management (ECDPM) 5C framework.
  • In the Monitoring and Evaluation Strategy Brief, Douthwaite and colleagues give an overview of the monitoring and evaluation (M&E) system of the CGIAR Research Program on Aquatic Agricultural Systems (AAS) and describe how the M&E system is designed to support the program in achieving its goals. The brief covers: (1) the objectives of the AAS M&E system in keeping with the key program elements; (2) the theory drawn upon to design the M&E system; and, (3) the M&E system components.
  • CIRAD’s Impress project explores the impacts of international agricultural research, describing the methodology used and several case studies.

Finally, evaluating research for development also differs from evaluating academic research. Typically, academic research evaluation is done through deliberative means (such as peer review) and analytics (such as bibliometrics). IDRC uses a holistic approach that treats scientific merit as a necessary but insufficient condition for judging research quality, and that recognizes the role of multiple stakeholders and potential users in determining the effectiveness of research (in terms of its relevance, use, and impact). IDRC developed the Research Quality Plus (RQ+) Assessment Framework, which consists of three components:

  1. Key influences (enabling or constraining factors) either within the research endeavor or in the external environment including: (a) maturity of the research field; (b) intention to strengthen research capacity; (c) risk in the research environment; (d) risk in the political environment; and, (e) risk in the data environment.
  2. Research quality dimensions and sub-dimensions which are closely inter-related including: (a) scientific integrity; (b) research legitimacy; (c) importance; and, (d) positioning for use.
  3. Customizable assessment rubrics (or 'evaluative rubrics') that make use of both qualitative and quantitative measures to characterize each key influence and to judge the performance of the research study on the various quality dimensions and sub-dimensions (a minimal illustrative sketch follows this list).
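
To make the rubric component concrete, here is a minimal sketch in Python of how an RQ+-style assessment record might be structured. The dimension names and key influences come from the framework described above, but the 1–8 scale, the example scores and rationales, and the averaging rule are illustrative assumptions for this sketch, not IDRC's published rubrics.

```python
from dataclasses import dataclass

# Illustrative sketch only: dimension and influence names follow the RQ+
# components above; the scale, scores, and aggregation are hypothetical.

@dataclass
class Rubric:
    dimension: str
    score: int       # assumed scale: 1 (unacceptable) .. 8 (very good)
    rationale: str   # qualitative evidence backing the score

@dataclass
class RQPlusAssessment:
    key_influences: dict  # contextual factors characterized qualitatively
    rubrics: list         # one Rubric per quality dimension

    def average_score(self) -> float:
        # Simple unweighted mean across dimensions, for illustration only.
        return sum(r.score for r in self.rubrics) / len(self.rubrics)

assessment = RQPlusAssessment(
    key_influences={
        "maturity of the research field": "emerging",
        "risk in the data environment": "high",
    },
    rubrics=[
        Rubric("scientific integrity", 6, "methods peer-reviewed; minor gaps"),
        Rubric("research legitimacy", 7, "local stakeholders engaged throughout"),
        Rubric("importance", 5, "addresses a recognized regional priority"),
        Rubric("positioning for use", 4, "dissemination plan still undeveloped"),
    ],
)
print(f"Mean dimension score: {assessment.average_score():.1f}")
```

In practice, an evaluator would interpret each dimension's score alongside its qualitative rationale and the key influences, rather than reducing the assessment to a single number.
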

The work of IDRC contributes to the wider ongoing debate about how to evaluate research quality and acknowledges the valuable approaches of other organizations working in this area (see, for example, the resources below). IDRC invites research funders and researchers to treat the RQ+ framework as a dynamic tool for adaptation to their specific purposes.

Further Information & Resources

See also: http://www.researchtoaction.org/2013/08/altmetrics-and-the-global-south-increasing-research-visibility/ which covers Altmetrics and ImpactStory (services that track the impact of written work by analysing the uptake of research within social media) and the Scholarly Communication in Africa Programme (SCAP), an initiative seeking to increase the visibility and developmental impact of research outputs from universities in Southern Africa.
