User feedback on the difference between evaluation and research


This page contains thoughts from the BetterEvaluation community provided in response to the blog post on Ways of framing the difference between research and evaluation.

There is a follow-up blog on this topic here.


1. Research and evaluation as a dichotomy

Comments from the BetterEvaluation community:

  • Like MQP [Michael Quinn Patton], I find it useful to distinguish evaluation from research when talking to clients, to reassure them that an evaluation won't be overly academic/hard to understand or driven by the agenda of someone trying to get published.
  • While all four models are viable, I think seeing research and evaluation as a dichotomy is the most useful and most common.
  • It’s just another way of talking about applied research (evaluation) and theoretical research (your concept of research).
  • Research and evaluation should ideally inform each other.
  • Valuing each [research and evaluation] in its own right
  • Since we are talking about the difference, dichotomy is best to describe it. We are not talking about similarities but differences. I support dichotomy since it fits my own line of argument: research addresses hypothetical questions meant to prove a speculation/idea/phenomenon right or wrong. If the hypothesis is proved right it is accepted; otherwise it is rejected. An evaluation, on the other hand, is based on a set of questions centered on indicators, which are meant to improve what is being done or what has been done.
  • Dichotomy with some overlap, for example in the cases of impact evaluations that test hypotheses and yield findings that contribute to theory and can be generalised.
  • Evaluation is also a management tool to inform the design process for new or existing programs in a practical way. This is not always the case for research. Yes, as an evaluator I do use research tools, but this is far from saying that research is a subset of evaluation. Both are needed by NGOs (my client base) in different ways.
  • While there are distinctions between research and evaluation, I don’t think it’s very useful to consider them as a strict dichotomy since they’re clearly inter-connected in many ways. For example, evaluation often requires research work in order to understand people’s beliefs, values, culture and everyday lives, as well as to gather basic data such as demographic information.
  • From the perspective of my practice as an evaluation consultant within a large private sector research firm, my experience is actually more like d) Research as a subset of evaluation. This is because when compared with my colleagues, my practice involves all the same social research skills and expertise that they work with, but also involves additional skills and expertise that they do not use for standard research projects. So my practice is broader, and my research expertise is a subset of the broader expertise that I bring to a project.
    However, I am aware that the practice of research consultants does not reflect the full spectrum of research activities - particularly those of determining the research questions to be investigated. I also feel that research consultants (and their commissioners) tend to skirt the issues of the theoretical underpinnings of knowledge creation, which would not happen in the academic sector. Research consultants are essentially data collection and interpretation experts.
    So I feel that data collection and interpretation are a subset of evaluation (answer d) however they are also a subset of the entire social research activity. This is why I chose answer a. Because evaluation CAN involve research (e.g. collection of empirical evidence of household income), and research CAN involve evaluation (e.g. applying industry standards to judge whether measured quality is high enough). But they don't have to, as indicated in the blog.


2. Research and evaluation as mutually independent

Comments from the BetterEvaluation community:

  • It is about a difference in methodology. The more research we have in evaluation the better; research is proud of its methodology, but life calls for decisions even if they cannot fully satisfy what are called scientific standards.
  • When talking with evaluators, I frame them as mutually independent because I think that's the reality (but it's not a useful way to explain things to clients).
  • I often talk about how any one project can have some questions that are more like research questions (trying to understand the nature of something, for example) and other questions that are clearly evaluative (i.e. asking about quality or value). And some questions might be both. In other words, I spend less time trying to label the entire project as clearly one thing or another, and more time focusing on what kinds of questions we are trying to answer. Why? Because evaluative questions need evaluation-specific methodologies whereas non-evaluative research questions don't. And that matters more than what we call the whole project.
  • In my view, data collection and analysis can and should be based on empirical methods where this is practical. There are however many cases where time and resources do not permit a 'scientific' approach that would be recognisable to academics as research.
  • I work for IDS on the DFID Accountable Grant and believe that evaluation and research are mutually independent - but rely on researchers and evaluators working together to produce valuable developments in programme theory. In some instances, the research performs something that is akin to an ex-ante evaluation design, which then enables an evaluation design to be more fit for purpose. I am unclear about the debate around the end-users of the research and evaluation being different. If we look at democratic evaluation, then the evaluation should be accessible to - and inform - the wider public debate. Is this not the case too for research?
  • Because the methods used to collect data in both evaluation and research are so similar, I think we HAVE to regard them as mutually independent (overlapping). But there are also differences that set them apart - purpose, conceptualisation, peer review, funding sources and processes, involvement of users of the end knowledge, publication imperatives etc.
  • Some research is evaluative, some evaluation requires research. This makes sense to me. Unresearched evaluation often has validity...although you might need to do some research to prove it. :-/


3. Evaluation as a subset of research

Comments from the BetterEvaluation community:

  • [Evaluation is a subset of research] because ... there are so many extra steps and considerations when doing evaluation.
  • Good evaluation is research, but evaluation does not need the level of rigor involved in research. Some research is so esoteric that, while something is learned, it is not evaluation.
  • They are both part of the policy/programme ROAMEF cycle (Rationale - Options - Appraisal - Monitoring - Evaluation - Feedback). I would see Research as more general than Evaluation as it is about having an evidence base for your policies. Your evidence base can come from evaluations, or from elsewhere (e.g. academic studies).
  • All evaluations use research methodologies, so evaluation should be considered a subset of research. Operationally, I would think evaluations are programme-specific while research is done to develop a knowledge base about a specific subject.
  • In my opinion evaluation is a subset of research, largely built on a foundation of social science research methods. The field has taken on its own identity and transformed itself into a unique field of research tied to more practical applications to programs and services. The whole industry of evaluation, and the pressures and accountability structures tied to programs/services, have also played a role in its growth as a unique and sometimes lucrative field in and of itself. This has created a whole set of issues and opinions on the subject.
  • I agree with Endias' observation that doing research does not necessarily require doing evaluation while doing evaluation always requires doing research. [See original blog post for Endias' observation]
  • We use research methods in evaluation, such as social science methods.
  • Evaluation is a type of research but they serve different purposes. Evaluation uses research methodologies - and a good evaluation uses rigorous research methodologies.
  • I also don’t think it’s very useful to see research or evaluation as a sub-set of the other since this tends to suggest that one is more important than the other.
  • I tend to use the words "evaluation", "evaluation research" and "research" interchangeably, because in my experience many evaluators and evaluation users tend to use those words interchangeably. If I want to explain how evaluation is different from other types of research that people may be familiar with, I would say that evaluation research is different from purely theoretical research because the purpose of evaluation is to produce information that is useful for improving whatever is being evaluated.
  • The Endias explanation is spot on. [See original blog post for Endias' observation]
  • Evaluation is generally about human actions so is a subset of research, which can go as broad as you like.
  • I believe evaluation to be a type of applied research. It uses research tools in an applied manner to make evaluative judgements about quality and value.
  • I would see evaluation's systematic approach to data gathering as being a type of research. That is why the basic academic qualification for evaluation is a research degree, the higher the better. However, the difference lies in the purpose of the activity. Research could be for learning, for corrective action, for strategic action, for knowledge, or for the mere sake of it. Evaluations may be for corrective action and for learning, thus a subset of research.


4. Research as a subset of evaluation

Comments from the BetterEvaluation community:

  • I also don’t think it’s very useful to see research or evaluation as a sub-set of the other since this tends to suggest that one is more important than the other (as in the comment from Patricia’s colleagues about evaluation).

5. Other: Dependent on context

Comments from the BetterEvaluation community:

  • None of the above, because you are missing a crucial piece - monitoring. For me, a specialist in rights-based participatory Social & Behavior Change Communication (SBCC), research, monitoring & evaluation (RM&E) form a continuous, participatory process. It begins with asking questions around a particular topic of interest, then uses the results of that research to engage stakeholders in dialogical processes that winnow out solutions to whatever issue is being addressed. A participatory monitoring system/mechanism is then implemented that allows stakeholders and programmers to jointly follow the path of the intervention/project/program, followed at the end by an evaluation that enables all players to learn whether or not they achieved their objectives on all levels (process/outcome/impact) - and if so, how; and if not, why not. Rinse and repeat. None of these three - RM&E - is independent of the others; they are all interdependent, and each plays an integral role in rights-based programming, no matter what the topical issue at hand is.
  • How we distinguish between research and evaluation obviously depends on the perspective and approach we take. My particular field is action research which includes an evaluative component (critical reflection). In both action research and evaluation the key questions are usually generated by the stakeholders involved in a project or initiative. Participatory forms of research and evaluation are both learning processes that can lead to new knowledge and mutual understanding. This suggests that when we look at research and evaluation from an action research perspective, many of the dichotomies that Patton sets up break down.
    So from my perspective, research is undertaken to better understand something, including the context in which people live, and to create new awareness and knowledge that can lead to action focused on social change and improvement. Evaluation is undertaken to understand and assess the value that people place on a program or initiative in order to improve things like its effectiveness and sustainability and to better reach its aims.
  • In my field (epidemiology) we have generally described evaluation as a subset of research. However, in a broader scope of fields, for example education, I can see how the mutually independent bubbles would be more applicable.
  • As a consultant I found myself having explicit or implicit debates about the differences between research and evaluation.
    I find that all your categorisations apply, depending on what type of a debate I have with my team or with a client. There are issues to consider about the purpose, methodology, required robustness of the data and analysis etc. I also ticked the 'other' box as finding a budget for filling information gaps seems to be one of the most powerful driving factors in these debates.
  • I agree with all the descriptions above, and I would combine the last two choices. I don't see either as a subset of the other. Rather, I see evaluation as an addition - or, forgive the pun, a field that provides added value to studying the world. Someone can study to be a researcher, and then remain a researcher. To be an evaluator, I would argue you need to first be trained as a researcher, and then specialise in evaluation. While an evaluation is judged by different criteria, part of those criteria concerns your data quality, so it draws on research criteria as well. OK, so maybe research is an element of evaluation, but for some reason categorising it as a subset minimizes its important role.
  • This was a difficult choice to make, because I am of the view that the distinction (if you believe there is one) or the similarity between these two demands more than a simple unilateral view. And therein lies the grey area in our field of work. Perhaps one of the options on this list should have been… that they are mutually reinforcing activities. I've generally treated evaluation as the "baseline" or starting point of research and, ultimately, its end point: we evaluate so we can refine our research question by narrowing its focus and its relevance in advancing knowledge, building towards fewer confounders and study limitations. Once conducted, research should then inform policy and program change, which ultimately must also be evaluated. In this way, evaluation can be viewed as pushing research outside its comfort zone - the academic domain - and into the lived experiences of program participants and communities (practice).
  • I believe that there is no definitive answer to this question as I believe that any one of these options may be accurate in a particular context.
  • In his book Understanding Social Research, the South African methodologist Johann Mouton (1996) makes a valuable distinction between what he calls the three (knowledge) worlds: the world of everyday life (in which lay knowledge is applied, clearly with a 'pragmatic' interest), the world of science (where phenomena of World 1 are taken as 'objects' of inquiry to produce scientific knowledge, an act with an 'epistemic' interest), and the world of meta-scientific reflection about World 2 (producing critiques, deconstruction, analyses - in short, meta-scientific knowledge on research methodology). Furthermore, in the acclaimed handbook The Practice of Social Research, Mouton's (2001) adaptation of Earl Babbie's classic guide of the same title (1998), he reserves a chapter under the section Types of Research Design for Evaluation Research, clearly opting to define evaluation as a subset of research (Option 3). The topic of this chapter is programme evaluation, defined according to Leonard Rutman (1984): "programme evaluation entails the use of scientific methods to measure the implementation and outcomes of programs for decision-making purposes"; and Rossi & Freeman (1982): "the systematic application of social research procedures for assessing the conceptualization, design, implementation, and utility of social intervention programmes". These descriptions may be turning the preferred option around to seeing research as a subset of evaluation (Option 4).

    The ensuing uncertainty about how to view the relationship between evaluation and research furthermore requires clarity on what we have in mind with the term Research. Following Richard S. Rudner (1966), research is the application of scientific method and scientific techniques according to a logic of validation to produce scientific knowledge, presented in a corpus of statements about the aspect of the universe in question. This raises the issue of the so-called process-product ambiguity in science (Black, 1952; Rudner, 1966). Science or research as a product has contributed pragmatic knowledge (World 1) to enable human actors to perform activities within the everyday world. This has been done, inter alia, through research as process (as expounded in World 2), informed by the reflective mode of thinking in World 3, which, among other things, critically refines the logic of validation as a necessary element of scientific analysis.

    Having attempted to relate the various stances in knowledge production according to the few distinctions above, Evaluation may be regarded similarly as process and product. As a product, I suggest that we call it Evaluation Research, functioning in the framework outlined above very much as a form of scientific research. In this sense it may be called a research design, as Babbie & Mouton (2001) do. However, should we regard it as a process, scientific research may be seen as a particular mode of evaluation, which opens up the possibility of conducting evaluations in non-scientific modes as well. Only in so far as evaluation claims to be a scientific endeavour should this last option be regarded as invalid.

    Therefore, my suggestion is that within a scientific frame of reference, Options 2, 3 and 4 may be used concurrently as ways of distinguishing research and evaluation. They can only be viewed as a dichotomy in so far as evaluation has been discarded as a scientific endeavour, which may be a conclusion reached in terms of World 3 reflections.


    Earl Babbie (1998), The Practice of Social Research. Belmont, CA: Wadsworth Publishing Company.
    Earl Babbie & Johann Mouton (2001), The Practice of Social Research. South African Edition. Cape Town: Oxford University Press.
    Max Black (1952), Critical Thinking. Second Edition. Englewood Cliffs, N.J.: Prentice-Hall, Inc.
    Johann Mouton (1996), Understanding Social Research. Pretoria: JL van Schaik Academic.
    P.H. Rossi & H.E. Freeman (1982), Evaluation: A Systematic Approach. Beverly Hills, CA: Sage Publications.
    Richard S. Rudner (1966), Philosophy of Social Science. Englewood Cliffs, N.J.: Prentice-Hall, Inc.
    Leonard Rutman (Ed.) (1984), Evaluation Research Methods: A Basic Guide. Second Edition. London: Sage Publications.

Thank you to the BetterEvaluation members who supplied us with their ideas on this debate. We have not included individual names for privacy reasons.