The term "rubric" is often used in education to refer to a systematic way of setting out the expectations for students in terms of what would constitute poor, good and excellent performance.
There is increasing recognition that a theory of change can be useful when planning an evaluation. A theory of change is an explanation of how activities are understood to contribute to a series of outcomes and impacts. It might be called a program theory, an intervention logic, an outcomes hierarchy, or something else. It is usually represented in a diagram called a logic model, which can take various forms.
[Blog post updated and extended 4 March 2013]
There is increasing discussion about the potential relevance of ideas and methods for addressing complexity in evaluation. But what does this mean? And is it the same as addressing complication?
We're excited to be joining evaluators from across the world, and particularly South Asia, for the Evaluation Conclave this week. The theme of the conference is "Evaluation for development" - and sessions will look at ways of going from evaluation of development to evaluation that actively contributes to development through its findings and processes.
It is neither fair nor useful to simply criticise randomised controlled trials (RCTs), or to treat them as the only option for rigorous impact evaluation (IE). We need to look for other approaches and methods that can contribute to causal inference and systematically link observed effects to causes, as well as extend what we mean by rigorous IE.
Most of the work done in development is done in collaboration, in partnership with individuals or organizations who contribute to a particular task or project we are working on. These collaborations are sometimes very straightforward, but sometimes they are quite complex and involve many links and relationships.
With that in mind, I would like to share an approach I am working on: Social Network Analysis (SNA). We are using SNA to study research networks, their characteristics, and how a network contributes to better research outcomes.
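To make the idea concrete, here is a minimal sketch of one basic SNA measure, degree centrality, computed by hand over an invented collaboration network. The names and ties are hypothetical, purely for illustration; a real analysis would typically use dedicated SNA software or a library such as networkx.

```python
from collections import defaultdict

# Hypothetical collaboration ties between researchers (invented data)
edges = [
    ("Asha", "Ben"), ("Asha", "Carla"), ("Asha", "Deniz"),
    ("Ben", "Carla"), ("Deniz", "Elif"),
]

# Build an undirected adjacency list from the edge list
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree centrality: a node's number of ties divided by the
# maximum possible number of ties (n - 1 other nodes)
n = len(adjacency)
centrality = {node: len(neighbours) / (n - 1)
              for node, neighbours in adjacency.items()}

for node, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(f"{node}: {score:.2f}")
```

In this toy network, "Asha" has the highest degree centrality (ties to three of the four other researchers), which in a real study might flag her as a broker or hub worth examining for how she shapes research outcomes.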
Across the world evaluation associations provide a supportive community of practice for evaluators, evaluation managers and those who do evaluation as part of their service delivery or management job.
There are many decisions to be made in an evaluation: its purpose and scope; the key evaluation questions; how different values will be negotiated; what research design and methods to use for data collection and analysis; how information will be shared; and what recommendations should be developed, and how.
As part of developing the BetterEvaluation site, we ran an "Evaluation Challenge" process, inviting people to submit their biggest challenges in evaluation, and then inviting experts to suggest ways to address these.
This week we present the first challenge, one that is frequently heard from people when they first start learning about the field of evaluation: