We’re delighted to be participating in this week’s conference - Impact, Innovation and Learning: Towards a Research and Practice Agenda for the Future - being held in conjunction with the launch of the Centre for Development Impact (CDI), a partnership between the Institute of Development Studies and ITAD.
Many evaluations use a theory of change approach, which identifies how activities are understood to contribute to a series of outcomes and impacts. These can help guide data collection, analysis and reporting. But what if the theory of change has gaps, leaves out important things, or is just plain wrong?
In Aotearoa New Zealand the use of rubrics has been adopted across a number of institutions to help ensure there is transparent and clear assessment which respects and includes diverse lines of evidence in evaluation. This case, written as part of the BetterEvaluation writeshop process, discusses how the use of rubrics was helpful throughout all stages of an evaluation of the First-time Principals’ Induction Programme.
The term "rubric" is often used in education to refer to a systematic way of setting out the expectations for students in terms of what would constitute poor, good and excellent performance.
There is increasing recognition that a theory of change can be useful when planning an evaluation. A theory of change is an explanation of how activities are understood to contribute to a series of outcomes and impacts. It might be called a program theory, an intervention logic, an outcomes hierarchy, or something else. It is usually represented in a diagram called a logic model, which can take various forms.
[Blog post updated and extended 4 March 2013]
There is increasing discussion about the potential relevance of ideas and methods for addressing complexity in evaluation. But what does this mean? And is it the same as addressing complication?
We're excited to be joining evaluators from across the world, and particularly South Asia, for the Evaluation Conclave this week. The theme of the conference is "Evaluation for development" - and sessions will look at ways of going from evaluation of development to evaluation that actively contributes to development through its findings and processes.
It is neither relevant nor useful to only criticise randomised controlled trials (RCTs) or to treat them as the only choice for rigorous impact evaluation (IE). We need to look for other approaches and methods that can contribute to causal inference and systematically link observed effects to causes, as well as extend what we mean by rigorous IE.
Most of the work done in development is done in collaboration, in partnership with individuals or organizations who contribute to a particular task or project we are working on. These collaborations are sometimes very straightforward, but sometimes they are quite complex, involving many links and relationships.
With that in mind, I would like to share an approach I am working on, Social Network Analysis (SNA). We are using SNA to study research networks, their characteristics, and how these networks contribute to better research outcomes.
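To give a flavour of what SNA involves, here is a minimal, dependency-free sketch of one common SNA measure, degree centrality (the share of the network a member is directly connected to). The researcher names and collaboration ties are invented for illustration; a real analysis would typically use a dedicated library such as networkx and richer relational data.

```python
from collections import defaultdict

# Hypothetical undirected collaboration ties (each pair worked together)
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]

# Build an adjacency list: who is connected to whom
neighbours = defaultdict(set)
for u, v in edges:
    neighbours[u].add(v)
    neighbours[v].add(u)

# Degree centrality: a node's ties, normalised by the maximum possible (n - 1)
n = len(neighbours)
centrality = {node: len(adj) / (n - 1) for node, adj in neighbours.items()}

print(max(centrality, key=centrality.get))  # → "A", the best-connected member
```

Even this simple measure can highlight which members hold a network together; other SNA measures (betweenness, density, clustering) build on the same adjacency structure.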