Beyond the evaluation box – Social innovation with Ingrid Burkett


This blog is the sixth in our series about un-boxing evaluation – the theme of aes19 in Sydney. The series is designed to generate a global discussion of the theme ‘un-boxing evaluation’ and what it means for our profession and practice.

Associate Professor Ingrid Burkett, Co-Director of the Yunus Social Business Centre at Griffith University, wants evaluators to get beyond the evaluation box and work with other disciplines. She is a social designer, designing processes, products and knowledge that deepen social impact and facilitate social innovation. Ingrid Burkett spoke to Rae Fry.


Rae Fry: What are the boxes you’d like to see evaluators going beyond?

Ingrid Burkett: The biggest box for me is getting out of the project and program boxes and looking at how we can evaluate across really complex systems. I know lots of evaluators are already exploring that systems space. But also, if we stay in the project or program box but take a systems perspective, how can evaluators do simultaneous evaluation from sites across a system, so we can look at the intersection of different projects and how they are all contributing as a whole to changes in communities, or changes in relation to various missions?

I’ve been doing a lot of work in the space of mission-driven innovation — addressing major missions like the Sustainable Development Goals — and I see enormous possibilities for evaluators in that space. At the moment, everybody’s obsessed with measurement, but they’re not seeing the bigger picture of what evaluation could offer. What we’re doing is just making up all of these extraordinary measures and indicators that really are not telling us anything about: Are we doing the right things? Are we doing things right? And are we really making a difference across systems? That, to me, is evaluation. That’s the big-picture “box”.

Rae Fry: Why do you think evaluators can make a contribution there?

Ingrid Burkett: I think evaluators have a more integrated approach to process, activity and outcomes and the connection between them. I look at the people who are doing the big data collection, and what they’re doing is just understanding the outputs, and not really looking at the relationships between any of the outputs — so what we’re getting is a one-dimensional picture of what’s happening. Evaluators have a lot more capacity to think holistically and look at the interactions between things, and also the gaps. I see evaluators as detectives.

So that’s one box. The other box I’m really interested in is what would be called “boring” areas like procurement and investment. For example, in procurement, instead of just asking for contracts that can deliver the next road, we’re asking contractors to think more broadly and strategically about delivering a road, plus economic benefits for communities, plus employment outcomes for Aboriginal and Torres Strait Islander people, plus training opportunities for people who’ve been long-term unemployed.

Yet we’re flat out measuring whether someone delivers a good quality road! So my question is, when I’m talking to procurement people and helping them design contracts that can be much more strategic, what role can evaluators play in that process? In investment, we’re all talking about impact investment, but how do we know whether we’re actually achieving impact? 

The danger, if we don’t bring evaluators into those questions, is that we end up with fakery. There is so much potential in the economic space for really thinking through fundamentals. What do we mean by value? How do we measure value that extends beyond value for money? 

Rae Fry: But I think a real challenge for evaluation and program design, especially in the public sector, is building that cross-government, cross-sector understanding of impacts, and even collaborating sufficiently across different sectors. Have you seen that happen really well?

Ingrid Burkett: It happens really well in micro settings.  I think that’s why place is such an interesting vector — because we can start to see how to collaborate across organisations, and to share ways to evaluate and share indicators and share resources.  

So I think place is one of those vectors. The other one — going back to mission-related innovation — is that we’re not going to achieve the big challenges of dealing with things like climate change or ageing populations unless we start to really rethink how we collaborate, and not force people to agree when they collaborate. I think evaluators could tell us so much about different models of collaboration. That’s why the connection between evaluation and design is so critical — because we can’t just evaluate, we need to learn from that evaluation, and redesign how we’re approaching things. 

Rae Fry: In the abstract for your keynote address at aes19 you mention the frustrations you’ve experienced with both evaluation and design. Can you talk a bit about that?

Ingrid Burkett: I think it’s about the walls of the box. We’re having conversations between the two now, but what I’m seeing is that the conversations happen — we break down the walls of each of those boxes and we come to the middle ground — and then as soon as we go back to the office it’s business as usual. 

I’ve been in the design space more than the evaluation space, and we go back into business as usual and start to get all excited about great ideas without going back and having a look at the evidence the evaluators have produced. And then the evaluators come in and deliver their reports, and they don’t talk to the designers.

So it’s about bringing those two worlds together in a business as usual frame — and then getting the education system to align with that. I have found myself (for my sins) working back in a university. How do we create courses where you can learn evaluation and design, not just either/or? That’s where the frustration happens for me. 

It feels like we need to disrupt in all of those spaces, and stop thinking that running training on co-design is going to change behaviour. If we ask evaluators to evaluate ‘does training change behaviour’, nine out of ten evaluations will say no, it doesn’t. We know that. How do we actually implement that? Those are the challenging questions that I’m interested in raising.

Rae Fry: In your keynote, are you going to talk about some examples of where you’ve seen it working really well?

Ingrid Burkett: Absolutely, but I’m not going to focus on one particular example; I’m going to talk about different examples from different parts of the ecosystems that I work in, for two reasons. First, I don’t see a “golden paradise” example — I haven’t found it yet, although I keep looking! Second, I also think we can get hung up on hero examples, and that stops us from inventing our own pathways in very different contexts. I’ll talk about micro examples from different projects that I’ve been involved in and what I’ve seen work.

There’s so much potential — I think evaluation is one of those spaces where we have a lot of things on the ground that could really help so many other disciplines. I just wish more people knew about the power of evaluation.  
