Week 36: Systems thinking

15th September 2014, by Bob Williams

This is #2 in our series on visionary evaluation. This year’s AEA Conference theme is visionary evaluation – systems thinking, equity and sustainability. Which raises the question: what is systems thinking?

There are dozens of definitions but for me it is the combination of three things:

  1. Understanding inter-relationships
  2. Engaging with multiple-perspectives
  3. Reflecting on boundary choices

… And how does that contribute to visionary evaluation?

Because it has the ability to change how we do evaluation and, indeed, what we think evaluation is about. Let’s take those three ideas in turn.

Evaluators use the idea of inter-relationships a lot, but often in a relatively limited way. Take your classic program logic. Evaluators generally worry and argue about what’s in the boxes and tend to ignore the arrows between them. In contrast, the systems and complexity field tends to focus more on what the arrows mean and rather less on what’s in the boxes.

Evaluators talk a lot about multiple perspectives, but do we really deeply engage with the consequences of those perspectives for the situations we evaluate? If we did, we’d never consider an intervention as having a single purpose or a single framing. Yet this is something our program logics and theories of change nearly always do.

Finally, if we assume that boundaries distinguish between what is important and what is unimportant, then boundary choices are essentially about what is valued. If we routinely reflected on and critiqued boundary choices, we’d never allow the values by which an intervention is judged to be determined solely by the program implementer or the evaluation client.

These are all big issues for evaluation to engage with, and they have the potential to change what we do quite fundamentally.

What’s the first thing an evaluator should do when trying to think more systemically?

Treat the systems and complexity field with the respect it deserves. It’s a big field and, like the evaluation field, it has diverse methods and methodologies, big unresolved disputes and a history. Do your homework and avoid grabbing hold of simple clichés.

Simple clichés like?

Clichés like ‘systems approaches are about including everything’. That’s clearly impossible and will lead to worse evaluation practice, not better. Every endeavour is limited in some way – hence the focus on boundaries. So ‘holism’ for me is about being very smart, very informed and very considered about what to leave out, rather than opening the floodgates to more stuff. Another cliché is that systems approaches are only about big things. I frequently hear people talk about ‘systems change’ only in terms of large entities. That’s a notion that comes primarily from the management field rather than the systems field; something as small as a cell can be considered a system. The final cliché is that because systems approaches help us deal with ambiguous and uncertain situations, the way those situations behave must be beyond understanding – magic, ‘stuff happens’. It isn’t. Systems and complexity approaches are very disciplined ways of making sense of how things happen the way they do.

You spoke earlier about the systems and complexity field being large and full of many methods.  If evaluators want to think systemically how do they choose which ones to use?

That’s an important question. If you rephrase it in terms of evaluative thinking, you can see how difficult it is to answer. Yes, of course there are some great systems and complexity methods out there, and I know of some that could be particularly helpful to evaluators. But any method takes time to learn and apply well. So in the first instance I’d prefer to see evaluators start where they are now and use the methods they already know in more systemic ways. Once they’ve got the hang of that, they can gain the full benefit from learning specific systems methods – and they are likely to learn faster and make fewer mistakes.

So how do evaluators make their current methods more systemic?

Easy.  Improve those methods’ approaches to understanding inter-relationships, engaging with multiple perspectives and reflecting on boundary choices.

Perspectives from others

Pablo Rodriguez-Bilella, Universidad Nacional de San Juan & National Research Council of Science and Technology (CONICET), Argentina.

Bob's thoughts are both kind to and provocative for the field of evaluation – and for evaluators themselves. I say kind because his text reflects mature thinking about the practice of, and reflection on, evaluative thinking. And they are provocative because they uncover some of the less publicised facets of evaluators. For example, in introducing the three things that, for him, define systems thinking, Bob notes that evaluators use and talk a lot about these key ideas. In some – perhaps many – cases, however, they seem to pay lip service to them, without delving deeply into their content and consequences. Hence the admonition "Do your homework and avoid grabbing hold of simple clichés" is quite pertinent: it warns us of the risk of turning systems thinking into a (new) buzzword, peppering speeches and reports with fashionable terms without sinking our teeth deep into the bone.

While the recommendation that evaluators should start where they are and use the methods they know in a more systemic way is really sensible, it is particularly provocative when Bob considers it "easy" for evaluators to make their methods more systemic (see his reply at the end). Can every method be made more systemic by paying attention to the three dimensions mentioned? Can we think of “systematic reviews” of systems thinking evaluations? How might evaluations in simple, complicated and complex environments benefit differently from systemic thinking?

Dear Bob, we need you to blog frequently ;-)

Sheila B Robinson, Grant Coordinator, Greece Central School District and Adjunct Professor, University of Rochester, Rochester, NY.

Bob makes excellent points here, and offers sound advice for evaluators. While I find systems thinking a fascinating area of study, I don’t think you need to be a systems expert to incorporate elements of systems thinking into your evaluation practice. That said, I think it behooves an evaluator to devote some time to learning the basics.

Understanding interrelationships and reflecting on boundary choices are two areas Bob emphasizes. Recognizing that all human service programs and policies are parts of open systems – with interrelationships probably more complex than we think, and boundaries probably less easily identified and defined than we think – helps me to challenge my own assumptions, push others’ thinking when we’re working together, and, as Bob suggests, focus on the arrows between the boxes. For me, it involves the recognition that there is always more to the story, and the humbling acceptance that I may never be privy to the whole story (nor may anyone else). I’m reminded of a quote from the late Donella Meadows, author of Thinking in Systems: A Primer (2008): “We know a tremendous amount about how the world works, but not nearly enough. Our knowledge is amazing; our ignorance even more so” (p. 87).

For me, thinking systemically means using evaluative thinking and all it entails (for a brief introduction to evaluative thinking, read Tom Archibald and Jane Buckley on Evaluative Thinking: The ‘Je Ne Sais Quoi’ of Evaluation Capacity Building and Evaluation Practice). It means valuing and questioning evidence, engaging in rich dialogue with colleagues (which often means having difficult conversations) about why we might be seeing what we are seeing in our data, and figuring out where in the system we may look for elements of the story we are trying to construct from an evaluation of a program. It means asking a question I learned from studying developmental evaluation with Michael Quinn Patton. Instead of just asking “what works?”, ask “What works, for whom, and under what conditions?” You have to think systemically to be able to answer a question like that with any degree of accuracy. As Bob says, it’s about “making sense of how things happen the way they do.” I think that all too often we tend to stop short at “what happened?”

Bob urges us to engage multiple perspectives, and while I think many of us certainly do attempt this in our evaluation work (especially if we’re using participatory, collaborative, or empowerment approaches), I can't help but wonder if we could do a better job of it by thinking in more systemic ways and reflecting on our boundary choices. Who might we at first consider an “outsider” to the system who could potentially be affected by an evaluation? I worked on an evaluation recently of a program serving primarily high-performing high school students who traditionally pursue higher education. I would never have guessed at the beginning of that evaluation that an outcome of our work would be linking with a program that serves students traditionally underrepresented in higher education. Engaging with people who had experience in both programs – and who could offer broader perspectives than those only associated with the program under review – is what it took to put the two together: to think of the two programs as part of the same system and ultimately to expand our boundary choices.

I love this poetic advice from Donella Meadows (from Thinking in Systems: A Primer) that also captures Bob’s main points:

Guidelines for Living in a World of Systems

  1.  Get the beat of the system.
  2.  Expose your mental models to the light of day.
  3.  Honor, respect, and distribute information.
  4.  Use language with care and enrich it with systems concepts.
  5.  Pay attention to what is important, not just what is quantifiable.
  6.  Make feedback policies for feedback systems.
  7.  Go for the good of the whole.
  8.  Listen to the wisdom of the system.
  9.  Locate responsibility within the system.
  10.  Stay humble – stay a learner.
  11.  Celebrate complexity.
  12.  Expand time horizons.
  13.  Defy the disciplines.
  14.  Expand the boundaries of caring.
  15.  Don’t erode the goal of goodness (2008, p. 194).

You might almost think this came from an evaluation textbook!

Resources

Using systems concepts to navigate complexity - Bob Williams

This links the three fundamental systems thinking elements (perspectives, inter-relationships and boundaries) to various systems methods, through a set of evaluation-style questions. So if you are attracted to Question X, use systems approach Y.

This document was prepared for the 2008 innovation dialogue, Navigating Complexity, organised by Wageningen University, the Netherlands. It draws on the opening chapter of Williams, B. & Imam, I. (eds) (2007) Systems Concepts in Evaluation: An Expert Anthology. EdgePress/AEA, Point Reyes, CA. http://www.amazon.com/Systems-Concepts-Evaluation-Expert-Anthology/dp/0918528216

Free online learning material - Open University

Using A Systems Orientation in Evaluation – Beverly Parsons.

Notes from a 2013 American Evaluation Association pre-conference workshop.

Wearing your systems thinking hat – Megan Roberts

This post by Megan Roberts discusses ways of embedding systems thinking in everyday evaluation practice.

AEA conference presentations

Check out presentations at this year’s American Evaluation Association conference by filtering the program to show those in the Systems in Evaluation Track.

Image credit: Aerial view of Sigiriya Citadel and Palace, Sigiriya, Sri Lanka by James Gordon


Comments

Marcus Jenal

Hi Bob. Great post. I am totally with you on the use of systems thinking. There is just one thing I found a bit confusing. You say that "the systems and complexity field tend to focus more on what the arrows mean and rather less on what’s in the boxes." In my experience, complexity thinkers refrain from defining simple causal connections and rather embrace the concept of emergence, where an intervention – at least in a complex system – does not cause a change, but uses the propensity or disposition of a system to change its dynamics, so that changes emerge. Am I saying the same thing as you, just using more complicated words?

Bob Williams

I've been musing on how to respond to Marcus' comment for far too long. My apologies. Let's go back to what I wrote. I wrote that systems and complexity thinkers are more interested in what the arrows mean. I carefully and deliberately avoided any mention of causality, for precisely the reason Marcus stated. I was making a point about epistemological sensemaking, not ontological engineering. But I do think we have to be careful. If we use the concept of 'emergence' as jargon for 'stuff happens', then we are on a trip to nowhere very useful. While complexity proponents consider that – as Marcus writes – the relationships between the various elements are deeply entangled and thus somewhat difficult to fathom, they don't deny causality. Complex adaptive systems theory may be based on some pretty involved mathematics and unfathomable to many, but it is not a branch of the Magic Circle. As Jonny Morell pointed out very well at the American Evaluation Association conference, complex adaptive systems are not necessarily unpredictable. They are in many cases highly stable and thus predictable. What's not predictable is when and how that predictability breaks down.

Marcus Jenal

Hi Bob. I appreciate you taking the time to muse on how to respond to my comment. The point you make is interesting: complex adaptive systems can be stable and predictable, and there is causality in there. I agree. As long as we leave them alone, they can be – and usually are – pretty stable. But this is not what we want. What we want is to see change. Yet we cannot predict how they are going to change when we poke them – this, too, is a finding of complex systems science. And this is the challenge in development: we need to poke them. And because systems are generally resistant to change due to many dampening feedback loops (people don't like change, especially the powerful), we need to poke them quite a bit.

Evaluators are in a comfortable position: they can usually look backwards, asking "what happened?" Causality in complex systems can be clear in hindsight. But this does not help us look forward, to do stuff. We still need to be exploratory, and consensus on a causal theory often limits the breadth we can explore to find effective solutions.

Emergence is not about 'stuff happens.' Emergence is about a higher level of order that comes (emerges) out of the interaction of the system's parts. The higher level of order provides the system with new abilities. At the same time, it constrains the individual elements; they lose some degrees of freedom. This is a form of causality, but it is not billiard-ball-like causality. Emergent constraints add a level of intricacy. When I interact with a system, I have to account not only for my effect on the element I interact with, but also for the constraints at work in the system, because they determine in part how my counterpart can react to my interaction. This is why I have problems with the simple linear causality that is often (always?) assumed in causal models. And this is why Dave Snowden says that complex systems are not causal, they are dispositional.
