Dealing with complexity in system redesign evaluations

Evaluation is critical to what we do. We as an organisation are trying to come up with really good service solutions to improve services for people affected by cancer. The best way of doing that is to try things and to learn from them, so evaluation gives us the answers to those sorts of things.

Well, I think today’s workshop has been a really useful process of bringing together people who have experience in large change and transformation projects and also in evaluating them. There really isn’t a toolkit for how to do this, because very few organisations have actually taken projects right through from initiation to measuring outcomes, which may be five or even ten years later. This is a really important day for learning what things we should be measuring and how we should communicate them.

Twenty years ago, when I was first appointed as a Macmillan nurse specialist, the change actually started that far back. Macmillan were employing nurses in new types of roles, where we were delivering chemotherapy services and dealing with individuals at diagnosis. So when I talk about evolutionary change, we were actually starting the system redesign then, and what we’re doing now is just taking it on to a broader scale.

Well, when you’re evaluating whole systems change programmes there is a whole range of different challenges facing you, but really the main one is that of complexity. My top tips for the evaluator follow from that challenge. The first is to understand that you’re in a complex environment and that you can’t apply traditional evaluation techniques in that environment. You must accept the complexity and do something about it, not wish it away.

So there’s an important debate to be had in thinking about systems change. A lot of us have used logic models as a basic way of structuring our conceptual thinking around the evaluation that we’re doing. The advantage of that is that it makes the intervention clear and the outcomes clear; it gives you things that you ought to be measuring and assessing and allows you to form a judgement. The downside is that it is ineluctably linear. Systems are not linear. Systems react in unanticipated ways; they bubble, they’re turbulent, they shift. You get cliff edges that you suddenly drop off, you get take-off ramps that you suddenly take off into, and they’re very, very difficult to analyse in a linear manner. However, that said, I would argue that most of the evaluations we do in and around healthcare are not of completely chaotic systems; they’re not incredibly messy. They’re just fluid and flexible. But they have degrees of stickiness, and they have degrees of institutional rigidity and path dependency. Those are all things that you can evaluate, and they give you some stability in the thing that you’re evaluating.

One of the common traditional tools used in evaluation is the logic model, and I think the logic model has its place; I just don’t think its place is in looking at system change. The logic model works off a very linear logic: if we do this, then that will result; if that results, then this will result. But that’s not the way that systems work. Systems have different emergent characteristics, and that’s what makes them a system. Evaluators must respond to that.

I think what I’ve learnt is that it’s okay for us to rethink and challenge our traditional approaches to service evaluations. One of the things we’ve talked about and discussed is that the tools we’ve traditionally used for single-pathway redesign interventions are probably not the most appropriate for very complex, multi-component systems. What we really need to look at is what we’re hoping to achieve, which is better service outcomes, but also how we can use our evaluations to support learning and development throughout the system. I think that’s probably something I’ve learnt: how we might refocus how we frame our evaluations for the different audiences that will benefit from them.

There’s a danger that we exaggerate the difference between people who use logic models and people who don’t. Even using logic models, we know the world changes, that there’s a need for iteration, that it’s not perfectly linear, that we need to adapt course and change, and that the evaluation therefore contributes to learning. The people who don’t want to use logic models emphasise their linearity in a non-linear world and therefore say that we need to rethink that.
