By Brian Pelan, editor, VIEW
The most recent issue of VIEW about Social Impact (http://viewdigital.org/2016/06/28/latest-issue-view-look-social-impact-making-difference/) contains two articles which have been the subject of debate around the issue of Outcomes Based Accountability.
For those not in the loop in the community/voluntary sector, the OBA model is at the heart of Stormont’s Draft Programme for Government.
For £96, you can attend one of the OBA workshops run by the NCB NI organisation, which is based at the NICVA Building, 61 Duncairn Gardens, Belfast.
And for £300, you can attend a summit on OBA at the Waterfront in Belfast in October.
Dr Toby Lowe’s reply to Mark Friedman’s letter (http://viewdigital.org/2016/07/29/open-letter/) is below.
By Dr Toby Lowe, Senior Research Associate at the Centre for Knowledge, Innovation, Technology and Enterprise, Newcastle University
A further contribution to the debate about outcomes
It is fantastic that people have written in to say how much they value debate about outcomes-based approaches to the performance management of social interventions. To contribute further to this spirited debate, here are a couple of points for people to consider.
Outcome-Based Performance Management takes many forms
A great deal of my research to this point has been concerned with exploring both the underlying principles, and evidence of what happens, when people implement outcomes-based performance management (OBPM) approaches to social interventions. OBPM takes many forms – from Mark Friedman’s Outcome-Based Accountability (OBA), to Payment by Results mechanisms.
In many ways, OBA can be seen to have fewer negative consequences than other forms of OBPM, such as Payment by Results. This is because part of its process involves a discussion amongst stakeholders about what desirable outcomes look like, and how they might be achieved. As I have highlighted elsewhere, this is a good thing.
Is OBA a form of Performance Management?
However, OBA still has negative consequences, because it shares a core problem with all other forms of OBPM – it is attempting to do the impossible – to hold organisations accountable for producing outcomes in people’s lives.
If OBA focussed only on how systems can learn and adapt to improve outcomes, then it would be a useful tool for creating the system change required to genuinely improve the lives of people who ask for, and need, help.
But OBA doesn’t describe itself in that way. It uses the language of accountability and “Performance Measures”. It asks stakeholders to produce metrics by which outcomes can be measured, and holds delivery organisations accountable for producing changes in those measures. This is classic Performance Management – define a strategic objective, turn that objective into something measurable, report progress (or otherwise) regarding those measures to those in charge.
But OBA seeks to distance itself from the well-known destructive effects of this form of Performance Management. Mark Friedman says that OBA warns its users against the “misuse of targets and penalties” and suggests that the problems caused by OBA are where it has been implemented badly.
On the face of it, this is a bit confusing. OBA looks like it follows a classic Performance Management methodology but, at the same time, warns people about the ‘misuse of targets’. What can this mean?
We can decipher what this means from a previous correspondence I had with David Burnby, one of the people who trains organisations how to use OBA. We had been corresponding about the problems of OBA and Performance Management, and David chose to publish this correspondence. So, let me quote from him [n.b. In this correspondence, he refers to OBA by its other name, Results-Based Accountability (RBA)]:
“Traditional public sector commissioning focussed almost entirely on “how much” measures in the mistaken belief that providing lots of service automatically means people will be better off. RBA focusses particularly on the “better off” data whilst acknowledging the importance of quality (how well?) measures. Again, incentivising improved performance through arbitrary target setting is actively discouraged. Progress is measured by the rate of curves turned and distance travelled. The relationship between service-user performance measures and whole population indicators is a contributory one. We can hold managers accountable for the impact their interventions make on their client population (as measured by the ‘better-off’ performance measures), but in any system of complexity, no manager can sensibly be held accountable for the well-being of a whole population.” David Burnby, 2013, “The Toby Lowe Letters”
This helps us to understand what Friedman means by “the misuse of targets”. Interpreting from David’s response, we can see that ‘misuse’ seems to mean:
- Setting ‘arbitrary’ outcome targets – i.e. targets for metrics which do not relate to the baseline position, and the distance travelled from the baseline
- Setting ‘population level’ outcome targets rather than targets for those clients that a service works with – i.e. targets which relate to everyone in an area, rather than just the people served by the intervention.
But so long as we don’t do these things, it is appropriate to use information from these Performance Measures as targets which inform the most fundamental commissioning decisions:
“… if we run a healthy living programme based on three performance measures, say (for example), Body Mass Indicator, Blood Pressure and Alcohol Consumption, and the data tells us that 85% of our service users experience improvement in these areas, then that’s good enough to say that this is a project worth investing in which will contribute to a whole population outcome of a Healthy Community (and, in the longer term, an indicator such as longevity).” David Burnby, 2013, “The Toby Lowe Letters”
This demonstrates exactly why OBA’s warnings about “the misuse of targets” do not enable it to escape the classic problems which all versions of OBPM encounter:
Problem 1: Proxy measures aren’t real outcomes
This is perfectly illustrated by David’s example of using BMI as a proxy indicator for obesity. BMI doesn’t actually measure obesity, but is used as a proxy measure because genuine measures of body-fat are intrusive and expensive to collect. But because it doesn’t really measure whether someone is obese, it generates perverse outcomes when used as a performance metric by organisations. See here for the classic example of exactly this problem, when a body builder and personal trainer was told to go on a calorie-controlled diet, because her BMI was too high.
Problem 2: The use of these targets causes gaming
If we use changes in BMI, Blood Pressure and Alcohol Consumption amongst users of a service as a means to decide whether that service should be re-commissioned (or not), then all the evidence says that the service will start to game the system in order to produce the required data (see here for a summary of this evidence):
- They will cherry-pick people to work with, only helping those who will produce the right data, ignoring difficult cases.
- They will teach to the test – they will focus their work on achieving change in these three measures, irrespective of whether that is what the client wants or needs.
- They will reclassify what counts as success. They will measure and report changes to these metrics in ways which appear to show success, even if it’s not.
- If all else fails, they will make up the figures.
All this is well known, and was summed up back in 1976 by Donald Campbell, an American social scientist, in his famous essay “Assessing the impact of planned social change”. In it, he formulated Campbell’s Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
Why does gaming happen?
Gaming happens in all forms of OBPM (including OBA) because OBPM does not respond well to the complex nature of people’s real lives. The simplified proxy measures that OBA uses to judge performance are not helpful for understanding performance in a complex system:
Firstly, the baseline measures that OBA uses are essentially as arbitrary as any other targets. Just because a measure was taken at the start of an intervention doesn’t mean that the measure is helpful to judge the performance of that intervention over time. Think about employment figures and the economic crash of 2008/9. If an employment programme started working in that year, its baseline figures for employment (or for the numbers of people it got into work in the previous year) would be meaningless as measures for its performance in the following year. The context in which organisations work is dynamic and unpredictable – baseline figures are not necessarily any less arbitrary as targets than any other figures. So, organisations can end up working to unsuitable targets in OBA, just as in any other target-based Performance Management system.
Secondly, as we have seen in the case of BMI and obesity, the simplification from the complexity of real life, to ‘what is measurable’ misses out what is really important in people’s lives. It means that organisations concentrate on hitting targets, not helping people with the complexity of their lives.
Thirdly, there are thousands of factors which lead to changes in outcome measures, and the majority of these are not under the control of the organisation undertaking an intervention (as can be seen from the systems map of all the factors that lead to obesity, produced by the Government Office for Science in 2007).
OBA seems to have a strange set of blinkers around this issue. It recognises that the population as a whole has huge complexity in how outcomes are generated. There are just too many factors at work. And so, as David Burnby says, we can’t hold managers accountable for population-level outcomes. But for the service user group, OBA says it is appropriate to hold managers accountable, and even decide whether to recommission their service, on the basis of whether changes in proxy measures are achieved for those people. But the effects of all the other complex factors (on BMI, Blood Pressure and Alcohol Consumption) also exist for service users, so if there is too much complexity to hold managers accountable for population-level outcomes, why isn’t there too much complexity to hold managers accountable for outcomes for service users?
It’s the managers’ fault if things go wrong
And this is the rub. All forms of OBPM attempt the impossible – to hold organisations accountable for producing outcomes which are beyond their control.
As a result of being held accountable for things that are beyond their control, managers learn to manage what they can control, which is the production of data. They cherry-pick, they teach to the test, they reclassify, they make things up. And they do this because they are being asked to do an impossible thing – to be accountable for things that they don’t control. Mark Friedman recognises that this is impossible, but chooses to make managers accountable anyway:
“Don’t accept lack of control as an excuse… If control were the overriding criteria for performance measures then there would be no performance measures at all. ” (Mark Friedman, Results-Based Accountability Implementation Guide)
And because they’re being asked to do something impossible, things inevitably go wrong. And this is why we see the long-term evidence we do around the implementation of outcome-based approaches. But sadly, OBA passes the buck at this point. When things go wrong, it’s not because the system was faulty, it’s because of bad managers:
“RBA thinking doesn’t kill people, managers do. Blaming RBA for the adoption of a target setting culture is akin to blaming the hammer when the thumb is hit.” David Burnby, 2013, “The Toby Lowe Letters”
So, when the inevitable happens in Northern Ireland, and all the interesting and complex initial discussion about outcomes gets distilled into simple Performance Measures which create gaming, we know where the finger of blame will be pointed. Good luck managers, this will be on you.