Patient safety data for the NHS in England was published on April 30th, as it has been every six months for about five years. This data summarizes patient safety incident reports from NHS healthcare providers. I looked at the data from mental health trusts to see what’s been happening, and to see how 2gether compares.
What was originally the National Reporting and Learning System (NRLS) of the National Patient Safety Agency (NPSA) is now part of NHS England, but it retains its own identity and publishes its own data.
The first of the reports on patient safety incidents was for the first half of 2008–9, but the format then changed and more detail was added for the second half of that year. Unusually for NHS data, the format has not changed over the years since then. (Usually, there are significant changes every couple of years that prevent comparisons over time — conspiracy theorists might say this is deliberate.)
The result is fairly consistent data every six months, starting with October 2008 – March 2009 and ending with the latest release for April–September 2013. As usual for NHS data, there is a completely unjustifiable delay in publication — seven months in this case to get some numbers from some computer databases, put them in a spreadsheet and upload it to a web site.
Spinning up and down
The conclusions drawn from the numbers of patient safety incidents are easy to spin, whatever happens.
If the numbers go up, the spin is “Hooray! Reporting of patient safety incidents got more open and transparent!”
If the numbers go down, the spin is “Hooray! Patient safety improved!”
If the numbers stay the same, the spin is “Hooray! Patient safety improved and reporting improved, and the effects cancelled each other out!”
Looking at the way the numbers have changed over the years in mental health trusts, however, it’s not so easy to explain what’s really been happening.
Numbers of incidents
You would expect the number of patient safety incidents in each trust to depend on its size. In a large trust treating many patients there would be more incidents than in a small trust treating few patients. To make comparisons possible, the published reports tell you the number of incidents per 1,000 bed-days.
In the last reporting period, April–September 2013, 2gether reported 1,500 incidents. For comparison with other trusts, this comes to about 33 incidents per 1,000 bed-days. The average for mental health trusts in the same period was about 28 incidents per 1,000 bed-days. The figure for 2gether is about 18% higher.
Is 2gether’s patient safety 18% worse? Is 2gether’s incident reporting 18% better? Is it a bit of both? Who can say? 2gether uses fewer bed-days per patient than other trusts do, and this could account for the difference. Perhaps the use of bed-days in the official statistics is misleading, and has been misleading everyone for the past five years.
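The arithmetic behind these figures is simple enough to check. Here is a minimal sketch, assuming an illustrative bed-day total of about 45,450 — a number chosen only to be consistent with the published rate of roughly 33 per 1,000 bed-days, since the reports publish the rate rather than this exact denominator:

```python
# Rate per 1,000 bed-days and the percentage difference from the average.
# The bed-day total is an illustrative assumption, not a published figure.

incidents = 1_500        # incidents reported by 2gether, Apr-Sep 2013
bed_days = 45_450        # assumed occupied bed-days (illustrative)
sector_rate = 28.0       # approximate mental health trust average

rate = incidents / bed_days * 1_000
pct_above_average = (rate / sector_rate - 1) * 100

print(f"{rate:.1f} incidents per 1,000 bed-days")   # about 33
print(f"{pct_above_average:.0f}% above the average")  # about 18%
```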
Improvements over time?
Over time, you might hope that patient safety would improve and reporting would improve, too. It’s not clear which effect would outweigh the other in the results. Perhaps safety would improve more than reporting and the numbers would go down, or perhaps the opposite.
What actually happened is surprising. Over the years the numbers fell quite fast for a while and then drifted back up again. The chart shows this was true for 2gether (the upper line, red squares), and also for mental health trusts on average (the lower line, blue diamonds):
The initial fall (for mental health trusts on average) was considerable, nearly 40% of the initial rate.
Speculating on this, you might imagine that perhaps when trusts realized the data was being published it frightened them into reporting fewer incidents, so the numbers fell, but gradually they have got used to the idea, and they are gradually reporting more.
Or you might imagine that when trusts realized the data was being published it frightened them into being more vigilant in protecting patients, so the numbers fell, but gradually they have got used to the idea, and they are gradually becoming less vigilant again.
Or you might imagine various other more complicated scenarios.
The data also classifies incidents by the level of harm to the patient, ranging from no harm at all to death. Around half of the incidents reported caused no harm at all to the patient. This makes the raw figures rather misleading.
For example, of the 1,500 incidents reported by 2gether in the last reporting period, only about 650 caused any harm at all to the patient (and most of those only caused a little harm). 2gether did report 13 deaths, but that’s about average for mental health trusts relative to bed-days.
If a patient dies, it would probably be difficult not to report it, but if there’s some incident that doesn’t cause any harm at all, it would probably be easy not to report it. So you’d expect improvements in reporting to increase the number of harmless incidents, but not to have so much effect on incidents that cause some harm.
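The kind of filtering this requires can be sketched in a few lines. The harm levels follow the NRLS “degree of harm” classification (no harm, low, moderate, severe, death); the incident records and the bed-day total below are made up for illustration, not taken from the published data:

```python
# A sketch of excluding harmless incidents before computing the rate.
# All numbers here are invented for illustration.

incidents = [
    {"harm": "none"}, {"harm": "none"}, {"harm": "none"},
    {"harm": "low"}, {"harm": "moderate"}, {"harm": "death"},
]
bed_days = 200  # illustrative occupied bed-days

# Keep only incidents that caused some harm, then recompute the rate.
harmful = [i for i in incidents if i["harm"] != "none"]

rate_all = len(incidents) / bed_days * 1_000
rate_harmful = len(harmful) / bed_days * 1_000

print(f"All incidents: {rate_all:.0f} per 1,000 bed-days")
print(f"Harmful only:  {rate_harmful:.0f} per 1,000 bed-days")
```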
It’s possible to use the published data to analyse just the incidents that caused some harm, and ignore the harmless incidents. If the changes in the numbers over time that we saw in the chart above are mainly changes in reporting, then ignoring the harmless incidents should remove much of the change. This is not what happened:
Although the lines on this chart look flatter, that’s only because we’re now looking at fewer incidents. The initial fall (for mental health trusts on average) is greater than in the first chart, nearly 50% of the initial rate.
I think this tends to suggest that the initial fall does not reflect a change in reporting, but an actual improvement in patient safety. And it tends to suggest that the subsequent rise does not reflect a change in reporting either, but an actual worsening of patient safety.
The data also divides incidents into eleven broad categories. The largest category reported by 2gether over the years was self-harming behaviour by patients, which accounted for nearly 40% of patient safety incidents.
For mental health trusts on average, self-harming behaviour accounted for less than 20% of incidents over the years. Here’s a chart showing the percentage of incidents that were categorized as self-harm:
The proportion of self-harm incidents is not just much higher in 2gether; it is also much more variable. Over the years it rose, drifted back down, and then started to rise again. I have no idea why this should be.
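The percentages in the chart are simply each category’s share of the total incident count. A minimal sketch with made-up counts, chosen so that self-harm comes to 40% (roughly 2gether’s figure); the category names are only indicative of the broad NRLS categories:

```python
# Each category's share of total incidents, as a percentage.
# Counts are invented for illustration; only the calculation matters.

counts = {
    "Self-harming behaviour": 600,
    "Patient accident": 400,
    "Disruptive, aggressive behaviour": 250,
    "Medication": 150,
    "Other": 100,
}

total = sum(counts.values())
shares = {category: n / total * 100 for category, n in counts.items()}

print(f"Self-harm share: {shares['Self-harming behaviour']:.0f}%")  # 40%
```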
It’s strange, in a way, that self-harm appears in the safety statistics at all. Deliberate self-harm can be associated with the distress that patients feel as a result of various mental illnesses, and accidental self-harm may be associated with patients’ lack of concern for their own safety, also as a result of various mental illnesses or as a side effect of medication.
The most obvious conclusion to draw from the figures is that distress and lack of concern for oneself are worse in 2gether than in other providers of mental health care. But another conclusion might be that self-harm is particularly difficult to prevent, and that 2gether is successful in preventing other kinds of safety incident, leading to a greater proportion of self-harm.
It’s rare to find a stable published data set that allows these kinds of comparisons to be made over such a long period of time. That makes it all the more unfortunate that it’s so hard to know what conclusions to draw.