On November 9th I attended a Department of Health workshop to explore how the NHS Choices website might be improved to support people involved in accountability. The all-day workshop in London was attended by about twenty people in all, mostly from LINks, local authorities and the NHS Choices team. One participant was a non-executive director of an NHS foundation trust, amongst other things. I think I was the only NHS foundation trust governor present.
NHS Choices is the main NHS website for the public, having the primary nhs.uk URL. As its name implies, it has a strong focus on choice, even though there is plenty of information there about other aspects of the NHS and healthcare.
The workshop was a preliminary exercise to gather information that will inform discussions about the future of NHS Choices. Decisions about its future will be bound up in the long-awaited information strategy that was consulted on a year ago and might not be seen for months yet.
A workshop participant suggested that data for local HealthWatch to use in holding NHS bodies to account might be better provided by a dedicated organization along the lines of an NHS Quality Observatory. And then there’s the NHS Information Centre, and so on. I suggested that too much consolidation might not be a good thing, because having a variety of distinct points of view of the NHS gives valuable perspective. However, it’s not clear that NHS Choices does give a distinct point of view, because it only re-publishes data from elsewhere.
The workshop looked at how the existing NHS Choices site provides comparative data about healthcare providers, how participants use data in their accountability roles, and how NHS Choices might provide accountability data in the future. It wasn’t as orderly as that, though. For much of the day there was a rambling discussion that sometimes went far off topic. I suppose behind this there might have been a conscious decision to allow participants to express themselves freely so as to capture a wide range of opinions.
Development of the NHS Choices site was apparently outsourced to Capita Group plc some years ago. Capita have a promotional video about the deal on their Outsourcing page (5th one down). Capita seem to re-outsource the actual running of the web servers to Attenda Ltd., a private company owned by Darwin Private Equity LLP. None of this structural information was discussed during the workshop, even though at least a couple of the participants made it quite clear that it’s the kind of thing that they feel is very important. Personally, I don’t care about the contractual arrangements but I do care about the lack of transparency.
The site's privacy policy is misleading about cookies, too, suggesting that they are OK as long as they don't collect information or store personal data. The real problem with cookies is that third-party cookies can be used, like trackers, for profiling. I'm disappointed to find that such an important healthcare website is mealy-mouthed about privacy in this way.
NHS Choices at present
The workshop’s premise was that the way NHS Choices presents comparative data (so that members of the public can choose a GP or a hospital) can be adapted for the slightly different purpose of holding NHS bodies to account (so that, for example, a local HealthWatch can compare GPs or hospitals). There are several challenges.
To provide a common basis for discussion, workshop participants were invited to use the site to perform two tasks based on typical scenarios. This was a homework task to be performed before the workshop. Unfortunately at the time I chose to try the tasks the NHS Choices site was appallingly slow. In the workshop I was told they had been upgrading the site around that time, but there was no message on the site making that clear.
One challenge is that some of the data that might be useful does not exist at all on the NHS Choices site.
Actually, there is some information about psychosis on the site, and there is even some information about psychiatric hospitals, but it’s organized in an incoherent way. It seems that because psychiatric patients don’t have a choice of hospital, the site doesn’t allow them to compare hospitals.
If this isn’t confusing enough, there are some basic glitches that can get in the way of finding information. For example, Gloucestershire’s main psychiatric hospital, Wotton Lawn, has an entry here: Overview. When I used the main Search box to search for Wotton Lawn that Overview page didn’t appear anywhere in the results. Instead there was a broken entry Overview – Wotton Lawn – NHS Choices and an empty entry Overview – Wotton Lawn – NHS Choices. That kind of thing doesn’t inspire confidence that an even more complex design on the site would work well.
Another challenge is that when information is available, it’s sometimes presented in a misleading way. For example, hospitals are classified as either “NHS Hospitals” or “Independent hospitals with free NHS services”. Many people’s assumption will be that the independent hospitals are all private profit-making companies, when in fact many of them are charities. And the classification also misleads people into thinking that the NHS hospitals are all centrally managed, as if from the Kremlin, when in fact many of them are managed by highly independent NHS foundation trusts.
The classification of hospitals preserves an outdated political fiction. Some workshop participants seemed keen to distinguish between hospitals that generate private-sector profit and hospitals that reinvest all surpluses in healthcare, but the site conceals this real-world distinction. Some NHS hospitals generate large and controversial private-sector profits through Private Finance Initiative (PFI) schemes, while charities like Nuffield Health reinvest profits from their hospitals in healthcare. After the workshop there was a story in the newspapers about an NHS hospital that is to be managed by a private company while retaining its NHS brand, making NHS Choices’ simplistic classification even more problematic.
Other examples of misleading presentation are easy to find. A small percentage difference in mortality data can lead to a hospital getting a scary red warning sign. For example, here’s a comparison of three hospitals:
The two NHS hospitals appear to have mortality rates of 109% and 107% of the national average, but only one of them has a warning sign. Is that 2% difference statistically significant in the context of this comparison? The hospital run by a charity has no data, making it look like a dodgy outfit that doesn’t measure outcomes (which it isn’t).
This data is not actually hospital data, anyway. The small print says it’s for “the NHS Trust that each hospital belongs to”, but at the NHS Information Centre I can only find data for commissioning primary care trusts (PCTs), not data for the provider trusts that the hospitals actually belong to. I doubt the accuracy of NHS Choices’ small print, which again seems to preserve an outdated concept.
The actual rounded values for these two PCTs in the current NHS Information Centre SHMI spreadsheet are 1.10 and 1.08, which makes me think that NHS Choices truncated the data instead of rounding it, or that its data is out of date or comes from some other source. If a 2% difference really is significant, then so are rounding errors of this magnitude; and if NHS Choices wants to claim it simply re-publishes data, then people need to be able to go to the source and find exactly the same figures, without these uncertainties.
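The difference between truncating and rounding is easy to illustrate. The underlying ratio of 1.098 below is my own assumption, chosen because it is one of the few values consistent with both published figures (it rounds to 1.10 in a spreadsheet, but truncates to 109%):

```python
# Illustration only: how truncating rather than rounding a ratio changes
# the percentage displayed. The ratio 1.098 is an assumed value, not a
# published figure.
import math

def as_percent_truncated(ratio):
    """Drop everything after the integer percentage (e.g. 1.098 -> 109)."""
    return math.floor(ratio * 100)

def as_percent_rounded(ratio):
    """Round to the nearest integer percentage (e.g. 1.098 -> 110)."""
    return round(ratio * 100)

ratio = 1.098
print(as_percent_truncated(ratio))  # 109 -- what NHS Choices appears to show
print(as_percent_rounded(ratio))    # 110 -- consistent with a spreadsheet value of 1.10
```

A one-percentage-point discrepancy like this is exactly the kind of uncertainty that makes it impossible to check the site's figures against the source.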
NHS Choices advertises the data as the “Unadjusted Mortality ratio” but if it’s the NHS IC SHMI data then in fact it’s adjusted for several factors including age, gender and diagnosis. This caused confusion during the workshop, where participants and NHS Choices staff alike seemed to believe that the numbers really were unadjusted (and at the time I had not yet looked it up, so I didn’t know what to think).
The small print about this on the NHS Choices site is confused. There's only one SHMI ratio, and it's an adjusted ratio. There are two ways to group PCTs into bands — a simple banding, and a banding that contains a further adjustment. It looks like NHS Choices used the unadjusted banding for the adjusted ratio. (As it happens, if they had used the banding with the further adjustment, that first hospital would not have had a warning sign at all.)
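As a rough sketch of how two bandings can disagree about the same ratio: banding amounts to checking whether the ratio falls outside a pair of control limits, and the further adjustment widens those limits. The limits below are invented for illustration; the published SHMI banding derives its limits statistically.

```python
# A sketch of banding by control limits. Both pairs of limits here are
# made-up values for illustration, not the published SHMI limits.
def band(ratio, lower, upper):
    """Classify a mortality ratio against a pair of control limits."""
    if ratio > upper:
        return "higher than expected"
    if ratio < lower:
        return "lower than expected"
    return "as expected"

ratio = 1.10
# Simple (narrow) limits, assumed for illustration:
print(band(ratio, 0.92, 1.08))  # higher than expected -> warning sign
# Limits widened by a further adjustment, also assumed:
print(band(ratio, 0.88, 1.12))  # as expected -> no warning sign
```

The same ratio can therefore earn a scary red warning under one banding and pass without comment under the other, which is why using the wrong banding matters.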
Data not available
When data is not available there is no indication of what’s missing. For example, when I compared two GP surgeries I saw this:
What it’s trying to tell us is that the GPs only speak English.
In other places the data is available, but not available from NHS Choices. For example, data on MRSA and C. Difficile infections, amongst other indicators, is available for non-NHS hospitals but it never shows up on this site.
It’s not always clear that the data presented is complete and up to date. For example, some results from the GP Patient Survey are presented, but only some. I would guess this may also be true for the other sources of data.
Curiously, whether or not people would recommend a surgery is presented based on the often very few comments on the NHS Choices site itself, but the more reliable version of the same information from the GP Patient Survey is not presented. Some of the other omitted items might also be of interest to the public. Certainly, anyone who has participated in the survey might find it strange that they can’t find all the items.
The small print on NHS Choices’ GP Patient Survey results says they are from April 2009 – March 2010 data, but April 2010 – March 2011 data seems to be available for download on the GP Patient Survey website.
Another feature of NHS Choices is that the public can submit comments about their experience of hospitals and GPs. Several participants, me included, were disappointed that there were so few comments. Not enough has been done, it seems to me, to aggregate comments from a wide range of sources. If comments submitted to PALS, LINks, the Care Quality Commission, the ICAS services, etc. were all included then the data might start to look useful. There is aggregated comment data from the independent website Patient Opinion, showing that it can be done.
The entire 2gether NHS Foundation Trust has just four comments, the most recent dated November last year. Two of those comments are identical:
I would like to put on record my appreciation for my CPN, who has helped me to recover from a bipolar episode, so that now I’m what I call “sea-level” and I’m able to function normally again.
Of course, the patient might have suffered another bipolar episode and felt exactly the same as before about it.
The other two comments are somewhat critical. For example, both patients agreed that they were treated in environments that were not completely clean, and that decisions about their care were often made without involving them. No one at the trust has responded. Other healthcare providers have responded to some of the comments about their hospitals, but few of the responses are very helpful, I thought.
The workshop spent rather less time on how accountability actually works, and on the role comparative data plays in it.
As presently structured, NHS Choices would not provide easy ways to access comparative data for accountability purposes. For example, there are NHS Gloucestershire contracts with providers outside Gloucestershire, and this situation will become more complicated as “any qualified provider” is rolled out across more and more services.
It was also clear that the kind of filtered data NHS Choices presents to the public is not always adequate for accountability purposes. For example, some of the figures NHS Choices presents are medians, which may conceal the extreme values that are important for accountability purposes.
What’s happening in this scenario is that around two thirds of the cases are easy to deal with, and they are all seen within a week or two. Slightly more complicated cases are being put off, sometimes for many months. The people who most need the service are getting the worst service.
Because the median is the waiting time for 50% of patients, it shows up as two weeks. That both conceals the problem and encourages managers of the service to stick with the unsatisfactory arrangement. It would be better to provide a percentile that captures the great majority of patients, like “95% of patients are seen within 8 months”.
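The effect of reporting a median rather than a high percentile can be sketched with made-up figures. The distribution below is illustrative only, not real NHS data, but it matches the shape of the scenario: two thirds of patients seen within a week or two, the rest waiting months.

```python
# Illustrative waiting times in weeks for 100 patients (invented data):
# two thirds are easy cases seen quickly, the rest wait many months.
from statistics import median, quantiles

waits = [1] * 40 + [2] * 27 + [20] * 15 + [30] * 18

print(median(waits))  # 2.0 -- "half of patients are seen within two weeks"

# The 95th percentile exposes the long tail the median conceals:
p95 = quantiles(waits, n=100)[94]
print(p95)  # 30.0 -- "95% of patients are seen within 30 weeks"
```

A service reporting only the median looks healthy here; the percentile makes the months-long waits for the hardest cases visible, and gives managers a number they cannot improve by ignoring the tail.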
A related problem was in the news recently in relation to 18-week targets, because people who have been waiting longer than 18 weeks become invisible to the target-driven system, and they can end up waiting for very long times.
I’ve done a fair amount of nit-picking in the above. The nits themselves aren’t really the point, though. The point is that there was no sign of robust processes in place to identify and resolve the many issues. There was no sign, either, of a robust information model providing a common structure for data from many different sources.
The people leading the workshop showed no particular enthusiasm for processes and models, but much enthusiasm for funky charts and maps. There was surprise when some participants said we might want to download raw data to work with in our own spreadsheets, databases and charting software.
Trying to provide data for accountability would make the site much more complex than it is now, and I’m not confident that the complexity can be managed with the current approach. It’s an interesting and exciting challenge, though, and I fully support the attempt. The project is ongoing, and there should be more developments next year.