Care in the Community

On September 18th the Care Quality Commission (CQC) published the results of its latest Community Mental Health survey. This is not, as one might think, a survey that measures the mental health of communities, but a survey of people who are mentally ill and receiving secondary care in the community (rather than in hospital or through their GP). The results for the 2gether NHS Foundation Trust are excellent, although on closer inspection the survey appears to reveal more about the CQC than it does about mental healthcare.

The survey is sent each year to a proportion of the people in England receiving care for mental illnesses from secondary care providers — like NHS trusts. It excludes people who only receive NHS mental healthcare through GP surgeries or in hospital, and it excludes people whose mental illness is primarily an addiction to alcohol or other drugs. This year around 46,500 people received the survey and 13,500 responded.

Response rates from patients of different providers varied from 24% to 36%, and 2gether’s response rate was one of the higher ones, 33% (285 out of 850 patients who received the questionnaire). It’s unclear how this relatively high response rate should affect interpretation of the results.

The sampling rate is much smaller than the response rate. For 2gether, the responses received represent only around 2% of people in contact with its services (285 out of 13,000 or so).

It’s also unclear what to make of the somewhat different age profile of 2gether’s patients, almost half of whom (49%) were aged 66 or over, well above the average across providers (38%). This partly reflects the age profile of the population that 2gether serves.

Similar surveys to this one have been undertaken in previous years, but the questions this year have changed, making it difficult to assess any improvement or decline. The accompanying documentation warns, with bold type as in the original:

“[T]he results from the 2014 survey for all questions are not comparable with the results from previous surveys.”

The key phrase here, however, is “for all questions”. It seems likely that the results for some questions are in fact comparable. The results for some other questions could have been comparable if the CQC had not taken steps to prevent comparison.

Despite the apparent intention being to compare results between providers rather than between years, the CQC publishes the results in a format that makes any detailed comparison between providers difficult and time-consuming.

The good news

The results, expressed as scores from 0 to 10, contain some very good news for 2gether, over and above its good response rate. In seven of the nine sections of the survey, 2gether was among, or close to being among, the best performing providers as identified by the CQC. In the other two sections 2gether was above average.

On three questions 2gether was the highest scoring provider (or one of the joint highest scoring providers). These were the questions on involving patients in agreeing what care they are to receive (Q13, score 8.1), on help finding or staying in work (Q34, score 6.2), and on overall experience of care (Q42, score 7.5).

2gether’s best score in absolute terms was for patients knowing how to contact the person in charge of their care (Q10, score 9.5), although the average for all providers was very high (9.2) and at least one provider got a perfect score on this question (10.0).

2gether’s worst score in absolute terms was for “peer support”, support from people with experience of the same mental health needs (Q38, score 3.9), although the average for all providers was even lower (2.3) and even the best score was mediocre (5.1).

The not so good news

The not so good news is that the survey doesn’t measure what you might think it ought to measure. If you happen to meet a chap who has a mental illness, and who is receiving care in the community from a secondary mental healthcare provider — just the kind of chap who might well have received a questionnaire from the CQC — what would you want to know about him and the healthcare he’s receiving?

You’d probably want to know what’s wrong with him, his diagnosis. But the CQC doesn’t ask that.

Depending on his diagnosis, you might want to know how the illness affects him, whether he’s at risk or whether he puts other people at risk. The CQC doesn’t ask about that.

You’d probably want to know what treatment he’s getting, whether he’s receiving any drugs, psychotherapy, or whatever. The CQC doesn’t ask that either.

You might well want to know whether the treatment has side effects, whether the drugs make him twitch or the psychotherapy makes him anxious. The CQC doesn’t ask those kinds of things either.

You’d almost certainly want to know whether the treatment works, whether he’s getting any better. The CQC doesn’t ask that, of course.

And you might want to know whether he’s likely to get over his illness, what hope he has for a future in which he no longer needs healthcare. The CQC doesn’t ask about that (although it does ask about feeling “hopeful about the things that are important to you”, which could mean anything at all).

Are these the wrong kinds of questions to expect the CQC to ask? The CQC has a published Strategy (PDF) that explains where its focus lies: in the following five questions about services:

  • Are they safe?
  • Are they effective?
  • Are they caring?
  • Are they well led?
  • Are they responsive to people’s needs?

But this survey doesn’t ask about any of those things. A provider could run services that put people at risk, and that are ineffective, uncaring, badly led and unresponsive, yet get great scores in this survey.

To understand how the CQC’s survey manages to evade the focus of what the CQC claims is its strategy, it’s instructive to look closely at the last strategic question: “Are [services] responsive to people’s needs?” The survey tiptoes around this with questions that ask about side issues. Perhaps the best way to explain is to imagine that you want to get a train to Glasgow, and you answer a survey about that. The survey asks:

  • “Were you given enough time to discuss your journey?”
  • “Did everyone understand how your journey might affect the rest of your life?”
  • “How well was your journey organised?”
  • “When you phoned the information line, did you get help with your journey?”
  • “Did you get any help with other journeys?”
  • “Did you get any help from other people travelling to Glasgow?”
  • “Were you able to contact the ticket office often enough?”

The important questions, such as whether you caught the train, whether it was on time, whether you were travel sick, and whether you got to Glasgow at all, are not asked. It’s the same with this CQC survey.

The particular slant that the CQC has designed into its survey is that it only asks about aspects of care that could be delivered by a non-clinical administrator who has no particular knowledge of mental health. Aspects of care that require specialist clinical expertise or knowledge are excluded from the survey.

Comparison with previous surveys makes this even clearer. There used to be a question about whether talking therapies were helpful; delivering effective talking therapies requires substantial clinical skill. That question was removed this year. And there has never been a question about the effectiveness of drug therapy (which also requires substantial clinical skill in prescribing and monitoring).

The weird news

Another feature of the survey is that the way the CQC processed the responses to turn them into scores from 0 to 10 has had a weird effect on some of the results. It seems likely that the processing makes most of the results much lower than common sense would suggest, and a few of them higher, compressing the results into a narrow range.

For example, take Q5: “Did the person or people you saw listen carefully to you?” Listening carefully is a trivial skill that requires no clinical expertise. You’d expect very high scores on this question.

Yet providers’ scores for Q5 were between 7.7 and 8.9. The scores seem to suggest that even with the very best providers, patients only feel they’re being listened to carefully around 90% of the time. This is what we might call a proficiency of 90%. And 90% is terribly low for something that requires hardly any skill and no specialist knowledge at all.

The score is that low because of the CQC’s processing of the results. To explain how this can be, assume that some example provider’s proficiency really is 90%. In addition, assume that patients are seen every week, that they answer the survey questions in relation to the past year, and that whether they are listened to or not on any given visit is random (nothing to do with their age, gender, diagnosis or anything else).

From these assumptions, the number of weeks in which a patient felt listened to follows a binomial distribution, so it’s easy to calculate how often, on average, patients will have felt listened to. It’s most likely they’ll have felt listened to about 47 times in the 52 weeks of the past year (because 47 is roughly 90% of 52). Statistically, about 184 patients in 1,000 will have felt listened to exactly 47 times in the year. Hardly anyone will have felt listened to all the time: only about 4 patients in 1,000 will have felt listened to 52 times out of 52.
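These binomial figures are straightforward to verify. A minimal sketch in Python, under the same assumptions (52 independent weekly contacts, each with a 90% chance of the patient feeling listened to):

```python
from math import comb

N, P = 52, 0.9  # 52 weekly contacts; 90% proficiency

def binom_pmf(k: int, n: int = N, p: float = P) -> float:
    """Probability of feeling listened to in exactly k of the n weeks."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(f"exactly 47 of 52 weeks: {binom_pmf(47):.3f}")  # ~0.184, about 184 in 1,000
print(f"all 52 of 52 weeks:     {binom_pmf(52):.4f}")  # ~0.0042, about 4 in 1,000
```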

So, in response to Q5 only 4 patients in 1,000 will answer “Always” and score 10. Practically everyone else will answer “Sometimes” and score 5. The provider’s aggregate score will therefore be 5, the way the CQC calculates it. A proficiency of 90% leads to a score of just 5.
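Here is a hedged sketch of that aggregation. It assumes a three-way scale in which “Always” (listened to in all 52 weeks) scores 10 and anything less counts as “Sometimes” and scores 5, with a “Never” answer (0 weeks out of 52) scoring 0; the “Never” category and its score are assumptions of this sketch, not something stated in the survey documentation:

```python
def expected_score(p: float, n_weeks: int = 52) -> float:
    """Average Q5 score for a provider with proficiency p, scoring
    'Always' as 10, 'Never' as 0 and 'Sometimes' as 5."""
    p_always = p ** n_weeks        # listened to every single week
    p_never = (1 - p) ** n_weeks   # never listened to (negligible here)
    return 10 * p_always + 5 * (1 - p_always - p_never)

print(f"{expected_score(0.90):.2f}")  # ~5.02: 90% proficiency scores about 5
```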

Now we can turn things around and ask: if 90% proficiency leads to a score of 5, what level of proficiency leads to a score of 8.9, the score the best provider actually achieved on this question? Statistically, under these assumptions the proficiency required for a score of 8.9 is a little over 99.5%.
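Inverting the model makes the figure easy to check: ignoring the vanishingly small “Never” term, the expected score is roughly 5 + 5 × p^52, which can be solved for p. Again, this is only a sketch under the stated assumptions:

```python
def required_proficiency(score: float, n_weeks: int = 52) -> float:
    """Proficiency p implied by an expected score, using the
    approximation score = 5 + 5 * p**n_weeks."""
    p_always = (score - 5) / 5        # implied fraction answering 'Always'
    return p_always ** (1 / n_weeks)  # weekly rate behind that fraction

print(f"{required_proficiency(8.9):.4f}")  # ~0.9952: a little over 99.5%
```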

And 99.5% proficiency on a simple task like listening to patients is much closer to what anyone might expect from a good mental health provider.

The national average score for this question was 8.35, which seems to imply that the NHS has quite a way to go to develop listening skills in community mental healthcare. It makes listening seem like an important national issue. But the average proficiency required for a score of 8.35 is a little more than 99.2% on these assumptions. There’s no issue.
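(The inversion sketch above agrees: required_proficiency(8.35) returns roughly 0.9923, a little more than 99.2%.)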

The message

The CQC’s survey doesn’t really appear to have been designed to assess mental healthcare in the community. It doesn’t relate to the CQC’s own strategic issues of safe, effective, caring, well led and responsive services.

Instead, the survey appears to have been designed to send a message. The message is that care in the community doesn’t need clinical skills or specialist knowledge. The most important issues, according to the survey questions, are simple administrative matters and basic skills.

Then, because a survey of simple administrative matters and basic skills would give the game away by resulting in phenomenally high scores like 99.5%, the scoring system has been designed to make most of the scores low enough to be talking points.

It seems likely that many patients who receive mental healthcare in the community do receive good care from skilled clinicians. At the same time there are many individual stories of things going wrong. The CQC’s survey should be a national resource that helps everyone to understand what’s really happening. Instead, it seems to have turned into an exercise in directing our attention away from what’s important.

 
