Inspection

The Care Quality Commission (CQC) has reported its findings following its week-long inspection in October 2015 of 2gether NHS Foundation Trust, the mental health trust for Gloucestershire and Herefordshire. The report reveals as much about the CQC, however, as it does about the trust.

This is a rather hasty review of the inspection report.

The CQC’s high-level framework for inspections is based on five key questions, about whether the service is safe, effective, caring, responsive and well-led. Each service within the scope of the inspection is rated outstanding, good, requires improvement or inadequate on each key question. There’s an overall rating for each key question, and an overall overall rating for the trust.

At 2gether the CQC’s overall overall rating for the trust was good. Four of the five key questions were rated good overall; safety was rated requires improvement. Ten services were rated: two outstanding, two requires improvement, and the rest good.

If you want to know more, there’s a matrix of the five key questions against the ten services: fifty ratings in all. Four of those fifty are outstanding, nine are requires improvement, and the remaining thirty-seven are good.

There’s a bewildering amount to make sense of from the ratings alone, before you get down to any real information about the trust. And at this level everything you’re seeing is just the CQC’s subjective judgement. The presentation gives the impression that these subjective ratings are hard facts, but that impression is false.

Scope

The inspection was limited in scope because of the way the CQC works. For example, inpatient services for mentally ill children and young people were not inspected because 2gether doesn’t provide them directly; hospitals in surrounding areas are used instead. The CQC provides no easy way to discover which hospitals these are or what the quality of their care is. I have previously raised this with the CQC, but it doesn’t seem to be a problem they want to fix.

Services provided directly by 2gether were inspected selectively. The service that provides care for the largest number of people is IAPT (Improving Access to Psychological Therapies, branded “Let’s Talk” locally). This wasn’t inspected.

At the other end of the scale, specialized services like the eating disorders service weren’t inspected either. On the other hand, the extremely specialized Psychiatric Intensive Care Unit (PICU) was inspected.

The inspection report doesn’t list all of the services omitted, or explain the rationale for the inspection’s limited scope. Several services that have been controversial amongst local people were among those omitted, which gives an impression of bias in the inspection as a whole.

Methodology

The inspection report gives little away about the inspection’s methodology. A credible inspection would reveal the credentials of the inspectors who made the subjective ratings. It’s easy to imagine, for example, that inspectors whose only personal experience has been in outstanding services would tend to down-rate the services they inspect, while inspectors whose only personal experience has been in struggling services would tend to up-rate.

So when we see a service rated outstanding we don’t really know if that was just because the inspectors were weak and easily dazzled; when we see a service rated as needing improvement we don’t know if that was just because the inspectors were whiz kids.

My personal experience of the inspection was very limited, and it can’t fairly be taken to reflect on the inspectors as a whole. Nevertheless it was not very encouraging, as I noted in this blog at the time in Listening. If the rest of the inspection was as badly handled as that, it would be a serious concern. The report gives nothing away on this.

Reporting

The inspection report is poorly produced. It reads like a rough first draft. This makes it difficult to interpret in places, and it gives the impression that inspecting the quality of care in the NHS isn’t really of much importance; or rather, that going through the motions of the inspection is very important, while communicating the outcome to the public is of no importance.

Taken together with my personal experience of the inspection, the rather messy report does tend to suggest that the inspection as a whole was an amateurish affair whose results can’t be relied on.

The report credits, but doesn’t name (p. 14):

• a CQC report writing coach

I can well understand that the coach wouldn’t wish to be named in such a report.

Feedback

The inspection made some use of feedback from patients, carers, staff and others. As I was present at three of the feedback meetings, two for the public and one for governors, I searched the report for evidence that this feedback had been taken into account.

The report says (pp. 15-16, copied and pasted here with the CQC’s original errors, as are subsequent quotes):

During the inspection took place we met with a variety of different groups of patients, carers and other user representative groups during two listening events. In the main comments were positive -.with people saying they had received good care and treatment and that services were responsive. Patients and carers praised staff for their kindness and thoughtfulness.

Those weren’t the listening events I attended! The notes I made at the time, summarized here in Listening and in Listening more, show that the people who attended were generally suspicious of 2gether. Many had stories to tell about dreadful care.

CQC staff running the events told us at the time they would look into things that we now see are absent from the report. We raised other things that they clearly should have looked into, and that are also absent from the report. It doesn’t appear that they listened to the feedback at all.

Safety

On safety the inspection report isn’t very credible. For example, it says (p. 9):

Staff did not always followed the trust policy on seclusion. Staff lacked a clear understanding of policies and how these should be applied.

That doesn’t tell us why the seclusion was unsafe. What harm did patients come to, or were they put at risk of? If staff didn’t understand or follow policy, that seems to me to be a black mark under the heading well-led, not specifically a safety issue.

The report says safety requires improvement because, amongst other things, inspectors found a location where (p. 23):

The wall paper was peeling and the skirting boards were stained.

This is meant to be unsafe?

On p. 26 we learn that inspectors came across one patient who had missed one routine physical health check.

Under the heading of safety, the bulk of the remarks in the report are positive. The overall rating of requires improvement is incomprehensible on this evidence.

However, some things that might really be unsafe are not properly explored in the report. For example, it says (p. 24) that the PICU uses a lot of bank and agency staff. One might imagine that psychiatric intensive care depends quite a lot on continuity of care, and that high usage of bank and agency staff might introduce risk, but the report doesn’t tell us one way or the other.

Moreover, the PICU comes within acute inpatient services, which were rated outstanding for safety and being well led. The staffing of the PICU, as reported, seems to contradict this rating.

There’s no information about safety that would support the rating of outstanding given for acute wards in general.

Effectiveness

On effectiveness the report begins (p. 27) by mentioning, in a roundabout way, that in some services records did not show physical health assessments being done on admission and routinely thereafter. In some services information that should have been recorded was missing or duplicated. This seems a strange way for the report to illustrate 2gether’s effectiveness.

On the same page the report tells us that

Patients in mental health services were assessed and monitored using the health of the nation outcome scales (HoNOS), which covered twelve health and social care domains.

It doesn’t tell us what proportion of patients were assessed and monitored in this way, or what the results of the assessment and monitoring were. This section is supposed to be about the effectiveness of the service. It says at the top of the page in large print:

By effective, we mean that people’s care, treatment and support achieves good outcomes, promotes a good quality of life and is based on the best available evidence.

The body of the text largely ignores this definition. I didn’t spot any mention at all of patients’ healthcare outcomes in this section. We don’t know from the report whether they’re good or not.

Inspectors seemed to think that there’s no point in inspecting actual outcomes, as long as some systems are in place that could theoretically lead to good outcomes—as long as a bunch of boxes are being ticked. Then they report that in some cases some of the boxes are not being ticked. And then they rate effectiveness as good!

Just like in the section on safety, some of the reported failings reflect poor leadership. For example, in relation to the Mental Health Act Code of Practice, which was updated in January 2015 (p. 29):

Training on the new Code of Practice had been incorporated into a professional development day for a small number of staff. However, many staff that we spoke with had not received this training and so did not fully understand the changes. The revised Code of Practice was not available on all wards.

Yet this is hidden in a section about effectiveness and good outcomes, in support of an overall rating of good.

No service was rated outstanding for effectiveness.

Caring

I was going to skip the section on caring, because even the worst NHS healthcare providers tend to be rated good on it, even if everything else needs improvement. 2gether was rated good just like all the rest. What is there to say?

I’ll just note in passing that the information in this section is inconsistent just like the rest of the report. For example (p. 31):

In most wards and teams, there was access to advocacy… However, we did not find good promotion of advocacy services in specialist community teams for children with mental health problems.

Is this failure a failure of caring? It’s more likely, again, to be a failure of leadership in those teams. The observation doesn’t belong in the section on caring.

Crisis services were rated outstanding for being caring, but there is no evidence in the report to support this. The report only says crisis services saw patients quickly (responsive, but see below) and provided good care plans (effective).

Responsiveness

On responsiveness the inspectors took the same approach as on effectiveness, looking at whether systems exist rather than whether they work. Because there are lots of systems in place, lots of boxes can be ticked, and the overall rating is good. Whether services are actually responsive to people’s real-life needs is neither here nor there.

For example, under the heading Right care at the right time the report notes that (p. 33):

The trust had set its own targets for the times from referral to assessment for a wide range of its community teams.

So there’s a system in place—so far so good. Does this system provide timely healthcare that meets people’s needs? The report doesn’t address that.

In this section, just like the rest of the report, failures of leadership are noted. For example, (p. 33):

Staff, patients and carers on the older adult wards told us that there were not enough activities on the wards and that they sometimes get cancelled.

And (p. 34):

…we found Children and young people didn’t know how to make complaints…

These kinds of failures are not the fault of unresponsive staff; they’re the fault of systems that are in place and tick boxes, but don’t work.

Crisis services were rated outstanding for being responsive, with 98% of emergency referrals seen within 4 hours. This conflicts, however, with feedback from patients and carers at the listening events. It seems possible that some patients in a crisis cannot get an emergency referral in the first place, and so never qualify for the 4-hour response. CQC inspectors knew of this feedback but did not refer to it in the report, making the outstanding rating somewhat mysterious.

Well led

On whether services are well led, the report rates leadership as good. Again, this is based on systems being in place, whether or not they work.

Under the heading Vision, values and strategy there’s no exploration of whether the trust’s strategic priorities explain its actual performance, and there’s no exploration of whether the trust’s vision and values mean anything in practice.

For example, the first of the values is (p. 35):

Seeing from a service user perspective

It’s not mentioned anywhere else in the report. Inspectors didn’t seem to consider whether it might perhaps be an empty sentiment.

CQC staff heard about the service user perspective at their listening events, but didn’t appear to notice when it turned out to be in conflict with the Trust’s perspective. An example of this is the Trust’s open door policy on wards (p. 5), which the Trust apparently promotes as a good thing but patients at the listening events saw as unsafe because of the ease of escaping.

The list of initiatives and systems in this section goes on and on, with little or no assessment of whether they work or what falls into the gaps.

For example, we learn that (p. 36):

The trust achieved ‘ward to board’ assurance through a number of mechanisms.

There’s some more detail, but what we don’t learn is whether the assurance was accurate. Was it audited at all? Or was it just that everyone nodded in unison?

This section is overwhelmingly positive in tone, but almost entirely lacking in evidence of an inspection. It could have been written by the Trust itself, without the expense of hiring any inspectors.

The reason for the overwhelmingly positive tone, however, is that the evidence of leadership failure has been salted away in other sections of the report, where you have to look hard to find it. Taking the report as a whole into account, the rating of good seems somewhat far-fetched.

Acute wards were rated outstanding for being well led, but, like the other outstanding ratings, this one has no supporting evidence. I noted above that there’s evidence the PICU isn’t well led, and high staff absenteeism was reported on another ward, so the rating doesn’t appear to be meaningful.

Historical context

The report doesn’t refer to previous CQC reports.

A report in November 2014 identified communication between senior management and staff as a risk for the Trust. The present report continues to identify communication as an issue (p. 37):

The Trust highlighted three issues which were increasing  workloads, absenteeism and improved communications.

A report in February 2012 identified care plans as a concern, amongst others, noting that:

This situation had not improved since 2010.

The present report continues to identify care plans as problematic (p. 8):

In a number of services across the trust we had some concerns… Care plans were not always comprehensive and it was not always clear whether patients had been involved in developing their care plans.

In failing to take the historical context into account, the report misses the significance of the Trust’s apparent failure to address long-standing concerns, which, again, reflects on whether the Trust is well led.

Summary and conclusion

This is a report based on only a partial inspection of services in Gloucestershire and Herefordshire. Very significant services were omitted, so that the CQC’s conclusions cannot be said to apply to the Trust as a whole.

Based on my experience of the inspection itself and the presentation and organization of the report, the inspection does not seem to have been conducted well. Its conclusions are not likely to be reliable.

The CQC’s inspection process results in a matrix of subjective ratings that cannot be taken very seriously.

On safety I found the CQC’s rating is not well supported by the evidence in the report. I suspect a more reasonable rating might be good.

On caring I noted that the NHS is generally caring, so I was not amazed to learn that Trust staff are caring too. I’m satisfied with the rating good.

On effectiveness and responsiveness, I found that the inspection looked at whether systems were in place without determining whether the systems work. It’s difficult to know what to make of this. It seems likely that if there had been evidence that the systems work, then staff would have told inspectors and inspectors would have put the evidence in the report. Absence of evidence is some evidence of absence in this case. I suspect that a more reasonable rating for both these key questions might be requires improvement.

On whether services are well led I found plenty of evidence of ineffective leadership. The report hides the evidence in the other four sections, making it hard to find, and the report doesn’t consider the context of previous reports. I suspect a more reasonable rating might be requires improvement or worse.

 



One Response to Inspection

  1. Jacqueline Thomas says:

    I feel the same. I had mental health problems for 20 years in Herefordshire, and I wouldn’t rate them as good.

    The rating reflects more of a political agenda.

    If the CQC highlighted the issues within the trust and gave it a poor rating, would the finger not be pointed at the CQC?
