A New Start for the CQC

The Care Quality Commission (CQC) is consulting on proposals for the first phase of changes to the way it goes about inspecting and regulating health and social care services. This first phase particularly affects acute hospitals, both NHS and independent, and it will be implemented from October 2013. My view is that the proposals are poorly thought out and that they will eventually lead to renewed criticism of the CQC. Here are my responses to the consultation, which closes on August 12.

General

1. What do you think about the overall changes we are making to how we regulate? What do you like about them? Do you have any concerns?

I think the overall changes proposed contain some good ideas but are poorly thought out, so that, as they stand, they will lead to renewed failures of regulation followed by renewed criticism of the CQC.

I like the idea that standards should be simpler and clearer, the idea of intelligent monitoring, and the idea of expert inspection.

I have concerns that the implementation of these ideas has not been planned in enough detail, so that unintended consequences may outweigh the benefits.

2. Do you agree with our definitions of the five questions we will ask about quality and safety (is the service safe, effective, caring, responsive and well-led)?

No. I only agree with the first three of the five questions (safe, effective and caring). I disagree with the other two (responsive and well-led).

The “responsive” question is redundant because it is entirely covered by “safe” and “effective”. A service can only be safe and effective if it is responsive, while an unresponsive service cannot be safe and effective. Your claim that “responsive” is a separate question is bogus. The “Unacceptable care example” that you give to illustrate “responsive” is a clear example of an uncaring service, making it obvious that the “responsive” question fails to add any value. This question should be dropped.

The “well-led” question fails to put people who use services at the centre of regulation, because people who use services are not the people being led. Leadership of a service is a means to an end, not an end in itself. CQC should not attempt to meddle in the means that providers adopt to provide safe, effective and caring services. Again, the “Unacceptable care example” that you give to illustrate “well-led” is a clear example of an unsafe and ineffective service, making it obvious that the “well-led” question fails to add any value when applied in this way. Even if CQC did attempt to assess leadership separately, CQC’s meddling would have a disruptive effect in cases where a provider’s perfectly effective leadership style happened not to agree with some internal CQC preconception. This question should be dropped.

Fundamentals of care

3. Do you think any of the areas in the draft fundamentals of care above should not be included?

It is not clear what “areas in the draft fundamentals of care” refers to. The phrase “draft fundamentals” is only used in the consultation questions. It does not appear anywhere in the actual text of the consultation document, so it is impossible to answer this question with any certainty.

The examples of fundamentals of care listed on pages 13-14 all seem OK to include.

4. Do you think there are additional areas that should be fundamentals of care?

Yes. For example, it should be a fundamental that care and treatment are strictly appropriate to a person’s condition. The list on pages 13-14 makes it OK for a nurse in a hospital to give a patient unnecessary drugs, or drugs prescribed for some other patient, as long as it turns out in the end that no harm was done. Risk to the patient should be taken into account.

5. Are the fundamentals of care expressed in a way that makes it clear whether a standard has been broken?

No. Several of the fundamentals listed on pages 13-14 are woolly. They could not possibly be used as the basis for prosecutions in their present form. An example is the term “harm” (with the difficulty acknowledged in the footnote). Other examples are the phrases “when I need it” and “organised properly”.

6. Do the draft fundamentals of care feel relevant to all groups of people and settings?

No. For example, it seems likely that the boundaries of care would be different for people being treated in their own homes. Should a community psychiatric nurse arrange for a patient’s home to be cleaned before providing support for depression? Perhaps in some circumstances, but not as a universally applied fundamental of care.

Intelligent monitoring of NHS acute hospitals

7. Do you agree with the proposals for how we will organise the indicators to inform and direct our regulatory activity?

No. It is not OK to restrict the types of indicator that may trigger action. For example, if you ignore a significant one-off data collection just because it is “not yet nationally comparable” your inaction will once again bring your entire methodology into disrepute.

8. Do you agree with the sources we have identified for the first set of indicators? Please also refer to the annex to this consultaiton [sic].

The sources do not seem to be well defined, making it unclear what the difference is between Tier 1 and Tier 2 (in figure 3 on page 25). Tier 2 seems to consist of indicators that you might arbitrarily choose to ignore no matter how bad things get, even though these indicators are nationally comparable.

Also, some of the potential indicators in table 1 on page 26 are inadequate. For example, if you use “Deaths of people in contact with the service” it might encourage providers to discharge patients who are at high risk of suicide so that they are no longer in contact with the service.

9. Which approach should we adopt for publishing information and analysis about how we monitor each NHS trust? Should we:
− Publish the full methodology for the indicators?

Yes. Some indicators will inevitably turn out to be inadequate, and if you don’t publish the methodology you’ll be accused of (yet more) cover-up.

− Share the analysis with the providers to which the analysis relates?

Yes, and not only with the providers but directly with other stakeholders. Otherwise the providers may try to conceal analysis that they consider unfavourable and you’ll be implicated in the cover-up. For example, NHS foundation trusts might try to conceal details from their councils of governors.

− Publish our analysis once we have completed any resulting follow up and inquiries (even if we did not carry out an inspection)?

Yes.

Inspections

10. Do you agree with our proposals for inspecting NHS and independent acute hospitals?

Yes, mostly. However, there is potential to be misled by inspections, audits and accreditation schemes run by other bodies, because they may not always be thorough or impartial. Furthermore, working too closely with other organisations risks exactly the kind of groupthink that led to problems in Mid Staffordshire being ignored for so long.

Ratings

11. Should the rating seek to be the ‘single, authoritative assessment of quality and safety’?

No. Ratings will prove fallible, and it is better to acknowledge this in advance. For example, an ‘outstanding’ provider might reorganise its services or take on a new contract in a way that turns out to be a disaster, making a nonsense of your rating.

Although the sources of information to decide a rating will include indicators and the findings of others, should the inspection judgement be the most important factor?

Yes.

12. Should a core of services always have to be inspected to enable a rating to be awarded at either hospital or trust level?

Yes.

13. Would rating the five key questions (safe, effective, caring, responsive and well-led) at the level of an individual service, a hospital and a whole trust provide the right level of information and be clear to the public, providers and commissioners?

Yes, with two provisos. One proviso is that “service” should be defined in a clear and realistic way. For example, if “emergency care” is a service, then from a patient’s point of view it may depend on 111, 999, the ambulance service, A&E, a hospital ward, and perhaps other parties all working well together. It’s not clear whether this is what you mean by “service”, or whether you really mean a contract between a single commissioner and a single provider, or something else.

The other proviso is that “responsive” and “well-led” are intrinsically misleading, as outlined in 2. above.

14. Do you agree with the ratings labels and scale and are they clear and fair?

No. The ratings labels are not all clear.

“Inadequate” is a misleadingly mild description of “serious and systemic failings”. A better label would be “Failing”.

“Requires improvement” is a misleadingly mild description when “fundamentals of care are breached”. A better label would be “Inadequate”.

“Good” is OK.

“Outstanding” is too easy to achieve. The description implies that a trust could be rated “outstanding” while some of its services are in breach of fundamental standards. “Outstanding” should be reserved for trusts and hospitals where all services are at least good and some are outstanding. For services, “outstanding” should mean world-class performance that exceeds national standards.

15. Do you agree with the risk adjusted inspection frequency set out which is based on ratings, i.e. outstanding every 3-5 years, good every 2-3 years, requires improvement at least once per year and inadequate as and when needed?

Yes, except that the risk adjustment is not well defined and needs more work. For example, a provider that enters into a financially “significant transaction” such as a merger or acquisition, or that undergoes a significant change of high-level personnel, should be inspected soon afterwards regardless of its past performance, because of the risk of disruption to services.

General

16. The model set out in this chapter applies to all NHS acute trusts. Which elements of the approach might apply to other types of NHS provider?

The same model has to be used for all providers, because the ideal of an “NHS acute trust” that only provides a certain restricted range of hospital-based services is not realistic. Providers can and will take on contracts for a variety of services that cross boundaries between “acute”, “mental health”, “social care” and so forth, and it would be absurd not to treat them consistently.

Annex to the consultation

A1.  Do you agree with the principles that we have set out for assessing indicators?

Yes and no. It’s hard to disagree with any of the principles individually, yet taken together they are impossibly idealistic. It seems likely that no indicator will ever satisfy all the principles. For this reason it would be better to call them ideals rather than principles.

A2.  Do you agree with the indicators and sources of information?

In most cases I don’t know enough to comment. However, taking stroke as an example, “% of patients scanned within 1 hour” isn’t ideal. The clinically significant time is between the stroke event and the scan, but this indicator is only a measure of the time between hospital admission and the scan. Hospitals can delay admission, putting patients at risk while improving their score on this indicator. They can also delay interpreting and acting on the scan. This is not to say that the indicator should be scrapped. It may be the best indicator available. But CQC should be transparent about indicators that are not ideal.

Another example is “A&E waiting times under 4 hours”, which was criticised by the Commons Health Select Committee in its report, Urgent and emergency services (July 24, 2013). The Committee recommended “a broader assessment of patient outcome and experience”, and I agree.

A3.  Are there any additional indicators that we should include as ‘tier one’ indicators?

Probably. For example, in “Qualitative Intelligence” dimension N, staff concerns reported through normal management channels (as opposed to whistleblowing) are a glaring omission. And in dimension S, even planned changes in leadership create risk, as do “significant transactions” such as mergers and acquisitions.

A4.  Do the proposed clinical areas broadly capture the main risks of harm in acute trusts? If not, which key areas are absent?

Unclear, though the methodology seems reasonable.

A5.  Do you agree with our proposal to include more information from National Clinical Audits once it is available?

Yes.

A6.  Do you agree with our approach of using patient experience as the focus for measuring caring?

Yes.


About Rod

Chairman of the Gloucestershire charity Suicide Crisis, Vice Chair of Relate Gloucestershire & Swindon, and an enthusiast for public involvement in the NHS.