After spending a long time working with the Improving Access to Psychological Therapies (IAPT) programme’s performance indicators, I still feel they don’t make it obvious how well IAPT performs. I have come to think that the reason for this is that the indicators give an insider view of parts of the IAPT programme, concealing the big picture. What’s needed is just the opposite, a view of IAPT that ignores the details of its internal workings.
From the point of view of ordinary members of the public, who don’t necessarily know anything about the common mental disorders, differential diagnosis, the stepped-care model, treatment modalities or caseness thresholds, the IAPT indicators provide too much information that is impossible to interpret.
This makes it too easy for IAPT providers, commissioners and IAPT itself to pick and choose statistics that create whatever impression they want. I only came to understand this through discussions about waiting times for IAPT treatment in Gloucestershire.
There’s a kind of stalemate in Gloucestershire between the LINk and the IAPT provider, 2gether. According to LINk, people have to wait too long to get treatment. According to 2gether, waiting times are within the target.
The stalemate is documented in 2gether’s Quality Report for 2011-12. LINk says (page 37):
“During the year, we have had a number of comments on the waiting time for IAPT services.”
Its report on suicide prevention put it more strongly, for example (page 45):
“It is demoralising to wait so long for the right treatment and at the right level.”
2gether says (page 20):
“At year end the average waiting times were as follows:
• Step 2 intervention 18.4 days
• Step 3 intervention 27.4 days
“An explanation of what is meant by Step 2 and Step 3 intervention would be helpful in this section.”
The difficulty here is that LINk is expressing the patient’s point of view, without knowing or needing to know anything about how the internals of the service work. 2gether appears to be choosing internal statistics carefully to give the impression that it meets a 28-day national target.
Public perception of the service is that people can self-refer or be referred by their GP or another healthcare professional. It’s as if a stopwatch starts ticking at that moment.
The stopwatch stops when the patient gets access to effective treatment. The provider’s internal processes and steps are irrelevant, as are treatments that don’t work.
For example, a man goes to his GP and is diagnosed as having moderate depression. He is advised to wait and see if it improves. After a month it hasn’t improved. He is advised to contact IAPT, who send him a booklet about healthy lifestyles. After a month of trying to improve his lifestyle he is no better. Now he’s put on a waiting list to join a group. After a month, he gets a place and starts attending the groups. A month later he is no better, so he’s put on a waiting list for psychotherapy. After a month he sees a therapist, and a month later he has recovered.
This man will tell LINk he had to wait five months to get effective treatment from IAPT, because five months elapsed between telling his GP and seeing the therapist, but 2gether will tell LINk that he only had to wait a month between steps.
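The gap between the two accounts can be made concrete with a little arithmetic. Here is a minimal sketch of the example above (all the figures are the hypothetical ones from the story, not real data): the patient's stopwatch runs from first GP visit to effective treatment, while the provider reports only the longest single wait between internal steps.

```python
# Illustrative timeline from the example above (all figures hypothetical).
# Each entry is (stage, days waited before that stage began).
stages = [
    ("watchful waiting after GP visit", 30),
    ("lifestyle booklet from IAPT", 30),
    ("waiting list to join a group", 30),
    ("attending the group, no improvement", 30),
    ("waiting list for psychotherapy", 30),
]

# The patient's stopwatch: first GP visit to effective treatment.
patient_wait = sum(days for _, days in stages)

# The provider's view: the wait between any two consecutive steps.
provider_wait = max(days for _, days in stages)

print(patient_wait)   # 150 days -- the five months the patient reports
print(provider_wait)  # 30 days -- the "month between steps" the provider reports
```

Both numbers are true; they simply answer different questions, which is why the stalemate persists.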
Unfortunately the national indicators don’t reveal the big picture around waiting times, so the stalemate in Gloucestershire can’t be resolved that way. The indicators do, however, reveal a bigger picture around how successful IAPT is from a patient’s point of view.
An ordinary person’s perception of a successful service is that everyone referred gets treatment that works. The national indicators include measures of both parts of this: the number of people referred and the number who recover.
There’s a slight complication caused by some people being referred for treatment when they don’t have anything to recover from, but this is easy to correct for. It means that measuring how successful IAPT is requires three indicators: the number of people referred, the number of people who recover, and a correction for the number of people who had nothing to recover from but were treated anyway.
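As a sketch of how the three indicators might be combined: the source doesn't spell out the exact formula, so the subtraction below is my assumption about the correction, chosen because it is one simple way the rate can come out negative when over-treatment outweighs genuine recoveries.

```python
def success_rate(referred, recovered, treated_below_caseness):
    """Perceived success: recoveries per referral, corrected for people
    who were treated but had nothing to recover from.

    The subtraction form of the correction is an assumption, not the
    documented national methodology.
    """
    return (recovered - treated_below_caseness) / referred

# Hypothetical figures, not real PCT data:
print(success_rate(1000, 320, 10))  # 0.31 -- close to "a third"
print(success_rate(1000, 50, 60))   # negative: over-treatment outweighs recoveries
```

Under this reading, a negative rate means more people without caseness were treated than people who actually moved to recovery.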
Another complication is that some people may see their GP for a common mental health condition but not be referred to IAPT, so that these people don’t appear in the IAPT indicators at all. From the patient’s perspective the NHS starts to treat them when they first see their GP about the problem, but the best we can do is measure from when they are referred to IAPT. Similarly, some people might go on to receive further NHS treatment after IAPT, and recover as a result, but they don’t appear in the IAPT indicators either.
Extending my previous analysis of PCT IAPT performance, I’ve added a new indicator of success: the percentage of people referred who were unwell at the start and moving to recovery at the end of treatment. I’ve ranked the PCTs by this indicator, which measures the perceived success of the service.
For example, in Rotherham (1st) very nearly a third of people referred and unwell were moving to recovery at the end of treatment. But in South Birmingham (150th) so many people who had nothing to recover from were treated anyway that they slightly outweighed any real success the service had, making the success rate negative.
Here’s the histogram showing the distribution of success by PCT:
Gloucestershire comes out of this looking good, but the quarterly results contain a hidden twist, as you can see on this graph:
That 20% success rate was the average over the year, but during the year the success rate fell dramatically. At the start of the year, IAPT in Gloucestershire was successful for very nearly a third of people referred. By the end of the year, it was successful for only about one in thirteen people referred.
Although the published figures don’t show it directly, it seems very likely that the long waiting times people complained to LINk about were from the start of the year, when IAPT was achieving a high success rate. By the end of the year, which is when the waiting times in 2gether’s quality report were measured, the success rate was less than a quarter of what it had been, and below the national average.
I plan to continue tracking the success rate as future quarterly results are published.