An argument against payment-by-outcomes for mental health

I have just seen a report on Payment by Results (PbR) for the adult Improving Access to Psychological Therapies (IAPT) programme and have concerns about the approach. The summary concludes that “the system appears feasible and the currency appears to be fit for purpose”, which suggests that the approach is going ahead.

This IAPT PbR proposal is outcomes based: the more improvement shown by service users, as partly determined by patient-reported outcome measures (PROMs), the more money service providers would receive. This is a worry, as there is evidence that linking measures to targets tends to make the measures stop measuring what they were intended to measure. For instance, targets on ambulance response times have led to statistically unlikely spikes at exactly the target time, suggesting that recorded times have been altered [1]. A national phonics screen shows a statistically unlikely spike just at the cutoff score, suggesting that teachers have rounded up marks that fell just below it [2]. The effect has been recognised for long enough to have a name, Goodhart’s law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes” [3]. Faced with funding cuts, how many NHS managers will be forced to “game” performance-based payment systems to ensure their service survives?

PROMs have been criticised by therapists for creating an “administratively created reality” [5] and for being clinically unhelpful, perhaps even damaging. However, evidence is building that feeding back PROM results to clinicians helps improve care [4]. It would be very sad indeed if this useful tool were destroyed by payment systems just as many mental health practitioners, and more importantly service users, are seeing the benefits. Linking outcomes algorithmically to finances seems a bad idea in general, and an especially bad one when PROMs are only just beginning to be trusted in routine practice.

References

[1] G. Bevan and C. Hood, “What’s measured is what matters: targets and gaming in the English public health care system,” Public Adm., vol. 84, no. 3, pp. 517–538, 2006.

[2] L. Townley and D. Gotts, “Topic Note: 2012 Phonics Screening Check Research report,” 2013.

[3] C. A. E. Goodhart, “Monetary relationships: A view from Threadneedle Street,” 1975.

[4] C. Knaup, M. Koesters, D. Schoefer, T. Becker, and B. Puschner, “Effect of feedback of treatment outcome in specialist mental healthcare: meta-analysis,” Br. J. Psychiatry, vol. 195, no. 1, pp. 15–22, Jul. 2009.

[5] J. McLeod, “An administratively created reality: Some problems with the use of self-report questionnaire measures of adjustment in counselling/psychotherapy outcome research,” Couns. Psychother. Res., vol. 1, no. 3, pp. 215–226, Dec. 2001.
