Poetical science

16 04 2015

The interdisciplinary struggle experienced by Ada Lovelace, the world’s first computer programmer, as described by Betty Toole (1996):

Her mother, Lady Byron, had the reputation of being a fine mathematician; her father was the famous poet. Ada’s struggle to unite the conflicting strains in her background was especially difficult, since her parents separated when she was only five weeks old. Yet her father’s heritage could not be ignored. In frustration Ada described this struggle when she wrote in an undated fragment to Lady Byron: “You will not concede me philosophical poetry. Invert the order! Will you give me poetical philosophy, poetical science?”





Thinking about you…

23 03 2015

I often think about you Sven
Or is it Anders, Björn or Christer?
Packing parachute in Gothenburg laundry room
Your face so very flustered

I returned to remove wet clothes
To get the drier going
There you were still packing
Looking worried and startled

So I wonder – hope! – if one day
When the time to jump was near
You opted to stay in the aeroplane
Paralysed with faulty-chute fear





Another open letter to Treasury

18 03 2015

Dear Glenda Jackson,

Thank you very much for writing to Treasury on my behalf and forwarding on the reply from Mr Danny Alexander.

I specifically asked who, at Treasury level and above, is responsible for budget decisions in relation to mental health. The reply, “Ministers and Civil Servants”, though true, does not answer my question. Is Mr Alexander claiming (by omission) that Sir Nicholas Macpherson, Sharon White, John Kingman, Mark Bowman, Dave Ramsden, Charles Roxburgh, and Indra Morris are not involved? Does Mr Alexander take any responsibility for the budget decisions? Can he not name the senior Civil Servants (and others?) responsible for the analyses?

I also asked for documentation on the rationale behind decisions – any decisions. Let me be more specific in this letter: how was the figure of £1.25bn, recently identified for child mental health, calculated?

Yours sincerely,

Andy Fugard





Comment on Peter Kinderman’s blog post

11 02 2015

(Peter’s blog post.)

Peter Kinderman seems to be arguing that it doesn’t matter if an experience is classified as resulting from disease, illness, disorder, or a response to circumstance (genetically mediated or otherwise). People who have “obvious and quantifiable needs” should get the help they need with social challenges which may have led to the difficulties in the first place. They should have someone to talk to so they can make sense of what has happened. Removing the category of illness doesn’t remove distress, doesn’t mean people shouldn’t be helped. This makes a lot of sense.

Much has been said about the problems with diagnostic categories and with naïve reification to biological entities. You have disease D if and only if you have symptoms S1, S2, … Sn. Why do you have those symptoms? Why of course it’s because you have disease D. I think we can safely conclude, along with many others, that this is circular. An argument that we “need” diagnoses to care for people is unconvincing.

Should we completely throw away what has been collected in diagnostic tomes, however flawed? I don’t think we should.

One complaint about DSM and ICD is that they cover all aspects of human experience. Most of us can find a diagnosis in there, especially if interpreting the descriptions broadly. But in many ways this is a strength — when naïve reification is eliminated. Denny Borsboom, Angelique Cramer and others have done important work extracting the individual complaints (e.g., loss of interest, thinking about suicide, fatigue, muscle tension) which make up diagnoses and modelling how they relate to each other (Borsboom, Cramer, Schmittmann, Epskamp, & Waldorp, 2011; Borsboom & Cramer, 2013). The individual descriptions and their interrelationships might gain in meaning when stripped of their diagnostic group.

Describing the sorts of situations people find themselves in and how they feel is crucial for conducting research and helping build up evidence for what works. When is talking therapy helpful? When might it make more sense for people to work four days a week rather than five? When should a focus be on interpersonal problems and who should be involved in sessions?

DSM-5 includes a chapter on “Other conditions that may be a focus of clinical attention” (American Psychiatric Association, 2013, pp. 715–727). It’s brief, making up only about 2% of the book, and should be expanded; however, it seems relevant to a psychosocial approach and could perhaps be combined with other descriptions of predicaments and problems. Example problems include:

  • High expressed emotion level within family
  • Spouse or partner violence
  • Inadequate housing
  • Discord with neighbour, lodger, or landlord
  • Problem related to current military deployment status
  • Academic or education problem
  • Social exclusion or rejection
  • Insufficient social insurance or welfare support

So, “DSM” is not synonymous with “biological”. There is again plenty to be built upon, despite its problems.

Kinderman argues that practitioners “can offer practical help, negotiate social benefits (which could be financial support, negotiated time off work, or deferred studies, for example), or offer psychological or emotional support.” It was great to see specific examples. Medication also likely has a place, especially when its mechanisms of action are conceptualised in a drug-centred way rather than keeping up the pretence that drugs cure a disease (Moncrieff & Cohen, 2005). I think we all should be doing more to elaborate how a meaningful psychosocial approach can work in practice.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC.

Borsboom, D., & Cramer, A. O. J. (2013). Network analysis: an integrative approach to the structure of psychopathology. Annual Review of Clinical Psychology, 9, 91–121. doi:10.1146/annurev-clinpsy-050212-185608

Borsboom, D., Cramer, A. O. J., Schmittmann, V. D., Epskamp, S., & Waldorp, L. J. (2011). The small world of psychopathology. PLoS ONE, 6(11), e27407. doi:10.1371/journal.pone.0027407

Moncrieff, J., & Cohen, D. (2005). Rethinking models of psychotropic drug action. Psychotherapy and Psychosomatics, 74, 145–153. doi:10.1159/000083999





Big White Wall “Evaluative Review”

29 01 2015

I was having trouble getting a copy of an “independent review” of Big White Wall. Their website states, “A copy of the independent review is available on request”. When I asked for a copy (May 2014), they replied that “there is some potentially commercially sensitive data in this review, so we’re not able to share it directly.”

Discovered today (Jan 2015) that it’s online over here.

(I have also mirrored it here in case it disappears.)





Mental healthcare funding in England

6 01 2015

Probability of a useful reply low — but let’s see what comes back…

Sent: 04 January 2015 14:22
To: public.enquiries@hmtreasury.gsi.gov.uk
Subject: Mental health budget decision responsibility and advisors

Madam/Sir,

Blame for insufficient mental healthcare budgets has been passed around between DH, NHSE, and CCGs; however, the source of funding is the Treasury. Could you please send a summary of the people responsible for decisions made in relation to mental health budgets and who advises them (I’m interested in substantive causal responsibility rather than bureaucratic responsibility). For example, are any of these members of the management team responsible for mental health budgetary advice?

Sir Nicholas Macpherson
Sharon White
John Kingman
Mark Bowman
Dave Ramsden
Charles Roxburgh
Indra Morris

Also, what documentation exists on decisions made (at Treasury level or above) in relation to mental healthcare, for instance concerning Improving Access to Psychological Therapies, inpatient beds, Payment by Results/Payment Systems, and questions around the involvement of the “third sector”?

I’d be grateful for any information.

Best wishes,

Andy





Worrying developments in NHS England mental health outcomes monitoring

19 12 2014

Mental health service users hope that the therapeutic interventions they receive will help them feel better. Randomised controlled trials are one important way to test whether a therapy “works”; however, they don’t reveal how interventions are experienced in routine care. This has led to routine outcomes monitoring, which uses questionnaires to ask service users and clinicians to rate symptoms and other relevant information before, during, and after treatment. Outcomes monitoring has been used by NHS services for some years, for example through the Child Outcomes Research Consortium and Improving Access to Psychological Therapies (IAPT). It is, however, controversial. Ros Mayo (2010, p. 63), for example, argues that:

“The application of oversimplified questions requiring tick-box answers … are driven by short-term and superficial policies and management techniques, largely incorporated from industry and the financial sector and primarily concerned with speed, change, results, cost effectiveness – turnover and minimising human contact and time involvement… They have nothing to do with human engagement…”

And yet, outcomes monitoring could be better than bureaucracy. There is emerging evidence that providing regular progress feedback to clinicians improves outcomes, especially when questionnaires are completed by service users. Intuitively this seems to make sense: people could sometimes reveal more about how they feel on paper than they can orally face-to-face. Items used in IAPT include:

  • “How often have you been bothered by… not being able to stop or control worrying?”
  • “Heart palpitations bother me when I am around people”

They also ask directly about the care received, for example:

  • “Did staff listen to you and treat your concerns seriously?”
  • “Did you feel involved in making choices about your treatment and care?”

Responses to these items could help clinicians understand to what extent services are helping service users. Also, using standardised questionnaires means that expected progress curves can be developed (for example, see work by Lambert and colleagues), so clinicians can see if progress is slower than would be expected given the initial assessment and, if warranted, try a different approach.
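
To make the idea concrete, here is a minimal sketch, in Python, of how an expected progress curve might be used to flag slower-than-expected progress. The curve shape, parameters, and the client’s scores are all invented for illustration and are not taken from Lambert’s work:

```python
# A toy "expected progress curve": assume (purely for illustration) that symptom
# scores typically decay exponentially from the intake score towards a typical
# end-of-treatment score. Real systems estimate such curves from large datasets.
import numpy as np

def expected_scores(intake: float, n_sessions: int,
                    typical_end: float = 8.0, rate: float = 0.25) -> np.ndarray:
    """Expected symptom score at each session, given the intake score (hypothetical model)."""
    sessions = np.arange(n_sessions)
    return typical_end + (intake - typical_end) * np.exp(-rate * sessions)

intake_score = 20.0
observed = np.array([20.0, 19.5, 19.0, 19.5, 18.0, 18.5])  # made-up client scores
expected = expected_scores(intake_score, len(observed))

# Flag sessions where the client is doing clearly worse than the expected curve.
off_track = observed > expected + 2.0
for session, (obs, exp, flag) in enumerate(zip(observed, expected, off_track)):
    note = "  <- slower progress than expected" if flag else ""
    print(f"session {session}: observed {obs:4.1f}, expected {exp:4.1f}{note}")
```

In practice the expected curve and the “off track” threshold would be estimated from routine data and stratified by initial severity, rather than fixed by hand as here.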

It’s early days for outcomes monitoring but the above examples suggest that it could be a promising approach. However, closer examination shows that there are clear problems with how questionnaires are being used in practice, and I think NHS services in England are being asked to implement actively damaging approaches to outcomes monitoring.

Problem 1. Use of an unreliable measure

Suppose you wish to develop a rating scale for quality of life or how distressed you are so that you can monitor progress over time with a summary “score”. As a bare minimum requirement, all of the questions for one topic should be related to each other: the items should be “internally consistent”. This questionnaire probably wouldn’t do very well:

THE GENERAL STUFF QUESTIONNAIRE

  1. How often do you sing in the shower?
  2. What height are you?
  3. How far do you live from the nearest park?
  4. What’s your favourite number?
  5. How often do you go dancing?

You might learn interesting things from some of the individual answers, but summing all the answers together is unlikely to be revealing. This questionnaire would fare better:

THE RELIABLE FEELINGS QUESTIONNAIRE

  1. How do you feel? (0 is terrible, 10 fantastic)
  2. How do you feel? (0 is terrible, 10 fantastic)
  3. How do you feel? (0 is terrible, 10 fantastic)
  4. How do you feel? (0 is terrible, 10 fantastic)
  5. And finally… how do you feel? (0 is terrible, 10 fantastic)

However, you might wonder if questions 2 to 5 add anything. There are many ways to test the internal consistency of a questionnaire using the answers that people give. One is a formula called Cronbach’s alpha, which gives answers from 0 to 1. Higher, say around 0.8, is better. Too close to 1 suggests redundancy in the questions, as would be likely for the Reliable Feelings Questionnaire above.
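
For readers who want to see the arithmetic, here is a small sketch of Cronbach’s alpha applied to the two toy questionnaires above. The function is standard, but the response data are simulated purely for illustration:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores).
# The "questionnaire" data below are simulated to mimic the two toy examples.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = people, columns = items on one questionnaire."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed score
    return n_items / (n_items - 1) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
n_people, n_items = 200, 5

# "General Stuff": five unrelated items, so alpha should be near 0.
general_stuff = rng.normal(size=(n_people, n_items))

# "Reliable Feelings": five near-identical items, so alpha should be close to 1.
mood = rng.normal(size=(n_people, 1))
reliable_feelings = mood + 0.3 * rng.normal(size=(n_people, n_items))

print(round(cronbach_alpha(general_stuff), 2))      # roughly 0
print(round(cronbach_alpha(reliable_feelings), 2))  # close to 1
```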

In England, it is now recommended to use a “Mental Health Clustering Tool” to evaluate outcomes (see section 7.1 of recent guidelines). This is a questionnaire completed by clinicians covering areas such as hallucinations, delusions, depression, and relationship difficulties. The questionnaire suffers from a very basic problem: it’s not internally consistent. This has been discovered by the very people who proposed the approach (see p.30 of their report): “As a general guideline, alpha values of 0.70 or above are indicative of a reasonable level of consistency”. Their results are: 0.44, 0.58, 0.63, 0.57 – conspicuously smaller than 0.70. The authors also refer to previous studies explaining that this would always be the case, due to “its original intended purpose of being a scale with independent items” (p. 30). So, by design, it’s closer to the General Stuff Questionnaire above: a mixed bag of independent questions with low reliability.

Problem 2. Proposals to link outcomes to payment

Given evidence that collecting regular feedback might improve the quality of care people receive, it seems a good thing that the IAPT programme includes regular progress monitoring. IAPT uses questionnaires completed by service users, which could in principle provide information clinicians might not otherwise have learned. There is, however, another potential difficulty over and above the quality of the questionnaires used: how external influences such as “Payment by Results” (PbR) initiatives can change for the worse how data is gathered and used. And PbR initiatives are beginning to be used in practice. The IAPT webpage notes, “An outcome based payment and pricing system is being developed for IAPT services. This is unique as other systems of PbR are activity or needs based.” Initial pilot results were “encouraging,” says the web page, and another pilot is currently running.

The idea with this proposal is that the more improvement shown by service users, as partly determined by outcomes scores, the more money service providers would receive. This is a worry, as linking measures to targets tends to make the measures stop measuring what they are supposed to measure. For instance, targets on ambulance response times have led to statistically unlikely peaks at exactly the target, suggesting that recorded times have been changed. A national phonics screen has a statistically unlikely peak just at the cutoff score, suggesting that teachers have rounded marks up where they fell just below the cutoff. The effect has been around for so long that it has a name, Goodhart’s law:

“Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
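
To see how such gaming shows up in data, here is a toy simulation of scores being nudged up to a pass mark. The scale, pass mark, and score distribution are invented and are not taken from the phonics screen or ambulance figures:

```python
# A made-up illustration of the "peak at the cutoff" pattern: honest scores on a
# 0-40 scale, with any score falling just below the pass mark nudged up to it.
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.binomial(n=40, p=0.75, size=10_000)  # honest underlying scores

pass_mark = 32
reported = np.where(
    (true_scores >= pass_mark - 2) & (true_scores < pass_mark),
    pass_mark,           # scores of 30 or 31 get "rounded up" to the pass mark
    true_scores,
)

for score in range(29, 35):
    honest = np.sum(true_scores == score)
    gamed = np.sum(reported == score)
    print(f"score {score}: honest {honest:5d}, reported {gamed:5d}")
# The reported distribution has a dip just below 32 and a spike exactly at 32:
# the statistically unlikely peak described above.
```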

Faced with funding cuts, how many NHS managers in overstretched services will be forced to “game” performance-based payment systems to ensure their service survives? It’s not hard to do: for example, people who drop out of therapy tend to do so because they didn’t think it was helping, and it can be easy to justify not asking people who leave therapy to complete questionnaires. Those who stay in therapy may be the ones with higher scores (see Clark, 2011, p. 321). Therefore, the missing data from those whom therapy was not helping could lead to a false picture of how well a service is working for service users. It is difficult to see how data gathered under these conditions could tell clinicians or service providers anything helpful about their services or the wellbeing of those who use them.
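
A small simulation illustrates the point about selective missing data. The numbers are invented, and the dropout mechanism is just one plausible assumption:

```python
# If the people improving least are the most likely to drop out before completing
# an end-of-treatment questionnaire, the average among completers looks better
# than the true average. All figures here are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
improvement = rng.normal(loc=3.0, scale=5.0, size=n)  # points of symptom improvement

# Lower improvement -> higher chance of dropping out (a simple logistic assumption).
p_dropout = 1 / (1 + np.exp(improvement))
completed = rng.random(n) > p_dropout

print(f"True average improvement (everyone):  {improvement.mean():.2f}")
print(f"Observed average (completers only):   {improvement[completed].mean():.2f}")
# The observed figure overstates how much the average service user improved.
```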

Concluding thoughts

I think it is possible that statistically reliable, high quality questionnaires could be helpful in clinical practice, if thoughtfully used and explained to service users. Perhaps such questionnaires could be thought of as a bit like blood pressure readings; few patients feel reduced to numbers when told their results, and it’s obvious that more information is required to formulate a treatment if something is awry. However, using unreliable measures – especially when the developers know they are unreliable – is unacceptable for a national mental health programme. Moreover, linking questionnaire scores to payment raises even more complex ethical issues: there is a risk that the bureaucratic burden of questionnaires for service users would increase, and poorer treatment and financial decisions could result because those decisions would be made on the basis of low-quality, unreliable data. Mental health services need to do much better than that, for the sake of everyone’s wellbeing.

Thanks very much to Martha Pollard and Justine McMahon for helpful comments.

Andy Fugard is a Lecturer in Research Methods and Statistics at the Department of Clinical, Educational and Health Psychology, University College London. His research investigates psychological therapies for children and young people: whether they are effective in routine practice and what moderates their effectiveness. He is also interested in policy around practice-based evidence and recently became a member of the NHS England/Monitor Quality and Cost Benchmarking Advisory Group.







