Comment on Peter Kinderman’s blog post

11 02 2015

(Peter’s blog post.)

Peter Kinderman seems to be arguing that it doesn’t matter if an experience is classified as resulting from disease, illness, disorder, or a response to circumstance (genetically mediated or otherwise). People who have “obvious and quantifiable needs” should get the help they need with social challenges which may have led to the difficulties in the first place. They should have someone to talk to so they can make sense of what has happened. Removing the category of illness doesn’t remove distress, doesn’t mean people shouldn’t be helped. This makes a lot of sense.

Much has been said about the problems with diagnostic categories and with naïve reification to biological entities. You have disease D if and only if you have symptoms S1, S2, … Sn. Why do you have those symptoms? Why of course it’s because you have disease D. I think we can safely conclude, along with many others, that this is circular. An argument that we “need” diagnoses to care for people is unconvincing.

Should we completely throw away what has been collected in diagnostic tomes, however flawed? I don’t think we should.

One complaint about DSM and ICD is that they cover all aspects of human experience. Most of us can find a diagnosis in there, especially if interpreting the descriptions broadly. But in many ways this is a strength — when naïve reification is eliminated. Denny Borsboom, Angelique Cramer and others have done important work extracting the individual complaints (e.g., loss of interest, thinking about suicide, fatigue, muscle tension) which make up diagnoses and modelling how they relate to each other (Borsboom, Cramer, Schmittmann, Epskamp, & Waldorp, 2011; Borsboom & Cramer, 2013). The individual descriptions and their interrelationships might gain in meaning when stripped of their diagnostic group.

Describing the sorts of situations people find themselves in and how they feel is crucial for conducting research and helping build up evidence for what works. When is talking therapy helpful? When might it make more sense for people to work four days a week rather than five? When should a focus be on interpersonal problems and who should be involved in sessions?

DSM-5 includes a chapter on “Other conditions that may be a focus of clinical attention” (American Psychiatric Association, 2013, pp. 715–727). It’s brief, making up only about 2% of the book, and should be expanded; however, it seems relevant to a psychosocial approach and could perhaps be combined with other descriptions of predicaments and problems. Example problems include:

  • High expressed emotion level within family
  • Spouse or partner violence
  • Inadequate housing
  • Discord with neighbour, lodger, or landlord
  • Problem related to current military deployment status
  • Academic or education problem
  • Social exclusion or rejection
  • Insufficient social insurance or welfare support

So, “DSM” is not synonymous with “biological”. There is again plenty to be built upon, despite its problems.

Kinderman argues that practitioners “can offer practical help, negotiate social benefits (which could be financial support, negotiated time off work, or deferred studies, for example), or offer psychological or emotional support.” It was great to see specific examples. Medication also likely has a place, especially when the mechanisms of action are conceptualized in a drug-centred way rather than keeping up the pretense that they cure a disease (Moncrieff & Cohen, 2005). I think we all should be doing more to elaborate how a meaningful psychosocial approach can work in practice.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.

Borsboom, D., & Cramer, A. O. J. (2013). Network analysis: an integrative approach to the structure of psychopathology. Annual Review of Clinical Psychology, 9, 91–121. doi:10.1146/annurev-clinpsy-050212-185608

Borsboom, D., Cramer, A. O. J., Schmittmann, V. D., Epskamp, S., & Waldorp, L. J. (2011). The small world of psychopathology. PLoS ONE, 6(11), e27407. doi:10.1371/journal.pone.0027407

Moncrieff, J., & Cohen, D. (2005). Rethinking models of psychotropic drug action. Psychotherapy and Psychosomatics, 74, 145–153. doi:10.1159/000083999





Big White Wall “Evaluative Review”

29 01 2015

I was having trouble getting a copy of an “independent review” of Big White Wall. Their website states, “A copy of the independent review is available on request”. When I asked for a copy (May 2014), they replied that “there is some potentially commercially sensitive data in this review, so we’re not able to share it directly.”

Discovered today (Jan 2015) that it’s online over here.

(I have also mirrored it here in case it disappears.)





Mental healthcare funding in England

6 01 2015

Probability of a useful reply low — but let’s see what comes back…

Sent: 04 January 2015 14:22
To: public.enquiries@hmtreasury.gsi.gov.uk
Subject: Mental health budget decision responsibility and advisors

Madam/Sir,

Blame for insufficient mental healthcare budgets has been passed around between DH, NHSE, and CCGs; however, the source of funding is the Treasury. Could you please send a summary of the people responsible for decisions made in relation to mental health budgets and of who advises them (I’m interested in substantive causal responsibility rather than bureaucratic responsibility)? For example, are any of these members of the management team responsible for mental health budgetary advice?

Sir Nicholas Macpherson
Sharon White
John Kingman
Mark Bowman
Dave Ramsden
Charles Roxburgh
Indra Morris

Also what documentation exists on decisions made (at Treasury level or above) in relation to mental healthcare, for instance concerning Improving Access to Psychological Therapies, inpatient beds, Payment by Results/Payment Systems, and questions around the involvement of the “third sector”.

I’d be grateful for any information.

Best wishes,

Andy





Worrying developments in NHS England mental health outcomes monitoring

19 12 2014

Mental health service users hope that the therapeutic interventions they receive will help them feel better. Randomised controlled trials are one important way to test whether a therapy “works”; however, they don’t reveal how interventions are experienced in routine care. This has led to routine outcomes monitoring which uses questionnaires to ask service users and clinicians to rate symptoms and other relevant information before, during, and after treatment. Outcomes monitoring has been used by NHS services for some years, for example through the Child Outcomes Research Consortium and Improving Access to Psychological Therapies (IAPT). It is, however, controversial. Ros Mayo (2010, p. 63) for example argues that:

“The application of oversimplified questions requiring tick-box answers … are driven by short-term and superficial policies and management techniques, largely incorporated from industry and the financial sector and primarily concerned with speed, change, results, cost effectiveness – turnover and minimising human contact and time involvement… They have nothing to do with human engagement…”

And yet, outcomes monitoring could be more than bureaucracy. There is emerging evidence that providing regular progress feedback to clinicians improves outcomes, especially when questionnaires are completed by service users. Intuitively this seems to make sense: people could sometimes reveal more about how they feel on paper than they can orally face-to-face. Items used in IAPT include:

  • “How often have you been bothered by… not being able to stop or control worrying?”
  • “Heart palpitations bother me when I am around people”

They also ask directly about the care received, for example:

  • “Did staff listen to you and treat your concerns seriously?”
  • “Did you feel involved in making choices about your treatment and care?”

Responses to these items could help clinicians understand to what extent services are helping service users. Also, using standardised questionnaires means that expected progress curves can be developed (for example, see work by Lambert and colleagues), so clinicians can see if progress is slower than would be expected given the initial assessment and, if warranted, try a different approach.
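A minimal sketch of the expected-progress idea, with an invented expected curve, an invented tolerance, and invented client scores (real feedback systems such as Lambert's are far more sophisticated):

```python
# Hypothetical expected-progress check. The curve, the tolerance, and the
# client's scores are all invented for illustration.
expected = [22, 19, 17, 15, 13, 12, 11, 10]  # typical symptom score by session
tolerance = 3  # how far above the curve still counts as "on track"

def on_track(observed):
    """Flag, session by session, whether a client's symptom score is
    within tolerance of the expected progress curve."""
    return [score <= exp + tolerance
            for score, exp in zip(observed, expected)]

client = [23, 22, 21, 21, 20]  # little change across five sessions
print(on_track(client))        # later sessions fall behind the curve
```

In this invented example the first two sessions look fine, but from session three onwards the scores sit well above the expected curve, which is exactly the kind of signal that might prompt a clinician to try a different approach.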

It’s early days for outcomes monitoring but the above examples suggest that it could be a promising approach. However, closer examination shows that there are clear problems with how questionnaires are being used in practice, and I think NHS services in England are being asked to implement actively damaging approaches to outcomes monitoring.

Problem 1. Use of an unreliable measure

Suppose you wish to develop a rating scale for quality of life or how distressed you are so that you can monitor progress over time with a summary “score”. As a bare minimum requirement, all of the questions for one topic should be related to each other: the items should be “internally consistent”. This questionnaire probably wouldn’t do very well:

THE GENERAL STUFF QUESTIONNAIRE

  1. How often do you sing in the shower?
  2. What height are you?
  3. How far do you live from the nearest park?
  4. What’s your favourite number?
  5. How often do you go dancing?

You might learn interesting things from some of the individual answers, but summing all the answers together is unlikely to be revealing. This questionnaire would fare better:

THE RELIABLE FEELINGS QUESTIONNAIRE

  1. How do you feel? (0 is terrible, 10 fantastic)
  2. How do you feel? (0 is terrible, 10 fantastic)
  3. How do you feel? (0 is terrible, 10 fantastic)
  4. How do you feel? (0 is terrible, 10 fantastic)
  5. And finally… how do you feel? (0 is terrible, 10 fantastic)

However, you might wonder if questions 2 to 5 add anything. There are many ways to test the internal consistency of questionnaires, using the answers that people give. One is to use a formula called Cronbach’s alpha which gives answers from 0 to 1. Higher, say around 0.8, is better. Too close to 1 suggests redundancy in questions, as would be likely for the Reliable Feelings Questionnaire above.
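Cronbach's alpha is computed from the item variances and the variance of the total scores. Here is a small sketch with invented response data in the spirit of the two questionnaires above (the respondents and their answers are simulated, not real):

```python
import random
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondents, each a list of item scores."""
    k = len(responses[0])              # number of items
    items = list(zip(*responses))      # transpose: one tuple of scores per item
    item_vars = sum(variance(item) for item in items)
    total_var = variance(sum(row) for row in responses)
    return (k / (k - 1)) * (1 - item_vars / total_var)

random.seed(1)

# Five near-identical items, like the Reliable Feelings Questionnaire
consistent = []
for _ in range(50):
    mood = random.randint(0, 10)
    consistent.append([min(10, max(0, mood + random.randint(-1, 1)))
                       for _ in range(5)])

# Five unrelated items, like the General Stuff Questionnaire
unrelated = [[random.randint(0, 10) for _ in range(5)] for _ in range(50)]

print(round(cronbach_alpha(consistent), 2))  # close to 1: items agree
print(round(cronbach_alpha(unrelated), 2))   # close to 0: items unrelated
```

The near-duplicate items give an alpha close to 1 (suggesting redundancy), while the unrelated items give an alpha near 0, well below the conventional 0.70 threshold.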

In England, it is now recommended to use a “Mental Health Clustering Tool” to evaluate outcomes (see section 7.1 of recent guidelines). This is a questionnaire completed by clinicians covering areas such as hallucinations, delusions, depression, and relationship difficulties. The questionnaire suffers from a very basic problem: it’s not internally consistent. This has been discovered by the very people who proposed the approach (see p.30 of their report): “As a general guideline, alpha values of 0.70 or above are indicative of a reasonable level of consistency”. Their results are: 0.44, 0.58, 0.63, 0.57 – conspicuously smaller than 0.70. The authors also refer to previous studies explaining that this would always be the case, due to “its original intended purpose of being a scale with independent items” (p. 30). So, by design, it’s closer to the General Stuff Questionnaire above: a mixed bag of independent questions with low reliability.

Problem 2. Proposals to link outcomes to payment

Given evidence that collecting regular feedback might improve the quality of care people receive, it may be a good idea that the IAPT programme includes regular progress monitoring. IAPT uses service user completed questionnaires, which could in principle provide information clinicians might not otherwise have learned. There is, however, another potential difficulty over and above that of the quality of questionnaires used, and that is how external influences such as “Payment by Results” (PbR) initiatives can change for the worse how data is gathered and used. And PbR initiatives are beginning to be used in practice. The IAPT webpage notes, “An outcome based payment and pricing system is being developed for IAPT services. This is unique as other systems of PbR are activity or needs based.” Initial pilot results were “encouraging,” says the web page, and another pilot is currently running.

The idea behind this proposal is that the more improvement shown by service users, as partly determined by outcomes scores, the more money service providers would receive. This is a worry, as linking measures to targets tends to cause the measures to stop measuring what they are supposed to measure. For instance, targets on ambulance response times have led to statistically unlikely peaks at exactly the target, suggesting that times have been changed. A national phonics screen has a statistically unlikely peak just at the cutoff score, suggesting that teachers have rounded marks up where they fell just below the cutoff. The effect has been around for so long that it has a name, Goodhart’s law:

“Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
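A toy simulation (all numbers invented, including the target) shows the kind of tell-tale distortion described above: if recorded times just over a target are nudged back to it, a spike appears at the target and a dip just beyond it.

```python
import random
from collections import Counter

random.seed(0)
TARGET = 8  # minutes -- an illustrative target, not any real standard

# True response times, roughly smooth around the target
true_times = [round(random.gauss(8, 2)) for _ in range(1000)]

# "Gamed" records: times just over the target get nudged back to it
recorded = [TARGET if TARGET < t <= TARGET + 1 else t for t in true_times]

# Crude text histogram: a spike at the target, a hole just after it
counts = Counter(recorded)
for minute in range(5, 12):
    print(minute, "#" * (counts[minute] // 10))
```

A real analysis would of course look for subtler statistical signatures, but the principle is the same: the recorded distribution no longer reflects the underlying times.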

Faced with funding cuts, how many NHS managers in overstretched services will be forced to “game” performance-based payment systems to ensure their service survives? It’s not hard to do so, for example people who drop out of therapy tend to do so because they didn’t think it was helping. It can be easy to justify not asking people who leave therapy to complete questionnaires. Those who stay in therapy may be the ones with higher scores (see Clark, 2011, p. 321). Therefore, the missing data from those whom therapy was not helping could lead to a false picture of how well a service is working for service users. It is difficult to see how any data gathered that has been subject to these difficulties could tell clinicians or service providers anything helpful about their services or the wellbeing of those who use them.

Concluding thoughts

I think it is possible that statistically reliable, high quality questionnaires could be helpful in clinical practice, if thoughtfully used and explained to service users. Perhaps such questionnaires could be thought of as a bit like blood pressure readings; few patients feel reduced to numbers when told their results, and it’s obvious that more information is required to formulate a treatment if something is awry. However, using unreliable measures – especially when the developers know they are unreliable – is unacceptable for a national mental health programme. Moreover, linking questionnaire scores to payment raises even more complex ethical issues: there is a risk that the bureaucratic burden of questionnaires for service users would increase; also poorer treatment and financial decisions could result because those decisions would be made on the basis of low quality, unreliable data. Mental health services need to do much better than that, for the sake of everyone’s wellbeing.

Thanks very much to Martha Pollard and Justine McMahon for helpful comments.

Andy Fugard is a Lecturer in Research Methods and Statistics at the Department of Clinical, Educational and Health Psychology, University College London. His research investigates psychological therapies for children and young people: are they effective in routine practice, and what moderates effectiveness? He is also interested in policy around practice-based evidence and recently became a member of the NHS England/Monitor Quality and Cost Benchmarking Advisory Group.





Why can psychological therapy be helpful?

24 10 2014

Research explaining how therapy might help is filled with very technical terminology, e.g., invoking “transference”, “extinction”, heightening access to “cognitive–emotional structures and processes”, “reconfiguring intersubjective relationship networks” (see over here for more).

Could simpler explanations be provided? Here are some quick thoughts, partly inspired by literature, discussions, and engaging myself as a client in therapy:

  • You know the therapist is there to listen to you — they’re paid to do so — so there’s less need to worry about their thoughts and feelings. You can, and are encouraged to, talk at length about yourself. This can feel liberating, whereas in other settings it might feel selfish or self-indulgent.
  • The therapist keeps track of topics within and across sessions. This can be important for recognising patterns and maintaining focus, whilst allowing time to tell stories, meandering around past experiences, to see where they lead.
  • The therapist has knowledge (e.g., through literature, supervisory meetings, and conversations with other clients) of a range of people who may have had similar feelings and experiences. So although we’re all unique, it can also be helpful to know that others have faced and survived similar struggles — especially if we learn what they tried and what helped.
  • Drawing on this knowledge, the therapist can conjecture what might be going on. This, perhaps, works best if the conjectures are courageous (so a step or two away from what the client says) — and tentative, so it’s possible to disagree.
  • There can be an opportunity for practice, for instance of activities or conversations which are distressing. Practising is a good way to learn.
  • Related, there’s a regular structure and progress monitoring (verbally, with a diary, or using questionnaires). Self-reflection becomes routine and constrained in time, like (this might be a bit crude but bear with me) a psychological analogue of flossing one’s teeth.
  • (Idea from Clare) “… daring to talk about things never spoken of before with someone who demonstrates compassion and acceptance; helpful because allows us to face things in ourselves that scare us and develop less harsh ways of responding to ourselves”
  • (added 27/10/2014) The therapist has more distance from situations having an impact on someone than friends might have so, e.g., alternative explanations for interpersonal disputes can more easily be provided.
  • (added 27/10/2014) It’s easier for a therapist to be courageous in interactions and suggestions than for a friend as — if all goes wrong — it’s easier for the client to drop out of the therapeutic relationship without long-term consequences (e.g., there’s no loss of friendship).
  • (added 15/01/2015) Telling your story to a therapist gives you an audience who is missing all of the context of your life. Most of the context can feel obvious, until you start to tell your story. Storytelling requires explaining the context, making it explicit. For instance, who are the people in your life? Why did you and others say and do the things you did? Perhaps this act of storytelling, making the context explicit, also makes it easier to become aware of problems and find solutions.

Some thoughts…





“How I became an analyst” by Arthur Valenstein

19 10 2014

Interesting multidisciplinary background — some excerpts from Valenstein (1995):

“When I was sixteen years old I built my own short-wave receiver and transmitter and became a ham radio operator. This bent towards electronics motivated me to enter the engineering school at Cornell University in 1931 with the intention of becoming an electrical engineer…”

“But those were depression years, and it seemed unlikely that I could make a sufficient livelihood as an electrical engineer.”

“… from early years I had been curious about people, how and why they were as they were. I was puzzled about myself as well, feeling myself to be something of an ‘outsider’ in school. As I learned later, this is one of the elements contributing to psychological-mindedness, a predisposition that is conducive to psychoanalytic inquiry.”

“I have always had one foot in hard science and one foot in literature and the humanities, and fortunately I don’t seem to have fallen between the two.”

“George Henry was carrying out a heavily funded research project on homosexuality. This opened a whole world to me that I had never known, especially the gay world, and I learned something about it, even getting to know some of its colloquial terms. Later Henry and his research assistant, who in retrospect I realize was homosexual, published several books on homosexuality from a descriptive point of view.”

“… I came to be in Boston, which I never left except for one year in neurology with Foster Kennedy (a colourful man, a Northern Ireland Orangeman of great sartorial splendour and the gift of marvellously eloquent, elegant speech) at Bellevue Hospital in New York, and my years in the military.”

“My initial exposure to the activities and ambience of the Hampstead Child Therapy Clinic [now the Anna Freud Centre] forty years ago, and my continued contact with it and with Anna Freud over many years, greatly influenced my identity as a psychoanalyst, both theoretically and clinically. Before my sabbatical in London in 1955, I had become interested not only in what nowadays seems to be called ‘cognitive developmental psychology’ and ‘attachment theory’, but also what might be termed ‘affect developmental psychology’.”

Reference

Valenstein, A. (1995). How I became an analyst. Bulletin of the Anna Freud Centre, 18, 283–291.





Politics

11 10 2014

In 1962, John F. Kennedy announced that “We choose to go to the moon.” This was seen as wonderful and achievable, even though JFK didn’t have the required knowledge and expertise to make it happen himself.

Zoom forward to the 21st century: the Green Party proposes that everyone should have a living wage and that we should strive for equality; that it’s unfair that a small minority of people whose job it is to move numbers between bank accounts should claim ownership of most of the world’s resources and that we need to and can change this. This is considered hopelessly utopian. How are they going to achieve that, people complain, their policies don’t spell out the details.

What’s going on? I’m too thick to grasp why the first is reasonable but the second irrational.







