A farcical proposal for mental health outcomes measurement

If you’re going to develop a questionnaire that yields a total “score” for something — quality of life, feelings, distress, whatever — you’ll want all of the questions for one topic to be related to each other (as a bare minimum). This questionnaire probably wouldn’t be very “internally consistent”:

THE GENERAL STUFF QUESTIONNAIRE

  1. How often do you sing in the shower?
  2. What height are you?
  3. How far do you live from the nearest park?
  4. What’s your favourite number?

(You might still learn interesting things from the individual answers.)

This one would:

THE RELIABLE FEELINGS QUESTIONNAIRE

  1. How do you feel?
  2. How do you feel?
  3. How do you feel?
  4. How do you feel?
  5. How do you feel?
  6. How do you feel?
  7. How do you feel?
  8. How do you feel?
  9. How do you feel?
  10. How do you feel?

However, you might wonder if questions 2 to 10 add anything… (So internal consistency isn’t everything.)

There are many ways to test the internal consistency of a questionnaire, using the answers that people give. One is a formula by Lee Cronbach, known as Cronbach’s alpha, which compares the variation in the individual items with the variation in the total score. Alpha values run from 0 to 1. Higher is better (but not too high; see the second example above).
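
For the curious, here’s a minimal sketch of the calculation in Python (assuming numpy; the function name cronbach_alpha is my own, for illustration):

  import numpy as np

  def cronbach_alpha(responses):
      """Cronbach's alpha for a (respondents x items) array of item scores."""
      responses = np.asarray(responses, dtype=float)
      k = responses.shape[1]                          # number of items
      item_variances = responses.var(axis=0, ddof=1)  # variance of each item
      total_variance = responses.sum(axis=1).var(ddof=1)  # variance of the total score
      return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

The intuition: when items move together, the variance of the total score is much bigger than the sum of the individual item variances, so alpha heads towards 1; when items are unrelated, the two are about equal, so alpha heads towards 0.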

In England, it is now recommended (see p. 12 of Mental Health Payment by Results Guidance) to use scores on a “Mental Health Clustering Tool” to evaluate outcomes. I think there are at least two problems with this:

  1. It’s completed by clinicians. It’s unclear if service users even get to know how they have been scored, never mind to what extent they can influence the process.
  2. The questionnaire scores aren’t internally consistent.

The people who proposed the approach write (see p. 30 of their report): “As a general guideline, alpha values of 0.70 or above are indicative of a reasonable level of consistency”. Their results: 0.44, 0.58, 0.63, and 0.57, all well below that threshold. They also cite previous studies suggesting that low alphas were always to be expected, given the tool’s “original intended purpose of being a scale with independent items” (p. 30). So, by design, it’s closer to the General Stuff Questionnaire above: a list of “presenting problems” to be read individually.
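
To see why a scale of independent items is bound to fall short of that guideline, here’s a hypothetical simulation (a sketch only, assuming numpy and the cronbach_alpha function above; the data are invented, not from the report):

  rng = np.random.default_rng(0)

  # 500 simulated respondents answering 4 items.
  # "General Stuff" pattern: every item is independent of the others.
  independent_items = rng.normal(size=(500, 4))

  # "Reliable Feelings" pattern: every item is one shared feeling plus noise.
  feeling = rng.normal(size=(500, 1))
  related_items = feeling + 0.5 * rng.normal(size=(500, 4))

  print(cronbach_alpha(independent_items))  # near 0: nowhere near 0.70
  print(cronbach_alpha(related_items))      # roughly 0.9: comfortably above 0.70

The first questionnaire fails the 0.70 guideline no matter how many people answer it, simply because nothing ties its items together, which is exactly the situation the report describes.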

Not only are clinicians deciding whether someone has a good outcome (are they really in the best position to decide?), but the questionnaire they’re using to do so is rubbish — as shown by the very people proposing the approach!

Undergraduate psychology students wouldn’t use a questionnaire this poor in their projects. Why is it acceptable for a national mental health programme?
