What can happen when “real world” training meets academia

15 09 2014

I did my PhD research at a doctoral training centre. It was a wonderful, though often stressful, experience (and I still work in academia). One of the conditions for the programme was that we attended various transferable skills workshops. These workshops were often framed as being about making us more employable, even if we were to fall from the “ivory tower” (heaven forbid).

One workshop was particularly memorable: two days on creativity, organised by the funding body and run by a multinational company which claims to have helped corporations to be more creative. Students travelled from across the UK to attend and were doing research on a range of topics across the breadth of science.

First the positive. Although research conferences are a good way to meet folk from elsewhere, everyone tends to have a similar background, so it was great to chat with people who were doing radically different things and discover what we all had in common, for instance in terms of the day-to-day challenges of doing a PhD.

The creativity workshop itself was exceedingly painful and for most of the time felt like a parody. One of the PowerPoint slides was a picture of three (badly drawn) tunnels. You could see the light at the end of one tunnel, you could vaguely see around the second, and the third was totally dark. We were told to strive to fumble around in the dark tunnel, as that's where we'd find the really exciting ideas. Another slide had a diagram showing a continuum of ideas, with bad ideas at one end and good ideas at the other. We should be striving to move to the end where the good ideas live. There were motivational posters all around the room with quotes like, “Creativity is seeing relationships where none exist” (a platonic friend and I got our photo taken in front of this poster, holding hands, gazing longingly into each other’s eyes).

The “facilitators”, I discovered later via the web, were neurolinguistic programming (NLP) “Master Practitioners”. As far as I can tell, NLP has no evidence of efficacy. Guess what happens when NLP practitioners try to use corporate training techniques with science and engineering students…? By the end of the second day, there were 40 of us sitting looking miserable, arms folded. They presumably get paid rather a lot to deliver something devoid of any content and they didn’t give us any evidence whatsoever to support any of the claims they made.

So much for transferable skills.

Is it possible that PhDs already train people up in useful skills? Most people leave academia and get jobs elsewhere, i.e., they are successful! I think having more highly trained scientists wandering around the world can only be a good thing. Imagine if government used more evidence for its policies, for instance by relying on scientific thinking rather than ideology and rhetoric.





A farcical proposal for mental health outcomes measurement

5 09 2014

If you’re going to develop a questionnaire that adds answers up into a total “score” for something — quality of life, feelings, distress, whatever — you’ll want all of the questions for one topic to be related to each other (as a bare minimum). This questionnaire probably wouldn’t be very “internally consistent”:

THE GENERAL STUFF QUESTIONNAIRE

  1. How often do you sing in the shower?
  2. What height are you?
  3. How far do you live from the nearest park?
  4. What’s your favourite number?

(You might still learn interesting things from the individual answers.)

This one would:

THE RELIABLE FEELINGS QUESTIONNAIRE

  1. How do you feel?
  2. How do you feel?
  3. How do you feel?
  4. How do you feel?
  5. How do you feel?
  6. How do you feel?
  7. How do you feel?
  8. How do you feel?
  9. How do you feel?
  10. How do you feel?

However, you might wonder if questions 2 to 10 add anything… (So internal consistency isn’t everything.)

There are many ways to test the internal consistency of a questionnaire, using the answers that people give. One is a formula devised by Lee Cronbach, known as Cronbach’s alpha. Alpha values run from 0 to 1. Higher is better (but not too high; see the second example above).
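
As a rough sketch of how alpha works, here is one way to compute it in Python. The data and the function name are made up purely for illustration; they aren’t taken from any of the documents discussed below.

```python
# A minimal sketch of Cronbach's alpha, using made-up data purely for
# illustration (rows = respondents, columns = questionnaire items).
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)"""
    k = responses.shape[1]                          # number of items
    item_vars = responses.var(axis=0, ddof=1)       # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents answering four items on a 1-to-5 scale.
answers = np.array([
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(answers), 2))
# Items that rise and fall together give a high alpha; unrelated items
# (like those in the General Stuff Questionnaire) drag it towards zero.
```

Statistics packages provide ready-made implementations too (pingouin in Python, for example), but the arithmetic above is all there is to it.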

In England, it is now recommended (see p. 12 of Mental Health Payment by Results Guidance) to use scores on a “Mental Health Clustering Tool” to evaluate outcomes. I think there are at least two problems with this:

  1. It’s completed by clinicians. It’s unclear if service users even get to know how they have been scored, never mind to what extent they can influence the process.
  2. The questionnaire scores aren’t internally consistent.

The people who proposed the approach write (see p. 30 of their report): “As a general guideline, alpha values of 0.70 or above are indicative of a reasonable level of consistency”. Their own results: 0.44, 0.58, 0.63, and 0.57 — all well below that threshold. They also refer to previous studies showing that such low values are to be expected, given “its original intended purpose of being a scale with independent items” (p. 30). So, by design, it’s closer to the General Stuff Questionnaire above: a list of “presenting problems” to be read individually.

Not only are clinicians deciding whether someone has a good outcome (are they really in the best position to decide?), but the questionnaire they’re using to do so is rubbish — as shown by the very people proposing the approach!

Undergraduate psychology students wouldn’t use a questionnaire this poor in their projects. Why is it acceptable for a national mental health programme?





No biomarkers

27 08 2014

There are no biomarkers
We can’t treat it
Not so! Said the psychiatrist
Here, we have tears
They are easy to treat
This pill dries them up

But that’s treating the
Symptom not the cause
Not so! Said the psychiatrist
Crying is diagnosed by DSM
You are Crying if
You are crying

Stop crying, pleaded the psychiatrist
Giving her a hug.





Advice to empiricists from Alan Grafen

20 08 2014

Spotted in Grafen, A. (1987). Measuring sexual selection: why bother?

[Image: the relevant passage from Grafen (1987)]





Some claims psychology students might benefit from discussing

18 08 2014
  1. It’s okay if participants see the logic underlying a self-report questionnaire, e.g., can guess what the subscales are. It’s a self-report questionnaire — how else are they going to complete the thing? (Related: lie scales — too good to be true?)
  2. Brain geography is not sufficient to make psychology a science.
  3. Going beyond proportion of variance “explained” probably is necessary for psychology to become a science.
  4. People learn stuff. It’s worth explicitly thinking about this, especially for complex activities like reasoning and remembering. How much of psychology is the study of cultural artifacts? (Not necessarily a criticism.)
  5. Fancy data analysis is nice but don’t forget to look at descriptives.
  6. We can’t completely know another’s mind, not even with qualitative methods.
  7. Observation presupposes theory (and unarticulated prejudice is the worst kind of theory).
  8. Most metrics in psychology are arbitrary, e.g., what are the units of PHQ-9?
  9. Latent variables don’t necessarily represent unitary psychological constructs. (Related: “general intelligence” isn’t itself an explanation for anything; it’s a statistical re-representation of correlations and these correlations need to be explained.)
  10. Averages are useful but the rest of the distribution is important too (see the sketch after this list).
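
Here’s a tiny illustration of points 5 and 10, using made-up numbers: the two groups below have identical mean scores, but a glance at the descriptives shows they are nothing alike.

```python
# Made-up example: same mean, very different distributions.
import numpy as np

group_a = np.array([10] * 10)            # everyone scores exactly 10
group_b = np.array([0] * 5 + [20] * 5)   # half score 0, half score 20

for name, scores in [("A", group_a), ("B", group_b)]:
    print(name,
          "mean =", scores.mean(),
          "sd =", round(scores.std(ddof=1), 1),
          "quartiles =", np.percentile(scores, [25, 50, 75]))
# Both means are 10.0, but the groups would feel completely different
# to the people in them.
```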




Individuals versus aggregates

5 08 2014

“Winwood Reade is good upon the subject,” said Holmes. “He remarks that, while the individual man is an insoluble puzzle, in the aggregate he becomes a mathematical certainty. You can, for example, never foretell what any one man will do, but you can say with precision what an average number will be up to. Individuals vary, but percentages remain constant. So says the statistician.”

The Sign of Four by Sir Arthur Conan Doyle (hat-tip MP)





Did You Used to be R.D. Laing? (Full Documentary)

3 08 2014







