Another rant about consciousness

The nature of consciousness — the real consciousness, not some empirically measured proxy thereof — is pretty elusive. Why do we have a phenomenological feeling of free will, being in control, any feeling of being at all, if it’s all just an epiphenomenon, if everything is predetermined, if we are all just in the service of history?

People do things without being consciously aware of them. No high-brow examples are needed, no appeal to Freud or to being tricked by social psychologists to do things “we” didn’t choose: think about bladder control. You only become conscious of what your bladder is up to when your internal sphincter muscle lets go and your external sphincter (the one that feels under your control, or IS under your control, depending on your religious perspective) has to do some work. Bodies, including brains, do stuff which is kept hidden from their phenomenal consciousness.

The bit of you which feels like You might not be the only conscious bit of your body. Your enteric nervous system, controlling what your gut is up to, has a hundred million neurons. Maybe it feels like it’s in control, inhabiting an environment as natural to it as “our” environment of fields and trees is to us. Maybe your internal sphincter and associated nervous system are conscious too. There’s a stronger position that everything is conscious: panpsychism. Even blades of grass have a dash of consciousness according to this view. Not great news for vegans if it’s true.

Security through obscurity

Vaughan at Mind Hacks writes:

“Almost every psychological test relies on the fact that the person being assessed has no foreknowledge of the material.”

Any test with good test-retest reliability does not require this (by definition, people sit it a second time with full foreknowledge of the material and still get much the same score), and there are a few of those around. But irrespective of this, there is a hidden assumption here that psychologists are somehow out to trick people into revealing their psychological properties. Undergraduate students, for instance, often complain that personality questionnaires—and questionnaires in general—are rubbish because it’s obvious what they’re asking or what the answers “should” be.
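
To make that concrete: test-retest reliability is usually just the correlation between people’s scores on two sittings of the same test, and the second sitting is by construction taken with foreknowledge. A minimal sketch, with made-up scores (numpy assumed):

    import numpy as np

    # Hypothetical scores for the same eight people on two sittings of a test.
    # The second sitting is, by construction, taken with full foreknowledge.
    sitting_1 = np.array([12, 19, 7, 15, 22, 9, 17, 14])
    sitting_2 = np.array([13, 18, 8, 16, 21, 10, 18, 13])

    # Test-retest reliability is usually reported as the correlation between sittings.
    reliability = np.corrcoef(sitting_1, sitting_2)[0, 1]
    print(f"test-retest reliability r = {reliability:.2f}")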

I don’t think it’s a problem if what is being tested is transparent. For instance, take selection for a job or a university place. The point should be that a good selection process benefits both the candidate and the folk making the selection. If you cheat your way into a job you’re not capable of doing by obsessively practicing a test, then it won’t be long before the pressures of performance force you out.

This assumes that tests used for selection have predictive validity, of course. And… well, you can imagine how this argument would continue: how some jobs might require people who are good at pretending, how validity might depend on people being motivated enough to try some practice IQ tests—acquisition of foreknowledge might (unbeknownst to the tester) be part of the test—and so on and so forth…

When it comes to clinical diagnosis, I find it quite frightening that tests should somehow trick patients into revealing their complaints. There is a diagnostic tool for autism spectrum conditions which works basically by tricking people into revealing how socially inept they are through various “social presses”, including during a period which appears to be a break between testing sections. The most frightening part of this was the obvious power trip the person who explained the test to me was on every time she used it.

Be suspicious of tests which are designed to trick people.

General and useless

Wise words(?) from The Last Psychiatrist:

Please do not say the words “dopamine” and “nucleus accumbens” anywhere near me, I still have my old sack of doorknobs.  These explanations could not be more general and useless.  Using those two in support of a common addiction pathway is like involving “gasoline” and “spoons” in the diathesis for serial rapes.  Even though these are involved in various “addictions”—cocaine, alcohol, internet, sex—these “addictions” and their associated behaviors are so disparate that the pathway serves no useful clinical target.  Haldol blocks dopamine in the nucleus accumbens, but you can’t cure alcoholism with it, can you?

I’m not denying that such a pathway exists, I’m doubting the utility of this information, even if true.  Call me when science catches up to your lies.

An old rant from 2nd year PhD me. But is it true? Do I believe it now? *Chin stroke*

[I can’t remember what I was responding to.]

There’s nothing “Cartesian” about the language of cognitivism. Information processing is just a viewpoint on phenomena which doesn’t give a damn about ion flows or gene expression. It just posits that there’s something transforming what’s perceived into action, and whether it’s a set of cogs or a Turing machine isn’t particularly interesting. These guys need to go back to Neisser!

I imagine a load of these conceptual analysts (using a priori wisdom they received from where?!) pounding their fists on a table, some of them agreeing it is a table, some of them arguing that, no, that’s ridiculous, it’s a collection of atoms, electrons, and protons… There are multiple levels of analysis, and somewhere those levels have to connect to what it feels like to be a person and to how people communicate with each other about what they’re doing. I agree that sometimes the language we use at the personal level gets applied, by analogy, to what the brain’s doing at the sub-personal level, but often that’s just a way of trying to tell a story about what’s going on. For instance, today [a fairly famous researcher] was talking about a parietal area “caring” about something or other. It was just a cheap way to get an idea across instead of saying, “We were able to reject the null hypothesis that there is no difference in BOLD activation between the two conditions (with alpha = 0.05).”
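
For what it’s worth, the long-winded version amounts to something like a paired comparison across participants. A rough sketch, with made-up per-participant BOLD values:

    import numpy as np
    from scipy import stats

    # Hypothetical mean BOLD signal change (%) per participant in two conditions.
    condition_a = np.array([0.41, 0.35, 0.52, 0.47, 0.39, 0.44, 0.50, 0.36])
    condition_b = np.array([0.30, 0.28, 0.45, 0.38, 0.33, 0.31, 0.42, 0.29])

    # Paired t-test across participants; reject the null hypothesis if p < alpha.
    t_stat, p_value = stats.ttest_rel(condition_a, condition_b)
    alpha = 0.05
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, reject null: {p_value < alpha}")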

Many of the theories used by fMRI folk seem not far from the folk-psychological vernacular and thus are much in need of refinement to make them more consistent with what a charming Italian professor termed the “meat machine” is up to. That’s the point, to me, of fMRI et al.: improving the consistency between what the brain’s up to and our models of information processing.