Tagged: Cognitive Science

Thoughts after a talk by Michelle Dawson

Some thoughts, not yet expanded…

1. Autistic people have been shown to be more emotionally expressive than non-autistics, contrary to some stereotypes. In one experiment, they were nonetheless less susceptible to a framing effect.

2. Seemingly narrow abilities can get one very far, e.g., spotting weird interpretations of results in papers; systematically cataloguing results. They are only “narrow” if judged that way.

3. Everyone needs to find their talents, and to spot and help cultivate talents in others. Autism is just a more visible instance of this general point.

4. “Interventions” are often poor substitutes for mentoring relationships, which have been found to be so important in, e.g., apprenticeships, Oxbridge undergrad supervision, and PhD supervision elsewhere.

5. Opportunities to try things can be the best intervention.

6. Judgmental observation is a kind of interaction: when you see something (a trait, a behaviour) that you assess to be negative, it’s difficult to avoid broadcasting your opinion, even if only in a brief facial expression. This affects the person you’ve just observed.

7. Verbal fluency is still over-emphasised in academia. Visuospatial processing, rapid categorisation, implicit learning – all computationally complex cognitive processes – are often undervalued.

8. Everyone has biases, e.g., results they want to be true – even those pointing out biases in others. That’s why debate and criticism from others who are less involved are crucial.

Brains and personality

(Prompted by this.)

The main complaint one hears about the big five is that there’s an absence of theory explaining what’s driving the different dimensions.  It’s all descriptive, or at best “finger in the wind” theorizing.  At least that’s (my perception of) what the critics go on about (usual disclaimers apply).  Oh and the stats can be a bit dodgy (see Borsboom, 2006, for a discussion of misapplications of PCA).
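The “dodgy stats” point can be made concrete with a toy simulation (my sketch, not an example from Borsboom’s paper): a principal component is just a weighted sum of the observed item scores, obtained by eigendecomposition of their correlation matrix, so it summarises the correlations without modelling what produces them, and without any account of measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 questionnaire items all driven by ONE latent trait plus noise.
n, k = 500, 6
latent = rng.normal(size=n)
loadings = np.full(k, 0.7)
items = latent[:, None] * loadings + rng.normal(size=(n, k)) * 0.7

# PCA = eigendecomposition of the correlation matrix: purely descriptive.
r = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(r)   # eigenvalues in ascending order
first = eigvecs[:, -1]                 # component with the largest eigenvalue

# The first component loads roughly equally on every item: a weighted sum
# that summarises the data, but says nothing about what causes the
# correlations or how much of each item is measurement error.
print(np.round(first / np.sign(first[0]), 2))
print(round(eigvals[-1] / k, 2))       # share of variance "explained"
```

The component would come out looking much the same whatever generated the correlations, which is roughly the sense in which the critics call the approach descriptive rather than explanatory.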

DeYoung and colleagues found a bunch of correlations between personality traits and the volume of different brain regions.  That there’s a relationship between personality and brain structure is unsurprising.  That it can be detected is nice, though.  That it can be detected with such a crude measure (bigger implies better function – except for one region’s association with agreeableness) is perhaps surprising, but something similar has been spotted before in (part of) taxi drivers’ hippocampi in the context of spatial navigation (Maguire et al., 2000).

This work belongs to the genre of trying to work out what cognitive processes are driving personality.  The geography is not particularly interesting in itself.  But with a spot of detective work linking to other studies, the geography gives clues about what might be going on.

Reference

Borsboom, D. (2006). The attack of the psychometricians. Psychometrika, 71, 425-440.

DeYoung, C. G., Hirsh, J. B., Shane, M. S., Papademetris, X., Rajeevan, N. & Gray, J. R. (2010). Testing Predictions From Personality Neuroscience: Brain Structure and the Big Five. Psychological Science, 21, 820-828.

Maguire, E. A., Gadian, D. G., Johnsrude, I. S., Good, C. D., Ashburner, J., Frackowiak, R. S. & Frith, C. D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences of the United States of America, 97, 4398-4403.

More on “context aware” systems

Erickson (2002) argues that “context awareness” is motivated by a desire for systems to take action, autonomously, leaving us out of the loop.  The ability to do so accurately requires a lot of intelligence to draw inferences from the available sensors.  Erickson reckons the project is doomed to failure.  However he thinks we might make some progress if humans are brought back into the loop and given the contextual data in rawer form so they can interpret it and take appropriate action themselves.  Not sure.  The example he gives can easily be modified to reveal potentially damaging information about a user’s whereabouts and actions… (exercise to reader):

“Lee has been motionless in a dim place with high ambient sound for the last 45 minutes. Continue with call or leave a message.”

Reminds me of the impressive-looking thesis by Nora Balfe (2010) on a safety-critical railway signalling system.  For instance, from the conclusions:

“Feedback from [the system] was … found to be very poor, resulting in low understanding and low predictability of the automation. As signallers cannot predict what the automation will do in all situations they do not feel they can trust it to set routes and frequently step in to ensure trains are routed in the correct order. In the observation study, the differences found between high and low interveners in terms of feedback, understanding and predictability confirm the importance of good mental models in the development and calibration of trust…”

Reference

Balfe, N. (2010). Appropriate automation of rail signalling systems: a human factors study. PhD thesis, University of Nottingham.

Erickson, T. (2002). Some problems with the notion of context-aware computing: Ask not for whom the cell phone tolls. Communications of the ACM, 45(2), 102-104.

What is cognition? Again

A while back I posted a list of quotations attempting to define cognition.  In discussing and searching for these, I came to the conclusion that definitions of these kinds of global, general concepts are only useful as department labels. They allow people to work out, vaguely, to whom they could talk to learn about a topic that interests them.  Concepts like “cognition” should be defined with that goal in mind, in a way that causes as little confusion as possible.  For instance, it seems likely that separating cognition and emotion, or cognition and perception, or equating cognition with conscious deliberative thought, are all bad ideas.

Adams (2010) likes definitions.  He suggests that philosophers ask cognitive scientists:

of the processes which are cognitive, what (exactly) makes them cognitive? This is the question that will really irritate, and, I’ve discovered, really interest them. It will interest them because it is a central question to the entire discipline of the cognitive sciences, and it will irritate them because it is a question that virtually no one is asking.  [emphasis original]

He gives some examples of processes which, to him, are clearly not cognitive, e.g., processes that regulate blood sugar levels, or thermoregulatory processes such as capillary constriction and dilation.

He also provides a list of necessary conditions for a process to be cognitive:

  1. Cognitive processes involve states that are semantically evaluable.
  2. The contents carried by cognitive systems do not depend for their content on other minds.
  3. Cognitive contents can be false or even empty, and hence are detached from the actual environmental causes.
  4. Cognitive systems and processes cause and explain in virtue of their representational content.

This all left me rather cold.  I don’t understand what this list helps to explain.  I’m not sure it’s even wrong.

Why not, for instance, allow that cognitive processes regulate blood sugar levels?  If, at an abstract level of analysis, this turns out to be useful – for instance, if performing a task which may be analysed in a cognitive fashion seems to influence blood sugar levels – then why not call it a cognitive process?

The word “cognitive” seems to cause more trouble than it’s worth so maybe we should stop talking about “cognitive processes” altogether.  As I wrote in a previous post:

It used to be considered bad form to refer to something as a neural process unless it referred to synapses, but is this still the case? There are various levels of “neural” from absence of neural due to lesions and BOLD activation patterns, down to vesicle kissing and gene expression. Maybe behavioral neuroscience is allowed up another level to more abstract representations currently called “mental” or “cognitive”, and the mental can be returned to refer to the what-it-feels-like.  Similarly maybe psychologists are behavioral neuroscientists focusing on an abstract level of explanation.

That probably wouldn’t help either.  If only we could find a pill to take which makes us less anxious about the meaning of individual words and phrases.

Reference

Adams, F. (2010). Why we still need a mark of the cognitive. Cognitive Systems Research, 11, 324-331.

What is a mental process?

What is a “mental” process? Is it the stuff we’re conscious of, or a limbo between real, wet, neural processes and observable behaviour?

A well known analogy is the computer. The hardware, the stuff you can kick, is analogous to the brain; the stuff you see on the screen is, I suppose, the phenomenology; and the software – all of which correlates with processes you could detect in the hardware if you looked hard enough, some but not all of which affects the screen – is cognition.

Forget for a moment about minds and consider the engineering perspective; then the point of the levels is clear. When you want, say, to check your email, you probably don’t want to fiddle around directly with the chips in your PC. It’s much less painful to rely on years of abstraction and just click or tap on the appropriate icon. You intervene at the level of software, and care very little about what the hardware is doing behind the scenes.

What is the point of the levels for understanding a system? Psychologists want to explain, tell an empirically grounded story about, people-level phenomena, like remembering things, reasoning about things, understanding language, feeling and expressing emotions. Layers of abstraction are necessary to isolate the important points of this story. The effect of phonological similarity on remembering or pragmatic language effects when reasoning would be lost if expressed in terms of (say) gene expression.

I don’t understand when the neural becomes the cognitive or the mental. There are many levels of neural, not all of which you can poke. At the top level I’m thinking here about the sorts of things you can do with EEG where the story is tremendously abstract (for instance event-related potentials or the frequency of oscillations) though dependent on stuff going on in the brain. “Real neuroscientists” sometimes get a bit sniffy about that level: it’s not brain science unless you are able to talk about actual bits of brain like synapses and vesicles. But what are actual bits of brain?

Maybe a clue comes from how you intervene on the system. You can intervene with TMS, you can intervene with drugs, or you can intervene with verbal instructions. How do you intervene cognitively or mentally?  Is this the correct way to think about it?

Levels of description — in the Socialist Worker

The mainstream media is notoriously rubbish at explaining the relationships between brain, feelings, and behaviour. Those of a suspicious disposition might argue that the scientists don’t mind, as often the reports are very flattering — pictures of brains look impressive — and positive public opinion can’t harm grant applications.

The Socialist Worker printed a well chosen and timely antidote: an excerpt of a speech by Steven Rose about levels of description.

… brains are embedded in bodies and bodies are embedded in the social order in which we grow up and live. […]

George Brown and Tirril Harris made an observation when they were working on a south London housing estate decades ago.

They said that the best predictor of depression is being a working class woman with an unstable income and a child, living in a high-rise block. No drug is going to treat that particular problem, is it?

Many of the issues that are so enormously important to us—whether bringing up children or growing old—remain completely hidden in the biological levels.

You can always find a brain “correlate” of behaviour,  and what you’re experiencing, what you’re learning, changes the brain. For instance becoming an expert London taxi driver — a cognitively extremely demanding task — is associated with a bit of your brain getting bigger (Maguire et al, 2000). These kinds of data have important implications for (still laughably immature) theories of cognition, but, as Steven Rose illustrates with his example of depression, the biological level of analysis often suggests misleading interventions.

It’s obvious to all that would-be taxi drivers are unlikely to develop the skills they need by having their skull opened by a brain surgeon or by popping brain pills. The causal story is trickier to untangle when it comes to conditions such as depression. Is it possible that Big Science, with its fMRI and pharma, is pushing research in completely the wrong direction?

Reference

Maguire, E. A., Gadian, D. G., Johnsrude, I. S., Good, C. D., Ashburner, J., Frackowiak, R. S. & Frith, C. D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences of the United States of America, 97, 4398-4403.

On the inseparability of intellect and emotion (from 1933)

“[…] Imagine that we are engaged in a friendly serious discussion with some one, and that we decide to enquire into the meanings of words. For this special experiment, it is not necessary to be very exacting, as this would enormously and unnecessarily complicate the experiment. It is useful to have a piece of paper and a pencil to keep a record of the progress.

“We begin by asking the ‘meaning’ of every word uttered, being satisfied for this purpose with the roughest definitions; then we ask the ‘meaning’ of the words used in the definitions, and this process is continued usually for no more than ten to fifteen minutes, until the victim begins to speak in circles—as, for instance, defining ‘space’ by ‘length’ and ‘length’ by ‘space’. When this stage is reached, we have come usually to the undefined terms of a given individual. If we still press, no matter how gently, for definitions, a most interesting fact occurs. Sooner or later, signs of affective disturbances appear. Often the face reddens; there is bodily restlessness; sweat appears—symptoms quite similar to those seen in a schoolboy who has forgotten his lesson, which he ‘knows but cannot tell’. […] Here we have reached the bottom and the foundation of all non-elementalistic meanings—the meanings of undefined terms, which we ‘know’ somehow, but cannot tell. In fact, we have reached the un-speakable level. This ‘knowledge’ is supplied by the lower nerve centres; it represents affective first order effects, and is interwoven and interlocked with other affective states, such as those called ‘wishes’, ‘intentions’, ‘intuitions’, ‘evaluation’, and many others. […]

“The above explanation, as well as the neurological attitude towards ‘meaning’, as expressed by Head, is non-elementalistic. We have not illegitimately split organismal processes into ‘intellect’ and ’emotions’.”

Reference

Korzybski, A. (1933). Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. Institute of General Semantics.