Tom asked if I had any papers on the dangers of ignoring individual differences. Here goes with a bit of a brain dump.
I view individual differences research as an attempt to pull a little bit more out of the residual term. Contrast these approaches:
- If I do A to a load of people then on average B happens (but with a bit of noise).
- If I do A to a load of people, then those with property P do B and those with property Q do C (and there’s a bit of noise, but less than in the first setup). (Hence the derogatory response to individual differences research: “Oh, you’re just modelling the noise term”.)
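The contrast above can be sketched in a few lines of Python. Everything here is invented for illustration: two latent subgroups, effect sizes of +2 and −2, and a small amount of genuine noise. The point is just that the “noise” in the first model contains the subgroup structure the second model recovers.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: two latent subgroups respond differently to the
# same manipulation A. Group P shifts by +2, group Q by -2; both with
# a little genuine noise (sd = 0.5). All names and numbers are made up.
n = 1000
groups = [random.choice("PQ") for _ in range(n)]
true_effect = {"P": 2.0, "Q": -2.0}
responses = [true_effect[g] + random.gauss(0, 0.5) for g in groups]

# Setup 1: everyone is assumed to do the same thing on average.
grand_mean = statistics.mean(responses)
resid1 = [r - grand_mean for r in responses]

# Setup 2: condition on (estimated) group membership.
group_mean = {g: statistics.mean(r for r, gg in zip(responses, groups) if gg == g)
              for g in "PQ"}
resid2 = [r - group_mean[g] for r, g in zip(responses, groups)]

print(round(statistics.stdev(resid1), 2))  # large: subgroup structure hiding in the "noise"
print(round(statistics.stdev(resid2), 2))  # small: only the genuine noise is left
```

The residual standard deviation shrinks dramatically once group membership is modelled, which is exactly the sense in which individual differences research “models the noise term”.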
Much individual differences research is a bit dull, so I’m not surprised it is criticised: collect a load of data, preferably Gaussian distributed and obtained by summing many items, and examine the correlation structure (sometimes disguised with a spot of structural equation modelling).
Here’s a little example I happen to like. Some people interpret “some A are B” as meaning some and possibly all; others interpret it as some and not all. There are two things you can do: try to force people towards one or other interpretation; or take an individual differences approach, find some other way to guess whether people are good at language pragmatics, and see if that predicts their interpretation of “some A are B”. I think both are important. The former is about trying to increase the probability that more people will understand what you mean (so you could imagine, and presumably psycholinguists do this, showing people big wads of text and seeing what influences ambiguity of interpretation). The individual differences approach is interesting because much of the information we deal with in life is ambiguous, and people seem to differ in how they deal with language and social context, how often they ask questions, and so on. So it’s nice to leave things a bit ambiguous (one might even say more ecologically valid) and characterise how people deal with it.
(Well actually there’s at least one other thing you can do: focus on areas of psychology where there are negligible individual differences: brick-to-skull manipulation and so on.)
Ignoring individual differences can be a bit dangerous. Suppose you want to model processes based on empirical data. If you assume everyone is doing the same thing, and just average across responses, then your model is not going to be very good. And forcing people to do the same thing can be a bit authoritarian.
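A toy illustration of how averaging can mislead, a hypothetical version of the classic averaged-learning-curve artefact (all numbers invented): if each simulated subject learns abruptly, with accuracy jumping from 0 to 1 at some individual trial, the group average rises smoothly, a gradual curve that describes no individual at all.

```python
import random

random.seed(1)

# Invented scenario: each subject learns abruptly at a different trial.
# Individually, every accuracy curve is a step function.
n_subjects, n_trials = 200, 50
switch = [random.randint(5, 45) for _ in range(n_subjects)]  # individual jump points
curves = [[0 if t < s else 1 for t in range(n_trials)] for s in switch]

# The group average, however, climbs gradually, which would (wrongly)
# suggest a shared, incremental learning process.
average = [sum(c[t] for c in curves) / n_subjects for t in range(n_trials)]

print([round(average[t], 2) for t in range(0, n_trials, 10)])
```

Fitting a smooth model to the averaged curve would be modelling a process that none of the subjects is actually carrying out.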
I hate methodological papers. I like methodological asides in ordinary papers. Here’s one (from the abstract of Guasti et al 2005):
“… some of the manipulations of the experimental context have an effect on all subjects, whereas others produce effects on just a subset of children. Individual differences of this kind may have been concealed in previous research because performance by individual subjects was not reported.”
Or from Stenning and van Lambalgen (2005, p. 924):
“… although a considerable number of Byrne’s subjects (about 35%) withdraw the modus ponens inference after the second conditional premiss is presented, many more (about 65%) continue to draw the inference. What interpretation of the materials do these subjects have? If it is the same, then why do they not withdraw the inference too? And if it is different, then how can it be accommodated within the semantic framework that underpins the theory of reasoning? Does failure to suppress mean that these subjects have mental logics with inference rules (as Byrne presumably would have interpreted the data if no subject had suppressed)? The psychological data is full of variation, but the psychological conclusions have been rather monolithic.”
I quite like this too; it’s about longitudinal analyses but the point is much more general (Bauer et al 2002, p. 202):
“The logic of deductive longitudinal analyses represents a clear advance over the assumption that all children follow a universal language trajectory. A priori hypotheses about group differences are evaluated with reference to individual developmental trajectories. However, individual variation that is not explained by group differences is relegated to a residual or error term. Given the crudeness of many of the hypotheses in the social sciences, it is often the case that much of the observed variation in individual trajectories is allocated to the residual term. The deductive approach thus seems to be at odds with an explicit focus on individual differences, in that departures from normative patterns are regarded as random error and typically are not investigated.
“The alternative, ‘inductive’ approach seeks to maximize the information that can be gained from the individual trajectories themselves. In contrast to theory driven deductive methods, which evaluate differences between predefined groups, inductive methods are data driven and are used to examine the natural structure of individual differences. These methods are sometimes referred to as pattern oriented, person centered, or personalogic because they begin by examining similarities and differences in individual developmental patterns (see Cairns, Bergman, & Kagan, 1998, for a review). Using this bottom-up strategy, decision rules or clustering procedures are used to aggregate individuals into groups that display similar developmental patterns, irrespective of their status on theoretically relevant predictor variables.”
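A minimal sketch of the bottom-up strategy the quote describes, with invented growth rates and a naive one-dimensional two-means clustering (not the procedures Bauer et al actually used): fit each simulated child’s vocabulary trajectory, then group children by the fitted slopes alone, without reference to any predictor variables.

```python
import random
import statistics

random.seed(2)

# Invented data: two latent trajectory types, slow (3 words/month) and
# fast (12 words/month) vocabulary growth, observed from 12 to 30 months.
months = list(range(12, 31, 3))

def simulate_child(rate):
    return [rate * (m - 12) + random.gauss(0, 5) for m in months]

children = [simulate_child(random.choice([3.0, 12.0])) for _ in range(60)]

def slope(ys):
    # Ordinary least-squares slope of vocabulary size on age.
    mx, my = statistics.mean(months), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(months, ys))
    den = sum((x - mx) ** 2 for x in months)
    return num / den

slopes = [slope(c) for c in children]

# Naive one-dimensional two-means clustering on the fitted slopes.
lo, hi = min(slopes), max(slopes)
for _ in range(20):
    labels = [0 if abs(s - lo) < abs(s - hi) else 1 for s in slopes]
    lo = statistics.mean(s for s, g in zip(slopes, labels) if g == 0)
    hi = statistics.mean(s for s, g in zip(slopes, labels) if g == 1)

print(round(lo, 1), round(hi, 1))  # recovered growth rates of the two clusters
```

The clustering recovers the two growth rates from the trajectories themselves, which is the sense in which the inductive approach examines “the natural structure of individual differences” rather than predefined groups.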
Bauer, D. J.; Goldfield, B. A. & Reznick, J. S. Alternative approaches to analyzing individual differences in the rate of early vocabulary development. Applied Psycholinguistics, 2002, 23, 313-335
Guasti, M. T.; Chierchia, G.; Crain, S.; Foppolo, F.; Gualmini, A. & Meroni, L. Why children and adults sometimes (but not always) compute implicatures. Language and Cognitive Processes, 2005, 20, 667-696
Stenning, K. & van Lambalgen, M. Semantic Interpretation as Computation in Nonmonotonic Logic: The Real Meaning of the Suppression Task. Cognitive Science, 2005, 29, 919-960