Boxes and arrows (very preliminary scribble!)

Psychology is infamous for its use of box-and-arrow models. Typically the boxes represent something like processes and the arrows something like connections between processes. Could something analogous be developed using functions? Take a function, f : A → B. This gives us a load of properties to think about: what are the domain and codomain, A and B? Is the function total or partial? Injective, surjective, bijective? Note how the arrows are now the devices that do the work, and the box-equivalents are type spaces.
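
A minimal Haskell sketch of the idea, with every name invented purely for illustration: one total arrow, one partial arrow made explicit with Maybe, and their composition.

-- Placeholder types: no claim that the mind contains Strings.
type Stimulus       = String
type Response       = String
data Representation = Rep String

-- A total function: defined for every stimulus.
encode :: Stimulus -> Representation
encode = Rep

-- A partial function, made explicit with Maybe: some
-- representations yield no response at all.
respond :: Representation -> Maybe Response
respond (Rep s)
  | null s    = Nothing
  | otherwise = Just s

-- Composition wires the "boxes" together; the arrows do the work.
process :: Stimulus -> Maybe Response
process = respond . encode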

The specification of a function need only be as precise as can be determined experimentally. Some decisions could be made about representation, so long as it is understood that the chosen representation is just one member of an equivalence class, and that it is unlikely to be possible to determine the representation experimentally: all that may be determined is information flow.

We will need composition of functions, possibly some form of branching, and some form of repetition (recursion). Let's pause for a moment and examine what may be achieved merely by thinking about the types of the functions. Think about "simple deductive reasoning tasks", which are often really complex discourse comprehension tasks:

Some elephants are mammals
Some mammals are happy

What follows? Different people give different answers, so the specification differs from person to person. Think in terms of functions and spaces. Do we want to model in terms of something at least close to the surface form? We know that some people are sensitive to the order in which the premises are presented, so this empirical fact would have to be included in the model.

What is the task that participants have to do? What aspects of it do we want to model? Do we want to develop a program which can imitate a participant? Do we want to take into consideration sensory input and motor output or do we want to abstract these away?

First specification of the top-level function's type

solvesyll : Premises × Options → (Option × RT)

This takes a sequence of premises and a sequence of options, and returns a pair of the selected option and how long it took to return it.
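
In code, that first type might be rendered as the following minimal Haskell sketch; Premise, Option, and RT are invented placeholders, and the body is a stub rather than a theory of how anyone solves anything.

type Premise  = String
type Option   = String
type RT       = Double   -- response time, say in seconds

type Premises = [Premise]
type Options  = [Option]

-- Curried Haskell form of Premises × Options → (Option × RT).
solveSyll :: Premises -> Options -> (Option, RT)
solveSyll _premises (o:_) = (o, 0.0)    -- stub: pick the first option
solveSyll _premises []    = ("", 0.0)   -- stub: nothing to choose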

Instructions may also affect the response:

solvesyll : Instruction ×
Premises ×
Options
→ (Option × RT)

How do we represent the instructions?

What strategy is used? That may also need to feature:

solvesyll : Instruction ×
Premises ×
Strategy ×
Options
→ (Option × RT)

We can also ask the participant how they felt they solved the problem:

solvesyll : Instruction ×
Premises ×
Strategy ×
Options
→ (Option × RT × VerbalReport)
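
Gathering these refinements into one place, a hedged Haskell rendering might look as follows; every type here is a placeholder standing for whatever the experiment can actually distinguish.

type Instruction  = String
type Premises     = [String]
type Strategy     = String
type Options      = [String]
type Option       = String
type RT           = Double   -- response time, say in seconds
type VerbalReport = String

-- Curried form of Instruction × Premises × Strategy × Options
--   → (Option × RT × VerbalReport); the body is a stub.
solveSyll :: Instruction -> Premises -> Strategy -> Options
          -> (Option, RT, VerbalReport)
solveSyll _instr _premises _strategy (o:_) = (o, 0.0, "no report")
solveSyll _instr _premises _strategy []    = ("", 0.0, "no options")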

More generally, people have to solve many types of reasoning problem (a typed sketch follows this list):

solve-syll
solve-immediateinference
solve-suppression
solve-linda
solve-wason
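
One way to keep these under a single typed interface is a sum type over task kinds; a sketch, with the constructors standing in for the tasks listed above and the payloads pure stubs.

-- A sum type over task kinds; each constructor carries whatever
-- input that task needs (placeholders here).
data Task
  = Syll [String] [String]          -- premises, options
  | ImmediateInference String
  | Suppression String
  | Linda String
  | Wason String

type Response = String

-- One dispatcher instead of five separate solve-* functions.
solve :: Task -> Response
solve (Syll _ps (o:_))       = o    -- stub
solve (Syll _ps [])          = "no options"
solve (ImmediateInference _) = "stub"
solve (Suppression _)        = "stub"
solve (Linda _)              = "stub"
solve (Wason _)              = "stub"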

Even more generally, people have to solve sequences of reasoning problems of each type:

solve-sylls
solve-immediateinferences
solve-suppressions
solve-lindas
solve-wasons

Perhaps the order in which the items are presented affects how they are solved. Perhaps this is an issue only for some people.
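
If items were solved independently, solve-sylls would just be map solve-syll. Order sensitivity suggests threading some state through the sequence instead; a sketch of that idea, in which Carryover is a pure placeholder for whatever persists between items.

import Data.List (mapAccumL)

type Premises  = [String]
type Options   = [String]
type Option    = String
type RT        = Double
type Item      = (Premises, Options)

-- Whatever carries over between items: fatigue, priming, a strategy
-- learned on earlier items. Entirely a placeholder.
type Carryover = Int

solveOne :: Carryover -> Item -> (Carryover, (Option, RT))
solveOne c (_ps, (o:_)) = (c + 1, (o, fromIntegral c))   -- stub
solveOne c (_ps, [])    = (c + 1, ("", fromIntegral c))

-- Order-sensitive: each item is solved in the state left behind by
-- its predecessors, so permuting the items can change the responses.
solveSylls :: [Item] -> [(Option, RT)]
solveSylls = snd . mapAccumL solveOne 0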

Stenning and Cox (2006) related immediate inference to syllogisms using multiple regression. How could we model that in terms of information flow? What level of detail do we want to model? Thinking about types may help to organise that, even before we move to a detailed computational model which mimics, AI-style, how a person performs a task.

Options for the model; start at the participant end:

modelperson :
TaskList × StrategyList → Responses

which maps a sequence of tasks and strategies to a sequence of responses. More generally, perhaps:

model :
PersonFeatures × TaskFeatures → Responses

So in addition we now have a list of traits of the person. The problem with this approach is that a scalar quantity such as how extroverted someone is seems odd, almost as if there were a little number in the mind which could be tweaked with a screwdriver, the way one tweaks the little pots in an old transistor radio. It is fine, though, if we concentrate on information flow. Where does an element of PersonFeatures come from? Simply as the output of another function.

model([extrovert(bob), syll-strategy(bob), ii-strategy(bob)], […])
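
As a hedged Haskell sketch (every name here is invented for illustration), reading person features as outputs of other functions might look like this:

-- Placeholder trait and feature types; no claim that these are the
-- right features, only that they arrive as outputs of functions.
data Trait = Extroversion Double | SyllStrategy String

type PersonFeatures = [Trait]
type TaskFeatures   = [String]
type Responses      = [String]

-- A PersonFeatures value is not primitive: here it is derived from
-- questionnaire data by another function.
type Questionnaire = [(String, Int)]   -- (question, rating)

scoreQuestionnaire :: Questionnaire -> PersonFeatures
scoreQuestionnaire qs =
  [Extroversion (fromIntegral (sum (map snd qs)))]   -- stub scoring

model :: PersonFeatures -> TaskFeatures -> Responses
model _traits = map (const "response")               -- stub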

This quickly deteriorates into something ACT-R-like, but without any of its advantages. The real issue we're interested in is: what predicts the things we can measure? Or in a bit more detail:

  1. What affects people’s actions?
  2. What affects how people feel?

But do we really want a big piece of nasty machinery which takes a heap of parameters (including whether one had chips for dinner) and spews out acts and feelings (at time t)?

Performance in syllogisms (and immediate inference and the suppression task and the other tasks) is just a proxy to get at more general processes, so a critical step would be abstracting away the specific details of the processes. Analogously, how you perform in Raven's matrices is not the thing we're actually interested in; the volume of mercury in a thermometer is not the thing we're actually interested in. However, perhaps we don't yet know enough about the properties of the machinery we're using to measure to do such abstraction.
