More on levels of description

David Marr (1982) is often cited for his theory of levels of explanation. The three levels he gives are (a) the computational theory, specifying the goals of the computation; (b) representation and algorithm, giving a representation of the input and output and the algorithm which transforms one into the other; and (c) the hardware implementation, how algorithm and representation may be physically realised. I sometimes wonder how ideas from computer science related to levels of analysis could map across to the cognitive and brain sciences, and perhaps generalise or make more concrete Marr’s three levels. This is already being done, most notably by researchers who investigate the relationship between logics and connectionist networks (see this earlier posting for a recentish example). But how about deeper in computer science, well away from speculation about brains?

There is a large body of work on showing how an obviously correct but inefficient description of a solution to a problem may be transformed, step by step, into something which is (at the other extreme) fast but difficult to understand. One particularly vivid example is Hutton’s (2002) development of a solver for the Countdown arithmetic problem. Here follows a sketch of the approach.

In the Countdown problem you are given a set of numbers, each of which you are allowed to use at most once in a solution. The task is to produce an expression which will evaluate to a given target number by combining these numbers with the arithmetic operators +, -, /, * (each of which may be used any number of times), and parentheses. For instance, from

1, 5, 6, 75, 43, 65, 32, 12

you may be asked to generate 23. One way to do this is

((32 + 12) - (75 / 5)) - 6
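
To pin down what counts as a solution, here is a minimal Haskell sketch of expressions and their evaluation, in the spirit of Hutton’s formalisation (the code here is my reconstruction, not quoted from the paper). In his formalisation every intermediate value must be a positive natural number, so a subtraction must stay above zero and a division must be exact:

    -- Arithmetic expressions over the given numbers.
    data Op = Add | Sub | Mul | Div deriving Show

    data Expr = Val Int | App Op Expr Expr deriving Show

    -- An application is valid only if it keeps us within the
    -- positive naturals (Countdown's arithmetic).
    valid :: Op -> Int -> Int -> Bool
    valid Add _ _ = True
    valid Sub x y = x > y
    valid Mul _ _ = True
    valid Div x y = x `mod` y == 0

    apply :: Op -> Int -> Int -> Int
    apply Add x y = x + y
    apply Sub x y = x - y
    apply Mul x y = x * y
    apply Div x y = x `div` y

    -- Evaluation yields a singleton list on success and the empty
    -- list on failure (an invalid subtraction or division anywhere).
    eval :: Expr -> [Int]
    eval (Val n)      = [n | n > 0]
    eval (App op l r) = [apply op x y | x <- eval l, y <- eval r, valid op x y]

With these definitions the example above checks out: eval on ((32 + 12) - (75 / 5)) - 6 gives [23], since every intermediate value (44, 15, 29) is a positive whole number.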

Hutton begins by producing a high-level formal specification which is quite close to the original problem. This requires specifying:

  1. A method for generating all ways of selecting collections of numbers from the list, e.g. [1], [5], [6], …, [5,6], … [1,5,75,43], …
  2. A method for working out all ways to split a list in two so you’ve got two non-empty lists, e.g. for [1,5,75,43] you’d get

    [1],[5,75,43]
    [1,5],[75,43]
    [1,5,75],[43]

  3. A method which, given a couple of lists of numbers, gives you back all the ways of combining them with arithmetical operators.
  4. A method which evaluates the expression and checks if what pops out gives the right answer. (A Haskell sketch of all four pieces follows this list.)
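
Putting the four pieces together gives something close to a direct transcription of the problem statement. The sketch below builds on the definitions above and follows the shape of Hutton’s development, though treat the details as my reconstruction rather than a quotation from the paper:

    -- (1) All ways of selecting a collection of numbers: every
    -- subsequence, in every order.
    subs :: [a] -> [[a]]
    subs []     = [[]]
    subs (x:xs) = yss ++ map (x:) yss where yss = subs xs

    interleave :: a -> [a] -> [[a]]
    interleave x []     = [[x]]
    interleave x (y:ys) = (x : y : ys) : map (y:) (interleave x ys)

    perms :: [a] -> [[a]]
    perms []     = [[]]
    perms (x:xs) = concatMap (interleave x) (perms xs)

    choices :: [a] -> [[a]]
    choices = concatMap perms . subs

    -- (2) All ways of splitting a list into two non-empty parts.
    split :: [a] -> [([a], [a])]
    split []     = []
    split [_]    = []
    split (x:xs) = ([x], xs) : [(x : ls, rs) | (ls, rs) <- split xs]

    -- (3) All expressions over a list of numbers: split it, build
    -- all expressions over each part, combine with every operator.
    exprs :: [Int] -> [Expr]
    exprs []  = []
    exprs [n] = [Val n]
    exprs ns  = [App op l r | (ls, rs) <- split ns
                            , l <- exprs ls
                            , r <- exprs rs
                            , op <- [Add, Sub, Mul, Div]]

    -- (4) The specification itself: generate, evaluate, filter.
    solutions :: [Int] -> Int -> [Expr]
    solutions ns target =
      [e | ns' <- choices ns, e <- exprs ns', eval e == [target]]

This is the “obviously correct” end of the spectrum: you can read the problem statement straight off solutions, but it builds millions of expressions only to throw most of them away at the final filter.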

When carried through, this results in something executable which can relatively easily be proved equivalent to a formalisation of the problem description. The downside is that it’s slow. One reason is that you end up generating a bucketload of expressions which aren’t valid: the methods for solving the various sub-problems described above are too isolated from each other. Hutton gives the example of finding expressions over the numbers 1, 3, 7, 10, 25, and 50. There are 33,665,406 possible expressions, but only 4,672,540 of them (around 14%) are valid; the rest fail to evaluate because of properties of arithmetic, e.g. a subtraction going to zero or below, or a division that isn’t exact. His solution is to fuse the generation and evaluation stages, thus allowing cleverer generation, and to prove that the fused version is equivalent to the previous one. Next he takes advantage of other properties of arithmetic, e.g. commutativity of addition, x + y = y + x, which again reduces the search space, with further proofs establishing equivalence. This process continues until you’re left with something less obvious but fast, with explanations at each stage showing the correspondences between each refinement.
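
As a sketch of what that fusion looks like (again my reconstruction of the shape of Hutton’s development, reusing the definitions above): generation now carries each expression’s value along with it, so invalid sub-expressions are never built, and a strengthened validity test prunes one of each pair of commuted sums and products along with useless multiplications and divisions by 1:

    -- An expression paired with the value it evaluates to.
    type Result = (Expr, Int)

    -- Fused generation and evaluation: sub-expressions that fail
    -- the validity test are never constructed in the first place.
    results :: [Int] -> [Result]
    results []  = []
    results [n] = [(Val n, n) | n > 0]
    results ns  = [res | (ls, rs) <- split ns
                       , lx <- results ls
                       , ry <- results rs
                       , res <- combine lx ry]

    combine :: Result -> Result -> [Result]
    combine (l, x) (r, y) =
      [(App op l r, apply op x y) | op <- [Add, Sub, Mul, Div], valid' op x y]

    -- Strengthened validity: exploit commutativity (keep only
    -- x + y with x <= y, likewise for *) and the fact that
    -- multiplying or dividing by 1 never produces anything new.
    valid' :: Op -> Int -> Int -> Bool
    valid' Add x y = x <= y
    valid' Sub x y = x > y
    valid' Mul x y = x /= 1 && y /= 1 && x <= y
    valid' Div x y = y /= 1 && x `mod` y == 0

    solutions' :: [Int] -> Int -> [Expr]
    solutions' ns target =
      [e | ns' <- choices ns, (e, m) <- results ns', m == target]

Each strengthening of valid' needs a proof that no target values are lost, which is exactly the chain of equivalence proofs described above.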

Why is this relevant to brain stuff? I’m not suggesting that people should try to use refinement methods to relate what’s measurable directly from the brain to what’s measured and theorised about in psychology. The relevance is that this is an excellent example of levels of description. There may be many levels, and they’re relatively arbitrary: guided by ease of explanation, constrained by ease of execution. Presumably the ultimate goal of brain research is to relate feeling and behaviour down through dozens of levels to the physics, but the journey is going to require many fictional constructions to make sense of what’s going on. Naively mapping the constructs to, e.g., areas of the brain seems likely to bring much misery and despair, as does arguing about which fiction is correct.

Hutton, G. (2002). The Countdown Problem. Journal of Functional Programming, 12(6), 609-616.

Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman.


2 comments

  1. David McCullough

    Sorting out levels of description is indeed what cognitive science is all about. (And of course providing the descriptions.) Prior to cognitive science, philosophy was hung up on its exclusion of what was then named “psychologism,” i.e. empirical facts. And psychology, lacking the discipline of conceptual analysis to generate representational models, could not fight off charges of “reductionism.” Fortunately for all of us, computers came along to point the way forward. Marr, among others, noticed.

  2. Pingback: The Tractable Cognition Thesis « Figural Effect
