Tagged: Logics

Some troubling and interesting things about investigating reasoning

Competence models are typically created and explored by a small number of experts: Boole, Gentzen, Kolmogorov, Ramsey, De Finetti, … The authority can often be shifted to the mathematics. However, although non-experts can usually understand a statement of the theorem to be proved, they often can’t understand the details of the proof.

There are problems with being an expert. If you stare too long at the formalism, you lose your intuition and can’t see why someone would interpret a task the “wrong” way. Often there are interpretations which are not obvious a priori.

And who decides what constitutes a permissible interpretation?  Some obvious ideas for this are open to debate.  For instance, is it always reasonable for people to keep their interpretation constant across tasks?  Or is it rational to change your mind as you learn more about a problem?  Is it rational to be aware of when you change your mind?

To complicate things further, various measures loading on g (general cognitive ability) predict interpretations. Does that mean that those with better cognitive ability can be thought of as having reasoned to the correct interpretation?

Recognizing textual entailment with natural logic

How do you work out whether a segment of natural language prose entails a sentence?

There are two extreme positions on how to model what’s going on.  One is to translate the natural language into a logic of some kind, then apply a theorem prover to draw conclusions.  The other is to use algorithms which work directly on the original text, using no knowledge of logic, for instance applying lexical or syntactic matching between premises and putative conclusion.

The main problem with the translation approach is that it’s very hard, as anyone who has tried to formalise some prose by hand will agree. The main problem with approaches which process the text in a shallow fashion is that they can be easily tricked, e.g., by negation, or by systematically replacing quantifiers.

Bill MacCartney and Christopher D. Manning (2009) report some work from the space in between, using so-called natural logics, which work by annotating the lexical elements of the original text in a way that allows inference. One example of such a logic, familiar to those in the psychology of reasoning community, is described by Geurts (2003).

The general idea is to find a sequence of edits, guided by the logic, which tries to transform the premises into the conclusion. The edits are driven solely by the lexical items and require no context.
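To make this concrete, here is a toy sketch (mine, not MacCartney and Manning’s actual system): each lexical edit is tagged with an entailment relation, and the relations are composed along the edit sequence to give an overall relation between premise and conclusion. The four relation labels and the join table are a deliberately simplified version of the kind of relation algebra their model uses.

```python
# A toy sketch of edit-sequence composition in a natural logic.
# Relations: "=" equivalence, "<" forward entailment, ">" reverse
# entailment, "#" no determinate relation. The join table is a
# simplification, not MacCartney & Manning's full relation algebra.
JOIN = {
    ("=", "="): "=", ("=", "<"): "<", ("=", ">"): ">", ("=", "#"): "#",
    ("<", "="): "<", ("<", "<"): "<", ("<", ">"): "#", ("<", "#"): "#",
    (">", "="): ">", (">", "<"): "#", (">", ">"): ">", (">", "#"): "#",
    ("#", "="): "#", ("#", "<"): "#", ("#", ">"): "#", ("#", "#"): "#",
}

def compose(edit_relations):
    """Compose the relations attached to a sequence of lexical edits."""
    result = "="
    for r in edit_relations:
        result = JOIN[(result, r)]
    return result

# "Some dog barks" -> "Some animal barks": substituting the more general
# "animal" for "dog" in an upward-monotone position gives forward
# entailment, so the premise entails the conclusion.
print(compose(["<"]))   # '<'

# "Every dog barks" -> "Every animal barks": the same substitution in the
# downward-monotone first argument of "every" flips to reverse entailment,
# so the entailment fails.
print(compose([">"]))   # '>'
```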

It seems promising for many cases, easily beating both the naive lexical comparisons and attempts to automatically formalise the prose and prove properties in first-order logic.

References

MacCartney, B., & Manning, C. D. (2009). An extended model of natural logic. Proceedings of the Eighth International Conference on Computational Semantics (IWCS-8), Tilburg, Netherlands.

Geurts, B. (2003). Reasoning with quantifiers. Cognition, 86, 223-251.

Language and logic (updated)

Some careful philosophical discussion by Monti, Parsons, and Osherson (2009):

There may well be a “language of thought” (LOT) that underlies much of human cognition without LOT being structured like English or other natural languages. Even if tokens of LOT provide the semantic interpretations of English sentences, such tokens might also arise in the minds of aphasic individuals and even in other species and may not resemble the expressions found in natural language. Hence, qualifying logical deduction as an “extra-linguistic” mental capacity is not to deny that some sort of structured representation is engaged when humans perform such reasoning. On the other hand, it is possible that LOT (in humans) coincides with the ‘‘logical form’’ (LF) of natural language sentences, as studied by linguists. Indeed, LF (serving as the LOT) might be pervasive in the cortex, functioning well beyond the language circuit […].

Levels of analysis again. Just because something “is” not linguistic doesn’t mean it “is” not linguistic.

This calls for a bit of elaboration! (Thanks Martin for the necessary poke.)  There could be languages—in a broad sense of the term—implemented all over the brain. Or, to put it another way, various neural processes, lifted up a level of abstraction or two, could be viewed linguistically. At the more formal end of cognitive science, I’m thinking here of the interesting work in the field of neuro-symbolic integration, where connectionist networks are related to various logics (which have a language).

I don’t think there is any language in the brain. It’s a bit too damp for that. There is evidence that bits of the brain support (at the personal level of explanation) linguistic function: picking up people in bars and conferences, for instance. There must be linguistic-function-supporting bits in the brain somewhere; one question is how distributed they are. I would also argue that linguistic-like structures (the formal kind) can characterise (i.e., a theorist can use them to characterise) many aspects of brain function, irrespective of whether that function is linguistic at the personal level. If this is the case, and those cleverer than I think it is, then that suggests that the brain (at some level of abstraction) has properties related to those linguistic formalisms.

Reference

Monti, M. M., Parsons, L. M., & Osherson, D. N. (2009). The boundaries of language and thought in deductive inference. Proceedings of the National Academy of Sciences of the United States of America.

Free books

From LogBlog:

Exciting developments! The Association for Symbolic Logic has made the now out-of-print volumes in the Lecture Notes in Logic (vols. 1–12) and Perspectives in Mathematical Logic (vols. 1–12) open access through Project Euclid. This includes classics like

Computational logic and psychology

Prediction. This stuff is going to be put to work in psychology soon. (End of prediction.)

Computability logic … is a recently launched program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth that logic has more traditionally been. Formulas in it represent computational problems, “truth” means existence of an algorithmic solution, and proofs encode such solutions…

P.S. The most frustrating logic paper I’ve seen in a long time is over here: a graph-theoretical notion of propositional logic. Why is this frustrating? Check out the sickening rhetoric, especially Figure 1, which shows a proof of $((p \Rightarrow q) \Rightarrow p) \Rightarrow p$ (Peirce’s law) in a logic which the author claims is commonly taught to mathematics undergraduates. What the author doesn’t mention is that it’s often taught because it’s easy to prove meta-theorems about it. Most sensible people (well, sensible formal logic users, e.g., implementers of provers) use, say, a tableau system or natural deduction to actually prove things. Why was this nonsense allowed to stay in the paper?
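For what it’s worth, in a modern prover the formula takes a few lines once classical reasoning is allowed. A minimal sketch in Lean 4, using only core-library names (Classical.byContradiction, absurd):

```lean
-- Peirce's law is classically valid but not intuitionistically provable,
-- so the proof appeals to Classical.byContradiction: assume ¬p, build a
-- proof of p → q from that assumption, feed it to the hypothesis to get
-- p, and derive the contradiction.
theorem peirce (p q : Prop) : ((p → q) → p) → p :=
  fun h => Classical.byContradiction
    (fun hnp => hnp (h (fun hp => absurd hp hnp)))
```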

Science for the half-wits

A bit from Jean-Yves Girard’s latest rant, The phantom of transparency:

Still under the heading « science for the half-wits », let us mention non monotonic « logics ». They belong in our discussion because of the fantasy of completeness, i.e., of the answer to all questions. Here, the slogan is what is not provable is false : one thus seeks a completion by adding unprovable statements. Every person with a minimum of logical culture knows that this completion (that would yield transparency) is fundamentally impossible, because of the undecidability of the halting problem, in other terms, of incompleteness, which has been rightly named : it denotes, not a want with respect to a preexisting totality, but the fundamentally incomplete nature of the cognitive process.

Completeness is boring. Maybe Girard would be less confused if he viewed these logics as modelling information update, important given the “fundamentally incomplete nature of the cognitive process”.
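To illustrate the point with a toy sketch of my own (not Girard’s, and not any particular non-monotonic system): a conclusion drawn under a “what is not provable is false” policy is simply withdrawn when new information arrives, which looks more like information update than like a claim to a completed totality.

```python
def flies(facts):
    """Default rule: birds fly unless they are known to be penguins."""
    return "bird" in facts and "penguin" not in facts

facts = {"bird"}        # all we know about Tweety so far
print(flies(facts))     # True: flying is concluded because nothing blocks it

facts.add("penguin")    # learn more about the problem
print(flies(facts))     # False: the earlier conclusion is withdrawn
```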

Logic and Reasoning: do the facts matter?

Had a read of Logic and reasoning: do the facts matter? by Johan van Benthem. It covers much ground in a short space, but I found it thought-provoking. Here’s a quick sketch of the bits I liked.

Van Benthem mentions the anti-psychologism stance: briefly, the idea that human practice cannot tell us what correct reasoning is. He contrasts Frege’s view with that of Wundt; the latter, he argues, was too close to practice, while Frege was too far from it. He argues that if logics were totally inconsistent with real practice then they’d be useless.

Much logic is about going beyond what classical logic has to offer and is driven by real language use. Van Benthem cites Prior’s work on temporal structure, Lewis and Stalnaker’s work on comparative orderings of worlds, and work on generalised quantifiers, which was driven by the mess of real language and, for instance, produced formalisations of quantifiers like most and few. Generally, van Benthem argues, “one needs to move closer to the goal of providing more direct and faithful mathematical renderings of what seem to be stable reasoning practices.” You want your logic to be more natural, closer to the phenomena. Conceptions of mathematical logic were driven by the terms that appeared in rigorous proofs, so the linguistic work just widens the set of practices that are modelled.
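For concreteness, here is one standard generalized-quantifier clause for most, in a small Python rendering of my own (the textbook treatment, not anything specific to van Benthem’s paper; few is usually given a context-dependent threshold, so I leave it out):

```python
def most(A, B):
    # "Most As are Bs": more As are Bs than are not (one standard clause).
    return len(A & B) > len(A - B)

dogs = {"fido", "rex", "spot"}
barkers = {"fido", "rex"}
print(most(dogs, barkers))   # True: two of the three dogs bark
```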

Correctability in a logic is more important than correctness, he argues. This is consistent with the goals of the non-monotonic logic crowd I know and love. I find this most interesting when looking at individual differences in reasoning processes: perhaps a correctability dimension is out there somewhere, if only we could measure it and its correlates. I have some ideas—stay tuned.

Divergences from competence criteria, he argues, suggest new practices. I still see many papers in which people are scored against classical logic. Failure should prompt an attempt to work out what practice a person is following, rather than the more common concern with what went wrong and how we could bring people back.

Much more in this little paper…