Tagged: Logic

Some troubling and interesting things about investigating reasoning

Competence models are typically created and explored by a small number of experts: Boole, Gentzen, Kolmogorov, Ramsey, De Finetti, …  The authority can often be shifted to the mathematics.  However, although non-experts can usually understand a statement of the theorem to be proved, often they can’t understand the details of the proof.

There are problems with being an expert.  If you stare too long at the formalism, you lose your intuition and can’t see why someone would interpret a task the “wrong” way.  Often there are interpretations which are not obvious a priori.

And who decides what constitutes a permissible interpretation?  Some obvious ideas for this are open to debate.  For instance, is it always reasonable for people to keep their interpretation constant across tasks?  Or is it rational to change your mind as you learn more about a problem?  Is it rational to be aware of when you change your mind?

To complicate things further, various measures loading on g (the general factor of intelligence) predict interpretations.  Does that mean that those with better cognitive ability can be thought of as having reasoned their way to the correct interpretation?


Recognizing textual entailment with natural logic

How do you work out whether a segment of natural language prose entails a sentence?

There are two extreme positions on how to model what’s going on.  One is to translate the natural language into a logic of some kind, then apply a theorem prover to draw conclusions.  The other is to use algorithms which work directly on the original text, using no knowledge of logic, for instance applying lexical or syntactic matching between premises and putative conclusion.

The main problem with the translation approach is that it’s very hard, as anyone who has tried manually to formalise some prose will agree.  The main problem with approaches that process the text in a shallow fashion is that they can be easily tricked, e.g., by negation, or by systematically replacing quantifiers.
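To make that fragility concrete, here is a toy illustration (my own, not any published system): a bag-of-words overlap score, which cannot see negation or the choice of quantifier at all.

# Toy lexical-overlap "entailment" score; purely illustrative.

def overlap_score(premise: str, hypothesis: str) -> float:
    """Fraction of hypothesis words that also occur in the premise."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(h & p) / len(h)

print(overlap_score("no birds fly", "birds fly"))        # 1.0: negation is invisible to word overlap
print(overlap_score("all birds fly", "some birds fly"))  # 0.67: swapping the quantifier barely dents the score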

Bill MacCartney and Christopher D. Manning (2009) report some work from the space in between using so-called natural logics, which work by annotating the lexical elements of the original text in a way that allows inference. One example of such a logic familiar to those in the psychology of reasoning community is described by Geurts (2003).

The general idea is to find a sequence of edits, guided by the logic, which transforms the premises into the conclusion.  The edits are driven solely by the lexical items and require no context.

It seems promising for many cases, easily beating both the naive lexical comparisons and attempts automatically to formalise and prove properties in first-order logic.
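To give a flavour of the edit-and-join idea, here is a toy sketch in Python. It tracks only three entailment relations rather than MacCartney and Manning’s seven, ignores their projection of relations through monotonicity contexts, and the word pairs and names are mine, purely for illustration.

# Toy sketch of the edit-and-join idea behind natural logic inference.
# Only three lexical entailment relations are tracked here, and the
# relation of each edit is hard-coded for an upward-monotone position.

FORWARD, REVERSE, EQUAL, UNKNOWN = "forward", "reverse", "equal", "unknown"

# Relation contributed by a single (old, new) word edit.
EDIT_RELATION = {
    ("dog", "animal"): FORWARD,   # narrowing -> broadening: entails
    ("animal", "dog"): REVERSE,   # broadening -> narrowing: is entailed by
    ("all", "some"): FORWARD,     # assumes the restrictor is non-empty
}

def join(r1, r2):
    """Compose the relations of two successive edits (coarsely)."""
    if r1 == EQUAL:
        return r2
    if r2 == EQUAL:
        return r1
    return r1 if r1 == r2 else UNKNOWN

def relation_of_edits(edits):
    """Join the relations of a sequence of (old, new) word edits."""
    rel = EQUAL
    for edit in edits:
        rel = join(rel, EDIT_RELATION.get(edit, UNKNOWN))
    return rel

# "All dogs bark" -> "Some dogs bark" -> "Some animals bark":
# each edit yields forward entailment, and so does their join.
print(relation_of_edits([("all", "some"), ("dog", "animal")]))  # forward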

References

MacCartney, B. & Manning, C. D. (2009). An extended model of natural logic. In Proceedings of the Eighth International Conference on Computational Semantics (IWCS-8), Tilburg, The Netherlands, January 2009.

Geurts, B. (2003). Reasoning with quantifiers. Cognition, 86, 223-251.

Language and logic (updated)

Some careful philosophical discussion by Monti, Parsons, and Osherson (2009):

There may well be a “language of thought” (LOT) that underlies much of human cognition without LOT being structured like English or other natural languages. Even if tokens of LOT provide the semantic interpretations of English sentences, such tokens might also arise in the minds of aphasic individuals and even in other species and may not resemble the expressions found in natural language. Hence, qualifying logical deduction as an “extra-linguistic” mental capacity is not to deny that some sort of structured representation is engaged when humans perform such reasoning. On the other hand, it is possible that LOT (in humans) coincides with the ‘‘logical form’’ (LF) of natural language sentences, as studied by linguists. Indeed, LF (serving as the LOT) might be pervasive in the cortex, functioning well beyond the language circuit […].

Levels of analysis again. Just because something “is” not linguistic at one level doesn’t mean it “is” not linguistic at another.

This calls for a bit of elaboration! (Thanks Martin for the necessary poke.)  There could be languages—in a broad sense of the term—implemented all over the brain. Or, to put it another way, various neural processes, lifted up a level of abstraction or two, could be viewed linguistically. At the more formal end of cognitive science, I’m thinking here of the interesting work in the field of neuro-symbolic integration, where connectionist networks are related to various logics (which have a language).

I don’t think there is any language in the brain. It’s a bit too damp for that. There is evidence that bits of the brain support (at the personal level of explanation) linguistic function: picking up people in bars and at conferences, for instance. There must be linguistic-function-supporting bits in the brain somewhere; one question is how distributed they are. I would also argue that linguistic-like structures (the formal kind) can characterise (i.e., a theorist can use them to characterise) many aspects of brain function, irrespective of whether that function is linguistic at the personal level. If this is the case, and those cleverer than I am think it is, then that suggests that the brain (at some level of abstraction) has properties related to those linguistic formalisms.

Reference

Monti, M. M.; Parsons, L. M. & Osherson, D. N. (2009). The boundaries of language and thought in deductive inference. Proceedings of the National Academy of Sciences of the United States of America.

Free books

From LogBlog:

Exciting developments! The Association for Symbolic Logic has made the now out-of-print volumes in the Lecture Notes in Logic (vols. 1–12) and Perspectives in Mathematical Logic (vols. 1–12) open access through Project Euclid. This includes classics like […]

Prover9 and Mace4

Just found two fantastic programs, Prover9 and Mace4, plus a GUI, for automated proof and for exploring models in classical first-order logic.  There are many other theorem provers and model finders out there.  This one is special as it comes as a self-contained and easy-to-use package for Windows and Macs.

There are many impressive built-in examples which you can play with.  To start easy, I gave it a little syllogism:

all B are A
no B are C

with existential presupposition, which is expressed simply:

exists x a(x).           % existential presupposition:
exists x b(x).           % there are As, Bs, and Cs
exists x c(x).
all x (b(x) -> a(x)).    % all B are A
all x (b(x) -> -c(x)).   % no B are C

and asked it to find a model. Out popped a model with two individuals, named 0 and 1:

a(0).
-a(1).

b(0).
-b(1).

-c(0).
c(1).
So individual 0 is an A, a B, but not a C. Individual 1 is not an A, nor a B, but is a C.

Then I requested a counterexample to the conclusion no C are A:

a(0).
a(1).

b(0).
-b(1).

-c(0).
c(1).

The premises are still true in this model, but the conclusion is false: individual 1 is both a C and an A.

Finally, does the conclusion some A are not C follow from the premises?

2 (exists x b(x)) [assumption].
4 (all x (b(x) -> a(x))) [assumption].
5 (all x (b(x) -> -c(x))) [assumption].
6 (exists x (a(x) & -c(x))) [goal].
7 -a(x) | c(x). [deny(6)].
9 -b(x) | a(x). [clausify(4)].
10 -b(x) | -c(x). [clausify(5)].
11 b(c2). [clausify(2)].
12 c(x) | -b(x). [resolve(7,a,9,b)].
13 -c(c2). [resolve(10,a,11,a)].
16 c(c2). [resolve(12,b,11,a)].
17 $F. [resolve(16,a,13,a)].

Indeed it does. Unfortunately the proofs aren’t very pretty, as everything is first rewritten into clausal normal form.  One thing I want to play with is how non-classical logics may be embedded in this system.
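For scripting that kind of experiment, the same formulas can also be fed to the LADR command-line binaries rather than the GUI. A minimal sketch, assuming prover9 and mace4 are installed and on the PATH:

# Run the syllogism through the command-line tools prover9 and mace4.
import os
import subprocess
import tempfile

INPUT = """
formulas(assumptions).
  exists x a(x).
  exists x b(x).
  exists x c(x).
  all x (b(x) -> a(x)).    % all B are A
  all x (b(x) -> -c(x)).   % no B are C
end_of_list.

formulas(goals).
  exists x (a(x) & -c(x)). % some A are not C
end_of_list.
"""

with tempfile.NamedTemporaryFile("w", suffix=".in", delete=False) as f:
    f.write(INPUT)
    path = f.name

# prover9 searches for a proof of the goal from the assumptions; mace4
# negates the goal and searches for a finite model, i.e. a counterexample
# (there is none for this particular conclusion).
for tool in ("prover9", "mace4"):
    result = subprocess.run([tool, "-f", path], capture_output=True, text=True)
    print(tool, "finished with exit code", result.returncode)

os.remove(path)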

A non-judgmental reconstruction of drunken logic

Simmons (2007) makes a helpful contribution to the logical modelling of real arguments by adding the shot glass modality to intuitionistic logic.  A snippet:

Per Per Martin-Löf [7], something is true when witnessed by an object of knowledge, which lends itself to an obvious question of whether the truth of a proposition can be obviated by the presence of alcohol, seeing as alcohol has a clearly negative impact on one’s knowledge [1]. The possibility of the analytical truth of a proposition becoming questionable under the influence is also evidenced by discussion as to whether conference submissions that can be understood while drunk are novel enough to be worth accepting.

I think the following inference rule, which I discovered while living in the homeland of Martin-Löf, still requires further investigation:

\[
\frac{\Gamma \vdash A\;\text{, right?}}{\Gamma \vdash A}
\]

Reference

Simmons, R. J. (2007). A non-judgmental reconstruction of drunken logic. Presented at SIGBOVIK 2007, April 1, 2007. Winner of the Best Paper raffle.