# Recognizing textual entailment with natural logic

How do you work out whether a segment of natural language prose entails a sentence?

There are two extreme positions on how to model what’s going on. One is to translate the natural language into a logic of some kind, then apply a theorem prover to draw conclusions. The other is to use algorithms which work directly on the original text, using no knowledge of logic, for instance applying lexical or syntactic matching between premises and putative conclusion.

The main problem with the translation approach is that it's very hard, as anyone who has tried to formalise prose by hand will agree. The main problem with approaches that process the text in a shallow fashion is that they can be easily tricked, e.g., by negation, or by systematic replacement of quantifiers.
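To make the failure mode concrete, here is a minimal sketch (entirely hypothetical; the function name and scoring scheme are my own) of a naive lexical-overlap entailment check, which gives a perfect score to a premise that contradicts the hypothesis:

```python
def lexical_overlap(premise: str, hypothesis: str) -> float:
    """Fraction of hypothesis words that also appear in the premise."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / len(h)

# Every hypothesis word occurs in the premise, so overlap is 1.0,
# even though the premise negates the hypothesis:
print(lexical_overlap("the dog did not bark", "the dog did bark"))  # 1.0
```

A system scoring entailment this way would be confidently wrong exactly where negation matters.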

Bill MacCartney and Christopher D. Manning (2009) report work in the space between these extremes, using so-called natural logics, which annotate the lexical elements of the original text in a way that supports inference. One example of such a logic familiar to those in the psychology of reasoning community is described by Geurts (2003).

The general idea is to find a sequence of edits, guided by the logic, which transforms the premises into the conclusion. The edits are driven by the lexical items and the monotonicity of their immediate context, requiring no full semantic representation of the text.

The approach seems promising for many cases, easily beating both naive lexical comparison and attempts to formalise the text automatically and prove properties in first-order logic.

**References**

MacCartney, B., & Manning, C. D. (2009). An extended model of natural logic. In *Proceedings of the Eighth International Conference on Computational Semantics (IWCS-8)*, Tilburg, Netherlands, January 2009.

Geurts, B. (2003). Reasoning with quantifiers. *Cognition*, *86*, 223-251.