Thursday, March 9, 2023

Argument map

From Wikipedia, the free encyclopedia
 
A schematic argument map showing a contention (or conclusion), supporting arguments and objections, and an inference objection
 
An argument map or argument diagram is a visual representation of the structure of an argument. An argument map typically includes the key components of the argument, traditionally called the conclusion and the premises, also called contention and reasons. Argument maps can also show co-premises, objections, counterarguments, rebuttals, and lemmas. There are different styles of argument map but they are often functionally equivalent and represent an argument's individual claims and the relationships between them.

Argument maps are commonly used in the context of teaching and applying critical thinking. The purpose of mapping is to uncover the logical structure of arguments, identify unstated assumptions, evaluate the support an argument offers for a conclusion, and aid understanding of debates. Argument maps are often designed to support deliberation of issues, ideas and arguments in wicked problems.

An argument map is not to be confused with a concept map or a mind map, two other kinds of node–link diagram which have different constraints on nodes and links.

Key features

A number of different kinds of argument maps have been proposed but the most common, which Chris Reed and Glenn Rowe called the standard diagram, consists of a tree structure with each of the reasons leading to the conclusion. There is no consensus as to whether the conclusion should be at the top of the tree with the reasons leading up to it or whether it should be at the bottom with the reasons leading down to it. Another variation diagrams an argument from left to right.

According to Douglas N. Walton and colleagues, an argument map has two basic components: "One component is a set of circled numbers arrayed as points. Each number represents a proposition (premise or conclusion) in the argument being diagrammed. The other component is a set of lines or arrows joining the points. Each line (arrow) represents an inference. The whole network of points and lines represents a kind of overview of the reasoning in the given argument..." With the introduction of software for producing argument maps, it has become common for argument maps to consist of boxes containing the actual propositions rather than numbers referencing those propositions.

There is disagreement on the terminology to be used when describing argument maps, but the standard diagram contains the following structures:

Dependent premises or co-premises, where at least one of the joined premises requires another premise before it can give support to the conclusion: An argument with this structure has been called a linked argument.

Statements 1 and 2 are dependent premises or co-premises.

Independent premises, where the premise can support the conclusion on its own: Although independent premises may jointly make the conclusion more convincing, this is to be distinguished from situations where a premise gives no support unless it is joined to another premise. Where several premises or groups of premises lead to a final conclusion the argument might be described as convergent. This is distinguished from a divergent argument where a single premise might be used to support two separate conclusions.

Statements 2, 3, 4 are independent premises.

Intermediate conclusions or sub-conclusions, where a claim is supported by another claim that is used in turn to support some further claim, i.e. the final conclusion or another intermediate conclusion: In the following diagram, statement 4 is an intermediate conclusion in that it is a conclusion in relation to statement 5 but is a premise in relation to the final conclusion, i.e. statement 1. An argument with this structure is sometimes called a complex argument. If there is a single chain of claims containing at least one intermediate conclusion, the argument is sometimes described as a serial argument or a chain argument.

Statement 4 is an intermediate conclusion or sub-conclusion.

Each of these structures can be represented by the equivalent "box and line" approach to argument maps. In the following diagram, the contention is shown at the top, and the boxes linked to it represent supporting reasons, which comprise one or more premises. The green arrow indicates that the two reasons support the contention:

A box and line diagram

Argument maps can also represent counterarguments. In the following diagram, the two objections weaken the contention, while the reasons support the premise of the objection:

A sample argument using objections

Representing an argument as an argument map

Diagramming written text

A written text can be transformed into an argument map by following a sequence of steps. Monroe Beardsley's 1950 book Practical Logic recommended the following procedure:

  1. Separate statements by brackets and number them.
  2. Put circles around the logical indicators.
  3. Supply, in parentheses, any logical indicators that are left out.
  4. Set out the statements in a diagram in which arrows show the relationships between statements.
A diagram of the example from Beardsley's Practical Logic

Beardsley gave the first example of a text being analysed in this way:

Though ① [people who talk about the "social significance" of the arts don’t like to admit it], ② [music and painting are bound to suffer when they are turned into mere vehicles for propaganda]. For ③ [propaganda appeals to the crudest and most vulgar feelings]: (for) ④ [look at the academic monstrosities produced by the official Nazi painters]. What is more important, ⑤ [art must be an end in itself for the artist], because ⑥ [the artist can do the best work only in an atmosphere of complete freedom].

Beardsley said that the conclusion in this example is statement ②. Statement ④ needs to be rewritten as a declarative sentence, e.g. "Academic monstrosities [were] produced by the official Nazi painters." Statement ① points out that the conclusion isn't accepted by everyone, but statement ① is omitted from the diagram because it doesn't support the conclusion. Beardsley said that the logical relation between statement ③ and statement ④ is unclear, but he proposed to diagram statement ④ as supporting statement ③.

A box and line diagram of Beardsley's example, produced using Harrell's procedure

More recently, philosophy professor Maralee Harrell recommended the following procedure:

  1. Identify all the claims being made by the author.
  2. Rewrite them as independent statements, eliminating non-essential words.
  3. Identify which statements are premises, sub-conclusions, and the main conclusion.
  4. Provide missing, implied conclusions and implied premises. (This is optional depending on the purpose of the argument map.)
  5. Put the statements into boxes and draw a line between any boxes that are linked.
  6. Indicate support from premise(s) to (sub)conclusion with arrows.

Diagramming as thinking

Argument maps are useful not only for representing and analyzing existing writings, but also for thinking through issues as part of a problem-structuring process or writing process. The use of such argument analysis for thinking through issues has been called "reflective argumentation".

An argument map, unlike a decision tree, does not tell how to make a decision, but the process of choosing a coherent position (or reflective equilibrium) based on the structure of an argument map can be represented as a decision tree.

History

The philosophical origins and tradition of argument mapping

From Whately's Elements of Logic p467, 1852 edition

In the Elements of Logic, published in 1826 and issued in many subsequent editions, Archbishop Richard Whately gave probably the first form of an argument map, introducing it with the suggestion that "many students probably will find it a very clear and convenient mode of exhibiting the logical analysis of the course of argument, to draw it out in the form of a Tree, or Logical Division".

However, the technique did not become widely used, possibly because for complex arguments, it involved much writing and rewriting of the premises.

Wigmore evidence chart, from 1905

Legal philosopher and theorist John Henry Wigmore produced maps of legal arguments using numbered premises in the early 20th century, based in part on the ideas of 19th century philosopher Henry Sidgwick who used lines to indicate relations between terms.

Anglophone argument diagramming in the 20th century

Dealing with the failure of attempts to formally reduce informal argumentation, English-speaking argumentation theory developed diagrammatic approaches to informal reasoning over a period of fifty years.

Monroe Beardsley proposed a form of argument diagram in 1950. His method of marking up an argument and representing its components with linked numbers became a standard and is still widely used. He also introduced terminology that is still current describing convergent, divergent and serial arguments.

A Toulmin argument diagram, redrawn from his 1959 Uses of Argument
 
A generalised Toulmin diagram

Stephen Toulmin, in his groundbreaking and influential 1958 book The Uses of Argument, identified several elements of an argument which have been generalized. The Toulmin diagram is widely used in teaching critical thinking. Whilst Toulmin eventually had a significant impact on the development of informal logic, he had little initial impact, and the Beardsley approach to diagramming arguments, along with its later developments, became the standard approach in this field. Toulmin introduced something that was missing from Beardsley's approach. In Beardsley, "arrows link reasons and conclusions (but) no support is given to the implication itself between them. There is no theory, in other words, of inference distinguished from logical deduction, the passage is always deemed not controversial and not subject to support and evaluation". Toulmin introduced the concept of warrant which "can be considered as representing the reasons behind the inference, the backing that authorizes the link".

Beardsley's approach was refined by Stephen N. Thomas, whose 1973 book Practical Reasoning In Natural Language introduced the term linked to describe arguments where the premises necessarily worked together to support the conclusion. However, the actual distinction between dependent and independent premises had been made prior to this. The introduction of the linked structure made it possible for argument maps to represent missing or "hidden" premises. In addition, Thomas suggested showing reasons both for and against a conclusion with the reasons against being represented by dotted arrows. Thomas introduced the term argument diagram and defined basic reasons as those that were not supported by any others in the argument and the final conclusion as that which was not used to support any further conclusion.

Scriven's argument diagram. The explicit premise 1 is conjoined with additional unstated premises a and b to imply 2.

Michael Scriven further developed the Beardsley-Thomas approach in his 1976 book Reasoning. Whereas Beardsley had said "At first, write out the statements...after a little practice, refer to the statements by number alone" Scriven advocated clarifying the meaning of the statements, listing them and then using a tree diagram with numbers to display the structure. Missing premises (unstated assumptions) were to be included and indicated with an alphabetical letter instead of a number to mark them off from the explicit statements. Scriven introduced counterarguments in his diagrams, which Toulmin had defined as rebuttal. This also enabled the diagramming of "balance of consideration" arguments.

In 1998 a series of large-scale argument maps released by Robert E. Horn stimulated widespread interest in argument mapping.

Development of computer-supported argument visualization

Human–computer interaction pioneer Douglas Engelbart, in a famous 1962 technical report on intelligence augmentation, envisioned in detail something like argument-mapping software as an integral part of future intelligence-augmenting computer interfaces:

You usually think of an argument as a serial sequence of steps of reason, beginning with known facts, assumptions, etc., and progressing toward a conclusion. Well, we do have to think through these steps serially, and we usually do list the steps serially when we write them out because that is pretty much the way our papers and books have to present them—they are pretty limiting in the symbol structuring they enable us to use. ... To help us get better comprehension of the structure of an argument, we can also call forth a schematic or graphical display. Once the antecedent-consequent links have been established, the computer can automatically construct such a display for us.

— Douglas Engelbart, "Augmenting human intellect: a conceptual framework" (1962)

In the middle to late 1980s, hypertext software applications that supported argument visualization were developed, including NoteCards and gIBIS; the latter generated an on-screen graphical hypertextual map of an issue-based information system, a model of argumentation developed by Werner Kunz and Horst Rittel in the 1970s. In the 1990s, Tim van Gelder and colleagues developed a series of software applications that permitted an argument map's premises to be fully stated and edited in the diagram, rather than in a legend. Van Gelder's first program, Reason!Able, was superseded by two subsequent programs, bCisive and Rationale.

Throughout the 1990s and 2000s, many other software applications were developed for argument visualization. By 2013, more than 60 such software systems existed. In a 2010 survey of computer-supported argumentation, Oliver Scheuer and colleagues noted that one of the differences between these software systems is whether collaboration is supported. In their survey, single-user argumentation systems included Convince Me, iLogos, LARGO, Athena, Araucaria, and Carneades; small group argumentation systems included Digalo, QuestMap, Compendium, Belvedere, and AcademicTalk; community argumentation systems included Debategraph and Collaboratorium.

Applications

Argument maps have been applied in many areas, but foremost in educational, academic and business settings, including design rationale. Argument maps are also used in forensic science, law, and artificial intelligence. It has also been proposed that argument mapping has a great potential to improve how we understand and execute democracy, in reference to the ongoing evolution of e-democracy.

Difficulties with the philosophical tradition

It has traditionally been hard to separate teaching critical thinking from the philosophical tradition of teaching logic and method, and most critical thinking textbooks have been written by philosophers. Informal logic textbooks are replete with philosophical examples, but it is unclear whether the approach in such textbooks transfers to non-philosophy students. There appears to be little statistical effect after such classes. Argument mapping, however, has a measurable effect according to many studies. For example, instruction in argument mapping has been shown to improve the critical thinking skills of business students.

Evidence that argument mapping improves critical thinking ability

There is empirical evidence that the skills developed in argument-mapping-based critical thinking courses substantially transfer to critical thinking done without argument maps. Alvarez's meta-analysis found that such critical thinking courses produced gains of around 0.70 SD, about twice as much as standard critical-thinking courses. The tests used in the reviewed studies were standard critical-thinking tests.

Limitations

When used with students in school, argument maps have limitations. They can "end up looking overly complex" and can increase cognitive load beyond what is optimal for learning the course content. Creating maps requires extensive coaching and feedback from an experienced argument mapper. Depending on the learning objectives, the time spent coaching students to create good maps may be better spent learning the course content instead of learning to diagram. When the goal is to prompt students to consider other perspectives and counterarguments, the goal may be more easily accomplished with other methods such as discussion, rubrics, and a simple argument framework or simple graphic organizer such as a vee diagram. To maximize the strengths of argument mapping and minimize its limitations in the classroom requires considering at what point in a learning progression the potential benefits of argument mapping would outweigh its potential disadvantages.

Standards

Argument Interchange Format

The Argument Interchange Format, AIF, is an international effort to develop a representational mechanism for exchanging argument resources between research groups, tools, and domains using a semantically rich language. AIF-RDF is the extended ontology represented in the Resource Description Framework Schema (RDFS) semantic language. Though AIF is still something of a moving target, it is settling down.

Legal Knowledge Interchange Format

The Legal Knowledge Interchange Format (LKIF) was developed in the European ESTRELLA project and designed with the goal of becoming a standard for representing and interchanging policy, legislation and cases, including their justificatory arguments, in the legal domain. LKIF builds on and uses the Web Ontology Language (OWL) for representing concepts and includes a reusable basic ontology of legal concepts.

Argdown

Argdown is a Markdown-inspired lightweight markup language for complex argumentation. It is intended for exchanging arguments and argument reconstructions in a universally accessible and highly human-readable way. The Argdown syntax is accompanied by tools that facilitate coding and transform Argdown documents into argument maps.
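
A minimal sketch of the syntax (the statement and argument titles here are invented for illustration and are not taken from the Argdown documentation):

[Main contention]: Argument maps improve critical thinking.
  <+ <Transfer>: Skills practised on maps carry over to unaided reasoning.
  <- <Overhead>: Constructing maps adds cognitive load for novices.

Square brackets enclose statement titles, angle brackets enclose argument titles, and the <+ and <- prefixes mark the indented item as supporting or attacking its parent.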

Natural deduction

From Wikipedia, the free encyclopedia

In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning.

Motivation

Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell (see, e.g., Hilbert system). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica. Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. His proposals led to different notations such as Fitch-style calculus (or Fitch's diagrams) or Suppes' method for which Lemmon gave a variant called system L.

Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1934, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term natural deduction (or rather, its German equivalent natürliches Schließen) was coined in that paper:

Ich wollte nun zunächst einmal einen Formalismus aufstellen, der dem wirklichen Schließen möglichst nahe kommt. So ergab sich ein "Kalkül des natürlichen Schließens".
(First I wished to construct a formalism that comes as close as possible to actual reasoning. Thus arose a "calculus of natural deduction".)

Gentzen was motivated by a desire to establish the consistency of number theory. He was unable to prove the main result required for the consistency result, the cut elimination theorem—the Hauptsatz—directly for natural deduction. For this reason he introduced his alternative system, the sequent calculus, for which he proved the Hauptsatz both for classical and intuitionistic logic. In a series of seminars in 1961 and 1962 Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph Natural deduction: a proof-theoretical study was to become a reference work on natural deduction, and included applications for modal and second-order logic.

In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly. The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives.

Judgments and propositions

A judgment is something that is knowable, that is, an object of knowledge. It is evident if one in fact knows it. Thus "it is raining" is a judgment, which is evident for the one who knows that it is actually raining; in this case one may readily find evidence for the judgment by looking outside the window or stepping out of the house. In mathematical logic however, evidence is often not as directly observable, but rather deduced from more basic evident judgments. The process of deduction is what constitutes a proof; in other words, a judgment is evident if one has a proof for it.

The most important judgments in logic are of the form "A is true". The letter A stands for any expression representing a proposition; the truth judgments thus require a more primitive judgment: "A is a proposition". Many other judgments have been studied; for example, "A is false" (see classical logic), "A is true at time t" (see temporal logic), "A is necessarily true" or "A is possibly true" (see modal logic), "the program M has type τ" (see programming languages and type theory), "A is achievable from the available resources" (see linear logic), and many others. To start with, we shall concern ourselves with the simplest two judgments "A is a proposition" and "A is true", abbreviated as "A prop" and "A true" respectively.

The judgment "A prop" defines the structure of valid proofs of A, which in turn defines the structure of propositions. For this reason, the inference rules for this judgment are sometimes known as formation rules. To illustrate, if we have two propositions A and B (that is, the judgments "A prop" and "B prop" are evident), then we form the compound proposition A and B, written symbolically as "". We can write this in the form of an inference rule:

where the parentheses are omitted to make the inference rule more succinct:
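
A prop    B prop
──────────────── ∧F
A ∧ B prop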

This inference rule is schematic: A and B can be instantiated with any expression. The general form of an inference rule is:
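
Schematically, with premise judgments J1 through Jn above the line and the conclusion judgment J below it:

J1    J2    ...    Jn
────────────────────── name
J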

where each Ji is a judgment and the inference rule is named "name". The judgments above the line are known as premises, and those below the line are conclusions. Other common logical propositions are disjunction (A ∨ B), negation (¬A), implication (A ⊃ B), and the logical constants truth (⊤) and falsehood (⊥). Their formation rules are below.
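
In the same layout, the standard formation rules are as follows (the labels ∨F, ¬F, ⊃F, ⊤F, and ⊥F are conventional names):

A prop    B prop
──────────────── ∨F
A ∨ B prop

A prop
─────── ¬F
¬A prop

A prop    B prop
──────────────── ⊃F
A ⊃ B prop

─────── ⊤F
⊤ prop

─────── ⊥F
⊥ prop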

Introduction and elimination

Now we discuss the "A true" judgment. Inference rules that introduce a logical connective in the conclusion are known as introduction rules. To introduce conjunctions, i.e., to conclude "A and B true" for propositions A and B, one requires evidence for "A true" and "B true". As an inference rule:

It must be understood that in such rules the objects are propositions. That is, the above rule is really an abbreviation for:
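
That is, with the propositional presuppositions shown as premises:

A prop    B prop    A true    B true
───────────────────────────────────── ∧I
A ∧ B true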

This can also be written:
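
Using the compound proposition itself as the first premise:

A ∧ B prop    A true    B true
────────────────────────────── ∧I
A ∧ B true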

In this form, the first premise can be satisfied by the formation rule, giving the first two premises of the previous form. In this article we shall elide the "prop" judgments where they are understood. In the nullary case, one can derive truth from no premises.
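
The corresponding rule, conventionally labelled ⊤I, has no premises:

─────── ⊤I
⊤ true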

If the truth of a proposition can be established in more than one way, the corresponding connective has multiple introduction rules.
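
Disjunction is the standard example: either disjunct suffices, giving two introduction rules (conventionally ∨I1 and ∨I2):

A true
────────── ∨I1
A ∨ B true

B true
────────── ∨I2
A ∨ B true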

Note that in the nullary case, i.e., for falsehood, there are no introduction rules. Thus one can never infer falsehood from simpler judgments.

Dual to introduction rules are elimination rules to describe how to deconstruct information about a compound proposition into information about its constituents. Thus, from "A ∧ B true", we can conclude "A true" and "B true":
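
These are the two elimination rules ∧E1 and ∧E2:

A ∧ B true
────────── ∧E1
A true

A ∧ B true
────────── ∧E2
B true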

As an example of the use of inference rules, consider commutativity of conjunction. If A ∧ B is true, then B ∧ A is true; this derivation can be drawn by composing inference rules in such a fashion that premises of a lower inference match the conclusion of the next higher inference.
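
A sketch of that derivation (premises drawn above the rules that use them):

A ∧ B true          A ∧ B true
────────── ∧E2      ────────── ∧E1
B true              A true
──────────────────────────────── ∧I
B ∧ A true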

The inference figures we have seen so far are not sufficient to state the rules of implication introduction or disjunction elimination; for these, we need a more general notion of hypothetical derivation.

Hypothetical derivations

A pervasive operation in mathematical logic is reasoning from assumptions. For example, consider the following derivation:
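
(The derivation uses only the conjunction elimination rules.)

A ∧ (B ∧ C) true
──────────────── ∧E2
B ∧ C true
────────── ∧E1
B true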

This derivation does not establish the truth of B as such; rather, it establishes the following fact:

If A ∧ (B ∧ C) is true then B is true.

In logic, one says "assuming A ∧ (B ∧ C) is true, we show that B is true"; in other words, the judgment "B true" depends on the assumed judgment "A ∧ (B ∧ C) true". This is a hypothetical derivation, which we write as follows:
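
With the hypothesis written above the dots and the derived judgment below:

A ∧ (B ∧ C) true
       ⋮
     B true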

The interpretation is: "B true is derivable from A ∧ (B ∧ C) true". Of course, in this specific example we actually know the derivation of "B true" from "A ∧ (B ∧ C) true", but in general we may not a priori know the derivation. The general form of a hypothetical derivation is:
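
(Here each Di is an antecedent derivation and J is the succedent judgment, as explained next.)

D1    D2    ...    Dn
         ⋮
         J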

Each hypothetical derivation has a collection of antecedent derivations (the Di) written on the top line, and a succedent judgment (J) written on the bottom line. Each of the premises may itself be a hypothetical derivation. (For simplicity, we treat a judgment as a premise-less derivation.)

The notion of hypothetical judgment is internalised as the connective of implication. The introduction and elimination rules are as follows.
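
In the standard formulation, the introduction rule discharges the hypothesis labelled u, and the elimination rule is the familiar modus ponens:

──── u
A true
  ⋮
B true
────────── ⊃I^u
A ⊃ B true

A ⊃ B true    A true
──────────────────── ⊃E
B true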

In the introduction rule, the antecedent named u is discharged in the conclusion. This is a mechanism for delimiting the scope of the hypothesis: its sole reason for existence is to establish "B true"; it cannot be used for any other purpose, and in particular, it cannot be used below the introduction. As an example, consider the derivation of "A ⊃ (B ⊃ (A ∧ B)) true":
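
A sketch of that derivation (u labels the hypothesis "A true" as in the text; w is an arbitrary label chosen here for the hypothesis "B true"):

────── u    ────── w
A true      B true
──────────────────── ∧I
A ∧ B true
──────────────── ⊃I^w
B ⊃ (A ∧ B) true
────────────────────── ⊃I^u
A ⊃ (B ⊃ (A ∧ B)) true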

This full derivation has no unsatisfied premises; however, sub-derivations are hypothetical. For instance, the derivation of "B ⊃ (A ∧ B) true" is hypothetical with antecedent "A true" (named u).

With hypothetical derivations, we can now write the elimination rule for disjunction:
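
In the standard formulation, the last two premises are hypothetical derivations whose hypotheses u and v are discharged:

               ──── u     ──── v
               A true     B true
                 ⋮           ⋮
A ∨ B true     C true     C true
──────────────────────────────── ∨E^{u,v}
C true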

In words, if A ∨ B is true, and we can derive "C true" both from "A true" and from "B true", then C is indeed true. Note that this rule does not commit to either "A true" or "B true". In the zero-ary case, i.e. for falsehood, we obtain the following elimination rule:
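
(The rule is conventionally labelled ⊥E; it has a single premise and no hypothetical premises.)

⊥ true
────── ⊥E
C true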

This is read as: if falsehood is true, then any proposition C is true.

Negation is similar to implication.
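
A common formulation of the rules, with p a schematic propositional variable, is:

──── u
A true
  ⋮
p true
─────── ¬I^{u,p}
¬A true

¬A true    A true
───────────────── ¬E
C true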

The introduction rule discharges both the name of the hypothesis u, and the succedent p, i.e., the proposition p must not occur in the conclusion A. Since these rules are schematic, the interpretation of the introduction rule is: if from "A true" we can derive for every proposition p that "p true", then A must be false, i.e., "not A true". For the elimination, if both A and not A are shown to be true, then there is a contradiction, in which case every proposition C is true. Because the rules for implication and negation are so similar, it should be fairly easy to see that not A and A ⊃ ⊥ are equivalent, i.e., each is derivable from the other.

Consistency, completeness, and normal forms

A theory is said to be consistent if falsehood is not provable (from no assumptions) and is complete if every theorem or its negation is provable using the inference rules of the logic. These are statements about the entire logic, and are usually tied to some notion of a model. However, there are local notions of consistency and completeness that are purely syntactic checks on the inference rules, and require no appeals to models. The first of these is local consistency, also known as local reducibility, which says that any derivation containing an introduction of a connective followed immediately by its elimination can be turned into an equivalent derivation without this detour. It is a check on the strength of elimination rules: they must not be so strong that they include knowledge not already contained in their premises. As an example, consider conjunctions.
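
A minimal sketch of the local reduction (D1 and D2 stand for the derivations of the two premises):

  D1        D2
A true    B true
──────────────── ∧I
A ∧ B true
────────── ∧E1
A true

reduces to

  D1
A true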

Dually, local completeness says that the elimination rules are strong enough to decompose a connective into the forms suitable for its introduction rule. Again for conjunctions:
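
A sketch of the local expansion (D stands for any derivation of A ∧ B true):

    D
A ∧ B true

expands to

    D                   D
A ∧ B true          A ∧ B true
────────── ∧E1      ────────── ∧E2
A true              B true
──────────────────────────────── ∧I
A ∧ B true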

These notions correspond exactly to β-reduction (beta reduction) and η-conversion (eta conversion) in the lambda calculus, using the Curry–Howard isomorphism. By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be normal. In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a normal form. The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961.[7] It is much easier to show this indirectly by means of a cut-free sequent calculus presentation.

First and higher-order extensions

Summary of first-order system

The logic of the earlier section is an example of a single-sorted logic, i.e., a logic with a single kind of object: propositions. Many extensions of this simple framework have been proposed; in this section we will extend it with a second sort of individuals or terms. More precisely, we will add a new kind of judgment, "t is a term" (or "t term") where t is schematic. We shall fix a countable set V of variables, another countable set F of function symbols, and construct terms with the following formation rules:
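
In the rule layout used above, the two rules read as follows (the labels var-F and app-F are conventional names):

v ∈ V
────── var-F
v term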

and
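
f ∈ F    t1 term    ...    tn term
─────────────────────────────────── app-F
f(t1, ..., tn) term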

For propositions, we consider a third countable set P of predicates, and define atomic predicates over terms with the following formation rule:
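
Writing φ for an n-ary predicate symbol, the rule may be sketched as:

φ ∈ P    t1 term    ...    tn term
─────────────────────────────────── pred-F
φ(t1, ..., tn) prop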

The first two rules of formation provide a definition of a term that is effectively the same as that defined in term algebra and model theory, although the focus of those fields of study is quite different from natural deduction. The third rule of formation effectively defines an atomic formula, as in first-order logic, and again in model theory.

To these are added a pair of formation rules, defining the notation for quantified propositions; one for universal (∀) and existential (∃) quantification:
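
(Here x ranges over the set V of variables.)

x ∈ V    A prop
──────────────── ∀F
∀x. A prop

x ∈ V    A prop
──────────────── ∃F
∃x. A prop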

The universal quantifier has the introduction and elimination rules:
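
In the standard formulation, the parameter a in ∀I is discharged (it must not occur in the conclusion), while ∀E instantiates the bound variable with an arbitrary term t:

[a/x] A true
──────────── ∀I^a
∀x. A true

∀x. A true
──────────── ∀E
[t/x] A true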

The existential quantifier has the introduction and elimination rules:
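
∃I requires a witnessing term t; in ∃E, the hypotheses u and v (that a is a term and that [a/x] A is true) are local to the second premise and are discharged together with the parameter a:

[t/x] A true
──────────── ∃I
∃x. A true

              ────── u    ──────────── v
              a term      [a/x] A true
                              ⋮
∃x. A true                  C true
─────────────────────────────────── ∃E^{a,u,v}
C true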

In these rules, the notation [t/x] A stands for the substitution of t for every free occurrence of x in A, avoiding capture. As before, the superscripts on the rule name stand for the components that are discharged: the term a cannot occur in the conclusion of ∀I (such terms are known as eigenvariables or parameters), and the hypotheses named u and v in ∃E are localised to the second premise in a hypothetical derivation. Although the propositional logic of earlier sections was decidable, adding the quantifiers makes the logic undecidable.

So far, the quantified extensions are first-order: they distinguish propositions from the kinds of objects quantified over. Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules:
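
A sketch of the formation rules under this reading, with the variable p now standing for a proposition:

p ∈ V    A prop
──────────────── ∀F
∀p. A prop

p ∈ V    A prop
──────────────── ∃F
∃p. A prop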

A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article. It is possible to be in-between first-order and higher-order logics. For example, second-order logic has two kinds of propositions, one kind quantifying over terms, and the second kind quantifying over propositions of the first kind.

Different presentations of natural deduction

Tree-like presentations

Gentzen's discharging annotations used to internalise hypothetical judgments can be avoided by representing proofs as a tree of sequents Γ ⊢A instead of a tree of A true judgments.

Sequential presentations

Jaśkowski's representations of natural deduction led to different notations such as Fitch-style calculus (or Fitch's diagrams) or Suppes' method, of which Lemmon gave a variant called system L. Such presentation systems, which are more accurately described as tabular, include the following.

  • 1940: In a textbook, Quine indicated antecedent dependencies by line numbers in square brackets, anticipating Suppes' 1957 line-number notation.
  • 1950: In a textbook, Quine (1982, pp. 241–255) demonstrated a method of using one or more asterisks to the left of each line of proof to indicate dependencies. This is equivalent to Kleene's vertical bars. (It is not totally clear if Quine's asterisk notation appeared in the original 1950 edition or was added in a later edition.)
  • 1957: An introduction to practical logic theorem proving in a textbook by Suppes (1999, pp. 25–150). This indicated dependencies (i.e. antecedent propositions) by line numbers at the left of each line.
  • 1963: Stoll (1979, pp. 183–190, 215–219) uses sets of line numbers to indicate antecedent dependencies of the lines of sequential logical arguments based on natural deduction inference rules.
  • 1965: The entire textbook by Lemmon (1965) is an introduction to logic proofs using a method based on that of Suppes.
  • 1967: In a textbook, Kleene (2002, pp. 50–58, 128–130) briefly demonstrated two kinds of practical logic proofs, one system using explicit quotations of antecedent propositions on the left of each line, the other system using vertical bar-lines on the left to indicate dependencies.

Proofs and type theory

The presentation of natural deduction so far has concentrated on the nature of propositions without giving a formal definition of a proof. To formalise the notion of proof, we alter the presentation of hypothetical derivations slightly. We label the antecedents with proof variables (from some countable set V of variables), and decorate the succedent with the actual proof. The antecedents or hypotheses are separated from the succedent by means of a turnstile (⊢). This modification sometimes goes under the name of localised hypotheses. The following diagram summarises the change.

──── u1    ──── u2    ...    ──── un
 J1         J2                Jn
                 ⋮
                 J

becomes, in localised form,

u1:J1, u2:J2, ..., un:Jn ⊢ J

The collection of hypotheses will be written as Γ when their exact composition is not relevant. To make proofs explicit, we move from the proof-less judgment "A true" to a judgment: "π is a proof of (A true)", which is written symbolically as "π : A true". Following the standard approach, proofs are specified with their own formation rules for the judgment "π proof". The simplest possible proof is the use of a labelled hypothesis; in this case the evidence is the label itself.

u ∈ V
─────── proof-F
u proof

───────────────────── hyp
u:A true ⊢ u : A true

For brevity, we shall leave off the judgmental label true in the rest of this article, i.e., write "Γ ⊢ π : A". Let us re-examine some of the connectives with explicit proofs. For conjunction, we look at the introduction rule ∧I to discover the form of proofs of conjunction: they must be a pair of proofs of the two conjuncts. Thus:

π1 proof    π2 proof
──────────────────── pair-F
(π1, π2) proof

Γ ⊢ π1 : A    Γ ⊢ π2 : B
───────────────────────── ∧I
Γ ⊢ (π1, π2) : A ∧ B

The elimination rules ∧E1 and ∧E2 select either the left or the right conjunct; thus the proofs are a pair of projections—first (fst) and second (snd).

π proof
─────────── fst-F
fst π proof

Γ ⊢ π : A ∧ B
───────────── ∧E1
Γ ⊢ fst π : A

π proof
─────────── snd-F
snd π proof

Γ ⊢ π : A ∧ B
───────────── ∧E2
Γ ⊢ snd π : B

For implication, the introduction form localises or binds the hypothesis, written using a λ; this corresponds to the discharged label. In the rule, "Γ, u:A" stands for the collection of hypotheses Γ, together with the additional hypothesis u.

π proof
──────────── λ-F
λu. π proof

Γ, u:A ⊢ π : B
───────────────── ⊃I
Γ ⊢ λu. π : A ⊃ B

π1 proof   π2 proof
─────────────────── app-F
π1 π2 proof

Γ ⊢ π1 : A ⊃ B    Γ ⊢ π2 : A
──────────────────────────── ⊃E
Γ ⊢ π1 π2 : B

With proofs available explicitly, one can manipulate and reason about proofs. The key operation on proofs is the substitution of one proof for an assumption used in another proof. This is commonly known as a substitution theorem, and can be proved by induction on the depth (or structure) of the second judgment.

Substitution theorem
If Γ ⊢ π1 : A and Γ, u:A ⊢ π2 : B, then Γ ⊢ [π1/u] π2 : B.

So far the judgment "Γ ⊢ π : A" has had a purely logical interpretation. In type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as types, and proofs as programs in the lambda calculus. Thus the interpretation of "π : A" is "the program π has type A". The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections.

The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values. If every program can be reduced to a canonical form, then the type theory is said to be normalising (or weakly normalising). If the canonical form is unique, then the theory is said to be strongly normalising. Normalisability is a rare feature of most non-trivial type theories, which is a big departure from the logical world. (Recall that almost every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥ true". For this reason, the propositions as types; proofs as programs paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic.

Example: Dependent Type Theory

Like logic, type theory has many extensions and variants, including first-order and higher-order versions. One branch, known as dependent type theory, is used in a number of computer-assisted proof systems. Dependent type theory allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃, and have the following formation rules:

Γ ⊢ A type    Γ, x:A ⊢ B type
───────────────────────────── Π-F
Γ ⊢ Πx:A. B type

Γ ⊢ A type  Γ, x:A ⊢ B type
──────────────────────────── Σ-F
Γ ⊢ Σx:A. B type

These types are generalisations of the arrow and product types, respectively, as witnessed by their introduction and elimination rules.

Γ, x:A ⊢ π : B
──────────────────── ΠI
Γ ⊢ λx. π : Πx:A. B

Γ ⊢ π1 : Πx:A. B   Γ ⊢ π2 : A
───────────────────────────── ΠE
Γ ⊢ π1 π2 : [π2/x] B

Γ ⊢ π1 : A    Γ, x:A ⊢ π2 : B
───────────────────────────── ΣI
Γ ⊢ (π1, π2) : Σx:A. B

Γ ⊢ π : Σx:A. B
──────────────── ΣE1
Γ ⊢ fst π : A

Γ ⊢ π : Σx:A. B
──────────────────────── ΣE2
Γ ⊢ snd π : [fst π/x] B

Dependent type theory in full generality is very powerful: it is able to express almost any conceivable property of programs directly in the types of the program. This generality comes at a steep price — either typechecking is undecidable (extensional type theory), or extensional reasoning is more difficult (intensional type theory). For this reason, some dependent type theories do not allow quantification over arbitrary programs, but rather restrict to programs of a given decidable index domain, for example integers, strings, or linear programs.

Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism. Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt.

The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach.

Classical and modal logics

For simplicity, the logics presented so far have been intuitionistic. Classical logic extends intuitionistic logic with an additional axiom or principle of excluded middle:

For any proposition p, the proposition p ∨ ¬p is true.

This statement is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting:

────────────── XM1
A ∨ ¬A true

¬¬A true
────────── XM2
A true

──────── u
¬A true
⋮
p true
────── XM3^{u,p}
A true

(XM3 is merely XM2 expressed in terms of E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms.

A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Parigot in 1992 in the form of a classical lambda calculus called λμ. The key insight of his approach was to replace a truth-centric judgment A true with a more classical notion, reminiscent of the sequent calculus: in localised form, instead of Γ ⊢ A, he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculi, but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a callcc or a throw/catch mechanism seen in LISP and its descendants. (See also: first class control.)

Another important extension was for modal and other logics that need more than just the basic judgment of truth. These were first described, for the alethic modal logics S4 and S5, in a natural deduction style by Prawitz in 1965,[4] and have since accumulated a large body of related work. To give a simple example, the modal logic S4 requires one new judgment, "A valid", that is categorical with respect to truth:

If "A true" under no assumptions of the form "B true", then "A valid".

This categorical judgment is internalised as a unary connective ◻A (read "necessarily A") with the following introduction and elimination rules:

A valid
──────── ◻I
◻ A true

◻ A true
──────── ◻E
A true

Note that the premise "A valid" has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ A true" where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgment "A true"; validity is not needed here since "Ω ⊢ A valid" is by definition the same as "Ω;⋅ ⊢ A true". The introduction and elimination forms are then:

Ω;⋅ ⊢ π : A true
──────────────────── ◻I
Ω;⋅ ⊢ box π : ◻ A true

Ω;Γ ⊢ π : ◻ A true
────────────────────── ◻E
Ω;Γ ⊢ unbox π : A true

The modal hypotheses have their own version of the hypothesis rule and substitution theorem.

─────────────────────────────── valid-hyp
Ω, u: (A valid) ; Γ ⊢ u : A true

Modal substitution theorem
If Ω;⋅ ⊢ π1 : A true and Ω, u: (A valid) ; Γ ⊢ π2 : C true, then Ω;Γ ⊢ [π1/u] π2 : C true.

This framework of separating judgments into distinct collections of hypotheses, also known as multi-zoned or polyadic contexts, is very powerful and extensible; it has been applied for many different modal logics, and also for linear and other substructural logics, to give a few examples. However, relatively few systems of modal logic can be formalised directly in natural deduction. To give proof-theoretic characterisations of these systems, extensions such as labelling or systems of deep inference are needed.

The addition of labels to formulae permits much finer control of the conditions under which rules apply, allowing the more flexible techniques of analytic tableaux to be applied, as has been done in the case of labelled deduction. Labels also allow the naming of worlds in Kripke semantics; Simpson (1993) presents an influential technique for converting frame conditions of modal logics in Kripke semantics into inference rules in a natural deduction formalisation of hybrid logic. Stouppa (2004) surveys the application of many proof theories, such as Avron and Pottinger's hypersequents and Belnap's display logic to such modal logics as S5 and B.

Comparison with other foundational approaches

Sequent calculus

The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic. In natural deduction the flow of information is bi-directional: elimination rules flow information downwards by deconstruction, and introduction rules flow information upwards by assembly. Thus, a natural deduction proof does not have a purely bottom-up or top-down reading, making it unsuitable for automation in proof search. To address this fact, Gentzen in 1935 proposed his sequent calculus, though he initially intended it as a technical device for clarifying the consistency of predicate logic. Kleene, in his seminal 1952 book Introduction to Metamathematics, gave the first formulation of the sequent calculus in the modern style.

In the sequent calculus all inference rules have a purely bottom-up reading. Inference rules can apply to elements on both sides of the turnstile. (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as right rules in the sequent calculus, and are structurally very similar. The elimination rules on the other hand turn into left rules in the sequent calculus. To give an example, consider disjunction; the right rules are familiar:

Γ ⇒ A
───────── ∨R1
Γ ⇒ A ∨ B

Γ ⇒ B
───────── ∨R2
Γ ⇒ A ∨ B

On the left:

Γ, u:A ⇒ C       Γ, v:B ⇒ C
─────────────────────────── ∨L
Γ, w: (A ∨ B) ⇒ C

Recall the ∨E rule of natural deduction in localised form:

Γ ⊢ A ∨ B    Γ, u:A ⊢ C    Γ, v:B ⊢ C
─────────────────────────────────────── ∨E
Γ ⊢ C

The proposition A ∨ B, which is the succedent of a premise in ∨E, turns into a hypothesis of the conclusion in the left rule ∨L. Thus, left rules can be seen as a sort of inverted elimination rule. This observation can be illustrated as follows:

natural deduction:

 ────── hyp
   |
   | elim. rules
   |
   ↓
 ────── ↑↓ meet
   ↑
   |
   | intro. rules
   |
 conclusion

sequent calculus:

 ─────────────── init
   ↑            ↑
   |            |
   | left rules | right rules
   |            |
 conclusion

In the sequent calculus, the left and right rules are performed in lock-step until one reaches the initial sequent, which corresponds to the meeting point of elimination and introduction rules in natural deduction. These initial rules are superficially similar to the hypothesis rule of natural deduction, but in the sequent calculus they describe a transposition or a handshake of a left and a right proposition:

────────── init
Γ, u:A ⇒ A

The correspondence between the sequent calculus and natural deduction is a pair of soundness and completeness theorems, which are both provable by means of an inductive argument.

Soundness of ⇒ wrt. ⊢
If Γ ⇒ A, then Γ ⊢ A.
Completeness of ⇒ wrt. ⊢
If Γ ⊢ A, then Γ ⇒ A.

It is clear by these theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remains true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider conjunctions. The right rule is virtually identical to the introduction rule:

sequent calculus:

Γ ⇒ π1 : A     Γ ⇒ π2 : B
─────────────────────────── ∧R
Γ ⇒ (π1, π2) : A ∧ B

natural deduction:

Γ ⊢ π1 : A      Γ ⊢ π2 : B
───────────────────────── ∧I
Γ ⊢ (π1, π2) : A ∧ B

The left rule, however, performs some additional substitutions that are not performed in the corresponding elimination rules.

sequent calculus:

Γ, u:A ⇒ π : C
──────────────────────────────── ∧L1
Γ, v: (A ∧ B) ⇒ [fst v/u] π : C

Γ, u:B ⇒ π : C
──────────────────────────────── ∧L2
Γ, v: (A ∧ B) ⇒ [snd v/u] π : C

natural deduction:

Γ ⊢ π : A ∧ B
───────────── ∧E1
Γ ⊢ fst π : A

Γ ⊢ π : A ∧ B
───────────── ∧E2
Γ ⊢ snd π : B

The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the β-normal η-long form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the intercalation calculus (first described by John Byrnes), which can be used to formally define the notion of a normal form for natural deduction.

The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as cut in the sequent calculus.

Cut (substitution)
If Γ ⇒ π1 : A and Γ, u:A ⇒ π2 : C, then Γ ⇒ [π1/u] π2 : C.

In most well-behaved logics, cut is unnecessary as an inference rule, though it remains provable as a meta-theorem; the superfluousness of the cut rule is usually presented as a computational process, known as cut elimination. This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is not provable in natural deduction. A simple inductive argument fails because of rules like ∨E or ⊥E which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of sub-propositions of the conclusion. A simple instance of this is the global consistency theorem: "⋅ ⊢ ⊥ true" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "⋅ ⇒ ⊥" as a conclusion! Proof theorists often prefer to work on cut-free sequent calculus formulations because of such properties.

Cooperative

From Wikipedia, the free encyclopedia ...