
Tuesday, December 4, 2018

Logic programming

From Wikipedia, the free encyclopedia

Logic programming is a type of programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, Answer set programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses:
H :- B1, …, Bn.
and are read declaratively as logical implications:
H if B1 and … and Bn.
H is called the head of the rule and B1, …, Bn is called the body. Facts are rules that have no body, and are written in the simplified form:
H.
In the simplest case in which H, B1, …, Bn are all atomic formulae, these clauses are called definite clauses or Horn clauses. However, there exist many extensions of this simple case, the most important one being the case in which conditions in the body of a clause can also be negations of atomic formulae. Logic programming languages that include this extension have the knowledge representation capabilities of a non-monotonic logic.
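For illustration, here is a minimal sketch of these forms as a Prolog program (the parent and grandparent predicates are invented for this example):

% Facts: rules with no body.
parent(tom, bob).
parent(bob, ann).
% A rule: the head holds if both body conditions hold.
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).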

In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be under the control of the programmer. However, in the Prolog family of languages, logic programs also have a procedural interpretation as goal-reduction procedures:
to solve H, solve B1, and ... and solve Bn.
Consider, for example, the following clause:
fallible(X) :- human(X).
based on an example used by Terry Winograd to illustrate the programming language Planner. As a clause in a logic program, it can be used both as a procedure to test whether X is fallible by testing whether X is human, and as a procedure to find an X that is fallible by finding an X that is human. Even facts have a procedural interpretation. For example, the clause:
human(socrates).
can be used both as a procedure to show that socrates is human, and as a procedure to find an X that is human by "assigning" socrates to X.
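As a sketch, both uses correspond to queries at a standard Prolog top level:

?- fallible(socrates).   % test: succeeds
?- fallible(X).          % generate: X = socrates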

The declarative reading of logic programs can be used by a programmer to verify their correctness. Moreover, logic-based program transformation techniques can also be used to transform logic programs into logically equivalent programs that are more efficient. In the Prolog family of logic programming languages, the programmer can also use the known problem-solving behaviour of the execution mechanism to improve the efficiency of programs.

History

The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s. However, the first proposal to use the clausal form of logic for representing computer programs was made by Cordell Green. This used an axiomatization of a subset of LISP, together with a representation of an input-output relation, to compute the relation by simulating the execution of the program in LISP. Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional programming language which places no constraints on the order in which operations are performed.

Logic programming in its present form can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in Artificial Intelligence. Advocates of declarative representations were notably working at Stanford, associated with John McCarthy, Bertram Raphael and Cordell Green, and in Edinburgh, with John Alan Robinson (an academic visitor from Syracuse University), Pat Hayes, and Robert Kowalski. Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and Seymour Papert.

Although it was based on the proof methods of logic, Planner, developed at MIT, was the first language to emerge within this proceduralist paradigm. Planner featured pattern-directed invocation of procedural plans from goals (i.e. goal-reduction or backward chaining) and from assertions (i.e. forward chaining). The most influential implementation of Planner was the subset of Planner, called Micro-Planner, implemented by Gerry Sussman, Eugene Charniak and Terry Winograd. It was used to implement Winograd's natural-language understanding program SHRDLU, which was a landmark at that time. To cope with the very limited memory systems at the time, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Planner gave rise to the programming languages QA-4, Popler, Conniver, QLISP, and the concurrent language Ether.

Hayes and Kowalski in Edinburgh tried to reconcile the logic-based declarative approach to knowledge representation with Planner's procedural approach. Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover. Kowalski, on the other hand, developed SLD resolution, a variant of SL-resolution, and showed how it treats implications as goal-reduction procedures. Kowalski collaborated with Colmerauer in Marseille, who developed these ideas in the design and implementation of the programming language Prolog.

The Association for Logic Programming was founded in 1986 to promote logic programming.
Prolog gave rise to the programming languages ALF, Fril, Gödel, Mercury, Oz, Ciao, Visual Prolog, XSB, and λProlog, as well as a variety of concurrent logic programming languages, constraint logic programming languages and Datalog.

Concepts

Logic and control

Logic programming can be viewed as controlled deduction. An important concept in logic programming is the separation of programs into their logic component and their control component. With pure logic programming languages, the logic component alone determines the solutions produced. The control component can be varied to provide alternative ways of executing a logic program. This notion is captured by the slogan
Algorithm = Logic + Control
where "Logic" represents a logic program and "Control" represents different theorem-proving strategies.

Problem solving

In the simplified, propositional case in which a logic program and a top-level atomic goal contain no variables, backward reasoning determines an and-or tree, which constitutes the search space for solving the goal. The top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or".

Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal are considered at a time. Other search strategies, such as parallel search, intelligent backtracking, or best-first search to find an optimal solution, are also possible.
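For instance, in the following small program (invented for this example), the goal p is an "or" node with two alternative clauses, and the body of the first clause is an "and" of two subgoals; Prolog explores the tree depth-first, backtracking on failure:

p :- q, r.   % first alternative: "and" of subgoals q and r
p :- s.      % second alternative
q.
r :- fail.   % r always fails
s.
% ?- p.  Prolog tries q (succeeds), then r (fails), backtracks
% to the second clause, and succeeds via s.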

In the more general case, where sub-goals share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently instantiated so that only one procedure applies. Such strategies are used, for example, in concurrent logic programming.

Negation as failure

For most practical applications, as well as for applications that require non-monotonic reasoning in artificial intelligence, Horn clause logic programs need to be extended to normal logic programs, with negative conditions. A clause in a normal logic program has the form:
H :- A1, …, An, not B1, …, not Bn.
and is read declaratively as a logical implication:
H if A1 and … and An and not B1 and … and not Bn.
where H and all the Ai and Bi are atomic formulas. The negation in the negative literals not Bi is commonly referred to as "negation as failure", because in most implementations, a negative condition not Bi is shown to hold by showing that the positive condition Bi fails to hold. For example:

canfly(X) :- bird(X), not abnormal(X).
abnormal(X) :-  wounded(X).
bird(john).
bird(mary).
wounded(john).

Given the goal of finding something that can fly:
 
:- canfly(X).

there are two candidate solutions, which solve the first subgoal bird(X), namely X = john and X = mary. The second subgoal not abnormal(john) of the first candidate solution fails, because wounded(john) succeeds and therefore abnormal(john) succeeds. However, the second subgoal not abnormal(mary) of the second candidate solution succeeds, because wounded(mary) fails and therefore abnormal(mary) fails. Therefore, X = mary is the only solution of the goal.

Micro-Planner had a construct, called "thnot", which when applied to an expression returns the value true if (and only if) the evaluation of the expression fails. An equivalent operator is normally built into modern Prolog implementations. It is normally written as not(Goal) or \+ Goal, where Goal is some goal (proposition) to be proved by the program. This operator differs from negation in first-order logic: a negation such as \+ X == 1 fails when the variable X has been bound to the atom 1, but it succeeds in all other cases, including when X is unbound. This makes Prolog's reasoning non-monotonic: X = 1, \+ X == 1 always fails, while \+ X == 1, X = 1 can succeed, binding X to 1, depending on whether X was initially bound (note that standard Prolog executes goals in left-to-right order).
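The order-sensitivity can be seen directly at a Prolog top level (a sketch; goals run left to right):

?- X = 1, \+ X == 1.   % fails: X is already bound to 1
?- \+ X == 1, X = 1.   % succeeds: X is unbound when \+ is tried,
                       % so \+ X == 1 holds; then X is bound to 1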

The logical status of negation as failure was unresolved until Keith Clark [1978] showed that, under certain natural conditions, it is a correct (and sometimes complete) implementation of classical negation with respect to the completion of the program. Completion amounts roughly to regarding the set of all the program clauses with the same predicate on the left hand side, say
H :- Body1.
…
H :- Bodyk.
as a definition of the predicate
H iff (Body1 or … or Bodyk)
where "iff" means "if and only if". Writing the completion also requires explicit use of the equality predicate and the inclusion of a set of appropriate axioms for equality. However, the implementation of negation by failure needs only the if-halves of the definitions without the axioms of equality.
For example, the completion of the program above is:
canfly(X) iff bird(X), not abnormal(X).
abnormal(X) iff wounded(X).
bird(X) iff X = john or X = mary.
X = X.
not john = mary.
not mary = john.
The notion of completion is closely related to McCarthy's circumscription semantics for default reasoning, and to the closed world assumption.

As an alternative to the completion semantics, negation as failure can also be interpreted epistemically, as in the stable model semantics of answer set programming. In this interpretation not(Bi) means literally that Bi is not known or not believed. The epistemic interpretation has the advantage that it can be combined very simply with classical negation, as in "extended logic programming", to formalise such phrases as "the contrary can not be shown", where "contrary" is classical negation and "can not be shown" is the epistemic interpretation of negation as failure.

Knowledge representation

The fact that Horn clauses can be given a procedural interpretation and, vice versa, that goal-reduction procedures can be understood as Horn clauses + backward reasoning means that logic programs combine declarative and procedural representations of knowledge. The inclusion of negation as failure means that logic programming is a kind of non-monotonic logic.

Despite its simplicity compared with classical logic, this combination of Horn clauses and negation as failure has proved to be surprisingly expressive. For example, it provides a natural representation for the common-sense laws of cause and effect, as formalised by both the situation calculus and event calculus. It has also been shown to correspond quite naturally to the semi-formal language of legislation. In particular, Prakken and Sartor credit the representation of the British Nationality Act as a logic program with being "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences".

Variants and extensions

Prolog

The programming language Prolog was developed in 1972 by Alain Colmerauer. It emerged from a collaboration between Colmerauer in Marseille and Robert Kowalski in Edinburgh. Colmerauer was working on natural language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of 1971, Colmerauer and Kowalski discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They observed that some theorem provers, like hyper-resolution, behave as bottom-up parsers and others, like SL-resolution (1971), behave as top-down parsers.
It was in the following summer of 1972 that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications. This dual declarative/procedural interpretation later became formalised in the Prolog notation
H :- B1, …, Bn.
which can be read (and used) both declaratively and procedurally. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, where H, B1, …, Bn are all atomic predicate logic formulae, and that SL-resolution could be restricted (and generalised) to LUSH or SLD-resolution. Kowalski's procedural interpretation and LUSH were described in a 1973 memo, published in 1974.

Colmerauer, with Philippe Roussel, used this dual interpretation of clauses as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp. Edinburgh Prolog became the de facto standard and strongly influenced the definition of ISO standard Prolog.

Abductive logic programming

Abductive logic programming is an extension of normal logic programming that allows some predicates, declared as abducible predicates, to be "open" or undefined. A clause in an abductive logic program has the form:
H :- B1, …, Bn, A1, …, An.
where H is an atomic formula that is not abducible, all the Bi are literals whose predicates are not abducible, and the Ai are atomic formulas whose predicates are abducible. The abducible predicates can be constrained by integrity constraints, which can have the form:
false :- B1, …, Bn.
where the Bi are arbitrary literals (defined or abducible, and atomic or negated). For example:

canfly(X) :- bird(X), normal(X).
false :-  normal(X), wounded(X).
bird(john).
bird(mary).
wounded(john).

where the predicate normal is abducible.

Problem solving is achieved by deriving hypotheses expressed in terms of the abducible predicates as solutions of problems to be solved. These problems can be either observations that need to be explained (as in classical abductive reasoning) or goals to be solved (as in normal logic programming). For example, the hypothesis normal(mary) explains the observation canfly(mary). Moreover, the same hypothesis entails the only solution X = mary of the goal of finding something that can fly:
 
:- canfly(X).

Abductive logic programming has been used for fault diagnosis, planning, natural language processing and machine learning. It has also been used to interpret Negation as Failure as a form of abductive reasoning.

Metalogic programming

Because mathematical logic has a long tradition of distinguishing between object language and metalanguage, logic programming also allows metalevel programming. The simplest metalogic program is the so-called "vanilla" meta-interpreter:

    solve(true).
    solve((A,B)):- solve(A),solve(B).
    solve(A):- clause(A,B),solve(B).

where true represents an empty conjunction, and clause(A,B) means there is an object-level clause of the form A :- B.
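As a sketch, the meta-interpreter can be run over an object-level program, provided the object-level clauses are visible to clause(A,B); in many Prolog systems this means declaring them dynamic. Here it is applied to the earlier fallible/human example:

:- dynamic human/1, fallible/1.
human(socrates).
fallible(X) :- human(X).
% ?- solve(fallible(X)).
% X = socrates.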

Metalogic programming allows object-level and metalevel representations to be combined, as in natural language. It can also be used to implement any logic that is specified by means of inference rules. Metalogic is used in logic programming to implement metaprograms, which manipulate other programs, databases, knowledge bases or axiomatic theories as data.

Constraint logic programming

Constraint logic programming combines Horn clause logic programming with constraint solving. It extends Horn clauses by allowing some predicates, declared as constraint predicates, to occur as literals in the body of clauses. A constraint logic program is a set of clauses of the form:
H :- C1, …, Cn ◊ B1, …, Bn.
where H and all the Bi are atomic formulas, and the Ci are constraints. Declaratively, such clauses are read as ordinary logical implications:
H if C1 and … and Cn and B1 and … and Bn.
However, whereas the predicates in the heads of clauses are defined by the constraint logic program, the predicates in the constraints are predefined by some domain-specific model-theoretic structure or theory.

Procedurally, subgoals whose predicates are defined by the program are solved by goal-reduction, as in ordinary logic programming, but constraints are checked for satisfiability by a domain-specific constraint-solver, which implements the semantics of the constraint predicates. An initial problem is solved by reducing it to a satisfiable conjunction of constraints.

The following constraint logic program represents a toy temporal database of john's history as a teacher:

teaches(john, hardware, T) :- 1990 ≤ T, T < 1999.
teaches(john, software, T) :- 1999 ≤ T, T < 2005.
teaches(john, logic, T) :- 2005 ≤ T, T ≤ 2012.
rank(john, instructor, T) :- 1990 ≤ T, T < 2010.
rank(john, professor, T) :- 2010 ≤ T, T < 2014.

Here ≤ and < are constraint predicates, with their usual intended semantics. The following goal clause queries the database to find out when john both taught logic and was a professor:
:- teaches(john, logic, T), rank(john, professor, T).
The solution is 2010 ≤ T, T ≤ 2012.

Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and finance. It is closely related to abductive logic programming.

Concurrent logic programming

Concurrent logic programming integrates concepts of logic programming with concurrent programming. Its development was given a big impetus in the 1980s by its choice as the systems programming language of the Japanese Fifth Generation Computer Systems (FGCS) project.

A concurrent logic program is a set of guarded Horn clauses of the form:
H :- G1, …, Gn | B1, …, Bn.
The conjunction G1, … , Gn is called the guard of the clause, and | is the commitment operator. Declaratively, guarded Horn clauses are read as ordinary logical implications:
H if G1 and … and Gn and B1 and … and Bn.
However, procedurally, when there are several clauses whose heads H match a given goal, then all of the clauses are executed in parallel, checking whether their guards G1, … , Gn hold. If the guards of more than one clause hold, then a committed choice is made to one of the clauses, and execution proceeds with the subgoals B1, …, Bn of the chosen clause. These subgoals can also be executed in parallel. Thus concurrent logic programming implements a form of "don't care nondeterminism", rather than "don't know nondeterminism".

For example, the following concurrent logic program defines a predicate shuffle(Left, Right, Merge), which can be used to shuffle two lists Left and Right, combining them into a single list Merge that preserves the ordering of the two lists Left and Right:
 
shuffle([], [], []).
shuffle(Left, Right, Merge) :-
    Left = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Rest, Right, ShortMerge).
shuffle(Left, Right, Merge) :-
    Right = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Left, Rest, ShortMerge).

Here, [] represents the empty list, and [Head | Tail] represents a list with first element Head followed by list Tail, as in Prolog. (Notice that the first occurrence of | in the second and third clauses is the list constructor, whereas the second occurrence of | is the commitment operator.) The program can be used, for example, to shuffle the lists [ace, queen, king] and [1, 4, 2] by invoking the goal clause:
 
shuffle([ace, queen, king], [1, 4, 2], Merge).

The program will non-deterministically generate a single solution, for example Merge = [ace, queen, 1, king, 4, 2].

Arguably, concurrent logic programming is based on message passing and consequently is subject to the same indeterminacy as other concurrent message-passing systems, such as Actors. Carl Hewitt has argued that concurrent logic programming is not based on logic, in his sense that computational steps cannot be logically deduced. However, in concurrent logic programming, any result of a terminating computation is a logical consequence of the program, and any partial result of a partial computation is a logical consequence of the program and the residual goal (process network). Consequently, the indeterminacy of computations implies that not all logical consequences of the program can be deduced.

Concurrent constraint logic programming

Concurrent constraint logic programming combines concurrent logic programming and constraint logic programming, using constraints to control concurrency. A clause can contain a guard, which is a set of constraints that may block the applicability of the clause. When the guards of several clauses are satisfied, concurrent constraint logic programming makes a committed choice to the use of only one.

Inductive logic programming

Inductive logic programming is concerned with generalizing positive and negative examples in the context of background knowledge: machine learning of logic programs. Recent work in this area, combining logic programming, learning and probability, has given rise to the new field of statistical relational learning and probabilistic inductive logic programming.

Higher-order logic programming

Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions HiLog and λProlog.
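λProlog and HiLog provide genuine higher-order features. Ordinary Prolog can only approximate the flavour of predicate variables with the standard call/N built-in, as in this sketch (all_hold is an invented helper):

% all_hold(P, Xs): the unary predicate P holds of every element of Xs.
all_hold(_, []).
all_hold(P, [X|Xs]) :- call(P, X), all_hold(P, Xs).
% ?- all_hold(bird, [john, mary]).  % with the earlier bird/1 facts: succeeds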

Linear logic programming

Basing logic programming within linear logic has resulted in the design of logic programming languages that are considerably more expressive than those based on classical logic. Horn clause programs can only represent state change by the change in arguments to predicates. In linear logic programming, one can use the ambient linear logic to support state change. Some early designs of logic programming languages based on linear logic include LO [Andreoli & Pareschi, 1991], Lolli, ACL, and Forum [Miller, 1996]. Forum provides a goal-directed interpretation of all of linear logic.

Object-oriented logic programming

F-logic extends logic programming with objects and the frame syntax.

Logtalk extends the Prolog programming language with support for objects, protocols, and other OOP concepts. Highly portable, it supports most standard-compliant Prolog systems as backend compilers.

Transaction logic programming

Transaction logic is an extension of logic programming with a logical theory of state-modifying updates. It has both a model-theoretic semantics and a procedural one. An implementation of a subset of Transaction logic is available in the Flora-2 system. Other prototypes are also available.

Metamathematics

From Wikipedia, the free encyclopedia

The title page of the Principia Mathematica (shortened version, including sections only up to *56), an important work of metamathematics.

Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics (and perhaps the creation of the term itself) owes itself to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a rigorous mathematical technique for investigating a great variety of foundation problems for mathematics and logic" (Kleene 1952, p. 59). An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics.

History

Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century to focus on what was then called the foundational crisis of mathematics. Richard's paradox (Richard 1905), concerning certain 'definitions' of real numbers in the English language, is an example of the sort of contradiction that can easily occur if one fails to distinguish between mathematics and metamathematics. Something similar can be said of the well-known Russell's paradox (does the set of all those sets that do not contain themselves contain itself?).

Metamathematics was intimately connected to mathematical logic, so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. More recently, mathematical logic has often included the study of new pure mathematics, such as set theory, recursion theory and pure model theory, which is not directly related to metamathematics.

Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift.

David Hilbert was the first to invoke the term "metamathematics" with regularity (see Hilbert's program). In his hands, it meant something akin to contemporary proof theory, in which finitary methods are used to study various axiomatized mathematical theorems (Kleene 1952, p. 55).

Other prominent figures in the field include Bertrand Russell, Thoralf Skolem, Emil Post, Alonzo Church, Stephen Kleene, Willard Quine, Paul Benacerraf, Hilary Putnam, Gregory Chaitin, Alfred Tarski and Kurt Gödel.

Today, metalogic and metamathematics are largely synonymous, and both have been substantially subsumed by mathematical logic in academia.

Milestones

The discovery of hyperbolic geometry

The discovery of hyperbolic geometry had important philosophical consequences for metamathematics. Before its discovery there was just one geometry and one mathematics; the idea that another geometry existed was considered improbable.

When Gauss discovered hyperbolic geometry, it is said that he did not publish anything about it out of fear of the "uproar of the Boeotians", which would ruin his status as princeps mathematicorum (Latin, "the Prince of Mathematicians"). The "uproar of the Boeotians" came and went, and gave an impetus to metamathematics and great improvements in mathematical rigour, analytical philosophy and logic.

Begriffsschrift

Begriffsschrift (German for, roughly, "concept-script") is a book on logic by Gottlob Frege, published in 1879, and the formal system set out in that book.

Begriffsschrift is usually translated as concept writing or concept notation; the full title of the book identifies it as "a formula language, modeled on that of arithmetic, of pure thought." Frege's motivation for developing his formal approach to logic resembled Leibniz's motivation for his calculus ratiocinator (although in his foreword Frege clearly denies having reached this aim, and denies that constructing an ideal language like Leibniz's is his main aim, declaring it a hard and idealistic, though not impossible, task). Frege went on to employ his logical calculus in his research on the foundations of mathematics, carried out over the next quarter century.

Principia Mathematica

Principia Mathematica, or "PM" as it is often abbreviated, was an attempt to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. As such, this ambitious project is of great importance in the history of mathematics and philosophy, being one of the foremost products of the belief that such an undertaking may be achievable. However, in 1931, Gödel's incompleteness theorem proved definitively that PM, and in fact any other attempt, could never achieve this lofty goal; that is, for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths of mathematics which could not be deduced from them.

One of the main inspirations and motivations for PM was the earlier work of Gottlob Frege on logic, which Russell discovered allowed for the construction of paradoxical sets. PM sought to avoid this problem by ruling out the unrestricted creation of arbitrary sets. This was achieved by replacing the notion of a general set with the notion of a hierarchy of sets of different 'types', a set of a certain type being allowed to contain only sets of strictly lower types. Contemporary mathematics, however, avoids paradoxes such as Russell's in less unwieldy ways, such as the system of Zermelo–Fraenkel set theory.

Gödel's incompleteness theorem

Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem.

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.

Tarski's definition of model-theoretic satisfaction

The T-schema or truth schema (not to be confused with 'Convention T') is used to give an inductive definition of truth which lies at the heart of any realisation of Alfred Tarski's semantic theory of truth. Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett.

The T-schema is often expressed in natural language, but it can be formalized in many-sorted predicate logic or modal logic; such a formalisation is called a T-theory. T-theories form the basis of much fundamental work in philosophical logic, where they are applied in several important controversies in analytic philosophy.

As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated to S): 'S' is true if and only if S

Example: 'snow is white' is true if and only if snow is white.

The impossibility of the Entscheidungsproblem

The Entscheidungsproblem (German for 'decision problem') is a challenge posed by David Hilbert in 1928. The Entscheidungsproblem asks for an algorithm that takes as input a statement of a first-order logic (possibly with a finite number of axioms beyond the usual axioms of first-order logic) and answers "Yes" or "No" according to whether the statement is universally valid, i.e., valid in every structure satisfying the axioms. By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced from the axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable from the axioms using the rules of logic.

In 1936, Alonzo Church and Alan Turing published independent papers showing that a general solution to the Entscheidungsproblem is impossible, assuming that the intuitive notion of "effectively calculable" is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus). This assumption is now known as the Church–Turing thesis.

Metamath

From Wikipedia, the free encyclopedia

Metamath
Developer(s): Norman Megill
Written in: ANSI C
Operating system: Linux, Windows, macOS
Type: Computer-assisted proof checking
License: GNU General Public License (Creative Commons Public Domain Dedication for databases)
Website: http://metamath.org

Metamath is a language for developing strictly formalized mathematical definitions and proofs accompanied by a proof checker for this language and a growing database of thousands of proved theorems covering conventional results in logic, set theory, number theory, group theory, algebra, analysis, and topology, as well as topics in Hilbert spaces and quantum logic.

The Metamath language

While the large database of proved theorems follows conventional ZFC set theory, the Metamath language is a metalanguage, suitable for developing a wide variety of formal systems.

The set of symbols that can be used for constructing formulas is declared using $c and $v statements; for example:
 
$( Declare the constant symbols we will use $)
    $c 0 + = -> ( ) term wff |- $.
$( Declare the metavariables we will use $)
    $v t r s P Q $.

The grammar for formulas is specified using a combination of $f and $a statements; for example:
 
$( Specify properties of the metavariables $)
    tt $f term t $.
    tr $f term r $.
    ts $f term s $.
    wp $f wff P $.
    wq $f wff Q $.
$( Define "wff" (part 1) $)
    weq $a wff t = r $.
$( Define "wff" (part 2) $)
    wim $a wff ( P -> Q ) $.

Axioms and rules of inference are specified with $a statements along with ${ and $} for block scoping; for example:

$( State axiom a1 $)
    a1 $a |- ( t = r -> ( t = s -> r = s ) ) $.
$( State axiom a2 $)
    a2 $a |- ( t + 0 ) = t $.
    ${
       min $e |- P $.
       maj $e |- ( P -> Q ) $.
$( Define the modus ponens inference rule $)
       mp  $a |- Q $.
    $}

The metamath program can convert statements to more conventional TeX notation (for example, the modus ponens axiom from set.mm).
Using one construct, $a statements, to capture syntactic rules, axiom schemas, and rules of inference provides a level of flexibility similar to higher-order logical frameworks without a dependency on a complex type system.

Theorems (and derived rules of inference) are written with $p statements; for example:

$( Prove a theorem $)
    th1 $p |- t = t $=
  $( Here is its proof: $)
       tt tze tpl tt weq tt tt weq tt a2 tt tze tpl
       tt weq tt tze tpl tt weq tt tt weq wim tt a2
       tt tze tpl tt tt a1 mp mp
     $.

Note the inclusion of the proof in the $p statement. It abbreviates the following detailed proof:


tt            $f term t
tze           $a term 0
1,2 tpl       $a term ( t + 0 )
3,1 weq       $a wff ( t + 0 ) = t
1,1 weq       $a wff t = t
1 a2          $a |- ( t + 0 ) = t
1,2 tpl       $a term ( t + 0 )
7,1 weq       $a wff ( t + 0 ) = t
1,2 tpl       $a term ( t + 0 )
9,1 weq       $a wff ( t + 0 ) = t
1,1 weq       $a wff t = t
10,11 wim     $a wff ( ( t + 0 ) = t -> t = t )
1 a2          $a |- ( t + 0 ) = t
1,2 tpl       $a term ( t + 0 )
14,1,1 a1     $a |- ( ( t + 0 ) = t -> ( ( t + 0 ) = t -> t = t ) )
8,12,13,15 mp $a |- ( ( t + 0 ) = t -> t = t )
4,5,6,16 mp   $a |- t = t

The "essential" form of the proof elides syntactic details, leaving a more conventional presentation:


a2             $a |- ( t + 0 ) = t
a2             $a |- ( t + 0 ) = t
a1             $a |- ( ( t + 0 ) = t -> ( ( t + 0 ) = t -> t = t ) )
2,3 mp         $a |- ( ( t + 0 ) = t -> t = t )
1,4 mp         $a |- t = t

A generic proof checker

Metamath has no specific logic embedded and can simply be regarded as a device to apply inference rules to formulas. Simplicity is the master concept in the design of Metamath: the language employed to state the definitions, axioms, inference rules and theorems is composed of only a handful of keywords, and all the proofs are checked using one simple algorithm based on the substitution of variables (with optional provisos for which variables must remain distinct after a substitution is made). This "substitution" is just the simple replacement of a variable with an expression, and not the proper substitution described in works on predicate calculus.
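To make the idea concrete, here is a toy Prolog sketch of such "simple replacement" substitution over token lists; this is not Metamath's actual implementation (which is written in C), and it ignores the distinct-variable provisos:

% subst(Tokens, Map, Result): replace each variable token that appears
% in Map by its replacement token list; copy all other tokens unchanged.
subst([], _, []).
subst([Tok|Rest], Map, Out) :-
    (   member(Tok-Replacement, Map)
    ->  append(Replacement, OutRest, Out)
    ;   Out = [Tok|OutRest]
    ),
    subst(Rest, Map, OutRest).
% ?- subst([t, =, t], [t-['(', t, +, 0, ')']], Out).
% Out = ['(', t, +, 0, ')', =, '(', t, +, 0, ')'].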

Although Metamath is used for mathematical proof checking, its algorithm is so general that its field of use can be extended: in fact, Metamath could be used with every sort of formal system. The checking of a computer program could be considered (even if Metamath's low level would make this difficult); it could possibly even serve as a syntactic checker for a natural language (with the same caveat). Because Metamath has a very generic concept of what a proof is (namely, a tree of formulas connected by inference rules) and no specific logic is embedded in the software, it can be used with logics as different as Hilbert-style logics, sequent-based logics, or even the lambda calculus. In contrast, it is largely incompatible with logical systems that use things other than formulas and inference rules. The original natural deduction system (due to Gerhard Gentzen), which uses an extra stack, is an example of a system that cannot be implemented directly with Metamath. In the case of natural deduction, however, it is possible to append the stack to the formulas (transforming the natural deduction formulas into a sort of sequent) so that Metamath's requirements are met.

What makes Metamath so generic is its substitution algorithm, which makes no assumptions about the logic in use and only checks that the substitutions of variables are carried out correctly.

A step-by-step proof

Here is a detailed example of how this algorithm works, using steps 1 and 2 of the theorem 2p2e4 in set.mm. Step 2 states that ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). It is the conclusion of the theorem opreq2i, which states that if A = B, then ( C F A ) = ( C F B ). This theorem would never appear in this cryptic form in a textbook, but its literate formulation is banal: when two quantities are equal, one can be replaced by the other in an operation. To check the proof, Metamath attempts to unify ( C F A ) = ( C F B ) with ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). There is only one way to do so: unifying C with 2, F with +, A with 2 and B with ( 1 + 1 ). Metamath then uses the premise of opreq2i, which states that A = B. As a consequence of its previous computation, Metamath knows that A should be substituted by 2 and B by ( 1 + 1 ). The premise A = B becomes 2 = ( 1 + 1 ), and thus step 1 is generated. In its turn, step 1 is unified with df-2, the definition of the number 2, which states that 2 = ( 1 + 1 ). Here the unification is simply a matter of constants and is straightforward (there are no variables to substitute). The verification is finished, and these two steps of the proof of 2p2e4 are correct.

There are, however, some complications not shown above. When Metamath unifies ( 2 + 2 ) with B, it has to check that the syntactic rules are respected. In fact, B has the type class, so Metamath has to check that ( 2 + 2 ) is also typed class. This is done using the same sort of unification described in the paragraph above.

The above explanation may suggest that Metamath stores formulas for every proof step. In fact, nothing of the sort exists: Metamath stores only the conclusion and the premises of the proved theorem, together with the list of names of the theorems used by the proof, and nothing more. But since the substitution algorithm makes it possible to generate the conclusion from the premises, nothing more is required.

Databases

Metamath comes with two main databases, set.mm and ql.mm. set.mm stores theorems concerning ZFC set theory, and ql.mm develops a set of quantum logic theorems. Three internet interfaces (the Metamath Proof Explorer, the Hilbert Space Explorer and the Quantum Logic Explorer) are provided to explore these two databases in a human-friendly way.

set.mm is by far the biggest database written for Metamath, but there is also a formalization (by Robert Solovay) of Peano arithmetic called peano.mm (included in metamath.zip) and a formalization of natural deduction called nat.mm. There is a database based on the formal system MIU presented in Gödel, Escher, Bach. Raph Levien has also designed several databases for his Ghilbert program.

Metamath Proof Explorer

Metamath Proof Explorer
A proof displayed in the Metamath Proof Explorer
Type of site: Internet encyclopedia project
Headquarters: USA
Owner: Norman Megill
Created by: Norman Megill
Website: us.metamath.org/mpegif/mmset.html
Alexa rank: 2,281,276 (April 2014)
Commercial: No
Registration: No

Metamath has been used to develop set.mm, a human-readable database containing over 30,000 (as of March 2017) fully formal proofs of mathematical theorems built upon ZFC set theory. Those proofs may be browsed on the internet using an interface called the Metamath Proof Explorer. New theorems are added to set.mm daily; a table of the most recent proofs is maintained.

One of the seminal ideas that led Megill to design Metamath was the desire to precisely determine the correctness of proofs ("I enjoy abstract mathematics, but I sometimes get lost in a barrage of definitions and start to lose confidence that my proofs are correct."). One can also see the spirit of the encyclopedia animating the growth of Metamath and its most important database, set.mm: reading set.mm, one sometimes has the impression that its author's ambition is essentially to add all of mathematics, one theorem after another.

set.mm has been maintained for over twenty years (the first proofs in set.mm are dated August 1993). It is mainly the work of Norman Megill, but there are also proofs made by other participants. Technically speaking, set.mm develops, in the Hilbert style, ZFC set theory with the addition of the Tarski–Grothendieck axiom (to manage categories). The underlying logic is classical propositional calculus and classical predicate calculus with equality.

set.mm is a valuable tool to understand how well-known set theory concepts such as classes, power sets, union, relations, functions, equivalence classes and so on are derived from the axioms.

However, set.mm doesn't stop at these basic notions but explores more elaborate theories.

Cantorian concepts such as ordinal and cardinal numbers, equinumerosity and the aleph function are defined.
Integers and natural numbers are constructed, along with traditional arithmetic tools such as operations, recursion and induction.

The real and complex numbers are constructed from Dedekind cuts, and the concepts of sequence, limit of a sequence, sum of a series and so on are developed for them. The concept of integral is still missing.

Square root, exponentiation, exponential and trigonometric functions are implemented.
General topology is currently developed: topological spaces, closed and open sets, neighborhood, limit point, continuous function, Hausdorff spaces, metric spaces, Cauchy sequences have been defined.

One can also find some theorems of algebra concerning groups, rings, vector spaces, and Hilbert spaces.

Hilbert Space Explorer

The Hilbert Space Explorer presents more than 1,000 theorems pertaining to Hilbert space theory. Those theorems are included in set.mm, but they are not shown in the Metamath Proof Explorer because they were developed by adding extra axioms to the standard axioms of set.mm. Since this addition goes beyond plain ZFC, the resulting proofs are shown in a separate Explorer. The addition (justified by historical reasons of opportunity) is theoretically unnecessary, since the concept of a Hilbert space can be defined with the standard ZFC axioms.

Quantum Logic Explorer

Quantum logic theorems can be found in the database ql.mm. The Quantum Logic Explorer is an internet interface to this database.

Pedagogy

The method of proof used by Metamath is very different from what is used in a school context. In schools what is required is the literate, synthetic method of proof developed by mathematicians since Euclid's time. In Metamath, the method of proof is the symbolic, analytical method of proof invented by Aristotle, Leibniz, Peano, and Frege. Thus, Metamath is unsuitable for school exercises; to put it simply, its proofs are much too detailed to be used with ease in school. However, set.mm can be used in a school context as an example of a symbolic system that is big enough to be interesting. set.mm can also be useful because its detailed, symbolic, unambiguous definitions can resolve confusion with textbook definitions. Students may also appreciate the rigor of the Metamath Proof Explorer: no steps are skipped, no assumption is left unstated, and no proofs are left "to the reader".

The Proof Explorer references many textbooks that can be used in conjunction with Metamath. Thus, people interested in studying mathematics can use Metamath in connection with these books.

Other works connected to Metamath

Proof checkers

Using the design ideas implemented in Metamath, Raph Levien has implemented a very small proof checker, mmverify.py, in only 500 lines of Python code.

Ghilbert is a similar though more elaborate language based on mmverify.py. Levien would like to implement a system in which several people could collaborate, and his work emphasizes modularity and the connection between small theories.

Building on Levien's seminal work, the Metamath design principles have been implemented for a broad variety of languages. Juha Arpiainen has implemented his own proof checker in Common Lisp, called Bourbaki, and Marnix Klooster has coded a proof checker in Haskell, called Hmm.

Although they all use the overall Metamath approach to formal system checker coding, they also implement new concepts of their own.

Editors

Mel O'Cat designed a system called Mmj2, which provides a graphical user interface for proof entry. O'Cat's initial aim was to allow the user to enter proofs by simply typing the formulas and letting Mmj2 find the appropriate inference rules to connect them. In Metamath, by contrast, you may only enter theorem names; you may not enter formulas directly. Mmj2 also makes it possible to enter a proof forward or backward (Metamath only allows a proof to be entered backward). Moreover, Mmj2 has a real grammar parser, unlike Metamath. This technical difference brings more comfort to the user: Metamath sometimes hesitates between several parses of a formula (most of them meaningless) and asks the user to choose, whereas in Mmj2 this limitation no longer exists.

There is also a project by William Hale to add a graphical user interface to Metamath, called Mmide. Paul Chapman, in his turn, is working on a new proof browser, which has highlighting that allows you to see the referenced theorem before and after the substitution is made.
