
Thursday, January 25, 2024

Metamathematics

From Wikipedia, the free encyclopedia
[Image caption: The title page of the Principia Mathematica (shortened version), an important work of metamathematics.]

Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics (and perhaps the creation of the term itself) owes much to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a rigorous mathematical technique for investigating a great variety of foundation problems for mathematics and logic" (Kleene 1952, p. 59). An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics.
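To make this contrast concrete, the same fact can be written at both levels. The following schematic rendering assumes Peano arithmetic (PA) as the object theory; that choice, and the notation, are illustrative rather than anything fixed by the text above.

% Object level (mathematics): a sentence proved inside the system
2 + 2 = 4

% Meta level (metamathematics): a statement about the system,
% asserting that the sentence above is derivable in PA
\mathrm{PA} \vdash 2 + 2 = 4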

History

Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century to focus attention on what was then called the foundational crisis of mathematics. Richard's paradox (Richard 1905), concerning certain 'definitions' of real numbers in the English language, is an example of the sort of contradiction that can easily occur if one fails to distinguish between mathematics and metamathematics. Something similar can be said about the well-known Russell's paradox (does the set of all those sets that do not contain themselves contain itself?).

Metamathematics was intimately connected to mathematical logic, so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. More recently, mathematical logic has often included the study of new pure mathematics, such as set theory, category theory, recursion theory and pure model theory, which is not directly related to metamathematics.

Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift, published in 1879.

David Hilbert was the first to invoke the term "metamathematics" with regularity (see Hilbert's program), in the early 20th century. In his hands, it meant something akin to contemporary proof theory, in which finitary methods are used to study various axiomatized mathematical theories (Kleene 1952, p. 55).

Other prominent figures in the field include Bertrand Russell, Thoralf Skolem, Emil Post, Alonzo Church, Alan Turing, Stephen Kleene, Willard Quine, Paul Benacerraf, Hilary Putnam, Gregory Chaitin, Alfred Tarski, Paul Cohen and Kurt Gödel.

Today, metalogic and metamathematics broadly overlap, and both have been substantially subsumed by mathematical logic in academia.

Milestones

The discovery of hyperbolic geometry

The discovery of hyperbolic geometry had important philosophical consequences for metamathematics. Before its discovery there was just one geometry and one mathematics; the idea that another geometry could exist was considered improbable.

When Gauss discovered hyperbolic geometry, it is said that he did not publish anything about it out of fear of the "uproar of the Boeotians", which would ruin his status as princeps mathematicorum (Latin, "the Prince of Mathematicians"). The "uproar of the Boeotians" came and went, and gave an impetus to metamathematics and great improvements in mathematical rigour, analytical philosophy and logic.

Begriffsschrift

Begriffsschrift (German for, roughly, "concept-script") is a book on logic by Gottlob Frege, published in 1879, and the formal system set out in that book.

Begriffsschrift is usually translated as concept writing or concept notation; the full title of the book identifies it as "a formula language, modeled on that of arithmetic, of pure thought." Frege's motivation for developing his formal approach to logic resembled Leibniz's motivation for his calculus ratiocinator (although, in his foreword, Frege clearly denies that he achieved this aim, and also denies that constructing an ideal language like Leibniz's was his main aim, a task he declares to be hard and idealistic, though not impossible). Frege went on to employ his logical calculus in his research on the foundations of mathematics, carried out over the next quarter century.

Principia Mathematica

Principia Mathematica, or "PM" as it is often abbreviated, was an attempt to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. As such, this ambitious project is of great importance in the history of mathematics and philosophy, being one of the foremost products of the belief that such an undertaking may be achievable. However, in 1931, Gödel's incompleteness theorem proved definitively that PM, and in fact any other attempt, could never achieve this goal; that is, for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths of mathematics which could not be deduced from them.

One of the main inspirations and motivations for PM was the earlier work of Gottlob Frege on logic, which Russell discovered allowed for the construction of paradoxical sets. PM sought to avoid this problem by ruling out the unrestricted creation of arbitrary sets. This was achieved by replacing the notion of a general set with the notion of a hierarchy of sets of different 'types', with a set of a given type allowed to contain only sets of strictly lower types. Contemporary mathematics, however, avoids paradoxes such as Russell's in less unwieldy ways, such as the system of Zermelo–Fraenkel set theory.
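As a loose illustration of the restriction (a toy sketch, not PM's actual ramified theory of types; all names below are invented), one can mimic the idea in a short program: every set carries a type level, membership is permitted only for sets of strictly lower level, and so the self-membership that drives Russell's paradox cannot even be expressed.

class TypedSet:
    """Toy typed set: may contain only sets of strictly lower type level."""

    def __init__(self, level, elements=()):
        self.level = level
        for e in elements:
            if not isinstance(e, TypedSet) or e.level >= level:
                raise TypeError(
                    f"a set of type {level} may contain only sets of type < {level}"
                )
        self.elements = list(elements)

    def contains(self, other):
        return other in self.elements


a = TypedSet(level=0)                # a bottom-level set
b = TypedSet(level=1, elements=[a])  # a type-1 set containing a type-0 set

# The self-membership behind Russell's paradox is ruled out by construction:
try:
    TypedSet(level=1, elements=[b])  # same level: rejected
except TypeError as err:
    print("rejected:", err)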

Gödel's incompleteness theorem

Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem.

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.
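Stated schematically (with T a consistent, effectively axiomatized theory containing enough arithmetic, G_T its Gödel sentence, and Con(T) the arithmetized consistency statement; the notation is chosen here for illustration):

% First incompleteness theorem: G_T is true in the standard model
% of arithmetic, yet not provable in T
\mathbb{N} \models G_T \qquad \text{and} \qquad T \nvdash G_T

% Second incompleteness theorem: T cannot prove its own consistency
T \nvdash \mathrm{Con}(T)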

Tarski's definition of model-theoretic satisfaction

The T-schema or truth schema (not to be confused with 'Convention T') is used to give an inductive definition of truth which lies at the heart of any realisation of Alfred Tarski's semantic theory of truth. Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett.

The T-schema is often expressed in natural language, but it can be formalized in many-sorted predicate logic or modal logic; such a formalisation is called a T-theory. T-theories form the basis of much fundamental work in philosophical logic, where they are applied in several important controversies in analytic philosophy.

As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated to S): 'S' is true if and only if S

Example: 'snow is white' is true if and only if snow is white.
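The inductive character of the definition can be made vivid with a toy object language. In the sketch below (all names invented for illustration), sentences of a tiny propositional language are evaluated against a valuation, and each clause mirrors an instance of the T-schema: the sentence named on the left-hand side is true exactly when the condition on the right-hand side holds.

def is_true(sentence, valuation):
    """Inductively defined truth for a toy propositional language.
    Sentences are nested tuples such as ("and", ("atom", "snow is white"),
    ("not", ("atom", "grass is red")))."""
    kind = sentence[0]
    if kind == "atom":
        # 'p' is true if and only if the valuation says so
        return valuation[sentence[1]]
    if kind == "not":
        # 'not S' is true if and only if S is not true
        return not is_true(sentence[1], valuation)
    if kind == "and":
        # 'S and T' is true if and only if S is true and T is true
        return is_true(sentence[1], valuation) and is_true(sentence[2], valuation)
    raise ValueError(f"unknown sentence form: {kind}")


valuation = {"snow is white": True, "grass is red": False}
print(is_true(("atom", "snow is white"), valuation))            # True
print(is_true(("and", ("atom", "snow is white"),
               ("not", ("atom", "grass is red"))), valuation))  # True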

The undecidability of the Entscheidungsproblem

The Entscheidungsproblem (German for 'decision problem') is a challenge posed by David Hilbert in 1928. The Entscheidungsproblem asks for an algorithm that takes as input a statement of a first-order logic (possibly with a finite number of axioms beyond the usual axioms of first-order logic) and answers "Yes" or "No" according to whether the statement is universally valid, i.e., valid in every structure satisfying the axioms. By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced from the axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable from the axioms using the rules of logic.
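The provability reformulation suggests one half of such an algorithm: systematically enumerate all finite strings, treat each as a candidate derivation, and answer "Yes" as soon as a mechanical proof checker accepts one. The sketch below (with a stand-in checker supplied by the caller; the names are invented for illustration) shows the shape of that search. Its weakness is exactly the point of the result quoted next: when no proof exists the loop never halts, so this is only a semi-decision procedure, not a decision procedure.

from itertools import count

def all_strings(alphabet="01"):
    """Enumerate every finite string over the alphabet, shortest first;
    under a suitable encoding this enumerates all candidate derivations."""
    for length in count(1):
        for n in range(len(alphabet) ** length):
            chars, m = [], n
            for _ in range(length):
                chars.append(alphabet[m % len(alphabet)])
                m //= len(alphabet)
            yield "".join(chars)

def search_for_proof(statement, checks_out):
    """Semi-decision procedure: `checks_out(candidate, statement)` is assumed
    to be a mechanical proof checker. The search halts with "Yes" if some
    candidate is accepted, but runs forever if no proof exists."""
    for candidate in all_strings():
        if checks_out(candidate, statement):
            return "Yes"

# Toy demonstration with a fake checker that accepts one particular string:
print(search_for_proof("phi", lambda cand, stmt: cand == "101"))  # prints Yes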

In 1936, Alonzo Church and Alan Turing published independent papers showing that a general solution to the Entscheidungsproblem is impossible, assuming that the intuitive notion of "effectively calculable" is captured by the functions computable by a Turing machine (or equivalently, by those expressible in the lambda calculus). This assumption is now known as the Church–Turing thesis.
