
Wednesday, February 18, 2015

Gödel's incompleteness theorems



From Wikipedia, the free encyclopedia

Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem.

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.

Background

Because statements of a formal theory are written in symbolic form, it is possible to verify mechanically that a formal proof from a finite set of axioms is valid. This task, known as automatic proof verification, is closely related to automated theorem proving. The difference is that instead of constructing a new proof, the proof verifier simply checks that a provided formal proof (or, in some cases, instructions that can be followed to create a formal proof) is correct. This process is not merely hypothetical; systems such as Isabelle and Coq are used today to formalize proofs and then check their validity.

Many theories of interest include an infinite set of axioms, however. To verify a formal proof when the set of axioms is infinite, it must be possible to determine whether a statement that is claimed to be an axiom is actually an axiom. This issue arises in first-order theories of arithmetic, such as Peano arithmetic, because the principle of mathematical induction is expressed as an infinite set of axioms (an axiom schema).

A formal theory is said to be effectively generated if its set of axioms is a recursively enumerable set. This means that there is a computer program that, in principle, could enumerate all the axioms of the theory without listing any statements that are not axioms. This is equivalent to the existence of a program that enumerates all the theorems of the theory without enumerating any statements that are not theorems. Examples of effectively generated theories with infinite sets of axioms include Peano arithmetic and Zermelo–Fraenkel set theory.
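
As an informal illustration of what "effectively generated" means computationally, the sketch below (in Python, with a placeholder axiom stream and a hypothetical proof checker called derives, neither of which is spelled out here) shows how a program that enumerates the axioms yields a program that enumerates the theorems, by dovetailing over finite axiom prefixes and candidate derivation codes.

    from itertools import count, islice

    def axioms():
        """Hypothetical axiom enumerator: yields the axioms one by one
        (for Peano arithmetic this would include every instance of the
        induction schema)."""
        for n in count():
            yield f"axiom_{n}"  # placeholder axiom strings

    def theorems(derives):
        """Enumerate theorems by dovetailing: at stage k, test the first k
        candidate derivation codes against the first k axioms.  The helper
        derives(axiom_list, code) is assumed to return the theorem that the
        coded derivation establishes, or None if the code is not a valid
        derivation from those axioms."""
        seen = set()
        for k in count(1):
            prefix = list(islice(axioms(), k))
            for code in range(k):
                result = derives(prefix, code)
                if result is not None and result not in seen:
                    seen.add(result)
                    yield result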

In choosing a set of axioms, one goal is to be able to prove as many correct results as possible, without proving any incorrect results. A set of axioms is complete if, for any statement in the axioms' language, either that statement or its negation is provable from the axioms. A set of axioms is (simply) consistent if there is no statement such that both the statement and its negation are provable from the axioms. In the standard system of first-order logic, an inconsistent set of axioms will prove every statement in its language (this is sometimes called the principle of explosion), and is thus automatically complete. A set of axioms that is both complete and consistent, however, proves a maximal set of non-contradictory theorems. Gödel's incompleteness theorems show that in certain cases it is not possible to obtain an effectively generated, complete, consistent theory.
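
Writing T ⊢ φ for "φ is provable from the axioms of T", these two properties can be restated compactly as follows (a direct symbolic rendering of the definitions just given):

    % Completeness: every sentence of the language is settled one way or the other.
    \[
      T \text{ is complete} \iff \text{for every sentence } \varphi,\ \ T \vdash \varphi \ \text{ or } \ T \vdash \lnot\varphi .
    \]
    % Consistency: no sentence is settled both ways.
    \[
      T \text{ is consistent} \iff \text{there is no sentence } \varphi \text{ with } T \vdash \varphi \ \text{ and } \ T \vdash \lnot\varphi .
    \]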

First incompleteness theorem

Gödel's first incompleteness theorem first appeared as "Theorem VI" in Gödel's 1931 paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I.

The formal theorem is written in highly technical language. It may be stated in English as (the following is not a quote, but rather a précis):
Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true,[1] but not provable in the theory.
The true but unprovable statement referred to by the theorem is often referred to as "the Gödel sentence" for the theory. The proof constructs a specific Gödel sentence for each consistent effectively generated theory, but there are infinitely many statements in the language of the theory that share the property of being true but unprovable. For example, the conjunction of the Gödel sentence and any logically valid sentence will have this property.

For each consistent formal theory T having the required small amount of number theory, the corresponding Gödel sentence G asserts: "G cannot be proved within the theory T". This interpretation of G leads to the following informal analysis. If G were provable under the axioms and rules of inference of T, then T would have a theorem, G, which effectively contradicts itself, and thus the theory T would be inconsistent. This means that if the theory T is consistent then G cannot be proved within it, and so the theory T is incomplete. Moreover, the claim G makes about its own unprovability is correct. In this sense G is not only unprovable but true, and provability-within-the-theory-T is not the same as truth. This informal analysis can be formalized to make a rigorous proof of the incompleteness theorem, as described in the section "Proof sketch for the first theorem" below. The formal proof reveals exactly the hypotheses required for the theory T in order for the self-contradictory nature of G to lead to a genuine contradiction.
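
Schematically, if ProvT(x) denotes a provability predicate for T (notation introduced here only for illustration) and ⌜G⌝ denotes the Gödel number of G, the informal reading of G above corresponds to the provable equivalence

    \[
      T \vdash\; G \;\leftrightarrow\; \lnot\,\mathrm{Prov}_T\!\bigl(\ulcorner G \urcorner\bigr),
    \]

which anticipates the diagonal lemma used in the proof sketch later in this article.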

Each effectively generated theory has its own Gödel statement. It is possible to define a larger theory T’ that contains the whole of T, plus G as an additional axiom. This will not result in a complete theory, because Gödel's theorem will also apply to T’, and thus T’ cannot be complete. In this case, G is indeed a theorem in T’, because it is an axiom. Since G states only that it is not provable in T, no contradiction is presented by its provability in T’. However, because the incompleteness theorem applies to T’, there will be a new Gödel statement G’ for T’, showing that T’ is also incomplete. G’ will differ from G in that G’ will refer to T’, rather than T.

To prove the first incompleteness theorem, Gödel represented statements by numbers. Then the theory at hand, which is assumed to prove certain facts about numbers, also proves facts about its own statements, provided that it is effectively generated. Questions about the provability of statements are represented as questions about the properties of numbers, which would be decidable by the theory if it were complete. In these terms, the Gödel sentence states that no natural number exists with a certain, strange property. A number with this property would encode a proof of the inconsistency of the theory. If there were such a number then the theory would be inconsistent, contrary to the consistency hypothesis. So, under the assumption that the theory is consistent, there is no such number.

Meaning of the first incompleteness theorem

Gödel's first incompleteness theorem shows that any consistent effective formal system that includes enough of the theory of the natural numbers is incomplete: there are true statements expressible in its language that are unprovable within the system. Thus no formal system (satisfying the hypotheses of the theorem) that aims to characterize the natural numbers can actually do so, as there will be true number-theoretical statements that the system cannot prove. This fact is sometimes thought to have severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic (Hellman 1981, pp. 451–468). Bob Hale and Crispin Wright argue that it is not a problem for logicism because the incompleteness theorems apply equally to first-order logic as they do to arithmetic. They argue that only those who believe that the natural numbers are to be defined in terms of first-order logic have this problem.

The existence of an incomplete formal system is, in itself, not particularly surprising. A system may be incomplete simply because not all the necessary axioms have been discovered. For example, Euclidean geometry without the parallel postulate is incomplete; it is not possible to prove or disprove the parallel postulate from the remaining axioms.

Gödel's theorem shows that, in theories that include a small portion of number theory, a complete and consistent finite list of axioms can never be created, nor even an infinite list that can be enumerated by a computer program. Each time a new statement is added as an axiom, there are other true statements that still cannot be proved, even with the new axiom. If an axiom is ever added that makes the system complete, it does so at the cost of making the system inconsistent.

There are complete and consistent lists of axioms for arithmetic that cannot be enumerated by a computer program. For example, one might take all true statements about the natural numbers to be axioms (and no false statements), which gives the theory known as "true arithmetic". The difficulty is that there is no mechanical way to decide, given a statement about the natural numbers, whether it is an axiom of this theory, and thus there is no effective way to verify a formal proof in this theory.

Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's second problem, which asked for a finitary consistency proof for mathematics. The second incompleteness theorem, in particular, is often viewed as making the problem impossible. Not all mathematicians agree with this analysis, however, and the status of Hilbert's second problem is not yet decided (see "Modern viewpoints on the status of the problem").

Relation to the liar paradox

The liar paradox is the sentence "This sentence is false." An analysis of the liar sentence shows that it cannot be true (for then, as it asserts, it is false), nor can it be false (for then, it is true). A Gödel sentence G for a theory T makes a similar assertion to the liar sentence, but with truth replaced by provability: G says "G is not provable in the theory T." The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence.

It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently by Gödel (when he was working on the proof of the incompleteness theorem) and by Alfred Tarski.

Extensions of Gödel's original result

Gödel demonstrated the incompleteness of the theory of Principia Mathematica, a particular theory of arithmetic, but a parallel demonstration could be given for any effective theory of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal theory. The terminology used to state these conditions was not yet developed in 1931 when Gödel published his results.

Gödel's original statement and proof of the incompleteness theorem requires the assumption that the theory is not just consistent but ω-consistent. A theory is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number m the theory proves ~P(m), and yet the theory also proves that there exists a natural number n such that P(n). That is, the theory says that a number with property P exists while denying that it has any specific value. The ω-consistency of a theory implies its consistency, but consistency does not imply ω-consistency. J. Barkley Rosser (1936) strengthened the incompleteness theorem by finding a variation of the proof (Rosser's trick) that only requires the theory to be consistent, rather than ω-consistent. This is mostly of technical interest, since all true formal theories of arithmetic (theories whose axioms are all true statements about natural numbers) are ω-consistent, and thus Gödel's theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, rather than ω-consistency, is now commonly known as Gödel's incompleteness theorem and as the Gödel–Rosser theorem.
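
Written out, a theory T is ω-inconsistent exactly when there is a formula P(x) such that

    \[
      T \vdash \exists n\, P(n)
      \qquad\text{and yet}\qquad
      T \vdash \lnot P(\underline{m}) \ \text{ for every natural number } m,
    \]

where \underline{m} denotes the numeral for the natural number m; T is ω-consistent when no such formula exists.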

Second incompleteness theorem

Gödel's second incompleteness theorem first appeared as "Theorem XI" in Gödel's 1931 paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I.

As with the first incompleteness theorem, Gödel wrote this theorem in highly technical formal mathematics. It may be paraphrased in English as:
For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, if T proves a statement of its own consistency then T is inconsistent.
This strengthens the first incompleteness theorem, because the statement constructed in the first incompleteness theorem does not directly express the consistency of the theory. The proof of the second incompleteness theorem is obtained by formalizing the proof of the first incompleteness theorem within the theory itself.

A technical subtlety in the second incompleteness theorem is how to express the consistency of T as a formula in the language of T. There are many ways to do this, and not all of them lead to the same result. In particular, different formalizations of the claim that T is consistent may be inequivalent in T, and some may even be provable. For example, first-order Peano arithmetic (PA) can prove that the largest consistent subset of PA is consistent. But since PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA "proves that it is consistent". What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term "largest consistent subset of PA" is technically ambiguous, but what is meant here is the largest consistent initial segment of the axioms of PA ordered according to specific criteria; i.e., by "Gödel numbers", the numbers encoding the axioms as per the scheme used by Gödel mentioned above).

For Peano arithmetic, or any familiar explicitly axiomatized theory T, it is possible to canonically define a formula Con(T) expressing the consistency of T; this formula expresses the property that "there does not exist a natural number coding a sequence of formulas, such that each formula is either one of the axioms of T, a logical axiom, or an immediate consequence of preceding formulas according to the rules of inference of first-order logic, and such that the last formula is a contradiction".
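
One standard way to write such a formula, among the many (and, as noted above, not always equivalent) possibilities, is to take the sentence "0 = 1" as the canonical contradiction and to use the arithmetized provability predicate ProvA described in the next paragraph:

    \[
      \mathrm{Con}(T) \;\equiv\; \lnot\,\mathrm{Prov}_A\bigl(\#(0 = 1)\bigr),
    \]
    % i.e., no natural number codes a derivation from the axioms of T whose
    % final formula is "0 = 1".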

The formalization of Con(T) depends on two factors: formalizing the notion of a sentence being derivable from a set of sentences and formalizing the notion of being an axiom of T. Formalizing derivability can be done in canonical fashion: given an arithmetical formula A(x) defining a set of axioms, one can canonically form a predicate ProvA(P), which expresses that a sentence P is provable from the set of axioms defined by A(x).

In addition, the standard proof of the second incompleteness theorem assumes that ProvA(P) satisfies the Hilbert–Bernays provability conditions. Letting #(P) represent the Gödel number of a formula P, the derivability conditions say:
  1. If T proves P, then T proves ProvA(#(P)).
  2. T proves 1.; that is, T proves that if T proves P, then T proves ProvA(#(P)). In other words, T proves that ProvA(#(P)) implies ProvA(#(ProvA(#(P)))).
  3. T proves that if T proves that (P → Q) and T proves P, then T proves Q. In other words, T proves that ProvA(#(P → Q)) and ProvA(#(P)) imply ProvA(#(Q)).
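
Restated in symbols, with → for implication and the notation above, the three conditions read:

    \begin{align*}
      \text{D1:}\quad & \text{if } T \vdash P, \text{ then } T \vdash \mathrm{Prov}_A(\#(P));\\
      \text{D2:}\quad & T \vdash \mathrm{Prov}_A(\#(P)) \rightarrow \mathrm{Prov}_A\bigl(\#(\mathrm{Prov}_A(\#(P)))\bigr);\\
      \text{D3:}\quad & T \vdash \mathrm{Prov}_A(\#(P \rightarrow Q)) \wedge \mathrm{Prov}_A(\#(P)) \rightarrow \mathrm{Prov}_A(\#(Q)).
    \end{align*}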

Implications for consistency proofs

Gödel's second incompleteness theorem also implies that a theory T1 satisfying the technical conditions outlined above cannot prove the consistency of any theory T2 that proves the consistency of T1. This is because such a theory T1 can prove that if T2 proves the consistency of T1, then T1 is in fact consistent. For the claim that T1 is consistent has the form "for all numbers n, n has the decidable property of not being a code for a proof of contradiction in T1". If T1 were in fact inconsistent, then T2 would prove for some n that n is the code of a contradiction in T1. But if T2 also proved that T1 is consistent (that is, that there is no such n), then it would itself be inconsistent. This reasoning can be formalized in T1 to show that if T2 is consistent, then T1 is consistent. Since, by the second incompleteness theorem, T1 does not prove its consistency, it cannot prove the consistency of T2 either.

This corollary of the second incompleteness theorem shows that there is no hope of proving, for example, the consistency of Peano arithmetic using any finitistic means that can be formalized in a theory the consistency of which is provable in Peano arithmetic. For example, the theory of primitive recursive arithmetic (PRA), which is widely accepted as an accurate formalization of finitistic mathematics, is provably consistent in PA. Thus PRA cannot prove the consistency of PA. This fact is generally seen to imply that Hilbert's program, which aimed to justify the use of "ideal" (infinitistic) mathematical principles in the proofs of "real" (finitistic) mathematical statements by giving a finitistic proof that the ideal principles are consistent, cannot be carried out.

The corollary also indicates the epistemological relevance of the second incompleteness theorem. It would actually provide no interesting information if a theory T proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of T in T would give us no clue as to whether T really is consistent; no doubts about the consistency of T would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a theory T in some theory T’ that is in some sense less doubtful than T itself, for example weaker than T. For many naturally occurring theories T and T’, such as T = Zermelo–Fraenkel set theory and T’ = primitive recursive arithmetic, the consistency of T’ is provable in T, and thus T’ can't prove the consistency of T by the above corollary of the second incompleteness theorem.

The second incompleteness theorem does not rule out consistency proofs altogether, only consistency proofs that could be formalized in the theory that is proved consistent. For example, Gerhard Gentzen proved the consistency of Peano arithmetic (PA) in a different theory that includes an axiom asserting that the ordinal called ε0 is well-founded; see Gentzen's consistency proof. Gentzen's theorem spurred the development of ordinal analysis in proof theory.

Examples of undecidable statements

There are two distinct senses of the word "undecidable" in mathematics and computer science. The first of these is the proof-theoretic sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense, which will not be discussed here, is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. 
Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set (see undecidable problem).
Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. The usage of "independent" is also ambiguous, however. Some authors use it to mean just "not provable", leaving open whether an independent statement might be refuted.

Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point in the philosophy of mathematics.

The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): the continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that the axiom of choice cannot be proved from ZF, and that the continuum hypothesis cannot be proved from ZFC.

In 1973, the Whitehead problem in group theory was shown to be undecidable, in the first sense of the term, in standard set theory.

Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's incompleteness theorem states that for any theory that can represent enough arithmetic, there is an upper bound c such that no specific number can be proven in that theory to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.

Undecidable statements provable in larger systems

These are natural mathematical equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano Arithmetic.

In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of Ramsey's theorem, is undecidable in the first-order axiomatization of arithmetic called Peano arithmetic, but can be proven in the larger system of second-order arithmetic. Kirby and Paris later showed Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, to be undecidable in Peano arithmetic.

Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable on the basis of a philosophy of mathematics called predicativism. The related but more general graph minor theorem (2003) has consequences for computational complexity theory.

Limitations of Gödel's theorems

The conclusions of Gödel's theorems are only proven for the formal theories that satisfy the necessary hypotheses. Not all axiom systems satisfy these hypotheses, even when these systems have models that include the natural numbers as a subset. For example, there are first-order axiomatizations of Euclidean geometry, of real closed fields, and of arithmetic in which multiplication is not provably total; none of these meet the hypotheses of Gödel's theorems. The key fact is that these axiomatizations are not expressive enough to define the set of natural numbers or develop basic properties of the natural numbers. Regarding the third example, Dan Willard (2001) has studied many weak systems of arithmetic which do not satisfy the hypotheses of the second incompleteness theorem, and which are consistent and capable of proving their own consistency (see self-verifying theories).

Gödel's theorems only apply to effectively generated (that is, recursively enumerable) theories. If all true statements about natural numbers are taken as axioms for a theory, then this theory is a consistent, complete extension of Peano arithmetic (called true arithmetic) for which none of Gödel's theorems apply in a meaningful way, because this theory is not recursively enumerable.

The second incompleteness theorem only shows that the consistency of certain theories cannot be proved from the axioms of those theories themselves. It does not show that the consistency cannot be proved from other (consistent) axioms. For example, the consistency of the Peano arithmetic can be proved in Zermelo–Fraenkel set theory (ZFC), or in theories of arithmetic augmented with transfinite induction, as in Gentzen's consistency proof.

Relationship with computability

The incompleteness theorem is closely related to several results about undecidable sets in recursion theory.

Stephen Cole Kleene (1943) presented a proof of Gödel's incompleteness theorem using basic results of computability theory. One such result shows that the halting problem is undecidable: there is no computer program that can correctly determine, given any program P as input, whether P eventually halts when run with a particular given input. Kleene showed that the existence of a complete effective theory of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. This method of proof has also been presented by Shoenfield (1967, p. 132); Charlesworth (1980); and Hopcroft and Ullman (1979).

Franzén (2005, p. 73) explains how Matiyasevich's solution to Hilbert's 10th problem can be used to obtain a proof of Gödel's first incompleteness theorem. Matiyasevich proved that there is no algorithm that, given a multivariate polynomial p(x1, x2,...,xk) with integer coefficients, determines whether there is an integer solution to the equation p = 0. Because polynomials with integer coefficients, and integers themselves, are directly expressible in the language of arithmetic, if a multivariate integer polynomial equation p = 0 does have a solution in the integers then any sufficiently strong theory of arithmetic T will prove this. Moreover, if the theory T is ω-consistent, then it will never prove that a particular polynomial equation has a solution when in fact there is no solution in the integers. Thus, if T were complete and ω-consistent, it would be possible to determine algorithmically whether a polynomial equation has a solution by merely enumerating proofs of T until either "p has a solution" or "p has no solution" is found, in contradiction to Matiyasevich's theorem. Moreover, for each consistent effectively generated theory T, it is possible to effectively generate a multivariate polynomial p over the integers such that the equation p = 0 has no solutions over the integers, but the lack of solutions cannot be proved in T (Davis 2006:416, Jones 1980).
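
The contradiction described above can be pictured as a would-be algorithm. In the sketch below (a thought experiment rather than an implementable program), proofs_of_T is a hypothetical enumerator of all proofs of T and proves is a hypothetical checker of what a given proof establishes; if T were complete and ω-consistent, the loop would terminate on every input polynomial and would therefore decide Hilbert's 10th problem, which Matiyasevich showed to be impossible.

    def decide_diophantine(p, proofs_of_T, proves):
        """Would-be decision procedure for "does p = 0 have an integer solution?",
        assuming a complete, omega-consistent theory T of arithmetic."""
        for proof in proofs_of_T():  # enumerate the proofs of T one by one
            if proves(proof, f"{p} = 0 has an integer solution"):
                return True
            if proves(proof, f"{p} = 0 has no integer solution"):
                return False
        # If T were complete, one of the two statements would eventually be
        # proved, so the loop would always terminate.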

Smorynski (1977, p. 842) shows how the existence of recursively inseparable sets can be used to prove the first incompleteness theorem. This proof is often extended to show that systems such as Peano arithmetic are essentially undecidable (see Kleene 1967, p. 274).

Chaitin's incompleteness theorem gives a different method of producing independent sentences, based on Kolmogorov complexity. Like the proof presented by Kleene that was mentioned above, Chaitin's theorem only applies to theories with the additional property that all their axioms are true in the standard model of the natural numbers. Gödel's incompleteness theorem is distinguished by its applicability to consistent theories that nonetheless include statements that are false in the standard model; these theories are known as ω-inconsistent.

Proof sketch for the first theorem

The proof by contradiction has three essential parts. To begin, choose a formal system that meets the proposed criteria:
  1. Statements in the system can be represented by natural numbers (known as Gödel numbers). The significance of this is that properties of statements (such as their truth or falsehood) become equivalent to properties of their Gödel numbers, and can therefore be demonstrated by examining those Gödel numbers. This part culminates in the construction of a formula expressing the idea that "statement S is provable in the system" (which can be applied to any statement "S" in the system).
  2. In the formal system it is possible to construct a number whose matching statement, when interpreted, is self-referential and essentially says that it (i.e. the statement itself) is unprovable. This is done using a technique called "diagonalization" (so-called because of its origins as Cantor's diagonal argument).
  3. Within the formal system this statement permits a demonstration that it is neither provable nor disprovable in the system, and therefore the system cannot in fact be both ω-consistent and complete. Hence the original assumption that the proposed system met the criteria is false.

Arithmetization of syntax

The main problem in fleshing out the proof described above is that it seems at first that to construct a statement p that is equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could easily give rise to an infinite regress. Gödel's ingenious technique is to show that statements can be matched with numbers (often called the arithmetization of syntax) in such a way that "proving a statement" can be replaced with "testing whether a number has a given property". This allows a self-referential formula to be constructed in a way that avoids any infinite regress of definitions. The same technique was later used by Alan Turing in his work on the Entscheidungsproblem.

In simple terms, a method can be devised so that every formula or statement that can be formulated in the system gets a unique number, called its Gödel number, in such a way that it is possible to mechanically convert back and forth between formulas and Gödel numbers. The numbers involved might be very long indeed (in terms of number of digits), but this is not a barrier; all that matters is that such numbers can be constructed. A simple example is the way in which English is stored as a sequence of numbers in computers using ASCII or Unicode:
  • The word HELLO is represented by 72-69-76-76-79 using decimal ASCII, i.e., the number 7269767679.
  • The logical statement x=y => y=x is represented by 120-061-121-032-061-062-032-121-061-120 using decimal ASCII codes padded to three digits, i.e., the number 120061121032061062032121061120.
In principle, proving a statement true or false can be shown to be equivalent to proving that the number matching the statement does or doesn't have a given property. Because the formal system is strong enough to support reasoning about numbers in general, it can support reasoning about numbers which represent formulae and statements as well. Crucially, because the system can support reasoning about properties of numbers, the results are equivalent to reasoning about provability of their equivalent statements.
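
A minimal sketch of such a numbering scheme, following the second example above (each character's decimal ASCII code is zero-padded to three digits so that decoding is unambiguous; the HELLO example simply omits the leading zeros):

    def godel_number(statement: str) -> int:
        """Concatenate the 3-digit decimal ASCII codes of the characters."""
        return int("".join(f"{ord(ch):03d}" for ch in statement))

    def decode(number: int) -> str:
        """Recover the statement from its number."""
        digits = str(number)
        if len(digits) % 3:  # restore any leading zeros dropped by int()
            digits = digits.zfill(len(digits) + 3 - len(digits) % 3)
        return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

    n = godel_number("x=y => y=x")
    print(n)          # 120061121032061062032121061120, as in the example above
    print(decode(n))  # x=y => y=x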

Construction of a statement about "provability"

Having shown that in principle the system can indirectly make statements about provability by analyzing properties of the numbers representing statements, it is now possible to show how to create a statement that actually does this.

A formula F(x) that contains exactly one free variable x is called a statement form or class-sign. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, F(n) is true if and only if it can be proven (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as "2×3=6".

Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form F(x) can be assigned a Gödel number denoted by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F).

Now comes the trick: The notion of provability itself can also be encoded by Gödel numbers, in the following way. Since a proof is a list of statements which obey certain rules, the Gödel number of a proof can be defined. Now, for every statement p, one may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the potential Gödel number of its proof, is an arithmetical relation between two numbers. Therefore there is a statement form Bew(y) that uses this arithmetical relation to state that a Gödel number of a proof of y exists:
Bew(y) = ∃ x ( y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y).
The name Bew is short for beweisbar, the German word for "provable"; this name was originally used by Gödel to denote the provability formula just described. Note that "Bew(y)" is merely an abbreviation that represents a particular, very long, formula in the original language of T; the string "Bew" itself is not claimed to be part of this language.

An important feature of the formula Bew(y) is that if a statement p is provable in the system then Bew(G(p)) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G(p)) to be satisfied.
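
In computational terms, Bew describes a search for a witness. The sketch below (where check_proof stands in for a hypothetical verifier for the formal system at hand) halts and returns the Gödel number of a proof exactly when the statement is provable, which is the sense in which a proof of p guarantees that Bew(G(p)) is satisfied.

    from itertools import count

    def bew(statement, check_proof):
        """Semi-decide provability: search through the natural numbers for one
        that codes a proof of `statement`.  Returns the witness if the statement
        is provable; otherwise the search runs forever."""
        for candidate in count():
            if check_proof(candidate, statement):
                return candidate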

Diagonalization

The next step in the proof is to obtain a statement that says it is unprovable. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves
p ↔ F(G(p)).
By letting F be the negation of Bew(x), we obtain the theorem
p ↔ ~Bew(G(p))
and the p defined by this roughly states that its own Gödel number is the Gödel number of an unprovable formula.

The statement p is not literally equal to ~Bew(G(p)); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself.

This is similar to the following sentence in English:
", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable.
This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence asserts its own unprovability. The proof of the diagonal lemma employs a similar method.
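
The same "apply a phrase to its own quotation" device is familiar from programs that print their own source code. The two-line Python program below is one such example: the string s, applied to a quoted copy of itself, reproduces the entire program, just as the sentence above reproduces itself when the stated transformation is made. No line of the program refers to the program "from the outside", which is exactly the point of the diagonal construction.

    s = 's = %r\nprint(s %% s)'
    print(s % s)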

Now, assume that the axiomatic system is ω-consistent, and let p be the statement obtained in the previous section.

If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable.

If the negation of p were provable, then Bew(G(p)) would be provable (because p was constructed to be equivalent to the negation of Bew(G(p))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable.

Thus the statement p is undecidable in our axiomatic system: it can neither be proved nor disproved within the system.

In fact, to show that p is not provable only requires the assumption that the system is consistent. The stronger assumption of ω-consistency is required to show that the negation of p is not provable. Thus, if p is constructed for a particular system:
  • If the system is ω-consistent, it can prove neither p nor its negation, and so p is undecidable.
  • If the system is consistent, it may have the same situation, or it may prove the negation of p. In the latter case, we have a statement ("not p") which is false but provable, and the system is not ω-consistent.
If one tries to "add the missing axioms" to avoid the incompleteness of the system, then one has to add either p or "not p" as axioms. But then the definition of "being a Gödel number of a proof" of a statement changes, which means that the formula Bew(x) is now different. Thus when we apply the diagonal lemma to this new Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent.

Proof via Berry's paradox

George Boolos (1989) sketches an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula. A similar proof method was independently discovered by Saul Kripke (Boolos 1998, p. 383). Boolos's proof proceeds by constructing, for any computably enumerable set S of true sentences of arithmetic, another sentence which is true but not contained in S. This gives the first incompleteness theorem as a corollary. According to Boolos, this proof is interesting because it provides a "different sort of reason" for the incompleteness of effective, consistent theories of arithmetic (Boolos 1998, p. 388).

Formalized proofs

Formalized proofs of versions of the incompleteness theorem have been developed by Natarajan Shankar in 1986 using Nqthm (Shankar 1994) and by Russell O'Connor in 2003 using Coq (O'Connor 2005).

Proof sketch for the second theorem

The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within the system using a formal predicate for provability. Once this is done, the second incompleteness theorem follows by formalizing the entire proof of the first incompleteness theorem within the system itself.

Let p stand for the undecidable sentence constructed above, and assume that the consistency of the system can be proven from within the system itself. The demonstration above shows that if the system is consistent, then p is not provable. The proof of this implication can be formalized within the system, and therefore the statement "p is not provable", or "~Bew(G(p))", can be proven in the system.

But this last statement is equivalent to p itself (and this equivalence can be proven in the system), so p can be proven in the system. This contradiction shows that the system must be inconsistent.
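
Schematically, writing T for the system, Con(T) for the formalized consistency statement, and using the Bew notation from the first proof sketch, the argument of the last two paragraphs can be compressed as follows (a restatement of the reasoning above, not a formal derivation):

    \begin{align*}
      &\text{(i)}\ \ T \vdash \mathrm{Con}(T) \rightarrow \lnot\,\mathrm{Bew}(G(p))
        && \text{(the first theorem's argument, formalized inside } T\text{)}\\
      &\text{(ii)}\ \ T \vdash p \leftrightarrow \lnot\,\mathrm{Bew}(G(p))
        && \text{(the construction of } p\text{)}\\
      &\text{(iii)}\ \ \text{if } T \vdash \mathrm{Con}(T), \text{ then } T \vdash p,
        && \text{contradicting the unprovability of } p.
    \end{align*}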

Discussion and implications

The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles. One can paraphrase the first theorem as saying the following:
An all-encompassing axiomatic system can never be found that is able to prove all mathematical truths, but no falsehoods.
On the other hand, from a strict formalist perspective this paraphrase would be considered meaningless because it presupposes that mathematical "truth" and "falsehood" are well-defined in an absolute sense, rather than relative to each formal system.

The following rephrasing of the second theorem is even more unsettling to the foundations of mathematics:
If an axiomatic system can be proven to be consistent from within itself, then it is inconsistent.
Therefore, to establish the consistency of a system S, one needs to use some other system T, but a proof in T is not completely convincing unless T's consistency has already been established without using S.

Theories such as Peano arithmetic, for which any computably enumerable consistent extension is incomplete, are called essentially undecidable or essentially incomplete.

Minds and machines

Authors including the philosopher J. R. Lucas and physicist Roger Penrose have debated what, if anything, Gödel's incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church–Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it.
Hilary Putnam (1960) suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. Assuming that it is consistent, either its consistency cannot be proved or it cannot be represented by a Turing machine.

Avi Wigderson (2010) has proposed that the concept of mathematical "knowability" should be based on computational complexity rather than logical decidability. He writes that "when knowability is interpreted by modern standards, namely via computational complexity, the Gödel phenomena are very much with us."

Paraconsistent logic

Although Gödel's theorems are usually studied in the context of classical logic, they also have a role in the study of paraconsistent logic and of inherently contradictory statements (dialetheia). Graham Priest (1984, 2006) argues that replacing the notion of formal proof in Gödel's theorem with the usual notion of informal proof can be used to show that naive mathematics is inconsistent, and uses this as evidence for dialetheism. The cause of this inconsistency is the inclusion of a truth predicate for a theory within the language of the theory (Priest 2006:47). Stewart Shapiro (2002) gives a more mixed appraisal of the applications of Gödel's theorems to dialetheism.

Appeals to the incompleteness theorems in other fields

Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations, including Torkel Franzén (2005); Alan Sokal and Jean Bricmont (1999); and Ophelia Benson and Jeremy Stangroom (2006). Bricmont and Stangroom (2006, p. 10), for example, quote from Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. Sokal and Bricmont (1999, p. 187) criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical (ibid.).

Role of self-reference

Torkel Franzén (2005, p. 46) observes:
Gödel's proof of the first incompleteness theorem and Rosser's strengthened version have given many the impression that the theorem can only be proved by constructing self-referential statements [...] or even that only strange self-referential statements are known to be undecidable in elementary arithmetic. To counteract such impressions, we need only introduce a different kind of proof of the first incompleteness theorem.
He then proposes the proofs based on computability, or on information theory, as described earlier in this article, as examples of proofs that should "counteract such impressions".

History

After Gödel published his proof of the completeness theorem as his doctoral thesis in 1929, he turned to a second problem for his habilitation. His original goal was to obtain a positive solution to Hilbert's second problem (Dawson 1997, p. 63). At the time, theories of the natural numbers and real numbers similar to second-order arithmetic were known as "analysis", while theories of the natural numbers alone were known as "arithmetic".

Gödel was not the only person working on the consistency problem. Ackermann had published a flawed consistency proof for analysis in 1925, in which he attempted to use the method of ε-substitution originally developed by Hilbert. Later that year, von Neumann was able to correct the proof for a theory of arithmetic without any axioms of induction. By 1928, Ackermann had communicated a modified proof to Bernays; this modified proof led Hilbert to announce his belief in 1929 that the consistency of arithmetic had been demonstrated and that a consistency proof of analysis would likely soon follow. After the publication of the incompleteness theorems showed that Ackermann's modified proof must be erroneous, von Neumann produced a concrete example showing that its main technique was unsound (Zach 2006, p. 418, Zach 2003, p. 33).

In the course of his research, Gödel discovered that although a sentence which asserts its own falsehood leads to paradox, a sentence that asserts its own non-provability does not. In particular, Gödel was aware of the result now called Tarski's undefinability theorem, although he never published it. Gödel announced his first incompleteness theorem to Carnap, Feigl and Waismann on August 26, 1930; all four would attend a key conference in Königsberg the following week.

Announcement

The 1930 Königsberg conference was a joint meeting of three academic societies, with many of the key logicians of the time in attendance. Carnap, Heyting, and von Neumann delivered one-hour addresses on the mathematical philosophies of logicism, intuitionism, and formalism, respectively (Dawson 1996, p. 69). The conference also included Hilbert's retirement address, as he was leaving his position at the University of Göttingen. Hilbert used the speech to argue his belief that all mathematical problems can be solved. He ended his address by saying,
For the mathematician there is no Ignorabimus, and, in my opinion, not at all for natural science either. ... The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem. In contrast to the foolish Ignorabimus, our credo avers: We must know. We shall know!
This speech quickly became known as a summary of Hilbert's beliefs on mathematics (its final six words, "Wir müssen wissen. Wir werden wissen!", were used as Hilbert's epitaph in 1943). Although Gödel was likely in attendance for Hilbert's address, the two never met face to face (Dawson 1996, p. 72).

Gödel announced his first incompleteness theorem at a roundtable discussion session on the third day of the conference. The announcement drew little attention apart from that of von Neumann, who pulled Gödel aside for conversation. Later that year, working independently with knowledge of the first incompleteness theorem, von Neumann obtained a proof of the second incompleteness theorem, which he announced to Gödel in a letter dated November 20, 1930 (Dawson 1996, p. 70). Gödel had independently obtained the second incompleteness theorem and included it in his submitted manuscript, which was received by Monatshefte für Mathematik on November 17, 1930.

Gödel's paper was published in the Monatshefte in 1931 under the title Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I (On Formally Undecidable Propositions of Principia Mathematica and Related Systems I). As the title implies, Gödel originally planned to publish a second part of the paper; it was never written.

Generalization and acceptance

Gödel gave a series of lectures on his theorems at Princeton in 1933–1934 to an audience that included Church, Kleene, and Rosser. By this time, Gödel had grasped that the key property his theorems required is that the theory must be effective (at the time, the term "general recursive" was used). Rosser proved in 1936 that the hypothesis of ω-consistency, which was an integral part of Gödel's original proof, could be replaced by simple consistency, if the Gödel sentence was changed in an appropriate way. These developments left the incompleteness theorems in essentially their modern form.

Gentzen published his consistency proof for first-order arithmetic in 1936. Hilbert accepted this proof as "finitary" although (as Gödel's theorem had already shown) it cannot be formalized within the system of arithmetic that is being proved consistent.

The impact of the incompleteness theorems on Hilbert's program was quickly realized. Bernays included a full proof of the incompleteness theorems in the second volume of Grundlagen der Mathematik (1939), along with additional results of Ackermann on the ε-substitution method and Gentzen's consistency proof of arithmetic. This was the first full published proof of the second incompleteness theorem.

Criticisms

Finsler

Paul Finsler (1926) used a version of Richard's paradox to construct an expression that was false but unprovable in a particular, informal framework he had developed. Gödel was unaware of this paper when he proved the incompleteness theorems (Collected Works Vol. IV., p. 9). Finsler wrote to Gödel in 1931 to inform him about this paper, which Finsler felt had priority for an incompleteness theorem. Finsler's methods did not rely on formalized provability, and had only a superficial resemblance to Gödel's work (van Heijenoort 1967:328). Gödel read the paper but found it deeply flawed, and his response to Finsler laid out concerns about the lack of formalization (Dawson:89). Finsler continued to argue for his philosophy of mathematics, which eschewed formalization, for the remainder of his career.

Zermelo

In September 1931, Ernst Zermelo wrote Gödel to announce what he described as an "essential gap" in Gödel's argument (Dawson:76). In October, Gödel replied with a 10-page letter (Dawson:76, Grattan-Guinness:512-513). But Zermelo did not relent and published his criticisms in print with "a rather scathing paragraph on his young competitor" (Grattan-Guinness:513). Gödel decided that to pursue the matter further was pointless, and Carnap agreed (Dawson:77). Much of Zermelo's subsequent work was related to logics stronger than first-order logic, with which he hoped to show both the consistency and categoricity of mathematical theories.

Wittgenstein

Ludwig Wittgenstein wrote several passages about the incompleteness theorems that were published posthumously in his Remarks on the Foundations of Mathematics (1956). Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking. Writings in Gödel's Nachlass express the belief that Wittgenstein deliberately misread his ideas.

Multiple commentators have read Wittgenstein as misunderstanding Gödel (Rodych 2003), although Juliet Floyd and Hilary Putnam (2000), as well as Graham Priest (2004), have provided textual readings arguing that most commentary misunderstands Wittgenstein. On their publication, Bernays, Dummett, and Kreisel wrote separate reviews of Wittgenstein's remarks, all of which were extremely negative (Berto 2009:208). The unanimity of this criticism caused Wittgenstein's remarks on the incompleteness theorems to have little impact on the logic community. In 1972, Gödel stated: "Has Wittgenstein lost his mind? Does he mean it seriously?" (Wang 1996:197) He also wrote to Karl Menger that Wittgenstein's comments demonstrated a willful misunderstanding of the incompleteness theorems, writing:
"It is clear from the passages you cite that Wittgenstein did "not" understand [the first incompleteness theorem] (or pretended not to understand it). He interpreted it as a kind of logical paradox, while in fact is just the opposite, namely a mathematical theorem within an absolutely uncontroversial part of mathematics (finitary number theory or combinatorics)." (Wang 1996:197)
Since the publication of Wittgenstein's Nachlass in 2000, a series of papers in philosophy have sought to evaluate whether the original criticism of Wittgenstein's remarks was justified. Floyd and Putnam (2000) argue that Wittgenstein had a more complete understanding of the incompleteness theorem than was previously assumed. They are particularly concerned with the interpretation of a Gödel sentence for an ω-inconsistent theory as actually saying "I am not provable", since the theory has no models in which the provability predicate corresponds to actual provability. Rodych (2003) argues that their interpretation of Wittgenstein is not historically justified, while Bays (2004) argues against Floyd and Putnam's philosophical analysis of the provability predicate. Berto (2009) explores the relationship between Wittgenstein's writing and theories of paraconsistent logic.

Martin Gardner


From Wikipedia, the free encyclopedia

Born: October 21, 1914, Tulsa, Oklahoma, USA
Died: May 22, 2010 (aged 95), Norman, Oklahoma, USA
Pen names: George Groth, Armand T. Ringer, Uriah Fuller
Occupation: Author
Nationality: United States
Alma mater: University of Chicago
Period: 1930–2010
Genres: Recreational mathematics, puzzles, stage magic, debunking
Literary movement: Scientific skepticism
Notable works: Fads and Fallacies in the Name of Science; Mathematical Games (Scientific American column); The Annotated Alice; The Ambidextrous Universe
Notable awards: Leroy P. Steele Prize for Mathematical Exposition (1987)[1]; George Pólya Award (1999)[2][3]
Martin Gardner (October 21, 1914 – May 22, 2010)[4] was an American popular mathematics and popular science writer, with interests also encompassing micromagic, scientific skepticism, philosophy, religion, and literature—especially the writings of Lewis Carroll and G.K. Chesterton.[5]
Gardner was best known for creating and sustaining general interest in recreational mathematics for a large part of the 20th century, principally through his Scientific American "Mathematical Games" columns from 1956 to 1981 and subsequent books collecting them. He was an uncompromising critic of fringe science and was a founding member of CSICOP, an organization devoted to debunking pseudoscience, and wrote a monthly column ("Notes of a Fringe Watcher") from 1983 to 2002 in Skeptical Inquirer, that organization's monthly magazine. He also wrote a "Puzzle Tale" column for Asimov's Science Fiction magazine from 1977 to 1986 and altogether published more than 100 books.[6]

Biography


Gardner as a high school senior, 1932.

Youth and education

Gardner, son of a petroleum geologist, grew up in and around Tulsa, Oklahoma. He showed an early interest in puzzles and games and his closest childhood friend, John Bennett Shaw, later became "the greatest of all collectors of Sherlockian memorabilia".[7] He attended the University of Chicago where he earned his bachelor's degree in philosophy in 1936. Early jobs included reporter on the Tulsa Tribune, writer at the University of Chicago Office of Press Relations, and case worker in Chicago's Black Belt for the city's Relief Administration. During World War II, he served for four years in the U.S. Navy as a yeoman on board the destroyer escort USS Pope in the Atlantic. His ship was still in the Atlantic when the war came to an end with the surrender of Japan in August 1945.
After the war, Gardner returned to the University of Chicago.[8] He attended graduate school for a year there, but he did not earn an advanced degree.[1] In 1950 he published an article in the Antioch Review entitled "The Hermit Scientist", a pioneering work on what would later come to be called pseudoscientists.[9] It was Gardner's first publication of a skeptical nature and two years later it was published in a much expanded book version: In the Name of Science, his first book.

Early career

In the late 1940s, Gardner moved to New York City and became a writer and designer at Humpty Dumpty magazine where for eight years he wrote features and stories for it and several other children's magazines.[10] His paper-folding puzzles at that magazine (sister publication to Children's Digest at the time, and now sister publication to Jack and Jill magazine) led to his first work at Scientific American.[11] For many decades, Gardner, his wife Charlotte, and their two sons lived in Hastings-on-Hudson, New York, where he earned his living as an independent author, publishing books with several different publishers, and also publishing hundreds of magazine and newspaper articles. Appropriately enough – given his interest in logic and mathematics – they lived on Euclid Avenue. The year 1960 saw the original edition of his best-selling book ever, The Annotated Alice, various editions of which have sold over a million copies worldwide in several languages.

Gatherings for Gardner

Gardner was famously shy and declined many honors when he learned that a public appearance would be required if he accepted.[12] (He once told Colm Mulcahy that he had never given a lecture in his life and that he wouldn't know how to.) However, in 1993 Atlanta puzzle collector Tom Rodgers persuaded Gardner to attend an evening devoted to Gardner's puzzle-solving efforts, called the "Gathering for Gardner". The event was repeated in 1996, again with Gardner in attendance, which convinced Rodgers and his friends to make it a regular event. It has been held in even-numbered years near Atlanta ever since, with a program covering any topic Gardner touched on during his writing career. The event's name is abbreviated to "G4Gn", with n replaced by the number of the event (the 2010 event was thus G4G9).

Retirement and death

In 1979, Gardner and his wife Charlotte semi-retired and moved to Hendersonville, North Carolina. Gardner never really retired as an author; he continued to research and write, especially updating many of his older books, such as Origami, Eleusis, and the Soma Cube (2008, ISBN 978-0-521-73524-7). Charlotte died in 2000, and two years later Gardner returned to Norman, Oklahoma, where his son, James Gardner, was a professor of education at the University of Oklahoma.[1] He died there on May 22, 2010.[4] His autobiography, Undiluted Hocus-Pocus: The Autobiography of Martin Gardner, was published posthumously.[13]

Views and interests

I just play all the time and am fortunate enough to get paid for it.
– Martin Gardner, 1998

Recreational mathematics and Mathematical Games

For over a quarter century Gardner wrote a monthly column on the subject of "recreational mathematics" for Scientific American. It all began with his free-standing article on hexaflexagons, which ran in the December 1956 issue.[14] Flexagons became a bit of a fad, and soon people all over New York City were making them. Gerry Piel, the SA publisher at the time, asked Gardner, “Is there enough similar material to this to make a regular feature?” Gardner said he thought so. The January 1957 issue contained his first column, entitled "Mathematical Games".[15] Almost 300 more columns were to follow.[1]
The "Mathematical Games" column ran from 1956 to 1981 and was the first introduction of many subjects to a wider audience, notably:

Ironically, Gardner had problems learning calculus and never took a mathematics course after high school. While editing Humpty Dumpty's Magazine he constructed many paper-folding puzzles, and this led to his interest in the flexagons invented by British mathematician Arthur H. Stone. The subsequent article he wrote on hexaflexagons led directly to the column.

On Gardner's retirement from Scientific American in 1981, the column was replaced by Douglas Hofstadter's "Metamagical Themas", a name that is an anagram of "Mathematical Games". In the 1980s the Mathematical Games title reappeared only irregularly, with other authors sharing the column, and the June 1986 issue saw the final installment under that title.
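As an aside, the anagram claim is easy to check mechanically. Here is a minimal sketch in Python (the language choice is mine, not the source's) that compares the letter counts of the two titles, ignoring case and spaces:

from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    # Count only the letters, ignoring case, spaces, and punctuation.
    letters = lambda s: Counter(ch for ch in s.lower() if ch.isalpha())
    return letters(a) == letters(b)

print(is_anagram("Mathematical Games", "Metamagical Themas"))  # prints True

Running it prints True: the two titles are built from exactly the same seventeen letters.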

Many of the games columns were collected in book form starting in 1959 with The Scientific American Book of Mathematical Puzzles & Diversions. Over the next four decades fourteen more books followed. Donald Knuth called them the canonical books.

Pseudoscience and skepticism

Gardner's uncompromising attitude toward pseudoscience made him one of the foremost anti-pseudoscience polemicists of the 20th century.[16] His book Fads and Fallacies in the Name of Science (1952, revised 1957) is a classic and seminal work of the skeptical movement. It explored myriad dubious outlooks and projects including Fletcherism, creationism, food faddism, Charles Fort, Rudolf Steiner, Scientology, Dianetics, UFOs, dowsing, extra-sensory perception, the Bates method, and psychokinesis. This book and his subsequent efforts (Science: Good, Bad and Bogus, 1981; Order and Surprise, 1983; Gardner's Whys & Wherefores, 1989; etc.) earned him a wealth of detractors and antagonists in the fields of "fringe science" and New Age philosophy, with many of whom he kept up running dialogs (both public and private) for decades.

In 1976 Gardner joined with Carl Sagan, Isaac Asimov and others in founding the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP). He wrote a column called "Notes of a Fringe Watcher"[17] (originally "Notes of a Psi-Watcher") from 1983 to 2002 for that organization's periodical Skeptical Inquirer. These columns have been collected in five books: New Age: Notes of a Fringe Watcher (1988), On the Wild Side (1992), Weird Water and Fuzzy Logic (1996), Did Adam and Eve Have Navels? (2000), and Are Universes Thicker than Blackberries? (2003). Gardner was a senior CSICOP fellow and a prominent skeptic of the paranormal.

On August 21, 2010, Gardner was posthumously honored with an award recognizing his contributions in the skeptical field, from the Independent Investigations Group during its 10th Anniversary Gala.[18]

Religion and philosophy

Gardner had an abiding fascination with religious belief. He was a fideistic theist, professing belief in one God as Creator, but critical of organized religion. He has been quoted as saying that he regards parapsychology and other research into the paranormal as tantamount to "tempting God" and seeking "signs and wonders". He stated that while he would expect tests on the efficacy of prayers to be negative, he would not rule out a priori the possibility that as yet unknown paranormal forces may allow prayers to influence the physical world.[19]
I am a philosophical theist. I believe in a personal God, and I believe in an afterlife, and I believe in prayer, but I don’t believe in any established religion. This is called philosophical theism.... Philosophical theism is entirely emotional. As Kant said, he destroyed pure reason to make room for faith.[20]
– Martin Gardner, 2008

In his autobiography, edited by Persi Diaconis and James Randi, Gardner stated: "When many of my fans discovered that I believed in God and even hoped for an afterlife, they were shocked and dismayed... I do not mean the God of the Bible, especially the God of the Old Testament, or any other book that claims to be divinely inspired. For me God is a "Wholly Other" transcendent intelligence, impossible for us to understand. He or she is somehow responsible for our universe and capable of providing, how I have no inkling, an afterlife."[21]

Gardner wrote repeatedly about what public figures such as Robert Maynard Hutchins, Mortimer Adler, and William F. Buckley, Jr. believed and whether their beliefs were logically consistent. In some cases, he attacked prominent religious figures such as Mary Baker Eddy on the grounds that their claims were unsupportable. His semi-autobiographical novel The Flight of Peter Fromm depicts a traditionally Protestant Christian man struggling with his faith, examining 20th-century scholarship and intellectual movements and ultimately rejecting Christianity while remaining a theist. Gardner described his own belief as philosophical theism inspired by the theology of the philosopher Miguel de Unamuno. While eschewing systematic religious doctrine, he believed in God, asserting that this belief cannot be confirmed or disconfirmed by reason or science. At the same time, he was skeptical of claims that any god has communicated with human beings through spoken or telepathic revelation or through miracles in the natural world.

Gardner's religious philosophy may be summarized as follows: There is nothing supernatural, and nothing in human reason or visible in the world to compel people to believe in any gods.[citation needed] The mystery of existence is enchanting, but a belief in "The Old One" comes from faith without evidence. However, with faith and prayer people can find greater happiness than without. If there is an afterlife, the loving "Old One" is probably real. "[To an atheist] the universe is the most exquisite masterpiece ever constructed by nobody", from G. K. Chesterton, was one of Gardner's favorite quotes.[19]

Gardner said that he suspected that the fundamental nature of human consciousness may not be knowable or discoverable, unless perhaps a physics more profound than ("underlying") quantum mechanics is some day developed. In this regard, he said, he was an adherent of the "New Mysterianism".

Literary criticism and fiction

Gardner was considered a leading authority on Lewis Carroll. His annotated version of Alice's Adventures in Wonderland and Through the Looking Glass, explaining the many mathematical riddles, wordplay, and literary references found in the Alice books, was first published as The Annotated Alice (Clarkson Potter, 1960), followed by a sequel with new annotations, More Annotated Alice (Random House, 1990), and finally by The Annotated Alice: The Definitive Edition (Norton, 1999), which combined notes from the earlier editions with new material. The book arose when Gardner, who had found the Alice books 'sort of frightening' when he was young but fascinating as an adult,[22] felt that someone ought to annotate them and suggested to a publisher that Bertrand Russell be asked; when the publisher could not get past Russell's secretary, Gardner was asked to take on the project himself. The book has been Gardner's most successful, selling over half a million copies.[23]

Gardner's interest in wordplay led him to conceive of a magazine on recreational linguistics. In 1967 he pitched the idea to Greenwood Periodicals and nominated Dmitri Borgmann as editor.[13][24] The resulting journal, Word Ways, carried many articles from Gardner; as of 2013 it was still publishing his submissions posthumously.

In addition to the 'Alice' books, Gardner produced “Annotated” editions of G. K. Chesterton’s The Innocence Of Father Brown and The Man Who Was Thursday as well as of celebrated poems including The Rime of the Ancient Mariner, Casey at the Bat, The Night Before Christmas, and The Hunting of the Snark; the last also written by Lewis Carroll.

Gardner occasionally tried his hand at fiction, always of a kind closely associated with his non-fictional preoccupations. His roman à clef novel was The Flight of Peter Fromm (1973), and his short stories were collected in The No-Sided Professor and Other Tales of Fantasy, Humor, Mystery, and Philosophy (1987). He also published stories about an imaginary numerologist named Dr. Matrix, as well as Visitors from Oz (1998), a novel based on L. Frank Baum's Oz books that reflected his love of Oz. (He was a founding member of the International Wizard of Oz Club and winner of its 1971 L. Frank Baum Memorial Award.) Gardner was a member of the all-male literary banqueting club the Trap Door Spiders, which served as the basis for Isaac Asimov's fictional group of mystery solvers, the Black Widowers.

John Horgan (American journalist)



From Wikipedia, the free encyclopedia

John Horgan
Born: 1953[1]
Other names: "Chip" Horgan, "The Horganism"
Education: Columbia University School of Journalism (1983)
Occupation: Science writer, author
Notable credits: Author of The End of Science; has written for many publications, including National Geographic, The New York Times, Time and Newsweek; frequent guest on BloggingHeads.tv; blogger for Scientific American.

John Horgan is an American science journalist best known for his 1996 book The End of Science. He has written for many publications, including National Geographic, Scientific American, The New York Times, Time, Newsweek, and IEEE Spectrum. His awards include two Science Journalism Awards from the American Association for the Advancement of Science and the National Association of Science Writers Science-in-Society Award. His articles have been included in the 2005, 2006 and 2007 editions of The Best American Science and Nature Writing. Since 2010 he has written the "Cross-check" blog for ScientificAmerican.com.[2]

Horgan graduated from the Columbia University School of Journalism in 1983. Between 1986 and 1997 he was a senior writer at Scientific American.[2]

1990s assertions

His October 1993 Scientific American article, "The Death of Proof", claimed that the growing complexity of mathematics, combined with "computer proofs" and other developments, was undermining traditional concepts of mathematical proof. The article generated "torrents of howls and complaints" from mathematicians, according to David Hoffman (one of the mathematicians Horgan interviewed for the article).[3]

Horgan's 1996 book The End of Science begins where "The Death of Proof" leaves off: in it, Horgan argues that pure science, defined as "the primordial human quest to understand the universe and our place in it," may be coming to an end. Horgan claims that science will not achieve insights into nature as profound as evolution by natural selection, the double helix, the big bang, relativity theory or quantum mechanics. In the future, he suggests, scientists will refine, extend and apply this pre-existing knowledge but will not achieve any more great "revolutions or revelations."

Nobel laureate Phil Anderson wrote in 1999: "The reason that Horgan's pessimism is so wrong lies in the nature of science itself. Whenever a question receives an answer, science moves on and asks a new kind of question, of which there seem to be an endless supply."[4] A front-page review in the New York Times called the book "intellectually bracing, sweepingly reported, often brilliant and sometimes bullying."[5]

Later work

In 1999 Horgan followed up The End of Science with The Undiscovered Mind: How the Human Brain Defies Replication, Medication and Explanation, which critiques neuroscience, psychoanalysis, psychopharmacology, evolutionary psychology, behavioral genetics, artificial intelligence and other mind-related fields. For his 2003 book Rational Mysticism,[6] he profiled a number of scientists, mystics, and religious thinkers who have delved into the interface of science, religion and mysticism. He presents his personal impressions of these individuals and a sometimes controversial analysis of their contributions to rational mysticism and the relationship between religion and science. His 2012 book The End of War presents scientific arguments against the widespread belief that war is inevitable.

In 2005, Horgan became the Director of the Center for Science Writings (CSW) at Stevens Institute of Technology, in Hoboken, NJ, where he also teaches science journalism, history of science and other courses. The CSW sponsors lectures by leading science communicators, including geographer Jared Diamond of UCLA, financier/philosopher Nassim Taleb, psychologist Steven Pinker of Harvard, neurologist Oliver Sacks, philosopher Peter Singer of Princeton, economist Jeffrey Sachs of Columbia, and biologist Edward O. Wilson of Harvard.

Media appearances


Horgan and George Johnson on a "Science Faction" episode of Bloggingheads.tv.

Horgan has appeared on the Charlie Rose show, the Lehrer News Hour and many other media outlets in the U.S. and Europe. Currently he is a frequent host (usually with science writer George Johnson) of "Science Faction", a monthly discussion related to science topics on the website BloggingHeads.tv.

Questions and answers with Marcelo Gleiser

A theoretical physicist argues that “science is a permanent work in progress” and that it is impossible to arrive at a final theory.
 
Marcelo Gleiser is the Appleton Professor of Natural Philosophy and a professor of physics and astronomy at Dartmouth College in New Hampshire. He obtained his PhD from King's College London in 1986 and has been a postdoctoral fellow at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, and at Fermilab.
 
Original link:  http://scitation.aip.org/content/aip/magazine/physicstoday/news/10.1063/PT.5.3020
 


A theoretical physicist, Gleiser focuses his research on cosmology, field theory, complexity, and the origin of life. His contributions include advancing research on cosmological phase transitions and nonequilibrium field theory; he also codiscovered oscillons, a metastable state found in such physical systems as granular media and in many field-theory models of fundamental interactions and condensed-matter physics.

Gleiser received the 1994 NSF Presidential Faculty Fellows Award (the awards are chosen by the White House) in recognition of his teaching and research in theoretical physics. Believing that “science contributes in essential ways to our culture and worldview,” Gleiser participates frequently in TV documentaries and radio programs, primarily in the US and in his birth country of Brazil. He cofounded the science and culture blog 13.7, which is hosted on NPR’s website. He has authored several popularizations; The Island of Knowledge: The Limits of Science and the Search for Meaning (Basic Books, 2014) is his fourth in English. It was reviewed in this month’s issue.
Physics Today recently caught up with Gleiser to discuss The Island of Knowledge.

PT: Past philosophers have advanced the thesis that a Theory of Everything should not be the end goal of science. What motivated you to revisit that debate, and what does your book contribute that is not covered in previous works?

Gleiser: [My thesis] is not that a Theory of Everything shouldn't be the end goal of science. That impossibility is a consequence of the book's main thesis, which is to clarify how science works: as a series of better approximations to what we call “reality,” itself an evolving concept as we learn more about the world.

I'm interested in understanding how we make sense of physical reality through the use of tools and ideas—not a very common theme in science popularizations, which tend to focus mostly on the science. I developed the central metaphor of the book—the island of knowledge—independently and, I believe, quite beyond a few other thinkers that had similar ideas. As we learn more about the world, we become equipped to ask questions and build analogies we couldn't have considered before: for example, astronomy before and after the telescope, or nonlinear dynamics before and after computers. Science is a permanent work in progress.

PT: How do you respond to the reviewer’s conclusion that “only those who think that profound discoveries are possible will be motivated to try to make them”?

Gleiser: I understand where he is coming from, but I am not sure I agree. Many physicists make profound discoveries without being motivated to make them. Not everyone aspires to be Einstein, Newton, or Bohr. There are countless examples in the history of science of profound discoveries being made without this grand sense of purpose. Besides, profound discoveries are made in many fields, including those that do not ask fundamental questions about the structure of space, time, and matter. It is wrong to assume that one is being defeatist by not believing in unification or final answers. How awful would it be if one day we arrived at the end of fundamental physics? As I explain in the book, I'm glad that the very nature of science prevents us from getting there.

PT: In terms of attracting science funding, what do you see as the advantages of the final theories approach versus the iterative approach?

Gleiser: Final theories have the seductive appeal of “grand answers” and those are, of course, an important scientific pursuit: How far can we get simplifying our fundamental description of matter? But the real core of science is related not to such questions but to more concrete problems and technical challenges. For example, the possibility of life on Mars is an essential scientific question having nothing to do with final theories. I think a healthy scientific community would have a good balance of both, as long as “final theories” were properly renamed as fundamental theories. There are no final theories, only better ones.

PT: This is your fourth English popularization. What appeals to you about writing on scientific topics for a broad audience, and what have you found to be the most challenging aspect of that?

Gleiser: I find that we owe it to the general public explanations of what we do and how we do it. Not just because our funding is tax based, as is often said, but because I believe that science contributes in essential ways to our culture and worldview. We are pondering questions that are not just the province of science but of human inquiry. As such, I think some of us should engage with the public, sharing what's going on. Humans are storytellers, and science is the greatest of all stories. Plus, the children love it, and we need to motivate our future colleagues.

To balance content and quality of exposition and still make it appealing to a large number of people—that's the challenge. Scientists have a somewhat captive audience of science buffs that read the books we write. But branching out of this group is quite hard, as we see from the rare science popularization that makes it into the bestselling lists. We need to find ways to broaden the appeal of science.

PT: What’s the next book project for you?

Gleiser: My next book is the lightest I've written so far in English, a kind of travel journal combining my personal experiences as a traveling scientist and fly fisherman, exploring conferences and rivers around the world. [The estimated publication date is spring 2016.] The format gives me the freedom to explore topics in science and philosophy in a more approachable way than, say, The Island of Knowledge. It's an experiment in a new format of science popularization, somewhat reminiscent of Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values (by Robert Pirsig, William Morrow Paperbacks, 2005), but with fly-fishing subbing for motorcycles—which, by the way, I also love.

PT: What books are you currently reading?

Gleiser: Nick Bostrom's Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014) and Roberto Unger and Lee Smolin's The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy (Cambridge University Press, 2014).

Scientists reprogram plants for drought tolerance

Original link:  https://richarddawkins.net/2015/02/scientists-reprogram-plants-for-drought-tolerance/

 Credit: Sang-Youl Park, UC Riverside

By Science Daily

UC Riverside-led research in synthetic biology provides a strategy that has reprogrammed plants to consume less water after they are exposed to an agrochemical, opening new doors for crop improvement.

Crops and other plants constantly face adverse environmental conditions, such as rising temperatures (2014 was the warmest year on record) and dwindling fresh water supplies, which lower yield and cost farmers billions of dollars annually.

Drought is a major environmental stress factor affecting plant growth and development. When plants encounter drought, they naturally produce abscisic acid (ABA), a stress hormone that inhibits plant growth and reduces water consumption. Specifically, the hormone turns on a receptor (special protein) in plants when it binds to the receptor like a hand fitting into a glove, resulting in beneficial changes — such as the closing of guard cells on leaves, called stomata, to reduce water loss — that help the plants survive.

While it is true that crops could be sprayed with ABA to assist their survival during drought, ABA is costly to make, rapidly inactivated inside plant cells and light-sensitive, and has therefore failed to find much direct use in agriculture. Several research groups are working to develop synthetic ABA mimics to modulate drought tolerance, but once discovered these mimics are expected to face lengthy and costly development processes.

Source: 
http://www.sciencedaily.com/releases/2015/02/150204134119.htm

Streaming algorithm

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Streaming_algorithm ...