
Wednesday, June 16, 2021

Gravity

From Wikipedia, the free encyclopedia
Hammer and feather drop: astronaut David Scott (from mission Apollo 15) on the Moon enacting the legend of Galileo's gravity experiment

Gravity (from Latin gravitas 'weight'), or gravitation, is a natural phenomenon by which all things with mass or energy—including planets, stars, galaxies, and even light—are attracted to (or gravitate toward) one another. On Earth, gravity gives weight to physical objects, and the Moon's gravity causes the ocean tides. The gravitational attraction of the original gaseous matter present in the Universe caused it to begin coalescing and forming stars and caused the stars to group together into galaxies, so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become weaker as objects get further away.

Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915), which describes gravity not as a force, but as a consequence of masses moving along geodesic lines in a curved spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon. However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them.

Gravity is the weakest of the four fundamental interactions of physics, approximately 10³⁸ times weaker than the strong interaction, 10³⁶ times weaker than the electromagnetic force and 10²⁹ times weaker than the weak interaction. As a consequence, it has no significant influence at the level of subatomic particles. In contrast, it is the dominant interaction at the macroscopic scale, and is the cause of the formation, shape and trajectory (orbit) of astronomical bodies.

Current models of particle physics imply that the earliest instance of gravity in the Universe, possibly in the form of quantum gravity, supergravity or a gravitational singularity, along with ordinary space and time, developed during the Planck epoch (up to 10⁻⁴³ seconds after the birth of the Universe), possibly from a primeval state, such as a false vacuum, quantum vacuum or virtual particle, in a currently unknown manner. Attempts to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics, are a current area of research.

History of gravitational theory

Ancient world

The ancient Greek philosopher Archimedes discovered the center of gravity of a triangle. He also postulated that if two equal weights did not have the same center of gravity, the center of gravity of the two weights together would be in the middle of the line that joins their centers of gravity.

The Roman architect and engineer Vitruvius in De Architectura postulated that gravity of an object did not depend on weight but its "nature".

Scientific revolution

Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. In his famous (though possibly apocryphal) experiment dropping balls from the Tower of Pisa, and later with careful measurements of balls rolling down inclines, Galileo showed that gravitational acceleration is the same for all objects. This was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration. Galileo postulated air resistance as the reason that objects with low density and a high surface area fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity.

Newton's theory of gravitation

English physicist and mathematician, Sir Isaac Newton (1642–1727)

In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. In his own words, "I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly." The equation is the following:

F = G m₁m₂ / r²

where F is the force, m₁ and m₂ are the masses of the objects interacting, r is the distance between the centers of the masses, and G is the gravitational constant.
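
As a rough numerical illustration (a sketch, not part of the original article; the masses and distance below are approximate published values for the Earth and the Moon):

    # Sketch: Newton's law of universal gravitation, F = G*m1*m2 / r**2,
    # evaluated with approximate Earth-Moon values.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    m_earth = 5.972e24   # mass of the Earth, kg (approximate)
    m_moon = 7.348e22    # mass of the Moon, kg (approximate)
    r = 3.844e8          # mean Earth-Moon distance, m (approximate)

    F = G * m_earth * m_moon / r**2
    print(f"{F:.3e} N")  # roughly 2e20 newtons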

Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of Neptune.

A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. By the end of the 19th century, it was known that its orbit showed slight perturbations that could not be accounted for entirely under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. This discrepancy was the advance in the perihelion of Mercury of 42.98 arcseconds per century.

Although Newton's theory has been superseded by Albert Einstein's general relativity, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is simpler to work with and it gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies.

Equivalence principle

The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way, and that the effects of gravity are indistinguishable from certain aspects of acceleration and deceleration. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces (such as air resistance and electromagnetic effects) are negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are planned for more accurate experiments in space.
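
A minimal sketch of the weak equivalence principle in Newtonian terms (illustrative only, with approximate values; the variable names are chosen here, not taken from the article): the acceleration of a dropped test body, a = F/m = GM/r², does not contain the body's own mass, so fall times from the same height coincide.

    # Sketch: the test body's mass cancels out of a = F/m under Newtonian gravity.
    G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH, R_EARTH = 5.972e24, 6.371e6   # kg, m (approximate values)

    def fall_time(mass_kg, height_m):
        a = G * M_EARTH / R_EARTH**2       # independent of mass_kg
        return (2 * height_m / a) ** 0.5

    print(fall_time(0.1, 10.0))    # a 100 g ball dropped from 10 m: ~1.43 s
    print(fall_time(100.0, 10.0))  # a 100 kg ball from the same height: ~1.43 s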

Formulations of the equivalence principle include:

  • The weak equivalence principle: The trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition.
  • The Einsteinian equivalence principle: The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime.
  • The strong equivalence principle requiring both of the above.

General relativity

Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.

In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground. In Newtonian physics, however, no such acceleration can occur unless at least one of the objects is being operated on by a force.

Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. Like Newton's first law of motion, Einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. For instance, we are no longer following geodesics while standing because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along the geodesics in spacetime is considered inertial.

Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime. A metric tensor describes a geometry of spacetime. The geodesic paths for a spacetime are calculated from the metric tensor.
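
In the usual tensor notation (a standard form, not quoted from this article), the field equations and the geodesic equation read:

    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu},
    \qquad
    \frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\alpha\beta} \frac{dx^{\alpha}}{d\tau} \frac{dx^{\beta}}{d\tau} = 0,

where g_{\mu\nu} is the metric tensor, G_{\mu\nu} the Einstein tensor built from it, T_{\mu\nu} the stress–energy tensor of matter, \Lambda the cosmological constant, and \Gamma^{\mu}_{\alpha\beta} the Christoffel symbols of the metric; the ten independent components of the symmetric metric correspond to the ten equations mentioned above.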

Solutions

Notable solutions of the Einstein field equations include the Schwarzschild solution (for a spherically symmetric, non-rotating, uncharged mass), the Reissner–Nordström solution (for a charged, non-rotating mass), the Kerr solution (for a rotating mass), the Kerr–Newman solution (for a charged, rotating mass), and the cosmological Friedmann–Lemaître–Robertson–Walker solution (for an expanding or contracting homogeneous, isotropic universe).

Tests

The tests of general relativity included the following:

  • General relativity accounts for the anomalous perihelion precession of Mercury.
  • The prediction that time runs slower at lower potentials (gravitational time dilation) has been confirmed by the Pound–Rebka experiment (1959), the Hafele–Keating experiment, and the GPS (a rough numerical estimate follows this list).
  • The prediction of the deflection of light was first confirmed by Arthur Stanley Eddington from his observations during the Solar eclipse of 29 May 1919. Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. However, his interpretation of the results was later disputed. More recent tests using radio interferometric measurements of quasars passing behind the Sun have more accurately and consistently confirmed the deflection of light to the degree predicted by general relativity.
  • The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals.
  • Gravitational radiation has been indirectly confirmed through studies of binary pulsars. On 11 February 2016, the LIGO and Virgo collaborations announced the first observation of a gravitational wave.
  • Alexander Friedmann in 1922 found that Einstein equations have non-stationary solutions (even in the presence of the cosmological constant). In 1927 Georges Lemaître showed that static solutions of the Einstein equations, which are possible in the presence of the cosmological constant, are unstable, and therefore the static Universe envisioned by Einstein could not exist. Later, in 1931, Einstein himself agreed with the results of Friedmann and Lemaître. Thus general relativity predicted that the Universe had to be non-static—it had to either expand or contract. The expansion of the Universe discovered by Edwin Hubble in 1929 confirmed this prediction.
  • The theory's prediction of frame dragging was consistent with the recent Gravity Probe B results.
  • General relativity predicts that light should lose its energy when traveling away from massive bodies through gravitational redshift. This was verified on Earth and in the Solar System around 1960.
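
As a rough numerical illustration of gravitational time dilation for the GPS case mentioned above (a sketch with approximate textbook values, not figures from this article; only the first-order weak-field and low-speed formulas are used):

    # Sketch: estimate how much faster a GPS satellite clock runs per day.
    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24        # mass of the Earth, kg (approximate)
    c = 2.998e8         # speed of light, m/s
    r_ground = 6.371e6  # Earth's mean radius, m (approximate)
    r_orbit = 2.657e7   # GPS orbital radius (~20,200 km altitude), m (approximate)

    grav = G * M * (1/r_ground - 1/r_orbit) / c**2  # higher clock runs faster
    v = (G * M / r_orbit) ** 0.5                    # circular orbital speed
    kinematic = v**2 / (2 * c**2)                   # moving clock runs slower

    print((grav - kinematic) * 86400 * 1e6)         # about 38 microseconds per day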

Gravity and quantum mechanics

An open question is whether it is possible to describe the small-scale interactions of gravity with the same framework as quantum mechanics. General relativity describes large-scale bulk properties whereas quantum mechanics is the framework to describe the smallest scale interactions of matter. Without modifications these frameworks are incompatible.

One path is to describe gravity in the framework of quantum field theory, which has successfully described the other fundamental interactions. Just as the electromagnetic force arises from an exchange of virtual photons, the QFT description of gravity posits an exchange of virtual gravitons. This description reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length, where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required.

Specifics

Earth's gravity

An initially-stationary object that is allowed to fall freely under gravity drops a distance that is proportional to the square of the elapsed time. This image spans half a second and was captured at 20 flashes per second.

Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.
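
For instance (a sketch with approximate values, not a calculation from the article), applying this inverse-square relation at the Earth's surface recovers the familiar acceleration of roughly 9.8 m/s²:

    # Sketch: field strength g = G*M / r**2 at the Earth's surface.
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24    # mass of the Earth, kg (approximate)
    r = 6.371e6     # mean radius of the Earth, m (approximate)

    print(G * M / r**2)   # about 9.82 m/s^2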

If an object with comparable mass to that of the Earth were to fall towards it, then the corresponding acceleration of the Earth would be observable.

The strength of the gravitational field is numerically equal to the acceleration of objects under its influence. The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities. For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI).

That value, denoted g, is 9.80665 m/s² (32.1740 ft/s²).

The standard value of 9.80665 m/s² is the one originally adopted by the International Committee on Weights and Measures in 1901 for 45° latitude, even though it has been shown to be too high by about five parts in ten thousand. This value has persisted in meteorology and in some standard atmospheres as the value for 45° latitude even though it applies more precisely to the latitude of 45°32′33″.

Assuming the standardized value for g and ignoring air resistance, this means that an object falling freely near the Earth's surface increases its velocity by 9.80665 m/s (32.1740 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.80665 m/s (32.1740 ft/s) after one second, approximately 19.62 m/s (64.4 ft/s) after two seconds, and so on, adding 9.80665 m/s (32.1740 ft/s) to each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time.
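
The stated progression can be checked directly (a minimal sketch; nothing here beyond the standard constant-acceleration formulas):

    # Sketch: speed gained and distance fallen after each second, no air resistance.
    g = 9.80665                 # standard gravity, m/s^2
    for t in (1, 2, 3):
        v = g * t               # 9.81, 19.61, 29.42 m/s
        d = 0.5 * g * t**2      # 4.9, 19.6, 44.1 m
        print(t, round(v, 2), round(d, 2))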

According to Newton's 3rd Law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the object's. If the object does not bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity and prevents further acceleration.

The force of gravity on Earth is the resultant (vector sum) of two forces: (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s² at the Equator to about 9.832 m/s² at the poles.

Equations for a falling body near the surface of the Earth

Under an assumption of constant gravitational attraction, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s² on Earth. This resulting force is the object's weight. The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. The image on the right, spanning half a second, was captured with a stroboscopic flash at 20 flashes per second. During the first 1/20 of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2/20 of a second it has dropped a total of 4 units; by 3/20, 9 units; and so on.

Under the same constant gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression for the maximum height reached by a vertically projected body with initial velocity v is useful for small heights and small initial velocities only.
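
The maximum-height expression referred to here follows from equating the kinetic energy at launch with the potential energy at the top of the trajectory, under the same constant-gravity assumption (a standard derivation, not quoted from the article):

    \tfrac{1}{2} m v^{2} = m g h_{\max}
    \quad\Longrightarrow\quad
    h_{\max} = \frac{v^{2}}{2g},

which, like Ep = mgh itself, is valid only for small heights near the Earth's surface.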

Gravity and astronomy

Gravity acts on stars that form the Milky Way.

The application of Newton's law of gravity has enabled the acquisition of much of the detailed information we have about the planets in the Solar System, the mass of the Sun, and details of quasars; even the existence of dark matter is inferred using Newton's law of gravity. Although we have not traveled to all the planets nor to the Sun, we know their masses. These masses are obtained by applying the laws of gravity to the measured characteristics of the orbit. In space an object maintains its orbit because of the force of gravity acting upon it. Planets orbit stars, stars orbit galactic centers, galaxies orbit a center of mass in clusters, and clusters orbit in superclusters. The force of gravity exerted on one object by another is directly proportional to the product of those objects' masses and inversely proportional to the square of the distance between them.
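
As an example of how such masses are obtained (a sketch with approximate orbital values, not a calculation quoted from the article), the Sun's mass follows from the Earth's orbital radius and period via Newton's form of Kepler's third law, M = 4π²a³ / (G T²):

    # Sketch: solar mass from the Earth's (approximately circular) orbit.
    import math

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    a = 1.496e11    # mean Earth-Sun distance, m (approximate)
    T = 3.156e7     # orbital period, s (one year, approximate)

    M_sun = 4 * math.pi**2 * a**3 / (G * T**2)
    print(f"{M_sun:.3e} kg")   # roughly 2e30 kg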

The earliest gravity (possibly in the form of quantum gravity, supergravity or a gravitational singularity), along with ordinary space and time, developed during the Planck epoch (up to 10⁻⁴³ seconds after the birth of the Universe), possibly from a primeval state (such as a false vacuum, quantum vacuum or virtual particle), in a currently unknown manner.

Gravitational radiation

The LIGO Hanford Observatory in Washington, US, where gravitational waves were first observed in September 2015.

General relativity predicts that energy can be transported out of a system through gravitational radiation. Any accelerating matter can create curvatures in the space-time metric, which is how gravitational radiation is transported away from the system. Co-orbiting objects, such as the Earth–Sun system, pairs of neutron stars, and pairs of black holes, can generate such curvatures in space-time. Another class of astrophysical systems predicted to lose energy in the form of gravitational radiation is exploding supernovae.

The first indirect evidence for gravitational radiation came from measurements of the Hulse–Taylor binary, discovered in 1974. This system consists of a pulsar and a neutron star in orbit around one another. Its orbital period has decreased since its initial discovery due to a loss of energy, which is consistent with the amount of energy lost through gravitational radiation. This research was awarded the Nobel Prize in Physics in 1993.

The first direct evidence for gravitational radiation was measured on 14 September 2015 by the LIGO detectors. The gravitational waves emitted during the collision of two black holes 1.3 billion light-years from Earth were measured. This observation confirmed the theoretical predictions of Einstein and others that such waves exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe, including the Big Bang. Neutron star and black hole formation also create detectable amounts of gravitational radiation. This research was awarded the Nobel Prize in Physics in 2017.

As of 2020, the gravitational radiation emitted by the Solar System is far too small to measure with current technology.

Speed of gravity

In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light. This means that if the Sun suddenly disappeared, the Earth would keep orbiting the vacant point normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in the Chinese Science Bulletin in February 2013.
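
The eight-minute figure is just the light travel time over one astronomical unit, which (by this result) is also the gravity travel time (a quick check with approximate values):

    # Sketch: travel time from the Sun to the Earth at the speed of light.
    AU = 1.496e11   # astronomical unit, m (approximate)
    c = 2.998e8     # speed of light, m/s
    print(AU / c / 60)   # about 8.3 minutes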

In October 2017, the LIGO and Virgo detectors received gravitational wave signals within 2 seconds of gamma ray satellites and optical telescopes seeing signals from the same direction. This confirmed that the speed of gravitational waves was the same as the speed of light.

Anomalies and discrepancies

There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.

Rotation curve of a typical spiral galaxy: predicted (A) and observed (B). The discrepancy between the curves is attributed to dark matter.
  • Extra-fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter. Galaxies within galaxy clusters show a similar pattern. Dark matter, which would interact through gravitation but not electromagnetically, would account for the discrepancy. Various modifications to Newtonian dynamics have also been proposed.
  • Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity assist maneuvers.
  • Accelerating expansion: The metric expansion of space seems to be speeding up. Dark energy has been proposed to explain this. A recent alternative explanation is that the geometry of space is not homogeneous (due to clusters of galaxies) and that when the data are reinterpreted to take this into account, the expansion is not speeding up after all; however, this conclusion is disputed.
  • Anomalous increase of the astronomical unit: Recent measurements indicate that planetary orbits are widening faster than if this were solely through the Sun losing mass by radiating energy.
  • Extra energetic photons: Photons travelling through galaxy clusters should gain energy and then lose it again on the way out. The accelerating expansion of the Universe should stop the photons returning all the energy, but even taking this into account photons from the cosmic microwave background radiation gain twice as much energy as expected. This may indicate that gravity falls off faster than inverse-squared at certain distance scales.
  • Extra massive hydrogen clouds: The spectral lines of the Lyman-alpha forest suggest that hydrogen clouds are more clumped together at certain scales than expected and, like dark flow, may indicate that gravity falls off slower than inverse-squared at certain distance scales.

Mathematical universe hypothesis

From Wikipedia, the free encyclopedia

In physics and cosmology, the mathematical universe hypothesis (MUH), also known as the ultimate ensemble theory and struogony (from mathematical structure, Latin: struō), is a speculative "theory of everything" (TOE) proposed by cosmologist Max Tegmark.

Description

Tegmark's MUH is: Our external physical reality is a mathematical structure. That is, the physical universe is not merely described by mathematics, but is mathematics (specifically, a mathematical structure). Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Observers, including humans, are "self-aware substructures (SASs)". In any mathematical structure complex enough to contain such substructures, they "will subjectively perceive themselves as existing in a physically 'real' world".

The theory can be considered a form of Pythagoreanism or Platonism in that it proposes the existence of mathematical entities; a form of mathematical monism in that it denies that anything exists except mathematical objects; and a formal expression of ontic structural realism.

Tegmark claims that the hypothesis has no free parameters and is not observationally ruled out. Thus, he reasons, it is preferred over other theories-of-everything by Occam's Razor. Tegmark also considers augmenting the MUH with a second assumption, the computable universe hypothesis (CUH), which says that the mathematical structure that is our external physical reality is defined by computable functions.

The MUH is related to Tegmark's categorization of four levels of the multiverse. This categorization posits a nested hierarchy of increasing diversity, with worlds corresponding to different sets of initial conditions (level 1), physical constants (level 2), quantum branches (level 3), and altogether different equations or mathematical structures (level 4).

Reception

Andreas Albrecht of Imperial College in London called it a "provocative" solution to one of the central problems facing physics. Although he "wouldn't dare" go so far as to say he believes it, he noted that "it's actually quite difficult to construct a theory where everything we see is all there is".

Criticisms and responses

Definition of the ensemble

Jürgen Schmidhuber argues that "Although Tegmark suggests that '... all mathematical structures are a priori given equal statistical weight,' there is no way of assigning equal non-vanishing probability to all (infinitely many) mathematical structures." Schmidhuber puts forward a more restricted ensemble which admits only universe representations describable by constructive mathematics, that is, computer programs; e.g., the Global Digital Mathematics Library and Digital Library of Mathematical Functions, linked open data representations of formalized fundamental theorems intended to serve as building blocks for additional mathematical results. He explicitly includes universe representations describable by non-halting programs whose output bits converge after finite time, although the convergence time itself may not be predictable by a halting program, due to the undecidability of the halting problem.

In response, Tegmark notes (sec. V.E) that a constructive mathematics formalized measure of free parameter variations of physical dimensions, constants, and laws over all universes has not yet been constructed for the string theory landscape either, so this should not be regarded as a "show-stopper".

Consistency with Gödel's theorem

It has also been suggested that the MUH is inconsistent with Gödel's incompleteness theorem. In a three-way debate between Tegmark and fellow physicists Piet Hut and Mark Alford, the "secularist" (Alford) states that "the methods allowed by formalists cannot prove all the theorems in a sufficiently powerful system... The idea that math is 'out there' is incompatible with the idea that it consists of formal systems."

Tegmark's response in (sec VI.A.1) is to offer a new hypothesis "that only Gödel-complete (fully decidable) mathematical structures have physical existence. This drastically shrinks the Level IV multiverse, essentially placing an upper limit on complexity, and may have the attractive side effect of explaining the relative simplicity of our universe." Tegmark goes on to note that although conventional theories in physics are Gödel-undecidable, the actual mathematical structure describing our world could still be Gödel-complete, and "could in principle contain observers capable of thinking about Gödel-incomplete mathematics, just as finite-state digital computers can prove certain theorems about Gödel-incomplete formal systems like Peano arithmetic." In (sec. VII) he gives a more detailed response, proposing as an alternative to MUH the more restricted "Computable Universe Hypothesis" (CUH) which only includes mathematical structures that are simple enough that Gödel's theorem does not require them to contain any undecidable or uncomputable theorems. Tegmark admits that this approach faces "serious challenges", including (a) it excludes much of the mathematical landscape; (b) the measure on the space of allowed theories may itself be uncomputable; and (c) "virtually all historically successful theories of physics violate the CUH".

Observability

Stoeger, Ellis, and Kircher note that in a true multiverse theory, "the universes are then completely disjoint and nothing that happens in any one of them is causally linked to what happens in any other one. This lack of any causal connection in such multiverses really places them beyond any scientific support". Ellis specifically criticizes the MUH, stating that an infinite ensemble of completely disconnected universes is "completely untestable, despite hopeful remarks sometimes made, see, e.g., Tegmark (1998)." Tegmark maintains that MUH is testable, stating that it predicts (a) that "physics research will uncover mathematical regularities in nature", and (b) by assuming that we occupy a typical member of the multiverse of mathematical structures, one could "start testing multiverse predictions by assessing how typical our universe is".

Plausibility of radical Platonism

The MUH is based on the radical Platonist view that math is an external reality (sec. V.C). However, Jannes argues that "mathematics is at least in part a human construction", on the basis that if it is an external reality, then it should be found in some other animals as well: "Tegmark argues that, if we want to give a complete description of reality, then we will need a language independent of us humans, understandable for non-human sentient entities, such as aliens and future supercomputers". Brian Greene argues similarly: "The deepest description of the universe should not require concepts whose meaning relies on human experience or interpretation. Reality transcends our existence and so shouldn't, in any fundamental way, depend on ideas of our making."

However, there are many non-human entities, plenty of which are intelligent, and many of which can apprehend, memorise, compare and even approximately add numerical quantities. Several animals have also passed the mirror test of self-consciousness. But a few surprising examples of mathematical abstraction notwithstanding (for example, chimpanzees can be trained to carry out symbolic addition with digits, or the report of a parrot understanding a “zero-like concept”), all examples of animal intelligence with respect to mathematics are limited to basic counting abilities. He adds, "non-human intelligent beings should exist that understand the language of advanced mathematics. However, none of the non-human intelligent beings that we know of confirm the status of (advanced) mathematics as an objective language." In the paper "On Math, Matter and Mind" the secularist viewpoint examined argues (sec. VI.A) that math is evolving over time, there is "no reason to think it is converging to a definite structure, with fixed questions and established ways to address them", and also that "The Radical Platonist position is just another metaphysical theory like solipsism... In the end the metaphysics just demands that we use a different language for saying what we already knew." Tegmark responds (sec VI.A.1) that "The notion of a mathematical structure is rigorously defined in any book on Model Theory", and that non-human mathematics would only differ from our own "because we are uncovering a different part of what is in fact a consistent and unified picture, so math is converging in this sense." In his 2014 book on the MUH, Tegmark argues that the resolution is not that we invent the language of mathematics, but that we discover the structure of mathematics.

Coexistence of all mathematical structures

Don Page has argued that "At the ultimate level, there can be only one world and, if mathematical structures are broad enough to include all possible worlds or at least our own, there must be one unique mathematical structure that describes ultimate reality. So I think it is logical nonsense to talk of Level 4 in the sense of the co-existence of all mathematical structures." This means there can only be one mathematical corpus. Tegmark responds that "this is less inconsistent with Level IV than it may sound, since many mathematical structures decompose into unrelated substructures, and separate ones can be unified."

Consistency with our "simple universe"

Alexander Vilenkin comments (Ch. 19, p. 203) that "the number of mathematical structures increases with increasing complexity, suggesting that 'typical' structures should be horrendously large and cumbersome. This seems to be in conflict with the beauty and simplicity of the theories describing our world". He goes on to note (footnote 8, p. 222) that Tegmark's solution to this problem, the assigning of lower "weights" to the more complex structures seems arbitrary ("Who determines the weights?") and may not be logically consistent ("It seems to introduce an additional mathematical structure, but all of them are supposed to be already included in the set").

Occam's razor

Tegmark has been criticized as misunderstanding the nature and application of Occam's razor; Massimo Pigliucci reminds us that "Occam's razor is just a useful heuristic, it should never be used as the final arbiter to decide which theory is to be favored".

Foundations of mathematics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Foundations_of_mathematics

Foundations of mathematics is the study of the philosophical and logical and/or algorithmic basis of mathematics, or, in a broader sense, the mathematical investigation of what underlies the philosophical theories concerning the nature of mathematics. In this latter sense, the distinction between foundations of mathematics and philosophy of mathematics turns out to be quite vague. Foundations of mathematics can be conceived as the study of the basic mathematical concepts (set, function, geometrical figure, number, etc.) and how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics (formulas, theories and their models giving a meaning to formulas, definitions, proofs, algorithms, etc.) also called metamathematical concepts, with an eye to the philosophical aspects and the unity of mathematics. The search for foundations of mathematics is a central question of the philosophy of mathematics; the abstract nature of mathematical objects presents special philosophical challenges.

The foundations of mathematics as a whole does not aim to contain the foundations of every mathematical topic. Generally, the foundations of a field of study refers to a more-or-less systematic analysis of its most basic or fundamental concepts, its conceptual unity and its natural ordering or hierarchy of concepts, which may help to connect it with the rest of human knowledge. The development, emergence, and clarification of the foundations can come late in the history of a field, and might not be viewed by everyone as its most interesting part.

Mathematics always played a special role in scientific thought, serving since ancient times as a model of truth and rigor for rational inquiry, and giving tools or even a foundation for other sciences (especially physics). Mathematics' many developments towards higher abstractions in the 19th century brought new challenges and paradoxes, urging for a deeper and more systematic examination of the nature and criteria of mathematical truth, as well as a unification of the diverse branches of mathematics into a coherent whole.

The systematic search for the foundations of mathematics started at the end of the 19th century and formed a new mathematical discipline called mathematical logic, which later had strong links to theoretical computer science. It went through a series of crises with paradoxical results, until the discoveries stabilized during the 20th century as a large and coherent body of mathematical knowledge with several aspects or components (set theory, model theory, proof theory, etc.), whose detailed properties and possible variants are still an active research field. Its high level of technical sophistication inspired many philosophers to conjecture that it can serve as a model or pattern for the foundations of other sciences.

Historical context

Ancient Greek mathematics

While the practice of mathematics had previously developed in other civilizations, special interest in its theoretical and foundational aspects was clearly evident in the work of the Ancient Greeks.

Early Greek philosophers disputed as to which is more basic, arithmetic or geometry. Zeno of Elea (490 – c. 430 BC) produced four paradoxes that seem to show the impossibility of change. The Pythagorean school of mathematics originally insisted that only natural and rational numbers exist. The discovery of the irrationality of √2, the ratio of the diagonal of a square to its side (around the 5th century BC), was a shock to them which they only reluctantly accepted. The discrepancy between rationals and reals was finally resolved by Eudoxus of Cnidus (408–355 BC), a student of Plato, who reduced the comparison of two irrational ratios to comparisons of multiples of the magnitudes involved. His method anticipated that of the Dedekind cut in the modern definition of real numbers by Richard Dedekind (1831–1916).

In the Posterior Analytics, Aristotle (384–322 BC) laid down the axiomatic method for organizing a field of knowledge logically by means of primitive concepts, axioms, postulates, definitions, and theorems. Aristotle took a majority of his examples for this from arithmetic and from geometry. This method reached its high point with Euclid's Elements (300 BC), a treatise on mathematics structured with very high standards of rigor: Euclid justifies each proposition by a demonstration in the form of chains of syllogisms (though they do not always conform strictly to Aristotelian templates). Aristotle's syllogistic logic, together with the axiomatic method exemplified by Euclid's Elements, are recognized as scientific achievements of ancient Greece.

Platonism as a traditional philosophy of mathematics

Starting from the end of the 19th century, a Platonist view of mathematics became common among practicing mathematicians.

The concepts or, as Platonists would have it, the objects of mathematics are abstract and remote from everyday perceptual experience: geometrical figures are conceived as idealities to be distinguished from effective drawings and shapes of objects, and numbers are not confused with the counting of concrete objects. Their existence and nature present special philosophical challenges: How do mathematical objects differ from their concrete representation? Are they located in their representation, or in our minds, or somewhere else? How can we know them?

The ancient Greek philosophers took such questions very seriously. Indeed, many of their general philosophical discussions were carried on with extensive reference to geometry and arithmetic. Plato (424/423 BC – 348/347 BC) insisted that mathematical objects, like other platonic Ideas (forms or essences), must be perfectly abstract and have a separate, non-material kind of existence, in a world of mathematical objects independent of humans. He believed that the truths about these objects also exist independently of the human mind, but are discovered by humans. In the Meno, Plato's teacher Socrates asserts that it is possible to come to know this truth by a process akin to memory retrieval.

Above the gateway to Plato's academy appeared a famous inscription: "Let no one who is ignorant of geometry enter here". In this way Plato indicated his high opinion of geometry. He regarded geometry as "the first essential in the training of philosophers", because of its abstract character.

This philosophy of Platonist mathematical realism is shared by many mathematicians. It can be argued that Platonism somehow comes as a necessary assumption underlying any mathematical work.

In this view, the laws of nature and the laws of mathematics have a similar status, and the effectiveness ceases to be unreasonable. Not our axioms, but the very real world of mathematical objects forms the foundation.

Aristotle dissected and rejected this view in his Metaphysics. These questions provide much fuel for philosophical analysis and debate.

Middle Ages and Renaissance

For over 2,000 years, Euclid's Elements stood as a perfectly solid foundation for mathematics, as its methodology of rational exploration guided mathematicians, philosophers, and scientists well into the 19th century.

The Middle Ages saw a dispute over the ontological status of the universals (platonic Ideas): Realism asserted their existence independently of perception; conceptualism asserted their existence within the mind only; nominalism denied both, seeing universals only as names of collections of individual objects (following older speculations that they are words, "logoi").

René Descartes published La Géométrie (1637), aimed at reducing geometry to algebra by means of coordinate systems, giving algebra a more foundational role (while the Greeks embedded arithmetic into geometry by identifying whole numbers with evenly spaced points on a line). Descartes' book became famous after 1649 and paved the way to infinitesimal calculus.

Isaac Newton (1642–1727) in England and Leibniz (1646–1716) in Germany independently developed the infinitesimal calculus based on heuristic methods that were highly efficient but sorely lacking in rigorous justification. Leibniz even went on to explicitly describe infinitesimals as actual infinitely small numbers (close to zero). Leibniz also worked on formal logic, but most of his writings on it remained unpublished until 1903.

The Protestant philosopher George Berkeley (1685–1753), in his campaign against the religious implications of Newtonian mechanics, wrote a pamphlet on the lack of rational justifications of infinitesimal calculus: "They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?"

Then mathematics developed very rapidly and successfully in physical applications, but with little attention to logical foundations.

19th century

In the 19th century, mathematics became increasingly abstract. Concerns about logical gaps and inconsistencies in different fields led to the development of axiomatic systems.

Real analysis

Cauchy (1789–1857) started the project of formulating and proving the theorems of infinitesimal calculus in a rigorous manner, rejecting the heuristic principle of the generality of algebra exploited by earlier authors. In his 1821 work Cours d'Analyse he defined infinitely small quantities in terms of decreasing sequences that converge to 0, which he then used to define continuity. But he did not formalize his notion of convergence.

The modern (ε, δ)-definition of limit and continuous functions was first developed by Bolzano in 1817, but remained relatively unknown. It gives a rigorous foundation of infinitesimal calculus based on the set of real numbers, arguably resolving the Zeno paradoxes and Berkeley's arguments.
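
In modern notation (a standard statement, not quoted from this article), the (ε, δ)-definition reads:

    \lim_{x \to a} f(x) = L
    \iff
    \forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x :\;
    0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon,

and f is continuous at a when, in addition, L = f(a).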

Mathematicians such as Karl Weierstrass (1815–1897) discovered pathological functions such as continuous, nowhere-differentiable functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, to axiomatize analysis using properties of the natural numbers.

In 1858, Dedekind proposed a definition of the real numbers as cuts of rational numbers. This reduction of real numbers and continuous functions to the rational numbers, and thus to the natural numbers, was later integrated by Cantor into his set theory, and axiomatized in terms of second-order arithmetic by Hilbert and Bernays.
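
A Dedekind cut can be stated briefly (a standard definition, not quoted from the article): a real number is identified with a partition of the rationals into two non-empty sets (A, B) such that every element of A is less than every element of B and A has no greatest element. For example,

    \sqrt{2} \;\longleftrightarrow\; A = \{\, q \in \mathbb{Q} : q \le 0 \ \text{or}\ q^{2} < 2 \,\}, \qquad B = \mathbb{Q} \setminus A.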

Group theory

For the first time, the limits of mathematics were explored. Niels Henrik Abel (1802–1829), a Norwegian, and Évariste Galois (1811–1832), a Frenchman, investigated the solutions of various polynomial equations, and proved that there is no general algebraic solution to equations of degree greater than four (Abel–Ruffini theorem). With these concepts, Pierre Wantzel (1837) proved that straightedge and compass alone cannot trisect an arbitrary angle nor double a cube. In 1882, Lindemann, building on the work of Hermite, showed that a straightedge and compass quadrature of the circle (construction of a square equal in area to a given circle) was also impossible, by proving that π is a transcendental number. Mathematicians had attempted to solve all of these problems in vain since the time of the ancient Greeks.

Abel and Galois's works opened the way for the developments of group theory (which would later be used to study symmetry in physics and other fields) and abstract algebra. The concept of a vector space emerged from Möbius's conception of barycentric coordinates in 1827 and culminated in Peano's modern definition of vector spaces and linear maps in 1888. Geometry was no longer limited to three dimensions. These concepts did not generalize numbers but combined notions of functions and sets, which were not yet formalized, breaking away from familiar mathematical objects.

Non-Euclidean geometries

After many failed attempts to derive the parallel postulate from other axioms, the study of the still hypothetical hyperbolic geometry by Johann Heinrich Lambert (1728–1777) led him to introduce the hyperbolic functions and compute the area of a hyperbolic triangle (where the sum of angles is less than 180°). Then the Russian mathematician Nikolai Lobachevsky (1792–1856) established in 1826 (and published in 1829) the coherence of this geometry (thus the independence of the parallel postulate), in parallel with the Hungarian mathematician János Bolyai (1802–1860) in 1832, and with Gauss. Later in the 19th century, the German mathematician Bernhard Riemann developed elliptic geometry, another non-Euclidean geometry where no parallel can be found and the sum of angles in a triangle is more than 180°. It was proved consistent by defining point to mean a pair of antipodal points on a fixed sphere and line to mean a great circle on the sphere. At that time, the main method for proving the consistency of a set of axioms was to provide a model for it.

Projective geometry

One of the traps in a deductive system is circular reasoning, a problem that seemed to befall projective geometry until it was resolved by Karl von Staudt. As explained by Russian historians:

In the mid-nineteenth century there was an acrimonious controversy between the proponents of synthetic and analytic methods in projective geometry, the two sides accusing each other of mixing projective and metric concepts. Indeed the basic concept that is applied in the synthetic presentation of projective geometry, the cross-ratio of four points of a line, was introduced through consideration of the lengths of intervals.

The purely geometric approach of von Staudt was based on the complete quadrilateral to express the relation of projective harmonic conjugates. Then he created a means of expressing the familiar numeric properties with his Algebra of Throws. English-language versions of this process of deducing the properties of a field can be found in either the book by Oswald Veblen and John Young, Projective Geometry (1938), or more recently in John Stillwell's Four Pillars of Geometry (2005). Stillwell writes on page 120:

... projective geometry is simpler than algebra in a certain sense, because we use only five geometric axioms to derive the nine field axioms.

The algebra of throws is commonly seen as a feature of cross-ratios since students ordinarily rely upon numbers without worry about their basis. However, cross-ratio calculations use metric features of geometry, features not admitted by purists. For instance, in 1961 Coxeter wrote Introduction to Geometry without mention of cross-ratio.

Boolean algebra and logic

Attempts at a formal treatment of mathematics started with Leibniz and Lambert (1728–1777), and continued with works by algebraists such as George Peacock (1791–1858). Systematic mathematical treatments of logic came with the British mathematician George Boole (1847), who devised an algebra that soon evolved into what is now called Boolean algebra, in which the only values are 0 and 1 and logical combinations (conjunction, disjunction, implication and negation) are operations similar to the addition and multiplication of integers. Additionally, De Morgan published his laws in 1847. Logic thus became a branch of mathematics. Boolean algebra is the starting point of mathematical logic and has important applications in computer science.
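
A small sketch of this arithmetic view of logic (illustrative only; the function names are chosen here, not taken from Boole): with truth values written as 0 and 1, conjunction behaves like multiplication, and a law such as De Morgan's can be checked by exhausting all cases.

    # Sketch: Boolean algebra over the values 0 and 1.
    from itertools import product

    def NOT(a): return 1 - a
    def AND(a, b): return a * b              # conjunction as multiplication
    def OR(a, b): return a + b - a * b       # disjunction
    def IMPLIES(a, b): return OR(NOT(a), b)  # implication

    # De Morgan's law: not(a and b) == (not a) or (not b), in every case.
    assert all(NOT(AND(a, b)) == OR(NOT(a), NOT(b))
               for a, b in product((0, 1), repeat=2))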

Charles Sanders Peirce built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885.

The German mathematician Gottlob Frege (1848–1925) presented an independent development of logic with quantifiers in his Begriffsschrift (formula language) published in 1879, a work generally considered as marking a turning point in the history of logic. He exposed deficiencies in Aristotle's Logic, and pointed out the three expected properties of a mathematical theory:

  1. Consistency: impossibility of proving contradictory statements.
  2. Completeness: any statement is either provable or refutable (i.e. its negation is provable).
  3. Decidability: there is a decision procedure to test any statement in the theory.

He then showed in Grundgesetze der Arithmetik (Basic Laws of Arithmetic) how arithmetic could be formalised in his new logic.

Frege's work was popularized by Bertrand Russell near the turn of the century. But Frege's two-dimensional notation had no success. Popular notations were (x) for universal and (∃x) for existential quantifiers, coming from Giuseppe Peano and William Ernest Johnson until the ∀ symbol was introduced by Gerhard Gentzen in 1935 and became canonical in the 1960s.

From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century.

Peano arithmetic

The formalization of arithmetic (the theory of natural numbers) as an axiomatic theory started with Peirce in 1881 and continued with Richard Dedekind and Giuseppe Peano in 1888. This was still a second-order axiomatization (expressing induction in terms of arbitrary subsets, thus with an implicit use of set theory) as concerns for expressing theories in first-order logic were not yet understood. In Dedekind's work, this approach appears as completely characterizing natural numbers and providing recursive definitions of addition and multiplication from the successor function and mathematical induction.
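
A minimal sketch of the Dedekind–Peano idea (illustrative only; the class and function names are chosen here): natural numbers are built from zero and a successor operation, and addition and multiplication are defined by recursion on the second argument.

    # Sketch: naturals as Zero / Succ(...), with recursive addition and multiplication.
    class Zero:
        pass

    class Succ:
        def __init__(self, pred):
            self.pred = pred

    def add(m, n):       # m + 0 = m ;  m + S(n) = S(m + n)
        return m if isinstance(n, Zero) else Succ(add(m, n.pred))

    def mul(m, n):       # m * 0 = 0 ;  m * S(n) = (m * n) + m
        return Zero() if isinstance(n, Zero) else add(mul(m, n.pred), m)

    def to_int(n):       # convert back to an ordinary integer for printing
        return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

    two = Succ(Succ(Zero()))
    three = Succ(two)
    print(to_int(add(two, three)), to_int(mul(two, three)))   # 5 6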

Foundational crisis

The foundational crisis of mathematics (in German Grundlagenkrise der Mathematik) was the early 20th century's term for the search for proper foundations of mathematics.

Several schools of the philosophy of mathematics ran into difficulties one after the other in the 20th century, as the assumption that mathematics had any foundation that could be consistently stated within mathematics itself was heavily challenged by the discovery of various paradoxes (such as Russell's paradox).

The name "paradox" should not be confused with contradiction. A contradiction in a formal theory is a formal proof of an absurdity inside the theory (such as 2 + 2 = 5), showing that this theory is inconsistent and must be rejected. But a paradox may be either a surprising but true result in a given formal theory, or an informal argument leading to a contradiction, so that a candidate theory, if it is to be formalized, must disallow at least one of its steps; in this case the problem is to find a satisfying theory without contradiction. Both meanings may apply if the formalized version of the argument forms the proof of a surprising truth. For instance, Russell's paradox may be expressed as "there is no set of all sets" (except in some marginal axiomatic set theories).

Various schools of thought opposed each other. The leading school was that of the formalist approach, of which David Hilbert was the foremost proponent, culminating in what is known as Hilbert's program, which sought to ground mathematics on a small basis of a logical system proved sound by metamathematical finitistic means. The main opponent was the intuitionist school, led by L. E. J. Brouwer, which resolutely discarded formalism as a meaningless game with symbols. The fight was acrimonious. In 1920 Hilbert succeeded in having Brouwer, whom he considered a threat to mathematics, removed from the editorial board of Mathematische Annalen, the leading mathematical journal of the time.

Philosophical views

At the beginning of the 20th century, three schools of philosophy of mathematics opposed each other: Formalism, Intuitionism and Logicism. The Second Conference on the Epistemology of the Exact Sciences held in Königsberg in 1930 gave space to these three schools.

Formalism

It has been claimed that formalists, such as David Hilbert (1862–1943), hold that mathematics is only a language and a series of games. Indeed, he used the words "formula game" in his 1927 response to L. E. J. Brouwer's criticisms:

And to what extent has the formula game thus made possible been successful? This formula game enables us to express the entire thought-content of the science of mathematics in a uniform manner and develop it in such a way that, at the same time, the interconnections between the individual propositions and facts become clear ... The formula game that Brouwer so deprecates has, besides its mathematical value, an important general philosophical significance. For this formula game is carried out according to certain definite rules, in which the technique of our thinking is expressed. These rules form a closed system that can be discovered and definitively stated.

Thus Hilbert is insisting that mathematics is not an arbitrary game with arbitrary rules; rather it must agree with how our thinking, and then our speaking and writing, proceeds.

We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise.

The foundational philosophy of formalism, as exemplified by David Hilbert, is a response to the paradoxes of set theory, and is based on formal logic. Virtually all mathematical theorems today can be formulated as theorems of set theory. The truth of a mathematical statement, in this view, is represented by the fact that the statement can be derived from the axioms of set theory using the rules of formal logic.

The use of formalism alone does not explain several issues: why we should use the axioms we do and not some others, why we should employ the logical rules we do and not some others, why "true" mathematical statements (e.g., the laws of arithmetic) appear to be true, and so on. Hermann Weyl would ask these very questions of Hilbert:

What "truth" or objectivity can be ascribed to this theoretic construction of the world, which presses far beyond the given, is a profound philosophical problem. It is closely connected with the further question: what impels us to take as a basis precisely the particular axiom system developed by Hilbert? Consistency is indeed a necessary but not a sufficient condition. For the time being we probably cannot answer this question ...

In some cases these questions may be sufficiently answered through the study of formal theories, in disciplines such as reverse mathematics and computational complexity theory. As noted by Weyl, formal logical systems also run the risk of inconsistency; in Peano arithmetic, this arguably has already been settled with several proofs of consistency, but there is debate over whether or not they are sufficiently finitary to be meaningful. Gödel's second incompleteness theorem establishes that logical systems of arithmetic can never contain a valid proof of their own consistency. What Hilbert wanted to do was prove a logical system S was consistent, based on principles P that only made up a small part of S. But Gödel proved that the principles P could not even prove P to be consistent, let alone S.

Intuitionism

Intuitionists, such as L. E. J. Brouwer (1882–1966), hold that mathematics is a creation of the human mind. Numbers, like fairy tale characters, are merely mental entities, which would not exist if there were never any human minds to think about them.

The foundational philosophy of intuitionism or constructivism, as exemplified in the extreme by Brouwer and Stephen Kleene, requires proofs to be "constructive" in nature – the existence of an object must be demonstrated rather than inferred from a demonstration of the impossibility of its non-existence. As a consequence, the form of proof known as reductio ad absurdum, when used to establish existence, is suspect.
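
A standard textbook illustration of the kind of argument at issue (offered here for illustration, not taken from the article): classically, one can prove that there exist irrational numbers $a$ and $b$ with $a^b$ rational without ever identifying such a pair, by splitting on a statement that is not decided constructively:
\[
\text{either } \sqrt{2}^{\sqrt{2}} \text{ is rational, and } a = b = \sqrt{2} \text{ works; or it is irrational, and } a = \sqrt{2}^{\sqrt{2}},\; b = \sqrt{2} \text{ gives } a^b = \left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2.
\]
An intuitionist rejects this as a proof of existence, because it never tells us which pair actually witnesses the claim.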

Some modern theories in the philosophy of mathematics deny the existence of foundations in the original sense. Some theories tend to focus on mathematical practice, and aim to describe and analyze the actual working of mathematicians as a social group. Others try to create a cognitive science of mathematics, focusing on human cognition as the origin of the reliability of mathematics when applied to the real world. These theories would propose to find foundations only in human thought, not in any objective outside construct. The matter remains controversial.

Logicism

Logicism is a school of thought, and research programme, in the philosophy of mathematics, based on the thesis that mathematics is an extension of logic, or that some or all mathematics may be derived in a suitable formal system whose axioms and rules of inference are 'logical' in nature. Bertrand Russell and Alfred North Whitehead championed this theory, initiated by Gottlob Frege and influenced by Richard Dedekind.

Set-theoretic Platonism

Many researchers in axiomatic set theory have subscribed to what is known as set-theoretic Platonism, exemplified by Kurt Gödel.

Several set theorists followed this approach and actively searched for axioms that may be considered true for heuristic reasons and that would decide the continuum hypothesis. Many large cardinal axioms were studied, but the hypothesis always remained independent of them, and it is now considered unlikely that CH can be resolved by a new large cardinal axiom. Other types of axioms were considered, but none of them has yet achieved consensus as a resolution of the continuum hypothesis. Recent work by Hamkins proposes a more flexible alternative: a set-theoretic multiverse allowing free passage between set-theoretic universes that satisfy the continuum hypothesis and other universes that do not.

Indispensability argument for realism

This argument by Willard Quine and Hilary Putnam says (in Putnam's shorter words),

... quantification over mathematical entities is indispensable for science ...; therefore we should accept such quantification; but this commits us to accepting the existence of the mathematical entities in question.

However, Putnam was not a Platonist.

Rough-and-ready realism

Few mathematicians are typically concerned, on a daily working basis, with logicism, formalism or any other philosophical position. Instead, their primary concern is that the mathematical enterprise as a whole always remains productive. Typically, they see this as ensured by remaining open-minded, practical and busy, and as potentially threatened by becoming overly ideological, fanatically reductionistic or lazy.

Such a view has also been expressed by some well-known physicists.

For example, the physics Nobel laureate Richard Feynman said:

People say to me, "Are you looking for the ultimate laws of physics?" No, I'm not ... If it turns out there is a simple ultimate law which explains everything, so be it – that would be very nice to discover. If it turns out it's like an onion with millions of layers ... then that's the way it is. But either way there's Nature and she's going to come out the way She is. So therefore when we go to investigate we shouldn't predecide what it is we're looking for only to find out more about it.

And Steven Weinberg:

The insights of philosophers have occasionally benefited physicists, but generally in a negative fashion – by protecting them from the preconceptions of other philosophers. ... without some guidance from our preconceptions one could do nothing at all. It is just that philosophical principles have not generally provided us with the right preconceptions.

Weinberg believed that any undecidability in mathematics, such as the continuum hypothesis, could potentially be resolved despite the incompleteness theorem, by finding suitable further axioms to add to set theory.

Philosophical consequences of Gödel's completeness theorem

Gödel's completeness theorem establishes an equivalence in first-order logic between the formal provability of a formula and its truth in all possible models. More precisely, for any consistent first-order theory it gives an "explicit construction" of a model of the theory; this model will be countable if the language of the theory is countable. However, this "explicit construction" is not algorithmic. It is based on an iterative process of completion of the theory, where each step of the iteration consists of adding a formula to the axioms whenever this keeps the theory consistent; but this consistency question is only semi-decidable (an algorithm is available that will find a contradiction if one exists, but if there is none, the fact of consistency can remain unprovable).
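
The following is a minimal, purely conceptual sketch of that completion process. The helper functions enumerate_proofs, enumerate_sentences and is_consistent are hypothetical placeholders, not from the article or any library; the point is that only the search for a contradiction is computable, while the consistency test used at each step is not.

# Conceptual sketch only; the helpers are hypothetical placeholders.

def find_contradiction(axioms, enumerate_proofs):
    """Semi-decision procedure: halts exactly when the axioms are inconsistent.
    enumerate_proofs(axioms) is assumed to yield every formal derivation
    from the axioms, one after another."""
    for proof in enumerate_proofs(axioms):
        if proof.conclusion == "FALSE":   # a derivation of a contradiction
            return proof                  # witness of inconsistency
    # If the axioms are consistent, this loop never terminates.

def complete_theory(axioms, enumerate_sentences, is_consistent):
    """Idealized Lindenbaum-style completion: run through every sentence of
    the countable language, keeping each one that preserves consistency.
    The test is_consistent is NOT computable in general (only the search
    for a contradiction above is), and the loop ranges over infinitely many
    sentences, so this is a limit construction, not a terminating algorithm."""
    theory = list(axioms)
    for sentence in enumerate_sentences():
        if is_consistent(theory + [sentence]):
            theory.append(sentence)
    return theory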

This can be seen as giving a sort of justification to the Platonist view that the objects of our mathematical theories are real. More precisely, it shows that the mere assumption of the existence of the set of natural numbers as a totality (an actual infinity) suffices to imply the existence of a model (a world of objects) of any consistent theory. However, several difficulties remain:

  • For any consistent theory this usually does not give just one world of objects, but an infinity of possible worlds that the theory might equally describe, with a possible diversity of truths between them.
  • In the case of set theory, none of the models obtained by this construction resemble the intended model, as they are countable while set theory intends to describe uncountable infinities. Similar remarks can be made in many other cases. For example, with theories that include arithmetic, such constructions generally give models that include non-standard numbers, unless the construction method was specifically designed to avoid them.
  • As it gives models to all consistent theories without distinction, it gives no reason to accept or reject any axiom as long as the theory remains consistent, but regards all consistent axiomatic theories as referring to equally existing worlds. It gives no indication on which axiomatic system should be preferred as a foundation of mathematics.
  • As claims of consistency are usually unprovable, they remain a matter of belief or non-rigorous kinds of justifications. Hence the existence of models as given by the completeness theorem needs in fact two philosophical assumptions: the actual infinity of natural numbers and the consistency of the theory.

Another consequence of the completeness theorem is that it justifies the conception of infinitesimals as actual infinitely small nonzero quantities, since non-standard models exist and are just as legitimate as standard ones. This idea was formalized by Abraham Robinson into the theory of nonstandard analysis.
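
Concretely (a standard fact of nonstandard analysis, stated here for illustration), a non-standard model of the ordered field of real numbers contains elements $\varepsilon$ satisfying
\[
0 < \varepsilon < \frac{1}{n} \quad \text{for every standard natural number } n \ge 1,
\]
and such an $\varepsilon$ behaves as an actual infinitely small quantity inside the model, even though no standard real number has this property.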

More paradoxes

The following lists some notable results in metamathematics. Zermelo–Fraenkel set theory is the most widely studied axiomatization of set theory. It is abbreviated ZFC when it includes the axiom of choice and ZF when the axiom of choice is excluded.

  • 1920: Thoralf Skolem corrected Leopold Löwenheim's proof of what is now called the downward Löwenheim–Skolem theorem, leading to Skolem's paradox discussed in 1922, namely the existence of countable models of ZF, making infinite cardinalities a relative property.
  • 1922: Proof by Abraham Fraenkel that the axiom of choice cannot be proved from the axioms of Zermelo set theory with urelements.
  • 1931: Publication of Gödel's incompleteness theorems, showing that essential aspects of Hilbert's program could not be attained. Gödel showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system – such as one needed to axiomatize the elementary theory of arithmetic on the (infinite) set of natural numbers – a statement that formally expresses its own unprovability, and he then proved it equivalent to the claim that the theory is consistent; so that (assuming the consistency is true) the system is not powerful enough to prove its own consistency, let alone that a simpler system could do the job. It thus became clear that the notion of mathematical truth cannot be completely determined and reduced to a purely formal system as envisaged in Hilbert's program. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly which axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a 'weaker' system than the system whose consistency it was supposed to prove).
  • 1936: Alfred Tarski proved his truth undefinability theorem.
  • 1936: Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist (see the sketch of the diagonal argument after this list).
  • 1938: Gödel proved the consistency of the axiom of choice and of the generalized continuum hypothesis relative to the axioms of ZF.
  • 1936–1937: Alonzo Church and Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible: the universal validity of statements in first-order logic is not decidable (it is only semi-decidable as given by the completeness theorem).
  • 1955: Pyotr Novikov showed that there exists a finitely presented group G such that the word problem for G is undecidable.
  • 1963: Paul Cohen showed that the continuum hypothesis is unprovable from ZFC. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.
  • 1964: Inspired by the fundamental randomness in physics, Gregory Chaitin started publishing results on algorithmic information theory (measuring incompleteness and randomness in mathematics).
  • 1966: Paul Cohen showed that the axiom of choice is unprovable in ZF even without urelements.
  • 1970: Hilbert's tenth problem is proven unsolvable: there is no recursive solution to decide whether a Diophantine equation (multivariable polynomial equation) has a solution in integers.
  • 1971: Suslin's problem is proven to be independent from ZFC.
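
The sketch referenced in the 1936 Turing entry above: a minimal Python rendering of the diagonal argument, assuming for the sake of contradiction a hypothetical oracle halts(program, argument) that decides the halting problem. The names are illustrative placeholders, not taken from the article.

def halts(program, argument):
    """Hypothetical oracle assumed, for the sake of contradiction, to decide
    the halting problem: returns True exactly when program(argument) halts.
    Turing's theorem says no such total computable function can exist."""
    raise NotImplementedError  # placeholder; the point is that it cannot be filled in

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever instead
            pass
    return            # predicted to run forever, so halt immediately

# The contradiction: consider diagonal(diagonal).
# If halts(diagonal, diagonal) returns True, diagonal(diagonal) loops forever;
# if it returns False, diagonal(diagonal) halts. Either way the oracle is
# wrong about this case, so no such oracle can exist.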

Toward resolution of the crisis

Starting in 1935, the Bourbaki group of French mathematicians began publishing a series of books to formalize many areas of mathematics on the new foundation of set theory.

The intuitionistic school did not attract many adherents, and it was not until Errett Bishop's work in 1967 that constructive mathematics was placed on a sounder footing.

One may consider that Hilbert's program has been partially completed, so that the crisis is essentially resolved, provided we satisfy ourselves with lower requirements than Hilbert's original ambitions. His ambitions were expressed at a time when nothing was clear: it was not clear whether mathematics could have a rigorous foundation at all.

There are many possible variants of set theory, which differ in consistency strength, where stronger versions (postulating higher types of infinities) contain formal proofs of the consistency of weaker versions, but none contains a formal proof of its own consistency. Thus the only thing we don't have is a formal proof of consistency of whatever version of set theory we may prefer, such as ZF.
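
For example (standard consistency-strength facts, stated here for illustration rather than quoted from the article):
\[
\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}), \qquad
\mathrm{ZFC} + \text{``there exists an inaccessible cardinal''} \vdash \mathrm{Con}(\mathrm{ZFC}), \qquad
\text{but } \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC}),
\]
the last holding, by Gödel's second incompleteness theorem, provided ZFC is consistent.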

In practice, most mathematicians either do not work from axiomatic systems, or, if they do, do not doubt the consistency of ZFC, generally their preferred axiomatic system. In most of mathematics as it is practiced, the incompleteness and paradoxes of the underlying formal theories never played a role anyway; in those branches in which they do play a role, or in which attempts at formalization would run the risk of forming inconsistent theories (such as logic and category theory), they can be treated with care.

The development of category theory in the middle of the 20th century showed the usefulness of set theories that guarantee the existence of larger classes than ZFC does, such as Von Neumann–Bernays–Gödel set theory or Tarski–Grothendieck set theory, although in very many cases the use of large cardinal axioms or Grothendieck universes is formally eliminable.

One goal of the reverse mathematics program is to identify whether there are areas of "core mathematics" in which foundational issues may again provoke a crisis.
