
Sunday, February 18, 2018

Theory of everything

From Wikipedia, the free encyclopedia

A theory of everything (ToE), final theory, ultimate theory, or master theory is a hypothetical single, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe.[1]:6 Finding a ToE is one of the major unsolved problems in physics. Over the past few centuries, two theoretical frameworks have been developed that, together, most closely resemble a ToE. These two theories, upon which all modern physics rests, are general relativity (GR) and quantum field theory (QFT). GR is a theoretical framework that focuses on gravity to understand the universe in regions of large scale and high mass: stars, galaxies, clusters of galaxies, and so on. QFT, on the other hand, focuses on the three non-gravitational forces to understand the universe in regions of small scale and low mass: sub-atomic particles, atoms, molecules, and so on. QFT underpins the Standard Model, which describes the three non-gravitational forces – weak, strong, and electromagnetic – and proposed Grand Unified Theories aim to merge these three into a single force.[2]:122

Through years of research, physicists have experimentally confirmed with tremendous accuracy virtually every prediction made by these two theories within their appropriate domains of applicability. In the process, they also learned that GR and QFT, as they are currently formulated, are mutually incompatible – they cannot both be right. Since the usual domains of applicability of GR and QFT are so different, most situations require that only one of the two theories be used.[3][4]:842–844 As it turns out, this incompatibility is apparently only an issue in regions of extremely small scale and high mass, such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve the conflict, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of GR and QFT into a seamless whole: a single theory that, in principle, is capable of describing all phenomena. In pursuit of this goal, quantum gravity has become an area of active research.

Eventually a single explanatory framework, called "string theory", emerged that aims to be the ultimate theory of the universe. String theory posits that at the beginning of the universe (up to 10⁻⁴³ seconds after the Big Bang), the four fundamental forces were a single fundamental force. According to string theory, every particle in the universe, at its most microscopic level (the Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory further claims that it is through these specific oscillatory patterns of strings that particles of unique mass and force charge are created (that is to say, the electron is a type of string that vibrates one way, while the up quark is a type of string vibrating another way, and so forth).

Initially, the term theory of everything was used with an ironic connotation to refer to various overgeneralized theories. For example, a grandfather of Ijon Tichy – a character from a cycle of Stanisław Lem's science fiction stories of the 1960s – was known to work on the "General Theory of Everything". Physicist John Ellis claims[5] to have introduced the term into the technical literature in an article in Nature in 1986.[6] Over time, the term stuck in popularizations of theoretical physics research.

Historical antecedents

From ancient Greece to Einstein

In ancient Greece, pre-Socratic philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the motions and collisions of atoms. The concept of 'atom', introduced by Democritus, was an early philosophical attempt to unify all phenomena observed in nature.

Archimedes was possibly the first scientist known to have described nature with axioms (or principles) and then to deduce new results from them. He thus tried to describe "everything" starting from a few axioms. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them.[7]:340

Following Democritean atomism, the mechanical philosophy of the 17th century posited that all forces could be ultimately reduced to contact forces between the atoms, then imagined as tiny solid particles.[8]:184[9]

In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Mathematical Principles of Natural Philosophy dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation.[10]

In 1814, building on these results, Laplace famously suggested that a sufficiently powerful intellect could, if it knew the position and velocity of every particle at a given time, along with the laws of nature, calculate the position of any particle at any other time:[11]:ch 7
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
— Essai philosophique sur les probabilités, Introduction. 1814
Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything. Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics.
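Laplace's claim can be made concrete in Newtonian mechanics: positions and velocities at one instant, plus the force law, determine the state at every later time. Below is a minimal Python sketch (a hypothetical illustration, not part of the article) that evolves the Sun–Earth system deterministically with simple Euler steps; the quantum-mechanical amendment is precisely that such point predictions are unavailable even in principle.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(pos, vel, masses, dt):
    # One Euler step: compute gravitational accelerations, then update
    # velocities and positions. The state at time t fully determines t + dt.
    acc = []
    for i in range(len(pos)):
        ax = ay = 0.0
        for j in range(len(pos)):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += G * masses[j] * dx / r3
                ay += G * masses[j] * dy / r3
        acc.append((ax, ay))
    vel = [(vx + ax * dt, vy + ay * dt) for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [(x + vx * dt, y + vy * dt) for (x, y), (vx, vy) in zip(pos, vel)]
    return pos, vel

# Sun and Earth: given today's state, every future position follows.
pos = [(0.0, 0.0), (1.496e11, 0.0)]   # metres
vel = [(0.0, 0.0), (0.0, 29780.0)]    # metres per second
masses = [1.989e30, 5.972e24]         # kilograms
for _ in range(24):                   # one day, in hourly steps
    pos, vel = step(pos, vel, masses, 3600.0)
print(pos[1])                         # Earth's deterministically predicted position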

In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865, in James Clerk Maxwell's theory of electromagnetism. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter.

In his experiments of 1849–50, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism.[12] However, he found no connection.

In 1900, David Hilbert published a famous list of mathematical problems. In Hilbert's sixth problem, he challenged researchers to find an axiomatic basis to all of physics. In this problem he thus asked for what today would be called a theory of everything.[13]

In the late 1920s, the new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known".[14]

After 1915, when Albert Einstein published his theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with renewed interest. In Einstein's day, the strong and weak forces had not yet been discovered, and he found the potential unification of the two known forces, gravity and electromagnetism, all the more alluring. This launched his thirty-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand underlying principle. During these last few decades of his life, this quixotic quest isolated Einstein from the mainstream of physics, which, understandably, was far more excited about the newly emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors were Gunnar Nordström, Hermann Weyl, Arthur Eddington, David Hilbert,[15] Theodor Kaluza, Oskar Klein (see Kaluza–Klein theory), and most notably, Albert Einstein and his collaborators. Einstein intensely searched for, but ultimately failed to find, a unifying theory.[16]:ch 17 (But see: Einstein–Maxwell–Dirac equations.) More than half a century later, Einstein's dream of discovering a unified theory has become the Holy Grail of modern physics.

Twentieth century and the nuclear interactions

In the twentieth century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces (or interactions), which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a ToE, quantum mechanics had to be incorporated from the start, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.

Gravity and electromagnetism could always peacefully coexist as entries in a list of classical forces, but for many years it seemed that gravity could not even be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the twentieth century, focused on understanding the three "quantum" forces: electromagnetism and the weak and strong forces. The first two were combined in 1967–68 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the "electroweak" force.[17] Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses of 80.4 GeV/c² and 91.2 GeV/c², whereas the photon, which carries the electromagnetic force, is massless. At higher energies Ws and Zs can be created easily and the unified nature of the force becomes apparent.

While the strong and electroweak forces peacefully coexist in the Standard Model of particle physics, they remain distinct. So far, the quest for a theory of everything is thus unsuccessful on two points: neither a unification of the strong and electroweak forces – which Laplace would have called 'contact forces' – has been achieved, nor has a unification of these forces with gravitation been achieved.

Modern physics

Conventional sequence of theories

A Theory of Everything would unify all the fundamental interactions of nature: gravitation, strong interaction, weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the ToE should also yield a deep understanding of the various different kinds of possible particles. The usual assumed path of theories is given in the following graph, where each unification step leads one level up:

Theory of everything
└── Quantum gravity
    ├── Space Curvature  [Standard model of cosmology]
    └── Electronuclear force (GUT)  [Standard model of particle physics]
        ├── Strong interaction: SU(3)
        └── Electroweak interaction: SU(2) × U(1)_Y
            ├── Weak interaction
            └── Electromagnetism: U(1)_EM
                ├── Electricity
                └── Magnetism

(Each merging of branches is one unification step; the bracketed entries mark where the original chart's dashed "standard model" boxes sit.)

In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10¹⁶ GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10¹⁹ GeV.
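The Planck energy quoted here is fixed by the fundamental constants alone, E_P = √(ħc⁵/G). A quick numerical check, as a Python sketch (the constant values are standard CODATA figures, not from the article):

# Planck energy E_P = sqrt(hbar * c^5 / G), converted to GeV.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
eV = 1.602176634e-19     # J

E_planck_joules = (hbar * c**5 / G) ** 0.5
E_planck_GeV = E_planck_joules / eV / 1e9
print(f"{E_planck_GeV:.3e} GeV")  # ~1.22e19 GeV, the "roughly 10^19 GeV" above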

Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10¹⁶ GeV, far greater than could be reached by any possible Earth-based particle accelerator. Although the simplest GUTs have been experimentally ruled out, the general idea, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric GUTs seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to GUT physics (although it does not seem to form an inevitable part of the theory). Yet GUTs are clearly not the final answer; both the current standard model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies.[3]

The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity. Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity – and thus no accepted theory of everything – has emerged yet. It is usually assumed that the ToE will also solve the remaining problems of GUTs.

In addition to explaining the forces listed in the graph, a ToE may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the standard model. However, the existence of these forces and particles has not been proven.

String theory and M-theory

Since the 1990s, some physicists[who?] believe that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric 11-dimensional supergravity, is the theory of everything. However, there is no widespread consensus on this issue.

A surprising property of string/M-theory is that extra dimensions are required for the theory's consistency. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a five-dimensional universe (with one dimension small and curled up) looks from the four-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but did not address the detailed experimental requirements. Another important property of string theory is its supersymmetry; supersymmetry and extra dimensions are the two main proposals for resolving the hierarchy problem of the standard model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a four-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms.[18]

Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the standard model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory[19] or (sometimes equivalently) in F-theory.[20][21] String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations.[22] On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes[23][24] and allowing for topology-changing processes.[25][26][27] It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality.

In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible four-dimensional universes is incredibly large. The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10⁵⁰⁰), each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape.[7]:347

One proposed solution is that many or all of these possibilities are realised in one or another of a huge number of universes, but that only a small number of them are habitable. Hence what we normally conceive as the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led to criticism of string theory,[28] with some arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions and regarding it as a pseudoscience. Others disagree,[29] and string theory remains an extremely active topic of investigation in theoretical physics.[citation needed]

Loop quantum gravity

Current research on loop quantum gravity may eventually play a fundamental role in a ToE, but that is not its primary aim.[30] Loop quantum gravity also introduces a lower bound on the possible length scales.

There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks), with correct parity properties, has been modelled by Sundance Bilson-Thompson, using preons constituted of braids of spacetime as the building blocks.[31] However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Concepts from quantum computing have been used to demonstrate that these particles can survive quantum fluctuations.[32]

This model leads to an interpretation of electric and colour charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge).

Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, colour, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin.[33]

Other attempts

A recent development is the theory of causal fermion systems,[34] giving the two current physical theories (general relativity and quantum field theory) as limiting cases.

A recent attempt is called causal sets. Like some of the approaches mentioned above, its direct goal is not necessarily a ToE but primarily a working theory of quantum gravity, which might eventually include the standard model and become a candidate for a ToE. Its founding principle is that spacetime is fundamentally discrete and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between relative past and future distinguishing spacetime events.

Apart from the attempts mentioned above, there is Garrett Lisi's E8 proposal. This theory attempts to identify general relativity and the standard model within the Lie group E8. The theory does not provide a novel quantization procedure, and the author suggests its quantization might follow the loop quantum gravity approach mentioned above.[35]

Causal dynamical triangulation does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves.

Christoph Schiller's strand model attempts to account for the gauge symmetry of the Standard Model of particle physics, U(1)×SU(2)×SU(3), with the three Reidemeister moves of knot theory, by equating each elementary particle to a different tangle of one, two, or three strands (respectively a long prime knot or unknotted curve, a rational tangle, or a braided tangle).

Another attempt may be related to ER=EPR, a conjecture in physics stating that entangled particles are connected by a wormhole (or Einstein–Rosen bridge).[36][37]

Present status

At present, there is no candidate theory of everything that includes the standard model of particle physics and general relativity. For example, no candidate theory is able to calculate the fine structure constant or the mass of the electron. Most particle physicists expect that the outcomes of ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – will be needed to provide further input for a ToE.

Philosophy

The philosophical implications of a physical ToE are frequently debated. For example, if philosophical physicalism is true, a physical ToE will coincide with a philosophical theory of everything.
The "system building" style of metaphysics attempts to answer all the important questions in a coherent way, providing a complete picture of the world. Aristotle is the first and most noteworthy philosopher to have attempted such a comprehensive system in his Metaphysics. While Aristotle made important contributions to all the sciences in terms of his method of logic and his first principal of causality, he was later demonized by later modern philosophers of the Enlightenment like Immanuel Kant who criticized him for his idea of God as first cause. Isaac Newton and his Mathematical Principles of Natural Philosophy constituted the most all encompassing attempt at a theory of everything up until the twentieth century and Albert Einstein's General Theory of Relativity. After David Hume's attacks upon the inductive method utilized in all the sciences, the German Idealists such as Kant and G.W.F. Hegel - and the many philosophical reactions they inspired - took a decided turn away from natural philosophy and the physical sciences and focused instead on issues of perception, cognition, consciousness and ultimately language.

Arguments against

In parallel to the intense search for a ToE, various scholars have seriously debated the possibility of its discovery.

Gödel's incompleteness theorem

A number of scholars claim that Gödel's incompleteness theorem suggests that any attempt to construct a ToE is bound to fail. Gödel's theorem, informally stated, asserts that any formal theory expressive enough for elementary arithmetical facts to be expressed and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory.
Stanley Jaki, in his 1966 book The Relevance of Physics, pointed out that, because any "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete. He claims that this dooms searches for a deterministic theory of everything.[38]

Freeman Dyson has stated that "Gödel's theorem implies that pure mathematics is inexhaustible. No matter how many problems we solve, there will always be other problems that cannot be solved within the existing rules. […] Because of Gödel's theorem, physics is inexhaustible too. The laws of physics are a finite set of rules, and include the rules for doing mathematics, so that Gödel's theorem applies to them."[39]

Stephen Hawking was originally a believer in the Theory of Everything but, after considering Gödel's Theorem, concluded that one was not obtainable: "Some people will be very disappointed if there is not an ultimate theory, that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind."[40]

Jürgen Schmidhuber (1997) has argued against this view; he points out that Gödel's theorems are irrelevant for computable physics.[41] In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not at all prevent formal ToEs describable by very few bits of information.[42]

A related critique was offered by Solomon Feferman,[43] among others. Douglas S. Robertson offers Conway's Game of Life as an example:[44] the underlying rules are simple and complete, but there are formally undecidable questions about the game's behaviors. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws.
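Robertson's example is easy to make concrete: the complete rules of the Game of Life fit in a few lines of code, yet general questions such as "does this starting pattern ever die out?" are formally undecidable. A minimal Python sketch (the set-based representation is my own choice):

# Conway's Game of Life: the complete rule set, in full. Despite this
# simplicity, whether an arbitrary starting pattern eventually dies out
# is formally undecidable.

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    # A cell is alive next generation iff it has exactly 3 live
    # neighbors, or it is alive now and has exactly 2.
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):        # after 4 steps the glider reappears, shifted
    state = step(state)
print(sorted(state))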

Since most physicists would consider the statement of the underlying rules to suffice as the definition of a "theory of everything", most physicists argue that Gödel's Theorem does not mean that a ToE cannot exist. On the other hand, the scholars invoking Gödel's Theorem appear, at least in some cases, to be referring not to the underlying rules, but to the understandability of the behavior of all physical systems, as when Hawking mentions arranging blocks into rectangles, turning the computation of prime numbers into a physical question.[45] This definitional discrepancy may explain some of the disagreement among researchers.

Fundamental limits in accuracy

No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasion.[46] Following this view, we may reasonably hope for a theory of everything which self-consistently incorporates all currently known forces, but we should not expect it to be the final answer.

On the other hand, it is often claimed that, despite the apparently ever-increasing complexity of the mathematics of each new theory, in a deep sense associated with their underlying gauge symmetry and the number of dimensionless physical constants, the theories are becoming simpler. If this is the case, the process of simplification cannot continue indefinitely.

Lack of fundamental laws

There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe.[47] One view is the hard reductionist position that the ToE is the fundamental law and that all other theories that apply within the universe are a consequence of the ToE. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems, are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a ToE.

The debates do not make the point at issue clear. Possibly the only issue at stake is the right to apply the high-status term "fundamental" to the respective subjects of research. A well-known debate over this took place between Steven Weinberg and Philip Anderson.[citation needed]

Impossibility of being "of everything"

Although the name "theory of everything" suggests the determinism of Laplace's quotation, this gives a very misleading impression. Determinism is frustrated by the probabilistic nature of quantum mechanical predictions, by the extreme sensitivity to initial conditions that leads to mathematical chaos, by the limitations due to event horizons, and by the extreme mathematical difficulty of applying the theory. Thus, although the current standard model of particle physics "in principle" predicts almost all known non-gravitational phenomena, in practice only a few quantitative results have been derived from the full theory (e.g., the masses of some of the simplest hadrons), and these results (especially the particle masses which are most relevant for low-energy physics) are less accurate than existing experimental measurements. The ToE would almost certainly be even harder to apply for the prediction of experimental results, and thus might be of limited use.

A motive for seeking a ToE,[citation needed] apart from the pure intellectual satisfaction of completing a centuries-long quest, is that prior examples of unification have predicted new phenomena, some of which (e.g., electrical generators) have proved of great practical importance. As in these prior examples of unification, the ToE would probably allow us to confidently define the domain of validity and residual error of low-energy approximations to the full theory.

Infinite number of onion layers

Frank Close regularly argues that the layers of nature may be like the layers of an onion, and that the number of layers might be infinite.[48] This would imply an infinite sequence of physical theories.
The argument is not universally accepted, because it is not obvious that infinity is a concept that applies to the foundations of nature.

Impossibility of calculation

Weinberg[49] points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a ToE must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics.

Saturday, February 17, 2018

Axiom

From Wikipedia, the free encyclopedia
An axiom or postulate is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Greek axíōma (ἀξίωμα) 'that which is thought worthy or fit' or 'that which commends itself as evident.'[1][2]
The term has subtle differences in definition when used in the context of different fields of study. As defined in classic philosophy, an axiom is a statement that is so evident or well-established, that it is accepted without controversy or question.[3] As used in modern logic, an axiom is simply a premise or starting point for reasoning.[4]

As used in mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". Logical axioms are usually statements that are taken to be true within the system of logic they define (e.g., (A and B) implies A), often shown in symbolic form, while non-logical axioms (e.g., a + b = b + a) are actually substantive assertions about the elements of the domain of a specific mathematical theory (such as arithmetic). When used in the latter sense, "axiom", "postulate", and "assumption" may be used interchangeably. In general, a non-logical axiom is not a self-evident truth, but rather a formal logical expression used in deduction to build a mathematical theory. To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms). There are typically multiple ways to axiomatize a given mathematical domain.

In both senses, an axiom is any mathematical statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom, or any mathematical statement, to be "true" is an open question[citation needed] in the philosophy of mathematics.[5]

Etymology

The word axiom comes from the Greek word ἀξίωμα (axíōma), a verbal noun from the verb ἀξιόειν (axioein), meaning "to deem worthy", but also "to require", which in turn comes from ἄξιος (áxios), meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers an axiom was a claim which could be seen to be true without any need for proof.

The root meaning of the word postulate is to "demand"; for instance, Euclid demands that one agree that some things can be done, e.g. any two points can be joined by a straight line, etc.[6]
Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid's books, Proclus remarks that, "Geminus held that this [4th] Postulate should not be classed as a postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property."[7] Boethius translated 'postulate' as petitio and called the axioms notiones communes but in later manuscripts this usage was not always strictly kept.

Historical development

Early Greeks

The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference), was developed by the ancient Greeks, and has become the core principle of modern mathematics.  Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, if we are talking about mathematics) must be proven with the aid of these basic assumptions. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present day mathematician, than they did for Aristotle and Euclid.

The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's posterior analytics is a definitive exposition of the classical view.

An "axiom", in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that
When an equal amount is taken from equals, an equal amount results.
At the foundation of the various sciences lay certain additional hypotheses which were accepted without proof. Such a hypothesis was termed a postulate. While the axioms were common to many sciences, the postulates of each particular science were different. Their validity had to be established by means of real-world experience. Indeed, Aristotle warns that the content of a science cannot be successfully communicated, if the learner is in doubt about the truth of the postulates.[8]

The classical approach is well-illustrated by Euclid's Elements, where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of "common notions" (very basic, self-evident assertions).
Postulates
  1. It is possible to draw a straight line from any point to any other point.
  2. It is possible to extend a line segment continuously in both directions.
  3. It is possible to describe a circle with any center and any radius.
  4. It is true that all right angles are equal to one another.
  5. ("Parallel postulate") It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, intersect on that side on which are the angles less than the two right angles.
Common notions
  1. Things which are equal to the same thing are also equal to one another.
  2. If equals are added to equals, the wholes are equal.
  3. If equals are subtracted from equals, the remainders are equal.
  4. Things which coincide with one another are equal to one another.
  5. The whole is greater than the part.

Modern development

A lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. One must concede the need for primitive notions, or undefined terms or concepts, in any study. Such abstraction or formalization makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement.

Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an "axiom" and a "postulate" disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. However, by throwing out Euclid's fifth postulate we get theories that have meaning in wider contexts, hyperbolic geometry for example. We must simply be prepared to use labels like "line" and "parallel" with greater flexibility. The development of hyperbolic geometry taught mathematicians that postulates should be regarded as purely formal statements, and not as facts based on experience.

When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all.

It is not correct to say that the axioms of field theory are "propositions that are regarded as true without proof." Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system.
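This "constraints" view can be made concrete: take any candidate system of addition and multiplication and check the field axioms directly; if every check passes, all of field theory applies to it at once. A Python sketch for arithmetic modulo 5 (a hypothetical illustration; the identity axioms are implicit in the inverse checks below):

# Check the field axioms for Z/5 (integers mod 5). Any system passing
# these checks inherits every theorem of field theory.
p = 5
F = range(p)
add = lambda a, b: (a + b) % p
mul = lambda a, b: (a * b) % p

assert all(add(a, b) == add(b, a) for a in F for b in F)          # + commutative
assert all(mul(a, b) == mul(b, a) for a in F for b in F)          # * commutative
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in F for b in F for c in F)                      # + associative
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in F for b in F for c in F)                      # * associative
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in F for b in F for c in F)                      # distributivity
assert all(any(add(a, b) == 0 for b in F) for a in F)             # additive inverses
assert all(any(mul(a, b) == 1 for b in F) for a in F if a != 0)   # mult. inverses

Rerun with p = 6 and the multiplicative-inverse check fails (2 has no inverse modulo 6), so Z/6 is not a field, and none of field theory's theorems are guaranteed for it.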

Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development.

In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom.

It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms.

In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here the emergence of Russell's paradox, and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent.

The formalist project suffered a decisive setback when, in 1931, Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example), to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory.

It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics.

Other sciences

Axioms play a key role not only in mathematics, but also in other sciences, notably in theoretical physics. In particular, the monumental work of Isaac Newton is essentially based on Euclid's axioms, augmented by a postulate on the non-relation of spacetime and the physics taking place in it at any moment.

In 1905, Newton's axioms were replaced by those of Albert Einstein's special relativity, and later on by those of general relativity.

In 1935, another paper by Albert Einstein and coworkers (see EPR paradox), almost immediately disputed by Niels Bohr, concerned the interpretation of quantum mechanics. According to Bohr, the new theory should be probabilistic; according to Einstein, it should be deterministic. Notably, the underlying quantum mechanical theory, i.e. the set of "theorems" derived by it, seemed to be identical in both cases. Einstein even assumed that it would be sufficient to add "hidden variables" to quantum mechanics to enforce determinism. However, thirty years later, in 1964, John Bell proved a theorem, involving complicated optical correlations (see Bell inequalities), which yielded measurably different predictions from Einstein's axioms than from Bohr's. And it took roughly another twenty years until experiments by Alain Aspect produced results in favour of Bohr's axioms, not Einstein's. (Bohr's axioms are simply: the theory should be probabilistic in the sense of the Copenhagen interpretation.)
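The measurable difference Bell identified can be sketched numerically. For a spin singlet, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between measurements at angles a and b; the CHSH combination of four such correlations reaches 2√2, while any local deterministic assignment of outcomes, of the kind Einstein's axioms would permit, is bounded by 2. A Python sketch (the angle choices are the standard CHSH ones, an assumption here, not from the article):

# CHSH inequality: local hidden-variable theories obey |S| <= 2,
# while quantum mechanics for a singlet pair reaches 2*sqrt(2).
import math
from itertools import product

E = lambda a, b: -math.cos(a - b)     # quantum correlation for a singlet

a, ap = 0.0, math.pi / 2              # Alice's two measurement angles
b, bp = math.pi / 4, 3 * math.pi / 4  # Bob's two angles
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))                         # 2.828... = 2*sqrt(2) > 2

# Any deterministic assignment of +/-1 outcomes stays within the bound:
best = max(abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
           for A1, A2, B1, B2 in product((-1, 1), repeat=4))
print(best)                           # 2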

As a consequence, it is not necessary to explicitly cite Einstein's axioms, all the more so since they concern subtle points on the "reality" and "locality" of experiments.

Regardless, the role of axioms in mathematics and in the above-mentioned sciences is different. In mathematics one neither "proves" nor "disproves" an axiom for a set of theorems; the point is simply that in the conceptual realm identified by the axioms, the theorems logically follow. In contrast, in physics a comparison with experiments always makes sense, since a falsified physical theory needs modification.

Mathematical logic

In the field of mathematical logic, a clear distinction is made between two notions of axioms: logical and non-logical (somewhat similar to the ancient distinction between "axioms" and "postulates" respectively).

Logical axioms

These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense.

Examples

Propositional logic
In propositional logic it is common to take as logical axioms all formulae of the following forms, where φ, χ, and ψ can be any formulae of the language and where the included primitive connectives are only "¬" for negation of the immediately following proposition and "→" for implication from antecedent to consequent propositions:
  1. φ → (ψ → φ)
  2. (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
  3. (¬φ → ¬ψ) → (ψ → φ).
Each of these patterns is an axiom schema, a rule for generating an infinite number of axioms. For example, if A, B, and C are propositional variables, then A → (B → A) and (A → ¬B) → (C → (A → ¬B)) are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and modus ponens, one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with modus ponens.
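As an illustration of how such a system works in practice, here is the classic five-step derivation of the tautology A → A from schemata 1 and 2 plus modus ponens, as a Python sketch with formulas encoded as nested tuples (the encoding is my own, hypothetical choice):

# Derive A -> A in the Hilbert system given by axiom schemata 1 and 2
# plus modus ponens. Formulas are nested tuples: ('->', X, Y).
imp = lambda x, y: ('->', x, y)

ax1 = lambda p, q: imp(p, imp(q, p))                     # schema 1
ax2 = lambda p, q, r: imp(imp(p, imp(q, r)),
                          imp(imp(p, q), imp(p, r)))     # schema 2

def mp(premise, implication):
    # Modus ponens: from X and X -> Y, conclude Y.
    op, x, y = implication
    assert op == '->' and premise == x, "modus ponens does not apply"
    return y

A = 'A'
s1 = ax1(A, imp(A, A))     # A -> ((A -> A) -> A)
s2 = ax2(A, imp(A, A), A)  # (A -> ((A->A) -> A)) -> ((A -> (A->A)) -> (A -> A))
s3 = mp(s1, s2)            # (A -> (A -> A)) -> (A -> A)
s4 = ax1(A, A)             # A -> (A -> A)
s5 = mp(s4, s3)            # A -> A
assert s5 == imp(A, A)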

Other axiom schemas involving the same or different sets of primitive connectives can be alternatively constructed.[9]

These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus.[10]
First-order logic
Axiom of Equality. Let 𝔏 be a first-order language. For each variable x, the formula
x = x
is universally valid.

This means that, for any variable symbol x, the formula x = x can be regarded as an axiom. Also, for this not to fall into vagueness and a never-ending series of "primitive notions", either a precise notion of what we mean by x = x (or, for that matter, "to be equal") has to be well established first, or a purely formal and syntactic usage of the symbol = has to be enforced, regarding it only as a string of symbols; mathematical logic does indeed do that.

Another, more interesting example axiom scheme, is that which provides us with what is known as Universal Instantiation:

Axiom scheme for Universal Instantiation. Given a formula φ in a first-order language 𝔏, a variable x and a term t that is substitutable for x in φ, the formula
∀x φ → φ[t/x]
is universally valid.

Here the symbol φ[t/x] stands for the formula φ with the term t substituted for x. (See Substitution of variables.) In informal terms, this example allows us to state that, if we know that a certain property P holds for every x and that t stands for a particular object in our structure, then we should be able to claim P(t). Again, we are claiming that the formula ∀x φ → φ[t/x] is valid, that is, we must be able to give a "proof" of this fact, or, more properly speaking, a metaproof. These examples are in fact metatheorems of our theory of mathematical logic, since we are dealing with the very concept of proof itself. Aside from this, we can also have Existential Generalization:

Axiom scheme for Existential Generalization. Given a formula φ in a first-order language 𝔏, a variable x and a term t that is substitutable for x in φ, the formula
φ[t/x] → ∃x φ
is universally valid.
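The substitution φ[t/x] that both schemes rely on is itself a precise syntactic operation. Below is a naive Python sketch (the tuple representation is hypothetical, and a full treatment must verify that t is substitutable for x, i.e. that no variable of t gets captured by a quantifier; here we only refuse to substitute under a quantifier binding x):

# Naive substitution phi[t/x] on formulas represented as nested tuples:
# ('forall', var, body), ('->', X, Y), ('P', term), etc.

def subst(phi, x, t):
    if phi == x:
        return t
    if not isinstance(phi, tuple):
        return phi                 # another variable, constant, or symbol
    if phi[0] == 'forall' and phi[1] == x:
        return phi                 # x is bound here: no substitution inside
    return tuple(subst(part, x, t) for part in phi)

# Universal instantiation: from forall x. P(x), infer P(t).
phi = ('P', 'x')
assert subst(phi, 'x', 'c') == ('P', 'c')
# A quantifier binding x shields its body:
assert subst(('forall', 'x', ('P', 'x')), 'x', 'c') == ('forall', 'x', ('P', 'x'))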

Non-logical axioms

Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not tautologies. Another name for a non-logical axiom is postulate.[11]

Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought[citation needed] that in principle every theory could be axiomatized in this way and formalized down to the bare language of logical formulas.

Non-logical axioms are often simply referred to as axioms in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense. For example, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups.

Thus, an axiom is an elementary basis for a formal logic system that, together with the rules of inference, defines a deductive system.

Examples

This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms.

Basic theories, such as arithmetic, real analysis and complex analysis are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories such as Morse–Kelley set theory or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe are used, but in fact most mathematicians can actually prove all they need in systems weaker than ZFC, such as second-order arithmetic.[citation needed]

The study of topology in mathematics extends from point-set topology to algebraic topology and differential topology, and all the related paraphernalia, such as homology theory and homotopy theory. The development of abstract algebra brought with it group theory, rings, fields, and Galois theory.

This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry.
Arithmetic
The Peano axioms are the most widely used axiomatization of first-order arithmetic. They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem.[12]

We have a language 𝔏_NT = {0, S}, where 0 is a constant symbol and S is a unary function symbol, and the following axioms:
  1. ∀x. ¬(Sx = 0)
  2. ∀x. ∀y. (Sx = Sy → x = y)
  3. (φ(0) ∧ ∀x. (φ(x) → φ(Sx))) → ∀x. φ(x) for any 𝔏_NT formula φ with one free variable.
The standard structure is 𝔑 = ⟨ℕ, 0, S⟩, where ℕ is the set of natural numbers, S is the successor function, and 0 is naturally interpreted as the number 0.
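The standard structure can be spot-checked against these axioms: encode numerals as iterated applications of S to 0 and test axioms 1 and 2 on sample instances. The induction schema 3 ranges over all formulas and so cannot be verified by enumeration. A Python sketch (the representation is hypothetical):

# Numerals as iterated successor: 0, S0, SS0, ... encoded as nested tuples.
S = lambda n: ('S', n)

def numeral(k):
    n = '0'
    for _ in range(k):
        n = S(n)
    return n

# Axiom 1: Sx = 0 never holds.
assert all(S(numeral(k)) != '0' for k in range(100))

# Axiom 2: S is injective, Sx = Sy -> x = y (checked on samples).
assert all((S(numeral(i)) != S(numeral(j))) or (numeral(i) == numeral(j))
           for i in range(20) for j in range(20))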
Euclidean geometry
Probably the oldest, and most famous, list of axioms is Euclid's 4 + 1 postulates of plane geometry. The axioms are referred to as "4 + 1" because for nearly two millennia the fifth (parallel) postulate ("through a point outside a line there is exactly one parallel") was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. Indeed, one can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives us two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, known as Euclidean and hyperbolic geometries. If one also removes the second postulate ("a line can be extended indefinitely"), then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees.
Real analysis
The objects of study are within the domain of real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a Dedekind complete ordered field, meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires use of second-order logic. The Löwenheim–Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis.
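Dedekind completeness is exactly what rules out "gaps" of the kind the rationals have at √2: the set {q : q² < 2} is bounded above but has no least upper bound among the rationals. A short bisection sketch in Python closing in on the real supremum:

# The set {q in Q : q*q < 2} is bounded above but has no least upper
# bound in Q; in R, completeness guarantees one (namely sqrt(2)).
# Bisection narrows the interval containing that supremum.
lo, hi = 1.0, 2.0
for _ in range(50):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid          # mid is in the set: the supremum lies above
    else:
        hi = mid          # mid is an upper bound: the supremum lies below
print(lo)                 # ~1.4142135623730951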

Role in mathematical logic

Deductive systems and completeness

A deductive system consists of a set Λ of logical axioms, a set Σ of non-logical axioms, and a set {(Γ, φ)} of rules of inference. A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas φ,

if Σ ⊨ φ then Σ ⊢ φ

that is, for any statement that is a logical consequence of Σ there actually exists a deduction of the statement from Σ. This is sometimes expressed as "everything that is true is provable", but it must be understood that "true" here means "made true by the set of axioms", and not, for example, "true in the intended interpretation". Gödel's completeness theorem establishes the completeness of a certain commonly used type of deductive system.
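For propositional logic, the semantic side Σ ⊨ φ can be checked by brute force over truth assignments, which is one half of the relation the completeness theorem connects to derivability. A Python sketch, reusing the tuple encoding and the primitive connectives ¬ and → from the propositional example above (the encoding is hypothetical):

# Propositional semantic consequence: Sigma |= phi iff every truth
# assignment making all of Sigma true also makes phi true.
from itertools import product

def variables(phi, acc):
    # Collect the propositional variables occurring in phi.
    if isinstance(phi, str):
        acc.add(phi)
    else:
        for part in phi[1:]:
            variables(part, acc)
    return acc

def value(phi, v):
    # Truth value of phi under assignment v (primitives: 'not', '->').
    if isinstance(phi, str):
        return v[phi]
    if phi[0] == 'not':
        return not value(phi[1], v)
    if phi[0] == '->':
        return (not value(phi[1], v)) or value(phi[2], v)

def entails(sigma, phi):
    vs = set()
    for f in list(sigma) + [phi]:
        variables(f, vs)
    for bits in product((False, True), repeat=len(vs)):
        v = dict(zip(sorted(vs), bits))
        if all(value(s, v) for s in sigma) and not value(phi, v):
            return False
    return True

assert entails(['A', ('->', 'A', 'B')], 'B')   # semantic modus ponens
assert not entails(['A'], 'B')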

Note that "completeness" has a different meaning here than it does in the context of Gödel's first incompleteness theorem, which states that no recursive, consistent set of non-logical axioms \Sigma of the Theory of Arithmetic is complete, in the sense that there will always exist an arithmetic statement \phi such that neither \phi nor {\displaystyle \lnot \phi } can be proved from the given set of axioms.

There is thus, on the one hand, the notion of completeness of a deductive system and on the other hand that of completeness of a set of non-logical axioms. The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another.

Further discussion

Early mathematicians regarded axiomatic geometry as a model of physical space, and obviously there could only be one such model. The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details and modern algebra was born. In the modern view axioms may be any set of formulas, as long as they are not known to be inconsistent.

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...