
Friday, October 9, 2020

Particle in a box

 From Wikipedia, the free encyclopedia

Some trajectories of a particle in a box according to Newton's laws of classical mechanics (A), and according to the Schrödinger equation of quantum mechanics (B–F). In (B–F), the horizontal axis is position, and the vertical axis is the real part (blue) and imaginary part (red) of the wavefunction. The states (B,C,D) are energy eigenstates, but (E,F) are not.

In quantum mechanics, the particle in a box model (also known as the infinite potential well or the infinite square well) describes a particle free to move in a small space surrounded by impenetrable barriers. The model is mainly used as a hypothetical example to illustrate the differences between classical and quantum systems. In classical systems, for example, a particle trapped inside a large box can move at any speed within the box and it is no more likely to be found at one position than another. However, when the well becomes very narrow (on the scale of a few nanometers), quantum effects become important. The particle may only occupy certain positive energy levels. Likewise, it can never have zero energy, meaning that the particle can never "sit still". Additionally, it is more likely to be found at certain positions than at others, depending on its energy level. The particle may never be detected at certain positions, known as spatial nodes.

The particle in a box model is one of the very few problems in quantum mechanics which can be solved analytically, without approximations. Due to its simplicity, the model allows insight into quantum effects without the need for complicated mathematics. It serves as a simple illustration of how energy quantizations (energy levels), which are found in more complicated quantum systems such as atoms and molecules, come about. It is one of the first quantum mechanics problems taught in undergraduate physics courses, and it is commonly used as an approximation for more complicated quantum systems.

One-dimensional solution

The barriers outside a one-dimensional box have infinitely large potential, while the interior of the box has a constant, zero potential.

The simplest form of the particle in a box model considers a one-dimensional system. Here, the particle may only move backwards and forwards along a straight line with impenetrable barriers at either end. The walls of a one-dimensional box may be visualised as regions of space with an infinitely large potential energy. Conversely, the interior of the box has a constant, zero potential energy. This means that no forces act upon the particle inside the box and it can move freely in that region. However, infinitely large forces repel the particle if it touches the walls of the box, preventing it from escaping. The potential energy in this model is given as

V(x) = 0 for xc − L/2 < x < xc + L/2, and V(x) = ∞ elsewhere,

where L is the length of the box, xc is the location of the center of the box and x is the position of the particle within the box. Simple cases include the centered box (xc = 0) and the shifted box (xc = L/2).

Position wave function

In quantum mechanics, the wavefunction gives the most fundamental description of the behavior of a particle; the measurable properties of the particle (such as its position, momentum and energy) may all be derived from the wavefunction. The wavefunction can be found by solving the Schrödinger equation for the system

iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + V(x) ψ,

where ħ is the reduced Planck constant, m is the mass of the particle, i is the imaginary unit and t is time.

Inside the box, no forces act upon the particle, which means that the part of the wavefunction inside the box oscillates through space and time with the same form as a free particle:

ψ(x,t) = [A sin(kx) + B cos(kx)] e^(−iωt),     (1)

where A and B are arbitrary complex numbers. The frequency of the oscillations through space and time is given by the wavenumber k and the angular frequency ω respectively. These are both related to the total energy of the particle by the expression

E = ħω = ħ²k²/(2m),

which is known as the dispersion relation for a free particle. Here one must notice that now, since the particle is not entirely free but under the influence of a potential (the potential V described above), the energy of the particle given above is not the same thing as E = p²/(2m), where p is the momentum of the particle, and thus the wavenumber k above actually describes the energy states of the particle, not the momentum states (i.e. it turns out that the momentum of the particle is not given by p = ħk). In this sense, it is quite dangerous to call the number k a wavenumber, since it is not related to momentum like "wavenumber" usually is. The rationale for calling k the wavenumber is that it enumerates the number of crests that the wavefunction has inside the box, and in this sense it is a wavenumber. This discrepancy can be seen more clearly below, when we find out that the energy spectrum of the particle is discrete (only discrete values of energy are allowed) but the momentum spectrum is continuous (momentum can vary continuously), and in particular the relation E = p²/(2m) between the energy and momentum of the particle does not hold. As said above, the reason this relation between energy and momentum does not hold is that the particle is not free, but there is a potential V in the system, and the energy of the particle is E = T + V, where T is the kinetic and V the potential energy.

Initial wavefunctions for the first four states in a one-dimensional particle in a box

The size (or amplitude) of the wavefunction at a given position is related to the probability of finding a particle there by P(x,t) = |ψ(x,t)|². The wavefunction must therefore vanish everywhere beyond the edges of the box. Also, the amplitude of the wavefunction may not "jump" abruptly from one point to the next. These two conditions are only satisfied by wavefunctions with the form

ψn(x,t) = A sin(kn(x − xc + L/2)) e^(−iωn t) for xc − L/2 < x < xc + L/2, and ψn(x,t) = 0 otherwise,

where

kn = nπ/L,

and

En = ħωn = n²π²ħ²/(2mL²),

where n is a positive integer (1, 2, 3, 4, ...). For a shifted box (xc = L/2), the solution is particularly simple. The simplest solutions, kn = 0 or A = 0, both yield the trivial wavefunction ψ(x,t) = 0, which describes a particle that does not exist anywhere in the system. Negative values of kn are neglected, since they give wavefunctions identical to the positive kn solutions except for a physically unimportant sign change. Here one sees that only a discrete set of energy values and wavenumbers kn are allowed for the particle. Usually in quantum mechanics it is also demanded that the derivative of the wavefunction in addition to the wavefunction itself be continuous; here this demand would lead to the only solution being the constant zero function, which is not what we desire, so we give up this demand (as this system with infinite potential can be regarded as a nonphysical abstract limiting case, we can treat it as such and "bend the rules"). Note that giving up this demand means that the wavefunction is not a differentiable function at the boundary of the box, and thus it can be said that the wavefunction does not solve the Schrödinger equation at the boundary points x = xc − L/2 and x = xc + L/2 (but does solve it everywhere else).

Finally, the unknown constant A may be found by normalizing the wavefunction so that the total probability of finding the particle in the system is 1. It follows that

|A| = √(2/L).

Thus, A may be any complex number with absolute value √(2/L); these different values of A yield the same physical state, so A = √(2/L) can be selected to simplify.

It is expected that the eigenvalues, i.e., the energy En of the box, should be the same regardless of its position in space, but ψn(x,t) changes. Notice that kn(xc − L/2) merely represents a phase shift in the wave function. This phase shift has no effect when solving the Schrödinger equation, and therefore does not affect the eigenvalue.

If we set the origin of coordinates to the left edge of the box, we can rewrite the spatial part of the wave function succinctly as:

ψn(x) = √(2/L) sin(nπx/L).
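As a quick numerical sanity check of this result, the sketch below (Python, with an arbitrary box length and the left-edge convention just introduced) samples the spatial eigenfunctions on a grid and confirms that they are normalized and mutually orthogonal.

```python
# Minimal check of psi_n(x) = sqrt(2/L) * sin(n*pi*x/L); units are arbitrary.
import numpy as np

L = 1.0                          # box length
x = np.linspace(0.0, L, 20001)   # position grid spanning the box

def psi(n, x, L=L):
    """Spatial part of the n-th stationary state (n = 1, 2, 3, ...)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Normalization: the integral of |psi_n|^2 over the box should equal 1.
for n in (1, 2, 3):
    print(f"n={n}: integral of |psi|^2 =", round(np.trapz(psi(n, x) ** 2, x), 6))

# Orthogonality: the overlap of two different states should vanish.
print("overlap of psi_1 and psi_2 =", round(np.trapz(psi(1, x) * psi(2, x), x), 9))
```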

Momentum wave function

The momentum wavefunction is proportional to the Fourier transform of the position wavefunction. With k = p/ħ (note that the parameter k describing the momentum wavefunction below is not exactly the special kn above, which is linked to the energy eigenvalues), the momentum wavefunction is given by an expression built from the cardinal sine (sinc) function, sinc(x) = sin(x)/x. For the centered box (xc = 0), the solution is real and particularly simple, since the phase factor on the right reduces to unity. (With care, it can be written as an even function of p.)

It can be seen that the momentum spectrum in this wave packet is continuous, and one may conclude that for the energy state described by the wavenumber kn, the momentum can, when measured, also attain other values beyond p = ±ħkn.

Hence, it also appears that, since the energy is En = ħ²kn²/(2m) for the nth eigenstate, the relation E = p²/(2m) does not strictly hold for the measured momentum p; the energy eigenstate is not a momentum eigenstate, and, in fact, not even a superposition of two momentum eigenstates, as one might be tempted to imagine from equation (1) above: peculiarly, it has no well-defined momentum before measurement!
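The spread of momenta in an energy eigenstate can be illustrated numerically. The sketch below (Python, in units where ħ = 1 and L = 1, both arbitrary choices) evaluates the Fourier transform of ψn by direct quadrature and shows a continuous distribution that peaks near p = ±ħkn but carries weight elsewhere.

```python
# Numerical momentum distribution for the n-th box eigenstate (hbar = L = 1).
import numpy as np

hbar, L, n = 1.0, 1.0, 3
kn = n * np.pi / L                       # "energy" wavenumber k_n

x = np.linspace(0.0, L, 4001)
psi = np.sqrt(2.0 / L) * np.sin(kn * x)  # spatial eigenfunction

p = np.linspace(-40.0, 40.0, 2001)       # momentum grid
phi = np.array([np.trapz(psi * np.exp(-1j * pp * x / hbar), x) for pp in p])
phi /= np.sqrt(2.0 * np.pi * hbar)       # Fourier-transform normalization
density = np.abs(phi) ** 2

print("hbar*k_n                 =", round(hbar * kn, 3))
print("peak of |phi(p)|^2 at |p| =", round(abs(p[np.argmax(density)]), 3))
print("probability in the window =", round(np.trapz(density, p), 4))
# The peak sits near p = +/- hbar*k_n, but the distribution is continuous and
# nonzero at other momenta as well: the energy eigenstate has no sharp momentum.
```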

Position and momentum probability distributions

In classical physics, the particle can be detected anywhere in the box with equal probability. In quantum mechanics, however, the probability density for finding a particle at a given position is derived from the wavefunction as P(x) = |ψ(x)|². For the particle in a box, the probability density for finding the particle at a given position depends upon its state, and is given by

Pn(x) = (2/L) sin²(kn(x − xc + L/2)) inside the box, and Pn(x) = 0 outside it.

Thus, for any value of n greater than one, there are regions within the box for which Pn(x) = 0, indicating that spatial nodes exist at which the particle cannot be found.

In quantum mechanics, the average, or expectation value, of the position of a particle is given by

⟨x⟩ = ∫ x |ψ(x)|² dx.

For the steady state particle in a box, it can be shown that the average position is always ⟨x⟩ = xc, regardless of the state of the particle. For a superposition of states, the expectation value of the position will change based on the cross term, which is proportional to cos(ωt) (with ω the difference of the two states' angular frequencies).

The variance in the position is a measure of the uncertainty in position of the particle:

Var(x) = (L²/12)(1 − 6/(n²π²)).

The probability density for finding a particle with a given momentum is derived from the wavefunction as P(p) = |φ(p)|². As with position, the probability density for finding the particle at a given momentum depends upon its state, and is given by

Pn(p) = (2L/(πħ)) (nπ)² / ((nπ)² − (kL)²)² [1 − (−1)^n cos(kL)],

where, again, k = p/ħ. The expectation value for the momentum is then calculated to be zero, and the variance in the momentum is calculated to be:

Var(p) = (ħnπ/L)².

The uncertainties in position and momentum (Δx and Δp) are defined as being equal to the square root of their respective variances, so that:

Δx Δp = (ħ/2) √(n²π²/3 − 2).

This product increases with increasing n, having a minimum value for n = 1. The value of this product for n = 1 is about equal to 0.568 ħ, which obeys the Heisenberg uncertainty principle, which states that the product will be greater than or equal to ħ/2.
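A two-line check of these numbers, using the closed-form variances given above, is sketched below in Python.

```python
# Uncertainty product dx*dp = (hbar/2) * sqrt(n^2 pi^2 / 3 - 2), in units of hbar.
import numpy as np

for n in (1, 2, 3, 10):
    product = 0.5 * np.sqrt(n**2 * np.pi**2 / 3.0 - 2.0)
    print(f"n = {n:2d}: dx * dp = {product:.3f} hbar")
# n = 1 gives about 0.568 hbar, above the Heisenberg bound of 0.5 hbar, and the
# product grows with n.
```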

Another measure of uncertainty in position is the information entropy of the probability distribution Hx:

Hx = −∫ |ψ(x)|² ln(x0 |ψ(x)|²) dx = ln(2L/(e·x0)),

where x0 is an arbitrary reference length.

Another measure of uncertainty in momentum is the information entropy of the probability distribution Hp, defined analogously with an arbitrary reference momentum p0:

Hp = −∫ |φ(p)|² ln(p0 |φ(p)|²) dp;

its evaluated value for the stationary states involves Euler's constant γ. The quantum mechanical entropic uncertainty principle states that for x0p0 = ħ,

Hx + Hp ≥ ln(eπ) ≈ 2.14473 (nats)

For x0p0 = ħ, the sum of the position and momentum entropies of the stationary states exceeds this bound, and so the particle in a box satisfies the quantum entropic uncertainty principle.

Energy levels

The energy of a particle in a box (black circles) and a free particle (grey line) both depend upon wavenumber in the same way. However, the particle in a box may only have certain, discrete energy levels.

The energies which correspond with each of the permitted wavenumbers may be written as

En = n²π²ħ²/(2mL²) = n²h²/(8mL²).

The energy levels increase with n², meaning that high energy levels are separated from each other by a greater amount than low energy levels are. The lowest possible energy for the particle (its zero-point energy) is found in state 1, which is given by

E1 = ħ²π²/(2mL²) = h²/(8mL²).

The particle, therefore, always has a positive energy. This contrasts with classical systems, where the particle can have zero energy by resting motionlessly. This can be explained in terms of the uncertainty principle, which states that the product of the uncertainties in the position and momentum of a particle is limited by

Δx Δp ≥ ħ/2.

It can be shown that the uncertainty in the position of the particle is proportional to the width of the box. Thus, the uncertainty in momentum is roughly inversely proportional to the width of the box. The kinetic energy of a particle is given by E = p²/(2m), and hence the minimum kinetic energy of the particle in a box is inversely proportional to the mass and to the square of the well width, in qualitative agreement with the calculation above.
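For a feel for the numbers, the sketch below evaluates En = n²h²/(8mL²) for an electron confined to a 1 nm box (an illustrative width; the constants are standard values).

```python
# Energy levels of an electron in a 1 nm infinite well.
h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
eV = 1.602e-19     # J per electronvolt
L = 1e-9           # box width, m

for n in (1, 2, 3):
    E = n**2 * h**2 / (8 * m_e * L**2)
    print(f"n = {n}: E = {E / eV:.2f} eV")
# About 0.38, 1.50 and 3.38 eV; doubling L divides every level by four (E ~ 1/L^2).
```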

Higher-dimensional boxes

(Hyper)rectangular walls

The wavefunction of a 2D well with nx=4 and ny=4

If a particle is trapped in a two-dimensional box, it may freely move in the x- and y-directions, between barriers separated by lengths Lx and Ly respectively. For a centered box, the position wave function may be written including the length of the box as ψn(x, t, L). Using a similar approach to that of the one-dimensional box, it can be shown that the wavefunctions and energies for a centered box are given respectively by

ψnx,ny(x, y, t) = ψnx(x, t, Lx) ψny(y, t, Ly),
Enx,ny = ħ²|knx,ny|²/(2m),

where the two-dimensional wavevector is given by

knx,ny = (knx, kny) = (nxπ/Lx, nyπ/Ly).

For a three-dimensional box, the solutions are

ψnx,ny,nz(x, y, z, t) = ψnx(x, t, Lx) ψny(y, t, Ly) ψnz(z, t, Lz),
Enx,ny,nz = ħ²|knx,ny,nz|²/(2m),

where the three-dimensional wavevector is given by:

knx,ny,nz = (knx, kny, knz) = (nxπ/Lx, nyπ/Ly, nzπ/Lz).

In general for an n-dimensional box, the solutions are products of the one-dimensional wave functions, one factor per dimension, with the energies of the individual dimensions adding.

The n-dimensional momentum wave functions may likewise be represented by the Fourier transforms of the one-dimensional position wave functions, and the momentum wave function for an n-dimensional centered box is then the product of the one-dimensional momentum wave functions.

An interesting feature of the above solutions is that when two or more of the lengths are the same (e.g. Lx = Ly), there are multiple wavefunctions corresponding to the same total energy. For example, the wavefunction with nx = 2, ny = 1 has the same energy as the wavefunction with nx = 1, ny = 2. This situation is called degeneracy, and for the case where exactly two degenerate wavefunctions have the same energy that energy level is said to be doubly degenerate. Degeneracy results from symmetry in the system. For the above case two of the lengths are equal, so the system is symmetric with respect to a 90° rotation.
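The degeneracy pattern is easy to enumerate. The sketch below lists the lowest levels of a square two-dimensional box (Lx = Ly) in units of h²/(8mL²); pairs such as (nx, ny) = (1, 2) and (2, 1) land on the same energy.

```python
# Degeneracies of a square 2D box: E is proportional to nx^2 + ny^2.
from collections import defaultdict

levels = defaultdict(list)
for nx in range(1, 5):
    for ny in range(1, 5):
        levels[nx**2 + ny**2].append((nx, ny))

for energy in sorted(levels)[:6]:
    print(f"E = {energy:2d} * h^2/(8 m L^2): states {levels[energy]}")
```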

More complicated wall shapes

The wavefunction for a quantum-mechanical particle in a box whose walls have arbitrary shape is given by the Helmholtz equation subject to the boundary condition that the wavefunction vanishes at the walls. These systems are studied in the field of quantum chaos for wall shapes whose corresponding dynamical billiard tables are non-integrable.

Applications

Because of its mathematical simplicity, the particle in a box model is used to find approximate solutions for more complex physical systems in which a particle is trapped in a narrow region of low electric potential between two high potential barriers. These quantum well systems are particularly important in optoelectronics, and are used in devices such as the quantum well laser, the quantum well infrared photodetector and the quantum-confined Stark effect modulator. It is also used to model a lattice in the Kronig-Penney model and for a finite metal with the free electron approximation.

Conjugated polyenes

β-carotene is a conjugated polyene

Conjugated polyene systems can be modeled using the particle in a box. The conjugated system of electrons can be modeled as a one-dimensional box with length equal to the total bond distance from one terminus of the polyene to the other. In this case each pair of electrons in each π bond corresponds to one energy level. The energy difference between two energy levels, nf and ni, is:

ΔE = (nf² − ni²) h²/(8mL²).

The difference between the ground state energy, n, and the first excited state, n+1, corresponds to the energy required to excite the system. This energy has a specific wavelength, and therefore color of light, related by:

λ = hc/ΔE.

A common example of this phenomenon is β-carotene. β-carotene (C40H56) is a conjugated polyene with an orange color and a molecular length of approximately 3.8 nm (though its chain length is only approximately 2.4 nm). Due to β-carotene's high level of conjugation, electrons are dispersed throughout the length of the molecule, allowing one to model it as a one-dimensional particle in a box. β-carotene has 11 carbon-carbon double bonds in conjugation; each of those double bonds contains two π-electrons, therefore β-carotene has 22 π-electrons. With two electrons per energy level, β-carotene can be treated as a particle in a box at energy level n = 11. Therefore, the minimum energy needed to excite an electron to the next energy level, n = 12, can be calculated as follows (recalling that the mass of an electron is 9.109 × 10⁻³¹ kg):

ΔE = (12² − 11²) h²/(8mL²) ≈ 9.6 × 10⁻²⁰ J.

Using the previous relation of wavelength to energy, and recalling both Planck's constant h and the speed of light c:

λ = hc/ΔE ≈ 2.1 × 10⁻⁶ m.

This indicates that β-carotene primarily absorbs light in the infrared spectrum, therefore it would appear white to a human eye. However the observed wavelength is 450 nm, indicating that the particle in a box is not a perfect model for this system.
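The estimate above is easy to reproduce. The sketch below (Python, using the 3.8 nm length and the n = 11 → 12 transition quoted in the text) gives the predicted transition energy and wavelength.

```python
# Particle-in-a-box estimate of the beta-carotene HOMO -> LUMO transition.
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
m_e = 9.109e-31    # electron mass, kg
L = 3.8e-9         # effective box length, m

n_i, n_f = 11, 12
dE = (n_f**2 - n_i**2) * h**2 / (8 * m_e * L**2)
lam = h * c / dE
print(f"delta E ~ {dE:.2e} J, wavelength ~ {lam * 1e9:.0f} nm")
# About 2100 nm, in the infrared, versus the observed 450 nm absorption; this
# illustrates the limits of the idealized model for this molecule.
```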

Quantum well laser

The particle in a box model can be applied to quantum well lasers, which are laser diodes consisting of one semiconductor “well” material sandwiched between two other semiconductor layers of a different material. Because the layers of this sandwich are very thin (the middle layer is typically about 100 Å thick), quantum confinement effects can be observed. The idea that quantum effects could be harnessed to create better laser diodes originated in the 1970s. The quantum well laser was patented in 1976 by R. Dingle and C. H. Henry.

Specifically, the quantum well’s behavior can be represented by the particle in a finite well model. Two boundary conditions must be selected. The first is that the wave function must be continuous. Often, the second boundary condition is chosen to be that the derivative of the wave function must be continuous across the boundary, but in the case of the quantum well the masses are different on either side of the boundary. Instead, the second boundary condition is chosen to conserve particle flux, (1/m₁) ∂ψ₁/∂z = (1/m₂) ∂ψ₂/∂z, which is consistent with experiment. The finite well problem must be solved numerically, resulting in wave functions that are sine functions inside the quantum well and exponentially decaying functions in the barriers. This quantization of the energy levels of the electrons allows a quantum well laser to emit light more efficiently than conventional semiconductor lasers.
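As a rough illustration of the numerics described above, the sketch below discretizes a one-dimensional finite well on a grid and diagonalizes the Hamiltonian. It uses a single constant mass on both sides of the boundary rather than the flux-conserving condition mentioned in the text, and the 0.3 eV depth and 100 Å width are illustrative values, not parameters from the source.

```python
# Finite-difference sketch of a finite quantum well (single, constant mass).
import numpy as np

hbar = 1.0546e-34   # reduced Planck constant, J*s
m = 9.109e-31       # particle mass, kg (free-electron value for illustration)
eV = 1.602e-19      # J per electronvolt

L_well = 10e-9      # 100 Angstrom well width
V0 = 0.3 * eV       # barrier height (illustrative)

N = 2000
x = np.linspace(-2 * L_well, 2 * L_well, N)
dx = x[1] - x[0]
V = np.where(np.abs(x) < L_well / 2, 0.0, V0)

# Hamiltonian H = -hbar^2/(2m) d^2/dx^2 + V(x), second-order finite differences.
diag = hbar**2 / (m * dx**2) + V
off = -hbar**2 / (2 * m * dx**2) * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
bound = energies[energies < V0]
print("number of bound states:", bound.size)
print("lowest levels (eV):", np.round(bound[:4] / eV, 4))
# Using np.linalg.eigh instead also returns the eigenvectors, which are
# sinusoidal inside the well and decay exponentially in the barriers,
# as described in the text.
```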

Due to their small size, quantum dots do not showcase the bulk properties of the specified semiconductor but rather show quantised energy states. This effect is known as quantum confinement and has led to numerous applications of quantum dots such as the quantum well laser.

Researchers at Princeton University have recently built a quantum well laser which is no bigger than a grain of rice. The laser is powered by a single electron which passes through two quantum dots: a double quantum dot. The electron moves from a state of higher energy to a state of lower energy whilst emitting photons in the microwave region. These photons bounce off mirrors to create a beam of light: the laser.

The quantum well laser is heavily based on the interaction between light and electrons. This relationship is a key component in quantum mechanical theories, which include the de Broglie wavelength and the particle in a box. The double quantum dot allows scientists to gain full control over the movement of an electron, which consequently results in the production of a laser beam.

Quantum dots

Quantum dots are extremely small semiconductors (on the scale of nanometers). They display quantum confinement in that the electrons cannot escape the “dot”, thus allowing particle-in-a-box approximations to be applied. Their behavior can be described by three-dimensional particle-in-a-box energy quantization equations.

The energy gap of a quantum dot is the energy gap between its valence and conduction bands. This energy gap is equal to the band gap of the bulk material plus the energy equation derived from the particle in a box, which gives the energy for electrons and holes. This can be seen in the following equation, where me* and mh* are the effective masses of the electron and hole, a is the radius of the dot, and h is Planck's constant:

ΔE(a) = Egap + (h²/(8a²)) (1/me* + 1/mh*).

Hence, the energy gap of the quantum dot is inversely proportional to the square of the “length of the box,” i.e. the radius of the quantum dot.

Manipulation of the band gap allows for the absorption and emission of specific wavelengths of light, as energy is inversely proportional to wavelength. The smaller the quantum dot, the larger the band gap and thus the shorter the wavelength absorbed.

Different semiconducting materials are used to synthesize quantum dots of different sizes and therefore emit different wavelengths of light. Materials that normally emit light in the visible region are often used and their sizes are fine-tuned so that certain colors are emitted. Typical substances used to synthesize quantum dots are cadmium (Cd) and selenium (Se). For example, when the electrons of two nanometer CdSe quantum dots relax after excitation, blue light is emitted. Similarly, red light is emitted in four nanometer CdSe quantum dots.
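A rough check of this size/color trend is sketched below. It treats the quoted dot sizes as radii in the confinement formula above and plugs in approximate literature values for CdSe (bulk band gap about 1.74 eV, effective masses about 0.13 m0 for the electron and 0.45 m0 for the hole); these inputs are assumptions for illustration, and the electron-hole Coulomb attraction is neglected.

```python
# Confinement estimate for CdSe dots of radius 2 nm and 4 nm.
h = 6.626e-34      # Planck constant, J*s
m0 = 9.109e-31     # free-electron mass, kg
eV = 1.602e-19     # J per electronvolt

E_gap = 1.74 * eV                  # assumed bulk CdSe band gap
m_e, m_h = 0.13 * m0, 0.45 * m0    # assumed effective masses

for a_nm in (2.0, 4.0):
    a = a_nm * 1e-9                # dot radius, m
    E = E_gap + h**2 / (8 * a**2) * (1 / m_e + 1 / m_h)
    print(f"a = {a_nm} nm: E ~ {E / eV:.2f} eV, lambda ~ {1240 / (E / eV):.0f} nm")
# Roughly 465 nm (blue) for the smaller dot and 630 nm (red) for the larger one,
# matching the trend described above.
```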

Quantum dots have a variety of functions including but not limited to fluorescent dyes, transistors, LEDs, solar cells, and medical imaging via optical probes.

One function of quantum dots is their use in lymph node mapping, which is feasible due to their unique ability to emit light in the near infrared (NIR) region. Lymph node mapping allows surgeons to track if and where cancerous cells exist.

Quantum dots are useful for these functions due to their emission of brighter light, excitation by a wide variety of wavelengths, and higher resistance to light than other substances.

Teleology

From Wikipedia, the free encyclopedia
Plato and Aristotle, depicted here in The School of Athens, both developed philosophical arguments addressing the universe's apparent order (logos)

Teleology (from τέλος, telos, 'end', 'aim', or 'goal,' and λόγος, logos, 'explanation' or 'reason') or finality is a reason or explanation for something as a function of its end, purpose, or goal. A purpose that is imposed by a human use, such as that of a fork, is called extrinsic.

Natural teleology, common in classical philosophy, though controversial today, contends that natural entities also have intrinsic purposes, irrespective of human use or opinion. For instance, Aristotle claimed that an acorn's intrinsic telos is to become a fully grown oak tree. Though ancient atomists rejected the notion of natural teleology, teleological accounts of non-personal or non-human nature were explored and often endorsed in ancient and medieval philosophies, but fell into disfavor during the modern era (1600–1900).

In the late 18th century, Immanuel Kant used the concept of telos as a regulative principle in his Critique of Judgment (1790). Teleology was also fundamental to the philosophy of Karl Marx and G. W. F. Hegel.

Contemporary philosophers and scientists are still in debate as to whether teleological axioms are useful or accurate in proposing modern philosophies and scientific theories. An example of the reintroduction of teleology into modern language is the notion of an attractor. Another instance is when Thomas Nagel (2012), though not a biologist, proposed a non-Darwinian account of evolution that incorporates impersonal and natural teleological laws to explain the existence of life, consciousness, rationality, and objective value. Regardless, the accuracy can also be considered independently from the usefulness: it is a common experience in pedagogy that a minimum of apparent teleology can be useful in thinking about and explaining Darwinian evolution even if there is no true teleology driving evolution. Thus it is easier to say that evolution "gave" wolves sharp canine teeth because those teeth "serve the purpose of" predation, regardless of whether there is an underlying non-teleologic reality in which evolution is not an actor with intentions. In other words, because human cognition and learning often rely on the narrative structure of stories (with actors, goals, and immediate (proximal) rather than ultimate (distal) causation; see also proximate and ultimate causation), some minimal level of teleology might be recognized as useful or at least tolerable for practical purposes even by people who reject its cosmologic accuracy. Its accuracy is upheld by Barrow and Tipler (1986), whose citations of such teleologists as Max Planck and Norbert Wiener are significant for scientific endeavor.

History

In western philosophy, the term and concept of teleology originated in the writings of Plato and Aristotle. Aristotle's 'four causes' give special place to the telos or "final cause" of each thing. In this, he followed Plato in seeing purpose in both human and subhuman nature.

Etymology

The word teleology combines Greek telos (τέλος, from τελε-, 'end' or 'purpose') and logia (-λογία, 'speak of', 'study of', or 'a branch of learning'). German philosopher Christian Wolff would coin the term, as teleologia (Latin), in his work Philosophia rationalis, sive logica (1728).

Platonic

In the Phaedo, Plato through Socrates argues that true explanations for any given physical phenomenon must be teleological. He bemoans those who fail to distinguish between a thing's necessary and sufficient causes, which he identifies respectively as material and final causes:

Imagine not being able to distinguish the real cause, from that without which the cause would not be able to act, as a cause. It is what the majority appear to do, like people groping in the dark; they call it a cause, thus giving it a name that does not belong to it. That is why one man surrounds the earth with a vortex to make the heavens keep it in place, another makes the air support it like a wide lid. As for their capacity of being in the best place they could be at this very time, this they do not look for, nor do they believe it to have any divine force, but they believe that they will some time discover a stronger and more immortal Atlas to hold everything together more, and they do not believe that the truly good and 'binding' binds and holds them together.

— Plato, Phaedo, 99

Plato here argues that while the materials that compose a body are necessary conditions for its moving or acting in a certain way, they nevertheless cannot be the sufficient condition for its moving or acting as it does. For example, if Socrates is sitting in an Athenian prison, the elasticity of his tendons is what allows him to be sitting, and so a physical description of his tendons can be listed as necessary conditions or auxiliary causes of his act of sitting. However, these are only necessary conditions of Socrates' sitting. To give a physical description of Socrates' body is to say that Socrates is sitting, but it does not give us any idea why it came to be that he was sitting in the first place. To say why he was sitting and not not sitting, we have to explain what it is about his sitting that is good, for all things brought about (i.e., all products of actions) are brought about because the actor saw some good in them. Thus, to give an explanation of something is to determine what about it is good. Its goodness is its actual cause—its purpose, telos or "reason for which."

Aristotelian

Aristotle argued that Democritus was wrong to attempt to reduce all things to mere necessity, because doing so neglects the aim, order, and "final cause", which brings about these necessary conditions:

Democritus, however, neglecting the final cause, reduces to necessity all the operations of nature. Now, they are necessary, it is true, but yet they are for a final cause and for the sake of what is best in each case. Thus nothing prevents the teeth from being formed and being shed in this way; but it is not on account of these causes but on account of the end.…

— Aristotle, Generation of Animals 5.8, 789a8–b15

In Physics, using eternal forms as his model, Aristotle rejects Plato's assumption that the universe was created by an intelligent designer. For Aristotle, natural ends are produced by "natures" (principles of change internal to living things), and natures, Aristotle argued, do not deliberate:

It is absurd to suppose that ends are not present [in nature] because we do not see an agent deliberating.

— Aristotle, Physics, 2.8, 199b27-9

These Platonic and Aristotelian arguments ran counter to those presented earlier by Democritus and later by Lucretius, both of whom were supporters of what is now often called accidentalism:

Nothing in the body is made in order that we may use it. What happens to exist is the cause of its use.

— Lucretius, De rerum natura [On the Nature of Things] 4, 833

Economics

A teleology of human aims played a crucial role in the work of economist Ludwig von Mises, especially in the development of his science of praxeology. More specifically, Mises believed that human action (i.e. purposeful behavior) is teleological, based on the presupposition that an individual's action is governed or caused by the existence of their chosen ends. In other words, individuals select what they believe to be the most appropriate means to achieve a sought after goal or end. Mises also stressed that, with respect to human action, teleology is not independent of causality: "No action can be devised and ventured upon without definite ideas about the relation of cause and effect, teleology presupposes causality."

Assuming reason and action to be predominantly influenced by ideological credence, Mises derived his portrayal of human motivation from Epicurean teachings, insofar as he assumes "atomistic individualism, teleology, and libertarianism, and defines man as an egoist who seeks a maximum of happiness" (i.e. the ultimate pursuit of pleasure over pain). "Man strives for," Mises remarks, "but never attains the perfect state of happiness described by Epicurus." Moreover, expanding upon the Epicurean groundwork, Mises formalized his conception of pleasure and pain by assigning each specific meaning, allowing him to extrapolate his conception of attainable happiness to a critique of liberal versus socialist ideological societies. It is there, in his application of Epicurean belief to political theory, that Mises flouts Marxist theory, considering labor to be one of many of man's 'pains', a consideration which positioned labor as a violation of his original Epicurean assumption of man's manifest hedonistic pursuit. From here he further postulates a critical distinction between introversive labor and extroversive labor, further divaricating from basic Marxist theory, in which Marx hails labor as man's "species-essence", or his "species-activity".

Postmodern philosophy

Teleological-based "grand narratives" are renounced by the postmodern tradition, where teleology may be viewed as reductive, exclusionary, and harmful to those whose stories are diminished or overlooked.

Against this postmodern position, Alasdair MacIntyre has argued that a narrative understanding of oneself, of one's capacity as an independent reasoner, one's dependence on others and on the social practices and traditions in which one participates, all tend towards an ultimate good of liberation. Social practices may themselves be understood as teleologically oriented to internal goods, for example practices of philosophical and scientific inquiry are teleologically ordered to the elaboration of a true understanding of their objects. MacIntyre's After Virtue (1981) famously dismissed the naturalistic teleology of Aristotle's 'metaphysical biology', but he has cautiously moved from that book's account of a sociological teleology toward an exploration of what remains valid in a more traditional teleological naturalism.

Hegel

Historically, teleology may be identified with the philosophical tradition of Aristotelianism. The rationale of teleology was explored by Immanuel Kant (1790) in his Critique of Judgement and made central to speculative philosophy by G. W. F. Hegel (as well as various neo-Hegelian schools). Hegel proposed a history of our species which some consider to be at variance with Darwin, as well as with the dialectical materialism of Karl Marx and Friedrich Engels, employing what is now called analytic philosophy—the point of departure being not formal logic and scientific fact but 'identity', or "objective spirit" in Hegel's terminology.

Individual human consciousness, in the process of reaching for autonomy and freedom, has no choice but to deal with an obvious reality: the collective identities (e.g. the multiplicity of world views, ethnic, cultural, and national identities) that divide the human race and set different groups in violent conflict with each other. Hegel conceived of the 'totality' of mutually antagonistic world-views and life-forms in history as being 'goal-driven', i.e. oriented towards an end-point in history. The 'objective contradiction' of 'subject' and 'object' would eventually 'sublate' into a form of life that leaves violent conflict behind. This goal-oriented, teleological notion of the "historical process as a whole" is present in a variety of 20th-century authors, although its prominence declined drastically after the Second World War.

Ethics

Teleology significantly informs the study of ethics, such as in:

  • Business ethics: People in business commonly think in terms of purposeful action, as in, for example, management by objectives. Teleological analysis of business ethics leads to consideration of the full range of stakeholders in any business decision, including the management, the staff, the customers, the shareholders, the country, humanity and the environment.
  • Medical ethics: Teleology provides a moral basis for the professional ethics of medicine, as physicians are generally concerned with outcomes and must therefore know the telos of a given treatment paradigm.

Consequentialism

The broad spectrum of consequentialist ethics—of which utilitarianism is a well-known example—focuses on the end result or consequences, with such principles as John Stuart Mill's 'principle of utility': "the greatest good for the greatest number." This principle is thus teleological, though in a broader sense than is elsewhere understood in philosophy.

In the classical notion, teleology is grounded in the inherent nature of things themselves, whereas in consequentialism, teleology is imposed on nature from outside by the human will. Consequentialist theories justify inherently what most people would call evil acts by their desirable outcomes, if the good of the outcome outweighs the bad of the act. So, for example, a consequentialist theory would say it was acceptable to kill one person in order to save two or more other people. These theories may be summarized by the maxim "the end justifies the means."

Deontology

Consequentialism stands in contrast to the more classical notions of deontological ethics, of which examples include Immanuel Kant's categorical imperative, and Aristotle's virtue ethics—although formulations of virtue ethics are also often consequentialist in derivation.

In deontological ethics, the goodness or badness of individual acts is primary and a larger, more desirable goal is insufficient to justify bad acts committed on the way to that goal, even if the bad acts are relatively minor and the goal is major (like telling a small lie to prevent a war and save millions of lives). In requiring all constituent acts to be good, deontological ethics is much more rigid than consequentialism, which varies by circumstances.

Practical ethics are usually a mix of the two. For example, Mill also relies on deontic maxims to guide practical behavior, but they must be justifiable by the principle of utility.

Science

In modern science, explanations that rely on teleology are often, but not always, avoided, either because they are unnecessary or because whether they are true or false is thought to be beyond the ability of human perception and understanding to judge. But using teleology as an explanatory style, in particular within evolutionary biology, is still controversial.

Since the Novum Organum of Francis Bacon, teleological explanations in physical science tend to be deliberately avoided in favor of focus on material and efficient explanations. Final and formal causation came to be viewed as false or too subjective. Nonetheless, some disciplines, in particular within evolutionary biology, continue to use language that appears teleological in describing natural tendencies towards certain end conditions. Some suggest, however, that these arguments ought to be, and practicably can be, rephrased in non-teleological forms; others hold that teleological language cannot always be easily expunged from descriptions in the life sciences, at least within the bounds of practical pedagogy.

Biology

Apparent teleology is a recurring issue in evolutionary biology, much to the consternation of some writers.

Statements implying that nature has goals, for example where a species is said to do something "in order to" achieve survival, appear teleological, and therefore invalid. Usually, it is possible to rewrite such sentences to avoid the apparent teleology. Some biology courses have incorporated exercises requiring students to rephrase such sentences so that they do not read teleologically. Nevertheless, biologists still frequently write in a way which can be read as implying teleology even if that is not the intention. John Reiss (2009) argues that evolutionary biology can be purged of such teleology by rejecting the analogy of natural selection as a watchmaker. Other arguments against this analogy have also been promoted by writers such as Richard Dawkins (1987).

Some authors, like James Lennox (1993), have argued that Darwin was a teleologist, while others, such as Michael Ghiselin (1994), describe this claim as a myth promoted by misinterpretations of his discussions and emphasized the distinction between using teleological metaphors and being teleological.

Biologist philosopher Francisco Ayala (1998) has argued that all statements about processes can be trivially translated into teleological statements, and vice versa, but that teleological statements are more explanatory and cannot be disposed of. Karen Neander (1998) has argued that the modern concept of biological 'function' is dependent upon selection. So, for example, it is not possible to say that anything that simply winks into existence without going through a process of selection has functions. We decide whether an appendage has a function by analysing the process of selection that led to it. Therefore, any talk of functions must be posterior to natural selection and function cannot be defined in the manner advocated by Reiss and Dawkins.

Ernst Mayr (1992) states that "adaptedness…is an a posteriori result rather than an a priori goal-seeking." Various commentators view the teleological phrases used in modern evolutionary biology as a type of shorthand. For example, S. H. P. Madrell (1998) writes that "the proper but cumbersome way of describing change by evolutionary adaptation [may be] substituted by shorter overtly teleological statements" for the sake of saving space, but that this "should not be taken to imply that evolution proceeds by anything other than from mutations arising by chance, with those that impart an advantage being retained by natural selection." Likewise, J. B. S. Haldane says, "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public."

Selected-effects accounts, such as the one suggested by Neander (1998), face objections due to their reliance on etiological accounts, which some fields lack the resources to accommodate. Many such sciences, which study the same traits and behaviors regarded by evolutionary biology, still correctly attribute teleological functions without appeal to selection history. Corey J. Maley and Gualtiero Piccinini (2018/2017) are proponents of one such account, which focuses instead on goal-contribution. With the objective goals of organisms being survival and inclusive fitness, Piccinini and Maley define teleological functions to be “a stable contribution by a trait (or component, activity, property) of organisms belonging to a biological population to an objective goal of those organisms.”

Cybernetics

Cybernetics is the study of the communication and control of regulatory feedback both in living beings and machines, and in combinations of the two.

Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow (1943) had conceived of feedback mechanisms as lending a teleology to machinery. Wiener (1948) coined the term cybernetics to denote the study of "teleological mechanisms." In the cybernetic classification presented by Rosenblueth, Wiener, and Bigelow (1943), teleology is feedback controlled purpose.

The classification system underlying cybernetics has been criticized by Frank Honywill George and Les Johnson (1985), who cite the need for an external observability to the purposeful behavior in order to establish and validate the goal-seeking behavior. In this view, the purpose of observing and observed systems is respectively distinguished by the system's subjective autonomy and objective control.

Fine-tuned universe

From Wikipedia, the free encyclopedia

The characterization of the universe as finely tuned suggests that the occurrence of life in the Universe is very sensitive to the values of certain fundamental physical constants and that the observed values are, for some reason, improbable. If the values of any of certain free parameters in contemporary physical theories had differed only slightly from those observed, the evolution of the Universe would have proceeded very differently and life as it is understood may not have been possible.

Various explanations of this ostensible fine-tuning have been proposed. However, the belief that the observed values require explanation depends on assumptions about what values are probable or "natural" in some sense. Alternatively, the anthropic principle may be understood to render the observed values tautological and not in need of explanation.

History

In 1913, the chemist Lawrence Joseph Henderson (1878–1942) wrote The Fitness of the Environment, one of the first books to explore concepts of fine tuning in the universe. Henderson discusses the importance of water and the environment with respect to living things, pointing out that life depends entirely on the very specific environmental conditions on Earth, especially with regard to the prevalence and properties of water.

In 1961, physicist Robert H. Dicke claimed that certain forces in physics, such as gravity and electromagnetism, must be perfectly fine-tuned for life to exist anywhere in the universe. Fred Hoyle also argued for a fine-tuned universe in his 1984 book The Intelligent Universe. "The list of anthropic properties, apparent accidents of a non-biological nature without which carbon-based and hence human life could not exist, is large and impressive."

Belief in the fine-tuned universe led to the expectation that the Large Hadron Collider would produce evidence of physics beyond the standard model. However, by 2012 results from the LHC had ruled out the class of supersymmetric theories that may have explained the fine-tuning.

Motivation

The premise of the fine-tuned universe assertion is that a small change in several of the physical constants would make the universe radically different. As Stephen Hawking has noted, "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life."

If, for example, the strong nuclear force were 2% stronger than it is (i.e. if the coupling constant representing its strength were 2% larger), while the other constants were left unchanged, diprotons would be stable; according to physicist Paul Davies, hydrogen would fuse into them instead of deuterium and helium. This would drastically alter the physics of stars, and presumably preclude the existence of life similar to what we observe on Earth. The existence of the diproton would short-circuit the slow fusion of hydrogen into deuterium. Hydrogen would fuse so easily that it is likely that all of the universe's hydrogen would be consumed in the first few minutes after the Big Bang. However, this "diproton argument" is disputed by other physicists, who calculate that as long as the increase in strength is less than 50%, stellar fusion could occur despite the existence of stable diprotons.

The precise formulation of the idea is made difficult by the fact that physicists do not yet know how many independent physical constants there are. The current standard model of particle physics has 25 freely adjustable parameters and general relativity has one additional parameter, the cosmological constant, which is known to be non-zero, but profoundly small in value. However, because physicists have not developed an empirically successful theory of quantum gravity, there is no known way to combine quantum mechanics, on which the standard model depends, and general relativity. Without knowledge of this more complete theory that is suspected to underlie the standard model, definitively counting the number of truly independent physical constants is not possible. In some candidate theories, the number of independent physical constants may be as small as one. For example, the cosmological constant may be a fundamental constant, but attempts have also been made to calculate it from other constants, and according to the author of one such calculation, "the small value of the cosmological constant is telling us that a remarkably precise and totally unexpected relation exists among all the parameters of the Standard Model of particle physics, the bare cosmological constant and unknown physics."

Examples

Martin Rees formulates the fine-tuning of the universe in terms of the following six dimensionless physical constants.

  • N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10³⁶. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist.
  • Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy (a short numerical check of this figure follows the list). The value of ε is in part determined by the strength of the strong nuclear force. If ε were 0.006, only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the Big Bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.
  • Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the universe. It is the ratio of the mass density of the universe to the "critical density" and is approximately 1. If gravity were too strong compared with dark energy and the initial metric expansion, the universe would have collapsed before life could have evolved. On the other side, if gravity were too weak, no stars would have formed.
  • Lambda (Λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as positing that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, the cosmological constant, Λ, is on the order of 10⁻¹²². This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. If the cosmological constant were not extremely small, stars and other astronomical structures would not be able to form.
  • Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10⁻⁵. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.
  • D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 dimensions of spacetime nor if any other than 1 time dimension existed in spacetime. However, contends Rees, this does not preclude the existence of ten-dimensional strings.
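As referenced in the epsilon item above, a short numerical check: converting 0.7% of the rest mass of four hydrogen nuclei into energy reproduces the well-known 26-27 MeV released per helium nucleus formed in hydrogen burning (the sketch below uses standard constants).

```python
# Energy released when 0.7% of four protons' rest mass is converted to energy.
m_p = 1.6726e-27   # proton mass, kg
c = 2.998e8        # speed of light, m/s
MeV = 1.602e-13    # J per MeV

E = 0.007 * 4 * m_p * c**2
print(f"energy per He-4 formed: {E / MeV:.1f} MeV")   # about 26 MeV
```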

Carbon and oxygen

An older example is the Hoyle state, the third-lowest energy state of the carbon-12 nucleus, with an energy of 7.656 MeV above the ground level. According to one calculation, if the state's energy level were lower than 7.3 or greater than 7.9 MeV, insufficient carbon would exist to support life.

 Furthermore, to explain the universe's abundance of carbon, the Hoyle state must be further tuned to a value between 7.596 and 7.716 MeV. A similar calculation, focusing on the underlying fundamental constants that give rise to various energy levels, concludes that the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.

Dark Energy

A slightly larger quantity of dark energy, or a slightly larger value of the cosmological constant, would have caused space to expand rapidly enough that galaxies would not form.

Criticism

The fine-tuned universe argument regarding the formation of life assumes that only carbon-based life forms are possible, an assumption sometimes referred to as carbon chauvinism. Conceptually, alternative biochemistry or other forms of life are possible.

Explanations

There are fine-tuning arguments that are naturalistic. First, as mentioned in the section on motivation above, the fine tuning might be an illusion: we do not know the true number of independent physical constants, which could be small and might even reduce to one. Nor do we know the laws of the "potential universe factory", i.e. the range and statistical distribution governing the "choice" of each constant (including our arbitrary choice of units and of the precise set of constants). Still, as modern cosmology developed, various hypotheses not presuming hidden order have been proposed. One is an oscillatory universe or a multiverse, where fundamental physical constants are postulated to resolve themselves to random values in different iterations of reality. Under this hypothesis, separate parts of reality would have wildly different characteristics. In such scenarios, the appearance of fine-tuning is explained as a consequence of the weak anthropic principle and selection bias (specifically survivor bias): only those universes with fundamental constants hospitable to life (such as the universe we observe) would have living beings emerge and evolve capable of contemplating the questions of origins and of fine-tuning. All other universes would go utterly unbeheld by any such beings.

Multiverse

The Multiverse hypothesis proposes the existence of many universes with different physical constants, some of which are hospitable to intelligent life (see multiverse: anthropic principle). Because we are intelligent beings, it is unsurprising that we find ourselves in a hospitable universe if there is such a multiverse. The Multiverse hypothesis is therefore thought to provide an elegant explanation of the finding that we exist despite the required fine-tuning. (The arguments for and against this suggested explanation are discussed in detail in the literature.)

The multiverse idea has led to considerable research into the anthropic principle and has been of particular interest to particle physicists, because theories of everything do apparently generate large numbers of universes in which the physical constants vary widely. As yet, there is no evidence for the existence of a multiverse, but some versions of the theory do make predictions that some researchers studying M-theory and gravity leaks hope to see some evidence of soon. Some multiverse theories are not falsifiable, thus scientists may be reluctant to call any multiverse theory "scientific". UNC-Chapel Hill professor Laura Mersini-Houghton claims that the WMAP cold spot may provide testable empirical evidence for a parallel universe, although this claim was later refuted as the WMAP cold spot was found to be nothing more than a statistical artifact. Variants on this approach include Lee Smolin's notion of cosmological natural selection, the Ekpyrotic universe, and the Bubble universe theory.

Critics of the multiverse-related explanations argue that there is no independent evidence that other universes exist. Some criticize the inference from fine-tuning for life to a multiverse as fallacious, whereas others defend it against that challenge.

Top-down cosmology

Stephen Hawking, along with Thomas Hertog of CERN, proposed that the universe's initial conditions consisted of a superposition of many possible initial conditions, only a small fraction of which contributed to the conditions we see today. According to their theory, it is inevitable that we find our universe's "fine-tuned" physical constants, as the current universe "selects" only those past histories that led to the present conditions. In this way, top-down cosmology provides an anthropic explanation for why we find ourselves in a universe that allows matter and life, without invoking the ontic existence of the Multiverse.

Alien design

One hypothesis is that the universe may have been designed by extra-universal aliens. Some believe this would solve the problem of how a designer or design team capable of fine-tuning the universe could come to exist. Cosmologist Alan Guth believes humans will in time be able to generate new universes. By implication previous intelligent entities may have generated our universe. This idea leads to the possibility that the extra-universal designer/designers are themselves the product of an evolutionary process in their own universe, which must therefore itself be able to sustain life. However it also raises the question of where that universe came from, leading to an infinite regress.

The Designer Universe theory of John Gribbin suggests that the universe could have been made deliberately by an advanced civilization in another part of the Multiverse, and that this civilization may have been responsible for causing the Big Bang.

Religious apologetics

Some scientists, theologians, and philosophers, as well as certain religious groups, argue that providence or creation are responsible for fine-tuning.

Christian philosopher Alvin Plantinga argues that random chance, applied to a single and sole universe, only raises the question as to why this universe could be so "lucky" as to have precise conditions that support life at least at some place (the Earth) and time (within millions of years of the present).

One reaction to these apparent enormous coincidences is to see them as substantiating the theistic claim that the universe has been created by a personal God and as offering the material for a properly restrained theistic argument—hence the fine-tuning argument. It's as if there are a large number of dials that have to be tuned to within extremely narrow limits for life to be possible in our universe. It is extremely unlikely that this should happen by chance, but much more likely that this should happen, if there is such a person as God.

— Alvin Plantinga, "The Dawkins Confusion: Naturalism ad absurdum"

This fine-tuning of the universe is cited by philosopher and Christian apologist William Lane Craig as evidence for the existence of God or some form of intelligence capable of manipulating (or designing) the basic physics that governs the universe. Craig argues, however, "that the postulate of a divine Designer does not settle for us the religious question."

Philosopher and theologian Richard Swinburne reaches the design conclusion using Bayesian probability.

Scientist and theologian Alister McGrath has pointed out that the fine-tuning of carbon is even responsible for nature's ability to tune itself to any degree.

The entire biological evolutionary process depends upon the unusual chemistry of carbon, which allows it to bond to itself, as well as other elements, creating highly complex molecules that are stable over prevailing terrestrial temperatures, and are capable of conveying genetic information (especially DNA). […] Whereas it might be argued that nature creates its own fine-tuning, this can only be done if the primordial constituents of the universe are such that an evolutionary process can be initiated. The unique chemistry of carbon is the ultimate foundation of the capacity of nature to tune itself.

Theoretical physicist and Anglican priest John Polkinghorne has stated: "Anthropic fine tuning is too remarkable to be dismissed as just a happy accident."

Cosmological constant

From Wikipedia, the free encyclopedia
Sketch of the timeline of the Universe in the ΛCDM model. The accelerated expansion in the last third of the timeline represents the dark-energy dominated era.

In cosmology, the cosmological constant (usually denoted by the Greek capital letter lambda: Λ) is the energy density of space, or vacuum energy, that arises in Albert Einstein's field equations of general relativity. It is closely associated with the concepts of dark energy and quintessence.

Einstein originally introduced the concept in 1917 to counterbalance the effects of gravity and achieve a static universe, a notion which was the accepted view at the time. Einstein abandoned the concept in 1931 after Hubble's confirmation of the expanding universe. From the 1930s until the late 1990s, most physicists assumed the cosmological constant to be equal to zero. That changed with the surprising discovery in 1998 that the expansion of the universe is accelerating, implying the possibility of a positive nonzero value for the cosmological constant.

Since the 1990s, studies have shown that around 68% of the mass–energy density of the universe can be attributed to so-called dark energy. The cosmological constant Λ is the simplest possible explanation for dark energy, and is used in the current standard model of cosmology known as the ΛCDM model.

According to quantum field theory (QFT), which underlies modern particle physics, empty space is defined by the vacuum state, a collection of quantum fields. All of these quantum fields exhibit fluctuations in their ground state (lowest energy density) arising from the zero-point energy present everywhere in space. These zero-point fluctuations should contribute to the cosmological constant Λ, but when calculations are performed they give rise to an enormous vacuum energy. The discrepancy between the vacuum energy theorized from quantum field theory and the vacuum energy observed in cosmology is a source of major contention, with the predicted values exceeding observation by some 120 orders of magnitude, a discrepancy that has been called "the worst theoretical prediction in the history of physics!". This issue is called the cosmological constant problem, and it is one of the greatest mysteries in science, with many physicists believing that "the vacuum holds the key to a full understanding of nature".

History

Einstein included the cosmological constant as a term in his field equations for general relativity because his equations otherwise did not appear to allow for a static universe: gravity would cause a universe that was initially at dynamic equilibrium to contract. To counteract this possibility, Einstein added the cosmological constant. However, soon after Einstein developed his static theory, observations by Edwin Hubble indicated that the universe appears to be expanding; this was consistent with a cosmological solution to the original general relativity equations that had been found by the mathematician Alexander Friedmann. Einstein reportedly referred to his failure to accept the validation of his equations, which had predicted the expansion of the universe before it was demonstrated by observation of the cosmological redshift, as his "biggest blunder".

In fact, adding the cosmological constant to Einstein's equations does not lead to a static universe at equilibrium because the equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe that contracts slightly will continue contracting.

However, the cosmological constant remained a subject of theoretical and empirical interest. Empirically, the onslaught of cosmological data in the past decades strongly suggests that our universe has a positive cosmological constant. The explanation of this small but positive value is an outstanding theoretical challenge, the so-called cosmological constant problem.

Some early generalizations of Einstein's gravitational theory, known as classical unified field theories, either introduced a cosmological constant on theoretical grounds or found that it arose naturally from the mathematics. For example, Sir Arthur Stanley Eddington claimed that the cosmological constant version of the vacuum field equation expressed the "epistemological" property that the universe is "self-gauging", and Erwin Schrödinger's pure-affine theory using a simple variational principle produced the field equation with a cosmological term.

Equation

Estimated ratios of dark matter and dark energy (which may be the cosmological constant) in the universe. According to current theories of physics, dark energy now dominates as the largest source of energy of the universe, in contrast to earlier epochs when it was insignificant.

The cosmological constant appears in Einstein's field equation in the form

R_μν − ½R g_μν + Λ g_μν = (8πG/c⁴) T_μν,
where the Ricci tensor/scalar R and the metric tensor g describe the structure of spacetime, the stress–energy tensor T describes the energy and momentum density and flux of matter at that point in spacetime, and the universal constants G and c are conversion factors that arise from using traditional units of measurement. When Λ is zero, this reduces to the field equation of general relativity as usually used in the mid-20th century. When T is zero, the field equation describes empty space (the vacuum).

The cosmological constant has the same effect as an intrinsic energy density of the vacuum, ρvac (and an associated pressure). In this context, it is commonly moved onto the right-hand side of the equation, and defined with a proportionality factor of 8π: Λ = 8πρvac, where unit conventions of general relativity are used (otherwise factors of G and c would also appear, i.e. Λ = 8π(G/c²)ρvac = κρvac, where κ is the Einstein gravitational constant). It is common to quote values of energy density directly, though still using the name "cosmological constant", with the convention 8πG = 1. The true dimension of Λ is length⁻².

Given the Planck (2018) values of ΩΛ = 0.6889±0.0056 and H0 = 67.66±0.42 (km/s)/Mpc = (2.1927664±0.0136)×10⁻¹⁸ s⁻¹, Λ has the value of

Λ = 3ΩΛ(H0/c)² ≈ 1.1056×10⁻⁵² m⁻² ≈ 2.888×10⁻¹²² ℓP⁻²,

where ℓP is the Planck length. A positive vacuum energy density resulting from a cosmological constant implies a negative pressure, and vice versa. If the energy density is positive, the associated negative pressure will drive an accelerated expansion of the universe, as observed. (See dark energy and cosmic inflation for details.)
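As a quick sanity check on the numbers quoted above, here is a minimal Python sketch (not part of the original article) that recomputes Λ from the Planck 2018 values via the flat-universe relation Λ = 3ΩΛ(H0/c)²; the constants are the only inputs and are taken from standard references.

import math

# Planck 2018 inputs quoted above
Omega_L = 0.6889
H0 = 67.66 * 1000 / 3.0857e22        # (km/s)/Mpc converted to s^-1
c = 2.99792458e8                     # speed of light, m/s
l_P = 1.616255e-35                   # Planck length, m

Lam = 3 * Omega_L * (H0 / c)**2      # cosmological constant, m^-2
print(f"Lambda ~ {Lam:.4e} m^-2")             # ~1.106e-52 m^-2
print(f"Lambda ~ {Lam * l_P**2:.3e} l_P^-2")  # ~2.89e-122 in Planck-length units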

ΩΛ (Omega Lambda)

Instead of the cosmological constant itself, cosmologists often refer to the ratio between the energy density due to the cosmological constant and the critical density of the universe, the tipping point for a sufficient density to stop the universe from expanding forever. This ratio is usually denoted ΩΛ, and is estimated to be 0.6889±0.0056, according to results published by the Planck Collaboration in 2018.

In a flat universe, ΩΛ is the fraction of the energy of the universe due to the cosmological constant, i.e., what we would intuitively call the fraction of the universe that is made up of dark energy. Note that this value changes over time: the critical density changes with cosmological time, while the energy density due to the cosmological constant remains unchanged throughout the history of the universe, so the amount of dark energy increases as the universe grows while the amount of matter does not.
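To make this bookkeeping concrete, a short illustrative Python sketch (same assumed constants as above) computes the critical density 3H0²/(8πG) and the portion of it attributed to the cosmological constant:

import math

G = 6.67430e-11                      # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.66 * 1000 / 3.0857e22        # Hubble constant, s^-1
Omega_L = 0.6889

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical (mass) density, kg/m^3
rho_L = Omega_L * rho_crit                 # dark-energy share of it

print(f"critical density    ~ {rho_crit:.2e} kg/m^3")   # ~8.6e-27 kg/m^3
print(f"dark-energy density ~ {rho_L:.2e} kg/m^3")      # ~5.9e-27 kg/m^3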

Equation of state

Another ratio that is used by scientists is the equation of state, usually denoted w, which is the ratio of pressure that dark energy puts on the universe to the energy per unit volume. This ratio is w = −1 for a true cosmological constant, and is generally different for alternative time-varying forms of vacuum energy such as quintessence. The Planck Collaboration (2018) has measured w = −1.028±0.032, consistent with −1, assuming no evolution in w over cosmic time.
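For a true cosmological constant, w = −1 follows from the fact that the vacuum energy density stays constant as space expands; a one-line sketch using the first law of thermodynamics, dU = −p dV, makes this explicit:

U = ρΛc²V with ρΛ constant  ⇒  dU = ρΛc² dV = −p dV  ⇒  p = −ρΛc²  ⇒  w = p/(ρΛc²) = −1.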

Positive value

Lambda-CDM, accelerated expansion of the universe. The time-line in this schematic diagram extends from the Big Bang/inflation era 13.7 Byr ago to the present cosmological time.

Observations announced in 1998 of the distance–redshift relation for Type Ia supernovae indicated that the expansion of the universe is accelerating. When combined with measurements of the cosmic microwave background radiation these implied a value of ΩΛ ≈ 0.7, a result which has been supported and refined by more recent measurements. There are other possible causes of an accelerating universe, such as quintessence, but the cosmological constant is in most respects the simplest solution. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant, which is measured to be on the order of 10⁻⁵² m⁻², in metric units. It is often expressed as 10⁻³⁵ s⁻² (by multiplication with c², i.e. ≈10¹⁷ m²⋅s⁻²) or as 10⁻¹²² (by multiplication with the square of the Planck length, i.e. ≈10⁻⁷⁰ m²). The value is based on recent measurements of the vacuum energy density, roughly ρvac ≈ 6×10⁻²⁷ kg/m³ (≈ 5.4×10⁻¹⁰ J/m³).

As has only recently been seen, in work by 't Hooft, Susskind and others, a positive cosmological constant has surprising consequences, such as a finite maximum entropy of the observable universe (see the holographic principle).[18]

Predictions

Quantum field theory

A major outstanding problem is that most quantum field theories predict a huge value for the energy of the quantum vacuum. A common assumption is that the quantum vacuum is equivalent to the cosmological constant. Although no theory exists that supports this assumption, arguments can be made in its favor.[19]

Such arguments are usually based on dimensional analysis and effective field theory. If the universe is described by an effective local quantum field theory down to the Planck scale, then we would expect a cosmological constant of the order of MPl⁴ (that is, of order 1 in reduced Planck units). As noted above, the measured cosmological constant is smaller than this by a factor of roughly 10⁻¹²⁰. This discrepancy has been called "the worst theoretical prediction in the history of physics!".
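A rough back-of-the-envelope Python sketch of this mismatch follows (illustrative only; the cutoff is taken at the Planck scale, and the exact number of orders of magnitude, often quoted as "about 120", depends on conventions and on where the cutoff is placed):

import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2

# Naive estimate: roughly one Planck energy per Planck volume
l_P = math.sqrt(hbar * G / c**3)           # Planck length, ~1.6e-35 m
E_P = math.sqrt(hbar * c**5 / G)           # Planck energy, ~2.0e9 J
rho_planck = E_P / l_P**3                  # ~4.6e113 J/m^3

# Observed dark-energy density: Omega_Lambda times the critical energy density
H0 = 67.66 * 1000 / 3.0857e22              # Hubble constant, s^-1
rho_obs = 0.6889 * 3 * H0**2 * c**2 / (8 * math.pi * G)   # ~5.3e-10 J/m^3

print(f"naive QFT estimate : {rho_planck:.2e} J/m^3")
print(f"observed value     : {rho_obs:.2e} J/m^3")
print(f"mismatch           : about 10^{math.log10(rho_planck / rho_obs):.0f}")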

Some supersymmetric theories require a cosmological constant that is exactly zero, which further complicates things. This is the cosmological constant problem, the worst problem of fine-tuning in physics: there is no known natural way to derive the tiny cosmological constant used in cosmology from particle physics.

No vacuum in the string theory landscape is known to support a metastable, positive cosmological constant, and in 2018 a group of four physicists advanced a controversial conjecture which would imply that no such universe exists.

Anthropic principle

One possible explanation for the small but non-zero value was noted by Steven Weinberg in 1987 following the anthropic principle. Weinberg explains that if the vacuum energy took different values in different domains of the universe, then observers would necessarily measure values similar to that which is observed: the formation of life-supporting structures would be suppressed in domains where the vacuum energy is much larger. Specifically, if the vacuum energy is negative and its absolute value is substantially larger than it appears to be in the observed universe (say, a factor of 10 larger), holding all other variables (e.g. matter density) constant, that would mean that the universe is closed; furthermore, its lifetime would be shorter than the age of our universe, possibly too short for intelligent life to form. On the other hand, a universe with a large positive cosmological constant would expand too fast, preventing galaxy formation. According to Weinberg, domains where the vacuum energy is compatible with life would be comparatively rare. Using this argument, Weinberg predicted that the cosmological constant would have a value of less than a hundred times the currently accepted value. In 1992, Weinberg refined this prediction of the cosmological constant to 5 to 10 times the matter density.

This argument depends on the vacuum energy density not varying in its distribution (spatial or otherwise), as would be expected if dark energy were the cosmological constant. There is no evidence that the vacuum energy does vary, but it may be the case if, for example, the vacuum energy is (even in part) the potential of a scalar field such as the residual inflaton (also see quintessence). Another theoretical approach that deals with the issue is that of multiverse theories, which predict a large number of "parallel" universes with different laws of physics and/or values of fundamental constants. Again, the anthropic principle states that we can only live in one of the universes that is compatible with some form of intelligent life. Critics claim that these theories, when used as an explanation for fine-tuning, commit the inverse gambler's fallacy.

In 1995, Weinberg's argument was refined by Alexander Vilenkin to predict a value for the cosmological constant that was only ten times the matter density, i.e. about three times the value that has since been determined.

Failure to detect dark energy

An attempt to directly observe dark energy in a laboratory failed to detect a new force.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...