
Friday, October 9, 2020

Teleology

From Wikipedia, the free encyclopedia
Plato and Aristotle, depicted here in The School of Athens, both developed philosophical arguments addressing the universe's apparent order (logos)

Teleology (from τέλος, telos, 'end', 'aim', or 'goal,' and λόγος, logos, 'explanation' or 'reason') or finality is a reason or explanation for something as a function of its end, purpose, or goal. A purpose that is imposed by a human use, such as that of a fork, is called extrinsic.

Natural teleology, common in classical philosophy, though controversial today, contends that natural entities also have intrinsic purposes, irrespective of human use or opinion. For instance, Aristotle claimed that an acorn's intrinsic telos is to become a fully grown oak tree. Though ancient atomists rejected the notion of natural teleology, teleological accounts of non-personal or non-human nature were explored and often endorsed in ancient and medieval philosophies, but fell into disfavor during the modern era (1600–1900).

In the late 18th century, Immanuel Kant used the concept of telos as a regulative principle in his Critique of Judgment (1790). Teleology was also fundamental to the philosophy of Karl Marx and G. W. F. Hegel.

Contemporary philosophers and scientists still debate whether teleological axioms are useful or accurate in proposing modern philosophies and scientific theories. One example of the reintroduction of teleology into modern language is the notion of an attractor. Another is Thomas Nagel (2012), who, though not a biologist, proposed a non-Darwinian account of evolution that incorporates impersonal and natural teleological laws to explain the existence of life, consciousness, rationality, and objective value. Regardless, accuracy can be considered independently of usefulness: it is a common experience in pedagogy that a minimum of apparent teleology can be useful in thinking about and explaining Darwinian evolution even if there is no true teleology driving evolution. Thus it is easier to say that evolution "gave" wolves sharp canine teeth because those teeth "serve the purpose of" predation, regardless of whether there is an underlying non-teleological reality in which evolution is not an actor with intentions. In other words, because human cognition and learning often rely on the narrative structure of stories (with actors, goals, and immediate or proximal rather than ultimate or distal causation; see also proximate and ultimate causation), some minimal level of teleology might be recognized as useful or at least tolerable for practical purposes even by people who reject its cosmological accuracy. Its accuracy is upheld by Barrow and Tipler (1986), whose citations of such teleologists as Max Planck and Norbert Wiener are significant for scientific endeavor.

History

In western philosophy, the term and concept of teleology originated in the writings of Plato and Aristotle. Aristotle's 'four causes' give special place to the telos or "final cause" of each thing. In this, he followed Plato in seeing purpose in both human and subhuman nature.

Etymology

The word teleology combines Greek telos (τέλος, from τελε-, 'end' or 'purpose') and logia (-λογία, 'speak of', 'study of', or 'a branch of learning'). The German philosopher Christian Wolff coined the term, as the Latin teleologia, in his work Philosophia rationalis, sive logica (1728).

Platonic

In the Phaedo, Plato through Socrates argues that true explanations for any given physical phenomenon must be teleological. He bemoans those who fail to distinguish between a thing's necessary and sufficient causes, which he identifies respectively as material and final causes:

Imagine not being able to distinguish the real cause, from that without which the cause would not be able to act, as a cause. It is what the majority appear to do, like people groping in the dark; they call it a cause, thus giving it a name that does not belong to it. That is why one man surrounds the earth with a vortex to make the heavens keep it in place, another makes the air support it like a wide lid. As for their capacity of being in the best place they could be at this very time, this they do not look for, nor do they believe it to have any divine force, but they believe that they will some time discover a stronger and more immortal Atlas to hold everything together more, and they do not believe that the truly good and 'binding' binds and holds them together.

— Plato, Phaedo, 99

Plato here argues that while the materials that compose a body are necessary conditions for its moving or acting in a certain way, they nevertheless cannot be the sufficient condition for its moving or acting as it does. For example, if Socrates is sitting in an Athenian prison, the elasticity of his tendons is what allows him to be sitting, and so a physical description of his tendons can be listed as necessary conditions or auxiliary causes of his act of sitting. However, these are only necessary conditions of Socrates' sitting. To give a physical description of Socrates' body is to say that Socrates is sitting, but it does not give us any idea why it came to be that he was sitting in the first place. To say why he was sitting and not not sitting, we have to explain what it is about his sitting that is good, for all things brought about (i.e., all products of actions) are brought about because the actor saw some good in them. Thus, to give an explanation of something is to determine what about it is good. Its goodness is its actual cause—its purpose, telos or "reason for which."

Aristotelian

Aristotle argued that Democritus was wrong to attempt to reduce all things to mere necessity, because doing so neglects the aim, order, and "final cause", which brings about these necessary conditions:

Democritus, however, neglecting the final cause, reduces to necessity all the operations of nature. Now, they are necessary, it is true, but yet they are for a final cause and for the sake of what is best in each case. Thus nothing prevents the teeth from being formed and being shed in this way; but it is not on account of these causes but on account of the end.…

— Aristotle, Generation of Animals 5.8, 789a8–b15

In Physics, Aristotle rejects Plato's assumption that the universe was created by an intelligent designer using the eternal Forms as a model. For Aristotle, natural ends are produced by "natures" (principles of change internal to living things), and natures, Aristotle argued, do not deliberate:

It is absurd to suppose that ends are not present [in nature] because we do not see an agent deliberating.

— Aristotle, Physics, 2.8, 199b27-9

These Platonic and Aristotelian arguments ran counter to those presented earlier by Democritus and later by Lucretius, both of whom were supporters of what is now often called accidentalism:

Nothing in the body is made in order that we may use it. What happens to exist is the cause of its use.

— Lucretius, De rerum natura [On the Nature of Things] 4, 833

Economics

A teleology of human aims played a crucial role in the work of economist Ludwig von Mises, especially in the development of his science of praxeology. More specifically, Mises believed that human action (i.e. purposeful behavior) is teleological, based on the presupposition that an individual's action is governed or caused by the existence of their chosen ends. In other words, individuals select what they believe to be the most appropriate means to achieve a sought-after goal or end. Mises also stressed that, with respect to human action, teleology is not independent of causality: "No action can be devised and ventured upon without definite ideas about the relation of cause and effect, teleology presupposes causality."

Assuming reason and action to be predominantly influenced by ideological credence, Mises derived his portrayal of human motivation from Epicurean teachings, insofar as he assumes "atomistic individualism, teleology, and libertarianism, and defines man as an egoist who seeks a maximum of happiness" (i.e. the ultimate pursuit of pleasure over pain). "Man strives for," Mises remarks, "but never attains the perfect state of happiness described by Epicurus." Moreover, expanding upon the Epicurean groundwork, Mises formalized his conception of pleasure and pain by assigning each specific meaning, allowing him to extrapolate his conception of attainable happiness to a critique of liberal versus socialist ideological societies. It is there, in his application of Epicurean belief to political theory, that Mises flouts Marxist theory, considering labor to be one of many of man's 'pains', a consideration which positioned labor as a violation of his original Epicurean assumption of man's manifest hedonistic pursuit. From here he further postulates a critical distinction between introversive labor and extroversive labor, further divaricating from basic Marxist theory, in which Marx hails labor as man's "species-essence", or his "species-activity".

Postmodern philosophy

Teleological-based "grand narratives" are renounced by the postmodern tradition, where teleology may be viewed as reductive, exclusionary, and harmful to those whose stories are diminished or overlooked.

Against this postmodern position, Alasdair MacIntyre has argued that a narrative understanding of oneself, of one's capacity as an independent reasoner, one's dependence on others and on the social practices and traditions in which one participates, all tend towards an ultimate good of liberation. Social practices may themselves be understood as teleologically oriented to internal goods; for example, practices of philosophical and scientific inquiry are teleologically ordered to the elaboration of a true understanding of their objects. MacIntyre's After Virtue (1981) famously dismissed the naturalistic teleology of Aristotle's 'metaphysical biology', but he has cautiously moved from that book's account of a sociological teleology toward an exploration of what remains valid in a more traditional teleological naturalism.

Hegel

Historically, teleology may be identified with the philosophical tradition of Aristotelianism. The rationale of teleology was explored by Immanuel Kant (1790) in his Critique of Judgement and made central to speculative philosophy by G. W. F. Hegel (as well as various neo-Hegelian schools). Hegel proposed a history of our species which some consider to be at variance with Darwin, as well as with the dialectical materialism of Karl Marx and Friedrich Engels, employing what is now called analytic philosophy—the point of departure being not formal logic and scientific fact but 'identity', or "objective spirit" in Hegel's terminology.

Individual human consciousness, in the process of reaching for autonomy and freedom, has no choice but to deal with an obvious reality: the collective identities (e.g. the multiplicity of world views, ethnic, cultural, and national identities) that divide the human race and set different groups in violent conflict with each other. Hegel conceived of the 'totality' of mutually antagonistic world-views and life-forms in history as being 'goal-driven', i.e. oriented towards an end-point in history. The 'objective contradiction' of 'subject' and 'object' would eventually 'sublate' into a form of life that leaves violent conflict behind. This goal-oriented, teleological notion of the "historical process as a whole" is present in a variety of 20th-century authors, although its prominence declined drastically after the Second World War.

Ethics

Teleology significantly informs the study of ethics, such as in:

  • Business ethics: People in business commonly think in terms of purposeful action, as in, for example, management by objectives. Teleological analysis of business ethics leads to consideration of the full range of stakeholders in any business decision, including the management, the staff, the customers, the shareholders, the country, humanity and the environment.
  • Medical ethics: Teleology provides a moral basis for the professional ethics of medicine, as physicians are generally concerned with outcomes and must therefore know the telos of a given treatment paradigm.

Consequentialism

The broad spectrum of consequentialist ethics—of which utilitarianism is a well-known example—focuses on the end result or consequences, with such principles as John Stuart Mill's 'principle of utility': "the greatest good for the greatest number." This principle is thus teleological, though in a broader sense than is elsewhere understood in philosophy.

In the classical notion, teleology is grounded in the inherent nature of things themselves, whereas in consequentialism, teleology is imposed on nature from outside by the human will. Consequentialist theories justify what most people would call inherently evil acts by their desirable outcomes, if the good of the outcome outweighs the bad of the act. So, for example, a consequentialist theory would say it was acceptable to kill one person in order to save two or more other people. These theories may be summarized by the maxim "the end justifies the means."

Deontological ethics

Consequentialism stands in contrast to the more classical notions of deontological ethics, of which examples include Immanuel Kant's categorical imperative, and Aristotle's virtue ethics—although formulations of virtue ethics are also often consequentialist in derivation.

In deontological ethics, the goodness or badness of individual acts is primary and a larger, more desirable goal is insufficient to justify bad acts committed on the way to that goal, even if the bad acts are relatively minor and the goal is major (like telling a small lie to prevent a war and save millions of lives). In requiring all constituent acts to be good, deontological ethics is much more rigid than consequentialism, which varies by circumstances.

Practical ethics are usually a mix of the two. For example, Mill also relies on deontic maxims to guide practical behavior, but they must be justifiable by the principle of utility.

Science

In modern science, explanations that rely on teleology are often, but not always, avoided, either because they are unnecessary or because whether they are true or false is thought to be beyond the ability of human perception and understanding to judge. But using teleology as an explanatory style, in particular within evolutionary biology, is still controversial.

Since the Novum Organum of Francis Bacon, teleological explanations in physical science tend to be deliberately avoided in favor of focus on material and efficient explanations. Final and formal causation came to be viewed as false or too subjective. Nonetheless, some disciplines, in particular within evolutionary biology, continue to use language that appears teleological in describing natural tendencies towards certain end conditions. Some suggest, however, that these arguments ought to be, and practicably can be, rephrased in non-teleological forms; others hold that teleological language cannot always be easily expunged from descriptions in the life sciences, at least within the bounds of practical pedagogy.

Biology

Apparent teleology is a recurring issue in evolutionary biology, much to the consternation of some writers.

Statements implying that nature has goals, for example where a species is said to do something "in order to" achieve survival, appear teleological, and therefore invalid. Usually, it is possible to rewrite such sentences to avoid the apparent teleology. Some biology courses have incorporated exercises requiring students to rephrase such sentences so that they do not read teleologically. Nevertheless, biologists still frequently write in a way which can be read as implying teleology even if that is not the intention. John Reiss (2009) argues that evolutionary biology can be purged of such teleology by rejecting the analogy of natural selection as a watchmaker. Other arguments against this analogy have also been promoted by writers such as Richard Dawkins (1987).

Some authors, like James Lennox (1993), have argued that Darwin was a teleologist, while others, such as Michael Ghiselin (1994), describe this claim as a myth promoted by misinterpretations of his discussions, and emphasize the distinction between using teleological metaphors and being teleological.

The biologist and philosopher Francisco Ayala (1998) has argued that all statements about processes can be trivially translated into teleological statements, and vice versa, but that teleological statements are more explanatory and cannot be disposed of. Karen Neander (1998) has argued that the modern concept of biological 'function' is dependent upon selection. So, for example, it is not possible to say that anything that simply winks into existence without going through a process of selection has functions. We decide whether an appendage has a function by analysing the process of selection that led to it. Therefore, any talk of functions must be posterior to natural selection and function cannot be defined in the manner advocated by Reiss and Dawkins.

Ernst Mayr (1992) states that "adaptedness…is an a posteriori result rather than an a priori goal-seeking." Various commentators view the teleological phrases used in modern evolutionary biology as a type of shorthand. For example, S. H. P. Madrell (1998) writes that "the proper but cumbersome way of describing change by evolutionary adaptation [may be] substituted by shorter overtly teleological statements" for the sake of saving space, but that this "should not be taken to imply that evolution proceeds by anything other than from mutations arising by chance, with those that impart an advantage being retained by natural selection." Likewise, J. B. S. Haldane says, "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public."

Selected-effects accounts, such as the one suggested by Neander (1998), face objections due to their reliance on etiological accounts, which some fields lack the resources to accommodate. Many such sciences, which study the same traits and behaviors that evolutionary biology studies, still correctly attribute teleological functions without appeal to selection history. Corey J. Maley and Gualtiero Piccinini (2018/2017) are proponents of one such account, which focuses instead on goal-contribution. With the objective goals of organisms being survival and inclusive fitness, Piccinini and Maley define teleological functions to be “a stable contribution by a trait (or component, activity, property) of organisms belonging to a biological population to an objective goal of those organisms.”

Cybernetics

Cybernetics is the study of the communication and control of regulatory feedback both in living beings and machines, and in combinations of the two.

Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow (1943) conceived of feedback mechanisms as lending a teleology to machinery. Wiener (1948) coined the term cybernetics to denote the study of "teleological mechanisms." In the cybernetic classification presented by Rosenblueth, Wiener, and Bigelow (1943), teleology is feedback-controlled purpose.
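
To make the idea of a "teleological mechanism" concrete, the following minimal sketch (not drawn from the sources above; the names and numbers are purely illustrative) shows a thermostat-style negative-feedback loop in Python, in which the corrective action is driven by the distance from a goal state.

# Illustrative sketch of feedback-controlled purpose (hypothetical example).
def thermostat(target, temperature, gain=0.5, steps=20):
    """Drive `temperature` toward `target` using proportional negative feedback."""
    history = []
    for _ in range(steps):
        error = target - temperature      # feedback signal: distance from the goal
        temperature += gain * error       # corrective action proportional to the error
        history.append(round(temperature, 3))
    return history

print(thermostat(target=21.0, temperature=15.0))   # values converge toward 21.0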

The classification system underlying cybernetics has been criticized by Frank Honywill George and Les Johnson (1985), who cite the need for external observability of purposeful behavior in order to establish and validate goal-seeking behavior. In this view, the purposes of observing systems and observed systems are distinguished, respectively, by the systems' subjective autonomy and objective control.

Fine-tuned universe

From Wikipedia, the free encyclopedia

The characterization of the universe as finely tuned suggests that the occurrence of life in the Universe is very sensitive to the values of certain fundamental physical constants and that the observed values are, for some reason, improbable. If the values of any of certain free parameters in contemporary physical theories had differed only slightly from those observed, the evolution of the Universe would have proceeded very differently and life as it is understood may not have been possible.

Various explanations of this ostensible fine-tuning have been proposed. However, the belief that the observed values require explanation depends on assumptions about what values are probable or "natural" in some sense. Alternatively, the anthropic principle may be understood to render the observed values tautological and not in need of explanation.

History

In 1913, the chemist Lawrence Joseph Henderson (1878–1942) wrote The Fitness of the Environment, one of the first books to explore concepts of fine tuning in the universe. Henderson discusses the importance of water and the environment with respect to living things, pointing out that life depends entirely on the very specific environmental conditions on Earth, especially with regard to the prevalence and properties of water.

In 1961, physicist Robert H. Dicke claimed that certain forces in physics, such as gravity and electromagnetism, must be perfectly fine-tuned for life to exist anywhere in the universe. Fred Hoyle also argued for a fine-tuned universe in his 1984 book The Intelligent Universe: "The list of anthropic properties, apparent accidents of a non-biological nature without which carbon-based and hence human life could not exist, is large and impressive."

Belief in the fine-tuned universe led to the expectation that the Large Hadron Collider would produce evidence of physics beyond the standard model. However, by 2012 results from the LHC had ruled out the class of supersymmetric theories that may have explained the fine-tuning.

Motivation

The premise of the fine-tuned universe assertion is that a small change in several of the physical constants would make the universe radically different. As Stephen Hawking has noted, "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life."

If, for example, the strong nuclear force were 2% stronger than it is (i.e. if the coupling constant representing its strength were 2% larger), while the other constants were left unchanged, diprotons would be stable; according to physicist Paul Davies, hydrogen would fuse into them instead of deuterium and helium. This would drastically alter the physics of stars, and presumably preclude the existence of life similar to what we observe on Earth. The existence of the diproton would short-circuit the slow fusion of hydrogen into deuterium. Hydrogen would fuse so easily that it is likely that all of the universe's hydrogen would be consumed in the first few minutes after the Big Bang. However, this "diproton argument" is disputed by other physicists, who calculate that as long as the increase in strength is less than 50%, stellar fusion could occur despite the existence of stable diprotons.

The precise formulation of the idea is made difficult by the fact that physicists do not yet know how many independent physical constants there are. The current standard model of particle physics has 25 freely adjustable parameters and general relativity has one additional parameter, the cosmological constant, which is known to be non-zero, but profoundly small in value. However, because physicists have not developed an empirically successful theory of quantum gravity, there is no known way to combine quantum mechanics, on which the standard model depends, and general relativity. Without knowledge of this more complete theory that is suspected to underlie the standard model, definitively counting the number of truly independent physical constants is not possible. In some candidate theories, the number of independent physical constants may be as small as one. For example, the cosmological constant may be a fundamental constant, but attempts have also been made to calculate it from other constants, and according to the author of one such calculation, "the small value of the cosmological constant is telling us that a remarkably precise and totally unexpected relation exists among all the parameters of the Standard Model of particle physics, the bare cosmological constant and unknown physics."

Examples

Martin Rees formulates the fine-tuning of the universe in terms of the following six dimensionless physical constants.

  • N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10³⁶ (see the worked estimate after this list). According to Rees, if it were significantly smaller, only a small and short-lived universe could exist.
  • Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy. The value of ε is in part determined by the strength of the strong nuclear force. If ε were 0.006, only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the Big Bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.
  • Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the universe. It is the ratio of the mass density of the universe to the "critical density" and is approximately 1. If gravity were too strong compared with dark energy and the initial metric expansion, the universe would have collapsed before life could have evolved. On the other side, if gravity were too weak, no stars would have formed.
  • Lambda (Λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as positing that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, the cosmological constant, Λ, is on the order of 10⁻¹²². This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. If the cosmological constant were not extremely small, stars and other astronomical structures would not be able to form.
  • Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10⁻⁵. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.
  • D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 dimensions of spacetime nor if any other than 1 time dimension existed in spacetime. However, contends Rees, this does not preclude the existence of ten-dimensional strings.
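
As a rough cross-check of the first two of these numbers, the short Python sketch below estimates N from the standard expression e²/(4πε₀Gm_p²) and ε from the helium-4 mass deficit. The rounded constants are assumptions of this illustration, not values taken from Rees.

import math

e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G    = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
m_p  = 1.67262192369e-27  # proton mass, kg

# N: ratio of electric to gravitational force between two protons (the distance cancels out)
N = (e**2 / (4 * math.pi * eps0)) / (G * m_p**2)
print("N ~ %.2e" % N)            # roughly 1.2e36

# epsilon: fraction of rest mass released when 2 protons + 2 neutrons bind into helium-4
m_p_u, m_n_u, m_he4_u = 1.007276, 1.008665, 4.001506   # nuclear masses, atomic mass units
eps = (2 * m_p_u + 2 * m_n_u - m_he4_u) / (2 * m_p_u + 2 * m_n_u)
print("epsilon ~ %.4f" % eps)    # roughly 0.0075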

Carbon and oxygen

An older example is the Hoyle state, the third-lowest energy state of the carbon-12 nucleus, with an energy of 7.656 MeV above the ground level. According to one calculation, if the state's energy level were lower than 7.3 or greater than 7.9 MeV, insufficient carbon would exist to support life.

 Furthermore, to explain the universe's abundance of carbon, the Hoyle state must be further tuned to a value between 7.596 and 7.716 MeV. A similar calculation, focusing on the underlying fundamental constants that give rise to various energy levels, concludes that the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.

Dark energy

A slightly larger quantity of dark energy, or a slightly larger value of the cosmological constant would have caused space to expand rapidly enough that galaxies would not form.

Criticism

The fine-tuning argument regarding the formation of life assumes that only carbon-based life forms are possible, an assumption sometimes referred to as carbon chauvinism. Conceptually, alternative biochemistry or other forms of life are possible.

Explanations

There are fine-tuning arguments that are naturalistic. First, as mentioned above, the fine-tuning might be an illusion: we do not know the true number of independent physical constants, which could be small and might even reduce to one. Nor do we know the laws of the "potential universe factory", i.e. the range and statistical distribution governing the "choice" of each constant (including our arbitrary choice of units and precise set of constants). Still, as modern cosmology has developed, various hypotheses that do not presume hidden order have been proposed. One is an oscillatory universe or a multiverse, where fundamental physical constants are postulated to resolve themselves to random values in different iterations of reality. Under this hypothesis, separate parts of reality would have wildly different characteristics. In such scenarios, the appearance of fine-tuning is explained as a consequence of the weak anthropic principle and selection bias (specifically survivor bias): only those universes with fundamental constants hospitable to life (such as the universe we observe) would have living beings emerge and evolve capable of contemplating the questions of origins and of fine-tuning. All other universes would go utterly unbeheld by any such beings.

Multiverse

The Multiverse hypothesis proposes the existence of many universes with different physical constants, some of which are hospitable to intelligent life (see multiverse: anthropic principle). Because we are intelligent beings, it is unsurprising that we find ourselves in a hospitable universe if there is such a multiverse. The Multiverse hypothesis is therefore thought to provide an elegant explanation of the finding that we exist despite the required fine-tuning.

The multiverse idea has led to considerable research into the anthropic principle and has been of particular interest to particle physicists, because theories of everything do apparently generate large numbers of universes in which the physical constants vary widely. As yet, there is no evidence for the existence of a multiverse, but some versions of the theory make predictions that some researchers studying M-theory and gravity leaks hope to test in the near future. Some multiverse theories are not falsifiable, and thus scientists may be reluctant to call any multiverse theory "scientific". UNC-Chapel Hill professor Laura Mersini-Houghton claimed that the WMAP cold spot might provide testable empirical evidence for a parallel universe, although this claim was later refuted, as the WMAP cold spot was found to be nothing more than a statistical artifact. Variants on this approach include Lee Smolin's notion of cosmological natural selection, the Ekpyrotic universe, and the Bubble universe theory.

Critics of the multiverse-related explanations argue that there is no independent evidence that other universes exist. Some criticize the inference from fine-tuning for life to a multiverse as fallacious, whereas others defend it against that challenge.

Top-down cosmology

Stephen Hawking, along with Thomas Hertog of CERN, proposed that the universe's initial conditions consisted of a superposition of many possible initial conditions, only a small fraction of which contributed to the conditions we see today. According to their theory, it is inevitable that we find our universe's "fine-tuned" physical constants, as the current universe "selects" only those past histories that led to the present conditions. In this way, top-down cosmology provides an anthropic explanation for why we find ourselves in a universe that allows matter and life, without invoking the ontic existence of the Multiverse.

Alien design

One hypothesis is that the universe may have been designed by extra-universal aliens. Some believe this would solve the problem of how a designer or design team capable of fine-tuning the universe could come to exist. Cosmologist Alan Guth believes humans will in time be able to generate new universes. By implication previous intelligent entities may have generated our universe. This idea leads to the possibility that the extra-universal designer/designers are themselves the product of an evolutionary process in their own universe, which must therefore itself be able to sustain life. However it also raises the question of where that universe came from, leading to an infinite regress.

The Designer Universe theory of John Gribbin suggests that the universe could have been made deliberately by an advanced civilization in another part of the Multiverse, and that this civilization may have been responsible for causing the Big Bang.

Religious apologetics

Some scientists, theologians, and philosophers, as well as certain religious groups, argue that providence or creation are responsible for fine-tuning.

Christian philosopher Alvin Plantinga argues that random chance, applied to a single and sole universe, only raises the question as to why this universe could be so "lucky" as to have precise conditions that support life at least at some place (the Earth) and time (within millions of years of the present).

One reaction to these apparent enormous coincidences is to see them as substantiating the theistic claim that the universe has been created by a personal God and as offering the material for a properly restrained theistic argument—hence the fine-tuning argument. It's as if there are a large number of dials that have to be tuned to within extremely narrow limits for life to be possible in our universe. It is extremely unlikely that this should happen by chance, but much more likely that this should happen, if there is such a person as God.

— Alvin Plantinga, "The Dawkins Confusion: Naturalism ad absurdum"

This fine-tuning of the universe is cited by philosopher and Christian apologist William Lane Craig as evidence for the existence of God or some form of intelligence capable of manipulating (or designing) the basic physics that governs the universe. Craig argues, however, "that the postulate of a divine Designer does not settle for us the religious question."

Philosopher and theologian Richard Swinburne reaches the design conclusion using Bayesian probability.

Scientist and theologian Alister McGrath has pointed out that the fine-tuning of carbon is even responsible for nature's ability to tune itself to any degree.

The entire biological evolutionary process depends upon the unusual chemistry of carbon, which allows it to bond to itself, as well as other elements, creating highly complex molecules that are stable over prevailing terrestrial temperatures, and are capable of conveying genetic information (especially DNA). […] Whereas it might be argued that nature creates its own fine-tuning, this can only be done if the primordial constituents of the universe are such that an evolutionary process can be initiated. The unique chemistry of carbon is the ultimate foundation of the capacity of nature to tune itself.

Theoretical physicist and Anglican priest John Polkinghorne has stated: "Anthropic fine tuning is too remarkable to be dismissed as just a happy accident."

Cosmological constant

From Wikipedia, the free encyclopedia
Sketch of the timeline of the Universe in the ΛCDM model. The accelerated expansion in the last third of the timeline represents the dark-energy dominated era.

In cosmology, the cosmological constant (usually denoted by the Greek capital letter lambda: Λ) is the energy density of space, or vacuum energy, that arises in Albert Einstein's field equations of general relativity. It is closely associated with the concepts of dark energy and quintessence.

Einstein originally introduced the concept in 1917 to counterbalance the effects of gravity and achieve a static universe, a notion which was the accepted view at the time. Einstein abandoned the concept in 1931 after Hubble's confirmation of the expanding universe. From the 1930s until the late 1990s, most physicists assumed the cosmological constant to be equal to zero. That changed with the surprising discovery in 1998 that the expansion of the universe is accelerating, implying the possibility of a positive nonzero value for the cosmological constant.

Since the 1990s, studies have shown that around 68% of the mass–energy density of the universe can be attributed to so-called dark energy. The cosmological constant Λ is the simplest possible explanation for dark energy, and is used in the current standard model of cosmology known as the ΛCDM model.

According to quantum field theory (QFT) which underlies modern particle physics, empty space is defined by the vacuum state which is a collection of quantum fields. All these quantum fields exhibit fluctuations in their ground state (lowest energy density) arising from the zero-point energy present everywhere in space. These zero-point fluctuations should act as a contribution to the cosmological constant Λ, but when calculations are performed these fluctuations give rise to an enormous vacuum energy. The discrepancy between theorized vacuum energy from quantum field theory and observed vacuum energy from cosmology is a source of major contention, with the values predicted exceeding observation by some 120 orders of magnitude, a discrepancy that has been called "the worst theoretical prediction in the history of physics!". This issue is called the cosmological constant problem and it is one of the greatest mysteries in science with many physicists believing that "the vacuum holds the key to a full understanding of nature".

History

Einstein included the cosmological constant as a term in his field equations for general relativity because he was dissatisfied that otherwise his equations did not allow, apparently, for a static universe: gravity would cause a universe that was initially at dynamic equilibrium to contract. To counteract this possibility, Einstein added the cosmological constant. However, soon after Einstein developed his static theory, observations by Edwin Hubble indicated that the universe appears to be expanding; this was consistent with a cosmological solution to the original general relativity equations that had been found by the mathematician Friedmann, working on the Einstein equations of general relativity. Einstein reportedly referred to his failure to accept the validation of his equations—when they had predicted the expansion of the universe in theory, before it was demonstrated in observation of the cosmological redshift—as his "biggest blunder".

In fact, adding the cosmological constant to Einstein's equations does not lead to a static universe at equilibrium because the equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe that contracts slightly will continue contracting.

However, the cosmological constant remained a subject of theoretical and empirical interest. Empirically, the onslaught of cosmological data in the past decades strongly suggests that our universe has a positive cosmological constant. The explanation of this small but positive value is an outstanding theoretical challenge, the so-called cosmological constant problem.

Some early generalizations of Einstein's gravitational theory, known as classical unified field theories, either introduced a cosmological constant on theoretical grounds or found that it arose naturally from the mathematics. For example, Sir Arthur Stanley Eddington claimed that the cosmological constant version of the vacuum field equation expressed the "epistemological" property that the universe is "self-gauging", and Erwin Schrödinger's pure-affine theory using a simple variational principle produced the field equation with a cosmological term.

Equation

Estimated ratios of dark matter and dark energy (which may be the cosmological constant) in the universe. According to current theories of physics, dark energy now dominates as the largest source of energy of the universe, in contrast to earlier epochs when it was insignificant.

The cosmological constant appears in Einstein's field equation in the form

R_μν − (1/2) R g_μν + Λ g_μν = (8πG/c⁴) T_μν,

where the Ricci tensor/scalar R and the metric tensor g describe the structure of spacetime, the stress–energy tensor T describes the energy and momentum density and flux of the matter in that point in spacetime, and the universal constants G and c are conversion factors that arise from using traditional units of measurement. When Λ is zero, this reduces to the field equation of general relativity usually used in the mid-20th century. When T is zero, the field equation describes empty space (the vacuum).

The cosmological constant has the same effect as an intrinsic energy density of the vacuum, ρvac (and an associated pressure). In this context, it is commonly moved onto the right-hand side of the equation, and defined with a proportionality factor of 8π: Λ = 8πρvac, where unit conventions of general relativity are used (otherwise factors of G and c would also appear, i.e. Λ = 8π(G/c²)ρvac = κρvac, where κ is the Einstein gravitational constant). It is common to quote values of energy density directly, though still using the name "cosmological constant", with the convention 8πG = 1. The true dimension of Λ is length⁻².

Given the Planck (2018) values of ΩΛ = 0.6889±0.0056 and H0 = 67.66±0.42 (km/s)/Mpc = (2.1927664±0.0136)×10⁻¹⁸ s⁻¹, Λ has the value of

Λ = 3 ΩΛ (H0/c)² ≈ 1.1056×10⁻⁵² m⁻² ≈ 2.888×10⁻¹²² ℓP⁻²,

where ℓP is the Planck length. A positive vacuum energy density resulting from a cosmological constant implies a negative pressure, and vice versa. If the energy density is positive, the associated negative pressure will drive an accelerated expansion of the universe, as observed. (See dark energy and cosmic inflation for details.)
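
The quoted value can be checked with a few lines of Python, assuming the flat-universe relation Λ = 3ΩΛ(H0/c)² written above; the Planck length used for the dimensionless form is a rounded constant assumed here.

Omega_L = 0.6889
H0  = 2.1927664e-18     # Hubble constant, s^-1 (67.66 km/s/Mpc)
c   = 2.99792458e8      # speed of light, m/s
l_P = 1.616255e-35      # Planck length, m

Lam = 3 * Omega_L * (H0 / c)**2
print("Lambda ~ %.4e m^-2" % Lam)                               # ~1.106e-52 m^-2
print("Lambda ~ %.3e in Planck-length units" % (Lam * l_P**2))  # ~2.9e-122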

ΩΛ (Omega Lambda)

Instead of the cosmological constant itself, cosmologists often refer to the ratio between the energy density due to the cosmological constant and the critical density of the universe, the tipping point for a sufficient density to stop the universe from expanding forever. This ratio is usually denoted ΩΛ, and is estimated to be 0.6889±0.0056, according to results published by the Planck Collaboration in 2018.

In a flat universe, ΩΛ is the fraction of the energy of the universe due to the cosmological constant, i.e., what we would intuitively call the fraction of the universe that is made up of dark energy. Note that this value changes over time: the critical density changes with cosmological time, but the energy density due to the cosmological constant remains unchanged throughout the history of the universe: the amount of dark energy increases as the universe grows, while the amount of matter does not.
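
As a rough illustration of this ratio, the Python sketch below computes the critical density from the standard relation ρ_crit = 3H0²/(8πG) (assumed here; it is not written out above) and the dark-energy density implied by ΩΛ.

import math

H0 = 2.1927664e-18    # Hubble constant, s^-1
G  = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
Omega_L = 0.6889

rho_crit = 3 * H0**2 / (8 * math.pi * G)    # critical density
rho_de   = Omega_L * rho_crit               # dark-energy density implied by Omega_Lambda
print("rho_crit ~ %.2e kg/m^3" % rho_crit)  # ~8.6e-27 kg/m^3
print("rho_de   ~ %.2e kg/m^3" % rho_de)    # ~5.9e-27 kg/m^3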

Equation of state

Another ratio that is used by scientists is the equation of state, usually denoted w, which is the ratio of pressure that dark energy puts on the universe to the energy per unit volume. This ratio is w = −1 for a true cosmological constant, and is generally different for alternative time-varying forms of vacuum energy such as quintessence. The Planck Collaboration (2018) has measured w = −1.028±0.032, consistent with −1, assuming no evolution in w over cosmic time.

Positive value

Lambda-CDM, accelerated expansion of the universe. The time-line in this schematic diagram extends from the Big Bang/inflation era 13.7 Byr ago to the present cosmological time.

Observations announced in 1998 of the distance–redshift relation for Type Ia supernovae indicated that the expansion of the universe is accelerating. When combined with measurements of the cosmic microwave background radiation these implied a value of ΩΛ ≈ 0.7, a result which has been supported and refined by more recent measurements. There are other possible causes of an accelerating universe, such as quintessence, but the cosmological constant is in most respects the simplest solution. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant, which is measured to be on the order of 10⁻⁵² m⁻², in metric units. It is often expressed as 10⁻³⁵ s⁻² (by multiplication with c², i.e. ≈10¹⁷ m²⋅s⁻²) or as 10⁻¹²² (by multiplication with the square of the Planck length, i.e. ≈10⁻⁷⁰ m²). The value is based on recent measurements of vacuum energy density, ρvac ≈ 5.96×10⁻²⁷ kg/m³.

As has been shown only recently, in work by 't Hooft, Susskind, and others, a positive cosmological constant has surprising consequences, such as a finite maximum entropy of the observable universe (see the holographic principle).[18]

Predictions

Quantum field theory

A major outstanding problem is that most quantum field theories predict a huge value for the quantum vacuum. A common assumption is that the quantum vacuum is equivalent to the cosmological constant. Although no theory exists that supports this assumption, arguments can be made in its favor.[19]

Such arguments are usually based on dimensional analysis and effective field theory. If the universe is described by an effective local quantum field theory down to the Planck scale, then we would expect a cosmological constant of the order of M_Pl⁴ (that is, of order 1 in reduced Planck units). As noted above, the measured cosmological constant is smaller than this by a factor of ~10⁻¹²⁰. This discrepancy has been called "the worst theoretical prediction in the history of physics!".
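
The size of the mismatch can be illustrated with a crude order-of-magnitude estimate in Python, assuming the naive Planck-scale energy density c⁷/(ħG²) for the prediction and an observed dark-energy density of roughly 6×10⁻²⁷ kg/m³ (both assumptions of this sketch). The result lands near 10¹²³, commonly rounded to "120 orders of magnitude".

import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

rho_planck = c**7 / (hbar * G**2)   # Planck energy density, J/m^3 (~5e113)
rho_obs    = 5.96e-27 * c**2        # observed dark-energy density, J/m^3 (~5e-10)
print("predicted / observed ~ 10^%.0f" % math.log10(rho_planck / rho_obs))   # ~10^123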

Some supersymmetric theories require a cosmological constant that is exactly zero, which further complicates things. This is the cosmological constant problem, the worst problem of fine-tuning in physics: there is no known natural way to derive the tiny cosmological constant used in cosmology from particle physics.

No vacuum in the string theory landscape is known to support a metastable, positive cosmological constant, and in 2018 a group of four physicists advanced a controversial conjecture which would imply that no such universe exists.

Anthropic principle

One possible explanation for the small but non-zero value was noted by Steven Weinberg in 1987 following the anthropic principle. Weinberg explains that if the vacuum energy took different values in different domains of the universe, then observers would necessarily measure values similar to that which is observed: the formation of life-supporting structures would be suppressed in domains where the vacuum energy is much larger. Specifically, if the vacuum energy is negative and its absolute value is substantially larger than it appears to be in the observed universe (say, a factor of 10 larger), holding all other variables (e.g. matter density) constant, that would mean that the universe is closed; furthermore, its lifetime would be shorter than the age of our universe, possibly too short for intelligent life to form. On the other hand, a universe with a large positive cosmological constant would expand too fast, preventing galaxy formation. According to Weinberg, domains where the vacuum energy is compatible with life would be comparatively rare. Using this argument, Weinberg predicted that the cosmological constant would have a value of less than a hundred times the currently accepted value. In 1992, Weinberg refined this prediction of the cosmological constant to 5 to 10 times the matter density.

This argument depends on the vacuum energy density not varying in its distribution (spatially or otherwise), as would be expected if dark energy were the cosmological constant. There is no evidence that the vacuum energy does vary, but it may be the case if, for example, the vacuum energy is (even in part) the potential of a scalar field such as the residual inflaton (see also quintessence). Another theoretical approach that deals with the issue is that of multiverse theories, which predict a large number of "parallel" universes with different laws of physics and/or values of fundamental constants. Again, the anthropic principle states that we can only live in one of the universes that is compatible with some form of intelligent life. Critics claim that these theories, when used as an explanation for fine-tuning, commit the inverse gambler's fallacy.

In 1995, Weinberg's argument was refined by Alexander Vilenkin to predict a value for the cosmological constant that was only ten times the matter density, i.e. about three times the current value since determined.

Failure to detect dark energy

An attempt to directly observe dark energy in a laboratory failed to detect a new force.

Wave packet

From Wikipedia, the free encyclopedia
 
A wave packet without dispersion (real or imaginary part)
A wave packet with dispersion

In physics, a wave packet (or wave train) is a short "burst" or "envelope" of localized wave action that travels as a unit. A wave packet can be analyzed into, or can be synthesized from, an infinite set of component sinusoidal waves of different wavenumbers, with phases and amplitudes such that they interfere constructively only over a small region of space, and destructively elsewhere. Each component wave function, and hence the wave packet, is a solution of a wave equation. Depending on the wave equation, the wave packet's profile may remain constant (no dispersion, see figure) or it may change (dispersion) while propagating.

Quantum mechanics ascribes a special significance to the wave packet; it is interpreted as a probability amplitude, its norm squared describing the probability density that a particle or particles in a particular state will be measured to have a given position or momentum. The wave equation is in this case the Schrödinger equation. It is possible to deduce the time evolution of a quantum mechanical system, similar to the process of the Hamiltonian formalism in classical mechanics. The dispersive character of solutions of the Schrödinger equation has played an important role in rejecting Schrödinger's original interpretation, and accepting the Born rule.

In the coordinate representation of the wave (such as the Cartesian coordinate system), the position of the physical object's localized probability is specified by the position of the packet solution. Moreover, the narrower the spatial wave packet, and therefore the better localized the position of the wave packet, the larger the spread in the momentum of the wave. This trade-off between spread in position and spread in momentum is a characteristic feature of the Heisenberg uncertainty principle, and will be illustrated below.
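
This position–momentum trade-off can also be seen numerically. The short Python sketch below (with arbitrary Gaussian widths, an assumption of the illustration) compares the spread of a packet in x with the spread of its Fourier transform in k; the product stays at the minimum value of 1/2.

import numpy as np

x  = np.linspace(-50, 50, 2**14)
dx = x[1] - x[0]
k  = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)   # wavenumber grid matching the FFT

def spreads(sigma_x):
    psi = np.exp(-x**2 / (4 * sigma_x**2))            # Gaussian with position spread sigma_x
    p_k = np.abs(np.fft.fft(psi))**2                  # wavenumber-space probability profile
    sigma_k = np.sqrt((k**2 * p_k).sum() / p_k.sum()) # spread in k
    return sigma_x, sigma_k, sigma_x * sigma_k        # product stays at the minimum, 1/2

for s in (0.5, 1.0, 2.0):
    print("sigma_x = %.1f   sigma_k = %.3f   product = %.3f" % spreads(s))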

Historical background

In the early 1900s, it became apparent that classical mechanics had some major failings. Isaac Newton originally proposed the idea that light came in discrete packets, which he called corpuscles, but the wave-like behavior of many light phenomena quickly led scientists to favor a wave description of electromagnetism. It wasn't until the 1930s that the particle nature of light really began to be widely accepted in physics. The development of quantum mechanics – and its success at explaining confusing experimental results – was at the root of this acceptance. Thus, one of the basic concepts in the formulation of quantum mechanics is that of light coming in discrete bundles called photons. The energy of a photon is a function of its frequency,

E = hν.

The photon's energy is equal to Planck's constant, h, multiplied by its frequency, ν. This resolved a problem in classical physics, called the ultraviolet catastrophe.

The ideas of quantum mechanics continued to be developed throughout the 20th century. The picture that was developed was of a particulate world, with all phenomena and matter made of and interacting with discrete particles; however, these particles were described by a probability wave. The interactions, locations, and all of physics would be reduced to the calculations of these probability amplitudes.

The particle-like nature of the world has been confirmed by experiment for over a century, while the wave-like phenomena can be characterized as consequences of the wave packet aspect of quantum particles (see wave–particle duality). According to the principle of complementarity, the wave-like and particle-like characteristics never manifest themselves at the same time, i.e. in the same experiment; see, however, the Afshar experiment and the lively discussion around it.

Basic behaviors

Position space probability density of an initially Gaussian state trapped in an infinite potential well experiencing periodic Quantum Tunneling in a centered potential wall.

Non-dispersive

As an example of propagation without dispersion, consider wave solutions to the following wave equation from classical physics:

∂²u/∂t² = c² ∇²u,

where c is the speed of the wave's propagation in a given medium.

Using the physics time convention, exp(−iωt), the wave equation has plane-wave solutions

u(x, t) = exp(i(k·x − ωt)),

where

ω = c|k|, and |k|² = kx² + ky² + kz².

This relation between ω and k should be valid so that the plane wave is a solution to the wave equation. It is called a dispersion relation.

To simplify, consider only waves propagating in one dimension (extension to three dimensions is straightforward). Then the general solution is

u(x, t) = F(x − ct) + G(x + ct),

in which we may take ω = kc. The first term represents a wave propagating in the positive x-direction since it is a function of x − ct only; the second term, being a function of x + ct, represents a wave propagating in the negative x-direction.

A wave packet is a localized disturbance that results from the sum of many different wave forms. If the packet is strongly localized, more frequencies are needed to allow the constructive superposition in the region of localization and destructive superposition outside the region. From the basic solutions in one dimension, a general form of a wave packet can be expressed as

u(x, t) = (1/√(2π)) ∫ A(k) exp(i(kx − ω(k)t)) dk.

As in the plane-wave case the wave packet travels to the right for ω(k) = kc, since u(x, t) = F(x − ct), and to the left for ω(k) = −kc, since u(x, t) = F(x + ct).

The factor 1/√(2π) comes from Fourier transform conventions. The amplitude A(k) contains the coefficients of the linear superposition of the plane-wave solutions. These coefficients can in turn be expressed as a function of u(x, t) evaluated at t = 0 by inverting the Fourier transform relation above:

A(k) = (1/√(2π)) ∫ u(x, 0) exp(−ikx) dx.

For instance, choosing

u(x, 0) = exp(ik₀x − x²/2),

we obtain

A(k) = exp(−(k − k₀)²/2),

and finally

u(x, t) = exp(ik₀(x − ct)) exp(−(x − ct)²/2).

The nondispersive propagation of the real or imaginary part of this wave packet is presented in the above animation.
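
A short numerical check of this non-dispersive propagation can be written in a few lines of Python (the Gaussian envelope and the parameter choices below are assumptions of the illustration): the packet profile at time t is exactly the t = 0 profile translated by ct.

import numpy as np

c, k0 = 1.0, 5.0
x  = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def u(x, t):
    # Gaussian packet riding on a carrier wave: u(x, t) = F(x - ct)
    xi = x - c * t
    return np.exp(1j * k0 * xi) * np.exp(-xi**2 / 2)

t = 7.3
shift = int(round(c * t / dx))   # how many grid points the packet has moved
print(np.allclose(u(x, t)[shift:], u(x, 0)[:-shift]))   # True: same shape, just translated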

Dispersive

Position space probability density of an initially Gaussian state moving in one dimension at minimally uncertain, constant momentum in free space.

By contrast, as an example of propagation now with dispersion, consider instead solutions to the Schrödinger equation (Pauli 2000, with m and ħ set equal to one),

i ∂u/∂t = −(1/2) ∇²u,

yielding the dispersion relation

ω = |k|²/2.

Once again, restricting attention to one dimension, the solution to the Schrödinger equation satisfying the initial condition u(x, 0) = exp(ik₀x − x²), representing a wave packet localized in space at the origin, is seen to be

u(x, t) = (1/√(1 + 2it)) exp(ik₀(x − k₀t/2)) exp(−(x − k₀t)²/(1 + 2it)).

An impression of the dispersive behavior of this wave packet is obtained by looking at the probability density,

|u(x, t)|² = (1/√(1 + 4t²)) exp(−2(x − k₀t)²/(1 + 4t²)).

It is evident that this dispersive wave packet, while moving with constant group velocity k₀, is delocalizing rapidly: it has a width increasing with time as √(1 + 4t²) → 2t, so eventually it diffuses to an unlimited region of space.

The momentum profile A(k) remains invariant. The probability current is

j(x, t) = Im(u* ∂u/∂x) = |u(x, t)|² (k₀ + 4t(x − k₀t)/(1 + 4t²)).
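
The spreading described above can be checked numerically; the Python sketch below evaluates the probability density written here on a grid (the value of k₀ and the grid are arbitrary choices) and confirms that the norm stays constant, the centre moves at the group velocity k₀, and the width tracks √(1 + 4t²).

import numpy as np

k0 = 2.0
x  = np.linspace(-60, 60, 12001)
dx = x[1] - x[0]

def density(x, t):
    # the probability density |u(x, t)|^2 given above
    return np.exp(-2 * (x - k0 * t)**2 / (1 + 4 * t**2)) / np.sqrt(1 + 4 * t**2)

for t in (0.0, 1.0, 3.0):
    p = density(x, t)
    norm  = p.sum() * dx                                    # stays constant (probability conservation)
    mean  = (x * p).sum() * dx / norm                       # moves at the group velocity k0
    width = 2 * np.sqrt(((x - mean)**2 * p).sum() * dx / norm)
    print("t=%.0f  norm=%.3f  <x>=%.2f  width=%.2f  sqrt(1+4t^2)=%.2f"
          % (t, norm, mean, width, np.sqrt(1 + 4 * t**2)))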

Gaussian wave packets in quantum mechanics

Superposition of 1D plane waves (blue) that sum to form a quantum Gaussian wave packet (red) that propagates to the right while spreading. Blue dots follow each plane wave's phase velocity while the red line follows the central group velocity.
 
1D Gaussian wave packet, shown in the complex plane, for a=2 and k=4

The above dispersive Gaussian wave packet, unnormalized and just centered at the origin, instead, at t = 0, can now be written in 3D, now in standard units:

Ψ(r, 0) = exp(−r²/(2a)),

where a is a positive real number, the square of the width of the wave packet.

The Fourier transform is also a Gaussian in terms of the wavenumber, at t = 0, the k-vector (with inverse width 1/√a,

Ψ̃(k, 0) = a^(3/2) exp(−a k²/2),

so that

Δx Δp = √(a/2) · ħ/√(2a) = ħ/2,

i.e., it saturates the uncertainty relation).

Each separate wave only phase-rotates in time, so that the time-dependent Fourier-transformed solution is

Ψ̃(k, t) = a^(3/2) exp(−a k²/2) exp(−iħk²t/(2m)) = a^(3/2) exp(−(a + iħt/m) k²/2).

The inverse Fourier transform is still a Gaussian, but now the parameter a has become complex, and there is an overall normalization factor.

The integral of Ψ over all space is invariant, because it is the inner product of Ψ with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy eigenstate η(x), the inner product,

only changes in time in a simple way: its phase rotates with a frequency determined by the energy of η. When η has zero energy, like the infinite wavelength wave, it doesn't change at all.

The integral ∫|Ψ|² d³r is also invariant, which is a statement of the conservation of probability. Explicitly,

in which √a is the width of P(r) at t = 0; r is the distance from the origin; the speed of the particle is zero; and the time origin t = 0 can be chosen arbitrarily.

The width of the Gaussian is the interesting quantity which can be read off from the probability density, |Ψ|²,

This width eventually grows linearly in time, as ħt/(m√a), indicating wave-packet spreading.

For example, if an electron wave packet is initially localized in a region of atomic dimensions (i.e., 10⁻¹⁰ m), then the width of the packet doubles in about 10⁻¹⁶ s. Clearly, particle wave packets spread out very rapidly indeed (in free space): for instance, after 1 ms, the width will have grown to about a kilometer.
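
These figures follow directly from the spreading rate ħt/(m√a) above; a quick arithmetic check using standard values of ħ and the electron mass:

    # Spreading of an electron wave packet of initial width sqrt(a) ~ 1e-10 m.
    hbar = 1.054571817e-34    # J s
    m_e = 9.1093837015e-31    # kg
    width0 = 1e-10            # m, the initial width sqrt(a)

    # time at which the spreading term hbar t / (m sqrt(a)) equals the initial width
    print(m_e * width0**2 / hbar)             # ~8.6e-17 s, i.e. about 1e-16 s

    # width reached after 1 ms of free spreading
    print(hbar * 1e-3 / (m_e * width0))       # ~1.2e3 m, i.e. about a kilometer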

This linear growth is a reflection of the (time-invariant) momentum uncertainty: the wave packet is confined to a narrow Δx = √(a/2), and so has a momentum which is uncertain (according to the uncertainty principle) by the amount ħ/√(2a), a spread in velocity of ħ/(m√(2a)), and thus a spread in the future position of ħt/(m√(2a)). The uncertainty relation is then a strict inequality, very far from saturation, indeed! The initial uncertainty ΔxΔp = ħ/2 has now increased by a factor of ħt/(ma) (for large t).

The Airy wave train

In contrast to the above Gaussian wave packet, it has been observed that a particular wave function based on Airy functions propagates freely without envelope dispersion, maintaining its shape. It accelerates undistorted in the absence of a force field: ψ = Ai(B(x − B³t²)) exp(iB³t(x − 2B³t²/3)). (For simplicity, ħ = 1, m = 1/2, and B is a constant; cf. nondimensionalization.)
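
That this expression solves the free Schrödinger equation with the stated conventions (ħ = 1, m = 1/2, so that iψ_t = −ψ_xx) can be confirmed symbolically; a minimal SymPy sketch:

    import sympy as sp

    # Verify that the Airy wave train solves i psi_t = -psi_xx (hbar = 1, m = 1/2).
    x, t, B = sp.symbols('x t B', real=True)
    u = B * (x - B**3 * t**2)
    psi = sp.airyai(u) * sp.exp(sp.I * B**3 * t * (x - sp.Rational(2, 3) * B**3 * t**2))
    residual = sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2)
    print(sp.simplify(sp.expand(residual)))   # prints 0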

Truncated view of time development for the Airy front in phase space.

Nevertheless, there is no dissonance with Ehrenfest's theorem in this force-free situation, because the state is both non-normalizable and has an undefined (infinite) ⟨x⟩ for all times. (To the extent that it could be defined, ⟨p⟩ = 0 for all times, despite the apparent acceleration of the front.)

In phase space, this is evident in the pure-state Wigner quasiprobability distribution of this wavetrain, whose shape in x and p is invariant as time progresses, but whose features accelerate to the right, along accelerating parabolas B(x − B³t²) + (p/B − tB²)² = 0,

Note the momentum distribution obtained by integrating over all x is constant. Since this is the probability density in momentum space, it is evident that the wave function itself is not normalizable.

Free propagator

The narrow-width limit of the Gaussian wave packet solution discussed is the free propagator kernel K. For other differential equations, this is usually called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of K.

Returning to one dimension for simplicity, with m and ħ set equal to one, when a is the infinitesimal quantity ε, the Gaussian initial condition, rescaled so that its integral is one,

becomes a delta function, δ(x), so that its time evolution,

yields the propagator.

Note that a very narrow initial wave packet instantly becomes infinitely wide, but with a phase which is more rapidly oscillatory at large values of x. This might seem strange—the solution goes from being localized at one point to being "everywhere" at all later times, but it is a reflection of the enormous momentum uncertainty of a localized particle, as explained above.

Further note that the norm of the wave function is infinite, which is also correct, since the square of a delta function is divergent in the same way.

The factor involving ε is an infinitesimal quantity which is there to make sure that integrals over K are well defined. In the limit that ε→0, K becomes purely oscillatory, and integrals of K are not absolutely convergent. In the remainder of this section, it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit ε→0 is to be only taken after the final state is calculated.

The propagator is the amplitude for reaching point x at time t, when starting at the origin, x=0. By translation invariance, the amplitude for reaching a point x when starting at point y is the same function, only now translated,

In the limit when t is small, the propagator, of course, goes to a delta function,

but only in the sense of distributions: The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero.

To see this, note that the integral over all space of K equals 1 at all times,

since this integral is the inner-product of K with the uniform wave function. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit ε→0 is taken at the very end.

So the propagation kernel is the (future) time evolution of a delta function, and it is continuous, in a sense: it goes to the initial delta function at small times. If the initial wave function is an infinitely narrow spike at position y,

it becomes the oscillatory wave,

Now, since every function can be written as a weighted sum of such narrow spikes,

the time evolution of every function ψ0 is determined by this propagation kernel K,

Thus, this is a formal way to express the fundamental solution or general solution. The interpretation of this expression is that the amplitude for a particle to be found at point x at time t is the amplitude that it started at y, times the amplitude that it went from y to x, summed over all the possible starting points. In other words, it is a convolution of the kernel K with the arbitrary initial condition ψ0,
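
As a concrete illustration of this convolution (a sketch only, with ħ = m = 1 in one dimension, taking the standard free-particle kernel K(x, t) = e^(ix²/(2t))/√(2πit)), the code below propagates a Gaussian ψ₀ by direct numerical convolution and compares the result with the closed-form evolution in which the parameter a becomes a + it:

    import numpy as np

    # Propagate psi0 by convolving with K(x, t) = exp(i x^2 / 2t) / sqrt(2 pi i t)
    # (hbar = m = 1) and compare with the exact evolution a -> a + i t of a Gaussian.
    N, L = 2048, 40.0
    dx = L / N
    x = (np.arange(N) - N // 2) * dx
    t, a = 1.0, 1.0
    psi0 = np.exp(-x**2 / (2 * a)) / (np.pi * a)**0.25

    def K(x, t):
        return np.exp(1j * x**2 / (2 * t)) / np.sqrt(2j * np.pi * t)

    psi_t = np.array([np.sum(K(xi - x, t) * psi0) * dx for xi in x])
    psi_exact = np.sqrt(a / (a + 1j * t)) * np.exp(-x**2 / (2 * (a + 1j * t))) / (np.pi * a)**0.25
    print(np.max(np.abs(psi_t - psi_exact)))   # small; limited only by the finite grid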

Since the amplitude to travel from x to y after a time t+t' can be considered in two steps, the propagator obeys the composition identity,

which can be interpreted as follows: the amplitude to travel from x to z in time t+t' is the sum of the amplitude to travel from x to y in time t, multiplied by the amplitude to travel from y to z in time t', summed over all possible intermediate states y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral.

Analytic continuation to diffusion

The spreading of wave packets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is randomly walking, the probability density function at any point satisfies the diffusion equation (also see the heat equation),

where the factor of 2, which can be removed by rescaling either time or space, is only for convenience.

A solution of this equation is the spreading Gaussian,

and, since the integral of ρₜ is constant while its width becomes narrow at small times, this function approaches a delta function at t=0,

again only in the sense of distributions, so that

for any smooth test function f.
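
For concreteness, the equation intended here can be taken as ∂ρ/∂t = ½ ∂²ρ/∂x², with spreading Gaussian ρₜ(x) = e^(−x²/(2t))/√(2πt); this specific normalization is an assumption consistent with the factor of 2 mentioned above. A short symbolic check:

    import sympy as sp

    # Check that rho_t(x) = exp(-x^2 / 2t) / sqrt(2 pi t) solves d(rho)/dt = (1/2) d2(rho)/dx2.
    x, t = sp.symbols('x t', positive=True)
    rho = sp.exp(-x**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)
    print(sp.simplify(sp.diff(rho, t) - sp.Rational(1, 2) * sp.diff(rho, x, 2)))   # prints 0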

The spreading Gaussian is the propagation kernel for the diffusion equation and it obeys the convolution identity,

which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator H,

which is the infinitesimal diffusion operator,

A matrix has two indices, which in continuous space makes it a function of x and x'. In this case, because of translation invariance, the matrix element K depends only on the difference of the positions, and a convenient abuse of notation is to refer to the operator, the matrix elements, and the function of the difference by the same name:

Translation invariance means that continuous matrix multiplication,

is essentially convolution,

The exponential can be defined for a range of t, including complex values, so long as integrals over the propagation kernel stay convergent,

As long as the real part of z is positive, for large values of x, K is exponentially decreasing, and integrals over K are indeed absolutely convergent.

The limit of this expression for z approaching the pure imaginary axis is the Schrödinger propagator encountered above,

which illustrates the above time evolution of Gaussians.

From the fundamental identity of exponentiation, or path integration,

holds for all complex z values, where the integrals are absolutely convergent so that the operators are well defined.
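
In its absolutely convergent form (Re z > 0), the identity is straightforward to verify numerically. The sketch below assumes the kernel K_z(x) = e^(−x²/(2z))/√(2πz), consistent with the Gaussian kernels above, and checks both that its integral over all space is 1 and that convolving K_z with K_w reproduces K_(z+w):

    import numpy as np

    # Complex diffusion kernel K_z(x) = exp(-x^2 / 2z) / sqrt(2 pi z), for Re z > 0.
    def K(x, z):
        return np.exp(-x**2 / (2 * z)) / np.sqrt(2 * np.pi * z)

    N, L = 2048, 100.0
    dx = L / N
    x = (np.arange(N) - N // 2) * dx
    z, w = 1.0 + 2.0j, 0.5 + 1.5j                  # both with positive real part

    print(np.abs(K(x, z).sum() * dx))              # ~1: the integral of the kernel is 1
    lhs = np.array([np.sum(K(xi - x, z) * K(x, w)) * dx for xi in x])
    print(np.max(np.abs(lhs - K(x, z + w))))       # ~0: convolving K_z with K_w gives K_(z+w)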

Thus, quantum evolution of a Gaussian, which is the complex diffusion kernel K,

amounts to the time-evolved state,

This illustrates the above diffusive form of the complex Gaussian solutions,

 
