
Monday, November 9, 2015

The Cassandra Effect



From Wikipedia, the free encyclopedia


Painting of Cassandra by Evelyn De Morgan

The Cassandra metaphor (variously labelled the Cassandra 'syndrome', 'complex', 'phenomenon', 'predicament', 'dilemma', or 'curse') occurs when valid warnings or concerns are dismissed or disbelieved.

The term originates in Greek mythology. Cassandra was a daughter of Priam, the King of Troy. Struck by her beauty, Apollo provided her with the gift of prophecy, but when Cassandra refused Apollo's romantic advances, he placed a curse ensuring that nobody would believe her warnings. Cassandra was left with the knowledge of future events, but could neither alter these events nor convince others of the validity of her predictions.

The metaphor has been applied in a variety of contexts, such as psychology, environmentalism, politics, science, cinema, the corporate world, and philosophy, and has been in circulation since at least 1949, when French philosopher Gaston Bachelard coined the term 'Cassandra Complex' to refer to a belief that things could be known in advance.[1]

Usage

Psychology

The Cassandra metaphor is applied by some psychologists to individuals who experience physical and emotional suffering as a result of distressing personal perceptions, and who are disbelieved when they attempt to share the cause of their suffering with others.

Melanie Klein

In 1963, psychologist Melanie Klein provided an interpretation of Cassandra as representing the human moral conscience whose main task is to issue warnings. Cassandra as moral conscience, "predicts ill to come and warns that punishment will follow and grief arise."[2] Cassandra's need to point out moral infringements and subsequent social consequences is driven by what Klein calls "the destructive influences of the cruel super-ego," which is represented in the Greek myth by the god Apollo, Cassandra's overlord and persecutor.[3] Klein's use of the metaphor centers on the moral nature of certain predictions, which tends to evoke in others "a refusal to believe what at the same time they know to be true, and expresses the universal tendency toward denial, [with] denial being a potent defence against persecutory anxiety and guilt."[2]

Laurie Layton Schapira

In a 1988 study Jungian analyst Laurie Layton Schapira explored what she called the "Cassandra Complex" in the lives of two of her analysands.[4]

Based on clinical experience, she delineates three factors which constitute the Cassandra complex:
  1. dysfunctional relationships with the "Apollo archetype",
  2. emotional or physical suffering, including hysteria or ‘women’s problems’,
  3. and being disbelieved when attempting to relate the facticity of these experiences to others.[4]
Layton Schapira views the Cassandra complex as resulting from a dysfunctional relationship with what she calls the "Apollo archetype", which refers to any individual's or culture's pattern that is dedicated to, yet bound by, order, reason, intellect, truth and clarity, and that disavows anything occult or irrational.[5] The intellectual specialization of this archetype creates emotional distance and can predispose relationships to a lack of emotional reciprocity and consequent dysfunctions.[4] She further states that a 'Cassandra woman' is very prone to hysteria because she "feels attacked not only from the outside world but also from within, especially from the body in the form of somatic, often gynaecological, complaints."[6]

Addressing the metaphorical application of the Greek Cassandra myth, Layton Schapira states that:
What the Cassandra woman sees is something dark and painful that may not be apparent on the surface of things or that objective facts do not corroborate. She may envision a negative or unexpected outcome; or something which would be difficult to deal with; or a truth which others, especially authority figures, would not accept. In her frightened, ego-less state, the Cassandra woman may blurt out what she sees, perhaps with the unconscious hope that others might be able to make some sense of it. But to them her words sound meaningless, disconnected and blown out of all proportion.[6]

Jean Shinoda Bolen

In 1989, Jean Shinoda Bolen, Clinical Professor of Psychiatry at the University of California, published an essay on the god Apollo[7] in which she detailed a psychological profile of the 'Cassandra woman', a term she used for someone suffering — as in the mythological relationship between Cassandra and Apollo — from a dysfunctional relationship with an "Apollo man". Bolen added that the Cassandra woman may exhibit "hysterical" overtones, and may be disbelieved when attempting to share what she knows.[8]

According to Bolen, the archetypes of Cassandra and Apollo are not gender-specific. She states that "women often find that a particular [male] god exists in them as well, just as I found that when I spoke about goddesses men could identify a part of themselves with a specific goddess. Gods and goddesses represent different qualities in the human psyche. The pantheon of Greek deities together, male and female, exist as archetypes in us all… There are gods and goddesses in every person."[9]

"As an archetype, Apollo personifies the aspect of the personality that wants clear definitions, is drawn to master a skill, values order and harmony, and prefers to look at the surface rather than at what underlies appearances. The Apollo archetype favors thinking over feeling, distance over closeness, objective assessment over subjective intuition."[10]

Of what she describes as the negative Apollonic influence, Dr. Bolen writes:
Individuals who resemble Apollo have difficulties that are related to emotional distance, such as communication problems, and the inability to be intimate… Rapport with another person is hard for the Apollo man. He prefers to assess (or judge) the situation or the person from a distance, not knowing that he must "get close up" – be vulnerable and empathic – in order to truly know someone else…. But if the woman wants a deeper, more personal relationship, then there are difficulties… she may become increasingly irrational or hysterical.[8]
Bolen suggests that a Cassandra woman (or man) may become increasingly hysterical and irrational when in a dysfunctional relationship with a negative Apollo, and may experience others' disbelief when describing her experiences.[8]

Corporate world

Foreseeing potential future directions for a corporation or company is sometimes called 'visioning'.[11] Yet achieving a clear, shared vision in an organization is often difficult due to a lack of commitment to the new vision by some individuals in the organization, because it does not match reality as they see it. Those who support the new vision are termed 'Cassandras' – able to see what is going to happen, but not believed.[11] Sometimes the name Cassandra is applied to those who can predict rises, falls, and particularly crashes on the global stock market, as happened with Warren Buffett, who repeatedly warned that the 1990s stock market surge was a bubble, earning him the title of 'Wall Street Cassandra'.[12]

Environmental movement

Many environmentalists have predicted looming environmental catastrophes including climate change, rise in sea levels, irreversible pollution, and an impending collapse of ecosystems, including those of rainforests and ocean reefs.[13] Such individuals sometimes acquire the label of 'Cassandras', whose warnings of impending environmental disaster are disbelieved or mocked.[13]

Environmentalist Alan Atkisson states that to understand that humanity is on a collision course with the laws of nature is to be stuck in what he calls the 'Cassandra dilemma': one can see the most likely outcome of current trends and can warn people about what is happening, but the vast majority cannot, or will not, respond, and if catastrophe later occurs they may even blame the forecaster, as if the prediction set the disaster in motion.[14] Occasionally there may be a "successful" alert, though the succession of books, campaigns, organizations, and personalities that we think of as the environmental movement has more generally fallen toward the opposite side of this dilemma: a failure to "get through" to people and avert disaster. In the words of Atkisson: "too often we watch helplessly, as Cassandra did, while the soldiers emerge from the Trojan horse just as foreseen and wreak their predicted havoc. Worse, Cassandra's dilemma has seemed to grow more inescapable even as the chorus of Cassandras has grown larger."[15]

Other examples

There are examples of the Cassandra metaphor being applied in the contexts of medical science,[16][17] the media,[18] feminist perspectives on 'reality',[19][20] Asperger's Disorder (a 'Cassandra syndrome' is sometimes said to arise when partners or family members of an individual with Asperger's seek help but are disbelieved),[21][22][23] and politics.[24] There are also examples of the metaphor in popular music lyrics, such as the 1982 ABBA song "Cassandra"[25][26] and Star One's "Cassandra Complex". The five-part The Mars Volta song "Cassandra Gemini" may reference the syndrome,[27] as may the film 12 Monkeys and Dead and Divine's "Cassandra Syndrome".

Novikov self-consistency principle



From Wikipedia, the free encyclopedia

The Novikov self-consistency principle, also known as the Novikov self-consistency conjecture, is a principle developed by Russian physicist Igor Dmitriyevich Novikov in the mid-1980s to solve the problem of paradoxes in time travel, which is theoretically permitted in certain solutions of general relativity (solutions containing what are known as closed timelike curves). The principle asserts that if an event exists that would give rise to a paradox, or to any "change" to the past whatsoever, then the probability of that event is zero. It would thus be impossible to create time paradoxes.

History of the principle

Physicists have long been aware that there are solutions to the theory of general relativity which contain closed timelike curves, or CTCs—see for example the Gödel metric. Novikov discussed the possibility of CTCs in books written in 1975 and 1983,[1] offering the opinion that only self-consistent trips back in time would be permitted.[2] In a 1990 paper by Novikov and several others, "Cauchy problem in spacetimes with closed timelike curves",[3] the authors state:
The only type of causality violation that the authors would find unacceptable is that embodied in the science-fiction concept of going backward in time and killing one's younger self ("changing the past"). Some years ago one of us (Novikov) briefly considered the possibility that CTCs might exist and argued that they cannot entail this type of causality violation: Events on a CTC are already guaranteed to be self-consistent, Novikov argued; they influence each other around a closed curve in a self-adjusted, cyclical, self-consistent way. The other authors recently have arrived at the same viewpoint.
We shall embody this viewpoint in a principle of self-consistency, which states that the only solutions to the laws of physics that can occur locally in the real Universe are those which are globally self-consistent. This principle allows one to build a local solution to the equations of physics only if that local solution can be extended to a part of a (not necessarily unique) global solution, which is well defined throughout the nonsingular regions of the spacetime.
Among the coauthors of this 1990 paper were Kip Thorne, Mike Morris, and Ulvi Yurtsever, who in 1988 had stirred up renewed interest in the subject of time travel in general relativity with their paper "Wormholes, Time Machines, and the Weak Energy Condition",[4] which showed that a new general relativity solution known as a traversable wormhole could lead to closed timelike curves, and unlike previous CTC-containing solutions it did not require unrealistic conditions for the universe as a whole. After discussions with another coauthor of the 1990 paper, John Friedman, they convinced themselves that time travel need not lead to unresolvable paradoxes, regardless of what type of object was sent through the wormhole.[5]:509

In response, another physicist named Joseph Polchinski sent them a letter in which he argued that one could avoid questions of free will by considering a potentially paradoxical situation involving a billiard ball sent through a wormhole which sends it back in time. In this scenario, the ball is fired into a wormhole at an angle such that, if it continues along that path, it will exit the wormhole in the past at just the right angle to collide with its earlier self, thereby knocking it off course and preventing it from entering the wormhole in the first place. Thorne deemed this problem "Polchinski's paradox".[5]:510–511

After considering the problem, two students at Caltech (where Thorne taught), Fernando Echeverria and Gunnar Klinkhammer, were able to find a solution beginning with the original billiard ball trajectory proposed by Polchinski which managed to avoid any inconsistencies. In this situation, the billiard ball emerges from the future at a different angle than the one used to generate the paradox, and delivers its younger self a glancing blow instead of knocking it completely away from the wormhole, a blow which changes its trajectory in just the right way so that it will travel back in time with the angle required to deliver its younger self this glancing blow. Echeverria and Klinkhammer actually found that there was more than one self-consistent solution, with slightly different angles for the glancing blow in each case. Later analysis by Thorne and Robert Forward showed that for certain initial trajectories of the billiard ball, there could actually be an infinite number of self-consistent solutions.[5]:511–513
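
Schematically, the self-consistency requirement here is a fixed-point condition: the glancing blow the ball receives must be exactly the blow its older self later delivers. The following toy sketch in Python, with an invented linear response standing in for the true wormhole dynamics, finds such a fixed point by bisection.

    # Toy model of self-consistency as a fixed-point condition. The response
    # function is an arbitrary stand-in, not the Echeverria-Klinkhammer result.
    def exit_angle(strike_angle):
        # Angle (degrees) at which the ball re-emerges from the past mouth,
        # given the angle of the glancing blow it received on the way in.
        return 0.5 * strike_angle + 10.0

    def fixed_point(f, lo, hi, tol=1e-9):
        # Bisection on f(a) - a: the blow delivered must equal the blow received.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(mid) - mid > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(fixed_point(exit_angle, 0.0, 90.0))  # 20.0, the self-consistent angle

In this toy setting there is a single fixed point; the scenarios studied by Echeverria, Klinkhammer, Thorne and Forward correspond to maps with several, or infinitely many, such solutions.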

Echeverria, Klinkhammer and Thorne published a paper discussing these results in 1991;[6] in addition, they reported that they had tried to see if they could find any initial conditions for the billiard ball for which there were no self-consistent extensions, but were unable to do so. Thus it is plausible that there exist self-consistent extensions for every possible initial trajectory, although this has not been proven.[7]:184 This only applies to initial conditions which are outside of the chronology-violating region of spacetime,[7]:187 which is bounded by a Cauchy horizon.[8] This could mean that the Novikov self-consistency principle does not actually place any constraints on systems outside of the region of spacetime where time travel is possible, only inside it.

Even if self-consistent extensions can be found for arbitrary initial conditions outside the Cauchy horizon, the finding that there can be multiple distinct self-consistent extensions for the same initial condition—indeed, Echeverria et al. found an infinite number of consistent extensions for every initial trajectory they analyzed[7]:184—can be seen as problematic, since classically there seems to be no way to decide which extension the laws of physics will choose. To get around this difficulty, Thorne and Klinkhammer analyzed the billiard ball scenario using quantum mechanics,[5]:514–515 performing a quantum-mechanical sum over histories (path integral) using only the consistent extensions, and found that this resulted in a well-defined probability for each consistent extension. The authors of "Cauchy problem in spacetimes with closed timelike curves" write:
The simplest way to impose the principle of self-consistency in quantum mechanics (in a classical space-time) is by a sum-over-histories formulation in which one includes all those, and only those, histories that are self-consistent. It turns out that, at least formally (modulo such issues as the convergence of the sum), for every choice of the billiard ball's initial, nonrelativistic wave function before the Cauchy horizon, such a sum over histories produces unique, self-consistent probabilities for the outcomes of all sets of subsequent measurements. ... We suspect, more generally, that for any quantum system in a classical wormhole spacetime with a stable Cauchy horizon, the sum over all self-consistent histories will give unique, self-consistent probabilities for the outcomes of all sets of measurements that one might choose to make.
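
As a minimal illustration of that prescription (with invented histories, amplitudes, and a trivial consistency flag in place of real dynamics), one can restrict a discrete sum over histories to the self-consistent ones and renormalize:

    # Sketch: keep only self-consistent histories, then normalize |amplitude|^2
    # over that subset. All entries below are invented for illustration.
    histories = {
        "glancing blow at angle A": (0.3 - 0.4j, True),
        "glancing blow at angle B": (0.2 + 0.5j, True),
        "glancing blow at angle C": (0.6 + 0.2j, True),
        "ball knocked clear of the wormhole": (0.5 + 0.1j, False),  # paradoxical
    }

    consistent = {h: amp for h, (amp, ok) in histories.items() if ok}
    norm = sum(abs(amp) ** 2 for amp in consistent.values())
    for h, amp in consistent.items():
        print(f"{abs(amp) ** 2 / norm:.2f}  {h}")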

Assumptions of the Novikov self-consistency principle

The Novikov consistency principle assumes certain conditions about what sort of time travel is possible. Specifically, it assumes either that there is only one timeline, or that any alternative timelines (such as those postulated by the many-worlds interpretation of quantum mechanics) are not accessible.

Given these assumptions, the constraint that time travel must not lead to inconsistent outcomes could be seen merely as a tautology, a self-evident truth that cannot possibly be false, because assuming it is false leads to a logical paradox. However, the Novikov self-consistency principle is intended to go beyond just the statement that history must be consistent, making the additional nontrivial assumption that the universe obeys the same local laws of physics in situations involving time travel that it does in regions of spacetime that lack closed timelike curves. This is made clear in the above-mentioned "Cauchy problem in spacetimes with closed timelike curves",[3] where the authors write:
That the principle of self-consistency is not totally tautological becomes clear when one considers the following alternative: The laws of physics might permit CTC's; and when CTC's occur, they might trigger new kinds of local physics which we have not previously met. ... The principle of self-consistency is intended to rule out such behavior. It insists that local physics is governed by the same types of physical laws as we deal with in the absence of CTC's: the laws that entail self-consistent single valuedness for the fields. In essence, the principle of self-consistency is a principle of no new physics. If one is inclined from the outset to ignore or discount the possibility of new physics, then one will regard self-consistency as a trivial principle.

Implications for time travelers

The assumptions of the self-consistency principle can be extended to hypothetical scenarios involving intelligent time travelers as well as unintelligent objects such as billiard balls. The authors of "Cauchy problem in spacetimes with closed timelike curves" commented on the issue in the paper's conclusion, writing:
If CTC's are allowed, and if the above vision of theoretical physics' accommodation with them turns out to be more or less correct, then what will this imply about the philosophical notion of free will for humans and other intelligent beings? It certainly will imply that intelligent beings cannot change the past. Such change is incompatible with the principle of self-consistency. Consequently, any being who went through a wormhole and tried to change the past would be prevented by physical law from making the change; i.e., the "free will" of the being would be constrained. Although this constraint has a more global character than constraints on free will that follow from the standard, local laws of physics, it is not obvious to us that this constraint is more severe than those imposed by standard physical law.[3]
Similarly, physicist and astronomer J. Craig Wheeler concludes that:
According to the consistency conjecture, any complex interpersonal interactions must work themselves out self-consistently so that there is no paradox. That is the resolution. This means, if taken literally, that if time machines exist, there can be no free will. You cannot will yourself to kill your younger self if you travel back in time. You can coexist, take yourself out for a beer, celebrate your birthday together, but somehow circumstances will dictate that you cannot behave in a way that leads to a paradox in time. Novikov supports this point of view with another argument: physics already restricts your free will every day. You may will yourself to fly or to walk through a concrete wall, but gravity and condensed-matter physics dictate that you cannot. Why, Novikov asks, is the consistency restriction placed on a time traveler any different?[9]

Time loop logic

Time loop logic, coined by the roboticist and futurist Hans Moravec,[10] is the name of a hypothetical system of computation that exploits the Novikov self-consistency principle to compute answers much faster than possible with the standard model of computational complexity using Turing machines. In this system, a computer sends a result of a computation backwards through time and relies upon the self-consistency principle to force the sent result to be correct, provided that the machine can reliably receive information from the future and that the algorithm and the underlying mechanism are formally correct. An incorrect result, or no result at all, can still be produced if the time travel mechanism or the algorithm is not guaranteed to be accurate.

A simple example is an iterative method algorithm. Moravec states:
Make a computing box that accepts an input, which represents an approximate solution to some problem, and produces an output that is an improved approximation. Conventionally you would apply such a computation repeatedly a finite number of times, and then settle for the better, but still approximate, result. Given an appropriate negative delay something else is possible: [...] the result of each iteration of the function is brought back in time to serve as the "first" approximation. As soon as the machine is activated, a so-called "fixed-point" of F, an input which produces an identical output, usually signaling a perfect answer, appears (by an extraordinary coincidence!) immediately and steadily. [...] If the iteration does not converge, that is, if F has no fixed point, the computer outputs and inputs will shut down or hover in an unlikely intermediate state.
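
A conventional computer can only emulate such a machine by actually performing the iteration. The sketch below in Python shows the fixed point that a "negative delay" machine would, on this account, settle on at once, using Newton's update for computing the square root of 2 as the improvement step F.

    # Classical emulation of Moravec's loop: iterate F until its output stops
    # changing; the fixed point F(x) == x is the "answer" the time loop would
    # display immediately.
    def F(x):
        # One improvement step: Newton's update for the square root of 2.
        return 0.5 * (x + 2.0 / x)

    x = 1.0                        # crude first approximation
    while abs(F(x) - x) > 1e-12:   # stop at the (numerical) fixed point
        x = F(x)
    print(x)                       # 1.4142135623730951
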
Physicist David Deutsch showed in 1991 that this model of computation could solve NP problems in polynomial time,[11] and Scott Aaronson later extended this result to show that the model could also be used to solve PSPACE problems in polynomial time.[12][13]

Wormhole



From Wikipedia, the free encyclopedia

A wormhole, or Einstein–Rosen bridge, is a hypothetical topological feature that would fundamentally be a shortcut connecting two separate points in spacetime. It could connect extremely distant regions (a billion light-years or more), short distances (a few feet), different universes, or, in theory, different points in time. A wormhole is much like a tunnel with two ends, each at a separate point in spacetime.
For a simplified notion of a wormhole, space can be visualized as a two-dimensional (2D) surface. In this case, a wormhole would appear as a hole in that surface, leading into a 3D tube (the inside surface of a cylinder), then re-emerging at another location on the 2D surface with a hole similar to the entrance. An actual wormhole would be analogous to this, but with the spatial dimensions raised by one. For example, instead of circular holes on a 2D plane, the entry and exit points could be visualized as spheres in 3D space.

Overview

Researchers have no observational evidence for wormholes, but the equations of the theory of general relativity have valid solutions that contain wormholes. The first type of wormhole solution discovered was the Schwarzschild wormhole, which would be present in the Schwarzschild metric describing an eternal black hole, but it was found that it would collapse too quickly for anything to cross from one end to the other. Wormholes that could be crossed in both directions, known as traversable wormholes, would only be possible if exotic matter with negative energy density could be used to stabilize them. Wormholes are also a very powerful mathematical metaphor for teaching general relativity.

The Casimir effect shows that quantum field theory allows the energy density in certain regions of space to be negative relative to the ordinary vacuum energy, and it has been shown theoretically that quantum field theory allows states where energy can be arbitrarily negative at a given point.[1] Many physicists, such as Stephen Hawking,[2] Kip Thorne[3] and others,[4][5][6] therefore argue that such effects might make it possible to stabilize a traversable wormhole. Physicists have not found any natural process that would be predicted to form a wormhole naturally in the context of general relativity, although the quantum foam hypothesis is sometimes used to suggest that tiny wormholes might appear and disappear spontaneously at the Planck scale,[7][8] and stable versions of such wormholes have been suggested as dark matter candidates.[9][10] It has also been proposed that, if a tiny wormhole held open by a negative mass cosmic string had appeared around the time of the Big Bang, it could have been inflated to macroscopic size by cosmic inflation.[11]

The American theoretical physicist John Archibald Wheeler coined the term wormhole in 1957; the German mathematician Hermann Weyl, however, had proposed the wormhole theory in 1921, in connection with mass analysis of electromagnetic field energy.[12]
This analysis forces one to consider situations... where there is a net flux of lines of force, through what topologists would call "a handle" of the multiply-connected space, and what physicists might perhaps be excused for more vividly terming a "wormhole".
— John Wheeler in Annals of Physics

"Embedding diagram" of a Schwarzschild wormhole (see below)

Definition

The basic notion of an intra-universe wormhole is that it is a compact region of spacetime whose boundary is topologically trivial, but whose interior is not simply connected. Formalizing this idea leads to definitions such as the following, taken from Matt Visser's Lorentzian Wormholes.
If a Minkowski spacetime contains a compact region Ω, and if the topology of Ω is of the form Ω ~ R × Σ, where Σ is a three-manifold of nontrivial topology, whose boundary has topology of the form ∂Σ ~ S², and if, furthermore, the hypersurfaces Σ are all spacelike, then the region Ω contains a quasipermanent intra-universe wormhole.
Wormholes have been defined geometrically, as opposed to topologically, as regions of spacetime that constrain the incremental deformation of closed surfaces. For example, in Enrico Rodrigo's The Physics of Stargates, a wormhole is defined informally as:
a region of spacetime containing a "world tube" (the time evolution of a closed surface) that cannot be continuously deformed (shrunk) to a world line (the time evolution of a point).

Schwarzschild wormholes


An artist's impression of a wormhole from an observer's perspective, crossing the event horizon of a Schwarzschild wormhole that bridges two different universes. The observer originates from the right, and another universe becomes visible in the center of the wormhole's shadow once the horizon is crossed; the observer is seeing light that has fallen into the black hole interior region from the other universe. This other universe is unreachable in the case of a Schwarzschild wormhole, however, as the bridge always collapses before the observer has time to cross it, and everything that has fallen through the event horizon of either universe is inevitably crushed in the singularity.

Lorentzian wormholes known as Schwarzschild wormholes or Einstein–Rosen bridges are connections between areas of space that can be modeled as vacuum solutions to the Einstein field equations, and that are now understood to be intrinsic parts of the maximally extended version of the Schwarzschild metric describing an eternal black hole with no charge and no rotation. Here, "maximally extended" refers to the idea that the spacetime should not have any "edges": for any possible trajectory of a free-falling particle (following a geodesic in the spacetime), it should be possible to continue this path arbitrarily far into the particle's future or past, unless the trajectory hits a gravitational singularity like the one at the center of the black hole's interior. In order to satisfy this requirement, it turns out that in addition to the black hole interior region that particles enter when they fall through the event horizon from the outside, there must be a separate white hole interior region that allows us to extrapolate the trajectories of particles that an outside observer sees rising up away from the event horizon. And just as there are two separate interior regions of the maximally extended spacetime, there are also two separate exterior regions, sometimes called two different "universes", with the second universe allowing us to extrapolate some possible particle trajectories in the two interior regions. This means that the interior black hole region can contain a mix of particles that fell in from either universe (and thus an observer who fell in from one universe might be able to see light that fell in from the other one), and likewise particles from the interior white hole region can escape into either universe. All four regions can be seen in a spacetime diagram that uses Kruskal–Szekeres coordinates.

In this spacetime, it is possible to come up with coordinate systems such that if you pick a hypersurface of constant time (a set of points that all have the same time coordinate, such that every point on the surface has a space-like separation, giving what is called a 'space-like surface') and draw an "embedding diagram" depicting the curvature of space at that time, the embedding diagram will look like a tube connecting the two exterior regions, known as an "Einstein–Rosen bridge". Note that the Schwarzschild metric describes an idealized black hole that exists eternally from the perspective of external observers; a more realistic black hole that forms at some particular time from a collapsing star would require a different metric. When the infalling stellar matter is added to a diagram of a black hole's history, it removes the part of the diagram corresponding to the white hole interior region, along with the part of the diagram corresponding to the other universe.[13]

The Einstein–Rosen bridge was discovered by Ludwig Flamm[14] in 1916, a few months after Schwarzschild published his solution, and was rediscovered (although it is hard to imagine that Einstein had not seen Flamm's paper when it came out) by Albert Einstein and his colleague Nathan Rosen, who published their result in 1935. However, in 1962, John A. Wheeler and Robert W. Fuller published a paper showing that this type of wormhole is unstable if it connects two parts of the same universe, and that it will pinch off too quickly for light (or any particle moving slower than light) that falls in from one exterior region to make it to the other exterior region.

According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular Schwarzschild black hole. In the Einstein–Cartan–Sciama–Kibble theory of gravity, however, it forms a regular Einstein–Rosen bridge. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum (spin) of matter. The minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity. Instead, the collapsing matter reaches an enormous but finite density and rebounds, forming the other side of the bridge.[15]

Before the stability problems of Schwarzschild wormholes were apparent, it was proposed that quasars were white holes forming the ends of wormholes of this type.

Although Schwarzschild wormholes are not traversable in both directions, their existence inspired Kip Thorne to imagine traversable wormholes created by holding the "throat" of a Schwarzschild wormhole open with exotic matter (material that has negative mass/energy).

Traversable wormholes


Image of a simulated traversable wormhole that connects the square in front of the physical institutes of the University of Tübingen with the sand dunes near Boulogne-sur-Mer in the north of France. The image was calculated with 4D raytracing in a Morris–Thorne wormhole metric, but the gravitational effects on the wavelength of light have not been simulated.[16]

Lorentzian traversable wormholes would allow travel in both directions from one part of the universe to another part of that same universe very quickly, or would allow travel from one universe to another. The possibility of traversable wormholes in general relativity was first demonstrated by Kip Thorne and his graduate student Mike Morris in a 1988 paper. For this reason, the type of traversable wormhole they proposed, held open by a spherical shell of exotic matter, is referred to as a Morris–Thorne wormhole. Later, other types of traversable wormholes were discovered as allowable solutions to the equations of general relativity, including a variety analyzed in a 1989 paper by Matt Visser, in which a path through the wormhole can be made in which the traversing path does not pass through a region of exotic matter. However, in pure Gauss–Bonnet gravity (a modification to general relativity involving extra spatial dimensions which is sometimes studied in the context of brane cosmology), exotic matter is not needed in order for wormholes to exist—they can exist even with no matter.[17] A type held open by negative mass cosmic strings was put forth by Visser in collaboration with Cramer et al.,[11] in which it was proposed that such wormholes could have been naturally created in the early universe.

Wormholes connect two points in spacetime, which means that they would in principle allow travel in time, as well as in space. In 1988, Morris, Thorne and Yurtsever worked out explicitly how to convert a wormhole traversing space into one traversing time.[3] However, according to general relativity, it would not be possible to use a wormhole to travel back to a time earlier than when the wormhole was first converted into a time machine by accelerating one of its two mouths.[18]

Raychaudhuri's theorem and exotic matter

To see why exotic matter is required, consider an incoming light front traveling along geodesics, which then crosses the wormhole and re-expands on the other side. The expansion goes from negative to positive. As the wormhole neck is of finite size, we would not expect caustics to develop, at least within the vicinity of the neck. According to the optical Raychaudhuri's theorem, this requires a violation of the averaged null energy condition. Quantum effects such as the Casimir effect cannot violate the averaged null energy condition in any neighborhood of space with zero curvature,[19] but calculations in semiclassical gravity suggest that quantum effects may be able to violate this condition in curved spacetime.[20] Although it was hoped recently that quantum effects could not violate an achronal version of the averaged null energy condition,[21] violations have nevertheless been found,[22] so it remains an open possibility that quantum effects might be used to support a wormhole.

Faster-than-light travel

The impossibility of faster-than-light relative speed only applies locally. Wormholes might allow superluminal (faster-than-light) travel by ensuring that the speed of light is not exceeded locally at any time. While traveling through a wormhole, subluminal (slower-than-light) speeds are used. If two points are connected by a wormhole whose length is shorter than the distance between them outside the wormhole, the time taken to traverse it could be less than the time it would take a light beam to make the journey if it took a path through the space outside the wormhole. However, a light beam traveling through the wormhole would of course beat the traveler.
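
Putting illustrative numbers on this argument (all values invented): suppose two stars are 10 light-years apart through ordinary space but are joined by a wormhole whose internal length is only 1 light-year.

    # Shortcut arithmetic for the wormhole "faster than light" effect.
    external_distance = 10.0   # light-years through ordinary space
    throat_length = 1.0        # light-years through the wormhole
    ship_speed = 0.5           # fraction of c; always subluminal locally

    time_light_outside = external_distance / 1.0    # 10 years
    time_ship_inside = throat_length / ship_speed   #  2 years
    print(time_ship_inside < time_light_outside)    # True: the ship arrives first

    # Light sent through the wormhole (1 year) still beats the ship, as the
    # text notes: the speed of light is never exceeded locally.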

Time travel

The theory of general relativity predicts that if traversable wormholes exist, they could allow time travel.[3] This would be accomplished by accelerating one end of the wormhole to a high velocity relative to the other, and then sometime later bringing it back; relativistic time dilation would result in the accelerated wormhole mouth aging less than the stationary one as seen by an external observer, similar to what is seen in the twin paradox. However, time connects differently through the wormhole than outside it, so that synchronized clocks at each mouth will remain synchronized to someone traveling through the wormhole itself, no matter how the mouths move around.[23] This means that anything which entered the accelerated wormhole mouth would exit the stationary one at a point in time prior to its entry.

For example, consider two clocks, one at each mouth, both showing the date as 2000. After being taken on a trip at relativistic velocities, the accelerated mouth is brought back to the same region as the stationary mouth with the accelerated mouth's clock reading 2004 while the stationary mouth's clock reads 2012. A traveler who entered the accelerated mouth at this moment would exit the stationary mouth when its clock also read 2004, in the same region but now eight years in the past. Such a configuration of wormholes would allow for a particle's world line to form a closed loop in spacetime, known as a closed timelike curve. An object traveling through a wormhole could carry energy or charge from one time to another, but this would not violate conservation of energy or charge in each time, because the energy/charge of the wormhole mouth itself would change to compensate for the object that fell into it or emerged from it.[24][25]
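
For the example above, the required speed follows from the ordinary time-dilation formula: the traveling mouth ages 4 years while 12 years pass for the stationary one, a dilation factor of 3. A quick check in Python, idealizing the trip as motion at one constant speed:

    import math

    proper_time = 4.0       # years elapsed at the accelerated mouth (2000 -> 2004)
    coordinate_time = 12.0  # years elapsed at the stationary mouth (2000 -> 2012)

    gamma = coordinate_time / proper_time         # Lorentz factor, 3.0
    v = math.sqrt(1.0 - 1.0 / gamma ** 2)         # speed as a fraction of c
    print(f"gamma = {gamma:.1f}, v = {v:.3f} c")  # gamma = 3.0, v = 0.943 c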

It is thought that it may not be possible to convert a wormhole into a time machine in this manner; the predictions are made in the context of general relativity, but general relativity does not include quantum effects. Analyses using the semiclassical approach to incorporating quantum effects into general relativity have sometimes indicated that a feedback loop of virtual particles would circulate through the wormhole and pile up on themselves, driving the energy density in the region very high and possibly destroying it before any information could be passed through it, in keeping with the chronology protection conjecture. The debate on this matter is described by Kip S. Thorne in the book Black Holes and Time Warps, and a more technical discussion can be found in The quantum physics of chronology protection by Matt Visser.[26] There is also the Roman ring, which is a configuration of more than one wormhole. This ring seems to allow a closed time loop with stable wormholes when analyzed using semiclassical gravity, although without a full theory of quantum gravity it is uncertain whether the semiclassical approach is reliable in this case.

Interuniversal travel

A possible resolution to the paradoxes resulting from wormhole-enabled time travel rests on the many-worlds interpretation of quantum mechanics. In 1991 David Deutsch showed that quantum theory is fully consistent (in the sense that the so-called density matrix can be made free of discontinuities) in spacetimes with closed timelike curves.[27] However, it was later shown that such a model of closed timelike curves can have internal inconsistencies, as it leads to strange phenomena such as distinguishing non-orthogonal quantum states and distinguishing proper and improper mixtures.[28][29] Accordingly, the destructive positive feedback loop of virtual particles circulating through a wormhole time machine, a result indicated by semi-classical calculations, is averted. A particle returning from the future does not return to its universe of origination but to a parallel universe. This suggests that a wormhole time machine with an exceedingly short time jump is a theoretical bridge between contemporaneous parallel universes.[30] Because a wormhole time machine introduces a type of nonlinearity into quantum theory, this sort of communication between parallel universes is consistent with Joseph Polchinski's discovery of an "Everett phone" in Steven Weinberg's formulation of nonlinear quantum mechanics.[31] Such a possibility is depicted in the 2014 science-fiction film Interstellar.

Metrics

Theories of wormhole metrics describe the spacetime geometry of a wormhole and serve as theoretical models for time travel. An example of a (traversable) wormhole metric is the following:
ds^2= - c^2 dt^2 + dl^2 + (k^2 + l^2)(d \theta^2 + \sin^2 \theta \, d\phi^2).
One type of non-traversable wormhole metric is the Schwarzschild solution (see the first diagram):
ds^2= - c^2 \left(1 - \frac{2GM}{rc^2}\right)dt^2 + \frac{dr^2}{1 - \frac{2GM}{rc^2}} + r^2(d \theta^2 + \sin^2 \theta \, d\phi^2).
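
In the traversable metric above, l is the proper radial distance through the wormhole and the sphere at a given l has circumferential radius sqrt(k^2 + l^2), so the throat sits at l = 0 with radius k. A quick numerical check:

    import math

    k = 1.0  # throat radius, arbitrary units
    for l in [-2.0, -1.0, 0.0, 1.0, 2.0]:
        r = math.sqrt(k ** 2 + l ** 2)  # circumferential radius at l
        print(f"l = {l:+.1f}  ->  r = {r:.3f}")
    # r is smallest at l = 0 and grows on both sides: two mouths, one throat.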

In fiction

Wormholes are a common element in science fiction as they allow interstellar, intergalactic, and sometimes interuniversal travel within human timescales. They have also served as a method for time travel.

Saturday, November 7, 2015

Big Bang



From Wikipedia, the free encyclopedia


According to the Big Bang model, the universe expanded from an extremely dense and hot state and continues to expand.

The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution.[1][2][3] The model accounts for the fact that the universe expanded from a very high density and high temperature state,[4][5] and offers a comprehensive explanation for a broad range of phenomena, including the abundance of light elements, the cosmic microwave background, large scale structure and Hubble's Law.[6] If the known laws of physics are extrapolated beyond where they are valid, there is a singularity. Modern measurements place this moment at approximately 13.8 billion years ago, which is thus considered the age of the universe.[7] After the initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later simple atoms. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies.

Since Georges Lemaître first noted, in 1927, that an expanding universe might be traced back in time to an originating single point, scientists have built on his idea of cosmic expansion. While the scientific community was once divided between supporters of two different expanding universe theories, the Big Bang and the Steady State theory, accumulated empirical evidence provides strong support for the former.[8] In 1929, from analysis of galactic redshifts, Edwin Hubble concluded that galaxies are drifting apart, important observational evidence consistent with the hypothesis of an expanding universe. In 1965, the cosmic microwave background radiation was discovered, which was crucial evidence in favor of the Big Bang model, since that theory predicted the existence of background radiation throughout the universe before it was discovered. More recently, measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to dark energy's existence.[9] The known physical laws of nature can be used to calculate the characteristics of the universe in detail back in time to an initial state of extreme density and temperature.[10][11][12]

Overview


History of the Universe - gravitational waves are hypothesized to arise from cosmic inflation, an expansion just after the Big Bang.[13][14][15][16]

Hubble observed that the distances to faraway galaxies were strongly correlated with their redshifts. This was interpreted to mean that all distant galaxies and clusters are receding away from our vantage point with an apparent velocity proportional to their distance: that is, the farther they are, the faster they move away from us, regardless of direction.[17] Assuming the Copernican principle (that the Earth is not the center of the universe), the only remaining interpretation is that all observable regions of the universe are receding from all others. Since we know that the distance between galaxies increases today, it must mean that in the past galaxies were closer together. The continuous expansion of the universe implies that the universe was denser and hotter in the past.
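
Hubble's observation is summarized by Hubble's law, v = H0 d. A small illustration, taking a present-day value of roughly 70 (km/s)/Mpc for H0:

    # Hubble's law: recession velocity proportional to distance.
    H0 = 70.0  # km/s per megaparsec (approximate modern value)
    for d_mpc in [1.0, 100.0, 1000.0]:
        v = H0 * d_mpc
        print(f"d = {d_mpc:6.0f} Mpc  ->  v = {v:8.0f} km/s")
    # A galaxy 100 Mpc away recedes at ~7000 km/s; ten times farther, ten times faster.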

Large particle accelerators can replicate the conditions that prevailed after the early moments of the universe, resulting in confirmation and refinement of the details of the Big Bang model. However, these accelerators can only probe so far into high energy regimes. Consequently, the state of the universe in the earliest instants of the Big Bang expansion is still poorly understood and an area of open investigation and indeed, speculation.

The first subatomic particles included protons, neutrons, and electrons. Though simple atomic nuclei formed within the first three minutes after the Big Bang, thousands of years passed before the first electrically neutral atoms formed. The majority of atoms produced by the Big Bang were hydrogen, along with helium and traces of lithium. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies, and the heavier elements were synthesized either within stars or during supernovae.

The Big Bang theory offers a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background, large scale structure, and Hubble's Law.[6] The framework for the Big Bang model relies on Albert Einstein's theory of general relativity and on simplifying assumptions such as homogeneity and isotropy of space. The governing equations were formulated by Alexander Friedmann, and similar solutions were worked on by Willem de Sitter. Since then, astrophysicists have incorporated observational and theoretical additions into the Big Bang model, and its parametrization as the Lambda-CDM model serves as the framework for current investigations of theoretical cosmology. The Lambda-CDM model is the standard model of Big Bang cosmology, the simplest model that provides a reasonably good account of various observations about the universe.

Timeline

Singularity

Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past.[18] This singularity signals the breakdown of general relativity and thus, all the laws of physics. How closely this can be extrapolated toward the singularity is debated—certainly no closer than the end of the Planck epoch. This singularity is sometimes called "the Big Bang",[19] but the term can also refer to the early hot, dense phase itself,[20][notes 1] which can be considered the "birth" of our universe. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the universe has an estimated age of 13.799 ± 0.021 billion years.[21] The agreement of these independent measurements strongly supports the ΛCDM model that describes in detail the contents of the universe.

Inflation and baryogenesis

The earliest phases of the Big Bang are subject to much speculation. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures and was very rapidly expanding and cooling. Approximately 10^−37 seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially.[22] After inflation stopped, the universe consisted of a quark–gluon plasma, as well as all other elementary particles.[23] Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions.[4] At some point an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe.[24]

Cooling

Panoramic view of the entire near-infrared sky reveals the distribution of galaxies beyond the Milky Way. Galaxies are color-coded by redshift.

The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form.[25] After about 10^−11 seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle physics experiments. At about 10^−6 seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was now no longer high enough to create new proton–antiproton pairs (similarly for neutrons–antineutrons), so a mass annihilation immediately followed, leaving just one in 10^10 of the original protons and neutrons, and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos).

A few minutes into the expansion, when the temperature was about a billion (one thousand million; 10^9; SI prefix giga-) kelvin and the density was about that of air, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis.[26] Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest mass energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years the electrons and nuclei combined into atoms (mostly hydrogen); hence the radiation decoupled from matter and continued through space largely unimpeded. This relic radiation is known as the cosmic microwave background radiation.[27] The chemistry of life may have begun shortly after the Big Bang, 13.8 billion years ago, during a habitable epoch when the universe was only 10–17 million years old.[28][29][30]

Structure formation


Over a long period of time, the slightly denser regions of the nearly uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today.[4] The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter, warm dark matter, hot dark matter, and baryonic matter. The best measurements available (from WMAP) show that the data are well fit by a Lambda-CDM model in which dark matter is assumed to be cold (warm dark matter is ruled out by early reionization[32]); dark matter is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%.[33] In an "extended model" that includes hot dark matter in the form of neutrinos, if the "physical baryon density" Ω_b h^2 is estimated at about 0.023 (this is different from the "baryon density" Ω_b expressed as a fraction of the total matter/energy density, which as noted above is about 0.046) and the corresponding cold dark matter density Ω_c h^2 is about 0.11, then the corresponding neutrino density Ω_ν h^2 is estimated to be less than 0.0062.[33]
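
The two conventions can be reconciled through the dimensionless Hubble parameter h (defined by H0 = 100 h km/s/Mpc, with h roughly 0.7), since Ω = (Ω h^2) / h^2. A quick check that the quoted numbers are mutually consistent:

    # Convert "physical" densities Omega*h^2 into fractions of the critical density.
    h = 0.7                # dimensionless Hubble parameter, H0 = 100*h km/s/Mpc
    omega_b_h2 = 0.023     # physical baryon density, from the text
    omega_c_h2 = 0.11      # physical cold dark matter density, from the text

    omega_b = omega_b_h2 / h ** 2  # ~0.047, i.e. ~4.6% baryonic matter
    omega_c = omega_c_h2 / h ** 2  # ~0.22,  i.e. ~23% cold dark matter
    print(f"Omega_b ~ {omega_b:.3f}, Omega_c ~ {omega_c:.2f}")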

Cosmic acceleration

Lambda-CDM, accelerated expansion of the universe. The time-line in this schematic diagram extends from the big bang/inflation era 13.7 Gyr ago to the present cosmological time.

Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which apparently permeates all of space. The observations suggest 73% of the total energy density of today's universe is in this form. When the universe was very young, it was likely infused with dark energy, but with less space and everything closer together, gravity predominated, and it was slowly braking the expansion. But eventually, after several billion years of expansion, the growing abundance of dark energy caused the expansion of the universe to slowly begin to accelerate. Dark energy in its simplest formulation takes the form of the cosmological constant term in Einstein's field equations of general relativity, but its composition and mechanism are unknown and, more generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both observationally and theoretically.[9]

All of this cosmic evolution after the inflationary epoch can be rigorously described and modelled by the ΛCDM model of cosmology, which uses the independent frameworks of quantum mechanics and Einstein's General Relativity. There is no well-supported model describing the universe prior to about 10^−15 seconds. Apparently a new unified theory of quantum gravitation is needed to break this barrier. Understanding this earliest of eras in the history of the universe is currently one of the greatest unsolved problems in physics.

Underlying assumptions

The Big Bang theory depends on two major assumptions: the universality of physical laws and the cosmological principle. The cosmological principle states that on large scales the universe is homogeneous and isotropic.
These ideas were initially taken as postulates, but today there are efforts to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine structure constant over much of the age of the universe is of order 10^−5.[34] Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars.[notes 2]
If the large-scale universe appears isotropic as viewed from Earth, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10^−5 via observations of the CMB. The universe has been measured to be homogeneous on the largest scales at the 10% level.[35]

Expansion of space

General relativity describes spacetime by a metric, which determines the distances that separate nearby points. The points, which can be galaxies, stars, or other objects, themselves are specified using a coordinate chart or "grid" that is laid down over all spacetime. The cosmological principle implies that the metric should be homogeneous and isotropic on large scales, which uniquely singles out the Friedmann–Lemaître–Robertson–Walker metric (FLRW metric). This metric contains a scale factor, which describes how the size of the universe changes with time. This enables a convenient choice of a coordinate system to be made, called comoving coordinates. In this coordinate system the grid expands along with the universe, and objects that are moving only due to the expansion of the universe remain at fixed points on the grid. While their coordinate distance (comoving distance) remains constant, the physical distance between two such comoving points expands proportionally with the scale factor of the universe.[36]
The Big Bang is not an explosion of matter moving outward to fill an empty universe. Instead, space itself expands with time everywhere and increases the physical distance between two comoving points. In other words, the Big Bang is not an explosion in space, but rather an expansion of space.[4] Because the FLRW metric assumes a uniform distribution of mass and energy, it applies to our universe only on large scales—local concentrations of matter such as our galaxy are gravitationally bound and as such do not experience the large-scale expansion of space.[37]
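
Concretely, the physical (proper) distance between two comoving points is the comoving distance multiplied by the scale factor a(t), conventionally normalized so that a = 1 today. A short illustration with made-up values:

    # Comoving separation stays fixed; physical separation scales with a(t).
    d_comoving = 100.0         # Mpc, constant for objects moving with the expansion
    for a in [0.5, 1.0, 2.0]:  # illustrative scale factors: past, today, future
        print(f"a = {a:.1f}  ->  physical distance = {a * d_comoving:.0f} Mpc")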

Horizons

An important feature of the Big Bang spacetime is the presence of horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not had time to reach us. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our universe. Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well.[38]

History

Etymology

English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a 1949 BBC radio broadcast. It is popularly reported that Hoyle, who favored an alternative "steady state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models.[39][40][41]:129

Development

XDF size compared to the size of the Moon: several thousand galaxies, each consisting of billions of stars, lie within this small view.
XDF (2012) view: each light speck is a galaxy; some of these are as old as 13.2 billion years,[42] and the universe is estimated to contain 200 billion galaxies.
The XDF image shows fully mature galaxies in the foreground plane, nearly mature galaxies from 5 to 9 billion years ago, and protogalaxies blazing with young stars beyond 9 billion years.

The Big Bang theory developed from observations of the structure of the universe and from theoretical considerations. In 1912 Vesto Slipher measured the first Doppler shift of a "spiral nebula" (the obsolete term for a spiral galaxy), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact; indeed, at the time it was highly controversial whether these nebulae were "island universes" outside our Milky Way.[43][44] Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from Albert Einstein's equations of general relativity, showing that the universe might be expanding, in contrast to the static universe model advocated by Einstein at that time.[45] In 1924 Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the inferred recession of the nebulae was due to the expansion of the universe.[46]

In 1931 Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the universe had been ever smaller in the past, until at some finite time all the mass of the universe was concentrated into a single point, a "primeval atom", where and when the fabric of time and space came into existence.[47]

Starting in 1924, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the 100-inch (2.5 m) Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929 Hubble discovered a correlation between distance and recession velocity—now known as Hubble's law.[17][48] Lemaître had already shown that this was expected, given the cosmological principle.[9]

In the 1920s and 1930s almost every major cosmologist preferred an eternal steady state universe, and several complained that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated by supporters of the steady state theory.[49] This perception was enhanced by the fact that the originator of the Big Bang theory, Monsignor Georges Lemaître, was a Roman Catholic priest.[50] Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him.[51][52] Lemaître, however, thought that
If the world has begun with a single quantum, the notions of space and time would altogether fail to have any meaning at the beginning; they would only begin to have a sensible meaning when the original quantum had been divided into a sufficient number of quanta. If this suggestion is correct, the beginning of the world happened a little before the beginning of space and time.[53]
During the 1930s other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model,[54] the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard Tolman)[55] and Fritz Zwicky's tired light hypothesis.[56]

After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time.[57] The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced Big Bang nucleosynthesis (BBN)[58] and whose associates, Ralph Alpher and Robert Herman, predicted the cosmic microwave background radiation (CMB).[59] Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949.[41]:129[notes 3] For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor the Big Bang over the steady state model. The discovery and confirmation of the cosmic microwave background radiation in 1965[61] secured the Big Bang as the best theory of the origin and evolution of the universe. Much of the current work in cosmology involves understanding how galaxies form in the context of the Big Bang, understanding the physics of the universe at earlier and earlier times, and reconciling observations with the basic theory.

In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers in which they showed that mathematical singularities were an inevitable initial condition of general relativistic models of the Big Bang.[62][63] Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work by resolving certain outstanding problems in the Big Bang theory with the introduction of an epoch of rapid expansion in the early universe he called "inflation".[64] Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were the precise values of the Hubble Constant[65] and the matter density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe).[66] In the mid-1990s observations of certain globular clusters appeared to indicate that they were about 15 billion years old, which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters.[67] While some questions remain as to how accurately the ages of the clusters are measured, globular clusters are of interest to cosmology as some of the oldest objects in the universe.
Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as COBE,[68] the Hubble Space Telescope and WMAP.[69] Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating.

Observational evidence


Artist's depiction of the WMAP satellite gathering data to help scientists understand the Big Bang

The earliest and most direct observational evidence of the validity of the theory includes the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), the discovery and measurement of the cosmic microwave background, and the relative abundances of light elements produced by Big Bang nucleosynthesis. More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures.[71] These are sometimes called the "four pillars" of the Big Bang theory.[72]

Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently subjected to the most active laboratory investigations.[73] Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible.[74] Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are currently unsolved problems in physics.

Hubble's law and the expansion of space

Observations of distant galaxies and quasars show that these objects are redshifted—the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission lines or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed:[17]
v = H0D,
where v is the recessional velocity of the galaxy or other distant object, D is the comoving distance to the object, and H0 is Hubble's constant, the present-day value of the Hubble parameter.
Hubble's law has two possible explanations. Either we are at the center of an explosion of galaxies—which is untenable given the Copernican principle—or the universe is uniformly expanding everywhere. This universal expansion was predicted from general relativity by Alexander Friedmann in 1922[45] and Georges Lemaître in 1927,[46] well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang theory as developed by Friedmann, Lemaître, Robertson, and Walker.

 The theory requires the relation v = HD to hold at all times, where D is the comoving distance, v is the recessional velocity, and v, H, and D vary as the universe expands (hence we write H0 to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity v. However, the redshift is not a true Doppler shift, but rather the result of the expansion of the universe between the time the light was emitted and the time that it was detected.[75]
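As a minimal numeric sketch of the v = H0D relation (the value of H0 here is an illustrative round number, not taken from this article):

```python
H0 = 70.0          # illustrative Hubble constant, km/s per megaparsec
C  = 299_792.458   # speed of light, km/s

def recession_velocity(distance_mpc: float) -> float:
    """Recessional velocity in km/s from Hubble's law, v = H0 * D."""
    return H0 * distance_mpc

for d in (10, 100, 1000, 4000):  # distances in megaparsecs
    v = recession_velocity(d)
    note = " (v ~ c: the Doppler reading breaks down)" if v > 0.9 * C else ""
    print(f"D = {d:5d} Mpc -> v ≈ {v:9.0f} km/s{note}")
```

The last line illustrates the point made above: at distances comparable to the size of the observable universe the "velocity" exceeds anything a true Doppler shift could produce, which is why the redshift is properly attributed to the expansion of space itself.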

That space is undergoing metric expansion is shown by direct observational evidence for the cosmological principle and the Copernican principle, which together with Hubble's law admit no other explanation. Astronomical redshifts are extremely isotropic and homogeneous,[17] supporting, along with much other evidence, the cosmological principle that the universe looks the same in all directions. If the redshifts were the result of an explosion from a center distant from us, they would not be so similar in different directions.

Measurements of the effects of the cosmic microwave background radiation on the dynamics of distant astrophysical systems in 2000 proved the Copernican principle, that, on a cosmological scale, the Earth is not in a central position.[76] Radiation from the Big Bang was demonstrably warmer at earlier times throughout the universe. Uniform cooling of the cosmic microwave background over billions of years is explainable only if the universe is experiencing a metric expansion, and excludes the possibility that we are near the unique center of an explosion.

Cosmic microwave background radiation

Nine-year WMAP image of the cosmic microwave background radiation (2012).[77][78] The radiation is isotropic to roughly one part in 100,000.[79]

In 1965, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band.[61] Their discovery provided substantial confirmation of the big-bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s the radiation was found to be approximately consistent with a black body spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded a Nobel Prize in 1978.


The cosmic microwave background spectrum measured by the FIRAS instrument on the COBE satellite is the most-precisely measured black body spectrum in nature.[80] The data points and error bars on this graph are obscured by the theoretical curve.

The surface of last scattering, corresponding to emission of the CMB, occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma in which photons were quickly scattered off free charged particles. Recombination peaked at around 372±14 kyr after the Big Bang;[32] at that point the mean free path of a photon becomes long enough for it to survive to the present day, and the universe becomes transparent.

In 1989 NASA launched the Cosmic Background Explorer satellite (COBE), which made two major advances: in 1990, high-precision spectrum measurements showed the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10⁴, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992 further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10⁵.[68] John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001 several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies.[81][82][83]
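A quick consistency check one can run on the quoted temperature (using Wien's displacement law; the constants are standard physical values, not from this article): a 2.725 K blackbody peaks in the microwave band.

```python
WIEN_B = 2.897771955e-3   # Wien displacement constant, m*K
T_CMB  = 2.725            # measured CMB temperature, K

peak_wavelength_mm = WIEN_B / T_CMB * 1e3   # lambda_peak = b / T, in millimetres
print(f"blackbody peak ≈ {peak_wavelength_mm:.2f} mm")  # ≈ 1.06 mm, i.e. microwaves
```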

In early 2003 the first results of the Wilkinson Microwave Anisotropy Probe (WMAP) were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general.[69] The Planck space probe was launched in May 2009. Other ground and balloon based cosmic microwave background experiments are ongoing.

Abundance of primordial elements

Using the Big Bang model it is possible to calculate the concentration of helium-4, helium-3, deuterium, and lithium-7 in the universe as ratios to the amount of ordinary hydrogen.[26] The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by number) are about 0.25 for ⁴He/H, about 10⁻³ for ²H/H, about 10⁻⁴ for ³He/H and about 10⁻⁹ for ⁷Li/H.[26]
The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and off by a factor of two for ⁷Li; in the latter two cases there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by Big Bang nucleosynthesis is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium.[84] Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe (i.e., before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products) should have more helium than deuterium or more deuterium than ³He, and in constant ratios, too.[85]:182–185
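The quoted ~0.25 helium mass fraction can be recovered with a standard back-of-envelope argument (a sketch, not this article's derivation): if the neutron-to-proton ratio at the onset of nucleosynthesis is about 1/7 and essentially all neutrons end up bound in ⁴He, then

```python
n_over_p = 1 / 7   # assumed neutron-to-proton ratio at the onset of nucleosynthesis

# Each 4He nucleus binds 2 neutrons and 2 protons, so the helium mass
# fraction is twice the neutron fraction of all nucleons:
Y_p = 2 * n_over_p / (1 + n_over_p)
print(f"predicted 4He mass fraction ≈ {Y_p:.2f}")  # ≈ 0.25
```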

Galactic evolution and distribution

Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current state of the Big Bang theory. A combination of observations and theory suggest that the first quasars and galaxies formed about a billion years after the Big Bang, and since then larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures agree well with Big Bang simulations of the formation of structure in the universe and are helping to complete details of the theory.[86][87]

Primordial gas clouds


Focal plane of BICEP2 telescope under a microscope - may have detected gravitational waves from the infant universe.[13][14][15][16]

In 2011 astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements, which are formed in stars. These two clouds of gas contain no elements heavier than hydrogen and deuterium.[88][89] Since the clouds have no heavy elements, they likely formed in the first few minutes after the Big Bang, during Big Bang nucleosynthesis.

Other lines of evidence

The age of the universe as estimated from the Hubble expansion and the CMB is now in good agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars.[90]

The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift.[91] This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult.[92][93]
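The scaling behind this prediction is simply T(z) = T₀(1 + z); a minimal sketch (the redshifts chosen here are illustrative):

```python
T0 = 2.725  # present-day CMB temperature, K

def cmb_temperature(z: float) -> float:
    """CMB temperature at redshift z: T(z) = T0 * (1 + z)."""
    return T0 * (1 + z)

for z in (0.0, 2.34, 6.0):
    print(f"z = {z:4.2f} -> T ≈ {cmb_temperature(z):5.2f} K")
```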

On 17 March 2014, astronomers at the Harvard-Smithsonian Center for Astrophysics announced the apparent detection of primordial gravitational waves, which, if confirmed, would provide strong evidence for inflation and the Big Bang.[13][14][15][16] However, on 19 June 2014 lowered confidence in confirming the findings was reported,[94][95][96] and on 19 September 2014 confidence was lowered further.[97][98]

Problems and related issues in physics

As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang theory. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflationary theory, but the details of the inflationary universe are still unresolved and alternatives to inflation are still entertained in the literature.[99][100] What follows is a list of the mysterious aspects of the Big Bang theory still under intense investigation by cosmologists and astrophysicists.

Baryon asymmetry

It is not yet understood why the universe has more matter than antimatter.[101] It is generally assumed that when the universe was young and very hot, it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of matter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number not be conserved, that C-symmetry and CP-symmetry be violated, and that the universe depart from thermodynamic equilibrium.[102] All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.

Dark energy

Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy".[9] Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density.[9] Since theory suggests that dark energy does not cluster in the usual way it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic ruler.
Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos.[33] According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant.
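A small sketch of that crossover, using round present-day fractions close to the WMAP numbers above (matter here lumps together dark and ordinary matter; the values are illustrative):

```python
OMEGA_M0, OMEGA_DE0 = 0.27, 0.73   # assumed present-day matter / dark-energy fractions

def energy_fractions(a: float):
    """Fractions of the total energy density at scale factor a (a = 1 today)."""
    rho_m  = OMEGA_M0 / a**3   # matter dilutes as the volume grows
    rho_de = OMEGA_DE0         # a cosmological constant does not dilute
    total = rho_m + rho_de
    return rho_m / total, rho_de / total

for a in (0.5, 1.0, 2.0):
    f_m, f_de = energy_fractions(a)
    print(f"a = {a:3.1f}: matter {f_m:5.1%}, dark energy {f_de:5.1%}")
```

At half the present size matter dominates; by twice the present size dark energy accounts for roughly 95% of the total, which is the sense in which it "becomes even more dominant" in the far future.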

The dark energy component of the universe has been explained by theorists using a variety of competing theories, including Einstein's cosmological constant as well as more exotic forms of quintessence and other modified-gravity schemes.[103] The cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy and the one naively predicted from Planck units.[104]

Dark matter

Chart shows the proportion of different components of the universe  – about 95% is dark matter and dark energy.

During the 1970s and 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred from various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters.[105]
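A minimal sketch of the kind of inference involved (the numbers are illustrative, Milky-Way-like values, not from this article): if a galaxy's rotation speed stays flat well beyond the visible disk, the enclosed mass M(r) = v²r/G keeps growing with radius, pointing to matter that is not seen.

```python
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
KPC   = 3.086e19    # kiloparsec, m

def enclosed_mass(v_km_s: float, r_kpc: float) -> float:
    """Mass inside radius r implied by a circular orbital speed v: M = v^2 r / G."""
    v = v_km_s * 1e3
    return v**2 * (r_kpc * KPC) / G

# A flat 220 km/s rotation curve implies ever-growing enclosed mass:
for r in (10, 30, 50):  # radii in kpc
    print(f"r = {r:2d} kpc -> M(<r) ≈ {enclosed_mass(220, r) / M_SUN:.1e} M_sun")
```

The implied mass keeps climbing well past the radius containing most of the starlight, which is the basic rotation-curve argument for dark halos.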

Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway.[106]

Additionally, there are outstanding problems associated with the currently favored cold dark matter model, which include the dwarf galaxy problem[107] and the cuspy halo problem.[108] Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; however, no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations.[109]

Horizon problem

The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact.[110] The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature.[85]:191–202
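The ~2 degree figure can be recovered with a rough matter-dominated estimate (an approximation for illustration, not this article's derivation): conformal time grows as t^(1/3) in a matter-dominated universe, so the horizon at last scattering subtends roughly (t_ls/t₀)^(1/3) radians on today's sky.

```python
import math

t_ls = 3.8e5    # assumed time of last scattering, years
t_0  = 1.38e10  # assumed present age of the universe, years

theta = (t_ls / t_0) ** (1 / 3)   # horizon angle in radians (matter-dominated toy model)
print(f"particle horizon at last scattering ≈ {math.degrees(theta):.1f} degrees on the sky")
```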

A resolution to this apparent inconsistency is offered by inflationary theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation.[22]:180–186

Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to cosmic scale. These fluctuations serve as the seeds of all current structure in the universe.[85]:207 Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been accurately confirmed by measurements of the CMB.[111]:sec 6

If inflation occurred, exponential expansion would push large regions of space well beyond our observable horizon.[22]:180–186

A related issue to the classic horizon problem arises due to the fact that in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended.[112]

Magnetic monopoles

The magnetic monopole objection was raised in the late 1970s. Grand unification theories predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is also resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness.[110]

Flatness problem


The overall geometry of the universe is determined by whether the Omega cosmological parameter is less than, equal to or greater than 1. Shown from top to bottom are a closed universe with positive curvature, a hyperbolic universe with negative curvature and a flat universe with zero curvature.

The flatness problem (also known as the oldness problem) is an observational problem associated with a Friedmann–Lemaître–Robertson–Walker metric.[110] The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density, positive if greater, and zero at the critical density, in which case space is said to be flat. The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat.[notes 4] Given that a natural timescale for departure from flatness might be the Planck time, 10⁻⁴³ seconds,[4] the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the universe density must have been within one part in 10¹⁴ of its critical value, or it would not exist as it does today.[113]
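The quoted one part in 10¹⁴ can be reproduced by a rough scaling argument (a sketch with assumed epochs, not this article's calculation): |Ω − 1| grows roughly as t in the radiation era and as t^(2/3) in the matter era, so running an order-unity bound today backwards gives:

```python
t_now = 4.35e17   # assumed present age, seconds
t_eq  = 1.6e12    # assumed matter-radiation equality, seconds (~50,000 years)
t_bbn = 180.0     # time of nucleosynthesis, seconds (a few minutes)

omega_dev_now = 1.0   # generous order-unity bound on |Omega - 1| today

# Undo matter-era growth (~ t^(2/3)) back to equality, then radiation-era growth (~ t):
omega_dev_bbn = omega_dev_now * (t_eq / t_now) ** (2 / 3) * (t_bbn / t_eq)
print(f"|Omega - 1| at nucleosynthesis: ~{omega_dev_bbn:.0e}")  # ~ 3e-14
```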

Ultimate fate of the Universe

Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch.[38] Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out leaving white dwarfs, neutron stars, and black holes. Very gradually, collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would asymptotically approach absolute zero—a Big Freeze.[114] Moreover, if the proton were unstable, then baryonic matter would disappear, leaving only radiation and black holes. 
Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death.[115]:sec VI.D
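For a sense of the timescales, the standard evaporation-time formula for a Schwarzschild black hole, t = 5120πG²M³/(ħc⁴), gives the following (the constants are standard values; the formula is textbook physics, not quoted from this article):

```python
import math

G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.0546e-34   # reduced Planck constant, J s
C     = 2.998e8      # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
YEAR  = 3.156e7      # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Hawking evaporation time, t = 5120 * pi * G^2 * M^3 / (hbar * c^4)."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

print(f"solar-mass black hole evaporates in ~{evaporation_time_years(M_SUN):.0e} years")
```

The result, around 10⁶⁷ years, dwarfs the current age of the universe, which is why evaporation only matters in the far-future scenarios described here.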
Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip.[116]

Speculations

Timeline of the metric expansion of space, where space (including hypothetical non-observable portions of the universe) is represented at each time by the circular sections. On the left the dramatic expansion occurs in the inflationary epoch, and at the center the expansion accelerates (artist's concept; not to scale).

While the Big Bang model is well established in cosmology, it is likely to be refined. The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density is regarded as impossible in physics. Still, it is known that the equations are not applicable before the time when the universe cooled down to the Planck temperature, and this conclusion depends on various assumptions, some of which could never be experimentally verified. (Also see Planck epoch.)

One proposed refinement to avoid this would-be singularity is to develop a correct treatment of quantum gravity.[117]

It is not known what could have preceded the hot dense state of the early universe or how and why it originated, though speculation abounds in the field of cosmogony.

Some proposals, each of which entails untested hypotheses, are:
  • Models including the Hartle–Hawking no-boundary condition, in which the whole of space-time is finite; the Big Bang does represent the limit of time but without any singularity.[118]
  • The Big Bang lattice model, in which the universe at the moment of the Big Bang consists of an infinite lattice of fermions smeared over the fundamental domain, so that it has rotational, translational and gauge symmetry; this is the largest symmetry possible and hence the lowest entropy of any state.[119]
  • Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically. In the latter model the Big Bang was preceded by a Big Crunch and the universe cycles from one process to the other.[120][121][122][123]
  • Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point leading to a bubble universe, expanding from its own big bang.[124][125]
Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse.

Religious and philosophical interpretations

As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy.[126][127] As a result, it has become one of the liveliest areas in the discourse between science and religion.[128] Some believe the Big Bang implies a creator,[129][130] and some see its mention in their holy books,[131] while others argue that Big Bang cosmology makes the notion of a creator superfluous.[127][132]
