
Friday, January 30, 2015

The Selfish Gene


From Wikipedia, the free encyclopedia
Original cover, with details from the painting The Expectant Valley by zoologist Desmond Morris.
Author: Richard Dawkins
Subject: Evolutionary biology
Publisher: Oxford University Press
Publication date: 1976 (second edition 1989; third edition 2006)
Pages: 224
ISBN: 0-19-857519-X
OCLC: 2681149
Followed by: The Extended Phenotype

The Selfish Gene is a book on evolution by Richard Dawkins, published in 1976. It builds upon the principal theory of George C. Williams's first book, Adaptation and Natural Selection. Dawkins used the term "selfish gene" as a way of expressing the gene-centred view of evolution, as opposed to views focused on the organism or the group, popularising ideas developed during the 1960s by W. D. Hamilton and others. From the gene-centred view it follows that the more closely two individuals are genetically related, the more sense it makes (at the level of the genes) for them to behave selflessly towards each other. The concept is therefore especially good at explaining many forms of altruism. It should not be confused with misuse of the term along the lines of a "selfishness gene".
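
The relatedness claim above is usually stated as Hamilton's rule, background the passage alludes to rather than quotes: an altruistic act is favoured when r·B > C, where r is the genetic relatedness between actor and recipient, B the benefit to the recipient, and C the cost to the actor. A minimal Python sketch (the function name and numbers are illustrative):

```python
def altruism_favoured(r, benefit, cost):
    """Hamilton's rule: an altruistic act spreads when r * B > C,
    where r is the genetic relatedness between actor and recipient."""
    return r * benefit > cost

# Full siblings share r = 0.5: sacrificing 1 unit of fitness pays off
# (at the level of the genes) if the sibling gains more than 2 units.
print(altruism_favoured(0.5, 3, 1))    # True: favoured
print(altruism_favoured(0.125, 3, 1))  # False: for first cousins it is not
```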

An organism is expected to evolve to maximise its inclusive fitness—the number of copies of its genes passed on globally (rather than by a particular individual). As a result, populations will tend towards an evolutionarily stable strategy. The book also coins the term meme for a unit of human cultural evolution analogous to the gene, suggesting that such "selfish" replication may also model human culture, in a different sense. Memetics has become the subject of many studies since the publication of the book.
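
The hawk-dove game, which Dawkins uses in the book to introduce John Maynard Smith's notion of an evolutionarily stable strategy, gives a concrete instance: when the cost of fighting C exceeds the resource value V, the stable outcome is a mixed strategy that plays hawk with probability V/C. A small sketch (the payoff values are illustrative):

```python
def hawk_dove_ess(v, c):
    """Evolutionarily stable hawk frequency in the hawk-dove game.
    Payoffs: hawk vs hawk = (v - c) / 2, hawk vs dove = v,
    dove vs hawk = 0, dove vs dove = v / 2.
    When c > v, the ESS is a mixed strategy playing hawk with probability v / c."""
    if v >= c:
        return 1.0  # fighting is always worth it: pure hawk is the ESS
    return v / c

print(hawk_dove_ess(2, 4))  # 0.5: at the ESS half the contests escalate
```

At the mixed ESS the expected payoffs of hawk and dove are equal, so neither strategy can invade the other: with V = 2, C = 4 both earn 0.5 per contest.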

In the foreword to the book's 30th-anniversary edition, Dawkins said he "can readily see that [the book's title] might give an inadequate impression of its contents" and in retrospect thinks he should have taken Tom Maschler's advice and called the book The Immortal Gene.[1]

"Selfish" genes

In describing genes as being "selfish", the author does not intend (as he states unequivocally) to imply that they are driven by any motives or will, but merely that their effects can be metaphorically and pedagogically described as if they were. The contention is that the genes that get passed on are the ones whose evolutionary consequences serve their own implicit interest (to continue being replicated), not necessarily those of the organism. Bringing the level of evolutionary dynamics down to the single gene, or to complementary genes that work well together in a given type of organism, Dawkins categorically rejects the school of thought which holds that evolution operates at the level of the social group.

This view is said to explain altruism at the individual level in nature, especially in kinship relationships: when an individual sacrifices its own life to protect the lives of kin, it is acting in the interest of its own genes. Some people find this metaphor entirely clear, while others find it confusing, misleading or simply redundant to ascribe mental attributes to something that is mindless.

For example, Andrew Brown has written:
"Selfish", when applied to genes, doesn't mean "selfish" at all. It means, instead, an extremely important quality for which there is no good word in the English language: "the quality of being copied by a Darwinian selection process." This is a complicated mouthful. There ought to be a better, shorter word—but "selfish" isn't it.[2]
Donald Symons also finds it inappropriate to use everyday language in conveying scientific meaning in general and particularly for the present instance:
In summary, the rhetoric of The Selfish Gene exactly reverses the real situation: through metaphor genes are endowed with properties only sentient beings can possess, such as selfishness, while sentient beings are stripped of these properties and called machines (robots).[3]

Genes and selection

Dawkins proposes the idea of the "replicator,"[4] the initial molecule which first managed to reproduce itself and thus gained an advantage over other molecules within the primordial soup.[5] As replicating molecules became more complex, Dawkins postulates, the replicators became the genes within organisms, with each organism's body serving the purpose of a 'survival machine' for its genes.
Dawkins writes that gene combinations which help an organism to survive and reproduce tend to also improve the gene's own chances of being passed on and, as a result, frequently "successful" genes will also be beneficial to the organism. An example of this might be a gene that protects the organism against a disease, which helps the gene spread and also helps the organism.

Genes can reproduce at the expense of the organism

There are other times when the implicit interests of the vehicle and replicator are in conflict, such as the genes behind certain male spiders' instinctive mating behaviour, which increase the organism's inclusive fitness by allowing it to reproduce, but shorten its life by exposing it to the risk of being eaten by the cannibalistic female. Another good example is the existence of segregation distortion genes that are detrimental to their host but nonetheless propagate themselves at its expense. Likewise, the existence of junk DNA that provides no benefit to its host, once a puzzle, can be more easily explained.[6]

Power struggles are rare

These examples might suggest that there is a power-struggle between genes and their host. In fact, the claim is that there isn't much of a struggle because the genes usually win without a fight. Only if the organism becomes intelligent enough to understand its own interests, as distinct from those of its genes, can there be true conflict.

An example of this conflict might be a person using birth control to prevent fertilisation, thereby inhibiting the replication of his or her genes.

But that may not be a conflict of the 'self-interest' of the organism with his or her genes, since a person using birth control may also be enhancing the survival chances of their genes by limiting family size to conform with available resources, thus avoiding extinction as predicted under the Malthusian model of population growth.

Many phenomena explained

When examined from the standpoint of gene selection, many biological phenomena that, in prior models, were difficult to explain become easier to understand. In particular, phenomena such as kin selection and eusociality, where organisms act altruistically, against their individual interests (in the sense of health, safety or personal reproduction) to help related organisms reproduce, can be explained as gene sets "helping" copies of themselves (or sequences with the same phenotypic effect) in other bodies to replicate. Interestingly, the "selfish" actions of genes lead to unselfish actions by organisms.

Prior to the 1960s, it was common for such behaviour to be explained in terms of group selection, where the benefits to the organism or even population were supposed to account for the popularity of the genes responsible for the tendency towards that behaviour. This was shown not to be an evolutionarily stable strategy, in that it would only take a single individual with a tendency towards more selfish behaviour to undermine a population otherwise filled only with the gene for altruism towards non-kin.
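
The instability argument can be made concrete with a toy fitness model (the names and numbers here are illustrative, not from the text): indiscriminate altruists share a benefit with everyone but pay a cost, so a lone defector always does strictly better and the altruism gene declines.

```python
def fitness(base, benefit, cost, is_altruist, frac_altruists):
    """Toy model: every individual receives 'benefit' scaled by the fraction
    of altruists in the population; altruists additionally pay 'cost'."""
    w = base + benefit * frac_altruists
    if is_altruist:
        w -= cost
    return w

# A lone defector in a population of altruists (fraction ~ 1.0):
altruist = fitness(10, 5, 1, True, 1.0)   # 10 + 5 - 1 = 14
defector = fitness(10, 5, 1, False, 1.0)  # 10 + 5     = 15
print(defector > altruist)  # True: selfishness invades
```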

Reception

The book was extremely popular when first published, caused "a silent and almost immediate revolution in biology",[7] and continues to be widely read. It has sold over a million copies, and has been translated into more than 25 languages.[8]

Proponents argue that the central point, that the gene is the unit of selection, usefully completes and extends the explanation of evolution given by Charles Darwin before the basic mechanisms of genetics were understood. Critics argue that it oversimplifies the relationship between genes and the organism. Mathematical biologists' initial reception of the ideas in the book was, according to Alan Grafen, "at best difficult", which Grafen attributes to those biologists' reliance solely on Mendelian genetics.[9]

In 1976, Arthur Cain, one of Dawkins's tutors at Oxford in the 1960s, called it a "young man's book" (which Dawkins points out was a deliberate quote of a commentator on A.J. Ayer's Language, Truth, and Logic); Dawkins later noted he had been "flattered by the comparison, [but] knew that Ayer had recanted much of his first book and [he] could hardly miss Cain's pointed implication that [he] should, in the fullness of time, do the same."[1]

Other types of selection suggested

Most modern evolutionary biologists accept that the idea is consistent with many processes in evolution. However, the view that selection on other levels, such as organisms and populations, seldom opposes selection on genes is more controversial. While naïve versions of group selectionism have been disproved, more sophisticated formulations make accurate predictions in some cases while positing selection at higher levels.[10] Nevertheless, the explanatory gains of using sophisticated formulations of group selectionism as opposed to Dawkins's gene-centred selectionism are still under dispute. Both sides agree that very favourable genes are likely to prosper and replicate if they arise and both sides agree that living in groups can be an advantage to the group members. The conflict arises not so much over disputes on hard facts but over what is the best way of viewing evolutionary selection in animals.

In "The Social Conquest of Earth," E. O. Wilson contends that kin selection as described in "The Selfish Gene" is a largely ineffective model of social evolution. Chapter 18 of "The Social Conquest of Earth" describes the deficiencies of kin selection and outlines group selection, which Wilson argues is a more realistic model of social evolution. He writes, "...unwarranted faith in the central role of kinship in social evolution has led to the reversal of the usual order in which biological research is conducted. The proven best way in evolutionary biology, as in most of science, is to define a problem arising during empirical research, then select or devise the theory that is needed to solve it. Almost all research in inclusive-fitness theory [such as in "The Selfish Gene"] has been the opposite: hypothesize the key roles of kinship and kin selection, then look for evidence to test that hypothesis."[11]

Unit of selection or of evolution

Some biologists have criticised the idea for describing the gene as the unit of selection, but suggest describing the gene as the unit of evolution, on the grounds that selection is a "here and now" event of reproduction and survival, while evolution is the long-term trend of shifting allele frequencies.[12]

Stephen Jay Gould also took issue with the gene as the unit of selection, arguing that genes are not directly 'visible' to natural selection. Rather, the unit of selection is the phenotype, not the genotype, because it is phenotypes which interact with the environment at the natural selection interface.[13] As Kim Sterelny[14] summarises Gould's view, "Gene differences do not cause evolutionary changes in populations, they register those changes". This is also Niles Eldredge's view. Eldredge[15] notes that in Dawkins' book A Devil's Chaplain, which was published just before Eldredge's book, "Richard Dawkins comments on what he sees as the main difference between his position and that of the late Stephen Jay Gould. He concludes that it is his own vision that genes play a causal role in evolution", while Gould (and Eldredge) "sees genes as passive recorders of what worked better than what".

Moral arguments

Another criticism of the book, made by the philosopher Mary Midgley in her book Evolution as a Religion, is that it discusses philosophical and moral questions that go beyond the biological arguments that Dawkins makes. For instance, humanity finally gaining power over the "selfish replicators" is a major theme at the end of the book. This view is criticised by primatologist Frans de Waal, who refers to it as the "veneer theory". Dawkins has pointed out that he is only describing how things are under evolution, not endorsing them as morally good.[16][17] Midgley's essential argument is that what separates humans from the rest of nature is the human ability to reconstruct nature through the tools of what we call "society" and "culture". She argues that Dawkins's account in The Selfish Gene is in fact a moral and ideological justification for modern human societies to adopt the selfishness found in nature. She argues further that humanly organised social and political institutions have been created precisely to counteract the selfish tendencies of nature, and that Dawkins's conception of selfishness as the engine of genetic behaviour would have disastrous consequences for future human society.

Editions

The Selfish Gene was first published in 1976[18] in eleven chapters with a preface by the author and a foreword by Robert Trivers. A second edition was published in 1989. This edition added two extra chapters, and substantial endnotes to the preceding chapters, reflecting new findings and thoughts. It also added a second preface by the author, but the original foreword by Trivers was dropped.

30th anniversary

In 2006, a 30th anniversary edition[8] was published which reinstated the Trivers foreword and contained a new introduction by the author (alongside the previous two prefaces), with some selected extracts from reviews at the back. It was accompanied by a festschrift entitled Richard Dawkins: How a Scientist Changed the Way We Think. In March 2006, a special event entitled The Selfish Gene: Thirty Years On was held at the London School of Economics. The event was organised by Helena Cronin, and chaired by Melvyn Bragg. In March 2011, Audible Inc published an audiobook edition narrated by Richard Dawkins and Lalla Ward.

Schrödinger's cat


From Wikipedia, the free encyclopedia
Schrödinger's cat: a cat, a flask of poison, and a radioactive source are placed in a sealed box. If an internal monitor detects radioactivity (i.e. a single atom decaying), the flask is shattered, releasing the poison that kills the cat. The Copenhagen interpretation of quantum mechanics implies that after a while, the cat is represented by a wave function that simultaneously includes alive and dead possibilities. Yet, when one looks in the box, one sees the cat either alive or dead, not both alive and dead. This poses the question of when exactly quantum superposition ends and reality collapses into one possibility or the other.

Schrödinger's cat is a thought experiment, sometimes described as a paradox, devised by Austrian physicist Erwin Schrödinger in 1935. It illustrates what he saw as the problem of the Copenhagen interpretation of quantum mechanics applied to everyday objects. The scenario presents a cat that is randomly put in a state where alive and dead are both possibilities, requiring further observation to determine which. The thought experiment is also often featured in theoretical discussions of the interpretations of quantum mechanics. In the course of developing this experiment, Schrödinger coined the term Verschränkung (entanglement).

Origin and motivation


Life-size cat figure in the garden of Huttenstrasse 30, Zurich, where Erwin Schrödinger lived from 1921 to 1926. Depending on the light conditions, the cat appears either alive or not.

Schrödinger intended his thought experiment as a discussion of the EPR article—named after its authors Einstein, Podolsky, and Rosen—in 1935.[1] The EPR article highlighted the strange nature of quantum superpositions, in which a quantum system such as an atom or photon can exist as a combination of multiple states corresponding to different possible outcomes. The prevailing theory, called the Copenhagen interpretation, said that a quantum system remained in this superposition until it interacted with, or was observed by, the external world, at which time the superposition collapses into one or another of the possible definite states. The EPR experiment showed that a system with multiple particles separated by large distances could be in such a superposition. Schrödinger and Einstein exchanged letters about Einstein's EPR article, in the course of which Einstein pointed out that the state of an unstable keg of gunpowder will, after a while, contain a superposition of both exploded and unexploded states.

To further illustrate, Schrödinger described how one could, in principle, create a superposition in a large-scale system by making it dependent on a quantum particle that was in a superposition. He proposed a scenario with a cat in a sealed box, wherein the cat's life or death depended on the state of a radioactive atom, whether it had decayed and emitted radiation or not. According to Schrödinger, the Copenhagen interpretation implies that the cat remains both alive and dead until the box is opened. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; on the contrary, he intended the example to illustrate the absurdity of the existing view of quantum mechanics.[2] However, since Schrödinger's time, other interpretations of the mathematics of quantum mechanics have been advanced by physicists, some of which regard the "alive and dead" cat superposition as quite real. Intended as a critique of the Copenhagen interpretation (the prevailing orthodoxy in 1935), the Schrödinger's cat thought experiment remains a defining touchstone for modern interpretations of quantum mechanics. Physicists often use the way each interpretation deals with Schrödinger's cat as a way of illustrating and comparing the particular features, strengths, and weaknesses of each interpretation.

The thought experiment

Schrödinger wrote:[2][3]
One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter, there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.
It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a "blurred model" for representing reality. In itself, it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks.
—Erwin Schrödinger, Die gegenwärtige Situation in der Quantenmechanik (The present situation in quantum mechanics), Naturwissenschaften
(translated by John D. Trimmer in Proceedings of the American Philosophical Society)
Schrödinger's famous thought experiment poses the question, "when does a quantum system stop existing as a superposition of states and become one or the other?" (More technically, when does the actual quantum state stop being a linear combination of states, each of which resembles different classical states, and instead begin to have a unique classical description?) If the cat survives, it remembers only being alive. But explanations of the EPR experiments that are consistent with standard microscopic quantum mechanics require that macroscopic objects, such as cats and notebooks, do not always have unique classical descriptions. The thought experiment illustrates this apparent paradox. Our intuition says that no observer can be in a mixture of states—yet the cat, it seems from the thought experiment, can be such a mixture. Is the cat required to be an observer, or does its existence in a single well-defined classical state require another external observer? Each alternative seemed absurd to Albert Einstein, who was impressed by the ability of the thought experiment to highlight these issues. In a letter to Schrödinger dated 1950, he wrote:
You are the only contemporary physicist, besides Laue, who sees that one cannot get around the assumption of reality, if only one is honest. Most of them simply do not see what sort of risky game they are playing with reality—reality as something independent of what is experimentally established. Their interpretation is, however, refuted most elegantly by your system of radioactive atom + amplifier + charge of gunpowder + cat in a box, in which the psi-function of the system contains both the cat alive and blown to bits. Nobody really doubts that the presence or absence of the cat is something independent of the act of observation.[4]
Note that the charge of gunpowder is not mentioned in Schrödinger's setup, which uses a Geiger counter as an amplifier and hydrocyanic poison instead of gunpowder. The gunpowder had been mentioned in Einstein's original suggestion to Schrödinger 15 years before, and Einstein carried it forward to the present discussion.
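
Schrödinger's requirement that the hour-long survival chance be exactly one half corresponds to a source with a half-life of one hour: for exponential radioactive decay, the probability of no decay after time t is e^(−λt) with λ = ln 2 per half-life. A quick numerical check (the function name is ours, not from the text):

```python
import math

def survival_probability(t_hours, half_life_hours=1.0):
    """Probability that no decay has occurred after t hours, for a source
    tuned (as in Schrodinger's setup) so the one-hour survival chance is 1/2."""
    decay_rate = math.log(2) / half_life_hours
    return math.exp(-decay_rate * t_hours)

print(round(survival_probability(1.0), 3))  # 0.5: even odds after one hour
print(round(survival_probability(2.0), 3))  # 0.25
```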

Interpretations of the experiment

Since Schrödinger's time, other interpretations of quantum mechanics have been proposed that give different answers to the questions posed by Schrödinger's cat of how long superpositions last and when (or whether) they collapse.

Copenhagen interpretation

The most commonly held interpretation of quantum mechanics is the Copenhagen interpretation.[5] In the Copenhagen interpretation, a system stops being a superposition of states and becomes either one or the other when an observation takes place. This thought experiment makes apparent the fact that the nature of measurement, or observation, is not well-defined in this interpretation. The experiment can be interpreted to mean that while the box is closed, the system simultaneously exists in a superposition of the states "decayed nucleus/dead cat" and "undecayed nucleus/living cat", and that only when the box is opened and an observation performed does the wave function collapse into one of the two states.
However, one of the main scientists associated with the Copenhagen interpretation, Niels Bohr, never had in mind the observer-induced collapse of the wave function, so that Schrödinger's cat did not pose any riddle to him. The cat would be either dead or alive long before the box is opened by a conscious observer.[6] Analysis of an actual experiment found that measurement alone (for example by a Geiger counter) is sufficient to collapse a quantum wave function before there is any conscious observation of the measurement.[7] The view that the "observation" is taken when a particle from the nucleus hits the detector can be developed into objective collapse theories. The thought experiment requires an "unconscious observation" by the detector in order for magnification to occur. In contrast, the many worlds approach denies that collapse ever occurs.

Many-worlds interpretation and consistent histories


The quantum-mechanical "Schrödinger's cat" paradox according to the many-worlds interpretation. In this interpretation, every event is a branch point. The cat is both alive and dead—regardless of whether the box is opened—but the "alive" and "dead" cats are in different branches of the universe that are equally real but cannot interact with each other.
In 1957, Hugh Everett formulated the many-worlds interpretation of quantum mechanics, which does not single out observation as a special process. In the many-worlds interpretation, both alive and dead states of the cat persist after the box is opened, but are decoherent from each other. In other words, when the box is opened, the observer and the possibly-dead cat split into an observer looking at a box with a dead cat, and an observer looking at a box with a live cat. But since the dead and alive states are decoherent, there is no effective communication or interaction between them.
When opening the box, the observer becomes entangled with the cat, so "observer states" corresponding to the cat's being alive and dead are formed; each observer state is entangled or linked with the cat so that the "observation of the cat's state" and the "cat's state" correspond with each other. Quantum decoherence ensures that the different outcomes have no interaction with each other. The same mechanism of quantum decoherence is also important for the interpretation in terms of consistent histories. Only the "dead cat" or the "alive cat" can be a part of a consistent history in this interpretation.

Roger Penrose criticises this:
"I wish to make it clear that, as it stands, this is far from a resolution of the cat paradox. For there is nothing in the formalism of quantum mechanics that demands that a state of consciousness cannot involve the simultaneous perception of a live and a dead cat."[8]
However, the mainstream view (without necessarily endorsing many-worlds) is that decoherence is the mechanism that forbids such simultaneous perception.[9][10]

A variant of the Schrödinger's cat experiment, known as the quantum suicide machine, has been proposed by cosmologist Max Tegmark. It examines the Schrödinger's cat experiment from the point of view of the cat, and argues that by using this approach, one may be able to distinguish between the Copenhagen interpretation and many-worlds.

Ensemble interpretation

The ensemble interpretation states that superpositions are nothing but subensembles of a larger statistical ensemble. The state vector would not apply to individual cat experiments, but only to the statistics of many similarly prepared cat experiments. Proponents of this interpretation state that this makes the Schrödinger's cat paradox a trivial matter, or a non-issue.

This interpretation serves to discard the idea that a single physical system in quantum mechanics has a mathematical description that corresponds to it in any way.

Relational interpretation

The relational interpretation makes no fundamental distinction between the human experimenter, the cat, or the apparatus, or between animate and inanimate systems; all are quantum systems governed by the same rules of wavefunction evolution, and all may be considered "observers". But the relational interpretation allows that different observers can give different accounts of the same series of events, depending on the information they have about the system.[11] The cat can be considered an observer of the apparatus; meanwhile, the experimenter can be considered another observer of the system in the box (the cat plus the apparatus). Before the box is opened, the cat, by nature of its being alive or dead, has information about the state of the apparatus (the atom has either decayed or not decayed); but the experimenter does not have information about the state of the box contents. In this way, the two observers simultaneously have different accounts of the situation: To the cat, the wavefunction of the apparatus has appeared to "collapse"; to the experimenter, the contents of the box appear to be in superposition. Not until the box is opened, and both observers have the same information about what happened, do both system states appear to "collapse" into the same definite result, a cat that is either alive or dead.

Objective collapse theories

According to objective collapse theories, superpositions are destroyed spontaneously (irrespective of external observation) when some objective physical threshold (of time, mass, temperature, irreversibility, etc.) is reached. Thus, the cat would be expected to have settled into a definite state long before the box is opened. This could loosely be phrased as "the cat observes itself", or "the environment observes the cat".

Objective collapse theories require a modification of standard quantum mechanics to allow superpositions to be destroyed by the process of time evolution. This process, known as "decoherence", is among the fastest processes currently known to physics.[12]

Applications and tests

Schrödinger's cat: quantum superposition of states and the effect of the environment through decoherence

The experiment as described is a purely theoretical one, and the machine proposed is not known to have been constructed. However, successful experiments involving similar principles, e.g. superpositions of relatively large (by the standards of quantum physics) objects have been performed.[13] These experiments do not show that a cat-sized object can be superposed, but the known upper limit on "cat states" has been pushed upwards by them. In many cases the state is short-lived, even when cooled to near absolute zero.
  • A "cat state" has been achieved with photons.[14]
  • A beryllium ion has been trapped in a superposed state.[15]
  • An experiment involving a superconducting quantum interference device ("SQUID") has been linked to the theme of the thought experiment: "The superposition state does not correspond to a billion electrons flowing one way and a billion others flowing the other way. Superconducting electrons move en masse. All the superconducting electrons in the SQUID flow both ways around the loop at once when they are in the Schrödinger's cat state."[16]
  • A piezoelectric "tuning fork" has been constructed, which can be placed into a superposition of vibrating and non-vibrating states. The resonator comprises about 10 trillion atoms.[17]
  • An experiment involving a flu virus has been proposed.[18]
In quantum computing the phrase "cat state" often refers to the special entanglement of qubits wherein the qubits are in an equal superposition of all being 0 and all being 1; e.g.,
 |ψ⟩ = (|00…0⟩ + |11…1⟩) / √2.
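
Such a cat state is easy to write down explicitly: in the 2^n-dimensional state vector, only the all-zeros and all-ones basis states (indices 0 and 2^n − 1) carry amplitude, each weighted 1/√2. A small sketch in plain Python, with no quantum library assumed:

```python
import math

def cat_state(n_qubits):
    """Amplitude vector of the n-qubit cat state (|00...0> + |11...1>) / sqrt(2):
    only the all-zeros and all-ones basis states carry amplitude."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(2)
    state = [0.0] * dim
    state[0] = amp        # |00...0>
    state[dim - 1] = amp  # |11...1>
    return state

psi = cat_state(3)
print(round(sum(a * a for a in psi), 12))  # 1.0: the state is normalised
print(psi[0] == psi[7])                    # True: equal weight on |000> and |111>
```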

Extensions

Wigner's friend is a variant on the experiment with two external observers: the first opens and inspects the box and then communicates his observations to a second observer. The issue here is, does the wave function "collapse" when the first observer opens the box, or only when the second observer is informed of the first observer's observations?

In another extension, prominent physicists have gone so far as to suggest that astronomers observing dark energy in the universe in 1998 may have "reduced its life expectancy" through a pseudo-Schrödinger's cat scenario, although this is a controversial viewpoint.[19][20]

Arrow of time


From Wikipedia, the free encyclopedia


Arthur Stanley Eddington

The Arrow of Time, or Time's Arrow, is a concept developed in 1927 by the British astronomer Arthur Eddington involving the "one-way direction" or "asymmetry" of time. This direction, which can be determined, according to Eddington, by studying the organization of atoms, molecules and bodies, might be drawn upon a four-dimensional relativistic map of the world ("a solid block of paper").[1]

Physical processes at the microscopic level are believed to be either entirely or mostly time-symmetric: if the direction of time were to reverse, the theoretical statements that describe them would remain true. Yet at the macroscopic level it often appears that this is not the case: there is an obvious direction (or flow) of time.

Eddington

In the 1928 book The Nature of the Physical World, which helped to popularize the concept, Eddington stated:
Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random element decreases the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of randomness is the only thing which cannot be undone. I shall use the phrase ‘time's arrow’ to express this one-way property of time which has no analogue in space.
Eddington then gives three points to note about this arrow:
  1. It is vividly recognized by consciousness.
  2. It is equally insisted on by our reasoning faculty, which tells us that a reversal of the arrow would render the external world nonsensical.
  3. It makes no appearance in physical science except in the study of organization of a number of individuals.
According to Eddington the arrow indicates the direction of progressive increase of the random element. Following a lengthy argument upon the nature of thermodynamics he concludes that, so far as physics is concerned, time's arrow is a property of entropy alone.

Overview

The symmetry of time (T-symmetry) can be understood by a simple analogy: if time were perfectly symmetrical, a video of real events would seem realistic whether played forwards or backwards.[2] An obvious objection to this notion is gravity: things fall down, not up. Yet a ball that is tossed up, slows to a stop and falls back into the hand is a case where recordings would look equally realistic forwards and backwards. The system is T-symmetrical, but while going "forward" kinetic energy is dissipated and entropy is increased. Increasing entropy may be one of the few processes that is not time-reversible. According to the statistical notion of increasing entropy, the "arrow" of time is identified with a decrease of free energy.[3]

If we record somebody dropping a ball that falls for a meter and stops, in reverse we will notice an unrealistic discrepancy: a ball falling upward! But when the ball lands its kinetic energy is dispersed into sound, shock-waves and heat. In reverse those sound waves, ground vibrations and heat will rush back into the ball, imparting enough energy to propel it upward one meter into the person's hand. The only unrealism lies in the statistical unlikelihood that such forces could coincide to propel a ball upward into a waiting hand.
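The microscopic reversibility described above is easy to verify numerically: integrate a tossed ball forward under gravity, flip the sign of its velocity, integrate again, and the motion retraces itself. A minimal sketch using a velocity-Verlet step (all numbers here are illustrative choices, not from the text):

```python
# Time-reversal symmetry of Newtonian mechanics: evolve a ball under
# gravity, reverse its velocity, evolve again, and recover the start.

G = -9.81  # gravitational acceleration, m/s^2

def step(x, v, dt):
    """One velocity-Verlet step under constant gravity (exactly reversible)."""
    x_new = x + v * dt + 0.5 * G * dt * dt
    v_new = v + G * dt
    return x_new, v_new

def integrate(x, v, dt, n):
    for _ in range(n):
        x, v = step(x, v, dt)
    return x, v

x0, v0 = 0.0, 4.0                         # launched upward at 4 m/s
x1, v1 = integrate(x0, v0, 1e-3, 1000)    # 1 s of forward evolution
x2, v2 = integrate(x1, -v1, 1e-3, 1000)   # flip velocity and replay

print(abs(x2 - x0), abs(v2 + v0))  # both ~0: the motion retraces itself
```

Reversing every velocity sends the system back along its own history; nothing in the equations of motion picks out a preferred direction.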

Arrows

The thermodynamic arrow of time

The arrow of time is the "one-way direction" or "asymmetry" of time. The thermodynamic arrow of time is provided by the Second Law of Thermodynamics, which says that in an isolated system, entropy tends to increase with time. Entropy can be thought of as a measure of microscopic disorder; thus the Second Law implies that time is asymmetrical with respect to the amount of order in an isolated system: as a system advances through time, it will statistically become more disordered. This asymmetry can be used empirically to distinguish between future and past, though measuring entropy does not by itself constitute a measurement of time. Note also that in an open system, entropy can decrease with time.
British physicist Sir Alfred Brian Pippard wrote, "There is thus no justification for the view, often glibly repeated, that the Second Law of Thermodynamics is only statistically true, in the sense that microscopic violations repeatedly occur, but never violations of any serious magnitude. On the contrary, no evidence has ever been presented that the Second Law breaks down under any circumstances."[4] However, there are a number of paradoxes regarding violation of the Second Law of Thermodynamics, one of them due to the Poincaré recurrence theorem.
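The statistical character of this arrow is captured by the classic Ehrenfest urn model: balls hop one at a time between two urns, the imbalance shrinks on average, yet small fluctuations against the trend happen constantly, and a full recurrence to the ordered initial state is possible but astronomically rare for large N. A minimal sketch (parameters are arbitrary):

```python
import random

# Ehrenfest urn model: N labelled balls in two urns; each step moves one
# randomly chosen ball to the other urn. The count in the left urn drifts
# from the ordered initial value (all 100 on the left) toward the
# equilibrium value N/2 and then fluctuates around it.

random.seed(1)
N = 100
in_left = [True] * N          # start maximally ordered: all balls on the left

history = []
for _ in range(5000):
    i = random.randrange(N)   # pick a ball uniformly at random
    in_left[i] = not in_left[i]
    history.append(sum(in_left))

print(history[0], history[-1])  # drifts from ~100 toward ~50 and stays near it
```

The drift toward 50 is the "second law" of this toy world; the ever-present small excursions back toward order are the microscopic fluctuations, and Poincaré-style recurrence to the all-left state, while certain in infinite time, takes on the order of 2^N steps.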

This arrow of time seems to be related to all other arrows of time and arguably underlies some of them, with the exception of the weak arrow of time.

The cosmological arrow of time

The cosmological arrow of time points in the direction of the universe's expansion. It may be linked to the thermodynamic arrow, with the universe heading towards a heat death (Big Chill) as the amount of usable energy becomes negligible. Alternatively, it may be an artifact of our place in the universe's evolution (see the Anthropic bias), with this arrow reversing as gravity pulls everything back into a Big Crunch.
If this arrow of time is related to the other arrows of time, then the future is by definition the direction towards which the universe becomes bigger. Thus, by definition, the universe expands rather than shrinks.

The thermodynamic arrow of time and the Second law of thermodynamics are thought to be a consequence of the initial conditions in the early universe. Therefore they ultimately result from the cosmological set-up.

The radiative arrow of time

Waves, from radio waves to sound waves to those on a pond from throwing a stone, expand outward from their source, even though the wave equations allow for solutions of convergent waves as well as radiative ones. This arrow has been reversed in carefully worked experiments which have created convergent waves,[5] so this arrow probably follows from the thermodynamic arrow in that meeting the conditions to produce a convergent wave requires more order than the conditions for a radiative wave. Put differently, the probability for initial conditions that produce a convergent wave is much lower than the probability for initial conditions that produce a radiative wave. In fact, normally a radiative wave increases entropy, while a convergent wave decreases it, making the latter contradictory to the Second Law of Thermodynamics in usual circumstances.
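The underlying symmetry is easy to check: the one-dimensional wave equation u_tt = c^2 u_xx is satisfied both by an outgoing profile f(x - ct) and by a converging one f(x + ct); only initial conditions distinguish them. A minimal numerical sketch using central finite differences on a Gaussian pulse (all choices here are illustrative):

```python
import math

# The 1-D wave equation u_tt = c^2 u_xx is time-symmetric: both an
# outgoing pulse f(x - c t) and a converging pulse f(x + c t) solve it.
# Check the residual u_tt - c^2 u_xx at a sample point by finite differences.

c, h = 1.0, 1e-3
f = lambda s: math.exp(-s * s)          # a smooth Gaussian pulse

def residual(u, x, t):
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_tt - c * c * u_xx

outgoing = lambda x, t: f(x - c * t)
ingoing  = lambda x, t: f(x + c * t)

r_out = residual(outgoing, 0.3, 0.7)
r_in  = residual(ingoing, 0.3, 0.7)
print(r_out, r_in)  # both ~0: each profile satisfies the wave equation
```

Both residuals vanish to discretization accuracy; the preference for outgoing waves in nature comes from the initial conditions, not from the equation itself.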

The causal arrow of time

A cause precedes its effect: the causal event occurs before the event it affects. Birth, for example, follows a successful conception and not vice versa. Thus causality is intimately bound up with time's arrow.

An epistemological problem with using causality as an arrow of time is that, as David Hume maintained, the causal relation per se cannot be perceived; one only perceives sequences of events. Furthermore, it is surprisingly difficult to provide a clear explanation of what the terms cause and effect really mean, or to define the events to which they refer. However, it does seem evident that dropping a cup of water is a cause while the cup subsequently shattering and spilling the water is the effect.

Physically speaking, the perception of cause and effect in the dropped cup example is a phenomenon of the thermodynamic arrow of time, a consequence of the Second law of thermodynamics.[6]
Controlling the future, or causing something to happen, creates correlations between the doer and the effect,[7] and these can only be created as we move forwards in time, not backwards.

The particle physics (weak) arrow of time

Certain subatomic interactions involving the weak nuclear force violate the conservation of both parity and charge conjugation, but only very rarely. An example is the kaon decay [1]. According to the CPT Theorem, this means they should also be time irreversible, and so establish an arrow of time. Such processes should be responsible for matter creation in the early universe.
That the combination of parity and charge conjugation is broken so rarely means that this arrow only "barely" points in one direction, setting it apart from the other arrows whose direction is much more obvious. This arrow is not linked to any other arrow by any proposed mechanism.

The quantum arrow of time

According to the Copenhagen interpretation of quantum mechanics, quantum evolution is governed by the Schrödinger equation, which is time-symmetric, and by wave function collapse, which is time irreversible. As the mechanism of wave function collapse is philosophically obscure, it is not completely clear how this arrow links to the others. Despite the post-measurement state being entirely stochastic in formulations of quantum mechanics, a link to the thermodynamic arrow has been proposed: the second law of thermodynamics amounts to an observation that nature shows a bias for collapsing wave functions into higher-entropy states rather than lower ones, and the claim that this is merely due to more possible states being high-entropy runs afoul of Loschmidt's paradox.

According to the modern physical view of wave function collapse, the theory of quantum decoherence, the quantum arrow of time is a consequence of the thermodynamic arrow of time.

The quantum source of time

Some physicists argue that quantum uncertainty gives rise to entanglement, the putative source of the arrow of time. The idea that entanglement might explain the arrow of time was first proposed by Seth Lloyd in the 1980s, when he was a 23-year-old philosophy student at Cambridge University with a Harvard physics degree. Lloyd realized that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time. According to Lloyd, "The arrow of time is an arrow of increasing correlations."[8]
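Lloyd's slogan can be illustrated with a toy calculation: two qubits that start uncorrelated become entangled as they interact, so the entropy of either qubit viewed on its own grows from zero. This is a minimal numpy sketch under an arbitrary choice of coupling, not a model taken from the text:

```python
import numpy as np

# Two qubits start in the product state |00> and interact via a simple
# coupling H = sigma_x (x) sigma_x. As they entangle, the von Neumann
# entropy of the reduced state of qubit A grows from zero: correlations
# (entanglement) increase with time.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.kron(sx, sx)                     # two-qubit coupling; note H @ H = I

def entanglement_entropy(t):
    # Because H^2 = I, the propagator is exp(-iHt) = cos(t) I - i sin(t) H.
    U = np.cos(t) * np.eye(4) - 1j * np.sin(t) * H
    psi = U @ np.array([1, 0, 0, 0], dtype=complex)   # evolve |00>
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_a = np.trace(rho, axis1=1, axis2=3)           # trace out qubit B
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))      # entropy in nats

s_vals = [entanglement_entropy(t) for t in (0.0, 0.3, 0.6)]
print([round(s, 3) for s in s_vals])  # entropy grows from 0 as they entangle
```

For this tiny closed system the entropy eventually oscillates back down; the claim in the text is that for many interacting particles the spread of entanglement is overwhelmingly one-way in practice, which is what gives it the character of an arrow.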

The psychological/perceptual arrow of time

A related mental arrow arises because one has the sense that one's perception is a continuous movement from the known (past) to the unknown (future). Anticipating the unknown forms the psychological future which always seems to be something one is moving towards, but, like a projection in a mirror, it makes what is actually already a part of memory, such as desires, dreams, and hopes, seem ahead of the observer. The association of "behind ⇔ past" and "ahead ⇔ future" is itself culturally determined. For example, the Aymara language associates "ahead ⇔ past" and "behind ⇔ future".[9] Similarly, the Chinese term for "the day after tomorrow" literally means "behind day", whereas "the day before yesterday" is referred to as "front day."

The words yesterday and tomorrow both translate to the same word in Hindi: कल ("kal"),[10] meaning "the day remote from today."[11]

The other side of the psychological passage of time is in the realm of volition and action. We plan and often execute actions intended to affect the course of events in the future. Hardly anyone tries to change past events. Indeed, in the Rubaiyat it is written (sic):
The Moving Finger writes; and, having writ,
  Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
  Nor all thy Tears wash out a Word of it.
- Omar Khayyám (translation by Edward FitzGerald).

White hole


From Wikipedia, the free encyclopedia

In general relativity, a white hole is a hypothetical region of spacetime which cannot be entered from the outside, although matter and light can escape from it. In this sense, it is the reverse of a black hole, which can only be entered from the outside, from which nothing, including light, can escape. White holes appear in the theory of eternal black holes. In addition to a black hole region in the future, such a solution of the Einstein field equations has a white hole region in its past.[1] However, this region does not exist for black holes that have formed through gravitational collapse, nor are there any known physical processes through which a white hole could be formed. No white hole has ever been observed.

Like black holes, white holes have properties like mass, charge, and angular momentum. They attract matter like any other mass, but objects falling towards a white hole would never actually reach the white hole's event horizon (though in the case of the maximally extended Schwarzschild solution, discussed below, the white hole event horizon in the past becomes a black hole event horizon in the future, so any object falling towards it will eventually reach the black hole horizon).
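The statement that infalling objects never reach the horizon in the external description can be made quantitative. For a radially ingoing light ray in Schwarzschild coordinates (a standard textbook result, sketched here in units with G = c = 1), the coordinate time diverges logarithmically as the ray approaches the horizon at r = 2M, while the proper time of a massive infaller remains finite:

```latex
% Radial null rays in the Schwarzschild geometry (G = c = 1) obey
\frac{dt}{dr} = \pm\left(1 - \frac{2M}{r}\right)^{-1},
% so integrating for an ingoing ray gives
t = -\,r - 2M \ln\left|\frac{r}{2M} - 1\right| + \text{const},
% which diverges as r -> 2M: in these coordinates the horizon crossing is
% pushed infinitely far into the future, even though the affine parameter
% (and a massive infaller's proper time) stays finite at the horizon.
```

The same logarithm run in reverse describes the white-hole side: outgoing particles seen by an external observer have, in these coordinates, been climbing away from the horizon since infinitely far in the past.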

In quantum mechanics, the black hole emits Hawking radiation and so can come to thermal equilibrium with a gas of radiation. Because a thermal-equilibrium state is time-reversal-invariant, Stephen Hawking argued that the time reverse of a black hole in thermal equilibrium is again a black hole in thermal equilibrium.[2] This implies that black holes and white holes are the same object. The Hawking radiation from an ordinary black hole is then identified with the white-hole emission. Hawking's semi-classical argument is reproduced in a quantum mechanical AdS/CFT treatment,[3] where a black hole in anti-de Sitter space is described by a thermal gas in a gauge theory, whose time reversal is the same as itself.

Origin 


A diagram of the structure of the maximally extended black hole spacetime. The horizontal direction is space and the vertical direction time.

The possibility of the existence of white holes was put forward by I. Novikov in 1964.[4] White holes are predicted as part of a solution to the Einstein field equations known as the maximally extended version of the Schwarzschild metric describing an eternal black hole with no charge and no rotation. Here, "maximally extended" refers to the idea that the spacetime should not have any "edges": for any possible trajectory of a free-falling particle (following a geodesic) in the spacetime, it should be possible to continue this path arbitrarily far into the particle's future, unless the trajectory hits a gravitational singularity like the one at the center of the black hole's interior. In order to satisfy this requirement, it turns out that in addition to the black hole interior region which particles enter when they fall through the event horizon from the outside, there must be a separate white hole interior region which allows us to extrapolate the trajectories of particles which an outside observer sees rising up away from the event horizon. For an observer outside using Schwarzschild coordinates, infalling particles take an infinite time to reach the black hole horizon infinitely far in the future, while outgoing particles which pass the observer have been traveling outward for an infinite time since crossing the white hole horizon infinitely far in the past (however, the particles or other objects experience only a finite proper time between crossing the horizon and passing the outside observer).
The black hole/white hole appears "eternal" from the perspective of an outside observer, in the sense that particles traveling outward from the white hole interior region can pass the observer at any time, and particles traveling inward which will eventually reach the black hole interior region can also pass the observer at any time.

Just as there are two separate interior regions of the maximally extended spacetime, there are also two separate exterior regions, sometimes called two different "universes", with the second universe allowing us to extrapolate some possible particle trajectories in the two interior regions. This means that the interior black-hole region can contain a mix of particles that fell in from either universe (and thus an observer who fell in from one universe might be able to see light that fell in from the other one), and likewise particles from the interior white-hole region can escape into either universe. All four regions can be seen in a spacetime diagram which uses Kruskal–Szekeres coordinates; see the figure.[5]

In this spacetime, it is possible to come up with coordinate systems such that if you pick a hypersurface of constant time (a set of points that all have the same time coordinate, such that every point on the surface has a space-like separation, giving what is called a 'space-like surface') and draw an "embedding diagram" depicting the curvature of space at that time, the embedding diagram will look like a tube connecting the two exterior regions, known as an "Einstein-Rosen bridge" or Schwarzschild wormhole.[5] Depending on where the space-like hypersurface is chosen, the Einstein-Rosen bridge can either connect two black hole event horizons in each universe (with points in the interior of the bridge being part of the black hole region of the spacetime), or two white hole event horizons in each universe (with points in the interior of the bridge being part of the white hole region). It is impossible to use the bridge to cross from one universe to the other, however, because it is impossible to enter a white hole event horizon from the outside, and anyone entering a black hole horizon from either universe will inevitably hit the black hole singularity.
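For reference, the coordinates behind this construction can be written down explicitly. The Schwarzschild form of the metric (sketched here in units with G = c = 1) is singular at r = 2M, but that is only a coordinate artifact; Kruskal–Szekeres coordinates remove it and cover all four regions at once, following the standard textbook conventions:

```latex
% Schwarzschild metric, singular (in these coordinates only) at r = 2M:
ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^2 + r^2\, d\Omega^2 .

% Kruskal--Szekeres coordinates for the exterior region r > 2M:
U = \sqrt{\frac{r}{2M} - 1}\; e^{r/4M} \cosh\frac{t}{4M}, \qquad
V = \sqrt{\frac{r}{2M} - 1}\; e^{r/4M} \sinh\frac{t}{4M},

% in which the metric is regular across both horizons:
ds^2 = \frac{32 M^3}{r}\, e^{-r/2M} \left(-dV^2 + dU^2\right)
       + r^2\, d\Omega^2 ,
\qquad U^2 - V^2 = \left(\frac{r}{2M} - 1\right) e^{r/2M} .

% The four regions: exterior I (U > |V|), black-hole interior II (V > |U|),
% second exterior III (U < -|V|), white-hole interior IV (V < -|U|).
```

Lines of constant r are hyperbolae in the (U, V) plane and the horizons are the diagonals V = ±U, which is why the white-hole horizon in the past joins smoothly onto the black-hole horizon in the future.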

Note that the maximally extended Schwarzschild metric describes an idealized black hole/white hole that exists eternally from the perspective of external observers; a more realistic black hole that forms at some particular time from a collapsing star would require a different metric. When the infalling stellar matter is added to a diagram of a black hole's history, it removes the part of the diagram corresponding to the white hole interior region.[6] But because the equations of general relativity are time-reversible (they exhibit T-symmetry), general relativity must also allow the time-reverse of this type of "realistic" black hole that forms from collapsing matter. The time-reversed case would be a white hole that has existed since the beginning of the universe, and which emits matter until it finally "explodes" and disappears.[7] Despite the fact that such objects are permitted theoretically, they are not taken as seriously as black holes by physicists, since there would be no processes that would naturally lead to their formation, they could only exist if they were built into the initial conditions of the Big Bang.[7] Additionally, it is predicted that such a white hole would be highly "unstable" in the sense that if any small amount of matter fell towards the horizon from the outside, this would prevent the white hole's explosion as seen by distant observers, with the matter emitted from the singularity never able to escape the white hole's gravitational radius.[8]

1980s – present speculations

A view of black holes first proposed in the late 1980s might be interpreted as shedding some light on the nature of classical white holes. Some researchers have proposed that when a black hole forms, a big bang may occur at the core, which would create a new universe that expands outside of the parent universe.[9][10][11] See also Fecund universes.

The Einstein–Cartan–Sciama–Kibble theory of gravity extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum (spin) of matter. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular black hole. In the Einstein–Cartan theory, however, the minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity. Instead, the collapsing matter on the other side of the event horizon reaches an enormous but finite density and rebounds, forming a regular Einstein–Rosen bridge.[12] The other side of the bridge becomes a new, growing baby universe. For observers in the baby universe, the parent universe appears as the only white hole. Accordingly, the observable universe is the Einstein–Rosen interior of a black hole existing as one of possibly many inside a larger universe. The Big Bang was a nonsingular Big Bounce at which the observable universe had a finite, minimum scale factor.[13]

A 2011 paper argues that the Big Bang itself is a white hole. It further suggests that the emergence of a white hole, which was named a 'Small Bang', is spontaneous—all the matter is ejected at a single pulse. Thus, unlike black holes, white holes cannot be continuously observed; rather, their effect can only be detected around the event itself. The paper even proposed identifying a new group of gamma-ray bursts with white holes.[14] The idea of a Big Bang produced by a white hole explosion was recently explored in the framework of a five-dimensional vacuum by Madriz Aguilar, Moreno and Bellini.[15]
