
Bell test

From Wikipedia, the free encyclopedia

A Bell test, also known as Bell inequality test or Bell experiment, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to Albert Einstein's concept of local realism. The experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables (called "hidden" because they are not a feature of quantum theory) to explain the behavior of particles like photons and electrons. To date, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave.

According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist. Such results would support the position that there is no way to explain the phenomena of quantum mechanics in terms of a more fundamental description of nature that is more in line with the rules of classical physics.

Many types of Bell tests have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests".

Overview

The Bell test has its origins in the debate between Einstein and other pioneers of quantum physics, principally Niels Bohr. One feature of the theory of quantum mechanics under debate was the meaning of Heisenberg's uncertainty principle. This principle states that if some information is known about a given particle, there is some other information about it that is impossible to know. An example of this is found in observations of the position and the momentum of a given particle. According to the uncertainty principle, a particle's momentum and its position cannot simultaneously be determined with arbitrarily high precision.

In 1935, Einstein, Boris Podolsky, and Nathan Rosen published a claim that quantum mechanics predicts that more information about a pair of entangled particles could be observed than Heisenberg's principle allowed, which would only be possible if information were travelling instantly between the two particles. This produces a paradox which came to be known as the "EPR paradox" after the three authors. It arises if any effect felt in one location is not the result of a cause that occurred in its past, relative to its location. This action at a distance would violate the theory of relativity, by allowing information between the two locations to travel faster than the speed of light.

Based on this, the authors concluded that the quantum wave function does not provide a complete description of reality. They suggested that there must be some local hidden variables at work in order to account for the behavior of entangled particles. In a theory of hidden variables, as Einstein envisaged it, the randomness and indeterminacy seen in the behavior of quantum particles would only be apparent. For example, if one knew the details of all the hidden variables associated with a particle, then one could predict both its position and momentum. The uncertainty that had been quantified by Heisenberg's principle would simply be an artifact of not having complete information about the hidden variables. Furthermore, Einstein argued that the hidden variables should obey the condition of locality: Whatever the hidden variables actually are, the behavior of the hidden variables for one particle should not be able to instantly affect the behavior of those for another particle far away. This idea, called the principle of locality, is rooted in intuition from classical physics that physical interactions do not propagate instantly across space. These ideas were the subject of ongoing debate between their proponents. In particular, Einstein himself did not approve of the way Podolsky had stated the problem in the famous EPR paper.

In 1964, John Stewart Bell proposed his now famous theorem, which states that no physical theory of local hidden variables can ever reproduce all the predictions of quantum mechanics. Implicit in the theorem is the proposition that the determinism of classical physics is fundamentally incapable of describing quantum mechanics. Bell expanded on the theorem to provide what would become the conceptual foundation of the Bell test experiments.

A typical experiment involves the observation of particles, often photons, in an apparatus designed to produce entangled pairs and allow for the measurement of some characteristic of each, such as their spin. The results of the experiment could then be compared to what was predicted by local realism and those predicted by quantum mechanics.

In theory, the results could be "coincidentally" consistent with both. To address this problem, Bell proposed a mathematical description of local realism that placed a statistical limit on the likelihood of that eventuality. If the results of an experiment violate Bell's inequality, local hidden variables can be ruled out as their cause. Later researchers built on Bell's work by proposing new inequalities that serve the same purpose and refine the basic idea in one way or another. Consequently, the term "Bell inequality" can mean any one of a number of inequalities satisfied by local hidden-variables theories; in practice, many present-day experiments employ the CHSH inequality. All these inequalities, like the original devised by Bell, express the idea that assuming local realism places restrictions on the statistical results of experiments on sets of particles that have taken part in an interaction and then separated.

To date, all Bell tests have supported the theory of quantum physics, and not the hypothesis of local hidden variables.

Conduct of optical Bell test experiments

In practice most actual experiments have used light, assumed to be emitted in the form of particle-like photons (produced by atomic cascade or spontaneous parametric down conversion), rather than the atoms that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. Such experiments fall into two classes, depending on whether the analysers used have one or two output channels.

A typical CHSH (two-channel) experiment

Scheme of a "two-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a two-channel polariser whose orientation can be set by the experimenter. Emerging signals from each channel are detected and coincidences counted by the coincidence monitor CM.

The diagram shows a typical optical experiment of the two-channel kind for which Alain Aspect set a precedent in 1982. Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated.

Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S (equation (2) shown below). The settings a, a′, b and b′ are in practice generally chosen to be 0°, 45°, 22.5° and 67.5° respectively — the "Bell test angles" — these being the ones for which the quantum-mechanical formula gives the greatest violation of the inequality.
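As a rough illustration of why these angles are special, here is a minimal sketch (Python; not part of the original article) that assumes the usual quantum-mechanical prediction E(a, b) = cos 2(a − b) for polarization-entangled photon pairs and evaluates the CHSH combination of equation (2) below at the Bell test angles, obtaining the maximum value 2√2 ≈ 2.83:

```python
import math

def E_qm(a_deg, b_deg):
    """Assumed quantum-mechanical correlation for polarization-entangled
    photons, E(a, b) = cos 2(a - b), with angles in degrees."""
    return math.cos(math.radians(2 * (a_deg - b_deg)))

# The "Bell test angles"
a, a_prime, b, b_prime = 0.0, 45.0, 22.5, 67.5

# CHSH combination, as in equation (2) below
S = E_qm(a, b) - E_qm(a, b_prime) + E_qm(a_prime, b) + E_qm(a_prime, b_prime)
print(S)                 # ~2.828
print(2 * math.sqrt(2))  # the quantum maximum, 2*sqrt(2)
```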

For each selected value of a and b, the numbers of coincidences in each category (N++, N−−, N+− and N−+) are recorded. The experimental estimate for E(a, b) is then calculated as:

$$E(a,b) = \frac{N_{++} + N_{--} - N_{+-} - N_{-+}}{N_{++} + N_{--} + N_{+-} + N_{-+}} \qquad (1)$$

Once all four E’s have been estimated, an experimental estimate of the test statistic

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b') \qquad (2)$$

can be found. If S is numerically greater than 2 it has infringed the CHSH inequality. The experiment is declared to have supported the QM prediction and ruled out all local hidden-variable theories.
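As a concrete illustration, a minimal sketch (Python, with made-up coincidence counts; not from the original article) of how equations (1) and (2) turn the recorded counts into the test statistic S:

```python
def correlation(counts):
    """Equation (1): estimate E(a, b) from the four coincidence counts,
    given as a dict with keys '++', '--', '+-', '-+'."""
    num = counts['++'] + counts['--'] - counts['+-'] - counts['-+']
    den = counts['++'] + counts['--'] + counts['+-'] + counts['-+']
    return num / den

# Hypothetical coincidence counts for the four subexperiments
# (one per pair of settings); a real experiment records these directly.
runs = {
    ('a', 'b'):   {'++': 850, '--': 840, '+-': 150, '-+': 160},
    ('a', "b'"):  {'++': 140, '--': 150, '+-': 860, '-+': 850},
    ("a'", 'b'):  {'++': 845, '--': 855, '+-': 155, '-+': 145},
    ("a'", "b'"): {'++': 855, '--': 845, '+-': 145, '-+': 155},
}

E = {settings: correlation(c) for settings, c in runs.items()}

# Equation (2): the CHSH test statistic
S = E[('a', 'b')] - E[('a', "b'")] + E[("a'", 'b')] + E[("a'", "b'")]
print(S)  # ~2.80 here; |S| > 2 infringes the CHSH inequality
```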

A strong assumption has had to be made, however, to justify use of expression (2). It has been assumed that the sample of detected pairs is representative of the pairs emitted by the source. That this assumption may not be true comprises the fair sampling loophole.

The derivation of the inequality is given in the CHSH Bell test page.

A typical CH74 (single-channel) experiment

Setup for a "single-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a single channel (e.g. "pile of plates") polariser whose orientation can be set by the experimenter. Emerging signals are detected and coincidences counted by the coincidence monitor CM.

Prior to 1982 all actual Bell tests used "single-channel" polarisers and variations on an inequality designed for this setup. The latter is described in Clauser, Horne, Shimony and Holt's much-cited 1969 article as being the one suitable for practical use. As with the CHSH test, there are four subexperiments in which each polariser takes one of two possible settings, but in addition there are other subexperiments in which one or other polariser or both are absent. Counts are taken as before and used to estimate the test statistic

$$S = \frac{N(a,b) - N(a,b') + N(a',b) + N(a',b') - N(a',\infty) - N(\infty,b)}{N(\infty,\infty)} \qquad (3)$$

where the symbol ∞ indicates absence of a polariser.

If S exceeds 0 then the experiment is declared to have infringed Bell's inequality and hence to have "refuted local realism". In order to derive (3), CHSH in their 1969 paper had to make an extra assumption, the so-called "fair sampling" assumption. This means that the probability of detection of a given photon, once it has passed the polarizer, is independent of the polarizer setting (including the 'absence' setting). If this assumption were violated, then in principle a local hidden-variable (LHV) model could violate the CHSH inequality.
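For comparison with the CHSH case above, a minimal sketch (Python, with illustrative counts chosen to roughly match the ideal quantum prediction) of the single-channel statistic in equation (3), where the setting label 'inf' stands for the ∞ symbol (polariser absent):

```python
# Hypothetical coincidence counts N(x, y) for each combination of settings;
# 'inf' marks a subexperiment in which that polariser was removed.
N = {
    ('a', 'b'): 427, ('a', "b'"): 73,
    ("a'", 'b'): 427, ("a'", "b'"): 427,
    ("a'", 'inf'): 500, ('inf', 'b'): 500,
    ('inf', 'inf'): 1000,
}

# Equation (3): single-channel (CH74-style) test statistic
S = (N[('a', 'b')] - N[('a', "b'")] + N[("a'", 'b')] + N[("a'", "b'")]
     - N[("a'", 'inf')] - N[('inf', 'b')]) / N[('inf', 'inf')]
print(S)  # ~0.21 here; S > 0 infringes the single-channel inequality
```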

In a later 1974 article, Clauser and Horne replaced this assumption by a much weaker, "no enhancement" assumption, deriving a modified inequality, see the page on Clauser and Horne's 1974 Bell test.

Experimental assumptions

In addition to the theoretical assumptions made, there are practical ones. For example, there may be a number of "accidental coincidences" in addition to those of interest. It is assumed that no bias is introduced by subtracting their estimated number before calculating S, but some do not consider this assumption obvious. There may also be synchronisation problems — ambiguity in recognising pairs because in practice they will not be detected at exactly the same time.

Nevertheless, despite all these deficiencies of the actual experiments, one striking fact emerges: the results are, to a very good approximation, what quantum mechanics predicts. If imperfect experiments give us such excellent overlap with quantum predictions, most working quantum physicists would agree with John Bell in expecting that, when a perfect Bell test is done, the Bell inequalities will still be violated. This attitude has led to the emergence of a new sub-field of physics, now known as quantum information theory. One of the main achievements of this new branch of physics is showing that violation of Bell's inequalities makes possible secure information transfer through so-called quantum cryptography (involving entangled states of pairs of particles).

Notable experiments

Over the past thirty or so years, a great number of Bell test experiments have been conducted. The experiments are commonly interpreted to rule out local hidden-variable theories, and recently an experiment has been performed that is not subject to either the locality loophole or the detection loophole (Hensen et al.). An experiment free of the locality loophole is one where for each separate measurement and in each wing of the experiment, a new setting is chosen and the measurement completed before signals could communicate the settings from one wing of the experiment to the other. An experiment free of the detection loophole is one where close to 100% of the successful measurement outcomes in one wing of the experiment are paired with a successful measurement in the other wing. This percentage is called the efficiency of the experiment. Advancements in technology have led to a great variety of methods to test Bell-type inequalities.

Some of the best known and recent experiments include:

Freedman and Clauser (1972)

Stuart J. Freedman and John Clauser carried out the first actual Bell test, using Freedman's inequality, a variant on the CH74 inequality.

Aspect et al. (1982)

Alain Aspect and his team at Orsay, Paris, conducted three Bell tests using calcium cascade sources. The first and last used the CH74 inequality. The second was the first application of the CHSH inequality. The third (and most famous) was arranged such that the choice between the two settings on each side was made during the flight of the photons (as originally suggested by John Bell).

Tittel et al. (1998)

The Geneva 1998 Bell test experiments showed that distance did not destroy the "entanglement". Light was sent in fibre optic cables over distances of several kilometers before it was analysed. As with almost all Bell tests since about 1985, a "parametric down-conversion" (PDC) source was used.

Weihs et al. (1998): experiment under "strict Einstein locality" conditions

In 1998 Gregor Weihs and a team at Innsbruck, led by Anton Zeilinger, conducted an ingenious experiment that closed the "locality" loophole, improving on Aspect's of 1982. The choice of detector was made using a quantum process to ensure that it was random. This test violated the CHSH inequality by over 30 standard deviations, the coincidence curves agreeing with those predicted by quantum theory.

Pan et al. (2000) experiment on the GHZ state

This is the first of new Bell-type experiments on more than two particles; this one uses the so-called GHZ state of three particles.

Rowe et al. (2001): the first to close the detection loophole

The detection loophole was first closed in an experiment with two entangled trapped ions, carried out in the ion storage group of David Wineland at the National Institute of Standards and Technology in Boulder. The experiment had detection efficiencies well over 90%.

Go et al. (Belle collaboration): Observation of Bell inequality violation in B mesons

Using semileptonic B0 decays of Υ(4S) at the Belle experiment, a clear violation of the Bell inequality in particle–antiparticle correlations was observed.

Gröblacher et al. (2007) test of Leggett-type non-local realist theories

A specific class of non-local theories suggested by Anthony Leggett is ruled out. Based on this, the authors conclude that any possible non-local hidden-variable theory consistent with quantum mechanics must be highly counterintuitive.

Salart et al. (2008): separation in a Bell Test

This experiment filled a loophole by providing an 18 km separation between detectors, which is sufficient to allow the completion of the quantum state measurements before any information could have traveled between the two detectors.

Ansmann et al. (2009): overcoming the detection loophole in solid state

This was the first experiment testing Bell inequalities with solid-state qubits (superconducting Josephson phase qubits were used). This experiment surmounted the detection loophole using a pair of superconducting qubits in an entangled state. However, the experiment still suffered from the locality loophole because the qubits were only separated by a few millimeters.

Giustina et al. (2013), Larsson et al. (2014): overcoming the detection loophole for photons

The detection loophole for photons has been closed for the first time by a group led by Anton Zeilinger, using highly efficient detectors. This makes photons the first system for which all of the main loopholes have been closed, albeit in different experiments.

Christensen et al. (2013): overcoming the detection loophole for photons

The Christensen et al. (2013) experiment is similar to that of Giustina et al. Giustina et al. did just four long runs with constant measurement settings (one for each of the four pairs of settings). The experiment was not pulsed, so the formation of "pairs" from the two records of measurement results (Alice and Bob) had to be done after the experiment, which in fact exposes the experiment to the coincidence loophole. This led to a reanalysis of the experimental data in a way which removed the coincidence loophole, and fortunately the new analysis still showed a violation of the appropriate CHSH or CH inequality. The Christensen et al. experiment, on the other hand, was pulsed, and measurement settings were frequently reset in a random way, though only once every 1000 particle pairs, not every time.

Hensen et al., Giustina et al., Shalm et al. (2015): "loophole-free" Bell tests

In 2015 the first three significant loophole-free Bell tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them "loophole-free" in the sense that the remaining conceivable loopholes, such as superdeterminism, require truly exotic hypotheses and might never be closed experimentally.

The first published experiment by Hensen et al. used a photonic link to entangle the electron spins of two nitrogen-vacancy defect centres in diamonds 1.3 kilometers apart and measured a violation of the CHSH inequality (S = 2.42 ± 0.20). Thereby the local-realist hypothesis could be rejected with a p-value of 0.039, i.e. the chance of accidentally measuring the reported result in a local-realist world would be 3.9% at most.

Both simultaneously published experiments by Giustina et al. and Shalm et al. used entangled photons to obtain a Bell inequality violation with high statistical significance (p-value ≪ 10⁻⁶). Notably, the experiment by Shalm et al. also combined three types of (quasi-)random number generators to determine the measurement basis choices. One of these methods, detailed in an ancillary file, is the "'Cultural' pseudorandom source" which involved using bit strings from popular media such as the Back to the Future films, Star Trek: Beyond the Final Frontier, Monty Python and the Holy Grail, and the television shows Saved by the Bell and Dr. Who.

Schmied et al. (2016): Detection of Bell correlations in a many-body system

Using a witness for Bell correlations derived from a multi-partite Bell inequality, physicists at the University of Basel were able to conclude, for the first time, the presence of Bell correlations in a many-body system composed of about 480 atoms in a Bose–Einstein condensate. Even though loopholes were not closed, this experiment shows the possibility of observing Bell correlations in the macroscopic regime.

Handsteiner et al. (2017): "Cosmic Bell Test" - Measurement Settings from Milky Way Stars

Physicists led by David Kaiser of the Massachusetts Institute of Technology and Anton Zeilinger of the Institute for Quantum Optics and Quantum Information and University of Vienna performed an experiment that "produced results consistent with nonlocality" by measuring starlight that had taken 600 years to travel to Earth. The experiment “represents the first experiment to dramatically limit the space-time region in which hidden variables could be relevant.”

Rosenfeld et al. (2017): "Event-Ready" Bell test with entangled atoms and closed detection and locality loopholes

Physicists at the Ludwig Maximilian University of Munich and the Max Planck Institute of Quantum Optics published results from an experiment in which they observed a Bell inequality violation using entangled spin states of two atoms with a separation distance of 398 meters, in which the detection loophole, the locality loophole, and the memory loophole were closed. The violation of S = 2.221 ± 0.033 rejected local realism with a significance value of P = 1.02×10⁻¹⁶ when taking into account 7 months of data and 55,000 events, or an upper bound of P = 2.57×10⁻⁹ from a single run with 10,000 events.

The BIG Bell Test Collaboration (2018): "Challenging local realism with human choices"

An international collaborative scientific effort showed that human free will can be used to close the 'freedom-of-choice loophole'. This was achieved by collecting random decisions from humans instead of random number generators. Around 100,000 participants were recruited in order to provide sufficient input for the experiment to be statistically significant.

Rauch et al. (2018): measurement settings from distant quasars

In 2018, an international team used light from two quasars (one whose light was generated approximately eight billion years ago and the other approximately twelve billion years ago) as the basis for their measurement settings. This experiment pushed the timeframe for when the settings could have been mutually determined to at least 7.8 billion years in the past, a substantial fraction of the superdeterministic limit (that being the creation of the universe 13.8 billion years ago).

The 2019 PBS Nova episode Einstein's Quantum Riddle documents this "cosmic Bell test" measurement, with footage of the scientific team on-site at the high-altitude Teide Observatory located in the Canary Islands.

Loopholes

Though the series of increasingly sophisticated Bell test experiments has convinced the physics community in general that local realism is untenable, local realism can never be excluded entirely. For example, the hypothesis of superdeterminism in which all experiments and outcomes (and everything else) are predetermined cannot be tested (it is unfalsifiable).

Up to 2015, the outcome of all experiments that violate a Bell inequality could still theoretically be explained by exploiting the detection loophole and/or the locality loophole. The locality (or communication) loophole means that since in actual practice the two detections are separated by a time-like interval, the first detection may influence the second by some kind of signal. To avoid this loophole, the experimenter has to ensure that particles travel far apart before being measured, and that the measurement process is rapid. More serious is the detection (or unfair sampling) loophole, because particles are not always detected in both wings of the experiment. It can be imagined that the complete set of particles would behave randomly, but instruments only detect a subsample showing quantum correlations, by letting detection be dependent on a combination of local hidden variables and detector setting.

Experimenters had repeatedly voiced that loophole-free tests could be expected in the near future. In 2015, a loophole-free Bell violation was reported using entangled diamond spins over a distance of 1.3 kilometres (1,300 m) and corroborated by two experiments using entangled photon pairs.

The remaining possible theories that obey local realism can be further restricted by testing different spatial configurations, methods to determine the measurement settings, and recording devices. It has been suggested that using humans to generate the measurement settings and observe the outcomes provides a further test. David Kaiser of MIT told the New York Times in 2015 that a potential weakness of the "loophole-free" experiments is that the systems used to add randomness to the measurement may be predetermined in a method that was not detected in experiments.

Detection loophole

A common problem in optical Bell tests is that only a small fraction of the emitted photons are detected. It is then possible that the correlations of the detected photons are unrepresentative: although they show a violation of a Bell inequality, if all photons were detected the Bell inequality would actually be respected. This was first noted by Pearle in 1970, who devised a local hidden variable model that faked a Bell violation by letting the photon be detected only if the measurement setting was favourable. The assumption that this does not happen, i.e., that the small sample is actually representative of the whole is called the fair sampling assumption.

To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency η, defined as the probability that a photodetector detects a photon that arrives at it. Garg and Mermin showed that when using a maximally entangled state and the CHSH inequality an efficiency of η > 2(√2 − 1) ≈ 0.83 is required for a loophole-free violation. Later Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3, which is the optimal bound for the CHSH inequality. Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for η ≈ 0.62.
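A small numerical check (Python; it relies on the standard result, assumed here rather than derived in the article, that with symmetric detection efficiency η and discarded non-detections the local-hidden-variable bound on the CHSH statistic rises from 2 to 4/η − 2) showing how the Garg–Mermin threshold η > 2(√2 − 1) ≈ 0.83 emerges:

```python
import math

def lhv_bound(eta):
    """Assumed local bound on CHSH when only a fraction eta of photons is
    detected and undetected events are discarded (fair-sampling analysis)."""
    return 4 / eta - 2

s_qm = 2 * math.sqrt(2)            # quantum maximum for CHSH
eta_threshold = 2 * (math.sqrt(2) - 1)

print(eta_threshold)               # ~0.828
print(lhv_bound(eta_threshold))    # ~2.828, equal to the quantum maximum
print(lhv_bound(0.90) < s_qm)      # True: at 90% efficiency a genuine violation is possible
print(lhv_bound(0.75) < s_qm)      # False: at 75% an LHV model could still fake the result
```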

Historically, only experiments with non-optical systems have been able to reach high enough efficiencies to close this loophole, such as trapped ions, superconducting qubits, and nitrogen-vacancy centers. These experiments were not able to close the locality loophole, which is easy to do with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems.

Locality loophole

One of the assumptions of Bell's theorem is that of locality, namely that the choice of setting at one measurement site does not influence the result at the other. The motivation for this assumption is the theory of relativity, which prohibits communication faster than light. For this motivation to apply to an experiment, it needs to have space-like separation between its measurement events. That is, the time that passes between the choice of measurement setting and the production of an outcome must be shorter than the time it takes for a light signal to travel between the measurement sites.
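A minimal sketch (Python, with purely illustrative numbers) of the space-like-separation requirement just described: each measurement, from setting choice to recorded outcome, must finish before light could carry the other station's setting across the separation.

```python
C = 299_792_458.0  # speed of light, m/s

def settings_spacelike_separated(separation_m, measurement_time_s):
    """True if a measurement (setting choice through outcome) finishes before
    light could carry the other station's setting across the separation."""
    return measurement_time_s < separation_m / C

# Illustrative numbers: stations 400 m apart, each measurement taking 1 microsecond
print(400.0 / C)                                   # ~1.33e-6 s light travel time
print(settings_spacelike_separated(400.0, 1e-6))   # True: locality loophole addressed
print(settings_spacelike_separated(400.0, 5e-6))   # False: settings could have been communicated
```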

The first experiment that strived to respect this condition was Alain Aspect's 1982 experiment. In it the settings were changed fast enough, but deterministically. The first experiment to change the settings randomly, with the choices made by a quantum random number generator, was Weihs et al.'s 1998 experiment. Scheidl et al. improved on this further in 2010 by conducting an experiment between locations separated by a distance of 144 km (89 mi).

Coincidence loophole

In many experiments, especially those based on photon polarization, pairs of events in the two wings of the experiment are only identified as belonging to a single pair after the experiment is performed, by judging whether or not their detection times are close enough to one another. This generates a new possibility for a local hidden variables theory to "fake" quantum correlations: delay the detection time of each of the two particles by a larger or smaller amount depending on some relationship between hidden variables carried by the particles and the detector settings encountered at the measurement station.

The coincidence loophole can be ruled out entirely simply by working with a pre-fixed lattice of detection windows which are short enough that most pairs of events occurring in the same window do originate with the same emission and long enough that a true pair is not separated by a window boundary.
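A minimal sketch (Python, with hypothetical timestamps) of such a pre-fixed window lattice: every detection is assigned to the window index ⌊t/T⌋, and only detections sharing a window on both sides are counted as a pair, so the pairing cannot be shifted around by hidden-variable-dependent delays in the way the coincidence loophole requires.

```python
from collections import defaultdict

def pair_by_fixed_windows(times_alice, times_bob, window):
    """Assign each detection time to a fixed window index floor(t / window)
    and pair up detections that share a window index."""
    alice = defaultdict(list)
    bob = defaultdict(list)
    for t in times_alice:
        alice[int(t // window)].append(t)
    for t in times_bob:
        bob[int(t // window)].append(t)
    # keep one pair per shared window (a real analysis would handle multiples explicitly)
    return [(alice[w][0], bob[w][0]) for w in sorted(alice.keys() & bob.keys())]

# Hypothetical detection times (nanoseconds) and a 10 ns window lattice
alice_times = [12.0, 57.3, 104.9, 151.2]
bob_times = [13.1, 58.0, 119.5, 152.0]
print(pair_by_fixed_windows(alice_times, bob_times, window=10.0))
# [(12.0, 13.1), (57.3, 58.0), (151.2, 152.0)] -- the 104.9/119.5 events fall in different windows
```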

Memory loophole

In most experiments, measurements are repeatedly made at the same two locations. A local hidden variable theory could exploit the memory of past measurement settings and outcomes in order to increase the violation of a Bell inequality. Moreover, physical parameters might be varying in time. It has been shown that, provided each new pair of measurements is made with a new random pair of measurement settings, neither memory nor time inhomogeneity has a serious effect on the experiment.

Koinophilia

From Wikipedia, the free encyclopedia
 
This leucistic Indian peacock, Pavo cristatus, is unlikely to find a mate and reproduce in a natural setting due to its unusual coloration. However, its striking colour is appreciated by humans, and may be included in artificial selective breeding to produce more individuals with the leucistic phenotype.

Koinophilia is an evolutionary hypothesis proposing that during sexual selection, animals preferentially seek mates with a minimum of unusual or mutant features, including functionality, appearance and behavior. Koinophilia is intended to explain the clustering of sexual organisms into species and other issues described by Darwin's Dilemma. The term derives from the Greek koinos, "common", "that which is shared", and philia, "fondness".

Natural selection causes beneficial inherited features to become more common at the expense of their disadvantageous counterparts. The koinophilia hypothesis proposes that a sexually-reproducing animal would therefore be expected to avoid individuals with rare or unusual features, and to prefer to mate with individuals displaying a predominance of common or average features. Mutants with strange, odd or peculiar features would be avoided because most mutations that manifest themselves as changes in appearance, functionality or behavior are disadvantageous. Because it is impossible to judge whether a new mutation is beneficial (or might be advantageous in the unforeseeable future) or not, koinophilic animals avoid them all, at the cost of avoiding the very occasional potentially beneficial mutation. Thus, koinophilia, although not infallible in its ability to distinguish fit from unfit mates, is a good strategy when choosing a mate. A koinophilic choice ensures that offspring are likely to inherit a suite of features and attributes that have served all the members of the species well in the past.

Koinophilia differs from the "like prefers like" mating pattern of assortative mating. If like preferred like, leucistic animals (such as white peacocks) would be sexually attracted to one another, and a leucistic subspecies would come into being. Koinophilia predicts that this is unlikely, because leucistic animals are attracted to the average in the same way as all the other members of their species. Since non-leucistic animals are not attracted by leucism, few leucistic individuals find mates, and leucistic lineages will rarely form.

Koinophilia provides simple explanations for the almost universal canalization of sexual creatures into species, the rarity of transitional forms between species (between both extant and fossil species), evolutionary stasis, punctuated equilibria, and the evolution of cooperation. Koinophilia might also contribute to the maintenance of sexual reproduction, preventing its reversion to the much simpler asexual form of reproduction.

The koinophilia hypothesis is supported by the findings of Judith Langlois and her co-workers. They found that the average of two human faces was more attractive than either of the faces from which that average was derived. The more faces (of the same gender and age) that were used in the averaging process the more attractive and appealing the average face became. This work into averageness supports koinophilia as an explanation of what constitutes a beautiful face.

Speciation and punctuated equilibria

Biologists from Darwin onwards have puzzled over how evolution produces species whose adult members look extraordinarily alike, and distinctively different from the members of other species. Lions and leopards are, for instance, both large carnivores that inhabit the same general environment, and hunt much the same prey, but look quite different. The question is why intermediates do not exist.

The overwhelming impression of strict uniformity, involving all the external features of the adult members of a species, is illustrated by this herd of Springbok, Antidorcas marsupialis, in the Kalahari Desert. This homogeneity in appearance is typical, and virtually diagnostic, of almost all species, and a great evolutionary mystery. Darwin emphasized individual variation, which is unquestionably present in any herd such as this, but is extraordinarily difficult to discern, even after long-term familiarity with the herd. Each individual needs to be uniquely and prominently tagged to follow its life history and interactions with the other (tagged) members of the population.

This is the "horizontal" dimension of a two-dimensional problem, referring to the almost complete absence of transitional or intermediate forms between present-day species (e.g. between lions, leopards, and cheetahs).

Speciation poses a "2-dimensional" problem. The discontinuities in appearance between existing species represent the "horizontal dimension" of the problem. The succession of fossil species represent the "vertical dimension".

The "vertical" dimension concerns the fossil record. Fossil species are frequently remarkably stable over extremely long periods of geological time, despite continental drift, major climate changes, and mass extinctions. When a change in form occurs, it tends to be abrupt in geological terms, again producing phenotypic gaps (i.e. an absence of intermediate forms), but now between successive species, which then often co-exist for long periods of time. Thus the fossil record suggests that evolution occurs in bursts, interspersed by long periods of evolutionary stagnation in so-called punctuated equilibria. Why this is so has been an evolutionary enigma ever since Darwin first recognized the problem.

Koinophilia could explain both the horizontal and vertical manifestations of speciation, and why it, as a general rule, involves the entire external appearance of the animals concerned. Since koinophilia affects the entire external appearance, the members of an interbreeding group are driven to look alike in every detail. Each interbreeding group will rapidly develop its own characteristic appearance. An individual from one group which wanders into another group will consequently be recognized as different, and will be discriminated against during the mating season. Reproductive isolation induced by koinophilia might thus be the first crucial step in the development of, ultimately, physiological, anatomical and behavioral barriers to hybridization, and thus, ultimately, full specieshood. Koinophilia will thereafter defend that species' appearance and behavior against invasion by unusual or unfamiliar forms (which might arise by immigration or mutation), and thus be a paradigm of punctuated equilibria (or the "vertical" aspect of the speciation problem).

Evolution under koinophilic conditions

Plants and domestic animals can differ markedly from their wild ancestors
 
Top: wild teosinte; middle: maize-teosinte hybrid; bottom: maize
 
More detailed version of diagram on left by geographical area.

Background

Evolution can be extremely rapid, as shown by the creation of domesticated animals and plants in a very short period of geological time, spanning only a few tens of thousands of years, by humans with little or no knowledge of genetics. Maize, Zea mays, for instance, was created in Mexico in only a few thousand years, starting about 7,000 to 12,000 years ago. This raises the question of why the long-term rate of evolution is far slower than is theoretically possible.

Evolution is imposed on species or groups. It is not planned or striven for in some Lamarckist way. The mutations on which the process depends are random events, and, except for the "silent mutations" which do not affect the functionality or appearance of the carrier, are thus usually disadvantageous, and their chance of proving to be useful in the future is vanishingly small. Therefore, while a species or group might benefit from being able to adapt to a new environment by accumulating a wide range of genetic variation, this is to the detriment of the individuals who have to carry these mutations until a small, unpredictable minority of them ultimately contributes to such an adaptation. Thus, the capability to evolve is a group adaptation, a concept that has been discredited by, among others, George C. Williams, John Maynard Smith and Richard Dawkins, because it is not to the benefit of the individual.

Consequently, sexual individuals would be expected to avoid transmitting mutations to their progeny by avoiding mates with strange or unusual characteristics. Mutations that affect the external appearance and habits of their carriers will therefore seldom be passed on to the next and subsequent generations, and so will seldom be tested by natural selection. Evolutionary change in a large population with a wide choice of mates will therefore come to a virtual standstill. The only mutations that can accumulate in a population are ones that have no noticeable effect on the outward appearance and functionality of their bearers (they are thus termed "silent" or "neutral" mutations).

Evolutionary process

The restraint koinophilia exerts on phenotypic change suggests that evolution can only occur if mutant mates cannot be avoided as a result of a severe scarcity of potential mates. This is most likely to occur in small, restricted communities, such as on small islands, in remote valleys, lakes, river systems or caves, or during periods of glaciation, or following mass extinctions, when sudden bursts of evolution can be expected. Under these circumstances, not only is the choice of mates severely restricted, but population bottlenecks, founder effects, genetic drift and inbreeding cause rapid, random changes in the isolated population's genetic composition. Furthermore, hybridization with a related species trapped in the same isolate might introduce additional genetic changes. If an isolated population such as this survives its genetic upheavals, and subsequently expands into an unoccupied niche, or into a niche in which it has an advantage over its competitors, a new species, or subspecies, will have come into being. In geological terms this will be an abrupt event. A resumption of the avoidance of mutant mates will thereafter result, once again, in evolutionary stagnation.

Thus the fossil record of an evolutionary progression typically consists of species that suddenly appear, and ultimately disappear hundreds of thousands or millions of years later, without any change in external appearance. Graphically, these fossil species are represented by horizontal lines, whose lengths depict how long each of them existed. The horizontality of the lines illustrates the unchanging appearance of each of the fossil species depicted on the graph. During each species' existence new species appear at random intervals, each also lasting many hundreds of thousands of years before disappearing without a change in appearance. The degree of relatedness and the lines of descent of these concurrent species is generally impossible to determine. This is illustrated in the following diagram depicting the evolution of modern humans from the time that the hominins separated from the line that led to the evolution of our closest living primate relatives, the chimpanzees.

Distribution of Hominin species over time. For examples of similar evolutionary timelines see the paleontological list of African dinosaurs, Asian dinosaurs, the Lampriformes and the Amiiformes.

Phenotypic implications

This proposal, that population bottlenecks are possibly the primary generators of the variation that fuels evolution, predicts that evolution will usually occur in intermittent, relatively large-scale morphological steps, interspersed with prolonged periods of evolutionary stagnation, instead of in a continuous series of finely graded changes. However, it makes a further prediction. Darwin emphasized that the shared biologically useless oddities and incongruities that characterize a species are signs of an evolutionary history – something that would not be expected if a bird's wing, for instance, was engineered de novo, as argued by his detractors. The present model predicts that, in addition to vestiges which reflect an organism's evolutionary heritage, all the members of a given species will also bear the stamp of their isolationary past – arbitrary, random features, accumulated through founder effects, genetic drift and the other genetic consequences of sexual reproduction in small, isolated communities. Thus all lions, African and Asian, have a highly characteristic black tuft of fur at the end of their tails, which is difficult to explain in terms of an adaptation, or as a vestige from an early feline, or more ancient ancestor. The unique, often color- and pattern-rich plumage of each of today's wide variety of bird species presents a similar evolutionary enigma. This richly varied array of phenotypes is more easily explained as the products of isolates, subsequently defended by koinophilia, than as assemblies of very diverse evolutionary relics, or as sets of uniquely evolved adaptations.

Evolution of co-operation

Co-operation is any group behavior that benefits the individuals more than if they were to act as independent agents.

Co-operative hunting by wolves allows them to tackle much larger and more nutritious prey than any individual wolf could handle. However, such co-operation could be exploited by selfish individuals which do not expose themselves to the dangers of the hunt, but nevertheless share in the spoils.

However selfish individuals can exploit the co-operativeness of others by not taking part in the group activity, but still enjoying its benefits. For instance, a selfish individual which does not join the hunting pack and share in its risks, but nevertheless shares in the spoils, has a fitness advantage over the other members of the pack. Thus, although a group of co-operative individuals is fitter than an equivalent group of selfish individuals, selfish individuals interspersed among a community of co-operators are always fitter than their hosts. They will raise, on average, more offspring than their hosts, and will ultimately replace them.

If, however, the selfish individuals are ostracized, and rejected as mates, because of their deviant and unusual behavior, then their evolutionary advantage becomes an evolutionary liability. Co-operation then becomes evolutionarily stable.

Effects of diets and environmental conditions

Male Drosophila pseudoobscura

The best-documented creations of new species in the laboratory were performed in the late 1980s. William Rice and G.W. Salt bred fruit flies, Drosophila melanogaster, using a maze with three different choices of habitat, such as light/dark and wet/dry. Each generation was placed into the maze, and the groups of flies that came out of two of the eight exits were set apart to breed with each other in their respective groups. After thirty-five generations, the two groups and their offspring were isolated reproductively because of their strong habitat preferences: they mated only within the areas they preferred, and so did not mate with flies that preferred the other areas. The history of such attempts is described in Rice and Hostert (1993).

Diane Dodd used a laboratory experiment to show how reproductive isolation can evolve in Drosophila pseudoobscura fruit flies after several generations by placing them in different media, starch- or maltose-based media.

Diagram of Dodd's Drosophila speciation experiment.

Dodd's experiment has been easy for many others to replicate, including with other kinds of fruit flies and foods.

A map of Europe indicating the distribution of the carrion and hooded crows on either side of a contact zone (white line) separating the two species.

The carrion crow (Corvus corone) and hooded crow (Corvus cornix) are two closely related species whose geographical distribution across Europe is illustrated in the accompanying diagram. It is believed that this distribution might have resulted from the glaciation cycles during the Pleistocene, which caused the parent population to split into isolates that subsequently re-expanded their ranges when the climate warmed, bringing about secondary contact. Jelmer W. Poelstra and coworkers sequenced almost the entire genomes of both species in populations at varying distances from the contact zone and found that the two species were genetically identical, both in their DNA and in its expression (in the form of RNA), except for the lack of expression of a small portion (<0.28%) of the genome (situated on avian chromosome 18) in the hooded crow, which imparts the lighter plumage coloration on its torso. Thus the two species can viably hybridize, and occasionally do so at the contact zone, but the all-black carrion crows on the one side of the contact zone mate almost exclusively with other all-black carrion crows, while the same occurs among the hooded crows on the other side of the contact zone. It is therefore clear that it is only the outward appearance of the two species that inhibits hybridization. The authors attribute this to assortative mating, the advantage of which is not clear; assortative mating, however, would lead to the rapid appearance of streams of new lineages, and possibly even species, through mutual attraction between mutants. Unnikrishnan and Akhila propose, instead, that koinophilia is a more precise explanation for the resistance to hybridization across the contact zone, despite the absence of physiological, anatomical or genetic barriers to such hybridization.

Reception

William B. Miller, in an extensive 2013 review of koinophilia theory, notes that while it provides precise explanations for the grouping of sexual animals into species, their unchanging persistence in the fossil record over long periods of time, and the phenotypic gaps between species, both fossil and extant, it represents a major departure from the widely accepted view that beneficial mutations spread, ultimately, to the whole, or some portion, of the population (causing it to evolve gene by gene). Darwin recognized that this process has no inherent or inevitable propensity to produce species. Instead, populations would be in a perpetual state of transition both in time and space. They would, at any given moment, consist of individuals with varying numbers of beneficial characteristics that may or may not have reached them from their various points of origin in the population, while neutral features would have a scattering determined by random mechanisms such as genetic drift.

He also notes that koinophilia provides no explanation as to how the physiological, anatomical and genetic causes of reproductive isolation come about. It is only the behavioral reproductive isolation that is addressed by koinophilia. It is furthermore difficult to see how koinophilia might apply to plants, and certain marine creatures that discharge their gametes into the environment to meet up and fuse, it seems, entirely randomly (within conspecific confines). However, when pollen from several compatible donors is used to pollinate stigmata, the donors typically do not sire equal numbers of seeds. Marshall and Diggle state that the existence of some kind of non-random seed paternity is, in fact, not in question in flowering plants. How this occurs remains unknown. Pollen choice is one of the possibilities, taking into account that 50% of the pollen grain's haploid genome is expressed during its tube's growth towards the ovule.

The apparent preference of the females of certain, particularly bird, species for exaggerated male ornaments, such as the peacock's tail, is not easily reconciled with the concept of koinophilia.
