
Tuesday, September 20, 2022

Bell's theorem

From Wikipedia, the free encyclopedia

Bell's theorem is a term encompassing a number of closely-related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories. The "local" in this case refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields can only occur at speeds no greater than the speed of light. "Hidden variables" are hypothetical properties possessed by quantum particles, properties that are undetectable but still affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."

The term is broadly applied to a number of different derivations, the first of which was introduced by Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox". Bell's paper was a response to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen used to argue that quantum physics is an "incomplete" theory. By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled, and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also interacted with the second particle at faster than the speed of light, or that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, because it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons, must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables".

Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles were able to interact instantaneously no matter how widely the two particles are separated.

Multiple variations on Bell's theorem were put forward in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman; more advanced experiments, known collectively as Bell tests, have been performed many times since. Often, these experiments have had the goal of "closing loopholes", that is, ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. To date, Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities, which is to say that the results of these experiments are incompatible with any local hidden variable theory.

The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved.

Theorem

There are many variations on the basic idea, some employing stronger mathematical assumptions than others. Significantly, Bell-type theorems do not refer to any particular theory of local hidden variables, but instead show that quantum physics violates general assumptions behind classical pictures of nature. The original theorem proved by Bell in 1964 is not the most amenable to experiment, and it is convenient to introduce the genre of Bell-type inequalities with a later example.

Alice and Bob stand in widely separated locations. Victor prepares a pair of particles and sends one to Alice and the other to Bob. When Alice receives her particle, she chooses to perform one of two possible measurements (perhaps by flipping a coin to decide which). Denote these measurements by $A_0$ and $A_1$. Both $A_0$ and $A_1$ are binary measurements: the result of $A_0$ is either $+1$ or $-1$, and likewise for $A_1$. When Bob receives his particle, he chooses one of two measurements, $B_0$ and $B_1$, which are also both binary.

Suppose that each measurement reveals a property that the particle already possessed. For instance, if Alice chooses to measure $A_0$ and obtains the result $+1$, then the particle she received carried a value of $+1$ for a property $a_0$. Consider the following combination:

$$a_0 b_0 + a_0 b_1 + a_1 b_0 - a_1 b_1 = (a_0 + a_1) b_0 + (a_0 - a_1) b_1 .$$

Because both $a_0$ and $a_1$ take the values $\pm 1$, then either $a_0 = a_1$ or $a_0 = -a_1$. In the former case, $a_0 - a_1 = 0$, while in the latter case, $a_0 + a_1 = 0$. So, one of the terms on the right-hand side of the above expression will vanish, and the other will equal $\pm 2$. Consequently, if the experiment is repeated over many trials, with Victor preparing new pairs of particles, the average value of the combination across all the trials will be less than or equal to 2. No single trial can measure this quantity, because Alice and Bob can only choose one measurement each, but on the assumption that the underlying properties exist, the average value of the sum is just the sum of the averages for each term. Using angle brackets to denote averages,

$$\langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \leq 2 .$$

This is a Bell inequality, specifically, the CHSH inequality. Its derivation here depends upon two assumptions: first, that the underlying physical properties $a_0$, $a_1$, $b_0$, and $b_1$ exist independently of being observed or measured (sometimes called the assumption of realism); and second, that Alice's choice of action cannot influence Bob's result or vice versa (often called the assumption of locality).
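The classical bound of 2 can also be checked mechanically. The short Python loop below is our own illustrative sketch (not part of the original article): it enumerates every possible assignment of $\pm 1$ values to the hypothesized pre-existing properties and confirms that the combination never exceeds 2.

```python
from itertools import product

# Enumerate every deterministic assignment of +1/-1 to the
# hypothesized pre-existing properties a0, a1, b0, b1.
best = max(a0*b0 + a0*b1 + a1*b0 - a1*b1
           for a0, a1, b0, b1 in product([+1, -1], repeat=4))
print(best)  # 2 -- the CHSH combination never exceeds 2 for pre-existing values
```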

Quantum mechanics can violate the CHSH inequality, as follows. Victor prepares a pair of qubits which he describes by the Bell state

$$|\psi\rangle = \frac{|0\rangle \otimes |1\rangle - |1\rangle \otimes |0\rangle}{\sqrt{2}},$$

where $|0\rangle$ and $|1\rangle$ are the eigenvectors of the Pauli matrix $\sigma_z$. Victor then passes the first qubit to Alice and the second to Bob. Alice and Bob's choices of possible measurements are defined by the Pauli matrices. Alice measures either of the two observables $A_0$ and $A_1$:

$$A_0 = \sigma_z, \qquad A_1 = \sigma_x,$$

and Bob measures either of the two observables

$$B_0 = -\frac{\sigma_z + \sigma_x}{\sqrt{2}}, \qquad B_1 = \frac{\sigma_x - \sigma_z}{\sqrt{2}}.$$

Victor can calculate the quantum expectation values for pairs of these observables using the Born rule:

$$\langle A_0 B_0 \rangle = \langle A_0 B_1 \rangle = \langle A_1 B_0 \rangle = \frac{1}{\sqrt{2}}, \qquad \langle A_1 B_1 \rangle = -\frac{1}{\sqrt{2}}.$$

While only one of these four measurements can be made in a single trial of the experiment, the sum

$$\langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle = 2\sqrt{2}$$

gives the sum of the average values that Victor expects to find across multiple trials. This value exceeds the classical upper bound of 2 that was deduced from the hypothesis of local hidden variables. The value $2\sqrt{2}$ is in fact the largest that quantum physics permits for this combination of expectation values, making it a Tsirelson bound.
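As a concrete numerical check, the Born-rule expectation values above can be computed directly. The NumPy sketch below is our own illustration (the variable names are ours, not from the article); it builds the singlet state and the four observables and evaluates the CHSH combination.

```python
import numpy as np

# Pauli matrices and the singlet Bell state used above.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)  # singlet state

# Alice's and Bob's observables.
A0, A1 = sz, sx
B0 = -(sz + sx) / np.sqrt(2)
B1 = (sx - sz) / np.sqrt(2)

def expval(A, B):
    """Born-rule expectation value <psi| A (x) B |psi>."""
    return np.real(psi.conj() @ np.kron(A, B) @ psi)

S = expval(A0, B0) + expval(A0, B1) + expval(A1, B0) - expval(A1, B1)
print(S, 2 * np.sqrt(2))  # both print 2.828..., exceeding the classical bound of 2
```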

An illustration of the CHSH game: the referee, Victor, sends a bit each to Alice and to Bob, and Alice and Bob each send a bit back to the referee.

The CHSH inequality can also be thought of as a game in which Alice and Bob try to coordinate their actions. Victor prepares two bits, $x$ and $y$, independently and at random. He sends bit $x$ to Alice and bit $y$ to Bob. Alice and Bob win if they return answer bits $a$ and $b$ to Victor, satisfying

$$a \oplus b = x \wedge y .$$

Or, equivalently, Alice and Bob win if the logical AND of $x$ and $y$ is the logical XOR of $a$ and $b$. Alice and Bob can agree upon any strategy they desire before the game, but they cannot communicate once the game begins. In any theory based on local hidden variables, Alice and Bob's probability of winning is no greater than $3/4$, regardless of what strategy they agree upon beforehand. However, if they share an entangled quantum state, their probability of winning can be as large as

$$\cos^2\left(\frac{\pi}{8}\right) = \frac{2 + \sqrt{2}}{4} \approx 0.85 .$$
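The $3/4$ bound can be made concrete: a deterministic local strategy is just a pair of lookup tables, Alice's answer as a function of her bit and Bob's as a function of his, and shared randomness cannot beat the best deterministic strategy. The following illustrative sketch (our own, not from the article) enumerates all such strategies.

```python
from itertools import product

best = 0.0
# A deterministic local strategy is a table: Alice's answer for x in {0,1}
# and Bob's answer for y in {0,1}.
for alice in product([0, 1], repeat=2):
    for bob in product([0, 1], repeat=2):
        wins = sum((alice[x] ^ bob[y]) == (x & y)
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)
print(best)  # 0.75 -- no local strategy wins the CHSH game more than 3/4 of the time
```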

Variations and related results

Bell (1964)

Bell's 1964 paper points out that under restricted conditions, local hidden variable models can reproduce the predictions of quantum mechanics. He then demonstrates that this cannot hold true in general. Bell considers a refinement by David Bohm of the Einstein–Podolsky–Rosen (EPR) thought experiment. In this scenario, a pair of particles are formed together in such a way that they are described by a spin singlet state (which is an example of an entangled state). The particles then move apart in opposite directions. Each particle is measured by a Stern–Gerlach device, a measuring instrument that can be oriented in different directions and that reports one of two possible outcomes, representable by $+1$ and $-1$. The configuration of each measuring instrument is represented by a unit vector, and the quantum-mechanical prediction for the correlation between two detectors with settings $\vec{a}$ and $\vec{b}$ is

$$P(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b} .$$

In particular, if the orientation of the two detectors is the same ($\vec{a} = \vec{b}$), then the outcome of one measurement is certain to be the negative of the outcome of the other, giving $P(\vec{a}, \vec{a}) = -1$. And if the orientations of the two detectors are orthogonal ($\vec{a} \cdot \vec{b} = 0$), then the outcomes are uncorrelated, and $P(\vec{a}, \vec{b}) = 0$. Bell proves by example that these special cases can be explained in terms of hidden variables, then proceeds to show that the full range of possibilities involving intermediate angles cannot.

Bell posited that a local hidden variable model for these correlations would explain them in terms of an integral over the possible values of some hidden parameter $\lambda$:

$$P(\vec{a}, \vec{b}) = \int A(\vec{a}, \lambda)\, B(\vec{b}, \lambda)\, \rho(\lambda)\, d\lambda ,$$

where $\rho(\lambda)$ is a probability density function. The two functions $A(\vec{a}, \lambda)$ and $B(\vec{b}, \lambda)$ provide the responses of the two detectors given the orientation vectors and the hidden variable:

$$A(\vec{a}, \lambda) = \pm 1, \qquad B(\vec{b}, \lambda) = \pm 1 .$$

Crucially, the outcome of detector $A$ does not depend upon $\vec{b}$, and likewise the outcome of $B$ does not depend upon $\vec{a}$, because the two detectors are physically separated. Now we suppose that the experimenter has a choice of settings for the second detector: it can be set either to $\vec{b}$ or to $\vec{c}$. Bell proves that the difference in correlation between these two choices of detector setting must satisfy the inequality

$$\left| P(\vec{a}, \vec{b}) - P(\vec{a}, \vec{c}) \right| \leq 1 + P(\vec{b}, \vec{c}) .$$
However, it is easy to find situations where quantum mechanics violates the Bell inequality. For example, let the vectors $\vec{a}$ and $\vec{b}$ be orthogonal, and let $\vec{c}$ lie in their plane at a 45° angle from both of them. Then

$$P(\vec{a}, \vec{b}) = 0 ,$$

while

$$P(\vec{a}, \vec{c}) = P(\vec{b}, \vec{c}) = -\frac{\sqrt{2}}{2} \approx -0.7071 ,$$

but

$$\left| P(\vec{a}, \vec{b}) - P(\vec{a}, \vec{c}) \right| \approx 0.7071 > 1 + P(\vec{b}, \vec{c}) \approx 0.2929 .$$

Therefore, there is no local hidden variable model that can reproduce the predictions of quantum mechanics for all choices of $\vec{a}$, $\vec{b}$, and $\vec{c}$. Experimental results contradict the classical curves and match the curve predicted by quantum mechanics as long as experimental shortcomings are accounted for.
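A quick numerical check of this violation, using the quantum correlation $P(\vec{a}, \vec{b}) = -\vec{a} \cdot \vec{b}$ and the 45° geometry described above, can be written as follows (an illustrative sketch of our own):

```python
import numpy as np

def P(a, b):
    """Quantum-mechanical correlation for the spin singlet: P(a, b) = -a . b."""
    return -np.dot(a, b)

a = np.array([1.0, 0.0])      # detector setting a
b = np.array([0.0, 1.0])      # detector setting b, orthogonal to a
c = (a + b) / np.sqrt(2)      # c at 45 degrees from both

lhs = abs(P(a, b) - P(a, c))
rhs = 1 + P(b, c)
print(lhs, rhs)  # ~0.707 vs ~0.293: the inequality |P(a,b)-P(a,c)| <= 1+P(b,c) is violated
```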

Bell's 1964 theorem requires the possibility of perfect anti-correlations: the ability to make a probability-1 prediction about the result from the second detector, knowing the result from the first. This is related to the "EPR criterion of reality", a concept introduced in the 1935 paper by Einstein, Podolsky, and Rosen. This paper posits, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."

GHZ–Mermin

Greenberger, Horne, and Zeilinger presented a four-particle thought experiment, which Mermin then simplified to use only three particles. In this thought experiment, Victor generates a set of three spin-1/2 particles described by the quantum state

$$|GHZ\rangle = \frac{|000\rangle - |111\rangle}{\sqrt{2}} ,$$

where, as above, $|0\rangle$ and $|1\rangle$ are the eigenvectors of the Pauli matrix $\sigma_z$. Victor then sends a particle each to Alice, Bob, and Charlie, who wait at widely separated locations. Alice measures either $\sigma_x$ or $\sigma_y$ on her particle, and so do Bob and Charlie. The result of each measurement is either $+1$ or $-1$. Applying the Born rule to the three-qubit state $|GHZ\rangle$, Victor predicts that whenever the three measurements include one $\sigma_x$ and two $\sigma_y$'s, the product of the outcomes will always be $+1$. This follows because $|GHZ\rangle$ is an eigenvector of $\sigma_x \otimes \sigma_y \otimes \sigma_y$ with eigenvalue $+1$, and likewise for $\sigma_y \otimes \sigma_x \otimes \sigma_y$ and $\sigma_y \otimes \sigma_y \otimes \sigma_x$. Therefore, knowing Alice's result for a $\sigma_y$ measurement and Bob's result for a $\sigma_y$ measurement, Victor can predict with probability 1 what result Charlie will return for a $\sigma_x$ measurement. According to the EPR criterion of reality, there would be an "element of reality" corresponding to the outcome of a $\sigma_x$ measurement upon Charlie's qubit. Indeed, this same logic applies to both the $\sigma_x$ and $\sigma_y$ measurements and to all three qubits. Per the EPR criterion of reality, then, each particle contains an "instruction set" that determines the outcome of a $\sigma_x$ or $\sigma_y$ measurement upon it. The set of all three particles would then be described by the instruction set

$$(a_x, a_y, b_x, b_y, c_x, c_y),$$

with each entry being either $+1$ or $-1$, and each $\sigma_x$ or $\sigma_y$ measurement simply returning the appropriate value.

If Alice, Bob, and Charlie all perform the $\sigma_x$ measurement, then the product of their results would be $a_x b_x c_x$. This value can be deduced from

$$(a_x b_y c_y)(a_y b_x c_y)(a_y b_y c_x) = a_x b_x c_x \, (a_y)^2 (b_y)^2 (c_y)^2 = a_x b_x c_x ,$$

because the square of either $+1$ or $-1$ is $+1$. Each factor in parentheses equals $+1$, so

$$a_x b_x c_x = +1 ,$$

and the product of Alice, Bob, and Charlie's results will be $+1$ with probability unity. But this is inconsistent with quantum physics: Victor can predict using the state $|GHZ\rangle$ that the $\sigma_x \otimes \sigma_x \otimes \sigma_x$ measurement will instead yield $-1$ with probability unity.
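These eigenvalue claims can be verified directly. The NumPy sketch below is our own illustration (the helper names are ours): it builds the GHZ state and checks that it is a $+1$ eigenvector of the three mixed products and a $-1$ eigenvector of $\sigma_x \otimes \sigma_x \otimes \sigma_x$.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# |GHZ> = (|000> - |111>)/sqrt(2)
ghz = (kron3(ket0, ket0, ket0) - kron3(ket1, ket1, ket1)) / np.sqrt(2)

for ops in [(sx, sy, sy), (sy, sx, sy), (sy, sy, sx), (sx, sx, sx)]:
    M = kron3(*ops)
    eigval = np.real(ghz.conj() @ M @ ghz)  # expectation value; exact eigenvalue here
    print(round(eigval))  # prints 1, 1, 1, -1
```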

This thought experiment can also be recast as a traditional Bell inequality or, equivalently, as a nonlocal game in the same spirit as the CHSH game. In it, Alice, Bob, and Charlie receive bits $x$, $y$, $z$ from Victor, promised to always have an even number of ones, that is, $x \oplus y \oplus z = 0$, and send him back bits $a$, $b$, $c$. They win the game if $a$, $b$, $c$ have an odd number of ones for all inputs except $x = y = z = 0$, when they need to have an even number of ones. That is, they win the game iff $a \oplus b \oplus c = x \lor y \lor z$. With local hidden variables the highest probability of victory they can have is 3/4, whereas using the quantum strategy above they win it with certainty. This is an example of quantum pseudo-telepathy.
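As with the CHSH game, the 3/4 classical ceiling can be checked by brute force over deterministic local strategies, where each player's answer depends only on their own bit. The snippet below is our own illustrative check.

```python
from itertools import product

inputs = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # promise: even number of ones

best = 0.0
# Each player's deterministic strategy maps their input bit to an answer bit.
for fa, fb, fc in product(product([0, 1], repeat=2), repeat=3):
    wins = sum((fa[x] ^ fb[y] ^ fc[z]) == (x | y | z) for x, y, z in inputs)
    best = max(best, wins / len(inputs))
print(best)  # 0.75 -- no local strategy wins the GHZ game on all four allowed inputs
```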

Kochen–Specker theorem

In quantum theory, orthonormal bases for a Hilbert space represent measurements that can be performed upon a system having that Hilbert space. Each vector in a basis represents a possible outcome of that measurement. Suppose that a hidden variable exists, so that knowing the value of would imply certainty about the outcome of any measurement. Given a value of , each measurement outcome — that is, each vector in the Hilbert space — is either impossible or guaranteed. A Kochen–Specker configuration is a finite set of vectors made of multiple interlocking bases, with the property that a vector in it will always be impossible when considered as belonging to one basis and guaranteed when taken as belonging to another. In other words, a Kochen–Specker configuration is an "uncolorable set" that demonstrates the inconsistency of assuming that a hidden variable can control the measurement outcomes.
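A compact, closely related illustration is the Peres–Mermin "magic square" of two-qubit observables (an observable-based construction, not the vector configuration described above, but it exhibits the same kind of inconsistency): every row of the square multiplies to $+I$, and every column multiplies to $+I$ except the last, which multiplies to $-I$, so no fixed $\pm 1$ values can be assigned to the nine observables consistently. A small NumPy check of our own:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The Peres-Mermin square: observables commute along each row and each column.
square = [
    [np.kron(I2, sz), np.kron(sz, I2), np.kron(sz, sz)],
    [np.kron(sx, I2), np.kron(I2, sx), np.kron(sx, sx)],
    [np.kron(sx, sz), np.kron(sz, sx), np.kron(sy, sy)],
]

for i in range(3):
    row_prod = square[i][0] @ square[i][1] @ square[i][2]
    col_prod = square[0][i] @ square[1][i] @ square[2][i]
    print(np.allclose(row_prod, np.eye(4)), np.allclose(col_prod, np.eye(4)))
# Rows all give +I (True); columns give +I, +I, and then -I (the last prints False),
# so no global assignment of fixed +1/-1 values to the nine observables is consistent.
```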

Free will theorem

The Kochen–Specker type of argument, using configurations of interlocking bases, can be combined with the idea of measuring entangled pairs that underlies Bell-type inequalities. This was noted beginning in the 1970s by Kochen, Heywood and Redhead, Stairs, and Brown and Svetlichny. As EPR pointed out, obtaining a measurement outcome on one half of an entangled pair implies certainty about the outcome of a corresponding measurement on the other half. The "EPR criterion of reality" posits that because the second half of the pair was not disturbed, that certainty must be due to a physical property belonging to it. In other words, by this criterion, a hidden variable must exist within the second, as-yet unmeasured half of the pair. No contradiction arises if only one measurement on the first half is considered. However, if the observer has a choice of multiple possible measurements, and the vectors defining those measurements form a Kochen–Specker configuration, then some outcome on the second half will be simultaneously impossible and guaranteed.

This type of argument gained attention when an instance of it was advanced by John Conway and Simon Kochen under the name of the free will theorem. The Conway–Kochen theorem uses a pair of entangled qutrits and a Kochen–Specker configuration discovered by Asher Peres.

Quasiclassical entanglement

As Bell pointed out, some predictions of quantum mechanics can be replicated in local hidden variable models, including special cases of correlations produced from entanglement. This topic has been studied systematically in the years since Bell's theorem. In 1989, Reinhard Werner introduced what are now called Werner states, joint quantum states for a pair of systems that yield EPR-type correlations but also admit a hidden-variable model. Werner states are bipartite quantum states that are invariant under unitaries of symmetric tensor-product form:

$$\rho = (U \otimes U)\, \rho\, (U \otimes U)^{\dagger} \quad \text{for all unitaries } U .$$
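For a pair of qubits, a state in this family can be written as a mixture of the singlet projector with the maximally mixed state. The sketch below is our own illustration (the mixing parameter p = 0.6 is an arbitrary choice); it constructs such a state and checks its $U \otimes U$ invariance numerically for a randomly drawn unitary.

```python
import numpy as np

# Two-qubit Werner-type state: a mixture of the singlet projector and the maximally mixed state.
ket01 = np.kron([1, 0], [0, 1]).astype(complex)
ket10 = np.kron([0, 1], [1, 0]).astype(complex)
singlet = (ket01 - ket10) / np.sqrt(2)
proj = np.outer(singlet, singlet.conj())

p = 0.6                                  # mixing parameter (our choice, for illustration)
rho = p * proj + (1 - p) * np.eye(4) / 4

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(M)                   # QR of a random complex matrix gives a unitary
UU = np.kron(U, U)
print(np.allclose(UU @ rho @ UU.conj().T, rho))  # True: rho is invariant under U (x) U
```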

More recently, Robert Spekkens introduced a toy model that starts with the premise of local, discretized degrees of freedom and then imposes a "knowledge balance principle" that restricts how much an observer can know about those degrees of freedom, thereby making them into hidden variables. The allowed states of knowledge ("epistemic states") about the underlying variables ("ontic states") mimic some features of quantum states. Correlations in the toy model can emulate some aspects of entanglement, like monogamy, but by construction, the toy model can never violate a Bell inequality.

History

Background

The question of whether quantum mechanics can be "completed" by hidden variables dates to the early years of quantum theory. In his 1932 textbook on quantum mechanics, the Hungarian-born polymath John von Neumann presented what he claimed to be a proof that there could be no "hidden parameters". The validity and definitiveness of von Neumann's proof were questioned by Hans Reichenbach, in more detail by Grete Hermann, and possibly in conversation though not in print by Albert Einstein. (Simon Kochen and Ernst Specker rejected von Neumann's key assumption as early as 1961, but did not publish a criticism of it until 1967.)

Einstein argued persistently that quantum mechanics could not be a complete theory. His preferred argument relied on a principle of locality:

Consider a mechanical system constituted of two partial systems A and B which have interaction with each other only during limited time. Let the ψ function before their interaction be given. Then the Schrödinger equation will furnish the ψ function after their interaction has taken place. Let us now determine the physical condition of the partial system A as completely as possible by measurements. Then the quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made, and from the ψ function of the total system. This determination, however, gives a result which depends upon which of the determining magnitudes specifying the condition of A has been measured (for instance coordinates or momenta). Since there can be only one physical condition of B after the interaction and which can reasonably not be considered as dependent on the particular measurement we perform on the system A separated from B it may be concluded that the ψ function is not unambiguously coordinated with the physical condition. This coordination of several ψ functions with the same physical condition of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical condition of a unit system.

The EPR thought experiment is similar, also considering two separated systems A and B described by a joint wave function. However, the EPR paper adds the idea later known as the EPR criterion of reality, according to which the ability to predict with probability 1 the outcome of a measurement upon B implies the existence of an "element of reality" within B.

In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The year before, Chien-Shiung Wu and Irving Shaknov had successfully measured polarizations of photons produced in entangled pairs, thereby making the Bohm version of the EPR thought experiment practically feasible.

By the late 1940s, the mathematician George Mackey had grown interested in the foundations of quantum physics, and in 1957 he drew up a list of postulates that he took to be a precise definition of quantum mechanics. Mackey conjectured that one of the postulates was redundant, and shortly thereafter, Andrew M. Gleason proved that it was indeed deducible from the other postulates. Gleason's theorem provided an argument that a broad class of hidden-variable theories are incompatible with quantum mechanics. More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity. The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined.

Tsung-Dao Lee came close to deriving Bell's theorem in 1960. He considered events where two kaons were produced traveling in opposite directions, and came to the conclusion that hidden variables could not explain the correlations that could be obtained in such situations. However, complications arose due to the fact that kaons decay, and he did not go so far as to deduce a Bell-type inequality.

Bell's publications

Bell chose to publish his theorem in a comparatively obscure journal because it did not require page charges, and at the time it in fact paid the authors who published there. Because the journal did not provide free reprints of articles for the authors to distribute, however, Bell had to spend the money he received to buy copies that he could send to other physicists. While the articles printed in the journal themselves listed the publication's name simply as Physics, the covers carried the trilingual version Physics Physique Физика to reflect that it would print articles in English, French and Russian.

Prior to proving his 1964 result, Bell also proved a result equivalent to the Kochen–Specker theorem (hence why the latter is sometimes also known as the Bell–Kochen–Specker or Bell–KS theorem). However, publication of this theorem was inadvertently delayed until 1966. In that paper, Bell argued that because an explanation of quantum phenomena in terms of hidden variables would require nonlocality, the EPR paradox "is resolved in the way which Einstein would have liked least".

Experiments

Scheme of a "two-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a two-channel polariser whose orientation (a or b) can be set by the experimenter. Emerging signals from each channel are detected and coincidences of four types (++, −−, +− and −+) counted by the coincidence monitor.
 

In 1967, the unusual title Physics Physique Физика caught the attention of John Clauser, who then discovered Bell's paper and began to consider how to perform a Bell test in the laboratory. Clauser and Stuart Freedman would go on to perform a Bell test in 1972. This was only a limited test, because the choice of detector settings was made before the photons had left the source. In 1982, Alain Aspect and collaborators performed the first Bell test to remove this limitation. This began a trend of progressively more stringent Bell tests. The GHZ thought experiment was implemented in practice, using entangled triplets of photons, in 2000. By 2002, testing the CHSH inequality was feasible in undergraduate laboratory courses.

In Bell tests, there may be problems of experimental design or set-up that affect the validity of the experimental findings. These problems are often referred to as "loopholes". The purpose of the experiment is to test whether nature can be described by local hidden-variable theory, which would contradict the predictions of quantum mechanics.

The most prevalent loopholes in real experiments are the detection and locality loopholes. The detection loophole is opened when a small fraction of the particles (usually photons) are detected in the experiment, making it possible to explain the data with local hidden variables by assuming that the detected particles are an unrepresentative sample. The locality loophole is opened when the detections are not done with a spacelike separation, making it possible for the result of one measurement to influence the other without contradicting relativity. In some experiments there may be additional defects that make local-hidden-variable explanations of Bell test violations possible.

Although both the locality and detection loopholes had been closed in different experiments, a long-standing challenge was to close both simultaneously in the same experiment. This was finally achieved in three experiments in 2015. Regarding these results, Alain Aspect writes that "... no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce" local hidden variables, and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics."

Interpretations of Bell's theorem

The Copenhagen interpretation

The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg. It is one of the oldest of numerous proposed interpretations of quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught. There is no definitive historical statement of what is the Copenhagen interpretation. In particular, there were fundamental disagreements between the views of Bohr and Heisenberg. Some basic principles generally accepted as part of the Copenhagen collection include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the complementarity principle: certain properties cannot be jointly defined for the same system at the same time. In order to talk about a specific property of a system, that system must be considered within the context of a specific laboratory arrangement. Observable quantities corresponding to mutually exclusive laboratory arrangements cannot be predicted together, but considering multiple such mutually exclusive experiments is necessary to characterize a system. Bohr himself used complementarity to argue that the EPR "paradox" was fallacious. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, he argued, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."

Copenhagen-type interpretations generally take the violation of Bell inequalities as grounds to reject the assumption often called counterfactual definiteness or "realism", which is not necessarily the same as abandoning realism in a broader philosophical sense. For example, Roland Omnès argues for the rejection of hidden variables and concludes that "quantum mechanics is probably as realistic as any theory of its scope and maturity ever will be." This is also the route taken by interpretations that descend from the Copenhagen tradition, such as consistent histories (often advertised as "Copenhagen done right"), as well as QBism.

Many-worlds interpretation of quantum mechanics

The Many-Worlds interpretation, also known as the Everett interpretation, is local and deterministic, as it consists of the unitary part of quantum mechanics without collapse. It can generate correlations that violate a Bell inequality because it violates an implicit assumption by Bell that measurements have a single outcome. In fact, Bell's theorem can be proven in the Many-Worlds framework from the assumption that a measurement has a single outcome. Therefore a violation of a Bell inequality can be interpreted as a demonstration that measurements have multiple outcomes.

The explanation it provides for the Bell correlations is that when Alice and Bob make their measurements, they split into local branches. From the point of view of each copy of Alice, there are multiple copies of Bob experiencing different results, so Bob cannot have a definite result, and the same is true from the point of view of each copy of Bob. They will obtain a mutually well-defined result only when their future light cones overlap. At this point we can say that the Bell correlation starts existing, but it was produced by a purely local mechanism. Therefore the violation of a Bell inequality cannot be interpreted as a proof of non-locality.

Non-local hidden variables

Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories, though not Bohmian mechanics itself.

The transactional interpretation, which postulates waves traveling both backwards and forwards in time, is likewise non-local.

Superdeterminism

A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that such freedom is necessary for doing science in the first place. A (hypothetical) theory where the choice of measurement is determined by the system being measured is known as superdeterministic.

A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that superdeterminism cannot be dismissed.

History of virology

From Wikipedia, the free encyclopedia
 
Electron micrograph of the rod-shaped particles of tobacco mosaic virus that are too small to be seen using a light microscope

The history of virology – the scientific study of viruses and the infections they cause – began in the closing years of the 19th century. Although Louis Pasteur and Edward Jenner developed the first vaccines to protect against viral infections, they did not know that viruses existed. The first evidence of the existence of viruses came from experiments with filters that had pores small enough to retain bacteria. In 1892, Dmitri Ivanovsky used one of these filters to show that sap from a diseased tobacco plant remained infectious to healthy tobacco plants despite having been filtered. Martinus Beijerinck called the filtered, infectious substance a "virus" and this discovery is considered to be the beginning of virology.

The subsequent discovery and partial characterization of bacteriophages by Frederick Twort and Félix d'Herelle further catalyzed the field, and by the early 20th century many viruses had been discovered. In 1926, Thomas Milton Rivers defined viruses as obligate parasites. Viruses were demonstrated to be particles, rather than a fluid, by Wendell Meredith Stanley, and the invention of the electron microscope in 1931 allowed their complex structures to be visualised.

Pioneers

Adolf Mayer in 1875
 
 
An old, bespectacled man wearing a suit and sitting at a bench by a large window. The bench is covered with small bottles and test tubes. On the wall behind him is a large old-fashioned clock below which are four small enclosed shelves on which sit many neatly labelled bottles.
Martinus Beijerinck in his laboratory in 1921.

Despite his other successes, Louis Pasteur (1822–1895) was unable to find a causative agent for rabies and speculated about a pathogen too small to be detected using a microscope. In 1884, the French microbiologist Charles Chamberland (1851–1931) invented a filter – known today as the Chamberland filter – that had pores smaller than bacteria. Thus, he could pass a solution containing bacteria through the filter and completely remove them from the solution.

In 1876, Adolf Mayer, who directed the Agricultural Experimental Station in Wageningen, was the first to show that what he called "Tobacco Mosaic Disease" was infectious. He thought that it was caused by either a toxin or a very small bacterium. Later, in 1892, the Russian biologist Dmitry Ivanovsky (1864–1920) used a Chamberland filter to study what is now known as the tobacco mosaic virus. His experiments showed that crushed leaf extracts from infected tobacco plants remain infectious after filtration. Ivanovsky suggested the infection might be caused by a toxin produced by bacteria, but did not pursue the idea.

In 1898, the Dutch microbiologist Martinus Beijerinck (1851–1931), a microbiology teacher at the Agricultural School in Wageningen, repeated experiments by Adolf Mayer and became convinced that the filtrate contained a new form of infectious agent. He observed that the agent multiplied only in cells that were dividing and he called it a contagium vivum fluidum (soluble living germ) and re-introduced the word virus. Beijerinck maintained that viruses were liquid in nature, a theory later discredited by the American biochemist and virologist Wendell Meredith Stanley (1904–1971), who proved that they were, in fact, particles. In the same year, 1898, Friedrich Loeffler (1852–1915) and Paul Frosch (1860–1928) passed the first animal virus through a similar filter and discovered the cause of foot-and-mouth disease.

The first human virus to be identified was the yellow fever virus. In 1881, Carlos Finlay (1833–1915), a Cuban physician, first conducted and published research that indicated that mosquitoes were carrying the cause of yellow fever, a theory proved in 1900 by a commission headed by Walter Reed (1851–1902). During 1901 and 1902, William Crawford Gorgas (1854–1920) organised the destruction of the mosquitoes' breeding habitats in Cuba, which dramatically reduced the prevalence of the disease. Gorgas later organised the elimination of the mosquitoes from Panama, which allowed the Panama Canal to be opened in 1914. The virus was finally isolated in 1932 by Max Theiler (1899–1972), who went on to develop a successful vaccine.

By 1928 enough was known about viruses to enable the publication of Filterable Viruses, a collection of essays covering all known viruses edited by Thomas Milton Rivers (1888–1962). Rivers, a survivor of typhoid fever contracted at the age of twelve, went on to have a distinguished career in virology. In 1926, he was invited to speak at a meeting organised by the Society of American Bacteriologists where he said for the first time, "Viruses appear to be obligate parasites in the sense that their reproduction is dependent on living cells."

The notion that viruses were particles was not considered unnatural and fitted in nicely with the germ theory. It is assumed that Dr. J. Buist of Edinburgh was the first person to see virus particles in 1886, when he reported seeing "micrococci" in vaccine lymph, though he had probably observed clumps of vaccinia. In the years that followed, as optical microscopes were improved "inclusion bodies" were seen in many virus-infected cells, but these aggregates of virus particles were still too small to reveal any detailed structure. It was not until the invention of the electron microscope in 1931 by the German engineers Ernst Ruska (1906–1988) and Max Knoll (1887–1969), that virus particles, especially bacteriophages, were shown to have complex structures. The sizes of viruses determined using this new microscope fitted in well with those estimated by filtration experiments. Viruses were expected to be small, but the range of sizes came as a surprise. Some were only a little smaller than the smallest known bacteria, and the smaller viruses were of similar sizes to complex organic molecules.

In 1935, Wendell Stanley examined the tobacco mosaic virus and found it was mostly made of protein. In 1939, Stanley and Max Lauffer (1914) separated the virus into protein and nucleic acid, which was shown by Stanley's postdoctoral fellow Hubert S. Loring to be specifically RNA. The discovery of RNA in the particles was important because in 1928, Fred Griffith (c. 1879–1941) provided the first evidence that its "cousin", DNA, formed genes.

In Pasteur's day, and for many years after his death, the word "virus" was used to describe any cause of infectious disease. Many bacteriologists soon discovered the cause of numerous infections. However, some infections remained, many of them horrendous, for which no bacterial cause could be found. These agents were invisible and could only be grown in living animals. The discovery of viruses paved the way to understanding these mysterious infections. And, although Koch's postulates could not be fulfilled for many of these infections, this did not stop the pioneer virologists from looking for viruses in infections for which no other cause could be found.

Bacteriophages

Bacteriophage

Discovery

Bacteriophages are the viruses that infect and replicate in bacteria. They were discovered in the early 20th century, by the English bacteriologist Frederick Twort (1877–1950). But before this time, in 1896, the bacteriologist Ernest Hanbury Hankin (1865–1939) reported that something in the waters of the River Ganges could kill Vibrio cholerae – the cause of cholera. The agent in the water could be passed through filters that remove bacteria but was destroyed by boiling. Twort discovered the action of bacteriophages on staphylococci bacteria. He noticed that when grown on nutrient agar some colonies of the bacteria became watery. He collected some of these watery colonies and passed them through a Chamberland filter to remove the bacteria and discovered that when the filtrate was added to fresh cultures of bacteria, they in turn became watery. He proposed that the agent might be "an amoeba, an ultramicroscopic virus, a living protoplasm, or an enzyme with the power of growth".

Félix d'Herelle (1873–1949) was a mainly self-taught French-Canadian microbiologist. In 1917 he discovered that "an invisible antagonist", when added to bacteria on agar, would produce areas of dead bacteria. The antagonist, now known to be a bacteriophage, could pass through a Chamberland filter. He accurately diluted a suspension of these viruses and discovered that the highest dilutions (lowest virus concentrations), rather than killing all the bacteria, formed discrete areas of dead organisms. Counting these areas and multiplying by the dilution factor allowed him to calculate the number of viruses in the original suspension. He realised that he had discovered a new form of virus and later coined the term "bacteriophage". Between 1918 and 1921 d'Herelle discovered different types of bacteriophages that could infect several other species of bacteria including Vibrio cholerae. Bacteriophages were heralded as a potential treatment for diseases such as typhoid and cholera, but their promise was forgotten with the development of penicillin. Since the early 1970s, bacteria have continued to develop resistance to antibiotics such as penicillin, and this has led to a renewed interest in the use of bacteriophages to treat serious infections.
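As a simple illustration of the counting method described above (the numbers below are invented for the example, not historical data):

```python
# Plaque counting: viruses in the original suspension are estimated by
# counting discrete areas of dead bacteria and scaling by the dilution.
plaques_counted = 42          # hypothetical plaques seen on one plate
dilution_factor = 10**6       # the suspension was diluted a million-fold
viruses_in_original = plaques_counted * dilution_factor
print(viruses_in_original)    # 42000000 -- estimated infectious particles in the undiluted sample
```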

1920–1940: Early research

D'Herelle travelled widely to promote the use of bacteriophages in the treatment of bacterial infections. In 1928, he became professor of biology at Yale and founded several research institutes. He was convinced that bacteriophages were viruses despite opposition from established bacteriologists such as the Nobel Prize winner Jules Bordet (1870–1961). Bordet argued that bacteriophages were not viruses but just enzymes released from "lysogenic" bacteria. He said "the invisible world of d'Herelle does not exist". But in the 1930s, the proof that bacteriophages were viruses was provided by Christopher Andrewes (1896–1988) and others. They showed that these viruses differed in size and in their chemical and serological properties. In 1940, the first electron micrograph of a bacteriophage was published and this silenced sceptics who had argued that bacteriophages were relatively simple enzymes and not viruses. Numerous other types of bacteriophages were quickly discovered and were shown to infect bacteria wherever they are found. Early research was interrupted by World War II. D'Herelle, despite his Canadian citizenship, was interned by the Vichy Government until the end of the war.

Modern era

Knowledge of bacteriophages increased in the 1940s following the formation of the Phage Group by scientists throughout the US. Among the members was Max Delbrück (1906–1981), who founded a course on bacteriophages at Cold Spring Harbor Laboratory. Other key members of the Phage Group included Salvador Luria (1912–1991) and Alfred Hershey (1908–1997). During the 1950s, Hershey and Chase made important discoveries on the replication of DNA during their studies on a bacteriophage called T2. Delbrück, Hershey, and Luria were jointly awarded the 1969 Nobel Prize in Physiology or Medicine "for their discoveries concerning the replication mechanism and the genetic structure of viruses". Since then, the study of bacteriophages has provided insights into the switching on and off of genes, a useful mechanism for introducing foreign genes into bacteria, and many other fundamental mechanisms of molecular biology.

Plant viruses

In 1882, Adolf Mayer (1843–1942) described a condition of tobacco plants, which he called "mosaic disease" ("mozaïkziekte"). The diseased plants had variegated leaves that were mottled. He excluded the possibility of a fungal infection and could not detect any bacterium and speculated that a "soluble, enzyme-like infectious principle was involved". He did not pursue his idea any further, and it was the filtration experiments of Ivanovsky and Beijerinck that suggested the cause was a previously unrecognised infectious agent. After tobacco mosaic was recognized as a virus disease, virus infections of many other plants were discovered.

The importance of tobacco mosaic virus in the history of viruses cannot be overstated. It was the first virus to be discovered, and the first to be crystallised and its structure shown in detail. The first X-ray diffraction pictures of the crystallised virus were obtained by Bernal and Fankuchen in 1941. On the basis of her pictures, Rosalind Franklin discovered the full structure of the virus in 1955. In the same year, Heinz Fraenkel-Conrat and Robley Williams showed that purified tobacco mosaic virus RNA and its coat protein can assemble by themselves to form functional viruses, suggesting that this simple mechanism was probably the means through which viruses were created within their host cells.

By 1935, many plant diseases were thought to be caused by viruses. In 1922, John Kunkel Small (1869–1938) discovered that insects could act as vectors and transmit virus to plants. In the following decade many diseases of plants were shown to be caused by viruses that were carried by insects and in 1939, Francis Holmes, a pioneer in plant virology, described 129 viruses that caused disease of plants. Modern, intensive agriculture provides a rich environment for many plant viruses. In 1948, in Kansas, US, 7% of the wheat crop was destroyed by wheat streak mosaic virus. The virus was spread by mites called Aceria tulipae.

In 1970, the Russian plant virologist Joseph Atabekov discovered that many plant viruses only infect a single species of host plant. The International Committee on Taxonomy of Viruses now recognises over 900 plant viruses.

20th century

By the end of the 19th century, viruses were defined in terms of their infectivity, their ability to be filtered, and their requirement for living hosts. Up until this time, viruses had only been grown in plants and animals, but in 1906, Ross Granville Harrison (1870–1959) invented a method for growing tissue in lymph, and, in 1913, E Steinhardt, C Israeli, and RA Lambert used this method to grow vaccinia virus in fragments of guinea pig corneal tissue. In 1928, HB and MC Maitland grew vaccinia virus in suspensions of minced hens' kidneys. Their method was not widely adopted until the 1950s, when poliovirus was grown on a large scale for vaccine production. In 1941–42, George Hirst (1909–94) developed assays based on haemagglutination to quantify a wide range of viruses as well as virus-specific antibodies in serum.

Influenza

A woman working during the 1918–1919 influenza epidemic.
 

Although the influenza virus that caused the 1918–1919 influenza pandemic was not discovered until the 1930s, descriptions of the disease and subsequent research have proved that it was to blame. The pandemic killed 40–50 million people in less than a year, but the proof that it was caused by a virus was not obtained until 1933. Haemophilus influenzae is an opportunistic bacterium which commonly follows influenza infections; this led the eminent German bacteriologist Richard Pfeiffer (1858–1945) to incorrectly conclude that this bacterium was the cause of influenza. A major breakthrough came in 1931, when the American pathologist Ernest William Goodpasture grew influenza and several other viruses in fertilised chickens' eggs. Hirst identified an enzymic activity associated with the virus particle, later characterised as the neuraminidase, the first demonstration that viruses could contain enzymes. Frank Macfarlane Burnet showed in the early 1950s that the virus recombines at high frequencies, and Hirst later deduced that it has a segmented genome.

Poliomyelitis

In 1949, John F. Enders (1897–1985), Thomas Weller (1915–2008), and Frederick Robbins (1916–2003) grew poliovirus for the first time in cultured human embryo cells, the first virus to be grown without using solid animal tissue or eggs. Infections by poliovirus most often cause the mildest of symptoms. This was not known until the virus was isolated in cultured cells and many people were shown to have had mild infections that did not lead to poliomyelitis. But, unlike other viral infections, the incidence of polio – the rarer severe form of the infection – increased in the 20th century and reached a peak around 1952. The invention of a cell culture system for growing the virus enabled Jonas Salk (1914–1995) to make an effective polio vaccine.

Epstein–Barr virus

Denis Parsons Burkitt (1911–1993) was born in Enniskillen, County Fermanagh, Ireland. He was the first to describe a type of cancer that now bears his name, Burkitt's lymphoma. This type of cancer was endemic in equatorial Africa and was the commonest malignancy of children in the early 1960s. In an attempt to find a cause for the cancer, Burkitt sent cells from the tumour to Anthony Epstein (b. 1921), a British virologist, who, along with Yvonne Barr and Bert Achong (1928–1996), and after many failures, discovered viruses that resembled herpes virus in the fluid that surrounded the cells. The virus was later shown to be a previously unrecognised herpes virus, which is now called Epstein–Barr virus. Surprisingly, Epstein–Barr virus is a very common but relatively mild infection of Europeans. Why it can cause such a devastating illness in Africans is not fully understood, but reduced immunity to the virus caused by malaria might be to blame. Epstein–Barr virus is important in the history of viruses for being the first virus shown to cause cancer in humans.

Late 20th and early 21st century

A rotavirus particle

The second half of the 20th century was the golden age of virus discovery and most of the 2,000 recognised species of animal, plant, and bacterial viruses were discovered during these years. In 1946, bovine virus diarrhea was discovered, whose causative virus is still possibly the most common pathogen of cattle throughout the world, and in 1957, equine arterivirus was discovered. In the 1950s, improvements in virus isolation and detection methods resulted in the discovery of several important human viruses including varicella zoster virus, the paramyxoviruses – which include measles virus and respiratory syncytial virus – and the rhinoviruses that cause the common cold. In the 1960s more viruses were discovered. In 1963, the hepatitis B virus was discovered by Baruch Blumberg (b. 1925). Reverse transcriptase, the key enzyme that retroviruses use to transcribe their RNA into DNA, was first described in 1970, independently by Howard Temin and David Baltimore (b. 1938). This was important to the development of antiviral drugs – a key turning-point in the history of viral infections. In 1983, Luc Montagnier (b. 1932) and his team at the Pasteur Institute in France first isolated the retrovirus now called HIV. In 1989 Michael Houghton's team at Chiron Corporation discovered the hepatitis C virus. New viruses and strains of viruses were discovered in every decade of the second half of the 20th century. These discoveries have continued in the 21st century as new viral diseases such as SARS and Nipah virus have emerged. Despite scientists' achievements over the past one hundred years, viruses continue to pose new threats and challenges.

Butane

From Wikipedia, the free encyclopedia ...