Thursday, February 5, 2015

Quantum entanglement


From Wikipedia, the free encyclopedia


Spontaneous parametric down-conversion process can split photons into type II photon pairs with mutually perpendicular polarization.

Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently—instead, a quantum state may be given for the system as a whole.

Measurements of physical properties such as position, momentum, spin, polarization, etc. performed on entangled particles are found to be appropriately correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, then the spin of the other particle, measured on the same axis, will be found to be counterclockwise. Because of the nature of quantum measurement, however, this behavior gives rise to effects that can appear paradoxical: any measurement of a property of a particle can be seen as acting on that particle (e.g. by collapsing a number of superimposed states); and in the case of entangled particles, such action must be on the entangled system as a whole. It thus appears that one particle of an entangled pair "knows" what measurement has been performed on the other, and with what outcome, even though there is no known means for such information to be communicated between the particles, which at the time of measurement may be separated by arbitrarily large distances.

Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky and Nathan Rosen,[1] and several papers by Erwin Schrödinger shortly thereafter,[2][3] describing what came to be known as the EPR paradox. Einstein and others considered such behavior to be impossible, as it violated the local realist view of causality (Einstein referred to it as "spooky action at a distance"),[4] and argued that the accepted formulation of quantum mechanics must therefore be incomplete. Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally.[5] Experiments have been performed involving measuring the polarization or spin of entangled particles in different directions, which—by producing violations of Bell's inequality—demonstrate statistically that the local realist view cannot be correct. This has been shown to occur even when the measurements are performed more quickly than light could travel between the sites of measurement: there is no lightspeed or slower influence that can pass between the entangled particles.[6] Recent experiments have measured entangled particles within less than one part in 10,000 of the light travel time between them.[7] According to the formalism of quantum theory, the effect of measurement happens instantly.[8][9] It is not possible, however, to use this effect to transmit classical information at faster-than-light speeds[10] (see Faster-than-light → Quantum mechanics).

Quantum entanglement is an area of extremely active research by the physics community, and its effects have been demonstrated experimentally with photons, electrons, molecules the size of buckyballs,[11][12] and even small diamonds.[13][14] Research is also focused on the utilization of entanglement effects in communication and computation.

History


May 4, 1935 New York Times article headline regarding the forthcoming EPR paper.

The counterintuitive predictions of quantum mechanics about strongly correlated systems were first discussed by Albert Einstein in 1935, in a joint paper with Boris Podolsky and Nathan Rosen.[1] In this study, they formulated the EPR paradox (Einstein, Podolsky, Rosen paradox), a thought experiment that attempted to show that quantum mechanical theory was incomplete. They wrote: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete."[1]

However, they did not coin the word entanglement, nor did they generalize the special properties of the state they considered. Following the EPR paper, Erwin Schrödinger wrote a letter (in German) to Einstein in which he used the word Verschränkung (translated by himself as entanglement) "to describe the correlations between two particles that interact and then separate, as in the EPR experiment."[15] He shortly thereafter published a seminal paper defining and discussing the notion, and terming it "entanglement." In the paper he recognized the importance of the concept, and stated:[2] "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought."

Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity.[16]
Einstein later famously derided entanglement as "spukhafte Fernwirkung"[17] or "spooky action at a distance."

The EPR paper generated significant interest among physicists and inspired much discussion about the foundations of quantum mechanics (perhaps most famously Bohm's interpretation of quantum mechanics), but produced relatively little other published work. So, despite the interest, the flaw in EPR's argument was not discovered until 1964, when John Stewart Bell proved that one of their key assumptions, the principle of locality, was not consistent with the hidden variables interpretation of quantum theory that EPR purported to establish. Specifically, he demonstrated an upper limit, seen in Bell's inequality, regarding the strength of correlations that can be produced in any theory obeying local realism, and he showed that quantum theory predicts violations of this limit for certain entangled systems.[18] His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Freedman and Clauser in 1972[19] and Aspect's experiments in 1982.[20] They have all shown agreement with quantum mechanics rather than the principle of local realism. However, the issue is not finally settled, as each of these experimental tests has left open at least one loophole by which it is possible to question the validity of the results.

The work of Bell raised the possibility of using these superstrong correlations as a resource for communication. It led to the discovery of quantum key distribution protocols, most famously BB84 by Bennett and Brassard and E91 by Artur Ekert. Although BB84 does not use entanglement, Ekert's protocol uses the violation of a Bell inequality as a proof of security.

David Kaiser of MIT mentioned in his book, How the Hippies Saved Physics, that the possibilities of instantaneous long-range communication derived from Bell's theorem stirred interest among hippies, psychics, and even the CIA, with the counter-culture playing a critical role in its development toward practical use.[21]

Concept

Meaning of entanglement

Quantum systems can become entangled through various types of interactions. (For some ways in which entanglement may be achieved for experimental purposes, see the section below on methods).
An entangled system has a quantum state which cannot be factored as a product of states of its local constituents (e.g. individual particles); that is, the state of the system cannot be expressed as a direct product of the states of its parts. If entangled, one constituent cannot be fully described without considering the other(s). Like the quantum states of individual particles, the state of an entangled system is expressible as a sum, or superposition, of basis states, which are eigenstates of some observable(s). Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made.[22]

As an example of entanglement: a subatomic particle decays into an entangled pair of other particles. The decay events obey the various conservation laws, and as a result, the measurement outcomes of one daughter particle must be highly correlated with the measurement outcomes of the other daughter particle (so that the total momenta, angular momenta, energy, and so forth remain the same before and after this process). For instance, a spin-zero particle could decay into a pair of spin-1/2 particles. Since the total spin before and after this decay must be zero (conservation of angular momentum), whenever the first particle is measured to be spin up on some axis, the other (when measured on the same axis) is always found to be spin down. (This is called the spin anti-correlated case; and if the prior probabilities for measuring each spin are equal, the pair is said to be in the singlet state.)
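
This anti-correlation can be illustrated numerically. The following is a minimal sketch (not from the article) using NumPy: it samples repeated same-axis spin measurements on the singlet state and confirms that the two outcomes are always opposite.

```python
import numpy as np

rng = np.random.default_rng(0)

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
# Singlet state: (|up,down> - |down,up>) / sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def measure_pair(state, rng):
    """Projective measurement of both spins in the z basis.

    Returns (+1/-1)-valued outcomes for particles A and B."""
    probs = np.abs(state) ** 2           # probabilities of |uu>,|ud>,|du>,|dd>
    outcome = rng.choice(4, p=probs)
    a = +1 if outcome in (0, 1) else -1  # first bit: spin of A
    b = +1 if outcome in (0, 2) else -1  # second bit: spin of B
    return a, b

results = [measure_pair(singlet, rng) for _ in range(1000)]
assert all(a == -b for a, b in results)  # perfectly anti-correlated
```

The singlet amplitudes put all probability on the |up,down> and |down,up> components, so the same-axis outcomes disagree in every run.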

Apparent paradox

The seeming paradox here is that a measurement made on either of the particles apparently collapses the state of the entire entangled system—and does so instantaneously, before any information about the measurement could have reached the other particle (assuming that information cannot travel faster than light). In the quantum formalism, the result of a spin measurement on one of the particles is a collapse into a state in which each particle has a definite spin (either up or down) along the axis of measurement. The outcome is taken to be random, with each possibility having a probability of 50%.
However, if both spins are measured along the same axis, they are found to be anti-correlated. This means that the random outcome of the measurement made on one particle seems to have been transmitted to the other, so that it can make the "right choice" when it is measured. The distance and timing of the measurements can be chosen so as to make the interval between the two measurements spacelike, i.e. from any of the two measuring events to the other a message would have to travel faster than light. Then, according to the principles of special relativity, it is not in fact possible for any information to travel between two such measuring events—it is not even possible to say which of the measurements came first, as this would depend on the inertial system of the observer. Therefore the correlation between the two measurements cannot appropriately be explained as one measurement determining the other: different observers would disagree about the role of cause and effect.

The hidden variables theory

A possible resolution to the apparent paradox might be to assume that the state of the particles contains some hidden variables, whose values effectively determine, right from the moment of separation, what the outcomes of the spin measurements are going to be. This would mean that each particle carries all the required information with it, and nothing needs to be transmitted from one particle to the other at the time of measurement. It was originally believed by Einstein and others (see the previous section) that this was the only way out, and therefore that the accepted quantum mechanical description (with a random measurement outcome) must be incomplete. (In fact similar paradoxes can arise even without entanglement: the position of a single particle is spread out over space, and two detectors attempting to detect the particle at different positions must attain appropriate correlation, so that they do not both detect the particle.)

Violations of Bell's inequality

The hidden variables theory fails, however, when we consider measurements of the spin of entangled particles along different axes (for example, along any of three axes which make angles of 120 degrees). If a large number of pairs of such measurements are made (on a large number of pairs of entangled particles), then statistically, if the local realist or hidden variables view were correct, the results would always satisfy Bell's inequality. A number of experiments have shown in practice, however, that Bell's inequality is not satisfied. This tends to confirm that the original formulation of quantum mechanics is indeed correct, in spite of its apparently paradoxical nature. Even when measurements of the entangled particles are made in moving relativistic reference frames, in which each measurement (in its own relativistic time frame) occurs before the other, the measurement results remain correlated.[23][24]

The fundamental issue about measuring spin along different axes is that these measurements cannot have definite values at the same time―they are incompatible in the sense that these measurements' maximum simultaneous precision is constrained by the uncertainty principle. This is contrary to what is found in classical physics, where any number of properties can be measured simultaneously with arbitrary accuracy. It has been proven mathematically that compatible measurements cannot show Bell-inequality-violating correlations,[25] and thus entanglement is a fundamentally non-classical phenomenon.
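
The quantum prediction behind such experiments can be sketched numerically. Assuming the standard CHSH form of Bell's inequality (|S| ≤ 2 for any local realist theory) and the singlet correlation E(a, b) = −cos(a − b), the violation appears at suitably chosen angles:

```python
import numpy as np

def E(angle_a, angle_b):
    # Singlet-state correlation for spin measurements along two axes:
    # E(a, b) = -cos(a - b)
    return -np.cos(angle_a - angle_b)

a, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

# CHSH combination; local hidden variables require |S| <= 2
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2: Bell's inequality is violated
```

The value 2√2 is Tsirelson's bound, the maximum CHSH value quantum mechanics allows.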

Other types of experiments

In a 2012 experiment, "delayed-choice entanglement swapping" was used to decide whether two particles were entangled or not after they had already been measured.[26]

In a 2013 experiment, entanglement swapping has been used to create entanglement between photons that never coexisted in time, thus demonstrating that "the nonlocality of quantum mechanics, as manifested by entanglement, does not apply only to particles with spacelike separation, but also to particles with timelike [i.e., temporal] separation".[27]

In three independent experiments it was shown that classically-communicated separable quantum states can be used to carry entangled states.[28]

In August 2014, researcher Gabriela Barreto Lemos and team were able to "take pictures" of objects using photons that have not interacted with the subjects, but were entangled with photons that did interact with such objects. Lemos, from the University of Vienna, is confident that this new quantum imaging technique could find application where low light imaging is imperative, in fields like biological or medical imaging.[29]

Special Theory of Relativity

Another theory explains quantum entanglement using special relativity.[30] According to this theory, faster-than-light communication between entangled systems can be achieved because time dilation under special relativity brings time to a standstill from light's point of view. For example, in the case of two entangled photons, a measurement made on one photon at the present time would determine the state of the photon for both the present and the past at the same moment, leading to the instantaneous determination of the state of the other photon. Corresponding logic is applied to explain entangled systems that travel below the speed of light, e.g. an electron and a positron.

The Mystery of Time

Some physicists have suggested that time is an emergent phenomenon that is a side effect of quantum entanglement.[31][32] The Wheeler–DeWitt equation, which combines general relativity and quantum mechanics by leaving out time altogether, was introduced in the 1960s and posed a major problem for the scientific community until 1983, when the theorists Don Page and William Wootters proposed a solution based on the quantum phenomenon of entanglement. Page and Wootters showed how entanglement can be used to measure time.[33]

In 2013, at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, Ekaterina Moreva, together with Giorgio Brida, Marco Gramegna, Vittorio Giovannetti, Lorenzo Maccone, and Marco Genovese performed the first experimental test of Page and Wootters' ideas. They confirmed that time is an emergent phenomenon for internal observers but absent for external observers of the universe.[33]

Source for the arrow of time

Physicist Seth Lloyd says that quantum uncertainty gives rise to entanglement, the putative source of the arrow of time. According to Lloyd, "The arrow of time is an arrow of increasing correlations."[34]

Non-locality and hidden variables

There is much confusion about the meaning of entanglement, non-locality and hidden variables and how they relate to each other. As described above, entanglement is an experimentally verified and accepted property of nature, which has critical implications for the interpretations of quantum mechanics. The question becomes, "How can one account for something that was at one point indefinite with regard to its spin (or whatever is in this case the subject of investigation) suddenly becoming definite in that regard even though no physical interaction with the second object occurred, and, if the two objects are sufficiently far separated, could not even have had the time needed for such an interaction to proceed from the first to the second object?"[35] The latter question involves the issue of locality, i.e., whether for a change to occur in something the agent of change has to be in physical contact (at least via some intermediary such as a field force) with the thing that changes. Study of entanglement brings into sharp focus the dilemma between locality and the completeness or lack of completeness of quantum mechanics.

Bell's theorem and related results rule out a local realistic explanation for quantum mechanics (one which obeys the principle of locality while also ascribing definite values to quantum observables). However, in other interpretations, the experiments that demonstrate the apparent non-locality can also be described in local terms: If each distant observer regards the other as a quantum system, communication between the two must then be treated as a measurement process, and this communication is strictly local.[36] In particular, in the many worlds interpretation, the underlying description is fully local.[37] More generally, the question of locality in quantum physics is extraordinarily subtle and sometimes hinges on precisely how it is defined.

In the media and popular science, quantum non-locality is often portrayed as being equivalent to entanglement. While it is true that a bipartite quantum state must be entangled in order for it to produce non-local correlations, there exist entangled states that do not produce such correlations. A well-known example of this is the Werner state that is entangled for certain values of p_{sym}, but can always be described using local hidden variables.[38] In short, entanglement of a two-party state is necessary but not sufficient for that state to be non-local. It is important to recognise that entanglement is more commonly viewed as an algebraic concept, noted for being a precedent to non-locality as well as to quantum teleportation and to superdense coding, whereas non-locality is defined according to experimental statistics and is much more involved with the foundations and interpretations of quantum mechanics.

Quantum mechanical framework

The following subsections are for those with a good working knowledge of the formal, mathematical description of quantum mechanics, including familiarity with the formalism and theoretical framework developed in the articles: bra–ket notation and mathematical formulation of quantum mechanics.

Pure states

Consider two noninteracting systems A and B, with respective Hilbert spaces HA and HB. The Hilbert space of the composite system is the tensor product
 H_A \otimes H_B.
If the first system is in state \scriptstyle| \psi \rangle_A and the second in state \scriptstyle| \phi \rangle_B, the state of the composite system is
|\psi\rangle_A \otimes |\phi\rangle_B.
States of the composite system which can be represented in this form are called separable states, or (in the simplest case) product states.

Not all states are separable states (and thus product states). Fix a basis \scriptstyle \{|i \rangle_A\} for HA and a basis \scriptstyle \{|j \rangle_B\} for HB. The most general state in H_A \otimes H_B is of the form
|\psi\rangle_{AB} = \sum_{i,j} c_{ij} |i\rangle_A \otimes |j\rangle_B.
This state is separable if there exist c^A_i,c^B_j so that \scriptstyle c_{ij}= c^A_ic^B_j, yielding \scriptstyle |\psi\rangle_A = \sum_{i} c^A_{i} |i\rangle_A and \scriptstyle |\phi\rangle_B = \sum_{j} c^B_{j} |j\rangle_B. It is inseparable if for all c^A_i,c^B_j we have \scriptstyle c_{ij} \neq c^A_ic^B_j. If a state is inseparable, it is called an entangled state.

For example, given two basis vectors \scriptstyle \{|0\rangle_A, |1\rangle_A\} of HA and two basis vectors \scriptstyle \{|0\rangle_B, |1\rangle_B\} of HB, the following is an entangled state:
\tfrac{1}{\sqrt{2}} \left ( |0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B \right ).
If the composite system is in this state, it is impossible to attribute to either system A or system B a definite pure state. Another way to say this is that while the von Neumann entropy of the whole state is zero (as it is for any pure state), the entropy of the subsystems is greater than zero. In this sense, the systems are "entangled". This has specific empirical ramifications for interferometry.[39] It is worthwhile to note that the above example is one of four Bell states, which are (maximally) entangled pure states (pure states of the H_A \otimes H_B space, but which cannot be separated into pure states of HA and HB).
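
Separability of a pure bipartite state can be checked numerically: the state is separable exactly when its coefficient matrix (c_ij) has a single nonzero singular value (Schmidt rank 1). A minimal NumPy sketch, applied to the entangled state above (the function name is illustrative, not from any library):

```python
import numpy as np

def is_separable_pure(c, tol=1e-12):
    """A pure state sum_ij c_ij |i>_A |j>_B is separable iff the
    coefficient matrix c has exactly one nonzero singular value."""
    singular_values = np.linalg.svd(c, compute_uv=False)
    return int(np.sum(singular_values > tol)) == 1

product = np.outer([1, 0], [0, 1])               # |0>_A |1>_B, a product state
bell = np.array([[0, 1], [-1, 0]]) / np.sqrt(2)  # coefficients of the state above

print(is_separable_pure(product))  # True
print(is_separable_pure(bell))     # False: Schmidt rank 2, entangled
```

The number of nonzero singular values is the Schmidt rank, so the same routine also distinguishes partially from maximally entangled pure states.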

Now suppose Alice is an observer for system A, and Bob is an observer for system B. If in the entangled state given above Alice makes a measurement in the \scriptstyle \{|0\rangle, |1\rangle\} eigenbasis of A, there are two possible outcomes, occurring with equal probability:[40]
  1. Alice measures 0, and the state of the system collapses to \scriptstyle |0\rangle_A |1\rangle_B.
  2. Alice measures 1, and the state of the system collapses to \scriptstyle |1\rangle_A |0\rangle_B.
If the former occurs, then any subsequent measurement performed by Bob, in the same basis, will always return 1. If the latter occurs (Alice measures 1), then Bob's measurement will return 0 with certainty. Thus, system B has been altered by Alice performing a local measurement on system A. This remains true even if the systems A and B are spatially separated. This is the foundation of the EPR paradox.

The outcome of Alice's measurement is random. Alice cannot decide which state to collapse the composite system into, and therefore cannot transmit information to Bob by acting on her system. Causality is thus preserved, in this particular scheme. For the general argument, see no-communication theorem.

Ensembles

As mentioned above, a state of a quantum system is given by a unit vector in a Hilbert space. More generally, if one has a large number of copies of the same system, then the state of this ensemble is described by a density matrix, which is a positive-semidefinite matrix (or a trace-class operator, when the state space is infinite-dimensional) with trace 1. By the spectral theorem, such a matrix takes the general form:
\rho = \sum_i w_i |\alpha_i\rangle \langle\alpha_i|,
where the positive-valued wi sum to 1, and in the infinite-dimensional case, we would take the closure of such states in the trace norm. We can interpret ρ as representing an ensemble where wi is the proportion of the ensemble whose states are |\alpha_i\rangle. When a mixed state has rank 1, it therefore describes a pure ensemble. When we have less than complete information about the state of a quantum system, a density matrix is needed to represent the state.

Following the definition in the previous section, for a bipartite composite system, mixed states are just density matrices on H_A \otimes H_B. Extending the definition of separability from the pure case, we say that a mixed state is separable if it can be written as[41]:131–132
\rho = \sum_i p_i \rho_i^A \otimes \rho_i^B,
where the pi are positive-valued probabilities and the \rho_i^A's and \rho_i^B's are themselves states on the subsystems A and B respectively. In other words, a state is separable if it is a probability distribution over uncorrelated states, or product states. We can assume without loss of generality that \rho_i^A and \rho_i^B are pure ensembles. A state is then said to be entangled if it is not separable. In general, finding out whether or not a mixed state is entangled is considered difficult. The general bipartite case has been shown to be NP-hard.[42] For the 2 × 2 and 2 × 3 cases, a necessary and sufficient criterion for separability is given by the famous Positive Partial Transpose (PPT) condition.[43]
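
A minimal NumPy sketch of the PPT (Peres–Horodecki) test, applied to the two-qubit Werner state ρ = p|Ψ⁻⟩⟨Ψ⁻| + (1 − p)I/4, which is entangled exactly when p > 1/3; for 2 × 2 systems PPT is necessary and sufficient. The function names are illustrative, not from any particular library:

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Transpose subsystem B's indices: rho_{(i,j),(k,l)} -> rho_{(i,l),(k,j)}."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_entangled_ppt(rho):
    # Entangled (for 2x2 / 2x3) iff the partial transpose has a negative eigenvalue
    eigs = np.linalg.eigvalsh(partial_transpose(rho))
    return bool(np.min(eigs) < -1e-12)

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
proj = np.outer(singlet, singlet)

for p in (0.2, 0.5):
    werner = p * proj + (1 - p) * np.eye(4) / 4
    print(p, is_entangled_ppt(werner))
# 0.2 -> False (separable), 0.5 -> True (entangled)
```

The smallest eigenvalue of the partially transposed Werner state is (1 − 3p)/4, which turns negative precisely at p = 1/3.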

Experimentally, a mixed ensemble might be realized as follows. Consider a "black-box" apparatus that spits electrons towards an observer. The electrons' Hilbert spaces are identical. The apparatus might produce electrons that are all in the same state; in this case, the electrons received by the observer are then a pure ensemble. However, the apparatus could produce electrons in different states. For example, it could produce two populations of electrons: one with state |\mathbf{z}+\rangle with spins aligned in the positive z direction, and the other with state |\mathbf{y}-\rangle with spins aligned in the negative y direction. Generally, this is a mixed ensemble, as there can be any number of populations, each corresponding to a different state.

Reduced density matrices

The idea of a reduced density matrix was introduced by Paul Dirac in 1930.[44] Consider as above systems A and B each with a Hilbert space HA, HB. Let the state of the composite system be
 |\Psi \rangle \in H_A \otimes H_B.
As indicated above, in general there is no way to associate a pure state to the component system A. However, it still is possible to associate a density matrix to it. Let
\rho_T = |\Psi\rangle \langle\Psi| ,
which is the projection operator onto this state. The state of A is the partial trace of ρT over the basis of system B:
\rho_A \ \stackrel{\mathrm{def}}{=}\ \sum_j \langle j|_B \left( |\Psi\rangle \langle\Psi| \right) |j\rangle_B = \hbox{Tr}_B \; \rho_T.
ρA is sometimes called the reduced density matrix of ρ on subsystem A. Colloquially, we "trace out" system B to obtain the reduced density matrix on A.
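
For a pure state the partial trace reduces to a small matrix computation: reshaping the amplitude vector into a d_A × d_B matrix m gives ρ_A = m m†. A brief NumPy sketch (an illustration, not any library's API), applied to the singlet state:

```python
import numpy as np

def reduced_density_matrix_A(psi, dims=(2, 2)):
    """Tr_B |psi><psi| for a pure bipartite state psi."""
    dA, dB = dims
    m = psi.reshape(dA, dB)   # amplitudes arranged as a dA x dB matrix
    return m @ m.conj().T     # rho_A = m m^dagger

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho_A = reduced_density_matrix_A(singlet)
print(rho_A)  # -> I/2, the maximally mixed state, so the pair is entangled
```

That the reduced state comes out mixed (here maximally mixed) while the global state is pure is exactly the entanglement criterion discussed below.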

For example, the reduced density matrix of A for the entangled state
\tfrac{1}{\sqrt{2}} \left ( |0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B \right),
discussed above is
\rho_A = \tfrac{1}{2} \left ( |0\rangle_A \langle 0|_A + |1\rangle_A \langle 1|_A \right ).
This demonstrates that, as expected, the reduced density matrix for an entangled pure ensemble is a mixed ensemble. Also not surprisingly, the density matrix of A for the pure product state |\psi\rangle_A \otimes |\phi\rangle_B discussed above is
\rho_A = |\psi\rangle_A \langle\psi|_A .
In general, a bipartite pure state ρ is entangled if and only if its reduced states are mixed rather than pure. Reduced density matrices were explicitly calculated for different spin chains with a unique ground state. An example is the one-dimensional AKLT spin chain:[45] the ground state can be divided into a block and an environment. The reduced density matrix of the block is proportional to a projector onto a degenerate ground state of another Hamiltonian.

The reduced density matrix was also evaluated for XY spin chains, where it has full rank. It was proved that in the thermodynamic limit, the spectrum of the reduced density matrix of a large block of spins is an exact geometric sequence in this case.[46]

Entropy

In this section, the entropy of a mixed state is discussed as well as how it can be viewed as a measure of quantum entanglement.

Definition


The plot of von Neumann entropy vs. eigenvalue for a bipartite two-level pure state. When the eigenvalue equals 0.5, the von Neumann entropy is at a maximum, corresponding to maximum entanglement.

In classical information theory, the Shannon entropy H is associated with a probability distribution p_1, \cdots, p_n in the following way:[47]
H(p_1, \cdots, p_n ) = - \sum_i p_i \log_2 p_i.
Since a mixed state ρ is a probability distribution over an ensemble, this leads naturally to the definition of the von Neumann entropy:
S(\rho) = - \hbox{Tr} \left( \rho \log_2 {\rho} \right).
In general, one uses the Borel functional calculus to calculate log(ρ). If ρ acts on a finite-dimensional Hilbert space and has eigenvalues \lambda_1, \cdots, \lambda_n, the Shannon entropy is recovered:
S(\rho) = - \hbox{Tr} \left( \rho \log_2 {\rho} \right) = - \sum_i \lambda_i \log_2 \lambda_i.
Since an event of probability 0 should not contribute to the entropy, and given that
 \lim_{p \to 0} p \log p = 0,
the convention 0 log(0) = 0 is adopted. This extends to the infinite-dimensional case as well: if ρ has spectral resolution
 \rho = \int \lambda d P_{\lambda},
assume the same convention when calculating
 \rho \log_2 \rho = \int \lambda \log_2 \lambda d P_{\lambda}.
As in statistical mechanics, the more uncertainty (number of microstates) the system should possess, the larger the entropy. For example, the entropy of any pure state is zero, which is unsurprising since there is no uncertainty about a system in a pure state. The entropy of any of the two subsystems of the entangled state discussed above is log(2) (which can be shown to be the maximum entropy for 2 × 2 mixed states).
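
Both limiting cases can be checked with a short NumPy sketch of the definition above, using the 0 log 0 = 0 convention (with log base 2, the subsystem entropy log(2) of the entangled state becomes 1):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]  # drop zero eigenvalues: 0 log 0 = 0 convention
    return float(-np.sum(eigs * np.log2(eigs)) + 0.0)  # +0.0 normalizes -0.0

pure = np.diag([1.0, 0.0])        # a pure state: no uncertainty
maximally_mixed = np.eye(2) / 2   # reduced state of the entangled example above
print(von_neumann_entropy(pure))             # 0.0
print(von_neumann_entropy(maximally_mixed))  # 1.0 (= log2 2)
```
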

As a measure of entanglement

Entropy provides one tool which can be used to quantify entanglement, although other entanglement measures exist.[48] If the overall system is pure, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems.

For bipartite pure states, the von Neumann entropy of reduced states is the unique measure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure.

It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution {1/n,...,1/n}. Therefore, a bipartite pure state \rho \in H_A \otimes H_B is said to be a maximally entangled state if the reduced state of ρ is the diagonal matrix
\begin{bmatrix} \frac{1}{n}& & \\ & \ddots & \\ & & \frac{1}{n}\end{bmatrix}.
For mixed states, the reduced von Neumann entropy is not the unique entanglement measure.
As an aside, the information-theoretic definition is closely related to entropy in the sense of statistical mechanics[citation needed] (comparing the two definitions, we note that, in the present context, it is customary to set the Boltzmann constant k = 1). For example, by properties of the Borel functional calculus, we see that for any unitary operator U,
S(\rho) = S \left (U \rho U^* \right).
Indeed, without the above property, the von Neumann entropy would not be well-defined. In particular, U could be the time evolution operator of the system, i.e.
U(t) = \exp \left(\frac{-i H t }{\hbar}\right),
where H is the Hamiltonian of the system. This associates the reversibility of a process with its resulting entropy change, i.e., a process is reversible if, and only if, it leaves the entropy of the system invariant. This provides a connection between quantum information theory and thermodynamics. Rényi entropy also can be used as a measure of entanglement.

Therefore the march of the arrow of time towards thermodynamic equilibrium is simply the growing spread of quantum entanglement.[49]

Entanglement measures

Entanglement measures quantify the amount of entanglement in a bipartite quantum state. As mentioned above, entanglement entropy is the standard measure of entanglement for pure states (but it is no longer a measure of entanglement for mixed states). For mixed states, there are several entanglement measures in the literature[48] and no single one is standard.
Most (but not all) of these entanglement measures reduce for pure states to entanglement entropy, and most are difficult (NP-hard) to compute.[50]

Quantum field theory

The Reeh-Schlieder theorem of quantum field theory is sometimes seen as an analogue of quantum entanglement.

Applications

Entanglement has many applications in quantum information theory. With the aid of entanglement, otherwise impossible tasks may be achieved.

Among the best-known applications of entanglement are superdense coding and quantum teleportation.[51]

Most researchers believe that entanglement is necessary to realize quantum computing (although this is disputed by some[52]).

Entanglement is used in some protocols of quantum cryptography.[53][54] This is because the "shared noise" of entanglement makes for an excellent one-time pad. Moreover, since measurement of either member of an entangled pair destroys the entanglement they share, entanglement-based quantum cryptography allows the sender and receiver to more easily detect the presence of an interceptor.

In interferometry, entanglement is necessary for surpassing the standard quantum limit and achieving the Heisenberg limit.[55]

Entangled states

There are several canonical entangled states that appear often in theory and experiments.

For two qubits, the Bell states are
|\Phi^\pm\rangle = \frac{1}{\sqrt{2}} (|0\rangle_A \otimes |0\rangle_B \pm |1\rangle_A \otimes |1\rangle_B)
|\Psi^\pm\rangle = \frac{1}{\sqrt{2}} (|0\rangle_A \otimes |1\rangle_B \pm |1\rangle_A \otimes |0\rangle_B).
These four pure states are all maximally entangled (according to the entropy of entanglement) and form an orthonormal basis of the Hilbert space of the two qubits. They play a fundamental role in Bell's theorem.
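Both claims — maximal entanglement and orthonormality — can be verified directly. Here is a short numerical sketch (not part of the article) that constructs the four Bell states, checks that tracing out one qubit leaves the maximally mixed state with entropy 1 bit, and confirms the states are mutually orthonormal:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

bell_states = {
    "Phi+": (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2),
    "Phi-": (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),
    "Psi+": (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
    "Psi-": (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2),
}

def entanglement_entropy(psi):
    """Entropy (in bits) of the reduced state of qubit A for a 2-qubit pure state."""
    M = psi.reshape(2, 2)                 # psi_{ab} as a 2x2 matrix
    rho_A = M @ M.conj().T                # partial trace over qubit B
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

for name, psi in bell_states.items():
    assert np.isclose(entanglement_entropy(psi), 1.0)   # maximally entangled

# The four states are mutually orthonormal:
vs = list(bell_states.values())
gram = np.array([[np.vdot(u, v) for v in vs] for u in vs])
assert np.allclose(gram, np.eye(4))
```

The reduced density matrix of either qubit of any Bell state is I/2, which is why each single-qubit measurement outcome is individually random even though the pair is perfectly correlated.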

For M>2 qubits, the GHZ state is
|\mathrm{GHZ}\rangle = \frac{|0\rangle^{\otimes M} + |1\rangle^{\otimes M}}{\sqrt{2}},
which reduces to the Bell state |\Phi^+\rangle for M=2. The traditional GHZ state was defined for M=3. GHZ states are occasionally extended to qudits, i.e. systems of d rather than 2 dimensions.
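The reduction of the GHZ state to |\Phi^+\rangle at M=2 can be checked in a few lines. This is a small sketch (assuming standard computational-basis conventions, not code from the article):

```python
import numpy as np
from functools import reduce

def ghz(M):
    """The M-qubit GHZ state (|0...0> + |1...1>) / sqrt(2) as a state vector."""
    ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    all0 = reduce(np.kron, [ket0] * M)
    all1 = reduce(np.kron, [ket1] * M)
    return (all0 + all1) / np.sqrt(2)

# For M = 2 the GHZ state is exactly the Bell state |Phi+>.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
assert np.allclose(ghz(2), phi_plus)

# States remain normalised for larger M (here the traditional M = 3 case).
assert np.isclose(np.linalg.norm(ghz(3)), 1.0)
```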
Also for M>2 qubits, there are spin squeezed states.[56] Spin squeezed states are a class of states satisfying certain restrictions on the uncertainty of spin measurements, and are necessarily entangled.[57]

For two bosonic modes, a NOON state is
|\psi_\text{NOON} \rangle = \frac{|N \rangle_a |0\rangle_b + |{0}\rangle_a |{N}\rangle_b}{\sqrt{2}}.
This is like a Bell state |\Phi^+\rangle except that the basis kets 0 and 1 have been replaced with "all N photons in one mode" and "all N photons in the other mode".

Finally, there also exist twin Fock states for bosonic modes, which can be created by feeding a Fock state into two arms leading to a beam splitter. They are a sum of multiple NOON states and can be used to achieve the Heisenberg limit.[58]

For the appropriately chosen measure of entanglement, Bell, GHZ, and NOON states are maximally entangled while spin squeezed and twin Fock states are only partially entangled. The partially entangled states are generally easier to prepare experimentally.

Methods of creating entanglement

Entanglement is usually created by direct interactions between subatomic particles. These interactions can take numerous forms. One of the most commonly used methods is spontaneous parametric down-conversion to generate a pair of photons entangled in polarisation.[59] Other methods include the use of a fiber coupler to confine and mix photons, the use of quantum dots to trap electrons until decay occurs, the use of the Hong-Ou-Mandel effect, etc. In the earliest tests of Bell's theorem, the entangled particles were generated using atomic cascades.

It is also possible to create entanglement between quantum systems that never directly interacted, through the use of entanglement swapping.

Testing a system for entanglement

Systems which contain no entanglement are said to be separable. For 2-qubit and qubit-qutrit systems (2 × 2 and 2 × 3 respectively), the simple Peres-Horodecki criterion provides both a necessary and a sufficient criterion for separability, and thus for detecting entanglement. However, for the general case, the criterion is merely a necessary one for separability, as the problem becomes NP-hard.[60][61] A numerical approach to the problem is suggested by Jon Magne Leinaas, Jan Myrheim and Eirik Ovrum in their paper "Geometrical aspects of entanglement".[62] Leinaas et al. offer a numerical approach, iteratively refining an estimated separable state towards the target state to be tested, and checking whether the target state can indeed be reached. An implementation of the algorithm (including built-in Peres-Horodecki criterion testing) is available in the "StateSeparator" web app.
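For the 2 × 2 case, the Peres-Horodecki test is straightforward to implement: a state is entangled exactly when its partial transpose has a negative eigenvalue. The sketch below (not from the cited paper; the Werner-state family is a standard textbook test case) applies it to states ρ = p|Ψ⁻⟩⟨Ψ⁻| + (1−p) I/4, which are entangled precisely when p > 1/3:

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose over subsystem B of a 2-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)                    # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)   # swap b <-> b'

psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def werner(p):
    """Werner state: a mixture of |Psi-> and the maximally mixed state."""
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

def is_entangled_2x2(rho, tol=1e-12):
    """Peres-Horodecki: entangled iff the partial transpose is not positive."""
    return bool(np.linalg.eigvalsh(partial_transpose(rho)).min() < -tol)

assert not is_entangled_2x2(werner(0.2))   # p < 1/3: separable
assert is_entangled_2x2(werner(0.5))       # p > 1/3: entangled
```

The minimum eigenvalue of the partial transpose of this family works out to (1 − 3p)/4, which is why the boundary sits at p = 1/3; for 2 × 2 and 2 × 3 systems this single test settles separability, while in higher dimensions PPT entangled ("bound entangled") states exist and the test is only necessary for separability.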

Renewable resource


From Wikipedia, the free encyclopedia

Daneshill Energy Forest, the UK's largest plantation of eucalypts, planted in 2005 by Nottinghamshire County Council.

A renewable resource is an organic natural resource that can replenish over time at a rate comparable to its usage, either through biological reproduction or other naturally recurring processes. Renewable resources are a part of Earth's natural environment and the largest components of its ecosphere. A positive life cycle assessment is a key indicator of a resource's sustainability.[1]

Definitions of renewable resources may also include agricultural production, as in sustainable agriculture, and to an extent water resources.[2] In 1962 Paul Alfred Weiss defined renewable resources as "The total range of living organisms providing man with food, fibres, drugs, etc...".[3] Another type of renewable resource is renewable energy resources. Common sources of renewable energy include solar, geothermal and wind power, which are all categorised as renewable resources.

Food and water

Water resources

Water can be considered a renewable material when carefully controlled usage, treatment, and release are followed. If not, it becomes a non-renewable resource at that location. For example, groundwater is usually removed from an aquifer at a rate much greater than its very slow natural recharge, and so groundwater is considered non-renewable. Removal of water from the pore spaces may cause permanent compaction (subsidence) that cannot be renewed. 97.5% of the water on the Earth is salt water, and 2.5% is fresh water; slightly over two thirds of the fresh water is frozen in glaciers and polar ice caps.[4] The remaining unfrozen freshwater is found mainly as groundwater, with only a small fraction (0.008%) present above ground or in the air.[5]

Water pollution is one of the main concerns regarding water resources. It is estimated that 22% of worldwide water is used in industry.[6] Major industrial users include hydroelectric dams; thermoelectric power plants, which use water for cooling; ore and oil refineries, which use water in chemical processes; and manufacturing plants, which use water as a solvent.

Non-agricultural food


Alaska wild "berries" from the Innoko National Wildlife Refuge - Renewable Resources

Food is any substance consumed to provide nutritional support for the body.[7] Most food has its origin in renewable resources. Food is obtained directly from plants and animals.

Wild berries and other fruits, mushrooms, plants, seeds and naturally growing edible resources still represent a valuable source of nutrition in many countries, especially in rural areas. In fact, many wild animals depend on wild plants and fruits as a source of food.[8]

Hunting may no longer be the primary source of meat in the modernised world, but it is still an important and essential source for many rural and remote groups. It is also the sole source of food for wild carnivores.[9]

Sustainable agriculture

The phrase sustainable agriculture was coined by Australian agricultural scientist Gordon McClymont.[10] It has been defined as "an integrated system of plant and animal production practices having a site-specific application that will last over the long term".[11] Expansion of agricultural land has an impact on biodiversity and contributes to deforestation. The Food and Agriculture Organisation of the United Nations estimates that in coming decades, cropland will continue to be lost to industrial and urban development, along with reclamation of wetlands, and conversion of forest to cultivation, resulting in the loss of biodiversity and increased soil erosion.[12]

Polyculture practices in Andhra Pradesh

Although air and sunlight are available everywhere on Earth, crops also depend on soil nutrients and the availability of water. Monoculture is a method of growing only one crop at a time in a given field, which can damage land and cause it to become either unusable or suffer from reduced yields. Monoculture can also cause the build-up of pathogens and pests that target one specific species. The Great Irish Famine (1845–1849) is a well-known example of the dangers of monoculture.

Crop rotation and long-term crop rotations allow the replenishment of nitrogen through the use of green manure in sequence with cereals and other crops, and can improve soil structure and fertility by alternating deep-rooted and shallow-rooted plants. Other methods to combat lost soil nutrients are returning to natural cycles that annually flood cultivated lands (returning lost nutrients), such as the flooding of the Nile, the long-term use of biochar, and the use of crop and livestock landraces that are adapted to less-than-ideal conditions such as pests, drought, or lack of nutrients.

Agricultural practices are the single greatest contributor to the global increase in soil erosion rates.[13] It is estimated that "more than a thousand million tonnes of southern Africa's soil are eroded every year. Experts predict that crop yields will be halved within thirty to fifty years if erosion continues at present rates."[14] The Dust Bowl phenomenon in the 1930s was caused by severe drought combined with farming methods that did not include crop rotation, fallow fields, cover crops, soil terracing and wind-breaking trees to prevent wind erosion.[15]

The tillage of agricultural lands is one of the primary contributing factors to erosion, due to mechanised agricultural equipment that allows for deep plowing, which severely increases the amount of soil that is available for transport by water erosion.[16][17] The phenomenon called Peak Soil describes how large-scale factory farming techniques are jeopardizing humanity's ability to grow food in the present and in the future.[18] Without efforts to improve soil management practices, the availability of arable soil will become increasingly problematic.[19]

Methods to combat erosion include no-till farming, using a keyline design, growing wind breaks to hold the soil, and widespread use of compost. Chemical fertilisers and pesticides can also have an effect on soil erosion, which can contribute to soil salinity and prevent other species from growing.
Phosphate is a primary component in the chemical fertiliser applied most commonly in modern agricultural production. However, scientists estimate that rock phosphate reserves will be depleted in 50–100 years and that peak phosphate will occur in about 2030.[20]

Industrial processing and logistics also affect agriculture's sustainability. The ways and locations in which crops are sold require energy for transportation, in addition to the energy costs of materials and labour. Food sold at a local location, such as a farmers' market, has reduced energy overheads.

Illegal slash and burn practice in Madagascar, 2010

Overview of non-food resources


Douglas (Pseudotsuga menziesii) forest created in 1850, Meymac (Corrèze), France

The most important renewable resource is wood, provided by means of forestry, which has been used for construction, housing and firewood since ancient times.[21][22][23] Plants provide the main sources of renewable resources; the main distinction is made between energy crops and non-food crops. A large variety of lubricants, industrially used vegetable oils, textiles and fibres made e.g. of cotton, copra or hemp, paper derived from wood, rags or grasses, and bioplastics are based on plant renewable resources. A large variety of chemical base products like latex, ethanol, resin, sugar and starch can be provided from plant renewables. Animal-based renewables include fur, leather, technical fats and lubricants and further derived products, such as animal glue, tendons and casings, or, in historical times, ambergris and baleen provided by whaling.

With regard to pharmacy ingredients and legal and illegal drugs, plants are important sources; however, the venom of snakes, frogs and insects has also been a valuable renewable source of pharmacological ingredients. Before GMO production set in, insulin and important hormones were based on animal sources. Feathers, an important byproduct of poultry farming for food, are still used as filler and as a base for keratin in general. The same applies to the chitin produced in farming crustaceans, which may be used as a base for chitosan. The most important part of the human body used for non-medical purposes is human hair, as for artificial hair integrations, which is traded worldwide.

Historical role


An adult and sub-adult Minke whale are dragged aboard the Nisshin Maru, a Japanese whaling vessel

Hemp insulation, a renewable resource used as building material

Historically, renewable resources like firewood, latex, guano, charcoal, wood ash, plant colours such as indigo, and whale products have been crucial for human needs but failed to supply demand at the beginning of the industrial era.[24] Early modern times faced large problems with the overuse of renewable resources, as in deforestation, overgrazing or overfishing.[24]

Besides fresh meat and milk, which as food items are not the topic of this section, livestock farmers and artisans used further animal ingredients such as tendons, horn, bones and bladders. Complex technical constructions such as the composite bow were based on a combination of animal- and plant-based materials. The current distribution conflict between biofuel and food production is described as "food vs. fuel". Conflicts between food needs and other usage, as imposed by fief obligations, were common in historical times as well.[25] However, a significant percentage of (middle European) farmers' yields went into livestock, which also provides organic fertiliser.[26] Oxen and horses were important for transportation purposes and drove engines, e.g. in treadmills.

Other regions solved the transportation problem with terracing, urban agriculture and garden agriculture.[24] Further conflicts, such as between forestry and herding, or between (sheep) herders and cattle farmers, led to various solutions. Some confined wool production and sheep to large state and nobility domains; others outsourced it to professional shepherds with larger wandering herds.[27]

The British Agricultural Revolution was mainly based on a new system of crop rotation, the four-field rotation. British agriculturist Charles Townshend recognised the invention in the Dutch Waasland and popularised it in the 18th-century UK; George Washington Carver did so in the USA. The system used wheat, turnips and barley and also introduced clover. Clover can fix nitrogen from the air, a practically inexhaustible renewable resource, into fertilising compounds in the soil, and allowed yields to be increased greatly. Farmers also gained a fodder and grazing crop, so livestock could be bred year-round and winter culling was avoided. The amount of manure rose, allowing more crops and making it possible to refrain from wood pasture.[24]

Early modern times and the 19th century saw the previous resource base partially replaced or supplemented by large-scale chemical synthesis and by the use of fossil and mineral resources.[28] Besides the still-central role of wood, there is a sort of renaissance of renewable products based on modern agriculture, genetic research and extraction technology. Besides fears about an upcoming global shortage of fossil fuels, local shortages due to boycotts, war and blockades, or simply transportation problems in remote regions, have contributed to different methods of replacing or substituting fossil resources with renewables.

Challenges

The use of certain basically renewable products, as in traditional Chinese medicine, endangers various species. The black market in rhinoceros horn alone reduced the world's rhino population by more than 90 percent over the past 40 years.[29][30]

Renewables used for autarky approaches


In vitro-culture of Vitis (grapevine), Geisenheim Grape Breeding Institute

The success of the German chemical industry until World War I was based on the replacement of colonial products. The predecessors of IG Farben dominated the world market for synthetic dyes at the beginning of the 20th century[31] and had an important role in artificial pharmaceuticals, photographic film, agricultural chemicals and electrochemicals.[28]

However, the former plant-breeding research institutes took a different approach. After the loss of the German colonial empire, important players in the field such as Erwin Baur and Konrad Meyer switched to using local crops as the base for economic autarky.[32][33] Meyer, a key agricultural scientist and spatial planner of the Nazi era, managed and led Deutsche Forschungsgemeinschaft resources and focused about a third of all research grants in Nazi Germany on agricultural and genetic research, especially on resources needed in case of a further German war effort.[32] A wide array of agrarian research institutes that still exist today and have importance in the field were founded or enlarged at that time.

There were some major failures, such as trying to grow frost-resistant olive species, but some success in the case of hemp, Linum and rapeseed, which are still of current importance.[32] During the war, German scientists tried to systematically exploit foreign research results in occupied countries. Heinrich Himmler personally supported a research project using Russian Taraxacum (dandelion) species to manufacture natural rubber.[32] The project was conducted using 150 female KZ prisoners and captured Russian scientists, kept together as 'Kommando Pflanzenzucht' (plant breeding command) in a subcamp (SS) of Konzentrationslager Auschwitz led by SS agrarian research officer Joachim Caesar. Rubber dandelions are still of interest: scientists at the Fraunhofer Institute for Molecular Biology and Applied Ecology (IME) announced in 2013 that they had developed a cultivar suitable for commercial production of natural rubber.[34]

Legal situation and subsidies

Several legal and economic means have been used to enhance the market share of renewables. The UK uses Non-Fossil Fuel Obligations (NFFO), a collection of orders requiring the electricity Distribution Network Operators in England and Wales to purchase electricity from the nuclear power and renewable energy sectors. Similar mechanisms operate in Scotland (the Scottish Renewable Orders under the Scottish Renewables Obligation) and Northern Ireland (the Northern Ireland Non-Fossil Fuel Obligation). In the USA, Renewable Energy Certificates (RECs) use a similar approach. The German Energiewende uses feed-in tariffs. An unexpected outcome of the subsidies was the quick increase of pellet co-firing in conventional fossil fuel plants (compare Tilbury power stations) and cement works, making wood and other biomass account for about half of Europe's renewable-energy consumption.[23]

Examples of industrial use

Bioplastics

A packaging blister made from cellulose acetate, a bioplastic

Bioplastics are a form of plastics derived from renewable biomass sources, such as vegetable fats and oils, lignin, corn starch, pea starch[35] or microbiota.[36] The most common form of bioplastic is thermoplastic starch. Other forms include cellulose bioplastics, biopolyesters, polylactic acid, and bio-derived polyethylene.

The production and use of bioplastics is generally regarded as a more sustainable activity when compared to plastic production from petroleum (petroplastic); however, manufacturing of bioplastic materials is often still reliant upon petroleum as an energy and materials source. Because of the fragmentation in the market and ambiguous definitions it is difficult to describe the total market size for bioplastics, but the global production capacity is estimated at 327,000 tonnes.[37] In contrast, global consumption of all flexible packaging is estimated at around 12.3 million tonnes.[38]

Bioasphalt

Bioasphalt is an asphalt alternative made from non-petroleum based renewable resources. Manufacturing sources of bioasphalt include sugar, molasses and rice, corn and potato starches, and vegetable oil based waste. Asphalt made with vegetable oil based binders was patented by Colas SA in France in 2004.[39][40]

Renewable energy

Renewable energy refers to the provision of energy via renewable resources which are naturally replenished as fast as they are used. It includes e.g. sunlight, wind, biomass, rain, tides, waves and geothermal heat.[41] Renewable energy may replace or supplement fossil energy supply in various distinct areas: electricity generation, hot water/space heating, motor fuels, and rural (off-grid) energy services.[42]

Biomass

A sugarcane plantation in Brazil (State of São Paulo). Cane is used for biomass energy.

Biomass refers to biological material from living, or recently living, organisms, most often plants or plant-derived materials.

Sustainable harvesting and use of renewable resources (i.e., maintaining a positive renewal rate) can reduce air pollution, soil contamination, habitat destruction and land degradation.[43] Biomass energy is derived from six distinct energy sources: garbage, wood, plants, waste, landfill gases, and alcohol fuels. Historically, humans have harnessed biomass-derived energy since the advent of burning wood to make fire, and wood remains the largest biomass energy source today.[44][45]

However, low-tech use of biomass, which still accounts for more than 10% of world energy needs, may induce indoor air pollution in developing nations[46] and resulted in between 1.5 million and 2 million deaths in 2000.[47]

The biomass used for electricity generation varies by region.[48] Forest by-products, such as wood residues, are common in the United States.[48] Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks).[48] Animal husbandry residues, such as poultry litter, are common in the UK.[48] The biomass power generating industry in the United States, which consists of approximately 11,000 MW of summer operating capacity actively supplying power to the grid, produces about 1.4 percent of the U.S. electricity supply.[49]

Biofuel

Brazil has bioethanol made from sugarcane available throughout the country. Shown a typical Petrobras gas station at São Paulo with dual fuel service, marked A for alcohol (ethanol) and G for gasoline.

A biofuel is a type of fuel whose energy is derived from biological carbon fixation. Biofuels include fuels derived from biomass conversion, as well as solid biomass, liquid fuels and various biogases.[50]

Bioethanol is an alcohol made by fermentation, mostly from carbohydrates produced in sugar or starch crops such as corn, sugarcane or switchgrass.

Biodiesel is made from vegetable oils and animal fats. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe.

Biogas is methane produced by the anaerobic digestion of organic material by anaerobes[51] and is also a renewable source of energy.

Biogas

Biogas typically refers to a mixture of gases produced by the breakdown of organic matter in the absence of oxygen. Biogas is produced by anaerobic digestion with anaerobic bacteria or fermentation of biodegradable materials such as manure, sewage, municipal waste, green waste, plant material, and crops.[52] It is primarily methane (CH4) and carbon dioxide (CO2) and may have small amounts of hydrogen sulphide (H2S), moisture and siloxanes.

Natural fibre

Natural fibres are a class of hair-like materials that are continuous filaments or are in discrete elongated pieces, similar to pieces of thread. They can be used as a component of composite materials. They can also be matted into sheets to make products such as paper or felt. Fibres are of two types: natural fibre, which consists of animal and plant fibres, and man-made fibre, which consists of synthetic fibres and regenerated fibres.

Threats to renewable resources

Renewable resources are endangered by non-regulated industrial developments and growth.[53] They must be carefully managed to avoid exceeding the natural world's capacity to replenish them.[1] A life cycle assessment provides a systematic means of evaluating renewability. This is a matter of sustainability in the natural environment.[54]

Overfishing


Atlantic cod stocks were severely overfished, leading to abrupt collapse

National Geographic has described ocean overfishing as "simply the taking of wildlife from the sea at rates too high for fished species to replace themselves."[55]

Demand for tuna meat is driving overfishing so severely as to endanger some species, like the bluefin tuna. The European Community and other organisations are trying to regulate fishery so as to protect species and to prevent their extinction.[56] The United Nations Convention on the Law of the Sea treaty deals with aspects of overfishing in articles 61, 62, and 65.[57]

Examples of overfishing exist in areas such as the North Sea of Europe, the Grand Banks of North America and the East China Sea of Asia.[58]

The decline of penguin populations is caused in part by overfishing, driven by human competition over the same renewable resources.[59]

Deforestation

Besides their role as a resource for fuel and building material, trees protect the environment by absorbing carbon dioxide and by creating oxygen.[60] The destruction of rain forests is one of the critical causes of climate change. Deforestation causes carbon dioxide to linger in the atmosphere. As carbon dioxide accrues, it produces a layer in the atmosphere that traps radiation from the sun. The radiation converts to heat, which causes global warming; this mechanism is better known as the greenhouse effect.[61]
Deforestation also affects the water cycle. It reduces the content of water in the soil and groundwater as well as atmospheric moisture.[62] Deforestation reduces soil cohesion, so that erosion, flooding and landslides ensue.[63][64]

Rain forests shelter many species and organisms providing local populations with food and other commodities. In this way biofuels may well be unsustainable if their production contributes to deforestation.[65]

Endangered species


Over-hunting of American Bison.

Some renewable resources, species and organisms are facing a very high risk of extinction caused by growing human population and over-consumption. It has been estimated that over 40% of all living species on Earth are at risk of going extinct.[66] Many nations have laws to protect hunted species and to restrict the practice of hunting. Other conservation methods include restricting land development or creating preserves. The IUCN Red List of Threatened Species is the best-known worldwide conservation status listing and ranking system.[67] Internationally, 199 countries have signed an accord agreeing to create Biodiversity Action Plans to protect endangered and other threatened species.

Quantum physics rules nature

Quantum theory doesn't just apply to physics - it's behind natural things like photosynthesis, homing pigeons and possibly consciousness itself, writes Brian Clegg.

Original link:  http://www.abc.net.au/science/articles/2015/02/04/4098244.htm
 
Photosynthesis is as biological as you get, but it relies on some serious quantum physics (Source: scyther5/iStockphoto)

Because of the way we are taught science, it is tempting to divide the subject up into tight compartments. Physics is about how stuff behaves, while biology explains the living side of nature. (As someone with a physics background, I might cruelly say that chemistry is the clean-up operation for the bits in between that neither of the other subjects wants.) But these labels and divisions are arbitrary and human-imposed. Quantum theory has no intention of staying confined in the box labelled physics.

Indeed, we are discovering an increasing range of quantum processes — like quantum tunnelling and entanglement — cropping up in nature, where they might not have been expected before.
One of the most dramatic and important biological processes that is likely to involve high-level quantum effects is photosynthesis, the process by which plants convert light into energy.

Photosynthesis: let there be (coherent) light

Any interaction between light and matter is quantum mechanical, just as anything involving an atom or electron is, but recent studies of photosynthesis have shown that quantum physics probably has a more functional role.

The physics and chemistry involved in photosynthesis is convoluted, with a whole chain of reactions taking place. First the light bumps up the energy levels of electrons in special coloured molecules like the green chlorophyll in a plant. This energy is converted to chemical form by the photosynthetic reaction centre, which produces oxygen and incorporates carbon into the plant.

One of the steps of this intricate process is the fastest known chemical reaction in existence, taking place in a trillionth of a second. The oddities of the quantum world come into play in the energy's journey from that first excitation of an electron in chlorophyll to its arrival in the reaction centre, where it gets to work converting carbon dioxide to sugars (and releasing some oxygen in the bargain).

The way the energy passes from molecule to molecule on the way in is a result of quantum particles behaving like waves. The energised wave of the first excited electron extends into the next molecule, passing on the excitation, and so on. What's more, these waves don't seem to take a random drunkard's walk, but rather they overlap, coming into coherence — the state where the waves all ripple together.

This coherent behaviour had been postulated for a while, and there was some weak evidence of it existing from large plant samples, but in 2013 researchers in Spain and Glasgow discovered it at the molecular level, training lasers on single light-processing molecules to observe the detailed workings of the reaction centres that convert photons to chemical energy.

Experiments on light-harvesting purple bacteria also showed that the quantum particle's ability to probabilistically explore all routes and find the best path lets the connections change as parts of the organism move, constantly retuning the process. As a result, the conversion can reach levels of around 90 per cent efficiency, far higher than a solar cell (and possibly with implications for photovoltaic cell development in the future).

The pigeon's spinning, entangled compass

A rather less certain but fascinating possibility is that a quantum effect is behind one of the marvels of nature — the way that birds like homing pigeons can navigate, apparently by picking up the Earth's magnetic field using a built-in compass. This mysterious ability has been linked to magnetic particles in their beaks, but there is also, possibly stronger, evidence that the process is triggered by light hitting the retina of the bird's eye. (In fact three mechanisms have been proposed, and it is entirely possible pigeons use some combination of them.)

When light hits the receptor in the bird's eye, it is used to split a molecule to form two free radicals. These are very reactive molecules that have an unpaired electron (it's free radicals that are restrained by antioxidants from causing cell damage). These electrons can act as tiny magnetic compasses, with their quantum property called spin influenced by a magnetic field.

Typically one radical will be closer to the nearest atomic nucleus, and hence feels the magnetic field less than the other. This difference between the two gives the chemicals a different level of reactivity, making it possible for the bird to get some feedback from the interaction, perhaps by the synthesis of a chemical in the retina. The two unpaired electrons are created entangled, linked to each other in a quantum fashion, and this could help amplify the effect.

I think, therefore I'm quantum

The most extreme — and most contentious — overlap between quantum theory and biology is the idea that consciousness itself is a quantum phenomenon. Although there is no direct evidence to base this theory on, some have suggested that it is not possible to explain the phenomenon of the conscious mind using conventional classical physics, and that it needs quantum effects like entanglement to make it possible.

One suggestion, with the clumsy name of 'orchestrated objective reduction', comes from physicist Roger Penrose and medical doctor Stuart Hameroff. Penrose proposed that the brain is capable of computation that would be impossible using conventional mechanisms, with the probabilistic nature at the heart of quantum theory explaining this extra capability. Hameroff, an anaesthetist, suggested that the cytoskeleton, the structure that supports the neurons in the brain, and in particular microtubules (thin polymers that form part of the cytoskeleton) could act as quantum systems, where electrons tunnel between the microtubules.

The idea that consciousness involves quantum effects does not seem wildly improbable, though as yet the jury is out. We just don't understand what consciousness is, or the mechanism behind it, well enough to explore how much it could depend on quantum effects.

But given that atoms and light are governed by quantum theory, and pretty well everything in our natural world is either atoms or light, it is inevitable that quantum processes will rule in nature.
 
About the author: Brian Clegg is a science writer with a background in physics. His previous books include Build Your Own Time Machine and The Universe Inside You. This is an edited extract from his latest book, The Quantum Age, published by Icon Books.

How Science Can Inform Ethics and Champion Sentient Beings

The arc of the moral universe really is bending toward progress
 
Why is it wrong to enslave or torture other humans, or take their property, or discriminate against them? That these actions are wrong, almost no one disputes. But why are they wrong?

For an answer, most people turn to religion (because God says so), or to philosophy (because rights theory says so), or to political theory (because the social contract says so). In The Moral Arc, published in January, I show how science may also contribute an answer. My moral starting point is the survival and flourishing of sentient beings. By survival, I mean the instinct to live, and by flourishing, I mean having adequate sustenance, safety, shelter, and social relations for physical and mental health. By sentient, I mean emotive, perceptive, sensitive, responsive, conscious, and, especially, having the capacity to feel and to suffer. Instead of using criteria such as tool use, language, reasoning or intelligence, I go deeper into our evolved brains, toward these more basic emotive capacities. There is sound science behind this proposition.

According to the Cambridge Declaration on Consciousness—a statement issued in 2012 by an international group of prominent cognitive and computational neuroscientists, neuropharmacologists and neuroanatomists—there is a continuity between humans and nonhuman animals, and sentience is the most important common characteristic. The neural pathways of emotions, for example, are not confined to higher-level cortical structures in the brain but are found in evolutionarily older subcortical regions. Artificially stimulating the same regions in human and nonhuman animal brains produces the same emotional reactions in both. Attentiveness, decision making, and the emotional capacity to feel and suffer are found across the branches of the evolutionary tree. This is what brings all humans and many nonhuman animals into our moral sphere.

The arc of the moral universe really is bending toward progress, by which I mean the improvement of the survival and flourishing of individual sentient beings. I emphasize the individual because that is who survives and flourishes, or who suffers and dies, not the group, tribe, race, gender, state or any other collective. Individual beings perceive, emote, respond, love, feel and suffer—not populations, races, genders or groups. Historically, abuses have been most rampant—and body counts have run the highest—when the individual is sacrificed for the good of the group. It happens when people are judged by the color of their skin, or by their gender, or by whom they prefer to sleep with, or by which political or religious group they belong to, or by any other distinguishing trait our species has identified to differentiate among members instead of by the content of their individual character.

The rights revolutions of the past three centuries have focused almost entirely on the freedom and autonomy of individuals, not collectives—on the rights of persons, not groups. Individuals vote, not genders. Individuals want to be treated equally, not races. In fact, most rights protect individuals from being discriminated against as individual members of a group, such as by race, creed, color, gender, and now sexual orientation and gender preference.

The singular and separate organism is to biology and society what the atom is to physics—a fundamental unit of nature. The first principle of the survival and flourishing of sentient beings is grounded in the biological fact that it is the discrete organism that is the main target of natural selection and social evolution, not the group. We are a social species, but we are first and foremost individuals within social groups and therefore ought not to be subservient to the collective.

This drive to survive is part of our essence, and therefore the freedom to pursue the fulfillment of that essence is a natural right, by which I mean it is universal and inalienable and thus not contingent only on the laws and customs of a particular culture or government. As a natural right, the personal autonomy of the individual gives us criteria by which we can judge actions as right or wrong: Do they increase or decrease the survival and flourishing of individual sentient beings? Slavery, torture, robbery and discrimination lead to a decrease in survival and flourishing, and thus they are wrong. QED.
This article was originally published with the title "A Moral Starting Point."

Platinum group

From Wikipedia, the free encyclopedia ...