
Tuesday, October 6, 2020

Optical computing

From Wikipedia, the free encyclopedia

Optical or photonic computing uses photons produced by lasers or diodes for computation. For decades, photons have promised to allow a higher bandwidth than the electrons used in conventional computers (see optical fibers).

Most research projects focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. This approach appears to offer the best short-term prospects for commercial optical computing, since optical components could be integrated into traditional computers to produce an optical-electronic hybrid. However, optoelectronic devices lose 30% of their energy converting electronic energy into photons and back; this conversion also slows the transmission of messages. All-optical computers eliminate the need for optical-electrical-optical (OEO) conversions, thus lessening the need for electrical power.

Application-specific devices, such as synthetic aperture radar (SAR) and optical correlators, have been designed to use the principles of optical computing. Correlators can be used, for example, to detect and track objects, and to classify serial time-domain optical data.

Optical components for binary digital computer

The fundamental building block of modern electronic computers is the transistor. To replace electronic components with optical ones, an equivalent optical transistor is required. This is achieved using materials with a non-linear refractive index. In particular, materials exist where the intensity of incoming light affects the intensity of the light transmitted through the material in a similar manner to the current response of a bipolar transistor. Such an optical transistor can be used to create optical logic gates, which in turn are assembled into the higher level components of the computer's CPU. These components would be nonlinear optical crystals used to manipulate light beams so that one beam controls another.
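To make the gate analogy concrete, here is a minimal numerical sketch, not a model of any real material: a sigmoid intensity-transmission curve stands in for the nonlinear medium, and thresholding the combined intensity of two input beams yields AND-like and OR-like behavior. The transfer function and the threshold values are illustrative assumptions.

```python
import numpy as np

def transmit(intensity, threshold, steepness=10.0):
    """Toy nonlinear transmission: inputs below `threshold` pass little
    light, inputs above it pass most of it (a sigmoid switch). The
    functional form is an assumption for illustration only."""
    return intensity / (1.0 + np.exp(-steepness * (intensity - threshold)))

def optical_and(a, b):
    # Two unit-intensity beams combine; only the combined intensity 2.0
    # (both "on") exceeds the switching threshold of 1.5.
    return transmit(a + b, threshold=1.5) > 0.5

def optical_or(a, b):
    # A lower threshold lets a single "on" beam switch the gate.
    return transmit(a + b, threshold=0.5) > 0.5

for x in (0.0, 1.0):
    for y in (0.0, 1.0):
        print(f"A={x:.0f} B={y:.0f}  AND={optical_and(x, y)}  OR={optical_or(x, y)}")
```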

Like any computing system, an optical computing system needs three things to function well:

  1. optical processor
  2. optical data transfer, e.g., fiber-optic cable
  3. optical storage, e.g., CD/DVD/Blu-ray, etc.

Substituting electrical components at any of these stages requires converting data between photons and electrons, which makes the system slower.

Controversy

There are disagreements between researchers about the future capabilities of optical computers; whether or not they may be able to compete with semiconductor-based electronic computers in terms of speed, power consumption, cost, and size is an open question. Critics note that real-world logic systems require "logic-level restoration, cascadability, fan-out and input–output isolation", all of which are currently provided by electronic transistors at low cost, low power, and high speed. For optical logic to be competitive beyond a few niche applications, major breakthroughs in non-linear optical device technology would be required, or perhaps a change in the nature of computing itself.

Misconceptions, challenges, and prospects

A significant challenge to optical computing is that computation is a nonlinear process in which multiple signals must interact. Light, which is an electromagnetic wave, can only interact with another electromagnetic wave in the presence of electrons in a material, and the strength of this interaction is much weaker for electromagnetic waves, such as light, than for the electronic signals in a conventional computer. This may result in the processing elements for an optical computer requiring more power and larger dimensions than those for a conventional electronic computer using transistors.

A further misconception is that since light can travel much faster than the drift velocity of electrons, and at frequencies measured in THz, optical transistors should be capable of extremely high frequencies. However, any electromagnetic wave must obey the transform limit, and therefore the rate at which an optical transistor can respond to a signal is still limited by its spectral bandwidth. In fiber-optic communications, practical limits such as dispersion often constrain channels to bandwidths of tens of GHz, only slightly better than many silicon transistors. Obtaining dramatically faster operation than electronic transistors would therefore require practical methods of transmitting ultrashort pulses down highly dispersive waveguides.
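As a rough numerical illustration of the transform limit: for a transform-limited Gaussian pulse, the product of pulse duration and spectral bandwidth is about 0.441, so the available bandwidth bounds the shortest usable pulse and hence the switching rate. The bandwidth values below are arbitrary examples.

```python
# Transform-limited Gaussian pulse: (duration) x (bandwidth) >= ~0.441
TBP = 0.441

for bw_hz in (10e9, 100e9, 1e12):        # 10 GHz, 100 GHz, 1 THz
    min_pulse = TBP / bw_hz              # shortest pulse this bandwidth allows
    print(f"bandwidth {bw_hz / 1e9:7.1f} GHz -> min pulse {min_pulse * 1e12:6.2f} ps")
```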

Photonic logic

Realization of a photonic controlled-NOT gate for use in quantum computing

Photonic logic is the use of photons (light) in logic gates (NOT, AND, OR, NAND, NOR, XOR, XNOR). Switching is obtained using nonlinear optical effects when two or more signals are combined.

Resonators are especially useful in photonic logic, since they allow a build-up of energy from constructive interference, thus enhancing optical nonlinear effects.

Other approaches that have been investigated include photonic logic at the molecular level, using photoluminescent chemicals. In a demonstration, Witlicki et al. performed logical operations using molecules and surface-enhanced Raman spectroscopy (SERS).

Unconventional approaches

Time-delay optical computing

The basic idea is to delay light (or any other signal) in order to perform useful computations. Of particular interest is solving NP-complete problems, which are difficult for conventional computers.

There are two basic properties of light that are actually used in this approach:

  • The light can be delayed by passing it through an optical fiber of a certain length.
  • The light can be split into multiple (sub)rays. This property is essential because it allows multiple solutions to be evaluated at the same time.

When solving a problem with time delays, the following steps must be followed:

  • The first step is to create a graph-like structure made from optical cables and splitters. Each graph has a start node and a destination node.
  • The light enters through the start node and traverses the graph until it reaches the destination. It is delayed when passing through arcs and divided inside nodes.
  • The light is marked when passing through an arc or a node so that we can easily identify that fact at the destination node.
  • At the destination node we wait for a signal (a fluctuation in the intensity of the light) arriving at one or more particular moments in time. If no signal arrives at the target moment, the problem has no solution; otherwise it does. Fluctuations can be read with a photodetector and an oscilloscope.

The first problem attacked in this way was the Hamiltonian path problem.

The simplest such problem is the subset sum problem. An optical device solving an instance with 4 numbers {a1, a2, a3, a4} is depicted below:

Optical device for solving the Subset sum problem

The light enters the start node, where it is divided into two (sub)rays of smaller intensity. These two rays arrive at the second node at moments a1 and 0. Each of them is divided into two subrays which arrive at the third node at moments 0, a1, a2 and a1 + a2. These represent all the subsets of the set {a1, a2}, so we expect fluctuations in the intensity of the signal at no more than 4 different moments. At the destination node we expect fluctuations at no more than 16 different moments (corresponding to all the subsets of the given set). If there is a fluctuation at the target moment B, the problem has a solution; otherwise no subset has elements summing to B. In a practical implementation we cannot have zero-length cables, so every cable is lengthened by a small value k (the same for all cables). In this case the solution is expected at moment B + n·k.
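The device's behavior is easy to check in software. This sketch (with made-up numbers) mirrors the splitter structure: at each stage a ray either takes the long arc (delay a_i) or the short one (no delay), and every cable adds the fixed delay k, so the arrival times at the destination are exactly the subset sums shifted by n·k.

```python
def subset_sum_arrivals(numbers, k=1):
    """Arrival times at the destination node: each splitter stage either
    adds the delay a_i (long arc) or nothing (short arc); every cable
    also adds the fixed extra delay k."""
    times = [0]
    for a in numbers:
        times = [t + k for t in times] + [t + a + k for t in times]
    return sorted(set(times))

numbers = [3, 5, 8, 13]            # the instance {a1, a2, a3, a4}
B = 16                             # target sum (3 + 13 = 16, so solvable)
n, k = len(numbers), 1
arrivals = subset_sum_arrivals(numbers, k)
print("arrival times:", arrivals)
print("subset summing to", B, "exists:", (B + n * k) in arrivals)
```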

Wavelength-based computing

Wavelength-based computing can be used to solve the 3-SAT problem with n variables, m clauses, and no more than 3 variables per clause. Each wavelength contained in a light ray is treated as a possible truth assignment to the n variables. The optical device uses prisms and mirrors to discriminate the wavelengths corresponding to assignments that satisfy the formula.
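In software the same filtering idea looks like this (a sketch; the optical encoding itself is abstracted away): each of the 2^n assignments plays the role of one wavelength, and each clause acts as a filter stage that blocks the wavelengths violating it.

```python
from itertools import product

def solve_3sat(n, clauses):
    """Each clause is a tuple of literals: +i means variable i is true,
    -i means variable i is false (variables numbered from 1)."""
    # Every assignment plays the role of one wavelength in the ray.
    wavelengths = set(product([False, True], repeat=n))
    for clause in clauses:                       # one filter stage per clause
        wavelengths = {w for w in wavelengths
                       if any(w[abs(l) - 1] == (l > 0) for l in clause)}
    return wavelengths                           # surviving wavelengths = models

# (x1 or x2 or not x3) and (not x1 or x3 or x2) and (not x2 or x3 or x1)
print(solve_3sat(3, [(1, 2, -3), (-1, 3, 2), (-2, 3, 1)]))
```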

Computing by xeroxing on transparencies

This approach uses a Xerox machine and transparent sheets to perform computations. The k-SAT problem with n variables, m clauses, and at most k variables per clause has been solved in 3 steps (a software analogue is sketched after the list):

  • First, all 2^n possible assignments of the n variables are generated by performing n xerox copies.
  • Using at most 2k copies of the truth table, each clause is evaluated at every row of the truth table simultaneously.
  • The solution is obtained by making a single copy operation of the overlapped transparencies of all m clauses.
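A software analogue of the three steps (the photocopier itself is abstracted away; the boolean-array encoding is an assumption for illustration): transparencies are modeled as columns over the 2^n truth-table rows.

```python
import numpy as np

def ksat_by_copying(n, clauses):
    """Clauses use literals +i / -i for variable i true / false."""
    # Step 1: repeated doubling (the xerox copies) generates all 2**n rows.
    rows = np.array([[False], [True]])
    for _ in range(n - 1):
        rows = np.vstack([np.column_stack([rows, np.zeros(len(rows), bool)]),
                          np.column_stack([rows, np.ones(len(rows), bool)])])
    # Step 2: evaluate each clause on every row at once (one sheet per clause).
    sheets = [np.any([rows[:, abs(l) - 1] == (l > 0) for l in clause], axis=0)
              for clause in clauses]
    # Step 3: overlaying all clause sheets in one copy operation is an AND.
    satisfying = np.all(sheets, axis=0)
    return rows[satisfying]

print(ksat_by_copying(3, [(1, -2), (2, 3), (-1, 3)]))
```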

Masking optical beams

The travelling salesman problem has been solved using an optical approach. All possible TSP paths were generated and stored in a binary matrix, which was multiplied by a gray-scale vector containing the distances between cities. The multiplication is performed optically using an optical correlator.
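The matrix-vector idea can be sketched in a few lines, with numpy standing in for the optical correlator; the distances and the brute-force enumeration of tours are illustrative, not part of the optical scheme.

```python
import numpy as np
from itertools import permutations

# Symmetric distance matrix for 4 cities (illustrative values).
D = np.array([[0,  2, 9, 10],
              [2,  0, 6,  4],
              [9,  6, 0,  3],
              [10, 4, 3,  0]])
n = len(D)
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
dist_vec = np.array([D[i, j] for i, j in edges])    # the gray-scale vector

# Binary matrix: one row per tour, 1 where the tour uses that edge.
tours = [(0,) + p for p in permutations(range(1, n))]
M = np.zeros((len(tours), len(edges)), dtype=int)
for r, tour in enumerate(tours):
    for a, b in zip(tour, tour[1:] + (tour[0],)):
        M[r, edges.index((min(a, b), max(a, b)))] = 1

lengths = M @ dist_vec        # the correlator performs this product optically
best = int(np.argmin(lengths))
print("shortest tour:", tours[best], "length:", int(lengths[best]))
```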

Optical Fourier co-processors

Many computations, particularly in scientific applications, require frequent use of the 2D discrete Fourier transform (DFT) – for example in solving differential equations describing propagation of waves or transfer of heat. Though modern GPU technologies typically enable high-speed computation of large 2D DFTs, techniques have been developed that can perform the continuous Fourier transform optically by utilising the natural Fourier-transforming property of lenses. The input is encoded using a liquid crystal spatial light modulator and the result is measured using a conventional CMOS or CCD image sensor. Such optical architectures can offer superior scaling of computational complexity due to the inherently highly interconnected nature of optical propagation, and have been used to solve 2D heat equations.
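What such a co-processor computes can be emulated with a digital 2D FFT. This sketch advances a toy 2D heat equation in the Fourier domain, the transform that the lens system would perform optically; the grid size, diffusivity, and time are arbitrary choices.

```python
import numpy as np

# Toy heat equation u_t = alpha * (u_xx + u_yy) on a periodic grid,
# advanced exactly in Fourier space: u_hat(t) = u_hat(0) * exp(-alpha*k^2*t).
N, alpha, t = 64, 0.1, 1.0
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x)
u0 = np.exp(-((X - np.pi) ** 2 + (Y - np.pi) ** 2))   # initial hot spot

k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)    # angular wavenumbers
KX, KY = np.meshgrid(k, k)
u_hat = np.fft.fft2(u0)               # the step a lens would perform optically
u_hat *= np.exp(-alpha * (KX**2 + KY**2) * t)         # exact decay per mode
u = np.real(np.fft.ifft2(u_hat))

print("peak temperature:", round(u0.max(), 4), "->", round(u.max(), 4))
```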

Ising machines

Physical computers whose design was inspired by the theoretical Ising model are called Ising machines.

Yoshihisa Yamamoto's lab at Stanford pioneered building Ising machines using photons. Initially Yamamoto and his colleagues built an Ising machine using lasers, mirrors, and other optical components commonly found on an optical table.

Later a team at Hewlett Packard Labs developed photonic chip design tools and used them to build an Ising machine on a single chip, integrating 1,052 optical components.

Action at a distance

From Wikipedia, the free encyclopedia

In physics, action at a distance is the concept that an object can be moved, changed, or otherwise affected without being physically touched (as in mechanical contact) by another object. That is, it is the non-local interaction of objects that are separated in space.

This term was used most often in the context of early theories of gravity and electromagnetism to describe how an object responds to the influence of distant objects. For example, Coulomb's law and Newton's law of universal gravitation are such early theories.

More generally "action at a distance" describes the failure of early atomistic and mechanistic theories which sought to reduce all physical interaction to collision. The exploration and resolution of this problematic phenomenon led to significant developments in physics, from the concept of a field, to descriptions of quantum entanglement and the mediator particles of the Standard Model.

Electricity and magnetism

Philosopher William of Ockham discussed action at a distance to explain magnetism and the ability of the Sun to heat the Earth's atmosphere without affecting the intervening space.

Efforts to account for action at a distance in the theory of electromagnetism led to the development of the concept of a field which mediated interactions between currents and charges across empty space. According to field theory, we account for the Coulomb (electrostatic) interaction between charged particles through the fact that charges produce around themselves an electric field, which can be felt by other charges as a force. Maxwell directly addressed the subject of action-at-a-distance in chapter 23 of his A Treatise on Electricity and Magnetism in 1873. He began by reviewing the explanation of Ampère's formula given by Gauss and Weber. On page 437 he indicates the physicists' disgust with action at a distance. In 1845 Gauss wrote to Weber desiring "action, not instantaneous, but propagated in time in a similar manner to that of light". This aspiration was developed by Maxwell with the theory of an electromagnetic field described by Maxwell's equations, which used the field to elegantly account for all electromagnetic interactions, as well as light (which, until then, had been seen as a completely unrelated phenomenon). In Maxwell's theory, the field is its own physical entity, carrying momenta and energy across space, and action-at-a-distance is only the apparent effect of local interactions of charges with their surrounding field.

Electrodynamics was later described without fields (in Minkowski space) as the direct interaction of particles with lightlike separation vectors. This resulted in the Fokker-Tetrode-Schwarzschild action integral. This kind of electrodynamic theory is often called "direct interaction" to distinguish it from field theories where action at a distance is mediated by a localized field (localized in the sense that its dynamics are determined by the nearby field parameters). This description of electrodynamics, in contrast with Maxwell's theory, explains apparent action at a distance not by postulating a mediating entity (a field) but by appealing to the natural geometry of special relativity.

Direct interaction electrodynamics is explicitly symmetrical in time and avoids the infinite energy predicted in the field immediately surrounding point particles. Feynman and Wheeler have shown that it can account for radiation and radiative damping (which had been considered strong evidence for the independent existence of the field). However, various proofs, beginning with that of Dirac, have shown that direct interaction theories (under reasonable assumptions) do not admit Lagrangian or Hamiltonian formulations (these are the so-called no-interaction theorems). Also significant is the measurement and theoretical description of the Lamb shift, which strongly suggests that charged particles interact with their own field. Because of these and other difficulties, fields have been elevated to the fundamental operators in quantum field theory, and modern physics has thus largely abandoned direct interaction theory.

Gravity

Newton

Newton's classical theory of gravity offered no prospect of identifying any mediator of gravitational interaction. His theory assumed that gravitation acts instantaneously, regardless of distance. Kepler's observations gave strong evidence that in planetary motion angular momentum is conserved. (The mathematical proof is valid only in the case of a Euclidean geometry.) In this theory, gravity is a force of attraction between two objects arising from their masses.

From a Newtonian perspective, action at a distance can be regarded as "a phenomenon in which a change in intrinsic properties of one system induces a change in the intrinsic properties of a distant system, independently of the influence of any other systems on the distant system, and without there being a process that carries this influence contiguously in space and time" (Berkovitz 2008).

A related question, raised by Ernst Mach, was how rotating bodies know how much to bulge at the equator. This, it seems, requires an action-at-a-distance from distant matter, informing the rotating object about the state of the universe. Einstein coined the term Mach's principle for this question.

It is inconceivable that inanimate Matter should, without the Mediation of something else, which is not material, operate upon, and affect other matter without mutual Contact…That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro' a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it. Gravity must be caused by an Agent acting constantly according to certain laws; but whether this Agent be material or immaterial, I have left to the Consideration of my readers.

— Isaac Newton, Letters to Bentley, 1692/3

Different authors have attempted to clarify the aspects of remote action and God’s involvement on the basis of textual investigations, mainly from the Mathematical Principles of Natural Philosophy, Newton’s correspondence with Richard Bentley (1692/93), and Queries that Newton introduced at the end of the Opticks book in the first three editions (between 1704 and 1721).

Andrew Janiak, in Newton as philosopher, considered that Newton denied that gravity could be essential to matter, dismissed direct action at a distance, and also rejected the idea of a material substance. But Newton accepted, in Janiak’s view, an immaterial ether, which Janiak takes Newton to identify with God himself: “Newton obviously thinks that God might be the very “immaterial medium” underlying all gravitational interactions among material bodies.”

Steffen Ducheyne, in Newton on Action at a Distance, considered that Newton never accepted direct remote action, only material intervention or immaterial substance.

Hylarie Kochiras, in Gravity and Newton’s substance counting problem, argued that Newton was inclined to reject direct action, giving priority to the hypothesis of an intangible medium. In his speculative moments, however, Newton oscillated between accepting and rejecting direct remote action. According to Kochiras, Newton claims that God is virtually omnipresent, that force/agency must subsist in substance, and that God is substantially omnipresent, which yields a hidden premise: the principle of local action.

Eric Schliesser, in Newton’s substance monism, distant action, and the nature of Newton’s Empiricism, argued that Newton did not categorically refuse the idea that matter is active, and therefore accepted the possibility of direct action at a distance. Newton affirmed the virtual omnipresence of God in addition to his substantial omnipresence.

John Henry, in Gravity and De gravitatione: The Development of Newton’s Ideas on Action at a Distance, also argued that direct remote action was not inconceivable for Newton: Newton rejected the idea that gravity can be explained by subtle matter, accepted the idea of an omnipotent God, and rejected Epicurean attraction.

For further discussion see Ducheyne, S. "Newton on Action at a Distance". Journal of the History of Philosophy vol. 52.4 (2014): 675–702.

Einstein

According to Albert Einstein's theory of special relativity, instantaneous action at a distance violates the relativistic upper limit on speed of propagation of information. If one of the interacting objects were to suddenly be displaced from its position, the other object would feel its influence instantaneously, meaning information had been transmitted faster than the speed of light.

One of the conditions that a relativistic theory of gravitation must meet is that gravity is mediated with a speed that does not exceed c, the speed of light in a vacuum. From the previous success of electrodynamics, it was foreseeable that the relativistic theory of gravitation would have to use the concept of a field, or something similar.

This has been achieved by Einstein's theory of general relativity, in which gravitational interaction is mediated by deformation of space-time geometry. Matter warps the geometry of space-time, and these effects are—as with electric and magnetic fields—propagated at the speed of light. Thus, in the presence of matter, space-time becomes non-Euclidean, resolving the apparent conflict between Newton's proof of the conservation of angular momentum and Einstein's theory of special relativity.

Mach's question regarding the bulging of rotating bodies is resolved because local space-time geometry is informing a rotating body about the rest of the universe. In Newton's theory of motion, space acts on objects, but is not acted upon. In Einstein's theory of motion, matter acts upon space-time geometry, deforming it; and space-time geometry acts upon matter, by affecting the behavior of geodesics.

As a consequence, and unlike the classical theory, general relativity predicts that accelerating masses emit gravitational waves, i.e. disturbances in the curvature of spacetime that propagate outward at lightspeed. Their existence (like many other aspects of relativity) has been experimentally confirmed by astronomers—most dramatically in the direct detection of gravitational waves originating from a black hole merger when they passed through LIGO in 2015.

Quantum mechanics

Since the early twentieth century, quantum mechanics has posed new challenges for the view that physical processes should obey locality. Whether quantum entanglement counts as action-at-a-distance hinges on the nature of the wave function and decoherence, issues over which there is still considerable debate among scientists and philosophers.

One important line of debate originated with Einstein, Boris Podolsky, and Nathan Rosen, who challenged the idea that quantum mechanics offers a complete description of reality. They proposed a thought experiment involving an entangled pair of observables with non-commuting operators (e.g. position and momentum).

This thought experiment, which came to be known as the EPR paradox, hinges on the principle of locality. A common presentation of the paradox is as follows: two particles interact and fly off in opposite directions. Even when the particles are so far apart that any classical interaction would be impossible (see principle of locality), a measurement of one particle nonetheless determines the corresponding result of a measurement of the other.

After the EPR paper, several scientists such as de Broglie studied local hidden variables theories. In the 1960s John Bell derived an inequality that indicated a testable difference between the predictions of quantum mechanics and local hidden variables theories. To date, all experiments testing Bell-type inequalities in situations analogous to the EPR thought experiment have results consistent with the predictions of quantum mechanics, suggesting that local hidden variables theories can be ruled out. Whether or not this is interpreted as evidence for nonlocality depends on one's interpretation of quantum mechanics.

Non-standard interpretations of quantum mechanics vary in their response to the EPR-type experiments. The Bohm interpretation gives an explanation based on nonlocal hidden variables for the correlations seen in entanglement. Many advocates of the many-worlds interpretation argue that it can explain these correlations in a way that does not require a violation of locality, by allowing measurements to have non-unique outcomes.

If "action" is defined as a force, physical work or information, then it should be stated clearly that entanglement cannot communicate action between two entangled particles (Einstein's worry about "spooky action at a distance" does not actually violate special relativity). What happens in entanglement is that a measurement on one entangled particle yields a random result; a later measurement on another particle in the same entangled (shared) quantum state must then yield a value correlated with the first measurement. Since no force, work, or information is communicated (the first measurement is random), the speed of light limit does not apply (see Quantum entanglement and Bell test experiments). In the standard Copenhagen interpretation, as discussed above, entanglement demonstrates a genuine nonlocal effect of quantum mechanics, but does not communicate information, either quantum or classical.

Monday, October 5, 2020

Quantum teleportation (partial)

From Wikipedia, the free encyclopedia

Quantum teleportation is a process in which quantum information (e.g. the exact state of an atom or photon) can be transmitted (exactly, in principle) from one location to another, with the help of classical communication and previously shared quantum entanglement between the sending and receiving location. Because it depends on classical communication, which can proceed no faster than the speed of light, it cannot be used for faster-than-light transport or communication of classical bits. While it has proven possible to teleport one or more qubits of information between two (entangled) quanta, this has not yet been achieved between anything larger than molecules.

Although the name is inspired by the teleportation commonly used in fiction, quantum teleportation is limited to the transfer of information rather than matter itself. Quantum teleportation is not a form of transportation, but of communication: it provides a way of transferring a qubit from one location to another.

The term was coined by physicist Charles Bennett. The seminal paper first expounding the idea of quantum teleportation was published by C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters in 1993. Quantum teleportation was first realized in single photons, later being demonstrated in various material systems such as atoms, ions, electrons and superconducting circuits. The latest reported record distance for quantum teleportation is 1,400 km (870 mi) by the group of Jian-Wei Pan using the Micius satellite for space-based quantum teleportation.

Non-technical summary

In matters relating to quantum or classical information theory, it is convenient to work with the simplest possible unit of information, the two-state system. In classical information, this is a bit, commonly represented using one or zero (or true or false). The quantum analog of a bit is a quantum bit, or qubit. Qubits encode a type of information, called quantum information, which differs sharply from "classical" information. For example, quantum information can be neither copied (the no-cloning theorem) nor destroyed (the no-deleting theorem).

Quantum teleportation provides a mechanism of moving a qubit from one location to another, without having to physically transport the underlying particle to which that qubit is normally attached. Much like the invention of the telegraph allowed classical bits to be transported at high speed across continents, quantum teleportation holds the promise that one day, qubits could be moved likewise. As of 2015, the quantum states of single photons, photon modes, single atoms, atomic ensembles, defect centers in solids, single electrons, and superconducting circuits have been employed as information bearers.

The movement of qubits does not require the movement of "things" any more than communication over the internet does: no quantum object needs to be transported, but it is necessary to communicate two classical bits per teleported qubit from the sender to the receiver. The actual teleportation protocol requires that an entangled quantum state or Bell state be created, and its two parts shared between two locations (the source and destination, or Alice and Bob). In essence, a certain kind of quantum channel between two sites must be established first, before a qubit can be moved. Teleportation also requires a classical information channel to be established, as two classical bits must be transmitted to accompany each qubit. The reason for this is that the results of the measurements must be communicated between the source and destination so as to reconstruct the qubit, or else the state of the destination qubit would not be known to the source, and any attempt to reconstruct the state would be random; this must be done over ordinary classical communication channels. The need for such classical channels may, at first, seem disappointing, and this explains why teleportation is limited to the speed of transfer of information, i.e., the speed of light. The main advantage is that Bell states can be shared using photons from lasers, and so teleportation is achievable through open space, i.e., without the need to send information through cables or optical fibers.

The quantum states of single atoms have been teleported. Quantum states can be encoded in various degrees of freedom of atoms. For example, qubits can be encoded in the degrees of freedom of electrons surrounding the atomic nucleus or in the degrees of freedom of the nucleus itself. It is inaccurate to say "an atom has been teleported". It is the quantum state of an atom that is teleported. Thus, performing this kind of teleportation requires a stock of atoms at the receiving site, available for having qubits imprinted on them. The importance of teleporting the nuclear state is unclear: the nuclear state does affect the atom, e.g. in hyperfine splitting, but whether such state would need to be teleported in some futuristic "practical" application is debatable.

An important aspect of quantum information theory is entanglement, which imposes statistical correlations between otherwise distinct physical systems by creating or placing two or more separate particles into a single, shared quantum state. These correlations hold even when measurements are chosen and performed independently, out of causal contact from one another, as verified in Bell test experiments. Thus, an observation resulting from a measurement choice made at one point in spacetime seems to instantaneously affect outcomes in another region, even though light hasn't yet had time to travel the distance; a conclusion seemingly at odds with special relativity (EPR paradox). However such correlations can never be used to transmit any information faster than the speed of light, a statement encapsulated in the no-communication theorem. Thus, teleportation, as a whole, can never be superluminal, as a qubit cannot be reconstructed until the accompanying classical information arrives.

Understanding quantum teleportation requires a good grounding in finite-dimensional linear algebra, Hilbert spaces and projection matrices. A qubit is described using a two-dimensional complex-valued vector space (a Hilbert space), which is the primary basis for the formal manipulations given below. A working knowledge of quantum mechanics is not absolutely required to understand the mathematics of quantum teleportation, although without such acquaintance, the deeper meaning of the equations may remain quite mysterious.

Protocol

Diagram for quantum teleportation of a photon

The prerequisites for quantum teleportation are a qubit that is to be teleported, a conventional communication channel capable of transmitting two classical bits (i.e., one of four states), and a means of generating an entangled EPR pair of qubits, transporting each of these to two different locations, A and B, performing a Bell measurement on one of the EPR pair qubits, and manipulating the quantum state of the other. The protocol is then as follows (a numerical sketch appears after the list):

  1. An EPR pair is generated, one qubit sent to location A, the other to B.
  2. At location A, a Bell measurement of the EPR pair qubit and the qubit to be teleported (the quantum state |ψ⟩) is performed, yielding one of four measurement outcomes, which can be encoded in two classical bits of information. Both qubits at location A are then discarded.
  3. Using the classical channel, the two bits are sent from A to B. (This is the only potentially time-consuming step after step 1, due to speed-of-light considerations.)
  4. As a result of the measurement performed at location A, the EPR pair qubit at location B is in one of four possible states. Of these four possible states, one is identical to the original quantum state |ψ⟩, and the other three are closely related. Which of these four possibilities actually obtained is encoded in the two classical bits. Knowing this, the EPR pair qubit at location B is modified in one of three ways, or not at all, resulting in a qubit identical to |ψ⟩, the qubit that was chosen for teleportation.

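Here is a minimal state-vector simulation of the four steps (a sketch of the abstract protocol, not of any lab implementation): Alice Bell-measures the unknown qubit together with her half of the pair, and Bob applies the Pauli correction selected by the two classical bits.

```python
import numpy as np

rng = np.random.default_rng(7)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary normalized qubit |psi> to teleport.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Step 1: Bell pair (|00> + |11>)/sqrt(2) shared between locations A and B.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                     # qubit order: psi, A, B

# Step 2: Bell measurement on Alice's two qubits (psi and A).
bell_basis = [np.array([1, 0, 0,  1]) / np.sqrt(2),    # Phi+
              np.array([1, 0, 0, -1]) / np.sqrt(2),    # Phi-
              np.array([0, 1,  1, 0]) / np.sqrt(2),    # Psi+
              np.array([0, 1, -1, 0]) / np.sqrt(2)]    # Psi-
posts, probs = [], []
for b in bell_basis:
    P = np.kron(np.outer(b, b), I2)            # acts on (psi, A), leaves B
    post = P @ state
    posts.append(post)
    probs.append(np.real(np.vdot(post, post)))
outcome = rng.choice(4, p=np.array(probs) / sum(probs))

# Step 3: the two classical bits (the outcome index) are sent to Bob.
# Step 4: Bob applies the correction selected by those bits.
rows = posts[outcome].reshape(4, 2)            # (Alice pair) x (Bob) amplitudes
r = np.flatnonzero(np.linalg.norm(rows, axis=1) > 1e-9)[0]
bob = rows[r] / np.linalg.norm(rows[r])        # Bob's qubit before correction
bob = [I2, Z, X, Z @ X][outcome] @ bob

print("fidelity with |psi>:", round(abs(np.vdot(bob, psi))**2, 6))  # -> 1.0
```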
It is worth noting that the above protocol assumes that the qubits are individually addressable, that is, that the qubits are distinguishable and physically labeled. However, there can be situations where two identical qubits are indistinguishable due to the spatial overlap of their wave functions. Under this condition, the qubits cannot be individually controlled or measured. Nevertheless, a teleportation protocol analogous to that described above can still be (conditionally) implemented by exploiting two independently prepared qubits, with no need for an initial EPR pair. This can be done by addressing the internal degrees of freedom of the qubits (e.g., spins or polarizations) with spatially localized measurements performed in separated regions A and B shared by the wave functions of the two indistinguishable qubits.

Experimental results and records

Work in 1998 verified the initial predictions, and the distance of teleportation was increased in August 2004 to 600 meters, using optical fiber. Subsequently, the record distance for quantum teleportation has been gradually increased to 16 kilometres (9.9 mi), then to 97 km (60 mi), and is now 143 km (89 mi), set in open air experiments in the Canary Islands, done between the two astronomical observatories of the Instituto de Astrofísica de Canarias. There has been a recent record set (as of September 2015) using superconducting nanowire detectors that reached the distance of 102 km (63 mi) over optical fiber. For material systems, the record distance is 21 metres (69 ft).

A variant of teleportation called "open-destination" teleportation, with receivers located at multiple locations, was demonstrated in 2004 using five-photon entanglement. Teleportation of a composite state of two single qubits has also been realized. In April 2011, experimenters reported that they had demonstrated teleportation of wave packets of light up to a bandwidth of 10 MHz while preserving strongly nonclassical superposition states. In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported. On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods. On 26 February 2015, scientists at the University of Science and Technology of China in Hefei, led by Chao-yang Lu and Jian-Wei Pan, carried out the first experiment teleporting multiple degrees of freedom of a quantum particle. They managed to teleport the quantum information from an ensemble of rubidium atoms to another ensemble of rubidium atoms over a distance of 150 metres (490 ft) using entangled photons. In 2016, researchers demonstrated quantum teleportation with two independent sources separated by 6.5 km (4.0 mi) in the Hefei optical fiber network. In September 2016, researchers at the University of Calgary demonstrated quantum teleportation over the Calgary metropolitan fiber network over a distance of 6.2 km (3.9 mi).

Researchers have also successfully used quantum teleportation to transmit information between clouds of gas atoms, notable because the clouds of gas are macroscopic atomic ensembles.

In 2018, physicists at Yale demonstrated a deterministic teleported CNOT operation between logically encoded qubits.

EPR paradox

From Wikipedia, the free encyclopedia

The Einstein–Podolsky–Rosen paradox (EPR paradox) is a thought experiment proposed by physicists Albert Einstein, Boris Podolsky and Nathan Rosen (EPR), with which they argued that the description of physical reality provided by quantum mechanics was incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing them. Resolutions of the paradox have important implications for the interpretation of quantum mechanics.

The thought experiment involves a pair of particles prepared in an entangled state (note that this terminology was invented only later). Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If, instead, the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity". From this, they inferred that the second particle must have a definite value of position and of momentum prior to either being measured. This contradicted the view associated with Niels Bohr and Werner Heisenberg, according to which a quantum particle does not have a definite value of a property like momentum until the measurement takes place.

History

The work was done in 1934 at the Institute for Advanced Study, which Einstein had joined the prior year after he had fled Nazi Germany. The resulting paper was written by Podolsky, and Einstein thought it did not accurately reflect his own views. The publication of the paper prompted a response by Niels Bohr, which he published in the same journal, in the same year, using the same title. This exchange was only one chapter in a prolonged debate between Bohr and Einstein about the fundamental nature of reality.

Einstein struggled unsuccessfully for the rest of his life to find a theory that could better comply with his idea of locality. Since his death, experiments analogous to the one described in the EPR paper have been carried out (notably by the group of Alain Aspect in the 1980s). These experiments have confirmed that physical probabilities, as predicted by quantum theory, do exhibit Bell-inequality violations, which are considered to invalidate EPR's preferred "local hidden-variables" type of explanation for the correlations to which EPR first drew attention.

The paradox

The original paper purports to describe what must happen to "two systems I and II, which we permit to interact ...", and, after some time, "we suppose that there is no longer any interaction between the two parts." The EPR description involves "two particles, A and B, [which] interact briefly and then move off in opposite directions." According to Heisenberg's uncertainty principle, it is impossible to measure both the momentum and the position of particle B exactly. However, it is possible to measure the exact position of particle A. By calculation, therefore, with the exact position of particle A known, the exact position of particle B can be known. Alternatively, the exact momentum of particle A can be measured, so the exact momentum of particle B can be worked out. As Manjit Kumar writes, "EPR argued that they had proved that ... [particle] B can have simultaneously exact values of position and momentum. ... Particle B has a position that is real and a momentum that is real."

EPR appeared to have contrived a means to establish the exact values of either the momentum or the position of B due to measurements made on particle A, without the slightest possibility of particle B being physically disturbed.

EPR tried to set up a paradox to question the range of true application of quantum mechanics: Quantum theory predicts that both values cannot be known for a particle, and yet the EPR thought experiment purports to show that they must all have determinate values. The EPR paper says: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete."

The EPR paper ends by saying:

While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible.

The 1935 EPR paper condensed the philosophical discussion into a physical argument. The authors claim that given a specific experiment, in which the outcome of a measurement is known before the measurement takes place, there must exist something in the real world, an "element of reality", that determines the measurement outcome. They postulate that these elements of reality are, in modern terminology, local, in the sense that each belongs to a certain point in spacetime. Each element may, again in modern terminology, only be influenced by events which are located in the backward light cone of its point in spacetime (i.e., the past). These claims are founded on assumptions about nature that constitute what is now known as local realism.

Though the EPR paper has often been taken as an exact expression of Einstein's views, it was primarily authored by Podolsky, based on discussions at the Institute for Advanced Study with Einstein and Rosen. Einstein later expressed to Erwin Schrödinger that, "it did not come out as well as I had originally wanted; rather, the essential thing was, so to speak, smothered by the formalism." (Einstein would later go on to present an individual account of his local realist ideas.) Shortly before the EPR paper appeared in the Physical Review, the New York Times ran a news story about it, under the headline "Einstein Attacks Quantum Theory". The story, which quoted Podolsky, irritated Einstein, who wrote to the Times, "Any information upon which the article 'Einstein Attacks Quantum Theory' in your issue of May 4 is based was given to you without authority. It is my invariable practice to discuss scientific matters only in the appropriate forum and I deprecate advance publication of any announcement in regard to such matters in the secular press."

The Times story also sought out comment from physicist Edward Condon, who said, "Of course, a great deal of the argument hinges on just what meaning is to be attached to the word 'reality' in physics." The physicist and historian Max Jammer later noted, "[I]t remains a historical fact that the earliest criticism of the EPR paper — moreover, a criticism which correctly saw in Einstein's conception of physical reality the key problem of the whole issue — appeared in a daily newspaper prior to the publication of the criticized paper itself."

Bohr's reply

Bohr's response to the EPR paper was published in the Physical Review later in 1935. He argued that EPR had reasoned fallaciously. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."

Einstein's own argument

In his own publications and correspondence, Einstein used a different argument to insist that quantum mechanics is an incomplete theory. He explicitly de-emphasized EPR's attribution of "elements of reality" to the position and momentum of particle B, saying that "I couldn't care less" whether the resulting states of particle B allowed one to predict the position and momentum with certainty.

For Einstein, the crucial part of the argument was the demonstration of nonlocality: the choice of measurement made on particle A, either position or momentum, would lead to two different quantum states of particle B. He argued that, because of locality, the real state of particle B could not depend on which kind of measurement was done on A, and therefore the quantum states cannot be in one-to-one correspondence with the real states.

Later developments

Bohm's variant

In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The EPR–Bohm thought experiment can be explained using electron–positron pairs. Suppose we have a source that emits electron–positron pairs, with the electron sent to destination A, where there is an observer named Alice, and the positron sent to destination B, where there is an observer named Bob. According to quantum mechanics, we can arrange our source so that each emitted pair occupies a quantum state called a spin singlet. The particles are thus said to be entangled. This can be viewed as a quantum superposition of two states, which we call state I and state II. In state I, the electron has spin pointing upward along the z-axis (+z) and the positron has spin pointing downward along the z-axis (−z). In state II, the electron has spin −z and the positron has spin +z. Because it is in a superposition of states it is impossible without measuring to know the definite state of spin of either particle in the spin singlet.

The EPR thought experiment, performed with electron–positron pairs. A source (center) sends particles toward two observers, electrons to Alice (left) and positrons to Bob (right), who can perform spin measurements.

Alice now measures the spin along the z-axis. She can obtain one of two possible outcomes: +z or −z. Suppose she gets +z. Informally speaking, the quantum state of the system collapses into state I. The quantum state determines the probable outcomes of any measurement performed on the system. In this case, if Bob subsequently measures spin along the z-axis, there is 100% probability that he will obtain −z. Similarly, if Alice gets −z, Bob will get +z.

There is, of course, nothing special about choosing the z-axis: according to quantum mechanics the spin singlet state may equally well be expressed as a superposition of spin states pointing in the x direction. Suppose that Alice and Bob had decided to measure spin along the x-axis. We'll call these states Ia and IIa. In state Ia, Alice's electron has spin +x and Bob's positron has spin −x. In state IIa, Alice's electron has spin −x and Bob's positron has spin +x. Therefore, if Alice measures +x, the system 'collapses' into state Ia, and Bob will get −x. If Alice measures −x, the system collapses into state IIa, and Bob will get +x.

Whatever axis their spins are measured along, they are always found to be opposite. In quantum mechanics, the x-spin and z-spin are "incompatible observables", meaning the Heisenberg uncertainty principle applies to alternating measurements of them: a quantum state cannot possess a definite value for both of these variables. Suppose Alice measures the z-spin and obtains +z, so that the quantum state collapses into state I. Now, instead of measuring the z-spin as well, Bob measures the x-spin. According to quantum mechanics, when the system is in state I, Bob's x-spin measurement will have a 50% probability of producing +x and a 50% probability of -x. It is impossible to predict which outcome will appear until Bob actually performs the measurement.

Therefore, Bob's positron will have a definite spin when measured along the same axis as Alice's electron, but when measured in the perpendicular axis its spin will be uniformly random. It seems as if information has propagated (faster than light) from Alice's apparatus to make Bob's positron assume a definite spin in the appropriate axis.
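These statements can be checked directly with the standard spin-1/2 formalism. The sketch below (illustrative) repeatedly prepares the singlet, lets Alice measure along z, and then lets Bob measure along either z or x: the same-axis products are always −1, while the perpendicular-axis products average to about 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, axis_projectors, who):
    """Projective spin measurement on one side of a two-qubit state.
    `who` is 0 for Alice's electron, 1 for Bob's positron."""
    ops = [np.kron(p, np.eye(2)) if who == 0 else np.kron(np.eye(2), p)
           for p in axis_projectors]
    probs = [np.real(np.vdot(op @ state, op @ state)) for op in ops]
    k = rng.choice(2, p=np.array(probs) / sum(probs))
    post = ops[k] @ state
    return (+1 if k == 0 else -1), post / np.linalg.norm(post)

# Projectors onto +/- z and +/- x for a single spin-1/2.
Pz = (np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, 1]]))
plus_x, minus_x = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
Px = (np.outer(plus_x, plus_x), np.outer(minus_x, minus_x))

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|+z,-z> - |-z,+z>)/sqrt(2)

same_axis, perp = [], []
for _ in range(2000):
    a, post = measure(singlet, Pz, who=0)        # Alice measures z
    b, _ = measure(post, Pz, who=1)              # Bob measures z too
    same_axis.append(a * b)
    a, post = measure(singlet, Pz, who=0)
    b, _ = measure(post, Px, who=1)              # Bob measures x instead
    perp.append(a * b)

print("same axis <ab> =", np.mean(same_axis))        # -> -1.0 (always opposite)
print("perp axes <ab> =", round(np.mean(perp), 3))   # -> ~0 (50/50 outcomes)
```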

Bell's theorem

In 1964, John Bell published a paper investigating the puzzling situation at that time: on one hand, the EPR paradox purportedly showed that quantum mechanics was nonlocal, and suggested that a hidden-variable theory could heal this nonlocality. On the other hand, David Bohm had recently developed the first successful hidden-variable theory, but it had a grossly nonlocal character. Bell set out to investigate whether it was indeed possible to solve the nonlocality problem with hidden variables, and found out that first, the correlations shown in both EPR's and Bohm's versions of the paradox could indeed be explained in a local way with hidden variables, and second, that the correlations shown in his own variant of the paradox couldn't be explained by any local hidden-variable theory. This second result became known as the Bell theorem.

To understand the first result, consider the following toy hidden-variable theory introduced later by J.J. Sakurai: in it, quantum spin-singlet states emitted by the source are actually approximate descriptions for "true" physical states possessing definite values for the z-spin and x-spin. In these "true" states, the positron going to Bob always has spin values opposite to the electron going to Alice, but the values are otherwise completely random. For example, the first pair emitted by the source might be "(+z, −x) to Alice and (−z, +x) to Bob", the next pair "(−z, −x) to Alice and (+z, +x) to Bob", and so forth. Therefore, if Bob's measurement axis is aligned with Alice's, he will necessarily get the opposite of whatever Alice gets; otherwise, he will get "+" and "−" with equal probability.

Bell showed, however, that such models can only reproduce the singlet correlations when Alice and Bob make measurements on the same axis or on perpendicular axes. As soon as other angles between their axes are allowed, local hidden-variable theories become unable to reproduce the quantum mechanical correlations. This difference, expressed using inequalities known as "Bell inequalities", is in principle experimentally testable. After the publication of Bell's paper, a variety of experiments to test Bell's inequalities were devised. All experiments conducted to date have found behavior in line with the predictions of quantum mechanics. The present view of the situation is that quantum mechanics flatly contradicts Einstein's philosophical postulate that any acceptable physical theory must fulfill "local realism". The fact that quantum mechanics violates Bell inequalities indicates that any hidden-variable theory underlying quantum mechanics must be non-local; whether this should be taken to imply that quantum mechanics itself is non-local is a matter of debate.
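To see the gap numerically, one can compare the quantum singlet correlation E(a, b) = −cos(a − b) with the toy model's prediction. Extending the toy model to intermediate angles requires some rule; the nearest-axis readout below is our own illustrative assumption, and any such local rule runs into the same problem. The CHSH combination at the end shows the quantum value 2√2 exceeding the local bound of 2.

```python
import numpy as np

def E_quantum(a, b):
    # Singlet correlation for spin measurements along coplanar axes at
    # angles a and b: E(a, b) = -cos(a - b).
    return -np.cos(a - b)

def E_toy(a, b):
    """Sakurai's toy model: each pair carries definite, opposite z- and
    x-values. To handle an arbitrary axis we assume (for illustration)
    that a detector reads the z value when its axis is closer to z,
    and the x value otherwise."""
    def readout(t, z, x):
        return z if abs(np.cos(t)) >= abs(np.sin(t)) else x
    pairs = [(z, x) for z in (-1, 1) for x in (-1, 1)]   # equally likely
    return np.mean([readout(a, z, x) * readout(b, -z, -x) for z, x in pairs])

for deg in (0, 45, 90):
    t = np.radians(deg)
    print(f"angle {deg:3d}: quantum {E_quantum(0.0, t):+.3f}   toy {E_toy(0.0, t):+.3f}")

# CHSH combination: any local hidden-variable model obeys |S| <= 2.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E_quantum(a, b) - E_quantum(a, b2) + E_quantum(a2, b) + E_quantum(a2, b2)
print("quantum |S| =", round(abs(S), 3), "> 2")
```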

Steering

Inspired by Schrödinger's treatment of the EPR paradox back in 1935, Wiseman et al. formalised it in 2007 as the phenomenon of quantum steering. They defined steering as the situation where Alice's measurements on a part of an entangled state steer Bob's part of the state. That is, Bob's observations cannot be explained by a local hidden state model, in which Bob would hold a fixed quantum state on his side that is classically correlated with, but otherwise independent of, Alice's.

Locality in the EPR paradox

The word locality has several different meanings in physics. EPR describe the principle of locality as asserting that physical processes occurring at one place should have no immediate effect on the elements of reality at another location. At first sight, this appears to be a reasonable assumption to make, as it seems to be a consequence of special relativity, which states that energy can never be transmitted faster than the speed of light without violating causality.

However, it turns out that the usual rules for combining quantum mechanical and classical descriptions violate EPR's principle of locality without violating special relativity or causality. Causality is preserved because there is no way for Alice to transmit messages (i.e., information) to Bob by manipulating her measurement axis. Whichever axis she uses, she has a 50% probability of obtaining "+" and 50% probability of obtaining "−", completely at random; according to quantum mechanics, it is fundamentally impossible for her to influence what result she gets. Furthermore, Bob is only able to perform his measurement once: there is a fundamental property of quantum mechanics, the no-cloning theorem, which makes it impossible for him to make an arbitrary number of copies of the electron he receives, perform a spin measurement on each, and look at the statistical distribution of the results. Therefore, in the one measurement he is allowed to make, there is a 50% probability of getting "+" and 50% of getting "−", regardless of whether or not his axis is aligned with Alice's.

In summary, the results of the EPR thought experiment do not contradict the predictions of special relativity. Neither the EPR paradox nor any quantum experiment demonstrates that superluminal signaling is possible.

However, the principle of locality appeals powerfully to physical intuition, and Einstein, Podolsky and Rosen were unwilling to abandon it. Einstein derided the quantum mechanical predictions as "spooky action at a distance". The conclusion they drew was that quantum mechanics is not a complete theory.

Mathematical formulation

Bohm's variant of the EPR paradox can be expressed mathematically using the quantum mechanical formulation of spin. The spin degree of freedom for an electron is associated with a two-dimensional complex vector space V, with each quantum state corresponding to a vector in that space. The operators corresponding to the spin along the x, y, and z direction, denoted Sx, Sy, and Sz respectively, can be represented using the Pauli matrices:

$$ S_x = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad S_y = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad S_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, $$

where $\hbar$ is the reduced Planck constant (the Planck constant divided by 2π).

The eigenstates of $S_z$ are represented as

$$ |+z\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |-z\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, $$

and the eigenstates of $S_x$ are represented as

$$ |+x\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad |-x\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}. $$

The vector space of the electron-positron pair is $V \otimes V$, the tensor product of the electron's and positron's vector spaces. The spin singlet state is

$$ |\psi\rangle = \frac{1}{\sqrt{2}} \bigl( |+z\rangle \otimes |-z\rangle - |-z\rangle \otimes |+z\rangle \bigr), $$

where the two terms on the right hand side are what we have referred to as state I and state II above.

From the above equations, it can be shown that the spin singlet can also be written (up to an overall phase) as

$$ |\psi\rangle = \frac{1}{\sqrt{2}} \bigl( |+x\rangle \otimes |-x\rangle - |-x\rangle \otimes |+x\rangle \bigr), $$

where the terms on the right hand side are what we have referred to as state Ia and state IIa.

To illustrate the paradox, we need to show that after Alice's measurement of Sz (or Sx), Bob's value of Sz (or Sx) is uniquely determined and Bob's value of Sx (or Sz) is uniformly random. This follows from the principles of measurement in quantum mechanics. When Sz is measured, the system state collapses into an eigenvector of Sz. If the measurement result is +z, this means that immediately after measurement the system state collapses to

$$ |+z\rangle \otimes |-z\rangle = |+z\rangle \otimes \frac{|+x\rangle - |-x\rangle}{\sqrt{2}}. $$

Similarly, if Alice's measurement result is −z, the state collapses to

$$ |-z\rangle \otimes |+z\rangle = |-z\rangle \otimes \frac{|+x\rangle + |-x\rangle}{\sqrt{2}}. $$

The left hand sides of the two equations show that the measurement of Sz on Bob's positron is now determined: it will be −z in the first case and +z in the second. The right hand sides show that a measurement of Sx on Bob's positron will return, in either case, +x or −x with probability 1/2 each.
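These identities can be verified numerically in a few lines (an illustrative check): the two basis expansions of the singlet agree up to a global phase, and after Alice's +z outcome, Bob's x-probabilities are each 1/2.

```python
import numpy as np

up_z, dn_z = np.array([1, 0]), np.array([0, 1])
up_x = np.array([1, 1]) / np.sqrt(2)
dn_x = np.array([1, -1]) / np.sqrt(2)

singlet_z = (np.kron(up_z, dn_z) - np.kron(dn_z, up_z)) / np.sqrt(2)
singlet_x = (np.kron(up_x, dn_x) - np.kron(dn_x, up_x)) / np.sqrt(2)
print("same state up to overall phase:", np.allclose(singlet_z, -singlet_x))

# After Alice obtains +z, the pair is in |+z> ⊗ |-z>; Bob's x outcomes:
collapsed = np.kron(up_z, dn_z)
for label, bx in (("+x", up_x), ("-x", dn_x)):
    P = np.kron(np.eye(2), np.outer(bx, bx))   # projector on Bob's side only
    p = np.vdot(P @ collapsed, P @ collapsed)
    print(f"P({label} on Bob's positron) = {p.real:.2f}")   # -> 0.50
```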
