
Saturday, December 21, 2024

Integrable system

From Wikipedia, the free encyclopedia

In mathematics, integrability is a property of certain dynamical systems. While there are several distinct formal definitions, informally speaking, an integrable system is a dynamical system with sufficiently many conserved quantities, or first integrals, that its motion is confined to a submanifold of much smaller dimensionality than that of its phase space.

Three features are often referred to as characterizing integrable systems:

  • the existence of a maximal set of conserved quantities (the usual defining property of complete integrability)
  • the existence of algebraic invariants, having a basis in algebraic geometry (a property known sometimes as algebraic integrability)
  • the explicit determination of solutions in an explicit functional form (not an intrinsic property, but something often referred to as solvability)

Integrable systems may be seen as very different in qualitative character from more generic dynamical systems, which are more typically chaotic systems. The latter generally have no conserved quantities, and are asymptotically intractable, since an arbitrarily small perturbation in initial conditions may lead to arbitrarily large deviations in their trajectories over a sufficiently large time.

Many systems studied in physics are completely integrable, in particular, in the Hamiltonian sense, the key example being multi-dimensional harmonic oscillators. Another standard example is planetary motion about either one fixed center (e.g., the sun) or two. Other elementary examples include the motion of a rigid body about its center of mass (the Euler top) and the motion of an axially symmetric rigid body about a point in its axis of symmetry (the Lagrange top).
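The conserved quantities behind these classical examples can be checked numerically. The sketch below (step size, initial conditions, and the hand-rolled RK4 integrator are our own choices) integrates the planar Kepler problem and confirms that the energy and angular momentum stay constant along the flow.

```python
import numpy as np

def kepler_rhs(state, mu=1.0):
    # state = (x, y, vx, vy); acceleration = -mu * r / |r|^3
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return np.array([vx, vy, -mu * x / r3, -mu * y / r3])

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(state, mu=1.0):
    x, y, vx, vy = state
    return 0.5 * (vx**2 + vy**2) - mu / np.hypot(x, y)

def ang_mom(state):
    x, y, vx, vy = state
    return x * vy - y * vx

state = np.array([1.0, 0.0, 0.0, 1.2])   # mildly elliptic bound orbit
E0, L0 = energy(state), ang_mom(state)
for _ in range(20000):                   # integrate to t = 20
    state = rk4_step(kepler_rhs, state, 1e-3)
E_drift = abs(energy(state) - E0)
L_drift = abs(ang_mom(state) - L0)
print(E_drift, L_drift)   # both tiny: the first integrals are preserved
```

The Kepler problem in fact carries a further conserved quantity (the Laplace-Runge-Lenz vector), which is why it is superintegrable in the sense discussed below.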

In the late 1960s, it was realized that there are completely integrable systems in physics having an infinite number of degrees of freedom, such as some models of shallow water waves (Korteweg–de Vries equation), the Kerr effect in optical fibres, described by the nonlinear Schrödinger equation, and certain integrable many-body systems, such as the Toda lattice. The modern theory of integrable systems was revived with the numerical discovery of solitons by Martin Kruskal and Norman Zabusky in 1965, which led to the inverse scattering transform method in 1967.

In the special case of Hamiltonian systems, if there are enough independent Poisson commuting first integrals for the flow parameters to be able to serve as a coordinate system on the invariant level sets (the leaves of the Lagrangian foliation), and if the flows are complete and the energy level set is compact, the Liouville–Arnold theorem applies, guaranteeing the existence of action-angle variables. General dynamical systems have no such conserved quantities; in the case of autonomous Hamiltonian systems, the energy is generally the only one, and on the energy level sets, the flows are typically chaotic.

A key ingredient in characterizing integrable systems is the Frobenius theorem, which states that a system is Frobenius integrable (i.e., is generated by an integrable distribution) if, locally, it has a foliation by maximal integral manifolds. But integrability, in the sense of dynamical systems, is a global property, not a local one, since it requires that the foliation be a regular one, with the leaves embedded submanifolds.

Integrability does not necessarily imply that generic solutions can be explicitly expressed in terms of some known set of special functions; it is an intrinsic property of the geometry and topology of the system, and the nature of the dynamics.

General dynamical systems

In the context of differentiable dynamical systems, the notion of integrability refers to the existence of invariant, regular foliations; i.e., ones whose leaves are embedded submanifolds of the smallest possible dimension that are invariant under the flow. There is thus a variable notion of the degree of integrability, depending on the dimension of the leaves of the invariant foliation. This concept has a refinement in the case of Hamiltonian systems, known as complete integrability in the sense of Liouville (see below), which is what is most frequently referred to in this context.

An extension of the notion of integrability is also applicable to discrete systems such as lattices. This definition can be adapted to describe evolution equations that either are systems of differential equations or finite difference equations.

The distinction between integrable and nonintegrable dynamical systems has the qualitative implication of regular motion vs. chaotic motion and hence is an intrinsic property, not just a matter of whether a system can be explicitly integrated in an exact form.

Hamiltonian systems and Liouville integrability

In the special setting of Hamiltonian systems, we have the notion of integrability in the Liouville sense. (See the Liouville–Arnold theorem.) Liouville integrability means that there exists a regular foliation of the phase space by invariant manifolds such that the Hamiltonian vector fields associated with the invariants of the foliation span the tangent distribution. Another way to state this is that there exists a maximal set of functionally independent Poisson commuting invariants (i.e., independent functions on the phase space whose Poisson brackets with the Hamiltonian of the system, and with each other, vanish).

In finite dimensions, if the phase space is symplectic (i.e., the center of the Poisson algebra consists only of constants), it must have even dimension 2n, and the maximal number of independent Poisson commuting invariants (including the Hamiltonian itself) is n. The leaves of the foliation are totally isotropic with respect to the symplectic form and such a maximal isotropic foliation is called Lagrangian. All autonomous Hamiltonian systems (i.e. those for which the Hamiltonian and Poisson brackets are not explicitly time-dependent) have at least one invariant; namely, the Hamiltonian itself, whose value along the flow is the energy. If the energy level sets are compact, the leaves of the Lagrangian foliation are tori, and the natural linear coordinates on these are called "angle" variables. The cycles of the canonical one-form are called the action variables, and the resulting canonical coordinates are called action-angle variables (see below).
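For the two-dimensional harmonic oscillator (n = 2), the two partial energies give such a maximal Poisson commuting set. The sketch below verifies this symbolically with SymPy; the helper `pbracket` is our own implementation of the canonical Poisson bracket.

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
w1, w2 = sp.symbols('omega1 omega2', positive=True)

def pbracket(f, g):
    # canonical Poisson bracket {f, g} in the variables (q1, q2, p1, p2)
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in [(q1, p1), (q2, p2)])

# partial energies of a 2D harmonic oscillator; the Hamiltonian is their sum
H1 = (p1**2 + w1**2 * q1**2) / 2
H2 = (p2**2 + w2**2 * q2**2) / 2
H = H1 + H2

print(sp.simplify(pbracket(H1, H2)))  # 0: the invariants Poisson-commute
print(sp.simplify(pbracket(H, H1)))   # 0: H1 is conserved along the flow of H
```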

There is also a distinction between complete integrability, in the Liouville sense, and partial integrability, as well as a notion of superintegrability and maximal superintegrability. Essentially, these distinctions correspond to the dimensions of the leaves of the foliation. When the number of independent Poisson commuting invariants is less than maximal (but, in the case of autonomous systems, more than one), we say the system is partially integrable. When there exist further functionally independent invariants, beyond the maximal number that can be Poisson commuting, and hence the dimension of the leaves of the invariant foliation is less than n, we say the system is superintegrable. If there is a regular foliation with one-dimensional leaves (curves), this is called maximally superintegrable.

Action-angle variables

When a finite-dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables, such that the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the tori. The motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables.
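For the one-dimensional harmonic oscillator the action variable can be computed directly as a loop integral over the invariant torus (here a closed curve in the phase plane). The sketch below, with illustrative values of the energy and frequency, evaluates I = (1/2π) ∮ p dq numerically and recovers the closed form E/ω.

```python
import numpy as np

# Action variable I = (1/2π) ∮ p dq for H = (p² + ω² q²)/2 at energy E.
# On the level set, p(q) = ±sqrt(2E − ω² q²); the loop integral is twice
# the integral of the positive branch between the turning points ±q_max.
E, omega = 2.0, 3.0
q_max = np.sqrt(2 * E) / omega

q = np.linspace(-q_max, q_max, 200001)
p = np.sqrt(np.clip(2 * E - omega**2 * q**2, 0.0, None))
dq = q[1] - q[0]
I = (1.0 / np.pi) * np.sum(0.5 * (p[:-1] + p[1:])) * dq   # (1/2π) · 2 ∫ p dq

print(I, E / omega)   # both ≈ 0.6667: the action equals E/ω for the oscillator
```

Geometrically, ∮ p dq is the area 2πE/ω of the energy ellipse, which is why the closed form drops out.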

The Hamilton–Jacobi approach

In canonical transformation theory, there is the Hamilton–Jacobi method, in which solutions to Hamilton's equations are sought by first finding a complete solution of the associated Hamilton–Jacobi equation. In classical terminology, this is described as determining a transformation to a canonical set of coordinates consisting of completely ignorable variables; i.e., those in which there is no dependence of the Hamiltonian on a complete set of canonical "position" coordinates, and hence the corresponding canonically conjugate momenta are all conserved quantities. In the case of compact energy level sets, this is the first step towards determining the action-angle variables. In the general theory of partial differential equations of Hamilton–Jacobi type, a complete solution (i.e. one that depends on n independent constants of integration, where n is the dimension of the configuration space), exists in very general cases, but only in the local sense. Therefore, the existence of a complete solution of the Hamilton–Jacobi equation is by no means a characterization of complete integrability in the Liouville sense. Most cases that can be "explicitly integrated" involve a complete separation of variables, in which the separation constants provide the complete set of integration constants that are required. Only when these constants can be reinterpreted, within the full phase space setting, as the values of a complete set of Poisson commuting functions restricted to the leaves of a Lagrangian foliation, can the system be regarded as completely integrable in the Liouville sense.

Solitons and inverse spectral methods

A resurgence of interest in classical integrable systems came with the discovery, in the late 1960s, that solitons, which are strongly stable, localized solutions of partial differential equations like the Korteweg–de Vries equation (which describes 1-dimensional non-dissipative fluid dynamics in shallow basins), could be understood by viewing these equations as infinite-dimensional integrable Hamiltonian systems. Their study leads to a very fruitful approach for "integrating" such systems, the inverse scattering transform and more general inverse spectral methods (often reducible to Riemann–Hilbert problems), which generalize local linear methods like Fourier analysis to nonlocal linearization, through the solution of associated integral equations.
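The one-soliton solution of the KdV equation can be checked symbolically. The sketch below (soliton speed and sample points are our choices) differentiates the standard sech² profile with SymPy and confirms that the KdV residual vanishes.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)

# One-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0,
# a localized hump travelling to the right with speed c
u = (c / 2) * sp.sech(sp.sqrt(c) / 2 * (x - c * t))**2

residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)

# the residual vanishes identically; evaluate it at a few sample points
vals = [abs(residual.subs({c: 1.3, x: xv, t: tv}).evalf())
        for xv, tv in [(0.2, 0.1), (1.0, 0.5), (-0.7, 0.3)]]
print(vals)   # all ≈ 0: the sech² profile solves KdV exactly
```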

The basic idea of this method is to introduce a linear operator that is determined by the position in phase space and which evolves under the dynamics of the system in question in such a way that its "spectrum" (in a suitably generalized sense) is invariant under the evolution, cf. Lax pair. This provides, in certain cases, enough invariants, or "integrals of motion" to make the system completely integrable. In the case of systems having an infinite number of degrees of freedom, such as the KdV equation, this is not sufficient to make precise the property of Liouville integrability. However, for suitably defined boundary conditions, the spectral transform can, in fact, be interpreted as a transformation to completely ignorable coordinates, in which the conserved quantities form half of a doubly infinite set of canonical coordinates, and the flow linearizes in these. In some cases, this may even be seen as a transformation to action-angle variables, although typically only a finite number of the "position" variables are actually angle coordinates, and the rest are noncompact.
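The Lax-pair mechanism can be illustrated with the open Toda lattice in Flaschka variables, where the Lax operator is a tridiagonal matrix whose eigenvalues are invariant under the flow. The sketch below (chain length, initial data, and the RK4 integrator are our own choices) integrates the equations of motion and checks that the Lax spectrum is conserved.

```python
import numpy as np

# Open Toda lattice in Flaschka variables (a_k off-diagonal, b_k diagonal):
#   da_k/dt = a_k (b_{k+1} - b_k),   db_k/dt = 2 (a_k^2 - a_{k-1}^2),
# with a_0 = a_N = 0.  The Lax matrix L = tridiag(a; b; a) evolves by a
# commutator, dL/dt = [B, L], so its eigenvalues are integrals of motion.

def lax_matrix(a, b):
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

def rhs(a, b):
    a_ext = np.concatenate(([0.0], a, [0.0]))          # enforce a_0 = a_N = 0
    da = a * (b[1:] - b[:-1])
    db = 2.0 * (a_ext[1:] ** 2 - a_ext[:-1] ** 2)
    return da, db

rng = np.random.default_rng(0)
N = 5
a = rng.uniform(0.5, 1.0, N - 1)
b = rng.uniform(-1.0, 1.0, N)
eig0 = np.sort(np.linalg.eigvalsh(lax_matrix(a, b)))

dt = 1e-3
for _ in range(5000):                                  # RK4 to t = 5
    k1a, k1b = rhs(a, b)
    k2a, k2b = rhs(a + dt/2*k1a, b + dt/2*k1b)
    k3a, k3b = rhs(a + dt/2*k2a, b + dt/2*k2b)
    k4a, k4b = rhs(a + dt*k3a, b + dt*k3b)
    a = a + dt/6*(k1a + 2*k2a + 2*k3a + k4a)
    b = b + dt/6*(k1b + 2*k2b + 2*k3b + k4b)

eig1 = np.sort(np.linalg.eigvalsh(lax_matrix(a, b)))
print(np.max(np.abs(eig1 - eig0)))   # ≈ 0: the Lax spectrum is invariant
```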

Hirota bilinear equations and τ-functions

Another viewpoint that arose in the modern theory of integrable systems originated in a calculational approach pioneered by Ryogo Hirota, which involved replacing the original nonlinear dynamical system with a bilinear system of constant coefficient equations for an auxiliary quantity, which later came to be known as the τ-function. These are now referred to as the Hirota equations. Although originally appearing just as a calculational device, without any clear relation to the inverse scattering approach, or the Hamiltonian structure, this nevertheless gave a very direct method from which important classes of solutions such as solitons could be derived.
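For KdV the link between the τ-function and solutions is the substitution u = 2 ∂²ₓ log τ, which turns the one-soliton τ-function τ = 1 + exp(kx − k³t) into the familiar sech² soliton. The sketch below (parameter values are illustrative) checks this identity numerically with finite differences.

```python
import numpy as np

# KdV via the tau-function: u = 2 ∂²/∂x² log τ, with the one-soliton
# tau-function τ = 1 + exp(η), η = k x − k³ t.  Differentiating twice gives
# u = 2k² e^η/(1+e^η)² = (k²/2) sech²(η/2), the standard soliton profile.
k, t = 1.7, 0.4
x = np.linspace(-10, 10, 2001)
eta = k * x - k**3 * t

tau = 1.0 + np.exp(eta)
log_tau = np.log(tau)
dx = x[1] - x[0]
# second x-derivative of log τ by central differences
u_hirota = 2.0 * (log_tau[2:] - 2 * log_tau[1:-1] + log_tau[:-2]) / dx**2
u_soliton = (k**2 / 2) * np.cosh(eta[1:-1] / 2) ** -2

err = np.max(np.abs(u_hirota - u_soliton))
print(err)   # small: the τ-function reproduces the sech² soliton
```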

Subsequently, this was interpreted by Mikio Sato and his students, at first for the case of integrable hierarchies of PDEs, such as the Kadomtsev–Petviashvili hierarchy, but then for much more general classes of integrable hierarchies, as a sort of universal phase space approach, in which, typically, the commuting dynamics were viewed simply as determined by a fixed (finite or infinite) abelian group action on a (finite or infinite) Grassmann manifold. The τ-function was viewed as the determinant of a projection operator from elements of the group orbit to some origin within the Grassmannian, and the Hirota equations as expressing the Plücker relations, characterizing the Plücker embedding of the Grassmannian in the projectivization of a suitably defined (infinite) exterior space, viewed as a fermionic Fock space.

Quantum integrable systems

There is also a notion of quantum integrable systems.

In the quantum setting, functions on phase space must be replaced by self-adjoint operators on a Hilbert space, and the notion of Poisson commuting functions replaced by commuting operators. The notion of conservation laws must be specialized to local conservation laws. Every Hamiltonian has an infinite set of conserved quantities given by projectors to its energy eigenstates. However, this does not imply any special dynamical structure.
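The claim that eigenstate projectors are conserved can be verified directly for a finite-dimensional stand-in Hamiltonian: a projector onto any eigenstate commutes with the Hamiltonian. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                    # random Hermitian "Hamiltonian"

evals, vecs = np.linalg.eigh(H)
v = vecs[:, 2]                              # any energy eigenstate
P = np.outer(v, v.conj())                   # projector |v><v|

comm = H @ P - P @ H
print(np.max(np.abs(comm)))   # ≈ 0: every eigenprojector commutes with H
```

As the text notes, such projectors exist for every Hamiltonian, which is why they carry no special dynamical information.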

To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body reducible. A quantum system is said to be integrable if the dynamics are two-body reducible. The Yang–Baxter equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved quantities. All of these ideas are incorporated into the quantum inverse scattering method where the algebraic Bethe ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb–Liniger model, the Hubbard model and several variations on the Heisenberg model. Some other types of quantum integrability are known in explicitly time-dependent quantum problems, such as the driven Tavis-Cummings model.
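A full demonstration of two-body reducibility is beyond a short sketch, but the weaker statement that these models carry commuting conserved operators is easy to check by exact diagonalization. The sketch below (4 sites and isotropic couplings are our choices) builds the spin-1/2 Heisenberg chain and verifies that it commutes with total S^z.

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n):
    # embed a single-site operator at site i of an n-site chain
    mats = [op if j == i else I2 for j in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4
# periodic isotropic Heisenberg chain: H = sum_i S_i . S_{i+1}
H = sum(site_op(s, i, n) @ site_op(s, (i + 1) % n, n)
        for i in range(n) for s in (sx, sy, sz))
Sz_tot = sum(site_op(sz, i, n) for i in range(n))

print(np.max(np.abs(H @ Sz_tot - Sz_tot @ H)))   # ≈ 0: total S^z is conserved
```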

Exactly solvable models

In physics, completely integrable systems, especially in the infinite-dimensional setting, are often referred to as exactly solvable models. This obscures the distinction between integrability, in the Hamiltonian sense, and the more general dynamical systems sense.

There are also exactly solvable models in statistical mechanics, which are more closely related to quantum integrable systems than classical ones. Two closely related methods, the Bethe ansatz approach (in its modern sense, based on the Yang–Baxter equations) and the quantum inverse scattering method, provide quantum analogs of the inverse spectral methods. These are equally important in the study of solvable models in statistical mechanics.

An imprecise notion of "exact solvability" as meaning: "The solutions can be expressed explicitly in terms of some previously known functions" is also sometimes used, as though this were an intrinsic property of the system itself, rather than the purely calculational feature that we happen to have some "known" functions available, in terms of which the solutions may be expressed. This notion has no intrinsic meaning, since what is meant by "known" functions very often is defined precisely by the fact that they satisfy certain given equations, and the list of such "known functions" is constantly growing. Although such a characterization of "integrability" has no intrinsic validity, it often implies the sort of regularity that is to be expected in integrable systems.

List of some well-known integrable systems

Classical mechanical systems
Integrable lattice models
Integrable systems in 1 + 1 dimensions
Integrable PDEs in 2 + 1 dimensions
Integrable PDEs in 3 + 1 dimensions
Exactly solvable statistical lattice models

Spacetime symmetries

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Spacetime_symmetries

Spacetime symmetries are features of spacetime that can be described as exhibiting some form of symmetry. The role of symmetry in physics is important in simplifying solutions to many problems. Spacetime symmetries are used in the study of exact solutions of Einstein's field equations of general relativity. Spacetime symmetries are distinguished from internal symmetries.

Physical motivation

Physical problems are often investigated and solved by noticing features which have some form of symmetry. For example, spherical symmetry is important both in deriving the Schwarzschild solution and in deducing the physical consequences of this symmetry (such as the nonexistence of gravitational radiation from a spherically pulsating star). In cosmological problems, symmetry plays a role in the cosmological principle, which restricts the type of universes that are consistent with large-scale observations (e.g. the Friedmann–Lemaître–Robertson–Walker (FLRW) metric). Symmetries usually require some form of preserving property, the most important of which in general relativity include the following:

  • preserving geodesics of the spacetime
  • preserving the metric tensor
  • preserving the curvature tensor

These and other symmetries will be discussed below in more detail. This preservation property which symmetries usually possess (alluded to above) can be used to motivate a useful definition of these symmetries themselves.

Mathematical definition

A rigorous definition of symmetries in general relativity has been given by Hall (2004). In this approach, the idea is to use (smooth) vector fields whose local flow diffeomorphisms preserve some property of the spacetime. (Note that one should emphasize in one's thinking that this is a diffeomorphism, a transformation on a differential element; the implication is that the behavior of objects with extent may not be as manifestly symmetric.) This preserving property of the diffeomorphisms is made precise as follows. A smooth vector field X on a spacetime M is said to preserve a smooth tensor T on M (or T is invariant under X) if, for each smooth local flow diffeomorphism ϕ_t associated with X, the tensors T and ϕ_t*(T) are equal on the domain of ϕ_t. This statement is equivalent to the more usable condition that the Lie derivative of the tensor under the vector field vanishes:

    L_X T = 0

on M. This has the consequence that, given any two points p and q on M, the coordinates of T in a coordinate system around p are equal to the coordinates of T in a coordinate system around q. A symmetry on the spacetime is a smooth vector field whose local flow diffeomorphisms preserve some (usually geometrical) feature of the spacetime. The (geometrical) feature may refer to specific tensors (such as the metric, or the energy–momentum tensor) or to other aspects of the spacetime such as its geodesic structure. The vector fields are sometimes referred to as collineations, symmetry vector fields or just symmetries. The set of all symmetry vector fields on M forms a Lie algebra under the Lie bracket operation, as can be seen from the identity

    L_[X,Y] T = L_X L_Y T − L_Y L_X T

the term on the right usually being written, with an abuse of notation, as [L_X, L_Y] T.

Killing symmetry

A Killing vector field is one of the most important types of symmetries and is defined to be a smooth vector field X that preserves the metric tensor g:

    L_X g = 0

This is usually written in the expanded form as:

    ∇_a X_b + ∇_b X_a = 0

Killing vector fields find extensive applications (including in classical mechanics) and are related to conservation laws.
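In flat space with Cartesian coordinates the covariant derivatives in the expanded Killing equation reduce to partial derivatives, so the condition is easy to check symbolically. The sketch below (a SymPy check on the flat plane; the field names are ours) verifies that the rotation generator X = (−y, x) is a Killing field while a stretch field is not.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def killing_deviation(X):
    # ∂_a X_b + ∂_b X_a in Cartesian coordinates on the flat plane, where
    # the covariant derivative reduces to the partial derivative
    coords = (x, y)
    return sp.Matrix(2, 2, lambda a, b:
                     sp.diff(X[b], coords[a]) + sp.diff(X[a], coords[b]))

rotation = (-y, x)        # generator of rotations: an isometry of the plane
stretch = (x, 0)          # a stretch along the x-axis: not an isometry

print(killing_deviation(rotation))  # zero matrix: the Killing equation holds
print(killing_deviation(stretch))   # nonzero: the metric is not preserved
```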

Homothetic symmetry

A homothetic vector field is one which satisfies:

    L_X g = 2c g

where c is a real constant. Homothetic vector fields find application in the study of singularities in general relativity.

Affine symmetry

An affine vector field is one that satisfies:

    L_X Γ^a_bc = 0

i.e., one that preserves the Levi-Civita connection.

An affine vector field preserves geodesics and preserves the affine parameter.

The above three vector field types are special cases of projective vector fields which preserve geodesics without necessarily preserving the affine parameter.

Conformal symmetry

A conformal vector field is one which satisfies:

    L_X g = ϕ g

where ϕ is a smooth real-valued function on M.

Curvature symmetry

A curvature collineation is a vector field which preserves the Riemann tensor:

    L_X R^a_bcd = 0

where R^a_bcd are the components of the Riemann tensor. The set of all smooth curvature collineations forms a Lie algebra under the Lie bracket operation (if the smoothness condition is dropped, the set of all curvature collineations need not form a Lie algebra). The Lie algebra is denoted by CC(M) and may be infinite-dimensional. Every affine vector field is a curvature collineation.

Matter symmetry

A less well-known form of symmetry concerns vector fields that preserve the energy–momentum tensor. These are variously referred to as matter collineations or matter symmetries and are defined by:

    L_X T = 0

where T is the covariant energy–momentum tensor. The intimate relation between geometry and physics may be highlighted here, as the vector field X is regarded as preserving certain physical quantities along the flow lines of X, this being true for any two observers. In connection with this, it may be shown that every Killing vector field is a matter collineation (by the Einstein field equations, with or without cosmological constant). Thus, given a solution of the EFE, a vector field that preserves the metric necessarily preserves the corresponding energy–momentum tensor. When the energy–momentum tensor represents a perfect fluid, every Killing vector field preserves the energy density, pressure and the fluid flow vector field. When the energy–momentum tensor represents an electromagnetic field, a Killing vector field does not necessarily preserve the electric and magnetic fields.

Applications

As mentioned at the start of this article, the main application of these symmetries occurs in general relativity, where solutions of Einstein's equations may be classified by imposing certain symmetries on the spacetime.

Spacetime classifications

Classifying solutions of the EFE constitutes a large part of general relativity research. Various approaches to classifying spacetimes, including using the Segre classification of the energy–momentum tensor or the Petrov classification of the Weyl tensor have been studied extensively by many researchers, most notably Stephani et al. (2003). They also classify spacetimes using symmetry vector fields (especially Killing and homothetic symmetries). For example, Killing vector fields may be used to classify spacetimes, as there is a limit to the number of global, smooth Killing vector fields that a spacetime may possess (the maximum being ten for four-dimensional spacetimes). Generally speaking, the higher the dimension of the algebra of symmetry vector fields on a spacetime, the more symmetry the spacetime admits. For example, the Schwarzschild solution has a Killing algebra of dimension four (three spatial rotational vector fields and a time translation), whereas the Friedmann–Lemaître–Robertson–Walker metric (excluding the Einstein static subcase) has a Killing algebra of dimension six (three translations and three rotations). The Einstein static metric has a Killing algebra of dimension seven (the previous six plus a time translation).

The assumption of a spacetime admitting a certain symmetry vector field can place restrictions on the spacetime.


Delayed-choice quantum eraser

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser

A delayed-choice quantum eraser experiment, first performed by Yoon-Ho Kim, R. Yu, S. P. Kulik, Y. H. Shih and Marlan O. Scully, and reported in early 1999, is an elaboration on the quantum eraser experiment that incorporates concepts considered in John Archibald Wheeler's delayed-choice experiment. The experiment was designed to investigate peculiar consequences of the well-known double-slit experiment in quantum mechanics, as well as the consequences of quantum entanglement.

The delayed-choice quantum eraser experiment investigates a paradox. If a photon manifests itself as though it had come by a single path to the detector, then "common sense" (which Wheeler and others challenge) says that it must have entered the double-slit device as a particle. If a photon manifests itself as though it had come by two indistinguishable paths, then it must have entered the double-slit device as a wave. Accordingly, if the experimental apparatus is changed while the photon is in mid‑flight, the photon may have to revise its prior "commitment" as to whether to be a wave or a particle. Wheeler pointed out that when these assumptions are applied to a device of interstellar dimensions, a last-minute decision made on Earth on how to observe a photon could alter a situation established millions or even billions of years earlier.

While delayed-choice experiments might seem to allow measurements made in the present to alter events that occurred in the past, this conclusion requires assuming a non-standard view of quantum mechanics. If a photon in flight is instead interpreted as being in a so-called "superposition of states"—that is, if it is allowed the potentiality of manifesting as a particle or wave, but during its time in flight is neither—then there is no causation paradox. This notion of superposition reflects the standard interpretation of quantum mechanics.

Introduction

In the basic double-slit experiment, a beam of light (usually from a laser) is directed perpendicularly towards a wall pierced by two parallel slit apertures. If a detection screen (anything from a sheet of white paper to a CCD) is put on the other side of the double-slit wall (far enough for light from both slits to overlap), a pattern of light and dark fringes will be observed, a pattern that is called an interference pattern. Other atomic-scale entities such as electrons are found to exhibit the same behavior when fired toward a double slit. By decreasing the brightness of the source sufficiently, individual particles that form the interference pattern are detectable. The emergence of an interference pattern suggests that each particle passing through the slits interferes with itself, and that therefore in some sense the particles are going through both slits at once. This is an idea that contradicts our everyday experience of discrete objects.
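In the far-field approximation the fringe pattern described above follows from superposing the amplitudes from the two slits: the path difference at screen position x is approximately d·x/L, giving intensity proportional to cos²(π d x / (λ L)). The sketch below (wavelength, slit separation, and screen distance are illustrative numbers) computes this directly.

```python
import numpy as np

# Far-field double-slit pattern: with slit separation d, wavelength lam and
# screen distance L, the path difference at screen position x is ≈ d x / L,
# so summing the two path amplitudes gives I(x) ∝ cos²(π d x / (lam L)).
lam = 650e-9     # 650 nm laser (illustrative)
d = 0.25e-3      # slit separation
L = 1.0          # distance to the screen

x = np.linspace(-5e-3, 5e-3, 4001)
phase = 2 * np.pi * d * x / (lam * L)        # phase difference between paths
amp = np.exp(0j) + np.exp(1j * phase)        # superpose the two paths
intensity = np.abs(amp) ** 2 / 4             # normalized intensity

spacing = lam * L / d                        # predicted bright-fringe spacing
print(intensity.max(), intensity.min(), spacing)   # ≈ 1.0, ≈ 0.0, 2.6 mm
```

The complete cancellation at the dark fringes is what disappears once which-path information becomes available, as discussed below.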

A well-known thought experiment, which played a vital role in the history of quantum mechanics (for example, see the discussion on Einstein's version of this experiment), demonstrated that if particle detectors are positioned at the slits, showing through which slit a photon goes, the interference pattern will disappear. This which-way experiment illustrates the complementarity principle that photons can behave as either particles or as waves, but cannot be simultaneously observed to be both a particle and a wave. However, technically feasible realizations of this experiment were not proposed until the 1970s.

Which-path information and the visibility of interference fringes are complementary quantities, meaning that information about a photon's path can be observed, or interference fringes can be observed, but they cannot both be observed in the same trial. In the double-slit experiment, conventional wisdom held that observing the particles' path inevitably disturbed them enough to destroy the interference pattern as a result of the Heisenberg uncertainty principle.

In 1982, Scully and Drühl pointed out a workaround to this interpretation. They proposed to store the information about which slit the photon went through (or, in their setup, from which atom the photon was re-emitted) in the excited state of that atom. At this point the which-path information is known and no interference is observed. However, one can "erase" this information by making the atom emit another photon and fall to the ground state. That on its own will not bring the interference pattern back, since the which-path information can still be extracted from an appropriate measurement of the second photon. However, if the second photon is measured at a place it could have reached with equal likelihood from any of the atoms, that measurement successfully "erases" the which-path information. The original photon would then show the interference pattern (the position of its fringes depends on where exactly the second photon was observed, so in the total statistics the fringes average out and none are seen). Since 1982, multiple experiments have demonstrated the validity of this so-called quantum "eraser".

A simple quantum-eraser experiment

A simple version of the quantum eraser can be described as follows: Rather than splitting one photon or its probability wave between two slits, the photon is subjected to a beam splitter. If one thinks in terms of a stream of photons being randomly directed by such a beam splitter to go down two paths that are kept from interaction, it would seem that no photon can then interfere with any other or with itself.

If the rate of photon production is reduced so that only one photon enters the apparatus at any one time, it becomes impossible to understand the photon as only moving through one path, because when the path outputs are redirected so that they coincide on a common detector or detectors, interference phenomena appear. This is similar to envisioning one photon in a two-slit apparatus: even though it is one photon, it still somehow interacts with both slits.

Figure 1. Experiment that shows delayed determination of photon path

In the two diagrams in Fig. 1, photons are emitted one at a time from a laser symbolized by a yellow star. They pass through a 50% beam splitter (green block) that reflects or transmits 1/2 of the photons. The reflected or transmitted photons travel along two possible paths depicted by the red or blue lines.

In the top diagram, it seems as though the trajectories of the photons are known: If a photon emerges from the top of the apparatus, it seems as though it had to have come by way of the blue path, and if it emerges from the side of the apparatus, it seems as though it had to have come by way of the red path. However, it is important to keep in mind that the photon is in a superposition of the paths until it is detected. The assumption above—that it 'had to have come by way of' either path—is a form of the 'separation fallacy'.

In the bottom diagram, a second beam splitter is introduced at the top right. It recombines the beams corresponding to the red and blue paths. By introducing the second beam splitter, the usual way of thinking is that the path information has been "erased." However, we have to be careful, because the photon cannot be assumed to have 'really' gone along one or the other path. Recombining the beams results in interference phenomena at detection screens positioned just beyond each exit port. What issues to the right side displays reinforcement, and what issues toward the top displays cancellation. It is important to keep in mind however that the illustrated interferometer effects apply only to a single photon in a pure state. When dealing with a pair of entangled photons, the photon encountering the interferometer will be in a mixed state, and there will be no visible interference pattern without coincidence counting to select appropriate subsets of the data.
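The two diagrams can be modeled with 2×2 beam-splitter matrices acting on the path amplitudes: one splitter gives 50/50 detection statistics, while two splitters in sequence interfere so that every photon exits a single port. A minimal sketch (the symmetric beam-splitter convention is our choice):

```python
import numpy as np

# Symmetric lossless 50/50 beam splitter acting on the two path amplitudes
B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0])             # photon enters in path 1

one_bs = B @ photon_in                   # top diagram: single beam splitter
two_bs = B @ B @ photon_in               # bottom diagram: paths recombined

p_one = np.abs(one_bs) ** 2
p_two = np.abs(two_bs) ** 2
print(p_one)   # [0.5 0.5]: either detector fires with probability 1/2
print(p_two)   # [0. 1.]: interference routes every photon to one port
```

This single-photon calculation corresponds to the pure-state case mentioned above; for one member of an entangled pair the state is mixed, and no such interference is visible without coincidence counting.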

Delayed choice

Elementary precursors to current quantum-eraser experiments such as the "simple quantum eraser" described above have straightforward classical-wave explanations. Indeed, it could be argued that there is nothing particularly quantum about this experiment. Nevertheless, Jordan has argued, on the basis of the correspondence principle, that despite the existence of classical explanations, first-order interference experiments such as the above can be interpreted as true quantum erasers.

These precursors use single-photon interference. Versions of the quantum eraser using entangled photons, however, are intrinsically non-classical. Because of that, in order to avoid any possible ambiguity concerning the quantum versus classical interpretation, most experimenters have opted to use nonclassical entangled-photon light sources to demonstrate quantum erasers with no classical analog.

Furthermore, the use of entangled photons enables the design and implementation of versions of the quantum eraser that are impossible to achieve with single-photon interference, such as the delayed-choice quantum eraser, which is the topic of this article.

The experiment of Kim et al. (1999)

Figure 2. Setup of the delayed-choice quantum-eraser experiment of Kim et al. Detector D0 is movable

The experimental setup, described in detail in Kim et al., is illustrated in Fig 2. An argon laser generates individual 351.1 nm photons that pass through a double-slit apparatus (vertical black line in the upper left corner of the diagram).

An individual photon goes through one (or both) of the two slits. In the illustration, the photon paths are color-coded as red or light blue lines to indicate which slit the photon came through (red indicates slit A, light blue indicates slit B).

So far, the experiment is like a conventional two-slit experiment. However, after the slits, spontaneous parametric down-conversion (SPDC) is used to prepare an entangled two-photon state. This is done by a nonlinear optical crystal BBO (beta barium borate) that converts the photon (from either slit) into two identical, orthogonally polarized entangled photons with 1/2 the frequency of the original photon. The paths followed by these orthogonally polarized photons are caused to diverge by the Glan–Thompson prism.

One of these 702.2 nm photons, referred to as the "signal" photon (look at the red and light-blue lines going upwards from the Glan–Thompson prism) continues to the target detector called D0. During an experiment, detector D0 is scanned along its x axis, its motions controlled by a step motor. A plot of "signal" photon counts detected by D0 versus x can be examined to discover whether the cumulative signal forms an interference pattern.

The other entangled photon, referred to as the "idler" photon (look at the red and light-blue lines going downwards from the Glan–Thompson prism), is deflected by prism PS that sends it along divergent paths depending on whether it came from slit A or slit B.

Somewhat beyond the path split, the idler photons encounter beam splitters BSa, BSb, and BSc that each have a 50% chance of allowing the idler photon to pass through and a 50% chance of causing it to be reflected. Ma and Mb are mirrors.

Figure 3. x axis: position of D0. y axis: joint detection rates between D0 and D1, D2, D3, D4 (R01, R02, R03, R04). R04 is not provided in the Kim article and is supplied according to their verbal description.
Figure 4. Simulated recordings of photons jointly detected between D0 and D1, D2, D3, D4 (R01, R02, R03, R04)

The beam splitters and mirrors direct the idler photons towards detectors labeled D1, D2, D3 and D4. Note that:

  • If an idler photon is recorded at detector D3, it can only have come from slit B.
  • If an idler photon is recorded at detector D4, it can only have come from slit A.
  • If an idler photon is detected at detector D1 or D2, it might have come from slit A or slit B.
  • The optical path length measured from slit to D1, D2, D3, and D4 is 2.5 m longer than the optical path length from slit to D0. This means that any information one can learn from an idler photon comes approximately 8 ns later than what one can learn from its entangled signal photon.
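The quoted 8 ns figure follows directly from the 2.5 m path difference and the speed of light:

```python
c = 299_792_458      # speed of light in vacuum, m/s
extra_path = 2.5     # extra optical path length to the idler detectors, m

delay = extra_path / c          # time by which idler detection lags signal detection
print(f"{delay * 1e9:.1f} ns")  # prints "8.3 ns"
```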

Detection of the idler photon by D3 or D4 provides delayed "which-path information" indicating whether the signal photon with which it is entangled had gone through slit A or B. On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, it is said that the information has been subjected to a "delayed erasure".

By using a coincidence counter, the experimenters were able to isolate the entangled signal from photo-noise, recording only events where both signal and idler photons were detected (after compensating for the 8 ns delay). Refer to Figs 3 and 4.

  • When the experimenters looked at the signal photons whose entangled idlers were detected at D1 or D2, they detected interference patterns.
  • However, when they looked at the signal photons whose entangled idlers were detected at D3 or D4, they detected simple diffraction patterns with no interference.

Significance

This result is similar to that of the double-slit experiment, since interference is observed when the signal photons are sorted according to phase (the R01 or R02 subsets). Note that the phase cannot be measured if the photon's path (the slit through which it passed) is known.

Figure 5. The distribution of signal photons at D0 can be compared with the distribution of bulbs on a digital billboard. When all the bulbs are lit, the billboard displays no image; the image can be "recovered" only by switching off some bulbs. Likewise, an interference or no-interference pattern among the signal photons at D0 can be recovered only by "switching off" (ignoring) some signal photons, and which signal photons to ignore can be learned only by looking at the corresponding entangled idler photons at detectors D1 to D4.

However, what makes this experiment possibly astonishing is that, unlike in the classic double-slit experiment, the choice of whether to preserve or erase the which-path information of the idler was not made until 8 ns after the position of the signal photon had already been measured by D0.

Detection of signal photons at D0 does not directly yield any which-path information. Detection of idler photons at D3 or D4, which provide which-path information, means that no interference pattern can be observed in the jointly detected subset of signal photons at D0. Likewise, detection of idler photons at D1 or D2, which do not provide which-path information, means that interference patterns can be observed in the jointly detected subset of signal photons at D0.

In other words, even though an idler photon is not observed until long after its entangled signal photon arrives at D0 due to the shorter optical path for the latter, interference at D0 is determined by whether a signal photon's entangled idler photon is detected at a detector that preserves its which-path information (D3 or D4), or at a detector that erases its which-path information (D1 or D2).

Some have interpreted this result to mean that the delayed choice to observe or not observe the path of the idler photon changes the outcome of an event in the past. Note in particular that an interference pattern may only be pulled out for observation after the idlers have been detected (i.e., at D1 or D2).

The total pattern of all signal photons at D0, whose entangled idlers went to multiple different detectors, will never show interference regardless of what happens to the idler photons. One can get an idea of how this works by looking at the graphs of R01, R02, R03, and R04, and observing that the peaks of R01 line up with the troughs of R02 (i.e., a π phase shift exists between the two interference fringes). R03 shows a single maximum, and R04, which is experimentally identical to R03, shows equivalent results. The entangled photons, as filtered with the help of the coincidence counter, are simulated in Fig. 5 to give a visual impression of the evidence available from the experiment. In D0, the sum of all the correlated counts does not show interference. If all the photons that arrive at D0 were plotted on one graph, one would see only a bright central band.
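The cancellation of the R01 and R02 fringes in the sum can be illustrated numerically. The sketch below assumes idealized fringe shapes (a Gaussian diffraction envelope times cos² and sin² fringes, which are π out of phase); it is a toy model, not the Kim et al. data:

```python
import numpy as np

x = np.linspace(-1, 1, 201)   # detector D0 position (arbitrary units)
envelope = np.exp(-x ** 2)    # single-slit diffraction envelope (toy model)

R01 = envelope * np.cos(3 * np.pi * x) ** 2   # fringes for idlers at D1
R02 = envelope * np.sin(3 * np.pi * x) ** 2   # same fringes, shifted by pi (D2)

# Summing the two subsets restores the structureless envelope:
total = R01 + R02
print(np.allclose(total, envelope))   # prints "True": the fringes cancel
```

This is why the unsorted pattern at D0 never shows interference: the fringed subsets are complementary and wash each other out in the sum.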

Implications

Retrocausality

Delayed-choice experiments raise questions about the causal connections between the events. If events at D1, D2, D3, D4 determine outcomes at D0, then the effects might seem to precede their causes in time.

Consensus: no retrocausality

However, the interference pattern can only be seen retroactively once the idler photons have been detected and the detection information used to select subsets of signal photons.

Moreover, the apparent retroactive action vanishes if the effects of observations on the state of the entangled signal and idler photons are considered in their historic order. Specifically, when the detection or deletion of which-way information happens before the detection at D0, the standard simplistic explanation says: "The detector Di at which the idler photon is detected determines the probability distribution at D0 for the signal photon." Similarly, when D0 precedes detection of the idler photon, the following description is just as accurate: "The position at D0 of the detected signal photon determines the probabilities for the idler photon to hit either of D1, D2, D3 or D4." These are just equivalent ways of formulating the correlations of entangled photons' observables in an intuitively causal way, so one may choose whichever of them one prefers (in particular, the one in which the cause precedes the consequence and no retrograde action appears in the explanation).

The total pattern of signal photons at the primary detector never shows interference (see Fig. 5), so it is not possible to deduce what will happen to the idler photons by observing the signal photons alone. In a paper by Johannes Fankhauser, it is shown that the delayed choice quantum eraser experiment resembles a Bell-type scenario in which the paradox's resolution is rather trivial, and so there really is no mystery. Moreover, it gives a detailed account of the experiment in the de Broglie-Bohm picture with definite trajectories arriving at the conclusion that there is no "backwards in time influence" present. The delayed-choice quantum eraser does not communicate information in a retro-causal manner because it takes another signal, one which must arrive by a process that can go no faster than the speed of light, to sort the superimposed data in the signal photons into four streams that reflect the states of the idler photons at their four distinct detection screens.

A theorem proven by Philippe Eberhard shows that if the accepted equations of relativistic quantum field theory are correct, faster-than-light communication is impossible.

Other delayed-choice quantum-eraser experiments

Many refinements and extensions of the Kim et al. delayed-choice quantum eraser have been performed or proposed. Only a small sampling of reports and proposals is given here:

Scarcelli et al. (2007) reported on a delayed-choice quantum-eraser experiment based on a two-photon imaging scheme. After detecting a photon passed through a double-slit, a random delayed choice was made to erase or not erase the which-path information by the measurement of its distant entangled twin; the particle-like and wave-like behavior of the photon were then recorded simultaneously and respectively by only one set of joint detectors.

Peruzzo et al. (2012) have reported on a quantum delayed-choice experiment based on a quantum-controlled beam splitter, in which particle and wave behaviors were investigated simultaneously. The quantum nature of the photon's behavior was tested with a Bell inequality, which replaced the delayed choice of the observer.

Rezai et al. (2018) have combined Hong–Ou–Mandel interference with a delayed-choice quantum eraser. They superimpose two incompatible photons onto a beam splitter, such that no interference pattern can be observed. When the output ports are monitored in an integrated fashion (i.e., counting all the clicks), no interference occurs. Only when the outgoing photons are polarization-analysed and the right subset is selected does quantum interference, in the form of a Hong–Ou–Mandel dip, occur.

The construction of solid-state electronic Mach–Zehnder interferometers (MZI) has led to proposals to use them in electronic versions of quantum-eraser experiments. This would be achieved by Coulomb coupling to a second electronic MZI acting as a detector.

Entangled pairs of neutral kaons have also been examined and found suitable for investigations using quantum marking and quantum-erasure techniques.

A quantum eraser has been proposed using a modified Stern–Gerlach setup. In this proposal, no coincidence counting is required, and quantum erasure is accomplished by applying an additional Stern–Gerlach magnetic field.

Superluminal communication

From Wikipedia, the free encyclopedia

Superluminal communication is a hypothetical process in which information is conveyed at faster-than-light speeds. The current scientific consensus is that faster-than-light communication is not possible, and to date it has not been achieved in any experiment.

Superluminal communication other than possibly through wormholes is likely impossible because, in a Lorentz-invariant theory, it could be used to transmit information into the past. This would complicate causality, but no theoretical arguments conclusively preclude this possibility.

A number of theories and phenomena related to superluminal communication have been proposed or studied, including tachyons, neutrinos, quantum nonlocality, wormholes, and quantum tunneling.

Proposed mechanisms

Spacetime diagram showing that moving faster than light implies time travel in the context of special relativity. A spaceship departs from Earth, travelling from A to C slower than light. At B, Earth emits a tachyon, a particle that travels faster than light but forward in time in Earth's reference frame. It reaches the spaceship at C. The spaceship then sends another tachyon back to Earth from C to D. This tachyon also travels forward in time in the spaceship's reference frame. This effectively allows Earth to send a signal from B to D, back in time.

Tachyons

Tachyonic particles are hypothetical particles that travel faster than light, which could conceivably allow for superluminal communication. Because such a particle would violate the known laws of physics, many scientists reject the idea that they exist. By contrast, tachyonic fields – quantum fields with imaginary mass – do exist and exhibit superluminal group velocity under some circumstances. However, such fields have luminal signal velocity and do not allow superluminal communication.

Quantum nonlocality

Quantum mechanics is non-local in the sense that distant systems can be entangled. Entangled states lead to correlations in the results of otherwise random measurements, even when the measurements are made nearly simultaneously and at far distant points. The impossibility of superluminal communication led Einstein, Podolsky, and Rosen to propose that quantum mechanics must be incomplete (see EPR paradox).

However, it is now well understood that quantum entanglement does not allow any influence or information to propagate superluminally.

Practically, any attempt to force one member of an entangled pair of particles into a particular quantum state breaks the entanglement between the two particles. That is to say, the other member of the entangled pair is completely unaffected by this "forcing" action, and its quantum state remains random; a preferred outcome cannot be encoded into a quantum measurement.

Technically, the microscopic causality postulate of axiomatic quantum field theory implies the impossibility of superluminal communication using any phenomena whose behavior can be described by orthodox quantum field theory. A special case of this is the no-communication theorem, which prevents communication using the quantum entanglement of a composite system shared between two spacelike-separated observers.
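The no-communication theorem can be illustrated for a single entangled pair: whatever measurement basis one party (Alice) chooses, the other party's (Bob's) reduced density matrix, and hence all of his local statistics, is unchanged. A minimal sketch with NumPy, assuming an ideal Bell state and projective measurements:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) in the joint 4-dim basis.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi.conj())          # joint density matrix

def trace_out_alice(rho4):
    """Partial trace over Alice's qubit: Bob's 2x2 reduced density matrix."""
    return np.einsum('abac->bc', rho4.reshape(2, 2, 2, 2))

def bob_state_after_alice(theta):
    """Bob's state after Alice measures in a basis rotated by angle theta,
    averaged over her outcomes (Bob never learns which outcome occurred)."""
    c, s = np.cos(theta), np.sin(theta)
    outcomes = [np.array([c, s]), np.array([-s, c])]
    rho_after = sum(
        np.kron(np.outer(a, a), np.eye(2)) @ rho @ np.kron(np.outer(a, a), np.eye(2))
        for a in outcomes
    )
    return trace_out_alice(rho_after)

# Whatever basis Alice chooses, Bob sees the maximally mixed state I/2.
for theta in [0.0, 0.3, np.pi / 4, 1.2]:
    assert np.allclose(bob_state_after_alice(theta), np.eye(2) / 2)
```

Bob's outcomes look like fair coin flips no matter what Alice does; only by later comparing records over a classical (light-speed-limited) channel do the correlations appear, exactly as in the coincidence counting of the eraser experiments.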

Wormholes

If wormholes are possible, then ordinary subluminal methods of communication could be sent through them to achieve effectively superluminal transmission speeds across non-local regions of spacetime. Considering the immense energy or exotic matter with negative mass/negative energy that current theories suggest would be required to open a wormhole large enough to pass spacecraft through, it may be that only atomic-scale wormholes would be practical to build, limiting their use solely to information transmission. Some hypotheses of wormhole formation would prevent them from ever becoming "timeholes", allowing superluminal communication without the additional complication of allowing communication with the past.

Fictional devices

Tachyon-like

The Dirac communicator features in several of the works of James Blish, notably his 1954 short story "Beep" (later expanded into The Quincunx of Time). As alluded to in the title, any active device received the sum of all transmitted messages in universal space-time, in a single pulse, so that demultiplexing yielded information about the past, present, and future.

Superluminal transmitters and ansibles

The terms "ultrawave" and "hyperwave" have been used by several authors, often interchangeably, to denote faster-than-light communications. Examples include:

  • E. E. Smith used the term "ultrawave" in his Lensman series, for waves which propagated through a sub-ether and could be used for weapons, communications, and other applications.
  • In Isaac Asimov's Foundation series, "ultrawave" and "hyperwave" are used interchangeably to represent a superluminal communications medium. The hyperwave relay also features.
  • In the Star Trek universe, subspace carries faster-than-light communication (subspace radio) and travel (warp drive).
  • The Cities in Flight series by James Blish featured ultrawave communications which used the known phenomenon of phase velocity to carry information, a use which is in fact impossible. The limitations of phase velocity beyond the speed of light later led him to develop his Dirac communicator.
  • Larry Niven used hyperwave in his Known Space series as the term for a faster-than-light method of communication. Unlike the hyperdrive that moved ships at a finite superluminal speed, hyperwave was essentially instantaneous.
  • In Richard K. Morgan's Takeshi Kovacs novels human colonies on distant planets maintain contact with earth and each other via hyperspatial needlecast, a technology which moves information "...so close to instantaneously that scientists are still arguing about the terminology".

A later device was the ansible coined by Ursula K. Le Guin and used extensively in her Hainish Cycle. Like Blish's device it provided instantaneous communication, but without the inconvenient beep.

The ansible is also a major plot element, nearly a MacGuffin, in Elizabeth Moon's Vatta's War series. Much of the story line revolves around various parties attacking or repairing ansibles, and around the internal politics of ISC (InterStellar Communications), a corporation which holds a monopoly on the ansible technology.

The ansible is also used as the main form of communication in Orson Scott Card's Ender's Game series. It is inhabited by an energy-based, non-artificial sentient creature called an Aiúa that was placed within the ansible network and goes by the name of Jane. It was made when the humans realized that the Buggers, an alien species that attacked Earth, could communicate instantaneously and so the humans tried to do the same.

Quantum entanglement

  • In Ernest Cline's novel Armada, alien invaders possess technology for instant "quantum communication" with unlimited range. Humans reverse engineer the device from captured alien technology.
  • In the Mass Effect series of video games, instantaneous communication is possible using quantum-entanglement communicators placed in the communications rooms of starships.
  • In the Avatar continuity, faster-than-light communication via a subtle control over the state of entangled particles is possible, but for practical purposes extremely slow and expensive: at a transmission rate of three bits of information per hour and a cost of $7,500 per bit, it is used for only the highest priority messages.
  • Charles Stross's books Singularity Sky and Iron Sunrise make use of "causal channels" which use entangled particles for instantaneous two-way communication. The technique has drawbacks in that the entangled particles are expendable and the use of faster-than-light travel destroys the entanglement, so that one end of the channel must be transported below light speed. This makes them expensive and limits their usefulness somewhat.
  • In Liu Cixin's novel The Three-Body Problem, the alien Trisolarans, while preparing to invade the Solar System, use a device with Ansible characteristics to communicate with their collaborators on Earth in real time. Additionally, they use spying/sabotaging devices called 'Sophons' on Earth which by penetration can access any kind of electronically saved and visual information, interact with electronics, and communicate results back to Trisolaris in real-time via quantum entanglement. The technology used is "single protons that have been unfolded from eleven space dimensions to two dimensions, programmed, and then refolded" and thus Sophons remain undetectable for humans.

Psychic links, which belong to pseudoscience, have been described either as explainable by physical principles or as unexplained, but they are claimed to operate instantaneously over large distances.

In the Stargate television series, characters are able to communicate instantaneously over long distances by transferring their consciousness into another person or being anywhere in the universe using "Ancient communication stones". It is not known how these stones operate, but the technology explained in the show usually revolves around wormholes for instant teleportation, faster-than-light, space-warping travel, and sometimes around quantum multiverses.

In Robert A. Heinlein's Time for the Stars, twin telepathy was used to maintain communication with a distant spaceship.

Peter F. Hamilton's Void Trilogy features psychic links between the multiple bodies simultaneously occupied by some characters.

In Brandon Sanderson's Skyward series, characters are able to use "Cytonics" to communicate instantaneously over any distance by sending messages via an inter-dimensional reality called "nowhere".

Other devices

Similar devices are present in the works of numerous others, such as Frank Herbert and Philip Pullman, who called his a lodestone resonator.

Anne McCaffrey's Crystal Singer series posited an instantaneous communication device powered by rare "Black Crystal" from the planet Ballybran. Black Crystals cut from the same mineral deposit could be "tuned" to sympathetically vibrate with each other instantly, even when separated by interstellar distances, allowing instantaneous telephone-like voice and data communication. Similarly, in Gregory Keyes' series The Age of Unreason, "aetherschreibers" use two halves of a single "chime" to communicate, aided by scientific alchemy. While the speed of communication is important, so is the fact that the messages cannot be overheard except by listeners with a piece of the same original crystal.

Stephen R. Donaldson, in his Gap cycle, proposed a similar system, Symbiotic Crystalline Resonance Transmission, clearly ansible-type technology but very difficult to produce and limited to text messages.

In "With Folded Hands" (1947) and The Humanoids (1949), by Jack Williamson, instant communication and power transfer through interstellar space is possible with rhodomagnetic energy.

In Ivan Yefremov's 1957 novel Andromeda Nebula, a device for instant transfer of information and matter is made real by using "bipolar mathematics" to explore use of anti-gravitational shadow vectors through a zero field and the antispace, which enables them to make contact with the planet of Epsilon Tucanae.

In Edmond Hamilton's The Star Kings (1949), the discovery of an unknown form of electromagnetic radiation called sub-spectrum rays moves faster than light. The fastest of these are those of the Minus-42nd Octave, which allows for real time telestereo communication with anyone within the galaxy.

In Cordwainer Smith's Instrumentality novels and stories, interplanetary and interstellar communication is normally relayed from planet to planet, presumably at superluminal speed for each stage (at least between solar systems) but with a cumulative delay. For urgent communication there is the "instant message", which is effectively instantaneous but very expensive.

In Howard Taylor's web comic series Schlock Mercenary, superluminal communication is performed via the hypernet, a galaxy-spanning analogue to the Internet. Through the hypernet, communications and data are routed through nanoscopic wormholes, using conventional electromagnetic signals.
