
Friday, October 9, 2020

Quantum limit

From Wikipedia, the free encyclopedia

A quantum limit in physics is a limit on measurement accuracy at quantum scales. Depending on the context, the limit may be absolute (such as the Heisenberg limit), or it may only apply when the experiment is conducted with naturally occurring quantum states (e.g. the standard quantum limit in interferometry) and can be circumvented with advanced state preparation and measurement schemes.

The usage of the term standard quantum limit or SQL is, however, broader than just interferometry. In principle, any linear measurement of a quantum mechanical observable of a system under study that does not commute with itself at different times leads to such limits. In short, it is the Heisenberg uncertainty principle that is the cause.

A schematic description of how a physical measurement process is described in quantum mechanics

A more detailed explanation would be that any measurement in quantum mechanics involves at least two parties, an Object and a Meter. The former is the system whose observable, say $\hat{x}$, we want to measure. The latter is the system we couple to the Object in order to infer the value of $\hat{x}$ of the Object by recording some chosen observable, $\hat{\mathcal{O}}$, of this system, e.g. the position of the pointer on a scale of the Meter. This, in a nutshell, is a model of most of the measurements happening in physics, known as indirect measurements. Any measurement is thus the result of an interaction, and that interaction acts in both ways. Therefore, the Meter acts on the Object during each measurement, usually via the quantity, $\hat{\mathcal{F}}$, conjugate to the readout observable $\hat{\mathcal{O}}$, thus perturbing the value of the measured observable $\hat{x}$ and modifying the results of subsequent measurements. This is known as the (quantum) back action of the Meter on the system under measurement.

At the same time, quantum mechanics prescribes that the readout observable of the Meter should have an inherent uncertainty, $\delta\hat{\mathcal{O}}$, additive to and independent of the value of the measured quantity $\hat{x}$. This is known as measurement imprecision or measurement noise. Because of the Heisenberg uncertainty principle, this imprecision cannot be arbitrary and is linked to the back-action perturbation by the uncertainty relation:

$$\Delta\mathcal{O}\,\Delta\mathcal{F} \geqslant \frac{\hbar}{2},$$

where $\Delta a = \sqrt{\langle\hat{a}^2\rangle - \langle\hat{a}\rangle^2}$ is the standard deviation of an observable $\hat{a}$ and $\langle\hat{a}\rangle$ stands for the expectation value of $\hat{a}$ in whatever quantum state the system is in. The equality is reached if the system is in a minimum uncertainty state. The consequence for our case is that the more precise the measurement, i.e. the smaller $\Delta\mathcal{O}$ is, the larger will be the perturbation the Meter exerts on the measured observable $\hat{x}$. Therefore, the readout of the Meter (rescaled to the units of $\hat{x}$) will, in general, consist of three terms:

$$\hat{\mathcal{O}} = \hat{x}_{\mathrm{free}} + \delta\hat{x}_{\mathrm{BA}} + \delta\hat{x}_{\mathrm{meas}},$$

where $\hat{x}_{\mathrm{free}}$ is the value of $\hat{x}$ that the Object would have, were it not coupled to the Meter, $\delta\hat{x}_{\mathrm{BA}}$ is the perturbation to the value of $\hat{x}$ caused by the back-action force $\hat{\mathcal{F}}$, and $\delta\hat{x}_{\mathrm{meas}}$ is the measurement imprecision. The uncertainty of the back-action term is proportional to $\Delta\mathcal{F}$. Thus, there is a minimal value, or limit, to the precision one can get in such a measurement, provided that $\delta\hat{x}_{\mathrm{meas}}$ and $\delta\hat{x}_{\mathrm{BA}}$ are uncorrelated.
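To make the appearance of a limit explicit, here is a minimal sketch (not part of the original text) in which the Meter gain $\chi$ and the Object's back-action response $\beta$ are illustrative constants: the imprecision referred to the Object is $\Delta\mathcal{O}/\chi$ and the back-action perturbation of $\hat{x}$ is of order $\beta\,\Delta\mathcal{F}$, so

$$\Delta x_{\mathrm{tot}}^2 = \frac{\Delta\mathcal{O}^2}{\chi^2} + \beta^2\,\Delta\mathcal{F}^2 \;\geqslant\; 2\,\frac{\beta}{\chi}\,\Delta\mathcal{O}\,\Delta\mathcal{F} \;\geqslant\; \frac{\hbar\beta}{\chi},$$

with the minimum reached when the two contributions are equal. Whatever the details of the scheme, a linear Meter therefore has a nonzero optimal total error, and that optimum is the standard quantum limit of the scheme.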

The terms "quantum limit" and "standard quantum limit" are sometimes used interchangeably. Usually, "quantum limit" is a general term which refers to any restriction on measurement due to quantum effects, while the "standard quantum limit" in any given context refers to a quantum limit which is ubiquitous in that context.

Examples

Displacement measurement

Consider a very simple measurement scheme which, nevertheless, embodies all the key features of a general position measurement. In the scheme shown in the figure, a sequence of very short light pulses is used to monitor the displacement of a probe body $M$. The position $\hat{x}$ of $M$ is probed periodically with time interval $\vartheta$. We assume the mass $M$ to be large enough to neglect the displacement inflicted by the pulses' regular (classical) radiation pressure in the course of the measurement process.

Simplified scheme of optical measurement of mechanical object position

Then each $j$-th pulse, when reflected, carries a phase shift proportional to the value of the test-mass position $\hat{x}(t_j)$ at the moment of reflection:

$$\hat{\phi}^{\mathrm{refl}}_j = \hat{\phi}_j - 2 k_p \hat{x}(t_j), \qquad (1)$$

where $k_p = \omega_p/c$, $\omega_p$ is the light frequency, $j = \dots, -1, 0, 1, \dots$ is the pulse number and $\hat{\phi}_j$ is the initial (random) phase of the $j$-th pulse. We assume that the mean value of all these phases is equal to zero, $\langle\hat{\phi}_j\rangle = 0$, and that their root mean square (RMS) uncertainty $\langle\hat{\phi}_j^2\rangle^{1/2}$ is equal to $\Delta\phi$.

The reflected pulses are detected by a phase-sensitive device (the phase detector). An optical phase detector can be implemented using, e.g., a homodyne or heterodyne detection scheme, or other such read-out techniques.

In this example, the light pulse phase $\hat{\phi}_j$ serves as the readout observable of the Meter. We then suppose that the phase measurement error introduced by the detector is much smaller than the initial uncertainty of the phases, $\Delta\phi$. In this case, the initial uncertainty will be the only source of the position measurement error:

$$\Delta x_{\mathrm{meas}} = \frac{\Delta\phi}{2 k_p}. \qquad (2)$$

For convenience, we renormalise Eq. (1) as the equivalent test-mass displacement:

$$\tilde{x}_j \equiv -\frac{\hat{\phi}^{\mathrm{refl}}_j}{2 k_p} = \hat{x}(t_j) + \hat{x}_{\mathrm{fl}}(t_j), \qquad (3)$$

where

$$\hat{x}_{\mathrm{fl}}(t_j) = -\frac{\hat{\phi}_j}{2 k_p}$$

are independent random values with the RMS uncertainties given by Eq. (2).

Upon reflection, each light pulse kicks the test mass, transferring to it a back-action momentum equal to

$$\hat{p}^{\mathrm{after}}_j - \hat{p}^{\mathrm{before}}_j = \hat{p}^{\mathrm{b.a.}}_j = \frac{2}{c}\,\hat{\mathcal{W}}_j, \qquad (4)$$

where $\hat{p}^{\mathrm{before}}_j$ and $\hat{p}^{\mathrm{after}}_j$ are the test-mass momentum values just before and just after the light pulse reflection, and $\hat{\mathcal{W}}_j$ is the energy of the $j$-th pulse, which plays the role of the back-action observable of the Meter. The major part of this perturbation is contributed by classical radiation pressure:

$$\langle\hat{p}^{\mathrm{b.a.}}_j\rangle = \frac{2}{c}\,\mathcal{W},$$

with $\mathcal{W}$ the mean energy of the pulses. Therefore, one could neglect its effect, for it could be either subtracted from the measurement result or compensated by an actuator. The random part, which cannot be compensated, is proportional to the deviation of the pulse energy from its mean:

$$\delta\hat{p}^{\mathrm{b.a.}}_j = \frac{2}{c}\bigl(\hat{\mathcal{W}}_j - \mathcal{W}\bigr),$$

and its RMS uncertainty is equal to

$$\Delta p_{\mathrm{b.a.}} = \frac{2\,\Delta\mathcal{W}}{c}, \qquad (5)$$

with $\Delta\mathcal{W}$ the RMS uncertainty of the pulse energy.

Assuming the mirror is free (which is a fair approximation if the time interval $\vartheta$ between pulses is much shorter than the period of the suspended mirror's oscillations, $\vartheta \ll T$), one can estimate the additional displacement caused by the back action of the $j$-th pulse that will contribute to the uncertainty of the subsequent measurement made by the $(j+1)$-th pulse a time $\vartheta$ later:

$$\hat{x}_{\mathrm{b.a.}}(t_j) = \frac{\delta\hat{p}^{\mathrm{b.a.}}_j\,\vartheta}{M}.$$

Its uncertainty will be simply

$$\Delta x_{\mathrm{b.a.}}(t_j) = \frac{\Delta p_{\mathrm{b.a.}}\,\vartheta}{M}.$$

If we now want to estimate how much the mirror has moved between the $j$-th and $(j+1)$-th pulses, i.e. its displacement $\delta\tilde{x}_{j+1,j} = \tilde{x}_{j+1} - \tilde{x}_{j}$, we will have to deal with three additional uncertainties that limit the precision of our estimate:

$$\Delta\tilde{x}_{j+1,j} = \Bigl[\Delta x_{\mathrm{meas}}^2(t_{j+1}) + \Delta x_{\mathrm{meas}}^2(t_j) + \Delta x_{\mathrm{b.a.}}^2(t_j)\Bigr]^{1/2},$$

where we assumed all contributions to our measurement uncertainty to be statistically independent, so that the total uncertainty is obtained by adding the individual standard deviations in quadrature. If we further assume that all light pulses are similar and have the same phase uncertainty, then

$$\Delta x_{\mathrm{meas}}(t_{j+1}) = \Delta x_{\mathrm{meas}}(t_j) \equiv \Delta x_{\mathrm{meas}} = \frac{\Delta\phi}{2 k_p}.$$

Now, what is the minimum of this sum, and what is the minimum error one can get in this simple estimate? The answer follows from quantum mechanics, if we recall that the energy and the phase of each pulse are canonically conjugate observables and thus obey the uncertainty relation:

$$\Delta\mathcal{W}\,\Delta\phi \geqslant \frac{\hbar\omega_p}{2}.$$

Therefore, it follows from Eqs. (2) and (5) that the position measurement error $\Delta x_{\mathrm{meas}}$ and the momentum perturbation $\Delta p_{\mathrm{b.a.}}$ due to back action also satisfy the uncertainty relation:

$$\Delta x_{\mathrm{meas}}\,\Delta p_{\mathrm{b.a.}} \geqslant \frac{\hbar}{2}.$$

Taking this relation into account, the minimal uncertainty $\Delta x_{\mathrm{meas}}$ that the light pulse should have in order not to perturb the mirror too much should be equal to $\Delta x_{\mathrm{b.a.}}$, yielding for both

$$\Delta x_{\mathrm{meas}} = \Delta x_{\mathrm{b.a.}} = \sqrt{\frac{\hbar\vartheta}{2M}}.$$

Thus the minimal displacement measurement error prescribed by quantum mechanics reads:

$$\Delta\tilde{x}_{j+1,j} \geqslant \sqrt{\frac{3\hbar\vartheta}{2M}}.$$

This is the Standard Quantum Limit for such a 2-pulse procedure. In principle, if we limit our measurement to two pulses only and do not care about perturbing the mirror position afterwards, the second pulse measurement uncertainty, $\Delta x_{\mathrm{meas}}(t_{j+1})$, can, in theory, be reduced to 0 (at the cost, of course, of an arbitrarily strong back-action perturbation of the mirror by that last pulse), and the limit of the displacement measurement error will reduce to:

$$\Delta\tilde{x}_{\mathrm{SQL}} = \sqrt{\frac{\hbar\vartheta}{M}},$$

which is known as the Standard Quantum Limit for the measurement of free-mass displacement.
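As a rough numerical illustration of the two formulas above (a minimal sketch; the mirror mass and pulse spacing below are arbitrary example values, not taken from the text):

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
M = 40.0                 # example test-mass (mirror) mass in kg, chosen arbitrarily
theta = 1.0e-2           # example time between pulses in seconds, chosen arbitrarily

# Free-mass Standard Quantum Limit for the two-pulse displacement measurement
dx_sql = math.sqrt(hbar * theta / M)

# The slightly larger value obtained with the balanced ("equal uncertainties") choice
dx_balanced = math.sqrt(3.0 * hbar * theta / (2.0 * M))

print(f"free-mass SQL:        {dx_sql:.3e} m")
print(f"balanced 2-pulse SQL: {dx_balanced:.3e} m")

With these illustrative numbers both values come out of order 10^-19 m, which gives a feel for why quantum back action matters in precision displacement measurements.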

This example represents a simple particular case of a linear measurement. This class of measurement schemes can be fully described by two linear equations of the form (3) and (4), provided that both the measurement uncertainty and the object back-action perturbation ($\hat{x}_{\mathrm{fl}}(t_j)$ and $\delta\hat{p}^{\mathrm{b.a.}}_j$ in this case) are statistically independent of the test object's initial quantum state and satisfy the same uncertainty relation as the measured observable and its canonically conjugate counterpart (the object position and momentum in this case).

Usage in quantum optics

In the context of interferometry or other optical measurements, the standard quantum limit usually refers to the minimum level of quantum noise which is obtainable without squeezed states.

There is additionally a quantum limit for phase noise, reachable only by a laser at high noise frequencies.

In spectroscopy, the shortest wavelength in an X-ray spectrum is called the quantum limit.
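For context (this identification is not stated above, but it is the standard one): the short-wavelength cutoff of an X-ray tube spectrum is the Duane–Hunt limit, set by the requirement that a single emitted photon can carry at most the full kinetic energy of one electron accelerated through the tube voltage $U$:

$$e U = \frac{h c}{\lambda_{\min}} \quad\Longrightarrow\quad \lambda_{\min} = \frac{h c}{e U}.$$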

Misleading relation to the classical limit

Note that due to an overloading of the word "limit", the classical limit is not the opposite of the quantum limit. In "quantum limit", "limit" is being used in the sense of a physical limitation (e.g. the Armstrong limit). In "classical limit", "limit" is used in the sense of a limiting process. (Note that there is no simple rigorous mathematical limit which fully recovers classical mechanics from quantum mechanics, the Ehrenfest theorem notwithstanding. Nevertheless, in the phase space formulation of quantum mechanics, such limits are more systematic and practical.)

Measurement problem

From Wikipedia, the free encyclopedia

In quantum mechanics, the measurement problem considers how, or whether, wave function collapse occurs. The inability to observe such a collapse directly has given rise to different interpretations of quantum mechanics and poses a key set of questions that each interpretation must answer.

The wave function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states. However, actual measurements always find the physical system in a definite state. Any future evolution of the wave function is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution. The measurement problem is to describe what that "something" is: how a superposition of many possible values becomes a single measured value.

To express matters differently (paraphrasing Steven Weinberg), the Schrödinger wave equation determines the wave function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: How can one establish a correspondence between quantum and classical reality?

Schrödinger's cat

A thought experiment often used to illustrate the measurement problem is the "paradox" of Schrödinger's cat. A mechanism is arranged to kill a cat if a quantum event, such as the decay of a radioactive atom, occurs. Thus the fate of a large-scale object, the cat, is entangled with the fate of a quantum object, the atom. Prior to observation, according to the Schrödinger equation and numerous particle experiments, the atom is in a quantum superposition, a linear combination of decayed and undecayed states, which evolve with time. Therefore the cat should also be in a superposition, a linear combination of states that can be characterized as an "alive cat" and states that can be characterized as a "dead cat". Each of these possibilities is associated with a specific nonzero probability amplitude. However, a single, particular observation of the cat does not find a superposition: it always finds either a living cat, or a dead cat. After the measurement the cat is definitively alive or dead. The question is: How are the probabilities converted into an actual, sharply well-defined classical outcome?

Interpretations

The Copenhagen interpretation is the oldest and probably still the most widely held interpretation of quantum mechanics. Most generally, it posits something in the act of observation which results in the collapse of the wave function. How this could happen is widely disputed. In general, proponents of the Copenhagen Interpretation tend to be impatient with epistemic explanations of the mechanism behind it. This attitude is summed up in the oft-quoted mantra "Shut up and calculate!"

Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting that there is only one wave function, the superposition of the entire universe, and it never collapses—so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities, e.g. observer, measuring instrument, electron/positron etc., which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements; work later extended by Bryce DeWitt.

De Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wave function, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wave function is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particle remains consistent with the predictions of the orthodox quantum mechanics. According to de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wave function collapse comes from, even though there is no actual collapse.

The Ghirardi–Rimini–Weber (GRW) theory is not an interpretation but a modification of the dynamics; it differs from other collapse approaches by proposing that wave function collapse happens spontaneously as part of the dynamics. Particles have a non-zero probability of undergoing a "hit", or spontaneous collapse of the wave function, on the order of once every hundred million years. Though collapse is extremely rare, the sheer number of particles in a measurement system means that the probability of a collapse occurring somewhere in the system is high. Since the entire measurement system is entangled (by quantum entanglement), the collapse of a single particle initiates the collapse of the entire measurement apparatus.

Erich Joos and Heinz-Dieter Zeh claim that the phenomenon of quantum decoherence, which was put on firm ground in the 1980s, resolves the problem. The idea is that the environment causes the classical appearance of macroscopic objects. Zeh further claims that decoherence makes it possible to identify the fuzzy boundary between the quantum microworld and the world where the classical intuition is applicable. Quantum decoherence was proposed in the context of the many-worlds interpretation, but it has also become an important part of some modern updates of the Copenhagen interpretation based on consistent histories. Quantum decoherence does not describe the actual collapse of the wave function, but it explains the conversion of the quantum probabilities (that exhibit interference effects) to the ordinary classical probabilities. See, for example, Zurek, Zeh and Schlosshauer.

The present situation is slowly clarifying, as described in a 2006 article by Schlosshauer as follows:

Several decoherence-unrelated proposals have been put forward in the past to elucidate the meaning of probabilities and arrive at the Born rule ... It is fair to say that no decisive conclusion appears to have been reached as to the success of these derivations. ...
As it is well known, [many papers by Bohr insist upon] the fundamental role of classical concepts. The experimental evidence for superpositions of macroscopically distinct states on increasingly large length scales counters such a dictum. Superpositions appear to be novel and individually existing states, often without any classical counterparts. Only the physical interactions between systems then determine a particular decomposition into classical states from the view of each particular system. Thus classical concepts are to be understood as locally emergent in a relative-state sense and should no longer claim a fundamental role in the physical theory.

A fourth approach is given by objective-collapse models. In such models, the Schrödinger equation is modified and acquires nonlinear terms. These nonlinear modifications are stochastic in nature and lead to a behaviour that, for microscopic quantum objects, e.g. electrons or atoms, is unmeasurably close to that given by the usual Schrödinger equation. For macroscopic objects, however, the nonlinear modification becomes important and induces the collapse of the wave function. Objective-collapse models are effective theories. The stochastic modification is thought to stem from some external non-quantum field, but the nature of this field is unknown. One possible candidate is the gravitational interaction, as in the models of Diósi and Penrose. The main difference of objective-collapse models compared to the other approaches is that they make falsifiable predictions that differ from standard quantum mechanics. Experiments are already getting close to the parameter regime where these predictions can be tested.

Objective-collapse theory

Objective-collapse theories, also known as models of spontaneous wave function collapse or dynamical reduction models, were formulated as a response to the measurement problem in quantum mechanics, to explain why and how quantum measurements always give definite outcomes, not a superposition of them as predicted by the Schrödinger equation, and more generally how the classical world emerges from quantum theory. The fundamental idea is that the unitary evolution of the wave function describing the state of a quantum system is approximate. It works well for microscopic systems, but progressively loses its validity when the mass / complexity of the system increases.

In collapse theories, the Schrödinger equation is supplemented with additional nonlinear and stochastic terms (spontaneous collapses) which localize the wave function in space. The resulting dynamics is such that for microscopic isolated systems the new terms have a negligible effect; therefore, the usual quantum properties are recovered, apart from very tiny deviations. Such deviations can potentially be detected in dedicated experiments, and efforts are increasing worldwide towards testing them.
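Schematically, and only as an illustration of the kind of modification meant here (the specific collapse operators and rates depend on the model), such a dynamics can be written as an Itô stochastic Schrödinger equation:

$$\mathrm{d}\psi_t = \left[-\frac{i}{\hbar}\hat{H}\,\mathrm{d}t + \sqrt{\lambda}\,\bigl(\hat{A}-\langle\hat{A}\rangle_t\bigr)\,\mathrm{d}W_t - \frac{\lambda}{2}\bigl(\hat{A}-\langle\hat{A}\rangle_t\bigr)^2\,\mathrm{d}t\right]\psi_t,$$

where $\hat{A}$ is the operator onto whose (approximate) eigenstates the state is driven (a smeared position or mass-density operator in GRW/CSL-type models), $\lambda$ sets the collapse rate, and $\mathrm{d}W_t$ is a Wiener-process increment. For $\lambda \to 0$ the usual Schrödinger equation is recovered.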

An inbuilt amplification mechanism makes sure that for macroscopic systems consisting of many particles, the collapse becomes stronger than the quantum dynamics. Then their wave function is always well localized in space, so well localized that it behaves, for all practical purposes, like a point moving in space according to Newton’s laws.
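A back-of-the-envelope version of this amplification, using the rate implied earlier (roughly one spontaneous hit per particle per hundred million years, i.e. $\lambda \sim 10^{-16}\,\mathrm{s}^{-1}$) and an illustrative $N \sim 10^{23}$ constituents:

$$\Lambda_{\mathrm{macro}} \simeq N\,\lambda \sim 10^{23} \times 10^{-16}\,\mathrm{s}^{-1} = 10^{7}\,\mathrm{s}^{-1},$$

so the center of mass of such an object is localized within a fraction of a microsecond, even though any individual constituent almost never collapses on its own.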

In this sense, collapse models provide a unified description of microscopic and macroscopic systems, avoiding the conceptual problems associated with measurements in quantum theory.

The most well-known examples of such theories are the Ghirardi–Rimini–Weber (GRW) model, the continuous spontaneous localization (CSL) model, and the Diósi–Penrose (DP) model, discussed in more detail below.

Collapse theories stand in opposition to many-worlds interpretations, in that they hold that a process of wave function collapse curtails the branching of the wave function and removes unobserved behaviour.

History of collapse theories

The genesis of collapse models dates back to the 1970s. In Italy, the group of L. Fonda, G.C. Ghirardi and A. Rimini was studying how to derive the exponential decay law in decay processes, within quantum theory. In their model, an essential feature was that, during the decay, particles undergo spontaneous collapses in space, an idea that was later carried over to characterize the GRW model. Meanwhile, P. Pearle in the USA was developing nonlinear and stochastic equations, to model the collapse of the wave function in a dynamical way; this formalism was later used for the CSL model. However, these models lacked the character of “universality” of the dynamics, i.e. its applicability to an arbitrary physical system (at least at the non-relativistic level), a necessary condition for any model to become a viable option.

The breakthrough came in 1986, when Ghirardi, Rimini and Weber published the paper with the meaningful title “Unified dynamics for microscopic and macroscopic systems”, where they presented what is now known as the GRW model, after the initials of the authors. The model contains all the ingredients a collapse model should have:

  • The Schrödinger dynamics is modified by adding nonlinear stochastic terms, whose effect is to randomly localize the wave function in space.
  • For microscopic systems, the new terms are mostly negligible.
  • For macroscopic objects, the new dynamics keeps the wave function well localized in space, thus ensuring classicality.
  • In particular, at the end of measurements, there are always definite outcomes, distributed according to the Born rule.
  • Deviations from quantum predictions are compatible with current experimental data.  

In 1990 the efforts of the GRW group on one side, and of P. Pearle on the other, were brought together in formulating the Continuous Spontaneous Localization (CSL) model, where the Schrödinger dynamics and the random collapse are described within one stochastic differential equation, which is also capable of describing systems of identical particles, a feature which was missing in the GRW model.

In the late 1980s and 1990s, Diósi and Penrose independently formulated the idea that the wave function collapse is related to gravity. The dynamical equation is structurally similar to the CSL equation.

In the context of collapse models, it is worthwhile to mention the theory of quantum state diffusion.

Most popular models

Three models are most widely discussed in the literature:

  • Ghirardi–Rimini–Weber (GRW) model: It is assumed that each constituent of a physical system independently undergoes spontaneous collapses. The collapses are random in time, distributed according to a Poisson distribution; they are random in space and are more likely to occur where the wave function is larger. In between collapses, the wave function evolves according to the Schrödinger equation. For composite systems, the collapse of each constituent causes the collapse of the center-of-mass wave function. (A toy numerical sketch of a single GRW collapse is given after this list.)
  • Continuous spontaneous localization (CSL) model: The Schrödinger equation is supplemented with a nonlinear and stochastic diffusion process driven by a suitably chosen universal noise coupled to the mass-density of the system, which counteracts the quantum spread of the wave function. As for the GRW model, the larger the system, the stronger the collapse, thus explaining the quantum-to-classical transition as a progressive breakdown of quantum linearity, when the system’s mass increases. The CSL model is formulated in terms of identical particles.
  • Diósi–Penrose (DP) model: Diósi and Penrose formulated the idea that gravity is responsible for the collapse of the wave function. Penrose argued that, in a quantum gravity scenario where a spatial superposition creates the superposition of two different spacetime curvatures, gravity does not tolerate such superpositions and spontaneously collapses them. He also provided a phenomenological formula for the collapse time. Independently and prior to Penrose, Diósi presented a dynamical model that collapses the wave function with the same time scale suggested by Penrose.
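As a toy illustration of a single GRW "hit" (mentioned in the first bullet above and referenced there), the sketch below localizes a discretized 1D wave function that starts as a superposition of two packets. All numbers (grid, localization width r_c, hit rate) are illustrative, and the free Schrödinger evolution between hits is omitted to keep the example short.

import numpy as np

# 1D grid and an initial superposition of two Gaussian packets
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-(x - 4.0) ** 2) + np.exp(-(x + 4.0) ** 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

r_c = 1.0   # localization width of a single GRW hit (illustrative value)

def grw_hit(psi, x, dx, r_c, rng):
    """Apply one spontaneous GRW localization ("hit") to psi."""
    # The collapse center z is drawn with probability proportional to the
    # norm of the localized state, i.e. hits are more likely where |psi|^2 is large.
    kernel = lambda z: np.exp(-(x - z) ** 2 / (2.0 * r_c ** 2))
    weights = np.array([np.sum(np.abs(kernel(z) * psi) ** 2) * dx for z in x])
    weights /= weights.sum()
    z = rng.choice(x, p=weights)
    psi_new = kernel(z) * psi
    psi_new /= np.sqrt(np.sum(np.abs(psi_new) ** 2) * dx)
    return z, psi_new

rng = np.random.default_rng(0)
# Hit times follow a Poisson process; here we draw a single waiting time
# with an artificially large rate so the toy run shows one collapse.
t_hit = rng.exponential(1.0)
z, psi = grw_hit(psi, x, dx, r_c, rng)
print(f"hit at t = {t_hit:.2f} (arb. units), localized near z = {z:.2f}")

Running this repeatedly shows the collapse center landing under one of the two packets with roughly equal frequency, mirroring the weights of the initial superposition.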

The Quantum Mechanics with Universal Position Localization (QMUPL) model should also be mentioned: an extension of the GRW model to identical particles formulated by Tumulka, for which several important mathematical results regarding the collapse equations have been proven.

In all models listed so far, the noise responsible for the collapse is Markovian (memoryless): either a Poisson process in the discrete GRW model, or a white noise in the continuous models. The models can be generalized to include arbitrary (colored) noises, possibly with a frequency cutoff: the CSL model has been extended to its colored version (cCSL), as has the QMUPL model (cQMUPL). In these new models the collapse properties remain basically unaltered, but specific physical predictions can change significantly.

In collapse models the energy is not conserved, because the noise responsible for the collapse induces Brownian motion on each constituent of a physical system. Accordingly, the kinetic energy increases at a faint but constant rate. Such a feature can be modified, without altering the collapse properties, by including appropriate dissipative effects in the dynamics. This is achieved for the GRW, CSL and QMUPL models, obtaining their dissipative counterparts (dGRW, dCSL, dQMUPL). In these new models, the energy thermalizes to a finite value.

Lastly, the QMUPL model was further generalized to include both colored noise as well as dissipative effects (dcQMUPL model).

Tests of collapse models

Collapse models modify the Schrödinger equation; therefore, they make predictions that differ from standard quantum mechanical predictions. Although the deviations are difficult to detect, there is a growing number of experiments searching for spontaneous collapse effects. They can be classified into two groups:

  • Interferometric experiments. They are refined versions of the double-slit experiment, showing the wave nature of matter (and light). The modern versions are meant to increase the mass of the system, the time of flight, and/or the delocalization distance in order to create ever larger superpositions. The most prominent experiments of this kind are with atoms, molecules and phonons.
  • Non-interferometric experiments. They are based on the fact that the collapse noise, besides collapsing the wave function, also induces a diffusion on top of the particles' motion, which acts at all times, even when the wave function is already localized. Experiments of this kind involve cold atoms, opto-mechanical systems, gravitational wave detectors, and underground experiments.

Problems and criticisms of collapse theories

Violation of the principle of conservation of energy. According to collapse theories, energy is not conserved, even for isolated particles. More precisely, in the GRW, CSL and DP models the kinetic energy increases at a constant rate, which is small but non-zero. This is often presented as an unavoidable consequence of Heisenberg's uncertainty principle: the collapse in position causes a larger uncertainty in momentum. This explanation is fundamentally wrong. Actually, in collapse theories the collapse in position also determines a localization in momentum: the wave function is driven to an almost minimum-uncertainty state both in position and in momentum, compatibly with Heisenberg's principle.

The reason the energy increases according to collapse theories is that the collapse noise diffuses the particle, thus accelerating it. This is the same situation as in classical Brownian motion. And, as for classical Brownian motion, this increase can be stopped by adding dissipative effects. Dissipative versions of the QMUPL, GRW and CSL models exist, where the collapse properties are left unaltered with respect to the original models, while the energy thermalizes to a finite value (therefore it can even decrease, depending on its initial value).

Still, even in the dissipative models the energy is not strictly conserved. A resolution to this situation might come from treating the noise itself as a dynamical variable with its own energy, which is exchanged with the quantum system in such a way that the total system-plus-noise energy is conserved.

Relativistic collapse models. One of the biggest challenges for collapse theories is to make them compatible with relativistic requirements; the GRW, CSL and DP models are not. The biggest difficulty is how to combine the nonlocal character of the collapse, which is necessary in order to make it compatible with the experimentally verified violation of Bell inequalities, with the relativistic principle of locality. Models exist that attempt to generalize the GRW and CSL models in a relativistic sense, but their status as relativistic theories is still unclear. The formulation of a proper Lorentz-covariant theory of continuous objective collapse is still a matter of research.

Tail problem. In all collapse theories, the wave function is never fully contained within one (small) region of space, because the Schrödinger term of the dynamics will always spread it out. Therefore, wave functions always contain tails stretching out to infinity, although their "weight" becomes smaller the larger the system. Critics of collapse theories argue that it is not clear how to interpret these tails, since they amount to the system never being really fully localized in space. Supporters of collapse theories mostly dismiss this criticism as a misunderstanding of the theory: in the context of dynamical collapse theories, the absolute square of the wave function is interpreted as an actual matter density. In this case, the tails merely represent an immeasurably small amount of smeared-out matter, while from a macroscopic perspective all particles appear to be point-like for all practical purposes.

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...