The measurement problem in quantum mechanics is the problem of how (or whether) wave function collapse occurs. The inability to observe this process directly has given rise to different interpretations of quantum mechanics, and poses a key set of questions that each interpretation must answer. The wave function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states, but actual measurements always find the physical system in a definite state. Any future evolution is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution.
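The contrast between deterministic Schrödinger evolution and probabilistic measurement can be seen numerically for a single qubit. This is a minimal illustrative sketch: the Hamiltonian (taken as the Pauli-x matrix), the evolution time, and the initial state are all arbitrary choices.

```python
import numpy as np

# Start in the definite state |0>; units chosen so that hbar = 1.
psi = np.array([1, 0], dtype=complex)

# Deterministic Schrödinger evolution: apply the unitary U = exp(-iHt)
# for an illustrative Hamiltonian H = sigma_x.
H = np.array([[0, 1], [1, 0]], dtype=complex)
t = 0.3
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
psi_t = U @ psi  # now a superposition -- evolution alone never picks an outcome

# Measurement: the Born rule gives only probabilities |<k|psi>|^2 ...
probs = np.abs(psi_t) ** 2
print(probs, probs.sum())  # probabilities sum to 1

# ... while each individual run yields a single definite outcome.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
```

Nothing in the unitary step selects one of the two outcomes; that selection only enters through the sampling line, which is exactly the step the measurement problem asks about.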
To express matters differently (to paraphrase Steven Weinberg[1][2]), the Schrödinger wave equation determines the wave function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: How can one establish a correspondence between quantum and classical reality?[3]
Interpretations
The Copenhagen interpretation is the oldest and probably still the most widely held interpretation of quantum mechanics.[4][5][6] Most generally, it posits that something in the act of observation results in the collapse of the wave function. According to the von Neumann–Wigner interpretation, the causative agent in this collapse is consciousness.[7] How this could happen is widely disputed.

Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting that there is only one wave function, the superposition of the entire universe, and that it never collapses, so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities, e.g. observer, measuring instrument, electron/positron, etc., which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements, work later extended by Bryce DeWitt.

De Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wave function, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wave function is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particle remains consistent with the predictions of orthodox quantum mechanics. According to de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wave function collapse comes from, even though there is no actual collapse.
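The entangling interaction described in the many-worlds picture can be sketched numerically. This is a toy model: the amplitudes and the three-state "pointer" apparatus are made-up illustrative choices, not anything from the literature.

```python
import numpy as np

# System qubit in a superposition (amplitudes a, b chosen arbitrarily).
a, b = np.sqrt(0.3), np.sqrt(0.7)
system = np.array([a, b], dtype=complex)

# Apparatus pointer states: |ready> before measurement; |p0>, |p1> record outcomes.
ready, p0, p1 = np.eye(3, dtype=complex)

# Pre-measurement joint state: (a|0> + b|1>) tensor |ready> -- a product state.
joint = np.kron(system, ready)

# An ideal measurement interaction correlates system and pointer:
# |0>|ready> -> |0>|p0>,  |1>|ready> -> |1>|p1>
entangled = a * np.kron([1, 0], p0) + b * np.kron([0, 1], p1)

# The result is a single entangled "larger entity", not a product state:
# reshaped as a 2x3 matrix, a product state would have rank 1.
rank_before = np.linalg.matrix_rank(joint.reshape(2, 3))      # 1: product
rank_after = np.linalg.matrix_rank(entangled.reshape(2, 3))   # 2: entangled
print(rank_before, rank_after)
```

The point of the sketch is that the post-interaction state contains both outcomes at once, correlated with the apparatus; in the many-worlds reading nothing further happens, whereas collapse interpretations must add a step that removes one branch.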
The Ghirardi–Rimini–Weber (GRW) theory differs from other collapse theories by proposing that wave function collapse happens spontaneously. Particles have a non-zero probability of undergoing a "hit", or spontaneous collapse of the wave function, on the order of once every hundred million years.[8] Though collapse is extremely rare, the sheer number of particles in a measurement system means that the probability of a collapse occurring somewhere in the system is high. Since the entire measurement system is entangled (via quantum entanglement), the collapse of a single particle initiates the collapse of the entire measurement apparatus.
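The quoted numbers allow a back-of-the-envelope estimate of why GRW collapse looks instantaneous for macroscopic objects. This sketch assumes a hit rate of roughly once per 10^8 years per particle and about 10^23 particles in the apparatus; both figures are order-of-magnitude assumptions.

```python
import math

SECONDS_PER_YEAR = 3.156e7                          # approximate
rate_per_particle = 1 / (1e8 * SECONDS_PER_YEAR)    # ~ one hit per 10^8 years

# A macroscopic apparatus: order of Avogadro's number of particles
# (the exact count is an illustrative assumption).
n_particles = 1e23

# Independent hits combine into a Poisson process of rate N * lambda,
# so the mean wait for the first hit anywhere in the system is:
mean_wait = 1 / (n_particles * rate_per_particle)
print(f"{mean_wait:.1e} s")  # on the order of 10^-8 s: effectively instantaneous

# Probability that at least one particle is hit within one microsecond:
p_hit = 1 - math.exp(-n_particles * rate_per_particle * 1e-6)
```

So although any single particle almost never collapses, entanglement means the first hit anywhere localizes the whole apparatus within a tiny fraction of a second.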
Erich Joos and Heinz-Dieter Zeh claim that the phenomenon of quantum decoherence, which was put on firm ground in the 1980s, resolves the problem.[9] The idea is that the environment causes the classical appearance of macroscopic objects. Zeh further claims that decoherence makes it possible to identify the fuzzy boundary between the quantum microworld and the world where the classical intuition is applicable.[10][11] Quantum decoherence was proposed in the context of the many-worlds interpretation[citation needed], but it has also become an important part of some modern updates of the Copenhagen interpretation based on consistent histories.[12][13] Quantum decoherence does not describe the actual process of the wave function collapse, but it explains the conversion of the quantum probabilities (that exhibit interference effects) to the ordinary classical probabilities. See, for example, Zurek,[3] Zeh[10] and Schlosshauer.[14]
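What decoherence does, and does not do, can be illustrated with a toy density-matrix calculation: environmental coupling suppresses the off-diagonal interference terms while leaving the diagonal probabilities intact. The exponential damping and its rate here are assumptions of the sketch, not derived from any particular environment model.

```python
import numpy as np

# Density matrix of the superposition (|0> + |1>)/sqrt(2):
# diagonal = probabilities, off-diagonal = interference ("coherence") terms.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def decohere(rho, t, rate=1.0):
    """Toy model: the environment damps off-diagonal elements
    exponentially while leaving the diagonal untouched
    (the rate is an arbitrary illustrative parameter)."""
    d = np.exp(-rate * t)
    return rho * np.array([[1, d], [d, 1]])

rho_late = decohere(rho, t=20.0)

# Interference terms vanish, leaving an ordinary classical mixture --
# but the diagonal still lists BOTH outcomes; no single one is chosen.
print(np.round(rho_late.real, 6))
```

The final state is diagonal, so it behaves like a classical probability distribution, which is exactly the conversion described above; selecting one definite outcome from that distribution is the part decoherence leaves open.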
The present situation is slowly clarifying, as Schlosshauer describes in a recent paper:[15]
- Several decoherence-unrelated proposals have been put forward in the past to elucidate the meaning of probabilities and arrive at the Born rule ... It is fair to say that no decisive conclusion appears to have been reached as to the success of these derivations. ...
- As it is well known, [many papers by Bohr insist upon] the fundamental role of classical concepts. The experimental evidence for superpositions of macroscopically distinct states on increasingly large length scales counters such a dictum. Superpositions appear to be novel and individually existing states, often without any classical counterparts. Only the physical interactions between systems then determine a particular decomposition into classical states from the view of each particular system. Thus classical concepts are to be understood as locally emergent in a relative-state sense and should no longer claim a fundamental role in the physical theory.
An interesting solution to the measurement problem is also provided by the hidden-measurements interpretation of quantum mechanics. The hypothesis at the basis of this approach is that in a typical quantum measurement there is a condition of lack of knowledge about which interaction between the measured entity and the measuring apparatus is actualized at each run of the experiment. One can then show that the Born rule can be derived by considering a uniform average over all these possible measurement-interactions.
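How a uniform average over hidden measurement-interactions can reproduce the Born rule is often motivated by a simple sphere ("elastic band") model, which can be checked by Monte Carlo. The geometry below is a property of that toy model only, not a claim about the general approach: the state is a point on the Bloch sphere, each run an elastic along the measurement axis breaks at a uniformly random point (the hidden variable), and the outcome depends on where the break falls relative to the state's projection.

```python
import numpy as np

rng = np.random.default_rng(1)

def hidden_measurement(theta, n_runs=200_000):
    """Toy sphere model: for a state at polar angle theta, the elastic
    breaks at a uniformly random point u in [-1, 1]; the outcome is
    'up' when the break lies below the projection cos(theta).
    Which u occurs in a given run is the unknown, 'hidden' interaction."""
    u = rng.uniform(-1, 1, size=n_runs)
    return np.mean(u < np.cos(theta))

theta = 1.1  # arbitrary state angle
freq = hidden_measurement(theta)
born = np.cos(theta / 2) ** 2  # Born-rule prediction cos^2(theta/2)
print(freq, born)
```

Analytically, P(u < cos θ) = (1 + cos θ)/2 = cos²(θ/2), so the uniform average over breaking points reproduces the quantum probability, matching the derivation described above.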