
Tuesday, May 5, 2015

Quantum computing


From Wikipedia, the free encyclopedia

The Bloch sphere is a representation of a qubit, the fundamental building block of quantum computers.

Quantum computing studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data.[1]

Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers. The field of quantum computing was initiated by the work of Yuri Manin in 1980,[2] Richard Feynman in 1982,[3] and David Deutsch.[4] A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.[5]

As of 2015, the development of actual quantum computers is still in its infancy, but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits.[6]

Both practical and theoretical research continues, and many national governments and military agencies are funding quantum computing research in an effort to develop quantum computers for civilian, business, trade, and national security purposes, such as cryptanalysis.[7]

Large-scale quantum computers will be able to solve certain problems much more quickly than any classical computers that use even the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm.[8] Given sufficient computational resources, however, a classical computer could be made to simulate any quantum algorithm, as quantum computation does not violate the Church–Turing thesis.[9]

Basis

A classical computer has a memory made up of bits, where each bit represents either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of those two qubit states; a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is zero or one. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often non-deterministic, in that they provide the correct solution only with a certain known probability.
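
To make this concrete, here is a minimal sketch (assuming Python with NumPy as the implementation tool; no particular quantum hardware or library is implied) that stores an n-qubit register as a vector of 2^n complex amplitudes, applies one quantum logic gate, and simulates the final measurement by sampling:

    import numpy as np

    n = 3
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                # start in |000>

    # A Hadamard gate on a single qubit; identity on the others.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    I = np.eye(2, dtype=complex)

    def gate_on_qubit(gate, target, n):
        """Build the 2^n x 2^n matrix that applies `gate` to one qubit."""
        ops = [gate if q == target else I for q in range(n)]
        full = ops[0]
        for op in ops[1:]:
            full = np.kron(full, op)
        return full

    state = gate_on_qubit(H, 0, n) @ state        # superposition of |000> and |100>

    # Measurement: collapse to one of the 2^n basis states with probability
    # equal to the squared magnitude of its amplitude.
    probs = np.abs(state) ** 2
    outcome = int(np.random.choice(2 ** n, p=probs))
    print(format(outcome, '0' + str(n) + 'b'))    # n classical bits out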

An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |{\downarrow}\rangle and |{\uparrow}\rangle, or |0\rangle and |1\rangle). But in fact any system possessing an observable quantity A, which is conserved under time evolution such that A has at least two discrete and sufficiently spaced consecutive eigenvalues, is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system.

Mechanics

A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.

Qubits are made up of controlled particles and the means of control (e.g. devices that trap particles and switch them from one state to another).[10]

For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the 2^3=8 different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1.
However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A,B,C,D,E,F,G,H (where A is the probability that the computer is in state 000, B is the probability that the computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.

The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. Here, however, the coefficients can have complex values, and it is the sum of the squares of the coefficients' magnitudes, |a|^2+|b|^2+\cdots+|h|^2, that must equal 1. These squared magnitudes represent the probability of each of the given states. However, because a complex number encodes not just a magnitude but also a direction in the complex plane, the phase difference between any two coefficients (states) represents a meaningful parameter. This is a fundamental difference between quantum computing and probabilistic classical computing.[11]

If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 = |a|^2, the probability of measuring 001 = |b|^2, etc.). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution (|a|^2, |b|^2, \ldots, |h|^2), and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
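
The contrast between the two descriptions can be written out directly (a sketch in Python/NumPy, with arbitrary example values): a probabilistic three-bit register is eight nonnegative numbers summing to one, while a three-qubit ket is eight complex amplitudes whose squared magnitudes sum to one and give the measurement distribution:

    import numpy as np

    labels = [format(i, '03b') for i in range(8)]        # 000 ... 111

    # Classical probabilistic register: eight probabilities A..H summing to 1.
    classical = np.array([0.5, 0.5, 0, 0, 0, 0, 0, 0])
    assert np.isclose(classical.sum(), 1.0)

    # Three-qubit ket: eight complex amplitudes a..h with |a|^2 + ... + |h|^2 = 1.
    ket = np.zeros(8, dtype=complex)
    ket[0] = 1 / np.sqrt(2)
    ket[7] = 1j / np.sqrt(2)                             # note the complex phase
    assert np.isclose(np.sum(np.abs(ket) ** 2), 1.0)

    # Measuring the ket yields the classical distribution of squared magnitudes.
    measured_distribution = np.abs(ket) ** 2
    for s, p in zip(labels, measured_distribution):
        if p > 0:
            print(s, p)                                  # 000 and 111, each with probability ~0.5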

Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, …, 111) is known as the computational basis. Other bases are possible; any set of mutually orthogonal unit vectors spanning the space can serve, such as the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as:
a\,|000\rangle + b\,|001\rangle + c\,|010\rangle + d\,|011\rangle + e\,|100\rangle + f\,|101\rangle + g\,|110\rangle + h\,|111\rangle
where, e.g., |010\rangle = \left(0,0,1,0,0,0,0,0\right)
The computational basis for a single qubit (two dimensions) is |0\rangle = \left(1,0\right) and |1\rangle = \left(0,1\right).
Using the eigenvectors of the Pauli-x operator, a single qubit is |+\rangle = \tfrac{1}{\sqrt{2}} \left(1,1\right) and |-\rangle = \tfrac{1}{\sqrt{2}} \left(1,-1\right).
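
A quick numerical check of these basis vectors (a sketch, with Python/NumPy assumed): |+\rangle and |-\rangle are indeed eigenvectors of the Pauli-x operator, and a computational-basis state has different coordinates when expressed in that basis:

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)            # Pauli-x operator

    plus  = np.array([1,  1], dtype=complex) / np.sqrt(2)     # |+>
    minus = np.array([1, -1], dtype=complex) / np.sqrt(2)     # |->

    assert np.allclose(X @ plus,  +1 * plus)                  # eigenvalue +1
    assert np.allclose(X @ minus, -1 * minus)                 # eigenvalue -1

    # The same single-qubit state has different coordinates in different bases:
    zero = np.array([1, 0], dtype=complex)                    # |0> in the computational basis
    print(np.vdot(plus, zero), np.vdot(minus, zero))          # both 1/sqrt(2) in the +/- basis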

Operation

While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000\rangle, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve the property that the probabilities add up to one (i.e., they preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve the property that the sum of the squared magnitudes adds up to one, the Euclidean or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)
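
The norm-preservation and reversibility claims can be illustrated with a small sketch (Python/NumPy assumed; the particular stochastic matrix and gate are arbitrary examples):

    import numpy as np

    # A stochastic matrix on a classical bit: each column sums to 1 (preserves the L1 norm).
    S = np.array([[0.9, 0.2],
                  [0.1, 0.8]])
    p = np.array([1.0, 0.0])
    print((S @ p).sum())                        # still 1.0

    # A unitary on a qubit: U^dagger U = I (preserves the L2 norm), hence reversible.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi = np.array([1, 0], dtype=complex)
    out = H @ psi
    print(np.linalg.norm(out))                  # still 1.0
    print(np.allclose(H.conj().T @ out, psi))   # undoing the gate recovers psi: True

    # By contrast, S's matrix inverse has negative entries, so this classical
    # mixing step cannot be undone by any stochastic matrix.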

Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000.
Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer's results, the probability of getting the correct answer can be increased.
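
The effect of repetition is easy to quantify (a sketch of the arithmetic only; the single-run success probability of 2/3 is an illustrative assumption): if one run succeeds with probability p, then k independent runs all fail with probability (1 - p)^k.

    p = 2 / 3                                # assumed single-run success probability
    for k in (1, 5, 10, 20):
        print(k, 1 - (1 - p) ** k)           # probability that at least one run is correct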

For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch–Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.

Potential

Integer factorization is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes).[12] By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. For example, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.

However, other cryptographic algorithms do not appear to be broken by those algorithms.[13][14] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[13][15] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem.[16] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^{n/2} invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case,[17] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography.
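
As a rough worked example of the key-length halving (a sketch of the arithmetic only, not a statement about any particular cipher's internals): brute force against an n-bit key costs about 2^n trial decryptions classically, versus about 2^{n/2} Grover invocations.

    for n in (128, 256):
        classical_trials = 2 ** n            # classical brute-force cost
        grover_calls = 2 ** (n // 2)         # Grover cost, roughly the square root
        print(n, classical_trials, grover_calls)
    # Grover against a 256-bit key (~2^128 calls) is comparable to classical
    # brute force against a 128-bit key (~2^128 trials): the key length is
    # effectively halved.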

Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems,[18] including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.

Consider a problem that has these four properties:
  1. The only way to solve it is to guess answers repeatedly and check them,
  2. The number of possible answers to check is the same as the number of inputs,
  3. Every possible answer takes the same amount of time to check, and
  4. There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
An example of this is a password cracker that attempts to guess the password for an encrypted file (assuming that the password has a maximum possible length).

For problems with all four properties, the time for a quantum computer to solve this will be proportional to the square root of the number of inputs. It can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.[19]
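
The square-root behaviour can be seen in a direct state-vector simulation of Grover's algorithm (a sketch in Python/NumPy, not a physical implementation; the register size and the marked answer are arbitrary assumptions). The algorithm alternates an oracle that flips the sign of the marked answer's amplitude with an "inversion about the mean", and after roughly (pi/4)*sqrt(N) iterations the marked answer dominates the measurement probabilities:

    import numpy as np

    n = 10                        # 10 qubits -> N = 1024 possible answers
    N = 2 ** n
    marked = 123                  # the unknown answer the oracle recognizes

    state = np.full(N, 1 / np.sqrt(N), dtype=complex)     # uniform superposition

    iterations = int(round(np.pi / 4 * np.sqrt(N)))       # ~25, versus ~N/2 classical guesses
    for _ in range(iterations):
        state[marked] *= -1                               # oracle: flip the marked amplitude
        state = 2 * state.mean() - state                  # diffusion: inversion about the mean

    probs = np.abs(state) ** 2
    print(iterations, probs[marked])                      # ~25 iterations, success probability ~1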

Grover's algorithm can also be used to obtain a quadratic speed-up over a brute-force search for a class of problems known as NP-complete.

Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.[20] Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.[21]

There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:[22]
  • scalable physically to increase the number of qubits;
  • qubits that can be initialized to arbitrary values;
  • quantum gates that are faster than decoherence time;
  • universal gate set;
  • qubits that can be read easily.

Quantum decoherence

One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background nuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.[11] Currently, some quantum computers require their qubits to be cooled to 20 millikelvin in order to prevent significant decoherence.[23]

These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.

If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often cited figure for the required error rate in each gate is 10^-4. This implies that each gate must be able to perform its task in one 10,000th of the decoherence time of the system.

Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 qubits without error correction.[24] With error correction, the figure would rise to about 10^7 qubits. Note that the computation time is about L^2, or about 10^7 steps; at 1 MHz, this is about 10 seconds.
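
Restated as explicit arithmetic (a minimal sketch using only the figures quoted above):

    L = 1000                               # bits in the number to be factored
    qubits_no_ec = 10 ** 4                 # "about 10^4 qubits" without error correction
    qubits_with_ec = qubits_no_ec * L      # extra factor of L from error correction
    steps = 10 ** 7                        # "about 10^7 steps"
    clock_hz = 10 ** 6                     # 1 MHz gate rate
    print(qubits_with_ec)                  # ~10^7 qubits
    print(steps / clock_hz, "seconds")     # ~10 seconds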

A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.[25][26]

Developments

There are a number of quantum computing models, distinguished by the basic elements into which the computation is decomposed. The four main models of practical importance are:
  • the quantum gate array (computation decomposed into a sequence of few-qubit quantum gates);
  • the one-way quantum computer (computation decomposed into a sequence of one-qubit measurements applied to a highly entangled initial state, a so-called cluster state);
  • the adiabatic quantum computer, based on quantum annealing (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian whose ground state contains the solution);
  • the topological quantum computer (computation decomposed into the braiding of anyons in a two-dimensional lattice).
The quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent; each can simulate the others with no more than polynomial overhead.

For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits: among them superconducting circuits, trapped ions, optical lattices, quantum dots, nuclear magnetic resonance, nitrogen-vacancy centers in diamond, and linear optics. The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy; there is also a vast amount of flexibility.

Timeline

In 2001, researchers demonstrated Shor's algorithm to factor 15 using a 7-qubit NMR computer.[41]

In 2005, researchers at the University of Michigan built a semiconductor chip ion trap. Such devices, made using standard lithography, may point the way to scalable quantum computing.[42]

In 2009, researchers at Yale University created the first solid-state quantum processor. The two-qubit superconducting chip had artificial atom qubits made of a billion aluminum atoms that acted like a single atom that could occupy two states.[43][44]

A team at the University of Bristol also created a silicon chip based on quantum optics, able to run Shor's algorithm.[45] Further developments were made in 2010.[46] Springer publishes a journal (Quantum Information Processing) devoted to the subject.[47]

In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation. They successfully transferred a complex set of quantum data with full transmission integrity, without affecting the qubits' superpositions.[48][49]

Photograph of a chip constructed by D-Wave Systems Inc., mounted and wire-bonded in a sample holder. The D-Wave processor is designed to use 128 superconducting logic elements that exhibit controllable and tunable coupling to perform operations.

In 2011, D-Wave Systems announced the first commercial quantum annealer, the D-Wave One, claiming a 128-qubit processor.[50] On May 25, 2011, Lockheed Martin agreed to purchase a D-Wave One system.[51] Lockheed and the University of Southern California (USC) will house the D-Wave One at the newly formed USC Lockheed Martin Quantum Computing Center.[52] D-Wave's engineers designed the chips with an empirical approach, focusing on solving particular problems. Investors liked this more than academics, who said D-Wave had not demonstrated that they really had a quantum computer. Criticism softened after a D-Wave paper in Nature showed that the chips have some quantum properties.[53][54] Experts remain skeptical of D-Wave's claims; two published papers have concluded that the D-Wave machine operates classically, not via quantum computing.[55][56]

During the same year, researchers at the University of Bristol created an all-bulk optics system that ran a version of Shor's algorithm to successfully factor 21.[57]

In September 2011 researchers proved quantum computers can be made with a Von Neumann architecture (separation of RAM).[58]

In November 2011 researchers factorized 143 using 4 qubits.[59]

In February 2012 IBM scientists said that they had made several breakthroughs in quantum computing with superconducting integrated circuits.[60]

In April 2012, a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara, constructed a two-qubit quantum computer on a doped diamond crystal that can easily be scaled up and is functional at room temperature. The two logical qubits were encoded in an electron spin and a nitrogen nuclear spin, manipulated with microwave pulses. This computer ran Grover's algorithm, returning the right answer on the first try in 95% of cases.[61]

In September 2012, Australian researchers at the University of New South Wales said the world's first quantum computer was just 5 to 10 years away, after announcing a global breakthrough enabling manufacture of its memory building blocks. A research team led by Australian engineers created the first working qubit based on a single atom in silicon, using the same technological platform that forms the building blocks of modern-day computers.[62][63]

In October 2012, Nobel Prizes were presented to David J. Wineland and Serge Haroche for their basic work on understanding the quantum world, which may help make quantum computing possible.[64][65]

In November 2012, the first quantum teleportation from one macroscopic object to another was reported.[66][67]

In December 2012, the first dedicated quantum computing software company, 1QBit, was founded in Vancouver, BC.[68] 1QBit is the first company to focus exclusively on commercializing software applications for commercially available quantum computers, including the D-Wave Two. 1QBit's research demonstrated the ability of superconducting quantum annealing processors to solve real-world problems.[69]

In February 2013, a new technique, boson sampling, was reported by two groups using photons in an optical lattice; the device is not a universal quantum computer but may be good enough for practical problems (Science, Feb 15, 2013).

In May 2013, Google announced that it was launching the Quantum Artificial Intelligence Lab, hosted by NASA's Ames Research Center, with a 512-qubit D-Wave quantum computer. The USRA (Universities Space Research Association) will invite researchers to share time on it with the goal of studying quantum computing for machine learning.[70]

In early 2014 it was reported, based on documents provided by former NSA contractor Edward Snowden, that the U.S. National Security Agency (NSA) is running a $79.7 million research program (titled "Penetrating Hard Targets") to develop a quantum computer capable of breaking vulnerable encryption.[71]

In 2014, a group of researchers from ETH Zürich, USC, Google and Microsoft reported a definition of quantum speedup, and were not able to measure quantum speedup with the D-Wave Two device, but did not explicitly rule it out.[72][73]

In 2014, researchers at the University of New South Wales used silicon as a protective shell around qubits, making them more accurate, increasing the length of time they hold information, and possibly making quantum computers easier to build.[74]

In April 2015, IBM scientists claimed two critical advances towards the realization of a practical quantum computer: the ability to detect and measure both kinds of quantum errors simultaneously, and a new, square quantum bit circuit design that could scale to larger dimensions.[75]

Relation to computational complexity theory

The suspected relationship of BQP to other problem spaces.[76]

The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half.[77] A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.

BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^{#P}),[78] which is a subclass of PSPACE.

BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.[78]

The capacity of a quantum computer to accelerate classical algorithms has rigid limits, in the form of upper bounds on the complexity of quantum computation. The overwhelming majority of classical calculations cannot be accelerated on a quantum computer.[79] A similar statement holds for particular computational tasks, like the search problem, for which Grover's algorithm is optimal.[80]

Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis.[81] It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e., there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.[82]

Quantum state


From Wikipedia, the free encyclopedia

In quantum physics, quantum state refers to the state of a quantum system.

A quantum state can be either pure or mixed. A pure quantum state is represented by a vector, called a state vector, in a Hilbert space. For example, when dealing with the energy spectrum of the electron in a hydrogen atom, the relevant state vectors are identified by the principal quantum number n. For a more complicated case, consider Bohm's formulation of the EPR experiment, where the state vector
\left|\psi\right\rang = \frac{1}{\sqrt{2}}\bigg(\left|\uparrow\downarrow\right\rang - \left|\downarrow\uparrow\right\rang \bigg)
involves superposition of joint spin states for two particles. Mathematically, a pure quantum state is represented by a state vector in a Hilbert space over complex numbers, which is a generalization of our more usual three-dimensional space.[1] If this Hilbert space is represented as a function space, then its elements are called wave functions.
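
A small numerical sketch of this state vector (Python/NumPy assumed; |\uparrow\rangle and |\downarrow\rangle are represented as the two standard basis vectors of C^2):

    import numpy as np

    up   = np.array([1, 0], dtype=complex)    # |up>
    down = np.array([0, 1], dtype=complex)    # |down>

    # |psi> = (|up,down> - |down,up>) / sqrt(2): a joint state of two spins.
    psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

    print(psi)                                 # [0, 1/sqrt(2), -1/sqrt(2), 0]
    print(np.isclose(np.linalg.norm(psi), 1))  # normalized: True

    # The state cannot be written as a single tensor product kron(a, b):
    # the 2x2 matrix of its amplitudes has rank 2, not rank 1.
    print(np.linalg.matrix_rank(psi.reshape(2, 2)))   # 2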

A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. Mixed states are described by so-called density matrices. A pure state can also be recast as a density matrix; in this way, pure states can be represented as a subset of the more general mixed states.

For example, if the spin of an electron is measured in any direction, e.g. with a Stern–Gerlach experiment, there are two possible results: up or down. The Hilbert space for the electron's spin is therefore two-dimensional. A pure state here is represented by a two-dimensional complex vector (\alpha, \beta), with a length of one; that is, with
|\alpha|^2 + |\beta|^2 = 1,
where |\alpha| and |\beta| are the absolute values of \alpha and \beta. A mixed state, in this case, is a 2 \times 2 matrix that is Hermitian, positive-definite, and has trace 1.
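
These conditions are easy to check numerically (a sketch with Python/NumPy; the particular \alpha, \beta and the particular mixed state are arbitrary assumptions):

    import numpy as np

    # A pure spin-1/2 state (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
    alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3) * 1j
    print(np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1))   # True

    # A mixed state: a 2x2 matrix that is Hermitian, positive, with trace 1.
    rho = np.array([[0.75, 0.0],
                    [0.0, 0.25]], dtype=complex)
    print(np.allclose(rho, rho.conj().T))        # Hermitian
    print(np.all(np.linalg.eigvalsh(rho) > 0))   # positive eigenvalues
    print(np.isclose(np.trace(rho).real, 1.0))   # trace 1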

Before a particular measurement is performed on a quantum system, the theory usually gives only a probability distribution for the outcome, and the form that this distribution takes is completely determined by the quantum state and the observable describing the measurement. These probability distributions arise for both mixed states and pure states: it is impossible in quantum mechanics (unlike classical mechanics) to prepare a state in which all properties of the system are fixed and certain. This is exemplified by the uncertainty principle, and reflects a core difference between classical and quantum physics. Even in quantum theory, however, for every observable there are some states that have an exact and determined value for that observable.[2][3]

Conceptual description

Pure states


Probability densities for the electron of a hydrogen atom in different quantum states.

In the mathematical formulation of quantum mechanics, pure quantum states correspond to vectors in a Hilbert space, while each observable quantity (such as the energy or momentum of a particle) is associated with a mathematical operator. The operator serves as a linear function which acts on the states of the system. The eigenvalues of the operator correspond to the possible values of the observable, i.e. it is possible to observe a particle with a momentum of 1 kg⋅m/s if and only if one of the eigenvalues of the momentum operator is 1 kg⋅m/s.

The corresponding eigenvector (which physicists call an "eigenstate") with eigenvalue 1 kg⋅m/s would be a quantum state with a definite, well-defined value of momentum of 1 kg⋅m/s, with no quantum uncertainty. If its momentum were measured, the result is guaranteed to be 1 kg⋅m/s.

On the other hand, a system in a linear combination of multiple different eigenstates does in general have quantum uncertainty for the given observable. We can represent this linear combination of eigenstates as:
|\Psi(t)\rangle = \sum_n C_n(t) |\Phi_n\rang.
The coefficient which corresponds to a particular state in the linear combination is a complex number, thus allowing interference effects between states. The coefficients are time dependent. How a quantum system changes in time is governed by the time evolution operator. The symbols "|" and "\rangle"[4] surrounding the \Psi are part of bra–ket notation.

Statistical mixtures of states are different from a linear combination. A statistical mixture of states is a statistical ensemble of independent systems. A statistical mixture represents the observer's incomplete knowledge of which state the system is in, whereas the uncertainty within quantum mechanics is fundamental. Mathematically, a statistical mixture is not a combination using complex coefficients, but rather a combination using real-valued, positive probabilities of different states \Phi_n. A number P_n represents the probability of a randomly selected system being in the state \Phi_n. Unlike the linear combination case, each system is in a definite eigenstate.[5][6]

The expectation value \langle A \rangle _\sigma of an observable A is a statistical mean of measured values of the observable. It is this mean, and the distribution of probabilities, that is predicted by physical theories.

There is no state which is simultaneously an eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement Q(t) and the momentum measurement P(t) (at the same time t) are known exactly; at least one of them will have a range of possible values.[a] This is the content of the Heisenberg uncertainty relation.

Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state. More precisely: after measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: if we measure A twice in the same run of the experiment, the measurements being directly consecutive in time, then they will produce the same results. This has some strange consequences, however, as follows.

Consider two observables, A and B, where A corresponds to a measurement earlier in time than B.[7] Suppose that the system is in an eigenstate of B at the beginning of the experiment. If we measure only B, every run of the experiment yields the same result. If we instead measure first A and then B in the same run of the experiment, the system will transfer to an eigenstate of A after the first measurement, and we will generally notice that the results of B are statistical. Thus, quantum mechanical measurements influence one another, and the order in which they are performed is important.

Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, called entangled states, that show certain statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that allow us to distinguish between quantum theory and alternative classical (non-quantum) models.

Schrödinger picture vs. Heisenberg picture

One can take the observables to be dependent on time, while the state σ was fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. (This approach was taken in the later part of the discussion above, with time-varying observables P(t), Q(t).) One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. (This approach was taken in the earlier part of the discussion above, with a time-varying state |\Psi(t)\rangle = \sum_n C_n(t) |\Phi_n\rang.) Conceptually (and mathematically), the two approaches are equivalent; choosing one of them is a matter of convention.

Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. Compare with Dirac picture.[8]

Formalism in quantum physics

Pure states as rays in a Hilbert space

Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some finite- or infinite-dimensional Hilbert space. The pure states correspond to vectors of norm 1. Thus the set of all pure states corresponds to the unit sphere in the Hilbert space.

Multiplying a pure state by a scalar is physically inconsequential (as long as the state is considered by itself). If one vector is obtained from the other by multiplying by a scalar of unit magnitude, the two vectors are said to correspond to the same "ray" in Hilbert space[9] and also to the same point in the projective Hilbert space.

Bra–ket notation

Calculations in quantum mechanics make frequent use of linear operators, inner products, dual spaces and Hermitian conjugation. In order to make such calculations flow smoothly, and to obviate the need (in some contexts) to fully understand the underlying linear algebra, Paul Dirac invented a notation to describe quantum states, known as bra-ket notation. Although the details of this are beyond the scope of this article (see the article bra–ket notation), some consequences of this are:
  • The expression used to denote a state vector (which corresponds to a pure quantum state) takes the form |\psi\rangle (where the "\psi" can be replaced by any other symbols, letters, numbers, or even words). This can be contrasted with the usual mathematical notation, where vectors are usually bold, lower-case letters, or letters with arrows on top.
  • Instead of vector, the term ket is used synonymously.
  • Each ket |\psi\rangle is uniquely associated with a so-called bra, denoted \langle\psi|, which corresponds to the same physical quantum state. Technically, the bra is the adjoint of the ket. It is an element of the dual space, and related to the ket by the Riesz representation theorem. In a finite-dimensional space with a chosen basis, writing |\psi\rangle as a column vector, \langle\psi| is a row vector; to obtain it just take the transpose and entry-wise complex conjugate of |\psi\rangle.
  • Inner products (also called brackets) are written so as to look like a bra and ket next to each other: \lang \psi_1|\psi_2\rang. (The phrase "bra-ket" is supposed to resemble "bracket".)

Spin

The angular momentum has the same dimension as the Planck constant and, at quantum scale, behaves as a discrete degree of freedom. Most particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described with spinors. In non-relativistic quantum mechanics the group representations of the Lie group SU(2) are used to describe this additional freedom. For a given particle, the choice of representation (and hence the range of possible values of the spin observable) is specified by a non-negative number S that, in units of Planck's reduced constant ħ, is either an integer (0, 1, 2 ...) or a half-integer (1/2, 3/2, 5/2 ...). For a massive particle with spin S, its spin quantum number m always assumes one of the 2S + 1 possible values in the set
\{ -S, -S+1, \ldots +S-1, +S \}
As a consequence, the quantum state of a particle with spin is described by a vector-valued wave function with values in C2S+1. Equivalently, it is represented by a complex-valued function of four variables: one discrete quantum number variable (for the spin) is added to the usual three continuous variables (for the position in space).

Many-body states and particle statistics

The quantum state of a system of N particles, each potentially with spin, is described by a complex-valued function with four variables per particle, e.g.
|\psi (\mathbf r_1,m_1;\dots ;\mathbf r_N,m_N)\rangle.
Here, the spin variables mν assume values from the set
\{ -S_\nu, -S_\nu +1, \ldots +S_\nu -1,+S_\nu \}
where S_\nu is the spin of the νth particle; S_\nu = 0 for a particle that does not exhibit spin.

The treatment of identical particles is very different for bosons (particles with integer spin) versus fermions (particles with half-integer spin). The above N-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) with respect to the particle numbers. If not all N particles are identical, but some of them are, then the function must be (anti)symmetrized separately over the variables corresponding to each group of identical particles, according to its statistics (bosonic or fermionic).

Electrons are fermions with S = 1/2, photons (quanta of light) are bosons with S = 1 (although in the vacuum they are massless and can't be described with Schrödingerian mechanics).

When symmetrization or anti-symmetrization is unnecessary, N-particle spaces of states can be obtained simply by tensor products of one-particle spaces, to which we will return later.

Basis states of one-particle systems

As with any Hilbert space, if a basis is chosen for the Hilbert space of a system, then any ket can be expanded as a linear combination of those basis elements. Symbolically, given basis kets |{k_i}\rang, any ket |\psi\rang can be written
| \psi \rang = \sum_i c_i |{k_i}\rangle
where ci are complex numbers. In physical terms, this is described by saying that |\psi\rang has been expressed as a quantum superposition of the states |{k_i}\rang. If the basis kets are chosen to be orthonormal (as is often the case), then c_i=\lang {k_i} | \psi \rang.
One property worth noting is that the normalized states |\psi\rang are characterized by
\sum_i \left | c_i \right | ^2 = 1.
Expansions of this sort play an important role in measurement in quantum mechanics. In particular, if the |{k_i}\rang are eigenstates (with eigenvalues k_i) of an observable, and that observable is measured on the normalized state |\psi\rang, then the probability that the result of the measurement is k_i is |c_i|^2. (The normalization condition above mandates that the total sum of probabilities is equal to one.)
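
A short sketch of such an expansion (Python/NumPy assumed; the basis chosen here is the |+\rangle, |-\rangle basis discussed earlier, an arbitrary but convenient choice):

    import numpy as np

    # An orthonormal basis for one qubit: the |+> and |-> kets.
    k1 = np.array([1,  1], dtype=complex) / np.sqrt(2)
    k2 = np.array([1, -1], dtype=complex) / np.sqrt(2)

    psi = np.array([1, 0], dtype=complex)            # the state |0>, already normalized

    # Expansion coefficients c_i = <k_i | psi>.
    c = np.array([np.vdot(k1, psi), np.vdot(k2, psi)])
    print(np.allclose(c[0] * k1 + c[1] * k2, psi))   # the expansion reproduces |psi>
    print(np.isclose(np.sum(np.abs(c) ** 2), 1.0))   # normalization: sum |c_i|^2 = 1
    print(np.abs(c) ** 2)                            # measurement probabilities [0.5, 0.5]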

A particularly important example is the position basis, which is the basis consisting of eigenstates of the observable which corresponds to measuring position. If these eigenstates are nondegenerate (for example, if the system is a single, spinless particle), then any ket |\psi\rang is associated with a complex-valued function of three-dimensional space:
\psi(\mathbf{r}) \equiv \lang \mathbf{r} | \psi \rang.
This function is called the wavefunction corresponding to |\psi\rang.

Superposition of pure states

One aspect of quantum states, mentioned above, is that superpositions of them can be formed. If |\alpha\rangle and |\beta\rangle are two kets corresponding to quantum states, the ket
c_\alpha|\alpha\rang+c_\beta|\beta\rang
is a different quantum state (possibly not normalized). Note that which quantum state it is depends on both the amplitudes and phases (arguments) of c_\alpha and c_\beta. In other words, for example, even though |\psi\rang and e^{i\theta}|\psi\rang (for real θ) correspond to the same physical quantum state, they are not interchangeable, since for example |\phi\rang+|\psi\rang and |\phi\rang+e^{i\theta}|\psi\rang do not (in general) correspond to the same physical state. However, |\phi\rang+|\psi\rang and e^{i\theta}(|\phi\rang+|\psi\rang) do correspond to the same physical state. This is sometimes described by saying that "global" phase factors are unphysical, but "relative" phase factors are physical and important.
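
This distinction between global and relative phase can be checked numerically (a sketch with Python/NumPy; the basis and the phase value \theta are arbitrary assumptions): measurement statistics distinguish a relative phase but not a global one.

    import numpy as np

    phi = np.array([1, 0], dtype=complex)              # |phi>
    psi = np.array([0, 1], dtype=complex)              # |psi>
    theta = 0.7                                        # an arbitrary assumed phase

    a = (phi + psi) / np.sqrt(2)                       # |phi> + |psi>, normalized
    b = (phi + np.exp(1j * theta) * psi) / np.sqrt(2)  # relative phase on |psi>
    c = np.exp(1j * theta) * a                         # global phase on the whole ket

    # Probability of the |+> outcome: differs between a and b, identical for a and c.
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    for state in (a, b, c):
        print(abs(np.vdot(plus, state)) ** 2)          # 1.0, ~0.88, 1.0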

One example of a quantum interference phenomenon that arises from superposition is the double-slit experiment. The photon state is a superposition of two different states, one of which corresponds to the photon having passed through the left slit, and the other corresponding to passage through the right slit. The relative phase of those two states has a value which depends on the distance from each of the two slits. Depending on what that phase is, the interference is constructive at some locations and destructive in others, creating the interference pattern. By the analogy with coherence in other wave phenomena, a superposed state can be referred to as a coherent superposition.

Another example of the importance of relative phase in quantum superposition is Rabi oscillations, where the relative phase of two states varies in time due to the Schrödinger equation. The resulting superposition ends up oscillating back and forth between two different states.
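
As a schematic illustration (a sketch only, assuming a two-level Hamiltonian H = (\Omega/2)\sigma_x and units with \hbar = 1), evolving a state under the Schrödinger equation produces exactly this back-and-forth oscillation of the populations:

    import numpy as np

    Omega = 1.0                                        # assumed coupling strength, hbar = 1
    H = 0.5 * Omega * np.array([[0, 1],
                                [1, 0]], dtype=complex)
    evals, evecs = np.linalg.eigh(H)                   # H is Hermitian

    psi0 = np.array([1, 0], dtype=complex)             # start in the first state
    for t in np.linspace(0.0, 2 * np.pi, 9):
        U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T   # U = exp(-iHt)
        psi_t = U @ psi0
        # Population of the second state oscillates as sin^2(Omega*t/2).
        print(round(float(t), 2), round(float(abs(psi_t[1]) ** 2), 3))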

Mixed states

A pure quantum state is a state which can be described by a single ket vector, as described above. A mixed quantum state is a statistical ensemble of pure states (see quantum statistical mechanics). Mixed states inevitably arise from pure states when, for a composite quantum system H_1 \otimes H_2 with an entangled state on it, the part H_2 is inaccessible to the observer. The state of the part H_1 is expressed then as the partial trace over H_2.

A mixed state cannot be described as a ket vector. Instead, it is described by its associated density matrix (or density operator), usually denoted ρ. Note that density matrices can describe both mixed and pure states, treating them on the same footing. Moreover, a mixed quantum state on a given quantum system described by a Hilbert space H can be always represented as the partial trace of a pure quantum state (called a purification) on a larger bipartite system H \otimes K for a sufficiently large Hilbert space K.

The density matrix describing a mixed state is defined to be an operator of the form
\rho = \sum_s p_s | \psi_s \rangle \langle \psi_s |
where p_s is the fraction of the ensemble in each pure state |\psi_s\rangle. The density matrix can be thought of as a way of using the one-particle formalism to describe the behavior of many similar particles by giving a probability distribution (or ensemble) of states that these particles can be found in.

A simple criterion for checking whether a density matrix is describing a pure or mixed state is that the trace of ρ^2 is equal to 1 if the state is pure, and less than 1 if the state is mixed.[10] Another, equivalent, criterion is that the von Neumann entropy is 0 for a pure state, and strictly positive for a mixed state.
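
Both criteria are straightforward to verify numerically (a sketch with Python/NumPy; the example states are arbitrary assumptions):

    import numpy as np

    def purity(rho):
        return np.trace(rho @ rho).real

    def von_neumann_entropy(rho):
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]          # drop zero eigenvalues (0 log 0 = 0)
        return float(-np.sum(evals * np.log2(evals)))

    pure  = np.array([[1, 0], [0, 0]], dtype=complex)        # |0><0|
    mixed = np.array([[0.5, 0], [0, 0.5]], dtype=complex)    # maximally mixed qubit

    print(purity(pure),  von_neumann_entropy(pure))    # 1.0, 0.0
    print(purity(mixed), von_neumann_entropy(mixed))   # 0.5, 1.0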

The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observable A is given by
\langle A \rangle = \sum_s p_s \langle \psi_s | A | \psi_s \rangle = \sum_s \sum_i p_s a_i | \langle \alpha_i | \psi_s \rangle |^2 = \operatorname{tr}(\rho A)
where |\alpha_i\rangle, \; a_i are eigenkets and eigenvalues, respectively, for the operator A, and "tr" denotes trace. It is important to note that two types of averaging are occurring, one being a weighted quantum superposition over the basis kets |\psi_s\rangle of the pure states, and the other being a statistical (so-called incoherent) average with the probabilities p_s of those states.
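
The equality of the ensemble average and tr(\rho A) can be checked in the same spirit (a sketch; the observable and the ensemble below are assumed purely for illustration):

    import numpy as np

    # Observable: the Pauli-z operator, with eigenvalues +1 and -1.
    A = np.array([[1, 0], [0, -1]], dtype=complex)

    # Ensemble: |0> with probability 0.7 and |+> with probability 0.3.
    kets  = [np.array([1, 0], dtype=complex),
             np.array([1, 1], dtype=complex) / np.sqrt(2)]
    probs = [0.7, 0.3]

    rho = sum(p * np.outer(k, k.conj()) for p, k in zip(probs, kets))

    lhs = sum(p * np.vdot(k, A @ k).real for p, k in zip(probs, kets))  # sum_s p_s <psi_s|A|psi_s>
    rhs = np.trace(rho @ A).real                                        # tr(rho A)
    print(lhs, rhs, np.isclose(lhs, rhs))                               # both 0.7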

According to Wigner,[11] the concept of mixture was put forward by Landau.[12][13]

Interpretation

Although theoretically, for a given quantum system, a state vector provides the full information about its evolution, it is not easy to understand what information about the "real world" it carries. Due to the uncertainty principle, a state, even if it has the value of one observable exactly defined (i.e. the observable has this state as an eigenstate), cannot exactly define values of all observables.

For state vectors (pure states), probability amplitudes offer a probabilistic interpretation. It can be generalized for all states (including mixed), for instance, as expectation values mentioned above.

Mathematical generalizations

States can be formulated in terms of observables, rather than as vectors in a vector space. These are positive normalized linear functionals on a C*-algebra, or sometimes other classes of algebras of observables. See State on a C*-algebra and Gelfand–Naimark–Segal construction for more details.
