
Thursday, June 14, 2018

Quantum computing

From Wikipedia, the free encyclopedia


The Bloch sphere is a representation of a qubit, the fundamental building block of quantum computers.

Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement.[1] A quantum computer is a device that performs quantum computing. Such computers are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that the data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits (qubits), which can be in superpositions of states. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff[2] and Yuri Manin in 1980,[3] Richard Feynman in 1982,[4] and David Deutsch in 1985.[5]

As of 2018, the development of actual quantum computers is still in its infancy, but experiments have been carried out in which quantum computational operations were executed on a very small number of quantum bits.[6] Both practical and theoretical research continues, and many national governments and military agencies fund quantum computing research in an effort to develop quantum computers for civilian, business, trade, environmental and national security purposes, such as cryptanalysis.[7] A small 20-qubit quantum computer exists and is available for experiments via the IBM Quantum Experience project. D-Wave Systems has been developing its own version of a quantum computer that uses quantum annealing.[8]

Large-scale quantum computers would theoretically be able to solve certain problems much more quickly than any classical computers that use even the best currently known algorithms, like integer factorization using Shor's algorithm (which is a quantum algorithm) and the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm.[9] A classical computer could in principle (with exponential resources) simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis.[10]:202 On the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers.

Basics

A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of those two qubit states;[10]:13–16 a pair of qubits can be in any quantum superposition of 4 states,[10]:16 and three qubits in any superposition of 8 states. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^{n} different states simultaneously[10]:17 (this compares to a normal computer that can only be in one of these 2^{n} states at any one time). A quantum computer operates on its qubits using quantum gates and measurement (which also alters the observed state). An algorithm is composed of a fixed sequence of quantum logic gates, and a problem is encoded by setting the initial values of the qubits, similar to how a classical computer works. The calculation usually ends with a measurement, collapsing the system of qubits into one of the 2^{n} eigenstates, where each qubit is zero or one, yielding a classical state. The outcome can therefore be at most n classical bits of information (or, if the algorithm did not end with a measurement, the result is an unobserved quantum state). Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability.[11] Note that the term non-deterministic computing should not be used in this case to mean probabilistic computing, because the term non-deterministic has a different meaning in computer science.
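
As an illustration of these ideas (an editorial sketch, not part of the original article), the following Python code prepares one qubit in |0\rangle, applies a Hadamard gate to create an equal superposition, and samples repeated measurements; the gate and labels are chosen only for demonstration.

    import numpy as np

    # Computational basis state |0> for one qubit
    ket0 = np.array([1.0, 0.0], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

    state = H @ ket0                                  # equal superposition (|0>+|1>)/sqrt(2)
    probs = np.abs(state) ** 2                        # Born rule: measurement probabilities

    rng = np.random.default_rng(0)
    samples = rng.choice([0, 1], size=1000, p=probs)  # repeated measurements
    print(probs)                                      # [0.5 0.5]
    print(np.bincount(samples) / 1000)                # approximately [0.5 0.5]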

An example of an implementation of qubits of a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |\downarrow\rangle and |\uparrow\rangle, or |0\rangle and |1\rangle). This is true because any such system can be mapped onto an effective spin-1/2 system.

Principles of operation

A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, representing the state of an n-qubit system on a classical computer requires the storage of 2^{n} complex coefficients, while to characterize the state of a classical n-bit system it is sufficient to provide the values of the n bits, that is, only n numbers. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before the measurement. It is generally incorrect to think of a system of qubits as being in one particular state before the measurement, since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.
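
To make the exponential gap concrete, here is a short back-of-the-envelope calculation (an illustrative sketch; the assumption of 16 bytes per double-precision complex amplitude is an editorial choice, not from the article) of the classical memory needed to store a full n-qubit state vector.

    BYTES_PER_AMPLITUDE = 16  # one complex number in double precision (assumption)

    for n in (10, 30, 50):
        amplitudes = 2 ** n
        print(n, "qubits:", amplitudes * BYTES_PER_AMPLITUDE / 1e9, "GB")
    # 10 qubits: ~1.6e-5 GB, 30 qubits: ~17 GB, 50 qubits: ~1.8e7 GB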


Qubits are made up of controlled particles and the means of control (e.g. devices that trap particles and switch them from one state to another).[12]

To better understand this point, consider a classical computer that operates on a three-bit register. If the exact state of the register at a given time is not known, it can be described as a probability distribution over the 2^{3}=8 different three-bit strings 000, 001, 010, 011, 100, 101, 110, and 111. If there is no uncertainty over its state, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states.

The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a_{0},a_{1},a_{2},a_{3},a_{4},a_{5},a_{6},a_{7}) (equivalently, a list of eight complex amplitudes, one for each three-bit string of qubit values). Here, however, the coefficients a_{i} are complex numbers, and it is the sum of the squares of the coefficients' absolute values, \sum_{i}|a_{i}|^{2}, that must equal 1. For each i, the absolute value squared |a_{i}|^{2} gives the probability of the system being found in the i-th state after a measurement. However, because a complex number encodes not just a magnitude but also a direction in the complex plane, the phase difference between any two coefficients (states) represents a meaningful parameter. This is a fundamental difference between quantum computing and probabilistic classical computing.[13]

If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 = |a_{0}|^{2}, the probability of measuring 001 = |a_{1}|^{2}, etc.). Thus, measuring a quantum state described by complex coefficients (a_{0},a_{1},a_{2},a_{3},a_{4},a_{5},a_{6},a_{7}) gives the classical probability distribution (|a_{0}|^{2},|a_{1}|^{2},|a_{2}|^{2},|a_{3}|^{2},|a_{4}|^{2},|a_{5}|^{2},|a_{6}|^{2},|a_{7}|^{2}) and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
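
The collapse rule just described is easy to check numerically. The sketch below (illustrative only; the random amplitudes are arbitrary) normalizes an eight-component complex vector, forms the classical distribution |a_{i}|^{2}, and samples one three-bit outcome from it.

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.normal(size=8) + 1j * rng.normal(size=8)   # arbitrary complex amplitudes
    a /= np.linalg.norm(a)                             # enforce sum_i |a_i|^2 = 1

    probs = np.abs(a) ** 2                             # classical distribution after measurement
    outcome = int(rng.choice(8, p=probs))              # one measurement "collapses" the state
    print(f"measured {outcome:03b} with probability {probs[outcome]:.3f}")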

An eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, …, 111) is known as the computational basis. Any other set of unit-length, mutually orthogonal vectors can also serve as a basis; one example is the set of eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a_{0},a_{1},a_{2},a_{3},a_{4},a_{5},a_{6},a_{7}) in the computational basis can be written as:
a_{0}|000\rangle + a_{1}|001\rangle + a_{2}|010\rangle + a_{3}|011\rangle + a_{4}|100\rangle + a_{5}|101\rangle + a_{6}|110\rangle + a_{7}|111\rangle
where, e.g., |010\rangle = (0,0,1,0,0,0,0,0)
The computational basis for a single qubit (two dimensions) is |0\rangle = (1,0) and |1\rangle = (0,1).

Using the eigenvectors of the Pauli-x operator, a single qubit is |+\rangle = \tfrac{1}{\sqrt{2}}(1,1) and |-\rangle = \tfrac{1}{\sqrt{2}}(1,-1).
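
A quick numerical check of the two bases mentioned above (an illustrative sketch, not part of the article): the Hadamard matrix maps the computational basis onto the Pauli-x eigenbasis, and the |+\rangle and |-\rangle states are orthonormal.

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |+>, Pauli-x eigenvector (+1)
    minus = np.array([1, -1], dtype=complex) / np.sqrt(2)  # |->, Pauli-x eigenvector (-1)

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)          # Pauli-x operator

    print(np.allclose(H @ ket0, plus), np.allclose(H @ ket1, minus))    # True True
    print(np.allclose(X @ plus, plus), np.allclose(X @ minus, -minus))  # True True
    print(np.vdot(plus, minus))                                         # 0j (orthogonal)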

Operation

While a classical 3-bit state and a quantum 3-qubit state are each eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000\rangle, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares of the amplitudes adds up to one, the Euclidean or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)
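
The contrast between L1-preserving stochastic maps and L2-preserving unitaries can be demonstrated in a few lines (an illustrative sketch; the particular matrices are chosen only for demonstration).

    import numpy as np

    # Classical randomized step: columns of a stochastic matrix sum to 1
    S = np.array([[0.9, 0.2],
                  [0.1, 0.8]])
    p = np.array([1.0, 0.0])                 # definite classical state
    print((S @ p).sum())                     # 1.0  (L1 norm preserved)

    # Quantum step: a unitary (here a Hadamard) preserves the L2 norm and is reversible
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi = np.array([1, 0], dtype=complex)
    print(np.linalg.norm(H @ psi))                   # 1.0  (L2 norm preserved)
    print(np.allclose(H.conj().T @ (H @ psi), psi))  # True (rotation undone)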

Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, one measures the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. This destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer's results, the probability of getting the correct answer can be increased. In contrast, counterfactual quantum computation allows the correct answer to be inferred when the quantum computer is not actually running in a technical sense, though earlier initialization and frequent measurements are part of the counterfactual computation protocol.
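
The effect of repetition mentioned above is easy to quantify when the answer can be checked after each run (as with factoring); the small helper below (an illustrative sketch) computes the probability that at least one of k independent runs succeeds.

    def success_after_repeats(p, k):
        """Probability that at least one of k independent runs succeeds."""
        return 1 - (1 - p) ** k

    # e.g. an algorithm that is right 2/3 of the time, repeated 10 times
    print(success_after_repeats(2 / 3, 10))   # ~0.99998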

For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch–Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.

Potential

Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes).[14] By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. For example, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
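
The quantum core of Shor's algorithm is period finding; turning a period into factors is classical number theory. The sketch below (illustrative; it finds the period by brute force, which is precisely the step a quantum computer would perform efficiently, and assumes the chosen base a is coprime to N) factors 15 from the period of 7^x mod 15.

    from math import gcd

    def factor_from_period(N, a):
        """Classical post-processing of Shor's algorithm; the period is found by brute force here."""
        r = 1
        while pow(a, r, N) != 1:          # on a quantum computer this period-finding
            r += 1                        # step is the part done efficiently
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            return None                   # unlucky choice of base a; pick another
        return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

    print(factor_from_period(15, 7))      # (3, 5)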

However, other cryptographic algorithms do not appear to be broken by those algorithms.[15][16] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[15][17] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem.[18] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^{n/2} invocations of the underlying cryptographic algorithm, compared with roughly 2^{n} in the classical case,[19] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography.

Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems,[20] including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely.[21] For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.

Consider a problem that has these four properties:
  1. The only way to solve it is to guess answers repeatedly and check them,
  2. The number of possible answers to check is the same as the number of inputs,
  3. Every possible answer takes the same amount of time to check, and
  4. There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
An example of this is a password cracker that attempts to guess the password for an encrypted file (assuming that the password has a maximum possible length).

For problems with all four properties, the time for a quantum computer to solve such a problem will be proportional to the square root of the number of inputs. This square-root speedup can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.[22]
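
The square-root speedup translates into concrete query counts. The comparison below (editorial arithmetic; the Grover count uses the standard (pi/4)*sqrt(N) iteration estimate) contrasts the expected number of classical guesses with the number of Grover iterations for several key sizes.

    from math import pi, sqrt

    for bits in (56, 128, 256):
        N = 2 ** bits                          # number of possible keys
        classical = N / 2                      # expected number of classical guesses
        grover = pi / 4 * sqrt(N)              # standard Grover iteration estimate
        print(f"{bits}-bit key: classical ~{classical:.2e} guesses, Grover ~{grover:.2e} iterations")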

Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.[23] Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.[24]

Quantum supremacy

John Preskill has introduced the term quantum supremacy to refer to the hypothetical speedup advantage that a quantum computer would have over a classical computer in a certain field.[25] Google announced in 2017 that it expected to achieve quantum supremacy by the end of the year, and IBM says that the best classical computers will be beaten on some task within about five years.[26] Quantum supremacy has not been achieved yet, and skeptics like Gil Kalai doubt that it will ever be achieved.[27][28] Bill Unruh doubted the practicality of quantum computers in a paper published back in 1994.[29] Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.[30] Those such as Roger Schlafly have pointed out that the claimed theoretical benefits of quantum computing go beyond the proven theory of quantum mechanics and imply non-standard interpretations, such as multiple worlds and negative probabilities. Schlafly maintains that the Born rule is just "metaphysical fluff" and that quantum mechanics doesn't rely on probability any more than other branches of science but simply calculates the expected values of observables. He also points out that arguments about Turing complexity cannot be run backwards.[31][32][33] Those who prefer Bayesian interpretations of quantum mechanics have questioned the physical nature of the mathematical abstractions employed.[34]

Obstacles

There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:[35]
  • scalable physically to increase the number of qubits;
  • qubits that can be initialized to arbitrary values;
  • quantum gates that are faster than decoherence time;
  • universal gate set;
  • qubits that can be read easily.

Quantum decoherence

One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background nuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.[13] Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence.[36]

As a result, time-consuming tasks may render some quantum algorithms inoperable, since maintaining the state of the qubits for a long enough duration will eventually corrupt the superpositions.[37]

These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter; an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.

If the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence, as described by the quantum threshold theorem. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often cited figure for the required error rate in each gate for fault tolerant computation is 10^{-3}, assuming the noise is depolarizing.

Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^{2}, where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^{4} qubits without error correction.[38] With error correction, the figure would rise to about 10^{7} qubits. Computation time is about L^{2} or about 10^{7} steps; at 1 MHz, that is about 10 seconds.
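
The resource estimates in this paragraph follow from a few lines of arithmetic (an illustrative sketch using the scalings stated above: roughly L qubits before error correction, an extra factor of L with it, about L^{2} gate steps, and a 1 MHz gate rate; the article's quoted figures include constant factors of order ten).

    L = 1000                                # bits in the number to be factored

    qubits = L                              # order-L qubits without error correction
    qubits_ec = qubits * L                  # error correction adds roughly a factor of L
    steps = L ** 2                          # about L^2 gate steps
    runtime = steps / 1e6                   # seconds at a 1 MHz gate rate

    print(qubits, qubits_ec, steps, runtime)  # 1000, 10^6 qubits, 10^6 steps, ~1 s
    # The article's figures (~10^4 qubits, ~10^7 with error correction, ~10 s)
    # are the same scalings with constant factors of order ten included.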

A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.[39][40]

Developments

There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are:
  • Quantum gate array (computation decomposed into a sequence of few-qubit quantum gates)
  • One-way quantum computer (computation decomposed into a sequence of one-qubit measurements applied to a highly entangled initial state, or cluster state)
  • Adiabatic quantum computer, based on quantum annealing (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution)
  • Topological quantum computer (computation decomposed into the braiding of anyons in a 2D lattice)
The quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent; each can simulate the other with no more than polynomial overhead.

For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits; these include superconducting circuits, trapped ions, optical lattices, quantum dots and nuclear spins, among others.
The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. There is also a vast amount of flexibility.

Timeline

In 1959, in his lecture "There's Plenty of Room at the Bottom", Richard Feynman noted the possibility of using quantum effects for computation.

In 1980 Paul Benioff described quantum mechanical Hamiltonian models of computers[56] and the Russian mathematician Yuri Manin motivated the development of quantum computers.[57]

In 1981, at a conference co-organized by MIT and IBM, physicist Richard Feynman urged the world to build a quantum computer. He said "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."[58]

In 1984, Charles Bennett and Gilles Brassard published BB84, the world's first quantum cryptography protocol.

In 1993, an international group of six scientists, including Charles Bennett, showed that perfect quantum teleportation is possible[59] in principle, but only if the original is destroyed.

In 1996, the theoretical physicist David P. DiVincenzo proposed the DiVincenzo criteria, a list of conditions necessary for constructing a quantum computer, later elaborated in his 2000 paper "The Physical Implementation of Quantum Computation".

In 2001, researchers demonstrated Shor's algorithm to factor 15 using a 7-qubit NMR computer.[60]

In 2005, researchers at the University of Michigan built a semiconductor chip ion trap. Such devices, made using standard lithography, may point the way to scalable quantum computing.[61]

In 2009, researchers at Yale University created the first solid-state quantum processor. The two-qubit superconducting chip had artificial atom qubits made of a billion aluminum atoms that acted like a single atom that could occupy two states.[62][63]

In the same year, a team at the University of Bristol created a silicon chip based on quantum optics, able to run Shor's algorithm.[64] Further developments were made in 2010.[65] Springer publishes a journal (Quantum Information Processing) devoted to the subject.[66]

In February 2010, digital combinational circuits such as adders and subtractors were designed using symmetric functions built from different quantum gates.[67][68]

In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation. They successfully transferred a complex set of quantum data with full transmission integrity, without affecting the qubits' superpositions.[69][70]


Photograph of a chip constructed by D-Wave Systems Inc., mounted and wire-bonded in a sample holder. The D-Wave processor is designed to use 128 superconducting logic elements that exhibit controllable and tunable coupling to perform operations.

In 2011, D-Wave Systems announced the first commercial quantum annealer, the D-Wave One, claiming a 128-qubit processor. On May 25, 2011, Lockheed Martin agreed to purchase a D-Wave One system.[71] Lockheed and the University of Southern California (USC) will house the D-Wave One at the newly formed USC Lockheed Martin Quantum Computing Center.[72] D-Wave's engineers designed the chips with an empirical approach, focusing on solving particular problems. Investors liked this more than academics, who said D-Wave had not demonstrated that they really had a quantum computer. Criticism softened after a D-Wave paper in Nature showed that the chips have some quantum properties.[73][74] Two published papers have suggested that the D-Wave machine's operation can be explained classically, rather than requiring quantum models.[75][76] Later work showed that classical models are insufficient when all available data is considered.[77] Experts remain divided on the ultimate classification of the D-Wave systems, though their quantum behavior was established concretely with a demonstration of entanglement.[78]

During the same year, researchers at the University of Bristol created an all-bulk optics system that ran a version of Shor's algorithm to successfully factor 21.[79]

In September 2011, researchers demonstrated that quantum computers can be made with a von Neumann architecture (with memory separate from the processor).[80]

In November 2011 researchers factorized 143 using 4 qubits.[81]

In February 2012 IBM scientists said that they had made several breakthroughs in quantum computing with superconducting integrated circuits.[82]

In April 2012 a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara, constructed a two-qubit quantum computer on a doped diamond crystal that can easily be scaled up and is functional at room temperature. The two logical qubits were encoded in the electron spin and the nitrogen nuclear spin, and controlled with microwave pulses. This computer ran Grover's algorithm, generating the right answer on the first try in 95% of cases.[83]

In September 2012, Australian researchers at the University of New South Wales said the world's first quantum computer was just 5 to 10 years away, after announcing a global breakthrough enabling manufacture of its memory building blocks. A research team led by Australian engineers created the first working qubit based on a single atom in silicon, invoking the same technological platform that forms the building blocks of modern-day computers.[84][85]

In October 2012, Nobel Prizes were presented to David J. Wineland and Serge Haroche for their basic work on understanding the quantum world, which may help make quantum computing possible.[86][87]

In November 2012, the first quantum teleportation from one macroscopic object to another was reported by scientists at the University of Science and Technology of China in Hefei.[88][89]

In December 2012, the first dedicated quantum computing software company, 1QBit, was founded in Vancouver, BC.[90] 1QBit is the first company to focus exclusively on commercializing software applications for commercially available quantum computers, including the D-Wave Two. 1QBit's research demonstrated the ability of superconducting quantum annealing processors to solve real-world problems.[91]

In February 2013, a new technique, boson sampling, was reported by two groups using photons in an optical lattice; the approach is not a universal quantum computer but may be good enough for practical problems (Science, Feb 15, 2013).

In May 2013, Google announced that it was launching the Quantum Artificial Intelligence Lab, hosted by NASA's Ames Research Center, with a 512-qubit D-Wave quantum computer. The USRA (Universities Space Research Association) will invite researchers to share time on it with the goal of studying quantum computing for machine learning.[92] Google added that they had "already developed some quantum machine learning algorithms" and had "learned some useful principles", such as that "best results" come from "mixing quantum and classical computing".[92]

In early 2014 it was reported, based on documents provided by former NSA contractor Edward Snowden, that the U.S. National Security Agency (NSA) is running a $79.7 million research program (titled "Penetrating Hard Targets") to develop a quantum computer capable of breaking vulnerable encryption.[93]

In 2014, a group of researchers from ETH Zürich, USC, Google and Microsoft reported a definition of quantum speedup, and were not able to measure quantum speedup with the D-Wave Two device, but did not explicitly rule it out.[94][95]

In 2014, researchers at the University of New South Wales used silicon as a protective shell around qubits, making them more accurate, increasing the length of time they will hold information, and possibly making quantum computers easier to build.[96]

In April 2015 IBM scientists claimed two critical advances towards the realization of a practical quantum computer. They claimed the ability to detect and measure both kinds of quantum errors simultaneously, as well as a new, square quantum bit circuit design that could scale to larger dimensions.[97]

In October 2015 researchers at University of New South Wales built a quantum logic gate in silicon for the first time.[98]

In December 2015 NASA publicly displayed the world's first fully operational $15-million quantum computer made by the Canadian company D-Wave at the Quantum Artificial Intelligence Laboratory at its Ames Research Center in California's Moffett Field. The device was purchased in 2013 via a partnership with Google and Universities Space Research Association. The presence and use of quantum effects in the D-Wave quantum processing unit is now more widely accepted.[99] In some tests, it can be shown that the D-Wave quantum annealing processor outperforms Selby's algorithm.[100] Only two of these computers have been made so far.

In May 2016, IBM Research announced[101] that for the first time it was making quantum computing available to members of the public via the cloud, allowing them to access and run experiments on IBM's quantum processor. The service is called the IBM Quantum Experience. The quantum processor is composed of five superconducting qubits and is housed at the IBM T. J. Watson Research Center in New York.

In August 2016, scientists at the University of Maryland successfully built the first reprogrammable quantum computer.[102]

In October 2016, researchers at Basel University described a variant of the electron-hole based quantum computer, which instead of manipulating electron spins uses electron holes in a semiconductor at low (mK) temperatures; these are much less vulnerable to decoherence. This has been dubbed the "positronic" quantum computer, as the quasi-particle behaves as if it has a positive electrical charge.[103]

In March 2017, IBM announced an industry-first initiative to build commercially available universal quantum computing systems called IBM Q. The company also released a new API (Application Program Interface) for the IBM Quantum Experience that enables developers and programmers to begin building interfaces between its existing five quantum bit (qubit) cloud-based quantum computer and classical computers, without needing a deep background in quantum physics.

In May 2017, IBM announced[104] that it has successfully built and tested its most powerful universal quantum computing processors. The first is a 16 qubit processor that will allow for more complex experimentation than the previously available 5 qubit processor. The second is IBM's first prototype commercial processor with 17 qubits and leverages significant materials, device, and architecture improvements to make it the most powerful quantum processor created to date by IBM.

In July 2017, a group of U.S. researchers announced a quantum simulator with 51 qubits. The announcement was made by Mikhail Lukin of Harvard University at the International Conference on Quantum Technologies in Moscow.[105] A quantum simulator differs from a computer. Lukin’s simulator was designed to solve one equation. Solving a different equation would require building a new system. A computer can solve many different equations.

In September 2017, IBM Research scientists used a 7-qubit device to model beryllium hydride, the largest molecule simulated by a quantum computer to date.[106] The results were published as the cover story in the peer-reviewed journal Nature.

In October 2017, IBM Research scientists successfully "broke the 49-qubit simulation barrier" and simulated 49- and 56-qubit short-depth circuits, using the Lawrence Livermore National Laboratory's Vulcan supercomputer and the University of Illinois' Cyclops Tensor Framework (originally developed at the University of California). The results were published on arXiv.[107]

In November 2017, the University of Sydney research team in Australia successfully made a microwave circulator, an important quantum computer part, 1000 times smaller than a conventional circulator by using topological insulators to slow down the speed of light in a material.[108]

In November 2017, IBM announced[109] the availability of its most powerful 20-qubit commercial processor, as well as the first prototype 50-qubit processor. The 20-qubit processor has an industry-leading coherence time of 90 μs.

In December 2017, IBM announced[110] its first IBM Q Network clients. The companies, universities, and labs that will explore practical quantum applications for business and science, using IBM Q 20-qubit commercial systems, include: JPMorgan Chase, Daimler AG, Samsung, JSR Corporation, Barclays, Hitachi Metals, Honda, Nagase, Keio University, Oak Ridge National Lab, Oxford University and University of Melbourne.

In December 2017, Microsoft released a preview version of a "Quantum Development Kit".[111] It includes a programming language, Q#, which can be used to write programs that are run on an emulated quantum computer.

In 2017, D-Wave reported that it had begun selling a 2000-qubit quantum computer.[112]

In February 2018, scientists reported, for the first time, the discovery of a new form of light, which may involve polaritons, that could be useful in the development of quantum computers.[113][114]

In March 2018, Google Quantum AI Lab announced a 72 qubit processor called Bristlecone.[115]

In April 2018, IBM Research announced eight quantum computing startups joined the IBM Q Network,[116] including: Zapata Computing, Strangeworks, QxBranch, Quantum Benchmark, QC Ware, Q-CTRL, Cambridge Quantum Computing, and 1QBit.

Relation to computational complexity theory


The suspected relationship of BQP to other problem spaces.[117]

The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half.[118] A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.
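
Because the error is bounded away from one half, it can be driven down by majority voting over repeated runs. The short calculation below (an illustrative sketch assuming a per-run error probability of 1/3) gives the probability that the majority of k independent runs is wrong.

    from math import comb

    def majority_error(per_run_error, k):
        """Probability that a majority of k independent runs is wrong (k odd)."""
        return sum(comb(k, i) * per_run_error ** i * (1 - per_run_error) ** (k - i)
                   for i in range(k // 2 + 1, k + 1))

    for k in (1, 3, 25, 101):
        print(k, majority_error(1 / 3, k))
    # 1 -> 0.3333, 3 -> 0.2593 (= 7/27), and the error keeps shrinking exponentially in k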

BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^{#P}),[119] which is a subclass of PSPACE.

BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.[119]

The capacity of a quantum computer to accelerate classical algorithms has rigid limits—upper bounds of quantum computation's complexity. The overwhelming part of classical calculations cannot be accelerated on a quantum computer.[120] A similar fact takes place for particular computational tasks, like the search problem, for which Grover's algorithm is optimal.[121]

Bohmian Mechanics is a non-local hidden variable interpretation of quantum mechanics. It has been shown that a non-local hidden variable quantum computer could implement a search of an N-item database in at most O(\sqrt[3]{N}) steps. This is slightly faster than the O(\sqrt{N}) steps taken by Grover's algorithm. Neither search method will allow quantum computers to solve NP-complete problems in polynomial time.[122]
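
For a sense of scale, a small comparison (editorial arithmetic only) of the query counts implied by the O(\sqrt{N}) and O(\sqrt[3]{N}) bounds for a database of a trillion items:

    N = 10 ** 12
    print(round(N ** 0.5))       # Grover:               1,000,000 queries
    print(round(N ** (1 / 3)))   # hidden-variable model:   10,000 steps
    print(N // 2)                # classical expected: 500,000,000,000 checks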

Although quantum computers may be faster than classical computers for some problem types, those described above can't solve any problem that classical computers can't already solve. A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis.[123] It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e., there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.[124][71]

Condensed matter physics

From Wikipedia, the free encyclopedia

Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is extremely large and the interactions between the constituents are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the electromagnetic forces between atoms. Condensed matter physicists seek to understand the behavior of these phases by using physical laws, in particular the laws of quantum mechanics, electromagnetism and statistical mechanics.

The most familiar condensed phases are solids and liquids while more exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. The study of condensed matter physics involves measuring various material properties via experimental probes along with using methods of theoretical physics to develop mathematical models that help in understanding physical behavior.

The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists,[1] and the Division of Condensed Matter Physics is the largest division at the American Physical Society.[2] The field overlaps with chemistry, materials science, and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics.[3]

A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the new, related specialty of condensed matter physics.[4] According to physicist Philip Warren Anderson, the term was coined by him and Volker Heine when they changed the name of their group at the Cavendish Laboratory, Cambridge, from Solid state theory to Theory of Condensed Matter in 1967,[5] as they felt it did not exclude their interests in the study of liquids, nuclear matter, and so on.[6] Although Anderson and Heine helped popularize the name "condensed matter", it had been present in Europe for some years, most prominently in the form of a journal published in English, French, and German by Springer-Verlag titled Physics of Condensed Matter, which was launched in 1963.[7] The funding environment and Cold War politics of the 1960s and 1970s were also factors that led some physicists to prefer the name "condensed matter physics", which emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, over "solid state physics", which was often associated with the industrial applications of metals and semiconductors.[8] The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics.[4]

References to "condensed" state can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids,[9] Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'".

History

Classical physics


Heike Kamerlingh Onnes and Johannes van der Waals with the helium liquefactor at Leiden in 1908

One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity.[10] This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals.[11][notes 1]

In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements except for nitrogen, hydrogen, and oxygen.[10] Later, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases,[13] and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures.[14]:35–38 By 1908, James Dewar and Heike Kamerlingh Onnes were able to successfully liquefy hydrogen and the newly discovered helium, respectively.[10]

Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid.[3] Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law.[15][16]:27–29 However, despite the success of Drude's free electron model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures.[17]:366–368

In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value.[18] The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades.[19] Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas".[20]

Advent of quantum mechanics

Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metals must obey Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and made it better able to explain the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of a quantum electron in a periodic lattice.[17]:366–368 The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935.[21] Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.[3]



A replica of the first point-contact transistor in Bell labs

In 1879, Edwin Herbert Hall, working at Johns Hopkins University, discovered a voltage developing across conductors transverse to an electric current in the conductor and magnetic field perpendicular to the current.[22] This phenomenon, arising due to the nature of charge carriers in the conductor, came to be termed the Hall effect, but it was not properly explained at the time, since the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for the theoretical explanation of the quantum Hall effect discovered half a century later.[23]:458–460[24]

Magnetism as a property of matter has been known in China since 4000 BC.[25]:1–2 However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization.[26] Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials.[25] In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets.[27]:9 The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization.[25] The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research such as by Bloch on spin waves and Néel on antiferromagnetism led to developing new magnetic materials with applications to magnetic storage devices.[25]:36–38,48

Modern many-body physics

A magnet levitating above a high-temperature superconductor. Today some physicists are working to understand high-temperature superconductivity using the AdS/CFT correspondence.[28]

The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect.[29] After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Russian physicist Lev Landau used the idea for the Fermi liquid theory, wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau quasiparticles.[29] Landau also developed a mean field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases.[30] Eventually, in 1957, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair.[31]



The quantum Hall effect: Components of the Hall resistivity as a function of the external magnetic field[32]:fig. 14

The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s.[33] Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.[33]

The quantum Hall effect was discovered by Klaus von Klitzing in 1980 when he observed the Hall conductance to be integer multiples of a fundamental constant e^{2}/h (see figure). The effect was observed to be independent of parameters such as system size and impurities.[32] In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance can be characterized in terms of a topological invariant called the Chern number.[34][35]:69, 74 Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect, where the conductance was now a rational multiple of a constant. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction.[36] The study of topological properties of the fractional Hall effect remains an active field of research.

In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role.[37] A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic.

In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics.[38]

In 2012, several groups released preprints suggesting that samarium hexaboride has the properties of a topological insulator,[39] in accord with earlier theoretical predictions.[40] Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, the existence of a topological surface state in this material would lead to a topological insulator with strong electronic correlations.

Theoretical

Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, band structure theory and density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries.

Emergence

Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents.[31] For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known.[41] Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon.[42] Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two non-magnetic insulators are joined to create conductivity, superconductivity, and ferromagnetism.

Electronic theory of solids

The metallic state has historically been an important building block for studying properties of solids.[43] The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. He was able to derive the empirical Wiedemann–Franz law and get results in close agreement with the experiments.[16]:90–91 This classical model was then improved by Arnold Sommerfeld, who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals while retaining the Wiedemann–Franz law.[16]:101–103 In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals and concluded that crystals get their structure from periodic lattices of atoms.[16]:48[44] In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, called the Bloch wave.[45]
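
As a small quantitative check on the Sommerfeld refinement, the Wiedemann–Franz ratio κ/(σT) is predicted to equal the Lorenz number (π²/3)(k_B/e)²; the sketch below (illustrative, with CODATA constants entered by hand) evaluates it.

    from math import pi

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    e = 1.602176634e-19     # elementary charge, C

    lorenz = (pi ** 2 / 3) * (k_B / e) ** 2
    print(lorenz)           # ~2.44e-8 W Ohm / K^2, close to measured values for many metals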

Calculating the electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence approximation methods are needed to obtain meaningful predictions.[46] The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation; only the free electron gas case can be solved exactly.[43]:330–337 Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory, which gave realistic descriptions for bulk and surface properties of metals. Density functional theory (DFT) has been widely used since the 1970s for band structure calculations of a variety of solids.[46]

Symmetry breaking

Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry.[47][48]

Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.[49]
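
A one-dimensional chain of atoms gives a concrete picture of such a gapless mode: the acoustic phonon dispersion ω(k) = 2\sqrt{K/m}|sin(ka/2)| vanishes as k → 0. A minimal sketch (illustrative; the spring constant, mass and lattice spacing are arbitrary values chosen for demonstration):

    import numpy as np

    K, m, a = 1.0, 1.0, 1.0                      # spring constant, atomic mass, lattice spacing (arbitrary units)
    k = np.linspace(1e-3, np.pi / a, 5)          # wavevectors in the first Brillouin zone
    omega = 2 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2))

    print(omega[0])      # ~1e-3: the frequency vanishes as k -> 0 (gapless Goldstone mode)
    print(omega[-1])     # 2.0: maximum frequency at the zone boundary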

Phase transition

Phase transition refers to the change of phase of a system, which is brought about by a change in an external parameter such as temperature. A classical phase transition occurs at finite temperature when the order of the system is destroyed. For example, when ice melts and becomes water, the ordered crystal structure is destroyed. In quantum phase transitions, the temperature is set to absolute zero, and a non-thermal control parameter, such as pressure or magnetic field, causes the phase transition when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian. Understanding the behavior of quantum phase transitions is important in the difficult task of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances.[50]

Two classes of phase transitions occur: first-order transitions and continuous (second-order) transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, in which several of their properties, such as the correlation length, specific heat, and magnetic susceptibility, diverge according to power laws.[50] These critical phenomena pose serious challenges to physicists because normal macroscopic laws are no longer valid in that region, and novel ideas and methods must be invented to find the new laws that can describe the system.[51]:75ff
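
These power-law divergences define the critical exponents; for example, the correlation length behaves as ξ ∝ |T − T_c|^{−ν}, the specific heat as C ∝ |T − T_c|^{−α}, and the magnetic susceptibility as χ ∝ |T − T_c|^{−γ}, with exponent values shared across whole universality classes of otherwise very different systems.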

The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation. However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems, which involve short-range interactions near the critical point, a better theory is needed.[52]:8–11
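
In the mean-field (Landau) formulation, the free energy is expanded in powers of an order parameter ψ, F(ψ) ≈ F_0 + a(T)|ψ|² + (b/2)|ψ|⁴ with a(T) = a_0(T − T_c); minimizing F gives ψ = 0 above T_c and |ψ|² = −a(T)/b below it, so the order parameter grows continuously from zero at the transition.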

Near the critical point, fluctuations occur over a broad range of size scales, while the behavior of the whole system is scale invariant. Renormalization group methods successively average out the shortest-wavelength fluctuations in stages while retaining their effects in the next stage. In this way, the changes of a physical system as viewed at different size scales can be investigated systematically. These methods, together with powerful computer simulations, contribute greatly to the explanation of the critical phenomena associated with continuous phase transitions.[51]:11
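
As a minimal illustrative sketch of the renormalization-group idea (a toy example, not a method described in the article), the exact decimation recursion for the one-dimensional Ising model, K' = ½ ln cosh(2K) with dimensionless coupling K = J/(k_B T), can be iterated numerically; the coupling always flows to the disordered fixed point K = 0, reflecting the absence of a finite-temperature phase transition in one dimension:

import math

def decimate(K):
    # One decimation step for the 1D Ising chain: summing out every other
    # spin renormalizes the dimensionless coupling K = J/(k_B T) to K'.
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0  # start from a strong (low-temperature) coupling
for step in range(10):
    K = decimate(K)
    print(step, round(K, 6))  # K shrinks toward the disordered fixed point K = 0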

Experimental

Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include the effects of electric and magnetic fields, measurements of response functions, transport properties, and thermometry.[53] Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; the study of thermal response, such as specific heat; and the measurement of transport via thermal conduction.


Image of an X-ray diffraction pattern from a protein crystal.

Scattering

Several condensed matter experiments involve scattering of an experimental probe, such as X-rays, optical photons, or neutrons, off constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density.[54]:33–34
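
The relation E ≈ 1240 eV·nm / λ between photon energy and wavelength makes these scales concrete: a 2 eV visible photon has a wavelength of roughly 620 nm, far larger than interatomic spacings, whereas a 10 keV X-ray photon has a wavelength of roughly 0.12 nm, comparable to the spacing between atoms in a crystal.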

Neutrons can also probe atomic length scales and are used to study scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes.[54]:33–34[55]:39–43 Similarly, positron annihilation can be used as an indirect measurement of local electron density.[56] Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.[51] :258–259
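
For neutrons the relevant relation is the de Broglie wavelength λ = h/√(2 m_n E); a thermal neutron with E ≈ 25 meV has λ ≈ 0.18 nm, which is why thermal neutrons are well matched to atomic length scales.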

External magnetic fields

In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems.[57] Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, giving information about the atomic, molecular, and bond structure of their neighborhood. NMR experiments can be made in magnetic fields with strengths up to 60 tesla, and higher magnetic fields can improve the quality of NMR measurement data.[58]:69[59]:185 The study of quantum oscillations is another experimental method in which high magnetic fields are used to study material properties such as the geometry of the Fermi surface.[60] High magnetic fields will also be useful in experimentally testing various theoretical predictions, such as the quantized magnetoelectric effect, the image magnetic monopole, and the half-integer quantum Hall effect.[58]:57
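
The resonance condition is the Larmor relation ω = γB, where γ is the gyromagnetic ratio of the nucleus being probed; for protons γ/2π ≈ 42.58 MHz/T, so in a 60 tesla field the proton resonance frequency is roughly 2.6 GHz.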

Cold atomic gases


The first Bose–Einstein condensate observed in a gas of ultracold rubidium atoms. The blue and white areas represent higher density.

Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets.[61] In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering.[62][63]
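
The single-band Hubbard Hamiltonian realized in such experiments has the form H = −t Σ_{⟨i,j⟩,σ} (c†_{iσ} c_{jσ} + h.c.) + U Σ_i n_{i↑} n_{i↓}; the hopping amplitude t is set mainly by the lattice depth and the on-site interaction U by the atoms' scattering properties, and tuning the ratio U/t drives the system between itinerant (superfluid or metallic) and Mott-insulating regimes.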

In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state.[64]
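
For a uniform ideal Bose gas, condensation sets in below the critical temperature T_c = (2πħ²/(m k_B)) (n/ζ(3/2))^{2/3}, where n is the particle density and ζ(3/2) ≈ 2.612; the extreme dilution of such trapped atomic gases is what pushes T_c down to the nanokelvin range.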

Applications


Computer simulation of nanogears made of fullerene molecules. It is hoped that advances in nanoscience will lead to machines working on the molecular scale.

Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor,[3] laser technology,[51] and several phenomena studied in the context of nanotechnology.[65]:111ff Methods such as scanning-tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication.[66]

In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere before a useful computation is completed, and this decoherence problem must be solved before practical quantum computing can be realized. To address it, several promising approaches have been proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and topological qubits based on non-Abelian anyons in fractional quantum Hall effect states.[66]

Condensed matter physics also has important uses for biophysics, for example, the experimental method of magnetic resonance imaging, which is widely used in medical diagnosis.[66]

Equality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Equality_...