
Wednesday, October 23, 2019

Quantum Turing machine

From Wikipedia, the free encyclopedia
 
A quantum Turing machine (QTM), also a universal quantum computer, is an abstract machine used to model the effect of a quantum computer. It provides a very simple model which captures all of the power of quantum computation. Any quantum algorithm can be expressed formally as a particular quantum Turing machine.

In 1980 and 1982, physicist Paul Benioff published papers that first described a quantum mechanical model of Turing machines. A 1985 article written by Oxford University physicist David Deutsch further developed the idea of quantum computers by suggesting quantum gates could function in a similar fashion to traditional digital computing binary logic gates.

Quantum Turing machines are not always used for analyzing quantum computation; the quantum circuit is a more common model. These models are computationally equivalent.

Quantum Turing machines can be related to classical and probabilistic Turing machines in a framework based on transition matrices. That is, a matrix can be specified whose product with the matrix representing a classical or probabilistic machine provides the quantum probability matrix representing the quantum machine. This was shown by Lance Fortnow.

Iriyama, Ohya, and Volovich have developed a model of a linear quantum Turing machine (LQTM). This is a generalization of a classical QTM that has mixed states and that allows irreversible transition functions. These allow the representation of quantum measurements without classical outcomes.

A quantum Turing machine with postselection was defined by Scott Aaronson, who showed that the class of polynomial time on such a machine (PostBQP) is equal to the classical complexity class PP.

Informal sketch

A way of understanding the quantum Turing machine (QTM) is that it generalizes the classical Turing machine (TM) in the same way that the quantum finite automaton (QFA) generalizes the deterministic finite automaton (DFA). In essence, the internal states of a classical TM are replaced by pure or mixed states in a Hilbert space; the transition function is replaced by a collection of unitary matrices that map the Hilbert space to itself.

That is, a classical Turing machine is described by the 7-tuple M = (Q, Γ, b, Σ, δ, q0, F): a finite set of states Q, a tape alphabet Γ, a blank symbol b ∈ Γ, an input alphabet Σ, a transition function δ, an initial state q0, and a set of final states F.
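As a concrete point of reference, here is a minimal Python sketch of a classical machine given by such a 7-tuple (the example machine itself is a hypothetical illustration, not one from the article):

```python
# A minimal classical Turing machine as a 7-tuple:
# (states Q, tape alphabet Gamma, blank b, input alphabet Sigma,
#  transition delta, start state q0, accepting states F).
def run_tm(delta, q0, accept, blank, tape, max_steps=10_000):
    """Run delta: (state, symbol) -> (state, symbol, move) until acceptance."""
    tape = dict(enumerate(tape))   # sparse tape indexed by integer position
    state, head = q0, 0
    for _ in range(max_steps):
        if state in accept:
            # read off the tape contents left to right
            return "".join(tape[i] for i in sorted(tape))
        state, tape[head], move = delta[(state, tape.get(head, blank))]
        head += {"R": 1, "L": -1, "N": 0}[move]
    raise RuntimeError("no halt within step budget")

# Hypothetical example machine: append a '1' to a unary string, then accept.
delta = {
    ("q0", "1"): ("q0", "1", "R"),   # scan right over the input
    ("q0", "_"): ("q1", "1", "N"),   # write one extra mark at the end
}
print(run_tm(delta, "q0", accept={"q1"}, blank="_", tape="111"))  # -> 1111
```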

For a three-tape quantum Turing machine (one tape holding the input, a second tape holding intermediate calculation results, and a third tape holding output):
  • The set of states Q is replaced by a Hilbert space.
  • The tape alphabet symbols Γ are likewise replaced by a Hilbert space (usually a different Hilbert space from the one replacing the set of states).
  • The blank symbol b corresponds to the zero vector.
  • The input and output symbols Σ are usually taken as a discrete set, as in the classical system; thus, neither the input nor the output of a quantum machine need be a quantum system itself.
  • The transition function δ is a generalization of a transition monoid, and is understood to be a collection of unitary matrices that are automorphisms of the state Hilbert space.
  • The initial state q0 may be either a mixed state or a pure state.
  • The set of final or accepting states F is a subspace of the Hilbert space.
The above is merely a sketch of a quantum Turing machine, rather than its formal definition, as it leaves vague several important details: for example, how often a measurement is performed; see, for example, the difference between a measure-once and a measure-many QFA. This question of measurement affects the way in which writes to the output tape are defined.

Universal Turing machine

From Wikipedia, the free encyclopedia
 
In computer science, a universal Turing machine (UTM) is a Turing machine that can simulate an arbitrary Turing machine on arbitrary input. The universal machine essentially achieves this by reading both the description of the machine to be simulated and the input thereof from its own tape. Alan Turing introduced the idea of such a machine in 1936–1937. This principle is considered to be the origin of the idea of a stored-program computer used by John von Neumann in 1946 for the "Electronic Computing Instrument" that now bears von Neumann's name: the von Neumann architecture.

In terms of computational complexity, a multi-tape universal Turing machine need only be slower by a logarithmic factor compared to the machines it simulates.

Introduction


Every Turing machine computes a certain fixed partial computable function from the input strings over its alphabet. In that sense it behaves like a computer with a fixed program. However, we can encode the action table of any Turing machine in a string. Thus we can construct a Turing machine that expects on its tape a string describing an action table followed by a string describing the input tape, and computes the tape that the encoded Turing machine would have computed. Turing described such a construction in complete detail in his 1936 paper:
"It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with a tape on the beginning of which is written the S.D ["standard description" of an action table] of some computing machine M, then U will compute the same sequence as M." 

Stored-program computer

Davis makes a persuasive argument that Turing's conception of what is now known as "the stored-program computer", of placing the "action table"—the instructions for the machine—in the same "memory" as the input data, strongly influenced John von Neumann's conception of the first American discrete-symbol (as opposed to analog) computer—the EDVAC. Davis quotes Time magazine to this effect, that "everyone who taps at a keyboard... is working on an incarnation of a Turing machine," and that "John von Neumann [built] on the work of Alan Turing" (Davis 2000:193 quoting Time magazine of 29 March 1999). 

Davis makes a case that Turing's Automatic Computing Engine (ACE) computer "anticipated" the notions of microprogramming (microcode) and RISC processors (Davis 2000:188). Knuth cites Turing's work on the ACE computer as designing "hardware to facilitate subroutine linkage" (Knuth 1973:225); Davis also references this work as Turing's use of a hardware "stack" (Davis 2000:237 footnote 18). 

Just as the Turing machine was encouraging the construction of computers, the UTM was encouraging the development of the fledgling computer sciences. An early, if not the very first, assembler was proposed "by a young hot-shot programmer" for the EDVAC (Davis 2000:192). Von Neumann's "first serious program ... [was] to simply sort data efficiently" (Davis 2000:184). Knuth observes that the subroutine return embedded in the program itself rather than in special registers is attributable to von Neumann and Goldstine. Knuth furthermore states that
"The first interpretive routine may be said to be the "Universal Turing Machine" ... Interpretive routines in the conventional sense were mentioned by John Mauchly in his lectures at the Moore School in 1946 ... Turing took part in this development also; interpretive systems for the Pilot ACE computer were written under his direction" (Knuth 1973:226).
Davis briefly mentions operating systems and compilers as outcomes of the notion of program-as-data (Davis 2000:185). 

Some, however, might raise issues with this assessment. At the time (mid-1940s to mid-1950s) a relatively small cadre of researchers were intimately involved with the architecture of the new "digital computers". Hao Wang (1954), a young researcher at this time, made the following observation:
"Turing's theory of computable functions antedated but has not much influenced the extensive actual construction of digital computers. These two aspects of theory and practice have been developed almost entirely independently of each other. The main reason is undoubtedly that logicians are interested in questions radically different from those with which the applied mathematicians and electrical engineers are primarily concerned. It cannot, however, fail to strike one as rather strange that often the same concepts are expressed by very different terms in the two developments." (Wang 1954, 1957:63)
Wang hoped that his paper would "connect the two approaches." Indeed, Minsky confirms this: "that the first formulation of Turing-machine theory in computer-like models appears in Wang (1957)" (Minsky 1967:200). Minsky goes on to demonstrate Turing equivalence of a counter machine.

With respect to the reduction of computers to simple Turing equivalent models (and vice versa), Minsky's designation of Wang as having made "the first formulation" is open to debate. While both Minsky's paper of 1961 and Wang's paper of 1957 are cited by Shepherdson and Sturgis (1963), they also cite and summarize in some detail the work of European mathematicians Kaphenst (1959), Ershov (1959), and Péter (1958). The names of mathematicians Hermes (1954, 1955, 1961) and Kaphenst (1959) appear in the bibliographies of both Shepherdson–Sturgis (1963) and Elgot–Robinson (1961). Two other names of importance are Canadian researchers Melzak (1961) and Lambek (1961).

Mathematical theory

With this encoding of action tables as strings it becomes possible in principle for Turing machines to answer questions about the behaviour of other Turing machines. Most of these questions, however, are undecidable, meaning that the function in question cannot be calculated mechanically. For instance, the problem of determining whether an arbitrary Turing machine will halt on a particular input, or on all inputs, known as the Halting problem, was shown to be, in general, undecidable in Turing's original paper. Rice's theorem shows that any non-trivial question about the output of a Turing machine is undecidable.

A universal Turing machine can calculate any recursive function, decide any recursive language, and accept any recursively enumerable language. According to the Church–Turing thesis, the problems solvable by a universal Turing machine are exactly those problems solvable by an algorithm or an effective method of computation, for any reasonable definition of those terms. For these reasons, a universal Turing machine serves as a standard against which to compare computational systems, and a system that can simulate a universal Turing machine is called Turing complete.

An abstract version of the universal Turing machine is the universal function, a computable function which can be used to calculate any other computable function. The UTM theorem proves the existence of such a function.

Efficiency

Without loss of generality, the input of a Turing machine can be assumed to be in the alphabet {0, 1}; any other finite alphabet can be encoded over {0, 1}. The behavior of a Turing machine M is determined by its transition function. This function can be easily encoded as a string over the alphabet {0, 1} as well. The size of the alphabet of M, the number of tapes it has, and the size of the state space can be deduced from the transition function's table. The distinguished states and symbols can be identified by their position, e.g. the first two states can by convention be the start and stop states. Consequently, every Turing machine can be encoded as a string over the alphabet {0, 1}. Additionally, we stipulate that every invalid encoding maps to a trivial Turing machine that immediately halts, and that every Turing machine has infinitely many encodings, obtained by padding the encoding with an arbitrary number of (say) 1's at the end, much as comments work in a programming language. It should be no surprise that we can achieve this encoding given the existence of Gödel numberings and the computational equivalence between Turing machines and μ-recursive functions. Similarly, our construction associates to every binary string α a Turing machine Mα.
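To make the encoding and padding conventions concrete, here is a toy Python sketch; the bit-level scheme is our own hypothetical choice, used only to illustrate a self-delimiting encoding and padding that decoding ignores:

```python
# Toy self-delimiting encoding; the concrete scheme is a hypothetical
# illustration, not the article's. Each small integer from a transition
# table becomes n+1 zeros followed by a one; trailing 1s act as "comments".
def encode(nums):
    """Encode each small integer n as n+1 zeros followed by a one."""
    return "".join("0" * (n + 1) + "1" for n in nums)

def decode(code):
    """Invert encode(); trailing 1-padding is ignored."""
    nums, run = [], 0
    for b in code.rstrip("1"):   # drop padding (this also eats the last '1')
        if b == "0":
            run += 1
        else:
            nums.append(run - 1)
            run = 0
    if run:                      # flush the final block whose '1' was stripped
        nums.append(run - 1)
    return nums

row = [2, 0, 1, 1, 3]            # a made-up transition-table row
assert decode(encode(row)) == row
assert decode(encode(row) + "11111") == row   # padded encodings decode alike
```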

Starting from the above encoding, in 1966 F. C. Hennie and R. E. Stearns showed that, given a Turing machine Mα that halts on input x within N steps, there exists a multi-tape universal Turing machine that halts on inputs α, x (given on different tapes) within C·N·log N steps, where C is a machine-specific constant that does not depend on the length of the input x, but does depend on M's alphabet size, number of tapes, and number of states. Effectively, this is an O(N log N) simulation, using Donald Knuth's big O notation.

Smallest machines

When Alan Turing came up with the idea of a universal machine he had in mind the simplest computing model powerful enough to calculate all possible functions that can be calculated. Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols.

Marvin Minsky discovered a 7-state 4-symbol universal Turing machine in 1962 using 2-tag systems. Other small universal Turing machines have since been found by Yurii Rogozhin and others by extending this approach of tag system simulation. If we denote by (m, n) the class of UTMs with m states and n symbols the following tuples have been found: (15, 2), (9, 3), (6, 4), (5, 5), (4, 6), (3, 9), and (2, 18). Rogozhin's (4, 6) machine uses only 22 instructions, and no standard UTM of lesser descriptional complexity is known.

However, generalizing the standard Turing machine model admits even smaller UTMs. One such generalization is to allow an infinitely repeated word on one or both sides of the Turing machine input, thus extending the definition of universality and known as "semi-weak" or "weak" universality, respectively. Small weakly universal Turing machines that simulate the Rule 110 cellular automaton have been given for the (6, 2), (3, 3), and (2, 4) state-symbol pairs. The proof of universality for Wolfram's 2-state 3-symbol Turing machine further extends the notion of weak universality by allowing certain non-periodic initial configurations. Other variants on the standard Turing machine model that yield small UTMs include machines with multiple tapes or tapes of multiple dimension, and machines coupled with a finite automaton.

Machines with no internal states

If multiple heads are allowed on a Turing machine, then one can have a Turing machine with no internal states at all. The "states" are encoded as part of the tape. For example, consider a tape with 6 colours: 0, 1, 2, 0A, 1A, 2A. Consider a tape such as 0,0,1,2,2A,0,2,1 where a 3-headed Turing machine is situated over the triple (2,2A,0). The rules then convert any triple to another triple and move the 3 heads left or right. For example, the rules might convert (2,2A,0) to (2,1,0) and move the heads left. Thus in this example the machine acts like a 3-colour Turing machine with internal states A and B (represented by no letter). The case for a 2-headed Turing machine is very similar; thus a 2-headed Turing machine can be universal with 6 colours. It is not known what the smallest number of colours needed for a multi-headed Turing machine is, or whether a 2-colour universal Turing machine is possible with multiple heads. It also means that rewrite rules are Turing complete, since the triple rules are equivalent to rewrite rules. Extending the tape to two dimensions, with a head sampling a letter and its 8 neighbours, only 2 colours are needed; for example, a colour can be encoded in a vertical triple pattern such as 110.
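A minimal Python sketch of such a stateless step function follows; the rule table encodes the example rule from the text, plus one hypothetical extra rule:

```python
# Stateless 3-headed machine: all "state" lives in the tape colours.
# Rules map the triple under the heads to a new triple plus a head motion.
RULES = {
    ("2", "2A", "0"): (("2", "1", "0"), -1),   # the example rule from the text
    ("0", "1", "2"):  (("0A", "1", "2"), +1),  # hypothetical further rule
}

def step(tape, pos):
    """Apply one rewrite at the triple starting at index pos; move the heads."""
    triple = tuple(tape[pos:pos + 3])
    new_triple, move = RULES[triple]
    tape[pos:pos + 3] = new_triple
    return pos + move

tape = ["0", "0", "1", "2", "2A", "0", "2", "1"]
pos = step(tape, 3)          # heads over ("2", "2A", "0")
print(tape, pos)             # ['0','0','1','2','1','0','2','1'], 2
```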

Example of universal-machine coding

The following example is taken from Turing (1936). For more about this example, see the page Turing machine examples.

Turing used seven symbols { A, C, D, R, L, N, ; } to encode each 5-tuple; as described in the article Turing machine, his 5-tuples are only of types N1, N2, and N3. The number of each "m-configuration" (instruction, state) is represented by "D" followed by a unary string of A's, e.g. "q3" = DAAA. In a similar manner he encodes the blank as "D", the symbol "0" as "DC", the symbol "1" as "DCC", etc. The symbols "R", "L", and "N" remain as is.

After encoding, each 5-tuple is then "assembled" into a string, in order, as shown in the following table (each entry gives the 5-tuple element with its code in parentheses):

  Current m-configuration | Tape symbol | Print-operation | Tape-motion | Final m-configuration | Assembled 5-tuple code
  q1 (DA)                 | blank (D)   | P0 (DC)         | R (R)       | q2 (DAA)              | DADDCRDAA
  q2 (DAA)                | blank (D)   | E (D)           | R (R)       | q3 (DAAA)             | DAADDRDAAA
  q3 (DAAA)               | blank (D)   | P1 (DCC)        | R (R)       | q4 (DAAAA)            | DAAADDCCRDAAAA
  q4 (DAAAA)              | blank (D)   | E (D)           | R (R)       | q1 (DA)               | DAAAADDRDA

Finally, the codes for all four 5-tuples are strung together into a code started by ";" and separated by ";" i.e.:
;DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA
This code he placed on alternate squares—the "F-squares" – leaving the "E-squares" (those liable to erasure) empty. The final assembly of the code on the tape for the U-machine consists of placing two special symbols ("e") one after the other, then the code separated out on alternate squares, and lastly the double-colon symbol "::" (blanks shown here with "." for clarity):
ee..D.A.D.D.C.R.D.A.A..D.A.A.D.D.R.D.A.A.A..D.A.A.A.D.D.C.C.R.D.A.A.A.A..D.A.A.A.A.D.D.R.D.A.......
The U-machine's action table (state-transition table) is responsible for decoding the symbols. Turing's action table keeps track of its place with markers "u", "v", "x", "y", "z", placing them in "E-squares" to the right of "the marked symbol". For example, to mark the current instruction, z is placed to the right of ";", while x keeps the place with respect to the current "m-configuration" DAA. The U-machine's action table will shuttle these symbols around (erasing them and placing them in different locations) as the computation progresses:
ee.; .D.A.D.D.C.R.D.A.A. ; zD.A.AxD.D.R.D.A.A.A.;.D.A.A.A.D.D.C.C.R.D.A.A.A.A.;.D.A.A.A.A.D.D.R.D.A.::......
Turing's action-table for his U-machine is very involved. 
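The assembly of the standard description itself is mechanical enough to script. The following Python sketch (the helper names are ours) reproduces the code string above from the four 5-tuples:

```python
# Rebuild Turing's standard description from the four 5-tuples above.
def q(n):                 # m-configuration q_n -> "D" followed by n A's
    return "D" + "A" * n

SYM = {"blank": "D", "0": "DC", "1": "DCC"}   # symbol -> "D" plus C's

tuples = [  # (current, scanned, printed, motion, final); E prints the blank
    (1, "blank", "0", "R", 2),
    (2, "blank", "blank", "R", 3),
    (3, "blank", "1", "R", 4),
    (4, "blank", "blank", "R", 1),
]
sd = "".join(";" + q(c) + SYM[s] + SYM[p] + m + q(f)
             for c, s, p, m, f in tuples)
print(sd)   # ;DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA
```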

A number of other commentators (notably Penrose 1989) provide examples of ways to encode instructions for the Universal machine. As does Penrose, most commentators use only binary symbols i.e. only symbols { 0, 1 }, or { blank, mark | }. Penrose goes further and writes out his entire U-machine code (Penrose 1989:71–73). He asserts that it truly is a U-machine code, an enormous number that spans almost 2 full pages of 1's and 0's. For readers interested in simpler encodings for the Post–Turing machine the discussion of Davis in Steen (Steen 1980:251ff) may be useful. 

Asperti and Ricciotti described a multi-tape UTM defined by composing elementary machines with very simple semantics, rather than explicitly giving its full action table. This approach was sufficiently modular to allow them to formally prove the correctness of the machine in the Matita proof assistant.

Programming Turing machines

Various higher level languages are designed to be compiled into a Turing machine. Examples include Laconic and Turing Machine Descriptor.

Quantum computing

From Wikipedia, the free encyclopedia
 
The Bloch sphere is a representation of a qubit, the fundamental building block of quantum computers.
 
Quantum computing is the study of a currently hypothetical model of computation. Whereas traditional models of computing such as the Turing machine or Lambda calculus rely on "classical" representations of computational memory, a quantum computation could transform the memory into a quantum superposition of possible classical states. A quantum computer is a device that could perform such computation.

Quantum computing began in the early 1980s when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine. Richard Feynman and Yuri Manin later suggested that a quantum computer could perform simulations that are out of reach for classical computers. In 1994, Peter Shor developed a polynomial-time quantum algorithm for factoring integers. This was a major breakthrough in the subject: an important method of asymmetric key exchange known as RSA is based on the belief that factoring integers is computationally difficult. The existence of a polynomial-time quantum algorithm proves that one of the most widely-used cryptographic protocols is vulnerable to an adversary who possesses a quantum computer.

Experimental efforts towards building a quantum computer began after a slew of results known as fault-tolerance threshold theorems. These theorems proved that a quantum computation could be efficiently corrected against the effects of large classes of physically realistic noise models. One early result demonstrated parts of Shor's algorithm in a liquid-state nuclear magnetic resonance experiment. Other notable experiments have been performed in superconducting systems, ion-traps, and photonic systems.

Despite rapid and impressive experimental progress, most researchers believe that "fault-tolerant quantum computing [is] still a rather distant dream". As of September 2019, no scalable quantum computing hardware has been demonstrated. Nevertheless, there is an increasing amount of investment in quantum computing by governments, established companies, and start-ups. Current research focusses on building and using near-term intermediate-scale devices and demonstrating quantum supremacy alongside the long-term goal of building and using a powerful and error-free quantum computer. 

The field of quantum computing is closely related to quantum information science, which includes quantum cryptography and quantum communication.

Basic concept

In most models of classical computation, the computer has access to memory. This is a system that can be found in one of a finite set of possible states, each of which is physically distinct. It is frequently convenient to represent the state of this memory as a string of symbols; most simply, as a string of the symbols 0 and 1. In this scenario, the fundamental unit of memory is called a bit and we can measure the "size" of the memory in terms of the number of bits needed to represent fully the state of the memory.

If the memory obeys the laws of quantum physics, the state of the memory could be found in a quantum superposition of different possible "classical" states. If the classical states are to be represented as a string of bits, the quantum memory could be found in any superposition of the possible bit strings. In the quantum scenario, the fundamental unit of memory is called a qubit.

The defining property of a quantum computer is the ability to turn classical memory states into quantum memory states, and vice-versa. This is not possible with present-day computers because they are carefully designed to ensure that the memory never deviates from clearly defined informational states. To clarify this point, consider that information is normally transmitted through the computer as an electrical signal that could have one of two easily distinguished voltages. If the voltages were to become indistinct (in a classical or quantum sense), the computer would no longer operate correctly.

Of course, in the end we are classical beings and we can only observe classical states. That means the quantum computer must complete its task by returning to us a classical output. To produce these classical outputs, the quantum computer is obliged to measure parts of the memory at various times throughout the computation. The measurement process is inherently probabilistic, meaning that the output of a quantum algorithm is frequently random. The task of a quantum algorithm designer is to ensure that the randomness is tailored to the needs of the problem at hand. For example, if the quantum computer is searching a quantum database for one of several marked items, we can ask the quantum computer to return one of the marked items at random. The quantum computer succeeds in this task as long as it is unlikely to return an unmarked item.

Quantum operations

The prevailing model of quantum computation describes the computation in terms of a network of quantum logic gates. What follows is a brief treatment of the subject based upon Chapter 4 of Nielsen and Chuang.

We may represent the state of a computer memory as a vector whose length is equal to the number of possible states of the memory. So a memory consisting of n bits of information has 2^n possible states, and the vector representing that memory state has 2^n entries. In the classical view, all but one of the entries of this vector would be zero and the remaining entry would be one. The vector should be viewed as a probability vector and represents the fact that the memory is to be found in a particular state with 100% probability (i.e. a probability of one).

In quantum mechanics, probability vectors are generalised to density operators. This is the technically rigorous mathematical foundation for quantum logic gates, but the intermediate quantum state vector formalism is usually introduced first because it is conceptually simpler. Here we focus only on the quantum state vector formalism for simplicity. 

We begin by considering a simple memory consisting of only one bit. This memory may be found in one of two states: the zero state or the one state. We may represent the state of this memory using Dirac notation so that

  |0⟩ = (1, 0)^T    |1⟩ = (0, 1)^T

A quantum memory may then be found in any quantum superposition of the two classical states |0⟩ and |1⟩:

  |ψ⟩ = α|0⟩ + β|1⟩,    |α|² + |β|² = 1

In general, the coefficients α and β are complex numbers. In this scenario, we say that one qubit of information can be encoded into the quantum memory. The state |ψ⟩ is not itself a probability vector but can be connected with a probability vector via a measurement operation. If we choose to measure the quantum memory to determine whether the state is |0⟩ or |1⟩ (this is known as a computational basis measurement), we would observe the zero state with probability |α|² and the one state with probability |β|². Please see the article on quantum amplitudes for further information.
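As a numeric illustration, here is a small numpy sketch of a computational basis measurement; the amplitudes α and β are arbitrary example values:

```python
import numpy as np

# One-qubit state |psi> = alpha|0> + beta|1> as a length-2 complex vector.
alpha, beta = 0.6, 0.8j               # arbitrary example; |a|^2 + |b|^2 = 1
psi = np.array([alpha, beta])

# A computational basis measurement returns k with probability |psi_k|^2.
probs = np.abs(psi) ** 2
probs = probs / probs.sum()           # guard against floating-point rounding
samples = np.random.choice([0, 1], size=10_000, p=probs)
print(probs)                          # [0.36 0.64]
print(np.bincount(samples) / 10_000)  # empirical frequencies, close to probs
```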
 
To manipulate the state of this one-qubit quantum memory, we imagine applying quantum logic gates analogous to classical logic gates. One obvious gate is the NOT gate, which can be represented by the matrix

  X = [ 0 1
        1 0 ]

We can formally apply this logic gate to a quantum state vector through matrix multiplication. Thus we find X|0⟩ = |1⟩ and X|1⟩ = |0⟩, as expected. But this is not the only interesting single-qubit quantum logic gate. We might, for example, imagine applying one of the other two Pauli matrices, Y and Z.
 
We may imagine extending single qubit gates to operate on multiqubit quantum memories in two important ways. One way to operate a single qubit gate on a multiqubit memory is simply to select a qubit and apply that gate to the target qubit whilst leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. We illustrate this with another example. 

Consider a two-qubit quantum memory. Its possible classical states are the four basis states |00⟩, |01⟩, |10⟩, and |11⟩.

We may then define the CNOT gate as the following matrix:

  CNOT = [ 1 0 0 0
           0 1 0 0
           0 0 0 1
           0 0 1 0 ]

It is easy to check that CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (the X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit.
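These identities are easy to verify numerically. A small numpy sketch (the labels and helper names are ours):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])                 # the NOT gate from above
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

print(X @ np.array([1, 0]), X @ np.array([0, 1]))  # X|0> = |1>, X|1> = |0>

labels = ["00", "01", "10", "11"]
for k, label in enumerate(labels):
    v = np.zeros(4)
    v[k] = 1                           # the basis vector |label>
    out = CNOT @ v                     # CNOT maps basis states to basis states
    print(f"CNOT|{label}> = |{labels[int(np.argmax(out))]}>")
# CNOT|00> = |00>, CNOT|01> = |01>, CNOT|10> = |11>, CNOT|11> = |10>
```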
 
The preceding discussion is of course a very brief introduction to the concept of a quantum logic gate. Please see the article on quantum logic gates for further information. 

To put the story together, we can describe a quantum computation as a network of quantum logic gates and measurements. One can always 'defer' a measurement to the end of a quantum computation, though this can come at a computational cost according to some cost models. Because of this possibility of deferring a measurement, most quantum circuits depict a network consisting only of quantum logic gates and no measurements. For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch–Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.

One can represent any quantum computation as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem.
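As an illustration of computing with this universal set, the following sketch prepares an entangled Bell state using only one single-qubit gate (the standard Hadamard gate, assumed here) and the CNOT from above:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)        # single-qubit Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.zeros(4)
psi[0] = 1                                  # start in |00>
psi = CNOT @ (np.kron(H, I) @ psi)          # H on qubit 1, then CNOT
print(psi)  # [0.707, 0, 0, 0.707]: the Bell state (|00> + |11>)/sqrt(2)
```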

Potential

Cryptography

Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. Notably, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.

However, other cryptographic algorithms do not appear to be broken by those algorithms. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search. Quantum cryptography could potentially fulfill some of the functions of public key cryptography. Quantum-based cryptographic systems could, therefore, be more secure than traditional systems against quantum hacking.

Quantum search

Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. However, quantum computers offer polynomial speedup for some problems. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.

Problems that can be addressed with Grover's algorithm have the following properties:
  1. There is no searchable structure in the collection of possible answers,
  2. The number of possible answers to check is the same as the number of inputs to the algorithm, and
  3. There exists a boolean function which evaluates each input and determines whether it is the correct answer.
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem. In this instance, the database through which the algorithm iterates is that of all possible answers. An example, and a possible application, of this is a password cracker that attempts to guess the password or secret key for an encrypted file or system. Symmetric ciphers such as Triple DES and AES are particularly vulnerable to this kind of attack. This application of quantum computing is a major interest of government agencies.
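The square-root scaling can be seen directly in a small statevector simulation of Grover's algorithm (a sketch; the database size and marked index are arbitrary choices):

```python
import numpy as np

N, marked = 64, 17                 # arbitrary toy database and marked index
psi = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all answers

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # ~sqrt(N) oracle queries
for _ in range(iterations):
    psi[marked] *= -1                             # oracle: flip marked phase
    psi = 2 * psi.mean() - psi                    # diffusion: invert about mean

print(iterations, abs(psi[marked]) ** 2)  # 6 queries; success prob ~ 0.9997
```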

Quantum simulation

Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.

Quantum annealing and adiabatic optimization

Quantum annealing or adiabatic quantum computation relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state of a simple Hamiltonian, which is slowly evolved to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough, the system will stay in its ground state at all times throughout the process.

Solving linear equations

The Quantum algorithm for linear systems of equations or "HHL Algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is expected to provide speedup over classical counterparts.

Quantum supremacy

John Preskill has introduced the term quantum supremacy to refer to the hypothetical speedup advantage that a quantum computer would have over a classical computer in a certain field. Google announced in 2017 that it expected to achieve quantum supremacy by the end of the year, though that did not happen. IBM said in 2018 that the best classical computers will be beaten on some practical task within about five years, and views the quantum supremacy test only as a potential future benchmark. Although skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved, Google has been reported to have done so, with calculations more than 3,000,000 times as fast as those of Summit, generally considered the world's fastest computer. Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.

Obstacles

There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:
  • scalable physically to increase the number of qubits;
  • qubits that can be initialized to arbitrary values;
  • quantum gates that are faster than decoherence time;
  • universal gate set;
  • qubits that can be read easily.

Quantum decoherence

One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence.

As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions.

These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time. 

As described in the Quantum threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often cited figure for the required error rate in each gate for fault-tolerant computation is 10^−3, assuming the noise is depolarizing.

Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of qubits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction. With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2 or about 10^7 steps; at 1 MHz, about 10 seconds.
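For orientation, the quoted figures can be tied together in a back-of-the-envelope calculation (a sketch using only the article's own order-of-magnitude numbers):

```python
# Rough resource estimate for factoring a 1000-bit number, reading off the
# order-of-magnitude figures quoted above.
L = 1000                      # bits in the number to be factored
qubits_raw = 10 * L           # ~10^4 qubits/bits without error correction
qubits_ec = qubits_raw * L    # error correction inflates by a factor of L
steps = 10**7                 # computation time ~10^7 steps
seconds = steps / 1e6         # at a 1 MHz gate rate
print(qubits_raw, qubits_ec, seconds)  # 10000 10000000 10.0
```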

A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.

Developments

Quantum computing models

There are a number of quantum computing models, distinguished by the basic elements into which the computation is decomposed. The four main models of practical importance are:
  • the quantum gate array (computation decomposed into a sequence of few-qubit quantum gates);
  • the one-way quantum computer (computation decomposed into a sequence of measurements applied to a highly entangled initial state, or cluster state);
  • the adiabatic quantum computer, based on quantum annealing (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian whose ground state contains the solution);
  • the topological quantum computer (computation decomposed into the braiding of anyons in a 2D lattice).
The quantum Turing machine is theoretically important, but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent; each can simulate the others with no more than polynomial overhead.

Physical realizations

For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits; examples appearing elsewhere in this article include superconducting circuits, trapped ions, photonic systems, silicon spin qubits, and nitrogen-vacancy centres in diamond.
The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. There is also a vast amount of flexibility.

Timeline

In 1980, Paul Benioff describes the first quantum mechanical model of a computer. In this work, Benioff showed that a computer could operate under the laws of quantum mechanics by giving a Schrödinger equation description of Turing machines, laying a foundation for further work in quantum computing. The paper was submitted in June 1979 and published in April 1980. Russian mathematician Yuri Manin then motivates the development of quantum computers.

In 1981, at the First Conference on the Physics of Computation held at MIT and co-organized by MIT and IBM, Paul Benioff and Richard Feynman give talks on quantum computing. Benioff built on his earlier 1980 work showing that a computer can operate under the laws of quantum mechanics. The talk was titled “Quantum mechanical Hamiltonian models of discrete processes that erase their own histories: application to Turing machines”. In Feynman’s talk, he observed that it appeared to be impossible to efficiently simulate an evolution of a quantum system on a classical computer, and he proposed a basic model for a quantum computer. Urging the world to build a quantum computer, he said, "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly, it's a wonderful problem because it doesn't look so easy."

In 1982, Paul Benioff further develops his original model of a quantum mechanical Turing machine.

In 1984, IBM scientists Charles Bennett and Gilles Brassard published BB84, the world's first quantum cryptography protocol.

In 1985, David Deutsch describes the first universal quantum computer. Just as a Universal Turing machine can simulate any other Turing machine efficiently (Church–Turing thesis), so the universal quantum computer is able to simulate any other quantum computer with at most a polynomial slowdown.

In 1989, Bikas K. Chakrabarti and collaborators propose the idea that quantum fluctuations could help explore rough energy landscapes by escaping from local minima of glassy systems having tall but thin barriers by tunneling (instead of climbing over them using thermal excitations), suggesting the effectiveness of quantum annealing over classical simulated annealing.

In 1992, David Deutsch and Richard Jozsa propose a computational problem that can be solved efficiently with the deterministic Deutsch–Jozsa algorithm on a quantum computer, but for which no deterministic classical algorithm is possible. This was perhaps the earliest result in the computational complexity of quantum computers, proving that they were capable of performing some well-defined computational task more efficiently than any classical computer.

In 1993, an international group of six scientists, including Charles Bennett, showed that perfect quantum teleportation is possible in principle, but only if the original is destroyed.

In 1994, Peter Shor, at AT&T's Bell Labs, discovered an important quantum algorithm, which allows a quantum computer to factor large integers exponentially faster than the best known classical algorithm. Shor's algorithm can theoretically break many of the public-key cryptography systems in use today, sparking tremendous interest in quantum computers.

In 1996, DiVincenzo's criteria are published: a list of conditions necessary for constructing a quantum computer, proposed by the theoretical physicist David P. DiVincenzo and later elaborated in his 2000 paper "The Physical Implementation of Quantum Computation".

In 2001, researchers demonstrated Shor's algorithm to factor 15 using a 7-qubit NMR computer.

In 2005, researchers at the University of Michigan built a semiconductor chip ion trap. Such devices from standard lithography may point the way to scalable quantum computing.

In 2009, researchers at Yale University created the first solid-state quantum processor. The 2-qubit superconducting chip had artificial atom qubits made of a billion aluminum atoms that acted like a single atom that could occupy two states.

A team at the University of Bristol also created a silicon chip based on quantum optics, able to run Shor's algorithm. Further developments were made in 2010. Springer publishes a journal, Quantum Information Processing, devoted to the subject.

In February 2010, digital combinational circuits such as adders and subtractors are designed with the help of symmetric functions organized from different quantum gates.

In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation, successfully transferring a complex set of quantum data with full transmission integrity, without affecting the qubits' superpositions.

Photograph of a chip constructed by D-Wave Systems Inc., mounted and wire-bonded in a sample holder. The D-Wave processor is designed to use 128 superconducting logic elements that exhibit controllable and tunable coupling to perform operations.
 
In 2011, D-Wave Systems announced the first commercial quantum annealer, the D-Wave One, claiming a 128-qubit processor. On 25 May 2011, Lockheed Martin agreed to purchase a D-Wave One system. Lockheed and the University of Southern California (USC) will house the D-Wave One at the newly formed USC Lockheed Martin Quantum Computing Center. D-Wave's engineers designed the chips with an empirical approach, focusing on solving particular problems. Investors liked this more than academics, who said D-Wave had not demonstrated that they really had a quantum computer. Criticism softened after a D-Wave paper in Nature that proved that the chips have some quantum properties. Two published papers have suggested that the D-Wave machine's operation can be explained classically, rather than requiring quantum models. Later work showed that classical models are insufficient when all available data is considered. Experts remain divided on the ultimate classification of the D-Wave systems though their quantum behavior was established concretely with a demonstration of entanglement.

During the same year, researchers at the University of Bristol created an all-bulk optics system that ran a version of Shor's algorithm to successfully factor 21.

In September 2011, researchers proved quantum computers can be made with a Von Neumann architecture (separation of RAM).

In November 2011, researchers factorized 143 using 4 qubits.

In February 2012, IBM scientists said that they had made several breakthroughs in quantum computing with superconducting integrated circuits.

In April 2012, a multinational team of researchers from the University of Southern California, the Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara constructed a 2-qubit quantum computer on a doped diamond crystal that can easily be scaled up and is functional at room temperature. The two logical qubits were encoded in an electron spin and a nitrogen nuclear spin, manipulated with microwave pulses. This computer ran Grover's algorithm, generating the right answer on the first try in 95% of cases.

In September 2012, Australian researchers at the University of New South Wales said the world's first quantum computer was just 5 to 10 years away, after announcing a global breakthrough enabling the manufacture of its memory building blocks. A research team led by Australian engineers created the first working qubit based on a single atom in silicon, invoking the same technological platform that forms the building blocks of modern-day computers.

In October 2012, Nobel Prizes were awarded to David J. Wineland and Serge Haroche for their basic work on understanding the quantum world, which may help make quantum computing possible.

In November 2012, the first quantum teleportation from one macroscopic object to another was reported by scientists at the University of Science and Technology of China.

In December 2012, 1QBit, the first dedicated quantum computing software company, was founded in Vancouver, BC. 1QBit is the first company to focus exclusively on commercializing software applications for commercially available quantum computers, including the D-Wave Two. 1QBit's research demonstrated the ability of superconducting quantum annealing processors to solve real-world problems.

In February 2013, a new technique, boson sampling, was reported by two groups using photons in an optical lattice; it is not a universal quantum computer, but may be good enough for practical problems.

In May 2013, Google announced that it was launching the Quantum Artificial Intelligence Lab, hosted by NASA's Ames Research Center, with a 512-qubit D-Wave quantum computer. The Universities Space Research Association (USRA) will invite researchers to share time on it with the goal of studying quantum computing for machine learning. Google added that they had "already developed some quantum machine learning algorithms" and had "learned some useful principles", such as that "best results" come from "mixing quantum and classical computing".

In early 2014, based on documents provided by former NSA contractor Edward Snowden, it was reported that the U.S. National Security Agency (NSA) is running a $79.7 million research program titled "Penetrating Hard Targets", to develop a quantum computer capable of breaking vulnerable encryption.

In 2014, a group of researchers from ETH Zürich, USC, Google, and Microsoft reported a definition of quantum speedup, and were not able to measure quantum speedup with the D-Wave Two device, but did not explicitly rule it out.

In 2014, researchers at University of New South Wales used silicon as a protectant shell around qubits, making them more accurate, increasing the length of time they will hold information, and possibly making quantum computers easier to build.

In April 2015, IBM scientists claimed two critical advances towards the realization of a practical quantum computer, claiming the ability to detect and measure both kinds of quantum errors simultaneously, as well as a new, square quantum bit circuit design that could scale to larger dimensions.

In October 2015, QuTech successfully conducted the Loophole-free Bell inequality violation test using electron spins separated by 1.3 kilometres.

In October 2015, researchers at the University of New South Wales built a quantum logic gate in silicon for the first time.

In December 2015, NASA publicly displayed the world's first fully operational quantum computer made by D-Wave Systems at the Quantum Artificial Intelligence Lab at its Ames Research Center. The device was purchased in 2013 via a partnership with Google and Universities Space Research Association. The presence and use of quantum effects in the D-Wave quantum processing unit is more widely accepted. In some tests, it can be shown that the D-Wave quantum annealing processor outperforms Selby’s algorithm. Only two of these computers have been made so far. 

In May 2016, IBM Research announced that for the first time ever it is making quantum computing available to members of the public via the cloud, who can access and run experiments on IBM's quantum processor, calling the service the IBM Quantum Experience. The quantum processor is composed of five superconducting qubits and is housed at IBM's Thomas J. Watson Research Center.

In August 2016, scientists at the University of Maryland successfully built the first reprogrammable quantum computer.

In October 2016, the University of Basel described a variant of the electron-hole based quantum computer, which, instead of manipulating electron spins, uses electron holes in a semiconductor at low (mK) temperatures, which are much less vulnerable to decoherence. This has been dubbed the "positronic" quantum computer, as the quasi-particle behaves as if it has a positive electrical charge.

In March 2017, IBM announced an industry-first initiative, called IBM Q, to build commercially available universal quantum computing systems. The company also released a new API for the IBM Quantum Experience that enables developers and programmers to begin building interfaces between its existing 5-qubit cloud-based quantum computer and classical computers, without needing a deep background in quantum physics. 

In May 2017, IBM announced that it had successfully built and tested its most powerful universal quantum computing processors. The first is a 16-qubit processor that will allow for more complex experimentation than the previously available 5-qubit processor. The second is IBM's first prototype commercial processor with 17 qubits, and leverages significant materials, device, and architecture improvements to make it the most powerful quantum processor created to date by IBM.

In July 2017, a group of U.S. researchers announced a quantum simulator with 51 qubits. The announcement was made by Mikhail Lukin of Harvard University at the International Conference on Quantum Technologies in Moscow. A quantum simulator differs from a computer. Lukin’s simulator was designed to solve one equation. Solving a different equation would require building a new system, whereas a computer can solve many different equations. 

In September 2017, IBM Research scientists used a 7-qubit device to model the beryllium hydride molecule, the largest molecule simulated by a quantum computer to date. The results were published as the cover story in the peer-reviewed journal Nature.

In October 2017, IBM Research scientists successfully "broke the 49-qubit simulation barrier" and simulated 49- and 56-qubit short-depth circuits, using the Lawrence Livermore National Laboratory's Vulcan supercomputer, and the University of Illinois' Cyclops Tensor Framework (originally developed at the University of California).

In November 2017, the University of Sydney research team successfully made a microwave circulator, an important quantum computer part, that was 1000 times smaller than a conventional circulator, by using topological insulators to slow down the speed of light in a material.

In December 2017, IBM announced its first IBM Q Network clients. The companies, universities, and labs that will explore practical business and science quantum applications, using IBM Q 20-qubit commercial systems, include: JPMorgan Chase, Daimler AG, Samsung, JSR Corporation, Barclays, Hitachi Metals, Honda, Nagase, Keio University, Oak Ridge National Lab, Oxford University and University of Melbourne. 

In December 2017, Microsoft released a preview version of a "Quantum Development Kit", which includes a programming language, Q#, that can be used to write programs that are run on an emulated quantum computer.

In 2017, D-Wave was reported to be selling a 2,000-qubit quantum computer.

In late 2017 and early 2018, IBM, Intel, and Google each reported testing quantum processors containing 50, 49, and 72 qubits, respectively, all realized using superconducting circuits. By number of qubits, these circuits are approaching the range in which simulating their quantum dynamics is expected to become prohibitive on classical computers, although it has been argued that further improvements in error rates are needed to put classical simulation out of reach.

In February 2018, scientists reported, for the first time, the discovery of a new form of light, which may involve polaritons, that could be useful in the development of quantum computers.

In February 2018, QuTech reported successfully testing a silicon-based two-spin-qubits quantum processor.

In June 2018, Intel began testing a silicon-based spin-qubit processor, manufactured in the company's D1D Fab in Oregon.

In July 2018, a team led by the University of Sydney achieved the world's first multi-qubit demonstration of a quantum chemistry calculation performed on a system of trapped ions, one of the leading hardware platforms in the race to develop a universal quantum computer.

In December 2018, IonQ reported that its machine could be built as large as 160 qubits.

In January 2019, IBM launched IBM Q System One, its first integrated quantum computing system for commercial use. IBM Q System One is designed by industrial design company Map Project Office and interior design company Universal Design Studio.

In March 2019, a group of Russian scientists used the open-access IBM quantum computer to demonstrate a protocol for the complex conjugation of the probability amplitudes needed for time reversal of a physical process, in this case, for an electron scattered on a two-level impurity, a two-qubit experiment. However, for the three-qubit experiment, the amplitude fell below 50% (failure of time reversal, due to its increased complexity).

In September 2019, Google AI Quantum and NASA published a paper, "Quantum supremacy using a programmable superconducting processor", along with supplementary material, which was later removed from NASA's website.

Relation to computational complexity theory

The suspected relationship of BQP to other problem spaces.
 
The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half. A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.

BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P), which is a subclass of PSPACE.

BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.

The capacity of a quantum computer to accelerate classical algorithms has rigid limits, in the form of upper bounds on the complexity of quantum computation. The overwhelming majority of classical calculations cannot be accelerated on a quantum computer. A similar fact holds for particular computational tasks, such as the search problem, for which Grover's algorithm is optimal.

Bohmian mechanics is a non-local hidden variable interpretation of quantum mechanics. It has been shown that a non-local hidden variable quantum computer could implement a search of an N-item database in at most O(N^(1/3)) steps. This is slightly faster than the O(N^(1/2)) steps taken by Grover's algorithm. Neither search method would allow quantum computers to solve NP-complete problems in polynomial time.

Although quantum computers may be faster than classical computers for some problem types, those described above cannot solve any problem that classical computers cannot already solve. A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis. It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e., there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.
