
Thursday, January 8, 2015

Multiverse

From Wikipedia, the free encyclopedia

The multiverse (or meta-universe) is the hypothetical set of infinite or finite possible universes (including the universe we consistently experience) that together comprise everything that exists: the entirety of space, time, matter, and energy, as well as the physical laws and constants that describe them. The various universes within the multiverse are sometimes called parallel universes or "alternate universes".

The structure of the multiverse, the nature of each universe within it and the relationships among the various constituent universes, depend on the specific multiverse hypothesis considered. Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, and fiction, particularly in science fiction and fantasy. In these contexts, parallel universes are also called "alternate universes", "quantum universes", "interpenetrating dimensions", "parallel dimensions", "parallel worlds", "alternate realities", "alternate timelines", and "dimensional planes," among others. The term 'multiverse' was coined in 1895 by the American philosopher and psychologist William James in a different context.[1]

The multiverse hypothesis is a source of debate within the physics community. Physicists disagree about whether the multiverse exists, and whether the multiverse is a proper subject of scientific inquiry.[2]
Supporters of one of the multiverse hypotheses include Stephen Hawking,[3] Steven Weinberg,[4] Brian Greene,[5][6] Max Tegmark,[7] Alan Guth,[8] Andrei Linde,[9] Michio Kaku,[10] David Deutsch,[11] Leonard Susskind,[12] Raj Pathria,[13] Sean Carroll, Alex Vilenkin,[14] Laura Mersini-Houghton,[15][16] and Neil deGrasse Tyson.[17] In contrast, critics such as Jim Baggott,[18] David Gross,[19] Paul Steinhardt,[20] George Ellis[21][22] and Paul Davies have argued that the multiverse question is philosophical rather than scientific, that the multiverse cannot be a scientific question because it lacks falsifiability, or even that the multiverse hypothesis is harmful or pseudoscientific.

Multiverse hypotheses in physics

Categories

Max Tegmark and Brian Greene have devised classification schemes that categorize the various theoretical types of multiverse, or types of universe that might theoretically comprise a multiverse ensemble.

Max Tegmark's four levels

Cosmologist Max Tegmark has provided a taxonomy of universes beyond the familiar observable universe. The levels according to Tegmark's classification are arranged such that subsequent levels can be understood to encompass and expand upon previous levels, and they are briefly described below.[23][24]
Level I: Beyond our cosmological horizon
A generic prediction of chaotic inflation is an infinite ergodic universe, which, being infinite, must contain Hubble volumes realizing all initial conditions.

Accordingly, an infinite universe will contain an infinite number of Hubble volumes, all having the same physical laws and physical constants. In regard to configurations such as the distribution of matter, almost all will differ from our Hubble volume. However, because there are infinitely many, far beyond the cosmological horizon, there will eventually be Hubble volumes with similar, and even identical, configurations. Tegmark estimates that an identical volume to ours should be about 10^(10^115) meters away from us.[7] Given infinite space, there would, in fact, be an infinite number of Hubble volumes identical to ours in the universe.[25] This follows directly from the cosmological principle, wherein it is assumed our Hubble volume is not special or unique.
Level II: Universes with different physical constants
"Bubble universes": every disk is a bubble universe (Universe 1 to Universe 6 are different bubbles; they have physical constants that are different from our universe); our universe is just one of the bubbles.

In the chaotic inflation theory, a variant of the cosmic inflation theory, the multiverse as a whole is stretching and will continue doing so forever, but some regions of space stop stretching and form distinct bubbles, like gas pockets in a loaf of rising bread. Such bubbles are embryonic level I multiverses. Linde and Vanchurin calculated the number of these universes to be on the scale of 10^(10^(10,000,000)).[26]

Different bubbles may experience different spontaneous symmetry breaking resulting in different properties such as different physical constants.[25]

This level also includes John Archibald Wheeler's oscillatory universe theory and Lee Smolin's fecund universes theory.
Level III: Many-worlds interpretation of quantum mechanics
Hugh Everett's many-worlds interpretation (MWI) is one of several mainstream interpretations of quantum mechanics. In brief, one aspect of quantum mechanics is that certain observations cannot be predicted absolutely. Instead, there is a range of possible observations, each with a different probability. According to the MWI, each of these possible observations corresponds to a different universe.
Suppose a six-sided die is thrown and that the numeric result of the throw corresponds to a quantum-mechanical observable. All six possible ways the die can fall correspond to six different universes.

Tegmark argues that a level III multiverse does not contain more possibilities in the Hubble volume than a level I-II multiverse. In effect, all the different "worlds" created by "splits" in a level III multiverse with the same physical constants can be found in some Hubble volume in a level I multiverse. Tegmark writes that "The only difference between Level I and Level III is where your doppelgängers reside. In Level I they live elsewhere in good old three-dimensional space. In Level III they live on another quantum branch in infinite-dimensional Hilbert space." Similarly, all level II bubble universes with different physical constants can in effect be found as "worlds" created by "splits" at the moment of spontaneous symmetry breaking in a level III multiverse.[25]

Related to the many-worlds idea are Richard Feynman's multiple histories interpretation and H. Dieter Zeh's many-minds interpretation.
Level IV: Ultimate ensemble
The ultimate ensemble or mathematical universe hypothesis is the hypothesis of Tegmark himself.[27]
This level considers equally real all universes that can be described by different mathematical structures. Tegmark writes that "abstract mathematics is so general that any Theory Of Everything (TOE) that is definable in purely formal terms (independent of vague human terminology) is also a mathematical structure. For instance, a TOE involving a set of different types of entities (denoted by words, say) and relations between them (denoted by additional words) is nothing but what mathematicians call a set-theoretical model, and one can generally find a formal system that it is a model of." He argues this "implies that any conceivable parallel universe theory can be described at Level IV" and "subsumes all other ensembles, therefore brings closure to the hierarchy of multiverses, and there cannot be say a Level V."[7]

Jürgen Schmidhuber, however, says the "set of mathematical structures" is not even well-defined, and admits only universe representations describable by constructive mathematics, that is, computer programs. He explicitly includes universe representations describable by non-halting programs whose output bits converge after finite time, although the convergence time itself may not be predictable by a halting program, due to Kurt Gödel's limitations.[28][29][30] He also explicitly discusses the more restricted ensemble of quickly computable universes.[31]

Brian Greene's nine types

American theoretical physicist and string theorist Brian Greene discussed nine types of parallel universes:[32]
Quilted
The quilted multiverse works only in an infinite universe. With an infinite amount of space, every possible event will occur an infinite number of times. However, the speed of light prevents us from being aware of these other identical areas.
Inflationary
The inflationary multiverse is composed of various pockets where inflation fields collapse and form new universes.
Brane
The brane multiverse follows from M-theory and states that each universe is a 3-dimensional brane that exists with many others. Particles are bound to their respective branes except for gravity.
Cyclic
The cyclic multiverse has multiple branes (each a universe) that collided, causing Big Bangs. The universes bounce back and pass through time, until they are pulled back together and again collide, destroying the old contents and creating them anew.
Landscape
The landscape multiverse relies on string theory's Calabi–Yau shapes. Quantum fluctuations drop the shapes to a lower energy level, creating a pocket with a different set of laws from the surrounding space.
Quantum
The quantum multiverse creates a new universe when a diversion in events occurs, as in the many-worlds interpretation of quantum mechanics.
Holographic
The holographic multiverse is derived from the theory that the surface area of a space can simulate the volume of the region.
Simulated
The simulated multiverse exists on complex computer systems that simulate entire universes.
Ultimate
The ultimate multiverse contains every mathematically possible universe under different laws of physics.

Cyclic theories

In several theories there is a series of infinite, self-sustaining cycles (for example, an eternity of Big Bang–Big Crunch cycles).

M-theory

A multiverse of a somewhat different kind has been envisaged within string theory and its higher-dimensional extension, M-theory.[33] These theories require the presence of 10 or 11 spacetime dimensions respectively. The extra 6 or 7 dimensions may either be compactified on a very small scale, or our universe may simply be localized on a dynamical (3+1)-dimensional object, a D-brane. This opens up the possibility that there are other branes which could support "other universes".[34][35] This is unlike the universes in the "quantum multiverse", but both concepts can operate at the same time.[citation needed]
Some scenarios postulate that our big bang was created, along with our universe, by the collision of two branes.[34][35]

Black-hole cosmology

A black-hole cosmology is a cosmological model in which the observable universe is the interior of a black hole existing as one of possibly many inside a larger universe.

Anthropic principle

The concept of other universes has been proposed to explain how our Universe appears to be fine-tuned for conscious life as we experience it. If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), some of these universes, even if very few, would have the combination of laws and fundamental parameters that are suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve. The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developed consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand life) to emerge and evolve, this does not require intelligent design as an explanation for the conditions in the Universe that promote our existence in it.

Search for evidence

Around 2010, scientists such as Stephen M. Feeney analyzed Wilkinson Microwave Anisotropy Probe (WMAP) data and claimed to find preliminary evidence suggesting that our universe collided with other (parallel) universes in the distant past.[36][unreliable source?][37][38][39] However, a more thorough analysis of data from the WMAP and from the Planck satellite, which has a resolution 3 times higher than WMAP, failed to find any statistically significant evidence of such a bubble universe collision.[40][41] In addition, there is no evidence of any gravitational pull of other universes on ours.[42][43]

Criticism

Non-scientific claims

In his 2003 New York Times opinion piece, "A Brief History of the Multiverse", author and cosmologist Paul Davies offers a variety of arguments that multiverse theories are non-scientific:[44]
For a start, how is the existence of the other universes to be tested? To be sure, all cosmologists accept that there are some regions of the universe that lie beyond the reach of our telescopes, but somewhere on the slippery slope between that and the idea that there are an infinite number of universes, credibility reaches a limit. As one slips down that slope, more and more must be accepted on faith, and less and less is open to scientific verification. Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator. The multiverse theory may be dressed up in scientific language, but in essence it requires the same leap of faith.
— Paul Davies, A Brief History of the Multiverse
Taking cosmic inflation as a popular case in point, George Ellis, writing in August 2011, provides a balanced criticism not only of the science but also of what he suggests is the scientific philosophy by which multiverse theories are generally substantiated. He, like most cosmologists, accepts Tegmark's level I "domains", even though they lie far beyond the cosmological horizon. Likewise, the multiverse of cosmic inflation is said to exist very far away, so far away that it is very unlikely any evidence of an early interaction will ever be found. He argues that for many theorists the lack of empirical testability or falsifiability is not a major concern. “Many physicists who talk about the multiverse, especially advocates of the string landscape, do not care much about parallel universes per se. For them, objections to the multiverse as a concept are unimportant. Their theories live or die based on internal consistency and, one hopes, eventual laboratory testing.” Although he believes there is little hope that such testing will ever be possible, he grants that the theories on which the speculation is based are not without scientific merit. He concludes that multiverse theory is a “productive research program”:[45]
As skeptical as I am, I think the contemplation of the multiverse is an excellent opportunity to reflect on the nature of science and on the ultimate nature of existence: why we are here… In looking at this concept, we need an open mind, though not too open. It is a delicate path to tread. Parallel universes may or may not exist; the case is unproved. We are going to have to live with that uncertainty. Nothing is wrong with scientifically based philosophical speculation, which is what multiverse proposals are. But we should name it for what it is.
— George Ellis, Scientific American, Does the Multiverse Really Exist?

Occam's razor

Proponents and critics disagree about how to apply Occam's razor. Critics argue that to postulate a practically infinite number of unobservable universes just to explain our own seems contrary to Occam's razor.[46] In contrast, proponents argue that, in terms of Kolmogorov complexity, the proposed multiverse is simpler than a single idiosyncratic universe.[25]

For example, multiverse proponent Max Tegmark argues:
[A]n entire ensemble is often much simpler than one of its members. This principle can be stated more formally using the notion of algorithmic information content. The algorithmic information content in a number is, roughly speaking, the length of the shortest computer program that will produce that number as output. For example, consider the set of all integers. Which is simpler, the whole set or just one number? Naively, you might think that a single number is simpler, but the entire set can be generated by quite a trivial computer program, whereas a single number can be hugely long. Therefore, the whole set is actually simpler... (Similarly), the higher-level multiverses are simpler. Going from our universe to the Level I multiverse eliminates the need to specify initial conditions, upgrading to Level II eliminates the need to specify physical constants, and the Level IV multiverse eliminates the need to specify anything at all.... A common feature of all four multiverse levels is that the simplest and arguably most elegant theory involves parallel universes by default. To deny the existence of those universes, one needs to complicate the theory by adding experimentally unsupported processes and ad hoc postulates: finite space, wave function collapse and ontological asymmetry. Our judgment therefore comes down to which we find more wasteful and inelegant: many worlds or many words. Perhaps we will gradually get used to the weird ways of our cosmos and find its strangeness to be part of its charm.[25]
— Max Tegmark, "Parallel universes. Not just a staple of science fiction, other universes are a direct implication of cosmological observations." Scientific American 2003 May;288(5):40–51
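To make the algorithmic-information argument above concrete, here is a minimal Python sketch (not from the article; the function name and the 100-digit example are illustrative only) contrasting the tiny program that generates the whole set of non-negative integers with the roughly 100 characters needed to name one arbitrary 100-digit member of that set.

```python
import itertools
import random

def all_integers():
    """A few lines of code whose output enumerates the entire set of non-negative integers."""
    for n in itertools.count():
        yield n

# Specifying one arbitrary 100-digit integer, by contrast, generally takes about
# 100 digits: its shortest description is roughly as long as the number itself.
one_number = random.randrange(10 ** 99, 10 ** 100)

print(len(str(one_number)))   # 100 characters just to name this single member of the set
```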
Princeton cosmologist Paul Steinhardt used the 2014 Annual Edge Question to voice his opposition to multiverse theorizing:
A pervasive idea in fundamental physics and cosmology that should be retired: the notion that we live in a multiverse in which the laws of physics and the properties of the cosmos vary randomly from one patch of space to another. According to this view, the laws and properties within our observable universe cannot be explained or predicted because they are set by chance. Different regions of space too distant to ever be observed have different laws and properties, according to this picture. Over the entire multiverse, there are infinitely many distinct patches. Among these patches, in the words of Alan Guth, "anything that can happen will happen—and it will happen infinitely many times". Hence, I refer to this concept as a Theory of Anything. Any observation or combination of observations is consistent with a Theory of Anything. No observation or combination of observations can disprove it. Proponents seem to revel in the fact that the Theory cannot be falsified. The rest of the scientific community should be up in arms since an unfalsifiable idea lies beyond the bounds of normal science. Yet, except for a few voices, there has been surprising complacency and, in some cases, grudging acceptance of a Theory of Anything as a logical possibility. The scientific journals are full of papers treating the Theory of Anything seriously. What is going on?[20]
— Paul Steinhardt, "Theories of Anything", edge.com
Steinhardt claims that multiverse theories have gained currency mostly because too much has been invested in theories that have failed, e.g. inflation or string theory. He tends to see in them an attempt to redefine the values of science to which he objects even more strongly:
A Theory of Anything is useless because it does not rule out any possibility and worthless because it submits to no do-or-die tests. (Many papers discuss potential observable consequences, but these are only possibilities, not certainties, so the Theory is never really put at risk.)[20]
— Paul Steinhardt, "Theories of Anything", edge.com

Multiverse hypotheses in philosophy and logic

Modal realism

Possible worlds are a way of explaining probability, hypothetical statements and the like, and some philosophers such as David Lewis believe that all possible worlds exist, and are just as real as the actual world (a position known as modal realism).[47]

Trans-world identity

A metaphysical issue that crops up in multiverse schemes positing infinite identical copies of any given universe is whether there can be identical objects in different possible worlds. According to the counterpart theory of David Lewis, the objects should be regarded as similar rather than identical.[48][49]

Fictional realism

Fictional realism is the view that because fictions exist, fictional characters exist as well: there are fictional entities, in the same sense in which, setting aside philosophical disputes, there are people, Mondays, numbers, and planets.[50][51]

Terraforming

From Wikipedia, the free encyclopedia

An artist's conception shows a terraformed Mars in four stages of development.

Terraforming (literally, "Earth-shaping") of a planet, moon, or other body is the theoretical process of deliberately modifying its atmosphere, temperature, surface topography or ecology to be similar to the biosphere of Earth to make it habitable by Earth-like life.

The term "terraforming" is sometimes used more generally as a synonym for planetary engineering, although some consider this more general usage an error.[citation needed] The concept of terraforming developed from both science fiction and actual science. The term was coined by Jack Williamson in a science-fiction story (Collision Orbit) published during 1942 in Astounding Science Fiction,[1] but the concept may pre-date this work.

Based on experiences with Earth, the environment of a planet can be altered deliberately; however, the feasibility of creating an unconstrained planetary biosphere that mimics Earth on another planet has yet to be verified. Mars is usually considered to be the most likely candidate for terraforming. Much study has been done concerning the possibility of heating the planet and altering its atmosphere, and NASA has even hosted debates on the subject. Several potential methods of altering the climate of Mars may fall within humanity's technological capabilities, but at present the economic resources required are far beyond what any government or society is willing to allocate. The long timescales and practicality of terraforming are the subject of debate. Other unanswered questions relate to the ethics, logistics, economics, politics, and methodology of altering the environment of an extraterrestrial world.

Quantum computing

From Wikipedia, the free encyclopedia
The Bloch sphere is a representation of a qubit, the fundamental building block of quantum computers.

Quantum computing studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data.[1] Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980[2] and Richard Feynman in 1982.[3][4] A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.[5]

As of 2014, the development of actual quantum computers is still in its infancy, but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits.[6] Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for civilian, business, trade, gaming and national security purposes, such as cryptanalysis.[7]

Large-scale quantum computers will be able to solve certain problems much more quickly than any classical computer using the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm.[8] Given sufficient computational resources, however, a classical computer could be made to simulate any quantum algorithm, as quantum computation does not violate the Church–Turing thesis.[9]

Basis

A classical computer has a memory made up of bits, where each bit represents either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of these two qubit states; moreover, a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is purely zero or one. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often non-deterministic, in that they provide the correct solution only with a certain known probability.
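As a rough illustration of that workflow (a sketch, not part of the article; the two-qubit register and the Hadamard gate are arbitrary choices), the following Python/NumPy snippet stores an n-qubit state as a vector of 2^n complex amplitudes, applies one quantum logic gate, and ends with a measurement that returns n classical bits.

```python
import numpy as np

n = 2                                  # number of qubits
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                         # controlled initial state |00>

# A single quantum logic gate: a Hadamard on the first qubit, identity on the second.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
gate = np.kron(H, np.eye(2))
state = gate @ state                   # the "algorithm" here is just this one gate

# Measurement collapses the register to one of the 2^n basis states,
# yielding at most n classical bits of information.
probs = np.abs(state) ** 2
outcome = np.random.choice(2 ** n, p=probs)
print(format(outcome, f"0{n}b"))       # prints '00' or '10', each with probability 1/2
```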

An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |↓⟩ and |↑⟩, or |0⟩ and |1⟩). But in fact any system possessing an observable quantity A that is conserved under time evolution and has at least two discrete and sufficiently spaced consecutive eigenvalues is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system.

Bits vs. qubits

A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.
Qubits are made up of controlled particles and the means of control (e.g. devices that trap particles and switch them from one state to another).[10]

For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the 2^3=8 different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A,B,C,D,E,F,G,H (where A = probability computer is in state 000, B = probability computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.
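A minimal sketch of that description (illustrative numbers only): the probabilistic state of a classical three-bit register is just eight non-negative numbers summing to one.

```python
import numpy as np

# Probabilities A, B, ..., H for the eight strings 000, 001, 010, 011, 100, 101, 110, 111.
probabilistic_state = np.array([0.5, 0.0, 0.25, 0.0, 0.0, 0.125, 0.0, 0.125])
assert np.isclose(probabilistic_state.sum(), 1.0)   # the only restriction on the eight numbers

# A deterministic computer is the special case with all weight on a single string.
deterministic_state = np.zeros(8)
deterministic_state[0b011] = 1.0                    # in state 011 with probability 1
```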

The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. Here, however, the coefficients can have complex values, and it is the sum of the squares of the coefficients' magnitudes, |a|^2 + |b|^2 + ... + |h|^2, that must equal 1. The complex coefficients are the probability amplitudes of the corresponding states, and their squared magnitudes give the probabilities of those states. However, because a complex number encodes not just a magnitude but also a direction in the complex plane, the phase difference between any two coefficients (states) represents a meaningful parameter. This is a fundamental difference between quantum computing and probabilistic classical computing.[11]

If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 is |a|^2, the probability of measuring 001 is |b|^2, etc.). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution (|a|^2, |b|^2, ..., |h|^2), and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
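Continuing that example, here is a short sketch (not from the article; the amplitudes are arbitrary) of how a three-qubit ket of complex coefficients yields, on measurement, the classical distribution of squared magnitudes.

```python
import numpy as np

# Eight complex amplitudes (a, b, ..., h); their squared magnitudes must sum to 1.
ket = np.array([1, 1j, 0, 0, 1, -1, 0, 0], dtype=complex)
ket /= np.linalg.norm(ket)                      # enforce |a|^2 + ... + |h|^2 = 1

probabilities = np.abs(ket) ** 2                # probability of 000 is |a|^2, of 001 is |b|^2, ...
outcome = np.random.choice(8, p=probabilities)  # measuring "collapses" the state to one bit string
print(format(outcome, "03b"), probabilities.round(3))
```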

Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, …, 111) is known as the computational basis. Other possible bases include any set of unit-length, mutually orthogonal vectors, such as the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as:
a|000⟩ + b|001⟩ + c|010⟩ + d|011⟩ + e|100⟩ + f|101⟩ + g|110⟩ + h|111⟩
where, e.g., |010⟩ = (0,0,1,0,0,0,0,0).
The computational basis for a single qubit (two dimensions) is |0⟩ = (1,0) and |1⟩ = (0,1).

Using the eigenvectors of the Pauli-x operator, a single qubit is |+⟩ = (1/√2)(1,1) and |−⟩ = (1/√2)(1,−1).
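The same single-qubit state can be expanded in either basis; the sketch below (illustrative amplitudes) converts computational-basis amplitudes into amplitudes over |+⟩ and |−⟩ and checks that normalization is basis-independent.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                 # |0> in the computational basis
ket1 = np.array([0.0, 1.0])                 # |1>
plus = np.array([1.0, 1.0]) / np.sqrt(2)    # |+>, eigenvector of the Pauli-x operator
minus = np.array([1.0, -1.0]) / np.sqrt(2)  # |->

psi = 0.6 * ket0 + 0.8 * ket1               # an arbitrary normalized single-qubit state

# Amplitudes in the Pauli-x eigenbasis are the inner products with |+> and |->.
a_plus, a_minus = plus @ psi, minus @ psi
print(a_plus, a_minus)                      # (0.6 + 0.8)/sqrt(2) and (0.6 - 0.8)/sqrt(2)
assert np.isclose(a_plus ** 2 + a_minus ** 2, 1.0)   # total probability is 1 in any basis
```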

Operation

While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000⟩, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squared magnitudes adds up to one, the Euclidean or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)
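A brief numerical check of those two claims (an illustration, not a prescribed implementation; the particular matrices are arbitrary): a stochastic matrix preserves the L1 norm of a probability vector, while a unitary preserves the L2 norm of an amplitude vector and can be undone by its conjugate transpose.

```python
import numpy as np

# Classical randomized computation: a column-stochastic matrix preserves the L1 norm.
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])                 # each column sums to 1
p = np.array([1.0, 0.0])
print(np.sum(S @ p))                       # still 1.0

# Quantum computation: a unitary (here a Hadamard) preserves the L2 norm and is reversible.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.array([1.0, 0.0], dtype=complex)
out = H @ psi
print(np.linalg.norm(out))                 # still 1.0
print(H.conj().T @ out)                    # rotating backward recovers [1, 0] (up to rounding)
```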

Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased.
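A quick illustration of that repetition argument (the single-run success probability below is hypothetical): if one run gives the correct answer with probability p, then k independent repetitions all fail only with probability (1 - p)^k.

```python
p = 0.6                                    # assumed chance that a single run returns the correct answer
for k in (1, 5, 10, 20):
    print(k, 1 - (1 - p) ** k)             # probability that at least one of k runs is correct
# 1 run: 0.6   5 runs: ~0.990   10 runs: ~0.9999   20 runs: ~0.99999999
```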

Potential

Integer factorization is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes).[12] By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. Most of the popular public-key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie–Hellman, and elliptic-curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
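For a sense of why this works, here is a small classical sketch of the reduction that Shor's algorithm relies on (toy values; the quantum part of the algorithm only replaces the brute-force order finding with an efficient procedure).

```python
from math import gcd

N, a = 15, 7                      # a toy composite and a base coprime to it

# Find the order r of a modulo N: the smallest r > 0 with a**r % N == 1.
# (This brute-force loop is the step a quantum computer performs efficiently.)
r = 1
while pow(a, r, N) != 1:
    r += 1

# For even r with a**(r/2) != -1 (mod N), gcd(a**(r/2) +/- 1, N) yields the factors.
x = pow(a, r // 2, N)
print(gcd(x - 1, N), gcd(x + 1, N))   # -> 3 5, the prime factors of 15
```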

However, other cryptographic algorithms do not appear to be broken by these algorithms.[13][14] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[13][15] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem.[16] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case,[17] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography.

Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems,[18] including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.

Consider a problem that has these four properties:
  1. The only way to solve it is to guess answers repeatedly and check them,
  2. The number of possible answers to check is the same as the number of inputs,
  3. Every possible answer takes the same amount of time to check, and
  4. There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
An example of this is a password cracker that attempts to guess the password for an encrypted file (assuming that the password has a maximum possible length).

For problems with all four properties, the time for a quantum computer to solve this will be proportional to the square root of the number of inputs. It can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.[19]
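As a sketch of that square-root behavior (illustrative only; the marked index is arbitrary and the whole state vector is simulated classically), the following runs Grover's algorithm on an unstructured search over N = 2^n items and finds the marked one with high probability after roughly √N iterations.

```python
import numpy as np

n = 3                       # qubits
N = 2 ** n                  # size of the search space
marked = 5                  # index of the item the oracle recognizes (arbitrary choice)

state = np.full(N, 1 / np.sqrt(N))                    # uniform superposition over all N items

oracle = np.eye(N)
oracle[marked, marked] = -1                           # flip the phase of the marked item

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)    # "inversion about the mean"

for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):   # ~sqrt(N) Grover iterations
    state = diffusion @ (oracle @ state)

print(np.abs(state[marked]) ** 2)   # probability of measuring the marked item (~0.94 here)
```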

Grover's algorithm can also be used to obtain a quadratic speed-up over a brute-force search for a class of problems known as NP-complete.

Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.[20] Quantum simulation could also be used to study the behavior of atoms and particles at unusual conditions such as the reactions inside a collider, but in a virtual environment rather than actually making these conditions.[21]

There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:[22]
  • scalable physically to increase the number of qubits;
  • qubits can be initialized to arbitrary values;
  • quantum gates faster than decoherence time;
  • universal gate set;
  • qubits can be read easily.

Quantum decoherence

One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background nuclear spin of the physical system used to implement the qubits.
Decoherence is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.[11]

These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.

If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often-cited figure for the required error rate in each gate is 10^-4. This implies that each gate must be able to perform its task in one 10,000th of the decoherence time of the system.

Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 qubits without error correction.[23] With error correction, the figure would rise to about 10^7 qubits. Note that the computation takes about L^2, or roughly 10^7, steps; at a 1 MHz gate rate, that is about 10 seconds.
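A back-of-the-envelope check of the final estimate, using only the figures quoted above:

```python
steps = 10 ** 7           # quoted number of steps to factor a 1000-bit number
clock_hz = 10 ** 6        # a 1 MHz gate rate
print(steps / clock_hz)   # -> 10.0 seconds, the "about 10 seconds" stated above
```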

A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.[24][25]

Developments

There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are the quantum gate array, the one-way quantum computer, adiabatic quantum computation, and the topological quantum computer.
The quantum Turing machine is theoretically important, but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent; each can simulate the others with no more than polynomial overhead.

For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits (for example, superconducting circuits, trapped ions, nuclear magnetic resonance, quantum dots, and spins in silicon or diamond).
The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy; it also shows that there is a vast amount of flexibility.

Timeline

In 2001, researchers demonstrated Shor's algorithm to factor 15 using a 7-qubit NMR computer.[40]

In 2005, researchers at the University of Michigan built a semiconductor chip ion trap. Such devices, made using standard lithography, may point the way to scalable quantum computing.[41]

In 2009, researchers at Yale University created the first solid-state quantum processor. The two-qubit superconducting chip had artificial atom qubits made of a billion aluminum atoms that acted like a single atom that could occupy two states.[42][43]

A team at the University of Bristol also created a silicon chip based on quantum optics, able to run Shor's algorithm.[44] Further developments were made in 2010.[45] Springer publishes a journal (Quantum Information Processing) devoted to the subject.[46]

In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation. They successfully transferred a complex set of quantum data with full transmission integrity, without affecting the qubits' superpositions.[47][48]
Photograph of a chip constructed by D-Wave Systems Inc., mounted and wire-bonded in a sample holder. The D-Wave processor is designed to use 128 superconducting logic elements that exhibit controllable and tunable coupling to perform operations.

In 2011, D-Wave Systems announced the first commercial quantum annealer, the D-Wave One, claiming a 128-qubit processor.[49] On May 25, 2011, Lockheed Martin agreed to purchase a D-Wave One system.[50] Lockheed and the University of Southern California (USC) will house the D-Wave One at the newly formed USC Lockheed Martin Quantum Computing Center.[51] D-Wave's engineers designed the chips with an empirical approach, focusing on solving particular problems. Investors liked this approach more than academics did, some of whom said D-Wave had not demonstrated that it really had a quantum computer. Criticism softened after a D-Wave paper in Nature showed that the chips have some quantum properties.[52][53] Experts remain skeptical of D-Wave's claims, and two published papers have concluded that the D-Wave machine operates classically, not via quantum computing.[54][55]

During the same year, researchers at the University of Bristol created an all-bulk optics system that ran a version of Shor's algorithm to successfully factor 21.[56]

In September 2011, researchers proved quantum computers can be made with a von Neumann architecture (separation of RAM).[57]

In November 2011 researchers factorized 143 using 4 qubits.[58]

In February 2012 IBM scientists said that they had made several breakthroughs in quantum computing with superconducting integrated circuits.[59]

In April 2012, a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara constructed a two-qubit quantum computer on a doped diamond crystal that can easily be scaled up and is functional at room temperature. The two logical qubits were encoded in the direction of an electron spin and of a nitrogen nuclear spin, controlled with microwave pulses. This computer ran Grover's algorithm, generating the right answer on the first try in 95% of cases.[60]

In September 2012, Australian researchers at the University of New South Wales said the world's first quantum computer was just 5 to 10 years away, after announcing a global breakthrough enabling the manufacture of its memory building blocks. A research team led by Australian engineers created the first working qubit based on a single atom in silicon, using the same technological platform that forms the building blocks of modern-day computers.[61][62]

In October 2012, Nobel Prizes were presented to David J. Wineland and Serge Haroche for their basic work on understanding the quantum world, which may help make quantum computing possible.[63][64]
In November 2012, the first quantum teleportation from one macroscopic object to another was reported.[65][66]

In December 2012, the first dedicated quantum computing software company, 1QBit, was founded in Vancouver, BC.[67] 1QBit is the first company to focus exclusively on commercializing software applications for commercially available quantum computers, including the D-Wave Two. 1QBit's research demonstrated the ability of superconducting quantum annealing processors to solve real-world problems.[68]

In February 2013, a new technique, boson sampling, was reported by two groups using photons in an optical lattice; it is not a universal quantum computer but may be good enough for practical problems (Science, Feb 15, 2013).

In May 2013, Google announced that it was launching the Quantum Artificial Intelligence Lab, hosted by NASA's Ames Research Center, with a 512-qubit D-Wave quantum computer. The USRA (Universities Space Research Association) will invite researchers to share time on it with the goal of studying quantum computing for machine learning.[69]

In early 2014 it was reported, based on documents provided by former NSA contractor Edward Snowden, that the U.S. National Security Agency (NSA) is running a $79.7 million research program (titled "Penetrating Hard Targets") to develop a quantum computer capable of breaking vulnerable encryption.[70]

In 2014, a group of researchers from ETH Zürich, USC, Google and Microsoft reported a definition of quantum speedup, and were not able to measure quantum speedup with the D-Wave Two device, but did not explicitly rule it out.[71][72]

In 2014, researchers at the University of New South Wales used silicon as a protective shell around qubits, making them more accurate, increasing the length of time they hold information, and possibly making quantum computers easier to build.[73]

Relation to computational complexity theory

The suspected relationship of BQP to other problem spaces.[74]

The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half.[75] A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.

BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P),[76] which is a subclass of PSPACE.

BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.[76]

The capacity of a quantum computer to accelerate classical algorithms has rigid limits, in the form of upper bounds on the complexity of quantum computation. The overwhelming majority of classical calculations cannot be accelerated on a quantum computer.[77] The same holds for particular computational tasks, such as the search problem, for which Grover's algorithm is optimal.[78]

Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis.[79] It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e., there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.[80]

Limiting Tar Sands, Coal, Arctic Oil Is Key to 2°C Goal


The most efficient way of meeting the world’s most prominent climate goal would involve ending all Arctic oil drilling plans, drastically curbing tar-sands oil mining in Canada, and leaving most of the world’s remaining coal reserves in the ground.

That’s according to a new study that aimed to determine how best to limit the burning of different fuels to meet the international goal of holding global warming to less than 2°C, or 3.6°F. The goal was established during United Nations climate negotiations, and meeting it would require leaving two-thirds of remaining fossil fuel reserves unburned.

A pair of British researchers used a model to compare the amount of energy stored in remaining coal, oil and gas deposits with regional energy demand and the amount of energy and pollution that’s produced when different fuels are burned. That revealed which reserves could be most efficiently exploited through 2050 without warming the planet by more than 2°C. That, in turn, revealed which fossil fuel reserves could be considered untouchable.
Tar-sands oil mining in Canada.
Credit: Gord McKenna/Flickr


Not every country would need to make the same sacrifices under the climate action strategy favored by the computer model. The U.S., for example, would be free to exploit most of its oil and natural gas reserves, but it would need to leave a whole lot of coal in the ground — a feature of an idealized potential climate-soothing strategy that would be sure to raise questions of fairness and equity.

“Although the total unburnable fossil fuel was already known, the regional information has not been provided with this level of detail before,” Corinne Le Quéré, director of the University of East Anglia’s Tyndall Centre for Climate Change Research, said. She was not involved with the study, which was published Wednesday in Nature. “Most governments in rich countries have said they support some form of climate mitigation to keep to 2 degrees. I guess the natural question is, ‘Whose fossil fuel reserve should then go unexploited?’”

The study concluded that drilling in Arctic oilfields and increasing unconventional production of heavy oil in Canada and Venezuela would be “incommensurate with efforts” to limit global warming to 2°C.
Curbing warming to less than 2°C is “certainly technically feasible,” Christophe McGlade, a University College London energy modeler who helped produce the study, said. “Whether the political will is strong enough to ensure that 2°C is met is probably more doubtful.”

Coal is globally shunned by the results of the modeling, with 88 percent of the nearly 1,000 gigatonnes of remaining global reserves unable to be burned under the most efficient scenario developed — or 82 percent if carbon capture and storage (CCS) at power plants comes into use. That’s because coal releases far more greenhouse gas pollution per joule of energy produced than does the burning of oil or gas.
A coal delivery in China.
Credit: JohnShaftFr/Flickr


China and India are both relying heavily on coal generation to underpin fast economic growth, and those countries are home to about a quarter of the world’s coal reserves. Both countries are also installing huge wind and solar farms to ease their energy shortfalls and help reduce air pollution. According to the new analysis, those two emerging economies could use between a third and a quarter of their coal reserves. The U.S., by contrast, where energy demand is more efficiently met with other sources of energy, would basically need to stop mining coal altogether, abandoning more than 90 percent of what’s left in the ground.

Globally, about half of known natural gas could be drilled or fracked and burned, and two-thirds of oil reserves could be exploited, but with significant variation between regions. The U.S. and Europe could use almost all of their natural gas reserves, for example, while Russia and its neighbors would need to leave at least half of their substantial gas reserves in the ground. When it comes to oil, the U.S. and Europe could continue to drill most of their black gold, but Canada would be asked to exploit just a quarter of its larger and more climate-harming reserves, the modeling found.

Unanswered by the study, however, is how best to convince governments to leave fossil fuel assets stranded — let alone convince them to leave more stranded than would be the case for other governments. Some kind of agreement on this could potentially be reached through the ongoing UN climate talks, perhaps through a markets-based approach that helps share economic spoils and sacrifices equitably among nations.
Columns on the left reveal fossil fuel reserves that the model says should remain in the ground, and the percentage columns reveal how much of known reserves would need to be left in the ground to efficiently keep warming under 2°C. FSU refers to former Soviet Union countries. CSA means Central and South America. ODA refers to other developing Asian countries.
Credit: Nature

"Stranded assets — the need for keeping resources in the ground — show the importance of a strong climate commitment on the part of the major emitters,” Environmental Defense Fund economist Gernot Wagner said. “The trajectory for climate policy is clear, and it'll mean that more and more coal, oil and gas will be expected to stay underground."

The table above shows the scenario that the model revealed would be most efficient if CCS becomes used. The non-CCS scenario is similar, but it allows slightly less use of all three fossil fuels.

The findings “should have eye-popping impact on capital flows — away from what should be stranded assets world- and company-wide,” Cornell University engineering professor Anthony Ingraffea, an expert in computer modeling, said. He praised the robustness of the study, saying it considered “many scenarios,” with all of them “leading to similar conclusions.”

Water on terrestrial planets of the Solar System

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Water_on_terrestrial_planets_of_the_Solar_S...