
Thursday, October 24, 2019

Hypercomputation

From Wikipedia, the free encyclopedia
 
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

The Church–Turing thesis states that any "computable" function that can be computed by a mathematician with pen and paper using a finite set of simple algorithms can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense.

Technically the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of useful, rather than random, uncomputable functions.
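
To make the notion of Turing-uncomputability concrete, here is a minimal sketch (added for illustration; it is not part of the article) of the classical diagonal argument against a programmable halting decider. The function halts below is hypothetical; the point is precisely that no ordinary program can implement it.

    # Sketch of the diagonal argument: assume a computable halting decider existed.
    def halts(program_source: str, program_input: str) -> bool:
        """Hypothetical oracle: True iff running program_source on program_input halts.
        A hypercomputer is, by definition, allowed such a black box; a Turing machine is not."""
        raise NotImplementedError("uncomputable")

    def paradox(program_source: str) -> None:
        # Loop forever exactly when the given program, fed its own text, would halt.
        if halts(program_source, program_source):
            while True:
                pass
        # ...and halt immediately otherwise.

    # Feeding paradox its own source text contradicts whatever answer halts gives,
    # so halts cannot be realized by any Turing machine.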

History

A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals. This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are mathematical abstractions, and are not physically realizable.

State space

In a sense, most functions are uncomputable: there are only countably many (ℵ₀) computable functions, but there are an uncountable number (2^ℵ₀) of possible Super-Turing functions.
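
A standard counting argument (added here for context; it is not spelled out in the article) makes this precise: every Turing machine is a finite string over a finite alphabet, so there are only countably many of them, while already the 0–1 valued functions on the naturals form an uncountable set.

    \[
      |\{\text{Turing machines}\}| = \aleph_0
      \;<\;
      2^{\aleph_0} = |\{\, f : \mathbb{N} \to \{0,1\} \,\}| .
    \]

Hence all but countably many functions admit no Turing machine that computes them; a device computing any of the remaining functions would, by definition, be a hypercomputer.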

Hypercomputer models

Hypercomputer models range from useful but probably unrealizable (such as Turing's original oracle machines), to less-useful random-function generators that are more plausibly "realizable" (such as a random Turing machine).

Hypercomputers with uncomputable inputs or black-box components

A system granted knowledge of the uncomputable, oracular Chaitin's constant (a number with an infinite sequence of digits that encode the solution to the halting problem) as an input can solve a large number of useful undecidable problems; a system granted an uncomputable random-number generator as an input can create random uncomputable functions, but is generally not believed to be able to meaningfully solve "useful" uncomputable functions such as the halting problem. There are an unlimited number of different types of conceivable hypercomputers, including:
  • Turing's original oracle machines, defined by Turing in 1939.
  • A real computer (a sort of idealized analog computer) can perform hypercomputation if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for useful (rather than random) computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would require the ability to measure the real-valued physical value to arbitrary precision.
    • Similarly, a neural net that somehow had Chaitin's constant exactly embedded in its weight function would be able to solve the halting problem, though constructing such an infinitely precise neural net, even if you somehow know Chaitin's constant beforehand, is impossible under the laws of quantum mechanics.
  • Certain fuzzy logic-based "fuzzy Turing machines" can, by definition, accidentally solve the halting problem, but only because their ability to solve the halting problem is indirectly assumed in the specification of the machine; this tends to be viewed as a "bug" in the original specification of the machines.
    • Similarly, a proposed model known as fair nondeterminism can accidentally allow the oracular computation of noncomputable functions, because some such systems, by definition, have the oracular ability to identify and reject inputs that would "unfairly" cause a subsystem to run forever.
  • Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic. These models require an uncomputable input, such as a physical event-generating process where the interval between events grows at an uncomputably large rate.
    • Similarly, one unorthodox interpretation of a model of unbounded nondeterminism posits, by definition, that the length of time required for an "Actor" to settle is fundamentally unknowable, and therefore it cannot be proven, within the model, that it does not take an uncomputably long period of time.
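
The common shape of the models listed above is an ordinary, Turing-computable control program wired to an uncomputable black box. The following Python sketch (illustrative only; the oracle stands for whatever uncomputable input a given model assumes) shows how little of the machine itself is exotic.

    from typing import Callable

    class OracleMachine:
        """A classical control program plus an injected black-box oracle.
        All of the model's extra power lives in the oracle, none in the code."""

        def __init__(self, oracle: Callable[[str], bool]):
            self.oracle = oracle  # e.g. a hypothetical halting oracle, or digits of Chaitin's constant

        def decide(self, query: str) -> bool:
            # Classical pre- and post-processing could go here; the only
            # non-Turing step is the call into the black box.
            return self.oracle(query)

    # With a computable oracle this is just a Turing machine with a subroutine;
    # with an uncomputable oracle it is a hypercomputer on paper only.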

"Infinite computational steps" models

In order to work correctly, certain computations by the machines below literally require infinite, rather than merely unlimited but finite, physical space and resources; in contrast, with a Turing machine, any given computation that halts will require only finite physical space and resources.
  • A Turing machine that can complete infinitely many steps in finite time, a feat known as a supertask. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine (inspired by Zeno's paradox). The Zeno machine performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, etc. By summing 1+½+¼+... (a geometric series) we see that the machine performs infinitely many steps in a total of 2 minutes (see the sketch after this list). According to Shagrir, Zeno machines introduce physical paradoxes, and their state is logically undefined outside the half-open interval [0, 2), hence undefined exactly at 2 minutes after the beginning of the computation.
  • It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) makes hypercomputation possible by itself. However, this is not so since a CTC does not provide (by itself) the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation. According to a 1992 paper, a computer operating in a Malament–Hogarth spacetime or in orbit around a rotating black hole could theoretically perform non-Turing computations for an observer inside the black hole. Access to a CTC may allow the rapid solution to PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable.
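
A small numerical sketch (added for illustration) of the Zeno-machine schedule referenced in the list above: the step durations form a geometric series, so the time by which the first n steps are finished approaches, but never reaches, 2 minutes for any finite n.

    # Elapsed time (in minutes) after the Zeno machine's first n steps:
    # 1 + 1/2 + 1/4 + ... + 1/2**(n-1) = 2 - 2**(1 - n).
    def zeno_elapsed(n: int) -> float:
        return sum(1.0 / 2**k for k in range(n))

    for n in (1, 2, 4, 10, 30):
        print(n, zeno_elapsed(n))  # tends to 2.0; the state "at 2 minutes" is the undefined part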

Quantum models

Some scholars conjecture that a quantum mechanical system which somehow uses an infinite superposition of states could compute a non-computable function. This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).

"Eventually correct" systems

Some physically-realizable systems will always eventually converge to the correct answer, but have the defect that they will often output an incorrect answer and stick with the incorrect answer for an uncomputably large period of time before eventually going back and correcting the mistake.
  • In the mid-1960s, E. Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals" and "trial-and-error predicates", respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit"; whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify it as correct if it is recursive; otherwise, the correctness is established only by running the machine forever and noting that it never revises its answer (a minimal sketch of such a trial-and-error procedure appears after this list). Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)" L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem" studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines."
  • A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. Traditional Turing machines cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges; that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program, otherwise the halting problem could be solved. Schmidhuber uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can eventually converge to a correct solution of the halting problem by evaluating a Specker sequence.
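
The flavour of these "eventually correct" procedures can be shown with a short sketch (illustrative; step and state0 are hypothetical stand-ins for a machine simulator). The guess about halting starts at False and is revised at most once, so it stabilizes on the correct answer, yet no finite prefix of the output certifies that the final revision has already happened.

    from itertools import count

    def halting_guesses(step, state0):
        """Yield an infinite sequence of guesses about whether a simulated machine halts.
        step(state) -> (next_state, halted) advances the simulation by one step."""
        state, halted = state0, False
        for _ in count():
            if not halted:
                state, halted = step(state)
            yield halted  # revised at most once, from False to True

    # In the limit the guesses are correct (the halting set is limit-decidable),
    # but a finite observer can never be sure that no further revision is coming.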

Analysis of capabilities

Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetic hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing Σ⁰₁ or Π⁰₁. Limiting-recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be Δ⁰₂. Gold further showed that limiting partial recursion would allow the computation of precisely the Σ⁰₂ predicates.

Model | Computable predicates | Notes
supertasking | tt(Σ⁰₁, Π⁰₁) | dependent on outside observer
limiting/trial-and-error | Δ⁰₂ |
iterated limiting (k times) | Δ⁰ₖ₊₁ |
Blum–Shub–Smale machine | — | incomparable with traditional computable real functions
Malament–Hogarth spacetime | HYP | dependent on spacetime structure
analog recurrent neural network | Δ⁰₁[f] | f is an advice function giving connection weights; size is bounded by runtime
infinite time Turing machine | AQI | arithmetical quasi-inductive sets
classical fuzzy Turing machine | Σ⁰₁ ∪ Π⁰₁ | for any computable t-norm
increasing function oracle | Δ¹₁ | for the one-sequence model; the Π¹₁ sets are r.e.

Criticism

Martin Davis, in his writings on hypercomputation, refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claims that this is a new field founded in the 1990s. This point of view relies on the history of computability theory (degrees of unsolvability, computability over functions, real numbers and ordinals), as also mentioned above. In his argument he remarks that all of hypercomputation is little more than: "if non-computable inputs are permitted, then non-computable outputs are attainable."

Church–Turing thesis

From Wikipedia, the free encyclopedia
 
In computability theory, the Church–Turing thesis (also known as computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a hypothesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:
  • In 1933, Kurt Gödel, with Jacques Herbrand, created a formal definition of a class called general recursive functions. The class of general recursive functions is the smallest class of functions (possibly with more than one argument) which includes all constant functions, projections, the successor function, and which is closed under function composition, recursion, and minimization.
  • In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus.
  • Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers.
Church and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief.
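
As a small, concrete taste of the λ-calculus encoding mentioned above (added for illustration; this is ordinary Python rather than a formal λ-calculus), Church numerals represent n as the function that applies its argument n times, and arithmetic becomes function composition.

    # Church numerals written with plain Python lambdas.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    to_int = lambda n: n(lambda k: k + 1)(0)  # decode back to an ordinary integer

    three = succ(succ(succ(zero)))
    print(to_int(three))              # 3
    print(to_int(add(three)(three)))  # 6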

On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Since, as an informal notion, the concept of effective calculability does not have a formal definition, the thesis, although it has near-universal acceptance, cannot be formally proven.

Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (physical Church-Turing thesis) and what can be efficiently computed (Church–Turing thesis (complexity theory)). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind.

Statement in Church's and Turing's words

J. B. Rosser (1939) addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in a sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result".

In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same:
We shall use the expression "computable function" to mean a function calculable by a machine, and let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions.
The thesis can be stated as: Every effectively calculable function is a computable function.

Turing stated it this way:
It was stated ... that "a function is effectively calculable if its values can be found by some purely mechanical process". We may take this literally, understanding by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability with effective calculability. [The footnote referred to is the one quoted above.]

History

One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day. Was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")?

Circa 1930–1952

In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 1934–35), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported that:
His [Gödel's] only idea at the time was that it might be possible, in terms of effective calculability as an undefined notion, to state a set of axioms which would embody the generally accepted properties of this notion, and to do something on that basis.
But Gödel offered no further guidance. Eventually, he would suggest his own notion of recursion, modified by Herbrand's suggestion, which he had detailed in his 1934 lectures at Princeton, NJ (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically".

Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. Equipped with the λ-calculus and "general" recursion, Stephen Kleene, with the help of Church and J. Barkley Rosser, produced proofs (1933, 1935) to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well-formed formula has a "normal form".

Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–64 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system".

A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion, stating:
Actually the work already done by Church and others carries this identification considerably beyond the working hypothesis stage. But to mask this identification under a definition… blinds us to the need of its continual verification.
Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". This idea was "sharply" criticized by Church.

Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–35 that the thesis might be expressed as an axiom or set of axioms.

Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–37 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). And in a proof-sketch added as an "Appendix" to his 1936–37 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately".

In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their statements as theses.

Rosser (1939) formally identified the three notions-as-definitions:
All three definitions are equivalent, so it does not matter which one is used.
Kleene proposes Church's Thesis: This left the overt expression of a "thesis" to Kleene. In his 1943 paper Recursive Predicates and Quantifiers Kleene proposed his "THESIS I":
This heuristic fact [general recursive functions are effectively calculable] ... led Church to state the following thesis. The same thesis is implicit in Turing's description of computing machines.
THESIS I. Every effectively calculable function (effectively decidable predicate) is general recursive [Kleene's italics]
Since a precise mathematical definition of the term effectively calculable (effectively decidable) has been wanting, we can take this thesis ... as a definition of it ...
Kleene goes on to note that:
the thesis has the character of an hypothesis—a point emphasized by Post and by Church. If we consider the thesis and its converse as definition, then the hypothesis is an hypothesis about the application of the mathematical theory developed from the definition. For the acceptance of the hypothesis, there are, as we have suggested, quite compelling grounds.
Kleene's Church–Turing Thesis: A few years later (1952), Kleene, who had switched from presenting his work in the mathematical terminology of the lambda calculus of his PhD advisor Alonzo Church to the theory of general recursive functions of his other teacher Kurt Gödel, would overtly name the Church–Turing thesis in his correction of Turing's paper "The Word Problem in Semi-Groups with Cancellation", defend it, and express the two "theses" and then "identify" them (show their equivalence) by use of his Theorem XXX:
Heuristic evidence and other considerations led Church 1936 to propose the following thesis.
Thesis I. Every effectively calculable function (effectively decidable predicate) is general recursive.
Theorem XXX: The following classes of partial functions are coextensive, i.e. have the same members: (a) the partial recursive functions, (b) the computable functions ...
Turing's thesis: Turing's thesis that every function which would naturally be regarded as computable is computable under his definition, i.e. by one of his machines, is equivalent to Church's thesis by Theorem XXX.

Later developments

An attempt to understand the notion of "effective computability" better led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's Game of Life), parallelism, and crystalline automata led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy". His most important fourth, "the principle of causality", is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance". From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable."

In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—"a human computing agent who proceeds mechanically". These constraints reduce to:
  • "(B.1) (Boundedness) There is a fixed bound on the number of symbolic configurations a computor can immediately recognize.
  • "(B.2) (Boundedness) There is a fixed bound on the number of internal states a computor can be in.
  • "(L.1) (Locality) A computor can change only elements of an observed symbolic configuration.
  • "(L.2) (Locality) A computor can shift attention from one symbolic configuration to another one, but the new observed configurations must be within a bounded distance of the immediately previously observed configuration.
  • "(D) (Determinacy) The immediately recognizable (sub-)configuration determines uniquely the next computation step (and id [instantaneous description])"; stated another way: "A computor's internal state together with the observed configuration fixes uniquely the next computation step and the next internal state."
The matter remains in active discussion within the academic community.

The thesis as a definition

The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function.

Success of the thesis

Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Stephen Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function."

All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1":
It may also be shown that a function which is computable ['reckonable'] in one of the systems Si, or even in a system of transfinite type, is already computable [reckonable] in S1. Thus the concept 'computable' ['reckonable'] is in a certain definite sense 'absolute', while practically all other familiar metamathematical concepts (e.g. provable, definable, etc.) depend quite essentially on the system to which they are defined ...

Informal usage in proofs

Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive). 

Dirk van Dalen gives the following example for the sake of illustrating this informal use of the Church–Turing thesis:
EXAMPLE: Each infinite RE set contains an infinite recursive set.
Proof: Let A be infinite RE. We list the elements of A effectively, n0, n1, n2, n3, ...
From this list we extract an increasing sublist: put m0 = n0; after finitely many steps we find an nk such that nk > m0, and put m1 = nk. We repeat this procedure to find m2 > m1, etc.; this yields an effective listing of the subset B = {m0, m1, m2, ...} of A, with the property mi < mi+1.
Claim. B is decidable. For, in order to test k in B we must check whether k = mi for some i. Since the sequence of mi's is increasing, we have to produce at most k+1 elements of the list and compare them with k. If none of them is equal to k, then k is not in B. Since this test is effective, B is decidable and, by Church's thesis, recursive.
In order to make the above example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive.
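
For concreteness, here is a sketch in Python of the effective procedure van Dalen describes (added for illustration; enumerate_A is a hypothetical generator function that effectively lists the infinite r.e. set A, and the two procedures below follow the informal steps of the proof).

    def increasing_sublist(enumerate_A):
        """Effectively list B = {m0 < m1 < m2 < ...}, an infinite subset of A."""
        it = enumerate_A()
        m = next(it)      # m0 = n0
        yield m
        for n in it:      # after finitely many steps some n exceeds the last m
            if n > m:
                m = n
                yield m

    def in_B(k, enumerate_A):
        """Decide membership of k in B: the mi are strictly increasing, so at most
        k + 1 of them need to be produced before one equals or exceeds k."""
        for m in increasing_sublist(enumerate_A):
            if m == k:
                return True
            if m > k:
                return False

The informal appeal to Church's thesis is precisely what lets one stop here instead of compiling such a procedure down to an explicit Turing machine.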

Variations

The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable."

The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved for instance that a (multi-tape) universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine.

A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated. This is called the feasibility thesis, also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states: "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: "'Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space." The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine.

If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: "A quantum Turing machine can efficiently simulate any realistic model of computation." 

Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating "the broader assertion that algorithms precisely capture what can be computed is invalid". They claim that forms of computation not captured by the thesis are relevant today, forms which they call super-Turing computation.

Philosophical implications

Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings:
  1. The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics.
  2. The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category.
  3. The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation.
There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept. 

Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory.

Non-computable functions

One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method. 
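
A short sketch (added for illustration, with hypothetical helpers) of the reduction behind that claim, phrased for the closely related step-count variant of busy beaver (the maximum number of steps any halting n-state machine performs): any computable upper bound on that quantity would let us decide halting by simulating for that many steps, so no such computable bound can exist.

    def decides_halting(machine, n_states, bound, simulate):
        """Hypothetical halting decider built from an assumed computable bound.
        bound(n): claimed upper bound on the steps of any halting n-state machine.
        simulate(machine, max_steps): runs the machine and returns True iff it
        halted within max_steps steps."""
        return simulate(machine, bound(n_states))

    # If the machine is still running after bound(n_states) steps, it never halts.
    # Since the halting problem is undecidable, no computable bound exists; the
    # busy beaver quantities grow faster than every computable function.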

Several computational models allow for the computation of (Church-Turing) non-computable functions. These are known as hypercomputers. Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.

Gravitational lens

From Wikipedia, the free encyclopedia
 
A light source passes behind a gravitational lens (point mass placed in the center of the image). The aqua circle is the light source as it would be seen if there was no lens, white spots are the multiple images (or Einstein ring) of the source.
 
A gravitational lens is a distribution of matter (such as a cluster of galaxies) between a distant light source and an observer, that is capable of bending the light from the source as the light travels towards the observer. This effect is known as gravitational lensing, and the amount of bending is one of the predictions of Albert Einstein's general theory of relativity. (Classical physics also predicts the bending of light, but only half of that predicted by general relativity.)

Although Einstein made unpublished calculations on the subject in 1912, Orest Khvolson (1924), and Frantisek Link (1936) are generally credited with being the first to discuss the effect in print. However, this effect is more commonly associated with Einstein, who published an article on the subject in 1936.

Fritz Zwicky posited in 1937 that the effect could allow galaxy clusters to act as gravitational lenses. It was not until 1979 that this effect was confirmed by observation of the so-called Twin QSO SBS 0957+561.

Description

Unlike an optical lens, a gravitational lens produces a maximum deflection of light that passes closest to its center, and a minimum deflection of light that travels furthest from its center. Consequently, a gravitational lens has no single focal point, but a focal line. The term "lens" in the context of gravitational light deflection was first used by O.J. Lodge, who remarked that it is "not permissible to say that the solar gravitational field acts like a lens, for it has no focal length". If the (light) source, the massive lensing object, and the observer lie in a straight line, the original light source will appear as a ring around the massive lensing object. If there is any misalignment, the observer will see an arc segment instead. This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Chwolson, and quantified by Albert Einstein in 1936. It is usually referred to in the literature as an Einstein ring, since Chwolson did not concern himself with the flux or radius of the ring image. More commonly, where the lensing mass is complex (such as a galaxy group or cluster) and does not cause a spherical distortion of spacetime, the source will resemble partial arcs scattered around the lens. The observer may then see multiple distorted images of the same source; the number and shape of these depending upon the relative positions of the source, lens, and observer, and the shape of the gravitational well of the lensing object.
 
There are three classes of gravitational lensing:

1. Strong lensing: where there are easily visible distortions such as the formation of Einstein rings, arcs, and multiple images. 

2. Weak lensing: where the distortions of background sources are much smaller and can only be detected by analyzing large numbers of sources in a statistical way to find coherent distortions of only a few percent. The lensing shows up statistically as a preferred stretching of the background objects perpendicular to the direction to the centre of the lens. By measuring the shapes and orientations of large numbers of distant galaxies, their orientations can be averaged to measure the shear of the lensing field in any region. This, in turn, can be used to reconstruct the mass distribution in the area: in particular, the background distribution of dark matter can be reconstructed. Since galaxies are intrinsically elliptical and the weak gravitational lensing signal is small, a very large number of galaxies must be used in these surveys. These weak lensing surveys must carefully avoid a number of important sources of systematic error: the intrinsic shape of galaxies, the tendency of a camera's point spread function to distort the shape of a galaxy and the tendency of atmospheric seeing to distort images must be understood and carefully accounted for. The results of these surveys are important for cosmological parameter estimation, to better understand and improve upon the Lambda-CDM model, and to provide a consistency check on other cosmological observations. They may also provide an important future constraint on dark energy.

3. Microlensing: where no distortion in shape can be seen but the amount of light received from a background object changes in time. The lensing object may be stars in the Milky Way in one typical case, with the background source being stars in a remote galaxy, or, in another case, an even more distant quasar. The effect is small, such that (in the case of strong lensing) even a galaxy with a mass more than 100 billion times that of the Sun will produce multiple images separated by only a few arcseconds. Galaxy clusters can produce separations of several arcminutes. In both cases the galaxies and sources are quite distant, many hundreds of megaparsecs away from our Galaxy.
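
To connect these image separations to the lens mass (a standard order-of-magnitude estimate, added for illustration and treating distances as simply additive), the angular Einstein radius of a point lens is θ_E = sqrt(4GM/c² · D_LS/(D_L·D_S)), and multiple images appear separated by roughly 2θ_E.

    import math

    G, c = 6.674e-11, 2.998e8           # SI units
    M_sun, kpc = 1.989e30, 3.086e19     # kg, m

    def einstein_radius_arcsec(M, D_L, D_S):
        """Angular Einstein radius of a point-mass lens, taking D_LS = D_S - D_L
        (a Euclidean simplification, good enough for order of magnitude)."""
        D_LS = D_S - D_L
        theta = math.sqrt(4 * G * M / c**2 * D_LS / (D_L * D_S))  # radians
        return math.degrees(theta) * 3600

    # Galaxy-scale lens (~1e11 solar masses) halfway to a distant source:
    print(einstein_radius_arcsec(1e11 * M_sun, 1e6 * kpc, 2e6 * kpc))  # ~0.6 arcsec
    # Solar-mass star lensing a bulge star in the Milky Way:
    print(einstein_radius_arcsec(1.0 * M_sun, 4 * kpc, 8 * kpc))       # ~0.001 arcsec

The milliarcsecond scale of stellar lenses is why microlensing is observed as a transient brightening rather than as resolved images.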

Gravitational lenses act equally on all kinds of electromagnetic radiation, not just visible light. Weak lensing effects are being studied for the cosmic microwave background as well as galaxy surveys. Strong lenses have been observed in radio and x-ray regimes as well. If a strong lens produces multiple images, there will be a relative time delay between two paths: that is, in one image the lensed object will be observed before the other image.

History

One of Eddington's photographs of the 1919 solar eclipse experiment, presented in his 1920 paper announcing its success
 
Henry Cavendish in 1784 (in an unpublished manuscript) and Johann Georg von Soldner in 1801 (published in 1804) had pointed out that Newtonian gravity predicts that starlight will bend around a massive object as had already been supposed by Isaac Newton in 1704 in his Queries No.1 in his book Opticks. The same value as Soldner's was calculated by Einstein in 1911 based on the equivalence principle alone. However, Einstein noted in 1915, in the process of completing general relativity, that his (and thus Soldner's) 1911-result is only half of the correct value. Einstein became the first to calculate the correct value for light bending.

The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere. The observations were performed in 1919 by Arthur Eddington, Frank Watson Dyson, and their collaborators during the total solar eclipse on May 29. The solar eclipse allowed the stars near the Sun to be observed. Observations were made simultaneously in the cities of Sobral, Ceará, Brazil and in São Tomé and Príncipe on the west coast of Africa. The observations demonstrated that the light from stars passing close to the Sun was slightly bent, so that stars appeared slightly out of position.

Bending light around a massive object from a distant source. The orange arrows show the apparent position of the background source. The white arrows show the path of the light from the true position of the source.
 
In the formation known as Einstein's Cross, four images of the same distant quasar appear around a foreground galaxy due to strong gravitational lensing.
 
The result was considered spectacular news and made the front page of most major newspapers. It made Einstein and his theory of general relativity world-famous. When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein said "Then I would feel sorry for the dear Lord. The theory is correct anyway." In 1912, Einstein had speculated that an observer could see multiple images of a single light source, if the light were deflected around a mass. This effect would make the mass act as a kind of gravitational lens. However, as he only considered the effect of deflection around a single star, he seemed to conclude that the phenomenon was unlikely to be observed for the foreseeable future since the necessary alignments between stars and observer would be highly improbable. Several other physicists speculated about gravitational lensing as well, but all reached the same conclusion that it would be nearly impossible to observe.

Although Einstein made unpublished calculations on the subject, the first discussion of the gravitational lens in print was by Khvolson, in a short article discussing the "halo effect" of gravitation when the source, lens, and observer are in near-perfect alignment, now referred to as the Einstein ring.

In 1936, after some urging by Rudi W. Mandl, Einstein reluctantly published the short article "Lens-Like Action of a Star By the Deviation of Light In the Gravitational Field" in the journal Science.

In 1937, Fritz Zwicky first considered the case where the newly discovered galaxies (which were called 'nebulae' at the time) could act as both source and lens, and that, because of the mass and sizes involved, the effect was much more likely to be observed.

In 1963 Yu. G. Klimov, S. Liebes, and Sjur Refsdal recognized independently that quasars are an ideal light source for the gravitational lens effect.

It was not until 1979 that the first gravitational lens would be discovered. It became known as the "Twin QSO" since it initially looked like two identical quasistellar objects. (It is officially named SBS 0957+561.) This gravitational lens was discovered by Dennis Walsh, Bob Carswell, and Ray Weymann using the Kitt Peak National Observatory 2.1 meter telescope.

In the 1980s, astronomers realized that the combination of CCD imagers and computers would allow the brightness of millions of stars to be measured each night. In a dense field, such as the galactic center or the Magellanic clouds, many microlensing events per year could potentially be found. This led to efforts such as Optical Gravitational Lensing Experiment, or OGLE, that have characterized hundreds of such events, including those of OGLE-2016-BLG-1190Lb and OGLE-2016-BLG-1195Lb.

Explanation in terms of spacetime curvature

Simulated gravitational lensing (black hole passing in front of a background galaxy).
 
In general relativity, light follows the curvature of spacetime, hence when light passes around a massive object, it is bent. This means that the light from an object on the other side will be bent towards an observer's eye, just like an ordinary lens. In General Relativity the speed of light depends on the gravitational potential (aka the metric) and this bending can be viewed as a consequence of the light traveling along a gradient in light speed. Light rays are the boundary between the future, the spacelike, and the past regions. The gravitational attraction can be viewed as the motion of undisturbed objects in a background curved geometry or alternatively as the response of objects to a force in a flat geometry. The angle of deflection is:
θ = 4GM / (c²r)
toward the mass M at a distance r from the affected radiation, where G is the universal constant of gravitation and c is the speed of light in a vacuum. This formula is identical to the formula for weak gravitational lensing derived using relativistic Newtonian dynamics without curving spacetime.

Since the Schwarzschild radius is defined as r_s = 2GM/c², this can also be expressed in the simple form θ = 2r_s/r.
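
As a quick numerical check (added for illustration, using standard values for the solar mass and radius), a light ray grazing the Sun's limb is deflected by about 1.75 arcseconds, the value tested in the 1919 eclipse observations described in the History section above.

    import math

    G = 6.674e-11       # m^3 kg^-1 s^-2
    c = 2.998e8         # m/s
    M_sun = 1.989e30    # kg
    R_sun = 6.963e8     # m; closest approach r = R_sun for a grazing ray

    theta = 4 * G * M_sun / (c**2 * R_sun)   # deflection angle in radians
    print(math.degrees(theta) * 3600)        # ~1.75 arcseconds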

Search for gravitational lenses

This image from the NASA/ESA Hubble Space Telescope shows the galaxy cluster MACS J1206.
 
Most of the gravitational lenses in the past have been discovered accidentally. A search for gravitational lenses in the northern hemisphere (Cosmic Lens All Sky Survey, CLASS), done in radio frequencies using the Very Large Array (VLA) in New Mexico, led to the discovery of 22 new lensing systems, a major milestone. This has opened a whole new avenue for research ranging from finding very distant objects to finding values for cosmological parameters so we can understand the universe better.

A similar search in the southern hemisphere would be a very good step towards complementing the northern hemisphere search as well as obtaining other objectives for study. If such a search is done using a well-calibrated and well-parameterized instrument and data, results similar to the northern survey can be expected. The Australia Telescope 20 GHz (AT20G) Survey data, collected using the Australia Telescope Compact Array (ATCA), stand to be such a collection of data. As the data were collected using the same instrument, maintaining a very stringent quality of data, we should expect to obtain good results from the search. The AT20G survey is a blind survey at 20 GHz in the radio domain of the electromagnetic spectrum. Due to the high frequency used, the chances of finding gravitational lenses increase, as the relative number of compact core objects (e.g. quasars) is higher (Sadler et al. 2006). This is important, as lensing is easier to detect and identify in simple objects compared to objects with complexity in them. This search involves the use of interferometric methods to identify candidates and follow them up at higher resolution to identify them. Full details of the project are currently being prepared for publication.

Galaxy cluster SDSS J0915+3826 helps astronomers to study star formation in galaxies.
 
Microlensing techniques have been used to search for planets outside our solar system. A statistical analysis of specific cases of observed microlensing over the period 2002 to 2007 found that most stars in the Milky Way galaxy hosted at least one orbiting planet within 0.5 to 10 AU.

According to a 2009 article in Science Daily, a team of scientists led by a cosmologist from the U.S. Department of Energy's Lawrence Berkeley National Laboratory made major progress in extending the use of gravitational lensing to the study of much older and smaller structures than was previously possible, reporting that weak gravitational lensing improves measurements of distant galaxies.

Astronomers from the Max Planck Institute for Astronomy in Heidelberg, Germany, using NASA's Hubble Space Telescope, discovered what at the time was the most distant gravitational lens galaxy, termed J1000+0221; the results were accepted for publication in the Astrophysical Journal Letters (arXiv.org) on Oct 21, 2013. While it remains the most distant quad-image lensing galaxy known, an even more distant two-image lensing galaxy was subsequently discovered by an international team of astronomers using a combination of Hubble Space Telescope and Keck telescope imaging and spectroscopy. The discovery and analysis of the IRC 0218 lens was published in the Astrophysical Journal Letters on June 23, 2014.

Research led by McGill University in Montreal, Québec, Canada, published Sep 30, 2013 in the online edition of Physical Review Letters, detected the B-modes that are formed by the gravitational lensing effect, using the National Science Foundation's South Pole Telescope with help from the Herschel space observatory. This discovery opens up possibilities for testing theories of how our universe originated.

Abell 2744 galaxy cluster - extremely distant galaxies revealed by gravitational lensing (16 October 2014).

Solar gravitational lens

Albert Einstein predicted in 1936 that rays of light from the same direction that skirt the edges of the Sun would converge to a focal point approximately 542 AUs from the Sun. Thus, a probe positioned at this distance (or greater) from the Sun could use the Sun as a gravitational lens for magnifying distant objects on the opposite side of the Sun. A probe's location could shift around as needed to select different targets relative to the Sun.
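
A short calculation (added for illustration, using the same standard solar values as earlier) recovers the quoted focal distance: a ray grazing the solar limb is bent by θ = 4GM/(c²R), so it crosses the optical axis at a distance of roughly R/θ.

    G, c = 6.674e-11, 2.998e8
    M_sun, R_sun = 1.989e30, 6.963e8     # kg, m
    AU = 1.496e11                        # m

    theta = 4 * G * M_sun / (c**2 * R_sun)   # bending angle of a grazing ray
    focal = R_sun / theta                    # where grazing rays cross the axis
    print(focal / AU)                        # ~548 AU, close to the quoted ~542 AU
                                             # (the exact figure depends on the adopted
                                             # solar values); rays passing farther from
                                             # the Sun focus at larger distances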

This distance is far beyond the progress and equipment capabilities of space probes such as Voyager 1, and beyond the known planets and dwarf planets, though over thousands of years 90377 Sedna will move farther away on its highly elliptical orbit. The high gain for potentially detecting signals through this lens, such as microwaves at the 21-cm hydrogen line, led to the suggestion by Frank Drake in the early days of SETI that a probe could be sent to this distance. A multipurpose probe, SETISAIL and later FOCAL, was proposed to the ESA in 1993, but the task is expected to be difficult. If a probe does pass 542 AU, the magnification capabilities of the lens will continue to act at farther distances, as the rays that come to a focus at larger distances pass further away from the distortions of the Sun's corona. A critique of the concept was given by Landis, who discussed issues including interference of the solar corona, the high magnification of the target, which will make the design of the mission focal plane difficult, and an analysis of the inherent spherical aberration of the lens.

Measuring weak lensing

Galaxy cluster MACS J2129-0741 and lensed galaxy MACS2129-1.
 
Kaiser, Squires and Broadhurst (1995), Luppino & Kaiser (1997) and Hoekstra et al. (1998) prescribed a method to invert the effects of the Point Spread Function (PSF) smearing and shearing, recovering a shear estimator uncontaminated by the systematic distortion of the PSF. This method (KSB+) is the most widely used method in weak lensing shear measurements.

Galaxies have random rotations and inclinations. As a result, the shear effects in weak lensing need to be determined by statistically preferred orientations. The primary source of error in lensing measurement is due to the convolution of the PSF with the lensed image. The KSB method measures the ellipticity of a galaxy image. The shear is proportional to the ellipticity. The objects in lensed images are parameterized according to their weighted quadrupole moments. For a perfect ellipse, the weighted quadrupole moments are related to the weighted ellipticity. KSB calculate how a weighted ellipticity measure is related to the shear and use the same formalism to remove the effects of the PSF.
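
A minimal sketch (illustrative only; this is not the full KSB+ pipeline and omits the PSF-correction step) of the measurement described above: compute flux-weighted quadrupole moments of a galaxy postage stamp and form the two ellipticity components whose average over many galaxies traces the shear.

    import numpy as np

    def weighted_ellipticity(image, sigma):
        """Weighted quadrupole moments Q_ij and ellipticity
        e1 = (Qxx - Qyy)/(Qxx + Qyy),  e2 = 2*Qxy/(Qxx + Qyy)."""
        ny, nx = image.shape
        y, x = np.mgrid[0:ny, 0:nx].astype(float)
        flux = image.sum()
        xc, yc = (x * image).sum() / flux, (y * image).sum() / flux   # flux-weighted centroid
        dx, dy = x - xc, y - yc
        w = np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))               # circular Gaussian weight
        wim = w * image
        norm = wim.sum()
        qxx = (wim * dx * dx).sum() / norm
        qyy = (wim * dy * dy).sum() / norm
        qxy = (wim * dx * dy).sum() / norm
        return (qxx - qyy) / (qxx + qyy), 2.0 * qxy / (qxx + qyy)

    # The real KSB method additionally measures the same moments for stars (pure PSF)
    # and uses them to correct e1, e2 for PSF smearing and anisotropy before converting
    # the averaged ellipticities into a shear estimate.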

KSB's primary advantages are its mathematical ease and relatively simple implementation. However, KSB is based on a key assumption that the PSF is circular with an anisotropic distortion. This is a reasonable assumption for cosmic shear surveys, but the next generation of surveys (e.g. LSST) may need much better accuracy than KSB can provide.

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...