
Thursday, October 24, 2019

Euclid's theorem

From Wikipedia, the free encyclopedia
 
Euclid's theorem
Field: Number theory
First proof by: Euclid
First proof in: c. 300 BCE
Generalizations: Dirichlet's theorem on arithmetic progressions; Prime number theorem

Euclid's theorem is a fundamental statement in number theory that asserts that there are infinitely many prime numbers. It was first proved by Euclid in his work Elements. There are several proofs of the theorem.

Euclid's proof

Euclid offered a proof published in his work Elements (Book IX, Proposition 20), which is paraphrased here.

Consider any finite list of prime numbers p1, p2, ..., pn. It will be shown that at least one additional prime number not in this list exists. Let P be the product of all the prime numbers in the list: P = p1p2...pn. Let q = P + 1. Then q is either prime or not:
  • If q is prime, then there is at least one more prime that is not in the list.
  • If q is not prime, then some prime factor p divides q. If this factor p were in our list, then it would divide P (since P is the product of every number in the list); but p divides P + 1 = q. If p divides P and q, then p would have to divide the difference of the two numbers, which is (P + 1) − P or just 1. Since no prime number divides 1, p cannot be on the list. This means that at least one more prime number exists beyond those in the list.
This proves that for every finite list of prime numbers there is a prime number not in the list, and therefore there must be infinitely many prime numbers. Euclid is often erroneously reported to have proved this result by contradiction beginning with the assumption that the finite set initially considered contains all prime numbers. While such a proof does follow from Euclid's method, Euclid's proof deduces the infinitude directly.
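For illustration only (this code is not part of Euclid's argument, and the helper names are made up for the sketch), the proof can be run as a computation: any prime factor of P + 1 is guaranteed to be a prime missing from the starting list.

    # Illustration of Euclid's argument: from any finite list of primes,
    # produce a prime that is not in the list.

    def smallest_prime_factor(m):
        """Return the smallest prime factor of m (for m >= 2) by trial division."""
        d = 2
        while d * d <= m:
            if m % d == 0:
                return d
            d += 1
        return m  # m itself is prime

    def new_prime(primes):
        """Given a finite list of primes, return a prime not in the list."""
        P = 1
        for p in primes:
            P *= p
        q = P + 1
        # No prime in the list divides q = P + 1, so any prime factor of q is new.
        return smallest_prime_factor(q)

    print(new_prime([2, 3, 5, 7]))  # 2*3*5*7 + 1 = 211, which is itself prime
    print(new_prime([2, 7]))        # 2*7 + 1 = 15, whose smallest prime factor 3 is new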

Variations

Several variations on Euclid's proof exist, including the following:

The factorial n! of a positive integer n is divisible by every integer from 2 to n, as it is the product of all of them. Hence, n! + 1 is not divisible by any of the integers from 2 to n, inclusive (it gives a remainder of 1 when divided by each). Hence n! + 1 is either prime or divisible by a prime larger than n. In either case, for every positive integer n, there is at least one prime bigger than n. The conclusion is that the number of primes is infinite.
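For illustration (not part of the original argument), a quick numerical check: the smallest prime factor of n! + 1 always exceeds n.

    from math import factorial

    def smallest_prime_factor(m):
        d = 2
        while d * d <= m:
            if m % d == 0:
                return d
            d += 1
        return m

    # n! + 1 leaves remainder 1 when divided by every integer from 2 to n,
    # so its smallest prime factor must be larger than n.
    for n in range(2, 11):
        p = smallest_prime_factor(factorial(n) + 1)
        assert p > n
        print(n, factorial(n) + 1, p)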

Euler's proof

Another proof, by the Swiss mathematician Leonhard Euler, relies on the fundamental theorem of arithmetic: that every integer has a unique prime factorization. If P is the set of all prime numbers, Euler wrote that

    ∏_{p ∈ P} 1/(1 − 1/p) = ∏_{p ∈ P} (1 + 1/p + 1/p² + 1/p³ + ···) = ∑_{n ≥ 1} 1/n.

The first equality is given by the formula for a geometric series in each term of the product. The second equality is a special case of the Euler product formula for the Riemann zeta function. To show this, distribute the product over the sum: each term of the expanded product has the form 1/(p1^k1 · p2^k2 · ···) for some choice of non-negative exponents, so in the result every product of prime powers appears exactly once, and by the fundamental theorem of arithmetic the sum is equal to the sum over all positive integers.

The sum on the right is the harmonic series, which diverges. Thus the product on the left must also diverge. Since each term of the product is finite, the number of terms must be infinite; therefore, there is an infinite number of primes.
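As an illustrative numerical sketch (not Euler's own computation), one can compare the two sides: the product over any finite set of primes is a bounded number, while the harmonic partial sums eventually grow past it.

    # Compare the Euler product over the primes up to a bound with
    # partial sums of the harmonic series.

    def primes_up_to(limit):
        sieve = [True] * (limit + 1)
        sieve[0:2] = [False, False]
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                for j in range(i * i, limit + 1, i):
                    sieve[j] = False
        return [i for i, is_p in enumerate(sieve) if is_p]

    product = 1.0
    for p in primes_up_to(100):
        product *= 1.0 / (1.0 - 1.0 / p)  # geometric series 1 + 1/p + 1/p^2 + ...

    harmonic, n = 0.0, 0
    while harmonic <= product:  # the harmonic series exceeds any finite product
        n += 1
        harmonic += 1.0 / n

    print("Euler product over primes <= 100:", product)
    print("harmonic sum first exceeds it at n =", n)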

Erdős's proof

Paul Erdős gave a third proof that also relies on the fundamental theorem of arithmetic. First, every integer n can be uniquely written as

    n = r·s²,

where r is square-free, that is, not divisible by any square number greater than 1 (let s² be the largest square number that divides n and then let r = n/s²). Now suppose that there are only finitely many prime numbers and call the number of prime numbers k. As each prime number appears at most once in the factorization of a square-free number, by the fundamental theorem of arithmetic there are only 2^k square-free numbers.

Now fix a positive integer N and consider the integers between 1 and N. Each of these numbers can be written as r·s², where r is square-free and s² is a square, like this:

    (1×1, 2×1, 3×1, 1×4, 5×1, 6×1, 7×1, 2×4, 1×9, 10×1, ...)

There are N different numbers in the list. Each of them is made by multiplying a square-free number by a square number that is N or less. There are ⌊√N⌋ such square numbers. Then we form all the possible products of the ⌊√N⌋ squares with the 2^k square-free numbers. There are exactly 2^k⌊√N⌋ such numbers, all different, and they include all the numbers in our list and maybe more. Therefore, 2^k⌊√N⌋ ≥ N. Here, ⌊x⌋ denotes the floor function.

Since this inequality does not hold for N sufficiently large, there must be infinitely many primes.
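An illustrative sketch (not from Erdős's paper; the helper name is made up) of the counting idea: each n ≤ N is decomposed as r·s² with r square-free, and the number of distinct square-free parts that actually occur keeps growing with N, so no fixed collection of 2^k square-free numbers can cover every case.

    # Decompose n = r * s^2 with r square-free, and count how many distinct
    # square-free parts occur among 1..N.

    def squarefree_part(n):
        """Return r such that n = r * s^2 where s^2 is the largest square dividing n."""
        r = n
        d = 2
        while d * d <= r:
            while r % (d * d) == 0:
                r //= d * d
            d += 1
        return r

    for N in [10, 100, 1000, 10000]:
        distinct = {squarefree_part(n) for n in range(1, N + 1)}
        # With only k primes, at most 2**k square-free parts could ever appear.
        print(N, "integers need", len(distinct), "distinct square-free parts")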

Furstenberg's proof

In the 1950s, Hillel Furstenberg introduced a proof by contradiction using point-set topology.

Define a topology on the integers Z, called the evenly spaced integer topology, by declaring a subset U ⊆ Z to be an open set if and only if it is either the empty set, ∅, or it is a union of arithmetic sequences S(a, b) (for a ≠ 0), where

    S(a, b) = { a·n + b : n ∈ Z } = a·Z + b.

Then a contradiction follows from the property that a finite set of integers cannot be open and the property that the basis sets S(a, b) are both open and closed, since

    Z ∖ {−1, +1} = ⋃_p S(p, 0)   (union over all primes p)

cannot be closed, because its complement {−1, +1} is finite and nonempty (hence not open), but would be closed if there were only finitely many primes, since it would then be a finite union of closed sets.
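The set identity used above can be checked numerically on a finite window of integers (an illustrative sketch; the topological part of the argument is of course not something one can execute):

    # Check, on a finite window, that every integer other than -1 and +1
    # lies in S(p, 0) = p*Z for some prime p.

    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    primes = [p for p in range(2, 1001) if is_prime(p)]

    for n in range(-1000, 1001):
        if n in (-1, 1):
            continue
        # 0 lies in every S(p, 0); any other n here has a prime factor <= 1000.
        assert any(n % p == 0 for p in primes), n

    print("on this window, Z minus {-1, +1} is covered by the sets S(p, 0)")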

Some recent proofs

Proof using the inclusion-exclusion principle

Juan Pablo Pinasco has written the following proof.

Let p1, ..., pN be the smallest N primes. Then by the inclusion–exclusion principle, the number of positive integers less than or equal to x that are divisible by one of those primes is

    1 + ∑_i ⌊x/pi⌋ − ∑_{i<j} ⌊x/(pi pj)⌋ + ∑_{i<j<k} ⌊x/(pi pj pk)⌋ − ··· ± ⌊x/(p1 ··· pN)⌋.   (1)

Dividing by x and letting x → ∞ gives

    ∑_i 1/pi − ∑_{i<j} 1/(pi pj) + ∑_{i<j<k} 1/(pi pj pk) − ··· ± 1/(p1 ··· pN).   (2)

This can be written as

    1 − ∏_{i=1}^{N} (1 − 1/pi).   (3)

If no other primes than p1, ..., pN exist, then the expression in (1) is equal to ⌊x⌋ and the expression in (2) is equal to 1, but clearly the expression in (3) is not equal to 1. Therefore, there must be more primes than p1, ..., pN.
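An illustrative check of expression (1) against a direct count for small x (the helper names are made up for this sketch):

    from itertools import combinations
    from math import floor, prod

    def inclusion_exclusion_count(x, primes):
        """1 + the number of integers in [1, x] divisible by at least one of the
        given primes, computed term by term as in expression (1)."""
        total = 1
        for r in range(1, len(primes) + 1):
            sign = (-1) ** (r + 1)
            for combo in combinations(primes, r):
                total += sign * floor(x / prod(combo))
        return total

    def direct_count(x, primes):
        return 1 + sum(1 for n in range(2, int(x) + 1) if any(n % p == 0 for p in primes))

    primes = [2, 3, 5, 7]
    for x in [10, 100, 1000]:
        # If 2, 3, 5 and 7 were the only primes, both counts would equal floor(x);
        # the growing gap reflects the integers built from other primes.
        print(x, inclusion_exclusion_count(x, primes), direct_count(x, primes))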

Proof using de Polignac's formula

In 2010, Junho Peter Whang published the following proof by contradiction. Let k be any positive integer. Then according to de Polignac's formula (actually due to Legendre),

    k! = ∏_p p^f(p,k)   (product over all primes p),

where

    f(p,k) = ⌊k/p⌋ + ⌊k/p²⌋ + ⌊k/p³⌋ + ··· .

Note that f(p,k) ≤ k/p + k/p² + ··· = k/(p − 1) ≤ k, so p^f(p,k) ≤ p^k for every prime p. But if only finitely many primes exist, then

    (∏_p p)^k / k!  →  0 as k → ∞

(the numerator of the fraction would grow singly exponentially in k, while by Stirling's approximation the denominator grows more quickly than singly exponentially), contradicting the fact that for each k the numerator is greater than or equal to the denominator.
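Legendre's formula itself is easy to verify numerically (an illustrative sketch, not part of Whang's paper):

    from math import factorial

    def legendre_exponent(p, k):
        """f(p, k) = floor(k/p) + floor(k/p^2) + ..., the exponent of p in k!."""
        e, q = 0, p
        while q <= k:
            e += k // q
            q *= p
        return e

    def primes_up_to(limit):
        return [n for n in range(2, limit + 1)
                if all(n % d for d in range(2, int(n ** 0.5) + 1))]

    for k in [5, 10, 20]:
        reconstructed = 1
        for p in primes_up_to(k):
            reconstructed *= p ** legendre_exponent(p, k)
        assert reconstructed == factorial(k)
        print(k, "! =", reconstructed)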

Proof by construction

Filip Saidak gave the following proof by construction, which does not use reductio ad absurdum or Euclid's Lemma (that if a prime p divides ab then it must divide a or b). 

Since each natural number (> 1) has at least one prime factor, and two successive numbers n and (n + 1) have no factor in common, the product n(n + 1) has more different prime factors than the number n itself.  So the chain of pronic numbers:

1×2 = 2 {2},    2×3 = 6 {2, 3},    6×7 = 42 {2, 3, 7},    42×43 = 1806 {2, 3, 7, 43},    1806×1807 = 3263442 {2, 3, 7, 13, 43, 139}, · · ·
 
provides an unending sequence of growing sets of primes.
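An illustrative sketch that regenerates the chain and its growing sets of prime factors (the factorizer is plain trial division and is not part of Saidak's proof):

    def prime_factors(m):
        """Return the set of prime factors of m by trial division."""
        factors, d = set(), 2
        while d * d <= m:
            while m % d == 0:
                factors.add(d)
                m //= d
            d += 1
        if m > 1:
            factors.add(m)
        return factors

    n = 2
    for _ in range(5):
        print(n, sorted(prime_factors(n)))
        n = n * (n + 1)  # n and n + 1 are coprime, so the set of prime factors grows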

Proof using the irrationality of π

Representing the Leibniz formula for π as an Euler product gives

    π/4 = 3/4 × 5/4 × 7/8 × 11/12 × 13/12 × 17/16 × 19/20 × 23/24 × 29/28 × 31/32 × ···
The numerators of this product are the odd prime numbers, and each denominator is the multiple of four nearest to the numerator. 

If there were finitely many primes this formula would show that π is a rational number whose denominator is the product of all multiples of 4 that are one more or less than a prime number, contradicting the fact that π is irrational.
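Numerically, the partial products do approach π/4 (an illustrative sketch; the denominator for each odd prime p is obtained by rounding p/4 to the nearest integer and multiplying by 4):

    from math import pi

    def odd_primes_up_to(limit):
        return [n for n in range(3, limit + 1, 2)
                if all(n % d for d in range(3, int(n ** 0.5) + 1, 2))]

    product = 1.0
    for p in odd_primes_up_to(10000):
        nearest_multiple_of_4 = 4 * round(p / 4)  # e.g. 3 -> 4, 7 -> 8, 13 -> 12
        product *= p / nearest_multiple_of_4

    print("partial product over odd primes <= 10000:", product)
    print("pi / 4                                  :", pi / 4)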

Proof using information theory

Alexander Shen and others have presented a proof that uses incompressibility:

Suppose there were only k primes (p1, ..., pk). By the fundamental theorem of arithmetic, any positive integer n could then be represented as

    n = p1^e1 · p2^e2 · ··· · pk^ek,

where the non-negative integer exponents ei together with the finite-sized list of primes are enough to reconstruct the number. Since pi ≥ 2 for all i, it follows that all ei ≤ log2 n (where log2 denotes the base-2 logarithm).

This yields an encoding for n of the following size (using big O notation):

    O(k log(log2 n)) = O(log log n) bits,

since k is a fixed constant. This is a much more efficient encoding than representing n directly in binary, which takes N = O(log n) bits. An established result in lossless data compression states that one cannot generally compress N bits of information into fewer than N bits. The representation above violates this by far when n is large enough, since log log n grows much more slowly than log n.

Therefore, the number of primes must not be finite.
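A rough numeric comparison of the two encoding lengths (an illustrative sketch; the choice of 2, 3, 5, 7 as the assumed "only" primes and the particular test values are arbitrary, and only numbers built from those primes are used so that the hypothetical encoding applies):

    # Under the (false) assumption that 2, 3, 5 and 7 are the only primes, any n
    # would be described by four exponents, each at most log2(n).  Compare the
    # total length of those exponents with the plain binary length of n.

    primes = [2, 3, 5, 7]

    def exponents(n):
        exps = []
        for p in primes:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            exps.append(e)
        return exps if n == 1 else None  # None: n has some other prime factor

    for n in [2**40, 2**20 * 3**15, 3**30 * 7**8, 5**25]:
        exps = exponents(n)
        exponent_bits = sum(max(1, e.bit_length()) for e in exps)
        print(n.bit_length(), "bits in binary vs about", exponent_bits,
              "bits of exponents", exps)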

A generalization: Dirichlet's theorem on arithmetic progressions

Dirichlet's theorem states that for any two positive coprime integers a and d, there are infinitely many primes of the form a + nd, where n is also a positive integer. In other words, there are infinitely many primes that are congruent to a modulo d. Euclid's theorem is a special case of Dirichlet's theorem for a = d = 1. Every case of Dirichlet's theorem yields Euclid's theorem.
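For example, with a = 3 and d = 4, Dirichlet's theorem promises infinitely many primes of the form 3 + 4n; a short sketch (illustrative only) lists the first few:

    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    a, d = 3, 4  # gcd(3, 4) = 1, so Dirichlet's theorem applies
    print([a + n * d for n in range(1, 50) if is_prime(a + n * d)])
    # 7, 11, 19, 23, 31, 43, 47, ...  (the theorem says this list never ends)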

A stronger result: the prime number theorem

Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. The prime number theorem then states that x / log x is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x / log x as x increases without bound is 1:

    lim_{x → ∞} π(x) / (x / log x) = 1.

Using asymptotic notation this result can be restated as

    π(x) ~ x / log x.

This yields Euclid's theorem, since

    lim_{x → ∞} x / log x = ∞.
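A quick check of the approximation (an illustrative sketch; log denotes the natural logarithm):

    from math import log

    def prime_count(x):
        """pi(x): the number of primes <= x, via a simple sieve."""
        sieve = [True] * (x + 1)
        sieve[0:2] = [False, False]
        for i in range(2, int(x ** 0.5) + 1):
            if sieve[i]:
                for j in range(i * i, x + 1, i):
                    sieve[j] = False
        return sum(sieve)

    for x in [10**3, 10**4, 10**5, 10**6]:
        approx = x / log(x)
        print(x, prime_count(x), round(approx), round(prime_count(x) / approx, 3))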

Hypercomputation

From Wikipedia, the free encyclopedia
 
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

The Church–Turing thesis states that any "computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense.

Technically the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of useful, rather than random, uncomputable functions.

History

A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals. This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are mathematical abstractions, and are not physically realizable.

State space

In a sense, most functions are uncomputable: there are only countably many (ℵ0) computable functions, but there are an uncountable number (2^ℵ0) of possible super-Turing functions.

Hypercomputer models

Hypercomputer models range from useful but probably unrealizable (such as Turing's original oracle machines), to less-useful random-function generators that are more plausibly "realizable" (such as a random Turing machine).

Hypercomputers with uncomputable inputs or black-box components

A system granted knowledge of the uncomputable, oracular Chaitin's constant (a number with an infinite sequence of digits that encode the solution to the halting problem) as an input can solve a large number of useful undecidable problems; a system granted an uncomputable random-number generator as an input can create random uncomputable functions, but is generally not believed to be able to meaningfully solve "useful" uncomputable functions such as the halting problem. There are an unlimited number of different types of conceivable hypercomputers, including:
  • Turing's original oracle machines, defined by Turing in 1939.
  • A real computer (a sort of idealized analog computer) can perform hypercomputation if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for useful (rather than random) computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would require the ability to measure the real-valued physical value to arbitrary precision.
    • Similarly, a neural net that somehow had Chaitin's constant exactly embedded in its weight function would be able to solve the halting problem, though constructing such an infinitely precise neural net, even if you somehow know Chaitin's constant beforehand, is impossible under the laws of quantum mechanics.
  • Certain fuzzy logic-based "fuzzy Turing machines" can, by definition, accidentally solve the halting problem, but only because their ability to solve the halting problem is indirectly assumed in the specification of the machine; this tends to be viewed as a "bug" in the original specification of the machines.
    • Similarly, a proposed model known as fair nondeterminism can accidentally allow the oracular computation of noncomputable functions, because some such systems, by definition, have the oracular ability to identify and reject inputs that would "unfairly" cause a subsystem to run forever.
  • Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic. These models require an uncomputable input, such as a physical event-generating process where the interval between events grows at an uncomputably large rate.
    • Similarly, one unorthodox interpretation of a model of unbounded nondeterminism posits, by definition, that the length of time required for an "Actor" to settle is fundamentally unknowable, and therefore it cannot be proven, within the model, that it does not take an uncomputably long period of time.

"Infinite computational steps" models

In order to work correctly, certain computations by the machines below literally require infinite, rather than merely unlimited but finite, physical space and resources; in contrast, with a Turing machine, any given computation that halts will require only finite physical space and resources.
  • A Turing machine that can complete infinitely many steps in finite time, a feat known as a supertask. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine (inspired by Zeno's paradox). The Zeno machine performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, etc. By summing 1 + ½ + ¼ + ... (a geometric series) we see that the machine performs infinitely many steps in a total of 2 minutes. According to Shagrir, Zeno machines introduce physical paradoxes, and their state is logically undefined outside of the half-open interval [0, 2), and thus undefined exactly 2 minutes after the beginning of the computation.
  • It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) makes hypercomputation possible by itself. However, this is not so since a CTC does not provide (by itself) the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation. According to a 1992 paper, a computer operating in a Malament–Hogarth spacetime or in orbit around a rotating black hole could theoretically perform non-Turing computations for an observer inside the black hole. Access to a CTC may allow the rapid solution to PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable.

Quantum models

Some scholars conjecture that a quantum mechanical system which somehow uses an infinite superposition of states could compute a non-computable function. This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).

"Eventually correct" systems

Some physically-realizable systems will always eventually converge to the correct answer, but have the defect that they will often output an incorrect answer and stick with the incorrect answer for an uncomputably large period of time before eventually going back and correcting the mistake.
  • In the mid-1960s, E. Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals" and "trial-and-error predicates", respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit"; whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify it as correct if it is recursive; otherwise, the correctness is established only by running the machine forever and noting that it never revises its answer. Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)" L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem" studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines." A toy sketch of this trial-and-error behavior appears after this list.
  • A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. Traditional Turing machines cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges; that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program, otherwise the halting problem could be solved. Schmidhuber uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can eventually converge to a correct solution of the halting problem by evaluating a Specker sequence.
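As a toy illustration of such trial-and-error or limiting behavior (a sketch only, not any of the constructions cited above): a guesser that simulates a process under ever larger step budgets and may revise its verdict. Its guesses converge to the correct answer to the halting question for the simulated process, but no individual guess comes with a guarantee that it is final.

    # Toy "trial-and-error" guesser: after each budget increase it outputs its
    # current guess about whether the simulated process ever signals completion.
    # The sequence of guesses converges to the truth, but the guesser can never
    # announce that a particular guess is the last one.

    def guesses(step, budgets):
        """step(t) -> True once the simulated process has 'halted' by time t."""
        for budget in budgets:
            halted = any(step(t) for t in range(budget))
            yield budget, ("halts" if halted else "does not halt (so far)")

    late_halter = lambda t: t >= 1000   # halts, but only at step 1000
    non_halter = lambda t: False        # never halts

    for budget, guess in guesses(late_halter, [10, 100, 10000]):
        print("late_halter, budget", budget, "->", guess)
    for budget, guess in guesses(non_halter, [10, 100, 10000]):
        print("non_halter,  budget", budget, "->", guess)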

Analysis of capabilities

Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetic hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing Σ⁰₁ or Π⁰₁. Limiting-recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be Δ⁰₂. Gold further showed that limiting partial recursion would allow the computation of precisely the Σ⁰₂ predicates.

Model | Computable predicates | Notes
supertasking | tt(Σ⁰₁, Π⁰₁) | dependent on outside observer
limiting/trial-and-error | Δ⁰₂ |
iterated limiting (k times) | Δ⁰ₖ₊₁ |
Blum-Shub-Smale machine | | incomparable with traditional computable real functions
Malament-Hogarth spacetime | HYP | dependent on spacetime structure
analog recurrent neural network | Δ⁰₁[f] | f is an advice function giving connection weights; size is bounded by runtime
infinite time Turing machine | AQI | Arithmetical Quasi-Inductive sets
classical fuzzy Turing machine | Σ⁰₁ ∪ Π⁰₁ | for any computable t-norm
increasing function oracle | Δ¹₁ | for the one-sequence model; Π¹₁ sets are r.e.

Criticism

Martin Davis, in his writings on hypercomputation, refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claims that this is a new field founded in the 1990s. This point of view relies on the history of computability theory (degrees of unsolvability, computability over functions, real numbers and ordinals), as also mentioned above. In his argument he remarks that all of hypercomputation amounts to little more than: "if non-computable inputs are permitted, then non-computable outputs are attainable."

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...