In mathematics, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers among the positive integers. It formalizes the intuitive idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée Poussin in 1896 using ideas introduced by Bernhard Riemann (in particular, the Riemann zeta function).
The first such distribution found is π(N) ~ N/log(N), where π(N) is the prime-counting function (the number of primes less than or equal to N) and log(N) is the natural logarithm of N. This means that for large enough N, the probability that a random integer not greater than N is prime is very close to 1 / log(N). Consequently, a random integer with at most 2n digits (for large enough n) is about half as likely to be prime as a random integer with at most n digits. For example, among the positive integers of at most 1000 digits, about one in 2300 is prime (log(10^1000) ≈ 2302.6), whereas among positive integers of at most 2000 digits, about one in 4600 is prime (log(10^2000) ≈ 4605.2). In other words, the average gap between consecutive prime numbers among the first N integers is roughly log(N).
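The heuristic in this paragraph is easy to check numerically. The following sketch (the sieve routine and the cutoffs are illustrative choices, not anything prescribed by the theorem) compares the observed proportion of primes up to N with 1/log N and reproduces the digit-count arithmetic above.

```python
import math

def primes_up_to(n: int) -> list[int]:
    """Simple sieve of Eratosthenes (illustrative helper)."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

# Observed proportion of primes up to N versus the heuristic density 1/log N.
for N in (10**4, 10**5, 10**6):
    pi_N = len(primes_up_to(N))
    print(f"N = {N:>7}: pi(N)/N = {pi_N / N:.5f},  1/log N = {1 / math.log(N):.5f}")

# The digit-count arithmetic from the text: log(10^1000) and log(10^2000).
print(1000 * math.log(10), 2000 * math.log(10))   # ≈ 2302.6 and ≈ 4605.2
```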
Statement
Let π(x) be the prime-counting function defined to be the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x / log x is a good approximation to π(x) (where log here means the natural logarithm), in the sense that the limit of the quotient of the two functions π(x) and x / log x as x increases without bound is 1:
\[ \lim_{x\to\infty}\frac{\pi(x)}{x/\log x} = 1, \]
known as the asymptotic law of distribution of prime numbers. Using asymptotic notation this result can be restated as
\[ \pi(x) \sim \frac{x}{\log x}. \]
This notation (and the theorem) does not say anything about the limit of the difference of the two functions as x increases without bound. Instead, the theorem states that x / log x approximates π(x) in the sense that the relative error of this approximation approaches 0 as x increases without bound.
The prime number theorem is equivalent to the statement that the nth prime number p_n satisfies
\[ p_n \sim n \log n, \]
the asymptotic notation meaning, again, that the relative error of this approximation approaches 0 as n increases without bound. For example, the 2×10^17-th prime number is 8512677386048191063, and (2×10^17) log(2×10^17) rounds to 7967418752291744388, a relative error of about 6.4%.
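The arithmetic in this example can be reproduced directly (a sketch; the value of the 2×10^17-th prime is taken from the text, not recomputed here):

```python
import math

n = 2 * 10**17
p_n = 8512677386048191063            # the 2*10^17-th prime, as quoted above
approx = n * math.log(n)             # first-order approximation p_n ~ n log n
print(f"n log n ≈ {approx:.6e}")                          # ≈ 7.967419e+18
print(f"relative error ≈ {(p_n - approx) / p_n:.2%}")     # ≈ 6.4%
```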
On the other hand, the following asymptotic relations are logically equivalent:
As outlined below, the prime number theorem is also equivalent to
\[ \lim_{x\to\infty}\frac{\vartheta(x)}{x} = \lim_{x\to\infty}\frac{\psi(x)}{x} = 1, \]
where ϑ and ψ are the first and the second Chebyshev functions respectively, and to
\[ \lim_{x\to\infty}\frac{M(x)}{x} = 0, \]
where M(x) = Σ_{n ≤ x} μ(n) is the Mertens function.
History of the proof of the asymptotic law of prime numbers
Based on the tables by Anton Felkel and Jurij Vega, Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a / (A log a + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B = −1.08366. Carl Friedrich Gauss considered the same question at age 15 or 16 "in the year 1792 or 1793", according to his own recollection in 1849. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x / log(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients.
In two papers from 1848 and 1850, the Russian mathematician Pafnuty Chebyshev attempted to prove the asymptotic law of distribution of prime numbers. His work is notable for the use of the zeta function ζ(s), for real values of the argument "s", as in works of Leonhard Euler, as early as 1737. Chebyshev's papers predated Riemann's celebrated memoir of 1859, and he succeeded in proving a slightly weaker form of the asymptotic law, namely, that if the limit as x goes to infinity of π(x) / (x / log(x)) exists at all, then it is necessarily equal to one. He was able to prove unconditionally that this ratio is bounded above and below by two explicitly given constants near 1, for all sufficiently large x. Although Chebyshev's paper did not prove the Prime Number Theorem, his estimates for π(x) were strong enough for him to prove Bertrand's postulate that there exists a prime number between n and 2n for any integer n ≥ 2.
An important paper concerning the distribution of prime numbers was Riemann's 1859 memoir "On the Number of Primes Less Than a Given Magnitude", the only paper he ever wrote on the subject. Riemann introduced new ideas into the subject, chiefly that the distribution of prime numbers is intimately connected with the zeros of the analytically extended Riemann zeta function of a complex variable. In particular, it is in this paper that the idea to apply methods of complex analysis to the study of the real function π(x) originates. Extending Riemann's ideas, two proofs of the asymptotic law of the distribution of prime numbers were found independently by Jacques Hadamard and Charles Jean de la Vallée Poussin and appeared in the same year (1896). Both proofs used methods from complex analysis, establishing as a main step of the proof that the Riemann zeta function ζ(s) is nonzero for all complex values of the variable s that have the form s = 1 + it with t > 0.
During the 20th century, the theorem of Hadamard and de la Vallée Poussin also became known as the Prime Number Theorem. Several different proofs of it were found, including the "elementary" proofs of Atle Selberg and Paul Erdős (1949). Hadamard's and de la Vallée Poussin's original proofs are long and elaborate; later proofs introduced various simplifications through the use of Tauberian theorems but remained difficult to digest. A short proof was discovered in 1980 by the American mathematician Donald J. Newman. Newman's proof is arguably the simplest known proof of the theorem, although it is non-elementary in the sense that it uses Cauchy's integral theorem from complex analysis.
Proof sketch
Here is a sketch of the proof referred to in one of Terence Tao's lectures. Like most proofs of the PNT, it starts out by reformulating the problem in terms of a less intuitive, but better-behaved, prime-counting function. The idea is to count the primes (or a related set such as the set of prime powers) with weights to arrive at a function with smoother asymptotic behavior. The most common such generalized counting function is the Chebyshev function ψ(x), defined by
\[ \psi(x) = \sum_{p^{k} \le x} \log p . \]
This is sometimes written as
\[ \psi(x) = \sum_{n \le x} \Lambda(n), \]
where Λ(n) is the von Mangoldt function, namely
\[ \Lambda(n) = \begin{cases} \log p & \text{if } n = p^{k} \text{ for some prime } p \text{ and integer } k \ge 1, \\ 0 & \text{otherwise.} \end{cases} \]
It is now relatively easy to check that the PNT is equivalent to the claim that
\[ \lim_{x\to\infty}\frac{\psi(x)}{x} = 1 . \]
Indeed, this follows from the easy estimates
\[ \psi(x) = \sum_{p \le x} \left\lfloor \frac{\log x}{\log p} \right\rfloor \log p \;\le\; \sum_{p \le x} \log x = \pi(x)\log x \]
and (using big O notation) for any ε > 0,
\[ \psi(x) \;\ge\; \sum_{x^{1-\varepsilon} \le p \le x} \log p \;\ge\; (1-\varepsilon)\bigl(\pi(x) + O(x^{1-\varepsilon})\bigr)\log x . \]
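As a quick numerical illustration of the reformulated claim (a sketch; the naive trial-division implementation of Λ and the cutoffs below are arbitrary choices):

```python
import math

def von_mangoldt(n: int) -> float:
    """Lambda(n): log p if n is a power of a single prime p, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return math.log(n)   # n itself is prime

# psi(x) = sum of Lambda(n) for n <= x; the ratio psi(x)/x should creep toward 1.
for x in (10**3, 10**4, 10**5):
    psi = sum(von_mangoldt(n) for n in range(2, x + 1))
    print(f"x = {x:>6}: psi(x)/x = {psi / x:.4f}")
```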
The next step is to find a useful representation for ψ(x). Let ζ(s) be the Riemann zeta function. It can be shown that ζ(s) is related to the von Mangoldt function Λ(n), and hence to ψ(x), via the relation
\[ -\frac{\zeta'(s)}{\zeta(s)} = \sum_{n=1}^{\infty} \Lambda(n)\, n^{-s} \qquad (\operatorname{Re}(s) > 1). \]
A delicate analysis of this equation and related properties of the zeta function, using the Mellin transform and Perron's formula, shows that for non-integer x the equation
\[ \psi(x) = x - \log(2\pi) - \sum_{\rho : \zeta(\rho) = 0} \frac{x^{\rho}}{\rho} \]
holds, where the sum is over all zeros (trivial and nontrivial) of the zeta function. This striking formula is one of the so-called explicit formulas of number theory, and is already suggestive of the result we wish to prove, since the term x (claimed to be the correct asymptotic order of ψ(x)) appears on the right-hand side, followed by (presumably) lower-order asymptotic terms.
The next step in the proof involves a study of the zeros of the zeta function. The trivial zeros −2, −4, −6, −8, ... can be handled separately: their contribution to the sum is
\[ \sum_{n=1}^{\infty} \frac{x^{-2n}}{2n} = -\frac{1}{2}\log\bigl(1 - x^{-2}\bigr), \]
which vanishes for large x. The nontrivial zeros, namely those on the critical strip 0 ≤ Re(s) ≤ 1, can potentially be of an asymptotic order comparable to the main term x if Re(ρ) = 1, so we need to show that all zeros have real part strictly less than 1.
Non-vanishing on Re(s) = 1
To do this, we take for granted that ζ(s) is meromorphic in the half-plane Re(s) > 0, and is analytic there except for a simple pole at s = 1, and that there is a product formula
\[ \zeta(s) = \prod_{p} \frac{1}{1 - p^{-s}} \]
for Re(s) > 1. This product formula follows from the existence of unique prime factorization of integers, and shows that ζ(s) is never zero in this region, so that its logarithm is defined there and
\[ \log \zeta(s) = -\sum_{p} \log\bigl(1 - p^{-s}\bigr) = \sum_{p}\sum_{n=1}^{\infty} \frac{p^{-ns}}{n} . \]
Write s = x + iy; then
\[ \bigl|\zeta(x+iy)\bigr| = \exp\!\left( \sum_{p}\sum_{n=1}^{\infty} \frac{\cos(ny\log p)}{n\,p^{nx}} \right). \]
Now observe the identity
\[ 3 + 4\cos\varphi + \cos 2\varphi = 2\,(1 + \cos\varphi)^{2} \ge 0 , \]
so that
\[ \bigl|\zeta(x)^{3}\,\zeta(x+iy)^{4}\,\zeta(x+2iy)\bigr| \ge 1 \]
for all x > 1. Suppose now that ζ(1 + iy) = 0. Certainly y is not zero, since ζ(s) has a simple pole at s = 1. Suppose that x > 1 and let x tend to 1 from above. Since ζ(s) has a simple pole at s = 1 and ζ(x + 2iy) stays analytic, while the assumed zero at 1 + iy forces ζ(x + iy)^4 to vanish to at least fourth order in (x − 1), the left-hand side of the previous inequality tends to 0, a contradiction.
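The inequality can be spot-checked numerically (a sketch; mpmath's zeta function is used for convenience, and the sample points are arbitrary):

```python
from mpmath import mpc, zeta

# |zeta(x)^3 * zeta(x+iy)^4 * zeta(x+2iy)| should be >= 1 for every x > 1.
for x in (1.01, 1.1, 1.5, 2.0):
    for y in (0.5, 1.0, 14.134725, 30.0):
        val = abs(zeta(x) ** 3 * zeta(mpc(x, y)) ** 4 * zeta(mpc(x, 2 * y)))
        print(f"x = {x:<5}  y = {y:<10}  product = {val}")
```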
Finally, we can conclude that the PNT is heuristically true. To rigorously complete the proof there are still serious technicalities to overcome, due to the fact that the summation over zeta zeros in the explicit formula for ψ(x) does not converge absolutely but only conditionally and in a "principal value" sense. There are several ways around this problem but many of them require rather delicate complex-analytic estimates. Edwards's book provides the details. Another method is to use Ikehara's Tauberian theorem, though this theorem is itself quite hard to prove. D.J. Newman observed that the full strength of Ikehara's theorem is not needed for the prime number theorem, and one can get away with a special case that is much easier to prove.
Newman's proof of the prime number theorem
D. J. Newman gives a quick proof of the prime number theorem (PNT). The proof is "non-elementary" by virtue of relying on complex analysis, but uses only elementary techniques from a first course in the subject: Cauchy's integral formula, Cauchy's integral theorem and estimates of complex integrals. Here is a brief sketch of this proof; see the references for the complete details.
The proof uses the same preliminaries as in the previous section, except that instead of the function ψ(x), the Chebyshev function ϑ(x) = Σ_{p ≤ x} log p is used, which is obtained by dropping some of the terms from the series for ψ(x). It is easy to show that the PNT is equivalent to lim_{x→∞} ϑ(x)/x = 1. Likewise, instead of −ζ′(s)/ζ(s) the function Φ(s) = Σ_p (log p) p^{−s} is used, which is obtained by dropping some terms in the series for −ζ′(s)/ζ(s). The functions Φ(s) and −ζ′(s)/ζ(s) differ by a function holomorphic in a neighbourhood of the line Re(s) = 1. Since, as was shown in the previous section, ζ(s) has no zeroes on the line Re(s) = 1, Φ(s) − 1/(s − 1) has no singularities on Re(s) = 1.
One further piece of information needed in Newman's proof, and which is the key to the estimates in his simple method, is that ϑ(x)/x is bounded. This is proved using an ingenious and easy method due to Chebyshev.
Integration by parts shows how ϑ(x) and Φ(s) are related. For Re(s) > 1,
\[ \Phi(s) = \int_{1}^{\infty} x^{-s}\, d\vartheta(x) = s\int_{1}^{\infty} \vartheta(x)\, x^{-s-1}\, dx . \]
Newman's method proves the PNT by showing that the integral
\[ I = \int_{1}^{\infty} \frac{\vartheta(x) - x}{x^{2}}\, dx \]
converges, and therefore the integrand goes to zero as x → ∞, which is the PNT. In general, the convergence of an improper integral does not imply that the integrand goes to zero at infinity, since it may oscillate, but since ϑ is increasing, it is easy to show in this case.
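A numerical illustration (not part of the proof, and only suggestive): the partial integrals ∫_1^X (ϑ(x) − x)/x² dx computed below appear to settle down as X grows. The sieve bound and the reporting points are arbitrary choices.

```python
import math

LIMIT = 10**6
sieve = bytearray([1]) * (LIMIT + 1)
sieve[0:2] = b"\x00\x00"
for p in range(2, int(LIMIT ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = bytearray(len(range(p * p, LIMIT + 1, p)))

theta, partial, report = 0.0, 0.0, 10
for x in range(2, LIMIT + 1):
    if sieve[x]:
        theta += math.log(x)          # theta(x) = sum of log p over primes p <= x
    partial += (theta - x) / (x * x)  # crude step-1 Riemann sum of the integrand
    if x == report:
        print(f"X = {x:>8}: partial integral ≈ {partial:.4f}")
        report *= 10
```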
To show the convergence of I, for Re(z) > 0 let
\[ g_T(z) = \int_{0}^{T} f(t)\,e^{-zt}\, dt \quad\text{and}\quad g(z) = \int_{0}^{\infty} f(t)\,e^{-zt}\, dt, \quad\text{where } f(t) = \vartheta(e^{t})\,e^{-t} - 1 ; \]
then
\[ \lim_{T\to\infty} g_T(z) = g(z) = \frac{\Phi(z+1)}{z+1} - \frac{1}{z} , \]
which is equal to a function holomorphic on the line Re(z) = 0.
The convergence of the integral I, and thus the PNT, is proved by showing that lim_{T→∞} g_T(0) = g(0). This involves a change of order of limits, since it can be written as lim_{T→∞} lim_{z→0} g_T(z) = lim_{z→0} lim_{T→∞} g_T(z), and is therefore classified as a Tauberian theorem.
The difference g(0) − g_T(0) is expressed using Cauchy's integral formula and then shown to be small for large T by estimating the integrand. Fix R > 0 and δ > 0 such that g(z) is holomorphic in the region where |z| ≤ R and Re(z) ≥ −δ, and let C be the boundary of this region. Since 0 is in the interior of the region, Cauchy's integral formula gives
\[ g(0) - g_T(0) = \frac{1}{2\pi i}\oint_{C} \bigl(g(z) - g_T(z)\bigr)\, F(z)\, \frac{dz}{z}, \qquad F(z) = e^{zT}\left(1 + \frac{z^{2}}{R^{2}}\right), \]
where F(z) is the factor introduced by Newman, which does not change the integral since F is entire and F(0) = 1.
To estimate the integral, break the contour C into two parts, C = C₊ + C₋, where C₊ = C ∩ {z : Re(z) > 0} and C₋ = C ∩ {z : Re(z) ≤ 0}; the difference g(0) − g_T(0) then splits into three integrals, one over C₊ involving the tail of the integral defining g, one over C₋ involving g_T, and one over C₋ involving g alone. Since ϑ(x)/x, and hence f(t), is bounded, let B be an upper bound for the absolute value of f(t). This bound, together with an estimate for |F(z)| on the arc |z| = R, gives that the first integral in absolute value is at most B/R. The integrand over C₋ in the second integral is entire, so by Cauchy's integral theorem the contour C₋ can be modified to a semicircle of radius R in the left half-plane without changing the integral, and the same argument as for the first integral gives that the absolute value of the second integral is at most B/R. Finally, letting T → ∞, the third integral goes to zero, since e^{zT}, and hence F(z), goes to zero on the contour. Combining the two estimates and the limit, we get
\[ \limsup_{T\to\infty}\, \bigl|g(0) - g_T(0)\bigr| \le \frac{2B}{R} . \]
This holds for any R, so lim_{T→∞} g_T(0) = g(0), and the PNT follows.
Prime-counting function in terms of the logarithmic integral
In a handwritten note on a reprint of his 1838 paper "Sur l'usage des séries infinies dans la théorie des nombres", which he mailed to Gauss, Dirichlet conjectured (under a slightly different form appealing to a series rather than an integral) that an even better approximation to π(x) is given by the offset logarithmic integral function Li(x), defined by
\[ \operatorname{Li}(x) = \int_{2}^{x} \frac{dt}{\log t} = \operatorname{li}(x) - \operatorname{li}(2) . \]
Indeed, this integral is strongly suggestive of the notion that the "density" of primes around t should be 1 / log t. This function is related to the logarithm by the asymptotic expansion
\[ \operatorname{Li}(x) \sim \frac{x}{\log x} \sum_{k=0}^{\infty} \frac{k!}{(\log x)^{k}} = \frac{x}{\log x} + \frac{x}{(\log x)^{2}} + \frac{2x}{(\log x)^{3}} + \cdots \]
So, the prime number theorem can also be written as π(x) ~ Li(x). In fact, in another paper in 1899 de la Vallée Poussin proved that
\[ \pi(x) = \operatorname{Li}(x) + O\!\bigl(x\,e^{-a\sqrt{\log x}}\bigr) \quad \text{as } x \to \infty, \]
for some positive constant a, where O(...) is the big O notation. This has been improved to
\[ \pi(x) = \operatorname{Li}(x) + O\!\Bigl(x\,\exp\!\bigl(-c\,(\log x)^{3/5}(\log\log x)^{-1/5}\bigr)\Bigr), \]
where c is a positive constant.
In 2016, Trudgian proved an explicit upper bound for the difference between π(x) and li(x), valid for all x above an explicit threshold.
The connection between the Riemann zeta function and π(x) is one reason the Riemann hypothesis has considerable importance in number theory: if established, it would yield a far better estimate of the error involved in the prime number theorem than is available today. More specifically, Helge von Koch showed in 1901 that if the Riemann hypothesis is true, the error term in the above relation can be improved to
\[ \pi(x) = \operatorname{Li}(x) + O\!\bigl(\sqrt{x}\,\log x\bigr) \]
(this last estimate is in fact equivalent to the Riemann hypothesis). The constant involved in the big O notation was estimated in 1976 by Lowell Schoenfeld: assuming the Riemann hypothesis,
\[ \bigl|\pi(x) - \operatorname{li}(x)\bigr| < \frac{\sqrt{x}\,\log x}{8\pi} \]
for all x ≥ 2657. He also derived a similar bound for the Chebyshev prime-counting function ψ:
\[ \bigl|\psi(x) - x\bigr| < \frac{\sqrt{x}\,(\log x)^{2}}{8\pi} \]
for all x ≥ 73.2. This latter bound has been shown to express a variance-to-mean power law (when regarded as a random function over the integers) and 1/f noise, and to also correspond to the Tweedie compound Poisson distribution. (The Tweedie distributions represent a family of scale-invariant distributions that serve as foci of convergence for a generalization of the central limit theorem.)
The logarithmic integral li(x) is larger than π(x) for "small" values of x. This is because it is (in some sense) counting not primes, but prime powers, where a power p^n of a prime p is counted as 1/n of a prime. This suggests that li(x) should usually be larger than π(x) by roughly li(√x)/2, and in particular should always be larger than π(x). However, in 1914, J. E. Littlewood proved that the difference π(x) − li(x) changes sign infinitely often. The first value of x where π(x) exceeds li(x) is probably around x ≈ 10^316; see the article on Skewes' number for more details. (On the other hand, the offset logarithmic integral Li(x) is smaller than π(x) already for x = 2; indeed, Li(2) = 0, while π(2) = 1.)
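For small x these relationships are easy to tabulate. The sketch below evaluates li(x) as Ei(log x) using SciPy's exponential integral (an implementation choice for illustration, not anything from the sources cited above) and compares it with π(x) and x/log x:

```python
import math
from scipy.special import expi

def li(x: float) -> float:
    """Logarithmic integral li(x) = Ei(log x)."""
    return float(expi(math.log(x)))

def prime_count(n: int) -> int:
    """pi(n) via a simple sieve (illustrative helper)."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

print("li(2) ≈", round(li(2), 4), " so Li(2) = li(2) - li(2) = 0, while pi(2) =", prime_count(2))
for k in range(2, 7):
    x = 10 ** k
    print(f"x = 10^{k}: pi(x) = {prime_count(x):>7},  x/log x = {x / math.log(x):>9.1f},  li(x) = {li(x):>9.1f}")
```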
Elementary proofs
In the first half of the twentieth century, some mathematicians (notably G. H. Hardy) believed that there exists a hierarchy of proof methods in mathematics depending on what sorts of numbers (integers, reals, complex) a proof requires, and that the prime number theorem (PNT) is a "deep" theorem by virtue of requiring complex analysis. This belief was somewhat shaken by a proof of the PNT based on Wiener's tauberian theorem, though this could be set aside if Wiener's theorem were deemed to have a "depth" equivalent to that of complex variable methods.
In March 1948, Atle Selberg established, by "elementary" means, the asymptotic formula
\[ \vartheta(x)\log x + \sum_{p \le x} \log p \;\vartheta\!\left(\frac{x}{p}\right) = 2x\log x + O(x), \]
where
\[ \vartheta(x) = \sum_{p \le x} \log p \]
for primes p. By July of that year, Selberg and Paul Erdős had each obtained elementary proofs of the PNT, both using Selberg's asymptotic formula as a starting point. These proofs effectively laid to rest the notion that the PNT was "deep" in that sense, and showed that technically "elementary" methods were more powerful than had been believed to be the case. On the history of the elementary proofs of the PNT, including the Erdős–Selberg priority dispute, see an article by Dorian Goldfeld.
There is some debate about the significance of Erdős and Selberg's result. There is no rigorous and widely accepted definition of the notion of elementary proof in number theory, so it is not clear exactly in what sense their proof is "elementary". Although it does not use complex analysis, it is in fact much more technical than the standard proof of the PNT. One possible definition of an "elementary" proof is "one that can be carried out in first-order Peano arithmetic." There are number-theoretic statements (for example, the Paris–Harrington theorem) provable using second-order but not first-order methods, but such theorems are rare to date. Erdős and Selberg's proof can certainly be formalized in Peano arithmetic, and in 1994, Charalambos Cornaros and Costas Dimitracopoulos proved that their proof can be formalized in a very weak fragment of PA, namely IΔ0 + exp. However, this does not address the question of whether or not the standard proof of the PNT can be formalized in PA.
Computer verifications
In 2005, Avigad et al. employed the Isabelle theorem prover to devise a computer-verified variant of the Erdős–Selberg proof of the PNT. This was the first machine-verified proof of the PNT. Avigad chose to formalize the Erdős–Selberg proof rather than an analytic one because while Isabelle's library at the time could implement the notions of limit, derivative, and transcendental function, it had almost no theory of integration to speak of.
In 2009, John Harrison employed HOL Light to formalize a proof employing complex analysis. By developing the necessary analytic machinery, including the Cauchy integral formula, Harrison was able to formalize "a direct, modern and elegant proof instead of the more involved 'elementary' Erdős–Selberg argument".
Prime number theorem for arithmetic progressions
Let π_{d,a}(x) denote the number of primes in the arithmetic progression a, a + d, a + 2d, a + 3d, ... that are less than x. Dirichlet and Legendre conjectured, and de la Vallée Poussin proved, that if a and d are coprime, then
\[ \pi_{d,a}(x) \sim \frac{\operatorname{Li}(x)}{\varphi(d)} , \]
where φ is Euler's totient function. In other words, the primes are distributed evenly among the residue classes [a] modulo d with gcd(a, d) = 1. This is stronger than Dirichlet's theorem on arithmetic progressions (which only states that there is an infinity of primes in each class) and can be proved using methods similar to those used by Newman for his proof of the prime number theorem.
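A quick empirical check of this even distribution (a sketch; the modulus d = 10 and the bound 10^6 are arbitrary choices, and SymPy is used only for convenience):

```python
import math
from sympy import primerange, totient

d, x = 10, 10**6
counts = {a: 0 for a in range(d) if math.gcd(a, d) == 1}
for p in primerange(2, x):
    r = p % d
    if r in counts:
        counts[r] += 1

even_split = sum(counts.values()) / int(totient(d))   # phi(10) = 4 coprime classes
for a, c in sorted(counts.items()):
    print(f"primes ≡ {a} (mod {d}) below {x}: {c}  (even split ≈ {even_split:.0f})")
```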
The Siegel–Walfisz theorem gives a good estimate for the distribution of primes in residue classes.
Bennett et al. proved the following estimate that has explicit constants A and B (Theorem 1.3): Let d be an integer and let a be an integer that is coprime to d. Then there are positive constants A and B such that
where
and
Prime number race
Although we have in particular
\[ \pi_{4,1}(x) \sim \pi_{4,3}(x) , \]
empirically the primes congruent to 3 are more numerous and are nearly always ahead in this "prime number race"; the first reversal occurs at x = 26861. However, Littlewood showed in 1914 that there are infinitely many sign changes for the function
\[ \pi_{4,1}(x) - \pi_{4,3}(x) , \]
so the lead in the race switches back and forth infinitely many times. The phenomenon that π_{4,3}(x) is ahead most of the time is called Chebyshev's bias. The prime number race generalizes to other moduli and is the subject of much research; Pál Turán asked whether it is always the case that π(x;a,c) and π(x;b,c) change places when a and b are coprime to c. Granville and Martin give a thorough exposition and survey.
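The first reversal quoted above is small enough to find by brute force (a sketch; the search bound is an arbitrary choice):

```python
from sympy import isprime

count_1, count_3 = 0, 0
for n in range(3, 30000):
    if isprime(n):
        if n % 4 == 1:
            count_1 += 1
        elif n % 4 == 3:
            count_3 += 1
        if count_1 > count_3:
            print("pi_{4,1}(x) first exceeds pi_{4,3}(x) at x =", n)   # expect 26861
            break
```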
Non-asymptotic bounds on the prime-counting function
The prime number theorem is an asymptotic result. It gives an ineffective bound on π(x) as a direct consequence of the definition of the limit: for all ε > 0, there is an S such that for all x > S,
\[ (1 - \varepsilon)\,\frac{x}{\log x} \;<\; \pi(x) \;<\; (1 + \varepsilon)\,\frac{x}{\log x} . \]
However, better bounds on π(x) are known, for instance Pierre Dusart's
The first inequality holds for all x ≥ 599 and the second one for x ≥ 355991.
A weaker but sometimes useful bound for x ≥ 55 is
In Pierre Dusart's thesis there are stronger versions of this type of inequality that are valid for larger x. Later in 2010, Dusart proved:
The proof by de la Vallée Poussin implies the following: For every ε > 0, there is an S such that for all x > S,
Approximations for the nth prime number
As a consequence of the prime number theorem, one gets an asymptotic expression for the nth prime number, denoted by p_n:
\[ p_n \sim n \log n . \]
A better approximation is
Again considering the 2×10^17-th prime number 8512677386048191063, this gives an estimate of 8512681315554715386; the first 5 digits match and the relative error is about 0.00005%.
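For moderate n these approximations can be compared directly against exact values (a sketch; sympy.prime supplies the exact nth prime, and n(log n + log log n − 1) is quoted here as one standard refinement of n log n rather than as the exact expansion referred to above):

```python
import math
from sympy import prime

for n in (10**3, 10**4, 10**5):
    p = prime(n)                                         # exact nth prime
    a1 = n * math.log(n)                                 # p_n ~ n log n
    a2 = n * (math.log(n) + math.log(math.log(n)) - 1)   # one further correction term
    print(f"n = {n:>6}: p_n = {p:>8},  n log n ≈ {a1:>9.0f},  refined ≈ {a2:>9.0f}")
```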
Rosser's theorem states that
\[ p_n > n \log n . \]
This can be improved by the following pair of bounds:
Table of π(x), x / log x, and li(x)
The table compares exact values of π(x) to the two approximations x / log x and li(x). The last column, x / π(x), is the average prime gap below x.
x | π(x) | π(x) − x/log x | π(x) / (x/log x) | li(x) − π(x) | x/π(x)
10 | 4 | −0.3 | 0.921 | 2.2 | 2.5
10^2 | 25 | 3.3 | 1.151 | 5.1 | 4
10^3 | 168 | 23 | 1.161 | 10 | 5.952
10^4 | 1229 | 143 | 1.132 | 17 | 8.137
10^5 | 9592 | 906 | 1.104 | 38 | 10.425
10^6 | 78498 | 6116 | 1.084 | 130 | 12.740
10^7 | 664579 | 44158 | 1.071 | 339 | 15.047
10^8 | 5761455 | 332774 | 1.061 | 754 | 17.357
10^9 | 50847534 | 2592592 | 1.054 | 1701 | 19.667
10^10 | 455052511 | 20758029 | 1.048 | 3104 | 21.975
10^11 | 4118054813 | 169923159 | 1.043 | 11588 | 24.283
10^12 | 37607912018 | 1416705193 | 1.039 | 38263 | 26.590
10^13 | 346065536839 | 11992858452 | 1.034 | 108971 | 28.896
10^14 | 3204941750802 | 102838308636 | 1.033 | 314890 | 31.202
10^15 | 29844570422669 | 891604962452 | 1.031 | 1052619 | 33.507
10^16 | 279238341033925 | 7804289844393 | 1.029 | 3214632 | 35.812
10^17 | 2623557157654233 | 68883734693281 | 1.027 | 7956589 | 38.116
10^18 | 24739954287740860 | 612483070893536 | 1.025 | 21949555 | 40.420
10^19 | 234057667276344607 | 5481624169369960 | 1.024 | 99877775 | 42.725
10^20 | 2220819602560918840 | 49347193044659701 | 1.023 | 222744644 | 45.028
10^21 | 21127269486018731928 | 446579871578168707 | 1.022 | 597394254 | 47.332
10^22 | 201467286689315906290 | 4060704006019620994 | 1.021 | 1932355208 | 49.636
10^23 | 1925320391606803968923 | 37083513766578631309 | 1.020 | 7250186216 | 51.939
10^24 | 18435599767349200867866 | 339996354713708049069 | 1.019 | 17146907278 | 54.243
10^25 | 176846309399143769411680 | 3128516637843038351228 | 1.018 | 55160980939 | 56.546
OEIS | A006880 | A057835 | | A057752 |
The value for π(1024) was originally computed assuming the Riemann hypothesis; it has since been verified unconditionally.
Analogue for irreducible polynomials over a finite field
There is an analogue of the prime number theorem that describes the "distribution" of irreducible polynomials over a finite field; the form it takes is strikingly similar to the case of the classical prime number theorem.
To state it precisely, let F = GF(q) be the finite field with q elements, for some fixed q, and let N_n be the number of monic irreducible polynomials over F whose degree is equal to n. That is, we are looking at polynomials with coefficients chosen from F which cannot be written as products of polynomials of smaller degree. In this setting, these polynomials play the role of the prime numbers, since all other monic polynomials are built up of products of them. One can then prove that
\[ N_n \sim \frac{q^{n}}{n} . \]
If we make the substitution x = q^n, then the right hand side is just
\[ \frac{x}{\log_{q} x} , \]
which makes the analogy clearer. Since there are precisely q^n monic polynomials of degree n (including the reducible ones), this can be rephrased as follows: if a monic polynomial of degree n is selected randomly, then the probability of it being irreducible is about 1/n.
One can even prove an analogue of the Riemann hypothesis, namely that
\[ N_n = \frac{q^{n}}{n} + O\!\left(\frac{q^{n/2}}{n}\right) . \]
The proofs of these statements are far simpler than in the classical case. They involve a short combinatorial argument, summarised as follows: every element of the degree n extension of F is a root of some irreducible polynomial whose degree d divides n; by counting these roots in two different ways one establishes that
\[ q^{n} = \sum_{d \mid n} d\,N_d , \]
where the sum is over all divisors d of n. Möbius inversion then yields
\[ N_n = \frac{1}{n} \sum_{d \mid n} \mu\!\left(\frac{n}{d}\right) q^{d} , \]
where μ(k) is the Möbius function. (This formula was known to Gauss.) The main term occurs for d = n, and it is not difficult to bound the remaining terms. The "Riemann hypothesis" statement depends on the fact that the largest proper divisor of n can be no larger than n/2.
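The inversion formula is easy to evaluate directly. The following sketch (with self-contained helper functions; the choices of q and of the degrees are illustrative) computes N_n and compares it with the main term q^n/n:

```python
def mobius(n: int) -> int:
    """Moebius function mu(n), by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # square factor => mu(n) = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def num_irreducible(q: int, n: int) -> int:
    """N_n = (1/n) * sum over d | n of mu(n/d) * q^d."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return sum(mobius(n // d) * q ** d for d in divisors) // n

q = 2
for n in (2, 3, 4, 8, 16):
    print(f"n = {n:>2}: N_n = {num_irreducible(q, n):>5},  q^n/n = {q**n / n:.1f}")
```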