
Pi

From Wikipedia, the free encyclopedia

The number π (/paɪ/; spelled out as "pi") is a mathematical constant that is the ratio of a circle's circumference to its diameter, approximately equal to 3.14159. The number π appears in many formulas across mathematics and physics. It is an irrational number, meaning that it cannot be expressed exactly as a ratio of two integers, although fractions such as 22/7 are commonly used to approximate it. Consequently, its decimal representation never ends, nor enters a permanently repeating pattern. It is a transcendental number, meaning that it cannot be a solution of an equation involving only sums, products, powers, and integers. The transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. The decimal digits of π appear to be randomly distributed, but no proof of this conjecture has been found.

For thousands of years, mathematicians have attempted to extend their understanding of π, sometimes by computing its value to a high degree of accuracy. Ancient civilizations, including the Egyptians and Babylonians, required fairly accurate approximations of π for practical computations. Around 250 BC, the Greek mathematician Archimedes created an algorithm to approximate π with arbitrary accuracy. In the 5th century AD, Chinese mathematicians approximated π to seven digits, while Indian mathematicians made a five-digit approximation, both using geometrical techniques. The first computational formula for π, based on infinite series, was discovered a millennium later. The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by the Welsh mathematician William Jones in 1706.

The invention of calculus soon led to the calculation of hundreds of digits of π, enough for all practical scientific computations. Nevertheless, in the 20th and 21st centuries, mathematicians and computer scientists have pursued new approaches that, when combined with increasing computational power, extended the decimal representation of π to many trillions of digits. These computations are motivated by the development of efficient algorithms to calculate numeric series, as well as the human quest to break records. The extensive computations involved have also been used to test supercomputers.

Because its definition relates to the circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses and spheres. It is also found in formulae from other topics in science, such as cosmology, fractals, thermodynamics, mechanics, and electromagnetism. In modern mathematical analysis, it is often instead defined without any reference to geometry; therefore, it also appears in areas having little to do with geometry, such as number theory and statistics. The ubiquity of π makes it one of the most widely known mathematical constants inside and outside of science. Several books devoted to π have been published, and record-setting calculations of the digits of π often result in news headlines.

Fundamentals

Name

The symbol used by mathematicians to represent the ratio of a circle's circumference to its diameter is the lowercase Greek letter π, sometimes spelled out as pi. In English, π is pronounced as "pie" (/paɪ/ PY). In mathematical use, the lowercase letter π is distinguished from its capitalized and enlarged counterpart Π, which denotes a product of a sequence, analogous to how Σ denotes summation.

The choice of the symbol π is discussed in the section Adoption of the symbol π.

Definition

A diagram of a circle, with the width labelled as diameter, and the perimeter labelled as circumference
The circumference of a circle is slightly more than three times as long as its diameter. The exact ratio is called π.

π is commonly defined as the ratio of a circle's circumference C to its diameter d:

    π = C/d

The ratio C/d is constant, regardless of the circle's size. For example, if a circle has twice the diameter of another circle, it will also have twice the circumference, preserving the ratio C/d. This definition of π implicitly makes use of flat (Euclidean) geometry; although the notion of a circle can be extended to any curved (non-Euclidean) geometry, these new circles will no longer satisfy the formula π = C/d.

Here, the circumference of a circle is the arc length around the perimeter of the circle, a quantity which can be formally defined independently of geometry using limits—a concept in calculus. For example, one may directly compute the arc length of the top half of the unit circle, given in Cartesian coordinates by the equation x² + y² = 1, as the integral:

    π = ∫₋₁¹ dx/√(1 − x²)

Such an integral was adopted as the definition of π by Karl Weierstrass in 1841.
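As a rough numerical check, the arc-length integral can be approximated with a midpoint rule, which never samples the endpoints where the integrand diverges. This is only a sketch (step count chosen arbitrarily), not the analytic evaluation:

```python
import math

# Midpoint-rule approximation of the arc-length integral
#   pi = integral from -1 to 1 of dx / sqrt(1 - x^2).
# The integrand diverges at x = +/-1, but the midpoint rule never samples
# the endpoints, so the sum still converges (slowly) to pi.
def pi_from_arc_length(n=200_000):
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h      # midpoint of the i-th subinterval
        total += h / math.sqrt(1.0 - x * x)
    return total

print(pi_from_arc_length())  # roughly 3.14; error dominated by the endpoints
```

The error is dominated by the two singular endpoint cells, so convergence is much slower than for a smooth integrand.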

Integration is no longer commonly used in a first analytical definition because, as Remmert 2012 explains, differential calculus typically precedes integral calculus in the university curriculum, so it is desirable to have a definition of π that does not rely on the latter. One such definition, due to Richard Baltzer and popularized by Edmund Landau, is the following: π is twice the smallest positive number at which the cosine function equals 0. π is also the smallest positive number at which the sine function equals zero, and the difference between consecutive zeroes of the sine function. The cosine and sine can be defined independently of geometry as a power series, or as the solution of a differential equation.

In a similar spirit, π can be defined using properties of the complex exponential, exp z, of a complex variable z. Like the cosine, the complex exponential can be defined in one of several ways. The set of complex numbers at which exp z is equal to one is then an (imaginary) arithmetic progression of the form:

    {..., −2πi, 0, 2πi, 4πi, ...} = {2πki : k ∈ ℤ}

and there is a unique positive real number π with this property.

A variation on the same idea, making use of sophisticated mathematical concepts of topology and algebra, is the following theorem: there is a unique (up to automorphism) continuous isomorphism from the group R/Z of real numbers under addition modulo integers (the circle group), onto the multiplicative group of complex numbers of absolute value one. The number π is then defined as half the magnitude of the derivative of this homomorphism.

Irrationality and normality

π is an irrational number, meaning that it cannot be written as the ratio of two integers. Fractions such as 22/7 and 355/113 are commonly used to approximate π, but no common fraction (ratio of whole numbers) can be its exact value. Because π is irrational, it has an infinite number of digits in its decimal representation, and does not settle into an infinitely repeating pattern of digits. There are several proofs that π is irrational; they generally require calculus and rely on the reductio ad absurdum technique. The degree to which π can be approximated by rational numbers (called the irrationality measure) is not precisely known; estimates have established that the irrationality measure is larger than the measure of e or ln 2 but smaller than the measure of Liouville numbers.

The digits of π have no apparent pattern and have passed tests for statistical randomness, including tests for normality; a number of infinite length is called normal when all possible sequences of digits (of any given length) appear equally often. The conjecture that π is normal has not been proven or disproven.

Since the advent of computers, a large number of digits of π have been available on which to perform statistical analysis. Yasumasa Kanada has performed detailed statistical analyses on the decimal digits of π, and found them consistent with normality; for example, the frequencies of the ten digits 0 to 9 were subjected to statistical significance tests, and no evidence of a pattern was found. Any random sequence of digits contains arbitrarily long subsequences that appear non-random, by the infinite monkey theorem. Thus, because the sequence of π's digits passes statistical tests for randomness, it contains some sequences of digits that may appear non-random, such as a sequence of six consecutive 9s that begins at the 762nd decimal place of the decimal representation of π. This is also called the "Feynman point" in mathematical folklore, after Richard Feynman, although no connection to Feynman is known.

Transcendence

A diagram of a square and circle, both with identical area; the length of the side of the square is the square root of pi
Because π is a transcendental number, squaring the circle is not possible in a finite number of steps using the classical tools of compass and straightedge.

In addition to being irrational, π is also a transcendental number, which means that it is not the solution of any non-constant polynomial equation with rational coefficients, such as x⁵/120 − x³/6 + x = 0.

The transcendence of π has two important consequences: First, π cannot be expressed using any finite combination of rational numbers and square roots or n-th roots (such as ∛31 or √10). Second, since no transcendental number can be constructed with compass and straightedge, it is not possible to "square the circle". In other words, it is impossible to construct, using compass and straightedge alone, a square whose area is exactly equal to the area of a given circle. Squaring a circle was one of the important geometry problems of classical antiquity. Amateur mathematicians in modern times have sometimes attempted to square the circle and claim success—despite the fact that it is mathematically impossible.

Continued fractions

Like all irrational numbers, π cannot be represented as a common fraction (also known as a simple or vulgar fraction), by the very definition of irrational number (i.e., not a rational number). But every irrational number, including π, can be represented by an infinite series of nested fractions, called a continued fraction:

    π = 3 + 1/(7 + 1/(15 + 1/(1 + 1/(292 + 1/(1 + ...)))))

Truncating the continued fraction at any point yields a rational approximation for π; the first four of these are 3, 22/7, 333/106, and 355/113. These numbers are among the best-known and most widely used historical approximations of the constant. Each approximation generated in this way is a best rational approximation; that is, each is closer to π than any other fraction with the same or a smaller denominator. Because π is known to be transcendental, it is by definition not algebraic and so cannot be a quadratic irrational. Therefore, π cannot have a periodic continued fraction. Although the simple continued fraction for π (shown above) also does not exhibit any other obvious pattern, mathematicians have discovered several generalized continued fractions that do, such as:
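These truncations can be generated from the first partial quotients of π's continued fraction with the standard convergent recurrence. A short Python sketch (the coefficient list is hardcoded):

```python
from fractions import Fraction

# Convergents of pi's simple continued fraction [3; 7, 15, 1, 292, ...],
# built with the standard recurrence h_n = a_n*h_(n-1) + h_(n-2)
# (and likewise for the denominators k_n).
PARTIAL_QUOTIENTS = [3, 7, 15, 1, 292]

def convergents(coeffs):
    h_prev, h = 1, coeffs[0]
    k_prev, k = 0, 1
    result = [Fraction(h, k)]
    for a in coeffs[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        result.append(Fraction(h, k))
    return result

print([str(c) for c in convergents(PARTIAL_QUOTIENTS)])
# ['3', '22/7', '333/106', '355/113', '103993/33102']
```

Note how the large partial quotient 292 makes 355/113 an unusually good approximation for its size.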

Approximate value and digits

Some approximations of pi include:

  • Integers: 3
  • Fractions: Approximate fractions include (in order of increasing accuracy) 22/7, 333/106, 355/113, 52163/16604, 103993/33102, 104348/33215, and 245850922/78256779. (List is selected terms from OEIS A063674 and OEIS A063673.)
  • Digits: The first 50 decimal digits are 3.14159265358979323846264338327950288419716939937510... (see OEIS A000796)

Digits in other number systems

Complex numbers and Euler's identity

A diagram of a unit circle centered at the origin in the complex plane, including a ray from the center of the circle to its edge, with the triangle legs labelled with sine and cosine functions.
The association between imaginary powers of the number e and points on the unit circle centered at the origin in the complex plane given by Euler's formula

Any complex number, say z, can be expressed using a pair of real numbers. In the polar coordinate system, one number (radius or r) is used to represent z's distance from the origin of the complex plane, and the other (angle or φ) the counter-clockwise rotation from the positive real line:

    z = r(cos φ + i sin φ)

where i is the imaginary unit satisfying i² = −1. The frequent appearance of π in complex analysis can be related to the behaviour of the exponential function of a complex variable, described by Euler's formula:

    e^(iφ) = cos φ + i sin φ

where the constant e is the base of the natural logarithm. This formula establishes a correspondence between imaginary powers of e and points on the unit circle centered at the origin of the complex plane. Setting φ = π in Euler's formula results in Euler's identity, celebrated in mathematics due to it containing the five most important mathematical constants:

    e^(iπ) + 1 = 0

There are n different complex numbers z satisfying zⁿ = 1, and these are called the "n-th roots of unity" and are given by the formula:

    z = e^(2πik/n)   (k = 0, 1, ..., n − 1)
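The roots of unity are easy to verify numerically; a short Python check using the complex exponential:

```python
import cmath

# The n-th roots of unity: z_k = exp(2*pi*i*k / n) for k = 0, ..., n-1.
def roots_of_unity(n):
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for z in roots_of_unity(4):           # for n = 4: 1, i, -1, -i (approximately)
    assert abs(z ** 4 - 1) < 1e-12    # each root satisfies z^n = 1
print(roots_of_unity(4))
```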

History

Antiquity

The best-known approximations to π dating before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period.

Based on the measurements of the Great Pyramid of Giza (c. 2560 BC), some Egyptologists have claimed that the ancient Egyptians used an approximation of π as 22/7 from as early as the Old Kingdom. This claim has been met with skepticism. The earliest written approximations of π are found in Babylon and Egypt, both within one per cent of the true value. In Babylon, a clay tablet dated 1900–1600 BC has a geometrical statement that, by implication, treats π as 25/8 = 3.125. In Egypt, the Rhind Papyrus, dated around 1650 BC but copied from a document dated to 1850 BC, has a formula for the area of a circle that treats π as (16/9)² ≈ 3.16.

Astronomical calculations in the Shatapatha Brahmana (ca. 4th century BC) use a fractional approximation of 339/108 ≈ 3.139 (with a relative error of 9×10⁻⁴). Other Indian sources by about 150 BC treat π as √10 ≈ 3.1622.

Polygon approximation era

diagram of a hexagon and pentagon circumscribed outside a circle
π can be estimated by computing the perimeters of circumscribed and inscribed polygons.
 
A painting of a man studying
Archimedes developed the polygonal approach to approximating π.

The first recorded algorithm for rigorously calculating the value of π was a geometrical approach using polygons, devised around 250 BC by the Greek mathematician Archimedes. This polygonal algorithm dominated for over 1,000 years, and as a result π is sometimes referred to as Archimedes's constant. Archimedes computed upper and lower bounds of π by drawing a regular hexagon inside and outside a circle, and successively doubling the number of sides until he reached a 96-sided regular polygon. By calculating the perimeters of these polygons, he proved that 223/71 < π < 22/7 (that is 3.1408 < π < 3.1429). Archimedes' upper bound of 22/7 may have led to a widespread popular belief that π is equal to 22/7. Around 150 AD, Greco-Roman scientist Ptolemy, in his Almagest, gave a value for π of 3.1416, which he may have obtained from Archimedes or from Apollonius of Perga. Mathematicians using polygonal algorithms reached 39 digits of π in 1630, a record only broken in 1699 when infinite series were used to reach 71 digits.
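The side-doubling step can be sketched in a modern restatement: for a unit-diameter circle, the circumscribed and inscribed semiperimeters update by a harmonic mean and a geometric mean. This is a numerical sketch of the bounds, not Archimedes' actual hand computation:

```python
import math

# Modern restatement of Archimedes' polygon bounds for a unit-diameter circle:
# b = semiperimeter of the inscribed regular n-gon (lower bound for pi),
# a = semiperimeter of the circumscribed n-gon (upper bound). Doubling the
# side count updates the bounds via harmonic and geometric means:
#   a' = 2ab/(a + b),   b' = sqrt(a' * b)
def archimedes_bounds(doublings):
    a, b = 2 * math.sqrt(3.0), 3.0    # circumscribed / inscribed hexagons
    for _ in range(doublings):
        a = 2 * a * b / (a + b)
        b = math.sqrt(a * b)
    return b, a

low, high = archimedes_bounds(4)      # 6 * 2^4 = 96 sides, as Archimedes used
print(low, high)  # roughly 3.1410 < pi < 3.1427, consistent with 223/71 < pi < 22/7
```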

In ancient China, values for π included 3.1547 (around 1 AD), √10 (100 AD, approximately 3.1623), and 142/45 (3rd century, approximately 3.1556). Around 265 AD, the Wei Kingdom mathematician Liu Hui created a polygon-based iterative algorithm and used it with a 3,072-sided polygon to obtain a value of π of 3.1416. Liu later invented a faster method of calculating π and obtained a value of 3.14 with a 96-sided polygon, by taking advantage of the fact that the differences in area of successive polygons form a geometric series with a factor of 4. The Chinese mathematician Zu Chongzhi, around 480 AD, calculated that 3.1415926 < π < 3.1415927 and suggested the approximations π ≈ 355/113 = 3.14159292035... and π ≈ 22/7 = 3.142857142857..., which he termed the Milü ("close ratio") and Yuelü ("approximate ratio"), respectively, using Liu Hui's algorithm applied to a 12,288-sided polygon. With a correct value for its seven first decimal digits, this value remained the most accurate approximation of π available for the next 800 years.

The Indian astronomer Aryabhata used a value of 3.1416 in his Āryabhaṭīya (499 AD). Fibonacci in c. 1220 computed 3.1418 using a polygonal method, independent of Archimedes. Italian author Dante apparently employed the value 3 + √2/10 ≈ 3.14142.

The Persian astronomer Jamshīd al-Kāshī produced 9 sexagesimal digits, roughly the equivalent of 16 decimal digits, in 1424 using a polygon with 3×2²⁸ sides, which stood as the world record for about 180 years. French mathematician François Viète in 1579 achieved 9 digits with a polygon of 3×2¹⁷ sides. Flemish mathematician Adriaan van Roomen arrived at 15 decimal places in 1593. In 1596, Dutch mathematician Ludolph van Ceulen reached 20 digits, a record he later increased to 35 digits (as a result, π was called the "Ludolphian number" in Germany until the early 20th century). Dutch scientist Willebrord Snellius reached 34 digits in 1621, and Austrian astronomer Christoph Grienberger arrived at 38 digits in 1630 using 10⁴⁰ sides. Christiaan Huygens was able to arrive at 10 decimal places in 1654 using a slightly different method equivalent to Richardson extrapolation.

Infinite series

Comparison of the convergence of several historical infinite series for π. Sn is the approximation after taking n terms. Each subsequent subplot magnifies the shaded area horizontally by 10 times.

The calculation of π was revolutionized by the development of infinite series techniques in the 16th and 17th centuries. An infinite series is the sum of the terms of an infinite sequence. Infinite series allowed mathematicians to compute π with much greater precision than Archimedes and others who used geometrical techniques. Although infinite series were exploited for π most notably by European mathematicians such as James Gregory and Gottfried Wilhelm Leibniz, the approach also appeared in the Kerala school sometime between 1400 and 1500 AD. Around 1500 AD, a written description of an infinite series that could be used to compute π was laid out in Sanskrit verse in Tantrasamgraha by Nilakantha Somayaji. The series are presented without proof, but proofs are presented in a later work, Yuktibhāṣā, from around 1530 AD. Nilakantha attributes the series to an earlier Indian mathematician, Madhava of Sangamagrama, who lived c. 1350 – c. 1425. Several infinite series are described, including series for sine, tangent, and cosine, which are now referred to as the Madhava series or Gregory–Leibniz series. Madhava used infinite series to estimate π to 11 digits around 1400, but that value was improved on around 1430 by the Persian mathematician Jamshīd al-Kāshī, using a polygonal algorithm.

In 1593, François Viète published what is now known as Viète's formula, an infinite product (rather than an infinite sum, which is more typically used in π calculations):

    2/π = (√2/2) · (√(2 + √2)/2) · (√(2 + √(2 + √2))/2) ⋯

In 1655, John Wallis published what is now known as the Wallis product, also an infinite product:

    π/2 = (2/1 · 2/3) · (4/3 · 4/5) · (6/5 · 6/7) ⋯

A formal portrait of a man, with long hair
Isaac Newton used infinite series to compute π to 15 digits, later writing "I am ashamed to tell you to how many figures I carried these computations".

In the 1660s, the English scientist Isaac Newton and German mathematician Gottfried Wilhelm Leibniz discovered calculus, which led to the development of many infinite series for approximating π. Newton himself used an arcsin series to compute a 15 digit approximation of π in 1665 or 1666, writing "I am ashamed to tell you to how many figures I carried these computations, having no other business at the time."

In 1671, James Gregory, and independently, Leibniz in 1674, published the series:

    arctan z = z − z³/3 + z⁵/5 − z⁷/7 + ⋯

This series, sometimes called Gregory–Leibniz series, equals π/4 when evaluated with z = 1.

In 1699, English mathematician Abraham Sharp used the Gregory–Leibniz series for z = 1/√3 to compute π to 71 digits, breaking the previous record of 39 digits, which was set with a polygonal algorithm. The Gregory–Leibniz series is simple, but converges very slowly (that is, approaches the answer gradually), so it is not used in modern π calculations.

In 1706, John Machin used the Gregory–Leibniz series to produce an algorithm that converged much faster:

    π/4 = 4 arctan(1/5) − arctan(1/239)

Machin reached 100 digits of π with this formula. Other mathematicians created variants, now known as Machin-like formulae, that were used to set several successive records for calculating digits of π. Machin-like formulae remained the best-known method for calculating π well into the age of computers, and were used to set records for 250 years, culminating in a 620-digit approximation in 1946 by Daniel Ferguson – the best approximation achieved without the aid of a calculating device.
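A Machin-style computation can be sketched with Python's arbitrary-precision Decimal arithmetic, summing the arctan Taylor series for each term of Machin's formula π/4 = 4 arctan(1/5) − arctan(1/239). The guard-digit counts below are heuristic choices:

```python
from decimal import Decimal, getcontext

def arctan_inv(x, prec):
    # arctan(1/x) for an integer x > 1, summed term by term:
    #   arctan(1/x) = 1/x - 1/(3*x^3) + 1/(5*x^5) - ...
    getcontext().prec = prec + 10     # guard digits
    term = Decimal(1) / x
    x2 = x * x
    total = Decimal(0)
    k, sign = 0, 1
    tiny = Decimal(10) ** -(prec + 5)
    while term > tiny:
        total += sign * term / (2 * k + 1)
        term /= x2
        sign = -sign
        k += 1
    return total

def machin_pi(prec=60):
    # Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
    pi = 4 * (4 * arctan_inv(5, prec) - arctan_inv(239, prec))
    getcontext().prec = prec
    return +pi                        # unary plus rounds to the target precision

print(machin_pi(60))
```

Because 1/5 and 1/239 are small, both series converge quickly, which is what made the formula practical for hand computation.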

In 1844, a record was set by Zacharias Dase, who employed a Machin-like formula to calculate 200 decimals of π in his head at the behest of German mathematician Carl Friedrich Gauss.

In 1853, British mathematician William Shanks calculated π to 607 digits, but made a mistake in the 528th digit, rendering all subsequent digits incorrect. Though he calculated an additional 100 digits in 1873, bringing the total up to 707, his previous mistake rendered all the new digits incorrect as well.

Rate of convergence

Some infinite series for π converge faster than others. Given the choice of two infinite series for π, mathematicians will generally use the one that converges more rapidly because faster convergence reduces the amount of computation needed to calculate π to any given accuracy. A simple infinite series for π is the Gregory–Leibniz series:

    π = 4/1 − 4/3 + 4/5 − 4/7 + 4/9 − ⋯

As individual terms of this infinite series are added to the sum, the total gradually gets closer to π, and – with a sufficient number of terms – can get as close to π as desired. It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of π.

An infinite series for π (published by Nilakantha in the 15th century) that converges more rapidly than the Gregory–Leibniz series is:

    π = 3 + 4/(2×3×4) − 4/(4×5×6) + 4/(6×7×8) − 4/(8×9×10) + ⋯

Note that (n − 1)n(n + 1) = n³ − n.

The following table compares the convergence rates of these two series:

Infinite series for π      After 1st   After 2nd   After 3rd   After 4th   After 5th   Converges to
Gregory–Leibniz series     4.0000      2.6666...   3.4666...   2.8952...   3.3396...   π = 3.1415...
Nilakantha's series        3.0000      3.1666...   3.1333...   3.1452...   3.1396...   π = 3.1415...

After five terms, the sum of the Gregory–Leibniz series is within 0.2 of the correct value of π, whereas the sum of Nilakantha's series is within 0.002 of the correct value of π. Nilakantha's series converges faster and is more useful for computing digits of π. Series that converge even faster include Machin's series and Chudnovsky's series, the latter producing 14 correct decimal digits per term.
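The partial sums in the table above can be reproduced with a few lines of Python:

```python
# Partial sums of the two series compared in the table above.
def gregory_leibniz(terms):
    # pi = 4/1 - 4/3 + 4/5 - 4/7 + ...
    return sum((-1) ** k * 4 / (2 * k + 1) for k in range(terms))

def nilakantha(terms):
    # pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
    total = 3.0
    for k in range(1, terms):
        n = 2 * k
        total += (-1) ** (k + 1) * 4 / (n * (n + 1) * (n + 2))
    return total

print([round(gregory_leibniz(t), 4) for t in range(1, 6)])
# [4.0, 2.6667, 3.4667, 2.8952, 3.3397]
print([round(nilakantha(t), 4) for t in range(1, 6)])
# [3.0, 3.1667, 3.1333, 3.1452, 3.1397]
```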

Irrationality and transcendence

Not all mathematical advances relating to π were aimed at increasing the accuracy of approximations. When Euler solved the Basel problem in 1735, finding the exact value of the sum of the reciprocal squares, he established a connection between π and the prime numbers that later contributed to the development and study of the Riemann zeta function:

    π²/6 = 1/1² + 1/2² + 1/3² + ⋯

Swiss scientist Johann Heinrich Lambert in 1768 proved that π is irrational, meaning it is not equal to the quotient of any two whole numbers. Lambert's proof exploited a continued-fraction representation of the tangent function. French mathematician Adrien-Marie Legendre proved in 1794 that π² is also irrational. In 1882, German mathematician Ferdinand von Lindemann proved that π is transcendental, confirming a conjecture made by both Legendre and Euler. Hardy and Wright state that "the proofs were afterwards modified and simplified by Hilbert, Hurwitz, and other writers".

Adoption of the symbol π

The earliest known use of the Greek letter π to represent the ratio of a circle's circumference to its diameter was by Welsh mathematician William Jones in 1706
 
Leonhard Euler popularized the use of the Greek letter π in works he published in 1736 and 1748.

In the earliest usages, the Greek letter π was used to denote the semiperimeter (semiperipheria in Latin) of a circle, and was combined in ratios with δ (for diameter or semidiameter) or ρ (for radius) to form circle constants. (Before then, mathematicians sometimes used letters such as c or p instead.) The first recorded use is Oughtred's "π.δ", to express the ratio of periphery and diameter in the 1647 and later editions of Clavis Mathematicae. Barrow likewise used "π/δ" to represent the constant 3.14..., while Gregory instead used "π/ρ" to represent 6.28... .

The earliest known use of the Greek letter π alone to represent the ratio of a circle's circumference to its diameter was by Welsh mathematician William Jones in his 1706 work Synopsis Palmariorum Matheseos; or, a New Introduction to the Mathematics. The Greek letter first appears there in the phrase "1/2 Periphery (π)" in the discussion of a circle with radius one. However, he writes that his equations for π are from the "ready pen of the truly ingenious Mr. John Machin", leading to speculation that Machin may have employed the Greek letter before Jones. Jones' notation was not immediately adopted by other mathematicians, with the fraction notation still being used as late as 1767.

Euler started using the single-letter form beginning with his 1727 Essay Explaining the Properties of Air, though he used π = 6.28..., the ratio of periphery to radius, in this and some later writing. Euler first used π = 3.14... in his 1736 work Mechanica, and continued in his widely-read 1748 work Introductio in analysin infinitorum (he wrote: "for the sake of brevity we will write this number as π; thus π is equal to half the circumference of a circle of radius 1"). Because Euler corresponded heavily with other mathematicians in Europe, the use of the Greek letter spread rapidly, and the practice was universally adopted thereafter in the Western world, though the definition still varied between 3.14... and 6.28... as late as 1761.

Modern quest for more digits

Computer era and iterative algorithms

The Gauss–Legendre iterative algorithm:
Initialize

    a₀ = 1,   b₀ = 1/√2,   t₀ = 1/4,   p₀ = 1

Iterate

    aₙ₊₁ = (aₙ + bₙ)/2,   bₙ₊₁ = √(aₙbₙ),
    tₙ₊₁ = tₙ − pₙ(aₙ − aₙ₊₁)²,   pₙ₊₁ = 2pₙ

Then an estimate for π is given by

    π ≈ (aₙ₊₁ + bₙ₊₁)² / (4tₙ₊₁)

The development of computers in the mid-20th century again revolutionized the hunt for digits of π. Mathematicians John Wrench and Levi Smith reached 1,120 digits in 1949 using a desk calculator. Using an inverse tangent (arctan) infinite series, a team led by George Reitwiesner and John von Neumann that same year achieved 2,037 digits with a calculation that took 70 hours of computer time on the ENIAC computer. The record, always relying on an arctan series, was broken repeatedly (7,480 digits in 1957; 10,000 digits in 1958; 100,000 digits in 1961) until 1 million digits were reached in 1973.

Two additional developments around 1980 once again accelerated the ability to compute π. First, the discovery of new iterative algorithms for computing π, which were much faster than the infinite series; and second, the invention of fast multiplication algorithms that could multiply large numbers very rapidly. Such algorithms are particularly important in modern π computations because most of the computer's time is devoted to multiplication. They include the Karatsuba algorithm, Toom–Cook multiplication, and Fourier transform-based methods.

The iterative algorithms were independently published in 1975–1976 by physicist Eugene Salamin and scientist Richard Brent. These avoid reliance on infinite series. An iterative algorithm repeats a specific calculation, each iteration using the outputs from prior steps as its inputs, and produces a result in each step that converges to the desired value. The approach was actually invented over 160 years earlier by Carl Friedrich Gauss, in what is now termed the arithmetic–geometric mean method (AGM method) or Gauss–Legendre algorithm. As modified by Salamin and Brent, it is also referred to as the Brent–Salamin algorithm.

The iterative algorithms were widely used after 1980 because they are faster than infinite series algorithms: whereas infinite series typically increase the number of correct digits additively in successive terms, iterative algorithms generally multiply the number of correct digits at each step. For example, the Brent–Salamin algorithm doubles the number of digits in each iteration. In 1984, brothers John and Peter Borwein produced an iterative algorithm that quadruples the number of digits in each step; and in 1987, one that increases the number of digits five times in each step. Iterative methods were used by Japanese mathematician Yasumasa Kanada to set several records for computing π between 1995 and 2002. This rapid convergence comes at a price: the iterative algorithms require significantly more memory than infinite series.
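The quadratic convergence is easy to see in a float-precision sketch of the Gauss–Legendre (Brent–Salamin) iteration; real record computations use arbitrary-precision arithmetic instead:

```python
import math

# Float-precision sketch of the Gauss-Legendre (Brent-Salamin) iteration.
# Each pass roughly doubles the number of correct digits, so three passes
# already exhaust IEEE double precision.
def gauss_legendre_pi(iterations=3):
    a, b = 1.0, 1.0 / math.sqrt(2.0)
    t, p = 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2          # arithmetic mean
        b_next = math.sqrt(a * b)     # geometric mean
        t -= p * (a - a_next) ** 2
        p *= 2
        a, b = a_next, b_next
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())  # agrees with math.pi to double precision
```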

Motives for computing π

As mathematicians discovered new algorithms, and computers became available, the number of known decimal digits of π increased dramatically. The vertical scale is logarithmic.

For most numerical calculations involving π, a handful of digits provide sufficient precision. According to Jörg Arndt and Christoph Haenel, thirty-nine digits are sufficient to perform most cosmological calculations, because that is the accuracy necessary to calculate the circumference of the observable universe with a precision of one atom. Accounting for additional digits needed to compensate for computational round-off errors, Arndt concludes that a few hundred digits would suffice for any scientific application. Despite this, people have worked strenuously to compute π to thousands and millions of digits. This effort may be partly ascribed to the human compulsion to break records, and such achievements with π often make headlines around the world. They also have practical benefits, such as testing supercomputers, testing numerical analysis algorithms (including high-precision multiplication algorithms); and within pure mathematics itself, providing data for evaluating the randomness of the digits of π.

Rapidly convergent series

Photo portrait of a man
Srinivasa Ramanujan, working in isolation in India, produced many innovative series for computing π.

Modern π calculators do not use iterative algorithms exclusively. New infinite series were discovered in the 1980s and 1990s that are as fast as iterative algorithms, yet are simpler and less memory intensive. The fast iterative algorithms were anticipated in 1914, when Indian mathematician Srinivasa Ramanujan published dozens of innovative new formulae for π, remarkable for their elegance, mathematical depth and rapid convergence. One of his formulae, based on modular equations, is

    1/π = (2√2/9801) Σ_{k=0}^∞ (4k)!(1103 + 26390k) / ((k!)⁴ 396^(4k))

This series converges much more rapidly than most arctan series, including Machin's formula. Bill Gosper was the first to use it for advances in the calculation of π, setting a record of 17 million digits in 1985. Ramanujan's formulae anticipated the modern algorithms developed by the Borwein brothers (Jonathan and Peter) and the Chudnovsky brothers. The Chudnovsky formula developed in 1987 is

    1/π = 12 Σ_{k=0}^∞ (−1)^k (6k)! (13591409 + 545140134k) / ((3k)! (k!)³ 640320^(3k + 3/2))

It produces about 14 digits of π per term, and has been used for several record-setting π calculations, including the first to surpass 1 billion (10⁹) digits in 1989 by the Chudnovsky brothers, 10 trillion (10¹³) digits in 2011 by Alexander Yee and Shigeru Kondo, over 22 trillion digits in 2016 by Peter Trueb, 50 trillion digits by Timothy Mullican in 2020 and 100 trillion digits by Emma Haruka Iwao in 2022. For similar formulas, see also the Ramanujan–Sato series.
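The Chudnovsky series is commonly implemented with an exact integer recurrence for the factorial ratio. The sketch below uses Python's Decimal; the guard-digit and term counts are heuristic choices, not part of the published formula:

```python
from decimal import Decimal, getcontext

# Integer-recurrence sketch of the Chudnovsky series. M carries the ratio
# (6k)!/((3k)!*(k!)^3) exactly in integers; each term adds roughly 14 digits.
def chudnovsky_pi(digits):
    getcontext().prec = digits + 10           # guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for k in range(1, digits // 14 + 2):
        M = M * (K ** 3 - 16 * K) // k ** 3   # exact integer update
        L += 545140134
        X *= -262537412640768000              # -640320^3
        S += Decimal(M * L) / X
        K += 12
    getcontext().prec = digits
    return +(C / S)

print(chudnovsky_pi(30))  # 30 significant digits of pi
```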

In 2006, mathematician Simon Plouffe used the PSLQ integer relation algorithm to generate several new formulas for π, conforming to the following template:

    π^k = Σ_{n=1}^∞ (1/n^k) (a/(q^n − 1) + b/(q^(2n) − 1) + c/(q^(4n) − 1))

where q is e^π (Gelfond's constant), k is an odd number, and a, b, c are certain rational numbers that Plouffe computed.

Monte Carlo methods

Needles of length ℓ scattered on stripes with width t
Buffon's needle. Needles a and b are dropped randomly.
 
Thousands of dots randomly covering a square and a circle inscribed in the square.
Random dots are placed on a square and a circle inscribed inside.

Monte Carlo methods, which evaluate the results of multiple random trials, can be used to create approximations of π. Buffon's needle is one such technique: If a needle of length ℓ is dropped n times on a surface on which parallel lines are drawn t units apart, and if x of those times it comes to rest crossing a line (x > 0), then one may approximate π based on the counts:

π ≈ 2ℓn / (tx)

Another Monte Carlo method for computing π is to draw a circle inscribed in a square, and randomly place dots in the square. The ratio of dots inside the circle to the total number of dots will approximately equal π/4.
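
A minimal sketch of the dot-counting method in Python (the sample count and seed below are arbitrary):

```python
import random

def monte_carlo_pi(n, seed=12345):
    """The fraction of uniform random points in the unit square that fall
    inside the inscribed quarter circle tends to pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n
```

The statistical error shrinks only like 1/√n, so each additional decimal digit of accuracy requires roughly a hundredfold more samples.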

Five random walks with 200 steps. The sample mean of |W_200| is μ = 56/5, and so 2(200)μ^(−2) ≈ 3.19 is within 0.05 of π.

Another way to calculate π using probability is to start with a random walk, generated by a sequence of (fair) coin tosses: independent random variables Xk such that Xk ∈ {−1,1} with equal probabilities. The associated random walk is

W_n = ∑_{k=1}^n X_k

so that, for each n, Wn is drawn from a shifted and scaled binomial distribution. As n varies, Wn defines a (discrete) stochastic process. Then π can be calculated by

π = lim_{n→∞} 2n / E[|W_n|]²

This Monte Carlo method is independent of any relation to circles, and is a consequence of the central limit theorem, discussed below.
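
A sketch of the random-walk estimator in Python, using the fact that the mean of |W_n| approaches √(2n/π); the step, walk, and seed parameters are illustrative:

```python
import random

def random_walk_pi(steps=200, walks=10_000, seed=7):
    """E|W_n| approaches sqrt(2n/pi), so pi is approximately 2n / E[|W_n|]^2."""
    rng = random.Random(seed)
    mean_abs = sum(abs(sum(rng.choice((-1, 1)) for _ in range(steps)))
                   for _ in range(walks)) / walks
    return 2 * steps / mean_abs ** 2
```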

These Monte Carlo methods for approximating π are very slow compared to other methods, and do not provide any information on the exact number of digits that are obtained. Thus they are never used to approximate π when speed or accuracy is desired.

Spigot algorithms

Two algorithms were discovered in 1995 that opened up new avenues of research into π. They are called spigot algorithms because, like water dripping from a spigot, they produce single digits of π that are not reused after they are calculated. This is in contrast to infinite series or iterative algorithms, which retain and use all intermediate digits until the final result is produced.

Mathematicians Stan Wagon and Stanley Rabinowitz produced a simple spigot algorithm in 1995. Its speed is comparable to arctan algorithms, but not as fast as iterative algorithms.

Another spigot algorithm, the BBP digit extraction algorithm, was discovered in 1995 by Simon Plouffe:

π = ∑_{k=0}^∞ (1/16^k) (4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6))

This formula, unlike others before it, can produce any individual hexadecimal digit of π without calculating all the preceding digits. Individual binary digits may be extracted from individual hexadecimal digits, and octal digits can be extracted from one or two hexadecimal digits. Variations of the algorithm have been discovered, but no digit extraction algorithm has yet been found that rapidly produces decimal digits. An important application of digit extraction algorithms is to validate new claims of record π computations: After a new record is claimed, the decimal result is converted to hexadecimal, and then a digit extraction algorithm is used to calculate several random hexadecimal digits near the end; if they match, this provides a measure of confidence that the entire computation is correct.
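
The digit-extraction idea can be transcribed directly into Python; this float-based sketch is adequate only for modest positions, and the helper names are invented:

```python
def pi_hex_digit(n):
    """Hex digit of pi at position n (0 = first digit after the point),
    computed via the BBP formula without the preceding digits."""
    def partial(j):
        # head: modular exponentiation keeps every term in [0, 1)
        total = sum(pow(16, n - k, 8 * k + j) / (8 * k + j)
                    for k in range(n + 1))
        # tail: plain floating point, converges in a few terms
        k = n + 1
        while 16.0 ** (n - k) / (8 * k + j) > 1e-17:
            total += 16.0 ** (n - k) / (8 * k + j)
            k += 1
        return total % 1.0
    frac = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
    return int(16 * frac)
```

In hexadecimal, π = 3.243F6A88…, and the function reproduces those digits position by position, which is exactly the property record verifications exploit.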

Between 1998 and 2000, the distributed computing project PiHex used Bellard's formula (a modification of the BBP algorithm) to compute the quadrillionth (10^15th) bit of π, which turned out to be 0. In September 2010, a Yahoo! employee used the company's Hadoop application on one thousand computers over a 23-day period to compute 256 bits of π at the two-quadrillionth (2×10^15th) bit, which also happens to be zero.

Role and characterizations in mathematics

Because π is closely related to the circle, it is found in many formulae from the fields of geometry and trigonometry, particularly those concerning circles, spheres, or ellipses. Other branches of science, such as statistics, physics, Fourier analysis, and number theory, also include π in some of their important formulae.

Geometry and trigonometry

A diagram of a circle with a square covering the circle's upper right quadrant.
The area of the circle equals π times the shaded area. The area of the unit circle is π.

π appears in formulae for areas and volumes of geometrical shapes based on circles, such as ellipses, spheres, cones, and tori. Below are some of the more common formulae that involve π.

  • The circumference of a circle with radius r is 2πr.
  • The area of a circle with radius r is πr².
  • The area of an ellipse with semi-major axis a and semi-minor axis b is πab.
  • The volume of a sphere with radius r is (4/3)πr³.
  • The surface area of a sphere with radius r is 4πr².

Some of the formulae above are special cases of the volume of the n-dimensional ball and the surface area of its boundary, the (n−1)-dimensional sphere, given below.

Apart from circles, there are other curves of constant width. By Barbier's theorem, every curve of constant width has perimeter π times its width. The Reuleaux triangle (formed by the intersection of three circles, each centered where the other two circles cross) has the smallest possible area for its width and the circle the largest. There also exist non-circular smooth curves of constant width.

Definite integrals that describe circumference, area, or volume of shapes generated by circles typically have values that involve π. For example, an integral that specifies half the area of a circle of radius one is given by:

∫_{−1}^{1} √(1 − x²) dx = π/2

In that integral the function √(1 − x²) represents the top half of a circle (the square root is a consequence of the Pythagorean theorem), and the integral ∫_{−1}^{1} computes the area between that half of a circle and the x axis.

Units of angle

Diagram showing graphs of functions
Sine and cosine functions repeat with period 2π.

The trigonometric functions rely on angles, and mathematicians generally use radians as units of measurement. π plays an important role in angles measured in radians, which are defined so that a complete circle spans an angle of 2π radians. The angle measure of 180° is equal to π radians, and 1° = π/180 radians.

Common trigonometric functions have periods that are multiples of π; for example, sine and cosine have period 2π, so for any angle θ and any integer k,

sin θ = sin(θ + 2πk) and cos θ = cos(θ + 2πk).

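These relationships can be checked numerically with Python's math module:

```python
import math

# 180 degrees equals pi radians
assert math.isclose(math.radians(180), math.pi)

# sine and cosine repeat with period 2*pi (up to floating-point rounding)
theta = 0.7
for k in (-3, 1, 10):
    assert math.isclose(math.sin(theta + 2 * math.pi * k), math.sin(theta), abs_tol=1e-12)
    assert math.isclose(math.cos(theta + 2 * math.pi * k), math.cos(theta), abs_tol=1e-12)
```
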
Eigenvalues

The overtones of a vibrating string are eigenfunctions of the second derivative, and form a harmonic progression. The associated eigenvalues form the arithmetic progression of integer multiples of π.

Many of the appearances of π in the formulas of mathematics and the sciences have to do with its close relationship with geometry. However, π also appears in many natural situations having apparently nothing to do with geometry.

In many applications, it plays a distinguished role as an eigenvalue. For example, an idealized vibrating string can be modelled as the graph of a function f on the unit interval [0, 1], with fixed ends f(0) = f(1) = 0. The modes of vibration of the string are solutions of the differential equation f″(x) + λ f(x) = 0, or f″(x) = −λ f(x). Thus λ is an eigenvalue of the negative second derivative operator f ↦ −f″, and is constrained by Sturm–Liouville theory to take on only certain specific values. It must be positive, since the operator is negative definite, so it is convenient to write λ = ν², where ν > 0 is called the wavenumber. Then f(x) = sin(πx) satisfies the boundary conditions and the differential equation with ν = π.

The value π is, in fact, the least such value of the wavenumber, and is associated with the fundamental mode of vibration of the string. One way to show this is by estimating the energy, which satisfies Wirtinger's inequality: for a function with f(0) = f(1) = 0 and f, f′ both square integrable, we have:

∫_0^1 |f′(x)|² dx ≥ π² ∫_0^1 |f(x)|² dx,

with equality precisely when f is a multiple of sin(πx). Here π appears as an optimal constant in Wirtinger's inequality, and it follows that it is the smallest wavenumber, using the variational characterization of the eigenvalue. As a consequence, π is the smallest singular value of the derivative operator on the space of functions on [0, 1] vanishing at both endpoints (the Sobolev space H^1_0(0, 1)).
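
The variational characterization can be illustrated numerically: a midpoint-rule Rayleigh quotient (the helper name `rayleigh` is invented for this sketch) evaluates the ratio of the integrals of f′² and f² for trial functions vanishing at 0 and 1; sin(πx) attains π², while other admissible choices do worse:

```python
import math

def rayleigh(f, df, n=100_000):
    """Midpoint-rule estimate of (integral of f'(x)^2) / (integral of f(x)^2) on [0, 1]."""
    xs = [(k + 0.5) / n for k in range(n)]
    return sum(df(x) ** 2 for x in xs) / sum(f(x) ** 2 for x in xs)

q_sin = rayleigh(lambda x: math.sin(math.pi * x),
                 lambda x: math.pi * math.cos(math.pi * x))  # the minimizer: pi^2
q_poly = rayleigh(lambda x: x * (1 - x),
                  lambda x: 1 - 2 * x)                       # x(1-x): exactly 10
```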

Inequalities

The ancient city of Carthage was the solution to an isoperimetric problem, according to a legend recounted by Lord Kelvin (Thompson 1894): those lands bordering the sea that Queen Dido could enclose on all other sides within a single given oxhide, cut into strips.

The number π appears in similar eigenvalue problems in higher-dimensional analysis. As mentioned above, it can be characterized via its role as the best constant in the isoperimetric inequality: the area A enclosed by a plane Jordan curve of perimeter P satisfies the inequality

4πA ≤ P²,

and equality is clearly achieved for the circle, since in that case A = πr² and P = 2πr.

Ultimately as a consequence of the isoperimetric inequality, π appears in the optimal constant for the critical Sobolev inequality in n dimensions, which thus characterizes the role of π in many physical phenomena as well, for example those of classical potential theory. In two dimensions, the critical Sobolev inequality is

2√π ‖f‖₂ ≤ ‖∇f‖₁

for f a smooth function with compact support in R², where ∇f is the gradient of f, and ‖f‖₂ and ‖∇f‖₁ refer respectively to the L² and L¹ norms. The Sobolev inequality is equivalent to the isoperimetric inequality (in any dimension), with the same best constants.

Wirtinger's inequality also generalizes to higher-dimensional Poincaré inequalities that provide best constants for the Dirichlet energy of an n-dimensional membrane. Specifically, π is the greatest constant such that

π ≤ ‖∇u‖₂ / ‖u‖₂

for all convex subsets G of R^n of diameter 1, and square-integrable functions u on G of mean zero.

Fourier transform and Heisenberg uncertainty principle

The constant π also appears as a critical spectral parameter in the Fourier transform. This is the integral transform that takes a complex-valued integrable function f on the real line to the function f̂ defined as:

f̂(ξ) = ∫_{−∞}^{∞} f(x) e^(−2πixξ) dx

Although there are several different conventions for the Fourier transform and its inverse, any such convention must involve π somewhere. The above is the most canonical definition, however, giving the unique unitary operator on L² that is also an algebra homomorphism of L¹ to L^∞.

The Heisenberg uncertainty principle also contains the number π. The uncertainty principle gives a sharp lower bound on the extent to which it is possible to localize a function both in space and in frequency: with our conventions for the Fourier transform,

(∫_{−∞}^{∞} x² |f(x)|² dx) (∫_{−∞}^{∞} ξ² |f̂(ξ)|² dξ) ≥ ((1/(4π)) ∫_{−∞}^{∞} |f(x)|² dx)²

The physical consequence, about the uncertainty in simultaneous position and momentum observations of a quantum mechanical system, is discussed below. The appearance of π in the formulae of Fourier analysis is ultimately a consequence of the Stone–von Neumann theorem, asserting the uniqueness of the Schrödinger representation of the Heisenberg group.

Gaussian integrals

A graph of the Gaussian function f(x) = e^(−x²). The coloured region between the function and the x-axis has area √π.

The fields of probability and statistics frequently use the normal distribution as a simple model for complex phenomena; for example, scientists generally assume that the observational error in most experiments follows a normal distribution. The Gaussian function, which is the probability density function of the normal distribution with mean μ and standard deviation σ, naturally contains π:

f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²))

The factor of 1/(σ√(2π)) makes the area under the graph of f equal to one, as is required for a probability distribution. This follows from a change of variables in the Gaussian integral:

∫_{−∞}^{∞} e^(−x²) dx = √π

which says that the area under the basic bell curve in the figure is equal to the square root of π.
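
The Gaussian integral can be checked numerically with a simple midpoint sum (the step size and truncation window here are chosen for illustration):

```python
import math

# Midpoint sum of exp(-x^2) over [-10, 10]; the tails beyond are < e^(-100)
dx = 1e-3
total = dx * sum(math.exp(-((k + 0.5) * dx) ** 2)
                 for k in range(-10_000, 10_000))
# total matches sqrt(pi) to high accuracy
```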

The central limit theorem explains the central role of normal distributions, and thus of π, in probability and statistics. This theorem is ultimately connected with the spectral characterization of π as the eigenvalue associated with the Heisenberg uncertainty principle, and the fact that equality holds in the uncertainty principle only for the Gaussian function. Equivalently, π is the unique constant making the Gaussian normal distribution e^(−πx²) equal to its own Fourier transform. Indeed, according to Howe (1980), the "whole business" of establishing the fundamental theorems of Fourier analysis reduces to the Gaussian integral.

Projective geometry

Let V be the set of all twice differentiable real functions f : R → R that satisfy the ordinary differential equation f″(x) + f(x) = 0. Then V is a two-dimensional real vector space, with two parameters corresponding to a pair of initial conditions for the differential equation. For any t ∈ R, let e_t : V → R be the evaluation functional, which associates to each f ∈ V the value f(t) of the function f at the real point t. Then, for each t, the kernel of e_t is a one-dimensional linear subspace of V. Hence t ↦ ker e_t defines a function from the real line to the real projective line. This function is periodic, and the quantity π can be characterized as the period of this map.

Topology

Uniformization of the Klein quartic, a surface of genus three and Euler characteristic −4, as a quotient of the hyperbolic plane by the symmetry group PSL(2,7) of the Fano plane. The hyperbolic area of a fundamental domain is 8π, by Gauss–Bonnet.

The constant π appears in the Gauss–Bonnet formula which relates the differential geometry of surfaces to their topology. Specifically, if a compact surface Σ has Gauss curvature K, then

∫_Σ K dA = 2πχ(Σ)

where χ(Σ) is the Euler characteristic, which is an integer. An example is the surface area of a sphere S of curvature 1 (so that its radius of curvature, which coincides with its radius, is also 1). The Euler characteristic of a sphere can be computed from its homology groups and is found to be equal to two. Thus we have

A(S) = ∫_S 1 dA = 2π · 2 = 4π

reproducing the formula for the surface area of a sphere of radius 1.

The constant appears in many other integral formulae in topology, in particular, those involving characteristic classes via the Chern–Weil homomorphism.

Vector calculus

Decompositions of spherical harmonics, an area in vector calculus

Vector calculus is a branch of calculus that is concerned with the properties of vector fields, and has many physical applications such as to electricity and magnetism. The Newtonian potential for a point source Q situated at the origin of a three-dimensional Cartesian coordinate system is

V(x) = kQ/|x|

which represents the potential energy of a unit mass (or charge) placed a distance |x| from the source, and k is a dimensional constant. The field, denoted here by E, which may be the (Newtonian) gravitational field or the (Coulomb) electric field, is the negative gradient of the potential:

E = −∇V

Special cases include Coulomb's law and Newton's law of universal gravitation. Gauss' law states that the outward flux of the field through any smooth, simple, closed, orientable surface S containing the origin is equal to 4πkQ:

∯_S E · dA = 4πkQ

It is standard to absorb this factor of 4π into the constant k, but this argument shows why it must appear somewhere. Furthermore, 4π is the surface area of the unit sphere, but we have not assumed that S is the sphere. However, as a consequence of the divergence theorem, because the region away from the origin is vacuum (source-free) it is only the homology class of the surface S in R³\{0} that matters in computing the integral, so it can be replaced by any convenient surface in the same homology class, in particular, a sphere, where spherical coordinates can be used to calculate the integral.

A consequence of the Gauss law is that the negative Laplacian of the potential V is equal to 4πkQ times the Dirac delta function:

−ΔV(x) = 4πkQ δ(x)

More general distributions of matter (or charge) are obtained from this by convolution, giving the Poisson equation

−ΔV(x) = 4πkρ(x)

where ρ is the distribution function.

The constant π also plays an analogous role in four-dimensional potentials associated with Einstein's equations, a fundamental formula which forms the basis of the general theory of relativity and describes the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy:

R_μν − (1/2) R g_μν + Λ g_μν = (8πG/c⁴) T_μν

where R_μν is the Ricci curvature tensor, R is the scalar curvature, g_μν is the metric tensor, Λ is the cosmological constant, G is Newton's gravitational constant, c is the speed of light in vacuum, and T_μν is the stress–energy tensor. The left-hand side of Einstein's equation is a non-linear analogue of the Laplacian of the metric tensor, and reduces to that in the weak field limit, with the Λ term playing the role of a Lagrange multiplier, and the right-hand side is the analogue of the distribution function, times 8πG/c⁴.

Cauchy's integral formula

Complex analytic functions can be visualized as a collection of streamlines and equipotentials, systems of curves intersecting at right angles. Here illustrated is the complex logarithm of the Gamma function.

One of the key tools in complex analysis is contour integration of a function over a positively oriented (rectifiable) Jordan curve γ. A form of Cauchy's integral formula states that if a point z₀ is interior to γ, then

∮_γ dz/(z − z₀) = 2πi

Although the curve γ is not a circle, and hence does not have any obvious connection to the constant π, a standard proof of this result uses Morera's theorem, which implies that the integral is invariant under homotopy of the curve, so that it can be deformed to a circle and then integrated explicitly in polar coordinates. More generally, it is true that if a rectifiable closed curve γ does not contain z₀, then the above integral is 2πi times the winding number of the curve.

The general form of Cauchy's integral formula establishes the relationship between the values of a complex analytic function f(z) on the Jordan curve γ and the value of f(z) at any interior point z₀ of γ:

f(z₀) = (1/(2πi)) ∮_γ f(z)/(z − z₀) dz

provided f(z) is analytic in the region enclosed by γ and extends continuously to γ. Cauchy's integral formula is a special case of the residue theorem: if g(z) is a meromorphic function in the region enclosed by γ and is continuous in a neighbourhood of γ, then

(1/(2πi)) ∮_γ g(z) dz = ∑ Res(g; a_k)

where the sum is of the residues at the poles of g(z).

The gamma function and Stirling's approximation

Plot of the gamma function on the real axis

The factorial function n! is the product of all of the positive integers through n. The gamma function extends the concept of factorial (normally defined only for non-negative integers) to all complex numbers, except the non-positive integers, with the identity Γ(n) = (n − 1)!. When the gamma function is evaluated at half-integers, the result contains π. For example, Γ(1/2) = √π and Γ(5/2) = (3√π)/4.
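
These values can be confirmed with Python's math.gamma:

```python
import math

assert math.isclose(math.gamma(5), math.factorial(4))      # gamma(n) = (n-1)!
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))   # gamma(1/2) = sqrt(pi)
assert math.isclose(math.gamma(2.5), 3 * math.sqrt(math.pi) / 4)
```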

The gamma function is defined by its Weierstrass product development:

Γ(z) = (e^(−γz)/z) ∏_{n=1}^∞ e^(z/n) / (1 + z/n)

where γ is the Euler–Mascheroni constant. Evaluated at z = 1/2 and squared, the equation Γ(1/2)2 = π reduces to the Wallis product formula. The gamma function is also connected to the Riemann zeta function and identities for the functional determinant, in which the constant π plays an important role.

The gamma function is used to calculate the volume V_n(r) of the n-dimensional ball of radius r in Euclidean n-dimensional space, and the surface area S_{n−1}(r) of its boundary, the (n−1)-dimensional sphere:

V_n(r) = π^(n/2) r^n / Γ(n/2 + 1)

S_{n−1}(r) = n π^(n/2) r^(n−1) / Γ(n/2 + 1)

Further, it follows from the functional equation that

2πr = S_{n+1}(r) / V_n(r)

The gamma function can be used to create a simple approximation to the factorial function n! for large n: n! ∼ √(2πn) (n/e)^n, which is known as Stirling's approximation. Equivalently,

π = lim_{n→∞} e^(2n) (n!)² / (2 n^(2n+1))
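
A quick numerical check of Stirling's approximation (the sample values of n are chosen arbitrarily):

```python
import math

def stirling(n):
    """Stirling's approximation: sqrt(2*pi*n) * (n/e)^n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# the relative error shrinks roughly like 1/(12n)
errors = [stirling(n) / math.factorial(n) - 1 for n in (5, 10, 20, 40)]
```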

As a geometrical application of Stirling's approximation, let Δ_n denote the standard simplex in n-dimensional Euclidean space, and (n + 1)Δ_n denote the simplex having all of its sides scaled up by a factor of n + 1. Then

Vol((n + 1)Δ_n) = (n + 1)^n / n! ∼ e^(n+1) / √(2πn)

Ehrhart's volume conjecture is that this is the (optimal) upper bound on the volume of a convex body containing only one lattice point.

Number theory and Riemann zeta function

Each prime has an associated Prüfer group, which are arithmetic localizations of the circle. The L-functions of analytic number theory are also localized in each prime p.
 
Solution of the Basel problem using the Weil conjecture: the value of ζ(2) is the hyperbolic area of a fundamental domain of the modular group, times π/2.

The Riemann zeta function ζ(s) is used in many areas of mathematics. When evaluated at s = 2 it can be written as

ζ(2) = 1/1² + 1/2² + 1/3² + 1/4² + ⋯

Finding a simple solution for this infinite series was a famous problem in mathematics called the Basel problem. Leonhard Euler solved it in 1735 when he showed it was equal to π²/6. Euler's result leads to the number theory result that the probability of two random numbers being relatively prime (that is, having no shared factors) is equal to 6/π². This probability is based on the observation that the probability that any number is divisible by a prime p is 1/p (for example, every 7th integer is divisible by 7). Hence the probability that two numbers are both divisible by this prime is 1/p², and the probability that at least one of them is not is 1 − 1/p². For distinct primes, these divisibility events are mutually independent; so the probability that two numbers are relatively prime is given by a product over all primes:

∏_p (1 − 1/p²) = 1/ζ(2) = 6/π² ≈ 61%

This probability can be used in conjunction with a random number generator to approximate π using a Monte Carlo approach.
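
A sketch of that approach in Python (the trial count, integer range, and seed are arbitrary):

```python
import math
import random

def coprime_pi(trials=200_000, limit=10**9, seed=42):
    """Estimate pi from the ~6/pi^2 probability that two random integers
    are relatively prime."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if math.gcd(rng.randrange(1, limit),
                           rng.randrange(1, limit)) == 1)
    return math.sqrt(6 * trials / hits)
```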

The solution to the Basel problem implies that the geometrically derived quantity π is connected in a deep way to the distribution of prime numbers. This is a special case of Weil's conjecture on Tamagawa numbers, which asserts the equality of similar such infinite products of arithmetic quantities, localized at each prime p, and a geometrical quantity: the reciprocal of the volume of a certain locally symmetric space. In the case of the Basel problem, it is the hyperbolic 3-manifold SL2(R)/SL2(Z).

The zeta function also satisfies Riemann's functional equation, which involves π as well as the gamma function:

ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s)

Furthermore, the derivative of the zeta function satisfies

exp(−ζ′(0)) = √(2π)

A consequence is that π can be obtained from the functional determinant of the harmonic oscillator. This functional determinant can be computed via a product expansion, and is equivalent to the Wallis product formula. The calculation can be recast in quantum mechanics, specifically the variational approach to the spectrum of the hydrogen atom.

Fourier series

π appears in characters of p-adic numbers (shown), which are elements of a Prüfer group. Tate's thesis makes heavy use of this machinery.

The constant π also appears naturally in Fourier series of periodic functions. Periodic functions are functions on the group T = R/Z of fractional parts of real numbers. The Fourier decomposition shows that a complex-valued function f on T can be written as an infinite linear superposition of unitary characters of T, that is, continuous group homomorphisms from T to the circle group U(1) of unit modulus complex numbers. It is a theorem that every character of T is one of the complex exponentials e_n(x) = e^(2πinx).

There is a unique character on T, up to complex conjugation, that is a group isomorphism. Using the Haar measure on the circle group, the constant π is half the magnitude of the Radon–Nikodym derivative of this character. The other characters have derivatives whose magnitudes are positive integral multiples of 2π. As a result, the constant π is the unique number such that the group T, equipped with its Haar measure, is Pontrjagin dual to the lattice of integral multiples of 2π. This is a version of the one-dimensional Poisson summation formula.

Modular forms and theta functions

Theta functions transform under the lattice of periods of an elliptic curve.

The constant π is connected in a deep way with the theory of modular forms and theta functions. For example, the Chudnovsky algorithm involves in an essential way the j-invariant of an elliptic curve.

Modular forms are holomorphic functions in the upper half plane characterized by their transformation properties under the modular group SL₂(Z) (or its various subgroups), a lattice in the group SL₂(R). An example is the Jacobi theta function

θ(z, τ) = ∑_{n=−∞}^∞ e^(2πinz + πin²τ)

which is a kind of modular form called a Jacobi form. This is sometimes written in terms of the nome q = e^(πiτ).

The constant π is the unique constant making the Jacobi theta function an automorphic form, which means that it transforms in a specific way. Certain identities hold for all automorphic forms. An example is

θ(z + τ, τ) = e^(−πiτ − 2πiz) θ(z, τ)

which implies that θ transforms as a representation under the discrete Heisenberg group. General modular forms and other theta functions also involve π, once again because of the Stone–von Neumann theorem.

Cauchy distribution and potential theory

The Witch of Agnesi, named for Maria Agnesi (1718–1799), is a geometrical construction of the graph of the Cauchy distribution.
 
The Cauchy distribution governs the passage of Brownian particles through a membrane.

The Cauchy distribution

g(x) = 1 / (π(x² + 1))

is a probability density function. The total probability is equal to one, owing to the integral:

∫_{−∞}^{∞} 1/(x² + 1) dx = π

The Shannon entropy of the Cauchy distribution is equal to ln(4π), which also involves π.
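
The normalization can be checked numerically: a midpoint sum of the density over [−1000, 1000] recovers the total mass up to the two heavy tails, each of which holds about 1/(1000π) of the probability (the step size and window are illustrative):

```python
import math

# Midpoint sum of the Cauchy density 1/(pi*(1 + x^2)) over [-1000, 1000]
dx = 0.01
mass = dx * sum(1.0 / (math.pi * (1.0 + x * x))
                for x in ((-1000 + (k + 0.5) * dx) for k in range(200_000)))
# mass is just under 1, reflecting the slowly decaying tails
```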

The Cauchy distribution plays an important role in potential theory because it is the simplest Furstenberg measure, the classical Poisson kernel associated with a Brownian motion in a half-plane. Conjugate harmonic functions and so also the Hilbert transform are associated with the asymptotics of the Poisson kernel. The Hilbert transform H is the integral transform given by the Cauchy principal value of the singular integral

The constant π is the unique (positive) normalizing factor such that H defines a linear complex structure on the Hilbert space of square-integrable real-valued functions on the real line. The Hilbert transform, like the Fourier transform, can be characterized purely in terms of its transformation properties on the Hilbert space L2(R): up to a normalization factor, it is the unique bounded linear operator that commutes with positive dilations and anti-commutes with all reflections of the real line. The constant π is the unique normalizing factor that makes this transformation unitary.

In the Mandelbrot set

A complex black shape on a blue background.
The Mandelbrot set can be used to approximate π.

An occurrence of π in the fractal called the Mandelbrot set was discovered by David Boll in 1991. He examined the behaviour of the Mandelbrot set near the "neck" at (−0.75, 0). When the number of iterations until divergence for the point (−0.75, ε) is multiplied by ε, the result approaches π as ε approaches zero. The point (0.25 + ε, 0) at the cusp of the large "valley" on the right side of the Mandelbrot set behaves similarly: the number of iterations until divergence multiplied by the square root of ε tends to π.
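
Boll's observation is easy to reproduce; `escape_count` is an invented helper, and the escape radius 2 is the standard one for the Mandelbrot iteration:

```python
import math

def escape_count(c, max_iter=10**7):
    """Number of z -> z*z + c iterations from z = 0 until |z| exceeds 2."""
    z = 0j
    for i in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

eps = 1e-4
approx = escape_count(complex(-0.75, eps)) * eps  # approaches pi as eps -> 0
```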

Outside mathematics

Describing physical phenomena

Although not a physical constant, π appears routinely in equations describing fundamental principles of the universe, often because of π's relationship to the circle and to spherical coordinate systems. A simple formula from the field of classical mechanics gives the approximate period T of a simple pendulum of length L, swinging with a small amplitude (g is the earth's gravitational acceleration):

T ≈ 2π √(L/g)

One of the key formulae of quantum mechanics is Heisenberg's uncertainty principle, which shows that the uncertainty in the measurement of a particle's position (Δx) and momentum (Δp) cannot both be arbitrarily small at the same time (where h is Planck's constant):

Δx Δp ≥ h/(4π)

The fact that π is approximately equal to 3 plays a role in the relatively long lifetime of orthopositronium. The inverse lifetime to lowest order in the fine-structure constant α is

1/τ = 2 (π² − 9) mα⁶ / (9π)

where m is the mass of the electron.

π is present in some structural engineering formulae, such as the buckling formula derived by Euler, which gives the maximum axial load F that a long, slender column of length L, modulus of elasticity E, and area moment of inertia I can carry without buckling:

F = π²EI/L²

The field of fluid dynamics contains π in Stokes' law, which approximates the frictional force F exerted on small, spherical objects of radius R, moving with velocity v in a fluid with dynamic viscosity η:

F = 6πηRv

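
The mechanics formulas above can be evaluated in a few lines of Python; the material and geometry values below are illustrative, not taken from the text:

```python
import math

# Small-amplitude pendulum, L = 1 m on Earth
g, L = 9.81, 1.0
T = 2 * math.pi * math.sqrt(L / g)               # about 2.006 s

# Euler buckling load for a slender column (illustrative values)
E, I, length = 200e9, 1e-8, 2.0                  # Pa, m^4, m
F_buckle = math.pi ** 2 * E * I / length ** 2    # about 4.93 kN

# Stokes drag on a 1-micron sphere at 1 mm/s in water
eta, R, v = 1.0e-3, 1e-6, 1e-3                   # Pa*s, m, m/s
F_drag = 6 * math.pi * eta * R * v               # about 1.9e-11 N
```
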
In electromagnetics, the vacuum permeability constant μ₀ appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation. Before 20 May 2019, it was defined as exactly

μ₀ = 4π × 10^−7 H/m

A relation for the speed of light in vacuum, c, can be derived from Maxwell's equations in the medium of classical vacuum using a relationship between μ₀ and the electric constant (vacuum permittivity) ε₀ in SI units:

c = 1/√(μ₀ε₀)

Under ideal conditions (uniform gentle slope on a homogeneously erodible substrate), the sinuosity of a meandering river approaches π. The sinuosity is the ratio between the actual length and the straight-line distance from source to mouth. Faster currents along the outside edges of a river's bends cause more erosion than along the inside edges, thus pushing the bends even farther out, and increasing the overall loopiness of the river. However, that loopiness eventually causes the river to double back on itself in places and "short-circuit", creating an ox-bow lake in the process. The balance between these two opposing factors leads to an average ratio of π between the actual length and the direct distance between source and mouth.

Memorizing digits

Piphilology is the practice of memorizing large numbers of digits of π, and world-records are kept by the Guinness World Records. The record for memorizing digits of π, certified by Guinness World Records, is 70,000 digits, recited in India by Rajveer Meena in 9 hours and 27 minutes on 21 March 2015. In 2006, Akira Haraguchi, a retired Japanese engineer, claimed to have recited 100,000 decimal places, but the claim was not verified by Guinness World Records.

One common technique is to memorize a story or poem in which the word lengths represent the digits of π: The first word has three letters, the second word has one, the third has four, the fourth has one, the fifth has five, and so on. Such memorization aids are called mnemonics. An early example of a mnemonic for pi, originally devised by English scientist James Jeans, is "How I want a drink, alcoholic of course, after the heavy lectures involving quantum mechanics." When a poem is used, it is sometimes referred to as a piem. Poems for memorizing π have been composed in several languages in addition to English. Record-setting π memorizers typically do not rely on poems, but instead use methods such as remembering number patterns and the method of loci.
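
The Jeans mnemonic can be checked mechanically; stripping punctuation and counting letters recovers the leading digits:

```python
# Word lengths in the Jeans mnemonic spell out the digits 3.14159265358979
mnemonic = ("How I want a drink, alcoholic of course, "
            "after the heavy lectures involving quantum mechanics")
digits = [len(word.strip(",.")) for word in mnemonic.split()]
assert digits == [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]
```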

A few authors have used the digits of π to establish a new form of constrained writing, where the word lengths are required to represent the digits of π. The Cadaeic Cadenza contains the first 3835 digits of π in this manner, and the full-length book Not a Wake contains 10,000 words, each representing one digit of π.

In popular culture

Pi Pie at Delft University
A pi pie. Pies are circular, and "pie" and π are homophones, making pie a frequent subject of pi puns.

Perhaps because of the simplicity of its definition and its ubiquitous presence in formulae, π has been represented in popular culture more than other mathematical constructs.

In the 2008 Open University and BBC documentary co-production, The Story of Maths, aired in October 2008 on BBC Four, British mathematician Marcus du Sautoy shows a visualization of the – historically first exact – formula for calculating π when visiting India and exploring its contributions to trigonometry.

In the Palais de la Découverte (a science museum in Paris) there is a circular room known as the pi room. On its wall are inscribed 707 digits of π. The digits are large wooden characters attached to the dome-like ceiling. The digits were based on an 1873 calculation by English mathematician William Shanks, which included an error beginning at the 528th digit. The error was detected in 1946 and corrected in 1949.

In Carl Sagan's 1985 novel Contact it is suggested that the creator of the universe buried a message deep within the digits of π. The digits of π have also been incorporated into the lyrics of the song "Pi" from the 2005 album Aerial by Kate Bush.

In the 1967 Star Trek episode "Wolf in the Fold", an out-of-control computer is contained by being instructed to "Compute to the last digit the value of π", even though "π is a transcendental figure without resolution".

In the United States, Pi Day falls on 14 March (written 3/14 in the US style) and is popular among students. π and its digital representation are often used by self-described "math geeks" for inside jokes among mathematically and technologically minded groups. Several college cheers at the Massachusetts Institute of Technology include "3.14159". Pi Day in 2015 was particularly significant because the date and time 3/14/15 9:26:53 reflected many more digits of pi. In parts of the world where dates are commonly noted in day/month/year format, 22 July represents "Pi Approximation Day", since 22/7 ≈ 3.142857.
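How good an approximation 22/7 is can be checked in a couple of lines; this sketch uses only the Python standard library:

```python
from fractions import Fraction
import math

approx = Fraction(22, 7)               # the "Pi Approximation Day" fraction
error = abs(float(approx) - math.pi)

print(float(approx))  # 3.142857142857143
print(error)          # ~0.00126, i.e. 22/7 matches pi to about 0.04%
```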

During the 2011 auction for Nortel's portfolio of valuable technology patents, Google made a series of unusually specific bids based on mathematical and scientific constants, including π.

In 1958 Albert Eagle proposed replacing π by τ (tau), where τ = π/2, to simplify formulas. However, no other authors are known to use τ in this way. Some people use a different value, τ = 2π = 6.28318..., arguing that τ, as the number of radians in one turn, or as the ratio of a circle's circumference to its radius rather than its diameter, is more natural than π and simplifies many formulas. Celebrations of this number, because it approximately equals 6.28, by making 28 June "Tau Day" and eating "twice the pie", have been reported in the media. However, this use of τ has not made its way into mainstream mathematics. Tau was added to the Python programming language (as math.tau) in version 3.6.
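The math.tau constant mentioned above can be inspected directly; a minimal sketch using only the standard library:

```python
import math

# Since Python 3.6, math.tau is defined as 2*pi,
# the number of radians in one full turn.
print(math.tau)                 # 6.283185307179586
print(math.tau == 2 * math.pi)  # True (doubling a float is exact)

# With tau, the circumference of a circle is simply tau * r:
radius = 2.0
print(math.tau * radius)        # 12.566370614359172
```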

In 1897, an amateur mathematician attempted to persuade the Indiana legislature to pass the Indiana Pi Bill, which described a method to square the circle and contained text that implied various incorrect values for π, including 3.2. The bill is notorious as an attempt to establish a value of a mathematical constant by legislative fiat. The bill was passed by the Indiana House of Representatives but rejected by the Senate, and so it did not become law.

In computer culture

In contemporary internet culture, individuals and organizations frequently pay homage to the number π. For instance, the computer scientist Donald Knuth let the version numbers of his program TeX approach π. The versions are 3, 3.1, 3.14, and so forth.

Tuesday, August 9, 2022

Climate of the Arctic

From Wikipedia, the free encyclopedia
 
A map of the Arctic. The red line is the 10°C isotherm in July, commonly used to define the Arctic region; also shown is the Arctic Circle. The white area shows the average minimum extent of sea ice in summer as of 1975.

The climate of the Arctic is characterized by long, cold winters and short, cool summers. There is a large amount of variability in climate across the Arctic, but all regions experience extremes of solar radiation in both summer and winter. Some parts of the Arctic are covered by ice (sea ice, glacial ice, or snow) year-round, and nearly all parts of the Arctic experience long periods with some form of ice on the surface.

The Arctic consists of ocean that is largely surrounded by land. As such, the climate of much of the Arctic is moderated by the ocean water, which can never have a temperature below −2 °C (28 °F). In winter, this relatively warm water, even though covered by the polar ice pack, keeps the North Pole from being the coldest place in the Northern Hemisphere, and it is also part of the reason that Antarctica is so much colder than the Arctic. In summer, the presence of the nearby water keeps coastal areas from warming as much as they might otherwise.

Overview of the Arctic

There are different definitions of the Arctic. The most widely used definition, the area north of the Arctic Circle, where the sun does not set on the June solstice, is used in astronomical and some geographical contexts. However, the two most widely used definitions in the context of climate are the area north of the northern tree line, and the area in which the average summer temperature is less than 10 °C (50 °F), which are nearly coincident over most land areas (NSIDC).

The nations which comprise the Arctic region.

This definition of the Arctic can be further divided into four different regions: the Arctic Basin, the Canadian Archipelago, Greenland, and the ice-free seas, each of which is discussed in the sections below.

Moving inland from the coast over mainland North America and Eurasia, the moderating influence of the Arctic Ocean quickly diminishes, and the climate transitions from the Arctic to subarctic, generally, in less than 500 kilometres (310 miles), and often over a much shorter distance.

History of Arctic climate observation

Due to the lack of major population centres in the Arctic, weather and climate observations from the region tend to be widely spaced and of short duration compared to the midlatitudes and tropics. Though the Vikings explored parts of the Arctic over a millennium ago, and small numbers of people have been living along the Arctic coast for much longer, scientific knowledge about the region was slow to develop; the large islands of Severnaya Zemlya, just north of the Taymyr Peninsula on the Russian mainland, were not discovered until 1913, and not mapped until the early 1930s. 

Early European exploration

Much of the historical exploration in the Arctic was motivated by the search for the Northwest and Northeast Passages. Sixteenth- and seventeenth-century expeditions were largely driven by traders in search of these shortcuts between the Atlantic and the Pacific. These forays into the Arctic did not venture far from the North American and Eurasian coasts, and were unsuccessful at finding a navigable route through either passage.

National and commercial expeditions continued to expand the detail on maps of the Arctic through the eighteenth century, but largely neglected other scientific observations. Expeditions from the 1760s to the middle of the 19th century were also led astray by attempts to sail north because of the belief by many at the time that the ocean surrounding the North Pole was ice-free. These early explorations did provide a sense of the sea ice conditions in the Arctic and occasionally some other climate-related information.

By the early 19th century some expeditions were making a point of collecting more detailed meteorological, oceanographic, and geomagnetic observations, but they remained sporadic. Beginning in the 1850s regular meteorological observations became more common in many countries, and the British navy implemented a system of detailed observation. As a result, expeditions from the second half of the nineteenth century began to provide a picture of the Arctic climate.

Early European observing efforts

A photograph of the first-IPY station at the Kara Sea site in winter

The first major effort by Europeans to study the meteorology of the Arctic was the First International Polar Year (IPY) in 1882 to 1883. Eleven nations provided support to establish twelve observing stations around the Arctic. The observations were not as widespread or long-lasting as would be needed to describe the climate in detail, but they provided the first cohesive look at the Arctic weather.

In 1884 the wreckage of the Jeannette, a ship abandoned three years earlier off Russia's eastern Arctic coast, was found on the coast of Greenland. This caused Fridtjof Nansen to realize that the sea ice was moving from the Siberian side of the Arctic to the Atlantic side. He decided to use this motion by freezing a specially designed ship, the Fram, into the sea ice and allowing it to be carried across the ocean. Meteorological observations were collected from the ship during its crossing from September 1893 to August 1896. This expedition also provided valuable insight into the circulation of the ice surface of the Arctic Ocean.

In the early 1930s the first significant meteorological studies were carried out on the interior of the Greenland ice sheet. These provided knowledge of perhaps the most extreme climate of the Arctic, and also the first suggestion that the ice sheet lies in a depression of the bedrock below (now known to be caused by the weight of the ice itself).

Fifty years after the first IPY, in 1932 to 1933, a second IPY was organized. This one was larger than the first, with 94 meteorological stations, but World War II delayed or prevented the publication of much of the data collected during it. Another significant moment in Arctic observing before World War II occurred in 1937 when the USSR established the first of over 30 North-Pole drifting stations. This station, like the later ones, was established on a thick ice floe and drifted for almost a year, its crew observing the atmosphere and ocean along the way.

Cold-War era observations

Following World War II, the Arctic, lying between the USSR and North America, became a front line of the Cold War, inadvertently and significantly furthering our understanding of its climate. Between 1947 and 1957, the United States and Canadian governments established a chain of stations along the Arctic coast known as the Distant Early Warning Line (DEWLINE) to provide warning of a Soviet nuclear attack. Many of these stations also collected meteorological data.

The DEWLINE site at Point Lay, Alaska

The Soviet Union was also interested in the Arctic and established a significant presence there by continuing the North-Pole drifting stations. This program operated continuously, with 30 stations in the Arctic from 1950 to 1991. These stations collected data that are valuable to this day for understanding the climate of the Arctic Basin. This map shows the location of Arctic research facilities during the mid-1970s and the tracks of drifting stations between 1958 and 1975.

Another benefit from the Cold War was the acquisition of observations from United States and Soviet naval voyages into the Arctic. In 1958 an American nuclear submarine, the Nautilus, was the first ship to reach the North Pole. In the decades that followed, submarines regularly roamed under the Arctic sea ice, collecting sonar observations of the ice thickness and extent as they went. These data became available after the Cold War, and have provided evidence of thinning of the Arctic sea ice. The Soviet navy also operated in the Arctic, including the sailing of the nuclear-powered icebreaker Arktika to the North Pole in 1977, the first time a surface ship reached the pole.

Scientific expeditions to the Arctic also became more common during the Cold-War decades, sometimes benefiting logistically or financially from the military interest. In 1966 the first deep ice core in Greenland was drilled at Camp Century, providing a glimpse of climate through the last ice age. This record was lengthened in the early 1990s when two deeper cores were taken from near the center of the Greenland Ice Sheet. Beginning in 1979 the Arctic Ocean Buoy Program (the International Arctic Buoy Program since 1991) has been collecting meteorological and ice-drift data across the Arctic Ocean with a network of 20 to 30 buoys.

Satellite era

The end of the Soviet Union in 1991 led to a dramatic decrease in regular observations from the Arctic. The Russian government ended the system of drifting North Pole stations, and closed many of the surface stations in the Russian Arctic. Likewise the United States and Canadian governments cut back on spending for Arctic observing as the perceived need for the DEWLINE declined. As a result, the most complete collection of surface observations from the Arctic is for the period 1960 to 1990.

The extensive array of satellite-based remote-sensing instruments now in orbit has helped to replace some of the observations that were lost after the Cold War, and has provided coverage that was impossible without them. Routine satellite observations of the Arctic began in the early 1970s, expanding and improving ever since. A result of these observations is a thorough record of sea-ice extent in the Arctic since 1979; the decreasing extent seen in this record (NASA, NSIDC), and its possible link to anthropogenic global warming, has helped increase interest in the Arctic in recent years. Today's satellite instruments provide routine views of not only cloud, snow, and sea-ice conditions in the Arctic, but also of other, perhaps less-expected, variables, including surface and atmospheric temperatures, atmospheric moisture content, winds, and ozone concentration.

Civilian scientific research on the ground has certainly continued in the Arctic, and it received a boost from 2007 to 2009, when nations around the world increased spending on polar research as part of the third International Polar Year. During these two years, thousands of scientists from over 60 nations co-operated to carry out over 200 projects to learn about physical, biological, and social aspects of the Arctic and Antarctic (IPY).

Modern researchers in the Arctic also benefit from computer models. These pieces of software are sometimes relatively simple, but often become highly complex as scientists try to include more and more elements of the environment to make the results more realistic. The models, though imperfect, often provide valuable insight into climate-related questions that cannot be tested in the real world. They are also used to try to predict future climate and the effect that human-caused changes to the atmosphere may have on the Arctic and beyond. Models have also been used, together with historical data, to produce a best estimate of the weather conditions over the entire globe during the last 50 years, filling in regions where no observations were made (ECMWF). These reanalysis datasets help compensate for the lack of observations over the Arctic.

Solar radiation

Variations in the length of the day with latitude and time of year. Atmospheric refraction makes the sun appear higher in the sky than it is geometrically, and therefore causes the extent of 24-hour day or night to differ slightly from the polar circles.
 
Variations in the duration of daylight with latitude and time of year. The smaller angle with which the sun intersects the horizon in the Polar regions, compared to the Tropics, leads to longer periods of twilight in the Polar regions, and accounts for the asymmetry of the plot.

Almost all of the energy available to the Earth's surface and atmosphere comes from the sun in the form of solar radiation (light from the sun, including invisible ultraviolet and infrared light). Variations in the amount of solar radiation reaching different parts of the Earth are a principal driver of global and regional climate. Latitude is the most important factor determining the yearly average amount of solar radiation reaching the top of the atmosphere; the incident solar radiation decreases smoothly from the Equator to the poles. Therefore, temperature tends to decrease with increasing latitude.

In addition, the length of each day, which is determined by the season, has a significant impact on the climate. The 24-hour days found near the poles in summer result in a large daily-average solar flux reaching the top of the atmosphere in these regions. On the June solstice, 36% more solar radiation reaches the top of the atmosphere over the course of the day at the North Pole than at the Equator. However, in the six months from the September equinox to the March equinox the North Pole receives no sunlight.
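The 36% figure can be recovered from standard insolation geometry. The sketch below is a back-of-the-envelope check, not the source's calculation: it ignores Earth's orbital eccentricity and atmospheric effects, and the solar-constant value is a nominal figure that cancels out of the ratio anyway.

```python
import math

S0 = 1361.0                      # nominal solar constant, W/m^2
delta = math.radians(23.44)      # solar declination on the June solstice

# At the North Pole the sun circles at a constant elevation equal to the
# declination, so the daily-mean flux is simply S0 * sin(delta).
q_pole = S0 * math.sin(delta)

# At the Equator the sun is up for 12 hours; averaging the standard
# daily-insolation formula over 24 hours gives (S0 / pi) * cos(delta).
q_equator = (S0 / math.pi) * math.cos(delta)

ratio = q_pole / q_equator       # equals pi * tan(delta)
print(ratio)                     # ~1.36: about 36% more at the Pole
```

The ratio reduces to π·tan(δ) ≈ 1.36, independent of the solar constant, which is where the 36% comes from.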

The climate of the Arctic also depends on the amount of sunlight reaching the surface, and being absorbed by the surface. Variations in cloud cover can cause significant variations in the amount of solar radiation reaching the surface at locations with the same latitude. Differences in surface albedo due for example to presence or absence of snow and ice strongly affect the fraction of the solar radiation reaching the surface that is reflected rather than absorbed.

Winter

During the winter months of November through February, the sun remains very low in the sky in the Arctic or does not rise at all. Where it does rise, the days are short, and the sun's low position in the sky means that, even at noon, not much energy is reaching the surface. Furthermore, most of the small amount of solar radiation that reaches the surface is reflected away by the bright snow cover. Cold snow reflects between 70% and 90% of the solar radiation that reaches it, and snow covers most of the Arctic land and ice surface in winter. These factors result in a negligible input of solar energy to the Arctic in winter; the only things keeping the Arctic from continuously cooling all winter are the transport of warmer air and ocean water into the Arctic from the south and the transfer of heat from the subsurface land and ocean (both of which gain heat in summer and release it in winter) to the surface and atmosphere.

Spring

Arctic days lengthen rapidly in March and April, and the sun rises higher in the sky, both bringing more solar radiation to the Arctic than in winter. During these early months of Northern Hemisphere spring most of the Arctic is still experiencing winter conditions, but with the addition of sunlight. The continued low temperatures, and the persisting white snow cover, mean that this additional energy reaching the Arctic from the sun is slow to have a significant impact because it is mostly reflected away without warming the surface. By May, temperatures are rising as 24-hour daylight reaches many areas, but most of the Arctic is still snow-covered, so the surface reflects more than 70% of the sun's energy that reaches it. The exceptions are the Norwegian Sea and southern Bering Sea, where the ocean is ice free, and some of the land areas adjacent to these seas, where the moderating influence of the open water helps melt the snow early.

In most of the Arctic the significant snow melt begins in late May or sometime in June. This begins a feedback, as melting snow reflects less solar radiation (50% to 60%) than dry snow, allowing more energy to be absorbed and the melting to take place faster. As the snow disappears on land, the underlying surfaces absorb even more energy, and begin to warm rapidly.

Summer

At the North Pole on the June solstice, around 21 June, the sun circles at 23.5° above the horizon. This marks noon in the Pole's year-long day; from then until the September equinox, the sun will slowly sink closer and closer to the horizon, offering less and less solar radiation to the Pole. This period of setting sun also roughly corresponds to summer in the Arctic.

This photograph, from a plane, shows a section of sea ice. The lighter blue areas are melt ponds, and the darkest areas are open water.

As the Arctic continues receiving energy from the sun during this time, the land, which is mostly free of snow by now, can warm up on clear days when the wind is not coming from the cold ocean. Over the Arctic Ocean the snow cover on the sea ice disappears and ponds of melt water start to form on the sea ice, further reducing the amount of sunlight the ice reflects and helping more ice melt. Around the edges of the Arctic Ocean the ice will melt and break up, exposing the ocean water, which absorbs almost all of the solar radiation that reaches it, storing the energy in the water column. By July and August, most of the land is bare and absorbs more than 80% of the sun's energy that reaches the surface. Where sea ice remains, in the central Arctic Basin and the straits between the islands in the Canadian Archipelago, the many melt ponds and lack of snow cause about half of the sun's energy to be absorbed, but this mostly goes toward melting ice since the ice surface cannot warm above freezing.

Frequent cloud cover, exceeding 80% frequency over much of the Arctic Ocean in July, reduces the amount of solar radiation that reaches the surface by reflecting much of it before it gets there. Unusually clear periods can lead to increased sea-ice melt or higher temperatures (NSIDC).

Greenland: The interior of Greenland differs from the rest of the Arctic. Low spring and summer cloud frequency and the high elevation, which reduces the amount of solar radiation absorbed or scattered by the atmosphere, combine to give this region the most incoming solar radiation at the surface out of anywhere in the Arctic. However, the high elevation, and corresponding lower temperatures, help keep the bright snow from melting, limiting the warming effect of all this solar radiation.


Autumn

In September and October the days get rapidly shorter, and in northern areas the sun disappears from the sky entirely. As the amount of solar radiation available to the surface rapidly decreases, the temperatures follow suit. The sea ice begins to refreeze, and eventually gets a fresh snow cover, causing it to reflect even more of the dwindling amount of sunlight reaching it. Likewise, beginning in early September, first the northern and then the southern land areas receive their winter snow cover, which, combined with the reduced solar radiation at the surface, ensures an end to the warm days those areas may experience in summer. By November, winter is in full swing in most of the Arctic, and the small amount of solar radiation still reaching the region does not play a significant role in its climate.

Temperature

Average January temperature in the Arctic
 
Average July temperature in the Arctic

The Arctic is often perceived as a region stuck in a permanent deep freeze. While much of the region does experience very low temperatures, there is considerable variability with both location and season. Winter temperatures average below freezing over all of the Arctic except for small regions in the southern Norwegian and Bering Seas, which remain ice free throughout the winter. Average temperatures in summer are above freezing over all regions except the central Arctic Basin, where sea ice survives through the summer, and interior Greenland.

The maps on the right show the average temperature over the Arctic in January and July, generally the coldest and warmest months. These maps were made with data from the NCEP/NCAR Reanalysis, which incorporates available data into a computer model to create a consistent global data set. Neither the models nor the data are perfect, so these maps may differ from other estimates of surface temperatures; in particular, most Arctic climatologies show temperatures over the central Arctic Ocean in July averaging just below freezing, a few degrees lower than these maps show (USSR, 1985). An earlier climatology of temperatures in the Arctic, based entirely on available data, is shown in this map from the CIA Polar Regions Atlas.

Record low temperatures in the Northern Hemisphere

The coldest location in the Northern Hemisphere is not in the Arctic, but rather in the interior of Russia's Far East, in the upper-right quadrant of the maps. This is due to the region's continental climate, far from the moderating influence of the ocean, and to the valleys in the region that can trap cold, dense air and create strong temperature inversions, where the temperature increases, rather than decreases, with height. The lowest officially recorded temperature in the Northern Hemisphere is −67.7 °C (−89.9 °F), recorded in Oymyakon on 6 February 1933 and in Verkhoyansk on 5 and 7 February 1892. However, this region is not part of the Arctic because its continental climate also allows it to have warm summers, with an average July temperature of 15 °C (59 °F). In the figure below showing station climatologies, the plot for Yakutsk is representative of this part of the Far East; Yakutsk has a slightly less extreme climate than Verkhoyansk.

Monthly and annual climatologies of eight locations in the Arctic and sub-Arctic

Arctic Basin

The Arctic Basin is typically covered by sea ice year round, which strongly influences its summer temperatures. It also experiences the longest period without sunlight of any part of the Arctic, and the longest period of continuous sunlight, though the frequent cloudiness in summer reduces the importance of this solar radiation.

Despite its location centered on the North Pole, and the long period of darkness this brings, this is not the coldest part of the Arctic. In winter, the heat transferred from the −2 °C (28 °F) water through cracks in the ice and areas of open water helps to moderate the climate some, keeping average winter temperatures around −30 to −35 °C (−22 to −31 °F). Minimum temperatures in this region in winter are around −50 °C (−58 °F).

In summer, the sea ice keeps the surface from warming above freezing. Sea ice is mostly fresh water since the salt is rejected by the ice as it forms, so the melting ice has a temperature of 0 °C (32 °F), and any extra energy from the sun goes to melting more ice, not to warming the surface. Air temperatures, at the standard measuring height of about 2 meters above the surface, can rise a few degrees above freezing between late May and September, though they tend to be within a degree of freezing, with very little variability during the height of the melt season.

In the figure above showing station climatologies, the lower-left plot, for NP 7–8, is representative of conditions over the Arctic Basin. This plot shows data from the Soviet North Pole drifting stations, numbers 7 and 8. It shows the average temperature in the coldest months is in the −30s, and the temperature rises rapidly from April to May; July is the warmest month, and the narrowing of the maximum and minimum temperature lines shows the temperature does not vary far from freezing in the middle of summer; from August through December the temperature drops steadily. The small daily temperature range (the length of the vertical bars) results from the fact that the sun's elevation above the horizon does not change much or at all in this region during one day.

Much of the winter variability in this region is due to clouds. Since there is no sunlight, the thermal radiation emitted by the atmosphere is one of this region's main sources of energy in winter. A cloudy sky can emit much more energy toward the surface than a clear sky, so when it is cloudy in winter, this region tends to be warm, and when it is clear, this region cools quickly.

Canadian Archipelago

In winter, the Canadian Archipelago experiences temperatures similar to those in the Arctic Basin, but in the summer months of June to August, the presence of so much land in this region allows it to warm more than the ice-covered Arctic Basin. In the station-climatology figure above, the plot for Resolute is typical of this region. The presence of the islands, most of which lose their snow cover in summer, allows the summer temperatures to rise well above freezing. The average high temperature in summer approaches 10 °C (50 °F), and the average low temperature in July is above freezing, though temperatures below freezing are observed every month of the year.

The straits between these islands often remain covered by sea ice throughout the summer. This ice acts to keep the surface temperature at freezing, just as it does over the Arctic Basin, so a location on a strait would likely have a summer climate more like the Arctic Basin, but with higher maximum temperatures because of winds off of the nearby warm islands.

Greenland

Greenland's ice sheet thickness. Note that much of the area in green has permanent snow cover; it is simply less than 10 m (33 ft) thick.

Climatically, Greenland is divided into two very separate regions: the coastal region, much of which is ice free, and the inland ice sheet. The Greenland Ice Sheet covers about 80% of Greenland, extending to the coast in places, and has an average elevation of 2,100 m (6,900 ft) and a maximum elevation of 3,200 m (10,500 ft). Much of the ice sheet remains below freezing all year, and it has the coldest climate of any part of the Arctic. Coastal areas can be affected by nearby open water, or by heat transfer through sea ice from the ocean, and many parts lose their snow cover in summer, allowing them to absorb more solar radiation and warm more than the interior.

Coastal regions on the northern half of Greenland experience winter temperatures similar to or slightly warmer than the Canadian Archipelago, with average January temperatures of −30 to −25 °C (−22 to −13 °F). These regions are slightly warmer than the Archipelago because of their closer proximity to areas of thin, first-year sea ice cover or to open ocean in the Baffin Bay and Greenland Sea.

The coastal regions in the southern part of the island are influenced more by open ocean water and by frequent passage of cyclones, both of which help to keep the temperature there from being as low as in the north. As a result of these influences, the average temperature in these areas in January is considerably higher, between about −20 and −4 °C (−4 and 25 °F).

The interior ice sheet escapes much of the influence of heat transfer from the ocean or from cyclones, and its high elevation also acts to give it a colder climate since temperatures tend to decrease with elevation. The result is winter temperatures that are lower than anywhere else in the Arctic, with average January temperatures of −45 to −30 °C (−49 to −22 °F), depending on location and on which data set is viewed. Minimum temperatures in winter over the higher parts of the ice sheet can drop below −60 °C (−76 °F) (CIA, 1978). In the station climatology figure above, the Centrale plot is representative of the high Greenland Ice Sheet.

In summer, the coastal regions of Greenland experience temperatures similar to the islands in the Canadian Archipelago, averaging just a few degrees above freezing in July, with slightly higher temperatures in the south and west than in the north and east. The interior ice sheet remains snow-covered throughout the summer, though significant portions do experience some snow melt. This snow cover, combined with the ice sheet's elevation, helps to keep temperatures here lower, with July averages between −12 and 0 °C (10 and 32 °F). Along the coast, temperatures are kept from varying too much by the moderating influence of the nearby water or melting sea ice. In the interior, temperatures are kept from rising much above freezing because of the snow-covered surface but can drop to −30 °C (−22 °F) even in July. Temperatures above 20 °C are rare but do sometimes occur in the far south and south-west coastal areas.

Ice-free seas

Most Arctic seas are covered by ice for part of the year (see the map in the sea-ice section below); 'ice-free' here refers to those which are not covered year-round.

The only regions that remain ice-free throughout the year are the southern part of the Barents Sea and most of the Norwegian Sea. These have very small annual temperature variations; average winter temperatures are kept near or above the freezing point of sea water (about −2 °C (28 °F)) since the unfrozen ocean cannot have a temperature below that, and summer temperatures in the parts of these regions that are considered part of the Arctic average less than 10 °C (50 °F). During the 46-year period when weather records were kept on Shemya Island, in the southern Bering Sea, the average temperature of the coldest month (February) was −0.6 °C (30.9 °F) and that of the warmest month (August) was 9.7 °C (49.5 °F); temperatures never dropped below −17 °C (1 °F) or rose above 18 °C (64 °F) (Western Regional Climate Center).

The rest of the seas have ice cover for some part of the winter and spring, but lose that ice during the summer. These regions have summer temperatures between about 0 and 8 °C (32 and 46 °F). The winter ice cover allows temperatures to drop much lower in these regions than in the regions that are ice-free all year. Over most of the seas that are ice-covered seasonally, winter temperatures average between about −30 and −15 °C (−22 and 5 °F). Those areas near the sea-ice edge will remain somewhat warmer due to the moderating influence of the nearby open water. In the station-climatology figure above, the plots for Point Barrow, Tiksi, Murmansk, and Isfjord are typical of land areas adjacent to seas that are ice-covered seasonally. The presence of the land allows temperatures to reach slightly more extreme values than the seas themselves.

An essentially ice-free Arctic in the month of September may become a reality anywhere from 2050 to 2100.

Precipitation

Precipitation in most of the Arctic falls only as rain and snow. Over most areas snow is the dominant, or only, form of precipitation in winter, while both rain and snow fall in summer (Serreze and Barry 2005). The main exception to this general description is the high part of the Greenland Ice Sheet, which receives all of its precipitation as snow, in all seasons.

Accurate climatologies of precipitation amount are more difficult to compile for the Arctic than climatologies of other variables such as temperature and pressure. All variables are measured at relatively few stations in the Arctic, but precipitation observations are made more uncertain by the difficulty of catching all of the falling snow in a gauge. Typically some falling snow is kept from entering precipitation gauges by winds, causing an underreporting of precipitation amounts in regions that receive a large fraction of their precipitation as snowfall. Corrections are made to the data to account for this uncaught precipitation, but they are not perfect and introduce some error into the climatologies (Serreze and Barry 2005).

The observations that are available show that precipitation amounts vary by about a factor of 10 across the Arctic, with some parts of the Arctic Basin and Canadian Archipelago receiving less than 150 mm (5.9 in) of precipitation annually, and parts of southeast Greenland receiving over 1,200 mm (47 in) annually. Most regions receive less than 500 mm (20 in) annually. For comparison, annual precipitation averaged over the whole planet is about 1,000 mm (39 in) (see Precipitation). Unless otherwise noted, all precipitation amounts given in this article are liquid-equivalent amounts, meaning that frozen precipitation is melted before it is measured.
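The liquid-equivalent convention above can be sketched as a simple conversion. The snow-to-liquid ratio used here is an assumption for illustration only: it varies widely with snow density, and 10:1 is merely a common rough rule of thumb, not a value from this article.

```python
# Illustrative sketch (not from the source): converting a measured snow
# depth to its liquid-equivalent precipitation amount.
# The 10:1 snow-to-liquid ratio is an assumed typical value.

def liquid_equivalent_mm(snow_depth_mm: float, ratio: float = 10.0) -> float:
    """Liquid-equivalent precipitation (mm) for a given snow depth (mm)."""
    return snow_depth_mm / ratio

# 150 mm of fresh snow at an assumed 10:1 ratio melts to about 15 mm of
# water, roughly one winter month's total over the central Arctic Basin.
print(liquid_equivalent_mm(150))  # 15.0
```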

Arctic Basin

The Arctic Basin is one of the driest parts of the Arctic. Most of the Basin receives less than 250 mm (9.8 in) of precipitation per year, qualifying it as a desert. Smaller regions of the Arctic Basin just north of Svalbard and the Taymyr Peninsula receive up to about 400 mm (16 in) per year.

Monthly precipitation totals over most of the Arctic Basin average about 15 mm (0.59 in) from November through May, and rise to 20 to 30 mm (0.79 to 1.18 in) in July, August, and September. The dry winters result from the low frequency of cyclones in the region during that time, and the region's distance from warm open water that could provide a source of moisture (Serreze and Barry 2005). Despite the low precipitation totals in winter, precipitation frequency is higher in January, when 25% to 35% of observations reported precipitation, than in July, when 20% to 25% of observations reported precipitation (Serreze and Barry 2005). Much of the precipitation reported in winter is very light, possibly diamond dust. The number of days with measurable precipitation (more than 0.1 mm [0.004 in] in a day) is slightly greater in July than in January (USSR 1985). Of January observations reporting precipitation, 95% to 99% of them indicate it was frozen. In July, 40% to 60% of observations reporting precipitation indicate it was frozen (Serreze and Barry 2005).

The parts of the Basin just north of Svalbard and the Taymyr Peninsula are exceptions to the general description just given. These regions receive many weakening cyclones from the North-Atlantic storm track, which is most active in winter. As a result, precipitation amounts over these parts of the basin are larger in winter than those given above. The warm air transported into these regions also means that liquid precipitation is more common than over the rest of the Arctic Basin in both winter and summer.

Canadian Archipelago

Annual precipitation totals in the Canadian Archipelago increase dramatically from north to south. The northern islands receive similar amounts, with a similar annual cycle, to the central Arctic Basin. Over Baffin Island and the smaller islands around it, annual totals increase from just over 200 mm (7.9 in) in the north to about 500 mm (20 in) in the south, where cyclones from the North Atlantic are more frequent.

Greenland

Annual precipitation amounts given below for Greenland are from Figure 6.5 in Serreze and Barry (2005). Due to the scarcity of long-term weather records in Greenland, especially in the interior, this precipitation climatology was developed by analyzing the annual layers in the snow to determine annual snow accumulation (in liquid equivalent) and was modified on the coast with a model to account for the effects of the terrain on precipitation amounts.

The southern third of Greenland protrudes into the North-Atlantic storm track, a region frequently influenced by cyclones. These frequent cyclones lead to larger annual precipitation totals than over most of the Arctic. This is especially true near the coast, where the terrain rises from sea level to over 2,500 m (8,200 ft), enhancing precipitation due to orographic lift. The result is annual precipitation totals of 400 mm (16 in) over the southern interior to over 1,200 mm (47 in) near the southern and southeastern coasts. Some locations near these coasts, where the terrain is particularly conducive to causing orographic lift, receive up to 2,200 mm (87 in) of precipitation per year. More precipitation falls in winter, when the storm track is most active, than in summer.

The west coast of the central third of Greenland is also influenced by some cyclones and orographic lift, and precipitation totals over the ice sheet slope near this coast are up to 600 mm (24 in) per year. The east coast of the central third of the island receives between 200 and 600 mm (7.9 and 23.6 in) of precipitation per year, with increasing amounts from north to south. Precipitation over the north coast is similar to that over the central Arctic Basin.

The interior of the central and northern Greenland Ice Sheet is the driest part of the Arctic. Annual totals here range from less than 100 to about 200 mm (4 to 8 in). This region is continuously below freezing, so all precipitation falls as snow, with more in summer than in winter (USSR 1985).

Ice-free seas

The Chukchi, Laptev, and Kara Seas and Baffin Bay receive somewhat more precipitation than the Arctic Basin, with annual totals between 200 and 400 mm (7.9 and 15.7 in); annual cycles in the Chukchi and Laptev Seas and Baffin Bay are similar to those in the Arctic Basin, with more precipitation falling in summer than in winter, while the Kara Sea has a smaller annual cycle due to enhanced winter precipitation caused by cyclones from the North Atlantic storm track.

The Labrador, Norwegian, Greenland, and Barents Seas and Denmark and Davis Straits are strongly influenced by the cyclones in the North Atlantic storm track, which is most active in winter. As a result, these regions receive more precipitation in winter than in summer. Annual precipitation totals increase quickly from about 400 mm (16 in) in the northern to about 1,400 mm (55 in) in the southern part of the region. Precipitation is frequent in winter, with measurable totals falling on an average of 20 days each January in the Norwegian Sea (USSR 1985). The Bering Sea is influenced by the North Pacific storm track, and has annual precipitation totals between 400 and 800 mm (16 and 31 in), also with a winter maximum.

Sea ice

Estimates of the absolute and average minimum and maximum extent of sea ice in the Arctic as of the mid-1970s

Sea ice is frozen sea water that floats on the ocean's surface. It is the dominant surface type throughout the year in the Arctic Basin, and covers much of the ocean surface in the Arctic at some point during the year. The ice may be bare ice, or it may be covered by snow or ponds of melt water, depending on location and time of year. Sea ice is relatively thin, generally less than about 4 m (13 ft), with thicker ridges (NSIDC). NOAA's North Pole Web Cams have been tracking the Arctic summer sea-ice transitions through spring thaw, summer melt ponds, and autumn freeze-up since the first webcam was deployed in 2002.

Sea ice is important to the climate and the ocean in a variety of ways. It reduces the transfer of heat from the ocean to the atmosphere; it causes less solar energy to be absorbed at the surface, and provides a surface on which snow can accumulate, which further decreases the absorption of solar energy; since salt is rejected from the ice as it forms, the ice increases the salinity of the ocean's surface water where it forms and decreases the salinity where it melts, both of which can affect the ocean's circulation.

The map at right shows the areas covered by sea ice when it is at its maximum extent (March) and its minimum extent (September). This map was made in the 1970s, and the extent of sea ice has decreased since then (see below), but this still gives a reasonable overview. At its maximum extent, in March, sea ice covers about 15 million km2 (5.8 million sq mi) of the Northern Hemisphere, nearly as much area as the largest country, Russia.

Winds and ocean currents cause the sea ice to move. The typical pattern of ice motion is shown on the map at right. On average, these motions carry sea ice from the Russian side of the Arctic Ocean into the Atlantic Ocean through the area east of Greenland, while they cause the ice on the North American side to rotate clockwise, sometimes for many years.

Wind

Wind speeds over the Arctic Basin and the western Canadian Archipelago average between 4 and 6 metres per second (14 and 22 kilometres per hour; 9 and 13 miles per hour) in all seasons. Stronger winds do occur in storms, often causing whiteout conditions, but they rarely exceed 25 m/s (90 km/h; 56 mph) in these areas.

During all seasons, the strongest average winds are found in the North-Atlantic seas, Baffin Bay, and the Bering and Chukchi Seas, where cyclone activity is most common. On the Atlantic side, the winds are strongest in winter, averaging 7 to 12 m/s (25 to 43 km/h; 16 to 27 mph), and weakest in summer, averaging 5 to 7 m/s (18 to 25 km/h; 11 to 16 mph). On the Pacific side they average 6 to 9 m/s (22 to 32 km/h; 14 to 20 mph) year round. Maximum wind speeds in the Atlantic region can approach 50 m/s (180 km/h; 110 mph) in winter.
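The paired unit values above follow from the standard conversion factors, which can be checked with a short sketch (the function names are illustrative, not from any particular library):

```python
# Standard, exact conversion factors: 1 m/s = 3.6 km/h, and
# 1 mph = 0.44704 m/s (by definition of the international mile).

def ms_to_kmh(ms: float) -> float:
    """Convert metres per second to kilometres per hour."""
    return ms * 3.6

def ms_to_mph(ms: float) -> float:
    """Convert metres per second to miles per hour."""
    return ms / 0.44704

# The winter Atlantic-side average of 7 to 12 m/s:
print(round(ms_to_kmh(7)), round(ms_to_kmh(12)))  # 25 43  (km/h)
print(round(ms_to_mph(7)), round(ms_to_mph(12)))  # 16 27  (mph)
```

Rounding to whole units reproduces the ranges quoted in the text, e.g. 7 to 12 m/s is 25 to 43 km/h and 16 to 27 mph.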

Changes in Arctic climate

Past climates

Northern Hemisphere glaciation during the last ice ages. The buildup of ice sheets 3 to 4 kilometres thick caused a sea-level lowering of about 120 m.

As with the rest of the planet, the climate in the Arctic has changed throughout time. About 55 million years ago it is thought that parts of the Arctic supported subtropical ecosystems[10] and that Arctic sea-surface temperatures rose to about 23 °C (73 °F) during the Paleocene–Eocene Thermal Maximum. In the more recent past, the planet has experienced a series of ice ages and interglacial periods over about the last 2 million years, with the last ice age reaching its maximum extent about 18,000 years ago and ending by about 10,000 years ago. During these ice ages, large areas of northern North America and Eurasia were covered by ice sheets similar to the one found today on Greenland; Arctic climate conditions would have extended much further south, and conditions in the present-day Arctic region were likely colder. Temperature proxies suggest that over the last 8,000 years the climate has been stable, with globally averaged temperature variations of less than about 1 °C (1.8 °F) (see Paleoclimate).

Global warming

The image above shows where average air temperatures (October 2010 – September 2011) were up to 3 degrees Celsius above (red) or below (blue) the long-term average (1981–2010).
 
The map shows the 10-year average (2000–2009) global mean temperature anomaly relative to the 1951–1980 mean. The largest temperature increases are in the Arctic and the Antarctic Peninsula. Source: NASA Earth Observatory

There are several reasons to expect that climate changes, from whatever cause, may be enhanced in the Arctic, relative to the mid-latitudes and tropics. First is the ice-albedo feedback, whereby an initial warming causes snow and ice to melt, exposing darker surfaces that absorb more sunlight, leading to more warming. Second, because colder air holds less water vapour than warmer air, in the Arctic, a greater fraction of any increase in radiation absorbed by the surface goes directly into warming the atmosphere, whereas in the tropics, a greater fraction goes into evaporation. Third, because the Arctic temperature structure inhibits vertical air motions, the depth of the atmospheric layer that has to warm in order to cause warming of near-surface air is much shallower in the Arctic than in the tropics. Fourth, a reduction in sea-ice extent will lead to more energy being transferred from the warm ocean to the atmosphere, enhancing the warming. Finally, changes in atmospheric and oceanic circulation patterns caused by a global temperature change may cause more heat to be transferred to the Arctic, enhancing Arctic warming.
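The ice-albedo feedback described first is a positive feedback, and the way such a feedback amplifies an initial warming can be illustrated with a toy calculation. This is a generic textbook feedback formula under an assumed constant gain, not a climate model and not from this article: with gain f, each increment of warming triggers a further increment f times as large, and the geometric series dT0 · (1 + f + f² + …) sums to dT0 / (1 − f).

```python
# Toy illustration (assumed constant feedback gain, not a climate model):
# a positive feedback with gain 0 <= f < 1 amplifies an initial warming
# dT0 to dT0 / (1 - f), the sum of the series dT0 * (1 + f + f^2 + ...).

def amplified_warming(dT0: float, f: float) -> float:
    """Total warming after a positive feedback of gain f (0 <= f < 1)."""
    if not 0.0 <= f < 1.0:
        raise ValueError("gain must satisfy 0 <= f < 1 for convergence")
    return dT0 / (1.0 - f)

# A 1 degree initial warming with an assumed gain of 0.5 doubles to 2.
print(amplified_warming(1.0, 0.5))  # 2.0
```

The sketch also shows why feedbacks matter regionally: a larger gain in the Arctic (from the mechanisms listed above) yields a larger amplification factor there than at lower latitudes, for the same initial forcing.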

According to the Intergovernmental Panel on Climate Change (IPCC), "warming of the climate system is unequivocal", and the global-mean temperature has increased by 0.6 to 0.9 °C (1.1 to 1.6 °F) over the last century. This report also states that "most of the observed increase in global average temperatures since the mid-20th century is very likely [greater than 90% chance] due to the observed increase in anthropogenic greenhouse gas concentrations." The IPCC also indicates that, over the last 100 years, the annually averaged temperature in the Arctic has increased by almost twice as much as the global mean temperature has. In 2009, NASA reported that 45 percent or more of the observed warming in the Arctic since 1976 was likely a result of changes in tiny airborne particles called aerosols.

Climate models predict that the temperature increase in the Arctic over the next century will continue to be about twice the global average temperature increase. By the end of the 21st century, the annual average temperature in the Arctic is predicted to increase by 2.8 to 7.8 °C (5.0 to 14.0 °F), with more warming in winter (4.3 to 11.4 °C (7.7 to 20.5 °F)) than in summer. Decreases in sea-ice extent and thickness are expected to continue over the next century, with some models predicting the Arctic Ocean will be free of sea ice in late summer by the mid to late part of the century.

A study published in the journal Science in September 2009 determined that temperatures in the Arctic are higher presently than they have been at any time in the previous 2,000 years. Samples from ice cores, tree rings and lake sediments from 23 sites were used by the team, led by Darrell Kaufman of Northern Arizona University, to provide snapshots of the changing climate. Geologists were able to track the summer Arctic temperatures as far back as the time of the Romans by studying natural signals in the landscape. The results highlighted that for around 1,900 years temperatures steadily dropped, caused by precession of Earth's orbit that caused the planet to be slightly farther away from the Sun during summer in the Northern Hemisphere. These orbital changes led to a cold period known as the Little Ice Age during the 17th, 18th and 19th centuries. However, during the last 100 years temperatures have been rising, despite the fact that the continued changes in Earth's orbit would have driven further cooling. The largest rises have occurred since 1950, with four of the five warmest decades in the last 2,000 years occurring between 1950 and 2000. The last decade was the warmest in the record.
