These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis.
Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).
History
Archimedes used the method of exhaustion to compute the area inside a circle by finding the area of regular polygons with more and more sides. This was an early but informal example of a limit, one of the most basic concepts in mathematical analysis.
In the 18th century, Euler introduced the notion of mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816, but Bolzano's work did not become widely known until the 1870s. In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work, particularly by Euler. Instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations and harmonic analysis. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis.
In the middle of the 19th century Riemann introduced his theory of integration. The last third of the century saw the arithmetization of analysis by Weierstrass, who thought that geometric reasoning was inherently misleading, and introduced the "epsilon-delta" definition of limit.
Then, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Dedekind then constructed the real numbers by Dedekind cuts, in which irrational numbers are formally defined, which serve to fill the "gaps" between rational numbers, thereby creating a complete set: the continuum of real numbers, which had already been developed by Simon Stevin in terms of decimal expansions. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the "size" of the set of discontinuities of real functions.
Taking the triangle inequality d(x, z) ≤ d(x, y) + d(y, z) (the third defining property of a metric) and letting z = x, it can be shown that d(x, y) ≥ 0 (non-negative), since d(x, x) = 0 ≤ 2d(x, y).
Sequences and limits
A sequence is an ordered list. Like a set, it contains members (also called elements, or terms). Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Most precisely, a sequence can be defined as a function whose domain is a countable, totally ordered set, such as the natural numbers.
One of the most important properties of a sequence is convergence. Informally, a sequence converges if it has a limit. Continuing informally, a (singly-infinite) sequence has a limit if it approaches some point x, called the limit, as n becomes very large. That is, for an abstract sequence (an) (with n running from 1 to infinity understood) the distance between an and x approaches 0 as n → ∞, denoted limn→∞ an = x.
Main branches
Real analysis
Real analysis (traditionally, the theory of functions of a real variable) is a branch of mathematical analysis dealing with the real numbers and real-valued functions of a real variable. In particular, it deals with the analytic properties of real functions and sequences, including convergence and limits of sequences of real numbers, the calculus of the real numbers, and continuity, smoothness and related properties of real-valued functions.
Complex analysis
Complex analysis is particularly concerned with the analytic functions of complex variables (or, more generally, meromorphic functions). Because the separate real and imaginary parts of any analytic function must satisfy Laplace's equation, complex analysis is widely applicable to two-dimensional problems in physics.
Functional analysis
Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (e.g. inner product, norm, topology, etc.) and the linear operators acting upon these spaces and respecting these structures in a suitable sense. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining continuous, unitary etc. operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.
Differential equations
A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. Differential equations play a prominent role in engineering, physics, economics, biology, and other disciplines.
Differential equations arise in many areas of science and technology, specifically whenever a deterministic
relation involving some continuously varying quantities (modeled by
functions) and their rates of change in space or time (expressed as
derivatives) is known or postulated. This is illustrated in classical mechanics, where the motion of a body is described by its position and velocity as the time value varies. Newton's laws
allow one (given the position, velocity, acceleration and various
forces acting on the body) to express these variables dynamically as a
differential equation for the unknown position of the body as a function
of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly.
Measure theory
A measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size.
In this sense, a measure is a generalization of the concepts of length,
area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the n-dimensional Euclidean space ℝn. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word – specifically, 1.
Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X. It must assign 0 to the empty set and be (countably)
additive: the measure of a 'large' subset that can be decomposed into a
finite (or countable) number of 'smaller' disjoint subsets, is the sum
of the measures of the "smaller" subsets. In general, if one wants to
associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets; the so-called measurable subsets, which are required to form a σ-algebra. This means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets
in a Euclidean space, on which the Lebesgue measure cannot be defined
consistently, are necessarily complicated in the sense of being badly
mixed up with their complement. Indeed, their existence is a non-trivial
consequence of the axiom of choice.
Numerical analysis
Modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.
Numerical analysis naturally finds applications in all fields of
engineering and the physical sciences, but in the 21st century, the life
sciences and even the arts have adopted elements of scientific
computations. Ordinary differential equations appear in celestial mechanics (planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
Other topics
Clifford analysis, the study of Clifford-valued functions that are annihilated by Dirac or Dirac-like operators, termed in general as monogenic or Clifford analytic functions.
p-adic analysis, the study of analysis within the context of p-adic numbers, which differs in some interesting and surprising ways from its real and complex counterparts.
Tropical analysis (or idempotent analysis) – analysis in the context of the semiring of the max-plus algebra
where the lack of an additive inverse is compensated somewhat by the
idempotent rule A + A = A. When transferred to the tropical setting,
many nonlinear problems become linear.
Signal processing
When processing signals, such as audio, radio waves, light waves, seismic waves,
and even images, Fourier analysis can isolate individual components of a
compound waveform, concentrating them for easier detection or removal.
A large family of signal processing techniques consist of
Fourier-transforming a signal, manipulating the Fourier-transformed data
in a simple way, and reversing the transformation.
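As a minimal illustration of this idea (a naive discrete Fourier transform written in pure Python for the sketch; real signal processing would use an FFT library), the frequency components of a compound waveform can be isolated from its spectrum:

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform, O(N^2) -- enough for a small demo."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A compound waveform: a frequency-3 component plus a weaker frequency-7 component.
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 7 * t / n)
          for t in range(n)]

magnitudes = [abs(c) for c in dft(signal)]
# The two dominant bins (below the Nyquist frequency n/2) recover the components.
peaks = sorted(sorted(range(n // 2), key=lambda k: -magnitudes[k])[:2])
print(peaks)  # [3, 7]
```

Manipulating the transformed data (for example zeroing unwanted bins) and inverting the transform is exactly the filtering scheme described above.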
Other areas of mathematics
Techniques from analysis are used in many areas of mathematics, including:
Differential geometry, the application of calculus to specific mathematical spaces known as manifolds that possess a complicated internal structure but behave in a simple manner locally.
In mathematics, a series
is, roughly speaking, a description of the operation of adding
infinitely many quantities, one after the other, to a given starting
quantity. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics), through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance.
For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical by mathematicians and philosophers. This paradox was resolved using the concept of a limit during the 19th century. Zeno's paradox of Achilles and the tortoise
illustrates this counterintuitive property of infinite sums: Achilles
runs after a tortoise, but when he reaches the position of the tortoise
at the beginning of the race, the tortoise has reached a second
position; when he reaches this second position, the tortoise is at a
third position, and so on. Zeno concluded that Achilles could never
reach the tortoise, and thus that movement does not exist. Zeno divided
the race into infinitely many sub-races, each requiring a finite amount
of time, so that the total time for Achilles to catch the tortoise is
given by a series. The resolution of the paradox is that, although the
series has an infinite number of terms, it has a finite sum, which gives
the time necessary for Achilles to catch the tortoise.
In modern terminology, any (ordered) infinite sequence of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the terms one after the other. To emphasize that there are an infinite number of terms, a series may be called an infinite series. Such a series is represented (or denoted) by an expression like a1 + a2 + a3 + ⋯.
The infinite sequence of additions implied by a series cannot be
effectively carried on (at least in a finite amount of time). However,
if the set to which the terms and their finite sums belong has a notion
of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as n tends to infinity (if the limit exists) of the finite sums of the first n terms of the series, which are called the nth partial sums of the series. That is, ∑i=1∞ ai = limn→∞ (a1 + a2 + ⋯ + an).
When this limit exists, one says that the series is convergent or summable, or that the sequence is summable. In this case, the limit is called the sum of the series. Otherwise, the series is said to be divergent.
Generally, the terms of a series come from a ring, often the field of the real numbers or the field of the complex numbers. In this case, the set of all series is itself a ring (and even an associative algebra), in which the addition consists of adding the series term by term, and the multiplication is the Cauchy product.
Basic properties
An infinite series or simply a series is an infinite sum, represented by an infinite expression of the form a0 + a1 + a2 + ⋯, where (an) is any ordered sequence of terms, such as numbers, functions, or anything else that can be added (an abelian group). This is an expression that is obtained from the list of terms by laying them side by side, and conjoining them with the symbol "+". A series may also be represented by using summation notation, such as ∑n=0∞ an.
If an abelian group A of terms has a concept of limit (for example, if it is a metric space), then some series, the convergent series, can be interpreted as having a value in A, called the sum of the series. This includes the common cases from calculus in which the group is the
field of real numbers or the field of complex numbers. Given a series s = ∑n=0∞ an, its kth partial sum is sk = a0 + a1 + ⋯ + ak.
By definition, the series converges to the limit L (or simply sums to L), if the sequence of its partial sums has a limit L. In this case, one usually writes L = ∑n=0∞ an.
A series is said to be convergent if it converges to some limit or divergent when it does not. The value of this limit, if it exists, is then the value of the series.
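The definitions above can be checked numerically. A short Python sketch (illustrative only) computes the partial sums of the telescoping series ∑ 1/(n(n+1)), whose partial sums sk = 1 − 1/(k+1) converge to the limit 1:

```python
from fractions import Fraction

def partial_sums(terms):
    """The sequence of partial sums s_k = a_1 + ... + a_k."""
    sums, total = [], Fraction(0)
    for a in terms:
        total += a
        sums.append(total)
    return sums

# Terms of the telescoping series 1/(n(n+1)) = 1/n - 1/(n+1).
terms = [Fraction(1, n * (n + 1)) for n in range(1, 100)]
sums = partial_sums(terms)
print(sums[-1])  # 99/100, approaching the limit 1
```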
Convergent series
Illustration of 3 geometric series with partial sums from 1 to 6 terms. The dashed line represents the limit.
A series ∑an is said to converge or to be convergent when the sequence (sk) of partial sums has a finite limit. If the limit of sk is infinite or does not exist, the series is said to diverge. When the limit of partial sums exists, it is called the value (or sum) of the series: ∑n=0∞ an = limk→∞ sk.
An easy way that an infinite series can converge is if all the an are zero for n sufficiently large. Such a series can be identified with a finite sum, so it is only infinite in a trivial sense.
Working out the properties of the series that converge even if infinitely many terms are non-zero is the essence of the study of series. Consider the example 1 + 1/2 + 1/4 + 1/8 + ⋯ + 1/2n + ⋯.
It is possible to "visualize" its convergence on the real number line:
we can imagine a line of length 2, with successive segments marked off
of lengths 1, ½, ¼, etc. There is always room to mark the next segment,
because the amount of line remaining is always the same as the last
segment marked: when we have marked off ½, we still have a piece of
length ½ unmarked, so we can certainly mark the next ¼. This argument
does not prove that the sum is equal to 2 (although it is), but it does prove that it is at most 2.
In other words, the series has an upper bound. Given that the series converges, proving that it is equal to 2 requires only elementary algebra. If the series is denoted S, it can be seen that S/2 = (1 + 1/2 + 1/4 + 1/8 + ⋯)/2 = 1/2 + 1/4 + 1/8 + ⋯ = S − 1. Therefore, S = S/2 + 1, and hence S = 2.
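The bound can also be checked numerically; a brief Python sketch (illustrative) shows the partial sums of 1 + 1/2 + 1/4 + ⋯ closing in on 2, with the gap equal to the size of the last segment marked off:

```python
from fractions import Fraction

# Partial sums of 1 + 1/2 + 1/4 + ...: exactly, s_N = 2 - 1/2^N.
s = Fraction(0)
for n in range(20):
    s += Fraction(1, 2**n)

gap = 2 - s
print(gap)  # 1/524288, i.e. 1/2**19: the remaining unmarked piece of the line
```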
Mathematicians extend the idiom discussed earlier to other, equivalent notions of series. For instance, when we talk about a recurring decimal, as in x = 0.111…, we are talking, in fact, just about the series 1/10 + 1/100 + 1/1000 + ⋯ = ∑n=1∞ 1/10n.
But since these series always converge to real numbers (because of what is called the completeness property
of the real numbers), to talk about the series in this way is the same
as to talk about the numbers for which they stand. In particular, the
decimal expansion 0.111… can be identified with 1/9. This leads to an argument that 9 × 0.111… = 0.999… = 1,
which only relies on the fact that the limit laws for series preserve
the arithmetic operations; this argument is presented in the article 0.999....
Examples of numerical series
A geometric series is one where each successive term is produced by multiplying the previous term by a constant number (called the common ratio in this context). Example: 1 + 1/2 + 1/4 + 1/8 + ⋯ = ∑n=0∞ 1/2n = 2. In general, the geometric series ∑n=0∞ zn converges if and only if |z| < 1, in which case its sum is 1/(1 − z).
The series ∑n=1∞ 1/nr converges if r > 1 and diverges for r ≤ 1, which can be shown with the integral criterion described below in convergence tests. As a function of r, the sum of this series is Riemann's zeta function.
A telescoping series ∑n=1∞ (bn − bn+1) converges if the sequence bn converges to a limit L as n goes to infinity. The value of the series is then b1 − L.
There are some elementary series whose convergence is not yet known/proven. For example, it is unknown whether the Flint Hills series ∑n=1∞ 1/(n3 sin2 n) converges or not. The convergence depends on how well π can be approximated with rational numbers (which is unknown as of yet). More specifically, the values of n with large numerical contributions to the sum are the numerators of the continued fraction convergents of π, a sequence beginning with 1, 3, 22, 333, 355, 103993, ... (sequence A046947 in the OEIS). These are integers n that are close to mπ for some integer m, so that sin n is close to 0 and its reciprocal is large. Alekseyev (2011) proved that if the series converges, then the irrationality measure of π is smaller than 2.5, which is much smaller than the current known bound of 7.6063....
Calculus and partial summation as an operation on sequences
Partial summation takes as input a sequence, { an }, and gives as output another sequence, { SN }. It is thus a unary operation on sequences. Further, this function is linear, and thus is a linear operator on the vector space of sequences, denoted Σ. The inverse operator is the finite difference operator, Δ.
These behave as discrete analogs of integration and differentiation,
only for series (functions of a natural number) instead of functions of
a real variable. For example, the sequence {1, 1, 1, ...} has the sequence {1, 2, 3, 4, ...} as its partial summation, which is analogous to the fact that ∫0x 1 dy = x.
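A short Python sketch (assuming one common indexing convention for Δ, with (Δs)0 = s0) shows the two operators undoing each other:

```python
from itertools import accumulate

def delta(seq):
    """Finite difference Δ: (Δs)_0 = s_0 and (Δs)_n = s_n - s_(n-1) for n >= 1."""
    return [seq[0]] + [b - a for a, b in zip(seq, seq[1:])]

a = [1, 1, 1, 1, 1]
S = list(accumulate(a))  # partial summation Σ: [1, 2, 3, 4, 5]
print(S)
print(delta(S))          # Δ applied to Σa recovers a: [1, 1, 1, 1, 1]
```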
Series are classified not only by whether they converge or diverge, but also by the properties of the terms an (absolute or conditional convergence); type of convergence of the series (pointwise, uniform); the class of the term an (whether it is a real number, arithmetic progression, trigonometric function); etc.
Non-negative terms
When an is a non-negative real number for every n, the sequence SN of partial sums is non-decreasing. It follows that a series ∑an with non-negative terms converges if and only if the sequence SN of partial sums is bounded.
For example, the series ∑n=1∞ 1/n2 is convergent, because the inequality 1/n2 ≤ 1/(n − 1) − 1/n (valid for n ≥ 2) and a telescopic sum argument imply that the partial sums are bounded by 2. The exact value of the original series is the Basel problem.
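This comparison argument is easy to verify numerically (an illustrative Python sketch using exact rational arithmetic):

```python
from fractions import Fraction

N = 1000
s = sum(Fraction(1, n * n) for n in range(1, N + 1))
# Comparison series: 1 + sum of 1/((n-1)n), which telescopes to 2 - 1/N.
bound = 1 + sum(Fraction(1, (n - 1) * n) for n in range(2, N + 1))
print(s < bound < 2)  # True: the partial sums stay below 2
```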
Absolute convergence
A series ∑an is said to converge absolutely if the series of absolute values ∑|an| converges. This is sufficient to guarantee not only that the original series converges to a limit, but also that any reordering of it converges to the same limit.
Conditional convergence
A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent. A famous example is the alternating series 1 − 1/2 + 1/3 − 1/4 + ⋯ = ∑n=1∞ (−1)n+1/n, which is convergent (and its sum is equal to ln 2), but the series formed by taking the absolute value of each term is the divergent harmonic series. The Riemann series theorem says that any conditionally convergent series can be reordered to make a divergent series, and moreover, if the an are real and S is any real number, one can find a reordering so that the reordered series converges with sum equal to S.
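The rearrangement in the Riemann series theorem can be carried out greedily: add positive terms until the partial sum exceeds the target, then negative terms until it drops below, and repeat. A Python sketch (illustrative; the function name is invented) applies this to the alternating harmonic series:

```python
def rearranged_partial_sum(target, steps):
    """Greedy rearrangement of 1 - 1/2 + 1/3 - ... steering partial sums to `target`."""
    next_odd, next_even = 1, 2   # denominators of the next unused positive/negative term
    s = 0.0
    for _ in range(steps):
        if s <= target:
            s += 1.0 / next_odd   # take the next positive term 1/odd
            next_odd += 2
        else:
            s -= 1.0 / next_even  # take the next negative term -1/even
            next_even += 2
    return s

# The partial sums home in on the chosen target, here 1.5 (the unrearranged sum is ln 2).
print(round(rearranged_partial_sum(1.5, 100_000), 3))  # 1.5
```

The error after each crossing is at most the size of the last term taken, and the terms tend to 0, which is why the rearranged series converges to the target.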
Abel's test is an important tool for handling semi-convergent series. If a series has the form ∑an = ∑λnbn, where the partial sums BN = b0 + ⋯ + bN are bounded, λn has bounded variation, and limn→∞ λnBn exists, then the series ∑an is convergent. This applies to the pointwise convergence of many trigonometric series, as in ∑n=1∞ sin(nx)/n with 0 < x < 2π. Abel's method consists in writing bn+1 = Bn+1 − Bn, and in performing a transformation similar to integration by parts (called summation by parts), that relates the given series ∑an to the absolutely convergent series ∑Bn(λn − λn+1).
Convergence tests
n-th term test: If limn→∞an ≠ 0 then the series diverges.
Comparison test 1 (see Direct comparison test): If ∑bn is an absolutely convergent series such that |an | ≤ C |bn | for some number C and for sufficiently large n , then ∑an converges absolutely as well. If ∑|bn | diverges, and |an | ≥ |bn | for all sufficiently large n , then ∑an also fails to converge absolutely (though it could still be conditionally convergent, e.g. if the an alternate in sign).
Comparison test 2 (see Limit comparison test): If ∑bn is an absolutely convergent series such that |an+1 /an | ≤ |bn+1 /bn | for sufficiently large n , then ∑an converges absolutely as well. If ∑|bn | diverges, and |an+1 /an | ≥ |bn+1 /bn | for all sufficiently large n , then ∑an also fails to converge absolutely (though it could still be conditionally convergent, e.g. if the an alternate in sign).
Ratio test: If there exists a constant C < 1 such that |an+1/an|<C for all sufficiently large n, then ∑an
converges absolutely. When the ratio is less than 1, but not less than a
constant less than 1, convergence is possible but this test does not
establish it.
Root test: If there exists a constant C < 1 such that |an|1/n ≤ C for all sufficiently large n, then ∑an converges absolutely.
Cauchy's condensation test: If an is non-negative and non-increasing, then the two series ∑an and ∑2ka(2k) are of the same nature: both convergent, or both divergent.
Alternating series test: A series of the form ∑(−1)nan (with an > 0) is called alternating. Such a series converges if the sequence an is monotone decreasing and converges to 0. The converse is in general not true.
For some specific types of series there are more specialized convergence tests, for instance for Fourier series there is the Dini test.
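Two of these tests are easy to try numerically (an illustrative Python sketch; the series chosen are standard textbook examples): the ratio test applied to ∑ n/2n, and the alternating series test together with its usual remainder bound |S − sN| ≤ aN+1 applied to the alternating harmonic series:

```python
import math

# Ratio test for a_n = n / 2^n: the ratios a_(n+1)/a_n approach 1/2 < 1, so a
# constant C < 1 eventually dominates them and the series converges absolutely.
def a(n):
    return n / 2**n

ratios = [a(n + 1) / a(n) for n in range(100, 110)]
print(all(r < 0.6 for r in ratios))  # True

# Alternating series: the partial sums of 1 - 1/2 + 1/3 - ... stay within
# a_(N+1) = 1/(N+1) of the sum ln 2 (the standard alternating-series remainder bound).
N = 1000
s_N = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
print(abs(s_N - math.log(2)) <= 1 / (N + 1))  # True
```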
Series of functions
A series of real- or complex-valued functions ∑n=0∞ fn(x) converges pointwise on a set E if the series converges for each x in E as an ordinary series of real or complex numbers. Equivalently, the partial sums sN(x) = ∑n=0N fn(x)
converge to ƒ(x) as N → ∞ for each x ∈ E.
A stronger notion of convergence of a series of functions is called uniform convergence. The series converges uniformly if it converges pointwise to the function ƒ(x), and the error in approximating the limit by the Nth partial sum, |sN(x) − ƒ(x)|, can be made minimal independently of x by choosing a sufficiently large N.
Uniform convergence is desirable for a series because many
properties of the terms of the series are then retained by the limit.
For example, if a series of continuous functions converges uniformly,
then the limit function is also continuous. Similarly, if the ƒn are integrable on a closed and bounded interval I and converge uniformly, then the series is also integrable on I and can be integrated term-by-term. Tests for uniform convergence include the Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion.
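The Weierstrass M-test can be illustrated numerically (a Python sketch, under the assumption that 5000 terms approximate the limit function well enough for the demonstration): for ∑ xn/n2 on [−1, 1], each term is dominated by Mn = 1/n2, so the sup-error of the Nth partial sum is bounded by the tail ∑n>N 1/n2 < 1/N, independently of x:

```python
def partial(x, terms):
    """Partial sum of sum x^n / n^2 with the given number of terms."""
    return sum(x**n / n**2 for n in range(1, terms + 1))

def sup_error(N, grid=101):
    """Largest observed error of the N-term partial sum over a grid on [-1, 1]."""
    xs = [-1 + 2 * i / (grid - 1) for i in range(grid)]
    return max(abs(partial(x, 5000) - partial(x, N)) for x in xs)

print(sup_error(100) < 1 / 100)  # True: a single bound works for every x
```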
More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a certain set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions converges in mean on a set E to a limit function ƒ provided ∫E |sN(x) − ƒ(x)|2 dx → 0 as N → ∞.
Power series
A power series is a series of the form ∑n=0∞ an(x − c)n.
The Taylor series at a point c of a function is a power series that, in many cases, converges to the function in a neighborhood of c. For example, the series ∑n=0∞ xn/n! is the Taylor series of ex at the origin and converges to it for every x.
Unless it converges only at x=c, such a series converges on a certain open disc of convergence centered at the point c
in the complex plane, and may also converge at some of the points of
the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients an. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence: to wit, it is uniformly convergent on compact sets.
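The remark about the asymptotics of the coefficients refers to the Cauchy–Hadamard formula 1/R = lim sup |an|1/n. A brief numerical check (illustrative) for the series ∑ xn/2n, whose radius of convergence is 2:

```python
n = 200
a_n = 1 / 2**n               # coefficient of x^n in the series sum x^n / 2^n
radius = 1 / a_n ** (1 / n)  # Cauchy-Hadamard: 1/R = limsup |a_n|^(1/n)
print(round(radius, 6))      # 2.0
```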
Historically, mathematicians such as Leonhard Euler
operated liberally with infinite series, even if they were not
convergent. When calculus was put on a sound and correct foundation in
the nineteenth century, rigorous proofs of the convergence of series
were always required.
Formal power series
While many uses of power series refer to their sums, it is also possible to treat power series as formal sums,
meaning that no addition operations are actually performed, and the
symbol "+" is an abstract symbol of conjunction which is not necessarily
interpreted as corresponding to addition. In this setting, the sequence
of coefficients itself is of interest, rather than the convergence of
the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras.
Even if the limit of the power series is not considered, if the
terms support appropriate structure then it is possible to define
operations such as addition, multiplication, derivative, antiderivative
for power series "formally", treating the symbol "+" as if it
corresponded to addition. In the most common setting, the terms come
from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring. If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.
Laurent series
Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form ∑n=−∞∞ an(x − c)n.
If such a series converges, then in general it does so in an annulus
rather than a disc, and possibly some boundary points. The series
converges uniformly on compact subsets of the interior of the annulus of
convergence.
Dirichlet series
A Dirichlet series is one of the form ∑n=1∞ an/ns. Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of s is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re s > 1, but the zeta function can be extended to a holomorphic function defined on the whole complex plane with a simple pole at 1.
A series of functions in which the terms are trigonometric functions is called a trigonometric series: a0/2 + ∑n=1∞ (an cos nx + bn sin nx).
The most important example of a trigonometric series is the Fourier series of a function.
History of the theory of infinite series
Development of infinite series
Greek mathematician Archimedes produced the first known summation of an infinite series with a
method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.
Mathematicians from Kerala, India studied infinite series around 1350 CE.
The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series
on which Gauss published a memoir in 1812. It established simpler
criteria of convergence, and the questions of remainders and the range
of convergence.
Cauchy
(1821) insisted on strict tests of convergence; he showed that if two
series are convergent their product is not necessarily so, and with him
begins the discovery of effective criteria. The terms convergence and divergence had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form.
Abel (1826) in his memoir on the binomial series
corrected certain of Cauchy's conclusions, and gave a completely
scientific summation of the series for complex values of m and x. He showed the necessity of considering the subject of continuity in questions of convergence.
Cauchy's methods led to special rather than general criteria, and
the same may be said of Raabe (1832), who made the first elaborate
investigation of the subject, of De Morgan (from 1842), whose
logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have
shown to fail within a certain region; of Bertrand (1842), Bonnet
(1843), Malmsten (1846, 1847, the latter without integration);
Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt
(1853).
General criteria began with Kummer (1835), and have been
studied by Eisenstein (1847), Weierstrass in his various
contributions to the theory of functions, Dini (1867),
DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory.
Uniform convergence
The theory of uniform convergence was treated by Cauchy (1821), his
limitations being pointed out by Abel, but the first to attack it
successfully were Seidel and Stokes (1847–48). Cauchy took up the
problem again (1853), acknowledging Abel's criticism, and reaching
the same conclusions which Stokes had already found. Thomae used the
doctrine (1866), but there was great delay in recognizing the
importance of distinguishing between uniform and non-uniform
convergence, in spite of the demands of the theory of functions.
Semi-convergence
A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent.
Semi-convergent series were studied by Poisson (1823), who also
gave a general form for the remainder of the Maclaurin formula. The most
important solution of the problem is due, however, to Jacobi (1834),
who attacked the question of the remainder from a different standpoint
and reached a different formula. This expression was also worked out,
and another one given, by Malmsten (1847). Schlömilch (Zeitschrift, Vol. I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function F(x) = 1n + 2n + ⋯ + (x − 1)n.
Genocchi (1852) has further contributed to the theory.
Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into
prominence.
Fourier series
Fourier series were being investigated
as the result of physical considerations at the same time that
Gauss, Abel, and Cauchy were working out the theory of infinite
series. Series for the expansion of sines and cosines, of multiple
arcs in powers of the sine and cosine of the arc had been treated by
Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701) and still
earlier by Vieta. Euler and Lagrange simplified the subject,
as did Poinsot, Schröter, Glaisher, and Kummer.
Fourier (1807) set for himself a different problem, to
expand a given function of x in terms of the sines or cosines of
multiples of x, a problem which he embodied in his Théorie analytique de la chaleur (1822). Euler had already given the
formulas for determining the coefficients in the series;
Fourier was the first to assert and attempt to prove the general
theorem. Poisson (1820–23) also attacked the problem from a
different standpoint. Fourier did not, however, settle the question
of convergence of his series, a matter left for Cauchy (1826) to
attempt and for Dirichlet (1829) to handle in a thoroughly
scientific manner. Dirichlet's treatment (Crelle, 1829) of trigonometric series was the subject of criticism and improvement by
Riemann (1854), Heine, Lipschitz, Schläfli, and
du Bois-Reymond. Among other prominent contributors to the theory of
trigonometric and Fourier series were Dini, Hermite, Halphen,
Krause, Byerly and Appell.
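The coefficient formulas mentioned above can be sketched numerically. This example (the name `sine_coefficient` is illustrative, not from the text) approximates the sine-series coefficients b_n = (2/π) ∫0..π f(x) sin(nx) dx by the midpoint rule, for the square wave f(x) = 1 on (0, π):

```python
import math

# Approximate the sine-series coefficients b_n = (2/pi) * integral of
# f(x) sin(n x) over (0, pi), using the midpoint rule.
def sine_coefficient(f, n, steps=20000):
    h = math.pi / steps
    return (2 / math.pi) * sum(
        f((k + 0.5) * h) * math.sin(n * (k + 0.5) * h) * h for k in range(steps)
    )

square = lambda x: 1.0                # f(x) = 1 on (0, pi)
b1 = sine_coefficient(square, 1)      # expect 4/pi ≈ 1.2732 for n = 1
b2 = sine_coefficient(square, 2)      # expect 0 for even n
```

The computed values match the classical expansion of the square wave, whose sine coefficients are 4/(nπ) for odd n and 0 for even n.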
Generalizations
Asymptotic series
Asymptotic series, also called asymptotic expansions,
are infinite series whose partial sums become good approximations in
the limit of some point of the domain. In general they do not converge.
But they are useful as sequences of approximations, each of which
provides a value close to the desired answer for a finite number of
terms. The difference is that an asymptotic series cannot be made to
produce an answer as exact as desired, the way that convergent series
can. In fact, after a certain number of terms, a typical asymptotic
series reaches its best approximation; if more terms are included, most
such series will produce worse answers.
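This behavior can be seen in Euler's classical example: the divergent series ∑ (−1)^n n! x^n is asymptotic, as x → 0+, to F(x) = ∫0..∞ e^(−t)/(1 + xt) dt. A sketch (F is computed here by a simple midpoint rule, an illustrative numerical shortcut):

```python
import math

# Compare partial sums of the divergent asymptotic series
# sum (-1)^n n! x^n with the function F(x) it expands.
def F(x, upper=50.0, steps=400000):
    # midpoint-rule approximation of the integral over (0, upper);
    # the tail beyond t = 50 is negligible because of e^(-t)
    h = upper / steps
    return sum(math.exp(-(k + 0.5) * h) / (1 + x * (k + 0.5) * h) * h
               for k in range(steps))

def partial_sum(x, n_terms):
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(n_terms))

x = 0.1
exact = F(x)
errors = [abs(partial_sum(x, n) - exact) for n in range(1, 31)]
# the error shrinks until about n ≈ 1/x = 10 terms, then grows without bound
```

The error reaches a best value near ten terms and then deteriorates rapidly, exactly the behavior described above.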
Divergent series
Under many circumstances, it is desirable to assign a limit to a series which fails to converge in the usual sense. A summability method
is such an assignment of a limit to a subset of the set of divergent
series which properly extends the classical notion of convergence.
Summability methods include Cesàro summation, (C,k) summation, Abel summation, and Borel summation, in increasing order of generality (and hence applicable to increasingly divergent series).
A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes matrix summability methods,
which are methods for summing a divergent series by applying an
infinite matrix to the vector of coefficients. The most general method
for summing a divergent series is non-constructive, and concerns Banach limits.
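The simplest of these methods can be sketched directly. Cesàro summation assigns to a series the limit of the averages of its partial sums, when that limit exists; the standard example is Grandi's series 1 − 1 + 1 − 1 + ⋯, whose Cesàro sum is 1/2 (the function name is illustrative):

```python
# Cesàro summation: average the partial sums of the series.
def cesaro_mean(terms):
    partial, partials = 0.0, []
    for t in terms:
        partial += t
        partials.append(partial)
    return sum(partials) / len(partials)

grandi = [(-1) ** n for n in range(10001)]  # 1, -1, 1, -1, ...
print(cesaro_mean(grandi))                  # → close to 0.5
```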
Series in Banach spaces
The notion of series can be easily extended to the case of a Banach space. If xn is a sequence of elements of a Banach space X, then the series Σxn converges to x ∈ X if the sequence of partial sums of the series tends to x; that is,
‖x − (x0 + x1 + ⋯ + xN)‖ → 0
as N → ∞.
More generally, convergence of series can be defined in any abelian Hausdorff topological group. Specifically, in this case, Σxn converges to x if the sequence of partial sums converges to x.
Summations over arbitrary index sets
Definitions may be given for sums over an arbitrary index set I. There are two main differences with the usual notion of series: first, there is no specific order given on the set I; second, this set I may be uncountable. The notion of convergence needs to be strengthened, because the concept of conditional convergence depends on the ordering of the index set.
If f is a function from an index set I to a set G, then the "series" associated to f is the formal sum of the elements f(i) over the index elements i ∈ I, denoted
∑i∈I f(i).
When the index set is the natural numbers N, the function is a sequence, denoted f(n) = an. A series indexed on the natural numbers is an ordered formal sum, and so we rewrite ∑n∈N as ∑n≥0 in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers:
∑n≥0 an = a0 + a1 + a2 + ⋯.
Families of non-negative numbers
When summing a family {ai}, i ∈ I, of non-negative numbers, one may define
∑i∈I ai = sup { ∑i∈A ai : A a finite subset of I } ∈ [0, +∞].
When the supremum is finite, the set of i ∈ I such that ai > 0 is countable. Indeed, for every n ≥ 1, the set An = { i ∈ I : ai > 1/n } is finite, because
(1/n) card(An) ≤ ∑i∈An ai ≤ ∑i∈I ai < ∞,
and the set of indices with ai > 0 is the countable union of the sets An, n ≥ 1. If I is countably infinite and enumerated as I = {i0, i1, ...}, then the above defined sum satisfies
∑i∈I ai = ∑k≥0 aik,
provided the value +∞ is allowed for the sum of the series.
Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions.
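Because the sum of a non-negative family is a supremum over finite subsets, it cannot depend on how the index set is enumerated. A sketch with the family ai = 2^(−i), i = 1, 2, ..., whose sum is 1 (variable names are illustrative):

```python
import random

# The sum of a family of non-negative numbers is the supremum of the
# sums over finite subsets, so any enumeration gives the same value.
indices = list(range(1, 40))
random.shuffle(indices)                       # an arbitrary enumeration
shuffled_sum = sum(2.0 ** -i for i in indices)

# finite partial sums approach the supremum 1 from below
finite_sums = [sum(2.0 ** -i for i in range(1, n + 1)) for n in range(1, 40)]
```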
Abelian topological groups
Let a : I → X, where I is any set and X is an abelian Hausdorff topological group. Let F be the collection of all finite subsets of I, ordered by inclusion. Define the sum S of the family a as the limit of the net of finite partial sums,
S = ∑i∈I ai = lim { ∑i∈A ai : A ∈ F },
if it exists, and say that the family a is unconditionally summable. Saying that the sum S is the limit of finite partial sums means that for every neighborhood V of 0 in X, there is a finite subset A0 of I such that
S − ∑i∈A ai ∈ V for every finite set A ⊃ A0.
For every W, neighborhood of 0 in X, there is a smaller neighborhood V such that V − V ⊂ W. It follows that the finite partial sums of an unconditionally summable family ai, i ∈ I, form a Cauchy net, that is: for every W, neighborhood of 0 in X, there is a finite subset A0 of I such that
∑i∈A1 ai − ∑i∈A2 ai ∈ W for all finite sets A1, A2 containing A0.
When X is complete, a family a is unconditionally summable in X if and only if the finite sums satisfy the latter Cauchy net condition. When X is complete and ai, i ∈ I, is unconditionally summable in X, then for every subset J ⊂ I, the corresponding subfamily aj, j ∈ J, is also unconditionally summable in X.
When the sum of a family of non-negative numbers, in the extended
sense defined before, is finite, then it coincides with the sum in the
topological group X = R.
If a family a in X is unconditionally summable, then for every W, neighborhood of 0 in X, there is a finite subset A0 of I such that ai ∈ W for every i not in A0. If X is first-countable, it follows that the set of i ∈ I such that ai ≠ 0 is countable. This need not be true in a general abelian topological group (see examples below).
Unconditionally convergent series
Suppose that I = N. If a family an, n ∈ N, is unconditionally summable in an abelian Hausdorff topological group X, then the series in the usual sense converges and has the same sum,
∑n≥0 an = ∑n∈N an.
By nature, the definition of unconditional summability is insensitive to the order of the summation. When ∑an is unconditionally summable, then the series remains convergent after any permutation σ of the set N of indices, with the same sum,
∑n≥0 aσ(n) = ∑n≥0 an.
Conversely, if every permutation of a series ∑an converges, then the series is unconditionally convergent. When X is complete, unconditional convergence is also equivalent to the fact that all subseries are convergent; if X is a Banach space, this is equivalent to saying that for every sequence of signs εn = ±1, the series
∑n≥0 εn an
converges in X. If X is a Banach space, then one may define the notion of absolute convergence. A series ∑an of vectors in X converges absolutely if
∑n∈N ‖an‖ < +∞.
If a series of vectors in a Banach space converges absolutely then it
converges unconditionally, but the converse only holds in
finite-dimensional Banach spaces (theorem of Dvoretzky & Rogers (1950)).
In the definition of partitions of unity, one constructs sums of functions over an arbitrary index set I,
∑i∈I φi(x) = 1.
While, formally, this requires a notion of sums of uncountable series, by construction there are, for every given x,
only finitely many nonzero terms in the sum, so issues regarding
convergence of such sums do not arise. Actually, one usually assumes
more: the family of functions is locally finite, i.e., for every x there is a neighborhood of x in which all but a finite number of functions vanish. Any regularity property of the φi,
such as continuity, differentiability, that is preserved under finite
sums will be preserved for the sum of any subcollection of this family
of functions.
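A minimal sketch of such a locally finite family, using "hat" functions φi(x) = max(0, 1 − |x − i|) for integer i: at any x at most two of them are nonzero, and they sum to 1 (names are illustrative):

```python
# A locally finite partition of unity on the real line built from
# piecewise-linear "hat" functions centered at the integers.
def phi(i, x):
    return max(0.0, 1.0 - abs(x - i))

def partition_sum(x, lo=-50, hi=50):
    # near any given x, only the one or two nearest hats are nonzero
    return sum(phi(i, x) for i in range(lo, hi + 1))

print(partition_sum(3.25))   # → 1.0
```

Each φi is continuous, and since the family is locally finite, the sum over any subcollection inherits continuity, as stated above.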
On the first uncountable ordinal ω1 viewed as a topological space in the order topology, the constant function f: [0,ω1) → [0,ω1] given by f(α) = 1 satisfies
∑α∈[0,ω1) f(α) = ω1
(in other words, ω1 copies of 1 is ω1) only if one takes a limit over all countable partial sums, rather than finite partial sums. This space is not separable.