Hilbert space

From Wikipedia, the free encyclopedia

The state of a vibrating string can be modeled as a point in a Hilbert space. The decomposition of a vibrating string into its vibrations in distinct overtones is given by the projection of the point onto the coordinate axes in the space.

In mathematics, Hilbert spaces (named after David Hilbert) allow generalizing the methods of linear algebra and calculus from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. A Hilbert space is a vector space equipped with an inner product which defines a distance function for which it is a complete metric space. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces.

The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory (which forms the mathematical underpinning of thermodynamics). John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean vector spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions.

Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a subspace (the analog of "dropping the altitude" of a triangle) plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to an orthonormal basis, in analogy with Cartesian coordinates in classical geometry. When this basis is countably infinite, it allows identifying the Hilbert space with the space of the infinite sequences that are square-summable. The latter space is often in the older literature referred to as the Hilbert space.

Definition and illustration

Motivating example: Euclidean vector space

One of the most familiar examples of a Hilbert space is the Euclidean vector space consisting of three-dimensional vectors, denoted by R3, and equipped with the dot product. The dot product takes two vectors x and y, and produces a real number x ⋅ y. If x and y are represented in Cartesian coordinates, then the dot product is defined by

$$x \cdot y = x_1 y_1 + x_2 y_2 + x_3 y_3.$$

The dot product satisfies the properties:

  1. It is symmetric in x and y: x ⋅ y = y ⋅ x.
  2. It is linear in its first argument: (ax1 + bx2) ⋅ y = a(x1 ⋅ y) + b(x2 ⋅ y) for any scalars a, b, and vectors x1, x2, and y.
  3. It is positive definite: for all vectors x, x ⋅ x ≥ 0, with equality if and only if x = 0.

An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as a (real) inner product. A vector space equipped with such an inner product is known as a (real) inner product space. Every finite-dimensional inner product space is also a Hilbert space. The basic feature of the dot product that connects it with Euclidean geometry is that it is related to both the length (or norm) of a vector, denoted ||x||, and to the angle θ between two vectors x and y by means of the formula

$$x \cdot y = \|x\| \, \|y\| \, \cos\theta.$$
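
To make this concrete, here is a small numerical sketch (an illustration added for this post, using NumPy; not part of the original article) that computes the dot product, the norms, and the angle recovered from the formula above:

    import numpy as np

    x = np.array([1.0, 2.0, 2.0])
    y = np.array([3.0, 0.0, 4.0])

    dot = np.dot(x, y)                  # x . y = 1*3 + 2*0 + 2*4 = 11
    norm_x = np.linalg.norm(x)          # ||x|| = sqrt(x . x) = 3
    norm_y = np.linalg.norm(y)          # ||y|| = 5
    theta = np.arccos(dot / (norm_x * norm_y))   # angle between x and y, in radians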

Completeness means that if a particle moves along the broken path (in blue) travelling a finite total distance, then the particle has a well-defined net displacement (in orange).

Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist. A mathematical series

$$\sum_{k=0}^{\infty} x_k$$

consisting of vectors in R3 is absolutely convergent provided that the sum of the lengths converges as an ordinary series of real numbers:

$$\sum_{k=0}^{\infty} \|x_k\| < \infty.$$

Just as with a series of scalars, a series of vectors that converges absolutely also converges to some limit vector L in the Euclidean space, in the sense that

$$\Bigl\| L - \sum_{k=0}^{N} x_k \Bigr\| \to 0 \quad \text{as } N \to \infty.$$

This property expresses the completeness of Euclidean space: that a series that converges absolutely also converges in the ordinary sense.

Hilbert spaces are often taken over the complex numbers. The complex plane denoted by C is equipped with a notion of magnitude, the complex modulus |z|, which is defined as the square root of the product of z with its complex conjugate:

$$|z| = \sqrt{z \overline{z}}.$$

If z = x + iy is a decomposition of z into its real and imaginary parts, then the modulus is the usual Euclidean two-dimensional length:

$$|z| = \sqrt{x^2 + y^2}.$$

The inner product of a pair of complex numbers z and w is the product of z with the complex conjugate of w:

$$\langle z, w \rangle = z \overline{w}.$$

This is complex-valued. The real part of ⟨z, w⟩ gives the usual two-dimensional Euclidean dot product.

A second example is the space C2 whose elements are pairs of complex numbers z = (z1, z2). Then the inner product of z with another such vector w = (w1, w2) is given by

$$\langle z, w \rangle = z_1 \overline{w_1} + z_2 \overline{w_2}.$$

The real part of ⟨z, w⟩ is then the two-dimensional Euclidean dot product. This inner product is Hermitian symmetric, which means that the result of interchanging z and w is the complex conjugate:

$$\langle w, z \rangle = \overline{\langle z, w \rangle}.$$

Definition

A Hilbert space H is a real or complex inner product space that is also a complete metric space with respect to the distance function induced by the inner product.

To say that H is a complex inner product space means that H is a complex vector space on which there is an inner product associating a complex number to each pair of elements of H that satisfies the following properties:

  1. The inner product is conjugate symmetric; that is, the inner product of a pair of elements is equal to the complex conjugate of the inner product of the swapped elements: $\langle y, x \rangle = \overline{\langle x, y \rangle}$. Importantly, this implies that ⟨x, x⟩ is a real number.
  2. The inner product is linear in its first argument: for all complex numbers a and b, ⟨ax1 + bx2, y⟩ = a⟨x1, y⟩ + b⟨x2, y⟩.
  3. The inner product of an element with itself is positive definite: ⟨x, x⟩ > 0 if x ≠ 0, and ⟨x, x⟩ = 0 if x = 0.

It follows from properties 1 and 2 that a complex inner product is antilinear, also called conjugate linear, in its second argument, meaning that

$$\langle x, a y_1 + b y_2 \rangle = \overline{a} \langle x, y_1 \rangle + \overline{b} \langle x, y_2 \rangle.$$

A real inner product space is defined in the same way, except that H is a real vector space and the inner product takes real values. Such an inner product will be a bilinear map and will form a dual system.

The norm is the real-valued function

$$\|x\| = \sqrt{\langle x, x \rangle},$$

and the distance d between two points x, y in H is defined in terms of the norm by

$$d(x, y) = \|x - y\|.$$

That this function is a distance function means firstly that it is symmetric in x and y, secondly that the distance between x and itself is zero, and otherwise the distance between distinct points x and y must be positive, and lastly that the triangle inequality holds, meaning that the length of one leg of a triangle xyz cannot exceed the sum of the lengths of the other two legs:

$$d(x, z) \leq d(x, y) + d(y, z).$$

Triangle inequality in a metric space.

This last property is ultimately a consequence of the more fundamental Cauchy–Schwarz inequality, which asserts

$$\left| \langle x, y \rangle \right| \leq \|x\| \, \|y\|$$

with equality if and only if x and y are linearly dependent.

With a distance function defined in this way, any inner product space is a metric space, and sometimes is known as a Hausdorff pre-Hilbert space. Any pre-Hilbert space that is additionally also a complete space is a Hilbert space.

The completeness of H is expressed using a form of the Cauchy criterion for sequences in H: a pre-Hilbert space H is complete if every Cauchy sequence converges with respect to this norm to an element in the space. Completeness can be characterized by the following equivalent condition: if a series of vectors

$$\sum_{k=0}^{\infty} u_k$$

converges absolutely in the sense that

$$\sum_{k=0}^{\infty} \|u_k\| < \infty,$$

then the series converges in H, in the sense that the partial sums converge to an element of H.

As a complete normed space, Hilbert spaces are by definition also Banach spaces. As such they are topological vector spaces, in which topological notions like the openness and closedness of subsets are well defined. Of special importance is the notion of a closed linear subspace of a Hilbert space that, with the inner product induced by restriction, is also complete (being a closed set in a complete metric space) and therefore a Hilbert space in its own right.

Second example: sequence spaces

The sequence space l2 consists of all infinite sequences z = (z1, z2, …) of complex numbers such that the series

$$\sum_{n=1}^{\infty} |z_n|^2$$

converges. The inner product on l2 is defined by

$$\langle z, w \rangle = \sum_{n=1}^{\infty} z_n \overline{w_n},$$

with the latter series converging as a consequence of the Cauchy–Schwarz inequality and the convergence of the previous series.

Completeness of the space holds provided that whenever a series of elements from l2 converges absolutely (in norm), then it converges to an element of l2. The proof is basic in mathematical analysis, and permits mathematical series of elements of the space to be manipulated with the same ease as series of complex numbers (or vectors in a finite-dimensional Euclidean space).
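
As a sketch of how such series are handled in practice (truncating the infinite sequences is an approximation introduced here purely for illustration), consider the square-summable sequences zn = 1/n and wn = 1/n2:

    import numpy as np

    N = 100_000                        # truncation level (illustrative only)
    n = np.arange(1, N + 1)
    z = 1.0 / n                        # z_n = 1/n; sum |z_n|^2 converges to pi^2/6
    w = 1.0 / n**2                     # w_n = 1/n^2

    norm_z = np.sqrt(np.sum(np.abs(z)**2))   # partial sums approach pi/sqrt(6)
    inner_zw = np.sum(z * np.conj(w))        # <z, w>, a convergent series
    # Cauchy-Schwarz guarantees |<z, w>| <= ||z|| ||w||, as the text notes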

History

Prior to the development of Hilbert spaces, other generalizations of Euclidean spaces were known to mathematicians and physicists. In particular, the idea of an abstract linear space (vector space) had gained some traction towards the end of the 19th century: this is a space whose elements can be added together and multiplied by scalars (such as real or complex numbers) without necessarily identifying these elements with "geometric" vectors, such as position and momentum vectors in physical systems. Other objects studied by mathematicians at the turn of the 20th century, in particular spaces of sequences (including series) and spaces of functions, can naturally be thought of as linear spaces. Functions, for instance, can be added together or multiplied by constant scalars, and these operations obey the algebraic laws satisfied by addition and scalar multiplication of spatial vectors.

In the first decade of the 20th century, parallel developments led to the introduction of Hilbert spaces. The first of these was the observation, which arose during David Hilbert and Erhard Schmidt's study of integral equations, that two square-integrable real-valued functions f and g on an interval [a, b] have an inner product

$$\langle f, g \rangle = \int_a^b f(x) g(x) \, dx,$$

which has many of the familiar properties of the Euclidean dot product. In particular, the idea of an orthogonal family of functions has meaning. Schmidt exploited the similarity of this inner product with the usual dot product to prove an analog of the spectral decomposition for an operator of the form

$$f(x) \mapsto \int_a^b K(x, y) f(y) \, dy,$$

where K is a continuous function symmetric in x and y. The resulting eigenfunction expansion expresses the function K as a series of the form

$$K(x, y) = \sum_n \lambda_n \varphi_n(x) \varphi_n(y),$$

where the functions φn are orthogonal in the sense that ⟨φn, φm⟩ = 0 for all n ≠ m. The individual terms in this series are sometimes referred to as elementary product solutions. However, there are eigenfunction expansions that fail to converge in a suitable sense to a square-integrable function: the missing ingredient, which ensures convergence, is completeness.

The second development was the Lebesgue integral, an alternative to the Riemann integral introduced by Henri Lebesgue in 1904. The Lebesgue integral made it possible to integrate a much broader class of functions. In 1907, Frigyes Riesz and Ernst Sigismund Fischer independently proved that the space L2 of square Lebesgue-integrable functions is a complete metric space. As a consequence of the interplay between geometry and completeness, the 19th century results of Joseph Fourier, Friedrich Bessel and Marc-Antoine Parseval on trigonometric series easily carried over to these more general spaces, resulting in a geometrical and analytical apparatus now usually known as the Riesz–Fischer theorem.

Further basic results were proved in the early 20th century. For example, the Riesz representation theorem was independently established by Maurice Fréchet and Frigyes Riesz in 1907. John von Neumann coined the term abstract Hilbert space in his work on unbounded Hermitian operators. Although other mathematicians such as Hermann Weyl and Norbert Wiener had already studied particular Hilbert spaces in great detail, often from a physically motivated point of view, von Neumann gave the first complete and axiomatic treatment of them. Von Neumann later used them in his seminal work on the foundations of quantum mechanics, and in his continued work with Eugene Wigner. The name "Hilbert space" was soon adopted by others, for example by Hermann Weyl in his book on quantum mechanics and the theory of groups.

The significance of the concept of a Hilbert space was underlined with the realization that it offers one of the best mathematical formulations of quantum mechanics. In short, the states of a quantum mechanical system are vectors in a certain Hilbert space, the observables are hermitian operators on that space, the symmetries of the system are unitary operators, and measurements are orthogonal projections. The relation between quantum mechanical symmetries and unitary operators provided an impetus for the development of the unitary representation theory of groups, initiated in the 1928 work of Hermann Weyl. On the other hand, in the early 1930s it became clear that classical mechanics can be described in terms of Hilbert space (Koopman–von Neumann classical mechanics) and that certain properties of classical dynamical systems can be analyzed using Hilbert space techniques in the framework of ergodic theory.

The algebra of observables in quantum mechanics is naturally an algebra of operators defined on a Hilbert space, according to Werner Heisenberg's matrix mechanics formulation of quantum theory. Von Neumann began investigating operator algebras in the 1930s, as rings of operators on a Hilbert space. The kind of algebras studied by von Neumann and his contemporaries are now known as von Neumann algebras. In the 1940s, Israel Gelfand, Mark Naimark and Irving Segal gave a definition of a kind of operator algebras called C*-algebras that on the one hand made no reference to an underlying Hilbert space, and on the other extrapolated many of the useful features of the operator algebras that had previously been studied. The spectral theorem for self-adjoint operators in particular that underlies much of the existing Hilbert space theory was generalized to C*-algebras. These techniques are now basic in abstract harmonic analysis and representation theory.

Examples

Lebesgue spaces

Lebesgue spaces are function spaces associated to measure spaces (X, M, μ), where X is a set, M is a σ-algebra of subsets of X, and μ is a countably additive measure on M. Let L2(X, μ) be the space of those complex-valued measurable functions on X for which the Lebesgue integral of the square of the absolute value of the function is finite, i.e., for a function f in L2(X, μ),

$$\int_X |f|^2 \, d\mu < \infty,$$

and where functions are identified if and only if they differ only on a set of measure zero.

The inner product of functions f and g in L2(X, μ) is then defined as

$$\langle f, g \rangle = \int_X f(t) \overline{g(t)} \, d\mu(t)$$

or

$$\langle f, g \rangle = \int_X \overline{f(t)} g(t) \, d\mu(t),$$

where the second form (conjugation of the first element) is commonly found in the theoretical physics literature. For f and g in L2, the integral exists because of the Cauchy–Schwarz inequality, and defines an inner product on the space. Equipped with this inner product, L2 is in fact complete. The Lebesgue integral is essential to ensure completeness: on domains of real numbers, for instance, not enough functions are Riemann integrable.

The Lebesgue spaces appear in many natural settings. The spaces L2(R) and L2([0,1]) of square-integrable functions with respect to the Lebesgue measure on the real line and unit interval, respectively, are natural domains on which to define the Fourier transform and Fourier series. In other situations, the measure may be something other than the ordinary Lebesgue measure on the real line. For instance, if w is any positive measurable function, the space of all measurable functions f on the interval [0, 1] satisfying

$$\int_0^1 |f(t)|^2 w(t) \, dt < \infty$$

is called the weighted L2 space $L_w^2([0, 1])$, and w is called the weight function. The inner product is defined by

$$\langle f, g \rangle = \int_0^1 f(t) \overline{g(t)} w(t) \, dt.$$

The weighted space $L_w^2([0, 1])$ is identical with the Hilbert space L2([0, 1], μ) where the measure μ of a Lebesgue-measurable set A is defined by

$$\mu(A) = \int_A w(t) \, dt.$$

Weighted L2 spaces like this are frequently used to study orthogonal polynomials, because different families of orthogonal polynomials are orthogonal with respect to different weighting functions.
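
For example (a sketch added here; it uses the Legendre polynomials, which are orthogonal for the constant weight w ≡ 1 on the interval [−1, 1] rather than [0, 1]):

    import numpy as np
    from numpy.polynomial.legendre import Legendre, leggauss

    nodes, weights = leggauss(20)      # Gauss-Legendre quadrature on [-1, 1]

    P2 = Legendre.basis(2)(nodes)      # Legendre polynomial of degree 2
    P3 = Legendre.basis(3)(nodes)      # Legendre polynomial of degree 3

    # weighted inner product <P2, P3> with w(t) = 1; result is ~0 (orthogonal)
    inner = np.sum(P2 * P3 * weights)

The Chebyshev polynomials play the same role for the weight w(t) = (1 − t2)−1/2 on the same interval.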

Sobolev spaces

Sobolev spaces, denoted by Hs or Ws, 2, are Hilbert spaces. These are a special kind of function space in which differentiation may be performed, but that (unlike other Banach spaces such as the Hölder spaces) support the structure of an inner product. Because differentiation is permitted, Sobolev spaces are a convenient setting for the theory of partial differential equations. They also form the basis of the theory of direct methods in the calculus of variations.

For s a non-negative integer and Ω ⊂ Rn, the Sobolev space Hs(Ω) contains L2 functions whose weak derivatives of order up to s are also L2. The inner product in Hs(Ω) is

$$\langle f, g \rangle = \int_\Omega f(x) \overline{g(x)} \, dx + \int_\Omega D f(x) \cdot \overline{D g(x)} \, dx + \cdots + \int_\Omega D^s f(x) \cdot \overline{D^s g(x)} \, dx,$$

where the dot indicates the dot product in the Euclidean space of partial derivatives of each order. Sobolev spaces can also be defined when s is not an integer.

Sobolev spaces are also studied from the point of view of spectral theory, relying more specifically on the Hilbert space structure. If Ω is a suitable domain, then one can define the Sobolev space Hs(Ω) as the space of Bessel potentials; roughly,

$$H^s(\Omega) = \left\{ (1 - \Delta)^{-s/2} f : f \in L^2(\Omega) \right\}.$$
Here Δ is the Laplacian and (1 − Δ)s/2 is understood in terms of the spectral mapping theorem. Apart from providing a workable definition of Sobolev spaces for non-integer s, this definition also has particularly desirable properties under the Fourier transform that make it ideal for the study of pseudodifferential operators. Using these methods on a compact Riemannian manifold, one can obtain for instance the Hodge decomposition, which is the basis of Hodge theory.

Spaces of holomorphic functions

Hardy spaces

The Hardy spaces are function spaces, arising in complex analysis and harmonic analysis, whose elements are certain holomorphic functions in a complex domain. Let U denote the unit disc in the complex plane. Then the Hardy space H2(U) is defined as the space of holomorphic functions f on U such that the means

$$M_r(f) = \frac{1}{2\pi} \int_0^{2\pi} \left| f\left(r e^{i\theta}\right) \right|^2 \, d\theta$$

remain bounded for r < 1. The norm on this Hardy space is defined by

$$\|f\|_2 = \lim_{r \to 1} \sqrt{M_r(f)}.$$

Hardy spaces in the disc are related to Fourier series. A function f is in H2(U) if and only if

$$f(z) = \sum_{n=0}^{\infty} a_n z^n$$

where

$$\sum_{n=0}^{\infty} |a_n|^2 < \infty.$$

Thus H2(U) consists of those functions that are L2 on the circle, and whose negative frequency Fourier coefficients vanish.

Bergman spaces

The Bergman spaces are another family of Hilbert spaces of holomorphic functions. Let D be a bounded open set in the complex plane (or a higher-dimensional complex space) and let L2, h(D) be the space of holomorphic functions f in D that are also in L2(D) in the sense that

$$\|f\|^2 = \int_D |f(z)|^2 \, d\mu(z) < \infty,$$

where the integral is taken with respect to the Lebesgue measure in D. Clearly L2, h(D) is a subspace of L2(D); in fact, it is a closed subspace, and so a Hilbert space in its own right. This is a consequence of the estimate, valid on compact subsets K of D, that

$$\sup_{z \in K} |f(z)| \leq C_K \|f\|_2,$$

which in turn follows from Cauchy's integral formula. Thus convergence of a sequence of holomorphic functions in L2(D) implies also compact convergence, and so the limit function is also holomorphic. Another consequence of this inequality is that the linear functional that evaluates a function f at a point of D is actually continuous on L2, h(D). The Riesz representation theorem implies that the evaluation functional can be represented as an element of L2, h(D). Thus, for every z ∈ D, there is a function ηz ∈ L2, h(D) such that

$$f(z) = \langle f, \eta_z \rangle$$

for all f ∈ L2, h(D). The integrand

$$K(\zeta, z) = \overline{\eta_z(\zeta)}$$

is known as the Bergman kernel of D. This integral kernel satisfies a reproducing property

$$f(z) = \int_D f(\zeta) K(\zeta, z) \, d\mu(\zeta).$$
A Bergman space is an example of a reproducing kernel Hilbert space, which is a Hilbert space of functions along with a kernel K(ζ, z) that verifies a reproducing property analogous to this one. The Hardy space H2(D) also admits a reproducing kernel, known as the Szegő kernel. Reproducing kernels are common in other areas of mathematics as well. For instance, in harmonic analysis the Poisson kernel is a reproducing kernel for the Hilbert space of square-integrable harmonic functions in the unit ball. That the latter is a Hilbert space at all is a consequence of the mean value theorem for harmonic functions.

Applications

Many of the applications of Hilbert spaces exploit the fact that Hilbert spaces support generalizations of simple geometric concepts like projection and change of basis from their usual finite dimensional setting. In particular, the spectral theory of continuous self-adjoint linear operators on a Hilbert space generalizes the usual spectral decomposition of a matrix, and this often plays a major role in applications of the theory to other areas of mathematics and physics.

Sturm–Liouville theory

The overtones of a vibrating string. These are eigenfunctions of an associated Sturm–Liouville problem. The eigenvalues 1, 1/2, 1/3, ... form the (musical) harmonic series.

In the theory of ordinary differential equations, spectral methods on a suitable Hilbert space are used to study the behavior of eigenvalues and eigenfunctions of differential equations. For example, the Sturm–Liouville problem arises in the study of the harmonics of waves in a violin string or a drum, and is a central problem in ordinary differential equations. The problem is a differential equation of the form

$$-\frac{d}{dx}\left[ p(x) \frac{dy}{dx} \right] + q(x) y = \lambda w(x) y$$

for an unknown function y on an interval [a, b], satisfying general homogeneous Robin boundary conditions

$$\alpha y(a) + \alpha' y'(a) = 0, \qquad \beta y(b) + \beta' y'(b) = 0.$$
The functions p, q, and w are given in advance, and the problem is to find the function y and constants λ for which the equation has a solution. The problem only has solutions for certain values of λ, called eigenvalues of the system, and this is a consequence of the spectral theorem for compact operators applied to the integral operator defined by the Green's function for the system. Furthermore, another consequence of this general result is that the eigenvalues λ of the system can be arranged in an increasing sequence tending to infinity.
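
As a numerical sketch (an illustration added here, not part of the article): for the simplest string-like case p = w = 1, q = 0 on [0, π] with Dirichlet conditions y(0) = y(π) = 0, the eigenvalues are λ = n2, and a finite-difference discretization recovers them:

    import numpy as np

    N = 400                             # interior grid points on (0, pi)
    h = np.pi / (N + 1)

    # discretize -y'' = lambda * y with y(0) = y(pi) = 0
    A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2

    eigenvalues = np.linalg.eigvalsh(A)[:4]   # ~ 1, 4, 9, 16, increasing to infinity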

Partial differential equations

Hilbert spaces form a basic tool in the study of partial differential equations. For many classes of partial differential equations, such as linear elliptic equations, it is possible to consider a generalized solution (known as a weak solution) by enlarging the class of functions. Many weak formulations involve the class of Sobolev functions, which is a Hilbert space. A suitable weak formulation reduces to a geometrical problem the analytic problem of finding a solution or, often what is more important, showing that a solution exists and is unique for given boundary data. For linear elliptic equations, one geometrical result that ensures unique solvability for a large class of problems is the Lax–Milgram theorem. This strategy forms the rudiment of the Galerkin method (a finite element method) for numerical solution of partial differential equations.

A typical example is the Poisson equation −Δu = g with Dirichlet boundary conditions in a bounded domain Ω in R2. The weak formulation consists of finding a function u such that, for all continuously differentiable functions v in Ω vanishing on the boundary:

$$\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega g v \, dx.$$

This can be recast in terms of the Hilbert space $H_0^1(\Omega)$ consisting of functions u such that u, along with its weak partial derivatives, are square integrable on Ω, and vanish on the boundary. The question then reduces to finding u in this space such that for all v in this space

$$a(u, v) = b(v),$$

where a is a continuous bilinear form, and b is a continuous linear functional, given respectively by

$$a(u, v) = \int_\Omega \nabla u \cdot \nabla v \, dx, \qquad b(v) = \int_\Omega g v \, dx.$$

Since the Poisson equation is elliptic, it follows from Poincaré's inequality that the bilinear form a is coercive. The Lax–Milgram theorem then ensures the existence and uniqueness of solutions of this equation.

Hilbert spaces allow for many elliptic partial differential equations to be formulated in a similar way, and the Lax–Milgram theorem is then a basic tool in their analysis. With suitable modifications, similar techniques can be applied to parabolic partial differential equations and certain hyperbolic partial differential equations.
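
A minimal sketch of the Galerkin idea (a 1D analogue −u″ = g on (0, 1) with u(0) = u(1) = 0, using piecewise-linear "hat" basis functions; the simplifications here are mine, but the 2D Poisson problem above is handled the same way):

    import numpy as np

    N = 50                         # number of interior hat functions
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1.0 - h, N) # nodes of the hat functions

    # a(u, v) = integral of u'v' yields the standard tridiagonal stiffness matrix
    A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h

    g = np.ones(N)                 # right-hand side g = 1
    b = g * h                      # b(v) = integral of g*v, ~h per hat function

    u = np.linalg.solve(A, b)      # Galerkin coefficients; exact solution is x(1-x)/2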

Ergodic theory

The path of a billiard ball in the Bunimovich stadium is described by an ergodic dynamical system.

The field of ergodic theory is the study of the long-term behavior of chaotic dynamical systems. The prototypical case of a field that ergodic theory applies to is thermodynamics, in which—though the microscopic state of a system is extremely complicated (it is impossible to understand the ensemble of individual collisions between particles of matter)—the average behavior over sufficiently long time intervals is tractable. The laws of thermodynamics are assertions about such average behavior. In particular, one formulation of the zeroth law of thermodynamics asserts that over sufficiently long timescales, the only functionally independent measurement that one can make of a thermodynamic system in equilibrium is its total energy, in the form of temperature.

An ergodic dynamical system is one for which, apart from the energy—measured by the Hamiltonian—there are no other functionally independent conserved quantities on the phase space. More explicitly, suppose that the energy E is fixed, and let ΩE be the subset of the phase space consisting of all states of energy E (an energy surface), and let Tt denote the evolution operator on the phase space. The dynamical system is ergodic if there are no continuous non-constant functions f on ΩE such that

$$f(T_t w) = f(w)$$

for all w on ΩE and all time t. Liouville's theorem implies that there exists a measure μ on the energy surface that is invariant under the time translation. As a result, time translation is a unitary transformation of the Hilbert space L2(ΩE, μ) consisting of square-integrable functions on the energy surface ΩE with respect to the inner product

$$\langle f, g \rangle_{L^2(\Omega_E, \mu)} = \int_{\Omega_E} f \, \overline{g} \, d\mu.$$

The von Neumann mean ergodic theorem states the following:

  • If Ut is a (strongly continuous) one-parameter semigroup of unitary operators on a Hilbert space H, and P is the orthogonal projection onto the space of common fixed points of Ut, {x ∈ H | Utx = x, ∀t > 0}, then

    $$Px = \lim_{T \to \infty} \frac{1}{T} \int_0^T U_t x \, dt.$$

For an ergodic system, the fixed set of the time evolution consists only of the constant functions, so the ergodic theorem implies the following: for any function f ∈ L2(ΩE, μ),

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T f(T_t w) \, dt = \int_{\Omega_E} f \, d\mu.$$

That is, the long time average of an observable f is equal to its expectation value over an energy surface.
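
A discrete-time toy example (added here for illustration; the irrational rotation of the circle is a standard ergodic system) shows a time average settling onto the space average:

    import numpy as np

    alpha = np.sqrt(2.0)              # irrational rotation number
    T = 100_000                       # number of time steps
    w0 = 0.3                          # arbitrary starting point on the circle [0, 1)

    orbit = (w0 + alpha * np.arange(T)) % 1.0    # T_t(w) = w + t*alpha (mod 1)
    f = np.cos(2.0 * np.pi * orbit)              # observable f(w) = cos(2*pi*w)

    time_average = f.mean()           # ~0, the integral of f over the circle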

Fourier analysis

Superposition of sinusoidal wave basis functions (bottom) to form a sawtooth wave (top)
 
Spherical harmonics, an orthonormal basis for the Hilbert space of square-integrable functions on the sphere, shown graphed along the radial direction

One of the basic goals of Fourier analysis is to decompose a function into a (possibly infinite) linear combination of given basis functions: the associated Fourier series. The classical Fourier series associated to a function f defined on the interval [0, 1] is a series of the form

$$\sum_{n=-\infty}^{\infty} a_n e^{2\pi i n \theta}$$

where

$$a_n = \int_0^1 f(\theta) e^{-2\pi i n \theta} \, d\theta.$$
The example of adding up the first few terms in a Fourier series for a sawtooth function is shown in the figure. The basis functions are sine waves with wavelengths λ/n (for integer n) shorter than the wavelength λ of the sawtooth itself (except for n = 1, the fundamental wave). All basis functions have nodes at the nodes of the sawtooth, but all but the fundamental have additional nodes. The oscillation of the summed terms about the sawtooth is called the Gibbs phenomenon.

A significant problem in classical Fourier series asks in what sense the Fourier series converges, if at all, to the function f. Hilbert space methods provide one possible answer to this question. The functions en(θ) = exp(2πinθ) form an orthogonal basis of the Hilbert space L2([0, 1]). Consequently, any square-integrable function can be expressed as a series

$$f(\theta) = \sum_n a_n e_n(\theta),$$

and, moreover, this series converges in the Hilbert space sense (that is, in the L2 mean).

The problem can also be studied from the abstract point of view: every Hilbert space has an orthonormal basis, and every element of the Hilbert space can be written in a unique way as a sum of multiples of these basis elements. The coefficients appearing on these basis elements are sometimes known abstractly as the Fourier coefficients of the element of the space. The abstraction is especially useful when it is more natural to use different basis functions for a space such as L2([0, 1]). In many circumstances, it is desirable not to decompose a function into trigonometric functions, but rather into orthogonal polynomials or wavelets for instance, and in higher dimensions into spherical harmonics.

For instance, if en are any orthonormal basis functions of L2[0, 1], then a given function in L2[0, 1] can be approximated as a finite linear combination

$$f_n = a_1 e_1 + a_2 e_2 + \cdots + a_n e_n.$$

The coefficients {aj} are selected to make the magnitude of the difference ||f − fn||2 as small as possible. Geometrically, the best approximation is the orthogonal projection of f onto the subspace consisting of all linear combinations of the {ej}, and can be calculated by

$$a_j = \int_0^1 \overline{e_j(t)} f(t) \, dt.$$

That this formula minimizes the difference ||ffn||2 is a consequence of Bessel's inequality and Parseval's formula.
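
Here is a numerical sketch of this projection (an illustration added here; it uses the orthonormal sine basis √2 sin(jπx) on [0, 1] and the ramp f(x) = x, one period of a sawtooth):

    import numpy as np

    x = np.linspace(0.0, 1.0, 20_000, endpoint=False)
    dx = x[1] - x[0]
    f = x                                           # f(x) = x

    basis = [np.sqrt(2.0) * np.sin(j * np.pi * x) for j in range(1, 11)]
    coeffs = [np.sum(f * e) * dx for e in basis]    # a_j = <f, e_j>
    f_10 = sum(a * e for a, e in zip(coeffs, basis))

    error = np.sqrt(np.sum((f - f_10)**2) * dx)     # ||f - f_10|| shrinks as terms are added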

In various applications to physical problems, a function can be decomposed into physically meaningful eigenfunctions of a differential operator (typically the Laplace operator): this forms the foundation for the spectral study of functions, in reference to the spectrum of the differential operator. A concrete physical application involves the problem of hearing the shape of a drum: given the fundamental modes of vibration that a drumhead is capable of producing, can one infer the shape of the drum itself? The mathematical formulation of this question involves the Dirichlet eigenvalues of the Laplace equation in the plane, that represent the fundamental modes of vibration in direct analogy with the integers that represent the fundamental modes of vibration of the violin string.

Spectral theory also underlies certain aspects of the Fourier transform of a function. Whereas Fourier analysis decomposes a function defined on a compact set into the discrete spectrum of the Laplacian (which corresponds to the vibrations of a violin string or drum), the Fourier transform of a function is the decomposition of a function defined on all of Euclidean space into its components in the continuous spectrum of the Laplacian. The Fourier transformation is also geometrical, in a sense made precise by the Plancherel theorem, that asserts that it is an isometry of one Hilbert space (the "time domain") with another (the "frequency domain"). This isometry property of the Fourier transformation is a recurring theme in abstract harmonic analysis (since it reflects the conservation of energy for the continuous Fourier Transform), as evidenced for instance by the Plancherel theorem for spherical functions occurring in noncommutative harmonic analysis.

Quantum mechanics

 

In the mathematically rigorous formulation of quantum mechanics, developed by John von Neumann, the possible states (more precisely, the pure states) of a quantum mechanical system are represented by unit vectors (called state vectors) residing in a complex separable Hilbert space, known as the state space, well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the position and momentum states for a single non-relativistic spin-zero particle form the space of all square-integrable functions, while the states for the spin of a single proton are unit elements of the two-dimensional complex Hilbert space of spinors. Each observable is represented by a self-adjoint linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate.

The inner product between two state vectors is a complex number known as a probability amplitude. During an ideal measurement of a quantum mechanical system, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitudes between the initial and final states. The possible results of a measurement are the eigenvalues of the operator—which explains the choice of self-adjoint operators, for all the eigenvalues must be real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator.

For a general system, states are typically not pure, but instead are represented as statistical mixtures of pure states, or mixed states, given by density matrices: self-adjoint operators of trace one on a Hilbert space. Moreover, for general quantum mechanical systems, the effects of a single measurement can influence other parts of a system in a manner that is described instead by a positive operator valued measure. Thus the structure both of the states and observables in the general theory is considerably more complicated than the idealization for pure states.

Color perception

Any true physical color can be represented by a combination of pure spectral colors. As physical colors can be composed of any number of spectral colors, the space of physical colors may aptly be represented by a Hilbert space over spectral colors. Humans have three types of cone cells for color perception, so the perceivable colors can be represented by 3-dimensional Euclidean space. The many-to-one linear mapping from the Hilbert space of physical colors to the Euclidean space of human perceivable colors explains why many distinct physical colors may be perceived by humans to be identical (e.g., pure yellow light versus a mix of red and green light, see metamerism).

Properties

Pythagorean identity

Two vectors u and v in a Hilbert space H are orthogonal when ⟨u, v⟩ = 0. The notation for this is u ⊥ v. More generally, when S is a subset in H, the notation u ⊥ S means that u is orthogonal to every element from S.

When u and v are orthogonal, one has

$$\|u + v\|^2 = \|u\|^2 + \|v\|^2.$$

By induction on n, this is extended to any family u1, ..., un of n orthogonal vectors:

$$\|u_1 + \cdots + u_n\|^2 = \|u_1\|^2 + \cdots + \|u_n\|^2.$$

Whereas the Pythagorean identity as stated is valid in any inner product space, completeness is required for the extension of the Pythagorean identity to series. A series Σuk of orthogonal vectors converges in H if and only if the series of squares of norms converges, and

$$\Bigl\| \sum_{k=0}^{\infty} u_k \Bigr\|^2 = \sum_{k=0}^{\infty} \|u_k\|^2.$$

Furthermore, the sum of a series of orthogonal vectors is independent of the order in which it is taken.

Parallelogram identity and polarization

Geometrically, the parallelogram identity asserts that AC² + BD² = 2(AB² + AD²). In words, the sum of the squares of the diagonals is twice the sum of the squares of any two adjacent sides.

By definition, every Hilbert space is also a Banach space. Furthermore, in every Hilbert space the following parallelogram identity holds:

$$\|u + v\|^2 + \|u - v\|^2 = 2\bigl( \|u\|^2 + \|v\|^2 \bigr).$$

Conversely, every Banach space in which the parallelogram identity holds is a Hilbert space, and the inner product is uniquely determined by the norm by the polarization identity. For real Hilbert spaces, the polarization identity is

$$\langle u, v \rangle = \tfrac{1}{4}\bigl( \|u + v\|^2 - \|u - v\|^2 \bigr).$$

For complex Hilbert spaces, it is

$$\langle u, v \rangle = \tfrac{1}{4}\bigl( \|u + v\|^2 - \|u - v\|^2 + i\|u + iv\|^2 - i\|u - iv\|^2 \bigr).$$

The parallelogram law implies that any Hilbert space is a uniformly convex Banach space.
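
Both identities are easy to check numerically (a sketch with random complex vectors; the inner product below is linear in its first argument, matching the convention used in this article):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.normal(size=4) + 1j * rng.normal(size=4)
    v = rng.normal(size=4) + 1j * rng.normal(size=4)

    inner = lambda a, b: np.sum(a * np.conj(b))   # linear in the first argument
    nsq = lambda a: np.sum(np.abs(a)**2)          # squared norm ||a||^2

    lhs = inner(u, v)
    rhs = (nsq(u + v) - nsq(u - v) + 1j*nsq(u + 1j*v) - 1j*nsq(u - 1j*v)) / 4
    assert np.isclose(lhs, rhs)                   # polarization identity
    assert np.isclose(nsq(u + v) + nsq(u - v), 2*(nsq(u) + nsq(v)))   # parallelogram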

Best approximation

This subsection employs the Hilbert projection theorem. If C is a non-empty closed convex subset of a Hilbert space H and x a point in H, there exists a unique point y ∈ C that minimizes the distance between x and points in C,

$$\|x - y\| = \operatorname{dist}(x, C) = \inf \{\, \|x - z\| : z \in C \,\}.$$

This is equivalent to saying that there is a point with minimal norm in the translated convex set D = Cx. The proof consists in showing that every minimizing sequence (dn) ⊂ D is Cauchy (using the parallelogram identity) hence converges (using completeness) to a point in D that has minimal norm. More generally, this holds in any uniformly convex Banach space.

When this result is applied to a closed subspace F of H, it can be shown that the point y ∈ F closest to x is characterized by

$$y \in F, \qquad x - y \perp F.$$

This point y is the orthogonal projection of x onto F, and the mapping PF : xy is linear (see Orthogonal complements and projections). This result is especially significant in applied mathematics, especially numerical analysis, where it forms the basis of least squares methods.

In particular, when F is not equal to H, one can find a nonzero vector v orthogonal to F (select x ∉ F and v = x − y). A very useful criterion is obtained by applying this observation to the closed subspace F generated by a subset S of H.

A subset S of H spans a dense vector subspace if (and only if) the vector 0 is the sole vector v ∈ H orthogonal to S.

Duality

The dual space H* is the space of all continuous linear functions from the space H into the base field. It carries a natural norm, defined by

$$\|\varphi\| = \sup \{\, |\varphi(x)| : x \in H, \ \|x\| \leq 1 \,\}.$$

This norm satisfies the parallelogram law, and so the dual space is also an inner product space where this inner product can be defined in terms of this dual norm by using the polarization identity. The dual space is also complete so it is a Hilbert space in its own right. If e = (ei)i∈I is a complete orthonormal basis for H then the inner product of any two φ, ψ ∈ H* is

$$\langle \varphi, \psi \rangle_{H^*} = \sum_{i \in I} \varphi(e_i) \overline{\psi(e_i)},$$

where all but countably many of the terms in this series are zero.

The Riesz representation theorem affords a convenient description of the dual space. To every element u of H, there is a unique element φu of H*, defined by

$$\varphi_u(x) = \langle x, u \rangle,$$

where moreover

$$\|\varphi_u\| = \|u\|.$$

The Riesz representation theorem states that the map from H to H* defined by u ↦ φu is surjective, which makes this map an isometric antilinear isomorphism. So to every element φ of the dual H* there exists one and only one uφ in H such that

$$\varphi(x) = \langle x, u_\varphi \rangle$$

for all x ∈ H. The inner product on the dual space H* satisfies

$$\langle \varphi, \psi \rangle = \langle u_\psi, u_\varphi \rangle.$$

The reversal of order on the right-hand side restores linearity in φ from the antilinearity of uφ. In the real case, the antilinear isomorphism from H to its dual is actually an isomorphism, and so real Hilbert spaces are naturally isomorphic to their own duals.

The representing vector uφ is obtained in the following way. When φ ≠ 0, the kernel F = Ker(φ) is a closed vector subspace of H, not equal to H, hence there exists a nonzero vector v orthogonal to F. The vector u is a suitable scalar multiple λv of v. The requirement that φ(v) = ⟨v, u⟩ yields

$$u = \overline{\varphi(v)} \, \|v\|^{-2} \, v.$$

This correspondence φ ↔ u is exploited by the bra–ket notation popular in physics. It is common in physics to assume that the inner product, denoted by ⟨x|y⟩, is linear on the right,

$$\langle x | y \rangle = \langle y, x \rangle.$$

The result ⟨x|y⟩ can be seen as the action of the linear functional ⟨x| (the bra) on the vector |y⟩ (the ket).

The Riesz representation theorem relies fundamentally not just on the presence of an inner product, but also on the completeness of the space. In fact, the theorem implies that the topological dual of any inner product space can be identified with its completion. An immediate consequence of the Riesz representation theorem is also that a Hilbert space H is reflexive, meaning that the natural map from H into its double dual space is an isomorphism.

Weakly-convergent sequences

In a Hilbert space H, a sequence {xn} is weakly convergent to a vector x ∈ H when

$$\langle x_n, v \rangle \to \langle x, v \rangle$$

for every v ∈ H.

For example, any orthonormal sequence {fn} converges weakly to 0, as a consequence of Bessel's inequality. Every weakly convergent sequence {xn} is bounded, by the uniform boundedness principle.

Conversely, every bounded sequence in a Hilbert space admits weakly convergent subsequences (Alaoglu's theorem). This fact may be used to prove minimization results for continuous convex functionals, in the same way that the Bolzano–Weierstrass theorem is used for continuous functions on Rd. Among several variants, one simple statement is as follows:

If f : HR is a convex continuous function such that f(x) tends to +∞ when ||x|| tends to , then f admits a minimum at some point x0H.

This fact and its various generalizations are fundamental for direct methods in the calculus of variations. Minimization results for convex functionals are also a direct consequence of the slightly more abstract fact that closed bounded convex subsets in a Hilbert space H are weakly compact, since H is reflexive. The existence of weakly convergent subsequences is a special case of the Eberlein–Šmulian theorem.

Banach space properties

Any general property of Banach spaces continues to hold for Hilbert spaces. The open mapping theorem states that a continuous surjective linear transformation from one Banach space to another is an open mapping meaning that it sends open sets to open sets. A corollary is the bounded inverse theorem, that a continuous and bijective linear function from one Banach space to another is an isomorphism (that is, a continuous linear map whose inverse is also continuous). This theorem is considerably simpler to prove in the case of Hilbert spaces than in general Banach spaces. The open mapping theorem is equivalent to the closed graph theorem, which asserts that a linear function from one Banach space to another is continuous if and only if its graph is a closed set. In the case of Hilbert spaces, this is basic in the study of unbounded operators (see closed operator).

The (geometrical) Hahn–Banach theorem asserts that a closed convex set can be separated from any point outside it by means of a hyperplane of the Hilbert space. This is an immediate consequence of the best approximation property: if y is the element of a closed convex set F closest to x, then the separating hyperplane is the plane perpendicular to the segment xy passing through its midpoint.

Operators on Hilbert spaces

Bounded operators

The continuous linear operators A : H1 → H2 from a Hilbert space H1 to a second Hilbert space H2 are bounded in the sense that they map bounded sets to bounded sets. Conversely, if an operator is bounded, then it is continuous. The space of such bounded linear operators has a norm, the operator norm given by

$$\|A\| = \sup \{\, \|Ax\| : \|x\| \leq 1 \,\}.$$

The sum and the composite of two bounded linear operators is again bounded and linear. For y in H2, the map that sends x ∈ H1 to ⟨Ax, y⟩ is linear and continuous, and according to the Riesz representation theorem can therefore be represented in the form

$$\langle x, A^* y \rangle = \langle Ax, y \rangle$$

for some vector A*y in H1. This defines another bounded linear operator A* : H2 → H1, the adjoint of A. The adjoint satisfies A** = A. When the Riesz representation theorem is used to identify each Hilbert space with its continuous dual space, the adjoint of A can be shown to be identical to the transpose tA : H2* → H1* of A, which by definition sends ψ ∈ H2* to the functional ψ ∘ A ∈ H1*.

The set B(H) of all bounded linear operators on H (meaning operators HH), together with the addition and composition operations, the norm and the adjoint operation, is a C*-algebra, which is a type of operator algebra.
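
In finite dimensions the adjoint is simply the conjugate transpose, so the defining identity ⟨Ax, y⟩ = ⟨x, A*y⟩ can be checked directly (a small sketch with random data, added here for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))   # A : H1 -> H2
    x = rng.normal(size=2) + 1j * rng.normal(size=2)             # x in H1 = C^2
    y = rng.normal(size=3) + 1j * rng.normal(size=3)             # y in H2 = C^3

    inner = lambda a, b: np.sum(a * np.conj(b))   # linear in the first argument

    A_star = A.conj().T                           # the adjoint of A
    assert np.isclose(inner(A @ x, y), inner(x, A_star @ y))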

An element A of B(H) is called 'self-adjoint' or 'Hermitian' if A* = A. If A is Hermitian and ⟨Ax, x⟩ ≥ 0 for every x, then A is called 'nonnegative', written A ≥ 0; if equality holds only when x = 0, then A is called 'positive'. The set of self-adjoint operators admits a partial order, in which A ≤ B if B − A ≥ 0. If A has the form B*B for some B, then A is nonnegative; if B is invertible, then A is positive. A converse is also true in the sense that, for a non-negative operator A, there exists a unique non-negative square root B such that

$$A = B^2.$$

In a sense made precise by the spectral theorem, self-adjoint operators can usefully be thought of as operators that are "real". An element A of B(H) is called normal if A*A = AA*. Normal operators decompose into the sum of a self-adjoint operator and an imaginary multiple of a self-adjoint operator,

$$A = \frac{A + A^*}{2} + i \, \frac{A - A^*}{2i},$$

that commute with each other. Normal operators can also usefully be thought of in terms of their real and imaginary parts.

An element U of B(H) is called unitary if U is invertible and its inverse is given by U*. This can also be expressed by requiring that U be onto and ⟨Ux, Uy⟩ = ⟨x, y⟩ for all x, y ∈ H. The unitary operators form a group under composition, which is the isometry group of H.

An element of B(H) is compact if it sends bounded sets to relatively compact sets. Equivalently, a bounded operator T is compact if, for any bounded sequence {xk}, the sequence {Txk} has a convergent subsequence. Many integral operators are compact, and in fact define a special class of operators known as Hilbert–Schmidt operators that are especially important in the study of integral equations. Fredholm operators differ from a compact operator by a multiple of the identity, and are equivalently characterized as operators with a finite dimensional kernel and cokernel. The index of a Fredholm operator T is defined by

$$\operatorname{ind} T = \dim \ker T - \dim \operatorname{coker} T.$$

The index is homotopy invariant, and plays a deep role in differential geometry via the Atiyah–Singer index theorem.

Unbounded operators

Unbounded operators are also tractable in Hilbert spaces, and have important applications to quantum mechanics. An unbounded operator T on a Hilbert space H is defined as a linear operator whose domain D(T) is a linear subspace of H. Often the domain D(T) is a dense subspace of H, in which case T is known as a densely defined operator.

The adjoint of a densely defined unbounded operator is defined in essentially the same manner as for bounded operators. Self-adjoint unbounded operators play the role of the observables in the mathematical formulation of quantum mechanics. Examples of self-adjoint unbounded operators on the Hilbert space L2(R) are:

  • A suitable extension of the differential operator

    $$(A f)(x) = -i \frac{df}{dx}(x),$$

    where i is the imaginary unit and f is a differentiable function of compact support.
  • The multiplication-by-x operator:

    $$(B f)(x) = x f(x).$$

These correspond to the momentum and position observables, respectively. Note that neither A nor B is defined on all of H, since in the case of A the derivative need not exist, and in the case of B the product function need not be square integrable. In both cases, the set of possible arguments form dense subspaces of L2(R).

Constructions

Direct sums

Two Hilbert spaces H1 and H2 can be combined into another Hilbert space, called the (orthogonal) direct sum, and denoted

$$H_1 \oplus H_2,$$

consisting of the set of all ordered pairs (x1, x2) where xi ∈ Hi, i = 1, 2, and inner product defined by

$$\langle (x_1, x_2), (y_1, y_2) \rangle_{H_1 \oplus H_2} = \langle x_1, y_1 \rangle_{H_1} + \langle x_2, y_2 \rangle_{H_2}.$$

More generally, if Hi is a family of Hilbert spaces indexed by i ∈ I, then the direct sum of the Hi, denoted

$$\bigoplus_{i \in I} H_i,$$

consists of the set of all indexed families

$$x = (x_i \in H_i)_{i \in I}$$

in the Cartesian product of the Hi such that

$$\sum_{i \in I} \|x_i\|^2 < \infty.$$

The inner product is defined by

$$\langle x, y \rangle = \sum_{i \in I} \langle x_i, y_i \rangle_{H_i}.$$

Each of the Hi is included as a closed subspace in the direct sum of all of the Hi. Moreover, the Hi are pairwise orthogonal. Conversely, if there is a system of closed subspaces, Vi, i ∈ I, in a Hilbert space H, that are pairwise orthogonal and whose union is dense in H, then H is canonically isomorphic to the direct sum of Vi. In this case, H is called the internal direct sum of the Vi. A direct sum (internal or external) is also equipped with a family of orthogonal projections Ei onto the ith direct summand Hi. These projections are bounded, self-adjoint, idempotent operators that satisfy the orthogonality condition

$$E_i E_j = 0, \quad i \neq j.$$

The spectral theorem for compact self-adjoint operators on a Hilbert space H states that H splits into an orthogonal direct sum of the eigenspaces of an operator, and also gives an explicit decomposition of the operator as a sum of projections onto the eigenspaces. The direct sum of Hilbert spaces also appears in quantum mechanics as the Fock space of a system containing a variable number of particles, where each Hilbert space in the direct sum corresponds to an additional degree of freedom for the quantum mechanical system. In representation theory, the Peter–Weyl theorem guarantees that any unitary representation of a compact group on a Hilbert space splits as the direct sum of finite-dimensional representations.

Tensor products

If x1, y1 ∈ H1 and x2, y2 ∈ H2, then one defines an inner product on the (ordinary) tensor product as follows. On simple tensors, let

$$\langle x_1 \otimes x_2, \, y_1 \otimes y_2 \rangle = \langle x_1, y_1 \rangle \, \langle x_2, y_2 \rangle.$$

This formula then extends by sesquilinearity to an inner product on H1 ⊗ H2. The Hilbertian tensor product of H1 and H2, sometimes denoted by H1 ⊗̂ H2, is the Hilbert space obtained by completing H1 ⊗ H2 for the metric associated to this inner product.

An example is provided by the Hilbert space L2([0, 1]). The Hilbertian tensor product of two copies of L2([0, 1]) is isometrically and linearly isomorphic to the space L2([0, 1]2) of square-integrable functions on the square [0, 1]2. This isomorphism sends a simple tensor f1 ⊗ f2 to the function

$$(x, y) \mapsto f_1(x) f_2(y)$$
on the square.
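
In finite dimensions the Kronecker product plays the role of the simple tensor, so the defining identity for the inner product can be checked numerically (a sketch added here; np.kron(x1, x2) represents x1 ⊗ x2):

    import numpy as np

    rng = np.random.default_rng(2)
    x1, y1 = rng.normal(size=(2, 3))    # two vectors in H1 = R^3
    x2, y2 = rng.normal(size=(2, 4))    # two vectors in H2 = R^4

    lhs = np.dot(np.kron(x1, x2), np.kron(y1, y2))   # <x1 (x) x2, y1 (x) y2>
    rhs = np.dot(x1, y1) * np.dot(x2, y2)            # <x1, y1> <x2, y2>
    assert np.isclose(lhs, rhs)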

This example is typical in the following sense. Associated to every simple tensor product x1 ⊗ x2 is the rank one operator from H1* to H2 that maps a given x* ∈ H1* as

$$x^* \mapsto x^*(x_1) \, x_2.$$

This mapping defined on simple tensors extends to a linear identification between H1 ⊗ H2 and the space of finite rank operators from H1* to H2. This extends to a linear isometry of the Hilbertian tensor product H1 ⊗̂ H2 with the Hilbert space HS(H1*, H2) of Hilbert–Schmidt operators from H1* to H2.

Orthonormal bases

The notion of an orthonormal basis from linear algebra generalizes over to the case of Hilbert spaces. In a Hilbert space H, an orthonormal basis is a family {ek}k∈B of elements of H satisfying the conditions:

  1. Orthogonality: Every two different elements of the family are orthogonal: ⟨ek, ej⟩ = 0 for all k, j ∈ B with k ≠ j.
  2. Normalization: Every element of the family has norm 1: ||ek|| = 1 for all k ∈ B.
  3. Completeness: The linear span of the family ek, k ∈ B, is dense in H.

A system of vectors satisfying the first two conditions is called an orthonormal system or an orthonormal set (or an orthonormal sequence if B is countable). Such a system is always linearly independent. Completeness of an orthonormal system of vectors of a Hilbert space can be equivalently restated as:

if ⟨v, ek⟩ = 0 for all k ∈ B and some v ∈ H then v = 0.

This is related to the fact that the only vector orthogonal to a dense linear subspace is the zero vector, for if S is any orthonormal set and v is orthogonal to S, then v is orthogonal to the closure of the linear span of S, which is the whole space.

Examples of orthonormal bases include:

  • the set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} forms an orthonormal basis of R3 with the dot product;
  • the sequence {fn : n ∈ Z} with fn(x) = exp(2πinx) forms an orthonormal basis of the complex space L2([0, 1]);

In the infinite-dimensional case, an orthonormal basis will not be a basis in the sense of linear algebra; to distinguish the two, the latter basis is also called a Hamel basis. That the span of the basis vectors is dense implies that every vector in the space can be written as the sum of an infinite series, and the orthogonality implies that this decomposition is unique.

Sequence spaces

The space of square-summable sequences of complex numbers is the set of infinite sequences

$$(c_1, c_2, c_3, \dots)$$

of real or complex numbers such that

$$|c_1|^2 + |c_2|^2 + |c_3|^2 + \cdots < \infty.$$

This space has an orthonormal basis:

$$e_1 = (1, 0, 0, \dots), \quad e_2 = (0, 1, 0, \dots), \quad \dots$$

This space is the infinite-dimensional generalization of the space of finite-dimensional vectors. It is usually the first example used to show that in infinite-dimensional spaces, a set that is closed and bounded is not necessarily (sequentially) compact (as is the case in all finite dimensional spaces). Indeed, the set of orthonormal vectors above shows this: it is an infinite sequence of vectors in the unit ball (i.e., the ball of points with norm less than or equal to one). This set is clearly bounded and closed; yet, no subsequence of these vectors converges to anything and consequently the unit ball in l2 is not compact. Intuitively, this is because "there is always another coordinate direction" into which the next elements of the sequence can evade.
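
The obstruction is easy to see concretely (a sketch with a finite truncation of the basis, added here): any two distinct basis vectors sit at distance √2 from each other, so no subsequence of them can be Cauchy:

    import numpy as np

    e = np.eye(10)                       # first 10 standard basis vectors, truncated
    d = np.linalg.norm(e[3] - e[7])      # = sqrt(2), whichever two indices are chosen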

One can generalize the space l2 in many ways. For example, if B is any (infinite) set, then one can form a Hilbert space of sequences with index set B, defined by

$$\ell^2(B) = \Bigl\{\, x : B \to \mathbb{C} \ \Big|\ \sum_{b \in B} |x(b)|^2 < \infty \,\Bigr\}.$$

The summation over B is here defined by

$$\sum_{b \in B} |x(b)|^2 = \sup \sum_{n=1}^{N} |x(b_n)|^2,$$

the supremum being taken over all finite subsets of B. It follows that, for this sum to be finite, every element of l2(B) has only countably many nonzero terms. This space becomes a Hilbert space with the inner product

$$\langle x, y \rangle = \sum_{b \in B} x(b) \overline{y(b)}$$

for all x, y ∈ l2(B). Here the sum also has only countably many nonzero terms, and is unconditionally convergent by the Cauchy–Schwarz inequality.

An orthonormal basis of l2(B) is indexed by the set B, given by

$$e_b(b') = \begin{cases} 1 & \text{if } b' = b \\ 0 & \text{otherwise.} \end{cases}$$

Bessel's inequality and Parseval's formula

Let f1, …, fn be a finite orthonormal system in H. For an arbitrary vector x ∈ H, let

$$y = \sum_{k=1}^{n} \langle x, f_k \rangle f_k.$$

Then ⟨x, fk⟩ = ⟨y, fk⟩ for every k = 1, …, n. It follows that x − y is orthogonal to each fk, hence x − y is orthogonal to y. Using the Pythagorean identity twice, it follows that

$$\|x\|^2 = \|x - y\|^2 + \|y\|^2 \geq \|y\|^2 = \sum_{k=1}^{n} |\langle x, f_k \rangle|^2.$$
Let {fi}, i ∈ I, be an arbitrary orthonormal system in H. Applying the preceding inequality to every finite subset J of I gives Bessel's inequality:

$$\sum_{i \in I} |\langle x, f_i \rangle|^2 \leq \|x\|^2$$

(according to the definition of the sum of an arbitrary family of non-negative real numbers).

Geometrically, Bessel's inequality implies that the orthogonal projection of x onto the linear subspace spanned by the fi has norm that does not exceed that of x. In two dimensions, this is the assertion that the length of the leg of a right triangle may not exceed the length of the hypotenuse.

Bessel's inequality is a stepping stone to the stronger result called Parseval's identity, which governs the case when Bessel's inequality is actually an equality. By definition, if {ek}k∈B is an orthonormal basis of H, then every element x of H may be written as

$$x = \sum_{k \in B} \langle x, e_k \rangle \, e_k.$$

Even if B is uncountable, Bessel's inequality guarantees that the expression is well-defined and consists only of countably many nonzero terms. This sum is called the Fourier expansion of x, and the individual coefficients ⟨x, ek⟩ are the Fourier coefficients of x. Parseval's identity then asserts that

$$\sum_{k \in B} |\langle x, e_k \rangle|^2 = \|x\|^2.$$

Conversely, if {ek} is an orthonormal set such that Parseval's identity holds for every x, then {ek} is an orthonormal basis.
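
A finite-dimensional numerical check (a sketch added here; the columns of any unitary matrix form an orthonormal basis of Cn):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 6
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, _ = np.linalg.qr(M)               # columns of Q: an orthonormal basis of C^n

    x = rng.normal(size=n) + 1j * rng.normal(size=n)
    coeffs = Q.conj().T @ x              # Fourier coefficients <x, e_k>

    # Parseval: sum of |<x, e_k>|^2 equals ||x||^2 (up to roundoff)
    assert np.isclose(np.sum(np.abs(coeffs)**2), np.sum(np.abs(x)**2))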

Hilbert dimension

As a consequence of Zorn's lemma, every Hilbert space admits an orthonormal basis; furthermore, any two orthonormal bases of the same space have the same cardinality, called the Hilbert dimension of the space. For instance, since l2(B) has an orthonormal basis indexed by B, its Hilbert dimension is the cardinality of B (which may be a finite integer, or a countable or uncountable cardinal number).

As a consequence of Parseval's identity, if {ek}k∈B is an orthonormal basis of H, then the map Φ : H → l2(B) defined by Φ(x) = (⟨x, ek⟩)k∈B is an isometric isomorphism of Hilbert spaces: it is a bijective linear mapping such that

$$\langle \Phi(x), \Phi(y) \rangle_{\ell^2(B)} = \langle x, y \rangle_H$$

for all x, y ∈ H. The cardinal number of B is the Hilbert dimension of H. Thus every Hilbert space is isometrically isomorphic to a sequence space l2(B) for some set B.

Separable spaces

By definition, a Hilbert space is separable provided it contains a dense countable subset. Along with Zorn's lemma, this means a Hilbert space is separable if and only if it admits a countable orthonormal basis. All infinite-dimensional separable Hilbert spaces are therefore isometrically isomorphic to l2.

In the past, Hilbert spaces were often required to be separable as part of the definition. Most spaces used in physics are separable, and since these are all isomorphic to each other, one often refers to any infinite-dimensional separable Hilbert space as "the Hilbert space" or just "Hilbert space". Even in quantum field theory, most of the Hilbert spaces are in fact separable, as stipulated by the Wightman axioms. However, it is sometimes argued that non-separable Hilbert spaces are also important in quantum field theory, roughly because the systems in the theory possess an infinite number of degrees of freedom and any infinite Hilbert tensor product (of spaces of dimension greater than one) is non-separable. For instance, a bosonic field can be naturally thought of as an element of a tensor product whose factors represent harmonic oscillators at each point of space. From this perspective, the natural state space of a boson might seem to be a non-separable space. However, it is only a small separable subspace of the full tensor product that can contain physically meaningful fields (on which the observables can be defined). Another non-separable Hilbert space models the state of an infinite collection of particles in an unbounded region of space. An orthonormal basis of the space is indexed by the density of the particles, a continuous parameter, and since the set of possible densities is uncountable, the basis is not countable.

Orthogonal complements and projections

If S is a subset of a Hilbert space H, the set of vectors orthogonal to S is defined by

S⊥ = { x ∈ H : ⟨x, s⟩ = 0 for all s ∈ S }.

The set S⊥ is a closed subspace of H (as can be proved easily using the linearity and continuity of the inner product) and so is itself a Hilbert space. If V is a closed subspace of H, then V⊥ is called the orthogonal complement of V. In fact, every x ∈ H can then be written uniquely as x = v + w, with v ∈ V and w ∈ V⊥. Therefore, H is the internal Hilbert direct sum of V and V⊥.

The linear operator PV : HH that maps x to v is called the orthogonal projection onto V. There is a natural one-to-one correspondence between the set of all closed subspaces of H and the set of all bounded self-adjoint operators P such that P2 = P. Specifically,

Theorem — The orthogonal projection PV is a self-adjoint linear operator on H of norm ≤ 1 with the property PV2 = PV. Moreover, any self-adjoint linear operator E such that E2 = E is of the form PV, where V is the range of E. For every x in H, PV(x) is the unique element v of V that minimizes the distance ||x − v||.

This provides the geometrical interpretation of PV(x): it is the best approximation to x by elements of V.
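
In finite dimensions, the orthogonal projection onto the column span of a matrix A with linearly independent columns is given by the standard formula P = A(ATA)−1AT. A minimal NumPy sketch (illustrative only) checks the properties asserted by the theorem:

    import numpy as np

    rng = np.random.default_rng(1)

    # V = column span of A, a 2-dimensional subspace of R^4.
    A = rng.standard_normal((4, 2))
    P = A @ np.linalg.inv(A.T @ A) @ A.T   # orthogonal projection onto V

    assert np.allclose(P @ P, P)   # idempotent: P^2 = P
    assert np.allclose(P, P.T)     # self-adjoint

    x = rng.standard_normal(4)
    v = P @ x                      # the best approximation to x within V

    # x - v is orthogonal to V, which is why v minimizes ||x - w|| over w in V.
    assert np.allclose(A.T @ (x - v), 0)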

Projections PU and PV are called mutually orthogonal if PUPV = 0. This is equivalent to U and V being orthogonal as subspaces of H. The sum of the two projections PU and PV is a projection only if U and V are orthogonal to each other, and in that case PU + PV = PU+V. The composite PUPV is generally not a projection; in fact, the composite is a projection if and only if the two projections commute, and in that case PUPV = PU∩V.

By restricting the codomain to the Hilbert space V, the orthogonal projection PV gives rise to a projection mapping π : H → V; it is the adjoint of the inclusion mapping

i : V → H,

meaning that

⟨ix, y⟩H = ⟨x, π(y)⟩V

for all x ∈ V and y ∈ H.

The operator norm of the orthogonal projection PV onto a nonzero closed subspace V is equal to 1:

||PV|| = sup{ ||PVx|| : x ∈ H, ||x|| ≤ 1 } = 1.

Every closed subspace V of a Hilbert space is therefore the image of an operator P of norm one such that P2 = P. The property of possessing appropriate projection operators characterizes Hilbert spaces:

  • A Banach space of dimension higher than 2 is (isometrically) a Hilbert space if and only if, for every closed subspace V, there is an operator PV of norm one whose image is V such that PV2 = PV.

While this result characterizes the metric structure of a Hilbert space, the structure of a Hilbert space as a topological vector space can itself be characterized in terms of the presence of complementary subspaces:

  • A Banach space X is topologically and linearly isomorphic to a Hilbert space if and only if, for every closed subspace V, there is a closed subspace W such that X is equal to the internal direct sum V ⊕ W.

The orthogonal complement satisfies some more elementary results. It is a monotone function, in the sense that if U ⊆ V, then V⊥ ⊆ U⊥, with equality holding if and only if V is contained in the closure of U. This result is a special case of the Hahn–Banach theorem. The closure of a subspace can be completely characterized in terms of the orthogonal complement: if V is a subspace of H, then the closure of V is equal to V⊥⊥. The orthogonal complement is thus a Galois connection on the partial order of subspaces of a Hilbert space. In general, the orthogonal complement of a sum of subspaces is the intersection of the orthogonal complements:

(Σi Vi)⊥ = ∩i Vi⊥.

If the Vi are in addition closed, then the closure of Σi Vi⊥ is equal to (∩i Vi)⊥.
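
In Rn these identities can be checked directly: the orthogonal complement of the column span of a matrix is the null space of its transpose, and, because every subspace of Rn is closed, taking the complement twice recovers the original subspace. A small NumPy sketch, for illustration (the helper complement is ours):

    import numpy as np

    def complement(B):
        # Columns of U with zero singular value span the orthogonal
        # complement of the column span of B.
        U, s, _ = np.linalg.svd(B, full_matrices=True)
        rank = int(np.sum(s > 1e-12))
        return U[:, rank:]

    rng = np.random.default_rng(2)
    V = rng.standard_normal((5, 2))   # a 2-dimensional subspace of R^5

    W = complement(V)                 # V-perp, here 3-dimensional
    assert np.allclose(W.T @ V, 0)    # W is orthogonal to V

    # (V-perp)-perp equals V: the orthogonal projections onto both coincide.
    VV = complement(W)
    assert np.allclose(V @ np.linalg.pinv(V), VV @ np.linalg.pinv(VV))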

Spectral theory

There is a well-developed spectral theory for self-adjoint operators in a Hilbert space, roughly analogous to the study of symmetric matrices over the reals or self-adjoint matrices over the complex numbers. In the same sense, one can obtain a "diagonalization" of a self-adjoint operator as a suitable sum (actually an integral) of orthogonal projection operators.

The spectrum of an operator T, denoted σ(T), is the set of complex numbers λ such that T − λ lacks a continuous inverse. If T is bounded, then the spectrum is always a compact set in the complex plane, and lies inside the disc |z| ≤ ||T||. If T is self-adjoint, then the spectrum is real. In fact, it is contained in the interval [m, M] where

m = inf||x||=1 ⟨Tx, x⟩  and  M = sup||x||=1 ⟨Tx, x⟩.

Moreover, m and M are both actually contained within the spectrum.

The eigenspaces of an operator T are given by

Hλ = ker(T − λ).

Unlike with finite matrices, not every element of the spectrum of T must be an eigenvalue: the linear operator T − λ may lack an inverse merely because it is not surjective. For example, the multiplication operator on l2 sending (xk) to (xk/k) is bounded and self-adjoint, and 0 lies in its spectrum because the operator is not surjective; yet 0 is not an eigenvalue. Elements of the spectrum of an operator in the general sense are known as spectral values. Since spectral values need not be eigenvalues, the spectral decomposition is often more subtle than in finite dimensions.

However, the spectral theorem for a self-adjoint operator T takes a particularly simple form if, in addition, T is assumed to be a compact operator. The spectral theorem for compact self-adjoint operators states:

  • A compact self-adjoint operator T has only countably (or finitely) many spectral values. The spectrum of T has no limit point in the complex plane except possibly zero. The eigenspaces of T decompose H into an orthogonal direct sum:
    H = ⊕λ∈σ(T) Hλ.
    Moreover, if Eλ denotes the orthogonal projection onto the eigenspace Hλ, then
    T = Σλ∈σ(T) λEλ,
    where the sum converges with respect to the norm on B(H).

This theorem plays a fundamental role in the theory of integral equations, as many integral operators are compact, in particular those that arise from Hilbert–Schmidt operators.
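
For a finite symmetric matrix, the theorem reduces to ordinary diagonalization, and the sum T = Σλ λEλ can be formed explicitly from the eigenvectors. A NumPy sketch, purely for illustration:

    import numpy as np

    rng = np.random.default_rng(3)

    # A random symmetric matrix: self-adjoint, and trivially compact.
    M = rng.standard_normal((4, 4))
    T = (M + M.T) / 2

    # Real spectrum and orthonormal eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(T)

    # E_lambda = v v^T projects onto the eigenspace of a simple eigenvalue,
    # and summing lambda * E_lambda reconstructs T.
    reconstruction = sum(lam * np.outer(v, v)
                         for lam, v in zip(eigvals, eigvecs.T))
    assert np.allclose(reconstruction, T)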

The general spectral theorem for self-adjoint operators involves a kind of operator-valued Riemann–Stieltjes integral, rather than an infinite summation. The spectral family associated to T associates to each real number λ an operator Eλ, which is the projection onto the nullspace of the operator (T − λ)+, where the positive part of a self-adjoint operator is defined by

A+ = (√(A2) + A) / 2.

The operators Eλ are monotone increasing relative to the partial order defined on self-adjoint operators; the eigenvalues correspond precisely to the jump discontinuities. One has the spectral theorem, which asserts

T = ∫R λ dEλ.

The integral is understood as a Riemann–Stieltjes integral, convergent with respect to the norm on B(H). In particular, one has the ordinary scalar-valued integral representation

⟨Tx, y⟩ = ∫R λ d⟨Eλx, y⟩.
A somewhat similar spectral decomposition holds for normal operators, although because the spectrum may now contain non-real complex numbers, the operator-valued Stieltjes measure dEλ must instead be replaced by a resolution of the identity.

A major application of spectral methods is the spectral mapping theorem, which allows one to apply to a self-adjoint operator T any continuous complex function f defined on the spectrum of T by forming the integral

f(T) = ∫σ(T) f(λ) dEλ.

The resulting continuous functional calculus has applications in particular to pseudodifferential operators.
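
In finite dimensions, the continuous functional calculus simply applies f to each eigenvalue while keeping the eigenvectors fixed. A NumPy sketch (illustrative; the helper name apply_function is ours):

    import numpy as np

    def apply_function(T, f):
        # f(T) for a real symmetric matrix T, via the spectral decomposition.
        eigvals, eigvecs = np.linalg.eigh(T)
        return eigvecs @ np.diag(f(eigvals)) @ eigvecs.T

    rng = np.random.default_rng(4)
    M = rng.standard_normal((4, 4))
    T = (M + M.T) / 2

    # f(lambda) = lambda^2 must agree with the matrix product T @ T.
    assert np.allclose(apply_function(T, lambda lam: lam**2), T @ T)

    # f = sqrt applied to the positive operator T^2 yields |T|, whose square is T^2.
    absT = apply_function(T @ T, np.sqrt)
    assert np.allclose(absT @ absT, T @ T)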

The spectral theory of unbounded self-adjoint operators is only marginally more difficult than for bounded operators. The spectrum of an unbounded operator is defined in precisely the same way as for bounded operators: λ is a spectral value if the resolvent operator

Rλ = (T − λ)−1

fails to be a well-defined continuous operator. The self-adjointness of T still guarantees that the spectrum is real. Thus the essential idea of working with unbounded operators is to look instead at the resolvent Rλ where λ is nonreal. This is a bounded normal operator, which admits a spectral representation that can then be transferred to a spectral representation of T itself. A similar strategy is used, for instance, to study the spectrum of the Laplace operator: rather than address the operator directly, one instead looks at an associated resolvent such as a Riesz potential or Bessel potential.

A precise version of the spectral theorem in this case is:

Theorem — Given a densely defined self-adjoint operator T on a Hilbert space H, there corresponds a unique resolution of the identity E on the Borel sets of R, such that

⟨Tx, y⟩ = ∫R λ dEx,y(λ)

for all x ∈ D(T) and y ∈ H. The spectral measure E is concentrated on the spectrum of T.

There is also a version of the spectral theorem that applies to unbounded normal operators.

In popular culture

Thomas Pynchon introduced the fictional character Sammy Hilbert-Spaess (a pun on "Hilbert space") in his 1973 novel Gravity's Rainbow. Hilbert-Spaess is first described as "a ubiquitous double agent" and later as "at least a double agent". The novel had earlier referenced Kurt Gödel's incompleteness theorems, which showed that Hilbert's program, Hilbert's formalized plan to unify mathematics into a single set of axioms, was not possible.

Metallic bonding

From Wikipedia, the free encyclopedia

An example showing metallic bonding. + represents cations, - represents the free-floating electrons.
 

Metallic bonding is a type of chemical bonding that arises from the electrostatic attractive force between conduction electrons (in the form of an electron cloud of delocalized electrons) and positively charged metal ions. It may be described as the sharing of free electrons among a structure of positively charged ions (cations). Metallic bonding accounts for many physical properties of metals, such as strength, ductility, thermal and electrical resistivity and conductivity, opacity, and luster.

Metallic bonding is not the only type of chemical bonding a metal can exhibit, even as a pure substance. For example, elemental gallium consists of covalently-bound pairs of atoms in both the liquid and solid state; these pairs form a crystal structure with metallic bonding between them. Another example of a metal–metal covalent bond is the mercurous ion (Hg22+).

History

As chemistry developed into a science, it became clear that metals formed the majority of the periodic table of the elements, and great progress was made in the description of the salts that can be formed in reactions with acids. With the advent of electrochemistry, it became clear that metals generally go into solution as positively charged ions, and the oxidation reactions of the metals became well understood in their electrochemical series. A picture emerged of metals as positive ions held together by an ocean of negative electrons.

With the advent of quantum mechanics, this picture was given a more formal interpretation in the form of the free electron model and its further extension, the nearly free electron model. In both models, the electrons are seen as a gas traveling through the structure of the solid with an energy that is essentially isotropic, in that it depends on the square of the magnitude of the momentum vector k, not on its direction. In three-dimensional k-space, the set of points of the highest filled levels (the Fermi surface) should therefore be a sphere. In the nearly free electron model, box-like Brillouin zones are added to k-space by the periodic potential experienced from the (ionic) structure, thus mildly breaking the isotropy.
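
In the free electron model the dispersion is E = ħ2|k|2/2m, so the Fermi surface is a sphere of radius kF = (3π2n)1/3 for electron density n. A short Python sketch (illustrative only; the density used is a rough textbook value for copper):

    import numpy as np

    hbar = 1.054571817e-34   # J*s
    m_e = 9.1093837015e-31   # kg
    eV = 1.602176634e-19     # J

    n = 8.5e28               # approximate conduction electron density of copper, m^-3

    k_F = (3 * np.pi**2 * n) ** (1 / 3)   # Fermi wave vector (spherical surface)
    E_F = hbar**2 * k_F**2 / (2 * m_e)    # Fermi energy

    print(f"k_F = {k_F:.2e} 1/m, E_F = {E_F / eV:.1f} eV")   # roughly 7 eV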

The advent of X-ray diffraction and thermal analysis made it possible to study the structure of crystalline solids, including metals and their alloys; and phase diagrams were developed. Despite all this progress, the nature of intermetallic compounds and alloys largely remained a mystery and their study was often merely empirical. Chemists generally steered away from anything that did not seem to follow Dalton's laws of multiple proportions; and the problem was considered the domain of a different science, metallurgy.

The nearly-free electron model was eagerly taken up by some researchers in this field, notably Hume-Rothery, in an attempt to explain why certain intermetallic alloys with certain compositions would form and others would not. Initially Hume-Rothery's attempts were quite successful. His idea was to add electrons to inflate the spherical Fermi-balloon inside the series of Brillouin-boxes and determine when a certain box would be full. This predicted a fairly large number of alloy compositions that were later observed. As soon as cyclotron resonance became available and the shape of the balloon could be determined, it was found that the assumption that the balloon was spherical did not hold, except perhaps in the case of caesium. This finding reduced many of the conclusions to examples of how a model can sometimes give a whole series of correct predictions, yet still be wrong.

The nearly-free electron debacle showed researchers that any model that assumed that ions were in a sea of free electrons needed modification. So, a number of quantum mechanical models—such as band structure calculations based on molecular orbitals or the density functional theory—were developed. In these models, one either departs from the atomic orbitals of neutral atoms that share their electrons or (in the case of density functional theory) departs from the total electron density. The free-electron picture has, nevertheless, remained a dominant one in education.

The electronic band structure model became a major focus not only for the study of metals but even more so for the study of semiconductors. Together with the electronic states, the vibrational states were also shown to form bands. Rudolf Peierls showed that, in the case of a one-dimensional row of metallic atoms—say, hydrogen—an instability had to arise that would lead to the breakup of such a chain into individual molecules. This sparked an interest in the general question: when is collective metallic bonding stable and when will a more localized form of bonding take its place? Much research went into the study of clustering of metal atoms.

As powerful as the concept of the band structure model proved to be in describing metallic bonding, it has the drawback of remaining a one-electron approximation of a many-body problem. In other words, the energy states of each electron are described as if all the other electrons simply form a homogeneous background. Researchers such as Mott and Hubbard realized that this was perhaps appropriate for strongly delocalized s- and p-electrons; but for d-electrons, and even more for f-electrons, the interaction with electrons (and atomic displacements) in the local environment may become stronger than the delocalization that leads to broad bands. Thus, the transition from localized unpaired electrons to itinerant ones partaking in metallic bonding became more comprehensible.

The nature of metallic bonding

The combination of two phenomena gives rise to metallic bonding: delocalization of electrons and the availability of a far larger number of delocalized energy states than of delocalized electrons. The latter could be called electron deficiency.

In 2D

Graphene is an example of two-dimensional metallic bonding. Its metallic bonds are similar to aromatic bonding in benzene, naphthalene, anthracene, ovalene, etc.

In 3D

Metal aromaticity in metal clusters is another example of delocalization, this time often in three-dimensional arrangements. Metals take the delocalization principle to its extreme, and one could say that a crystal of a metal represents a single molecule over which all conduction electrons are delocalized in all three dimensions. This means that inside the metal one can generally not distinguish molecules, so that the metallic bonding is neither intra- nor inter-molecular. 'Nonmolecular' would perhaps be a better term. Metallic bonding is mostly non-polar, because even in alloys there is little difference among the electronegativities of the atoms participating in the bonding interaction (and, in pure elemental metals, none at all). Thus, metallic bonding is an extremely delocalized communal form of covalent bonding. In a sense, metallic bonding is not a 'new' type of bonding at all. It describes the bonding only as present in a chunk of condensed matter: be it crystalline solid, liquid, or even glass. Metallic vapors, in contrast, are often atomic (Hg) or at times contain molecules, such as Na2, held together by a more conventional covalent bond. This is why it is not correct to speak of a single 'metallic bond'.

Delocalization is most pronounced for s- and p-electrons. Delocalization in caesium is so strong that the electrons are virtually freed from the caesium atoms to form a gas constrained only by the surface of the metal. For caesium, therefore, the picture of Cs+ ions held together by a negatively charged electron gas is not inaccurate. For other elements the electrons are less free, in that they still experience the potential of the metal atoms, sometimes quite strongly. They require a more intricate quantum mechanical treatment (e.g., tight binding) in which the atoms are viewed as neutral, much like the carbon atoms in benzene. For d- and especially f-electrons the delocalization is not strong at all and this explains why these electrons are able to continue behaving as unpaired electrons that retain their spin, adding interesting magnetic properties to these metals.

Electron deficiency and mobility

Metal atoms contain few electrons in their valence shells relative to their periods or energy levels. They are electron-deficient elements and the communal sharing does not change that. There remain far more available energy states than there are shared electrons. Both requirements for conductivity are therefore fulfilled: strong delocalization and partly filled energy bands. Such electrons can therefore easily change from one energy state to a slightly different one. Thus, not only do they become delocalized, forming a sea of electrons permeating the structure, but they are also able to migrate through the structure when an external electrical field is applied, leading to electrical conductivity. Without the field, there are electrons moving equally in all directions. Within such a field, some electrons will adjust their state slightly, adopting a different wave vector. Consequently, there will be more moving one way than another and a net current will result.

The freedom of electrons to migrate also gives metal atoms, or layers of them, the capacity to slide past each other. Locally, bonds can easily be broken and replaced by new ones after a deformation. This process does not affect the communal metallic bonding very much, which gives rise to metals' characteristic malleability and ductility. This is particularly true for pure elements. In the presence of dissolved impurities, the normally easily formed cleavages may be blocked and the material becomes harder. Gold, for example, is very soft in pure form (24-karat), which is why alloys are preferred in jewelry.

Metals are typically also good conductors of heat, but the conduction electrons only contribute partly to this phenomenon. Collective (i.e., delocalized) vibrations of the atoms, known as phonons, which travel through the solid as waves, are bigger contributors.

However, a substance such as diamond, which conducts heat quite well, is not an electrical conductor. This is not a consequence of delocalization being absent in diamond, but simply that carbon is not electron deficient.

Electron deficiency is important in distinguishing metallic from more conventional covalent bonding. Thus, we should amend the expression given above to: Metallic bonding is an extremely delocalized communal form of electron-deficient covalent bonding.

Metallic radius

The metallic radius is defined as one-half of the distance between two adjacent metal ions in the metallic structure. This radius depends on the nature of the atom as well as its environment—specifically, on the coordination number (CN), which in turn depends on the temperature and applied pressure.

When comparing periodic trends in the size of atoms it is often desirable to apply the so-called Goldschmidt correction, which converts atomic radii to the values the atoms would have if they were 12-coordinated. Since metallic radii are largest for the highest coordination number, correction for less dense coordinations involves multiplying by x, where 0 < x < 1. Specifically, for CN = 4, x = 0.88; for CN = 6, x = 0.96, and for CN = 8, x = 0.97. The correction is named after Victor Goldschmidt who obtained the numerical values quoted above.
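
A worked example of the correction, as a minimal Python sketch (the iron radius below is an assumed illustrative value):

    # Goldschmidt factors: r(CN) = x * r(CN = 12).
    GOLDSCHMIDT_X = {4: 0.88, 6: 0.96, 8: 0.97, 12: 1.00}

    def to_cn12(radius_pm, cn):
        # Convert a radius measured at coordination number cn to the
        # equivalent 12-coordinated (Goldschmidt-corrected) value.
        return radius_pm / GOLDSCHMIDT_X[cn]

    # BCC iron has CN = 8 and a metallic radius of about 124 pm (assumed value).
    print(f"{to_cn12(124, 8):.1f} pm")   # about 127.8 pm after correction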

The radii follow general periodic trends: they decrease across the period due to the increase in the effective nuclear charge, which is not offset by the increased number of valence electrons; but the radii increase down the group due to an increase in the principal quantum number. Between the 4d and 5d elements, the lanthanide contraction is observed—there is very little increase of the radius down the group due to the presence of poorly shielding f orbitals.

Strength of the bond

The atoms in metals have a strong attractive force between them. Much energy is required to overcome it. Therefore, metals often have high boiling points, with tungsten (5828 K) being extremely high. A remarkable exception is the elements of the zinc group: Zn, Cd, and Hg. Their electron configurations end in ...ns2, which resembles a noble gas configuration, like that of helium, more and more when going down the periodic table, because the energy differential to the empty np orbitals becomes larger. These metals are therefore relatively volatile, and are avoided in ultra-high vacuum systems.

Otherwise, metallic bonding can be very strong, even in molten metals, such as gallium. Even though gallium will melt from the heat of one's hand just above room temperature, its boiling point is not far from that of copper. Molten gallium is, therefore, a very nonvolatile liquid, thanks to its strong metallic bonding.

The strong bonding of metals in liquid form demonstrates that the energy of a metallic bond is not highly dependent on the direction of the bond; this lack of bond directionality is a direct consequence of electron delocalization, and is best understood in contrast to the directional bonding of covalent bonds. The energy of a metallic bond is thus mostly a function of the number of electrons which surround the metallic atom, as exemplified by the embedded atom model. This typically results in metals assuming relatively simple, close-packed crystal structures, such as FCC, BCC, and HCP.

Given high enough cooling rates and appropriate alloy composition, metallic bonding can occur even in glasses, which have amorphous structures.

Much biochemistry is mediated by the weak interaction of metal ions and biomolecules. Such interactions, and their associated conformational changes, have been measured using dual polarisation interferometry.

Solubility and compound formation

Metals are insoluble in water or organic solvents, unless they undergo a reaction with them. Typically, this is an oxidation reaction that robs the metal atoms of their itinerant electrons, destroying the metallic bonding. However metals are often readily soluble in each other while retaining the metallic character of their bonding. Gold, for example, dissolves easily in mercury, even at room temperature. Even in solid metals, the solubility can be extensive. If the structures of the two metals are the same, there can even be complete solid solubility, as in the case of electrum, an alloy of silver and gold. At times, however, two metals will form alloys with different structures than either of the two parents. One could call these materials metal compounds. But, because materials with metallic bonding are typically not molecular, Dalton's law of integral proportions is not valid; and often a range of stoichiometric ratios can be achieved. It is better to abandon such concepts as 'pure substance' or 'solute' in such cases and speak of phases instead. The study of such phases has traditionally been more the domain of metallurgy than of chemistry, although the two fields overlap considerably.

Localization and clustering: from bonding to bonds

The metallic bonding in complex compounds does not necessarily involve all constituent elements equally. It is quite possible to have one or more elements that do not partake at all. One could picture the conduction electrons flowing around them like a river around an island or a big rock. It is possible to observe which elements do partake: e.g., by looking at the core levels in an X-ray photoelectron spectroscopy (XPS) spectrum. If an element partakes, its peaks tend to be skewed.

Some intermetallic materials, for example, do exhibit metal clusters reminiscent of molecules; these compounds are more a topic of chemistry than of metallurgy. The formation of the clusters could be seen as a way to 'condense out' (localize) the electron-deficient bonding into bonds of a more localized nature. Hydrogen is an extreme example of this form of condensation. At high pressures it is a metal. The core of the planet Jupiter could be said to be held together by a combination of metallic bonding and high pressure induced by gravity. At lower pressures, however, the bonding becomes entirely localized into a regular covalent bond. The localization is so complete that the (more familiar) H2 gas results. A similar argument holds for an element such as boron. Though it is electron-deficient compared to carbon, it does not form a metal. Instead it has a number of complex structures in which icosahedral B12 clusters dominate. Charge density waves are a related phenomenon.

As these phenomena involve the movement of the atoms toward or away from each other, they can be interpreted as the coupling between the electronic and the vibrational states (i.e. the phonons) of the material. A different such electron-phonon interaction is thought to lead to a very different result at low temperatures, that of superconductivity. Rather than blocking the mobility of the charge carriers by forming electron pairs in localized bonds, Cooper-pairs are formed that no longer experience any resistance to their mobility.

Optical properties

The presence of an ocean of mobile charge carriers has profound effects on the optical properties of metals, which can only be understood by considering the electrons as a collective, rather than considering the states of individual electrons involved in more conventional covalent bonds.

Light consists of a combination of an electrical and a magnetic field. The electrical field is usually able to excite an elastic response from the electrons involved in the metallic bonding. The result is that photons cannot penetrate very far into the metal and are typically reflected, although some may also be absorbed. This holds equally for all photons in the visible spectrum, which is why metals are often silvery white or grayish with the characteristic specular reflection of metallic luster. The balance between reflection and absorption determines how white or how gray a metal is, although surface tarnish can obscure the luster. Silver, a metal with high conductivity, is one of the whitest.

Notable exceptions are reddish copper and yellowish gold. The reason for their color is that there is an upper limit to the frequency of the light that metallic electrons can readily respond to: the plasmon frequency. At the plasmon frequency, the frequency-dependent dielectric function of the free electron gas goes from negative (reflecting) to positive (transmitting); higher frequency photons are not reflected at the surface, and do not contribute to the color of the metal. There are some materials, such as indium tin oxide (ITO), that are metallic conductors (actually degenerate semiconductors) for which this threshold is in the infrared, which is why they are transparent in the visible, but good reflectors in the infrared.
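
In the simplest free-electron (Drude) picture, the dielectric function is ε(ω) = 1 − ωp2/ω2 with plasma frequency ωp = √(ne2/ε0m): it is negative (reflecting) below ωp and positive (transmitting) above it. A Python sketch, illustrative only; the electron density is a rough value for a good metal, and this simple model ignores the interband transitions that actually give copper and gold their color:

    import numpy as np

    e = 1.602176634e-19      # C
    m_e = 9.1093837015e-31   # kg
    eps0 = 8.8541878128e-12  # F/m
    c = 2.99792458e8         # m/s

    n = 8.5e28               # assumed free-electron density, m^-3

    omega_p = np.sqrt(n * e**2 / (eps0 * m_e))   # plasma frequency, rad/s
    cutoff_nm = 2 * np.pi * c / omega_p * 1e9    # corresponding wavelength

    def eps(omega):
        # Free-electron dielectric function: negative means reflecting.
        return 1 - (omega_p / omega) ** 2

    print(f"omega_p = {omega_p:.2e} rad/s, cutoff ~ {cutoff_nm:.0f} nm")  # deep UV
    assert eps(0.5 * omega_p) < 0 and eps(2 * omega_p) > 0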

For silver the limiting frequency is in the far ultraviolet, but for copper and gold it is closer to the visible. This explains the colors of these two metals. At the surface of a metal, resonance effects known as surface plasmons can result. They are collective oscillations of the conduction electrons, like a ripple in the electronic ocean. However, even if photons have enough energy, they usually do not have enough momentum to set the ripple in motion. Therefore, plasmons are hard to excite on a bulk metal. This is why gold and copper look like lustrous metals albeit with a dash of color. However, in colloidal gold the metallic bonding is confined to a tiny metallic particle, which prevents the oscillation wave of the plasmon from 'running away'. The momentum selection rule is therefore broken, and the plasmon resonance causes an extremely intense absorption in the green, with a resulting purple-red color. Such colors are orders of magnitude more intense than ordinary absorptions seen in dyes and the like, which involve individual electrons and their energy states.

Environmentalist

From Wikipedia, the free encyclopedia

An environmentalist is a person who is concerned with and/or advocates for the protection of the environment. An environmentalist can be considered a supporter of the goals of the environmental movement, "a political and ethical movement that seeks to improve and protect the quality of the natural environment through changes to environmentally harmful human activities". An environmentalist is engaged in or believes in the philosophy of environmentalism or one of the related philosophies.

The environmental movement has a number of subcommunities, with different approaches and focuses, each developing distinct movements and identities. Critics sometimes refer to environmentalists using informal or derogatory terms such as "greenie" and "tree-hugger", and some members of the public associate these derogatory terms with the most radical environmentalists.

Types

The environmental movement contains a number of subcommunities that have developed with different approaches and philosophies in different parts of the world. Notably, the early environmental movement experienced a deep tension between the philosophies of conservation and broader environmental protection. In recent decades, the rise to prominence of environmental justice, indigenous rights, and key environmental crises such as the climate crisis has led to the development of other environmentalist identities. Environmentalists can be described as one of the following:

Climate activists

The public recognition of the climate crisis and the emergence of the climate movement at the beginning of the 21st century led to a distinct group of activists. Actions like the School Strike for Climate and Fridays for Future have led to a new generation of youth activists, such as Greta Thunberg, Jamie Margolin and Vanessa Nakate, who have created a global youth climate movement.

Conservationists

One notable strain of environmentalism comes from the philosophy of the conservation movement. Conservationists are concerned with leaving the environment in a better state than the condition in which they found it, and with preserving areas of nature distinct from human interaction. The conservation movement is associated with the early parts of the environmental movement of the 19th and 20th centuries.

Environmental defenders

Environmental defenders or environmental human rights defenders are individuals or collectives who protect the environment from harms resulting from resource extraction, hazardous waste disposal, infrastructure projects, land appropriation, or other dangers. In 2019, the UN Human Rights Council unanimously recognised their importance to environmental protection. The term environmental defender is broadly applied to a diverse range of environmental groups and leaders from different cultures that all employ different tactics and hold different agendas. Use of the term is contested, as it homogenises such a wide range of groups and campaigns, many of whom do not self-identify with the term and may not have explicit aims to protect the environment (being motivated primarily by social justice concerns).

Environmental defenders involved in environmental conflicts face a wide range of threats from governments, local elites, and other powers that benefit from projects that defenders oppose. Global Witness reported 1,922 murders of environmental defenders in 57 countries between 2002 and 2019, with indigenous people accounting for approximately one third of this total. Documentation of this violence is also incomplete. The UN Special Rapporteur on human rights reported that as many as one hundred environmental defenders are intimidated, arrested or otherwise harassed for every one that is killed.

Greens

The adoption of environmentalism as a distinct political ideology led to the development of political parties called "green parties", typically with a leftist political approach to overlapping issues of environmental and social wellbeing.

Water protectors

Oceti Sakowin encampment at the Dakota Access Pipeline protest camps in North Dakota
 
Water protectors marching in Seattle
 

Water protectors are activists, organizers, and cultural workers focused on the defense of the world's water and water systems. The water protector name, analysis and style of activism arose from Indigenous communities in North America during the Dakota Access Pipeline protests at the Standing Rock Indian Reservation, which began with an encampment on LaDonna Brave Bull Allard's land in April, 2016.

Water protectors are similar to land defenders, but are distinguished from other environmental activists by a philosophy and approach rooted in an indigenous cultural perspective that sees water and the land as sacred. This relationship with water moves beyond simply having access to clean drinking water, and comes from the beliefs that water is necessary for life and that water is a relative, and therefore must be treated with respect. As such, the reasons for protection of water are older, more holistic, and integrated into a larger cultural and spiritual whole than in most modern forms of environmental activism, which may be more based in seeing water and other extractive resources as commodities.

Historically, water protectors have been led by or composed of women, because as water provides life, so do women.

Notable environmentalists

Sir David Attenborough in May 2003
 
Al Gore, 2007
 
 
Hakob Sanasaryan campaigning against illegal construction of a new ore-processing facility in Sotk, 2011
 
Kevin Buzzacott (Aboriginal activist) in Adelaide 2014

Some of the notable environmentalists who have been active in lobbying for environmental protection and conservation include:

Extension

In recent years, there have been not only environmentalists concerned with the natural environment but also environmentalists concerned with the human environment. For instance, activists who call for "mental green space" by curbing the disadvantages of the internet, cable TV, and smartphones have been called "info-environmentalists".

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...