
Thursday, March 15, 2018

Tensor

From Wikipedia, the free encyclopedia
Cauchy stress tensor, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form the matrix
\sigma = \begin{bmatrix} \mathbf{T}^{(\mathbf{e}_1)} & \mathbf{T}^{(\mathbf{e}_2)} & \mathbf{T}^{(\mathbf{e}_3)} \end{bmatrix} = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}
whose columns are the stresses (forces per unit area) acting on the e1, e2, and e3 faces of the cube.

In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors.[1] A more sophisticated example is the Cauchy stress tensor T, which takes a direction v as input and produces as output the stress T(v) on the surface normal to this vector, thus expressing a relationship between these two vectors, as shown in the figure above.

Given a reference basis of vectors, a tensor can be represented as an organized multidimensional array of numerical values. The order (also degree or rank) of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix (a 2-dimensional array) in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and is a 1st-order tensor. Scalars are single numbers and are thus 0th-order tensors. The collection of tensors on a vector space forms a tensor algebra.

Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of basis. The basis independence of a tensor then takes the form of a covariant and/or contravariant transformation law that relates the array computed in one basis to that computed in another one. The precise form of the transformation law determines the type (or valence) of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices. The total order of a tensor is the sum of these two numbers.

Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as stress, elasticity, fluid mechanics, and general relativity. In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are simply called "tensors".

Tensors were conceived in 1900 by Tullio Levi-Civita and Gregorio Ricci-Curbastro, who continued the earlier work of Bernhard Riemann and Elwin Bruno Christoffel and others, as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.[2]

Definition

Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different languages and at different levels of abstraction.

As multidimensional arrays

Just as a vector in an n-dimensional space is represented by a one-dimensional array of length n with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted $T_{ij}$, where i and j are indices running from 1 to n, or also by $T_i{}^j$. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while $T_{ij}$ and $T_i{}^j$ can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together. The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order, degree or rank of the tensor. However, the term "rank" generally has another meaning in the context of matrices and tensors.

Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see covariance and contravariance of vectors), where the new basis vectors $\hat{\mathbf{e}}_i$ are expressed in terms of the old basis vectors $\mathbf{e}_j$ as
\hat{\mathbf{e}}_i = \sum_{j=1}^{n} \mathbf{e}_j R^j_i = \mathbf{e}_j R^j_i.
Here $R^j_i$ are the entries of the change-of-basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article.[Note 1] The components $v^i$ of a column vector v transform with the inverse of the matrix R,
\hat{v}^i = \left(R^{-1}\right)^i_j v^j,
where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector's components transform by the inverse of the change of basis. In contrast, the components $w_i$ of a covector (or row vector) w transform with the matrix R itself,
\hat{w}_i = w_j R^j_i.
This is called a covariant transformation law, because the covector transforms by the same matrix as the change of basis matrix. The components of a more general tensor transform by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).
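To make these two laws concrete, here is a minimal numerical sketch in Python with NumPy; the change-of-basis matrix R and the sample components are arbitrary choices for illustration.

import numpy as np

# Arbitrary invertible change-of-basis matrix; entry R[j, i] plays the role of R^j_i.
R = np.array([[2.0, 1.0],
              [0.0, 3.0]])
R_inv = np.linalg.inv(R)

v = np.array([1.0, 4.0])   # contravariant components v^j in the old basis
w = np.array([5.0, -2.0])  # covariant components w_j in the old basis

v_hat = R_inv @ v  # contravariant law: v_hat^i = (R^-1)^i_j v^j
w_hat = w @ R      # covariant law:     w_hat_i = w_j R^j_i

# The scalar pairing w_j v^j is basis independent, as discussed below:
print(w @ v, w_hat @ v_hat)  # both give -3.0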

As a simple example, the matrix of a linear operator with respect to a basis is a square array $T$ that transforms under a change-of-basis matrix $R = \left(R^j_i\right)$ by $\hat{T} = R^{-1}TR$. For the individual matrix entries, this transformation law has the form $\hat{T}^{i'}_{j'} = \left(R^{-1}\right)^{i'}_{i} \, T^i_j \, R^j_{j'}$, so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1, 1).
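A quick numerical check of this (1, 1) law, under the same kind of arbitrary illustrative basis change as above:

import numpy as np

R = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # arbitrary invertible change-of-basis matrix
R_inv = np.linalg.inv(R)

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # matrix of a linear operator, components T^i_j
v = np.array([1.0, 4.0])     # contravariant components v^j

T_hat = R_inv @ T @ R        # type (1, 1) law: T_hat = R^-1 T R
v_hat = R_inv @ v            # contravariant law for the vector

# Applying the operator commutes with the change of basis:
print(np.allclose(T_hat @ v_hat, R_inv @ (T @ v)))  # True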

Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
\mathbf{v} = \hat{v}^i \, \hat{\mathbf{e}}_i = \left( \left(R^{-1}\right)^i_j v^j \right) \left( \mathbf{e}_k R^k_i \right) = \left( \left(R^{-1}\right)^i_j R^k_i \right) v^j \mathbf{e}_k = \delta^k_j v^j \mathbf{e}_k = v^k \, \mathbf{e}_k = v^i \, \mathbf{e}_i,
where $\delta^k_j$ is the Kronecker delta, which functions similarly to the identity matrix and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel. Expressions like $v^i \, \mathbf{e}_i$ can therefore immediately be seen to be geometrically identical in all coordinate systems.

Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components $(Tv)^i$ are given by $(Tv)^i = T^i_j v^j$. These components transform contravariantly, since
\left( \widehat{Tv} \right)^{i'} = \hat{T}^{i'}_{j'} \hat{v}^{j'} = \left[ \left(R^{-1}\right)^{i'}_{i} T^i_j R^j_{j'} \right] \left[ \left(R^{-1}\right)^{j'}_{k} v^{k} \right] = \left(R^{-1}\right)^{i'}_{i} (Tv)^i.
The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as,
\hat{T}^{i'_1 \ldots i'_p}_{j'_1 \ldots j'_q} = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} \, T^{i_1 \ldots i_p}_{j_1 \ldots j_q} \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q}.
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalisation in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short.
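For higher orders the same law is conveniently expressed with numpy.einsum. The following sketch applies it to a type (1, 2) tensor with arbitrary random components and checks that a fully contracted scalar is invariant:

import numpy as np

rng = np.random.default_rng(0)
n = 3
R = rng.normal(size=(n, n))      # change-of-basis matrix (invertible with probability 1)
R_inv = np.linalg.inv(R)
T = rng.normal(size=(n, n, n))   # components T^i_{jk} of a type (1, 2) tensor

# T_hat^a_{bc} = (R^-1)^a_i  T^i_{jk}  R^j_b R^k_c
T_hat = np.einsum('ai,ijk,jb,kc->abc', R_inv, T, R, R)

# Feeding in a covector (covariant) and two vectors (contravariant) gives a scalar
# that must agree in both bases:
alpha, v, w = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
s_old = np.einsum('i,ijk,j,k->', alpha, T, v, w)
s_new = np.einsum('a,abc,b,c->', alpha @ R, T_hat, R_inv @ v, R_inv @ w)
print(np.isclose(s_old, s_new))  # True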

This discussion motivates the following formal definition:[3][4]
Definition. A tensor of type (p, q) is an assignment of a multidimensional array
T^{i_1 \dots i_p}_{j_1 \dots j_q}[\mathbf{f}]
to each basis f = (e1, ..., en) of an n-dimensional vector space such that, if we apply the change of basis
\mathbf{f} \mapsto \mathbf{f} \cdot R = \left( \mathbf{e}_i R^i_1, \dots, \mathbf{e}_i R^i_n \right)
then the multidimensional array obeys the transformation law
T^{i'_1 \dots i'_p}_{j'_1 \dots j'_q}[\mathbf{f} \cdot R] = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} \, T^{i_1 \dots i_p}_{j_1 \dots j_q}[\mathbf{f}] \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q}.
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.[2]

An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If $\mathbf{f} = (\mathbf{f}_1, \dots, \mathbf{f}_n)$ is an ordered basis, and $R = (R^i_j)$ is an invertible $n \times n$ matrix, then the action is given by
\mathbf{f} R = \left( \mathbf{f}_i R^i_1, \dots, \mathbf{f}_i R^i_n \right).
Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let $\rho$ be a representation of GL(n) on W (that is, a group homomorphism $\rho : \mathrm{GL}(n) \to \mathrm{GL}(W)$). Then a tensor of type $\rho$ is an equivariant map $T : F \to W$. Equivariance here means that
T(FR) = \rho\left(R^{-1}\right) T(F).
When $\rho$ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds,[5] and readily generalizes to other groups.[3]

As multilinear maps

A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold.[6] In this approach, a type (p, q) tensor T is defined as a multilinear map,
T : \underbrace{V^* \times \dots \times V^*}_{p \text{ copies}} \times \underbrace{V \times \dots \times V}_{q \text{ copies}} \rightarrow \mathbf{R},
where $V^*$ is the corresponding dual space of covectors, and the map is linear in each of its arguments. The above assumes V is a vector space over the real numbers, R. More generally, V can be taken over an arbitrary field F (e.g. the complex numbers), with a one-dimensional vector space over F replacing R as the codomain of the multilinear maps.

By applying a multilinear map T of type (p, q) to a basis {ej} for V and a canonical cobasis {εi} for V∗,
T^{i_1 \dots i_p}_{j_1 \dots j_q} \equiv T\left( \boldsymbol{\varepsilon}^{i_1}, \ldots, \boldsymbol{\varepsilon}^{i_p}, \mathbf{e}_{j_1}, \ldots, \mathbf{e}_{j_q} \right),
a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus forms a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
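As a small sketch of this correspondence (with an arbitrary bilinear map on R^2 standing in for a type (0, 2) tensor), the components are simply the values of the map on basis vectors:

import numpy as np

# An arbitrary bilinear map B : V x V -> R on V = R^2, here B(u, v) = u^T A v.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = lambda u, v: u @ A @ v

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # standard basis e_1, e_2

# Components B_ij = B(e_i, e_j) form the multidimensional array of the tensor.
components = np.array([[B(ei, ej) for ej in basis] for ei in basis])
print(components)  # recovers A, as expected in the standard basis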

In viewing a tensor as a multilinear map, it is conventional to identify the vector space V with the space of linear functionals on the dual of V, the double dual V∗∗. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual.

Using tensor products

For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property. A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces,[7][8]
T \in \underbrace{V \otimes \dots \otimes V}_{p \text{ copies}} \otimes \underbrace{V^* \otimes \dots \otimes V^*}_{q \text{ copies}}.
A basis $v_i$ of V and a basis $w_j$ of W naturally induce a basis $v_i \otimes w_j$ of the tensor product $V \otimes W$. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual basis {εj}, i.e.
T = T^{i_1 \dots i_p}_{j_1 \dots j_q} \; \mathbf{e}_{i_1} \otimes \cdots \otimes \mathbf{e}_{i_p} \otimes \boldsymbol{\varepsilon}^{j_1} \otimes \cdots \otimes \boldsymbol{\varepsilon}^{j_q}.
Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a 1-to-1 correspondence between tensors defined in this way and tensors defined as multilinear maps.

Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above.

Tensors in infinite dimensions

This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic.[Note 2] Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves.[9] For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories.[10]

Tensor fields

In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor.[2]
In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions,
\bar{x}^i\left(x^1, \ldots, x^n\right),
defining a coordinate transformation,[2]
\hat{T}^{i'_1 \dots i'_p}_{j'_1 \dots j'_q}\left(\bar{x}^1, \ldots, \bar{x}^n\right) = \frac{\partial \bar{x}^{i'_1}}{\partial x^{i_1}} \cdots \frac{\partial \bar{x}^{i'_p}}{\partial x^{i_p}} \, \frac{\partial x^{j_1}}{\partial \bar{x}^{j'_1}} \cdots \frac{\partial x^{j_q}}{\partial \bar{x}^{j'_q}} \, T^{i_1 \dots i_p}_{j_1 \dots j_q}\left(x^1, \ldots, x^n\right).
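As a symbolic sketch (using SymPy, with the change from Cartesian to polar coordinates as the illustrative example), the covariant law above can be verified for the gradient of a scalar field:

import sympy as sp

# Old coordinates (x, y); new coordinates (r, theta) with x = r cos(theta), y = r sin(theta).
X, Y, r, th = sp.symbols('x y r theta', positive=True)
x_of, y_of = r * sp.cos(th), r * sp.sin(th)

f = X**2 + Y**2                      # a scalar field; equals r^2 in the new coordinates
w = [sp.diff(f, X), sp.diff(f, Y)]   # covariant components w_j = df/dx^j (the gradient)

# Jacobian entries dx^j/dxbar^i of the old coordinates with respect to the new ones
J = sp.Matrix([[sp.diff(x_of, r), sp.diff(x_of, th)],
               [sp.diff(y_of, r), sp.diff(y_of, th)]])

# Covariant law: wbar_i = (dx^j/dxbar^i) w_j, evaluated in the new coordinates
subs = {X: x_of, Y: y_of}
w_bar = [sp.simplify(sum(J[j, i] * w[j].subs(subs) for j in range(2)))
         for i in range(2)]
print(w_bar)  # [2*r, 0], matching d(r^2)/dr and d(r^2)/dtheta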

Examples

This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.

(0, 0): Scalar, e.g. scalar curvature
(0, 1): Covector, linear functional, 1-form, e.g. dipole moment, gradient of a scalar field
(0, 2): Bilinear form, e.g. inner product, quadrupole moment, metric tensor, Ricci curvature, 2-form, symplectic form
(0, 3): 3-form, e.g. octupole moment
(0, M): M-form, i.e. volume form
(1, 0): Vector, e.g. direction vector
(1, 1): Linear transformation,[11] Kronecker delta
(1, 2): E.g. cross product in three dimensions
(1, 3): E.g. Riemann curvature tensor
(2, 0): Inverse metric tensor, bivector, e.g. Poisson structure
(2, 2): E.g. elasticity tensor
(N, 0): N-vector, a sum of N-blades

Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor. Symmetrically, lowering an index produces an (n − 1, m + 1)-tensor. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor.
Orientation defined by an ordered set of vectors; reversed orientation corresponds to negating the exterior product. Geometric interpretation of grade n elements in a real exterior algebra for n = 0 (signed point), 1 (directed line segment, or vector), 2 (oriented plane element), 3 (oriented volume). The exterior product of n vectors can be visualized as any n-dimensional shape (e.g. n-parallelotope, n-ellipsoid), with magnitude (hypervolume) and orientation defined by that of its (n − 1)-dimensional boundary and by which side the interior lies on.[12][13]

Notation

There are several notational systems that are used to describe tensors and perform calculations involving them.

Ricci calculus

Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.

Einstein summation convention

The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.
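In code, the convention maps directly onto NumPy's einsum, where repeated index letters are summed; a small sketch with arbitrary sample arrays:

import numpy as np

A = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])

u = np.einsum('ij,j->i', A, v)  # A_ij v^j: j repeats, so it is summed (matrix-vector product)
t = np.einsum('ii->', A)        # T^i_i: a repeated index on one array gives the trace
print(u, t)                     # [ 8. 26. 44.] 12.0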

Penrose graphical notation

Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.

Abstract index notation

The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.

Component-free notation

A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, defining them instead in terms of the tensor product of vector spaces.

Operations

There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type.

Tensor product

The tensor product takes two tensors, S and T, and produces a new tensor, ST, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.
(S\otimes T)(v_{1},\ldots ,v_{n},v_{n+1},\ldots ,v_{n+m})=S(v_{1},\ldots ,v_{n})T(v_{n+1},\ldots ,v_{n+m}),
which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.
(S \otimes T)^{i_1 \ldots i_l i_{l+1} \ldots i_{l+n}}_{j_1 \ldots j_k j_{k+1} \ldots j_{k+m}} = S^{i_1 \ldots i_l}_{j_1 \ldots j_k} \, T^{i_{l+1} \ldots i_{l+n}}_{j_{k+1} \ldots j_{k+m}}.
If S is of type (l, k) and T is of type (n, m), then the tensor product ST has type (l + n, k + m).
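A sketch of the component formula with NumPy, taking as illustration a type (1, 1) tensor and a type (1, 0) tensor with arbitrary components:

import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(3, 3))  # type (1, 1): components S^i_j
T = rng.normal(size=(3,))    # type (1, 0): components T^k (a vector)

# (S ⊗ T)^{ik}_j = S^i_j T^k, a type (2, 1) tensor of order 3
ST = np.einsum('ij,k->ikj', S, T)
print(ST.shape)                                 # (3, 3, 3)
print(np.isclose(ST[0, 1, 2], S[0, 2] * T[1]))  # True: components multiply pairwise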

Contraction

Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor T_{i}^{j} can be contracted to a scalar through
T^i_i,
where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.

The contraction is often used in conjunction with the tensor product to contract an index from each tensor.

The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor
T\in V\otimes V\otimes V^{*}
can be written as a linear combination
T=v_{1}\otimes w_{1}\otimes \alpha _{1}+v_{2}\otimes w_{2}\otimes \alpha _{2}+\cdots +v_{N}\otimes w_{N}\otimes \alpha _{N}.
The contraction of T on the first and last slots is then the vector
\alpha _{1}(v_{1})w_{1}+\alpha _{2}(v_{2})w_{2}+\cdots +\alpha _{N}(v_{N})w_{N}.
In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor $T^{ij}$ can be contracted to a scalar through
T^{ij} g_{ij}
(yet again assuming the summation convention).
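Both kinds of contraction are one-liners with einsum; a sketch with arbitrary components and, for the metric case, the Euclidean metric as an illustrative choice:

import numpy as np

rng = np.random.default_rng(2)
n = 3

T11 = rng.normal(size=(n, n))        # a (1, 1)-tensor T^i_j
print(np.einsum('ii->', T11))        # contraction T^i_i, i.e. the trace

T20 = rng.normal(size=(n, n))        # a (2, 0)-tensor T^ij
g = np.eye(n)                        # metric tensor g_ij (Euclidean, for illustration)
print(np.einsum('ij,ij->', T20, g))  # metric contraction T^ij g_ij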

Raising or lowering an index

When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with a lower index generally shown in the same position as the contracted upper index. This operation is graphically known as lowering an index.
Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
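A sketch of both operations with einsum, using the Minkowski metric of special relativity as an illustrative nondegenerate bilinear form:

import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])  # metric tensor g_ij (Minkowski, for illustration)
g_inv = np.linalg.inv(g)            # inverse metric g^ij

v = np.array([2.0, 1.0, 0.0, 3.0])  # contravariant components v^i

v_low = np.einsum('ij,j->i', g, v)         # lowering: v_i = g_ij v^j
v_up = np.einsum('ij,j->i', g_inv, v_low)  # raising recovers v^i
print(np.allclose(v_up, v))                # True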

Applications

Continuum mechanics

Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9, components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.

If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.

Other examples from physics

Common applications include

Applications of tensors of order > 2

The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.

The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:
\frac{P_i}{\varepsilon_0} = \sum_j \chi^{(1)}_{ij} E_j + \sum_{jk} \chi^{(2)}_{ijk} E_j E_k + \sum_{jk\ell} \chi^{(3)}_{ijk\ell} E_j E_k E_\ell + \cdots.
Here $\chi^{(1)}$ is the linear susceptibility, $\chi^{(2)}$ gives the Pockels effect and second harmonic generation, and $\chi^{(3)}$ gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
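Evaluating the first terms of this expansion is again a natural fit for einsum; a sketch with arbitrary illustrative susceptibility components (a real chi would come from material data):

import numpy as np

rng = np.random.default_rng(3)
E = np.array([0.1, 0.0, 0.2])      # electric field components E_j
chi1 = rng.normal(size=(3, 3))     # linear susceptibility chi(1)_ij
chi2 = rng.normal(size=(3, 3, 3))  # second-order susceptibility chi(2)_ijk

# P_i / eps_0 = chi(1)_ij E_j + chi(2)_ijk E_j E_k + ...
P_over_eps0 = (np.einsum('ij,j->i', chi1, E)
               + np.einsum('ijk,j,k->i', chi2, E, E))
print(P_over_eps0)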

Generalizations

Tensor products of vector spaces

The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space $V \otimes W$ is a second-order "tensor" in this more general sense,[14] and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces.[15] A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring.

Tensors in infinite dimensions

The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces.[16] Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual.[17] Tensors thus live naturally on Banach manifolds[18] and Fréchet manifolds.

Tensor densities

Suppose that a homogeneous medium fills R3, so that the density of the medium is described by a single scalar value ρ in kg m−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region:
m = \int_\Omega \rho \, dx \, dy \, dz,
where the Cartesian coordinates xyz are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:
x' = 100x, \quad y' = 100y, \quad z' = 100z.
The numerical value of the density ρ must then also transform by a factor of $100^{-3}$ to compensate, so that the numerical value of the mass in kg is still given by the integral of $\rho \, dx \, dy \, dz$. Thus $\rho' = 100^{-3}\rho$ (in units of kg cm−3).

More generally, if the Cartesian coordinates xyz undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables xyz (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold.

A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:[19]
T^{i'_1 \dots i'_p}_{j'_1 \dots j'_q}[\mathbf{f} \cdot R] = |\det R|^{-w} \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} \, T^{i_1 \dots i_p}_{j_1 \dots j_q}[\mathbf{f}] \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q}.
Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor.[20][21] An example of a tensor density is the current density of electromagnetism.
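A minimal sketch applying the weighted law above to a vector density of weight w = 1, with arbitrary sample numbers:

import numpy as np

rng = np.random.default_rng(4)
R = rng.normal(size=(3, 3))   # an arbitrary invertible change of basis
R_inv = np.linalg.inv(R)

w = 1                         # an illustrative weight
T = rng.normal(size=(3,))     # components T^i of a (1, 0)-tensor density

# Weighted law: T_hat^i = |det R|^{-w} (R^-1)^i_j T^j
T_hat = np.abs(np.linalg.det(R)) ** (-w) * (R_inv @ T)
print(T_hat)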

Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations comes from the logarithmic representation of the general linear group, a reducible but not semisimple representation,[22] consisting of pairs (x, y) ∈ R2 with the transformation law
(x, y) \mapsto (x + y \log |\det R|, \; y).

Geometric objects

The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes.[23] Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.[24][25]

Spinors

When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.[26] A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.[27][28]
Succinctly, spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.

History

The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century.[29] The word "tensor" itself was introduced in 1846 by William Rowan Hamilton[30] to describe something different from what is now meant by a tensor.[Note 3] The contemporary usage was introduced by Woldemar Voigt in 1898.[31]

Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented by Ricci in 1892.[32] It was made accessible to many mathematicians by the publication of Ricci and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications).[33]

In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann.[34] Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:
I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.
— Albert Einstein[35]
Tensors were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics.

From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem).[36] Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic.[37] Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.[38]

Tuesday, March 13, 2018

Coevolution

From Wikipedia, the free encyclopedia
 
The pollinating wasp Dasyscolia ciliata in pseudocopulation with a flower of Ophrys speculum[1]

In biology, coevolution occurs when two or more species reciprocally affect each other's evolution.
Charles Darwin mentioned evolutionary interactions between flowering plants and insects in On the Origin of Species (1859). The term coevolution was coined by Paul R. Ehrlich and Peter H. Raven in 1964. The theoretical underpinnings of coevolution are now well-developed, and demonstrate that coevolution can play an important role in driving major evolutionary transitions such as the evolution of sexual reproduction or shifts in ploidy.[2] More recently, it has also been demonstrated that coevolution influences the structure and function of ecological communities as well as the dynamics of infectious disease.[3]

Each party in a coevolutionary relationship exerts selective pressures on the other, thereby affecting each other's evolution. Coevolution includes many forms of mutualism, host-parasite, and predator-prey relationships between species, as well as competition within or between species. In many cases, the selective pressures drive an evolutionary arms race between the species involved. Pairwise or specific coevolution, between exactly two species, is not the only possibility; in guild or diffuse coevolution, several species may evolve a trait in reciprocity with a trait in another species, as has happened between the flowering plants and pollinating insects such as bees, flies, and beetles.

Coevolution is primarily a biological concept, but researchers have applied it by analogy to fields such as computer science, sociology, and astronomy.

Mutualism

Coevolution is the evolution of two or more species which reciprocally affect each other, sometimes creating a mutualistic relationship between the species. Such relationships can be of many different types.[4][5]

Flowering plants

Flowers appeared and diversified relatively suddenly in the fossil record, creating what Charles Darwin described as the "abominable mystery" of how they had evolved so quickly; he considered whether coevolution could be the explanation.[6][7] He first mentioned coevolution as a possibility in On the Origin of Species, and developed the concept further in Fertilisation of Orchids (1862).[8][9][10]

Insects and entomophilous flowers

Honey bee taking a reward of nectar and collecting pollen in its pollen baskets from white melilot flowers

Modern insect-pollinated (entomophilous) flowers are conspicuously coadapted with insects to ensure pollination and in return to reward the pollinators with nectar and pollen. The two groups have coevolved for over 100 million years, creating a complex network of interactions. Either they evolved together, or at some later stages they came together, likely with pre-adaptations, and became mutually adapted.[11][12] The term coevolution was coined by Paul R. Ehrlich and Peter H. Raven in 1964, to describe the evolutionary interactions of plants and butterflies.[13]

Several highly successful insect groups—especially the Hymenoptera (wasps, bees and ants) and Lepidoptera (butterflies) as well as many types of Diptera (flies) and Coleoptera (beetles)—evolved in conjunction with flowering plants during the Cretaceous (145 to 66 million years ago). The earliest bees, important pollinators today, appeared in the early Cretaceous.[14] A group of wasps sister to the bees evolved at the same time as flowering plants, as did the Lepidoptera.[14] Further, all the major clades of bees first appeared between the middle and late Cretaceous, simultaneously with the adaptive radiation of the eudicots (three quarters of all angiosperms), and at the time when the angiosperms became the world's dominant plants on land.[6]

At least three aspects of flowers appear to have coevolved between flowering plants and insects, because they involve communication between these organisms. Firstly, flowers communicate with their pollinators by scent; insects use this scent to determine how far away a flower is, to approach it, and to identify where to land and finally to feed. Secondly, flowers attract insects with patterns of stripes leading to the rewards of nectar and pollen, and colours such as blue and ultraviolet, to which their eyes are sensitive; in contrast, bird-pollinated flowers tend to be red or orange. Thirdly, flowers such as some orchids mimic females of particular insects, deceiving males into pseudocopulation.[14][1]

The yucca, Yucca whipplei, is pollinated exclusively by Tegeticula maculata, a yucca moth that depends on the yucca for survival.[15] The moth eats the seeds of the plant, while gathering pollen. The pollen has evolved to become very sticky, and remains on the mouth parts when the moth moves to the next flower. The yucca provides a place for the moth to lay its eggs, deep within the flower away from potential predators.[16]

Birds and ornithophilous flowers

Purple-throated carib feeding from and pollinating a flower

Hummingbirds and ornithophilous (bird-pollinated) flowers have evolved a mutualistic relationship. The flowers have nectar suited to the birds' diet, their color suits the birds' vision and their shape fits that of the birds' bills. The blooming times of the flowers have also been found to coincide with hummingbirds' breeding seasons. The floral characteristics of ornithophilous plants vary greatly among each other compared to closely related insect-pollinated species. These flowers also tend to be more ornate, complex, and showy than their insect pollinated counterparts. It is generally agreed that plants formed coevolutionary relationships with insects first, and ornithophilous species diverged at a later time. There is not much scientific support for instances of the reverse of this divergence: from ornithophily to insect pollination. The diversity in floral phenotype in ornithophilous species, and the relative consistency observed in bee-pollinated species can be attributed to the direction of the shift in pollinator preference.[17]

Flowers have converged to take advantage of similar birds.[18] Flowers compete for pollinators, and adaptations reduce unfavourable effects of this competition. The fact that birds can fly during inclement weather makes them more efficient pollinators where bees and other insects would be inactive. Ornithophily may have arisen for this reason in isolated environments with poor insect colonization or areas with plants which flower in the winter.[18][19] Bird-pollinated flowers usually have higher volumes of nectar and higher sugar production than those pollinated by insects.[20] This meets the birds' high energy requirements, the most important determinants of flower choice.[20] In Mimulus, an increase in red pigment in petals and flower nectar volume noticeably reduces the proportion of pollination by bees as opposed to hummingbirds; while greater flower surface area increases bee pollination. Therefore, red pigments in the flowers of Mimulus cardinalis may function primarily to discourage bee visitation.[21] In Penstemon, flower traits that discourage bee pollination may be more influential on the flowers' evolutionary change than 'pro-bird' adaptations, but adaptation 'towards' birds and 'away' from bees can happen simultaneously.[22] However, some flowers such as Heliconia angusta appear not to be as specifically ornithophilous as had been supposed: the species is occasionally (151 visits in 120 hours of observation) visited by Trigona stingless bees. These bees are largely pollen robbers in this case, but may also serve as pollinators.[23]

Following their respective breeding seasons, several species of hummingbirds occur at the same locations in North America, and several hummingbird flowers bloom simultaneously in these habitats. These flowers have converged to a common morphology and color because these are effective at attracting the birds. Different lengths and curvatures of the corolla tubes can affect the efficiency of extraction in hummingbird species in relation to differences in bill morphology. Tubular flowers force a bird to orient its bill in a particular way when probing the flower, especially when the bill and corolla are both curved. This allows the plant to place pollen on a certain part of the bird's body, permitting a variety of morphological co-adaptations.[20]

A fig exposing its many tiny matured, seed-bearing gynoecia. These are pollinated by the fig wasp, Blastophaga psenes. In the cultivated fig, there are also asexual varieties.[24]

Ornithophilous flowers need to be conspicuous to birds.[20] Birds have their greatest spectral sensitivity and finest hue discrimination at the red end of the visual spectrum,[20] so red is particularly conspicuous to them. Hummingbirds may also be able to see ultraviolet "colors". The prevalence of ultraviolet patterns and nectar guides in nectar-poor entomophilous (insect-pollinated) flowers warns the bird to avoid these flowers.[20] Each of the two subfamilies of hummingbirds, the Phaethornithinae (hermits) and the Trochilinae, has evolved in conjunction with a particular set of flowers. Most Phaethornithinae species are associated with large monocotyledonous herbs, while the Trochilinae prefer dicotyledonous plant species.[20]

Fig reproduction and fig wasps

The Ficus genus is composed of 800 species of vines, shrubs, and trees, including the cultivated fig, defined by their syconiums, the fruit-like vessels that either hold female flowers or pollen on the inside. Each fig species has its own fig wasp which (in most cases) pollinates the fig, so a tight mutual dependence has evolved and persisted throughout the genus.[24]

Acacia ants and acacias

Pseudomyrmex ant on bull thorn acacia (Vachellia cornigera) with Beltian bodies that provide the ants with protein[25]

The acacia ant (Pseudomyrmex ferruginea) is an obligate plant ant that protects at least five species of "Acacia" (Vachellia)[a] from preying insects and from other plants competing for sunlight, and the tree provides nourishment and shelter for the ant and its larvae.[25][26] Such mutualism is not automatic: other ant species exploit trees without reciprocating, following different evolutionary strategies. These cheater ants impose important host costs via damage to tree reproductive organs, though their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast.[27][28]

Hosts and parasites

Parasites and sexually reproducing hosts

Host–parasite coevolution is the coevolution of a host and a parasite.[29] A general characterization of many viruses, as obligate parasites, is that they coevolved alongside their respective hosts. Correlated mutations between the two species enter them into an evolutionary arms race. Whichever organism, host or parasite, cannot keep up with the other will be eliminated from its habitat, as the species with the higher average population fitness survives. This race is known as the Red Queen hypothesis.[30] The Red Queen hypothesis predicts that sexual reproduction allows a host to stay just ahead of its parasite, similar to the Red Queen's race in Through the Looking-Glass: "it takes all the running you can do, to keep in the same place".[31] The host reproduces sexually, producing some offspring with immunity to its parasite, which then evolves in response.[32]

The parasite/host relationship probably drove the prevalence of sexual reproduction over the more efficient asexual reproduction. It seems that when a parasite infects a host, sexual reproduction affords a better chance of developing resistance (through variation in the next generation), giving sexual reproduction a variability in fitness not seen in asexual reproduction, which produces another generation of the organism susceptible to infection by the same parasite.[33][34][35] Coevolution between host and parasite may accordingly be responsible for much of the genetic diversity seen in normal populations, including blood-plasma polymorphism, protein polymorphism, and histocompatibility systems.[36]

Brood parasites

Brood parasitism demonstrates close coevolution of host and parasite, for example in cuckoos. These birds do not make their own nests, but lay their eggs in nests of other species, ejecting or killing the eggs and young of the host and thus having a strong negative impact on the host's reproductive fitness. Their eggs are camouflaged as eggs of their hosts, implying that hosts can distinguish their own eggs from those of intruders and are in an evolutionary arms race with the cuckoo between camouflage and recognition. Cuckoos are counter-adapted to host defences with features such as thickened eggshells, shorter incubation (so their young hatch first), and flat backs adapted to lift eggs out of the nest.[37][38]

Predators and prey

Predator and prey: a leopard killing a bushbuck

Predators and prey interact and coevolve, the predator to catch the prey more effectively, the prey to escape. The coevolution of the two mutually imposes selective pressures. These often lead to an evolutionary arms race between prey and predator, resulting in antipredator adaptations.[39]

The same applies to herbivores, animals that eat plants, and the plants that they eat. In the Rocky Mountains, red squirrels and crossbills (seed-eating birds) compete for seeds of the lodgepole pine. The squirrels get at pine seeds by gnawing through the cone scales, whereas the crossbills get at the seeds by extracting them with their unusual crossed mandibles. In areas where there are squirrels, the lodgepole's cones are heavier, and have fewer seeds and thinner scales, making it more difficult for squirrels to get at the seeds. Conversely, where there are crossbills but no squirrels, the cones are lighter in construction, but have thicker scales, making it more difficult for crossbills to get at the seeds. The lodgepole's cones are in an evolutionary arms race with the two kinds of herbivore.[40]

Sexual conflict has been studied in Drosophila melanogaster (shown mating, male on right).

Competition

Both intraspecific competition, with features such as sexual conflict[41] and sexual selection,[42] and interspecific competition, such as between predators, may be able to drive coevolution.[43]

Guild or diffuse coevolution

Long-tongued bees and long-tubed flowers coevolved, whether pairwise or "diffusely" in groups known as guilds.[44]

The types of coevolution listed so far have been described as if they operated pairwise (also called specific coevolution), in which traits of one species have evolved in direct response to traits of a second species, and vice versa. This is not always the case. Another evolutionary mode arises where evolution is still reciprocal, but is among a group of species rather than exactly two. This is called guild or diffuse coevolution. For instance, a trait in several species of flowering plant, such as offering its nectar at the end of a long tube, can coevolve with a trait in one or several species of pollinating insects, such as a long proboscis. More generally, flowering plants are pollinated by insects from different families including bees, flies, and beetles, all of which form a broad guild of pollinators which respond to the nectar or pollen produced by flowers.[44][45][46]

Outside biology

Coevolution is primarily a biological concept, but has been applied to other fields by analogy.

In algorithms

Coevolutionary algorithms are used for generating artificial life as well as for optimization, game learning and machine learning.[47][48][49][50][51] Daniel Hillis added "co-evolving parasites" to prevent an optimization procedure from becoming stuck at local maxima.[52] Karl Sims coevolved virtual creatures.[53]

In architecture

The concept of coevolution was introduced in architecture by the Danish architect-urbanist Henrik Valeur as an antithesis to the concept of "star-architecture".[54] As the curator of the Danish Pavilion at the 2006 Venice Biennale of Architecture he conceived and orchestrated an exhibition project named 'Co-evolution', awarded the Golden Lion for Best National Pavilion.[55]

The exhibition included urban planning projects for the cities of Beijing, Chongqing, Shanghai and Xi'an, which had been developed in collaboration between young professional Danish architects and students and professors and students from leading universities in the four Chinese cities.[56] By creating a framework for collaboration between academics and professionals representing two distinct cultures, it was hoped that the exchange of knowledge, ideas and experiences would stimulate "creativity and imagination to set the spark for new visions for sustainable urban development."[57] Valeur later argued that: "As we become more and more interconnected and interdependent, human development is no longer a matter of the evolution of individual groups of people but rather a matter of the co-evolution of all people."[58]

In technology

Computer software and hardware can be considered as two separate components that are nonetheless tied intrinsically by coevolution. The same applies to operating systems and computer applications, and to web browsers and web applications.
All of these systems depend upon each other and advance step by step through a kind of evolutionary process. Changes in hardware, an operating system or web browser may introduce new features that are then incorporated into the corresponding applications running alongside.[59] The idea is closely related to the concept of "joint optimization" in sociotechnical systems analysis and design, where a system is understood to consist of both a "technical system" encompassing the tools and hardware used for production and maintenance, and a "social system" of relationships and procedures through which the technology is tied into the goals of the system and all the other human and organizational relationships within and outside the system. Such systems work best when the technical and social systems are deliberately developed together.[60]

In sociology

Models of coevolution have been proposed for sociology and international political economy.[61] Richard Norgaard's 2006 book Development Betrayed proposes a "Co-Evolutionary Revisioning of the Future" of social and economic life.[62]
