Figure: The Cauchy stress tensor, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form a 3 × 3 matrix whose columns are the stresses (forces per unit area) acting on the e1, e2, and e3 faces of the cube.
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors.[1] A more sophisticated example is the Cauchy stress tensor T, which takes a direction v as input and produces the stress T(v) on the surface normal to this vector as output, thus expressing a relationship between these two vectors, as shown in the figure above.
Given a reference basis of vectors, a tensor can be represented as an organized multidimensional array of numerical values. The order (also degree or rank) of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix (a 2-dimensional array) in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and is a 1st-order tensor. Scalars are single numbers and are thus 0th-order tensors. The collection of tensors on a vector space forms a tensor algebra.
Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of basis. The basis independence of a tensor then takes the form of a covariant and/or contravariant transformation law that relates the array computed in one basis to that computed in another one. The precise form of the transformation law determines the type (or valence) of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices. The total order of a tensor is the sum of these two numbers.
Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as stress, elasticity, fluid mechanics, and general relativity. In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example, the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are simply called "tensors".
Tensors were conceived in 1900 by Tullio Levi-Civita and Gregorio Ricci-Curbastro, who continued the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others, as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.[2]
Definition
Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different languages and at different levels of abstraction.
As multidimensional arrays
Just as a vector in an n-dimensional space is represented by a one-dimensional array of length n with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted $T_{ij}$, where i and j are indices running from 1 to n, or also by $T_i{}^j$. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while $T_{ij}$ and $T_i{}^j$ can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together. The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order, degree or rank of the tensor. However, the term "rank" generally has another meaning in the context of matrices and tensors.
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see covariance and contravariance of vectors), where the new basis vectors $\hat{\mathbf e}_i$ are expressed in terms of the old basis vectors $\mathbf e_j$ as
$$\hat{\mathbf e}_i = \sum_{j=1}^n \mathbf e_j R^j_i = \mathbf e_j R^j_i .$$
Here $R^j_i$ are the entries of the change-of-basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article.[Note 1] The components $v^i$ of a column vector v transform with the inverse of the matrix R,
$$\hat v^i = \left(R^{-1}\right)^i_j v^j ,$$
where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector's components transform by the inverse of the change of basis. In contrast, the components $w_i$ of a covector (or row vector) w transform with the matrix R itself,
$$\hat w_i = w_j R^j_i .$$
This is called a covariant transformation law, because the covector's components transform by the same matrix as the change-of-basis matrix. The components of a more general tensor transform by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).
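As an illustration (a Python/NumPy sketch, not part of the original article), the two transformation laws can be checked numerically; the change-of-basis matrix R here is an arbitrary invertible matrix chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Change-of-basis matrix R; its entries are the R^j_i above.
R = rng.normal(size=(n, n))
R_inv = np.linalg.inv(R)

v = rng.normal(size=n)   # components v^i of a vector (contravariant)
w = rng.normal(size=n)   # components w_i of a covector (covariant)

v_hat = R_inv @ v        # contravariant law: v̂^i = (R⁻¹)^i_j v^j
w_hat = w @ R            # covariant law:     ŵ_i = w_j R^j_i

# The scalar pairing w_i v^i is the same in both bases:
print(np.isclose(w @ v, w_hat @ v_hat))  # True
```

The invariance of the pairing is exactly the cancellation of R with R⁻¹ described below.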
As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array $T$ that transforms under a change-of-basis matrix $R$ by $\hat T = R^{-1} T R$. For the individual matrix entries, this transformation law has the form
$$\hat T^{i'}_{j'} = \left(R^{-1}\right)^{i'}_i \, T^i_j \, R^j_{j'} ,$$
so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1, 1).
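A quick numerical check of this conjugation law (an illustrative Python sketch, not from the original article):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
R = rng.normal(size=(n, n))      # change-of-basis matrix
R_inv = np.linalg.inv(R)

T = rng.normal(size=(n, n))      # matrix of a linear operator: type (1,1)
v = rng.normal(size=n)           # components of a contravariant vector

T_hat = R_inv @ T @ R            # (1,1) transformation law: T̂ = R⁻¹ T R
v_hat = R_inv @ v                # contravariant transformation of v

# The image Tv also transforms contravariantly: T̂ v̂ = R⁻¹ (T v).
print(np.allclose(T_hat @ v_hat, R_inv @ (T @ v)))  # True
```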
Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
$$\mathbf v = \hat v^i \, \hat{\mathbf e}_i = \left( \left(R^{-1}\right)^i_j v^j \right) \left( \mathbf e_k R^k_i \right) = \left( \left(R^{-1}\right)^i_j R^k_i \right) v^j \, \mathbf e_k = \delta^k_j v^j \, \mathbf e_k = v^k \, \mathbf e_k = v^i \, \mathbf e_i ,$$
where $\delta^k_j$ is the Kronecker delta, which functions similarly to the identity matrix and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to rearrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like $v^i \, \mathbf e_i$ can immediately be seen to be geometrically identical in all coordinate systems.
Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components $(Tv)^i$ are given by $(Tv)^i = T^i_j v^j$. These components transform contravariantly, since
$$\left(\widehat{Tv}\right)^{i'} = \hat T^{i'}_{j'} \, \hat v^{j'} = \left[ \left(R^{-1}\right)^{i'}_i T^i_j R^j_{j'} \right] \left[ \left(R^{-1}\right)^{j'}_k v^k \right] = \left(R^{-1}\right)^{i'}_i \left(Tv\right)^i .$$
The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as
$$\hat T^{i'_1, \ldots, i'_p}_{j'_1, \ldots, j'_q} = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} \, T^{i_1, \ldots, i_p}_{j_1, \ldots, j_q} \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q} .$$
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalisation in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short.
This discussion motivates the following formal definition:[3][4]
Definition. A tensor of type (p, q) is an assignment of a multidimensional array
$$T^{i_1 \dots i_p}_{j_1 \dots j_q}[\mathbf f]$$
to each basis f = (e1, ..., en) of an n-dimensional vector space such that, if we apply the change of basis
$$\mathbf f \mapsto \mathbf f \cdot R = \left( \mathbf e_i R^i_1, \dots, \mathbf e_i R^i_n \right),$$
then the multidimensional array obeys the transformation law
$$T^{i'_1 \dots i'_p}_{j'_1 \dots j'_q}[\mathbf f \cdot R] = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} \, T^{i_1, \ldots, i_p}_{j_1, \ldots, j_q}[\mathbf f] \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q} .$$
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.[2]
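To make the index bookkeeping concrete, here is an illustrative NumPy sketch (not part of the original article) applying this law to a type-(1, 2) tensor; transforming by R and then by R⁻¹ recovers the original components:

```python
import numpy as np

def transform_1_2(T, R):
    """Type-(1,2) law: T̂^a_{bc} = (R⁻¹)^a_i T^i_{jk} R^j_b R^k_c."""
    R_inv = np.linalg.inv(R)
    return np.einsum('ai,ijk,jb,kc->abc', R_inv, T, R, R)

rng = np.random.default_rng(2)
n = 3
T = rng.normal(size=(n, n, n))   # components of a type-(1,2) tensor
R = rng.normal(size=(n, n))      # an invertible change of basis

round_trip = transform_1_2(transform_1_2(T, R), np.linalg.inv(R))
print(np.allclose(round_trip, T))  # True
```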
An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If $\mathbf f = (\mathbf f_1, \dots, \mathbf f_n)$ is an ordered basis, and $R = \left(R^i_j\right)$ is an invertible $n \times n$ matrix, then the action is given by
$$\mathbf f R = \left( \mathbf f_i R^i_1, \dots, \mathbf f_i R^i_n \right).$$
Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let $\rho$ be a representation of GL(n) on W (that is, a group homomorphism $\rho : \mathrm{GL}(n) \to \mathrm{GL}(W)$). Then a tensor of type $\rho$ is an equivariant map $T : F \to W$. Equivariance here means that
$$T(\mathbf f R) = \rho\left(R^{-1}\right) T(\mathbf f).$$
When $\rho$ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds,[5] and readily generalizes to other groups.[3]
As multilinear maps
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold.[6] In this approach, a type (p, q) tensor T is defined as a multilinear map,
$$T : \underbrace{V^* \times \dots \times V^*}_{p \text{ copies}} \times \underbrace{V \times \dots \times V}_{q \text{ copies}} \to \mathbf R ,$$
that is, a map which is linear in each of its arguments, where V∗ is the corresponding dual space of covectors. The above assumes V is a vector space over the real numbers, R. More generally, V can be taken over an arbitrary field of numbers F (e.g. the complex numbers), with a one-dimensional vector space over F replacing R as the codomain of the multilinear maps.
By applying a multilinear map T of type (p, q) to a basis {ej} for V and a canonical cobasis {εi} for V∗,
$$T^{i_1 \dots i_p}_{j_1 \dots j_q} \equiv T\left(\varepsilon^{i_1}, \dots, \varepsilon^{i_p}, \mathbf e_{j_1}, \dots, \mathbf e_{j_q}\right),$$
a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multidimensional array definition. The multidimensional array of components of T thus forms a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
In viewing a tensor as a multilinear map, it is conventional to identify the vector space V with the space of linear functionals on the dual of V, the double dual V∗∗. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual.
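For example (an illustrative Python sketch, not from the original article), a bilinear form is a type-(0, 2) tensor, and its component array is obtained by evaluating the multilinear map on pairs of basis vectors:

```python
import numpy as np

# A bilinear form B(u, v) on R^3, built from a fixed symmetric matrix g
# (the matrix is a hypothetical choice made for this illustration).
g = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

def B(u, v):
    return u @ g @ v   # linear in u and in v separately

# Components with respect to the standard basis: B_ij = B(e_i, e_j).
e = np.eye(3)
components = np.array([[B(e[i], e[j]) for j in range(3)] for i in range(3)])
print(np.allclose(components, g))  # True: the component array is g itself
```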
Using tensor products
For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property. A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces,[7][8]
$$T \in \underbrace{V \otimes \dots \otimes V}_{p \text{ copies}} \otimes \underbrace{V^* \otimes \dots \otimes V^*}_{q \text{ copies}} .$$
A basis vi of V and a basis wj of W naturally induce a basis vi ⊗ wj of the tensor product V ⊗ W. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual basis {εj}, i.e.
$$T = T^{i_1 \dots i_p}_{j_1 \dots j_q} \; \mathbf e_{i_1} \otimes \dots \otimes \mathbf e_{i_p} \otimes \varepsilon^{j_1} \otimes \dots \otimes \varepsilon^{j_q} .$$
Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps.
Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above.
Tensors in infinite dimensions
This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic.[Note 2] Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves.[9] For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories.[10]
Tensor fields
In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor.[2]
In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions
$$\bar x^i\left(x^1, \dots, x^n\right)$$
defining a coordinate transformation,[2]
$$\hat T^{i'_1 \dots i'_p}_{j'_1 \dots j'_q}\left(\bar x^1, \dots, \bar x^n\right) = \frac{\partial \bar x^{i'_1}}{\partial x^{i_1}} \cdots \frac{\partial \bar x^{i'_p}}{\partial x^{i_p}} \, \frac{\partial x^{j_1}}{\partial \bar x^{j'_1}} \cdots \frac{\partial x^{j_q}}{\partial \bar x^{j'_q}} \, T^{i_1 \dots i_p}_{j_1 \dots j_q}\left(x^1, \dots, x^n\right).$$
Examples
This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold, because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.
| n \ m | 0 | 1 | 2 | 3 | ⋯ | M | ⋯ |
|---|---|---|---|---|---|---|---|
| 0 | Scalar, e.g. scalar curvature | Covector, linear functional, 1-form, e.g. dipole moment, gradient of a scalar field | Bilinear form, e.g. inner product, quadrupole moment, metric tensor, Ricci curvature, 2-form, symplectic form | 3-form, e.g. octupole moment | | M-form, i.e. volume form | |
| 1 | Vector, e.g. direction vector | Linear transformation,[11] Kronecker delta | E.g. cross product in three dimensions | E.g. Riemann curvature tensor | | | |
| 2 | Inverse metric tensor, bivector, e.g., Poisson structure | | E.g. elasticity tensor | | | | |
| ⋮ | | | | | | | |
| N | N-vector, a sum of N-blades | | | | | | |
| ⋮ | | | | | | | |
Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table.
Figure: Orientation defined by an ordered set of vectors; reversed orientation corresponds to negating the exterior product.
Figure: Geometric interpretation of grade-n elements in a real exterior algebra for n = 0 (signed point), 1 (directed line segment, or vector), 2 (oriented plane element), 3 (oriented volume). The exterior product of n vectors can be visualized as any n-dimensional shape (e.g. n-parallelotope, n-ellipsoid), with magnitude (hypervolume) and orientation defined by that on its (n − 1)-dimensional boundary and on which side the interior is.[12][13]
Notation
There are several notational systems that are used to describe tensors and perform calculations involving them.
Ricci calculus
Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.
Einstein summation convention
The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.
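For instance (an illustrative Python sketch, not part of the original article), NumPy's einsum function implements this convention for component arrays:

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])

# "A_ij v_j": the repeated index j is summed over, yielding a vector u_i.
u = np.einsum('ij,j->i', A, v)
print(np.allclose(u, A @ v))                # True

# "A_ii": a repeated index on a single array sums the diagonal (the trace).
print(np.einsum('ii->', A) == np.trace(A))  # True
```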
Penrose graphical notation
Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.
Abstract index notation
The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.
Component-free notation
A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.
Operations
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type.
Tensor product
The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.
$$(S \otimes T)\left(v_1, \dots, v_n, v_{n+1}, \dots, v_{n+m}\right) = S\left(v_1, \dots, v_n\right) \, T\left(v_{n+1}, \dots, v_{n+m}\right),$$
which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.
$$(S \otimes T)^{i_1 \dots i_l i_{l+1} \dots i_{l+n}}_{j_1 \dots j_k j_{k+1} \dots j_{k+m}} = S^{i_1 \dots i_l}_{j_1 \dots j_k} \, T^{i_{l+1} \dots i_{l+n}}_{j_{k+1} \dots j_{k+m}} .$$
If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m).
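On components this is an outer product, as in the following NumPy sketch (an illustration, not from the original article):

```python
import numpy as np

S = np.arange(3.0)                 # an order-1 tensor (3 components)
T = np.arange(6.0).reshape(2, 3)   # an order-2 tensor (2x3 components)

# Tensor (outer) product: components multiply pairwise,
# (S ⊗ T)_{i jk} = S_i T_{jk}, an order-3 array of shape (3, 2, 3).
P = np.tensordot(S, T, axes=0)
print(P.shape)                                  # (3, 2, 3)
print(np.isclose(P[1, 0, 2], S[1] * T[0, 2]))   # True
```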
Contraction
Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor $T^i_j$ can be contracted to a scalar through
$$T^i_i = T^1_1 + T^2_2 + \dots + T^n_n ,$$
where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.
The contraction is often used in conjunction with the tensor product to contract an index from each tensor.
The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗, by first decomposing the tensor into a linear combination of simple tensors and then applying a factor from V∗ to a factor from V. For example, a tensor $T \in V \otimes V \otimes V^*$ can be written as a linear combination
$$T = v_1 \otimes w_1 \otimes \alpha_1 + v_2 \otimes w_2 \otimes \alpha_2 + \dots + v_N \otimes w_N \otimes \alpha_N .$$
The contraction of T on the first and last slots is then the vector
$$\alpha_1(v_1)\, w_1 + \alpha_2(v_2)\, w_2 + \dots + \alpha_N(v_N)\, w_N .$$
In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor $T^{ij}$ can be contracted to a scalar through
$$T^{ij} g_{ij}$$
(yet again assuming the summation convention).
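Both kinds of contraction in a brief NumPy sketch (an illustration, not from the original article):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

# Contracting the upper with the lower index of a (1,1)-tensor: the trace.
T11 = rng.normal(size=(n, n))
print(np.isclose(np.einsum('ii->', T11), np.trace(T11)))   # True

# Metric contraction of a (2,0)-tensor: T^{ij} g_{ij}.
g = np.eye(n)                     # a hypothetical metric (Euclidean here)
T20 = rng.normal(size=(n, n))
scalar = np.einsum('ij,ij->', T20, g)
print(np.isclose(scalar, np.trace(T20)))   # True for the Euclidean metric
```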
Raising or lowering an index
When a vector space is equipped with a nondegenerate bilinear form (or metric tensor, as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with a lower index generally shown in the same position as the contracted upper index. This operation is quite graphically known as lowering an index.
Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
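For example (an illustrative Python sketch, not from the original article), lowering an index with a metric g and raising it again with the inverse metric recovers the original components:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

# A hypothetical symmetric, nondegenerate metric tensor g_ij.
A = rng.normal(size=(n, n))
g = A @ A.T + n * np.eye(n)      # symmetric positive definite
g_inv = np.linalg.inv(g)         # inverse metric g^{ij}

v = rng.normal(size=n)           # contravariant components v^i

v_low = np.einsum('ij,j->i', g, v)          # lowering: v_i = g_ij v^j
v_up = np.einsum('ij,j->i', g_inv, v_low)   # raising:  v^i = g^{ij} v_j
print(np.allclose(v_up, v))      # True
```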
Applications
Continuum mechanics
Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field.
In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.
If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.
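Numerically (an illustrative Python sketch, not from the original article), the traction on a surface element with unit normal v is obtained by applying the component matrix of the stress tensor to v:

```python
import numpy as np

# A hypothetical symmetric Cauchy stress tensor, components in Pa.
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  5.0],
                  [ 0.0,  5.0, 30.0]])

v = np.array([0.0, 0.0, 1.0])   # unit normal of the chosen surface element

t = sigma @ v                   # traction vector T(v), force per unit area
print(t)                        # [ 0.  5. 30.] -- generally not parallel to v
```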
Other examples from physics
Common applications include:
- the electromagnetic tensor (or Faraday tensor) in electromagnetism;
- finite deformation tensors, and the strain tensor, in continuum mechanics;
- permittivity and electric susceptibility, which are tensors in anisotropic media;
- four-tensors in general relativity, such as the stress–energy tensor;
- diffusion tensors, the basis of diffusion tensor imaging, which represent rates of diffusion in biological environments.
Applications of tensors of order > 2
The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do, however, capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.
The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:
$$\frac{P_i}{\varepsilon_0} = \sum_j \chi^{(1)}_{ij} E_j + \sum_{jk} \chi^{(2)}_{ijk} E_j E_k + \sum_{jk\ell} \chi^{(3)}_{ijk\ell} E_j E_k E_\ell + \cdots .$$
Here $\chi^{(1)}$ is the linear susceptibility, $\chi^{(2)}$ gives the Pockels effect and second harmonic generation, and $\chi^{(3)}$ gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
Generalizations
Tensor products of vector spaces
The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense,[14] and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces.[15] A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring.
Tensors in infinite dimensions
The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces.[16] Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where, instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous duals.[17] Tensors thus live naturally on Banach manifolds[18] and Fréchet manifolds.
Tensor densities
Suppose that a homogeneous medium fills R3, so that the density of the medium is described by a single scalar value ρ in kg⋅m−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region:
$$m = \int_\Omega \rho \, dx \, dy \, dz ,$$
where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:
$$x' = 100 x, \quad y' = 100 y, \quad z' = 100 z .$$
The numerical value of the density ρ must then also transform by $100^{-3}$ to compensate, so that the numerical value of the mass in kg is still given by the integral of $\rho \, dx \, dy \, dz$. Thus $\rho' = 10^{-6} \rho$ (in units of kg⋅cm−3).
More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold.
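A small numerical check of this scaling (an illustration, not from the original article): the mass of a box is unchanged under the metre-to-centimetre rescaling, provided the density value picks up the reciprocal of the determinant of the coordinate change:

```python
import numpy as np

rho = 1000.0                       # density in kg/m^3
box_m = np.array([2.0, 3.0, 0.5])  # box edge lengths in metres

mass_m = rho * np.prod(box_m)      # mass computed in metre coordinates

s = 100.0                          # m -> cm: each coordinate rescales by 100
det = s**3                         # determinant of the coordinate change
rho_cm = rho / det                 # scalar density scales by 1/|det| = 1e-6
box_cm = s * box_m                 # edge lengths in centimetres

mass_cm = rho_cm * np.prod(box_cm)
print(np.isclose(mass_m, mass_cm))  # True: the mass is invariant
```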
A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:[19]
$$T^{i'_1 \dots i'_p}_{j'_1 \dots j'_q}[\mathbf f \cdot R] = \left|\det R\right|^{-w} \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} \, T^{i_1 \dots i_p}_{j_1 \dots j_q}[\mathbf f] \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q} .$$
Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor.[20][21] An example of a tensor density is the current density of electromagnetism.
Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations comes from the logarithmic representation of the general linear group, a reducible but not semisimple representation,[22] consisting of a pair (x, y) ∈ R2 with the transformation law
$$(x, y) \mapsto \left(x + y \log \left|\det R\right|,\; y\right).$$
Geometric objects
The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes.[23] Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.[24][25]
Spinors
When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.[26] A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.[27][28]
Succinctly, spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.
History
The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century.[29] The word "tensor" itself was introduced in 1846 by William Rowan Hamilton[30] to describe something different from what is now meant by a tensor.[Note 3] The contemporary usage was introduced by Woldemar Voigt in 1898.[31]
Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented by Ricci in 1892.[32] It was made accessible to many mathematicians by the publication of Ricci and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications).[33]
In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann.[34] Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:
"I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot."
Tensors were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics.
From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem).[36] Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic.[37] Tensors are generalized within category theory by means of the concept of a monoidal category, from the 1960s.[38]