Friday, September 12, 2014

Molecular symmetry

From Wikipedia, the free encyclopedia
 
Molecular symmetry in chemistry describes the symmetry present in molecules and the classification of molecules according to their symmetry. Molecular symmetry is a fundamental concept in chemistry, as it can predict or explain many of a molecule's chemical properties, such as its dipole moment and its allowed spectroscopic transitions (based on selection rules such as the Laporte rule). Many university level textbooks on physical chemistry, quantum chemistry, and inorganic chemistry devote a chapter to symmetry.[1][2][3][4][5]

While various frameworks for the study of molecular symmetry exist, group theory is the predominant one. This framework is also useful in studying the symmetry of molecular orbitals, with applications such as the Hückel method, ligand field theory, and the Woodward-Hoffmann rules. Another framework on a larger scale is the use of crystal systems to describe crystallographic symmetry in bulk materials.

Many techniques for the practical assessment of molecular symmetry exist, including X-ray crystallography and various forms of spectroscopy, for example infrared spectroscopy of metal carbonyls. Spectroscopic notation is based on symmetry considerations.

Symmetry concepts

The study of symmetry in molecules is an adaptation of mathematical group theory.

Elements

The symmetry of a molecule can be described by 5 types of symmetry elements.
  • Symmetry axis: an axis around which a rotation by  \tfrac{360^\circ} {n} results in a molecule indistinguishable from the original. This is also called an n-fold rotational axis and abbreviated Cn. Examples are the C2 in water and the C3 in ammonia. A molecule can have more than one symmetry axis; the one with the highest n is called the principal axis, and by convention is assigned the z-axis in a Cartesian coordinate system.
  • Plane of symmetry: a plane of reflection through which an identical copy of the original molecule is given. This is also called a mirror plane and abbreviated σ. Water has two of them: one in the plane of the molecule itself and one perpendicular to it. A symmetry plane parallel with the principal axis is dubbed vertical (σv) and one perpendicular to it horizontal (σh). A third type of symmetry plane exists: if a vertical symmetry plane additionally bisects the angle between two 2-fold rotation axes perpendicular to the principal axis, the plane is dubbed dihedral (σd). A symmetry plane can also be identified by its Cartesian orientation, e.g., (xz) or (yz).
  • Center of symmetry or inversion center, abbreviated i. A molecule has a center of symmetry when, for any atom in the molecule, an identical atom exists diametrically opposite this center an equal distance from it. There may or may not be an atom at the center. Examples are xenon tetrafluoride where the inversion center is at the Xe atom, and benzene (C6H6) where the inversion center is at the center of the ring.
  • Rotation-reflection axis: an axis around which a rotation by  \tfrac{360^\circ} {n} , followed by a reflection in a plane perpendicular to it, leaves the molecule unchanged. Also called an n-fold improper rotation axis, it is abbreviated Sn. Examples are present in tetrahedral silicon tetrafluoride, with three S4 axes, and the staggered conformation of ethane with one S6 axis.
  • Identity, abbreviated to E, from the German 'Einheit' meaning unity.[6] This symmetry element simply consists of no change: every molecule has this element. While this element seems physically trivial, it must be included in the list of symmetry elements so that they form a mathematical group, whose definition requires inclusion of the identity element. It is so called because it is analogous to multiplying by one (unity).

Operations

The 5 symmetry elements have associated with them 5 types of symmetry operations. They are often, although not always, distinguished from the respective elements by a caret. Thus, Ĉn is the rotation of a molecule around an axis and Ê is the identity operation. A symmetry element can have more than one symmetry operation associated with it. Since C1 is equivalent to E, S1 to σ and S2 to i, all symmetry operations can be classified as either proper or improper rotations.

Binary Operations

          A binary operation is a mapping that maps each pair of elements of a set to a single element of the same set. Ordinary addition and multiplication of numbers are binary operations on the set of integers, the set of rational numbers, and the set of real numbers.

Groups

          A group is a mathematical structure (usually denoted (G, *)) consisting of a set G and a binary operation * satisfying the following properties:

(1) closure property:
          For every pair of elements x and y in G, the product x*y is also in G.
          ( in symbols: for every two elements x, y ∈ G, x*y is also in G )
(2) associative property:
          For every x, y and z in G, both (x*y)*z and x*(y*z) result in the same element of G.
          ( in symbols: (x*y)*z = x*(y*z) for every x, y and z ∈ G )
(3) existence of identity property:
          There must be an element (say e) in G such that the product of e with any element of G leaves that element unchanged.
          ( in symbols: x*e = e*x = x for every x ∈ G )
(4) existence of inverse property:
          For each element x in G, there must be an element y in G such that the product of x and y is the identity element e.
          ( in symbols: for each x ∈ G there is a y ∈ G such that x*y = y*x = e )

Remark

The Order of a group is the number of elements in the group.

In the case of groups of small order, verification of the properties can easily be carried out by considering the group's composition table, a table whose rows and columns correspond to elements of the group and whose entries are the corresponding products.
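The axiom check described above can be automated. The sketch below (not from the article) verifies the four group properties directly from a composition table, using the order-4 table of the C2v point group discussed later in the text; the labels "sv" and "sv2" stand for σv and σv'.

```python
# Checking the four group axioms from a composition table.
ELEMENTS = ["E", "C2", "sv", "sv2"]  # sv2 stands for sigma_v'

# TABLE[a][b] = a*b (the operation b followed by the operation a)
TABLE = {
    "E":   {"E": "E",   "C2": "C2",  "sv": "sv",  "sv2": "sv2"},
    "C2":  {"E": "C2",  "C2": "E",   "sv": "sv2", "sv2": "sv"},
    "sv":  {"E": "sv",  "C2": "sv2", "sv": "E",   "sv2": "C2"},
    "sv2": {"E": "sv2", "C2": "sv",  "sv": "C2",  "sv2": "E"},
}

def is_group(elements, table):
    # closure: every product lands back in the set
    if any(table[a][b] not in elements for a in elements for b in elements):
        return False
    # associativity: (a*b)*c == a*(b*c) for all triples
    if any(table[table[a][b]][c] != table[a][table[b][c]]
           for a in elements for b in elements for c in elements):
        return False
    # identity: some e with e*x == x*e == x for all x
    identities = [e for e in elements
                  if all(table[e][x] == x and table[x][e] == x for x in elements)]
    if not identities:
        return False
    e = identities[0]
    # inverses: every x has a y with x*y == y*x == e
    return all(any(table[x][y] == e and table[y][x] == e for y in elements)
               for x in elements)

print(is_group(ELEMENTS, TABLE))  # True: C2v is a group of order 4
```

Note that in this table every element is its own inverse, and the product of any two distinct non-identity elements is the third one.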

Point group

The successive application (or composition) of one or more symmetry operations of a molecule has an effect equivalent to that of some single symmetry operation of the molecule. Moreover, the set of all symmetry operations, together with this composition operation, obeys all the properties of a group given above. So (S, *) is a group, where S is the set of all symmetry operations of some molecule and * denotes the composition (repeated application) of symmetry operations. This group is called the point group of that molecule.

The symmetry of a crystal is described by a space group of symmetry operations rather than a point group.

Examples

    (1)   The point group for the water molecule is C2v, consisting of the symmetry operations E, C2, σv and σv'. Its order is thus 4. Each operation is its own inverse. As an example of closure, a C2 rotation followed by a σv reflection is seen to be a σv' symmetry operation: σv*C2 = σv'. (Note that "Operation A followed by B to form C" is written BA = C).

    (2)   Another example is the ammonia molecule, which is pyramidal and contains a three-fold rotation axis as well as three mirror planes at an angle of 120° to each other. Each mirror plane contains an N-H bond and bisects the H-N-H bond angle opposite to that bond. Thus the ammonia molecule belongs to the C3v point group, which has order 6: an identity element E, two rotation operations C3 and C32, and three mirror reflections σv, σv' and σv".
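The order of ammonia's point group can be found by brute force. As a sketch (an assumption of this example, not the article's method), each C3v operation is represented by the permutation it induces on the three hydrogen atoms, labelled 0, 1, 2; closing the set {C3, σv} under composition yields all six elements.

```python
from itertools import product

def compose(p, q):
    # apply q first, then p (the "B followed by A" convention used in the text)
    return tuple(p[q[i]] for i in range(3))

E  = (0, 1, 2)   # identity
C3 = (1, 2, 0)   # 120-degree rotation cycling the hydrogens
SV = (0, 2, 1)   # one mirror plane (swaps two hydrogens)

# generate the closure of {E, C3, SV} under composition
group = {E, C3, SV}
while True:
    new = {compose(a, b) for a, b in product(group, group)} - group
    if not new:
        break
    group |= new

print(len(group))  # 6, the order of C3v stated above
```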

Common point groups

The following table contains a list of point groups with representative molecules. The description of structure includes common shapes of molecules based on VSEPR theory.

Point group | Symmetry operations | Typical geometry | Examples
C1 | E | no symmetry, chiral | bromochlorofluoromethane; lysergic acid
Cs | E σh | mirror plane, no other symmetry | thionyl chloride; hypochlorous acid; chloroiodomethane
Ci | E i | inversion center | (R,R)-1,2-dichloro-1,2-dibromoethane (anti conformer)
C∞v | E 2C∞ ∞σv | linear | hydrogen fluoride; nitrous oxide (dinitrogen monoxide)
D∞h | E 2C∞ ∞σv i 2S∞ ∞C2 | linear with inversion center | oxygen; carbon dioxide
C2 | E C2 | "open book" geometry, chiral | hydrogen peroxide
C3 | E C3 | propeller, chiral | triphenylphosphine
C2h | E C2 i σh | planar with inversion center | trans-1,2-dichloroethylene
C3h | E C3 C32 σh S3 S35 | propeller | boric acid
C2v | E C2 σv(xz) σv'(yz) | angular (H2O) or see-saw (SF4) | water; sulfur tetrafluoride; sulfuryl fluoride
C3v | E 2C3 3σv | trigonal pyramidal | ammonia; phosphorus oxychloride
C4v | E 2C4 C2 2σv 2σd | square pyramidal | xenon oxytetrafluoride
D2 | E C2(x) C2(y) C2(z) | twist, chiral | cyclohexane (twist conformation)
D3 | E C3(z) 3C2 | triple helix, chiral | tris(ethylenediamine)cobalt(III) cation
D2h | E C2(z) C2(y) C2(x) i σ(xy) σ(xz) σ(yz) | planar with inversion center | ethylene; dinitrogen tetroxide; diborane
D3h | E 2C3 3C2 σh 2S3 3σv | trigonal planar or trigonal bipyramidal | boron trifluoride; phosphorus pentachloride
D4h | E 2C4 C2 2C2' 2C2'' i 2S4 σh 2σv 2σd | square planar | xenon tetrafluoride; octachlorodirhenate(III) anion
D5h | E 2C5 2C52 5C2 σh 2S5 2S53 5σv | pentagonal | ruthenocene; C70
D6h | E 2C6 2C3 C2 3C2' 3C2'' i 2S3 2S6 σh 3σd 3σv | hexagonal | benzene; bis(benzene)chromium
D2d | E 2S4 C2 2C2' 2σd | 90° twist | allene; tetrasulfur tetranitride
D3d | E 2C3 3C2 i 2S6 3σd | 60° twist | ethane (staggered rotamer); cyclohexane (chair conformation)
D4d | E 2S8 2C4 2S83 C2 4C2' 4σd | 45° twist | dimanganese decacarbonyl (staggered rotamer)
D5d | E 2C5 2C52 5C2 i 2S103 2S10 5σd | 36° twist | ferrocene (staggered rotamer)
Td | E 8C3 3C2 6S4 6σd | tetrahedral | methane; phosphorus pentoxide; adamantane
Oh | E 8C3 6C2 6C4 3C2 i 6S4 8S6 3σh 6σd | octahedral or cubic | cubane; sulfur hexafluoride
Ih | E 12C5 12C52 20C3 15C2 i 12S10 12S103 20S6 15σ | icosahedral or dodecahedral | buckminsterfullerene; dodecaborate anion; dodecahedrane

Representations

The symmetry operations can be represented in many ways. A convenient representation is by matrices. For any vector representing a point in Cartesian coordinates, left-multiplying it gives the new location of the point transformed by the symmetry operation. Composition of operations corresponds to matrix multiplication. In the C2v example this is:

\underbrace{\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{C_2} \times
\underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\sigma_v} =
\underbrace{\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\sigma'_v}
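The matrix identity above is easy to check numerically; a minimal NumPy sketch:

```python
import numpy as np

C2       = np.diag([-1.0, -1.0, 1.0])  # 180-degree rotation about z
sigma_v  = np.diag([ 1.0, -1.0, 1.0])  # reflection in the xz plane
sigma_vp = np.diag([-1.0,  1.0, 1.0])  # reflection in the yz plane

# composition of operations corresponds to matrix multiplication
assert np.allclose(C2 @ sigma_v, sigma_vp)

# applying sigma_v' to a sample point flips only its x coordinate
point = np.array([1.0, 2.0, 3.0])
print(sigma_vp @ point)  # [-1.  2.  3.]
```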
Although an infinite number of such representations exist, the irreducible representations (or "irreps") of the group are commonly used, as all other representations of the group can be described as a linear combination of the irreducible representations.

Character tables

For each point group, a character table summarizes information on its symmetry operations and on its irreducible representations. As there are always equal numbers of irreducible representations and classes of symmetry operations, the tables are square.
The table itself consists of characters that represent how a particular irreducible representation transforms when a particular symmetry operation is applied. Any symmetry operation in a molecule's point group acting on the molecule itself will leave it unchanged. But when it acts on a general entity, such as a vector or an orbital, this need not be the case. The vector could change sign or direction, and the orbital could change type. For simple point groups, the values are either 1 or −1: 1 means that the sign or phase (of the vector or orbital) is unchanged by the symmetry operation (symmetric) and −1 denotes a sign change (asymmetric).

The representations are labeled according to a set of conventions:
  • A, when rotation around the principal axis is symmetrical
  • B, when rotation around the principal axis is asymmetrical
  • E and T are doubly and triply degenerate representations, respectively
  • when the point group has an inversion center, the subscript g (German: gerade or even) signals no change in sign, and the subscript u (ungerade or uneven) a change in sign, with respect to inversion.
  • with point groups C∞v and D∞h the symbols are borrowed from angular momentum description: Σ, Π, Δ.
The tables also capture information about how the Cartesian basis vectors, rotations about them, and quadratic functions of them transform by the symmetry operations of the group, by noting which irreducible representation transforms in the same way. These indications are conventionally on the righthand side of the tables. This information is useful because chemically important orbitals (in particular p and d orbitals) have the same symmetries as these entities.

The character table for the C2v symmetry point group is given below:
C2v | E | C2 | σv(xz) | σv'(yz) | linear, rotations | quadratic
A1  | 1 | 1  | 1      | 1       | z                 | x2, y2, z2
A2  | 1 | 1  | −1     | −1      | Rz                | xy
B1  | 1 | −1 | 1      | −1      | x, Ry             | xz
B2  | 1 | −1 | −1     | 1       | y, Rx             | yz

Consider the example of water (H2O), which has the C2v symmetry described above. The 2px orbital of oxygen is oriented perpendicular to the plane of the molecule and switches sign with a C2 and a σv'(yz) operation, but remains unchanged with the other two operations (obviously, the character for the identity operation is always +1). This orbital's character set is thus {1, −1, 1, −1}, corresponding to the B1 irreducible representation. Likewise, the 2pz orbital is seen to have the symmetry of the A1 irreducible representation, 2py B2, and the 3dxy orbital A2. These assignments and others are noted in the rightmost two columns of the table.
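The assignment of an orbital to an irreducible representation amounts to looking up its character set as a row of the table. A small sketch (the lookup function is illustrative, not from the article):

```python
# Rows of the C2v character table, in the order E, C2, sigma_v(xz), sigma_v'(yz).
C2V_TABLE = {
    "A1": (1,  1,  1,  1),
    "A2": (1,  1, -1, -1),
    "B1": (1, -1,  1, -1),
    "B2": (1, -1, -1,  1),
}

def assign_irrep(characters):
    # match a character set against the rows of the table
    for name, row in C2V_TABLE.items():
        if row == tuple(characters):
            return name
    raise ValueError("not an irreducible representation of C2v")

# oxygen 2px: unchanged by E and sigma_v(xz), sign-flipped by C2 and sigma_v'(yz)
print(assign_irrep([1, -1, 1, -1]))  # B1, as stated in the text
```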

Historical background

Hans Bethe used characters of point group operations in his study of ligand field theory in 1929, and Eugene Wigner used group theory to explain the selection rules of atomic spectroscopy.[7] The first character tables were compiled by László Tisza (1933), in connection to vibrational spectra. Robert Mulliken was the first to publish character tables in English (1933), and E. Bright Wilson used them in 1934 to predict the symmetry of vibrational normal modes.[8] The complete set of 32 crystallographic point groups was published in 1936 by Rosenthal and Murphy.[9]

Tensor

From Wikipedia, the free encyclopedia

Cauchy stress tensor, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form the matrix

\begin{align}
\sigma & = \begin{bmatrix}\mathbf{T}^{(\mathbf{e}_1)} & \mathbf{T}^{(\mathbf{e}_2)} & \mathbf{T}^{(\mathbf{e}_3)} \end{bmatrix} \\
& = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}
\end{align}
 
whose columns are the stresses (forces per unit area) acting on the e1, e2, and e3 faces of the cube.

Tensors are geometric objects that describe linear relations between vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Vectors and scalars themselves are also tensors. A tensor can be represented as a multi-dimensional array of numerical values. The order (also degree) of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map can be represented by a matrix (a 2-dimensional array) and therefore is a 2nd-order tensor. A vector can be represented as a 1-dimensional array and is a 1st-order tensor. Scalars are single numbers and are thus 0th-order tensors. It is very important not to confuse dimensions of the array with dimensions of the underlying vector space.

Tensors are used to represent correspondences between sets of geometric vectors; for applications in engineering and Newtonian physics these are normally Euclidean vectors. For example, the Cauchy stress tensor T takes a direction v as input and produces the stress T(v) on the surface normal to this vector for output thus expressing a relationship between these two vectors, shown in the figure (right).

Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. Finding the representation of a tensor in terms of a coordinate basis results in an organized multidimensional array representing the tensor in that basis or frame of reference. The coordinate independence of a tensor then takes the form of a "covariant" transformation law that relates the array computed in one coordinate system to that computed in another one. The precise form of the transformation law determines the type (or valence) of the tensor. The tensor type is a pair of natural numbers (n, m) where n is the number of contravariant indices and m is the number of covariant indices. The total order of a tensor is the sum of these two numbers.

Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as elasticity, fluid mechanics, and general relativity. Tensors were first conceived by Tullio Levi-Civita and Gregorio Ricci-Curbastro, who continued the earlier work of Bernhard Riemann and Elwin Bruno Christoffel and others, as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.[1]

Definition

There are several approaches to defining tensors. Although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction.

As multidimensional arrays

Just as a vector with respect to a given basis is represented by an array of one dimension, any tensor with respect to a basis is represented by a multidimensional array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts, after the symbolic name of the tensor. In most cases, the indices of a tensor are either covariant or contravariant, designated by subscript or superscript, respectively. The total number of indices required to uniquely select each component is equal to the dimension of the array, and is called the order, degree or rank of the tensor.[Note 1] For example, the entries of an order-2 tensor T would be denoted T_{ij}, T_i{}^j, T^i{}_j, or T^{ij}, where i and j are indices running from 1 to the dimension of the related vector space.[Note 2] When the basis and its dual coincide (i.e. for an orthonormal basis), the distinction between contravariant and covariant indices may be ignored; in these cases T_{ij} or T^{ij} could be used interchangeably.[Note 3]

Just as the components of a vector change when we change the basis of the vector space, the entries of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see covariance and contravariance of vectors), where the new basis vectors \mathbf{\hat{e}}_i are expressed in terms of the old basis vectors \mathbf{e}_j as,
\mathbf{\hat{e}}_i = \sum_j R^j_i \mathbf{e}_j = R^j_i \mathbf{e}_j,
where R^j_i is a matrix and in the second expression the summation sign was suppressed (a notational convenience introduced by Einstein that will be used throughout this article).[Note 4] The components, v^i, of a regular (or column) vector, v, transform with the inverse of the matrix R,
\hat{v}^i = (R^{-1})^i_j v^j,
where the hat denotes the components in the new basis. The components, w_i, of a covector (or row vector), w, on the other hand transform with the matrix R itself,
\hat{w}_i = R_i^j w_j.
The components of a tensor transform in a similar manner with a transformation matrix for each index. If an index transforms like a vector with the inverse of the basis transformation, it is called contravariant and is traditionally denoted with an upper index, while an index that transforms with the basis transformation itself is called covariant and is denoted with a lower index. The transformation law for an order-m tensor with n contravariant indices and mn covariant indices is thus given as,
\hat{T}^{i_1,\ldots,i_n}_{i_{n+1},\ldots,i_m}= (R^{-1})^{i_1}_{j_1}\cdots(R^{-1})^{i_n}_{j_n} R^{j_{n+1}}_{i_{n+1}}\cdots R^{j_{m}}_{i_{m}}T^{j_1,\ldots,j_n}_{j_{n+1},\ldots,j_m}.
Such a tensor is said to be of order or type (n, mn).[Note 5] This discussion motivates the following formal definition:[2]
Definition. A tensor of type (n, mn) is an assignment of a multidimensional array
T^{i_1\dots i_n}_{i_{n+1}\dots i_m}[\mathbf{f}]
to each basis f = (e1,...,eN) such that, if we apply the change of basis
\mathbf{f}\mapsto \mathbf{f}\cdot R = \left( R_1^i \mathbf{e}_i, \dots, R_N^i\mathbf{e}_i\right)
then the multidimensional array obeys the transformation law
T^{i_1\dots i_n}_{i_{n+1}\dots i_m}[\mathbf{f}\cdot R] = (R^{-1})^{i_1}_{j_1}\cdots(R^{-1})^{i_n}_{j_n} R^{j_{n+1}}_{i_{n+1}}\cdots R^{j_{m}}_{i_{m}}T^{j_1,\ldots,j_n}_{j_{n+1},\ldots,j_m}[\mathbf{f}].

The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.[1] Nowadays, this definition is still used in some physics and engineering text books.[3][4]
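The transformation law above can be exercised numerically for a small case. The sketch below (an illustration, not the article's notation) transforms the components of a type (1, 1) tensor: the contravariant index picks up R⁻¹, the covariant index picks up R, so the result is the familiar similarity transform.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # components T^i_j in the old basis
R = rng.standard_normal((3, 3))   # change-of-basis matrix (invertible for this seed)
R_inv = np.linalg.inv(R)

# T-hat^i_j = (R^{-1})^i_k  R^l_j  T^k_l
T_hat = np.einsum("ik,lj,kl->ij", R_inv, R, T)

# for a (1,1) tensor this is the similarity transform R^{-1} T R, so the
# trace (a fully contracted scalar, i.e. a (0,0)-tensor) is basis independent
print(np.allclose(np.trace(T_hat), np.trace(T)))  # True
```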

Tensor fields

In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor.[1]
In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions, \bar{x}_i(x_1,\ldots,x_k), defining a coordinate transformation,[1]
\hat{T}^{i_1\dots i_n}_{i_{n+1}\dots i_m}(\bar{x}_1,\ldots,\bar{x}_k) =
\frac{\partial \bar{x}^{i_1}}{\partial x^{j_1}}
\cdots
\frac{\partial \bar{x}^{i_n}}{\partial x^{j_n}}
\frac{\partial x^{j_{n+1}}}{\partial \bar{x}^{i_{n+1}}}
\cdots
\frac{\partial x^{j_m}}{\partial \bar{x}^{i_m}}
T^{j_1\dots j_n}_{j_{n+1}\dots j_m}(x_1,\ldots,x_k).

As multilinear maps

A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach is to define a tensor as a multilinear map. In that approach a type (n, m) tensor T is defined as a map,
 T: \underbrace{ V^* \times\dots\times V^*}_{n \text{ copies}} \times \underbrace{ V \times\dots\times V}_{m \text{ copies}} \rightarrow \mathbf{R},
where V is a vector space and V* is the corresponding dual space of covectors, which is linear in each of its arguments.

By applying a multilinear map T of type (n, m) to a basis {ej} for V and a canonical cobasis {εi} for V*,
T^{i_1\dots i_n}_{j_1\dots j_m} \equiv T(\mathbf{\varepsilon}^{i_1},\ldots,\mathbf{\varepsilon}^{i_n},\mathbf{e}_{j_1},\ldots,\mathbf{e}_{j_m}),
an (n+m)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realised as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.

Using tensor products

For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property. A type (n, m) tensor is defined in this context as an element of the tensor product of vector spaces,[5]
 T\in \underbrace{V \otimes\dots\otimes V}_{n \text{ copies}} \otimes \underbrace{V^* \otimes\dots\otimes V^*}_{m \text{ copies}}.
If vi is a basis of V and wj is a basis of W, then the tensor product  V\otimes W has a natural basis  \mathbf{v}_i\otimes \mathbf{w}_j. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual {εj}, i.e.
T = T^{i_1\dots i_n}_{j_1\dots j_m}\; \mathbf{e}_{i_1}\otimes\cdots\otimes \mathbf{e}_{i_n}\otimes \mathbf{\varepsilon}^{j_1}\otimes\cdots\otimes \mathbf{\varepsilon}^{j_m}.
Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (n, m) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps.

Examples

This table shows important examples of tensors, including both tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.
Type (n, m) | Total order | Examples
(0, 0) | 0 | scalar, e.g. scalar curvature
(1, 0) | 1 | vector (e.g. direction vector)
(2, 0) | 2 | bivector, e.g. inverse metric tensor
(n, 0) | n | n-vector, a sum of n-blades
(0, 1) | 1 | covector, linear functional, 1-form (e.g. gradient of a scalar field)
(1, 1) | 2 | linear transformation, Kronecker delta
(0, 2) | 2 | bilinear form, e.g. inner product, metric tensor, Ricci curvature, 2-form, symplectic form
(1, 2) | 3 | e.g. cross product in three dimensions
(2, 2) | 4 | e.g. elasticity tensor
(0, 3) | 3 | e.g. 3-form
(1, 3) | 4 | e.g. Riemann curvature tensor
(0, M) | M | e.g. M-form, i.e. volume form
Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this can be visualized as moving diagonally up and to the right on the table. Symmetrically, lowering an index can be visualized as moving diagonally down and to the left on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this can be visualized as moving diagonally up and to the left on the table.

Figure: Orientation defined by an ordered set of vectors; reversed orientation corresponds to negating the exterior product.
Figure: Geometric interpretation of grade n elements in a real exterior algebra for n = 0 (signed point), 1 (directed line segment, or vector), 2 (oriented plane element), 3 (oriented volume). The exterior product of n vectors can be visualized as any n-dimensional shape (e.g. n-parallelotope, n-ellipsoid), with magnitude (hypervolume) and orientation defined by that on its (n − 1)-dimensional boundary and on which side the interior is.[6][7]

Notation

Ricci calculus

Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.

Einstein summation convention

The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.
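NumPy's einsum implements exactly this convention in code: every repeated index symbol in the subscript string is summed over, just as in the notation (a brief sketch):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
v = np.array([1.0, 2.0, 3.0])

# C^i_k = A^i_j B^j_k  (j is repeated, so it is summed over)
C = np.einsum("ij,jk->ik", A, B)
assert np.allclose(C, A @ B)

# v_i v^i: a fully contracted expression yields a scalar
s = np.einsum("i,i->", v, v)
print(s)  # 14.0
```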

Penrose graphical notation

Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.

Abstract index notation

The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.

Component-free notation

A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.

Operations

There are a number of basic operations that may be conducted on tensors that again produce a tensor.
The linear nature of tensors implies that two tensors of the same type may be added together, and that a tensor may be multiplied by a scalar, with results analogous to the scaling of a vector. On components, these operations are simply performed component by component. They do not change the type of the tensor; however, there also exist operations that do change a tensor's type.

Tensor product

The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.
(S\otimes T)(v_1,\ldots, v_n, v_{n+1},\ldots, v_{n+m}) = S(v_1,\ldots, v_n)T( v_{n+1},\ldots, v_{n+m}),
which again produces a map that is linear in all its arguments. On components, the effect is similarly to multiply the components of the two input tensors, i.e.
(S\otimes T)^{i_1\ldots i_l i_{l+1}\ldots i_{l+n}}_{j_1\ldots j_k j_{k+1}\ldots j_{k+m}} =
S^{i_1\ldots i_l}_{j_1\ldots j_k} T^{i_{l+1}\ldots i_{l+n}}_{j_{k+1}\ldots j_{k+m}}.
If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m).
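On components the tensor product is an outer product: every component of S multiplies every component of T. A small NumPy sketch:

```python
import numpy as np

S = np.arange(2.0)                 # order-1 tensor, shape (2,)
T = np.arange(6.0).reshape(2, 3)   # order-2 tensor, shape (2, 3)

ST = np.tensordot(S, T, axes=0)    # outer product: no indices are contracted

# the order of the product is the sum of the orders: 1 + 2 = 3
assert ST.shape == (2, 2, 3)
# each component is a plain product of one component from each factor
assert ST[1, 0, 2] == S[1] * T[0, 2]
print(ST.ndim)  # 3
```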

Contraction

Tensor contraction is an operation that reduces the total order of a tensor by two. More precisely, it reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor. In terms of components, the operation is achieved by summing over one contravariant and one covariant index of the tensor. For example, a (1, 1)-tensor T_i^j can be contracted to a scalar through
T_i^i,
where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.

The contraction is often used in conjunction with the tensor product to contract an index from each tensor.
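Both points can be seen in components (a sketch): fully contracting a (1, 1)-tensor gives its trace, and a tensor product followed by a contraction reproduces matrix-vector multiplication.

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # components T^i_j
v = np.array([5.0, 6.0])     # components v^j

# full contraction of T over its two indices: the trace
assert np.einsum("ii->", T) == np.trace(T)

# the tensor product T (x) v has type (2, 1); contracting v's index against
# the covariant index of T yields the (1, 0)-tensor with components (T v)^i
Tv = np.einsum("ij,j->i", T, v)
print(Tv)  # [17. 39.]
```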

The contraction can also be understood in terms of the definition of a tensor as an element of a tensor product of copies of the space V with the space V* by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V* to a factor from V. For example, a tensor
T \in V\otimes V\otimes V^*
can be written as a linear combination
T=v_1\otimes w_1\otimes \alpha_1 + v_2\otimes w_2\otimes \alpha_2 +\cdots + v_N\otimes w_N\otimes \alpha_N.
The contraction of T on the first and last slots is then the vector
\alpha_1(v_1)w_1 + \alpha_2(v_2)w_2+\cdots+\alpha_N(v_N)w_N.
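In components, contracting the first and last slots of a tensor in V ⊗ V ⊗ V* means summing the first upper index against the lower index. A NumPy sketch with a random example tensor (purely illustrative) confirms this agrees with the slot-by-slot description:

```python
import numpy as np

# A tensor T in V ⊗ V ⊗ V* has components T^{ij}_k.  Contracting the
# first (upper) slot against the last (lower) slot yields the vector
# with components w^j = T^{ij}_i.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))

w = np.einsum('iji->j', T)          # sum the repeated index i

# The same contraction done "by hand", slot by slot:
w_manual = sum(T[i, :, i] for i in range(3))
assert np.allclose(w, w_manual)
```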

Raising or lowering an index

When a vector space is equipped with a nondegenerate bilinear form (or metric tensor, as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as before, but with a lower index in the position of the contracted upper index. This operation is quite graphically known as lowering an index.
Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
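Both operations can be demonstrated with the Minkowski metric as a concrete (and standard) choice of metric tensor; the 4-vector components below are arbitrary:

```python
import numpy as np

# Lowering an index: contract the (0, 2) metric g with the upper
# index of a vector.  With the Minkowski metric diag(1, -1, -1, -1),
# lowering flips the sign of the spatial components.
g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric tensor, type (0, 2)
g_inv = np.linalg.inv(g)               # inverse metric, type (2, 0)

v = np.array([2.0, 1.0, 0.0, 3.0])     # contravariant components v^b

v_lower = np.einsum('ab,b->a', g, v)        # v_a = g_ab v^b
v_back = np.einsum('ab,b->a', g_inv, v_lower)  # raise the index again

assert np.allclose(v_lower, [2.0, -1.0, 0.0, -3.0])
assert np.allclose(v_back, v)          # raising undoes lowering
```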

Applications

Continuum mechanics

Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor. The stress tensor and strain tensor are both second-order tensors, and are related in a general linear elastic material by a fourth-order elasticity tensor. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number.
Thus, 3 × 3 = 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of the solid is a whole mass of varying stress states, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.

If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.
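This linear dependence of the force on the surface orientation is Cauchy's relation, t_i = σ_ij n_j. A short NumPy sketch with a hypothetical stress state shows the traction need not be parallel to the normal:

```python
import numpy as np

# Cauchy's relation: the traction (force per unit area) t on a surface
# with unit normal n is a linear function of n, given by the stress
# tensor sigma:  t_i = sigma_ij n_j.  Values below are hypothetical.
sigma = np.array([[10.0,  2.0, 0.0],   # a symmetric stress state
                  [ 2.0,  5.0, 0.0],
                  [ 0.0,  0.0, 1.0]])

n = np.array([1.0, 0.0, 0.0])          # surface normal along x

t = sigma @ n                          # traction on that surface

assert np.allclose(t, [10.0, 2.0, 0.0])
# t is not parallel to n: it carries a shear component t[1] = 2.
```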

Other examples from physics

Common applications include:
  • the electromagnetic tensor (or Faraday tensor) in electromagnetism
  • the moment of inertia tensor, relating angular momentum to angular velocity in rigid-body mechanics
  • the permittivity and electric susceptibility tensors of anisotropic media
  • the stress–energy tensor in general relativity, used to represent momentum fluxes

Applications of tensors of order > 2

The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do, however, capture ideas important in science and engineering, as has been shown in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.

The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:
 \frac{P_i}{\varepsilon_0} = \sum_j  \chi^{(1)}_{ij} E_j  +  \sum_{jk} \chi_{ijk}^{(2)} E_j E_k + \sum_{jk\ell} \chi_{ijk\ell}^{(3)} E_j E_k E_\ell  + \cdots. \!
Here \chi^{(1)} is the linear susceptibility, \chi^{(2)} gives the Pockels effect and second harmonic generation, and \chi^{(3)} gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
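Each term of the expansion is a contraction of a higher-order susceptibility tensor with copies of E, which can be sketched in NumPy (the susceptibility values below are made up purely for illustration):

```python
import numpy as np

# Truncated susceptibility expansion:
#   P_i / eps0 = chi1_ij E_j + chi2_ijk E_j E_k
# chi1 is an order-2 tensor, chi2 an order-3 tensor; the values here
# are hypothetical, chosen only to show the index contractions.
chi1 = 0.5 * np.eye(3)                 # linear susceptibility
chi2 = np.zeros((3, 3, 3))
chi2[2, 0, 0] = 1e-2                   # a single nonzero chi(2) entry

E = np.array([1.0, 0.0, 0.0])          # applied field along x

P_over_eps0 = (np.einsum('ij,j->i', chi1, E)
               + np.einsum('ijk,j,k->i', chi2, E, E))

# The chi(2) term generates polarization along z from a field along x.
assert np.allclose(P_over_eps0, [0.5, 0.0, 1e-2])
```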

Generalizations

Tensors in infinite dimensions

The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces.[8] Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual.[9] Tensors thus live naturally on Banach manifolds.[10]

Tensor densities

The concept of a tensor field can be generalized by considering objects that transform differently. An object that transforms as an ordinary tensor field under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian of the inverse coordinate transformation to the w^{\text{th}} power, is called a tensor density with weight w.[11] Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R. Higher "weights" then just correspond to taking additional tensor products with this space in the range.
A special case is that of scalar densities. Scalar 1-densities are especially important because it makes sense to define their integral over a manifold. They appear, for instance, in the Einstein–Hilbert action in general relativity. The most common example of a scalar 1-density is the volume element, which in the presence of a metric tensor g is the square root of its determinant in coordinates, denoted \sqrt{\det g}. The metric tensor is a covariant tensor of order 2, and so its determinant scales by the square of the coordinate transition:
\det(g') = \left(\det\frac{\partial x}{\partial x'}\right)^2\det(g)
which is the transformation law for a scalar density of weight +2.
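This weight-+2 transformation law can be verified numerically: under a change of coordinates with Jacobian J, the components of a covariant order-2 tensor transform as g′ = JᵀgJ, so det(g′) = (det J)² det(g). The matrices below are random illustrations:

```python
import numpy as np

# Numerical check of the weight-+2 law for det(g).
rng = np.random.default_rng(1)
J = rng.standard_normal((3, 3))        # Jacobian dx/dx' (generically invertible)
A = rng.standard_normal((3, 3))
g = A @ A.T + 3 * np.eye(3)            # a symmetric positive-definite metric

g_prime = J.T @ g @ J                  # covariant order-2 transformation

assert np.isclose(np.linalg.det(g_prime),
                  np.linalg.det(J) ** 2 * np.linalg.det(g))
```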

More generally, any tensor density is the product of an ordinary tensor with a scalar density of the appropriate weight. In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles w times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values. Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. For more on the intrinsic meaning, see density on a manifold.

Spinors

When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame called the "spin" that incorporates this path dependence, and which turns out to have values of ±1. A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the spin.
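The sign picked up by a spinor under a full rotation can be seen in the standard SU(2) representation of a rotation about the z-axis, U(θ) = exp(−iθσ_z/2), computed here for the diagonal case:

```python
import numpy as np

# A spin-1/2 rotation about z by angle theta is represented in SU(2)
# by U(theta) = exp(-i theta sigma_z / 2).  A 2*pi rotation returns
# every ordinary tensor to itself, but maps a spinor to minus itself.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def U(theta):
    # sigma_z is diagonal, so the matrix exponential is just the
    # exponential of each diagonal entry
    return np.diag(np.exp(-1j * theta * np.diag(sigma_z) / 2))

assert np.allclose(U(2 * np.pi), -np.eye(2))   # spinor picks up -1
assert np.allclose(U(4 * np.pi), np.eye(2))    # 4*pi rotation is trivial
```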

History

The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century.[12] The word "tensor" itself was introduced in 1846 by William Rowan Hamilton[13] to describe something different from what is now meant by a tensor.[Note 6] The contemporary usage was introduced by Woldemar Voigt in 1898.[14]

Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented by Ricci in 1892.[15] It was made accessible to many mathematicians by the publication of Ricci and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications).[16]

In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann.[17] Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:
I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.
—Albert Einstein, The Italian Mathematicians of Relativity[18]
Tensors were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics.

From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem).[19] Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic.[20] Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.[21]

Vector (mathematics and physics)


From Wikipedia, the free encyclopedia
 

Many special instances of the general definition of vector as an element of a vector space are listed below.

Vectors

  • Euclidean vector, a geometric entity endowed with magnitude and direction as well as a positive-definite inner product; an element of a Euclidean vector space. In physics, euclidean vectors are used to represent physical quantities that have both magnitude and direction, such as force, in contrast to scalar quantities, which have no direction.
    • Vector product, or cross product, an operation on two vectors in a three-dimensional Euclidean space, producing a third three-dimensional Euclidean vector
    • Burgers vector, a vector that represents the magnitude and direction of the lattice distortion of dislocation in a crystal lattice
    • Laplace–Runge–Lenz vector, a vector used chiefly to describe the shape and orientation of the orbit of one astronomical body around another
    • Normal vector, or surface normal, a vector that is perpendicular to a (hyper)surface at a point
  • An element of a vector space
    • Basis vector, one of a set of vectors (a "basis") that, in linear combination, can represent every vector in a given vector space
    • Coordinate vector, in linear algebra, an explicit representation of an element of any abstract vector space
    • Row vector or column vector, a one-dimensional matrix often representing the solution of a system of linear equations
    • An element of the real coordinate space Rn
  • Random vector or multivariate random variable, in statistics, a set of real-valued random variables that may be correlated
  • Vector projection, also known as the vector resolute or vector component, a linear mapping producing a vector parallel to a second vector
  • The vector part of a quaternion, a mathematical entity which is one possible generalisation of a vector
  • Null vector, a vector whose magnitude is zero
  • Position vector, a vector representing the position of a point in an affine space in relation to a reference point
  • Displacement vector, a vector that specifies the change in position of a point relative to a previous position
  • Gradient vector, the vector giving the magnitude and direction of maximum increase of a scalar field
  • Poynting vector, in physics, a vector representing the energy flux density of an electromagnetic field
  • Wave vector, a vector representation of the local phase evolution of a wave
  • Tangent vector, an element of the tangent space of a curve, a surface or, more generally, a differential manifold at a given point.
  • Gyrovector, a hyperbolic geometry version of a vector
  • Axial vector, or pseudovector, a quantity that transforms like a vector under proper rotation but not generally under reflection
  • Darboux vector, the areal velocity vector of the Frenet frame of a space curve
  • Four-vector, in the theory of relativity, a vector in a four-dimensional real vector space called Minkowski space
  • Interval vector, in musical set theory, an array that expresses the intervallic content of a pitch-class set
  • P-vector, the tensor obtained by taking linear combinations of the wedge product of p tangent vectors
  • Probability vector, in statistics, a vector with non-negative entries that sum to one
  • Spin vector, or spinor, an element of a complex vector space introduced to expand the notion of spatial vector
  • Tuple, an ordered list of numbers, sometimes used to represent a vector
  • Unit vector, a vector in a normed vector space whose length is 1

Vector fields

Vector spaces

Manipulation of vectors, fields, and spaces

  • Vector bundle, a topological construction which makes precise the idea of a family of vector spaces parameterized by another space
  • Vector calculus, a branch of mathematics concerned with differentiation and integration of vector fields
    • Vector Analysis, a free, online book on vector calculus first published in 1901 by Edwin Bidwell Wilson
  • Vector decomposition, refers to decomposing a vector of Rn to several vectors, each linearly independent
  • Vector differential, or del, a vector differential operator represented by the nabla symbol: \nabla
  • Vector Laplacian, the vector Laplace operator, denoted by \nabla^2, a differential operator defined over a vector field
  • Vector notation, common notations used when working with vectors
  • Vector operator, a type of differential operator used in vector calculus
  • Vector product, or cross product, an operation on two vectors in a three-dimensional Euclidean space, producing a third three-dimensional Euclidean vector
  • Vector projection, also known as the vector resolute, a mapping of one vector onto another
  • Vector-valued function, a mathematical function that maps real numbers to vectors
  • Vectorization (mathematics), a linear transformation which converts a matrix into a column vector

Other uses in mathematics and physics

  • Vector autoregression, an econometric model used to capture the evolution and the interdependencies between multiple time series
  • Vector boson, a boson with the spin quantum number equal to 1
  • Vector measure, a function defined on a family of sets and taking vector values satisfying certain properties
  • Vector meson, a meson with total spin 1 and odd parity
  • Vector quantization, a quantization technique used in signal processing
  • Vector soliton, a solitary wave with multiple components coupled together that maintains its shape during propagation
  • Vector synthesis, a type of audio synthesis
  • Witt vector, an infinite sequence of elements of a commutative ring
