
Wednesday, October 28, 2015

Dipole -- More on what causes a greenhouse gas


From Wikipedia, the free encyclopedia


The Earth's magnetic field, approximated as a magnetic dipole. However, the "N" and "S" (north and south) poles are labeled here geographically, which is the opposite of the convention for labeling the poles of a magnetic dipole moment.

In physics, there are several kinds of dipole:
  • An electric dipole is a separation of positive and negative charges. The simplest example of this is a pair of electric charges of equal magnitude but opposite sign, separated by some (usually small) distance. A permanent electric dipole is called an electret.
  • A magnetic dipole is a closed circulation of electric current. A simple example of this is a single loop of wire with some constant current through it.[1][2]
  • A current dipole is a current from a sink of current to a source of current within a (usually conducting) medium. Current dipoles are often used to model neuronal sources of electromagnetic fields that can be measured using Magnetoencephalography or Electroencephalography.
Dipoles can be characterized by their dipole moment, a vector quantity. For the simple electric dipole given above, the electric dipole moment points from the negative charge towards the positive charge, and has a magnitude equal to the strength of each charge times the separation between the charges. (To be precise: for the definition of the dipole moment, one should always consider the "dipole limit", where, e.g., the separation of the generating charges converges to 0 while the charge strength diverges to infinity, in such a way that the product remains a positive constant.)

For the current loop, the magnetic dipole moment points through the loop (according to the right hand grip rule), with a magnitude equal to the current in the loop times the area of the loop.
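As a quick numerical sketch of both definitions (the charge, separation, current, and loop-area values below are illustrative assumptions, not values from the article; Python is used only for illustration):

q = 1.602e-19   # charge magnitude in coulombs (one elementary charge)
d = 1.0e-10     # charge separation in metres (about one angstrom)
p = q * d       # electric dipole moment, C·m
print(p)        # ~1.6e-29 C·m, about 4.8 debye (1 D is about 3.336e-30 C·m)

I_loop = 2.0    # loop current in amperes (illustrative)
A = 1.0e-4      # loop area in square metres (illustrative)
m = I_loop * A  # magnetic dipole moment, A·m^2
print(m)        # 2e-4 A·m^2, directed along the loop normal (right-hand grip rule)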

In addition to current loops, the electron, among other fundamental particles, has a magnetic dipole moment. This is because it generates a magnetic field that is identical to that generated by a very small current loop. However, the electron's magnetic moment is not due to a current loop, but is instead an intrinsic property of the electron.[3] It is also possible that the electron has an electric dipole moment, although this has not yet been observed (see electron electric dipole moment for more information).


Contour plot of the electrostatic potential of a horizontally oriented electrical dipole of finite size. Strong colors indicate highest and lowest potential (where the opposing charges of the dipole are located).

A permanent magnet, such as a bar magnet, owes its magnetism to the intrinsic magnetic dipole moment of the electron. The two ends of a bar magnet are referred to as poles (not to be confused with monopoles), and may be labeled "north" and "south". In terms of the Earth's magnetic field, these are respectively "north-seeking" and "south-seeking" poles, that is if the magnet were freely suspended in the Earth's magnetic field, the north-seeking pole would point towards the north and the south-seeking pole would point towards the south. The dipole moment of the bar magnet points from its magnetic south to its magnetic north pole. The north pole of a bar magnet in a compass points north. However, this means that Earth's geomagnetic north pole is the south pole (south-seeking pole) of its dipole moment, and vice versa.

The only known mechanisms for the creation of magnetic dipoles are by current loops or quantum-mechanical spin since the existence of magnetic monopoles has never been experimentally demonstrated.

The term comes from the Greek δίς (dis), "twice"[4] and πόλος (pòlos), "axis".[5][6]

Classification


Electric field lines of two opposing charges separated by a finite distance.

Magnetic field lines of a ring current of finite diameter.

Field lines of a point dipole of any type, electric, magnetic, acoustic, …

A physical dipole consists of two equal and opposite point charges: in the literal sense, two poles. Its field at large distances (i.e., distances large in comparison to the separation of the poles) depends almost entirely on the dipole moment as defined above. A point (electric) dipole is the limit obtained by letting the separation tend to 0 while keeping the dipole moment fixed. The field of a point dipole has a particularly simple form, and the order-1 term in the multipole expansion is precisely the point dipole field.

Although there are no known magnetic monopoles in nature, there are magnetic dipoles in the form of the quantum-mechanical spin associated with particles such as electrons (although the accurate description of such effects falls outside of classical electromagnetism). A theoretical magnetic point dipole has a magnetic field of exactly the same form as the electric field of an electric point dipole. A very small current-carrying loop is approximately a magnetic point dipole; the magnetic dipole moment of such a loop is the product of the current flowing in the loop and the (vector) area of the loop.

Any configuration of charges or currents has a 'dipole moment', which describes the dipole whose field is the best approximation, at large distances, to that of the given configuration. This is simply one term in the multipole expansion when the total charge ("monopole moment") is 0 — as it always is for the magnetic case, since there are no magnetic monopoles. The dipole term is the dominant one at large distances: Its field falls off in proportion to 1/r^3, as compared to 1/r^4 for the next (quadrupole) term and higher powers of 1/r for higher terms, or 1/r^2 for the monopole term.

Molecular dipoles

Many molecules have dipole moments due to non-uniform distributions of positive and negative charges on the various atoms. Such is the case with polar compounds like hydrogen fluoride (HF), where electron density is shared unequally between atoms. A molecule's dipole is therefore an electric dipole with an inherent electric field, which should not be confused with a magnetic dipole, which generates a magnetic field.

The physical chemist Peter J. W. Debye was the first scientist to study molecular dipoles extensively, and, as a consequence, dipole moments are measured in units named debye in his honor.

For molecules there are three types of dipoles:
  • Permanent dipoles: These occur when two atoms in a molecule have substantially different electronegativity: One atom attracts electrons more than another, becoming more negative, while the other atom becomes more positive. A molecule with a permanent dipole moment is called a polar molecule. See dipole-dipole attractions.
  • Instantaneous dipoles: These occur due to chance when electrons happen to be more concentrated in one place than another in a molecule, creating a temporary dipole. See instantaneous dipole.
  • Induced dipoles: These can occur when one molecule with a permanent dipole repels another molecule's electrons, inducing a dipole moment in that molecule. A molecule is polarized when it carries an induced dipole. See induced-dipole attraction.
More generally, an induced dipole of any polarizable charge distribution ρ (remember that a molecule has a charge distribution) is caused by an electric field external to ρ. This field may, for instance, originate from an ion or polar molecule in the vicinity of ρ, or may be macroscopic (e.g., a molecule between the plates of a charged capacitor). The size of the induced dipole is equal to the product of the strength of the external field and the dipole polarizability of ρ.
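A minimal numerical sketch of that last relation; both numbers below are illustrative assumptions, with the polarizability chosen only to be of a magnitude typical of a small molecule:

alpha = 1.6e-40    # assumed dipole polarizability, C·m^2/V (illustrative magnitude)
E_ext = 1.0e8      # assumed external field strength, V/m
p_induced = alpha * E_ext   # induced dipole: product of field and polarizability
print(p_induced)   # 1.6e-32 C·m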

Dipole moment values can be obtained from measurement of the dielectric constant. Some typical gas phase values in debye units are:[7]
  • The linear molecule CO2 has a zero dipole, as the two bond dipoles cancel.
  • KBr has one of the highest dipole moments because it is a very ionic molecule (which exists as a molecule only in the gas phase).
  • The bent molecule H2O has a net dipole; the two bond dipoles do not cancel.

The overall dipole moment of a molecule may be approximated as a vector sum of bond dipole moments. As a vector sum it depends on the relative orientation of the bonds, so that information about the molecular geometry can be deduced from the dipole moment. For example, the zero dipole of CO2 implies that the two C=O bond dipole moments cancel, so the molecule must be linear. For H2O the O-H bond moments do not cancel because the molecule is bent. For ozone (O3), which is also a bent molecule, the bond dipole moments are not zero even though the O-O bonds are between similar atoms. This agrees with the Lewis structures for the resonance forms of ozone, which show a positive charge on the central oxygen atom. A numerical sketch of the vector sum is given after the figure below.
Resonance Lewis structures of the ozone molecule
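The vector-sum rule can be made concrete with a short sketch for water, assuming the H-O-H angle of about 104.5° and an O-H bond moment of about 1.5 D (the bond-moment value is an approximate, assumed input):

import math

angle = math.radians(104.5)  # H-O-H bond angle of water
bond_moment = 1.5            # assumed O-H bond dipole magnitude, debye

# Each bond moment makes an angle of angle/2 with the symmetry axis;
# components perpendicular to the axis cancel, axial components add.
net_h2o = 2 * bond_moment * math.cos(angle / 2)
print(net_h2o)   # ~1.84 D, nonzero because the molecule is bent

# For linear CO2 the two bond vectors are antiparallel and cancel exactly:
net_co2 = bond_moment - bond_moment
print(net_co2)   # 0.0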
An example in organic chemistry of the role of geometry in determining dipole moment is the cis and trans isomers of 1,2-dichloroethene. In the cis isomer the two polar C-Cl bonds are on the same side of the C=C double bond and the molecular dipole moment is 1.90 D. In the trans isomer, the dipole moment is zero because the two C-Cl bonds are on opposite sides of the C=C and cancel (and the two bond moments for the much less polar C-H bonds also cancel).


Cis isomer, dipole moment 1.90 D

Trans isomer, dipole moment zero

Another example of the role of molecular geometry is boron trifluoride, which has three polar bonds with a difference in electronegativity greater than the traditionally cited threshold of 1.7 for ionic bonding. However, due to the equilateral triangular distribution of the fluoride ions about the boron cation center, the molecule as a whole does not exhibit any identifiable pole: one cannot construct a plane that divides the molecule into a net negative part and a net positive part.

Quantum mechanical dipole operator

Consider a collection of N particles with charges qi and position vectors ri. For instance, this collection may be a molecule consisting of electrons, all with charge −e, and nuclei with charge eZi, where Zi is the atomic number of the i th nucleus. The dipole observable (physical quantity) has the quantum mechanical dipole operator:
\mathfrak{p} = \sum_{i=1}^N \, q_i \, \mathbf{r}_i \, .
Notice that this definition is valid only for neutral systems, i.e., total charge equal to zero. For a charged system, the dipole moment is instead defined relative to a reference point:
\mathfrak{p} = \sum_{i=1}^N \, q_i \, (\mathbf{r}_i - \mathbf{r}_c) \, .
where  \mathbf{r}_c is the center of mass of the molecule/group of particles.[8]
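Evaluated over classical point charges, the same sums are a few lines of code. A sketch with a hypothetical HF-like pair of partial charges (the geometry and the ±0.4 e partial charges are made-up inputs, chosen only for illustration):

import numpy as np

positions = np.array([[0.0, 0.0, 0.0],         # "F" site
                      [0.0, 0.0, 0.92e-10]])   # "H" site, ~0.92 angstrom away, metres
charges = np.array([-0.4, 0.4]) * 1.602e-19    # hypothetical partial charges, coulombs

# Neutral system: the dipole is independent of the origin.
p = (charges[:, None] * positions).sum(axis=0)
print(p)   # dipole vector in C·m, pointing from the negative to the positive site

# Charged system: refer the sum to a reference point such as the center of mass.
r_c = positions.mean(axis=0)   # stand-in for the true mass-weighted center
p_ref = (charges[:, None] * (positions - r_c)).sum(axis=0)
print(p_ref)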

Atomic dipoles

A non-degenerate (S-state) atom can have only a zero permanent dipole. This fact follows quantum mechanically from the inversion symmetry of atoms. All 3 components of the dipole operator are antisymmetric under inversion with respect to the nucleus,
  \mathfrak{I} \;\mathfrak{p}\;  \mathfrak{I}^{-1} = - \mathfrak{p},
where \mathfrak{p} is the dipole operator and \mathfrak{I} is the inversion operator. The permanent dipole moment of an atom in a non-degenerate state (see degenerate energy level) is given as the expectation (average) value of the dipole operator,

\langle \mathfrak{p} \rangle = \langle\, S\, | \mathfrak{p} |\, S \,\rangle,
where  |\, S\, \rangle is an S-state, non-degenerate, wavefunction, which is symmetric or antisymmetric under inversion:   \mathfrak{I}\,|\, S\, \rangle= \pm |\, S\, \rangle. Since the product of the wavefunction (in the ket) and its complex conjugate (in the bra) is always symmetric under inversion and its inverse,

\langle \mathfrak{p} \rangle = \langle\,  \mathfrak{I}^{-1}\, S\, | \mathfrak{p} |\, \mathfrak{I}^{-1}\, S \,\rangle
 = \langle\,  S\, |  \mathfrak{I}\, \mathfrak{p} \, \mathfrak{I}^{-1}| \, S \,\rangle = -\langle \mathfrak{p} \rangle
it follows that the expectation value changes sign under inversion. We used here the fact that  \mathfrak{I}\,, being a symmetry operator, is unitary:  \mathfrak{I}^{-1} =  \mathfrak{I}^{*}\,, and by definition the Hermitian adjoint  \mathfrak{I}^*\, may be moved from bra to ket and then becomes  \mathfrak{I}^{**} =  \mathfrak{I}\,. Since the only quantity that is equal to minus itself is zero, the expectation value vanishes,

\langle \mathfrak{p}\rangle = 0.
In the case of open-shell atoms with degenerate energy levels, one could define a dipole moment by the aid of the first-order Stark effect. This gives a non-vanishing dipole (by definition proportional to a non-vanishing first-order Stark shift) only if some of the wavefunctions belonging to the degenerate energies have opposite parity; i.e., have different behavior under inversion. This is a rare occurrence, but happens for the excited H-atom, where 2s and 2p states are "accidentally" degenerate (see article Laplace–Runge–Lenz vector for the origin of this degeneracy) and have opposite parity (2s is even and 2p is odd).

Field of a static magnetic dipole

Magnitude

The far-field strength, B, of a dipole magnetic field is given by

B(m, r, \lambda) = \frac {\mu_0} {4\pi} \frac {m} {r^3} \sqrt {1+3\sin^2\lambda} \, ,
where
B is the strength of the field, measured in teslas
r is the distance from the center, measured in metres
λ is the magnetic latitude (equal to 90° − θ) where θ is the magnetic colatitude, measured in radians or degrees from the dipole axis[note 1]
m is the dipole moment (VADM=virtual axial dipole moment), measured in ampere square-metres (A·m2), which equals joules per tesla
μ0 is the permeability of free space, measured in henries per metre.
Conversion to cylindrical coordinates is achieved using r^2 = z^2 + ρ^2 and
\lambda = \arcsin\left(\frac{z}{\sqrt{z^2+\rho^2}}\right)
where ρ is the perpendicular distance from the z-axis. Then,
B(\rho,z) = \frac{\mu_0 m}{4 \pi (z^2+\rho^2)^{3/2}} \sqrt{1+\frac{3 z^2}{z^2 + \rho^2}}
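These formulas transcribe directly into code. A sketch, using an Earth-like dipole moment of about 8e22 A·m^2 purely as an illustrative input:

import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def dipole_field_magnitude(m, r, lam):
    """Far-field strength B in teslas at distance r (m) and magnetic latitude lam (rad)."""
    return (MU0 / (4 * math.pi)) * (m / r**3) * math.sqrt(1 + 3 * math.sin(lam)**2)

m_dip = 8.0e22       # approximate Earth-like dipole moment, A·m^2 (illustrative)
r_surf = 6.371e6     # mean Earth radius, m
print(dipole_field_magnitude(m_dip, r_surf, 0.0))           # equator: ~3.1e-5 T
print(dipole_field_magnitude(m_dip, r_surf, math.pi / 2))   # pole: twice the equatorial value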

Vector form

The field itself is a vector quantity:
\mathbf{B}(\mathbf{m}, \mathbf{r}) = \frac {\mu_0} {4\pi} \left(\frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}}-\mathbf{m}}{r^3}\right) + \frac{2\mu_0}{3}\mathbf{m}\delta^3(\mathbf{r})
where
B is the field
r is the vector from the position of the dipole to the position where the field is being measured
r is the absolute value of r: the distance from the dipole
\hat{\mathbf{r}} = \mathbf{r}/r is the unit vector parallel to r;
m is the (vector) dipole moment
μ0 is the permeability of free space
δ3 is the three-dimensional delta function.[note 2]
This is exactly the field of a point dipole, exactly the dipole term in the multipole expansion of an arbitrary field, and approximately the field of any dipole-like configuration at large distances.
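Away from the origin the delta-function term vanishes, and the remaining vector expression is a one-liner. A sketch:

import numpy as np

MU0 = 4 * np.pi * 1e-7   # permeability of free space, H/m

def dipole_B(m, r):
    """Field of a point magnetic dipole m (A·m^2) at displacement r (m), valid for r != 0.
    Omits the delta-function term, which contributes only at the origin."""
    r_mag = np.linalg.norm(r)
    r_hat = r / r_mag
    return (MU0 / (4 * np.pi)) * (3 * np.dot(m, r_hat) * r_hat - m) / r_mag**3

m = np.array([0.0, 0.0, 1.0])                    # unit dipole along z
print(dipole_B(m, np.array([0.0, 0.0, 1.0])))    # on-axis: parallel to m, 2e-7 T
print(dipole_B(m, np.array([1.0, 0.0, 0.0])))    # equatorial: antiparallel, half as strong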

Magnetic vector potential

The vector potential A of a magnetic dipole is
\mathbf{A}(\mathbf{r}) = \frac {\mu_0} {4\pi} \frac{\mathbf{m}\times\hat{\mathbf{r}}}{r^2}
with the same definitions as above.

Field from an electric dipole

The electrostatic potential at position r due to an electric dipole at the origin is given by:
 \Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\,\frac{\mathbf{p}\cdot\hat{\mathbf{r}}}{r^2}
where
\hat{\mathbf{r}} is a unit vector in the direction of r, p is the (vector) dipole moment, and ε0 is the permittivity of free space.
This term appears as the second term in the multipole expansion of an arbitrary electrostatic potential Φ(r). If the source of Φ(r) is a dipole, as it is assumed here, this term is the only non-vanishing term in the multipole expansion of Φ(r). The electric field from a dipole can be found from the gradient of this potential:
 \mathbf{E} = - \nabla \Phi =\frac {1} {4\pi\epsilon_0} \left(\frac{3(\mathbf{p}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}}-\mathbf{p}}{r^3}\right) - \frac{1}{3\epsilon_0}\mathbf{p}\delta^3(\mathbf{r})
where E is the electric field and δ3 is the 3-dimensional delta function.[note 2] This is formally identical to the magnetic H field of a point magnetic dipole with only a few names changed.

Torque on a dipole

Since the direction of an electric field is defined as the direction of the force on a positive charge, electric field lines point away from a positive charge and toward a negative charge.

When placed in an electric or magnetic field, equal but opposite forces arise on each side of the dipole, creating a torque τ:
 \boldsymbol{\tau} = \mathbf{p} \times \mathbf{E}
for an electric dipole moment p (in coulomb-meters), or
 \boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}
for a magnetic dipole moment m (in ampere-square meters).

The resulting torque will tend to align the dipole with the applied field, which in the case of an electric dipole, yields a potential energy of
 U = -\mathbf{p} \cdot \mathbf{E}.
The energy of a magnetic dipole is similarly
 U = -\mathbf{m} \cdot \mathbf{B}.
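Both the torque and the energy are single vector operations. A sketch with illustrative values (a roughly 1-debye dipole in a 1e5 V/m field; both numbers are assumptions):

import numpy as np

p = np.array([0.0, 0.0, 3.336e-30])   # electric dipole of ~1 debye along z, C·m
E = np.array([1.0e5, 0.0, 0.0])       # applied field along x, V/m (illustrative)

torque = np.cross(p, E)   # N·m; acts to rotate p toward E
energy = -np.dot(p, E)    # J; zero here because p is perpendicular to E
print(torque, energy)

p_aligned = np.array([3.336e-30, 0.0, 0.0])   # same dipole, now along the field
print(np.cross(p_aligned, E), -np.dot(p_aligned, E))   # zero torque, minimum energy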

Dipole radiation


Evolution of the magnetic field of an oscillating electric dipole. The field lines, which are horizontal rings around the axis of the vertically oriented dipole, are perpendicularly crossing the x-y-plane of the image. Shown as a colored contour plot is the z-component of the field. Cyan is zero magnitude, green–yellow–red and blue–pink–red are increasing strengths in opposing directions.

In addition to dipoles in electrostatics, it is also common to consider an electric or magnetic dipole that is oscillating in time. It is an extension of, and a more physical next step beyond, idealized spherical wave radiation.
In particular, consider a harmonically oscillating electric dipole, with angular frequency ω and a dipole moment  p_0 along the  \hat{z} direction of the form
\mathbf{p}(\mathbf{r},t)=\mathbf{p}(\mathbf{r})e^{-i\omega t}  = p_0\hat{\mathbf{z}}e^{-i\omega t} .
In vacuum, the exact field produced by this oscillating dipole can be derived using the retarded potential formulation as:

\mathbf{E} = \frac{1}{4\pi\varepsilon_0} \left\{ \frac{\omega^2}{c^2 r}
( \hat{\mathbf{r}} \times \mathbf{p} ) \times \hat{\mathbf{r}}
+ \left( \frac{1}{r^3} - \frac{i\omega}{cr^2} \right) \left[ 3 \hat{\mathbf{r}} (\hat{\mathbf{r}} \cdot \mathbf{p}) - \mathbf{p} \right]  \right\} e^{i\omega r/c} e^{-i\omega t}
\mathbf{B} = \frac{\omega^2}{4\pi\varepsilon_0 c^3} \hat{\mathbf{r}} \times \mathbf{p} \left( 1 - \frac{c}{i\omega r} \right) \frac{e^{i\omega r/c}}{r} e^{-i\omega t}.

For \scriptstyle r \omega /c \gg 1, the far-field takes the simpler form of a radiating "spherical" wave, but with angular dependence embedded in the cross-product:[9]
\mathbf{B} = \frac{\omega^2}{4\pi\varepsilon_0 c^3} (\hat{\mathbf{r}} \times \mathbf{p}) \frac{e^{i\omega (r/c-t)}}{r}
 = \frac{\omega^2 \mu_0 p_0 }{4\pi  c} (\hat{\mathbf{r}} \times \hat{\mathbf{z}}) \frac{e^{i\omega (r/c-t)}}{r}
 = -\frac{\omega^2 \mu_0 p_0 }{4\pi c} \sin\theta \frac{e^{i\omega (r/c-t)}}{r} \mathbf{\hat{\phi} }
\mathbf{E} = c \mathbf{B} \times \hat{\mathbf{r}}
= -\frac{\omega^2 \mu_0 p_0 }{4\pi} \sin\theta (\hat{\phi} \times \mathbf{\hat{r} } )\frac{e^{i\omega (r/c-t)}}{r}
= -\frac{\omega^2 \mu_0 p_0 }{4\pi} \sin\theta \frac{e^{i\omega (r/c-t)}}{r} \hat{\theta}.
The time-averaged Poynting vector

 \langle \mathbf{S} \rangle = \bigg(\frac{\mu_0p_0^2\omega^4}{32\pi^2 c}\bigg) \frac{\sin^2\theta}{r^2} \mathbf{\hat{r}}

is not distributed isotropically, but concentrated around the directions lying perpendicular to the dipole moment, as a result of the non-spherical electric and magnetic waves. In fact, the spherical harmonic function ( \sin\theta ) responsible for such "donut-shaped" angular distribution is precisely the  l=1 "p" wave.

The total time-average power radiated by the field can then be derived from the Poynting vector as
P = \frac{\mu_0 \omega^4 p_0^2}{12\pi c}.
Notice that the dependence of the power on the fourth power of the frequency of the radiation is in accordance with Rayleigh scattering, and underlies why the sky appears predominantly blue.
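The ω^4 scaling is easy to check numerically. A sketch, with an arbitrary dipole amplitude:

import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
C = 2.998e8                # speed of light, m/s

def radiated_power(p0, omega):
    """Time-averaged power P = mu0 * omega^4 * p0^2 / (12 * pi * c), in watts."""
    return MU0 * omega**4 * p0**2 / (12 * math.pi * C)

p0 = 1.0e-29   # illustrative dipole amplitude, C·m
# Doubling the frequency multiplies the power by 2^4 = 16, the scaling
# behind Rayleigh scattering and the predominance of blue in the sky.
print(radiated_power(p0, 2.0e15) / radiated_power(p0, 1.0e15))   # 16.0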

A circularly polarized dipole is described as a superposition of two linear dipoles.

Molecular vibration -- What makes a greenhouse gas


From Wikipedia, the free encyclopedia

A molecular vibration occurs when atoms in a molecule are in periodic motion while the molecule as a whole has constant translational and rotational motion. The frequency of the periodic motion is known as a vibration frequency, and the typical frequencies of molecular vibrations range from less than 10^12 to approximately 10^14 Hz.

In general, a molecule with N atoms has 3N – 6 normal modes of vibration, but a linear molecule has 3N – 5 such modes, as rotation about its molecular axis cannot be observed.[1] A diatomic molecule has one normal mode of vibration. The normal modes of vibration of polyatomic molecules are independent of each other but each normal mode will involve simultaneous vibrations of different parts of the molecule such as different chemical bonds.
A molecular vibration is excited when the molecule absorbs a quantum of energy, E, corresponding to the vibration's frequency, ν, according to the relation E = hν (where h is Planck's constant). A fundamental vibration is excited when one such quantum of energy is absorbed by the molecule in its ground state. When two quanta are absorbed the first overtone is excited, and so on to higher overtones.

To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental. Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, as the potential energy of the molecule is more like a Morse potential.

The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly. The two techniques are complementary and comparison between the two can provide useful structural information such as in the case of the rule of mutual exclusion for centrosymmetric molecules.

Vibrational excitation can occur in conjunction with electronic excitation (vibronic transition), giving vibrational fine structure to electronic transitions, particularly with molecules in the gas state.
Simultaneous excitation of a vibration and rotations gives rise to vibration-rotation spectra.

Vibrational coordinates

The coordinate of a normal vibration is a combination of changes in the positions of atoms in the molecule. When the vibration is excited the coordinate changes sinusoidally with a frequency ν, the frequency of the vibration.

Internal coordinates

Internal coordinates are of the following types, illustrated with reference to the planar molecule ethylene:
Ethylene
  • Stretching: a change in the length of a bond, such as C-H or C-C
  • Bending: a change in the angle between two bonds, such as the HCH angle in a methylene group
  • Rocking: a change in angle between a group of atoms, such as a methylene group and the rest of the molecule.
  • Wagging: a change in angle between the plane of a group of atoms, such as a methylene group, and a plane through the rest of the molecule.
  • Twisting: a change in the angle between the planes of two groups of atoms, such as a change in the angle between the two methylene groups.
  • Out-of-plane: a change in the angle between any one of the C-H bonds and the plane defined by the remaining atoms of the ethylene molecule. Another example is in BF3 when the boron atom moves in and out of the plane of the three fluorine atoms.
In a rocking, wagging or twisting coordinate the bond lengths within the groups involved do not change. The angles do. Rocking is distinguished from wagging by the fact that the atoms in the group stay in the same plane.

In ethene there are 12 internal coordinates: 4 C-H stretching, 1 C-C stretching, 2 H-C-H bending, 2 CH2 rocking, 2 CH2 wagging, 1 twisting. Note that the H-C-C angles cannot be used as internal coordinates as the angles at each carbon atom cannot all increase at the same time.

Vibrations of a methylene group (-CH2-) in a molecule for illustration

The atoms in a CH2 group, commonly found in organic compounds, can vibrate in six different ways: symmetric and asymmetric stretching, scissoring, rocking, wagging and twisting as shown here:

[Animations of the six CH2 vibrational modes: symmetrical stretching, asymmetrical stretching, scissoring (bending), rocking, wagging, and twisting.]

(These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms).

Symmetry-adapted coordinates

Symmetry-adapted coordinates may be created by applying a projection operator to a set of internal coordinates.[2] The projection operator is constructed with the aid of the character table of the molecular point group. For example, the four (un-normalised) C-H stretching coordinates of the molecule ethene are given by
Q_{s1} =  q_{1} + q_{2} + q_{3} + q_{4}\!
Q_{s2} =  q_{1} + q_{2} - q_{3} - q_{4}\!
Q_{s3} =  q_{1} - q_{2} + q_{3} - q_{4}\!
Q_{s4} =  q_{1} - q_{2} - q_{3} + q_{4}\!
where q_1, q_2, q_3 and q_4 are the internal coordinates for stretching of each of the four C-H bonds.
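The four combinations are just sign patterns applied to the bond-stretch coordinates, so they can be generated with a small matrix product. A sketch (the displacement values are arbitrary):

import numpy as np

# Sign patterns of the four symmetry-adapted C-H stretches listed above.
signs = np.array([[1,  1,  1,  1],
                  [1,  1, -1, -1],
                  [1, -1,  1, -1],
                  [1, -1, -1,  1]])

q = np.array([0.01, -0.02, 0.03, 0.00])   # arbitrary C-H stretch displacements
Q = signs @ q                             # un-normalised Q_s1 ... Q_s4
print(Q)

# The sign patterns are mutually orthogonal, as symmetry-adapted sets must be:
print(signs @ signs.T)   # 4 times the identity matrix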

Illustrations of symmetry-adapted coordinates for most small molecules can be found in Nakamoto.[3]

Normal coordinates

The normal coordinates, denoted as Q, refer to the positions of atoms away from their equilibrium positions, with respect to a normal mode of vibration. Each normal mode is assigned a single normal coordinate, and so the normal coordinate refers to the "progress" along that normal mode at any given time. Formally, normal modes are determined by solving a secular determinant, and then the normal coordinates (over the normal modes) can be expressed as a summation over the cartesian coordinates (over the atom positions). The advantage of working in normal modes is that they diagonalize the matrix governing the molecular vibrations, so each normal mode is an independent molecular vibration, associated with its own spectrum of quantum mechanical states. If the molecule possesses symmetries, it will belong to a point group, and the normal modes will "transform as" an irreducible representation under that group. The normal modes can then be qualitatively determined by applying group theory and projecting the irreducible representation onto the cartesian coordinates. For example, when this treatment is applied to CO2, it is found that the C=O stretches are not independent, but rather there is an O=C=O symmetric stretch and an O=C=O asymmetric stretch.
  • symmetric stretching: the sum of the two C-O stretching coordinates; the two C-O bond lengths change by the same amount and the carbon atom is stationary. Q = q1 + q2
  • asymmetric stretching: the difference of the two C-O stretching coordinates; one C-O bond length increases while the other decreases. Q = q1 - q2
When two or more normal coordinates belong to the same irreducible representation of the molecular point group (colloquially, have the same symmetry) there is "mixing" and the coefficients of the combination cannot be determined a priori. For example, in the linear molecule hydrogen cyanide, HCN, the two stretching vibrations are
  1. principally C-H stretching with a little C-N stretching; Q1 = q1 + a q2 (a << 1)
  2. principally C-N stretching with a little C-H stretching; Q2 = b q1 + q2 (b << 1)
The coefficients a and b are found by performing a full normal coordinate analysis by means of the Wilson GF method.[4]

Newtonian mechanics


The HCl molecule as an anharmonic oscillator vibrating at energy level E3. Here D0 is the dissociation energy, r0 the bond length, and U the potential energy. Energy is expressed in wavenumbers. The hydrogen chloride molecule is attached to the coordinate system to show bond length changes on the curve.

Perhaps surprisingly, molecular vibrations can be treated using Newtonian mechanics to calculate the correct vibration frequencies. The basic assumption is that each vibration can be treated as though it corresponds to a spring. In the harmonic approximation the spring obeys Hooke's law: the force required to extend the spring is proportional to the extension. The proportionality constant is known as a force constant, k. The anharmonic oscillator is considered elsewhere.[5]
\mathrm{Force}=- k Q \!
By Newton’s second law of motion this force is also equal to a reduced mass, μ, times acceleration.
 \mathrm{Force} = \mu \frac{d^2Q}{dt^2}
Since this is one and the same force, the ordinary differential equation follows.
\mu \frac{d^2Q}{dt^2} + k Q = 0
The solution to this equation of simple harmonic motion is
Q(t) =  A \cos (2 \pi \nu  t) ;\ \  \nu =   {1\over {2 \pi}} \sqrt{k \over \mu}. \!
A is the maximum amplitude of the vibration coordinate Q. It remains to define the reduced mass, μ. In general, the reduced mass of a diatomic molecule, AB, is expressed in terms of the atomic masses, mA and mB, as
\frac{1}{\mu} = \frac{1}{m_A}+\frac{1}{m_B}.
The use of the reduced mass ensures that the centre of mass of the molecule is not affected by the vibration. In the harmonic approximation the potential energy of the molecule is a quadratic function of the normal coordinate. It follows that the force-constant is equal to the second derivative of the potential energy.
k=\frac{\partial ^2V}{\partial Q^2}
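Putting these formulas together for a diatomic molecule: a sketch estimating the HCl stretching frequency, assuming a force constant of roughly 480 N/m (an approximate literature-style value, used here only as an input):

import math

AMU = 1.6605e-27    # kg per atomic mass unit
C_CM = 2.998e10     # speed of light in cm/s, to convert Hz to wavenumbers

m_H, m_Cl = 1.008 * AMU, 34.97 * AMU
mu = 1.0 / (1.0 / m_H + 1.0 / m_Cl)    # reduced mass of HCl, kg

k = 480.0                                  # assumed force constant, N/m
nu = math.sqrt(k / mu) / (2.0 * math.pi)   # vibration frequency, Hz
print(nu)          # ~8.6e13 Hz
print(nu / C_CM)   # ~2900 cm^-1, near the observed HCl fundamental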
When two or more normal vibrations have the same symmetry a full normal coordinate analysis must be performed (see GF method). The vibration frequencies, νi, are obtained from the eigenvalues, λi, of the matrix product GF. G is a matrix of numbers derived from the masses of the atoms and the geometry of the molecule.[4] F is a matrix derived from force-constant values. Details concerning the determination of the eigenvalues can be found in reference [6].

Quantum mechanics

In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by
E_n = h \left( n + {1 \over 2 } \right)\nu=h\left( n + {1 \over 2 } \right) {1\over {2 \pi}} \sqrt{k \over m} \!,
where n is a quantum number that can take values of 0, 1, 2 ... In molecular spectroscopy where several types of molecular energy are studied and several quantum numbers are used, this vibrational quantum number is often designated as v.[7][8]

The difference in energy when n (or v) changes by 1 is therefore equal to h\nu, the product of the Planck constant and the vibration frequency derived using classical mechanics. For a transition from level n to level n+1 due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency \nu (in the harmonic oscillator approximation).

See quantum harmonic oscillator for graphs of the first 5 wave functions, which allow certain selection rules to be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number n changes by one,
\Delta n = \pm 1
but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states n=2 and n=1 have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band.

Intensities

In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate.[9] The intensity of Raman bands depends on polarizability.

Monday, October 26, 2015

Uncertainty

From Wikipedia, the free encyclopedia
 

We are frequently presented with situations wherein a decision must be made when we are uncertain of exactly how to proceed.

Uncertainty is a situation that involves imperfect or unknown information. The term is used in subtly different ways in a number of fields, including insurance, philosophy, physics, statistics, economics, finance, psychology, sociology, engineering, metrology, and information science. It applies to predictions of future events, to physical measurements that have already been made, or to the unknown. Uncertainty arises in partially observable and/or stochastic environments, as well as from ignorance or indolence.[1]

Concepts

Although the terms are used in various ways among the general public, many specialists in decision theory, statistics and other quantitative fields have defined uncertainty, risk, and their measurement as:
  1. Uncertainty: The lack of certainty. A state of having limited knowledge where it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome.
  2. Measurement of Uncertainty: A set of possible states or outcomes where probabilities are assigned to each possible state or outcome – this also includes the application of a probability density function to continuous variables.
  3. Risk: A state of uncertainty where some possible outcomes have an undesired effect or significant loss.
  4. Measurement of Risk: A set of measured uncertainties where some possible outcomes are losses, and the magnitudes of those losses – this also includes loss functions over continuous variables.[2]
Knightian uncertainty. In his seminal work Risk, Uncertainty, and Profit (1921), University of Chicago economist Frank Knight established the important distinction between risk and uncertainty.[3]
There are other taxonomies of uncertainties and decisions that include a broader sense of uncertainty and how it should be approached from an ethics perspective:[4]


A taxonomy of uncertainty
There are some things that you know to be true, and others that you know to be false; yet, despite this extensive knowledge that you have, there remain many things whose truth or falsity is not known to you. We say that you are uncertain about them. You are uncertain, to varying degrees, about everything in the future; much of the past is hidden from you; and there is a lot of the present about which you do not have full information. Uncertainty is everywhere and you cannot escape from it.
Dennis Lindley, Understanding Uncertainty (2006)

For example, if it is unknown whether or not it will rain tomorrow, then there is a state of uncertainty. If probabilities are applied to the possible outcomes using weather forecasts or even just a calibrated probability assessment, the uncertainty has been quantified. Suppose it is quantified as a 90% chance of sunshine. If there is a major, costly, outdoor event planned for tomorrow then there is a risk, since there is a 10% chance of rain, and rain would be undesirable. Furthermore, if this is a business event and $100,000 would be lost if it rains, then the risk has been quantified (a 10% chance of losing $100,000). These situations can be made even more realistic by quantifying light rain vs. heavy rain, the cost of delays vs. outright cancellation, etc.

Some may represent the risk in this example as the "expected opportunity loss" (EOL) or the chance of the loss multiplied by the amount of the loss (10% × $100,000 = $10,000). That is useful if the organizer of the event is "risk neutral", which most people are not. Most would be willing to pay a premium to avoid the loss. An insurance company, for example, would compute an EOL as a minimum for any insurance coverage, then add onto that other operating costs and profit. Since many people are willing to buy insurance for many reasons, then clearly the EOL alone is not the perceived value of avoiding the risk.
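The expected opportunity loss arithmetic from this example is a one-line calculation (a sketch of the computation only):

p_rain = 0.10             # probability of rain
loss_if_rain = 100_000    # dollars lost if the event is rained out
eol = p_rain * loss_if_rain   # expected opportunity loss
print(eol)   # 10000.0: the floor a risk-neutral insurer would charge, before costs and profit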

Quantitative uses of the terms uncertainty and risk are fairly consistent from fields such as probability theory, actuarial science, and information theory. Some also create new terms without substantially changing the definitions of uncertainty or risk. For example, surprisal is a variation on uncertainty sometimes used in information theory. But outside of the more mathematical uses of the term, usage may vary widely. In cognitive psychology, uncertainty can be real, or just a matter of perception, such as expectations, threats, etc.

Vagueness or ambiguity are sometimes described as "second order uncertainty", where there is uncertainty even about the definitions of uncertain states or outcomes. The difference here is that this uncertainty is about the human definitions and concepts, not an objective fact of nature. It has been argued that ambiguity, however, is always avoidable while uncertainty (of the "first order" kind) is not necessarily avoidable.

Uncertainty may be purely a consequence of a lack of knowledge of obtainable facts. That is, there may be uncertainty about whether a new rocket design will work, but this uncertainty can be removed with further analysis and experimentation. At the subatomic level, however, uncertainty may be a fundamental and unavoidable property of the universe. In quantum mechanics, the Heisenberg Uncertainty Principle puts limits on how much an observer can ever know about the position and velocity of a particle. This may not just be ignorance of potentially obtainable facts but that there is no fact to be found. There is some controversy in physics as to whether such uncertainty is an irreducible property of nature or if there are "hidden variables" that would describe the state of a particle even more exactly than Heisenberg's uncertainty principle allows.

Measurements

In metrology, physics, and engineering, the uncertainty or margin of error of a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. This may be denoted by error bars on a graph, or by the following notations:
  • measured value ± uncertainty
  • measured value +uncertainty/−uncertainty
  • measured value (uncertainty)
In the last notation, parentheses are the concise notation for the ± notation. For example, applying 10 1/2 meters in a scientific or engineering application, it could be written 10.5 m or 10.50 m, by convention meaning accurate to within one tenth of a meter, or one hundredth. The precision is symmetric around the last digit. In this case it's half a tenth up and half a tenth down, so 10.5 means between 10.45 and 10.55. Thus it is understood that 10.5 means 10.5±0.05, and 10.50 means 10.50±0.005, also written 10.5(0.5) and 10.50(5). But if the accuracy is within two tenths, the uncertainty is ± one tenth, and it is required to be explicit: 10.5±0.1 and 10.50±0.01, or 10.5(1) and 10.50(1). The numbers in parentheses apply to the numeral to their left, and are not part of that number, but part of a notation of uncertainty. They apply to the least significant digits. For instance, 1.00794(7) stands for 1.00794±0.00007, while 1.00794(72) stands for 1.00794±0.00072.[5] This concise notation is used for example by IUPAC in stating the atomic mass of elements.
The middle notation is used when the error is not symmetrical about the value – for example 3.4 +0.3/−0.2. This can occur when using a logarithmic scale, for example.
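The concise parenthesis notation above can be expanded mechanically, since the parenthesised digits apply to the least significant digits of the value. A sketch of such a converter (expand_concise is a hypothetical helper written for this post, not a standard library function):

def expand_concise(s):
    """Expand concise notation, e.g. '1.00794(7)' -> (1.00794, 7e-05)."""
    value_part, unc_part = s.rstrip(")").split("(")
    # Count decimal places in the value; the uncertainty digits align with them.
    decimals = len(value_part.split(".")[1]) if "." in value_part else 0
    return float(value_part), int(unc_part) * 10.0 ** (-decimals)

print(expand_concise("1.00794(7)"))    # (1.00794, 7e-05)
print(expand_concise("1.00794(72)"))   # (1.00794, 0.00072)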

Often, the uncertainty of a measurement is found by repeating the measurement enough times to get a good estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements. This procedure neglects systematic errors, however.
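A sketch of that procedure with synthetic readings (random numbers standing in for repeated measurements of a quantity whose true value is 10.0):

import math
import random

random.seed(0)
readings = [random.gauss(10.0, 0.5) for _ in range(100)]   # simulated measurements

n = len(readings)
mean = sum(readings) / n
std = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))   # sample std deviation
sem = std / math.sqrt(n)   # standard error of the mean: std / sqrt(n)

print(mean, std, sem)   # a single reading carries ~std; the mean carries only ~std/10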

When the uncertainty represents the standard error of the measurement, then about 68.3% of the time, the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for 31.7% of the atomic mass values given on the list of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, then probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of the normal distribution, and they apply only if the measurement process produces normally distributed errors. In that case, the quoted standard errors are easily converted to 68.3% ("one sigma"), 95.4% ("two sigma"), or 99.7% ("three sigma") confidence intervals.

In this context, uncertainty depends on both the accuracy and precision of the measurement instrument. The lower the accuracy and precision of an instrument, the larger the measurement uncertainty is. Notice that precision is often determined as the standard deviation of the repeated measures of a given value, namely using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measures, and it appears evident that the uncertainty does not depend only on instrumental precision.

Uncertainty and the media

Uncertainty in science, and science in general, is often interpreted much differently in the public sphere than in the scientific community.[6] This is due in part to the diversity of the public audience, and the tendency for scientists to misunderstand lay audiences and therefore not communicate ideas clearly and effectively.[6] One example is explained by the information deficit model. Also, in the public realm, there are often many scientific voices giving input on a single topic.[6] For example, depending on how an issue is reported in the public sphere, discrepancies between outcomes of multiple scientific studies due to methodological differences could be interpreted by the public as a lack of consensus in a situation where a consensus does in fact exist.[6] This interpretation may have even been intentionally promoted, as scientific uncertainty may be managed to reach certain goals. For example, global warming contrarian activists took the advice of Frank Luntz to frame global warming as an issue of scientific uncertainty, which was a precursor to the conflict frame used by journalists when reporting the issue.[7]

“Indeterminacy can be loosely said to apply to situations in which not all the parameters of the system and their interactions are fully known, whereas ignorance refers to situations in which it is not known what is not known”.[8] These unknowns, indeterminacy and ignorance, that exist in science are often “transformed” into uncertainty when reported to the public in order to make issues more manageable, since scientific indeterminacy and ignorance are difficult concepts for scientists to convey without losing credibility.[6] Conversely, uncertainty is often interpreted by the public as ignorance.[9] The transformation of indeterminacy and ignorance into uncertainty may be related to the public’s misinterpretation of uncertainty as ignorance.

Journalists often either inflate uncertainty (making the science seem more uncertain than it really is) or downplay uncertainty (making the science seem more certain than it really is).[10] One way that journalists inflate uncertainty is by describing new research that contradicts past research without providing context for the change.[10] Other times, journalists give scientists with minority views the same weight as scientists with majority views, without adequately describing or explaining the state of scientific consensus on the issue.[10] In the same vein, journalists often give non-scientists the same amount of attention and importance as scientists.[10]

Journalists may downplay uncertainty by eliminating “scientists’ carefully chosen tentative wording, and by losing these caveats the information is skewed and presented as more certain and conclusive than it really is”.[10] Also, stories with a single source or without any context of previous research mean that the subject at hand is presented as more definitive and certain than it is in reality.[10] There is often a “product over process” approach to science journalism that aids, too, in the downplaying of uncertainty.[10] Finally, and most notably for this investigation, when science is framed by journalists as a triumphant quest, uncertainty is erroneously framed as “reducible and resolvable”.[10]

Some media routines and organizational factors affect the overstatement of uncertainty; other media routines and organizational factors help inflate the certainty of an issue. Because the general public (in the United States) generally trusts scientists, when science stories are covered without alarm-raising cues from special interest organizations (religious groups, environmental organization, political factions, etc.) they are often covered in a business related sense, in an economic-development frame or a social progress frame.[11] The nature of these frames is to downplay or eliminate uncertainty, so when economic and scientific promise are focused on early in the issue cycle, as has happened with coverage of plant biotechnology and nanotechnology in the United States, the matter in question seems more definitive and certain.[11]

Sometimes, too, stockholders, owners, or advertising will pressure a media organization to promote the business aspects of a scientific issue, and therefore any uncertainty claims that may compromise the business interests are downplayed or eliminated.[10]

Applications

  • Investing in financial markets such as the stock market.
  • Uncertainty or error is used in science and engineering notation. Numerical values should only be expressed to those digits that are physically meaningful, which are referred to as significant figures. Uncertainty is involved in every measurement, such as measuring a distance, a temperature, etc., the degree depending upon the instrument or technique used to make the measurement. Similarly, uncertainty is propagated through calculations so that the calculated value has some degree of uncertainty depending upon the uncertainties of the measured values and the equation used in the calculation.[12]
  • Uncertainty is designed into games, most notably in gambling, where chance is central to play.
  • In scientific modelling, in which the prediction of future events should be understood to have a range of expected values.
  • In physics, the Heisenberg uncertainty principle forms the basis of modern quantum mechanics.
  • In weather forecasting it is now commonplace to include data on the degree of uncertainty in a weather forecast.
  • Uncertainty is often an important factor in economics. According to economist Frank Knight, it is different from risk, where there is a specific probability assigned to each outcome (as when flipping a fair coin). Uncertainty involves a situation that has unknown probabilities, while the estimated probabilities of possible outcomes need not add to unity.
  • In entrepreneurship: New products, services, firms and even markets are often created in the absence of probability estimates. According to entrepreneurship research, expert entrepreneurs predominantly use experience based heuristics called effectuation (as opposed to causality) to overcome uncertainty.
  • In metrology, measurement uncertainty is a central concept quantifying the dispersion one may reasonably attribute to a measurement result. Such an uncertainty can also be referred to as a measurement error. In daily life, measurement uncertainty is often implicit ("He is 6 feet tall" give or take a few inches), while for any serious use an explicit statement of the measurement uncertainty is necessary. The expected measurement uncertainty of many measuring instruments (scales, oscilloscopes, force gages, rulers, thermometers, etc.) is often stated in the manufacturer's specification.
  • The most commonly used procedure for calculating measurement uncertainty is described in the "Guide to the Expression of Uncertainty in Measurement" (GUM) published by ISO. A derived work is for example the National Institute of Standards and Technology (NIST) Technical Note 1297, "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results", and the Eurachem/Citac publication "Quantifying Uncertainty in Analytical Measurement". The uncertainty of the result of a measurement generally consists of several components. The components are regarded as random variables, and may be grouped into two categories according to the method used to estimate their numerical values:


    By propagating the variances of the components through a function relating the components to the measurement result, the combined measurement uncertainty is given as the square root of the resulting variance. The simplest form is the standard deviation of a repeated observation.
  • Uncertainty has been a common theme in art, both as a thematic device (see, for example, the indecision of Hamlet), and as a quandary for the artist (such as Martin Creed's difficulty with deciding what artworks to make).
  • Uncertainty assessment is significantly important for managing oil reservoirs where decisions are made based on uncertain models/outcomes. Predictions of oil and gas production from subsurface reservoirs are always uncertain.[13][14]

Saturday, October 24, 2015

Chimpanzee–human last common ancestor


From Wikipedia, the free encyclopedia

Chimpanzee–human last common ancestor
Temporal range: 5.4–0 Ma
Scientific classification
Kingdom: Animalia
Phylum: Chordata
Class: Mammalia
Order: Primates
Infraorder: Simiiformes
Superfamily: Hominoidea
Family: Hominidae
Subfamily: Homininae
Type species
Homo sapiens
Linnaeus, 1758
Genera
Tribe Hominini
The chimpanzee–human last common ancestor, or CHLCA, is the last species shared as a common ancestor by humans and chimpanzees; it represents the node point at which the line to genus Homo split from genus Pan. The last common ancestor of humans and chimps is estimated to have lived during the late Miocene, but possibly as late as Pliocene times—that is, more recent than 5.3 million years ago.

Speciation from Pan to Homo appears to have been a long, drawn-out process. After the "original" divergence(s), there were, according to Patterson (2006), periods of hybridization between population groups and a process of alternating divergence and hybridization that lasted over several millions of years.[1] Sometime during the late Miocene or early Pliocene the earliest members of the human clade completed a final separation from the lineage of Pan—with dates estimated by several specialists ranging from 13 million [2] to as recent as 4 million years ago.[3] The latter date and the argument for hybridization events are rejected by Wakeley[4] (see current estimates regarding complex speciation).

Richard Wrangham (2001) argued that the CHLCA species was very similar to the common chimpanzee (Pan troglodytes)—so much so that it should be classified as a member of the Pan genus and be given the taxonomic name Pan prior.[5] However, to date no fossil has been identified as a probable candidate for the CHLCA or the taxon Pan prior.

In human genetic studies, the CHLCA is useful as an anchor point for calculating single-nucleotide polymorphism (SNP) rates in human populations where chimpanzees are used as an outgroup, that is, as the extant species most genetically similar to Homo sapiens.

Time estimates

Historical studies

The earliest studies of apes suggested the CHLCA may have been as old as 25 million years; however, protein studies in the 1970s suggested the CHLCA was less than 8 million years in age. Genetic methods based on orangutan–human and gibbon–human LCA times were then used to estimate a chimpanzee–human LCA of 5 to 7 million years.

Some researchers tried to estimate the age of the CHLCA (TCHLCA) using biopolymer structures that differ slightly between closely related animals. Among these researchers, Allan C. Wilson and Vincent Sarich were pioneers in the development of the molecular clock for humans. Working on protein sequences, they eventually determined (1971) that apes were closer to humans than some paleontologists perceived based on the fossil record.[7] Later, Vincent Sarich concluded that the TCHLCA was no greater than 8 million years in age, with a favored range between 4 and 6 million years before present.

This paradigmatic age held in molecular anthropology until the late 1990s. Since then, the estimate has again been pushed towards more remote times, because studies have found evidence for a slowing of the molecular clock as apes evolved from a common monkey-like ancestor with monkeys, and humans evolved from a common ape-like ancestor with non-human apes.[8]

Current estimates

Since the 1990s, the estimation of the TCHLCA has become less certain, and there is genetic as well as paleontological support for increasing the TCHLCA beyond the 5 to 7 million years range accepted during the 1970s and 1980s. An estimate of TCHLCA at 10 to 13 million years was proposed in 1998,[9] and a range of 7 to 10 million years ago is assumed by White et al. (2009):
In effect, there is now no a priori reason to presume that human-chimpanzee split times are especially recent, and the fossil evidence is now fully compatible with older chimpanzee–human divergence dates [7 to 10 Ma...
— White et al. (2009), [10]
A source of confusion in determining the exact age of the Pan–Homo split is evidence of a more complex speciation process rather than a clean split between the two lineages. Different chromosomes appear to have split at different times, possibly over as much as a 4-million-year period, indicating a long and drawn-out speciation process with large-scale hybridization events between the two emerging lineages as late as 6.3 to 5.4 million years ago, according to Patterson et al. (2006).[11] The assumption of late hybridization was in particular based on the similarity of the X chromosome in humans and chimpanzees, suggesting a divergence as late as some 4 million years ago. This conclusion was rejected as unwarranted by Wakeley (2008), who suggested alternative explanations, including selection pressure on the X chromosome in the populations ancestral to the CHLCA.[12]

Complex speciation and incomplete lineage sorting of genetic sequences seem to also have happened in the split between the human lineage and that of the gorilla, indicating "messy" speciation is the rule rather than the exception in large primates.[13][14] Such a scenario would explain why the divergence age between the Homo and Pan has varied with the chosen method and why a single point has so far been hard to track down.

Taxonomy

The taxon 'tribe Hominini' was proposed on the basis of the idea that, in a trichotomy, the least similar species should be separated from the other two. Originally, this produced a separate Homo genus, which, predictably, was deemed the 'most different' among the three genera that include Pan and Gorilla. However, later discoveries and analyses revealed that Pan and Homo are closer genetically than are Pan and Gorilla; thus, Pan was referred to the tribe Hominini together with Homo, while Gorilla became the separate genus and was referred to the new taxon 'tribe Gorillini' (see evolutionary tree).

Mann and Weiss (1996) proposed that the tribe Hominini should encompass Pan as well as Homo, but grouped within separate subtribes.[15] They would classify Homo and all bipedal apes in the subtribe Hominina, and Pan in the subtribe Panina. (Wood (2010) discusses the different views of this taxonomy.)[16] Richard Wrangham (2001) argued that the CHLCA species was very similar to chimpanzees (Pan troglodytes)—so much so that it should be classified as a member of the Pan genus and be given the taxonomic name Pan prior.[5] To date, no fossil has been identified as a potential candidate for the CHLCA or the taxon Pan prior.

The 'human-side' descendants of the CHLCA species are specified as members of the tribe Hominini, that is, to the inclusion of the genus Homo and its closely related genus Australopithecus, but to the exclusion of the genus Pan—meaning all those human-related genera of tribe Hominini that arose after speciation from the line with Pan. Such a grouping represents "the human clade" and its members are called "hominins".[17] A "chimpanzee clade" was posited by Wood and Richmond, who referred it to a 'tribe Panini', envisioned from the family Hominidae being composed of a trifurcation of subfamilies.[18]

Sahelanthropus tchadensis is an extinct hominid species with a morphology apparently as expected of the CHLCA; and it lived some 7 million years ago—which is very close to the time of the chimpanzee–human divergence. But it is unclear whether it should be classified as a member of the Hominini tribe, that is, a hominin, or as a direct ancestor of Homo and Pan and a potential candidate for the CHLCA species itself.

Few fossil specimens on the 'chimpanzee-side' of the split have been found; the first fossil chimpanzee, dating between 545 and 284 kyr (thousand years, radiometric), was discovered in Kenya's East African Rift Valley (McBrearty, 2005).[19] All extinct genera listed in the taxobox are ancestral to Homo, or are offshoots of such. However, both Orrorin and Sahelanthropus existed around the time of the divergence, and so either one or both may be ancestral to both genera Homo and Pan.
