Friday, June 30, 2023

Internal energy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Internal_energy

The internal energy of a thermodynamic system is the energy contained within it, measured as the quantity of energy necessary to bring the system from its standard internal state to its present internal state of interest, accounting for the gains and losses of energy due to changes in its internal state, including such quantities as magnetization. It excludes the kinetic energy of motion of the system as a whole and the potential energy of position of the system as a whole, with respect to its surroundings and external force fields. It includes the thermal energy, i.e., the constituent particles' kinetic energies of motion relative to the motion of the system as a whole. The internal energy of an isolated system cannot change, as expressed in the law of conservation of energy, a foundation of the first law of thermodynamics.

The internal energy cannot be measured absolutely. Thermodynamics concerns changes in the internal energy, not its absolute value. The processes that change the internal energy are transfers, into or out of the system, of matter, or of energy, as heat, or by thermodynamic work. These processes are measured by changes in the system's properties, such as temperature, entropy, volume, electric polarization, and molar constitution. The internal energy depends only on the internal state of the system and not on the particular choice from many possible processes by which energy may pass into or out of the system. It is a state variable, a thermodynamic potential, and an extensive property.

Thermodynamics defines internal energy macroscopically, for the body as a whole. In statistical mechanics, the internal energy of a body can be analyzed microscopically in terms of the kinetic energies of microscopic motion of the system's particles from translations, rotations, and vibrations, and of the potential energies associated with microscopic forces, including chemical bonds.

The unit of energy in the International System of Units (SI) is the joule (J). The internal energy relative to the mass with unit J/kg is the specific internal energy. The corresponding quantity relative to the amount of substance with unit J/mol is the molar internal energy.

Cardinal functions

The internal energy of a system depends on its entropy S, its volume V and its number of massive particles: U(S,V,{Nj}). It expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments are exclusively extensive variables of state. Alongside the internal energy, the other cardinal function of state of a thermodynamic system is its entropy, as a function, S(U,V,{Nj}), of the same list of extensive variables of state, except that the entropy, S, is replaced in the list by the internal energy, U. It expresses the entropy representation.

Each cardinal function is a monotonic function of each of its natural or canonical variables. Each provides its characteristic or fundamental equation, for example U = U(S,V,{Nj}), that by itself contains all thermodynamic information about the system. The fundamental equations for the two cardinal functions can in principle be interconverted by solving, for example, U = U(S,V,{Nj}) for S, to get S = S(U,V,{Nj}).

In contrast, Legendre transforms are necessary to derive fundamental equations for other thermodynamic potentials and Massieu functions. The entropy as a function only of extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not itself customarily designated a 'Massieu function', though rationally it might be thought of as such, corresponding to the term 'thermodynamic potential', which includes the internal energy.

For real and practical systems, explicit expressions of the fundamental equations are almost always unavailable, but the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics.

Description and definition

The internal energy of a given state of the system is determined relative to that of a standard state of the system, by adding up the macroscopic transfers of energy that accompany a change of state from the reference state to the given state:

    ΔU = Σ_i E_i

where ΔU denotes the difference between the internal energy of the given state and that of the reference state, and the E_i are the various energies transferred to the system in the steps from the reference state to the given state. It is the energy needed to create the given state of the system from the reference state. From a non-relativistic microscopic point of view, it may be divided into microscopic potential energy, U_micro,pot, and microscopic kinetic energy, U_micro,kin, components:

    U_micro = U_micro,pot + U_micro,kin

The microscopic kinetic energy of a system arises as the sum of the motions of all the system's particles with respect to the center-of-mass frame, whether it be the motion of atoms, molecules, atomic nuclei, electrons, or other particles. The microscopic potential energy components are those of the chemical and nuclear particle bonds, and of the physical force fields within the system, such as those due to internal induced electric or magnetic dipole moments, as well as the energy of deformation of solids (stress-strain). Usually, the split into microscopic kinetic and potential energies is outside the scope of macroscopic thermodynamics.

Internal energy does not include the energy due to motion or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have because of its motion or location in external gravitational, electrostatic, or electromagnetic fields. It does, however, include the contribution of such a field to the energy due to the coupling of the internal degrees of freedom of the object with the field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter.

For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, or even possible to consider all energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Indeed, in most systems under consideration, especially through thermodynamics, it is impossible to calculate the total internal energy. Therefore, a convenient null reference point may be chosen for the internal energy.

The internal energy is an extensive property: it depends on the size of the system, or on the amount of substance it contains.

At any temperature greater than absolute zero, microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system. In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero-point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy.

The microscopic kinetic energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean microscopic kinetic energy to the macroscopically observed empirical property that is expressed as the temperature of the system. While temperature is an intensive measure, this energy expresses the concept as an extensive property of the system, often referred to as the thermal energy. The scaling property between temperature and thermal energy is the entropy change of the system.

Statistical mechanics considers any system to be statistically distributed across an ensemble of microstates. In a system that is in thermodynamic contact equilibrium with a heat reservoir, each microstate i has an energy E_i and is associated with a probability p_i. The internal energy is the mean value of the system's total energy, i.e., the sum of all microstate energies, each weighted by its probability of occurrence:

    U = Σ_i p_i E_i

This is the statistical expression of the law of conservation of energy.
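
As an illustration of this ensemble average, here is a minimal Python sketch (the microstate energies and the reservoir temperature below are made-up illustrative numbers, not values from the article) that evaluates U as the Boltzmann-weighted mean of the microstate energies:

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # temperature of the heat reservoir, K (illustrative)
    energies = [0.0, 1.0e-21, 2.0e-21, 4.0e-21]   # hypothetical microstate energies E_i, J

    # Boltzmann weights, partition function, and normalized probabilities p_i
    weights = [math.exp(-E / (k_B * T)) for E in energies]
    Z = sum(weights)
    probabilities = [w / Z for w in weights]

    # Internal energy as the probability-weighted mean of the microstate energies
    U = sum(p * E for p, E in zip(probabilities, energies))
    print(f"U = {U:.3e} J")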

Interactions of thermodynamic systems
Type of system          Mass flow   Work   Heat
Open                    Yes         Yes    Yes
Closed                  No          Yes    Yes
Thermally isolated      No          Yes    No
Mechanically isolated   No          No     Yes
Isolated                No          No     No

Internal energy changes

Thermodynamics is chiefly concerned with the changes in internal energy ΔU.

For a closed system, with matter transfer excluded, the changes in internal energy are due to heat transfer Q and due to thermodynamic work W done by the system on its surroundings. Accordingly, the internal energy change ΔU for a process may be written

    ΔU = Q − W    (closed system, no transfer of matter).

When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be sensible.

A second mechanism of change in the internal energy of a closed system is the work that it does on its surroundings. Such work may be simply mechanical, as when the system expands to drive a piston, or, for example, when the system changes its electric polarization so as to drive a change in the electric field in the surroundings.

If the system is not closed, the third mechanism that can increase the internal energy is transfer of matter into the system. This increase cannot be split into heat and work components. If the system is so set up physically that heat transfer and work that it does are by pathways separate from and independent of matter transfer, then the transfers of energy add to change the internal energy:

    ΔU = Q − W + ΔU_matter

where ΔU_matter is the internal energy carried by the matter transferred into the system.

If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy or latent heat, in contrast to sensible heat, which is associated with temperature change.
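
As a rough numerical illustration of the distinction (a sketch using commonly quoted approximate values for water; the figures are not from the article), warming one kilogram of water from 0 °C to 100 °C is sensible heat, m·c·ΔT, while boiling it away at 100 °C is latent heat, m·L:

    m = 1.0            # mass of water, kg
    c_water = 4186.0   # specific heat capacity of liquid water, J/(kg*K), approximate
    L_vap = 2.26e6     # latent heat of vaporization near 100 degC, J/kg, approximate

    sensible = m * c_water * (100.0 - 0.0)   # raises the temperature
    latent = m * L_vap                       # drives the phase change at constant temperature

    print(f"sensible heat: {sensible / 1e3:.0f} kJ")   # about 419 kJ
    print(f"latent heat:   {latent / 1e3:.0f} kJ")     # about 2260 kJ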

Internal energy of the ideal gas

Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas consists of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems approximate monatomic gases such as helium and other noble gases. For an ideal gas the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not possess rotational or vibrational degrees of freedom, and are not electronically excited to higher energies except at very high temperatures.

Therefore, the internal energy of an ideal gas depends solely on its temperature (and the number of gas particles): U = U(T, N). It is not dependent on other thermodynamic quantities such as pressure or density.

The internal energy of an ideal gas is proportional to its mass (number of moles) n and to its temperature T,

    U = c_V n T,

where c_V is the isochoric (at constant volume) molar heat capacity of the gas; c_V is constant for an ideal gas. The internal energy of any gas (ideal or not) may be written as a function of the three extensive properties S, V, n (entropy, volume, mass). In the case of the ideal gas it takes the following form:

    U(S, V, n) = const · e^(S/(c_V n)) · V^(−R/c_V) · n^(R/c_V + 1),

where const is an arbitrary positive constant and R is the universal gas constant. It is easily seen that U is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex. Knowing temperature and pressure to be the derivatives T = (∂U/∂S)_V,n and p = −(∂U/∂V)_S,n, the ideal gas law immediately follows:

    T = U/(c_V n),    p = U R/(c_V V)    ⇒    p V = n R T.
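
As a consistency check of this fundamental relation (a small sketch; the constant and the state values below are arbitrary), one can differentiate U(S, V, n) numerically and confirm that the resulting T and p satisfy pV = nRT:

    import math

    R = 8.314        # universal gas constant, J/(mol*K)
    c_v = 1.5 * R    # molar heat capacity at constant volume (monatomic ideal gas)
    const = 1.0      # arbitrary positive constant fixing the energy scale

    def U(S, V, n):
        # U(S, V, n) = const * e^(S/(c_v n)) * V^(-R/c_v) * n^(R/c_v + 1)
        return const * math.exp(S / (c_v * n)) * V ** (-R / c_v) * n ** (R / c_v + 1)

    S, V, n = 100.0, 0.05, 2.0   # an arbitrary state: J/K, m^3, mol
    h = 1e-6                     # step for central finite differences

    T = (U(S + h, V, n) - U(S - h, V, n)) / (2 * h)    # T = (dU/dS) at constant V, n
    p = -(U(S, V + h, n) - U(S, V - h, n)) / (2 * h)   # p = -(dU/dV) at constant S, n

    print(p * V, n * R * T)   # the two printed values agree: p V = n R T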

Internal energy of a closed thermodynamic system

The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or the negative of work done by the system on its surroundings.

This relationship may be expressed in infinitesimal terms using the differentials of each term, though only the internal energy is an exact differential. For a closed system, with transfers only as heat and work, the change in the internal energy is

    dU = δQ − δW

expressing the first law of thermodynamics. It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement).

For example, the mechanical work done by the system may be related to the pressure p and volume change dV. The pressure is the intensive generalized force, while the volume change is the extensive generalized displacement:

    δW = p dV.

This defines the direction of work, W, to be energy transfer from the working system to the surroundings, indicated by a positive term. Taking the direction of heat transfer Q to be into the working fluid and assuming a reversible process, the heat is

    δQ = T dS,

where T denotes the temperature and S denotes the entropy.

The change in internal energy becomes

    dU = T dS − p dV.
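
For instance (a sketch with made-up numbers), in a reversible isothermal expansion of an ideal gas the heat absorbed T dS exactly balances the work p dV done by the gas, so the internal energy does not change:

    import math

    R = 8.314               # J/(mol*K)
    n, T = 1.0, 300.0       # one mole at 300 K (illustrative)
    V1, V2 = 0.010, 0.020   # the volume doubles, m^3

    W = n * R * T * math.log(V2 / V1)   # work done by the gas, the integral of p dV
    dS = n * R * math.log(V2 / V1)      # entropy change of the gas
    Q = T * dS                          # heat absorbed reversibly

    print(f"W = {W:.1f} J, Q = {Q:.1f} J, dU = {Q - W:.1e} J")   # dU is zero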

Changes due to temperature and volume

The expression relating changes in internal energy to changes in temperature and volume is

    dU = C_V dT + [ T (∂p/∂T)_V − p ] dV.    (1)

This is useful if the equation of state is known.

In the case of an ideal gas, we can derive that dU = C_V dT, i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature.
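
Equation (1) is straightforward to apply once an equation of state is chosen. As a sketch (using a van der Waals gas with illustrative constants of roughly the right magnitude for CO2; none of the numbers come from the article), the bracketed term T(∂p/∂T)_V − p reduces, per mole, to a/Vm², where Vm is the molar volume:

    # Van der Waals equation of state per mole: p = R*T/(Vm - b) - a/Vm**2
    R = 8.314
    a, b = 0.364, 4.27e-5    # illustrative constants, Pa*m^6/mol^2 and m^3/mol

    def p(T, Vm):
        return R * T / (Vm - b) - a / Vm ** 2

    T, Vm = 300.0, 1.0e-3    # 300 K, molar volume of 1 L/mol (illustrative)
    h = 1e-4
    dp_dT = (p(T + h, Vm) - p(T - h, Vm)) / (2 * h)   # (dp/dT)_V by central difference

    internal_pressure = T * dp_dT - p(T, Vm)          # the bracketed term in equation (1)
    print(internal_pressure, a / Vm ** 2)             # both are about 3.6e5 Pa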

Changes due to temperature and pressure

When considering fluids or solids, an expression in terms of the temperature and pressure is usually more useful:

    dU = (C_p − α p V) dT + (β_T p − α T) V dP,

where α is the (cubic) coefficient of thermal expansion and β_T is the isothermal compressibility, and where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to

    C_p = C_V + V T α² / β_T.
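
As a quick numerical illustration of the last relation (a sketch evaluating C_p − C_V = V T α²/β_T with commonly quoted approximate values for liquid water near room temperature; the numbers are illustrative, not from the article):

    T = 298.0         # K
    V_m = 1.8e-5      # molar volume of liquid water, m^3/mol, approximate
    alpha = 2.07e-4   # cubic thermal expansion coefficient, 1/K, approximate
    beta_T = 4.6e-10  # isothermal compressibility, 1/Pa, approximate

    dC = V_m * T * alpha ** 2 / beta_T       # C_p - C_V per mole
    print(f"C_p - C_V ~ {dC:.2f} J/(mol*K)")  # about 0.5, small next to C_p of roughly 75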


Changes due to volume at constant temperature

The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature:

    π_T = (∂U/∂V)_T.

Internal energy of multi-component systems

In addition to including the entropy and volume terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains:

    U = U(S, V, N_1, N_2, ..., N_n)

where N_j are the molar amounts of constituents of type j in the system. The internal energy is an extensive function of the extensive variables S, V, and the amounts N_j, so the internal energy may be written as a linearly homogeneous function of first degree:

    U(αS, αV, αN_1, αN_2, ...) = α U(S, V, N_1, N_2, ...)

where α is a factor describing the growth of the system. The differential internal energy may be written as

    dU = (∂U/∂S) dS + (∂U/∂V) dV + Σ_i (∂U/∂N_i) dN_i = T dS − p dV + Σ_i μ_i dN_i

which shows (or defines) temperature T to be the partial derivative of U with respect to entropy S and pressure p to be the negative of the similar derivative with respect to volume V,

    T = (∂U/∂S)_V,N,    p = −(∂U/∂V)_S,N,

and where the coefficients μ_i are the chemical potentials for the components of type i in the system. The chemical potentials are defined as the partial derivatives of the internal energy with respect to the variations in composition:

    μ_i = (∂U/∂N_i)_S,V,N_j≠i

As conjugate variables to the composition N_i, the chemical potentials are intensive properties, intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent. Under conditions of constant T and p, because of the extensive nature of U and its independent variables, using Euler's homogeneous function theorem, the differential dU may be integrated and yields an expression for the internal energy:

    U = T S − p V + Σ_i μ_i N_i.

The sum over the composition of the system is the Gibbs free energy:

    G = Σ_i μ_i N_i

that arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for N.
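
As a bookkeeping sketch of the integrated (Euler) form and its relation to the Gibbs energy, the following uses arbitrary illustrative values for a single-component system (none of the numbers come from the article):

    # Arbitrary illustrative values for a single-component system
    T, S = 300.0, 250.0    # temperature (K) and entropy (J/K)
    p, V = 1.0e5, 0.02     # pressure (Pa) and volume (m^3)
    mu, N = 5.0e3, 3.0     # chemical potential (J/mol) and amount (mol)

    U = T * S - p * V + mu * N       # Euler (integrated) form of the internal energy
    G = mu * N                       # Gibbs free energy for a single component

    print(U, G, U + p * V - T * S)   # the last expression reproduces G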

Internal energy in an elastic medium

For an elastic medium the mechanical energy term of the internal energy is expressed in terms of the stress σ_ij and strain ε_ij involved in elastic processes. In Einstein notation for tensors, with summation over repeated indices, for unit volume, the infinitesimal statement is

    dU = T dS + σ_ij dε_ij

Euler's theorem yields for the internal energy:

    U = T S + (1/2) σ_ij ε_ij

For a linearly elastic material, the stress is related to the strain by

    σ_ij = C_ijkl ε_kl

where the C_ijkl are the components of the fourth-rank elastic constant tensor of the medium.
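
As a sketch of how the elastic term can be evaluated in a simple case, the following assumes an isotropic, linearly elastic medium described by two Lamé constants (the roughly steel-like values and the strain below are illustrative, not data from the article):

    import numpy as np

    lam, mu = 1.15e11, 7.7e10   # Lame parameters, Pa (roughly steel-like, illustrative)

    # A small, arbitrary symmetric strain tensor (dimensionless)
    eps = np.array([[1.0e-4, 2.0e-5, 0.0],
                    [2.0e-5, -5.0e-5, 0.0],
                    [0.0, 0.0, 3.0e-5]])

    # Hooke's law for an isotropic medium: sigma_ij = lam * tr(eps) * delta_ij + 2 * mu * eps_ij
    sigma = lam * np.trace(eps) * np.eye(3) + 2 * mu * eps

    # Elastic contribution to the internal energy per unit volume: (1/2) * sigma_ij * eps_ij
    u_elastic = 0.5 * np.sum(sigma * eps)
    print(f"elastic energy density ~ {u_elastic:.1f} J/m^3")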

Elastic deformations, such as sound, passing through a body, or other forms of macroscopic internal agitation or turbulent motion create states in which the system is not in thermodynamic equilibrium. While such energies of motion continue, they contribute to the total energy of the system; thermodynamic internal energy pertains only when such motions have ceased.

History

James Joule studied the relationship between heat, work, and temperature. He observed that friction in a liquid, such as caused by its agitation with work by a paddle wheel, caused an increase in its temperature, which he described as producing a quantity of heat. Expressed in modern units, he found that c. 4186 joules of energy were needed to raise the temperature of one kilogram of water by one degree Celsius.

Molecular solid

From Wikipedia, the free encyclopedia
Models of the packing of molecules in two molecular solids, carbon dioxide or Dry ice (a), and caffeine (c). The gray, red, and purple balls represent carbon, oxygen, and nitrogen, respectively. Images of carbon dioxide (b) and caffeine (d) in the solid state at room temperature and atmosphere. The gaseous phase of the dry ice in image (b) is visible because the molecular solid is subliming.

A molecular solid is a solid consisting of discrete molecules. The cohesive forces that bind the molecules together are van der Waals forces, dipole-dipole interactions, quadrupole interactions, π-π interactions, hydrogen bonding, halogen bonding, London dispersion forces, and in some molecular solids, coulombic interactions. Van der Waals forces, dipole interactions, quadrupole interactions, π-π interactions, hydrogen bonding, and halogen bonding (2-127 kJ mol⁻¹) are typically much weaker than the forces holding together other solids: metallic (metallic bonding, 400-500 kJ mol⁻¹), ionic (Coulomb forces, 700-900 kJ mol⁻¹), and network solids (covalent bonds, 150-900 kJ mol⁻¹). Intermolecular interactions typically do not involve delocalized electrons, unlike metallic and certain covalent bonds. Exceptions are charge-transfer complexes such as tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ), a radical ion salt. These differences in the strength of force (i.e. covalent vs. van der Waals) and electronic characteristics (i.e. delocalized electrons) from other types of solids give rise to the unique mechanical, electronic, and thermal properties of molecular solids.

Molecular solids are poor electrical conductors, although some, such as TTF-TCNQ, are semiconductors (σ = 5 × 10² Ω⁻¹ cm⁻¹). This is still substantially less than the conductivity of copper (σ = 6 × 10⁵ Ω⁻¹ cm⁻¹). Molecular solids tend to have lower fracture toughness (sucrose, KIc = 0.08 MPa m^1/2) than metallic (iron, KIc = 50 MPa m^1/2), ionic (sodium chloride, KIc = 0.5 MPa m^1/2), and covalent solids (diamond, KIc = 5 MPa m^1/2). Molecular solids have low melting (Tm) and boiling (Tb) points compared to metallic (iron), ionic (sodium chloride), and covalent solids (diamond). Examples of molecular solids with low melting and boiling temperatures include argon, water, naphthalene, nicotine, and caffeine (see table below). The constituents of molecular solids range in size from condensed monatomic gases to small molecules (i.e. naphthalene and water) to large molecules with tens of atoms (i.e. fullerene with 60 carbon atoms).

Melting and boiling points of metallic, ionic, covalent, and molecular solids.
Type of solid   Material          Tm (°C)   Tb (°C)
Metallic        Iron              1,538     2,861
Ionic           Sodium chloride   801       1,465
Covalent        Diamond           4,440     -
Molecular       Argon             -189.3    -185.9
Molecular       Water             0         100
Molecular       Naphthalene       80.1      217.9
Molecular       Nicotine          -79       491
Molecular       Caffeine          235.6     519.9

Composition and structure

Molecular solids may consist of single atoms, diatomic molecules, and/or polyatomic molecules. The intermolecular interactions between the constituents dictate how the crystal lattice of the material is structured. All atoms and molecules can partake in van der Waals and London dispersion forces (sterics). It is the lack or presence of other intermolecular interactions, which depend on the particular atom or molecule, that affords materials their unique properties.

Van der Waals forces

Van der Waals and London dispersion forces guide iodine to condense into a solid at room temperature. (a) A Lewis dot structure of iodine and an analogous structure as a space-filling model. Purple balls represent iodine atoms. (b) Demonstration of how van der Waals and London dispersion forces guide the organization of the crystal lattice from 1D to 3D (bulk material).

Argon is a noble gas that has a full octet, no charge, and is nonpolar. These characteristics make it unfavorable for argon to partake in metallic, covalent, and ionic bonds as well as most intermolecular interactions. It can, however, partake in van der Waals and London dispersion forces. These weak self-interactions are isotropic and result in the long-range ordering of the atoms into face-centered cubic packing when cooled below -189.3 °C. Similarly, iodine, a linear diatomic molecule, has a net dipole of zero and can only partake in van der Waals interactions that are fairly isotropic. This results in its bipyramidal symmetry.

Dipole-dipole and quadrupole interactions

The dipole-dipole interactions between the acetone molecules partially guide the organization of the crystal lattice structure. (a) A dipole-dipole interaction between acetone molecules stacked on top of one another. (b) A dipole-dipole interaction between acetone molecules in front of and behind each other in the same plane. (c) A dipole-dipole interaction between acetone molecules flipped in direction, but adjacent to each other in the same plane. (d) Demonstration of how dipole-dipole interactions are involved in the crystal lattice structure.

For acetone, dipole-dipole interactions are a major driving force behind the structure of its crystal lattice. The negative dipole is caused by oxygen. Oxygen is more electronegative than carbon and hydrogen, causing a partial negative (δ-) and positive charge (δ+) on the oxygen and the remainder of the molecule, respectively. The δ- orients towards the δ+, causing the acetone molecules to prefer to align in a few configurations in a δ- to δ+ orientation (pictured above). The dipole-dipole and other intermolecular interactions align to minimize energy in the solid state and determine the crystal lattice structure.

The quadrupole-quadrupole interactions between the naphthalene molecules partially guide the organization of the crystal lattice structure. (a) A Lewis dot structure artificially colored to provide a qualitative map of where the partial charges exist for the quadrupole, and a 3D representation of naphthalene molecules and the quadrupole. (b) A 3D representation of the quadrupole from two naphthalene molecules interacting. (c) Demonstration of how quadrupole-quadrupole interactions are involved in the crystal lattice structure.

A quadrupole, like a dipole, is a permanent pole, but the electric field of the molecule is not linear as in acetone; it instead extends in two dimensions. Examples of molecular solids with quadrupoles are octafluoronaphthalene and naphthalene. Naphthalene consists of two joined conjugated rings. The electronegativity of the atoms of this ring system and conjugation cause a ring current resulting in a quadrupole. For naphthalene, this quadrupole manifests in a δ- and δ+ accumulating within and outside the ring system, respectively. Naphthalene assembles through the coordination of the δ- of one molecule to the δ+ of another molecule. This results in 1D columns of naphthalene in a herringbone configuration. These columns then stack into 2D layers and then 3D bulk materials. Octafluoronaphthalene follows this path of organization to build bulk material except that the δ- and δ+ are on the exterior and interior of the ring system, respectively.

Hydrogen and halogen bonding

The hydrogen bonding between the acetic acid molecules partially guides the organization of the crystal lattice structure. (a) A Lewis dot structure with the partial charges and hydrogen bond denoted with a blue dashed line, and a ball-and-stick model of acetic acid with the hydrogen bond denoted with a blue dashed line. (b) Four acetic acid molecules in zig-zag hydrogen bonding in 1D. (c) Demonstration of how hydrogen bonding is involved in the crystal lattice structure.

A hydrogen bond is a specific dipole interaction where a hydrogen atom has a partial positive charge (δ+) due to a neighboring electronegative atom or functional group. Hydrogen bonds are among the strongest intermolecular interactions known other than ion-dipole interactions. For intermolecular hydrogen bonds the δ+ hydrogen interacts with a δ- on an adjacent molecule. Examples of molecular solids that hydrogen bond are water, amino acids, and acetic acid. For acetic acid, the hydrogen (δ+) on the alcohol moiety of the carboxylic acid hydrogen bonds with the carbonyl moiety (δ-) of the carboxylic acid on the adjacent molecule. This hydrogen bonding leads to a string of acetic acid molecules hydrogen bonding together to minimize the free energy. These strings of acetic acid molecules then stack together to build solids.

The halogen bonding between the bromine and 1,4-dioxane molecules partially guides the organization of the crystal lattice structure. (a) A Lewis dot structure and ball-and-stick model of bromine and 1,4-dioxane. The halogen bond is between the bromine and the 1,4-dioxane. (b) Demonstration of how halogen bonding can guide the crystal lattice structure.

A halogen bond is when an electronegative halide participates in a noncovalent interaction with a less electronegative atom on an adjacent molecule. Examples of molecular solids that halogen bond are hexachlorobenzene and a cocrystal of bromine and 1,4-dioxane. For the second example, the δ- bromine atom in the diatomic bromine molecule aligns with the less electronegative oxygen in the 1,4-dioxane. The oxygen in this case is viewed as δ+ compared to the bromine atom. This coordination results in a chain-like organization that stacks into 2D and then 3D.

Coulombic interactions

The partial ionic bonding between the TTF and TCNQ molecules partially guides the organization of the crystal structure. The van der Waals interactions of the cores of TTF and TCNQ guide adjacent stacked columns. (a) A Lewis dot structure and ball-and-stick model of TTF and TCNQ. The partial ionic bond is between the cyano- and thio- motifs. (b) Demonstration of how van der Waals and partial ionic bonding guide the crystal lattice structure.

Coulombic interactions are manifested in some molecular solids. A well-studied example is the radical ion salt TTF-TCNQ, with a conductivity of 5 × 10² Ω⁻¹ cm⁻¹, much closer to copper (σ = 6 × 10⁵ Ω⁻¹ cm⁻¹) than many molecular solids. The coulombic interaction in TTF-TCNQ stems from the large partial negative charge (δ = -0.59) on the cyano- moiety of TCNQ at room temperature. For reference, a completely charged molecule has δ = ±1. This partial negative charge leads to a strong interaction with the thio- moiety of the TTF. The strong interaction leads to favorable alignment of these functional groups adjacent to each other in the solid state, while π-π interactions cause the TTF and TCNQ to stack in separate columns.

Allotropes

One form of an element may be a molecular solid, but another form of that same element may not be a molecular solid. For example, solid phosphorus can crystallize as different allotropes called "white", "red", and "black" phosphorus. White phosphorus forms molecular crystals composed of tetrahedral P4 molecules. Heating at ambient pressure to 250 °C or exposing to sunlight converts white phosphorus to red phosphorus where the P4 tetrahedra are no longer isolated, but connected by covalent bonds into polymer-like chains. Heating white phosphorus under high (GPa) pressures converts it to black phosphorus which has a layered, graphite-like structure.

The structural transitions in phosphorus are reversible: upon releasing high pressure, black phosphorus gradually converts into red phosphorus, and by vaporizing red phosphorus at 490 °C in an inert atmosphere and condensing the vapor, covalent red phosphorus can be transformed into the molecular solid, white phosphorus.

White, red, violet, and black phosphorus samples; the structural unit of white phosphorus; and the structures of red, violet, and black phosphorus.
Similarly, yellow arsenic is a molecular solid composed of As4 units. Some forms of sulfur and selenium are composed of S8 (or Se8) units and are molecular solids at ambient conditions, but can be converted into covalent allotropes having atomic chains extending throughout the crystal.

Properties

Since molecular solids are held together by relatively weak forces they tend to have low melting and boiling points, low mechanical strength, low electrical conductivity, and poor thermal conductivity. Also, depending on the structure of the molecule, the intermolecular forces may have directionality leading to anisotropy of certain properties.

Melting and boiling points

The characteristic melting point of metals and ionic solids is ~1000 °C and greater, while molecular solids typically melt closer to 300 °C (see table); thus many corresponding substances are either liquid (e.g., water) or gaseous (e.g., oxygen) at room temperature. This is due to the elements involved, the molecules they form, and the weak intermolecular interactions of the molecules.

Allotropes of phosphorus are useful to further demonstrate this structure-property relationship. White phosphorus, a molecular solid, has a relatively low density of 1.82 g/cm³ and melting point of 44.1 °C; it is a soft material which can be cut with a knife. When it is converted to the covalent red phosphorus, the density goes to 2.2–2.4 g/cm³ and the melting point to 590 °C, and when white phosphorus is transformed into the (also covalent) black phosphorus, the density becomes 2.69–3.8 g/cm³ and the melting temperature ~200 °C. Both red and black phosphorus forms are significantly harder than white phosphorus.

Mechanical properties

Molecular solids can be either ductile or brittle, or a combination depending on the crystal face stressed. Both ductile and brittle solids undergo elastic deformation until they reach the yield stress. Once the yield stress is reached, ductile solids undergo a period of plastic deformation and eventually fracture. Brittle solids fracture promptly after passing the yield stress. Due to the asymmetric structure of most molecules, many molecular solids have directional intermolecular forces. This phenomenon can lead to anisotropic mechanical properties. Typically a molecular solid is ductile when it has directional intermolecular interactions. This allows for dislocation between layers of the crystal, much as in metals.

One example of a ductile molecular solid that can be bent 180° is hexachlorobenzene (HCB). In this example the π-π interactions between the benzene cores are stronger than the halogen interactions of the chlorides. This difference leads to its flexibility. This flexibility is anisotropic; to bend HCB to 180°, one must stress the [001] face of the crystal. Another example of a flexible molecular solid is 2-(methylthio)nicotinic acid (MTN). MTN is flexible due to its strong hydrogen bonding and π-π interactions creating a rigid set of dimers that dislocate along the alignment of their terminal methyls. When stressed on the [010] face this crystal will bend 180°. Note that not all ductile molecular solids bend 180°, and some may have more than one bending face.

Electrical properties

Molecular solids are generally insulators, with a large band gap (compared to germanium at 0.7 eV) that is due to the weak intermolecular interactions, which result in low charge carrier mobility. Some molecular solids exhibit electrical conductivity, such as TTF-TCNQ with σ = 5 × 10² Ω⁻¹ cm⁻¹, but in such cases orbital overlap is evident in the crystal structure. Fullerenes, which are insulating, become conducting or even superconducting upon doping.

Thermal properties

Molecular solids have many thermal properties: specific heat capacity, thermal expansion, and thermal conductance to name a few. These thermal properties are determined by the intra- and intermolecular vibrations of the atoms and molecules of the molecular solid. While transitions of an electron do contribute to thermal properties, their contribution is negligible compared to the vibrational contribution.

Physical chemistry

From Wikipedia, the free encyclopedia
Between the flame and the flower is aerogel, whose synthesis has been aided greatly by physical chemistry.

Physical chemistry is the study of macroscopic and microscopic phenomena in chemical systems in terms of the principles, practices, and concepts of physics such as motion, energy, force, time, thermodynamics, quantum chemistry, statistical mechanics, analytical dynamics and chemical equilibria.

Physical chemistry, in contrast to chemical physics, is predominantly (but not always) a supra-molecular science, as the majority of the principles on which it was founded relate to the bulk rather than the molecular or atomic structure alone (for example, chemical equilibrium and colloids).

Some of the relationships that physical chemistry strives to resolve include the effects of:

  1. Intermolecular forces that act upon the physical properties of materials (plasticity, tensile strength, surface tension in liquids).
  2. Reaction kinetics on the rate of a reaction.
  3. The identity of ions and the electrical conductivity of materials.
  4. Surface science and electrochemistry of cell membranes.
  5. Interaction of one body with another in terms of quantities of heat and work called thermodynamics.
  6. Transfer of heat between a chemical system and its surroundings during a change of phase or a chemical reaction, called thermochemistry.
  7. Study of colligative properties of the number of species present in solution.
  8. Number of phases, number of components and degrees of freedom (or variance) can be correlated with one another with the help of the phase rule.
  9. Reactions of electrochemical cells.
  10. Behaviour of microscopic systems using quantum mechanics and macroscopic systems using statistical thermodynamics.
  11. Calculation of the energy of electron movement in metal complexes.

Key concepts

The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems.

One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them.

Disciplines

Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter.

Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics, which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine, and which provides links between properties like the thermal expansion coefficient and rate of change of entropy with pressure for a gas or a liquid. It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes. However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes and not what actually does happen, or how fast, away from equilibrium.

Which reactions do occur and how fast is the subject of chemical kinetics, another branch of physical chemistry. A key idea in chemical kinetics is that for reactants to react and form products, most chemical species must go through transition states which are higher in energy than either the reactants or the products and serve as a barrier to reaction. In general, the higher the barrier, the slower the reaction. A second is that most chemical reactions occur as a sequence of elementary reactions, each with its own transition state. Key questions in kinetics include how the rate of reaction depends on temperature and on the concentrations of reactants and catalysts in the reaction mixture, as well as how catalysts and reaction conditions can be engineered to optimize the reaction rate.

The fact that how fast reactions occur can often be specified with just a few concentrations and a temperature, instead of needing to know all the positions and speeds of every molecule in a mixture, is a special case of another key concept in physical chemistry, which is that to the extent an engineer needs to know, everything going on in a mixture of very large numbers (perhaps of the order of the Avogadro constant, 6 × 10²³) of particles can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities.

History

Fragment of M. Lomonosov's manuscript 'Physical Chemistry' (1752)

The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" (Russian: Курс истинной физической химии) before the students of Petersburg University. In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations".

Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper, On the Equilibrium of Heterogeneous Substances. This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and Gibbs' phase rule.

The first scientific journal specifically in the field of physical chemistry was the German journal, Zeitschrift für Physikalische Chemie, founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff. Together with Svante August Arrhenius, these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909.

Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names. Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy, such as infrared spectroscopy, microwave spectroscopy, electron paramagnetic resonance and nuclear magnetic resonance spectroscopy, is probably the most important 20th century development.

Further development in physical chemistry may be attributed to discoveries in nuclear chemistry, especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry, as well as the development of calculation algorithms in the field of "additive physicochemical properties" (practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, etc.—more than 20 in all—can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized), and herein lies the practical importance of contemporary physical chemistry.

See Group contribution method, Lydersen method, Joback method, Benson group increment theory, quantitative structure–activity relationship

Journals

Some journals that deal with physical chemistry include Zeitschrift für Physikalische Chemie (1887); Journal of Physical Chemistry A (from 1896 as Journal of Physical Chemistry, renamed in 1997); Physical Chemistry Chemical Physics (from 1999, formerly Faraday Transactions with a history dating back to 1905); Macromolecular Chemistry and Physics (1947); Annual Review of Physical Chemistry (1950); Molecular Physics (1957); Journal of Physical Organic Chemistry (1988); Journal of Physical Chemistry B (1997); ChemPhysChem (2000); Journal of Physical Chemistry C (2007); and Journal of Physical Chemistry Letters (from 2010, combining letters previously published in the separate journals).

Historical journals that covered both chemistry and physics include Annales de chimie et de physique (started in 1789, published under the name given here from 1815 to 1914).


Platinum group

From Wikipedia, the free encyclopedia ...