Monday, May 28, 2018

Glueball

From Wikipedia, the free encyclopedia

In particle physics, a glueball (also gluonium, gluon-ball) is a hypothetical composite particle.[1] It consists solely of gluon particles, without valence quarks. Such a state is possible because gluons carry color charge and experience the strong interaction between themselves. Glueballs are extremely difficult to identify in particle accelerators, because they mix with ordinary meson states.[2]

Theoretical calculations show that glueballs should exist at energy ranges accessible with current collider technology. However, due to the aforementioned difficulty (among others), they have so far not been observed and identified with certainty,[3] although phenomenological calculations have suggested that an experimentally identified glueball candidate, denoted f_{0}(1710), has properties consistent with those expected of a Standard Model glueball.[4]

The prediction that glueballs exist is one of the most important predictions of the Standard Model of particle physics that has not yet been confirmed experimentally.[5] Glueballs are the only particles predicted by the Standard Model with total angular momentum (J) (sometimes called "intrinsic spin") that could be either 2 or 3 in their ground states.

Properties of glueballs

In principle, it is theoretically possible for all properties of glueballs to be calculated exactly and derived directly from the equations and fundamental physical constants of quantum chromodynamics (QCD) without further experimental input. So, the predicted properties of these hypothetical particles can be described in exquisite detail using only Standard Model physics that has wide acceptance in the theoretical physics literature. However, there is considerable uncertainty in the measurement of some of the relevant key physical constants, and the QCD calculations are so difficult that solutions to these equations are almost always numerical approximations (reached by several very different methodologies). This can lead to variation in theoretical predictions of glueball properties such as mass and branching ratios in glueball decays.

Constituent particles and color charge

Theoretical studies of glueballs have focused on glueballs consisting of either two gluons or three gluons, by analogy to mesons and baryons that have two and three quarks respectively. As in the case of mesons and baryons, glueballs would be QCD color charge neutral. The baryon number of a glueball is zero.

Total angular momentum

Two gluon glueballs can have total angular momentum (J) of 0 (which are scalar or pseudo-scalar) or 2 (tensor). Three gluon glueballs can have total angular momentum (J) of 1 (vector boson) or 3. All glueballs have integer total angular momentum which implies that they are bosons rather than fermions.

Glueballs are the only particles predicted by the Standard Model with total angular momentum (J) (sometimes called "intrinsic spin") that could be either 2 or 3 in their ground states, although mesons made of two quarks with J=0 and J=1 with similar masses have been observed and excited states of other mesons can have these values of total angular momentum.

Fundamental particles with ground states having J=0 or J=2 are easily distinguished from glueballs. The hypothetical graviton, while having total angular momentum J=2, would be massless and lack color charge, and so would be easily distinguished from glueballs. The Standard Model Higgs boson, for which an experimentally measured mass of about 125–126 GeV/c² has been determined, is the only fundamental particle with J=0 in the Standard Model. It also lacks color charge and hence does not engage in strong force interactions. But the Higgs boson is about 25–80 times as heavy as the various glueball states predicted by the Standard Model.

Electric charge

All glueballs would have an electric charge of zero as gluons themselves do not have an electric charge.

Mass and parity

Glueballs are predicted by quantum chromodynamics to be massive, notwithstanding the fact that gluons themselves have zero rest mass in the Standard Model. Glueballs with all four possible combinations of the quantum numbers P (parity) and C (C-parity) for every possible total angular momentum have been considered, producing at least fifteen possible glueball states, including excited glueball states that share the same quantum numbers but have differing masses. The lightest states have masses as low as 1.4 GeV/c² (for a glueball with quantum numbers J=0, P=+, C=+), and the heaviest states have masses as great as almost 5 GeV/c² (for a glueball with quantum numbers J=0, P=+, C=−).[6]

These masses are on the same order of magnitude as the masses of many experimentally observed mesons and baryons, as well as the masses of the tau lepton, charm quark, bottom quark, some hydrogen isotopes, and some helium isotopes.

Stability and decay channels

Just as all Standard Model mesons and baryons, except the proton, are unstable in isolation, all glueballs are predicted by the Standard Model to be unstable in isolation, with various QCD calculations predicting the total decay width (which is functionally related to half-life) for various glueball states. QCD calculations also make predictions regarding the expected decay patterns of glueballs.[7][8] For example, glueballs would not have radiative or two photon decays, but would have decays into pairs of pions, pairs of kaons, or pairs of eta mesons.[7]

Practical impact on macroscopic low energy physics

Feynman diagram of a glueball (G) decaying to two pions (π). Such decays help the study of and search for glueballs.[9]

Because Standard Model glueballs are so ephemeral (decaying almost immediately into more stable decay products) and are only generated in high energy physics, glueballs arise only synthetically, not in the natural conditions found on Earth that humans can easily observe. They are scientifically notable mostly because they are a testable prediction of the Standard Model, and not because of their phenomenological impact on macroscopic processes or their engineering applications.

Lattice QCD simulations

Lattice QCD provides a way to study the glueball spectrum theoretically and from first principles. Some of the first quantities calculated using lattice QCD methods (in 1980) were glueball mass estimates.[10] Morningstar and Peardon[11] computed in 1999 the masses of the lightest glueballs in QCD without dynamical quarks. The three lowest states are tabulated below. The presence of dynamical quarks would slightly alter these data, but also makes the computations more difficult. Since that time calculations within QCD (lattice and sum rules) find the lightest glueball to be a scalar with mass in the range of about 1000–1700 MeV.[12]

J^PC    mass
0^++    1730 ± 80 MeV
2^++    2400 ± 120 MeV
0^−+    2590 ± 130 MeV

Experimental candidates

Particle accelerator experiments are often able to identify unstable composite particles and assign masses to those particles to a precision of approximately 10 MeV/c², without being able to immediately determine all of the properties of the observed resonance. Scores of such particles have been detected, although particles detected in some experiments but not others can be viewed as doubtful. Some of the candidate particle resonances that could be glueballs, although the evidence is not definitive, include the following:

Vector, Pseudo-Vector, or Tensor Glueball Candidates:
  • X(3020) observed by the BaBar collaboration is a candidate for an excited state of the 2−+, 1+− or 1−− glueball states with a mass of about 3.02 GeV/c2.[5]
Scalar Glueball Candidates:
  • f0(500) also known as σ – the properties of this particle are possibly consistent with a 1000 MeV or 1500 MeV mass glueball.[13]
  • f0(980) – the structure of this composite particle is consistent with the existence of a light glueball.[13]
  • f0(1370) – existence of this resonance is disputed but is a candidate for a glueball-meson mixing state[13]
  • f0(1500) – existence of this resonance is undisputed but its status as a glueball-meson mixing state or pure glueball is not well established.[13]
  • f0(1710) – existence of this resonance is undisputed but its status as a glueball-meson mixing state or pure glueball is not well established.[13]
Other Glueball Candidates:
  • Gluon jets at the LEP experiment show a 40% excess over theoretical expectations of electromagnetically neutral clusters, which suggests that electromagnetically neutral particles expected in gluon-rich environments, such as glueballs, are likely to be present.[13]
Many of these candidates have been the subject of active investigation for at least eighteen years.[7] The GlueX experiment has been specifically designed to produce more definitive experimental evidence of glueballs.[14]

Thermal conductivity

From Wikipedia, the free encyclopedia
Thermal conductivity (often denoted k, λ, or κ) is the property of a material to conduct heat. It is evaluated primarily in terms of Fourier's law for heat conduction. In general, thermal conductivity is a tensor property, expressing the anisotropy of the material.

Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. Correspondingly, materials of high thermal conductivity are widely used in heat sink applications and materials of low thermal conductivity are used as thermal insulation. The thermal conductivity of a material may depend on temperature. The reciprocal of thermal conductivity is called thermal resistivity.

Units of thermal conductivity

In the International System of Units (SI), thermal conductivity is measured in watts per meter-kelvin (W/(m⋅K)). The dimension of thermal conductivity is M¹L¹T⁻³Θ⁻¹, expressed in terms of the dimensions mass (M), length (L), time (T), and temperature (Θ). In Imperial units, thermal conductivity is measured in BTU/(h⋅ft⋅°F).[note 1][1]
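As a quick illustration of how the two unit systems relate, the conversion factor can be derived from the definitions of the underlying units. The short Python sketch below does this; the values used for the BTU, the foot and the Fahrenheit interval are standard unit definitions rather than anything taken from this article.

# Derive the conversion between BTU/(h*ft*degF) and W/(m*K)
# from the definitions of the underlying units.
BTU_IN_JOULES = 1055.05585    # International Table BTU
FOOT_IN_METERS = 0.3048
HOUR_IN_SECONDS = 3600.0
FAHRENHEIT_INTERVAL_IN_KELVIN = 5.0 / 9.0

# 1 BTU/(h*ft*degF) expressed in W/(m*K):
btu_unit_in_SI = BTU_IN_JOULES / (
    HOUR_IN_SECONDS * FOOT_IN_METERS * FAHRENHEIT_INTERVAL_IN_KELVIN
)
print(f"1 BTU/(h*ft*degF) = {btu_unit_in_SI:.4f} W/(m*K)")      # about 1.73
print(f"1 W/(m*K) = {1.0 / btu_unit_in_SI:.4f} BTU/(h*ft*degF)") # about 0.58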

Other units which are closely related to the thermal conductivity are in common use in the construction and textile industries. The construction industry makes use of units such as the R-value (resistance) and the U-value (transmittance). Although related to the thermal conductivity of a material used in an insulation product, R- and U-values are dependent on the thickness of the product.[note 2]

Likewise the textile industry has several units including the tog and the clo which express thermal resistance of a material in a way analogous to the R-values used in the construction industry.

Measurement

There are a number of ways to measure thermal conductivity. Each of these is suitable for a limited range of materials, depending on the thermal properties and the medium temperature. There is a distinction between steady-state and transient techniques.

In general, steady-state techniques are useful when the temperature of the material does not change with time. This makes the signal analysis straightforward (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed. The divided bar (various types) is the most common device used for consolidated rock samples.

Experimental values


Experimental values of thermal conductivity.

Thermal conductivity is important in material science, research, electronics, building insulation and related fields, especially where high operating temperatures are achieved. Several materials are shown in the list of thermal conductivities. These should be considered approximate due to the uncertainties related to material definitions.

High energy generation rates within electronics or turbines require the use of materials with high thermal conductivity such as copper (see: Copper in heat exchangers), aluminium, and silver. On the other hand, materials with low thermal conductance, such as polystyrene and alumina, are used in building construction or in furnaces in an effort to slow the flow of heat, i.e. for insulation purposes.

Definitions

The reciprocal of thermal conductivity is thermal resistivity, usually expressed in kelvin-meters per watt (K⋅m⋅W−1). For a given thickness of a material, that particular construction's thermal resistance and the reciprocal property, thermal conductance, can be calculated. Unfortunately, there are differing definitions for these terms.

Thermal conductivity, k, often depends on temperature. Therefore, the definitions listed below make sense when the thermal conductivity is temperature independent. Otherwise a representative mean value has to be considered; for more, see the equations section below.

Conductance

For general scientific use, thermal conductance is the quantity of heat that passes in unit time through a plate of particular area and thickness when its opposite faces differ in temperature by one kelvin. For a plate of thermal conductivity k, area A and thickness L, the conductance calculated is kA/L, measured in W⋅K−1 (equivalent to: W/°C). ASTM C168-15, however, defines thermal conductance as "time rate of steady state heat flow through a unit area of a material or construction induced by a unit temperature difference between the body surfaces" and defines the units as W/(m2⋅K) (Btu/(h⋅ft2⋅°F))[2]

The thermal conductance of that particular construction is the inverse of the thermal resistance. Thermal conductivity and conductance are analogous to electrical conductivity (A⋅m−1⋅V−1) and electrical conductance (A⋅V−1).

There is also a measure known as heat transfer coefficient: the quantity of heat that passes in unit time through a unit area of a plate of particular thickness when its opposite faces differ in temperature by one kelvin. The reciprocal is thermal insulance. In summary:
  • thermal conductance = kA/L, measured in W⋅K−1 or in ASTM C168-15 as W/(m2⋅K)[2]
    • thermal resistance = L/(kA), measured in K⋅W−1 (equivalent to: °C/W)
  • heat transfer coefficient = k/L, measured in W⋅K−1⋅m−2
    • thermal insulance = L/k, measured in K⋅m2⋅W−1.
The heat transfer coefficient is also known as thermal admittance in the sense that the material may be seen as admitting heat to flow.
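As an illustration of these four quantities, the following Python sketch evaluates them for a hypothetical plate; the conductivity (0.04 W/(m⋅K), an insulation-like value) and the dimensions are assumed purely for the example.

# Conductance, resistance, heat transfer coefficient and insulance
# for a single plate, following the general-scientific definitions above.
k = 0.04    # thermal conductivity, W/(m*K)  (assumed insulation-like value)
A = 2.0     # plate area, m^2                (assumed)
L = 0.10    # plate thickness, m             (assumed)

conductance = k * A / L          # W/K
resistance = L / (k * A)         # K/W
heat_transfer_coeff = k / L      # W/(m^2*K)
insulance = L / k                # m^2*K/W (the SI R-value for this thickness)

print(f"conductance          = {conductance:.2f} W/K")
print(f"resistance           = {resistance:.2f} K/W")
print(f"heat transfer coeff. = {heat_transfer_coeff:.2f} W/(m^2*K)")
print(f"insulance (R-value)  = {insulance:.2f} m^2*K/W")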

Resistance

Thermal resistance is the ability of a material to resist the flow of heat.
Thermal resistance is the reciprocal of thermal conductance, i.e., lowering its value will raise the heat conduction and vice versa.

When thermal resistances occur in series, they are additive. Thus, when heat flows consecutively through two components each with a resistance of 3 °C/W, the total resistance is 3 °C/W + 3 °C/W = 6 °C/W.

A common engineering design problem involves the selection of an appropriate sized heat sink for a given heat source. Working in units of thermal resistance greatly simplifies the design calculation. The following formula can be used to estimate the performance:
R_\text{hs} = \frac{\Delta T}{P_\text{th}} - R_\text{s}
where:
  • Rhs is the maximum thermal resistance of the heat sink to ambient, in °C/W (equivalent to K/W)
  • ΔT is the required temperature difference (temperature drop), in °C
  • Pth is the thermal power (heat flow), in watts
  • Rs is the thermal resistance of the heat source, in °C/W
For example, if a component produces 100 W of heat, and has a thermal resistance of 0.5 °C/W, what is the maximum thermal resistance of the heat sink? Suppose the maximum temperature is 125 °C, and the ambient temperature is 25 °C; then ΔT is 100 °C. The heat sink's thermal resistance to ambient must then be 0.5 °C/W or less (total resistance component and heat sink is then 1.0 °C/W).
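The worked example above can be reproduced in a few lines of Python; the numbers are the ones quoted in the text, and the series-addition rule for thermal resistances is used to check the stated total.

# Heat-sink sizing: R_hs = dT / P_th - R_s, using the figures from the example above.
P_th = 100.0        # heat dissipated by the component, W
R_s = 0.5           # thermal resistance of the heat source, degC/W
T_max = 125.0       # maximum allowed component temperature, degC
T_ambient = 25.0    # ambient temperature, degC

delta_T = T_max - T_ambient            # 100 degC
R_hs_max = delta_T / P_th - R_s        # 0.5 degC/W
print(f"maximum heat-sink resistance: {R_hs_max:.2f} degC/W")

# Thermal resistances in series add, so the total source-to-ambient resistance is:
R_total = R_s + R_hs_max               # 1.0 degC/W
print(f"total resistance (component + heat sink): {R_total:.2f} degC/W")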

Transmittance

A third term, thermal transmittance, incorporates the thermal conductance of a structure along with heat transfer due to convection and radiation. It is measured in the same units as thermal conductance and is sometimes known as the composite thermal conductance. The term U-value is often used.

Admittance

The thermal admittance of a material, such as a building fabric, is a measure of the ability of a material to transfer heat in the presence of a temperature difference on opposite sides of the material. Thermal admittance is measured in the same units as a heat transfer coefficient, power (watts) per unit area (square meters) per temperature change (kelvins). Thermal admittance of a building fabric affects a building's thermal response to variation in outside temperature.[3]

Coefficient of thermal conductivity: The coefficient of thermal conductivity of the material of a substance is numerically equal to the quantity of heat that conducts in one second normally through a slab of unit length and unit area, the difference of temperature between its end faces being one degree.

Influencing factors

Effect of temperature on thermal conductivity

The effect of temperature on thermal conductivity is different for metals and nonmetals. In metals, heat conductivity is primarily due to free electrons. Following the Wiedemann–Franz law, thermal conductivity of metals is approximately proportional to the absolute temperature (in kelvins) times electrical conductivity. In pure metals the electrical conductivity decreases with increasing temperature and thus the product of the two, the thermal conductivity, stays approximately constant. However, as temperatures approach absolute zero, the thermal conductivity decreases sharply.[4] In alloys the change in electrical conductivity is usually smaller and thus thermal conductivity increases with temperature, often proportionally to temperature. Many pure metals have a peak thermal conductivity between 2 K and 10 K.
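As a rough illustration of the Wiedemann–Franz law, the sketch below estimates the thermal conductivity of copper from its electrical conductivity. The Lorenz number and the room-temperature conductivity of copper are standard handbook values, not figures from this article, and the simple proportionality neglects any phonon contribution.

# Wiedemann-Franz estimate: k ~ L * sigma * T for a metal.
LORENZ_NUMBER = 2.44e-8   # W*Ohm/K^2, theoretical Sommerfeld value
sigma_copper = 5.96e7     # S/m, electrical conductivity of copper near 293 K (handbook value)
T = 293.0                 # K

k_estimate = LORENZ_NUMBER * sigma_copper * T
print(f"estimated k for copper: {k_estimate:.0f} W/(m*K)")  # ~426, vs. ~400 measured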

On the other hand, heat conductivity in nonmetals is mainly due to lattice vibrations (phonons). Except for high quality crystals at low temperatures, the phonon mean free path is not reduced significantly at higher temperatures. Thus, the thermal conductivity of nonmetals is approximately constant at high temperatures. At low temperatures well below the Debye temperature, thermal conductivity decreases, as does the heat capacity, due to carrier scattering from defects at very low temperatures.[4]

Chemical phase

When a material undergoes a phase change from solid to liquid or from liquid to gas the thermal conductivity may change. An example of this would be the change in thermal conductivity that occurs when ice (thermal conductivity of 2.18 W/(m⋅K) at 0 °C) melts to form liquid water (thermal conductivity of 0.56 W/(m⋅K) at 0 °C).[5]

Thermal anisotropy

Some substances, such as non-cubic crystals, can exhibit different thermal conductivities along different crystal axes, due to differences in phonon coupling along a given crystal axis. Sapphire is a notable example of variable thermal conductivity based on orientation and temperature, with 35 W/(m⋅K) along the C-axis and 32 W/(m⋅K) along the A-axis.[6] Wood generally conducts better along the grain than across it. Other examples of materials where the thermal conductivity varies with direction are metals that have undergone heavy cold pressing, laminated materials, cables, the materials used for the Space Shuttle thermal protection system, and fiber-reinforced composite structures.[7]

When anisotropy is present, the direction of heat flow may not be exactly the same as the direction of the thermal gradient.

Electrical conductivity

In metals, thermal conductivity approximately tracks electrical conductivity according to the Wiedemann–Franz law, as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. Highly electrically conductive silver is less thermally conductive than diamond, which is an electrical insulator but, owing to its orderly array of atoms, conducts heat via phonons.

Magnetic field

The influence of magnetic fields on thermal conductivity is known as the Righi-Leduc effect.

Convection


Exhaust system components with ceramic coatings having a low thermal conductivity reduce heating of nearby sensitive components

Air and other gases are generally good insulators, in the absence of convection. Therefore, many insulating materials function simply by having a large number of gas-filled pockets which prevent large-scale convection. Examples of these include expanded and extruded polystyrene (popularly referred to as "styrofoam") and silica aerogel, as well as warm clothes. Natural, biological insulators such as fur and feathers achieve similar effects by dramatically inhibiting convection of air or water near an animal's skin.

Light gases, such as hydrogen and helium, typically have high thermal conductivity. Dense gases such as xenon and dichlorodifluoromethane have low thermal conductivity. An exception, sulfur hexafluoride, a dense gas, has a relatively high thermal conductivity due to its high heat capacity. Argon and krypton, gases denser than air, are often used in insulated glazing (double-paned windows) to improve their insulation characteristics.

Isotopic purity

Isotopically pure diamond can have a significantly higher thermal conductivity,[8] e.g. 41,000 W·m−1·K−1,[9] with a calculated value of 200,000 W·m−1·K−1 for 99.999% 12C.[9]

Physical origins

At the atomic level, there are no simple, correct expressions for thermal conductivity. Atomically, the thermal conductivity of a system is determined by how atoms composing the system interact. There are two different approaches for calculating the thermal conductivity of a system.
  • The first approach employs the Green–Kubo relations (a common form of the relation is given after this list). Although this employs analytic expressions which, in principle, can be solved, calculating the thermal conductivity of a dense fluid or solid using this relation requires the use of molecular dynamics computer simulation.
  • The second approach is based on the relaxation time approach. Due to the anharmonicity within the crystal potential, the phonons in the system are known to scatter. There are three main mechanisms for scattering:
    • Boundary scattering, a phonon hitting the boundary of a system;
    • Mass defect scattering, a phonon hitting an impurity within the system and scattering;
    • Phonon-phonon scattering, a phonon breaking into two lower energy phonons or a phonon colliding with another phonon and merging into one higher-energy phonon.
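For reference, the Green–Kubo relation mentioned in the first bullet expresses the thermal conductivity of an isotropic system in terms of the equilibrium heat-current autocorrelation function; in one common form, with J the total microscopic heat current of a simulation cell of volume V at temperature T and kB the Boltzmann constant,
\kappa = \frac{1}{3 V k_\text{B} T^2} \int_0^{\infty} \left\langle \mathbf{J}(t) \cdot \mathbf{J}(0) \right\rangle \, \mathrm{d}t .
In practice the correlation function is sampled from an equilibrium molecular dynamics trajectory and the integral is truncated at a finite correlation time.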

Lattice waves

Heat transport in both amorphous and crystalline dielectric solids is by way of elastic vibrations of the lattice (phonons). This transport mode is limited by the elastic scattering of acoustic phonons at lattice defects. These predictions were confirmed by the experiments of Chang and Jones on commercial glasses and glass ceramics, where the mean free paths were limited by "internal boundary scattering" to length scales of 10−2 cm to 10−3 cm.[10][11]

The phonon mean free path has been associated directly with the effective relaxation length for processes without directional correlation. If Vg is the group velocity of a phonon wave packet, then the relaxation length l is defined as:
l = V_\text{g} t
where t is the characteristic relaxation time. Since longitudinal waves have a much greater phase velocity than transverse waves, Vlong is much greater than Vtrans, and the relaxation length or mean free path of longitudinal phonons will be much greater. Thus, thermal conductivity will be largely determined by the speed of longitudinal phonons.[10][12]

Regarding the dependence of wave velocity on wavelength or frequency (dispersion), low-frequency phonons of long wavelength will be limited in relaxation length by elastic Rayleigh scattering. This type of light scattering from small particles is proportional to the fourth power of the frequency. For higher frequencies, the power of the frequency will decrease until at highest frequencies scattering is almost frequency independent. Similar arguments were subsequently generalized to many glass forming substances using Brillouin scattering.[13][14][15][16]

Phonons in the acoustical branch dominate the phonon heat conduction as they have greater energy dispersion and therefore a greater distribution of phonon velocities. Additional optical modes could also be caused by the presence of internal structure (i.e., charge or mass) at a lattice point; it is implied that the group velocity of these modes is low and therefore their contribution to the lattice thermal conductivity λL is small.[17]

Each phonon mode can be split into one longitudinal and two transverse polarization branches. By extrapolating the phenomenology of lattice points to the unit cells it is seen that the total number of degrees of freedom is 3pq, where p is the number of primitive cells and q is the number of atoms per unit cell. From these, only 3p are associated with the acoustic modes; the remaining 3p(q − 1) are accommodated through the optical branches. This implies that structures with larger p and q contain a greater number of optical modes and a reduced λL.

From these ideas, it can be concluded that increasing crystal complexity, which is described by a complexity factor CF (defined as the number of atoms/primitive unit cell), decreases λL. Micheline Roufosse and P.G. Klemens derived the exact proportionality in their article Thermal Conductivity of Complex Dielectric Crystals at Phys. Rev. B 7, 5379–5386 (1973). This was done by assuming that the relaxation time τ decreases with increasing number of atoms in the unit cell and then scaling the parameters of the expression for thermal conductivity in high temperatures accordingly.[17]

Describing anharmonic effects is complicated because an exact treatment, as in the harmonic case, is not possible, and phonons are no longer exact eigensolutions of the equations of motion. Even if the state of motion of the crystal could be described with a plane wave at a particular time, its accuracy would deteriorate progressively with time. Time development would have to be described by introducing a spectrum of other phonons, which is known as phonon decay. The two most important anharmonic effects are thermal expansion and the phonon thermal conductivity.

Only when the phonon number ⟨n⟩ deviates from the equilibrium value ⟨n⟩0 can a thermal current arise, as stated in the following expression
Q_x = \frac{1}{V} \sum_{q,j} \hbar\omega \left( \langle n\rangle - \langle n\rangle^0 \right) v_x ,
where v is the energy transport velocity of phonons. Only two mechanisms exist that can cause time variation of ⟨n⟩ in a particular region: the number of phonons that diffuse into the region from neighboring regions differs from those that diffuse out, or phonons decay inside the same region into other phonons. A special form of the Boltzmann equation
\frac{d\langle n\rangle}{dt} = \left( \frac{\partial \langle n\rangle}{\partial t} \right)_{\text{diff.}} + \left( \frac{\partial \langle n\rangle}{\partial t} \right)_{\text{decay}}
states this. When steady state conditions are assumed, the total time derivative of the phonon number is zero, because the temperature is constant in time and therefore the phonon number also stays constant. Time variation due to phonon decay is described with a relaxation time (τ) approximation
\left( \frac{\partial \langle n\rangle}{\partial t} \right)_{\text{decay}} = - \frac{\langle n\rangle - \langle n\rangle^0}{\tau} ,
which states that the more the phonon number deviates from its equilibrium value, the more its time variation increases. When steady state conditions and local thermal equilibrium are assumed, we get the following equation
\left( \frac{\partial \langle n\rangle}{\partial t} \right)_{\text{diff.}} = -v_x \frac{\partial \langle n\rangle^0}{\partial T} \frac{\partial T}{\partial x} .
Using the relaxation time approximation for the Boltzmann equation and assuming steady-state conditions, the phonon thermal conductivity λL can be determined. The temperature dependence for λL originates from the variety of processes, whose significance for λL depends on the temperature range of interest. Mean free path is one factor that determines the temperature dependence for λL, as stated in the following equation
\lambda_L = \frac{1}{3V} \sum_{q,j} v(q,j)\, \Lambda(q,j)\, \frac{\partial}{\partial T} \epsilon\left( \omega(q,j), T \right) ,
where Λ is the mean free path for the phonon and ∂ε/∂T denotes the heat capacity. This equation is a result of combining the four previous equations with each other and knowing that ⟨v_x²⟩ = v²/3 for cubic or isotropic systems and Λ = vτ.[18]
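Explicitly, the combination proceeds as follows (a brief sketch of the step left implicit above): in the steady state the diffusion and decay terms cancel, so
\langle n\rangle - \langle n\rangle^0 = -v_x\, \tau\, \frac{\partial \langle n\rangle^0}{\partial T}\, \frac{\partial T}{\partial x} ,
and substituting this into the expression for Qx, using ⟨v_x²⟩ = v²/3, Λ = vτ and ∂ε/∂T = ℏω ∂⟨n⟩0/∂T, gives Qx = −λL ∂T/∂x with λL given by the sum above.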

At low temperatures (< 10 K) the anharmonic interaction does not influence the mean free path and therefore the thermal resistivity is determined only from processes for which q-conservation does not hold. These processes include the scattering of phonons by crystal defects, or the scattering from the surface of the crystal in the case of a high-quality single crystal. Therefore, the thermal conductance depends on the external dimensions of the crystal and the quality of the surface. Thus, the temperature dependence of λL is determined by the specific heat and is therefore proportional to T³.[18]

Phonon quasimomentum is defined as ℏq and differs from normal momentum because it is only defined within an arbitrary reciprocal lattice vector. At higher temperatures (10 K < T < Θ), the conservation of energy ℏω1 = ℏω2 + ℏω3 and of quasimomentum q1 = q2 + q3 + G, where q1 is the wave vector of the incident phonon and q2, q3 are the wave vectors of the resultant phonons, may also involve a reciprocal lattice vector G, complicating the energy transport process. These processes can also reverse the direction of energy transport.

Therefore, these processes are also known as Umklapp (U) processes and can only occur when phonons with sufficiently large q-vectors are excited, because unless the sum of q2 and q3 points outside of the Brillouin zone the momentum is conserved and the process is normal scattering (N-process). The probability of a phonon having energy E is given by the Boltzmann distribution P ∝ e^(−E/kT). For a U-process to occur, the decaying phonon must have a wave vector q1 that is roughly half of the diameter of the Brillouin zone, because otherwise quasimomentum would not be conserved.

Therefore, these phonons have to possess an energy of ~kΘ/2, which is a significant fraction of the Debye energy needed to generate new phonons. The probability for this is proportional to e^(−Θ/bT), with b = 2. The temperature dependence of the mean free path therefore has the exponential form e^(Θ/bT). The presence of the reciprocal lattice wave vector implies a net phonon backscattering and a resistance to phonon and thermal transport, resulting in a finite λL,[17] as it means that momentum is not conserved. Only momentum non-conserving processes can cause thermal resistance.[18]

At high temperatures (T > Θ), the mean free path and therefore λL have a T−1 temperature dependence, which one arrives at from the formula e^(Θ/bT) by making the approximation e^x ∝ x for x < 1 and writing x = Θ/bT. This dependency is known as Eucken's law and originates from the temperature dependence of the probability for the U-process to occur.[17][18]

Thermal conductivity is usually described by the Boltzmann equation with the relaxation time approximation in which phonon scattering is a limiting factor. Another approach is to use analytic models or molecular dynamics or Monte Carlo based methods to describe thermal conductivity in solids.

Short wavelength phonons are strongly scattered by impurity atoms if an alloyed phase is present, but mid and long wavelength phonons are less affected. Mid and long wavelength phonons carry a significant fraction of the heat, so to further reduce the lattice thermal conductivity one has to introduce structures that scatter these phonons. This is achieved by introducing an interface scattering mechanism, which requires structures whose characteristic length is longer than that of the impurity atom. Some possible ways to realize these interfaces are nanocomposites and embedded nanoparticles or structures.[19]

Electronic thermal conductivity

Hot electrons from higher energy states carry more thermal energy than cold electrons, while the electrical conductivity is rather insensitive to the energy distribution of carriers because the amount of charge that electrons carry does not depend on their energy. This is the physical reason why the electronic thermal conductivity is more sensitive to the energy dependence of the density of states and of the relaxation time than the electrical conductivity is.[17]

Mahan and Sofo (PNAS 1996, 93(15), 7436–7439) showed that materials with a certain electronic structure have reduced electronic thermal conductivity. Based on their analysis, one can show that if the electron density of states in the material is close to a delta function, the electronic thermal conductivity drops to zero. The starting point is the equation λE = λ0 − TσS², where λ0 is the electronic thermal conductivity when the electrochemical potential gradient inside the sample is zero. The transport coefficients are then written as
\sigma = \sigma_0 I_0 ,
\sigma S = \left( \frac{k}{e} \right) \sigma_0 I_1 ,
\lambda_0 = \left( \frac{k}{e} \right)^2 \sigma_0 T I_2 ,
where σ0 = e²/(ℏ a0) and a0 is the Bohr radius. The dimensionless integrals In are defined as
I_n = \int_{-\infty}^{\infty} \frac{e^x}{\left( e^x + 1 \right)^2}\, s(x)\, x^n\, dx ,
where s(x) is the dimensionless transport distribution function. The integrals In are the moments of the function
P(x) = D(x)\, s(x) , \qquad D(x) = \frac{e^x}{\left( e^x + 1 \right)^2} ,
where x is the energy of the carriers. Substituting the previous formulas for the transport coefficients into the equation for λE gives
\lambda_\mathrm{E} = \left( \frac{k}{e} \right)^2 \sigma_0 T \left( I_2 - \frac{I_1^2}{I_0} \right) .
From the previous equation we see that, for λE to be zero, the bracketed term containing the In has to vanish. Now if we assume that
s(x) = f(x)\, \delta(x - b) ,
where δ is the Dirac delta function, the In take the following values:
I_0 = D(b)\, f(b) ,
I_1 = D(b)\, f(b)\, b ,
I_2 = D(b)\, f(b)\, b^2 .
By substituting these expressions into the equation for λE, we see that it goes to zero. Therefore, P(x) has to be a delta function for the electronic thermal conductivity to vanish.[19]
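A small numerical check of this limit can be made by approximating the delta function with a narrow Gaussian transport distribution centred at an assumed energy b: as the width shrinks, the bracketed combination I2 − I1²/I0, and with it λE, tends to zero. The Python sketch below is only an illustration of the algebra above; all parameter values are chosen arbitrarily.

import numpy as np

# Numerical illustration of the Mahan-Sofo argument: approximate s(x) = f(x) delta(x - b)
# by a narrow Gaussian and watch I2 - I1^2/I0 (and hence lambda_E) go to zero.
def D(x):
    ex = np.exp(x)
    return ex / (ex + 1.0) ** 2

b = 2.0                                  # assumed peak position (arbitrary for the demo)
x = np.linspace(-40.0, 40.0, 400001)     # integration grid
dx = x[1] - x[0]

for width in (1.0, 0.3, 0.1, 0.03):
    # narrow normalized Gaussian standing in for the delta function
    s = np.exp(-(x - b) ** 2 / (2.0 * width ** 2)) / (width * np.sqrt(2.0 * np.pi))
    I = [np.sum(D(x) * s * x ** n) * dx for n in range(3)]
    bracket = I[2] - I[1] ** 2 / I[0]
    print(f"width = {width:5.2f}  ->  I2 - I1^2/I0 = {bracket:.3e}")
# The bracketed term shrinks roughly like width^2, so lambda_E vanishes in the delta-function limit.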

Equations

In an isotropic medium the thermal conductivity is the parameter k in the Fourier expression for the heat flux
{\vec {q}}=-k{\vec {\nabla }}T
where {\vec {q}} is the heat flux (amount of heat flowing per second and per unit area) and {\vec {\nabla }}T the temperature gradient. The sign in the expression is chosen so that always k > 0 as heat always flows from a high temperature to a low temperature. This is a direct consequence of the second law of thermodynamics.

In the one-dimensional case q = H/A with H the amount of heat flowing per second through a surface with area A and the temperature gradient is dT/dx so
H=-kA{\frac {\mathrm {d} T}{\mathrm {d} x}}.
In case of a thermally insulated bar (except at the ends) in the steady state H is constant. If A is constant as well the expression can be integrated with the result
HL = A \int_{T_\text{L}}^{T_\text{H}} k(T)\, \mathrm{d}T
where TH and TL are the temperatures at the hot end and the cold end respectively, and L is the length of the bar. It is convenient to introduce the thermal-conductivity integral
I_{k}(T)=\int _{0}^{T}k(T^{\prime })\mathrm {d} T^{\prime }.
The heat flow rate is then given by
H = \frac{A}{L} \left[ I_k(T_\text{H}) - I_k(T_\text{L}) \right] .
If the temperature difference is small k can be taken as constant. In that case
H = kA\, \frac{T_\text{H} - T_\text{L}}{L} .
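As a concrete illustration of the integral form, the Python sketch below computes the heat flow through a bar for an assumed linear temperature dependence k(T) = k0(1 + αT); the material parameters and the geometry are invented for the example. Because k is linear in T here, the constant-k approximation evaluated at the mean temperature gives the same answer.

# Heat flow through a laterally insulated bar with temperature-dependent k(T),
# using H = (A/L) * [I_k(T_H) - I_k(T_L)] with I_k(T) the integral of k from 0 to T.
k0 = 50.0                 # W/(m*K), conductivity at T = 0 in the assumed model
alpha = 2.0e-3            # 1/K, assumed linear slope of k(T)
A = 1.0e-4                # cross-section, m^2 (assumed)
L = 0.5                   # bar length, m (assumed)
T_H, T_L = 400.0, 300.0   # end temperatures, K

def I_k(T):
    # Thermal-conductivity integral for k(T) = k0 * (1 + alpha*T),
    # evaluated analytically: I_k(T) = k0 * (T + alpha*T^2/2).
    return k0 * (T + alpha * T ** 2 / 2.0)

H = (A / L) * (I_k(T_H) - I_k(T_L))
print(f"heat flow H = {H:.3f} W")

# Constant-k expression evaluated at the mean temperature; exact here because k is linear in T.
T_mean = 0.5 * (T_H + T_L)
H_approx = k0 * (1.0 + alpha * T_mean) * A * (T_H - T_L) / L
print(f"constant-k approximation: {H_approx:.3f} W")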

Simple kinetic picture


Gas atoms moving randomly through a surface.

In this section we will motivate an expression for the thermal conductivity in terms of microscopic parameters.

Consider a gas of particles of negligible volume governed by hard-core interactions and within a vertical temperature gradient. The upper side is hot and the lower side cold. There is a downward energy flow because the gas atoms, going down, have a higher energy than the atoms going up. The net flow of energy per second is the heat flow H, which is proportional to the number of particles that cross the area A per second. In fact, H should also be proportional to the particle density n, the mean particle velocity v, and the amount of energy transported per particle, expressed through the heat capacity per particle c and some characteristic temperature difference ΔT. So far, in our model,
H \propto n\, v\, c\, A\, \Delta T .
The unit of H is J/s and that of the right-hand side is (particle/m³) × (m/s) × (J/(K × particle)) × (m²) × (K) = J/s, so this is already of the right dimension. Only a numerical factor is missing. For ΔT we take the temperature difference of the gas between two collisions, ΔT = l dT/dz, where l is the mean free path.

Detailed kinetic calculations[20] show that the numerical factor is -1/3, so, all in all,
H = -\frac{1}{3}\, n\, v\, c\, l\, A\, \frac{dT}{dz} .
Comparison with the one-dimensional expression for the heat flow, given above, gives an expression for the factor k:
k = \frac{1}{3}\, n\, v\, c\, l .
The particle density and the heat capacity per particle can be combined as the heat capacity per unit volume so
k = \frac{1}{3}\, v\, l\, \frac{C_V}{V_m}
where CV is the molar heat capacity at constant volume and Vm the molar volume.

More rigorously, the mean free path of a molecule in a gas is given by l ∝ 1/(nσ), where σ is the collision cross section. So
k\propto {\frac {c}{\sigma }}v.
The heat capacity per particle c and the cross section σ are both temperature independent, so the temperature dependence of k is determined by the T dependence of v. For a monatomic gas with atomic mass M, v is given by v = √(3RT/M). So
k\propto {\sqrt {\frac {T}{M}}}.
This expression also shows why gases with a low mass (hydrogen, helium) have a high thermal conductivity.
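To see how these formulas are used, the sketch below evaluates k = (1/3) n v c l for argon at room conditions. The molecular diameter used for the mean free path and the quoted measured conductivity are ordinary textbook figures rather than values from this article, and the simple model should be expected to give only the right order of magnitude.

import math

# Order-of-magnitude estimate of the thermal conductivity of argon at 300 K, 1 atm,
# using k = (1/3) * n * v * c * l from the simple kinetic picture above.
k_B = 1.380649e-23   # J/K
R = 8.314            # J/(mol*K)
T = 300.0            # K
P = 101325.0         # Pa
M = 0.040            # kg/mol, argon
d = 3.6e-10          # m, effective molecular diameter of argon (assumed textbook value)

n = P / (k_B * T)                                        # particle density, 1/m^3
v = math.sqrt(3.0 * R * T / M)                           # rms speed, m/s (as in the text)
c = 1.5 * k_B                                            # heat capacity per particle, monatomic gas
l = k_B * T / (math.sqrt(2.0) * math.pi * d ** 2 * P)    # mean free path, m

k = n * v * c * l / 3.0
print(f"mean free path l ~ {l * 1e9:.0f} nm")
print(f"estimated k(argon) ~ {k:.4f} W/(m*K)")   # a few mW/(m*K); measured is ~0.018 W/(m*K)

# The model also predicts the scaling k proportional to sqrt(T/M):
# doubling T should raise k by roughly sqrt(2).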

For metals at low temperatures the heat is carried mainly by the free electrons. In this case the mean velocity is the Fermi velocity which is temperature independent. The mean free path is determined by the impurities and the crystal imperfections which are temperature independent as well. So the only temperature-dependent quantity is the heat capacity c, which, in this case, is proportional to T. So
k = k_0\, T \qquad \text{(metal at low temperature)}
with k0 a constant. For pure metals such as copper, silver, etc. l is large, so the thermal conductivity is high. At higher temperatures the mean free path is limited by the phonons, so the thermal conductivity tends to decrease with temperature. In alloys the density of the impurities is very high, so l and, consequently k, are small. Therefore, alloys, such as stainless steel, can be used for thermal insulation.

Operator (computer programming)

From Wikipedia, the free encyclopedia