
Monday, May 28, 2018

Thermal conductivity

From Wikipedia, the free encyclopedia
Thermal conductivity (often denoted k, λ, or κ) is the property of a material to conduct heat. It is evaluated primarily in terms of Fourier's law for heat conduction. In general, thermal conductivity is a tensor quantity, expressing the anisotropy of the material.

Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. Correspondingly, materials of high thermal conductivity are widely used in heat sink applications and materials of low thermal conductivity are used as thermal insulation. The thermal conductivity of a material may depend on temperature. The reciprocal of thermal conductivity is called thermal resistivity.

Units of thermal conductivity

In the International System of Units (SI), thermal conductivity is measured in watts per meter-kelvin (W/(m⋅K)). The dimension of thermal conductivity is M¹L¹T⁻³Θ⁻¹, expressed in terms of the dimensions mass (M), length (L), time (T), and temperature (Θ). In Imperial units, thermal conductivity is measured in BTU/(h⋅ft⋅°F).[note 1][1]

Other units which are closely related to the thermal conductivity are in common use in the construction and textile industries. The construction industry makes use of units such as the R-value (resistance) and the U-value (transmittance). Although related to the thermal conductivity of a material used in an insulation product, R- and U-values are dependent on the thickness of the product.[note 2]

Likewise the textile industry has several units including the tog and the clo which express thermal resistance of a material in a way analogous to the R-values used in the construction industry.

Measurement

There are a number of ways to measure thermal conductivity. Each of these is suitable for a limited range of materials, depending on the thermal properties and the medium temperature. There is a distinction between steady-state and transient techniques.

In general, steady-state techniques are useful when the temperature of the material does not change with time. This makes the signal analysis straightforward (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed. The divided bar (in various forms) is the most common device used for consolidated rock samples.

Experimental values


Experimental values of thermal conductivity.

Thermal conductivity is important in material science, research, electronics, building insulation and related fields, especially where high operating temperatures are achieved. Several materials are shown in the list of thermal conductivities. These should be considered approximate due to the uncertainties related to material definitions.

High energy generation rates within electronics or turbines require the use of materials with high thermal conductivity such as copper (see: Copper in heat exchangers), aluminium, and silver. On the other hand, materials with low thermal conductance, such as polystyrene and alumina, are used in building construction or in furnaces in an effort to slow the flow of heat, i.e. for insulation purposes.

Definitions

The reciprocal of thermal conductivity is thermal resistivity, usually expressed in kelvin-meters per watt (K⋅m⋅W⁻¹). For a given thickness of a material, that particular construction's thermal resistance and the reciprocal property, thermal conductance, can be calculated. Unfortunately, there are differing definitions for these terms.

Thermal conductivity, k, often depends on temperature. Therefore, the definitions listed below make sense when the thermal conductivity is temperature independent. Otherwise a representative mean value has to be considered; for more, see the equations section below.

Conductance

For general scientific use, thermal conductance is the quantity of heat that passes in unit time through a plate of particular area and thickness when its opposite faces differ in temperature by one kelvin. For a plate of thermal conductivity k, area A and thickness L, the conductance calculated is kA/L, measured in W⋅K⁻¹ (equivalent to: W/°C). ASTM C168-15, however, defines thermal conductance as "time rate of steady state heat flow through a unit area of a material or construction induced by a unit temperature difference between the body surfaces" and defines the units as W/(m²⋅K) (Btu/(h⋅ft²⋅°F)).[2]

The thermal conductance of that particular construction is the inverse of the thermal resistance. Thermal conductivity and conductance are analogous to electrical conductivity (A⋅m⁻¹⋅V⁻¹) and electrical conductance (A⋅V⁻¹).

There is also a measure known as heat transfer coefficient: the quantity of heat that passes in unit time through a unit area of a plate of particular thickness when its opposite faces differ in temperature by one kelvin. The reciprocal is thermal insulance. In summary:
  • thermal conductance = kA/L, measured in W⋅K⁻¹ or, per ASTM C168-15, in W/(m²⋅K)[2]
    • thermal resistance = L/(kA), measured in K⋅W⁻¹ (equivalent to: °C/W)
  • heat transfer coefficient = k/L, measured in W⋅K⁻¹⋅m⁻²
    • thermal insulance = L/k, measured in K⋅m²⋅W⁻¹.
The heat transfer coefficient is also known as thermal admittance in the sense that the material may be seen as admitting heat to flow.
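To make the four quantities above concrete, here is a minimal Python sketch for a single flat plate; the plate dimensions and the glass-like conductivity are assumed example values, not figures from the text.

```python
# Quantities defined above, evaluated for one flat plate.
# Assumed example values: a 1 m^2 glass pane, 10 mm thick, k ~ 1.0 W/(m*K).
k = 1.0    # thermal conductivity, W/(m*K)
A = 1.0    # plate area, m^2
L = 0.010  # plate thickness, m

conductance = k * A / L      # W/K
resistance = L / (k * A)     # K/W
htc = k / L                  # heat transfer coefficient, W/(m^2*K)
insulance = L / k            # m^2*K/W

print(f"thermal conductance       = {conductance:.0f} W/K")
print(f"thermal resistance        = {resistance:.4f} K/W")
print(f"heat transfer coefficient = {htc:.0f} W/(m^2*K)")
print(f"thermal insulance         = {insulance:.3f} m^2*K/W")
```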

Resistance

Thermal resistance is the ability of a material to resist the flow of heat.
Thermal resistance is the reciprocal of thermal conductance, i.e., lowering its value will raise the heat conduction and vice versa.

When thermal resistances occur in series, they are additive. Thus, when heat flows consecutively through two components each with a resistance of 3 °C/W, the total resistance is 3 °C/W + 3 °C/W = 6 °C/W.

A common engineering design problem involves the selection of an appropriate sized heat sink for a given heat source. Working in units of thermal resistance greatly simplifies the design calculation. The following formula can be used to estimate the performance:
R_{hs} = \frac{\Delta T}{P_{th}} - R_s
where:
  • Rhs is the maximum thermal resistance of the heat sink to ambient, in °C/W (equivalent to K/W)
  • ΔT is the required temperature difference (temperature drop), in °C
  • Pth is the thermal power (heat flow), in watts
  • Rs is the thermal resistance of the heat source, in °C/W
For example, if a component produces 100 W of heat, and has a thermal resistance of 0.5 °C/W, what is the maximum thermal resistance of the heat sink? Suppose the maximum temperature is 125 °C, and the ambient temperature is 25 °C; then ΔT is 100 °C. The heat sink's thermal resistance to ambient must then be 0.5 °C/W or less (total resistance component and heat sink is then 1.0 °C/W).
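The worked example above can be checked with a few lines of Python; the numbers are the ones used in the text, plugged into Rhs = ΔT/Pth − Rs.

```python
# Heat sink sizing: R_hs = dT / P_th - R_s, using the numbers from the
# example above (100 W component, R_s = 0.5 degC/W, 125 degC max, 25 degC ambient).
P_th = 100.0      # thermal power, W
R_s = 0.5         # thermal resistance of the heat source, degC/W
T_max = 125.0     # maximum allowed temperature, degC
T_ambient = 25.0  # ambient temperature, degC

dT = T_max - T_ambient     # required temperature drop, degC
R_hs = dT / P_th - R_s     # maximum heat-sink-to-ambient resistance
print(f"maximum heat sink thermal resistance: {R_hs:.2f} degC/W")  # 0.50
```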

Transmittance

A third term, thermal transmittance, incorporates the thermal conductance of a structure along with heat transfer due to convection and radiation. It is measured in the same units as thermal conductance and is sometimes known as the composite thermal conductance. The term U-value is often used.

Admittance

The thermal admittance of a material, such as a building fabric, is a measure of the ability of a material to transfer heat in the presence of a temperature difference on opposite sides of the material. Thermal admittance is measured in the same units as a heat transfer coefficient, power (watts) per unit area (square meters) per temperature change (kelvins). Thermal admittance of a building fabric affects a building's thermal response to variation in outside temperature.[3]

Coefficient of thermal conductivity: The coefficient of thermal conductivity of the material of a substance is numerically equal to the quantity of heat that is conducted in one second normally through a slab of unit thickness and unit area, the difference of temperature between its end faces being one degree.

Influencing factors

Effect of temperature on thermal conductivity

The effect of temperature on thermal conductivity is different for metals and nonmetals. In metals, heat conductivity is primarily due to free electrons. Following the Wiedemann–Franz law, thermal conductivity of metals is approximately proportional to the absolute temperature (in kelvins) times electrical conductivity. In pure metals the electrical conductivity decreases with increasing temperature and thus the product of the two, the thermal conductivity, stays approximately constant. However, as temperatures approach absolute zero, the thermal conductivity decreases sharply.[4] In alloys the change in electrical conductivity is usually smaller and thus thermal conductivity increases with temperature, often proportionally to temperature. Many pure metals have a peak thermal conductivity between 2 K and 10 K.

On the other hand, heat conductivity in nonmetals is mainly due to lattice vibrations (phonons). Except for high quality crystals at low temperatures, the phonon mean free path is not reduced significantly at higher temperatures. Thus, the thermal conductivity of nonmetals is approximately constant at high temperatures. At low temperatures well below the Debye temperature, thermal conductivity decreases, as does the heat capacity, due to carrier scattering from defects at very low temperatures.[4]

Chemical phase

When a material undergoes a phase change from solid to liquid or from liquid to gas the thermal conductivity may change. An example of this would be the change in thermal conductivity that occurs when ice (thermal conductivity of 2.18 W/(m⋅K) at 0 °C) melts to form liquid water (thermal conductivity of 0.56 W/(m⋅K) at 0 °C).[5]

Thermal anisotropy

Some substances, such as non-cubic crystals, can exhibit different thermal conductivities along different crystal axes, due to differences in phonon coupling along a given crystal axis. Sapphire is a notable example of variable thermal conductivity based on orientation and temperature, with 35 W/(m⋅K) along the C-axis and 32 W/(m⋅K) along the A-axis.[6] Wood generally conducts better along the grain than across it. Other examples of materials where the thermal conductivity varies with direction are metals that have undergone heavy cold pressing, laminated materials, cables, the materials used for the Space Shuttle thermal protection system, and fiber-reinforced composite structures.[7]

When anisotropy is present, the direction of heat flow may not be exactly the same as the direction of the thermal gradient.

Electrical conductivity

In metals, thermal conductivity approximately tracks electrical conductivity according to the Wiedemann–Franz law, as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. Highly electrically conductive silver is less thermally conductive than diamond, which is an electrical insulator but conducts heat via phonons due to its orderly array of atoms.
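The Wiedemann–Franz law can be written k ≈ LσT, where L ≈ 2.44 × 10⁻⁸ W·Ω·K⁻² is the Lorenz number. The sketch below uses this relation to estimate the thermal conductivity of copper from its room-temperature electrical conductivity; the conductivity figure is a commonly quoted approximate value, not data from the text.

```python
# Wiedemann-Franz estimate: k ~ L * sigma * T, with L the Lorenz number.
# sigma for copper is a commonly quoted approximate room-temperature value.
L_lorenz = 2.44e-8   # Lorenz number, W*Ohm/K^2
sigma_cu = 5.96e7    # electrical conductivity of copper, S/m (approximate)
T = 293.0            # temperature, K

k_estimate = L_lorenz * sigma_cu * T
print(f"estimated k for copper: {k_estimate:.0f} W/(m*K)")  # ~430; measured ~400
```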

Magnetic field

The influence of magnetic fields on thermal conductivity is known as the Righi-Leduc effect.

Convection


Exhaust system components with ceramic coatings having a low thermal conductivity reduce heating of nearby sensitive components

Air and other gases are generally good insulators, in the absence of convection. Therefore, many insulating materials function simply by having a large number of gas-filled pockets which prevent large-scale convection. Examples of these include expanded and extruded polystyrene (popularly referred to as "styrofoam") and silica aerogel, as well as warm clothes. Natural, biological insulators such as fur and feathers achieve similar effects by dramatically inhibiting convection of air or water near an animal's skin.

Light gases, such as hydrogen and helium, typically have high thermal conductivity. Dense gases such as xenon and dichlorodifluoromethane have low thermal conductivity. An exception, sulfur hexafluoride, a dense gas, has a relatively high thermal conductivity due to its high heat capacity. Argon and krypton, gases denser than air, are often used in insulated glazing (double-paned windows) to improve their insulation characteristics.

Isotopic purity

Isotopically pure diamond can have a significantly higher thermal conductivity,[8] e.g., 41,000 W·m⁻¹·K⁻¹, with a calculated value of about 200,000 W·m⁻¹·K⁻¹ for 99.999% ¹²C diamond.[9]

Physical origins

At the atomic level, there are no simple, correct expressions for thermal conductivity. Atomically, the thermal conductivity of a system is determined by how atoms composing the system interact. There are two different approaches for calculating the thermal conductivity of a system.
  • The first approach employs the Green–Kubo relations. Although this employs analytic expressions, which, in principle, can be solved, calculating the thermal conductivity of a dense fluid or solid using this relation requires the use of molecular dynamics computer simulation.
  • The second approach is based on the relaxation time approach. Due to the anharmonicity within the crystal potential, the phonons in the system are known to scatter. There are three main mechanisms for scattering:
    • Boundary scattering, a phonon hitting the boundary of a system;
    • Mass defect scattering, a phonon hitting an impurity within the system and scattering;
    • Phonon-phonon scattering, a phonon breaking into two lower energy phonons or a phonon colliding with another phonon and merging into one higher-energy phonon.

Lattice waves

Heat transport in both amorphous and crystalline dielectric solids is by way of elastic vibrations of the lattice (phonons). This transport mode is limited by the elastic scattering of acoustic phonons at lattice defects. These predictions were confirmed by the experiments of Chang and Jones on commercial glasses and glass ceramics, where the mean free paths were limited by "internal boundary scattering" to length scales of 10⁻² cm to 10⁻³ cm.[10][11]

The phonon mean free path has been associated directly with the effective relaxation length for processes without directional correlation. If Vg is the group velocity of a phonon wave packet, then the relaxation length l is defined as:
l = V_g t
where t is the characteristic relaxation time. Since longitudinal waves have a much greater phase velocity than transverse waves, Vlong is much greater than Vtrans, and the relaxation length or mean free path of longitudinal phonons will be much greater. Thus, thermal conductivity will be largely determined by the speed of longitudinal phonons.[10][12]

Regarding the dependence of wave velocity on wavelength or frequency (dispersion), low-frequency phonons of long wavelength will be limited in relaxation length by elastic Rayleigh scattering. This type of light scattering from small particles is proportional to the fourth power of the frequency. For higher frequencies, the power of the frequency will decrease until at highest frequencies scattering is almost frequency independent. Similar arguments were subsequently generalized to many glass forming substances using Brillouin scattering.[13][14][15][16]

Phonons in the acoustical branch dominate the phonon heat conduction as they have greater energy dispersion and therefore a greater distribution of phonon velocities. Additional optical modes could also be caused by the presence of internal structure (i.e., charge or mass) at a lattice point; it is implied that the group velocity of these modes is low and therefore their contribution to the lattice thermal conductivity λL is small.[17]

Each phonon mode can be split into one longitudinal and two transverse polarization branches. By extrapolating the phenomenology of lattice points to the unit cells it is seen that the total number of degrees of freedom is 3pq, where p is the number of primitive cells with q atoms/unit cell. Of these, only 3p are associated with the acoustic modes; the remaining 3p(q − 1) are accommodated through the optical branches. This implies that structures with larger p and q contain a greater number of optical modes and a reduced λL.

From these ideas, it can be concluded that increasing crystal complexity, which is described by a complexity factor CF (defined as the number of atoms/primitive unit cell), decreases λL. Micheline Roufosse and P. G. Klemens derived the exact proportionality in their article Thermal Conductivity of Complex Dielectric Crystals, Phys. Rev. B 7, 5379–5386 (1973). This was done by assuming that the relaxation time τ decreases with increasing number of atoms in the unit cell and then scaling the parameters of the expression for thermal conductivity at high temperatures accordingly.[17]

Describing anharmonic effects is complicated because an exact treatment, as in the harmonic case, is not possible, and phonons are no longer exact eigensolutions of the equations of motion. Even if the state of motion of the crystal could be described with a plane wave at a particular time, its accuracy would deteriorate progressively with time. Time development would have to be described by introducing a spectrum of other phonons, which is known as phonon decay. The two most important anharmonic effects are the thermal expansion and the phonon thermal conductivity.

Only when the phonon number ⟨n⟩ deviates from its equilibrium value ⟨n⟩⁰ can a thermal current arise, as stated in the following expression
Q_x = \frac{1}{V} \sum_{q,j} \hbar\omega \left( \langle n \rangle - \langle n \rangle^0 \right) v_x ,
where v is the energy transport velocity of phonons. Only two mechanisms exist that can cause time variation of ⟨n⟩ in a particular region. The number of phonons that diffuse into the region from neighboring regions differs from those that diffuse out, or phonons decay inside the same region into other phonons. A special form of the Boltzmann equation
\frac{d\langle n \rangle}{dt} = \left( \frac{\partial \langle n \rangle}{\partial t} \right)_{\text{diff.}} + \left( \frac{\partial \langle n \rangle}{\partial t} \right)_{\text{decay}}
states this. When steady-state conditions are assumed, the total time derivative of the phonon number is zero, because the temperature is constant in time and therefore the phonon number also stays constant. Time variation due to phonon decay is described with a relaxation time (τ) approximation
\left( \frac{\partial \langle n \rangle}{\partial t} \right)_{\text{decay}} = -\frac{\langle n \rangle - \langle n \rangle^0}{\tau},
which states that the more the phonon number deviates from its equilibrium value, the more its time variation increases. When steady-state conditions and local thermal equilibrium are assumed, we get the following equation
\left( \frac{\partial \langle n \rangle}{\partial t} \right)_{\text{diff.}} = -v_x \frac{\partial \langle n \rangle^0}{\partial T} \frac{\partial T}{\partial x}.
Using the relaxation time approximation for the Boltzmann equation and assuming steady-state conditions, the phonon thermal conductivity λL can be determined. The temperature dependence for λL originates from the variety of processes, whose significance for λL depends on the temperature range of interest. Mean free path is one factor that determines the temperature dependence for λL, as stated in the following equation
\lambda_L = \frac{1}{3V} \sum_{q,j} v(q,j)\, \Lambda(q,j)\, \frac{\partial}{\partial T} \epsilon\left( \omega(q,j), T \right),
where Λ is the phonon mean free path and ∂ε/∂T denotes the heat capacity. This equation is a result of combining the four previous equations and knowing that ⟨v_x²⟩ = v²/3 for cubic or isotropic systems and Λ = vτ.[18]
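When a single average group velocity and mean free path are used, the expression above collapses to the familiar kinetic form λL ≈ (1/3) C v Λ, with C the heat capacity per unit volume. A minimal sketch with assumed, order-of-magnitude phonon parameters for a generic dielectric crystal (none of these numbers come from the text):

```python
# Kinetic estimate lambda_L = (1/3) * C * v * Lambda for an isotropic solid.
# All inputs are assumed order-of-magnitude values for a generic dielectric
# crystal near room temperature.
C = 1.7e6      # volumetric heat capacity, J/(m^3*K)
v = 5000.0     # average phonon group velocity, m/s
mfp = 20e-9    # phonon mean free path Lambda, m

lambda_L = C * v * mfp / 3.0
print(f"estimated lattice thermal conductivity: {lambda_L:.0f} W/(m*K)")  # ~57
```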

At low temperatures (< 10 K) the anharmonic interaction does not influence the mean free path and therefore the thermal resistivity is determined only from processes for which q-conservation does not hold. These processes include the scattering of phonons by crystal defects, or the scattering from the surface of the crystal in the case of a high-quality single crystal. Therefore, thermal conductance depends on the external dimensions of the crystal and the quality of the surface. Thus, the temperature dependence of λL is determined by the specific heat and is therefore proportional to T³.[18]

Phonon quasimomentum is defined as ℏq and differs from normal momentum because it is only defined within an arbitrary reciprocal lattice vector. At higher temperatures (10 K < T < Θ), the conservation of energy ℏω₁ = ℏω₂ + ℏω₃ and quasimomentum q₁ = q₂ + q₃ + G, where q₁ is the wave vector of the incident phonon and q₂, q₃ are the wave vectors of the resultant phonons, may also involve a reciprocal lattice vector G, complicating the energy transport process. These processes can also reverse the direction of energy transport.

Therefore, these processes are also known as Umklapp (U) processes and can only occur when phonons with sufficiently large q-vectors are excited, because unless the sum of q₂ and q₃ points outside of the Brillouin zone the momentum is conserved and the process is normal scattering (N-process). The probability of a phonon having energy E is given by the Boltzmann distribution P ∝ e^(−E/kT). For a U-process to occur, the decaying phonon must have a wave vector q₁ that is roughly half of the diameter of the Brillouin zone, because otherwise quasimomentum would not be conserved.

Therefore, these phonons have to possess an energy of ~kΘ/2, which is a significant fraction of the Debye energy that is needed to generate new phonons. The probability for this is proportional to e^(−Θ/bT), with b = 2. The temperature dependence of the mean free path has the exponential form e^(Θ/bT). The presence of the reciprocal lattice wave vector implies a net phonon backscattering and a resistance to phonon and thermal transport, resulting in a finite λL,[17] as it means that momentum is not conserved. Only momentum-non-conserving processes can cause thermal resistance.[18]

At high temperatures (T > Θ), the mean free path and therefore λL have a temperature dependence T⁻¹, which is obtained from the formula e^(Θ/bT) by making the approximation e^x ∝ x for x < 1 and writing x = Θ/bT. This dependence is known as Eucken's law and originates from the temperature dependence of the probability for the U-process to occur.[17][18]

Thermal conductivity is usually described by the Boltzmann equation with the relaxation time approximation in which phonon scattering is a limiting factor. Another approach is to use analytic models or molecular dynamics or Monte Carlo based methods to describe thermal conductivity in solids.

Short-wavelength phonons are strongly scattered by impurity atoms if an alloyed phase is present, but mid- and long-wavelength phonons are less affected. Mid- and long-wavelength phonons carry a significant fraction of the heat, so to further reduce the lattice thermal conductivity one has to introduce structures to scatter these phonons. This is achieved by introducing an interface scattering mechanism, which requires structures whose characteristic length is longer than that of the impurity atom. Some possible ways to realize these interfaces are nanocomposites and embedded nanoparticles or structures.[19]

Electronic thermal conductivity

Hot electrons from higher energy states carry more thermal energy than cold electrons, while electrical conductivity is rather insensitive to the energy distribution of carriers because the amount of charge that electrons carry does not depend on their energy. This is the physical reason for the greater sensitivity of electronic thermal conductivity to the energy dependence of the density of states and of the relaxation time.[17]

Mahan and Sofo (PNAS 1996 93 (15) 7436–7439) showed that materials with a certain electron structure have reduced electron thermal conductivity. Based on their analysis, one can demonstrate that if the electron density of states in the material is close to a delta function, the electronic thermal conductivity drops to zero. As a starting point, take the equation λE = λ0 − TσS², where λ0 is the electronic thermal conductivity when the electrochemical potential gradient inside the sample is zero. As the next step, the transport coefficients are written as
\sigma = \sigma_0 I_0,
\sigma S = \left( \frac{k}{e} \right) \sigma_0 I_1,
\lambda_0 = \left( \frac{k}{e} \right)^2 \sigma_0 T I_2,
where σ₀ = e²/(ℏa₀) and a₀ is the Bohr radius. The dimensionless integrals In are defined as
I_n = \int_{-\infty}^{\infty} \frac{e^x}{\left( e^x + 1 \right)^2}\, s(x)\, x^n\, dx,
where s(x) is the dimensionless transport distribution function. The integrals In are the moments of the function
P(x) = D(x)\, s(x), \qquad D(x) = \frac{e^x}{\left( e^x + 1 \right)^2},
where x is the energy of carriers. By substituting the previous formulas for the transport coefficients into the equation for λE we get the following equation
\lambda_E = \left( \frac{k}{e} \right)^2 \sigma_0 T \left( I_2 - \frac{I_1^2}{I_0} \right).
From the previous equation we see that, for λE to be zero, the bracketed term containing the In terms has to be zero. Now if we assume that
s(x) = f(x)\, \delta(x - b),
where δ is the Dirac delta function, the In terms get the following expressions
I_0 = D(b)\, f(b),
I_1 = D(b)\, f(b)\, b,
I_2 = D(b)\, f(b)\, b^2.
By substituting these expressions into the equation for λE, we see that it goes to zero. Therefore, P(x) has to be a delta function.[19]
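The cancellation I2 − I1²/I0 = 0 for a delta-function transport distribution can be verified numerically by approximating the Dirac delta with a very narrow Gaussian; the peak position b and the width below are arbitrary assumed values, and this is only a sketch of the argument, not the authors' calculation.

```python
import numpy as np

# Check that I2 - I1^2/I0 -> 0 when s(x) approaches a delta function.
# The delta is approximated by a narrow Gaussian at an arbitrary position b.
b = 1.5        # peak position, in units of kT (assumed)
width = 0.01   # Gaussian width; the narrower, the closer to a true delta

x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]
D = np.exp(x) / (np.exp(x) + 1.0) ** 2          # thermal broadening factor D(x)
s = np.exp(-(x - b) ** 2 / (2.0 * width ** 2))  # narrow Gaussian ~ delta(x - b)

I0, I1, I2 = (np.sum(D * s * x ** n) * dx for n in range(3))
print("I2 - I1^2/I0 =", I2 - I1 ** 2 / I0)      # ~ 0, so lambda_E ~ 0
```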

Equations

In an isotropic medium the thermal conductivity is the parameter k in the Fourier expression for the heat flux
{\vec {q}}=-k{\vec {\nabla }}T
where {\vec {q}} is the heat flux (amount of heat flowing per second and per unit area) and {\vec {\nabla }}T the temperature gradient. The sign in the expression is chosen so that always k > 0 as heat always flows from a high temperature to a low temperature. This is a direct consequence of the second law of thermodynamics.

In the one-dimensional case q = H/A with H the amount of heat flowing per second through a surface with area A and the temperature gradient is dT/dx so
H=-kA{\frac {\mathrm {d} T}{\mathrm {d} x}}.
In case of a thermally insulated bar (except at the ends) in the steady state H is constant. If A is constant as well the expression can be integrated with the result
{\displaystyle HL=A\int _{T_{\text{L}}}^{T_{\text{H}}}k(T)\mathrm {d} T}
where TH and TL are the temperatures at the hot end and the cold end respectively, and L is the length of the bar. It is convenient to introduce the thermal-conductivity integral
I_{k}(T)=\int _{0}^{T}k(T^{\prime })\mathrm {d} T^{\prime }.
The heat flow rate is then given by
{\displaystyle H={\frac {A}{L}}[I_{k}(T_{\text{H}})-I_{k}(T_{\text{L}})].}
If the temperature difference is small k can be taken as constant. In that case
{\displaystyle H=kA{\frac {T_{\text{H}}-T_{\text{L}}}{L}}.}
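A short numeric sketch of the bar calculation above, assuming a toy linear k(T) purely for illustration; it integrates k numerically to form Ik(T) and then applies H = (A/L)[Ik(TH) − Ik(TL)].

```python
# Heat flow through an insulated bar with temperature-dependent k,
# via H = (A/L) * (I_k(T_H) - I_k(T_L)).  The linear k(T) and the bar
# geometry below are assumed toy values, purely for illustration.
def k(T):
    """Assumed thermal conductivity, W/(m*K), at temperature T in K."""
    return 100.0 + 0.05 * T

def I_k(T, steps=10000):
    """Thermal-conductivity integral: integral of k(T') dT' from 0 to T."""
    dT = T / steps
    return sum(k((i + 0.5) * dT) for i in range(steps)) * dT

A, L = 1.0e-4, 0.5        # cross-section (m^2) and length (m) of the bar
T_H, T_L = 400.0, 300.0   # hot-end and cold-end temperatures, K

H = (A / L) * (I_k(T_H) - I_k(T_L))
print(f"heat flow H = {H:.2f} W")   # approaches k*A*(T_H - T_L)/L for small dT
```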

Simple kinetic picture


Gas atoms moving randomly through a surface.

In this section we will motivate an expression for the thermal conductivity in terms of microscopic parameters.

Consider a gas of particles of negligible volume governed by hard-core interactions, within a vertical temperature gradient. The upper side is hot and the lower side cold. There is a downward energy flow because the gas atoms going down have a higher energy than the atoms going up. The net flow of energy per second is the heat flow H, which is proportional to the number of particles that cross the area A per second. In fact, H should also be proportional to the particle density n, the mean particle velocity v, and the amount of energy transported per particle, hence to the heat capacity per particle c and some characteristic temperature difference ΔT. So far, in our model,
{\displaystyle H\propto n\,v\,c\,A\,\Delta T.}
The unit of H is J/s and of the right-hand side it is (particle/m³) × (m/s) × (J/(K × particle)) × (m²) × (K) = J/s, so this is already of the right dimension. Only a numerical factor is missing. For ΔT we take the temperature difference of the gas between two collisions, ΔT = l (dT/dz), where l is the mean free path.

Detailed kinetic calculations[20] show that the numerical factor is -1/3, so, all in all,
{\displaystyle H=-{\frac {1}{3}}\,n\,v\,c\,l\,A\,{\frac {dT}{dz}}.}
Comparison with the one-dimension expression for the heat flow, given above, gives an expression for the factor k
{\displaystyle k={\frac {1}{3}}\,n\,v\,c\,l.}
The particle density and the heat capacity per particle can be combined as the heat capacity per unit volume so
{\displaystyle k={\frac {1}{3}}\,v\,l\,{\frac {C_{V}}{V_{m}}}}
where CV is the molar heat capacity at constant volume and Vm the molar volume.

More rigorously, the mean free path of a molecule in a gas is given by l\propto {\frac {1}{n\sigma }} where σ is the collision cross section. So
k\propto {\frac {c}{\sigma }}v.
The heat capacity per particle c and the cross section σ both are temperature independent so the temperature dependence of k is determined by the T dependence of v. For a monatomic gas, with atomic mass M, v is given by v={\sqrt {\frac {3RT}{M}}}. So
k\propto {\sqrt {\frac {T}{M}}}.
This expression also shows why gases with a low mass (hydrogen, helium) have a high thermal conductivity.
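A rough Python sketch of this trend, applying k = (1/3) n v c l with hard-sphere mean free paths to helium and argon; the molecular diameters are assumed round values, and this simple model is only expected to reproduce the order of magnitude and the √(T/M) scaling, not precise conductivities.

```python
import math

# Order-of-magnitude kinetic estimate k = (1/3) * n * v * c * l for two
# monatomic gases, illustrating why light gases conduct heat better.
# Hard-sphere diameters are assumed round values.
R = 8.314           # gas constant, J/(mol*K)
k_B = 1.380649e-23  # Boltzmann constant, J/K
T, P = 300.0, 101325.0

gases = {            # molar mass (kg/mol), assumed hard-sphere diameter (m)
    "helium": (4.0e-3, 2.2e-10),
    "argon":  (39.95e-3, 3.6e-10),
}

for name, (M, d) in gases.items():
    v = math.sqrt(3.0 * R * T / M)                        # mean particle speed
    n = P / (k_B * T)                                     # number density
    mfp = 1.0 / (math.sqrt(2.0) * n * math.pi * d ** 2)   # mean free path
    c = 1.5 * k_B                                         # heat capacity per atom
    print(f"{name}: k ~ {n * v * c * mfp / 3.0:.4f} W/(m*K)")
```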

For metals at low temperatures the heat is carried mainly by the free electrons. In this case the mean velocity is the Fermi velocity which is temperature independent. The mean free path is determined by the impurities and the crystal imperfections which are temperature independent as well. So the only temperature-dependent quantity is the heat capacity c, which, in this case, is proportional to T. So
{\displaystyle k=k_{0}\,T{\text{     (metal at low temperature)}}}
with k0 a constant. For pure metals such as copper, silver, etc., l is large, so the thermal conductivity is high. At higher temperatures the mean free path is limited by the phonons, so the thermal conductivity tends to decrease with temperature. In alloys the density of the impurities is very high, so l and, consequently, k are small. Therefore, alloys such as stainless steel can be used for thermal insulation.

Turbulence

From Wikipedia, the free encyclopedia
In fluid dynamics, turbulence or turbulent flow is any pattern of fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow regime, which occurs when a fluid flows in parallel layers, with no disruption between those layers.[1]

Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature and created in engineering applications are turbulent.[2][3]:2 Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason turbulence is easier to create in low-viscosity fluids and more difficult in highly viscous fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear and interact with each other; consequently, drag due to friction effects increases. This would increase the energy needed to pump fluid through a pipe, for instance. However, this effect can also be exploited by devices such as aerodynamic spoilers on aircraft, which deliberately "spoil" the laminar flow to increase drag and reduce lift.

The onset of turbulence can be predicted by a dimensionless quantity called the Reynolds number, which characterizes the balance between kinetic energy and viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex situation. Richard Feynman has described turbulence as the most important unsolved problem of classical physics.[4]

Examples of turbulence

Laminar and turbulent water flow over the hull of a submarine. As the relative velocity of the water increases, turbulence occurs.
 
Turbulence in the tip vortex from an airplane wing
  • Smoke rising from a cigarette is mostly turbulent flow. However, for the first few centimeters the flow is laminar. The smoke plume becomes turbulent as its Reynolds number increases, due to its flow velocity and characteristic length increasing.
  • Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient switched from favorable (pressure decreasing in the flow direction) to unfavorable (pressure increasing in the flow direction), creating a large region of low pressure behind the ball that creates high form drag. To prevent this from happening, the surface is dimpled to perturb the boundary layer and promote transition to turbulence. This results in higher skin friction, but moves the point of boundary layer separation further along, resulting in lower form drag and lower overall drag.
  • Clear-air turbulence experienced during airplane flight, as well as poor astronomical seeing (the blurring of images seen through the atmosphere.)
  • Most of the terrestrial atmospheric circulation
  • The oceanic and atmospheric mixed layers and intense oceanic currents.
  • The flow conditions in many pieces of industrial equipment (such as pipes, ducts, precipitators, gas scrubbers, dynamic scraped surface heat exchangers, etc.) and machines (for instance, internal combustion engines and gas turbines).
  • The external flow over all kinds of vehicles, such as cars, airplanes, ships, and submarines.
  • The motions of matter in stellar atmospheres.
  • A jet exhausting from a nozzle into a quiescent fluid. As the flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence.
  • Biologically generated turbulence resulting from swimming animals affects ocean mixing.[5]
  • Snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence.
  • Bridge supports (piers) in water. In the late summer and fall, when river flow is slow, water flows smoothly around the support legs. In the spring, when the flow is faster, a higher Reynolds number is associated with the flow. The flow may start off laminar but is quickly separated from the leg and becomes turbulent.
  • In many geophysical flows (rivers, atmospheric boundary layer), the flow turbulence is dominated by the coherent structure activities and associated turbulent events. A turbulent event is a series of turbulent fluctuations that contain more energy than the average flow turbulence.[6][7] The turbulent events are associated with coherent flow structures such as eddies and turbulent bursting, and they play a critical role in terms of sediment scour, accretion and transport in rivers as well as contaminant mixing and dispersion in rivers and estuaries, and in the atmosphere.
  • In the medical field of cardiology, a stethoscope is used to detect heart sounds and bruits, which are due to turbulent blood flow. In normal individuals, heart sounds are a product of turbulent flow as heart valves close. However, in some conditions turbulent flow can be audible due to other reasons, some of them pathological. For example, in advanced atherosclerosis, bruits (and therefore turbulent flow) can be heard in some vessels that have been narrowed by the disease process.
  • Recently, turbulence in porous media has become a highly debated subject.[8]

Features

Flow visualization of a turbulent jet, made by laser-induced fluorescence. The jet exhibits a wide range of length scales, an important characteristic of turbulent flows.

Turbulence is characterized by the following features:
Irregularity 
Turbulent flows are always highly irregular. For this reason, turbulence problems are normally treated statistically rather than deterministically. Turbulent flow is chaotic. However, not all chaotic flows are turbulent.
Diffusivity 
The readily available supply of energy in turbulent flows tends to accelerate the homogenization (mixing) of fluid mixtures. The characteristic which is responsible for the enhanced mixing and increased rates of mass, momentum and energy transports in a flow is called "diffusivity".
Turbulent diffusion is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions, and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-third power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula.
Rotationality 
Turbulent flows have non-zero vorticity and are characterized by a strong three-dimensional vortex generation mechanism known as vortex stretching. In fluid dynamics, they are essentially vortices subjected to stretching associated with a corresponding increase of the component of vorticity in the stretching direction, due to the conservation of angular momentum. On the other hand, vortex stretching is the core mechanism on which the turbulence energy cascade relies to establish the structure function. In general, the stretching mechanism implies thinning of the vortices in the direction perpendicular to the stretching direction due to volume conservation of fluid elements. As a result, the radial length scale of the vortices decreases and the larger flow structures break down into smaller structures. The process continues until the small-scale structures are small enough that their kinetic energy can be transformed by the fluid's molecular viscosity into heat. This is why turbulence is always rotational and three dimensional. For example, atmospheric cyclones are rotational but their substantially two-dimensional shapes do not allow vortex generation and so are not turbulent. On the other hand, oceanic flows are dispersive but essentially non-rotational and therefore are not turbulent.
Dissipation 
To sustain turbulent flow, a persistent source of energy supply is required because turbulence dissipates rapidly as the kinetic energy is converted into internal energy by viscous shear stress. Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy "cascades" from these large-scale structures to smaller scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures which produces a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale.
Via this energy cascade, turbulent flow can be realized as a superposition of a spectrum of flow velocity fluctuations and eddies upon a mean flow. The eddies are loosely defined as coherent patterns of flow velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales and the hierarchy can be described by the energy spectrum that measures the energy in flow velocity fluctuations for each length scale (wavenumber). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales these eddies can be divided into three categories.
Integral time scale
The integral time scale for a Lagrangian flow can be defined as:

{\displaystyle T=\left({\frac {1}{\langle u'u'\rangle }}\right)\int _{0}^{\infty }\langle u'u'(\tau )\rangle d\tau }

where u' is the velocity fluctuation, and \tau is the time lag between measurements.[9]
Integral length scales
Largest scales in the energy spectrum. These eddies obtain energy from the mean flow and also from each other. Thus, these are the energy production eddies which contain most of the energy. They have large flow velocity fluctuations and are low in frequency. Integral scales are highly anisotropic and are defined in terms of the normalized two-point flow velocity correlations. The maximum length of these scales is constrained by the characteristic length of the apparatus. For example, the largest integral length scale of pipe flow is equal to the pipe diameter. In the case of atmospheric turbulence, this length can reach up to the order of several hundred kilometers. The integral length scale can be defined as
{\displaystyle L=\left({\frac {1}{\langle u'u'\rangle }}\right)\int \limits _{0}^{\infty }\langle u'u'(r)\rangle dr}
where r is the distance between two measurement locations, and u' is the velocity fluctuation in that same direction.[9]
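In practice these integral scales are estimated from measured velocity records by integrating the normalized autocorrelation of the fluctuation. The sketch below applies that recipe to a synthetic fluctuation signal with an assumed correlation time of about 0.1 s; both the signal and its parameters are illustrative assumptions.

```python
import numpy as np

# Estimate the integral time scale T = (1/<u'u'>) * integral <u'(t)u'(t+tau)> dtau
# from a velocity record.  The record is a synthetic AR(1) signal whose
# correlation time (~0.1 s) is an assumed value, used only to show the procedure.
rng = np.random.default_rng(0)
dt, n = 1e-3, 100_000
alpha = dt / 0.1                      # assumed correlation time of 0.1 s
u = np.zeros(n)
for i in range(1, n):
    u[i] = (1.0 - alpha) * u[i - 1] + rng.normal(scale=np.sqrt(alpha))

u -= u.mean()                         # velocity fluctuation u'
max_lag = 3000                        # ~30 correlation times
acov = np.array([np.mean(u[: n - lag] * u[lag:]) for lag in range(max_lag)])
T_int = np.sum(acov / acov[0]) * dt   # integral of the normalized autocorrelation
print(f"estimated integral time scale: {T_int:.3f} s  (assumed true value ~0.1 s)")
```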
Kolmogorov length scales 
Smallest scales in the spectrum that form the viscous sub-layer range. In this range, the energy input from nonlinear interactions and the energy drain from viscous dissipation are in exact balance. The small scales have high frequency, causing turbulence to be locally isotropic and homogeneous.
Taylor microscales 
The intermediate scales between the largest and the smallest scales, which make up the inertial subrange. Taylor microscales are not dissipative scales but pass down the energy from the largest to the smallest without dissipation. Some literature does not consider Taylor microscales as a characteristic length scale and considers the energy cascade to contain only the largest and smallest scales, with the latter accommodating both the inertial subrange and the viscous sublayer. Nevertheless, Taylor microscales are often used to describe turbulence more conveniently, as they play a dominant role in energy and momentum transfer in wavenumber space.
Although it is possible to find some particular solutions of the Navier–Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken so the statistical description is presently modified.[10] Still, a complete description of turbulence remains one of the unsolved problems in physics.

According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first."[11] A similar witticism has been attributed to Horace Lamb (who had published a noted text book on Hydrodynamics)—his choice being quantum electrodynamics (instead of relativity) and turbulence. Lamb was quoted as saying in a speech to the British Association for the Advancement of Science, "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic."[12][13]

A more detailed presentation of turbulence with emphasis on high-Reynolds number flow, intended for a general readership of physicists and applied mathematicians, is found in the Scholarpedia articles by Benzi and Frisch[14] and by Falkovich.[15]

There are many scales of meteorological motions; in this context turbulence affects small-scale motions.[16]

Onset of turbulence

The plume from this candle flame goes from laminar to turbulent. The Reynolds number can be used to predict where this transition will take place

The onset of turbulence can be predicted by the Reynolds number, which is the ratio of inertial forces to viscous forces within a fluid which is subject to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which as it increases, progressively inhibits turbulence, as more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation.[17]

This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in scaling of fluid dynamics problems, and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft and its full-size version. Such scaling is not linear and the application of Reynolds numbers to both situations allows scaling factors to be developed. A flow situation in which the kinetic energy is significantly absorbed due to the action of fluid molecular viscosity gives rise to a laminar flow regime. For this, the dimensionless Reynolds number (Re) is used as a guide.

With respect to laminar and turbulent flow regimes:
  • laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion;
  • turbulent flow occurs at high Reynolds numbers and is dominated by inertial forces, which tend to produce chaotic eddies, vortices and other flow instabilities.
The Reynolds number is defined as[18]
{\displaystyle \mathrm {Re} ={\frac {\rho vL}{\mu }}\,,}
where:
  • ρ is the density of the fluid (SI units: kg/m3)
  • v is a characteristic velocity of the fluid with respect to the object (m/s)
  • L is a characteristic linear dimension (m)
  • μ is the dynamic viscosity of the fluid (Pa·s or N·s/m2 or kg/(m·s)).
While there is no theorem directly relating the non-dimensional Reynolds number to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar. In Poiseuille flow, for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040;[19] moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000.
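As a minimal illustration, the Python sketch below evaluates Re = ρvL/μ for water in a pipe and applies the rough pipe-flow thresholds quoted above; the fluid properties, pipe diameter, and velocity are assumed example values.

```python
# Reynolds number Re = rho * v * L / mu for pipe flow, with the rough
# pipe-flow thresholds quoted above.  All inputs are assumed example values
# (water near 20 degC in a 25 mm pipe).
rho = 998.0   # density, kg/m^3
mu = 1.0e-3   # dynamic viscosity, Pa*s
D = 0.025     # pipe diameter, used as the characteristic length, m
v = 0.5       # mean flow velocity, m/s

Re = rho * v * D / mu
print(f"Re = {Re:.0f}")
if Re < 2040:
    print("expected regime: laminar")
elif Re < 4000:
    print("expected regime: transitional (turbulence interspersed with laminar flow)")
else:
    print("expected regime: turbulent")
```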

The transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or if the density of the fluid is increased.

Heat and momentum transfer

When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them, thus increasing the heat transfer and the friction coefficient.

Assume for a two-dimensional turbulent flow that one was able to locate a specific point in the fluid and measure the actual flow velocity v = (vx,vy) of every particle that passed through that point at any given time. Then one would find the actual flow velocity fluctuating about a mean value:
{\displaystyle v_{x}=\underbrace {\overline {v_{x}}} _{\text{mean value}}+\underbrace {v'_{x}} _{\text{fluctuation}}\quad {\text{and}}\quad v_{y}={\overline {v_{y}}}+v'_{y}\,;}
and similarly for temperature (T = T̄ + T′) and pressure (P = P̄ + P′), where the primed quantities denote fluctuations superposed on the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by dynamics laws, the turbulent fluctuations are regarded as stochastic variables.

The heat flux and momentum transfer (represented by the shear stress τ) in the direction normal to the flow for a given time are
{\displaystyle {\begin{aligned}q&=\underbrace {v'_{y}\rho c_{P}T'} _{\text{experimental value}}=-k_{\text{turb}}{\frac {\partial {\overline {T}}}{\partial y}}\,;\\\tau &=\underbrace {-\rho {\overline {v'_{y}v'_{x}}}} _{\text{experimental value}}=\mu _{\text{turb}}{\frac {\partial {\overline {v_{x}}}}{\partial y}}\,;\end{aligned}}}
where cP is the heat capacity at constant pressure, ρ is the density of the fluid, μturb is the coefficient of turbulent viscosity and kturb is the turbulent thermal conductivity.[3]

Kolmogorov's theory of 1941

Richardson's notion of turbulence was that a turbulent flow is composed of "eddies" of different sizes. The sizes define a characteristic length scale for the eddies, which are also characterized by flow velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up, originating smaller eddies, and the kinetic energy of the initial large eddy is divided among the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy.

In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction can be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted L). Kolmogorov's idea was that in Richardson's energy cascade this geometrical and directional information is lost as the scale is reduced, so that the statistics of the small scales have a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high.

Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the kinematic viscosity ν and the rate of energy dissipation ε. With only these two parameters, the unique length that can be formed by dimensional analysis is
{\displaystyle \eta =\left({\frac {\nu ^{3}}{\varepsilon }}\right)^{\frac {1}{4}}\,.}
This is today known as the Kolmogorov length scale (see Kolmogorov microscales).
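A quick numeric sketch of η = (ν³/ε)^(1/4); the kinematic viscosity (air-like) and the dissipation rate below are assumed representative values, not figures from the text.

```python
# Kolmogorov length scale eta = (nu^3 / epsilon)^(1/4).
# nu and epsilon are assumed representative values for an air-like fluid
# with moderately energetic turbulence.
nu = 1.5e-5    # kinematic viscosity, m^2/s
epsilon = 1.0  # mean rate of energy dissipation per unit mass, m^2/s^3

eta = (nu ** 3 / epsilon) ** 0.25
print(f"Kolmogorov length scale: {eta * 1e3:.2f} mm")   # ~0.24 mm for these inputs
```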

A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of the Kolmogorov length η, while the input of energy into the cascade comes from the decay of the large scales, of order L. These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length r) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. η ≪ r ≪ L). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range, and it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called "inertial range").

Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range η ≪ r ≪ L are universally and uniquely determined by the scale r and the rate of energy dissipation ε.

The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow. For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function E(k), where k is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field u(x):
{\displaystyle \mathbf {u} (\mathbf {x} )=\iiint _{\mathbb {R} ^{3}}{\hat {\mathbf {u} }}(\mathbf {k} )e^{i\mathbf {k\cdot x} }\mathrm {d} ^{3}\mathbf {k} \,,}
where û(k) is the Fourier transform of the flow velocity field. Thus, E(k)dk represents the contribution to the kinetic energy from all the Fourier modes with k < |k| < k + dk, and therefore,
{\displaystyle {\tfrac {1}{2}}\left\langle u_{i}u_{i}\right\rangle =\int _{0}^{\infty }E(k)\mathrm {d} k\,,}
where ½⟨uiui⟩ is the mean turbulent kinetic energy of the flow. The wavenumber k corresponding to length scale r is k = 2π/r. Therefore, by dimensional analysis, the only possible form for the energy spectrum function according to Kolmogorov's third hypothesis is
{\displaystyle E(k)=C\varepsilon ^{\frac {2}{3}}k^{-{\frac {5}{3}}}\,,}
where C would be a universal constant. This is one of the most famous results of Kolmogorov 1941 theory, and considerable experimental evidence has accumulated that supports it.[20]
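To see what the −5/3 law implies in numbers, the sketch below tabulates E(k) = C ε^(2/3) k^(−5/3) over a few wavenumbers; C ≈ 1.5 is a commonly quoted experimental value for the Kolmogorov constant, and the dissipation rate is an assumed value.

```python
import numpy as np

# Inertial-range spectrum E(k) = C * epsilon^(2/3) * k^(-5/3).
# C ~ 1.5 is a commonly quoted experimental value; epsilon is assumed.
C = 1.5
epsilon = 1.0                 # dissipation rate, m^2/s^3

for k in np.logspace(1, 4, 4):            # wavenumbers 10 ... 10^4 1/m
    E = C * epsilon ** (2.0 / 3.0) * k ** (-5.0 / 3.0)
    print(f"k = {k:8.0f} 1/m   E(k) = {E:.3e} m^3/s^2")
# Each decade increase in k lowers E(k) by a factor of 10^(5/3) ~ 46.
```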

In spite of this success, Kolmogorov theory is at present under revision. This theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant in the inertial range. A usual way of studying turbulent flow velocity fields is by means of flow velocity increments:
{\displaystyle \delta \mathbf {u} (r)=\mathbf {u} (\mathbf {x} +\mathbf {r} )-\mathbf {u} (\mathbf {x} )\,;}
that is, the difference in flow velocity between points separated by a vector r (since the turbulence is assumed isotropic, the flow velocity increment depends only on the modulus of r). Flow velocity increments are useful because they emphasize the effects of scales of the order of the separation r when statistics are computed. The statistical scale-invariance implies that the scaling of flow velocity increments should occur with a unique scaling exponent β, so that when r is scaled by a factor λ,
\delta \mathbf{u}(\lambda r)
should have the same statistical distribution as
{\displaystyle \lambda ^{\beta }\delta \mathbf {u} (r)\,,}
with β independent of the scale r. From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the flow velocity increments (known as structure functions in turbulence) should scale as
\Big\langle \big(\delta\mathbf{u}(r)\big)^{n} \Big\rangle = C_{n}\,(\varepsilon r)^{\frac{n}{3}}\,,
where the brackets denote the statistical average, and the Cn would be universal constants.
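As an illustration of how such structure functions are estimated in practice, the following minimal Python sketch (not part of the original text) computes ⟨(δu(r))^n⟩ from a one-dimensional velocity record sampled on a uniform grid; the synthetic cumulative-sum signal is only a stand-in for real experimental or simulation data, and names such as structure_function are illustrative.

    import numpy as np

    def structure_function(u, n, separations):
        # Estimate <(delta u(r))^n> for a 1D velocity record u on a uniform
        # grid, for integer separations r expressed in grid points.
        return np.array([np.mean((u[r:] - u[:-r]) ** n) for r in separations])

    # Illustrative use with a synthetic stand-in signal; only real turbulence
    # data would be expected to show Kolmogorov scaling.
    rng = np.random.default_rng(0)
    u = np.cumsum(rng.standard_normal(100_000))
    separations = np.arange(1, 200)
    S2 = structure_function(u, 2, separations)  # second-order structure function
    S3 = structure_function(u, 3, separations)  # third-order structure function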

There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the n/3 value predicted by the theory, becoming a non-linear function of the order n of the structure function. The universality of the constants has also been questioned. For low orders the discrepancy with the Kolmogorov n/3 value is very small, which explains the success of Kolmogorov theory in regard to low-order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law
E(k) \propto k^{-p}\,,
with 1 < p < 3, the second-order structure function also follows a power law, of the form
\Big\langle \big(\delta\mathbf{u}(r)\big)^{2} \Big\rangle \propto r^{\,p-1}\,.
Since the experimental values obtained for the second-order structure function deviate only slightly from the 2/3 value predicted by Kolmogorov theory, the value of p is very close to 5/3 (differences are about 2%[21]). Thus the "Kolmogorov −5/3 spectrum" is generally observed in turbulence. However, for high-order structure functions the difference from the Kolmogorov scaling is significant, and the breakdown of statistical self-similarity is clear. This behavior, and the lack of universality of the Cn constants, are related to the phenomenon of intermittency in turbulence. This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is really universal in the inertial range.
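As a rough numerical check of this relation, one can fit the second-order structure function over an assumed inertial range and read off p = 1 + ζ2. The sketch below is illustrative rather than a standard routine; it assumes S2 and separations arrays like those computed in the earlier sketch, with the fitting range left as free parameters.

    import numpy as np

    def spectral_exponent_from_S2(separations, S2, r_min, r_max):
        # Infer the spectral exponent p from the second-order structure
        # function, using S2(r) ~ r^(p-1) over an assumed inertial range
        # r_min <= r <= r_max, i.e. p = 1 + zeta_2.
        mask = (separations >= r_min) & (separations <= r_max)
        zeta_2, _ = np.polyfit(np.log(separations[mask]), np.log(S2[mask]), 1)
        return 1.0 + zeta_2

    # With S2 and separations estimated from real turbulence data, a result
    # close to p = 5/3 would correspond to the Kolmogorov -5/3 spectrum.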

The Epic Project to Record the DNA of All Life on Earth

Original post:  https://singularityhub.com/2018/05/27/the-epic-project-to-record-the-dna-of-all-life-on-earth/#sm.00011mvw2o16odqfpg41083fkgrer

Advances in biotechnology over the past decade have brought rapid progress in the fields of medicine, food, ecology, and neuroscience, among others. With this progress comes ambition for even more progress—realizing we’re capable of, say, engineering crops to yield more food means we may be able to further engineer them to be healthier, too. Building a brain-machine interface that can read basic thoughts may mean another interface could eventually read complex thoughts.

One of the fields where progress seems to be moving especially quickly is genomics, and with that progress, ambitions have grown just as fast. The Earth BioGenome project, which aims to sequence the DNA of all known eukaryotic life on Earth, is a glowing example of both progress and ambition.

A recent paper published in the journal Proceedings of the National Academy of Sciences released new details about the project. It's estimated to take 10 years, cost $4.7 billion, and require more than 200 petabytes of digital storage space (a petabyte is one quadrillion, or 10^15, bytes).

These statistics sound huge, but in reality they’re small compared to the history of genome sequencing up to this point. Take the Human Genome Project, a publicly-funded project to sequence the first full human genome. The effort took over ten years—it started in 1990 and was completed in 2003—and cost roughly $2.7 billion ($4.8 billion in today’s dollars) overall.

Now, just 15 years later, the Earth BioGenome project aims to leverage plummeting costs to sequence, catalog, and analyze the genomes of all known eukaryotic species on Earth in about the same amount of time and for about the same cost.

“Eukaryotes” refers to all plants, animals, and single-celled organisms—all living things except bacteria and archaea (those will be taken care of by the Earth Microbiome Project). It's estimated there are somewhere between 10 and 15 million eukaryotic species, from a rhinoceros to a chinchilla down to a flea (and there are far smaller still). Of the 2.3 million of these that we've documented, we've sequenced fewer than 15,000 of their genomes (most of which have been microbes).

As impressive as it is that scientists can do this, you may be wondering, what’s the point? There’s a clear benefit to studying the human genome, but what will we get out of decoding the DNA of a rhinoceros or a flea?

Earth BioGenome will essentially allow scientists to take a high-fidelity, digital genetic snapshot of known life on Earth. “The greatest legacy of [the project] will be a complete digital library of life that will guide future discoveries for generations,” said Gene Robinson, one of the project’s leaders, as well as a professor of entomology and the director of the Carl R. Woese Institute for Genomic Biology at the University of Illinois.

The estimated return on investment ratio of the Human Genome Project was 141 to 1—and that’s just the financial side of things. The project hugely contributed to advancing affordable genomics as we know it today, a field that promises to speed the discovery of disease-causing genetic mutations and aid in their diagnosis and treatment. New gene-editing tools like CRISPR have since emerged and may one day be able to cure genetic illnesses.

Extrapolate these returns over millions of species, then, and the insight to be gained—and the concrete benefits from that insight—are likely significant. Genomic research on crops, for example, has already yielded plants that grow faster, produce more food, and are more resistant to pests or severe weather. Researchers may find new medicines or discover better ways to engineer organisms for use in manufacturing or energy. They’ll be able to make intricate discoveries about how and when various species evolved—information that’s thus far been buried in the depths of history.

In the process, they’ll produce a digital gene bank of the world’s species. What other useful genes will lurk there to inspire a new generation of synthetic biologists?

“[In the future] designing genomes will be a personal thing, a new art form as creative as painting or sculpture. Few of the new creations will be masterpieces, but a great many will bring joy to their creators and variety to our fauna and flora,” renowned physicist Freeman Dyson famously said in 2007.

Just a little over ten years later, his vision, which would have been closer to science fiction not so long ago, is approaching reality. Earth BioGenome would put a significant fraction of Earth's genetic palette at future synthetic biologists' fingertips.

But it’s not a done deal yet. In addition to funding, the project’s finer details still need to be firmed up; one of the biggest questions is how, exactly, scientists will go about the gargantuan task of collecting intact DNA samples from every known species on Earth. Some museum specimens will be used, but many likely haven’t been preserved in such a way that the DNA could produce a high-quality genome. One important source of samples will be the Global Genome Biodiversity Network.

“Genomics has helped scientists develop new medicines and new sources of renewable energy, feed a growing population, protect the environment, and support human survival and well-being,” Robinson said. “The Earth BioGenome Project will give us insight into the history and diversity of life and help us better understand how to conserve it.”

Cooperative

From Wikipedia, the free encyclopedia ...