
Monday, May 28, 2018

Turbulence

From Wikipedia, the free encyclopedia
In fluid dynamics, turbulence or turbulent flow is any pattern of fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow regime, which occurs when a fluid flows in parallel layers, with no disruption between those layers.[1]

Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature and created in engineering applications are turbulent.[2][3]:2 Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason turbulence is easier to create in low-viscosity fluids and more difficult in highly viscous fluids. In general terms, in turbulent flow unsteady vortices of many sizes appear and interact with each other; as a consequence, drag due to friction effects increases. This would increase the energy needed to pump fluid through a pipe, for instance. However, this effect can also be exploited by devices such as aerodynamic spoilers on aircraft, which deliberately "spoil" the laminar flow to increase drag and reduce lift.

The onset of turbulence can be predicted by a dimensionless quantity called the Reynolds number, which quantifies the balance between kinetic energy and viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex situation. Richard Feynman described turbulence as the most important unsolved problem of classical physics.[4]

Examples of turbulence

Laminar and turbulent water flow over the hull of a submarine. As the relative velocity of the water increases, turbulence occurs
 
Turbulence in the tip vortex from an airplane wing
  • Smoke rising from a cigarette is mostly turbulent flow. However, for the first few centimeters the flow is laminar. The smoke plume becomes turbulent as its Reynolds number increases, due to its flow velocity and characteristic length increasing.
  • Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient switched from favorable (pressure decreasing in the flow direction) to unfavorable (pressure increasing in the flow direction), creating a large region of low pressure behind the ball that creates high form drag. To prevent this from happening, the surface is dimpled to perturb the boundary layer and promote transition to turbulence. This results in higher skin friction, but moves the point of boundary layer separation further along, resulting in lower form drag and lower overall drag.
  • Clear-air turbulence experienced during airplane flight, as well as poor astronomical seeing (the blurring of images seen through the atmosphere.)
  • Most of the terrestrial atmospheric circulation
  • The oceanic and atmospheric mixed layers and intense oceanic currents.
  • The flow conditions in much industrial equipment (such as pipes, ducts, precipitators, gas scrubbers, dynamic scraped surface heat exchangers, etc.) and machines (for instance, internal combustion engines and gas turbines).
  • The external flow over all kinds of vehicles, such as cars, airplanes, ships and submarines.
  • The motions of matter in stellar atmospheres.
  • A jet exhausting from a nozzle into a quiescent fluid. As the flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence.
  • Biologically generated turbulence resulting from swimming animals affects ocean mixing.[5]
  • Snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence.
  • Bridge supports (piers) in water. In the late summer and fall, when river flow is slow, water flows smoothly around the support legs. In the spring, when the flow is faster, a higher Reynolds number is associated with the flow. The flow may start off laminar but is quickly separated from the leg and becomes turbulent.
  • In many geophysical flows (rivers, atmospheric boundary layer), the flow turbulence is dominated by the coherent structure activities and associated turbulent events. A turbulent event is a series of turbulent fluctuations that contain more energy than the average flow turbulence.[6][7] The turbulent events are associated with coherent flow structures such as eddies and turbulent bursting, and they play a critical role in terms of sediment scour, accretion and transport in rivers as well as contaminant mixing and dispersion in rivers and estuaries, and in the atmosphere.
  • In the medical field of cardiology, a stethoscope is used to detect heart sounds and bruits, which are due to turbulent blood flow. In normal individuals, heart sounds are a product of turbulent flow as heart valves close. However, in some conditions turbulent flow can be audible due to other reasons, some of them pathological. For example, in advanced atherosclerosis, bruits (and therefore turbulent flow) can be heard in some vessels that have been narrowed by the disease process.
  • Recently, turbulence in porous media became a highly debated subject.[8]

Features

Flow visualization of a turbulent jet, made by laser-induced fluorescence. The jet exhibits a wide range of length scales, an important characteristic of turbulent flows.

Turbulence is characterized by the following features:
Irregularity 
Turbulent flows are always highly irregular. For this reason, turbulence problems are normally treated statistically rather than deterministically. Turbulent flow is chaotic. However, not all chaotic flows are turbulent.
Diffusivity 
The readily available supply of energy in turbulent flows tends to accelerate the homogenization (mixing) of fluid mixtures. The characteristic which is responsible for the enhanced mixing and increased rates of mass, momentum and energy transports in a flow is called "diffusivity".
Turbulent diffusion is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions, and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-thirds power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula.
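To make the gradient-diffusion idea above concrete, here is a minimal Python sketch (an added illustration, not from the article) that models a turbulent scalar flux as minus an eddy diffusivity times the mean gradient; the mean profile, the eddy diffusivity K_turb, and the molecular diffusivity used for contrast are all illustrative assumptions.

```python
import numpy as np

# Gradient-diffusion (eddy-diffusivity) closure sketch: the turbulent flux of a
# scalar is modeled as <v'c'> ~ -K_turb * d<C>/dy, where K_turb depends on the
# flow, not on the fluid. All numbers below are illustrative assumptions.

y = np.linspace(0.0, 1.0, 101)        # wall-normal coordinate [m]
C_mean = 10.0 * (1.0 - y)             # assumed mean concentration profile [kg/m^3]
K_turb = 1.0e-2                       # assumed turbulent (eddy) diffusivity [m^2/s]
K_mol = 1.0e-5                        # typical molecular diffusivity, for contrast [m^2/s]

dCdy = np.gradient(C_mean, y)         # mean gradient d<C>/dy
flux_turb = -K_turb * dCdy            # modeled turbulent flux [kg/(m^2 s)]
flux_mol = -K_mol * dCdy              # molecular flux for comparison

print(f"turbulent flux ~ {flux_turb[50]:.3e}  vs  molecular flux ~ {flux_mol[50]:.3e}")
```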
Rotationality 
Turbulent flows have non-zero vorticity and are characterized by a strong three-dimensional vortex generation mechanism known as vortex stretching. In fluid dynamics, they are essentially vortices subjected to stretching associated with a corresponding increase of the component of vorticity in the stretching direction, due to the conservation of angular momentum. Vortex stretching is also the core mechanism on which the turbulence energy cascade relies to establish the structure function.[clarification needed] In general, the stretching mechanism implies thinning of the vortices in the direction perpendicular to the stretching direction due to volume conservation of fluid elements. As a result, the radial length scale of the vortices decreases and the larger flow structures break down into smaller structures. The process continues until the small-scale structures are small enough that their kinetic energy can be transformed by the fluid's molecular viscosity into heat. This is why turbulence is always rotational and three-dimensional. For example, atmospheric cyclones are rotational, but their substantially two-dimensional shapes do not allow vortex generation and so are not turbulent. On the other hand, oceanic flows are dispersive but essentially non-rotational and therefore are not turbulent.
Dissipation 
To sustain turbulent flow, a persistent source of energy supply is required because turbulence dissipates rapidly as the kinetic energy is converted into internal energy by viscous shear stress. Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy "cascades" from these large-scale structures to smaller scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures which produces a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale.
Via this energy cascade, turbulent flow can be realized as a superposition of a spectrum of flow velocity fluctuations and eddies upon a mean flow. The eddies are loosely defined as coherent patterns of flow velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales and the hierarchy can be described by the energy spectrum that measures the energy in flow velocity fluctuations for each length scale (wavenumber). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales these eddies can be divided into three categories.
Integral time scale
The integral time scale for a Lagrangian flow can be defined as:

T = \frac{1}{\langle u'u' \rangle} \int_{0}^{\infty} \langle u'u'(\tau) \rangle \, d\tau

where u' is the velocity fluctuation, and τ is the time lag between measurements.[9]
Integral length scales
Largest scales in the energy spectrum. These eddies obtain energy from the mean flow and also from each other. Thus, these are the energy-production eddies, which contain most of the energy. They have large flow velocity fluctuations and low frequency. Integral scales are highly anisotropic and are defined in terms of the normalized two-point flow velocity correlations. The maximum length of these scales is constrained by the characteristic length of the apparatus. For example, the largest integral length scale of pipe flow is equal to the pipe diameter. In the case of atmospheric turbulence, this length can reach up to the order of several hundred kilometers. The integral length scale can be defined as
L = \frac{1}{\langle u'u' \rangle} \int_{0}^{\infty} \langle u'u'(r) \rangle \, dr
where r is the distance between two measurement locations, and u' is the velocity fluctuation in that same direction.[9]
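To make the two definitions above concrete, here is a small Python sketch (an added illustration, not from the article) that estimates the integral time scale from the normalized autocorrelation of a synthetic velocity-fluctuation record; replacing time lags with spatial separations between two probes would give the integral length scale in the same way. The signal and its built-in correlation time are assumptions.

```python
import numpy as np

# Estimate the integral time scale T = (1/<u'u'>) * int_0^inf <u'(t) u'(t+tau)> dtau
# from a discrete velocity record. The record is synthetic (an AR(1) process with a
# known correlation time), chosen only to illustrate the recipe.

rng = np.random.default_rng(0)
dt = 1.0e-3                    # sampling interval [s]
n = 200_000
tau_true = 0.05                # correlation time built into the synthetic signal [s]
a = np.exp(-dt / tau_true)

u = np.zeros(n)
for i in range(1, n):          # AR(1): exponentially decaying autocorrelation
    u[i] = a * u[i - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()

u -= u.mean()                  # work with the fluctuation u'
var = np.mean(u * u)           # <u'u'>

max_lag = int(10 * tau_true / dt)
rho = np.array([np.mean(u[: n - k] * u[k:]) for k in range(max_lag)]) / var

T_integral = np.sum(rho) * dt  # rectangle-rule estimate of the autocorrelation integral
print(f"estimated integral time scale: {T_integral:.4f} s (target ~ {tau_true} s)")
```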
Kolmogorov length scales 
Smallest scales in the spectrum that form the viscous sub-layer range. In this range, the energy input from nonlinear interactions and the energy drain from viscous dissipation are in exact balance. The small scales have high frequency, causing turbulence to be locally isotropic and homogeneous.
Taylor microscales 
The intermediate scales between the largest and the smallest scales, which make up the inertial subrange. Taylor microscales are not dissipative scales but pass energy down from the largest scales to the smallest without dissipation. Some literature does not treat the Taylor microscale as a characteristic length scale and considers the energy cascade to contain only the largest and smallest scales, with the latter accommodating both the inertial subrange and the viscous sublayer. Nevertheless, the Taylor microscale is often used to describe turbulence more conveniently, as it plays a dominant role in energy and momentum transfer in wavenumber space.
Although it is possible to find some particular solutions of the Navier–Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken so the statistical description is presently modified.[10] Still, a complete description of turbulence remains one of the unsolved problems in physics.

According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first."[11] A similar witticism has been attributed to Horace Lamb (who had published a noted text book on Hydrodynamics)—his choice being quantum electrodynamics (instead of relativity) and turbulence. Lamb was quoted as saying in a speech to the British Association for the Advancement of Science, "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic."[12][13]

A more detailed presentation of turbulence with emphasis on high-Reynolds number flow, intended for a general readership of physicists and applied mathematicians, is found in the Scholarpedia articles by Benzi and Frisch[14] and by Falkovich.[15]

There are many scales of meteorological motions; in this context turbulence affects small-scale motions.[16]

Onset of turbulence

The plume from this candle flame goes from laminar to turbulent. The Reynolds number can be used to predict where this transition will take place

The onset of turbulence can be predicted by the Reynolds number, which is the ratio of inertial forces to viscous forces within a fluid that is subject to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher-velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which, as it increases, progressively inhibits turbulence, because more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation.[17]

This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in scaling of fluid dynamics problems, and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft and its full-size version. Such scaling is not linear, and the application of Reynolds numbers to both situations allows scaling factors to be developed. A flow situation in which the kinetic energy is significantly absorbed due to the action of fluid molecular viscosity gives rise to a laminar flow regime. For this, the dimensionless Reynolds number (Re) is used as a guide.

With respect to laminar and turbulent flow regimes:
  • laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion;
  • turbulent flow occurs at high Reynolds numbers and is dominated by inertial forces, which tend to produce chaotic eddies, vortices and other flow instabilities.
The Reynolds number is defined as[18]
\mathrm{Re} = \frac{\rho v L}{\mu}\,,
where:
  • ρ is the density of the fluid (SI units: kg/m³)
  • v is a characteristic velocity of the fluid with respect to the object (m/s)
  • L is a characteristic linear dimension (m)
  • μ is the dynamic viscosity of the fluid (Pa·s, i.e. N·s/m² or kg/(m·s)).
While there is no theorem directly relating the non-dimensional Reynolds number to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar. In Poiseuille flow, for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040;[19] moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000.
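As a concrete illustration of using Re as a guide, the following Python sketch evaluates Re = ρvL/μ for water flowing in a pipe and applies the approximate thresholds quoted above; the fluid properties and pipe diameter are typical textbook values chosen for illustration, not taken from the article.

```python
# Reynolds number for pipe flow, Re = rho * v * L / mu, with the characteristic
# length L taken as the pipe diameter. Threshold values follow the rough guide
# in the text (transition beginning near ~2040, largely turbulent above ~4000).

def reynolds_number(rho: float, v: float, L: float, mu: float) -> float:
    """rho [kg/m^3], v [m/s], L [m], mu [Pa*s]."""
    return rho * v * L / mu

# Approximate properties of water at ~20 degC (textbook values).
rho_water = 998.0      # kg/m^3
mu_water = 1.0e-3      # Pa*s

for v in (0.01, 0.1, 1.0):                                   # flow speeds [m/s]
    re = reynolds_number(rho_water, v, L=0.05, mu=mu_water)  # assumed 5 cm pipe
    if re < 2040:
        regime = "laminar"
    elif re < 4000:
        regime = "transitional (intermittent turbulence)"
    else:
        regime = "turbulent"
    print(f"v = {v:5.2f} m/s  ->  Re = {re:9.0f}  ({regime})")
```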

The transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or if the density of the fluid is increased.

Heat and momentum transfer

When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them thus increasing the heat transfer and the friction coefficient.

Assume for a two-dimensional turbulent flow that one was able to locate a specific point in the fluid and measure the actual flow velocity v = (vx,vy) of every particle that passed through that point at any given time. Then one would find the actual flow velocity fluctuating about a mean value:
v_x = \underbrace{\overline{v_x}}_{\text{mean value}} + \underbrace{v'_x}_{\text{fluctuation}} \quad \text{and} \quad v_y = \overline{v_y} + v'_y \,;
and similarly for temperature (T = T̄ + T′) and pressure (P = P̄ + P′), where the primed quantities denote fluctuations superposed on the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by dynamics laws, the turbulent fluctuations are regarded as stochastic variables.

The heat flux and momentum transfer (represented by the shear stress τ) in the direction normal to the flow for a given time are
\begin{aligned} q &= \underbrace{v'_y \rho c_P T'}_{\text{experimental value}} = -k_{\text{turb}} \frac{\partial \overline{T}}{\partial y}\,; \\ \tau &= \underbrace{-\rho\, \overline{v'_y v'_x}}_{\text{experimental value}} = \mu_{\text{turb}} \frac{\partial \overline{v_x}}{\partial y}\,; \end{aligned}
where cP is the heat capacity at constant pressure, ρ is the density of the fluid, μturb is the coefficient of turbulent viscosity and kturb is the turbulent thermal conductivity.[3]
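The sketch below (added for illustration, not part of the article) applies the Reynolds decomposition to synthetic velocity and temperature records and evaluates the "experimental value" expressions above directly as averages; the synthetic signals and the correlations built into them are arbitrary assumptions.

```python
import numpy as np

# Reynolds decomposition v = <v> + v' applied to synthetic data, followed by
# direct evaluation of the turbulent momentum flux -rho*<v'_y v'_x> and the
# turbulent heat flux rho*c_P*<v'_y T'>. The signals and their correlations
# are illustrative assumptions, not measurements.

rng = np.random.default_rng(1)
n = 100_000
rho, c_P = 1.2, 1005.0              # air-like density [kg/m^3] and c_P [J/(kg K)]

w = rng.standard_normal(n)          # shared random component to induce correlation
vx = 5.0 + 0.8 * rng.standard_normal(n) + 0.5 * w    # streamwise velocity [m/s]
vy = 0.0 + 0.3 * rng.standard_normal(n) - 0.2 * w    # wall-normal velocity [m/s]
T = 300.0 + 0.5 * rng.standard_normal(n) + 0.4 * w   # temperature [K]

# Decompose into mean and fluctuation.
vx_p = vx - vx.mean()
vy_p = vy - vy.mean()
T_p = T - T.mean()

tau_turb = -rho * np.mean(vy_p * vx_p)        # Reynolds shear stress [Pa]
q_turb = rho * c_P * np.mean(vy_p * T_p)      # turbulent heat flux [W/m^2]

print(f"Reynolds stress -rho<v'_y v'_x> = {tau_turb:.3f} Pa")
print(f"turbulent heat flux rho c_P <v'_y T'> = {q_turb:.1f} W/m^2")
```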

Kolmogorov's theory of 1941

Richardson's notion of turbulence was that a turbulent flow is composed by "eddies" of different sizes. The sizes define a characteristic length scale for the eddies, which are also characterized by flow velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up originating smaller eddies, and the kinetic energy of the initial large eddy is divided into the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy.

In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction can be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted as L). Kolmogorov's idea was that in Richardson's energy cascade this geometrical and directional information is lost as the scale is reduced, so that the statistics of the small scales have a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high.

Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the kinematic viscosity ν and the rate of energy dissipation ε. With only these two parameters, the unique length that can be formed by dimensional analysis is
\eta = \left( \frac{\nu^{3}}{\varepsilon} \right)^{1/4}\,.
This is today known as the Kolmogorov length scale (see Kolmogorov microscales).
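For a feel for the numbers, this small sketch (added here, not from the article) evaluates η = (ν³/ε)^(1/4), together with the companion Kolmogorov time and velocity scales, for air with an assumed dissipation rate; both inputs are order-of-magnitude assumptions.

```python
# Kolmogorov microscales from the kinematic viscosity nu and dissipation rate
# epsilon (both assumed, order-of-magnitude values for air in a laboratory flow):
#   length:   eta     = (nu^3 / eps)^(1/4)
#   time:     tau_eta = (nu / eps)^(1/2)
#   velocity: u_eta   = (nu * eps)^(1/4)

nu = 1.5e-5     # kinematic viscosity of air [m^2/s], approximate
eps = 1.0       # assumed mean dissipation rate [m^2/s^3] (i.e. W/kg)

eta = (nu**3 / eps) ** 0.25
tau_eta = (nu / eps) ** 0.5
u_eta = (nu * eps) ** 0.25

print(f"eta     = {eta * 1e6:.1f} micrometers")
print(f"tau_eta = {tau_eta * 1e3:.2f} ms")
print(f"u_eta   = {u_eta * 1e3:.1f} mm/s")
```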

A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of the Kolmogorov length η, while the input of energy into the cascade comes from the decay of the large scales, of order L. These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length r) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. η ≪ r ≪ L). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range, and it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called "inertial range").

Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range η ≪ r ≪ L are universally and uniquely determined by the scale r and the rate of energy dissipation ε.

The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow. For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function E(k), where k is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field u(x):
\mathbf{u}(\mathbf{x}) = \iiint_{\mathbb{R}^{3}} \hat{\mathbf{u}}(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{x}} \,\mathrm{d}^{3}\mathbf{k}\,,
where û(k) is the Fourier transform of the flow velocity field. Thus, E(k)dk represents the contribution to the kinetic energy from all the Fourier modes with k < |k| < k + dk, and therefore,
\tfrac{1}{2} \langle u_{i} u_{i} \rangle = \int_{0}^{\infty} E(k)\,\mathrm{d}k\,,
where ½⟨uᵢuᵢ⟩ is the mean turbulent kinetic energy of the flow. The wavenumber k corresponding to length scale r is k = 2π/r. Therefore, by dimensional analysis, the only possible form for the energy spectrum function according to Kolmogorov's third hypothesis is
E(k) = C \varepsilon^{2/3} k^{-5/3}\,,
where C would be a universal constant. This is one of the most famous results of Kolmogorov 1941 theory, and considerable experimental evidence has accumulated that supports it.[20]
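A minimal numerical illustration of this result (added here, not from the article), assuming a Kolmogorov constant C of roughly 1.5 (an often-quoted experimental estimate) and an arbitrary dissipation rate and wavenumber band inside the inertial range:

```python
import numpy as np

# E(k) = C * eps^(2/3) * k^(-5/3) evaluated over an assumed inertial-range band.
# C ~ 1.5 is an often-quoted experimental estimate of the Kolmogorov constant;
# eps and the wavenumber band are arbitrary illustrative choices.

C = 1.5
eps = 1.0e-2                             # dissipation rate [m^2/s^3]
k = np.logspace(1, 3, 5)                 # wavenumbers [1/m] inside an assumed inertial range

E = C * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

for ki, Ei in zip(k, E):
    print(f"k = {ki:8.1f} 1/m   E(k) = {Ei:.3e} m^3/s^2")

# Check the slope on a log-log scale: it is exactly -5/3 for this model spectrum.
slope = np.polyfit(np.log(k), np.log(E), 1)[0]
print(f"log-log slope = {slope:.4f} (expected -5/3 = {-5 / 3:.4f})")
```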

In spite of this success, Kolmogorov theory is at present under revision. This theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant in the inertial range. A usual way of studying turbulent flow velocity fields is by means of flow velocity increments:
\delta\mathbf{u}(r) = \mathbf{u}(\mathbf{x} + \mathbf{r}) - \mathbf{u}(\mathbf{x})\,;
that is, the difference in flow velocity between points separated by a vector r (since the turbulence is assumed isotropic, the flow velocity increment depends only on the modulus of r). Flow velocity increments are useful because they emphasize the effects of scales of the order of the separation r when statistics are computed. The statistical scale-invariance implies that the scaling of flow velocity increments should occur with a unique scaling exponent β, so that when r is scaled by a factor λ,
\delta\mathbf{u}(\lambda r)
should have the same statistical distribution as
\lambda^{\beta}\, \delta\mathbf{u}(r)\,,
with β independent of the scale r. From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the flow velocity increments (known as structure functions in turbulence) should scale as
\left\langle \big( \delta\mathbf{u}(r) \big)^{n} \right\rangle = C_{n} (\varepsilon r)^{n/3}\,,
where the brackets denote the statistical average, and the Cn would be universal constants.

There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the n/3 value predicted by the theory, becoming a non-linear function of the order n of the structure function. The universality of the constants has also been questioned. For low orders the discrepancy with the Kolmogorov n/3 value is very small, which explains the success of Kolmogorov theory with regard to low-order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law
E(k) \propto k^{-p}\,,
with 1 < p < 3, the second-order structure function also follows a power law, of the form
\left\langle \big( \delta\mathbf{u}(r) \big)^{2} \right\rangle \propto r^{p-1}\,.
Since the experimental values obtained for the second-order structure function deviate only slightly from the 2/3 value predicted by Kolmogorov theory, the value for p is very near to 5/3 (differences are about 2%[21]). Thus the "Kolmogorov −5/3 spectrum" is generally observed in turbulence. However, for high-order structure functions the difference from the Kolmogorov scaling is significant, and the breakdown of statistical self-similarity is clear. This behavior, and the lack of universality of the Cn constants, are related to the phenomenon of intermittency in turbulence. This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is really universal in the inertial range.

The Epic Project to Record the DNA of All Life on Earth

Original post:  https://singularityhub.com/2018/05/27/the-epic-project-to-record-the-dna-of-all-life-on-earth/#sm.00011mvw2o16odqfpg41083fkgrer

Advances in biotechnology over the past decade have brought rapid progress in the fields of medicine, food, ecology, and neuroscience, among others. With this progress comes ambition for even more progress—realizing we’re capable of, say, engineering crops to yield more food means we may be able to further engineer them to be healthier, too. Building a brain-machine interface that can read basic thoughts may mean another interface could eventually read complex thoughts.

One of the fields where progress seems to be moving especially quickly is genomics, and with that progress, ambitions have grown just as fast. The Earth BioGenome project, which aims to sequence the DNA of all known eukaryotic life on Earth, is a glowing example of both progress and ambition.

A recent paper published in the journal Proceedings of the National Academy of Sciences released new details about the project. It's estimated to take 10 years, cost $4.7 billion, and require more than 200 petabytes of digital storage space (a petabyte is one quadrillion, or 10^15, bytes).

These statistics sound huge, but in reality they’re small compared to the history of genome sequencing up to this point. Take the Human Genome Project, a publicly-funded project to sequence the first full human genome. The effort took over ten years—it started in 1990 and was completed in 2003—and cost roughly $2.7 billion ($4.8 billion in today’s dollars) overall.

Now, just 15 years later, the Earth BioGenome project aims to leverage plummeting costs to sequence, catalog, and analyze the genomes of all known eukaryotic species on Earth in about the same amount of time and for about the same cost.

“Eukaryotes” refers to all plants, animals, and single-celled organisms—all living things except bacteria and archaea (those will be taken care of by the Earth Microbiome Project). It’s estimated there are somewhere between 10–15 million eukaryotic species, from a rhinoceros to a chinchilla down to a flea (and there are far smaller still). Of the 2.3 million of these that we’ve documented, we’ve sequenced less than 15,000 of their genomes (most of which have been microbes).

As impressive as it is that scientists can do this, you may be wondering, what’s the point? There’s a clear benefit to studying the human genome, but what will we get out of decoding the DNA of a rhinoceros or a flea?

Earth BioGenome will essentially allow scientists to take a high-fidelity, digital genetic snapshot of known life on Earth. “The greatest legacy of [the project] will be a complete digital library of life that will guide future discoveries for generations,” said Gene Robinson, one of the project’s leaders, as well as a professor of entomology and the director of the Carl R. Woese Institute for Genomic Biology at the University of Illinois.

The estimated return on investment ratio of the Human Genome Project was 141 to 1—and that’s just the financial side of things. The project hugely contributed to advancing affordable genomics as we know it today, a field that promises to speed the discovery of disease-causing genetic mutations and aid in their diagnosis and treatment. New gene-editing tools like CRISPR have since emerged and may one day be able to cure genetic illnesses.

Extrapolate these returns over millions of species, then, and the insight to be gained—and the concrete benefits from that insight—are likely significant. Genomic research on crops, for example, has already yielded plants that grow faster, produce more food, and are more resistant to pests or severe weather. Researchers may find new medicines or discover better ways to engineer organisms for use in manufacturing or energy. They’ll be able to make intricate discoveries about how and when various species evolved—information that’s thus far been buried in the depths of history.

In the process, they’ll produce a digital gene bank of the world’s species. What other useful genes will lurk there to inspire a new generation of synthetic biologists?

“[In the future] designing genomes will be a personal thing, a new art form as creative as painting or sculpture. Few of the new creations will be masterpieces, but a great many will bring joy to their creators and variety to our fauna and flora,” renowned physicist Freeman Dyson famously said in 2007.

Just a little over ten years later his vision, which would have been closer to science fiction not so long ago, is approaching reality. Earth BioGenome would put a significant fraction of Earth’s genetic palette at future synthetic biologists’  fingertips.

But it’s not a done deal yet. In addition to funding, the project’s finer details still need to be firmed up; one of the biggest questions is how, exactly, scientists will go about the gargantuan task of collecting intact DNA samples from every known species on Earth. Some museum specimens will be used, but many likely haven’t been preserved in such a way that the DNA could produce a high-quality genome. One important source of samples will be the Global Genome Biodiversity Network.

“Genomics has helped scientists develop new medicines and new sources of renewable energy, feed a growing population, protect the environment, and support human survival and well-being,” Robinson said. “The Earth BioGenome Project will give us insight into the history and diversity of life and help us better understand how to conserve it.”

Supersymmetry

From Wikipedia, the free encyclopedia

In particle physics, supersymmetry (SUSY) is a theory that proposes a relationship between two basic classes of elementary particles: bosons, which have an integer-valued spin, and fermions, which have a half-integer spin.[1][2] A type of spacetime symmetry, supersymmetry is a possible candidate for undiscovered particle physics and, if confirmed correct, would be an elegant solution to many current problems in particle physics, resolving various areas where current theories are believed to be incomplete. A supersymmetric extension to the Standard Model would resolve major hierarchy problems within gauge theory, by guaranteeing that quadratic divergences of all orders will cancel out in perturbation theory.

In supersymmetry, each particle from one group would have an associated particle in the other, known as its superpartner, the spin of which differs by a half-integer. These superpartners would be new and undiscovered particles. For example, there would be a particle called a "selectron" (superpartner electron), a bosonic partner of the electron. In the simplest supersymmetry theories, with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. Since no such superpartners have been found with present-day equipment, if supersymmetry exists it must be a spontaneously broken symmetry, allowing superpartners to differ in mass.[3][4][5] Spontaneously broken supersymmetry could solve many mysterious problems in particle physics, including the hierarchy problem.

There is no evidence at this time to show whether or not supersymmetry is correct, or what other extensions to current models might be more accurate. In part this is because it is only since around 2010 that particle accelerators specifically designed to study physics beyond the Standard Model have become operational, and because it is not yet known where exactly to look nor the energies required for a successful search. The main reason supersymmetry is supported by physicists is that the current theories are known to be incomplete and their limitations are well established, and supersymmetry would be an attractive solution to some of the major concerns. Direct confirmation would entail production of superpartners in collider experiments, such as the Large Hadron Collider (LHC). The first runs of the LHC found no previously unknown particles other than the Higgs boson, which was already suspected to exist as part of the Standard Model, and therefore no evidence for supersymmetry.[6][7]

These findings disappointed many physicists, who believed that supersymmetry (and other theories relying upon it) was by far the most promising theory for "new" physics, and had hoped for signs of unexpected results from these runs.[8][9] Former enthusiastic supporter Mikhail Shifman went as far as urging the theoretical community to search for new ideas and accept that supersymmetry was a failed theory.[10] However, it has also been argued that this "naturalness" crisis was premature, because various calculations were too optimistic about the limits of masses which would allow a supersymmetry-based solution.[11][12] The collider energies needed for such a discovery were likely too low, so superpartners could exist but be more massive than the LHC can detect.

Motivations

There are numerous phenomenological motivations for supersymmetry close to the electroweak scale, as well as technical motivations for supersymmetry at any scale.

The hierarchy problem

Supersymmetry close to the electroweak scale ameliorates the hierarchy problem that afflicts the Standard Model.[13] In the Standard Model, the electroweak scale receives enormous Planck-scale quantum corrections. The observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine tuning. In a supersymmetric theory, on the other hand, Planck-scale quantum corrections cancel between partners and superpartners (owing to a minus sign associated with fermionic loops). The hierarchy between the electroweak scale and the Planck scale is achieved in a natural manner, without miraculous fine-tuning.

Gauge coupling unification

The idea that the gauge symmetry groups unify at high energy is called grand unification theory. In the Standard Model, however, the weak, strong and electromagnetic couplings fail to unify at high energy. In a supersymmetric theory, the running of the gauge couplings is modified, and precise high-energy unification of the gauge couplings is achieved. The modified running also provides a natural mechanism for radiative electroweak symmetry breaking.

Dark matter

TeV-scale supersymmetry (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations.[14][15]

Other technical motivations

Supersymmetry is also motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. It is also a necessary feature of the most popular candidate for a theory of everything, superstring theory, and a SUSY theory could explain the issue of cosmological inflation.

Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories like the Standard Model with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently.[16]

History

A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa in 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time.[17][18][19][20]

J. L. Gervais and B. Sakita (in 1971),[21] Yu. A. Golfand and E. P. Likhtman (also in 1971), and D. V. Volkov and V. P. Akulov (1972),[22] independently rediscovered supersymmetry in the context of quantum field theory, a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure on which the Gervais−Sakita rediscovery was based directly first arose in 1971[23] in the context of an early version of string theory by Pierre Ramond, John H. Schwarz and André Neveu.

Finally, Julius Wess and Bruno Zumino (in 1974)[24] identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which identified them as remarkable QFTs, and they and Abdus Salam and their fellow researchers introduced early particle physics applications. The mathematical structure of supersymmetry (graded Lie superalgebras) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics,[25][26] critical phenomena,[27] quantum mechanics to statistical physics. It remains a vital part of many proposed theories of physics.

The first realistic supersymmetric version of the Standard Model was proposed in 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem.

Applications

Extension of possible symmetry groups

One reason that physicists explored supersymmetry is because it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries, and the Coleman–Mandula theorem showed that under certain assumptions, the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or, if there is no mass gap, the conformal group with a compact internal symmetry group. In 1971 Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. In 1975 the Haag–Łopuszański–Sohnius theorem analyzed all possible superalgebras in the general form, including those with an extended number of supergenerators and central charges. This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories.

The supersymmetry algebra

Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations. According to the spin-statistics theorem, bosonic fields commute while fermionic fields anticommute. Combining the two kinds of fields into a single algebra requires the introduction of a Z2-grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra.
The simplest supersymmetric extension of the Poincaré algebra is the super-Poincaré algebra. Expressed in terms of two Weyl spinors, it has the following anti-commutation relation:
\{ Q_{\alpha}, \bar{Q}_{\dot{\beta}} \} = 2 (\sigma^{\mu})_{\alpha\dot{\beta}} P_{\mu}
and all other anti-commutation relations between the Qs and commutation relations between the Qs and Ps vanish. In the above expression P_{\mu} = -i\partial_{\mu} are the generators of translation and \sigma^{\mu} are the Pauli matrices.
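As a small numerical aside added here (not part of the original article), one can verify the 2×2 matrix structure on the right-hand side: with the convention σ^μ = (I, σ^x, σ^y, σ^z) and metric signature (+,−,−,−), the matrix σ^μ P_μ is Hermitian and its determinant equals the Lorentz invariant P_μ P^μ. A minimal sketch under those assumptions:

```python
import numpy as np

# Numerical check: with sigma^mu = (I, sigma_x, sigma_y, sigma_z) and metric
# signature (+,-,-,-), sigma^mu P_mu = P^0 * I - P.sigma is Hermitian and
# det(sigma^mu P_mu) = P_mu P^mu, the invariant mass squared.

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

P = np.array([5.0, 1.0, 2.0, 3.0])        # arbitrary four-momentum P^mu = (E, px, py, pz)

M = P[0] * I2 - (P[1] * sx + P[2] * sy + P[3] * sz)          # sigma^mu P_mu
invariant = P[0] ** 2 - P[1] ** 2 - P[2] ** 2 - P[3] ** 2    # P_mu P^mu

print("Hermitian:", np.allclose(M, M.conj().T))
print("det(sigma^mu P_mu) =", np.linalg.det(M).real, " P_mu P^mu =", invariant)
```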

There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup.

The Supersymmetric Standard Model

Incorporating supersymmetry into the Standard Model requires doubling the number of particles since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest possible supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM) which can include the necessary additional new particles that are able to be superpartners of those in the Standard Model.

Cancellation of the Higgs boson quadratic mass renormalization between fermionic top quark loop and scalar stop squark tadpole Feynman diagrams in a supersymmetric extension of the Standard Model

One of the main motivations for SUSY comes from the quadratically divergent contributions to the Higgs mass squared. The quantum mechanical interactions of the Higgs boson cause a large renormalization of the Higgs mass, and unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. This problem is known as the hierarchy problem. Supersymmetry reduces the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions. If supersymmetry is restored at the weak scale, then the Higgs mass is related to supersymmetry breaking, which can be induced from small non-perturbative effects, explaining the vastly different scales in the weak interactions and gravitational interactions.

In many supersymmetric Standard Models there is a heavy stable particle (such as neutralino) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is related closely to R-parity.

The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, but the ground state of the theory does not respect the symmetry and supersymmetry is broken spontaneously. The supersymmetry breaking cannot be accomplished by the particles of the MSSM as they currently appear. This means that there is a new sector of the theory that is responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry and must give superparticles TeV-scale masses. There are many models that can do this, and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY-breaking terms are added to the theory; these break SUSY explicitly but could never arise from a complete theory of supersymmetry breaking.

Gauge-coupling unification

One piece of evidence for supersymmetry is gauge coupling unification. The renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the present particle content of the theory. These coupling constants do not quite meet together at a common energy scale if we run the renormalization group using the Standard Model.[28] With the addition of minimal SUSY, joint convergence of the coupling constants is projected at approximately 10^16 GeV.[28]
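To illustrate this qualitatively, here is a rough one-loop running sketch added for this post. The starting inverse couplings at the Z mass and the one-loop beta coefficients (Standard Model: 41/10, −19/6, −7; MSSM: 33/5, 1, −3, in GUT normalization) are approximate textbook values I am assuming, and running the MSSM coefficients all the way down from M_Z is a simplification; properly they apply only above the superpartner mass scale.

```python
import numpy as np

# One-loop running of the inverse gauge couplings,
#   1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / (2*pi)) * ln(mu / M_Z),
# with GUT-normalized U(1). Inputs are approximate textbook numbers, used only
# to show that the MSSM coefficients bring the three couplings much closer
# together near 1e16 GeV than the Standard Model coefficients do.

M_Z = 91.19                                      # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])       # approx. 1/alpha_1, 1/alpha_2, 1/alpha_3 at M_Z

b_SM = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])   # one-loop coefficients, Standard Model
b_MSSM = np.array([33.0 / 5.0, 1.0, -3.0])          # one-loop coefficients, MSSM

def run(alpha_inv0, b, mu):
    """Inverse couplings at scale mu [GeV], one-loop approximation."""
    return alpha_inv0 - b / (2.0 * np.pi) * np.log(mu / M_Z)

mu = 1.0e16  # GeV, roughly where MSSM unification is projected
print("1/alpha_i at mu = 1e16 GeV")
print("  SM  :", np.round(run(alpha_inv_MZ, b_SM, mu), 1))
print("  MSSM:", np.round(run(alpha_inv_MZ, b_MSSM, mu), 1))
```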

Supersymmetric quantum mechanics

Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons, and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right.

SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy.
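To make the partner-Hamiltonian statement concrete, the following sketch (an added illustration, not from the article) discretizes an operator A = d/dx + W(x) on a grid and compares the spectra of H1 = AᵀA and H2 = AAᵀ; linear algebra guarantees that these share their eigenvalues, mirroring the level degeneracy described above. The superpotential W(x) = x is an arbitrary choice.

```python
import numpy as np

# SUSY-QM-style partner Hamiltonians built from A = d/dx + W(x) on a finite grid.
# With A represented as a square matrix, H1 = A^T A and H2 = A A^T have the same
# spectrum, which mirrors the statement that partner Hamiltonians have matching
# energy levels. (In continuum SUSY QM a zero-energy ground state, if present,
# appears in only one of the two partners.) Here W(x) = x is an assumed choice.

n = 400
x = np.linspace(-8.0, 8.0, n)
dx = x[1] - x[0]

# Forward-difference derivative matrix and the (assumed) superpotential W(x) = x.
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n))) / dx
W = np.diag(x)

A = D + W
H1 = A.T @ A      # "bosonic" partner
H2 = A @ A.T      # "fermionic" partner

e1 = np.sort(np.linalg.eigvalsh(H1))
e2 = np.sort(np.linalg.eigvalsh(H2))

print("lowest eigenvalues of H1:", np.round(e1[:6], 3))
print("lowest eigenvalues of H2:", np.round(e2[:6], 3))
```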

Supersymmetry in condensed matter physics

SUSY concepts have provided useful extensions to the WKB approximation. Additionally, SUSY has been applied to disorder-averaged systems both quantum and non-quantum (through statistical mechanics), the Fokker-Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling one particle and as such the 'statistics' do not matter. The use of the supersymmetry method provides a mathematically rigorous alternative to the replica trick, but only in non-interacting systems; both approaches attempt to address the so-called 'problem of the denominator' under disorder averaging. For more on the applications of supersymmetry in condensed matter physics, see the book.[29]

Supersymmetry in optics

Integrated optics was recently found[30] to provide a fertile ground on which certain ramifications of SUSY can be explored in readily accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching, mode conversion[31] and space-division multiplexing becomes possible. SUSY transformations have also been proposed as a way to address inverse scattering problems in optics and as a one-dimensional transformation optics.[32]

Supersymmetry in dynamical systems

All stochastic (partial) differential equations, the models for all types of continuous time dynamical systems, possess topological supersymmetry.[33][34] In the operator representation of stochastic evolution, the topological supersymmetry is the exterior derivative which is commutative with the stochastic evolution operator defined as the stochastically averaged pullback induced on differential forms by SDE-defined diffeomorphisms of the phase space. The topological sector of the so-emerging supersymmetric theory of stochastic dynamics can be recognized as the Witten-type topological field theory.

The meaning of the topological supersymmetry in dynamical systems is the preservation of the phase space continuity: infinitely close points remain close during continuous time evolution even in the presence of noise. When the topological supersymmetry is broken spontaneously, this property is violated in the limit of infinitely long temporal evolution and the model can be said to exhibit (the stochastic generalization of) the butterfly effect. From a more general perspective, spontaneous breakdown of the topological supersymmetry is the theoretical essence of the ubiquitous dynamical phenomenon variously known as chaos, turbulence, self-organized criticality, etc. The Goldstone theorem explains the associated emergence of long-range dynamical behavior that manifests itself as 1/f noise, the butterfly effect, and the scale-free statistics of sudden (instantonic) processes such as earthquakes, neuroavalanches, and solar flares, known as Zipf's law and the Richter scale.

Supersymmetry in mathematics

SUSY is also sometimes studied mathematically for its intrinsic properties. This is because it describes complex fields satisfying a property known as holomorphy, which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful "toy models" of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories[35] that interchanges particles and monopoles.

The proof of the Atiyah-Singer index theorem is much simplified by the use of supersymmetric quantum mechanics.

Supersymmetry in quantum gravity

Supersymmetry is part of superstring theory, a string theory of quantum gravity, although it could in theory be a component of other quantum gravity theories as well, such as loop quantum gravity. For superstring theory to be consistent, supersymmetry seems to be required at some level (although it may be a strongly broken symmetry). If experimental evidence confirms supersymmetry in the form of supersymmetric particles such as the neutralino that is often believed to be the lightest superpartner, some people believe this would be a major boost to superstring theory. Since supersymmetry is a required component of superstring theory, any discovered supersymmetry would be consistent with superstring theory. If the Large Hadron Collider and other major particle physics experiments fail to detect supersymmetric partners, many versions of superstring theory which had predicted certain low mass superpartners to existing particles may need to be significantly revised.

General supersymmetry

Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions.

Extended supersymmetry

It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained the field content and interactions are. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8). In four dimensions, a spinor has four degrees of freedom, so the minimal number of supersymmetry generators in four dimensions is four, and having eight copies of supersymmetry means that there are 32 supersymmetry generators.

The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2. It is not known how to make massless fields with spin greater than two interact, so the maximal number of supersymmetry generators considered is 32. This is due to the Weinberg-Witten theorem. This corresponds to an N = 8 supersymmetry theory. Theories with 32 supersymmetries automatically have a graviton.

For four dimensions there are the following theories, with the corresponding multiplets[36] (CPT adds a copy whenever the multiplet is not invariant under that symmetry; exponents below denote multiplicities of the listed helicity states):
  • N = 1
Chiral multiplet: (0, 1/2). Vector multiplet: (1/2, 1). Gravitino multiplet: (1, 3/2). Graviton multiplet: (3/2, 2).
  • N = 2
Hypermultiplet: (−1/2, 0^2, 1/2). Vector multiplet: (0, (1/2)^2, 1). Supergravity multiplet: (1, (3/2)^2, 2).
  • N = 4
Vector multiplet: (−1, (−1/2)^4, 0^6, (1/2)^4, 1). Supergravity multiplet: (0, (1/2)^4, 1^6, (3/2)^4, 2).
  • N = 8
Supergravity multiplet: (−2, (−3/2)^8, (−1)^28, (−1/2)^56, 0^70, (1/2)^56, 1^28, (3/2)^8, 2).

Supersymmetry in alternate numbers of dimensions

It is possible to have supersymmetry in dimensions other than four. Because the properties of spinors change drastically between different dimensions, each dimension has its own characteristics. In d dimensions, the size of spinors is approximately 2^(d/2) or 2^((d − 1)/2). Since the maximum number of supersymmetries is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven.

Current status

Supersymmetric models are constrained by a variety of experiments, including measurements of low-energy observables – for example, the anomalous magnetic moment of the muon at Brookhaven; the WMAP dark matter density measurement and direct detection experiments – for example, XENON-100 and LUX; and by particle collider experiments, including B-physics, Higgs phenomenology and direct searches for superpartners (sparticles), at the Large Electron–Positron Collider, Tevatron and the LHC.

Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were set at CERN by the UA1 experiment and the UA2 experiment at the Super Proton Synchrotron. LEP later set very strong limits,[37] which in 2006 were extended by the D0 experiment at the Tevatron.[38][39] From 2003 to 2015, WMAP's and Planck's dark matter density measurements have strongly constrained supersymmetry models, which, if they explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density.

Prior to the beginning of the LHC, in 2009 fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV.[40]

The first run of the LHC found no evidence for supersymmetry, and, as a result, surpassed existing experimental limits from the Large Electron–Positron Collider and Tevatron and partially excluded the aforementioned expected ranges.[41]

In 2011–12, the LHC discovered a Higgs boson with a mass of about 125 GeV, and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson, and, in the absence of fine tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 135 GeV.[42]

The LHC result seemed problematic for the minimal supersymmetric model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks, which many theorists had considered to be "unnatural" (see naturalness (physics) and fine tuning).[43]

Cooperative

From Wikipedia, the free encyclopedia ...