
Main sequence

From Wikipedia, the free encyclopedia
A Hertzsprung–Russell diagram plots the luminosity (or absolute magnitude) of a star against its color index (represented as B−V). The main sequence is visible as a prominent diagonal band from upper left to lower right. This plot shows 22,000 stars from the Hipparcos Catalog together with 1,000 low-luminosity stars (red and white dwarfs) from the Gliese Catalogue of Nearby Stars.

In astronomy, the main sequence is a classification of stars which appear on plots of stellar color versus brightness as a continuous and distinctive band. Stars spend the majority of their lives on the main sequence, during which core hydrogen burning is dominant. These main-sequence stars, or sometimes interchangeably dwarf stars, are the most numerous true stars in the universe and include the Sun. Color-magnitude plots are known as Hertzsprung–Russell diagrams after Ejnar Hertzsprung and Henry Norris Russell.

When a gaseous nebula undergoes sufficient gravitational collapse, the high pressure and temperature concentrated at the core trigger the nuclear fusion of hydrogen into helium. The thermal energy from this process radiates out from the hot, dense core, generating a strong pressure gradient. It is this pressure gradient that counters the star's collapse under gravity, maintaining the star in a state of hydrostatic equilibrium. The star's position on the main sequence is determined primarily by its mass, but also by its age and chemical composition. Radiation is not the only method of energy transfer in stars, however: convection also plays a role in the movement of energy, particularly in the cores of stars greater than 1.3 to 1.5 times the Sun's mass, again depending on age and chemical composition.

When discussing chemical composition, astronomers generally refer to the metallicity of the star. This is the abundance of heavier-than-helium elements present in the star. For example, the fraction of the Sun by mass currently composed of hydrogen (denoted X) is 74.9%. For helium (denoted Y) it is 23.8%, meaning the star's metallicity, or mass fraction of all other elements, is 1.3% (denoted Z). This is a typical range for similar-mass main sequence stars. In fact, a higher metallicity leads to a higher opacity whereby the energy production can remain concentrated in the core without being radiated or transferred away to the star's outer layers. This hotter environment speeds up nuclear fusion and decreases the amount of time the star will spend on the main sequence.
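Since X, Y, and Z are mass fractions of the same star, they must sum to unity, so the metallicity follows directly from the hydrogen and helium fractions. A quick check using the solar values quoted above:

```python
# Solar mass fractions quoted above.
X = 0.749  # hydrogen
Y = 0.238  # helium

# Metallicity Z is the mass fraction of everything heavier than helium.
Z = 1.0 - X - Y

print(f"Z = {Z:.3f}")  # 0.013, i.e. 1.3%
```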

The main sequence is divided into upper and lower parts, based on the dominant process that a star uses to generate energy. The Sun, along with main-sequence stars below about 1.5 M☉, primarily fuses hydrogen atoms together in a series of stages to form helium, a sequence called the proton–proton chain. Above this mass, in the upper main sequence, the nuclear fusion process mainly uses atoms of carbon, nitrogen, and oxygen as intermediaries in the CNO cycle that produces helium from hydrogen atoms. The proton–proton chain is still occurring, but it produces less energy than the CNO cycle. Main-sequence stars where the CNO cycle is the dominant energy production process undergo convection in their core regions, which acts to stir up the newly created helium and maintain the proportion of fuel needed for fusion to occur. Below this mass, stars have cores that are entirely radiative with convective zones near the surface. With decreasing stellar mass, the proportion of the star forming a convective envelope steadily increases. Main-sequence stars below 0.4 M☉ undergo convection throughout their mass. When core convection does not occur, a helium-rich core develops surrounded by an outer layer of hydrogen.

The more massive a star is, the shorter its lifespan on the main sequence. After the hydrogen fuel at the core has been consumed, the star evolves away from the main sequence on the HR diagram, into a supergiant, red giant, or directly to a white dwarf.

History

In the early part of the 20th century, information about the types and distances of stars became more readily available. The spectra of stars were shown to have distinctive features, which allowed them to be categorized. Annie Jump Cannon and Edward Charles Pickering at Harvard College Observatory developed a method of categorization that became known as the Harvard Classification Scheme, published in the Harvard Annals in 1901.

In Potsdam in 1906, the Danish astronomer Ejnar Hertzsprung noticed that the reddest stars—classified as K and M in the Harvard scheme—could be divided into two distinct groups. These stars are either much brighter than the Sun or much fainter. To distinguish these groups, he called them "giant" and "dwarf" stars. The following year he began studying star clusters: large groupings of stars that are co-located at approximately the same distance. For these stars, he published the first plots of color versus luminosity. These plots showed a prominent and continuous sequence of stars, which he named the main sequence.

At Princeton University, Henry Norris Russell was following a similar course of research. He was studying the relationship between the spectral classification of stars and their actual brightness as corrected for distance—their absolute magnitude. For this purpose, he used a set of stars that had reliable parallaxes and many of which had been categorized at Harvard. When he plotted the spectral types of these stars against their absolute magnitude, he found that dwarf stars followed a distinct relationship. This allowed the real brightness of a dwarf star to be predicted with reasonable accuracy.

Of the red stars observed by Hertzsprung, the dwarf stars also followed the spectra-luminosity relationship discovered by Russell. However, giant stars are much brighter than dwarfs and so do not follow the same relationship. Russell proposed that "giant stars must have low density or great surface brightness, and the reverse is true of dwarf stars". The same curve also showed that there were very few faint white stars.

In 1933, Bengt Strömgren introduced the term Hertzsprung–Russell diagram to denote a luminosity-spectral class diagram. This name reflected the parallel development of this technique by both Hertzsprung and Russell earlier in the century.

As evolutionary models of stars were developed during the 1930s, it was shown that, for stars with the same composition, the star's mass determines its luminosity and radius. Conversely, when a star's chemical composition and its position on the main sequence are known, the star's mass and radius can be deduced. This became known as the Vogt–Russell theorem, named after Heinrich Vogt and Henry Norris Russell. It was subsequently discovered that this relationship breaks down somewhat for stars of non-uniform composition.

A refined scheme for stellar classification was published in 1943 by William Wilson Morgan and Philip Childs Keenan. The MK classification assigned each star a spectral type—based on the Harvard classification—and a luminosity class. The Harvard classification had been developed by assigning a different letter to each star based on the strength of the hydrogen spectral line before the relationship between spectra and temperature was known. When ordered by temperature and when duplicate classes were removed, the spectral types of stars followed, in order of decreasing temperature with colors ranging from blue to red, the sequence O, B, A, F, G, K, and M. (A popular mnemonic for memorizing this sequence of stellar classes is "Oh Be A Fine Girl/Guy, Kiss Me".) The luminosity class ranged from I to V, in order of decreasing luminosity. Stars of luminosity class V belonged to the main sequence.

In April 2018, astronomers reported the detection of the most distant "ordinary" (i.e., main sequence) star, named Icarus (formally, MACS J1149 Lensed Star 1), at 9 billion light-years away from Earth.

Formation and evolution

Zero age main sequence and evolutionary tracks
The violent youth of stars like the Sun

When a protostar is formed from the collapse of a giant molecular cloud of gas and dust in the local interstellar medium, the initial composition is homogeneous throughout, consisting of approximately 70% hydrogen, 28% helium, and trace amounts of other elements, by mass. The initial mass of the star depends on the local conditions within the cloud. (The mass distribution of newly formed stars is described empirically by the initial mass function.) During the initial collapse, this pre-main-sequence star generates thermal energy through the increase in pressure arising due to its gravitational contraction. During this phase, before hydrogen ignition, the star will spend a length of time contracting known as the Kelvin-Helmholtz, or thermal, timescale. This timescale describes the length of time a star can last by radiating its internal kinetic energy. Once sufficiently dense, stars begin converting hydrogen into helium and producing energy through an exothermic nuclear fusion process. The nuclear timescale is useful to describe the length of time a star can last during this next phase.
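The Kelvin–Helmholtz timescale can be estimated as the gravitational energy of the star, of order GM²/R, divided by its luminosity L. A minimal sketch with solar values (the SI constants below are standard figures, not taken from this article):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
L_sun = 3.828e26     # solar luminosity, W
YEAR = 3.156e7       # seconds per year

def kelvin_helmholtz_years(M, R, L):
    """Thermal timescale: gravitational energy ~ G*M^2/R radiated at luminosity L."""
    return G * M**2 / (R * L) / YEAR

t_kh = kelvin_helmholtz_years(M_sun, R_sun, L_sun)
print(f"{t_kh:.2e} yr")  # of order 3e7 years for the Sun
```

This recovers the classic result that a Sun-like star could shine for only tens of millions of years on contraction alone, far shorter than its nuclear timescale.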

When nuclear fusion of hydrogen becomes the dominant energy production process and the excess energy gained from gravitational contraction has been lost, the star lies along a curve on the Hertzsprung–Russell diagram (or HR diagram) called the standard main sequence. Astronomers will sometimes refer to this stage as "zero-age main sequence", or ZAMS. The ZAMS curve can be calculated using computer models of stellar properties at the point when stars begin hydrogen fusion. From this point, the brightness and surface temperature of stars typically increase with age.

A star remains near its initial position on the main sequence until a significant amount of hydrogen in the core has been consumed, then begins to evolve into a more luminous star. (On the HR diagram, the evolving star moves up and to the right of the main sequence.) Thus the main sequence represents the primary hydrogen-burning stage of a star's lifetime.

Classification

Hot and brilliant O-type main-sequence stars in star-forming regions. These are all regions of star formation that contain many hot young stars including several bright stars of spectral type O.

Main-sequence stars are divided into the spectral types O, B, A, F, G, K, and M, in order of decreasing temperature. M-type (and, to a lesser extent, K-type) main-sequence stars are usually referred to as red dwarfs.

Properties

The majority of stars on a typical HR diagram lie along the main-sequence curve. This line is pronounced because both the spectral type and the luminosity depend only on a star's mass, at least to a zeroth-order approximation, as long as it is fusing hydrogen at its core—and that is what almost all stars spend most of their "active" lives doing.

The temperature of a star determines its spectral type via its effect on the physical properties of plasma in its photosphere. A star's energy emission as a function of wavelength is influenced by both its temperature and composition. A key indicator of this energy distribution is given by the color index, B−V, which is the difference between the star's magnitudes in blue (B) and green-yellow (V) light, measured by means of filters. This difference in magnitude provides a measure of a star's temperature.
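One published empirical fit from B−V to effective temperature (Ballesteros 2012) illustrates how the color index encodes temperature. The formula is an outside assumption, not something given in this article:

```python
def temp_from_bv(bv):
    """Ballesteros (2012) empirical fit: effective temperature (K) from B-V color."""
    return 4600.0 * (1.0 / (0.92 * bv + 1.7) + 1.0 / (0.92 * bv + 0.62))

# A Sun-like color index of about 0.65 gives a temperature near 5800 K.
print(round(temp_from_bv(0.65)))
```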

Dwarf terminology

Main-sequence stars are called dwarf stars, but this terminology is partly historical and can be somewhat confusing. For the cooler stars, dwarfs such as red dwarfs, orange dwarfs, and yellow dwarfs are indeed much smaller and dimmer than other stars of those colors. However, for hotter blue and white stars, the difference in size and brightness between so-called "dwarf" stars that are on the main sequence and so-called "giant" stars that are not, becomes smaller. For the hottest stars the difference is not directly observable and for these stars, the terms "dwarf" and "giant" refer to differences in spectral lines which indicate whether a star is on or off the main sequence. Nevertheless, very hot main-sequence stars are still sometimes called dwarfs, even though they have roughly the same size and brightness as the "giant" stars of that temperature.

The common use of "dwarf" to mean the main sequence is confusing in another way because there are dwarf stars that are not main-sequence stars. For example, a white dwarf is the dead core left over after a star has shed its outer layers, and is much smaller than a main-sequence star, roughly the size of Earth. These represent the final evolutionary stage of many main-sequence stars.

Parameters

Comparison of main sequence stars of each spectral class

By treating the star as an idealized energy radiator known as a black body, the luminosity L and radius R can be related to the effective temperature Teff by the Stefan–Boltzmann law:

L = 4πσR²Teff⁴

where σ is the Stefan–Boltzmann constant. As the position of a star on the HR diagram shows its approximate luminosity, this relation can be used to estimate its radius.
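As a sketch of that radius estimate, the Stefan–Boltzmann law can be inverted for R. Using solar values (standard SI constants assumed):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_sun = 3.828e26   # solar luminosity, W

def radius_from_luminosity(L, T_eff):
    """Invert the Stefan-Boltzmann law: R = sqrt(L / (4*pi*sigma*T^4))."""
    return math.sqrt(L / (4.0 * math.pi * SIGMA * T_eff**4))

R = radius_from_luminosity(L_sun, 5772.0)
print(f"{R:.3e} m")  # close to the solar radius, ~6.96e8 m
```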

The mass, radius, and luminosity of a star are closely interlinked, and their respective values can be approximated by three relations. First is the Stefan–Boltzmann law, which relates the luminosity L, the radius R and the surface temperature Teff. Second is the mass–luminosity relation, which relates the luminosity L and the mass M. Finally, the relationship between M and R is close to linear: the ratio M/R increases by a factor of only three over 2.5 orders of magnitude of M. This ratio is roughly proportional to the star's inner temperature TI, and its extremely slow increase reflects the fact that the rate of energy generation in the core strongly depends on this temperature while also having to fit the mass–luminosity relation. Thus, a too-high or too-low temperature will result in stellar instability.

A better approximation is to take ε = L/M, the energy generation rate per unit mass; ε is proportional to TI^15, where TI is the core temperature. This is suitable for stars at least as massive as the Sun, exhibiting the CNO cycle, and gives the better fit R ∝ M^0.78.

Sample parameters

The table below shows typical values for stars along the main sequence. The values of luminosity (L), radius (R), and mass (M) are relative to the Sun—a dwarf star with a spectral classification of G2 V. The actual values for a star may vary by as much as 20–30% from the values listed below.

Table of main-sequence stellar parameters

Stellar class | Radius (R/R☉) | Mass (M/M☉) | Luminosity (L/L☉) | Temp. (K) | Examples
O2 | 12   | 100  | 800,000 | 50,000 | BI 253
O6 | 9.8  | 35   | 180,000 | 38,000 | Theta1 Orionis C
B0 | 7.4  | 18   | 20,000  | 30,000 | Phi1 Orionis
B5 | 3.8  | 6.5  | 800     | 16,400 | Pi Andromedae A
A0 | 2.5  | 3.2  | 80      | 10,800 | Alpha Coronae Borealis A
A5 | 1.7  | 2.1  | 20      | 8,620  | Beta Pictoris
F0 | 1.3  | 1.7  | 6       | 7,240  | Gamma Virginis
F5 | 1.2  | 1.3  | 2.5     | 6,540  | Eta Arietis
G0 | 1.05 | 1.10 | 1.26    | 5,920  | Beta Comae Berenices
G2 | 1    | 1    | 1       | 5,780  | Sun
G5 | 0.93 | 0.93 | 0.79    | 5,610  | Alpha Mensae
K0 | 0.85 | 0.78 | 0.40    | 5,240  | 70 Ophiuchi A
K5 | 0.74 | 0.69 | 0.16    | 4,410  | 61 Cygni A
M0 | 0.51 | 0.60 | 0.072   | 3,800  | Lacaille 8760
M5 | 0.18 | 0.15 | 0.0027  | 3,120  | EZ Aquarii A
M8 | 0.11 | 0.08 | 0.0004  | 2,650  | Van Biesbroeck's star
L1 | 0.09 | 0.07 | 0.00017 | 2,200  | 2MASS J0523−1403
Representative lifetimes of stars as a function of their masses
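The table rows can be used to check the rough mass–luminosity scaling discussed later. Fitting a power law L ∝ M^a through the Sun and the B0 row gives an exponent near the commonly quoted 3.5 (the data points are the table's own; the exponent is derived here, not quoted):

```python
import math

# (mass M/Msun, luminosity L/Lsun) pairs from the table above
b0 = (18.0, 20000.0)   # B0 row
# The Sun's row is (1, 1), so a power law L = M**a through both gives:
a = math.log(b0[1]) / math.log(b0[0])
print(f"a = {a:.2f}")  # roughly 3.4
```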

Energy generation

Logarithm of the relative energy output (ε) of proton–proton (PP), CNO and triple-α fusion processes at different temperatures (T). The dashed line shows the combined energy generation of the PP and CNO processes within a star. At the Sun's core temperature, the PP process is more efficient.

All main-sequence stars have a core region where energy is generated by nuclear fusion. The temperature and density of this core are at the levels necessary to sustain the energy production that will support the remainder of the star. A reduction of energy production would cause the overlaying mass to compress the core, resulting in an increase in the fusion rate because of higher temperature and pressure. Likewise, an increase in energy production would cause the star to expand, lowering the pressure at the core. Thus the star forms a self-regulating system in hydrostatic equilibrium that is stable over the course of its main-sequence lifetime.

Main-sequence stars employ two types of hydrogen fusion processes, and the rate of energy generation from each type depends on the temperature in the core region. Astronomers divide the main sequence into upper and lower parts, based on which of the two is the dominant fusion process. In the lower main sequence, energy is primarily generated as the result of the proton–proton chain, which directly fuses hydrogen together in a series of stages to produce helium. Stars in the upper main sequence have sufficiently high core temperatures to efficiently use the CNO cycle (see chart). This process uses atoms of carbon, nitrogen, and oxygen as intermediaries in the process of fusing hydrogen into helium.

At a stellar core temperature of 18 million kelvin, the PP process and CNO cycle are equally efficient, and each type generates half of the star's net luminosity. As this is the core temperature of a star with about 1.5 M☉, the upper main sequence consists of stars above this mass. Thus, roughly speaking, stars of spectral class F or cooler belong to the lower main sequence, while A-type stars or hotter are upper main-sequence stars. The transition in primary energy production from one form to the other spans a mass range of less than a single solar mass. In the Sun, a one-solar-mass star, only 1.5% of the energy is generated by the CNO cycle. By contrast, stars with 1.8 M☉ or above generate almost their entire energy output through the CNO cycle.

The observed upper limit for a main-sequence star is 120–200 M☉. The theoretical explanation for this limit is that stars above this mass can not radiate energy fast enough to remain stable, so any additional mass will be ejected in a series of pulsations until the star reaches a stable limit. The lower limit for sustained proton–proton nuclear fusion is about 0.08 M☉, or 80 times the mass of Jupiter. Below this threshold are sub-stellar objects that can not sustain hydrogen fusion, known as brown dwarfs.

Structure

This diagram shows a cross-section of a Sun-like star, showing the internal structure.

Because there is a temperature difference between the core and the surface, or photosphere, energy is transported outward. The two modes for transporting this energy are radiation and convection. A radiation zone, where energy is transported by radiation, is stable against convection and there is very little mixing of the plasma. By contrast, in a convection zone the energy is transported by bulk movement of plasma, with hotter material rising and cooler material descending. Convection is a more efficient mode for carrying energy than radiation, but it will only occur under conditions that create a steep temperature gradient.

In massive stars (above 10 M☉) the rate of energy generation by the CNO cycle is very sensitive to temperature, so the fusion is highly concentrated at the core. Consequently, there is a high temperature gradient in the core region, which results in a convection zone for more efficient energy transport. This mixing of material around the core removes the helium ash from the hydrogen-burning region, allowing more of the hydrogen in the star to be consumed during the main-sequence lifetime. The outer regions of a massive star transport energy by radiation, with little or no convection.

Intermediate-mass stars such as Sirius may transport energy primarily by radiation, with a small core convection region. Medium-sized, low-mass stars like the Sun have a core region that is stable against convection, with a convection zone near the surface that mixes the outer layers. This results in a steady buildup of a helium-rich core, surrounded by a hydrogen-rich outer region. By contrast, cool, very low-mass stars (below 0.4 M☉) are convective throughout. Thus the helium produced at the core is distributed across the star, producing a relatively uniform atmosphere and a proportionately longer main-sequence lifespan.

Luminosity-color variation

The Sun is the most familiar example of a main-sequence star

As non-fusing helium accumulates in the core of a main-sequence star, the reduction in the abundance of hydrogen per unit mass results in a gradual lowering of the fusion rate within that mass. Since it is fusion-supplied power that maintains the pressure of the core and supports the higher layers of the star, the core gradually gets compressed. This brings hydrogen-rich material into a shell around the helium-rich core at a depth where the pressure is sufficient for fusion to occur. The high power output from this shell pushes the higher layers of the star further out. This causes a gradual increase in the radius and consequently luminosity of the star over time. For example, the luminosity of the early Sun was only about 70% of its current value. As a star ages it thus changes its position on the HR diagram. This evolution is reflected in a broadening of the main sequence band which contains stars at various evolutionary stages.

Other factors that broaden the main sequence band on the HR diagram include uncertainty in the distance to stars and the presence of unresolved binary stars that can alter the observed stellar parameters. However, even perfect observation would show a fuzzy main sequence, because mass is not the only parameter that affects a star's color and luminosity. Variations in chemical composition caused by the initial abundances, the star's evolutionary status, interaction with a close companion, rapid rotation, or a magnetic field can all slightly change a main-sequence star's HR diagram position, to name just a few factors. As an example, there are metal-poor stars (with a very low abundance of elements with higher atomic numbers than helium) that lie just below the main sequence and are known as subdwarfs. These stars are fusing hydrogen in their cores and so they mark the lower edge of the main sequence fuzziness caused by variance in chemical composition.

A nearly vertical region of the HR diagram, known as the instability strip, is occupied by pulsating variable stars known as Cepheid variables. These stars vary in magnitude at regular intervals, giving them a pulsating appearance. The strip intersects the upper part of the main sequence in the region of class A and F stars, which are between one and two solar masses. Pulsating stars in this part of the instability strip intersecting the upper part of the main sequence are called Delta Scuti variables. Main-sequence stars in this region experience only small changes in magnitude, so this variation is difficult to detect. Other classes of unstable main-sequence stars, like Beta Cephei variables, are unrelated to this instability strip.

Lifetime

This plot gives an example of the mass-luminosity relationship for zero-age main-sequence stars. The mass and luminosity are relative to the present-day Sun.

The total amount of energy that a star can generate through nuclear fusion of hydrogen is limited by the amount of hydrogen fuel that can be consumed at the core. For a star in equilibrium, the thermal energy generated at the core must be at least equal to the energy radiated at the surface. Since the luminosity gives the amount of energy radiated per unit time, the total life span can be estimated, to first approximation, as the total energy produced divided by the star's luminosity.
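As a worked instance of this first approximation for the Sun: assuming about 10% of the star's hydrogen is ever available for core fusion and that fusing hydrogen to helium converts about 0.7% of the fused mass to energy (both standard textbook assumptions, not figures from this article), the estimate lands near ten billion years:

```python
M_sun = 1.989e30   # solar mass, kg
L_sun = 3.828e26   # solar luminosity, W
C = 2.998e8        # speed of light, m/s
YEAR = 3.156e7     # seconds per year

core_fraction = 0.10   # assumed fraction of the mass available for fusion
efficiency = 0.007     # assumed mass-to-energy conversion efficiency of H -> He

E_total = core_fraction * efficiency * M_sun * C**2   # total fusion energy, J
lifetime = E_total / L_sun / YEAR                     # energy / luminosity, years
print(f"{lifetime:.2e} yr")  # of order 1e10 years
```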

For a star with at least 0.5 M☉, when the hydrogen supply in its core is exhausted and it expands to become a red giant, it can start to fuse helium atoms to form carbon. The energy output of the helium fusion process per unit mass is only about a tenth the energy output of the hydrogen process, and the luminosity of the star increases. This results in a much shorter length of time in this stage compared to the main-sequence lifetime. (For example, the Sun is predicted to spend 130 million years burning helium, compared to about 12 billion years burning hydrogen.) Thus, about 90% of the observed stars above 0.5 M☉ will be on the main sequence. On average, main-sequence stars are known to follow an empirical mass–luminosity relationship. The luminosity (L) of the star is roughly proportional to the total mass (M) as the following power law:

L/L☉ ≈ (M/M☉)^3.5

This relationship applies to main-sequence stars in the range 0.1–50 M☉.

The amount of fuel available for nuclear fusion is proportional to the mass of the star. Thus, the lifetime of a star on the main sequence can be estimated by comparing it to solar evolutionary models. The Sun has been a main-sequence star for about 4.5 billion years and it will start to expand rapidly towards a red giant in 6.5 billion years, for a total main-sequence lifetime of roughly 10^10 years. Hence:

τMS ≈ 10^10 years × (M/M☉)(L☉/L) ≈ 10^10 years × (M/M☉)^−2.5

where M and L are the mass and luminosity of the star, respectively, M☉ is a solar mass, L☉ is the solar luminosity and τMS is the star's estimated main-sequence lifetime.
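Combining the fuel ∝ mass argument with the rough mass–luminosity power law L ∝ M^3.5 gives a lifetime scaling of M^−2.5, sketched here (the 10^10-year normalization is the solar lifetime from the text):

```python
def ms_lifetime_years(mass_solar):
    """Main-sequence lifetime estimate: tau ~ 1e10 yr * (M/Msun)**-2.5."""
    return 1e10 * mass_solar ** -2.5

for m in (0.5, 1.0, 10.0):
    print(f"{m} Msun -> {ms_lifetime_years(m):.2e} yr")
# A 10-solar-mass star lasts only tens of millions of years;
# a 0.5-solar-mass star lasts several times the current age of the universe.
```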

Although more massive stars have more fuel to burn and might intuitively be expected to last longer, they also radiate proportionately more energy as mass increases. This is required by the stellar equation of state; for a massive star to maintain equilibrium, the outward pressure of radiated energy generated in the core must rise to match the titanic inward gravitational pressure of its envelope. Thus, the most massive stars may remain on the main sequence for only a few million years, while stars with less than a tenth of a solar mass may last for over a trillion years.

The exact mass-luminosity relationship depends on how efficiently energy can be transported from the core to the surface. A higher opacity has an insulating effect that retains more energy at the core, so the star does not need to produce as much energy to remain in hydrostatic equilibrium. By contrast, a lower opacity means energy escapes more rapidly and the star must burn more fuel to remain in equilibrium. A sufficiently high opacity can result in energy transport via convection, which changes the conditions needed to remain in equilibrium.

In high-mass main-sequence stars, the opacity is dominated by electron scattering, which is nearly constant with increasing temperature. Thus the luminosity increases only as the cube of the star's mass. For stars below 10 M☉, the opacity becomes dependent on temperature, resulting in the luminosity varying approximately as the fourth power of the star's mass. For very low-mass stars, molecules in the atmosphere also contribute to the opacity. Below about 0.5 M☉, the luminosity of the star varies as the mass to the power of 2.3, producing a flattening of the slope on a graph of mass versus luminosity. Even these refinements are only an approximation, however, and the mass-luminosity relation can vary depending on a star's composition.
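The three regimes described above can be summarized as a piecewise power law. The exponents and break masses are those given in the text; the normalization constants are chosen here purely so the pieces join continuously, which is an assumption of this sketch rather than a physical result:

```python
def luminosity_solar(mass_solar):
    """Piecewise mass-luminosity sketch (L and M in solar units).
    Exponents from the text; constants make the pieces continuous."""
    if mass_solar < 0.5:
        # Low-mass regime, L ~ M^2.3; constant matches M^4 piece at 0.5 Msun.
        return 0.5 ** 1.7 * mass_solar ** 2.3
    if mass_solar < 10.0:
        # Intermediate regime, L ~ M^4, anchored at the Sun (1, 1).
        return mass_solar ** 4.0
    # Electron-scattering regime, L ~ M^3; constant matches M^4 piece at 10 Msun.
    return 10.0 * mass_solar ** 3.0

print(luminosity_solar(1.0))   # 1.0 by construction
```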

Evolutionary tracks

Evolutionary track of a star like the sun

When a main-sequence star has consumed the hydrogen at its core, the loss of energy generation causes its gravitational collapse to resume and the star evolves off the main sequence. The path which the star follows across the HR diagram is called an evolutionary track. A track known as the zero age main sequence (ZAMS) is where stars of different masses begin their main sequence lives, while a track known as the terminal age main sequence (TAMS) is where stars of different masses end their main sequence lives when hydrogen is depleted in their cores.

H–R diagram for two open clusters: NGC 188 (blue) is older and shows a lower turn off from the main sequence than M67 (yellow). The dots outside the two sequences are mostly foreground and background stars with no relation to the clusters.

Stars with less than 0.23 M☉ are predicted to directly become white dwarfs when energy generation by nuclear fusion of hydrogen at their core comes to a halt, but stars in this mass range have main-sequence lifetimes longer than the current age of the universe, so no stars are old enough for this to have occurred.

In stars more massive than 0.23 M☉, the hydrogen surrounding the helium core reaches sufficient temperature and pressure to undergo fusion, forming a hydrogen-burning shell and causing the outer layers of the star to expand and cool. The stage as these stars move away from the main sequence is known as the subgiant branch; it is relatively brief and appears as a gap in the evolutionary track since few stars are observed at that point.

When the helium core of low-mass stars becomes degenerate, or the outer layers of intermediate-mass stars cool sufficiently to become opaque, their hydrogen shells increase in temperature and the stars start to become more luminous. This is known as the red-giant branch; it is a relatively long-lived stage and it appears prominently in H–R diagrams. These stars will eventually end their lives as white dwarfs.

The most massive stars do not become red giants; instead, their cores quickly become hot enough to fuse helium and eventually heavier elements and they are known as supergiants. They follow approximately horizontal evolutionary tracks from the main sequence across the top of the H–R diagram. Supergiants are relatively rare and do not show prominently on most H–R diagrams. Their cores will eventually collapse, usually leading to a supernova and leaving behind either a neutron star or black hole.

When a cluster of stars is formed at about the same time, the main-sequence lifespan of these stars will depend on their individual masses. The most massive stars will leave the main sequence first, followed in sequence by stars of ever lower masses. The position where stars in the cluster are leaving the main sequence is known as the turnoff point. By knowing the main-sequence lifespan of stars at this point, it becomes possible to estimate the age of the cluster.
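The turnoff argument can be sketched by inverting the lifetime scaling. Under the rough relation τ ≈ 10^10 yr × (M/M☉)^−2.5 (an assumption carried over from the Lifetime section), the mass of stars now leaving the main sequence dates the cluster:

```python
def cluster_age_years(turnoff_mass_solar):
    """Age estimate: stars at the turnoff are just now exhausting core hydrogen,
    so the cluster age equals their main-sequence lifetime."""
    return 1e10 * turnoff_mass_solar ** -2.5

# A cluster whose turnoff sits at 2 solar masses:
print(f"{cluster_age_years(2.0):.2e} yr")  # ~1.8e9 years
```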

Modern synthesis (20th century)

Several major ideas about evolution came together in the population genetics of the early 20th century to form the modern synthesis, including genetic variation, natural selection, and particulate (Mendelian) inheritance. This ended the eclipse of Darwinism and supplanted a variety of non-Darwinian theories of evolution.

The modern synthesis was the early 20th-century synthesis of Charles Darwin's theory of evolution and Gregor Mendel's ideas on heredity into a joint mathematical framework. Julian Huxley coined the term in his 1942 book, Evolution: The Modern Synthesis. The synthesis combined the ideas of natural selection, Mendelian genetics, and population genetics. It also related the broad-scale macroevolution seen by palaeontologists to the small-scale microevolution of local populations.

The synthesis was defined differently by its founders, with Ernst Mayr in 1959, G. Ledyard Stebbins in 1966, and Theodosius Dobzhansky in 1974 offering differing basic postulates, though they all include natural selection, working on heritable variation supplied by mutation. Other major figures in the synthesis included E. B. Ford, Bernhard Rensch, Ivan Schmalhausen, and George Gaylord Simpson. An early event in the modern synthesis was R. A. Fisher's 1918 paper on mathematical population genetics, though William Bateson, and separately Udny Yule, had already started to show how Mendelian genetics could work in evolution in 1902.

Different syntheses followed, including with social behaviour in E. O. Wilson's sociobiology in 1975, evolutionary developmental biology's integration of embryology with genetics and evolution, starting in 1977, and Massimo Pigliucci's and Gerd B. Müller's proposed extended evolutionary synthesis of 2007. In the view of evolutionary biologist Eugene Koonin in 2009, the modern synthesis will be replaced by a 'post-modern' synthesis that will include revolutionary changes in molecular biology, the study of prokaryotes and the resulting tree of life, and genomics.

Developments leading up to the synthesis

Darwin's pangenesis theory. Every part of the body emits tiny gemmules which migrate to the gonads and contribute to the next generation via the fertilised egg. Changes to the body during an organism's life would be inherited, as in Lamarckism.

Darwin's evolution by natural selection, 1859

Charles Darwin's 1859 book, On the Origin of Species, convinced most biologists that evolution had occurred, but not that natural selection was its primary mechanism. In the 19th and early 20th centuries, variations of Lamarckism (inheritance of acquired characteristics), orthogenesis (progressive evolution), saltationism (evolution by jumps) and mutationism (evolution driven by mutations) were discussed as alternatives. Darwin himself had sympathy for Lamarckism, but Alfred Russel Wallace advocated natural selection and totally rejected Lamarckism. In 1880, Samuel Butler labelled Wallace's view neo-Darwinism.

Blending inheritance, implied by pangenesis, causes the averaging out of every characteristic, which as the engineer Fleeming Jenkin pointed out, would make evolution by natural selection impossible.

The eclipse of Darwinism, 1880s onwards

From the 1880s onwards, biologists grew skeptical of Darwinian evolution. This eclipse of Darwinism (in Julian Huxley's words) grew out of the weaknesses in Darwin's account, with respect to his view of inheritance. Darwin believed in blending inheritance, which implied that any new variation, even if beneficial, would be weakened by 50% at each generation, as the engineer Fleeming Jenkin noted in 1868. This in turn meant that small variations would not survive long enough to be selected. Blending would therefore directly oppose natural selection. In addition, Darwin and others considered Lamarckian inheritance of acquired characteristics entirely possible, and Darwin's 1868 theory of pangenesis, with contributions to the next generation (gemmules) flowing from all parts of the body, actually implied Lamarckism as well as blending.

August Weismann's germ plasm theory. The hereditary material, the germ plasm, is confined to the gonads and the gametes. Somatic cells (of the body) develop afresh in each generation from the germ plasm.

Weismann's germ plasm, 1892

August Weismann's idea, set out in his 1892 book Das Keimplasma: eine Theorie der Vererbung ("The Germ Plasm: a Theory of Inheritance"), was that the hereditary material, which he called the germ plasm, and the rest of the body (the soma) had a one-way relationship: the germ-plasm formed the body, but the body did not influence the germ-plasm, except indirectly in its participation in a population subject to natural selection. If correct, this made Darwin's pangenesis wrong, and Lamarckian inheritance impossible. His experiment on mice, cutting off their tails and showing that their offspring had normal tails, demonstrated that inheritance was 'hard'. He argued strongly and dogmatically for Darwinism and against Lamarckism, polarising opinions among other scientists. This increased anti-Darwinian feeling, contributing to its eclipse.

Disputed beginnings

Genetics, mutationism and biometrics, 1900–1918

William Bateson championed Mendelism.

While carrying out breeding experiments to clarify the mechanism of inheritance in 1900, Hugo de Vries and Carl Correns independently rediscovered Gregor Mendel's work. News of this reached William Bateson in England, who reported on the paper during a presentation to the Royal Horticultural Society in May 1900. In Mendelian inheritance, the contributions of each parent retain their integrity rather than blending with the contribution of the other parent. In a cross between two true-breeding varieties, such as Mendel's round and wrinkled peas, the first-generation offspring are all alike, in this case all round. When these are allowed to cross with each other, the original characteristics reappear (segregation): about 3/4 of their offspring are round and 1/4 wrinkled. The offspring thus fall into discontinuous classes; de Vries coined the term allele for a variant form of an inherited characteristic. This reinforced a major division of thought, already present in the 1890s, between gradualists who followed Darwin and saltationists such as Bateson.
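Mendel's segregation ratios can be recovered by simple enumeration of the cross. A minimal sketch in Python (the allele labels 'R' and 'r' are illustrative, not Mendel's notation):

```python
from itertools import product

# One-locus cross: 'R' (round) is dominant over 'r' (wrinkled).
# First-generation offspring of true-breeding RR x rr parents are all Rr.
f1 = ("R", "r")

# Crossing two first-generation plants: each parent contributes one allele.
offspring = [a + b for a, b in product(f1, f1)]  # RR, Rr, rR, rr

round_count = sum("R" in g for g in offspring)      # dominant phenotype
wrinkled_count = sum("R" not in g for g in offspring)  # recessive phenotype
print(round_count, wrinkled_count)  # 3 1 -- the classic 3:1 ratio
```

Because the two parental contributions retain their integrity, the recessive class reappears intact in one quarter of the offspring rather than being blended away.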

The two schools were the Mendelians, such as Bateson and de Vries, who favoured mutationism, evolution driven by mutation, based on genes whose alleles segregated discretely like Mendel's peas; and the biometric school, led by Karl Pearson and Walter Weldon. The biometricians argued vigorously against mutationism, saying that empirical evidence indicated that variation was continuous in most organisms, not discrete as Mendelism seemed to predict; they wrongly believed that Mendelism inevitably implied evolution in discontinuous jumps.

Karl Pearson led the biometric school.

A traditional view is that the biometricians and the Mendelians rejected natural selection and argued for their separate theories for 20 years, the debate only resolved by the development of population genetics. A more recent view is that Bateson, de Vries, Thomas Hunt Morgan and Reginald Punnett had by 1918 formed a synthesis of Mendelism and mutationism. The understanding achieved by these geneticists spanned the action of natural selection on alleles (alternative forms of a gene), the Hardy–Weinberg equilibrium, the evolution of continuously varying traits (like height), and the probability that a new mutation will become fixed. In this view, the early geneticists accepted natural selection but rejected Darwin's non-Mendelian ideas about variation and heredity, and the synthesis began soon after 1900. The traditional claim that Mendelians rejected the idea of continuous variation is false; as early as 1902, Bateson and Saunders wrote that "If there were even so few as, say, four or five pairs of possible allelomorphs, the various homo- and heterozygous combinations might, on seriation, give so near an approach to a continuous curve, that the purity of the elements would be unsuspected". Also in 1902, the statistician Udny Yule showed mathematically that given multiple factors, Mendel's theory enabled continuous variation. Yule criticised Bateson's approach as confrontational, but failed to prevent the Mendelians and the biometricians from falling out.
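The Hardy–Weinberg equilibrium mentioned above can be checked numerically: under random mating with no selection, genotypes occur in the proportions p², 2pq, q², and the allele frequency is unchanged in the next generation. A minimal sketch (p = 0.7 is an arbitrary illustrative frequency):

```python
# Hardy-Weinberg: for allele frequencies p and q = 1 - p, random mating
# yields genotypes p^2 (AA), 2pq (Aa), q^2 (aa), and leaves the allele
# frequency itself unchanged.
p = 0.7
q = 1.0 - p

freq_AA = p * p
freq_Aa = 2 * p * q
freq_aa = q * q

# Recover the allele frequency from the next generation's genotypes:
# every AA individual carries two A alleles, every Aa carries one.
p_next = freq_AA + 0.5 * freq_Aa

print(abs(p_next - p) < 1e-9)  # True: the equilibrium property
```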

Castle's hooded rats, 1911

Starting in 1906, William Castle carried out a long study of the effect of selection on coat colour in rats. The piebald or hooded pattern was recessive to the grey wild type. He crossed hooded rats with both wild and "Irish" types, and then back-crossed the offspring with pure hooded rats; in the resulting hooded offspring, the dark stripe on the back was bigger than in the original hooded stock. He then tried selecting different groups for bigger or smaller stripes for 5 generations and found that it was possible to change the characteristics considerably beyond the initial range of variation. This effectively refuted de Vries's claim that continuous variation was caused by the environment and could not be inherited. By 1911, Castle noted that the results could be explained by Darwinian selection on heritable variation in a sufficient number of Mendelian genes.

Morgan's fruit flies, 1912

Thomas Hunt Morgan began his career in genetics as a saltationist and started out trying to demonstrate that mutations could produce new species in fruit flies. However, the experimental work at his lab with the fruit fly Drosophila melanogaster showed that rather than creating new species in a single step, mutations increased the supply of genetic variation in the population. By 1912, after years of work on the genetics of fruit flies, Morgan showed that these insects had many small Mendelian factors (discovered as mutant flies) on which Darwinian evolution could work as if the variation were fully continuous. The way was open for geneticists to conclude that Mendelism supported Darwinism.

An obstruction: Woodger's positivism, 1929

The theoretical biologist and philosopher of biology Joseph Henry Woodger led the introduction of positivism into biology with his 1929 book Biological Principles. He saw a mature science as being characterised by a framework of hypotheses that could be verified by facts established by experiments. He criticised the traditional natural history style of biology, including the study of evolution, as immature science, since it relied on narrative. Woodger set out to play the role of Robert Boyle's 1661 Sceptical Chymist, intending to convert the subject of biology into a formal, unified science, and ultimately, following the Vienna Circle of logical positivists like Otto Neurath and Rudolf Carnap, to reduce biology to physics and chemistry. His efforts stimulated the biologist J. B. S. Haldane to push for the axiomatisation of biology, and by influencing thinkers such as Huxley, helped to bring about the modern synthesis. The positivist climate made natural history unfashionable, and in America, research and university-level teaching on evolution declined almost to nothing by the late 1930s. The Harvard physiologist William John Crozier told his students that evolution was not even a science: "You can't experiment with two million years!" The tide of opinion turned with the adoption of mathematical modelling and controlled experimentation in population genetics, combining genetics, ecology and evolution in a framework acceptable to positivism.

Elements of the synthesis

Fisher and Haldane's mathematical population genetics, 1918–1930

In 1918, R. A. Fisher wrote "The Correlation between Relatives on the Supposition of Mendelian Inheritance," which showed how continuous variation could come from a number of discrete genetic loci. In this and other papers, culminating in his 1930 book The Genetical Theory of Natural Selection, Fisher showed how Mendelian genetics was consistent with the idea of evolution by natural selection.
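Fisher's central point can be illustrated with a toy simulation (the parameters are arbitrary illustrative choices, not Fisher's): a trait summed over many independent Mendelian loci produces an apparently continuous, bell-shaped distribution, even though every underlying contribution is discrete.

```python
import random

random.seed(1)

# Each of n loci carries two alleles; a "+" allele (frequency p) adds one
# unit to the phenotype. The phenotype is the total count over all loci.
def phenotype(n_loci=20, p=0.5):
    return sum(
        (random.random() < p) + (random.random() < p)  # two alleles per locus
        for _ in range(n_loci)
    )

values = [phenotype() for _ in range(10_000)]

# By the central limit theorem, the sum of many discrete contributions
# approaches a bell curve: variation looks continuous to the observer.
mean = sum(values) / len(values)
print(round(mean, 1))  # close to 20 (= 2 * n_loci * p)
```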

In the 1920s, a series of papers by J. B. S. Haldane analyzed real-world examples of natural selection, such as the evolution of industrial melanism in peppered moths, and showed that natural selection could work even faster than Fisher had assumed. Both of these scholars, and others, such as Dobzhansky and Wright, wanted to raise biology to the standards of the physical sciences by basing it on mathematical modeling and empirical testing. Natural selection, once considered unverifiable, was becoming predictable, measurable, and testable.
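The kind of calculation involved can be sketched with a standard one-locus selection recurrence (a textbook model with illustrative numbers, not Haldane's own computation): a dominant allele whose carriers have fitness advantage s, as for melanic moths on soot-darkened trees, rises in frequency generation by generation.

```python
# Frequency p of a dominant allele A; dominant phenotypes (AA and Aa) have
# fitness 1 + s, the recessive phenotype (aa) has fitness 1.
def select(p, s, generations):
    for _ in range(generations):
        q = 1.0 - p
        # Mean fitness of the population:
        w_bar = (1 + s) * (p * p + 2 * p * q) + q * q
        # Allele frequency after one round of selection:
        p = (1 + s) * (p * p + p * q) / w_bar
    return p

# Even starting rare, a strongly favoured dominant allele spreads rapidly:
print(round(select(0.01, 0.5, 50), 3))  # most of the way to fixation
```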

De Beer's embryology, 1930

The traditional view is that developmental biology played little part in the modern synthesis, but in his 1930 book Embryos and Ancestors, the evolutionary embryologist Gavin de Beer anticipated evolutionary developmental biology by showing that evolution could occur by heterochrony, such as in the retention of juvenile features in the adult. This, de Beer argued, could cause apparently sudden changes in the fossil record, since embryos fossilise poorly. As the gaps in the fossil record had been used as an argument against Darwin's gradualist evolution, de Beer's explanation supported the Darwinian position. However, despite de Beer, the modern synthesis largely ignored embryonic development when explaining the form of organisms, since population genetics appeared to be an adequate explanation of how such forms evolved.

Wright's adaptive landscape, 1932

Sewall Wright introduced the idea of a fitness landscape with local optima.

The population geneticist Sewall Wright focused on combinations of genes that interacted as complexes, and the effects of inbreeding on small relatively isolated populations, which could be subject to genetic drift. In a 1932 paper, he introduced the concept of an adaptive landscape in which phenomena such as cross breeding and genetic drift in small populations could push them away from adaptive peaks, which would in turn allow natural selection to push them towards new adaptive peaks. Wright's model appealed to field naturalists such as Theodosius Dobzhansky and Ernst Mayr who were becoming aware of the importance of geographical isolation in real world populations. The work of Fisher, Haldane and Wright helped to found the discipline of theoretical population genetics.
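Drift in a small population can be sketched with a toy Wright–Fisher simulation (population size and run length are arbitrary illustrative choices): with no selection at all, sampling noise alone pushes an allele's frequency around until it is lost or fixed.

```python
import random

random.seed(42)

# Wright-Fisher drift: each of the 2N gene copies in the next generation
# is drawn at random from the current allele frequency p.
def drift(p, pop_size, generations):
    for _ in range(generations):
        copies = sum(random.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        if p in (0.0, 1.0):  # allele lost or fixed; drift stops
            break
    return p

# Five replicate small populations, all starting at frequency 0.5:
outcomes = [drift(0.5, pop_size=10, generations=100) for _ in range(5)]
print(outcomes)  # scattered endpoints; typically loss (0.0) or fixation (1.0)
```

This is the effect Wright invoked: in small, relatively isolated populations, chance alone can carry gene combinations away from an adaptive peak.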

Dobzhansky's evolutionary genetics, 1937

Drosophila pseudoobscura, the fruit fly which served as Theodosius Dobzhansky's model organism

Theodosius Dobzhansky, an immigrant from the Soviet Union to the United States, who had been a postdoctoral worker in Morgan's fruit fly lab, was one of the first to apply genetics to natural populations. He worked mostly with Drosophila pseudoobscura. He says pointedly: "Russia has a variety of climates from the Arctic to sub-tropical... Exclusively laboratory workers who neither possess nor wish to have any knowledge of living beings in nature were and are in a minority." Not surprisingly, there were other Russian geneticists with similar ideas, though for some time their work was known to only a few in the West. His 1937 work Genetics and the Origin of Species was a key step in bridging the gap between population geneticists and field naturalists. It presented the conclusions reached by Fisher, Haldane, and especially Wright in their highly mathematical papers in a form that was easily accessible to others. Further, Dobzhansky asserted the physicality, and hence the biological reality, of the mechanisms of inheritance: that evolution was based on material genes, arranged in a string on physical hereditary structures, the chromosomes, and linked more or less strongly to each other according to their actual physical distances on the chromosomes. As with Haldane and Fisher, Dobzhansky's "evolutionary genetics" was a genuine science, now unifying cell biology, genetics, and both micro and macroevolution. His work emphasized that real-world populations had far more genetic variability than the early population geneticists had assumed in their models and that genetically distinct sub-populations were important. Dobzhansky argued that natural selection worked to maintain genetic diversity as well as by driving change. 
He was influenced by his exposure in the 1920s to the work of Sergei Chetverikov, who had looked at the role of recessive genes in maintaining a reservoir of genetic variability in a population, before his work was shut down by the rise of Lysenkoism in the Soviet Union. By 1937, Dobzhansky was able to argue, firstly, that mutations were the main source of evolutionary changes and variability, along with chromosome rearrangements, effects of genes on their neighbours during development, and polyploidy. Secondly, genetic drift (he used the term in 1941), selection, migration, and geographical isolation could change gene frequencies. Thirdly, mechanisms like ecological or sexual isolation and hybrid sterility could fix the results of the earlier processes.

Ford's ecological genetics, 1940

E. B. Ford studied polymorphism in the scarlet tiger moth for many years.

E. B. Ford was an experimental naturalist who wanted to test natural selection in nature, virtually inventing the field of ecological genetics. His work on natural selection in wild populations of butterflies and moths was the first to show that predictions made by R. A. Fisher were correct. In 1940, he was the first to describe and define genetic polymorphism, and to predict that human blood group polymorphisms might be maintained in the population by providing some protection against disease. His 1949 book Mendelism and Evolution helped to persuade Dobzhansky to change the emphasis in the third edition of his famous textbook Genetics and the Origin of Species from drift to selection.

Schmalhausen's stabilizing selection, 1941

Ivan Schmalhausen developed the theory of stabilizing selection, the idea that selection can preserve a trait at some value, publishing a paper in Russian titled "Stabilizing selection and its place among factors of evolution" in 1941 and a monograph Factors of Evolution: The Theory of Stabilizing Selection in 1945. He developed it from J. M. Baldwin's 1902 concept that adaptive changes induced by an organism's agency or environment may ultimately be replaced by hereditary changes (including the Baldwin effect of behaviour), following that theory's implications to their Darwinian conclusion, and bringing him into conflict with Lysenkoism. Schmalhausen observed that stabilizing selection would remove most variations from the norm, most mutations being harmful. Dobzhansky called the work "an important missing link in the modern view of evolution".

Huxley's popularising synthesis, 1942

Julian Huxley presented a serious but popularising version of the theory in his 1942 book Evolution: The Modern Synthesis.

In 1942, Julian Huxley's serious but popularising book Evolution: The Modern Synthesis introduced a name for the synthesis and intentionally set out to promote a "synthetic point of view" on the evolutionary process. He imagined a wide synthesis of many sciences: genetics, developmental physiology, ecology, systematics, palaeontology, cytology, and mathematical analysis of biology, and assumed that evolution would proceed differently in different groups of organisms according to how their genetic material was organised and their strategies for reproduction, leading to progressive but varying evolutionary trends. His vision was of an "evolutionary humanism", with a system of ethics and a meaningful place for "Man" in the world grounded in a unified theory of evolution which would demonstrate progress leading to humanity at its summit. Natural selection was in his view a "fact of nature capable of verification by observation and experiment", while the "period of synthesis" of the 1920s and 1930s had formed a "more unified science", rivalling physics and enabling the "rebirth of Darwinism".

However, the book was not the research text that it appeared to be. In the view of the philosopher of science Michael Ruse, and in Huxley's own opinion, Huxley was "a generalist, a synthesizer of ideas, rather than a specialist". Ruse observes that Huxley wrote as if he were adding empirical evidence to the mathematical framework established by Fisher and the population geneticists, but that this was not so. Huxley avoided mathematics, for instance not even mentioning Fisher's fundamental theorem of natural selection. Instead, Huxley used a mass of examples to demonstrate that natural selection is powerful and that it works on Mendelian genes. The book was successful in its goal of persuading readers of the reality of evolution, effectively illustrating topics such as island biogeography, speciation, and competition. Huxley further showed that the appearance of long-term orthogenetic trends – predictable directions for evolution – in the fossil record was readily explained as allometric growth (since parts are interconnected). All the same, Huxley did not reject orthogenesis out of hand, but maintained a belief in progress all his life, with Homo sapiens as the endpoint, and he had since 1912 been influenced by the vitalist philosopher Henri Bergson, though in public he maintained an atheistic position on evolution. Huxley's belief in progress within evolution and evolutionary humanism was shared in various forms by Dobzhansky, Mayr, Simpson and Stebbins, all of them writing about "the future of Mankind". Both Huxley and Dobzhansky admired the palaeontologist priest Pierre Teilhard de Chardin, Huxley writing the introduction to Teilhard's 1955 book on orthogenesis, The Phenomenon of Man. This vision required evolution to be seen as the central and guiding principle of biology.

Mayr's allopatric speciation, 1942

Ernst Mayr argued that geographic isolation was needed to provide sufficient reproductive isolation for new species to form.

Ernst Mayr's key contribution to the synthesis was Systematics and the Origin of Species, published in 1942. It asserted the importance of and set out to explain population variation in evolutionary processes including speciation. He analysed in particular the effects of polytypic species, geographic variation, and isolation by geographic and other means. Mayr emphasized the importance of allopatric speciation, where geographically isolated sub-populations diverge so far that reproductive isolation occurs. He was skeptical of the reality of sympatric speciation, believing that geographical isolation was a prerequisite for building up intrinsic (reproductive) isolating mechanisms. Mayr also introduced the biological species concept that defined a species as a group of interbreeding or potentially interbreeding populations that were reproductively isolated from all other populations. Before he left Germany for the United States in 1930, Mayr had been influenced by the work of the German biologist Bernhard Rensch, who in the 1920s had analyzed the geographic distribution of polytypic species, paying particular attention to how variations between populations correlated with factors such as differences in climate.

George Gaylord Simpson argued against the naive view that evolution such as of the horse took place in a "straight-line". He noted that any chosen line is one path in a complex branching tree, natural selection having no imposed direction.

Simpson's palaeontology, 1944

George Gaylord Simpson was responsible for showing that the modern synthesis was compatible with palaeontology in his 1944 book Tempo and Mode in Evolution. Simpson's work was crucial because so many palaeontologists had disagreed, in some cases vigorously, with the idea that natural selection was the main mechanism of evolution. It showed that the trends of linear progression (in for example the evolution of the horse) that earlier palaeontologists had used as support for neo-Lamarckism and orthogenesis did not hold up under careful examination. Instead, the fossil record was consistent with the irregular, branching, and non-directional pattern predicted by the modern synthesis.

Society for the Study of Evolution, 1946

During World War II, Mayr edited a series of bulletins of the Committee on Common Problems of Genetics, Paleontology, and Systematics, formed in 1943, reporting on discussions of a "synthetic attack" on the interdisciplinary problems of evolution. In 1946, the committee became the Society for the Study of Evolution, with Mayr, Dobzhansky and Sewall Wright the first of the signatories. Mayr became the editor of its journal, Evolution. From Mayr and Dobzhansky's point of view, suggests the historian of science Betty Smocovitis, Darwinism was reborn, evolutionary biology was legitimised, and genetics and evolution were synthesised into a newly unified science. Everything fitted into the new framework, except "heretics" like Richard Goldschmidt who annoyed Mayr and Dobzhansky by insisting on the possibility of speciation by macromutation, creating "hopeful monsters". The result was "bitter controversy".

Speciation via polyploidy: a diploid cell may fail to separate during meiosis, producing diploid gametes, which self-fertilize to produce a fertile tetraploid zygote that cannot interbreed with its parent species.

Stebbins's botany, 1950

The botanist G. Ledyard Stebbins extended the synthesis to encompass botany. He described the important effects on speciation of hybridization and polyploidy in plants in his 1950 book Variation and Evolution in Plants. These permitted evolution to proceed rapidly at times, polyploidy in particular evidently being able to create new species effectively instantaneously.

Definitions by the founders

The modern synthesis was defined differently by its various founders, with differing numbers of basic postulates, as shown in the table.

Definitions of the modern synthesis by its founders, as they numbered them
Mutation
  Mayr, 1959: (1) Randomness in all events that produce new genotypes, e.g. mutation
  Stebbins, 1966: (1) a source of variability, but not of direction
  Dobzhansky, 1974: (1) yields genetic raw materials
Recombination
  Mayr, 1959: (1) Randomness in recombination, fertilisation
  Stebbins, 1966: (2) a source of variability, but not of direction
Chromosomal organisation
  Stebbins, 1966: (3) affects genetic linkage, arranges variation in gene pool
Natural selection
  Mayr, 1959: (2) is the only direction-giving factor, as seen in adaptations to physical and biotic environment
  Stebbins, 1966: (4) guides changes to gene pool
  Dobzhansky, 1974: (2) constructs evolutionary changes from genetic raw materials
Reproductive isolation
  Stebbins, 1966: (5) limits direction in which selection can guide the population
  Dobzhansky, 1974: (3) makes divergence irreversible in sexual organisms

After the synthesis

After the synthesis, evolutionary biology continued to develop with major contributions from workers including W. D. Hamilton, George C. Williams, E. O. Wilson, Edward B. Lewis and others.

Hamilton's inclusive fitness, 1964

In 1964, W. D. Hamilton published two papers on "The Genetical Evolution of Social Behaviour". These defined inclusive fitness as the number of offspring equivalents an individual rears, rescues or otherwise supports through its behaviour. This was contrasted with personal reproductive fitness, the number of offspring that the individual directly begets. Hamilton, and others such as John Maynard Smith, argued that a gene's success consisted in maximising the number of copies of itself, either by begetting them or by indirectly encouraging begetting by related individuals who shared the gene, the theory of kin selection.
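Kin selection is conventionally summarised by Hamilton's rule, rb > c; the inequality is a standard statement of the theory rather than a quotation from the passage above. A minimal sketch:

```python
# Hamilton's rule: an altruistic behaviour is favoured when r * b > c,
# where r is the genetic relatedness of actor to beneficiary, b the
# benefit in offspring to the beneficiary, and c the cost in offspring
# to the actor.
def altruism_favoured(r, b, c):
    return r * b > c

# Helping a full sibling (r = 0.5) is favoured only when the benefit is
# more than twice the cost:
print(altruism_favoured(0.5, 3.0, 1.0))  # True
print(altruism_favoured(0.5, 1.5, 1.0))  # False
```

In Hamilton's terms, the indirect component (r times the extra offspring of relatives) is what inclusive fitness adds to personal reproductive fitness.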

Williams's gene-centred evolution, 1966

In 1966, George C. Williams published Adaptation and Natural Selection, which outlined a gene-centred view of evolution following Hamilton's concepts, disputed the idea of evolutionary progress, and attacked the then widespread theory of group selection. Williams argued that natural selection worked by changing the frequency of alleles, and could not work at the level of groups. Gene-centred evolution was popularised by Richard Dawkins in his 1976 book The Selfish Gene and developed in his more technical writings.

Wilson's sociobiology, 1975

Ant societies have evolved elaborate caste structures, widely different in size and function.

In 1975, E. O. Wilson published his controversial book Sociobiology: The New Synthesis, the subtitle alluding to the modern synthesis as he attempted to bring the study of animal society into the evolutionary fold. This appeared radically new, although Wilson was following Darwin, Fisher, Dawkins and others. Critics such as Gerhard Lenski noted that he was following Huxley, Simpson and Dobzhansky's approach, which Lenski considered needlessly reductive as far as human society was concerned. By 2000, the proposed discipline of sociobiology had morphed into the relatively well-accepted discipline of evolutionary psychology.

Lewis's homeotic genes, 1978

Evolutionary developmental biology has formed a synthesis of evolutionary and developmental biology, discovering deep homology between the embryogenesis of such different animals as insects and vertebrates.

In 1977, recombinant DNA technology enabled biologists to start to explore the genetic control of development. The growth of evolutionary developmental biology from 1978, when Edward B. Lewis discovered homeotic genes, showed that many so-called toolkit genes act to regulate development, influencing the expression of other genes. It also revealed that some of the regulatory genes are extremely ancient, so that animals as different as insects and mammals share control mechanisms; for example, the Pax6 gene is involved in forming the eyes of mice and of fruit flies. Such deep homology provided strong evidence for evolution and indicated the paths that evolution had taken.

Later syntheses

In 1982, a historical note on a series of evolutionary biology books could state without qualification that evolution is the central organizing principle of biology. Smocovitis commented on this that "What the architects of the synthesis had worked to construct had by 1982 become a matter of fact", adding in a footnote that "the centrality of evolution had thus been rendered tacit knowledge, part of the received wisdom of the profession".

By the late 20th century, however, the modern synthesis was showing its age, and fresh syntheses to remedy its defects and fill in its gaps were proposed from different directions. These have included such diverse fields as the study of society, developmental biology, epigenetics, molecular biology, microbiology, genomics, symbiogenesis, and horizontal gene transfer. The physiologist Denis Noble argues that these additions render neo-Darwinism in the sense of the early 20th century's modern synthesis "at the least, incomplete as a theory of evolution", and one that has been falsified by later biological research.

Michael Rose and Todd Oakley argue that evolutionary biology, formerly divided and "Balkanized", has been brought together by genomics. It has in their view discarded at least five common assumptions from the modern synthesis, namely that the genome is always a well-organised set of genes; that each gene has a single function; that species are well adapted biochemically to their ecological niches; that species are the durable units of evolution, and all levels from organism to organ, cell and molecule within the species are characteristic of it; and that the design of every organism and cell is efficient. They argue that the "new biology" integrates genomics, bioinformatics, and evolutionary genetics into a general-purpose toolkit for a "Postmodern Synthesis".

Pigliucci's extended evolutionary synthesis, 2007

In 2007, more than half a century after the modern synthesis, Massimo Pigliucci called for an extended evolutionary synthesis to incorporate aspects of biology that had not been included or had not existed in the mid-20th century. It revisits the relative importance of different factors, challenges assumptions made in the modern synthesis, and adds new factors such as multilevel selection, transgenerational epigenetic inheritance, niche construction, and evolvability.

Koonin's 'post-modern' evolutionary synthesis, 2009

A 21st century tree of life showing horizontal gene transfers among prokaryotes and the saltational endosymbiosis events that created the eukaryotes, neither fitting into the 20th century's modern synthesis

In 2009, Darwin's 200th anniversary, the Origin of Species' 150th, and the 200th of Lamarck's "early evolutionary synthesis", Philosophie Zoologique, the evolutionary biologist Eugene Koonin stated that while "the edifice of the [early 20th century] Modern Synthesis has crumbled, apparently, beyond repair", a new 21st-century synthesis could be glimpsed. Three interlocking revolutions had, he argued, taken place in evolutionary biology: molecular, microbiological, and genomic. The molecular revolution included the neutral theory, that most mutations are neutral and that negative selection happens more often than the positive form, and that all current life evolved from a single common ancestor. In microbiology, the synthesis has expanded to cover the prokaryotes, using ribosomal RNA to form a tree of life. Finally, genomics brought together the molecular and microbiological syntheses; in particular, horizontal gene transfer between bacteria shows that prokaryotes can freely share genes. Many of these points had already been made by other researchers such as Ulrich Kutschera and Karl J. Niklas.

Towards a replacement synthesis

Inputs to the modern synthesis, with other topics (inverted colours) such as developmental biology that were not joined with evolutionary biology until the turn of the 21st century

Biologists, alongside scholars of the history and philosophy of biology, have continued to debate the need for, and possible nature of, a replacement synthesis. For example, in 2017 Philippe Huneman and Denis M. Walsh stated in their book Challenging the Modern Synthesis that numerous theorists had pointed out that the disciplines of embryological developmental theory, morphology, and ecology had been omitted. They noted that all such arguments amounted to a continuing desire to replace the modern synthesis with one that united "all biological fields of research related to evolution, adaptation, and diversity in a single theoretical framework." They observed further that there are two groups of challenges to the way the modern synthesis viewed inheritance. The first is that other modes such as epigenetic inheritance, phenotypic plasticity, the Baldwin effect, and the maternal effect allow new characteristics to arise and be passed on and for the genes to catch up with the new adaptations later. The second is that all such mechanisms are part, not of an inheritance system, but a developmental system: the fundamental unit is not a discrete selfishly competing gene, but a collaborating system that works at all levels from genes and cells to organisms and cultures to guide evolution. The molecular biologist Sean B. Carroll has commented that had Huxley had access to evolutionary developmental biology, "embryology would have been a cornerstone of his Modern Synthesis, and so evo-devo is today a key element of a more complete, expanded evolutionary synthesis."

Historiography

Looking back at the conflicting accounts of the modern synthesis, the historian Betty Smocovitis notes in her 1996 book Unifying Biology: The Evolutionary Synthesis and Evolutionary Biology that both historians and philosophers of biology have attempted to grasp its scientific meaning, but have found it "a moving target"; the only thing they agreed on was that it was a historical event. In her words

by the late 1980s the notoriety of the evolutionary synthesis was recognized ... So notorious did 'the synthesis' become, that few serious historically minded analysts would touch the subject, let alone know where to begin to sort through the interpretive mess left behind by the numerous critics and commentators.
