
Tuesday, March 23, 2021

Catastrophism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Catastrophism

Catastrophism is the theory that the Earth has largely been shaped by sudden, short-lived, violent events, possibly worldwide in scope. This is in contrast to uniformitarianism (sometimes described as gradualism), in which slow incremental changes, such as erosion, created all the Earth's geological features. The proponents of uniformitarianism held that the present was the key to the past, and that all geological processes (such as erosion) throughout the past were like those that can be observed now. Since the early disputes, a more inclusive and integrated view of geologic events has developed, in which the scientific consensus accepts that there were some catastrophic events in the geologic past, but these were explicable as extreme examples of natural processes which can occur.

Proponents of catastrophism proposed that the geological epochs had ended with violent and sudden natural catastrophes such as great floods and the rapid formation of major mountain chains. Plants and animals living in the parts of the world where such events occurred were made extinct, being replaced abruptly by the new forms whose fossils defined the geological strata. Some catastrophists attempted to relate at least one such change to the Biblical account of Noah's flood.

The concept was first popularised by the early 19th-century French scientist Georges Cuvier, who proposed that new life forms had moved in from other areas after local floods, and avoided religious or metaphysical speculation in his scientific writings.

History

Geology and biblical beliefs

In the early development of geology, efforts were made in a predominantly Christian western society to reconcile biblical narratives of Creation and the universal flood with new concepts about the processes which had formed the Earth. The discovery of other ancient flood myths was taken as explaining why the flood story was "stated in scientific methods with surprising frequency among the Greeks", an example being Plutarch's account of the Ogygian flood.

Cuvier and the natural theologians

The leading scientific proponent of catastrophism in the early nineteenth century was the French anatomist and paleontologist Georges Cuvier. His motivation was to explain the patterns of extinction and faunal succession that he and others were observing in the fossil record. While he did speculate that the catastrophe responsible for the most recent extinctions in Eurasia might have been the result of the inundation of low-lying areas by the sea, he did not make any reference to Noah's flood. Nor did he ever make any reference to divine creation as the mechanism by which repopulation occurred following the extinction event. In fact Cuvier, influenced by the ideas of the Enlightenment and the intellectual climate of the French revolution, avoided religious or metaphysical speculation in his scientific writings. Cuvier also believed that the stratigraphic record indicated that there had been several of these revolutions, which he viewed as recurring natural events, amid long intervals of stability during the history of life on earth. This led him to believe the Earth was several million years old.

By contrast in Britain, where natural theology was influential during the early nineteenth century, a group of geologists including William Buckland and Robert Jameson interpreted Cuvier's work differently. Cuvier had written an introduction to a collection of his papers on fossil quadrupeds, discussing his ideas on catastrophic extinction. Jameson translated Cuvier's introduction into English, publishing it under the title Theory of the Earth. He added extensive editorial notes to the translation, explicitly linking the latest of Cuvier's revolutions with the biblical flood. The resulting essay was extremely influential in the English-speaking world. Buckland spent much of his early career trying to demonstrate the reality of the biblical flood using geological evidence. He frequently cited Cuvier's work, even though Cuvier had proposed an inundation of limited geographic extent and extended duration, whereas Buckland, to be consistent with the biblical account, was advocating a universal flood of short duration. Eventually, Buckland abandoned flood geology in favor of the glaciation theory advocated by Louis Agassiz, following a visit to the Alps where Agassiz demonstrated the effects of glaciation at first hand. As a result of the influence of Jameson, Buckland, and other advocates of natural theology, the nineteenth century debate over catastrophism took on much stronger religious overtones in Britain than elsewhere in Europe.

The rise of uniformitarianism in geology

Uniformitarian explanations for the formation of sedimentary rock and an understanding of the immense stretch of geological time, or as the concept came to be known, deep time, were found in the writing of James Hutton, sometimes known as the father of geology, in the late 18th century. The geologist Charles Lyell built upon Hutton's ideas during the first half of the 19th century and amassed observations in support of the uniformitarian idea that the Earth's features had been shaped by the same geological processes that can be observed in the present, acting gradually over an immense period of time. Lyell presented his ideas in the influential three-volume work Principles of Geology, published in the 1830s, which challenged theories about geological cataclysms proposed by proponents of catastrophism like Cuvier and Buckland.

From around 1850 to 1980, most geologists endorsed uniformitarianism ("The present is the key to the past") and gradualism (geologic change occurs slowly over long periods of time) and rejected the idea that cataclysmic events such as earthquakes, volcanic eruptions, or floods of vastly greater power than those observed at the present time, played any significant role in the formation of the Earth's surface. Instead they believed that the earth had been shaped by the long term action of forces such as volcanism, earthquakes, erosion, and sedimentation, that could still be observed in action today. In part, the geologists' rejection was fostered by their impression that the catastrophists of the early nineteenth century believed that God was directly involved in determining the history of Earth. Some of the theories about Catastrophism in the nineteenth and early twentieth centuries were connected with religion and catastrophic origins were sometimes considered miraculous rather than natural events.

The rise of uniformitarianism made the introduction of a new catastrophe theory very difficult. In 1923, J Harlen Bretz published a paper on the Channeled Scablands of Washington State, USA, carved by catastrophic outburst floods from glacial Lake Missoula. Bretz encountered resistance to his theories from the geological establishment of the day, kicking off an acrimonious 40-year debate. Finally, in 1979, Bretz received the Penrose Medal, the Geological Society of America's highest award.

Immanuel Velikovsky's views

In the 1950s, Immanuel Velikovsky propounded catastrophism in several popular books. He speculated that the planet Venus is a former "comet" which was ejected from Jupiter and subsequently, 3,500 years ago, made two catastrophic close passes by Earth, 52 years apart, and later interacted with Mars, which then had a series of near collisions with Earth that ended in 687 BCE, before settling into its current orbit. Velikovsky used this to explain the biblical plagues of Egypt, the biblical reference to the "Sun standing still" for a day (Joshua 10:12 & 13, explained by changes in Earth's rotation), and the sinking of Atlantis. Scientists vigorously rejected Velikovsky's conjectures.

Current application

Neocatastrophism is the explanation of sudden extinctions in the palaeontological record by high magnitude, low frequency events (such as asteroid impacts, super-volcanic eruptions, supernova gamma ray bursts, etc.), as opposed to the more prevalent geomorphological thought which emphasises low magnitude, high frequency events.

Luis Alvarez impact event hypothesis

Over the past 25 years, a scientifically based catastrophism has gained wide acceptance with regard to certain events in the distant past. One impetus for this change came from the publication of a historic paper by Walter and Luis Alvarez in 1980. This paper suggested that a 10-kilometre (6.2 mi) asteroid struck Earth 66 million years ago at the end of the Cretaceous period. The impact wiped out about 70% of all species, including the dinosaurs, leaving behind the Cretaceous–Paleogene (K–Pg, formerly K–T) boundary. In 1990, a 180-kilometre (110 mi) candidate crater marking the impact was identified at Chicxulub in the Yucatán Peninsula of Mexico.

Since then, the debate about the extinction of the dinosaurs and other mass extinction events has centered on whether the extinction mechanism was the asteroid impact, widespread volcanism (which occurred about the same time), or some other mechanism or combination. Most of the mechanisms suggested are catastrophic in nature.

The observation of the collision of comet Shoemaker-Levy 9 with Jupiter illustrated that catastrophic impacts occur as natural events.

Comparison with uniformitarianism

One of the key differences between catastrophism and uniformitarianism is that uniformitarianism requires the assumption of vast timelines, whereas catastrophism does not. Today most geologists combine catastrophist and uniformitarian standpoints, taking the view that Earth's history is a slow, gradual story punctuated by occasional natural catastrophic events that have affected Earth and its inhabitants.

Moon-formation

Modern theories also suggest that Earth's anomalously large moon was formed catastrophically. In a paper published in Icarus in 1975, William K. Hartmann and Donald R. Davis proposed that a catastrophic near-miss by a large planetesimal early in Earth's formation approximately 4.5 billion years ago blew out rocky debris, remelted Earth and formed the Moon, thus explaining the Moon's lower density and lack of an iron core. The impact theory does have some faults: some computer simulations show the formation of a ring or multiple moons after the impact, and the elemental compositions of the Earth and the Moon are not quite the same, which the model does not fully explain.

Genetic recombination

From Wikipedia, the free encyclopedia
 
A current model of meiotic recombination, initiated by a double-strand break or gap, followed by pairing with a homologous chromosome and strand invasion to initiate the recombinational repair process. Repair of the gap can lead to crossover (CO) or non-crossover (NCO) of the flanking regions. CO recombination is thought to occur by the Double Holliday Junction (DHJ) model, illustrated on the right, above. NCO recombinants are thought to occur primarily by the Synthesis Dependent Strand Annealing (SDSA) model, illustrated on the left, above. Most recombination events appear to be the SDSA type.

Genetic recombination (also known as genetic reshuffling) is the exchange of genetic material between different organisms which leads to production of offspring with combinations of traits that differ from those found in either parent. In eukaryotes, genetic recombination during meiosis can lead to a novel set of genetic information that can be passed on from the parents to the offspring. Most recombination is naturally occurring.

During meiosis in eukaryotes, genetic recombination involves the pairing of homologous chromosomes. This may be followed by information transfer between the chromosomes. The information transfer may occur without physical exchange (a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed) (see SDSA pathway in Figure); or by the breaking and rejoining of DNA strands, which forms new molecules of DNA (see DHJ pathway in Figure).

Recombination may also occur during mitosis in eukaryotes where it ordinarily involves the two sister chromosomes formed after chromosomal replication. In this case, new combinations of alleles are not produced since the sister chromosomes are usually identical. In meiosis and mitosis, recombination occurs between similar molecules of DNA (homologous sequences). In meiosis, non-sister homologous chromosomes pair with each other so that recombination characteristically occurs between non-sister homologues. In both meiotic and mitotic cells, recombination between homologous chromosomes is a common mechanism used in DNA repair.

Gene conversion, the process during which homologous sequences are made identical, also falls under genetic recombination.

Genetic recombination and recombinational DNA repair also occurs in bacteria and archaea, which use asexual reproduction.

Recombination can be artificially induced in laboratory (in vitro) settings, producing recombinant DNA for purposes including vaccine development.

V(D)J recombination in organisms with an adaptive immune system is a type of site-specific genetic recombination that helps immune cells rapidly diversify to recognize and adapt to new pathogens.

Synapsis

During meiosis, synapsis (the pairing of homologous chromosomes) ordinarily precedes genetic recombination.

Mechanism

Genetic recombination is catalyzed by many different enzymes. Recombinases are key enzymes that catalyse the strand transfer step during recombination. RecA, the chief recombinase found in Escherichia coli, is responsible for the repair of DNA double strand breaks (DSBs). In yeast and other eukaryotic organisms there are two recombinases required for repairing DSBs. The RAD51 protein is required for mitotic and meiotic recombination, whereas the DNA repair protein, DMC1, is specific to meiotic recombination. In the archaea, the ortholog of the bacterial RecA protein is RadA.

Bacterial recombination

In Bacteria there are:

  • regular bacterial recombination, as well as noneffective transfer of genetic material, expressed as
  • unsuccessful or abortive transfer, that is, any transfer of donor DNA to a recipient cell in which the incoming DNA is not established as part of the recipient's genetic material. Abortive transfer has been observed following both transduction and conjugation. In all cases, the transmitted fragment is diluted by the growth of the culture.

Chromosomal crossover

Thomas Hunt Morgan's illustration of crossing over (1916)

In eukaryotes, recombination during meiosis is facilitated by chromosomal crossover. The crossover process leads to offspring having different combinations of genes from those of their parents, and can occasionally produce new chimeric alleles. The shuffling of genes brought about by genetic recombination produces increased genetic variation. It also allows sexually reproducing organisms to avoid Muller's ratchet, in which the genomes of an asexual population accumulate deleterious mutations in an irreversible manner.

Chromosomal crossover involves recombination between the paired chromosomes inherited from each of one's parents, generally occurring during meiosis. During prophase I (pachytene stage) the four available chromatids are in tight formation with one another. While in this formation, homologous sites on two chromatids can closely pair with one another, and may exchange genetic information.

Because recombination can occur with small probability at any location along a chromosome, the frequency of recombination between two locations depends on the distance separating them. Therefore, for genes sufficiently distant on the same chromosome, the amount of crossover is high enough to destroy the correlation between alleles.

Tracking the movement of genes resulting from crossovers has proven quite useful to geneticists. Because two genes that are close together are less likely to become separated than genes that are farther apart, geneticists can deduce roughly how far apart two genes are on a chromosome if they know the frequency of the crossovers. Geneticists can also use this method to infer the presence of certain genes. Genes that typically stay together during recombination are said to be linked. One gene in a linked pair can sometimes be used as a marker to deduce the presence of another gene. This is typically used in order to detect the presence of a disease-causing gene.

The recombination frequency observed between two loci is the crossing-over value. It is the frequency of crossing over between two linked gene loci (markers), and depends on the distance between the loci observed. For any fixed set of genetic and environmental conditions, recombination in a particular region of a linkage structure (chromosome) tends to be constant, and the same is then true for the crossing-over value, which is used in the production of genetic maps.
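As a concrete illustration of how a crossing-over value is obtained and turned into a map distance, the sketch below (Python, with invented offspring counts for two hypothetical loci) computes the recombination frequency from a two-point test cross and converts it to centimorgans with Haldane's mapping function, which corrects for unobserved double crossovers under the assumption of no interference.

    from math import log

    def recombination_frequency(parental, recombinant):
        """Crossing-over value: fraction of test-cross offspring carrying
        a recombinant combination of the two marker alleles."""
        return recombinant / (parental + recombinant)

    def haldane_map_distance_cM(r):
        """Map distance in centimorgans from a recombination frequency
        r < 0.5, using Haldane's mapping function d = -50 ln(1 - 2r)."""
        return -50.0 * log(1.0 - 2.0 * r)

    # Invented counts from a hypothetical test cross between loci A and B.
    parental_offspring = 820
    recombinant_offspring = 180

    r = recombination_frequency(parental_offspring, recombinant_offspring)
    print(f"crossing-over value: {r:.2f}")
    print(f"Haldane map distance: {haldane_map_distance_cM(r):.1f} cM")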

Gene conversion

In gene conversion, a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed. Gene conversion occurs at high frequency at the actual site of the recombination event during meiosis. It is a process by which a DNA sequence is copied from one DNA helix (which remains unchanged) to another DNA helix, whose sequence is altered. Gene conversion has often been studied in fungal crosses where the 4 products of individual meioses can be conveniently observed. Gene conversion events can be distinguished as deviations in an individual meiosis from the normal 2:2 segregation pattern (e.g. a 3:1 pattern).
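A small illustration of how conversion events are scored in such crosses: the sketch below (Python, with invented tetrads from a hypothetical A/a heterozygote) counts the two alleles among the four products of a single meiosis and flags any ratio other than the normal 2:2 as a candidate gene conversion.

    def classify_tetrad(products):
        """Classify allele segregation among the four products of one
        meiosis: 2:2 is normal; 3:1, 1:3 or 4:0 indicate gene conversion."""
        n_A = sum(1 for allele in products if allele == "A")
        n_a = len(products) - n_A
        pattern = f"{n_A}:{n_a}"
        return pattern, ("normal 2:2" if n_A == 2 else "gene conversion")

    # Invented tetrads from an A/a heterozygote.
    for tetrad in [("A", "A", "a", "a"), ("A", "A", "A", "a")]:
        print(tetrad, "->", classify_tetrad(tetrad))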

Nonhomologous recombination

Recombination can occur between DNA sequences that contain no sequence homology. This can cause chromosomal translocations, sometimes leading to cancer.

In B cells

B cells of the immune system perform genetic recombination, called immunoglobulin class switching. It is a biological mechanism that changes an antibody from one class to another, for example, from an isotype called IgM to an isotype called IgG.

Genetic engineering

In genetic engineering, recombination can also refer to artificial and deliberate recombination of disparate pieces of DNA, often from different organisms, creating what is called recombinant DNA. A prime example of such a use of genetic recombination is gene targeting, which can be used to add, delete or otherwise change an organism's genes. This technique is important to biomedical researchers as it allows them to study the effects of specific genes. Techniques based on genetic recombination are also applied in protein engineering to develop new proteins of biological interest.

Recombinational repair

DNA damages caused by a variety of exogenous agents (e.g. UV light, X-rays, chemical cross-linking agents) can be repaired by homologous recombinational repair (HRR). These findings suggest that DNA damages arising from natural processes, such as exposure to reactive oxygen species that are byproducts of normal metabolism, are also repaired by HRR. In humans, deficiencies in the gene products necessary for HRR during meiosis likely cause infertility, and deficiencies in gene products necessary for HRR, such as BRCA1 and BRCA2, increase the risk of cancer.

In bacteria, transformation is a process of gene transfer that ordinarily occurs between individual cells of the same bacterial species. Transformation involves integration of donor DNA into the recipient chromosome by recombination. This process appears to be an adaptation for repairing DNA damages in the recipient chromosome by HRR. Transformation may provide a benefit to pathogenic bacteria by allowing repair of DNA damage, particularly damages that occur in the inflammatory, oxidizing environment associated with infection of a host.

When two or more viruses, each containing lethal genomic damages, infect the same host cell, the virus genomes can often pair with each other and undergo HRR to produce viable progeny. This process, referred to as multiplicity reactivation, has been studied in lambda and T4 bacteriophages, as well as in several pathogenic viruses. In the case of pathogenic viruses, multiplicity reactivation may be an adaptive benefit to the virus since it allows the repair of DNA damages caused by exposure to the oxidizing environment produced during host infection. See also reassortment.

Meiotic recombination

Molecular models of meiotic recombination have evolved over the years as relevant evidence accumulated. A major incentive for developing a fundamental understanding of the mechanism of meiotic recombination is that such understanding is crucial for solving the problem of the adaptive function of sex, a major unresolved issue in biology. A recent model that reflects current understanding was presented by Anderson and Sekelsky, and is outlined in the first figure in this article. The figure shows that two of the four chromatids present early in meiosis (prophase I) are paired with each other and able to interact. Recombination, in this version of the model, is initiated by a double-strand break (or gap) shown in the DNA molecule (chromatid) at the top of the first figure in this article. However, other types of DNA damage may also initiate recombination. For instance, an inter-strand cross-link (caused by exposure to a cross-linking agent such as mitomycin C) can be repaired by HRR.

As indicated in the first figure, above, two types of recombinant product are produced. Indicated on the right side is a “crossover” (CO) type, where the flanking regions of the chromosomes are exchanged, and on the left side, a “non-crossover” (NCO) type where the flanking regions are not exchanged. The CO type of recombination involves the intermediate formation of two “Holliday junctions” indicated in the lower right of the figure by two X shaped structures in each of which there is an exchange of single strands between the two participating chromatids. This pathway is labeled in the figure as the DHJ (double-Holliday junction) pathway.

The NCO recombinants (illustrated on the left in the figure) are produced by a process referred to as “synthesis dependent strand annealing” (SDSA). Recombination events of the NCO/SDSA type appear to be more common than the CO/DHJ type. The NCO/SDSA pathway contributes little to genetic variation, since the arms of the chromosomes flanking the recombination event remain in the parental configuration. Thus, explanations for the adaptive function of meiosis that focus exclusively on crossing-over are inadequate to explain the majority of recombination events.

Achiasmy and heterochiasmy

Achiasmy is the phenomenon where autosomal recombination is completely absent in one sex of a species. Achiasmatic chromosomal segregation is well documented in male Drosophila melanogaster. Heterochiasmy occurs when recombination rates differ between the sexes of a species. This sexually dimorphic pattern in recombination rate has been observed in many species. In mammals, females most often have higher rates of recombination. The "Haldane-Huxley rule" states that achiasmy usually occurs in the heterogametic sex.

RNA virus recombination

Numerous RNA viruses are capable of genetic recombination when at least two viral genomes are present in the same host cell. In retroviruses, recombination occurs during reverse transcription and is mediated by the enzyme reverse transcriptase, which jumps from one viral RNA genome to the other, resulting in a "template switching" event and a single DNA strand that contains sequences from both viral RNA genomes. Recombination is largely responsible for RNA virus diversity and immune evasion. RNA recombination appears to be a major driving force in determining genome architecture and the course of viral evolution among picornaviridae ((+)ssRNA) (e.g. poliovirus). In the retroviridae ((+)ssRNA) (e.g. HIV), damage in the RNA genome appears to be avoided during reverse transcription by strand switching, a form of recombination.

Recombination also occurs in the reoviridae (dsRNA)(e.g. reovirus), orthomyxoviridae ((-)ssRNA)(e.g. influenza virus) and coronaviridae ((+)ssRNA) (e.g. SARS).

Recombination in RNA viruses appears to be an adaptation for coping with genome damage. Switching between template strands during genome replication, referred to as copy-choice recombination, was originally proposed to explain the positive correlation of recombination events over short distances in organisms with a DNA genome (see first Figure, SDSA pathway). The forced copy-choice model suggests that reverse transcriptase undergoes template switching when it encounters a nick in the viral RNA sequence. Thus, the forced copy-choice model implies that recombination is required for virus integrity and survival, as it is able to correct for genomic damage in order to create proviral DNA. Another recombination model counters this idea, and instead proposes that recombination occurs sporadically when the two domains of reverse transcriptase, the RNAse H and the polymerase, differ in their activity speeds. This forces the reverse transcriptase enzyme off of one RNA strand and onto the second. This second model of recombination is referred to as the dynamic choice model. A study by Rawson et al. determined that both recombination models are correct in HIV-1 recombination, and that recombination is necessary for viral replication.

Recombination can occur infrequently between animal viruses of the same species but of divergent lineages. The resulting recombinant viruses may sometimes cause an outbreak of infection in humans.

When replicating its (+)ssRNA genome, the poliovirus RNA-dependent RNA polymerase (RdRp) is able to carry out recombination. Recombination appears to occur by a copy choice mechanism in which the RdRp switches (+)ssRNA templates during negative strand synthesis. Recombination by RdRp strand switching also occurs in the (+)ssRNA plant carmoviruses and tombusviruses.

Recombination appears to be a major driving force in determining genetic variability within coronaviruses, as well as the ability of coronavirus species to jump from one host to another and, infrequently, the emergence of novel species, although the mechanism of recombination in coronaviruses is unclear. During the first months of the COVID-19 pandemic, such a recombination event was suggested to have been a critical step in the evolution of SARS-CoV-2's ability to infect humans. SARS-CoV-2's entire receptor binding motif appeared, based on preliminary observations, to have been introduced through recombination from coronaviruses of pangolins. However, more comprehensive analyses later refuted this suggestion and showed that SARS-CoV-2 likely evolved solely within bats and with little or no recombination.

Uniformitarianism

From Wikipedia, the free encyclopedia

Hutton's Unconformity at Jedburgh.
Above: John Clerk of Eldin's 1787 illustration.
Below: 2003 photograph.

Uniformitarianism, also known as the Doctrine of Uniformity or the Uniformitarian Principle, is the assumption that the same natural laws and processes that operate in our present-day scientific observations have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of cause and effect throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though it is an unprovable postulate that cannot be verified using the scientific method, some consider uniformitarianism to be a required first principle in scientific research. Other scientists disagree and consider that nature is not absolutely uniform, even though it does exhibit certain regularities.

In geology, uniformitarianism has included the gradualistic concept that "the present is the key to the past" and that geological events occur at the same rate now as they have always done, though many modern geologists no longer hold to a strict gradualism. The term was coined by William Whewell for an approach originally proposed in contrast to catastrophism by British naturalists in the late 18th century, starting with the work of the geologist James Hutton in his many books, including Theory of the Earth. Hutton's work was later refined by the scientist John Playfair and popularised by the geologist Charles Lyell's Principles of Geology in 1830. Today, Earth's history is considered to have been a slow, gradual process, punctuated by occasional natural catastrophic events.

History

18th century

Cliff at the east of Siccar Point in Berwickshire, showing the near-horizontal red sandstone layers above vertically tilted greywacke rocks.

Earlier conceptions likely had little influence on 18th-century European geological explanations for the formation of Earth. Abraham Gottlob Werner (1749–1817) proposed Neptunism, in which strata represented deposits from shrinking seas precipitated onto primordial rocks such as granite. In 1785 James Hutton proposed an opposing, self-maintaining infinite cycle based on natural history and not on the Biblical account.

The solid parts of the present land appear in general, to have been composed of the productions of the sea, and of other materials similar to those now found upon the shores. Hence we find a reason to conclude:

1st, That the land on which we rest is not simple and original, but that it is a composition, and had been formed by the operation of second causes.
2nd, That before the present land was made, there had subsisted a world composed of sea and land, in which were tides and currents, with such operations at the bottom of the sea as now take place. And,
Lastly, That while the present land was forming at the bottom of the ocean, the former land maintained plants and animals; at least the sea was then inhabited by animals, in a similar manner as it is at present.

Hence we are led to conclude, that the greater part of our land, if not the whole had been produced by operations natural to this globe; but that in order to make this land a permanent body, resisting the operations of the waters, two things had been required;

1st, The consolidation of masses formed by collections of loose or incoherent materials;
2ndly, The elevation of those consolidated masses from the bottom of the sea, the place where they were collected, to the stations in which they now remain above the level of the ocean.

Hutton then sought evidence to support his idea that there must have been repeated cycles, each involving deposition on the seabed, uplift with tilting and erosion, and then moving undersea again for further layers to be deposited. At Glen Tilt in the Cairngorm mountains he found granite penetrating metamorphic schists, in a way which indicated to him that the presumed primordial rock had been molten after the strata had formed. He had read about angular unconformities as interpreted by Neptunists, and found an unconformity at Jedburgh where layers of greywacke in the lower layers of the cliff face have been tilted almost vertically before being eroded to form a level plane, under horizontal layers of Old Red Sandstone. In the spring of 1788 he took a boat trip along the Berwickshire coast with John Playfair and the geologist Sir James Hall, and found a dramatic unconformity showing the same sequence at Siccar Point. Playfair later recalled that "the mind seemed to grow giddy by looking so far into the abyss of time", and Hutton concluded a 1788 paper he presented at the Royal Society of Edinburgh, later rewritten as a book, with the phrase "we find no vestige of a beginning, no prospect of an end".

Both Playfair and Hall wrote their own books on the theory, and for decades robust debate continued between Hutton's supporters and the Neptunists. Georges Cuvier's paleontological work in the 1790s established the reality of extinction, which he explained by local catastrophes, after which other fixed species repopulated the affected areas. In Britain, geologists adapted this idea into "diluvial theory", which proposed repeated worldwide annihilation and creation of new fixed species adapted to a changed environment, initially identifying the most recent catastrophe as the biblical flood.

19th century

Charles Lyell at the British Association meeting in Glasgow 1840

From 1830 to 1833 Charles Lyell's multi-volume Principles of Geology was published. The work's subtitle was "An attempt to explain the former changes of the Earth's surface by reference to causes now in operation". He drew his explanations from field studies conducted directly before he went to work on the founding geology text, and developed Hutton's idea that the earth was shaped entirely by slow-moving forces still in operation today, acting over a very long period of time. The terms uniformitarianism, for this idea, and catastrophism, for the opposing viewpoint, were coined by William Whewell in a review of Lyell's book. Principles of Geology was the most influential geological work in the middle of the 19th century.

Systems of inorganic earth history

Geoscientists support diverse systems of Earth history, the nature of which depends on which mixture of views about process, control, rate, and state is preferred. Because geologists and geomorphologists tend to adopt opposite views over process, rate, and state in the inorganic world, there are eight distinct systems of belief about the development of the terrestrial sphere. All geoscientists stand by the principle of uniformity of law. Most, but not all, are directed by the principle of simplicity. All make definite assertions about the quality of rate and state in the inorganic realm.

Each system combines a methodological assumption about the kind of process (actualism: the same kinds of processes as exist today; non-actualism: different kinds of processes than exist today) with substantive claims about state (steady state, i.e. non-directionalism, or changing state, i.e. directionalism) and about rate (constant rate, i.e. gradualism, or changing rate, i.e. catastrophism). The eight resulting systems of inorganic Earth history, with their promoters, are:

  • Actualistic non-directional gradualism (same kind of processes as exist today, steady state, constant rate): most of Hutton, Playfair, Lyell.
  • Actualistic non-directional catastrophism (same kind of processes, steady state, changing rate): Hall.
  • Actualistic directional gradualism (same kind of processes, changing state, constant rate): a small part of Hutton, Cotta, Darwin.
  • Actualistic directional catastrophism (same kind of processes, changing state, changing rate): Hooke, Steno, Lehmann, Pallas, de Saussure, Werner and the geognosists, Élie de Beaumont and followers.
  • Non-actualistic non-directional gradualism (different kind of processes than exist today, steady state, constant rate): Carpenter.
  • Non-actualistic non-directional catastrophism (different kind of processes, steady state, changing rate): Bonnet, Cuvier.
  • Non-actualistic directional gradualism (different kind of processes, changing state, constant rate): de Maillet, Buffon.
  • Non-actualistic directional catastrophism (different kind of processes, changing state, changing rate): Restoration cosmogonists, English diluvialists, Scriptural geologists.

Lyell's uniformitarianism

According to Reijer Hooykaas (1963), Lyell's uniformitarianism is a family of four related propositions, not a single idea:

  • Uniformity of law – the laws of nature are constant across time and space.
  • Uniformity of methodology – the appropriate hypotheses for explaining the geological past are those with analogy today.
  • Uniformity of kind – past and present causes are all of the same kind, have the same energy, and produce the same effects.
  • Uniformity of degree – geological circumstances have remained the same over time.

None of these connotations requires another, and they are not all equally inferred by uniformitarians.

Gould explained Lyell's propositions in Time's Arrow, Time's Cycle (1987), stating that Lyell conflated two different types of propositions: a pair of methodological assumptions with a pair of substantive hypotheses. The four together make up Lyell's uniformitarianism.

Methodological assumptions

The two methodological assumptions below are accepted to be true by the majority of scientists and geologists. Gould claims that these philosophical propositions must be assumed before you can proceed as a scientist doing science. "You cannot go to a rocky outcrop and observe either the constancy of nature's laws or the working of unknown processes. It works the other way around." You first assume these propositions and "then you go to the outcrop."

  • Uniformity of law across time and space: Natural laws are constant across space and time.
The axiom of uniformity of law is necessary in order for scientists to extrapolate (by inductive inference) into the unobservable past. The constancy of natural laws must be assumed in the study of the past; else we cannot meaningfully study it.
  • Uniformity of process across time and space: Natural processes are constant across time and space.
Though similar to uniformity of law, this second a priori assumption, shared by the vast majority of scientists, deals with geological causes, not physicochemical laws. The past is to be explained by processes acting currently in time and space rather than inventing extra esoteric or unknown processes without good reason, otherwise known as parsimony or Occam's razor.

Substantive hypotheses

The substantive hypotheses were controversial and, in some cases, accepted by few. These hypotheses are judged true or false on empirical grounds through scientific observation and repeated experimental data. This is in contrast with the previous two philosophical assumptions that come before one can do science and so cannot be tested or falsified by science.

  • Uniformity of rate across time and space: Change is typically slow, steady, and gradual.
Uniformity of rate (or gradualism) is what most people (including geologists) think of when they hear the word "uniformitarianism," confusing this hypothesis with the entire definition. As late as 1990, Lemon, in his textbook of stratigraphy, affirmed that "The uniformitarian view of earth history held that all geologic processes proceed continuously and at a very slow pace."
Gould explained Hutton's view of uniformity of rate; mountain ranges or grand canyons are built by the accumulation of nearly insensible changes added up through vast time. Some major events such as floods, earthquakes, and eruptions, do occur. But these catastrophes are strictly local. They neither occurred in the past nor shall happen in the future, at any greater frequency or extent than they display at present. In particular, the whole earth is never convulsed at once.
  • Uniformity of state across time and space: Change is evenly distributed throughout space and time.
The uniformity of state hypothesis implies that throughout the history of our earth there is no progress in any inexorable direction. The planet has almost always looked and behaved as it does now. Change is continuous but leads nowhere. The earth is in balance: a dynamic steady state.

20th century

Stephen Jay Gould's first scientific paper, "Is uniformitarianism necessary?" (1965), reduced these four assumptions to two. He dismissed the first principle, which asserted spatial and temporal invariance of natural laws, as no longer an issue of debate. He rejected the third (uniformity of rate) as an unjustified limitation on scientific inquiry, as it constrains past geologic rates and conditions to those of the present. So, Lyell's uniformitarianism was deemed unnecessary.

Uniformitarianism was proposed in contrast to catastrophism, which states that the distant past "consisted of epochs of paroxysmal and catastrophic action interposed between periods of comparative tranquility". Especially in the late 19th and early 20th centuries, most geologists took this interpretation to mean that catastrophic events are not important in geologic time; one example of this is the debate over the formation of the Channeled Scablands by the catastrophic Missoula glacial outburst floods. An important result of this debate and others was the re-clarification that, while the same principles operate in geologic time, catastrophic events that are infrequent on human time-scales can have important consequences in geologic history. Derek Ager has noted that "geologists do not deny uniformitarianism in its true sense, that is to say, of interpreting the past by means of the processes that are seen going on at the present day, so long as we remember that the periodic catastrophe is one of those processes. Those periodic catastrophes make more showing in the stratigraphical record than we have hitherto assumed."

Even Charles Lyell thought that ordinary geological processes would cause Niagara Falls to move upstream to Lake Erie within 10,000 years, leading to catastrophic flooding of a large part of North America.

Modern geologists do not apply uniformitarianism in the same way as Lyell. They question whether rates of processes were uniform through time and whether only those values measured during the history of geology should be accepted. The present may not be a long enough key to penetrate the deep lock of the past. Geologic processes may have been active at different rates in the past that humans have not observed. "By force of popularity, uniformity of rate has persisted to our present day. For more than a century, Lyell's rhetoric conflating axiom with hypotheses has descended in unmodified form. Many geologists have been stifled by the belief that proper methodology includes an a priori commitment to gradual change, and by a preference for explaining large-scale phenomena as the concatenation of innumerable tiny changes."

The current consensus is that Earth's history is a slow, gradual process punctuated by occasional natural catastrophic events that have affected Earth and its inhabitants. In practice it is reduced from Lyell's conflation, or blending, to simply the two philosophical assumptions. This is also known as the principle of geological actualism, which states that all past geological action was like all present geological action. The principle of actualism is the cornerstone of paleoecology.

Social sciences

Uniformitarianism has also been applied in historical linguistics, where it is considered a foundational principle of the field. Linguist Donald Ringe gives the following definition:

If language was normally acquired in the past in the same way as it is today – usually by native acquisition in early childhood – and if it was used in the same ways – to transmit information, to express solidarity with family, friends, and neighbors, to mark one's social position, etc. – then it must have had the same general structure and organization in the past as it does today, and it must have changed in the same ways as it does today.

History of molecular evolution

From Wikipedia, the free encyclopedia

The history of molecular evolution starts in the early 20th century with "comparative biochemistry", but the field of molecular evolution came into its own in the 1960s and 1970s, following the rise of molecular biology. The advent of protein sequencing allowed molecular biologists to create phylogenies based on sequence comparison, and to use the differences between homologous sequences as a molecular clock to estimate the time since the last common ancestor. In the late 1960s, the neutral theory of molecular evolution provided a theoretical basis for the molecular clock, though both the clock and the neutral theory were controversial, since most evolutionary biologists held strongly to panselectionism, with natural selection as the only important cause of evolutionary change. After the 1970s, nucleic acid sequencing allowed molecular evolution to reach beyond proteins to highly conserved ribosomal RNA sequences, the foundation of a reconceptualization of the early history of life.

Early history

Before the rise of molecular biology in the 1950s and 1960s, a small number of biologists had explored the possibilities of using biochemical differences between species to study evolution. Alfred Sturtevant predicted the existence of chromosomal inversions in 1921 and, with Dobzhansky, constructed one of the first molecular phylogenies, of 17 Drosophila pseudoobscura strains, from the accumulation of chromosomal inversions observed in the hybridization of polytene chromosomes. Ernest Baldwin worked extensively on comparative biochemistry beginning in the 1930s, and Marcel Florkin pioneered techniques for constructing phylogenies based on molecular and biochemical characters in the 1940s. However, it was not until the 1950s that biologists developed techniques for producing biochemical data for the quantitative study of molecular evolution.

The first molecular systematics research was based on immunological assays and protein "fingerprinting" methods. Alan Boyden—building on immunological methods of George Nuttall—developed new techniques beginning in 1954, and in the early 1960s Curtis Williams and Morris Goodman used immunological comparisons to study primate phylogeny. Others, such as Linus Pauling and his students, applied newly developed combinations of electrophoresis and paper chromatography to proteins subject to partial digestion by digestive enzymes to create unique two-dimensional patterns, allowing fine-grained comparisons of homologous proteins.

Beginning in the 1950s, a few naturalists also experimented with molecular approaches—notably Ernst Mayr and Charles Sibley. While Mayr quickly soured on paper chromatography, Sibley successfully applied electrophoresis to egg-white proteins to sort out problems in bird taxonomy, and soon supplemented that work with DNA hybridization techniques—the beginning of a long career built on molecular systematics.

While such early biochemical techniques found grudging acceptance in the biology community, for the most part they did not impact the main theoretical problems of evolution and population genetics. This would change as molecular biology shed more light on the physical and chemical nature of genes.

Genetic load, the classical/balance controversy, and the measurement of heterozygosity

At the time that molecular biology was coming into its own in the 1950s, there was a long-running debate—the classical/balance controversy—over the causes of heterosis, the increase in fitness observed when inbred lines are outcrossed. In 1950, James F. Crow offered two different explanations (later dubbed the classical and balance positions) based on the paradox first articulated by J. B. S. Haldane in 1937: the effect of deleterious mutations on the average fitness of a population depends only on the rate of mutations (not the degree of harm caused by each mutation) because more-harmful mutations are eliminated more quickly by natural selection, while less-harmful mutations remain in the population longer. H. J. Muller dubbed this "genetic load".
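Haldane's point can be restated as a short equilibrium calculation (a standard textbook sketch rather than a quotation from the papers discussed here). In the simplest single-locus treatment, a deleterious allele arises by mutation at rate u per generation and is removed by selection of strength s against carriers, so at mutation-selection balance

    \hat{q} \approx \frac{u}{s}, \qquad L \approx s\,\hat{q} \approx u,

where L is the load, the proportional reduction in mean fitness. The selection coefficient cancels: a mildly harmful mutation lingers longer and is carried by more individuals, exactly offsetting its smaller per-carrier cost, so the load depends only on the mutation rate (roughly u for fully recessive mutations and about 2u when the mutation is expressed in heterozygotes).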

Muller, motivated by his concern about the effects of radiation on human populations, argued that heterosis is primarily the result of deleterious homozygous recessive alleles, the effects of which are masked when separate lines are crossed—this was the dominance hypothesis, part of what Dobzhansky labeled the classical position. Thus, ionizing radiation and the resulting mutations produce considerable genetic load even if death or disease does not occur in the exposed generation, and in the absence of mutation natural selection will gradually increase the level of homozygosity. Bruce Wallace, working with J. C. King, used the overdominance hypothesis to develop the balance position, which left a larger place for overdominance (where the heterozygous state of a gene is more fit than the homozygous states). In that case, heterosis is simply the result of the increased expression of heterozygote advantage. If overdominant loci are common, then a high level of heterozygosity would result from natural selection, and mutation-inducing radiation may in fact facilitate an increase in fitness due to overdominance. (This was also the view of Dobzhansky.)

Debate continued through the 1950s, gradually becoming a central focus of population genetics. A 1958 study of Drosophila by Wallace suggested that radiation-induced mutations increased the viability of previously homozygous flies, providing evidence for heterozygote advantage and the balance position; Wallace estimated that 50% of loci in natural Drosophila populations were heterozygous. Motoo Kimura's subsequent mathematical analyses reinforced what Crow had suggested in 1950: that even if overdominant loci are rare, they could be responsible for a disproportionate amount of genetic variability. Accordingly, Kimura and his mentor Crow came down on the side of the classical position. Further collaboration between Crow and Kimura led to the infinite alleles model, which could be used to calculate the number of different alleles expected in a population, based on population size, mutation rate, and whether the mutant alleles were neutral, overdominant, or deleterious. Thus, the infinite alleles model offered a potential way to decide between the classical and balance positions, if accurate values for the level of heterozygosity could be found.
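For the neutral case the prediction takes a particularly compact form (given here in its standard textbook form, not quoted from the original papers): if every mutation produces a new, selectively equivalent allele at rate \mu per generation in a diploid population of effective size N_e, the expected equilibrium heterozygosity is

    H = \frac{\theta}{1 + \theta}, \qquad \theta = 4 N_e \mu,

so a measured heterozygosity translates directly into an estimate of \theta that can be compared with what neutral, overdominant, or deleterious alleles would be expected to maintain.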

By the mid-1960s, the techniques of biochemistry and molecular biology—in particular protein electrophoresis—provided a way to measure the level of heterozygosity in natural populations: a possible means to resolve the classical/balance controversy. In 1963, Jack L. Hubby published an electrophoresis study of protein variation in Drosophila; soon after, Hubby began collaborating with Richard Lewontin to apply Hubby's method to the classical/balance controversy by measuring the proportion of heterozygous loci in natural populations. Their two landmark papers, published in 1966, established a significant level of heterozygosity for Drosophila (12%, on average). However, these findings proved difficult to interpret. Most population geneticists (including Hubby and Lewontin) rejected the possibility of widespread neutral mutations; explanations that did not involve selection were anathema to mainstream evolutionary biology. Hubby and Lewontin also ruled out heterozygote advantage as the main cause because of the segregation load it would entail, though critics argued that the findings actually fit well with the overdominance hypothesis.
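The segregational-load objection is essentially arithmetic; the following back-of-the-envelope version uses illustrative numbers, not the figures from the 1966 papers. With symmetric overdominance at a locus, selection of strength s against each homozygote holds both alleles at frequency 1/2, so half the population is homozygous and mean fitness at that locus is 1 - s/2. If fitness is multiplicative across n such balanced loci,

    \bar{W} \approx \left(1 - \tfrac{s}{2}\right)^{n},

and even modest values, say s = 0.1 maintained at 2,000 loci, give \bar{W} \approx 0.95^{2000} \approx 10^{-45}, a reproductive excess no real population could supply. Calculations of this kind are what made widespread overdominance hard to accept as the explanation for the observed heterozygosity.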

Protein sequences and the molecular clock

While evolutionary biologists were tentatively branching out into molecular biology, molecular biologists were rapidly turning their attention toward evolution.

After developing the fundamentals of protein sequencing with insulin between 1951 and 1955, Frederick Sanger and his colleagues had published a limited interspecies comparison of the insulin sequence in 1956. Francis Crick, Charles Sibley and others recognized the potential for using biological sequences to construct phylogenies, though few such sequences were yet available. By the early 1960s, techniques for protein sequencing had advanced to the point that direct comparison of homologous amino acid sequences was feasible. In 1961, Emanuel Margoliash and his collaborators completed the sequence for horse cytochrome c (a longer and more widely distributed protein than insulin), followed in short order by a number of other species.

In 1962, Linus Pauling and Emile Zuckerkandl proposed using the number of differences between homologous protein sequences to estimate the time since divergence, an idea Zuckerkandl had conceived around 1960 or 1961. This began with Pauling's long-time research focus, hemoglobin, which was being sequenced by Walter Schroeder; the sequences not only supported the accepted vertebrate phylogeny, but also the hypothesis (first proposed in 1957) that the different globin chains within a single organism could also be traced to a common ancestral protein. Between 1962 and 1965, Pauling and Zuckerkandl refined and elaborated this idea, which they dubbed the molecular clock, and Emil L. Smith and Emanuel Margoliash expanded the analysis to cytochrome c. Early molecular clock calculations agreed fairly well with established divergence times based on paleontological evidence. However, the essential idea of the molecular clock—that individual proteins evolve at a regular rate independent of a species' morphological evolution—was extremely provocative (as Pauling and Zuckerkandl intended it to be).
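In outline the clock calculation is simple, and the sketch below (Python, with an invented toy alignment and an assumed calibration rate, neither taken from the sources above) shows the usual steps: count the fraction of differing amino acid sites, apply a Poisson correction for multiple substitutions at the same site, and divide by twice an assumed per-site substitution rate to obtain a divergence time.

    from math import log

    def proportion_different(seq_a, seq_b):
        """Fraction of aligned sites that differ (the p-distance)."""
        assert len(seq_a) == len(seq_b)
        return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

    def poisson_corrected_distance(p):
        """Correct the observed p-distance for unseen multiple hits,
        assuming substitutions accumulate as a Poisson process."""
        return -log(1.0 - p)

    def divergence_time(d, rate_per_site_per_year):
        """Time since the lineages split; the factor of 2 reflects
        substitutions accumulating on both lineages independently."""
        return d / (2.0 * rate_per_site_per_year)

    # Invented toy sequences and an assumed rate of 1e-9 substitutions/site/year.
    seq1 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    seq2 = "MKTAYIAKQRQISFVKAHFSRQLEDRLGLIEVR"
    p = proportion_different(seq1, seq2)
    d = poisson_corrected_distance(p)
    print(f"p = {p:.3f}, corrected distance = {d:.3f}")
    print(f"estimated divergence time = {divergence_time(d, 1e-9):.2e} years")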

The "molecular wars"

From the early 1960s, molecular biology was increasingly seen as a threat to the traditional core of evolutionary biology. Established evolutionary biologists—particularly Ernst Mayr, Theodosius Dobzhansky and G. G. Simpson, three of the founders of the modern evolutionary synthesis of the 1930s and 1940s—were extremely skeptical of molecular approaches, especially when it came to the connection (or lack thereof) to natural selection. Molecular evolution in general—and the molecular clock in particular—offered little basis for exploring evolutionary causation. According to the molecular clock hypothesis, proteins evolved essentially independently of the environmentally determined forces of selection; this was sharply at odds with the panselectionism prevalent at the time. Moreover, Pauling, Zuckerkandl, and other molecular biologists were increasingly bold in asserting the significance of "informational macromolecules" (DNA, RNA and proteins) for all biological processes, including evolution. The struggle between evolutionary biologists and molecular biologists—with each group holding up their discipline as the center of biology as a whole—was later dubbed the "molecular wars" by Edward O. Wilson, who experienced firsthand the domination of his biology department by young molecular biologists in the late 1950s and the 1960s.

In 1961, Mayr began arguing for a clear distinction between functional biology (which considered proximate causes and asked "how" questions) and evolutionary biology (which considered ultimate causes and asked "why" questions). He argued that both disciplines and individual scientists could be classified on either the functional or evolutionary side, and that the two approaches to biology were complementary. Mayr, Dobzhansky, Simpson and others used this distinction to argue for the continued relevance of organismal biology, which was rapidly losing ground to molecular biology and related disciplines in the competition for funding and university support. It was in that context that Dobzhansky first published his famous statement, "nothing in biology makes sense except in the light of evolution", in a 1964 paper affirming the importance of organismal biology in the face of the molecular threat; Dobzhansky characterized the molecular disciplines as "Cartesian" (reductionist) and organismal disciplines as "Darwinian".

Mayr and Simpson attended many of the early conferences where molecular evolution was discussed, critiquing what they saw as the overly simplistic approaches of the molecular clock. The molecular clock, based on uniform rates of genetic change driven by random mutations and drift, seemed incompatible with the varying rates of evolution and environmentally-driven adaptive processes (such as adaptive radiation) that were among the key developments of the evolutionary synthesis. At the 1962 Wenner-Gren conference, the 1964 Colloquium on the Evolution of Blood Proteins in Bruges, Belgium, and the 1964 Conference on Evolving Genes and Proteins at Rutgers University, they engaged directly with the molecular biologists and biochemists, hoping to maintain the central place of Darwinian explanations in evolution as its study spread to new fields.

Gene-centered view of evolution

Though not directly related to molecular evolution, the mid-1960s also saw the rise of the gene-centered view of evolution, spurred by George C. Williams's Adaptation and Natural Selection (1966). Debate over units of selection, particularly the controversy over group selection, led to increased focus on individual genes (rather than whole organisms or populations) as the theoretical basis for evolution. However, the increased focus on genes did not mean a focus on molecular evolution; in fact, the adaptationism promoted by Williams and other evolutionary theorists further marginalized the apparently non-adaptive changes studied by molecular evolutionists.

The neutral theory of molecular evolution

The intellectual threat of molecular evolution became more explicit in 1968, when Motoo Kimura introduced the neutral theory of molecular evolution. Based on the available molecular clock studies (of hemoglobin from a wide variety of mammals, cytochrome c from mammals and birds, and triosephosphate dehydrogenase from rabbits and cows), Kimura (assisted by Tomoko Ohta) calculated an average rate of DNA substitution of one base pair change per 300 base pairs (encoding 100 amino acids) per 28 million years. For mammal genomes, this indicated a substitution rate of one every 1.8 years, which would produce an unsustainably high substitution load unless the preponderance of substitutions was selectively neutral. Kimura argued that neutral mutations occur very frequently, a conclusion compatible with the results of the electrophoretic studies of protein heterozygosity. Kimura also applied his earlier mathematical work on genetic drift to explain how neutral mutations could come to fixation, even in the absence of natural selection; he soon convinced James F. Crow of the potential power of neutral alleles and genetic drift as well.
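The step from the per-protein figure to the genome-wide rate is plain arithmetic, reproduced here as a rough check with an assumed mammalian genome size of about 4 \times 10^{9} base pairs (a figure of the magnitude used in such calculations, not one quoted in this article):

    \frac{1}{300 \times 2.8 \times 10^{7}\ \text{yr}} \approx 1.2 \times 10^{-10}\ \text{substitutions per site per year}, \qquad 1.2 \times 10^{-10} \times 4 \times 10^{9} \approx 0.5\ \text{per genome per year},

that is, on the order of one fixed substitution in the population every couple of years, which is the scale of the figure quoted above and the source of the substitution-load problem.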

Kimura's theory, described only briefly in a letter to Nature, was followed shortly afterward by a more substantial analysis from Jack L. King and Thomas H. Jukes, who titled their first paper on the subject "non-Darwinian evolution". Though King and Jukes produced much lower estimates of substitution rates and of the resulting genetic load in the case of non-neutral changes, they agreed that neutral mutations driven by genetic drift were both real and significant. The fairly constant rates of evolution observed for individual proteins were not easily explained without invoking neutral substitutions (though G. G. Simpson and Emil Smith had tried). Jukes and King also found a strong correlation between the frequency of amino acids and the number of different codons encoding each amino acid. This pointed to substitutions in protein sequences as being largely the product of random genetic drift.
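The codon observation is straightforward to illustrate: if protein composition were shaped largely by random substitution, an amino acid's frequency should roughly track how many of the 61 sense codons encode it. The short sketch below merely tabulates that expectation from the standard genetic code; King and Jukes compared figures of this kind with frequencies tallied from real protein sequences.

# Expected amino acid frequencies if composition simply mirrored codon
# degeneracy in the standard genetic code (61 sense codons). A correlation
# between such expectations and observed frequencies is what King and Jukes
# took as evidence for a large role for random drift.

codon_counts = {
    "Leu": 6, "Ser": 6, "Arg": 6,
    "Ala": 4, "Gly": 4, "Pro": 4, "Thr": 4, "Val": 4,
    "Ile": 3,
    "Asn": 2, "Asp": 2, "Cys": 2, "Gln": 2, "Glu": 2,
    "His": 2, "Lys": 2, "Phe": 2, "Tyr": 2,
    "Met": 1, "Trp": 1,
}
total = sum(codon_counts.values())   # 61
for aa, n in sorted(codon_counts.items(), key=lambda kv: -kv[1]):
    print(f"{aa}: {n} codons -> expected frequency {n / total:.3f}")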

King and Jukes's paper, especially with its provocative title, was seen as a direct challenge to mainstream neo-Darwinism, and it brought molecular evolution and the neutral theory to the center of evolutionary biology. It provided a mechanism for the molecular clock and a theoretical basis for exploring deeper issues of molecular evolution, such as the relationship between rate of evolution and functional importance. The rise of the neutral theory marked a synthesis of evolutionary biology and molecular biology, though an incomplete one.

With their work on firmer theoretical footing, in 1971 Emile Zuckerkandl and other molecular evolutionists founded the Journal of Molecular Evolution.

The neutralist-selectionist debate and near-neutrality

The critical responses to the neutral theory that soon appeared marked the beginning of the neutralist-selectionist debate. In short, selectionists viewed natural selection as the primary or only cause of evolution, even at the molecular level, while neutralists held that neutral mutations were widespread and that genetic drift was a crucial factor in the evolution of proteins. Kimura became the most prominent defender of the neutral theory—which would be his main focus for the rest of his career. With Ohta, he refocused his arguments on the rate at which drift could fix new mutations in finite populations, the significance of constant protein evolution rates, and the functional constraints on protein evolution that biochemists and molecular biologists had described. Though Kimura had initially developed the neutral theory partly as an outgrowth of the classical position within the classical/balance controversy (predicting high genetic load as a consequence of non-neutral mutations), he gradually deemphasized his original argument that segregational load would be impossibly high without neutral mutations (which many selectionists, and even fellow neutralists King and Jukes, rejected).

From the 1970s through the early 1980s, both selectionists and neutralists could explain the observed high levels of heterozygosity in natural populations by assuming different values for unknown parameters. Early in the debate, Kimura's student Tomoko Ohta focused on the interaction between natural selection and genetic drift, which is significant for mutations that are not strictly neutral but nearly so. For such mutations, selection competes with drift: most slightly deleterious mutations are eliminated by natural selection or by chance, but some drift all the way to fixation. The behavior of this type of mutation, described by an equation combining the mathematics of the neutral theory with classical models, became the basis of Ohta's nearly neutral theory of molecular evolution.
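The selection-drift interaction can be made concrete with Kimura's diffusion approximation for the probability that a single new semidominant mutation with selection coefficient s eventually fixes in a population of effective size N. The parameter values below are illustrative rather than taken from Ohta's papers; the point is that a mutation's fate depends on the product N*s, not on s alone.

import math

def fixation_probability(s, N):
    """Fixation probability of one new mutant (initial frequency 1/(2N))."""
    if s == 0:
        return 1 / (2 * N)   # strictly neutral: fate decided by drift alone
    # Kimura's diffusion result for genic selection, assuming Ne = N.
    return math.expm1(-2 * s) / math.expm1(-4 * N * s)

N = 10_000                   # illustrative effective population size
for s in (0.0, -1e-5, -1e-4, -1e-3):
    p = fixation_probability(s, N)
    print(f"s = {s:+.0e}: fixation probability {p:.2e} (neutral: {1 / (2 * N):.2e})")

# Mutations with |4*N*s| much less than 1 behave almost neutrally, while
# selection makes fixation vanishingly unlikely once |4*N*s| is large.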

In 1973, Ohta published a short letter in Nature suggesting that a wide variety of molecular evidence supported the theory that most mutation events at the molecular level are slightly deleterious rather than strictly neutral. Molecular evolutionists were finding that while rates of protein evolution (consistent with the molecular clock) were fairly independent of generation time, rates of noncoding DNA divergence were inversely proportional to generation time. Noting that population size is generally inversely proportional to generation time, Ohta proposed that most amino acid substitutions are slightly deleterious while noncoding DNA substitutions are more nearly neutral. In this view, the faster rate of nearly neutral evolution in proteins expected in small populations (where drift dominates) is offset by their longer generation times; conversely, in large populations with short generation times, noncoding DNA evolves faster while protein evolution is retarded by selection, which outweighs drift when populations are large.
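A toy calculation, not drawn from Ohta's own models, illustrates the offset. Assume, purely for illustration, that deleterious selection coefficients follow an exponential distribution and that mutations with |s| below roughly 1/(2N) behave as effectively neutral; the mutation rate, mean selection coefficient, population sizes, and generation times below are all hypothetical.

import math

MEAN_S = 1e-3   # assumed mean deleterious selection coefficient (hypothetical)
MU = 1e-8       # assumed mutation rate per site per generation (hypothetical)

def effectively_neutral_fraction(N):
    """Fraction of new mutations with |s| < 1/(2N) under the exponential model."""
    return -math.expm1(-1 / (2 * N * MEAN_S))

# (effective population size, generation time in years), both hypothetical
species = {
    "small N, long generations": (1e4, 10.0),
    "large N, short generations": (1e6, 0.1),
}

for name, (N, gen_time) in species.items():
    per_generation = MU * effectively_neutral_fraction(N)   # substitutions/site/generation
    per_year = per_generation / gen_time
    print(f"{name}: per generation {per_generation:.1e}, per year {per_year:.1e}")

# The large population sees far fewer effectively neutral substitutions per
# generation, but its short generations yield a similar rate per year.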

Between then and the early 1990s, many studies of molecular evolution used a "shift model" in which the negative effect of deleterious mutations on a population's fitness shifts back to its original value when a mutation reaches fixation. In the early 1990s, Ohta developed a "fixed model" that included both beneficial and deleterious mutations, so that no artificial "shift" of overall population fitness was necessary. According to Ohta, however, the nearly neutral theory had largely fallen out of favor by the late 1980s, because researchers defaulted to the mathematically simpler neutral theory for the widespread molecular systematics research that flourished after the advent of rapid DNA sequencing. As more detailed systematics studies began comparing the evolution of genome regions subject to strong selection with those under weaker selection in the 1990s, the nearly neutral theory and the interaction between selection and drift once again became an important focus of research.

Microbial phylogeny

While early work in molecular evolution focused on readily sequenced proteins and relatively recent evolutionary history, by the late 1960s some molecular biologists were pushing further toward the base of the tree of life by studying highly conserved nucleic acid sequences. Carl Woese, a molecular biologist whose earlier work was on the genetic code and its origin, began using small subunit ribosomal RNA to reclassify bacteria by genetic (rather than morphological) similarity. Work proceeded slowly at first, but accelerated as new sequencing methods were developed in the 1970s and 1980s. By 1977, Woese and George Fox announced that some bacteria, such as methanogens, lacked the rRNA oligonucleotide signatures on which Woese's phylogenetic studies of bacteria were based; they argued that these organisms were actually distinct enough from conventional bacteria and the so-called higher organisms to form their own kingdom, which they called archaebacteria. Though controversial at first (and challenged again in the late 1990s), Woese's work became the basis of the modern three-domain system of Archaea, Bacteria, and Eukarya (replacing the five-kingdom system that had emerged in the 1960s).

Work on microbial phylogeny also brought molecular evolution closer to cell biology and origin of life research. The deep differences among the archaea, the bacteria, and the eukaryotes pointed to the importance of RNA in the early history of life. In his work with the genetic code, Woese had suggested that RNA-based life had preceded the current forms of DNA-based life, as had several others before him, an idea that Walter Gilbert would later call the "RNA world". In many cases, genomics research in the 1990s produced phylogenies contradicting the rRNA-based results, leading to the recognition of widespread lateral gene transfer across distinct taxa. Combined with the probable endosymbiotic origin of organelle-filled eukaryotes, this pointed to a far more complex picture of the origin and early history of life, one which might not be describable in the traditional terms of common ancestry.

A land without a people for a people without a land

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/A_l...