
Sunday, June 16, 2024

Transmutation of species

Transmutation of species and transformism are 18th- and early 19th-century ideas about the change of one species into another that preceded Charles Darwin's theory of evolution through natural selection. Transformisme was the French term used by Jean-Baptiste Lamarck in 1809 for his theory; other 18th- and 19th-century proponents of pre-Darwinian evolutionary ideas included Denis Diderot, Étienne Geoffroy Saint-Hilaire, Erasmus Darwin, Robert Grant, and Robert Chambers, the anonymous author of the book Vestiges of the Natural History of Creation. Such ideas were associated with 18th-century notions of Deism and human progress. Opposition in the scientific community to these early theories of evolution, led by influential scientists such as the anatomists Georges Cuvier and Richard Owen and the geologist Charles Lyell, was intense. The debate over them was an important stage in the history of evolutionary thought and influenced the subsequent reaction to Darwin's theory.

Terminology

Transmutation was one of the names commonly used for evolutionary ideas in the 19th century before Charles Darwin published On the Origin of Species (1859). Transmutation had previously been used as a term in alchemy to describe the transformation of base metals into gold. Other names for evolutionary ideas used in this period include the development hypothesis (one of the terms used by Darwin) and the theory of regular gradation, used by William Chilton in periodicals such as The Oracle of Reason. Transformation was used about as often as transmutation in this context. These early 19th-century evolutionary ideas played an important role in the history of evolutionary thought.

The proto-evolutionary thinkers of the 18th and early 19th centuries had to invent terms to label their ideas; Joseph Gottlieb Kölreuter was the first to use the term "transmutation" to refer to species that had undergone biological changes through hybridization.

The terminology did not settle down until some time after the publication of the Origin of Species. The word "evolved" in a modern sense was first used in 1826 in an anonymous paper published in Robert Jameson's journal. "Evolution" was a relative latecomer: it appears in Herbert Spencer's Social Statics of 1851, and in at least one earlier example, but it was not in general use until about 1865–70.

Historical development

Ideas before the 18th century

In the 10th and 11th centuries, Ibn Miskawayh's Al-Fawz al-Kabir (الفوز الأكبر), and the Brethren of Purity's Encyclopedia of the Brethren of Purity (رسائل إخوان الصفا‎) developed ideas about changes in biological species. In 1993, Muhammad Hamidullah described the ideas in lectures:

[These books] state that God first created matter and invested it with energy for development. Matter, therefore, adopted the form of vapour which assumed the shape of water in due time. The next stage of development was mineral life. Different kinds of stones developed in course of time. Their highest form being mirjan (coral). It is a stone which has in it branches like those of a tree. After mineral life evolves vegetation. The evolution of vegetation culminates with a tree which bears the qualities of an animal. This is the date-palm. It has male and female genders. It does not wither if all its branches are chopped but it dies when the head is cut off. The date-palm is therefore considered the highest among the trees and resembles the lowest among animals. Then is born the lowest of animals. It evolves into an ape. This is not the statement of Darwin. This is what Ibn Maskawayh states and this is precisely what is written in the Epistles of Ikhwan al-Safa. The Muslim thinkers state that ape then evolved into a lower kind of a barbarian man. He then became a superior human being. Man becomes a saint, a prophet. He evolves into a higher stage and becomes an angel. The one higher to angels is indeed none but God. Everything begins from Him and everything returns to Him.

In the 14th century, Ibn Khaldun further developed these ideas. According to some commentators, statements in his 1377 work, the Muqaddimah, anticipate the biological theory of evolution.

In a speech to the Royal Society in the late 17th century, Robert Hooke proposed that species vary, change, and in particular become extinct. His "Discourse of Earthquakes" was based on comparisons between fossils, especially between the modern pearly nautilus and the curled shells of ammonites.

18th and early 19th century

In the 18th century, Jacques-Antoine des Bureaux claimed a "genealogical ascent of species". He argued that through crossbreeding and hybridization in reproduction, "progressive organization" occurred, allowing organisms to change and more complex species to develop.

Simultaneously, Rétif de la Bretonne wrote La Découverte australe par un homme volant (1781) and La Philosophie de Monsieur Nicolas (1796), which encapsulated his view that more complex species, such as mankind, had developed step by step from "less perfect" animals. De la Bretonne believed that living forms undergo constant change, but he took a very different approach from Diderot: chance and blind combinations of atoms, in his opinion, were not the cause of transmutation. He argued that all species had developed from more primitive organisms, and that nature aimed to reach perfection.

Denis Diderot, chief editor of the Encyclopédie, spent his time poring over scientific theories attempting to explain rock strata and the diversity of fossils. Geological and fossil evidence was presented to him in contributions to Encyclopédie articles, chief among them "Mammoth", "Fossil", and "Ivory Fossil", all of which noted the existence of mammoth bones in Siberia. As a result of this geological and fossil evidence, Diderot believed that species were mutable. In particular, he argued that organisms metamorphosed over millennia, resulting in changes of species. In Diderot's theory of transformationism, random chance plays a large role in allowing species to change, develop, and become extinct, as well as in allowing new species to form. Specifically, Diderot believed that given randomness and an infinite number of trials, all possible scenarios would manifest themselves. He proposed that this randomness was behind the development of new traits in offspring and, as a result, the development and extinction of species.

Diderot drew on Leonardo da Vinci's comparison of the leg structure of a human and a horse as proof of the interconnectivity of species. He saw this comparison as demonstrating that nature could continually try out new variations. Additionally, Diderot argued that organic molecules and organic matter possessed an inherent consciousness, which allowed the smallest particles of organic matter to organize into fibers, then a network, and then organs. The idea that organic molecules have consciousness was derived from both Maupertuis and Lucretian texts. Overall, Diderot's musings fit together as a "composite transformist philosophy", one dependent on the randomness inherent to nature as a transformist mechanism.

Erasmus Darwin

Erasmus Darwin developed a theory of universal transformation. His major works, The Botanic Garden (1792), Zoonomia (1794–96), and The Temple of Nature, all touched on the transformation of organic creatures. In both The Botanic Garden and The Temple of Nature, Darwin used poetry to describe his ideas regarding species. Zoonomia, however, is a more scientific text in which Erasmus clearly articulates his beliefs about the connections between forms of organic life. He notes in particular that some plants and animals have "useless appendages", which have gradually changed from their original, useful states. Additionally, Darwin relied on cosmological transformation as a crucial aspect of his theory of transformation, making a connection between William Herschel's approach to natural-historical cosmology and the changing aspects of plants and animals.

Erasmus believed that life had one origin, a common ancestor, which he referred to as the "filament" of life. He used his understanding of chemical transmutation to justify the spontaneous generation of this filament. His geological study of Derbyshire and the seashells and fossils which he found there helped him to conclude that complex life had developed from more primitive forms (Laniel-Musitelli). Erasmus was an early proponent of what we now refer to as "adaptations", albeit through a different transformist mechanism – he argued that sexual reproduction could pass on acquired traits through the father's contribution to the embryon. These changes, he believed, were mainly driven by the three great needs of life: lust, food, and security. Erasmus proposed that these acquired changes gradually altered the physical makeup of organisms as a result of the desires of plants and animals. Notably, he describes insects developing from plants, a grand example of one species transforming into another.

Erasmus Darwin relied on Lucretian philosophy to form a theory of universal change. He proposed that both organic and inorganic matter changed throughout the course of the universe, and that plants and animals could pass on acquired traits to their progeny. His view of universal transformation placed time as a driving force in the universe’s journey towards improvement. In addition, Erasmus believed that nature had some amount of agency in this inheritance. Darwin spun his own story of how nature began to develop from the ocean, and then slowly became more diverse and more complex. His transmutation theory relied heavily on the needs which drove animal competition, as well as the results of this contest between both animals and plants.

Charles Darwin acknowledged his grandfather's contribution to the field of transmutation in his synopsis of Erasmus' life, The Life of Erasmus Darwin. Darwin collaborated with Ernst Krause, writing a foreword to Krause's Erasmus Darwin und seine Stellung in der Geschichte der Descendenz-Theorie, which translates as Erasmus Darwin and His Place in the History of the Descent Theory. Krause explains Erasmus' motivations for arguing for the theory of descent, including his connection and correspondence with Rousseau, which may have influenced how he saw the world.

Lamarck

Jean-Baptiste Lamarck proposed a hypothesis on the transmutation of species in Philosophie Zoologique (1809). Lamarck did not believe that all living things shared a common ancestor. Rather he believed that simple forms of life were created continuously by spontaneous generation. He also believed that an innate life force, which he sometimes described as a nervous fluid, drove species to become more complex over time, advancing up a linear ladder of complexity that was related to the great chain of being. Lamarck also recognized that species were adapted to their environment. He explained this observation by saying that the same nervous fluid driving increasing complexity, also caused the organs of an animal (or a plant) to change based on the use or disuse of that organ, just as muscles are affected by exercise. He argued that these changes would be inherited by the next generation and produce slow adaptation to the environment. It was this secondary mechanism of adaptation through the inheritance of acquired characteristics that became closely associated with his name and would influence discussions of evolution into the 20th century.

Ideas after Lamarck

The German geologist Abraham Gottlob Werner believed in geological transformism; specifically, he argued that the Earth undergoes irreversible and continuous change. The Edinburgh school, a radical British school of comparative anatomy, fostered much debate about natural history. The school, which included the surgeon Robert Knox and the anatomist Robert Grant, was closely in touch with Lamarck's school of French transformationism, which included scientists such as Étienne Geoffroy Saint-Hilaire. Grant developed Lamarck's and Erasmus Darwin's ideas of transmutation and evolutionism, investigating homology to prove common descent. As a young student Charles Darwin joined Grant in investigations of the life cycle of marine animals. He also studied geology under professor Robert Jameson, whose journal published an anonymous paper in 1826 praising "Mr. Lamarck" for explaining how the higher animals had "evolved" from the "simplest worms" – this was the first use of the word "evolved" in a modern sense. Jameson was a Wernerian, which allowed him to consider transformation theories and foster interest in transformism among his students. Jameson's course closed with lectures on the "Origin of the Species of Animals".

Vestiges of the Natural History of Creation

Diagram from the 1844 book Vestiges of the Natural History of Creation by Robert Chambers shows a model of development where fishes (F), reptiles (R), and birds (B) represent branches from a path leading to mammals (M).

The computing pioneer Charles Babbage published his unofficial Ninth Bridgewater Treatise in 1837, putting forward the thesis that God had the omnipotence and foresight to create as a divine legislator, making laws (or programs) which then produced species at the appropriate times, rather than continually interfering with ad hoc miracles each time a new species was required. In 1844 the Scottish publisher Robert Chambers anonymously published an influential and extremely controversial book of popular science entitled Vestiges of the Natural History of Creation. This book proposed an evolutionary scenario for the origins of the solar system and life on earth. It claimed that the fossil record showed an ascent of animals with current animals being branches off a main line that leads progressively to humanity. It implied that the transmutations led to the unfolding of a preordained orthogenetic plan woven into the laws that governed the universe. In this sense it was less completely materialistic than the ideas of radicals like Robert Grant, but its implication that humans were just the last step in the ascent of animal life incensed many conservative thinkers. Both conservatives like Adam Sedgwick, and radical materialists like Thomas Henry Huxley, who disliked Chambers' implications of preordained progress, were able to find scientific inaccuracies in the book that they could disparage. Darwin himself openly deplored the author's "poverty of intellect", and dismissed it as a "literary curiosity". However, the high profile of the public debate over Vestiges, with its depiction of evolution as a progressive process, and its popular success, would greatly influence the perception of Darwin's theory a decade later. It also influenced some younger naturalists, including Alfred Russel Wallace, to take an interest in the idea of transmutation.

Ideological motivations for theories of transmutation

The proponents of transmutation were almost all inclined to Deism – the idea, popular among many 18th-century Western intellectuals, that God had initially created the universe but then left it to operate and develop through natural law rather than through divine intervention. Thinkers like Erasmus Darwin saw the transmutation of species as part of this development of the world through natural law, which they saw as a challenge to traditional Christianity. They also believed that human history was progressive, another idea becoming increasingly popular in the 18th century, and they saw progress in human history as mirrored by the development of life from the simple to the complex over the history of the Earth. This connection was very clear in the work of Erasmus Darwin and Robert Chambers.

Opposition to transmutation

Ideas about the transmutation of species were strongly associated with the anti-Christian materialism and radical political ideas of the Enlightenment and were greeted with hostility by more conservative thinkers. Cuvier attacked the ideas of Lamarck and Geoffroy Saint-Hilaire, agreeing with Aristotle that species were immutable. Cuvier believed that the individual parts of an animal were too closely correlated with one another to allow for one part of the anatomy to change in isolation from the others, and argued that the fossil record showed patterns of catastrophic extinctions followed by re-population, rather than gradual change over time. He also noted that drawings of animals and animal mummies from Egypt, which were thousands of years old, showed no signs of change when compared with modern animals. The strength of Cuvier's arguments and his reputation as a leading scientist helped keep transmutational ideas out of the scientific mainstream for decades.

In Britain, where the philosophy of natural theology remained influential, William Paley wrote the book Natural Theology with its famous watchmaker analogy, at least in part as a response to the transmutational ideas of Erasmus Darwin. Geologists influenced by natural theology, such as Buckland and Sedgwick, made a regular practice of attacking the evolutionary ideas of Lamarck and Grant, and Sedgwick wrote a famously harsh review of The Vestiges of the Natural History of Creation. Although the geologist Charles Lyell opposed scriptural geology, he also believed in the immutability of species, and in his Principles of Geology (1830–1833) he criticized and dismissed Lamarck's theories of development. Instead, he advocated a form of progressive creation, in which each species had its "centre of creation" and was designed for this particular habitat, but would go extinct when this habitat changed.

This 1847 diagram by Richard Owen shows his conceptual archetype for all vertebrates.

Another source of opposition to transmutation was a school of naturalists who were influenced by the German philosophers and naturalists associated with idealism, such as Goethe, Hegel and Lorenz Oken. Idealists such as Louis Agassiz and Richard Owen believed that each species was fixed and unchangeable because it represented an idea in the mind of the creator. They believed that relationships between species could be discerned from developmental patterns in embryology, as well as in the fossil record, but that these relationships represented an underlying pattern of divine thought, with progressive creation leading to increasing complexity and culminating in humanity. Owen developed the idea of "archetypes" in the divine mind that would produce a sequence of species related by anatomical homologies, such as vertebrate limbs. Owen was concerned by the political implications of the ideas of transmutationists like Robert Grant, and he led a public campaign by conservatives that successfully marginalized Grant in the scientific community. In his famous 1841 paper, which coined the term dinosaur for the giant reptiles discovered by Buckland and Gideon Mantell, Owen argued that these reptiles contradicted the transmutational ideas of Lamarck because they were more sophisticated than the reptiles of the modern world. Darwin would make good use of the homologies analyzed by Owen in his own theory, but the harsh treatment of Grant, along with the controversy surrounding Vestiges, would be factors in his decision to ensure that his theory was fully supported by facts and arguments before publishing his ideas.

Evolution of biological complexity

The evolution of biological complexity is one important outcome of the process of evolution. Evolution has produced some remarkably complex organisms – although the actual level of complexity is very hard to define or measure accurately in biology, with properties such as gene content, the number of cell types or morphology all proposed as possible metrics.

Many biologists used to believe that evolution was progressive (orthogenesis) and had a direction that led towards so-called "higher organisms", despite a lack of evidence for this viewpoint. This idea of "progression" introduced the terms "higher animals" and "lower animals" into evolution. Many now regard this as misleading, with natural selection having no intrinsic direction and organisms being selected for either increased or decreased complexity in response to local environmental conditions. Although there has been an increase in the maximum level of complexity over the history of life, there has always been a large majority of small and simple organisms, and the most common level of complexity appears to have remained relatively constant.

Selection for simplicity and complexity

Usually organisms that have a higher rate of reproduction than their competitors have an evolutionary advantage. Consequently, organisms can evolve to become simpler and thus multiply faster and produce more offspring, as they require fewer resources to reproduce. Good examples are parasites such as Plasmodium – the parasite responsible for malaria – and Mycoplasma; these organisms often dispense with traits that are made unnecessary through parasitism on a host.

A lineage can also dispense with complexity when a particular complex trait merely provides no selective advantage in a particular environment. Loss of this trait need not necessarily confer a selective advantage, but may be lost due to the accumulation of mutations if its loss does not confer an immediate selective disadvantage. For example, a parasitic organism may dispense with the synthetic pathway of a metabolite where it can readily scavenge that metabolite from its host. Discarding this synthesis may not necessarily allow the parasite to conserve significant energy or resources and grow faster, but the loss may be fixed in the population through mutation accumulation if no disadvantage is incurred by loss of that pathway. Mutations causing loss of a complex trait occur more often than mutations causing gain of a complex trait.

With selection, evolution can also produce more complex organisms. Complexity often arises in the co-evolution of hosts and pathogens, with each side developing ever more sophisticated adaptations, such as the immune system and the many techniques pathogens have developed to evade it. For example, the parasite Trypanosoma brucei, which causes sleeping sickness, has evolved so many copies of its major surface antigen that about 10% of its genome is devoted to different versions of this one gene. This tremendous complexity allows the parasite to constantly change its surface and thus evade the immune system through antigenic variation.

More generally, the growth of complexity may be driven by the co-evolution between an organism and the ecosystem of predators, prey and parasites to which it tries to stay adapted: as any of these become more complex in order to cope better with the diversity of threats offered by the ecosystem formed by the others, the others too will have to adapt by becoming more complex, thus triggering an ongoing evolutionary arms race towards more complexity. This trend may be reinforced by the fact that ecosystems themselves tend to become more complex over time, as species diversity increases, together with the linkages or dependencies between species.

Types of trends in complexity

Passive versus active trends in complexity. Organisms at the beginning of the process are shown in red; the number of organisms at each level of complexity is indicated by height, with time moving up through the series.

If evolution possessed an active trend toward complexity (orthogenesis), as was widely believed in the 19th century, then we would expect to see an active trend of increase over time in the most common value (the mode) of complexity among organisms.

However, an increase in complexity can also be explained through a passive process. Assuming unbiased random changes of complexity and the existence of a minimum complexity leads to an increase over time of the average complexity of the biosphere. This involves an increase in variance, but the mode does not change. The trend towards the creation of some organisms with higher complexity over time exists, but it involves increasingly small percentages of living things.

In this hypothesis, any appearance of evolution acting with an intrinsic direction towards increasingly complex organisms is a result of people concentrating on the small number of large, complex organisms that inhabit the right-hand tail of the complexity distribution and ignoring simpler and much more common organisms. This passive model predicts that the majority of species are microscopic prokaryotes, which is supported by estimates of 10^6 to 10^9 extant prokaryote species compared to diversity estimates of 10^6 to 3×10^6 for eukaryotes. Consequently, in this view, microscopic life dominates Earth, and large organisms only appear more diverse due to sampling bias.
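To make the passive model concrete, the following minimal Python sketch (written for this post, not taken from the source) lets a large number of lineages take unbiased random steps in an abstract "complexity" score that cannot fall below a floor; the lineage count, step rule and floor value are arbitrary illustrative choices.

import random

def simulate_passive_trend(n_lineages=5000, n_steps=200, min_complexity=1, seed=0):
    """Unbiased random walk in an abstract 'complexity' score with a hard lower bound.

    Illustrates the passive model: the mean rises because lineages cannot drop
    below the minimum, while the mode stays pinned near that minimum."""
    rng = random.Random(seed)
    complexity = [min_complexity] * n_lineages
    for _ in range(n_steps):
        for i in range(n_lineages):
            step = rng.choice([-1, 0, 1])               # unbiased change, no built-in direction
            complexity[i] = max(min_complexity, complexity[i] + step)
    mean = sum(complexity) / n_lineages
    mode = max(set(complexity), key=complexity.count)   # most common complexity level
    return mean, mode

mean, mode = simulate_passive_trend()
print(f"mean complexity: {mean:.2f}, modal complexity: {mode}")

Even though no individual step is biased towards higher complexity, the average drifts upward simply because there is nowhere to go below the floor, while the most common value stays at the minimum.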

Genome complexity has generally increased since the beginning of life on Earth. Some computer models have suggested that the generation of complex organisms is an inescapable feature of evolution. Proteins tend to become more hydrophobic over time, and to have their hydrophobic amino acids more interspersed along the primary sequence. Increases in body size over time are sometimes seen in what is known as Cope's rule.

Constructive neutral evolution

Recent work in evolutionary theory has proposed that relaxing the selection pressure that typically acts to streamline genomes can allow the complexity of an organism to increase by a process called constructive neutral evolution. Since the effective population size in eukaryotes (especially multicellular organisms) is much smaller than in prokaryotes, they experience weaker selection constraints.

According to this model, new genes are created by non-adaptive processes, such as by random gene duplication. These novel entities, although not required for viability, do give the organism excess capacity that can facilitate the mutational decay of functional subunits. If this decay results in a situation where all of the genes are now required, the organism has been trapped in a new state where the number of genes has increased. This process has been sometimes described as a complexifying ratchet. These supplemental genes can then be co-opted by natural selection by a process called neofunctionalization. In other instances constructive neutral evolution does not promote the creation of new parts, but rather promotes novel interactions between existing players, which then take on new moonlighting roles.

Constructive neutral evolution has also been used to explain how ancient complexes, such as the spliceosome and the ribosome, have gained new subunits over time, how new alternatively spliced isoforms of genes arise, how gene scrambling in ciliates evolved, how pervasive pan-RNA editing may have arisen in Trypanosoma brucei, how functional lncRNAs have likely arisen from transcriptional noise, and how even useless protein complexes can persist for millions of years.

Mutational hazard hypothesis

The mutational hazard hypothesis is a non-adaptive theory for increased complexity in genomes. The basis of the hypothesis is that each mutation in non-coding DNA imposes a fitness cost. Variation in complexity can be described by the quantity 2Neu, where Ne is the effective population size and u is the mutation rate.

In this hypothesis, selection against non-coding DNA can be reduced in three ways: random genetic drift, recombination rate, and mutation rate. As complexity increases from prokaryotes to multicellular eukaryotes, effective population size decreases, subsequently increasing the strength of random genetic drift. This, along with low recombination rate and high mutation rate, allows non-coding DNA to proliferate without being removed by purifying selection.
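As a rough, hypothetical illustration of the quantity 2Neu described above, the short sketch below contrasts a large-Ne prokaryote with a small-Ne multicellular eukaryote. The parameter values are invented for illustration only, and the reading in the comments (selection "sees" the mutational burden of extra DNA only when 2Neu is well above 1) is one common interpretation of the hypothesis, not a statement from the source.

def mutational_hazard_index(effective_population_size, mutation_rate):
    """Return 2*Ne*u, the quantity used above to describe variation in complexity."""
    return 2 * effective_population_size * mutation_rate

# Invented illustrative values.  When 2*Ne*u is large, selection can 'see' the
# mutational burden of extra non-coding DNA; when it is small, drift dominates
# and non-coding DNA can accumulate.
print(f"prokaryote-like 2Neu: {mutational_hazard_index(1e9, 1e-9):.3g}")
print(f"eukaryote-like 2Neu:  {mutational_hazard_index(1e5, 1e-8):.3g}")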

Accumulation of non-coding DNA in larger genomes can be seen when comparing genome size and genome content across eukaryotic taxa. There is a positive correlation between genome size and noncoding DNA genome content with each group staying within some variation. When comparing variation in complexity in organelles, effective population size is replaced with genetic effective population size (Ng). If looking at silent-site nucleotide diversity, then larger genomes are expected to have less diversity than more compact ones. In plant and animal mitochondria, differences in mutation rate account for the opposite directions in complexity, with plant mitochondria being more complex and animal mitochondria more streamlined.

The mutational hazard hypothesis has been used to at least partially explain expanded genomes in some species. For example, when comparing Volvox carteri to a close relative with a compact genome, Chlamydomonas reinhardtii, the former had less silent-site diversity than the latter in the nuclear, mitochondrial, and plastid genomes. However, when comparing the plastid genome of Volvox carteri to that of Volvox africanus, a species in the same genus but with half the plastid genome size, there were high mutation rates in intergenic regions. In Arabidopsis thaliana, the hypothesis was used as a possible explanation for intron loss and compact genome size. When compared to Arabidopsis lyrata, researchers found a higher mutation rate overall and in lost introns (introns that are no longer transcribed or spliced) compared to conserved introns.

There are expanded genomes in other species that could not be explained by the mutational hazard hypothesis. For example, the expanded mitochondrial genomes of Silene noctiflora and Silene conica have high mutation rates, shorter introns, and more non-coding DNA elements compared to others in the same genus, but there was no evidence for long-term low effective population size. The mitochondrial genomes of Citrullus lanatus and Cucurbita pepo differ in several ways: Citrullus lanatus is smaller and has more introns and duplications, while Cucurbita pepo is larger, with more chloroplast-derived and short repeated sequences. If RNA editing sites and mutation rate lined up, then Cucurbita pepo would be expected to have a lower mutation rate and more RNA editing sites. However, its mutation rate is four times higher than that of Citrullus lanatus, and the two species have a similar number of RNA editing sites. There was also an attempt to use the hypothesis to explain the large nuclear genomes of salamanders, but researchers found results opposite to those expected, including a lower long-term strength of genetic drift.

History

In the 19th century, some scientists such as Jean-Baptiste Lamarck (1744–1829) and Ray Lankester (1847–1929) believed that nature had an innate striving to become more complex with evolution. This belief may reflect then-current ideas of Hegel (1770–1831) and of Herbert Spencer (1820–1903) which envisaged the universe gradually evolving to a higher, more perfect state.

This view regarded the evolution of parasites from independent organisms to a parasitic species as "devolution" or "degeneration", and contrary to nature. Social theorists have sometimes interpreted this approach metaphorically to decry certain categories of people as "degenerate parasites". Later scientists regarded biological devolution as nonsense; rather, lineages become simpler or more complicated according to whatever forms had a selective advantage.

In a 1964 book, The Emergence of Biological Organization, Quastler pioneered a theory of emergence, developing a model of a series of emergences from protobiological systems to prokaryotes without the need to invoke implausible very low probability events.

The evolution of order, manifested as biological complexity, in living systems, and the generation of order in certain non-living systems, was proposed in 1983 to obey a common fundamental principle called "the Darwinian dynamic". The Darwinian dynamic was formulated by first considering how microscopic order is generated in simple non-biological systems that are far from thermodynamic equilibrium. Consideration was then extended to short, replicating RNA molecules assumed to be similar to the earliest forms of life in the RNA world. It was shown that the underlying order-generating processes in the non-biological systems and in replicating RNA are basically similar. This approach helped clarify the relationship of thermodynamics to evolution as well as the empirical content of Darwin's theory.

In 1985, Morowitz noted that the modern era of irreversible thermodynamics ushered in by Lars Onsager in the 1930s showed that systems invariably become ordered under a flow of energy, thus indicating that the existence of life involves no contradiction to the laws of physics.

Genetic hitchhiking


Genetic hitchhiking, also called genetic draft or the hitchhiking effect, is when an allele changes frequency not because it itself is under natural selection, but because it is near another gene that is undergoing a selective sweep and that is on the same DNA chain. When one gene goes through a selective sweep, any other nearby polymorphisms that are in linkage disequilibrium will tend to change their allele frequencies too. Selective sweeps happen when newly appeared (and hence still rare) mutations are advantageous and increase in frequency. Neutral or even slightly deleterious alleles that happen to be close by on the chromosome 'hitchhike' along with the sweep. In contrast, effects on a neutral locus due to linkage disequilibrium with newly appeared deleterious mutations are called background selection. Both genetic hitchhiking and background selection are stochastic (random) evolutionary forces, like genetic drift.

History

The term hitchhiking was coined in 1974 by John Maynard Smith and John Haigh. Subsequently, the phenomenon was studied by John H. Gillespie and others.

Outcomes

Hitchhiking occurs when a polymorphism is in linkage disequilibrium with a second locus that is undergoing a selective sweep. The allele that is linked to the adaptation will increase in frequency, in some cases until it becomes fixed in the population. The other allele, which is linked to the non-advantageous version, will decrease in frequency, in some cases until extinction. Overall, hitchhiking reduces the amount of genetic variation. A hitchhiker mutation (or passenger mutation in cancer biology) may itself be neutral, advantageous, or deleterious.

Recombination can interrupt the process of genetic hitchhiking, ending it before the hitchhiking neutral or deleterious allele becomes fixed or goes extinct. The closer a hitchhiking polymorphism is to the gene under selection, the less opportunity there is for recombination to occur. This leads to a reduction in genetic variation that is greater the closer a site is to the selected site. This pattern is useful when using population data to detect selective sweeps, and hence to detect which genes have been under very recent selection.
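The effect can be illustrated with a minimal deterministic two-locus sketch (a simplified model written for this post, not taken from the source): a beneficial allele A sweeps towards fixation and drags along a neutral allele B that happened to sit on the chromosome where A first appeared, while recombination at rate r erodes the association. The selection coefficient, starting frequencies and r values below are arbitrary.

def hitchhike(s=0.05, r=0.0, p_A0=0.001, p_B0=0.5, generations=2000):
    """Deterministic two-locus model of a selective sweep.

    A is the new beneficial allele (selection coefficient s); B is a neutral
    allele on the chromosome where A arose; r is the per-generation
    recombination rate between the two loci.  Returns the frequency of the
    neutral allele B after the sweep."""
    # Haplotype frequencies: the single new A copy arises on a B chromosome.
    x_AB, x_Ab = p_A0, 0.0
    x_aB, x_ab = p_B0 - p_A0, 1.0 - p_B0
    for _ in range(generations):
        # Selection: haplotypes carrying A have relative fitness 1 + s.
        w_AB, w_Ab, w_aB, w_ab = 1 + s, 1 + s, 1.0, 1.0
        mean_w = x_AB * w_AB + x_Ab * w_Ab + x_aB * w_aB + x_ab * w_ab
        x_AB, x_Ab = x_AB * w_AB / mean_w, x_Ab * w_Ab / mean_w
        x_aB, x_ab = x_aB * w_aB / mean_w, x_ab * w_ab / mean_w
        # Recombination: linkage disequilibrium D decays at rate r.
        D = x_AB * x_ab - x_Ab * x_aB
        x_AB, x_ab = x_AB - r * D, x_ab - r * D
        x_Ab, x_aB = x_Ab + r * D, x_aB + r * D
    return x_AB + x_aB   # final frequency of the neutral allele B

for r in (0.0, 0.001, 0.01, 0.1):
    print(f"recombination rate {r}: neutral allele B ends at {hitchhike(r=r):.3f}")

With r = 0 the neutral allele is carried all the way to fixation; with more recombination it ends closer to its starting frequency of 0.5, which is why the loss of variation is deepest right next to the selected site.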

Draft versus drift

Both genetic drift and genetic draft are random evolutionary processes, i.e. they act stochastically and in a way that is not correlated with selection at the gene in question. Drift is the change in the frequency of an allele in a population due to random sampling in each generation. Draft is the change in the frequency of an allele due to the randomness of what other non-neutral alleles it happens to be found in association with.

Assuming genetic drift is the only evolutionary force acting on an allele, after one generation in many replicated idealised populations each of size N, each starting with allele frequencies of p and q, the newly added variance in allele frequency across those populations (i.e. the degree of randomness of the outcome) is pq/(2N). This equation shows that the effect of genetic drift is heavily dependent on population size, defined as the actual number of individuals in an idealised population. Genetic draft results in behavior similar to the equation above, but with an effective population size that may have no relationship to the actual number of individuals in the population. Instead, the effective population size may depend on factors such as the recombination rate and the frequency and strength of beneficial mutations. The increase in variance between replicate populations due to drift is independent from one generation to the next, whereas with draft it is autocorrelated: if an allele frequency goes up because of genetic drift, that contains no information about the next generation, whereas if it goes up because of genetic draft, it is more likely to go up than down in the next generation. Genetic draft generates a different allele frequency spectrum to genetic drift.
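A quick simulation (an illustrative sketch written for this post) can check the pq/(2N) result by binomially sampling 2N gametes in many replicate idealised populations; the population size, allele frequency and replicate count below are arbitrary.

import random

def drift_variance(N=200, p=0.3, replicates=5000, seed=1):
    """Empirically check the one-generation drift variance pq/(2N).

    Each replicate idealised diploid population of size N draws 2N gametes
    from allele frequency p; we compare the variance of the resulting allele
    frequencies with the theoretical value p*(1-p)/(2*N)."""
    rng = random.Random(seed)
    freqs = []
    for _ in range(replicates):
        copies = sum(1 for _ in range(2 * N) if rng.random() < p)
        freqs.append(copies / (2 * N))
    mean = sum(freqs) / replicates
    observed = sum((f - mean) ** 2 for f in freqs) / (replicates - 1)
    return observed, p * (1 - p) / (2 * N)

observed, expected = drift_variance()
print(f"observed variance: {observed:.2e}, expected pq/(2N): {expected:.2e}")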

Applications

Sex chromosomes

The Y chromosome does not undergo recombination, making it particularly prone to the fixation of deleterious mutations via hitchhiking. This has been proposed as an explanation as to why there are so few functional genes on the Y chromosome.

Mutator evolution

Hitchhiking is necessary for the evolution of higher mutation rates to be favored by natural selection on evolvability. A hypothetical mutator M increases the general mutation rate in the area around it. Due to the increased mutation rate, the nearby A allele may be mutated into a new, advantageous allele, A*:

--M------A-- -> --M------A*--

The individual in which this chromosome lies will now have a selective advantage over other individuals of this species, so the allele A* will spread through the population by the normal processes of natural selection. M, due to its proximity to A*, will be dragged through into the general population. This process only works when M is very close to the allele it has mutated. A greater distance would increase the chance of recombination separating M from A*, leaving M alone with any deleterious mutations it may have caused. For this reason, evolution of mutators is generally expected to happen largely in asexual species where recombination cannot disrupt linkage disequilibrium.
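A back-of-the-envelope sketch of that last point (with hypothetical numbers, assuming the sweep lasts about T generations and each generation gives an independent chance r of a crossover between M and A*):

def probability_still_linked(r, sweep_generations):
    """Chance that no crossover separates the mutator M from the beneficial
    allele A* during the sweep, given a per-generation recombination
    probability r between the two loci."""
    return (1 - r) ** sweep_generations

T = 300  # hypothetical sweep duration in generations
for r in (0.0, 0.0001, 0.001, 0.01):
    print(f"r = {r}: P(M still linked to A* after {T} generations) = "
          f"{probability_still_linked(r, T):.3f}")

Even a 1% per-generation recombination rate makes it unlikely that M is still attached to A* by the end of the sweep, which is why mutator hitchhiking is mostly expected in asexual populations.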

Neutral theory of molecular evolution

The neutral theory of molecular evolution assumes that most new mutations are either deleterious (and quickly purged by selection) or else neutral, with very few being adaptive. It also assumes that the behavior of neutral allele frequencies can be described by the mathematics of genetic drift. Genetic hitchhiking has therefore been viewed as a major challenge to neutral theory, and an explanation for why genome-wide versions of the McDonald–Kreitman test appear to indicate a high proportion of mutations becoming fixed for reasons connected to selection.

Genetic linkage

https://en.wikipedia.org/wiki/Genetic_linkage

Genetic linkage is the tendency of DNA sequences that are close together on a chromosome to be inherited together during the meiosis phase of sexual reproduction. Two genetic markers that are physically near to each other are unlikely to be separated onto different chromatids during chromosomal crossover, and are therefore said to be more linked than markers that are far apart. In other words, the nearer two genes are on a chromosome, the lower the chance of recombination between them, and the more likely they are to be inherited together. Markers on different chromosomes are perfectly unlinked, although the penetrance of a potentially deleterious allele may be influenced by other alleles, which can be located on different chromosomes from the allele in question.

Genetic linkage is the most prominent exception to Gregor Mendel's Law of Independent Assortment. The first experiment to demonstrate linkage was carried out in 1905. At the time, the reason why certain traits tend to be inherited together was unknown. Later work revealed that genes are physical structures related by physical distance.

The typical unit of genetic linkage is the centimorgan (cM). A distance of 1 cM between two markers means that the markers are separated by recombination on average once per 100 meiotic products, i.e. once per 50 meioses.

Discovery

Gregor Mendel's Law of Independent Assortment states that every trait is inherited independently of every other trait. But shortly after Mendel's work was rediscovered, exceptions to this rule were found. In 1905, the British geneticists William Bateson, Edith Rebecca Saunders and Reginald Punnett cross-bred pea plants in experiments similar to Mendel's. They were interested in trait inheritance in the sweet pea and were studying two genes—the gene for flower colour (P, purple, and p, red) and the gene affecting the shape of pollen grains (L, long, and l, round). They crossed the pure lines PPLL and ppll and then self-crossed the resulting PpLl lines.

According to Mendelian genetics, the expected phenotypes would occur in a 9:3:3:1 ratio of PL:Pl:pL:pl. To their surprise, they observed an increased frequency of PL and pl and a decreased frequency of Pl and pL:

Bateson, Saunders, and Punnett experiment
Phenotype and genotype Observed Expected from 9:3:3:1 ratio
Purple, long (P_L_) 284 216
Purple, round (P_ll) 21 72
Red, long (ppL_) 21 72
Red, round (ppll) 55 24

Their experiment revealed linkage between the P and L alleles and between the p and l alleles. The frequency of P occurring together with L, and of p occurring together with l, is greater than that of the recombinant Pl and pL. The recombination frequency is more difficult to compute in an F2 cross than in a backcross, but the lack of fit between observed and expected numbers of progeny in the above table indicates that it is less than 50%. This indicated that two factors interacted in some way to create this difference by masking the appearance of the other two phenotypes. This led to the conclusion that some traits are related to each other because of their close proximity to each other on a chromosome.

The understanding of linkage was expanded by the work of Thomas Hunt Morgan. Morgan's observation that the amount of crossing over between linked genes differs led to the idea that crossover frequency might indicate the distance separating genes on the chromosome. The centimorgan, which expresses the frequency of crossing over, is named in his honour.

Linkage map

Thomas Hunt Morgan's Drosophila melanogaster genetic linkage map. This was the first successful gene mapping work and provides important evidence for the chromosome theory of inheritance. The map shows the relative positions of alleles on the second Drosophila chromosome. The distances between the genes (centimorgans) are equal to the percentages of chromosomal crossover events that occur between different alleles.

A linkage map (also known as a genetic map) is a table for a species or experimental population that shows the position of its known genes or genetic markers relative to each other in terms of recombination frequency, rather than a specific physical distance along each chromosome. Linkage maps were first developed by Alfred Sturtevant, a student of Thomas Hunt Morgan.

A linkage map is a map based on the frequencies of recombination between markers during crossover of homologous chromosomes. The greater the frequency of recombination (segregation) between two genetic markers, the further apart they are assumed to be. Conversely, the lower the frequency of recombination between the markers, the smaller the physical distance between them. Historically, the markers originally used were detectable phenotypes (enzyme production, eye colour) derived from coding DNA sequences; eventually, confirmed or assumed noncoding DNA sequences such as microsatellites or those generating restriction fragment length polymorphisms (RFLPs) have been used.
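The logic of ordering markers on such a map can be sketched in a few lines of Python (an illustration written for this post, with invented marker names and recombination frequencies): given pairwise recombination frequencies, the correct order is the one in which the two adjacent intervals roughly add up to the end-to-end frequency.

from itertools import permutations

# Hypothetical pairwise recombination frequencies between three markers.
rf = {
    frozenset(("m1", "m2")): 0.08,
    frozenset(("m2", "m3")): 0.05,
    frozenset(("m1", "m3")): 0.12,
}

def best_order(markers):
    """Choose the marker order whose end-to-end recombination frequency is best
    explained by the sum of the two adjacent intervals."""
    def leftover(order):
        a, b, c = order
        return abs(rf[frozenset((a, c))] - (rf[frozenset((a, b))] + rf[frozenset((b, c))]))
    return min(permutations(markers), key=leftover)

print(best_order(["m1", "m2", "m3"]))   # ('m1', 'm2', 'm3'); the reverse order is equivalent

Note that in this made-up example the end-to-end value (0.12) is slightly smaller than the sum of the two intervals (0.13); that shortfall is the double-crossover effect discussed under "Recombination frequency" below.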

Linkage maps help researchers to locate other markers, such as other genes, by testing for genetic linkage with already-known markers. In the early stages of developing a linkage map, the data are used to assemble linkage groups, sets of genes which are known to be linked. As knowledge advances, more markers can be added to a group, until the group covers an entire chromosome. For well-studied organisms the linkage groups correspond one-to-one with the chromosomes.

A linkage map is not a physical map (such as a radiation reduced hybrid map) or gene map.

Linkage analysis

Linkage analysis is a genetic method that searches for chromosomal segments that cosegregate with the disease phenotype across families. It can be used to map genes for both binary and quantitative traits. Linkage analysis may be either parametric (if we know the relationship between phenotypic and genetic similarity) or non-parametric. Parametric linkage analysis is the traditional approach, whereby the probability that a gene important for a disease is linked to a genetic marker is studied through the LOD score, which assesses the probability that a given pedigree, in which the disease and the marker are cosegregating, is due to the existence of linkage (with a given linkage value) or to chance. Non-parametric linkage analysis, in turn, studies the probability of an allele being identical by descent with itself.

Pedigree illustrating Parametric Linkage Analysis

Parametric linkage analysis

The LOD score (logarithm (base 10) of odds), developed by Newton Morton, is a statistical test often used for linkage analysis in human, animal, and plant populations. The LOD score compares the likelihood of obtaining the test data if the two loci are indeed linked, to the likelihood of observing the same data purely by chance. Positive LOD scores favour the presence of linkage, whereas negative LOD scores indicate that linkage is less likely. Computerised LOD score analysis is a simple way to analyse complex family pedigrees in order to determine the linkage between Mendelian traits (or between a trait and a marker, or two markers).

The method is described in greater detail by Strachan and Read. Briefly, it works as follows:

  1. Establish a pedigree
  2. Make a number of estimates of recombination frequency
  3. Calculate a LOD score for each estimate
  4. The estimate with the highest LOD score will be considered the best estimate

The LOD score is calculated as follows:

LOD = log10 [ probability of the observed sequence of births assuming linkage with recombinant fraction θ / probability of the observed sequence of births assuming no linkage (θ = 0.5) ] = log10 [ ((1 − θ)^NR × θ^R) / 0.5^(NR + R) ]

NR denotes the number of non-recombinant offspring, and R denotes the number of recombinant offspring. The reason 0.5 is used in the denominator is that any alleles that are completely unlinked (e.g. alleles on separate chromosomes) have a 50% chance of recombination, due to independent assortment. θ is the recombinant fraction, i.e. the fraction of births in which recombination has happened between the studied genetic marker and the putative gene associated with the disease. Thus, it is equal to R / (NR + R).

By convention, a LOD score greater than 3.0 is considered evidence for linkage, as it indicates 1000 to 1 odds that the linkage being observed did not occur by chance. On the other hand, a LOD score less than −2.0 is considered evidence to exclude linkage. Although it is very unlikely that a LOD score of 3 would be obtained from a single pedigree, the mathematical properties of the test allow data from a number of pedigrees to be combined by summing their LOD scores. A LOD score of 3 translates to a p-value of approximately 0.05, and no multiple testing correction (e.g. Bonferroni correction) is required.
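For concreteness, here is a small Python sketch of the calculation above, using hypothetical offspring counts:

import math

def lod_score(nr, r, theta):
    """LOD = log10 of the likelihood of the observed offspring under a
    recombinant fraction theta, relative to free recombination (theta = 0.5).

    nr: number of non-recombinant offspring, r: number of recombinant offspring."""
    likelihood_linked = (1 - theta) ** nr * theta ** r
    likelihood_unlinked = 0.5 ** (nr + r)
    return math.log10(likelihood_linked / likelihood_unlinked)

# Hypothetical pedigree data: 18 non-recombinant and 2 recombinant offspring.
nr, r = 18, 2
theta_hat = r / (nr + r)                       # best estimate of theta
print(f"theta_hat = {theta_hat:.2f}, LOD = {lod_score(nr, r, theta_hat):.2f}")
# theta_hat = 0.10, LOD = 3.20  -> just over the conventional threshold of 3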

Limitations

Linkage analysis has a number of methodological and theoretical limitations that can significantly increase the type-1 error rate and reduce the power to map human quantitative trait loci (QTL). While linkage analysis was successfully used to identify genetic variants that contribute to rare disorders such as Huntington disease, it did not perform that well when applied to more common disorders such as heart disease or different forms of cancer. An explanation for this is that the genetic mechanisms affecting common disorders are different from those causing some rare disorders.

Recombination frequency

Recombination frequency is a measure of genetic linkage and is used in the creation of a genetic linkage map. Recombination frequency (θ) is the frequency with which a single chromosomal crossover will take place between two genes during meiosis. A centimorgan (cM) is a unit that describes a recombination frequency of 1%. In this way we can measure the genetic distance between two loci, based upon their recombination frequency. This is a good estimate of the real distance. Double crossovers, however, produce no observable recombination, and in such cases we cannot tell that crossovers took place. If the loci being analysed are very close (less than 7 cM), a double crossover is very unlikely. When distances become larger, the likelihood of a double crossover increases, and one can systematically underestimate the genetic distance between two loci unless an appropriate mathematical model is used.

During meiosis, chromosomes assort randomly into gametes, such that the segregation of alleles of one gene is independent of alleles of another gene. This is stated in Mendel's Second Law and is known as the law of independent assortment. The law of independent assortment always holds true for genes that are located on different chromosomes, but for genes that are on the same chromosome, it does not always hold true.

As an example of independent assortment, consider the crossing of the pure-bred homozygote parental strain with genotype AABB with a different pure-bred strain with genotype aabb. A and a and B and b represent the alleles of genes A and B. Crossing these homozygous parental strains will result in F1 generation offspring that are double heterozygotes with genotype AaBb. The F1 offspring AaBb produces gametes that are AB, Ab, aB, and ab with equal frequencies (25%) because the alleles of gene A assort independently of the alleles for gene B during meiosis. Note that 2 of the 4 gametes (50%)—Ab and aB—were not present in the parental generation. These gametes represent recombinant gametes. Recombinant gametes are those gametes that differ from both of the haploid gametes that made up the original diploid cell. In this example, the recombination frequency is 50% since 2 of the 4 gametes were recombinant gametes.

The recombination frequency will be 50% when two genes are located on different chromosomes or when they are widely separated on the same chromosome. This is a consequence of independent assortment.

When two genes are close together on the same chromosome, they do not assort independently and are said to be linked. Whereas genes located on different chromosomes assort independently and have a recombination frequency of 50%, linked genes have a recombination frequency that is less than 50%.
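Before turning to the classic sweet-pea example below, here is a minimal sketch of the calculation in the simplest setting, a testcross, using made-up progeny counts:

def recombination_frequency(parental_counts, recombinant_counts):
    """RF = recombinant progeny / total progeny.  For short distances,
    100 * RF approximates the map distance in centimorgans."""
    total = sum(parental_counts) + sum(recombinant_counts)
    return sum(recombinant_counts) / total

# Hypothetical testcross (AaBb x aabb): parental classes AB and ab,
# recombinant classes Ab and aB.
rf_value = recombination_frequency(parental_counts=[450, 440], recombinant_counts=[55, 55])
print(f"recombination frequency = {rf_value:.3f} (about {100 * rf_value:.0f} cM)")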

As an example of linkage, consider the classic experiment by William Bateson and Reginald Punnett. They were interested in trait inheritance in the sweet pea and were studying two genes—the gene for flower colour (P, purple, and p, red) and the gene affecting the shape of pollen grains (L, long, and l, round). They crossed the pure lines PPLL and ppll and then self-crossed the resulting PpLl lines. According to Mendelian genetics, the expected phenotypes would occur in a 9:3:3:1 ratio of PL:Pl:pL:pl. To their surprise, they observed an increased frequency of PL and pl and a decreased frequency of Pl and pL (see table below).

Bateson and Punnett experiment
Phenotype and genotype Observed Expected from 9:3:3:1 ratio
Purple, long (P_L_) 284 216
Purple, round (P_ll) 21 72
Red, long (ppL_) 21 72
Red, round (ppll) 55 24
Unlinked Genes vs. Linked Genes

Their experiment revealed linkage between the P and L alleles and between the p and l alleles. The frequency of P occurring together with L, and of p occurring together with l, is greater than that of the recombinant Pl and pL. The recombination frequency is more difficult to compute in an F2 cross than in a backcross, but the lack of fit between observed and expected numbers of progeny in the above table indicates that it is less than 50%.

The progeny in this case received two dominant alleles linked on one chromosome (referred to as coupling or cis arrangement). However, after crossover, some progeny could have received one parental chromosome with a dominant allele for one trait (e.g. Purple) linked to a recessive allele for a second trait (e.g. round), with the opposite being true for the other parental chromosome (e.g. red and Long). This is referred to as repulsion or a trans arrangement. The phenotype here would still be purple and long, but a test cross of this individual with the recessive parent would produce progeny with a much greater proportion of the two crossover phenotypes. While such a problem may not seem likely from this example, unfavourable repulsion linkages do appear when breeding for disease resistance in some crops.

The two possible arrangements, cis and trans, of alleles in a double heterozygote are referred to as gametic phases, and phasing is the process of determining which of the two is present in a given individual.

When two genes are located on the same chromosome, the chance of a crossover producing recombination between the genes is related to the distance between the two genes. Thus, recombination frequencies have been used to develop linkage maps or genetic maps.

However, it is important to note that recombination frequency tends to underestimate the distance between two linked genes. This is because as the two genes are located farther apart, the chance of a double or other even number of crossovers between them also increases. A double or other even number of crossovers between the two genes results in them being cosegregated into the same gamete, yielding parental progeny instead of the expected recombinant progeny. Mapping functions such as the Haldane and Kosambi transformations attempt to correct for multiple crossovers, as sketched below.
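The two mapping functions mentioned above convert an observed recombination fraction r into an additive map distance. A short sketch using the standard formulas (the r values are illustrative):

import math

def haldane_cm(r):
    """Haldane mapping function: map distance in centimorgans, assuming
    crossovers occur independently (no interference)."""
    return -50.0 * math.log(1 - 2 * r)

def kosambi_cm(r):
    """Kosambi mapping function: as Haldane, but allowing for interference
    between nearby crossovers."""
    return 25.0 * math.log((1 + 2 * r) / (1 - 2 * r))

for r in (0.01, 0.10, 0.20, 0.30, 0.40):
    print(f"r = {r:.2f} -> Haldane {haldane_cm(r):6.1f} cM, Kosambi {kosambi_cm(r):6.1f} cM")

For small r both functions return roughly 100·r cM; as r approaches 0.5 the corrected distances grow much faster than the raw recombination fraction, compensating for the undetected double crossovers.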

Linkage of genetic sites within a gene

In the early 1950s the prevailing view was that the genes in a chromosome are discrete entities, indivisible by genetic recombination and arranged like beads on a string. During 1955 to 1959, Benzer performed genetic recombination experiments using rII mutants of bacteriophage T4. He found that, on the basis of recombination tests, the sites of mutation could be mapped in a linear order. This result provided evidence for the key idea that the gene has a linear structure equivalent to a length of DNA with many sites that can independently mutate.

Edgar et al. performed mapping experiments with r mutants of bacteriophage T4 showing that recombination frequencies between rII mutants are not strictly additive. The recombination frequency from a cross of two rII mutants (a x d) is usually less than the sum of recombination frequencies for adjacent internal sub-intervals (a x b) + (b x c) + (c x d). Although not strictly additive, a systematic relationship was observed that likely reflects the underlying molecular mechanism of genetic recombination.

Variation of recombination frequency

While recombination of chromosomes is an essential process during meiosis, there is a large range of crossover frequencies across organisms and within species. Sexually dimorphic rates of recombination are termed heterochiasmy, and are observed more often than a common rate between males and females. In mammals, females often have a higher rate of recombination than males. It is theorised that unique selection pressures or meiotic drivers influence the difference in rates. The difference in rates may also reflect the vastly different environments and conditions of meiosis in oogenesis and spermatogenesis.

Genes affecting recombination frequency

Mutations in genes that encode proteins involved in the processing of DNA often affect recombination frequency. In bacteriophage T4, mutations that reduce expression of the replicative DNA polymerase [gene product 43 (gp43)] increase recombination (decrease linkage) several fold. The increase in recombination may be due to replication errors by the defective DNA polymerase that are themselves recombination events such as template switches, i.e. copy choice recombination events. Recombination is also increased by mutations that reduce the expression of DNA ligase (gp30) and dCMP hydroxymethylase (gp42), two enzymes employed in DNA synthesis.

Recombination is reduced (linkage increased) by mutations in genes that encode proteins with nuclease functions (gp46 and gp47) and a DNA-binding protein (gp32). Mutation in the bacteriophage uvsX gene also substantially reduces recombination. The uvsX gene is analogous to the well-studied recA gene of Escherichia coli, which plays a central role in recombination.

Meiosis indicators

With very large pedigrees or with very dense genetic marker data, such as from whole-genome sequencing, it is possible to precisely locate recombinations. With this type of genetic analysis, a meiosis indicator is assigned to each position of the genome for each meiosis in a pedigree. The indicator records which copy of the parental chromosome contributes to the transmitted gamete at that position. For example, if the allele from the 'first' copy of the parental chromosome is transmitted, a '0' might be assigned to that meiosis. If the allele from the 'second' copy of the parental chromosome is transmitted, a '1' would be assigned to that meiosis. The two alleles in the parent came, one each, from two grandparents. These indicators are then used to determine identical-by-descent (IBD) states or inheritance states, which are in turn used to identify genes responsible for diseases.
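A minimal sketch of how such indicators can be used (the marker positions and indicator values below are invented): wherever the indicator switches between consecutive markers, a recombination must have occurred somewhere in that interval.

def find_recombination_intervals(positions, indicators):
    """Given meiosis indicators at ordered marker positions (0 = allele came
    from the parent's 'first' chromosome copy, 1 = from the 'second'),
    return the intervals in which the indicator switches, i.e. where a
    recombination occurred in that meiosis."""
    crossovers = []
    for i in range(1, len(indicators)):
        if indicators[i] != indicators[i - 1]:
            crossovers.append((positions[i - 1], positions[i]))
    return crossovers

# Hypothetical data for one meiosis across six markers (positions in megabases).
positions = [2.1, 10.5, 24.0, 31.7, 48.3, 60.2]
indicators = [0, 0, 0, 1, 1, 0]
print(find_recombination_intervals(positions, indicators))   # [(24.0, 31.7), (48.3, 60.2)]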

Synthetic lethality

Synthetic lethality is a type of genetic interaction in which the combination of two genetic events results in cell death or death of an organism. Although the definition is broader than this, the term most commonly refers to the situation in which deficiencies in two or more genes together lead to cell death (whether by apoptosis or otherwise), whereas a deficiency in only one of those genes does not. A synthetic lethal genetic screen begins with a mutation that does not kill the cell, although it may produce an altered phenotype (slow growth, for example), and then systematically tests mutations at additional loci to determine which of them, in combination with the first mutation, cause cell death through loss or reduction of gene expression.
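
The logic of such a screen can be made concrete with a minimal sketch. The Python example below uses hypothetical gene names and viability calls, and flags a gene pair as synthetic lethal only when each single mutant is viable but the double mutant is not; it is an illustration of the definition, not an actual screening pipeline:

    from itertools import combinations

    # Hypothetical viability calls: True = viable, False = lethal.
    single_mutant_viable = {"geneA": True, "geneB": True, "geneC": True, "geneD": False}
    double_mutant_viable = {
        frozenset({"geneA", "geneB"}): False,  # candidate synthetic lethal pair
        frozenset({"geneA", "geneC"}): True,
        frozenset({"geneB", "geneC"}): True,
    }

    for g1, g2 in combinations(single_mutant_viable, 2):
        pair = frozenset({g1, g2})
        if pair not in double_mutant_viable:
            continue  # this pair was not tested
        # Synthetic lethality requires that each single mutant is viable on its
        # own while the combination is lethal.
        if single_mutant_viable[g1] and single_mutant_viable[g2] and not double_mutant_viable[pair]:
            print(f"{g1} and {g2} are synthetic lethal")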

Synthetic lethality has utility for molecular targeted cancer therapy. The first molecular targeted therapeutic to exploit a synthetic lethal approach was a PARP inhibitor, effective against tumors with an inactivated tumor suppressor gene (BRCA1 or BRCA2); the treatment received FDA approval in 2016. A sub-case of synthetic lethality, in which vulnerabilities are exposed by the deletion of passenger genes rather than tumor suppressor genes, is the so-called "collateral lethality".

Background

Schematic of basic synthetic lethality: simultaneous mutations in a gene pair confer lethality, while any other combination of mutations is viable.

The phenomenon of synthetic lethality was first described by Calvin Bridges in 1922, who noticed that some combinations of mutations in the model organism Drosophila melanogaster (the common fruit fly) confer lethality. Theodosius Dobzhansky coined the term "synthetic lethality" in 1946 to describe the same type of genetic interaction in wildtype populations of Drosophila. If the combination of genetic events results in a non-lethal reduction in fitness, the interaction is called synthetic sickness. Although in classical genetics the term synthetic lethality refers to the interaction between two genetic perturbations, it can also apply to cases in which the combination of a mutation and the action of a chemical compound causes lethality, whereas the mutation or the compound alone is non-lethal.

Synthetic lethality is a consequence of the tendency of organisms to maintain buffering schemes (i.e. backup plans) that preserve phenotypic stability despite underlying genetic variation, environmental change, or random events such as mutations. This genetic robustness is the result of parallel redundant pathways and "capacitor" proteins that mask the effects of mutations so that important cellular processes do not depend on any individual component. Synthetic lethality can help identify these buffering relationships, and the types of disease or malfunction that may arise when they break down, by identifying interactions between genes that function in the same biochemical process or in pathways that appear unrelated.

High-throughput screens

High-throughput synthetic lethal screens may help illuminate questions about how cellular processes work without prior knowledge of gene function or interaction. The screening strategy must take into account the organism used, the mode of genetic perturbation, and whether the screen is forward or reverse. Many of the first synthetic lethal screens were performed in Saccharomyces cerevisiae. Budding yeast has many experimental advantages for screening, including a small genome, a fast doubling time, both haploid and diploid states, and ease of genetic manipulation. Gene ablation can be performed using a PCR-based strategy, and complete knockout libraries for all annotated yeast genes are publicly available. Synthetic genetic array (SGA), synthetic lethality by microarray (SLAM), and genetic interaction mapping (GIM) are three high-throughput methods for analyzing synthetic lethality in yeast. A genome-scale genetic interaction map covering about 75% of all yeast genes was created by SGA analysis in S. cerevisiae.
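
In practice, quantitative screens such as SGA score interactions on a continuous scale rather than as simple alive/dead calls. The sketch below assumes a common multiplicative model, in which the expected fitness of a double mutant is the product of the single-mutant fitnesses, and flags strongly negative deviations as synthetic sick or lethal; the gene names, fitness values and threshold are invented for illustration:

    # Hypothetical single- and double-mutant fitness values (wild type = 1.0).
    single_fitness = {"geneA": 0.90, "geneB": 0.80, "geneC": 0.95}
    double_fitness = {
        ("geneA", "geneB"): 0.05,  # far below expectation
        ("geneA", "geneC"): 0.85,  # close to expectation
    }

    THRESHOLD = -0.3  # arbitrary cutoff for a strong negative interaction

    for (g1, g2), observed in double_fitness.items():
        expected = single_fitness[g1] * single_fitness[g2]  # multiplicative model
        epsilon = observed - expected                       # interaction score
        call = "synthetic sick/lethal" if epsilon < THRESHOLD else "no strong interaction"
        print(f"{g1} x {g2}: expected {expected:.2f}, observed {observed:.2f}, "
              f"epsilon {epsilon:+.2f} -> {call}")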

Collateral lethality

Collateral lethality is a sub-case of synthetic lethality in personalized cancer therapy, where vulnerabilities are exposed by the deletion of passenger genes rather than tumor suppressor genes, which are deleted by virtue of chromosomal proximity to major deleted tumor suppressor loci.

DDR deficiencies

DNA mismatch repair deficiency

Mutations in genes employed in DNA mismatch repair (MMR) cause a high mutation rate. In tumors, such frequent subsequent mutations often generate "non-self" immunogenic antigens. A human Phase II clinical trial, with 41 patients, evaluated one synthetic lethal approach for tumors with or without MMR defects. Most of the sporadic tumors evaluated were deficient in MMR due to epigenetic repression of an MMR gene (see DNA mismatch repair). The product of the PD-1 gene ordinarily represses cytotoxic immune responses, so inhibiting it allows a greater immune response. In this trial, when cancer patients with an MMR defect in their tumors were given a PD-1 inhibitor, 67–78% of patients experienced immune-related progression-free survival. In contrast, for patients without defective MMR, addition of the PD-1 inhibitor produced immune-related progression-free survival in only 11% of patients. Thus, inhibition of PD-1 is primarily synthetically lethal with MMR defects.

Werner syndrome gene deficiency

An analysis of 630 human primary tumors from 11 tissues showed that WRN promoter hypermethylation (with loss of expression of WRN protein) is a common event in tumorigenesis. The WRN gene promoter is hypermethylated in about 38% of colorectal cancers and non-small-cell lung carcinomas, in roughly 20% of stomach cancers, prostate cancers, breast cancers, non-Hodgkin lymphomas and chondrosarcomas, and at significant levels in the other cancers evaluated. The WRN helicase protein is important in homologous recombinational DNA repair and also has roles in non-homologous end joining DNA repair and base excision DNA repair.

Topoisomerase inhibitors are frequently used as chemotherapy for different cancers, though they cause bone marrow suppression, are cardiotoxic, and have variable effectiveness. A 2006 retrospective study, with long clinical follow-up, examined colon cancer patients treated with the topoisomerase inhibitor irinotecan. In this study, 45 patients had hypermethylated WRN gene promoters and 43 patients had unmethylated WRN gene promoters. Irinotecan was more strongly beneficial for patients with hypermethylated WRN promoters (39.4 months survival) than for those with unmethylated WRN promoters (20.7 months survival). Thus, a topoisomerase inhibitor appeared to be synthetically lethal with deficient expression of WRN. Further evaluations have also indicated synthetic lethality between deficient expression of WRN and topoisomerase inhibitors.

Clinical and preclinical PARP1 inhibitor synthetic lethality

As reviewed by Murata et al., five different PARP1 inhibitors are undergoing Phase I, II and III clinical trials to determine whether particular PARP1 inhibitors are synthetically lethal in a large variety of cancers, including prostate cancer, pancreatic cancer, non-small-cell lung cancer, lymphoma, multiple myeloma, and Ewing sarcoma. In addition, in preclinical studies using cells in culture or in mice, PARP1 inhibitors are being tested for synthetic lethality against epigenetic and mutational deficiencies in about 20 DNA repair genes beyond BRCA1/2, including PALB2, FANCD2, RAD51, ATM, MRE11, p53, XRCC1 and LSD1.

Preclinical ARID1A synthetic lethality

ARID1A, a chromatin modifier, is required for non-homologous end joining, a major pathway that repairs double-strand breaks in DNA, and it also has transcription-regulatory roles. ARID1A mutations are among the 12 most common carcinogenic mutations, and mutation or epigenetically decreased expression of ARID1A has been found in 17 types of cancer. Preclinical studies in cells and in mice show that synthetic lethality with deficient ARID1A expression can be achieved by inhibiting the methyltransferase activity of EZH2, by inhibiting the DNA repair kinase ATR, or by exposure to the kinase inhibitor dasatinib.

Preclinical RAD52 synthetic lethality

There are two pathways for homologous recombinational repair of double-strand breaks. The major pathway depends on BRCA1, PALB2 and BRCA2 while an alternative pathway depends on RAD52. Pre-clinical studies, involving epigenetically reduced or mutated BRCA-deficient cells (in culture or injected into mice), show that inhibition of RAD52 is synthetically lethal with BRCA-deficiency.

Side effects

Although treatments based on synthetic lethality can stop or slow the progression of cancers and prolong survival, each synthetic lethal treatment has some adverse side effects. For example, more than 20% of patients treated with a PD-1 inhibitor experience fatigue, rash, pruritus, cough, diarrhea, decreased appetite, constipation or arthralgia. It is therefore important to determine which DDR deficiency is present, so that an effective synthetic lethal treatment is applied and patients are not unnecessarily exposed to adverse side effects without a direct benefit.

Child abandonment

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Child_abandonment ...