Wednesday, November 7, 2018

History of genetics

From Wikipedia, the free encyclopedia

The history of genetics dates from the classical era, with contributions by Hippocrates, Aristotle and Epicurus. Modern genetics began with the work of the Augustinian friar Gregor Johann Mendel. His work on pea plants, published in 1866, established what is now called Mendelian inheritance. Various other theories of heredity circulated in the centuries before and for several decades after Mendel's work.

The year 1900 marked the "rediscovery of Mendel" by Hugo de Vries, Carl Correns and Erich von Tschermak, and by 1915 the basic principles of Mendelian genetics had been applied to a wide variety of organisms—most notably the fruit fly Drosophila melanogaster. Led by Thomas Hunt Morgan and his fellow "drosophilists", geneticists developed the Mendelian model, which was widely accepted by 1925. Alongside experimental work, mathematicians developed the statistical framework of population genetics, bringing genetic explanations into the study of evolution.

With the basic patterns of genetic inheritance established, many biologists turned to investigations of the physical nature of the gene. In the 1940s and early 1950s, experiments pointed to DNA as the portion of chromosomes (and perhaps other nucleoproteins) that held genes. A focus on new model organisms such as viruses and bacteria, along with the discovery of the double helical structure of DNA in 1953, marked the transition to the era of molecular genetics.

In the following years, chemists developed techniques for sequencing both nucleic acids and proteins, while others worked out the relationship between the two forms of biological molecules: the genetic code. The regulation of gene expression became a central issue in the 1960s; by the 1970s gene expression could be controlled and manipulated through genetic engineering. In the last decades of the 20th century, many biologists focused on large-scale genetics projects, sequencing entire genomes.

Pre-Mendelian ideas on heredity

Ancient theories

Aristotle's model of transmission of movements from parents to child, and of form from the father. The model is not fully symmetric.
 
The most influential early theories of heredity were those of Hippocrates and Aristotle. Hippocrates' theory (possibly based on the teachings of Anaxagoras) was similar to Darwin's later ideas on pangenesis, involving hereditary material that collects from throughout the body. Aristotle suggested instead that the (nonphysical) form-giving principle of an organism was transmitted through semen (which he considered to be a purified form of blood) and the mother's menstrual blood, which interacted in the womb to direct an organism's early development. For both Hippocrates and Aristotle—and nearly all Western scholars through to the late 19th century—the inheritance of acquired characters was a supposedly well-established fact that any adequate theory of heredity had to explain. At the same time, individual species were taken to have a fixed essence; such inherited changes were merely superficial. The Athenian philosopher Epicurus observed families and proposed that both males and females contribute hereditary characters ("sperm atoms"); he noticed dominant and recessive types of inheritance and described segregation and independent assortment of "sperm atoms".

In the Charaka Samhita of 300 CE, ancient Indian medical writers saw the characteristics of the child as determined by four factors: (1) those from the mother's reproductive material, (2) those from the father's sperm, (3) those from the diet of the pregnant mother, and (4) those accompanying the soul which enters into the foetus. Each of these four factors had four parts, creating sixteen factors; the karma of the parents and of the soul determined which attributes predominated and thereby gave the child its characteristics.

In the 9th century CE, the Afro-Arab writer Al-Jahiz considered the effects of the environment on the likelihood of an animal to survive. In 1000 CE, the Arab physician, Abu al-Qasim al-Zahrawi (known as Albucasis in the West) was the first physician to describe clearly the hereditary nature of haemophilia in his Al-Tasrif. In 1140 CE, Judah HaLevi described dominant and recessive genetic traits in The Kuzari.

Plant systematics and hybridization

In the 18th century, with increased knowledge of plant and animal diversity and the accompanying increased focus on taxonomy, new ideas about heredity began to appear. Linnaeus and others (among them Joseph Gottlieb Kölreuter, Carl Friedrich von Gärtner, and Charles Naudin) conducted extensive experiments with hybridization, especially species hybrids. Species hybridizers described a wide variety of inheritance phenomena, including hybrid sterility and the high variability of back-crosses.

Plant breeders were also developing an array of stable varieties in many important plant species. In the early 19th century, Augustin Sageret established the concept of dominance, recognizing that when some plant varieties are crossed, certain characteristics (present in one parent) usually appear in the offspring; he also found that some ancestral characteristics found in neither parent may appear in offspring. However, plant breeders made little attempt to establish a theoretical foundation for their work or to connect it with contemporary work in physiology, although Gartons Agricultural Plant Breeders in England explained their system.

Mendel

Blending inheritance leads to the averaging out of every characteristic, which would make evolution by natural selection impossible.

In breeding experiments between 1856 and 1865, Gregor Mendel first traced inheritance patterns of certain traits in pea plants and showed that they obeyed simple statistical rules, with some traits being dominant and others recessive. These patterns of Mendelian inheritance demonstrated that applying statistics to inheritance could be highly useful; they also contradicted 19th-century theories of blending inheritance, as the traits remained discrete through multiple generations of hybridization.
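Mendel's statistical rules can be illustrated with a short simulation. The sketch below is a toy model (not Mendel's own notation): it crosses two heterozygous pea plants and counts offspring showing the dominant versus the recessive trait, and the counts approach the familiar 3:1 ratio.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Monohybrid cross Aa x Aa: each parent passes one allele at random.
# "A" is dominant, so any offspring carrying at least one "A" shows
# the dominant trait; only "aa" shows the recessive trait.
def offspring():
    return random.choice("Aa") + random.choice("Aa")

counts = {"dominant": 0, "recessive": 0}
for _ in range(10_000):
    child = offspring()
    counts["dominant" if "A" in child else "recessive"] += 1

ratio = counts["dominant"] / counts["recessive"]
print(counts, round(ratio, 2))  # ratio comes out close to 3, i.e. 3:1
```

Because the alleles stay discrete, the recessive trait reappears intact in later generations instead of blending away.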

From his statistical analysis Mendel defined a concept he described as a character (which in his mind also stood for the "determinant of that character"). In only one sentence of his historic paper did he use the term "factors" to designate the "material creating" the character: "So far as experience goes, we find it in every case confirmed that constant progeny can only be formed when the egg cells and the fertilizing pollen are of like character, so that both are provided with the material for creating quite similar individuals, as is the case with the normal fertilization of pure species. We must therefore regard it as certain that exactly similar factors must be at work also in the production of the constant forms in the hybrid plants." (Mendel, 1866)

Mendel's work was published in 1866 as "Versuche über Pflanzen-Hybriden" (Experiments on Plant Hybridization) in the Verhandlungen des Naturforschenden Vereins zu Brünn (Proceedings of the Natural History Society of Brünn), following two lectures he gave on the work in early 1866.

Post-Mendel, pre-rediscovery

Pangenesis

Diagram of Charles Darwin's pangenesis theory. Every part of the body emits tiny particles, gemmules, which migrate to the gonads and contribute to the fertilised egg and so to the next generation. The theory implied that changes to the body during an organism's life would be inherited, as proposed in Lamarckism.

Mendel's work was published in a relatively obscure scientific journal, and it received little attention from the scientific community. Instead, discussions about modes of heredity were galvanized by Darwin's theory of evolution by natural selection, in which mechanisms of non-Lamarckian heredity seemed to be required. Darwin's own theory of heredity, pangenesis, did not meet with any large degree of acceptance. A more mathematical version of pangenesis, one which dropped many of Darwin's Lamarckian holdovers, was developed as the "biometrical" school of heredity by Darwin's cousin, Francis Galton.

Germ plasm

August Weismann's germ plasm theory. The hereditary material, the germ plasm, is confined to the gonads. Somatic cells (of the body) develop afresh in each generation from the germ plasm.

In 1883 August Weismann conducted experiments involving breeding mice whose tails had been surgically removed. His results — that surgically removing a mouse's tail had no effect on the tail of its offspring — challenged the theories of pangenesis and Lamarckism, which held that changes to an organism during its lifetime could be inherited by its descendants. Weismann proposed the germ plasm theory of inheritance, which held that hereditary information was carried only in sperm and egg cells.

Rediscovery of Mendel

Hugo de Vries wondered what the nature of germ plasm might be, and in particular he wondered whether or not germ plasm was mixed like paint or whether the information was carried in discrete packets that remained unbroken. In the 1890s he was conducting breeding experiments with a variety of plant species and in 1897 he published a paper on his results that stated that each inherited trait was governed by two discrete particles of information, one from each parent, and that these particles were passed along intact to the next generation. In 1900 he was preparing another paper on his further results when he was shown a copy of Mendel's 1866 paper by a friend who thought it might be relevant to de Vries's work. He went ahead and published his 1900 paper without mentioning Mendel's priority. Later that same year another botanist, Carl Correns, who had been conducting hybridization experiments with maize and peas, was searching the literature for related experiments prior to publishing his own results when he came across Mendel's paper, which had results similar to his own. Correns accused de Vries of appropriating terminology from Mendel's paper without crediting him or recognizing his priority. At the same time another botanist, Erich von Tschermak was experimenting with pea breeding and producing results like Mendel's. He too discovered Mendel's paper while searching the literature for relevant work. In a subsequent paper de Vries praised Mendel and acknowledged that he had only extended his earlier work.

Emergence of molecular genetics

After the rediscovery of Mendel's work there was a feud between William Bateson and Karl Pearson over the hereditary mechanism, resolved by Ronald Fisher in his 1918 paper "The Correlation Between Relatives on the Supposition of Mendelian Inheritance".

Thomas Hunt Morgan discovered sex-linked inheritance of the white-eyed mutation in the fruit fly Drosophila in 1910, implying that the gene was on the sex chromosome.

In 1910, Thomas Hunt Morgan showed that genes reside on specific chromosomes. He later showed that genes occupy specific locations on the chromosome. With this knowledge, Morgan and his students began the first chromosomal map of the fruit fly Drosophila melanogaster. In 1928, Frederick Griffith showed that genes could be transferred. In what is now known as Griffith's experiment, injections into a mouse of a deadly strain of bacteria that had been heat-killed transferred genetic information to a safe strain of the same bacteria, killing the mouse.

A series of subsequent discoveries led to the realization decades later that the genetic material is made of DNA (deoxyribonucleic acid). In 1941, George Wells Beadle and Edward Lawrie Tatum showed that mutations in genes caused errors in specific steps in metabolic pathways. This showed that specific genes code for specific proteins, leading to the "one gene, one enzyme" hypothesis. Oswald Avery, Colin Munro MacLeod, and Maclyn McCarty showed in 1944 that DNA holds the gene's information. In 1952, Rosalind Franklin and Raymond Gosling produced a strikingly clear X-ray diffraction pattern indicating a helical form, and in 1953, James D. Watson and Francis Crick demonstrated the molecular structure of DNA. Together, these discoveries established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses.

In 1972, Walter Fiers and his team at the University of Ghent were the first to determine the sequence of a gene: the gene for bacteriophage MS2 coat protein. Richard J. Roberts and Phillip Sharp discovered in 1977 that genes can be split into segments. This led to the idea that one gene can make several proteins. The successful sequencing of many organisms' genomes has complicated the molecular definition of the gene. In particular, genes do not always sit side by side on DNA like discrete beads. Instead, regions of the DNA producing distinct proteins may overlap, so that the idea emerges that "genes are one long continuum". It was first hypothesized in 1986 by Walter Gilbert that neither DNA nor protein would be required in such a primitive system as that of a very early stage of the earth if RNA could serve both as a catalyst and as genetic information storage processor.

The modern study of genetics at the level of DNA is known as molecular genetics and the synthesis of molecular genetics with traditional Darwinian evolution is known as the modern evolutionary synthesis.

Early timeline

1856-1863: Mendel studied the inheritance of traits between generations based on experiments involving garden pea plants. He deduced that there is a certain tangible essence that is passed on between generations from both parents. Mendel established the basic principles of inheritance, namely, the principles of dominance, independent assortment, and segregation.
 
1866: Austrian Augustinian monk Gregor Mendel's paper, Experiments on Plant Hybridization, published.
 
1869: Friedrich Miescher discovers a weak acid in the nuclei of white blood cells, the substance we today call DNA. In 1871 he isolated cell nuclei from the pus cells found on surgical bandages and treated them with pepsin (an enzyme which breaks down proteins). From this, he recovered an acidic substance he called "nuclein."
 
1880-1890: Walther Flemming, Eduard Strasburger, and Edouard Van Beneden elucidate chromosome distribution during cell division.
 
1889: Richard Altmann purified protein-free DNA. However, the nucleic acid was not as pure as he had assumed. It was later determined to contain a large amount of protein.
 
1889: Hugo de Vries postulates that "inheritance of specific traits in organisms comes in particles", naming such particles "(pan)genes".
 
1902: Archibald Garrod discovered inborn errors of metabolism. His research also contributed, albeit indirectly, to a later explanation of epistasis. When Garrod studied alkaptonuria, a disorder in which urine quickly turns black due to the presence of homogentisate, he noticed that it was prevalent among individuals whose parents were closely related.
 
1903: Walter Sutton and Theodor Boveri independently hypothesize that chromosomes, which segregate in a Mendelian fashion, are hereditary units. Boveri was studying sea urchins when he found that all the chromosomes in the sea urchins had to be present for proper embryonic development to take place. Sutton's work with grasshoppers showed that chromosomes occur in matched pairs of maternal and paternal chromosomes which separate during meiosis. He concluded that this could be "the physical basis of the Mendelian law of heredity."
 
1905: William Bateson coins the term "genetics" in a letter to Adam Sedgwick (Zoologist) and at a meeting in 1906.
 
1908: G.H. Hardy and Wilhelm Weinberg proposed the Hardy-Weinberg equilibrium model, which describes allele frequencies in the gene pool of a population as constant and in a state of equilibrium from generation to generation, unless specific disturbing influences are introduced.
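The Hardy-Weinberg model reduces to simple arithmetic. The sketch below (illustrative, with an arbitrarily chosen allele frequency) computes the equilibrium genotype frequencies p², 2pq and q² for a two-allele locus and checks that they sum to 1.

```python
# Hardy-Weinberg: for a two-allele locus with allele frequencies p and q
# (p + q = 1), genotype frequencies at equilibrium are p^2 (AA),
# 2pq (Aa) and q^2 (aa), and they must sum to 1.
def hardy_weinberg(p):
    q = 1.0 - p
    return p * p, 2 * p * q, q * q

hom_dom, het, hom_rec = hardy_weinberg(0.7)  # p = 0.7 chosen arbitrarily
print(tuple(round(f, 2) for f in (hom_dom, het, hom_rec)))  # (0.49, 0.42, 0.09)
assert abs(hom_dom + het + hom_rec - 1.0) < 1e-9
```

Absent mutation, migration, selection or drift, these frequencies stay the same in every subsequent generation, which is the equilibrium the model describes.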
 
1910: Thomas Hunt Morgan shows that genes reside on chromosomes while determining the nature of sex-linked traits by studying Drosophila melanogaster. He determined that the white-eyed mutant was sex-linked based on Mendel's principles of segregation and independent assortment.
 
1911: Alfred Sturtevant, one of Morgan's students, invented the procedure of linkage mapping, which is based on the frequency of recombination.
 
1913: Alfred Sturtevant makes the first genetic map of a chromosome.
 
1913: Gene maps show chromosomes carrying linearly arranged genes.
 
1918: Ronald Fisher publishes "The Correlation Between Relatives on the Supposition of Mendelian Inheritance", marking the start of the modern synthesis of genetics and evolutionary biology. See population genetics.
 
1920: Lysenkoism begins. The Lysenkoists held that hereditary factors reside not only in the nucleus but also in the cytoplasm, which they called living protoplasm.
 
1928: Frederick Griffith studies bacterial transformation and discovers that hereditary material from dead bacteria can be incorporated into live bacteria. In Griffith's experiment, mice injected with heat-killed bacteria of a deadly strain together with live bacteria of a harmless strain develop an infection of the deadly strain's type, showing that some substance carries the genes responsible for pathogenicity.
 
1930s–1950s: Joachim Hämmerling conducted experiments with Acetabularia in which he began to distinguish the contributions of the nucleus and of cytoplasmic substances (later discovered to be DNA and mRNA, respectively) to cell morphogenesis and development.
 
1931: Crossing over is identified as the cause of recombination; the first cytological demonstration of this crossing over was performed by Barbara McClintock and Harriet Creighton.
 
1933: Jean Brachet, while studying virgin sea urchin eggs, suggested that DNA is found in the cell nucleus and that RNA is present exclusively in the cytoplasm. At the time, "yeast nucleic acid" (RNA) was thought to occur only in plants, while "thymus nucleic acid" (DNA) only in animals. The latter was thought to be a tetramer with the function of buffering cellular pH.
 
1933: Thomas Morgan received the Nobel prize for linkage mapping. His work elucidated the role played by the chromosome in heredity.
 
1941: Edward Lawrie Tatum and George Wells Beadle show that genes code for proteins.
 
1943: Luria–Delbrück experiment: this experiment showed that genetic mutations conferring resistance to bacteriophage arise in the absence of selection, rather than being a response to selection.

The DNA era

1944: The Avery–MacLeod–McCarty experiment isolates DNA as the genetic material (at that time called transforming principle).
 
1947: Salvador Luria discovers reactivation of irradiated phage, stimulating numerous further studies of DNA repair processes in bacteriophage, and other organisms, including humans.
 
1948: Barbara McClintock discovers transposons in maize.
 
1950: Erwin Chargaff determined the pairing rules of the nitrogenous bases. Chargaff and his team studied the DNA of multiple organisms and found three things (also known as Chargaff's rules). First, the concentration of adenine always matches that of thymine. Second, the concentration of guanine always matches that of cytosine. Lastly, the total proportion of purines (adenine and guanine) corresponds to the total proportion of pyrimidines (cytosine and thymine).
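Chargaff's rules are easy to verify computationally. The sketch below uses a made-up sequence for illustration: pooling a strand with its reverse complement, as in double-stranded DNA, forces the base counts to obey A = T and G = C.

```python
from collections import Counter

def reverse_complement(strand):
    # A<->T and G<->C pairing, read in the opposite direction
    return strand.translate(str.maketrans("ACGT", "TGCA"))[::-1]

strand = "ATGCCGTAAGCTTACG"                   # hypothetical sequence
duplex = strand + reverse_complement(strand)  # both strands of the double helix
counts = Counter(duplex)

# Chargaff's rules: A matches T, G matches C, so purines (A + G)
# equal pyrimidines (C + T).
assert counts["A"] == counts["T"]
assert counts["G"] == counts["C"]
assert counts["A"] + counts["G"] == counts["C"] + counts["T"]
```

These equalities were among the clues that pointed Watson and Crick toward complementary base pairing in the double helix.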
1952: The Hershey–Chase experiment proves the genetic information of phages (and, by implication, all other organisms) to be DNA.
1952: Raymond Gosling, a student supervised by Rosalind Franklin, takes an X-ray diffraction image of DNA in May 1952.
1953: DNA structure is resolved to be a double helix by Rosalind Franklin, James Watson and Francis Crick.
 
1955: Alexander R. Todd determined the chemical makeup of nitrogenous bases. Todd also successfully synthesized adenosine triphosphate (ATP) and flavin adenine dinucleotide (FAD). He was awarded the Nobel prize in Chemistry in 1957 for his contributions in the scientific knowledge of nucleotides and nucleotide co-enzymes.
 
1955: Joe Hin Tjio, while working in Albert Levan's lab, determined the number of chromosomes in humans to be 46. Tjio was attempting to refine an established technique to separate chromosomes onto glass slides by conducting a study of human embryonic lung tissue, when he saw that there were 46 chromosomes rather than 48. This revolutionized the world of cytogenetics.
 
1957: Arthur Kornberg, with Severo Ochoa, synthesized DNA in a test tube after discovering the means by which DNA is duplicated. DNA polymerase I established the requirements for in vitro synthesis of DNA. Kornberg and Ochoa were awarded the Nobel Prize in 1959 for this work.
 
1957/1958: Robert W. Holley, Marshall Nirenberg and Har Gobind Khorana proposed the nucleotide sequence of the tRNA molecule. Francis Crick had proposed the requirement for some kind of adapter molecule, and it was soon identified by Holley, Nirenberg and Khorana. These scientists helped explain the link between a messenger RNA nucleotide sequence and a polypeptide sequence. In their experiments they purified tRNAs from yeast cells, and they were awarded the Nobel Prize in 1968.
1958: The Meselson–Stahl experiment demonstrates that DNA is semiconservatively replicated.
1960: Jacob and collaborators discover the operon, a group of genes whose expression is coordinated by an operator.
 
1961: Francis Crick and Sydney Brenner discovered frameshift mutations. In the experiment, proflavin-induced mutations of the T4 bacteriophage gene (rIIB) were isolated. Proflavin causes mutations by inserting itself between DNA bases, typically resulting in insertion or deletion of a single base pair. The mutants could not produce functional rIIB protein. These mutations were used to demonstrate that three sequential bases of the rIIB gene's DNA specify each successive amino acid of the encoded protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid.
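The triplet reading frame can be sketched in a few lines. The codon table below is a tiny, made-up subset of the real genetic code (written as DNA coding-strand triplets for simplicity), but it shows both the three-base step and why a single inserted base scrambles everything downstream.

```python
# Toy codon table: a small, hypothetical subset of the real genetic code,
# written as DNA coding-strand triplets for simplicity.
CODON_TABLE = {"ATG": "Met", "AAA": "Lys", "GGC": "Gly", "TGA": "STOP"}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):   # read non-overlapping triplets
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("ATGAAAGGCTGA"))   # ['Met', 'Lys', 'Gly']
# A single inserted base (the extra C) shifts the frame, so every codon
# after the insertion point is misread:
print(translate("ATGCAAAGGCTGA"))  # ['Met', '?', '?', '?']
```

This downstream scrambling is exactly the behavior of the proflavin-induced insertion mutants that led Crick and Brenner to the triplet code.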
 
1961: Sydney Brenner, Francois Jacob and Matthew Meselson identified the function of messenger RNA.
 
1961–1967: Combined efforts of scientists "crack" the genetic code, including Marshall Nirenberg, Har Gobind Khorana, Sydney Brenner and Francis Crick.
 
1964: Howard Temin showed using RNA viruses that the direction of DNA to RNA transcription can be reversed.
 
1964: Lysenkoism ends.
 
1966: Marshall W. Nirenberg, Philip Leder and Har Gobind Khorana cracked the genetic code using RNA homopolymer and heteropolymer experiments, through which they figured out which triplets of RNA were translated into which amino acids.
 
1969: Mary-Lou Pardue and Joseph G. Gall demonstrate molecular hybridization of radioactive DNA to the DNA of cytological preparations (in situ hybridization).
 
1970: Restriction enzymes were discovered in studies of a bacterium, Haemophilus influenzae, by Hamilton O. Smith and Daniel Nathans, enabling scientists to cut and paste DNA.
 
1972: Stanley Norman Cohen and Herbert Boyer at UCSF and Stanford University constructed recombinant DNA, formed by using restriction endonucleases to cleave DNA and DNA ligase to attach the "sticky ends" into a bacterial plasmid.

The genomics era

1972: Walter Fiers and his team were the first to determine the sequence of a gene: the gene for bacteriophage MS2 coat protein.
 
1976: Walter Fiers and his team determine the complete nucleotide-sequence of bacteriophage MS2-RNA.
 
1976: Yeast genes expressed in E. coli for the first time.
 
1977: DNA is sequenced for the first time by Fred Sanger, Walter Gilbert, and Allan Maxam, working independently. Sanger's lab sequences the entire genome of bacteriophage ΦX174.
 
Late 1970s: Nonisotopic methods of nucleic acid labeling are developed. Subsequent improvements in the detection of reporter molecules using immunocytochemistry and immunofluorescence, in conjunction with advances in fluorescence microscopy and image analysis, made the technique safer, faster and more reliable.
 
1980: Paul Berg, Walter Gilbert and Frederick Sanger developed methods of mapping the structure of DNA. In 1972, recombinant DNA molecules were produced in Paul Berg's Stanford University laboratory. Berg was awarded the 1980 Nobel Prize in Chemistry for constructing recombinant DNA molecules that contained phage lambda genes inserted into a small circular DNA molecule.
 
1980: Stanley Norman Cohen and Herbert Boyer received the first U.S. patent for gene cloning, having proved the successful outcome of cloning a plasmid and expressing a foreign gene in bacteria to produce a "protein foreign to a unicellular organism." These two scientists were able to produce proteins such as HGH, erythropoietin and insulin. The patent earned about $300 million in licensing royalties for Stanford.
 
1982: The U.S. Food and Drug Administration (FDA) approved the release of the first genetically engineered human insulin, originally biosynthesized using recombinant DNA methods by Genentech in 1978. Once approved, the cloning process led to mass production of Humulin (under license by Eli Lilly & Co.).
 
1983: Kary Banks Mullis invents the polymerase chain reaction enabling the easy amplification of DNA.
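The power of PCR rests on simple exponential arithmetic: each thermal cycle ideally doubles every template molecule, so n cycles yield 2^n copies per starting molecule. A minimal sketch of that idealized yield (ignoring real-world efficiency losses):

```python
# Idealized PCR yield: every cycle doubles each template molecule,
# so n cycles multiply the starting amount by 2**n.
def pcr_copies(start_molecules, cycles):
    return start_molecules * 2 ** cycles

print(pcr_copies(1, 30))  # 1073741824 — one molecule becomes ~10^9 copies
```

A typical 30-cycle run therefore turns even a trace of template into an easily detectable amount of DNA, which is what made the technique so transformative.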
 
1983: Barbara McClintock was awarded the Nobel Prize in Physiology or Medicine for her discovery of mobile genetic elements. McClintock studied transposon-mediated mutation and chromosome breakage in maize and published her first report on transposable elements, or transposons, in 1948. She found that transposons were widely observed in corn, although her ideas did not receive widespread attention until the 1960s and 1970s, when the same phenomenon was discovered in bacteria and in Drosophila melanogaster.
Display of VNTR allele lengths on a chromatogram, a technology used in DNA fingerprinting
1985: Alec Jeffreys announced the DNA fingerprinting method. Jeffreys had been studying DNA variation and the evolution of gene families in order to understand disease-causing genes. In an attempt to develop a process to isolate many minisatellites at once using chemical probes, Jeffreys took X-ray films of the DNA for examination and noticed that minisatellite regions differ greatly from one person to another. In the DNA fingerprinting technique, a DNA sample is digested with restriction endonucleases and the fragments are then separated by electrophoresis, producing a banding pattern distinct to each individual.
 
1986: Jeremy Nathans found genes for color vision and color blindness, working with David Hogness, Douglas Vollrath and Ron Davis as they were studying the complexity of the retina.
 
1987: Yoshizumi Ishino accidentally discovers and describes part of a DNA sequence that would later be called CRISPR.
 
1989: Thomas Cech discovered that RNA can catalyze chemical reactions, one of the most important breakthroughs in molecular genetics because it elucidated the function of poorly understood segments of DNA.
 
1989: The human gene that encodes the CFTR protein was sequenced by Francis Collins and Lap-Chee Tsui. Defects in this gene cause cystic fibrosis.
 
1992: American and British scientists unveiled a technique for testing embryos in vitro (preimplantation genetic diagnosis) for genetic abnormalities such as cystic fibrosis and haemophilia.
 
1993: Phillip Allen Sharp and Richard Roberts are awarded the Nobel Prize for the discovery that genes in DNA are made up of introns and exons. According to their findings, not all the nucleotides on the RNA strand (the product of DNA transcription) are used in translation. The intervening sequences on the RNA strand are first spliced out, so that only the RNA segments left behind after splicing are translated into polypeptides.
 
1994: The first breast cancer gene is discovered. BRCA1 was discovered by researchers at the King laboratory at UC Berkeley in 1990 but was first cloned in 1994. BRCA2, the second key gene in the manifestation of breast cancer, was discovered later in 1994 by Professor Michael Stratton and Dr. Richard Wooster.
 
1995: The genome of the bacterium Haemophilus influenzae is the first genome of a free-living organism to be sequenced.
 
1996: Saccharomyces cerevisiae, a yeast species, is the first eukaryotic genome sequence to be released.
 
1996: Alexander Rich discovered Z-DNA, a type of DNA in a transient state that is in some cases associated with DNA transcription. The Z-DNA form is more likely to occur in regions of DNA rich in cytosine and guanine with high salt concentrations.
 
1997: Dolly the sheep was cloned by Ian Wilmut and colleagues from the Roslin Institute in Scotland.
 
1998: The first genome sequence for a multicellular eukaryote, Caenorhabditis elegans, is released.
 
2000: The full genome sequence of Drosophila melanogaster is completed.
 
2001: First draft sequences of the human genome are released simultaneously by the Human Genome Project and Celera Genomics.
 
2001: Francisco Mojica and Ruud Jansen propose the acronym CRISPR to describe a family of bacterial DNA sequences that can be used to change genes within organisms specifically.
Francis Collins announces the successful completion of the Human Genome Project in 2003
2003 (14 April): Successful completion of Human Genome Project with 99% of the genome sequenced to a 99.99% accuracy.
 
2004: Merck introduced a vaccine for human papillomavirus which promised to protect women against infection with HPV types 16 and 18, which inactivate tumor suppressor genes and together cause 70% of cervical cancers.
 
2007: Michael Worobey traced the evolutionary origins of HIV by analyzing its genetic mutations, which revealed that HIV infections had occurred in the United States as early as the 1960s.
 
2007: Timothy Ray Brown becomes the first person cured of HIV/AIDS through a hematopoietic stem cell transplantation.
 
2008: Houston-based Introgen developed Advexin (FDA Approval pending), the first gene therapy for cancer and Li-Fraumeni syndrome, utilizing a form of Adenovirus to carry a replacement gene coding for the p53 protein.
 
2010: Transcription activator-like effector nucleases (TALENs) are first used to cut specific sequences of DNA.
 
2016: A genome is sequenced in outer space for the first time, with NASA astronaut Kate Rubins using a MinION device aboard the International Space Station.

Tuesday, November 6, 2018

History of genetic engineering

From Wikipedia, the free encyclopedia
 
Herbert Boyer (pictured) and Stanley Cohen created the first genetically modified organism in 1972

Genetic modification caused by human activity has been occurring since around 12,000 BC, when humans first began to domesticate organisms. Genetic engineering as the direct transfer of DNA from one organism to another was first accomplished by Herbert Boyer and Stanley Cohen in 1972. It was the result of a series of advancements in techniques that allowed the direct modification of the genome. Important advances included the discovery of restriction enzymes and DNA ligases, the ability to design plasmids and technologies like polymerase chain reaction and sequencing. Transformation of the DNA into a host organism was accomplished with the invention of biolistics, Agrobacterium-mediated recombination and microinjection.

The first genetically modified animal was a mouse created in 1974 by Rudolf Jaenisch. In 1976 the technology was commercialised, with the advent of genetically modified bacteria that produced somatostatin, followed by insulin in 1978. In 1983 an antibiotic resistant gene was inserted into tobacco, leading to the first genetically engineered plant. Advances followed that allowed scientists to manipulate and add genes to a variety of different organisms and induce a range of different effects. Plants were first commercialized with virus resistant tobacco released in China in 1992. The first genetically modified food was the Flavr Savr tomato marketed in 1994. By 2010, 29 countries had planted commercialized biotech crops. In 2000 a paper published in Science introduced golden rice, the first food developed with increased nutrient value.

Agriculture

DNA studies suggested that the dog most likely arose from a common ancestor with the grey wolf.
 
Genetic engineering is the direct manipulation of an organism's genome using certain biotechnology techniques that have only existed since the 1970s. Human-directed genetic manipulation was occurring much earlier, beginning with the domestication of plants and animals through artificial selection. The dog is believed to be the first animal domesticated, possibly arising from a common ancestor of the grey wolf, with archeological evidence dating to about 12,000 BC. Other carnivores domesticated in prehistoric times include the cat, which cohabited with humans 9,500 years ago. Archeological evidence suggests sheep, cattle, pigs and goats were domesticated between 9,000 BC and 8,000 BC in the Fertile Crescent.

The first evidence of plant domestication comes from emmer and einkorn wheat found in pre-Pottery Neolithic A villages in Southwest Asia dated to about 10,500 to 10,100 BC. The Fertile Crescent of Western Asia, Egypt, and India were sites of the earliest planned sowing and harvesting of plants that had previously been gathered in the wild. Independent development of agriculture occurred in northern and southern China, Africa's Sahel, New Guinea and several regions of the Americas. The eight Neolithic founder crops (emmer wheat, einkorn wheat, barley, peas, lentils, bitter vetch, chick peas and flax) had all appeared by about 7,000 BC. Horticulture first appears in the Levant during the Chalcolithic period, about 6,800 to 6,300 BC. Because early vegetables consist mostly of soft tissue, archeological evidence for them is scarce; the earliest vegetable remains have been found in Egyptian caves that date back to the 2nd millennium BC.

Selective breeding of domesticated plants was once the main way early farmers shaped organisms to suit their needs. Charles Darwin described three types of selection: methodical selection, wherein humans deliberately select for particular characteristics; unconscious selection, wherein a characteristic is selected simply because it is desirable; and natural selection, wherein a trait that helps an organism survive better is passed on. Early breeding relied on unconscious and natural selection; when methodical selection was first introduced is unknown. Common characteristics bred into domesticated plants include grains that did not shatter (to allow easier harvesting), uniform ripening, shorter lifespans that translate to faster growing, loss of toxic compounds, and productivity. Some plants, like the banana, could be propagated by vegetative cloning. The offspring often did not contain seeds and were therefore sterile, but they were usually juicier and larger. Propagation through cloning allowed these mutant varieties to be cultivated despite their lack of seeds.

Hybridization was another way that rapid changes in a plant's makeup were introduced. It often increased vigor and combined desirable traits. Hybridization most likely first occurred when humans grew similar, yet slightly different, plants in close proximity. Triticum aestivum, the wheat used in baking bread, is an allopolyploid; its creation is the result of two separate hybridization events.

Grafting can transfer chloroplasts (the organelles in plant cells that conduct photosynthesis and carry their own DNA), mitochondrial DNA and even the entire cell nucleus containing the genome, potentially creating a new species; this makes grafting a form of natural genetic engineering.

X-rays were first used to deliberately mutate plants in 1927. Between 1927 and 2007, more than 2,540 mutant plant varieties had been produced using x-rays.

Genetics

Griffith proved the existence of a "transforming principle", which Avery, MacLeod and McCarty later showed to be DNA
 
The bacterium Agrobacterium tumefaciens inserts T-DNA into infected plant cells, which is then incorporated into the plants genome.
 
Various genetic discoveries have been essential in the development of genetic engineering. Genetic inheritance was first described by Gregor Mendel in 1865 following experiments crossing peas. Although largely ignored for 34 years, he provided the first evidence of hereditary segregation and independent assortment. In 1889 Hugo de Vries came up with the name "(pan)gene" after postulating that particles are responsible for the inheritance of characteristics, and the term "genetics" was coined by William Bateson in 1905. In 1928 Frederick Griffith proved the existence of a "transforming principle" involved in inheritance, which Avery, MacLeod and McCarty later (1944) identified as DNA. George Wells Beadle and Edward Lawrie Tatum proposed the one gene-one enzyme hypothesis, that genes code for proteins, in 1941. The double helix structure of DNA was identified by James Watson and Francis Crick in 1953.

As well as discovering how DNA works, tools had to be developed that allowed it to be manipulated. In 1970 Hamilton Smith's lab discovered restriction enzymes, which allowed DNA to be cut at specific places and separated out on an electrophoresis gel. This enabled scientists to isolate genes from an organism's genome. DNA ligases, which join broken DNA together, had been discovered earlier, in 1967, and by combining the two enzymes it was possible to "cut and paste" DNA sequences to create recombinant DNA. Plasmids, discovered in 1952, became important tools for transferring information between cells and replicating DNA sequences. Frederick Sanger developed a method for sequencing DNA in 1977, greatly increasing the genetic information available to researchers. Polymerase chain reaction (PCR), developed by Kary Mullis in 1983, allowed small sections of DNA to be amplified and aided the identification and isolation of genetic material.
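The "cut at specific places" behaviour of a restriction enzyme can be shown with a toy sketch. The EcoRI recognition site (GAATTC, cut after the first G on the top strand) is real; the function and the input sequence are made up for illustration:

```python
def digest(sequence, site="GAATTC", cut_offset=1):
    """Toy simulation of a restriction digest: split `sequence` at each
    occurrence of the recognition `site`, cutting `cut_offset` bases into
    the site (EcoRI cuts G^AATTC on the top strand)."""
    fragments = []
    start = 0
    pos = sequence.find(site)
    while pos != -1:
        # end the current fragment just past the cut position
        fragments.append(sequence[start:pos + cut_offset])
        start = pos + cut_offset
        pos = sequence.find(site, pos + 1)
    fragments.append(sequence[start:])  # remainder after the last cut
    return fragments

print(digest("AAGAATTCTTGGGAATTCCC"))  # ['AAG', 'AATTCTTGGG', 'AATTCCC']
```

A sequence with no recognition site comes back uncut as a single fragment, just as an undigested molecule would run as one band on a gel.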

As well as manipulating the DNA, techniques had to be developed for its insertion (known as transformation) into an organism's genome. Griffith's experiment had already shown that some bacteria had the ability to naturally take up and express foreign DNA. Artificial competence was induced in Escherichia coli in 1970 when Morton Mandel and Akiko Higa showed that it could take up bacteriophage λ after treatment with calcium chloride solution (CaCl2). Two years later, Stanley Cohen showed that CaCl2 treatment was also effective for uptake of plasmid DNA. Transformation using electroporation was developed in the late 1980s, increasing the efficiency and bacterial range. In 1907 a bacterium that caused plant tumors, Agrobacterium tumefaciens, was discovered, and in the early 1970s the tumor-inducing agent was found to be a DNA plasmid called the Ti plasmid. By removing the genes in the plasmid that caused the tumor and adding in novel genes, researchers were able to infect plants with A. tumefaciens and let the bacteria insert their chosen DNA into the genomes of the plants.

Early genetically modified organisms

Paul Berg created the first recombinant DNA molecules in 1972.

In 1972 Paul Berg used restriction enzymes and DNA ligases to create the first recombinant DNA molecules. He combined DNA from the monkey virus SV40 with that of the lambda virus. Herbert Boyer and Stanley Norman Cohen took Berg's work a step further and introduced recombinant DNA into a bacterial cell. Cohen was researching plasmids, while Boyer's work involved restriction enzymes. They recognised the complementary nature of their work and teamed up in 1972. Together they found a restriction enzyme that cut the pSC101 plasmid at a single point and were able to insert and ligate into the gap a gene that conferred resistance to the antibiotic kanamycin. Cohen had previously devised a method by which bacteria could be induced to take up a plasmid, and using this they were able to create a bacterium that survived in the presence of kanamycin. This represented the first genetically modified organism. They then showed that other genes could be expressed in bacteria, including one from the toad Xenopus laevis, the first cross-kingdom transformation.

In 1974 Rudolf Jaenisch created the first GM animal.

In 1974 Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. Jaenisch was studying mammalian cells infected with simian virus 40 (SV40) when he happened to read a paper from Beatrice Mintz describing the generation of chimera mice. He took his SV40 samples to Mintz's lab and injected them into early mouse embryos, expecting tumours to develop. The mice appeared normal, but after using radioactive probes he discovered that the virus had integrated itself into the mouse genome. However, the mice did not pass the transgene to their offspring. In 1981 the laboratories of Frank Ruddle, Frank Constantini and Elizabeth Lacy injected purified DNA into a single-cell mouse embryo and showed transmission of the genetic material to subsequent generations.

The first genetically engineered plant was tobacco, reported in 1983. It was developed by Michael W. Bevan, Richard B. Flavell and Mary-Dell Chilton by creating a chimeric gene that joined an antibiotic-resistance gene to the Ti plasmid from Agrobacterium. The tobacco was infected with Agrobacterium transformed with this plasmid, resulting in the chimeric gene being inserted into the plant. Through tissue culture techniques a single tobacco cell containing the gene was selected and a new plant grown from it.

Regulation

The development of genetic engineering technology led to concerns in the scientific community about potential risks. The development of a regulatory framework concerning genetic engineering began in 1975, at Asilomar, California. The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology. The Asilomar recommendations were voluntary, but in 1976 the US National Institutes of Health (NIH) formed a recombinant DNA advisory committee. This was followed by other regulatory offices (the United States Department of Agriculture (USDA), Environmental Protection Agency (EPA) and Food and Drug Administration (FDA)), effectively making all recombinant DNA research tightly regulated in the USA.

In 1982 the Organization for Economic Co-operation and Development (OECD) released a report on the potential hazards of releasing genetically modified organisms into the environment as the first transgenic plants were being developed. As the technology improved and genetically modified organisms moved from model organisms to potential commercial products, the USA established a committee at the Office of Science and Technology Policy (OSTP) to develop mechanisms to regulate the developing technology. In 1986 the OSTP assigned regulatory approval of genetically modified plants in the US to the USDA, FDA and EPA. In the late 1980s and early 1990s, guidance on assessing the safety of genetically engineered plants and food emerged from organizations including the FAO and WHO.

The European Union first introduced laws requiring GMOs to be labelled in 1997. In 2013 Connecticut became the first US state to enact a labeling law, although it would not take effect until other states followed suit.

Research and medicine

A laboratory mouse in which a gene affecting hair growth has been knocked out (left), is shown next to a normal lab mouse.

The ability to insert, alter or remove genes in model organisms allowed scientists to study the genetic elements of human diseases. Genetically modified mice were created in 1984 that carried cloned oncogenes that predisposed them to developing cancer. The technology has also been used to generate mice with genes knocked out. The first recorded knockout mouse was created by Mario R. Capecchi, Martin Evans and Oliver Smithies in 1989. In 1992 oncomice with tumor suppressor genes knocked out were generated. Creating knockout rats is much harder; it only became possible in 2003.

Since the discovery of microRNA in 1993, RNA interference (RNAi) has been used to silence an organism's genes. By modifying an organism to express microRNA targeted to its endogenous genes, researchers have been able to knock out or partially reduce gene function in a range of species. The ability to partially reduce gene function has allowed the study of genes that are lethal when completely knocked out. Other advantages of using RNAi include the availability of inducible and tissue-specific knockouts. In 2007 microRNA targeted to insect and nematode genes was expressed in plants, leading to suppression when the pests fed on the transgenic plant, potentially creating a new way to control them. Targeting endogenous microRNA expression has allowed further fine-tuning of gene expression, supplementing the more traditional gene knockout approach.

Genetic engineering has been used to produce proteins derived from humans and other sources in organisms that normally cannot synthesize these proteins. Human insulin-synthesising bacteria were developed in 1979 and were first used as a treatment in 1982. In 1988 the first human antibodies were produced in plants. In 2000 Vitamin A-enriched golden rice was the first food with increased nutrient value.

Further advances

As not all plant cells were susceptible to infection by A. tumefaciens, other methods were developed, including electroporation, micro-injection and particle bombardment with a gene gun (invented in 1987). In the 1980s techniques were developed to introduce isolated chloroplasts back into a plant cell that had its cell wall removed. With the introduction of the gene gun in 1987 it became possible to integrate foreign genes into a chloroplast.

Genetic transformation has become very efficient in some model organisms. In 2008 genetically modified seeds were produced in Arabidopsis thaliana by simply dipping the flowers in an Agrobacterium solution. The range of plants that can be transformed has increased as tissue culture techniques have been developed for different species.

The first transgenic livestock were produced in 1985 by micro-injecting foreign DNA into rabbit, sheep and pig eggs. The first animals to synthesise transgenic proteins in their milk were mice, engineered to produce human tissue plasminogen activator. This technology was later applied to sheep, pigs, cows and other livestock.

In 2010 scientists at the J. Craig Venter Institute announced that they had created the first synthetic bacterial genome. The researchers added the new genome to bacterial cells and selected for cells that contained the new genome. To do this, the cells undergo a process called resolution, in which during bacterial cell division one new cell receives the original DNA genome of the bacterium, whilst the other receives the new synthetic genome. When this cell replicates, it uses the synthetic genome as its template. The resulting bacterium, named Synthia, was the world's first synthetic life form.

In 2014 a bacterium was developed that replicated a plasmid containing an unnatural base pair. This required altering the bacterium so it could import the unnatural nucleotides and then efficiently replicate them. The plasmid retained the unnatural base pair through an estimated 99.4% of its doublings. This was the first organism engineered to use an expanded genetic alphabet.

In 2015 CRISPR and TALENs were used to modify plant genomes. Chinese labs used them to create a fungus-resistant wheat and boost rice yields, while a U.K. group used them to tweak a barley gene that could help produce drought-resistant varieties. When used to precisely remove material from DNA without adding genes from other species, the result is not subject to the lengthy and expensive regulatory process associated with GMOs. While CRISPR may use foreign DNA to aid the editing process, the second generation of edited plants contains none of that DNA. Researchers celebrated the acceleration because it may allow them to "keep up" with rapidly evolving pathogens. The U.S. Department of Agriculture stated that some examples of gene-edited corn, potatoes and soybeans are not subject to existing regulations. As of 2016 other review bodies had yet to make statements.

Commercialisation

In 1976 Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson; a year later the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978. In 1980 the U.S. Supreme Court ruled in Diamond v. Chakrabarty that genetically altered life could be patented. The insulin produced by bacteria, branded Humulin, was approved for release by the Food and Drug Administration in 1982.

In 1983 a biotech company, Advanced Genetic Sciences (AGS) applied for U.S. government authorization to perform field tests with the ice-minus strain of P. syringae to protect crops from frost, but environmental groups and protestors delayed the field tests for four years with legal challenges. In 1987 the ice-minus strain of P. syringae became the first genetically modified organism (GMO) to be released into the environment when a strawberry field and a potato field in California were sprayed with it. Both test fields were attacked by activist groups the night before the tests occurred: "The world's first trial site attracted the world's first field trasher".

The first genetically modified crop plant was produced in 1982, an antibiotic-resistant tobacco plant. The first field trials of genetically engineered plants occurred in France and the USA in 1986, when tobacco plants engineered to be resistant to herbicides were tested. In 1987 Plant Genetic Systems, founded by Marc Van Montagu and Jeff Schell, was the first company to genetically engineer insect-resistant plants, by incorporating genes that produced insecticidal proteins from Bacillus thuringiensis (Bt) into tobacco.

Genetically modified microbial enzymes were the first application of genetically modified organisms in food production and were approved in 1988 by the US Food and Drug Administration. In the early 1990s, recombinant chymosin was approved for use in several countries. Cheese had typically been made using the enzyme complex rennet, extracted from cows' stomach lining; scientists modified bacteria to produce chymosin, which was also able to clot milk, resulting in cheese curds. The People's Republic of China was the first country to commercialize transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994 Calgene attained approval to commercially release the Flavr Savr tomato, a tomato engineered to have a longer shelf life. Also in 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialized in Europe. In 1995 Bt potato was approved safe by the Environmental Protection Agency, after having been approved by the FDA, making it the first pesticide-producing crop to be approved in the USA. By 1996, a total of 35 approvals had been granted to commercially grow 8 transgenic crops and one flower crop (carnation), with 8 different traits, in 6 countries plus the EU.

By 2010, 29 countries had planted commercialized biotech crops and a further 31 countries had granted regulatory approval for transgenic crops to be imported. In 2013 Robert Fraley (Monsanto’s executive vice president and chief technology officer), Marc Van Montagu and Mary-Dell Chilton were awarded the World Food Prize for improving the "quality, quantity or availability" of food in the world.

The first genetically modified animal to be commercialised was the GloFish, a zebrafish with a fluorescent gene added that allows it to glow in the dark under ultraviolet light. The first genetically modified animal to be approved for food use was AquAdvantage salmon in 2015. The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout, enabling them to grow year-round instead of only during spring and summer.

Opposition

Opposition to and support for the use of genetic engineering have existed since the technology was developed. After Arpad Pusztai went public in 1998 with research he was conducting, public opposition to genetically modified food increased. Opposition continued following controversial and publicly debated papers published in 1999 and 2013 that claimed negative environmental and health impacts from genetically modified crops.

Pharmacogenomics

From Wikipedia, the free encyclopedia

Pharmacogenomics is the study of the role of the genome in drug response. Its name (pharmaco- + genomics) reflects its combining of pharmacology and genomics. Pharmacogenomics analyzes how the genetic makeup of an individual affects his/her response to drugs. It deals with the influence of acquired and inherited genetic variation on drug response in patients by correlating gene expression or single-nucleotide polymorphisms with pharmacokinetics (drug absorption, distribution, metabolism, and elimination) and pharmacodynamics (effects mediated through a drug's biological targets). The term pharmacogenomics is often used interchangeably with pharmacogenetics. Although both terms relate to drug response based on genetic influences, pharmacogenetics focuses on single drug-gene interactions, while pharmacogenomics encompasses a more genome-wide association approach, incorporating genomics and epigenetics while dealing with the effects of multiple genes on drug response.

Pharmacogenomics aims to develop rational means to optimize drug therapy with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects. Through the utilization of pharmacogenomics, it is hoped that pharmaceutical drug treatments can deviate from what is dubbed the "one-dose-fits-all" approach. Pharmacogenomics also attempts to eliminate trial-and-error prescribing, allowing physicians to take into consideration their patient's genes, the functionality of those genes, and how this may affect the efficacy of the patient's current or future treatments (and, where applicable, provide an explanation for the failure of past treatments). Such approaches promise the advent of precision medicine and even personalized medicine, in which drugs and drug combinations are optimized for narrow subsets of patients or even for each individual's unique genetic makeup. Whether used to explain a patient's response (or lack thereof) to a treatment or as a predictive tool, pharmacogenomics hopes to achieve better treatment outcomes: greater efficacy and minimization of drug toxicities and adverse drug reactions (ADRs). For patients who lack a therapeutic response to a treatment, alternative therapies can be prescribed that better suit their requirements. In order to provide pharmacogenomic recommendations for a given drug, two possible types of input can be used: genotyping, or exome or whole-genome sequencing. Sequencing provides many more data points, including detection of mutations that prematurely terminate the synthesized protein (an early stop codon).

History

Pharmacogenomics was first recognized by Pythagoras around 510 BC when he made a connection between the dangers of fava bean ingestion and hemolytic anemia and oxidative stress. This identification was later validated and attributed to a deficiency of G6PD in the 1950s and called favism. Although the first official publication dates to 1961, the 1950s marked the unofficial beginnings of this science. Prolonged paralysis and fatal reactions linked to genetic variants in patients who lacked butyryl-cholinesterase ('pseudocholinesterase') following administration of succinylcholine during anesthesia were first reported in 1956. The term pharmacogenetics was first coined in 1959 by Friedrich Vogel of Heidelberg, Germany (although some papers suggest it was 1957 or 1958). In the late 1960s, twin studies supported the inference of genetic involvement in drug metabolism, with identical twins showing remarkably similar drug responses compared to fraternal twins. The term pharmacogenomics first began appearing around the 1990s.

The first FDA approval of a pharmacogenetic test was in 2005 (for alleles in CYP2D6 and CYP2C19).

Drug-metabolizing enzymes

There are several known genes which are largely responsible for variances in drug metabolism and response. For brevity, the focus here remains on the genes that are most widely accepted and utilized clinically:
  • Cytochrome P450s
  • VKORC1
  • TPMT

Cytochrome P450

The most prevalent drug-metabolizing enzymes (DME) are the Cytochrome P450 (CYP) enzymes. The term Cytochrome P450 was coined by Omura and Sato in 1962 to describe the membrane-bound, heme-containing protein characterized by a 450 nm spectral peak when complexed with carbon monoxide. The human CYP family consists of 57 genes, with 18 families and 44 subfamilies. CYP proteins are conveniently arranged into these families and subfamilies on the basis of similarities identified between their amino acid sequences. Enzymes that share 35-40% identity are assigned to the same family, designated by an Arabic numeral, and those that share 55-70% identity make up a particular subfamily, designated by a letter. For example, CYP2D6 refers to family 2, subfamily D, and gene number 6.
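The naming convention above is regular enough to parse mechanically. A minimal sketch (the function name is ours, and it only handles the common numeral-letter-numeral symbols, not every edge case in the official nomenclature):

```python
import re

def parse_cyp_name(name):
    """Split a CYP gene symbol into family, subfamily and gene number,
    e.g. 'CYP2D6' -> family 2, subfamily 'D', gene 6."""
    m = re.fullmatch(r"CYP(\d+)([A-Z])(\d+)", name)
    if not m:
        raise ValueError(f"not a recognisable CYP gene symbol: {name}")
    family, subfamily, gene = m.groups()
    return {"family": int(family), "subfamily": subfamily, "gene": int(gene)}

print(parse_cyp_name("CYP2D6"))   # {'family': 2, 'subfamily': 'D', 'gene': 6}
print(parse_cyp_name("CYP2C19"))  # {'family': 2, 'subfamily': 'C', 'gene': 19}
```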

From a clinical perspective, the most commonly tested CYPs include CYP2D6, CYP2C19, CYP2C9, CYP3A4 and CYP3A5. These genes account for the metabolism of approximately 80-90% of currently available prescription drugs. The table below summarizes some of the medications that take these pathways.

CYP2D6

Also known as debrisoquine hydroxylase (named after the drug that led to its discovery), CYP2D6 is the most well-known and extensively studied CYP gene. It is of great interest due to its highly polymorphic nature and its involvement in the metabolism of a large number of medications (as both a major and a minor pathway). More than 100 CYP2D6 genetic variants have been identified.

CYP2C19

Discovered in the early 1980s, CYP2C19 is the second most extensively studied and well understood gene in pharmacogenomics. Over 28 genetic variants have been identified for CYP2C19, which affect the metabolism of several classes of drugs, such as antidepressants and proton pump inhibitors.

CYP2C9

CYP2C9 constitutes the majority of the CYP2C subfamily, representing approximately 20% of hepatic CYP content. It is involved in the metabolism of approximately 10% of all drugs, including medications with narrow therapeutic windows such as warfarin and tolbutamide. There are approximately 57 genetic variants associated with CYP2C9.

CYP3A4 and CYP3A5

The CYP3A family is the most abundant in the liver, with CYP3A4 accounting for 29% of hepatic CYP content. These enzymes also metabolize between 40 and 50% of current prescription drugs, with CYP3A4 alone accounting for 40-45% of these medications. CYP3A5 has over 11 genetic variants identified at the time of this publication.

VKORC1

The vitamin K epoxide reductase complex subunit 1 (VKORC1) is responsible for the pharmacodynamics of warfarin. VKORC1, along with CYP2C9, is useful for identifying the risk of bleeding during warfarin administration. Warfarin works by inhibiting VKOR, which is encoded by the VKORC1 gene. Individuals with polymorphisms in this gene have an altered response to warfarin treatment.

TPMT

Thiopurine methyltransferase (TPMT) catalyzes the S-methylation of thiopurines, thereby regulating the balance between cytotoxic thioguanine nucleotides and inactive metabolites in hematopoietic cells. TPMT is highly involved in 6-MP metabolism, and TPMT activity and genotype are known to affect the risk of toxicity. Excessive levels of 6-MP can cause myelosuppression and myelotoxicity.

Codeine, clopidogrel, tamoxifen, and warfarin are a few examples of medications that follow the above metabolic pathways.

Predictive prescribing

Patient genotypes are usually categorized into the following predicted phenotypes:
  • Ultra-rapid metabolizer: patients with substantially increased metabolic activity;
  • Extensive metabolizer: normal metabolic activity;
  • Intermediate metabolizer: patients with reduced metabolic activity; and
  • Poor metabolizer: patients with little to no functional metabolic activity.
The two extremes of this spectrum are the poor metabolizers and ultra-rapid metabolizers. The efficacy of a medication depends not only on the above metabolic statuses but also on the type of drug consumed. Drugs can be classified into two main groups: active drugs, which are inactivated during metabolism, and prodrugs, which are inactive until they are metabolized.

 
An overall process of how pharmacogenomics functions in a clinical practice. From the raw genotype results, this is then translated to the physical trait, the phenotype. Based on these observations, optimal dosing is evaluated.
 
For example, consider two patients who are taking codeine for pain relief. Codeine is a prodrug, so it requires conversion from its inactive form to its active form. The active form of codeine is morphine, which provides the therapeutic effect of pain relief. If person A receives one *1 allele each from mother and father to code for the CYP2D6 gene, that person is considered to have an extensive metabolizer (EM) phenotype, as allele *1 is considered to have normal function (this would be represented as CYP2D6 *1/*1). If person B, on the other hand, had received one *1 allele from the mother and a *4 allele from the father, that individual would be an intermediate metabolizer (IM) (the genotype would be CYP2D6 *1/*4). Although both individuals are taking the same dose of codeine, person B could potentially lack the therapeutic benefits of codeine due to the decreased conversion rate of codeine to its active counterpart morphine.
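The codeine example can be sketched as a simple activity-score calculation: sum each allele's contribution, then bin the total into a predicted phenotype. The activity values and thresholds below are illustrative only, chosen so the two diplotypes in the example map as described; real assignments come from curated clinical references:

```python
# Illustrative CYP2D6 allele activity values (simplified for this sketch,
# not clinical assignments: *1 normal, *41 decreased, *4 no function).
ALLELE_ACTIVITY = {"*1": 1.0, "*4": 0.0, "*10": 0.25, "*41": 0.5}

def predicted_phenotype(allele_a, allele_b):
    """Sum the two alleles' activity values and bin the total into a
    predicted metabolizer phenotype (thresholds are illustrative)."""
    score = ALLELE_ACTIVITY[allele_a] + ALLELE_ACTIVITY[allele_b]
    if score == 0:
        return "Poor metabolizer"
    elif score <= 1.0:
        return "Intermediate metabolizer"
    elif score <= 2.0:
        return "Extensive metabolizer"
    else:
        return "Ultra-rapid metabolizer"

print(predicted_phenotype("*1", "*1"))  # Extensive metabolizer (person A)
print(predicted_phenotype("*1", "*4"))  # Intermediate metabolizer (person B)
```

A gene duplication (the ultra-rapid case) would push the summed score past the top threshold, which is why duplication carriers can over-convert a prodrug such as codeine.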

Each phenotype is based upon the allelic variation within the individual genotype. However, several genetic events can influence the same phenotypic trait, and establishing genotype-to-phenotype relationships can thus be far from clear-cut for many enzymatic patterns. For instance, the influence of the CYP2D6 *1/*4 allelic variant on the clinical outcome in patients treated with tamoxifen remains debated today. In oncology, genes coding for DPD, UGT1A1, TPMT and CDA, involved in the pharmacokinetics of 5-FU/capecitabine, irinotecan, 6-mercaptopurine and gemcitabine/cytarabine respectively, have all been described as highly polymorphic. A strong body of evidence suggests that patients affected by these genetic polymorphisms are at risk of severe or lethal toxicities upon drug intake, and that pre-therapeutic screening helps to reduce the risk of treatment-related toxicities through adaptive dosing strategies.

Applications

The list below provides a few more commonly known applications of pharmacogenomics:
  • Improve drug safety, and reduce ADRs;
  • Tailor treatments to meet patients' unique genetic pre-disposition, identifying optimal dosing;
  • Improve drug discovery targeted to human disease; and
  • Improve proof of principle for efficacy trials.
Pharmacogenomics may be applied to several areas of medicine, including pain management, cardiology, oncology, and psychiatry. A place may also exist in forensic pathology, in which pharmacogenomics can be used to determine the cause of death in drug-related deaths where no findings emerge at autopsy.

In cancer treatment, pharmacogenomic tests are used to identify which patients are most likely to respond to certain cancer drugs. In behavioral health, pharmacogenomic tests provide tools for physicians and caregivers to better manage medication selection and side-effect amelioration. Pharmacogenomic testing of this kind is also known as companion diagnostics, meaning tests are bundled with specific drugs; examples include the KRAS test with cetuximab and the EGFR test with gefitinib. Besides efficacy, germline pharmacogenetics can help identify patients likely to undergo severe toxicities when given cytotoxic drugs whose detoxification is impaired by genetic polymorphisms, the canonical example being 5-FU.

In cardiovascular disorders, the main concern is response to drugs including warfarin, clopidogrel, beta blockers, and statins.

Example case studies

Case A – Antipsychotic adverse reaction

Patient A suffers from schizophrenia. Their treatment included a combination of ziprasidone, olanzapine, trazodone and benztropine. The patient experienced dizziness and sedation, so they were tapered off ziprasidone and olanzapine and transitioned to quetiapine, and trazodone was discontinued. The patient then experienced excessive sweating, tachycardia and neck pain, gained considerable weight and had hallucinations. Five months later, quetiapine was tapered and discontinued and ziprasidone was re-introduced because of the excessive weight gain. Although the patient lost the excess weight they had gained, they then developed muscle stiffness, cogwheeling, tremor and night sweats. When benztropine was added, they experienced blurry vision. After an additional five months, the patient was switched from ziprasidone to aripiprazole. Over the course of eight months, patient A gradually experienced more weight gain and sedation and developed gait difficulty, stiffness, cogwheeling and dyskinetic ocular movements. A pharmacogenomic test later showed that the patient carried the CYP2D6 *1/*41 diplotype, with a predicted intermediate metabolizer (IM) phenotype, and CYP2C19 *1/*2, also predicting an IM phenotype.

Case B – Pain Management

Patient B is a woman who gave birth by caesarean section. Her physician prescribed codeine for post-caesarean pain. She took the standard prescribed dose but experienced nausea and dizziness while taking the codeine. She also noticed that her breastfed infant was lethargic and feeding poorly. When she mentioned these symptoms to her physician, they recommended that she discontinue codeine use. Within a few days, both her symptoms and her infant's had resolved. Had the patient undergone a pharmacogenomic test, it would likely have revealed a duplication of the CYP2D6 gene, placing her in the ultra-rapid metabolizer (UM) category and explaining her ADRs to codeine.

Case C – FDA Warning on Codeine Overdose for Infants

On February 20, 2013, the FDA released a statement addressing a serious concern about the connection between children who are CYP2D6 ultra-rapid metabolizers (UMs) and fatal reactions to codeine following tonsillectomy and/or adenoidectomy (surgery to remove the tonsils and/or adenoids). The agency issued its strongest warning, a boxed warning, to highlight the dangers of codeine use by CYP2D6 UMs. Codeine is converted to morphine by CYP2D6, and those with UM phenotypes risk producing large amounts of morphine because of the increased function of the gene. Morphine can rise to life-threatening or fatal levels, as became evident with the deaths of three children in August 2012.
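The three cases above all hinge on the same step: translating a CYP2D6 genotype into a predicted metabolizer phenotype and adjusting prescribing accordingly. The following is a minimal sketch of that step using an activity-score approach; the allele activity values, phenotype cutoffs, and function names are illustrative assumptions, not clinical guidance, and real classification schemes differ between laboratories (which is partly why such calls can be contentious).

```python
# Sketch of translating a CYP2D6 diplotype into a predicted metabolizer
# phenotype via an activity-score approach. Allele values and cutoffs are
# illustrative only; laboratory classification systems differ.
ACTIVITY = {"*1": 1.0, "*2": 1.0, "*41": 0.5, "*4": 0.0, "*5": 0.0}

def predict_phenotype(allele1: str, allele2: str, copies: int = 2) -> str:
    # A gene duplication (copies > 2) multiplies the activity contributed
    # by the duplicated allele; this is how ultra-rapid metabolizers arise.
    score = ACTIVITY[allele1] * (copies - 1) + ACTIVITY[allele2]
    if score == 0:
        return "PM"   # poor metabolizer
    if score <= 1.5:
        return "IM"   # intermediate metabolizer
    if score <= 2.25:
        return "NM"   # normal (extensive) metabolizer
    return "UM"       # ultra-rapid metabolizer

def codeine_advised(phenotype: str) -> bool:
    # UMs convert codeine to morphine too quickly (overdose risk, as in
    # the FDA warning above); PMs gain little analgesia. Both should avoid it.
    return phenotype not in {"UM", "PM"}

print(predict_phenotype("*1", "*41"))           # "IM", as in case A
print(predict_phenotype("*1", "*1", copies=3))  # "UM", as in cases B and C
```

With this kind of mapping in place, a prescribing system can flag codeine as contraindicated before the first dose, rather than after an adverse event.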

Polypharmacy

One potential role for pharmacogenomics is to reduce the occurrence of polypharmacy. The theory is that with tailored drug treatments, patients will not need to take several medications intended to treat the same condition, which could minimize ADRs, improve treatment outcomes, and save costs by avoiding the purchase of extraneous medications. An example can be found in psychiatry, where patients tend to receive more medications than even age-matched non-psychiatric patients, a pattern associated with an increased risk of inappropriate prescribing.

The need for pharmacogenomics-tailored drug therapies is perhaps most evident in a survey conducted by the Slone Epidemiology Center at Boston University from February 1998 to April 2007. The study found that on average 82% of adults in the United States take at least one medication (prescription or nonprescription drug, vitamin/mineral, or herbal/natural supplement), and 29% take five or more. Those aged 65 years or older remain the biggest consumers of medications, with 17-19% in this age group taking at least ten medications in a given week. The prevalence of polypharmacy has also increased since 2000, from 23% to 29%.

Drug labeling

The U.S. Food and Drug Administration (FDA) appears heavily invested in the science of pharmacogenomics, as demonstrated by the more than 120 FDA-approved drugs that include pharmacogenomic biomarkers in their labels, a number that has varied over the years. A study of the labels of FDA-approved drugs as of 20 June 2014 found 140 different drugs with a pharmacogenomic biomarker in the label. Because a drug can have several biomarkers, this corresponded to 158 drug-biomarker pairs. Only 29% stated a requirement or recommendation for genetic biomarker testing, although the figure was higher for oncology drugs (62%). On May 22, 2005, the FDA issued its first Guidance for Industry: Pharmacogenomic Data Submissions, which clarified what type of pharmacogenomic data must be submitted to the FDA and when. Experts welcomed the FDA's acknowledgement that pharmacogenomics experiments would not bring negative regulatory consequences. In January 2013, the FDA released its latest guidance, Clinical Pharmacogenomics (PGx): Premarket Evaluation in Early-Phase Clinical Studies and Recommendations for Labeling, intended to address the use of genomic information during drug development and regulatory review.

Challenges

Consecutive phases and associated challenges in Pharmacogenomics.
 
Although there appears to be a general acceptance of the basic tenet of pharmacogenomics amongst physicians and healthcare professionals, several challenges exist that slow the uptake, implementation, and standardization of pharmacogenomics. Some of the concerns raised by physicians include:
  • Uncertainty about how to apply test results in clinical practice and treatment;
  • A perceived lack of availability of the test;
  • The understanding and interpretation of evidence-based research; and
  • Ethical, legal and social issues.
Issues surrounding the availability of the test include:
  • The lack of scientific data: although a considerable number of drug-metabolizing enzymes (DMEs) are involved in the metabolic pathways of drugs, only a fraction have sufficient scientific data to validate their use within a clinical setting; and
  • Demonstrating the cost-effectiveness of pharmacogenomics: publications on the pharmacoeconomics of pharmacogenomics are scarce, so sufficient evidence does not yet exist to validate its cost-effectiveness and cost-consequences.
Although other factors contribute to the slow progression of pharmacogenomics (such as developing guidelines for clinical use), the above factors appear to be the most prevalent.

Controversies

Some alleles that vary in frequency between specific populations have been shown to be associated with differential responses to specific drugs. The beta blocker atenolol, an anti-hypertensive medication, has been shown to lower the blood pressure of Caucasian patients more significantly than that of African American patients in the United States. This observation suggests that the two populations carry different alleles governing oleic acid biochemistry, which react differentially with atenolol. Similarly, hypersensitivity to the antiretroviral drug abacavir is strongly associated with a single-nucleotide polymorphism that varies in frequency between populations.

The FDA's approval of the drug BiDil (isosorbide dinitrate/hydralazine), with a label specifying African-Americans with congestive heart failure, produced a storm of controversy over race-based medicine and fears of genetic stereotyping, even though the BiDil label specified no genetic variants and was based on racial self-identification.

Future

Computational advances have proven to be a boon for pharmacogenomics research. As a simple example, for nearly a decade the growing capacity of hard-drive storage has made it possible to investigate human genome sequences more cheaply and in more detail with regard to the effects, risks, and safety concerns of drugs and other substances. Such computational advances are expected to continue, the aim being to use genome sequence data to make decisions that minimize negative impacts on, say, a patient or the health industry in general. Much recent biomedical research in pharmacogenomics stems from combinatorial chemistry, genomic mining, omic technologies and high-throughput screening. For the field to grow, knowledge enterprises and businesses must work more closely together and adopt simulation strategies, and more importance must be placed on the role of computational biology in safety and risk assessment. This creates a growing need to manage large, complex data sets and to extract information by integrating disparate data, so that improvements in human health can follow.

Introduction to entropy

From Wikipedia, the free encyclopedia