
Thursday, November 25, 2021

History of biotechnology

From Wikipedia, the free encyclopedia
 
Brewing was an early example of biotechnology

Biotechnology is the application of scientific and engineering principles to the processing of materials by biological agents to provide goods and services. From its inception, biotechnology has maintained a close relationship with society. Although now most often associated with the development of drugs, historically biotechnology has been principally associated with food, addressing such issues as malnutrition and famine. The history of biotechnology begins with zymotechnology, which commenced with a focus on brewing techniques for beer. By World War I, however, zymotechnology would expand to tackle larger industrial issues, and the potential of industrial fermentation gave rise to biotechnology. However, both the single-cell protein and gasohol projects of later decades failed to progress, owing to a combination of public resistance, a changing economic scene, and shifts in political power.

Yet the formation of a new field, genetic engineering, would soon bring biotechnology to the forefront of science in society, and an intimate relationship between the scientific community, the public, and the government would ensue. These debates gained exposure in 1975 at the Asilomar Conference, where Joshua Lederberg was the most outspoken supporter of this emerging field in biotechnology. As early as 1978, with the development of synthetic human insulin, Lederberg's claims would prove valid, and the biotechnology industry grew rapidly. Each new scientific advance became a media event designed to capture public support, and by the 1980s biotechnology had grown into a promising real industry. In 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA), but this number would skyrocket to over 125 by the end of the 1990s.

The field of genetic engineering remains a heated topic of discussion in today's society with the advent of gene therapy, stem cell research, cloning, and genetically modified food. While it seems only natural nowadays to regard pharmaceutical drugs as solutions to health and societal problems, this relationship of biotechnology serving social needs began centuries ago.

Origins of biotechnology

Biotechnology arose from the field of zymotechnology or zymurgy, which began as a search for a better understanding of industrial fermentation, particularly beer. Beer was an important industrial, and not just social, commodity. In late 19th-century Germany, brewing contributed as much to the gross national product as steel, and taxes on alcohol proved to be significant sources of revenue to the government. In the 1860s, institutes and remunerative consultancies were dedicated to the technology of brewing. The most famous was the private Carlsberg Institute, founded in 1875, which employed Emil Christian Hansen, who pioneered the pure yeast process for the reliable production of consistent beer. Less well known were private consultancies that advised the brewing industry. One of these, the Zymotechnic Institute, was established in Chicago by the German-born chemist John Ewald Siebel.

The heyday and expansion of zymotechnology came in World War I in response to industrial needs to support the war. Max Delbrück grew yeast on an immense scale during the war to meet 60 percent of Germany's animal feed needs. Compounds of another fermentation product, lactic acid, made up for a lack of hydraulic fluid, glycerol. On the Allied side, the Russian-born chemist Chaim Weizmann eliminated Britain's shortage of acetone, a key raw material for cordite, by fermenting maize starch to acetone. The industrial potential of fermentation was outgrowing its traditional home in brewing, and "zymotechnology" soon gave way to "biotechnology."

With food shortages spreading and resources fading, some dreamed of a new industrial solution. The Hungarian Károly Ereky coined the word "biotechnology" in Hungary during 1919 to describe a technology based on converting raw materials into a more useful product. He built a slaughterhouse for a thousand pigs and also a fattening farm with space for 50,000 pigs, raising over 100,000 pigs a year. The enterprise was enormous, becoming one of the largest and most profitable meat and fat operations in the world. In a book entitled Biotechnologie, Ereky further developed a theme that would be reiterated through the 20th century: biotechnology could provide solutions to societal crises, such as food and energy shortages. For Ereky, the term "biotechnologie" indicated the process by which raw materials could be biologically upgraded into socially useful products.

This catchword spread quickly after the First World War, as "biotechnology" entered German dictionaries and was taken up abroad by business-hungry private consultancies as far away as the United States. In Chicago, for example, the coming of prohibition at the end of World War I encouraged biological industries to create opportunities for new fermentation products, in particular a market for nonalcoholic drinks. Emil Siebel, the son of the founder of the Zymotechnic Institute, broke away from his father's company to establish his own called the "Bureau of Biotechnology," which specifically offered expertise in fermented nonalcoholic drinks.

The belief that the needs of an industrial society could be met by fermenting agricultural waste was an important ingredient of the "chemurgic movement." Fermentation-based processes generated products of ever-growing utility. In the 1940s, penicillin was the most dramatic. While it was discovered in England, it was produced industrially in the U.S. using a deep fermentation process originally developed in Peoria, Illinois. The enormous profits and the public expectations penicillin engendered caused a radical shift in the standing of the pharmaceutical industry. Doctors used the phrase "miracle drug", and the historian of its wartime use, David Adams, has suggested that to the public penicillin represented the perfect health that went together with the car and the dream house of wartime American advertising. Beginning in the 1950s, fermentation technology also became advanced enough to produce steroids on industrially significant scales. Of particular importance was the improved semisynthesis of cortisone, which simplified the old 31-step synthesis to 11 steps. This advance was estimated to reduce the cost of the drug by 70%, making the medicine inexpensive and widely available. Today biotechnology still plays a central role in the production of these compounds and likely will for years to come.

Penicillin was viewed as a miracle drug that brought enormous profits and public expectations.

Single-cell protein and gasohol projects

Even greater expectations of biotechnology were raised during the 1960s by a process that grew single-cell protein. When the so-called protein gap threatened world hunger, producing food locally by growing it from waste seemed to offer a solution. It was the possibility of growing microorganisms on oil that captured the imagination of scientists, policy makers, and commerce. Major companies such as British Petroleum (BP) staked their futures on it. In 1962, BP built a pilot plant at Cap de Lavera in southern France to publicize its product, Toprina. Initial research work at Lavera was done by Alfred Champagnat. In 1963, construction started on BP's second pilot plant, at the Grangemouth Oil Refinery in Britain.

As there was no well-accepted term to describe the new foods, in 1966 the term "single-cell protein" (SCP) was coined at MIT to provide an acceptable and exciting new title, avoiding the unpleasant connotations of microbial or bacterial.

The "food from oil" idea became quite popular by the 1970s, when facilities for growing yeast fed by n-paraffins were built in a number of countries. The Soviets were particularly enthusiastic, opening large "BVK" (belkovo-vitaminny kontsentrat, i.e., "protein-vitamin concentrate") plants next to their oil refineries in Kstovo (1973) and Kirishi (1974).

By the late 1970s, however, the cultural climate had completely changed, as the growth in SCP interest had taken place against a shifting economic and cultural scene. First, the price of oil rose catastrophically in 1974, so that its cost per barrel was five times greater than it had been two years earlier. Second, despite continuing hunger around the world, anticipated demand also began to shift from humans to animals. The program had begun with the vision of growing food for Third World people, yet the product was instead launched as an animal feed for the developed world. The rapidly rising demand for animal feed made that market appear economically more attractive. The ultimate downfall of the SCP project, however, came from public resistance.

This was particularly vocal in Japan, where production came closest to fruition. For all their enthusiasm for innovation and traditional interest in microbiologically produced foods, the Japanese were the first to ban the production of single-cell proteins. The Japanese ultimately were unable to separate the idea of their new "natural" foods from the far-from-natural connotation of oil. These arguments were made against a background of suspicion of heavy industry, in which anxiety over minute traces of petroleum was expressed. Thus, public resistance to an unnatural product led to the end of the SCP project as an attempt to solve world hunger.

Similarly, in the USSR, public environmental concerns led the government in 1989 to close down, or convert to different technologies, all eight of the paraffin-fed yeast plants that the Soviet Ministry of Microbiological Industry operated by that time.

In the late 1970s, biotechnology offered another possible solution to a societal crisis. The escalation in the price of oil in 1974 increased the cost of the Western world's energy tenfold. In response, the U.S. government promoted the production of gasohol, gasoline with 10 percent alcohol added, as an answer to the energy crisis. In 1979, when the Soviet Union sent troops to Afghanistan, the Carter administration cut off its supplies of agricultural produce in retaliation, creating an agricultural surplus in the U.S. As a result, fermenting the agricultural surpluses to synthesize fuel seemed to be an economical solution to the shortage of oil threatened by the Iran–Iraq War. Before the new direction could be taken, however, the political wind changed again: the Reagan administration came to power in January 1981 and, with the declining oil prices of the 1980s, ended support for the gasohol industry before it was born.

Biotechnology seemed to be the solution for major social problems, including world hunger and energy crises. In the 1960s, it appeared that radical measures would be needed to stave off world starvation, and biotechnology seemed to provide an answer. However, the solutions proved to be too expensive and socially unacceptable, and solving world hunger through SCP food was dismissed. In the 1970s, the food crisis was succeeded by the energy crisis, and here too biotechnology seemed to provide an answer. But once again, costs proved prohibitive as oil prices slumped in the 1980s. Thus, in practice, the implications of biotechnology were not fully realized in these situations. But this would soon change with the rise of genetic engineering.

Genetic engineering

The origins of biotechnology culminated with the birth of genetic engineering. There were two key events that have come to be seen as scientific breakthroughs beginning the era that would unite genetics with biotechnology. One was the 1953 discovery of the structure of DNA, by Watson and Crick, and the other was the 1973 discovery by Cohen and Boyer of a recombinant DNA technique by which a section of DNA was cut from the plasmid of an E. coli bacterium and transferred into the DNA of another. This approach could, in principle, enable bacteria to adopt the genes and produce proteins of other organisms, including humans. Popularly referred to as "genetic engineering," it came to be defined as the basis of new biotechnology.

Genetic engineering proved to be a topic that thrust biotechnology into the public scene, and the interaction between scientists, politicians, and the public defined the work that was accomplished in this area. Technical developments during this time were revolutionary and at times frightening. In December 1967, the first heart transplant, performed by Christiaan Barnard, reminded the public that the physical identity of a person was becoming increasingly problematic. While poetic imagination had always seen the heart at the center of the soul, now there was the prospect of individuals being defined by other people's hearts. During the same month, Arthur Kornberg announced that he had managed to biochemically replicate a viral gene. "Life had been synthesized," said the head of the National Institutes of Health. Genetic engineering was now on the scientific agenda, as it was becoming possible to identify genetic characteristics with diseases such as beta thalassemia and sickle-cell anemia.

Responses to scientific achievements were colored by cultural skepticism. Scientists and their expertise were looked upon with suspicion. In 1968, an immensely popular work, The Biological Time Bomb, was written by the British journalist Gordon Rattray Taylor. The author's preface saw Kornberg's discovery of replicating a viral gene as a route to lethal doomsday bugs. The publisher's blurb for the book warned that within ten years, "You may marry a semi-artificial man or woman…choose your children's sex…tune out pain…change your memories…and live to be 150 if the scientific revolution doesn’t destroy us first." The book ended with a chapter called "The Future – If Any." While it is rare for current science to be represented in the movies, in this period of "Star Trek", science fiction and science fact seemed to be converging. "Cloning" became a popular word in the media. Woody Allen satirized the cloning of a person from a nose in his 1973 movie Sleeper, and cloning Adolf Hitler from surviving cells was the theme of the 1976 novel by Ira Levin, The Boys from Brazil.

In response to these public concerns, scientists, industry, and governments increasingly linked the power of recombinant DNA to the immensely practical functions that biotechnology promised. One of the key scientific figures who attempted to highlight the promising aspects of genetic engineering was Joshua Lederberg, a Stanford professor and Nobel laureate. While in the 1960s "genetic engineering" described eugenics and work involving the manipulation of the human genome, Lederberg stressed research that would involve microbes instead, emphasizing the importance of focusing on curing living people. Lederberg's 1963 paper, "Biological Future of Man," suggested that, while molecular biology might one day make it possible to change the human genotype, "what we have overlooked is euphenics, the engineering of human development." Lederberg constructed the word "euphenics" to emphasize changing the phenotype after conception rather than the genotype, which would affect future generations.

With the discovery of recombinant DNA by Cohen and Boyer in 1973, the idea that genetic engineering would have major human and societal consequences was born. In July 1974, a group of eminent molecular biologists headed by Paul Berg wrote to Science suggesting that the consequences of this work were so potentially destructive that there should be a pause until its implications had been thought through. This suggestion was explored at a meeting in February 1975 at California's Monterey Peninsula, forever immortalized by the location, Asilomar. Its historic outcome was an unprecedented call for a halt in research until it could be regulated in such a way that the public need not be anxious, and it led to a 16-month moratorium until National Institutes of Health (NIH) guidelines were established.

Joshua Lederberg was the leading exception in emphasizing, as he had for years, the potential benefits. At Asilomar, in an atmosphere favoring control and regulation, he circulated a paper countering the pessimism and fears of misuses with the benefits conferred by successful use. He described "an early chance for a technology of untold importance for diagnostic and therapeutic medicine: the ready production of an unlimited variety of human proteins. Analogous applications may be foreseen in fermentation process for cheaply manufacturing essential nutrients, and in the improvement of microbes for the production of antibiotics and of special industrial chemicals." In June 1976, the 16-month moratorium on research expired with the Director's Advisory Committee (DAC) publication of the NIH guidelines of good practice. They defined the risks of certain kinds of experiments and the appropriate physical conditions for their pursuit, as well as a list of things too dangerous to perform at all. Moreover, modified organisms were not to be tested outside the confines of a laboratory or allowed into the environment.

Synthetic insulin crystals synthesized using recombinant DNA technology

Atypical as Lederberg was at Asilomar, his optimistic vision of genetic engineering would soon lead to the development of the biotechnology industry. Over the next two years, as public concern over the dangers of recombinant DNA research grew, so too did interest in its technical and practical applications. Curing genetic diseases remained in the realms of science fiction, but it appeared that producing simple human proteins could be good business. Insulin, one of the smaller, best-characterized and understood proteins, had been used in treating type 1 diabetes for half a century. It had been extracted from animals in a form chemically slightly different from the human product. Yet, if one could produce synthetic human insulin, one could meet an existing demand with a product whose approval would be relatively easy to obtain from regulators. In the period 1975 to 1977, synthetic "human" insulin represented the aspirations for new products that could be made with the new biotechnology. Microbial production of synthetic human insulin was finally announced in September 1978 by a startup company, Genentech. That company did not commercialize the product itself; instead, it licensed the production method to Eli Lilly and Company. 1978 also saw the first application for a patent on a gene, the gene which produces human growth hormone, by the University of California, thus introducing the legal principle that genes could be patented. Since that filing, almost 20% of the more than 20,000 genes in the human genome have been patented.

The radical shift in the connotation of "genetic engineering" from an emphasis on the inherited characteristics of people to the commercial production of proteins and therapeutic drugs was nurtured by Joshua Lederberg. His broad concerns since the 1960s had been stimulated by enthusiasm for science and its potential medical benefits. Countering calls for strict regulation, he expressed a vision of potential utility. Against a belief that new techniques would entail unmentionable and uncontrollable consequences for humanity and the environment, a growing consensus on the economic value of recombinant DNA emerged.

Biosensor technology

The MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng in 1959 and demonstrated in 1960. Two years later, in 1962, L.C. Clark and C. Lyons invented the biosensor. Biosensor MOSFETs (BioFETs) were later developed, and they have since been widely used to measure physical, chemical, biological, and environmental parameters.

The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld for electrochemical and biological applications in 1970. The adsorption FET (ADFET) was patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET was demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.
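
As a rough, illustrative sketch (not from the article) of how an ISFET pH sensor is read out: the gate threshold voltage shifts with the ion activity at the membrane, at best by the Nernstian limit of roughly 59 mV per pH unit at room temperature. The calibration point and voltage values below are hypothetical, and the sign convention is device-dependent.

```python
# Hypothetical sketch of ISFET pH readout, assuming an ideal,
# linear Nernstian response calibrated at a pH-7 reference point.

def nernst_sensitivity_mv_per_ph(temp_kelvin: float = 298.15) -> float:
    """Theoretical maximum (Nernstian) sensitivity in mV per pH unit."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * 2.303 * R * temp_kelvin / F  # ~59.2 mV/pH at 25 C

def ph_from_threshold_shift(delta_vt_mv: float, ph_ref: float = 7.0) -> float:
    """Estimate pH from the threshold-voltage shift relative to the
    pH-7 calibration point, assuming full Nernstian sensitivity."""
    return ph_ref + delta_vt_mv / nernst_sensitivity_mv_per_ph()

print(round(nernst_sensitivity_mv_per_ph(), 1))   # ~59.2 (mV/pH)
print(round(ph_from_threshold_shift(-118.3), 2))  # ~5.0: an acidic sample
```

Real devices typically show sub-Nernstian sensitivity and drift, so a practical readout would calibrate against at least two buffer solutions rather than assume the theoretical slope.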

By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.

Biotechnology and industry

A Genentech-sponsored sign declaring South San Francisco to be "The Birthplace of Biotechnology."

With ancestral roots in industrial microbiology that date back centuries, the new biotechnology industry grew rapidly beginning in the mid-1970s. Each new scientific advance became a media event designed to capture investment confidence and public support. Although market expectations and social benefits of new products were frequently overstated, many people were prepared to see genetic engineering as the next great advance in technological progress. By the 1980s, biotechnology had become a nascent real industry, providing titles for emerging trade organizations such as the Biotechnology Industry Organization (BIO).

After insulin, the main focus of attention was on the potential profit makers in the pharmaceutical industry: human growth hormone and what promised to be a miraculous cure for viral diseases, interferon. Cancer was a central target in the 1970s because the disease was increasingly linked to viruses. By 1980, a new company, Biogen, had produced interferon through recombinant DNA. The emergence of interferon and the possibility of curing cancer raised money in the community for research and increased the enthusiasm of an otherwise uncertain and tentative society. Moreover, to the 1970s plight of cancer was added AIDS in the 1980s, offering an enormous potential market for a successful therapy, and more immediately, a market for diagnostic tests based on monoclonal antibodies. By 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA): synthetic insulin, human growth hormone, hepatitis B vaccine, alpha-interferon, and tissue plasminogen activator (tPA), for lysis of blood clots. By the end of the 1990s, however, 125 more genetically engineered drugs would be approved.

The 2007–2008 global financial crisis led to several changes in the way the biotechnology industry was financed and organized. First, it led to a decline in overall financial investment in the sector, globally; and second, in some countries like the UK it led to a shift from business strategies focused on going for an initial public offering (IPO) to seeking a trade sale instead. By 2011, financial investment in the biotechnology industry started to improve again and by 2014 the global market capitalization reached $1 trillion.

Genetic engineering also reached the agricultural front. Tremendous progress followed the market introduction of the genetically engineered Flavr Savr tomato in 1994. Ernst & Young reported that in 1998, 30% of the U.S. soybean crop was expected to come from genetically engineered seeds, as were about 30% of the U.S. cotton and corn crops.

Genetic engineering in biotechnology stimulated hopes both for therapeutic proteins and drugs and for biological organisms themselves, such as seeds, pesticides, engineered yeasts, and modified human cells for treating genetic diseases. From the perspective of its commercial promoters, scientific breakthroughs, industrial commitment, and official support were finally coming together, and biotechnology became a normal part of business. No longer were the proponents of the economic and technological significance of biotechnology the iconoclasts. Their message had finally become accepted and incorporated into the policies of governments and industry.

Global trends

According to Burrill and Company, an industry investment bank, over $350 billion has been invested in biotech since the emergence of the industry, and global revenues rose from $23 billion in 2000 to more than $50 billion in 2005. The greatest growth has been in Latin America but all regions of the world have shown strong growth trends. By 2007 and into 2008, though, a downturn in the fortunes of biotech emerged, at least in the United Kingdom, as the result of declining investment in the face of failure of biotech pipelines to deliver and a consequent downturn in return on investment.

 

History of molecular biology

From Wikipedia, the free encyclopedia

The history of molecular biology begins in the 1930s with the convergence of various, previously distinct biological and physical disciplines: biochemistry, genetics, microbiology, virology and physics. With the hope of understanding life at its most fundamental level, numerous physicists and chemists also took an interest in what would become molecular biology.

In its modern sense, molecular biology attempts to explain the phenomena of life starting from the macromolecular properties that generate them. Two categories of macromolecules in particular are the focus of the molecular biologist: 1) nucleic acids, among which the most famous is deoxyribonucleic acid (or DNA), the constituent of genes, and 2) proteins, which are the active agents of living organisms. One definition of the scope of molecular biology therefore is to characterize the structure, function and relationships between these two types of macromolecules. This relatively limited definition will suffice to allow us to establish a date for the so-called "molecular revolution", or at least to establish a chronology of its most fundamental developments.

General overview

In its earliest manifestations, molecular biology—the name was coined by Warren Weaver of the Rockefeller Foundation in 1938—was an idea of physical and chemical explanations of life, rather than a coherent discipline. Following the advent of the Mendelian-chromosome theory of heredity in the 1910s and the maturation of atomic theory and quantum mechanics in the 1920s, such explanations seemed within reach. Weaver and others encouraged (and funded) research at the intersection of biology, chemistry and physics, while prominent physicists such as Niels Bohr and Erwin Schrödinger turned their attention to biological speculation. However, in the 1930s and 1940s it was by no means clear which—if any—cross-disciplinary research would bear fruit; work in colloid chemistry, biophysics and radiation biology, crystallography, and other emerging fields all seemed promising.

In 1940, George Beadle and Edward Tatum demonstrated the existence of a precise relationship between genes and proteins. In the course of their experiments connecting genetics with biochemistry, they switched from the genetics mainstay Drosophila to a more appropriate model organism, the fungus Neurospora; the construction and exploitation of new model organisms would become a recurring theme in the development of molecular biology. In 1944, Oswald Avery, working at the Rockefeller Institute of New York, demonstrated that genes are made up of DNA. In 1952, Alfred Hershey and Martha Chase confirmed that the genetic material of the bacteriophage, the virus which infects bacteria, is made up of DNA. In 1953, James Watson and Francis Crick discovered the double helical structure of the DNA molecule based on the discoveries made by Rosalind Franklin. In 1961, François Jacob and Jacques Monod demonstrated that the products of certain genes regulated the expression of other genes by acting upon specific sites at the edge of those genes. They also hypothesized the existence of an intermediary between DNA and its protein products, which they called messenger RNA. Between 1961 and 1965, the relationship between the information contained in DNA and the structure of proteins was determined: there is a code, the genetic code, which creates a correspondence between the succession of nucleotides in the DNA sequence and a series of amino acids in proteins.

The chief discoveries of molecular biology took place in a period of only about twenty-five years. Another fifteen years were required before new and more sophisticated technologies, united today under the name of genetic engineering, would permit the isolation and characterization of genes, in particular those of highly complex organisms.

The exploration of the molecular dominion

If we evaluate the molecular revolution within the context of biological history, it is easy to note that it is the culmination of a long process which began with the first observations through a microscope. The aim of these early researchers was to understand the functioning of living organisms by describing their organization at the microscopic level. From the end of the 18th century, the characterization of the chemical molecules which make up living beings gained increasing attention. Physiological chemistry was born in the 19th century, developed by the German chemist Justus von Liebig, and biochemistry followed at the beginning of the 20th, thanks to another German chemist, Eduard Buchner. Between the molecules studied by chemists and the tiny structures visible under the optical microscope, such as the cellular nucleus or the chromosomes, there was an obscure zone, "the world of the ignored dimensions," as it was called by the physical chemist Wolfgang Ostwald. This world is populated by colloids, chemical compounds whose structure and properties were not well defined.

The successes of molecular biology derived from the exploration of that unknown world by means of the new technologies developed by chemists and physicists: X-ray diffraction, electron microscopy, ultracentrifugation, and electrophoresis. These studies revealed the structure and function of the macromolecules.

A milestone in that process was the work of Linus Pauling in 1949, which for the first time linked the specific genetic mutation in patients with sickle cell disease to a demonstrated change in an individual protein, the hemoglobin in the erythrocytes of heterozygous or homozygous individuals.

The encounter between biochemistry and genetics

The development of molecular biology is also the encounter of two disciplines which made considerable progress in the course of the first thirty years of the twentieth century: biochemistry and genetics. The first studies the structure and function of the molecules which make up living things. Between 1900 and 1940, the central processes of metabolism were described: the process of digestion and the absorption of the nutritive elements derived from alimentation, such as the sugars. Every one of these processes is catalyzed by a particular enzyme. Enzymes are proteins, like the antibodies present in blood or the proteins responsible for muscular contraction. As a consequence, the study of proteins, of their structure and synthesis, became one of the principal objectives of biochemists.

The second discipline of biology which developed at the beginning of the 20th century is genetics. After the rediscovery of the laws of Mendel through the studies of Hugo de Vries, Carl Correns and Erich von Tschermak in 1900, this science began to take shape thanks to the adoption by Thomas Hunt Morgan, in 1910, of a model organism for genetic studies, the famous fruit fly (Drosophila melanogaster). Shortly after, Morgan showed that the genes are localized on chromosomes. Following this discovery, he continued working with Drosophila and, along with numerous other research groups, confirmed the importance of the gene in the life and development of organisms. Nevertheless, the chemical nature of genes and their mechanisms of action remained a mystery. Molecular biologists committed themselves to determining the structure of genes and proteins and to describing the complex relations between them.

The development of molecular biology was not just the fruit of some sort of intrinsic "necessity" in the history of ideas, but was a characteristically historical phenomenon, with all of its unknowns, imponderables and contingencies: the remarkable developments in physics at the beginning of the 20th century highlighted the relative lateness in development in biology, which became the "new frontier" in the search for knowledge about the empirical world. Moreover, the developments of the theory of information and cybernetics in the 1940s, in response to military exigencies, brought to the new biology a significant number of fertile ideas and, especially, metaphors.

The choice of bacteria and of their viruses, the bacteriophages, as models for the study of the fundamental mechanisms of life was almost natural - they are the smallest living organisms known to exist - and at the same time the fruit of individual choices. This model owes its success, above all, to the fame and the sense of organization of Max Delbrück, a German physicist, who was able to create a dynamic research group, based in the United States, whose exclusive scope was the study of the bacteriophage: the phage group.

The phage group was an informal network of biologists that carried out basic research mainly on bacteriophage T4 and made numerous seminal contributions to microbial genetics and the origins of molecular biology in the mid-20th century. In 1961, Sydney Brenner, an early member of the phage group, collaborated with Francis Crick, Leslie Barnett and Richard Watts-Tobin at the Cavendish Laboratory in Cambridge to perform genetic experiments that demonstrated the basic nature of the genetic code for proteins. These experiments, carried out with mutants of the rIIB gene of bacteriophage T4, showed that, for a gene that encodes a protein, three sequential bases of the gene's DNA specify each successive amino acid of the protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid. They also found that the codons do not overlap with each other in the DNA sequence encoding a protein, and that such a sequence is read from a fixed starting point.

During 1962–1964, studies of phage T4 provided an opportunity to examine the function of virtually all of the genes that are essential for growth of the bacteriophage under laboratory conditions. These studies were facilitated by the discovery of two classes of conditional lethal mutants: amber mutants and temperature-sensitive mutants. Studies of these two classes of mutants led to considerable insight into numerous fundamental biological problems: the functions and interactions of the proteins employed in the machinery of DNA replication, DNA repair and DNA recombination; the processes by which viruses are assembled from protein and nucleic acid components (molecular morphogenesis); and the role of chain-terminating codons. One noteworthy study used amber mutants defective in the gene encoding the major head protein of bacteriophage T4. This experiment provided strong evidence for the widely held, but prior to 1964 still unproven, "sequence hypothesis" that the amino acid sequence of a protein is specified by the nucleotide sequence of the gene determining the protein; thus, it demonstrated the co-linearity of the gene with its encoded protein.
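
To make the triplet, non-overlapping, fixed-start reading concrete, here is a minimal illustrative sketch (not from the original article); the toy codon table and example sequences are hypothetical.

```python
# Toy model of the genetic code as established by the Crick-Brenner
# experiments: non-overlapping triplets (codons) read from a fixed
# starting point, with chain-terminating (stop) codons.
CODON_TABLE = {
    "ATG": "Met", "AAA": "Lys", "GCT": "Ala",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Translate a DNA coding strand codon by codon from position 0."""
    protein = []
    for i in range(0, len(dna) - 2, 3):        # step by 3: non-overlapping
        residue = CODON_TABLE.get(dna[i:i+3], "???")
        if residue == "STOP":                  # chain-terminating codon
            break
        protein.append(residue)
    return protein

print(translate("ATGAAAGCTTAA"))   # ['Met', 'Lys', 'Ala']
# Deleting a single base shifts the reading frame and scrambles every
# codon downstream - the behavior of the rIIB frameshift mutants.
print(translate("ATGAAGCTTAA"))    # ['Met', '???', '???']
```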

The geographic panorama of the developments of the new biology was conditioned above all by preceding work. The US, where genetics had developed the most rapidly, and the UK, where there was a coexistence of both genetics and biochemical research of highly advanced levels, were in the avant-garde. Germany, the cradle of the revolutions in physics, with the best minds and the most advanced laboratories of genetics in the world, should have had a primary role in the development of molecular biology. But history decided differently: the arrival of the Nazis in 1933 - and, to a less extreme degree, the rigidification of totalitarian measures in fascist Italy - caused the emigration of a large number of Jewish and non-Jewish scientists. The majority of them fled to the US or the UK, providing an extra impulse to the scientific dynamism of those nations. These movements ultimately made molecular biology a truly international science from its very beginnings.

History of DNA biochemistry

The study of DNA is a central part of molecular biology.

First isolation of DNA

Working in the 19th century, biochemists initially isolated DNA and RNA (mixed together) from cell nuclei. They were relatively quick to appreciate the polymeric nature of their "nucleic acid" isolates, but realized only later that nucleotides were of two types—one containing ribose and the other deoxyribose. It was this subsequent discovery that led to the identification and naming of DNA as a substance distinct from RNA.

Friedrich Miescher (1844–1895) discovered a substance he called "nuclein" in 1869. Somewhat later, he isolated a pure sample of the material now known as DNA from the sperm of salmon, and in 1889 his pupil, Richard Altmann, named it "nucleic acid". This substance was found to exist only in the chromosomes.

In 1919, Phoebus Levene at the Rockefeller Institute identified the components of DNA (the four bases, the sugar and the phosphate chain) and showed that they were linked in the order phosphate-sugar-base. He called each of these units a nucleotide and suggested the DNA molecule consisted of a string of nucleotide units linked together through the phosphate groups, which form the 'backbone' of the molecule. However, Levene thought the chain was short and that the bases repeated in the same fixed order. Torbjörn Caspersson and Einar Hammersten later showed that DNA was a polymer.
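
As a purely illustrative sketch (not from the article), Levene's picture can be rendered as a simple data structure: a string of phosphate-sugar-base units joined through their phosphate groups.

```python
# Hypothetical sketch of Levene's nucleotide-chain picture of DNA:
# each unit is a phosphate-sugar-base triple, and units are joined
# through the phosphates, which form the backbone.
from dataclasses import dataclass

@dataclass
class Nucleotide:
    base: str                  # A, C, G, or T
    sugar: str = "deoxyribose"

def backbone(strand: list[Nucleotide]) -> str:
    """Render the chain as Levene described it: phosphate-sugar units
    in a row, with a base hanging off each sugar."""
    return "-".join(f"P-{n.sugar[0]}({n.base})" for n in strand)

strand = [Nucleotide(b) for b in "GATC"]
print(backbone(strand))    # P-d(G)-P-d(A)-P-d(T)-P-d(C)
```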

Chromosomes and inherited traits

In 1927, Nikolai Koltsov proposed that inherited traits would be transmitted via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". Max Delbrück, Nikolay Timofeev-Ressovsky, and Karl G. Zimmer published results in 1935 suggesting that chromosomes are very large molecules the structure of which can be changed by treatment with X-rays, and that by so changing their structure it was possible to change the heritable characteristics governed by those chromosomes. In 1937 William Astbury produced the first X-ray diffraction patterns from DNA. He was not able to propose the correct structure, but the patterns showed that DNA had a regular structure, and therefore it might be possible to deduce what this structure was.

In 1943, Oswald Theodore Avery and a team of scientists discovered that traits proper to the "smooth" form of the Pneumococcus could be transferred to the "rough" form of the same bacteria merely by making the killed "smooth" (S) form available to the live "rough" (R) form; in essence, a redoing of Frederick Griffith's experiment. Quite unexpectedly, the living R Pneumococcus bacteria were transformed into a new strain of the S form, and the transferred S characteristics turned out to be heritable. Avery called the medium of transfer of traits the transforming principle; he identified DNA as the transforming principle, and not protein as previously thought. In 1952, Alfred Hershey and Martha Chase performed the Hershey–Chase experiment, which showed in T2 phage that DNA is the genetic material (Hershey later shared the Nobel Prize with Luria and Delbrück).

Discovery of the structure of DNA

In the 1950s, three groups made it their goal to determine the structure of DNA. The first group to start was at King's College London and was led by Maurice Wilkins and was later joined by Rosalind Franklin. Another group consisting of Francis Crick and James Watson was at Cambridge. A third group was at Caltech and was led by Linus Pauling. Crick and Watson built physical models using metal rods and balls, in which they incorporated the known chemical structures of the nucleotides, as well as the known position of the linkages joining one nucleotide to the next along the polymer. At King's College Maurice Wilkins and Rosalind Franklin examined X-ray diffraction patterns of DNA fibers. Of the three groups, only the London group was able to produce good quality diffraction patterns and thus produce sufficient quantitative data about the structure.

Helix structure

In 1948, Pauling discovered that many proteins included helical (see alpha helix) shapes. Pauling had deduced this structure from X-ray patterns and from attempts to physically model the structures. (Pauling was also later to suggest an incorrect three chain helical DNA structure based on Astbury's data.) Even in the initial diffraction data from DNA by Maurice Wilkins, it was evident that the structure involved helices. But this insight was only a beginning. There remained the questions of how many strands came together, whether this number was the same for every helix, whether the bases pointed toward the helical axis or away, and ultimately what were the explicit angles and coordinates of all the bonds and atoms. Such questions motivated the modeling efforts of Watson and Crick.

Complementary nucleotides

In their modeling, Watson and Crick restricted themselves to what they saw as chemically and biologically reasonable. Still, the breadth of possibilities was very wide. A breakthrough occurred in 1952, when Erwin Chargaff visited Cambridge and inspired Crick with a description of experiments Chargaff had published in 1947. Chargaff had observed that the proportions of the four nucleotides vary between one DNA sample and the next, but that for particular pairs of nucleotides — adenine and thymine, guanine and cytosine — the two nucleotides are always present in equal proportions.
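
A small illustrative sketch (not from the article) of Chargaff's observation: in a double-stranded sample, adenine matches thymine and guanine matches cytosine in proportion, even though the overall AT versus GC content varies from one DNA to the next. The sequence below is hypothetical.

```python
# Check Chargaff's base ratios on a toy double-stranded DNA sample.
from collections import Counter

def complement(strand: str) -> str:
    """Watson-Crick complement of a strand (reversed)."""
    return strand.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def base_fractions(dna: str) -> dict[str, float]:
    counts = Counter(dna.upper())
    total = sum(counts[b] for b in "ACGT")
    return {b: counts[b] / total for b in "ACGT"}

strand = "AAATGC" * 10                   # hypothetical AT-rich strand
duplex = strand + complement(strand)     # both strands of the sample

f = base_fractions(duplex)
print({b: round(v, 3) for b, v in f.items()})
# A and T come out equal (1/3 each), as do G and C (1/6 each),
# while A+T != G+C - exactly the pattern Chargaff reported.
```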

The Crick and Watson DNA model, built in 1953, was reconstructed largely from its original pieces in 1973 and donated to the Science Museum in London.

Using X-ray diffraction, as well as other data from Rosalind Franklin and her information that the bases were paired, James Watson and Francis Crick arrived at the first accurate model of DNA's molecular structure in 1953, which was accepted through inspection by Rosalind Franklin. The discovery was announced on February 28, 1953; the first Watson/Crick paper appeared in Nature on April 25, 1953. Sir Lawrence Bragg, the director of the Cavendish Laboratory, where Watson and Crick worked, gave a talk at Guy's Hospital Medical School in London on Thursday, May 14, 1953, which resulted in an article by Ritchie Calder in the News Chronicle of London, on Friday, May 15, 1953, entitled "Why You Are You. Nearer Secret of Life." The news reached readers of The New York Times the next day; Victor K. McElheny, in researching his biography "Watson and DNA: Making a Scientific Revolution", found a clipping of a six-paragraph New York Times article written from London and dated May 16, 1953, with the headline "Form of 'Life Unit' in Cell Is Scanned." The article ran in an early edition and was then pulled to make space for news deemed more important. (The New York Times subsequently ran a longer article on June 12, 1953.) The Cambridge University undergraduate newspaper also ran its own short article on the discovery on Saturday, May 30, 1953. Bragg's original announcement at a Solvay Conference on proteins in Belgium on 8 April 1953 went unreported by the press. In 1962 Watson, Crick, and Maurice Wilkins jointly received the Nobel Prize in Physiology or Medicine for their determination of the structure of DNA.

"Central Dogma"

Watson and Crick's model attracted great interest immediately upon its presentation. Arriving at their conclusion on February 21, 1953, Watson and Crick made their first announcement on February 28. In an influential presentation in 1957, Crick laid out the "central dogma of molecular biology", which foretold the relationship between DNA, RNA, and proteins, and articulated the "sequence hypothesis." A critical confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 in the form of the Meselson–Stahl experiment. Work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, and Har Gobind Khorana and others deciphered the genetic code not long afterward (1966). These findings represent the birth of molecular biology.

History of RNA tertiary structure

Pre-history: the helical structure of RNA

The earliest work in RNA structural biology coincided, more or less, with the work being done on DNA in the early 1950s. In their seminal 1953 paper, Watson and Crick suggested that van der Waals crowding by the 2'-OH group of ribose would preclude RNA from adopting a double helical structure identical to the model they proposed - what we now know as B-form DNA. This provoked questions about the three-dimensional structure of RNA: could this molecule form some type of helical structure, and if so, how? As with DNA, early structural work on RNA centered around isolation of native RNA polymers for fiber diffraction analysis. In part because of heterogeneity of the samples tested, early fiber diffraction patterns were usually ambiguous and not readily interpretable. In 1955, Marianne Grunberg-Manago and colleagues published a paper describing the enzyme polynucleotide phosphorylase, which cleaved a phosphate group from nucleotide diphosphates to catalyze their polymerization. This discovery allowed researchers to synthesize homogeneous nucleotide polymers, which they then combined to produce double stranded molecules. These samples yielded the most readily interpretable fiber diffraction patterns yet obtained, suggesting an ordered, helical structure for cognate, double stranded RNA that differed from that observed in DNA. These results paved the way for a series of investigations into the various properties and propensities of RNA. Through the late 1950s and early 1960s, numerous papers were published on various topics in RNA structure, including RNA-DNA hybridization, triple stranded RNA, and even small-scale crystallography of RNA di-nucleotides - G-C and A-U - in primitive helix-like arrangements. For a more in-depth review of the early work in RNA structural biology, see the article The Era of RNA Awakening: Structural biology of RNA in the early years by Alexander Rich.

The beginning: crystal structure of tRNAPhe

In the mid-1960s, the role of tRNA in protein synthesis was being intensively studied. At this point, ribosomes had been implicated in protein synthesis, and it had been shown that an mRNA strand was necessary for the formation of these structures. In a 1964 publication, Warner and Rich showed that ribosomes active in protein synthesis contained tRNA molecules bound at the A and P sites, and discussed the notion that these molecules aided in the peptidyl transferase reaction. However, despite considerable biochemical characterization, the structural basis of tRNA function remained a mystery. In 1965, Holley et al. purified and sequenced the first tRNA molecule, initially proposing that it adopted a cloverleaf structure, based largely on the ability of certain regions of the molecule to form stem loop structures. The isolation of tRNA proved to be the first major windfall in RNA structural biology. Following Robert W. Holley's publication, numerous investigators began work on isolating tRNA for crystallographic study, developing improved methods for isolating the molecule as they worked. By 1968 several groups had produced tRNA crystals, but these proved to be of limited quality and did not yield data at the resolutions necessary to determine structure. In 1971, Kim et al. achieved another breakthrough, producing crystals of yeast tRNAPhe that diffracted to 2-3 Ångström resolution by using spermine, a naturally occurring polyamine, which bound to and stabilized the tRNA. Despite having suitable crystals, however, the structure of tRNAPhe was not immediately solved at high resolution; rather, it took pioneering work in the use of heavy metal derivatives and a good deal more time to produce a high-quality density map of the entire molecule. In 1973, Kim et al. produced a 4 Ångström map of the tRNA molecule in which they could unambiguously trace the entire backbone. This solution would be followed by many more, as various investigators worked to refine the structure and thereby more thoroughly elucidate the details of base pairing and stacking interactions, and validate the published architecture of the molecule.

The tRNAPhe structure is notable in the field of nucleic acid structure in general, as it represented the first solution of a long-chain nucleic acid structure of any kind - RNA or DNA - preceding Richard E. Dickerson's solution of a B-form dodecamer by nearly a decade. Also, tRNAPhe demonstrated many of the tertiary interactions observed in RNA architecture which would not be categorized and more thoroughly understood for years to come, providing a foundation for all future RNA structural research.

The renaissance: the hammerhead ribozyme and the group I intron: P4-P6

For a considerable time following the first tRNA structures, the field of RNA structure did not dramatically advance. The ability to study an RNA structure depended upon the potential to isolate the RNA target. This proved limiting to the field for many years, in part because other known targets - i.e., the ribosome - were significantly more difficult to isolate and crystallize. Further, because other interesting RNA targets had simply not been identified, or were not sufficiently understood to be deemed interesting, there was simply a lack of things to study structurally. As such, for some twenty years following the original publication of the tRNAPhe structure, the structures of only a handful of other RNA targets were solved, with almost all of these belonging to the transfer RNA family. This unfortunate lack of scope would eventually be overcome largely because of two major advancements in nucleic acid research: the identification of ribozymes, and the ability to produce them via in vitro transcription.

Subsequent to Tom Cech's publication implicating the Tetrahymena group I intron as an autocatalytic ribozyme, and Sidney Altman's report of catalysis by ribonuclease P RNA, several other catalytic RNAs were identified in the late 1980s, including the hammerhead ribozyme. In 1994, McKay et al. published the structure of a 'hammerhead RNA-DNA ribozyme-inhibitor complex' at 2.6 Ångström resolution, in which the autocatalytic activity of the ribozyme was disrupted via binding to a DNA substrate. The conformation of the ribozyme published in this paper was eventually shown to be one of several possible states, and although this particular sample was catalytically inactive, subsequent structures have revealed its active-state architecture. This structure was followed by Jennifer Doudna's publication of the structure of the P4-P6 domains of the Tetrahymena group I intron, a fragment of the ribozyme originally made famous by Cech. The second clause in the title of this publication - Principles of RNA Packing - concisely evinces the value of these two structures: for the first time, comparisons could be made between well described tRNA structures and those of globular RNAs outside the transfer family. This allowed the framework of categorization to be built for RNA tertiary structure. It was now possible to propose the conservation of motifs, folds, and various local stabilizing interactions. For an early review of these structures and their implications, see RNA FOLDS: Insights from recent crystal structures, by Doudna and Ferré-D'Amaré.

In addition to the advances being made in global structure determination via crystallography, the early 1990s also saw the implementation of NMR as a powerful technique in RNA structural biology. Coincident with the large-scale ribozyme structures being solved crystallographically, a number of structures of small RNAs and RNAs complexed with drugs and peptides were solved using NMR. In addition, NMR was now being used to investigate and supplement crystal structures, as exemplified by the determination of an isolated tetraloop-receptor motif structure published in 1997. Investigations such as this enabled a more precise characterization of the base pairing and base stacking interactions which stabilized the global folds of large RNA molecules. The importance of understanding RNA tertiary structural motifs was prophetically well described by Michel and Costa in their publication identifying the tetraloop motif: "... it should not come as a surprise if self-folding RNA molecules were to make intensive use of only a relatively small set of tertiary motifs. Identifying these motifs would greatly aid modeling enterprises, which will remain essential as long as the crystallization of large RNAs remains a difficult task".

The modern era: the age of RNA structural biology

The resurgence of RNA structural biology in the mid-1990s has caused a veritable explosion in the field of nucleic acid structural research. Since the publication of the hammerhead and P4-P6 structures, numerous major contributions to the field have been made. Some of the most noteworthy examples include the structures of the Group I and Group II introns, and the ribosome, solved by Nenad Ban and colleagues in the laboratory of Thomas Steitz. The first three structures were produced using in vitro transcription, and NMR has played a role in investigating partial components of all four structures - testaments to the indispensability of both techniques for RNA research. Most recently, the 2009 Nobel Prize in Chemistry was awarded to Ada Yonath, Venkatraman Ramakrishnan and Thomas Steitz for their structural work on the ribosome, demonstrating the prominent role RNA structural biology has taken in modern molecular biology.

History of protein biochemistry

First isolation and classification

Proteins were recognized as a distinct class of biological molecules in the eighteenth century by Antoine Fourcroy and others. Members of this class (called the "albuminoids", Eiweisskörper, or matières albuminoides) were recognized by their ability to coagulate or flocculate under various treatments such as heat or acid; well-known examples at the start of the nineteenth century included albumen from egg whites, blood serum albumin, fibrin, and wheat gluten. The similarity between the cooking of egg whites and the curdling of milk was recognized even in ancient times; for example, the name albumen for the egg-white protein was coined by Pliny the Elder from the Latin albus ovi (egg white).

With the advice of Jöns Jakob Berzelius, the Dutch chemist Gerhardus Johannes Mulder carried out elemental analyses of common animal and plant proteins. To everyone's surprise, all proteins had nearly the same empirical formula, roughly C400H620N100O120 with individual sulfur and phosphorus atoms. Mulder published his findings in two papers (1837, 1838) and hypothesized that there was one basic substance (Grundstoff) of proteins, and that it was synthesized by plants and absorbed from them by animals in digestion. Berzelius was an early proponent of this theory and proposed the name "protein" for this substance in a letter dated 10 July 1838:

The name protein that I propose for the organic oxide of fibrin and albumin, I wanted to derive from [the Greek word] πρωτειος, because it appears to be the primitive or principal substance of animal nutrition.

Mulder went on to identify the products of protein degradation, such as the amino acid leucine, for which he found a (nearly correct) molecular weight of 131 Da.

Purifications and measurements of mass

The minimum molecular weight suggested by Mulder's analyses was roughly 9 kDa, hundreds of times larger than other molecules being studied. Hence, the chemical structure of proteins (their primary structure) was an active area of research until 1949, when Fred Sanger sequenced insulin. The (correct) theory that proteins were linear polymers of amino acids linked by peptide bonds was proposed independently and simultaneously by Franz Hofmeister and Emil Fischer at the same conference in 1902. However, some scientists were sceptical that such long macromolecules could be stable in solution. Consequently, numerous alternative theories of the protein primary structure were proposed, e.g., the colloidal hypothesis that proteins were assemblies of small molecules, the cyclol hypothesis of Dorothy Wrinch, the diketopiperazine hypothesis of Emil Abderhalden and the pyrrol/piperidine hypothesis of Troensgard (1942). Most of these theories had difficulties in accounting for the fact that the digestion of proteins yielded peptides and amino acids. Proteins were finally shown to be macromolecules of well-defined composition (and not colloidal mixtures) by Theodor Svedberg using analytical ultracentrifugation. The possibility that some proteins are non-covalent associations of such macromolecules was shown by Gilbert Smithson Adair (by measuring the osmotic pressure of hemoglobin) and, later, by Frederic M. Richards in his studies of ribonuclease S. The mass spectrometry of proteins has long been a useful technique for identifying posttranslational modifications and, more recently, for probing protein structure.
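
The reasoning behind the "minimum molecular weight" at the start of this section can be made concrete with a short, purely illustrative calculation (not from the article): if a protein contains at least one atom of its rarest element, its molecular weight must be at least that element's atomic mass divided by the element's mass fraction. The sulfur fraction below is a hypothetical value chosen to land near the ~9 kDa of the text.

```python
# Minimum molecular weight from elemental analysis: assuming at least
# one sulfur atom per molecule, MW >= M(S) / (mass fraction of S).
S_ATOMIC_MASS = 32.06    # g/mol

def min_molecular_weight(mass_fraction_s: float) -> float:
    """Smallest molecular weight consistent with >= 1 sulfur atom."""
    return S_ATOMIC_MASS / mass_fraction_s

# Hypothetical protein containing 0.36% sulfur by mass:
print(f"{min_molecular_weight(0.0036):,.0f} g/mol")   # ~8,900 -> ~9 kDa
```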

Most proteins are difficult to purify in more than milligram quantities, even using the most modern methods. Hence, early studies focused on proteins that could be purified in large quantities, e.g., those of blood, egg white, various toxins, and digestive/metabolic enzymes obtained from slaughterhouses. Many techniques of protein purification were developed during World War II in a project led by Edwin Joseph Cohn to purify blood proteins to help keep soldiers alive. In the late 1950s, the Armour Hot Dog Co. purified 1 kg (= one million milligrams) of pure bovine pancreatic ribonuclease A and made it available at low cost to scientists around the world. This generous act made RNase A the main protein for basic research for the next few decades, resulting in several Nobel Prizes.

Protein folding and first structural models

The study of protein folding began in 1910 with a famous paper by Harriette Chick and C. J. Martin, in which they showed that the flocculation of a protein was composed of two distinct processes: the precipitation of a protein from solution was preceded by another process called denaturation, in which the protein became much less soluble, lost its enzymatic activity and became more chemically reactive. In the mid-1920s, Tim Anson and Alfred Mirsky proposed that denaturation was a reversible process, a correct hypothesis that was initially lampooned by some scientists as "unboiling the egg". Anson also suggested that denaturation was a two-state ("all-or-none") process, in which one fundamental molecular transition resulted in the drastic changes in solubility, enzymatic activity and chemical reactivity; he further noted that the free energy changes upon denaturation were much smaller than those typically involved in chemical reactions. In 1929, Hsien Wu hypothesized that denaturation was protein unfolding, a purely conformational change that resulted in the exposure of amino acid side chains to the solvent. According to this (correct) hypothesis, exposure of aliphatic and reactive side chains to solvent rendered the protein less soluble and more reactive, whereas the loss of a specific conformation caused the loss of enzymatic activity. Although considered plausible, Wu's hypothesis was not immediately accepted, since so little was known of protein structure and enzymology and other factors could account for the changes in solubility, enzymatic activity and chemical reactivity. In the early 1960s, Chris Anfinsen showed that the folding of ribonuclease A was fully reversible with no external cofactors needed, verifying the "thermodynamic hypothesis" of protein folding that the folded state represents the global minimum of free energy for the protein.
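
Anson's two-state picture lends itself to a short, illustrative calculation (not from the article): treat folding as an equilibrium F <-> U and compute the fraction folded from the unfolding free energy. The dG values below are hypothetical, chosen in the small, few-kcal/mol range the text alludes to.

```python
# Two-state ("all-or-none") folding: fraction of molecules folded as a
# function of the unfolding free energy dG = G_unfolded - G_folded.
import math

R_KCAL = 1.987e-3    # gas constant, kcal/(mol*K)

def fraction_folded(dg_kcal: float, temp_k: float = 298.15) -> float:
    """Fraction folded for F <-> U with unfolding free energy dG."""
    k_unfold = math.exp(-dg_kcal / (R_KCAL * temp_k))   # [U]/[F]
    return 1.0 / (1.0 + k_unfold)

for dg in (5.0, 1.0, 0.0, -1.0):     # hypothetical stabilities
    print(f"dG = {dg:+.1f} kcal/mol -> folded fraction = "
          f"{fraction_folded(dg):.3f}")
```

Even a modest dG of a few kcal/mol keeps nearly every molecule folded, consistent with the sharp, all-or-none transitions Anson described.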

The hypothesis of protein folding was followed by research into the physical interactions that stabilize folded protein structures. The crucial role of hydrophobic interactions was hypothesized by Dorothy Wrinch and Irving Langmuir, as a mechanism that might stabilize her cyclol structures. Although supported by J. D. Bernal and others, this (correct) hypothesis was rejected along with the cyclol hypothesis, which was disproven in the 1930s by Linus Pauling (among others). Instead, Pauling championed the idea that protein structure was stabilized mainly by hydrogen bonds, an idea advanced initially by William Astbury (1933). Remarkably, Pauling's incorrect theory about H-bonds resulted in his correct models for the secondary structure elements of proteins, the alpha helix and the beta sheet. The hydrophobic interaction was restored to its correct prominence by a famous article in 1959 by Walter Kauzmann on denaturation, based partly on work by Kaj Linderstrøm-Lang. The ionic nature of proteins was demonstrated by Bjerrum, Weber and Arne Tiselius, but Linderstrøm-Lang showed that the charges were generally accessible to solvent and not bound to each other (1949).

The secondary and low-resolution tertiary structure of globular proteins was investigated initially by hydrodynamic methods, such as analytical ultracentrifugation and flow birefringence. Spectroscopic methods to probe protein structure (such as circular dichroism, fluorescence, near-ultraviolet and infrared absorbance) were developed in the 1950s. The first atomic-resolution structures of proteins were solved by X-ray crystallography in the 1960s and by NMR in the 1980s. As of 2019, the Protein Data Bank has over 150,000 atomic-resolution structures of proteins. In more recent times, cryo-electron microscopy of large macromolecular assemblies has achieved atomic resolution, and computational protein structure prediction of small protein domains is approaching atomic resolution.

 
