History of molecular biology

The history of molecular biology begins in the 1930s with the convergence of various, previously distinct biological and physical disciplines: biochemistry, genetics, microbiology, virology and physics. With the hope of understanding life at its most fundamental level, numerous physicists and chemists also took an interest in what would become molecular biology.

In its modern sense, molecular biology attempts to explain the phenomena of life starting from the macromolecular properties that generate them. Two categories of macromolecules in particular are the focus of the molecular biologist: 1) nucleic acids, among which the most famous is deoxyribonucleic acid (or DNA), the constituent of genes, and 2) proteins, which are the active agents of living organisms. One definition of the scope of molecular biology therefore is to characterize the structure, function and relationships between these two types of macromolecules. This relatively limited definition allows for the estimation of a date for the so-called "molecular revolution", or at least to establish a chronology of its most fundamental developments.

General overview

In its earliest manifestations, molecular biology—the name was coined by Warren Weaver of the Rockefeller Foundation in 1938—was an idea of physical and chemical explanations of life, rather than a coherent discipline. Following the advent of the Mendelian-chromosome theory of heredity in the 1910s and the maturation of atomic theory and quantum mechanics in the 1920s, such explanations seemed within reach. Weaver and others encouraged (and funded) research at the intersection of biology, chemistry and physics, while prominent physicists such as Niels Bohr and Erwin Schrödinger turned their attention to biological speculation. However, in the 1930s and 1940s it was by no means clear which—if any—cross-disciplinary research would bear fruit; work in colloid chemistry, biophysics and radiation biology, crystallography, and other emerging fields all seemed promising.

In 1940, George Beadle and Edward Tatum demonstrated the existence of a precise relationship between genes and proteins. In the course of their experiments connecting genetics with biochemistry, they switched from the genetics mainstay Drosophila to a more appropriate model organism, the fungus Neurospora; the construction and exploitation of new model organisms would become a recurring theme in the development of molecular biology. In 1944, Oswald Avery, working at the Rockefeller Institute of New York, demonstrated that genes are made up of DNA (see Avery–MacLeod–McCarty experiment). In 1952, Alfred Hershey and Martha Chase confirmed that the genetic material of the bacteriophage, the virus which infects bacteria, is made up of DNA (see Hershey–Chase experiment). In 1953, James Watson and Francis Crick discovered the double helical structure of the DNA molecule, drawing on X-ray diffraction data obtained by Rosalind Franklin. In 1961, François Jacob and Jacques Monod demonstrated that the products of certain genes regulated the expression of other genes by acting upon specific sites at the edge of those genes. They also hypothesized the existence of an intermediary between DNA and its protein products, which they called messenger RNA. Between 1961 and 1965, the relationship between the information contained in DNA and the structure of proteins was determined: there is a code, the genetic code, which creates a correspondence between the succession of nucleotides in the DNA sequence and a series of amino acids in proteins.

In April 2023, scientists, citing new evidence, concluded that Rosalind Franklin had been a contributor and "equal player" in the discovery of DNA's structure, rather than the lesser figure portrayed in many accounts written after the discovery.

The chief discoveries of molecular biology took place in a period of only about twenty-five years. Another fifteen years were required before new and more sophisticated technologies, united today under the name of genetic engineering, would permit the isolation and characterization of genes, in particular those of highly complex organisms.

The exploration of the molecular dominion

If we evaluate the molecular revolution within the context of biological history, it is easy to note that it is the culmination of a long process which began with the first observations through a microscope. The aim of these early researchers was to understand the functioning of living organisms by describing their organization at the microscopic level. From the end of the 18th century, the characterization of the chemical molecules which make up living beings gained increasing attention, with the birth of physiological chemistry in the 19th century, developed by the German chemist Justus von Liebig, and then of biochemistry at the beginning of the 20th, thanks to another German chemist, Eduard Buchner. Between the molecules studied by chemists and the tiny structures visible under the optical microscope, such as the cellular nucleus or the chromosomes, there was an obscure zone, "the world of the ignored dimensions," as it was called by the chemist Wolfgang Ostwald. This world was populated by colloids, chemical compounds whose structure and properties were not well defined.

The successes of molecular biology derived from the exploration of that unknown world by means of the new technologies developed by chemists and physicists: X-ray diffraction, electron microscopy, ultracentrifugation, and electrophoresis. These studies revealed the structure and function of the macromolecules.

A milestone in that process was the work of Linus Pauling in 1949, which for the first time linked the specific genetic mutation in patients with sickle cell disease to a demonstrated change in an individual protein, the hemoglobin in the erythrocytes of heterozygous or homozygous individuals.

The encounter between biochemistry and genetics

The development of molecular biology is also the encounter of two disciplines which made considerable progress in the course of the first thirty years of the twentieth century: biochemistry and genetics. The first studies the structure and function of the molecules which make up living things. Between 1900 and 1940, the central processes of metabolism were described: the processes of digestion and the absorption of the nutrients derived from food, such as sugars. Every one of these processes is catalyzed by a particular enzyme. Enzymes are proteins, like the antibodies present in blood or the proteins responsible for muscular contraction. As a consequence, the study of proteins, of their structure and synthesis, became one of the principal objectives of biochemists.

The second discipline of biology which developed at the beginning of the 20th century is genetics. After the rediscovery of the laws of Mendel through the studies of Hugo de Vries, Carl Correns and Erich von Tschermak in 1900, this science began to take shape thanks to the adoption by Thomas Hunt Morgan, in 1910, of a model organism for genetic studies, the famous fruit fly (Drosophila melanogaster). Shortly after, Morgan showed that the genes are localized on chromosomes. Following this discovery, he continued working with Drosophila and, along with numerous other research groups, confirmed the importance of the gene in the life and development of organisms. Nevertheless, the chemical nature of genes and their mechanisms of action remained a mystery. Molecular biologists committed themselves to determining the structure of genes and proteins and to describing the complex relations between them.

The development of molecular biology was not just the fruit of some sort of intrinsic "necessity" in the history of ideas, but was a characteristically historical phenomenon, with all of its unknowns, imponderables and contingencies: the remarkable developments in physics at the beginning of the 20th century highlighted the relative lateness in development in biology, which became the "new frontier" in the search for knowledge about the empirical world. Moreover, the developments of the theory of information and cybernetics in the 1940s, in response to military exigencies, brought to the new biology a significant number of fertile ideas and, especially, metaphors.

The choice of bacteria and of their viruses, the bacteriophages, as models for the study of the fundamental mechanisms of life was almost natural—they are among the smallest living systems known—and at the same time the fruit of individual choices. This model owes its success, above all, to the fame and organizational skill of Max Delbrück, a German physicist, who was able to create a dynamic research group, based in the United States, whose exclusive scope was the study of the bacteriophage: the phage group.

The phage group was an informal network of biologists that carried out basic research mainly on bacteriophage T4 and made numerous seminal contributions to microbial genetics and the origins of molecular biology in the mid-20th century. In 1961, Sydney Brenner, an early member of the phage group, collaborated with Francis Crick, Leslie Barnett and Richard Watts-Tobin at the Cavendish Laboratory in Cambridge to perform genetic experiments that demonstrated the basic nature of the genetic code for proteins. These experiments, carried out with mutants of the rIIB gene of bacteriophage T4, showed that, for a gene that encodes a protein, three sequential bases of the gene's DNA specify each successive amino acid of the protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid. They also found that the codons do not overlap with each other in the DNA sequence encoding a protein, and that such a sequence is read from a fixed starting point.

During 1962–1964, work with phage T4 gave researchers the opportunity to study the function of virtually all of the genes that are essential for growth of the bacteriophage under laboratory conditions. These studies were facilitated by the discovery of two classes of conditional lethal mutants: amber mutants and temperature-sensitive mutants. Studies of these two classes of mutants led to considerable insight into numerous fundamental biological problems: the functions and interactions of the proteins employed in the machinery of DNA replication, DNA repair and DNA recombination; the processes by which viruses are assembled from protein and nucleic acid components (molecular morphogenesis); and the role of chain-terminating codons. One noteworthy study used amber mutants defective in the gene encoding the major head protein of bacteriophage T4. This experiment provided strong evidence for the widely held, but prior to 1964 still unproven, "sequence hypothesis" that the amino acid sequence of a protein is specified by the nucleotide sequence of the gene determining the protein. Thus, this study demonstrated the co-linearity of the gene with its encoded protein.
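
A minimal sketch of the reasoning (the sequence and the inserted bases are invented, not the actual rIIB data): if a gene is read as non-overlapping triplets from a fixed starting point, inserting one base shifts the reading frame of everything downstream, while inserting three bases restores it.

```python
# Minimal sketch of non-overlapping triplet reading from a fixed starting point.
# The sequence and the inserted bases are invented, not actual T4 rIIB data.

def read_codons(seq):
    """Split a DNA-like string into successive, non-overlapping triplets."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original   = "ATGGCTGACTTT"
plus_one   = "ATGAGCTGACTTT"    # one base inserted after the first codon
plus_three = "ATGAAAGCTGACTTT"  # three bases inserted after the first codon

print(read_codons(original))    # ['ATG', 'GCT', 'GAC', 'TTT']
print(read_codons(plus_one))    # frame shifted: every downstream codon changes
print(read_codons(plus_three))  # frame restored after the inserted triplet
```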

The geographic panorama of the developments of the new biology was conditioned above all by preceding work. The US, where genetics had developed the most rapidly, and the UK, where there was a coexistence of both genetics and biochemical research of highly advanced levels, were in the avant-garde. Germany, the cradle of the revolutions in physics, with the best minds and the most advanced laboratories of genetics in the world, should have had a primary role in the development of molecular biology. But history decided differently: the arrival of the Nazis in 1933—and, to a less extreme degree, the rigidification of totalitarian measures in fascist Italy—caused the emigration of a large number of Jewish and non-Jewish scientists. The majority of them fled to the US or the UK, providing an extra impulse to the scientific dynamism of those nations. These movements ultimately made molecular biology a truly international science from its very beginnings.

History of DNA biochemistry

The study of DNA is a central part of molecular biology.

First isolation of DNA

Working in the 19th century, biochemists initially isolated DNA and RNA (mixed together) from cell nuclei. They were relatively quick to appreciate the polymeric nature of their "nucleic acid" isolates, but realized only later that nucleotides were of two types—one containing ribose and the other deoxyribose. It was this subsequent discovery that led to the identification and naming of DNA as a substance distinct from RNA.

Friedrich Miescher (1844–1895) discovered a substance he called "nuclein" in 1869. Somewhat later, he isolated a pure sample of the material now known as DNA from the sperm of salmon, and in 1889 his pupil, Richard Altmann, named it "nucleic acid". This substance was found to exist only in the chromosomes.

In 1919 Phoebus Levene at the Rockefeller Institute identified the components (the four bases, the sugar and the phosphate chain) and he showed that the components of DNA were linked in the order phosphate-sugar-base. He called each of these units a nucleotide and suggested the DNA molecule consisted of a string of nucleotide units linked together through the phosphate groups, which are the 'backbone' of the molecule. However Levene thought the chain was short and that the bases repeated in the same fixed order. Torbjörn Caspersson and Einar Hammersten showed that DNA was a polymer.

Chromosomes and inherited traits

In 1927, Nikolai Koltsov proposed that inherited traits would be transmitted via a "giant hereditary molecule" which would be made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". Max Delbrück, Nikolay Timofeev-Ressovsky, and Karl G. Zimmer published results in 1935 suggesting that chromosomes are very large molecules the structure of which can be changed by treatment with X-rays, and that by so changing their structure it was possible to change the heritable characteristics governed by those chromosomes. In 1937 William Astbury produced the first X-ray diffraction patterns from DNA. He was not able to propose the correct structure but the patterns showed that DNA had a regular structure and therefore it might be possible to deduce what this structure was.

In 1943, Oswald Theodore Avery and a team of scientists discovered that traits proper to the "smooth" form of the Pneumococcus could be transferred to the "rough" form of the same bacteria merely by making the killed "smooth" (S) form available to the live "rough" (R) form. Quite unexpectedly, the living R Pneumococcus bacteria were transformed into a new strain of the S form, and the transferred S characteristics turned out to be heritable. Avery called the medium of transfer of traits the transforming principle; he identified DNA as the transforming principle, and not protein as previously thought. He essentially redid Frederick Griffith's experiment. In 1952, Alfred Hershey and Martha Chase did an experiment (the Hershey–Chase experiment) that showed, in T2 phage, that DNA is the genetic material (Hershey later shared the Nobel Prize with Luria and Delbrück).

Discovery of the structure of DNA

In the 1950s, three groups made it their goal to determine the structure of DNA. The first group to start was at King's College London and was led by Maurice Wilkins and was later joined by Rosalind Franklin. Another group consisting of Francis Crick and James Watson was at Cambridge. A third group was at Caltech and was led by Linus Pauling. Crick and Watson built physical models using metal rods and balls, in which they incorporated the known chemical structures of the nucleotides, as well as the known position of the linkages joining one nucleotide to the next along the polymer. At King's College Maurice Wilkins and Rosalind Franklin examined X-ray diffraction patterns of DNA fibers. Of the three groups, only the London group was able to produce good quality diffraction patterns and thus produce sufficient quantitative data about the structure.

Helix structure

In 1948, Pauling discovered that many proteins included helical (see alpha helix) shapes. Pauling had deduced this structure from X-ray patterns and from attempts to physically model the structures. (Pauling was also later to suggest an incorrect three chain helical DNA structure based on Astbury's data.) Even in the initial diffraction data from DNA by Maurice Wilkins, it was evident that the structure involved helices. But this insight was only a beginning. There remained the questions of how many strands came together, whether this number was the same for every helix, whether the bases pointed toward the helical axis or away, and ultimately what were the explicit angles and coordinates of all the bonds and atoms. Such questions motivated the modeling efforts of Watson and Crick.

Complementary nucleotides

In their modeling, Watson and Crick restricted themselves to what they saw as chemically and biologically reasonable. Still, the breadth of possibilities was very wide. A breakthrough occurred in 1952, when Erwin Chargaff visited Cambridge and inspired Crick with a description of experiments Chargaff had published in 1947. Chargaff had observed that the proportions of the four nucleotides vary between one DNA sample and the next, but that for particular pairs of nucleotides—adenine and thymine, guanine and cytosine—the two nucleotides are always present in equal proportions.
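
A rough illustrative check of Chargaff's observation (the sequence below is invented, not one of Chargaff's samples): counting the bases over a strand and its base-paired complement shows that adenine matches thymine and guanine matches cytosine.

```python
from collections import Counter

# Invented double-stranded fragment: one strand plus its base-paired complement.
strand = "ATGCGGATCCTAAGC"
complement = strand.translate(str.maketrans("ATGC", "TACG"))

counts = Counter(strand + complement)  # count bases over both strands
print(counts)
print(counts["A"] == counts["T"], counts["G"] == counts["C"])  # True True
```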

The Crick and Watson DNA model, built in 1953, was reconstructed largely from its original pieces in 1973 and donated to the National Science Museum in London.

Drawing on X-ray diffraction data from Rosalind Franklin, together with her finding that the bases were paired, James Watson and Francis Crick arrived at the first accurate model of DNA's molecular structure in 1953, a model Franklin accepted on inspection. The discovery was announced on February 28, 1953; the first Watson/Crick paper appeared in Nature on April 25, 1953. Sir Lawrence Bragg, the director of the Cavendish Laboratory, where Watson and Crick worked, gave a talk at Guy's Hospital Medical School in London on Thursday, May 14, 1953, which resulted in an article by Ritchie Calder in the News Chronicle of London, on Friday, May 15, 1953, entitled "Why You Are You. Nearer Secret of Life." The news reached readers of The New York Times the next day; Victor K. McElheny, in researching his biography, "Watson and DNA: Making a Scientific Revolution", found a clipping of a six-paragraph New York Times article written from London and dated May 16, 1953, with the headline "Form of 'Life Unit' in Cell Is Scanned." The article ran in an early edition and was then pulled to make space for news deemed more important. (The New York Times subsequently ran a longer article on June 12, 1953.) The Cambridge University undergraduate newspaper also ran its own short article on the discovery on Saturday, May 30, 1953. Bragg's original announcement at a Solvay Conference on proteins in Belgium on April 8, 1953, went unreported by the press. In 1962 Watson, Crick, and Maurice Wilkins jointly received the Nobel Prize in Physiology or Medicine for their determination of the structure of DNA.

"Central Dogma"

Watson and Crick's model attracted great interest immediately upon its presentation. Arriving at their conclusion on February 21, 1953, Watson and Crick made their first announcement on February 28. In an influential presentation in 1957, Crick laid out the "central dogma of molecular biology", which foretold the relationship between DNA, RNA, and proteins, and articulated the "sequence hypothesis." A critical confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 in the form of the Meselson–Stahl experiment. Messenger RNA (mRNA) was identified as an intermediate between DNA sequences and protein synthesis by Brenner, Meselson, and Jacob in 1961. Then, work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, and Har Gobind Khorana and others deciphered the genetic code not long afterward (1966). These findings represent the birth of molecular biology.
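
A toy sketch (not the historical experiments) of the information flow the central dogma describes, using an invented DNA fragment and only a few entries of the standard codon table:

```python
# Toy codon table: only a handful of the 64 codons are included here.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna):
    """Shorthand for transcription: write the coding-strand sequence as RNA."""
    return dna.replace("T", "U")

def translate(mrna):
    """Read non-overlapping triplets from a fixed start until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")  # '?' marks codons missing from the toy table
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

dna = "ATGTTTGGCTAA"
print(translate(transcribe(dna)))  # ['Met', 'Phe', 'Gly']
```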

History of RNA tertiary structure

Pre-history: the helical structure of RNA

The earliest work in RNA structural biology coincided, more or less, with the work being done on DNA in the early 1950s. In their seminal 1953 paper, Watson and Crick suggested that van der Waals crowding by the 2'-OH group of ribose would preclude RNA from adopting a double helical structure identical to the model they proposed—what we now know as B-form DNA. This provoked questions about the three-dimensional structure of RNA: could this molecule form some type of helical structure, and if so, how? As with DNA, early structural work on RNA centered around isolation of native RNA polymers for fiber diffraction analysis. In part because of heterogeneity of the samples tested, early fiber diffraction patterns were usually ambiguous and not readily interpretable. In 1955, Marianne Grunberg-Manago and colleagues published a paper describing the enzyme polynucleotide phosphorylase, which cleaved a phosphate group from nucleotide diphosphates to catalyze their polymerization. This discovery allowed researchers to synthesize homogeneous nucleotide polymers, which they then combined to produce double stranded molecules. These samples yielded the most readily interpretable fiber diffraction patterns yet obtained, suggesting an ordered, helical structure for cognate, double stranded RNA that differed from that observed in DNA. These results paved the way for a series of investigations into the various properties and propensities of RNA. Through the late 1950s and early 1960s, numerous papers were published on various topics in RNA structure, including RNA-DNA hybridization, triple stranded RNA, and even small-scale crystallography of RNA di-nucleotides—G-C and A-U—in primitive helix-like arrangements. For a more in-depth review of the early work in RNA structural biology, see the article The Era of RNA Awakening: Structural biology of RNA in the early years by Alexander Rich.

The beginning: crystal structure of tRNAPHE

In the mid-1960s, the role of tRNA in protein synthesis was being intensively studied. At this point, ribosomes had been implicated in protein synthesis, and it had been shown that an mRNA strand was necessary for the formation of these structures. In a 1964 publication, Warner and Rich showed that ribosomes active in protein synthesis contained tRNA molecules bound at the A and P sites, and discussed the notion that these molecules aided in the peptidyl transferase reaction. However, despite considerable biochemical characterization, the structural basis of tRNA function remained a mystery. In 1965, Holley et al. purified and sequenced the first tRNA molecule, initially proposing that it adopted a cloverleaf structure, based largely on the ability of certain regions of the molecule to form stem loop structures. The isolation of tRNA proved to be the first major windfall in RNA structural biology. Following Robert W. Holley's publication, numerous investigators began work on isolating tRNA for crystallographic study, developing improved methods for isolating the molecule as they worked. By 1968 several groups had produced tRNA crystals, but these proved to be of limited quality and did not yield data at the resolutions necessary to determine structure. In 1971, Kim et al. achieved another breakthrough, producing crystals of yeast tRNAPHE that diffracted to 2–3 Ångström resolutions by using spermine, a naturally occurring polyamine, which bound to and stabilized the tRNA. Despite having suitable crystals, however, the structure of tRNAPHE was not immediately solved at high resolution; rather it took pioneering work in the use of heavy metal derivatives and a good deal more time to produce a high-quality density map of the entire molecule. In 1973, Kim et al. produced a 4 Ångström map of the tRNA molecule in which they could unambiguously trace the entire backbone. This solution would be followed by many more, as various investigators worked to refine the structure and thereby more thoroughly elucidate the details of base pairing and stacking interactions, and validate the published architecture of the molecule.

The tRNAPHE structure is notable in the field of nucleic acid structure in general, as it represented the first solution of a long-chain nucleic acid structure of any kind—RNA or DNA—preceding Richard E. Dickerson's solution of a B-form dodecamer by nearly a decade. Also, tRNAPHE demonstrated many of the tertiary interactions observed in RNA architecture which would not be categorized and more thoroughly understood for years to come, providing a foundation for all future RNA structural research.

The renaissance: the hammerhead ribozyme and the group I intron: P4-P6

For a considerable time following the first tRNA structures, the field of RNA structure did not dramatically advance. The ability to study an RNA structure depended upon the potential to isolate the RNA target. This proved limiting to the field for many years, in part because other known targets—i.e., the ribosome—were significantly more difficult to isolate and crystallize. Further, because other interesting RNA targets had simply not been identified, or were not sufficiently understood to be deemed interesting, there was a lack of things to study structurally. As such, for some twenty years following the original publication of the tRNAPHE structure, the structures of only a handful of other RNA targets were solved, with almost all of these belonging to the transfer RNA family. This unfortunate lack of scope would eventually be overcome largely because of two major advancements in nucleic acid research: the identification of ribozymes, and the ability to produce them via in vitro transcription.

Subsequent to Tom Cech's publication implicating the Tetrahymena group I intron as an autocatalytic ribozyme, and Sidney Altman's report of catalysis by ribonuclease P RNA, several other catalytic RNAs were identified in the late 1980s, including the hammerhead ribozyme. In 1994, McKay et al. published the structure of a 'hammerhead RNA-DNA ribozyme-inhibitor complex' at 2.6 Ångström resolution, in which the autocatalytic activity of the ribozyme was disrupted via binding to a DNA substrate. The conformation of the ribozyme published in this paper was eventually shown to be one of several possible states, and although this particular sample was catalytically inactive, subsequent structures have revealed its active-state architecture. This structure was followed by Jennifer Doudna's publication of the structure of the P4-P6 domains of the Tetrahymena group I intron, a fragment of the ribozyme originally made famous by Cech. The second clause in the title of this publication—Principles of RNA Packing—concisely evinces the value of these two structures: for the first time, comparisons could be made between well described tRNA structures and those of globular RNAs outside the transfer family. This allowed the framework of categorization to be built for RNA tertiary structure. It was now possible to propose the conservation of motifs, folds, and various local stabilizing interactions. For an early review of these structures and their implications, see RNA FOLDS: Insights from recent crystal structures, by Doudna and Ferre-D'Amare.

In addition to the advances being made in global structure determination via crystallography, the early 1990s also saw the implementation of NMR as a powerful technique in RNA structural biology. Coincident with the large-scale ribozyme structures being solved crystallographically, a number of structures of small RNAs and RNAs complexed with drugs and peptides were solved using NMR. In addition, NMR was now being used to investigate and supplement crystal structures, as exemplified by the determination of an isolated tetraloop-receptor motif structure published in 1997. Investigations such as this enabled a more precise characterization of the base pairing and base stacking interactions which stabilized the global folds of large RNA molecules. The importance of understanding RNA tertiary structural motifs was prophetically well described by Michel and Costa in their publication identifying the tetraloop motif: "...it should not come as a surprise if self-folding RNA molecules were to make intensive use of only a relatively small set of tertiary motifs. Identifying these motifs would greatly aid modeling enterprises, which will remain essential as long as the crystallization of large RNAs remains a difficult task".

The modern era: the age of RNA structural biology

The resurgence of RNA structural biology in the mid-1990s has caused a veritable explosion in the field of nucleic acid structural research. Since the publication of the hammerhead and P4-P6 structures, numerous major contributions to the field have been made. Some of the most noteworthy examples include the structures of the Group I and Group II introns, and the ribosome solved by Nenad Ban and colleagues in the laboratory of Thomas Steitz. Several of these structures were produced using in vitro transcription, and NMR has played a role in investigating partial components of many of them—testaments to the indispensability of both techniques for RNA research. Most recently, the 2009 Nobel Prize in Chemistry was awarded to Ada Yonath, Venkatraman Ramakrishnan and Thomas Steitz for their structural work on the ribosome, demonstrating the prominent role RNA structural biology has taken in modern molecular biology.

History of protein biochemistry

First isolation and classification

Proteins were recognized as a distinct class of biological molecules in the eighteenth century by Antoine Fourcroy and others. Members of this class (called the "albuminoids", Eiweisskörper, or matières albuminoides) were recognized by their ability to coagulate or flocculate under various treatments such as heat or acid; well-known examples at the start of the nineteenth century included albumen from egg whites, blood serum albumin, fibrin, and wheat gluten. The similarity between the cooking of egg whites and the curdling of milk was recognized even in ancient times; for example, the name albumen for the egg-white protein was coined by Pliny the Elder from the Latin albus ovi (egg white).

With the advice of Jöns Jakob Berzelius, the Dutch chemist Gerhardus Johannes Mulder carried out elemental analyses of common animal and plant proteins. To everyone's surprise, all proteins had nearly the same empirical formula, roughly C400H620N100O120 with individual sulfur and phosphorus atoms. Mulder published his findings in two papers (1837, 1838) and hypothesized that there was one basic substance (Grundstoff) of proteins, and that it was synthesized by plants and absorbed from them by animals in digestion. Berzelius was an early proponent of this theory and proposed the name "protein" for this substance in a letter dated 10 July 1838:

The name protein that I propose for the organic oxide of fibrin and albumin, I wanted to derive from [the Greek word] πρωτειος, because it appears to be the primitive or principal substance of animal nutrition.

Mulder went on to identify the products of protein degradation such as the amino acid leucine, for which he found a (nearly correct) molecular weight of 131 Da.
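
As a quick check of the figure quoted above: leucine's formula is C6H13NO2, and summing rounded standard atomic masses reproduces a value close to 131 Da (a minimal illustrative calculation, not Mulder's own method).

```python
# Rounded standard atomic masses (Da) and the composition of leucine, C6H13NO2.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
LEUCINE = {"C": 6, "H": 13, "N": 1, "O": 2}

mw = sum(ATOMIC_MASS[element] * count for element, count in LEUCINE.items())
print(round(mw, 1))  # ~131.2 Da, close to the value Mulder reported
```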

Purifications and measurements of mass

The minimum molecular weight suggested by Mulder's analyses was roughly 9 kDa, hundreds of times larger than other molecules being studied. Hence, the chemical structure of proteins (their primary structure) was an active area of research until 1949, when Fred Sanger sequenced insulin. The (correct) theory that proteins were linear polymers of amino acids linked by peptide bonds was proposed independently and simultaneously by Franz Hofmeister and Emil Fischer at the same conference in 1902. However, some scientists were sceptical that such long macromolecules could be stable in solution. Consequently, numerous alternative theories of the protein primary structure were proposed, e.g., the colloidal hypothesis that proteins were assemblies of small molecules, the cyclol hypothesis of Dorothy Wrinch, the diketopiperazine hypothesis of Emil Abderhalden and the pyrrol/piperidine hypothesis of Troensgard (1942). Most of these theories had difficulties in accounting for the fact that the digestion of proteins yielded peptides and amino acids. Proteins were finally shown to be macromolecules of well-defined composition (and not colloidal mixtures) by Theodor Svedberg using analytical ultracentrifugation. The possibility that some proteins are non-covalent associations of such macromolecules was shown by Gilbert Smithson Adair (by measuring the osmotic pressure of hemoglobin) and, later, by Frederic M. Richards in his studies of ribonuclease S. The mass spectrometry of proteins has long been a useful technique for identifying posttranslational modifications and, more recently, for probing protein structure.

Most proteins are difficult to purify in more than milligram quantities, even using the most modern methods. Hence, early studies focused on proteins that could be purified in large quantities, e.g., those of blood, egg white, various toxins, and digestive/metabolic enzymes obtained from slaughterhouses. Many techniques of protein purification were developed during World War II in a project led by Edwin Joseph Cohn to purify blood proteins to help keep soldiers alive. In the late 1950s, the Armour Hot Dog Co. purified 1 kg (= one million milligrams) of pure bovine pancreatic ribonuclease A and made it available at low cost to scientists around the world. This generous act made RNase A the main protein for basic research for the next few decades, resulting in several Nobel Prizes.

Protein folding and first structural models

The study of protein folding began in 1910 with a famous paper by Harriette Chick and C. J. Martin, in which they showed that the flocculation of a protein was composed of two distinct processes: the precipitation of a protein from solution was preceded by another process called denaturation, in which the protein became much less soluble, lost its enzymatic activity and became more chemically reactive. In the mid-1920s, Tim Anson and Alfred Mirsky proposed that denaturation was a reversible process, a correct hypothesis that was initially lampooned by some scientists as "unboiling the egg". Anson also suggested that denaturation was a two-state ("all-or-none") process, in which one fundamental molecular transition resulted in the drastic changes in solubility, enzymatic activity and chemical reactivity; he further noted that the free energy changes upon denaturation were much smaller than those typically involved in chemical reactions. In 1929, Hsien Wu hypothesized that denaturation was protein unfolding, a purely conformational change that resulted in the exposure of amino acid side chains to the solvent. According to this (correct) hypothesis, exposure of aliphatic and reactive side chains to solvent rendered the protein less soluble and more reactive, whereas the loss of a specific conformation caused the loss of enzymatic activity. Although considered plausible, Wu's hypothesis was not immediately accepted, since so little was known of protein structure and enzymology and other factors could account for the changes in solubility, enzymatic activity and chemical reactivity. In the early 1960s, Chris Anfinsen showed that the folding of ribonuclease A was fully reversible with no external cofactors needed, verifying the "thermodynamic hypothesis" of protein folding that the folded state represents the global minimum of free energy for the protein.
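
The two-state picture can be put in quantitative terms with a standard thermodynamic relation not spelled out in the text (a minimal sketch; the stability values below are illustrative): for a folding equilibrium between the native and unfolded states with unfolding free energy ΔG, the equilibrium constant is K = exp(-ΔG/RT), and the folded fraction follows directly. Modest free energies of only a few tens of kJ/mol, far below those of covalent chemistry, already keep nearly the whole population folded.

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # temperature, K

def folded_fraction(dG_unfold_kJ):
    """Two-state model: native <-> unfolded, with unfolding free energy dG (kJ/mol)."""
    K_unfold = math.exp(-dG_unfold_kJ * 1000.0 / (R * T))
    return 1.0 / (1.0 + K_unfold)

# Illustrative stabilities in kJ/mol.
for dG in (5, 20, 40):
    print(dG, round(folded_fraction(dG), 4))
```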

The hypothesis of protein folding was followed by research into the physical interactions that stabilize folded protein structures. The crucial role of hydrophobic interactions was hypothesized by Dorothy Wrinch and Irving Langmuir, as a mechanism that might stabilize her cyclol structures. Although supported by J. D. Bernal and others, this (correct) hypothesis was rejected along with the cyclol hypothesis, which was disproven in the 1930s by Linus Pauling (among others). Instead, Pauling championed the idea that protein structure was stabilized mainly by hydrogen bonds, an idea advanced initially by William Astbury (1933). Remarkably, Pauling's incorrect theory about H-bonds resulted in his correct models for the secondary structure elements of proteins, the alpha helix and the beta sheet. The hydrophobic interaction was restored to its correct prominence by a famous article in 1959 by Walter Kauzmann on denaturation, based partly on work by Kaj Linderstrøm-Lang. The ionic nature of proteins was demonstrated by Bjerrum, Weber and Arne Tiselius, but Linderstrøm-Lang showed that the charges were generally accessible to solvent and not bound to each other (1949).

The secondary and low-resolution tertiary structure of globular proteins was investigated initially by hydrodynamic methods, such as analytical ultracentrifugation and flow birefringence. Spectroscopic methods to probe protein structure (such as circular dichroism, fluorescence, near-ultraviolet and infrared absorbance) were developed in the 1950s. The first atomic-resolution structures of proteins were solved by X-ray crystallography in the 1960s and by NMR in the 1980s. As of 2019, the Protein Data Bank has over 150,000 atomic-resolution structures of proteins. In more recent times, cryo-electron microscopy of large macromolecular assemblies has achieved atomic resolution, and computational protein structure prediction of small protein domains is approaching atomic resolution.

The eclipse of Darwinism

Julian Huxley used the phrase "the eclipse of Darwinism" to describe the state of affairs prior to what he called the "modern synthesis". During the "eclipse", evolution was widely accepted in scientific circles but relatively few biologists believed that natural selection was its primary mechanism. Historians of science such as Peter J. Bowler have used the same phrase as a label for the period within the history of evolutionary thought from the 1880s to around 1920, when alternatives to natural selection were developed and explored—as many biologists considered natural selection to have been a wrong guess on Charles Darwin's part, or at least to be of relatively minor importance.

Four major alternatives to natural selection were in play in the 19th century:

  • Theistic evolution, the belief that God directly guided evolution
  • Neo-Lamarckism, the idea that evolution was driven by the inheritance of characteristics acquired during the life of the organism
  • Orthogenesis, the belief that organisms were affected by internal forces or laws of development that drove evolution in particular directions
  • Mutationism, the idea that evolution was largely the product of mutations that created new forms or species in a single step.

Theistic evolution had largely disappeared from the scientific literature by the end of the 19th century as direct appeals to supernatural causes came to be seen as unscientific. The other alternatives had significant followings well into the 20th century; mainstream biology largely abandoned them only when developments in genetics made them seem increasingly untenable, and when the development of population genetics and the modern synthesis demonstrated the explanatory power of natural selection. Ernst Mayr wrote that as late as 1930 most textbooks still emphasized such non-Darwinian mechanisms.

Context

Evolution was widely accepted in scientific circles within a few years after the publication of On the Origin of Species, but there was much less acceptance of natural selection as its driving mechanism. Six objections were raised to the theory in the 19th century:

  1. The fossil record was discontinuous, suggesting gaps in evolution.
  2. The physicist Lord Kelvin calculated in 1862 that the Earth would have cooled in 100 million years or less from its formation, too little time for evolution.
  3. It was argued that many structures were nonadaptive (functionless), so they could not have evolved under natural selection.
  4. Some structures seemed to have evolved on a regular pattern, like the eyes of unrelated animals such as the squid and mammals.
  5. Natural selection was argued not to be creative, while variation was admitted to be mostly not of value.
  6. The engineer Fleeming Jenkin correctly noted in 1868, reviewing The Origin of Species, that the blending inheritance favoured by Charles Darwin would oppose the action of natural selection (see the sketch after this list).
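
A minimal numerical sketch of Jenkin's swamping argument (the trait values are illustrative, not Jenkin's own figures): under blending inheritance, a single variant mating into an otherwise uniform population has its deviation from the population mean halved each generation, so the novelty is diluted away before selection can accumulate it.

```python
# Jenkin's "swamping" argument under blending inheritance (illustrative numbers).
population_mean = 10.0  # trait value in the general population
value = 20.0            # a single advantageous variant

for generation in range(1, 7):
    value = (value + population_mean) / 2.0  # offspring blend the parental values
    print(generation, round(value, 4))
# The deviation from the mean halves every generation: 5, 2.5, 1.25, ...
```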

Both Darwin and his close supporter Thomas Henry Huxley freely admitted, too, that selection might not be the whole explanation; Darwin was prepared to accept a measure of Lamarckism, while Huxley was comfortable with both sudden (mutational) change and directed (orthogenetic) evolution.

By the end of the 19th century, criticism of natural selection had reached the point that in 1903 the German botanist Eberhard Dennert edited a series of articles intended to show that "Darwinism will soon be a thing of the past, a matter of history; that we even now stand at its death-bed, while its friends are solicitous only to secure for it a decent burial." In 1907, the Stanford University entomologist Vernon Lyman Kellogg, who supported natural selection, asserted that "... the fair truth is that the Darwinian selection theory, considered with regard to its claimed capacity to be an independently sufficient mechanical explanation of descent, stands today seriously discredited in the biological world." He added, however, that there were problems preventing the widespread acceptance of any of the alternatives, as large mutations seemed too uncommon, and there was no experimental evidence of mechanisms that could support either Lamarckism or orthogenesis. Ernst Mayr wrote that a survey of evolutionary literature and biology textbooks showed that as late as 1930 the belief that natural selection was the most important factor in evolution was a minority viewpoint, with only a few population geneticists being strict selectionists.

Motivation for alternatives

A variety of different factors motivated people to propose other evolutionary mechanisms as alternatives to natural selection, some of them dating back before Darwin's Origin of Species. Natural selection, with its emphasis on death and competition, did not appeal to some naturalists because they felt it was immoral, and left little room for teleology or the concept of progress in the development of life. Some of these scientists and philosophers, like St. George Jackson Mivart and Charles Lyell, who came to accept evolution but disliked natural selection, raised religious objections. Others, such as Herbert Spencer, the botanist George Henslow (son of Darwin's mentor John Stevens Henslow, also a botanist), and Samuel Butler, felt that evolution was an inherently progressive process that natural selection alone was insufficient to explain. Still others, including the American paleontologists Edward Drinker Cope and Alpheus Hyatt, had an idealist perspective and felt that nature, including the development of life, followed orderly patterns that natural selection could not explain.

Another factor was the rise of a new faction of biologists at the end of the 19th century, typified by the geneticists Hugo de Vries and Thomas Hunt Morgan, who wanted to recast biology as an experimental laboratory science. They distrusted the work of naturalists like Darwin and Alfred Russel Wallace, dependent on field observations of variation, adaptation, and biogeography, considering these overly anecdotal. Instead they focused on topics like physiology and genetics that could be easily investigated with controlled experiments in the laboratory, and discounted natural selection and the degree to which organisms were adapted to their environment, which could not easily be tested experimentally.

Anti-Darwinist theories during the eclipse

Theistic evolution

Louis Agassiz (here in 1870, with drawings of Radiata) believed in a sequence of creations in which humanity was the goal of a divine plan.

British science developed in the early 19th century on a basis of natural theology which saw the adaptation of fixed species as evidence that they had been specially created to a purposeful divine design. The philosophical concepts of German idealism inspired concepts of an ordered plan of harmonious creation, which Richard Owen reconciled with natural theology as a pattern of homology showing evidence of design. Similarly, Louis Agassiz saw Ernst Haeckel's recapitulation theory, which held that the embryological development of an organism repeats its evolutionary history, as symbolising a pattern of the sequence of creations in which humanity was the goal of a divine plan. In 1844 Vestiges adapted Agassiz's concept into theistic evolutionism. Its anonymous author Robert Chambers proposed a "law" of divinely ordered progressive development, with transmutation of species as an extension of recapitulation theory. This popularised the idea, but it was strongly condemned by the scientific establishment. Agassiz remained forcefully opposed to evolution, and after he moved to America in 1846 his idealist argument from design of orderly development became very influential. In 1858 Owen cautiously proposed that this development could be a real expression of a continuing creative law, but distanced himself from transmutationists. Two years later, in his review of On the Origin of Species, Owen attacked Darwin while at the same time openly supporting evolution, expressing belief in a pattern of transmutation by law-like means. This idealist argument from design was taken up by other naturalists such as George Jackson Mivart and the Duke of Argyll, who rejected natural selection altogether in favor of laws of development that guided evolution down preordained paths.

Many of Darwin's supporters accepted evolution on the basis that it could be reconciled with design. In particular, Asa Gray considered natural selection to be the main mechanism of evolution and sought to reconcile it with natural theology. He proposed that natural selection could be a mechanism by which the evil of suffering produced the greater good of adaptation, but conceded that this had difficulties and suggested that God might influence the variations on which natural selection acted to guide evolution. For Darwin and Thomas Henry Huxley such pervasive supernatural influence was beyond scientific investigation, and George Frederick Wright, an ordained minister who was Gray's colleague in developing theistic evolution, emphasised the need to look for secondary or known causes rather than invoking supernatural explanations: "If we cease to observe this rule there is an end to all science and all sound science."

A secular version of this methodological naturalism was welcomed by a younger generation of scientists who sought to investigate natural causes of organic change, and rejected theistic evolution in science. By 1872 Darwinism in its broader sense of the fact of evolution was accepted as a starting point. Around 1890 only a few older men held onto the idea of design in science, and it had completely disappeared from mainstream scientific discussions by 1900. There was still unease about the implications of natural selection, and those seeking a purpose or direction in evolution turned to neo-Lamarckism or orthogenesis as providing natural explanations.

Neo-Lamarckism

Jean-Baptiste Lamarck

Jean-Baptiste Lamarck had originally proposed a theory on the transmutation of species that was largely based on a progressive drive toward greater complexity. Lamarck also believed, as did many others in the 19th century, that characteristics acquired during the course of an organism's life could be inherited by the next generation, and he saw this as a secondary evolutionary mechanism that produced adaptation to the environment. Typically, such characteristics included changes caused by the use or disuse of a particular organ. It was this mechanism of evolutionary adaptation through the inheritance of acquired characteristics that much later came to be known as Lamarckism. Although Alfred Russel Wallace completely rejected the concept in favor of natural selection, Darwin always included what he called "Effects of the increased Use and Disuse of Parts, as controlled by Natural Selection" in On the Origin of Species, giving examples such as large ground feeding birds getting stronger legs through exercise, and weaker wings from not flying until, like the ostrich, they could not fly at all.

Alpheus Spring Packard's 1872 book Mammoth Cave and its Inhabitants used the example of cave beetles (Anophthalmus and Adelops) that had become blind to argue for Lamarckian evolution through inherited disuse of organs.

In the late 19th century the term neo-Lamarckism came to be associated with the position of naturalists who viewed the inheritance of acquired characteristics as the most important evolutionary mechanism. Advocates of this position included the British writer and Darwin critic Samuel Butler, the German biologist Ernst Haeckel, the American paleontologists Edward Drinker Cope and Alpheus Hyatt, and the American entomologist Alpheus Packard. They considered Lamarckism to be more progressive and thus philosophically superior to Darwin's idea of natural selection acting on random variation. Butler and Cope both believed that this allowed organisms to effectively drive their own evolution, since organisms that developed new behaviors would change the patterns of use of their organs and thus kick-start the evolutionary process. In addition, Cope and Haeckel both believed that evolution was a progressive process. The idea of linear progress was an important part of Haeckel's recapitulation theory. Cope and Hyatt looked for, and thought they found, patterns of linear progression in the fossil record. Packard argued that the loss of vision in the blind cave insects he studied was best explained through a Lamarckian process of atrophy through disuse combined with inheritance of acquired characteristics.

Many American proponents of neo-Lamarckism were strongly influenced by Louis Agassiz, and a number of them, including Hyatt and Packard, were his students. Agassiz had an idealistic view of nature, connected with natural theology, that emphasized the importance of order and pattern. Agassiz never accepted evolution; his followers did, but they continued his program of searching for orderly patterns in nature, which they considered to be consistent with divine providence, and preferred evolutionary mechanisms like neo-Lamarckism and orthogenesis that would be likely to produce them.

In Britain the botanist George Henslow, the son of Darwin's mentor John Stevens Henslow, was an important advocate of neo-Lamarckism. He studied how environmental stress affected the development of plants, and he wrote that the variations induced by such environmental factors could largely explain evolution. The historian of science Peter J. Bowler writes that, as was typical of many 19th century Lamarckians, Henslow did not appear to understand the need to demonstrate that such environmentally induced variations would be inherited by descendants that developed in the absence of the environmental factors that produced them, but merely assumed that they would be.

Polarising the argument: Weismann's germ plasm

August Weismann's germ plasm theory stated that the hereditary material is confined to the gonads. Somatic cells (of the body) develop afresh in each generation from the germ plasm, so changes to the body acquired during a lifetime cannot affect the next generation, as neo-Lamarckism required.

Critics of neo-Lamarckism pointed out that no one had ever produced solid evidence for the inheritance of acquired characteristics. The experimental work of the German biologist August Weismann resulted in the germ plasm theory of inheritance. This led him to declare that inheritance of acquired characteristics was impossible, since the Weismann barrier would prevent any changes that occurred to the body after birth from being inherited by the next generation. This effectively polarised the argument between the Darwinians and the neo-Lamarckians, as it forced people to choose whether to agree or disagree with Weismann and hence with evolution by natural selection. Despite Weismann's criticism, neo-Lamarckism remained the most popular alternative to natural selection at the end of the 19th century, and would remain the position of some naturalists well into the 20th century.

Baldwin effect

As a consequence of the debate over the viability of neo-Lamarckism, in 1896 James Mark Baldwin, Henry Fairfield Osborn and C. Lloyd Morgan all independently proposed a mechanism whereby new learned behaviors could cause the evolution of new instincts and physical traits through natural selection without resort to the inheritance of acquired characteristics. They proposed that if individuals in a species benefited from learning a particular new behavior, the ability to learn that behavior could be favored by natural selection, and the result would be the evolution of new instincts and eventually new physical adaptations. This became known as the Baldwin effect and it has remained a topic of debate and research in evolutionary biology ever since.

Orthogenesis

Henry Fairfield Osborn's 1918 book Origin and Evolution of Life claimed the evolution of Titanothere horns was an example of an orthogenetic trend in evolution.

Orthogenesis was the theory that life has an innate tendency to change in a unilinear fashion, in a particular direction. The term was popularized by Theodor Eimer, a German zoologist, in his 1898 book On Orthogenesis: And the Impotence of Natural Selection in Species Formation. He had studied the coloration of butterflies, and believed he had discovered non-adaptive features which could not be explained by natural selection. Eimer also believed in Lamarckian inheritance of acquired characteristics, but he felt that internal laws of growth determined which characteristics would be acquired and guided the long term direction of evolution down certain paths.

Orthogenesis had a significant following in the 19th century, its proponents including the Russian biologist Leo S. Berg, and the American paleontologist Henry Fairfield Osborn. Orthogenesis was particularly popular among some paleontologists, who believed that the fossil record showed patterns of gradual and constant unidirectional change. Those who accepted this idea, however, did not necessarily accept that the mechanism driving orthogenesis was teleological (goal-directed). They did believe that orthogenetic trends were non-adaptive; in fact they felt that in some cases they led to developments that were detrimental to the organism, such as the large antlers of the Irish elk that they believed led to the animal's extinction.

Support for orthogenesis began to decline during the modern synthesis in the 1940s, when it became apparent that orthogenesis could not explain the complex branching patterns of evolution revealed by statistical analysis of the fossil record by paleontologists. A few biologists however hung on to the idea of orthogenesis as late as the 1950s, claiming that the processes of macroevolution, the long term trends in evolution, were distinct from the processes of microevolution.

Mutationism

Hugo de Vries painting an evening primrose, the plant which had apparently produced new forms by large mutations in his experiments (portrait by Thérèse Schwartze, 1918).

Mutationism was the idea that new forms and species arose in a single step as a result of large mutations. It was seen as a much faster alternative to the Darwinian concept of a gradual process of small random variations being acted on by natural selection. It was popular with early geneticists such as Hugo de Vries, who along with Carl Correns helped rediscover Gregor Mendel's laws of inheritance in 1900; William Bateson, a British zoologist who switched to genetics; and, early in his career, Thomas Hunt Morgan.

The 1901 mutation theory of evolution held that species went through periods of rapid mutation, possibly as a result of environmental stress, that could produce multiple mutations, and in some cases completely new species, in a single generation. Its originator was the Dutch botanist Hugo de Vries. De Vries looked for evidence of mutation extensive enough to produce a new species in a single generation and thought he found it with his work breeding the evening primrose of the genus Oenothera, which he started in 1886. The plants that de Vries worked with seemed to be constantly producing new varieties with striking variations in form and color, some of which appeared to be new species because plants of the new generation could only be crossed with one another, not with their parents. De Vries himself allowed a role for natural selection in determining which new species would survive, but some geneticists influenced by his work, including Morgan, felt that natural selection was not necessary at all. De Vries's ideas were influential in the first two decades of the 20th century, as some biologists felt that mutation theory could explain the sudden emergence of new forms in the fossil record; research on Oenothera spread across the world. However, critics including many field naturalists wondered why no other organism seemed to show the same kind of rapid mutation.

Morgan was a supporter of de Vries's mutation theory and was hoping to gather evidence in favor of it when he started working with the fruit fly Drosophila melanogaster in his lab in 1907. However, it was a researcher in that lab, Hermann Joseph Muller, who determined in 1918 that the new varieties de Vries had observed while breeding Oenothera were the result of polyploid hybrids rather than rapid genetic mutation. Although these geneticists were doubtful of the importance of natural selection, the work of Morgan, Bateson, de Vries and others from 1900 to 1915 established Mendelian genetics linked to chromosomal inheritance, which validated August Weismann's criticism of neo-Lamarckian evolution by discounting the inheritance of acquired characteristics. The work in Morgan's lab with Drosophila also undermined the concept of orthogenesis by demonstrating the random nature of mutation.

End of the eclipse

Several major ideas about evolution came together in the population genetics of the early 20th century to form the modern synthesis, including competition for resources, genetic variation, natural selection, and particulate (Mendelian) inheritance. This ended the eclipse of Darwinism.

During the period 1916–1932, the discipline of population genetics developed largely through the work of the geneticists Ronald Fisher, J.B.S. Haldane, and Sewall Wright. Their work recognized that the vast majority of mutations produced small effects that served to increase the genetic variability of a population rather than creating new species in a single step as the mutationists assumed. They were able to produce statistical models of population genetics that included Darwin's concept of natural selection as the driving force of evolution.

Developments in genetics persuaded field naturalists such as Bernhard Rensch and Ernst Mayr to abandon neo-Lamarckian ideas about evolution in the early 1930s. By the late 1930s, Mayr and Theodosius Dobzhansky had synthesized the ideas of population genetics with the knowledge of field naturalists about the amount of genetic diversity in wild populations, and the importance of genetically distinct subpopulations (especially when isolated from one another by geographical barriers) to create the early 20th century modern synthesis. In 1944 George Gaylord Simpson integrated paleontology into the synthesis by statistically analyzing the fossil record to show that it was consistent with the branching non-directional form of evolution predicted by the synthesis, and in particular that the linear trends cited by earlier paleontologists in support of Lamarckism and orthogenesis did not stand up to careful analysis. Mayr wrote that by the end of the synthesis natural selection together with chance mechanisms like genetic drift had become the universal explanation for evolutionary change.

Historiography

The concept of an eclipse suggests that Darwinian research paused, implying in turn that there had been a preceding period of vigorously Darwinian activity among biologists. However, historians of science such as Mark Largent have argued that while biologists broadly accepted the extensive evidence for evolution presented in The Origin of Species, there was less enthusiasm for natural selection as a mechanism. Biologists instead looked for alternative explanations more in keeping with their worldviews, which included the beliefs that evolution must be directed and that it constituted a form of progress. Further, the idea of a dark eclipse period was convenient to scientists such as Julian Huxley, who wished to paint the modern synthesis as a bright new achievement, and accordingly to depict the preceding period as dark and confused. Huxley's 1942 book Evolution: The Modern Synthesis, argued Largent, therefore suggested that the so-called modern synthesis began after a long period of eclipse lasting until the 1930s, in which Mendelians, neo-Lamarckians, mutationists, and Weismannians, not to mention experimental embryologists and Haeckelian recapitulationists, fought running battles with each other. The idea of an eclipse also allowed Huxley to step aside from what was to him the inconvenient association of evolution with aspects such as social Darwinism, eugenics, imperialism, and militarism. Accounts such as Michael Ruse's very large book Monad to Man, Largent claimed, ignored almost all the early 20th century American evolutionary biologists. Largent has suggested as an alternative to eclipse a biological metaphor, the interphase of Darwinism, interphase being an apparently quiet period in the cycle of cell division and growth.

Thermophotovoltaic energy conversion

From Wikipedia, the free encyclopedia

Thermophotovoltaic (TPV) energy conversion is a direct conversion process from heat to electricity via photons. A basic thermophotovoltaic system consists of a hot object emitting thermal radiation and a photovoltaic cell similar to a solar cell but tuned to the spectrum being emitted from the hot object.

As the emitters in TPV systems generally operate at much lower temperatures than the sun that drives solar cells, their efficiencies tend to be low. Offsetting this through the use of multi-junction cells based on non-silicon materials is common, but generally very expensive. This currently limits TPV to niche roles like spacecraft power and waste heat collection from larger systems like steam turbines.

General concept

PV

Typical photovoltaics work by creating a p–n junction near the front surface of a thin semiconductor material. When photons above the bandgap energy of the material hit atoms within the bulk lower layer, below the junction, an electron is photoexcited and becomes free of its atom. The junction creates an electric field that accelerates the electron forward within the cell until it passes the junction and is free to move to the thin electrodes patterned on the surface. Connecting a wire from the front to the rear allows the electrons to flow back into the bulk and complete the circuit.

Photons with less energy than the bandgap do not eject electrons. Photons with energy above the bandgap will eject higher-energy electrons which tend to thermalize within the material and lose their extra energy as heat. If the cell's bandgap is raised, the electrons that are emitted will have higher energy when they reach the junction and thus result in a higher voltage, but this will reduce the number of electrons emitted as more photons will be below the bandgap energy and thus generate a lower current. As electrical power is the product of voltage and current, there is a sweet spot where the total output is maximized.

Terrestrial solar radiation is typically characterized by a standard known as Air Mass 1.5, or AM1.5. This corresponds to roughly 1,000 W of power per square meter from a source with an apparent temperature of 5780 K. At this temperature, about half of all the energy reaching the surface is in the infrared. Based on this temperature, energy production is maximized when the bandgap is about 1.4 eV, in the near infrared. This happens to be close to the 1.1 eV bandgap of silicon, which allows inexpensive silicon cells to be used for solar PV.

This means that all of the energy below the bandgap, roughly the infrared half of AM1.5, goes to waste. There has been continuing research into cells made of several different layers, each with a different bandgap and thus tuned to a different part of the solar spectrum. As of 2022, cells with overall efficiencies in the range of 40% are commercially available, although they are extremely expensive and have not seen widespread use outside of specific roles like powering spacecraft, where cost is not a significant consideration.
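The sweet-spot argument above can be made concrete with a small detailed-balance calculation. The sketch below is an illustrative model rather than a statement about any real cell: it assumes the sun is a 5780 K black body diluted to 1,000 W/m², the cell sits at 300 K, every above-gap photon is absorbed, and the only loss besides thermalization is radiative recombination. Under those assumptions the single-junction output peaks for a bandgap near 1.3–1.4 eV, in line with the optimum quoted above.

```python
# Minimal detailed-balance sketch (assumptions: sun as a 5780 K black body diluted
# to 1000 W/m^2, cell at 300 K, radiative recombination only).
import numpy as np

H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
KB = 1.381e-23       # Boltzmann constant, J/K
Q = 1.602e-19        # elementary charge, C

T_SUN, T_CELL = 5780.0, 300.0
E = np.linspace(0.01, 5.0, 20000) * Q            # photon energies, J
dE = E[1] - E[0]

def photon_flux(T, E):
    """Hemispherical black-body photon flux per unit energy (m^-2 s^-1 J^-1)."""
    return (2 * np.pi / (H**3 * C**2)) * E**2 / np.expm1(E / (KB * T))

sun = photon_flux(T_SUN, E)
dilution = 1000.0 / np.sum(E * sun * dE)          # scale total power to 1000 W/m^2

def max_power(Eg_eV):
    """Maximum electrical output (W/m^2) of an ideal cell with bandgap Eg_eV."""
    mask = E >= Eg_eV * Q
    j_abs = Q * dilution * np.sum(sun[mask] * dE)            # photogenerated current
    j0 = Q * np.sum(photon_flux(T_CELL, E[mask]) * dE)       # radiative dark current
    V = np.linspace(0.0, Eg_eV, 2000)
    J = j_abs - j0 * np.expm1(Q * V / (KB * T_CELL))
    return np.max(J * V)

gaps = np.arange(0.5, 2.51, 0.02)
powers = [max_power(g) for g in gaps]
best = gaps[int(np.argmax(powers))]
print(f"optimum bandgap ~{best:.2f} eV, efficiency ~{max(powers)/1000:.0%}")
# prints roughly: optimum bandgap ~1.3 eV, efficiency ~31%
```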

TPV

Higher-temperature spectra not only have more energy in total, but also have that energy concentrated in a more pronounced peak. Lower-temperature sources, closer to that of a welding torch, spread their energy out much more widely; efficiently collecting this energy demands multi-layer cells.

The same process of photoexcitation can be used to produce electricity from any emission spectrum, although the number of semiconductor materials that have just the right bandgap for an arbitrary hot object is limited; instead, semiconductors with tuneable bandgaps are needed. It is also difficult to produce solar-like thermal output: an oxyacetylene torch reaches about 3400 K (~3126 °C), while more common commercial heat sources like coal and natural gas burn at much lower temperatures, around 900 °C to about 1300 °C. This further limits the suitable materials. In the case of TPV, most research has focused on gallium antimonide (GaSb), although germanium (Ge) is also suitable.

Another problem with lower-temperature sources is that their energy is more spread out, according to Wien's displacement law. While one can make a practical solar cell with a single bandgap tuned to the peak of the spectrum and just ignore the losses in the IR region, doing the same with a lower temperature source will lose much more of the potential energy and result in very low overall efficiency. This means TPV systems almost always use multi-junction cells in order to reach reasonable double-digit efficiencies. Current research in the area aims at increasing system efficiencies while keeping the system cost low, but even then their roles tend to be niches similar to those of multi-junction solar cells.
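To put numbers on this, the sketch below (emitters idealized as black bodies; the 1.1 eV and 0.72 eV gaps standing in for Si and GaSb) estimates what fraction of a source's radiant power lies above a given bandgap at solar versus burner temperatures.

```python
# Sketch comparing how much of a black-body source's power lies above a bandgap.
import numpy as np

H, C, KB, Q = 6.626e-34, 2.998e8, 1.381e-23, 1.602e-19
E = np.linspace(0.005, 6.0, 60000) * Q     # photon energies, J
dE = E[1] - E[0]

def spectral_power(T):
    """Hemispherical black-body power per unit photon energy (W m^-2 J^-1)."""
    return (2 * np.pi / (H**3 * C**2)) * E**3 / np.expm1(E / (KB * T))

for T, label in [(5780, "sun"), (1573, "1300 C burner"), (1173, "900 C burner")]:
    p = spectral_power(T)
    total = np.sum(p * dE)
    for gap in (1.1, 0.72):   # Si and GaSb bandgaps, eV
        frac = np.sum(p[E >= gap * Q] * dE) / total
        print(f"{label:>14}: {frac:5.1%} of power is above {gap} eV")
# The sun puts most of its power above 1.1 eV, whereas a ~1300 C emitter puts only
# a few percent above it; even a 0.72 eV gap captures only about a fifth of that
# spectrum, which is why TPV leans on narrow-gap or multi-junction cells plus
# photon recycling.
```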

Actual designs

TPV systems generally consist of a heat source, an emitter, and a waste heat rejection system. The TPV cells are placed between the emitter, often a block of metal or similar, and the cooling system, often a passive radiator. PV systems in general operate at lower efficiency as the temperature increases, and in TPV systems, keeping the photovoltaic cool is a significant challenge.

This contrasts with a somewhat related concept, the "thermoradiative" or "negative emission" cells, in which the photodiode is on the hot side of the heat engine. Systems have also been proposed that use a thermoradiative device as an emitter in a TPV system, theoretically allowing power to be extracted from both a hot photodiode and a cold photodiode.

Applications

RTGs

Conventional radioisotope thermoelectric generators (RTGs) used to power spacecraft use a radioactive material whose radiation heats a block of material; the heat is then converted to electricity using thermocouples. Thermocouples are very inefficient, and replacing them with TPV could offer significant improvements in efficiency, allowing a smaller and lighter RTG for any given mission. Experimental systems developed by Emcore (a multi-junction solar cell provider), Creare, Oak Ridge and NASA's Glenn Research Center demonstrated 15 to 20% efficiency. A similar concept developed by the University of Houston reached 30% efficiency, a 3- to 4-fold improvement over existing systems.

Thermoelectric storage

Another area of active research is using TPV as the basis of a thermal storage system. In this concept, electricity being generated in off-peak times is used to heat a large block of material, typically carbon or a phase-change material. The material is surrounded by TPV cells which are in turn backed by a reflector and insulation. During storage, the TPV cells are turned off and the photons pass through them and reflect back into the high-temperature source. When power is needed, the TPV is connected to a load.
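As a rough sense of scale for this idea, the sketch below runs the sensible-heat arithmetic for a hypothetical graphite block; the block mass, specific heat, temperature swing and TPV efficiency are all illustrative assumptions rather than figures from any particular system.

```python
# Back-of-the-envelope sketch of the thermal-battery idea (all numbers are
# illustrative assumptions): a graphite block cycled over a 500 K swing and
# discharged through TPV cells of a given efficiency.
mass_kg = 1000.0          # assumed graphite block mass (one tonne)
cp_j_per_kg_k = 2000.0    # approximate specific heat of graphite at high temperature
delta_t_k = 500.0         # assumed charge/discharge temperature swing
tpv_efficiency = 0.40     # heat-to-electricity efficiency, in line with recent lab cells

stored_heat_j = mass_kg * cp_j_per_kg_k * delta_t_k
stored_heat_kwh = stored_heat_j / 3.6e6
electric_kwh = stored_heat_kwh * tpv_efficiency
print(f"stored heat ~{stored_heat_kwh:.0f} kWh, recoverable electricity ~{electric_kwh:.0f} kWh")
# ~278 kWh of heat and ~111 kWh of electricity per tonne of graphite under these assumptions.
```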

Waste heat collection

TPV cells have been proposed as auxiliary power conversion devices for capture of otherwise lost heat in other power generation systems, such as steam turbine systems or solar cells.

History

Henry Kolm constructed an elementary TPV system at MIT in 1956. However, Pierre Aigrain is widely cited as the inventor, based on lectures he gave at MIT in 1960–1961 which, unlike Kolm's system, led to research and development.

In the 1980s, efficiency reached close to 30%.

In 1997 a prototype TPV hybrid car was built, the "Viking 29" (TPV) powered automobile, designed and built by the Vehicle Research Institute (VRI) at Western Washington University.

In 2022, MIT/NREL announced a device with 41% efficiency. The absorber employed multiple III-V semiconductor layers tuned to absorb ultraviolet, visible, and infrared photons. A gold reflector recycled unabsorbed photons. The device operated at 2400 °C, the temperature at which the tungsten emitter reaches maximum brightness.

In May 2024, researchers announced a device that achieved 44% efficiency when using silicon-carbide (SiC) as the heat storage material (emitter). At 1,435 °C (2,615 °F) the device radiates thermal photons at various energy levels. The semiconductor captures 20 to 30% of the photons. Additional layers include air and a gold reflector layer.

Details

Efficiency

The upper limit for efficiency in TPVs (and all systems that convert heat energy to work) is the Carnot efficiency, that of an ideal heat engine. This efficiency is given by:

η_max = 1 − T_cell / T_emit,

where T_cell is the temperature of the PV converter and T_emit is the temperature of the emitter. Practical systems can achieve T_cell ≈ 300 K and T_emit ≈ 1800 K, giving a maximum possible efficiency of ~83%. This assumes the PV converts the radiation into electrical energy without losses, such as thermalization or Joule heating, though in reality the photovoltaic inefficiency is quite significant. In real devices, as of 2021, the maximum demonstrated efficiency in the laboratory was 35% with an emitter temperature of 1,773 K. This is the efficiency in terms of heat input being converted to electrical power. In complete TPV systems, a necessarily lower total system efficiency may be cited that includes the source of heat; for example, fuel-based TPV systems may report efficiencies in terms of fuel energy to electrical energy, in which case 5% is considered a "world record" level of efficiency. Real-world efficiencies are reduced by effects such as heat transfer losses, electrical conversion losses (TPV voltage outputs are often quite low), and losses due to active cooling of the PV cell.
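A one-line check of the Carnot bound with the temperatures quoted above:

```python
# Quick check of the Carnot limit using the stated temperatures.
T_CELL = 300.0    # K, PV converter temperature
T_EMIT = 1800.0   # K, emitter temperature

eta_carnot = 1.0 - T_CELL / T_EMIT
print(f"Carnot limit: {eta_carnot:.1%}")   # -> 83.3%, matching the ~83% in the text
```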

Emitters

Deviations from perfect absorption and perfect black body behavior lead to light losses. For selective emitters, any light emitted at wavelengths not matched to the bandgap energy of the photovoltaic may not be efficiently converted, reducing efficiency. In particular, emissions associated with phonon resonances are difficult to avoid for wavelengths in the deep infrared, which cannot be practically converted. An ideal emitter would emit no light at wavelengths other than at the bandgap energy, and much TPV research is devoted to developing emitters that better approximate this narrow emission spectrum.

Filters

For black body emitters or imperfect selective emitters, filters reflect non-ideal wavelengths back to the emitter. These filters are imperfect. Any light that is absorbed or scattered and not redirected to the emitter or the converter is lost, generally as heat. Conversely, practical filters often reflect a small percentage of light in desired wavelength ranges. Both are inefficiencies. The absorption of suboptimal wavelengths by the photovoltaic device also contributes inefficiency and has the added effect of heating it, which also decreases efficiency.

Converters

Even for systems where only light of optimal wavelengths is passed to the photovoltaic converter, inefficiencies associated with non-radiative recombination and Ohmic losses exist. There are also losses from Fresnel reflections at the PV surface, optimal-wavelength light that passes through the cell unabsorbed, and the energy difference between higher-energy photons and the bandgap energy (though this tends to be less significant than with solar PVs). Non-radiative recombination losses tend to become less significant as the light intensity increases, while they increase with increasing temperature, so real systems must consider the intensity produced by a given design and operating temperature.

Geometry

In an ideal system, the emitter is surrounded by converters so no light is lost. Realistically, geometries must accommodate the input energy (fuel injection or input light) used to heat the emitter. Additionally, costs have prohibited surrounding the filter with converters. When the emitter reemits light, anything that does not travel to the converters is lost. Mirrors can be used to redirect some of this light back to the emitter; however, the mirrors may have their own losses.

Black body radiation

For black body emitters where photon recirculation is achieved via filters, Planck's law states that a black body emits photons with a spectral flux given by:

I′(λ) = (2πc / λ⁴) · 1 / (exp(hc / (λ k T_emit)) − 1),

where I′ is the photon flux at a specific wavelength, λ, given in units of m⁻³⋅s⁻¹ (photons per unit area, per unit time, per unit wavelength); h is the Planck constant, k is the Boltzmann constant, c is the speed of light, and T_emit is the emitter temperature. Thus, the photon flux with wavelengths in a specific range can be found by integrating over the range. The peak wavelength is determined by the temperature T_emit according to Wien's displacement law:

λ_peak = b / T_emit,

where b is Wien's displacement constant. For most materials, the maximum temperature an emitter can stably operate at is about 1800 °C. This corresponds to an intensity that peaks at λ ≅ 1600 nm, or an energy of ~0.75 eV. For more reasonable operating temperatures of 1200 °C, this drops to ~0.5 eV. These energies dictate the range of bandgaps that are needed for practical TPV converters (though the peak spectral power is slightly higher). Traditional PV materials such as Si (1.1 eV) and GaAs (1.4 eV) are substantially less practical for TPV systems, as the intensity of the black body spectrum is low at these energies for emitters at realistic temperatures.
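As a rough check on these numbers, the sketch below numerically locates the peak of the photon-flux spectrum I′(λ) at the two quoted emitter temperatures and converts that peak wavelength to a photon energy. The exact values depend on whether one peaks the photon flux or the spectral power, which is why they differ slightly from the figures in the text.

```python
# Numerically locate the peak of the black-body photon-flux spectrum and convert
# the peak wavelength to a photon energy.
import numpy as np

H, C, KB, Q = 6.626e-34, 2.998e8, 1.381e-23, 1.602e-19

def photon_flux(lam, T):
    """I'(lambda): photons per m^2 per s per unit wavelength for a black body."""
    return (2 * np.pi * C / lam**4) / np.expm1(H * C / (lam * KB * T))

lam = np.linspace(0.3e-6, 8e-6, 100000)     # 0.3-8 micrometres
for T_emit in (1800 + 273.15, 1200 + 273.15):
    peak_lam = lam[np.argmax(photon_flux(lam, T_emit))]
    peak_ev = H * C / (peak_lam * Q)
    print(f"T_emit = {T_emit:.0f} K: peak at {peak_lam*1e9:.0f} nm (~{peak_ev:.2f} eV)")
# Roughly 1770 nm (~0.70 eV) at 1800 C and 2490 nm (~0.50 eV) at 1200 C,
# of the same order as the ~0.75 eV and ~0.5 eV figures quoted above.
```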

Active components and materials selection

Emitters

Efficiency, temperature resistance and cost are the three major factors for choosing a TPV emitter. Efficiency is determined by energy absorbed relative to incoming radiation. High temperature operation is crucial because efficiency increases with operating temperature. As emitter temperature increases, black-body radiation shifts to shorter wavelengths, allowing for more efficient absorption by photovoltaic cells.

Polycrystalline silicon carbide

Polycrystalline silicon carbide (SiC) is the most commonly used emitter for burner TPVs. SiC is thermally stable to ~1700 °C. However, SiC radiates much of its energy in the long wavelength regime, far lower in energy than even the narrowest bandgap photovoltaic. Such radiation is not converted into electrical energy. However, non-absorbing selective filters in front of the PV, or mirrors deposited on the back side of the PV can be used to reflect the long wavelengths back to the emitter, thereby recycling the unconverted energy. In addition, polycrystalline SiC is inexpensive.

Tungsten

Tungsten is the most common refractory metal that can be used as a selective emitter. It has a relatively high emissivity of 0.45 to 0.47 in the visible and near-IR range and a low emissivity of 0.1 to 0.2 deeper in the infrared. The emitter is usually in the shape of a cylinder with a sealed bottom, which can be considered a cavity. It is attached to the back of a thermal absorber such as SiC and maintains the same temperature. Emission occurs in the visible and near-IR range, which can be readily converted by the PV to electrical energy. However, compared to other metals, tungsten oxidizes more easily.

Rare-earth oxides

Rare-earth oxides such as ytterbium oxide (Yb2O3) and erbium oxide (Er2O3) are the most commonly used selective emitters. These oxides emit a narrow band of wavelengths in the near-infrared region, allowing the emission spectrum to be tailored to better fit the absorbance characteristics of a particular PV material. The peak of the emission spectrum occurs at 1.29 eV for Yb2O3 and 0.827 eV for Er2O3. As a result, Yb2O3 can be used as a selective emitter for silicon cells and Er2O3 for GaSb or InGaAs. However, the slight mismatch between the emission peaks and the band gap of the absorber costs significant efficiency. Selective emission only becomes significant at about 1100 °C and increases with temperature; below 1700 °C, selective emission of rare-earth oxides is fairly low, further decreasing efficiency. Currently, 13% efficiency has been achieved with Yb2O3 and silicon PV cells. In general, selective emitters have had limited success; more often, filters are used with black body emitters to pass wavelengths matched to the bandgap of the PV and reflect mismatched wavelengths back to the emitter.
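To see the mismatch concretely, the short sketch below converts the quoted emission peaks and bandgaps to wavelengths using λ ≈ 1240 eV·nm / E.

```python
# Convert the quoted emission peaks and bandgaps to wavelengths (lambda = hc/E).
HC_EV_NM = 1239.8   # hc in eV*nm

pairs = {
    "Yb2O3 peak (1.29 eV) vs Si gap (1.1 eV)":     (1.29, 1.1),
    "Er2O3 peak (0.827 eV) vs GaSb gap (0.72 eV)": (0.827, 0.72),
}
for label, (e_emit, e_gap) in pairs.items():
    lam_emit = HC_EV_NM / e_emit
    lam_gap = HC_EV_NM / e_gap
    print(f"{label}: emission ~{lam_emit:.0f} nm, cell absorbs out to ~{lam_gap:.0f} nm")
# Yb2O3 emits near 960 nm while Si absorbs out to ~1130 nm; Er2O3 emits near
# 1500 nm while GaSb absorbs out to ~1720 nm. The emission peak sits above the
# bandgap, so the excess energy of each absorbed photon is lost as heat.
```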

Photonic crystals

Photonic crystals allow precise control of electromagnetic wave properties. These materials give rise to a photonic bandgap (PBG); in the spectral range of the PBG, electromagnetic waves cannot propagate. Engineering these materials allows some ability to tailor their emission and absorption properties, allowing for more effective emitter design. Selective emitters with peaks at higher energy than the black body peak (for practical TPV temperatures) allow for wider-bandgap converters, which are traditionally cheaper to manufacture and less temperature sensitive. Researchers at Sandia Labs predicted high efficiency (34% of emitted light converted to electricity) for a TPV system based on an emitter made from tungsten photonic crystals. However, manufacturing these devices is difficult and not yet commercially feasible.

Photovoltaic cells

Silicon

Early TPV work focused on the use of silicon. Silicon's commercial availability, low cost, scalability and ease of manufacture make it an appealing candidate. However, the relatively wide bandgap of Si (1.1 eV) is not ideal for use with a black body emitter at realistic operating temperatures. Calculations indicate that Si PVs are only feasible at temperatures much higher than 2000 K, and no emitter has been demonstrated that can operate at such temperatures. These engineering difficulties led to the pursuit of lower-bandgap semiconductor PVs.

Using selective radiators with Si PVs is still a possibility. Selective radiators would eliminate high and low energy photons, reducing heat generated. Ideally, selective radiators would emit no radiation beyond the band edge of the PV converter, increasing conversion efficiency significantly. No efficient TPVs have been realized using Si PVs.

Germanium

Early investigations into low-bandgap semiconductors focused on germanium (Ge). Ge has a bandgap of 0.66 eV, allowing conversion of a much higher fraction of incoming radiation. However, poor performance was observed due to the high effective electron mass of Ge. Compared to III-V semiconductors, Ge's high electron effective mass leads to a high density of states in the conduction band and therefore a high intrinsic carrier concentration. As a result, Ge diodes have a high dark (saturation) current and therefore a low open-circuit voltage. In addition, surface passivation of germanium has proven difficult.
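The link between dark current and open-circuit voltage can be illustrated with the ideal-diode relation Voc ≈ (kT/q)·ln(Jsc/J0 + 1); the current densities in the sketch below are made-up illustrative values, not measurements of Ge or III-V cells.

```python
# Illustrative sketch (idealized diode behavior, made-up current values) of why a
# higher dark/saturation current J0 depresses the open-circuit voltage.
import math

KT_Q = 0.02585                  # thermal voltage at 300 K, volts
JSC = 1.0                       # photocurrent density, A/cm^2 (arbitrary illustrative value)

for name, j0 in [("low-J0 material (III-V-like)", 1e-8),
                 ("high-J0 material (Ge-like)",   1e-4)]:
    voc = KT_Q * math.log(JSC / j0 + 1.0)
    print(f"{name}: J0 = {j0:g} A/cm^2 -> Voc ~ {voc:.2f} V")
# Every factor of ten in J0 costs about kT/q * ln(10) ~ 60 mV of Voc, so a high
# intrinsic carrier concentration (and hence a high J0) translates directly into
# a lower open-circuit voltage.
```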

Gallium antimonide

The gallium antimonide (GaSb) PV cell, invented in 1989, is the basis of most PV cells in modern TPV systems. GaSb is a III-V semiconductor with the zinc blende crystal structure. The GaSb cell is a key development owing to its narrow bandgap of 0.72 eV, which allows it to respond to light at longer wavelengths than a silicon solar cell, enabling higher power densities in conjunction with man-made emission sources. A solar cell with 35% efficiency was demonstrated using a bilayer PV with GaAs and GaSb, setting the solar cell efficiency record at the time.

Manufacturing a GaSb PV cell is quite simple. Czochralski tellurium-doped n-type GaSb wafers are commercially available. Vapor-based zinc diffusion is carried out at elevated temperatures (~450 °C) to allow for p-type doping. Front and back electrical contacts are patterned using traditional photolithography techniques and an anti-reflective coating is deposited. Efficiencies are estimated at ~20% using a 1000 °C black body spectrum. The radiative limit for efficiency of the GaSb cell in this setup is 52%.

Indium gallium arsenide antimonide

Indium gallium arsenide antimonide (InGaAsSb, i.e. InxGa1−xAsySb1−y) is a compound III-V semiconductor. The addition of GaAs allows for a narrower bandgap (0.5 to 0.6 eV), and therefore better absorption of long wavelengths. Specifically, the bandgap was engineered to 0.55 eV. With this bandgap, the compound achieved a photon-weighted internal quantum efficiency of 79% with a fill factor of 65% for a black body at 1100 °C. This was for a device grown on a GaSb substrate by organometallic vapour phase epitaxy (OMVPE). Devices have also been grown by molecular beam epitaxy (MBE) and liquid phase epitaxy (LPE). The internal quantum efficiencies (IQE) of these devices approach 90%, while devices grown by the other two techniques exceed 95%. The largest problem with InGaAsSb cells is phase separation: compositional inconsistencies throughout the device degrade its performance. When phase separation can be avoided, the IQE and fill factor of InGaAsSb approach theoretical limits in wavelength ranges near the bandgap energy, although the Voc/Eg ratio remains far from ideal. Current methods to manufacture InGaAsSb PVs are expensive and not commercially viable.

Indium gallium arsenide

Indium gallium arsenide (InGaAs) is a compound III-V semiconductor. It can be applied in two ways for use in TPVs. When lattice-matched to an InP substrate, InGaAs has a bandgap of 0.74 eV, no better than GaSb. Devices of this configuration have been produced with a fill factor of 69% and an efficiency of 15%. However, to absorb longer-wavelength photons, the bandgap may be engineered by changing the ratio of In to Ga; the range of bandgaps for this system is from about 0.4 to 1.4 eV. These different compositions, however, cause strain against the InP substrate, which can be controlled with graded layers of InGaAs of different compositions. This approach was used to develop a device with a quantum efficiency of 68% and a fill factor of 68%, grown by MBE, with a bandgap of 0.55 eV achieved in the compound In0.68Ga0.32As. InGaAs is a well-developed material that can also be made to lattice match with Ge, resulting in low defect densities; Ge as a substrate is a significant advantage over more expensive or harder-to-produce substrates.

Indium phosphide arsenide antimonide

The InPAsSb quaternary alloy has been grown by both OMVPE and LPE. When lattice-matched to InAs, it has a bandgap in the range 0.3–0.55 eV. The benefits of such a low band gap have not been studied in depth. Therefore, cells incorporating InPAsSb have not been optimized and do not yet have competitive performance. The longest spectral response from an InPAsSb cell studied was 4.3 μm with a maximum response at 3 μm. For this and other low-bandgap materials, high IQE for long wavelengths is hard to achieve due to an increase in Auger recombination.

Lead tin selenide/Lead strontium selenide quantum wells

PbSnSe/PbSrSe quantum well materials, which can be grown by MBE on silicon substrates, have been proposed for low cost TPV device fabrication. These IV-VI semiconductor materials can have bandgaps between 0.3 and 0.6 eV. Their symmetric band structure and lack of valence band degeneracy result in low Auger recombination rates, typically more than an order of magnitude smaller than those of comparable bandgap III-V semiconductor materials.

Applications

TPVs promise efficient and economically viable power systems for both military and commercial applications. Compared to traditional nonrenewable energy sources, burner TPVs have low NOx emissions and are virtually silent. Solar TPVs are a source of emission-free renewable energy. TPVs can be more efficient than PV systems owing to recycling of unabsorbed photons, although losses at each energy conversion step lower overall efficiency. When TPVs are used with a burner source, they provide on-demand energy, so energy storage may not be needed. In addition, owing to the PV's proximity to the radiative source, TPVs can generate current densities 300 times those of conventional PVs.

Energy storage

Man-portable power

Battlefield dynamics require portable power. Conventional diesel generators are too heavy for use in the field. Scalability allows TPVs to be smaller and lighter than conventional generators. Also, TPVs have few emissions and are silent. Multifuel operation is another potential benefit.

Investigations in the 1970s failed due to PV limitations. However, the GaSb photocell led to a renewed effort in the 1990s with improved results. In early 2001, JX Crystals delivered a TPV-based battery charger to the US Army that produced 230 W fueled by propane. This prototype used a SiC emitter operating at 1250 °C and GaSb photocells, and was approximately 0.5 m tall. The power source had an efficiency of 2.5%, calculated as the ratio of the power generated to the thermal energy of the fuel burned. This is too low for practical battlefield use, and no portable TPV power sources have reached troop testing.
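Some rough arithmetic on those prototype figures (the propane heating value is an assumed textbook number, not from the report):

```python
# Rough arithmetic on the JX Crystals prototype figures above.
electric_w = 230.0
efficiency = 0.025                   # electrical output / fuel heat input
propane_lhv_mj_per_kg = 46.0         # assumed lower heating value of propane

fuel_heat_w = electric_w / efficiency          # heat released by the burner
fuel_rate_kg_per_h = fuel_heat_w * 3600 / (propane_lhv_mj_per_kg * 1e6)
print(f"fuel heat input ~{fuel_heat_w/1000:.1f} kW, propane burn ~{fuel_rate_kg_per_h:.2f} kg/h")
# ~9.2 kW thermal and roughly 0.7 kg of propane per hour to sustain 230 W of
# electrical output, which is why 2.5% was judged too low for battlefield use.
```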

Grid storage

Converting spare electricity into heat for high-volume, long-term storage is under research at various companies, which claim that costs could be much lower than for lithium-ion batteries.[14] Graphite may be used as the storage medium, with molten tin as the heat-transfer fluid, at temperatures around 2,000 °C. See LaPotin, A., Schulte, K.L., Steiner, M.A. et al., "Thermophotovoltaic efficiency of 40%", Nature 604, 287–291 (2022).

Spacecraft

Space power generation systems must provide consistent and reliable power without large amounts of fuel. As a result, solar and radioisotope fuels (extremely high power density and long lifetime) are ideal, and TPVs have been proposed for each. In the case of solar energy, orbital spacecraft may be better locations for the large and potentially cumbersome concentrators required for practical TPVs. However, because of weight considerations and the inefficiencies associated with the more complicated design of TPVs, conventional PVs continue to dominate these applications.

The output of isotopes is thermal energy. In the past, thermoelectricity (direct thermal-to-electrical conversion with no moving parts) has been used because TPV efficiency was less than the ~10% of thermoelectric converters, and Stirling engines were deemed too unreliable despite conversion efficiencies above 20%. However, with recent advances in small-bandgap PVs, TPVs are becoming more promising. A TPV radioisotope converter with 20% efficiency was demonstrated that uses a tungsten emitter heated to 1350 K, with tandem filters and a 0.6 eV bandgap InGaAs PV converter cooled to room temperature. About 30% of the lost energy was due to the optical cavity and filters; the remainder was due to the efficiency of the PV converter.

Low-temperature operation of the converter is critical to the efficiency of TPV: heating PV converters increases their dark current, thereby reducing efficiency. The converter is heated by the radiation from the emitter. In terrestrial systems it is reasonable to dissipate this heat with a heat sink without using additional energy. In space, however, heat can only be rejected by radiation, so removing the converter's heat efficiently without adding significant mass or power consumption is a substantial challenge.
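A minimal sketch of the radiator-sizing problem, assuming illustrative waste-heat and emissivity values and ignoring sunlight and view-factor effects:

```python
# Why heat rejection in space is hard: the only sink is radiation, so the required
# radiator area follows from the Stefan-Boltzmann law, P = emissivity * sigma * A * T^4.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.9       # assumed radiator emissivity

waste_heat_w = 1000.0  # assumed waste heat reaching the PV converter
for t_radiator_k in (300.0, 250.0):
    area_m2 = waste_heat_w / (EMISSIVITY * SIGMA * t_radiator_k**4)
    print(f"radiator at {t_radiator_k:.0f} K: ~{area_m2:.1f} m^2 needed for {waste_heat_w:.0f} W")
# ~2.4 m^2 at 300 K and ~5.0 m^2 at 250 K: keeping the converter cold costs
# radiator area and mass.
```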

Commercial applications

Off-grid generators

TPVs can provide continuous power to off-grid homes. Traditional PVs provide little power during winter months and none at night, while TPVs can use alternative fuels to augment solar-only production.

The greatest advantage of TPV generators is cogeneration of heat and power. In cold climates, a TPV generator can function as both a heater/stove and a power generator. JX Crystals developed a prototype TPV heating stove/generator that burns natural gas and uses a SiC emitter operating at 1250 °C and GaSb photocells to output 25,000 BTU/hr (7.3 kW of heat) while simultaneously generating 100 W (1.4% efficiency). However, costs render it impractical.
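A quick consistency check of those figures; the 1.4% appears to be the electrical output taken relative to the heat output:

```python
# Consistency check of the stove/generator figures quoted above.
heat_btu_per_hr = 25000.0
heat_w = heat_btu_per_hr * 1055.06 / 3600     # convert BTU/hr to watts
electric_w = 100.0
print(f"heat ~{heat_w/1000:.1f} kW, electric/heat ~{electric_w/heat_w:.1%}")
# -> ~7.3 kW and ~1.4%, consistent with the numbers in the text.
```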

Combining a heater and a generator is called combined heat and power (CHP). Many TPV CHP scenarios have been theorized, but a study found that a generator using boiling coolant was the most cost-efficient.[37] The proposed CHP would use a SiC IR emitter operating at 1425 °C and GaSb photocells cooled by the boiling coolant. The TPV CHP would output 85,000 BTU/hr (25 kW of heat) and generate 1.5 kW of electricity. The estimated efficiency would be 12.3%, requiring an investment of 0.08 €/kWh assuming a 20-year lifetime. The estimated costs of other, non-TPV CHPs are 0.12 €/kWh for gas-engine CHP and 0.16 €/kWh for fuel-cell CHP. This furnace was not commercialized because the market was not thought to be large enough.

Recreational vehicles

TPVs have been proposed for use in recreational vehicles. Their ability to use multiple fuel sources makes them interesting as more sustainable fuels emerge, and their silent operation would allow them to replace noisy conventional generators (for example, during "quiet hours" in national park campgrounds). However, the emitter temperatures required for practical efficiencies make TPVs at this scale unlikely.
