Wednesday, January 14, 2015

RNA world hypothesis


From Wikipedia, the free encyclopedia

A comparison of RNA (left) with DNA (right), showing the helices and nucleobases each employs.

The RNA world hypothesis proposes that self-replicating ribonucleic acid (RNA) molecules were precursors to all current life on Earth.[1][2][3] It is generally accepted that current life on Earth descends from an RNA world,[4] although RNA-based life may not have been the first life to exist.[5][6]

RNA stores genetic information like DNA, and catalyzes chemical reactions like a protein enzyme. It may therefore have played a major role in the evolution of cellular life.[7] The RNA world would eventually have been replaced by the DNA, RNA and protein world of today, likely through an intermediate stage of ribonucleoprotein enzymes such as the ribosome and ribozymes, since proteins large enough to self-fold and have useful activities would only have come about after RNA was available to catalyze peptide ligation or amino acid polymerization.[8] DNA is thought to have taken over the role of data storage because of its greater stability, while proteins, through a greater variety of monomers (amino acids), replaced RNA's role in specialized biocatalysis.

The RNA world hypothesis is supported by many independent lines of evidence, such as the observations that RNA is central to the translation process and that small RNAs can catalyze all of the chemical group and information transfers required for life.[6][9] The structure of the ribosome has been called the "smoking gun," as it showed that the ribosome is a ribozyme, with a central core of RNA and no amino acid side chains within 18 angstroms of the active site where peptide bond formation is catalyzed.[5] Many of the most critical components of cells (those that evolve the slowest) are composed mostly or entirely of RNA. Also, many critical cofactors (ATP, Acetyl-CoA, NADH, etc.) are either nucleotides or substances clearly related to them. This would mean that the RNA and nucleotide cofactors in modern cells are an evolutionary remnant of an RNA-based enzymatic system that preceded the protein-based one seen in all extant life.

Evidence suggests that chemical conditions (including the presence of boron, molybdenum and oxygen) for initially producing RNA molecules may have been more favorable on Mars than on Earth.[2][3] If so, life-suitable molecules originating on Mars may later have migrated to Earth via panspermia or a similar process.[2][3]

History

One of the challenges in studying abiogenesis is that the system of reproduction and metabolism utilized by all extant life involves three distinct types of interdependent macromolecules (DNA, RNA, and protein). This suggests that life could not have arisen in its current form, and mechanisms have therefore been sought whereby the current system might have arisen from a simpler precursor system. The concept of RNA as a primordial molecule[8] can be found in papers by Francis Crick[10] and Leslie Orgel,[11] as well as in Carl Woese's 1967 book The Genetic Code.[12] In 1962 the molecular biologist Alexander Rich, of the Massachusetts Institute of Technology, had posited much the same idea in an article he contributed to a volume issued in honor of Nobel-laureate physiologist Albert Szent-Györgyi.[13] Hans Kuhn in 1972 laid out a possible process by which the modern genetic system might have arisen from a nucleotide-based precursor, and this led Harold White in 1976 to observe that many of the cofactors essential for enzymatic function are either nucleotides or could have been derived from nucleotides. He proposed that these nucleotide cofactors represent "fossils of nucleic acid enzymes".[14] The phrase "RNA World" was first used by Nobel laureate Walter Gilbert in 1986, in a commentary on how recent observations of the catalytic properties of various forms of RNA fit with this hypothesis.[15]

Properties of RNA

The properties of RNA make the idea of the RNA world hypothesis conceptually plausible, though its general acceptance as an explanation for the origin of life requires further evidence.[13] RNA is known to form efficient catalysts, and its similarity to DNA makes its ability to store information clear. Opinions differ, however, as to whether RNA constituted the first autonomous self-replicating system or was a derivative of a still-earlier system.[8] One version of the hypothesis is that a different type of nucleic acid, termed pre-RNA, was the first to emerge as a self-reproducing molecule, to be replaced by RNA only later. On the other hand, the recent finding that activated pyrimidine ribonucleotides can be synthesized under plausible prebiotic conditions[16] means that it is premature to dismiss RNA-first scenarios.[8] Suggestions for 'simple' pre-RNA nucleic acids have included peptide nucleic acid (PNA), threose nucleic acid (TNA) and glycol nucleic acid (GNA).[17][18]
Despite their structural simplicity and possession of properties comparable with RNA, the chemically plausible generation of "simpler" nucleic acids under prebiotic conditions has yet to be demonstrated.[19]

RNA as an enzyme

RNA enzymes, or ribozymes, are found in today's DNA-based life and could be examples of living fossils. Ribozymes play vital roles, such as those in the ribosome, which is vital for protein synthesis. Many other ribozyme functions exist; for example, the hammerhead ribozyme performs self-cleavage[20] and an RNA polymerase ribozyme can synthesize a short RNA strand from a primed RNA template.[21]
Among the enzymatic properties important for the beginning of life are:
  • The ability to self-replicate, or synthesize other RNA molecules; relatively short RNA molecules that can synthesize others have been artificially produced in the lab. The shortest was 165 bases long, though it has been estimated that only part of the molecule was crucial for this function. One version, 189 bases long, had an error rate of just 1.1% per nucleotide when synthesizing 11-nucleotide RNA sequences from primed template strands.[22] This 189-base ribozyme could polymerize a template of at most 14 nucleotides in length, which is too short for self-replication but a potential lead for further investigation. The longest primer extension performed by a ribozyme polymerase was 20 bases.[23]
  • The ability to catalyze simple chemical reactions—which would enhance creation of molecules that are building blocks of RNA molecules (i.e., a strand of RNA which would make creating more strands of RNA easier). Relatively short RNA molecules with such abilities have been artificially formed in the lab.[24][25]
  • The ability to conjugate an amino acid to the 3'-end of an RNA in order to use its chemical groups or provide a long-branched aliphatic side-chain.[26]
  • The ability to catalyse the formation of peptide bonds to produce short peptides or longer proteins. This is done in modern cells by ribosomes, a complex of several RNA molecules known as rRNA together with many proteins. The rRNA molecules are thought to be responsible for its enzymatic activity, as no amino acid molecules lie within 18 Å of the enzyme's active site.[13] A much shorter RNA molecule has been synthesized in the laboratory with the ability to form peptide bonds, and it has been suggested that rRNA evolved from a similar molecule.[27] It has also been suggested that amino acids may initially have been involved with RNA molecules as cofactors enhancing or diversifying their enzymatic capabilities, before evolving into more complex peptides. Similarly, tRNA is suggested to have evolved from RNA molecules that began to catalyze amino acid transfer.[28]
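The error rate quoted in the first item invites a quick back-of-the-envelope check: if per-nucleotide errors are independent, the chance of an error-free copy falls geometrically with copy length. A minimal sketch (the 1.1% rate comes from the text; the lengths and the independence assumption are illustrative):

```python
# Sketch: how a per-nucleotide error rate limits the length a ribozyme
# polymerase can copy faithfully. The 1.1% figure is from the text;
# the sequence lengths below are illustrative.

def error_free_probability(error_rate: float, length: int) -> float:
    """Probability that a copy of `length` nucleotides contains no
    errors at all, assuming independent per-position errors."""
    return (1.0 - error_rate) ** length

rate = 0.011  # 1.1% error per nucleotide, as reported for the 189-base ribozyme

for n in (11, 14, 100, 189):
    p = error_free_probability(rate, n)
    print(f"{n:4d} nt: P(error-free copy) = {p:.3f}")
```

At the ~190-nucleotide length such a ribozyme would need to copy to replicate itself, the probability of a fully error-free copy is already small, which illustrates why the reported polymerases fall short of self-replication.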

RNA in information storage

RNA is chemically very similar to DNA, with only two differences. The overall structures of RNA and DNA are also very similar: one strand of DNA and one of RNA can bind to form a double helix. This makes the storage of information in RNA possible in much the same way as in DNA. RNA is, however, less stable.

Comparison of DNA and RNA structure

The major difference between RNA and DNA is the presence of a hydroxyl group at the 2'-position of the ribose sugar in RNA.[13] This group makes the molecule less stable: when not constrained in a double helix, the 2' hydroxyl can chemically attack the adjacent phosphodiester bond and cleave the phosphodiester backbone. The hydroxyl group also forces the ribose into the C3'-endo sugar conformation, unlike the C2'-endo conformation of the deoxyribose sugar in DNA. This forces an RNA double helix to change from a B-DNA structure to one more closely resembling A-DNA.
RNA also uses a different set of bases than DNA: adenine, guanine, cytosine and uracil, instead of adenine, guanine, cytosine and thymine. Chemically, uracil is similar to thymine, differing only by a methyl group, and its production requires less energy.[29] In terms of base pairing this has no effect, since adenine readily binds either uracil or thymine. Uracil is, however, also a product of deamination damage to cytosine, which makes RNA particularly susceptible to mutations that can replace a GC base pair with a GU (wobble) or AU base pair.

RNA is thought to have preceded DNA, because of their ordering in the biosynthetic pathways. The deoxyribonucleotides used to make DNA are made from ribonucleotides, the building blocks of RNA, by removing the 2'-hydroxyl group. As a consequence a cell must have the ability to make RNA before it can make DNA.

Limitations of information storage in RNA

The chemical properties of RNA make large RNA molecules inherently fragile; they can easily be broken down into their constituent nucleotides through hydrolysis.[30][31] These limitations do not make the use of RNA as an information storage system impossible; they simply make it energy-intensive (to repair or replace damaged RNA molecules) and prone to mutation. While this makes RNA unsuitable for current 'DNA-optimised' life, it may have been acceptable for more primitive life.

RNA as a regulator

Riboswitches have been found to act as regulators of gene expression, particularly in bacteria, but also in plants and archaea. Riboswitches alter their secondary structure in response to the binding of a metabolite. This change in structure can result in the formation or disruption of a terminator, truncating or permitting transcription respectively.[32] Alternatively, riboswitches may bind or occlude the Shine-Dalgarno sequence, affecting translation.[33] It has been suggested that these originated in an RNA-based world.[34] In addition, RNA thermometers regulate gene expression in response to temperature changes.[35]

Support and difficulties

The RNA world hypothesis is supported by RNA's ability to store, transmit, and duplicate genetic information, as DNA does. RNA can also act as a ribozyme, a special type of enzyme. Because it can perform the tasks of both DNA and enzymes, RNA is believed to have once been capable of supporting independent life forms.[13] Some viruses use RNA, rather than DNA, as their genetic material.[36] Further, while nucleotides were not found in the Miller-Urey origin-of-life experiments, their formation under prebiotically plausible conditions has now been reported, as noted above;[16] the purine base adenine is merely a pentamer of hydrogen cyanide. Experiments with basic ribozymes, like Bacteriophage Qβ RNA, have shown that simple self-replicating RNA structures can withstand even strong selective pressures (e.g., opposite-chirality chain terminators).[37]

Since there were no known chemical pathways for the abiogenic synthesis of nucleotides from the pyrimidine nucleobases cytosine and uracil under prebiotic conditions, some think that the first nucleic acids did not contain these nucleobases seen in life's nucleic acids.[38] The nucleobase cytosine has a half-life in isolation of 19 days at 100 °C (212 °F) and 17,000 years in freezing water, which some argue is too short on the geologic time scale for accumulation.[39] Others have questioned whether ribose and other backbone sugars could be stable enough to be found in the original genetic material,[40] and have raised the issue that all ribose molecules would have had to be the same enantiomer, since any nucleotide of the wrong chirality acts as a chain terminator.[41]
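The half-life figures above can be turned into surviving fractions with simple first-order decay kinetics. A rough sketch (the half-lives are from the text; the elapsed times are illustrative choices):

```python
# Sketch: first-order decay of cytosine using the half-lives quoted in
# the text (19 days at 100 degrees C; 17,000 years in freezing water).
# The elapsed times below are illustrative.

def fraction_remaining(half_life: float, elapsed: float) -> float:
    """Fraction of a compound surviving after `elapsed` time, given its
    half-life in the same units, assuming first-order kinetics."""
    return 0.5 ** (elapsed / half_life)

# At 100 C (half-life 19 days), almost nothing survives a single year:
print(fraction_remaining(19, 365))
# In freezing water (half-life 17,000 years), a million years of
# geologic time still erodes the inventory essentially to zero:
print(fraction_remaining(17_000, 1_000_000))
```

This is the quantitative content of the accumulation objection: without continuous resynthesis, cytosine inventories vanish on timescales far shorter than those usually invoked for the origin of life.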

Pyrimidine ribonucleosides and their respective nucleotides have been prebiotically synthesised by a sequence of reactions that bypass free sugars and assemble the components in a stepwise fashion, contrary to the long-held view that nitrogenous and oxygenous chemistries should be kept separate. In a series of publications, the Sutherland group at the School of Chemistry, University of Manchester, demonstrated high-yielding routes to cytidine and uridine ribonucleotides built from small two- and three-carbon fragments such as glycolaldehyde, glyceraldehyde or glyceraldehyde-3-phosphate, cyanamide and cyanoacetylene. One of the steps in this sequence allows the isolation of enantiopure ribose aminooxazoline if the enantiomeric excess of glyceraldehyde is 60% or greater, a result of possible relevance to biological homochirality.[42] This can be viewed as a prebiotic purification step, since the compound spontaneously crystallises out from a mixture of the other pentose aminooxazolines. Aminooxazolines can react with cyanoacetylene in a mild and highly efficient manner, controlled by inorganic phosphate, to give the cytidine ribonucleotides. Photoanomerization with UV light allows inversion about the 1' anomeric centre to give the correct beta stereochemistry; one problem with this chemistry is the selective phosphorylation of alpha-cytidine at the 2' position.[43] However, in 2009 the group showed that the same simple building blocks allow access, via phosphate-controlled nucleobase elaboration, directly to 2',3'-cyclic pyrimidine nucleotides, which are known to be able to polymerise into RNA.[44] This was hailed as strong evidence for the RNA world.[45] The paper also highlighted the possibility of photo-sanitization of the pyrimidine 2',3'-cyclic phosphates.[44] A potential weakness of these routes is the generation of enantioenriched glyceraldehyde, or its 3-phosphate derivative (glyceraldehyde prefers to exist as its keto tautomer dihydroxyacetone).[citation needed]

On August 8, 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of RNA (adenine, guanine and related organic molecules) may have been formed extraterrestrially in outer space.[46][47][48] On August 29, 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth.[49][50] Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.[51]

"Molecular biologist's dream"

"Molecular biologist's dream" is a phrase coined by Gerald Joyce and Leslie Orgel to refer to the problem of the emergence of self-replicating RNA molecules, since any movement towards an RNA world on a properly modeled prebiotic early Earth would have been continuously suppressed by destructive reactions.[52] It was noted that many of the steps needed for nucleotide formation do not proceed efficiently under prebiotic conditions.[53] Joyce and Orgel specifically identified the molecular biologist's dream with "a magic catalyst" that could "convert the activated nucleotides to a random ensemble of polynucleotide sequences, a subset of which had the ability to replicate".[52]

Joyce and Orgel further argued that nucleotides cannot link unless the phosphate group is somehow activated, whereas the only effective activating groups for this, particularly adenosine triphosphate, are "totally implausible in any prebiotic scenario".[52] According to Joyce and Orgel, even with phosphate-group activation, the basic polymer product would have 5',5'-pyrophosphate linkages, while the 3',5'-phosphodiester linkages present in all known RNA would be much less abundant.[52] The associated molecules would also have been prone to addition of incorrect nucleotides and to reactions with the numerous other substances likely to have been present.[52] The RNA molecules would also have been continuously degraded by destructive processes such as spontaneous hydrolysis on the early Earth.[52] Joyce and Orgel proposed rejecting "the myth of a self-replicating RNA molecule that arose de novo from a soup of random polynucleotides"[52] and instead hypothesised a scenario in which prebiotic processes furnish pools of enantiopure beta-D-ribonucleosides.[54]

Prebiotic RNA synthesis

Nucleotides are the fundamental molecules that combine in series to form RNA. They consist of a nitrogenous base attached to a sugar-phosphate backbone. RNA is made of long stretches of specific nucleotides arranged so that their sequence of bases carries information. The RNA world hypothesis holds that in the primordial soup (or sandwich) there existed free-floating nucleotides. These nucleotides regularly formed bonds with one another that often broke, because the energy change involved was so low. Certain sequences of bases, however, have catalytic properties that lower the energy barrier to extending the chain, causing the chains to stay together for longer periods of time. As each chain grew longer, it attracted matching nucleotides faster, so that chains came to form faster than they broke down.

These chains are proposed as the first, primitive forms of life.[55] In an RNA world, different forms of RNA compete with each other for free nucleotides and are subject to natural selection. The most efficient molecules of RNA, those able to efficiently catalyze their own reproduction, survived and evolved, forming modern RNA. Such an RNA enzyme, capable of self-replication in about an hour, has been identified. It was produced by molecular competition (in vitro evolution) of candidate enzyme mixtures.[56]

Competition between RNA may have favored the emergence of cooperation between different RNA chains, opening the way for the formation of the first protocell. Eventually, RNA chains developed with catalytic properties that help amino acids bind together (a process called peptide-bonding). These amino acids could then assist with RNA synthesis, giving those RNA chains that could serve as ribozymes the selective advantage. The ability to catalyze one step in protein synthesis, aminoacylation of RNA, has been demonstrated in a short (five-nucleotide) segment of RNA.[57]

One of the problems with the RNA world hypothesis is discovering the pathway by which the RNA system was upgraded to the DNA system. Kim Stedman of Portland State University in Oregon may have found a clue. While filtering virus-sized particles from a hot acidic lake in Lassen Volcanic National Park, California, he discovered 400,000 pieces of viral DNA. Some of these, however, were combined with a protein coat and reverse transcriptase enzyme normally associated with RNA-based retroviruses. Virologists such as Luis Villareal of the University of California, Irvine, believe this lack of respect for biochemical boundaries would have been characteristic of a pre-RNA virus world up to 4 billion years ago.[58] The finding bolsters the argument for the transfer of information from the RNA world to the emerging DNA world before the emergence of the Last Universal Common Ancestor, and suggests that the diversity of this virus world is still with us.

Origin of sex

Eigen et al.[59] and Woese[60] proposed that the genomes of early protocells were composed of single-stranded RNA, and that individual genes corresponded to separate RNA segments, rather than being linked end-to-end as in present day DNA genomes. A protocell that was haploid (one copy of each RNA gene) would be vulnerable to damage, since a single lesion in any RNA segment would be potentially lethal to the protocell (e.g. by blocking replication or inhibiting the function of an essential gene).

Vulnerability to damage could be reduced by maintaining two or more copies of each RNA segment in each protocell, i.e. by maintaining diploidy or polyploidy. Genome redundancy would allow a damaged RNA segment to be replaced by an additional replication of its homolog. However for such a simple organism, the proportion of available resources tied up in the genetic material would be a large fraction of the total resource budget. Under limited resource conditions, the protocell reproductive rate would likely be inversely related to ploidy number. The protocell's fitness would be reduced by the costs of redundancy. Consequently, coping with damaged RNA genes while minimizing the costs of redundancy would likely have been a fundamental problem for early protocells.

A cost-benefit analysis was carried out in which the costs of maintaining redundancy were balanced against the costs of genome damage.[61] This analysis led to the conclusion that, under a wide range of circumstances, the selected strategy would be for each protocell to be haploid, but to periodically fuse with another haploid protocell to form a transient diploid. The retention of the haploid state maximizes the growth rate. The periodic fusions permit mutual reactivation of otherwise lethally damaged protocells. If at least one damage-free copy of each RNA gene is present in the transient diploid, viable progeny can be formed. For two, rather than one, viable daughter cells to be produced would require an extra replication of the intact RNA gene homologous to any RNA gene that had been damaged prior to the division of the fused protocell. The cycle of haploid reproduction, with occasional fusion to a transient diploid state, followed by splitting to the haploid state, can be considered to be the sexual cycle in its most primitive form.[61][62] In the absence of this sexual cycle, haploid protocells with damage in an essential RNA gene would simply die.

This model for the early sexual cycle is hypothetical, but it is very similar to the known sexual behavior of the segmented RNA viruses, which are among the simplest organisms known. Influenza virus, whose genome consists of 8 physically separated single-stranded RNA segments,[63] is an example of this type of virus. In segmented RNA viruses, "mating" can occur when a host cell is infected by at least two virus particles. If these viruses each contain an RNA segment with lethal damage, multiple infection can lead to reactivation provided that at least one undamaged copy of each virus gene is present in the infected cell. This phenomenon is known as "multiplicity reactivation". Multiplicity reactivation has been reported to occur in influenza virus infections after induction of RNA damage by UV irradiation[64] and ionizing radiation.[65]
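Multiplicity reactivation lends itself to a simple probabilistic sketch: if each segment of each co-infecting particle is independently damaged with probability p, the cell recovers a full genome whenever every segment survives in at least one particle. The independence assumption and the damage probability below are illustrative, not measured values:

```python
# Sketch: multiplicity reactivation for a segmented RNA virus such as
# influenza (8 segments). Assume each segment of each infecting particle
# is independently damaged with probability p_damage; reactivation needs
# at least one undamaged copy of every segment in the cell. The damage
# probability and particle counts below are illustrative assumptions.

def reactivation_probability(p_damage: float, n_particles: int,
                             n_segments: int = 8) -> float:
    """P(every segment has >= 1 undamaged copy among n_particles)."""
    p_segment_lost = p_damage ** n_particles  # all copies of one segment hit
    return (1.0 - p_segment_lost) ** n_segments

p = 0.5  # heavy damage: each segment copy damaged with probability 1/2
for m in (1, 2, 3, 5):
    print(f"{m} particle(s): P(reactivation) = "
          f"{reactivation_probability(p, m):.3f}")
```

Under this toy model a single heavily damaged particle almost never yields a viable genome, while a handful of co-infecting particles recover it with high probability, which is the qualitative signature of multiplicity reactivation.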

Further developments

Patrick Forterre has been working on a novel hypothesis, called "three viruses, three domains":[66] that viruses were instrumental in the transition from RNA to DNA and the evolution of Bacteria, Archaea, and Eukaryota. He believes the last common ancestor (specifically, the "last universal cellular ancestor")[66] was RNA-based and evolved RNA viruses. Some of the viruses evolved into DNA viruses to protect their genes from attack. Through the process of viral infection into hosts the three domains of life evolved.[66][67] Another interesting proposal is the idea that RNA synthesis might have been driven by temperature gradients, in the process of thermosynthesis.[68] Single nucleotides have been shown to catalyze organic reactions.[69]

Alternative hypotheses

The hypothesized existence of an RNA world does not exclude a "Pre-RNA world", where a metabolic system based on a different nucleic acid is proposed to pre-date RNA. A candidate nucleic acid is peptide nucleic acid (PNA), which uses simple peptide bonds to link nucleobases.[70] PNA is more stable than RNA, but its ability to be generated under prebiological conditions has yet to be demonstrated experimentally.

Threose nucleic acid (TNA) has also been proposed as a starting point, as has glycol nucleic acid (GNA); like PNA, both lack experimental evidence of prebiotic synthesis.

An alternative, or complementary, theory of RNA origin is proposed in the PAH world hypothesis, whereby polycyclic aromatic hydrocarbons (PAHs) mediate the synthesis of RNA molecules.[71] PAHs are the most common and abundant of the known polyatomic molecules in the visible Universe, and are a likely constituent of the primordial sea.[72] PAHs and fullerenes (also implicated in the origin of life)[73] have recently been detected in nebulae.[74]

The iron-sulfur world theory proposes that simple metabolic processes developed before genetic materials did, and these energy-producing cycles catalyzed the production of genes.

Some of the difficulties of producing the precursors on Earth are bypassed by another alternative or complementary theory for their origin, panspermia. It posits that the earliest life on this planet was carried here from somewhere else in the galaxy, possibly on meteorites similar to the Murchison meteorite.[75] This does not invalidate the concept of an RNA world, but suggests that this world or its precursors originated not on Earth but on another, probably older, planet.

There are hypotheses in direct conflict with the RNA world hypothesis. The relative chemical complexity of the nucleotide and the unlikelihood of its arising spontaneously, the limited number of combinations possible among four base forms, and the need for RNA polymers of some length before enzymatic activity appears have led some to reject the RNA world hypothesis in favor of a metabolism-first hypothesis, in which the chemistry underlying cellular function arose first and the ability to replicate and facilitate this metabolism emerged later. Another proposal is that the dual-molecule system we see today, in which a nucleotide-based molecule is needed to synthesize protein and a protein-based molecule is needed to make nucleic acid polymers, represents the original form of life.[76] This theory, called the peptide-RNA world, offers a possible explanation for the rapid evolution of high-quality replication in RNA (since proteins are catalysts), at the cost of having to postulate the formation of two complex molecules, an enzyme (from peptides) and an RNA (from nucleotides). In this peptide-RNA world scenario, RNA would have contained the instructions for life, while peptides (simple protein enzymes) would have accelerated key chemical reactions to carry out those instructions.[77] The study leaves open the question of exactly how those primitive systems managed to replicate themselves, something neither the RNA world hypothesis nor the peptide-RNA world theory can yet explain, unless polymerases (enzymes that rapidly assemble the RNA molecule) played a role.[77]

Implications of the RNA world

The RNA world hypothesis, if true, has important implications for the definition of life. For most of the time that followed Watson and Crick's elucidation of DNA structure in 1953, life was largely defined in terms of DNA and proteins: DNA and proteins seemed the dominant macromolecules in the living cell, with RNA only aiding in creating proteins from the DNA blueprint.

The RNA world hypothesis places RNA at center-stage when life originated. This has been accompanied by many studies[citation needed] in the last ten years demonstrating important aspects of RNA function not previously known, which support the idea of a critical role for RNA in the mechanisms of life. The RNA world hypothesis is supported by the observation that ribosomes are ribozymes: the catalytic site is composed of RNA, while proteins hold no major structural role there and are of peripheral functional importance. This was confirmed with the deciphering of the 3-dimensional structure of the ribosome in 2001. Specifically, peptide bond formation, the reaction that binds amino acids together into proteins, is now known to be catalyzed by an adenine residue in the rRNA.

Other interesting discoveries demonstrate a role for RNA beyond a simple message or transfer molecule.[78] These include the importance of small nuclear ribonucleoproteins (snRNPs) in the processing of pre-mRNA and RNA editing, RNA interference (RNAi), and reverse transcription from RNA in eukaryotes in the maintenance of telomeres in the telomerase reaction.[79]

Tuesday, January 13, 2015

Superconductivity

From Wikipedia, the free encyclopedia
 
A magnet levitating above a high-temperature superconductor, cooled with liquid nitrogen. Persistent electric current flows on the surface of the superconductor, acting to exclude the magnetic field of the magnet (Faraday's law of induction). This current effectively forms an electromagnet that repels the magnet.
Video of a Meissner effect in a high temperature superconductor (black pellet) with a NdFeB magnet (metallic)
A high-temperature superconductor levitating above a magnet

Superconductivity is a phenomenon of exactly zero electrical resistance and expulsion of magnetic fields occurring in certain materials when cooled below a characteristic critical temperature. It was discovered by Dutch physicist Heike Kamerlingh Onnes on April 8, 1911 in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.

The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In ordinary conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. An electric current flowing through a loop of superconducting wire can persist indefinitely with no power source.[1][2][3][4][5]

In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 90 K (−183 °C).[6] Such a high transition temperature is theoretically impossible for a conventional superconductor, leading these materials to be termed high-temperature superconductors. Liquid nitrogen boils at 77 K, and superconductivity at temperatures above this facilitates many experiments and applications that are less practical at lower temperatures.

Classification

There are many criteria by which superconductors are classified. The most common are their response to a magnetic field (type I or type II), the theory explaining them (conventional or unconventional), their critical temperature, and the material they are made of.

Elementary properties of superconductors

Most of the physical properties of superconductors vary from material to material, such as the heat capacity and the critical temperature, critical field, and critical current density at which superconductivity is destroyed.

On the other hand, there is a class of properties that are independent of the underlying material. For instance, all superconductors have exactly zero resistivity to low applied currents when there is no magnetic field present or if the applied field does not exceed a critical value. The existence of these "universal" properties implies that superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details.

Zero electrical DC resistance

Electric cables for accelerators at CERN. Both the massive and slim cables are rated for 12,500 A. Top: conventional cables for LEP; bottom: superconductor-based cables for the LHC

The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero.
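The measurement just described can be sketched numerically. This is a minimal illustration of Ohm's law as used in the text; the function name and values are illustrative, not from a specific experiment.

```python
# Illustrative sketch of the resistance measurement described above:
# drive a known current I through the sample and read the voltage V.
def resistance(voltage_v, current_a):
    """Ohm's law: R = V / I."""
    if current_a == 0:
        raise ValueError("current must be nonzero")
    return voltage_v / current_a

# A normal-metal sample: 1 mV across the sample at 1 A gives 1 milliohm.
print(resistance(1e-3, 1.0))
# A superconducting sample: the voltmeter reads zero, so R = 0.
print(resistance(0.0, 1.0))
```

In practice the conclusion "zero voltage implies zero resistance" is limited by the voltmeter's sensitivity, which is why persistent-current experiments (below) give far tighter bounds.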

Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a current lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature.[3]
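The lifetime bound quoted above can be turned into a bound on residual resistance. A current in a loop of inductance L and resistance R decays as I(t) = I₀·exp(−Rt/L), so a lifetime of at least τ implies R ≤ L/τ. The inductance below is an assumed, order-of-magnitude value, not from a specific experiment.

```python
import math

# Persistent-current decay: I(t) = I0 * exp(-R*t/L).
# A lifetime bound tau implies a residual resistance bound R <= L / tau.
L_henry = 1e-6                         # assumed loop inductance: 1 microhenry
tau_s = 100_000 * 365.25 * 24 * 3600   # 100,000 years, in seconds

r_bound = L_henry / tau_s
print(f"residual resistance bound: {r_bound:.2e} ohm")

# Fractional decay of the current over one year at that resistance bound:
t_year = 365.25 * 24 * 3600
decay = 1 - math.exp(-r_bound * t_year / L_henry)
print(f"fractional decay per year: {decay:.2e}")
```

Even with this modest assumed inductance, the implied resistance is some seventeen orders of magnitude below a milliohm, far beyond what any direct voltage measurement could detect.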

In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance.

The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is Boltzmann's constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
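The gap-versus-thermal-energy comparison above can be made concrete. BCS theory gives the standard estimate Δ(0) ≈ 1.764·k_B·T_c for the zero-temperature gap (used here as an approximation; the text does not state this relation).

```python
# Numerical illustration of the energy-gap argument above, using the
# standard BCS estimate Delta(0) ~= 1.764 * k_B * Tc (an approximation).
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)

def bcs_gap_joules(tc_kelvin):
    return 1.764 * k_B * tc_kelvin

# Mercury, Tc = 4.2 K, operated well below Tc at T = 1 K:
delta = bcs_gap_joules(4.2)
thermal = k_B * 1.0
print(delta / thermal)   # the gap exceeds k*T, so the fluid is not scattered
```

Since the ratio is well above one, thermal agitation at 1 K cannot supply the energy ΔE needed to excite the Cooper-pair fluid, which is the dissipationless-flow condition stated above.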

In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely small amount of resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero.

Superconducting phase transition

Behavior of heat capacity (cv, blue) and resistivity (ρ, green) at the superconducting phase transition

In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2009, the highest critical temperature found for a conventional superconductor is 39 K for magnesium diboride (MgB2),[7][8] although this material displays enough exotic properties that there is some doubt about classifying it as a "conventional" superconductor.[9] Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature of 92 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The explanation for these high critical temperatures remains unknown. Electron pairing due to phonon exchanges explains superconductivity in conventional superconductors, but it does not explain superconductivity in the newer superconductors that have a very high critical temperature.

Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of the electrons in the superconducting band and consequently a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.
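The temperature dependence of the critical field described above is often summarized by the empirical parabolic law H_c(T) ≈ H_c(0)·[1 − (T/T_c)²]. This relation and the material values below are standard approximations not stated in the text, used only for illustration.

```python
def critical_field(hc0, t, tc):
    """Empirical approximation Hc(T) = Hc(0) * (1 - (T/Tc)**2),
    a good fit for many type I superconductors (approximate, not exact)."""
    if t >= tc:
        return 0.0   # normal state above the critical temperature
    return hc0 * (1 - (t / tc) ** 2)

# Lead: Tc ~= 7.2 K, Hc(0) ~= 0.08 T (approximate literature values).
print(critical_field(0.08, 3.6, 7.2))   # critical field at half of Tc
```

The curve captures the qualitative statement above: warmer samples tolerate weaker fields, and the critical field vanishes as T approaches T_c.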

The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as e^(−α/T) for some constant α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap.

The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat. However in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated[10] that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material.

Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point.[11] The results were strongly supported by Monte Carlo computer simulations.[12]

Meissner effect

When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not cause the field to be completely excluded; rather, the field penetrates the superconductor only to a very small distance, characterized by a parameter λ, called the London penetration depth, and decays exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.

The Meissner effect is distinct from this—it is the spontaneous expulsion which occurs during transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.

The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided
 \nabla^2\mathbf{H} = \lambda^{-2} \mathbf{H}\,
where H is the magnetic field and λ is the London penetration depth.

This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
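The predicted decay can be evaluated directly: the field a depth x inside the superconductor is B(x) = B(0)·exp(−x/λ). The sketch below uses the 100 nm penetration depth quoted above.

```python
import math

# The London equation's prediction: the field decays exponentially with
# depth x into the superconductor, B(x) = B(0) * exp(-x / lambda).
lam = 100e-9          # London penetration depth, m (typical scale from the text)

def field(b_surface, depth_m):
    return b_surface * math.exp(-depth_m / lam)

# One micron (ten penetration depths) inside, the field is negligible:
print(field(1.0, 1e-6))   # e^-10 of the surface value
```

This is why a bulk sample appears to expel the field entirely: anything more than a few hundred nanometres from the surface is effectively field-free.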

A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state[13] consisting of a baroque pattern[14] of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.

London moment

A spinning superconductor generates a magnetic field precisely aligned with its spin axis. This effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.
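The London moment field is given by B = (2mₑ/e)·ω, where mₑ is the electron mass, e the elementary charge, and ω the angular velocity. This formula is not stated in the text above, and the spin rate below is illustrative (of the order used for the Gravity Probe B rotors), so treat the numbers as a sketch.

```python
import math

# London moment: B = (2 * m_e / e) * omega, aligned with the spin axis.
m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C

def london_moment_tesla(spin_hz):
    omega = 2 * math.pi * spin_hz   # angular velocity, rad/s
    return 2 * m_e * omega / e

print(london_moment_tesla(80.0))   # a few nanotesla: tiny, but SQUID-measurable
```

The resulting field is only a few nanotesla, which is why SQUID magnetometers were needed to read out the gyroscope orientations.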

History of superconductivity

Heike Kamerlingh Onnes (right), the discoverer of superconductivity

Superconductivity was discovered on April 8, 1911 by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared.[15] In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found.[16] In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.

Great efforts have been devoted to finding out how and why superconductivity works; an important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expel applied magnetic fields, a phenomenon which has come to be known as the Meissner effect.[17] In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.[18]

London theory

The first phenomenological theory of superconductivity was London theory. It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors. A major triumph of the equations of this theory is their ability to explain the Meissner effect,[19] wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface.[20]
There are two London equations:
\frac{\partial \mathbf{j}_s}{\partial t} = \frac{n_s e^2}{m}\mathbf{E}, \qquad \mathbf{\nabla}\times\mathbf{j}_s =-\frac{n_s e^2}{m}\mathbf{B}.
The first equation follows from Newton's second law for superconducting electrons.
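Combining the second London equation with Ampère's law yields the field-decay equation given earlier, with penetration depth λ = √(m/(μ₀ n_s e²)). The carrier density below is an assumed, order-of-magnitude value used only to show that the formula lands in the ~100 nm range quoted above.

```python
import math

# From the second London equation plus Ampere's law:
# lambda = sqrt(m / (mu0 * n_s * e**2)), the London penetration depth.
mu0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A
m_e = 9.1093837015e-31       # electron mass, kg
e = 1.602176634e-19          # elementary charge, C

def penetration_depth(n_s):
    """Penetration depth for superconducting carrier density n_s (per m^3)."""
    return math.sqrt(m_e / (mu0 * n_s * e ** 2))

# n_s ~ 1e28 superconducting electrons per m^3 (illustrative):
print(penetration_depth(1e28))   # tens of nanometres
```

A denser superfluid screens the field more effectively, so λ shrinks as n_s grows; near T_c the superfluid density vanishes and λ diverges, consistent with the phase-transition discussion above.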

Conventional theories (1950s)

During the 1950s, theoretical condensed matter physicists arrived at a solid understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg-Landau theory (1950) and the microscopic BCS theory (1957).[21][22]
In 1950, the phenomenological Ginzburg-Landau theory of superconductivity was devised by Landau and Ginzburg.[23] This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg-Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg-Landau theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology.

Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element.[24][25] This important discovery pointed to the electron-phonon interaction as the microscopic mechanism responsible for superconductivity.
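The isotope effect is commonly written as T_c ∝ M^(−α) with α ≈ 0.5 for simple conventional superconductors. That exponent, and the mercury isotope values below, are standard approximate figures used here only for illustration.

```python
# Isotope effect: Tc scales roughly as M**(-alpha), alpha ~ 0.5 for simple
# conventional superconductors (approximate; alpha varies by material).
def shifted_tc(tc1, m1, m2, alpha=0.5):
    """Tc of an isotope of mass m2, given Tc = tc1 at mass m1."""
    return tc1 * (m1 / m2) ** alpha

# Mercury isotopes (illustrative masses in atomic mass units):
print(shifted_tc(4.185, 199.5, 203.4))   # heavier isotope, slightly lower Tc
```

The dependence on nuclear mass is what implicated lattice vibrations (phonons): the electrons alone know nothing about the isotope, so the pairing mechanism must involve the lattice.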

The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer.[22] This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972.

The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian.[26] In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg-Landau theory close to the critical temperature.[27][28]

Generalizations of BCS theory for conventional superconductors form the basis for understanding of the phenomenon of superfluidity, because they fall into the Lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.

Further history

The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron.[29] Two superconductors with greatly different values of critical magnetic field are combined to produce a fast, simple switch for computer elements.

In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse, allowing the construction of the first practical superconducting magnets. In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator.[30] This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
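The value of the flux quantum mentioned above follows directly from the defining constants, both of which are exact in the 2019 SI.

```python
# Magnetic flux quantum: Phi_0 = h / (2e).  The factor of 2 reflects the
# charge of a Cooper pair.  h and e are exact in the 2019 SI.
h = 6.62607015e-34       # Planck constant, J*s
e = 1.602176634e-19      # elementary charge, C

phi_0 = h / (2 * e)
print(phi_0)   # ~2.0678e-15 Wb
```

Because flux through a superconducting loop comes only in multiples of this tiny quantum, Josephson-junction devices can resolve magnetic flux with extraordinary precision.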

In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance.[31]

High-temperature superconductivity

Timeline of superconducting materials

Until 1986, physicists had believed that BCS theory forbade superconductivity at temperatures above about 30 K. In that year, Bednorz and Müller discovered superconductivity in a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987).[6] It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature to 92 K.[32]

This temperature jump is particularly significant, since it allows liquid nitrogen as a refrigerant, replacing liquid helium.[32] This can be important commercially because liquid nitrogen can be produced relatively cheaply, even on-site, avoiding some of the problems (such as so-called "solid air" plugs) which arise when liquid helium is used in piping.[33][34]

Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics.[35] There are currently two main hypotheses: the resonating-valence-bond theory, and the spin-fluctuation hypothesis, which has the most support in the research community.[36] The second hypothesis proposes that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons.[37][38]

Since about 1993, the highest-temperature superconductor has been a ceramic material consisting of mercury, barium, calcium, copper and oxygen (HgBa2Ca2Cu3O8+δ) with Tc = 133–138 K.[39][40] The latter result (138 K), however, still awaits confirmation.

In February 2008, an iron-based family of high-temperature superconductors was discovered.[41][42] Hideo Hosono, of the Tokyo Institute of Technology, and colleagues found lanthanum oxygen fluorine iron arsenide (LaO1−xFxFeAs), an oxypnictide that superconducts below 26 K. Replacing the lanthanum in LaO1−xFxFeAs with samarium leads to superconductors that work at 55 K.[43]

Applications


Video of superconducting levitation of YBCO

Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, and the beam-steering magnets used in particle accelerators. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries.
In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations.

Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Depending on the particular mode of operation, a superconductor-insulator-superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal- to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials.

Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved.

Promising future applications include high-performance smart grids, electric power transmission, transformers, power storage devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current.

Nobel Prizes for superconductivity

Heike Kamerlingh Onnes (1913), for low-temperature physics, including the discovery of superconductivity
John Bardeen, Leon N. Cooper, and J. Robert Schrieffer (1972), for the BCS theory of superconductivity
Leo Esaki, Ivar Giaever, and Brian D. Josephson (1973), for tunneling phenomena in superconductors
Georg Bednorz and K. Alex Müller (1987), for the discovery of high-temperature superconductivity
Alexei A. Abrikosov, Vitaly L. Ginzburg, and Anthony J. Leggett (2003), for pioneering contributions to the theory of superconductors and superfluids
