
Friday, November 4, 2022

Clinical metagenomic sequencing

From Wikipedia, the free encyclopedia

Clinical metagenomic next-generation sequencing (mNGS) is the comprehensive analysis of microbial and host genetic material (DNA or RNA) in clinical samples from patients. It uses the techniques of metagenomics to identify and characterize the genomes of bacteria, fungi, parasites, and viruses directly from clinical specimens, without the need for prior knowledge of a specific pathogen. The capacity to detect all potential pathogens in a sample makes metagenomic next-generation sequencing a potent tool in the diagnosis of infectious disease, especially when other, more directed assays such as PCR fail. Its limitations include clinical utility, laboratory validity, sense and sensitivity, cost, and regulatory considerations.

Outside of clinical medicine, similar work is done to identify genetic material in environmental samples, such as ponds or soil.

Definition

Clinical mNGS uses the techniques of metagenomics to identify and characterize the genomes of bacteria, fungi, parasites, and viruses directly from clinical specimens, without the need for prior knowledge of a specific pathogen. The capacity to detect all potential pathogens in a sample makes metagenomic next-generation sequencing a potent tool in the diagnosis of infectious disease, especially when other, more directed assays such as PCR fail.

Laboratory workflow

A typical mNGS workflow consists of the following steps:

  • Sample acquisition: the most commonly used samples for metagenomic sequencing are blood, stool, cerebrospinal fluid (CSF), urine, and nasopharyngeal swabs. Among these, blood and CSF are the cleanest, with less background noise, while the others are expected to contain a large number of commensal and/or opportunistic organisms and thus have more background noise. Samples should be collected with great care, as specimens can be contaminated during collection and handling; for example, surgical biopsy specimens may be contaminated during handling, and lumbar punctures to obtain CSF specimens may be contaminated during the procedure.
  • RNA/DNA extraction: the DNA and RNA of the sample are extracted using an extraction kit. If there is a strong prior suspicion about the pathogen's genome composition, and because the amount of pathogen nucleic acid in noisier samples is overwhelmed by the RNA/DNA of other organisms, selecting an extraction kit for only RNA or only DNA can be a more specific and convenient approach. Some commercially available kits are, for example, the RNeasy PowerSoil Total RNA kit (Qiagen), RNeasy Mini kit (Qiagen), MagMAX Viral Isolation kit (ABI), and Viral RNA Mini kit (Qiagen).
  • Optimization strategies for library preparation: because of high levels of background noise in metagenomic sequencing, several target enrichment procedures have been developed that aim to increase the probability of capturing pathogen-derived transcripts and/or genomes. Generally there are two main approaches that can be used to increase the amount of pathogen signal in a sample: negative selection and positive enrichment.
  1. Negative selection (background depletion or subtraction) targets and eliminates the host and microbiome genomic background, while aiming to preserve the nucleic acid derived from the pathogens of interest. Degradation of genomic background can be performed through broad-spectrum digestion with nucleases, such as DNase I for DNA background, or by removing abundant RNA species (rRNA, mtRNA, globin mRNA) using sequence-specific RNA depletion kits. CRISPR-Cas9-based approaches can also be used to target and deplete, for example, human mitochondrial RNA. Generally, however, subtraction approaches lead to a certain degree of loss of the targeted pathogen genome, as poor recovery may occur during the cleanup.
  2. Positive enrichment is used to increase pathogen signal rather than reducing background noise. This is commonly done through hybridization-based target capture by probes, which are used to pull out nucleic acid of interest for downstream amplification and sequencing. Panviral probes have been shown to successfully identify diverse types of pathogens in different clinical fluid and respiratory samples, and have been used for sequencing and characterization of novel viruses. However, the probe approach includes extra hybridization and cleanup steps, requiring higher sample input, increasing the risk of losing the target, and increasing the cost and hands-on time.
  • High-throughput sequencing: all the nucleic acid fragments of the library are sequenced. The sequencing platform is chosen depending on factors such as the laboratory's research objectives, personal experience and skill levels. So far, the Illumina MiSeq system has proven to be the most commonly used platform for infectious disease research, pathogen surveillance, and pathogen discovery in research and public health. The instrument is compact enough to fit on a laboratory bench, has a fast runtime compared to other similar platforms, and has a strong user support community. However, with further improvements of this technology and with additional error reduction and software stabilization, the MinION may be an excellent addition to the arsenal of current sequencing technologies for routine surveillance, especially in smaller laboratories with limited resources. For instance, the MinION was successfully used in the ZiBRA project for real-time Zika virus surveillance of mosquitoes and humans in Brazil, and in Guinea to perform real-time surveillance during the Ebola outbreak there. In general, laboratories with limited resources use the Illumina MiSeq or iSeq, Ion Torrent PGM, or Oxford Nanopore MinION, while those with substantial resources prefer the Illumina NextSeq or NovaSeq, PacBio Sequel, or Oxford Nanopore PromethION. Moreover, for pathogen sequencing the use of controls is of fundamental importance to ensure mNGS assay quality and stability over time; PhiX is used as a sequencing control, and the other controls include a positive control, an additional internal control (e.g., spiked DNA or another known pathogen) and a negative control (usually a water sample).
  • Bioinformatic analysis: whereas the sequencing itself has been made widely accessible and more user-friendly, the data analysis and interpretation that follows still requires specialized bioinformatics expertise and appropriate computational resources. The raw data from a sequencing platform are usually cleaned, trimmed, and filtered to remove low-quality and duplicate reads. Removal of host genome/transcriptome reads is performed to decrease background noise (e.g., host and environmental reads) and increase the frequency of pathogen reads; this step also decreases downstream analysis time. Further background-noise removal is achieved by mapping sample reads against the reads from the negative control, to eliminate contaminating reads such as those associated with the reagents or the sample storage medium. The remaining reads are usually assembled de novo to produce long stretches of sequence called contigs. Taxonomic identification of the resulting contigs is performed by matching them to the genomes and sequences in nucleotide or protein databases; for this, various versions of BLAST are most commonly used. Advanced characterization of bacterial organisms can also be performed, providing the necessary depth and breadth of coverage for genetic characterization. Gene calling can be performed in a variety of ways, including RAST or NCBI services at the time of full genome submission. Results from multiple annotation tools can be compared for accuracy and completeness and, if necessary, merged using BEACON. For characterization of antibiotic resistance genes, the Resistance Gene Identifier from the Comprehensive Antibiotic Resistance Database (CARD) is commonly used. To characterize virulence factor genes, ShortBRED offers analyses with a customized database from the Virulence Factor Database. A minimal sketch of these core steps follows below.
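
The bioinformatic steps above can be chained into a simple pipeline. The following is a minimal orchestration sketch, assuming commonly used open-source tools (fastp for quality trimming, Bowtie2 and samtools for host-read removal, MEGAHIT for de novo assembly, and BLAST for taxonomic assignment) are installed; the file names, index name and parameters are hypothetical, and a validated clinical pipeline would add negative-control subtraction, quality metrics and curated databases.

    # Minimal mNGS analysis sketch (illustrative only): quality trimming,
    # host-read subtraction, de novo assembly and BLAST-based classification.
    # Tool choices, file names and parameters are assumptions, not a validated pipeline.
    import subprocess

    def run(cmd):
        """Run one pipeline step and stop if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Quality/adapter trimming and duplicate removal of the raw reads (fastp).
    run(["fastp", "-i", "sample_R1.fastq.gz", "-I", "sample_R2.fastq.gz",
         "-o", "trimmed_R1.fastq.gz", "-O", "trimmed_R2.fastq.gz", "--dedup"])

    # 2a. Map the trimmed reads against a human reference index (Bowtie2).
    run(["bowtie2", "-x", "human_index",
         "-1", "trimmed_R1.fastq.gz", "-2", "trimmed_R2.fastq.gz",
         "-S", "host_mapped.sam"])

    # 2b. Keep only pairs in which both mates are unmapped (SAM flag 4 + 8 = 12),
    #     i.e. the putatively non-host reads.
    run(["samtools", "fastq", "-f", "12",
         "-1", "nonhost_R1.fastq.gz", "-2", "nonhost_R2.fastq.gz",
         "host_mapped.sam"])

    # 3. De novo assembly of the remaining reads into contigs (MEGAHIT).
    run(["megahit", "-1", "nonhost_R1.fastq.gz", "-2", "nonhost_R2.fastq.gz",
         "-o", "assembly"])

    # 4. Taxonomic identification of the contigs against a nucleotide database (BLAST).
    run(["blastn", "-query", "assembly/final.contigs.fa", "-db", "nt",
         "-outfmt", "6", "-max_target_seqs", "5", "-out", "contigs_vs_nt.tsv"])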

Applications

Infectious diseases diagnosis

One way to detect the pathogens causing an infection is to detect part of their genome by metagenomic sequencing (metagenomic next-generation sequencing, mNGS), which can be targeted or untargeted.

Targeted

In targeted sequencing, specific organisms or gene targets are enriched or amplified before sequencing. Because of that, the sensitivity for detecting the microorganisms being targeted usually increases, but this comes at the cost of limiting the range of identifiable pathogens.

Untargeted

The untargeted analysis is a metagenomic "shotgun" approach: all of the DNA and/or RNA in the sample is sequenced. The resulting mNGS reads can be assembled into partial or complete genomes. These genome sequences make it possible to monitor hospital outbreaks, facilitating infection control and public health surveillance, and can also be used for subtyping (identifying a specific genetic variant of a microorganism).

Untargeted mNGS is the most promising approach for analysing clinical samples and providing a comprehensive diagnosis of infections. Various groups have validated mNGS under the Clinical Laboratory Improvement Amendments (CLIA) for infections such as meningitis or encephalitis, sepsis and pneumonia. This method can be very helpful in settings where no exact infectious etiology is suspected. For example, in patients with suspected pneumonia, identification of the underlying infectious etiology, as in COVID-19, has important clinical and public health implications.

The traditional method consists of formulating a differential diagnosis on the basis of the patient's history, clinical presentation, imaging findings and laboratory testing. Metagenomic next-generation sequencing (NGS) offers a different way of reaching a diagnosis: it is a promising method because a comprehensive spectrum of potential causes (viral, bacterial, fungal and parasitic) can be identified by a single assay.

Below are some examples of the metagenomic sequencing application in infectious diseases diagnosis.

Examples

Diagnosis of meningitis and encephalitis

The traditional approach to the diagnosis of infectious diseases is challenged in some situations: neuroinflammatory diseases, the lack of diagnostic tests for rare pathogens, and the limited availability and volume of central nervous system (CNS) samples, owing to the requirement for invasive procedures. Because of these problems, some studies suggest a different way of reaching a diagnosis, metagenomic next-generation sequencing (NGS); this is a promising method because a comprehensive spectrum of potential causes (bacterial, viral, fungal and parasitic) can be identified by a single study. In summary, NGS can identify a broad range of pathogens in a single test.

Some studies have evaluated the clinical usefulness of metagenomic NGS for diagnosing neurologic infections, in parallel with conventional microbiologic testing. They found that the highest diagnostic yield resulted from a combination of metagenomic NGS of CSF and conventional testing, including serologic testing and testing of sample types other than CSF.

Moreover, findings from different studies have shown that neurologic infections remain undiagnosed in a proportion of patients despite conventional testing, demonstrating the potential usefulness of clinical metagenomic NGS testing in these patients.

The results of metagenomic NGS can also be valuable even when concordant with the results of conventional testing, not only providing reassurance that the conventionally obtained diagnosis is correct but also potentially detecting or ruling out coinfections, especially in immunocompromised patients.

Metagenomic sequencing is fundamentally a direct-detection method and relies on the presence of nucleic acid from the causative pathogen in the CSF sample.

Study of antimicrobial resistance

Antimicrobial resistance is a pressing health problem that needs to be resolved.

Antibiotic susceptibility testing (AST) is currently used to detect resistance in different microbes, but several studies have shown that bacterial resistance is encoded in the genome and transferred horizontally (horizontal gene transfer, HGT), so sequencing methods are being developed to ease the identification and characterization of these genomes and metagenomes. At present, the following methods are used to detect antimicrobial resistance:

  • AST: an advantage of this method is that its results directly inform patient treatment. Its disadvantages include that it is only useful for culturable bacteria and that it requires trained personnel.
  • Sequencing methods: some advantages of these methods over AST are that they are rapid and sensitive, they are useful for bacteria that grow on artificial media as well as those that do not, and they permit comparative studies across several organisms. One type of sequencing method may be preferred over another depending on the type of sample: for a genomic sample, assembly-based methods are used; for a metagenomic sample, read-based methods are preferable (see the sketch after this list).
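
As a rough illustration of the read-based approach, the toy script below counts metagenomic reads that share exact k-mers with a set of reference resistance-gene sequences (for example, sequences exported from CARD). The k-mer length, file names and exact-match criterion are simplifying assumptions; production tools such as the Resistance Gene Identifier use alignment with identity and coverage thresholds rather than exact k-mer matching.

    # Toy read-based screen for antimicrobial resistance genes: count reads that
    # share exact k-mers with reference gene sequences. File names are hypothetical.
    from collections import defaultdict

    K = 31  # k-mer length (assumption)

    def read_fasta(path):
        """Yield (header, sequence) pairs from a FASTA file."""
        header, seq = None, []
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith(">"):
                    if header is not None:
                        yield header, "".join(seq)
                    header, seq = line[1:], []
                else:
                    seq.append(line.upper())
            if header is not None:
                yield header, "".join(seq)

    def kmers(seq, k=K):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    # Index the k-mers of each reference resistance gene.
    gene_index = defaultdict(set)              # k-mer -> set of gene names
    for gene, seq in read_fasta("resistance_genes.fasta"):
        for km in kmers(seq):
            gene_index[km].add(gene)

    # Screen the metagenomic reads (FASTA for simplicity) and count hits per gene.
    hits = defaultdict(int)
    for _, read in read_fasta("metagenome_reads.fasta"):
        matched = set()
        for km in kmers(read):
            matched |= gene_index.get(km, set())
        for gene in matched:
            hits[gene] += 1

    for gene, n in sorted(hits.items(), key=lambda item: -item[1]):
        print(f"{gene}\t{n} supporting reads")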

It has been noted that metagenomic sequencing methods have provided better results than genomic ones, because they produce fewer false negatives. Within metagenomic sequencing, functional metagenomics is a powerful approach for characterizing resistomes: a metagenomic library is generated by cloning the total community DNA extracted from a sample into an expression vector, and this library is assayed for antimicrobial resistance by plating on selective media that are lethal to the wild-type host. The selected inserts from the surviving recombinant, antimicrobial-resistant host cells are then sequenced, and the resulting sequences are assembled and annotated (PARFuMS).

Functional metagenomics has enabled the discovery of several new antimicrobial resistance mechanisms and their related genes; one such example is the recently discovered tetracycline resistance mechanism mediated by tetracycline destructases.

In conclusion, it is important to consider not only the antimicrobial resistance gene sequence and mechanism but also the genomic context, host bacterial species and geographic location (metagenome).

Pandemic preparedness

Potentially dangerous pathogens such as ebolaviruses and coronaviruses, as well as the closest genetic relatives of unknown pathogens, could be identified immediately, prompting further follow-up. mNGS is anticipated to play a role in future pandemic preparedness and could serve as the earliest surveillance system available for detecting outbreaks of unknown etiology and responding in a timely manner.

Clinical microbiome analyses

The use of mNGS to characterize the microbiome has made possible the development of bacterial probiotics administered as pills, for example, as a treatment of Clostridium difficile-associated diseases.

Human host response analyses

Studying gene expression makes it possible to characterize many infections, for example those due to Staphylococcus aureus, Lyme disease, candidiasis, tuberculosis and influenza. This approach can also be used for cancer classification.

RNA-seq analysis has many other purposes and applications, such as identifying novel or under-appreciated host–microbial interactions directly from clinical samples, making indirect diagnoses on the basis of a pathogen-specific human host response, and discriminating infectious from noninfectious causes of acute illness.

Applications in oncology

Challenges

Clinical utility

Most of the metagenomics outcome data generated so far consist of case reports, which belies the increasing interest in diagnostic metagenomics.

Accordingly, there is an overall lack of penetration of this approach into the clinical microbiology laboratory, as making a diagnosis with metagenomics is still essentially useful only in the context of case reports and not for routine daily diagnostic purposes.

Recent cost-effectiveness modelling of metagenomics in the diagnosis of fever of unknown origin concluded that, even after limiting the cost of diagnostic metagenomics to $100–1000 per test, it would require 2.5–4 times the diagnostic yield of computed tomography of the abdomen/pelvis in order to be cost neutral and cautioned against a ‘widespread rush’ to deploy metagenomic testing.
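
The break-even logic behind such modelling can be made concrete with a toy calculation: a test is cost neutral when its cost per diagnosis matches that of the comparator, so the required metagenomic yield scales with the price ratio between the two tests. All prices and yields below are hypothetical and are not the figures used in the cited study.

    # Toy cost-neutrality calculation (all numbers hypothetical).
    # Cost per diagnosis = cost per test / diagnostic yield. Cost neutrality:
    #   mngs_cost / mngs_yield == ct_cost / ct_yield
    #   =>  mngs_yield = ct_yield * (mngs_cost / ct_cost)
    ct_cost = 400.0    # assumed cost of CT abdomen/pelvis, USD
    ct_yield = 0.10    # assumed diagnostic yield (10% of scans yield a diagnosis)

    for mngs_cost in (100.0, 1000.0):
        required_yield = ct_yield * (mngs_cost / ct_cost)
        print(f"mNGS at ${mngs_cost:.0f}/test needs a diagnostic yield of "
              f"{required_yield:.1%} ({mngs_cost / ct_cost:.1f}x the CT yield) "
              f"to be cost neutral")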

Furthermore, in the case of the discovery of potential novel infectious agents, usually only the positive results are published even though the vast majority of sequenced cases are negative, resulting in very biased information. Moreover, even the metagenomics-based discovery work that preceded the current diagnostics-based work mentioned known agents detected while screening unsolved cases for completely novel causes.

Laboratory validity

To date, most published testing has been run in an unvalidated, unreportable manner. The ‘standard microbiological testing’ that samples are subjected to prior to metagenomics is variable and has not included reverse transcription-polymerase chain reaction (RT-PCR) testing for common respiratory viruses or, routinely, 16S/ITS PCR testing.

Given the relative costs of validating and performing metagenomic versus 16S/ITS PCR testing, the latter is considered an easier and more efficient option. A potential exception to 16S/ITS testing is blood, given the huge amount of 16S sequence available, which makes clean cutoffs for diagnostic purposes problematic.

Furthermore, almost all of the organisms detected by metagenomics for which there is an associated treatment, and that would therefore be truly actionable, are also detectable by 16S/ITS testing (or 16S/ITS-NGS). This calls into question the utility of metagenomics in many diagnostic cases.

One of the main requirements for laboratory validity is the inclusion of reference standards and controls when performing mNGS assays; they are needed to ensure the quality and stability of the technique over time.

Sense and sensitivity

In clinical microbiology labs, the quantitation of microbial burden is considered a routine function as it is associated with the severity and progression of the disease. To achieve a good quantitation a high sensitivity of the technique is needed.

Whereas interfering substances are a familiar problem in clinical chemistry or PCR diagnostics, the degree of interference from host nucleic acids (for example, in tissue biopsies) or nonpathogen nucleic acids (for example, in stool) in metagenomics is a new twist. In addition, because of the size of the human genome relative to microbial genomes, interference can occur even at low levels of contaminating material.

Another challenge for clinical metagenomics with regard to sensitivity is the diagnosis of coinfections in which high-titer pathogens are present; these can generate biased results because they may disproportionately soak up reads, making it difficult to detect the less predominant pathogens.

In addition to issues with interfering substances, especially in diagnostics, accurate quantitation and sensitivity are essential, as confusion in the results ultimately affects a third party, the patient. For this reason, practitioners currently have to be keenly aware of the index-swapping issues associated with Illumina sequencing, which can cause reads to be assigned to incorrectly barcoded samples.

Since metagenomics has typically been used on patients for whom every other test to date has been negative, questions surrounding analytical sensitivity have been less germane. But with ruling out infectious causes being one of the more important roles for clinical metagenomics, it is essential to be able to sequence deeply enough to achieve adequate sensitivity. One way could be to develop novel library preparation techniques.

Cost considerations

In fact, the Illumina monopoly on high-quality next-generation sequencing reagents and the need for accurate and deep sequencing in metagenomics mean that the sequencing reagents alone cost more than FDA-approved syndromic testing panels. Additional direct costs of metagenomics, such as extraction, library preparation, and computational analysis, also have to be considered.

In general, metagenomic sequencing is most useful and cost efficient for pathogen discovery when at least one of the following criteria are met:

  1. the identification of the organism is not sufficient (one desires to go beyond discovery to produce data for genomic characterization),
  2. a coinfection is suspected,
  3. other simpler assays are ineffective or will take an inordinate amount of time,
  4. screening of environmental samples for previously undescribed or divergent pathogens.

Atomic battery

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Atomic_battery

An atomic battery, nuclear battery, radioisotope battery or radioisotope generator is a device which uses energy from the decay of a radioactive isotope to generate electricity. Like nuclear reactors, they generate electricity from nuclear energy, but differ in that they do not use a chain reaction. Although commonly called batteries, they are technically not electrochemical and cannot be charged or recharged. They are very costly, but have an extremely long life and high energy density, and so they are typically used as power sources for equipment that must operate unattended for long periods of time, such as spacecraft, pacemakers, underwater systems and automated scientific stations in remote parts of the world.

Nuclear battery technology began in 1913, when Henry Moseley first demonstrated a current generated by charged particle radiation. The field received considerable in-depth research attention for applications requiring long-life power sources for space needs during the 1950s and 1960s. In 1954 RCA researched a small atomic battery for small radio receivers and hearing aids. Since RCA's initial research and development in the early 1950s, many types and methods have been designed to extract electrical energy from nuclear sources. The scientific principles are well known, but modern nano-scale technology and new wide-bandgap semiconductors have created new devices and interesting material properties not previously available.

Nuclear batteries can be classified by energy conversion technology into two main groups: thermal converters and non-thermal converters. The thermal types convert some of the heat generated by the nuclear decay into electricity. The most notable example is the radioisotope thermoelectric generator (RTG), often used in spacecraft. The non-thermal converters extract energy directly from the emitted radiation, before it is degraded into heat. They are easier to miniaturize and do not require a thermal gradient to operate, so they are suitable for use in small-scale applications. The most notable example is the betavoltaic cell.

Atomic batteries usually have an efficiency of 0.1–5%. High-efficiency betavoltaic devices can reach 6–8% efficiency.

Thermal conversion

Thermionic conversion

A thermionic converter consists of a hot electrode, which thermionically emits electrons over a space-charge barrier to a cooler electrode, producing a useful power output. Caesium vapor is used to optimize the electrode work functions and provide an ion supply (by surface ionization) to neutralize the electron space charge.

Thermoelectric conversion

A radioisotope-powered cardiac pacemaker developed by the Atomic Energy Commission, planned to stimulate the pulsing action of a malfunctioning heart. Circa 1967.

A radioisotope thermoelectric generator (RTG) uses thermocouples. Each thermocouple is formed from two wires of different metals (or other materials). A temperature gradient along the length of each wire produces a voltage gradient from one end of the wire to the other; but the different materials produce different voltages per degree of temperature difference. By connecting the wires at one end, heating that end but cooling the other end, a usable, but small (millivolts), voltage is generated between the unconnected wire ends. In practice, many are connected in series (or in parallel) to generate a larger voltage (or current) from the same heat source, as heat flows from the hot ends to the cold ends. Metal thermocouples have low thermal-to-electrical efficiency. However, the carrier density and charge can be adjusted in semiconductor materials such as bismuth telluride and silicon germanium to achieve much higher conversion efficiencies.
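
As a rough numerical illustration of this principle, the sketch below estimates the open-circuit voltage of a string of thermocouples and the power delivered into a matched load from an assumed per-couple Seebeck coefficient, temperature difference and internal resistance. All parameter values are illustrative round numbers, not the specifications of any real RTG.

    # Illustrative thermoelectric estimate (all parameter values are assumptions).
    # Open-circuit voltage of n couples in series: V = n * S * dT.
    # Maximum power into a matched load:           P = V**2 / (4 * R_internal).
    n_couples = 500       # thermocouples connected in series
    seebeck = 200e-6      # effective Seebeck coefficient per couple, V/K
    delta_t = 400.0       # hot-to-cold temperature difference, K
    r_internal = 2.0      # total internal resistance of the string, ohms

    v_open = n_couples * seebeck * delta_t       # open-circuit voltage, V
    p_max = v_open ** 2 / (4.0 * r_internal)     # matched-load power, W

    print(f"open-circuit voltage: {v_open:.1f} V")    # 40.0 V with these numbers
    print(f"matched-load power:   {p_max:.1f} W")     # 200.0 W with these numbers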

Thermophotovoltaic conversion

Thermophotovoltaic (TPV) cells work by the same principles as a photovoltaic cell, except that they convert infrared light (rather than visible light) emitted by a hot surface, into electricity. Thermophotovoltaic cells have an efficiency slightly higher than thermoelectric couples and can be overlaid on thermoelectric couples, potentially doubling efficiency. The University of Houston TPV Radioisotope Power Conversion Technology development effort is aiming at combining thermophotovoltaic cells concurrently with thermocouples to provide a 3- to 4-fold improvement in system efficiency over current thermoelectric radioisotope generators.

Stirling generators

A Stirling radioisotope generator is a Stirling engine driven by the temperature difference produced by a radioisotope. A more efficient version, the advanced Stirling radioisotope generator, was under development by NASA, but was cancelled in 2013 due to large-scale cost overruns.

Non-thermal conversion

Non-thermal converters extract energy from emitted radiation before it is degraded into heat. Unlike thermoelectric and thermionic converters, their output does not depend on a temperature difference. Non-thermal generators can be classified by the type of particle used and by the mechanism by which their energy is converted.

Electrostatic conversion

Energy can be extracted from emitted charged particles when their charge builds up in a conductor, thus creating an electrostatic potential. Without a dissipation mode, the voltage can increase up to the energy of the radiated particles, which may range from several kilovolts (for beta radiation) up to megavolts (alpha radiation). The built-up electrostatic energy can be turned into usable electricity in one of the following ways.

Direct-charging generator

A direct-charging generator consists of a capacitor charged by the current of charged particles from a radioactive layer deposited on one of the electrodes. Spacing can be either vacuum or dielectric. Negatively charged beta particles or positively charged alpha particles, positrons or fission fragments may be utilized. Although this form of nuclear-electric generator dates back to 1913, few applications have been found in the past for the extremely low currents and inconveniently high voltages provided by direct-charging generators. Oscillator/transformer systems are employed to reduce the voltages, then rectifiers are used to transform the AC power back to direct current.
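
The scale of the currents and voltages involved can be sketched with a simple charging model: a beta source of a given activity delivers a tiny current to a capacitor, whose voltage rises linearly until leakage or breakdown intervenes. The activity, collection fraction and capacitance below are assumptions chosen only to show the orders of magnitude.

    # Illustrative direct-charging estimate (all numbers are assumptions).
    # Charging current: I = f * A * e, where A is the activity (decays/s) and
    # f the fraction of emitted betas collected. Ideal capacitor: V(t) = I*t/C.
    e = 1.602e-19          # elementary charge, coulombs
    activity = 3.7e10      # assumed source activity, decays/s (about 1 curie)
    fraction = 0.5         # assumed fraction of beta particles collected
    capacitance = 10e-12   # assumed collector capacitance, farads

    current = fraction * activity * e              # charging current, amperes
    for t in (1.0, 10.0, 60.0):                    # elapsed time, seconds
        voltage = current * t / capacitance
        print(f"t = {t:4.0f} s: I = {current:.2e} A, V = {voltage / 1e3:6.1f} kV")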

English physicist H. G. J. Moseley constructed the first of these. Moseley's apparatus consisted of a glass globe silvered on the inside with a radium emitter mounted on the tip of a wire at the center. The charged particles from the radium created a flow of electricity as they moved quickly from the radium to the inside surface of the sphere. As late as 1945 the Moseley model guided other efforts to build experimental batteries generating electricity from the emissions of radioactive elements.

Electromechanical conversion

Electromechanical atomic batteries use the buildup of charge between two plates to pull one bendable plate towards the other until the two plates touch and discharge, equalizing the electrostatic buildup, after which the bendable plate springs back. The mechanical motion produced can be used to generate electricity through the flexing of a piezoelectric material or through a linear generator. Milliwatts of power are produced in pulses whose frequency depends on the charge rate, in some cases multiple times per second (35 Hz).

Radiovoltaic conversion

A radiovoltaic (RV) device converts the energy of ionizing radiation directly into electricity using a semiconductor junction, similar to the conversion of photons into electricity in a photovoltaic cell. Depending on the type of radiation targeted, these devices are called alphavoltaic (AV, αV), betavoltaic (BV, βV) and/or gammavoltaic (GV, γV). Betavoltaics have traditionally received the most attention since (low-energy) beta emitters cause the least amount of radiative damage, thus allowing a longer operating life and less shielding. Interest in alphavoltaic and (more recently) gammavoltaic devices is driven by their potential higher efficiency.

Alphavoltaic conversion

Alphavoltaic devices use a semiconductor junction to produce electrical energy from energetic alpha particles.

Betavoltaic conversion

Betavoltaic devices use a semiconductor junction to produce electrical energy from energetic beta particles (electrons). A commonly used source is the hydrogen isotope tritium.

Betavoltaic devices are particularly well-suited to low-power electrical applications where long life of the energy source is needed, such as implantable medical devices or military and space applications.
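
A back-of-the-envelope estimate gives a feel for the power levels involved: the thermal power of a beta source is its activity times the average beta energy, and the electrical output is that power times the conversion efficiency. The source size and efficiency below are assumptions; the average tritium beta energy of roughly 5.7 keV is a commonly quoted approximate value.

    # Illustrative tritium betavoltaic estimate (source size and efficiency assumed).
    # P_thermal  = activity * mean beta energy
    # P_electric = P_thermal * conversion efficiency
    ev = 1.602e-19                # joules per electronvolt
    activity = 3.7e10             # assumed source: about 1 curie of tritium, decays/s
    mean_beta_energy_ev = 5.7e3   # approximate mean tritium beta energy, eV
    efficiency = 0.06             # assumed conversion efficiency (6%)

    p_thermal = activity * mean_beta_energy_ev * ev
    p_electric = p_thermal * efficiency
    print(f"thermal power:    {p_thermal * 1e6:.1f} microwatts")   # ~34 uW
    print(f"electrical power: {p_electric * 1e6:.1f} microwatts")  # ~2 uW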

Gammavoltaic conversion

Gammavoltaic devices use a semiconductor junction to produce electrical energy from energetic gamma particles (high-energy photons). They have only been considered in the 2010s but were proposed as early as 1981.

A gammavoltaic effect has been reported in perovskite solar cells. Another patented design involves scattering of the gamma particle until its energy has decreased enough to be absorbed in a conventional photovoltaic cell. Gammavoltaic designs using diamond and Schottky diodes are also being investigated.

Radiophotovoltaic (optoelectric) conversion

In a radiophotovoltaic (RPV) device the energy conversion is indirect: the emitted particles are first converted into light using a radioluminescent material (a scintillator or phosphor), and the light is then converted into electricity using a photovoltaic cell. Depending on the type of particle targeted, the conversion type can be more precisely specified as alphaphotovoltaic (APV or α-PV), betaphotovoltaic (BPV or β-PV) or gammaphotovoltaic (GPV or γ-PV).

Radiophotovoltaic conversion can be combined with radiovoltaic conversion to increase the conversion efficiency.

Pacemakers

Medtronic and Alcatel developed a plutonium-powered pacemaker, the Numec NU-5, powered by a 2.5 Ci slug of plutonium-238, first implanted in a human patient in 1970. The 139 Numec NU-5 nuclear pacemakers implanted in the 1970s are expected to never need replacing, an advantage over non-nuclear pacemakers, which require surgical replacement of their batteries every 5 to 10 years. The plutonium "batteries" are expected to produce enough power to drive the circuit for longer than the 88-year half-life of the plutonium.
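
Because the electrical output falls off with the decay of the fuel, the usable lifetime of such a device can be sketched with the exponential decay law. Only the roughly 88-year half-life comes from the text above; the initial output and the minimum power the circuit needs are hypothetical values for illustration.

    # Output decay of a Pu-238 powered device (initial and minimum power assumed;
    # the ~88-year half-life is the figure quoted above).
    import math

    half_life_years = 88.0
    p_initial_uw = 300.0    # assumed initial electrical output, microwatts
    p_required_uw = 50.0    # assumed minimum power the circuit needs, microwatts

    for years in (10, 25, 50, 88):
        p = p_initial_uw * 0.5 ** (years / half_life_years)
        print(f"after {years:3d} years: {p:6.1f} microwatts")

    # Time until the output falls to the assumed minimum requirement:
    t_limit = half_life_years * math.log(p_initial_uw / p_required_uw) / math.log(2)
    print(f"output stays above the assumed minimum for about {t_limit:.0f} years")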

Radioisotopes used

Atomic batteries use radioisotopes that produce low energy beta particles or sometimes alpha particles of varying energies. Low energy beta particles are needed to prevent the production of high energy penetrating Bremsstrahlung radiation that would require heavy shielding. Radioisotopes such as tritium, nickel-63, promethium-147, and technetium-99 have been tested. Plutonium-238, curium-242, curium-244 and strontium-90 have been used. Besides the nuclear properties of the used isotope, there are also the issues of chemical properties and availability. A product deliberately produced via neutron irradiation or in a particle accelerator is more difficult to obtain than a fission product easily extracted from spent nuclear fuel.

Plutonium-238 must be deliberately produced via neutron irradiation of neptunium-237, but it can be easily converted into a stable plutonium oxide ceramic. Strontium-90 is easily extracted from spent nuclear fuel but must be converted into the perovskite form strontium titanate to reduce its chemical mobility, cutting power density in half. Caesium-137, another high-yield nuclear fission product, is rarely used in atomic batteries because it is difficult to convert into chemically inert substances. Another undesirable property of Cs-137 extracted from spent nuclear fuel is that it is contaminated with other isotopes of caesium, which reduce power density further.

Micro-batteries

In the field of microelectromechanical systems (MEMS), nuclear engineers at the University of Wisconsin, Madison have explored the possibility of producing minuscule batteries which exploit radioactive nuclei of substances such as polonium or curium to produce electric energy. As an example of an integrated, self-powered application, the researchers have created an oscillating cantilever beam that is capable of consistent, periodic oscillations over very long time periods without the need for refueling. Ongoing work demonstrates that this cantilever is capable of radio-frequency transmission, allowing MEMS devices to communicate with one another wirelessly.

These micro-batteries are very light and deliver enough energy to serve as a power supply for MEMS devices and, further, for nanodevices.

The radiation energy released is transformed into electric energy, which is restricted to the area of the device that contains the processor and the micro-battery that supplies it with energy.

Age of Earth

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Age_of_Earth

The Blue Marble, Earth as seen in 1972 from Apollo 17

The age of Earth is estimated to be 4.54 ± 0.05 billion years (4.54 × 10⁹ years ± 1%). This age may represent the age of Earth's accretion, or core formation, or of the material from which Earth formed. This dating is based on evidence from radiometric age-dating of meteorite material and is consistent with the radiometric ages of the oldest-known terrestrial and lunar samples.

Following the development of radiometric age-dating in the early 20th century, measurements of lead in uranium-rich minerals showed that some were in excess of a billion years old. The oldest such minerals analyzed to date—small crystals of zircon from the Jack Hills of Western Australia—are at least 4.404 billion years old. Calcium–aluminium-rich inclusions—the oldest known solid constituents within meteorites that are formed within the Solar System—are 4.567 billion years old, giving a lower limit for the age of the Solar System.

It is hypothesised that the accretion of Earth began soon after the formation of the calcium-aluminium-rich inclusions and the meteorites. Because the time this accretion process took is not yet known, and predictions from different accretion models range from a few million up to about 100 million years, the difference between the age of Earth and of the oldest rocks is difficult to determine. It is also difficult to determine the exact age of the oldest rocks on Earth, exposed at the surface, as they are aggregates of minerals of possibly different ages.

Development of modern geologic concepts

Studies of strata—the layering of rocks and earth—gave naturalists an appreciation that Earth may have been through many changes during its existence. These layers often contained fossilized remains of unknown creatures, leading some to interpret a progression of organisms from layer to layer.

Nicolas Steno in the 17th century was one of the first naturalists to appreciate the connection between fossil remains and strata. His observations led him to formulate important stratigraphic concepts (i.e., the "law of superposition" and the "principle of original horizontality"). In the 1790s, William Smith hypothesized that if two layers of rock at widely differing locations contained similar fossils, then it was very plausible that the layers were the same age. Smith's nephew and student, John Phillips, later calculated by such means that Earth was about 96 million years old.

In the mid-18th century, the naturalist Mikhail Lomonosov suggested that Earth had been created separately from, and several hundred thousand years before, the rest of the universe. Lomonosov's ideas were mostly speculative. In 1779 the Comte du Buffon tried to obtain a value for the age of Earth using an experiment: He created a small globe that resembled Earth in composition and then measured its rate of cooling. This led him to estimate that Earth was about 75,000 years old.

Other naturalists used these hypotheses to construct a history of Earth, though their timelines were inexact as they did not know how long it took to lay down stratigraphic layers. In 1830, geologist Charles Lyell, developing ideas found in James Hutton's works, popularized the concept that the features of Earth were in perpetual change, eroding and reforming continuously, and the rate of this change was roughly constant. This was a challenge to the traditional view, which saw the history of Earth as dominated by intermittent catastrophes. Many naturalists were influenced by Lyell to become "uniformitarians" who believed that changes were constant and uniform.

In 1862, the physicist William Thomson, 1st Baron Kelvin published calculations that fixed the age of Earth at between 20 million and 400 million years. He assumed that Earth had formed as a completely molten object, and determined the amount of time it would take for the near-surface temperature gradient to decrease to its present value. His calculations did not account for heat produced via radioactive decay (a then unknown process) or, more significantly, convection inside Earth, which allows the temperature in the upper mantle to remain high much longer, maintaining a high thermal gradient in the crust much longer. Even more constraining were Kelvin's estimates of the age of the Sun, which were based on estimates of its thermal output and a theory that the Sun obtains its energy from gravitational collapse; Kelvin estimated that the Sun is about 20 million years old.
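
Kelvin's approach can be reproduced in outline from the cooling half-space solution of the heat conduction equation: for an initially uniform temperature T0 and a measured present-day surface gradient G, the surface gradient decays as G(t) = T0 / sqrt(pi * kappa * t), so the elapsed time is t = T0^2 / (pi * kappa * G^2), where kappa is the thermal diffusivity. The numbers below are modern-style round values chosen for illustration, not Kelvin's own inputs.

    # Order-of-magnitude reconstruction of a Kelvin-style conductive cooling age.
    # Cooling half-space: G(t) = T0 / sqrt(pi*kappa*t)  =>  t = T0**2 / (pi*kappa*G**2).
    # Parameter values are illustrative assumptions.
    import math

    t0 = 2000.0       # assumed initial temperature excess, kelvin
    kappa = 1.0e-6    # thermal diffusivity of rock, m^2/s (typical order of magnitude)
    gradient = 0.03   # present near-surface gradient, K/m (about 30 K per km)

    t_seconds = t0 ** 2 / (math.pi * kappa * gradient ** 2)
    t_myr = t_seconds / (3.156e7 * 1e6)    # seconds -> million years
    print(f"conductive cooling age: about {t_myr:.0f} million years")
    # Ignoring radiogenic heating and mantle convection, as Kelvin did, gives an
    # age of only a few tens of millions of years with these values.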

William Thomson (Lord Kelvin)

Geologists such as Charles Lyell had trouble accepting such a short age for Earth. For biologists, even 100 million years seemed much too short to be plausible. In Charles Darwin's theory of evolution, the process of random heritable variation with cumulative selection requires great durations of time, and Darwin himself stated that Lord Kelvin's estimates did not appear to provide enough. According to modern biology, the total evolutionary history from the beginning of life to today has taken place over the 3.5 to 3.8 billion years that have passed since the last universal ancestor of all living organisms, as shown by geological dating.

In a lecture in 1869, Darwin's great advocate, Thomas H. Huxley, attacked Thomson's calculations, suggesting they appeared precise in themselves but were based on faulty assumptions. The physicist Hermann von Helmholtz (in 1856) and astronomer Simon Newcomb (in 1892) contributed their own calculations of 22 and 18 million years, respectively, to the debate: they independently calculated the amount of time it would take for the Sun to condense down to its current diameter and brightness from the nebula of gas and dust from which it was born. Their values were consistent with Thomson's calculations. However, they assumed that the Sun was only glowing from the heat of its gravitational contraction. The process of solar nuclear fusion was not yet known to science.

In 1895 John Perry challenged Kelvin's figure on the basis of his assumptions on conductivity, and Oliver Heaviside entered the dialogue, considering it "a vehicle to display the ability of his operator method to solve problems of astonishing complexity."

Other scientists backed up Thomson's figures. Charles Darwin's son, the astronomer George H. Darwin, proposed that Earth and Moon had broken apart in their early days when they were both molten. He calculated the amount of time it would have taken for tidal friction to give Earth its current 24-hour day. His value of 56 million years added additional evidence that Thomson was on the right track.

The last estimate Thomson gave, in 1897, was: "that it was more than 20 and less than 40 million years old, and probably much nearer 20 than 40". In 1899 and 1900, John Joly calculated the rate at which the oceans should have accumulated salt from erosion processes, and determined that the oceans were about 80 to 100 million years old.

Radiometric dating

Overview

By their chemical nature, rock minerals contain certain elements and not others; but in rocks containing radioactive isotopes, the process of radioactive decay generates exotic elements over time. By measuring the concentration of the stable end product of the decay, coupled with knowledge of the half life and initial concentration of the decaying element, the age of the rock can be calculated. Typical radioactive end products are argon from decay of potassium-40, and lead from decay of uranium and thorium. If the rock becomes molten, as happens in Earth's mantle, such nonradioactive end products typically escape or are redistributed. Thus the age of the oldest terrestrial rock gives a minimum for the age of Earth, assuming that no rock has been intact for longer than Earth itself.
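
The basic age equation follows directly from exponential decay: in a closed system with no initial daughter, the accumulated daughter is D = N(e^(λt) − 1), where N is the surviving parent, so t = ln(1 + D/N)/λ with λ = ln 2 / half-life. The sketch below applies this to a hypothetical measured daughter-to-parent ratio, using the well-known half-life of uranium-238 (about 4.47 billion years) and treating the intermediate decay products as negligible.

    # Basic radiometric age calculation for a closed system with no initial daughter:
    #   D = N * (exp(lambda * t) - 1)   =>   t = ln(1 + D/N) / lambda
    # The measured ratio below is hypothetical.
    import math

    half_life = 4.47e9                      # years (uranium-238)
    decay_const = math.log(2) / half_life   # lambda, 1/years

    daughter_per_parent = 0.6               # hypothetical measured Pb-206/U-238 atom ratio
    age = math.log(1.0 + daughter_per_parent) / decay_const
    print(f"model age: {age / 1e9:.2f} billion years")   # about 3.0 billion years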

Convective mantle and radioactivity

In 1892, Thomson had been made Lord Kelvin in appreciation of his many scientific accomplishments. Kelvin calculated the age of Earth by using thermal gradients, and he arrived at an estimate of about 100 million years. He did not realize that Earth's mantle was convecting, and this invalidated his estimate. In 1895, John Perry produced an age-of-Earth estimate of 2 to 3 billion years using a model of a convective mantle and thin crust, however his work was largely ignored. Kelvin stuck by his estimate of 100 million years, and later reduced it to about 20 million years.

The discovery of radioactivity introduced another factor in the calculation. After Henri Becquerel's initial discovery in 1896, Marie and Pierre Curie discovered the radioactive elements polonium and radium in 1898; and in 1903, Pierre Curie and Albert Laborde announced that radium produces enough heat to melt its own weight in ice in less than an hour. Geologists quickly realized that this upset the assumptions underlying most calculations of the age of Earth. These had assumed that the original heat of Earth and the Sun had dissipated steadily into space, but radioactive decay meant that this heat had been continually replenished. George Darwin and John Joly were the first to point this out, in 1903.

Invention of radiometric dating

Radioactivity, which had overthrown the old calculations, yielded a bonus by providing a basis for new calculations, in the form of radiometric dating.

Ernest Rutherford and Frederick Soddy jointly had continued their work on radioactive materials and concluded that radioactivity was due to a spontaneous transmutation of atomic elements. In radioactive decay, an element breaks down into another, lighter element, releasing alpha, beta, or gamma radiation in the process. They also determined that a particular isotope of a radioactive element decays into another element at a distinctive rate. This rate is given in terms of a "half-life", or the amount of time it takes half of a mass of that radioactive material to break down into its "decay product".

Some radioactive materials have short half-lives; some have long half-lives. Uranium and thorium have long half-lives, and so persist in Earth's crust, but radioactive elements with short half-lives have generally disappeared. This suggested that it might be possible to measure the age of Earth by determining the relative proportions of radioactive materials in geological samples. In reality, radioactive elements do not always decay into nonradioactive ("stable") elements directly, instead, decaying into other radioactive elements that have their own half-lives and so on, until they reach a stable element. These "decay chains", such as the uranium-radium and thorium series, were known within a few years of the discovery of radioactivity and provided a basis for constructing techniques of radiometric dating.

The pioneers of radioactivity were chemist Bertram B. Boltwood and the energetic Rutherford. Boltwood had conducted studies of radioactive materials as a consultant, and when Rutherford lectured at Yale in 1904, Boltwood was inspired to describe the relationships between elements in various decay series. Late in 1904, Rutherford took the first step toward radiometric dating by suggesting that the alpha particles released by radioactive decay could be trapped in a rocky material as helium atoms. At the time, Rutherford was only guessing at the relationship between alpha particles and helium atoms, but he would prove the connection four years later.

Soddy and Sir William Ramsay had just determined the rate at which radium produces alpha particles, and Rutherford proposed that he could determine the age of a rock sample by measuring its concentration of helium. He dated a rock in his possession to an age of 40 million years by this technique. Rutherford wrote,

I came into the room, which was half dark, and presently spotted Lord Kelvin in the audience and realized that I was in trouble at the last part of my speech dealing with the age of the Earth, where my views conflicted with his. To my relief, Kelvin fell fast asleep, but as I came to the important point, I saw the old bird sit up, open an eye, and cock a baleful glance at me! Then a sudden inspiration came, and I said, "Lord Kelvin had limited the age of the Earth, provided no new source was discovered. That prophetic utterance refers to what we are now considering tonight, radium!" Behold! the old boy beamed upon me.

Rutherford assumed that the rate of decay of radium as determined by Ramsay and Soddy was accurate, and that helium did not escape from the sample over time. Rutherford's scheme was inaccurate, but it was a useful first step.

Boltwood focused on the end products of decay series. In 1905, he suggested that lead was the final stable product of the decay of radium. It was already known that radium was an intermediate product of the decay of uranium. Rutherford joined in, outlining a decay process in which radium emitted five alpha particles through various intermediate products to end up with lead, and speculated that the radium–lead decay chain could be used to date rock samples. Boltwood did the legwork, and by the end of 1905 had provided dates for 26 separate rock samples, ranging from 92 to 570 million years. He did not publish these results, which was fortunate because they were flawed by measurement errors and poor estimates of the half-life of radium. Boltwood refined his work and finally published the results in 1907.

Boltwood's paper pointed out that samples taken from comparable layers of strata had similar lead-to-uranium ratios, and that samples from older layers had a higher proportion of lead, except where there was evidence that lead had leached out of the sample. His studies were flawed by the fact that the decay series of thorium was not understood, which led to incorrect results for samples that contained both uranium and thorium. However, his calculations were far more accurate than any that had been performed to that time. Refinements in the technique would later give ages for Boltwood's 26 samples of 410 million to 2.2 billion years.

Arthur Holmes establishes radiometric dating

Although Boltwood published his paper in a prominent geological journal, the geological community had little interest in radioactivity. Boltwood gave up work on radiometric dating and went on to investigate other decay series. Rutherford remained mildly curious about the issue of the age of Earth but did little work on it.

Robert Strutt tinkered with Rutherford's helium method until 1910 and then ceased. However, Strutt's student Arthur Holmes became interested in radiometric dating and continued to work on it after everyone else had given up. Holmes focused on lead dating, because he regarded the helium method as unpromising. He performed measurements on rock samples and concluded in 1911 that the oldest (a sample from Ceylon) was about 1.6 billion years old. These calculations were not particularly trustworthy. For example, he assumed that the samples had contained only uranium and no lead when they were formed.

More important research was published in 1913. It showed that elements generally exist in multiple variants with different masses, or "isotopes". In the 1930s, isotopes would be shown to have nuclei with differing numbers of the neutral particles known as "neutrons". In that same year, other research was published establishing the rules for radioactive decay, allowing more precise identification of decay series.

Many geologists felt these new discoveries made radiometric dating so complicated as to be worthless. Holmes felt that they gave him tools to improve his techniques, and he plodded ahead with his research, publishing before and after the First World War. His work was generally ignored until the 1920s, though in 1917 Joseph Barrell, a professor of geology at Yale, redrew geological history as it was understood at the time to conform to Holmes's findings in radiometric dating. Barrell's research determined that the layers of strata had not all been laid down at the same rate, and so current rates of geological change could not be used to provide accurate timelines of the history of Earth.

Holmes' persistence finally began to pay off in 1921, when the speakers at the yearly meeting of the British Association for the Advancement of Science came to a rough consensus that Earth was a few billion years old, and that radiometric dating was credible. Holmes published The Age of the Earth, an Introduction to Geological Ideas in 1927 in which he presented a range of 1.6 to 3.0 billion years. No great push to embrace radiometric dating followed, however, and the die-hards in the geological community stubbornly resisted. They had never cared for attempts by physicists to intrude in their domain, and had successfully ignored them so far. The growing weight of evidence finally tilted the balance in 1931, when the National Research Council of the US National Academy of Sciences decided to resolve the question of the age of Earth by appointing a committee to investigate. Holmes, being one of the few people on Earth who was trained in radiometric dating techniques, was a committee member, and in fact wrote most of the final report.

Thus, Arthur Holmes' report concluded that radioactive dating was the only reliable means of pinning down geological time scales. Questions of bias were deflected by the great and exacting detail of the report. It described the methods used, the care with which measurements were made, and their error bars and limitations.

Modern radiometric dating

Radiometric dating continues to be the predominant way scientists date geologic timescales. Techniques for radioactive dating have been tested and fine-tuned on an ongoing basis since the 1960s. Forty or so different dating techniques have been utilized to date, working on a wide variety of materials. Dates for the same sample using these different techniques are in very close agreement on the age of the material.

Possible contamination problems do exist, but they have been studied and dealt with by careful investigation, leading to sample preparation procedures being minimized to limit the chance of contamination.

Why meteorites were used

An age of 4.55 ± 0.07 billion years, very close to today's accepted age, was determined by Clair Cameron Patterson using uranium–lead isotope dating (specifically lead–lead dating) on several meteorites including the Canyon Diablo meteorite and published in 1956.

Lead isotope isochron diagram showing data used by Patterson to determine the age of Earth in 1956.

The quoted age of Earth is derived, in part, from the Canyon Diablo meteorite for several important reasons and is built upon a modern understanding of cosmochemistry built up over decades of research.

Most geological samples from Earth are unable to give a direct date of the formation of Earth from the solar nebula because Earth has undergone differentiation into the core, mantle, and crust, and this has then undergone a long history of mixing and unmixing of these sample reservoirs by plate tectonics, weathering and hydrothermal circulation.

All of these processes may adversely affect isotopic dating mechanisms because the sample cannot always be assumed to have remained as a closed system, by which it is meant that either the parent or daughter nuclide (a species of atom characterised by the number of neutrons and protons an atom contains) or an intermediate daughter nuclide may have been partially removed from the sample, which will skew the resulting isotopic date. To mitigate this effect it is usual to date several minerals in the same sample, to provide an isochron. Alternatively, more than one dating system may be used on a sample to check the date.
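
The isochron method can be illustrated numerically: cogenetic minerals with different parent-to-daughter ratios fall on a straight line whose slope is e^(λt) − 1, so fitting the line and inverting the slope gives the age without assuming the initial daughter abundance. The mineral data below are synthetic values constructed to lie on a roughly 4.5-billion-year rubidium–strontium isochron, and the decay constant is the commonly used value for rubidium-87; none of this is measured data from the meteorites discussed here.

    # Illustrative Rb-Sr isochron fit on synthetic data.
    # Cogenetic minerals plot on a line in (87Rb/86Sr, 87Sr/86Sr) space with
    # slope m = exp(lambda * t) - 1, so t = ln(1 + m) / lambda.
    import math
    import numpy as np

    lam = 1.42e-11   # commonly used 87Rb decay constant, 1/years

    # Synthetic mineral measurements: 87Rb/86Sr and 87Sr/86Sr ratios.
    rb_sr = np.array([0.10, 0.50, 1.00, 2.00, 3.50])
    sr_sr = np.array([0.7056, 0.7330, 0.7650, 0.8310, 0.9299])

    slope, intercept = np.polyfit(rb_sr, sr_sr, 1)   # least-squares isochron
    age = math.log(1.0 + slope) / lam

    print(f"isochron slope:      {slope:.4f}")                    # ~0.066
    print(f"initial 87Sr/86Sr:   {intercept:.4f}")                # ~0.699
    print(f"isochron age:        {age / 1e9:.2f} billion years")  # ~4.5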

Some meteorites are furthermore considered to represent the primitive material from which the accreting solar disk was formed. Some have behaved as closed systems (for some isotopic systems) soon after the solar disk and the planets formed. To date, these assumptions are supported by much scientific observation and repeated isotopic dates, and it is certainly a more robust hypothesis than that which assumes a terrestrial rock has retained its original composition.

Nevertheless, ancient Archaean lead ores of galena have been used to date the formation of Earth as these represent the earliest formed lead-only minerals on the planet and record the earliest homogeneous lead–lead isotope systems on the planet. These have returned age dates of 4.54 billion years with a precision of as little as 1% margin for error.

Statistics for several meteorites that have undergone isochron dating are as follows:

  1. St. Severin (ordinary chondrite)
     • Pb-Pb isochron: 4.543 ± 0.019 billion years
     • Sm-Nd isochron: 4.55 ± 0.33 billion years
     • Rb-Sr isochron: 4.51 ± 0.15 billion years
     • Re-Os isochron: 4.68 ± 0.15 billion years
  2. Juvinas (basaltic achondrite)
     • Pb-Pb isochron: 4.556 ± 0.012 billion years
     • Pb-Pb isochron: 4.540 ± 0.001 billion years
     • Sm-Nd isochron: 4.56 ± 0.08 billion years
     • Rb-Sr isochron: 4.50 ± 0.07 billion years
  3. Allende (carbonaceous chondrite)
     • Pb-Pb isochron: 4.553 ± 0.004 billion years
     • Ar-Ar age spectrum: 4.52 ± 0.02 billion years
     • Ar-Ar age spectrum: 4.55 ± 0.03 billion years
     • Ar-Ar age spectrum: 4.56 ± 0.05 billion years

Canyon Diablo meteorite

Barringer Crater, Arizona, where the Canyon Diablo meteorite was found.

The Canyon Diablo meteorite was used because it is both large and representative of a particularly rare type of meteorite that contains sulfide minerals (particularly troilite, FeS), metallic nickel-iron alloys, plus silicate minerals. This is important because the presence of the three mineral phases allows investigation of isotopic dates using samples that provide a great separation in concentrations between parent and daughter nuclides. This is particularly true of uranium and lead: lead is strongly chalcophilic and is found in the sulfide at a much greater concentration than in the silicate, in contrast to uranium. Because of this segregation of the parent and daughter nuclides during the formation of the meteorite, a much more precise date of the formation of the solar disk, and hence the planets, could be obtained than ever before.

Fragment of the Canyon Diablo iron meteorite.

The age determined from the Canyon Diablo meteorite has been confirmed by hundreds of other age determinations, from both terrestrial samples and other meteorites. The meteorite samples, however, show a spread from 4.53 to 4.58 billion years ago. This is interpreted as the duration of formation of the solar nebula and its collapse into the solar disk to form the Sun and the planets. This 50 million year time span allows for accretion of the planets from the original solar dust and meteorites.

The Moon, as another extraterrestrial body that has not undergone plate tectonics and that has no atmosphere, provides quite precise age dates from the samples returned from the Apollo missions. Rocks returned from the Moon have been dated at a maximum of 4.51 billion years old. Martian meteorites that have landed upon Earth have also been dated to around 4.5 billion years old by lead–lead dating. Lunar samples, since they have not been disturbed by weathering, plate tectonics or material moved by organisms, can also provide dating by direct electron microscope examination of cosmic ray tracks. The accumulation of dislocations generated by high energy cosmic ray particle impacts provides another confirmation of the isotopic dates. Cosmic ray dating is only useful on material that has not been melted, since melting erases the crystalline structure of the material, and wipes away the tracks left by the particles.

Altogether, the concordance of age dates of both the earliest terrestrial lead reservoirs and all other reservoirs within the Solar System found to date are used to support the fact that Earth and the rest of the Solar System formed at around 4.53 to 4.58 billion years ago.

Neurophilosophy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Neurophilosophy ...