
Friday, November 4, 2022

Massive parallel sequencing

From Wikipedia, the free encyclopedia

Massive parallel sequencing or massively parallel sequencing is any of several high-throughput approaches to DNA sequencing using the concept of massively parallel processing; it is also called next-generation sequencing (NGS) or second-generation sequencing. Some of these technologies emerged between 1994 and 1998 and have been commercially available since 2005. These technologies use miniaturized and parallelized platforms for sequencing of 1 million to 43 billion short reads (50 to 400 bases each) per instrument run.

Many NGS platforms differ in engineering configurations and sequencing chemistry. They share the technical paradigm of massive parallel sequencing via spatially separated, clonally amplified DNA templates or single DNA molecules in a flow cell. This design is very different from that of Sanger sequencing—also known as capillary sequencing or first-generation sequencing—which is based on electrophoretic separation of chain-termination products produced in individual sequencing reactions. This methodology allows sequencing to be completed on a larger scale.

NGS platforms

DNA sequencing with commercially available NGS platforms is generally conducted with the following steps. First, DNA sequencing libraries are generated by clonal amplification by PCR in vitro. Second, the DNA is sequenced by synthesis, such that the DNA sequence is determined by the addition of nucleotides to the complementary strand rather than through chain-termination chemistry. Third, the spatially segregated, amplified DNA templates are sequenced simultaneously in a massively parallel fashion without the requirement for a physical separation step. These steps are followed in most NGS platforms, but each utilizes a different strategy.

NGS parallelization of the sequencing reactions generates hundreds of megabases to gigabases of nucleotide sequence reads in a single instrument run. This has enabled a drastic increase in available sequence data and fundamentally changed genome sequencing approaches in the biomedical sciences. Newly emerging NGS technologies and instruments have further driven down the cost of sequencing, approaching the mark of $1,000 per genome.

The massively parallel sequencing platforms commercially available as of 2014 and their features are summarized in the table below. Because NGS technology is advancing rapidly, technical specifications and pricing are in flux.

An Illumina HiSeq 2000 sequencing machine
NGS platforms
Platform | Template preparation | Chemistry | Max read length (bases) | Run times (days) | Max Gb per run
Roche 454 | Clonal-emPCR | Pyrosequencing | 400‡ | 0.42 | 0.40-0.60
GS FLX Titanium | Clonal-emPCR | Pyrosequencing | 400‡ | 0.42 | 0.035
Illumina MiSeq | Clonal Bridge Amplification | Reversible Dye Terminator | 2x300 | 0.17-2.7 | 15
Illumina HiSeq | Clonal Bridge Amplification | Reversible Dye Terminator | 2x150 | 0.3-11 | 1000
Illumina Genome Analyzer IIX | Clonal Bridge Amplification | Reversible Dye Terminator | 2x150 | 2-14 | 95
Life Technologies SOLiD4 | Clonal-emPCR | Oligonucleotide 8-mer Chained Ligation | 20-45 | 4-7 | 35-50
Life Technologies Ion Proton | Clonal-emPCR | Native dNTPs, proton detection | 200 | 0.5 | 100
Complete Genomics | Gridded DNA-nanoballs | Oligonucleotide 9-mer Unchained Ligation | 7x10 | 11 | 3000
Helicos Biosciences Heliscope | Single Molecule | Reversible Dye Terminator | 35‡ | 8 | 25
Pacific Biosciences SMRT | Single Molecule | Phospholinked Fluorescent Nucleotides | 10,000 (N50); 30,000+ (max) | 0.08 | 0.5


Run times and gigabase (Gb) output per run for single-end sequencing are noted. Run times and outputs approximately double when performing paired-end sequencing. ‡Average read lengths for the Roche 454 and Helicos Biosciences platforms.

Template preparation methods for NGS

Two methods are used in preparing templates for NGS reactions: amplified templates originating from single DNA molecules, and single DNA molecule templates. For imaging systems which cannot detect single fluorescence events, amplification of DNA templates is required. The three most common amplification methods are emulsion PCR (emPCR), rolling circle and solid-phase amplification. The final distribution of templates can be spatially random or on a grid.

Emulsion PCR

In emulsion PCR methods, a DNA library is first generated through random fragmentation of genomic DNA. Single-stranded DNA fragments (templates) are attached to the surface of beads with adaptors or linkers, and one bead is attached to a single DNA fragment from the DNA library. The surface of the beads contains oligonucleotide probes with sequences that are complementary to the adaptors binding the DNA fragments. The beads are then compartmentalized into water-oil emulsion droplets. In the aqueous water-oil emulsion, each of the droplets capturing one bead is a PCR microreactor that produces amplified copies of the single DNA template.

Gridded rolling circle nanoballs

Amplification of a population of single DNA molecules by rolling circle amplification in solution is followed by capture on a grid of spots sized to be smaller than the DNAs to be immobilized.

DNA colony generation (Bridge amplification)

Forward and reverse primers are covalently attached at high density to the slide in a flow cell. The ratio of the primers to the template on the support defines the surface density of the amplified clusters. The flow cell is exposed to reagents for polymerase-based extension, and priming occurs as the free/distal end of a ligated fragment "bridges" to a complementary oligo on the surface. Repeated denaturation and extension result in localized amplification of DNA fragments in millions of separate locations across the flow cell surface. Solid-phase amplification produces 100–200 million spatially separated template clusters, providing free ends to which a universal sequencing primer is then hybridized to initiate the sequencing reaction. A patent for this technology was filed in 1997 by Pascal Mayer [fr], Eric Kawashima, and Laurent Farinelli at Glaxo Wellcome's Geneva Biomedical Research Institute (GBRI), and the method was publicly presented for the first time in 1998. In 1994, Chris Adams and Steve Kron had filed a patent on a similar but non-clonal surface amplification method, named "bridge amplification", which was adapted for clonal amplification in 1997 by Church and Mitra.

Single-molecule templates

Protocols requiring DNA amplification are often cumbersome to implement and may introduce sequencing errors. The preparation of single-molecule templates is more straightforward and does not require PCR, which can introduce errors in the amplified templates. AT-rich and GC-rich target sequences often show amplification bias, which results in their underrepresentation in genome alignments and assemblies. Single-molecule templates are usually immobilized on solid supports using one of at least three different approaches. In the first approach, spatially distributed individual primer molecules are covalently attached to the solid support. The template, which is prepared by randomly fragmenting the starting material into small sizes (for example, ~200–250 bp) and adding common adapters to the fragment ends, is then hybridized to the immobilized primer. In the second approach, spatially distributed single-molecule templates are covalently attached to the solid support by priming and extending single-stranded, single-molecule templates from immobilized primers. A common primer is then hybridized to the template. In either approach, DNA polymerase can bind to the immobilized primed template configuration to initiate the NGS reaction. Both of the above approaches are used by Helicos BioSciences. In a third approach, spatially distributed single polymerase molecules are attached to the solid support, to which a primed template molecule is bound. This approach is used by Pacific Biosciences. Larger DNA molecules (up to tens of thousands of base pairs) can be used with this technique and, unlike the first two approaches, the third approach can be used with real-time methods, resulting in potentially longer read lengths.

Sequencing approaches

Pyrosequencing

In 1996, Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm published their method of pyrosequencing. Pyrosequencing is a non-electrophoretic, bioluminescence method that measures the release of inorganic pyrophosphate by proportionally converting it into visible light using a series of enzymatic reactions. Unlike other sequencing approaches that use modified nucleotides to terminate DNA synthesis, the pyrosequencing method manipulates DNA polymerase by the single addition of a dNTP in limiting amounts. Upon incorporation of the complementary dNTP, DNA polymerase extends the primer and pauses. DNA synthesis is reinitiated following the addition of the next complementary dNTP in the dispensing cycle. The order and intensity of the light peaks are recorded as flowgrams, which reveal the underlying DNA sequence. 
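
A rough, hypothetical sketch of how a flowgram can be decoded is given below in Python; it illustrates only the principle (signal intensity proportional to homopolymer length), not any instrument's actual base-calling algorithm, and the function name, dispensation order and example values are invented for illustration.

# Simplified flowgram decoding sketch (illustrative only; real base callers model
# noise, signal normalization and homopolymer uncertainty far more carefully).
def decode_flowgram(dispensation_order, intensities):
    """Convert per-flow light intensities into a base sequence.

    dispensation_order: cyclic order in which dNTPs are dispensed, e.g. "TACG".
    intensities: one signal per dispensation; a value near N suggests a
                 homopolymer run of N identical bases was incorporated.
    """
    read = []
    for flow_index, signal in enumerate(intensities):
        base = dispensation_order[flow_index % len(dispensation_order)]
        run_length = round(signal)       # nearest integer = homopolymer length
        read.append(base * run_length)   # zero signal means no incorporation
    return "".join(read)

# Example: six flows with signals consistent with the sequence "TTACGGA".
print(decode_flowgram("TACG", [2.1, 1.0, 0.9, 2.0, 0.1, 1.1]))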

Sequencing by reversible terminator chemistry

This approach uses reversible terminator-bound dNTPs in a cyclic method that comprises nucleotide incorporation, fluorescence imaging and cleavage. A fluorescently labelled terminator is imaged as each dNTP is added and then cleaved to allow incorporation of the next base. These nucleotides are chemically blocked such that each incorporation is a unique event. An imaging step follows each base incorporation step, then the blocking group is chemically removed to prepare each strand for the next incorporation by DNA polymerase. This series of steps continues for a specific number of cycles, as determined by user-defined instrument settings. The 3' blocking groups were originally conceived as either enzymatically or chemically reversible; the chemical method has been the basis for the Solexa and Illumina machines. Sequencing by reversible terminator chemistry can be a four-colour cycle, as used by Illumina/Solexa, or a one-colour cycle, as used by Helicos BioSciences. Helicos BioSciences used "virtual terminators", which are unblocked terminators with a second nucleoside analogue that acts as an inhibitor. These terminators have the appropriate modifications for terminating or inhibiting groups so that DNA synthesis is terminated after a single base addition.

Sequencing-by-ligation mediated by ligase enzymes

In this approach, the sequence extension reaction is not carried out by polymerases but rather by DNA ligase and either one-base-encoded probes or two-base-encoded probes. In its simplest form, a fluorescently labelled probe hybridizes to its complementary sequence adjacent to the primed template. DNA ligase is then added to join the dye-labelled probe to the primer. Non-ligated probes are washed away, followed by fluorescence imaging to determine the identity of the ligated probe. The cycle can be repeated either by using cleavable probes to remove the fluorescent dye and regenerate a 5′-PO4 group for subsequent ligation cycles (chained ligation) or by removing and hybridizing a new primer to the template (unchained ligation).

Phospholinked Fluorescent Nucleotides or Real-time sequencing

Pacific Biosciences currently leads this method. The method of real-time sequencing involves imaging the continuous incorporation of dye-labelled nucleotides during DNA synthesis: single DNA polymerase molecules are attached to the bottom surface of individual zero-mode waveguide detectors (ZMW detectors) that can obtain sequence information while phospholinked nucleotides are being incorporated into the growing primer strand. Pacific Biosciences uses a unique DNA polymerase which better incorporates phospholinked nucleotides and enables the resequencing of closed circular templates. While single-read accuracy is 87%, consensus accuracy has been demonstrated at 99.999% with multi-kilobase read lengths. In 2015, Pacific Biosciences released a new sequencing instrument called the Sequel System, which increases capacity approximately 6.5-fold.
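
The gap between 87% single-pass accuracy and the reported consensus accuracy can be made intuitive with a back-of-the-envelope majority-vote calculation. The Python sketch below assumes independent, position-wise errors across repeated passes over a circular template, which is a deliberate oversimplification of how circular-consensus algorithms actually work; it is only meant to show why repeated passes drive the error rate down so quickly.

# Back-of-the-envelope estimate: probability that a strict majority of
# independent passes miscall a given position, given an 87% per-pass accuracy.
from math import comb

def majority_vote_error(per_pass_error, passes):
    need = passes // 2 + 1  # wrong passes required to outvote the correct calls
    return sum(comb(passes, k) * per_pass_error**k * (1 - per_pass_error)**(passes - k)
               for k in range(need, passes + 1))

for n in (1, 5, 9, 15):
    print(f"{n:>2} passes -> naive consensus error ~ {majority_vote_error(0.13, n):.2e}")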

History of neuroscience

From Wikipedia, the free encyclopedia

From the ancient Egyptian mummifications to 18th-century scientific research on "globules" and neurons, there is evidence of neuroscience practice throughout the early periods of history. The early civilizations lacked adequate means to obtain knowledge about the human brain. Their assumptions about the inner workings of the mind, therefore, were not accurate. Early views on the function of the brain regarded it to be a form of "cranial stuffing" of sorts. In ancient Egypt, from the late Middle Kingdom onwards, in preparation for mummification, the brain was regularly removed, for it was the heart that was assumed to be the seat of intelligence. According to Herodotus, during the first step of mummification: "The most perfect practice is to extract as much of the brain as possible with an iron hook, and what the hook cannot reach is mixed with drugs." Over the next five thousand years, this view came to be reversed; the brain is now known to be the seat of intelligence, although colloquial variations of the former remain as in "memorizing something by heart".

Antiquity

Hieroglyph designating the brain or skull in the Edwin Smith papyrus.

The earliest reference to the brain occurs in the Edwin Smith Surgical Papyrus, written in the 17th century BC. The hieroglyph for brain occurs eight times in this papyrus, which describes the symptoms, diagnosis, and prognosis of two patients, wounded in the head, who had compound fractures of the skull. The assessments of the papyrus's author (a battlefield surgeon) allude to ancient Egyptians having a vague recognition of the effects of head trauma. While the symptoms are well written and detailed, the absence of a medical precedent is apparent. The author notes "the pulsations of the exposed brain" and compares the surface of the brain to the rippling surface of copper slag (which indeed has a gyral-sulcal pattern). The laterality of injury was related to the laterality of symptom, and both aphasia ("he speaks not to thee") and seizures ("he shudders exceedingly") after head injury were described. Observations by ancient civilizations of the human brain suggest only a relative understanding of the basic mechanics and the importance of cranial security. Furthermore, considering that the general consensus of medical practice pertaining to human anatomy was based on myths and superstition, the thoughts of the battlefield surgeon appear to be empirical and based on logical deduction and simple observation.

In Ancient Greece, interest in the brain began with the work of Alcmaeon, who appeared to have dissected the eye and related the brain to vision. He also suggested that the brain, not the heart, was the organ that ruled the body (what Stoics would call the hegemonikon) and that the senses were dependent on the brain. According to ancient authorities, Alcmaeon believed the power of the brain to synthesize sensations made it also the seat of memories and thought. The author of On the Sacred Disease, part of the Hippocratic corpus, likewise believed the brain to be the seat of intelligence.

The debate regarding the hegemonikon persisted among ancient Greek philosophers and physicians for a very long time. Already in the 4th century BC, Aristotle thought that the heart was the seat of intelligence, while the brain was a cooling mechanism for the blood. He reasoned that humans are more rational than the beasts because, among other reasons, they have a larger brain to cool their hot-bloodedness. On the opposite end, during the Hellenistic period, Herophilus and Erasistratus of Alexandria engaged in studies that involved dissecting human bodies, providing evidence for the primacy of the brain. They affirmed the distinction between the cerebrum and the cerebellum, and identified the ventricles and the dura mater. Their works are now mostly lost, and we know about their achievements mostly through secondary sources. Some of their discoveries had to be rediscovered a millennium after their deaths.

During the Roman Empire, the Greek physician and philosopher Galen dissected the brains of oxen, Barbary apes, swine, and other non-human mammals. He concluded that, as the cerebellum was denser than the brain, it must control the muscles, while as the cerebrum was soft, it must be where the senses were processed. Galen further theorized that the brain functioned by the movement of animal spirits through the ventricles. He also noted that specific spinal nerves controlled specific muscles, and had the idea of the reciprocal action of muscles. Only in the 19th century, in the work of François Magendie and Charles Bell, would the understanding of spinal function surpass that of Galen.

Medieval to Early Modern

Islamic medicine in the Middle Ages was focused on how the mind and body interacted and emphasized a need to understand mental health. Circa 1000, Al-Zahrawi, living in Islamic Iberia, evaluated neurological patients and performed surgical treatments of head injuries, skull fractures, spinal injuries, hydrocephalus, subdural effusions and headache. In Persia, Avicenna (Ibn Sina) presented detailed knowledge about skull fractures and their surgical treatments. Avicenna is regarded by some as the father of modern medicine. He wrote 40 works on medicine, the most notable being the Qanun, a medical encyclopedia that would become a staple at universities for nearly a hundred years. He also explained phenomena such as insomnia, mania, hallucinations, nightmares, dementia, epilepsy, stroke, paralysis, vertigo, melancholia and tremors. He also described a condition similar to schizophrenia, which he called Junun Mufrit, characterized by agitation, behavioral and sleep disturbances, giving inappropriate answers to questions, and occasional inability to speak. Avicenna also discovered the cerebellar vermis, which he simply called the vermis, and the caudate nucleus. Both terms are still used in neuroanatomy today. He was also the first person to associate mental deficits with deficits in the brain's middle ventricle or frontal lobe. Abulcasis, Averroes, Avenzoar, and Maimonides, active in the medieval Muslim world, also described a number of medical problems related to the brain.

Between the 13th and 14th centuries, the first anatomy textbooks in Europe, which included a description of the brain, were written by Mondino de Luzzi and Guido da Vigevano.

Renaissance

One of Leonardo da Vinci's sketches of the human skull

Work by Andreas Vesalius on human cadavers found problems with the Galenic view of anatomy. Vesalius noted many structural characteristics of both the brain and general nervous system during his dissections. In addition to recording many anatomical features such as the putamen and corpus callosum, Vesalius proposed that the brain was made up of seven pairs of 'brain nerves', each with a specialized function. Other scholars furthered Vesalius' work by adding their own detailed sketches of the human brain.

Scientific Revolution

In the 17th century, René Descartes studied the physiology of the brain, proposing the theory of dualism to tackle the issue of the brain's relation to the mind. After recording the brain mechanisms responsible for circulating cerebrospinal fluid, he suggested that the pineal gland was where the mind interacted with the body. Jan Swammerdam placed a severed frog thigh muscle in an airtight syringe with a small amount of water in the tip; when he caused the muscle to contract by irritating the nerve, the water level did not rise but instead fell by a minute amount, debunking the balloonist theory. The demonstration that nerve stimulation led to movement had important implications, putting forward the idea that behaviour is based on stimuli. Thomas Willis studied the brain, nerves, and behavior to develop neurologic treatments. He described in great detail the structure of the brainstem, the cerebellum, the ventricles, and the cerebral hemispheres.

Modern Period

The role of electricity in nerves was first observed in dissected frogs by Luigi Galvani, Lucia Galeazzi Galvani and Giovanni Aldini in the second half of the 18th century. In 1811, César Julien Jean Legallois defined a specific function of a brain region for the first time: studying respiration through animal dissection and lesions, he located the center of respiration in the medulla oblongata. Between 1811 and 1824, Charles Bell and François Magendie discovered through dissection and vivisection that the ventral roots of the spine transmit motor impulses and the posterior roots receive sensory input (Bell-Magendie law). In the 1820s, Jean Pierre Flourens pioneered the experimental method of carrying out localized lesions of the brain in animals, describing their effects on motor function, sensation and behavior. He concluded that the ablation of the cerebellum resulted in movements that "were not regular and coordinated". At mid-century, Emil du Bois-Reymond, Johannes Peter Müller, and Hermann von Helmholtz showed that neurons were electrically excitable and that their activity predictably affected the electrical state of adjacent neurons.

In 1848, John Martyn Harlow described how Phineas Gage had his frontal lobe pierced by an iron tamping rod in a blasting accident. He became a case study in the connection between the prefrontal cortex and executive functions. In 1861, Paul Broca heard of a patient at the Bicêtre Hospital who had a 21-year progressive loss of speech and paralysis but neither a loss of comprehension nor mental function. Broca performed an autopsy and determined that the patient had a lesion in the frontal lobe in the left cerebral hemisphere. Broca published his findings from the autopsies of twelve patients in 1865. His work inspired others to perform careful autopsies with the aim of linking more brain regions to sensory and motor functions. Another French neurologist, Marc Dax, had made similar observations a generation earlier. Broca's hypothesis was supported by Gustav Fritsch and Eduard Hitzig, who discovered in 1870 that electrical stimulation of the motor cortex caused involuntary muscular contractions of specific parts of a dog's body, and by observations of epileptic patients conducted by John Hughlings Jackson, who correctly deduced in the 1870s the organization of the motor cortex by watching the progression of seizures through the body. Carl Wernicke further developed the theory of the specialization of specific brain structures in language comprehension and production. Richard Caton presented his findings in 1875 about electrical phenomena of the cerebral hemispheres of rabbits and monkeys. In 1878, Hermann Munk found in dogs and monkeys that vision was localized in the occipital cortical area, David Ferrier found in 1881 that audition was localized in the superior temporal gyrus, and Harvey Cushing found in 1909 that the sense of touch was localized in the postcentral gyrus. Modern research still uses Korbinian Brodmann's cytoarchitectonic (referring to the study of cell structure) anatomical definitions from this era and continues to show that distinct areas of the cortex are activated in the execution of specific tasks.

Studies of the brain became more sophisticated after the invention of the microscope and the development of a staining procedure by Camillo Golgi during the late 1890s that used a silver chromate salt to reveal the intricate structures of single neurons. His technique was used by Santiago Ramón y Cajal and led to the formation of the neuron doctrine, the hypothesis that the functional unit of the brain is the neuron. Golgi and Ramón y Cajal shared the Nobel Prize in Physiology or Medicine in 1906 for their extensive observations, descriptions and categorizations of neurons throughout the brain. The hypotheses of the neuron doctrine were supported by experiments following Galvani's pioneering work in the electrical excitability of muscles and neurons. In 1898, British scientist John Newport Langley first coined the term "autonomic" in classifying the connections of nerve fibers to peripheral nerve cells. Langley is known as one of the fathers of the chemical receptor theory, and as the origin of the concept of "receptive substance". Towards the end of the nineteenth century Francis Gotch conducted several experiments on nervous system function. In 1899 he described the "inexcitable" or "refractory phase" that takes place between nerve impulses. His primary focus was on how nerve interaction affected the muscles and eyes.

Heinrich Obersteiner in 1887 founded the Institute for Anatomy and Physiology of the CNS, later called the Neurological or Obersteiner Institute of the Vienna University School of Medicine. It was one of the first brain research institutions in the world. He studied the cerebellar cortex, described the Redlich–Obersteiner's zone and wrote one of the first books on neuroanatomy in 1888. Róbert Bárány, who worked on the physiology and pathology of the vestibular apparatus, attended this school, graduating in 1900. Obersteiner was later succeeded by Otto Marburg.

Twentieth Century

Neuroscience during the twentieth century began to be recognized as a distinct unified academic discipline, rather than studies of the nervous system being a factor of science belonging to a variety of disciplines.

Ivan Pavlov contributed to many areas of neurophysiology. Most of his work involved research in temperament, conditioning and involuntary reflex actions. In 1891, Pavlov was invited to the Institute of Experimental Medicine in St. Petersburg to organize and direct the Department of Physiology. He published The Work of the Digestive Glands in 1897, after 12 years of research. His experiments earned him the 1904 Nobel Prize in Physiology or Medicine. During the same period, Vladimir Bekhterev discovered 15 new reflexes and is known for his competition with Pavlov regarding the study of conditioned reflexes. He founded the Psychoneurological Institute at the St. Petersburg State Medical Academy in 1907, where he worked with Alexandre Dogiel and attempted to establish a multidisciplinary approach to brain exploration. The Institute of Higher Nervous Activity in Moscow, Russia, was established on July 14, 1950.

Charles Scott Sherrington's work focused strongly on reflexes and his experiments led up to the discovery of motor units. His concepts centered around unitary behaviour of cells activated or inhibited at what he called synapses. Sherrington received the Nobel prize for showing that reflexes require integrated activation and demonstrated reciprocal innervation of muscles (Sherrington's law). Sherrington also worked with Thomas Graham Brown who developed one of the first ideas about central pattern generators in 1911. Brown recognized that the basic pattern of stepping can be produced by the spinal cord without the need of descending commands from the cortex.

Acetylcholine was the first neurotransmitter to be identified. It was first identified in 1915 by Henry Hallett Dale for its actions on heart tissue, and was confirmed as a neurotransmitter in 1921 by Otto Loewi in Graz. Loewi first demonstrated the "humoral transmissibility of the cardiac nerve action" (humorale Übertragbarkeit der Herznervenwirkung) in amphibians. He initially gave the substance the name Vagusstoff because it was released from the vagus nerve, and in 1936 he wrote: "I no longer hesitate to identify the Sympathicusstoff with adrenaline."

A graph showing the threshold for nervous system response.

One major question for neuroscientists in the early twentieth century was the physiology of nerve impulses. In 1902 and again in 1912, Julius Bernstein advanced the hypothesis that the action potential resulted from a change in the permeability of the axonal membrane to ions. Bernstein was also the first to introduce the Nernst equation for the resting potential across the membrane. In 1907, Louis Lapicque suggested that the action potential was generated as a threshold was crossed, which would later be shown to be a product of the dynamical systems of ionic conductances. A great deal of study on sensory organs and the function of nerve cells was conducted by British physiologist Keith Lucas and his protégé Edgar Adrian. Keith Lucas' experiments in the first decade of the twentieth century proved that muscles contract entirely or not at all; this was referred to as the all-or-none principle. Edgar Adrian observed nerve fibers in action during his experiments on frogs, proving that scientists could study nervous system function directly, not just indirectly. This led to a rapid increase in the variety of experiments conducted in the field of neurophysiology and innovation in the technology necessary for these experiments. Much of Adrian's early research was inspired by studying the way vacuum tubes intercepted and enhanced coded messages. Concurrently, Joseph Erlanger and Herbert Gasser were able to modify an oscilloscope to run at low voltages and observed that action potentials occurred in two phases: a spike followed by an after-spike. They discovered that nerves were found in many forms, each with their own potential for excitability. With this research, the pair discovered that the velocity of action potentials was directly proportional to the diameter of the nerve fiber and received a Nobel Prize for their work.
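
For reference, the Nernst equation that Bernstein applied gives the equilibrium potential a single permeant ion species would impose across the membrane; in standard modern notation (not Bernstein's original formulation):

E_{\text{ion}} = \frac{RT}{zF} \ln \frac{[\text{ion}]_{\text{outside}}}{[\text{ion}]_{\text{inside}}}

where R is the gas constant, T the absolute temperature, z the charge of the ion, and F the Faraday constant.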

Kenneth Cole joined Columbia University in 1937 and remained there until 1946, during which time he made pioneering advances in modelling the electrical properties of nervous tissue. Bernstein's hypothesis about the action potential was confirmed by Cole and Howard Curtis, who showed that membrane conductance increases during an action potential. David E. Goldman worked with Cole and derived the Goldman equation in 1943 at Columbia University. Alan Lloyd Hodgkin spent a year (1937–38) at the Rockefeller Institute, during which he joined Cole to measure the D.C. resistance of the membrane of the squid giant axon in the resting state. In 1939 they began using internal electrodes inside the giant nerve fibre of the squid, and Cole developed the voltage clamp technique in 1947. Hodgkin and Andrew Huxley later presented a mathematical model for the transmission of electrical signals in the giant axon of a squid, describing how action potentials are initiated and propagated; it is known as the Hodgkin–Huxley model. In 1961–1962, Richard FitzHugh and J. Nagumo simplified the Hodgkin–Huxley model into what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the spaces between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981, Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation.
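
To give a flavour of how drastically the FitzHugh–Nagumo model simplifies Hodgkin–Huxley, the short Python sketch below integrates its two coupled equations with a forward Euler step; the parameter values are common illustrative choices rather than values taken from the original papers.

# Forward-Euler integration of the FitzHugh-Nagumo equations:
#   dv/dt = v - v^3/3 - w + I     (fast voltage-like variable)
#   dw/dt = (v + a - b*w) / tau   (slow recovery variable)
def simulate_fitzhugh_nagumo(current=0.5, a=0.7, b=0.8, tau=12.5,
                             dt=0.01, steps=20000):
    v, w = -1.0, 1.0
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + current
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = simulate_fitzhugh_nagumo()
# With a sufficiently large injected current the model settles into a limit
# cycle, i.e. repetitive "spiking" of the voltage-like variable v.
print(f"v range after 200 time units: {min(trace):.2f} to {max(trace):.2f}")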

Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field. Rioch originated the integration of basic anatomical and physiological research with clinical psychiatry at the Walter Reed Army Institute of Research, starting in the 1950s. During the same period, Schmitt established a neuroscience research program within the Biology Department at the Massachusetts Institute of Technology, bringing together biology, chemistry, physics, and mathematics. The first freestanding neuroscience department (then called Psychobiology) was founded in 1964 at the University of California, Irvine, by James L. McGaugh. Stephen Kuffler started the Department of Neurobiology at Harvard Medical School in 1966. The first official use of the word "Neuroscience" may have been in 1962, with Francis O. Schmitt's "Neuroscience Research Program", which was hosted by the Massachusetts Institute of Technology.

Over time, brain research has gone through philosophical, experimental, and theoretical phases, with work on brain simulation predicted to be important in the future.

Neuroscience Institutes and Organizations

As a result of increasing interest in the nervous system, several prominent neuroscience institutes and organizations have been formed to provide a forum for neuroscientists. The largest professional neuroscience organization is the Society for Neuroscience (SFN), which is based in the United States but includes many members from other countries.

List of the major institutes and organizations
Foundation | Institute or Organization
1887 | Obersteiner Institute of the Vienna University School of Medicine
1903 | The brain commission of the International Association of Academies
1907 | Psychoneurological Institute at the St. Petersburg State Medical Academy
1909 | Netherlands Central Institute for Brain Research in Amsterdam, now Netherlands Institute for Neuroscience
1947 | National Institute of Mental Health and Neurosciences
1950 | Institute of Higher Nervous Activity
1960 | International Brain Research Organization
1963 | International Society for Neurochemistry
1968 | European Brain and Behaviour Society
1968 | British Neuroscience Association
1969 | Society for Neuroscience
1997 | National Brain Research Centre

In 2013, the BRAIN Initiative was announced in the US. An International Brain Initiative was created in 2017; it currently comprises more than seven national-level brain research initiatives (US, Europe, Allen Institute, Japan, China, Australia, Canada, Korea, Israel) spanning four continents.

Nuclear power in space

From Wikipedia, the free encyclopedia
 
The KIWI A prime nuclear thermal rocket engine
 
Mars Curiosity rover powered by a RTG on Mars. White RTG with fins is visible at far side of rover.

Nuclear power in space is the use of nuclear power in outer space, typically either small fission systems or radioactive decay for electricity or heat. Another use is for scientific observation, as in a Mössbauer spectrometer. The most common type is a radioisotope thermoelectric generator, which has been used on many space probes and on crewed lunar missions. Small fission reactors for Earth observation satellites, such as the TOPAZ nuclear reactor, have also been flown. A radioisotope heater unit is powered by radioactive decay and can keep components from becoming too cold to function, potentially over a span of decades.

The United States tested the SNAP-10A nuclear reactor in space for 43 days in 1965, with the next test of a nuclear reactor power system intended for space use occurring on 13 September 2012 with the Demonstration Using Flattop Fission (DUFF) test of the Kilopower reactor.

After a ground-based test of the experimental 1965 Romashka reactor, which used uranium and direct thermoelectric conversion to electricity, the USSR sent about 40 nuclear-electric satellites into space, mostly powered by the BES-5 reactor. The more powerful TOPAZ-II reactor produced 10 kilowatts of electricity.

Examples of concepts that use nuclear power for space propulsion systems include the nuclear electric rocket (nuclear powered ion thruster(s)), the radioisotope rocket, and radioisotope electric propulsion (REP). One of the more explored concepts is the nuclear thermal rocket, which was ground tested in the NERVA program. Nuclear pulse propulsion was the subject of Project Orion.

Regulation and hazard prevention

Following the Outer Space Treaty's 1967 ban on nuclear weapons in space, nuclear power has been discussed by states as a sensitive issue since at least 1972. In particular, its potential hazards to Earth's environment, and thus to humans, prompted states to adopt in the U.N. General Assembly the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (1992), which introduce safety principles for launches and for managing their traffic.

Benefits

Both the Viking 1 and Viking 2 landers used RTGs for power on the surface of Mars. (Viking launch vehicle pictured)

While solar power is much more commonly used, nuclear power can offer advantages in some areas. Solar cells, although efficient, can only supply energy to spacecraft in orbits where the solar flux is sufficiently high, such as low Earth orbit and interplanetary destinations close enough to the Sun. Unlike solar cells, nuclear power systems function independently of sunlight, which is necessary for deep space exploration. Nuclear-based systems can have less mass than solar cells of equivalent power, allowing more compact spacecraft that are easier to orient and direct in space. In the case of crewed spaceflight, nuclear power concepts that can power both life support and propulsion systems may reduce both cost and flight time.
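
The limitation on solar power follows directly from the inverse-square law; the small Python sketch below uses the approximate solar constant at 1 AU (about 1361 W/m^2) to show how quickly the available flux falls off with distance from the Sun.

# Inverse-square fall-off of solar flux with heliocentric distance.
SOLAR_CONSTANT_1AU = 1361.0  # W/m^2, approximate value at Earth's distance

def solar_flux(distance_au):
    return SOLAR_CONSTANT_1AU / distance_au ** 2

for body, au in [("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.2), ("Pluto", 39.5)]:
    print(f"{body}: ~{solar_flux(au):.1f} W/m^2")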

Selected applications and/or technologies for space include:

Types

Name and model | Used on (# of RTGs per user) | Max electrical output (W) | Max heat output (W) | Radioisotope | Max fuel used (kg) | Mass (kg) | Power/mass (electrical W/kg)
MMRTG | MSL/Curiosity rover and Perseverance/Mars 2020 rover | c. 110 | c. 2000 | 238Pu | c. 4 | <45 | 2.4
GPHS-RTG | Cassini (3), New Horizons (1), Galileo (2), Ulysses (1) | 300 | 4400 | 238Pu | 7.8 | 55.9–57.8 | 5.2–5.4
MHW-RTG | LES-8/9, Voyager 1 (3), Voyager 2 (3) | 160 | 2400 | 238Pu | c. 4.5 | 37.7 | 4.2
SNAP-3B | Transit-4A (1) | 2.7 | 52.5 | 238Pu | ? | 2.1 | 1.3
SNAP-9A | Transit 5BN1/2 (1) | 25 | 525 | 238Pu | c. 1 | 12.3 | 2.0
SNAP-19 | Nimbus-3 (2), Pioneer 10 (4), Pioneer 11 (4) | 40.3 | 525 | 238Pu | c. 1 | 13.6 | 2.9
modified SNAP-19 | Viking 1 (2), Viking 2 (2) | 42.7 | 525 | 238Pu | c. 1 | 15.2 | 2.8
SNAP-27 | Apollo 12–17 ALSEP (1) | 73 | 1,480 | 238Pu | 3.8 | 20 | 3.65
Buk (BES-5)** (fission reactor) | US-As (1) | 3000 | 100,000 | highly enriched 235U | 30 | 1000 | 3.0
SNAP-10A*** (fission reactor) | SNAP-10A (1) | 600 | 30,000 | highly enriched 235U | | 431 | 1.4
ASRG**** | prototype design (not launched), Discovery Program | c. 140 (2x70) | c. 500 | 238Pu | 1 | 34 | 4.1
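
The last column of the table is simply electrical output divided by generator mass. As a quick sanity check, the short Python sketch below recomputes it for a few rows using the table's own numbers (for the GPHS-RTG the lower end of the quoted mass range is used).

# Recompute specific power (electrical watts per kilogram) from the table above.
rtgs = {
    "GPHS-RTG": (300.0, 55.9),  # electrical W, mass kg (lower end of 55.9-57.8)
    "MHW-RTG": (160.0, 37.7),
    "SNAP-19": (40.3, 13.6),
    "SNAP-27": (73.0, 20.0),
}
for name, (watts, kilograms) in rtgs.items():
    print(f"{name}: {watts / kilograms:.2f} W/kg")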

Radioisotope systems

SNAP-27 on the Moon

For more than fifty years, radioisotope thermoelectric generators (RTGs) have been the United States' main nuclear power source in space. RTGs offer many benefits; they are relatively safe and maintenance-free, are resilient under harsh conditions, and can operate for decades. RTGs are particularly desirable for use in parts of space where solar power is not a viable power source. Dozens of RTGs have been deployed to power 25 different US spacecraft, some of which have been operating for more than 20 years. Over 40 radioisotope thermoelectric generators have been used globally (principally by the US and USSR) on space missions.

The advanced Stirling radioisotope generator (ASRG, a model of Stirling radioisotope generator (SRG)) produces roughly four times the electric power of an RTG per unit of nuclear fuel, but flight-ready units based on Stirling technology are not expected until 2028. NASA plans to utilize two ASRGs to explore Titan in the distant future.

Cutaway diagram of the advanced Stirling radioisotope generator.

Radioisotope power generators include:

Radioisotope heater units (RHUs) are also used on spacecraft to warm scientific instruments to the proper temperature so they operate efficiently. A larger model of RHU called the General Purpose Heat Source (GPHS) is used to power RTGs and the ASRG.

Extremely slow-decaying radioisotopes have been proposed for use on interstellar probes with multi-decade lifetimes.

As of 2011, another direction for development was an RTG assisted by subcritical nuclear reactions.

Fission systems

Fission power systems may be utilized to power a spacecraft's heating or propulsion systems. When spacecraft require more than about 100 kW of power, fission systems are much more cost-effective than RTGs.

In 1965, the US launched a space reactor, the SNAP-10A, which had been developed by Atomics International, then a division of North American Aviation.

Over the past few decades, several fission reactors have been proposed, and the Soviet Union launched 31 BES-5 low power fission reactors in their RORSAT satellites utilizing thermoelectric converters between 1967 and 1988.

In the 1960s and 1970s, the Soviet Union developed TOPAZ reactors, which utilize thermionic converters instead, although the first test flight was not until 1987.

In 1983, NASA and other US government agencies began development of a next-generation space reactor, the SP-100, contracting with General Electric and others. In 1994, the SP-100 program was cancelled, largely for political reasons, with the idea of transitioning to the Russian TOPAZ-II reactor system. Although some TOPAZ-II prototypes were ground-tested, the system was never deployed for US space missions.

In 2008, NASA announced plans to utilize a small fission power system on the surface of the Moon and Mars, and began testing "key" technologies for it to come to fruition.

Proposed fission power system spacecraft and exploration systems have included SP-100, JIMO nuclear electric propulsion, and Fission Surface Power.

SAFE-30 small experimental reactor

A number of micro nuclear reactor types have been developed or are in development for space applications:

Nuclear thermal propulsion systems (NTR) are based on the heating power of a fission reactor, offering a more efficient propulsion system than one powered by chemical reactions. Current research focuses more on nuclear electric systems as the power source for providing thrust to propel spacecraft that are already in space.

Other space fission reactors for powering space vehicles include the SAFE-400 reactor and the HOMER-15. In 2020, Roscosmos (the Russian Federal Space Agency) plans to launch a spacecraft utilizing nuclear-powered propulsion systems (developed at the Keldysh Research Center), which includes a small gas-cooled fission reactor with 1 MWe.

In September 2020, NASA and the Department of Energy (DOE) issued a formal request for proposals for a lunar nuclear power system, in which several awards would be granted to preliminary designs completed by the end of 2021, while in a second phase, by early 2022, they would select one company to develop a 10-kilowatt fission power system to be placed on the moon in 2027.

Artist's conception of the Jupiter Icy Moons Orbiter mission for Prometheus, with the reactor on the right, providing power to ion engines and electronics.

Project Prometheus

In 2002, NASA announced an initiative towards developing nuclear systems, which later came to be known as Project Prometheus. A major part of the Prometheus Project was to develop the Stirling Radioisotope Generator and the Multi-Mission Thermoelectric Generator, both types of RTGs. The project also aimed to produce a safe and long-lasting space fission reactor system for a spacecraft's power and propulsion, replacing the long-used RTGs. Budget constraints resulted in the effective halting of the project, but Project Prometheus has had success in testing new systems. After its creation, scientists successfully tested a High Power Electric Propulsion (HiPEP) ion engine, which offered substantial advantages in fuel efficiency, thruster lifetime, and thruster efficiency over other power sources.

Visuals

A gallery of images of space nuclear power systems.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...