
Tuesday, January 30, 2024

Antiproton

From Wikipedia, the free encyclopedia
 
Antiproton
The quark content of the antiproton.

Classification: Antibaryon
Composition: 2 up antiquarks, 1 down antiquark
Statistics: Fermionic
Family: Hadron
Interactions: Strong, weak, electromagnetic, gravity
Symbol: p̄
Antiparticle: Proton
Theorised: Paul Dirac (1933)
Discovered: Emilio Segrè & Owen Chamberlain (1955)
Mass: 1.67262192369(51)×10⁻²⁷ kg = 938.27208816(29) MeV/c²
Electric charge: −1 e
Magnetic moment: −2.7928473441(42) μN
Spin: 1/2
Isospin: 1/2

The antiproton, p̄ (pronounced "p-bar"), is the antiparticle of the proton. Antiprotons are stable, but they are typically short-lived, since any collision with a proton will cause both particles to be annihilated in a burst of energy.

The existence of the antiproton with electric charge of −1 e, opposite to the electric charge of +1 e of the proton, was predicted by Paul Dirac in his 1933 Nobel Prize lecture. Dirac received the Nobel Prize for his 1928 publication of the Dirac equation, which predicted the existence of positive and negative solutions to Einstein's energy equation (E = ±√((pc)² + (mc²)²)) and the existence of the positron, the antimatter analog of the electron, with opposite charge.

The antiproton was first experimentally confirmed in 1955 at the Bevatron particle accelerator by University of California, Berkeley, physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics.

In terms of valence quarks, an antiproton consists of two up antiquarks and one down antiquark (u̅u̅d̅). The properties of the antiproton that have been measured all match the corresponding properties of the proton, except that the antiproton's electric charge and magnetic moment are the opposites of the proton's, as expected for the antimatter equivalent of a proton. The questions of how matter is different from antimatter, and the relevance of antimatter in explaining how our universe survived the Big Bang, remain open problems, due in part to the relative scarcity of antimatter in today's universe.

Occurrence in nature

Antiprotons have been detected in cosmic rays beginning in 1979, first by balloon-borne experiments and more recently by satellite-based detectors. The standard picture for their presence in cosmic rays is that they are produced in collisions of cosmic ray protons with atomic nuclei in the interstellar medium, via the reaction, where A represents a nucleus:


p + A → p + p + p̄ + A

The secondary antiprotons (p̄) then propagate through the galaxy, confined by the galactic magnetic fields. Their energy spectrum is modified by collisions with other atoms in the interstellar medium, and antiprotons can also be lost by "leaking out" of the galaxy.

The antiproton cosmic ray energy spectrum is now measured reliably and is consistent with this standard picture of antiproton production by cosmic ray collisions. These experimental measurements set upper limits on the number of antiprotons that could be produced in exotic ways, such as from annihilation of supersymmetric dark matter particles in the galaxy or from the Hawking radiation caused by the evaporation of primordial black holes. This also provides a lower limit on the antiproton lifetime of about 1–10 million years: since the galactic storage time of antiprotons is about 10 million years, an intrinsic decay lifetime would modify the galactic residence time and distort the spectrum of cosmic ray antiprotons. This is significantly more stringent than the best laboratory measurements of the antiproton lifetime.

The magnitudes of the properties of the antiproton are predicted by CPT symmetry to be exactly related to those of the proton. In particular, CPT symmetry predicts the mass and lifetime of the antiproton to be the same as those of the proton, and the electric charge and magnetic moment of the antiproton to be opposite in sign and equal in magnitude to those of the proton. CPT symmetry is a basic consequence of quantum field theory, and no violations of it have ever been detected.

List of recent cosmic ray detection experiments

  • BESS: balloon-borne experiment, flown in 1993, 1995, 1997, 2000, 2002, 2004 (Polar-I) and 2007 (Polar-II).
  • CAPRICE: balloon-borne experiment, flown in 1994 and 1998.
  • HEAT: balloon-borne experiment, flown in 2000.
  • AMS: space-based experiment, prototype flown on the Space Shuttle in 1998, intended for the International Space Station, launched May 2011.
  • PAMELA: satellite experiment to detect cosmic rays and antimatter from space, launched June 2006. A recent report described the detection of 28 antiprotons in the South Atlantic Anomaly.

Modern experiments and applications

BEV-938. Antiproton set-up with work group: Emilio Segrè, Clyde Wiegand, Edward J. Lofgren, Owen Chamberlain, Thomas Ypsilantis, 1955.

Production

Antiprotons were routinely produced at Fermilab for collider physics operations in the Tevatron, where they were collided with protons. The use of antiprotons allows for a higher average energy of collisions between quarks and antiquarks than would be possible in proton–proton collisions. This is because the valence quarks in the proton, and the valence antiquarks in the antiproton, tend to carry the largest fraction of the proton or antiproton's momentum.

Formation of antiprotons requires energy equivalent to a temperature of 10 trillion K (10¹³ K), and this does not tend to happen naturally. However, at CERN, protons are accelerated in the Proton Synchrotron to an energy of 26 GeV and then smashed into an iridium rod. The protons bounce off the iridium nuclei with enough energy for matter to be created. A range of particles and antiparticles are formed, and the antiprotons are separated off using magnets in vacuum.
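
A quick back-of-the-envelope check (a sketch, not from the article; it assumes the characteristic thermal energy is simply k_B·T) shows why this temperature is the natural scale for antiproton production: at 10¹³ K the typical particle energy is comparable to the proton rest energy.

```python
# Rough check (assumption: thermal energy E ~ k_B * T sets the scale for
# proton-antiproton pair production).
K_B_EV_PER_K = 8.617333262e-5    # Boltzmann constant, eV/K
PROTON_REST_MEV = 938.272        # proton rest energy, MeV

T = 1e13                         # temperature, K
thermal_energy_mev = K_B_EV_PER_K * T / 1e6
print(f"k_B * T ~= {thermal_energy_mev:.0f} MeV")         # ~862 MeV
print(f"proton rest energy = {PROTON_REST_MEV:.0f} MeV")  # comparable scale
```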

Measurements

In July 2011, the ASACUSA experiment at CERN determined the mass of the antiproton to be 1836.1526736(23) times that of the electron. This is the same as the mass of a proton, within the level of certainty of the experiment.

In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter.

In January 2022, by comparing the charge-to-mass ratio of the antiproton with that of the negatively charged hydrogen ion, the BASE experiment determined the antiproton's charge-to-mass ratio to be identical to the proton's, down to 16 parts per trillion.

Possible applications

Antiprotons have been shown in laboratory experiments to have the potential to treat certain cancers, using a method similar to that currently employed in ion (proton) therapy. The primary difference between antiproton therapy and proton therapy is that, following ion energy deposition, the antiproton annihilates, depositing additional energy in the cancerous region.

Isotopic labeling

From Wikipedia, the free encyclopedia

Isotopic labeling (or isotopic labelling) is a technique used to track the passage of an isotope (an atom with a detectable variation in neutron count) through a chemical reaction, a metabolic pathway, or a biological cell. The reactant is 'labeled' by replacing one or more specific atoms with their isotopes. The reactant is then allowed to undergo the reaction. The position of the isotopes in the products is measured to determine the sequence the isotopic atom followed in the reaction or the cell's metabolic pathway. The nuclides used in isotopic labeling may be stable nuclides or radionuclides. In the latter case, the labeling is called radiolabeling.

In isotopic labeling, there are multiple ways to detect the presence of labeling isotopes: through their mass, vibrational mode, or radioactive decay. Mass spectrometry detects the difference in an isotope's mass, while infrared spectroscopy detects the difference in the isotope's vibrational modes. Nuclear magnetic resonance detects atoms with different gyromagnetic ratios. Radioactive decay can be detected through an ionization chamber or autoradiographs of gels.

An example of the use of isotopic labeling is the study of phenol (C6H5OH) in water by replacing common hydrogen (protium) with deuterium (deuterium labeling). Upon adding phenol to deuterated water (water containing D2O in addition to the usual H2O), the substitution of deuterium for the hydrogen is observed in phenol's hydroxyl group (resulting in C6H5OD), indicating that phenol readily undergoes hydrogen-exchange reactions with water. Only the hydroxyl group is affected, indicating that the other 5 hydrogen atoms do not participate in the exchange reactions.

Isotopic tracer

A carbon-13 label was used to determine the mechanism in the 1,2- to 1,3-didehydrobenzene conversion of the phenyl substituted aryne precursor 1 to acenaphthylene.

An isotopic tracer (also "isotopic marker" or "isotopic label") is used in chemistry and biochemistry to help understand chemical reactions and interactions. In this technique, one or more of the atoms of the molecule of interest is replaced by an atom of the same chemical element but of a different isotope (like a radioactive isotope used in radioactive tracing). Because the labeled atom has the same number of protons, it will behave in almost exactly the same way as its unlabeled counterpart and, with few exceptions, will not interfere with the reaction under investigation. The difference in the number of neutrons, however, means that it can be detected separately from the other atoms of the same element.

Nuclear magnetic resonance (NMR) and mass spectrometry (MS) are used to investigate the mechanisms of chemical reactions. NMR and MS detects isotopic differences, which allows information about the position of the labeled atoms in the products' structure to be determined. With information on the positioning of the isotopic atoms in the products, the reaction pathway the initial metabolites utilize to convert into the products can be determined. Radioactive isotopes can be tested using the autoradiographs of gels in gel electrophoresis. The radiation emitted by compounds containing the radioactive isotopes darkens a piece of photographic film, recording the position of the labeled compounds relative to one another in the gel.

Isotope tracers are commonly used in the form of isotope ratios. By studying the ratio between two isotopes of the same element, we avoid effects involving the overall abundance of the element, which usually swamp the much smaller variations in isotopic abundances. Isotopic tracers are some of the most important tools in geology because they can be used to understand complex mixing processes in earth systems. Further discussion of the application of isotopic tracers in geology is covered under the heading of isotope geochemistry.

Isotopic tracers are usually subdivided into two categories: stable isotope tracers and radiogenic isotope tracers. Stable isotope tracers involve only non-radiogenic isotopes and usually are mass-dependent. In theory, any element with two stable isotopes can be used as an isotopic tracer. However, the most commonly used stable isotope tracers involve relatively light isotopes, which readily undergo fractionation in natural systems. See also isotopic signature. A radiogenic isotope tracer involves an isotope produced by radioactive decay, which is usually in a ratio with a non-radiogenic isotope (whose abundance in the earth does not vary due to radioactive decay).

Stable isotope labeling

Isotopic tracing through reactions in the pentose phosphate pathway. Blue circles indicate labeled carbon atoms, while white circles indicate unlabeled carbon atoms.

Stable isotope labeling involves the use of non-radioactive isotopes that act as tracers to model several chemical and biochemical systems. The chosen isotope can act as a label on a compound that can be identified through nuclear magnetic resonance (NMR) and mass spectrometry (MS). Some of the most common stable isotopes are 2H, 13C, and 15N, which can further be produced into NMR solvents, amino acids, nucleic acids, lipids, common metabolites and cell growth media. The compounds produced using stable isotopes are specified either by the percentage of labeled isotopes (e.g. 30% uniformly labeled 13C glucose is a mixture in which 30% of the glucose is labeled with 13C and 70% carries carbon at natural abundance) or by the specifically labeled carbon positions on the compound (e.g. 1-13C glucose, which is labeled at the first carbon position).

A network of reactions adopted from the glycolysis pathway and the pentose phosphate pathway is shown in which the labeled carbon isotope rearranges to different carbon positions throughout the network of reactions. The network starts with fructose 6-phosphate (F6P), which has 6 carbon atoms with a 13C label at carbon positions 1 and 2. 1,2-13C F6P is cleaved into two molecules of glyceraldehyde 3-phosphate (G3P): one 2,3-13C G3P and one unlabeled G3P. The 2,3-13C G3P can then react with sedoheptulose 7-phosphate (S7P) to form an unlabeled erythrose 4-phosphate (E4P) and a 5,6-13C F6P, while the unlabeled G3P reacts with S7P to give unlabeled products. The figure demonstrates the use of stable isotope labeling to discover the rearrangement of carbon atoms through reactions using position-specific labeled compounds.
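
A minimal sketch of this position bookkeeping, assuming an aldolase-style cleavage in which F6P carbons 1–3 become one triose (with the carbon order inverted on conversion to G3P) and carbons 4–6 the other; the function and representation are illustrative, not from the article:

```python
# Minimal sketch (assumption: F6P carbons 1-3 form one triose whose carbon
# order is inverted on conversion to G3P; carbons 4-6 form the other).
# True/False marks a 13C label at that carbon position.

def split_f6p(f6p):
    """Cleave a 6-carbon F6P label pattern into two 3-carbon G3P patterns."""
    dhap_half = f6p[:3][::-1]   # C1-C3, order inverted: labels at 1,2 -> 2,3
    g3p_half = f6p[3:]          # C4-C6 kept in order
    return dhap_half, g3p_half

f6p_12 = [True, True, False, False, False, False]   # 1,2-13C F6P
print(split_f6p(f6p_12))
# ([False, True, True], [False, False, False]) -> one 2,3-13C triose, one unlabeled
```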

Metabolic flux analysis using stable isotope labeling

Determining the percent of isotope labeling throughout a reaction. If a 50% labeled and 50% unlabeled metabolite is split in the manner shown, the expected percent of each outcome can be found. The blue circles indicate a labeled atom, while a white circle indicates an unlabeled atom.

Metabolic flux analysis (MFA) using stable isotope labeling is an important tool for explaining the flux of certain elements through the metabolic pathways and reactions within a cell. An isotopic label is fed to the cell, and the cell is then allowed to grow utilizing the labeled feed. For stationary metabolic flux analysis, the cell must reach a steady state (the isotopes entering and leaving the cell remain constant with time) or a quasi-steady state (steady state is reached for a given period of time). The isotope pattern of the output metabolite is then determined. The output isotope pattern provides valuable information, which can be used to find the magnitude of flux (the rate of conversion from reactants to products) through each reaction.

The figure demonstrates the ability to use different labels to determine the flux through a certain reaction. Assume the original metabolite, a three-carbon compound, can either split into a two-carbon metabolite and a one-carbon metabolite and then recombine, or remain a three-carbon metabolite. Suppose the reaction is supplied with two forms of the metabolite in equal proportion: one completely labeled (blue circles), commonly known as uniformly labeled, and one completely unlabeled (white circles). The pathway down the left side of the diagram does not display any change in the metabolites, while the right side shows the split and recombination. If the metabolite only takes the left side, it remains in a 50–50 ratio of uniformly labeled to unlabeled metabolite. If the metabolite only takes the right side, new labeling patterns can occur, all in equal proportion, as the toy enumeration below illustrates. Intermediate proportions arise depending on how much of the original metabolite follows the left versus the right side of the pathway; here the proportions are shown for a situation in which half of the metabolites take each side. These patterns of labeled and unlabeled atoms in one compound represent isotopomers. By measuring the isotopomer distribution of the differently labeled metabolites, the flux through each reaction can be determined.
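
The arithmetic of the right-hand pathway can be checked with a toy enumeration (a sketch under stated assumptions: the 3-carbon metabolite splits into a 2-carbon and a 1-carbon fragment, and fragments recombine at random):

```python
# Toy sketch of the split-and-recombine pathway in the figure. Starting
# pool: 50% uniformly labeled (1,1,1) and 50% unlabeled (0,0,0).
from collections import Counter
from itertools import product

pool = [(1, 1, 1), (0, 0, 0)]             # equal proportions
fragments2 = [m[:2] for m in pool]        # 2-carbon pieces
fragments1 = [m[2:] for m in pool]        # 1-carbon pieces

# Random recombination: every 2-carbon piece can pair with every 1-carbon piece.
recombined = Counter(a + b for a, b in product(fragments2, fragments1))
total = sum(recombined.values())
for isotopomer, count in sorted(recombined.items(), reverse=True):
    print(isotopomer, count / total)
# (1,1,1), (1,1,0), (0,0,1), (0,0,0) each appear in equal proportion (25%)
```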

MFA combines the data harvested from isotope labeling with the stoichiometry of each reaction, constraints, and an optimization procedure to resolve a flux map. The irreversible reactions provide the thermodynamic constraints needed to find the fluxes. A matrix is constructed that contains the stoichiometry of the reactions. The intracellular fluxes are estimated by an iterative method in which simulated fluxes are plugged into the stoichiometric model. The simulated fluxes are displayed in a flux map, which shows the rate at which reactants are converted to products for each reaction. In most flux maps, the thicker the arrow, the larger the flux value of the reaction.
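
As a rough illustration of the linear-algebra step (a toy network with made-up constraints, not a real MFA implementation), the steady-state condition S·v = 0 plus a few measured fluxes can be solved by least squares:

```python
# Minimal flux-balance sketch. At metabolic steady state, the stoichiometric
# matrix S and flux vector v satisfy S @ v = 0 for internal metabolites;
# fixing measured fluxes makes the system solvable by least squares.
import numpy as np

# Toy network: v0: -> A, v1: A -> B, v2: A -> C, v3: B -> C, v4: C ->
# Rows = internal metabolites (A, B, C); columns = fluxes v0..v4.
S = np.array([
    [ 1, -1, -1,  0,  0],   # A
    [ 0,  1,  0, -1,  0],   # B
    [ 0,  0,  1,  1, -1],   # C
], dtype=float)

# Constraints: steady state (S v = 0), measured uptake v0 = 10,
# and one labeling-derived estimate, say v1 = 6 (hypothetical numbers).
A_mat = np.vstack([S,
                   [1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0]])
b = np.array([0, 0, 0, 10, 6], dtype=float)

v, *_ = np.linalg.lstsq(A_mat, b, rcond=None)
print(np.round(v, 3))   # flux map: [10, 6, 4, 6, 10]
```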

Isotope labeling measuring techniques

Any technique that measures differences between isotopomers can be used. The two primary methods, nuclear magnetic resonance (NMR) and mass spectrometry (MS), have been developed for measuring mass isotopomers in stable isotope labeling.

Proton NMR was the first technique used for 13C-labeling experiments. Using this method, each singly protonated carbon position inside a particular metabolite pool can be observed separately from the other positions. This allows the percentage of isotopomers labeled at that specific position to be known. The limitation of proton NMR is that if there are n carbon atoms in a metabolite, there can be at most only n different positional enrichment values, which is only a small fraction of the total isotopomer information. Although proton NMR labeling yields limited information, pure proton NMR experiments are much easier to evaluate than experiments carrying more isotopomer information.

In addition to proton NMR, using 13C NMR techniques allows a more detailed view of the distribution of the isotopomers. A labeled carbon atom produces different hyperfine splitting signals depending on the labeling state of its direct neighbors in the molecule. A singlet peak emerges if the neighboring carbon atoms are not labeled. A doublet peak emerges if only one neighboring carbon atom is labeled; the size of the doublet splitting depends on the functional group of the neighboring carbon atom. If two neighboring carbon atoms are labeled, a doublet of doublets results, which may degenerate into a triplet if the two doublet splittings are equal.
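
These splitting rules can be summarized in a small helper (an illustrative sketch; the coupling values and the equal-J merging rule are simplified assumptions, not measured data):

```python
# Sketch: expected 13C multiplet for a labeled carbon, given which direct
# neighbors are labeled. One scalar coupling per labeled neighbor; equal
# couplings merge a doublet-of-doublets into a triplet.

def multiplet(neighbor_labels, couplings):
    """neighbor_labels: list of bools; couplings: J values (Hz), same order."""
    active = [j for lab, j in zip(neighbor_labels, couplings) if lab]
    if not active:
        return "singlet"
    if len(active) == 1:
        return f"doublet (J = {active[0]} Hz)"
    if len(active) == 2:
        return "triplet" if active[0] == active[1] else "doublet of doublets"
    return f"{len(active)} couplings: higher-order multiplet"

print(multiplet([False, False], [35, 55]))  # singlet
print(multiplet([True, False], [35, 55]))   # doublet (J = 35 Hz)
print(multiplet([True, True], [40, 40]))    # triplet (equal J)
```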

The drawback to using NMR techniques for metabolic flux analysis is that, unlike other NMR applications, it is a rather specialized discipline. An NMR spectrometer may not be directly available to all research teams. The optimization of NMR measurement parameters and proper analysis of peak structures require a skilled NMR specialist. Certain metabolites may also require specialized measurement procedures to obtain additional isotopomer data. In addition, specially adapted software tools are needed to determine the precise quantity of peak areas and to identify the decomposition of entangled singlet, doublet, and triplet peaks.

Compared with nuclear magnetic resonance, mass spectrometry (MS) is more broadly applicable and more sensitive for metabolic flux analysis experiments. MS instruments are available in different variants. Unlike two-dimensional nuclear magnetic resonance (2D-NMR), MS instruments work directly with the hydrolysate.

In gas chromatography-mass spectrometry (GC-MS), the MS is coupled to a gas chromatograph to separate the compounds of the hydrolysate. The compounds eluting from the GC column are then ionized and simultaneously fragmented. The benefit in using GC-MS is that not only are the mass isotopomers of the molecular ion measured but also the mass isotopomer spectrum of several fragments, which significantly increases the measured information.

In liquid chromatography-mass spectrometry (LC-MS), the GC is replaced with a liquid chromatograph. The main difference is that chemical derivatization is not necessary. Applications of LC-MS to MFA, however, are rare.

In each case, MS instruments separate the isotopomers of a particular metabolite by molecular weight. All isotopomers of a particular metabolite that contain the same number of labeled carbon atoms are collected in one peak signal. Because every isotopomer contributes to exactly one peak in the MS spectrum, a percentage value can be calculated for each peak, yielding the mass isotopomer fraction. For a metabolite with n carbon atoms, n + 1 measurements are produced. After normalization, exactly n informative mass isotopomer quantities remain.
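
The normalization step itself is simple; a sketch with hypothetical peak intensities:

```python
# Sketch: turning raw MS peak intensities into mass isotopomer fractions.
# A metabolite with n carbons gives n+1 peaks (M+0 ... M+n, i.e. 0 ... n
# labeled carbons); normalizing yields the fractions, of which only n are
# independent after normalization.

def mass_isotopomer_fractions(intensities):
    total = sum(intensities)
    return [i / total for i in intensities]

# Hypothetical 3-carbon metabolite -> 4 peaks (M+0, M+1, M+2, M+3):
peaks = [600.0, 150.0, 200.0, 50.0]
print(mass_isotopomer_fractions(peaks))   # [0.6, 0.15, 0.2, 0.05]
```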

The drawback to using MS techniques is that for gas chromatography, the sample must be prepared by chemical derivatization in order to obtain molecules with charge. There are numerous compounds used to derivatize samples. N,N-Dimethylformamide dimethyl acetal (DMFDMA) and N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide (MTBSTFA) are two examples of compounds that have been used to derivatize amino acids.

In addition, strong isotope effects can alter the retention time of differently labeled isotopomers in the GC column. Overloading of the GC column must also be prevented.

Lastly, the natural abundance of atoms other than carbon also disturbs the mass isotopomer spectrum. For example, each oxygen atom in the molecule might also be present as a 17O or an 18O isotope. A more significant impact comes from the natural abundance of the silicon isotopes 29Si and 30Si, since silicon is used in derivatizing agents for MS techniques.
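
One way to see this effect is to convolve a "true" carbon-labeling pattern with the natural isotope distribution of a single silicon atom from the derivatizing agent (the silicon abundances below are standard values; the one-Si-per-fragment assumption and the labeling pattern, reused from the sketch above, are hypothetical):

```python
# Sketch of how natural isotope abundance smears a mass spectrum.
# The observed spectrum is the convolution of the carbon-labeling
# pattern with the +0/+1/+2 mass-shift distribution of 28Si/29Si/30Si.
import numpy as np

si = np.array([0.9223, 0.0467, 0.0310])            # 28Si, 29Si, 30Si
carbon_pattern = np.array([0.6, 0.15, 0.2, 0.05])  # true M+0..M+3 fractions

observed = np.convolve(carbon_pattern, si)         # what the MS actually sees
print(np.round(observed, 4))
```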

Radioisotopic labeling

Radioisotopic labeling is a technique for tracking the passage of a sample of substance through a system. The substance is "labeled" by including radionuclides in its chemical composition. When these decay, their presence can be determined by detecting the radiation emitted by them. Radioisotopic labeling is a special case of isotopic labeling.

For these purposes, a particularly useful type of radioactive decay is positron emission. When a positron collides with an electron, it releases two high-energy photons traveling in diametrically opposite directions. If the positron is produced within a solid object, it is likely to do this before traveling more than a millimeter. If both of these photons can be detected, the location of the decay event can be determined very precisely.
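
A sketch of the geometry (detector positions and timing below are hypothetical; real scanners use many detector pairs, and time-of-flight systems use the photon arrival-time difference to place the event along the line between detectors):

```python
# Sketch of locating an annihilation event from a coincident photon pair.
# The event lies on the line between the two detectors, displaced from the
# midpoint by c * dt / 2 toward the detector that fired first.
import numpy as np

C = 2.998e8                      # speed of light, m/s
d1 = np.array([0.0, 0.4])        # detector 1 position, m (hypothetical)
d2 = np.array([0.0, -0.4])       # detector 2 position, m (hypothetical)
dt = 0.5e-9                      # t2 - t1: photon reached d1 first, s

midpoint = (d1 + d2) / 2
direction = (d1 - d2) / np.linalg.norm(d1 - d2)
event = midpoint + direction * (C * dt / 2)
print(event)                     # ~0.075 m toward detector 1
```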

Strictly speaking, radioisotopic labeling includes only cases where radioactivity is artificially introduced by experimenters, but some natural phenomena allow similar analysis to be performed. In particular, radiometric dating uses a closely related principle.

Applications

Applications in human mineral nutrition research

The use of stable isotope tracers to study mineral nutrition and metabolism in humans was first reported in the 1960s. While radioisotopes had been used in human nutrition research for several decades prior, stable isotopes presented a safer option, especially in subjects for whom there is elevated concern about radiation exposure, e.g. pregnant and lactating women and children. Other advantages offered by stable isotopes include the ability to study elements having no suitable radioisotopes and to study long-term tracer behavior. Thus the use of stable isotopes became commonplace with the increasing availability of isotopically enriched materials and inorganic mass spectrometers. The use of stable isotopes instead of radioisotopes does have several drawbacks: larger quantities of tracer are required, with the potential of perturbing the naturally existing mineral; analytical sample preparation is more complex and mass spectrometry instrumentation more costly; and the presence of tracer in whole bodies or particular tissues cannot be measured externally. Nonetheless, the advantages have prevailed, making stable isotopes the standard in human studies.

Most of the minerals that are essential for human health and of particular interest to nutrition researchers have stable isotopes, some well-suited as biological tracers because of their low natural abundance. Iron, zinc, calcium, copper, magnesium, selenium and molybdenum are among the essential minerals having stable isotopes to which isotope tracer methods have been applied. Iron, zinc and calcium in particular have been extensively studied.

Aspects of mineral nutrition/metabolism that are studied include absorption (from the gastrointestinal tract into the body), distribution, storage, excretion and the kinetics of these processes. Isotope tracers are administered to subjects orally (with or without food, or with a mineral supplement) and/or intravenously. Isotope enrichment is then measured in blood plasma, erythrocytes, urine and/or feces. Enrichment has also been measured in breast milk and intestinal contents. Tracer experiment design sometimes differs between minerals due to differences in their metabolism. For example, iron absorption is usually determined from incorporation of tracer in erythrocytes whereas zinc or calcium absorption is measured from tracer appearance in plasma, urine or feces. The administration of multiple isotope tracers in a single study is common, permitting the use of more reliable measurement methods and simultaneous investigations of multiple aspects of metabolism.

The measurement of mineral absorption from the diet, often conceived of as bioavailability, is the most common application of isotope tracer methods to nutrition research. Among the purposes of such studies are the investigations of how absorption is influenced by type of food (e.g. plant vs animal source, breast milk vs formula), other components of the diet (e.g. phytate), disease and metabolic disorders (e.g. environmental enteric dysfunction), the reproductive cycle, quantity of mineral in diet, chronic mineral deficiency, subject age and homeostatic mechanisms. When results from such studies are available for a mineral, they may serve as a basis for estimations of the human physiological and dietary requirements of the mineral.

When tracer is administered with food for the purpose of observing mineral absorption and metabolism, it may be in the form of an intrinsic or extrinsic label. An intrinsic label is isotope that has been introduced into the food during its production, thus enriching the natural mineral content of the food, whereas extrinsic labeling refers to the addition of tracer isotope to the food during the study. Because it is a very time-consuming and expensive approach, intrinsic labeling is not routinely used. Studies comparing measurements of absorption using intrinsic and extrinsic labeling of various foods have generally demonstrated good agreement between the two labeling methods, supporting the hypothesis that extrinsic and natural minerals are handled similarly in the human gastrointestinal tract.

Enrichment is quantified from the measurement of isotope ratios, the ratio of the tracer isotope to a reference isotope, by mass spectrometry. Multiple definitions and calculations of enrichment have been adopted by different researchers. Calculations of enrichment become more complex when multiple tracers are used simultaneously. Because enriched isotope preparations are never isotopically pure, i.e. they contain all the element's isotopes in unnatural abundances, calculations of enrichment of multiple isotope tracers must account for the perturbation of each isotope ratio by the presence of the other tracers.
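
As the text notes, definitions vary between researchers; one simple form expresses enrichment as the measured tracer-to-reference ratio in excess of the natural ratio (the numbers below are hypothetical):

```python
# Sketch of one common enrichment definition (not the only one in use):
# excess of the tracer/reference isotope ratio over the natural baseline.

def enrichment(measured_ratio, natural_ratio):
    """Excess ratio of tracer to reference isotope above natural abundance."""
    return measured_ratio - natural_ratio

# Hypothetical zinc tracer study: 67Zn/66Zn ratio before and after dosing.
print(enrichment(measured_ratio=0.1530, natural_ratio=0.1473))  # 0.0057
```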

Due to the prevalence of mineral deficiencies and their critical impact on human health and well-being in resource-poor countries, the International Atomic Energy Agency has recently published detailed and comprehensive descriptions of stable isotope methods to facilitate the dissemination of this knowledge to researchers beyond western academic centers.

Applications in proteomics

In proteomics, the study of the full set of proteins expressed by a genome, identifying disease biomarkers can involve the use of stable isotope labeling by amino acids in cell culture (SILAC), which provides isotopically labeled forms of amino acids used to estimate protein levels. In recombinant protein production, manipulated proteins are produced in large quantities, and isotope labeling is a tool to test for relevant proteins. The method involves selectively enriching nuclei with 13C or 15N, or depleting 1H from them. The recombinant protein is expressed in E. coli grown in media containing 15N-ammonium chloride as the nitrogen source. The resulting 15N-labeled proteins are then purified by immobilized metal affinity chromatography and their percentage labeling estimated. To increase the yield of labeled protein and cut down the cost of isotope-labeled media, an alternative procedure first builds up the cell mass in unlabeled media before introducing the cells into a minimal amount of labeled media. Another application of isotope labeling is measuring DNA synthesis, that is, cell proliferation in vitro, using 3H-thymidine labeling to compare patterns of synthesis in cells.
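
In the simplest case, the quantification step in SILAC-style experiments reduces to a heavy-to-light intensity ratio per peptide (a sketch with hypothetical intensities; real pipelines aggregate many peptides and correct for noise and isotope impurity):

```python
# Sketch of SILAC-style relative quantification: the heavy/light intensity
# ratio of a peptide's MS peaks estimates the relative abundance of the
# protein between the two culture conditions.

def silac_ratio(heavy_intensity, light_intensity):
    return heavy_intensity / light_intensity

# Peptide observed at both the light and the isotope-heavy mass:
print(silac_ratio(heavy_intensity=3.2e6, light_intensity=1.6e6))  # 2.0
```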

Applications for ecosystem process analysis

Isotopic tracers are used to examine processes in natural systems, especially terrestrial and aquatic environments. In soil science, 15N tracers are used extensively to study nitrogen cycling, whereas 13C and 14C, the stable and radioactive isotopes of carbon respectively, are used for studying turnover of organic compounds and fixation of CO2 by autotrophs. For example, Marsh et al. (2005) used dual-labeled (15N and 14C) urea to demonstrate utilization of the compound by ammonia oxidizers as both an energy source (ammonia oxidation) and a carbon source (chemoautotrophic carbon fixation). Deuterated water is also used for tracing the fate and ages of water in a tree or in an ecosystem.

Applications for oceanography

Tracers are also used extensively in oceanography to study a wide array of processes. The isotopes used are typically naturally occurring with well-established sources and rates of formation and decay. However, anthropogenic isotopes may also be used with great success. The researchers measure the isotopic ratios at different locations and times to infer information about the physical processes of the ocean.

Particle transport

The ocean is an extensive network of particle transport. Thorium isotopes can help researchers decipher the vertical and horizontal movement of matter. 234Th has a constant, well-defined production rate in the ocean and a half-life of 24 days. This naturally occurring isotope has been shown to vary linearly with depth. Therefore, any changes in this linear pattern can be attributed to the transport of 234Th on particles. For example, low isotopic ratios in surface water with very high values a few meters down would indicate a vertical flux in the downward direction. Furthermore, the thorium isotope may be traced within a specific depth to decipher the lateral transport of particles.

Circulation

Circulation within local systems, such as bays, estuaries, and groundwater, may be examined with radium isotopes. 223Ra has a half-life of 11 days and occurs naturally at specific locations in rivers and groundwater sources. The isotopic ratio of radium then decreases as water from the source river enters a bay or estuary. By measuring the amount of 223Ra at a number of different locations, a circulation pattern can be deciphered. The same process can also be used to study the movement and discharge of groundwater.
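
Assuming pure radioactive decay with no additional radium input along the flow path (an idealization), the decrease in 223Ra activity converts directly into an apparent transit time:

```python
# Sketch: apparent transit time of water from a 223Ra source, assuming
# decay only (half-life ~11 d) and no further radium input en route.
import math

HALF_LIFE_DAYS = 11.4
LAMBDA = math.log(2) / HALF_LIFE_DAYS           # decay constant, 1/day

def apparent_age(activity_at_source, activity_observed):
    return math.log(activity_at_source / activity_observed) / LAMBDA

print(f"{apparent_age(10.0, 5.0):.1f} days")    # one half-life: ~11.4 d
```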

Various isotopes of lead can be used to study circulation on a global scale. Different oceans (i.e. the Atlantic, Pacific, Indian, etc.) have different isotopic signatures, resulting from differences in the isotopic ratios of sediments and rocks within the different oceans. Because the residence time of lead in the ocean is on the order of 50–200 years, there is not enough time for the isotopic ratios to be homogenized throughout the whole ocean. Therefore, precise analysis of Pb isotopic ratios can be used to study the circulation of the different oceans.

Tectonic processes and climate change

Isotopes with extremely long half-lives and their decay products can be used to study multi-million year processes, such as tectonics and extreme climate change. For example, in rubidium–strontium dating, the isotopic ratio of strontium (87Sr/86Sr) can be analyzed within ice cores to examine changes over the earth's lifetime. Differences in this ratio within the ice core would indicate significant alterations in the earth's geochemistry.

Isotopes related to nuclear weapons

The aforementioned processes can be measured using naturally occurring isotopes. Nevertheless, anthropogenic isotopes are also extremely useful for oceanographic measurements. Nuclear weapons tests released a plethora of uncommon isotopes into the world's oceans. 3H, 129I, and 137Cs can be found dissolved in seawater, while 241Am and 238Pu are attached to particles. The isotopes dissolved in water are particularly useful in studying global circulation. For example, differences in lateral isotopic ratios within an ocean can indicate strong water fronts or gyres. Conversely, the isotopes attached to particles can be used to study mass transport within water columns. For instance, high levels of Am or Pu can indicate downwelling when observed at great depths, or upwelling when observed at the surface.


Nanoparticles for drug delivery to the brain

Nanoparticles for drug delivery to the brain is a method for transporting drug molecules across the blood–brain barrier (BBB) using nanoparticles. These drugs cross the BBB and deliver pharmaceuticals to the brain for therapeutic treatment of neurological disorders. These disorders include Parkinson's disease, Alzheimer's disease, schizophrenia, depression, and brain tumors. Part of the difficulty in finding cures for these central nervous system (CNS) disorders is that there is as yet no truly efficient delivery method for drugs across the BBB. Antibiotics, antineoplastic agents, and a variety of CNS-active drugs, especially neuropeptides, are a few examples of molecules that cannot pass the BBB alone. With the aid of nanoparticle delivery systems, however, studies have shown that some drugs can now cross the BBB, and can even exhibit lower toxicity and fewer adverse effects throughout the body. Toxicity is an important concept for pharmacology because high toxicity levels in the body could be detrimental to the patient by affecting other organs and disrupting their function. Further, the BBB is not the only physiological barrier to drug delivery to the brain. Other biological factors influence how drugs are transported throughout the body and how they target specific locations for action. Some of these pathophysiological factors include blood flow alterations, edema and increased intracranial pressure, metabolic perturbations, and altered gene expression and protein synthesis. Though many obstacles make developing a robust delivery system difficult, nanoparticles provide a promising mechanism for drug transport to the CNS.

Background

The first successful delivery of a drug across the BBB occurred in 1995. The drug used was the hexapeptide dalargin, an anti-nociceptive peptide that cannot cross the BBB alone. It was encapsulated in polysorbate 80-coated nanoparticles and intravenously injected. This was a huge breakthrough in the nanoparticle drug delivery field, and it helped advance research and development toward clinical trials of nanoparticle delivery systems. Nanoparticles range in size from 10–1000 nm (1 µm) and can be made from natural or artificial polymers, lipids, dendrimers, and micelles. Most polymers used for nanoparticle drug delivery systems are natural, biocompatible, and biodegradable, which helps prevent contamination in the CNS. Several current methods for drug delivery to the brain include the use of liposomes, prodrugs, and carrier-mediated transporters. Many different delivery routes exist to transport these drugs into the body, such as peroral, intranasal, intravenous, and intracranial. For nanoparticles, most studies have shown the greatest progress with intravenous delivery. Along with delivery and transport routes, there are several means of functionalizing, or activating, the nanoparticle carriers. These include dissolving or absorbing a drug throughout the nanoparticle, encapsulating a drug inside the particle, or attaching a drug to the surface of the particle.

Types of nanoparticles for CNS drug delivery

Lipid-based

Diagram of liposome showing a phospholipid bilayer surrounding an aqueous interior.

One type of nanoparticle involves the use of liposomes as drug molecule carriers. The diagram on the right shows a standard liposome: a phospholipid bilayer separating the aqueous interior from the exterior of the vesicle.

Liposomes are composed of vesicular bilayers, lamellae, made of biocompatible and biodegradable lipids such as sphingomyelin, phosphatidylcholine, and glycerophospholipids. Cholesterol, a type of lipid, is also often incorporated in the lipid-nanoparticle formulation. Cholesterol can increase stability of a liposome and prevent leakage of a bilayer because its hydroxyl group can interact with the polar heads of the bilayer phospholipids. Liposomes have the potential to protect the drug from degradation, target sites for action, and reduce toxicity and adverse effects. Lipid nanoparticles can be manufactured by high pressure homogenization, a current method used to produce parenteral emulsions. This process can ultimately form a uniform dispersion of small droplets in a fluid substance by subdividing particles until the desired consistency is acquired. This manufacturing process is already scaled and in use in the food industry, which therefore makes it more appealing for researchers and for the drug delivery industry.

Liposomes can also be functionalized by attaching various ligands on the surface to enhance brain-targeted delivery.

Cationic liposomes

Another type of lipid-nanoparticle that can be used for drug delivery to the brain is a cationic liposome. These are lipid molecules that are positively charged. One example of cationic liposomes uses bolaamphiphiles, which contain hydrophilic groups surrounding a hydrophobic chain to strengthen the boundary of the nano-vesicle containing the drug. Bolaamphiphile nano-vesicles can cross the BBB, and they allow controlled release of the drug to target sites. Lipoplexes can also be formed from cationic liposomes and DNA solutions, to yield transfection agents. Cationic liposomes cross the BBB through adsorption mediated endocytosis followed by internalization in the endosomes of the endothelial cells. By transfection of endothelial cells through the use of lipoplexes, physical alterations in the cells could be made. These physical changes could potentially improve how some nanoparticle drug-carriers cross the BBB.

Metallic

Metal nanoparticles are promising as carriers for drug delivery to the brain. Common metals used for nanoparticle drug delivery are gold, silver, and platinum, owing to their biocompatibility. These metallic nanoparticles are used due to their large surface area to volume ratio, geometric and chemical tunability, and endogenous antimicrobial properties. Silver cations released from silver nanoparticles can bind to the negatively charged cellular membrane of bacteria and increase membrane permeability, allowing foreign chemicals to enter the intracellular fluid.

Metal nanoparticles are chemically synthesized using reduction reactions. For example, drug-conjugated silver nanoparticles are created by reducing silver nitrate with sodium borohydride in the presence of an ionic drug compound. The drug binds to the surface of the silver, stabilizing the nanoparticles and preventing the nanoparticles from aggregation.

Metallic nanoparticles typically cross the BBB via transcytosis. Nanoparticle delivery through the BBB can be increased by introducing peptide conjugates to improve permeability to the central nervous system. For instance, recent studies have shown an improvement in gold nanoparticle delivery efficiency by conjugating a peptide that binds to the transferrin receptors expressed in brain endothelial cells.

Solid lipid

Diagram displays a solid lipid nanoparticle (SLN). There is only one phospholipid layer because the interior of the particle is solid. Molecules such as antibodies, targeting peptides, and drug molecules can be bound to the surface of the SLN.

Solid lipid nanoparticles (SLNs) are lipid nanoparticles with a solid interior, as shown in the diagram on the right. SLNs can be made by replacing the liquid lipid oil used in the emulsion process with a solid lipid. In solid lipid nanoparticles, the drug molecules are dissolved in the particle's solid hydrophobic lipid core (the drug payload), which is surrounded by an aqueous solution. Many SLNs are developed from triglycerides, fatty acids, and waxes. High-pressure homogenization or micro-emulsification can be used for manufacturing. Further, functionalizing the surface of solid lipid nanoparticles with polyethylene glycol (PEG) can increase BBB permeability. Colloidal carriers such as liposomes, polymeric nanoparticles, and emulsions suffer from reduced stability, shelf life, and encapsulation efficacy; solid lipid nanoparticles are designed to overcome these shortcomings and offer excellent drug release and physical stability in addition to targeted delivery of drugs.

Nanoemulsions

Another form of nanoparticle delivery system is the oil-in-water emulsion prepared at the nanoscale. This process uses common biocompatible oils such as triglycerides and fatty acids, combining them with water and surface-coating surfactants. Oils rich in omega-3 fatty acids in particular contain factors that aid in penetrating the tight junctions of the BBB.

Polymer-based

Other nanoparticles are polymer-based, meaning they are made from a natural polymer such as polylactic acid (PLA), poly(D,L-glycolide) (PLG), polylactide-co-glycolide (PLGA), or polycyanoacrylate (PCA). Some studies have found that polymeric nanoparticles may provide better results for drug delivery than lipid-based nanoparticles because they may increase the stability of the drugs or proteins being transported. Polymeric nanoparticles may also offer beneficial controlled-release mechanisms.

Polymer Branch

Nanoparticles made from natural, biodegradable polymers have the ability to target specific organs and tissues in the body, to carry DNA for gene therapy, and to deliver larger molecules such as proteins, peptides, and even genes. To manufacture these polymeric nanoparticles, the drug molecules are first dissolved and then encapsulated or attached to a polymer nanoparticle matrix. Three different structures can then be obtained from this process: nanoparticles, nanocapsules (in which the drug is encapsulated and surrounded by the polymer matrix), and nanospheres (in which the drug is dispersed throughout the polymeric matrix in a spherical form).

One of the most important traits for nanoparticle delivery systems is that they must be biodegradable on the scale of a few days. A few common polymer materials used for drug delivery studies are polybutyl cyanoacrylate (PBCA), poly(isohexyl cyanoacrylate) (PIHCA), polylactic acid (PLA), and polylactide-co-glycolide (PLGA). PBCA undergoes degradation through enzymatic cleavage of its ester bond on the alkyl side chain to produce water-soluble byproducts. PBCA is also the fastest-degrading of these materials, with studies showing an 80% reduction 24 hours after intravenous injection. PIHCA, however, was recently found to display an even lower degradation rate, which in turn further decreases toxicity. Due to this slight advantage, PIHCA is currently undergoing phase III clinical trials for transporting the drug doxorubicin as a treatment for hepatocellular carcinomas.
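
As a back-of-the-envelope reading of the PBCA figure (assuming simple first-order degradation kinetics, which the cited studies may not have reported), "80% reduction after 24 hours" corresponds to:

```python
# Sketch: rate constant and half-life implied by 80% degradation in 24 h,
# under an assumed first-order decay model.
import math

fraction_remaining = 0.20       # 80% degraded at t = 24 h
t = 24.0                        # hours
k = -math.log(fraction_remaining) / t
print(f"k ~= {k:.3f} per hour, half-life ~= {math.log(2) / k:.1f} h")
# k ~= 0.067 per hour, half-life ~= 10.3 h
```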

Human serum albumin (HSA) and chitosan are also materials of interest for the generation of nanoparticle delivery systems. Using albumin nanoparticles for stroke therapy can overcome numerous limitations. For instance, albumin nanoparticles can enhance BBB permeability, increase solubility, and increase half-life in circulation. Patients who have brain cancer overexpress albumin-binding proteins, such as SPARC and gp60, in their BBB and tumor cells, naturally increasing the uptake of albumin into the brain. Using this relationship, researchers have formed albumin nanoparticles that co-encapsulate two anticancer drugs, paclitaxel and fenretinide, modified with low-molecular-weight protamine (LMWP), a type of cell-penetrating peptide, for anti-glioma therapy. Once injected into the patient's body, the albumin nanoparticles can cross the BBB more easily, bind to the proteins and penetrate glioma cells, and then release the contained drugs. This nanoparticle formulation enhances tumor-targeting delivery efficiency and improves the solubility of hydrophobic drugs. Specifically, cationic bovine serum albumin-conjugated tanshinone IIA PEGylated nanoparticles injected into an MCAO rat model decreased the volume of infarction and neuronal apoptosis. Chitosan, a naturally abundant polysaccharide, is particularly useful due to its biocompatibility and lack of toxicity. With its adsorptive and mucoadhesive properties, chitosan can overcome limitations of intranasal administration to the brain. It has been shown that cationic chitosan nanoparticles interact with the negatively charged brain endothelium.

Coating these polymeric nanoparticle devices with different surfactants can also aid BBB crossing and uptake in the brain. Surfactants such as polysorbate 80, 20, 40, and 60 and poloxamer 188 have demonstrated positive drug delivery through the blood–brain barrier, whereas other surfactants did not yield the same results. It has also been shown that functionalizing the surface of nanoparticles with polyethylene glycol (PEG) can induce the "stealth effect", allowing the drug-loaded nanoparticle to circulate throughout the body for prolonged periods. Further, the stealth effect, caused in part by the hydrophilic and flexible properties of the PEG chains, facilitates an increase in localization of the drug at target sites in tissues and organs.

Mechanisms for delivery

Liposomes

A mechanism for liposome transport across the BBB is lipid-mediated free diffusion, a type of facilitated diffusion, or lipid-mediated endocytosis. There exist many lipoprotein receptors which bind lipoproteins to form complexes that in turn transport the liposome nano-delivery system across the BBB. Apolipoprotein E (apoE) is a protein that facilitates transport of lipids and cholesterol. ApoE constituents bind to nanoparticles, and then this complex binds to a low-density lipoprotein receptor (LDLR) in the BBB and allows transport to occur.

This diagram shows several ways in which transport across the BBB works. For nanoparticle delivery across the BBB, the most common mechanisms are receptor-mediated transcytosis and adsorptive transcytosis.

Polymeric nanoparticles

The transport of polymer-based nanoparticles across the BBB has been characterized as receptor-mediated endocytosis by the brain capillary endothelial cells, followed by transcytosis that carries the nanoparticles across the tight junctions of the endothelial cells and into the brain. Surface-coating nanoparticles with surfactants such as polysorbate 80 or poloxamer 188 has also been shown to increase uptake of the drug into the brain. This mechanism relies on certain receptors located on the luminal surface of the endothelial cells of the BBB. Ligands coated on the nanoparticle's surface bind to specific receptors and cause a conformational change. Once the ligands are bound, transcytosis can commence, involving the formation of vesicles from the plasma membrane that pinch off around the nanoparticle system after internalization.

Additional receptors identified for receptor-mediated endocytosis of nanoparticle delivery systems are the scavenger receptor class B type I (SR-BI), LDL receptor-related protein 1 (LRP1), transferrin receptor, and insulin receptor. As long as a receptor exists on the endothelial surface of the BBB, any ligand that binds it can be attached to the nanoparticle's surface to functionalize the nanoparticle so that it binds and undergoes endocytosis.

Another mechanism is adsorption-mediated transcytosis, in which electrostatic interactions mediate nanoparticle crossing of the BBB. Cationic nanoparticles (including cationic liposomes) are of interest for this mechanism because their positive charges assist binding to the brain's endothelial cells. Using TAT peptides, a class of cell-penetrating peptides, to functionalize the surface of cationic nanoparticles can further improve drug transport into the brain.

Magnetic and Magnetoelectric nanoparticles

In contrast to the above mechanisms, delivery with magnetic fields does not strongly depend on the biochemistry of the brain. In this case, nanoparticles are literally pulled across the BBB by applying a magnetic field gradient. The nanoparticles can be pulled into, as well as removed from, the brain merely by controlling the direction of the gradient. For the approach to work, the nanoparticles must have a non-zero magnetic moment and a diameter of less than 50 nm. Both magnetic and magnetoelectric nanoparticles (MENs) satisfy these requirements, but only MENs display a non-zero magnetoelectric (ME) effect. Due to the ME effect, MENs can provide direct access to local intrinsic electric fields at the nanoscale, enabling two-way communication with the neural network at the single-neuron level. MENs, proposed by the research group of Professor Sakhrat Khizroev at Florida International University (FIU), have been used for targeted drug delivery and externally controlled release across the BBB to treat HIV and brain tumors, as well as to wirelessly stimulate neurons deep in the brain for treatment of neurodegenerative diseases such as Parkinson's disease.
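
An order-of-magnitude sketch of the pulling mechanism (all numbers hypothetical; the force law F = m·∇B and Stokes drag are textbook approximations, not values from the FIU work):

```python
# Sketch of magnetophoresis: the force on a particle of moment m in a field
# gradient dB/dz is F = m * dB/dz, balanced against Stokes drag
# F = 6*pi*eta*r*v to give a drift velocity.
import math

m = 1e-18            # particle magnetic moment, A*m^2 (hypothetical)
grad_B = 100.0       # field gradient, T/m (strong lab gradient)
eta = 1e-3           # viscosity of a water-like medium, Pa*s
r = 25e-9            # particle radius (within the <50 nm requirement), m

force = m * grad_B                      # N
v = force / (6 * math.pi * eta * r)     # Stokes drift velocity, m/s
print(f"F = {force:.1e} N, drift v = {v:.1e} m/s")
```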

Focused ultrasound

Studies have shown that focused ultrasound bursts can be used noninvasively to disrupt tight junctions in desired locations of the BBB, allowing increased passage of particles at those locations. This disruption can last up to four hours after burst administration. Focused ultrasound works by generating oscillating microbubbles, which physically interact with the cells of the BBB by oscillating at a frequency that can be tuned by the ultrasound burst. This physical interaction is believed to cause cavitation and ultimately the disintegration of the tight junction complexes, which may explain why the effect lasts for several hours. However, the energy applied from ultrasound can result in tissue damage. Studies have demonstrated that this risk can be reduced if preformed microbubbles are injected before the focused ultrasound is applied, reducing the energy required from the ultrasound. This technique has applications in the treatment of various diseases. For example, one study has shown that using focused ultrasound with oscillating bubbles loaded with the chemotherapeutic drug carmustine facilitates the safe treatment of glioblastoma in an animal model. This drug, like many others, normally requires large dosages to reach the target brain tissue by diffusion from the blood, leading to systemic toxicity and the possibility of multiple harmful side effects manifesting throughout the body. Focused ultrasound thus has the potential to increase the safety and efficacy of drug delivery to the brain.

Toxicity

A study was performed to assess the toxic effects of doxorubicin-loaded polymeric nanoparticle systems. It found that doses up to 400 mg/kg of PBCA nanoparticles alone did not cause any toxic effects on the organism. These low toxicity effects can most likely be attributed to the controlled release and modified biodistribution of the drug due to the traits of the nanoparticle delivery system. Toxicity is a highly important factor and limitation in drug delivery studies, and a major area of interest in research on nanoparticle delivery to the brain.

Metal nanoparticles are associated with risks of neurotoxicity and cytotoxicity. These metals generate reactive oxygen species, which cause oxidative stress and damage the cells' mitochondria and endoplasmic reticulum. This leads to further problems of cellular toxicity, such as damage to DNA and disruption of cellular pathways. Silver nanoparticles in particular have a higher degree of toxicity than other metal nanoparticles such as gold or iron. Silver nanoparticles can circulate through the body and accumulate easily in multiple organs, as discovered in a study on silver nanoparticle distribution in rats: traces of silver accumulated in the rats' lungs, spleen, kidneys, liver, and brain after the nanoparticles were injected subcutaneously. In addition, silver nanoparticles generate more reactive oxygen species than other metals, which leads to an overall larger problem of toxicity.

Research

In the early 21st century, extensive research is occurring in the field of nanoparticle drug delivery systems to the brain. One of the diseases commonly studied in neuroscience is Alzheimer's disease, and many studies have examined how nanoparticles can be used as a platform to deliver therapeutic drugs to patients with the disease. Alzheimer's drugs that have been studied in this way include rivastigmine, tacrine, quinoline, piperine, and curcumin, with PBCA, chitosan, and PLGA nanoparticles used as the delivery systems. Overall, the results for each drug injected with these nanoparticles showed remarkable improvements in the effects of the drug relative to non-nanoparticle delivery systems, suggesting that nanoparticles could provide a promising solution to how these drugs cross the BBB. One factor that still must be considered and accounted for is nanoparticle accumulation in the body. With the long-term and frequent injections often required to treat chronic diseases such as Alzheimer's disease, polymeric nanoparticles could potentially build up in the body, causing undesirable effects. This concern will need further assessment to characterize and mitigate such effects.

History of neuroimaging

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/History_of_neuroimaging

Neuroimaging is a medical technique that allows doctors and researchers to take pictures of the inner workings of a patient's body or brain. It can show areas with heightened activity, areas with high or low blood flow, the structure of the patient's brain or body, and certain abnormalities. Neuroimaging is most often used to find the specific location of diseases or birth defects such as tumors, cancers, or clogged arteries. Neuroimaging first came about as a medical technique in the 1880s with the invention of the human circulation balance and has since led to other inventions such as the X-ray, air ventriculography, cerebral angiography, PET/SPECT scans, magnetoencephalography, and xenon CT scanning.

Neuroimaging Techniques

Human Circulation Balance

Angelo Mosso's 'human circulation balance.'

The 'human circulation balance' was a non-invasive way to measure blood flow to the brain during mental activities. The technique worked by placing patients on a table supported by a fulcrum, allowing the table to tilt depending on activity levels: when patients were exposed to more cognitively complex stimuli, the table would tip toward the head. Invented in 1882 by Angelo Mosso, the 'human circulation balance' is considered the first neuroimaging technique and is what Mosso is best known for.

Wilhelm Roentgen, creator of the X-ray.

X-ray

In 1895, Wilhelm Roentgen developed the first radiograph, more commonly known as the X-ray. By 1901, Roentgen had been awarded the Nobel Prize in Physics for his discovery. Immediately after its release, X-ray machines were being manufactured and used worldwide in medicine. However, this was only the first step in the development of neuroimaging. The brain is almost entirely composed of soft tissue that is not radio-opaque, meaning it remains essentially invisible to ordinary or plain X-ray examinations. This is also true of most brain abnormalities, though there are exceptions; a calcified tumor (e.g., meningioma, craniopharyngioma, and some types of glioma) can easily be seen.

Air Ventriculography

To overcome this limitation, in 1918 the neurosurgeon Walter Dandy developed a technique called air ventriculography, in which filtered air was injected directly into the lateral ventricles to better image the ventricular system of the brain. Thanks to local anesthetics this was not a painful procedure, but it was significantly risky: hemorrhage, severe infection, and extreme changes in intracranial pressure were all threats. Dandy did not stop there; in 1919 he extended the approach with pneumoencephalography, in which air introduced by lumbar puncture replaces the cerebrospinal fluid, making the brain's structures visible on X-ray images. In the same era, electroencephalography, which records the brain's electrical signals through attached sensors and translates them into a visual display of activity patterns, was also being developed. With these early advances, imaging and recording of the brain began to be used to diagnose conditions such as epilepsy, brain injuries, and sleep disorders, providing invaluable information about brain function that would later be built upon during the development of modern neuroimaging.

Cerebral Angiography

Cerebral angiogram showing a transverse projection of the vertebrobasilar and posterior cerebral circulation.

Introduced in 1927 by Egas Moniz, cerebral angiography enabled doctors to accurately detect and diagnose anomalies in the brain such as tumors and internal carotid artery occlusions. Over the course of a year, Moniz ran experiments with various dye solution concentrations injected into arteries to better visualize the blood vessels of the brain, before finding that a solution of 25% sodium iodide was the safest for patients as well as the most effective for visualizing the blood vessels and arteries within the brain.

PET/SPECT Scans

Full body PET scan of an adult female.

A positron emission tomography, or PET, scan shows areas of high activity in the body. The patient is first given a radioactive substance (called a tracer) via an injection in the hand or arm. The tracer then circulates through the body and accumulates in organs or tissues that consume a specific substance, such as glucose, during metabolism. As the tracer decays it emits positrons; each positron annihilates with an electron, producing a pair of gamma photons that the PET camera detects. From these detections, a computer produces either a 2D or 3D image of the activity occurring within the organ or tissue. The idea for the PET scan was originally proposed by William Sweet in the 1950s, but the first whole-body PET scanner was not developed until 1974, by Michael Phelps.

Similarly, the single-photon emission computed tomography scan, or SPECT scan, also works by scanning a tracer within the patient. The difference is that SPECT directly detects gamma rays emitted by the tracer itself, rather than the annihilation photons that PET detects. As a result, the images a SPECT scan creates are not as clear as those produced by a PET scan, but the procedure is typically cheaper. SPECT was developed by David Kuhl in the 1950s; Kuhl's work also helped lay the foundation for the PET scan.

Magnetoencephalography

MEG device with patient.

Magnetoencephalography (MEG) is a technique that locates regions of activity in the brain by detecting the weak magnetic fields produced by electrical currents flowing through large groups of neurons. It was developed by physicist David Cohen in the early 1970s as a noninvasive procedure: the device is designed like a giant helmet into which the patient places their head, and once turned on it reads the magnetic signals coming from the brain. In 1972, Cohen greatly increased the technique's sensitivity by adopting the SQUID (superconducting quantum interference device), a detector capable of registering extremely small changes in magnetic fields.

Xenon CT Scanning

Godfrey Hounsfield, inventor of the first CT scanner.

Xenon computed tomography is a scanning technique that reveals the flow of blood to areas of the brain. The scan tests for consistent and sufficient blood flow to all areas of the brain by having patients breathe in xenon gas, a contrast agent, which shows the areas of high and low blood flow. Although many trial scans and tests were run during the development of computed tomography, British biomedical engineer Godfrey Hounsfield is the founder of the technique; he invented the first CT scanner in 1967 and won a Nobel Prize for it in 1979. Adoption of the scanners in the United States did not occur until six years later, in 1973, but the CT scanner had already gained a notable reputation and popularity beforehand.

Magnetic resonance imaging

Shortly after the initial development of CT, magnetic resonance imaging (MRI or MR scanning) was developed. Rather than using ionizing or X-radiation, MRI uses the variation in signals produced by protons in the body when the head is placed in a strong magnetic field. Associated with early application of the basic technique to the human body are the names of Jackson (in 1968), Damadian (in 1972), and Abe and Paul Lauterbur (in 1973). Lauterbur and Sir Peter Mansfield were awarded the 2003 Nobel Prize in Physiology or Medicine for their discoveries concerning MRI. At first, structural imaging benefited more than functional imaging from the introduction of MRI. During the 1980s a veritable explosion of technical refinements and diagnostic MR applications took place, enabling even neurological tyros to diagnose brain pathology that would have been elusive or incapable of demonstration in a living person only a decade or two earlier.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...