Tuesday, July 17, 2018

Isotope geochemistry

From Wikipedia, the free encyclopedia
Isotope geochemistry is an aspect of geology based upon the study of natural variations in the relative abundances of isotopes of various elements. Variations in isotopic abundance are measured by isotope ratio mass spectrometry, and can reveal information about the ages and origins of rock, air or water bodies, or processes of mixing between them.

Stable isotope geochemistry is largely concerned with isotopic variations arising from mass-dependent isotope fractionation, whereas radiogenic isotope geochemistry is concerned with the products of natural radioactivity.

Stable isotope geochemistry

For most stable isotopes, the magnitude of fractionation from kinetic and equilibrium fractionation is very small; for this reason, enrichments are typically reported in "per mil" (‰, parts per thousand).[1] These enrichments (δ) represent the ratio of heavy isotope to light isotope in the sample over the ratio of a standard. That is,
$$\delta\,^{13}\mathrm{C}=\left(\frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}}-1\right)\times 1000$$
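
As a concrete illustration, here is a minimal Python sketch of this δ calculation. The VPDB reference ratio below is an assumed literature value, and the sample ratio is hypothetical:

```python
def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Return the delta value in per mil (‰) for an isotope ratio
    measured against a reference standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative numbers only.
R_VPDB = 0.011180      # assumed 13C/12C ratio of the VPDB standard
r_sample = 0.010886    # hypothetical measured 13C/12C of a plant sample

print(delta_per_mil(r_sample, R_VPDB))  # ≈ -26.3 ‰, in the range typical of C3 plants
```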

Carbon

Carbon has two stable isotopes, 12C and 13C, and one radioactive isotope, 14C.

The stable carbon isotope ratio, δ13C, is measured against Vienna Pee Dee Belemnite (VPDB).[2] The stable carbon isotopes are fractionated primarily by photosynthesis (Faure, 2004). The 13C/12C ratio is also an indicator of paleoclimate: a change in the ratio in the remains of plants indicates a change in the amount of photosynthetic activity, and thus in how favorable the environment was for the plants. During photosynthesis, organisms using the C3 pathway show different enrichments compared to those using the C4 pathway, allowing scientists not only to distinguish organic matter from abiotic carbon, but also to determine which photosynthetic pathway the organic matter was using.[1] Occasional spikes in the global 13C/12C ratio have also been useful as stratigraphic markers for chemostratigraphy, especially during the Paleozoic.[3]
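
Because the two pathways fractionate carbon into broadly distinct ranges, the C3/C4 distinction can often be read off δ13C alone. A minimal sketch, assuming commonly quoted typical ranges (roughly −35 to −20‰ for C3 and −17 to −9‰ for C4), which are indicative rather than hard limits:

```python
def classify_pathway(delta13c: float) -> str:
    """Rough photosynthetic-pathway call from δ13C (‰ vs VPDB).
    The boundaries are assumed typical ranges, not universal limits."""
    if -35.0 <= delta13c <= -20.0:
        return "C3"
    if -17.0 <= delta13c <= -9.0:
        return "C4"
    return "ambiguous or non-photosynthetic"

print(classify_pathway(-26.3))  # C3
print(classify_pathway(-12.5))  # C4
```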

The 14C ratio has been used to track ocean circulation, among other things.
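
The tracing works through simple decay arithmetic: once a water mass leaves the surface, its 14C is no longer replenished, so the remaining 14C records the elapsed time. A minimal sketch using the conventional Libby mean life of 8033 years (the ratio values are hypothetical):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional radiocarbon mean life

def radiocarbon_age(ratio_sample: float, ratio_initial: float) -> float:
    """Conventional radiocarbon age from the decline in 14C content."""
    return -LIBBY_MEAN_LIFE * math.log(ratio_sample / ratio_initial)

# Hypothetical deep water retaining 88% of the surface 14C content:
print(radiocarbon_age(0.88, 1.00))  # ≈ 1030 years since leaving the surface
```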

Nitrogen

Nitrogen has two stable isotopes, 14N and 15N. The ratio between these is measured relative to nitrogen in ambient air.[2] Nitrogen ratios are frequently linked to agricultural activities. Nitrogen isotope data have also been used to measure the amount of exchange of air between the stratosphere and troposphere using data from the greenhouse gas N2O.[4]

Oxygen

Oxygen has three stable isotopes, 16O, 17O, and 18O. Oxygen ratios are measured relative to Vienna Standard Mean Ocean Water (VSMOW) or Vienna Pee Dee Belemnite (VPDB).[2] Variations in oxygen isotope ratios are used to track water movement, paleoclimate,[1] and atmospheric gases such as ozone and carbon dioxide.[5] Typically, the VPDB oxygen reference is used for paleoclimate, while VSMOW is used for most other applications.[1] Oxygen isotopes appear in anomalous ratios in atmospheric ozone, resulting from mass-independent fractionation.[6] Isotope ratios in fossilized foraminifera have been used to deduce the temperature of ancient seas.[7]

Sulfur

Sulfur has four stable isotopes, with the following abundances: 32S (0.9502), 33S (0.0075), 34S (0.0421) and 36S (0.0002). These abundances are compared to those found in Cañon Diablo troilite.[5] Variations in sulfur isotope ratios are used to study the origin of sulfur in an orebody and the temperature of formation of sulfur-bearing minerals.[8]

Radiogenic isotope geochemistry

Radiogenic isotopes provide powerful tracers for studying the ages and origins of Earth systems.[9] They are particularly useful for understanding mixing processes between different components, because (heavy) radiogenic isotope ratios are not usually fractionated by chemical processes.

Radiogenic isotope tracers are most powerful when used together with other tracers: the more tracers used, the better the control on mixing processes. An example of this application is the evolution of the Earth's crust and Earth's mantle through geological time.

Lead–lead isotope geochemistry

Lead has four stable isotopes (204Pb, 206Pb, 207Pb, and 208Pb) and one common radioactive isotope, 202Pb, with a half-life of ~53,000 years.

Lead is created in the Earth via the decay of heavier radioactive elements, primarily uranium and thorium.

Lead isotope geochemistry is useful for providing isotopic dates on a variety of materials. Because the radiogenic lead isotopes are created by the decay of different parent elements, the ratios of the four lead isotopes to one another can be very useful in tracking the source of melts in igneous rocks, the source of sediments and even the origin of people via isotopic fingerprinting of their teeth, skin and bones.

It has been used to date ice cores from the Arctic shelf, and provides information on the source of atmospheric lead pollution.

Lead–lead isotopes have been successfully used in forensic science to fingerprint bullets, because each batch of ammunition has its own characteristic 204Pb/206Pb vs 207Pb/208Pb signature.
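
A minimal sketch of how such fingerprint matching might be automated: treat each reference batch as a point in (204Pb/206Pb, 207Pb/208Pb) space and assign a questioned sample to the nearest batch. All ratio values here are hypothetical:

```python
import math

# Hypothetical reference batches: (204Pb/206Pb, 207Pb/208Pb)
BATCHES = {
    "batch_A": (0.0542, 0.4210),
    "batch_B": (0.0551, 0.4187),
    "batch_C": (0.0536, 0.4252),
}

def nearest_batch(sample: tuple) -> str:
    """Return the reference batch closest to the sample in ratio space."""
    return min(BATCHES, key=lambda name: math.dist(sample, BATCHES[name]))

print(nearest_batch((0.0540, 0.4215)))  # batch_A, for these made-up numbers
```

In practice a forensic comparison would also need measurement uncertainties and a match criterion, but the geometry is the same.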

Samarium–neodymium

Samarium–neodymium is an isotope system which can be used to provide a date as well as isotopic fingerprints of geological materials, and of various other materials including archaeological finds (pots, ceramics).

147Sm decays to produce 143Nd with a half-life of 1.06×10^11 years.

Dating is usually achieved by constructing an isochron from several minerals within a rock specimen: the slope of the isochron gives the age, and its intercept gives the initial 143Nd/144Nd ratio.
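
A minimal sketch of the isochron arithmetic, with hypothetical mineral data: a least-squares line through (147Sm/144Nd, 143Nd/144Nd) points gives the age from its slope via t = ln(1 + slope)/λ, assuming the standard decay constant λ(147Sm) ≈ 6.54×10^-12 yr^-1 (equivalent to the half-life quoted above):

```python
import math

LAMBDA_SM147 = 6.54e-12  # yr^-1, assumed decay constant of 147Sm

def isochron_age(x, y):
    """Least-squares isochron: x = 147Sm/144Nd, y = 143Nd/144Nd.
    Returns (age in years, initial 143Nd/144Nd)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return math.log(1.0 + slope) / LAMBDA_SM147, intercept

# Hypothetical mineral separates from a single rock:
sm_nd = [0.10, 0.15, 0.20, 0.25]
nd_nd = [0.51130, 0.51163, 0.51196, 0.51229]
age, initial = isochron_age(sm_nd, nd_nd)
print(f"age ≈ {age / 1e9:.2f} Gyr, initial 143Nd/144Nd ≈ {initial:.5f}")
```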

This initial ratio is modelled relative to CHUR - the Chondritic Uniform Reservoir - which is an approximation of the chondritic material which formed the solar system. CHUR was determined by analysing chondrite and achondrite meteorites.

The difference between the ratio of the sample and that of CHUR can give a model age of extraction from the mantle (for which an assumed evolution has been calculated relative to CHUR), and can indicate whether the material was extracted from a granitic source (depleted in radiogenic Nd), the mantle, or an enriched source.
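
Deviations from CHUR are conventionally expressed in epsilon units, i.e. parts per 10,000. A minimal sketch, assuming the commonly cited present-day CHUR value of 143Nd/144Nd ≈ 0.512638 (the sample values are hypothetical):

```python
CHUR_143ND_144ND = 0.512638  # assumed present-day CHUR reference value

def epsilon_nd(ratio_sample: float) -> float:
    """Epsilon-Nd: deviation of 143Nd/144Nd from CHUR in parts per 10,000."""
    return (ratio_sample / CHUR_143ND_144ND - 1.0) * 1.0e4

print(epsilon_nd(0.513100))  # ≈ +9, a depleted-mantle-like signature
print(epsilon_nd(0.511800))  # ≈ -16, an old-crust-like signature
```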

Rhenium–osmium

Rhenium and osmium are siderophile elements which are present at very low abundances in the crust. 187Re undergoes radioactive decay to produce 187Os, so the ratio of radiogenic to non-radiogenic osmium varies through time.

Rhenium partitions into sulfides more readily than osmium. Hence, during melting of the mantle, rhenium is stripped out, which prevents the 187Os/188Os ratio from changing appreciably thereafter. This locks in the initial osmium isotope ratio of the sample at the time of the melting event, and such initial ratios are used to determine the source characteristics and ages of mantle melting events.

Noble gas isotopes

Natural isotopic variations amongst the noble gases result from both radiogenic and nucleogenic production processes. Because of their unique properties, it is useful to distinguish them from the conventional radiogenic isotope systems described above.

Helium-3

Helium-3 was trapped in the planet when it formed. Some 3He is added by meteoric dust, which collects primarily on the ocean floor (although, due to subduction, all oceanic tectonic plates are younger than continental plates). However, this 3He is degassed from oceanic sediment during subduction, so cosmogenic 3He does not affect the concentration or noble gas ratios of the mantle.
Helium-3 is also created by cosmic ray bombardment and by lithium spallation reactions, which generally occur in the crust. In lithium spallation, a high-energy neutron bombards a lithium atom, producing 3He and 4He. Significant amounts of lithium are required for this to appreciably alter the 3He/4He ratio.

All degassed helium is lost to space eventually, due to the average speed of helium exceeding the escape velocity for the Earth. Thus, it is assumed the helium content and ratios of Earth's atmosphere have remained essentially stable.

It has been observed that 3He is present in volcano emissions and oceanic ridge samples. How 3He is stored in the planet is under investigation, but it is associated with the mantle and is used as a marker of material of deep origin.

Because helium and carbon behave similarly in magma chemistry, outgassing of helium requires the loss of volatile components (water, carbon dioxide) from the mantle, which happens at depths of less than 60 km. However, 3He is transported to the surface primarily trapped in fluid inclusions within the crystal lattices of minerals.

Helium-4 is created by radiogenic production (the decay of uranium- and thorium-series elements). The continental crust has become enriched in those elements relative to the mantle, and thus more 4He is produced in the crust than in the mantle.

The ratio (R) of 3He to 4He is often used to represent 3He content. R is usually given as a multiple of the present atmospheric ratio (Ra).

Common values for R/Ra:
  • Old continental crust: less than 1
  • Mid-ocean ridge basalt (MORB): 7 to 9
  • Spreading ridge rocks: 9.1 ± 3.6
  • Hotspot rocks: 5 to 42
  • Ocean and terrestrial water: 1
  • Sedimentary formation water: less than 1
  • Thermal spring water: 3 to 11
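
A minimal sketch of how these ranges might be used to screen a measured sample. The ranges are taken directly from the list above; since several overlap, a real interpretation needs geological context:

```python
# Typical R/Ra ranges from the list above (point values treated as ranges).
RESERVOIRS = {
    "old continental crust": (0.0, 1.0),
    "mid-ocean ridge basalt (MORB)": (7.0, 9.0),
    "spreading ridge rocks": (5.5, 12.7),   # 9.1 ± 3.6
    "hotspot rocks": (5.0, 42.0),
    "ocean and terrestrial water": (1.0, 1.0),
    "sedimentary formation water": (0.0, 1.0),
    "thermal spring water": (3.0, 11.0),
}

def consistent_reservoirs(r_over_ra: float) -> list:
    """Reservoirs whose typical R/Ra range contains the measured value."""
    return [name for name, (lo, hi) in RESERVOIRS.items()
            if lo <= r_over_ra <= hi]

# An R/Ra of 8 is consistent with MORB, spreading ridges, hotspots,
# and thermal spring water; hence the need for context.
print(consistent_reservoirs(8.0))
```
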
3He/4He isotope chemistry is being used to date groundwaters, estimate groundwater flow rates, track water pollution, and provide insights into hydrothermal processes, igneous geology and ore genesis.

Uranium-series isotopes

U-series isotopes are unique amongst radiogenic isotopes because, being in the U-series decay chains, they are both radiogenic and radioactive. Because their abundances are normally quoted as activity ratios rather than atomic ratios, they are best considered separately from the other radiogenic isotope systems.

Protactinium/Thorium - 231Pa / 230Th

Uranium is well mixed in the ocean, and its decay produces 231Pa and 230Th at a constant activity ratio (0.093). The decay products are rapidly removed by adsorption onto settling particles, but not at equal rates: 231Pa has a residence time equivalent to the residence time of deep water in the Atlantic basin (around 1000 years), but 230Th is removed more rapidly (within centuries). Thermohaline circulation effectively exports 231Pa from the Atlantic into the Southern Ocean, while most of the 230Th remains in Atlantic sediments. As a result, there is a relationship between 231Pa/230Th in Atlantic sediments and the rate of overturning: faster overturning produces a lower sediment 231Pa/230Th ratio, while slower overturning increases this ratio. The combination of δ13C and 231Pa/230Th can therefore provide a more complete insight into past circulation changes.
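
A minimal interpretive sketch built on the production ratio quoted above (0.093): sediment ratios below production imply that 231Pa was exported by the overturning circulation, while ratios at or above production imply weak overturning. Real reconstructions are more involved, but the sign of the argument is this simple:

```python
PRODUCTION_RATIO = 0.093  # 231Pa/230Th activity ratio at production

def overturning_hint(sediment_ratio: float) -> str:
    """Qualitative reading of an Atlantic sediment 231Pa/230Th activity ratio."""
    if sediment_ratio < PRODUCTION_RATIO:
        return "231Pa exported: consistent with active overturning"
    return "231Pa retained: consistent with weak or collapsed overturning"

print(overturning_hint(0.055))  # hypothetical value for a vigorous circulation
print(overturning_hint(0.093))
```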

Anthropogenic isotopes

Tritium/helium-3

Tritium was released to the atmosphere during atmospheric testing of nuclear bombs. Radioactive decay of tritium produces the noble gas helium-3. Comparing the ratio of tritium to helium-3 (3H/3He) allows estimation of the age of recent ground waters.
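
Because each tritiogenic 3He atom corresponds to one decayed tritium atom, the apparent age follows directly from the accumulated 3He. A minimal sketch using the tritium half-life of about 12.32 years; the concentrations are hypothetical and must share the same units:

```python
import math

TRITIUM_HALF_LIFE = 12.32  # years

def tritium_helium_age(tritium: float, tritiogenic_he3: float) -> float:
    """Apparent groundwater age from the 3H/3He pair (same units for both)."""
    return (TRITIUM_HALF_LIFE / math.log(2)) * math.log(1.0 + tritiogenic_he3 / tritium)

# Hypothetical sample: 5 TU of tritium, 15 TU of tritiogenic helium-3.
print(tritium_helium_age(5.0, 15.0))  # ≈ 24.6 years, i.e. two half-lives
```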

Primordial nuclide

From Wikipedia, the free encyclopedia

Relative abundance of the chemical elements in the Earth's upper continental crust, on a per-atom basis

In geochemistry, geophysics and geonuclear physics, primordial nuclides, also known as primordial isotopes, are nuclides found on Earth that have existed in their current form since before Earth was formed. Primordial nuclides were present in the interstellar medium from which the solar system was formed, and were formed in the Big Bang, by nucleosynthesis in stars and supernovae followed by mass ejection, by cosmic ray spallation, and potentially from other processes. They are the stable nuclides plus the long-lived fraction of radionuclides surviving in the primordial solar nebula through planet accretion until the present. Only 286 such nuclides are known.

All of the known 253 stable nuclides occur as primordial nuclides, plus another 33 nuclides that have half-lives long enough to have survived from the formation of the Earth. These 33 primordial radionuclides represent isotopes of 28 separate elements. Cadmium, tellurium, neodymium, samarium and uranium each have two primordial radioisotopes (113Cd and 116Cd; 128Te and 130Te; 144Nd and 150Nd; 147Sm and 148Sm; 235U and 238U).

Because the age of the Earth is 4.58×10^9 years (4.6 billion years), the half-life of a primordial nuclide must, for practical purposes, be greater than about 1×10^8 years (100 million years). For example, for a nuclide with a half-life of 6×10^7 years (60 million years), roughly 77 half-lives have elapsed, meaning that of each mole (6.02×10^23 atoms) of that nuclide present at the formation of Earth, only about 4 atoms remain today.
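
The survival arithmetic is easy to check. A minimal sketch reproducing the order of magnitude of the figures above:

```python
AGE_OF_EARTH = 4.58e9       # years, as given above
AVOGADRO = 6.02214076e23    # atoms per mole

def atoms_surviving(half_life_years: float, initial_atoms: float = AVOGADRO) -> float:
    """Atoms left today from an initial stock formed with the Earth."""
    return initial_atoms * 2.0 ** (-AGE_OF_EARTH / half_life_years)

print(atoms_surviving(6.0e7))  # ≈ 6 atoms (the text's rounder 77 half-lives gives ≈ 4)
print(atoms_surviving(1.0e8))  # ≈ 1e10 atoms of the ~6e23 we started with
```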

The shortest-lived primordial nuclides (i.e. the nuclides with the shortest half-lives) are 232Th, 238U, 40K, and 235U, listed in order of decreasing half-life. These are the four nuclides with half-lives comparable to, or less than, the estimated age of the universe. (In the case of 232Th, the half-life of more than 14 billion years is slightly longer than the age of the universe.) For a complete list of the 33 known primordial radionuclides, including the next 29 with half-lives much longer than the age of the universe, see the list in the section below. For practical purposes, nuclides with half-lives much longer than the age of the universe may be treated as if they really were stable. 232Th and 238U have half-lives long enough that their decay is limited over geological time scales; 40K and 235U have shorter half-lives and are hence severely depleted, but are still long-lived enough to persist significantly in nature.

The next-longest-lived nuclide after the end of the list given in the table is 244Pu, with a half-life of 8.08×10^7 years. It has been reported to exist in nature as a primordial nuclide, although later studies could not detect it.[1] Likewise, the second-longest-lived non-primordial nuclide, 146Sm, has a half-life of 6.8×10^7 years, about double that of the third-longest-lived non-primordial nuclide, 92Nb (3.5×10^7 years).[2] Taking into account that all these nuclides must have existed for at least 4.6×10^9 years, 244Pu must have survived 57 half-lives (and hence been reduced by a factor of 2^57 ≈ 1.4×10^17), 146Sm must have survived 67 (reduced by 2^67 ≈ 1.5×10^20), and 92Nb must have survived 130 (reduced by 2^130 ≈ 1.4×10^39). Considering the likely initial abundances of these nuclides, possibly measurable quantities of 244Pu and 146Sm should persist today, while 92Nb and all shorter-lived nuclides should not. Nuclides such as 92Nb that were present in the primordial solar nebula but have long since decayed away completely are termed extinct radionuclides if they have no other means of being regenerated.[3]

Although it is estimated that about 33 primordial nuclides are radioactive (list below), it becomes very difficult to determine the exact total number of radioactive primordials, because the total number of stable nuclides is uncertain. There exist many extremely long-lived nuclides whose half-lives are still unknown. For example, it is predicted theoretically that all isotopes of tungsten, including those indicated by even the most modern empirical methods to be stable, must be radioactive and can decay by alpha emission, but as of 2013 this could only be measured experimentally for 180W.[4] Similarly, all four primordial isotopes of lead are expected to decay to mercury, but the predicted half-lives are so long (some exceeding 10^100 years) that this can hardly be observed in the near future. Nevertheless, the number of nuclides with half-lives so long that they cannot be measured with present instruments (and are considered from this viewpoint to be stable nuclides) is limited. Even when a "stable" nuclide is found to be radioactive, the fact merely moves it from the stable to the unstable list of primordial nuclides, and the total number of primordial nuclides remains unchanged.

Because primordial chemical elements often consist of more than one primordial isotope, there are only 83 distinct primordial chemical elements. Of these, 80 have at least one observationally stable isotope and three additional primordial elements have only radioactive isotopes (bismuth, thorium, and uranium).

Naturally occurring nuclides that are not primordial

Some unstable isotopes which occur naturally (such as 14C, 3H, and 239Pu) are not primordial, as they must be constantly regenerated. This occurs by cosmic radiation (in the case of cosmogenic nuclides such as 14C and 3H), or (rarely) by such processes as geonuclear transmutation (neutron capture of uranium in the case of 237Np and 239Pu). Other examples of common naturally occurring but non-primordial nuclides are isotopes of radon, polonium, and radium, which are all radiogenic daughters of uranium decay and are found in uranium ores. A similar radiogenic series is derived from the long-lived radioactive primordial nuclide 232Th. All such nuclides have shorter half-lives than their parent radioactive primordial nuclides. Some other geogenic nuclides do not occur in the decay chains of 232Th, 235U, or 238U but can still fleetingly occur naturally as products of the spontaneous fission of one of these three long-lived nuclides; an example is 126Sn, which makes up about 10^-14 of all natural tin.[5]

Primordial elements

There are 253 stable primordial nuclides and 33 radioactive primordial nuclides, but only 80 primordial stable elements (1 through 82, i.e. hydrogen through lead, excluding elements 43 and 61, technetium and promethium respectively) and three radioactive primordial elements (bismuth, thorium, and uranium). Bismuth's half-life is so long that it is often classed with the 80 stable primordial elements, since its radioactivity is negligible for practical purposes. The number of elements is smaller than the number of nuclides because many primordial elements are represented by more than one primordial nuclide. See chemical element for more information.

Naturally occurring stable nuclides

As noted, these number about 253. For a complete list noting which of the "stable" 253 nuclides may be in some respect unstable, see list of nuclides and stable nuclide. These questions do not impact the question of whether a nuclide is primordial, since all "nearly stable" nuclides, with half-lives longer than the age of the universe, are primordial also.

List of 33 radioactive primordial nuclides and measured half-lives

These 33 primordial nuclides represent radioisotopes of 28 distinct chemical elements (cadmium, neodymium, samarium, tellurium, and uranium each have two primordial radioisotopes). The radionuclides are listed in order of stability, with the longest half-life beginning the list. These radionuclides are in many cases so nearly stable that they compete for abundance with the stable isotopes of their respective elements. For three chemical elements (tellurium, indium, and rhenium), a very long-lived radioactive primordial nuclide is the most abundant nuclide of an element that also has a stable nuclide.

The longest-lived has a half-life of 2.2×10^24 years, which is 160 trillion times the age of the Universe. Only four of these 33 nuclides have half-lives shorter than, or equal to, the age of the universe; most of the remaining 29 have half-lives much longer. The shortest-lived primordial isotope, 235U, has a half-life of 704 million years, about one sixth of the age of the Earth and Solar System.

no    nuclide  energy     half-life (years)  decay mode      decay energy (MeV)    approx. half-life / age of universe
254   128Te    8.743261   2.2×10^24          2β−             2.530                 160 trillion
255   78Kr     9.022349   9.2×10^21          KK              2.846                 670 billion
256   136Xe    8.706805   2.165×10^21        2β−             2.462                 150 billion
257   76Ge     9.034656   1.8×10^21          2β−             2.039                 130 billion
258   130Ba    8.742574   1.2×10^21          KK              2.620                 90 billion
259   82Se     9.017596   1.1×10^20          2β−             2.995                 8 billion
260   116Cd    8.836146   3.102×10^19        2β−             2.809                 2 billion
261   48Ca     8.992452   2.301×10^19        2β−             4.274, 0.0058         2 billion
262   96Zr     8.961359   2.0×10^19          2β−             3.4                   1 billion
263   209Bi    8.158689   1.9×10^19          α               3.137                 1 billion
264   130Te    8.766578   8.806×10^18        2β−             0.868                 600 million
265   150Nd    8.562594   7.905×10^18        2β−             3.367                 600 million
266   100Mo    8.933167   7.804×10^18        2β−             3.035                 600 million
267   151Eu    8.565759   5.004×10^18        α               1.9644                300 million
268   180W     8.347127   1.801×10^18        α               2.509                 100 million
269   50V      9.055759   1.4×10^17          β+ or β−        2.205, 1.038          10 million
270   113Cd    8.859372   7.7×10^15          β−              0.321                 600,000
271   148Sm    8.607423   7.005×10^15        α               1.986                 500,000
272   144Nd    8.652947   2.292×10^15        α               1.905                 200,000
273   186Os    8.302508   2.002×10^15        α               2.823                 100,000
274   174Hf    8.392287   2.002×10^15        α               2.497                 100,000
275   115In    8.849910   4.4×10^14          β−              0.499                 30,000
276   152Gd    8.562868   1.1×10^14          α               2.203                 8,000
277   190Pt    8.267764   6.5×10^11          α               3.252                 60
278   147Sm    8.610593   1.061×10^11        α               2.310                 8
279   138La    8.698320   1.021×10^11        K or β−         1.737, 1.044          7
280   87Rb     9.043718   4.972×10^10        β−              0.283                 4
281   187Re    8.291732   4.122×10^10        β−              0.0026                3
282   176Lu    8.374665   3.764×10^10        β−              1.193                 3
283   232Th    7.918533   1.406×10^10        α or SF         4.083                 1
284   238U     7.872551   4.471×10^9         α or SF or 2β−  4.270                 0.3
285   40K      8.909707   1.25×10^9          β− or K or β+   1.311, 1.505, 1.505   0.09
286   235U     7.897198   7.04×10^8          α or SF         4.679                 0.05

List legends

no (number)
A running positive integer for reference. These numbers may change slightly in the future since there are 163 nuclides now classified as stable, but which are theoretically predicted to be unstable (see Stable nuclide#Still-unobserved decay), so that future experiments may show that some are in fact unstable. The number starts at 254, to follow the 253 nuclides (or stable isotopes) not yet found to be radioactive.
nuclide column
Nuclide identifiers are given by their mass number A and the symbol for the corresponding chemical element (implies a unique proton number).
energy column
The column labeled "energy" denotes the mass of the average nucleon of this nuclide relative to the mass of a neutron (so all nuclides get a positive value), in MeV/c^2; formally: m_n − (m_nuclide / A).
half-life column
All times are given in years.
decay mode column
α     α decay
β−    β− decay
K     electron capture
KK    double electron capture
β+    β+ decay
SF    spontaneous fission
2β−   double β− decay
2β+   double β+ decay
I     isomeric transition
p     proton emission
n     neutron emission
decay energy column
Multiple values for (maximal) decay energy in MeV are mapped to decay modes in their order.
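
The energy column can be reproduced from tabulated masses. A minimal sketch, assuming the table is based on atomic masses in unified atomic mass units (the 232Th mass and physical constants below are standard literature values):

```python
NEUTRON_MASS_MEV = 939.56542  # neutron rest mass, MeV/c^2
U_TO_MEV = 931.49410          # one unified atomic mass unit, MeV/c^2

def energy_column(mass_u: float, mass_number: int) -> float:
    """m_n - (m_nuclide / A): average-nucleon mass relative to a neutron."""
    return NEUTRON_MASS_MEV - (mass_u * U_TO_MEV) / mass_number

# 232Th: atomic mass ≈ 232.0380558 u, A = 232
print(energy_column(232.0380558, 232))  # ≈ 7.9185, matching entry no. 283
```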

Accelerated Living

September 24, 2001 by Ray Kurzweil
Original link:  http://www.kurzweilai.net/accelerated-living-2

In this article written for PC Magazine, Ray Kurzweil explores how advancing technologies will impact our personal lives.

Imagine a Web, circa 2030, that will offer a panoply of virtual environments incorporating all of our senses, and in which there will be no clear distinction between real and simulated people. Consider the day when miniaturized displays on our eyeglasses will provide speech-to-speech translation so we can understand a foreign language in real time, kind of like subtitles on the world. Then, think of a time when noninvasive nanobots will work with the neurotransmitters in our brains to vastly extend our mental capabilities.

These scenarios may seem too futuristic to be plausible by 2030. They require us to consider capabilities never previously encountered, just as people in the nineteenth century had to do when confronted with the concept of the telephone, which was essentially auditory virtual reality. It would be the first time in history people could “be” with another person hundreds of miles away.

When most people think of the future, they underestimate the long-term power of technological advances, and the speed with which they occur. People assume that the current rate of progress will continue, as will the social repercussions that follow. I call this the intuitive linear view.

However, because the rate of change itself is accelerating, the future is more surprising than we anticipate. In fact, serious assessment of history shows that technological change is exponential. In other words, we won’t experience 100 years of progress in the twenty-first century, but rather, we’ll witness on the order of 20,000 years of progress (at today’s rate of progress, that is).

Exponential growth is a feature of any evolutionary process. And we find it in all aspects of technology: miniaturization, communication, genomic scanning, brain scanning, and many other areas. Indeed, we also find double exponential growth, meaning that the rate of exponential growth itself is growing exponentially.

For example, critics of the early genome project suggested that at the rate with which we could scan DNA base pairs, it would take 10,000 years to finish the project. Yet the project was completed ahead of schedule, because DNA scanning technology grew at a double exponential rate. Another example is the Web explosion of the mid-1990s.

Over the past 25 years, I’ve created mathematical models for how technology develops. Predictions that I made using these models in the 1980s about the 1990s and the early years of this decade regarding computing power and its impact (automated medical diagnosis, the use of intelligent weapons, investment programs based on pattern recognition, and others) have been relatively accurate.

These models can provide a clear window into the future and form the foundation on which I build my own scenarios for what life will be like in the next 30 years.

Computing Gets Personal

The broad trend in computing has always moved toward making computers more intimate. The first computers were large, remote machines stored behind glass walls. The PC made computing accessible to everyone. In its next phase, computing will become profoundly personal.

By 2010, computation will be everywhere, yet it will appear to disappear as it becomes embedded in everything from our clothing and eyeglasses to our bodies and brains. And underlying it all will be always-on, very-high-bandwidth connections to the Internet.

Medical diagnosis will routinely use computerized devices that travel in our bodies. And neural implants, which are already used today to counteract tremors from neurological disorders, will be used for a much wider range of conditions, including providing vision to people who have recently lost their sight.

As for interaction with computers, very-high-resolution images will be written directly to our retinas from our eyeglasses and contact lenses. This will spur the next paradigm shift: highly realistic, 3-D, visual-auditory virtual reality. Retinal projection systems will provide full-immersion, virtual environments that can either overlay “real” reality or replace it. People will navigate these environments through manual and verbal commands, as well as with body movement. Visiting a Web site will often mean entering virtual-reality environments, such as forests, beaches, and conference rooms.

In contrast to today’s crude videoconferencing systems, virtual reality in 2010 will look and sound like being together in “real” reality. You’ll be able to establish eye contact, look around your partner, and otherwise have the sense of being together. The sensors and computers in our clothing will track all of our movements and project a 3-D image of ourselves into the virtual world. This will introduce the opportunity to be someone else. The tactile aspect will still be limited, though.

We’ll also interact with simulated people–lifelike avatars that engage in flexible, natural-language dialogs–who will be a primary interface with machine intelligence. We will use them to request information, negotiate e-commerce transactions, and make reservations.

Personal avatars will guide us to desired locations (using GPS) and even augment our visual field of view, via our eyeglass displays, with as much background information as desired.

The virtual personalities won’t pass the Turing test by 2010, though, meaning we won’t be fooled into thinking that they’re really human. But by 2030, it won’t be feasible to differentiate between real and simulated people.

Another technology that will greatly enhance the realism of virtual reality is nanobots: miniature robots the size of blood cells that travel through the capillaries of our brains and communicate with biological neurons. These nanobots might be injected or even swallowed.

Scientists at the Max Planck Institute have already demonstrated electronic-based neuron transistors that can control the movement of a live leech from a computer. They can detect the firing of a nearby neuron, cause it to fire, or suppress a neuron from firing–all of which amounts to two-way communication between neurons and neuron transistors.

Today, our brains are relatively fixed in design. Although we do add patterns of interneuronal connections and neurotransmitter concentrations as a normal part of the learning process, the capacity of the human brain is highly constrained–and restricted to a mere hundred trillion connections. But because the nanobots will communicate with each other–over a wireless LAN–they could create any set of new neural connections, break existing connections (by suppressing neural firing), or create hybrid biological/nonbiological networks.

Using nanobots as brain extenders will be a significant improvement over today’s surgically installed neural implants. And brain implants based on massively distributed intelligent nanobots will ultimately expand our memories by adding trillions of new connections, thereby vastly improving all of our sensory, pattern recognition, and cognitive abilities.

Nanobots will also incorporate all of the senses by taking up positions in close physical proximity to the interneuronal connections coming from all of our sensory inputs (eyes, ears, skin). The nanobots will be programmable through software downloaded from the Web and will be able to change their configurations. They can be directed to leave, so the process is easily reversible.

In addition, these new virtual shared environments could include emotional overlays, since the nanobots will be able to trigger the neurological correlates of emotions, sexual pleasure, and other sensory experiences and reactions.

When we want to experience “real” reality, the nanobots just stay in position (in our capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from the real senses and replace them with signals appropriate for the virtual environment. Our brains could decide to cause our muscles and limbs to move normally, but the nanobots would intercept the inter-neuronal signals to keep our real limbs from moving and instead cause our virtual limbs to move appropriately.

Another scenario enabled by nanobots is the “experience beamer.” By 2030, people will beam their entire flow of sensory experiences and, if desired, their emotions, the same way that people broadcast their lives today using Webcams. We’ll be able to plug into a Web site and experience other people’s lives, the same way characters did in the movie Being John Malkovich. Particularly interesting experiences can be archived and relived at any time.

The ongoing acceleration of computation, communication, and miniaturization, combined with our understanding of the human brain (derived from human-brain reverse engineering), provides the basis for these nanobot-based scenarios.

A Double-Edged Sword

Technology may bring us longer and healthier lives, freedom from physical and mental drudgery, and countless creative possibilities, but it also introduces new and salient dangers. For the 21st century, we will see the same intertwined potentials–only greatly amplified.

Consider unrestrained nanobot replication, which requires billions or trillions of such intelligent devices to be useful. The most cost-effective way to scale nanobots up to that level is through self-replication–essentially the same approach seen in the biological world. But just as biological self-replication can go awry (cancer), a defect in the mechanism that curtails nanobot self-replication could also endanger all physical entities–biological or otherwise.

And who will control the nanobots? Organizations (governments or extremist groups) or just a clever individual could put trillions of undetectable nanobots in the water or food supply of an entire population. These “spy” nanobots could then monitor, influence, and even take over our thoughts and actions. Nanobots could also fall prey to software viruses and hacks.

If we described the dangers that exist today to people who lived a couple hundred years ago, they would think it mad to take such risks. But how many people living in 2001 would want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the human race struggled through? We may romanticize the past, but until fairly recently, most of humanity lived extremely fragile lives. Substantial portions of our species still live in this precarious way, which is at least one reason to continue technological progress and the economic enhancement that accompanies it.

My own expectation is that the creative and constructive applications of this technology will dominate, as I believe they do now. And there will be a valuable (and increasingly vocal) role for a constructive Luddite movement.

When examining the impact of future technology, people often go through three stages: awe and wonderment at the potential; dread about the new set of grave dangers; and finally (hopefully), the realization that the only viable path is to set a careful, responsible course that realizes the promise while managing the peril.

What Is (And Isn't) Scientific About The Multiverse

Author: Ethan Siegel
Astrophysicist and author Ethan Siegel is the founder and primary writer of Starts With A Bang! His books, Treknology and Beyond The Galaxy, are available wherever books are sold.


Artistic impression of a Multiverse — where our Universe is only one of many. According to the research, varying amounts of dark energy have little effect on star formation. This raises the prospect of life in other universes — if the Multiverse exists.

The Universe is all there ever was, all there is, and all there will ever be. At least, that's what we're told, and that's what's implied by the word "Universe" itself. But whatever the true nature of the Universe actually is, our ability to gather information about it is fundamentally limited.

It's only been 13.8 billion years since the Big Bang, and the top speed at which any information can travel — the speed of light — is finite. Even though the entire Universe itself may truly be infinite, the observable Universe is limited. According to the leading ideas of theoretical physics, however, our Universe may be just one minuscule region of a much larger multiverse, within which many Universes, perhaps even an infinite number, are contained. Some of this is actual science, but some is nothing more than speculative, wishful thinking. Here's how to tell which is which. But first, a little background.


There is a large suite of scientific evidence that supports the picture of the expanding Universe and the Big Bang. The entire mass-energy of the Universe was released in an event lasting less than 10^-30 seconds in duration; the most energetic thing ever to occur in our Universe's history. (Image credit: NASA / GSFC)


The Universe today has a few facts about it that are relatively easy, at least with world-class scientific facilities, to observe. We know the Universe is expanding: we can measure properties of galaxies that teach us both their distance and how fast they appear to move away from us. The farther away they are, the faster they appear to recede. In the context of General Relativity, that means the Universe is expanding.

And if the Universe is expanding today, that means it was smaller and denser in the past. Extrapolate back far enough, and you'll find that things are also more uniform (because gravity takes time to make things clump together) and hotter (because smaller wavelengths for light mean higher energies/temperatures). This leads us back to the Big Bang.

An illustration of our cosmic history, from the Big Bang until the present, within the context of the expanding Universe. The first Friedmann equation describes all of these epochs, from inflation to the Big Bang to the present and far into the future, perfectly accurately, even today.

But the Big Bang wasn't the very beginning of the Universe! We can only extrapolate back to a certain epoch in time, beyond which the Big Bang's predictions break down. There are a number of things we observe in the Universe that the Big Bang can't explain, but a new theory that sets up the Big Bang — cosmic inflation — can.


The quantum fluctuations that occur during inflation get stretched across the Universe, and when inflation ends, they become density fluctuations. This leads, over time, to the large-scale structure in the Universe today, as well as the fluctuations in temperature observed in the CMB.

In the 1980s, a large number of theoretical consequences of inflation were worked out, including:
  • what the seeds for large-scale structure should look like,
  • that temperature and density fluctuations should exist on scales larger than the cosmic horizon,
  • that all regions of space, even with fluctuations, should have constant entropy,
  • and that there should be a maximum temperature achieved by the hot Big Bang.
In the 1990s, 2000s and 2010s, these four predictions were observationally confirmed to great precision. Cosmic inflation is a winner.


Inflation causes space to expand exponentially, which can very quickly result in any pre-existing curved or non-smooth space appearing flat. If the Universe is curved, it has a radius of curvature that is at minimum hundreds of times larger than what we can observe.

Inflation tells us that, prior to the Big Bang, the Universe wasn't filled with particles, antiparticles and radiation. Instead, it was filled with energy inherent to space itself, and that energy caused space to expand at a rapid, relentless, and exponential rate. At some point, inflation ends, and all (or almost all) of that energy gets converted into matter and energy, giving rise to the hot Big Bang. The end of inflation, and what's known as the reheating of our Universe, marks the start of the hot Big Bang. The Big Bang still happens, but it isn't the very beginning.


Inflation predicts the existence of a huge volume of unobservable Universe beyond the part we can observe. But it gives us even more than that.

If this were the full story, all we'd have would be one extremely large Universe. It would have the same properties everywhere, the same laws everywhere, and the parts beyond our visible horizon would be similar to where we are, but it couldn't justifiably be called a multiverse.

Until, that is, you remember that everything that physically exists must be inherently quantum in nature. Even inflation, with all the unknowns surrounding it, must be a quantum field.


The quantum nature of inflation means that it ends in some “pockets” of the Universe and continues in others. It needs to roll down the metaphorical hill and into the valley, but if it's a quantum field, the spreading-out means it will end in some regions while continuing in others.

If you then require inflation to have the properties that all quantum fields have:
  • that its properties have uncertainties inherent to them,
  • that the field is described by a wavefunction,
  • and the values of that field can spread out over time,
you reach a surprising conclusion.


Wherever inflation occurs (blue cubes), it gives rise to exponentially more regions of space with each step forward in time. Even if there are many cubes where inflation ends (red Xs), there are far more regions where inflation will continue on into the future. The fact that this never comes to an end is what makes inflation 'eternal' once it begins.

Inflation doesn't end everywhere at once, but rather in select, disconnected locations at any given time, while the space between those locations continues to inflate. There should be multiple, enormous regions of space where inflation ends and a hot Big Bang begins, but they can never encounter one another, as they're separated by regions of inflating space. Wherever inflation begins, it is all but guaranteed to continue for an eternity, at least in places.

Where inflation ends for us, we get a hot Big Bang. The part of the Universe we observe is just one part of this region where inflation ended, with more unobservable Universe beyond that. But there are countlessly many regions, all disconnected from one another, with the same exact story.


An illustration of multiple, independent Universes, causally disconnected from one another in an ever-expanding cosmic ocean, is one depiction of the Multiverse idea. In a region where the Big Bang begins and inflation ends, the expansion rate will drop, while inflation continues in between two such regions, forever separating them.

That's the idea of the multiverse. As you can see, it's based on two independent, well-established, and widely-accepted aspects of theoretical physics: the quantum nature of everything and the properties of cosmic inflation. There's no known way to measure it, just as there's no way to measure the unobservable part of our Universe. But the two theories that underlie it, inflation and quantum physics, have been demonstrated to be valid. If they're right, then the multiverse is an inescapable consequence of that, and we're living in it.


The multiverse idea states that there are an arbitrarily large number of Universes like our own, but that doesn't necessarily mean there's another version of us out there, and it certainly doesn't mean there's any chance of running into an alternate version of yourself... or anything from another Universe at all.

So what? That's not a whole lot, is it? There are plenty of theoretical consequences that are inevitable, but that we cannot know about for certain because we can't test them. The multiverse is one in a long line of those. It's not a particularly useful realization, just an interesting prediction that falls out of these theories.

So why do so many theoretical physicists write papers about the multiverse? About parallel Universes and their connection to our own through this multiverse? Why do they claim that the multiverse is connected to the string landscape, the cosmological constant, and even to the fact that our Universe is finely-tuned for life?

Because even though it's obviously a bad idea, they don't have any better ones.


The string landscape might be a fascinating idea that's full of theoretical potential, but it doesn't predict anything that we can observe in our Universe. This idea of beauty, motivated by solving 'unnatural' problems, is not enough on its own to rise to the level required by science.

In the context of string theory, there is a huge set of parameters that could, in principle, take on almost any value. The theory makes no predictions for them, so we have to put them in by hand: the expectation values of the string vacua. If you've heard of incredibly large numbers like the famed 10^500 that appears in string theory, the possible values of the string vacua are what they refer to. We don't know what those values are, or why they have the values that they do. No one knows how to calculate them.


A representation of the different parallel "worlds" that might exist in other pockets of the multiverse.

So, instead, some people say "it's the multiverse!" The line of thinking goes like this:
  • We don't know why the fundamental constants have the values they do.
  • We don't know why the laws of physics are what they are.
  • String theory is a framework that could give us our laws of physics with our fundamental constants, but it could give us other laws and/or other constants.
  • Therefore, if we have an enormous multiverse, where lots of different regions have different laws and/or constants, one of them could be ours.
The big problem is that not only is this enormously speculative, but there's no reason, given the inflation and quantum physics we know, to presume that an inflating spacetime has different laws or constants in different regions.

Not impressed with this line of reasoning? Neither is practically anyone else.


How likely or unlikely was our Universe to produce a world like Earth? And how plausible would those odds be if the fundamental constants or laws governing our Universe were different? A Fortunate Universe, from whose cover this image was taken, is one such book that explores these issues. (Image credit: Geraint Lewis and Luke Barnes)

As I've explained before, the Multiverse is not a scientific theory on its own. Rather, it's a theoretical consequence of the laws of physics as they're best understood today. It's perhaps even an inevitable consequence of those laws: if you have an inflationary Universe governed by quantum physics, this is something you're pretty much bound to wind up with. But — much like String Theory — it has some big problems: it doesn't predict anything we have observed that can't be explained without it, and it doesn't predict anything definitive we can go and look for.


Visualization of a quantum field theory calculation showing virtual particles in the quantum vacuum. Even in empty space, this vacuum energy is non-zero. Whether it has the same, constant value in other regions of the multiverse is something we cannot know, but there is no motivation for it to be that way.

In this physical Universe, it's important to observe all that we can, and to measure every bit of knowledge we can glean. Only from the full suite of data available can we hope to ever draw valid, scientific conclusions about the nature of our Universe. Some of those conclusions will have implications that we may not be able to measure: the existence of the multiverse arises from that. But when people then contend that they can draw conclusions about fundamental constants, the laws of physics, or the values of string vacua, they're no longer doing science; they're speculating. Wishful thinking is no substitute for data, experiments, or observables. Until we have those, be aware that the multiverse is a consequence of the best science we have available today, but it doesn't make any scientific predictions we can put to the test.

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...