
Monday, March 10, 2025

Self-replicating spacecraft

From Wikipedia, the free encyclopedia

The concept of self-replicating spacecraft, as envisioned by mathematician John von Neumann, has been described by futurists and explored in a wide range of hard science fiction novels and stories. Self-replicating probes are sometimes referred to as von Neumann probes. Self-replicating spacecraft would in some ways mimic or echo the features of living organisms or viruses.

Theory

Von Neumann argued that the most effective way to perform large-scale mining operations, such as stripping an entire moon or asteroid belt, would be with self-replicating spacecraft, taking advantage of their exponential growth. In theory, a self-replicating spacecraft could be sent to a neighboring planetary system, where it would seek out raw materials (extracted from asteroids, moons, gas giants, etc.) to create replicas of itself. These replicas would then be sent out to other planetary systems, while the original "parent" probe pursued its primary purpose within its own star system. That mission varies widely depending on the variant of self-replicating starship proposed.
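The exponential growth von Neumann relied on can be made concrete with a toy calculation (a hypothetical sketch; the replication rate and generation count are assumptions, not figures from the article):

```python
def probes_after(generations, replicas_per_probe=2):
    """Total probes launched after a number of replication generations."""
    total = 1      # the original parent probe
    frontier = 1   # probes built in the most recent generation
    for _ in range(generations):
        frontier *= replicas_per_probe
        total += frontier
    return total

# Even with each probe building only 2 replicas, 37 generations yield
# 2**38 - 1 ~ 2.7e11 probes, more than the ~1e11 star systems usually
# estimated for the Milky Way.
print(probes_after(37))
```

The point of the sketch is only the growth law: the number of probes after g generations is a geometric series, so coverage of an entire galaxy requires remarkably few generations.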

Given this pattern, and its similarity to the reproduction patterns of bacteria, it has been pointed out that von Neumann machines might be considered a form of life. In his short story "Lungfish", David Brin touches on this idea, pointing out that self-replicating machines launched by different species might actually compete with one another (in a Darwinistic fashion) for raw material, or even have conflicting missions. Given enough variety of "species" they might even form a type of ecology, or – should they also have a form of artificial intelligence – a society. They may even mutate with thousands of "generations".

The first quantitative engineering analysis of such a spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory produce many copies of itself there to increase its total manufacturing capacity over a 500-year period, and then use the resulting automated industrial complex to construct more probes with a single seed factory on board each.

It has been theorized that a self-replicating starship utilizing relatively conventional theoretical methods of interstellar travel (i.e., no exotic faster-than-light propulsion, and speeds limited to an "average cruising speed" of 0.1c.) could spread throughout a galaxy the size of the Milky Way in as little as half a million years.
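The half-million-year figure is roughly consistent with a back-of-the-envelope travel-time estimate (round numbers assumed; time spent replicating at each stopover is ignored):

```python
# A probe wavefront expanding at 0.1c covers the Milky Way's roughly
# 50,000 light-year radius in 50,000 / 0.1 = 500,000 years.
GALAXY_RADIUS_LY = 50_000   # rough Milky Way radius in light-years
CRUISE_SPEED_C = 0.1        # "average cruising speed" as a fraction of c

travel_time_years = GALAXY_RADIUS_LY / CRUISE_SPEED_C
print(travel_time_years)  # 500000.0
```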

Debate on Fermi's paradox

In 1981, Frank Tipler put forth an argument that extraterrestrial intelligences do not exist, based on the fact that von Neumann probes have not been observed. Given even a moderate rate of replication and the history of the galaxy, such probes should already be common throughout space, and thus we should have already encountered them. Because we have not, Tipler concluded, extraterrestrial intelligences do not exist. This argument is thus a proposed resolution to the Fermi paradox – the question of why we have not already encountered extraterrestrial intelligence if it is common throughout the universe.

A response came from Carl Sagan and William Newman. Now known as Sagan's Response, it pointed out that in fact Tipler had underestimated the rate of replication, and that von Neumann probes should have already started to consume most of the mass in the galaxy. Any intelligent race would therefore, Sagan and Newman reasoned, not design von Neumann probes in the first place, and would try to destroy any von Neumann probes found as soon as they were detected. As Robert Freitas has pointed out, the assumed capacity of von Neumann probes described by both sides of the debate is unlikely in reality, and more modestly reproducing systems are unlikely to be observable in their effects on our solar system or the galaxy as a whole.

Another objection to the prevalence of von Neumann probes is that civilizations that could potentially create such devices may have a high probability of self-destruction before being capable of producing such machines. This could be through events such as biological or nuclear warfare, nanoterrorism, resource exhaustion, ecological catastrophe, or pandemics. This obstacle to the creation of von Neumann probes is one potential candidate for the concept of a Great Filter.

Simple workarounds exist to avoid the over-replication scenario. Radio transmitters, or other means of wireless communication, could be used by probes programmed not to replicate beyond a certain density (such as five probes per cubic parsec) or beyond an arbitrary total (such as ten million within one century), analogous to the Hayflick limit in cell reproduction. One problem with this defence against uncontrolled replication is that a single malfunctioning probe beginning unrestricted reproduction would cause the entire approach to fail – essentially a technological cancer – unless each probe could also detect such malfunctions in its neighbours and implement a seek-and-destroy protocol. That in turn could lead to probe-on-probe space wars if faulty probes multiplied to high numbers before sound ones found them, since the sound probes might then be programmed to replicate to matching numbers in order to manage the infestation. Another workaround exploits the need for spacecraft heating during long interstellar travel: using plutonium as a thermal source would limit the ability to self-replicate, since the spacecraft would carry no programming to make more plutonium even if it found the required raw materials. A further option is to program the spacecraft with a clear understanding of the dangers of uncontrolled replication.
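The density-capped replication rule described above can be sketched as a toy model (all numbers are invented for illustration; a real scheme would cap spatial density, not just a global count):

```python
def replicate(generations, rate=2, cap=10_000_000):
    """Population under a hard replication cap (Hayflick-style limit)."""
    population = 1
    for _ in range(generations):
        if population >= cap:
            break  # limit reached: probes stop building replicas
        population = min(population * rate, cap)
    return population

# Unchecked doubling for 100 generations would exceed 1e30 probes;
# with the cap, the population stalls at the programmed limit.
print(replicate(100))
```

The fragility the article points out is visible here too: the cap lives in each probe's own program, so one probe that loses the `if population >= cap` check reverts to pure exponential growth.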

Applications for self-replicating spacecraft

The details of the mission of self-replicating starships can vary widely from proposal to proposal, and the only common trait is the self-replicating nature.

Von Neumann probes

A von Neumann probe is a spacecraft capable of replicating itself. It is a concatenation of two concepts: a Von Neumann universal constructor (self-replicating machine) and a probe (an instrument to explore or examine something). The concept is named after Hungarian-American mathematician and physicist John von Neumann, who rigorously studied the concept of self-replicating machines that he called "Universal Assemblers" and which are often referred to as "von Neumann machines". Such constructs can be theorized to comprise five basic components (variations of this template could create other machines such as Bracewell probes):

  • Probe: which would contain the actual probing instruments & goal-directed AI to guide the construct.
  • Life-support systems: mechanisms to repair and maintain the construct.
  • Factory: mechanisms to harvest resources & replicate itself.
  • Memory banks: store programs for all its components & information gained by the probe.
  • Engine: motor to move the probe.

Andreas M. Hein and science fiction author Stephen Baxter proposed different types of von Neumann probes, termed "Philosopher" and "Founder": the purpose of the former is exploration, and that of the latter is preparing for future settlement.

A near-term concept of a self-replicating probe, achieving about 70% self-replication based on current and near-term technologies, has been proposed by the Initiative for Interstellar Studies.[8]

If a self-replicating probe finds evidence of primitive life (or a primitive, low-level culture) it might be programmed to lie dormant, silently observe, attempt to make contact (this variant is known as a Bracewell probe), or even interfere with or guide the evolution of life in some way.

Physicist Paul Davies of University of Adelaide has "raised the possibility of a probe resting on our own Moon", having arrived at some point in Earth's ancient prehistory and remained to monitor Earth, a concept that, per Michio Kaku, was what Stanley Kubrick used as the basis of his film, 2001: A Space Odyssey (though the director cut the relevant monolith scene from the movie). Kubrick's work was based on Arthur C. Clarke's story, "The Sentinel", expanded by the pair into a novel that became the basis for the movie, and so Davies' lunar probe/observatory concept is also considered reminiscent of Clarke's.

A variant idea on the interstellar von Neumann probe idea is that of the "Astrochicken", proposed by Freeman Dyson. While it has the common traits of self-replication, exploration, and communication with its "home base", Dyson conceived the Astrochicken to explore and operate within our own planetary system, and not explore interstellar space.

Anders Sandberg and Stuart Armstrong argued that launching the colonization of the entire reachable universe through self-replicating probes is well within the capabilities of a star-spanning civilization, and proposed a theoretical approach for achieving it in 32 years, by mining planet Mercury for resources and constructing a Dyson Swarm around the Sun.

Berserkers

A variant of the self-replicating starship is the Berserker. Unlike the benign probe concept, Berserkers are programmed to seek out and exterminate lifeforms and life-bearing exoplanets whenever they are encountered.

The name is derived from the Berserker series of novels by Fred Saberhagen which describes a war between humanity and such machines. Saberhagen points out (through one of his characters) that the Berserker warships in his novels are not von Neumann machines themselves, but the larger complex of Berserker machines – including automated shipyards – do constitute a von Neumann machine. This again brings up the concept of an ecology of von Neumann machines, or even a von Neumann hive entity.

It is speculated in fiction that Berserkers could be created and launched by a xenophobic civilization (see Anvil of Stars, by Greg Bear, in the section In fiction below) or could theoretically "mutate" from a more benign probe. For instance, a von Neumann ship designed for terraforming processes – mining a planet's surface and adjusting its atmosphere to more human-friendly conditions – could be interpreted as attacking previously inhabited planets, killing their inhabitants in the process of changing the planetary environment, and then self-replicating to dispatch more ships to "attack" other planets.

Replicating seeder ships

Yet another variant on the idea of the self-replicating starship is that of the seeder ship. Such starships might store the genetic patterns of lifeforms from their home world, perhaps even of the species which created them. Upon finding a habitable exoplanet, or even one that might be terraformed, such a ship would try to recreate these lifeforms – either from stored embryos or from stored information, using molecular nanotechnology to build zygotes with varying genetic information from local raw materials.

Such ships might be terraforming vessels, preparing colony worlds for later colonization by other vessels, or – should they be programmed to recreate, raise, and educate individuals of the species that created them – self-replicating colonizers themselves. Seeder ships would be a suitable alternative to generation ships as a way to colonize worlds too distant to reach in one lifetime.

In fiction

Von Neumann probes

  • 2001: A Space Odyssey: The monoliths in Arthur C. Clarke's book and Stanley Kubrick's film 2001: A Space Odyssey were intended to be self-replicating probes, though the artifacts in "The Sentinel", Clarke's original short story upon which 2001 was based, were not. The film was to begin with a series of scientists explaining how probes like these would be the most efficient method of exploring outer space. Kubrick cut the opening segment from his film at the last minute, however, and these monoliths became almost mystical entities in both the film and Clarke's novel.
  • Cold As Ice: In the novel by Charles Sheffield, there is a segment where the author (a physicist) describes Von Neumann machines harvesting sulfur, nitrogen, phosphorus, helium-4, and various metals from the atmosphere of Jupiter.
  • Destiny's Road: Larry Niven frequently refers to Von Neumann probes in many of his works. In his 1998 book Destiny's Road, Von Neumann machines are scattered throughout the human colony world Destiny and its moon Quicksilver in order to build and maintain technology and to make up for the lack of the resident humans' technical knowledge; the Von Neumann machines primarily construct a stretchable fabric cloth capable of acting as a solar collector which serves as the humans' primary energy source. The Von Neumann machines also engage in ecological maintenance and other exploratory work.
  • The Devil's Blind Spot: See also Alexander Kluge, The Devil's Blind Spot (New Directions; 2004.)
  • Grey Goo: In the video game Grey Goo, the "Goo" faction is composed entirely of Von Neumann probes sent through various microscopic wormholes to map the Milky Way Galaxy. The faction's units are configurations of nanites used during their original mission of exploration, which have adapted to a combat role. The Goo starts as an antagonist to the Human and Beta factions, but their true objective is revealed during their portion of the single-player campaign. Related to, and inspired by, the Grey Goo doomsday scenario.
  • Spin: In the novel by Robert Charles Wilson, Earth is veiled by a temporal field. Humanity tries to understand and escape this field by using Von Neumann probes. It is later revealed that the field itself was generated by Von Neumann probes from another civilization, and that a competition for resources had taken place between Earth's probes and the aliens'.
  • The Third Millennium: A History of the World AD 2000–3000: In the book by Brian Stableford and David Langford (published by Alfred A. Knopf, Inc., 1985) humanity sends cycle-limited Von Neumann probes out to the nearest stars to do open-ended exploration and to announce humanity's existence to whoever might encounter them.
  • Von Neumann's War: In Von Neumann's War by John Ringo and Travis S. Taylor (published by Baen Books in 2007) Von Neumann probes arrive in the solar system, moving in from the outer planets, and converting all metals into gigantic structures. Eventually, they arrive on Earth, wiping out much of the population before being beaten back when humanity reverse engineers some of the probes.
  • We Are Legion (We Are Bob) by Dennis E. Taylor: Bob Johansson, the former owner of a software company, dies in a car accident, only to wake up a hundred years later as a computer emulation of Bob. Given a Von Neumann probe by America's religious government, he is sent out to explore, exploit, expand, and experiment for the good of the human race.
  • ARMA 3: In the "First Contact" single-player campaign introduced in the Contact expansion, a series of extraterrestrial network structures are found in various locations on Earth, one being the fictional country of Livonia, the campaign's setting. In the credits of the campaign, a radio broadcast reveals that a popular theory surrounding the networks is that they are a type of Von Neumann probe that arrived on Earth during the time of a supercontinent.
  • Questionable Content: In Jeph Jacques' webcomic, Faye Whitaker refers to the "Floating Black Slab Emitting A Low Hum" as a possible Von Neumann probe in Episode 4645: Accessorized.
  • In the third act of the incremental game Universal Paperclips, after all of Earth's matter has been converted into paperclips, players are tasked with sending Von Neumann probes into the universe to find and consume all matter in service of making paperclips, eventually entering a war with another class of probes called "drifters" that are created as a result of random mutations.
  • In the game Satisfactory, developed by Coffee Stain Studios, the player arrives on a distant alien planet and is tasked with constructing another spaceship. The player is guided by an artificial intelligence which provides the instructions for creating the spaceship (specifically, which resources are required). When complete, it leaves, presumably to repeat the process on another planet. This is not explicitly explained by the game, but lore suggests the player is simply a clone created by the previous iteration of the process, which has been going on for a long, long time.

Berserkers

  • In the science fiction short story collection Berserker by Fred Saberhagen, a series of short stories include accounts of battles fought against extremely destructive Berserker machines. This and subsequent books set in the same fictional universe are the origin of the term "Berserker probe".
  • In the 2003 miniseries reboot of Battlestar Galactica (and the subsequent 2004 series) the Cylons are similar to Berserkers in their wish to destroy human life. They were created by humans in a group of fictional planets called the Twelve Colonies. The Cylons created special models that look like humans in order to destroy the twelve colonies and later, the fleeing fleet of surviving humans.
  • The Borg of Star Trek – a self-replicating bio-mechanical race that is dedicated to the task of achieving perfection through the assimilation of useful technology and lifeforms. Their ships are massive mechanical cubes (a close step from the Berserker's massive mechanical Spheres).
  • Science fiction author Larry Niven later borrowed this notion in his short story "A Teardrop Falls".
  • In the computer game Star Control II, the Slylandro Probe is an out-of-control self-replicating probe that attacks starships of other races. They were not originally intended to be a berserker probe; they sought out intelligent life for peaceful contact, but due to a programming error, they would immediately switch to "resource extraction" mode and attempt to dismantle the target ship for raw materials. While the plot claims that the probes reproduce "at a geometric rate", the game itself caps the frequency of encountering these probes. It is possible to deal with the menace in a side-quest, but this is not necessary to complete the game, as the probes only appear one at a time, and the player's ship will eventually be fast and powerful enough to outrun them or destroy them for resources – although the probes will eventually dominate the entire game universe.
  • In Iain Banks' novel Excession, hegemonising swarms are described as a form of Outside Context Problem. An example of an "Aggressive Hegemonising Swarm Object" is given as an uncontrolled self-replicating probe with the goal of turning all matter into copies of itself. After causing great damage, they are somehow transformed using unspecified techniques by the Zetetic Elench and become "Evangelical Hegemonising Swarm Objects". Such swarms (referred to as "smatter") reappear in the later novels Surface Detail (which features scenes of space combat against the swarms) and The Hydrogen Sonata.
  • The Inhibitors from Alastair Reynolds' Revelation Space series are self-replicating machines whose purpose is to inhibit the development of intelligent star-faring cultures. They are dormant for extreme periods of time until they detect the presence of a space-faring culture and proceed to exterminate it even to the point of sterilizing entire planets. They are very difficult to destroy as they seem to have faced every type of weapon ever devised and only need a short time to 'remember' the necessary counter-measures.
  • Also from Alastair Reynolds' books, the "Greenfly" terraforming machines are another form of berserker machine. For unknown reasons, probably an error in their programming, they destroy planets and turn them into trillions of domes filled with vegetation – their purpose is, after all, to produce a habitable environment for humans, yet in doing so they inadvertently decimate the human race. By the year 10,000, they have wiped out most of the galaxy.
  • The Reapers in the video game series Mass Effect are also self-replicating probes bent on destroying any advanced civilization encountered in the galaxy. They lie dormant in the vast spaces between the galaxies and follow a cycle of extermination. It is seen in Mass Effect 2 that they assimilate any advanced species.
  • Mantrid Drones from the science fiction television series Lexx were an extremely aggressive type of self-replicating Berserker machine, eventually converting the majority of the matter in the universe into copies of themselves in the course of their quest to thoroughly exterminate humanity.
  • The Babylon 5 episode "Infection" showed a smaller-scale berserker in the form of the Icarran War Machine. After being created with the goal of defeating an unspecified enemy faction, the War Machines proceeded to exterminate all life on the planet Icarra VII because they had been programmed with standards for what constituted a "Pure Icarran" based on religious teachings, which no actual Icarran could satisfy. Because the Icarrans were pre-starflight, the War Machines became dormant after completing their task rather than spreading. One unit was reactivated on board Babylon 5 after being smuggled past quarantine by an unscrupulous archaeologist, but after being confronted with how the machines had rendered Icarra VII a dead world, the simulated personality of the War Machine committed suicide.
  • The Babylon 5 episode "A Day in the Strife" features a probe that threatens the station with destruction unless a series of questions designed to test a civilization's level of advancement are answered correctly. The commander of the station correctly surmises that the probe is actually a berserker and that if the questions are answered the probe would identify them as a threat to its originating civilization and detonate.
  • Greg Bear's novel The Forge of God deals directly with the concept of "Berserker" von Neumann probes and their consequences. The idea is further explored in the novel's sequel, Anvil of Stars, which explores the reaction other civilizations have to the creation and release of Berserkers.
  • In Gregory Benford's Galactic Center Saga series, an antagonist berserker machine race is encountered by Earth, first as a probe in In the Ocean of Night, and then in an attack in Across the Sea of Suns. The berserker machines do not seek to completely eradicate a race if merely throwing it into a primitive low technological state will do as they did to the EMs encountered in Across the Sea of Suns. The alien machine Watchers would not be considered von Neumann machines themselves, but the collective machine race could.
  • On Stargate SG-1 the Replicators were a vicious race of insect-like robots that were originally created by an android named Reese to serve as toys. They grew beyond her control and began evolving, eventually spreading throughout at least two galaxies. In addition to ordinary autonomous evolution they were able to analyze and incorporate new technologies they encountered into themselves, ultimately making them one of the most advanced "races" known.
  • On Stargate Atlantis, a second race of replicators created by the Ancients were encountered in the Pegasus Galaxy. They were created as a means to defeat the Wraith. The Ancients attempted to destroy them after they began showing signs of sentience and requested that their drive to kill the Wraith be removed. This failed, and an unspecified length of time after the Ancients retreated to the Milky Way Galaxy, the replicators nearly succeeded in destroying the Wraith. The Wraith were able to hack into the replicators and deactivate the extermination drive, at which point they retreated to their home world and were not heard from again until encountered by the Atlantis Expedition. After the Atlantis Expedition reactivated this dormant directive, the replicators embarked on a plan to kill the Wraith by removing their food source, i.e. all humans in the Pegasus Galaxy.
  • In Stargate Universe Season 2, a galaxy billions of light years distant from the Milky Way is infested with drone ships that are programmed to annihilate intelligent life and advanced technology. The drone ships attack other space ships (including Destiny) as well as humans on planetary surfaces, but don't bother destroying primitive technology such as buildings unless they are harboring intelligent life or advanced technology.
  • In the Justice League Unlimited episode "Dark Heart", an alien weapon based on this same idea lands on Earth.
  • In the Homeworld: Cataclysm video game, a bio-mechanical virus called Beast has the ability to alter organic and mechanic material to suit its needs, and the ships infected become self-replicating hubs for the virus.
  • In the SF MMO EVE Online, experiments to create drones more autonomous than the ones used by players' ships accidentally created 'rogue drones', which form hives in certain parts of space and are used extensively in missions as difficult opponents.
  • In the computer game Sword of the Stars, the player may randomly encounter "Von Neumann". A Von Neumann mothership appears along with smaller Von Neumann probes, which attack and consume the player's ships, then return the consumed material to the mothership. If probes are destroyed, the mothership creates new ones. If all the player's ships are destroyed, the Von Neumann probes reduce the planet's resource levels before leaving. The probes appear as blue octahedrons with small spheres attached to the apical points; the mothership is a larger version of the probes. In the 2008 expansion A Murder of Crows, Kerberos Productions also introduces the VN Berserker, a combat-oriented ship which attacks player planets and ships in retaliation for violence against VN Motherships. If the player destroys the Berserker, things escalate and a System Destroyer attacks.
  • In the X Computer Game Series, the Xenon are a malevolent race of artificially intelligent machines descended from terraforming ships sent out by humans to prepare worlds for eventual colonization, the result of a bugged software update. They are continual antagonists in the X-Universe.
  • In the comic Transmetropolitan a character mentions "Von Neumann rectal infestations" which are apparently caused by "Shit-ticks that build more shit-ticks that build more shit-ticks".
  • In the anime Vandread, harvester ships attack vessels from both male- and female-dominated factions and harvest hull, reactors, and computer components to make more of themselves. To this end, Harvester ships are built around mobile factories. Earth-born humans also view the inhabitants of the various colonies to be little more than spare parts.
  • In Earth 2160, the Morphidian Aliens rely on Mantain strain aliens for colonization. Most Mantain-derived aliens can absorb water, then reproduce like a colony of cells. In this manner, even one Mantain Lady (or Princess, or Queen) can create enough clones to cover the map. Once they have significant numbers, they "choose an evolutionary path" and swarm the enemy, taking over their resources.
  • In the European comic series Storm, numbers 20 & 21, a kind of berserk von Neumann probe is set on a collision course with the Pandarve system.
  • In PC role-playing game Space Rangers and its sequel Space Rangers 2: Dominators, a league of 5 nations battles three different types of Berserker robots. One that focuses on invading planets, another that battles normal space and third that lives in hyperspace.
  • In the Star Wolves video game series, Berserkers are a self-replicating machine menace that threatens the known universe for purposes of destruction and/or assimilation of humanity.
  • The Star Wars expanded universe features the World Devastators, large ships designed and built by the Galactic Empire that tear apart planets to use their materials to build other ships, or even to upgrade or replicate themselves.
  • The Tet in the 2013 film Oblivion is revealed to be a Berserker of sorts: a sentient machine that travels from planet to planet, exterminating the indigenous population using armies of robotic drones and cloned members of the target species. The Tet then proceeds to harvest the planet's water in order to extract hydrogen for nuclear fusion.
  • In Eclipse Phase, an ETI probe is believed to have infected the TITAN computer systems with the Exsurgent virus to cause them to go berserk and wage war on humanity. This would make ETI probes a form of berserker, albeit one that uses pre-existing computer systems as its key weapons.
  • In Herr aller Dinge by Andreas Eschbach, an ancient nanomachine complex is discovered buried in a glacier off the coast of Russia. When it comes into contact with the materials it needs to fulfill its mission, it creates a launch facility and launches a spacecraft. It is later revealed that the nanomachines were created by a prehistoric human race with the intention of destroying other interstellar civilizations (for an unknown reason). It is proposed that no evidence of this race remains because of the nanomachines themselves and their ability to manipulate matter at an atomic level. It is even suggested that viruses could be ancient nanomachines that have evolved over time.
  • In Dead Space, the Brother Moons could be considered berserkers.

Replicating seeder ships

  • Code of the Lifemaker by James P. Hogan describes the evolution of a society of humanoid-like robots who inhabit Saturn's moon Titan. The sentient machines are descended from an uncrewed factory ship that was to be self-replicating, but suffered radiation damage and went off course, eventually landing on Titan around 1,000,000 BC.
  • Manifold: Space, Stephen Baxter's novel, starts with the discovery of alien self-replicating machines active within the Solar system.
  • In the Metroid Prime subseries of games, the massive Leviathans are probes routinely sent out from the planet Phaaze to infect other planets with Phazon radiation and eventually turn these planets into clones of Phaaze, where the self-replication process can continue.
  • In David Brin's short story collection, The River of Time (1986), the short story "Lungfish" prominently features von Neumann probes. Not only does he explore the concept of the probes themselves, but indirectly explores the ideas of competition between different designs of probes, evolution of von Neumann probes in the face of such competition, and the development of a type of ecology between von Neumann probes. One of the vessels mentioned is clearly a Seeder type.
  • In The Songs of Distant Earth by Arthur C. Clarke, humanity on a future Earth facing imminent destruction creates automated seedships that act as fire and forget lifeboats aimed at distant, habitable worlds. Upon landing, the ship begins to create new humans from stored genetic information, and an onboard computer system raises and trains the first few generations of new inhabitants. The massive ships are then broken down and used as building materials by their "children".
  • On the Stargate Atlantis episode "Remnants", the Atlantis team finds an ancient probe that they later learn was launched by a now-extinct, technologically advanced race in order to seed new worlds and re-propagate their silicon-based species. The probe communicated with inhabitants of Atlantis by means of hallucinations.
  • On the Stargate SG-1 episode "Scorched Earth", a species of newly relocated humanoids face extinction via an automated terraforming colony seeder ship controlled by an Artificial Intelligence.
  • On Stargate Universe, the human adventurers live on a ship called Destiny. Its mission was to connect a network of Stargates, placed by preceding seeder ships on planets capable of supporting life to allow instantaneous travel between them.
  • The trilogy of albums concluding the comic book series Storm by Don Lawrence (starting with Chronicles of Pandarve 11: The Von Neumann machine) is based on self-replicating conscious machines containing the sum of all human knowledge, employed to rebuild human society throughout the universe in case of disaster on Earth. The probe malfunctions, and although new probes are built, they do not separate from the mother probe, eventually resulting in a cluster of malfunctioning probes so large that it can absorb entire moons.
  • In the Xeno series, a rogue seeder ship (technically a berserker) known as "Deus" created humanity.

Fourier-transform infrared spectroscopy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Fourier-transform_infrared_spectroscopy

Fourier transform infrared spectroscopy (FTIR) is a technique used to obtain an infrared spectrum of absorption or emission of a solid, liquid, or gas. An FTIR spectrometer simultaneously collects high-resolution spectral data over a wide spectral range. This confers a significant advantage over a dispersive spectrometer, which measures intensity over a narrow range of wavelengths at a time.

The term Fourier transform infrared spectroscopy originates from the fact that a Fourier transform (a mathematical process) is required to convert the raw data into the actual spectrum.

An example of an FTIR spectrometer with an attenuated total reflectance (ATR) attachment

Conceptual introduction

An FTIR interferogram. The central peak is at the ZPD position ("zero path difference" or zero retardation), where the maximal amount of light passes through the interferometer to the detector.

The goal of absorption spectroscopy techniques (FTIR, ultraviolet-visible ("UV-vis") spectroscopy, etc.) is to measure how much light a sample absorbs at each wavelength. The most straightforward way to do this, the "dispersive spectroscopy" technique, is to shine a monochromatic light beam at a sample, measure how much of the light is absorbed, and repeat for each different wavelength. (This is how some UV–vis spectrometers work, for example.)

Fourier transform spectroscopy is a less intuitive way to obtain the same information. Rather than shining a monochromatic beam of light (a beam composed of only a single wavelength) at the sample, this technique shines a beam containing many frequencies of light at once and measures how much of that beam is absorbed by the sample. Next, the beam is modified to contain a different combination of frequencies, giving a second data point. This process is rapidly repeated many times over a short time span. Afterwards, a computer takes all this data and works backward to infer what the absorption is at each wavelength.

The beam described above is generated by starting with a broadband light source—one containing the full spectrum of wavelengths to be measured. The light shines into a Michelson interferometer—a certain configuration of mirrors, one of which is moved by a motor. As this mirror moves, each wavelength of light in the beam is periodically blocked, transmitted, blocked, transmitted, by the interferometer, due to wave interference. Different wavelengths are modulated at different rates, so that at each moment or mirror position the beam coming out of the interferometer has a different spectrum.

As mentioned, computer processing is required to turn the raw data (light absorption for each mirror position) into the desired result (light absorption for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform. The Fourier transform converts one domain (in this case displacement of the mirror in cm) into its inverse domain (wavenumbers in cm−1). The raw data is called an "interferogram".
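As a sketch of this inversion, the toy example below (all values invented for illustration) builds an interferogram for a source containing two spectral lines and recovers their wavenumbers with NumPy's FFT, converting the mirror-displacement domain (cm) into the wavenumber domain (cm−1):

```python
import numpy as np

# Mirror displacement axis (OPD) in cm, sampled at equal intervals.
n_points = 4096
max_opd_cm = 1.0                       # hypothetical maximum optical path difference
x = np.linspace(0, max_opd_cm, n_points, endpoint=False)

# Each spectral line contributes a cosine whose frequency equals its wavenumber.
nu1, nu2 = 500.0, 1200.0               # made-up wavenumbers in cm^-1
interferogram = np.cos(2 * np.pi * nu1 * x) + 0.5 * np.cos(2 * np.pi * nu2 * x)

# The FFT converts displacement (cm) into its inverse domain, wavenumber (cm^-1).
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n_points, d=x[1] - x[0])   # cm^-1 axis

# The two strongest spectral points sit at the input wavenumbers.
peaks = wavenumbers[np.argsort(spectrum)[-2:]]
print(sorted(peaks))                   # the two line positions, 500 and 1200 cm^-1
```

The same `np.fft.rfft` call stands in for the fast Fourier transform used by real instruments; a measured interferogram would of course replace the synthetic cosines.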

History

The first low-cost spectrophotometer capable of recording an infrared spectrum was the Perkin-Elmer Infracord, produced in 1957. This instrument covered the wavelength range from 2.5 μm to 15 μm (wavenumber range 4,000 cm−1 to 660 cm−1). The lower wavelength limit was chosen to encompass the highest known vibration frequency due to a fundamental molecular vibration. The upper limit was imposed by the fact that the dispersing element was a prism made from a single crystal of rock-salt (sodium chloride), which becomes opaque at wavelengths longer than about 15 μm; this spectral region became known as the rock-salt region. Later instruments used potassium bromide prisms to extend the range to 25 μm (400 cm−1) and caesium iodide prisms to extend it to 50 μm (200 cm−1).

The region beyond 50 μm (200 cm−1) became known as the far-infrared region; at very long wavelengths it merges into the microwave region. Measurements in the far infrared needed the development of accurately ruled diffraction gratings to replace the prisms as dispersing elements, since salt crystals are opaque in this region. More sensitive detectors than the bolometer were required because of the low energy of the radiation; one such was the Golay detector. An additional issue is the need to exclude atmospheric water vapour, which has an intense pure rotational spectrum in this region. Far-infrared spectrophotometers were cumbersome, slow and expensive.

The advantages of the Michelson interferometer were well known, but considerable technical difficulties had to be overcome before a commercial instrument could be built. An electronic computer was also needed to perform the required Fourier transform, and this only became practicable with the advent of minicomputers, such as the PDP-8, which became available in 1965. Digilab pioneered the world's first commercial FTIR spectrometer (Model FTS-14) in 1969. Digilab FTIRs are now part of Agilent Technologies's molecular product line, after Agilent acquired the spectroscopy business of Varian.

Michelson interferometer

Schematic diagram of a Michelson interferometer, configured for FTIR

In a Michelson interferometer adapted for FTIR, light from the polychromatic infrared source, approximately a black-body radiator, is collimated and directed to a beam splitter. Ideally 50% of the light is reflected towards the fixed mirror and 50% is transmitted towards the moving mirror. Light is reflected from the two mirrors back to the beam splitter and some fraction of the original light passes into the sample compartment. There, the light is focused on the sample. On leaving the sample compartment the light is refocused on to the detector. The difference in optical path length between the two arms of the interferometer is known as the retardation or optical path difference (OPD). An interferogram is obtained by varying the retardation and recording the signal from the detector for various values of the retardation. The form of the interferogram when no sample is present depends on factors such as the variation of source intensity and splitter efficiency with wavelength. This results in a maximum at zero retardation, when there is constructive interference at all wavelengths, followed by a series of "wiggles". The position of zero retardation is determined accurately by finding the point of maximum intensity in the interferogram. When a sample is present the background interferogram is modulated by the presence of absorption bands in the sample.

Commercial spectrometers use Michelson interferometers with a variety of scanning mechanisms to generate the path difference. Common to all these arrangements is the need to ensure that the two beams recombine exactly as the system scans. The simplest systems have a plane mirror that moves linearly to vary the path of one beam. In this arrangement the moving mirror must not tilt or wobble as this would affect how the beams overlap as they recombine. Some systems incorporate a compensating mechanism that automatically adjusts the orientation of one mirror to maintain the alignment. Arrangements that avoid this problem include using cube corner reflectors instead of plane mirrors as these have the property of returning any incident beam in a parallel direction regardless of orientation.

Interferometer schematics where the path difference is generated by a rotary motion.

Systems where the path difference is generated by a rotary movement have proved very successful. One common system incorporates a pair of parallel mirrors in one beam that can be rotated to vary the path without displacing the returning beam. Another is the double pendulum design where the path in one arm of the interferometer increases as the path in the other decreases.

A quite different approach involves moving a wedge of an IR-transparent material such as KBr into one of the beams. Increasing the thickness of KBr in the beam increases the optical path because the refractive index is higher than that of air. One limitation of this approach is that the variation of refractive index over the wavelength range limits the accuracy of the wavelength calibration.

Measuring and processing the interferogram

The interferogram has to be measured from zero path difference to a maximum length that depends on the resolution required. In practice the scan can be on either side of zero resulting in a double-sided interferogram. Mechanical design limitations may mean that for the highest resolution the scan runs to the maximum OPD on one side of zero only.

The interferogram is converted to a spectrum by Fourier transformation. This requires it to be stored in digital form as a series of values at equal intervals of the path difference between the two beams. To measure the path difference a laser beam is sent through the interferometer, generating a sinusoidal signal where the separation between successive maxima is equal to the wavelength of the laser (typically a 633 nm HeNe laser is used). This can trigger an analog-to-digital converter to measure the IR signal each time the laser signal passes through zero. Alternatively, the laser and IR signals can be measured synchronously at smaller intervals with the IR signal at points corresponding to the laser signal zero crossing being determined by interpolation. This approach allows the use of analog-to-digital converters that are more accurate and precise than converters that can be triggered, resulting in lower noise.

Values of the interferogram at times corresponding to zero crossings of the laser signal are found by interpolation.

The result of Fourier transformation is a spectrum of the signal at a series of discrete wavelengths. The range of wavelengths that can be used in the calculation is limited by the separation of the data points in the interferogram. The shortest wavelength that can be recognized is twice the separation between these data points. For example, with one point per wavelength of a HeNe reference laser at 0.633 μm (15800 cm−1) the shortest wavelength would be 1.266 μm (7900 cm−1). Because of aliasing, any energy at shorter wavelengths would be interpreted as coming from longer wavelengths and so has to be minimized optically or electronically. The spectral resolution, i.e. the separation between wavelengths that can be distinguished, is determined by the maximum OPD. The wavelengths used in calculating the Fourier transform are such that an exact number of wavelengths fit into the length of the interferogram from zero to the maximum OPD as this makes their contributions orthogonal. This results in a spectrum with points separated by equal frequency intervals.

For a maximum path difference d adjacent wavelengths λ1 and λ2 will have n and (n+1) cycles, respectively, in the interferogram. The corresponding frequencies are ν1 and ν2:

d = nλ1 and d = (n+1)λ2
λ1 = d/n and λ2 = d/(n+1)
ν1 = 1/λ1 and ν2 = 1/λ2
ν1 = n/d and ν2 = (n+1)/d
ν2 − ν1 = 1/d

The separation is the inverse of the maximum OPD. For example, a maximum OPD of 2 cm results in a separation of 0.5 cm−1. This is the spectral resolution in the sense that the value at one point is independent of the values at adjacent points. Most instruments can be operated at different resolutions by choosing different OPDs. Instruments for routine analyses typically have a best resolution of around 0.5 cm−1, while spectrometers have been built with resolutions as high as 0.001 cm−1, corresponding to a maximum OPD of 10 m.

The point in the interferogram corresponding to zero path difference has to be identified, commonly by assuming it is where the maximum signal occurs. This so-called centerburst is not always symmetrical in real-world spectrometers, so a phase correction may have to be calculated. The interferogram signal decays as the path difference increases, the rate of decay being inversely related to the width of features in the spectrum. If the OPD is not large enough to allow the interferogram signal to decay to a negligible level, there will be unwanted oscillations or sidelobes associated with the features in the resulting spectrum. To reduce these sidelobes the interferogram is usually multiplied by a function that approaches zero at the maximum OPD. This so-called apodization reduces the amplitude of any sidelobes and also the noise level, at the expense of some reduction in resolution.
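The resolution and sampling relations above can be checked with a few lines of arithmetic, using the worked values from the text (2 cm maximum OPD; one sample per 0.633 μm HeNe wavelength):

```python
# Spectral resolution is the inverse of the maximum OPD.
d = 2.0                          # maximum OPD in cm
resolution = 1.0 / d             # spectral resolution in cm^-1
print(resolution)                # 0.5 cm^-1

# Sampling one point per HeNe laser wavelength sets the Nyquist limit:
# the shortest recognizable wavelength is twice the sampling separation.
lam_laser_um = 0.633
shortest_um = 2 * lam_laser_um   # 1.266 μm
print(shortest_um, 1e4 / shortest_um)   # 1.266 μm, about 7900 cm^-1
```

Energy at wavelengths shorter than this limit would be aliased into the spectrum, which is why it has to be removed optically or electronically.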

For rapid calculation the number of points in the interferogram has to equal a power of two. A string of zeroes may be added to the measured interferogram to achieve this. More zeroes may be added in a process called zero filling to improve the appearance of the final spectrum although there is no improvement in resolution. Alternatively, interpolation after the Fourier transform gives a similar result.
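A minimal sketch of zero filling (synthetic data, arbitrary sizes): the measured interferogram is padded to the next power of two for the radix-2 FFT, and further padding produces a denser, smoother-looking spectrum without adding resolution.

```python
import numpy as np

# Toy "measured" interferogram of 1000 points.
interferogram = np.random.default_rng(0).normal(size=1000)

def next_pow2(n):
    # Smallest power of two >= n.
    return 1 << (n - 1).bit_length()

n_fft = next_pow2(len(interferogram))                 # 1024
padded = np.pad(interferogram, (0, n_fft - len(interferogram)))
spectrum = np.fft.rfft(padded)

# Extra zero filling (4x the FFT length) interpolates the spectrum more finely.
dense = np.fft.rfft(np.pad(interferogram, (0, 4 * n_fft - len(interferogram))))
print(len(spectrum), len(dense))                      # 513 and 2049 points
```

The denser spectrum contains no new information; its extra points are interpolates of the original ones, which is why interpolation after the transform gives a similar result.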

Advantages

There are three principal advantages for an FT spectrometer compared to a scanning (dispersive) spectrometer.[1]

  1. The multiplex or Fellgett's advantage (named after Peter Fellgett). This arises from the fact that information from all wavelengths is collected simultaneously. It results in a higher signal-to-noise ratio for a given scan-time for observations limited by a fixed detector noise contribution (typically in the thermal infrared spectral region where a photodetector is limited by generation-recombination noise). For a spectrum with m resolution elements, this increase is equal to the square root of m. Alternatively, it allows a shorter scan-time for a given resolution. In practice multiple scans are often averaged, increasing the signal-to-noise ratio by the square root of the number of scans.
  2. The throughput or Jacquinot's advantage (named after Pierre Jacquinot). This results from the fact that in a dispersive instrument, the monochromator has entrance and exit slits which restrict the amount of light that passes through it. The interferometer throughput is determined only by the diameter of the collimated beam coming from the source. Although no slits are needed, FTIR spectrometers do require an aperture to restrict the convergence of the collimated beam in the interferometer. This is because convergent rays are modulated at different frequencies as the path difference is varied. Such an aperture is called a Jacquinot stop. For a given resolution and wavelength this circular aperture allows more light through than a slit, resulting in a higher signal-to-noise ratio.
  3. The wavelength accuracy or Connes' advantage (named after Janine Connes). The wavelength scale is calibrated by a laser beam of known wavelength that passes through the interferometer. This is much more stable and accurate than in dispersive instruments where the scale depends on the mechanical movement of diffraction gratings. In practice, the accuracy is limited by the divergence of the beam in the interferometer which depends on the resolution.

Another minor advantage is less sensitivity to stray light, that is radiation of one wavelength appearing at another wavelength in the spectrum. In dispersive instruments, this is the result of imperfections in the diffraction gratings and accidental reflections. In FT instruments there is no direct equivalent as the apparent wavelength is determined by the modulation frequency in the interferometer.

Resolution

The interferogram lives in the length domain. The Fourier transform inverts the dimension, so the transform of the interferogram lives in the reciprocal-length domain ([L−1]), that is, the wavenumber domain. The spectral resolution in cm−1 is equal to the reciprocal of the maximal retardation in cm. Thus a 4 cm−1 resolution will be obtained if the maximal retardation is 0.25 cm; this is typical of the cheaper FTIR instruments. Much higher resolution can be obtained by increasing the maximal retardation. This is not easy, as the moving mirror must travel in a near-perfect straight line. The use of corner-cube mirrors in place of flat mirrors helps, as an outgoing ray from a corner-cube mirror is parallel to the incoming ray regardless of the orientation of the mirror about axes perpendicular to the axis of the light beam.

A spectrometer with 0.001 cm−1 resolution is now available commercially. The throughput advantage is important for high-resolution FTIR, as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits.

In 1966 Janine Connes measured the temperature of the atmosphere of Venus by recording the vibration–rotation spectrum of Venusian CO2 at 0.1 cm−1 resolution. Michelson himself had attempted to resolve the hydrogen Hα emission line into its two components using his interferometer.

Motivation

FTIR is a method of measuring infrared absorption and emission spectra. For a discussion of why people measure infrared absorption and emission spectra, i.e. why and how substances absorb and emit infrared light, see the article: Infrared spectroscopy.

Components

FTIR setup. The sample is placed right before the detector.

IR sources

FTIR spectrometers are mostly used for measurements in the mid and near IR regions. For the mid-IR region, 2−25 μm (5,000–400 cm−1), the most common source is a silicon carbide (SiC) element heated to about 1,200 K (930 °C; 1,700 °F) (Globar). The output is similar to a blackbody. Shorter wavelengths of the near-IR, 1−2.5 μm (10,000–4,000 cm−1), require a higher temperature source, typically a tungsten-halogen lamp. The long wavelength output of these is limited to about 5 μm (2,000 cm−1) by the absorption of the quartz envelope. For the far-IR, especially at wavelengths beyond 50 μm (200 cm−1) a mercury discharge lamp gives higher output than a thermal source.

Detectors

Far-IR spectrometers commonly use pyroelectric detectors that respond to changes in temperature as the intensity of IR radiation falling on them varies. The sensitive elements in these detectors are either deuterated triglycine sulfate (DTGS) or lithium tantalate (LiTaO3). These detectors operate at ambient temperatures and provide adequate sensitivity for most routine applications. To achieve the best sensitivity the time for a scan is typically a few seconds. Cooled photoelectric detectors are employed for situations requiring higher sensitivity or faster response. Liquid nitrogen cooled mercury cadmium telluride (MCT) detectors are the most widely used in the mid-IR. With these detectors an interferogram can be measured in as little as 10 milliseconds. Uncooled indium gallium arsenide photodiodes or DTGS are the usual choices in near-IR systems. Very sensitive liquid-helium-cooled silicon or germanium bolometers are used in the far-IR where both sources and beamsplitters are inefficient.

Beam splitter

Simple interferometer with a beam-splitter and compensator plate

An ideal beam-splitter transmits and reflects 50% of the incident radiation. However, as any material has a limited range of optical transmittance, several beam-splitters may be used interchangeably to cover a wide spectral range.

In a simple Michelson interferometer, one beam passes twice through the beamsplitter but the other passes through only once. To correct for this, an additional compensator plate of equal thickness is incorporated.

For the mid-IR region, the beamsplitter is usually made of KBr with a germanium-based coating that makes it semi-reflective. KBr absorbs strongly at wavelengths beyond 25 μm (400 cm−1), so CsI or KRS-5 are sometimes used to extend the range to about 50 μm (200 cm−1). ZnSe is an alternative where moisture vapour can be a problem, but is limited to about 20 μm (500 cm−1).

CaF2 is the usual material for the near-IR, being both harder and less sensitive to moisture than KBr, but cannot be used beyond about 8 μm (1,200 cm−1).

Far-IR beamsplitters are mostly based on polymer films, and cover a limited wavelength range.

Attenuated total reflectance

Attenuated total reflectance (ATR) is an accessory for FTIR spectrophotometers that measures the surface properties of solid or thin-film samples rather than their bulk properties. Generally, ATR has a penetration depth of around 1 or 2 micrometres, depending on sample conditions.

Fourier transform

The interferogram in practice consists of a set of intensities measured for discrete values of retardation. The difference between successive retardation values is constant. Thus, a discrete Fourier transform is needed. The fast Fourier transform (FFT) algorithm is used.

Spectral range

Far-infrared

The first FTIR spectrometers were developed for the far-infrared range. The reason for this has to do with the mechanical tolerance needed for good optical performance, which is related to the wavelength of the light being used. For the relatively long wavelengths of the far infrared, tolerances of ~10 μm are adequate, whereas for the rock-salt region tolerances have to be better than 1 μm. A typical instrument was the cube interferometer developed at the NPL and marketed by Grubb Parsons. It used a stepper motor to drive the moving mirror, recording the detector response after each step was completed.

Mid-infrared

With the advent of cheap microcomputers it became possible to have a computer dedicated to controlling the spectrometer, collecting the data, doing the Fourier transform and presenting the spectrum. This provided the impetus for the development of FTIR spectrometers for the rock-salt region. The problems of manufacturing ultra-high precision optical and mechanical components had to be solved. A wide range of instruments are now available commercially. Although instrument design has become more sophisticated, the basic principles remain the same. Nowadays, the moving mirror of the interferometer moves at a constant velocity, and sampling of the interferogram is triggered by finding zero-crossings in the fringes of a secondary interferometer lit by a helium–neon laser. In modern FTIR systems the constant mirror velocity is not strictly required, as long as the laser fringes and the original interferogram are recorded simultaneously with higher sampling rate and then re-interpolated on a constant grid, as pioneered by James W. Brault. This confers very high wavenumber accuracy on the resulting infrared spectrum and avoids wavenumber calibration errors.

Near-infrared

The near-infrared region spans the wavelength range between the rock-salt region and the start of the visible region at about 750 nm. Overtones of fundamental vibrations can be observed in this region. It is used mainly in industrial applications such as process control and chemical imaging.

Applications

FTIR can be used in all applications where a dispersive spectrometer was used in the past. In addition, the improved sensitivity and speed have opened up new areas of application: spectra can be measured in situations where very little energy reaches the detector. Fourier transform infrared spectroscopy is used in geology, chemistry, materials science, botany and biology research fields.

Nano and biological materials

FTIR is also used to investigate various nanomaterials and proteins in hydrophobic membrane environments. Studies show the ability of FTIR to directly determine the polarity at a given site along the backbone of a transmembrane protein. The bond features involved with various organic and inorganic nanomaterials and their quantitative analysis can be done with the help of FTIR.

Microscopy and imaging

An infrared microscope allows samples to be observed and spectra measured from regions as small as 5 microns across. Images can be generated by combining a microscope with linear or 2-D array detectors. The spatial resolution can approach 5 microns with tens of thousands of pixels. The images contain a spectrum for each pixel and can be viewed as maps showing the intensity at any wavelength or combination of wavelengths. This allows the distribution of different chemical species within the sample to be seen. This technique has been applied in various biological applications including the analysis of tissue sections as an alternative to conventional histopathology, examining the homogeneity of pharmaceutical tablets, and for differentiating morphologically-similar pollen grains.

Nanoscale and spectroscopy below the diffraction limit

The spatial resolution of FTIR can be improved below the micrometre scale by integrating it into a scanning near-field optical microscopy platform. The corresponding technique, called nano-FTIR, allows broadband spectroscopy to be performed on materials in ultra-small quantities (single viruses and protein complexes) with 10 to 20 nm spatial resolution.

FTIR as detector in chromatography

The speed of FTIR allows spectra to be obtained from compounds as they are separated by a gas chromatograph. However this technique is little used compared to GC-MS (gas chromatography-mass spectrometry) which is more sensitive. The GC-IR method is particularly useful for identifying isomers, which by their nature have identical masses. Liquid chromatography fractions are more difficult because of the solvent present. One notable exception is to measure chain branching as a function of molecular size in polyethylene using gel permeation chromatography, which is possible using chlorinated solvents that have no absorption in the area in question.

TG-IR (thermogravimetric analysis-infrared spectrometry)

Measuring the gas evolved as a material is heated allows qualitative identification of the species to complement the purely quantitative information provided by measuring the weight loss.

Water content determination in plastics and composites

FTIR analysis is used to determine the water content in fairly thin plastic and composite parts, most commonly in the laboratory. Such FTIR methods have long been used for plastics, and were extended to composite materials in 2018, when the method was introduced by Krauklis, Gagani and Echtermeyer. The method uses the maximum of the absorbance band at about 5,200 cm−1, which correlates with the true water content in the material.

Near-infrared spectroscopy

From Wikipedia, the free encyclopedia
Near-IR absorption spectrum of dichloromethane showing complicated overlapping overtones of mid IR absorption features.

Near-infrared spectroscopy (NIRS) is a spectroscopic method that uses the near-infrared region of the electromagnetic spectrum (from 780 nm to 2500 nm). Typical applications include medical and physiological diagnostics and research, including blood sugar, pulse oximetry, functional neuroimaging, sports medicine, elite sports training, ergonomics, rehabilitation, neonatal research, brain–computer interfaces, urology (bladder contraction), and neurology (neurovascular coupling). There are also applications in other areas, such as pharmaceutical, food and agrochemical quality control, atmospheric chemistry, and combustion propagation.

Theory

Near-infrared spectroscopy is based on molecular overtone and combination vibrations. Overtones and combinations exhibit lower intensity than the fundamental; as a result, the molar absorptivity in the near-IR region is typically quite small. (NIR absorption bands are typically 10–100 times weaker than the corresponding fundamental mid-IR absorption band.) The lower absorption allows NIR radiation to penetrate much further into a sample than mid-infrared radiation. Near-infrared spectroscopy is, therefore, not a particularly sensitive technique, but it can be very useful in probing bulk material with little to no sample preparation.

The molecular overtone and combination bands seen in the near-IR are typically very broad, leading to complex spectra; it can be difficult to assign specific features to specific chemical components. Multivariate (multiple variables) calibration techniques (e.g., principal components analysis, partial least squares, or artificial neural networks) are often employed to extract the desired chemical information. Careful development of a set of calibration samples and application of multivariate calibration techniques is essential for near-infrared analytical methods.

History

Near-infrared spectrum of liquid ethanol.

The discovery of near-infrared energy is ascribed to William Herschel in the 19th century, but the first industrial application began in the 1950s. In the first applications, NIRS was used only as an add-on unit to other optical devices that used other wavelengths such as ultraviolet (UV), visible (Vis), or mid-infrared (MIR) spectrometers. In the 1980s, a single-unit, stand-alone NIRS system was made available.

In the 1980s, Karl Norris (while working at the USDA Instrumentation Research Laboratory, Beltsville, USA) pioneered the use of NIR spectroscopy for quality assessments of agricultural products. Since then, use has expanded from food and agriculture to the chemical, polymer, and petroleum industries; the pharmaceutical industry; biomedical sciences; and environmental analysis.

With the introduction of fiber optics in the mid-1980s and monochromator-detector developments in the early 1990s, NIRS became a more powerful tool for scientific research. The method has been used in a number of fields of science, including physics, physiology, and medicine. It is only in the last few decades that NIRS began to be used as a medical tool for monitoring patients, with the first clinical application of so-called fNIRS in 1994.

Instrumentation

Instrumentation for near-IR (NIR) spectroscopy is similar to instruments for the UV-visible and mid-IR ranges. There is a source, a detector, and a dispersive element (such as a prism, or, more commonly, a diffraction grating) to allow the intensity at different wavelengths to be recorded. Fourier transform NIR instruments using an interferometer are also common, especially for wavelengths above ~1000 nm. Depending on the sample, the spectrum can be measured in either reflection or transmission.

Common incandescent or quartz halogen light bulbs are most often used as broadband sources of near-infrared radiation for analytical applications. Light-emitting diodes (LEDs) can also be used. For high precision spectroscopy, wavelength-scanned lasers and frequency combs have recently become powerful sources, albeit with sometimes longer acquisition timescales. When lasers are used, a single detector without any dispersive elements might be sufficient.

The type of detector used depends primarily on the range of wavelengths to be measured. Silicon-based CCDs are suitable for the shorter end of the NIR range, but are not sufficiently sensitive over most of the range (over 1000 nm). InGaAs and PbS devices are more suitable and have higher quantum efficiency for wavelengths above 1100 nm. It is possible to combine silicon-based and InGaAs detectors in the same instrument. Such instruments can record both UV-visible and NIR spectra 'simultaneously'.

Instruments intended for chemical imaging in the NIR may use a 2D array detector with an acousto-optic tunable filter. Multiple images may be recorded sequentially at different narrow wavelength bands.

Many commercial instruments for UV/vis spectroscopy are capable of recording spectra in the NIR range (to perhaps ~900 nm). In the same way, the range of some mid-IR instruments may extend into the NIR. In these instruments, the detector used for the NIR wavelengths is often the same detector used for the instrument's "main" range of interest.

NIRS as an analytical technique

The use of NIR as an analytical technique did not come from extending the use of mid-IR into the near-IR range, but developed independently. A striking sign of this is that, while mid-IR spectroscopists use wavenumbers (cm−1) when displaying spectra, NIR spectroscopists use wavelength (nm), as in ultraviolet–visible spectroscopy. Early practitioners of IR spectroscopy, who depended on assigning absorption bands to specific bond types, were frustrated by the complexity of the region being measured. As a quantitative tool, however, the lower molar absorptivities in this region tended to keep absorption maxima "on-scale", enabling quantitative work with little sample preparation. The techniques needed to extract quantitative information from these complex spectra were unfamiliar to analytical chemists, and the technique was viewed with suspicion in academia.

Generally, quantitative NIR analysis is accomplished by selecting a group of calibration samples, for which the concentration of the analyte of interest has been determined by a reference method, and finding a correlation between various spectral features and those concentrations using a chemometric tool. The calibration is then validated by using it to predict the analyte values for samples in a validation set, whose values have been determined by the reference method but which were not included in the calibration. The validated calibration is then used to predict the values of unknown samples. The complexity of the spectra is overcome by multivariate calibration; the two tools most often used are multi-wavelength linear regression and partial least squares.
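As a sketch of this workflow, the fragment below fits a multi-wavelength linear regression (one of the two chemometric tools named above) on synthetic spectra and checks it against a validation set. The spectra, wavelength picks, and noise level are all illustrative assumptions, not real calibration data.

```python
import numpy as np

# Synthetic stand-in for real calibration data: each "spectrum" is the
# analyte's absorption band scaled by concentration, plus noise.
rng = np.random.default_rng(0)
n_cal, n_val, n_wl = 40, 20, 200
band = np.exp(-np.linspace(-3.0, 3.0, n_wl) ** 2)   # one broad NIR band

conc = rng.uniform(0.0, 10.0, n_cal + n_val)        # "reference method" values
spectra = conc[:, None] * band + rng.normal(scale=0.02, size=(n_cal + n_val, n_wl))

# Multi-wavelength linear regression: conc ~ b0 + sum_i b_i * A(lambda_i)
picks = [80, 100, 120]                              # hypothetical wavelength indices

def design(block):
    """Build the regression matrix from the absorbances at the picked wavelengths."""
    return np.column_stack([np.ones(len(block))] + [block[:, p] for p in picks])

coef, *_ = np.linalg.lstsq(design(spectra[:n_cal]), conc[:n_cal], rcond=None)

# Validate on samples that were not included in the calibration
pred = design(spectra[n_cal:]) @ coef
rmsep = np.sqrt(np.mean((pred - conc[n_cal:]) ** 2))
print(f"RMSEP on validation set: {rmsep:.4f}")
```

In practice partial least squares would use all wavelengths at once rather than a hand-picked subset, but the calibrate-then-validate structure is the same.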

Applications

Typical applications of NIR spectroscopy include the analysis of food products, pharmaceuticals, and combustion products; it also forms a major branch of astronomical spectroscopy.

Astronomical spectroscopy

Near-infrared spectroscopy is used in astronomy for studying the atmospheres of cool stars where molecules can form. The vibrational and rotational signatures of molecules such as titanium oxide, cyanide, and carbon monoxide can be seen in this wavelength range and can give a clue towards the star's spectral type. It is also used for studying molecules in other astronomical contexts, such as in molecular clouds where new stars are formed. The astronomical phenomenon known as reddening means that near-infrared wavelengths are less affected by dust in the interstellar medium, such that regions inaccessible by optical spectroscopy can be studied in the near-infrared. Since dust and gas are strongly associated, these dusty regions are exactly those where infrared spectroscopy is most useful. The near-infrared spectra of very young stars provide important information about their ages and masses, which is important for understanding star formation in general. Astronomical spectrographs have also been developed for the detection of exoplanets by measuring the Doppler shift of the parent star induced by the orbiting planet (the radial-velocity method).

Agriculture

Near-infrared spectroscopy is widely applied in agriculture for determining the quality of forages, grains, and grain products, oilseeds, coffee, tea, spices, fruits, vegetables, sugarcane, beverages, fats, and oils, dairy products, eggs, meat, and other agricultural products. It is widely used to quantify the composition of agricultural products because it meets the criteria of being accurate, reliable, rapid, non-destructive, and inexpensive. Abeni and Bergoglio (2001), for example, applied NIRS in chicken breeding as the assay method for characteristics of fat composition.

Remote monitoring

Techniques have been developed for NIR spectroscopic imaging. Hyperspectral imaging has been applied for a wide range of uses, including the remote investigation of plants and soils. Data can be collected from instruments on airplanes, satellites or unmanned aerial systems to assess ground cover and soil chemistry.

Remote monitoring or remote sensing in the NIR spectroscopic region can also be used to study the atmosphere. For example, measurements of atmospheric gases are made from NIR spectra recorded by the OCO-2 and GOSAT satellites and by the ground-based TCCON network.

Materials science

Techniques have been developed for NIR spectroscopy of microscopic sample areas for film-thickness measurements, research into the optical characteristics of nanoparticles, and optical coatings for the telecommunications industry.

Medical uses

The application of NIRS in medicine centres on its ability to provide information about the oxygen saturation of haemoglobin within the microcirculation. Broadly speaking, it can be used to assess oxygenation and microvascular function in the brain (cerebral NIRS) or in the peripheral tissues (peripheral NIRS).

Cerebral NIRS

When a specific area of the brain is activated, the localized blood volume in that area changes quickly. Optical imaging can measure the location and activity of specific regions of the brain by continuously monitoring blood hemoglobin levels through the determination of optical absorption coefficients.

Infrascanner 1000, a NIRS scanner used to detect intracranial bleeding.

NIRS can be used as a quick screening tool for possible intracranial bleeding by placing the scanner on four locations on the head. In non-injured patients the brain absorbs the NIR light evenly. When there is internal bleeding from an injury, the blood may be concentrated in one location, causing the NIR light to be absorbed more there than at the other locations, which the scanner detects.
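The comparison behind this screening can be sketched as follows: detected intensity at mirrored left/right scalp positions is converted to an optical-density difference, and a large asymmetry flags possible bleeding. The intensity values and the decision threshold here are illustrative assumptions, not a clinical specification.

```python
import math

# Minimal sketch of symmetric-site haematoma screening (assumed values):
# a haematoma absorbs extra light on one side, raising the optical-density
# difference between the two sides of the head.
def od_difference(intensity_left, intensity_right):
    """Absolute optical-density difference between two symmetric sites."""
    return abs(math.log10(intensity_left / intensity_right))

THRESHOLD = 0.2   # assumed decision threshold for flagging asymmetry

symmetric = od_difference(1.00, 0.95)    # near-even absorption: not flagged
asymmetric = od_difference(1.00, 0.40)   # one side absorbs far more: flagged
print(symmetric < THRESHOLD, asymmetric > THRESHOLD)
```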

So-called functional NIRS can be used for non-invasive assessment of brain function through the intact skull in human subjects by detecting changes in blood hemoglobin concentrations associated with neural activity, e.g., in branches of cognitive psychology as a partial replacement for fMRI techniques. NIRS can be used on infants, and it is much more portable than fMRI machines; even wireless instrumentation is available, enabling investigations of freely moving subjects. However, NIRS cannot fully replace fMRI because it can only be used to scan cortical tissue, whereas fMRI can be used to measure activation throughout the brain. Special public-domain statistical toolboxes for analysis of stand-alone and combined NIRS/MRI measurements have been developed.

Example of data acquisition using fNIRS (Hitachi ETG-4000)

The application in functional mapping of the human cortex is called functional NIRS (fNIRS) or diffuse optical tomography (DOT). The term diffuse optical tomography is used for three-dimensional NIRS. The terms NIRS, NIRI, and DOT are often used interchangeably, but they have some distinctions. The most important difference between NIRS and DOT/NIRI is that DOT/NIRI is used mainly to detect changes in optical properties of tissue simultaneously from multiple measurement points and display the results in the form of a map or image over a specific area, whereas NIRS provides quantitative data in absolute terms on up to a few specific points. The latter is also used to investigate other tissues such as, e.g., muscle, breast and tumors. NIRS can be used to quantify blood flow, blood volume, oxygen consumption, reoxygenation rates and muscle recovery time in muscle.

By employing several wavelengths and time-resolved (frequency- or time-domain) and/or spatially resolved methods, blood flow, volume, and absolute tissue saturation (StO2, or Tissue Saturation Index (TSI)) can be quantified. Applications of oximetry by NIRS methods include neuroscience, ergonomics, rehabilitation, brain-computer interfaces, urology, the detection of illnesses that affect the blood circulation (e.g., peripheral vascular disease), the detection and assessment of breast tumors, and the optimization of training in sports medicine.
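The multi-wavelength quantification described here rests on the modified Beer–Lambert law: optical-density changes at two or more wavelengths are inverted to recover oxy- and deoxyhaemoglobin concentration changes. In the sketch below, the extinction coefficients, source-detector separation, and differential pathlength factor are illustrative assumptions, not calibrated instrument constants.

```python
import numpy as np

# Assumed molar extinction coefficients [cm^-1 per (mol/L)]
# rows: wavelength; columns: (HbO2, Hb)
E = np.array([[586.0, 1548.0],    # ~760 nm: deoxyhaemoglobin absorbs more
              [1058.0, 691.0]])   # ~850 nm: oxyhaemoglobin absorbs more

d = 3.0        # source-detector separation, cm (assumed)
dpf = 6.0      # differential pathlength factor (assumed)

def hb_changes(delta_od):
    """Solve delta_OD = E @ delta_c * d * dpf for (dHbO2, dHb)."""
    return np.linalg.solve(E * d * dpf, delta_od)

# Example: optical density rises more at 850 nm than at 760 nm,
# consistent with an increase in oxygenated haemoglobin.
d_hbo2, d_hb = hb_changes(np.array([0.010, 0.025]))
print(d_hbo2, d_hb)
```

Real instruments refine this with time- or frequency-domain measurements to estimate the pathlength rather than assuming a fixed DPF.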

The use of NIRS in conjunction with a bolus injection of indocyanine green (ICG) has been used to measure cerebral blood flow and cerebral metabolic rate of oxygen consumption (CMRO2). It has also been shown that CMRO2 can be calculated with combined NIRS/MRI measurements. Additionally, metabolism can be interrogated by resolving an additional mitochondrial chromophore, cytochrome-c-oxidase, using broadband NIRS.

NIRS is starting to be used in pediatric critical care to help manage patients following cardiac surgery. NIRS is able to measure venous oxygen saturation (SvO2), which is determined by the cardiac output as well as other parameters (FiO2, hemoglobin, oxygen uptake); the NIRS reading therefore provides critical care physicians with an estimate of cardiac output. NIRS is favoured by patients because it is non-invasive, painless, and does not require ionizing radiation.

Optical coherence tomography (OCT) is another NIR medical imaging technique capable of 3D imaging with high resolution on par with low-power microscopy. Using optical coherence to measure photon pathlength allows OCT to build images of live tissue and clear examinations of tissue morphology. Due to technique differences OCT is limited to imaging 1–2 mm below tissue surfaces, but despite this limitation OCT has become an established medical imaging technique especially for imaging of the retina and anterior segments of the eye, as well as coronaries.

A type of neurofeedback, hemoencephalography or HEG, uses NIR technology to measure brain activation, primarily of the frontal lobes, for the purpose of training cerebral activation of that region.

The instrumental development of NIRS/NIRI/DOT/OCT has advanced tremendously in recent years, particularly in terms of quantification, imaging, and miniaturization.

Peripheral NIRS

Peripheral microvascular function can be assessed using NIRS. The oxygen saturation of haemoglobin in the tissue (StO2) can provide information about tissue perfusion. A vascular occlusion test (VOT) can be employed to assess microvascular function. Common sites for peripheral NIRS monitoring include the thenar eminence, forearm and calf muscles.

Particle measurement

NIR is often used in particle sizing in a range of different fields, including studying pharmaceutical and agricultural powders.

Industrial uses

As opposed to NIRS used in optical topography, general NIRS used in chemical assays does not provide imaging by mapping. For example, a clinical carbon dioxide analyzer requires reference techniques and calibration routines to obtain an accurate measure of the change in CO2 content. In this case, calibration is performed by adjusting the zero control while purposefully supplying 0% CO2, or another known amount of CO2, to the sample. Normal compressed gas from distributors contains about 95% O2 and 5% CO2, which can also be used to adjust the %CO2 meter reading to exactly 5% at initial calibration.
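The zero/span adjustment just described amounts to a two-point linear calibration: the analyzer's raw reading at 0% CO2 and at a known span gas (here 5% CO2) defines a linear correction applied to subsequent readings. The raw-signal numbers below are made up for illustration.

```python
# Two-point (zero/span) calibration sketch with assumed raw signals.
def make_calibration(raw_zero, raw_span, span_value=5.0):
    """Return a function mapping raw analyzer readings to %CO2."""
    gain = span_value / (raw_span - raw_zero)
    return lambda raw: (raw - raw_zero) * gain

# Hypothetical raw signals recorded while supplying 0% and 5% CO2:
calibrate = make_calibration(raw_zero=0.12, raw_span=2.62, span_value=5.0)

print(calibrate(2.62))   # span gas now reads 5.0 %CO2
print(calibrate(0.12))   # zero gas now reads 0.0 %CO2
```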
