Sunday, February 22, 2026

Fermi paradox

From Wikipedia, the free encyclopedia
Enrico Fermi (Los Alamos, 1945)

The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence.

The paradox is named after physicist Enrico Fermi, who informally posed the question—remembered by Emil Konopinski as "But where is everybody?"—during a 1950 conversation at Los Alamos with colleagues Konopinski, Edward Teller, and Herbert York. The paradox first appeared in print in a 1963 paper by Carl Sagan and has since been characterized more fully by other scientists. Early formulations of the paradox have also been identified in writings by Bernard Le Bovier de Fontenelle (1686) and Jules Verne (1865), and by Soviet rocket scientist Konstantin Tsiolkovsky (1933).

There have been many attempts to resolve the Fermi paradox, such as suggesting that intelligent extraterrestrial beings are extremely rare, that the lifetime of such civilizations is short, or that they exist but (for various reasons) humans see no evidence.

Chain of reasoning

Some of the facts and hypotheses that together serve to highlight the apparent contradiction:

  • There are billions of stars in the Milky Way similar to the Sun.
  • With high probability, some of these stars have Earth-like planets orbiting in the habitable zone.
  • Many of these stars, and hence their planets, are much older than the Sun. If Earth-like planets are typical, some may have developed intelligent life long ago.
  • Some of these civilizations may have developed interstellar travel, a step that humans are investigating.
  • Even at the slow pace of envisioned interstellar travel, the Milky Way galaxy could be completely traversed in a few million years.
  • Since many of the Sun-like stars are billions of years older than the Sun, the Earth should have already been visited by extraterrestrial civilizations, or at least their probes.
  • However, there is no convincing evidence that this has happened.

History

Los Alamos conversation

Enrico Fermi posed the paradox to fellow physicists Emil Konopinski, Edward Teller, and Herbert York at Los Alamos in 1950.

Enrico Fermi was a Nobel Prize-winning physicist who predicted the existence of neutrinos and helped create the first artificial nuclear reactor, an early feat of the Manhattan Project. He was known to pose simple but seemingly unanswerable questions—termed "Fermi questions"—to his colleagues and students, like "How many atoms of Caesar's last breath do you inhale with each lungful of air?"

In 1950, Fermi visited Los Alamos National Laboratory in New Mexico and, while walking to the Fuller Lodge for lunch, conversed with fellow physicists Emil Konopinski, Edward Teller, and Herbert York about reports of flying saucers and the feasibility of faster-than-light travel. When the conversation shifted to unrelated topics at the lodge, Fermi blurted a question variously recalled as: "Where is everybody?" (Teller), "Don't you ever wonder where everybody is?" (York), or "But where is everybody?" (Konopinski). According to Teller, "The result of his question was general laughter because of the strange fact that, in spite of Fermi's question coming out of the blue, everybody around the table seemed to understand at once that he was talking about extraterrestrial life."

According to York, Fermi "followed up with a series of calculations on the probability of earthlike planets, the probability of life given an earth, the probability of humans given life, the likely rise and duration of high technology, and so on. He concluded on the basis of such calculations that we ought to have been visited long ago and many times over." However, Teller recalled that Fermi did not elaborate on his question beyond "perhaps a statement that the distances to the next location of living beings may be very great and that, indeed, as far as our galaxy is concerned, we are living somewhere in the sticks, far removed from the metropolitan area of the galactic center."

Predecessors

Russian rocket scientist Konstantin Tsiolkovsky

Fermi was not the first to note the paradox. In his 1686 book Conversations on the Plurality of Worlds, Bernard Le Bovier de Fontenelle—later the secretary of the French Academy of Sciences—constructs a dialogue in which his claim that intelligent beings exist in other worlds, for instance the Moon, is refuted by a character who notes that "If this were the case, the Moon's inhabitants would already have come to us before now." This may have inspired a similar discussion in Jules Verne's 1865 novel Around the Moon, which has also been identified as an early conceptualization of the Fermi paradox.

Another early formulation of the Fermi paradox was presented and dissected in the 1930s writings of Russian rocket scientist Konstantin Tsiolkovsky. Although his rocketry work was embraced by the materialist Soviets, his philosophical writings were suppressed and remained unknown for most of the 20th century. Tsiolkovsky noted that critics rejected the existence of advanced extraterrestrial life on the grounds that such civilizations would otherwise have visited humanity or left some detectable evidence. He posed a solution to the paradox: humanity is quarantined by aliens to protect its independent cultural development, a scenario that resembles the zoo hypothesis later proposed by John Ball.

Popularization

Carl Sagan, seen here beside a Viking lander, first mentioned the paradox in print.

The Fermi question first appeared in print in a footnote of a 1963 paper by Carl Sagan. Two years later, Stephen Dole noted the dilemma at a symposium—"If there are so many advanced forms of life around, where is everybody?"—but did not attribute it to Fermi. A chapter of Intelligent Life in the Universe, co-authored by Sagan and Iosif Shklovsky, was headlined with the Fermi-attributed "Where are they?" The Fermi question also appeared in NASA's 1970 Project Cyclops report, a 1973 book by Sagan, and a 1975 article by David Viewing in the Journal of the British Interplanetary Society that first described it as a paradox.

Later that year, Michael Hart published a detailed examination of the paradox in the Quarterly Journal of the Royal Astronomical Society. Hart, who concluded that "we are the first civilization in our Galaxy", proposed four broad categories of solutions to the paradox: those that are physical (a space travel limitation), sociological (aliens choose not to visit Earth), temporal (aliens have not had time to travel to Earth), or that extraterrestrials have already visited. His paper sparked significant interest in the paradox among academics and even politicians, with a discussion held in the House of Lords. A seminal response—"Extraterrestrial intelligent beings do not exist"—was written by Frank Tipler, who argued that, if an advanced extraterrestrial civilization existed, their self-replicating spacecraft should have already been detected in the Solar System. The term "Fermi paradox" was coined in a 1977 article by David Stephenson and was widely adopted.

The popularization of the Fermi paradox damaged SETI efforts, and Senator William Proxmire cited Tipler when he spurred the termination of the federally funded NASA SETI program in 1981. According to Robert Gray, the paradox may contribute to a "de facto prohibition on government support for research in a branch of astrobiology".

Criticism

Fermi did not publish anything regarding the paradox, with Sagan once suggesting the quote to be apocryphal. Scientists like Robert Gray have criticized its attribution to Fermi, and alternative terms like the "Hart–Tipler argument" or "Tsiolkovsky–Fermi–Viewing–Hart paradox" have been proposed. According to Gray, the current understanding of the paradox misinterprets Fermi's question and subsequent discussion, which was challenging the feasibility of interstellar travel rather than the existence of advanced extraterrestrial life.

Basis

Enrico Fermi (1901–1954)

The Fermi paradox is a conflict between the argument that scale and probability seem to favor intelligent life being common in the universe, and the total lack of evidence of intelligent life having ever arisen anywhere other than on Earth.

The first aspect of the Fermi paradox is a function of scale, or the large numbers involved: there are an estimated 200–400 billion stars in the Milky Way (2–4 × 10^11) and 70 sextillion (7 × 10^22) in the observable universe. Even if intelligent life occurs on only a minuscule percentage of planets around these stars, there might still be a great number of extant civilizations in the universe, and if the percentage were high enough, a significant number in the Milky Way alone. This assumes the mediocrity principle, by which Earth is a typical planet.

The second aspect of the Fermi paradox is the argument of probability: given intelligent life's ability to overcome scarcity, and its tendency to colonize new habitats, it seems possible that at least some civilizations would be technologically advanced, seek out new resources in space, and colonize their star system and, subsequently, surrounding star systems. Since there is no significant evidence on Earth, or elsewhere in the known universe, of other intelligent life after 13.8 billion years of the universe's history, there is a conflict requiring a resolution. Some examples of possible resolutions are that intelligent life is rarer than is thought, that assumptions about the general development or behavior of intelligent species are flawed, or, more radically, that the scientific understanding of the nature of the universe is quite incomplete.

The Fermi paradox can be asked in two ways. The first is, "Why are no aliens or their artifacts found on Earth, or in the Solar System?". If interstellar travel is possible, even the "slow" kind nearly within the reach of Earth technology, then it would only take from 5 million to 50 million years to colonize the galaxy. This is relatively brief on a geological scale, let alone a cosmological one. Since there are many stars older than the Sun, and since intelligent life might have evolved earlier elsewhere, the question then becomes why the galaxy has not been colonized already. Even if colonization is impractical or undesirable to all alien civilizations, large-scale exploration of the galaxy could be possible by probes. These might leave detectable artifacts in the Solar System, such as old probes or evidence of mining activity, but none of these have been observed.
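The 5-to-50-million-year range can be reproduced with a back-of-envelope sketch. The probe speed, hop count, and settlement delay below are illustrative assumptions, not figures from the literature:

```python
# Back-of-envelope estimate of the time to traverse the Milky Way at
# "slow" interstellar speeds (all parameters are illustrative assumptions).

GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way, light-years
PROBE_SPEED_C = 0.01           # 1% of light speed, near foreseeable technology
HOPS = 10_000                  # assumed colonization hops across the disk
PAUSE_PER_HOP_YR = 400         # assumed settling time before launching onward

travel_time_yr = GALAXY_DIAMETER_LY / PROBE_SPEED_C   # pure transit time
pause_time_yr = HOPS * PAUSE_PER_HOP_YR               # cumulative settling time
total_yr = travel_time_yr + pause_time_yr

print(f"Crossing time: {total_yr / 1e6:.0f} million years")
```

Even with generous pauses at each hop, the total lands comfortably inside the quoted range, and it remains brief next to the billions of years by which many stars predate the Sun.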

The second form of the question is "Why are there no signs of intelligence elsewhere in the universe?". This version does not assume interstellar travel, but includes other galaxies as well. For distant galaxies, travel times may well explain the lack of alien visits to Earth, but a sufficiently advanced civilization could potentially be observable over a significant fraction of the size of the observable universe. Even if such civilizations are rare, the scale argument indicates they should exist somewhere at some point during the history of the universe, and since they could be detected from far away over a considerable period of time, many more potential sites for their origin are within range of human observation. It is unknown whether the paradox is stronger for the Milky Way galaxy or for the universe as a whole.

Drake equation

The theories and principles in the Drake equation are closely related to the Fermi paradox. The equation was formulated by Frank Drake in 1961 in an attempt to find a systematic means to evaluate the numerous probabilities involved in the existence of alien life. The equation is

N = R* × fp × ne × fl × fi × fc × L

where N is the number of technologically advanced civilizations in the Milky Way galaxy, and is asserted to be the product of

  • R*, the rate of formation of stars in the galaxy;
  • fp, the fraction of those stars with planetary systems;
  • ne, the number of planets, per solar system, with an environment suitable for organic life;
  • fl, the fraction of those suitable planets whereon organic life appears;
  • fi, the fraction of life-bearing planets whereon intelligent life appears;
  • fc, the fraction of civilizations that reach the technological level whereby detectable signals may be dispatched; and
  • L, the length of time that those civilizations dispatch their signals.

The fundamental problem is that the last four terms (fl, fi, fc, and L) are entirely unknown, rendering statistical estimates impossible.

The Drake equation has been used by both optimists and pessimists, with wildly differing results. The first scientific meeting on the search for extraterrestrial intelligence (SETI), which had 10 attendees including Frank Drake and Carl Sagan, speculated that the number of civilizations in the Milky Way galaxy was roughly between 1,000 and 100,000,000. Conversely, Frank Tipler and John D. Barrow used pessimistic numbers and speculated that the average number of civilizations in a galaxy is much less than one. Almost all arguments involving the Drake equation suffer from the overconfidence effect, a common error of probabilistic reasoning about low-probability events: guessing specific numbers for the likelihoods of events whose mechanism is not understood, such as abiogenesis on an Earth-like planet, for which estimates vary over many hundreds of orders of magnitude. An analysis that takes into account some of the uncertainty associated with this lack of understanding has been carried out by Anders Sandberg, Eric Drexler and Toby Ord, and suggests "a substantial ex ante probability of there being no other intelligent life in our observable universe".
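The effect Sandberg, Drexler, and Ord describe can be illustrated with a toy Monte Carlo sketch: each poorly known factor is drawn from a wide log-uniform range rather than pinned to a point estimate. The ranges below are illustrative assumptions, not the values used in their paper:

```python
import math
import random

random.seed(0)

def log_uniform(lo, hi):
    """Draw a value whose logarithm is uniform on [log(lo), log(hi)]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def sample_n_civilizations():
    # Illustrative uncertainty ranges for each Drake-equation factor.
    R = log_uniform(1, 100)       # star formation rate, stars per year
    fp = log_uniform(0.1, 1)      # fraction of stars with planets
    ne = log_uniform(0.1, 10)     # suitable planets per system
    fl = log_uniform(1e-30, 1)    # fraction developing life (vast uncertainty)
    fi = log_uniform(1e-3, 1)     # fraction developing intelligence
    fc = log_uniform(1e-2, 1)     # fraction emitting detectable signals
    L = log_uniform(1e2, 1e8)     # years signals are emitted
    return R * fp * ne * fl * fi * fc * L

samples = [sample_n_civilizations() for _ in range(100_000)]
p_alone = sum(n < 1 for n in samples) / len(samples)
print(f"Fraction of draws with N < 1 (empty galaxy): {p_alone:.2f}")
```

Because the abiogenesis term alone spans dozens of orders of magnitude, a large fraction of draws yields N < 1 even though the mean of N is enormous, which is the central point of that analysis.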

Great Filter

The Great Filter, a concept introduced by Robin Hanson in 1996, represents whatever natural phenomenon makes it unlikely for life to evolve from inanimate matter into an advanced civilization. The most commonly agreed-upon low-probability event is abiogenesis: a gradual process of increasing complexity of the first self-replicating molecules by a randomly occurring chemical process. Other proposed great filters include the emergence of eukaryotic cells, of meiosis, or of some of the steps involved in the evolution of a brain-like organ capable of complex logical deductions.

Astrobiologists Dirk Schulze-Makuch and William Bains, reviewing the history of life on Earth, including convergent evolution, concluded that transitions such as oxygenic photosynthesis, the eukaryotic cell, multicellularity, and tool-using intelligence are likely to occur on any Earth-like planet given enough time. They argue that the Great Filter may be abiogenesis, the rise of technological human-level intelligence, or an inability to settle other worlds because of self-destruction or a lack of resources. Paleobiologist Olev Vinn has suggested that the great filter may have universal biological roots related to evolutionary animal behavior.

Grabby Aliens

In 2021, the concepts of quiet, loud, and grabby aliens were introduced by Hanson et al. The proposed "loud" aliens expand rapidly in a highly detectable way throughout the universe and endure, while "quiet" aliens are hard or impossible to detect and eventually disappear. "Grabby" aliens prevent the emergence of other civilizations in their sphere of influence, which expands at a rate near the speed of light. The authors argue that if loud civilizations are rare, as they appear to be, then quiet civilizations are also rare. The paper suggests that humanity's existing stage of technological development is relatively early in the potential timeline of intelligent life in the universe, as loud aliens would otherwise be observable by astronomers.

Earlier, in 2013, Anders Sandberg and Stuart Armstrong had examined the potential for intelligent life to spread intergalactically throughout the universe and the implications for the Fermi paradox. Their study suggests that with sufficient energy, intelligent civilizations could colonize the entire Milky Way galaxy within a few million years and spread to nearby galaxies in a timespan that is cosmologically brief. They conclude that intergalactic colonization appears possible with the resources of a single planetary system, that it is of comparable difficulty to interstellar colonization, and that the Fermi paradox is therefore much sharper than commonly thought.

Critics such as David Kipping have contended that the "Grabby Aliens" model is reliant on unproven assumptions, lacking enough scientific rigor to be empirically falsifiable, and suggested other explanations for the proposed earliness of humans such as planets in M-dwarf systems being uninhabitable. Robin Hanson has responded to these criticisms.

Anthropics

Anthropic reasoning and the question of why we happen to find ourselves as humans create a number of potential problems for astrobiology. Walter Barta argues that Hanson's grabby aliens model creates an anthropic dilemma. According to Hanson's model, most observers in our reference class should be grabby aliens themselves. This leads to the question of why we do not find ourselves as grabby aliens, but rather as a species confined to a single planet.

Empirical evidence

There are two parts of the Fermi paradox that rely on empirical evidence—that there are many potentially habitable planets, and that humans see no evidence of life. The first point, that many suitable planets exist, was an assumption in Fermi's time, but has since been supported by the discovery that exoplanets are common. Existing models predict billions of habitable worlds in the Milky Way.

The second part of the paradox, that humans see no evidence of extraterrestrial life, is also an active field of scientific research. This includes both efforts to find any indication of life, and efforts specifically directed to finding intelligent life. These searches have been made since 1960, and several are ongoing.

Although astronomers do not usually search for extraterrestrials, they have observed phenomena that they could not immediately explain without positing an intelligent civilization as the source. For example, pulsars, when first discovered in 1967, were called little green men (LGM) because of the precise repetition of their pulses. In all cases, explanations with no need for intelligent life have been found for such observations, but the possibility of discovery remains. Proposed examples include asteroid mining that would change the appearance of debris disks around stars, or spectral lines from nuclear waste disposal in stars.

Electromagnetic emissions

Radio telescopes are often used by SETI projects.

Radio technology and the ability to construct a radio telescope are presumed to be a natural advance for technological species, theoretically creating effects that might be detected over interstellar distances. Careful searches for non-natural radio emissions from space may lead to the detection of alien civilizations. Sensitive alien observers of the Solar System, for example, would note unusually intense radio waves for a G2 star due to Earth's television and telecommunication broadcasts. In the absence of an apparent natural cause, alien observers might infer the existence of a terrestrial civilization. Such signals could be either "accidental" by-products of a civilization, or deliberate attempts to communicate, such as the Arecibo message. It is unclear whether "leakage", as opposed to a deliberate beacon, could be detected by an extraterrestrial civilization. The most sensitive radio telescopes on Earth, as of 2019, would not be able to detect non-directional radio signals (such as broadband) even at a fraction of a light-year away, but other civilizations could hypothetically have much better equipment.
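The weakness of leakage signals can be sanity-checked with the inverse-square law: the power of a broadcast is diluted over a sphere of radius d and over the signal's bandwidth. The transmitter power, bandwidth, and distance below are illustrative assumptions:

```python
import math

# Spectral flux density from an isotropic "leakage" transmitter:
# S = P / (4 * pi * d^2 * B), i.e. power spread over a sphere and a bandwidth.

LY_M = 9.461e15          # metres in one light-year

P_W = 1e6                # assumed leakage power: 1 MW (a strong TV transmitter)
B_HZ = 6e6               # assumed broadcast bandwidth: 6 MHz
d_m = 0.5 * LY_M         # half a light-year away

flux_w = P_W / (4 * math.pi * d_m**2 * B_HZ)   # W / m^2 / Hz
flux_jy = flux_w / 1e-26                       # 1 jansky = 1e-26 W/m^2/Hz

print(f"Flux at 0.5 ly: {flux_jy:.2e} Jy")
```

The result is tens of nanojansky, below the microjansky levels that even the deepest radio surveys reach, consistent with the claim that broadband leakage is undetectable at a fraction of a light-year with current instruments.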

A number of astronomers and observatories have attempted and are attempting to detect such evidence, mostly through SETI organizations such as the SETI Institute and Breakthrough Listen. Several decades of SETI analysis have not revealed any unusually bright or meaningfully repetitive radio emissions.

Direct planetary observation

A composite picture of Earth at night, created using data from the Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS). Large-scale artificial lighting produced by human civilization is detectable from space.

Exoplanet detection and classification is a very active sub-discipline in astronomy; the first candidate terrestrial planet discovered within a star's habitable zone was found in 2007. New refinements in exoplanet detection methods, and use of existing methods from space (such as the Kepler and TESS missions) have detected and characterized Earth-size planets, and determined whether they are within the habitable zones of their stars. Such observational refinements have allowed better estimates of how common these potentially habitable worlds are, typically in the range of 0.5–1.0 potentially habitable planets per star.

Conjectures about interstellar probes

The Hart–Tipler conjecture is a form of contraposition which states that because no interstellar probes have been detected, there likely is no other intelligent life in the universe, as such life should be expected to eventually create and launch such probes. Self-replicating probes could exhaustively explore a galaxy the size of the Milky Way in as little as a million years. If even a single civilization in the Milky Way attempted this, such probes could spread throughout the entire galaxy. Another speculation for contact with an alien probe—one that would be trying to find human beings—is an alien Bracewell probe. Such a hypothetical device would be an autonomous space probe whose purpose is to seek out and communicate with alien civilizations (as opposed to von Neumann probes, which are usually described as purely exploratory). These were proposed as an alternative to carrying on a slow speed-of-light dialogue between vastly distant neighbors. Rather than contending with the long delays a radio dialogue would suffer, a probe housing an artificial intelligence would seek out an alien civilization and carry on close-range communication with it. The findings of such a probe would still have to be transmitted to the home civilization at light speed, but an information-gathering dialogue could be conducted in real time.
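The million-year figure rests on exponential self-replication: covering every star requires only about log2(number of stars) doubling generations, after which the limiting factor is the wavefront's physical crossing time. The hop distance, cruise speed, and replication time below are illustrative assumptions:

```python
import math

STARS_IN_GALAXY = 4e11     # upper estimate of stars in the Milky Way
HOP_LY = 10                # assumed distance between neighbouring systems
HOP_SPEED_C = 0.1          # assumed probe cruise speed: 10% of light speed
REPLICATION_YR = 500       # assumed time to build daughter probes per system

# Each generation doubles the probe population, so reaching every star needs:
generations = math.ceil(math.log2(STARS_IN_GALAXY))
time_per_generation = HOP_LY / HOP_SPEED_C + REPLICATION_YR
exponential_phase_yr = generations * time_per_generation

# The wavefront must still physically cross the galactic disk:
crossing_yr = 100_000 / HOP_SPEED_C   # 100,000 ly at 0.1c

total_yr = max(exponential_phase_yr, crossing_yr)
print(f"{generations} doublings; total time ~{total_yr:,.0f} years")
```

Only about 39 doublings are needed for 400 billion stars, so under these assumptions the schedule is dominated by transit time, giving roughly a million years overall.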

Direct exploration of the Solar System has yielded no evidence indicating a visit by aliens or their probes. Detailed exploration of areas of the Solar System where resources would be plentiful may yet produce evidence of alien exploration, though the entirety of the Solar System is relatively vast and difficult to investigate. Attempts to signal, attract, or activate hypothetical Bracewell probes in Earth's vicinity have not succeeded.

Searches for stellar-scale artifacts

A variant of the speculative Dyson sphere. Such large-scale artifacts would drastically alter the spectrum of a star.

In 1959, Freeman Dyson observed that every developing human civilization constantly increases its energy consumption, and he conjectured that a civilization might try to harness a large part of the energy produced by a star. He proposed a hypothetical "Dyson sphere" as a means: a shell or cloud of objects enclosing a star to absorb and utilize as much radiant energy as possible. Such a feat of astroengineering would drastically alter the observed spectrum of the star involved, changing it at least partly from the normal emission lines of a natural stellar atmosphere to those of black-body radiation, probably with a peak in the infrared. Dyson speculated that advanced alien civilizations might be detected by examining the spectra of stars and searching for such an altered spectrum.
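Dyson's infrared prediction follows directly from black-body physics: a shell re-radiating its star's entire output at roughly habitable temperatures peaks in the mid-infrared, per Wien's displacement law. A minimal sketch, assuming a shell waste-heat temperature of about 300 K:

```python
# Wien's displacement law: lambda_peak = b / T, with b = 2.898e-3 m*K.

WIEN_B = 2.898e-3   # Wien displacement constant, m*K

sun_peak_um = WIEN_B / 5772 * 1e6    # Sun's photosphere (~5772 K): visible light
shell_peak_um = WIEN_B / 300 * 1e6   # assumed shell waste heat (~300 K): mid-infrared

print(f"Star peak: {sun_peak_um:.2f} um; shell peak: {shell_peak_um:.1f} um")
```

The star's visible-light peak near 0.5 μm shifts to roughly 10 μm for the shell, which is why Dyson-sphere searches look for anomalous infrared excesses.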

There have been attempts to find evidence of Dyson spheres that would alter the spectra of their core stars. Direct observation of thousands of galaxies has shown no explicit evidence of artificial construction or modifications. In October 2015, there was speculation that a dimming of light from star KIC 8462852, observed by the Kepler space telescope, could have been a result of such a Dyson sphere under construction. However, in 2018, further observations determined that the amount of dimming varied with the frequency of the light, pointing to dust, rather than an opaque object such as a Dyson sphere, as the cause of the dimming.

Hypothetical explanations for the paradox

Rarity of intelligent life

Extraterrestrial life is rare or non-existent

Those who think that intelligent extraterrestrial life is (nearly) impossible argue that the conditions needed for the evolution of life—or at least the evolution of biological complexity—are rare or even unique to Earth. Under this assumption, called the rare Earth hypothesis (a rejection of the mediocrity principle), complex multicellular life is regarded as exceedingly unusual.

The rare Earth hypothesis argues that the evolution of biological complexity requires a host of fortuitous circumstances: a galactic habitable zone; a star and planet(s) having the requisite conditions, such as enough of a continuous habitable zone; the advantage of a giant guardian like Jupiter and a large moon; conditions needed to ensure the planet has a magnetosphere and plate tectonics; the chemistry of the lithosphere, atmosphere, and oceans; and the role of "evolutionary pumps" such as massive glaciation and rare bolide impacts. Perhaps most importantly, advanced life needs whatever it was that led to the transition of (some) prokaryotic cells to eukaryotic cells, sexual reproduction, and the Cambrian explosion.

In his book Wonderful Life (1989), Stephen Jay Gould suggested that if the "tape of life" were rewound to the time of the Cambrian explosion, and one or two tweaks made, human beings probably never would have evolved. Other thinkers such as Fontana, Buss, and Kauffman have written about the self-organizing properties of life.

Extraterrestrial intelligence is rare or non-existent

It is possible that even if complex life is common, intelligence (and consequently civilizations) is not. While there are remote sensing techniques that could perhaps detect life-bearing planets without relying on the signs of technology, none of them have the ability to determine if any detected life is intelligent. This is sometimes referred to as the "algae vs. alumnae" problem.

Charles Lineweaver states that when considering any extreme trait in an animal, intermediate stages do not necessarily produce "inevitable" outcomes. For example, large brains are no more "inevitable", or convergent, than are the long noses of animals such as aardvarks and elephants. As he points out, "dolphins have had ~20 million years to build a radio telescope and have not done so". In addition, Rebecca Boyle points out that of all the species that have evolved in the history of life on the planet Earth, only one—human beings and only in the beginning stages—has ever become space-faring.

Extraterrestrial intelligence is relatively new

Given that the expected lifespan of the universe is at least one trillion years and its current age is around 14 billion years, it is possible that humans have emerged at or near the earliest possible opportunity for intelligent life to evolve. Avi Loeb, an astrophysicist and cosmologist, has suggested that Earth may be a very early example of a life-bearing planet and that life-bearing planets may be far more common trillions of years from now. He has put forward the view that the Universe has only recently reached a state in which life is possible, and that this is the reason humanity has not detected extraterrestrial life. The firstborn hypothesis posits that humans are the first, or one of the first, intelligent species to evolve. Therefore, many intelligent species may eventually exist, but few, if any, currently do. Moreover, it is possible that such species, even if they already exist, are developing more slowly or have more limited resources on their home worlds, meaning that they may take longer than humans have to achieve spaceflight.

Periodic extinction by natural events

An asteroid impact may trigger an extinction event.

New life might commonly die out due to runaway heating or cooling on its fledgling planet. On Earth, there have been numerous major extinction events that destroyed the majority of complex species alive at the time; the extinction of the non-avian dinosaurs is the best known example. These are thought to have been caused by events such as impact from a large asteroid, massive volcanic eruptions, or astronomical events such as gamma-ray bursts. It may be the case that such extinction events are common throughout the universe and periodically destroy intelligent life, or at least its civilizations, before the species is able to develop the technology to communicate with other intelligent species.

However, the chances of extinction by natural events may be very low on the scale of a civilization's lifetime. Based on an analysis of impact craters on Earth and the Moon, the average interval between impacts large enough to cause global consequences (like the Chicxulub impact) is estimated to be around 100 million years.
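Treating such impacts as a Poisson process makes the low per-civilization risk concrete. The 10,000-year window below is an illustrative assumption for the span of technological civilization so far:

```python
import math

MEAN_INTERVAL_YR = 100e6   # average gap between globally catastrophic impacts
WINDOW_YR = 10_000         # assumed span of a technological civilization so far

# For a Poisson process, P(at least one event in window) = 1 - exp(-window/mean).
p_impact = 1 - math.exp(-WINDOW_YR / MEAN_INTERVAL_YR)
print(f"P(global impact within {WINDOW_YR:,} years) = {p_impact:.2e}")
```

The probability comes out near one in ten thousand, so a Chicxulub-scale impact is an unlikely way for any single short-lived civilization to end, even if such impacts reliably recur on geological timescales.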

Evolutionary explanations

It is the nature of intelligent life to destroy itself

A 23-kiloton tower shot called BADGER, fired as part of the Operation Upshot–Knothole nuclear test series

This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology. The astrophysicist Sebastian von Hoerner stated that the progress of science and technology on Earth was driven by two factors—the struggle for domination and the desire for an easy life. The former potentially leads to complete destruction, while the latter may lead to biological or mental degeneration. Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient, are many, including war, accidental environmental contamination or damage, the development of biotechnology, synthetic life like mirror life, resource depletion, climate change, or artificial intelligence. This general theme is explored both in fiction and in scientific hypotheses.

In 1966, Sagan and Shklovskii speculated that technological civilizations will either tend to destroy themselves within a century of developing interstellar communicative capability or master their self-destructive tendencies and survive for billion-year timescales. Self-annihilation may also be viewed in terms of thermodynamics: insofar as life is an ordered system that can sustain itself against the tendency to disorder, Stephen Hawking's "external transmission" or interstellar communicative phase, where knowledge production and knowledge management is more important than transmission of information via evolution, may be the point at which the system becomes unstable and self-destructs. Here, Hawking emphasizes self-design of the human genome (transhumanism) or enhancement via machines (e.g., brain–computer interface) to enhance human intelligence and reduce aggression, without which he implies human civilization may be too stupid collectively to survive an increasingly unstable system. For instance, the development of technologies during the "external transmission" phase, such as weaponization of artificial general intelligence or antimatter, may not be met by concomitant increases in human ability to manage its own inventions. Consequently, disorder increases in the system: global governance may become increasingly destabilized, worsening humanity's ability to manage the possible means of annihilation listed above, resulting in global societal collapse.

A less theoretical example might be the resource-depletion issue on Polynesian islands, of which Easter Island is only the best known. David Brin points out that during the expansion phase from 1500 BC to 800 AD there were cycles of overpopulation followed by what might be called periodic cullings of adult males through war or ritual. He writes, "There are many stories of islands whose men were almost wiped out—sometimes by internal strife, and sometimes by invading males from other islands."

Using extinct civilizations such as Easter Island as models, a study conducted in 2018 by Adam Frank et al. posited that climate change induced by "energy intensive" civilizations may prevent sustainability within such civilizations, thus explaining the paradoxical lack of evidence for intelligent extraterrestrial life. Based on dynamical systems theory, the study examined how technological civilizations (exo-civilizations) consume resources and the feedback effects this consumption has on their planets and their carrying capacity. According to Adam Frank, "[t]he point is to recognize that driving climate change may be something generic. The laws of physics demand that any young population, building an energy-intensive civilization like ours, is going to have feedback on its planet. Seeing climate change in this cosmic context may give us better insight into what's happening to us now and how to deal with it." Generalizing the Anthropocene, their model produces four different outcomes:

Possible trajectories of anthropogenic climate change in a model by Frank et al., 2018
  • Die-off: A scenario where the population grows quickly, surpassing the planet's carrying capacity, which leads to a peak followed by a rapid decline. The population eventually stabilizes at a much lower equilibrium level, allowing the planet to partially recover.
  • Sustainability: A scenario where civilizations successfully transition from high-impact resources (like fossil fuels) to sustainable ones (like solar energy) before significant environmental degradation occurs. This allows the civilization and planet to reach a stable equilibrium, avoiding catastrophic effects.
  • Collapse Without Resource Change: In this trajectory, the population and environmental degradation increase rapidly. The civilization does not switch to sustainable resources in time, leading to a total collapse where a tipping point is crossed and the population drops.
  • Collapse With Resource Change: Similar to the previous scenario, but in this case, the civilization attempts to transition to sustainable resources. However, the change comes too late, and the environmental damage is irreversible, still leading to the civilization's collapse.
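The qualitative behavior of the trajectories above can be illustrated with a toy dynamical-systems sketch in the same spirit: a population grows logistically toward a carrying capacity that its own resource use erodes, while the planet slowly recovers. All parameter values and the model form here are illustrative assumptions, not the actual equations or numbers from Frank et al. (2018).

```python
# Toy population-planet feedback model (illustrative only; NOT the
# actual equations from Frank et al. 2018).
def simulate(growth=0.05, impact=0.0005, recovery=0.001,
             switch_step=None, low_impact=0.00005, steps=5000):
    """Population N grows logistically toward a carrying capacity K.
    K is eroded in proportion to resource use (impact * N) and slowly
    recovers toward its pristine value of 100.  If switch_step is set,
    the civilization shifts to lower-impact resources at that step."""
    N, K = 1.0, 100.0
    history = []
    for t in range(steps):
        eff = low_impact if (switch_step is not None and t >= switch_step) else impact
        N += growth * N * (1 - N / K)              # logistic growth toward K
        K += recovery * (100.0 - K) - eff * N      # planetary recovery vs. damage
        K = max(K, 1e-6)                           # keep K positive
        history.append(N)
    return history

dieoff = simulate()               # high-impact resources throughout: overshoot, then decline
sustain = simulate(switch_step=0) # low-impact resources from the start: higher, stable level
```

In the high-impact run the population peaks and then settles at a substantially lower equilibrium (the "die-off" trajectory), while the early-switching run stabilizes near the pristine carrying capacity (the "sustainability" trajectory); delaying `switch_step` long enough reproduces the collapse-with-resource-change case.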

Only one intelligent species can exist in a given region of space

Another hypothesis is that an intelligent species beyond a certain point of technological capability will destroy other intelligent species as they appear, perhaps by using self-replicating probes. Science fiction writer Fred Saberhagen explored this idea in his Berserker series, as did physicist Gregory Benford; science fiction writer Greg Bear took it up in his novel The Forge of God, and later Liu Cixin in his The Three-Body Problem series.

A species might undertake such extermination out of expansionist motives, greed, paranoia, or aggression. In 1981, cosmologist Edward Harrison argued that such behavior would be an act of prudence: an intelligent species that has overcome its own self-destructive tendencies might view any other species bent on galactic expansion as a threat. It has also been suggested that a successful alien species would be a superpredator, as are humans. Another possibility invokes the "tragedy of the commons" and the anthropic principle: the first lifeform to achieve interstellar travel will necessarily (even if unintentionally) prevent competitors from arising, and humans simply happen to be first.[123]

Civilizations only broadcast detectable signals for a brief period of time

It may be that alien civilizations are detectable through their radio emissions for only a short time, reducing the likelihood of spotting them. The usual assumption is that civilizations outgrow radio through technological advancement. However, there could be other leakage such as that from microwaves used to transmit power from solar satellites to ground receivers. Regarding the first point, in a 2006 Sky & Telescope article, Seth Shostak wrote, "Moreover, radio leakage from a planet is only likely to get weaker as a civilization advances and its communications technology gets better. Earth itself is increasingly switching from broadcasts to leakage-free cables and fiber optics, and from primitive but obvious carrier-wave broadcasts to subtler, hard-to-recognize spread-spectrum transmissions."

More hypothetically, advanced alien civilizations may evolve beyond broadcasting at all in the electromagnetic spectrum and communicate by technologies not developed or used by mankind. Some scientists have hypothesized that advanced civilizations may send neutrino signals. If such signals exist, they could be detectable by neutrino detectors that were, as of 2009, under construction for other purposes.

Alien life may be too incomprehensible

Microwave window as seen by a ground-based system. From NASA report SP-419: SETI – the Search for Extraterrestrial Intelligence

Another possibility is that human theoreticians have underestimated how much alien life might differ from that on Earth. Aliens may be psychologically unwilling to attempt to communicate with human beings. Perhaps human mathematics is parochial to Earth and not shared by other life, though others argue this can only apply to abstract math since the math associated with physics must be similar (in results, if not in methods).

In his 2009 book, SETI scientist Seth Shostak wrote, "Our experiments [such as plans to use drilling rigs on Mars] are still looking for the type of extraterrestrial that would have appealed to Percival Lowell [astronomer who believed he had observed canals on Mars]."

Physiology might also be a communication barrier. Carl Sagan speculated that an alien species might have a thought process orders of magnitude slower (or faster) than that of humans. A message broadcast by that species might seem like random background noise to humans, and therefore go undetected.

Paul Davies stated that 500 years ago the very idea of a computer doing work merely by manipulating internal data may not have been viewed as a technology at all. He writes, "Might there be a still higher level [...] If so, this 'third level' would never be manifest through observations made at the informational level, still less the matter level. There is no vocabulary to describe the third level, but that doesn't mean it is non-existent, and we need to be open to the possibility that alien technology may operate at the third level, or maybe the fourth, fifth [...] levels."

Arthur C. Clarke hypothesized that "our technology must still be laughably primitive; we may well be like jungle savages listening for the throbbing of tom-toms, while the ether around them carries more words per second than they could utter in a lifetime". Another thought is that technological civilizations invariably experience a technological singularity and attain a post-biological character.

Sociological explanations

Expansionism is not the cosmic norm

In response to Tipler's idea of self-replicating probes, Stephen Jay Gould wrote, "I must confess that I simply don't know how to react to such arguments. I have enough trouble predicting the plans and reactions of the people closest to me. I am usually baffled by the thoughts and accomplishments of humans in different cultures. I'll be damned if I can state with certainty what some extraterrestrial source of intelligence might do."

Alien species may have only settled part of the galaxy

According to a study by Frank et al., advanced civilizations may not colonize everything in the galaxy due to their potential adoption of steady states of expansion. This hypothesis suggests that civilizations might reach a stable pattern of expansion where they neither collapse nor aggressively spread throughout the galaxy. A February 2019 article in Popular Science states, "Sweeping across the Milky Way and establishing a unified galactic empire might be inevitable for a monolithic super-civilization, but most cultures are neither monolithic nor super—at least if our experience is any guide." Astrophysicist Adam Frank, along with co-authors such as astronomer Jason Wright, ran a variety of simulations in which they varied such factors as settlement lifespans, fractions of suitable planets, and recharge times between launches. They found many of their simulations seemingly resulted in a "third category" in which the Milky Way remains partially settled indefinitely. The abstract to their 2019 paper states, "These results break the link between Hart's famous 'Fact A' (no interstellar visitors on Earth now) and the conclusion that humans must, therefore, be the only technological civilization in the galaxy. Explicitly, our solutions admit situations where our current circumstances are consistent with an otherwise settled, steady-state galaxy."

An alternative scenario is that long-lived civilizations may choose to colonize only stars that pass close to them. As low-mass K- and M-type dwarfs are by far the most common types of main-sequence stars in the Milky Way, they are the most likely to pass close to existing civilizations. These stars also have longer life spans, which such a civilization may prefer. An interstellar travel range of 0.3 light-years is theoretically sufficient to colonize all M-dwarfs in the galaxy within 2 billion years; if the range is increased to 2 light-years, all K-dwarfs can be colonized in the same time frame.

Alien species may isolate themselves in virtual worlds

Avi Loeb suggests that one possible explanation for the Fermi paradox is virtual reality technology. Individuals of extraterrestrial civilizations may prefer to spend time in virtual worlds or metaverses that have different physical law constraints as opposed to focusing on colonizing planets. Nick Bostrom suggests that some advanced beings may divest themselves entirely of physical form, create massive artificial virtual environments, transfer themselves into these environments through mind uploading, and exist totally within virtual worlds, ignoring the external physical universe.

It may be that intelligent alien life develops an "increasing disinterest" in their outside world. Possibly any sufficiently advanced society will develop highly engaging media and entertainment well before the capacity for advanced space travel, with the rate of appeal of these social contrivances being destined, because of their inherent reduced complexity, to overtake any desire for complex, expensive endeavors such as space exploration and communication. Once any sufficiently advanced civilization becomes able to master its environment, and most of its physical needs are met through technology, various "social and entertainment technologies", including virtual reality, are postulated to become the primary drivers and motivations of that civilization.

Artificial intelligence may not be aggressively expansionist

While artificial intelligence supplanting its creators could only deepen the Fermi paradox, for instance by enabling the colonization of the galaxy through self-replicating probes, it is also possible that, after replacing its creators, an artificial intelligence either does not expand or does not endure, for a variety of reasons. Michael A. Garrett has suggested that biological civilizations may universally underestimate the speed at which AI systems progress and fail to react in time, making AI a possible great filter. He also argues that this could limit the longevity of advanced technological civilizations to less than 200 years, thus explaining the great silence observed by SETI.

Economic explanations

Lack of resources needed to physically spread throughout the galaxy

The ability of an alien culture to colonize other star systems is based on the idea that interstellar travel is technologically feasible. While the existing understanding of physics rules out the possibility of faster-than-light travel, it appears that there are no major theoretical barriers to the construction of "slow" interstellar ships, even though the engineering required is considerably beyond existing human capabilities. This idea underlies the concept of the Von Neumann probe and the Bracewell probe as potential evidence of extraterrestrial intelligence.

It is possible, however, that scientific knowledge cannot properly gauge the feasibility and costs of such interstellar colonization. Theoretical barriers may not yet be understood, and the resources needed may be so great as to make it unlikely that any civilization could afford to attempt it. Even if interstellar travel and colonization are possible, they may be difficult, leading to a more gradual pace of colonization based on percolation.

Colonization efforts may not occur as an unstoppable hyper-aggressive rush, but rather as an uneven tendency to "percolate" outwards, with an eventual slowing and termination of the effort given the enormous costs involved and the expectation that colonies will inevitably develop a culture and civilization of their own. Colonization may thus occur in "clusters", with large areas remaining uncolonized at any one time, and planets only restarting the colonization process when their populations begin to outstrip their world's carrying capacity.

Information is cheaper to transmit than matter is to transfer

If a human-capability machine intelligence is possible, and if it is possible to transfer such constructs over vast distances and rebuild them on a remote machine, then it might not make strong economic sense to travel the galaxy by spaceflight. Louis K. Scheffer calculates the cost of radio transmission of information across space to be cheaper than spaceflight by a factor of 10⁸–10¹⁷. For a machine civilization, the costs of interstellar travel are therefore enormous compared to the more efficient option of sending computational signals across space to already established sites. After the first civilization has physically explored or colonized the galaxy, as well as sent such machines for easy exploration, then any subsequent civilizations, after having contacted the first, may find it cheaper, faster, and easier to explore the galaxy through intelligent mind transfers to the machines built by the first civilization. However, since a star system needs only one such remote machine, and the communication is most likely highly directed, transmitted at high frequencies, and at a minimal power to be economical, such signals would be hard to detect from Earth.
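The flavor of Scheffer's conclusion can be shown with a back-of-the-envelope comparison. The probe mass, the payload size in bits, and the energy per received bit below are all illustrative assumptions, not values from Scheffer's paper; the point is only that kinetic energy scales with mass and velocity squared while transmission energy scales with the number of bits.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not
# Scheffer's actual calculation): energy to accelerate a 1-tonne
# probe to 10% of light speed vs. energy to radio-transmit a
# hypothetical 10^20-bit payload at an assumed 10^-12 J per bit.
C = 3.0e8                                   # speed of light, m/s

probe_mass_kg = 1000.0                      # assumed probe mass
v = 0.1 * C                                 # assumed cruise velocity
kinetic_energy = 0.5 * probe_mass_kg * v**2    # ~4.5e17 J (non-relativistic)

bits = 1e20                                 # assumed informational payload
joules_per_bit = 1e-12                      # assumed link budget per bit
transmit_energy = bits * joules_per_bit     # ~1e8 J

ratio = kinetic_energy / transmit_energy    # ~4.5e9: transmission wins
```

Under these assumptions the ratio lands near the low end of the 10⁸–10¹⁷ range quoted above; more pessimistic link budgets or heavier ships push it higher.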

By contrast, in economics the counter-intuitive Jevons paradox implies that higher productivity results in higher demand. In other words, increased economic efficiency results in increased economic growth. For example, increased renewable energy has the risk of not directly resulting in declining fossil fuel use, but rather additional economic growth as fossil fuels instead are directed to alternative uses. Thus, technological innovation makes human civilization more capable of higher levels of consumption, as opposed to its existing consumption being achieved more efficiently at a stable level.

Other species' home planets cannot support industrial economies

Amedeo Balbi and Adam Frank propose the concept of an "oxygen bottleneck" for the emergence of the industrial production necessary for spaceflight. The "oxygen bottleneck" refers to the critical level of atmospheric oxygen necessary for fire and combustion. Earth's atmospheric oxygen concentration is about 21%, but has been much lower in the past and may also be lower on many exoplanets. The authors argue that while the threshold of oxygen required for the existence of complex life and ecosystems is relatively low, the industrial processes that are necessary precursors to spaceflight, particularly metal smelting and many forms of electricity generation, require higher oxygen concentrations of at least roughly 18%. A planet with oxygen sufficient to support intelligent life but not to develop advanced metallurgy would be technologically gated by its extremely limited industrial capabilities at a level likely incapable of supporting spaceflight. Thus, the presence of high levels of oxygen in a planet's atmosphere is not only a potential biosignature but also a critical factor in the emergence of detectable technological civilizations.

Another hypothesis in this category is the "waterworlds hypothesis". According to author and scientist David Brin: "it turns out that our Earth skates the very inner edge of our sun's continuously habitable—or 'Goldilocks'—zone. And Earth may be anomalous. It may be that because we are so close to our sun, we have an anomalously oxygen-rich atmosphere, and we have anomalously little ocean for a water world. In other words, 32 percent continental mass may be high among water worlds..." Brin continues, "In which case, the evolution of creatures like us, with hands and fire and all that sort of thing, may be rare in the galaxy. In which case, when we do build starships and head out there, perhaps we'll find lots and lots of life worlds, but they're all like Polynesia. We'll find lots and lots of intelligent lifeforms out there, but they're all dolphins, whales, squids, who could never build their own starships. What a perfect universe for us to be in, because nobody would be able to boss us around, and we'd get to be the voyagers, the Star Trek people, the starship builders, the policemen, and so on."

Intelligent alien species have not developed advanced technologies

Le Moustier Neanderthals (Charles R. Knight, 1920)

It may be that while alien species with intelligence exist, they are primitive or have not reached the level of technological advancement necessary to communicate. Along with non-intelligent life, such civilizations would also be very difficult to detect from Earth. A trip using conventional rockets would take hundreds of thousands of years to reach the nearest stars.

To skeptics, the fact that over the history of life on the Earth, only one species has developed a civilization to the point of being capable of spaceflight, and this only in the early stages, lends credence to the idea that technologically advanced civilizations are rare in the universe.

Developing practical spaceflight technology is very difficult or expensive

The rapid increase of scientific and technological progress seen in the 18th to 20th centuries (the Industrial Revolution), compared to earlier eras, led to the common assumption that such progress will continue at exponential rates as time goes by, eventually leading to the progress level required for space exploration. The "universal limit to technological development" (ULTD) hypothesis proposes that there is a limit to the potential growth of a civilization, and that this limit may be placed well below the point required for space exploration. Such limits may be based on the enormous strain spaceflight may put on a planet's resources, physical limitations (such as faster-than-light travel being impossible), and even limitations based on the species' own biology.

Discovering extraterrestrial life is very difficult

Humans are not listening properly

There are some assumptions underlying the SETI programs that may cause searchers to miss signals that exist. Extraterrestrials might, for example, transmit signals that have a very high or low data rate, or employ unconventional (in human terms) frequencies, which would make them hard to distinguish from background noise. Signals might be sent from non-main-sequence star systems that humans search with lower priority; current search programs assume that most alien life will be orbiting Sun-like stars.

Radio signals cannot be straightforwardly detected at interstellar distances

The greatest challenge is the sheer size of the radio search needed to look for signals (effectively spanning the entire observable universe), the limited amount of resources committed to SETI, and the sensitivity of modern instruments. SETI estimates, for instance, that with a radio telescope as sensitive as the Arecibo Observatory, Earth's television and radio broadcasts would only be detectable at distances up to 0.3 light-years, less than 1/10 the distance to the nearest star. A signal is much easier to detect if it consists of a deliberate, powerful transmission directed at Earth. Such signals could be detected at ranges of hundreds to tens of thousands of light-years. However, this means that detectors must be listening to an appropriate range of frequencies, and be in that region of space to which the beam is being sent. Many SETI searches assume that extraterrestrial civilizations will be broadcasting a deliberate signal, like the Arecibo message, in order to be found. Moreover, as human communication technology has advanced, humans have reduced the use of broadband radio transmissions in favor of more efficient and higher-bandwidth methods such as satellite communication and fiber optics. It may be that alien civilizations, like humans, have largely moved past high-power radio broadcasting, producing very few, if any, detectable transmissions.

Thus, to detect alien civilizations through their radio emissions, Earth observers need very sensitive instruments, and moreover must hope that:

1) Aliens have developed radio technology, and,

2) Aliens use radio as a primary means of communication, and,

3) For reasons unknown, their transmitters are orders of magnitude more powerful than ours, or they are deliberately broadcasting high-power radio signals towards Earth as part of their own efforts to contact other civilizations, and,

4) We are listening at the right frequency, at the right time, and,

5) We recognize their transmission as an attempt at communication.
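The detection ranges quoted above follow from the inverse-square law: received flux falls off as 1/r², so the maximum range at which a receiver of given sensitivity can hear a transmitter of power P scales as √P. The transmitter powers and receiver sensitivity below are illustrative assumptions chosen only to land in the right order of magnitude; this is a sketch, not an actual Arecibo link budget.

```python
import math

# Inverse-square-law sketch of radio detection range (illustrative
# numbers, not a real SETI link budget).  For an isotropic transmitter
# of power P and a receiver with minimum detectable flux S_min:
#     r_max = sqrt(P / (4 * pi * S_min))
LY_IN_M = 9.461e15                  # metres per light-year

def max_range_ly(power_watts, s_min_w_per_m2):
    r_m = math.sqrt(power_watts / (4 * math.pi * s_min_w_per_m2))
    return r_m / LY_IN_M

S_MIN = 1e-26                       # assumed receiver sensitivity, W/m^2

leakage = max_range_ly(1e6, S_MIN)  # ~1 MW of incidental broadcast leakage
beacon = max_range_ly(1e13, S_MIN)  # a deliberate, far more powerful beacon
```

With these assumed numbers, incidental leakage is audible out to roughly a third of a light-year while the beacon reaches hundreds of light-years; a directed beam effectively multiplies P by the antenna gain, which is why a transmission aimed at Earth carries so much farther than omnidirectional leakage.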

Humans have not listened for long enough

Humanity's ability to detect intelligent extraterrestrial life has existed for only a very brief period—from 1937 onwards, if the invention of the radio telescope is taken as the dividing line—and Homo sapiens is a geologically recent species. The whole period of modern human existence to date is a very brief period on a cosmological scale, and radio transmissions have only been propagated since 1895. Thus, it remains possible that human beings have neither existed long enough nor made themselves sufficiently detectable to be found by extraterrestrial intelligence.

Intelligent life may be too far away

NASA's conception of the Terrestrial Planet Finder

It may be that non-colonizing technologically capable alien civilizations exist, but that they are simply too far apart for meaningful two-way communication. Sebastian von Hoerner estimated the average duration of civilization at 6,500 years and the average distance between civilizations in the Milky Way at 1,000 light years. If two civilizations are separated by several thousand light-years, it is possible that one or both cultures may become extinct before meaningful dialogue can be established. Human searches may be able to detect their existence, but communication will remain impossible because of distance. It has been suggested that this problem might be ameliorated somewhat if contact and communication is made through a Bracewell probe. In this case at least one partner in the exchange may obtain meaningful information. Alternatively, a civilization may simply broadcast its knowledge, and leave it to the receiver to make what they may of it. This is similar to the transmission of information from ancient civilizations to the present, and humanity has undertaken similar activities like the Arecibo message, which could transfer information about Earth's intelligent species, even if it never yields a response or does not yield a response in time for humanity to receive it. It is possible that observational signatures of self-destroyed civilizations could be detected, depending on the destruction scenario and the timing of human observation relative to it.

A related speculation by Sagan and Newman suggests that if other civilizations exist, and are transmitting and exploring, their signals and probes simply have not arrived yet, i.e., that humans are a relatively early civilization. However, critics have noted that this is unlikely, since it requires that humanity's advancement has occurred at a very special point in time, while the Milky Way is in transition from empty to full. This is a tiny fraction of the lifespan of a galaxy under ordinary assumptions, so the likelihood that humanity is in the midst of this transition is considered low in the paradox. In 2021, Hanson et al. reconsidered this likelihood and concluded it is indeed plausible when assuming that many civilizations are "grabby", i.e. displace other civilizations. Under this assumption there is a selection effect of the sort that provided we exist and are not (yet) destroyed by grabby aliens, we are very unlikely to observe aliens. Specifically, grabby aliens imply a typical civilizational expansion rate at nearly the speed of light, because otherwise many other civilizations would be visible. The window between the appearance of a detectable alien technosignature and our extinction would be vanishingly short on cosmological timescales, making it likely that we are observing before that period.

Some SETI skeptics may also believe that humanity is at a very special point of time—specifically, a transitional period from no space-faring societies to one space-faring society, namely that of human beings.

Intelligent life exists buried below the surfaces of ice planets

Planetary scientist Alan Stern put forward the idea that there could be a number of worlds with subsurface oceans (such as Jupiter's Europa or Saturn's Enceladus). The surface would provide a large degree of protection from such things as cometary impacts and nearby supernovae, as well as creating a situation in which a much broader range of orbital configurations is capable of supporting life. Life, and potentially intelligence and civilization, could evolve below the surface of such a planet, but be very hard to detect, insofar as it is generally only possible to observe the surface of planets from space. Stern states, "If they have technology, and let's say they're broadcasting, or they have city lights or whatever—we can't see it in any part of the spectrum, except maybe very-low-frequency [radio]." Moreover, such a civilization may have great difficulty getting to space, insofar as even getting to the surface of their world could present a considerable engineering challenge involving tunneling through many kilometres of ice. This may severely hamper their ability to communicate with us.

Advanced civilizations may limit their search for life to technological signatures

If life is abundant in the universe but the cost of space travel is high, an advanced civilization may choose to focus its search not on signs of life in general, but on those of other advanced civilizations, and specifically on radio signals. Since humanity has only recently begun to use radio communication, its signals may have yet to arrive at other inhabited planets, and if they have, probes from those planets may have yet to arrive on Earth.

Willingness to communicate

Everyone is listening but no one is transmitting

Alien civilizations might be technically capable of contacting Earth, but could be only listening instead of transmitting. If all or most civilizations act in the same way, the galaxy could be full of civilizations eager for contact, but everyone is listening and no one is transmitting. This is the so-called SETI Paradox. The only civilization known, humanity, does not explicitly transmit, except for a few small efforts.

Alien governments are choosing not to respond

Even these limited efforts, and certainly any attempt to expand them, are controversial. It is not even clear humanity would respond to a detected signal—the official policy within the SETI community is that "[no] response to a signal or other evidence of extraterrestrial intelligence should be sent until appropriate international consultations have taken place". However, given the possible impact of any reply, it may be very difficult to obtain any consensus on whether to reply, and if so, who would speak and what they would say. It is therefore quite possible that an alien civilization led by cautious decision-makers might conclude that not responding is the soundest option. Moreover, as the only observed civilization does not have a planetary central government capable of making a binding decision about a response, alien civilizations, themselves divided into various political units without a central decision-making authority, may be aware of our existence and technically capable of responding, but cannot agree on whether and/or how to do so.

Communication is dangerous

An alien civilization might feel it is too dangerous to communicate, either for humanity or for them. It is argued that when very different civilizations have met on Earth, the results have often been disastrous for one side or the other, and the same may well apply to interstellar contact. Even contact at a safe distance could lead to infection by computer code or even ideas themselves. Perhaps prudent civilizations actively hide not only from Earth but from everyone, out of fear of other civilizations.

Perhaps the Fermi paradox itself, however aliens may conceive of it, is reason enough for any civilization to avoid contact with other civilizations, even if no other obstacles existed. From any one civilization's point of view, it would be unlikely to be the first to attempt contact. According to this reasoning, previous civilizations likely faced fatal problems upon first contact, so attempting it should be avoided. Thus perhaps every civilization keeps quiet because of the possibility that there is a real reason for others to do so.

In 1987, science fiction author Greg Bear explored this concept in his novel The Forge of God. In The Forge of God, humanity is likened to a baby crying in a hostile forest: "There once was an infant lost in the woods, crying its heart out, wondering why no one answered, drawing down the wolves." One of the characters explains, "We've been sitting in our tree chirping like foolish birds for over a century now, wondering why no other birds answered. The galactic skies are full of hawks, that's why. Planetisms that don't know enough to keep quiet, get eaten."

In Liu Cixin's 2008 novel The Dark Forest, the author proposes a literary explanation for the Fermi paradox in which countless alien civilizations exist, but are both silent and paranoid, destroying any nascent lifeforms loud enough to make themselves known. This is because any other intelligent life may represent a future threat. As a result, Liu's fictional universe contains a plethora of quiet civilizations which do not reveal themselves, as in a "dark forest" filled with "armed hunter(s) stalking through the trees like a ghost". This idea has come to be known as the dark forest hypothesis.

Earth is deliberately being avoided

The zoo hypothesis states that intelligent extraterrestrial life exists and does not contact life on Earth to allow for its natural evolution and development as a sort of cosmic closed nature reserve. A variation on the zoo hypothesis is the laboratory hypothesis, where humanity has been or is being subject to experiments, with Earth or the Solar System effectively serving as a laboratory. The zoo hypothesis may break down under the uniformity of motive flaw: all it takes is a single culture or civilization (or even a faction or rogue actor within one) to decide to act contrary to the interplanetary consensus, and the probability of such a violation of hegemony increases with the number of civilizations, tending not towards a galactic league with a single policy towards Earth, but towards multiple competing factions. However, if artificial superintelligences are paramount in galactic politics, and such intelligences tend towards consolidation behind a central authority, then this would at least partially address the uniformity of motive flaw by dissuading rogue behavior.

Analysis of the inter-arrival times between civilizations in the galaxy based on common astrobiological assumptions suggests that the initial civilization would have a commanding lead over the later arrivals, inasmuch as it has had time to assert control over resources, and settle the best planets (assuming similar biological needs to competitors). As such, it may have established what has been termed the zoo hypothesis through force or as a galactic or universal norm and the resultant "paradox" by a cultural founder effect with or without the continued activity of the founder. Some colonization scenarios predict spherical expansion across star systems, with continued expansion coming from the systems just previously settled. It has been suggested that this would cause a strong selection process among colonists, favoring cultural, biological, or political adaptation to living aboard spacecraft or space habitats for long periods of time; as a result, they may only settle a very limited number of the highest-quality planets, or simply stay aboard their ships and forgo planets entirely. This may result in a lack of interest in colonization, instead focusing on planets only as a destructible source of non-renewable resources. Alternatively, they may have an ethic of protection for "nursery worlds", and protect them without intervening. Moreover, having developed spaceborne habitation sufficient to support their needs, they may obtain resources through asteroid mining and mostly ignore terrestrial worlds insofar as they require a much greater expenditure of fuel and resources to make it to land on for mining compared to smaller objects.

It is possible that a civilization advanced enough to travel between planetary systems could be actively visiting or observing Earth while remaining undetected or unrecognized. Following this logic, and building on arguments that other proposed solutions to the Fermi paradox may be implausible, Ian Crawford and Dirk Schulze-Makuch have argued that technological civilizations are either very rare in the galaxy or are deliberately hiding from us.

Earth is deliberately being isolated

A related idea to the zoo hypothesis is that, beyond a certain distance, the perceived universe is a simulated reality. The planetarium hypothesis speculates that beings may have created this simulation so that the universe appears to be empty of other life.

Conspiracy theories: alien life is already here, unacknowledged and/or deliberately concealed

A significant fraction of the population believes that at least some UFOs (Unidentified Flying Objects) are spacecraft piloted by aliens. While most of these are unrecognized or mistaken interpretations of mundane phenomena, some occurrences remain puzzling even after investigation. The scientific consensus is that although they may be unexplained, they do not rise to the level of convincing evidence.

Similarly, it is theoretically possible that SETI groups are not reporting positive detections, or governments have been blocking signals or suppressing publication. This response might be attributed to security or economic interests from the potential use of advanced extraterrestrial technology. It has been suggested that the detection of an extraterrestrial radio signal or technology could well be the most highly secret information that exists. Claims that this has already happened are common in the popular press, but the scientists involved report the opposite experience—the press becomes informed and interested in a potential detection even before a signal can be confirmed.

Regarding the idea that aliens are in secret contact with governments, David Brin writes, "Aversion to an idea, simply because of its long association with crackpots, gives crackpots altogether too much influence."

Nanoinformatics

Nanoinformatics is the application of informatics to nanotechnology. It is an interdisciplinary field that develops methods and software tools for understanding nanomaterials, their properties, and their interactions with biological entities, and using that information more efficiently. It differs from cheminformatics in that nanomaterials usually involve nonuniform collections of particles that have distributions of physical properties that must be specified. The nanoinformatics infrastructure includes ontologies for nanomaterials, file formats, and data repositories.

Nanoinformatics has applications for improving workflows in fundamental research, manufacturing, and environmental health, allowing the use of high-throughput data-driven methods to analyze broad sets of experimental results. Nanomedicine applications include analysis of nanoparticle-based pharmaceuticals for structure–activity relationships in a similar manner to bioinformatics.

Background

Context of nanoinformatics as a convergence of science and practice at the nexus of safety, health, well-being, and productivity; risk management; and emerging nanotechnology.

While conventional chemicals are specified by their chemical composition and concentration, nanoparticles have other physical properties that must be measured for a complete description, such as size, shape, surface properties, crystallinity, and dispersion state. In addition, preparations of nanoparticles are often non-uniform, having distributions of these properties that must also be specified. These molecular-scale properties influence their macroscopic chemical and physical properties, as well as their biological effects. They are important in both the experimental characterization of nanoparticles and their representation in an informatics system. The context of nanoinformatics is that effective development and implementation of potential applications of nanotechnology requires the harnessing of information at the intersection of safety, health, well-being, and productivity; risk management; and emerging nanotechnology.
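
The difference from conventional chemicals can be sketched as a data record: where a chemical is described by composition and concentration, a nanomaterial record must also carry physical descriptors, with non-uniformity expressed as a distribution rather than a single value. The field names below are purely illustrative and do not come from any published standard.

```python
from dataclasses import dataclass

@dataclass
class ChemicalRecord:
    # A conventional chemical: composition and concentration suffice.
    formula: str
    concentration_mg_per_l: float

@dataclass
class NanomaterialRecord(ChemicalRecord):
    # A nanomaterial additionally needs physical descriptors, and because
    # preparations are non-uniform, size is a distribution, not a scalar.
    shape: str = "sphere"
    mean_diameter_nm: float = 0.0
    diameter_std_nm: float = 0.0
    surface_coating: str = "none"
    crystallinity: str = "amorphous"

sample = NanomaterialRecord(
    formula="TiO2", concentration_mg_per_l=10.0,
    shape="sphere", mean_diameter_nm=21.0, diameter_std_nm=5.0,
    crystallinity="anatase/rutile",
)
# The coefficient of variation summarizes how non-uniform the preparation is.
cv = sample.diameter_std_nm / sample.mean_diameter_nm
```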

A graphical representation of a working definition of nanoinformatics as a life-cycle process

One working definition of nanoinformatics developed through the community-based Nanoinformatics 2020 Roadmap and subsequently expanded is:

  • Determining which information is relevant to meeting the safety, health, well-being, and productivity objectives of the nanoscale science, engineering, and technology community;
  • Developing and implementing effective mechanisms for collecting, validating, storing, sharing, analyzing, modeling, and applying the information;
  • Confirming that appropriate decisions were made and that desired mission outcomes were achieved as a result of that information; and finally
  • Conveying experience to the broader community, contributing to generalized knowledge, and updating standards and training.

Data representations

Although nanotechnology is the subject of significant experimentation, much of the data are not stored in standardized formats or broadly accessible. Nanoinformatics initiatives seek to coordinate developments of data standards and informatics methods.

Ontologies

An overview of the eNanoMapper nanomaterial ontology

In the context of information science, an ontology is a formal representation of knowledge within a domain, using hierarchies of terms including their definitions, attributes, and relations. Ontologies provide a common terminology in a machine-readable framework that facilitates sharing and discovery of data. Having an established ontology for nanoparticles is important for cancer nanomedicine due to the need of researchers to search, access, and analyze large amounts of data.

The NanoParticle Ontology is an ontology for the preparation, chemical composition, and characterization of nanomaterials involved in cancer research. It uses the Basic Formal Ontology framework and is implemented in the Web Ontology Language. It is hosted by the National Center for Biomedical Ontology and maintained on GitHub. The eNanoMapper Ontology is more recent and reuses wherever possible already existing domain ontologies. As such, it reuses and extends the NanoParticle Ontology, but also the BioAssay Ontology, Experimental Factor Ontology, Unit Ontology, and ChEBI.
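
The practical value of an ontology's term hierarchy can be illustrated with a toy is-a structure: a search for a broad term must also return records annotated with its narrower descendants. The terms and parent links below are a simplified sketch, not actual NanoParticle Ontology content.

```python
# Toy is-a hierarchy in the spirit of a nanomaterial ontology.
# The terms and parent links are illustrative only.
PARENT = {
    "gold nanoparticle": "metal nanoparticle",
    "metal nanoparticle": "nanoparticle",
    "dendrimer": "nanoparticle",
    "nanoparticle": "nanomaterial",
    "nanomaterial": None,
}

def ancestors(term):
    """Walk parent links to the root, returning the is-a chain."""
    chain = []
    while (term := PARENT.get(term)) is not None:
        chain.append(term)
    return chain

def is_a(term, ancestor):
    """True if `ancestor` subsumes `term` -- the query an ontology-backed
    database answers when a search for a broad class must also return
    records annotated with narrower terms."""
    return ancestor in ancestors(term)
```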

File formats

Flowchart depicting the ways to identify different components of a material sample to guide the creation of an ISA-TAB-Nano Material file

ISA-TAB-Nano is a set of four spreadsheet-based file formats for representing and sharing nanomaterial data, based on the ISA-TAB metadata standard. In Europe, other templates have been adopted that were developed by the Institute of Occupational Medicine, and by the Joint Research Centre for the NANoREG project.
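
Because ISA-TAB-Nano files are spreadsheet-style tab-delimited tables, producing one is mechanically simple; the sketch below writes a minimal material table in that spirit. The column names here are invented for illustration, not the normative ISA-TAB-Nano schema, which defines its own Investigation, Study, Assay, and Material file layouts.

```python
import csv
import io

# Write a minimal tab-delimited material table in the spirit of the
# spreadsheet-based ISA-TAB-Nano formats. Column names are illustrative.
rows = [
    {"Material Name": "NP-1", "Chemical Name": "TiO2",
     "Mean Size (nm)": "21", "Shape": "sphere"},
    {"Material Name": "NP-2", "Chemical Name": "Ag",
     "Mean Size (nm)": "40", "Shape": "rod"},
]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]), delimiter="\t")
writer.writeheader()
writer.writerows(rows)
tsv = buf.getvalue()
```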

Tools

Nanoinformatics is not limited to aggregating and sharing information about nanotechnologies, but has many complementary tools, some originating from cheminformatics and bioinformatics.

Databases and repositories

Over the past decade, various publicly available nanomaterials databases and repositories have been constructed to support nanoinformatics and toxicology modelling. These databases often store standardised physicochemical, biological, and toxicological data on engineered nanomaterials and offer model-ready datasets to the scientific community to enable data reuse. Given the limited availability of data for nanoinformatics tasks, the curation of large datasets and their storage in accessible repositories is prioritised to support computational modelling, regulatory assessment, and data-driven research.

caNanoLab, developed by the U.S. National Cancer Institute, focuses on nanotechnologies related to biomedicine. The NanoMaterials Registry, maintained by RTI International, is a curated database of nanomaterials, and includes data from caNanoLab.

The eNanoMapper database, a project of the EU NanoSafety Cluster, is a deployment of the database software developed in the eNanoMapper project. It has since been used in other settings, such as the EU Observatory for NanoMaterials (EUON).

Other databases include the Center for the Environmental Implications of NanoTechnology's NanoInformatics Knowledge Commons (NIKC) and NanoDatabank, PEROSH's Nano Exposure & Contextual Information Database (NECID), Data and Knowledge on Nanomaterials (DaNa), NanoPharos and Springer Nature's Nano database.

Applications

Nanoinformatics has applications for improving workflows in fundamental research, manufacturing, and environmental health, allowing the use of high-throughput data-driven methods to analyze broad sets of experimental results.

Nanoinformatics is especially useful in nanoparticle-based cancer diagnostics and therapeutics. Nanoparticles are very diverse in nature due to the combinatorially large number of chemical and physical modifications that can be made to them, which can cause drastic changes in their functional properties. This leads to a combinatorial complexity that far exceeds, for example, that of genomic data. Nanoinformatics can enable structure–activity relationship modelling for nanoparticle-based drugs. Nanoinformatics and biomolecular nanomodeling provide a route for effective cancer treatment. Nanoinformatics also enables a data-driven approach to the design of materials to meet health and environmental needs.
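
The scale of this combinatorial complexity can be sketched with a back-of-envelope count: every number below is an assumed, illustrative count of design choices, but the multiplicative growth is the point.

```python
from math import prod

# Illustrative (assumed, not measured) numbers of options per design
# axis for a nanoparticle formulation.
design_axes = {
    "core material": 10,
    "size class": 8,
    "shape": 5,
    "surface coating": 20,
    "targeting ligand": 15,
}

# Each axis multiplies the number of distinct formulations.
variants = prod(design_axes.values())  # 10 * 8 * 5 * 20 * 15
```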

Modeling and NanoQSAR

Viewed as a workflow process, nanoinformatics deconstructs experimental studies using data, metadata, controlled vocabularies and ontologies to populate databases so that trends, regularities and theories will be uncovered for use as predictive computational tools. Models are involved at each stage, some material (experiments, reference materials, model organisms) and some abstract (ontology, mathematical formulae), and all intended as a representation of the target system. Models can be used in experimental design, may substitute for experiment or may simulate how a complex system changes over time.

At present, nanoinformatics is an extension of bioinformatics due to the great opportunities for nanotechnology in medical applications, as well as to the importance of regulatory approvals to product commercialization. In these cases, the model targets may be physico-chemical, estimating a property from structure (quantitative structure–property relationship, QSPR); biological, predicting biological activity from molecular structure (quantitative structure–activity relationship, QSAR); or dynamic, simulating the time course of a substance in an organism (physiologically based toxicokinetics, PBTK). Each of these has been explored for small-molecule drug development with a supporting body of literature.

Particles differ from molecular entities, especially in having surfaces, which challenge both nomenclature systems and QSAR/PBTK model development. For example, particles do not exhibit an octanol–water partition coefficient, a central descriptor in QSAR/PBTK models; and they may dissolve in vivo or have band gaps. Illustrative of current QSAR and PBTK models are those of Puzyn et al. and Bachler et al. The OECD has codified regulatory acceptance criteria, and there are guidance roadmaps with supporting workshops to coordinate international efforts.
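
A minimal sketch of the nanoQSAR idea: a one-descriptor linear model, in the spirit of Puzyn et al.'s metal-oxide model relating log(1/EC50) to a cation-formation-enthalpy descriptor, fitted with the closed-form least-squares solution. All numbers below are synthetic, for illustration only.

```python
# Synthetic descriptor values (e.g., a formation-enthalpy descriptor)
# and synthetic observed activities log(1/EC50). Illustrative data only.
desc = [1.0, 2.0, 3.0, 4.0, 5.0]
activ = [0.9, 2.1, 2.9, 4.2, 5.1]

# Closed-form ordinary least squares for a single descriptor:
# slope b = cov(x, y) / var(x), intercept a = mean(y) - b * mean(x).
n = len(desc)
mx = sum(desc) / n
my = sum(activ) / n
b = sum((x - mx) * (y - my) for x, y in zip(desc, activ)) / \
    sum((x - mx) ** 2 for x in desc)
a = my - b * mx

def predict(x):
    """Predicted activity for a new descriptor value."""
    return a + b * x
```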

Communities

Communities active in nanoinformatics include the European Union NanoSafety Cluster, The U.S. National Cancer Institute National Cancer Informatics Program's Nanotechnology Working Group, and the US–EU Nanotechnology Communities of Research.

Nanoinformatics roles, responsibilities, and communication interfaces

Individuals who engage in nanoinformatics can be viewed as fitting across four categories of roles and responsibilities for nanoinformatics methods and data:

  • Customers, who need either the methods to create the data, the data itself, or both, and who specify the scientific applications and characterization methods and data needs for their intended purposes;
  • Creators, who develop relevant and reliable methods and data to meet the needs of customers in the nanotechnology community;
  • Curators, who maintain and ensure the quality of the methods and associated data; and
  • Analysts, who develop and apply methods and models for data analysis and interpretation that are consistent with the quality and quantity of the data and that meet customers' needs.

In some instances, the same individuals perform all four roles. More often, many individuals must interact, with their roles and responsibilities extending over significant distances, organizations, and time. Effective communication is important across each of the twelve links (in both directions across each of the six pairwise interactions) that exist among the various customers, creators, curators, and analysts.

History

One of the first mentions of nanoinformatics was in the context of handling information about nanotechnology.

An early international workshop with substantial discussion of the need for sharing all types of information on nanotechnology and nanomaterials was the First International Symposium on Occupational Health Implications of Nanomaterials held 12–14 October 2004 at the Palace Hotel, Buxton, Derbyshire, UK. The workshop report included a presentation on Information Management for Nanotechnology Safety and Health that described the development of a Nanoparticle Information Library (NIL) and noted that efforts to ensure the health and safety of nanotechnology workers and members of the public could be substantially enhanced by a coordinated approach to information management. The NIL subsequently served as an example for web-based sharing of characterization data for nanomaterials.

In 2009, the National Cancer Institute prepared a rough vision of what was then still called nanotechnology informatics, outlining various aspects of what nanoinformatics should comprise. This was later followed by two roadmaps detailing existing solutions, needs, and ideas on how the field should further develop: the Nanoinformatics 2020 Roadmap and the EU US Roadmap Nanoinformatics 2030.

A 2013 workshop on nanoinformatics described current resources and community needs, and proposed a collaborative framework for data sharing and information integration.

Fine-tuned universe

The fine-tuned universe is the hypothesis that, because "life as we know it" could not exist if the constants of nature—such as the electron charge or the gravitational constant—had been even slightly different, the universe must be tuned specifically for life. In practice, this hypothesis is formulated in terms of dimensionless physical constants.

History

In 1913, chemist Lawrence Joseph Henderson wrote The Fitness of the Environment, one of the first books to explore fine tuning in the universe. Henderson discusses the importance of water and the environment to living things, pointing out that life as it exists on Earth depends entirely on Earth's very specific environmental conditions, especially the prevalence and properties of water.

In 1961, physicist Robert H. Dicke argued that certain forces in physics, such as gravity and electromagnetism, must be perfectly fine-tuned for life to exist in the universe.

Astronomer Fred Hoyle argued for a fine-tuned universe: "From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of [...] and your fixing would have to be just where these levels are actually found to be. [...] A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature." In his 1983 book The Intelligent Universe, Hoyle wrote, "The list of anthropic properties, apparent accidents of a non-biological nature without which carbon-based and hence human life could not exist, is large and impressive."

The desire to resolve the fine-tuning problem led to the expectation that the Large Hadron Collider would produce evidence of physics beyond the Standard Model, such as supersymmetry, but by 2012 it had not produced evidence for supersymmetry at the energy scales it was able to probe. This absence leaves the fine-tuning problem unresolved, leading some physicists to suggest that the Standard Model may fundamentally require fine-tuning rather than a natural explanation.

Motivation

Physicist Paul Davies said: "There is now broad agreement among physicists and cosmologists that the Universe is in several respects 'fine-tuned' for life. But the conclusion is not so much that the Universe is fine-tuned for life; rather it is fine-tuned for the building blocks and environments that life requires". He also said that "'anthropic' reasoning fails to distinguish between minimally biophilic universes, in which life is permitted, but only marginally possible, and optimally biophilic universes, in which life flourishes because biogenesis occurs frequently". Among scientists who find the evidence persuasive, a variety of natural explanations have been proposed, such as the existence of multiple universes introducing a survivorship bias under the anthropic principle.

The premise of the fine-tuned universe assertion is that a small change in several of the physical constants would make the universe radically different. Stephen Hawking observed: "The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. ... The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life".

For example, if the strong nuclear force were 2% stronger than it is (i.e. if the coupling constant representing its strength were 2% larger) while the other constants were left unchanged, diprotons would be stable; according to Davies, hydrogen would fuse into them instead of deuterium and helium. This would drastically alter the physics of stars, and presumably preclude the existence of life similar to what we observe on Earth. The diproton's existence would short-circuit the slow fusion of hydrogen into deuterium. Hydrogen would fuse so easily that it is likely that all the universe's hydrogen would be consumed in the first few minutes after the Big Bang. This "diproton argument" is disputed by other physicists, who calculate that as long as the increase in strength is less than 50%, stellar fusion could occur despite the existence of stable diprotons.

The precise formulation of the idea is made difficult by the fact that it is not yet known how many independent physical constants there are. The Standard Model of particle physics has 25 freely adjustable parameters and general relativity has one more, the cosmological constant, which is known to be nonzero but profoundly small in value. Because physicists have not developed an empirically successful theory of quantum gravity, there is no known way to combine quantum mechanics, on which the standard model depends, and general relativity.

Without knowledge of this more complete theory suspected to underlie the standard model, it is impossible to definitively count the number of truly independent physical constants. In some candidate theories, the number of independent physical constants may be as small as one. For example, the cosmological constant may be a fundamental constant but attempts have also been made to calculate it from other constants, and according to the author of one such calculation, "the small value of the cosmological constant is telling us that a remarkably precise and totally unexpected relation exists among all the parameters of the Standard Model of particle physics, the bare cosmological constant and unknown physics".

Examples

Martin Rees formulates the fine-tuning of the universe in terms of the following six dimensionless physical constants.

  • N, the ratio of the electromagnetic force to the gravitational force between a pair of protons, is approximately 10³⁶. According to Rees, if it were significantly smaller, only a small and short-lived universe could exist. If it were significantly larger, protons would repel one another so violently that larger atomic nuclei would never form.
  • Epsilon (ε), a measure of the nuclear efficiency of fusion from hydrogen to helium, is 0.007: when four nucleons fuse into helium, 0.007 (0.7%) of their mass is converted to energy. The value of ε is in part determined by the strength of the strong nuclear force. If ε were 0.006, a proton could not bond to a neutron; only hydrogen could exist, and complex chemistry would be impossible. According to Rees, if it were above 0.008, no hydrogen would exist, as all the hydrogen would have been fused shortly after the Big Bang. Other physicists disagree, calculating that substantial hydrogen remains as long as the strong force coupling constant increases by less than about 50%.
  • Omega (Ω), commonly known as the density parameter, is the relative importance of gravity and expansion energy in the universe. It is the ratio of the mass density of the universe to the "critical density" and is approximately 1. If gravity were too strong compared with dark energy and the initial cosmic expansion rate, the universe would have collapsed before life could have evolved. If gravity were too weak, no stars would have formed.
  • Lambda (Λ), commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, Λ is on the order of 10⁻¹²². This is so small that it has no significant effect on cosmic structures that are smaller than a billion light-years across. A slightly larger value of the cosmological constant would have caused space to expand rapidly enough that stars and other astronomical structures would not be able to form.
  • Q, the ratio of the gravitational energy required to pull a large galaxy apart to the energy equivalent of its mass, is around 10⁻⁵. If it is too small, no stars can form. If it is too large, no stars can survive because the universe is too violent, according to Rees.
  • D, the number of spatial dimensions in spacetime, is 3. Rees claims that life could not exist if there were 2 or 4 spatial dimensions. Rees argues this does not preclude the existence of ten-dimensional strings.
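
Rees's ε can be made concrete with a back-of-envelope calculation of E = εmc²: fusing hydrogen into helium releases 0.7% of the rest mass as energy. The comparison figure of roughly 3×10⁷ J per kilogram of coal is an approximate assumption for scale.

```python
# Energy released by fusing 1 kg of hydrogen into helium, using
# Rees's efficiency epsilon = 0.007 and E = epsilon * m * c^2.
c = 2.998e8          # speed of light, m/s
epsilon = 0.007      # fraction of rest mass converted to energy
m = 1.0              # mass of hydrogen fused, kg

energy_joules = epsilon * m * c ** 2   # roughly 6.3e14 J

# Approximate chemical energy of 1 kg of coal (~3e7 J, an assumed
# round figure) shows fusion is tens of millions of times denser.
ratio = energy_joules / 3e7
```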

Max Tegmark argued that if there is more than one time dimension, then physical systems' behavior could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, protons and electrons would be unstable and could decay into particles having greater mass than themselves. This is not a problem if the particles have a sufficiently low temperature.

Carbon and oxygen

An older example is the Hoyle state, the third-lowest energy state of the carbon-12 nucleus, with an energy of 7.656 MeV above the ground level. According to one calculation, if the state's energy level were lower than 7.3 or greater than 7.9 MeV, insufficient carbon would exist to support life. To explain the universe's abundance of carbon, the Hoyle state must be further tuned to a value between 7.596 and 7.716 MeV. A similar calculation, focusing on the underlying fundamental constants that give rise to various energy levels, concludes that the strong force must be tuned to a precision of at least 0.5%, and the electromagnetic force to a precision of at least 4%, to prevent either carbon production or oxygen production from dropping significantly.
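
The quoted Hoyle-state window can be checked arithmetically: the tuning range spans 0.12 MeV around a level 7.656 MeV above the ground state, i.e. about 1.6% of the level's energy.

```python
# Width of the Hoyle-state tuning window quoted above, relative to the
# state's energy of 7.656 MeV above the carbon-12 ground state.
lo, hi, hoyle = 7.596, 7.716, 7.656

window_mev = hi - lo             # 0.12 MeV wide
fractional = window_mev / hoyle  # about 1.6% of the level's energy
```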

Explanations

Some explanations of fine-tuning are naturalistic. First, the fine-tuning might be an illusion: more fundamental physics may explain the apparent fine-tuning in physical parameters in the current understanding by constraining the values those parameters are likely to take. As Lawrence Krauss put it, "certain quantities have seemed inexplicable and fine-tuned, and once we understand them, they don't seem to be so fine-tuned. We have to have some historical perspective". Victor J. Stenger has shown that random selection of physical parameters can still produce universes capable of harboring life. Some argue it is possible that a final fundamental theory of everything will explain the underlying causes of the apparent fine-tuning in every parameter.

Still, as modern cosmology developed, various hypotheses not presuming hidden order have been proposed. One is a multiverse, where fundamental physical constants are postulated to have different values outside of the known universe. On this hypothesis, separate parts of reality would have wildly different characteristics. In such scenarios, the appearance of fine-tuning is explained as a consequence of the weak anthropic principle and selection bias, specifically survivorship bias. Only those universes with fundamental constants hospitable to life, such as on Earth, could contain life forms capable of observing the universe who can contemplate the question of fine-tuning. Zhi-Wei Wang and Samuel L. Braunstein argue that the apparent fine-tuning of fundamental constants could be due to the lack of understanding of these constants.

Multiverse

If the universe is just one of many (possibly infinitely many) universes, each with different physical phenomena and constants, it is unsurprising that there is a universe hospitable to intelligent life. Some versions of the multiverse hypothesis therefore provide a simple explanation for any fine-tuning, while the analysis of Wang and Braunstein challenges the view that this universe is unique in its ability to support life.

The multiverse idea has led to considerable research into the anthropic principle and has been of particular interest to particle physicists because theories of everything do apparently generate large numbers of universes in which the physical constants vary widely. Although there is no evidence for the existence of a multiverse, some versions of the theory make predictions for which some researchers studying M-theory and gravity leaks hope to see evidence soon. According to Laura Mersini-Houghton, the WMAP cold spot could provide testable empirical evidence of a parallel universe. Variants of this approach include Lee Smolin's notion of cosmological natural selection, the ekpyrotic universe, and the bubble universe theory.

It has been suggested that invoking the multiverse to explain fine-tuning is a form of the inverse gambler's fallacy.

Top-down cosmology

Stephen Hawking and Thomas Hertog proposed that the universe's initial conditions consisted of a superposition of many possible initial conditions, only a small fraction of which contributed to the conditions seen today. According to the top-down cosmology theory, the universe's "fine-tuned" physical constants are inevitable, because the universe "selects" only those histories that led to the present conditions. In this way, top-down cosmology provides an anthropic explanation for why this universe allows matter and life without invoking the multiverse.

Carbon chauvinism

Some forms of fine-tuning arguments about the formation of life assume that only carbon-based life forms are possible, an assumption sometimes called carbon chauvinism. Conceptually, alternative biochemistry or other forms of life are possible.

Simulation hypothesis

The simulation hypothesis holds that the universe is fine-tuned simply because the more technologically advanced simulation operator(s) programmed it that way.

No improbability

Graham Priest, Mark Colyvan, Jay L. Garfield, and others have argued against the presupposition that "the laws of physics or the boundary conditions of the universe could have been other than they are".

Theistic

Some scientists, theologians, and philosophers, as well as certain religious groups, argue that providence or creation are responsible for fine-tuning. Christian philosopher Alvin Plantinga argues that random chance, applied to a single and sole universe, only raises the question as to why this universe could be so "lucky" as to have precise conditions that support life at least at some place (the Earth) and time (within millions of years of the present).

One reaction to these apparent enormous coincidences is to see them as substantiating the theistic claim that the universe has been created by a personal God and as offering the material for a properly restrained theistic argument – hence the fine-tuning argument. It's as if there are a large number of dials that have to be tuned to within extremely narrow limits for life to be possible in our universe. It is extremely unlikely that this should happen by chance, but much more likely that this should happen if there is such a person as God.

— Alvin Plantinga, "The Dawkins Confusion: Naturalism ad absurdum"

William Lane Craig, a philosopher and Christian apologist, cites this fine-tuning of the universe as evidence for the existence of God or some form of intelligence capable of manipulating (or designing) the basic physics that governs the universe. Philosopher and theologian Richard Swinburne reaches the design conclusion using Bayesian probability. Scientist and theologian Alister McGrath observed that the fine-tuning of carbon is even responsible for nature's ability to tune itself to any degree.

The entire biological evolutionary process depends upon the unusual chemistry of carbon, which allows it to bond to itself, as well as other elements, creating highly complex molecules that are stable over prevailing terrestrial temperatures, and are capable of conveying genetic information (especially DNA). [...] Whereas it might be argued that nature creates its own fine-tuning, this can only be done if the primordial constituents of the universe are such that an evolutionary process can be initiated. The unique chemistry of carbon is the ultimate foundation of the capacity of nature to tune itself.

Theoretical physicist and Anglican priest John Polkinghorne stated: "Anthropic fine tuning is too remarkable to be dismissed as just a happy accident".

Theologian and philosopher Andrew Loke argues that there are only five possible categories of hypotheses concerning fine-tuning and order: (i) chance, (ii) regularity, (iii) combinations of regularity and chance, (iv) uncaused, and (v) design, and that only design adequately explains the order in the universe. He further argues that the Kalam cosmological argument strengthens the teleological argument by answering the question "Who designed the Designer?".

Creationist Hugh Ross advances a number of fine-tuning hypotheses. One is the existence of what Ross calls "vital poisons", which are elemental nutrients that are harmful in large quantities but essential for animal life in smaller quantities.

Philosopher and theologian Robin Collins argues that theism entails the expectation that God would create a reality structured to make scientific discovery possible. Collins cites various physical constants as examples of the laws of physics being fine-tuned for scientific discovery: the fine-structure constant allows for efficient energy usage, the baryon-to-photon ratio allows the cosmic microwave background to be discovered, and the mass of the Higgs boson allows it to be detected.

Evolutionary biologist Richard Dawkins dismisses the theistic argument as "deeply unsatisfying" because it leaves the existence of God unexplained: a God capable of calculating the fine-tuning would be at least as improbable as the fine-tuning itself. Against this claim, it has been argued that theism is a simple hypothesis, allowing theists to deny that God is at least as improbable as the fine-tuning.

Douglas Adams satirized the theistic argument in his posthumously published 2002 book The Salmon of Doubt:

Imagine a puddle waking up one morning and thinking, "This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!"
