
Saturday, June 9, 2018

Technological singularity

From Wikipedia, the free encyclopedia

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good's "intelligence explosion" model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

Four polls conducted in 2012 and 2013 suggested that the median estimate was a one in two chance that artificial general intelligence (AGI) would be developed by 2040–2050, depending on the poll.[6][7]

In the 2010s public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated.

Manifestations

Intelligence explosion

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]

Emergence of superintelligence

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[5][11]

Non-AI singularity

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Plausibility

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

Claimed cause: exponential growth

Ray Kurzweil writes that, due to paradigm shifts, a trend of exponential growth extends Moore's law from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers. He predicts that the exponential growth will continue, and that in a few decades the computing power of all computers will exceed that of ("unenhanced") human brains, with superhuman artificial intelligence appearing around the same time.
 
An updated version of Moore's law over 120 Years (based on Kurzweil’s graph). The 7 most recent data points are all NVIDIA GPUs.

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[22]

Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[24]

Accelerating change

According to Kurzweil, his logarithmic graph of 15 lists of paradigm shifts for key historic events shows an exponential trend

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".[4][27]

Criticisms

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:
... There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. ...[16]
University of California, Berkeley, philosophy professor John Searle writes:
[Computers] have, literally ..., no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. ... [T]he machinery has no beliefs, desires, [or] motivations.[29]
Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be "routine".[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] Although Kurzweil drew on Modis's data, and Modis's own work concerned accelerating change, Modis distanced himself from Kurzweil's thesis of a "technological singularity", claiming that it lacks scientific rigor.[33]

Others[who?] propose that other "singularities" can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][37][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.[38]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[39]

Paul Allen argues for the opposite of accelerating returns, the complexity brake:[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns but, in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[40] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[34] The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an autonomous process."[41] He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."[41]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[42]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.[43] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[44]

Ramifications

Uncertainty and risk

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[45][46] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[47][48] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[45]

Next step of sociobiological evolution

Schematic Timeline of Information and Replicators in the Biosphere: Gillings et al.'s "major evolutionary transitions" in information processing.[49]
 
Amount of digital information worldwide (5x10^21 bytes) versus human genome information worldwide (10^19 bytes) in 2014.[49]
 
While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed] In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction". The article argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, "the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5x10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1x10^19 bytes. The digital realm stored 500 times more information than this in 2014 (...see Figure)... The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3x10^37 base pairs, equivalent to 1.325x10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".[49]
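The arithmetic in the quoted passage can be checked with a short back-of-the-envelope calculation. The sketch below is illustrative only; all constants come from the passage above, and the 38% growth rate is simply the top of the quoted 30–38% range.

```python
# Back-of-the-envelope check of the Gillings et al. figures quoted above.
import math

humans = 7.2e9                       # people on the planet (from the passage)
nucleotides = 6.2e9                  # nucleotides per human genome
bytes_per_genome = nucleotides / 4   # one byte encodes four nucleotide pairs

genomic_bytes = humans * bytes_per_genome
print(f"human genomic information: {genomic_bytes:.1e} bytes")        # ~1e19

digital_2014 = 5e21                  # ~5 zettabytes of stored digital data in 2014
print(f"digital vs. genomic: {digital_2014 / genomic_bytes:.0f}x")    # ~450x ("500 times" in the passage)

biosphere_bytes = 5.3e37 / 4         # ~5.3e37 DNA base pairs on Earth, 4 per byte
growth = 0.38                        # upper end of the quoted 30-38% annual growth
years = math.log(biosphere_bytes / digital_2014) / math.log(1 + growth)
print(f"years for digital storage to rival biosphere DNA: {years:.0f}")  # ~110
```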

Implications for human society

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[50]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[50]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[51][improper synthesis?]

Immortality

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[52] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after the creation of synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[53]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor".

The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[54]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".[55] Singularitarianism has also been likened to a religion by John Horgan.[56]

History of the concept

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[3]

In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence. In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[4][57]
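To make the "infinity point" arithmetic explicit (a sketch of the argument as summarized above, not Solomonoff's original notation): if each successive doubling of speed takes half as long as the previous one, the infinite sequence of doublings fits inside a finite span of time, since the total time is a convergent geometric series.

```latex
% Sketch of the "infinity point" series: the n-th doubling takes 4/2^{n-1} years,
% so infinitely many doublings complete within a finite 8 years.
\sum_{n=1}^{\infty} \frac{4}{2^{\,n-1}} \;=\; 4 + 2 + 1 + \tfrac{1}{2} + \cdots \;=\; 8 \ \text{years}
```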

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that obtains consciousness and starts to increase its own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them to lack internal logical consistency.

In 1983, Vinge greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines, writing:[58][59]
We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.
Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",[5] spread widely on the internet and helped to popularize the idea.[60] This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart.[61]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.[13][62] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[63] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In politics

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[64][65][66]

Former President of the United States Barack Obama spoke about the singularity in his 2016 interview with Wired:[67]
One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren't spending a lot of time right now worrying about singularity—they are worrying about "Well, is my job going to be replaced by a machine?"

Interstellar travel

From Wikipedia, the free encyclopedia

A Bussard ramjet, one of many possible methods that could serve to propel a starship.

Interstellar travel is the term used for hypothetical crewed or uncrewed travel between stars or planetary systems. Interstellar travel will be much more difficult than interplanetary spaceflight; the distances between the planets in the Solar System are less than 30 astronomical units (AU), whereas the distances between stars are typically hundreds of thousands of AU and are usually expressed in light-years. Because of the vastness of those distances, interstellar travel would require either travel at a high percentage of the speed of light, huge travel times lasting from decades to millennia or longer, or a combination of both.

The speeds required for interstellar travel in a human lifetime far exceed what current methods of spacecraft propulsion can provide. Even with a hypothetically perfectly efficient propulsion system, the kinetic energy corresponding to those speeds is enormous by today's standards of energy production. Moreover, collisions between the spacecraft and cosmic dust and gas can produce very dangerous effects both for the passengers and for the spacecraft itself.

A number of strategies have been proposed to deal with these problems, ranging from giant arks that would carry entire societies and ecosystems, to microscopic space probes. Many different spacecraft propulsion systems have been proposed to give spacecraft the required speeds, including nuclear propulsion, beam-powered propulsion, and methods based on speculative physics.[1]

For both crewed and uncrewed interstellar travel, considerable technological and economic challenges need to be met. Even the most optimistic views about interstellar travel see it as only being feasible decades from now—the more common view is that it is a century or more away. However, in spite of the challenges, if interstellar travel should ever be realized, then a wide range of scientific benefits can be expected.[2]

Most interstellar travel concepts require a developed space logistics system capable of moving millions of tons to a construction / operating location, and most would require gigawatt-scale power for construction or operation (such as Starwisp or light-sail type concepts). Such a system could grow organically if space-based solar power became a significant component of Earth's energy mix. Consumer demand for a multi-terawatt system would automatically create the necessary multi-million ton/year logistical system.[3]

Challenges

Interstellar distances

Distances between the planets in the Solar System are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 1.5×10^8 kilometers (93 million miles). Venus, the closest other planet to Earth, is (at closest approach) 0.28 AU away. Neptune, the farthest planet from the Sun, is 29.8 AU away. As of January 2018, Voyager 1, the farthest man-made object from Earth, is 141.5 AU away.[4]

The closest known star, Proxima Centauri, however, is some 268,332 AU away, or over 9,000 times farther away than Neptune.

Object                                         Distance (AU)   Light travel time
Moon                                           0.0026          1.3 seconds
Sun                                            1               8 minutes
Venus (nearest planet)                         0.28            2.41 minutes
Neptune (farthest planet)                      29.8            4.1 hours
Voyager 1                                      141.5           19.61 hours
Proxima Centauri (nearest star and exoplanet)  268,332         4.24 years

Because of this, distances between stars are usually expressed in light-years, defined as the distance that a ray of light travels in a year. Light in a vacuum travels around 300,000 kilometres (186,000 mi) per second, so one light-year is some 9.461×10^12 kilometers (5.879 trillion miles), or 63,241 AU. Proxima Centauri is 4.243 light-years away.

Another way of understanding the vastness of interstellar distances is by scaling: One of the closest stars to the Sun, Alpha Centauri A (a Sun-like star), can be pictured by scaling down the Earth–Sun distance to one meter (3.28 ft). On this scale, the distance to Alpha Centauri A would be 276 kilometers (171 miles).

The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/600 of a light-year in 30 years and is currently moving at 1/18,000 the speed of light. At this rate, a journey to Proxima Centauri would take 80,000 years.[5]
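As a quick sanity check on those figures, here is a minimal arithmetic sketch using only the numbers quoted in this section:

```python
# Travel time to Proxima Centauri at Voyager 1's current speed (figures from the text).
proxima_ly = 4.24                  # distance in light-years
voyager_fraction_c = 1 / 18_000    # Voyager 1's speed as a fraction of light speed

years = proxima_ly / voyager_fraction_c
print(f"~{years:,.0f} years")      # ~76,000 years, i.e. on the order of the 80,000 cited
```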

Required energy

A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy K = (1/2)mv^2, where m is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled, to mv^2.[6]

The velocity for a manned round trip of a few decades to even the nearest star is several thousand times greater than that of present space vehicles. This means that, due to the v^2 term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 450 petajoules or 4.50×10^17 joules or 125 terawatt-hours[7] (world energy consumption in 2008 was 143,851 terawatt-hours),[citation needed] without factoring in the efficiency of the propulsion mechanism. This energy has to be generated onboard from stored fuel, harvested from the interstellar medium, or projected over immense distances.
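A minimal sketch of that energy estimate, using the classical kinetic-energy formula from the previous paragraph (relativistic corrections at 0.1 c are only about 1%):

```python
# Kinetic-energy lower bound for accelerating one metric ton to 0.1 c.
c = 299_792_458.0        # speed of light, m/s
m = 1_000.0              # payload mass, kg (one metric ton)
v = 0.1 * c

K = 0.5 * m * v**2
print(f"energy: {K:.2e} J  (~{K / 3.6e15:.0f} TWh)")           # ~4.5e17 J, ~125 TWh
print(f"with engine deceleration at arrival: {2 * K:.2e} J")   # lower bound doubles
```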

Interstellar medium

A knowledge of the properties of the interstellar gas and dust through which the vehicle must pass is essential for the design of any interstellar space mission.[8] A major issue with traveling at extremely high speeds is that interstellar dust may cause considerable damage to the craft, due to the high relative speeds and large kinetic energies involved. Various shielding methods to mitigate this problem have been proposed.[9] Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects, and methods of mitigating these risks, have been discussed in the literature, but many unknowns remain[10] and, owing to the inhomogeneous distribution of interstellar matter around the Sun, will depend on direction travelled.[8] Although a high density interstellar medium may cause difficulties for many interstellar travel concepts, interstellar ramjets, and some proposed concepts for decelerating interstellar spacecraft, would actually benefit from a denser interstellar medium.[8]

Hazards

The crew of an interstellar ship would face several significant hazards, including the psychological effects of long-term isolation, the effects of exposure to ionizing radiation, and the physiological effects of weightlessness on the muscles, joints, bones, immune system, and eyes. There also exists the risk of impact by micrometeoroids and other space debris. These risks represent challenges that have yet to be overcome.[11]

Wait calculation

It has been argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and has not yet reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate).[12] On the other hand, Andrew Kennedy has shown that if one calculates the journey time to a given destination while the achievable travel speed keeps increasing with growth (even exponential growth), there is a clear minimum in the total time to that destination measured from now.[13] Voyages undertaken before the minimum will be overtaken by those who leave at the minimum, whereas those who leave after the minimum will never overtake those who left at the minimum.
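A minimal numerical sketch of Kennedy's trade-off follows. The growth rate, starting speed, and distance below are illustrative assumptions, not figures from his paper; the point is only that "wait time plus travel time" has an interior minimum when achievable speed keeps growing.

```python
# Illustrative "wait calculation": total time = years waited before launch
# plus travel time at the cruise speed achievable at launch.
def total_time(wait_years: float,
               distance_ly: float = 10.0,       # assumed target distance
               v0: float = 0.001,               # assumed speed achievable today (fraction of c)
               growth: float = 0.02) -> float:  # assumed annual growth in achievable speed
    v = min(v0 * (1 + growth) ** wait_years, 0.99)  # cap just below light speed
    return wait_years + distance_ly / v

best_wait = min(range(0, 1000), key=total_time)
print(f"optimal wait: {best_wait} years; arrival in {total_time(best_wait):.0f} years")
# Launching immediately would take ~10,000 years; waiting ~270 years gets you
# there in roughly 320 years total, and launching later than that arrives later.
```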

Prime targets for interstellar travel

There are 59 known stellar systems within 40 light years of the Sun, containing 81 visible stars. The following could be considered prime targets for interstellar missions:[12]

System Distance (ly) Remarks
Alpha Centauri 4.3 Closest system. Three stars (G2, K1, M5). Component A is similar to the Sun (a G2 star). On August 24, 2016, the discovery of an Earth-size exoplanet (Proxima Centauri b) orbiting in the habitable zone of Proxima Centauri was announced.
Barnard's Star 6 Small, low-luminosity M5 red dwarf. Second closest to Solar System.
Sirius 8.7 Large, very bright A1 star with a white dwarf companion.
Epsilon Eridani 10.8 Single K2 star slightly smaller and colder than the Sun. It has two asteroid belts, might have a giant and one much smaller planet,[14] and may possess a Solar-System-type planetary system.
Tau Ceti 11.8 Single G8 star similar to the Sun. High probability of possessing a Solar-System-type planetary system: current evidence shows 5 planets with potentially two in the habitable zone.
Wolf 1061 ~14 Wolf 1061 c is 4.3 times the size of Earth; it may have rocky terrain. It also sits within the ‘Goldilocks’ zone where it might be possible for liquid water to exist.[15]
Gliese 581 planetary system 20.3 Multiple planet system. The unconfirmed exoplanet Gliese 581g and the confirmed exoplanet Gliese 581d are in the star's habitable zone.
Gliese 667C 22 A system with at least six planets. A record-breaking three of these planets are super-Earths lying in the zone around the star where liquid water could exist, making them possible candidates for the presence of life.[16]
Vega 25 A very young system possibly in the process of planetary formation.[17]
TRAPPIST-1 39 A recently discovered system which boasts 7 Earth-like planets, some of which may have liquid water. The discovery is a major advancement in finding a habitable planet and in finding a planet that could support life.

Existing and near-term astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.

Proposed methods

Slow, uncrewed probes

Slow interstellar missions based on current and near-future propulsion technologies are associated with trip times ranging from about one hundred years to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes such as those used in the Voyager program.[18] By taking along no crew, the cost and complexity of the mission is significantly reduced, although technology lifetime remains a significant issue alongside obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus, Project Dragonfly, Project Longshot,[19] and more recently Breakthrough Starshot.[20]

Fast, uncrewed probes

Nanoprobes

Near-lightspeed nano spacecraft, built on existing microchip technology with a newly developed nanoscale thruster, might be possible within the near future. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space.[21]

Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that, because very small probes are easily deflected by magnetic fields, micrometeorites and other dangers, a large number of nanoprobes would need to be sent to ensure that at least one survives the journey and reaches the destination.[22]

Given the light weight of these probes, it would take much less energy to accelerate them. With onboard solar cells, they could continually accelerate using solar power. One can envision a day when a fleet of millions or even billions of these particles swarm to distant stars at nearly the speed of light and relay signals back to Earth through a vast interstellar communication network.

As a near-term solution, small, laser-propelled interstellar probes, based on current CubeSat technology, were proposed in the context of Project Dragonfly.[19]

Slow, manned missions

In crewed missions, the duration of a slow interstellar journey presents a major obstacle, and existing concepts deal with this problem in different ways.[23] They can be distinguished by the "state" in which humans are transported on board the spacecraft.

Generation ships

A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises.[24][25][26][27]

Suspended animation

Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage.[28]

Frozen embryos

A robotic interstellar mission carrying some number of frozen early stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents.[29]

Island hopping through interstellar space

Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way.[30]

Fast missions

If a spaceship could average 10 percent of light speed (and decelerate at the destination, for manned missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts have been proposed[31] that might eventually be developed to accomplish this (see also the section below on propulsion methods), but none of them is ready for near-term (a few decades) development at acceptable cost.

Time dilation

Assuming faster-than-light travel is impossible, one might conclude that a human can never make a round-trip farther from Earth than 20 light years if the traveler is active between the ages of 20 and 60. A traveler would never be able to reach more than the very few star systems that exist within the limit of 20 light years from Earth. This, however, fails to take into account relativistic time dilation.[32] Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time (see diagram). Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth.
For example, a spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03g (i.e. 10.1 m/s2) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit, the astronaut could return to Earth the same way. After the full round-trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch.

From the viewpoint of the astronaut, onboard clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 light years per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light years as measured by the astronaut.

At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light year per Earth year, so, when back home, the astronaut will find that more than 60 thousand years will have passed on Earth.
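The figures in this section follow from the standard relativistic-rocket relations. The sketch below reproduces the Galactic-centre example (constant 1 g proper acceleration for the first half of each leg, 1 g deceleration for the second half); it is a simplified check, not a mission design.

```python
# Relativistic rocket check: 30,000 light-years at constant 1 g, accelerating
# for the first half of the trip and decelerating for the second half.
# Units: c = 1, distances in light-years, times in years.
import math

a = 1.03          # 1 g expressed in ly/yr^2 (9.8 m/s^2 is about 1.03 ly/yr^2)
d = 30_000.0      # one-way distance to the Galactic centre, ly

# Ship (proper) time and Earth (coordinate) time for one one-way leg.
tau_one_way = 2 / a * math.acosh(1 + a * d / 2)
t_one_way = 2 / a * math.sinh(a * tau_one_way / 2)

print(f"ship time, round trip:  {2 * tau_one_way:5.1f} years")    # ~40 years
print(f"Earth time, round trip: {2 * t_one_way:,.0f} years")      # ~60,000 years
```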

Constant acceleration


This plot shows that a ship capable of 1 g (10 m/s^2, or about 1.0 ly/yr^2) "felt" or proper acceleration[33] can go far, except for the problem of accelerating on-board propellant.

Regardless of how it is achieved, a propulsion system that could produce acceleration continuously from departure to arrival would be the fastest method of travel. A constant acceleration journey is one where the propulsion system accelerates the ship at a constant rate for the first half of the journey, and then decelerates for the second half, so that it arrives at the destination stationary relative to where it began. If this were performed with an acceleration similar to that experienced at the Earth's surface, it would have the added advantage of producing artificial "gravity" for the crew. Supplying the energy required, however, would be prohibitively expensive with current technology.[34]

From the perspective of a planetary observer, the ship will appear to accelerate steadily at first, but then more gradually as it approaches the speed of light (which it cannot exceed). It will undergo hyperbolic motion.[35] The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey.

From the perspective of an onboard observer, the crew will feel a gravitational field opposite the engine's acceleration, and the universe ahead will appear to fall in that field, undergoing hyperbolic motion. As part of this, distances between objects in the direction of the ship's motion will gradually contract until the ship begins to decelerate, at which time an onboard observer's experience of the gravitational field will be reversed.

When the ship reaches its destination, if it were to exchange a message with its origin planet, it would find that less time had elapsed on board than had elapsed for the planetary observer, due to time dilation and length contraction.

The result is an impressively fast journey for the crew.

Propulsion

Rocket concepts

All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass.
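Written out explicitly, this is the Tsiolkovsky rocket equation (standard non-relativistic form):

```latex
% Tsiolkovsky rocket equation: achievable change in velocity as a function
% of exhaust velocity v_e and the initial-to-final mass ratio M_0 / M_1.
\Delta v \;=\; v_e \,\ln\frac{M_0}{M_1}
```

Because the mass ratio enters only logarithmically, a delta-v of ten times the exhaust velocity already demands a mass ratio of e^10 ≈ 22,000, which is why exhaust velocity dominates every interstellar rocket concept discussed below.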

Very high specific power, the ratio of thrust to total vehicle mass, is required to reach interstellar targets within sub-century time-frames.[36] Some heat transfer is inevitable and a tremendous heating load must be adequately handled.

Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle.[37]

Ion engine

Ion engines are a type of electric propulsion used by spacecraft such as Dawn. In an ion engine, electric power is used to create charged particles of the propellant, usually the gas xenon, and accelerate them to extremely high velocities. The exhaust velocity of conventional rockets is limited by the chemical energy stored in the fuel's molecular bonds, which limits it to about 5 km/s. This gives them high thrust (for lift-off from Earth, for example) but limits the top speed. By contrast, ion engines have low force, but the top speed in principle is limited only by the electrical power available on the spacecraft and by the gas ions being accelerated. The exhaust speed of the charged particles ranges from 15 km/s to 35 km/s.[38]

Nuclear fission powered

Fission-electric
Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles probably have the potential to power Solar System exploration with reasonable trip times within the current century. Because of their low-thrust propulsion, they would be limited to off-planet, deep-space operation. Electrically powered spacecraft propulsion driven by a portable power source, say a nuclear reactor, produces only small accelerations; it would take centuries to reach, for example, 15% of the velocity of light, making it unsuitable for interstellar flight during a single human lifetime.[39]
Fission-fragment
Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of up to 12,000 km/s (7,500 mi/s). With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel, which limits the effective exhaust velocity to about 5% of the velocity of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so that no extra reaction mass needs to be accounted for in the mass ratio.
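A rough way to see where that 5% figure comes from (a non-relativistic estimate, assuming essentially all of the released energy ends up as kinetic energy of the exhaust): if a fraction ε of the fuel's mass-energy is released, the exhaust velocity is bounded by roughly

```latex
% Energy balance (1/2) m v_e^2 ~ eps * m c^2 gives an exhaust-velocity bound.
v_e \;\lesssim\; c\sqrt{2\varepsilon}
    \;\approx\; c\sqrt{2 \times 0.001} \;\approx\; 0.045\,c
    \qquad (\text{fission, } \varepsilon \approx 0.1\%)
```

The same estimate applied to fusion's 0.3–0.9% yield gives an upper bound of roughly 8–13% of c, consistent with the 4–10% range quoted for fusion rockets below once real-world losses (for example to neutrons) are included.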
Nuclear pulse

Modern Pulsed Fission Propulsion Concept.

Based on work from the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. ones driven by a series of nuclear explosions. This propulsion system offers the prospect of very high specific impulse (space travel's equivalent of fuel economy) and high specific power.[40]

Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion that used pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v, allowing a flight time to Alpha Centauri of 130 years.[41] Later studies indicate that the top cruise velocity that can theoretically be achieved by a Teller-Ulam thermonuclear unit powered Orion starship, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08-0.1c).[42] An atomic (fission) Orion can achieve perhaps 3%-5% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would similarly be in the 10% range, and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light. In each case, saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant; this would allow the ship to travel near the maximum theoretical velocity.[43] Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion. The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight.

In the 1970s the nuclear pulse propulsion concept was further refined by Project Daedalus through the use of externally triggered inertial confinement fusion, in this case producing fusion explosions by compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes.[44]

A current impediment to the development of any nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space. This treaty would, therefore, need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station.

Nuclear fusion rockets


Daedalus interstellar vehicle.

Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light.[45] These would "burn" such light element fuels as deuterium, tritium, 3He, 11B, and 7Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases less than 0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of c. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries.

Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II",[46] designed and optimized for crewed Solar System exploration, based on the D-3He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of greater than 300 km/s with an acceleration of ~1.7×10^-3 g, with a ship initial mass of ~1700 metric tons and a payload fraction above 10%. Although these are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark towards what may be approachable within several decades, which is not impossibly beyond the current state-of-the-art. Based on the concept's 2.2% burnup fraction, it could achieve a pure fusion product exhaust velocity of ~3,000 km/s.

Antimatter rockets

An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket.[31] If energy resources and efficient production methods are found to make antimatter in the quantities required and store[47][48] it safely, it would be theoretically possible to reach speeds of several tens of percent that of light.[31] Whether antimatter propulsion could lead to the higher speeds (>90% that of light) at which relativistic time dilation would become more noticeable, thus making time pass at a slower rate for the travelers as perceived by an outside observer, is doubtful owing to the large quantity of antimatter that would be required.[31]

Assuming that production and storage of antimatter become feasible, two further issues need to be considered. First, in the annihilation of antimatter, much of the energy is lost as high-energy gamma radiation, and especially also as neutrinos, so that only about 40% of mc^2 would actually be available if the antimatter were simply allowed to annihilate into radiation thermally.[31] Even so, the energy available for propulsion would be substantially higher than the ~1% of mc^2 yield of nuclear fusion, the next-best rival candidate.

Second, heat transfer from the exhaust to the vehicle seems likely to transfer enormous wasted energy into the ship (e.g. for 0.1 g ship acceleration, approaching 0.3 trillion watts per ton of ship mass), considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming shielding were provided to protect the payload (and passengers on a crewed vehicle), some of the energy would inevitably heat the vehicle, and may thereby prove a limiting factor if useful accelerations are to be achieved.

More recently, Friedwardt Winterberg proposed that a matter-antimatter GeV gamma ray laser photon rocket is possible by a relativistic proton-antiproton pinch discharge, where the recoil from the laser beam is transmitted by the Mössbauer effect to the spacecraft.[49]

Rockets with an external energy source

Rockets deriving their power from external sources, such as a laser, could replace their internal energy source with an energy collector, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis has proposed an interstellar probe with energy supplied by an external laser from a base station powering an ion thruster.[50]

Non-rocket concepts

A problem with all traditional rocket propulsion methods is that the spacecraft would need to carry its fuel with it, thus making it very massive, in accordance with the rocket equation. Several concepts attempt to escape from this problem:[31][51]

Interstellar ramjets

In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton chain reaction, and expel it out of the back. Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design.[citation needed] Yet the idea is attractive because the fuel would be collected en route (commensurate with the concept of energy harvesting), so the craft could theoretically accelerate to near the speed of light. The limitation is due to the fact that the reaction can only accelerate the propellant to 0.12c. Thus the drag of catching interstellar dust and the thrust of accelerating that same dust to 0.12c would be the same when the speed is 0.12c, preventing further acceleration.

Beamed propulsion


This diagram illustrates Robert L. Forward's scheme for slowing down an interstellar light-sail at the star system destination.

A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse-propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar light sail in the destination star system without requiring a laser array to be present in that system. In this scheme, a smaller secondary sail is deployed to the rear of the spacecraft, whereas the large primary sail is detached from the craft to keep moving forward on its own. Light is reflected from the large primary sail to the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload.[52] In 2002, Geoffrey A. Landis of NASA's Glenn Research Center also proposed a laser-powered propulsion sail ship that would host a diamond sail (a few nanometers thick) powered with the use of solar energy.[53] With this proposal, this interstellar ship would, theoretically, be able to reach 10 percent of the speed of light.

A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and the interstellar medium.[54][55]

The following table lists some example concepts using beamed laser propulsion as proposed by the physicist Robert L. Forward:[56]

Mission | Laser power | Vehicle mass | Acceleration | Sail diameter | Maximum velocity (% of the speed of light)
1. Flyby – Alpha Centauri, 40 years
  outbound stage | 65 GW | 1 t | 0.036 g | 3.6 km | 11% @ 0.17 ly
2. Rendezvous – Alpha Centauri, 41 years
  outbound stage | 7,200 GW | 785 t | 0.005 g | 100 km | 21% @ 4.29 ly[dubious]
  deceleration stage | 26,000 GW | 71 t | 0.2 g | 30 km | 21% @ 4.29 ly
3. Manned – Epsilon Eridani, 51 years (including 5 years exploring star system)
  outbound stage | 75,000,000 GW | 78,500 t | 0.3 g | 1000 km | 50% @ 0.4 ly
  deceleration stage | 21,500,000 GW | 7,850 t | 0.3 g | 320 km | 50% @ 10.4 ly
  return stage | 710,000 GW | 785 t | 0.3 g | 100 km | 50% @ 10.4 ly
  deceleration stage | 60,000 GW | 785 t | 0.3 g | 100 km | 50% @ 0.4 ly

Interstellar travel catalog using photogravitational assists for a full stop

The following table is based on work by Heller, Hippke and Kervella.[57]

Name | Travel time (yr) | Distance (ly) | Luminosity (L☉)
Sirius A | 68.90 | 8.58 | 24.20
α Centauri A | 101.25 | 4.36 | 1.52
α Centauri B | 147.58 | 4.36 | 0.50
Procyon A | 154.06 | 11.44 | 6.94
Vega | 167.39 | 25.02 | 50.05
Altair | 176.67 | 16.69 | 10.70
Fomalhaut A | 221.33 | 25.13 | 16.67
Denebola | 325.56 | 35.78 | 14.66
Castor A | 341.35 | 50.98 | 49.85
Epsilon Eridani | 363.35 | 10.50 | 0.50
  • Successive assists at α Cen A and B could allow travel times of 75 yr to both stars.
  • The light sail has a nominal mass-to-surface ratio (σnom) of 8.6×10⁻⁴ g m⁻² for a nominal graphene-class sail.
  • Area of the light sail: about 10⁵ m² = (316 m)²
  • Velocity up to 37,300 km s⁻¹ (12.5% of c)
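
As a quick consistency check, the sail mass implied by the nominal figures above is tiny. A minimal sketch, using only the mass-to-surface ratio and area quoted in the notes:

```python
# Minimal sketch: bare-sail mass implied by the nominal figures in the notes
# above (graphene-class mass-to-surface ratio and ~10^5 m^2 area).

sigma_nom = 8.6e-4    # g per m^2
area = 1e5            # m^2, roughly a (316 m)^2 square

sail_mass_g = sigma_nom * area
print(f"sail mass ~ {sail_mass_g:.0f} g")   # ~86 g for the bare sail
```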

Pre-accelerated fuel

Achieving start-stop interstellar trip times of less than a human lifetime requires mass ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-staged vehicles on a vast scale.[45] Alternatively, large linear accelerators could propel fuel to fission-propelled space vehicles, avoiding the limitations of the rocket equation.[58]
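
The quoted mass-ratio range can be illustrated with the classical Tsiolkovsky rocket equation, which is an adequate approximation below roughly 0.2c. The sketch below is illustrative only: the exhaust velocities are assumed values for fusion-class drives, not figures from the cited studies.

```python
# Minimal sketch of why start-stop trips need huge mass ratios, using the
# classical Tsiolkovsky rocket equation (adequate below ~0.2c).

import math

def mass_ratio(delta_v_c: float, v_exhaust_c: float) -> float:
    """Tsiolkovsky mass ratio; delta-v and exhaust velocity given in units of c."""
    return math.exp(delta_v_c / v_exhaust_c)

# Accelerate to 0.1c and brake back to a stop: total delta-v ~ 0.2c.
for v_e in (0.03, 0.02, 0.015):   # assumed exhaust velocities, in units of c
    print(f"v_e = {v_e:g} c  ->  mass ratio ~ {mass_ratio(0.2, v_e):,.0f}")
# ~786, ~22,000 and ~620,000 respectively - spanning the 10^3..10^6 range quoted above.
```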

Theoretical concepts

Faster-than-light travel


Artist's depiction of a hypothetical Wormhole Induction Propelled Spacecraft, based loosely on the 1994 "warp drive" paper of Miguel Alcubierre. Credit: NASA CD-98-76634 by Les Bossinas.

Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light, but even the most serious-minded of these are highly speculative.[59]

It is also debatable whether faster-than-light travel is physically possible, in part because of causality concerns: travel faster than light may, under certain conditions, permit travel backwards in time within the context of special relativity.[60] Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter[59] and it is not known if this could be produced in sufficient quantity.

Alcubierre drive

In physics, the Alcubierre drive is based on an argument, within the framework of general relativity and without the introduction of wormholes, that it is possible to modify spacetime in a way that allows a spaceship to travel with an arbitrarily large speed by a local expansion of spacetime behind the spaceship and an opposite contraction in front of it.[61] Nevertheless, this concept would require the spaceship to incorporate a region of exotic matter, or the hypothetical property of negative mass.[61]

Artificial black hole

A theoretical idea for enabling interstellar travel is to propel a starship by creating an artificial black hole and using a parabolic reflector to reflect its Hawking radiation. Although beyond current technological capabilities, a black hole starship offers some advantages compared to other possible methods. Getting the black hole to act as a power source and engine also requires a way to convert the Hawking radiation into energy and thrust. One potential method involves placing the hole at the focal point of a parabolic reflector attached to the ship, creating forward thrust. A slightly easier, but less efficient, method would involve simply absorbing all the gamma radiation heading towards the fore of the ship to push it onwards, and letting the rest shoot out the back.[62][63][64]
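
The available power can be estimated from the standard (photon-only) Hawking radiation formula, P = ħc⁶ / (15360 π G² M²). The sketch below is illustrative: the million-tonne hole mass is an assumption, not a figure from the cited proposals, and real emission would include additional particle species.

```python
# Minimal sketch: Hawking radiation power of a small black hole and the ideal
# thrust if all of it could be collimated rearward.  Uses the standard
# photon-only formula; the 10^6-tonne mass is an illustrative assumption.

import math

HBAR = 1.0546e-34   # J*s
C = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2

def hawking_power(mass_kg: float) -> float:
    return HBAR * C**6 / (15360 * math.pi * G**2 * mass_kg**2)

M = 1e9                        # kg, i.e. one million tonnes (assumed)
P = hawking_power(M)
print(f"power  ~ {P:.2e} W")                                  # ~3.6e14 W
print(f"thrust ~ {P / C:.2e} N (if perfectly collimated)")    # ~1.2e6 N
```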

Wormholes

Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe, across an Einstein–Rosen bridge. It is not known whether wormholes are possible in practice. Although there are solutions to the Einstein equations of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical.[65] However, Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by a cosmic string.[66] The general theory of wormholes is discussed by Visser in the book Lorentzian Wormholes.[67]

Hyperdrive

If Felber's conjecture is correct, i.e. that any mass moving at 57.7% of c generates an anti-gravity beam, then an Orion drive might serve as the initial boost for an as-yet-undiscovered technology, based on theorized anti-matter repulsion effects under exotic conditions such as entanglement, that directly manipulates space-time to open a window into hyperspace. This could feasibly exploit physics in extradimensional space to travel very quickly through the galaxy. A side effect is that, because a second vehicle would be needed to slow down, it would be a one-way trip, although other concepts such as a solar sail could be used to decelerate near the destination.[68] All of the components could feasibly be re-used, and they would also provide very effective SETI targets if other species are using this technology. It is also likely that hyperdrive jumps would be limited to discrete areas, such as near stars, due to gravitational wells.

Designs and studies

Enzmann starship

The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of Analog, was a design for a future starship based on the ideas of Robert Duncan-Enzmann. The spacecraft itself, as proposed, would use a 12,000,000-ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units. Twice as long as the Empire State Building and assembled in orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems.[69]

Project Hyperion

Project Hyperion is one of the projects of Icarus Interstellar.[70]

NASA research

NASA has been researching interstellar travel since its formation, translating important foreign-language papers and conducting early studies on applying fusion propulsion (in the 1960s) and laser propulsion (in the 1970s) to interstellar travel.

The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.")[71] identified some breakthroughs that are needed for interstellar travel to be possible.[72]

Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri if it passed through the system. Slowing down to stop at Alpha Centauri could increase the trip to 100 years,[73] whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by.

100 Year Starship study

The 100 Year Starship (100YSS) is the name of the overall effort that will, over the next century, work toward achieving interstellar travel. The 100 Year Starship study is the name of a one-year project to assess the attributes of, and lay the groundwork for, an organization that can carry forward the 100 Year Starship vision.

Harold ("Sonny") White[74] from NASA's Johnson Space Center is a member of Icarus Interstellar,[75] the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million with the aim of helping to make interstellar travel possible.[76]

Other designs

Non-profit organizations

A few organisations dedicated to interstellar propulsion research and advocacy exist worldwide. These are still in their infancy, but are already backed by a membership drawn from a wide variety of scientists, students and professionals.

Feasibility

The energy requirements make interstellar travel very difficult. It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System.[87] Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated that at least 100 times the total energy output of the entire world [in a given year] would be required to send a probe to the nearest star.[87]

Astrophysicist Sten Odenwald stated that the basic problem is that, from intensive studies of thousands of detected exoplanets, most of the closest destinations within 50 light years do not appear to offer Earth-like planets in their stars' habitable zones.[88] Given the multi-trillion-dollar expense of some of the proposed technologies, travelers will have to spend up to 200 years traveling at 20% of the speed of light to reach the best known destinations. Moreover, once the travelers arrive at their destination (by any means), they will not be able to travel down to the surface of the target world and set up a colony unless the atmosphere is non-lethal. The prospect of making such a journey, only to spend the rest of the colony's life inside a sealed habitat and venturing outside only in a spacesuit, may eliminate many prospective targets from the list.

Moving at a speed close to the speed of light and encountering even a tiny stationary object like a grain of sand would have fatal consequences. For example, a gram of matter moving at 90% of the speed of light carries kinetic energy comparable to that of a small nuclear bomb (around 30 kt of TNT).
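
The 30 kt figure follows from the relativistic kinetic-energy formula E = (γ − 1)mc². A minimal sketch of the arithmetic:

```python
# Minimal sketch: relativistic kinetic energy of one gram at 0.9c, compared
# with the TNT equivalent quoted above.

import math

C = 2.998e8          # m/s
KT_TNT = 4.184e12    # J per kiloton of TNT

def kinetic_energy(mass_kg: float, beta: float) -> float:
    """Relativistic kinetic energy (J) for speed beta = v/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

e = kinetic_energy(1e-3, 0.9)
print(f"{e:.2e} J  ~  {e / KT_TNT:.0f} kt TNT")   # ~1.2e14 J, roughly 28 kt
```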

Interstellar missions not for human benefit

Explorative high-speed missions to Alpha Centauri, as planned for by the Breakthrough Starshot initiative, are projected to be realizable within the 21st century.[89] It is alternatively possible to plan for unmanned slow-cruising missions taking millennia to arrive. These probes would not be for human benefit in the sense that one cannot foresee whether anybody on Earth would still be interested in the science data transmitted back. An example would be the Genesis mission,[90] which aims to bring unicellular life, in the spirit of directed panspermia, to habitable but otherwise barren planets.[91] Comparatively slow-cruising Genesis probes, with a typical speed of c/300, corresponding to about 1,000 km/s, can be decelerated using a magnetic sail. Unmanned missions not for human benefit would hence be feasible.[92]
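
At c/300, each light year of distance takes about 300 years to cover, which is where the "millennia" timescale comes from. A minimal sketch, using the distance of Alpha Centauri (about 4.37 ly) purely as an illustration:

```python
# Minimal sketch: cruise speed and travel time at the quoted c/300 (~1000 km/s).
# The 4.37 ly distance (roughly that of Alpha Centauri) is only an illustration.

C_KM_S = 299_792.458            # km/s
v = C_KM_S / 300                # ~999 km/s
print(f"speed ~ {v:.0f} km/s")

years_per_ly = 300              # at c/300, each light year takes 300 years
print(f"4.37 ly -> ~{4.37 * years_per_ly:.0f} years")   # ~1300 years, i.e. millennia
```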

Discovery of Earth-like planets

In February 2017, NASA announced the discovery of seven Earth-like planets in the TRAPPIST-1 system, orbiting an ultra-cool dwarf star 40 light-years away from our solar system.[93] NASA's Spitzer Space Telescope revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water. The discovery sets a new record for the greatest number of habitable-zone planets found around a single star outside our solar system. All seven of these planets could have liquid water – the key to life as we know it – under the right atmospheric conditions, but the chances are highest for the three in the habitable zone.

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...