Monday, March 29, 2021

Space exploration

Humans explore the lunar surface
 
This Kuiper belt object, known as Arrokoth, is the farthest closely visited Solar System body, seen in 2019

Space exploration is the use of astronomy and space technology to explore outer space. While the exploration of space is carried out mainly by astronomers with telescopes, its physical exploration is conducted both by uncrewed robotic space probes and by human spaceflight. Space exploration, like its classical form astronomy, is one of the main sources for space science.

While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.

The early era of space exploration was driven by a "Space Race" between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union's Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Alexei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971. After the first 20 years of exploration, focus shifted from one-off flights to reusable hardware, such as the Space Shuttle program, and from competition to cooperation, as with the International Space Station (ISS).

With the substantial completion of the ISS following STS-133 in March 2011, plans for space exploration by the U.S. remain in flux. Constellation, a Bush Administration program for a return to the Moon by 2020, was judged inadequately funded and unrealistic by an expert review panel reporting in 2009. The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions beyond LEO, such as to Earth–Moon L1, the Moon, Earth–Sun L2, near-Earth asteroids, and Phobos or Mars orbit.

In the 2000s, China initiated a successful crewed spaceflight program and India launched Chandrayaan-1, while the European Union and Japan have also planned future crewed space missions. China, Russia, and Japan have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated crewed missions to both the Moon and Mars during the 21st century.

From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon. Students interested in space have formed SEDS (Students for the Exploration and Development of Space). SpaceX is currently developing Starship, a fully reusable orbital launch vehicle that is expected to massively reduce the cost of spaceflight and allow for crewed planetary exploration.

History of exploration

A dark blue shaded diagram subdivided by horizontal lines, with the names of the five atmospheric regions arranged along the left. From bottom to top, the troposphere section shows Mount Everest and an airplane icon, the stratosphere displays a weather balloon, the mesosphere shows meteors, and the thermosphere includes an aurora and the Space Station. At the top, the exosphere shows only stars.
Most orbital flight actually takes place in upper layers of the atmosphere, especially in the thermosphere (not to scale)
 
In July 1950 the first Bumper rocket was launched from Cape Canaveral, Florida. The Bumper was a two-stage rocket consisting of a post-war V-2 topped by a WAC Corporal rocket. It could reach then-record altitudes of almost 400 km. Launched by the General Electric Company, the Bumper was used primarily for testing rocket systems and for research on the upper atmosphere. Bumper rockets carried small payloads that allowed them to measure attributes including air temperature and cosmic ray impacts.

Telescope

The first telescope is said to have been invented in 1608 in the Netherlands by an eyeglass maker named Hans Lippershey. The Orbiting Astronomical Observatory 2 was the first space telescope, launched on 7 December 1968. As of 2 February 2019, there were 3,891 confirmed exoplanets. The Milky Way is estimated to contain 100–400 billion stars and more than 100 billion planets. There are at least 2 trillion galaxies in the observable universe. GN-z11 is the most distant known object from Earth, its light having traveled for 13.4 billion years (a proper distance of about 32 billion light-years).

First outer space flights

Sputnik 1, the first artificial satellite, orbited Earth at altitudes of 215 to 939 km (134 to 583 mi) in 1957, and was soon followed by Sputnik 2 (replica pictured).
 
Apollo CSM in lunar orbit
 
Apollo 17 astronaut Harrison Schmitt standing next to a boulder at Taurus-Littrow.

In 1949, the Bumper-WAC reached an altitude of 393 kilometres (244 mi), becoming the first human-made object to enter space, according to NASA, although the German V-2 test launch MW 18014 had crossed the Kármán line earlier, in 1944.

The first successful orbital launch was of the Soviet uncrewed Sputnik 1 ("Satellite 1") mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted "beeps" that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.

First human outer space flight

The first successful human spaceflight was Vostok 1 ("East 1"), carrying the 27-year-old Russian cosmonaut, Yuri Gagarin, on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin's flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

First astronomical body space explorations

The first artificial object to reach another celestial body was Luna 2, reaching the Moon in 1959. The first soft landing on another celestial body was performed by Luna 9, landing on the Moon on 3 February 1966. Luna 10 became the first artificial satellite of the Moon, entering lunar orbit on 3 April 1966.

The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969, landing on the Moon. A total of six crewed spacecraft landed humans on the Moon, from 1969 to the last human landing in 1972.

The first interplanetary flyby was the 1961 Venera 1 flyby of Venus, though the 1962 Mariner 2 was the first flyby of Venus to return data (closest approach 34,773 kilometers). Pioneer 6 was the first satellite to orbit the Sun, launched on 16 December 1965. The other planets were first flown by: Mars in 1965 by Mariner 4, Jupiter in 1973 by Pioneer 10, Mercury in 1974 by Mariner 10, Saturn in 1979 by Pioneer 11, Uranus in 1986 by Voyager 2, and Neptune in 1989 by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively. This accounts for flybys of each of the eight planets in the Solar System, the Sun, the Moon, and Ceres and Pluto (two of the five recognized dwarf planets).

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7, which returned data to Earth for 23 minutes from Venus. In 1975, Venera 9 was the first to return images from the surface of another planet, Venus. In 1971 the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Much longer surface missions were later achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission. Venus and Mars are the only planets other than Earth on which humans have conducted surface missions with uncrewed robotic spacecraft.

First space station

Salyut 1 was the first space station of any kind, launched into low Earth orbit by the Soviet Union on 19 April 1971. The International Space Station is currently the only fully functional space station, inhabited continuously since the year 2000.

First interstellar space flight

Voyager 1 became the first human-made object to leave the Solar System into interstellar space on 25 August 2012. The probe passed the heliopause at 121 AU to enter interstellar space.

Farthest from Earth

The 1970 Apollo 13 flight passed the far side of the Moon at an altitude of 254 kilometers (158 miles; 137 nautical miles) above the lunar surface, and 400,171 km (248,655 mi) from Earth, setting the record for the farthest humans have ever traveled from Earth.

Voyager 1 was at a distance of 145.11 astronomical units (21.708 billion kilometers; 13.489 billion miles) from Earth as of 1 January 2019. It is the most distant human-made object from Earth.

GN-z11 is the most distant known object from Earth, reported as 13.4 billion light-years away.

Key people in early space exploration

The dream of stepping into the outer reaches of Earth's atmosphere was driven by the fiction of Jules Verne and H. G. Wells, and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961 when the Soviet Union launched the first man into space, the United States declared itself to be in a "Space Race" with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany's World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on their rocket development ("Operation Paperclip"). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA's Marshall Space Flight Center which developed the Saturn V moon rocket.

The race for space was initially led by Sergei Korolev, whose legacy includes both the R-7 and Soyuz, which remain in service to this day. Korolev was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and one of the lead architects behind the first human spaceflight (Vostok 1), alongside Sergey Korolev. After Korolev's death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their 1967 precursors, Cosmos 186 and Cosmos 188.

Other key people:

  • Valentin Glushko was Chief Engine Designer for the Soviet Union. Glushko designed many of the engines used on the early Soviet rockets, but was constantly at odds with Korolev.
  • Vasily Mishin was Chief Designer working under Sergey Korolev and one of the first Soviets to inspect the captured German V-2 design. Following the death of Sergei Korolev, Mishin was held responsible for the Soviet failure to be the first country to place a man on the Moon.
  • Robert Gilruth was the NASA head of the Space Task Force and director of 25 crewed space flights. Gilruth was the person who suggested to John F. Kennedy that the Americans take the bold step of reaching the Moon in an attempt to reclaim space superiority from the Soviets.
  • Christopher C. Kraft, Jr. was NASA's first flight director, who oversaw development of Mission Control and associated technologies and procedures.
  • Maxime Faget was the designer of the Mercury capsule; he played a key role in designing the Gemini and Apollo spacecraft, and contributed to the design of the Space Shuttle.

Targets of exploration

The Moon as seen in a digitally processed image from data collected during the 1992 Galileo spacecraft flyby

Starting in the mid-20th century, probes and then human missions were sent into Earth orbit, and then on to the Moon. Probes were also sent throughout the known Solar System, and into solar orbit. Uncrewed spacecraft had been sent into orbit around Saturn, Jupiter, Mars, Venus, and Mercury by the 21st century, and the most distant active spacecraft, Voyager 1 and 2, have traveled beyond 100 times the Earth–Sun distance. Their instruments lasted long enough that both are thought to have left the Sun's heliosphere, a sort of bubble of particles blown into the Galaxy by the Sun's solar wind.

The Sun

The Sun is a major focus of space exploration. Operating above the atmosphere, and beyond Earth's magnetic field in particular, gives access to the solar wind and to infrared and ultraviolet radiation that cannot reach Earth's surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched, and still others have had solar observation as a secondary objective. Parker Solar Probe, launched in 2018, will approach the Sun to within one-eighth the orbital radius of Mercury.

Mercury

MESSENGER image of Mercury (2013)
 
A MESSENGER image from 18,000 km showing a region about 500 km across (2008)

Mercury remains the least explored of the terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions had been the only missions to make close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).

A third mission to Mercury, BepiColombo, is to include two probes and is scheduled to arrive in 2025. BepiColombo is a joint mission between Japan and the European Space Agency. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10's flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
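
The delta-v arithmetic behind this claim can be sketched with the vis-viva equation. The following Python snippet is a minimal sketch, assuming ideal two-body, circular, coplanar orbits and a simple Hohmann transfer, and ignoring planetary escape, capture, and gravity assists; it illustrates why Mercury, despite being a near neighbour, costs more delta-v to reach from Earth's orbit than distant Jupiter does.

    import math

    MU_SUN = 1.32712440018e20   # Sun's standard gravitational parameter, m^3/s^2
    AU = 1.495978707e11         # astronomical unit in metres

    def hohmann_delta_v(r1, r2, mu=MU_SUN):
        # Total delta-v (m/s) for a Hohmann transfer between circular,
        # coplanar orbits of radii r1 and r2, derived from the vis-viva equation.
        dv_depart = abs(math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1))
        dv_arrive = abs(math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2))))
        return dv_depart + dv_arrive

    # Heliocentric transfers starting from Earth's orbit (1 AU):
    print(f"Mercury: {hohmann_delta_v(1.0 * AU, 0.387 * AU) / 1000:.1f} km/s")  # ~17 km/s
    print(f"Jupiter: {hohmann_delta_v(1.0 * AU, 5.203 * AU) / 1000:.1f} km/s")  # ~14 km/s

Dropping toward the Sun requires shedding most of Earth's 29.8 km/s orbital speed, which is why inward transfers are so expensive and why real Mercury missions lean heavily on gravity assists.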

Venus

Mariner 10 image of Venus (1974)

Venus was the first target of interplanetary flyby and lander missions and, despite having one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first flyby was the 1961 Venera 1, though the 1962 Mariner 2 was the first flyby to successfully return data. Mariner 2 was followed by several other flybys by multiple space agencies, often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus, and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9, some ten successful orbiter missions have been sent to Venus, including later missions that were able to map the surface of Venus using radar to pierce the obscuring atmosphere.

Earth

First television image of Earth from space, taken by TIROS-1. (1960)
 
The Blue Marble Earth picture taken during Apollo 17 (1972)

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.

For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States' first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth's magnetic field, which currently renders construction of habitable space stations above 1000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite exploring Earth's atmosphere, and satellites have allowed for the discovery of archeological sites and geological formations that were difficult or impossible to otherwise identify.

The Moon

The Moon (2010)
 
Apollo 16 LEM Orion, the Lunar Roving Vehicle and astronaut John Young (1972)

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.

In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon's surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.

Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however: the Apollo 17 mission in 1972 marked the sixth landing and the most recent human visit there. Artemis 2 is planned to fly by the Moon in 2022. Robotic missions are still pursued vigorously.

Mars

Mars, as seen by the Hubble Space Telescope (2003)
 
Surface of Mars by the Spirit rover (2004)

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.

The exploration of Mars has come at considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before their observations could even begin. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of the Great Galactic Ghoul, which subsists on a diet of Mars probes. This phenomenon is also informally known as the "Mars Curse". In contrast to the overall high failure rate in the exploration of Mars, India became the first country to succeed on its maiden attempt. India's Mars Orbiter Mission (MOM) is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million). The first mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it was launched in July 2020. The uncrewed probe, named Hope, was sent to Mars to study its atmosphere in detail.

Phobos

The Russian space mission Fobos-Grunt, launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit. It was intended to begin exploration of Phobos and of Martian orbit, and to study whether the moons of Mars, or at least Phobos, could be a "trans-shipment point" for spaceships traveling to Mars.

Asteroids

Asteroid 4 Vesta, imaged by the Dawn spacecraft (2011)

Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo's planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA's Dawn spacecraft, launched in 2007.

Hayabusa was a robotic spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid's shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid twice to collect samples. The spacecraft returned to Earth on 13 June 2010.

Jupiter

Jupiter, as seen by the Hubble Space Telescope (2019).

The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been "flybys", in which detailed observations are taken without the probe landing or entering orbit, as in the Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only ones to have entered orbit around the planet. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is precluded.

Reaching Jupiter from Earth requires a delta-v of 9.2 km/s, which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit. Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.

Jupiter has 79 known moons, about many of which relatively little is known.

Saturn

A picture of Saturn taken by Cassini (2004)
 
A view beneath the clouds of Titan, as seen in false colour, created from a mosaic of images taken by Cassini (2013)

Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, and in 1981 by Voyager 2, and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.

Saturn has at least 62 known moons, although the exact number is debatable since Saturn's rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan is also the only object in the outer Solar System that has been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.

Uranus

Uranus as imaged by Voyager 2 (1986)

The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet's unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and all five of the previously known moons, while discovering an additional ten previously unknown moons.

Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be unique, being profoundly affected by the planet's unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the moons of Uranus, including evidence that Miranda had been unusually geologically active.

Neptune

A picture of Neptune taken by Voyager 2 (1989)
 
Triton as imaged by Voyager 2 (1989)

The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.

Although the extremely uniform appearance of Uranus during Voyager 2's visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclonic storm system rivaled in size only by Jupiter's Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h. Voyager 2 also examined Neptune's ring and moon system. It discovered that Neptune possesses complete rings as well as partial ring "arcs". In addition to examining Neptune's three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune's largest moon, Triton, is a captured Kuiper belt object.

Pluto

New Horizons image of Pluto (2015)
 
New Horizons image of Charon (2015)

The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn's moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.

After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003. New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.

Other objects in the Solar System

The New Horizons mission did a flyby of the small planetesimal Arrokoth in 2019.

Comets

Comet 103P/Hartley (2010)

Although many comets have been studied from Earth, sometimes with centuries' worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet flyby (21P/Giacobini-Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition, and the Stardust mission returned samples of another comet's tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.

Deep space exploration

This high-resolution image of the Hubble Ultra Deep Field includes galaxies of various ages, sizes, shapes, and colors. The smallest, reddest galaxies are some of the most distant galaxies to have been imaged by an optical telescope.

Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space. Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.

Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion. The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.

Future of space exploration

Concept art for a NASA Vision mission
 
Artistic image of a rocket lifting from a Saturn moon
 
The United States' planned Space Launch System concept art

Breakthrough Starshot

Breakthrough Starshot is a research and engineering project by the Breakthrough Initiatives to develop a proof-of-concept fleet of light sail spacecraft named StarChip, to be capable of making the journey to the Alpha Centauri star system 4.37 light-years away. It was founded in 2016 by Yuri Milner, Stephen Hawking, and Mark Zuckerberg.

Asteroids

An article in the science journal Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars. In order to make such an approach viable, three requirements need to be fulfilled: first, "a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit"; second, "extending flight duration and distance capability to ever-increasing ranges out to Mars"; and finally, "developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin." Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them without great risk of radiation exposure.

James Webb Space Telescope

The James Webb Space Telescope (JWST or "Webb") is a space telescope that is planned to be the successor to the Hubble Space Telescope. The JWST will provide greatly improved resolution and sensitivity over the Hubble, and will enable a broad range of investigations across the fields of astronomy and cosmology, including observing some of the most distant events and objects in the universe, such as the formation of the first galaxies. Other goals include understanding the formation of stars and planets, and direct imaging of exoplanets and novas.

The primary mirror of the JWST, the Optical Telescope Element, is composed of 18 hexagonal mirror segments made of gold-plated beryllium which combine to create a 6.5-meter (21 ft; 260 in) diameter mirror that is much larger than the Hubble's 2.4-meter (7.9 ft; 94 in) mirror. Unlike the Hubble, which observes in the near ultraviolet, visible, and near infrared (0.1 to 1 μm) spectra, the JWST will observe in a lower frequency range, from long-wavelength visible light through mid-infrared (0.6 to 27 μm), which will allow it to observe high redshift objects that are too old and too distant for the Hubble to observe. The telescope must be kept very cold in order to observe in the infrared without interference, so it will be deployed in space near the Earth–Sun L2 Lagrangian point, and a large sunshield made of silicon- and aluminum-coated Kapton will keep its mirror and instruments below 50 K (−220 °C; −370 °F).
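
To put the mirror sizes in perspective, light-gathering power scales with collecting area, i.e. with the square of the aperture diameter. The Python sketch below is a back-of-the-envelope check that treats each primary as a filled circle; JWST's real segmented mirror has a somewhat smaller effective area (about 25 m²), so the ratio is an approximation.

    import math

    def dish_area(diameter_m):
        # Collecting area of an idealized circular aperture.
        return math.pi * (diameter_m / 2) ** 2

    hubble = dish_area(2.4)   # Hubble's 2.4 m primary
    jwst = dish_area(6.5)     # JWST's 6.5 m primary
    print(f"Hubble: {hubble:.1f} m^2, JWST: {jwst:.1f} m^2")
    print(f"JWST gathers ~{jwst / hubble:.1f}x as much light")   # ~7.3x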

Artemis program

The Artemis program is an ongoing crewed spaceflight program carried out by NASA, U.S. commercial spaceflight companies, and international partners such as ESA, with the goal of landing "the first woman and the next man" on the Moon, specifically at the lunar south pole region by 2024. Artemis would be the next step towards the long-term goal of establishing a sustainable presence on the Moon, laying the foundation for private companies to build a lunar economy, and eventually sending humans to Mars.

In 2017, the lunar campaign was authorized by Space Policy Directive 1, utilizing various ongoing spacecraft programs such as Orion, the Lunar Gateway, Commercial Lunar Payload Services, and adding an undeveloped crewed lander. The Space Launch System will serve as the primary launch vehicle for Orion, while commercial launch vehicles are planned for use to launch various other elements of the campaign. NASA requested $1.6 billion in additional funding for Artemis for fiscal year 2020, while the Senate Appropriations Committee requested from NASA a five-year budget profile which is needed for evaluation and approval by Congress.

Rationales

Astronaut Buzz Aldrin had a personal Communion service when he first arrived on the surface of the Moon.

The research conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of the NASA programs often show ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program. It is also argued that space exploration would lead to the extraction of resources on other planets, and especially asteroids, which contain billions of dollars' worth of minerals and metals. Such expeditions could generate considerable revenue. In addition, it has been argued that space exploration programs help inspire youth to study science and engineering. Space exploration also gives scientists the ability to perform experiments in other settings and expand humanity's knowledge.

Another claim is that space exploration is a necessity for mankind and that staying on Earth will lead to extinction. Some of the cited risks are a lack of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said that "I don't think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I'm an optimist. We will reach out to the stars." Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight. He argued that humanity's choice is essentially between expansion off Earth into space versus cultural (and eventually biological) stagnation and death. These motivations can be traced back to one of the first rocket scientists at NASA, Wernher von Braun, and his vision of humans moving beyond Earth. The basis of this plan was to:

"Develop multi-stage rockets capable of placing satellites, animals, and humans in space.

Development of large, winged reusable spacecraft capable of carrying humans and equipment into Earth orbit in a way that made space access routine and cost-effective.

Construction of a large, permanently occupied space station to be used as a platform both to observe Earth and from which to launch deep space expeditions.

Launching the first human flights around the Moon, leading to the first landings of humans on the Moon, with the intent of exploring that body and establishing permanent lunar bases.

Assembly and fueling of spaceships in Earth orbit for the purpose of sending humans to Mars with the intent of eventually colonizing that planet".

Known as the Von Braun Paradigm, the plan was formulated to lead humans in the exploration of space. Von Braun's vision of human space exploration served as the model for efforts in space exploration well into the twenty-first century, with NASA incorporating this approach into the majority of its projects. The steps were followed out of order, as seen by the Apollo program reaching the Moon before the Space Shuttle program was started, which in turn was used to complete the International Space Station. Von Braun's Paradigm formed NASA's drive for human exploration, in the hope that humans will discover the far reaches of the universe.

NASA has produced a series of public service announcement videos supporting the concept of space exploration.

Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is "a good investment", compared to 21% who did not.

Human nature

Space advocacy and space policy regularly invoke exploration as part of human nature.

This advocacy has been criticized by scholars as essentializing, and as a continuation of colonialism, particularly of manifest destiny, making space exploration misaligned with science and a less inclusive field.

Spaceflight

Delta-v's in km/s for various orbital maneuvers

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft—both when unpropelled and when under propulsion—is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.

Satellites

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.

Commercialization of space

The commercialization of space first started with the launching of private satellites by NASA and other space agencies. Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. The next step in the commercialization of space was seen as human spaceflight. Flying humans safely to and from space had become routine to NASA, but reusable spacecraft were an entirely new engineering challenge, something previously seen only in novels and films like Star Trek and War of the Worlds. Prominent figures such as Buzz Aldrin supported the development of reusable vehicles like the Space Shuttle; Aldrin held that reusable spacecraft were the key to making space travel affordable, stating that "passenger space travel is a huge potential market big enough to justify the creation of reusable launch vehicles". Aldrin's standing as one of America's best-known heroes of space exploration lent weight to the argument that exploring space is the next great expedition, following the example of Lewis and Clark. Space tourism, enabled by reusable vehicles, is the next step in the commercialization of space: space travel undertaken by individuals for the purpose of personal pleasure.

Private spaceflight companies such as SpaceX and Blue Origin, and planned commercial space stations such as Axiom Station and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.

Alien life

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology. It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: έξω, exo, "outside"). The term "xenobiology" has been used as well, but this is technically incorrect because that term means "biology of the foreigners". Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth. In the Solar System, some of the prime locations for current or past astrobiology are Enceladus, Europa, Mars, and Titan.

Human spaceflight and habitation

Crew quarters on Zvezda, the base ISS crew module

To date, the longest human occupation of space is the International Space Station, which has been in continuous use for 20 years, 146 days. Valeri Polyakov's record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. The health effects of space have been well documented through years of research conducted in the field of aerospace medicine. Analog environments similar to those one may experience in space travel (like deep sea submarines) have been used in this research to further explore the relationship between isolation and extreme environments. It is imperative that the health of the crew be maintained, as any deviation from baseline may compromise the integrity of the mission as well as the safety of the crew, which is why astronauts must endure rigorous medical screenings and tests prior to embarking on any mission.

However, it does not take long for the environmental dynamics of spaceflight to begin taking their toll on the human body. For example, space motion sickness (SMS), a condition affecting the neurovestibular system that culminates in mild to severe signs and symptoms such as vertigo, dizziness, fatigue, nausea, and disorientation, plagues almost all space travelers within their first few days in orbit. Space travel can also have a profound impact on the psyche of the crew members, as delineated in anecdotal writings composed after their retirement. It can adversely affect the body's natural biological clock (circadian rhythm), disturb sleep patterns (causing sleep deprivation and fatigue), and limit social interaction; consequently, residing in a low Earth orbit (LEO) environment for a prolonged amount of time can result in both mental and physical exhaustion. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure. The lack of gravity causes fluid to rise upward, which can cause pressure to build up in the eye, resulting in vision problems; other effects include the loss of bone minerals and density, cardiovascular deconditioning, and decreased endurance and muscle mass.

Radiation is perhaps the most insidious health hazard to space travelers, as it is invisible to the naked eye and can cause cancer. Spacecraft are no longer protected from the Sun's radiation once they are positioned above Earth's magnetic field; the danger of radiation is even more potent in deep space. The hazards of radiation can be ameliorated through protective shielding on the spacecraft, alerts, and dosimetry.

Fortunately, with new and rapidly evolving technological advancements, those in Mission Control are able to monitor the health of their astronauts more closely using telemedicine. One may not be able to completely evade the physiological effects of spaceflight, but they can be mitigated. For example, medical systems aboard space vessels such as the International Space Station (ISS) are well equipped and designed to counteract the effects of weightlessness; on-board treadmills can help prevent muscle loss and reduce the risk of developing premature osteoporosis. Additionally, a crew medical officer is appointed for each ISS mission, and a flight surgeon is available 24/7 via the ISS Mission Control Center located in Houston, Texas. Although these interactions are intended to take place in real time, communications between the space and terrestrial crews may become delayed, sometimes by as much as 20 minutes, as the distance between them increases when a spacecraft moves beyond LEO; because of this, the crew are trained and must be prepared to respond to any medical emergencies that arise on the vessel, as the ground crew are hundreds of miles away.

Travelling and possibly living in space thus poses many challenges. Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a "stepping stone" to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.

Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, the inability or difficulty in establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been, as of 2012, ratified by all spacefaring nations. Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.

Human representation and participation

Participation and representation of humanity in space has been an issue ever since the first phase of space exploration. Some rights of non-spacefaring countries have been secured through international space law, which declares space the "province of all mankind" and treats spaceflight as its resource, though the sharing of space with all humanity is still criticized as imperialist and lacking. In addition to international inclusion, the inclusion of women and people of colour has also been lacking. To make spaceflight more inclusive, some organizations such as the JustSpace Alliance and the IAU-featured Inclusive Astronomy have been formed in recent years.

Women

The first woman ever to enter space was Valentina Tereshkova. She flew in 1963, but it was not until the 1980s that another woman entered space. At the time, all astronauts were required to be military test pilots, a career women could not enter; this is one reason for the delay in allowing women to join space crews. After the rule changed, Svetlana Savitskaya became the second woman to enter space; she was also from the Soviet Union. Sally Ride became the next woman to enter space and the first to do so through the United States program.

Since then, eleven other countries have had women astronauts. The first all-female spacewalk was conducted in 2019 by Christina Koch and Jessica Meir, who had each previously participated in spacewalks with NASA. The first Moon landing by a woman is planned for 2024.

Despite these developments, women are still underrepresented among astronauts and especially cosmonauts. Issues that block potential applicants from the programs and limit the space missions they are able to go on include, for example:

  • agencies limiting women to half as much time in space as men, citing unresearched potential cancer risks.
  • a lack of space suits sized appropriately for female astronauts.

Women have also faced discriminatory treatment; Sally Ride, for example, was scrutinized more than her male counterparts and was asked sexist questions by the press.

Art

Artistry in and from space ranges from captured and arranged material, such as Yuri Gagarin's selfie in space or the image The Blue Marble, through drawings such as the first made in space by cosmonaut and artist Alexei Leonov, and music videos such as Chris Hadfield's cover of Space Oddity on board the ISS, to permanent installations on celestial bodies such as on the Moon.

Computer multitasking

Modern desktop operating systems are capable of handling large numbers of different processes at the same time. This screenshot shows Linux Mint running the Xfce desktop environment, Firefox, a calculator program, the built-in calendar, Vim, GIMP, and VLC media player simultaneously.
 
Multitasking capabilities of Microsoft Windows 1.01 released in 1985, here shown running the MS-DOS Executive and Calculator programs

In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents) and loading the saved state of another program and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking).

Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time. Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs.

Multitasking is a common feature of computer operating systems. It allows more efficient use of the computer hardware; when a program is waiting for some external event, such as a user input or an input/output transfer with a peripheral, the central processor can still be used by another program. In a time-sharing system, multiple human operators use the same processor as if it were dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems, such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface.

Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program.

A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors.

The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Danish and Norwegian.

Multiprogramming

In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient.

The first computer using a multiprogramming system was the British Leo III owned by J. Lyons and Co. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running.

The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent.

Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed.

Cooperative multitasking

Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and Classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems.

As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile.
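
The mechanism is easy to sketch in miniature. In the following Python sketch (an illustration of the scheme, not any particular operating system's scheduler), each task is a generator that voluntarily yields control, and a round-robin loop plays the role of the supervisory software. A task that never yielded would starve all the others, which is exactly the fragility described above.

    def task(name, steps):
        # A cooperative task: each `yield` voluntarily hands control
        # back to the scheduler, as in cooperative multitasking.
        for i in range(steps):
            print(f"{name}: step {i}")
            yield

    def run(tasks):
        # Round-robin scheduler: cycle through the tasks until all finish.
        queue = list(tasks)
        while queue:
            current = queue.pop(0)
            try:
                next(current)            # resume the task until its next yield
                queue.append(current)    # it yielded politely; requeue it
            except StopIteration:
                pass                     # task completed; drop it

    run([task("A", 3), task("B", 2)])    # output interleaves A and B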

Preemptive multitasking

Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and MULTICS in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives, as well as modern versions of Windows.

At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait" while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
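
The contrast with cooperative scheduling can be observed in a small Python experiment. This is a sketch relying on the fact that CPython's interpreter, like the OS scheduler beneath it, preempts threads on a timer (by default roughly every 5 ms): two CPU-bound threads that never yield voluntarily still make progress side by side.

    import threading
    import time

    counts = {"a": 0, "b": 0}

    def spin(name):
        # CPU-bound loop that never sleeps or yields on purpose.
        deadline = time.monotonic() + 1.0
        while time.monotonic() < deadline:
            counts[name] += 1

    t1 = threading.Thread(target=spin, args=("a",))
    t2 = threading.Thread(target=spin, args=("b",))
    t1.start(); t2.start()
    t1.join(); t2.join()

    # Both counters end up large: the scheduler forcibly interleaved
    # the two threads even though neither ever ceded control itself.
    print(counts)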

The earliest preemptive multitasking OS available to home users was Sinclair QDOS on the Sinclair QL, released in 1984, but very few people bought the machine. Commodore's Amiga, released the following year, was the first commercially successful home computer to use the technology, and its multimedia abilities make it a clear ancestor of contemporary multitasking personal computers. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. It was later adopted on the Apple Macintosh by Mac OS X, which, as a Unix-like operating system, uses preemptive multitasking for all native applications.

A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer support legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications.

Real time

Another reason for multitasking was the design of real-time computing systems, where a number of possibly unrelated external activities need to be controlled by a single processor system. In such systems, a hierarchical interrupt system is coupled with process prioritization to ensure that key activities are given a greater share of available process time.

Multithreading

As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g., one process gathering input data, one process processing the input data, and one process writing the results to disk). This, however, required tools to allow processes to exchange data efficiently.

Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context.
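
A brief Python sketch of the pipeline just described, with three threads sharing in-memory queues instead of copying data between separate address spaces (the stage functions are illustrative):

    import queue
    import threading

    raw, processed = queue.Queue(), queue.Queue()

    def gather():
        # Stage 1: gather input data into shared memory.
        for i in range(5):
            raw.put(i)
        raw.put(None)  # sentinel: no more input

    def process():
        # Stage 2: read the shared data directly; no inter-process copying.
        while (item := raw.get()) is not None:
            processed.put(item * item)
        processed.put(None)

    def write_out():
        # Stage 3: write out results.
        while (item := processed.get()) is not None:
            print("result:", item)

    threads = [threading.Thread(target=f) for f in (gather, process, write_out)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()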

While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors.
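
That "repeated calls to worker functions" approach can be sketched as follows (a hypothetical example): each worker performs one small slice of work per call, and a plain loop serves as the application's own fiber scheduler.

    def make_counter(name, limit):
        # Each call to step() performs one slice of work and returns
        # False once this fiber has finished.
        state = {"i": 0}
        def step():
            if state["i"] >= limit:
                return False
            print(f"{name}: {state['i']}")
            state["i"] += 1
            return True
        return step

    # The application's own "fiber" loop: repeatedly call each worker.
    fibers = [make_counter("A", 3), make_counter("B", 2)]
    while fibers:
        fibers = [step for step in fibers if step()]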

Some systems directly support multithreading in hardware.

Memory protection

Essential to any multitasking system is the ability to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security.

In general, memory access management is a responsibility of the operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as "segmentation fault".
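
This behavior can be observed from Python on a Unix-like system (a sketch; the exact signal and exit code are platform-dependent): a child process that dereferences an invalid address is terminated by the kernel, while the parent and the rest of the system continue unaffected.

    import ctypes
    import multiprocessing

    def stray_access():
        # Read memory at address 0, outside this process's address space;
        # the MMU faults and the kernel terminates the process (SIGSEGV).
        ctypes.string_at(0)

    if __name__ == "__main__":
        p = multiprocessing.Process(target=stray_access)
        p.start()
        p.join()
        # On Linux, exitcode is -11: terminated by signal 11 (SIGSEGV).
        print("offending process terminated, exit code:", p.exitcode)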

In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL.
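
Python's multiprocessing.shared_memory module exposes this kind of kernel-allocated shared region (analogous to, though not the same as, the System V mechanism named above); a minimal sketch, requiring Python 3.8 or later:

    from multiprocessing import Process, shared_memory

    def writer(name):
        # Attach to the existing shared block by name and write into it.
        shm = shared_memory.SharedMemory(name=name)
        shm.buf[:5] = b"hello"
        shm.close()

    if __name__ == "__main__":
        # The kernel allocates a block that several processes map at once.
        shm = shared_memory.SharedMemory(create=True, size=16)
        p = Process(target=writer, args=(shm.name,))
        p.start()
        p.join()
        print(bytes(shm.buf[:5]))  # b'hello', written by the other process
        shm.close()
        shm.unlink()  # release the shared block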

Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software.

Memory swapping

Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage.

Programming

Processes that are entirely independent are not much trouble to program in a multitasking environment. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks.

Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource.
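
One of the most common such techniques is mutual exclusion. A short Python sketch (the shared counter is an illustrative stand-in for any shared resource):

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            # Without the lock, the read-modify-write below could interleave
            # with another thread's and lose updates (a race condition).
            with lock:
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # reliably 400000 with the lock; possibly less without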

Bigger systems were sometimes built with one or more central processors and some number of I/O processors, a kind of asymmetric multiprocessing.

Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.

Church–Turing thesis

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

In computability theory, the Church–Turing thesis (also known as the computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a hypothesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:

  • In 1933, Kurt Gödel, with Jacques Herbrand, created a formal definition of a class called general recursive functions. The class of general recursive functions is the smallest class of functions (possibly with more than one argument) that includes all constant functions, projections, the successor function, and which is closed under function composition, recursion, and minimization.
  • In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus (see the sketch after this list).
  • Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers.

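For the second item above, the Church-numeral encoding can be sketched directly in Python's lambda notation (an informal illustration, not a full λ-calculus implementation):

    # Church numerals: a natural number n is the function that applies
    # its argument f exactly n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        # Decode a Church numeral by counting applications of f.
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))  # 5: addition is λ-computable
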
Church and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below).

On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Since, as an informal notion, the concept of effective calculability does not have a formal definition, the thesis, although it has near-universal acceptance, cannot be formally proven.

Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (the physical Church–Turing thesis) and what can be efficiently computed (the complexity-theoretic Church–Turing thesis). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below).

Statement in Church's and Turing's words

J. B. Rosser (1939) addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in a sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result".

In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same:

We shall use the expression "computable function" to mean a function calculable by a machine, and let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions.

The thesis can be stated as: Every effectively calculable function is a computable function. Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine". Turing stated it this way:

It was stated ... that "a function is effectively calculable if its values can be found by some purely mechanical process". We may take this literally, understanding by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability with effective calculability. [The footnote referred to is the one quoted above.]

History

One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day. Was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")?

Circa 1930–1952

In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 1934–35), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported that:

His [Gödel's] only idea at the time was that it might be possible, in terms of effective calculability as an undefined notion, to state a set of axioms which would embody the generally accepted properties of this notion, and to do something on that basis.

But Gödel offered no further guidance. Eventually, he would suggest his own notion of recursion, modified by Herbrand's suggestion, which Gödel had detailed in his 1934 lectures at Princeton, NJ (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically".

Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. Equipped with the λ-calculus and "general" recursion, Stephen Kleene, with the help of Church and J. Barkley Rosser, produced proofs (1933, 1935) to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well-formed formula has a "normal form".

Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–64 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system".

A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion, stating:

Actually the work already done by Church and others carries this identification considerably beyond the working hypothesis stage. But to mask this identification under a definition… blinds us to the need of its continual verification.

Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". This idea was "sharply" criticized by Church.

Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–35 that the thesis might be expressed as an axiom or set of axioms.

Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–37 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). And in a proof-sketch added as an "Appendix" to his 1936–37 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately".

In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their statements as theses.

Rosser (1939) formally identified the three notions-as-definitions:

All three definitions are equivalent, so it does not matter which one is used.

Kleene proposes Church's Thesis: This left the overt expression of a "thesis" to Kleene. In his 1943 paper Recursive Predicates and Quantifiers Kleene proposed his "THESIS I":

This heuristic fact [general recursive functions are effectively calculable] ... led Church to state the following thesis(22). The same thesis is implicit in Turing's description of computing machines(23).

THESIS I. Every effectively calculable function (effectively decidable predicate) is general recursive [Kleene's italics]

Since a precise mathematical definition of the term effectively calculable (effectively decidable) has been wanting, we can take this thesis ... as a definition of it ...

(22) references Church 1936; (23) references Turing 1936–37. Kleene goes on to note that:

the thesis has the character of an hypothesis—a point emphasized by Post and by Church(24). If we consider the thesis and its converse as definition, then the hypothesis is an hypothesis about the application of the mathematical theory developed from the definition. For the acceptance of the hypothesis, there are, as we have suggested, quite compelling grounds.

(24) references Post 1936 and Church's "Formal definitions in the theory of ordinal numbers", Fund. Math., vol. 28 (1936), pp. 11–21.

Kleene's Church–Turing Thesis: In the years 1939–1952, Stephen Kleene, after switching from presenting his work in the mathematical terminology of the lambda calculus to his own theory of partial recursive functions (Gödel–Kleene recursiveness), would go on to overtly name both Church's thesis and Turing's thesis, finally coining the term "Church–Turing thesis" in a section of his 1952 graduate textbook on logic. There, in response to a critique from William Boone of Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", he clarified the relevant concepts, expressed and defended the two "theses", and then "identified" them (showed their equivalence) by use of his Theorem XXX:

Heuristic evidence and other considerations led Church 1936 to propose the following thesis.

Thesis I. Every effectively calculable function (effectively decidable predicate) is general recursive.

Theorem XXX: The following classes of partial functions are coextensive, i.e. have the same members: (a) the partial recursive functions, (b) the computable functions ...

Turing's thesis: Turing's thesis that every function which would naturally be regarded as computable is computable under his definition, i.e. by one of his machines, is equivalent to Church's thesis by Theorem XXX.

Later developments

An attempt to understand the notion of "effective computability" better led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's Game of Life), parallelism, and crystalline automata led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy". The most important of these, the fourth ("the principle of causality"), is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance". From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on the speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable."

In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—"a human computing agent who proceeds mechanically". These constraints reduce to:

  • "(B.1) (Boundedness) There is a fixed bound on the number of symbolic configurations a computor can immediately recognize.
  • "(B.2) (Boundedness) There is a fixed bound on the number of internal states a computor can be in.
  • "(L.1) (Locality) A computor can change only elements of an observed symbolic configuration.
  • "(L.2) (Locality) A computor can shift attention from one symbolic configuration to another one, but the new observed configurations must be within a bounded distance of the immediately previously observed configuration.
  • "(D) (Determinacy) The immediately recognizable (sub-)configuration determines uniquely the next computation step (and id [instantaneous description])"; stated another way: "A computor's internal state together with the observed configuration fixes uniquely the next computation step and the next internal state."

The matter remains in active discussion within the academic community.

The thesis as a definition

The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function.

Success of the thesis

Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Stephen Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function."
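
As a concrete reference point for these equivalence results, the one-tape Turing-machine model itself can be sketched in a few lines of Python (the machine shown, a unary successor function, is a toy example):

    def run_turing_machine(transitions, tape, state="q0", blank="_", max_steps=10_000):
        # transitions maps (state, symbol) -> (new state, written symbol, move),
        # where move is -1 (left) or +1 (right). The machine halts when no
        # transition applies.
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            key = (state, cells.get(head, blank))
            if key not in transitions:
                break
            state, cells[head], move = transitions[key]
            head += move
        return state, "".join(cells[i] for i in sorted(cells))

    # Toy machine: scan right over a block of 1s and append one more,
    # i.e. the successor function in unary notation.
    succ = {
        ("q0", "1"): ("q0", "1", +1),    # skip the existing 1s
        ("q0", "_"): ("halt", "1", +1),  # write one more 1, then halt
    }
    print(run_turing_machine(succ, "111"))  # ('halt', '1111')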

All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1":

It may also be shown that a function which is computable ['reckonable'] in one of the systems Si, or even in a system of transfinite type, is already computable [reckonable] in S1. Thus the concept 'computable' ['reckonable'] is in a certain definite sense 'absolute', while practically all other familiar metamathematical concepts (e.g. provable, definable, etc.) depend quite essentially on the system to which they are defined ...

Informal usage in proofs

Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive).

Dirk van Dalen gives the following example for the sake of illustrating this informal use of the Church–Turing thesis:

EXAMPLE: Each infinite RE set contains an infinite recursive set.

Proof: Let A be infinite RE. We list the elements of A effectively, n0, n1, n2, n3, ...

From this list we extract an increasing sublist: put m0 = n0; after finitely many steps we find an nk such that nk > m0; put m1 = nk. We repeat this procedure to find m2 > m1, etc. This yields an effective listing of the subset B = {m0, m1, m2, ...} of A, with the property mi < mi+1.

Claim. B is decidable. For, in order to test whether k is in B we must check whether k = mi for some i. Since the sequence of mi's is increasing, we have to produce at most k+1 elements of the list and compare them with k. If none of them is equal to k, then k is not in B. Since this test is effective, B is decidable and, by Church's thesis, recursive.
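
The effective procedure in this proof translates almost line by line into Python (a sketch; the listing of A below is a hypothetical stand-in for an arbitrary effective enumeration):

    from itertools import count

    def list_A():
        # Effective listing n0, n1, n2, ... of an infinite RE set; here the
        # natural numbers, deliberately enumerated out of order: 1, 0, 3, 2, ...
        for i in count():
            yield i ^ 1

    def list_B():
        # Extract the increasing sublist m0 < m1 < m2 < ... as in the proof.
        ns = list_A()
        m = next(ns)      # put m0 = n0
        yield m
        for n in ns:
            if n > m:     # after finitely many steps some nk > m appears
                m = n
                yield m   # put the next mi = nk

    def in_B(k):
        # The decision procedure: since the mi are increasing, at most k+1
        # elements need to be produced and compared with k.
        for m in list_B():
            if m == k:
                return True
            if m > k:
                return False

    print(in_B(5), in_B(4))  # True False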

In order to make the above example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive.

Variations

The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable."

The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved, for instance, that a (multi-tape) universal Turing machine suffers only a logarithmic slowdown factor in simulating any Turing machine.

A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated. This is called the feasibility thesis, also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states: "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: "'Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space." The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine.

If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: "A quantum Turing machine can efficiently simulate any realistic model of computation."

Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating "the broader assertion that algorithms precisely capture what can be computed is invalid". They claim that forms of computation not captured by the thesis are relevant today, which they call super-Turing computation.

Philosophical implications

Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings:

  1. The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics.
  2. The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category.
  3. The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation.

There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept.

Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory.

Non-computable functions

One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method.
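
One direction of that equivalence can be sketched in Python, reusing the transition-table representation of a Turing machine from the simulator above (the step bound passed in is a hypothetical stand-in, since no such computable bound exists):

    def halts_within(transitions, steps, blank="_"):
        # Run a machine on a blank tape for at most `steps` steps.
        state, head, cells = "q0", 0, {}
        for _ in range(steps):
            key = (state, cells.get(head, blank))
            if key not in transitions:
                return True   # halted within the bound
            state, cells[head], move = transitions[key]
            head += move
        return False          # still running when the bound ran out

    def decide_halting(transitions, step_bound):
        # If step_bound(n) were a computable upper bound on the maximum number
        # of steps any halting n-state machine can take, this would decide the
        # halting problem: any machine still running after step_bound(n) steps
        # on a blank tape never halts.
        n = len({state for (state, _) in transitions})
        return halts_within(transitions, step_bound(n))

    looper = {("q0", "_"): ("q0", "_", +1)}         # runs forever
    print(decide_halting(looper, lambda n: 10**n))  # False: it never halts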

Several computational models allow for the computation of (Church–Turing) non-computable functions. These are known as hypercomputers. Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.

Equality (mathematics)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Equality_...