
Thursday, September 23, 2021

Nanorobotics

From Wikipedia, the free encyclopedia
 

Nanorobotics is an emerging technology field creating machines or robots whose components are at or near the scale of a nanometer (10⁻⁹ meters). More specifically, nanorobotics (as opposed to microrobotics) refers to the nanotechnology engineering discipline of designing and building nanorobots, with devices ranging in size from 0.1 to 10 micrometres and constructed of nanoscale or molecular components. The terms nanobot, nanoid, nanite, nanomachine, or nanomite have also been used to describe such devices currently under research and development.

Nanomachines are largely in the research and development phase, but some primitive molecular machines and nanomotors have been tested. An example is a sensor having a switch approximately 1.5 nanometers across, able to count specific molecules in a chemical sample. The first useful applications of nanomachines may be in nanomedicine. For example, biological machines could be used to identify and destroy cancer cells. Another potential application is the detection of toxic chemicals, and the measurement of their concentrations, in the environment. Rice University has demonstrated a single-molecule car built by a chemical process, with buckminsterfullerenes (buckyballs) as wheels. It is actuated by controlling the environmental temperature and by positioning a scanning tunneling microscope tip.

Another definition is a robot that allows precise interactions with nanoscale objects, or can manipulate objects with nanoscale resolution. Such devices are more closely related to microscopy or scanning probe microscopy than to the description of nanorobots as molecular machines. Using the microscopy definition, even a large apparatus such as an atomic force microscope can be considered a nanorobotic instrument when configured to perform nanomanipulation. From this viewpoint, macroscale robots or microrobots that can move with nanoscale precision can also be considered nanorobots.

Nanorobotics theory

According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micro-machines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the surgeon". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.

Since nano-robots would be microscopic in size, it would probably be necessary for very large numbers of them to work together to perform microscopic and macroscopic tasks. These nano-robot swarms, both those unable to replicate (as in utility fog) and those able to replicate unconstrained in the natural environment (as in grey goo and synthetic biology), are found in many science fiction stories, such as the Borg nano-probes in Star Trek and The Outer Limits episode "The New Breed". Some proponents of nano-robotics, in reaction to the grey goo scenarios that they earlier helped to propagate, hold the view that nano-robots able to replicate outside of a restricted factory environment do not form a necessary part of a purported productive nanotechnology, and that the process of self-replication, were it ever to be developed, could be made inherently safe. They further assert that their current plans for developing and using molecular manufacturing do not in fact include free-foraging replicators.

A detailed theoretical discussion of nanorobotics, including specific design issues such as sensing, power, communication, navigation, manipulation, locomotion, and onboard computation, has been presented in the medical context of nanomedicine by Robert Freitas. Some of these discussions remain at the level of unbuildable generality and do not approach the level of detailed engineering.

Legal and ethical implications

Open technology

A document with a proposal on nanobiotech development using open design technology methods, as in open-source hardware and open-source software, has been addressed to the United Nations General Assembly. According to the document sent to the United Nations, in the same way that open source has in recent years accelerated the development of computer systems, a similar approach should benefit society at large and accelerate nanorobotics development. The use of nanobiotechnology should be established as a human heritage for the coming generations, and developed as an open technology based on ethical practices for peaceful purposes. Open technology is stated as a fundamental key for such an aim.

Nanorobot race

In the same way that technology research and development drove the space race and nuclear arms race, a race for nanorobots is occurring. There are several reasons to count nanorobots among the emerging technologies: large corporations such as General Electric, Hewlett-Packard, Synopsys, Northrop Grumman and Siemens have recently been working on the research and development of nanorobots; surgeons are getting involved and starting to propose ways to apply nanorobots to common medical procedures; universities and research institutes have been granted more than $2 billion in government funding for research on nanodevices for medicine; and bankers are investing strategically with the intent of acquiring, in advance, rights and royalties on future nanorobot commercialisation. Some aspects of nanorobot litigation, and related issues linked to monopoly, have already arisen. A large number of patents on nanorobots has been granted recently, mostly to patent agents, to companies specialised solely in building patent portfolios, and to lawyers. After a long series of patents and, eventually, litigation (see, for example, the invention of radio or the war of the currents), emerging fields of technology tend to become monopolies, normally dominated by large corporations.

Manufacturing approaches

Manufacturing nanomachines assembled from molecular components is a very challenging task. Because of the level of difficulty, many engineers and scientists continue working cooperatively across multidisciplinary approaches to achieve breakthroughs in this new area of development. The importance of the following distinct techniques currently applied towards manufacturing nanorobots is therefore easy to appreciate:

Biochip

The joint use of nanoelectronics, photolithography, and new biomaterials provides a possible approach to manufacturing nanorobots for common medical uses, such as surgical instrumentation, diagnosis, and drug delivery. This method of manufacturing at the nanotechnology scale has been in use in the electronics industry since 2008. On this approach, practical nanorobots would be integrated as nanoelectronic devices, allowing tele-operation and advanced capabilities for medical instrumentation.

Nubots

A nucleic acid robot (nubot) is an organic molecular machine at the nanoscale. DNA structure can provide means to assemble 2D and 3D nanomechanical devices. DNA-based machines can be activated using small molecules, proteins and other molecules of DNA. Biological circuit gates based on DNA materials have been engineered as molecular machines to allow in vitro drug delivery for targeted health problems. Such material-based systems behave most like smart-biomaterial drug delivery systems, and do not allow precise in vivo teleoperation of the engineered prototypes.
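The logic-gate idea above can be sketched in ordinary software terms. The short Python model below is purely illustrative and assumes nothing about any specific nubot design; the marker names and thresholds are hypothetical stand-ins for the molecular inputs a DNA gate would actually sense, and the gate simply releases its payload when both inputs are present, the behaviour of an AND gate.

# Toy software analogue of a DNA AND-gate used for conditional drug release.
# Marker names and thresholds are hypothetical; a real nubot gate is implemented
# in nucleic-acid chemistry (e.g. strand displacement), not in software.
from dataclasses import dataclass

@dataclass
class ToyDnaAndGate:
    threshold_a: float = 0.5  # arbitrary activation level for marker A
    threshold_b: float = 0.5  # arbitrary activation level for marker B

    def releases_payload(self, marker_a: float, marker_b: float) -> bool:
        # AND logic: the payload is released only if both markers are detected.
        return marker_a >= self.threshold_a and marker_b >= self.threshold_b

gate = ToyDnaAndGate()
print(gate.releases_payload(0.9, 0.8))  # True: both markers present
print(gate.releases_payload(0.9, 0.1))  # False: second marker missing, no release

The point is only the conditional-release logic; everything that makes real DNA gates interesting, namely their implementation in chemistry and their operation without external control, is outside this sketch.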

Surface-bound systems

Several reports have demonstrated the attachment of synthetic molecular motors to surfaces. These primitive nanomachines have been shown to undergo machine-like motions when confined to the surface of a macroscopic material. The surface anchored motors could potentially be used to move and position nanoscale materials on a surface in the manner of a conveyor belt.

Positional nanoassembly

The Nanofactory Collaboration, founded by Robert Freitas and Ralph Merkle in 2000 and involving 23 researchers from 10 organizations and 4 countries, focuses on developing a practical research agenda specifically aimed at positionally controlled diamond mechanosynthesis and a diamondoid nanofactory capable of building diamondoid medical nanorobots.

Biohybrids

The emerging field of bio-hybrid systems combines biological and synthetic structural elements for biomedical or robotic applications. The constituent elements of bio-nanoelectromechanical systems (BioNEMS) are of nanoscale size, for example DNA, proteins or nanostructured mechanical parts. Thiol-ene e-beam resists allow the direct writing of nanoscale features, followed by the functionalization of the natively reactive resist surface with biomolecules. Other approaches use a biodegradable material attached to magnetic particles that allow them to be guided around the body.

Bacteria-based

This approach proposes the use of biological microorganisms, such as the bacteria Escherichia coli and Salmonella typhimurium, which use a flagellum for propulsion. Electromagnetic fields are normally used to control the motion of this kind of biologically integrated device. Chemists at the University of Nebraska have created a humidity gauge by fusing a bacterium to a silicon computer chip.

Virus-based

Retroviruses can be re-engineered to attach to cells and replace DNA. They go through a process called reverse transcription to deliver their genetic payload in a vector. Usually, the Pol and Gag genes of the virus are used for the capsid and delivery system. This process is called retroviral gene therapy, and it can re-engineer cellular DNA through the use of viral vectors. The approach has appeared in the form of retroviral, adenoviral, and lentiviral gene delivery systems. These gene therapy vectors have been used in cats to deliver genes into the genetically modified organism (GMO), causing it to display the trait.

3D printing

3D printing is the process by which a three-dimensional structure is built through the various processes of additive manufacturing. Nanoscale 3D printing involves many of the same processes, incorporated at a much smaller scale. To print a structure in the 5-400 µm range, the precision of the 3D printing machine must be improved greatly. A two-step process, combining 3D printing with laser-etched plates, has been adopted as an improvement technique. To achieve nanoscale precision, a laser etching machine first etches the details needed for the segments of the nanorobot into each plate. The plate is then transferred to the 3D printer, which fills the etched regions with the desired nanoparticle. The process is repeated until the nanorobot is built from the bottom up. This approach has two main benefits: it increases the overall accuracy of the printing process, and it has the potential to create functional segments of a nanorobot.

The 3D printer uses a liquid resin, which is hardened at precisely the correct spots by a focused laser beam. The focal point of the laser beam is guided through the resin by movable mirrors and leaves behind a hardened line of solid polymer just a few hundred nanometers wide. This fine resolution enables the creation of intricately structured sculptures as tiny as a grain of sand. The process uses photoactive resins that are hardened by the laser at an extremely small scale, and it is quick by nanoscale 3D printing standards. Ultra-small features can also be made with the 3D micro-fabrication technique of multiphoton photopolymerisation, in which a focused laser traces the desired 3D object into a block of gel. Because of the nonlinear nature of photoexcitation, the gel is cured to a solid only in the places where the laser was focused, and the remaining gel is then washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures with moving and interlocked parts.
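The sub-diffraction resolution mentioned above follows from that nonlinearity. As a rough, generic sketch (standard two-photon absorption scaling, not tied to any particular printer), the polymerization rate grows with the square of the local light intensity, so only the innermost part of the focal spot exceeds the curing threshold:

    R_{2\mathrm{PP}}(\mathbf{r}) \propto I(\mathbf{r})^{2}, \qquad \text{cured region} = \{\, \mathbf{r} : I(\mathbf{r}) \ge I_{\mathrm{th}} \,\}

Because a squared Gaussian focal profile is narrower than the profile itself, the cured voxel can be smaller than the diffraction-limited spot, which is consistent with the sub-100 nm feature sizes quoted above.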

Potential uses

Nanomedicine

Potential uses for nanorobotics in medicine include early diagnosis and targeted drug-delivery for cancer, biomedical instrumentation, surgery, pharmacokinetics, monitoring of diabetes, and health care.

In such plans, future medical nanotechnology is expected to employ nanorobots injected into the patient to perform work at a cellular level. Such nanorobots intended for use in medicine should be non-replicating, as replication would needlessly increase device complexity, reduce reliability, and interfere with the medical mission.

Nanotechnology provides a wide range of new technologies for developing customized means to optimize the delivery of pharmaceutical drugs. Today, harmful side effects of treatments such as chemotherapy are commonly a result of drug delivery methods that do not pinpoint their intended target cells accurately. Researchers at Harvard and MIT, however, have been able to attach special RNA strands, measuring nearly 10 nm in diameter, to nanoparticles filled with a chemotherapy drug. These RNA strands are attracted to cancer cells. When the nanoparticle encounters a cancer cell, it adheres to it and releases the drug into the cancer cell. This directed method of drug delivery has great potential for treating cancer patients while avoiding the negative effects commonly associated with improper drug delivery. The first demonstration of nanomotors operating in a living organism was carried out in 2014 at the University of California, San Diego. MRI-guided nanocapsules are one potential precursor to nanorobots.

Another useful application of nanorobots is assisting in the repair of tissue cells alongside white blood cells. Recruiting inflammatory cells or white blood cells (which include neutrophil granulocytes, lymphocytes, monocytes, and mast cells) to the affected area is the first response of tissues to injury. Because of their small size, nanorobots could attach themselves to the surface of recruited white cells, to squeeze their way out through the walls of blood vessels and arrive at the injury site, where they can assist in the tissue repair process. Certain substances could possibly be used to accelerate the recovery.

The science behind this mechanism is quite complex. Passage of cells across the blood endothelium, a process known as transmigration, is a mechanism involving engagement of cell surface receptors to adhesion molecules, active force exertion and dilation of the vessel walls and physical deformation of the migrating cells. By attaching themselves to migrating inflammatory cells, the robots can in effect "hitch a ride" across the blood vessels, bypassing the need for a complex transmigration mechanism of their own.

As of 2016, in the United States, the Food and Drug Administration (FDA) regulates nanotechnology on the basis of size.

Nanocomposite particles that are remotely controlled by an electromagnetic field have also been developed. This series of nanorobots, now listed in Guinness World Records, can be used to interact with biological cells. Scientists suggest that this technology can be used for the treatment of cancer.

Cultural references

The Nanites are characters on the TV show Mystery Science Theater 3000. They are self-replicating, bio-engineered organisms that work on the ship and reside in the SOL's computer systems. They made their first appearance in Season 8. Nanites are also used in a number of episodes of the Netflix series Travelers, where they can be programmed and injected into injured people to perform repairs; they first appear in season 1.

Nanites also feature in the Rise of Iron 2016 expansion for Destiny, in which SIVA, a self-replicating nanotechnology, is used as a weapon.

Nanites (more often referred to as nanomachines) are often referenced in Konami's Metal Gear series, where they are used to enhance and regulate abilities and body functions.

In the Star Trek franchise TV shows, nanites serve as an important plot device. Starting with "Evolution" in the third season of The Next Generation, Borg nanoprobes perform the function of maintaining the Borg cybernetic systems, as well as repairing damage to the organic parts of a Borg. They generate new technology inside a Borg when needed, as well as protecting them from many forms of disease.

Nanites play a role in the video game Deus Ex, being the basis of the nano-augmentation technology which gives augmented people superhuman abilities.

Nanites are also mentioned in the Arc of a Scythe book series by Neal Shusterman and are used to heal all nonfatal injuries, regulate bodily functions, and considerably lessen pain.

Self-replicating machine

From Wikipedia, the free encyclopedia
 
A simple form of machine self-replication

A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann and in more recent times by K. Eric Drexler in his book on nanotechnology, Engines of Creation (coining the term clanking replicator for such machines) and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, Von Neumann's Self-Reproducing Automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell.

A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. Although the idea was suggested more than 70 years ago, no self-replicating machine has been built to date. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macroscale replicating systems from the microscopic nanorobots or "assemblers" that nanotechnology may make possible, but the term is informal and is rarely used by others in popular or technical discussions. Replicators have also been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. However, the term "von Neumann machine" is less specific and also refers to a completely unrelated computer architecture that von Neumann proposed, so its use is discouraged where accuracy is important. Von Neumann himself used the term universal constructor to describe such self-replicating machines.

Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves" by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then assemble the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication". In contrast, machines that are truly autonomously self-replicating (like biological machines) are the main subject discussed here.

History

The general concept of artificial machines capable of producing copies of themselves dates back at least several hundred years. An early reference is an anecdote regarding the philosopher René Descartes, who suggested to Queen Christina of Sweden that the human body could be regarded as a machine; she responded by pointing to a clock and ordering "see to it that it reproduces offspring." Several other variations on this anecdotal response also exist. Samuel Butler proposed in his 1872 novel Erewhon that machines were already capable of reproducing themselves but it was man who made them do so, and added that "machines which reproduce machinery do not reproduce machines after their own kind". In George Eliot's 1879 book Impressions of Theophrastus Such, a series of essays that she wrote in the character of a fictional scholar named Theophrastus, the essay "Shadows of the Coming Race" speculated about self-replicating machines, with Theophrastus asking "how do I know that they may not be ultimately made to carry, or may not in themselves evolve, conditions of self-supply, self-repair, and reproduction".

In 1802 William Paley formulated the first known teleological argument depicting machines producing other machines, suggesting that the question of who originally made a watch was rendered moot if it were demonstrated that the watch was able to manufacture a copy of itself. Scientific study of self-reproducing machines was anticipated by John Bernal as early as 1929 and by mathematicians such as Stephen Kleene who began developing recursion theory in the 1930s. Much of this latter work was motivated by interest in information processing and algorithms rather than physical implementation of such a system, however. In the course of the 1950s, suggestions of several increasingly simple mechanical systems capable of self-reproduction were made — notably by Lionel Penrose.

Von Neumann's kinematic model

A detailed conceptual proposal for a self-replicating machine was first put forward by mathematician John von Neumann in lectures delivered in 1948 and 1949, when he proposed a kinematic model of self-reproducing automata as a thought experiment. Von Neumann's concept of a physical self-replicating machine was dealt with only abstractly, with the hypothetical machine using a "sea" or stockroom of spare parts as its source of raw materials. The machine had a program stored on a memory tape that directed it to retrieve parts from this "sea" using a manipulator, assemble them into a duplicate of itself, and then copy the contents of its memory tape into the empty duplicate's. The machine was envisioned as consisting of as few as eight different types of components: four logic elements that send and receive stimuli, and four mechanical elements used to provide a structural skeleton and mobility. While the model was qualitatively sound, von Neumann was evidently dissatisfied with it because of the difficulty of analyzing it with mathematical rigor. He went on instead to develop an even more abstract self-replicator model based on cellular automata. His original kinematic concept remained obscure until it was popularized in a 1955 issue of Scientific American.

Von Neumann's goal for his self-reproducing automata theory, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically, akin to biological organisms under natural selection. He asked what threshold of complexity must be crossed for machines to be able to evolve. His answer was to design an abstract machine which, when run, would replicate itself. Notably, his design implies that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell.
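Von Neumann's separation of roles, in which the stored description is first interpreted to build the offspring and then copied verbatim into it, has a familiar small-scale software analogue in the quine. The two-line Python program below is only an illustration of that idea, not a model of the kinematic automaton: the string s plays the role of the memory tape, and the program both interprets it (to reconstruct its own text) and copies it unchanged into its output.

s = 's = %r\nprint(s %% s)'
print(s % s)

Running it prints its own source exactly. Comments are omitted because a quine's output must match its source character for character; the "interpret the description, then copy the description" structure is the whole point of the analogy.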

Moore's artificial living plants

In 1956 mathematician Edward F. Moore proposed the first known suggestion for a practical real-world self-replicating machine, also published in Scientific American. Moore's "artificial living plants" were proposed as machines able to use air, water and soil as sources of raw materials and to draw their energy from sunlight via a solar battery or a steam engine. He chose the seashore as an initial habitat for such machines, giving them easy access to the chemicals in seawater, and suggested that later generations of the machine could be designed to float freely on the ocean's surface as self-replicating factory barges or to be placed in barren desert terrain that was otherwise useless for industrial purposes. The self-replicators would be "harvested" for their component parts, to be used by humanity in other non-replicating machines.

Dyson's replicating systems

The next major development of the concept of self-replicating machines was a series of thought experiments proposed by physicist Freeman Dyson in his 1970 Vanuxem Lecture. He proposed three large-scale applications of machine replicators. First was to send a self-replicating system to Saturn's moon Enceladus, which in addition to producing copies of itself would also be programmed to manufacture and launch solar sail-propelled cargo spacecraft. These spacecraft would carry blocks of Enceladean ice to Mars, where they would be used to terraform the planet. His second proposal was a solar-powered factory system designed for a terrestrial desert environment, and his third was an "industrial development kit" based on this replicator that could be sold to developing countries to provide them with as much industrial capacity as desired. When Dyson revised and reprinted his lecture in 1979 he added proposals for a modified version of Moore's seagoing artificial living plants that was designed to distill and store fresh water for human use and the "Astrochicken."

Advanced Automation for Space Missions

An artist's conception of a "self-growing" robotic lunar factory

In 1980, inspired by a 1979 "New Directions Workshop" held at Woods Hole, NASA conducted a joint summer study with ASEE entitled Advanced Automation for Space Missions to produce a detailed proposal for self-replicating factories to develop lunar resources without requiring additional launches or human workers on-site. The study was conducted at Santa Clara University and ran from June 23 to August 29, with the final report published in 1982. The proposed system would have been capable of exponentially increasing productive capacity, and the design could be modified to build self-replicating probes to explore the galaxy.

The reference design included small computer-controlled electric carts running on rails inside the factory, mobile "paving machines" that used large parabolic mirrors to focus sunlight on lunar regolith to melt and sinter it into a hard surface suitable for building on, and robotic front-end loaders for strip mining. Raw lunar regolith would be refined by a variety of techniques, primarily hydrofluoric acid leaching. Large transports with a variety of manipulator arms and tools were proposed as the constructors that would put together new factories from parts and assemblies produced by its parent.

Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery would be placed under the canopy.

A "casting robot" would use sculpting tools and templates to make plaster molds. Plaster was selected because the molds are easy to make, can make precise parts with good surface finishes, and the plaster can be easily recycled afterward using an oven to bake the water back out. The robot would then cast most of the parts either from nonconductive molten rock (basalt) or purified metals. A carbon dioxide laser cutting and welding system was also included.

A more speculative, more complex microchip fabricator was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins."

A 2004 study supported by NASA's Institute for Advanced Concepts took this idea further. Some experts are beginning to consider self-replicating machines for asteroid mining.

Much of the design study was concerned with a simple, flexible chemical system for processing the ores, and the differences between the ratio of elements needed by the replicator, and the ratios available in lunar regolith. The element that most limited the growth rate was chlorine, needed to process regolith for aluminium. Chlorine is very rare in lunar regolith.

Lackner-Wendt Auxon replicators

In 1995, inspired by Dyson's 1970 suggestion of seeding uninhabited deserts on Earth with self-replicating machines for industrial development, Klaus Lackner and Christopher Wendt developed a more detailed outline for such a system. They proposed a colony of cooperating mobile robots 10–30 cm in size running on a grid of electrified ceramic tracks around stationary manufacturing equipment and fields of solar cells. Their proposal did not include a complete analysis of the system's material requirements, but it described a novel method for extracting the ten most common chemical elements found in raw desert topsoil (Na, Fe, Mg, Si, Ca, Ti, Al, C, O and H) using a high-temperature carbothermic process. The proposal was popularized in Discover magazine, which featured solar-powered desalination equipment used to irrigate the desert where the system was based. They named their machines "Auxons", from the Greek word auxein, meaning "to grow."

Recent work

NIAC studies on self-replicating systems

In the spirit of the 1980 "Advanced Automation for Space Missions" study, the NASA Institute for Advanced Concepts began several studies of self-replicating system design in 2002 and 2003. Four Phase I grants were awarded.

Bootstrapping Self-Replicating Factories in Space

In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a so-called "bootstrapping approach" to start self-replicating factories in space. They developed this concept on the basis of In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient and then grow to a large size, enabling greater exploration in space as well as providing benefits back to Earth. In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar system civilization through self-replicating space industry. Kalil requested that the public submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA Chief Technologist Mason Peck has dubbed "Massless Exploration", the ability to make everything in space so that it does not need to be launched from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape." In 2016, Metzger argued that fully self-replicating industry could be started over several decades by astronauts at a lunar outpost for a total cost (outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration.

New York University artificial DNA tile motifs

In 2011, a team of scientists at New York University created a structure called 'BTX' (bent triple helix) based around three double helix molecules, each made from a short strand of DNA. Treating each group of three double-helices as a code letter, they can (in principle) build up self-replicating structures that encode large quantities of information.

Self-replication of magnetic polymers

In 2001 Jarle Breivik at the University of Oslo created a system of magnetic building blocks which, in response to temperature fluctuations, spontaneously form self-replicating polymers.

Self-replication of neural circuits

In 1968 Zellig Harris wrote that "the metalanguage is in the language," suggesting that self-replication is part of language. In 1977 Niklaus Wirth formalized this proposition by publishing a self-replicating deterministic context-free grammar. Adding probabilities to it, Bertrand du Castel published a self-replicating stochastic grammar in 2015 and presented a mapping of that grammar to neural networks, thereby presenting a model for a self-replicating neural circuit.

Self-replicating spacecraft

The idea of an automated spacecraft capable of constructing copies of itself was first proposed in scientific literature in 1974 by Michael A. Arbib, but the concept had appeared earlier in science fiction, such as the 1967 novel Berserker by Fred Saberhagen or the 1950 novelette trilogy The Voyage of the Space Beagle by A. E. van Vogt. The first quantitative engineering analysis of a self-replicating spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory replicate many copies of itself there to increase its total manufacturing capacity, and then use the resulting automated industrial complex to construct more probes with a single seed factory on board each.

Other references

  • A number of patents have been granted for self-replicating machine concepts. U.S. Patent 5,659,477 "Self reproducing fundamental fabricating machines (F-Units)" Inventor: Collins; Charles M. (Burke, Va.) (August 1997), U.S. Patent 5,764,518 " Self reproducing fundamental fabricating machine system" Inventor: Collins; Charles M. (Burke, Va.)(June 1998); and Collins' PCT patent WO 96/20453: "Method and system for self-replicating manufacturing stations" Inventors: Merkle; Ralph C. (Sunnyvale, Calif.), Parker; Eric G. (Wylie, Tex.), Skidmore; George D. (Plano, Tex.) (January 2003).
  • Macroscopic replicators are mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation.
  • In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego robot kits and similar basic parts. Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years.
  • In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication (from which much of the material in this article is derived, with permission of the authors), in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references. This book included a new molecular assembler design, a primer on the mathematics of replication, and the first comprehensive analysis of the entire replicator design space.

Prospects for implementation

As the use of industrial automation has expanded over time, some factories have begun to approach a semblance of self-sufficiency that is suggestive of self-replicating machines. However, such factories are unlikely to achieve "full closure" until the cost and flexibility of automated machinery comes close to that of human labour and the manufacture of spare parts and other components locally becomes more economical than transporting them from elsewhere. As Samuel Butler has pointed out in Erewhon, replication of partially closed universal machine tool factories is already possible. Since safety is a primary goal of all legislative consideration of regulation of such development, future development efforts may be limited to systems which lack either control, matter, or energy closure. Fully capable machine replicators are most useful for developing resources in dangerous environments which are not easily reached by existing transportation systems (such as outer space).

An artificial replicator can be considered to be a form of artificial life. Depending on its design, it might be subject to evolution over an extended period of time. However, with robust error correction, and the possibility of external intervention, the common science fiction scenario of robotic life run amok will remain extremely unlikely for the foreseeable future.

 

Singularitarianism

From Wikipedia, the free encyclopedia

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

Singularitarians are distinguished from other futurists who speculate on a technological singularity by their belief that the singularity is not only possible, but desirable if guided prudently. Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization.

Time magazine describes the worldview of Singularitarians by saying that "even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but... while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation".

Definition

The term "Singularitarian" was originally defined by Extropian thinker Mark Plus (Mark Potts) in 1991 to mean "one who believes the concept of a Singularity". This term has since been redefined to mean "Singularity activist" or "friend of the Singularity"; that is, one who acts so as to bring about the singularity.

Singularitarianism can also be thought of as an orientation or an outlook that prefers the enhancement of human intelligence as a specific transhumanist goal instead of focusing on specific technologies such as A.I. There are also definitions that identify a singularitarian as an activist or a friend of the concept of singularity, that is, one who acts so as to bring about a singularity. Some sources described it as a moral philosophy that advocates deliberate action to bring about and steer the development of a superintelligence that will lead to a theoretical future point that emerges during a time of accelerated change.

Inventor and futurist Ray Kurzweil, author of the 2005 book The Singularity Is Near: When Humans Transcend Biology, defines a Singularitarian as someone "who understands the Singularity and who has reflected on its implications for his or her own life" and estimates the singularity will occur around 2045.

History

In 1993, mathematician, computer scientist, and science fiction author Vernor Vinge hypothesized that the moment might come when technology will allow "creation of entities with greater than human intelligence" and used the term "the Singularity" to describe this moment. He suggested that the singularity may pose an existential risk for humanity, and that it could happen through one of four means:

  1. The development of computers that are "awake" and superhumanly intelligent.
  2. Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
  3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
  4. Biological science may find ways to improve upon the natural human intellect.

Singularitarianism coalesced into a coherent ideology in 2000 when artificial intelligence (AI) researcher Eliezer Yudkowsky wrote The Singularitarian Principles, in which he stated that a Singularitarian believes that the singularity is a secular, non-mystical event which is possible and beneficial to the world and is worked towards by its adherents. Yudkowsky's conceptualization of singularity offered a broad definition developed to be inclusive of various interpretations. There are theorists such as Michael Anissimov who argued for a strict definition, one that refers only to the advocacy of the development of posthuman (greater than human) intelligence.

In June 2000 Yudkowsky, with the support of Internet entrepreneurs Brian Atkins and Sabine Atkins, founded the Machine Intelligence Research Institute to work towards the creation of self-improving Friendly AI. MIRI's writings argue for the idea that an AI with the ability to improve upon its own design (Seed AI) would rapidly lead to superintelligence. These Singularitarians believe that reaching the singularity swiftly and safely is the best possible way to minimize net existential risk.

Many people believe a technological singularity is possible without adopting Singularitarianism as a moral philosophy. Although the exact numbers are hard to quantify, Singularitarianism is a small movement, which includes transhumanist philosopher Nick Bostrom. Inventor and futurist Ray Kurzweil, who predicts that the Singularity will occur circa 2045, greatly contributed to popularizing Singularitarianism with his 2005 book The Singularity Is Near: When Humans Transcend Biology.

What, then, is the Singularity? It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a "singularitarian."

With the support of NASA, Google and a broad range of technology forecasters and technocapitalists, the Singularity University opened in June 2009 at the NASA Research Park in Silicon Valley with the goal of preparing the next generation of leaders to address the challenges of accelerating change.

In July 2009, many prominent Singularitarians participated in a conference organized by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard (i.e., cybernetic revolt). They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They warned that some computer viruses can evade elimination and have achieved "cockroach intelligence". They asserted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The President of the AAAI has commissioned a study to look at this issue.

Reception

There are several objections to Kurzweil's singularitarianism, including criticisms from optimists within the A.I. field. For instance, Pulitzer Prize-winning author Douglas Hofstadter has argued that Kurzweil's predicted achievement of human-level A.I. by 2045 is not viable. Even Gordon Moore, who is credited with introducing Moore's Law, which underpins the notion of the singularity, has maintained that it will never occur. According to some observers, these criticisms do not diminish enthusiasm for the singularity because it has assumed a quasi-religious response to the fear of death, allowing its adherents to enjoy the benefits of religion without its ontological burdens. Science journalist John Horgan provided more insight into this notion when he likened singularitarianism to a religion:

Let's face it. The singularity is a religious rather than a scientific vision. The science-fiction writer Ken MacLeod has dubbed it "the rapture for nerds," an allusion to the end-time, when Jesus whisks the faithful to heaven and leaves us sinners behind. Such yearning for transcendence, whether spiritual or technological, is all too understandable. Both as individuals and as a species, we face deadly serious problems, including terrorism, nuclear proliferation, overpopulation, poverty, famine, environmental degradation, climate change, resource depletion, and AIDS. Engineers and scientists should be helping us face the world's problems and find solutions to them, rather than indulging in escapist, pseudoscientific fantasies like the singularity.

Kurzweil rejects this categorization, stating that his predictions about the singularity are driven by data showing that increases in computational power have been exponential in the past. He also stresses that critics who challenge his view mistakenly adopt an intuitive linear view of technological advancement.
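The contrast Kurzweil appeals to is easy to make concrete with a small numerical sketch. The Python snippet below compares linear and exponential extrapolation of a quantity that doubles every two years; the two-year doubling time is only an illustrative stand-in in the spirit of Moore's Law, not a claim about any particular metric.

# Compare linear vs. exponential extrapolation over 40 years.
# The 2-year doubling time is an illustrative placeholder.
years = 40
doubling_time = 2.0                    # years per doubling (assumed)
start = 1.0                            # arbitrary starting capability level

linear_rate = start / doubling_time    # yearly gain if the early trend were linear
linear = start + linear_rate * years
exponential = start * 2 ** (years / doubling_time)

print(f"linear projection after {years} years:      {linear:.0f}x")       # 21x
print(f"exponential projection after {years} years: {exponential:.0f}x")  # 1048576x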

Rare Earth hypothesis

From Wikipedia, the free encyclopedia
  
The Rare Earth hypothesis argues that planets with complex life, like Earth, are exceptionally rare

In planetary astronomy and astrobiology, the Rare Earth hypothesis argues that the origin of life and the evolution of biological complexity such as sexually reproducing, multicellular organisms on Earth (and, subsequently, human intelligence) required an improbable combination of astrophysical and geological events and circumstances.

According to the hypothesis, complex extraterrestrial life is an improbable phenomenon and likely to be rare throughout the universe as a whole. The term "Rare Earth" originates from Rare Earth: Why Complex Life Is Uncommon in the Universe (2000), a book by Peter Ward, a geologist and paleontologist, and Donald E. Brownlee, an astronomer and astrobiologist, both faculty members at the University of Washington.

In the 1970s and 1980s, Carl Sagan and Frank Drake, among others, argued that Earth is a typical rocky planet in a typical planetary system, located in a non-exceptional region of a common barred-spiral galaxy. From the principle of mediocrity (extended from the Copernican principle), they argued that the evolution of life on Earth, including human beings, was also typical, and therefore that the universe teems with complex life. However, Ward and Brownlee argue that planets, planetary systems, and galactic regions that are as accommodating for complex life as the Earth, the Solar System, and our own galactic region are not typical at all, but actually exceedingly rare.

Requirements for complex life

The Rare Earth hypothesis argues that the evolution of biological complexity anywhere in the universe requires the coincidence of a large number of fortuitous circumstances, including, among others, a galactic habitable zone; a central star and planetary system having the requisite character (i.e. a circumstellar habitable zone); a terrestrial planet of the right mass; the advantage of one or more gas giant guardians like Jupiter and possibly a large natural satellite to shield the planet from frequent impact events; conditions needed to ensure the planet has a magnetosphere and plate tectonics; a chemistry similar to that present in the Earth's lithosphere, atmosphere, and oceans; the influence of periodic "evolutionary pumps" such as massive glaciations and bolide impacts; and whatever factors may have led to the emergence of eukaryotic cells, sexual reproduction, and the Cambrian explosion of animal, plant, and fungi phyla. The evolution of human beings and of human intelligence may have required yet further specific events and circumstances, all of which are extremely unlikely to have happened were it not for the Cretaceous–Paleogene extinction event 66 million years ago removing dinosaurs as the dominant terrestrial vertebrates.

In order for a small rocky planet to support complex life, Ward and Brownlee argue, the values of several variables must fall within narrow ranges. The universe is so vast that it might still contain many Earth-like planets, but if such planets exist, they are likely to be separated from each other by many thousands of light-years. Such distances may preclude communication among any intelligent species that may evolve on such planets, which would solve the Fermi paradox: "If extraterrestrial aliens are common, why aren't they obvious?"
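Formally, the argument has the shape of a long product of requirement probabilities, in the spirit of the Drake equation. The Python sketch below only illustrates that arithmetic; every number in it is a made-up placeholder, not an estimate by Ward and Brownlee.

# Illustrative only: multiplying a chain of hypothetical per-requirement
# probabilities shows how several merely "unlikely" conditions compound.
factors = {
    "galactic habitable zone": 0.1,
    "suitable star and circumstellar habitable zone": 0.1,
    "terrestrial planet of the right mass": 0.2,
    "giant-planet guardian and large moon": 0.1,
    "magnetosphere and plate tectonics": 0.2,
    "eukaryotes, sex, and a Cambrian-style radiation": 0.01,
}

p_complex_life = 1.0
for requirement, p in factors.items():
    p_complex_life *= p

print(f"combined probability per candidate system: {p_complex_life:.1e}")
# With these placeholders: 4.0e-07, i.e. roughly one system in 2.5 million.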

The right location in the right kind of galaxy

Rare Earth suggests that much of the known universe, including large parts of our galaxy, is made up of "dead zones" unable to support complex life. Those parts of a galaxy where complex life is possible make up the galactic habitable zone, which is primarily characterized by distance from the Galactic Center.

  1. As that distance increases, star metallicity declines. Metals (which in astronomy refers to all elements other than hydrogen and helium) are necessary for the formation of terrestrial planets.
  2. The X-ray and gamma ray radiation from the black hole at the galactic center, and from nearby neutron stars, becomes less intense as distance increases. Thus the early universe, and present-day galactic regions where stellar density is high and supernovae are common, will be dead zones.
  3. Gravitational perturbation of planets and planetesimals by nearby stars becomes less likely as the density of stars decreases. Hence the further a planet lies from the Galactic Center or a spiral arm, the less likely it is to be struck by a large bolide which could extinguish all complex life on a planet.
Dense centers of galaxies such as NGC 7331 (often referred to as a "twin" of the Milky Way) have high radiation levels toxic to complex life.
 
According to Rare Earth, globular clusters are unlikely to support life.

Item #1 rules out the outermost reaches of a galaxy; #2 and #3 rule out galactic inner regions. Hence a galaxy's habitable zone may be a relatively narrow ring of adequate conditions sandwiched between its uninhabitable center and outer reaches.

Also, a habitable planetary system must maintain its favorable location long enough for complex life to evolve. A star with an eccentric (elliptical or hyperbolic) galactic orbit will pass through some spiral arms, unfavorable regions of high star density; thus a life-bearing star must have a galactic orbit that is nearly circular, with a close synchronization between the orbital velocity of the star and of the spiral arms. This further restricts the galactic habitable zone within a fairly narrow range of distances from the Galactic Center. Lineweaver et al. calculate this zone to be a ring 7 to 9 kiloparsecs in radius, including no more than 10% of the stars in the Milky Way, about 20 to 40 billion stars. Gonzalez et al. would halve these numbers; they estimate that at most 5% of stars in the Milky Way fall within the galactic habitable zone.

Approximately 77% of observed galaxies are spiral, two-thirds of all spiral galaxies are barred, and more than half, like the Milky Way, exhibit multiple arms. According to Rare Earth, our own galaxy is unusually quiet and dim (see below), representing just 7% of its kind. Even so, this would still represent more than 200 billion galaxies in the known universe.

Our galaxy also appears unusually favorable in suffering fewer collisions with other galaxies over the last 10 billion years, which can cause more supernovae and other disturbances. Also, the Milky Way's central black hole seems to have neither too much nor too little activity.

The orbit of the Sun around the center of the Milky Way is indeed almost perfectly circular, with a period of 226 Ma (million years), closely matching the rotational period of the galaxy. However, the majority of stars in barred spiral galaxies populate the spiral arms rather than the halo and tend to move in gravitationally aligned orbits, so there is little that is unusual about the Sun's orbit. While the Rare Earth hypothesis predicts that the Sun should rarely, if ever, have passed through a spiral arm since its formation, astronomer Karen Masters has calculated that the orbit of the Sun takes it through a major spiral arm approximately every 100 million years. Some researchers have suggested that several mass extinctions do indeed correspond with previous crossings of the spiral arms.

The right orbital distance from the right type of star

According to the hypothesis, Earth has an improbable orbit in the very narrow habitable zone (dark green) around the Sun.

The terrestrial example suggests that complex life requires liquid water, the maintenance of which requires an orbital distance neither too close nor too far from the central star, another scale of habitable zone or Goldilocks Principle. The habitable zone varies with the star's type and age.
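A common first-order way to see how the zone scales with the star is to hold the planet's equilibrium temperature fixed. Under that simplification, which ignores differences in albedo and greenhouse effect, the habitable distance grows as the square root of the stellar luminosity:

    T_{\mathrm{eq}} \propto \left( \frac{L_\star}{d^{2}} \right)^{1/4} \quad\Longrightarrow\quad d_{\mathrm{HZ}} \approx 1\,\mathrm{AU} \times \sqrt{ \frac{L_\star}{L_\odot} }

A star with a quarter of the Sun's luminosity therefore has its zone centered near 0.5 AU, while one four times as luminous pushes it out to roughly 2 AU.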

For advanced life, the star must also be highly stable, which is typical of the middle of a star's life; the Sun, for example, is about 4.6 billion years old. Proper metallicity and size are also important to stability. The Sun has a low (0.1%) luminosity variation. To date, no solar twin star, with an exact match of the Sun's luminosity variation, has been found, though some come close. The star must also have no stellar companions, as in binary systems, which would disrupt the orbits of any planets. Estimates suggest 50% or more of all star systems are binary. The habitable zone for a main sequence star very gradually moves out over its lifespan until the star becomes a white dwarf and the habitable zone vanishes.

The liquid water and other gases available in the habitable zone bring the benefit of greenhouse warming. Even though the Earth's atmosphere contains a water vapor concentration from 0% (in arid regions) to 4% (in rainforest and ocean regions) and, as of February 2018, only 408.05 parts per million of CO2, these small amounts suffice to raise the average surface temperature by about 40°C, with the dominant contribution being due to water vapor.
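The size of that warming can be checked with a standard zero-dimensional energy balance, using textbook values rather than anything specific to this article. With solar constant S₀ ≈ 1361 W/m², albedo A ≈ 0.3 and the Stefan-Boltzmann constant σ, the temperature at which the Earth would radiate with no greenhouse gases is

    T_{\mathrm{eff}} = \left[ \frac{S_0 (1 - A)}{4\sigma} \right]^{1/4} \approx \left[ \frac{1361 \times 0.7}{4 \times 5.67 \times 10^{-8}} \right]^{1/4} \approx 255\ \mathrm{K}

Compared with the observed mean surface temperature of about 288 K, this simple estimate gives a greenhouse warming of roughly 33 K, the same order as the figure quoted above.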

Rocky planets must orbit within the habitable zone for life to form. Although the habitable zone of such hot stars as Sirius or Vega is wide, hot stars also emit much more ultraviolet radiation that ionizes any planetary atmosphere. Such stars may also become red giants before advanced life evolves on their planets. These considerations rule out the massive and powerful stars of type F6 to O (see stellar classification) as homes to evolved metazoan life.

Conversely, small red dwarf stars have small habitable zones wherein planets are in tidal lock, with one very hot side always facing the star and another very cold side always facing away, and they are also at increased risk of solar flares (see Aurelia). Life probably cannot arise in such systems. Rare Earth proponents claim that only stars from F7 to K1 types are hospitable. Such stars are rare: G type stars such as the Sun (between the hotter F and cooler K) comprise only 9% of the hydrogen-burning stars in the Milky Way.

Such aged stars as red giants and white dwarfs are also unlikely to support life. Red giants are common in globular clusters and elliptical galaxies. White dwarfs are mostly dying stars that have already completed their red giant phase. Stars that become red giants expand into or overheat the habitable zones of their youth and middle age (though theoretically planets at much greater distances may then become habitable).

An energy output that varies over the lifetime of the star will likely prevent life (as with Cepheid variables, for example). A sudden decrease, even if brief, may freeze the water of orbiting planets, and a significant increase may evaporate it and cause a greenhouse effect that prevents the oceans from reforming.

All known life requires the complex chemistry of metallic elements. The absorption spectrum of a star reveals the presence of metals within, and studies of stellar spectra reveal that many, perhaps most, stars are poor in metals. Because heavy metals originate in supernova explosions, metallicity increases in the universe over time. Low metallicity characterizes the early universe: globular clusters and other stars that formed when the universe was young, stars in most galaxies other than large spirals, and stars in the outer regions of all galaxies. Metal-rich central stars capable of supporting complex life are therefore believed to be most common in the quiet suburbs of the larger spiral galaxies—where radiation also happens to be weak.

The right arrangement of planets around the star

Depiction of the Sun and planets of the Solar System and the sequence of planets. Rare Earth argues that without such an arrangement, in particular the presence of the massive gas giant Jupiter (the fifth planet from the Sun and the largest), complex life on Earth would not have arisen.

Rare Earth proponents argue that a planetary system capable of sustaining complex life must be structured more or less like the Solar System, with small, rocky inner planets and massive outer gas giants. Without the protection of such "celestial vacuum cleaner" planets with strong gravitational pulls, other planets would be subject to more frequent catastrophic asteroid collisions.

Observations of exoplanets have shown that arrangements of planets similar to the Solar System are rare. Most planetary systems have super-Earths, several times larger than Earth, close to their star, whereas the Solar System's inner region has only a few small rocky planets and none inside Mercury's orbit. Only 10% of stars have giant planets similar to Jupiter and Saturn, and those few rarely have stable, nearly circular orbits distant from their star. Konstantin Batygin and colleagues argue that these features can be explained if, early in the history of the Solar System, Jupiter and Saturn drifted towards the Sun, sending showers of planetesimals towards the super-Earths which sent them spiralling into the Sun, and ferrying icy building blocks into the terrestrial region of the Solar System which provided the building blocks for the rocky planets. The two giant planets then drifted out again to their present positions. In the view of Batygin and his colleagues: "The concatenation of chance events required for this delicate choreography suggest that small, Earth-like rocky planets – and perhaps life itself – could be rare throughout the cosmos."

A continuously stable orbit

Rare Earth argues that a gas giant also must not be too close to a body where life is developing. Close placement of one or more gas giants could disrupt the orbit of a potential life-bearing planet, either directly or by drifting into the habitable zone.

Newtonian dynamics can produce chaotic planetary orbits, especially in a system having large planets at high orbital eccentricity.

The need for stable orbits rules out stars with planetary systems that contain large planets with orbits close to the host star (called "hot Jupiters"). It is believed that hot Jupiters have migrated inwards to their current orbits. In the process, they would have catastrophically disrupted the orbits of any planets in the habitable zone. To exacerbate matters, hot Jupiters are much more common orbiting F and G class stars.

A terrestrial planet of the right size

Planets of the Solar System, shown to scale. Rare Earth argues that complex life cannot exist on large gaseous planets like Jupiter and Saturn (top row) or Uranus and Neptune (top middle) or smaller planets such as Mars and Mercury.

The Rare Earth hypothesis argues that complex life requires a terrestrial planet like Earth, and since gas giants lack a solid surface, complex life cannot arise there.

A planet that is too small cannot maintain much atmosphere, rendering its surface temperature low and variable and oceans impossible. A small planet will also tend to have a rough surface, with large mountains and deep canyons. The core will cool faster, and plate tectonics may be brief or entirely absent. A planet that is too large will retain too dense an atmosphere, like Venus. Although Venus is similar in size and mass to Earth, its surface atmospheric pressure is 92 times that of Earth, and its surface temperature is 735 K (462°C; 863°F). The early Earth once had a similar atmosphere, but may have lost it in the giant impact event which formed the Moon.

Plate tectonics

The Great American Interchange on Earth, approximately 3.5 to 3 Ma, an example of species competition, resulting from continental plate interaction
 
An artist's rendering of the structure of Earth's magnetic field (magnetosphere), which protects Earth's life from solar radiation. 1) Bow shock. 2) Magnetosheath. 3) Magnetopause. 4) Magnetosphere. 5) Northern tail lobe. 6) Southern tail lobe. 7) Plasmasphere.

Rare Earth proponents argue that plate tectonics and a strong magnetic field are essential for biodiversity, global temperature regulation, and the carbon cycle. They cite the lack of mountain chains elsewhere in the Solar System as direct evidence that Earth is the only body with plate tectonics, and thus the only nearby body capable of supporting life.

Plate tectonics depend on the right chemical composition and a long-lasting source of heat from radioactive decay. Continents must be made of less dense felsic rocks that "float" on underlying denser mafic rock. Taylor emphasizes that tectonic subduction zones require the lubrication of oceans of water. Plate tectonics also provide a means of biochemical cycling.

Plate tectonics, and the resulting continental drift and creation of separate landmasses, would create diversified ecosystems and biodiversity, one of the strongest defenses against extinction. An example of species diversification and later competition on Earth's continents is the Great American Interchange. North and Middle America drifted into South America around 3.5 to 3 Ma. The fauna of South America had already evolved separately for about 30 million years, since Antarctica separated, but after the merger many species were wiped out, mainly in South America, by competing North American animals.

A large moon

Tide pools resulting from the tidal interactions of the Moon are said to have promoted the evolution of complex life.

The Moon is unusual because the other rocky planets in the Solar System either have no satellites (Mercury and Venus) or only relatively tiny satellites that are probably captured asteroids (Mars). The Moon is also the largest natural satellite in the Solar System relative to the size of its planet, its diameter being about 27% of Earth's.

The giant-impact theory hypothesizes that the Moon resulted from the impact of a Mars-sized body, dubbed Theia, with the young Earth. This giant impact also gave the Earth its axial tilt (inclination) and velocity of rotation. Rapid rotation reduces the daily variation in temperature and makes photosynthesis viable. The Rare Earth hypothesis further argues that the axial tilt cannot be too large or too small (relative to the orbital plane). A planet with a large tilt will experience extreme seasonal variations in climate. A planet with little or no tilt will lack the stimulus to evolution that climate variation provides. In this view, the Earth's tilt is "just right". The gravity of a large satellite also stabilizes the planet's tilt; without this effect the variation in tilt would be chaotic, probably making complex life forms on land impossible.

If the Earth had no Moon, the ocean tides resulting solely from the Sun's gravity would be only half that of the lunar tides. A large satellite gives rise to tidal pools, which may be essential for the formation of complex life, though this is far from certain.

A large satellite also increases the likelihood of plate tectonics through the effect of tidal forces on the planet's crust. The impact that formed the Moon may also have initiated plate tectonics, without which the continental crust would cover the entire planet, leaving no room for oceanic crust. It is possible that the large-scale mantle convection needed to drive plate tectonics could not have emerged in the absence of crustal inhomogeneity. A further theory indicates that such a large moon may also contribute to maintaining a planet's magnetic shield by continually acting as a dynamo upon the metallic planetary core, thus protecting the surface of the planet from charged particles and cosmic rays, and helping to ensure the atmosphere is not stripped away over time by the solar wind.

An atmosphere

Earth's atmosphere

A terrestrial planet must be the right size, like Earth and Venus, in order to retain an atmosphere. On Earth, once the giant impact of Theia thinned Earth's atmosphere, other events were needed to make the atmosphere capable of sustaining life. The Late Heavy Bombardment reseeded Earth with water lost after the impact of Theia. The development of an ozone layer generated a protective shield against ultraviolet (UV) sunlight. Nitrogen and carbon dioxide are needed in a correct ratio for life to form. Lightning is needed for nitrogen fixation. The gaseous carbon dioxide needed for life comes from sources such as volcanoes and geysers. Carbon dioxide is only needed at relatively low levels (currently at approximately 400 ppm on Earth); at high levels it is poisonous. Precipitation is needed to have a stable water cycle. A proper atmosphere must reduce diurnal temperature variation.

One or more evolutionary triggers for complex life

This diagram illustrates the twofold cost of sex. If each individual were to contribute to the same number of offspring (two), (a) the sexual population remains the same size each generation, whereas (b) the asexual population doubles in size each generation.

Regardless of whether planets with similar physical attributes to the Earth are rare or not, some argue that life tends not to evolve into anything more complex than simple bacteria without being provoked by rare and specific circumstances. Biochemist Nick Lane argues that simple cells (prokaryotes) emerged soon after Earth's formation, but since almost half of the planet's history had passed before they evolved into complex cells (eukaryotes), all of which share a common ancestor, this event can only have happened once. According to some views, prokaryotes lack the cellular architecture to evolve into eukaryotes because a bacterium expanded up to eukaryotic proportions would have tens of thousands of times less energy available to power its metabolism. Two billion years ago, one simple cell incorporated itself into another, multiplied, and evolved into mitochondria, which supplied the vast increase in available energy that enabled the evolution of complex eukaryotic life. If this incorporation occurred only once in four billion years or is otherwise unlikely, then life on most planets remains simple. An alternative view is that the evolution of mitochondria was environmentally triggered, and that mitochondria-containing organisms appeared soon after the first traces of atmospheric oxygen.

The evolution and persistence of sexual reproduction is another mystery in biology. The purpose of sexual reproduction is unclear, as in many organisms it has a 50% cost (fitness disadvantage) in relation to asexual reproduction. Mating types (types of gametes, according to their compatibility) may have arisen as a result of anisogamy (gamete dimorphism), or the male and female sexes may have evolved before anisogamy. It is also unknown why most sexual organisms use a binary mating system, and why some organisms have gamete dimorphism. Charles Darwin was the first to suggest that sexual selection drives speciation; without it, complex life would probably not have evolved.

The right time in evolutionary history

Timeline of evolution; human writing exists for only 0.000218% of Earth's history.

While life on Earth is regarded as having arisen relatively early in the planet's history, the evolution from multicellular to intelligent organisms took around 800 million years. Civilizations on Earth have existed for about 12,000 years, and radio communication reaching space has existed for little more than 100 years. Relative to the age of the Solar System (~4.57 Ga) this is a short time, during which extreme climatic variations, supervolcanoes, and large meteorite impacts were absent. Such events would severely harm intelligent life, as well as life in general. For example, the Permian–Triassic mass extinction, caused by widespread and continuous volcanic eruptions in an area the size of Western Europe, led to the extinction of 95% of known species around 251.2 Ma ago. The Chicxulub impact at the Cretaceous–Paleogene boundary (~65.5 Ma) on the Yucatán peninsula in Mexico led to a mass extinction of the most advanced species at that time.

Rare Earth equation

The following discussion is adapted from Cramer. The Rare Earth equation is Ward and Brownlee's riposte to the Drake equation. It calculates N, the number of Earth-like planets in the Milky Way having complex life forms, as:

N = N* · n_e · f_g · f_p · f_pm · f_i · f_c · f_l · f_m · f_j · f_me

According to Rare Earth, the Cambrian explosion that saw extreme diversification of chordata from simple forms like Pikaia (pictured) was an improbable event

where:

  • N* is the number of stars in the Milky Way. This number is not well-estimated, because the Milky Way's mass is not well estimated, with little information about the number of small stars. N* is at least 100 billion, and may be as high as 500 billion, if there are many low visibility stars.
  • n_e is the average number of planets in a star's habitable zone. This zone is fairly narrow, being constrained by the requirement that the average planetary temperature be consistent with water remaining liquid throughout the time required for complex life to evolve. Thus, n_e = 1 is a likely upper bound.

We assume N* · n_e = 5 × 10^11. The Rare Earth hypothesis can then be viewed as asserting that the product of the other nine Rare Earth equation factors listed below, which are all fractions, is no greater than 10^−10 and could plausibly be as small as 10^−12 (a toy calculation of this product appears in the sketch after the list below). In the latter case, N could be as small as 0 or 1. Ward and Brownlee do not actually calculate the value of N, because the numerical values of quite a few of the factors below can only be conjectured. They cannot be estimated simply because we have but one data point: the Earth, a rocky planet orbiting a G2 star in a quiet suburb of a large barred spiral galaxy, and the home of the only intelligent species we know; namely, ourselves.

  • f_g is the fraction of stars in the galactic habitable zone (Ward, Brownlee, and Gonzalez estimate this factor as 0.1).
  • f_p is the fraction of stars in the Milky Way with planets.
  • f_pm is the fraction of planets that are rocky ("metallic") rather than gaseous.
  • f_i is the fraction of habitable planets where microbial life arises. Ward and Brownlee believe this fraction is unlikely to be small.
  • f_c is the fraction of planets where complex life evolves. For 80% of the time since microbial life first appeared on the Earth, there was only bacterial life. Hence Ward and Brownlee argue that this fraction may be small.
  • f_l is the fraction of the total lifespan of a planet during which complex life is present. Complex life cannot endure indefinitely, because the energy put out by the sort of star that allows complex life to emerge gradually rises, and the central star eventually becomes a red giant, engulfing all planets in the planetary habitable zone. Also, given enough time, a catastrophic extinction of all complex life becomes ever more likely.
  • f_m is the fraction of habitable planets with a large moon. If the giant impact theory of the Moon's origin is correct, this fraction is small.
  • f_j is the fraction of planetary systems with large Jovian planets. This fraction could be large.
  • f_me is the fraction of planets with a sufficiently low number of extinction events. Ward and Brownlee argue that the low number of such events the Earth has experienced since the Cambrian explosion may be unusual, in which case this fraction would be small.
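
As a rough illustration of how the equation behaves, the short Python sketch below multiplies the stellar term by the nine fractional factors. The specific values assigned to the fractions are illustrative assumptions only (Ward and Brownlee do not supply them); apart from f_g (0.1) and the assumed N* · n_e (5 × 10^11) from the text, every number is a placeholder chosen merely so that the product lands near the 10^−10 bound discussed above.

# Toy sketch of the Rare Earth equation described above.
# All fractional values are illustrative assumptions, not figures from
# Ward and Brownlee; only f_g (0.1) and the stellar term (5e11) come from the text.

def rare_earth_estimate(stellar_term, fractions):
    """Multiply N* x n_e by the nine fractional factors."""
    n = stellar_term
    for f in fractions:
        n *= f
    return n

stellar_term = 5e11            # assumed N* x n_e, as in the text
factors = {
    "f_g":  0.1,    # stars in the galactic habitable zone (estimate cited in the text)
    "f_p":  0.5,    # stars with planets (assumed)
    "f_pm": 0.3,    # rocky rather than gaseous planets (assumed)
    "f_i":  0.3,    # habitable planets where microbial life arises (assumed)
    "f_c":  0.01,   # planets where complex life evolves (assumed small)
    "f_l":  0.1,    # fraction of lifespan with complex life present (assumed)
    "f_m":  0.01,   # habitable planets with a large moon (assumed small)
    "f_j":  0.1,    # systems with large Jovian planets (assumed)
    "f_me": 0.01,   # planets with few extinction events (assumed small)
}

N = rare_earth_estimate(stellar_term, factors.values())
print(f"product of the nine fractions: {N / stellar_term:.1e}")   # ~4.5e-11
print(f"estimated planets with complex life: {N:.1f}")            # ~22.5

With these particular guesses the product of the nine fractions is about 4.5 × 10^−11, giving roughly 20 such planets in the galaxy; slightly smaller guesses quickly drive N toward 0 or 1, which is the point of the hypothesis.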

The Rare Earth equation, unlike the Drake equation, does not factor in the probability that complex life evolves into intelligent life that discovers technology. Barrow and Tipler review the consensus among evolutionary biologists that the evolutionary path from primitive Cambrian chordates (e.g., Pikaia) to Homo sapiens was a highly improbable event. For example, the large brains of humans have marked adaptive disadvantages, requiring as they do an expensive metabolism, a long gestation period, and a childhood lasting more than 25% of the average total life span. Other improbable features of humans include:

  • Being one of a handful of extant bipedal land (non-avian) vertebrates. Combined with unusual eye–hand coordination, this permits dexterous manipulation of the physical environment with the hands;
  • A vocal apparatus far more expressive than that of any other mammal, enabling speech. Speech makes it possible for humans to interact cooperatively, to share knowledge, and to acquire a culture;
  • The capability of formulating abstractions to a degree permitting the invention of mathematics, and the discovery of science and technology. Only recently did humans acquire anything like their current scientific and technological sophistication.

Advocates

Writers who support the Rare Earth hypothesis:

  • Stuart Ross Taylor, a specialist on the Solar System, firmly believes in the hypothesis. Taylor concludes that the Solar System is probably unusual, because it resulted from so many chance factors and events.
  • Stephen Webb, a physicist, mainly presents and rejects candidate solutions for the Fermi paradox. The Rare Earth hypothesis emerges as one of the few solutions left standing by the end of the book.
  • Simon Conway Morris, a paleontologist, endorses the Rare Earth hypothesis in chapter 5 of his Life's Solution: Inevitable Humans in a Lonely Universe, and cites Ward and Brownlee's book with approval.
  • John D. Barrow and Frank J. Tipler (1986, sections 3.2, 8.7, 9), cosmologists, vigorously defend the hypothesis that humans are likely to be the only intelligent life in the Milky Way, and perhaps the entire universe. But this hypothesis is not central to their book The Anthropic Cosmological Principle, a thorough study of the anthropic principle and of how the laws of physics are peculiarly suited to enable the emergence of complexity in nature.
  • Ray Kurzweil, a computer pioneer and self-proclaimed Singularitarian, argues in The Singularity Is Near that the coming Singularity requires that Earth be the first planet on which sapient, technology-using life evolved. Although other Earth-like planets could exist, Earth must be the most evolutionarily advanced, because otherwise we would have seen evidence that another culture had experienced the Singularity and expanded to harness the full computational capacity of the physical universe.
  • John Gribbin, a prolific science writer, defends the hypothesis in Alone in the Universe: Why our planet is unique.
  • Guillermo Gonzalez, an astrophysicist who supports the concept of the galactic habitable zone, uses the hypothesis in his book The Privileged Planet to promote the concept of intelligent design.
  • Michael H. Hart, an astrophysicist who proposed a narrow habitable zone based on climate studies, edited the influential book Extraterrestrials: Where Are They? and authored one of its chapters, "Atmospheric Evolution, the Drake Equation and DNA: Sparse Life in an Infinite Universe".
  • Howard Alan Smith, astrophysicist and author of 'Let there be light: modern cosmology and Kabbalah: a new conversation between science and religion'.
  • Marc J. Defant, professor of geochemistry and volcanology, elaborated on several aspects of the Rare Earth hypothesis in his TEDx talk, Why We Are Alone in the Galaxy.
  • Brian Cox, a physicist and popular science celebrity, confesses his support for the hypothesis in his BBC production Human Universe.

Criticism

Cases against the Rare Earth hypothesis take various forms.

The hypothesis appears anthropocentric

The hypothesis concludes, more or less, that complex life is rare because it can evolve only on the surface of an Earth-like planet or on a suitable satellite of a planet. Some biologists, such as Jack Cohen, believe this assumption too restrictive and unimaginative; they see it as a form of circular reasoning.

According to David Darling, the Rare Earth hypothesis is neither hypothesis nor prediction, but merely a description of how life arose on Earth. In his view, Ward and Brownlee have done nothing more than select the factors that best suit their case.

What matters is not whether there's anything unusual about the Earth; there's going to be something idiosyncratic about every planet in space. What matters is whether any of Earth's circumstances are not only unusual but also essential for complex life. So far we've seen nothing to suggest there is.

Critics also argue that there is a link between the Rare Earth hypothesis and the unscientific idea of intelligent design.

Exoplanets around main sequence stars are being discovered in large numbers

An increasing number of extrasolar planets are being discovered, with 4,834 planets in 3,572 planetary systems known as of 1 September 2021. Rare Earth proponents argue that life cannot arise outside Sun-like systems, due to tidal locking and ionizing radiation outside the F7–K1 range. However, some exobiologists have suggested that stars outside this range may give rise to life under the right circumstances; this possibility is a central point of contention for the theory because these late-K and M category stars make up about 82% of all hydrogen-burning stars.

Current technology limits the testing of important Rare Earth criteria: surface water, tectonic plates, a large moon and biosignatures are currently undetectable. Though planets the size of Earth are difficult to detect and classify, scientists now think that rocky planets are common around Sun-like stars. The Earth Similarity Index (ESI) of mass, radius and temperature provides a means of measurement, but falls short of the full Rare Earth criteria.

Rocky planets orbiting within habitable zones may not be rare

Planets similar to Earth in size are being found in relatively large number in the habitable zones of similar stars. The 2015 infographic depicts Kepler-62e, Kepler-62f, Kepler-186f, Kepler-296e, Kepler-296f, Kepler-438b, Kepler-440b, Kepler-442b, Kepler-452b.

Some argue that Rare Earth's estimates of rocky planets in habitable zones (n_e in the Rare Earth equation) are too restrictive. James Kasting cites the Titius–Bode law to contend that it is a misnomer to describe habitable zones as narrow when there is a 50% chance of at least one planet orbiting within one. In 2013, astronomers using the Kepler space telescope's data estimated that about one-fifth of G-type and K-type stars (Sun-like stars and orange dwarfs) are expected to have an Earth-sized or super-Earth-sized planet (1–2 Earths wide) close to an Earth-like orbit (receiving roughly 0.25–4 times Earth's stellar flux), yielding about 8.8 billion such planets for the entire Milky Way Galaxy.
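
The 8.8 billion figure is simple arithmetic: the Kepler-based fraction of roughly one-fifth multiplied by an assumed count of G- and K-type stars in the galaxy. A minimal back-of-envelope sketch, where the 40 billion star count is an assumption chosen only to illustrate the quoted result:

# Back-of-envelope reproduction of the ~8.8 billion figure quoted above.
# The number of G- and K-type stars in the Milky Way is an assumed input
# (a few tens of billions), not a value given in the text.

fraction_with_earth_size_planet = 0.22   # "about one-fifth" of G and K stars
gk_stars_in_milky_way = 4.0e10           # assumed ~40 billion Sun-like and orange dwarf stars

earth_size_planets = fraction_with_earth_size_planet * gk_stars_in_milky_way
print(f"Earth-sized planets near Earth-like orbits: {earth_size_planets:.1e}")  # ~8.8e9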

Uncertainty over Jupiter's role

The requirement for a system to have a Jovian planet as protector (Rare Earth equation factor f_j) has been challenged, affecting the number of proposed extinction events (Rare Earth equation factor f_me). Kasting's 2001 review of Rare Earth questions whether a Jupiter protector has any bearing on the incidence of complex life. Computer modelling, including the 2005 Nice model and 2007 Nice 2 model, yields inconclusive results in relation to Jupiter's gravitational influence and impacts on the inner planets. A study by Horner and Jones (2008) using computer simulation found that while the total effect on all orbital bodies within the Solar System is unclear, Jupiter has caused more impacts on Earth than it has prevented. Lexell's Comet, whose 1770 near miss passed closer to Earth than any other comet in recorded history, was known to have been sent towards Earth by the gravitational influence of Jupiter. Grazier (2017) argues that the idea of Jupiter as a shield is a misinterpretation of a 1996 study by George Wetherill, and using computer models Grazier was able to demonstrate that Saturn protects Earth from more asteroids and comets than does Jupiter.

Plate tectonics may not be unique to Earth or a requirement for complex life

Geological discoveries like the active features of Pluto's Tombaugh Regio appear to contradict the argument that geologically active worlds like Earth are rare.

Ward and Brownlee argue that for complex life to evolve (Rare Earth equation factor f_c), tectonics must be present to generate biogeochemical cycles, and predicted that such geological features would not be found outside of Earth, pointing to a lack of observable mountain ranges and subduction. There is, however, no scientific consensus on the evolution of plate tectonics on Earth. Though it is believed that tectonic motion first began around three billion years ago, by this time photosynthesis and oxygenation had already begun. Furthermore, recent studies point to plate tectonics as an episodic planetary phenomenon, and suggest that life may evolve during periods of "stagnant-lid" rather than plate tectonic states.

Recent evidence also points to similar activity either having occurred or continuing to occur elsewhere. The geology of Pluto, for example, described by Ward and Brownlee as "without mountains or volcanoes ... devoid of volcanic activity", has since been found to be quite the contrary, with a geologically active surface possessing organic molecules and mountain ranges like Tenzing Montes and Hillary Montes comparable in relative size to those of Earth, and observations suggest the involvement of endogenic processes. Plate tectonics has been suggested as a hypothesis for the Martian dichotomy, and in 2012 geologist An Yin put forward evidence for active plate tectonics on Mars. Europa has long been suspected to have plate tectonics, and in 2014 NASA announced evidence of active subduction. As with Europa, analysis of the surface of Jupiter's largest moon Ganymede, which shows strike-slip faulting and surface materials of possible endogenic origin, suggests that plate tectonics has also taken place there. In 2017, scientists studying the geology of Charon confirmed that icy plate tectonics also operated on Pluto's largest moon. Since 2017, several studies of the geodynamics of Venus have found that, contrary to the view that its lithosphere is static, it is actually being deformed by active processes similar to plate tectonics, though with less subduction, implying that geodynamics are not a rare occurrence in Earth-sized bodies.

Kasting suggests that there is nothing unusual about the occurrence of plate tectonics and surface liquid water on large rocky planets, as most should generate internal heat even without the assistance of radioactive elements. Studies by Valencia and Cowan suggest that plate tectonics may be inevitable for terrestrial planets Earth-sized or larger, that is, super-Earths, which are now known to be more common in planetary systems.

Free oxygen may be neither rare nor a prerequisite for multicellular life

Animals of the genus Spinoloricus are thought to defy the paradigm that all animal life on Earth needs oxygen.

The hypothesis that molecular oxygen, necessary for animal life, is rare and that a Great Oxygenation Event (a factor in the Rare Earth equation) could only have been triggered and sustained by tectonics appears to have been invalidated by more recent discoveries.

Ward and Brownlee ask "whether oxygenation, and hence the rise of animals, would ever have occurred on a world where there were no continents to erode". Extraterrestrial free oxygen has recently been detected around other solid objects, including Mercury, Venus, Mars, Jupiter's four Galilean moons, Saturn's moons Enceladus, Dione and Rhea and even the atmosphere of a comet. This has led scientists to speculate whether processes other than photosynthesis could be capable of generating an environment rich in free oxygen. Wordsworth (2014) concludes that oxygen generated other than through photodissociation may be likely on Earth-like exoplanets, and could actually lead to false positive detections of life. Narita (2015) suggests photocatalysis by titanium dioxide as a geochemical mechanism for producing oxygen atmospheres.

Since Ward & Brownlee's assertion that "there is irrefutable evidence that oxygen is a necessary ingredient for animal life", anaerobic metazoa have been found that do indeed metabolise without oxygen. Spinoloricus cinziae, for example, a species discovered in the hypersaline anoxic L'Atalante basin at the bottom of the Mediterranean Sea in 2010, appears to metabolise with hydrogen, lacking mitochondria and instead using hydrogenosomes. Studies since 2015 of the eukaryotic genus Monocercomonoides, which lacks mitochondrial organelles, are also significant, as there are no detectable signs that mitochondria were ever part of the organism. Since then, further eukaryotes, particularly parasites, have been identified as completely lacking a mitochondrial genome, such as Henneguya zschokkei, reported in 2020. Further investigation into the alternative metabolic pathways used by these organisms appears to present further problems for the premise.

Stevenson (2015) has proposed other membrane alternatives for complex life in worlds without oxygen. In 2017, scientists from the NASA Astrobiology Institute discovered the necessary chemical preconditions for the formation of azotosomes on Saturn's moon Titan, a world that lacks atmospheric oxygen. Independent studies by Schirrmeister and by Mills concluded that Earth's multicellular life existed prior to the Great Oxygenation Event, not as a consequence of it.

NASA scientists Hartman and McKay argue that plate tectonics may in fact slow the rise of oxygenation (and thus stymie complex life rather than promote it). Computer modelling by Tilman Spohn in 2014 found that plate tectonics on Earth may have arisen from the effects of complex life's emergence, rather than the other way around as the Rare Earth hypothesis might suggest. The action of lichens on rock may have contributed to the formation of subduction zones in the presence of water. Kasting argues that if oxygenation caused the Cambrian explosion, then any planet with oxygen-producing photosynthesis should have complex life.

A magnetosphere may not be rare or a requirement

The importance of Earth's magnetic field to the development of complex life has been disputed. The origin of Earth's magnetic field remains a mystery, though the presence of a magnetosphere appears to be relatively common for larger planetary-mass objects, as all Solar System planets larger than Earth possess one. There is increasing evidence of present or past magnetic activity even in bodies as small as the Moon, Ganymede, Mercury and Mars. Without sufficient measurements, present studies rely heavily on modelling methods developed in 2006 by Olson & Christensen to predict field strength. Using a sample of 496 planets, such models predict Kepler-186f to be one of the few Earth-sized planets that would support a magnetosphere (though such a field around this planet has not yet been confirmed). However, recent empirical evidence points to the occurrence of much larger and more powerful fields than those found in our Solar System, some of which cannot be explained by these models.

Kasting argues that the atmosphere provides sufficient protection against cosmic rays even during times of magnetic pole reversal and atmosphere loss by sputtering. Kasting also dismisses the role of the magnetic field in the evolution of eukaryotes, citing the age of the oldest known magnetofossils.

A large moon may be neither rare nor necessary

The requirement of a large moon (Rare Earth equation factor f_m) has also been challenged. Even if it were required, such an occurrence may not be as unique as predicted by the Rare Earth hypothesis. Recent work by Edward Belbruno and J. Richard Gott of Princeton University suggests that giant impactors such as the one that may have formed the Moon can indeed form at a planet's trojan points (the L4 or L5 Lagrangian points), which means that similar circumstances may occur in other planetary systems.

Collision between two planetary bodies (artist concept).

The assertion that the Moon's stabilization of Earth's obliquity and spin is a requirement for complex life has been questioned. Kasting argues that a moonless Earth would still possess habitats with climates suitable for complex life, and questions whether the spin rate of a moonless Earth can be predicted. Although the giant impact theory posits that the impact forming the Moon increased Earth's rotational speed to make a day about 5 hours long, the Moon has since slowly "stolen" much of this speed, lengthening Earth's solar day to about 24 hours, and continues to do so: in 100 million years Earth's solar day will be roughly 24 hours 38 minutes (the same as Mars's solar day); in 1 billion years, 30 hours 23 minutes. Larger secondary bodies would exert proportionally larger tidal forces that would in turn decelerate their primaries faster, potentially increasing the solar day of a planet in all other respects like Earth to over 120 hours within a few billion years. Such a long solar day would make effective heat dissipation for organisms in the tropics and subtropics extremely difficult, in a similar manner to tidal locking to a red dwarf star. Short days (high rotation speed) cause high wind speeds at ground level. Long days (slow rotation speed) cause day and night temperatures to be too extreme.
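
The day lengths quoted above are consistent with a simple constant-rate extrapolation of the present tidal slowing of Earth's rotation, roughly 2.3 milliseconds per century. The small Python sketch below is a naive linear estimate under that assumed rate; real tidal evolution is not exactly linear, so this is only an illustration of the arithmetic.

# Naive extrapolation of Earth's solar day length, assuming a constant
# tidal lengthening rate of ~2.3 ms per century (an assumption; the true
# rate varies over geological time).

SECONDS_PER_HOUR = 3600
CURRENT_DAY_S = 24 * SECONDS_PER_HOUR      # present solar day in seconds
LENGTHENING_S_PER_CENTURY = 2.3e-3         # ~2.3 ms of lengthening per century

def day_length_after(years):
    """Return (hours, minutes) of the solar day after the given number of years."""
    future_day_s = CURRENT_DAY_S + LENGTHENING_S_PER_CENTURY * (years / 100)
    hours, remainder = divmod(future_day_s, SECONDS_PER_HOUR)
    return int(hours), int(remainder // 60)

print(day_length_after(100e6))   # -> (24, 38): about 24 h 38 min in 100 million years
print(day_length_after(1e9))     # -> (30, 23): about 30 h 23 min in 1 billion years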

Many Rare Earth proponents argue that the Earth's plate tectonics would probably not exist if not for the tidal forces of the Moon. The hypothesis that the Moon's tidal influence initiated or sustained Earth's plate tectonics remains unproven, though at least one study implies a temporal correlation to the formation of the Moon. Evidence for the past existence of plate tectonics on planets like Mars which may never have had a large moon would counter this argument. Kasting argues that a large moon is not required to initiate plate tectonics.

Complex life may arise in alternative habitats

Complex life may exist in environments similar to black smokers on Earth.

Rare Earth proponents argue that simple life may be common, though complex life requires specific environmental conditions to arise. Critics contend that life could arise on a moon of a gas giant, though this is less likely if life requires volcanicity. Such a moon would need enough tidal stress to induce tidal heating, but not so much as seen on Jupiter's Io. However, the moon may lie within the gas giant's intense radiation belts, sterilizing any biodiversity before it can become established. Dirk Schulze-Makuch disputes this, hypothesizing alternative biochemistries for alien life. While Rare Earth proponents argue that only microbial extremophiles could exist in subsurface habitats beyond Earth, some argue that complex life can also arise in these environments. Examples of extremophile animals are sometimes cited by critics as complex life capable of thriving in "alien" environments: Hesiocaeca methanicola, an animal that inhabits ocean-floor methane clathrates, substances more commonly found in the outer Solar System; the tardigrades, which can survive in the vacuum of space; and Halicephalobus mephisto, which exists under crushing pressure, scorching temperatures and extremely low oxygen levels 3.6 kilometres deep in the Earth's crust. Jill Tarter counters the classic objection that these species adapted to these environments rather than arose in them by suggesting that we cannot assume conditions for the emergence of life that are not actually known. There are suggestions that complex life could arise in subsurface conditions similar to those in which life may have arisen on Earth, such as the tidally heated subsurfaces of Europa or Enceladus. Ancient vent ecosystems such as these support complex life on Earth, such as Riftia pachyptila, that exists completely independently of the surface biosphere.
