
Sunday, May 16, 2021

Self-replicating machine

From Wikipedia, the free encyclopedia
A simple form of machine self-replication

A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, and John von Neumann, and in more recent times by K. Eric Drexler in his book on nanotechnology, Engines of Creation (which coined the term clanking replicator for such machines), and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines, which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, von Neumann's Self-Reproducing Automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell.

A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. Although the idea was suggested more than 70 years ago, no working self-replicating machine has yet been built. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macroscale replicating systems from the microscopic nanorobots or "assemblers" that nanotechnology may make possible, but the term is informal and is rarely used by others in popular or technical discussions. Replicators have also been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. However, the term "von Neumann machine" is less specific and also refers to a completely unrelated computer architecture that von Neumann proposed, so its use is discouraged where accuracy is important. Von Neumann himself used the term universal constructor to describe such self-replicating machines.

Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves" by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then assemble the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication". In contrast, machines that are truly autonomously self-replicating (like biological machines) are the main subject discussed here.

History

The general concept of artificial machines capable of producing copies of themselves dates back at least several hundred years. An early reference is an anecdote regarding the philosopher René Descartes, who suggested to Queen Christina of Sweden that the human body could be regarded as a machine; she responded by pointing to a clock and ordering "see to it that it reproduces offspring." Several other variations on this anecdotal response also exist. Samuel Butler proposed in his 1872 novel Erewhon that machines were already capable of reproducing themselves but it was man who made them do so, and added that "machines which reproduce machinery do not reproduce machines after their own kind". In George Eliot's 1879 book Impressions of Theophrastus Such, a series of essays that she wrote in the character of a fictional scholar named Theophrastus, the essay "Shadows of the Coming Race" speculated about self-replicating machines, with Theophrastus asking "how do I know that they may not be ultimately made to carry, or may not in themselves evolve, conditions of self-supply, self-repair, and reproduction".

In 1802 William Paley formulated the first known teleological argument depicting machines producing other machines, suggesting that the question of who originally made a watch was rendered moot if it were demonstrated that the watch was able to manufacture a copy of itself. Scientific study of self-reproducing machines was anticipated by John Bernal as early as 1929 and by mathematicians such as Stephen Kleene who began developing recursion theory in the 1930s. Much of this latter work was motivated by interest in information processing and algorithms rather than physical implementation of such a system, however. In the course of the 1950s, suggestions of several increasingly simple mechanical systems capable of self-reproduction were made — notably by Lionel Penrose.

Von Neumann's kinematic model

A detailed conceptual proposal for a self-replicating machine was first put forward by mathematician John von Neumann in lectures delivered in 1948 and 1949, when he proposed a kinematic model of self-reproducing automata as a thought experiment. Von Neumann's concept of a physical self-replicating machine was dealt with only abstractly, with the hypothetical machine using a "sea" or stockroom of spare parts as its source of raw materials. The machine had a program stored on a memory tape that directed it to retrieve parts from this "sea" using a manipulator, assemble them into a duplicate of itself, and then copy the contents of its memory tape into the duplicate's empty tape. The machine was envisioned as consisting of as few as eight different types of components: four logic elements that send and receive stimuli, and four mechanical elements used to provide a structural skeleton and mobility. While the model was qualitatively sound, von Neumann was evidently dissatisfied with it because of the difficulty of analyzing it with mathematical rigor. He went on instead to develop an even more abstract model self-replicator based on cellular automata. His original kinematic concept remained obscure until it was popularized in a 1955 issue of Scientific American.

Von Neumann's goal for his self-reproducing automata theory, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically, akin to biological organisms under natural selection. He asked what threshold of complexity must be crossed before machines are able to evolve. His answer was to design an abstract machine which, when run, would replicate itself. Notably, his design implies that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell.
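This separation can be made concrete with a toy Python sketch. The data structures below are invented for illustration and are not von Neumann's formalism; the point is that the description (tape) is used in two distinct ways: interpreted once to construct the offspring's body, and copied uninterpreted to become the offspring's own inherited tape.

    # Toy model of von Neumann's architecture. The tape (description) is:
    #   (1) interpreted, to fabricate the machine's parts, and
    #   (2) copied verbatim, to give the offspring its own inherited tape.
    # All names here are illustrative.

    def build_body(tape):
        """Interpret the tape: each symbol names a part to fabricate."""
        return [f"part:{symbol}" for symbol in tape]

    def replicate(machine):
        """Offspring gets a freshly built body plus an uninterpreted copy of the tape."""
        tape = machine["tape"]
        return {"body": build_body(tape), "tape": list(tape)}

    parent = {"tape": ["A", "B", "B", "C"]}
    parent["body"] = build_body(parent["tape"])
    child = replicate(parent)
    assert child["body"] == parent["body"] and child["tape"] == parent["tape"]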

Moore's artificial living plants

In 1956 mathematician Edward F. Moore proposed the first known suggestion for a practical real-world self-replicating machine, also published in Scientific American. Moore's "artificial living plants" were proposed as machines able to use air, water and soil as sources of raw materials and to draw their energy from sunlight via a solar battery or a steam engine. He chose the seashore as an initial habitat for such machines, giving them easy access to the chemicals in seawater, and suggested that later generations of the machine could be designed to float freely on the ocean's surface as self-replicating factory barges or to be placed in barren desert terrain that was otherwise useless for industrial purposes. The self-replicators would be "harvested" for their component parts, to be used by humanity in other non-replicating machines.

Dyson's replicating systems

The next major development of the concept of self-replicating machines was a series of thought experiments proposed by physicist Freeman Dyson in his 1970 Vanuxem Lecture. He proposed three large-scale applications of machine replicators. First was to send a self-replicating system to Saturn's moon Enceladus, which in addition to producing copies of itself would also be programmed to manufacture and launch solar sail-propelled cargo spacecraft. These spacecraft would carry blocks of Enceladean ice to Mars, where they would be used to terraform the planet. His second proposal was a solar-powered factory system designed for a terrestrial desert environment, and his third was an "industrial development kit" based on this replicator that could be sold to developing countries to provide them with as much industrial capacity as desired. When Dyson revised and reprinted his lecture in 1979 he added proposals for a modified version of Moore's seagoing artificial living plants that was designed to distill and store fresh water for human use and the "Astrochicken."

Advanced Automation for Space Missions

An artist's conception of a "self-growing" robotic lunar factory

In 1980, inspired by a 1979 "New Directions Workshop" held at Woods Hole, NASA conducted a joint summer study with ASEE entitled Advanced Automation for Space Missions to produce a detailed proposal for self-replicating factories to develop lunar resources without requiring additional launches or human workers on-site. The study was conducted at Santa Clara University and ran from June 23 to August 29, with the final report published in 1982. The proposed system would have been capable of exponentially increasing productive capacity, and the design could be modified to build self-replicating probes to explore the galaxy.

The reference design included small computer-controlled electric carts running on rails inside the factory, mobile "paving machines" that used large parabolic mirrors to focus sunlight on lunar regolith to melt and sinter it into a hard surface suitable for building on, and robotic front-end loaders for strip mining. Raw lunar regolith would be refined by a variety of techniques, primarily hydrofluoric acid leaching. Large transports with a variety of manipulator arms and tools were proposed as the constructors that would put together new factories from parts and assemblies produced by the parent factory.

Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery would be placed under the canopy.

A "casting robot" would use sculpting tools and templates to make plaster molds. Plaster was selected because the molds are easy to make, can make precise parts with good surface finishes, and the plaster can be easily recycled afterward using an oven to bake the water back out. The robot would then cast most of the parts either from nonconductive molten rock (basalt) or purified metals. A carbon dioxide laser cutting and welding system was also included.

A more speculative, more complex microchip fabricator was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins."

A 2004 study supported by NASA's Institute for Advanced Concepts took this idea further. Some experts are beginning to consider self-replicating machines for asteroid mining.

Much of the design study was concerned with a simple, flexible chemical system for processing the ores, and with the mismatch between the ratios of elements needed by the replicator and the ratios available in lunar regolith. The element that most limited the growth rate was chlorine, which is needed to process regolith for aluminium but is very rare in lunar regolith.
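The limiting-element argument is easy to state in code. Here is a minimal sketch with invented mass fractions rather than the study's data; only the shape of the calculation reflects the study. The element with the worst ratio of availability to requirement bounds how fast the factory can grow.

    # Liebig-style limiting-element check. The numbers are made up for
    # illustration, not taken from the NASA study.
    required = {"Si": 0.20, "Al": 0.15, "Fe": 0.10, "Cl": 0.001}     # needed per replica
    available = {"Si": 0.21, "Al": 0.13, "Fe": 0.12, "Cl": 0.00001}  # in regolith feedstock

    ratios = {e: available[e] / required[e] for e in required}
    limiting = min(ratios, key=ratios.get)
    print(limiting, ratios[limiting])  # -> Cl 0.01: chlorine throttles growth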

Lackner-Wendt Auxon replicators

In 1995, inspired by Dyson's 1970 suggestion of seeding uninhabited deserts on Earth with self-replicating machines for industrial development, Klaus Lackner and Christopher Wendt developed a more detailed outline for such a system. They proposed a colony of cooperating mobile robots 10–30 cm in size running on a grid of electrified ceramic tracks around stationary manufacturing equipment and fields of solar cells. Their proposal didn't include a complete analysis of the system's material requirements, but described a novel method for extracting the ten most common chemical elements found in raw desert topsoil (Na, Fe, Mg, Si, Ca, Ti, Al, C, O and H) using a high-temperature carbothermic process. This proposal was popularized in Discover magazine, featuring solar-powered desalination equipment used to irrigate the desert in which the system was based. They named their machines "Auxons", from the Greek word auxein, which means "to grow."

Recent work

NIAC studies on self-replicating systems

In the spirit of the 1980 "Advanced Automation for Space Missions" study, the NASA Institute for Advanced Concepts began several studies of self-replicating system design in 2002 and 2003. Four Phase I grants were awarded.

Bootstrapping Self-Replicating Factories in Space

In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a so-called "bootstrapping approach" to start self-replicating factories in space. They developed this concept on the basis of the In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient and then grow to large size, enabling greater exploration in space as well as providing benefits back to Earth. In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar system civilization through self-replicating space industry. Kalil requested that the public submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA Chief Technologist Mason Peck has dubbed "Massless Exploration", the ability to make everything in space so that nothing needs to be launched from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape." In 2016, Metzger argued that a fully self-replicating industry could be started over several decades by astronauts at a lunar outpost for a total cost (the outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration.

New York University artificial DNA tile motifs

In 2011, a team of scientists at New York University created a structure called 'BTX' (bent triple helix) based around three double helix molecules, each made from a short strand of DNA. Treating each group of three double-helices as a code letter, they can (in principle) build up self-replicating structures that encode large quantities of information.

Self-replication of magnetic polymers

In 2001 Jarle Breivik at the University of Oslo created a system of magnetic building blocks which, in response to temperature fluctuations, spontaneously form self-replicating polymers.

Self-replication of neural circuits

In 1968 Zellig Harris wrote that "the metalanguage is in the language," suggesting that self-replication is part of language. In 1977 Niklaus Wirth formalized this proposition by publishing a self-replicating deterministic context-free grammar. Adding probabilities to it, Bertrand du Castel published a self-replicating stochastic grammar in 2015 and presented a mapping of that grammar to neural networks, thereby providing a model for a self-replicating neural circuit.

Self-replicating spacecraft

The idea of an automated spacecraft capable of constructing copies of itself was first proposed in the scientific literature in 1974 by Michael A. Arbib, but the concept had appeared earlier in science fiction, such as the 1967 novel Berserker by Fred Saberhagen or the 1950 novelette trilogy The Voyage of the Space Beagle by A. E. van Vogt. The first quantitative engineering analysis of a self-replicating spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all the subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory replicate many copies of itself there to increase its total manufacturing capacity, and then use the resulting automated industrial complex to construct more probes, each with a single seed factory on board.

Other references

  • A number of patents have been granted for self-replicating machine concepts: U.S. Patent 5,659,477, "Self reproducing fundamental fabricating machines (F-Units)", inventor Charles M. Collins (Burke, Va.), August 1997; U.S. Patent 5,764,518, "Self reproducing fundamental fabricating machine system", inventor Charles M. Collins (Burke, Va.), June 1998; and PCT patent WO 96/20453, "Method and system for self-replicating manufacturing stations", inventors Ralph C. Merkle (Sunnyvale, Calif.), Eric G. Parker (Wylie, Tex.), and George D. Skidmore (Plano, Tex.), January 2003.
  • Macroscopic replicators are mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation.
  • In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego robot kits and similar basic parts. Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years.
  • In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication (from which much of the material in this article is derived, with permission of the authors), in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references. This book included a new molecular assembler design, a primer on the mathematics of replication, and the first comprehensive analysis of the entire replicator design space.

Prospects for implementation

As the use of industrial automation has expanded over time, some factories have begun to approach a semblance of self-sufficiency that is suggestive of self-replicating machines. However, such factories are unlikely to achieve "full closure" until the cost and flexibility of automated machinery comes close to that of human labour and the local manufacture of spare parts and other components becomes more economical than transporting them from elsewhere. As Samuel Butler pointed out in Erewhon, machines already reproduce with human assistance, so the replication of partially closed universal machine-tool factories is already possible. Since safety is a primary goal of any regulation of such development, future development efforts may be limited to systems which lack control, matter, or energy closure. Fully capable machine replicators are most useful for developing resources in dangerous environments which are not easily reached by existing transportation systems (such as outer space).

An artificial replicator can be considered a form of artificial life. Depending on its design, it might be subject to evolution over an extended period of time. However, with robust error correction and the possibility of external intervention, the common science-fiction scenario of robotic life run amok will remain extremely unlikely for the foreseeable future.


 

Evolutionary algorithm

In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators.

Evolutionary algorithms often perform well at approximating solutions to many types of problems because, ideally, they make no assumptions about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes and planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor; this complexity is mostly due to fitness function evaluation, and fitness approximation is one way to overcome the difficulty. However, a seemingly simple EA can often solve complex problems; therefore, there may be no direct link between algorithm complexity and problem complexity.

Implementation

The following is an example of a generic single-objective genetic algorithm.

Step One: Generate the initial population of individuals randomly. (First generation)

Step Two: Repeat the following regenerational steps until termination (time limit, sufficient fitness achieved, etc.):

  1. Evaluate the fitness of each individual in the population.
  2. Select the fittest individuals for reproduction. (Parents)
  3. Breed new individuals through crossover and mutation operations to give birth to offspring.
  4. Replace the least-fit individuals of the population with new individuals.
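A minimal Python sketch of these steps, using the standard toy objective of maximizing the number of ones in a bit string; the population size, mutation rate, and tournament-selection scheme are illustrative choices, not part of the pseudocode above.

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 100, 0.02

    def fitness(genome):
        return sum(genome)  # toy objective: count the ones

    def tournament(pop, k=3):
        return max(random.sample(pop, k), key=fitness)  # pick best of k random individuals

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)  # one-point crossover
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in genome]

    # Step One: generate the initial population randomly.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    # Step Two: repeat the regenerational steps until termination.
    for _ in range(GENERATIONS):
        # 1.-3. Evaluate fitness, select parents, breed offspring.
        offspring = [mutate(crossover(tournament(population), tournament(population)))
                     for _ in range(POP_SIZE // 2)]
        # 4. Replace the least-fit individuals with the new ones.
        population.sort(key=fitness)
        population[:len(offspring)] = offspring

    print(max(map(fitness, population)))  # best fitness found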

Types

Similar techniques differ in genetic representation and other implementation details, as well as in the nature of the particular problem to which they are applied.

  • Genetic algorithm – This is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of EA is often used in optimization problems.
  • Genetic programming – Here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem.
  • Evolutionary programming – Similar to genetic programming, but the structure of the program is fixed and its numerical parameters are allowed to evolve.
  • Gene expression programming – Like genetic programming, GEP also evolves computer programs but it explores a genotype-phenotype system, where computer programs of different sizes are encoded in linear chromosomes of fixed length.
  • Evolution strategy – Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates.
  • Differential evolution – Based on vector differences and therefore primarily suited for numerical optimization problems; a minimal code sketch follows this list.
  • Neuroevolution – Similar to genetic programming but the genomes represent artificial neural networks by describing structure and connection weights. The genome encoding can be direct or indirect.
  • Learning classifier system – Here the solution is a set of classifiers (rules or conditions). A Michigan-LCS evolves at the level of individual classifiers whereas a Pittsburgh-LCS uses populations of classifier-sets. Initially, classifiers were only binary, but now include real, neural net, or S-expression types. Fitness is typically determined with either a strength or accuracy based reinforcement learning or supervised learning approach.
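As promised above, here is a minimal sketch of differential evolution (the classic DE/rand/1/bin variant) minimizing a toy sphere function. The values of F, CR, and the population size are common defaults, not values taken from this article.

    # Differential evolution: each trial vector is built from the scaled
    # difference of two population members added to a third (the "vector
    # differences" the bullet above refers to), then crossed with the target.
    import random

    DIM, NP, F, CR, STEPS = 5, 20, 0.8, 0.9, 200

    def sphere(x):  # objective to minimize
        return sum(v * v for v in x)

    pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]
    for _ in range(STEPS):
        for i in range(NP):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(DIM)  # guarantee at least one mutated component
            trial = [a[d] + F * (b[d] - c[d])
                     if random.random() < CR or d == j_rand else pop[i][d]
                     for d in range(DIM)]
            if sphere(trial) <= sphere(pop[i]):  # greedy one-to-one selection
                pop[i] = trial

    print(min(map(sphere, pop)))  # best objective value found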

Comparison to biological processes

A possible limitation of many evolutionary algorithms is their lack of a clear genotype-phenotype distinction. In nature, the fertilized egg cell undergoes a complex process known as embryogenesis to become a mature phenotype. This indirect encoding is believed to make the genetic search more robust (i.e. reduce the probability of fatal mutations), and also may improve the evolvability of the organism. Such indirect (also known as generative or developmental) encodings also enable evolution to exploit the regularity in the environment. Recent work in the field of artificial embryogeny, or artificial developmental systems, seeks to address these concerns. Gene expression programming successfully explores a genotype-phenotype system, where the genotype consists of linear multigenic chromosomes of fixed length and the phenotype consists of multiple expression trees or computer programs of different sizes and shapes.

Related techniques

Swarm algorithms include ant colony optimization and particle swarm optimization.

Other population-based metaheuristic methods

  • Hunting Search – A method inspired by the group hunting of some animals such as wolves that organize their position to surround the prey, each of them relative to the position of the others and especially that of their leader. It is a continuous optimization method adapted as a combinatorial optimization method.
  • Adaptive dimensional search – Unlike nature-inspired metaheuristic techniques, an adaptive dimensional search algorithm does not implement any metaphor as an underlying principle. Rather it uses a simple performance-oriented method, based on the update of the search dimensionality ratio (SDR) parameter at each iteration.
  • Firefly algorithm – Inspired by the behavior of fireflies, attracting each other by flashing light; especially useful for multimodal optimization.
  • Harmony search – Based on the ideas of musicians' behavior in searching for better harmonies. This algorithm is suitable for combinatorial optimization as well as parameter optimization.
  • Gaussian adaptation – Based on information theory. Used for maximization of manufacturing yield, mean fitness or average information. See for instance Entropy in thermodynamics and information theory.
  • Memetic algorithm – A hybrid method, inspired by Richard Dawkins's notion of a meme, it commonly takes the form of a population-based algorithm coupled with individual learning procedures capable of performing local refinements. Emphasizes the exploitation of problem-specific knowledge, and tries to orchestrate local and global search in a synergistic way.

Examples

In 2020, Google stated that their AutoML-Zero can successfully rediscover classic algorithms such as the concept of neural networks.

The computer simulations Tierra and Avida attempt to model macroevolutionary dynamics.

Superintelligence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Superintelligence

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to become much more powerful than humans, either as a single being or as a new species, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Feasibility of artificial superintelligence

Figure: progress in machine classification of images. The error rate of AI by year; the red line marks the error rate of a trained human.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
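The arithmetic behind these two comparisons, spelled out as a trivial check (no new data):

    # Bostrom's speed comparisons, verified numerically.
    neuron_hz, cpu_hz = 200, 2e9
    print(cpu_hz / neuron_hz)    # 1e7: seven orders of magnitude
    axon_m_s, light_m_s = 120, 3e8
    print(light_m_s / axon_m_s)  # ~2.5e6: millions of times faster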

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed. Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Feasibility of biological superintelligence

Figure captions (images not reproduced):
  • Evolution of reaction speed in noogenesis: in unicellular organisms, the movement of ions and of water across the membrane and of the intracellular fluid (cytoplasm); inside multicellular organisms, blood flow through the vessels (~0.05 m/s) and impulses along nerve fibers (~100 m/s); in the human population, communications by sound (voice and audio, ~300 m/s) and by quantum-electronic carriers (radio-electromagnetic waves, electric current, light, optical telecommunications).
  • Evolution of the number of components in intelligent systems: A – neurons in the brain during individual development (ontogenesis); B – people (evolution of human populations); C – neurons in the nervous systems of organisms during evolution (phylogenesis).
  • Evolution of the number of connections in intelligent systems: A – synapses between neurons during individual development (ontogenesis) of the human brain; B – connections between people during the growth of the human population; C – synapses between neurons during the evolutionary development (phylogenesis) of nervous systems up to the human brain.
  • Emergence and evolution of info-interactions within the human population: A – world population → 7 billion; B – number of literate persons; C – number of books read (with the beginning of printing); D – number of receivers (radio, TV); E – number of phones, computers, Internet users.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
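A Monte Carlo sketch of where such figures come from, under the assumption (introduced here for illustration, not stated in the text) that the heritable IQ potential of embryos from the same parents is roughly normal with a standard deviation of about 7.5 points; selecting the best of n draws then lands near the cited gains of roughly 4 and 24.3 points.

    # Expected IQ gain from selecting the best of n embryos, assuming
    # potentials ~ Normal(0, 7.5^2). The 7.5-point spread is an assumption
    # for illustration; the outputs approximate 4.2 (n=2) and 24.3 (n=1000).
    import random
    import statistics

    def expected_gain(n, sd=7.5, trials=2000):
        return statistics.mean(
            max(random.gauss(0, sd) for _ in range(n)) for _ in range(trials)
        )

    print(round(expected_gain(2), 1))     # ~4 IQ points
    print(round(expected_gain(1000), 1))  # ~24 IQ points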

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Evolution with possible integration of NI, IT and AI

In 2005 Alexei Eryomin in the monograph "Noogenesis and Theory of Intellect" proposed a new concept of noogenesis in understanding the evolution of intelligence systems. The evolution of intellectual capabilities can occur with the simultaneous participation of natural (biological) intelligences (NI), modern advances in information technology (IT) and future scientific achievements in the field of artificial intelligence (AI).

Evolution of speed of interaction between components of intelligence systems

In 1849, Helmholtz became the first person to measure the speed at which a signal is carried along a nerve fibre, finding it in the range of 24.6–38.4 metres per second. To date, measured nerve conduction velocities are 0.5–120 m/s. The speed of sound and the speed of light had been determined earlier, in the 17th century. By the 21st century, it became clear that these values mainly determine the speeds of the physical signal carriers between intelligent systems and their components: sound (voice and audio) ~300 m/s, and quantum-electronic carriers ~3×10^8 m/s (the speed of radio-electromagnetic waves, electric current, light, optical telecommunications).

Evolution of components of intelligence systems

In 1906 Santiago Ramón y Cajal brought the central importance of the neuron to the attention of scientists and established the neuron doctrine, which states that the nervous system is made up of discrete individual cells. According to modern data, there are approximately 86 billion neurons in the brain of an adult human.

In the process of evolution, the human population was about 70 million in 2000 BC, about 300 million at the beginning of the first century AD, about two billion in 1930, 6 billion in 2000, and about 7.7 billion today. According to the mathematical models of Sergey Kapitsa, the human population may reach 12.5–14 billion before the year 2200.

Evolution of links between components of intelligence systems

Synapse – from the Greek synapsis (συνάψις), meaning "conjunction", in turn from συνάπτεὶν (συν ("together") and ἅπτειν ("to fasten")) – was a term introduced in 1897 by Charles Sherrington. The relevance of measurements in this direction is confirmed both by modern comprehensive research on cooperation and on informational, genetic, and cultural connections owing to structures at the neuronal level of the brain, and by the importance of cooperation in the development of civilization. In this regard, A. L. Eryomin analyzed the known data on the evolution of the number of connections for cooperation in intelligent systems. Connections and contacts between biological objects can be considered to have appeared with multicellularity, ~3–3.5 billion years ago. The system of high-speed connections of specialized cells that transmit information using electrical signals, the nervous system, appeared in the entire history of life in only one major evolutionary branch, the multicellular animals (Metazoa), during the Ediacaran period (about 635–542 million years ago). During evolution (phylogeny), the number of connections between neurons increased to ~7,000 synaptic connections of each neuron with other neurons in the human brain. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion), and that in individual development (ontogenesis) the number of synapses decreases with age; according to other data, the estimated number of neocortical synapses in the male and female brains likewise decreases over a human lifetime.

The number of human contacts is difficult to calculate, but "Dunbar's number", ~150 stable connections with other people, is fixed in science as the assumed cognitive limit on the number of people with whom one can maintain stable social relations; other authors give a range of 100–290. Structures responsible for social interaction have been identified in the brain. With the appearance of Homo sapiens ~50–300 thousand years ago, the relevance of cooperation and its evolution in the human population increased quantitatively. If 2,000 years ago there were 0.1 billion people on Earth, 100 years ago 1 billion, by the middle of the twentieth century 3 billion, and by now 7.7 billion, then the total number of "stable connections" between people, social relationships within the population, can be estimated at roughly 150 × 7.7 billion ≈ 10^12.

Noometry of intellectual interaction (measured limits, drawn from the figures above):
  • Number of components of intellectual systems: ~10^9–10^11 (people in the population; neurons in the brain)
  • Number of links between components: ~10^12–10^15 (stable social connections; synapses)
  • Speed of interaction between components: ~0.05–3×10^8 m/s (from blood flow and nerve impulses up to electromagnetic signals)

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

  • The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
  • The moral rightness (MR) proposal is that it should value moral rightness.
  • The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI’s superior cognitive capacities to figure out just which actions fit that description. We can call this proposal “moral rightness” (MR) ... MR would also appear to have some disadvantages. It relies on the notion of “morally right,” a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of “moral rightness” could result in outcomes that would be morally very wrong ... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by “morally right.” If the AI could grasp the meaning, it could search for actions that fit ...

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity’s CEV so long as it did not act in ways that are morally impermissible.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Potential threat to humanity

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity. Researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful as to be unstoppable by humans.

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

This presents the AI control problem: how to build an intelligent agent that will aid its creators while avoiding inadvertently building a superintelligence that will harm them. The danger of not designing control right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down; unlike a human, it would not fear death as such, and could treat shutdown as merely an outcome to be predicted and averted. Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

 

Superintelligence: Paths, Dangers, Strategies

From Wikipedia, the free encyclopedia
 
Superintelligence:
Paths, Dangers, Strategies
Author: Nick Bostrom
Country: United Kingdom
Language: English
Subject: Artificial intelligence
Genre: Philosophy, popular science
Publisher: Oxford University Press
Publication date: July 3, 2014 (UK); September 1, 2014 (US)
Media type: Print, e-book, audiobook
Pages: 352 pp.
ISBN: 978-0199678112
Preceded by: Global Catastrophic Risks

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.

Bostrom's book has been translated into many languages.

Synopsis

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would, most likely, follow surprisingly quickly. Such a superintelligence would be very difficult to control or restrain.

While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical material optimized for computation) to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn the superintelligence off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it is necessary to successfully solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.

The owl on the book cover alludes to an analogy which Bostrom calls the "Unfinished Fable of the Sparrows". A group of sparrows decide to find an owl chick and raise it as their servant. They eagerly imagine "how easy life would be" if they had an owl to help build their nests, to defend the sparrows and to free them for a life of leisure. The sparrows start the difficult search for an owl egg; only "Scronkfinkle", a "one-eyed sparrow with a fretful temperament", suggests thinking about the complicated question of how to tame the owl before bringing it "into our midst". The other sparrows demur; the search for an owl egg will already be hard enough on its own: "Why not get the owl first and work out the fine details later?" Bostrom states that "It is not known how the story ends", but he dedicates his book to Scronkfinkle.

Reception

The book ranked #17 on the New York Times list of best-selling science books for August 2014. In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons. Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century. In a March 2015 interview with Baidu's CEO, Robin Li, Gates said that he would "highly recommend" Superintelligence. According to the New Yorker, philosophers Peter Singer and Derek Parfit have "received it as a work of importance".

The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values. A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether".

Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology. The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote." Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age". According to Tom Chivers of The Daily Telegraph, the book is difficult to read, but nonetheless rewarding. A reviewer in the Journal of Experimental & Theoretical Artificial Intelligence broke with others by stating the book's "writing style is clear", and praised the book for avoiding "overly technical jargon". A reviewer in Philosophy judged Superintelligence to be "more realistic" than Ray Kurzweil's The Singularity is Near.

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...