
Saturday, June 9, 2018

Outline of transhumanism

From Wikipedia, the free encyclopedia

The following outline provides an overview of and a topical guide to transhumanism:

Transhumanism – international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.[1] Transhumanist thinkers study the potential benefits and dangers of emerging and hypothetical technologies that could overcome fundamental human limitations, as well as study the ethical matters involved in developing and using such technologies.[1] They predict that human beings may eventually be able to transform themselves into beings with such greatly expanded abilities as to merit the label "posthuman".[1]

Classification of transhumanism

Transhumanism can be described as all of the following:
  • Branch of philosophy – study of general and fundamental problems, such as those connected with reality, existence, knowledge, values, reason, mind, and language.
    • Branch of humanism – philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence (rationalism, empiricism) over established doctrine or faith (fideism). In modern times, humanist movements are typically aligned with secularism, and today "Humanism" typically refers to a non-theistic life stance centred on human agency, and looking to science instead of religious dogma in order to understand the world.[2] According to Max More: "Transhumanism shares many elements of humanism, including a respect for reason and science, a commitment to progress, and a valuing of human (or transhuman) existence in this life. [...] Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature and possibilities of our lives resulting from various sciences and technologies."
  • Life stance – one's relation with what they accept as being of ultimate importance. It involves the presuppositions and theories upon which such a stance could be made, a belief system, and a commitment to working it out in one's life.[3]
  • Social movement – type of group action. Large, sometimes informal, grouping of individuals or organizations which focus on specific political or social issues. In other words, members of a social movement carry out, resist or undo a social change.
  • World view – fundamental cognitive orientation of an individual or society encompassing the entirety of the individual or society's knowledge and point of view. A world view can include natural philosophy; fundamental, existential, and normative postulates; or themes, values, emotions, and ethics.[4] It is the framework of ideas and beliefs forming a global description through which an individual, group or culture watches and interprets the world and interacts with it.

Transhumanist values

Neophilia

Neophilia – strong affinity for novelty and change. Transhumanist neophiliac values include:
  • Posthumanism – desire to become posthuman.
  • Proactionary principle – ethical and decision-making principle formulated by the transhumanist philosopher Max More, pertaining to people's freedom to innovate, and the value of protecting that freedom.
  • Singularitarianism – technocentric ideology and social movement defined by the belief that a technological singularity—the creation of a superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the Singularity benefits humans.
  • Technophilia – strong enthusiasm for technology, especially new technologies such as personal computers, the Internet, mobile phones and home cinema. The term is used in sociology when examining the interaction of individuals with their society, especially contrasted with technophobia.

Survival

Survival – self-preservation; behavior that ensures the survival of an organism.[5] It is almost universal among living organisms. Humans differ from other animals in that they use technology extensively to improve their chances of survival and increase life expectancy.
  • Global catastrophic risk – hypothetical future event with the potential to inflict serious damage to human well-being on a global scale. Some such events could destroy or cripple modern civilization; other, even more severe, scenarios threaten permanent human extinction. In transhumanist thought, existential risks are to be actively avoided, though the technological singularity is notably regarded by many transhumanists as an opportunity rather than a danger.
  • Human extinction – end of the species Homo sapiens sapiens, usually considered undesirable, though opinions vary depending on whether humans are succeeded by progeny species or entities such as posthumans or strong AI. If posthumans existed, people would not be extinct, only their human predecessors; if all that remained were strong AI, sentience would not be extinct, only biological sentience.
  • Longevity – the length of one's life.
    • Immortality – the state of being unable to die or be killed, which, scientifically speaking, is generally considered impossible. A related term is biological immortality, which refers to forms of life immune to the effects of aging; these are not immortal in the general sense. It is not yet known whether biological immortality is possible for humans.
    • Indefinite lifespan – hypothetical longevity of humans (and other life-forms) under conditions in which aging is effectively and completely prevented and treated. Their lifespans would be "indefinite" (that is, they would not be "immortal"), because protection from the effects of aging on health does not guarantee survival (from accidents, natural disasters, war, etc.).
    • Rejuvenation – distinct from life extension. Life extension strategies often study the causes of aging and try to oppose those causes in order to slow aging. Rejuvenation is the reversal of aging and thus requires a different strategy, namely repair of the damage that is associated with aging or replacement of damaged tissue with new tissue. Rejuvenation can be a means of life extension, but most life extension strategies do not involve rejuvenation.

Transhumanist ideologies

  • Extropianism – early school of transhumanist thought characterized by a set of principles advocating a proactive approach to human evolution.[6] It is an evolving framework of values and standards for continuously improving the human condition. Extropians believe that advances in science and technology will some day let people live indefinitely. An extropian may wish to contribute to this goal, e.g. by doing research and development or volunteering to test new technology.
  • Immortalism – moral ideology based upon the belief that technological immortality is possible and desirable, and advocating research and development to ensure its realization.[7]
  • Postgenderism – social ideology that seeks the voluntary elimination of gender in the human species through the application of advanced biotechnology and assisted reproductive technologies, a goal not currently achievable.[8]
  • Singularitarianism – moral ideology based upon the belief that a technological singularity is possible, and advocating deliberate action to affect it and ensure its safety.[9]
  • Technogaianism – ecological ideology based upon the belief that emerging technologies can help restore Earth's environment, and that developing safe, clean, alternative technology should therefore be an important goal of environmentalists.[10]

Transhumanist politics

Transhumanist rights

History of transhumanism

The term "transhumanism" was coined in 1957 by Sir Julian Huxley, a zoologist and prominent humanist.[14]
  • Preludes to transhumanism
    • Renaissance humanism – cultural and educational reform movement of the fourteenth and early fifteenth centuries, arising in response to the challenge of medieval scholastic education, which emphasized practical, pre-professional and scientific studies. Rather than training professionals in jargon and strict practice, humanists sought to create a citizenry (sometimes including women) able to speak and write with eloquence and clarity.
    • Age of Enlightenment – elite cultural movement of intellectuals in 18th century Europe that sought to mobilize the power of reason in order to reform society and advance knowledge. It promoted intellectual interchange and opposed intolerance and abuses in church and state.
    • Russian cosmism – past philosophical and cultural movement that emerged in Russia in the early 20th century. It entailed a broad theory of natural philosophy, combining elements of religion and ethics with a history and philosophy of the origin, evolution and future existence of the cosmos, and an expanding role of humankind within it.
  • Emergence of transhumanism

Current technological factors

  • Human condition – the irreducible part of humanity that is inherent and not connected to gender, race, class, etc.; the experiences of being human in a social, cultural, and personal context. Transhumanism aims to radically improve the human condition. The present human condition is highly technological, and is becoming more so.
  • Noosphere – "sphere of human thought".[15] In the original theory of Vernadsky, the noosphere (sentience) is the third in a succession of phases of development of the Earth, after the geosphere (inanimate matter) and the biosphere (biological life). The noosphere includes technological endeavor.
  • Technological change – overall process of invention, innovation and diffusion of technology (including processes).
    • Rate of technological change
      • Accelerating change – perceived increase in the rate of technological (and sometimes social and cultural) progress throughout history, which may suggest faster and more profound change in the future. While many have suggested accelerating change, the popularity of this theory in modern times is closely associated with various advocates of the technological singularity, such as Vernor Vinge and Ray Kurzweil.
        • Moore's law – the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper.
    • Management of technological change
      • Differential technological development – strategy proposed by transhumanist philosopher Nick Bostrom in which societies would seek to influence the sequence in which emerging technologies developed. On this approach, societies would strive to retard the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones.
    • Influences on technological change
      • Ideologies promoting technological change
        • Innovation economics – growing body of economic theory that emphasizes entrepreneurship and innovation. It rests on two fundamental tenets: that the central goal of economic policy should be to spur higher productivity through greater innovation, and that markets relying on input resources and price signals alone will not always be effective in spurring higher productivity and, thereby, economic growth.
        • Singularitarianism – technocentric ideology and social movement defined by the belief that a technological singularity (TS), that is, the creation of a superintelligence, will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the Singularity benefits humans. Singularitarians endeavor to bring about the TS.
      • Innovation – application of better solutions (in the form of new ideas, new devices or new processes) that meet new requirements, unarticulated needs, or existing market needs. This is accomplished through more effective products, processes, services, technologies, or ideas that are readily available to markets, governments and society.
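The exponential trend described under Moore's law above can be sketched numerically. The baseline figure and the strict two-year doubling period below are illustrative assumptions for demonstration, not exact historical data:

```python
def projected_transistors(base_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming it doubles every `doubling_period` years."""
    return base_count * 2 ** (years / doubling_period)

# Assumed baseline: roughly 2,300 transistors (Intel 4004, 1971).
# A strict two-year doubling over the following 40 years gives:
projection = projected_transistors(2_300, years=40)
print(f"{projection:,.0f} transistors")  # 2,411,724,800 - on the order of billions
```

Forty years is twenty doubling periods, so the count grows by a factor of 2^20 (about a million), which is why the observation implies such dramatic long-run change.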

Human enhancement technologies

In biology

In genetics

Related to aging

Neuro-based

Info-based

Prosthetics

Emerging technologies of interest to transhumanists

Emerging technologies – contemporary advances and innovation in various fields of technology, prior to or early in their diffusion. They are typically in the form of progressive developments intended to achieve a competitive advantage.[16] Transhumanists believe that humans can and should use technologies to become more than human. Emerging technologies offer the greatest potential in doing so. Examples of developing technologies that have become the focus of transhumanism include:
  • Anti-aging – another term for "life extension".
  • Artificial intelligence – intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents",[17] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."[18]
  • Augmented reality – live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one.
  • Biomedical engineering – application of engineering principles and design concepts to biology and medicine, to improve healthcare diagnosis, monitoring and therapy.[19] Applications include the development of biocompatible prostheses, clinical equipment, micro-implants, imaging equipment such as MRIs and EEGs, regenerative tissue growth, pharmaceutical drugs and therapeutic biologicals.
    • Neural engineering – discipline that uses engineering techniques to understand, repair, replace, enhance, or otherwise exploit the properties of neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Also known as "neuroengineering".
      • Neurohacking – colloquial term encompassing all methods of manipulating or interfering with the structure and/or function of neurons for improvement or repair.
  • Biotechnology – field of applied biology that uses living organisms and bioprocesses in engineering, technology, medicine, and manufacturing, among other fields. It encompasses a wide range of procedures for modifying living organisms for human purposes. Early examples of biotechnology include domestication of animals, cultivation of plants, and breeding through artificial selection and hybridization.
    • Bionics – in medicine, this refers to the replacement or enhancement of organs or other body parts by mechanical versions. Bionic implants differ from mere prostheses by mimicking the original function very closely, or even surpassing it.
      • Cyborg – being with both biological and artificial (e.g. electronic, mechanical or robotic) parts.
    • Brain-computer interface – direct communication pathway between the brain and an external device. BCIs are under development to assist, augment, or repair human cognitive and sensory-motor functions. Sometimes called a direct neural interface or a brain–machine interface (BMI).
    • Cloning – in biotechnology, this refers to processes used to create copies of DNA fragments (molecular cloning), cells (cell cloning), or organisms.
  • Cognitive science – interdisciplinary scientific study of the mind and its processes. It examines what cognition is, what it does and how it works, including how information is processed (in faculties such as perception, language, memory, reasoning, and emotion), represented, and transformed in behaviour, in the (human or other animal) nervous system, and in machines (e.g., computers). It includes research on artificial intelligence.
  • Computer-mediated reality – ability to add to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device[20] such as a smart phone.
  • Cryonics – low-temperature preservation of humans and animals who can no longer be sustained by contemporary medicine, with the hope that healing and resuscitation may be possible in the future. Cryopreservation of people or large animals is not reversible with current technology.
  • Cyberware – hardware or machine parts implanted in the human body and acting as an interface between the central nervous system and the computers or machinery connected to it. Research in this area is a protoscience.
  • Head-mounted display (HMD) – display device, worn on the head or as part of a helmet, that has a small display optic in front of one (monocular HMD) or each eye (binocular HMD).
  • Human enhancement technologies (HET) – techniques used to treat illness or disability, or to enhance human characteristics and capacities.[21]
  • Human genetic engineering – alteration of an individual's genotype with the aim of choosing the phenotype of a newborn or changing the existing phenotype of a child or adult.[22]
  • Human-machine interface – the part of a machine that handles its human-machine interaction.
  • Information technology – acquisition, processing, storage and dissemination of vocal, pictorial, textual and numerical information by a microelectronics-based combination of computing and telecommunications.[23]
    • Internet of Autonomous Things – technological developments that are expected to bring computers into the physical environment as autonomous entities without human direction, freely moving and interacting with humans and other objects. An expected evolution of the Internet of things.
  • Life extension – study of slowing down or reversing the processes of aging to extend both the maximum and average lifespan. Some researchers in this area, and persons who wish to achieve longer lives for themselves (called "life extensionists" or "longevists"), expect that future breakthroughs in tissue rejuvenation with stem cells, molecular repair, and organ replacement (such as with artificial organs or xenotransplantations) will eventually enable humans to live indefinitely (agerasia[24]) through complete rejuvenation to a healthy youthful condition. Also known as anti-aging medicine, experimental gerontology, and biomedical gerontology.
  • Nanotechnology – study of physical phenomena on the nanoscale, dealing with structures measured in nanometres (billionths of a metre), and the development of microscopic or molecular machines.
  • Nootropics – drugs, supplements, nutraceuticals, and functional foods that improve mental functions such as cognition, memory, intelligence, motivation, attention, and concentration.[25][26] Also referred to as "smart drugs", "brain steroids", "memory enhancers", "cognitive enhancers", "brain boosters", and "intelligence enhancers".
  • Organ transplants – moving of an organ from one body to another, or from a donor site on the patient's own body, to replace the recipient's damaged or absent organ. The emerging field of regenerative medicine is allowing organs to be re-grown from the patient's own cells (stem cells, or cells extracted from the failing organ).
    • Autograft – organs and/or tissues that are transplanted within the same person's body.
    • Allograft – transplants that are performed between two subjects of the same species.
    • Xenograft – living cells, tissues or organs transplanted from one species to another.
  • Personal communicators – around 1990, next-generation digital mobile phones were called digital personal communicators. Another definition, coined in 1991, covers a category of handheld devices that provide personal information manager functions and packet-switched wireless data communications over wireless wide area networks such as cellular networks. These devices are now commonly referred to as smartphones or wireless PDAs.
  • Personal development – includes activities that improve awareness and identity, develop talents and potential, build human capital, facilitate employability, enhance quality of life and contribute to the realization of dreams and aspirations. The concept is not limited to self-help, but includes formal and informal activities for developing others, in roles such as teacher, guide, counselor, manager, coach, or mentor. Finally, as personal development takes place in the context of institutions, it refers to the methods, programs, tools, techniques, and assessment systems that support human development at the individual level in organizations.[27]
  • Powered exoskeleton – powered mobile machine consisting primarily of an exoskeleton-like framework worn by a person and a power supply that supplies at least part of the activation-energy for limb movement. Also known as "powered armor", or "exoframe".
  • Prosthetics – artificial device extensions that replace missing body parts.
  • Robotics – design, construction, operation, structural disposition, manufacture and application of robots. It draws heavily upon electronics, engineering, mechanics, mechatronics, and software engineering.
    • Autonomous Things (also the "Internet of Autonomous Things") – emerging term[28][29][30][31][32] for the technological developments that are expected to bring computers into the physical environment as autonomous entities without human direction, freely moving and interacting with humans and other objects.
    • Swarm robotics
  • Simulated reality
  • Suspended animation – slowing of life processes by external means without termination. Breathing, heartbeat, and other involuntary functions may still occur, but they can only be detected by artificial means. Extreme cold can be used to precipitate the slowing of an individual's functions. For example, Laina Beasley was kept in suspended animation as a two-celled embryo for 13 years.[33][34]
  • Virtual retinal display – display technology that draws a raster display (like a television) directly onto the retina of the eye. Users see what appears to be a conventional display floating in space in front of them.

Technological evolution


[Figure: Schematic Timeline of Information and Replicators in the Biosphere, showing major evolutionary transitions in information processing]

Technological evolution
  • Directed evolution
  • Extropy – the opposite of entropy; the tendency of systems to grow more organized. The concept posits that culture and technology will help the universe develop in an orderly, progressive manner.
  • Intelligence explosion – possible outcome of humanity building artificial general intelligence (AGI), and hypothetically a direct result of such a technological singularity, in that AGI would be capable of recursive self-improvement leading to rapid emergence of ASI (artificial superintelligence), the limits of which are unknown.
  • Megatrajectory – theoretical concept in evolutionary biology that describes paradigmatic developmental stages (major evolutionary milestones) and potential directionality in the evolution of life. A theorized megatrajectory that has not yet occurred is postbiological evolution, triggered by the emergence of strong AI and several other similarly complex technologies.
  • Participant evolution – process of deliberately redesigning the human body and brain using technological means, rather than through the natural processes of mutation and natural selection, with the goal of removing "biological limitations."
    • Liberal eugenics – use of reproductive and genetic technologies where the choice of enhancing human characteristics and capacities is left to the individual preferences of parents acting as consumers, rather than the public health policies of the state.
  • Posthumanity – all persons technologically evolved from humans but no longer human. Posthumanity might include:
    • Posthumans[35] – in transhumanism, they are hypothetical future beings "whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards."[35] A being technologically evolved from humans.
      • Superhumans – humans with extraordinary and unusual capabilities enabling them to perform feats well beyond anything an ordinary person could conceivably achieve, even through long-term training and development. "Superhuman" can mean an improved human, for example through genetic modification, cybernetic implants, or nanotechnology, or what humans might eventually evolve into in the distant future.
        • Posthuman Gods – posthumans, being no longer confined to the parameters of human nature, might grow physically and mentally so powerful as to appear god-like by human standards.[35]
        • Übermensch – concept of the superhuman in the philosophy of Friedrich Nietzsche. Nietzsche had his character Zarathustra posit the Übermensch as a goal for humanity to set for itself in his 1883 book Thus Spoke Zarathustra (German: Also sprach Zarathustra).
  • Sociocultural evolution from a technological perspective – evolution transitioning from one basis to new forms, such as evolution through biology, then through cognition, then through culture, then through technology, including the possibility of a fusion between biology and technology.
  • Technological convergence – tendency for different technological systems to evolve towards performing similar tasks. Convergence can refer to previously separate technologies such as voice (and telephony features), data (and productivity applications), and video that now share resources and interact with each other synergistically.
  • Technological singularity (TS) – hypothetical future emergence of superintelligence through technological means. This could be achieved through the development of artificial intelligence as smart as humans (strong AI). Once computers become as capable as humans, they could improve themselves through redesign and modification, that is, recursive self-improvement. They could also conceivably mass-produce successively more capable models of artificially intelligent computers and robots. Another route might be the fusion of a human and a computer, resulting in a being with rapid recursive-improvement potential. The technological singularity is therefore seen as an intellectual event horizon beyond which the future becomes difficult to forecast. Nevertheless, proponents of the singularity typically anticipate an "intelligence explosion" leading quickly to superintelligence, and resulting in technological and sociological change so rapid that mere humans could not keep up. Whether the singularity holds promise or peril for human civilization remains an open question.

  • Utopia – an idyllic society, one of the goals of transhumanism.
    • Techno-utopia – hypothetical ideal society, in which laws, government, and social conditions are solely operating for the benefit and well-being of all its citizens, set in the near- or far-future, when advanced science and technology will allow these ideal living standards to exist; for example, post scarcity, transformations in human nature, the abolition of suffering and even the end of death.
    • Techno-utopianism – any ideology based on the belief that advances in science and technology will eventually bring about a utopia, or at least help to fulfill one or another utopian ideal.
  • Omega Point – term coined by the French Jesuit Pierre Teilhard de Chardin (1881–1955) to describe a maximum level of complexity and consciousness towards which he believed the universe was evolving.
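The recursive self-improvement dynamic described under "Intelligence explosion" above can be illustrated with a deliberately simple toy model (all parameters are arbitrary assumptions for demonstration, not a prediction): if each redesign yields an improvement proportional to the system's current capability, growth accelerates rather than remaining merely exponential.

```python
def toy_intelligence_explosion(start: float = 1.0, gain: float = 0.1, steps: int = 10) -> list[float]:
    """Toy model of recursive self-improvement.

    Each redesign multiplies capability by (1 + gain * current_level):
    the smarter the system, the larger its next improvement, so the
    growth factor itself increases over time.
    """
    level = start
    history = [level]
    for _ in range(steps):
        level *= 1 + gain * level
        history.append(level)
    return history

levels = toy_intelligence_explosion()
# Capability rises every step, and each step's growth factor exceeds the last.
growth_factors = [b / a for a, b in zip(levels, levels[1:])]
```

The sketch only shows why proportional self-improvement produces faster-than-exponential growth, which is the core of the intelligence-explosion claim; whether real AI systems would follow such a dynamic is exactly what is debated.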

Hypothetical technologies

Hypothetical technology – technology that does not exist yet, but the development of which could potentially be achieved in the future. It is distinct from an emerging technology, which has achieved some developmental success. A hypothetical technology is typically not proven to be impossible. Many hypothetical technologies have been the subject of science fiction.
  • Artificial general intelligence – hypothetical artificial intelligence that demonstrates human-like intelligence – the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as strong AI,[37] full AI[38] or as the ability to perform "general intelligent action".[39] Strong AI is the focus and hypothesized cause of the technological singularity.
    • Friendly artificial intelligence – artificial intelligence (AI) that has a positive rather than negative effect on humanity. Friendly AI also refers to the field of knowledge required to build such an AI. AIs may be harmful to humans if steps are not taken to specifically design them to be benevolent. Doing so effectively is the primary goal of Friendly AI.
  • Designer babies – babies whose genetic makeup has been artificially selected by genetic engineering combined with in vitro fertilisation to ensure the presence or absence of particular genes or characteristics.[40]
  • Human cloning – creation of a genetically identical copy of a human. It does not usually refer to monozygotic multiple births nor the reproduction of human cells or tissue. The term is generally used to refer to artificial human cloning; human clones in the form of identical twins are commonplace, with their cloning occurring during the natural process of reproduction.
  • Mind uploading – hypothetical process of transferring or copying a conscious mind from a brain to a non-biological substrate by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer would have to run a simulation model so faithful to the original that it would behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.[41]
  • Molecular nanotechnology – technology based on the ability to build structures to complex, atomic specifications by means of mechanosynthesis.[42]
    • Molecular assemblers – as defined by K. Eric Drexler, is a "proposed device able to guide chemical reactions by positioning reactive molecules with atomic precision". Some biological molecules such as ribosomes fit this definition, because they receive instructions from messenger RNA and then assemble specific sequences of amino acids to construct protein molecules. However, the term "molecular assembler" usually refers to theoretical human-made devices.[43]
  • Rejuvenation – reversal of aging, which entails the repair of the damage associated with aging, or replacement of damaged tissue with new tissue. Rejuvenation can be a means of life extension, but most life extension strategies do not involve rejuvenation.
  • Reprogenetics – merging of reproductive and genetic technologies expected to happen in the near future as techniques like germinal choice technology become more available and more powerful.
  • Self-replicating machine – artificial construct that is theoretically capable of autonomously manufacturing a copy of itself using raw materials taken from its environment, thus exhibiting self-replication in a way analogous to that found in nature.
  • Space colonization – concept of permanent human habitation outside of Earth. Although hypothetical at the present time, there are many proposals and speculations about the first space colony. It is a long-term goal of some national space programs. Also called "space settlement", "space humanization", and "space habitation".
  • Superintelligence – hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. For example, a supercomputer whose mental capacity exceeded the brainpower and cognitive abilities of all the people of Earth combined, and which continued to develop itself even further.

Related fields

  • Futures studies – study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them. Also called "futurology".

Transhumanist media

Documentary films about transhumanism

Transhumanist books

  • Converging Technologies for Improving Human Performance
  • The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future – edited by Max More and Natasha Vita-More (John Wiley & Sons, 2013).

Transhumanist periodicals

Transhumanism in fiction

Transhumanism in fiction – Many of the tropes of science fiction can be viewed as similar to the goals of transhumanism. Science fiction literature contains many positive depictions of technologically enhanced human life, occasionally set in utopian (especially techno-utopian) societies. However, science fiction's depictions of technologically enhanced humans or other posthuman beings frequently come with a cautionary twist. The more pessimistic scenarios include many dystopian tales of human bioengineering gone wrong.

Notable transhumanist authors

Television programs and films with transhumanist themes

Comics or graphic novels

  • Battle Angel Alita/Gunm
  • Dresden Codak – webcomic that stars a transhuman cyborg named Kimiko Ross who augments her body over the course of the strip's stories.
  • Transmetropolitan – comic about a transhuman society several centuries in the future that includes many cyborgs, uploaded humans, and genetically modified mutants.
  • Upgrade and Monkey Room – satirical dystopian graphic novels by Louis Rosenberg, about humans approaching the singularity

Video games

Table-top games

  • Eclipse Phase – role-playing game that places transhumanism in a post-apocalyptic horror setting in which artificial general intelligences have gone rogue, introducing itself with the slogan “Your mind is software. Program it. Your body is a shell. Change it. Death is a disease. Cure it. Extinction is approaching. Fight it.”
  • Transhuman Space – GURPS supplement that presents a transhumanist future set in our solar system in the year 2100.
  • Warhammer 40,000 – game and fiction franchise by Games Workshop that depicts a universe of cybernetic and genetic modification, human-machine interfaces, self-aware computer "spirits" (advanced AIs), ubiquitous space travel, and posthuman gods. The protagonists of many novels and campaigns, the Imperial Space Marines, are textbook transhumans: ordinary human men so vastly augmented and changed by technology that they are no longer Homo sapiens but some other, new species.

Transhumanist organizations

  • Carboncopies – nonprofit dedicated to advancing research in whole brain emulation and substrate-independent minds.
  • 2045 Initiative – initiative to develop cybernetic immortality by 2045.
  • Alcor Life Extension Foundation – nonprofit company based in Scottsdale, Arizona, USA that researches, advocates for and performs cryonics, the preservation of humans in liquid nitrogen after legal death, in the hope of restoring them to full health when new technology is developed in the future.
  • American Cryonics Society
  • Cryonics Institute
  • Extropy Institute
  • Foresight Institute – nonprofit organization based in Palo Alto, California that promotes transformative technologies. They sponsor conferences on molecular nanotechnology, publish reports, produce a newsletter, and offer several running prizes, including the annual Feynman Prizes given in experimental and theory categories, and the $250,000 Feynman Grand Prize for demonstrating two molecular machines capable of nanoscale positional accuracy and computation.[45]
  • Humanity+ – international non-governmental organization which advocates the ethical use of emerging technologies to enhance human capacities. It was formerly named the "World Transhumanist Association".
  • Machine Intelligence Research Institute – non-profit organization founded in 2000 (as the "Singularity Institute for Artificial Intelligence") to develop safe artificial intelligence software, and to raise awareness of both the dangers and potential benefits it believes AI presents. In their view, the potential benefits and risks of a technological singularity necessitate the search for solutions to problems involving AI goal systems to ensure powerful AIs are not dangerous when they are created.[46][47]
  • Mormon Transhumanist Association
  • Christian Transhumanist Association, founded 2013

Transhumanist leaders and scholars

Some people who have made a major impact on the advancement of transhumanism:
  • Nick Bostrom – Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism.
  • K. Eric Drexler
  • George Dvorsky
  • Robert Ettinger
  • Nikolai Fyodorovich Fyodorov
  • FM-2030 (October 15, 1930 – July 8, 2000) – author, teacher, transhumanist philosopher, futurist, and consultant.[48] His given name was Fereidoun M. Esfandiary. He became notable as a transhumanist with the book Are You a Transhuman?: Monitoring and Stimulating Your Personal Rate of Growth in a Rapidly Changing World, published in 1989.
  • Aubrey de Grey – English author and theoretician in the field of gerontology, and the Chief Science Officer of the SENS Foundation. He is perhaps best known for his view that human beings could, in theory, live far longer than any authenticated lifespan to date.
  • James Hughes – sociologist and bioethicist teaching health policy at Trinity College in Hartford, Connecticut in the United States. Hughes served as the executive director of the World Transhumanist Association (which has since changed its name to Humanity+) from 2004 to 2006, and moved on to serve as the executive director of the Institute for Ethics and Emerging Technologies, which he founded with Nick Bostrom.
  • Julian Huxley
  • Raymond Kurzweil – pioneer in optical character recognition, text-to-speech synthesis, speech recognition technology, and musical synthesizer keyboard instruments. He is most notable now as a public advocate for the futurist and transhumanist movements, and as Director of Engineering at Google, where he has a one-sentence job description: "to bring natural language understanding to Google". Books he has authored which pertain to transhumanism include The Age of Intelligent Machines, The 10% Solution for a Healthy Life, The Age of Spiritual Machines, Fantastic Voyage: Live Long Enough to Live Forever, The Singularity Is Near, Transcend: Nine Steps to Living Well Forever,[49] and How to Create a Mind. He co-founded the Singularity University, and he is also a co-founder of the Singularity Summit.
  • Hans Moravec – adjunct faculty member at the Robotics Institute of Carnegie Mellon University. He is known for his work on robotics, artificial intelligence, and writings on the impact of technology. Moravec also is a futurist with many of his publications and predictions focusing on transhumanism. Moravec developed techniques in computer vision for determining the region of interest (ROI) in a scene.
  • Max More – a philosopher and futurist who writes, speaks, and consults on advanced decision-making about emerging technologies
  • David Pearce – Utilitarian thinker and author of The Hedonistic Imperative, in which he explores the possibility of how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience in human life and produce a posthuman civilization.[50]
  • Giulio Prisco
  • Anders Sandberg – researcher, science debater, futurist, transhumanist, and author born in Solna, Sweden, whose recent contributions include work on cognitive enhancement[51] (methods, impacts, and policy analysis); a technical roadmap on whole brain emulation;[52] on neuroethics; and on global catastrophic risks, particularly on the question of how to take into account the subjective uncertainty in risk estimates of low-likelihood, high-consequence risk.[53]
  • Stefan Lorenz Sorgner – philosopher who argues for a Nietzschean transhumanism
  • Frank J. Tipler

Intelligence explosion

From Wikipedia, the free encyclopedia

The intelligence explosion is a possible outcome of humanity building artificial general intelligence (AGI). An AGI would be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown. An intelligence explosion would be associated with a technological singularity.

The notion of an "intelligence explosion" was first described by Good (1965), who speculated on the effects of superhuman machines, should they ever be invented:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Background

Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[1] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humans.[2] If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI[3][4] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

Computer scientist Vernor Vinge said in his 1993 essay "The Coming Technological Singularity" that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

Plausibility

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The means speculated to produce intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity to not occur they would all have to fail.[6]
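The multiple-paths argument is probabilistic: a singularity fails to occur only if every route fails, so the probability of no singularity is the product of the individual failure probabilities. A toy sketch, with per-path success probabilities invented purely for illustration:

```python
# Toy illustration of the multiple-paths argument: a singularity fails to
# occur only if every path fails, so P(no path succeeds) is the product of
# the individual failure probabilities. These numbers are invented.
paths = {
    "artificial intelligence": 0.30,
    "brain-computer interfaces": 0.10,
    "biological cognitive enhancement": 0.10,
    "mind uploading": 0.05,
}
p_all_fail = 1.0
for p_success in paths.values():
    p_all_fail *= 1.0 - p_success

print(f"P(no path succeeds)      = {p_all_fail:.3f}")
print(f"P(at least one succeeds) = {1.0 - p_all_fail:.3f}")
```

Even with each individual path given a modest chance, the chance that all of them fail shrinks multiplicatively, which is the force of the argument.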

Hanson (1998) is skeptical of human intelligence augmentation, writing that once one has exhausted the "low-hanging fruit" of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find. Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to advance the singularity.[citation needed]

Whether or not an intelligence explosion occurs depends on three factors.[7] The first, accelerating factor is that each improvement makes new intelligence enhancements possible. Working against this, as intelligences become more advanced, further advances become increasingly complicated, possibly outweighing the advantage of increased intelligence. For the explosion to continue, each improvement must, on average, beget at least one further improvement. Finally, the laws of physics will eventually prevent any further improvements.
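The requirement that each improvement beget at least one further improvement on average is the criticality threshold of a branching process. A small simulation with invented parameters shows how chains of improvements fizzle below the threshold and run away above it:

```python
import random

# Each improvement spawns two follow-on improvements with probability m/2,
# so the mean number of offspring per improvement is m. Below m = 1 the
# chain of improvements dies out; above it, the chain can run away until
# some external cap (standing in here for physical limits). Parameters
# are purely illustrative.
def chain_length(m, rng, cap=10_000):
    pending, total = 1, 0
    while pending and total < cap:
        pending -= 1
        total += 1
        if rng.random() < m / 2:
            pending += 2
    return total

rng = random.Random(0)
avg_length = {}
for m in (0.8, 1.2):
    avg_length[m] = sum(chain_length(m, rng) for _ in range(200)) / 200
    print(f"mean offspring {m}: average improvements per chain ~ {avg_length[m]:.0f}")
```

Subcritical chains (mean offspring 0.8) average only a handful of improvements before dying out; supercritical chains (1.2) frequently continue until the cap is reached.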

There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used.[8] The former is predicted by Moore’s law and forecast improvements in hardware,[9] and is broadly similar to previous technological advances. On the other hand, most AI researchers[who?] believe that software is more important than hardware.[citation needed]

A 2017 email survey of authors with publications at the 2015 NIPS and ICML machine learning conferences asked them about the chance of an intelligence explosion. 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely".[10]

Speed improvements

Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. Oversimplifying,[11] Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months; then 4.5 months, 2.25 months, and so on towards a speed singularity.[12] An upper limit on speed may eventually be reached, although it is unclear how high this would be. Hawkins (2008),[citation needed] responding to Good, argued that the upper limit is relatively low:
Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can be. We would end up in the same place; we'd just get there a bit faster. There would be no singularity.

If, on the other hand, the upper limit were far above current human levels of intelligence, the effects of the singularity would be great enough to be indistinguishable (to humans) from a singularity with no upper limit. For example, if the speed of thought could be increased a million-fold, a subjective year would pass in about 30 physical seconds.[6]

It is difficult to directly compare silicon-based hardware with neurons, but Berglas (2008) notes that computer speech recognition is approaching human capability, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.
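The speed-up arithmetic above is easy to check: successive halvings of the external doubling time form a convergent geometric series, and a million-fold speed-up compresses a subjective year into roughly 30 seconds. A quick sketch:

```python
# Checks the two speed-up claims above with illustrative numbers.

# 1. If each doubling of hardware speed takes half the external (wall-clock)
#    time of the previous one, the total external time converges to twice
#    the first interval: 18 + 9 + 4.5 + ... -> 36 months.
first_doubling_months = 18.0
total = sum(first_doubling_months / 2**n for n in range(60))
print(f"external months until the speed limit: {total:.1f}")

# 2. A million-fold increase in the speed of thought compresses a
#    subjective year into about 30 physical seconds.
seconds_per_year = 365.25 * 24 * 3600        # ~31.6 million seconds
physical_seconds = seconds_per_year / 1_000_000
print(f"one subjective year passes in {physical_seconds:.1f} physical seconds")
```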

Algorithm improvements

Some intelligence technologies, like "seed AI",[3][4] may also have the potential to make themselves more efficient, not just faster, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.

The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: a machine designing faster hardware would still need humans to build that hardware, or to program factories appropriately,[citation needed] whereas an AI rewriting its own source code could do so even while contained in an AI box.

Second, as with Vernor Vinge’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life had been a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.[13]

There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI may not be invariant under self-improvement, potentially causing the AI to optimise for something other than was intended.[14][15] Secondly, AIs could compete for the scarce resources mankind uses to survive.[16][17]

Though not necessarily malicious, an AI would have no reason to actively promote human goals unless it were programmed to do so; otherwise, it might divert the resources currently used to support mankind toward its own goals, causing human extinction.[18][19][20]

Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity because whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI was developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[21] An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."[22]

Impact

Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world’s economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[23]
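The doubling times Hanson cites can be converted into equivalent annual growth rates (a quantity that doubles every T years grows by 2^(1/T) − 1 per year); a quick sketch, where the weekly-doubling row is his hypothetical rather than a measurement:

```python
# Converts the doubling times quoted above into equivalent annual growth
# rates: a quantity that doubles every T years grows by 2**(1/T) - 1 per
# year. The last row is Hanson's hypothetical, not a measurement.
doubling_times_years = {
    "foraging era (doubles every 250,000 years)": 250_000,
    "agricultural era (doubles every 900 years)": 900,
    "industrial era (doubles every 15 years)": 15,
    "post-singularity (doubles every week)": 7 / 365.25,
}
annual_rates = {era: 2 ** (1 / t) - 1 for era, t in doubling_times_years.items()}
for era, rate in annual_rates.items():
    print(f"{era}: {rate:.4%} growth per year")
```

The industrial-era figure works out to roughly 4.7% growth per year, while a weekly doubling implies an annual growth factor so large it defies the percentage format, which is the point of the comparison.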

Superintelligence

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent.

Technology forecasters and researchers disagree about if or when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Existential risk

Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, so that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility).[24][25][26] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[27] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[16][28] and humans would be powerless to stop them.[29] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[20]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[30]

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that the first real AI would have a head start on self-improvement and, if friendly, could prevent unfriendly AIs from developing, as well as providing enormous benefits to mankind.[19]

Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion,[31] unintended instrumental actions,[14][32] and corruption of the reward generator.[32] He also discusses social impacts of AI[33] and testing AI.[34] His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.

One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. However, a sufficiently intelligent AI may simply be able to escape by outsmarting its less intelligent human captors.[35][36][37]

Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believes that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Hawking believes more should be done to prepare for the singularity:[38]
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.

Hard vs. soft takeoff


In this sample recursive self-improvement scenario, humans modifying an AI's architecture would be able to double its performance every three years through, for example, 30 generations before exhausting all feasible improvements (left). If instead the AI is smart enough to modify its own architecture as well as human researchers can, its time required to complete a redesign halves with each generation, and it progresses all 30 feasible generations in six years (right).[39]
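The left/right scenario above can be reproduced directly: a fixed three-year redesign cycle takes 90 years to get through 30 generations, while halving the redesign time each generation converges to just under six years (a geometric series):

```python
# Reproduces the sample scenario: 30 generations, each doubling performance.
GENERATIONS = 30

# Left: human researchers need a fixed 3 years per redesign.
human_led_years = 3.0 * GENERATIONS

# Right: the AI redesigns itself, halving the redesign time each generation.
self_improving_years = sum(3.0 * 0.5**g for g in range(GENERATIONS))

print(f"human-led:      {human_led_years:.0f} years for 30 generations")
print(f"self-improving: {self_improving_years:.2f} years for 30 generations")
```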

In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.[40][41]

Ramez Naam argues against a hard takeoff by pointing out that we already see recursive self-improvement by superintelligences, such as corporations. For instance, Intel has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to ... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law.[42] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."[43]

J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[44]

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a very hard, 5-minute takeoff but thinks a takeoff from human to superhuman level on the order of 5 years is reasonable. He calls this a "semihard takeoff".[45]

Max More disagrees, arguing that if there were only a few superfast human-level AIs, they wouldn't radically change the world, because they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it's not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More also argues that a superintelligence would not transform the world overnight, because a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."[46]

Technological singularity

From Wikipedia, the free encyclopedia

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good's "intelligence explosion" model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

Four polls conducted in 2012 and 2013 suggested a median estimate of a 50% chance that artificial general intelligence (AGI) would be developed by 2040–2050, depending on the poll.[6][7]

In the 2010s public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated.

Manifestations

Intelligence explosion

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]

Emergence of superintelligence

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[5][11]

Non-AI singularity

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Plausibility

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

Claimed cause: exponential growth

Ray Kurzweil writes that, due to paradigm shifts, a trend of exponential growth extends Moore's law from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers. He predicts that the exponential growth will continue, and that in a few decades the computing power of all computers will exceed that of ("unenhanced") human brains, with superhuman artificial intelligence appearing around the same time.
 
An updated version of Moore's law over 120 years (based on Kurzweil's graph). The 7 most recent data points are all NVIDIA GPUs.

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[22]
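As a rough illustration (my own arithmetic, not from the cited study), the doubling times quoted above can be converted into implied annual growth rates via rate = 2^(12/months) − 1:

```python
# Convert doubling times (months per doubling) into implied annual
# growth rates: rate = 2**(12/months) - 1.
doubling_months = {
    "application-specific compute per capita": 14,
    "general-purpose compute per capita": 18,
    "telecom capacity per capita": 34,
    "storage capacity per capita": 40,
}

for name, months in doubling_months.items():
    annual_growth = 2 ** (12 / months) - 1
    print(f"{name}: ~{annual_growth:.0%} per year")
```

A 14-month doubling time works out to roughly 81% growth per year, while a 40-month doubling time works out to roughly 23% per year, which shows how widely the growth rates of the different capacities diverge despite all being "exponential".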

Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[24]

Accelerating change

According to Kurzweil, his logarithmic graph of 15 lists of paradigm shifts for key historic events shows an exponential trend

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".[4][27]

Criticisms

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:
... There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. ...[16]
University of California, Berkeley, philosophy professor John Searle writes:
[Computers] have, literally ..., no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. ... [T]he machinery has no beliefs, desires, [or] motivations.[29]
Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be "routine".[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-core processors.[35] Although Kurzweil drew on Modis' data, and Modis' own work concerns accelerating change, Modis has distanced himself from Kurzweil's thesis of a "technological singularity", claiming that it lacks scientific rigor.[33]

Others[who?] propose that other "singularities" can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][37][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.[38]

In a 2007 paper, Jürgen Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in the memory of recent and distant events could create an illusion of accelerating change where none exists.[39]

Paul Allen argues for the opposite of accelerating returns, the "complexity brake":[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not exhibit accelerating returns but, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[40] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[34] On this view, the growth of complexity eventually becomes self-limiting and leads to a widespread "general systems collapse".

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an autonomous process."[41] He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."[41]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming singularity as imagined by mathematician I. J. Good.[42]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.[43] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[44]
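The log-log objection can be made concrete with a toy construction (my own illustration, not taken from the cited critics): if the chosen "key events" are spaced roughly one per order of magnitude of age, the gap between consecutive events is automatically proportional to the age itself, so the chart comes out as a straight line by construction, regardless of any underlying trend.

```python
import math

# Events placed one per decade of "time before present": 1e10 ... 1e1
# years ago. Each gap to the next event is then 0.9 * age, so on a
# log-log plot of age vs. gap every segment has slope 1.
ages = [10 ** k for k in range(10, 0, -1)]
gaps = [a - b for a, b in zip(ages, ages[1:])]

points = [(math.log10(a), math.log10(g)) for a, g in zip(ages, gaps)]
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(points, points[1:])]
print([round(s, 6) for s in slopes])  # each segment slope is ~1: a straight line
```

The point of the sketch is not that Kurzweil's data were constructed this way, only that a straight line on such a chart is weak evidence, since it falls out of the sampling alone.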

Ramifications

Uncertainty and risk

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[45][46] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[47][48] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[45]

Next step of sociobiological evolution

Schematic Timeline of Information and Replicators in the Biosphere: Gillings et al.'s "major evolutionary transitions" in information processing.[49]
 
Amount of digital information worldwide (5x10^21 bytes) versus human genome information worldwide (10^19 bytes) in 2014.[49]
 
While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed] In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".

The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, "the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5x10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1x10^19 bytes. The digital realm stored 500 times more information than this in 2014 (...see Figure)... 
The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3x10^37 base pairs, equivalent to 1.325x10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".[49]
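The figures quoted above can be checked with a quick back-of-the-envelope calculation (my own arithmetic, using only the numbers given in the article):

```python
import math

# Figures quoted above (2014 values from the cited article).
digital_2014 = 5e21             # bytes of digital information stored worldwide
genome_nucleotides = 6.2e9      # nucleotides per human genome
population = 7.2e9
biosphere_dna_bytes = 1.325e37  # all DNA on Earth, expressed as bytes

# One byte (8 bits) encodes four nucleotides at 2 bits each.
human_genomes_bytes = population * genome_nucleotides / 4
print(f"all human genomes: ~{human_genomes_bytes:.2e} bytes")  # ~1.1e19

# Roughly the "500 times" figure quoted above.
print(f"digital/genomic ratio in 2014: ~{digital_2014 / human_genomes_bytes:.0f}x")

# Years for digital storage to rival all DNA on Earth at 30-38% annual growth.
for rate in (0.30, 0.38):
    years = math.log(biosphere_dna_bytes / digital_2014) / math.log(1 + rate)
    print(f"at {rate:.0%}/year: ~{years:.0f} years")
```

At 38% annual growth the projection comes out to about 110 years, matching the article's estimate at the upper end of the quoted 30–38% range; at 30% it stretches to roughly 135 years.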

Implications for human society

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[50]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[50]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[51][improper synthesis?]

Immortality

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[52] Kurzweil buttresses his argument by discussing current bio-engineering advances. He points to somatic gene therapy: after synthetic viruses with specific genetic information have been engineered, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[53]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as-yet hypothetical biological machines, in his 1986 book Engines of Creation. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor".

The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[54]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".[55] Singularitarianism has also been likened to a religion by John Horgan.[56]

History of the concept

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[3]

In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence. In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year, and so on, its capabilities increase infinitely in finite time.[4][57]
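Solomonoff's infinity point rests on a convergent geometric series: if each successive doubling takes half as long as the previous one, the total time to infinitely many doublings is 4 + 2 + 1 + 1/2 + ... = 8 years. A minimal sketch of this arithmetic (my own illustration, not Solomonoff's notation):

```python
# Each doubling of the AI community's speed takes half as long as the
# previous one: 4 years, then 2, then 1, ...  The elapsed time converges
# to 8 years while the speed grows without bound.
interval = 4.0
elapsed = 0.0
speed = 1.0
for _ in range(50):      # 50 doublings is already astronomical
    elapsed += interval
    speed *= 2
    interval /= 2
print(f"after 50 doublings: elapsed ~ {elapsed:.6f} years, speed = 2**50")
```

After 50 doublings the elapsed time is within a rounding error of 8 years; in the limit, infinitely many doublings fit inside that finite window, which is the "infinity point".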

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that obtains consciousness and starts to increase its own intelligence, moving towards a personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking in internal logical consistency.

In 1983, Vinge greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines, writing:[58][59]
We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.
Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",[5] spread widely on the internet and helped to popularize the idea.[60] This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart.[61]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.[13][62] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[63] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In politics

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[64][65][66]

Former President of the United States Barack Obama spoke about the singularity in his 2016 interview with Wired:[67]
One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren't spending a lot of time right now worrying about singularity—they are worrying about "Well, is my job going to be replaced by a machine?"
