Tuesday, November 25, 2025

Transhumanism

From Wikipedia, the free encyclopedia

Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations, as well as the ethics of using such technologies. Some transhumanists speculate that human beings may eventually be able to transform themselves into beings of such vastly greater abilities as to merit the label of posthuman beings.

Another topic of transhumanist research is how to protect humanity against existential risks, including artificial general intelligence, asteroid impact, gray goo, pandemic, societal collapse, and nuclear warfare.

The biologist Julian Huxley popularised the term "transhumanism" in a 1957 essay. The contemporary meaning of the term was foreshadowed by one of the first professors of futurology, a man who changed his name to FM-2030. In the 1960s, while teaching "new concepts of the human" at The New School, he began to identify people who adopt technologies, lifestyles, and worldviews "transitional" to posthumanity as "transhuman". This assertion laid the intellectual groundwork for the British philosopher Max More, who began articulating the principles of transhumanism as a futurist philosophy in 1990, organizing in California a school of thought that has since grown into the worldwide transhumanist movement.

Influenced by seminal works of science fiction, the transhumanist vision of a transformed future humanity has attracted many supporters and detractors from a wide range of perspectives, including philosophy and religion.

History

Precursors of transhumanism

According to Nick Bostrom, transcendentalist impulses have been expressed at least as far back as the quest for immortality in the Epic of Gilgamesh, as well as in historical quests for the Fountain of Youth, the Elixir of Life, and other efforts to stave off aging and death.

Transhumanists draw upon and claim continuity with intellectual and cultural traditions such as the ancient philosophy of Aristotle or the scientific tradition of Roger Bacon. In his Divine Comedy, Dante coined the word trasumanar, meaning "to transcend human nature, to pass beyond human nature", in the first canto of Paradiso.

The interweaving of transhumanist aspirations with the scientific imagination can be seen in the works of some precursors of Enlightenment such as Francis Bacon. One of the early precursors to transhumanist ideas is René Descartes's Discourse on Method (1637), in which Descartes envisions a new kind of medicine that can grant both physical immortality and stronger minds.

In his first edition of Political Justice (1793), William Godwin included arguments favoring the possibility of "earthly immortality" (what would now be called physical immortality). Godwin explored the themes of life extension and immortality in his gothic novel St. Leon, which became popular (and notorious) at the time of its publication in 1799, but is now mostly forgotten. St. Leon may have inspired his daughter Mary Shelley's novel Frankenstein.

There is debate about whether the philosophy of Friedrich Nietzsche can be considered an influence on transhumanism, despite its exaltation of the Übermensch (overhuman), due to its emphasis on self-actualization rather than technological transformation. The transhumanist philosophies of More and Sorgner have been influenced strongly by Nietzschean thinking. By contrast, The Transhumanist Declaration "advocates the well-being of all sentience (whether in artificial intellects, humans, posthumans, or non-human animals)".

The late 19th- to early 20th-century movement known as Russian cosmism, founded by the Russian philosopher N. F. Fyodorov, is noted for anticipating transhumanist ideas. In 1966, FM-2030 (formerly F. M. Esfandiary), a futurist who taught "new concepts of the human" at The New School in New York City, began to identify people who adopt technologies, lifestyles, and worldviews transitional to posthumanity as "transhuman".

Early transhumanist thinking

Julian Huxley, the biologist who popularised the term transhumanism in an influential 1957 essay

Fundamental ideas of transhumanism were first advanced in 1923 by the British geneticist J. B. S. Haldane in his essay Daedalus: Science and the Future, which predicted that great benefits would come from the application of advanced sciences to human biology—and that every such advance would first appear to someone as blasphemy or perversion, "indecent and unnatural". In particular, he was interested in the development of the science of ectogenesis (creating and sustaining life in an artificial environment), eugenics, and the application of genetics to improve human characteristics such as health and intelligence.

His article inspired academic and popular interest. J. D. Bernal, a crystallographer at Cambridge, wrote The World, the Flesh and the Devil in 1929, in which he speculated on the prospects of space colonization and radical changes to human bodies and intelligence through bionic implants and cognitive enhancement. These ideas have been common transhumanist themes ever since.

The biologist Julian Huxley is generally regarded as the founder of transhumanism after using the term for the title of an influential 1957 article, although the term itself derives from a 1940 paper by the Canadian philosopher W. D. Lighthall. Huxley describes transhumanism in these terms:

Up till now human life has generally been, as Hobbes described it, "nasty, brutish and short"; the great majority of human beings (if they have not already died young) have been afflicted with misery… we can justifiably hold the belief that these lands of possibility exist, and that the present limitations and miserable frustrations of our existence could be in large measure surmounted… The human species can, if it wishes, transcend itself—not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity.

Huxley's definition differs, albeit not substantially, from the one commonly in use since the 1980s. The ideas raised by these thinkers were explored in the science fiction of the 1960s, notably in Arthur C. Clarke's 2001: A Space Odyssey, in which an alien artifact grants transcendent power to its wielder.

Japanese Metabolist architects produced a manifesto in 1960 which outlined goals to "encourage active metabolic development of our society" through design and technology. In the Material and Man section of the manifesto, Noboru Kawazoe suggests that:

After several decades, with the rapid progress of communication technology, every one will have a "brain wave receiver" in his ear, which conveys directly and exactly what other people think about him and vice versa. What I think will be known by all the people. There is no more individual consciousness, only the will of mankind as a whole.

Artificial intelligence and the technological singularity

The concept of the technological singularity, or the ultra-rapid advent of superhuman intelligence, was first proposed by the British cryptologist I. J. Good in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Computer scientist Marvin Minsky wrote on relationships between human and artificial intelligence beginning in the 1960s. Over the succeeding decades, this field continued to generate influential thinkers, such as Hans Moravec and Ray Kurzweil, who oscillated between the technical arena and futuristic speculations in the transhumanist vein. The coalescence of an identifiable transhumanist movement began in the last decades of the 20th century. Robert Ettinger, whose 1964 book The Prospect of Immortality founded the cryonics movement, contributed to the conceptualization of "transhumanity" with his 1972 book Man into Superman. FM-2030 published the Upwingers Manifesto in 1973.

Growth of transhumanism

The first self-described transhumanists met formally in the early 1980s at the University of California, Los Angeles, which became the main center of transhumanist thought. Here, FM-2030 lectured on his "Third Way" futurist ideology. At the EZTV Media venue, frequented by transhumanists and other futurists, Natasha Vita-More presented Breaking Away, her 1980 experimental film with the theme of humans breaking away from their biological limitations and the Earth's gravity as they head into space. FM-2030 and Vita-More soon began holding gatherings for transhumanists in Los Angeles, which included students from FM-2030's courses and audiences from Vita-More's artistic productions. In 1982, Vita-More authored the Transhumanist Arts Statement and in 1988 she produced the cable TV show TransCentury Update on transhumanity, a program that reached over 100,000 viewers.

In 1986, Eric Drexler published Engines of Creation: The Coming Era of Nanotechnology, which discussed the prospects for nanotechnology and molecular assemblers, and founded the Foresight Institute. The Southern California offices of the Alcor Life Extension Foundation, the first nonprofit organization to research, advocate for, and perform cryonics, became a center for futurists. In 1988, Max More and Tom Morrow published the first issue of Extropy Magazine. In 1990, More, a strategic philosopher, created his own transhumanist doctrine, which took the form of the Principles of Extropy, and laid the foundation of modern transhumanism by giving it a new definition:

Transhumanism is a class of philosophies that seek to guide us towards a posthuman condition. Transhumanism shares many elements of humanism, including a respect for reason and science, a commitment to progress, and a valuing of human (or transhuman) existence in this life. [...] Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature and possibilities of our lives resulting from various sciences and technologies [...]

In 1992, More and Morrow founded the Extropy Institute, a catalyst for networking futurists and brainstorming new memeplexes. By organizing a series of conferences and, more importantly, providing a mailing list, the institute exposed many people to transhumanist views for the first time during the rise of cyberculture and the cyberdelic counterculture. In 1998, philosophers Nick Bostrom and David Pearce founded the World Transhumanist Association (WTA), an international non-governmental organization working toward the recognition of transhumanism as a legitimate subject of scientific inquiry and public policy. In 2002, the WTA modified and adopted The Transhumanist Declaration. The Transhumanist FAQ, prepared by the WTA (later Humanity+), gave two formal definitions for transhumanism:

  1. The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.
  2. The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies.

In possible contrast with other transhumanist organizations, WTA officials considered that social forces could undermine their futurist visions and needed to be addressed. A particular concern is equal access to human enhancement technologies across classes and borders. In 2006, a political struggle within the transhumanist movement between the libertarian right and the liberal left resulted in a more centre-leftward positioning of the WTA under its former executive director James Hughes. In 2006, the board of directors of the Extropy Institute ceased operations of the organization, saying that its mission was "essentially completed". This left the World Transhumanist Association as the leading international transhumanist organization. In 2008, as part of a rebranding effort, the WTA changed its name to "Humanity+". In 2012, the transhumanist Longevity Party was initiated as an international union of people who promote the development of scientific and technological means to significant life extension; it now has more than 30 national organisations throughout the world.

The Mormon Transhumanist Association was founded in 2006. By 2012, it had hundreds of members.

The first transhumanist elected to a parliament was Giuseppe Vatinno, in Italy in 2012.

In 2017, Penn State University Press, in cooperation with philosopher Stefan Lorenz Sorgner and sociologist James Hughes, established the Journal of Posthuman Studies as the first academic journal explicitly dedicated to the posthuman, with the goal of clarifying the notions of posthumanism and transhumanism, as well as comparing and contrasting both.

Theory

It is a matter of debate whether transhumanism is a branch of posthumanism and how this philosophical movement should be conceptualised with regard to transhumanism. Transhumanism is often referred to as a variant or activist form of posthumanism by its conservative Christian and progressive critics.

A common feature of transhumanism and philosophical posthumanism is the future vision of a new intelligent species, into which humanity will evolve and which eventually will supplement or supersede it. Transhumanism stresses the evolutionary perspective, including sometimes the creation of a highly intelligent animal species by way of cognitive enhancement (i.e. biological uplift), but clings to a "posthuman future" as the final goal of participant evolution.

Nevertheless, the idea of creating intelligent artificial beings (proposed, for example, by roboticist Hans Moravec) has influenced transhumanism. Moravec's ideas and transhumanism have also been characterised as a "complacent" or "apocalyptic" variant of posthumanism and contrasted with "cultural posthumanism" in humanities and the arts. While such a "cultural posthumanism" would offer resources for rethinking the relationships between humans and increasingly sophisticated machines, transhumanism and similar posthumanisms are, in this view, not abandoning obsolete concepts of the "autonomous liberal subject", but are expanding its "prerogatives" into the realm of the posthuman. Transhumanist self-characterisations as a continuation of humanism and Enlightenment thinking correspond with this view.

Some secular humanists conceive transhumanism as an offspring of the humanist freethought movement and argue that transhumanists differ from the humanist mainstream by having a specific focus on technological approaches to resolving human concerns (i.e. technocentrism) and on the issue of mortality. Other progressives have argued that posthumanism, in its philosophical or activist forms, amounts to a shift away from concerns about social justice, from the reform of human institutions and from other Enlightenment preoccupations, toward narcissistic longings to transcend the human body in quest of more exquisite ways of being.

The philosophy of transhumanism is closely related to technoself studies, an interdisciplinary domain of scholarly research dealing with all aspects of human identity in a technological society and focusing on the changing nature of relationships between humans and technology.

Aims

You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence.

— Byte, April 1985

Ray Kurzweil believes that a countdown to when "human life will be irreversibly transformed" can be made through plotting major world events on a graph.

While many transhumanist theorists and advocates seek to apply reason, science and technology to reduce poverty, disease, disability, and malnutrition around the globe, transhumanism is distinctive in its particular focus on the applications of technologies to the improvement of human bodies at the individual level. Many transhumanists actively assess the potential for future technologies and innovative social systems to improve the quality of all life, while seeking to make the material reality of the human condition fulfill the promise of legal and political equality by eliminating congenital mental and physical barriers.

Transhumanist philosophers argue that there not only exists a perfectionist ethical imperative for humans to strive for progress and improvement of the human condition, but that it is possible and desirable for humanity to enter a transhuman phase of existence in which humans enhance themselves beyond what is naturally human. In such a phase, natural evolution would be replaced with deliberate participatory or directed evolution.

Some theorists such as Ray Kurzweil think that the pace of technological innovation is accelerating and that the next 50 years may yield not only radical technological advances, but possibly a technological singularity, which may fundamentally change the nature of human beings. Transhumanists who foresee this massive technological change generally maintain that it is desirable, but some are concerned about the dangers of extremely rapid technological change and propose options for ensuring that advanced technology is used responsibly. For example, Bostrom has written extensively on existential risks to humanity's future welfare, including ones that emerging technologies could create. In contrast, some proponents of transhumanism view it as essential to humanity's survival. For instance, Stephen Hawking points out that the "external transmission" phase of human evolution, where knowledge production and knowledge management is more important than transmission of information via evolution, may be the point at which human civilization becomes unstable and self-destructs, one of Hawking's explanations for the Fermi paradox. To counter this, Hawking emphasizes either self-design of the human genome or mechanical enhancement (e.g., brain-computer interface) to enhance human intelligence and reduce aggression, without which he implies human civilization may be too stupid collectively to survive an increasingly unstable system, resulting in societal collapse.

While many people believe that all transhumanists are striving for immortality, that is not necessarily true. Hank Pellissier, managing director of the Institute for Ethics and Emerging Technologies (2011–2012), surveyed transhumanists. He found that, of the 818 respondents, 23.8% did not want immortality. Some of the reasons argued were boredom, Earth's overpopulation, and the desire "to go to an afterlife".

Ethics

Transhumanists engage in interdisciplinary approaches to understand and evaluate possibilities for overcoming biological limitations by drawing on futurology and various fields of ethics. Unlike many philosophers, social critics, and activists who morally value preservation of natural systems, transhumanists see the concept of the specifically natural as problematically nebulous at best and an obstacle to progress at worst. In keeping with this, many prominent transhumanist advocates, such as Dan Agin, call transhumanism's critics, on the political right and left jointly, "bioconservatives" or "bioluddites", the latter term alluding to the 19th-century anti-industrialisation social movement that opposed the replacement of human manual labourers by machines.

A belief of counter-transhumanism is that transhumanism can cause unfair human enhancement in many areas of life, but especially on the social plane. This can be compared to steroid use in sports, where athletes who use steroids gain an advantage over those who do not. The same disparity may arise when people with certain neural implants gain an advantage in the workplace and in education. Additionally, according to M. J. McNamee and S. D. Edwards, many fear that the improvements afforded to a specific, privileged section of society will lead to a division of the human species into two different species. The idea of two human species, one at a great physical and economic advantage over the other, is troublesome at best. One may be incapable of breeding with the other and may, as a consequence of lower physical health and ability, be considered of a lower moral standing than the other.

Nick Bostrom has said that transhumanism advocates for the wellbeing of all sentient beings, including non-human animals, extraterrestrials, and artificial forms of life. This view is reiterated by David Pearce, who advocates the use of biotechnology to eradicate suffering in all sentient beings.

Currents

There is a variety of opinions within transhumanist thought. Many of the leading transhumanist thinkers hold views that are under constant revision and development. Some distinctive currents of transhumanism are identified below.

Spirituality

Although many transhumanists are atheists, agnostics, or secular humanists, some have religious or spiritual views. Despite the prevailing secular attitude, some transhumanists pursue hopes traditionally espoused by religions, such as immortality, and several controversial new religious movements from the late 20th century, such as Raëlism, have explicitly embraced transhumanist goals of transforming the human condition by applying technology to alter the mind and body. But most thinkers associated with transhumanism focus on the practical goals of using technology to help achieve longer and healthier lives, while speculating that future understanding of neurotheology and the application of neurotechnology will enable humans to gain greater control of altered states of consciousness, which are commonly interpreted as spiritual experiences, and thus achieve more profound self-knowledge. Transhumanist Buddhists have sought to explore areas of agreement between various types of Buddhism and Buddhist-derived meditation and mind-expanding neurotechnologies. They have been criticised for appropriating mindfulness as a tool for transcending humanness.

Some transhumanists believe in the compatibility between the human mind and computer hardware, with the theoretical implication that human consciousness may someday be transferred to alternative media (a speculative technique commonly known as mind uploading). One extreme formulation of this idea that interests some transhumanists is the proposal of the Omega Point by Christian cosmologist Frank Tipler. Drawing upon ideas in digitalism, Tipler has advanced the notion that the collapse of the Universe billions of years hence could create the conditions for the perpetuation of humanity in a simulated reality within a megacomputer and thus achieve a form of "posthuman godhood". Before Tipler, the term Omega Point was used by Pierre Teilhard de Chardin, a paleontologist and Jesuit theologian who saw an evolutionary telos in the development of an encompassing noosphere, a global consciousness.

Viewed from the perspective of some Christian thinkers, the idea of mind uploading is asserted to represent a denigration of the human body, characteristic of gnostic manichaean belief. Transhumanism and its presumed intellectual progenitors have also been described as neo-gnostic by non-Christian and secular commentators.

The first dialogue between transhumanism and faith was a one-day conference at the University of Toronto in 2004. Religious critics alone faulted transhumanism for offering no eternal truths or relationship with the divine. They commented that a philosophy bereft of these beliefs leaves humanity adrift in a foggy sea of postmodern cynicism and anomie. Transhumanists responded that such criticisms reflect a failure to look at the actual content of transhumanist philosophy, which, far from being cynical, is rooted in optimistic, idealistic attitudes that trace back to the Enlightenment. Following this dialogue, William Sims Bainbridge, a sociologist of religion, conducted a pilot study, published in the Journal of Evolution and Technology, suggesting that religious attitudes were negatively correlated with acceptance of transhumanist ideas and indicating that people with highly religious worldviews tended to perceive transhumanism as a direct, competitive (though ultimately futile) affront to their spiritual beliefs.

Since 2006, the Mormon Transhumanist Association sponsors conferences and lectures on the intersection of technology and religion. The Christian Transhumanist Association was established in 2014.

Since 2009, the American Academy of Religion holds a "Transhumanism and Religion" consultation during its annual meeting, where scholars in the field of religious studies seek to identify and critically evaluate any implicit religious beliefs that might underlie key transhumanist claims and assumptions; consider how transhumanism challenges religious traditions to develop their own ideas of the human future, in particular the prospect of human transformation, whether by technological or other means; and provide critical and constructive assessments of an envisioned future that places greater confidence in nanotechnology, robotics, and information technology to achieve virtual immortality and create a superior posthuman species.

The physicist and transhumanist thinker Giulio Prisco states that "cosmist religions based on science might be our best protection from reckless pursuit of superintelligence and other risky technologies." He also recognizes the importance of spiritual ideas, such as those of Russian Orthodox philosopher Nikolai Fyodorovich Fyodorov, to the origins of the transhumanism movement.

Practice

While some transhumanists such as Ray Kurzweil and Hans Moravec take an abstract and theoretical approach to the perceived benefits of emerging technologies, others have offered specific proposals for modifications to the human body, including heritable ones. Transhumanists are often concerned with methods of enhancing the human nervous system. Though some, such as Kevin Warwick, propose modification of the peripheral nervous system, the brain is considered the common denominator of personhood and is thus a primary focus of transhumanist ambitions.

In fact, Warwick has gone a lot further than merely making a proposal. In 2002, he had a 100-electrode array surgically implanted into the median nerves of his left arm to link his nervous system directly with a computer and thus also connect with the internet. He then carried out a series of experiments: he was able to directly control a robot hand using his neural signals and to feel the force applied by the hand through feedback from the fingertips. He also experienced a form of ultrasonic sensory input and conducted the first purely electronic communication between his own nervous system and that of his wife, who also had electrodes implanted.

As proponents of self-improvement and body modification, transhumanists tend to use existing technologies and techniques that supposedly improve cognitive and physical performance, while engaging in routines and lifestyles designed to improve health and longevity. Depending on their age, some transhumanists, such as Kurzweil, express concern that they will not live to reap the benefits of future technologies. However, many have a great interest in life extension strategies and in funding cryonics research in the hope of making it a viable option of last resort, rather than an unproven method.

While most transhumanist theory focuses on future technologies and the changes they may bring, many today are already involved in the practice on a very basic level. It is not uncommon for people to receive cosmetic changes to their physical form via cosmetic surgery, even when it is not required for health reasons. Human growth hormone treatments attempt to alter the natural development of shorter children or those born with a physical deficiency. Doctors prescribe medicines such as Ritalin and Adderall to improve cognitive focus, and many people take "lifestyle" drugs such as Viagra, Propecia, and Botox to restore aspects of youthfulness lost in maturity.

Other transhumanists, such as cyborg artist Neil Harbisson, use technologies and techniques to improve their senses and perception of reality. Harbisson's antenna, which is permanently implanted in his skull, allows him to sense colours beyond normal human perception, such as infrared and ultraviolet.

Technologies of interest

Transhumanists support the emergence and convergence of technologies including nanotechnology, biotechnology, information technology and cognitive science (NBIC), as well as hypothetical future technologies like simulated reality, artificial intelligence, superintelligence, 3D bioprinting, mind uploading, chemical brain preservation and cryonics. They believe that humans can and should use these technologies to become more than human. Therefore, they support the recognition or protection of cognitive liberty, morphological freedom and procreative liberty as civil liberties, so as to guarantee individuals the choice of using human enhancement technologies on themselves and their children. Some speculate that human enhancement techniques and other emerging technologies may facilitate more radical human enhancement by the midpoint of the 21st century. Kurzweil's book The Singularity Is Near and Michio Kaku's book Physics of the Future outline various human enhancement technologies and give insight into how these technologies may impact the human race.

Some reports on the converging technologies and NBIC concepts have criticised their transhumanist orientation and alleged science-fictional character. At the same time, research on brain and body alteration technologies has been accelerated under the sponsorship of the U.S. Department of Defense, which is interested in the battlefield advantages they would provide to the supersoldiers of the United States and its allies. There has already been a brain research program to "extend the ability to manage information", while military scientists are now looking at stretching the human capacity for combat to a maximum of 168 hours without sleep.

Neuroscientist Anders Sandberg has been researching a method of scanning ultra-thin sections of the brain in order to better understand its architecture. The method is currently being applied to mice, and is a first step towards hypothetically uploading the contents of a human brain, including memories and emotions, onto a computer.

Debate

The very notion and prospect of human enhancement and related issues arouse public controversy. Criticisms of transhumanism and its proposals take two main forms: those objecting to the likelihood of transhumanist goals being achieved (practical criticisms) and those objecting to the moral principles or worldview sustaining transhumanist proposals or underlying transhumanism itself (ethical criticisms). Critics and opponents often see transhumanists' goals as posing threats to human values.

The human enhancement debate is, for some, framed by the opposition between strong bioconservatism and transhumanism. The former opposes any form of human enhancement, whereas the latter advocates for all possible human enhancements. But many philosophers hold a more nuanced view in favour of some enhancements while rejecting the transhumanist carte blanche approach.

Transhumanists argue that parents have a moral responsibility called procreative beneficence to make use of these methods, if and when they are shown to be reasonably safe and effective, to have the healthiest children possible. They believe this responsibility is a moral judgment best left to individual conscience, rather than imposed by law, in all but extreme cases. In this context, the emphasis on freedom of choice is called procreative liberty.

Some of the best-known critiques of the transhumanist program are novels and fictional films. These works, despite presenting imagined worlds rather than philosophical analyses, are touchstones for some of the more formal arguments. Various arguments have been made to the effect that a society that adopts human enhancement technologies may come to resemble the dystopia depicted in Aldous Huxley's 1932 novel Brave New World.

Some authors consider humanity already transhuman, because recent medical advances have significantly altered our species. These changes, however, have not been made consciously, and therefore not in a transhumanist way. From such a perspective, transhumanism is perpetually aspirational: as new technologies become mainstream, the adoption of still-unadopted technologies becomes a new, shifting goal.

Giuseppe Vatinno, a member of Italy's parliament, believes transhumanism will make people "less subject to the whims of nature, such as illness or climate extremes".

Feasibility

In a 1992 book, sociologist Max Dublin pointed to many past failed predictions of technological progress and argued that modern futurist predictions would prove similarly inaccurate. He also objected to what he saw as scientism, fanaticism and nihilism by a few in advancing transhumanist causes. Dublin also said that historical parallels existed between Millenarian religions and Communist doctrines.

Although generally sympathetic to transhumanism, public health professor Gregory Stock is skeptical of the technical feasibility and mass appeal of the cyborgization of humanity predicted by Kurzweil, Hans Moravec, and Kevin Warwick. He said that, throughout the 21st century, many humans will be deeply integrated into systems of machines, but remain biological. Primary changes to their own form and character would arise not from cyberware, but from the direct manipulation of their genetics, metabolism and biochemistry.

In her 1992 book Science as Salvation, philosopher Mary Midgley traces the notion of achieving immortality by transcendence of the material human body (echoed in the transhumanist tenet of mind uploading) to a group of male scientific thinkers of the early 20th century, including J. B. S. Haldane and members of his circle. She characterizes these ideas as "quasi-scientific dreams and prophecies" involving visions of escape from the body coupled with "self-indulgent, uncontrolled power-fantasies". Her argument focuses on what she perceives as the pseudoscientific speculations and irrational, fear-of-death-driven fantasies of these thinkers, their disregard for laymen and the remoteness of their eschatological visions.

Another critique is aimed mainly at "algeny" (a portmanteau of alchemy and genetics), which Jeremy Rifkin defined as "the upgrading of existing organisms and the design of wholly new ones with the intent of 'perfecting' their performance". It emphasizes the issue of biocomplexity and the unpredictability of attempts to guide the development of products of biological evolution. This argument, elaborated in particular by the biologist Stuart Newman, is based on the recognition that cloning and germline genetic engineering of animals are error-prone and inherently disruptive of embryonic development. Accordingly, so it is argued, it would create unacceptable risks to use such methods on human embryos. Performing experiments, particularly ones with permanent biological consequences, on developing humans would thus be in violation of accepted principles governing research on human subjects (see the 1964 Declaration of Helsinki). Moreover, because improvements in experimental outcomes in one species are not automatically transferable to a new species without further experimentation, it is claimed that there is no ethical route to genetic manipulation of humans at early developmental stages.

As a practical matter, international protocols on human subject research may not present a legal obstacle to attempts by transhumanists and others to improve their offspring by germinal choice technology. According to legal scholar Kirsten Rabe Smolensky, existing laws protect parents who choose to enhance their child's genome from future liability arising from adverse outcomes of the procedure.

Transhumanists and other supporters of human genetic engineering do not dismiss practical concerns out of hand, insofar as there is a high degree of uncertainty about the timelines and likely outcomes of genetic modification experiments in humans. But bioethicist James Hughes suggests that one possible ethical route to the genetic manipulation of humans at early developmental stages is the building of computer models of the human genome, the proteins it specifies and the tissue engineering that he argues it also codes for. With the exponential progress in bioinformatics, Hughes believes that a virtual model of genetic expression in the human body will not be far behind and that it will soon be possible to accelerate approval of genetic modifications by simulating their effects on virtual humans. Public health professor Gregory Stock points to artificial chromosomes as a safer alternative to existing genetic engineering techniques.

Thinkers who defend the likelihood of accelerating change point to a past pattern of exponential increases in humanity's technological capacities. Kurzweil developed this position in his 2005 book The Singularity Is Near.

Intrinsic immorality

Some argue that, in transhumanist thought, humans attempt to substitute themselves for God. The 2002 Vatican statement Communion and Stewardship: Human Persons Created in the Image of God stated that "changing the genetic identity of man as a human person through the production of an infrahuman being is radically immoral", since it would imply that "man has full right of disposal over his own biological nature". The statement also argues that creation of a superhuman or spiritually superior being is "unthinkable", since true improvement can come only through religious experience and "realizing more fully the image of God". Christian theologians and lay activists of several churches and denominations have expressed similar objections to transhumanism and claimed that Christians attain in the afterlife what radical transhumanism promises, such as indefinite life extension or the abolition of suffering. In this view, transhumanism is just another representative of the long line of utopian movements which seek to create "heaven on earth". On the other hand, religious thinkers allied with transhumanist goals, such as the theologians Ronald Cole-Turner and Ted Peters, hold that the doctrine of "co-creation" provides an obligation to use genetic engineering to improve human biology.

Other critics target what they claim to be an instrumental conception of the human body in the writings of Minsky, Moravec, and some other transhumanists. Reflecting a strain of feminist criticism of the transhumanist program, philosopher Susan Bordo points to "contemporary obsessions with slenderness, youth and physical perfection", which she sees as affecting both men and women, but in distinct ways, as "the logical (if extreme) manifestations of anxieties and fantasies fostered by our culture." Some critics question other social implications of the movement's focus on body modification. Political scientist Klaus-Gerd Giesen, in particular, has asserted that transhumanism's concentration on altering the human body represents the logical yet tragic consequence of atomized individualism and body commodification within a consumer culture.

Bostrom responds that the desire to regain youth, specifically, and transcend the natural limitations of the human body, in general, is pan-cultural and pan-historical, not uniquely tied to the culture of the 20th century. He argues that the transhumanist program is an attempt to channel that desire into a scientific project on par with the Human Genome Project and achieve humanity's oldest hope, rather than a puerile fantasy or social trend.

Loss of human identity

In the U.S., the Amish are a religious group best known for their avoidance of certain modern technologies. Transhumanists draw a parallel by arguing that in the near future there will probably be "humanish": people who choose to "stay human" by not adopting human enhancement technologies. They believe this choice must be respected and protected.

In his 2003 book Enough: Staying Human in an Engineered Age, environmental ethicist Bill McKibben argued at length against many of the technologies that are postulated or supported by transhumanists, including germinal choice technology, nanomedicine and life extension strategies. He claims that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to "improve" themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome technologically. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using as examples Ming China, Tokugawa Japan and the contemporary Amish.

Biopolitical activist Jeremy Rifkin and biologist Stuart Newman accept that biotechnology has the power to make profound changes in organismal identity. They argue against the genetic engineering of human beings because they fear the blurring of the boundary between human and artifact. Philosopher Keekok Lee sees such developments as part of an accelerating trend in modernization in which technology has been used to transform the "natural" into the "artefactual". In the extreme, this could lead to the manufacturing and enslavement of "monsters" such as human clones, human-animal chimeras, or bioroids, but even lesser dislocations of humans and non-humans from social and ecological systems are seen as problematic. The film Blade Runner (1982) and the novels The Boys From Brazil (1976) and The Island of Doctor Moreau (1896) depict elements of such scenarios, but Mary Shelley's 1818 novel Frankenstein; or, The Modern Prometheus is most often alluded to by critics who suggest that biotechnologies could create objectified and socially unmoored people as well as subhumans. Such critics propose that strict measures be implemented to prevent what they portray as dehumanizing possibilities from ever happening, usually in the form of an international ban on human genetic engineering.

Science journalist Ronald Bailey claims that McKibben's historical examples are flawed and support different conclusions when studied more closely. For example, few groups are more cautious than the Amish about embracing new technologies, but, though they shun television and use horses and buggies, some are welcoming the possibilities of gene therapy, since inbreeding has afflicted them with a number of rare genetic diseases. Bailey and other supporters of technological alteration of human biology also reject as extremely subjective the claim that life would be experienced as meaningless if some human limitations were overcome with enhancement technologies.

Writing in Reason magazine, Bailey has accused opponents of research involving the modification of animals of indulging in alarmism when they speculate about the creation of subhuman creatures with human-like intelligence and brains resembling those of Homo sapiens. Bailey insists that the aim of conducting research on animals is simply to produce human health care benefits.

A different response comes from transhumanist personhood theorists who object to what they characterize as the anthropomorphobia fueling some criticisms of this research, which science fiction writer Isaac Asimov termed the "Frankenstein complex". For example, Woody Evans argues that, provided they are self-aware, human clones, human-animal chimeras and uplifted animals would all be unique persons deserving of respect, dignity, rights, responsibilities, and citizenship. They conclude that the coming ethical issue is not the creation of so-called monsters, but what they characterize as the "yuck factor" and "human-racism", that would judge and treat these creations as monstrous. In book 3 of his Corrupting the Image series, Douglas Hamp goes so far as to suggest that the Beast of John's Apocalypse is himself a hybrid who will induce humanity to take "the mark of the Beast", in the hopes of obtaining perfection and immortality.

At least one public interest organization, the U.S.-based Center for Genetics and Society, was formed, in 2001, with the specific goal of opposing transhumanist agendas that involve transgenerational modification of human biology, such as full-term human cloning and germinal choice technology. The Institute on Biotechnology and the Human Future of the Chicago-Kent College of Law critically scrutinizes proposed applications of genetic and nanotechnologies to human biology in an academic setting.

Socioeconomic effects

Some critics of libertarian transhumanism have focused on the likely socioeconomic consequences in societies in which divisions between rich and poor are on the rise. Bill McKibben, for example, suggests that emerging human enhancement technologies would be disproportionately available to those with greater financial resources, thereby exacerbating the gap between rich and poor and creating a "genetic divide". Even Lee M. Silver, the biologist and science writer who coined the term "reprogenetics" and supports its applications, has expressed concern that these methods could create a two-tiered society of genetically engineered "haves" and "have nots" if social democratic reforms lag behind implementation of enhancement technologies. The 1997 film Gattaca depicts a dystopian society in which one's social class depends entirely on genetic potential and is often cited by critics in support of these views.

These criticisms are also voiced by non-libertarian transhumanist advocates, especially self-described democratic transhumanists, who believe that the majority of current or future social and environmental issues (such as unemployment and resource depletion) must be addressed by a combination of political and technological solutions (like a guaranteed minimum income and alternative technology). Therefore, on the specific issue of an emerging genetic divide due to unequal access to human enhancement technologies, bioethicist James Hughes, in his 2004 book Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future, argues that progressives or, more precisely, techno-progressives, must articulate and implement public policies (e.g., a universal health care voucher system that covers human enhancement technologies) to attenuate this problem as much as possible, rather than trying to ban human enhancement technologies. The latter, he argues, might actually worsen the problem by making these technologies unsafe or available only to the wealthy on the local black market or in countries where such a ban is not enforced.

Sometimes, as in the writings of Leon Kass, the fear is that various institutions and practices judged as fundamental to civilized society would be damaged or destroyed. In his 2002 book Our Posthuman Future and in a 2004 Foreign Policy magazine article, political economist and philosopher Francis Fukuyama designates transhumanism as the world's most dangerous idea because he believes it may undermine the egalitarian ideals of democracy (in general) and liberal democracy (in particular) through a fundamental alteration of "human nature". Social philosopher Jürgen Habermas makes a similar argument in his 2003 book The Future of Human Nature, in which he asserts that moral autonomy depends on not being subject to another's unilaterally imposed specifications. Habermas thus suggests that the human "species ethic" would be undermined by embryo-stage genetic alteration. Critics such as Kass and Fukuyama hold that attempts to significantly alter human biology are not only inherently immoral, but also threaten the social order. Alternatively, they argue that implementation of such technologies would likely lead to the "naturalizing" of social hierarchies or place new means of control in the hands of totalitarian regimes. AI pioneer Joseph Weizenbaum criticizes what he sees as misanthropic tendencies in the language and ideas of some of his colleagues, in particular Minsky and Moravec, which, by devaluing the human organism per se, promote a discourse that enables divisive and undemocratic social policies.

In a 2004 article in the libertarian monthly Reason, science journalist Ronald Bailey contested Fukuyama's assertions by arguing that political equality has never rested on the facts of human biology. He asserts that liberalism was founded not on the proposition of effective equality of human beings, or de facto equality, but on the assertion of an equality in political rights and before the law, or de jure equality. Bailey asserts that the products of genetic engineering may well ameliorate rather than exacerbate human inequality, giving to the many what were once the privileges of the few. Moreover, he argues, "the crowning achievement of the Enlightenment is the principle of tolerance". In fact, he says, political liberalism is already the solution to the issue of human and posthuman rights since in liberal societies the law is meant to apply equally to all, no matter how rich or poor, powerful or powerless, educated or ignorant, enhanced or unenhanced. Other thinkers sympathetic to transhumanist ideas, such as Russell Blackford, have also objected to the appeal to tradition and what they see as alarmism involved in Brave New World-type arguments.

Cultural aesthetics

In addition to the socioeconomic risks and implications of transhumanism, there are possible consequences for cultural aesthetics. Currently, there are a number of ways in which people choose to represent themselves in society: dress, hairstyle, and body alteration all serve to shape how a person presents themselves and is perceived. According to Foucault, society already governs and controls bodies by making them feel watched. This "surveillance" by society dictates how the majority of individuals choose to express themselves aesthetically.

One of the risks outlined in a 2004 article by Jerold Abrams is the elimination of differences in favor of universality. This, he argues, would eliminate the ability of individuals to subvert the possibly oppressive, dominant structure of society by expressing themselves uniquely through their external appearance. Such control over a population would carry dangerous implications of tyranny. Another consequence of enhancing the human form not only cognitively but physically would be the reinforcement of "desirable" traits perpetuated by the dominant social structure.

New eugenics

The tradition of human enhancement originated with the eugenics movement that was once prominent in the biological sciences, and was later politicized in various ways. This continuity is especially clear in the case of Julian Huxley himself.

The major transhumanist organizations strongly condemn the coercion involved in such policies and reject the racist and classist assumptions on which they were based, along with the pseudoscientific notions that eugenic improvements could be accomplished in a practically meaningful time frame through selective human breeding. Instead, most transhumanist thinkers advocate a "new eugenics", a form of egalitarian liberal eugenics. In their 2000 book From Chance to Choice: Genetics and Justice, non-transhumanist bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler have argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements. Most transhumanists holding similar views nonetheless distance themselves from the term "eugenics" (preferring "germinal choice" or "reprogenetics") to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.

Health law professor George Annas and technology law professor Lori Andrews are prominent advocates of the position that the use of these technologies could lead to human-posthuman caste warfare.

Existential risks

In his 2003 book Our Final Hour, British Astronomer Royal Martin Rees argues that advanced science and technology bring as much risk of disaster as opportunity for progress. However, Rees does not advocate a halt to scientific activity. Instead, he calls for tighter security and perhaps an end to traditional scientific openness. Advocates of the precautionary principle, such as many in the environmental movement, also favor slow, careful progress or a halt in potentially dangerous areas. Some precautionists believe that artificial intelligence and robotics present possibilities of alternative forms of cognition that may threaten human life.

Transhumanists do not necessarily rule out specific restrictions on emerging technologies so as to lessen the prospect of existential risk. Generally, however, they counter that proposals based on the precautionary principle are often unrealistic and sometimes even counter-productive as opposed to the technogaian current of transhumanism, which they claim is both realistic and productive. In his television series Connections, science historian James Burke dissects several views on technological change, including precautionism and the restriction of open inquiry. Burke questions the practicality of some of these views, but concludes that maintaining the status quo of inquiry and development poses hazards of its own, such as a disorienting rate of change and the depletion of our planet's resources. The common transhumanist position is a pragmatic one where society takes deliberate action to ensure the early arrival of the benefits of safe, clean, alternative technology, rather than fostering what it considers to be anti-scientific views and technophobia.

Nick Bostrom argues that even barring the occurrence of a singular global catastrophic event, basic Malthusian and evolutionary forces facilitated by technological progress threaten to eliminate the positive aspects of human society.

One transhumanist solution proposed by Bostrom to counter existential risks is control of differential technological development, a series of attempts to influence the sequence in which technologies are developed. In this approach, planners would strive to retard the development of possibly harmful technologies and their applications, while accelerating the development of likely beneficial technologies, especially those that offer protection against the harmful effects of others.

In their 2021 book Calamity Theory, Joshua Schuster and Derek Woods critique existential risks by arguing against Bostrom's transhumanist perspective, which emphasizes controlling and mitigating these risks through technological advancements. They contend that this approach relies too much on fringe science and speculative technologies and fails to address deeper philosophical and ethical problems about the nature of human existence and its limitations. Instead, they advocate an approach more grounded in secular existentialist philosophy, focusing on mental fortitude, community resilience, international peacebuilding, and environmental stewardship to better cope with existential risks.

Antinatalism and pronatalism

Although most people focus on the scientific and technological barriers on the road to human enhancement, Robbert Zandbergen argues that contemporary transhumanists' failure to critically engage the cultural current of antinatalism is a far bigger obstacle to a posthuman future. Antinatalism is a stance seeking to discourage, restrict, or terminate human reproduction to solve existential problems. If transhumanists fail to take this threat to human continuity seriously, they run the risk of seeing the collapse of the entire edifice of radical enhancement.

Simone and Malcolm Collins, founders of Pronatalist.org, are activists known primarily for their views and advocacy related to a secular and voluntaristic form of pronatalism, a stance encouraging higher birth rates to reverse demographic decline and its negative implications for the viability of modern societies and the possibility of a better future. Critical of transhumanism, they have expressed concern that life extension would worsen the problem of gerontocracy, causing toxic imbalances in power. The Collinses lament that voluntarily childfree transhumanists who "want to live forever believe they are the epitome of centuries of human cultural and biological evolution. They don’t think they can make kids that are better than them."

Propagandistic use

Common enemy to anti-democratic movements

Transhumanism has increasingly been co-opted by anti-democratic movements as a common enemy stereotype. These movements range from Putin sympathizers to radical anti-vaxxers and Christian fundamentalists. Critics argue that their often nonsensical claims stem from deliberate ignorance, while the movements themselves contend that labels such as "Putin sympathizer" or "conspiracy theorist" are used to defame legitimate criticism.

Political scientists like Markus Linden point out that Putin, in his speeches, argues against the so-called "liberal-globalist American egocentrism" and cancel culture, which parallels the agitation seen in alternative media. These discourses also occur on platforms like Nachdenkseiten, Rubikon, and Compact, where they are presented as analyses of the decline of Western democracy.

The propagandistic use of the term "transhumanism" aims to create a comprehensive counter-narrative that unites right-wing extremists, theocratic groups, and liberals. Transhumanism is portrayed as a threat to traditional values and human nature. These narratives can also be found among ideologues like Alexander Dugin, who condemns transhumanism as the work of the devil, and Christian fundamentalists who equate it with the denial of traditional values.

The use of the term "transhumanism" as an ideological rallying point for the Querfront is also evident in the fusion of right-wing, left-wing, and libertarian ideas that collectively oppose liberal democracies. This development emphasizes individual conceptions of humanity that are often incompatible with a pluralistic society. It requires a critical examination of the political implications of transhumanism and its instrumentalization by anti-democratic forces.

Quantum mind

The quantum mind or quantum consciousness is a group of hypotheses proposing that local physical laws and interactions from classical mechanics or connections between neurons alone cannot explain consciousness. These hypotheses posit instead that quantum-mechanical phenomena, such as entanglement and superposition that cause nonlocalized quantum effects, interacting in smaller features of the brain than cells, may play an important part in the brain's function and could explain critical aspects of consciousness. These scientific hypotheses are as yet unvalidated, and they can overlap with quantum mysticism.

History

Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron".

Other contemporary physicists and philosophers considered these arguments unconvincing. Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons".

David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why specific macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no specific reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either.

Approaches

Bohm and Hiley

David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe. He claimed that both quantum theory and relativity pointed to this deeper theory, a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm's proposed order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness.

Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his "implicate order" could emerge in a way relevant to consciousness. He later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness.

David Bohm also collaborated with Basil Hiley on work that claimed mind and matter both emerge from an "implicate order". Hiley in turn worked with philosopher Paavo Pylkkänen. According to Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level".

Penrose and Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as "orchestrated objective reduction" (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. They reviewed and updated their theory in 2013.

Penrose's argument stemmed from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a consistent formal system cannot prove its own consistency, results that are unprovable within such a system are nonetheless provable by human mathematicians. Penrose took this to mean that human mathematicians are not formal proof systems and are not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on a fallacious equivocation on the meaning of computation. In the same book, Penrose wrote: "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity."

Penrose determined that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, he proposed a new form of wave function collapse that occurs in isolation and called it objective reduction. He suggested each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length, they become unstable and collapse. Penrose suggested that objective reduction represents neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derives.
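
In Penrose's later papers with Hameroff, this instability is made quantitative: the expected lifetime of a superposition before objective reduction is set by the gravitational self-energy of the difference between the two superposed mass distributions, in an uncertainty-principle-style estimate:

```latex
% tau: expected lifetime of the superposition before objective reduction
% E_G: gravitational self-energy of the difference between the two
%      superposed mass distributions
\tau \approx \frac{\hbar}{E_G}
```

The larger the mass displacement between the superposed states, the larger E_G and the faster the proposed collapse.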

Hameroff hypothesized that microtubules would be suitable hosts for quantum behavior. Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and may contain delocalized π electrons. Tubulins have other smaller non-polar regions that contain π-electron-rich indole rings separated by about 2 nm. Hameroff proposed that these electrons are close enough to become entangled. He originally suggested that the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited. He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules, but this too was experimentally discredited.

Other predictions of Orch-OR have fared no better. For instance, the proposed predominance of A-lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al., who showed that all in vivo microtubules have a B lattice and a seam. Orch-OR predicted that microtubule coherence reaches the synapses through dendritic lamellar bodies (DLBs), but De Zeeuw et al. proved this impossible by showing that DLBs are micrometers away from gap junctions.

In 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013 corroborates Orch-OR theory. Experiments that showed that anaesthetic drugs reduce how long microtubules can sustain suspected quantum excitations appear to support the quantum theory of consciousness.

In April 2022, the results of two related experiments at the University of Alberta and Princeton University were announced at The Science of Consciousness conference, providing further evidence to support quantum processes operating within microtubules. In a study Stuart Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics shorten the duration of a process called delayed luminescence, in which microtubules and tubulins re-emit trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility. In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules further than expected, which did not occur when repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex due to the wide range of length scales in the fluid-filled extracellular space.[35] Nevertheless, University of Oxford quantum physicist Vlatko Vedral said that the connection with consciousness is "a really long shot".

Also in 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness.

Although these theories are stated in a scientific framework, it is difficult to separate them from scientists' personal opinions. The opinions are often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote:

[M]y own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer.... If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, "Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot." Other people would say, "No, you can't say it feels something merely because it behaves as though it feels something." My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was—which I say it couldn't be, if it's entirely computationally controlled.

Penrose continues:

A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either—although I'm saying that it's beyond the physics we know now.... My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior—that is, where quantum measurement comes in.

Umezawa, Vitiello, Freeman

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain. Their quantum field theory models of brain dynamics are fundamentally different from the Penrose–Hameroff theory.

Quantum brain dynamics

As described by Harald Atmanspacher, "Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness."

The original motivation in the early 20th century for relating quantum theory to consciousness was essentially philosophical. It is fairly plausible that conscious free decisions ("free will") are problematic in a perfectly deterministic world, so quantum randomness might indeed open up novel possibilities for free will. (On the other hand, randomness is problematic for goal-directed volition!)

Ricciardi and Umezawa proposed in 1967 a general theory of quanta of long-range coherent waves within and between brain cells, and showed a possible mechanism of memory storage and retrieval in terms of Nambu–Goldstone bosons. Mari Jibu and Kunio Yasue later popularized these results under the name "quantum brain dynamics" (QBD) as the hypothesis to explain the function of the brain within the framework of quantum field theory with implications on consciousness.

Pribram

Karl Pribram's holonomic brain theory (quantum holography) invoked quantum field theory to explain higher-order processing of memory in the brain. He argued that his holonomic model solved the binding problem. Pribram collaborated with Bohm in his work on quantum approaches to the thought process. Pribram suggested that much of the processing in the brain was done in a distributed fashion, and proposed that the fine-fibered, felt-like dendritic fields might follow the principles of quantum field theory when storing and retrieving long-term memory.

Stapp

Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues from the orthodox quantum mechanics of John von Neumann that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. The collapse therefore takes place in the expectations of the observer associated with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev.

Catecholaminergic neuron electron transport (CNET)

CNET is a hypothesized neural signaling mechanism in catecholaminergic neurons that would use quantum mechanical electron transport. The hypothesis is based in part on the observation by many independent researchers that electron tunneling occurs in ferritin, an iron storage protein that is prevalent in those neurons, at room temperature and ambient conditions. The hypothesized function of this mechanism is to assist in action selection, but the mechanism itself would be capable of integrating millions of cognitive and sensory neural signals using a physical mechanism associated with strong electron-electron interactions. Each tunneling event would involve a collapse of an electron wave function, but the collapse would be incidental to the physical effect created by strong electron-electron interactions.

CNET predicted a number of physical properties of these neurons that have been subsequently observed experimentally, such as electron tunneling in substantia nigra pars compacta (SNc) tissue and the presence of disordered arrays of ferritin in SNc tissue. The hypothesis also predicted that disordered ferritin arrays like those found in SNc tissue should be capable of supporting long-range electron transport and providing a switching or routing function, both of which have also been subsequently observed.

Another prediction of CNET was that the largest SNc neurons should mediate action selection. This prediction was contrary to earlier proposals about the function of those neurons at that time, which were based on predictive reward dopamine signaling. A team led by Pascal Kaeser of Harvard Medical School subsequently demonstrated that those neurons do in fact encode movement, consistent with the earlier predictions of CNET. While the CNET mechanism has not yet been directly observed, it may be possible to do so using quantum dot fluorophores tagged to ferritin or other methods for detecting electron tunneling.

CNET is applicable to a number of different consciousness models as a binding or action selection mechanism, such as Integrated Information Theory (IIT) and Sensorimotor Theory (SMT). Notably, many existing models of consciousness fail to specifically address action selection or binding. For example, O'Regan and Noë call binding a "pseudo problem," but also state that "the fact that object attributes seem perceptually to be part of a single object does not require them to be 'represented' in any unified kind of way, for example, at a single location in the brain, or by a single process. They may be so represented, but there is no logical necessity for this." Simply because there is no "logical necessity" for a physical phenomenon does not mean that it does not exist, or that, once identified, it can be ignored. Likewise, global workspace theory (GWT) models appear to treat dopamine as modulatory, based on the prior understanding of those neurons from predictive reward dopamine signaling research, but GWT models could be adapted to include modeling of moment-by-moment activity in the striatum to mediate action selection, as observed by Kaeser's team. CNET is applicable to those neurons as a selection mechanism for that function, as otherwise that function could result in seizures from simultaneous actuation of competing sets of neurons. While CNET by itself is not a model of consciousness, it is able to integrate different models of consciousness through neural binding and action selection. However, a more complete understanding of how CNET might relate to consciousness would require a better understanding of strong electron-electron interactions in ferritin arrays, which implicates the many-body problem.

Criticism

These hypotheses of the quantum mind remain hypothetical speculation, as Penrose admits in his discussions. Until they make a prediction that is tested by experimentation, the hypotheses are not based on empirical evidence. In 2010, Lawrence Krauss was guarded in criticising Penrose's ideas. He said: "Roger Penrose has given lots of new-age crackpots ammunition... Many people are dubious that Penrose's suggestions are reasonable, because the brain is not an isolated quantum-mechanical system. To some extent it could be, because memories are stored at the molecular level, and at a molecular level quantum mechanics is significant." According to Krauss, "It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact, we can make weird quantum phenomena happen. But what quantum mechanics doesn't change about the universe is, if you want to change things, you still have to do something. You can't change the world by thinking about it."

The process of testing the hypotheses with experiments is fraught with conceptual/theoretical, practical, and ethical problems.

Conceptual problems

The idea that a quantum effect is necessary for consciousness to function is still in the realm of philosophy. Penrose proposes that it is necessary, but other theories of consciousness do not indicate that it is needed. For example, Daniel Dennett proposed a theory called multiple drafts model, which doesn't indicate that quantum effects are needed, in his 1991 book Consciousness Explained. A philosophical argument on either side is not a scientific proof, although philosophical analysis can indicate key differences in the types of models and show what type of experimental differences might be observed. But since there is no clear consensus among philosophers, there is no conceptual support that a quantum mind theory is needed.

A possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study like consciousness, without expecting that the laws of quantum physics will apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's survival depended on the state of a radioactive atom—whether it had decayed and emitted radiation. According to Schrödinger, the Copenhagen interpretation implies that the cat is both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; he intended the example to illustrate the absurdity of the existing view of quantum mechanics. But since Schrödinger's time, physicists have given other interpretations of the mathematics of quantum mechanics, some of which regard the "alive and dead" cat superposition as quite real. Schrödinger's famous thought experiment poses the question of when a system stops existing as a quantum superposition of states. In the same way, one can ask whether the act of making a decision is analogous to having a superposition of states of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. This analogy of decision-making uses a formalism derived from quantum mechanics, but does not indicate the actual mechanism by which the decision is made.

In this way, the idea is similar to quantum cognition. This field clearly distinguishes itself from the quantum mind, as it is not reliant on the hypothesis that there is something micro-physical quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm, generalized quantum paradigm, or quantum structure paradigm that information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and does not propose that quantum mechanics is the physical mechanism by which it operates. For example, quantum cognition proposes that some decisions can be analyzed as if there is interference between two alternatives, but it is not a physical quantum interference effect.
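
The quantum-like analysis of a binary decision can be sketched numerically. Classically, the law of total probability fixes P(A) from the conditional probabilities; the quantum-like model derives probabilities from squared amplitudes, which adds a cosine interference term with a free phase parameter. All numbers below are illustrative, not fitted to any experiment:

```python
import math

# P(B): probability of the unresolved alternative; conditionals for outcome A.
p_b = 0.5
p_a_given_b = 0.6
p_a_given_not_b = 0.5

# Classical law of total probability:
classical = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)  # 0.55

# Quantum-like model: an interference term with relative phase theta
# (a free parameter, here chosen arbitrarily for illustration).
theta = 2.5
interference = 2 * math.sqrt(p_a_given_b * p_b * p_a_given_not_b * (1 - p_b)) * math.cos(theta)
quantum_like = classical + interference

print(round(classical, 3), round(quantum_like, 3))
```

When the alternative B is left unresolved, the quantum-like prediction for P(A) deviates from the classical one; the mathematics mirrors two-slit interference without claiming any physical quantum process in the brain.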

Practical problems

The main theoretical argument against the quantum-mind hypothesis is the assertion that quantum states in the brain would lose coherency before they reached a scale where they could be useful for neural processing. This supposition was elaborated by Max Tegmark, whose calculations indicate that quantum systems in the brain decohere at sub-picosecond timescales. No brain process has been shown to produce computational results or reactions on such a fast timescale; typical neural responses occur on the order of milliseconds, many orders of magnitude longer than the decoherence times.
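
The mismatch Tegmark identified can be checked with one line of arithmetic, using the decoherence range from his 2000 paper (roughly 1e-13 to 1e-20 s) against a millisecond neural timescale:

```python
# Order-of-magnitude check of Tegmark's decoherence argument.
neural = 1e-3                      # typical neural response time, seconds
decohere_slow = 1e-13              # slowest of Tegmark's decoherence estimates
decohere_fast = 1e-20              # fastest of Tegmark's decoherence estimates

gap_min = neural / decohere_slow   # ~1e10: ten billion times too slow
gap_max = neural / decohere_fast   # ~1e17
print(f"neural dynamics are {gap_min:.0e} to {gap_max:.0e} times slower")
```

Even the most generous estimate leaves a gap of ten orders of magnitude between decoherence and neural signaling.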

Daniel Dennett cites, in support of his multiple drafts model, an experimental result involving an optical illusion that unfolds on a timescale of less than a second. In this experiment, two different-colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light appears to move across to the position of the second light, and it seems to change color as it moves across the visual field: a green light will appear to turn red as it moves toward the position of a red light. Dennett asks how we could see the light change color before the second light is observed. Velmans argues that the cutaneous rabbit illusion, another illusion that unfolds in about a second, demonstrates that there is a delay while modelling occurs in the brain, a delay discovered by Libet. These illusions, slow as they are at sub-second timescales, do not support a proposal that the brain functions on the picosecond timescale.

Penrose says:

The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them.

For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large-scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.

There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform.

Penrose also said in an interview:

...whatever consciousness is, it must be beyond computable physics.... It's not that consciousness depends on quantum mechanics, it's that it depends on where our current theories of quantum mechanics go wrong. It's to do with a theory that we don't know yet.

A demonstration of a quantum effect in the brain would have to explain this problem, explain why it is not relevant, or show that the brain somehow circumvents the loss of quantum coherency at body temperature. As Penrose proposes, it may require a new type of physical theory, something "we don't know yet."

Ethical problems

Deepak Chopra has referred to a "quantum soul" existing "apart from the body", to human "access to a field of infinite possibilities", and to other quantum mysticism topics such as quantum healing or quantum effects of consciousness. Seeing the human body as being undergirded by a "quantum-mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself", as determined by one's state of mind. Robert Carroll states that Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings. Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are based on the same principles as quantum mechanics. This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body. Chopra said: "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there's a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics." On the other hand, he also claims that quantum effects are "just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness." In his book Quantum Healing, Chopra stated the conclusion that quantum entanglement links everything in the Universe, and therefore it must create consciousness.

According to Daniel Dennett, "On this topic, everybody's an expert... but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable."

While quantum effects are significant in the physiology of the brain, critics of quantum mind hypotheses challenge whether the effects of known or speculated quantum phenomena in biology scale up to have significance in neuronal computation, much less in the emergence of consciousness as a phenomenon. Daniel Dennett said, "Quantum effects are there in your car, your watch, and your computer. But most things—most macroscopic objects—are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them."

Emergence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Emergence
The formation of complex symmetrical and fractal patterns in snowflakes exemplifies emergence in a physical system.
A termite "cathedral" mound produced by a termite colony offers a classic example of emergence in nature.

In philosophy, systems theory, science, and art, emergence occurs when a complex entity has properties or behaviors that its parts do not have on their own, and emerge only when they interact in a wider whole.

Emergence plays a central role in theories of integrative levels and of complex systems. For instance, the phenomenon of life as studied in biology is an emergent property of chemistry and physics.

In philosophy, theories that emphasize emergent properties have been called emergentism.

In philosophy

Philosophers often understand emergence as a claim about the etiology of a system's properties. An emergent property of a system, in this context, is one that is not a property of any component of that system, but is still a feature of the system as a whole. Nicolai Hartmann (1882–1950), one of the first modern philosophers to write on emergence, termed this a categorial novum (new category).

Definitions

This concept of emergence dates from at least the time of Aristotle. In Heideggerian thought, the notion of emergence is derived from the Greek word poiein, meaning "to make", and refers to a bringing-forth that encompasses not just a process of crafting (techne) but also the broader sense of something coming into being or revealing itself. Heidegger used emerging blossoms and butterflies as examples to illustrate poiêsis as a threshold event where something moves from one state to another. Many scientists and philosophers have written on the concept, including John Stuart Mill (Composition of Causes, 1843) and Julian Huxley (1887–1975).

The philosopher G. H. Lewes coined the term "emergent" in 1875, distinguishing it from the merely "resultant":

Every resultant is either a sum or a difference of the co-operant forces; their sum, when their directions are the same – their difference, when their directions are contrary. Further, every resultant is clearly traceable in its components, because these are homogeneous and commensurable. It is otherwise with emergents, when, instead of adding measurable motion to measurable motion, or things of one kind to other individuals of their kind, there is a co-operation of things of unlike kinds. The emergent is unlike its components insofar as these are incommensurable, and it cannot be reduced to their sum or their difference.

Strong and weak emergence

Usage of the notion "emergence" may generally be subdivided into two perspectives, that of "weak emergence" and "strong emergence". One paper discussing this division is Weak Emergence, by philosopher Mark Bedau. In terms of physical systems, weak emergence is a type of emergence in which the emergent property is amenable to computer simulation or similar forms of after-the-fact analysis (for example, the formation of a traffic jam, the structure of a flock of starlings in flight or a school of fish, or the formation of galaxies). Crucial in these simulations is that the interacting members retain their independence. If not, a new entity is formed with new, emergent properties: this is called strong emergence, which it is argued cannot be simulated, analysed or reduced.
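
Bedau's traffic-jam example can be made concrete with the Rule 184 cellular automaton, a standard minimal traffic model (a sketch, not Bedau's own formalism): each cell's rule is purely local, yet "free flow" versus "jam" is a macro property read off by simulating the whole system.

```python
def step(cells):
    # One synchronous update of cellular automaton Rule 184 on a ring road:
    # a car (1) advances into the cell ahead if it is empty (0), else waits.
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        if cells[i] == 1:
            if cells[(i + 1) % n] == 0:
                nxt[(i + 1) % n] = 1   # free to move
            else:
                nxt[i] = 1             # blocked by the car ahead
    return nxt

# Start from a solid block of 10 cars (a jam) on a 40-cell ring, density 1/4.
road = [1] * 10 + [0] * 30
for _ in range(100):
    road = step(road)

# Below the critical density 1/2, the jam dissolves into free flow:
# after a transient, no car sits directly behind another.
n = len(road)
assert sum(road) == 10                                         # cars conserved
assert not any(road[i] and road[(i + 1) % n] for i in range(n))
```

Above density 1/2, by the pigeonhole principle some car must always sit behind another, so a jam persists indefinitely; the interacting members (cars) retain their independence throughout, which is what marks this as weak emergence.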

David Chalmers writes that emergence often causes confusion in philosophy and science due to a failure to demarcate strong and weak emergence, which are "quite different concepts".

Some common points between the two notions are that emergence concerns new properties produced as the system grows, which is to say ones which are not shared with its components or prior states. Also, it is assumed that the properties are supervenient rather than metaphysically primitive.

Weak emergence describes new properties arising in systems as a result of the interactions at a fundamental level. However, Bedau stipulates that the properties can be determined only by observing or simulating the system, and not by any process of a reductionist analysis. As a consequence the emerging properties are scale dependent: they are only observable if the system is large enough to exhibit the phenomenon. Chaotic, unpredictable behaviour can be seen as an emergent phenomenon, while at a microscopic scale the behaviour of the constituent parts can be fully deterministic.

Bedau notes that weak emergence is not a universal metaphysical solvent, as the hypothesis that consciousness is weakly emergent would not resolve the traditional philosophical questions about the physicality of consciousness. However, Bedau concludes that adopting this view would, first, provide a precise sense in which emergence is involved in consciousness and, second, show that the notion of weak emergence is metaphysically benign.

Strong emergence describes the direct causal action of a high-level system on its components; qualities produced this way are irreducible to the system's constituent parts. The whole is other than the sum of its parts. It is argued then that no simulation of the system can exist, for such a simulation would itself constitute a reduction of the system to its constituent parts. Physics lacks well-established examples of strong emergence, unless it is interpreted as the impossibility in practice to explain the whole in terms of the parts. Practical impossibility may be a more useful distinction than one in principle, since it is easier to determine and quantify, and does not imply the use of mysterious forces, but simply reflects the limits of our capability.

Viability of strong emergence

One reason it is important to distinguish these two concepts is that purported emergent properties of each kind stand in very different relationships to science. Some thinkers question the plausibility of strong emergence as contravening our usual understanding of physics. Mark A. Bedau observes:

Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.

The concern that strong emergence does so entail is that such a consequence must be incompatible with metaphysical principles such as the principle of sufficient reason or the Latin dictum ex nihilo nihil fit, often translated as "nothing comes from nothing".

Strong emergence can be criticized for leading to causal overdetermination. The canonical example concerns emergent mental states (M and M∗) that supervene on physical states (P and P∗) respectively. Let M and M∗ be emergent properties. Let M∗ supervene on base property P∗. What happens when M causes M∗? Jaegwon Kim says:

In our schematic example above, we concluded that M causes M∗ by causing P∗. So M causes P∗. Now, M, as an emergent, must itself have an emergence base property, say P. Now we face a critical question: if an emergent, M, emerges from basal condition P, why cannot P displace M as a cause of any putative effect of M? Why cannot P do all the work in explaining why any alleged effect of M occurred? If causation is understood as nomological (law-based) sufficiency, P, as M's emergence base, is nomologically sufficient for it, and M, as P∗'s cause, is nomologically sufficient for P∗. It follows that P is nomologically sufficient for P∗ and hence qualifies as its cause...If M is somehow retained as a cause, we are faced with the highly implausible consequence that every case of downward causation involves overdetermination (since P remains a cause of P∗ as well). Moreover, this goes against the spirit of emergentism in any case: emergents are supposed to make distinctive and novel causal contributions.

If M is the cause of M∗, then M∗ is overdetermined because M∗ can also be thought of as being determined by P. One escape-route that a strong emergentist could take would be to deny downward causation. However, this would remove the proposed reason that emergent mental states must supervene on physical states, which in turn would call physicalism into question, and thus be unpalatable for some philosophers and physicists.

Carroll and Parola propose a taxonomy that classifies emergent phenomena by how the macro-description relates to the underlying micro-dynamics.

Type‑0 (Featureless) Emergence

A coarse-graining map Φ from a micro state space A to a macro state space B that commutes with time evolution, without requiring any further decomposition into subsystems.
Type‑1 (Local) Emergence

Emergence where the macro theory is defined in terms of localized collections of micro-subsystems. This category is subdivided into:
Type‑1a (Direct) Emergence: When the emergence map Φ is algorithmically simple (i.e. compressible), so that the macro behavior is easily deduced from the micro-states.
Type‑1b (Incompressible) Emergence: When Φ is algorithmically complex (i.e. incompressible), making the macro behavior appear more novel despite being determined by the micro-dynamics.
Type‑2 (Nonlocal) Emergence

Cases in which both the micro and macro theories admit subsystem decompositions, yet the macro entities are defined nonlocally with respect to the micro-structure, meaning that macro behavior depends on widely distributed micro information.
Type‑3 (Augmented) Emergence

A form of strong emergence in which the macro theory introduces additional ontological variables that do not supervene on the micro-states, thereby positing genuinely novel macro-level entities.
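The defining condition of Type-0 emergence, a coarse-graining map Φ that commutes with time evolution, can be illustrated with a minimal sketch (the toy system and function names here are illustrative assumptions, not from Carroll and Parola):

```python
# Toy Type-0 emergence: micro states are free particles (position, velocity);
# the coarse-graining map phi keeps only the centre of mass and mean velocity.
# Commutation: coarse-graining then evolving gives the same macro state as
# evolving then coarse-graining.

def micro_step(state, dt=1.0):
    # Micro dynamics: each particle drifts at its own velocity.
    return [(x + v * dt, v) for (x, v) in state]

def phi(state):
    # Coarse-graining map: macro state = (centre of mass, mean velocity).
    n = len(state)
    return (sum(x for x, _ in state) / n, sum(v for _, v in state) / n)

def macro_step(macro, dt=1.0):
    # Induced macro dynamics: the centre of mass drifts at the mean velocity.
    com, v_mean = macro
    return (com + v_mean * dt, v_mean)

micro = [(0.0, 1.0), (2.0, -0.5), (5.0, 0.25)]
a = macro_step(phi(micro))   # coarse-grain, then evolve
b = phi(micro_step(micro))   # evolve, then coarse-grain
assert all(abs(p - q) < 1e-12 for p, q in zip(a, b))
```

Type-1 emergence would additionally require Φ to be built from localized micro-subsystems, while Type-3 (strong) emergence would fail this test, since the macro theory would involve variables not determined by the micro states at all.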

Objective or subjective quality

Crutchfield regards the properties of complexity and organization of any system as subjective qualities determined by the observer.

Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analysed in terms of how model-building observers infer from measurements the computational capabilities embedded in non-linear processes. An observer's notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer's chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data.

The low entropy of an ordered system can be viewed as an example of subjective emergence: the observer sees an ordered system by ignoring the underlying microstructure (i.e. the movement of molecules or elementary particles) and concludes that the system has low entropy. Conversely, chaotic, unpredictable behaviour can also be seen as subjectively emergent, even though at a microscopic scale the movement of the constituent parts can be fully deterministic.
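This observer-dependence can be made concrete: a fully deterministic micro-dynamics can look almost maximally random to an observer who records only a coarse-grained symbol. A minimal sketch (the choice of map and partition is illustrative):

```python
from collections import Counter
from math import log2

def logistic(x):
    # Fully deterministic micro-dynamics: the chaotic logistic map at r = 4.
    return 4.0 * x * (1.0 - x)

def entropy(symbols):
    # Shannon entropy in bits per symbol of the observed sequence.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

x = 0.123456
symbols = []
for _ in range(10_000):
    x = logistic(x)
    symbols.append(0 if x < 0.5 else 1)  # coarse-grained observation

# To an observer who sees only the coarse symbol, the sequence looks
# nearly maximally random (close to 1 bit per symbol), even though the
# underlying dynamics is deterministic.
print(f"coarse-grained entropy: {entropy(symbols):.3f} bits/symbol")
```

An observer with more resources, one who tracks the real-valued state itself, would instead see a perfectly predictable system, which is Crutchfield's point about computational resources shaping what counts as random.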

In science

In physics, weak emergence is used to describe a property, law, or phenomenon which occurs at macroscopic scales (in space or time) but not at microscopic scales, despite the fact that a macroscopic system can be viewed as a very large ensemble of microscopic systems.

An emergent behavior of a physical system is a qualitative property that can only occur in the limit that the number of microscopic constituents tends to infinity.
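A standard toy illustration of this (not tied to any particular material) is a two-phase free energy: for any finite number of constituents N it is a smooth function of its parameters, but in the limit N → ∞ it develops a kink, a qualitative feature absent at every finite size:

```python
from math import exp, log

def free_energy_per_site(a, b, N):
    # Toy two-phase model: f_N = (1/N) * log(e^{N a} + e^{N b}).
    # For finite N this is smooth in (a, b); as N -> infinity it converges
    # to max(a, b), which is non-smooth (kinked) at a == b. The kink, a
    # stand-in for a phase transition, exists only in the infinite limit.
    m = max(a, b)  # log-sum-exp trick for numerical stability
    return m + log(exp(N * (a - m)) + exp(N * (b - m))) / N

for N in (1, 10, 100, 10_000):
    print(N, free_energy_per_site(0.49, 0.51, N))
# The values approach max(0.49, 0.51) = 0.51 as N grows.
```

This is the same structural point made below for phase transitions and the renormalization group: the sharp macroscopic feature is strictly a property of the infinite ensemble, yet it organizes our understanding of large finite systems.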

According to Robert Laughlin, for many-particle systems, nothing can be calculated exactly from the microscopic equations, and macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, due to phase transitions. As a result, these macroscopic systems are described in their own terminology, and have properties that do not depend on many microscopic details.

Novelist Arthur Koestler used the metaphor of Janus (a symbol of the unity underlying complements like open/shut, peace/war) to illustrate how the two perspectives (strong vs. weak or holistic vs. reductionistic) should be treated as non-exclusive, and should work together to address the issues of emergence. Theoretical physicist Philip W. Anderson states it this way:

The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts.

Meanwhile, others have worked towards developing analytical evidence of strong emergence. Renormalization methods in theoretical physics enable physicists to study critical phenomena that are not tractable as the combination of their parts. In 2009, Gu et al. presented a class of infinite physical systems that exhibits non-computable macroscopic properties. More precisely, if one could compute certain macroscopic properties of these systems from their microscopic description, then one would be able to solve computational problems known to be undecidable in computer science. These results concern infinite systems, finite systems being considered computable. However, macroscopic concepts which apply only in the limit of infinite systems, such as phase transitions and the renormalization group, are important for understanding and modeling real, finite physical systems. Gu et al. conclude:

Although macroscopic concepts are essential for understanding our world, much of fundamental physics has been devoted to the search for a 'theory of everything', a set of equations that perfectly describe the behavior of all fundamental particles. The view that this is the goal of science rests in part on the rationale that such a theory would allow us to derive the behavior of all macroscopic concepts, at least in principle. The evidence we have presented suggests that this view may be overly optimistic. A 'theory of everything' is one of many components necessary for complete understanding of the universe, but is not necessarily the only one. The development of macroscopic laws from first principles may involve more than just systematic logic, and could require conjectures suggested by experiments, simulations or insight.

Recent developments in theoretical physics have explored strong emergence through intrinsic mechanisms for the quantum-to-classical transition. In the Theory of Emergent Motion, Gheorghe (2025) proposes that classical directional motion emerges as a probabilistic resolution beyond a discrete temporal threshold T0, where quantum path uncertainty transitions to deterministic trajectories via a switching function F(Δt) = 1 − e^{−Δt/T0}, reinterpreting the Feynman path integral over finite histories without relying on decoherence or measurement collapse. Similarly, Prakash's Vibrational Dynamics framework (2025) describes the emergence of classical spacetime curvature from standing-wave patterns in vibrational fields generated by quantum fluctuations interacting with a foam-like spacetime structure, modulated by a curvature-dependent logarithmic suppression function S(R) = 1 / log(1 + 1/(R L_p^2)) that governs coherence and leads to the Quantum Equivalence Principle, unifying quantum and classical behaviors geometrically. Further work by Gheorghe et al. (2025) synthesizes entropic stochastic resonance in Brownian transport with foundational quantum models like ToEMEDFPM and EBM, alongside objective-collapse theories such as Spontaneous Unitarity Violation and Continuous Spontaneous Localisation, deriving extensions to colored noise and non-Markovian fluctuation-dissipation relations to integrate a stochastic Schrödinger equation for joint position-momentum measurement, suggesting that entropic mechanisms drive quantum state transitions in stochastic geometries. These approaches suggest that macroscopic laws may involve non-computable elements from microscopic quantum descriptions, complementing earlier work on undecidability in physical systems.

In humanity

Human beings are the basic elements of social systems, which perpetually interact and create, maintain, or untangle mutual social bonds. Social bonds in social systems are perpetually changing in the sense of the ongoing reconfiguration of their structure. An early argument (1904–05) for the emergence of social formations can be found in Max Weber's most famous work, The Protestant Ethic and the Spirit of Capitalism. More recently, the emergence of a new social system has been linked with the emergence of order from nonlinear relationships among multiple interacting units, where the interacting units are individual thoughts, consciousness, and actions. In the case of the global economic system, under capitalism, growth, accumulation, and innovation can be considered emergent processes in which not only do technological processes sustain growth, but growth becomes the source of further innovations in a recursive, self-expanding spiral. In this sense, the exponential trend of the growth curve reveals the presence of a long-term positive feedback among growth, accumulation, and innovation, and the emergence of new structures and institutions connected to the multi-scale process of growth. This is reflected in the work of Karl Polanyi, who traces the process by which labor and nature are converted into commodities in the passage from an economic system based on agriculture to one based on industry. This shift, along with the idea of the self-regulating market, set the stage not only for another economy but also for another society. The principle of emergence is also invoked when thinking about alternatives to the current growth-based economic system as it confronts social and ecological limits. Both degrowth and social ecological economics have argued in favor of a co-evolutionary perspective for theorizing about transformations that overcome the dependence of human wellbeing on economic growth.
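The recursive growth-innovation feedback described above can be sketched as a toy simulation (the variable names, parameter values, and functional form are illustrative assumptions, not drawn from the cited literature):

```python
# Toy positive-feedback loop: innovation raises the growth rate, and growth
# in turn funds further innovation, producing an accelerating (self-expanding)
# trajectory rather than linear growth.

def simulate(years=50, base_rate=0.01, feedback=0.05):
    output, innovation = 1.0, 1.0
    history = []
    for _ in range(years):
        rate = base_rate * innovation           # innovation raises the growth rate
        output *= (1.0 + rate)
        innovation += feedback * rate * output  # growth funds new innovation
        history.append(output)
    return history

path = simulate()
# Signature of positive feedback: later yearly increments exceed earlier ones.
assert path[-1] - path[-2] > path[1] - path[0]
```

The point of the sketch is qualitative only: a long-run exponential trend is what one expects when growth and innovation reinforce each other, which is the feedback structure the paragraph attributes to the capitalist growth process.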

Economic trends and patterns which emerge are studied intensively by economists. Within the field of group facilitation and organization development, there have been a number of new group processes that are designed to maximize emergence and self-organization, by offering a minimal set of effective initial conditions. Examples of these processes include SEED-SCALE, appreciative inquiry, Future Search, the world cafe or knowledge cafe, Open Space Technology, and others (Holman, 2010). In international development, concepts of emergence have been used within a theory of social change termed SEED-SCALE to show how standard principles interact to bring forward socio-economic development fitted to cultural values, community economics, and natural environment (local solutions emerging from the larger socio-econo-biosphere). These principles can be implemented utilizing a sequence of standardized tasks that self-assemble in individually specific ways utilizing recursive evaluative criteria.

Looking at emergence in the context of social and systems change invites us to reframe our thinking on parts and wholes and their interrelation. Unlike machines, living systems at all levels of recursion (be it a sentient body, a tree, a family, an organisation, the education system, the economy, the health system, or the political system) are continuously creating themselves. They are continually growing and changing along with their surrounding elements, and are therefore more than the sum of their parts. As Peter Senge and co-authors put forward in the book Presence: Exploring Profound Change in People, Organizations and Society, "as long as our thinking is governed by habit - notably industrial, "machine age" concepts such as control, predictability, standardization, and "faster is better" - we will continue to recreate institutions as they have been, despite their disharmony with the larger world, and the need for all living systems to evolve." While change is predictably constant, it is unpredictable in direction and often occurs at second and nth orders of systemic relationality. Understanding emergence, and what creates the conditions for different forms of emergence to occur, whether insidious or nourishing of vitality, is essential in the search for deep transformations.

The works of Nora Bateson and her colleagues at the International Bateson Institute delve into this. Since 2012, they have been researching questions such as: what makes a living system ready to change? Can unforeseen readiness for change be nourished? Here being ready is not thought of as being prepared, but rather as nourishing the flexibility we do not yet know will be needed. These inquiries challenge the common view that a theory of change is produced from an identified preferred goal or outcome. As explained in their paper An essay on ready-ing: Tending the prelude to change: "While linear managing or controlling of the direction of change may appear desirable, tending to how the system becomes ready allows for pathways of possibility previously unimagined." This brings a new lens to the field of emergence in social and systems change, as it looks to tending the pre-emergent process. Warm Data Labs are the fruit of their praxis: spaces for transcontextual mutual learning in which aphanipoetic phenomena unfold. Having hosted hundreds of Warm Data processes with thousands of participants, they have found that these spaces of shared poly-learning across contexts lead to a realm of potential change, a necessarily obscured zone of wild interaction of unseen, unsaid, unknown flexibility. It is such flexibility that nourishes the readying living systems require to respond to complex situations in new ways and to change. In other words, this readying process preludes what will emerge. When exploring questions of social change, it is important to ask what is submerging in the current social imaginary and, perhaps, rather than focusing all our resources and energy on driving direct-order responses, to nourish flexibility within ourselves and the systems we are a part of.

Another approach that engages with the concept of emergence for social change is Theory U, where "deep emergence" is the result of self-transcending knowledge after a successful journey along the U through layers of awareness. This practice nourishes transformation at the inner-being level, which enables new ways of being, seeing and relating to emerge. The concept of emergence has also been employed in the field of facilitation. In Emergent Strategy, adrienne maree brown defines emergent strategies as "ways for humans to practice complexity and grow the future through relatively simple interactions".

In linguistics, the concept of emergence has been applied in the domain of stylometry to explain the interrelation between the syntactic structures of a text and the author's style (Slautina, Marusenko, 2014). It has also been argued that the structure and regularity of language grammar, or at least of language change, is an emergent phenomenon. While each speaker merely tries to reach their own communicative goals, they use language in a particular way; if enough speakers behave in that way, the language is changed. In a wider sense, the norms of a language, i.e. the linguistic conventions of its speech community, can be seen as a system emerging from long-term participation in communicative problem-solving in various social circumstances.

In technology

The bulk conductive response of binary (RC) electrical networks with random arrangements, known as the universal dielectric response (UDR), can be seen as an emergent property of such physical systems. Such arrangements can be used as simple physical prototypes for deriving mathematical formulae for the emergent responses of complex systems. Internet traffic can also exhibit seemingly emergent properties. In the congestion-control mechanism, TCP flows can become globally synchronized at bottlenecks, simultaneously increasing and then decreasing throughput in coordination. Congestion, widely regarded as a nuisance, is possibly an emergent property of the spreading of bottlenecks across a network in high-traffic flows, which can be considered a phase transition. Some artificially intelligent (AI) computer applications simulate emergent behavior. One example is Boids, which mimics the swarming behavior of birds.
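Boids-style flocking can be sketched in a few dozen lines. The rule weights and radii below are illustrative assumptions, not Reynolds' original parameters; the point is that each boid follows three purely local rules, and coordinated motion emerges with no global controller:

```python
import random

# Minimal boids sketch: cohesion (steer toward neighbours' centre),
# alignment (match neighbours' mean velocity), separation (avoid crowding).
COHESION, SEPARATION, ALIGNMENT = 0.01, 0.05, 0.05
NEIGHBOR_RADIUS, AVOID_RADIUS = 10.0, 2.0

def step(boids):
    new = []
    for (x, y, vx, vy) in boids:
        neighbors = [b for b in boids
                     if b[:2] != (x, y)
                     and (b[0] - x) ** 2 + (b[1] - y) ** 2 < NEIGHBOR_RADIUS ** 2]
        if neighbors:
            n = len(neighbors)
            cx = sum(b[0] for b in neighbors) / n   # cohesion: local
            cy = sum(b[1] for b in neighbors) / n   # centre of mass
            vx += (cx - x) * COHESION
            vy += (cy - y) * COHESION
            avx = sum(b[2] for b in neighbors) / n  # alignment: neighbours'
            avy = sum(b[3] for b in neighbors) / n  # mean velocity
            vx += (avx - vx) * ALIGNMENT
            vy += (avy - vy) * ALIGNMENT
            for (bx, by, _, _) in neighbors:        # separation: push away
                if (bx - x) ** 2 + (by - y) ** 2 < AVOID_RADIUS ** 2:
                    vx += (x - bx) * SEPARATION
                    vy += (y - by) * SEPARATION
        new.append((x + vx, y + vy, vx, vy))
    return new

random.seed(0)
boids = [(random.uniform(0, 50), random.uniform(0, 50),
          random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(200):
    boids = step(boids)
```

No boid has any representation of "the flock"; clustering and common heading arise solely from the interaction of the three local rules, which is why Boids is a stock example of simulated emergent behavior.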

In religion and art

In religion, emergence grounds expressions of religious naturalism and syntheism in which a sense of the sacred is perceived in the workings of entirely naturalistic processes by which more complex forms arise or evolve from simpler forms. Examples are detailed in The Sacred Emergence of Nature by Ursula Goodenough & Terrence Deacon and Beyond Reductionism: Reinventing the Sacred by Stuart Kauffman, both from 2006, as well as Syntheism – Creating God in The Internet Age by Alexander Bard & Jan Söderqvist from 2014 and Emergentism: A Religion of Complexity for the Metamodern World by Brendan Graham Dempsey (2022).

Michael J. Pearce has used emergence to describe the experience of works of art in relation to contemporary neuroscience. Practicing artist Leonel Moura, in turn, attributes to his "artbots" a real, if nonetheless rudimentary, creativity based on emergent principles.

In daily life and nature

Objects consist of components with properties differing from those of the object itself. We call these properties emergent because they did not exist at the component level. The same applies to artifacts (structures, devices, tools, and even works of art): they are created for a specific purpose and are therefore subjectively emergent; someone who does not understand the purpose cannot use them.

The artifact is the result of an invention: through a clever combination of components, something new is created with emergent properties and functionalities. Such an invention is often difficult to predict and is therefore usually based on a chance discovery. An invention based on discovery is often improved through a feedback loop, making it more applicable; this is an example of downward causation.

Example 1: A hammer is a combination of a head and a handle, each with different properties. By cleverly connecting them, the hammer becomes an artifact with new, emergent functionalities. Through downward causation, the head and handle components can be improved in such a way that the hammer's functionality increases. Example 2: A mixture of tin and copper produces the alloy bronze, with new, emergent properties (hardness, a lower melting temperature). Finding the correct ratio of tin to copper is an example of downward causation. Example 3: Finding the right combination of chemicals to create a superconductor at high temperatures (i.e. room temperature) is a great challenge for many scientists, in which chance plays a significant role. Conversely, however, the properties of all these invented artifacts can be readily explained reductionistically.

Something similar occurs in nature: only rarely do random genetic mutations create a creature with new, emergent properties that increase its chances of survival in a changing ecosystem. This is how evolution works. Here too, through downward causation, new creatures can sometimes manipulate their ecosystem in such a way that their chances of survival are further increased.

In both artifacts and living beings, certain components can be crucial to the emergent end result: the end result supervenes on these components. Examples include: a construction error, a bug in a software program, an error in the genetic code, or the absence of a particular gene.

Both aspects, supervenience and the unpredictability of the emergent result, are characteristic of strong emergence (see above). (This definition, however, differs significantly from the definition in the philosophical literature.)
