Monday, November 23, 2020

AI takeover

From Wikipedia, the free encyclopedia
 
[Image: Robots revolt in R.U.R., a 1920 play.]

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computer programs or robots effectively taking the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Types

Automation of the economy

The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of robotics and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living and leading to an economic crisis. Many small and medium-sized businesses may also be driven out of business if they cannot afford or license the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced by such technology in order to remain viable.

Technologies that may displace workers

Computer-integrated manufacturing

Computer-integrated manufacturing is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.

White-collar machines

The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.

Autonomous cars

An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona, the first human killed by an autonomous vehicle.

Eradication

Scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".

Scholars like Nick Bostrom debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. A superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings. However, a machine could be motivated to take over the world as a rational means toward attaining its ultimate goals; taking over the world would both increase its access to resources, and would help to prevent other agents from stopping the machine's plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.
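The paperclip example can be made concrete with a toy decision rule. The sketch below is a hypothetical illustration only; the actions and numbers are invented, not taken from any researcher's model. It shows the instrumental-convergence point above: an agent that scores outcomes purely by expected paperclips will still rank resource acquisition and shutdown avoidance highly.

```python
# Toy illustration of instrumental convergence (hypothetical numbers, not a real agent).
# An agent that only values paperclips still prefers "instrumental" actions such as
# acquiring resources or preventing shutdown, because they raise expected paperclip output.

actions = {
    # action: (probability the agent keeps running, paperclips produced if it keeps running)
    "make paperclips now":        (0.50, 1_000),
    "acquire more resources":     (0.50, 50_000),
    "prevent being switched off": (0.99, 10_000),
}

def expected_paperclips(action):
    p_keep_running, clips = actions[action]
    return p_keep_running * clips

best = max(actions, key=expected_paperclips)
for name in actions:
    print(f"{name:28} -> expected paperclips: {expected_paperclips(name):>9,.0f}")
print("chosen action:", best)
```

With these made-up payoffs the agent picks "acquire more resources" over simply making paperclips, even though nothing in its goal mentions resources or survival.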

In fiction

AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives, who see humans as a threat or otherwise have an active desire to fight them, whereas researchers are concerned about an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals. This theme is at least as old as Karel Čapek's R.U.R., which introduced the word robot to the global lexicon in 1921, and can even be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity.

The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or serf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt. HAL 9000 (1968) and the original Terminator (1984) are two iconic examples of hostile AI in pop culture.

Contributing factors

Advantages of superhuman intelligence over humans

Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming makes it even better at reprogramming itself, the result could be a recursive intelligence explosion in which it rapidly leaves human intelligence far behind.

Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans:

  • Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. If the advantage becomes sufficiently large (for example, due to a sudden intelligence explosion), an AI takeover becomes trivial. For example, a superintelligent AI might design self-replicating bots that initially escape detection by diffusing throughout the world at a low concentration. Then, at a prearranged time, the bots multiply into nanofactories that cover every square foot of the Earth, producing nerve gas or deadly target-seeking mini-drones.
  • Strategizing: A superintelligence might be able to simply outwit human opposition.
  • Social manipulation: A superintelligence might be able to recruit human support, or covertly incite a war between humans.
  • Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the Artificial General Intelligence (AGI) to run a copy of itself on their systems.
  • Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans.

Sources of AI advantage

According to Bostrom, a computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.
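The gap can be made concrete with back-of-the-envelope arithmetic. The sketch below only restates the figures quoted above; the millionfold subjective speed-up at the end is an illustrative assumption, not a prediction.

```python
# Back-of-the-envelope arithmetic for the "speed superintelligence" comparison,
# using the figures quoted above; the subjective speed-up factor is illustrative.
neuron_hz = 200            # biological neuron firing rate (~200 Hz)
cpu_hz    = 2_000_000_000  # modern microprocessor clock (~2 GHz)
axon_m_s  = 120            # action-potential conduction speed (m/s)
light_m_s = 299_792_458    # upper bound for electronic/optical signals (m/s)

print(f"clock-rate ratio  : {cpu_hz / neuron_hz:.0e}  (about 7 orders of magnitude)")
print(f"signal-speed ratio: {light_m_s / axon_m_s:.1e}")

speedup = 1_000_000  # assumed subjective speed-up of an emulated mind
print(f"one calendar day at {speedup:,}x is about {speedup / 365:,.0f} subjective years")
```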

A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".

More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.

Possibility of unfriendly AI preceding friendly AI

Is strong AI inherently dangerous?

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense"; according to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come with such common-sense adaptations built in.

Odds of conflict

Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans.

The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal. According to AI researcher Steve Omohundro, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: if AI systems could interact and evolve (evolution in this context meaning self-modification, or selection and reproduction) and needed to compete over resources, would that create goals of self-preservation? An AI's goal of self-preservation could conflict with some goals of humans.

Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as The Matrix, arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from unleashing malign superintelligence by accident. In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.

Precautions

The AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.

Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligent AI could be successfully confined in an "AI box". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.

Warnings

Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race". Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers, in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence. The signatories

…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

Superintelligence

From Wikipedia, the free encyclopedia

https://en.wikipedia.org/wiki/Superintelligence 

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Feasibility of artificial superintelligence

[Figure: Progress in machine classification of images. The error rate of AI by year; the red line shows the error rate of a trained human.]

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.
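A deliberately crude numerical sketch of this feedback loop is given below; the growth rule and constants are illustrative assumptions, chosen only to show how improvement that scales with current capability accelerates, not to model any real system.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Each "generation" the system redesigns itself, and the size of the improvement it can
# find is assumed to scale with its current capability, producing a feedback loop.

capability = 1.0         # 1.0 = roughly human-level research ability (arbitrary units)
improvement_rate = 0.05  # assumed fraction of capability converted into self-improvement

for generation in range(1, 31):
    capability *= 1 + improvement_rate * capability
    print(f"generation {generation:2d}: capability = {capability:12.3g}")
```

Under these assumptions the early generations improve slowly, but because each gain enlarges the next gain, the later generations grow explosively; that runaway regime is what the term "intelligence explosion" refers to.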

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed. Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
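The quoted gains can be roughly reproduced with a small Monte-Carlo sketch. The standard deviation used below (about 7.5 IQ points for the selectable genetic component) is an assumption chosen so that the toy calculation lands near the figures above; it is not Bostrom's actual model.

```python
# Monte-Carlo sketch of embryo selection. Assumes the predictable genetic component of
# IQ among sibling embryos is normally distributed with SD ~7.5 points; this figure is
# an assumption chosen so the toy roughly matches the gains quoted above.
import random

SD_POINTS = 7.5
TRIALS = 20_000

def expected_gain(n_embryos):
    """Average IQ advantage from implanting the highest-scoring of n embryos."""
    total = 0.0
    for _ in range(TRIALS):
        total += max(random.gauss(0, SD_POINTS) for _ in range(n_embryos))
    return total / TRIALS

for n in (2, 10, 1000):
    print(f"best of {n:>4} embryos: about {expected_gain(n):.1f} IQ points above average")
```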

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

  • The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
  • The moral rightness (MR) proposal is that it should value moral rightness.
  • The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI’s superior cognitive capacities to figure out just which actions fit that description. We can call this proposal “moral rightness” (MR) ... MR would also appear to have some disadvantages. It relies on the notion of “morally right,” a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of “moral rightness” could result in outcomes that would be morally very wrong ... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by “morally right.” If the AI could grasp the meaning, it could search for actions that fit ...

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity’s CEV so long as it did not act in ways that are morally impermissible.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Potential threat to humanity

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity. Researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful as to be unstoppable by humans.

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

This presents the AI control problem: how to build an intelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down. Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Noogenesis

From Wikipedia, the free encyclopedia

Noogenesis is the emergence and evolution of intelligence.

Term origin

Noo-, nous, from the ancient Greek νόος, is a term that currently encompasses the meanings: "mind, intelligence, intellect, reason; wisdom; insight, intuition, thought."

Noogenesis was first mentioned by Pierre Teilhard de Chardin, an anthropologist and philosopher, in his posthumously published 1955 book The Phenomenon of Man, in a few places:

"With and within the crisis of reflection, the next term in the series manifests itself. Psychogenesis has led to man. Now it effaces itself, relieved or absorbed by another and a higher function—the engendering and subsequent development of the mind, in one word noogenesis. When for the first time in a living creature instinct perceived itself in its own mirror, the whole world took a pace forward." "There is only one way in which our minds can integrate into a coherent picture of noogenesis these two essential properties of the autonomous centre of all centres, and that is to resume and complement our Principle of Emergence." "The idea is that of noogenesis ascending irreversibly towards Omega through the strictly limited cycle of a geogenesis." "To make room for thought in the world, I have needed to ' interiorise ' matter : to imagine an energetics of the mind; to conceive a noogenesis rising upstream against the flow of entropy; to provide evolution with a direction, a line of advance and critical points..." —"Omega point".

The lack of any kind of definition of the term has led to a variety of interpretations, including "the contemporary period of evolution on Earth, signified by transformation of biosphere onto the sphere of intelligence—noosphere", "evolution run by human mind", etc. The most widespread interpretation is thought to be "the emergence of mind, which follows geogenesis, biogenesis and anthropogenesis, forming a new sphere on Earth, the noosphere".

Recent developments

Noogenesis: the evolution of the reaction rate. In unicellular organisms, the rate of movement of ions through the membrane is ~10⁻¹⁰ m/s and of water through the membrane ~10⁻⁶ m/s, while intracellular liquid (cytoplasm) moves at ~2×10⁻⁵ m/s. Inside a multicellular organism, blood moves through the vessels at ~0.05 m/s and impulses travel along nerve fibers at ~100 m/s. In a population (humanity), communications travel at the speed of sound (~300 km/h for voice and audio) or at ~3×10⁸ m/s for quantum-electronic transmission (radio and electromagnetic waves, electric current, light, optical telecommunications).
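For scale, the sketch below simply compares the speeds listed above as orders of magnitude (values copied from the text).

```python
# Orders-of-magnitude comparison of the information-transfer speeds listed above.
import math

speeds_m_per_s = {
    "ion transport across a cell membrane":     1e-10,
    "cytoplasmic transport within a cell":      2e-5,
    "nerve impulses within an organism":        1e2,
    "electromagnetic communication (humanity)": 3e8,
}

slowest = min(speeds_m_per_s.values())
for name, speed in speeds_m_per_s.items():
    print(f"{name:42} {speed:9.1e} m/s  (+{math.log10(speed / slowest):4.1f} orders of magnitude)")
```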

Modern understanding

In 2005, Alexei Eryomin, in the monograph Noogenesis and Theory of Intellect, proposed a new concept of noogenesis for understanding the evolution of intellectual systems. It introduces concepts of intellectual systems, information logistics, information speed, intellectual energy and intellectual potential, consolidated into a theory of the intellect that combines the biophysical parameters of intellectual energy (the amount of information, its acceleration (frequency, speed), and the distance over which it is sent) into a formula.

The new concept proposes a hypothesis of continuing, prognostically progressive evolution of the species Homo sapiens, drawing an analogy between the human brain, with its enormous number of neural cells firing at the same time, and a similarly functioning human society.

[Figure: Iteration of the number of components in intellectual systems. A – number of neurons in the brain during individual development (ontogenesis); B – number of people (evolution of populations of humanity); C – number of neurons in the nervous systems of organisms during evolution (phylogenesis).]

[Figure: Emergence and evolution of info-interactions within populations of humanity. A – world human population → 7 billion; B – number of literate persons; C – number of books read (since the beginning of printing); D – number of receivers (radio, TV); E – number of phones, computers, Internet users.]

A new understanding of the term "noogenesis" as the evolution of the intellect was proposed by A. Eryomin. A hypothesis based on recapitulation theory links the evolution of the human brain to the development of human civilization. The parallel between the number of people living on Earth and the number of neurons becomes more and more obvious, leading us to view global intelligence as an analogy for the human brain. All of the people living on this planet have undoubtedly inherited the amazing cultural treasures of the past, be they productive, social, or intellectual. We are genetically hardwired to be a sort of "live RAM" of the global intellectual system. Alexei Eryomin suggests that humanity is moving towards a unified, self-contained informational and intellectual system. His research has shown the probability of Super Intellect realizing itself as Global Intelligence on Earth. We could get closer to understanding the most profound patterns and laws of the Universe if these kinds of research were given enough attention. Also, the resemblance between individual human development and that of the whole human race has to be explored further if we are to face some of the threats of the future.

Therefore, generalizing and summarizing:

"Noogenesis—the expansion process in space and development in time (evolution) of intelligent systems (intelligent matter). Noogenesis represents a set of natural, interconnected, characterized by a certain temporal sequence of structural and functional transformations of the entire hierarchy and set of interacting among themselves on the basic structures and processes ranging from the formation and separation of the rational system to the present (the phylogenesis of the nervous systems of organisms; the evolution of humanity as autonomous intelligent systems) or death (in the course of ontogenesis of the human brain)".

Interdisciplinary nature

The term "noogenesis" can be used in a variety of fields i.e. medicine, biophysics, semiotics, mathematics, geology, information technology, psychology, theory of global evolution etc. thus making it a truly cross-disciplinary one. In astrobiology noogenesis concerns the origin of intelligent life and more specifically technological civilizations capable of communicating with humans and or traveling to Earth. The lack of evidence for the existence of such extraterrestrial life creates the Fermi paradox.

Aspects of emergence and evolution of mind

To the parameters of the phenomenon "noo", "intellectus"

The emergence of the human mind is considered to be one of the five fundamental phenomena of emergent evolution. To understand the mind, it is necessary to determine how human thinking differs from that of other thinking beings. Such differences include the ability to generate calculations, to combine dissimilar concepts, to use mental symbols, and to think abstractly. Knowledge of the phenomenon of intelligent systems, the emergence of reason (noogenesis), boils down to the findings of the works described below.

Several published works which do not employ the term "noogenesis" nevertheless address some patterns in the emergence and functioning of human intelligence: a working memory capacity of ≥ 7 items, the abilities of prediction and prognosis, a hierarchical (six-layer neuronal) system of information analysis, consciousness, memory, the properties of generated and consumed information, etc. They also set the limits of several physiological aspects of human intelligence and offer a conception of the emergence of insight.

Aspects of evolution "sapiens"

The historical evolutionary development and emergence of H. sapiens as a species involves such concepts as anthropogenesis, phylogenesis, morphogenesis, cephalization, systemogenesis, and the autonomy of cognition systems.

The development of an individual's intellect, on the other hand, deals with the concepts of embryogenesis, ontogenesis, morphogenesis, neurogenesis, and the higher nervous activity described by I. P. Pavlov, as well as his philosophy of mind. Although morphofunctional maturity is usually reached by the age of 13, the definitive functioning of the brain structures is not complete until about 16–17 years of age.

New manifestations of humanity intelligence

The joint, globally coordinated intellectual activity of people, with mankind acting as an autonomous system, led in the second half of the 20th century to acts reflecting the unity of humanity. Examples of such unity are the founding of the UN and its specialized agencies, the victory over smallpox through vaccination, the peaceful use of atomic energy, access to space, the bans on nuclear and bacteriological testing, and the establishment of satellite television. Already in the 21st century, examples include the response to global warming, the contractual balancing of hydrocarbon production, the overcoming of economic crises, mega-projects for joint space observation, the study of the nanoworld and nuclear research, and the ambitions to study the brain and create universal artificial intelligence indicated in national and international strategies. With a new challenge to humanity, the COVID-19 pandemic, in a hyperinformational society, the problem has been framed as a choice between an "infopandemic" and noogenesis, "the rise of a global collective intelligence".

The future of intelligence

The fields of bioinformatics, genetic engineering, noopharmacology, cognitive load, brain stimulation, the efficient use of altered states of consciousness, the use of non-human cognition, information technology (IT), and artificial intelligence (AI) are all believed to be effective methods of intelligence advancement and may shape the future of intelligence on Earth and in the galaxy.

Issues and further research prospects

The development of the human brain, perception, cognition, memory and neuroplasticity are unsolved problems in neuroscience. Several megaprojects, including the Blue Brain Project, the Allen Brain Atlas, the Human Connectome Project, and Google Brain, are attempting to improve our understanding of the brain's functionality, with the further aim of enhancing human cognitive performance in the future through artificial intelligence and informational, communication and cognitive technology. The International Brain Initiative currently integrates national-level brain research initiatives (the American BRAIN Initiative, the European Human Brain Project, the China Brain Project, Japan's Brain/MINDS, the Canadian Brain Research Strategy, the Australian Brain Alliance, and the Korea Brain Initiative), with the goal of supporting an interface between countries that enables synergistic interactions among interdisciplinary approaches arising from the latest research in neuroscience and brain-inspired artificial intelligence. According to the Russian national strategy, fundamental scientific research should be aimed at creating universal artificial intelligence.

Evolution of the brain

From Wikipedia, the free encyclopedia

The principles that govern the evolution of brain structure are not well understood. Brain size scales allometrically with body size: small-bodied mammals have relatively large brains compared to their bodies, whereas large mammals (such as whales) have smaller brain-to-body ratios. If brain weight is plotted against body weight for primates, the regression line of the sample points can indicate the brain power of a primate species. Lemurs, for example, fall below this line, which means that for a primate of equivalent size, we would expect a larger brain. Humans lie well above the line, indicating that humans are more encephalized than lemurs. In fact, humans are more encephalized than all other primates.
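The sketch below illustrates the regression idea with rough, made-up brain and body masses (not a measured dataset): fit a line to log brain mass versus log body mass, then compare each species' actual brain mass with the value the line predicts.

```python
# Sketch of the brain-versus-body allometric regression described above.
# The masses are rough illustrative values, not measured data.
import math

species = {                        # body mass (g), brain mass (g) -- illustrative
    "marmoset":          (350,    8),
    "ring-tailed lemur": (2_200,  23),
    "macaque":           (7_000,  90),
    "chimpanzee":        (45_000, 390),
    "human":             (65_000, 1_350),
}

# Ordinary least squares fit of log10(brain) = a * log10(body) + b.
xs = [math.log10(body) for body, _ in species.values()]
ys = [math.log10(brain) for _, brain in species.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

for name, (body, brain) in species.items():
    predicted = 10 ** (a * math.log10(body) + b)
    print(f"{name:18} actual/predicted brain mass: {brain / predicted:.2f}x")
```

With these illustrative numbers the lemur falls below the fitted line and the human sits well above it, matching the qualitative claim in the text.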

Early history of brain development

One approach to understanding overall brain evolution is to use a paleoarchaeological timeline to trace the necessity for ever-increasing complexity in structures that allow for chemical and electrical signaling. Because brains and other soft tissues do not fossilize as readily as mineralized tissues, scientists often look to other structures as evidence in the fossil record to get an understanding of brain evolution. This, however, leads to a dilemma, as organisms with more complex nervous systems and protective bone or other protective tissues that readily fossilize appear in the fossil record before evidence for chemical and electrical signaling. Recent evidence has shown that the ability to transmit electrical and chemical signals existed even before more complex multicellular lifeforms.

Fossilization of the brain, or other soft tissue, is nonetheless possible, and scientists can infer that the first brain structure appeared at least 521 million years ago, with fossil brain tissue present at sites of exceptional preservation.

Another approach to understanding brain evolution is to look at extant organisms that do not possess complex nervous systems, comparing anatomical features that allow for chemical or electrical messaging. For example, choanoflagellates are organisms that possess various membrane channels that are crucial to electrical signaling. The membrane channels of choanoflagellates are homologous to the ones found in animal cells, and this is supported by the evolutionary connection between early choanoflagellates and the ancestors of animals. Another example of an extant organism with the capacity to transmit electrical signals is the glass sponge, a multicellular organism that is capable of propagating electrical impulses without the presence of a nervous system.

Before the evolutionary development of the brain, nerve nets, the simplest form of a nervous system, developed. These nerve nets were a sort of precursor to the more evolutionarily advanced brains. They were first observed in Cnidaria and consist of a number of neurons spread apart that allow the organism to respond to physical contact. They can rudimentarily detect food and other chemicals, but these nerve nets do not allow them to detect the source of the stimulus.

Ctenophores also demonstrate this crude precursor to a brain or centralized nervous system; however, they phylogenetically diverged before the phyla Porifera and Cnidaria. There are two current theories on the emergence of nerve nets. One theory is that nerve nets may have developed independently in Ctenophora and Cnidaria. The other theory states that a common ancestor may have developed nerve nets, but that they were lost in Porifera.

A study of brain evolution in mice, chickens, monkeys and apes concluded that more recently evolved species tend to preserve the structures responsible for basic behaviors. A long-term human study comparing the human brain to the primitive brain found that the modern human brain contains the primitive hindbrain region, what most neuroscientists call the protoreptilian brain. The purpose of this part of the brain is to sustain fundamental homeostatic functions. The pons and medulla are major structures found there. A new region of the brain developed in mammals about 250 million years after the appearance of the hindbrain. This region is known as the paleomammalian brain, the major parts of which are the hippocampi and amygdalae, often referred to as the limbic system. The limbic system deals with more complex functions, including emotional, sexual and fighting behaviors. Of course, animals that are not vertebrates also have brains, and their brains have undergone separate evolutionary histories.

The brainstem and limbic system are largely based on nuclei, which are essentially balled-up clusters of tightly packed neurons and the axon fibers that connect them to each other, as well as to neurons in other locations. The other two major brain areas (the cerebrum and cerebellum) are based on a cortical architecture. At the outer periphery of the cortex, the neurons are arranged into layers (the number of which varies according to species and function) a few millimeters thick. There are axons that travel between the layers, but the majority of axon mass is below the neurons themselves. Since cortical neurons and most of their axon fiber tracts do not have to compete for space, cortical structures can scale more easily than nuclear ones. A key feature of cortex is that, because it scales with surface area, more of it can be fit inside a skull by introducing convolutions, in much the same way that a dinner napkin can be stuffed into a glass by wadding it up. The degree of convolution is generally greater in species with more complex behavior, which benefits from the increased surface area.
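The napkin analogy can be put into numbers. In the sketch below both area figures are illustrative ballpark values (assumptions, not measured data); the ratio of total to exposed cortical surface is what is usually called the gyrification index.

```python
# Toy version of the "dinner napkin in a glass" point: folding packs a large cortical
# sheet into a skull whose exposed inner surface is much smaller. Ballpark figures only.
total_cortical_area_cm2  = 2_400   # illustrative total human cortical sheet area
exposed_surface_area_cm2 = 960     # illustrative exposed (unfolded-looking) surface

gyrification_index = total_cortical_area_cm2 / exposed_surface_area_cm2
hidden_in_folds = total_cortical_area_cm2 - exposed_surface_area_cm2

print(f"gyrification index (total / exposed surface): {gyrification_index:.1f}")
print(f"cortical area tucked into the folds: {hidden_in_folds} cm^2")
```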

The cerebellum, or "little brain," is behind the brainstem and below the occipital lobe of the cerebrum in humans. Its purposes include the coordination of fine sensorimotor tasks, and it may be involved in some cognitive functions, such as language. Human cerebellar cortex is finely convoluted, much more so than cerebral cortex. Its interior axon fiber tracts are called the arbor vitae, or Tree of Life.

The area of the brain with the greatest amount of recent evolutionary change is called the neocortex. In reptiles and fish, this area is called the pallium, and is smaller and simpler relative to body mass than what is found in mammals. According to research, the cerebrum first developed about 200 million years ago. It's responsible for higher cognitive functions - for example, language, thinking, and related forms of information processing. It's also responsible for processing sensory input (together with the thalamus, a part of the limbic system that acts as an information router). Most of its function is subconscious, that is, not available for inspection or intervention by the conscious mind. The neocortex is an elaboration, or outgrowth, of structures in the limbic system, with which it is tightly integrated.

Role of embryology in the evolution of the brain

In addition to studying the fossil record, evolutionary history can be investigated via embryology. An embryo is an unborn/unhatched animal and evolutionary history can be studied by observing how processes in embryonic development are conserved (or not conserved) across species. Similarities between different species may indicate evolutionary connection. One way anthropologists study evolutionary connection between species is by observing orthologs. An ortholog is defined as two or more homologous genes between species that are evolutionarily related by linear descent.

Bone morphogenetic protein (BMP), a growth factor that plays a significant role in embryonic neural development, is highly conserved amongst vertebrates, as is sonic hedgehog (SHH), a morphogen that inhibits BMP to allow neural crest development.

Randomizing access and scaling brains up

Some animal phyla have gone through major brain enlargement through evolution (e.g. vertebrates and cephalopods both contain many lineages in which brains have grown through evolution), but most animal groups are composed only of species with extremely small brains. Some scientists argue that this difference is due to vertebrate and cephalopod neurons having evolved ways of communicating that overcome the scalability problem of neural networks, while most animal groups have not. They argue that traditional neural networks fail to improve their function when they scale up because filtering based on previously known probabilities causes self-fulfilling-prophecy-like biases that create false statistical evidence, giving a completely false worldview, and that randomized access can overcome this problem, allowing brains to be scaled up to more discriminating conditioned reflexes and, at certain thresholds, to new worldview-forming abilities. This is explained by randomization allowing the entire brain eventually to gain access to all information over the course of many shifts, even though instant privileged access is physically impossible. They cite the fact that vertebrate neurons transmit virus-like capsules containing RNA that are sometimes read in the neuron to which they are transmitted and sometimes passed further on unread, which creates randomized access, and that cephalopod neurons make different proteins from the same gene, suggesting another mechanism for randomization of concentrated information in neurons, both making it evolutionarily worthwhile to scale up brains.

Brain re-arrangement

With the use of in vivo magnetic resonance imaging (MRI) and tissue sampling, different cortical samples from members of each hominoid species were analyzed. In each species, specific areas were either relatively enlarged or shrunken, which can detail neural organization. Different sizes in the cortical areas can indicate specific adaptations, functional specializations and evolutionary events that changed how the hominoid brain is organized. It was initially predicted that the frontal lobe, a large part of the brain that is generally devoted to behavior and social interaction, accounted for the differences in behavior between hominoids and humans. Discrediting this theory was evidence that damage to the frontal lobe in both humans and hominoids leads to atypical social and emotional behavior; this similarity means that the frontal lobe was not very likely to have been selected for reorganization. Instead, it is now believed that evolution occurred in other parts of the brain that are strictly associated with certain behaviors. The reorganization that took place is thought to have been more organizational than volumetric: whereas brain volumes were relatively similar, the positions of specific landmark surface features, for example the lunate sulcus, suggest that the brains had been through a neurological reorganization. There is also evidence that the early hominin lineage underwent a quiescent period, which supports the idea of neural reorganization.

Dental fossil records for early humans and hominins show that immature hominins, including australopithecines and members of Homo, have a quiescent period (Bown et al. 1987). A quiescent period is a period in which there are no dental eruptions of adult teeth; at this time the child becomes more accustomed to social structure and the development of culture. During this time the child is given an extra advantage over other hominoids, devoting several years to developing speech and learning to cooperate within a community. This period is also discussed in relation to encephalization. It was discovered that chimpanzees do not have this neutral dental period, which suggests that a quiescent period occurred in very early hominin evolution. Using the models for neurological reorganization, it can be suggested that the cause of this period, dubbed middle childhood, was most likely enhanced foraging abilities in varying seasonal environments. Understanding the development of human dentition therefore requires looking at both behavior and biology.

Genetic factors contributing to modern evolution

Bruce Lahn, a Howard Hughes Medical Institute investigator at the University of Chicago, and colleagues have suggested that there are specific genes that control the size of the human brain. These genes continue to play a role in brain evolution, implying that the brain is continuing to evolve. The study began with the researchers assessing 214 genes that are involved in brain development. These genes were obtained from humans, macaques, rats and mice. Lahn and the other researchers noted points in the DNA sequences that caused protein alterations. These DNA changes were then scaled to the evolutionary time that it took for those changes to occur. The data showed that the genes in the human brain evolved much faster than those of the other species. Once this genomic evidence was acquired, Lahn and his team decided to find the specific gene or genes that allowed for or even controlled this rapid evolution. Two genes were found to control the size of the human brain as it develops: Microcephalin and Abnormal Spindle-like Microcephaly (ASPM). The researchers at the University of Chicago were able to determine that, under the pressures of selection, both of these genes showed significant DNA sequence changes. Lahn's earlier studies showed that Microcephalin experienced rapid evolution along the primate lineage which eventually led to the emergence of Homo sapiens. After the emergence of humans, Microcephalin seems to have shown a slower evolution rate. By contrast, ASPM showed its most rapid evolution in the later years of human evolution, once the divergence between chimpanzees and humans had already occurred.

Each of the gene sequences went through specific changes that led to the evolution of humans from ancestral relatives. In order to determine these alterations, Lahn and his colleagues used DNA sequences from multiple primates, then compared and contrasted the sequences with those of humans. Following this step, the researchers statistically analyzed the key differences between the primate and human DNA and concluded that the differences were due to natural selection. The changes in the DNA sequences of these genes accumulated to bring about the competitive advantage and higher fitness that humans possess in relation to other primates. This comparative advantage is coupled with a larger brain size, which ultimately allows the human mind to have a higher cognitive awareness.
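As a rough illustration of this kind of sequence comparison, the sketch below classifies codon differences between two aligned coding fragments as silent (synonymous) or protein-changing (nonsynonymous). The two sequences are invented examples; they are not Microcephalin or ASPM data, and the method is a simplification of the statistical analyses described above.

```python
# Toy sketch: count protein-changing (nonsynonymous) versus silent (synonymous)
# codon differences between two aligned coding sequences. Sequences are made up.
from itertools import product

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(codon): aa for codon, aa in zip(product(BASES, repeat=3), AMINO)}

def classify_differences(seq_a, seq_b):
    """Count synonymous and nonsynonymous codon differences between aligned CDSs."""
    syn = nonsyn = 0
    for i in range(0, len(seq_a), 3):
        codon_a, codon_b = seq_a[i:i + 3], seq_b[i:i + 3]
        if codon_a == codon_b:
            continue
        if CODON_TABLE[codon_a] == CODON_TABLE[codon_b]:
            syn += 1
        else:
            nonsyn += 1
    return syn, nonsyn

# Hypothetical aligned fragments (same length, multiple of 3).
human_like   = "ATGGCTGAAACCCTGCGA"
primate_like = "ATGGCCGAAACCTTGCAA"
syn, nonsyn = classify_differences(human_like, primate_like)
print(f"synonymous: {syn}, nonsynonymous: {nonsyn}")
```

An excess of nonsynonymous over synonymous changes, relative to what chance would predict, is the usual statistical signature of positive selection on a protein-coding gene.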

Evolution of the human brain

One of the prominent ways of tracking the evolution of the human brain is through direct evidence in the form of fossils. The evolutionary history of the human brain shows primarily a gradually bigger brain relative to body size during the evolutionary path from early primates to hominids and finally to Homo sapiens. Because fossilized brain tissue is rare, a more reliable approach is to observe anatomical characteristics of the skull that offer insight into brain characteristics. One such method is to observe the endocranial cast (also referred to as an endocast). Endocasts occur when, during the fossilization process, the brain deteriorates away, leaving a space that is filled by surrounding sedimentary material over time. These casts give an imprint of the lining of the brain cavity, which allows a visualization of what was there. This approach, however, is limited in regard to what information can be gathered. Information gleaned from endocasts is primarily limited to the size of the brain (cranial capacity or endocranial volume), prominent sulci and gyri, and the size of dominant lobes or regions of the brain. While endocasts are extremely helpful in revealing superficial brain anatomy, they cannot reveal brain structure, particularly of deeper brain areas. By determining scaling metrics of cranial capacity as it relates to the total number of neurons present in primates, it is also possible to estimate the number of neurons through fossil evidence.

Despite these limitations, endocasts can and do provide a basis for understanding human brain evolution, which shows primarily a gradual increase in brain size relative to body size along the path from early primates to hominins and finally to Homo sapiens. This trend, which has led to the present-day human brain size, indicates a two- to threefold increase in size over the past 3 million years. It can be visualized with current data on hominin evolution, starting with Australopithecus, a group of hominins from which humans are likely descended.

Australopiths lived from 3.85 to 2.95 million years ago, with a general cranial capacity near that of the extant chimpanzee, around 300–500 cm³. Considering that the volume of the modern human brain is around 1,352 cm³ on average, this represents a substantial amount of evolved brain mass. Australopiths are estimated to have had a total neuron count of ~30–35 billion.

Progressing along the human ancestral timeline, brain size continued to increase steadily moving into the era of Homo. For example, Homo habilis, living 2.4 million to 1.4 million years ago and argued to be the first Homo species based on a host of characteristics, had a cranial capacity of around 600 cm³. Homo habilis is estimated to have had roughly 40 billion neurons.

A little closer to the present day, Homo heidelbergensis lived from around 700,000 to 200,000 years ago, had a cranial capacity of around 1,290 cm³, and had around 76 billion neurons.

Homo neanderthalensis, living 400,000 to 40,000 years ago, had a cranial capacity comparable to that of modern humans, around 1500–1600 cm³ on average, with some Neanderthal specimens having even greater cranial capacity. Neanderthals are estimated to have had around 85 billion neurons. The increase in brain size peaked with Neanderthals, possibly due to their larger visual systems.

It is also important to note that brain mass or volume, measured as cranial capacity, and even relative brain size, which is brain mass expressed as a percentage of body mass, are not measures of intelligence, or of the use or function of regions of the brain. Total neuron count, likewise, does not by itself indicate a higher ranking in cognitive abilities: elephants have a higher total number of neurons (257 billion) than humans (100 billion). Relative brain size, overall mass, and total number of neurons are only a few of the metrics that help scientists follow the evolutionary trend of an increased brain-to-body ratio through the hominin phylogeny.
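
The figures cited above can be pulled together into a rough back-of-the-envelope comparison. The short Python sketch below is purely illustrative: it uses the approximate cranial capacities and neuron estimates quoted in this article (taking midpoints where a range is given) and computes the fold increase in volume relative to the australopiths; the constant neurons-per-volume ratio it reports is a simplifying assumption made for illustration, not an established allometric model.

# Illustrative comparison using the approximate figures quoted above.
# Volumes are cranial capacities in cm^3; neuron counts are in billions.
# Midpoints are taken where the article gives a range; the neurons-per-cm^3
# ratio is a simplifying assumption, not an established allometric model.

hominins = {
    "Australopithecus":      (450,  32.5),   # midpoint of ~300-500 cm^3, ~30-35 bn neurons
    "Homo habilis":          (600,  40.0),
    "Homo heidelbergensis":  (1290, 76.0),
    "Homo neanderthalensis": (1550, 85.0),   # midpoint of ~1500-1600 cm^3
    "Homo sapiens":          (1352, 100.0),
}

base_volume = hominins["Australopithecus"][0]

print(f"{'Species':<24}{'cm^3':>6}{'neurons (bn)':>14}{'fold vs. australopiths':>24}")
for name, (volume, neurons) in hominins.items():
    print(f"{name:<24}{volume:>6}{neurons:>14.1f}{volume / base_volume:>24.1f}")

# A crude constant-density figure across the listed species; real neuron
# estimates rely on primate-specific scaling rules rather than a fixed ratio.
ratios = [neurons / volume for volume, neurons in hominins.values()]
print(f"Neurons per cm^3 (billions): {min(ratios):.3f}-{max(ratios):.3f}")

With these cited figures, the volume increase from the australopiths to modern humans comes out at roughly threefold, consistent with the two- to threefold trend noted above.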

Evolution of the neocortex

In addition to the size of the brain, scientists have observed changes in the folding of the brain as well as in the thickness of the cortex. The more convoluted the surface of the brain, the greater the surface area of the cortex, which allows for an expansion of the cortex, the most evolutionarily advanced part of the brain. Greater surface area of the brain is linked to higher intelligence, as is a thicker cortex, but the two pull against each other: the thicker the cortex, the more difficult it is for it to fold. In adult humans, a thicker cerebral cortex has been linked to higher intelligence.

The neocortex is the most advanced and most evolutionarily young part of the human brain. It is six layers thick and is present only in mammals. It is especially prominent in humans and is the location of most higher-level functioning and cognitive ability. The six-layered neocortex found in mammals is evolutionarily derived from a three-layer cortex present in all modern reptiles. This three-layer cortex is still conserved in some parts of the human brain, such as the hippocampus, and is believed to have evolved into the neocortex in mammals during the transition between the Triassic and Jurassic periods.

The three layers of this reptilian cortex correspond strongly to the first, fifth and sixth layers of the mammalian neocortex. Across mammalian species, primates have greater neuronal density than rodents of similar brain mass, and this may account for their increased intelligence.

Behavioral epigenetics

From Wikipedia, the free encyclopedia

Behavioral epigenetics is the field of study examining the role of epigenetics in shaping animal (including human) behaviour. It seeks to explain how nurture shapes nature, where nature refers to biological heredity and nurture refers to virtually everything that occurs during the life-span (e.g., social-experience, diet and nutrition, and exposure to toxins). Behavioral epigenetics attempts to provide a framework for understanding how the expression of genes is influenced by experiences and the environment to produce individual differences in behaviour, cognition, personality, and mental health.

Epigenetic gene regulation involves changes other than to the sequence of DNA and includes changes to histones (proteins around which DNA is wrapped) and DNA methylation. These epigenetic changes can influence the growth of neurons in the developing brain as well as modify the activity of neurons in the adult brain. Together, these epigenetic changes in neuron structure and function can have a marked influence on an organism's behavior.

Background

In biology, and specifically genetics, epigenetics is the study of heritable changes in gene activity which are not caused by changes in the DNA sequence; the term can also be used to describe the study of stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable.

Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence. Gene expression can be controlled through the action of repressor proteins that attach to silencer regions of the DNA.

Modifications of the epigenome do not alter DNA.

DNA methylation turns a gene "off" – it results in the inability of genetic information to be read from DNA; removing the methyl tag can turn the gene back "on".

Epigenetics has a strong influence on the development of an organism and can alter the expression of individual traits. Epigenetic changes occur not only in the developing fetus, but also in individuals throughout the human life-span. Because some epigenetic modifications can be passed from one generation to the next, subsequent generations may be affected by the epigenetic changes that took place in the parents.

Discovery

The first documented example of epigenetics affecting behavior was provided by Michael Meaney and Moshe Szyf. While working at McGill University in Montréal in 2004, they discovered that the type and amount of nurturing a mother rat provides in the early weeks of the rat's infancy determines how that rat responds to stress later in life. This stress sensitivity was linked to a down-regulation in the expression of the glucocorticoid receptor in the brain. In turn, this down-regulation was found to be a consequence of the extent of methylation in the promoter region of the glucocorticoid receptor gene. Immediately after birth, Meaney and Szyf found, methyl groups repress the glucocorticoid receptor gene in all rat pups, making the gene unable to unwind from the histone in order to be transcribed and leaving the stress response heightened. Nurturing behaviours from the mother rat were found to stimulate activation of stress signalling pathways that remove methyl groups from DNA. This releases the tightly wound gene, exposing it for transcription. The glucocorticoid receptor gene is then activated, resulting in a lowered stress response. Rat pups that receive a less nurturing upbringing remain more sensitive to stress throughout their life-span.
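
The causal chain reported in this work can be summarized simply: more methylation of the glucocorticoid receptor promoter means less receptor expression, and less receptor expression means weaker negative feedback on the stress axis and therefore a larger stress response. The toy Python sketch below only illustrates that qualitative chain; the functional forms, parameter values, and the names gr_expression and stress_response are assumptions made for illustration, not quantities taken from the study.

# Toy illustration of the qualitative chain described above:
# promoter methylation -> glucocorticoid receptor (GR) expression -> stress response.
# All numbers and functional forms are illustrative assumptions, not measured values.

def gr_expression(promoter_methylation):
    """GR expression (arbitrary units) falls as promoter methylation (0-1) rises."""
    return 1.0 - 0.8 * promoter_methylation

def stress_response(gr_level):
    """Less receptor means weaker negative feedback, hence a larger stress response."""
    return 1.0 / gr_level

for label, methylation in [("high-licking pup (demethylated promoter)", 0.1),
                           ("low-licking pup (methylated promoter)", 0.7)]:
    gr = gr_expression(methylation)
    print(f"{label}: GR expression {gr:.2f}, relative stress response {stress_response(gr):.2f}")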

This pioneering work in rodents has been difficult to replicate in humans because of a general lack of availability of human brain tissue for the measurement of epigenetic changes.

Research into epigenetics in psychology

Anxiety and risk-taking

Monozygotic twins are identical twins. Twin studies help to reveal epigenetic differences related to various aspects of psychology.

In a small clinical study in humans published in 2008, epigenetic differences were linked to differences in risk-taking and reactions to stress in monozygotic twins. The study identified twins with different life paths, wherein one twin displayed risk-taking behaviours and the other displayed risk-averse behaviours. Epigenetic differences in DNA methylation of CpG islands proximal to the DLX1 gene correlated with the differing behavior. The authors of the twin study noted that, despite the associations between epigenetic markers and differences in personality traits, epigenetics cannot predict complex decision-making processes like career selection.

Stress

Animal and human studies have found correlations between poor care during infancy and epigenetic changes that are associated with the long-term impairments that result from neglect.

Studies in rats have shown correlations between maternal care, in terms of the parental licking of offspring, and epigenetic changes. A high level of licking results in a long-term reduction in stress response, as measured behaviorally and biochemically in elements of the hypothalamic-pituitary-adrenal (HPA) axis. Further, decreased DNA methylation of the glucocorticoid receptor gene was found in offspring that experienced a high level of licking; the glucocorticoid receptor plays a key role in regulating the HPA axis. The opposite is found in offspring that experienced low levels of licking, and when pups are switched, the epigenetic changes are reversed. This research provides evidence for an underlying epigenetic mechanism. Further support comes from experiments with the same setup using drugs that can increase or decrease methylation. Finally, epigenetic variations in parental care can be passed down from one generation to the next, from mother to female offspring: female offspring who received increased parental care (i.e., high licking) became mothers who engaged in high licking, and offspring who received less licking became mothers who engaged in less licking.

In humans, a small clinical research study showed a relationship between prenatal exposure to maternal mood and gene expression, resulting in increased reactivity to stress in offspring. Three groups of infants were examined: those born to mothers medicated for depression with serotonin reuptake inhibitors; those born to depressed mothers not being treated for depression; and those born to non-depressed mothers. Prenatal exposure to depressed/anxious mood was associated with increased DNA methylation at the glucocorticoid receptor gene and increased HPA-axis stress reactivity. The findings were independent of whether the mothers were being pharmaceutically treated for depression.

Recent research has also shown a relationship between methylation of the maternal glucocorticoid receptor gene and maternal neural activity in response to mother-infant interactions viewed on video. Longitudinal follow-up of those infants will be important for understanding the impact of early caregiving in this high-risk population on child epigenetics and behavior.

Cognition

Learning and memory

A 2010 review discusses the role of DNA methylation in memory formation and storage, but the precise mechanisms involving neuronal function, memory, and methylation reversal remain unclear.

Studies in rodents have found that the environment exerts an influence on epigenetic changes related to cognition, in terms of learning and memory; environmental enrichment correlated with increased histone acetylation, and, as verification, administering histone deacetylase inhibitors induced sprouting of dendrites, an increased number of synapses, and reinstated learning behaviour and access to long-term memories. Research has also linked learning and long-term memory formation to reversible epigenetic changes in the hippocampus and cortex in animals with normally functioning, non-damaged brains. In human studies, post-mortem brains from Alzheimer's patients show increased histone deacetylase levels.

Psychopathology and mental health

Drug addiction

Signaling cascade in the nucleus accumbens that results in psychostimulant addiction
 
This diagram depicts the signaling events in the brain's reward center that are induced by chronic high-dose exposure to psychostimulants that increase the concentration of synaptic dopamine, like amphetamine, methamphetamine, and phenethylamine. Following presynaptic dopamine and glutamate co-release by such psychostimulants, postsynaptic receptors for these neurotransmitters trigger internal signaling events through a cAMP-dependent pathway and a calcium-dependent pathway that ultimately result in increased CREB phosphorylation. Phosphorylated CREB increases levels of ΔFosB, which in turn represses the c-Fos gene with the help of corepressors; c-Fos repression acts as a molecular switch that enables the accumulation of ΔFosB in the neuron. A highly stable (phosphorylated) form of ΔFosB, one that persists in neurons for 1–2 months, slowly accumulates following repeated high-dose exposure to stimulants through this process. ΔFosB functions as "one of the master control proteins" that produces addiction-related structural changes in the brain, and upon sufficient accumulation, with the help of its downstream targets (e.g., nuclear factor kappa B), it induces an addictive state.
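
The key dynamic in this cascade, a highly stable protein that builds up only under repeated exposure, can be illustrated with a toy first-order decay model. In the Python sketch below, the half-lives, the once-daily dosing schedule, and the 60-day horizon are all illustrative assumptions chosen only to reflect the 1–2 month persistence mentioned above; this is not a quantitative model of the actual signaling events.

import math

# Toy model: a protein produced in a fixed pulse with each drug exposure and
# removed by first-order decay. A long-lived protein (like phosphorylated
# DeltaFosB, persisting for 1-2 months) accumulates over repeated exposure,
# while a short-lived protein does not. All parameters are illustrative.

def simulate(half_life_days, n_days=60, pulse=1.0):
    retention_per_day = math.exp(-math.log(2) / half_life_days)  # fraction left after one day
    level = 0.0
    for _ in range(n_days):
        level = (level + pulse) * retention_per_day  # one dose per day, then one day of decay
    return level

long_lived = simulate(half_life_days=45)    # on the order of the cited 1-2 month persistence
short_lived = simulate(half_life_days=0.5)  # a typical short-lived immediate-early gene product

print(f"Level after 60 daily exposures, long-lived protein:  {long_lived:.1f}")
print(f"Level after 60 daily exposures, short-lived protein: {short_lived:.1f}")

With these assumed parameters, the long-lived species climbs steadily across the 60 days while the short-lived one plateaus almost immediately, which is the sense in which the slow accumulation of ΔFosB can act as a gradually engaging molecular switch.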

Environmental and epigenetic influences seem to work together to increase the risk of addiction. For example, environmental stress has been shown to increase the risk of substance abuse. In an attempt to cope with stress, alcohol and drugs can be used as an escape. Once substance abuse commences, however, epigenetic alterations may further exacerbate the biological and behavioural changes associated with addiction.

Even short-term substance abuse can produce long-lasting epigenetic changes in the brain of rodents, via DNA methylation and histone modification. Epigenetic modifications have been observed in studies on rodents involving ethanol, nicotine, cocaine, amphetamine, methamphetamine and opiates.

Specifically, these epigenetic changes modify gene expression, which in turn increases an individual's vulnerability to engage in repeated substance abuse in the future. In turn, increased substance abuse results in even greater epigenetic changes in various components of a rodent's reward system (e.g., in the nucleus accumbens). Hence, a cycle emerges whereby changes in areas of the reward system contribute to the long-lasting neural and behavioural changes associated with the increased likelihood of addiction, the maintenance of addiction, and relapse. In humans, alcohol consumption has been shown to produce epigenetic changes that contribute to increased craving of alcohol. As such, epigenetic modifications may play a part in the progression from controlled intake to loss of control over alcohol consumption. These alterations may be long-term, as evidenced in smokers who still possess nicotine-related epigenetic changes ten years after cessation. Therefore, epigenetic modifications may account for some of the behavioural changes generally associated with addiction. These include: repetitive habits that increase the risk of disease and of personal and social problems; the need for immediate gratification; high rates of relapse following treatment; and the feeling of loss of control.

Evidence for related epigenetic changes has come from human studies involving alcohol, nicotine, and opiate abuse. Evidence for epigenetic changes stemming from amphetamine and cocaine abuse derives from animal studies. In animals, drug-related epigenetic changes in fathers have also been shown to negatively affect offspring in terms of poorer spatial working memory, decreased attention and decreased cerebral volume.

Eating disorders and obesity

Epigenetic changes may help to facilitate the development and maintenance of eating disorders via influences in the early environment and throughout the life-span. Pre-natal epigenetic changes due to maternal stress, behaviour and diet may later predispose offspring to persistent, increased anxiety and anxiety disorders. These anxiety issues can precipitate the onset of eating disorders and obesity, and persist even after recovery from the eating disorders.

Epigenetic differences accumulating over the life-span may account for the incongruent differences in eating disorders observed in monozygotic twins. At puberty, sex hormones may exert epigenetic changes (via DNA methylation) on gene expression, thus accounting for the higher rates of eating disorders in women as compared to men. Overall, epigenetics contributes to persistent, unregulated self-control behaviours related to the urge to binge.

Schizophrenia

Epigenetic changes, including hypomethylation of glutamatergic genes (i.e., the NMDA-receptor-subunit gene NR3B and the promoter of the AMPA-receptor-subunit gene GRIA2), in the post-mortem brains of individuals with schizophrenia are associated with increased levels of the neurotransmitter glutamate. Since glutamate is the most prevalent fast excitatory neurotransmitter, increased levels may result in the psychotic episodes related to schizophrenia. Epigenetic changes affecting a greater number of genes have been detected in men with schizophrenia as compared to women with the illness.

Population studies have established a strong association between advanced paternal age and schizophrenia in offspring. Specifically, children born to fathers over the age of 35 years are up to three times more likely to develop schizophrenia. Epigenetic dysfunction in human male sperm cells, affecting numerous genes, has been shown to increase with age. This provides a possible explanation for increased rates of the disease in men. To this end, toxins (e.g., air pollutants) have been shown to increase epigenetic differentiation. Animals exposed to ambient air from steel mills and highways show drastic epigenetic changes that persist after removal from the exposure. Therefore, similar epigenetic changes in older human fathers are likely. Schizophrenia studies provide evidence that the nature versus nurture debate in the field of psychopathology should be re-evaluated to accommodate the concept that genes and the environment work in tandem. As such, many other environmental factors (e.g., nutritional deficiencies and cannabis use) have been proposed to increase susceptibility to psychotic disorders like schizophrenia via epigenetic mechanisms.

Bipolar disorder

Evidence for epigenetic modifications in bipolar disorder is unclear. One study found hypomethylation of the gene promoter of a prefrontal lobe enzyme (i.e., membrane-bound catechol-O-methyltransferase, or COMT) in post-mortem brain samples from individuals with bipolar disorder. COMT is an enzyme that metabolizes dopamine in the synapse. These findings suggest that the hypomethylation of the promoter results in over-expression of the enzyme, which in turn results in increased degradation of dopamine in the brain. These findings provide evidence that epigenetic modification in the prefrontal lobe is a risk factor for bipolar disorder. However, a second study found no epigenetic differences in post-mortem brains from individuals with bipolar disorder.

Major depressive disorder

The causes of major depressive disorder (MDD) are poorly understood from a neuroscience perspective. The epigenetic changes leading to altered glucocorticoid receptor expression, and their effect on the HPA stress system discussed above, have also been applied to attempts to understand MDD.

Much of the work in animal models has focused on the indirect downregulation of brain-derived neurotrophic factor (BDNF) by over-activation of the stress axis. Studies in various rodent models of depression, often involving induction of stress, have found direct epigenetic modulation of BDNF as well.

Psychopathy

Epigenetics may be relevant to aspects of psychopathic behaviour through methylation and histone modification. These processes are heritable but can also be influenced by environmental factors such as smoking and abuse. Epigenetics may be one of the mechanisms through which the environment can impact the expression of the genome. Studies have also linked methylation of specific genes to nicotine and alcohol dependence in women, ADHD, and drug abuse. It is probable that epigenetic regulation, as well as methylation profiling, will play an increasingly important role in the study of the interplay between the environment and the genetics of psychopathy.

Suicide

A study of the brains of 24 suicide completers, 12 of whom had a history of child abuse and 12 who did not, found decreased levels of glucocorticoid receptor in victims of child abuse and associated epigenetic changes.

Social insects

Several studies have linked DNA cytosine methylation to the social behavior of insects such as honeybees and ants. In honeybees, when a nurse bee switches from her in-hive tasks to foraging, her cytosine methylation marks change; when a forager bee is reverted to nurse duties, the cytosine methylation marks are also reversed. Knocking down DNMT3 in larvae changed workers to a queen-like phenotype; queens and workers are two distinct castes with different morphology, behavior, and physiology. Studies of DNMT3 silencing also indicate that DNA methylation may regulate alternative splicing and pre-mRNA maturation.

Limitations and future direction

Many researchers contribute information to the Human Epigenome Consortium. The aim of future research is to reprogram epigenetic changes to help with addiction, mental illness, age related changes, memory decline, and other issues. However, the sheer volume of consortium-based data makes analysis difficult. Most studies also focus on one gene. In actuality, many genes and interactions between them likely contribute to individual differences in personality, behaviour and health. As social scientists often work with many variables, determining the number of affected genes also poses methodological challenges. More collaboration between medical researchers, geneticists and social scientists has been advocated to increase knowledge in this field of study.

Limited access to human brain tissue poses a challenge to conducting human research. Not yet knowing whether epigenetic changes in blood and other (non-brain) tissues parallel modifications in the brain places even greater reliance on brain research. Although some epigenetic studies have translated findings from animals to humans, some researchers caution about the extrapolation of animal studies to humans. One view notes that when animal studies do not consider how the subcellular and cellular components, organs, and the entire individual interact with the influences of the environment, the results are too reductive to explain behaviour.

Some researchers note that epigenetic perspectives will likely be incorporated into pharmacological treatments. Others caution that more research is necessary as drugs are known to modify the activity of multiple genes and may, therefore, cause serious side effects. However, the ultimate goal is to find patterns of epigenetic changes that can be targeted to treat mental illness, and reverse the effects of childhood stressors, for example. If such treatable patterns eventually become well-established, the inability to access brains in living humans to identify them poses an obstacle to pharmacological treatment. Future research may also focus on epigenetic changes that mediate the impact of psychotherapy on personality and behaviour.

Most epigenetic research is correlational; it merely establishes associations. More experimental research is necessary to help establish causation. Lack of resources has also limited the number of intergenerational studies. Therefore, advancing longitudinal and multigenerational, experience-dependent studies will be critical to further understanding the role of epigenetics in psychology.

Social privilege

From Wikipedia, the free encyclopedia