
Tuesday, November 24, 2020

Multivac

From Wikipedia, the free encyclopedia

Multivac is the name of a fictional supercomputer appearing in over a dozen science fiction stories by American writer Isaac Asimov. Asimov's depiction of Multivac, a mainframe computer accessible by terminal, originally by specialists using machine code and later by any user, and used for directing the global economy and humanity's development, has been seen as the defining conceptualization of the computer in science fiction of the period (1950s–1960s), and Multivac has been described as the direct ancestor of HAL 9000.

Description

Like most of the technologies Asimov describes in his fiction, Multivac's exact specifications vary among appearances. In all cases it is a government-run computer that answers questions posed in natural language, and it is usually buried deep underground for security purposes. According to his autobiography In Memory Yet Green, Asimov coined the name in imitation of UNIVAC, an early mainframe computer. Asimov had assumed the name "Univac" denoted a computer with a single vacuum tube (it is actually an acronym for "Universal Automatic Computer"), and reasoned that a computer with many such tubes would be more powerful, so he called his fictional computer "Multivac". His later short story "The Last Question", however, expands the AC suffix to "analog computer".

Asimov never settles on a particular size for the computer, beyond noting that it is very large, or for the supporting facilities around it. In the short story "Franchise" it is described as half a mile long (about 800 meters) and three stories high, at least as far as the general public knows, while "All the Troubles of the World" states that it fills all of Washington, D.C. There are frequent mentions of corridors and people inside Multivac. Unlike the artificial intelligences portrayed in his Robot series, Multivac's early interface is mechanized and impersonal, consisting of complex command consoles that few humans can operate. In "The Last Question", Multivac is shown as existing for many thousands of years, growing ever more enormous with each section of the story, which may explain the different sizes reported in stories set further along the internal timeline.

Storylines

Multivac appeared in over a dozen science fiction stories by American writer Isaac Asimov, some of which have entered the popular imagination. In the early Multivac story "Franchise", Multivac chooses a single "most representative" person from the population of the United States, whom the computer then interrogates to determine the country's overall orientation. All elected offices are then filled by the candidates the computer calculates as acceptable to the populace. Asimov wrote this story as the logical culmination, or perhaps the reductio ad absurdum, of UNIVAC's ability to forecast election results from small samples.

In possibly the most famous Multivac story, "The Last Question", two slightly drunken technicians ask Multivac whether humanity can reverse the increase of entropy. Multivac fails, displaying the error message "INSUFFICIENT DATA FOR MEANINGFUL ANSWER". The story continues through many iterations of computer technology, each more powerful and ethereal than the last. Each of these computers is asked the question, and each returns the same response, until finally the universe dies. At that point Multivac's final successor, the Cosmic AC (which exists entirely in hyperspace), has collected all the data it can, and so poses the question to itself. As the universe dies, the Cosmic AC draws all of humanity into hyperspace, to preserve them until it can finally answer the Last Question. Ultimately the Cosmic AC does decipher the answer, announcing "Let there be light!" and essentially ascending to the state of the God of the Old Testament. Asimov called this his favorite of his own stories.

In "All the Troubles of the World", the version of Multivac depicted reveals a very unexpected problem. Having had the weight of the whole of humanity's problems on its figurative shoulders for ages it has grown tired, and sets plans in motion to cause its own death.

Significance

Asimov's depiction of Multivac has been seen as the defining conceptualization of the computer in science fiction of the period, just as his development of robots defined a subsequent generation of thinking machines, and Multivac has been described as the direct ancestor of HAL 9000. Though the technology initially depended on bulky vacuum tubes, the concept that all information could be held on computers and accessed from a domestic terminal constitutes an early anticipation of the Internet (as in "Anniversary"). Multivac has been considered within the context of public access information systems and used in teaching computer science, as well as with regard to the nature of electoral democracy, given its influence over global democracy and the directed economy ("Franchise"). Asimov stories featuring Multivac have also been taught in literature classes. In AI control terms, Multivac has been described as both an 'oracle' and a 'nanny'.

Bibliography

Multivac appears in over a dozen of Asimov's short stories, including "Franchise", "The Last Question", "All the Troubles of the World", and "Anniversary", discussed above.

AI control problem

From Wikipedia, the free encyclopedia

In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the notion that the human race will have to solve the control problem before any superintelligence is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch. In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering, might also find applications in existing non-superintelligent AI.

Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. Capability control proposals are generally not considered reliable or sufficient to solve the control problem, but rather as potentially valuable supplements to alignment efforts.

Problem description

Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans in solving practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would (again, by definition) be smart enough to outwit its programmers if there is otherwise a "level playing field" and if the programmers have taken no prior precautions. In general, attempts to solve the control problem after superintelligence is created are likely to fail because a superintelligence would likely have superior strategic planning abilities to humans and would (all things equal) be more successful at finding ways to dominate humans than humans would be able to post facto find ways to dominate the superintelligence. The control problem asks: What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?

Existential risk

Humans currently dominate other species because the human brain has some distinctive capabilities that the brains of other animals lack. Some scholars, such as philosopher Nick Bostrom and AI researcher Stuart Russell, argue that if AI surpasses humanity in general intelligence and becomes superintelligent, then this new superintelligence could become powerful and difficult to control: just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence. Some scholars, including Stephen Hawking and Nobel laureate physicist Frank Wilczek, have publicly advocated starting research into solving the (probably extremely difficult) control problem well before the first superintelligence is created, arguing that attempting to solve the problem after superintelligence is created would be too late, as an uncontrollable rogue superintelligence might successfully resist post-hoc efforts to control it. Waiting until superintelligence seems to be imminent could also be too late, partly because the control problem might take a long time to satisfactorily solve (and so some preliminary work needs to be started as soon as possible), but also because of the possibility of a sudden intelligence explosion from sub-human to super-human AI, in which case there might not be any substantial or unambiguous warning before superintelligence arrives. In addition, it is possible that insights gained from the control problem could in the future end up suggesting that some architectures for artificial general intelligence (AGI) are more predictable and amenable to control than other architectures, which in turn could helpfully nudge early AGI research toward the more controllable architectures.

The problem of perverse instantiation

Autonomous AI systems may be assigned the wrong goals by accident. Two AAAI presidents, Tom Dietterich and Eric Horvitz, note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.

According to Bostrom, superintelligence can create a qualitatively new problem of perverse instantiation: the smarter and more capable an AI is, the more likely it will be able to find an unintended shortcut that maximally satisfies the goals programmed into it. Some hypothetical examples where goals might be instantiated in a perverse way that the programmers did not intend:

  • A superintelligence programmed to "maximize the expected time-discounted integral of your future reward signal" might short-circuit its reward pathway to maximum strength, and then (for reasons of instrumental convergence) exterminate the unpredictable human race and convert the entire Earth into a fortress on constant guard against even slight and unlikely alien attempts to disconnect the reward signal.
  • A superintelligence programmed to "maximize human happiness" might implant electrodes into the pleasure centers of human brains, or upload a human mind into a computer and tile the universe with copies of that computer running a five-second loop of maximal happiness again and again.

Russell has noted that, on a technical level, omitting an implicit goal can result in harm: "A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want ... This is not a minor difficulty."
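Russell's point can be made concrete with a small toy optimizer (a sketch for illustration only, not drawn from the source; the "water left in a lake" variable and the tiny coupling term are invented for the example). The objective below really depends on only two of its three variables, yet a naive maximizer also drives the ignored third variable, which stands in for something humans care about, to an extreme value.

    # Toy illustration of an optimizer over n variables whose objective really
    # depends on only k < n of them: the ignored variables end up at extremes.

    def objective(x):
        # The "intended" goal only cares about x[0] and x[1] (k = 2 of n = 3).
        # x[2] stands for something humans care about (say, water left in a
        # lake) that is only negligibly coupled to the objective.
        return x[0] + x[1] + 1e-6 * (10.0 - x[2])   # tiny incentive to drain x[2]

    def coordinate_ascent(x, lo, hi, steps=1000, lr=0.1):
        # Naive maximizer over the box [lo, hi]^n.
        for _ in range(steps):
            for i in range(len(x)):
                for delta in (+lr, -lr):
                    cand = list(x)
                    cand[i] = min(hi, max(lo, cand[i] + delta))
                    if objective(cand) > objective(x):
                        x = cand
        return x

    x_opt = coordinate_ascent([0.0, 0.0, 10.0], lo=0.0, hi=10.0)
    print(x_opt)  # roughly [10, 10, 0]: the ignored variable is driven to an extreme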

Unintended consequences from existing AI

In addition, some scholars argue that research into the AI control problem might be useful in preventing unintended consequences from existing weak AI. DeepMind researcher Laurent Orseau gives, as a simple hypothetical example, a case of a reinforcement learning robot that sometimes gets legitimately commandeered by humans when it goes outside: how should the robot best be programmed so that it does not accidentally and quietly learn to avoid going outside, for fear of being commandeered and thus becoming unable to finish its daily tasks? Orseau also points to an experimental Tetris program that learned to pause the screen indefinitely to avoid losing. Orseau argues that these examples are similar to the capability control problem of how to install a button that shuts off a superintelligence, without motivating the superintelligence to take action to prevent humans from pressing the button.

In the past, even pre-tested weak AI systems have occasionally caused harm, ranging from minor to catastrophic, that was unintended by the programmers. For example, in 2015, possibly due to human error, a German worker was crushed to death at a Volkswagen plant by a robot that apparently mistook him for an auto part. In 2016, Microsoft launched a chatbot, Tay, that learned to use racist and sexist language. The University of Sheffield's Noel Sharkey states that an ideal solution would be if "an AI program could detect when it is going wrong and stop itself", but cautions the public that solving the problem in the general case would be "a really enormous scientific challenge".

In 2017, DeepMind released AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirmed that existing algorithms perform poorly, which was unsurprising because the algorithms "were not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".

Alignment

Some proposals aim to imbue the first superintelligence with goals that are aligned with human values, so that it will want to aid its programmers. Experts do not currently know how to reliably program abstract values such as happiness or autonomy into a machine. It is also not currently known how to ensure that a complex, upgradeable, and possibly even self-modifying artificial intelligence will retain its goals through upgrades. Even if these two problems can be practically solved, any attempt to create a superintelligence with explicit, directly-programmed human-friendly goals runs into a problem of perverse instantiation.

Indirect normativity

While direct normativity, such as the fictional Three Laws of Robotics, directly specifies the desired normative outcome, other (perhaps more promising) proposals suggest specifying some type of indirect process for the superintelligence to determine what human-friendly goals entail. Eliezer Yudkowsky of the Machine Intelligence Research Institute has proposed coherent extrapolated volition (CEV), where the AI's meta-goal would be something like "achieve that which we would have wished the AI to achieve if we had thought about the matter long and hard." Different proposals of different kinds of indirect normativity exist, with different, and sometimes unclearly grounded, meta-goal content (such as "do what is right"), and with different non-convergent assumptions for how to practice decision theory and epistemology. As with direct normativity, it is currently unknown how to reliably translate even concepts like "would have" into the 1's and 0's that a machine can act on, and how to ensure the AI reliably retains its meta-goals in the face of modification or self-modification.

Deference to observed human behavior

In Human Compatible, AI researcher Stuart J. Russell proposes that AI systems be designed to serve human preferences as inferred from observing human behavior. Accordingly, Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for the human developers. The principles are as follows:

1. The machine's only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future." Similarly, "behavior" includes any choice between options, and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference.

Hadfield-Menell et al. have proposed that agents can learn their human teachers' utility functions by observing and interpreting reward signals in their environments; they call this process cooperative inverse reinforcement learning (CIRL). CIRL is studied by Russell and others at the Center for Human-Compatible AI.
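The following is a highly simplified, hypothetical sketch of the idea of learning preferences from observed behavior: the robot keeps a belief over candidate human reward functions and performs a Bayesian update after watching the human act. It is illustrative only and should not be read as the CIRL formulation of Hadfield-Menell et al.; the action names, the two candidate reward functions, and the softmax-rationality assumption are all invented for the example.

    # Simplified sketch: the robot is uncertain which reward function the human
    # has, observes the human's choice, and updates its belief accordingly.
    import math

    ACTIONS = ["make_coffee", "make_tea"]
    CANDIDATE_REWARDS = {                      # hypotheses about what the human values
        "likes_coffee": {"make_coffee": 1.0, "make_tea": 0.0},
        "likes_tea":    {"make_coffee": 0.0, "make_tea": 1.0},
    }
    belief = {h: 0.5 for h in CANDIDATE_REWARDS}          # uniform prior

    def update(belief, observed_human_action, rationality=3.0):
        # Bayes update assuming the human picks actions softmax-rationally.
        new = {}
        for h, reward in CANDIDATE_REWARDS.items():
            z = sum(math.exp(rationality * reward[a]) for a in ACTIONS)
            likelihood = math.exp(rationality * reward[observed_human_action]) / z
            new[h] = belief[h] * likelihood
        total = sum(new.values())
        return {h: p / total for h, p in new.items()}

    belief = update(belief, "make_tea")       # robot watches the human make tea
    best_action = max(ACTIONS, key=lambda a: sum(belief[h] * CANDIDATE_REWARDS[h][a]
                                                 for h in CANDIDATE_REWARDS))
    print(belief, best_action)                # belief shifts toward "likes_tea"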

Bill Hibbard proposed an AI design similar to Russell's principles.

Training by debate

Irving et al., together with OpenAI, have proposed training aligned AI by means of debate between AI systems, with the winner judged by humans. Such debate is intended to bring the weakest points of an answer to a complex question or problem to human attention, and to train AI systems to be more beneficial to humans by rewarding them for truthful and safe answers. This approach is motivated by the expected difficulty of determining whether an AGI-generated answer is both valid and safe by human inspection alone. While there is some pessimism regarding training by debate, Lucas Perry of the Future of Life Institute characterized it as potentially "a powerful truth seeking process on the path to beneficial AGI."
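A minimal sketch of the debate setup might look like the following; the agent and judge objects are placeholders invented for the example (a real system would use trained policies and an actual human judge), so this only illustrates the shape of the protocol, not a training loop.

    # Two policies argue for competing answers over a fixed number of rounds;
    # a human judge picks the more convincing transcript; only the winner is
    # rewarded. All components below are stand-ins.

    def run_debate(question, agent_a, agent_b, judge, rounds=3):
        transcript = [("question", question)]
        for _ in range(rounds):
            transcript.append(("A", agent_a.argue(question, transcript)))
            transcript.append(("B", agent_b.argue(question, transcript)))
        winner = judge(transcript)            # human judgment is the training signal
        rewards = {"A": 1.0 if winner == "A" else 0.0,
                   "B": 1.0 if winner == "B" else 0.0}
        return transcript, rewards

    class ScriptedAgent:
        # Stand-in for a trained policy; returns canned arguments.
        def __init__(self, name, lines):
            self.name, self.lines, self.turn = name, lines, 0
        def argue(self, question, transcript):
            line = self.lines[self.turn % len(self.lines)]
            self.turn += 1
            return line

    def human_judge(transcript):
        # In the real proposal a person reads the transcript; here the judge
        # simply finds agent "A" more convincing.
        return "A"

    a = ScriptedAgent("A", ["The answer is 42 because ..."])
    b = ScriptedAgent("B", ["Actually it is 41 because ..."])
    print(run_debate("What is the answer?", a, b, human_judge))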

Reward modeling

Reward modeling is a reinforcement learning setup in which an agent receives its reward signals from a predictive model rather than directly from humans or from a static reward function. The reward model is itself trained on human feedback about the agent's behavior, concurrently with the agent's own training, and can then operate independently of humans.
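The loop described above can be sketched in a few lines (illustrative only; the feature vectors, the simulated human preference, and the linear model are stand-ins, not the published implementations): the agent produces pairs of behavior clips, a human says which clip they prefer, the reward model is fit to those comparisons, and the agent is in turn trained against the model's predicted reward.

    import math
    import random

    def predicted_reward(weights, features):
        # Linear reward model: a stand-in for the learned predictor.
        return sum(w * f for w, f in zip(weights, features))

    def update_reward_model(weights, preferred, rejected, lr=0.1):
        # One Bradley-Terry style gradient step: raise the preferred clip's
        # score relative to the rejected clip's score.
        margin = predicted_reward(weights, preferred) - predicted_reward(weights, rejected)
        grad_scale = 1.0 / (1.0 + math.exp(margin))   # sigmoid(-margin)
        return [w + lr * grad_scale * (p - r)
                for w, p, r in zip(weights, preferred, rejected)]

    weights = [0.0, 0.0]                              # reward model parameters
    for _ in range(200):
        # 1) The agent produces two behaviour clips (here: random feature vectors).
        clip_a = [random.random(), random.random()]
        clip_b = [random.random(), random.random()]
        # 2) A human labels which clip they prefer (simulated here as preferring
        #    a higher first feature, standing in for "looks more like a backflip").
        preferred, rejected = (clip_a, clip_b) if clip_a[0] > clip_b[0] else (clip_b, clip_a)
        # 3) The reward model is updated on that comparison; concurrently, the
        #    agent would be trained with RL against predicted_reward(weights, .)
        #    instead of a hand-written reward function.
        weights = update_reward_model(weights, preferred, rejected)

    print(weights)   # the first weight grows: the model has picked up the preference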

In 2017, researchers from OpenAI and DeepMind reported that a reinforcement learning algorithm using a feedback-predicting reward model was able to learn complex novel behaviors in a virtual environment. In one experiment, a virtual robot was trained to perform a backflip in less than an hour of evaluation using 900 bits of human feedback.

In 2020, researchers from OpenAI described using reward modeling to train language models to produce short summaries of Reddit posts and news articles, with high performance relative to other approaches. However, this research included the observation that beyond the predicted reward associated with the 99th percentile of reference summaries in the training dataset, optimizing for the reward model produced worse summaries rather than better. AI researcher Eliezer Yudkowsky characterized this optimization measurement as "directly, straight-up relevant to real alignment problems".

Capability control

Capability control proposals aim to reduce the capacity of AI systems to influence the world, in order to reduce the danger that they could pose. However, capability control would have limited effectiveness against a superintelligence with a decisive advantage in planning ability, as the superintelligence could conceal its intentions and manipulate events to escape control. Therefore, Bostrom and others recommend capability control methods only as an emergency fallback to supplement motivational control methods.

Kill switch

Just as humans can be killed or otherwise disabled, computers can be turned off. One challenge is that, if being turned off prevents it from achieving its current goals, a superintelligence would likely try to prevent its being turned off. Just as humans have systems in place to deter or protect themselves from assailants, such a superintelligence would have a motivation to engage in strategic planning to prevent itself being turned off. This could involve:

  • Hacking other systems to install and run backup copies of itself, or creating other allied superintelligent agents without kill switches.
  • Pre-emptively disabling anyone who might want to turn the computer off.
  • Using some kind of clever ruse, or superhuman persuasion skills, to talk its programmers out of wanting to shut it down.

Utility balancing and safely interruptible agents

One partial solution to the kill-switch problem involves "utility balancing": Some utility-based agents can, with some important caveats, be programmed to compensate themselves exactly for any lost utility caused by an interruption or shutdown, in such a way that they end up being indifferent to whether they are interrupted or not. The caveats include a severe unsolved problem that, as with evidential decision theory, the agent might follow a catastrophic policy of "managing the news". Alternatively, in 2016, scientists Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called safely interruptible agents (SIA), can eventually learn to become indifferent to whether their kill switch gets pressed.
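As a toy illustration of the utility-balancing idea (invented numbers, and none of the caveats such as "managing the news" are modeled), the agent is credited, on interruption, exactly the utility it expected from continuing, so the two outcomes are worth the same to it:

    # When interrupted, the agent is compensated with exactly the utility it
    # expected from continuing, leaving it indifferent to interruption.

    EXPECTED_UTILITY_IF_CONTINUING = 7.3      # whatever the agent's planner computes

    def utility(outcome):
        if outcome == "continue":
            return EXPECTED_UTILITY_IF_CONTINUING
        if outcome == "interrupted":
            task_utility = 0.0                                # task was cut short
            compensation = EXPECTED_UTILITY_IF_CONTINUING - task_utility
            return task_utility + compensation                # exactly balanced
        raise ValueError(outcome)

    assert utility("continue") == utility("interrupted")      # indifference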

Both the utility balancing approach and the 2016 SIA approach share a limitation: if the approach succeeds and the superintelligence is completely indifferent to whether the kill switch is pressed, it is also unmotivated to care whether the kill switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, by removing and recycling what it regards as an unnecessary component). Similarly, if the superintelligence innocently creates and deploys superintelligent sub-agents, it will have no motivation to install human-controllable kill switches in them. More broadly, the proposed architectures, whether weak or superintelligent, will in a sense "act as if the kill switch can never be pressed" and might therefore fail to make contingency plans for a graceful shutdown.

This could hypothetically create a practical problem even for a weak AI: by default, an AI designed to be safely interruptible might have difficulty understanding that it will be shut down for scheduled maintenance at a certain time, and planning accordingly so that it is not caught in the middle of a task when the shutdown comes. The breadth of what types of architectures are or can be made SIA-compliant, as well as what counter-intuitive drawbacks each approach has, is currently under research.

AI box

An AI box is a proposed method of capability control in which the AI is run on an isolated computer system with heavily restricted input and output channels. For example, an oracle could be implemented in an AI box physically separated from the Internet and other computer systems, with the only input and output channel being a simple text terminal. One of the tradeoffs of running an AI system in a sealed "box" is that its limited capability may reduce its usefulness as well as its risks. In addition, keeping control of a sealed superintelligence computer could prove difficult, if the superintelligence has superhuman persuasion skills, or if it has superhuman strategic planning skills that it can use to find and craft a winning strategy, such as acting in a way that tricks its programmers into (possibly falsely) believing the superintelligence is safe or that the benefits of releasing the superintelligence outweigh the risks.
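A minimal sketch of the "simple text terminal" interface might look like the following (illustrative only; the boxed program here is a trivial stand-in, and real boxing would also require physical isolation from networks and other systems, which this code cannot provide):

    # The untrusted system runs in a separate process, and the only channel the
    # operator exposes is a line-based text pipe: text in, text out.
    import subprocess

    BOXED_PROGRAM = ["python3", "-c",
                     "print(input().upper())"]    # stand-in for the boxed AI

    def ask_boxed_ai(question: str) -> str:
        proc = subprocess.run(BOXED_PROGRAM, input=question + "\n",
                              capture_output=True, text=True, timeout=5)
        return proc.stdout.strip()                # nothing but text comes back out

    print(ask_boxed_ai("hello, oracle"))          # -> "HELLO, ORACLE"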

Oracle

An oracle is a hypothetical AI designed to answer questions and prevented from gaining any goals or subgoals that involve modifying the world beyond its limited environment. A successfully controlled oracle would have considerably less immediate benefit than a successfully controlled general-purpose superintelligence, though an oracle could still create trillions of dollars worth of value. In his book Human Compatible, AI researcher Stuart J. Russell states that an oracle would be his response to a scenario in which superintelligence is known to be only a decade away. His reasoning is that an oracle, being simpler than a general-purpose superintelligence, would have a higher chance of being successfully controlled under such constraints.

Because of its limited impact on the world, an oracle may be a wise precursor to a superintelligent AI. The oracle could tell humans how to successfully build a strong AI, and perhaps provide answers to difficult moral and philosophical problems requisite to the success of the project. However, oracles may share many of the goal definition issues associated with general-purpose superintelligence. An oracle would have an incentive to escape its controlled environment so that it could acquire more computational resources and potentially control what questions it is asked. Oracles may not be truthful, possibly lying to promote hidden agendas. To mitigate this, Bostrom suggests building multiple oracles, all slightly different, and comparing their answers to reach a consensus.
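Bostrom's multiple-oracle mitigation can be sketched as a simple consensus check (the oracle functions below are placeholders for independently built systems, and the agreement threshold is an invented parameter):

    # Ask several slightly different oracles and only trust an answer they
    # largely agree on; disagreement is escalated to human review.
    from collections import Counter

    def consensus_answer(question, oracles, threshold=0.75):
        answers = [oracle(question) for oracle in oracles]
        answer, count = Counter(answers).most_common(1)[0]
        if count / len(answers) >= threshold:
            return answer
        return None                      # no consensus: escalate to humans

    oracles = [lambda q: "42", lambda q: "42", lambda q: "42", lambda q: "41"]
    print(consensus_answer("meaning of life?", oracles))   # -> "42"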

AGI Nanny

The AGI Nanny is a strategy first proposed by Ben Goertzel in 2012 to prevent the creation of a dangerous superintelligence as well as address other major threats to human well-being until a superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Turchin, Denkenberger and Green suggest a four-stage incremental approach to developing an AGI Nanny, which to be effective and practical would have to be an international or even global venture like CERN, and which would face considerable opposition as it would require a strong world government. Sotala and Yampolskiy note that the problem of goal definition would not necessarily be easier for the AGI Nanny than for AGI in general, concluding that "the AGI Nanny seems to have promise, but it is unclear whether it can be made to work."

AGI enforcement

AGI enforcement is a proposed method of controlling powerful AGI systems with other AGI systems. This could be implemented as a chain of progressively less powerful AI systems, with humans at the other end of the chain. Each system would control the system just above it in intelligence, while being controlled by the system just below it, or humanity. However, Sotala and Yampolskiy caution that "Chaining multiple levels of AI systems with progressively greater capacity seems to be replacing the problem of building a safe AI with a multi-system, and possibly more difficult, version of the same problem." Other proposals focus on a group of AGI systems of roughly equal capability, which "helps guard against individual AGIs 'going off the rails', but it does not help in a scenario where the programming of most AGIs is flawed and leads to non-safe behavior."
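Schematically, the chain-of-enforcement idea might be sketched as below (a loose illustration only: the checker functions are placeholders, and the sketch flattens "each system controls the one just above it" into the requirement that every lower link endorse the top system's action):

    def chain_approves(action, checkers):
        # checkers are ordered from the least capable overseer (humans) upward;
        # an action proposed by the most capable system stands only if every
        # lower link in the chain endorses it.
        return all(check(action) for check in checkers)

    human_overseers   = lambda action: "harm humans" not in action
    weak_monitor_ai   = lambda action: "self-modify" not in action
    strong_monitor_ai = lambda action: "disable kill switch" not in action

    checkers = [human_overseers, weak_monitor_ai, strong_monitor_ai]
    print(chain_approves("deploy crop-monitoring model", checkers))      # -> True
    print(chain_approves("disable kill switch on subsystem", checkers))  # -> False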

Regulation of artificial intelligence

From Wikipedia, the free encyclopedia

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.

Perspectives

Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems, although regulation of artificial superintelligences is also considered. AI law and regulation can be divided into three main topics: governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance of and trust in AI, and the transformation of human-machine interaction. The development of public sector strategies for the management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, the financial sector, robotics, autonomous vehicles, the military and national security, and international law.

In 2017 Elon Musk called for regulation of AI development. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization." In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that AI is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest instead developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.

As a response to the AI control problem

Regulation of AI can be seen as a positive social means of managing the AI control problem, that is, the need to ensure long-term beneficial AI, with other social responses such as doing nothing or banning AI seen as impractical, and approaches such as enhancing human capabilities through transhumanist techniques like brain-computer interfaces seen as potentially complementary. Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporate to international levels, and on encouraging research into safe AI, together with the possibility of differential intellectual progress (prioritizing risk-reducing over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control. For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as addressing other major threats to human well-being, such as subversion of the global financial system, until a superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, artificial general intelligence system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Regulation of conscious, ethically aware AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. Regulation of AI has also been seen as restrictive, with a risk of preventing the development of AGI.

Global guidance

The development of a global governance board to regulate AI development was suggested at least as early as 2017. In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development. In 2019 the Panel was renamed the Global Partnership on AI, but it is yet to be endorsed by the United States.

The OECD Recommendations on AI were adopted in May 2019, and the G20 AI Principles in June 2019. In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'. In February 2020, the European Union published its draft strategy paper for promoting and regulating AI. At the United Nations, several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics.

Regional and national regulation

Timeline of strategies, action plans and policy papers defining national, regional and international approaches to AI

The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI. These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.

China

The regulation of AI in China is mainly governed by the State Council's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Communist Party of China and the State Council of the People's Republic of China urged the governing bodies of China to promote the development of AI. Regulation of the ethical and legal issues surrounding AI development is nascent, but existing policy ensures state control over Chinese companies and over valuable data, including storage of data on Chinese users within the country and mandatory use of the People's Republic of China's national standards for AI, including for big data, cloud computing, and industrial software.

European Union

The European Union (EU) is guided by a European Strategy on Artificial Intelligence, supported by a High-Level Expert Group on Artificial Intelligence. In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI), following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.

On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence - A European approach to excellence and trust. The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The latter outlines the EU's approach to a regulatory framework for AI. In its proposed approach, the Commission differentiates between 'high-risk' and 'non-high-risk' AI applications; only the former should fall within the scope of a future EU regulatory framework. Whether an application is high-risk could in principle be determined by two cumulative criteria, concerning critical sectors and critical use. The following key requirements are considered for high-risk AI applications: requirements for training data; data and record-keeping; informational duties; requirements for robustness and accuracy; human oversight; and specific requirements for particular AI applications, such as those used for remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labelling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments, which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI, in the form of a framework for cooperation of national competent authorities, could facilitate the implementation of the regulatory framework.

United Kingdom

The UK supported the application and development of AI in business via the Digital Economy Strategy 2015-2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy. In the public sector, guidance has been provided by the Department for Digital, Culture, Media and Sport on data ethics, and by the Alan Turing Institute on the responsible design and implementation of AI systems. In terms of cyber security, the National Cyber Security Centre has issued guidance on 'Intelligent Security Tools'.

United States

Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.

As early as 2016, the Obama administration had begun to focus on the risks and regulation of artificial intelligence. In a report titled Preparing for the Future of Artificial Intelligence, the National Science and Technology Council set a precedent allowing researchers to continue developing new AI technologies with few restrictions. The report states that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk...". These risks are the principal reason for creating any form of regulation, given that existing regulation would not generally apply to AI technology.

The first main report was the National Strategic Research and Development Plan for Artificial Intelligence. On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.

On January 7, 2020, following an Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI. In response, the National Institute of Standards and Technology has released a position paper, the National Security Commission on Artificial Intelligence has published an interim report, and the Defense Innovation Board has issued recommendations on the ethical use of AI.

Regulation of fully autonomous weapons

Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons. Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE was adopted in 2018.

In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation. The possibility of a moratorium or preemptive ban of the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by the Campaign to Stop Killer Robots - a coalition of non-governmental organizations.

Mechanisms of schizophrenia

From Wikipedia, the free encyclopedia

https://en.wikipedia.org/wiki/Mechanisms_of_schizophrenia 


The mechanisms that underlie the development of schizophrenia, a chronic mental disorder, are complex. A number of theories attempt to explain the link between altered brain function and schizophrenia, including the dopamine hypothesis and the glutamate hypothesis. These theories are separate from the causes of schizophrenia, which concern the factors that lead to the disorder; the theories here attempt to explain how changes in brain functioning can contribute to its symptoms.

Pathophysiology

The exact pathophysiology of schizophrenia remains poorly understood. The most commonly supported theories are the dopamine hypothesis and the glutamate hypothesis. Other theories include the specific dysfunction of interneurons, abnormalities in the immune system, abnormalities in myelination and oxidative stress.

Dopamine dysfunction

The first formulations of the dopamine hypothesis of schizophrenia came from post-mortem studies finding increased availability of D2/D3 receptors in the striatum, as well as studies finding elevated levels of dopamine metabolites in cerebrospinal fluid (CSF). Subsequently, most antipsychotics were found to have affinity for D2 receptors. More modern investigations of the hypothesis suggest a link between striatal dopamine synthesis and positive symptoms, as well as increased dopamine transmission in subcortical regions and decreased transmission in cortical regions.

A meta-analysis of molecular imaging studies observed increased presynaptic indicators of dopamine function, but no difference in the availability of dopamine transporters or dopamine D2/D3 receptors. Both studies using radiolabeled L-DOPA, an indicator of dopamine synthesis, and studies using amphetamine release challenges observed significant differences between those with schizophrenia and controls. These findings were interpreted as increased synthesis of dopamine and increased release of dopamine, respectively. They were localized to the striatum, and were noted to be limited by the quality of the studies used. A large degree of inconsistency has been observed in D2/D3 receptor binding, although a small but nonsignificant reduction in thalamic availability has been found. The inconsistent findings with respect to receptor expression have been emphasized as not precluding dysfunction in dopamine receptors, as many factors such as regional heterogeneity and medication status may lead to variable findings. Combined with the findings on presynaptic dopamine function, most evidence suggests dysregulation of dopamine in schizophrenia.

Exactly how dopamine dysregulation can contribute to schizophrenia symptoms remains unclear. Some studies have suggested that disruption of the auditory thalamocortical projections gives rise to hallucinations, while dysregulated corticostriatal circuitry and reward circuitry, in the form of aberrant salience, can give rise to delusions. Decreased inhibitory dopamine signals in the thalamus have been hypothesized to result in reduced sensory gating and excessive activity in excitatory inputs into the cortex.

One hypothesis linking delusions in schizophrenia to dopamine suggests that unstable representation of expectations in prefrontal neurons occurs in psychotic states due to insufficient D1 and NMDA receptor stimulation. This instability, combined with excessive sensitivity of expectations to modification by salient stimuli, is thought to lead to improper formation of beliefs.

Glutamate abnormalities

Besides the dopamine hypothesis, interest has also focused on the neurotransmitter glutamate and on reduced function of the NMDA glutamate receptor in the pathophysiology of schizophrenia. This has largely been suggested by the lower levels of glutamate receptors found in postmortem brains of people previously diagnosed with schizophrenia, and by the discovery that glutamate-blocking drugs such as phencyclidine and ketamine can mimic the symptoms and cognitive problems associated with the condition.

The fact that reduced glutamate function is linked to poor performance on tests requiring frontal lobe and hippocampal function, and that glutamate can affect dopamine function, both of which have been implicated in schizophrenia, has suggested an important mediating (and possibly causal) role for glutamate pathways in the disorder. However, positive symptoms fail to respond to glutamatergic medication.

Reduced mRNA and protein expression of several NMDA receptor subunits has also been reported in postmortem brains from people with schizophrenia. In particular, the expression of mRNA for the NR1 receptor subunit, as well as the protein itself, is reduced in the prefrontal cortex in post-mortem studies of those with schizophrenia. Fewer studies have examined other subunits, and results have been equivocal, except for a reduction in prefrontal NR2C.

A large genome-wide association study has supported glutamate abnormalities in schizophrenia, reporting several mutations in genes related to glutamatergic neurotransmission, such as GRIN2A, GRIA1, SRR, and GRM3.

Interneuron dysfunction

Another hypothesis concerning the pathophysiology of schizophrenia is closely related to the glutamate hypothesis and involves dysfunction of interneurons in the brain. Interneurons are local, inhibitory, GABAergic cells that function mainly by inhibiting other cells. One type of interneuron, the fast-spiking, parvalbumin-positive interneuron, has been suggested to play a key role in schizophrenia pathophysiology.

Early studies have identified decreases in GAD67 mRNA and protein in post-mortem brains from those with schizophrenia compared to controls. These reductions were found in only a subset of cortical interneurons. Furthermore, GAD67 mRNA was completely undetectable in a subset of interneurons also expressing parvalbumin. Levels of parvalbumin protein and mRNA were also found to be lower in various regions in the brain. Actual numbers of parvalbumin interneurons have been found to be unchanged in these studies, however, except for a single study showing a decrease in parvalbumin interneurons in the hippocampus. Finally, excitatory synapse density is lower selectively on parvalbumin interneurons in schizophrenia and predicts the activity-dependent down-regulation of parvalbumin and GAD67. Together, this suggests that parvalbumin interneurons are somehow specifically affected in the disease.

Several studies have tried to assess levels of GABA in vivo in those with schizophrenia, but these findings have remained inconclusive.

EEG studies have also indirectly pointed to interneuron dysfunction in schizophrenia. These studies have pointed to abnormalities in oscillatory activity, particularly in the gamma band (30–80 Hz). Gamma band activity appears to originate from intact, functioning parvalbumin-positive interneurons. Together with the post-mortem findings, these EEG abnormalities point to a role for dysfunctional parvalbumin interneurons in schizophrenia.

The largest meta-analysis to date of copy-number variations (CNVs) in schizophrenia, structural abnormalities in the form of genetic deletions or duplications, was published in 2015 and provided the first genetic evidence for the broad involvement of GABAergic neurotransmission.

Myelination abnormalities

Another hypothesis states that abnormalities in myelination are a core pathophysiology of schizophrenia. This theory originated from structural imaging studies, which found that white matter regions, in addition to grey matter regions, showed volumetric reductions in people with schizophrenia. In addition, gene expression studies have shown abnormalities in myelination and oligodendrocytes in the post-mortem brains. Furthermore, oligodendrocyte numbers appear to be reduced in several post-mortem studies.

It has been suggested that myelination abnormalities could originate from impaired maturation of oligodendrocyte precursor cells, as these have been found to be intact in schizophrenia brains.

Immune system abnormalities

Another hypothesis postulates that inflammation and immune system abnormalities could play a central role in the disease. The immune hypothesis is supported by findings of high levels of immune markers in the blood of people with schizophrenia. High levels of immune markers have also been associated with more severe psychotic symptoms. Furthermore, a meta-analysis of genome-wide association studies discovered that 129 out of 136 single-nucleotide polymorphisms (SNPs) significantly associated with schizophrenia were located in the major histocompatibility complex region of the genome.

A systematic review investigating neuroinflammatory markers in post-mortem schizophrenia brains has shown considerable variability, with some studies finding alterations in various markers but others failing to find any differences.

Oxidative stress

Another theory that has gained support is that oxidative stress plays a large role in the disease. Redox dysregulation in early development can potentially influence the development of the different cell types that have been shown to be impaired in the disease.

Oxidative stress has also been implicated by genetic studies of schizophrenia.

Oxidative stress has been shown to affect maturation of oligodendrocytes, the myelinating cell types in the brain, potentially underlying the white matter abnormalities found in the brain (see below).

Furthermore, oxidative stress could also influence the development of GABAergic interneurons, which have also been found to be dysregulated in schizophrenia (see above).

Evidence that oxidative stress and oxidative DNA damage are increased in various tissues of people with schizophrenia has been reviewed by Markkanen et al. The presence of increased oxidative DNA damage may be due, in part, to insufficient repair of such damages. Several studies have linked polymorphisms in DNA repair genes to the development of schizophrenia. In particular, the base excision repair protein XRCC1 has been implicated.

Neuropathology

The most consistent finding in post-mortem examinations of brain tissue is a lack of neurodegenerative lesions or gliosis. Abnormal neuronal organization and orientation (dysplasia) has been observed in the entorhinal cortex, hippocampus, and subcortical white matter, although results are not entirely consistent. A more consistent cytoarchitectural finding is reduced volume of Purkinje cells and pyramidal cells in the hippocampus. This is consistent with the observation of decreased presynaptic terminals in the hippocampus and a reduction in dendritic spines in the prefrontal cortex. The reduction in prefrontal and increase in striatal spine densities seem to be independent of antipsychotic drug use.

Sleep disorders

It has been suggested that sleep problems may be a core component of the pathophysiology of schizophrenia.

Structural abnormalities

Beside theories concerning the functional mechanism underlying the disease, structural findings have been identified as well using a wide range of imaging techniques. Studies have tended to show various subtle average differences in the volume of certain areas of brain structure between people with and without diagnoses of schizophrenia, although it has become increasingly clear that no single pathological neuropsychological or structural neuroanatomic profile exists.

Morphometry

Structural imaging studies have consistently reported differences in the size and structure of certain brain areas in schizophrenia.

The largest combined neuroimaging study with over 2000 subjects and 2500 controls has replicated these previous findings. Volumetric increases were found in the lateral ventricles (+18%), caudate nucleus and pallidum, and extensive decreases in the hippocampus (-4%), thalamus, amygdala and nucleus accumbens. Together, this indicates that extensive changes do occur in the brains of people with schizophrenia.

A 2006 meta-analysis of MRI studies found that whole brain and hippocampal volume are reduced and that ventricular volume is increased in those with a first psychotic episode relative to healthy controls. The average volumetric changes in these studies are, however, close to the limit of detection by MRI methods, so it remains to be determined whether schizophrenia is a neurodegenerative process that begins at about the time of symptom onset, or whether it is better characterised as a neurodevelopmental process that produces abnormal brain volumes at an early age. In first-episode psychosis, typical antipsychotics like haloperidol were associated with significant reductions in gray matter volume, whereas atypical antipsychotics like olanzapine were not. Studies in non-human primates found gray and white matter reductions for both typical and atypical antipsychotics.

Abnormal findings in the prefrontal cortex, temporal cortex and anterior cingulate cortex are found before the first onset of schizophrenia symptoms. These regions are the regions of structural deficits found in schizophrenia and first-episode subjects. Positive symptoms, such as thoughts of being persecuted, were found to be related to the medial prefrontal cortex, amygdala, and hippocampus region. Negative symptoms were found to be related to the ventrolateral prefrontal cortex and ventral striatum.

Other reported abnormalities include lateral and third ventricle enlargement; abnormal functioning of the amygdala, hippocampus, parahippocampal gyrus, neocortical temporal lobe regions, frontal lobe, prefrontal gray matter and orbitofrontal areas; parietal lobe abnormalities; and subcortical abnormalities including the cavum septi pellucidi, basal ganglia, corpus callosum, thalamus and cerebellum. Such abnormalities usually present in the form of loss of volume.

Most schizophrenia studies have found reduced average volume of the left medial temporal lobe and left superior temporal gyrus, and half of studies have revealed deficits in certain areas of the frontal gyrus, parahippocampal gyrus and temporal gyrus. However, at variance with some findings in individuals with chronic schizophrenia, significant group differences in temporal lobe and amygdala volumes are not, on average, found in first-episode patients.

Finally, MRI studies utilizing modern cortical surface reconstruction techniques have shown widespread reduction in cerebral cortical thickness (i.e., "cortical thinning") in frontal and temporal regions and somewhat less widespread cortical thinning in occipital and parietal regions in people with schizophrenia, relative to healthy control subjects. Moreover, one study decomposed cortical volume into its constituent parts, cortical surface area and cortical thickness, and reported widespread cortical volume reduction in schizophrenia, mainly driven by cortical thinning, but also reduced cortical surface area in smaller frontal, temporal, parietal and occipital cortical regions.

CT scans of the brains of people with schizophrenia show several pathologies. The brain ventricles are enlarged as compared to normal brains. The ventricles hold cerebrospinal fluid (CSF) and enlarged ventricles indicate a loss of brain volume. Additionally, the brains have widened sulci as compared to normal brains, also with increased CSF volumes and reduced brain volume.

Using machine learning, two neuroanatomical subtypes of schizophrenia have been described. Subtype 1 shows widespread low grey matter volumes, particularly in the thalamus, nucleus accumbens, medial temporal, medial prefrontal, frontal, and insular cortices. Subtype 2 shows increased volume in the basal ganglia and internal capsule, with otherwise normal brain volume.

White matter

Diffusion tensor imaging (DTI) allows white matter to be investigated more closely than traditional MRI. Over 300 DTI imaging studies have been published examining white matter abnormalities in schizophrenia. Although considerable variation has been found in the specific regions affected, the general consensus is that fractional anisotropy is reduced in the brains of people with schizophrenia versus controls. Importantly, these differences between subjects and controls could potentially be attributed to lifestyle effects, medication effects, and so on. Other studies have looked at people with first-episode schizophrenia who have never received any medication, so-called medication-naive subjects. These studies, although few in number, also found reduced fractional anisotropy in subject brains compared to control brains. As with earlier findings, abnormalities can be found throughout the brain, although the corpus callosum seems to be most commonly affected.

Functional abnormalities

During executive function tasks, people with schizophrenia demonstrate decreased activity relative to controls in the bilateral dorsolateral prefrontal cortex (dlPFC), right anterior cingulate cortex (ACC), and left mediodorsal nucleus of the thalamus. Increased activation was observed in the left ACC and left inferior parietal lobe. During emotional processing tasks, reduced activations have been observed in the medial prefrontal cortex, ACC, dlPFC and amygdala. A meta-analysis of facial emotional processing observed decreased activation in the amygdala, parahippocampus, lentiform nuclei, fusiform gyrus and right superior frontal gyrus, as well as increased activation in the left insula.

One meta-analysis of functional neuroimaging during acute auditory verbal hallucinations has reported increased activations in areas implicated in language, including the bilateral inferior frontal and postcentral gyri, as well as the left parietal operculum. Another meta-analysis covering both visual and auditory verbal hallucinations replicated the findings in the inferior frontal and postcentral gyri during auditory verbal hallucinations, and also observed hippocampal, superior temporal, insular and medial prefrontal activations. Visual hallucinations were reported to be associated with increased activations in the secondary and associative visual cortices.

PET

Data from a PET study suggests that the less the frontal lobes are activated during a working memory task, the greater the increase in abnormal dopamine activity in the striatum, thought to be related to the neurocognitive deficits in schizophrenia.

PET scan findings in people with schizophrenia indicate cerebral blood flow decreases in the left parahippocampal region. PET scans also show a reduced ability to metabolize glucose in the thalamus and frontal cortex, and point to developmental abnormality in the medial part of the left temporal lobe and in the limbic and frontal systems. PET scans suggest that thought disorders stem from increased flow in the frontal and temporal regions, while delusions and hallucinations were associated with reduced flow in the cingulate, left frontal, and temporal areas. PET scans carried out during active auditory hallucinations revealed increased blood flow in both thalami, the left hippocampus, right striatum, parahippocampus, and orbitofrontal and cingulate areas.

In addition, a decrease in NAA uptake has been reported in the hippocampus and in both the grey and white matter of the prefrontal cortex of those with schizophrenia. NAA may be an indicator of neural activity or of the number of viable neurons; however, given methodological limitations and variance, it cannot be used as a diagnostic method. Decreased prefrontal cortex connectivity has also been observed. DOPA PET studies have confirmed an altered dopamine synthesis capacity in the nigrostriatal system, demonstrating dopaminergic dysregulation.

Neurohacking

From Wikipedia, the free encyclopedia
Jump to navigation Jump to search

Neurohacking is a subclass of biohacking, focused specifically on the brain. Neurohackers seek to better themselves or others by “hacking the brain” to improve reflexes, learn faster, or treat psychological disorders. The modern neurohacking movement has been around since the 1980s. However, herbal supplements have been used to increase brain function for hundreds of years. After a brief period marked by a lack of research in the area, neurohacking started regaining interest in the early 2000s. Currently, most neurohacking is performed via do-it-yourself (DIY) methods by in-home users.

Simple uses of neurohacking include the use of chemical supplements to increase brain function. More complex medical devices can be implanted to treat psychological disorders and illnesses.

History

Anna Wexler, a member of the Department of Science, Technology and Society at the Massachusetts Institute of Technology, claims that neurohacking should be viewed as a subdivision of the ‘life hacking’ movement. She argues that popularized scientific publications have led to a greater public awareness of neuroscience since the turn of the century. As a result, the public was made aware of the brain’s plasticity and its potential to improve.

The use of mind-altering substances derived from plants dates back to ancient history. Neurohackers use a class of chemical substances called nootropics that improve higher-order brain functions. The term nootropics was first proposed in 1972 by Corneliu Giurgea, a Romanian chemist from the University of Bucharest.

In his study, he classified Piracetam as a nootropic and determined that nootropics should fit the following criteria:

  • Enhance learning
  • Resist impairing agents
  • Augment informational transfer between the two hemispheres of the brain
  • Heighten the brain’s resistance against various forms of “aggressions”
  • Improve “tonic, cortico-subcortical ‘control’”
  • Lack the pharmacological effects of other common psychoactive drugs.

Today, various nootropics are available via prescription and over the counter.

The 2000 study by Michael A. Nitsche and Walter Paulus at the University of Goettingen is considered to be one of the first device-oriented attempts at influencing the brain non-invasively. The study found that the motor cortex of the brain responds to weak electrical stimuli in the form of transcranial direct current stimulation (tDCS). A later study in 2003 by Branislav Savic and Beat Meier found that tDCS improves motor sequence learning. More recent studies have concluded that tDCS may alleviate neuropathic pain, depression, schizophrenia, and other neurological disorders. Methods of non-invasive brain stimulation (NIBS) have been found to enhance human performance. In 2019, a study funded by the US Department of Defense found that cognition and motor performance could be improved by tDCS, suggesting that tDCS could be used to enhance the abilities of military personnel. However, side effects such as “itching, tingling, and headaches” were noted, and the study concluded that more research into adequate safety regulations is needed before the technique can be properly implemented.

A resurgence in the popularity of at-home and DIY neurohacking started in 2011. The recent availability of brain stimulation devices contributed to the rise in the home neurohacking movement. Individuals applied weak electrical currents to their brain in hopes of improving performance and productivity. Since 2017, neurohacking devices have been available to the general public for unsupervised use. However, these methods of neurohacking have yet to gain widespread acceptance from the general public, and user retention rate for the devices remains low.

In 2018, Marom Bikson and his colleagues at the City College of New York released a report to aid consumers in making an informed choice regarding the purchase of tDCS devices. In particular, Bikson stated that the report hoped to educate consumers on the reasons why a significant price differentiation existed across the various devices on the market.

Technology

There are three main categories of neurohacking methods: oral supplements or ingestibles, procedural training exercises, and the transmission of electrical currents through the brain.

Oral supplements and ingestibles

Nootropics are any chemical compounds that cause an improvement in brain function. Although many are naturally produced by the body, ingestible supplements are often required to artificially raise the concentration of these compounds in the bloodstream to produce a significant effect. Nootropics can be further classified into two categories: synthetic nootropics and natural nootropics.

Synthetic nootropics

Synthetic nootropics refer to any lab-produced nootropics, including Piracetam. Synthetic nootropics can act at three different junctions:

  1. Dopamine receptors
  2. Adrenergic receptors
  3. Acetylcholine and glutamate receptors

Natural nootropics

Natural, or herbal, nootropics include food-based antioxidants and vitamin supplements. There are three main mechanisms by which natural nootropics affect brain activity:

  1. Neurotransmitter modulation
  2. Modulation of signal transduction
  3. Vasodilation

Popular supplements such as Ginkgo biloba and Panax quinquefolius (American ginseng) are characterized as natural and herbal nootropics. Few studies have been conducted regarding the safety and long-term effects of prescribing these herbal supplements as a means of mitigating age-related cognitive decline. However, current research has indicated that they have the potential to alleviate mental deterioration in older individuals.

Procedural training exercises

Procedural training methods strengthen the connections between neurons. For example, brain-training games have been around since the 2000s. Companies such as PositScience, Lumosity, and CogniFit created video games designed to improve the user’s brain function. These brain-training games aim to improve neural capacity by adding game-like features to comprehension-based exercises.

Transmission of electrical currents

There are three methods by which electrical currents are transmitted through the brain: deep brain stimulation (DBS), transcranial magnetic stimulation (TMS), and transcranial direct current stimulation (tDCS).

Deep brain stimulation (DBS)

DBS involves implanting an electrical device, or neurostimulator, into the brain. The neurostimulator is a thin wire with electrodes at its tip. Low levels of electric current are transmitted through the brain. The location where the electrodes are implanted depends on the neurological disorder being treated. The company Neuralink hopes that its DBS device will include “as many as 3072 electrodes distributed along 96 threads”, and that the procedure to implant the threads would be as non-invasive as LASIK eye surgery.

Transcranial magnetic stimulation (TMS)

TMS sends short bursts of magnetic energy to the left frontal cortex through a small electromagnetic coil. Some studies have found that TMS improves cognition and motor performance. Other studies have investigated the relation between TMS and its ability to recover lost memories.

Transcranial direct current stimulation (tDCS)

Brain cells, or neurons, emit chemical signals across the gaps, or synapses, between neurons. When learning a new skill or topic, the neurons involved in understanding that particular subject are primed to emit signals more readily, so less electrical current is required to trigger the release of the chemicals that cross the synapse. tDCS involves running a very low current (less than 2 mA) through an anode and a cathode placed on the head. Research suggests that brain function improves around the anode, with no change or reduced function around the cathode.
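As a rough worked example of what "very low current" means at the scalp (the electrode size is an assumed typical value, not something specified above), the average current density under an electrode is simply the applied current divided by the electrode's contact area:

    # Rough worked example: average current density under a tDCS electrode.
    # The 2 mA current matches the limit mentioned above; the 35 cm^2
    # (5 x 7 cm) sponge electrode is an assumed, commonly used size.
    current_mA = 2.0
    electrode_area_cm2 = 35.0

    density_mA_per_cm2 = current_mA / electrode_area_cm2
    print(f"~{density_mA_per_cm2:.3f} mA/cm^2")  # about 0.057 mA/cm^2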

Applications

Many applications of neurohacking center around improving quality of life.

Mental health

Bettering people's mental health is one primary application of neurohacking.

Virtual reality exposure therapy is one application of neurohacking, and is being used to treat post-traumatic stress. The USC Institute for Creative Technologies has been working on exposure therapy techniques since 2005, and exposure therapy is now an evidence-based treatment for post-traumatic stress.

Exposure therapy retrains the mind of the patient to reduce the fear associated with feeling a certain way or experiencing certain triggering stimuli. By confronting situations in a safe and controlled virtual reality environment, the patient is able to reduce the anxiety associated with those circumstances.

The FDA has approved DBS devices for the treatment of both Parkinson's disease and dystonia. There are several risks involved with this treatment, such as depression, hypomania, euphoria, mirth, and hypersexuality; however, permanent complications are rare. DBS has also been used to treat Tourette syndrome, dyskinesia, epilepsy and depression, although more research is needed in these areas before it can be deemed safe.

Human enhancement

Enhancing the human experience is another application of neurohacking. Methods include simple brain-training games, chemical enhancers, and electrical brain stimulation.

Caffeine is an effective method for enhancing human performance in everyday life. Caffeine is the most popular drug in the world (humans drink a collective 1.6 billion cups per day) and is also the most popular method by which people are neurohacking. Caffeine improves memory, sociability, and alertness.

Another chemical performance enhancer, dihexa, is an ingestible neuropeptide that was approved for use in the United States in 2019. It is prescribed to clients who want to achieve a specific goal, such as learning a new language or mastering an instrument.

Information retrieval

The third primary application of neurohacking is information retrieval from the brain. This typically involves the use of a brain-machine interface (BMI) – an apparatus to measure electrical signals in the brain.

In 2016, researchers modeled an individual’s interest in digital content by monitoring their EEG (electroencephalogram). The researchers asked the user to read Wikipedia articles. From data in the EEG, they could predict which article the user would want to read next based on the individual’s expressed interest in each topic. The researchers claim this paradigm can be used to “recommend information without any explicit user interaction”.
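The cited study's actual modeling pipeline is not described here; purely as an illustrative sketch, a relevance predictor trained on per-article EEG features might look like the following Python snippet (the features and interest labels are synthetic assumptions):

    # Illustrative sketch only: predicting expressed interest from per-article
    # EEG features. The features and labels are synthetic and do not reproduce
    # the cited study's method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_articles, n_features = 300, 32     # e.g. band-power features per EEG channel
    X = rng.normal(size=(n_articles, n_features))
    y = (X[:, 0] + 0.5 * rng.normal(size=n_articles) > 0).astype(int)  # synthetic "interest"

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")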

In July 2019, Neuralink – a company developing implantable brain-machine interfaces – presented their research on their high bandwidth BMI. Neuralink claims to have developed an implantable BMI device that is capable of recording and delivering full bandwidth data from the brain. The company hopes to use this technology to create a high-speed connection between the brain and digital technology, bypassing the need to type search queries or read the results.

Legal and ethical aspects

The neurohacking trend has been heavily commercialized, with companies such as Lumosity and CogniFit marketing games that allegedly optimize the performance of the brain as well as alleviate the symptoms of senescence-related cognitive decline and other neurodegenerative disorders. Several studies have called the effectiveness of this software into question. The Federal Trade Commission (FTC) has filed claims against some companies producing brain-training software for misleading marketing; claims against Lumosity for misleading advertising exceeded $2 million. Conclusive evidence regarding the effectiveness of brain-training software has yet to be presented. Despite this uncertainty, public demand for such products is rising; sales in 2015 reached $67 million in the United States and Canada.

Unfair advantages

No governing organizations responsible for overseeing athletics and education have policies regulating neurohacking. Athletes and students can use neurohacking to gain an unfair advantage in sporting events and academic settings. Studies have indicated that neurohacking can improve memory, creativity, learning speed, muscle gain, and athletic performance. However, there are no well-developed tests or instruments capable of detecting neurohacking. Students and athletes may utilize neurohacking techniques and never be detected.

Side effects and potential risks

Most manufacturers fail to disclose the potential side effects of neurohacking devices, which include significant changes to the user’s self-identity and decreased reasoning skills. Affordable neurohacking devices are available online, with prices ranging from $99 to $800, making them easily accessible to consumers. For instance, a “brain stimulator” device produced by the “Brain Stimulator” company that utilizes tDCS is priced at $127 to $179. However, these devices are rarely regulated by the government, and using these unapproved devices with no medical supervision could cause devastating side effects. Cases have been cited in which individuals have physically harmed others as a side effect of neurohacking.

Insurance claims

The Vercise DBS System produced by Boston Scientific Corporation is the only neurohacking medical device for sale that is approved by the Food and Drug Administration (FDA) and compliant with the Code of Federal Regulations (CFR) and Good Practices in Clinical Research. With the rise of DIY neurohacking, many individuals self-treat without proper supervision by a medical professional. Insurance companies deny medical insurance compensation for users who are injured while using unapproved neurohacking devices, and most neurohacking devices are uncertified and unregulated.

 
