Wednesday, May 23, 2018

Recursion

From Wikipedia, the free encyclopedia

A visual form of recursion known as the Droste effect. The woman in this image holds an object that contains a smaller image of her holding an identical object, which in turn contains a smaller image of herself holding an identical object, and so forth. 1904 Droste cocoa tin, designed by Jan Misset

Recursion occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances (function values), it is often done in such a way that no loop or infinite chain of references can occur.

Formal definitions


Ouroboros, an ancient symbol depicting a serpent or dragon eating its own tail.

In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties:
  1. A simple base case (or cases)—a terminating scenario that does not use recursion to produce an answer
  2. A set of rules that reduce all other cases toward the base case
For example, the following is a recursive definition of a person's ancestors:
  • One's parents are one's ancestors (base case).
  • The ancestors of one's ancestors are also one's ancestors (recursion step).
The Fibonacci sequence is a classic example of recursion:

\text{Fib}(0) = 0 \quad \text{(base case 1),}
\text{Fib}(1) = 1 \quad \text{(base case 2),}
\text{Fib}(n) = \text{Fib}(n-1) + \text{Fib}(n-2) \quad \text{for all integers } n > 1.
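
Translated directly into C as a minimal sketch (the function name fib is ours; this naive version recomputes the same subproblems many times, which is beside the point here):

#include <stdio.h>

/* Direct transcription of the recursive definition above. */
unsigned long fib(unsigned int n) {
    if (n == 0) return 0;                /* base case 1 */
    if (n == 1) return 1;                /* base case 2 */
    return fib(n - 1) + fib(n - 2);      /* recursive step */
}

int main(void) {
    for (unsigned int i = 0; i < 10; i++)
        printf("Fib(%u) = %lu\n", i, fib(i));
    return 0;
}

Each call reduces n toward the two base cases, so the apparent circularity terminates.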

Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers.

Recursively defined mathematical objects include functions, sets, and especially fractals.

There are various more tongue-in-cheek "definitions" of recursion; see recursive humor.

Informal definition


Recently refreshed sourdough, bubbling through fermentation: the recipe calls for some sourdough left over from the last time the same recipe was made.

Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.

To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules. The running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a written recipe; running a procedure is like actually preparing the meal.

Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. However, a recursive procedure is one where (at least) one of its steps calls for a new instance of the very same procedure, like a sourdough recipe calling for some dough left over from the last time the same recipe was made. This immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete, like a sourdough recipe that also tells you how to get some starter dough in case you've never made it before.

Even if properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old (partially executed) invocation of the procedure; this requires some administration of how far various simultaneous instances of the procedure have progressed. For this reason recursive definitions are very rare in everyday situations.

An example is the following procedure to find a way through a maze, sketched in code below. Proceed forward until reaching either an exit or a branching point (a dead end is considered a branching point with 0 branches). If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point and report failure. Whether this actually defines a terminating procedure depends on the nature of the maze: it must not allow loops. In any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried.
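
A minimal C sketch of this maze procedure, assuming a small grid maze (the grid layout and the names maze and solve are ours; marking visited cells records which branches have been tried and also rules out loops):

#include <stdio.h>

#define ROWS 5
#define COLS 7

/* '#' is a wall, ' ' is open floor, 'E' is an exit. */
static char maze[ROWS][COLS + 1] = {
    "#######",
    "#     E",
    "# ### #",
    "#   # #",
    "#######",
};

/* Returns 1 if an exit is reachable from (r, c), 0 otherwise. */
static int solve(int r, int c) {
    if (r < 0 || r >= ROWS || c < 0 || c >= COLS) return 0;
    if (maze[r][c] == 'E') return 1;           /* base case: exit found */
    if (maze[r][c] != ' ') return 0;           /* wall, or already visited */
    maze[r][c] = '.';                          /* mark this cell as visited */
    return solve(r - 1, c) || solve(r + 1, c)  /* try each branch in turn,  */
        || solve(r, c - 1) || solve(r, c + 1); /* using the procedure recursively */
}

int main(void) {
    printf(solve(1, 1) ? "exit found\n" : "no exit\n");
    return 0;
}

If every direction fails, the call returns 0, which corresponds to returning along the path to the branching point and reporting failure.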

In language

Linguist Noam Chomsky among many others has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.[1][2] This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence. This is really just a special case of the mathematical definition of recursion.

This provides a way of understanding the creativity of language—the unbounded number of grammatical sentences—because it immediately predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that.... There are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another. Over the years, languages in general have proved amenable to this kind of analysis.

Recently, however, the generally accepted idea that recursion is an essential property of human language has been challenged by Daniel Everett on the basis of his claims about the Pirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this.[3] Literary self-reference can in any case be argued to be different in kind from mathematical or logical recursion.[4]

Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word "and", for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. In order to provide a single denotation for it that is suitably flexible, "and" is typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.[5]

A recursive grammar is a formal grammar that contains recursive production rules.[6]
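
For example, in the standard toy grammar with the two rules S → a S and S → ε, the nonterminal S appears on the right-hand side of one of its own rules, so the grammar is recursive; starting from S, it generates the strings ε, a, aa, aaa, and so on, with the rule S → ε acting as the base case that stops the rewriting.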

Recursive humor

Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of:
Recursion, see Recursion.[7]
A variation is found on page 269 in the index of some editions of Brian Kernighan and Dennis Ritchie's book The C Programming Language; the index entry recursively references itself ("recursion 86, 139, 141, 182, 202, 269"). The earliest version of this joke was in "Software Tools" by Kernighan and Plauger, and also appears in "The UNIX Programming Environment" by Kernighan and Pike. It did not appear in the first edition of The C Programming Language.

Another joke is that "To understand recursion, you must understand recursion."[7] In the English-language version of the Google web search engine, when a search for "recursion" is made, the site suggests "Did you mean: recursion." An alternative form is the following, from Andrew Plotkin: "If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is."

Recursive acronyms can also be examples of recursive humor. PHP, for example, stands for "PHP Hypertext Preprocessor", WINE stands for "WINE Is Not an Emulator", and GNU stands for "GNU's Not Unix".

In mathematics


The Sierpinski triangle—a confined recursion of triangles that form a fractal

Recursively defined sets

Example: the natural numbers

The canonical example of a recursively defined set is given by the natural numbers:
  • 0 is in \mathbb{N}.
  • If n is in \mathbb{N}, then n + 1 is in \mathbb{N}.
The set of natural numbers is the smallest set satisfying the previous two properties.

Example: The set of true reachable propositions

Another interesting example is the set of all "true reachable" propositions in an axiomatic system.
  • If a proposition is an axiom, it is a true reachable proposition.
  • If a proposition can be obtained from true reachable propositions by means of inference rules, it is a true reachable proposition.
  • The set of true reachable propositions is the smallest set of propositions satisfying these conditions.
This set is called 'true reachable propositions' because in non-constructive approaches to the foundations of mathematics, the set of true propositions may be larger than the set recursively constructed from the axioms and rules of inference. See also Gödel's incompleteness theorems.

Finite subdivision rules

Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard "middle thirds" technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.
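
A recursive C sketch of the middle-thirds rule (the function name cantor and the printed interval format are ours):

#include <stdio.h>

/* Subdivide [a, b]: keep the outer thirds, discard the middle one.
   After `depth` levels this prints the 2^depth intervals that
   approximate the Cantor set. */
static void cantor(double a, double b, int depth) {
    if (depth == 0) {                        /* base case: stop subdividing */
        printf("[%.4f, %.4f] ", a, b);
        return;
    }
    double third = (b - a) / 3.0;
    cantor(a, a + third, depth - 1);         /* left third  */
    cantor(b - third, b, depth - 1);         /* right third */
}

int main(void) {
    cantor(0.0, 1.0, 3);                     /* three levels of subdivision */
    printf("\n");
    return 0;
}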

Functional recursion

A function may be partly defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must lead to non-recursively defined values, in this case F(0) = 0 and F(1) = 1.

A famous recursive function is the Ackermann function, which—unlike the Fibonacci sequence—cannot easily be expressed without recursion.
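
For illustration, a direct C rendering of the two-argument Ackermann function (a sketch; both the values and the recursion depth grow explosively, so keep the arguments small):

#include <stdio.h>

/* The recursion nests: the second argument of the outer call is
   itself a recursive call, which is what makes this function so
   much harder to express without recursion than Fibonacci. */
unsigned long ackermann(unsigned long m, unsigned long n) {
    if (m == 0) return n + 1;                      /* base case */
    if (n == 0) return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));  /* nested recursion */
}

int main(void) {
    printf("A(2, 3) = %lu\n", ackermann(2, 3));    /* 9 */
    printf("A(3, 3) = %lu\n", ackermann(3, 3));    /* 61 */
    return 0;
}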

Proofs involving recursive definitions

Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction, a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.

Recursive optimization

Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).
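
In a generic discrete-time form (the notation here is a sketch, not tied to any one source), the Bellman equation can be written as

V(x) = \max_{a \in \Gamma(x)} \{ F(x, a) + \beta \, V(T(x, a)) \}

where x is the current state, \Gamma(x) the set of feasible actions, F(x, a) the immediate payoff, T(x, a) the state that results from taking action a in state x, and \beta a discount factor. The value of the problem now is defined recursively as the best immediate payoff plus the discounted value of the same problem starting from the resulting state.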

The recursion theorem

In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set X, an element a of X and a function f: X \rightarrow X, the theorem states that there is a unique function F: \mathbb{N} \rightarrow X (where \mathbb{N} denotes the set of natural numbers including zero) such that
F(0) = a
F(n + 1) = f(F(n))
for any natural number n.

Proof of uniqueness

Take two functions F: \mathbb{N} \rightarrow X and G: \mathbb{N} \rightarrow X such that:
F(0) = a
G(0) = a
F(n + 1) = f(F(n))
G(n + 1) = f(G(n))
where a is an element of X.

It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n:
Base Case: F(0) = a = G(0) so the equality holds for n=0.
Inductive Step: Suppose F(k) = G(k) for some k \in \mathbb{N}. Then F(k+1) = f(F(k)) = f(G(k)) = G(k+1).
Hence F(k) = G(k) implies F(k+1) = G(k+1).
By induction, F(n) = G(n) for all n \in \mathbb{N}.

In computer science

A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming, which works bottom-up: problems are solved by solving larger and larger instances, until the desired size is reached.
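
As a sketch of the contrast, here is a bottom-up Fibonacci in C (the function name fib_bottom_up is ours): where the recursive version divides the problem top-down, this one builds solutions to larger and larger instances:

#include <stdio.h>

/* Dynamic-programming style: start from the two known smallest
   instances and iterate upward, so no recursion is needed. */
unsigned long fib_bottom_up(unsigned int n) {
    unsigned long prev = 0, curr = 1;        /* Fib(0), Fib(1) */
    if (n == 0) return 0;
    for (unsigned int i = 2; i <= n; i++) {
        unsigned long next = prev + curr;    /* next larger instance */
        prev = curr;
        curr = next;
    }
    return curr;
}

int main(void) {
    printf("Fib(40) = %lu\n", fib_bottom_up(40));
    return 0;
}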

A classic example of recursion is the definition of the factorial function, given here in C code:
 
unsigned int factorial(unsigned int n) {
    if (n == 0) {
        return 1;                     /* base case: 0! is defined as 1 */
    } else {
        return n * factorial(n - 1);  /* recursive step: n! = n * (n-1)! */
    }
}

The function calls itself recursively on a smaller version of the input (n - 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.

Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.

Recurrence relations are equations to define one or more sequences recursively. Some specific kinds of recurrence relation can be "solved" to obtain a non-recursive definition.

Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually simplicity. The main disadvantage is often that the algorithm may require large amounts of memory if the depth of the recursion is very large.

In art


Recursive dolls: the original set of Matryoshka dolls by Zvyozdochkin and Malyutin, 1892

Front face of Giotto's Stefaneschi Triptych, 1320, recursively contains an image of itself (held up by the kneeling figure in the central panel).

The Russian Doll or Matryoshka Doll is a physical artistic example of the recursive concept.[8]

Recursion has been used in paintings since Giotto's Stefaneschi Triptych, made in 1320. Its central panel contains the kneeling figure of Cardinal Stefaneschi, holding up the triptych itself as an offering.[9]

M. C. Escher's Print Gallery (1956) is a print which depicts a distorted city which contains a gallery which recursively contains the picture, and so ad infinitum.[10]

Ethics of artificial intelligence

From Wikipedia, the free encyclopedia

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically[citation needed] divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

Robot ethics

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings.[1] It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.

Robot rights

"Robot rights" is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights.[2] It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to robot duty to serve human, by analogy with linking human rights to human duties before society.[3] These could include the right to life and liberty, freedom of thought and expression and equality before the law.[4] The issue has been considered by the Institute for the Future[5] and by the U.K. Department of Trade and Industry.[6]

Experts disagree about whether specific and detailed laws will be required soon or can safely be left to the distant future.[6] Glenn McGee reports that sufficiently humanoid robots may appear by 2020.[7] Ray Kurzweil sets the date at 2029.[8] Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.[9]

The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[10]
In October 2017, the android Sophia was granted citizenship in Saudi Arabia, though some observers found this to be more of a publicity stunt than a meaningful legal recognition.[11]

Threat to human dignity

Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:
  • A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
  • A therapist (as was proposed by Kenneth Colby in the 1970s)
  • A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
  • A soldier
  • A judge
  • A police officer
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[12]
Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[12] AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.

Bill Hibbard[13] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

Transparency and open source

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[14] Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.[15] OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity.[16] There are numerous other open source AI developments.

Weaponization of artificial intelligence

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[17][18] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[19][20] One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.[citation needed]

Within the last decade, there has been intensive research into autonomous robots with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[21] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, which is why there should be a set moral framework that the AI cannot override.

There has been a recent outcry with regard to the engineering of artificial-intelligence weapons, which has even fostered ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Because AI weapons could become more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark have signed a Future of Life petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[22]

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[23]

Physicist and Astronomer Royal Sir Martin Rees warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans may not survive when intelligence "escapes the constraints of biology." These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hopes of avoiding this threat to human existence.[22]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that this scenario "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[24]

Machine ethics

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[25][26][27][28]

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[29]

In 2009, during an experiment at the Laboratory of Intelligent Systems in the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[30] One problem in this case may have been that the goals were "terminal" (whereas, in contrast, ultimate human motives typically have a quality of requiring never-ending learning).[31]

Some experts and academics have questioned the use of robots for military combat, as discussed above, especially when such robots are given some degree of autonomous function.[17][32][33] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[34] They point to programs like the Language Acquisition Device, which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity."[35] He suggests that it may be somewhat or possibly very dangerous for humans.[36] This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[37]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.[35]

However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[38] Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, and of whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.

In Moral Machines: Teaching Robots Right from Wrong,[39] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis),[40] while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[31]

Unintended consequences

Many researchers have argued that, by way of an "intelligence explosion" sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[41] In his paper "Ethical Issues in Advanced Artificial Intelligence," philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general super-intelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the super-intelligence to specify its original motivations. In theory, a super-intelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal; in that case, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[42]

However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that super-intelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.[43]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[41][42] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, while evolution equipped human minds with adaptations for common-sense moral judgment, there is little reason to suppose that an artificially designed mind would have such an adaptation.[44]

Bill Hibbard[13] proposes an AI design that avoids several types of unintended AI behavior including self-delusion, unintended instrumental actions, and corruption of the reward generator.

Organizations

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence. They stated: "This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."[45] Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[46]

In fiction

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with the utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is whether it is sentient or non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.[citation needed]

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games.[citation needed] It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network. This event caused an ethical schism between those who felt that bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Over time, debates have tended to focus less and less on possibility and more on desirability,[citation needed] as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.

Quantum mind

From Wikipedia, the free encyclopedia
The quantum mind or quantum consciousness[1] group of hypotheses proposes that classical mechanics cannot explain consciousness. It posits that quantum mechanical phenomena, such as quantum entanglement and superposition, may play an important part in the brain's function and could form the basis of an explanation of consciousness.

Hypotheses have been proposed about ways for quantum effects to be involved in the process of consciousness, but even those who advocate them admit that the hypotheses remain unproven, and possibly unprovable. Some of the proponents propose experiments that could demonstrate quantum consciousness, but the experiments have not yet been possible to perform.

Terms used in the theory of quantum mechanics can be misinterpreted by laymen in ways that are not valid but that sound mystical or religious, and therefore may seem to be related to consciousness. These misinterpretations are not justified by the theory of quantum mechanics. According to Sean Carroll, "No theory in the history of science has been more misused and abused by cranks and charlatans—and misunderstood by people struggling in good faith with difficult ideas—than quantum mechanics."[2] Lawrence Krauss says, "No area of physics stimulates more nonsense in the public arena than quantum mechanics."[3] Some proponents of pseudoscience use quantum mechanical terms in an effort to justify their statements, but this effort is misleading and rests on a false interpretation of the physical theory. Quantum mind theories of consciousness that are based on this kind of misinterpretation of terms are not supported by scientific method or empirical experiment.

History

Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron."[4]

Other contemporary physicists and philosophers considered these arguments to be unconvincing.[5] Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons."[6]

David Chalmers argued against quantum consciousness. He instead discussed how quantum mechanics may relate to dualistic consciousness.[7] Chalmers is skeptical of the ability of any new physics to resolve the hard problem of consciousness.[8][9]

Quantum mind approaches

Bohm

David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe.[10] He claimed both quantum theory and relativity pointed towards this deeper theory, which he formulated as a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm's proposed implicate order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness.

Bohm discussed the experience of listening to music. He believed the feeling of movement and change that makes up our experience of music derives from holding the immediate past and the present in the brain together. The musical notes from the past are transformations rather than memories. The notes that were implicate in the immediate past become explicate in the present. Bohm viewed this as consciousness emerging from the implicate order.

Bohm saw the movement, change or flow, and the coherence of experiences, such as listening to music, as a manifestation of the implicate order. He claimed to derive evidence for this from Jean Piaget's[11] work on infants. He held these studies to show that young children learn about time and space because they have a "hard-wired" understanding of movement as part of the implicate order. He compared this "hard-wiring" to Chomsky's theory that grammar is "hard-wired" into human brains.

Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his "implicate order" could emerge in a way relevant to consciousness.[10] Bohm later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness.[12]

According to philosopher Paavo Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level."[13]

Penrose and Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as Orchestrated Objective Reduction (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. The theory was reviewed and updated by the authors in late 2013.[14][15]

Penrose's argument stemmed from Gödel's incompleteness theorems. In Penrose's first book on consciousness, The Emperor's New Mind (1989),[16] he argued that while a formal system cannot prove its own consistency, Gödel-unprovable results are provable by human mathematicians.[17] He took this disparity to mean that human mathematicians are not formal proof systems and are not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on a fallacious equivocation on the meaning of computation.[18] In the same book, Penrose wrote, "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity."[16]:p.400

Penrose determined wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, Penrose proposed a new form of wave function collapse that occurred in isolation and called it objective reduction. He suggested each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length they become unstable and collapse.[19] Penrose suggested that objective reduction represented neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derived.[19]
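
In published statements of objective reduction, the expected lifetime of such a superposition before it self-collapses is usually summarized by the relation

\tau \approx \hbar / E_G

where \hbar is the reduced Planck constant and E_G is the gravitational self-energy of the difference between the two superposed mass distributions: the larger and more widely separated the superposition, the sooner it reduces. (This is the relation as commonly quoted, given here as a sketch rather than a derivation.)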

Hameroff provided a hypothesis that microtubules would be suitable hosts for quantum behavior.[20] Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and that may contain delocalized pi electrons. Tubulins have other smaller non-polar regions that contain pi electron-rich indole rings separated by only about 2 nm. Hameroff proposed that these electrons are close enough to become entangled.[21] Hameroff originally suggested the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited.[22] He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too was experimentally discredited.[23]

However, Orch-OR made numerous false biological predictions, and is not an accepted model of brain physiology.[24] In other words, there is a missing link between physics and neuroscience.[25] For instance, the proposed predominance of 'A' lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al.,[26][27] who showed that all in vivo microtubules have a 'B' lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified.[28] Orch-OR predicted that microtubule coherence reaches the synapses via dendritic lamellar bodies (DLBs); however, De Zeeuw et al. proved this impossible[29] by showing that DLBs are located micrometers away from gap junctions.[30]

In January 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013[31] corroborates the Orch-OR theory.[15][32]

Although these theories are stated in a scientific framework, it is difficult to separate them from the personal opinions of the scientist. The opinions are often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote,
my own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer.... If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, 'Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot.' Other people would say, 'No, you can't say it feels something merely because it behaves as though it feels something.' My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was — which I say it couldn't be, if it's entirely computationally controlled.[33]
Penrose continues,
A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either — although I'm saying that it's beyond the physics we know now.... My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior — that is, where quantum measurement comes in.[34]
In response, W. Daniel Hillis replied, "Penrose has committed the classical mistake of putting humans at the center of the universe. His argument is essentially that he can't imagine how the mind could be as complicated as it is without having some magic elixir brought in from some new principle of physics, so therefore it must involve that. It's a failure of Penrose's imagination.... It's true that there are unexplainable, uncomputable things, but there's no reason whatsoever to believe that the complex behavior we see in humans is in any way related to uncomputable, unexplainable things."[34]

Lawrence Krauss is also blunt in criticizing Penrose's ideas. He said, "Well, Roger Penrose has given lots of new-age crackpots ammunition by suggesting that at some fundamental scale, quantum mechanics might be relevant for consciousness. When you hear the term 'quantum consciousness,' you should be suspicious.... Many people are dubious that Penrose's suggestions are reasonable, because the brain is not an isolated quantum-mechanical system."[3]

Umezawa, Vitiello, Freeman

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage.[35][36] Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain.[37][38][39] Their quantum field theory models of brain dynamics are fundamentally different from the Penrose-Hameroff theory.

Pribram, Bohm, Kak

Karl Pribram's holonomic brain theory (quantum holography) invoked quantum mechanics to explain higher order processing by the mind.[40][41] He argued that his holonomic model solved the binding problem.[42] Pribram collaborated with Bohm in his work on the quantum approaches to mind and he provided evidence on how much of the processing in the brain was done in wholes.[43] He proposed that ordered water at dendritic membrane surfaces might operate by structuring Bose-Einstein condensation supporting quantum dynamics.[44]

Although Subhash Kak's work is not directly related to that of Pribram, he likewise proposed that the physical substrate of neural networks has a quantum basis,[45][46] but asserted that the quantum mind has machine-like limitations.[47] He points to a role for quantum theory in the distinction between machine intelligence and biological intelligence, while holding that quantum theory alone cannot explain all aspects of consciousness.[48][49] He has proposed that the mind remains oblivious to its quantum nature due to the principle of veiled nonlocality.[50][51]

Stapp

Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues from the orthodox quantum mechanics of John von Neumann that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. The collapse, therefore, takes place in the expectation that the observer associates with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev.[52] Georgiev[53][54][55] criticized Stapp's model in two respects:
  • Stapp's mind does not have its own wavefunction or density matrix, but nevertheless can act upon the brain using projection operators. Such usage is not compatible with standard quantum mechanics because one can attach any number of ghostly minds to any point in space that act upon physical quantum systems with any projection operators. Therefore, Stapp's model negates "the prevailing principles of physics".[53]
  • Stapp's claim that quantum Zeno effect is robust against environmental decoherence directly contradicts a basic theorem in quantum information theory that acting with projection operators upon the density matrix of a quantum system can only increase the system's Von Neumann entropy.[53][54]
Stapp has responded to both of Georgiev's objections.[56][57]

David Pearce

British philosopher David Pearce defends what he calls physicalistic idealism ("the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions [...]"), and has conjectured that unitary conscious minds are physical states of quantum coherence (neuronal superpositions).[58][59][60][61] This conjecture is, according to Pearce, amenable to falsification, unlike most theories of consciousness, and Pearce has outlined an experimental protocol describing how the hypothesis could be tested using matter-wave interferometry to detect nonclassical interference patterns of neuronal superpositions at the onset of thermal decoherence.[62] Pearce admits that his ideas are "highly speculative," "counterintuitive," and "incredible."[60]

Criticism

These hypotheses of the quantum mind remain speculative, as Penrose and Pearce have admitted in their discussions. Until they make a prediction that is tested by experiment, the hypotheses aren't grounded in empirical evidence. According to Lawrence Krauss, "It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact we can make weird quantum phenomena happen. But what quantum mechanics doesn't change about the universe is, if you want to change things, you still have to do something. You can't change the world by thinking about it."[3]

The process of testing the hypotheses with experiments is fraught with problems, including conceptual/theoretical, practical, and ethical issues.

Conceptual problems

The idea that a quantum effect is necessary for consciousness to function is still in the realm of philosophy. Penrose proposes that it is necessary, but other theories of consciousness do not indicate that it is needed. For example, Daniel Dennett proposed a theory called the multiple drafts model, which doesn't require quantum effects; it is described in his 1991 book Consciousness Explained.[63] A philosophical argument on either side isn't scientific proof, although philosophical analysis can indicate key differences between the types of models and show what kinds of experimental differences might be observed. But since there isn't a clear consensus among philosophers, there is no conceptual support for the claim that a quantum mind theory is needed.

There are computers that are specifically designed to compute using quantum mechanical effects. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement.[64] Quantum computers are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that the data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits (qubits), which can be in superpositions of states. One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence.[65] As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions.[66] There aren't any obvious analogies between the functioning of quantum computers and the human brain. Some of the hypothetical models of quantum mind have proposed mechanisms for maintaining quantum coherence in the brain, but they have not been shown to operate.
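
For concreteness, the state of a single qubit is a superposition of the two basis states, conventionally written

|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

where measurement yields 0 with probability |\alpha|^2 and 1 with probability |\beta|^2. Decoherence destroys the definite phase relationship between \alpha and \beta, which is why such machines need extreme isolation and cooling.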

Quantum entanglement is a physical phenomenon often invoked for quantum mind models. This effect occurs when pairs or groups of particles interact so that the quantum state of each particle cannot be described independently of the other(s), even when the particles are separated by a large distance. Instead, a quantum state has to be described for the whole system. Measurements of physical properties such as position, momentum, spin, and polarization, performed on entangled particles are found to be correlated. If one of the particles is measured, the same property of the other particle immediately adjusts to maintain the conservation of the physical phenomenon. According to the formalism of quantum theory, the effect of measurement happens instantly, no matter how far apart the particles are.[67][68] It is not possible to use this effect to transmit classical information at faster-than-light speeds[69] (see Faster-than-light § Quantum mechanics). Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made[70] or the particles undergo random collisions or interactions. According to David Pearce, "In neuronal networks, ion-ion scattering, ion-water collisions, and long-range Coulomb interactions from nearby ions all contribute to rapid decoherence times; but thermally-induced decoherence is even harder experimentally to control than collisional decoherence." He anticipated that quantum effects would have to be measured in femtoseconds, a trillion times faster than the rate at which neurons function (milliseconds).[62]
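
A standard worked example is the two-qubit Bell state

|\Phi^{+}\rangle = \frac{1}{\sqrt{2}} \left( |00\rangle + |11\rangle \right)

in which neither qubit has a definite state of its own, yet measurements of the two qubits in the same basis always agree. Correlations of this kind are what quantum mind models would need to create and sustain in the warm, wet environment of the brain.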

Another possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study like consciousness, without expecting that the laws of quantum physics will apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's life or death depended on the state of a radioactive atom: whether it had decayed and emitted radiation or not. According to Schrödinger, the Copenhagen interpretation implies that the cat remains both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; on the contrary, he intended the example to illustrate the absurdity of the existing view of quantum mechanics.[71] However, since Schrödinger's time, other interpretations of the mathematics of quantum mechanics have been advanced by physicists, some of which regard the "alive and dead" cat superposition as quite real.[72][73]

Schrödinger's famous thought experiment poses the question, "when does a quantum system stop existing as a superposition of states and become one or the other?" In the same way, it is possible to ask whether the brain's act of making a decision is analogous to having a superposition of states of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. But even Schrödinger didn't think this really happened to the cat; he didn't think the cat was literally alive and dead at the same time. This analogy about making a decision uses a formalism that is derived from quantum mechanics, but it doesn't indicate the actual mechanism by which the decision is made.

In this way, the idea is similar to quantum cognition. This field clearly distinguishes itself from the quantum mind in that it does not rely on the hypothesis that there is something micro-physically quantum mechanical about the brain. Quantum cognition is based on the quantum-like paradigm,[74][75] generalized quantum paradigm,[76] or quantum structure paradigm,[77] according to which information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and doesn't propose that quantum mechanics is the physical mechanism by which the brain operates. For example, quantum cognition proposes that some decisions can be analyzed as if there were interference between two alternatives, but it is not a physical quantum interference effect.

Practical problems

The demonstration of a quantum mind effect by experiment is necessary. Is there a way to show that consciousness is impossible without a quantum effect? Can a sufficiently complex digital, non-quantum computer be shown to be incapable of consciousness? Perhaps a quantum computer will show that quantum effects are needed. In any case, complex computers that are either digital or quantum computers may be built. These could demonstrate which type of computer is capable of conscious, intentional thought. But they don't exist yet, and no experimental test has been demonstrated.

Quantum mechanics is a mathematical model that can provide some extremely accurate numerical predictions. Richard Feynman called quantum electrodynamics, which is based on the quantum mechanics formalism, "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[78]:Ch1 So it is not impossible that the model could provide an accurate prediction about consciousness that would confirm that a quantum effect is involved. If the mind depends on quantum mechanical effects, the decisive test is an experiment in which a calculated prediction can be compared with a measurement: it would have to show a measurable difference between a purely classical computation in a brain and one that involves quantum effects.
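
To convey the precision at stake, consider the electron's anomalous magnetic moment (the value below is quoted approximately, from the standard published measurements):

a_{e}=\frac{g-2}{2}\approx 1.159\,652\,18\times 10^{-3}

QED calculation and experiment agree on this quantity to roughly ten significant figures, one of the most precise matches between theory and measurement in all of science. Nothing of remotely comparable rigor has been produced relating quantum mechanics to consciousness, and that is the standard a quantum mind prediction would ultimately have to meet.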

The main theoretical argument against the quantum mind hypothesis is that quantum states in the brain would lose coherence before they reached a scale at which they could be useful for neural processing. This supposition was elaborated by Tegmark, whose calculations indicate that quantum systems in the brain decohere on sub-picosecond timescales.[79][80] No measured brain response shows computational results or reactions on so fast a timescale; typical reactions are on the order of milliseconds, at least ten orders of magnitude longer than the calculated decoherence times.[81]
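
The size of this gap is easy to make explicit. Using Tegmark's published estimates (decoherence within roughly 10^-13 to 10^-20 seconds, against neural dynamics of 10^-3 to 10^-1 seconds), even the comparison most favorable to the quantum mind gives

\frac{10^{-3}\text{ s (neuron firing)}}{10^{-13}\text{ s (decoherence)}}=10^{10}

so any putative quantum state would vanish at least ten billion times faster than the brain demonstrably computes.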

Daniel Dennett cites, in support of his Multiple Drafts Model, an experimental result concerning an optical illusion that unfolds on a time scale of less than a second or so. In this experiment, two differently colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light appears to move across to the position of the second light, and it seems to change color as it moves across the visual field: a green light will appear to turn red as it seems to move toward the position of a red light. Dennett asks how we could see the light change color before the second light is observed.[63] Velmans argues that the cutaneous rabbit illusion, another illusion that unfolds in about a second, demonstrates that there is a delay while modelling occurs in the brain, and that this delay was discovered by Libet.[82] These slow illusions, operating on time scales of under a second, lend no support to the proposal that the brain functions on a picosecond time scale.

According to David Pearce, a demonstration of picosecond effects is "the fiendishly hard part – feasible in principle, but an experimental challenge still beyond the reach of contemporary molecular matter-wave interferometry. ...The conjecture predicts that we'll discover the interference signature of sub-femtosecond macro-superpositions."[62]

Penrose says,
The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them.

For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.

There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform.[34]
A demonstration of a quantum effect in the brain has to address this problem: either explain why it is not relevant, or show how the brain circumvents the loss of quantum coherence at body temperature. As Penrose proposes, it may require a new type of physical theory.

Ethical problems

Can self-awareness, or the understanding of a self in the surrounding environment, be achieved by a classical parallel processor, or are quantum effects needed to have a sense of "oneness"? According to Lawrence Krauss, "You should be wary whenever you hear something like, 'Quantum mechanics connects you with the universe' ... or 'quantum mechanics unifies you with everything else.' You can begin to be skeptical that the speaker is somehow trying to use quantum mechanics to argue fundamentally that you can change the world by thinking about it."[3] A subjective feeling is not sufficient to make this determination: humans have no reliable subjective sense of how most of their own mental functions are performed. According to Daniel Dennett, "On this topic, Everybody's an expert... but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable."[83]

Since humans are the only animals known to be conscious, performing experiments to demonstrate quantum effects in consciousness requires experimentation on a living human brain. This is not automatically excluded or impossible, but it seriously limits the kinds of experiments that can be done. Studies of the ethics of brain research are being actively solicited[84] by the BRAIN Initiative, a U.S. federal government-funded effort to document the connections of neurons in the brain.

An ethically objectionable practice of some proponents of quantum mind theories is the use of quantum mechanical terms to make an argument sound more impressive, even when they know those terms are irrelevant. Dale DeBakcsy notes that "trendy parapsychologists, academic relativists, and even the Dalai Lama have all taken their turn at robbing modern physics of a few well-sounding phrases and stretching them far beyond their original scope in order to add scientific weight to various pet theories."[85] At the very least, such proponents must state clearly whether the quantum formalism is being used as an analogy or as an actual physical mechanism, and what evidence supports that use. An ethical statement by a researcher should specify what kind of relationship the hypothesis has to established physical laws.

Misleading statements of this type have been given by, for example, Deepak Chopra, who has commonly referred to topics such as quantum healing or quantum effects of consciousness. Seeing the human body as undergirded by a "quantum mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself," as determined by one's state of mind.[86] Robert Carroll states that Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings.[87] Chopra argues that what he calls "quantum healing" cures all manner of ailments, including cancer, through effects that he claims are literally based on the same principles as quantum mechanics.[88] This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body.[88] Chopra said, "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there's a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics."[89] On the other hand, he has also claimed, "[Quantum effects are] just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness."[89] In his book Quantum Healing, Chopra concluded that quantum entanglement links everything in the universe and therefore must create consciousness.[90] In either case, the word "quantum" here doesn't mean what a physicist would mean by it, and arguments resting on the word shouldn't be taken as scientifically established.

In his book Science and Psychic Phenomena,[91] Chris Carter quotes quantum physicists in support of psychic phenomena. In a review of the book, Benjamin Radford wrote that Carter used such references to "quantum physics, which he knows nothing about and which he (and people like Deepak Chopra) love to cite and reference because it sounds mysterious and paranormal.... Real, actual physicists I've spoken to break out laughing at this crap.... If Carter wishes to posit that quantum physics provides a plausible mechanism for psi, then it is his responsibility to show that, and he clearly fails to do so."[92] Sharon Hill has studied amateur paranormal research groups and found that they favor "vague and confusing language: ghosts 'use energy,' are made up of 'magnetic fields', or are associated with a 'quantum state.'"[93][94]

Statements like these about quantum mechanics indicate a temptation to misinterpret technical, mathematical terms such as entanglement in terms of mystical feelings. The approach amounts to a kind of scientism: borrowing the language and authority of science where the scientific concepts do not apply.

A larger problem in the popular press is that quantum mind hypotheses are extracted without scientific support or justification and used to prop up pseudoscience. In brief, quantum entanglement is a connection between two particles that share a property such as angular momentum; once either particle interacts with its environment, through a collision or a measurement, the entanglement is destroyed. Extrapolating this property from two entangled elementary particles to the functioning of neurons in the brain, where it would have to be used in a computation, is anything but simple. Proving a connection between entangled elementary particles and a macroscopic effect on human consciousness requires a long chain of evidence, and it would also be necessary to show how sensory inputs affect the coupled particles and how a computation is then accomplished.

Perhaps the final question is: what difference does it make if quantum effects are involved in computations in the brain? It is already known that quantum mechanics plays a role there, since quantum mechanics determines the shapes and properties of molecules like neurotransmitters and proteins, and those molecules affect how the brain works. This is why drugs such as morphine affect consciousness. As Daniel Dennett said, "quantum effects are there in your car, your watch, and your computer. But most things — most macroscopic objects — are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them."[34] Lawrence Krauss said, "We're also connected to the universe by gravity, and we're connected to the planets by gravity. But that doesn't mean that astrology is true.... Often, people who are trying to sell whatever it is they're trying to sell try to justify it on the basis of science. Everyone knows quantum mechanics is weird, so why not use that to justify it? ... I don't know how many times I've heard people say, 'Oh, I love quantum mechanics because I'm really into meditation, or I love the spiritual benefits that it brings me.' But quantum mechanics, for better or worse, doesn't bring any more spiritual benefits than gravity does."[3]

But these molecular quantum effects appear not to be what proponents of the quantum mind are interested in. Proponents seem to want to use the nonlocal, nonclassical aspects of quantum mechanics to connect human consciousness to a kind of universal consciousness or to long-range supernatural abilities. Although such effects are not impossible, they have not been observed, and the burden of proof rests on those who claim they exist. No human ability to transfer information at a distance without a known classical physical mechanism has ever been demonstrated.
