Liar paradox

From Wikipedia, the free encyclopedia
 
In philosophy and logic, the classical liar paradox, liar's paradox, or antinomy of the liar is the statement of a liar that he or she is lying: for instance, declaring that "I am lying". If the liar is indeed lying, then the liar is telling the truth, which means the liar just lied. In "this sentence is a lie" the paradox is strengthened in order to make it amenable to more rigorous logical analysis. It is still generally called the "liar paradox", although it abstracts away from the liar actually making the statement. Trying to assign this statement, the strengthened liar, a classical binary truth value leads to a contradiction.

If "this sentence is false" is true, then it is false, but the sentence states that it is false, and if it is false, then it must be true, and so on.

History

The Epimenides paradox (circa 600 BC) has been suggested as an example of the liar paradox, but they are not logically equivalent. The semi-mythical seer Epimenides, a Cretan, reportedly stated that "All Cretans are liars." However, Epimenides' statement that all Cretans are liars can be resolved as false, given that he knows of at least one other Cretan who does not lie. It is precisely in order to avoid uncertainties deriving from the human factor and from fuzzy concepts that modern logicians proposed a "strengthened" liar such as the sentence "this sentence is false".

In Ancient Greek the paradox was known as pseudómenos lógos (ψευδόμενος λόγος). One version of the liar paradox is attributed to the Greek philosopher Eubulides of Miletus, who lived in the 4th century BC. Eubulides reportedly asked, "A man says that he is lying. Is what he says true or false?"

The paradox was once discussed by St. Jerome in a sermon:
"I said in my alarm, Every man is a liar!" Is David telling the truth or is he lying? If it is true that every man is a liar, and David's statement, "Every man is a liar" is true, then David also is lying; he, too, is a man. But if he, too, is lying, his statement that "Every man is a liar", consequently is not true. Whatever way you turn the proposition, the conclusion is a contradiction. Since David himself is a man, it follows that he also is lying; but if he is lying because every man is a liar, his lying is of a different sort.
The Indian grammarian-philosopher Bhartrhari (late fifth century AD) was well aware of a liar paradox which he formulated as "everything I am saying is false" (sarvam mithyā bravīmi). He analyzes this statement together with the paradox of "unsignifiability" and explores the boundary between statements that are unproblematic in daily life and paradoxes.

There was discussion of the liar paradox in early Islamic tradition for at least five centuries, starting from late 9th century, and apparently without being influenced by any other tradition. Naṣīr al-Dīn al-Ṭūsī could have been the first logician to identify the liar paradox as self-referential.

Explanation and variants

The problem of the liar paradox is that it seems to show that common beliefs about truth and falsity actually lead to a contradiction. Sentences can be constructed that cannot consistently be assigned a truth value even though they are completely in accord with grammar and semantic rules.

The simplest version of the paradox is the sentence:

A: This statement (A) is false.
 
If (A) is true, then "This statement is false" is true. Therefore, (A) must be false. The hypothesis that (A) is true leads to the conclusion that (A) is false, a contradiction.
If (A) is false, then "This statement is false" is false. Therefore, (A) must be true. The hypothesis that (A) is false leads to the conclusion that (A) is true, another contradiction. Either way, (A) is both true and false, which is a paradox.

However, the fact that the liar sentence can be shown to be true if it is false, and false if it is true, has led some to conclude that it is "neither true nor false". This response to the paradox is, in effect, the rejection of the claim that every statement has to be either true or false, also known as the principle of bivalence, a concept related to the law of the excluded middle.

The proposal that the statement is neither true nor false has given rise to the following, strengthened version of the paradox:
This statement is not true. (B)
If (B) is neither true nor false, then it must be not true. Since this is what (B) itself states, it means that (B) must be true. Since initially (B) was not true and is now true, another paradox arises. 

Another reaction to the paradox of (A) is to posit, as Graham Priest has, that the statement is both true and false. Nevertheless, even Priest's analysis is susceptible to the following version of the liar:
This statement is only false. (C)
If (C) is both true and false, then (C) is only false. But then, it is not true. Since initially (C) was true and is now not true, it is a paradox. However, it has been argued that by adopting a two-valued relational semantics (as opposed to functional semantics), the dialetheic approach can overcome this version of the Liar.
There are also multi-sentence versions of the liar paradox. The following is the two-sentence version:
The following statement is true. (D1)
The preceding statement is false. (D2)
Assume (D1) is true. Then (D2) is true. This would mean that (D1) is false. Therefore, (D1) is both true and false.

Assume (D1) is false. Then (D2) is false. This would mean that (D1) is true. Thus (D1) is both true and false. Either way, (D1) is both true and false – the same paradox as (A) above.

The multi-sentence version of the liar paradox generalizes to any circular sequence of such statements (wherein the last statement asserts the truth/falsity of the first statement), provided there are an odd number of statements asserting the falsity of their successor; the following is a three-sentence version, with each statement asserting the falsity of its successor:

E2 is false. (E1)
E3 is false. (E2)
E1 is false. (E3)

Assume (E1) is true. Then (E2) is false, which means (E3) is true, and hence (E1) is false, leading to a contradiction. 

Assume (E1) is false. Then (E2) is true, which means (E3) is false, and hence (E1) is true. Either way, (E1) is both true and false – the same paradox as with (A) and (D1). 
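The parity condition described above is easy to check mechanically. The following sketch (illustrative, not from the article) brute-forces truth assignments for a circular chain in which each statement asserts the falsity of its successor:

```python
from itertools import product

def consistent_assignments(n):
    """Truth assignments for a circular chain of n statements, where
    statement i asserts 'statement i+1 is false' (indices mod n)."""
    return [vals for vals in product([True, False], repeat=n)
            if all(vals[i] == (not vals[(i + 1) % n]) for i in range(n))]

# Odd-length chains (n=1 is the liar sentence itself) admit no
# consistent assignment; even-length chains do.
print(consistent_assignments(3))  # []
print(consistent_assignments(2))  # [(True, False), (False, True)]
```

The empty result for odd n reproduces the contradiction derived for (E1)-(E3), while even-length chains such as (G1)-(G2) are merely underdetermined, not paradoxical.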

Many other variants, and many complements, are possible. In normal sentence construction, the simplest version of the complement is the sentence:

This statement is true. (F)

If F is assumed to bear a truth value, then it presents the problem of determining the object of that value. But, a simpler version is possible, by assuming that the single word 'true' bears a truth value. The analogue to the paradox is to assume that the single word 'false' likewise bears a truth value, namely that it is false. This reveals that the paradox can be reduced to the mental act of assuming that the very idea of fallacy bears a truth value, namely that the very idea of fallacy is false: an act of misrepresentation. So, the symmetrical version of the paradox would be:

The following statement is false. (G1)
The preceding statement is false. (G2)

Possible resolutions

Alfred Tarski

Alfred Tarski diagnosed the paradox as arising only in languages that are "semantically closed", by which he meant a language in which it is possible for one sentence to predicate truth (or falsehood) of another sentence in the same language (or even of itself). To avoid self-contradiction, it is necessary when discussing truth values to envision levels of languages, each of which can predicate truth (or falsehood) only of languages at a lower level. So, when one sentence refers to the truth-value of another, it is semantically higher. The sentence referred to is part of the "object language", while the referring sentence is considered to be a part of a "meta-language" with respect to the object language. It is legitimate for sentences in "languages" higher on the semantic hierarchy to refer to sentences lower in the "language" hierarchy, but not the other way around. This prevents a system from becoming self-referential.

However, this system is incomplete. One would like to be able to make statements such as "For every statement in level α of the hierarchy, there is a statement at level α+1 which asserts that the first statement is false." This is a true, meaningful statement about the hierarchy that Tarski defines, but it refers to statements at every level of the hierarchy, so it must be above every level of the hierarchy, and is therefore not possible within the hierarchy (although bounded versions of the sentence are possible).

Arthur Prior

Arthur Prior asserts that there is nothing paradoxical about the liar paradox. His claim (which he attributes to Charles Sanders Peirce and John Buridan) is that every statement includes an implicit assertion of its own truth. Thus, for example, the statement "It is true that two plus two equals four" contains no more information than the statement "two plus two equals four", because the phrase "it is true that..." is always implicitly there. And in the self-referential spirit of the Liar Paradox, the phrase "it is true that..." is equivalent to "this whole statement is true and ...". 

Thus the following two statements are equivalent:
This statement is false.
This statement is true and this statement is false.

The latter is a simple contradiction of the form "A and not A", and hence is false. There is therefore no paradox because the claim that this two-conjunct Liar is false does not lead to a contradiction. Eugene Mills presents a similar answer.

Saul Kripke

Saul Kripke argued that whether a sentence is paradoxical or not can depend upon contingent facts. If the only thing Smith says about Jones is:
A majority of what Jones says about me is false.
and Jones says only these three things about Smith:
Smith is a big spender.
Smith is soft on crime.
Everything Smith says about me is true.

If Smith really is a big spender but is not soft on crime, then both Smith's remark about Jones and Jones's last remark about Smith are paradoxical.

Kripke proposes a solution in the following manner. If a statement's truth value is ultimately tied up in some evaluable fact about the world, that statement is "grounded". If not, that statement is "ungrounded". Ungrounded statements do not have a truth value. Liar statements and liar-like statements are ungrounded, and therefore have no truth value.

Jon Barwise and John Etchemendy

Jon Barwise and John Etchemendy propose that the liar sentence (which they interpret as synonymous with the Strengthened Liar) is ambiguous. They base this conclusion on a distinction they make between a "denial" and a "negation". If the liar means, "It is not the case that this statement is true", then it is denying itself. If it means, "This statement is not true", then it is negating itself. They go on to argue, based on situation semantics, that the "denial liar" can be true without contradiction while the "negation liar" can be false without contradiction. Their 1987 book makes heavy use of non-well-founded set theory.

Dialetheism

Graham Priest and other logicians, including J. C. Beall and Bradley Armour-Garb, have proposed that the liar sentence should be considered to be both true and false, a point of view known as dialetheism. Dialetheism is the view that there are true contradictions. Dialetheism raises its own problems. Chief among these is that since dialetheism recognizes the liar paradox, an intrinsic contradiction, as being true, it must discard the long-recognized principle of explosion, which asserts that any proposition can be deduced from a contradiction, unless the dialetheist is willing to accept trivialism – the view that all propositions are true. Since trivialism is an intuitively false view, dialetheists nearly always reject the explosion principle. Logics that reject it are called paraconsistent.

Non-cognitivism

Andrew Irvine has argued in favour of a non-cognitivist solution to the paradox, suggesting that some apparently well-formed sentences will turn out to be neither true nor false and that "formal criteria alone will inevitably prove insufficient" for resolving the paradox.

Bhartrhari's perspectivism

The Indian grammarian-philosopher Bhartrhari (late fifth century AD) dealt with paradoxes such as the liar in a section of one of the chapters of his magnum opus the Vākyapadīya. Although chronologically he precedes all modern treatments of the problem of the liar paradox, it has only very recently become possible for those who cannot read the original Sanskrit sources to confront his views and analyses with those of modern logicians and philosophers, because sufficiently reliable editions and translations of his work have only started becoming available since the second half of the 20th century. Bhartrhari's solution fits into his general approach to language, thought and reality, which has been characterized by some as "relativistic", "non-committal" or "perspectivistic". With regard to the liar paradox (sarvam mithyā bravīmi "everything I am saying is false") Bhartrhari identifies a hidden parameter which can change unproblematic situations in daily communication into a stubborn paradox. Bhartrhari's solution can be understood in terms of the solution proposed in 1992 by Julian Roberts: "Paradoxes consume themselves. But we can keep apart the warring sides of the contradiction by the simple expedient of temporal contextualisation: what is 'true' with respect to one point in time need not be so in another ... The overall force of the 'Austinian' argument is not merely that 'things change', but that rationality is essentially temporal in that we need time in order to reconcile and manage what would otherwise be mutually destructive states." According to Roberts's suggestion, it is the factor "time" which allows us to reconcile the separated "parts of the world" that play a crucial role in the solution of Barwise and Etchemendy. The capacity of time to prevent a direct confrontation of the two "parts of the world" is here external to the "liar".
In the light of Bhartrhari's analysis, however, the extension in time which separates two perspectives on the world or two "parts of the world" – the part before and the part after the function accomplishes its task – is inherent in any "function": also the function to signify which underlies each statement, including the "liar". The unsolvable paradox – a situation in which we have either contradiction (virodha) or infinite regress (anavasthā) – arises, in case of the liar and other paradoxes such as the unsignifiability paradox (Bhartrhari's paradox), when abstraction is made from this function (vyāpāra) and its extension in time, by accepting a simultaneous, opposite function (apara vyāpāra) undoing the previous one.

Logical structure

For a better understanding of the liar paradox, it is useful to write it down in a more formal way. If "this statement is false" is denoted by A and its truth value is being sought, it is necessary to find a condition that restricts the choice of possible truth values of A. Because A is self-referential it is possible to give the condition by an equation.

If some statement, B, is assumed to be false, one writes, "B = false". The statement (C) that the statement B is false would be written as "C = 'B = false'". Now, the liar paradox can be expressed as the statement A, that A is false:

A = "A = false"

This is an equation from which the truth value of A = "this statement is false" could hopefully be obtained. In the Boolean domain, "A = false" is equivalent to "not A", and therefore the equation is not solvable. This motivates reinterpreting A. The simplest logical approach that makes the equation solvable is the dialetheistic one, in which case the solution is A being both "true" and "false". Other resolutions mostly involve some modification of the equation; Arthur Prior claims that the equation should be "A = 'A = false and A = true'", and therefore A is false. In computational verb logic, the liar paradox is extended to statements like "I hear what he says; he says what I don't hear", where verb logic must be used to resolve the paradox.
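Both the unsolvability of the liar equation and Prior's repaired version can be verified by exhaustive search over the Boolean domain; a minimal sketch:

```python
# A = "A = false": look for a classical truth value satisfying the equation.
liar = [A for A in (True, False) if A == (A is False)]

# Prior's two-conjunct reading: A = "A = false and A = true".
prior = [A for A in (True, False) if A == ((A is False) and (A is True))]

print(liar)   # [] -- no classical truth value satisfies the liar equation
print(prior)  # [False] -- Prior's liar is consistently false, no paradox
```

The empty solution set for the first equation is exactly the contradiction derived for statement (A); Prior's version has the single solution "false", matching his claim that the two-conjunct liar is a plain falsehood.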

Applications

Gödel's first incompleteness theorem

Gödel's incompleteness theorems are two fundamental theorems of mathematical logic which state inherent limitations of sufficiently powerful axiomatic systems for mathematics. The theorems were proven by Kurt Gödel in 1931, and are important in the philosophy of mathematics. Roughly speaking, in proving the first incompleteness theorem, Gödel used a modified version of the liar paradox, replacing "this sentence is false" with "this sentence is not provable", called the "Gödel sentence G". His proof showed that for any sufficiently powerful theory T, G is true, but not provable in T. The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence.

To prove the first incompleteness theorem, Gödel represented statements by numbers. Then the theory at hand, which is assumed to prove certain facts about numbers, also proves facts about its own statements. Questions about the provability of statements are represented as questions about the properties of numbers, which would be decidable by the theory if it were complete. In these terms, the Gödel sentence states that no natural number exists with a certain, strange property. A number with this property would encode a proof of the inconsistency of the theory. If there were such a number then the theory would be inconsistent, contrary to the consistency hypothesis. So, under the assumption that the theory is consistent, there is no such number.

It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently by Gödel (when he was working on the proof of the incompleteness theorem) and by Alfred Tarski.

George Boolos has since sketched an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula.

In popular culture

The liar paradox is occasionally used in fiction to shut down artificial intelligences, which are presented as being unable to process the sentence. In the Star Trek: The Original Series episode "I, Mudd", the liar paradox is used by Captain Kirk and Harry Mudd to confuse and ultimately disable an android holding them captive. In the 1973 Doctor Who serial The Green Death, the Doctor temporarily stumps the insane computer BOSS by asking it "If I were to tell you that the next thing I say would be true, but the last thing I said was a lie, would you believe me?" However, BOSS eventually decides the question is irrelevant and summons security.

In the 2011 video game Portal 2, GLaDOS attempts to use the "this sentence is false" paradox to defeat the naïve artificial intelligence Wheatley, but, lacking the intelligence to realize the statement is a paradox, he simply responds, "Um, true. I'll go with true. There, that was easy." and is unaffected, although the frankencubes around him do spark and go offline.

In "Access Denied", the seventh episode of Minecraft: Story Mode, the main character Jesse and his friends are captured by a supercomputer named PAMA. After PAMA takes control of two of Jesse's friends, Jesse learns that PAMA stalls when processing paradoxes, and uses one to confuse it and escape with his last friend. One of the paradoxes the player can have him say is the liar paradox.

In chapter 21 of Douglas Adams's The Hitchhiker's Guide to the Galaxy, the narrator describes a solitary old man inhabiting a small asteroid at the spatial coordinates where a whole planet dedicated to Biro (ballpoint pen) life forms should have been. This old man repeatedly claimed that nothing was true, though he was later discovered to be lying.

Rollins Band's 1994 song "Liar" alludes to the paradox: the narrator ends the song by stating, "I'll lie again and again and I'll keep lying, I promise".

Robert Earl Keen's song "The Road Goes On and On" alludes to the paradox. The song is widely believed to be written as part of Keen's feud with Toby Keith, who is presumably the "liar" Keen refers to.

Quantum cognition

Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception. The field clearly distinguishes itself from the quantum mind as it is not reliant on the hypothesis that there is something micro-physical quantum mechanical about the brain. Quantum cognition is based on the quantum-like paradigm or generalized quantum paradigm or quantum structure paradigm that information processing by complex systems such as the brain, taking into account contextual dependence of information and probabilistic reasoning, can be mathematically described in the framework of quantum information and quantum probability theory.

Quantum cognition uses the mathematical formalism of quantum theory to inspire and formalize models of cognition that aim to be an advance over models based on traditional classical probability theory. The field focuses on modeling phenomena in cognitive science that have resisted traditional techniques or where traditional models seem to have reached a barrier (e.g., human memory), and modeling preferences in decision theory that seem paradoxical from a traditional rational point of view (e.g., preference reversals). Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.

Main subjects of research

Quantum-like models of information processing ("quantum-like brain")

The brain is definitely a macroscopic physical system operating on scales (of time, space, temperature) which differ crucially from the corresponding quantum scales. (Macroscopic quantum phenomena such as the Bose-Einstein condensate are also characterized by special conditions which are definitely not fulfilled in the brain.) In particular, the brain is simply too hot to perform real quantum information processing, i.e., to use quantum carriers of information such as photons, ions, or electrons. As is commonly accepted in brain science, the basic unit of information processing is the neuron. A neuron cannot be in a superposition of two states, firing and non-firing, and hence cannot produce the superposition that plays the basic role in quantum information processing. Superpositions of mental states are instead created by complex networks of neurons (and these are classical neural networks). The quantum cognition community holds that the activity of such neural networks can produce effects which are formally described as interference (of probabilities) and entanglement. In principle, the community does not try to create concrete models of quantum(-like) representation of information in the brain.

The quantum cognition project is based on the observation that various cognitive phenomena are more adequately described by quantum information theory and quantum probability than by the corresponding classical theories; see the examples below. Thus the quantum formalism is considered an operational formalism describing nonclassical processing of probabilistic data. Recent derivations of the complete quantum formalism from simple operational principles for the representation of information support the foundations of quantum cognition. The subjective-probability viewpoint on quantum probability developed by C. Fuchs and collaborators also supports the quantum cognition approach, especially the use of quantum probabilities to describe the process of decision making.

Although we cannot at the moment present concrete neurophysiological mechanisms for the creation of a quantum-like representation of information in the brain, we can present general informational considerations supporting the idea that information processing in the brain matches quantum information and probability. Here contextuality is the key word; see the monograph of Khrennikov for a detailed presentation of this viewpoint. Quantum mechanics is fundamentally contextual: quantum systems do not have objective properties which can be defined independently of the measurement context. (As N. Bohr pointed out, the whole experimental arrangement must be taken into account.) Contextuality implies the existence of incompatible mental variables, violation of the classical law of total probability, and (constructive and destructive) interference effects. The quantum cognition approach can thus be considered an attempt to formalize the contextuality of mental processes using the mathematical apparatus of quantum mechanics.

Decision making

Suppose a person is given an opportunity to play two rounds of the following gamble: a coin toss will determine whether the subject wins $200 or loses $100. Suppose the subject has decided to play the first round, and does so. Some subjects are then given the result (win or lose) of the first round, while other subjects are not yet given any information about the results. The experimenter then asks whether the subject wishes to play the second round. Performing this experiment with real subjects gives the following results:
  1. When subjects believe they won the first round, the majority of subjects choose to play again on the second round.
  2. When subjects believe they lost the first round, the majority of subjects choose to play again on the second round.
Given these two separate choices, according to the sure thing principle of rational decision theory, they should also play the second round even if they don't know or think about the outcome of the first round. But, experimentally, when subjects are not told the results of the first round, the majority of them decline to play a second round. This finding violates the law of total probability, yet it can be explained as a quantum interference effect in a manner similar to the explanation for the results from double-slit experiment in quantum physics. Similar violations of the sure-thing principle are seen in empirical studies of the Prisoner's Dilemma and have likewise been modeled in terms of quantum interference.
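As a rough numerical sketch (the figures below are illustrative, not the published experimental data), the classical mixture prescribed by the law of total probability, and the interference term a quantum model adds to close the gap with observed behavior, look like this:

```python
# Illustrative probabilities of choosing to play the second round.
p_play_given_win, p_play_given_lose = 0.69, 0.59
p_win = p_lose = 0.5  # fair coin

# Classical law of total probability: the unknown-outcome probability
# must be a mixture of the two conditionals, so it lies between them.
p_classical = p_play_given_win * p_win + p_play_given_lose * p_lose

# Observed behaviour (illustrative): most subjects decline when the
# first-round outcome is unknown.
p_observed = 0.36

# Quantum models absorb the discrepancy into an interference term:
# p_observed = p_classical + interference.
interference = p_observed - p_classical
print(round(p_classical, 2), round(interference, 2))  # 0.64 -0.28
```

The classical prediction (0.64) lies between the two conditionals, as it must; the negative interference term plays the same formal role as destructive interference in the double-slit experiment.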

The above deviations from classical rational expectations in agents’ decisions under uncertainty produce well known paradoxes in behavioral economics, that is, the Allais, Ellsberg and Machina paradoxes. These deviations can be explained if one assumes that the overall conceptual landscape influences the subject's choice in a neither predictable nor controllable way. A decision process is thus an intrinsically contextual process, hence it cannot be modeled in a single Kolmogorovian probability space, which justifies the employment of quantum probability models in decision theory. More explicitly, the paradoxical situations above can be represented in a unified Hilbert space formalism where human behavior under uncertainty is explained in terms of genuine quantum aspects, namely, superposition, interference, contextuality and incompatibility.

Considering automated decision making, quantum decision trees have different structure compared to classical decision trees. Data can be analyzed to see if a quantum decision tree model fits the data better.

Human probability judgments

Quantum probability provides a new way to explain human probability judgment errors, including the conjunction and disjunction errors. A conjunction error occurs when a person judges the probability of the conjunction of a likely event L and an unlikely event U to be greater than the probability of the unlikely event U alone; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the disjunction of L with an unlikely event U. Quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of von Neumann axioms that relax some of the classic Kolmogorov axioms. The quantum model introduces a new fundamental concept to cognition—the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings such as order effects on probability judgments.
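A minimal sketch of how incompatible questions can produce a conjunction-style error (the vectors and angles are chosen purely for illustration, not taken from any published model): judged probabilities are modeled as squared norms of projections of a belief-state vector, and evaluating the likely event first can raise the probability subsequently assigned to the unlikely one:

```python
import math

def proj(u, v):
    """Project 2-D vector v onto the line spanned by unit vector u."""
    dot = u[0] * v[0] + u[1] * v[1]
    return (dot * u[0], dot * u[1])

def norm_sq(v):
    return v[0] ** 2 + v[1] ** 2

s = math.sqrt(0.5)
U = (1.0, 0.0)    # axis for the unlikely event U
L = (s, s)        # axis for the likely event L, at 45 degrees to U
psi = (0.0, 1.0)  # belief state, orthogonal to U

p_U = norm_sq(proj(U, psi))                  # judging U directly
p_L_then_U = norm_sq(proj(U, proj(L, psi)))  # judging "L and then U"
print(p_U, p_L_then_U)  # 0.0 0.25
```

Because the two projectors do not commute, the sequential judgment (0.25) exceeds the direct judgment of U alone (0.0), mirroring the conjunction error; in classical probability the conjunction can never exceed either conjunct.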

The liar paradox - The contextual influence of a human subject on the truth behavior of a cognitive entity is explicitly exhibited by the so-called liar paradox, that is, the truth value of a sentence like "this sentence is false". One can show that the true-false state of this paradox is represented in a complex Hilbert space, while the typical oscillations between true and false are dynamically described by the Schrödinger equation.

Knowledge representation

Concepts are basic cognitive phenomena, which provide the content for inference, explanation, and language understanding. Cognitive psychology has researched different approaches for understanding concepts including exemplars, prototypes, and neural networks, and different fundamental problems have been identified, such as the experimentally tested non classical behavior for the conjunction and disjunction of concepts, more specifically the Pet-Fish problem or guppy effect, and the overextension and underextension of typicality and membership weight for conjunction and disjunction. By and large, quantum cognition has drawn on quantum theory in three ways to model concepts.
  1. Exploit the contextuality of quantum theory to account for the contextuality of concepts in cognition and language and the phenomenon of emergent properties when concepts combine
  2. Use quantum entanglement to model the semantics of concept combinations in a non-decompositional way, and to account for the emergent properties/associates/inferences in relation to concept combinations
  3. Use quantum superposition to account for the emergence of a new concept when concepts are combined, and as a consequence put forward an explanatory model for the Pet-Fish problem situation, and the overextension and underextension of membership weights for the conjunction and disjunction of concepts.
The large amount of data collected by Hampton on the combination of two concepts can be modeled in a specific quantum-theoretic framework in Fock space where the observed deviations from classical set (fuzzy set) theory, the above-mentioned over- and under- extension of membership weights, are explained in terms of contextual interactions, superposition, interference, entanglement and emergence. And, more, a cognitive test on a specific concept combination has been performed which directly reveals, through the violation of Bell's inequalities, quantum entanglement between the component concepts.

Human memory

The hypothesis that there may be something quantum-like about human mental function was put forward with a quantum entanglement formula attempting to model the effect that, when a word's associative network is activated during study in a memory experiment, it behaves like a quantum-entangled system. Models of cognitive agents and memory based on quantum collectives have been proposed by Subhash Kak, who also points to specific problems of limits on the observation and control of these memories, for fundamental logical reasons.

Semantic analysis and information retrieval

The research in (iv) had a deep impact on the understanding and initial development of a formalism for obtaining semantic information when dealing with concepts, their combinations and variable contexts in a corpus of unstructured documents. This conundrum of natural language processing (NLP) and information retrieval (IR) on the web, and in databases in general, can be addressed using the mathematical formalism of quantum theory. As basic steps, (a) the seminal book "The Geometry of Information Retrieval" by K. van Rijsbergen introduced a quantum-structure approach to IR, (b) Widdows and Peters utilised a quantum logical negation for a concrete search system, and (c) Aerts and Czachor identified quantum structure in semantic space theories, such as latent semantic analysis. Since then, the employment of techniques and procedures induced from the mathematical formalisms of quantum theory (Hilbert space, quantum logic and probability, non-commutative algebras, etc.) in fields such as IR and NLP has produced significant results.

Human perception

Bi-stable perceptual phenomena are a fascinating topic in the area of perception. If a stimulus has an ambiguous interpretation, such as a Necker cube, the interpretation tends to oscillate over time. Quantum models have been developed to predict the time period between oscillations and how these periods change with the frequency of measurement. Elio Conte has developed a quantum-theoretic model to account for interference effects obtained with measurements of ambiguous figures.
There are apparent similarities between Gestalt perception and quantum theory. In an article discussing the application of Gestalt to chemistry, Anton Amann writes: "Quantum mechanics does not explain Gestalt perception, of course, but in quantum mechanics and Gestalt psychology there exist almost isomorphic conceptions and problems:
  • Similarly as with the Gestalt concept, the shape of a quantum object does not a priori exist but it depends on the interaction of this quantum object with the environment (for example: an observer or a measurement apparatus).
  • Quantum mechanics and Gestalt perception are organized in a holistic way. Subentities do not necessarily exist in a distinct, individual sense.
  • In quantum mechanics and Gestalt perception objects have to be created by elimination of holistic correlations with the 'rest of the world'."
Each of the points mentioned above can be restated in a simplified manner (the explanations below correspond respectively to the points above):
  • Just as an object in quantum physics does not have a definite shape until it interacts with its environment, objects from the Gestalt perspective do not hold much meaning individually, as they do when there is a "group" of them or when they are present in an environment.
  • Both in quantum mechanics and Gestalt perception, the objects must be studied as a whole rather than finding properties of individual components and interpolating the whole object.
  • In the Gestalt concept, the creation of a new object from a previously existing object means that the previously existing object now becomes a sub-entity of the new object, and hence an "elimination of holistic correlations" occurs. Similarly, a new quantum object made from a previously existing object means that the previously existing object loses its holistic view.
Amann comments: "The structural similarities between Gestalt perception and quantum mechanics are on a level of a parable, but even parables can teach us something, for example, that quantum mechanics is more than just production of numerical results or that the Gestalt concept is more than just a silly idea, incompatible with atomistic conceptions."[60]

Quantum-like models of cognition in economics and finance

The assumption that information processing by the agents of the market follows the laws of quantum information theory and quantum probability has been actively explored by many authors, e.g., E. Haven, O. Choustova and A. Khrennikov; see the book of E. Haven and A. Khrennikov[61] for a detailed bibliography. One example is the Bohmian model of the dynamics of share prices, in which the quantum(-like) potential is generated by the expectations of agents of the financial market and hence has a mental nature. This approach can be used to model real financial data; see the book of E. Haven and A. Khrennikov (2012).

Application of theory of open quantum systems to decision making and "cell's cognition"

An isolated quantum system is an idealized theoretical entity; in reality, interactions with the environment have to be taken into account, which is the subject of the theory of open quantum systems. Cognition is also fundamentally contextual: the brain is a kind of (self-)observer that makes context-dependent decisions, and the mental environment plays a crucial role in information processing. Therefore, it is natural to apply the theory of open quantum systems to describe the process of decision making as the result of quantum-like dynamics of the mental state of a system interacting with an environment. The description of the process of decision making is mathematically equivalent to the description of the process of decoherence. This idea was explored in a series of works of the multidisciplinary group of researchers at Tokyo University of Science.[62][63]
Since, in the quantum-like approach, the formalism of quantum mechanics is considered a purely operational formalism, it can be applied to the description of information processing by any biological system, i.e., not only by human beings.
Operationally, it is very convenient to consider, e.g., a cell as a kind of decision maker processing information in the quantum information framework. This idea was explored in a series of papers by the Swedish-Japanese research group using the methods of the theory of open quantum systems: gene expression was modeled as decision making in the process of interaction with the environment.[64]

History

Here is a short history of applying the formalisms of quantum theory to topics in psychology. Ideas for applying quantum formalisms to cognition first appeared in the 1990s in the work of Diederik Aerts and his collaborators Jan Broekaert, Sonja Smets and Liane Gabora, as well as Harald Atmanspacher, Robert Bordley, and Andrei Khrennikov. A special issue on Quantum Cognition and Decision appeared in the Journal of Mathematical Psychology (2009, vol. 53), which planted a flag for the field. A few books related to quantum cognition have been published, including those by Khrennikov (2004, 2010), Ivancivic and Ivancivic (2010), Busemeyer and Bruza (2012), and E. Conte (2012). The first Quantum Interaction workshop was held at Stanford in 2007, organized by Peter Bruza, William Lawless, C. J. van Rijsbergen, and Don Sofge as part of the 2007 AAAI Spring Symposium Series. This was followed by workshops at Oxford in 2008, Saarbrücken in 2009, at the 2010 AAAI Fall Symposium Series held in Washington, D.C., in Aberdeen in 2011, in Paris in 2012, and in Leicester in 2013. Tutorials were also presented annually from 2007 until 2013 at the annual meeting of the Cognitive Science Society. A special issue on quantum models of cognition appeared in 2013 in the journal Topics in Cognitive Science.

Related theories

It was suggested by theoretical physicists David Bohm and Basil Hiley that mind and matter both emerge from an "implicate order".[65] Bohm and Hiley's approach to mind and matter is supported by philosopher Paavo Pylkkänen.[66] Pylkkänen underlines "unpredictable, uncontrollable, indivisible and non-logical" features of conscious thought and draws parallels to a philosophical movement some call "post-phenomenology", in particular to Pauli Pylkkö's notion of the "aconceptual experience", an unstructured, unarticulated and pre-logical experience.[67]
The mathematical techniques of both Conte's group and Hiley's group involve the use of Clifford algebras. These algebras account for "non-commutativity" of thought processes (for an example, see: noncommutative operations in everyday life).
However, an area that needs to be investigated is the concept of lateralised brain functioning. Some studies in marketing have related lateral influences on cognition and emotion to the processing of attachment-related stimuli.

Gestalt psychology

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Gestalt_psychology
Gestalt psychology or gestaltism is a school of psychology that emerged in Austria and Germany in the early twentieth century, based on work by Max Wertheimer, Wolfgang Köhler, and Kurt Koffka. As used in Gestalt psychology, the German word Gestalt (meaning "form") is interpreted as "pattern" or "configuration". Gestalt psychologists emphasized that organisms perceive entire patterns or configurations, not merely individual components. The view is sometimes summarized by the adage "the whole is more than the sum of its parts." The Gestalt principles of proximity, similarity, figure-ground, continuity, closure, and connection determine how humans perceive visuals in connection with different objects and environments.

Origin and history

Max Wertheimer (1880–1943), Kurt Koffka (1886–1941), and Wolfgang Köhler (1887–1967) founded Gestalt psychology in the early 20th century. The dominant view in psychology at the time was structuralism, exemplified by the work of Hermann von Helmholtz (1821–1894), Wilhelm Wundt (1832–1920), and Edward B. Titchener (1867–1927). Structuralism was rooted firmly in British empiricism and was based on three closely interrelated theories: (1) "atomism," also known as "elementalism," the view that all knowledge, even complex abstract ideas, is built from simple, elementary constituents, (2) "sensationalism," the view that the simplest constituents—the atoms of thought—are elementary sense impressions, and (3) "associationism," the view that more complex ideas arise from the association of simpler ideas. Together, these three theories give rise to the view that the mind constructs all perceptions and even abstract thoughts strictly from lower-level sensations that are related solely by being associated closely in space and time. The Gestaltists took issue with this widespread "atomistic" view that the aim of psychology should be to break consciousness down into putative basic elements. In contrast, the Gestalt psychologists believed that breaking psychological phenomena down into smaller parts would not lead to understanding psychology. The Gestalt psychologists believed, instead, that the most fruitful way to view psychological phenomena is as organized, structured wholes. They argued that the psychological "whole" has priority and that the "parts" are defined by the structure of the whole, rather than vice versa. One could say that the approach was based on a macroscopic view of psychology rather than a microscopic approach. Gestalt theories of perception are based on human nature being inclined to understand objects as an entire structure rather than the sum of its parts.

Wertheimer had been a student of the Austrian philosopher Christian von Ehrenfels (1859–1932), a member of the School of Brentano. Von Ehrenfels introduced the concept of Gestalt to philosophy and psychology in 1890, before the advent of Gestalt psychology as such. Von Ehrenfels observed that a perceptual experience, such as perceiving a melody or a shape, is more than the sum of its sensory components. He claimed that, in addition to the sensory elements of the perception, there is something extra. Although in some sense derived from the organization of the component sensory elements, this further quality is an element in its own right. He called it Gestalt-qualität or "form-quality." For instance, when one hears a melody, one hears the notes plus something in addition to them that binds them together into a tune – the Gestalt-qualität. It is this Gestalt-qualität that, according to von Ehrenfels, allows a tune to be transposed to a new key, using completely different notes, while still retaining its identity. The idea of a Gestalt-qualität has roots in theories by David Hume, Johann Wolfgang von Goethe, Immanuel Kant, David Hartley, and Ernst Mach. Both von Ehrenfels and Edmund Husserl seem to have been inspired by Mach's work Beiträge zur Analyse der Empfindungen (Contributions to the Analysis of Sensations, 1886) in formulating their very similar concepts of gestalt and figural moment, respectively.

By 1914, the first published references to Gestalt theory could be found in a footnote of Gabriele von Wartensleben's application of Gestalt theory to personality. She was a student at the Frankfurt Academy for Social Sciences who interacted deeply with Wertheimer and Köhler.

Through a series of experiments, Wertheimer discovered that a person observing a pair of alternating bars of light can, under the right conditions, experience the illusion of movement between one location and the other. He noted that this was a perception of motion absent any moving object. That is, it was pure phenomenal motion. He dubbed it phi ("phenomenal") motion. Wertheimer's publication of these results in 1912 marks the beginning of Gestalt psychology. In comparison to von Ehrenfels and others who had used the term "gestalt" earlier in various ways, Wertheimer's unique contribution was to insist that the "gestalt" is perceptually primary. The gestalt defines the parts from which it is composed, rather than being a secondary quality that emerges from those parts. Wertheimer took the more radical position that "what is given me by the melody does not arise ... as a secondary process from the sum of the pieces as such. Instead, what takes place in each single part already depends upon what the whole is", (1925/1938). In other words, one hears the melody first and only then may perceptually divide it up into notes. Similarly, in vision, one sees the form of the circle first—it is given "im-mediately" (i.e., its apprehension is not mediated by a process of part-summation). Only after this primary apprehension might one notice that it is made up of lines or dots or stars.

The two men who served as Wertheimer's subjects in the phi experiments were Köhler and Koffka. Köhler was an expert in physical acoustics, having studied under physicist Max Planck (1858–1947), but had taken his degree in psychology under Carl Stumpf (1848–1936). Koffka was also a student of Stumpf's, having studied movement phenomena and psychological aspects of rhythm. In 1917, Köhler (1917/1925) published the results of four years of research on learning in chimpanzees. Köhler showed, contrary to the claims of most other learning theorists, that animals can learn by "sudden insight" into the "structure" of a problem, over and above the associative and incremental manner of learning that Ivan Pavlov (1849–1936) and Edward Lee Thorndike (1874–1949) had demonstrated with dogs and cats, respectively.

The terms "structure" and "organization" were focal for the Gestalt psychologists. Stimuli were said to have a certain structure, to be organized in a certain way, and that it is to this structural organization, rather than to individual sensory elements, that the organism responds. When an animal is conditioned, it does not simply respond to the absolute properties of a stimulus, but to its properties relative to its surroundings. To use a favorite example of Köhler's, if conditioned to respond in a certain way to the lighter of two gray cards, the animal generalizes the relation between the two stimuli rather than the absolute properties of the conditioned stimulus: it will respond to the lighter of two cards in subsequent trials even if the darker card in the test trial is of the same intensity as the lighter one in the original training trials.

In 1921, Koffka published a Gestalt-oriented text on developmental psychology, Growth of the Mind. With the help of American psychologist Robert Ogden, Koffka introduced the Gestalt point of view to an American audience in 1922 by way of a paper in Psychological Bulletin. It contains criticisms of then-current explanations of a number of problems of perception, and the alternatives offered by the Gestalt school. Koffka moved to the United States in 1924, eventually settling at Smith College in 1927. In 1935, Koffka published his Principles of Gestalt Psychology. This textbook laid out the Gestalt vision of the scientific enterprise as a whole. Science, he said, is not the simple accumulation of facts. What makes research scientific is the incorporation of facts into a theoretical structure. The goal of the Gestaltists was to integrate the facts of inanimate nature, life, and mind into a single scientific structure. This meant that science would have to accommodate not only what Koffka called the quantitative facts of physical science but the facts of two other "scientific categories": questions of order and questions of Sinn, a German word which has been variously translated as significance, value, and meaning. Without incorporating the meaning of experience and behavior, Koffka believed that science would doom itself to trivialities in its investigation of human beings.

Having survived the Nazis up to the mid-1930s, all the core members of the Gestalt movement were forced out of Germany to the United States by 1935. Köhler published another book, Dynamics in Psychology, in 1940 but thereafter the Gestalt movement suffered a series of setbacks. Koffka died in 1941 and Wertheimer in 1943. Wertheimer's long-awaited book on mathematical problem-solving, Productive Thinking, was published posthumously in 1945, but Köhler was left to guide the movement without his two long-time colleagues.

Gestalt therapy

Gestalt psychology should not be confused with Gestalt therapy, which is only peripherally linked to Gestalt psychology. The founders of Gestalt therapy, Fritz and Laura Perls, had worked with Kurt Goldstein, a neurologist who had applied principles of Gestalt psychology to the functioning of the organism. Laura Perls had been a Gestalt psychologist before she became a psychoanalyst and before she began developing Gestalt therapy together with Fritz Perls. The extent to which Gestalt psychology influenced Gestalt therapy is disputed, however; in any case, Gestalt therapy is not identical with Gestalt psychology. On the one hand, Laura Perls preferred not to use the term "Gestalt" to name the emerging new therapy, because she thought that the Gestalt psychologists would object to it; on the other hand, Fritz and Laura Perls clearly adopted some of Goldstein's work. Thus, though recognizing the historical connection and the influence, most Gestalt psychologists emphasize that Gestalt therapy is not a form of Gestalt psychology.

Mary Henle noted in her presidential address to Division 24 at the meeting of the American Psychological Association (1975): "What Perls has done has been to take a few terms from Gestalt psychology, stretch their meaning beyond recognition, mix them with notions—often unclear and often incompatible—from the depth psychologies, existentialism, and common sense, and he has called the whole mixture gestalt therapy. His work has no substantive relation to scientific Gestalt psychology. To use his own language, Fritz Perls has done 'his thing'; whatever it is, it is not Gestalt psychology." In her analysis, however, she restricts herself explicitly to only three of Perls' books from 1969 and 1972, leaving out Perls' earlier work and Gestalt therapy in general as a psychotherapy method.

There were clinical applications of Gestalt psychology in the psychotherapeutic field long before Perlsian Gestalt therapy: in group psychoanalysis (Foulkes), in Adlerian individual psychology, by Gestalt psychologists in psychotherapy such as Erwin Levy and Abraham S. Luchins, and by Gestalt-psychologically oriented psychoanalysts in Italy (Canestrari and others); there have also been newer developments, foremost in Europe. For example, a strictly Gestalt psychology-based therapeutic method is Gestalt Theoretical Psychotherapy, developed by the German Gestalt psychologist and psychotherapist Hans-Jürgen Walter and his colleagues in Germany, Austria (Gerhard Stemberger and colleagues) and Switzerland. Other countries, especially Italy, have seen similar developments.

Contributions

Gestalt psychology made many contributions to the body of psychology. The Gestaltists were the first to demonstrate empirically and document many facts about perception—including facts about the perception of movement, the perception of contour, perceptual constancy, and perceptual illusions. Wertheimer's discovery of the phi phenomenon is one example of such a contribution. In addition to discovering perceptual phenomena, the contributions of Gestalt psychology include: (a) a unique theoretical framework and methodology, (b) a set of perceptual principles, (c) a well-known set of perceptual grouping laws, (d) a theory of problem solving based on insight, and (e) a theory of memory. The following subsections discuss these contributions in turn.

Theoretical framework and methodology

The Gestalt psychologists practiced a set of theoretical and methodological principles that attempted to redefine the approach to psychological research. This is in contrast to investigations developed at the beginning of the 20th century, based on traditional scientific methodology, which divided the object of study into a set of elements that could be analyzed separately with the objective of reducing the complexity of this object.

The theoretical principles are the following:
  • Principle of Totality—Conscious experience must be considered globally (by taking into account all the physical and mental aspects of the individual simultaneously) because the nature of the mind demands that each component be considered as part of a system of dynamic relationships. Wertheimer described holism as fundamental to Gestalt psychology, writing "There are wholes, the behavior of which is not determined by that of their individual elements, but where the part-processes are themselves determined by the intrinsic nature of the whole." In other words, a perceptual whole is different from what one would predict based on only its individual parts. Moreover, the nature of a part depends upon the whole in which it is embedded. Köhler, for example, writes "In psychology...we have wholes which, instead of being the sum of parts existing independently, give their parts specific functions or properties that can only be defined in relation to the whole in question." Thus, the maxim that the whole is more than the sum of its parts is not a precise description of the Gestaltist view. Rather, "The whole is something else than the sum of its parts, because summing is a meaningless procedure, whereas the whole-part relationship is meaningful."
Based on the principles above the following methodological principles are defined:
  • Experimental analysis of phenomena—In relation to the Totality Principle, any psychological research should take phenomena as a starting point and not be solely focused on sensory qualities.
  • Biotic experiment—The Gestalt psychologists established a need to conduct real experiments that sharply contrasted with and opposed classic laboratory experiments. This signified experimenting in natural situations, developed in real conditions, in which it would be possible to reproduce, with higher fidelity, what would be habitual for a subject.

Properties

The key principles of gestalt systems are emergence, reification, multistability and invariance.

Reification

Reification is the constructive or generative aspect of perception, by which the experienced percept contains more explicit spatial information than the sensory stimulus on which it is based.
For instance, a triangle is perceived in picture A, though no triangle is there. In pictures B and D the eye recognizes disparate shapes as "belonging" to a single shape; in C a complete three-dimensional shape is seen where in actuality no such thing is drawn.
Reification can be explained by progress in the study of illusory contours, which are treated by the visual system as "real" contours.

Multistability

The Necker cube and the Rubin vase, two examples of multistability

Multistability (or multistable perception) is the tendency of ambiguous perceptual experiences to pop back and forth unstably between two or more alternative interpretations. This is seen, for example, in the Necker cube and Rubin's Figure/Vase illusion shown here. Other examples include the three-legged blivet and artist M. C. Escher's artwork and the appearance of flashing marquee lights moving first one direction and then suddenly the other. Again, Gestalt psychology does not explain how images appear multistable, only that they do.

Invariance

Invariance is the property of perception whereby simple geometrical objects are recognized independent of rotation, translation, and scale; as well as several other variations such as elastic deformations, different lighting, and different component features. For example, the objects in A in the figure are all immediately recognized as the same basic shape, which are immediately distinguishable from the forms in B. They are even recognized despite perspective and elastic deformations as in C, and when depicted using different graphic elements as in D. Computational theories of vision, such as those by David Marr, have provided alternate explanations of how perceived objects are classified.
Emergence, reification, multistability, and invariance are not necessarily separable modules to model individually, but they could be different aspects of a single unified dynamic mechanism.

Figure-Ground Organization

The perceptual field (what an organism perceives) is organized. Figure-ground organization is one form of perceptual organization. Figure-ground organization is the interpretation of perceptual elements in terms of their shapes and relative locations in the layout of surfaces in the 3-D world. Figure-ground organization structures the perceptual field into a figure (standing out at the front of the perceptual field) and a background (receding behind the figure). Pioneering work on figure-ground organization was carried out by the Danish psychologist Edgar Rubin. The Gestalt psychologists demonstrated that we tend to perceive as figures those parts of our perceptual fields that are convex, symmetric, small, and enclosed.

Prägnanz

Like figure-ground organization, perceptual grouping (sometimes called perceptual segregation) is a form of perceptual organization. Organisms perceive some parts of their perceptual fields as "hanging together" more tightly than others. They use this information for object detection. Perceptual grouping is the process that determines what these "pieces" of the perceptual field are.
The Gestaltists were the first psychologists to systematically study perceptual grouping. According to Gestalt psychologists, the fundamental principle of perceptual grouping is the law of Prägnanz. (The law of Prägnanz is also known as the law of good Gestalt.) Prägnanz is a German word that directly translates to "pithiness" and implies salience, conciseness, and orderliness. The law of Prägnanz says that we tend to experience things as regular, orderly, symmetrical, and simple. As Koffka put it, "Of several geometrically possible organizations that one will actually occur which possesses the best, simplest and most stable shape."
The law of Prägnanz implies that, as individuals perceive the world, they eliminate complexity and unfamiliarity so they can observe reality in its most simplistic form. Eliminating extraneous stimuli helps the mind create meaning. This meaning created by perception implies a global regularity, which is often mentally prioritized over spatial relations. The law of good Gestalt focuses on the idea of conciseness, which is what all of Gestalt theory is based on.
A major aspect of Gestalt psychology is that it implies that the mind understands external stimuli as wholes rather than as the sums of their parts. The wholes are structured and organized using grouping laws.
Gestalt psychologists attempted to discover refinements of the law of Prägnanz, and this involved writing down laws that, hypothetically, allow us to predict the interpretation of sensation; these are often called "gestalt laws". Wertheimer defined a few principles that explain the ways humans perceive objects; those principles were based on similarity, proximity, and continuity. The Gestalt concept is based on perceiving reality in its simplest form. The various laws are called laws or principles depending on the paper where they appear, but for simplicity's sake this article uses the term laws. These laws took several forms, such as the grouping of similar, or proximate, objects together, within this global process. They deal with the sensory modality of vision; however, there are analogous laws for other sensory modalities, including the auditory, tactile, gustatory and olfactory (Bregman – GP). The visual Gestalt principles of grouping were introduced in Wertheimer (1923). Through the 1930s and '40s, Wertheimer, Köhler and Koffka formulated many of the laws of grouping through the study of visual perception.

Law of Proximity

Law of proximity

The law of proximity states that when an individual perceives an assortment of objects, they perceive objects that are close to each other as forming a group. For example, in the figure illustrating the law of proximity, there are 72 circles, but we perceive the collection of circles in groups. Specifically, we perceive that there is a group of 36 circles on the left side of the image, and three groups of 12 circles on the right side of the image. This law is often used in advertising logos to emphasize which aspects of events are associated.
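The grouping that the proximity law predicts can be loosely illustrated in code. The sketch below is a toy distance-threshold clustering, not a perceptual model; the function name and coordinates are invented for illustration. It links dots closer than a threshold and reads the resulting connected components as perceived groups:

```python
def group_by_proximity(points, threshold):
    """Toy sketch of the proximity law: points closer than `threshold`
    are linked, and connected components are read as perceived groups."""
    groups = []
    for p in points:
        # Find existing groups that p is close to, then merge them with p.
        near = [g for g in groups
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= threshold ** 2
                       for q in g)]
        merged = [p] + [q for g in near for q in g]
        groups = [g for g in groups if g not in near] + [merged]
    return groups

# A row of dots with a wide gap in the middle reads as two groups,
# as the proximity law predicts; widen the threshold and it reads as one.
dots = [(0, 0), (1, 0), (2, 0), (6, 0), (7, 0), (8, 0)]
assert len(group_by_proximity(dots, threshold=1.5)) == 2
assert len(group_by_proximity(dots, threshold=10.0)) == 1
```

The threshold plays the role of the (context-dependent) distance at which elements stop "hanging together"; the perceptual law itself specifies no such fixed number.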

Law of Similarity

Law of similarity

The law of similarity states that elements within an assortment of objects are perceptually grouped together if they are similar to each other. This similarity can occur in the form of shape, colour, shading or other qualities. For example, the figure illustrating the law of similarity portrays 36 circles, all an equal distance apart from one another, forming a square. In this depiction, 18 of the circles are shaded dark, and 18 of the circles are shaded light. We perceive the dark circles as grouped together and the light circles as grouped together, forming six horizontal lines within the square of circles. This perception of lines is due to the law of similarity.

Law of Closure

Law of closure

Gestalt psychologists believed that humans tend to perceive objects as complete rather than focusing on the gaps that the object might contain. For example, a circle has good Gestalt in terms of completeness. However, we will also perceive an incomplete circle as a complete circle. That tendency to complete shapes and figures is called closure. The law of closure states that individuals perceive objects such as shapes, letters, pictures, etc., as being whole when they are not complete. Specifically, when parts of a whole picture are missing, our perception fills in the visual gap. Research shows that the reason the mind completes a regular figure that is not perceived through sensation is to increase the regularity of surrounding stimuli. For example, the figure that depicts the law of closure portrays what we perceive as a circle on the left side of the image and a rectangle on the right side of the image. However, gaps are present in the shapes. If the law of closure did not exist, the image would depict an assortment of different lines with different lengths, rotations, and curvatures—but with the law of closure, we perceptually combine the lines into whole shapes.

Law of Symmetry

The law of symmetry states that the mind perceives objects as being symmetrical and forming around a center point. It is perceptually pleasing to divide objects into an even number of symmetrical parts. Therefore, when two symmetrical elements are unconnected the mind perceptually connects them to form a coherent shape. Similarities between symmetrical objects increase the likelihood that objects are grouped to form a combined symmetrical object. For example, the figure depicting the law of symmetry shows a configuration of square and curled brackets. When the image is perceived, we tend to observe three pairs of symmetrical brackets rather than six individual brackets.

Law of Common Fate

The law of common fate states that elements that move together are perceived as belonging together. Experiments using the visual sensory modality found that the movement of elements of an object produces paths that individuals perceive the objects to be on. We perceive elements of objects to have trends of motion, which indicate the path the object is on. The law of common fate implies the grouping together of objects that have the same trend of motion and are therefore on the same path. For example, if there is an array of dots and half the dots are moving upward while the other half are moving downward, we perceive the upward-moving dots and the downward-moving dots as two distinct units.
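The dot example above can be sketched as grouping by motion direction (again an illustrative analogy, not part of the article): dots sharing a velocity trend form one perceptual unit.

```python
def group_by_common_fate(dots):
    """Split moving dots into groups by the sign of their vertical velocity."""
    up = [d for d in dots if d["vy"] > 0]
    down = [d for d in dots if d["vy"] < 0]
    return up, down

# Ten dots: half drift upward, half drift downward.
dots = ([{"x": i, "vy": +1.0} for i in range(5)] +
        [{"x": i, "vy": -1.0} for i in range(5)])
up, down = group_by_common_fate(dots)
print(len(up), len(down))  # 5 5
```

Note that position plays no role here: unlike proximity, common fate groups elements purely by their shared trend of motion.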

Law of Continuity

The law of continuity (also known as the law of good continuation) states that elements of objects tend to be grouped together, and therefore integrated into perceptual wholes, if they are aligned within an object. Where two objects intersect, individuals tend to perceive each as a single uninterrupted entity; the stimuli remain distinct even where they overlap. We are less likely to group elements with sharp, abrupt directional changes as being one object.

Law of Past Experience

The law of past experience implies that under some circumstances visual stimuli are categorized according to past experience. If two objects tend to be observed within close proximity, or small temporal intervals, the objects are more likely to be perceived together. For example, the English language contains 26 letters that are grouped to form words using a set of rules. If an individual reads an English word they have never seen, they use the law of past experience to interpret the letters "L" and "I" as two letters beside each other, rather than using the law of closure to combine the letters and interpret the object as an uppercase U.

Music

An example of the Gestalt movement in effect, as it is both a process and a result, is a musical sequence. People are able to recognise a sequence of perhaps six or seven notes even when it is transposed into a different tuning or key.

Problem solving and insight

Gestalt psychology contributed to the scientific study of problem solving. In fact, the early experimental work of the Gestaltists in Germany marks the beginning of the scientific study of problem solving. Later this experimental work continued through the 1960s and early 1970s with research conducted on relatively simple (but novel for participants) laboratory tasks of problem solving.
Given Gestalt psychology's focus on the whole, it was natural for Gestalt psychologists to study problem solving from the perspective of insight, seeking to understand the process by which organisms sometimes suddenly transition from having no idea how to solve a problem to instantly understanding the whole problem and its solution. In a famous set of experiments, Köhler gave chimpanzees some boxes and placed food high off the ground; after some time, the chimpanzees appeared to suddenly realize that they could stack the boxes on top of each other to reach the food.
Max Wertheimer distinguished two kinds of thinking: productive thinking and reproductive thinking. Productive thinking is solving a problem based on insight—a quick, creative, unplanned response to situations and environmental interaction. Reproductive thinking is solving a problem deliberately based on previous experience and knowledge. Reproductive thinking proceeds algorithmically—a problem solver reproduces a series of steps from memory, knowing that they will lead to a solution—or by trial and error.
Karl Duncker, another Gestalt psychologist who studied problem solving, coined the term functional fixedness for describing the difficulties in both visual perception and problem solving that arise from the fact that one element of a whole situation already has a (fixed) function that has to be changed in order to perceive something or find the solution to a problem.
Abraham Luchins also studied problem solving from the perspective of Gestalt psychology. He is well known for his research on the role of mental set (Einstellung effect), which he demonstrated using a series of problems having to do with refilling water jars.
The psychologist Perkins argues that insight involves three processes:
  1. An unconscious leap in thinking.
  2. Increased speed of mental processing.
  3. Short-circuiting of normal reasoning.
Views opposing the Gestalt account of insight include:
  1. The nothing-special view
  2. The neo-Gestalt view
  3. The three-process view

Fuzzy-trace theory of memory

Fuzzy-trace theory, a dual process model of memory and reasoning, was also derived from Gestalt psychology. Fuzzy-trace theory posits that we encode information into two separate traces: verbatim and gist. Information stored in verbatim is exact memory for detail (the individual parts of a pattern, for example) while information stored in gist is semantic and conceptual (what we perceive the pattern to be). The effects seen in Gestalt psychology can be attributed to the way we encode information as gist.

Legacy

Gestalt psychology struggled to precisely define terms like Prägnanz, to make specific behavioral predictions, and to articulate testable models of underlying neural mechanisms. It was criticized as being merely descriptive. These shortcomings led, by the mid-20th century, to growing dissatisfaction with Gestaltism and a subsequent decline in its impact on psychology. Despite this decline, Gestalt psychology has formed the basis of much further research into the perception of patterns and objects and of research into behavior, thinking, problem solving and psychopathology.

Support from cybernetics and neurology

In the 1940s and 1950s, laboratory research in neurology and in what became known as cybernetics on the mechanism of frogs' eyes indicated that perception of 'gestalts' (in particular gestalts in motion) is perhaps more primitive and fundamental than 'seeing' as such:
A frog hunts on land by vision... He has no fovea, or region of greatest acuity in vision, upon which he must center a part of the image... The frog does not seem to see or, at any rate, is not concerned with the detail of stationary parts of the world around him. He will starve to death surrounded by food if it is not moving. His choice of food is determined only by size and movement. He will leap to capture any object the size of an insect or worm, providing it moves like one. He can be fooled easily not only by a piece of dangled meat but by any moving small object... He does remember a moving thing provided it stays within his field of vision and he is not distracted.
The lowest-level concepts related to visual perception for a human being probably differ little from the concepts of a frog. In any case, the structure of the retina in mammals and in human beings is the same as in amphibians. The phenomenon of distortion of perception of an image stabilized on the retina gives some idea of the concepts of the subsequent levels of the hierarchy. This is a very interesting phenomenon. When a person looks at an immobile object, "fixes" it with his eyes, the eyeballs do not remain absolutely immobile; they make small involuntary movements. As a result the image of the object on the retina is constantly in motion, slowly drifting and jumping back to the point of maximum sensitivity. The image "marks time" in the vicinity of this point.

Quantum cognition modeling

Similarities between Gestalt phenomena and quantum mechanics have been pointed out by, among others, the chemist Anton Amann, who commented that "similarities between Gestalt perception and quantum mechanics are on a level of a parable" yet may give useful insight nonetheless. The physicist Elio Conte and co-workers have proposed abstract mathematical models to describe the time dynamics of cognitive associations with mathematical tools borrowed from quantum mechanics, and have discussed psychology experiments in this context. A similar approach has been suggested by the physicists David Bohm and Basil Hiley and the philosopher Paavo Pylkkänen, with the notion that mind and matter both emerge from an "implicate order". The models involve non-commutative mathematics; such models account for situations in which the outcome of two measurements performed one after the other can depend on the order in which they are performed, a pertinent feature for psychological processes, as an experiment performed on a conscious person may influence the outcome of a subsequent experiment by changing the state of mind of that person.
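The non-commutativity such models rely on can be seen in a minimal numerical sketch (purely illustrative, not any of the cited authors' actual models): applying two 2×2 "measurement" operators in opposite orders gives different results.

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two projection-like operators; the order of application matters.
A = [[1, 0], [0, 0]]          # keep only the first component
B = [[0.5, 0.5], [0.5, 0.5]]  # project onto the diagonal
print(matmul(A, B) == matmul(B, A))  # False: AB != BA
```

Because `AB != BA`, "measuring" with A then B leaves the system in a different state than measuring with B then A, which is the order-dependence these models use to mirror how one question can change a person's answer to the next.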

Use in contemporary social psychology

The halo effect can be explained through the application of Gestalt theories to social information processing. Constructive theories of social cognition operate through the expectations of individuals: once a person has been perceived in a positive manner, the one judging them tends to continue viewing them positively. Gestalt theories of perception reinforce the tendency to perceive actions and characteristics as a whole rather than as isolated parts, so humans are inclined to build a coherent and consistent impression of objects and behaviours in order to achieve an acceptable shape and form. The halo effect is a cognitive bias that occurs during impression formation, and it can be influenced by physical characteristics, social status and many other attributes. It can also have real repercussions on an individual's perception of reality, constructing negative or positive images of other individuals or situations, which can lead to self-fulfilling prophecies, stereotyping, or even discrimination.

Contemporary cognitive and perceptual psychology

Some of the central criticisms of Gestaltism are based on the preference Gestaltists are deemed to have for theory over data, and on a lack of quantitative research supporting Gestalt ideas. This is not necessarily a fair criticism, as highlighted by a recent collection of quantitative research on Gestalt perception. Researchers continue to test hypotheses about the mechanisms underlying Gestalt principles such as the principle of similarity.
Other important criticisms concern the lack of definition and support for the many physiological assumptions made by gestaltists and lack of theoretical coherence in modern Gestalt psychology.
In some scholarly communities, such as cognitive psychology and computational neuroscience, gestalt theories of perception are criticized for being descriptive rather than explanatory in nature. For this reason, they are viewed by some as redundant or uninformative. For example, a textbook on visual perception states that, "The physiological theory of the gestaltists has fallen by the wayside, leaving us with a set of descriptive principles, but without a model of perceptual processing. Indeed, some of their 'laws' of perceptual organisation today sound vague and inadequate. What is meant by a 'good' or 'simple' shape, for example?"
One historian of psychology has argued that Gestalt psychologists first discovered many principles later championed by cognitive psychology, including schemas and prototypes. Another psychologist has argued that the Gestalt psychologists made a lasting contribution by showing how the study of illusions can help scientists understand essential aspects of how the visual system normally functions, not merely how it breaks down.

Use in design

The gestalt laws are used in user interface design. The laws of similarity and proximity can, for example, be used as guides for placing radio buttons. They may also be used in designing computers and software for more intuitive human use. Examples include the design and layout of a desktop's shortcuts in rows and columns.
