Friday, July 2, 2021

Confirmation bias

From Wikipedia, the free encyclopedia

Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values. People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs. Confirmation bias cannot be eliminated entirely, but it can be managed, for example, by education and training in critical thinking skills.

Confirmation bias is a broad construct covering a number of explanations. Biased search for information, biased interpretation of this information, and biased memory recall have been invoked to explain four specific effects: 1) attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence); 2) belief perseverance (when beliefs persist after the evidence for them is shown to be false); 3) the irrational primacy effect (a greater reliance on information encountered early in a series); and 4) illusory correlation (when people falsely perceive an association between two events or situations).

A series of psychological experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives (myside bias, an alternative name for confirmation bias). In general, current explanations for the observed biases reveal the limited human capacity to process the complete set of information available, leading to a failure to investigate in a neutral, scientific way.

Flawed decisions due to confirmation bias have been found in political, organizational, financial and scientific contexts. These biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. For example, confirmation bias produces systematic errors in scientific research based on inductive reasoning (the gradual accumulation of supportive evidence). Similarly, a police detective may identify a suspect early in an investigation, but then may only seek confirming rather than disconfirming evidence. A medical practitioner may prematurely focus on a particular disorder early in a diagnostic session, and then seek only confirming evidence. In social media, confirmation bias is amplified by the use of filter bubbles, or "algorithmic editing", which display to individuals only information they are likely to agree with, while excluding opposing views.

Definition and context

Confirmation bias, a phrase coined by English psychologist Peter Wason, is the tendency of people to favor information that confirms or strengthens their beliefs or values; such beliefs are difficult to dislodge once affirmed. Confirmation bias is an example of a cognitive bias.

Confirmation bias (or confirmatory bias) has also been termed myside bias. "Congeniality bias" has also been used.

Confirmation biases are effects in information processing. They differ from what is sometimes called the behavioral confirmation effect, commonly known as self-fulfilling prophecy, in which a person's expectations influence their own behavior, bringing about the expected result.

Some psychologists restrict the term "confirmation bias" to selective collection of evidence that supports what one already believes while ignoring or rejecting evidence that supports a different conclusion. Others apply the term more broadly to the tendency to preserve one's existing beliefs when searching for evidence, interpreting it, or recalling it from memory.

Confirmation bias is a result of automatic, unintentional strategies rather than deliberate deception. Confirmation bias cannot be avoided or eliminated entirely, but only managed by improving education and critical thinking skills.

Confirmation bias is a broad construct that has a number of possible explanations, namely: hypothesis-testing by falsification, hypothesis testing by positive test strategy, and information processing explanations.

Types of confirmation bias

Biased search for information

Confirmation bias has been described as an internal "yes man", echoing back a person's beliefs like Charles Dickens' character Uriah Heep

Experiments have found repeatedly that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with their current hypothesis. Rather than searching through all the relevant evidence, they phrase questions to receive an affirmative answer that supports their theory. They look for the consequences that they would expect if their hypothesis were true, rather than what would happen if it were false. For example, someone using yes/no questions to find a number they suspect to be the number 3 might ask, "Is it an odd number?" People prefer this type of question, called a "positive test", even when a negative test such as "Is it an even number?" would yield exactly the same information. However, this does not mean that people seek tests that guarantee a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic.

The preference for positive tests in itself is not a bias, since positive tests can be highly informative. However, in combination with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true. In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior. Thus any search for evidence in favor of a hypothesis is likely to succeed. One illustration of this is the way the phrasing of a question can significantly change the answer. For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"

Even a small change in a question's wording can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case. Participants read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take them away for long periods of time. When asked, "Which parent should have custody of the child?" the majority of participants chose Parent B, looking mainly for positive attributes. However, when asked, "Which parent should be denied custody of the child?" they looked for negative attributes and the majority answered that Parent B should be denied custody, implying that Parent A should have custody.

Similar studies have demonstrated how people engage in a biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests. In an initial experiment, participants rated another person on the introversion–extroversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them. A later version of the experiment gave the participants less presumptive questions to choose from, such as, "Do you shy away from social interactions?" Participants preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.

Personality traits influence and interact with biased search processes. Individuals vary in their ability to defend their attitudes from external attacks in relation to selective exposure. Selective exposure occurs when individuals search for information that is consistent, rather than inconsistent, with their personal beliefs. An experiment examined the extent to which individuals could refute arguments that contradicted their personal beliefs. People with high confidence levels more readily seek out information that contradicts their personal position in order to form an argument, whereas individuals with low confidence levels do not seek out contradictory information and prefer information that supports their personal position. People generate and evaluate evidence in arguments that are biased towards their own beliefs and opinions, and heightened confidence levels decrease preference for information that supports individuals' personal beliefs.

Another experiment gave participants a complex rule-discovery task that involved moving objects simulated by a computer. Objects on the computer screen followed specific laws, which the participants had to figure out; to test their hypotheses, they could "fire" objects across the screen. Despite making many attempts over a ten-hour session, none of the participants figured out the rules of the system. They typically attempted to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing objective evidence that refuted their working hypotheses, they frequently continued doing the same tests. Some of the participants were taught proper hypothesis-testing, but these instructions had almost no effect.

Biased interpretation of information

Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.

Michael Shermer

Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.

A team at Stanford University conducted an experiment involving participants who felt strongly about capital punishment, with half in favor and half against it. Each participant read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the participants were asked whether their opinions had changed. Then, they read a more detailed account of each study's procedure and had to rate whether the research was well-conducted and convincing. In fact, the studies were fictional. Half the participants were told that one kind of study supported the deterrent effect and the other undermined it, while for other participants the conclusions were swapped.

The participants, whether supporters or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Participants described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways. Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, "The research didn't cover a long enough period of time," while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented." The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as "disconfirmation bias", has been supported by other experiments.

Another study of biased interpretation occurred during the 2004 U.S. presidential election and involved participants who reported having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not each individual's statements were inconsistent. There were strong differences in these evaluations, with participants much more likely to interpret statements from the candidate they opposed as contradictory.

An MRI scanner allowed researchers to examine how the human brain deals with dissonant information

In this experiment, the participants made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As participants evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the participants were actively reducing the cognitive dissonance induced by reading about their favored candidate's irrational or hypocritical behavior.

Biases in belief interpretation are persistent, regardless of intelligence level. Participants in an experiment took the SAT test (a college admissions test used in the United States) to assess their intelligence levels. They then read information regarding safety concerns for vehicles, and the experimenters manipulated the national origin of the car. American participants rated whether the car should be banned on a six-point scale, where one indicated "definitely yes" and six indicated "definitely no". Participants first evaluated whether they would allow a dangerous German car on American streets and a dangerous American car on German streets. Participants believed that the dangerous German car on American streets should be banned more quickly than the dangerous American car on German streets. There was no difference among intelligence levels in the rate at which participants would ban a car.

Biased interpretation is not restricted to emotionally significant topics. In another experiment, participants were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.

Biased memory recall of information

People may remember evidence selectively to reinforce their expectations, even if they gather and interpret evidence in a neutral manner. This effect is called "selective recall", "confirmatory memory", or "access-biased memory". Psychological theories differ in their predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled than information that does not match. Some alternative approaches say that surprising information stands out and so is memorable. Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.

In one study, participants read a profile of a woman which described a mix of introverted and extroverted behaviors. They later had to recall examples of her introversion and extroversion. One group was told this was to assess the woman for a job as a librarian, while a second group was told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" group recalling more extroverted behavior. A selective memory effect has also been shown in experiments that manipulate the desirability of personality types. In one of these, a group of participants were shown evidence that extroverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated study, participants were asked to recall events from their lives in which they had been either introverted or extroverted. Each group of participants provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.

Changes in emotional states can also influence memory recall. Participants rated how they felt when they had first learned that O.J. Simpson had been acquitted of murder charges. They described their emotional reactions and confidence regarding the verdict one week, two months, and one year after the trial. Results indicated that participants' assessments of Simpson's guilt changed over time. The more participants' opinion of the verdict had changed, the less stable were their memories of their initial emotional reactions. When participants recalled their initial emotional reactions two months and a year later, past appraisals closely resembled current appraisals of emotion. People demonstrate sizable myside bias when discussing their opinions on controversial topics. Memory recall and construction of experiences undergo revision in relation to corresponding emotional states.

Myside bias has been shown to influence the accuracy of memory recall. In an experiment, widows and widowers rated the intensity of their experienced grief six months and five years after the deaths of their spouses. Participants noted a higher experience of grief at six months rather than at five years. Yet, when the participants were asked after five years how they had felt six months after the death of their significant other, the intensity of grief participants recalled was highly correlated with their current level of grief. Individuals appear to utilize their current emotional states to analyze how they must have felt when experiencing past events. Emotional memories are reconstructed by current emotional states.

One study showed how selective memory can maintain belief in extrasensory perception (ESP). Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, participants recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.

Individual differences

Myside bias was once believed to be correlated with intelligence; however, studies have shown that myside bias is more strongly influenced by the ability to think rationally than by level of intelligence. Myside bias can cause an inability to effectively and logically evaluate the opposite side of an argument. Studies have suggested that myside bias reflects an absence of "active open-mindedness", meaning the active search for reasons why an initial idea may be wrong. Typically, myside bias is operationalized in empirical studies as the quantity of evidence participants use in support of their own side compared with the opposite side.

A study has found individual differences in myside bias. This study investigated individual differences that are acquired through learning in a cultural context and are mutable. The researcher found important individual differences in argumentation. Studies have suggested that individual differences such as deductive reasoning ability, ability to overcome belief bias, epistemological understanding, and thinking disposition are significant predictors of reasoning and of generating arguments, counterarguments, and rebuttals.

A study by Christopher Wolfe and Anne Britt also investigated how participants' views of "what makes a good argument?" can be a source of myside bias that influences the way a person formulates their own arguments. The study investigated individual differences of argumentation schema and asked participants to write essays. The participants were randomly assigned to write essays either for or against their preferred side of an argument and were given research instructions that took either a balanced or an unrestricted approach. The balanced-research instructions directed participants to create a "balanced" argument, i.e., that included both pros and cons; the unrestricted-research instructions included nothing on how to create the argument.

Overall, the results revealed that the balanced-research instructions significantly increased the incidence of opposing information in arguments. The data also revealed that personal belief is not in itself a source of myside bias; however, participants who believe that a good argument is one based on facts are more likely to exhibit myside bias than other participants. This evidence is consistent with the claims proposed in Baron's article: that people's opinions about what makes for good thinking can influence how arguments are generated.

Discovery

Informal observations


Before psychological research on confirmation bias, the phenomenon had been observed throughout history. The Greek historian Thucydides (c. 460 BC – c. 395 BC) wrote of misguided reason in The Peloponnesian War: "... for it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy".[44] Italian poet Dante Alighieri (1265–1321) noted it in the Divine Comedy, in which St. Thomas Aquinas cautions Dante upon meeting in Paradise, "opinion—hasty—often can incline to the wrong side, and then affection for one's own opinion binds, confines the mind".[45] Ibn Khaldun noticed the same effect in his Muqaddimah:

Untruth naturally afflicts historical information. There are various reasons that make this unavoidable. One of them is partisanship for opinions and schools. [...] if the soul is infected with partisanship for a particular opinion or sect, it accepts without a moment's hesitation the information that is agreeable to it. Prejudice and partisanship obscure the critical faculty and preclude critical investigation. The result is that falsehoods are accepted and transmitted.

In the Novum Organum, English philosopher and scientist Francis Bacon (1561–1626) noted that biased assessment of evidence drove "all superstitions, whether in astrology, dreams, omens, divine judgments or the like". He wrote:

The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.]

In the second volume of The World as Will and Representation (1844), German philosopher Arthur Schopenhauer observed that "An adopted hypothesis gives us lynx-eyes for everything that confirms it and makes us blind to everything that contradicts it." In his 1897 essay "What Is Art?", Russian novelist Leo Tolstoy wrote:

I know that most men—not only those considered clever, but even those who are very clever, and capable of understanding most difficult scientific, mathematical, or philosophic problems—can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty—conclusions of which they are proud, which they have taught to others, and on which they have built their lives.

Hypothesis-testing (falsification) explanation (Wason)

In Peter Wason's initial experiment published in 1960 (which does not mention the term "confirmation bias"), he repeatedly challenged participants to identify a rule applying to triples of numbers. They were told that (2,4,6) fits the rule. They generated triples, and the experimenter told them whether or not each triple conformed to the rule.

The actual rule was simply "any ascending sequence", but participants had great difficulty in finding it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last". The participants seemed to test only positive examples—triples that obeyed their hypothesized rule. For example, if they thought the rule was, "Each number is two greater than its predecessor," they would offer a triple that fitted (confirmed) this rule, such as (11,13,15) rather than a triple that violated (falsified) it, such as (11,12,19).
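The logic of the task can be made concrete with a minimal sketch (illustrative only; the rule names, triples, and code below are assumptions, not Wason's procedure). Because the narrow hypothesis is contained within the broad true rule, every triple chosen to fit the hypothesis necessarily receives a "yes", so positive tests can never expose the error:

    # Minimal sketch of the 2-4-6 task (assumed, simplified setup)
    def true_rule(t):
        # The experimenter's actual rule: any strictly ascending triple
        return t[0] < t[1] < t[2]

    def hypothesis(t):
        # A participant's narrower guess: each number is two greater than its predecessor
        return t[1] - t[0] == 2 and t[2] - t[1] == 2

    positive_test = (11, 13, 15)   # fits the participant's hypothesis
    negative_test = (11, 12, 19)   # deliberately violates the hypothesis

    print(hypothesis(positive_test), true_rule(positive_test))
    # True True: the triple fits both the guess and the real rule, so nothing is learned
    print(hypothesis(negative_test), true_rule(negative_test))
    # False True: the triple breaks the guess yet still fits the real rule,
    # which falsifies the guess -- something only a negative test can reveal here

In this configuration the hypothesized rule is a subset of the true rule, so only tests that violate the hypothesis can expose the discrepancy.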

Wason interpreted his results as showing a preference for confirmation over falsification, hence he coined the term "confirmation bias". Wason also used confirmation bias to explain the results of his selection task experiment. Participants repeatedly performed badly on various forms of this test, in most cases ignoring information that could potentially refute (falsify) the specified rule.

Hypothesis testing (positive test strategy) explanation (Klayman and Ha)

Klayman and Ha's 1987 paper argues that the Wason experiments do not actually demonstrate a bias towards confirmation, but instead a tendency to make tests consistent with the working hypothesis. They called this the "positive test strategy". This strategy is an example of a heuristic: a reasoning shortcut that is imperfect but easy to compute. Klayman and Ha used Bayesian probability and information theory as their standard of hypothesis-testing, rather than the falsificationism used by Wason.

According to these ideas, each answer to a question yields a different amount of information, which depends on the person's prior beliefs. Thus a scientific test of a hypothesis is one that is expected to produce the most information. Since the information content depends on initial probabilities, a positive test can either be highly informative or uninformative. Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability. In this case, positive tests are usually more informative than negative tests. However, in Wason's rule discovery task the answer—three numbers in ascending order—is very broad, so positive tests are unlikely to yield informative answers.

Klayman and Ha supported their analysis by citing an experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". This avoided implying that the aim was to find a low-probability rule. Participants had much more success with this version of the experiment.
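Klayman and Ha's information-theoretic point can be illustrated with a toy calculation (a sketch under simplifying assumptions, not their actual analysis): the expected information from a yes/no test equals the Shannon entropy of the answer, which is near zero when one answer is almost guaranteed and largest when the outcome is most uncertain.

    import math

    def expected_information(p_yes):
        # Shannon entropy (in bits) of a yes/no answer with prior probability p_yes
        if p_yes in (0.0, 1.0):
            return 0.0
        p_no = 1.0 - p_yes
        return -(p_yes * math.log2(p_yes) + p_no * math.log2(p_no))

    # In Wason's task the true rule is so broad that a positive test is almost
    # certain to be answered "yes", so it yields little information.
    print(round(expected_information(0.95), 2))   # about 0.29 bits

    # When the sought-after rule is rare and specific, the outcome of a positive
    # test is far less predictable, and the test is correspondingly informative.
    print(round(expected_information(0.5), 2))    # 1.0 bit

On this view, whether the positive test strategy is reasonable depends on how broad the true rule is, which is exactly the contrast between Wason's task and more realistic problems.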

If the true rule (T) encompasses the current hypothesis (H), then positive tests (examining an H to see if it is T) will not show that the hypothesis is false.
If the true rule (T) overlaps the current hypothesis (H), then either a negative test or a positive test can potentially falsify H.
When the working hypothesis (H) includes the true rule (T), then positive tests are the only way to falsify H.

In light of this and other critiques, the focus of research moved away from confirmation versus falsification of a hypothesis, to examining whether people test hypotheses in an informative way, or an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.

Information processing explanations

There are currently three main information processing explanations of confirmation bias, plus a recent addition.

Cognitive versus motivational

Happy events are more likely to be remembered.

According to Robert MacCoun, most biased evidence processing occurs through a combination of "cold" (cognitive) and "hot" (motivated) mechanisms.

Cognitive explanations for confirmation bias are based on limitations in people's ability to handle complex tasks, and the shortcuts, called heuristics, that they use. For example, people may judge the reliability of evidence by using the availability heuristic, that is, how readily a particular idea comes to mind. It is also possible that people can only focus on one thought at a time, and so find it difficult to test alternative hypotheses in parallel. Another heuristic is the positive test strategy identified by Klayman and Ha, in which people test a hypothesis by examining cases where they expect a property or event to occur. This heuristic avoids the difficult or impossible task of working out how diagnostic each possible question will be. However, it is not universally reliable, so people can overlook challenges to their existing beliefs.

Motivational explanations involve an effect of desire on belief. It is known that people prefer positive thoughts over negative ones in a number of ways: this is called the "Pollyanna principle". Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true. According to experiments that manipulate the desirability of the conclusion, people demand a high standard of evidence for unpalatable ideas and a low standard for preferred ideas. In other words, they ask, "Can I believe this?" for some suggestions and, "Must I believe this?" for others. Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information. Social psychologist Ziva Kunda combines the cognitive and motivational theories, arguing that motivation creates the bias, but cognitive factors determine the size of the effect.

Cost-benefit

Explanations in terms of cost-benefit analysis assume that people do not just test hypotheses in a disinterested way, but assess the costs of different errors. Using ideas from evolutionary psychology, James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors. For example, employers might ask one-sided questions in job interviews because they are focused on weeding out unsuitable candidates. Yaacov Trope and Akiva Liberman's refinement of this theory assumes that people compare the two different kinds of error: accepting a false hypothesis or rejecting a true hypothesis. For instance, someone who underestimates a friend's honesty might treat him or her suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way. When someone gives an initial impression of being introverted or extroverted, questions that match that impression come across as more empathic. This suggests that when talking to someone who seems to be an introvert, it is a sign of better social skills to ask, "Do you feel awkward in social situations?" rather than, "Do you like noisy parties?" The connection between confirmation bias and social skills was corroborated by a study of how college students get to know other people. Highly self-monitoring students, who are more sensitive to their environment and to social norms, asked more matching questions when interviewing a high-status staff member than when getting to know fellow students.

Exploratory versus confirmatory

Psychologists Jennifer Lerner and Philip Tetlock distinguish two different kinds of thinking process. Exploratory thought neutrally considers multiple points of view and tries to anticipate all possible objections to a particular position, while confirmatory thought seeks to justify a specific point of view. Lerner and Tetlock say that when people expect to justify their position to others whose views they already know, they will tend to adopt a similar position to those people, and then use confirmatory thought to bolster their own credibility. However, if the external parties are overly aggressive or critical, people will disengage from thought altogether, and simply assert their personal opinions without justification. Lerner and Tetlock say that people only push themselves to think critically and logically when they know in advance they will need to explain themselves to others who are well-informed, genuinely interested in the truth, and whose views they don't already know. Because those conditions rarely exist, they argue, most people are using confirmatory thought most of the time.

Make-believe

Developmental psychologist Eve Whitmore has argued that beliefs and biases involved in confirmation bias have their roots in childhood coping through make-believe, which becomes "the basis for more complex forms of self-deception and illusion into adulthood." The friction brought on by questioning as an adolescent with developing critical thinking can lead to the rationalization of false beliefs, and the habit of such rationalization can become unconscious over the years.

Real-world effects

Social media

In social media, confirmation bias is amplified by the use of filter bubbles, or "algorithmic editing", which displays to individuals only information they are likely to agree with, while excluding opposing views. Some have argued that confirmation bias is the reason why society can never escape from filter bubbles, because individuals are psychologically hardwired to seek information that agrees with their preexisting values and beliefs. Others have further argued that the mixture of the two is degrading democracy—claiming that this "algorithmic editing" removes diverse viewpoints and information—and that unless filter bubble algorithms are removed, voters will be unable to make fully informed political decisions. 

The rise of social media has contributed greatly to the rapid spread of fake news, that is, false and misleading information that is presented as credible news from a seemingly reliable source. Confirmation bias (selecting or reinterpreting evidence to support one's beliefs) is one of three main hurdles cited as to why critical thinking goes astray in these circumstances. The other two are shortcut heuristics (when overwhelmed or short of time, people rely on simple rules such as group consensus or trusting an expert or role model) and social goals (social motivation or peer pressure can interfere with objective analysis of facts at hand).

In combating the spread of fake news, social media sites have considered turning toward "digital nudging". This can currently take two forms: nudging of information and nudging of presentation. Nudging of information entails social media sites providing a disclaimer or label that questions or warns users about the validity of the source, while nudging of presentation involves exposing users to new information that they may not have sought out but that could introduce them to viewpoints countering their own confirmation biases.

Science and scientific research

A distinguishing feature of scientific thinking is the search for confirming or supportive evidence (inductive reasoning) as well as falsifying evidence (deductive reasoning). Inductive research in particular can have a serious problem with confirmation bias.

Many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data. The assessment of the quality of scientific studies seems to be particularly vulnerable to confirmation bias. Several studies have shown that scientists rate studies that report findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with their previous beliefs.

However, assuming that the research question is relevant, the experimental design adequate and the data are clearly and comprehensively described, the empirical data obtained should be important to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions. In practice, researchers may misunderstand, misinterpret, or not read at all studies that contradict their preconceptions, or wrongly cite them anyway as if they actually supported their claims.

Further, confirmation biases can sustain scientific theories or research programs in the face of inadequate or even contradictory evidence. The discipline of parapsychology is often cited as an example in the context of whether it is a protoscience or a pseudoscience.

An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias. For example, experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias.

The social process of peer review aims to mitigate the effect of individual scientists' biases, even though the peer review process itself may be susceptible to such biases. Confirmation bias may thus be especially harmful to objective evaluations regarding nonconforming results since biased individuals may regard opposing evidence to be weak in principle and give little serious thought to revising their beliefs. Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review.

Media and fact-checking

Sensationalist newspapers in the 1850s and later led to a gradual need for more factual media. Colin Dickey has described the subsequent evolution of fact-checking. Key elements were the establishment of the Associated Press in the 1850s (which needed short, factual material), Ralph Pulitzer of the New York World (his Bureau of Accuracy and Fair Play, 1912), Henry Luce and Time magazine (original working title: Facts), and the famous fact-checking department of The New Yorker. More recently, the mainstream media has come under severe economic threat from online startups. In addition, the rapid spread of misinformation and conspiracy theories via social media is slowly creeping into mainstream media. One solution is for more media staff to be assigned a fact-checking role, as at The Washington Post, for example. Independent fact-checking organisations such as Politifact have also become prominent.

However, the fact-checking of media reports and investigations is subject to the same confirmation bias as that for peer review of scientific research. This bias has been little studied so far. For example, a fact-checker with progressive political views might be more critical than necessary of a factual report from a conservative commentator. Another example is that facts are often explained with ambiguous words, so that progressives and conservatives may interpret the words differently according to their own beliefs.

Finance

Confirmation bias can lead investors to be overconfident, ignoring evidence that their strategies will lose money. In studies of political stock markets, investors made more profit when they resisted bias. For example, participants who interpreted a candidate's debate performance in a neutral rather than partisan way were more likely to profit. To combat the effect of confirmation bias, investors can try to adopt a contrary viewpoint "for the sake of argument". In one technique, they imagine that their investments have collapsed and ask themselves why this might happen.

Medicine and health

Cognitive biases are important variables in clinical decision-making by medical general practitioners (GPs) and medical specialists. Two important ones are confirmation bias and the overlapping availability bias. A GP may make a diagnosis early on during an examination, and then seek confirming evidence rather than falsifying evidence. This cognitive error is partly caused by the availability of evidence about the supposed disorder being diagnosed. For example, the client may have mentioned the disorder, or the GP may have recently read a much-discussed paper about the disorder. The basis of this cognitive shortcut or heuristic (termed anchoring) is that the doctor does not consider multiple possibilities based on evidence, but prematurely latches on (or anchors to) a single cause. In emergency medicine, because of time pressure, there is a high density of decision-making, and shortcuts are frequently applied. The potential failure rate of these cognitive decisions needs to be managed by education about the 30 or more cognitive biases that can occur, so as to set in place proper debiasing strategies. Confirmation bias may also cause doctors to perform unnecessary medical procedures due to pressure from adamant patients.

Raymond Nickerson, a psychologist, blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine. If a patient recovered, medical authorities counted the treatment as successful, rather than looking for alternative explanations such as that the disease had run its natural course. Biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.

Cognitive therapy was developed by Aaron T. Beck in the early 1960s and has become a popular approach. According to Beck, biased information processing is a factor in depression. His approach teaches people to treat evidence impartially, rather than selectively reinforcing negative outlooks. Phobias and hypochondria have also been shown to involve confirmation bias for threatening information.

Politics, law and policing

Mock trials allow researchers to examine confirmation biases in a realistic setting

Nickerson argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to. Since the evidence in a jury trial can be complex, and jurors often reach decisions about the verdict early on, it is reasonable to expect an attitude polarization effect. The prediction that jurors will become more extreme in their views as they see more evidence has been borne out in experiments with mock trials. Both inquisitorial and adversarial criminal justice systems are affected by confirmation bias.

Confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position. On the other hand, confirmation bias can result in people ignoring or misinterpreting the signs of an imminent or incipient conflict. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that U.S. Navy Admiral Husband E. Kimmel showed confirmation bias when playing down the first signs of the Japanese attack on Pearl Harbor.

A two-decade study of political pundits by Philip E. Tetlock found that, on the whole, their predictions were not much better than chance. Tetlock divided experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. In general, the hedgehogs were much less accurate. Tetlock blamed their failure on confirmation bias, and specifically on their inability to make use of new information that contradicted their existing theories.

In police investigations, a detective may identify a suspect early in an investigation, but then sometimes largely seek supporting or confirming evidence, ignoring or downplaying falsifying evidence.

Social psychology

Social psychologists have identified two tendencies in the way people seek or interpret information about themselves. Self-verification is the drive to reinforce the existing self-image and self-enhancement is the drive to seek positive feedback. Both are served by confirmation biases. In experiments where people are given feedback that conflicts with their self-image, they are less likely to attend to it or remember it than when given self-verifying feedback. They reduce the impact of such information by interpreting it as unreliable. Similar experiments have found a preference for positive feedback, and the people who give it, over negative feedback.

Mass delusions

Confirmation bias can play a key role in the propagation of mass delusions. Witch trials are frequently cited as an example.

Another example is the Seattle windshield pitting epidemic, in which windshields appeared to be damaged by an unknown cause. As news of the apparent wave of damage spread, more and more people checked their windshields, discovered that their windshields too had been damaged, and thus confirmed belief in the supposed epidemic. In fact, the windshields had been damaged beforehand, but the damage went unnoticed until people checked their windshields as the delusion spread.

Paranormal beliefs

One factor in the appeal of alleged psychic readings is that listeners apply a confirmation bias which fits the psychic's statements to their own lives. By making a large number of ambiguous statements in each sitting, the psychic gives the client more opportunities to find a match. This is one of the techniques of cold reading, with which a psychic can deliver a subjectively impressive reading without any prior information about the client. Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective recall of the "hits".

As a striking illustration of confirmation bias in the real world, Nickerson mentions numerological pyramidology: the practice of finding meaning in the proportions of the Egyptian pyramids. There are many different length measurements that can be made of, for example, the Great Pyramid of Giza and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.

Recruitment and selection

Unconscious cognitive bias (including confirmation bias) in job recruitment affects hiring decisions and can potentially hinder a diverse and inclusive workplace. A variety of unconscious biases affect recruitment decisions, but confirmation bias is one of the major ones, especially during the interview stage. The interviewer will often select a candidate who confirms their own beliefs, even though other candidates are equally or better qualified.

Associated effects and outcomes

Polarization of opinion

When people with opposing views interpret new information in a biased way, their views can move even further apart. This is called "attitude polarization". The effect was demonstrated by an experiment that involved drawing a series of red and black balls from one of two concealed "bingo baskets". Participants knew that one basket contained 60 percent black and 40 percent red balls; the other, 40 percent black and 60 percent red. The experimenters looked at what happened when balls of alternating color were drawn in turn, a sequence that does not favor either basket. After each ball was drawn, participants in one group were asked to state out loud their judgments of the probability that the balls were being drawn from one or the other basket. These participants tended to grow more confident with each successive draw—whether they initially thought the basket with 60 percent black balls or the one with 60 percent red balls was the more likely source, their estimate of the probability increased. Another group of participants were asked to state probability estimates only at the end of a sequence of drawn balls, rather than after each ball. They did not show the polarization effect, suggesting that it does not necessarily occur when people simply hold opposing positions, but rather when they openly commit to them.
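A hedged sketch of the normative calculation (not part of the original experiment; the update rule below is standard Bayesian reasoning) shows why alternating draws should leave an unbiased observer at 50/50: each black draw multiplies the odds in favor of the mostly-black basket by 0.6/0.4, each red draw multiplies them by 0.4/0.6, and the factors cancel in pairs.

    # Bayesian updating for the two-basket task (illustrative sketch)
    P_BLACK = {"A": 0.6, "B": 0.4}   # basket A is 60% black, basket B is 40% black

    def posterior_basket_a(draws, prior_a=0.5):
        # Return P(basket A | draws) after updating on each drawn color
        odds = prior_a / (1.0 - prior_a)
        for color in draws:
            if color == "black":
                odds *= P_BLACK["A"] / P_BLACK["B"]              # 0.6 / 0.4
            else:
                odds *= (1 - P_BLACK["A"]) / (1 - P_BLACK["B"])  # 0.4 / 0.6
        return odds / (1.0 + odds)

    alternating = ["black", "red"] * 10   # twenty draws of strictly alternating colors
    print(round(posterior_basket_a(alternating), 3))   # 0.5: the evidence favors neither basket

Growing confidence after such a sequence therefore reflects commitment to an announced position rather than the evidence itself.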

A less abstract study was the Stanford biased interpretation experiment, in which participants with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the participants reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes. In later experiments, participants also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real. Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases, and it was prompted not only by considering mixed evidence, but by merely thinking about the topic.

Charles Taber and Milton Lodge argued that the Stanford team's result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. The Taber and Lodge study used the emotionally charged topics of gun control and affirmative action. They measured the attitudes of their participants towards these issues before and after reading arguments on each side of the debate. Two groups of participants showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, participants chose which information sources to read, from a list prepared by the experimenters. For example, they could read the National Rifle Association's and the Brady Anti-Handgun Coalition's arguments on gun control. Even when instructed to be even-handed, participants were more likely to read arguments that supported their existing attitudes than arguments that did not. This biased search for information correlated well with the polarization effect.

The backfire effect is a name for the finding that, given evidence against their beliefs, people can reject the evidence and believe even more strongly. The phrase was coined by Brendan Nyhan and Jason Reifler in 2010. However, subsequent research has since failed to replicate findings supporting the backfire effect. One study conducted at Ohio State University and George Washington University tested 10,100 participants on 52 different issues expected to trigger a backfire effect. While the findings concluded that individuals are reluctant to embrace facts that contradict their already held ideology, no cases of backfire were detected. The backfire effect has since been noted to be a rare phenomenon rather than a common occurrence.

Persistence of discredited beliefs

[B]eliefs can survive potent logical or empirical challenges. They can survive and even be bolstered by evidence that most uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction of their original evidential bases.

—Lee Ross and Craig Anderson

Confirmation biases provide one plausible explanation for the persistence of beliefs when the initial evidence for them is removed or when they have been sharply contradicted.[1]:187 This belief perseverance effect was first demonstrated experimentally by Festinger, Riecken, and Schachter. These psychologists spent time with a cult whose members were convinced that the world would end on December 21, 1954. After the prediction failed, most believers still clung to their faith. Their book describing this research is aptly named When Prophecy Fails.

The term "belief perseverance," however, was coined in a series of experiments using what is called the "debriefing paradigm": participants read fake evidence for a hypothesis, their attitude change is measured, then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level.

A common finding is that at least some of the initial belief remains even after a full debriefing. In one experiment, participants had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, participants were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.

In another study, participants read job performance ratings of two firefighters, along with their responses to a risk aversion test. This fictional data was arranged to show either a negative or positive association: some participants were told that a risk-taking firefighter did better, while others were told they did less well than a risk-averse colleague. Even if these two case studies were true, they would have been scientifically poor evidence for a conclusion about firefighters in general. However, the participants found them subjectively persuasive. When the case studies were shown to be fictional, participants' belief in a link diminished, but around half of the original effect remained. Follow-up interviews established that the participants had understood the debriefing and taken it seriously. Participants seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.

The continued influence effect is the tendency to believe previously learned misinformation even after it has been corrected. Misinformation can still influence inferences one generates after a correction has occurred.

Preference for early information

Experiments have shown that information is weighted more strongly when it appears early in a series, even when the order is unimportant. For example, people form a more positive impression of someone described as "intelligent, industrious, impulsive, critical, stubborn, envious" than when they are given the same words in reverse order. This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace. Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.

One demonstration of irrational primacy used colored chips supposedly drawn from two urns. Participants were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them. In fact, the colors appeared in a prearranged order. The first thirty draws favored one urn and the next thirty favored the other. The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, participants favored the urn suggested by the initial thirty.
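As a brief sketch (the urn compositions and simplified draw sequences below are assumed for illustration, not taken from the study), the normatively correct posterior depends only on how many chips of each color were drawn, not on the order of the draws, so a rational observer should end the sixty draws back at 50/50 regardless of which urn the early draws favored.

    # Order-invariance of Bayesian updating (illustrative sketch; assumed proportions)
    def posterior_urn1(draws, p_red_1=0.7, p_red_2=0.3):
        odds = 1.0   # equal prior odds for the two urns
        for chip in draws:
            if chip == "red":
                odds *= p_red_1 / p_red_2               # red chips favor urn 1
            else:
                odds *= (1 - p_red_1) / (1 - p_red_2)   # other chips favor urn 2
        return odds / (1.0 + odds)

    early_red = ["red"] * 30 + ["blue"] * 30    # first thirty draws favor urn 1
    early_blue = ["blue"] * 30 + ["red"] * 30   # first thirty draws favor urn 2

    print(round(posterior_urn1(early_red), 3))   # 0.5
    print(round(posterior_urn1(early_blue), 3))  # 0.5: same totals, same answer, order irrelevant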

Another experiment involved a slide show of a single object, seen as just a blur at first and in slightly better focus with each succeeding slide. After each slide, participants had to state their best guess of what the object was. Participants whose early guesses were wrong persisted with those guesses, even when the picture was sufficiently in focus that the object was readily recognizable to other people.

Illusory association between events

Illusory correlation is the tendency to see non-existent correlations in a set of data. This tendency was first demonstrated in a series of experiments in the late 1960s. In one experiment, participants read a set of psychiatric case studies, including responses to the Rorschach inkblot test. The participants reported that the homosexual men in the set were more likely to report seeing buttocks, anuses or sexually ambiguous figures in the inkblots. In fact the fictional case studies had been constructed so that the homosexual men were no more likely to report this imagery or, in one version of the experiment, were less likely to report it than heterosexual men. In a survey, a group of experienced psychoanalysts reported the same set of illusory associations with homosexuality.

Another study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.

Example

Days            Rain    No rain
Arthritis         14          6
No arthritis       7          2

This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior. In judging whether two events, such as illness and bad weather, are correlated, people rely heavily on the number of positive-positive cases: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather). This parallels the reliance on positive tests in hypothesis testing. It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.
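
A short calculation makes the point concrete. Using the illustrative counts in the table above, the phi coefficient (a standard correlation measure for a 2x2 table) is essentially zero, even though the pain-and-rain cell is by far the largest, so an impression of correlation based only on that cell is misleading. A minimal sketch in Python:

    from math import sqrt

    a, b = 14, 6   # days with arthritis pain: rain, no rain
    c, d = 7, 2    # days without arthritis pain: rain, no rain

    # Phi coefficient for a 2x2 contingency table
    phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
    print(round(phi, 2))   # about -0.08: effectively no association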

Thursday, July 1, 2021

Dunning–Kruger effect

From Wikipedia, the free encyclopedia

The Dunning–Kruger effect is a hypothetical cognitive bias stating that people with low ability at a task overestimate their ability.

As described by social psychologists David Dunning and Justin Kruger, the bias results from an internal illusion in people of low ability and from an external misperception in people of high ability; that is, "the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others". It is related to the cognitive bias of illusory superiority and comes from people's inability to recognize their lack of ability. Without the self-awareness of metacognition, people cannot objectively evaluate their level of competence.

The effect, or Dunning and Kruger's original explanation for the effect, has been challenged by mathematical analyses and comparisons across cultures.

Original study

The psychological phenomenon of illusory superiority was identified as a form of cognitive bias in Kruger and Dunning's 1999 study "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". The bias is exemplified by the criminal case of McArthur Wheeler, who, on April 19, 1995, robbed two banks while his face was covered with lemon juice, which he believed would make him invisible to the surveillance cameras. This belief was apparently based on his misunderstanding of the chemical properties of lemon juice as an invisible ink.

Other investigations of the phenomenon, such as "Why People Fail to Recognize Their Own Incompetence", indicate that much incorrect self-assessment of competence derives from the person's ignorance of a given activity's standards of performance. Dunning and Kruger's research also indicates that training in a task, such as solving a logic puzzle, increases people's ability to accurately evaluate how good they are at it.

In Self-insight: Roadblocks and Detours on the Path to Knowing Thyself, Dunning described the Dunning–Kruger effect as "the anosognosia of everyday life", referring to a neurological condition in which a disabled person either denies or seems unaware of their disability. He stated: "If you're incompetent, you can't know you're incompetent ... The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is."

In 2011, Dunning wrote about his observations that people with substantial, measurable deficits in their knowledge or expertise lack the ability to recognize those deficits and, therefore, despite potentially making error after error, tend to think they are performing competently when they are not: "In short, those who are incompetent, for lack of a better term, should have little insight into their incompetence—an assertion that has come to be known as the Dunning–Kruger effect". In 2014, Dunning and Helzer described how the Dunning–Kruger effect "suggests that poor performers are not in a position to recognize the shortcomings in their performance".

Later studies

Dunning and Kruger tested the hypotheses of the cognitive bias of illusory superiority on undergraduate students of introductory courses in psychology by examining the students' self-assessments of their intellectual skills in inductive, deductive, and abductive logical reasoning, English grammar, and personal sense of humor. After learning their self-assessment scores, the students were asked to estimate their ranks in the psychology class. The competent students underestimated their class rank, and the incompetent students overestimated theirs, but the incompetent students did not estimate their class rank as higher than the ranks estimated by the competent group. Across four studies, the research indicated that the study participants who scored in the bottom quartile on tests of their sense of humor, knowledge of grammar, and logical reasoning, overestimated their test performance and their abilities; despite test scores that placed them in the 12th percentile, the participants estimated they ranked in the 62nd percentile.

Moreover, competent students tended to underestimate their own competence, because they erroneously presumed that tasks easy for them to perform were also easy for other people to perform. Incompetent students improved their ability to estimate their class rank correctly after receiving minimal tutoring in the skills they previously lacked, regardless of any objective improvement gained in said skills of perception. The 2004 study "Mind-Reading and Metacognition: Narcissism, not Actual Competence, Predicts Self-estimated Ability" extended the cognitive-bias premise of illusory superiority to test subjects' emotional sensitivity toward other people and their own perceptions of other people.

The 2003 study "How Chronic Self-Views Influence (and Potentially Mislead) Estimates of Performance" indicated a shift in the participants' view of themselves when influenced by external cues. The participants' knowledge of geography was tested; some tests were intended to affect the participants' self-view positively, and some were intended to affect it negatively. The participants then were asked to rate their performances; the participants given tests with a positive intent reported better performance than did the participants given tests with a negative intent.

To test Dunning and Kruger's hypotheses "that people, at all performance levels, are equally poor at estimating their relative performance", the 2006 study "Skilled or Unskilled, but Still Unaware of It: How Perceptions of Difficulty Drive Miscalibration in Relative Comparisons" investigated three studies that manipulated the "perceived difficulty of the tasks, and, hence, [the] participants' beliefs about their relative standing". The investigation indicated that when the experimental subjects were presented with moderately difficult tasks, there was little variation among the best performers and the worst performers in their ability to predict their performance accurately. With more difficult tasks, the best performers were less accurate in predicting their performance than were the worst performers. Therefore, judges at all levels of skill are subject to similar degrees of error in the performance of tasks.

In testing alternative explanations for the cognitive bias of illusory superiority, the 2008 study "Why the Unskilled are Unaware: Further Explorations of (Absent) Self-insight Among the Incompetent" reached the same conclusions as previous studies of the Dunning–Kruger effect: that, in contrast to high performers, "poor performers do not learn from feedback suggesting a need to improve".

One 2020 study suggests that individuals of relatively high social class are more overconfident than lower-class individuals.

Mathematical critique

Dunning and Kruger describe a common cognitive bias and make quantitative assertions that rest on mathematical arguments. But their findings are often misinterpreted, misrepresented, and misunderstood. According to Tal Yarkoni:

Their studies categorically didn’t show that incompetent people are more confident or arrogant than competent people. What they did show is [that] people in the top quartile for actual performance think they perform better than the people in the second quartile, who in turn think they perform better than the people in the third quartile, and so on. So the bias is definitively not that incompetent people think they’re better than competent people. Rather, it’s that incompetent people think they’re much better than they actually are. But they typically still don’t think they’re quite as good as people who, you know, actually are good. (It’s important to note that Dunning and Kruger never claimed to show that the unskilled think they’re better than the skilled; that’s just the way the finding is often interpreted by others.)

Paired measures

Mathematically, the effect relies on the quantifying of paired measures consisting of (a) the measure of the competence people can demonstrate when put to the test (actual competence) and (b) the measure of competence people believe that they have (self-assessed competence). Researchers express the measures either as percentages or as percentile scores scaled from 0 to 1 or from 0 to 100. By convention, researchers express the differences between the two measures as self-assessed competence minus actual competence. In this convention, negative numbers signify erring toward underconfidence, positive numbers signify erring toward overconfidence, and zero signifies accurate self-assessment.
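
As a concrete illustration of this sign convention, a minimal sketch in Python (the first pair of numbers echoes the 12th/62nd percentile figures reported for bottom-quartile performers above; the second pair is purely illustrative):

    def miscalibration(self_assessed_percentile: float, actual_percentile: float) -> float:
        """Self-assessed minus actual competence, both on the same 0-100 percentile scale.
        Negative values signify underconfidence, positive overconfidence, zero accuracy."""
        return self_assessed_percentile - actual_percentile

    print(miscalibration(62, 12))   # +50: the overestimation reported for bottom-quartile performers
    print(miscalibration(70, 88))   # -18: a high performer erring toward underconfidence (illustrative)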

A 2008 study by Joyce Ehrlinger summarized the major assertions of the effect that first appeared in the 1999 seminal article and continued to be supported by many studies after nine years of research: "People are typically overly optimistic when evaluating the quality of their performance on social and intellectual tasks. In particular, poor performers grossly overestimate their performances".

The effect asserts that most people are overconfident about their abilities, and that the least competent people are the most overconfident. Support for both assertions rests upon interpreting the patterns produced from graphing the paired measures.

The most common graphical convention is the Kruger–Dunning-type graph used in the seminal article. It depicted college students' accuracy in self-assessing their competencies in humor, logical reasoning, and grammar. Researchers adopted that convention in subsequent studies of the effect. Additional graphs used by other researchers who argued for the legitimacy of the effect include (y − x) versus (x) cross plots and bar charts. The first two of these studies depicted college students' accuracy in self-assessing their competence in introductory chemistry, and the third depicted their accuracy in self-assessing their competence in business classes.

Some research suggests that the effect may actually be illusory, driven by ceiling/floor effects (exacerbated by measurement error) causing censoring rather than representing a true deficit in metacognition.
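
A minimal simulation sketch of this statistical argument (all parameters below are illustrative assumptions, not values from any cited study): if measured performance and self-assessment are both noisy, bounded proxies of the same underlying ability, then grouping people by their measured score alone reproduces the familiar pattern, with no metacognitive deficit built into the model.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    ability = rng.normal(size=n)                      # latent skill
    test = ability + rng.normal(scale=1.0, size=n)    # noisy measured performance
    guess = ability + rng.normal(scale=1.0, size=n)   # noisy self-assessment, same model for everyone

    def percentile_rank(x):
        """Convert raw scores to percentile ranks on a 0-100 scale."""
        return 100.0 * x.argsort().argsort() / (len(x) - 1)

    test_pct, guess_pct = percentile_rank(test), percentile_rank(guess)
    quartile = np.minimum(test_pct // 25, 3).astype(int)   # group by measured performance

    for q in range(4):
        in_q = quartile == q
        print(f"quartile {q + 1}: actual {test_pct[in_q].mean():5.1f}, "
              f"self-assessed {guess_pct[in_q].mean():5.1f}")
    # The bottom quartile appears strongly overconfident and the top quartile underconfident,
    # purely from regression toward the mean on a bounded percentile scale.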

Cultural differences in self-perception

Studies of the Dunning–Kruger effect usually have been of North Americans, but studies of Japanese people suggest that cultural forces have a role in the occurrence of the effect. The 2001 study "Divergent Consequences of Success and Failure in Japan and North America: An Investigation of Self-improving Motivations and Malleable Selves" indicated that Japanese people tended to underestimate their abilities and to see underachievement (failure) as an opportunity to improve their abilities at a given task, thereby increasing their value to the social group.

Popular recognition

In 2000, Kruger and Dunning were awarded a satiric Ig Nobel Prize in recognition of the scientific work recorded in "their modest report". "The Dunning–Kruger Song" is part of The Incompetence Opera, a mini-opera that premiered at the Ig Nobel Prize ceremony in 2017. The mini-opera is billed as "a musical encounter with the Peter principle and the Dunning–Kruger Effect".

Cognitive dissonance

From Wikipedia, the free encyclopedia
 
In the field of psychology, cognitive dissonance is the perception of contradictory information. Relevant items of information include a person's actions, feelings, ideas, beliefs, and values, and things in the environment. Cognitive dissonance is typically experienced as psychological stress when a person participates in an action that goes against one or more of these. According to this theory, when two actions or ideas are not psychologically consistent with each other, people do all in their power to change them until they become consistent. The discomfort is triggered by the person's belief clashing with newly perceived information, and the person tries to find a way to resolve the contradiction in order to reduce the discomfort.

In A Theory of Cognitive Dissonance (1957), Leon Festinger proposed that human beings strive for internal psychological consistency to function mentally in the real world. A person who experiences internal inconsistency tends to become psychologically uncomfortable and is motivated to reduce the cognitive dissonance. They tend to make changes to justify the stressful behavior, either by adding new parts to the cognition causing the psychological dissonance (rationalization) or by avoiding circumstances and contradictory information likely to increase the magnitude of the cognitive dissonance (confirmation bias).

Coping with the nuances of contradictory ideas or experiences is mentally stressful. It requires energy and effort to sit with those seemingly opposite things that all seem true. Festinger argued that some people would inevitably resolve dissonance by blindly believing whatever they wanted to believe.

Relations among cognitions

To function in the reality of society, human beings continually adjust the correspondence of their mental attitudes and personal actions; such continual adjustments, between cognition and action, result in one of three relationships with reality:

  1. Consonant relationship: Two cognitions or actions consistent with each other (e.g. not wanting to become drunk when out to dinner, and ordering water rather than wine)
  2. Irrelevant relationship: Two cognitions or actions unrelated to each other (e.g. not wanting to become drunk when out and wearing a shirt)
  3. Dissonant relationship: Two cognitions or actions inconsistent with each other (e.g. not wanting to become drunk when out, but then drinking more wine)

Magnitude of dissonance

The term "magnitude of dissonance" refers to the level of discomfort caused to the person. This can be caused by the relationship between two differing internal beliefs, or an action that is incompatible with the beliefs of the person. Two factors determine the degree of psychological dissonance caused by two conflicting cognitions or by two conflicting actions:

  1. The importance of cognitions: the greater the personal value of the elements, the greater the magnitude of the dissonance in the relation. When the value of the importance of the two dissonant items is high, it is difficult to determine which action or thought is correct. Both have had a place of truth, at least subjectively, in the mind of the person. Therefore, when the ideals or actions now clash, it is difficult for the individual to decide which takes priority.
  2. Ratio of cognitions: the proportion of dissonant-to-consonant elements. There is a level of discomfort within each person that is acceptable for living. When a person is within that comfort level, the dissonant factors do not interfere with functioning. However, when dissonant factors are abundant and not enough in line with each other, one goes through a process to regulate and bring the ratio back to an acceptable level. Once a subject chooses to keep one of the dissonant factors, they quickly forget the other to restore peace of mind.

There is always some degree of dissonance within a person as they go about making decisions, due to the changing quantity and quality of knowledge and wisdom that they gain. The magnitude itself is a subjective measure, since it relies on self-report, and there is as yet no objective way to obtain a clear measurement of the level of discomfort.

Reduction

Cognitive dissonance theory proposes that people seek psychological consistency between their expectations of life and the existential reality of the world. To function by that expectation of existential consistency, people continually reduce their cognitive dissonance in order to align their cognitions (perceptions of the world) with their actions.

The creation and establishment of psychological consistency allows the person afflicted with cognitive dissonance to lessen mental stress by actions that reduce the magnitude of the dissonance, realized either by changing with or by justifying against or by being indifferent to the existential contradiction that is inducing the mental stress. In practice, people reduce the magnitude of their cognitive dissonance in four ways:

  1. Change the behavior or the cognition ("I'll eat no more of this doughnut.")
  2. Justify the behavior or the cognition, by changing the conflicting cognition ("I'm allowed to cheat my diet every once in a while.")
  3. Justify the behavior or the cognition by adding new behaviors or cognitions ("I'll spend thirty extra minutes at the gymnasium to work off the doughnut.")
  4. Ignore or deny information that conflicts with existing beliefs ("This doughnut is not a high-sugar food.")

Three cognitive biases are components of dissonance theory: the bias that one does not have any biases, the bias that one is "better, kinder, smarter, more moral and nicer than average", and confirmation bias.

That a consistent psychology is required for functioning in the real world also was indicated in the results of The Psychology of Prejudice (2006), wherein people facilitate their functioning in the real world by employing human categories (i.e. sex and gender, age and race, etc.) with which they manage their social interactions with other people.

Based on a brief overview of models and theories related to cognitive consistency from many different scientific fields, such as social psychology, perception, neurocognition, learning, motor control, system control, ethology, and stress, it has even been proposed that "all behaviour involving cognitive processing is caused by the activation of inconsistent cognitions and functions to increase perceived consistency"; that is, all behaviour functions to reduce cognitive inconsistency at some level of information processing. Indeed, the involvement of cognitive inconsistency has long been suggested for behaviors related to, for instance, curiosity, aggression, and fear, while it has also been suggested that the inability to satisfactorily reduce cognitive inconsistency may, depending on the type and size of the inconsistency, result in stress.

Selective exposure

Another method of reducing cognitive dissonance is selective exposure. This idea has been discussed since the early days of Festinger's work on cognitive dissonance. He noticed that people would selectively expose themselves to some media over others; specifically, they would avoid dissonant messages and prefer consonant messages. Through selective exposure, people actively (and selectively) choose what to watch, view, or read according to what fits their current state of mind, mood, or beliefs. In other words, consumers select attitude-consistent information and avoid attitude-challenging information. This applies to media, news, music, and any other messaging channel. The idea is that choosing something that opposes how one feels or what one believes will induce cognitive dissonance.

For example, a 1992 study in a home for the elderly examined the loneliest residents, those who did not have family or frequent visitors. The residents were shown a series of documentaries: three that featured a "very happy, successful elderly person", and three that featured an "unhappy, lonely elderly person." After watching the documentaries, the residents indicated they preferred the media featuring the unhappy, lonely person over the happy one. This can be attributed to their feeling lonely and experiencing cognitive dissonance when watching somebody their own age who is happy and successful. The study illustrates how people select media that aligns with their mood, selectively exposing themselves to people and experiences that resemble their own. It is more comfortable to watch a film about a character similar to oneself than about someone of the same age who is more successful.

Another example to note is how people mostly consume media that aligns with their political views. In a study done in 2015, participants were shown “attitudinally consistent, challenging, or politically balanced online news.” Results showed that the participants trusted attitude-consistent news the most out of all the others, regardless of the source. It is evident that the participants actively selected media that aligns with their beliefs rather than opposing media.

In fact, recent research has suggested that while a discrepancy between cognitions drives individuals to crave attitude-consistent information, it is the experience of negative emotions that drives individuals to avoid counterattitudinal information. In other words, it is the psychological discomfort which activates selective exposure as a dissonance-reduction strategy.

Paradigms

There are four theoretic paradigms of cognitive dissonance, the mental stress people suffer when exposed to information that is inconsistent with their beliefs, ideals, or values: belief disconfirmation, induced compliance, free choice, and effort justification. These respectively address what happens when a person's beliefs are contradicted; when a person is induced to act against their beliefs; when a person chooses between alternatives; and when a person has expended much effort to achieve a goal. Common to each paradigm of cognitive-dissonance theory is the tenet that people invested in a given perspective shall, when confronted with contrary evidence, expend great effort to justify retaining the challenged perspective.

Belief disconfirmation

The contradiction of a belief, ideal, or system of values causes cognitive dissonance that can be resolved by changing the challenged belief. Yet, instead of effecting change, people often restore psychological consonance by misperceiving, rejecting, or refuting the contradiction; by seeking moral support from people who share the contradicted beliefs; or by acting to persuade other people that the contradiction is unreal.

The early hypothesis of belief contradiction presented in When Prophecy Fails (1956) reported that faith deepened among the members of an apocalyptic religious cult, despite the failed prophecy of an alien spacecraft soon to land on Earth to rescue them from earthly corruption. At the determined place and time, the cult assembled; they believed that only they would survive planetary destruction; yet the spaceship did not arrive. The confounded prophecy caused them acute cognitive dissonance: had they been victims of a hoax? Had they vainly donated away their material possessions? To resolve the dissonance between apocalyptic, end-of-the-world religious beliefs and earthly, material reality, most of the cult restored their psychological consonance by choosing to believe a less mentally stressful idea to explain the missed landing: that the aliens had given planet Earth a second chance at existence, which, in turn, empowered them to re-direct their religious cult to environmentalism and social advocacy to end human damage to planet Earth. On overcoming the confounded belief by changing to global environmentalism, the cult grew in numbers through proselytism.

The study of The Rebbe, the Messiah, and the Scandal of Orthodox Indifference (2008) reported the belief contradiction that occurred in the Chabad Orthodox Jewish congregation, who believed that their Rebbe (Menachem Mendel Schneerson) was the Messiah. When he died of a stroke in 1994, instead of accepting that their Rebbe was not the Messiah, some of the congregation proved indifferent to that contradictory fact and continued claiming that Schneerson was the Messiah and that he would soon return from the dead.

Induced compliance

After performing dissonant behavior (lying) a person might find external, consonant elements. Therefore, a snake oil salesman might find a psychological self-justification (great profit) for promoting medical falsehoods, but, otherwise, might need to change his beliefs about the falsehoods.

In Cognitive Consequences of Forced Compliance (1959), the investigators Leon Festinger and Merrill Carlsmith asked students to spend an hour doing tedious tasks, e.g. turning pegs a quarter-turn at fixed intervals. The tasks were designed to induce a strong, negative mental attitude in the subjects. Once the subjects had done the tasks, the experimenters asked one group of subjects to speak with another subject (an actor) and persuade that impostor-subject that the tedious tasks were interesting and engaging. Subjects of one group were paid twenty dollars ($20); those in a second group were paid one dollar ($1); and those in the control group were not asked to speak with the impostor-subject.

At the conclusion of the study, when asked to rate the tedious tasks, the subjects of the second group (paid $1) rated the tasks more positively than did either the subjects in the first group (paid $20) or the subjects of the control group; the responses of the paid subjects were evidence of cognitive dissonance. Festinger and Carlsmith proposed that the subjects experienced dissonance between the conflicting cognitions "I told someone that the task was interesting" and "I actually found it boring." The subjects paid one dollar were induced to comply, compelled to internalize the "interesting task" mental attitude, because they had no other justification. The subjects paid twenty dollars were induced to comply by way of an obvious, external justification for internalizing the "interesting task" mental attitude, and experienced a lower degree of cognitive dissonance than did those paid only one dollar.

Forbidden Behaviour paradigm

The Effect of the Severity of Threat on the Devaluation of Forbidden Behavior (1963), a variant of the induced-compliance paradigm by Elliot Aronson and Carlsmith, examined self-justification in children. Children were left in a room with toys, including a greatly desirable steam shovel, the forbidden toy. Upon leaving the room, the experimenter told one half of the group of children that there would be severe punishment if they played with the steam-shovel toy, and told the second half of the group that there would be a mild punishment for playing with the forbidden toy. All of the children refrained from playing with the forbidden toy (the steam shovel).

Later, when the children were told that they could freely play with any toy they wanted, the children in the mild-punishment group were less likely to play with the steam shovel (the forbidden toy), despite the removal of the threat of mild punishment. The children threatened with mild punishment had to justify, to themselves, why they did not play with the forbidden toy. The degree of punishment was insufficiently strong to resolve their cognitive dissonance; the children had to convince themselves that playing with the forbidden toy was not worth the effort.

The Efficacy of Musical Emotions Provoked by Mozart's Music for the Reconciliation of Cognitive Dissonance (2012), a variant of the forbidden-toy paradigm, indicated that listening to music reduces the development of cognitive dissonance. In the control group, four-year-old children were told to avoid playing with a forbidden toy, with no music in the background; after playing alone, these children later devalued the importance of the forbidden toy. In the variable group, classical music played in the background while the children played alone; these children did not later devalue the forbidden toy. The researchers, Nobuo Masataka and Leonid Perlovsky, concluded that music might inhibit cognitions that induce cognitive dissonance.

Music is a stimulus that can diminish post-decisional dissonance; in an earlier experiment, Washing Away Postdecisional Dissonance (2010), the researchers indicated that the actions of hand-washing might inhibit the cognitions that induce cognitive dissonance. That study later failed to replicate.

Free choice

In the study Post-decision Changes in Desirability of Alternatives (1956) 225 female students rated domestic appliances and then were asked to choose one of two appliances as a gift. The results of a second round of ratings indicated that the women students increased their ratings of the domestic appliance they had selected as a gift and decreased their ratings of the appliances they rejected.

This type of cognitive dissonance occurs in a person faced with a difficult decision, when there always exist aspects of the rejected-object that appeal to the chooser. The action of deciding provokes the psychological dissonance consequent to choosing X instead of Y, despite little difference between X and Y; the decision "I chose X" is dissonant with the cognition that "There are some aspects of Y that I like". The study Choice-induced Preferences in the Absence of Choice: Evidence from a Blind Two-choice Paradigm with Young Children and Capuchin Monkeys (2010) reports similar results in the occurrence of cognitive dissonance in human beings and in animals.

Peer Effects in Pro-Social Behavior: Social Norms or Social Preferences? (2013) indicated that, with internal deliberation, the structuring of decisions among people can influence how a person acts, and that social preferences and social norms are related and function together in wage-giving among three persons. The actions of the first person influenced the wage-giving actions of the second person, and inequity aversion was the paramount concern of the participants.

Effort justification

Cognitive dissonance occurs to a person who voluntarily engages in (physically or ethically) unpleasant activities to achieve a goal. The mental stress caused by the dissonance can be reduced by the person exaggerating the desirability of the goal. In The Effect of Severity of Initiation on Liking for a Group (1956), to qualify for admission to a discussion group, two groups of people underwent an embarrassing initiation of varied psychological severity. The first group of subjects were to read aloud twelve sexual words considered obscene; the second group of subjects were to read aloud twelve sexual words not considered obscene.

Both groups were given headphones to unknowingly listen to a recorded discussion about animal sexual behaviour, which the researchers designed to be dull and banal. As the subjects of the experiment, the groups of people were told that the animal-sexuality discussion actually was occurring in the next room. The subjects whose strong initiation required reading aloud obscene words evaluated the people of their group as more-interesting persons than the people of the group who underwent the mild initiation to the discussion group.

In Washing Away Your Sins: Threatened Morality and Physical Cleansing (2006), the results indicated that a person washing their hands is an action that helps resolve post-decisional cognitive dissonance because the mental stress usually was caused by the person's ethical–moral self-disgust, which is an emotion related to the physical disgust caused by a dirty environment.

In the study The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making (2011), participants rated 80 names and 80 paintings based on how much they liked them. To give meaning to the decisions, the participants were asked to select names that they might give to their children. For rating the paintings, the participants were asked to base their ratings on whether or not they would display such art at home.

The results indicated that when the decision was meaningful to the person, the ratings reflected their attitudes (positive, neutral, or negative) toward the name and the painting in question. The participants also were asked to rate some of the objects twice and believed that, at the session's end, they would receive two of the paintings they had positively rated. The results indicated a great increase in the participants' positive attitude toward the liked pair of things, and an increase in their negative attitude toward the disliked pair of things. Double ratings of pairs of things toward which the participant held a neutral attitude showed no changes during the rating period. The participants' existing attitudes were reinforced during the rating period, and the participants experienced cognitive dissonance when confronted by a liked name paired with a disliked painting.

Examples

In the fable of “The Fox and the Grapes”, by Aesop, on failing to reach the desired bunch of grapes, the fox then decides he does not truly want the fruit because it is sour. The fox's act of rationalization (justification) reduced his anxiety over the cognitive dissonance from the desire he cannot realise.

Meat-eating

Meat-eating can involve discrepancies between the behavior of eating meat and various ideals that the person holds. Some researchers call this form of moral conflict the meat paradox. Hank Rothgerber posited that meat eaters may encounter a conflict between their eating behavior and their affections toward animals. This occurs when the dissonant state involves recognition of one's behavior as a meat eater and a belief, attitude, or value that this behavior contradicts. The person in this state may attempt to employ various methods, including avoidance, willful ignorance, dissociation, perceived behavioral change, and do-gooder derogation, to prevent this form of dissonance from occurring. Once it has occurred, the person may reduce it through motivated cognitions, such as denigrating animals, offering pro-meat justifications, or denying responsibility for eating meat.

The extent of cognitive dissonance with regards to meat eating can vary depending on the attitudes and values of the individual involved because these can affect whether or not they see any moral conflict with their values and what they eat. For example, individuals who are more dominance minded and who value having a masculine identity are less likely to experience cognitive dissonance because they are less likely to believe eating meat is morally wrong.

Smoking

The study Patterns of Cognitive Dissonance-reducing Beliefs Among Smokers: A Longitudinal Analysis from the International Tobacco Control (ITC) Four Country Survey (2012) indicated that smokers use justification beliefs to reduce their cognitive dissonance about smoking tobacco and its negative consequences. The smokers surveyed fell into three groups:

  1. Continuing smokers (Smoking and no attempt to quit since the previous round of study)
  2. Successful quitters (Quit during the study and did not use tobacco from the time of the previous round of study)
  3. Failed quitters (Quit during the study, but relapsed to smoking at the time of the study)

To reduce cognitive dissonance, the participant smokers adjusted their beliefs to correspond with their actions:

  1. Functional beliefs ("Smoking calms me down when I am stressed or upset."; "Smoking helps me concentrate better."; "Smoking is an important part of my life."; and "Smoking makes it easier for me to socialize.")
  2. Risk-minimizing beliefs ("The medical evidence that smoking is harmful is exaggerated."; "One has to die of something, so why not enjoy yourself and smoke?"; and "Smoking is no more risky than many other things people do.")

Unpleasant medical screenings

In a study titled Cognitive Dissonance and Attitudes Toward Unpleasant Medical Screenings (2016), researchers Michael R. Ent and Mary A. Gerend informed the study participants about a discomforting test for a specific (fictitious) virus called the "human respiratory virus-27". The study used a fake virus to prevent participants from having pre-existing thoughts, opinions, and feelings about the virus that would interfere with the experiment. The study participants were in two groups: one group was told that they were actual candidates for the virus-27 test, and the second group was told that they were not candidates for the test. The researchers reported, "We predicted that [study] participants who thought that they were candidates for the unpleasant test would experience dissonance associated with knowing that the test was both unpleasant and in their best interest—this dissonance was predicted to result in unfavorable attitudes toward the test."

Related phenomena

Cognitive dissonance may also occur when people seek to explain or justify their beliefs, often without questioning the validity of their claims. After the 1934 earthquake in Bihar, India, irrational rumors based upon fear quickly reached the adjoining communities unaffected by the disaster, because those people, although not in physical danger, psychologically justified their anxieties about the earthquake. The same pattern can be observed when one's convictions are met with a contradictory order: in a study conducted among 6th grade students, after being induced to cheat in an academic examination, students judged cheating less harshly. Nonetheless, the confirmation bias identifies how people readily read information that confirms their established opinions and readily avoid reading information that contradicts their opinions. The confirmation bias is apparent when a person confronts deeply held political beliefs, i.e. when a person is greatly committed to their beliefs, values, and ideas.

If a contradiction occurs between how a person feels and how a person acts, one's perceptions and emotions align to alleviate stress. The Ben Franklin effect refers to that statesman's observation that the act of performing a favor for a rival leads to increased positive feelings toward that individual. It is also possible that one's emotions may be altered to minimize the regret of irrevocable choices: at a racetrack, bettors had more confidence in their horses after placing their bets than before.

Applications

Education

The management of cognitive dissonance readily influences the apparent motivation of a student to pursue education. The study Turning Play into Work: Effects of Adult Surveillance and Extrinsic Rewards on Children's Intrinsic Motivation (1975) indicated that the application of the effort justification paradigm increased student enthusiasm for education with the offer of an external reward for studying; students in pre-school who completed puzzles based upon an adult promise of reward were later less interested in the puzzles than were students who completed the puzzle-tasks without the promise of a reward.

The incorporation of cognitive dissonance into models of basic learning-processes to foster the students’ self-awareness of psychological conflicts among their personal beliefs, ideals, and values and the reality of contradictory facts and information, requires the students to defend their personal beliefs. Afterwards, the students are trained to objectively perceive new facts and information to resolve the psychological stress of the conflict between reality and the student's value system. Moreover, educational software that applies the derived principles facilitates the students’ ability to successfully handle the questions posed in a complex subject. Meta-analysis of studies indicates that psychological interventions that provoke cognitive dissonance in order to achieve a directed conceptual change do increase students’ learning in reading skills and about science.

Psychotherapy

The general effectiveness of psychotherapy and psychological intervention is partly explained by the theory of cognitive dissonance. In that vein, social psychology proposed that the mental health of the patient is positively influenced by his or her action in freely choosing a specific therapy and in exerting the required therapeutic effort to overcome cognitive dissonance. That phenomenon was indicated in the results of the study Effects of Choice on Behavioral Treatment of Overweight Children (1983), wherein the children's belief that they had freely chosen the type of therapy received resulted in each overweight child losing a greater amount of excessive body weight.

In the study Reducing Fears and Increasing Attentiveness: The Role of Dissonance Reduction (1980), people afflicted with ophidiophobia (fear of snakes) who invested much effort in activities of little therapeutic value for them (experimentally represented as legitimate and relevant) showed improved alleviation of the symptoms of their phobia. Likewise, the results of Cognitive Dissonance and Psychotherapy: The Role of Effort Justification in Inducing Weight Loss (1985) indicated that patients felt better when justifying their efforts and therapeutic choices toward effectively losing weight, and that the effort expended in therapy can predict long-term change in the patients' perceptions.

Social behavior

Cognitive dissonance is used to promote positive social behaviours, such as increased condom use; other studies indicate that cognitive dissonance can be used to encourage people to act pro-socially, such as in campaigns against public littering, campaigns against racial prejudice, and compliance with anti-speeding campaigns. The theory can also be used to explain reasons for donating to charity. Cognitive dissonance can be applied to social areas such as racism and racial hatred. Acharya of Stanford and Blackwell and Sen of Harvard state that cognitive dissonance increases when an individual commits an act of violence toward someone from a different ethnic or racial group and decreases when the individual does not commit any such act of violence. Research from Acharya, Blackwell, and Sen shows that individuals committing violence against members of another group develop hostile attitudes towards their victims as a way of minimizing the dissonance. Importantly, the hostile attitudes may persist even after the violence itself declines (Acharya, Blackwell, and Sen, 2015). The application provides a social psychological basis for the constructivist viewpoint that ethnic and racial divisions can be socially or individually constructed, possibly from acts of violence (Fearon and Laitin, 2000). Their framework speaks to this possibility by showing how violent actions by individuals can affect individual attitudes, whether ethnic or racial animosity (Acharya, Blackwell, and Sen, 2015).

Consumer behavior

Three main conditions exist for provoking cognitive dissonance when buying: (i) the decision to purchase must be important, for example because of the sum of money to be spent; (ii) the purchase must carry a psychological cost; and (iii) the purchase must be personally relevant to the consumer. The consumer is free to select from the alternatives, and the decision to buy is irreversible.

The study Beyond Reference Pricing: Understanding Consumers' Encounters with Unexpected Prices (2003), indicated that when consumers experience an unexpected price encounter, they adopt three methods to reduce cognitive dissonance: (i) Employ a strategy of continual information; (ii) Employ a change in attitude; and (iii) Engage in minimisation. Consumers employ the strategy of continual information by engaging in bias and searching for information that supports prior beliefs. Consumers might search for information about other retailers and substitute products consistent with their beliefs. Alternatively, consumers might change attitude, such as re-evaluating price in relation to external reference-prices or associating high prices and low prices with quality. Minimisation reduces the importance of the elements of the dissonance; consumers tend to minimise the importance of money, and thus of shopping around, saving, and finding a better deal.

Politics

Cognitive dissonance theory might suggest that since votes are an expression of preference or beliefs, even the act of voting might cause someone to defend the actions of the candidate for whom they voted, and if the decision was close then the effects of cognitive dissonance should be greater.

This effect was studied over the 6 presidential elections of the United States between 1972 and 1996, and it was found that the opinion differential between the candidates changed more before and after the election than the opinion differential of non-voters. In addition, elections where the voter had a favorable attitude toward both candidates, making the choice more difficult, had the opinion differential of the candidates change more dramatically than those who only had a favorable opinion of one candidate. What wasn't studied were the cognitive dissonance effects in cases where the person had unfavorable attitudes toward both candidates. The 2016 U.S. election held historically high unfavorable ratings for both candidates.

Communication

Cognitive dissonance theory of communication was initially advanced by American psychologist Leon Festinger in the 1960s. Festinger theorized that cognitive dissonance usually arises when a person holds two or more incompatible beliefs simultaneously. This is a normal occurrence since people encounter different situations that invoke conflicting thought sequences. This conflict results in a psychological discomfort. According to Festinger, people experiencing a thought conflict try to reduce the psychological discomfort by attempting to achieve an emotional equilibrium. This equilibrium is achieved in three main ways. First, the person may downplay the importance of the dissonant thought. Second, the person may attempt to outweigh the dissonant thought with consonant thoughts. Lastly, the person may incorporate the dissonant thought into their current belief system.

Dissonance plays an important role in persuasion. To persuade people, you must cause them to experience dissonance, and then offer your proposal as a way to resolve the discomfort. Although there is no guarantee your audience will change their minds, the theory maintains that without dissonance, there can be no persuasion. Without a feeling of discomfort, people are not motivated to change. Similarly, it is the feeling of discomfort which motivates people to perform selective exposure (i.e., avoiding disconfirming information) as a dissonance-reduction strategy.

Artificial Intelligence

It is hypothesized that introducing cognitive dissonance into machine learning may be able to assist in the long-term aim of developing 'creative autonomy' on the part of agents, including in multi-agent systems (such as games), and ultimately to the development of 'strong' forms of artificial intelligence, including artificial general intelligence.

Alternative paradigms

Dissonant self-perception: A lawyer can experience cognitive dissonance if he must defend as innocent a client he thinks is guilty. From the perspective of The Theory of Cognitive Dissonance: A Current Perspective (1969), the lawyer might experience cognitive dissonance if his false statement about his guilty client contradicts his identity as a lawyer and an honest man.

Self-perception theory

In Self-perception: An alternative interpretation of cognitive dissonance phenomena (1967), the social psychologist Daryl Bem proposed the self-perception theory, whereby people do not think much about their attitudes, even when engaged in a conflict with another person. The theory of self-perception proposes that people develop attitudes by observing their own behaviour and concluding that their attitudes caused the behaviour they observed; this is especially true when internal cues are ambiguous or weak. Therefore, the person is in the same position as an observer who must rely upon external cues to infer their inner state of mind. Self-perception theory proposes that people adopt attitudes without access to their states of mood and cognition.

As such, the experimental subjects of the Festinger and Carlsmith study (Cognitive Consequences of Forced Compliance, 1959) inferred their mental attitudes from their own behaviour. When the subject-participants were asked: "Did you find the task interesting?", the participants decided that they must have found the task interesting, because that is what they told the questioner. Their replies suggested that the participants who were paid twenty dollars had an external incentive to adopt that positive attitude, and likely perceived the twenty dollars as the reason for saying the task was interesting, rather than saying the task actually was interesting.

The theory of self-perception (Bem) and the theory of cognitive dissonance (Festinger) make identical predictions, but only the theory of cognitive dissonance predicts the presence of unpleasant arousal, of psychological distress, which was verified in laboratory experiments.

In The Theory of Cognitive Dissonance: A Current Perspective (Aronson, Berkowitz, 1969), Elliot Aronson linked cognitive dissonance to the self-concept: mental stress arises when conflicts among cognitions threaten the person's positive self-image. This reinterpretation of the original Festinger and Carlsmith study, using the induced-compliance paradigm, proposed that the dissonance was between the cognitions "I am an honest person." and "I lied about finding the task interesting."

The study Cognitive Dissonance: Private Ratiocination or Public Spectacle? (Tedeschi, Schlenker, et al., 1971) reported that maintaining cognitive consistency, rather than protecting a private self-concept, is how a person protects their public self-image. Moreover, the results reported in the study I'm No Longer Torn After Choice: How Explicit Choices Implicitly Shape Preferences of Odors (2010) contradict such an explanation, by showing the occurrence of revaluation of material items after the person chose and decided, even after having forgotten the choice.

Balance theory

Fritz Heider proposed a motivational theory of attitudinal change that derives from the idea that humans are driven to establish and maintain psychological balance. The driving force for this balance is known as the consistency motive, which is an urge to maintain one's values and beliefs consistent over time. Heider's conception of psychological balance has been used in theoretical models measuring cognitive dissonance.

According to balance theory, there are three interacting elements: (1) the self (P), (2) another person (O), and (3) an element (X). These are each positioned at one vertex of a triangle and share two relations:

Unit relations – things and people that belong together based on similarity, proximity, fate, etc.
Sentiment relations – evaluations of people and things (liking, disliking)

Under balance theory, human beings seek a balanced state of relations among the three positions. This can take the form of three positives or two negatives and one positive:

P = you
O = your child
X = picture your child drew
"I love my child"
"She drew me this picture"
"I love this picture"

People also avoid unbalanced states of relations, such as three negatives or two positives and one negative:

P = you
O = John
X = John's dog
"I don't like John"
"John has a dog"
"I don't like the dog either"

Cost–benefit analysis

In the study On the Measurement of the Utility of Public Works (1969), Jules Dupuit reported that behaviors and cognitions can be understood from an economic perspective, wherein people engage in the systematic processing of comparing the costs and benefits of a decision. The psychological process of cost-benefit comparisons helps the person to assess and justify the feasibility (spending money) of an economic decision, and is the basis for determining if the benefit outweighs the cost, and to what extent. Moreover, although the method of cost-benefit analysis functions in economic circumstances, men and women remain psychologically inefficient at comparing the costs against the benefits of their economic decision.

Self-discrepancy theory

E. Tory Higgins proposed that people have three selves, to which they compare themselves:

  1. Actual self – representation of the attributes the person believes him- or herself to possess (basic self-concept)
  2. Ideal self – ideal attributes the person would like to possess (hopes, aspiration, motivations to change)
  3. Ought self – ideal attributes the person believes he or she should possess (duties, obligations, responsibilities)

When these self-guides are contradictory, psychological distress (cognitive dissonance) results. People are motivated to reduce self-discrepancy (the gap between two self-guides).

Aversive consequences vs. inconsistency

During the 1980s, Cooper and Fazio argued that dissonance was caused by aversive consequences, rather than by inconsistency. According to this interpretation, the belief that lying is wrong and hurtful, not the inconsistency between cognitions, is what makes people feel bad. Subsequent research, however, found that people experience dissonance even when they feel they have not done anything wrong. For example, Harmon-Jones and colleagues showed that people experience dissonance even when the consequences of their statements are beneficial, as when they convince sexually active students to use condoms while they themselves are not using condoms.

Criticism of the free-choice paradigm

In the study How Choice Affects and Reflects Preferences: Revisiting the Free-choice Paradigm (Chen, Risen, 2010), the researchers criticized the free-choice paradigm as invalid, because the rank-choice-rank method is inaccurate for the study of cognitive dissonance. The design of such research models relies upon the assumption that, if the experimental subject rates the options differently in the second survey, then the subject's attitudes towards the options have changed; yet there are other reasons why an experimental subject might produce different rankings in the second survey, for example indifference between the choices.

Although the results of some follow-up studies (e.g. Do Choices Affect Preferences? Some Doubts and New Evidence, 2013) presented evidence of the unreliability of the rank-choice-rank method, the results of studies such as Neural Correlates of Cognitive Dissonance and Choice-induced Preference Change (2010) have not found the method to be invalid, and indicate that making a choice can change the preferences of a person.

Action–motivation model

Festinger's original theory did not seek to explain how dissonance works. Why is inconsistency so aversive? The action–motivation model seeks to answer this question. It proposes that inconsistencies in a person's cognition cause mental stress, because psychological inconsistency interferes with the person's functioning in the real world. Among the ways for coping, the person can choose to exercise a behavior that is inconsistent with their current attitude (a belief, an ideal, a value system), but later try to alter that belief to be consonant with a current behavior; the cognitive dissonance occurs when the person's cognition does not match the action taken. If the person changes the current attitude, after the dissonance occurs, he or she then is obligated to commit to that course of behavior.

Cognitive dissonance produces a state of negative affect, which motivates the person to reconsider the causative behavior in order to resolve the psychological inconsistency that caused the mental stress. As the afflicted person works towards a behavioral commitment, the motivational process then is activated in the left frontal cortex of the brain.

Predictive dissonance model

The predictive dissonance model proposes that cognitive dissonance is fundamentally related to the predictive coding (or predictive processing) model of cognition. A predictive processing account of the mind proposes that perception actively involves the use of a Bayesian hierarchy of acquired prior knowledge, which primarily serves the role of predicting incoming proprioceptive, interoceptive and exteroceptive sensory inputs. Therefore, the brain is an inference machine that attempts to actively predict and explain its sensations. Crucial to this inference is the minimization of prediction error. The predictive dissonance account proposes that the motivation for cognitive dissonance reduction is related to an organism's active drive for reducing prediction error. Moreover, it proposes that human (and perhaps other animal) brains have evolved to selectively ignore contradictory information (as proposed by dissonance theory) to prevent the overfitting of their predictive cognitive models to local and thus non-generalizing conditions. The predictive dissonance account is highly compatible with the action-motivation model since, in practice, prediction error can arise from unsuccessful behavior.
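
To make the idea of prediction-error minimization concrete, a minimal sketch of a generic delta-rule update in Python (an illustrative toy model, not the specific algorithm proposed by the predictive dissonance account):

    def update_prediction(prediction: float, observation: float, learning_rate: float = 0.1) -> float:
        """Move the prediction toward the observation in proportion to the prediction error."""
        error = observation - prediction            # prediction error
        return prediction + learning_rate * error   # the update reduces future error

    belief = 0.0
    for sensed in (1.0, 1.0, 1.0):                  # repeated input that contradicts the prior belief
        belief = update_prediction(belief, sensed)
    print(round(belief, 3))                         # 0.271: the model only gradually accommodates the evidence

A small learning rate corresponds to a model that largely discounts contradictory input, loosely analogous to the selective ignoring of contradictory information described above.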

Neuroscience findings

Technological advances are allowing psychologists to study the neural mechanisms of cognitive dissonance.

Visualization

The study Neural Activity Predicts Attitude Change in Cognitive Dissonance (Van Veen, Krug, et al., 2009) identified the neural bases of cognitive dissonance using functional magnetic resonance imaging (fMRI); the neural scans of the participants replicated the basic findings of the induced-compliance paradigm. While in the fMRI scanner, participants in the experimental group argued that the uncomfortable, mechanical environment of the MRI machine was nevertheless a pleasant experience; these participants reported enjoying the mechanical environment of the scanner more than did the control-group participants, who were paid to make the same argument about the uncomfortable experimental environment.

The results of the neural scan experiment support the original theory of cognitive dissonance proposed by Festinger in 1957, and also support the psychological conflict theory, whereby counter-attitudinal responding activates the dorsal anterior cingulate cortex and the anterior insular cortex; the degree of activation of these regions of the brain predicts the degree of change in the person's psychological attitude.

MRI evidence indicates that the greater the psychological conflict signalled by the anterior cingulate cortex, the greater the magnitude of the cognitive dissonance experienced by the person.

As an application of the free-choice paradigm, the study How Choice Reveals and Shapes Expected Hedonic Outcome (2009) indicates that, after making a choice, neural activity in the striatum changes to reflect the person's new evaluation of the choice-object: activity increases if the object was chosen and decreases if the object was rejected. Moreover, studies such as The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making (2010) and How Choice Modifies Preference: Neural Correlates of Choice Justification (2011) confirm the neural bases of the psychology of cognitive dissonance.

The Neural Basis of Rationalization: Cognitive Dissonance Reduction During Decision-making (Jarcho, Berkman, Lieberman, 2010) applied the free-choice paradigm in an fMRI examination of the brain's decision-making processes while the study participant actively tried to reduce cognitive dissonance. The results indicated that actively reducing psychological dissonance increased neural activity in the right inferior frontal gyrus, in the medial fronto-parietal region, and in the ventral striatum, and decreased neural activity in the anterior insula. The neural activities of rationalization occur within seconds, without conscious deliberation on the part of the person, and the brain engages in emotional responses while making decisions.

Emotional correlations

The results reported in Contributions from Research on Anger and Cognitive Dissonance to Understanding the Motivational Functions of Asymmetrical Frontal Brain Activity (Harmon-Jones, 2004) indicate that the occurrence of cognitive dissonance is associated with neural activity in the left frontal cortex, a brain structure also associated with the emotion of anger; moreover, anger functionally motivates neural activity in the left frontal cortex. Applying a directional model of approach motivation, the study Anger and the Behavioural Approach System (2003) indicated that the relation between cognitive dissonance and anger is supported by neural activity in the left frontal cortex that occurs when a person takes control of the social situation causing the cognitive dissonance. Conversely, if the person cannot control or change the psychologically stressful situation, he or she lacks the motivation to change the circumstance, and other negative emotions arise to manage the cognitive dissonance, which can result in socially inappropriate behavior.

Activity in the anterior cingulate cortex increases when errors occur and are being monitored, and when behavior conflicts with the self-concept, a form of higher-level thinking. A study was conducted to test the prediction that the left frontal cortex would show increased activity. University students wrote an essay after being assigned to either a high-choice or a low-choice condition. The low-choice condition required students to write in support of a 10% increase in tuition at their university; the point of this condition was to see how significantly a counter-attitudinal task affects a person's ability to cope. The high-choice condition asked students to write in favor of the tuition increase as if it were their completely voluntary choice. The researchers used EEG to analyze students before they wrote the essay, as dissonance is at its highest during this time (Beauvois and Joule, 1996). High-choice participants showed greater left frontal cortex activity than low-choice participants. The results show that the initial experience of dissonance is apparent in the anterior cingulate cortex; the left frontal cortex is then activated, which also activates the approach motivational system to reduce anger.

The psychology of mental stress

The results reported in The Origins of Cognitive Dissonance: Evidence from Children and Monkeys (Egan, Santos, Bloom, 2007) indicated that there might be an evolutionary force behind the reduction of cognitive dissonance, observed in the actions of pre-school-age children and capuchin monkeys offered a choice between two equally liked options (decals and candies). The groups were then offered a new choice, between the choice-object not chosen earlier and a novel choice-object that was as attractive as the first object. The resulting choices of the human and simian subjects accorded with the theory of cognitive dissonance: the children and the monkeys each chose the novel choice-object instead of the choice-object not chosen in the first selection, despite every object having the same value.

An Action-based Model of Cognitive-dissonance Processes (Harmon-Jones, Levy, 2015) proposed that psychological dissonance arises when thoughts are stimulated that interfere with goal-driven behavior. The researchers mapped participants' neural activity during tasks that provoked psychological stress by requiring contradictory behaviors: a participant read aloud the printed name of a color, and, to test for the occurrence of cognitive dissonance, the name of the color was printed in a color different from the word being read. The participants showed increased neural activity in the anterior cingulate cortex when the experimental exercises provoked psychological dissonance.

The study Cognitive Neuroscience of Social Emotions and Implications for Psychopathology: Examining Embarrassment, Guilt, Envy, and Schadenfreude (Jankowski, Takahashi, 2014) identified neural correlates of specific social emotions (e.g. envy and embarrassment) as a measure of cognitive dissonance. Neural activity for the emotion of envy (the feeling of displeasure at the good fortune of another person) was found in the dorsal anterior cingulate cortex. Such increased activity in the dorsal anterior cingulate cortex occurred either when a person's self-concept was threatened or when the person suffered embarrassment (social pain) caused by salient, upward social comparison or by social-class snobbery. Social emotions such as embarrassment, guilt, envy, and Schadenfreude (joy at the misfortune of another person) are correlated with reduced activity in the insular lobe and with increased activity in the striate nucleus; these neural activities are associated with a reduced sense of empathy (social responsibility) and an increased propensity toward antisocial behavior (delinquency).

Modeling in neural networks

Artificial neural network models of cognition provide methods for integrating the results of empirical research on cognitive dissonance and attitudes into a single model that explains both the formation of psychological attitudes and the mechanisms for changing such attitudes. Several such artificial neural-network models have been proposed to predict how cognitive dissonance might influence a person's attitudes and behavior; a simplified constraint-satisfaction sketch of this general approach is given below.
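
As a rough illustration of the general approach, the sketch below implements a generic constraint-satisfaction network in which consonant cognitions support one another and dissonant cognitions inhibit one another; the cognitions, link weights, and update rule are illustrative assumptions, not a reproduction of any specific published model.

    import numpy as np

    cognitions = ["I smoke", "Smoking is harmful", "I value my health"]

    # Symmetric link weights: positive = consonant pair, negative = dissonant pair.
    W = np.array([
        [ 0.0, -1.0, -0.5],   # "I smoke" conflicts with both other cognitions
        [-1.0,  0.0,  1.0],   # "Smoking is harmful" is consonant with valuing health
        [-0.5,  1.0,  0.0],
    ])

    # Initial activations: how strongly each cognition is currently held, in [-1, 1].
    a = np.array([1.0, 0.8, 0.9])

    def dissonance(a, W):
        # Hopfield-style energy: lower values correspond to a more consonant set of cognitions.
        return -0.5 * a @ W @ a

    print("initial dissonance:", round(float(dissonance(a, W)), 2))
    for _ in range(50):
        # Nudge each activation toward the net support it receives from the other cognitions.
        a = np.clip(a + 0.2 * (np.tanh(W @ a) - a), -1.0, 1.0)

    print("settled activations:", {c: round(float(v), 2) for c, v in zip(cognitions, a)})
    print("final dissonance:", round(float(dissonance(a, W)), 2))

In this toy network, dissonance reduction corresponds to the settling process: the activation of the cognition that conflicts with the others is driven down, which is one way a connectionist model can represent attitude or behavior change.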

Contradictions to the theory

Some researchers are skeptical of the theory. Charles G. Lord wrote a paper questioning whether the theory of cognitive dissonance had been tested rigorously enough before it was accepted, claiming that the theorists did not take all relevant factors into account and reached their conclusion without examining the question from every angle.

United States labor law

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Uni...