
Sunday, December 23, 2018

Introspection illusion

From Wikipedia, the free encyclopedia

The surface appearance of an iceberg is often used to illustrate the human conscious and unconscious mind; the visible portions are easily noticed, and yet their shape depends on the much larger portions that are out of view.

The introspection illusion is a cognitive bias in which people wrongly think they have direct insight into the origins of their mental states, while treating others' introspections as unreliable. In certain situations, this illusion leads people to make confident but false explanations of their own behavior (called "causal theories") or inaccurate predictions of their future mental states.

The illusion has been examined in psychological experiments, and suggested as a basis for biases in how people compare themselves to others. These experiments have been interpreted as suggesting that, rather than offering direct access to the processes underlying mental states, introspection is a process of construction and inference, much as people indirectly infer others' mental states from their behavior.

When people mistake unreliable introspection for genuine self-knowledge, the result can be an illusion of superiority over other people, for example when each person thinks they are less biased and less conformist than the rest of the group. Even when experimental subjects are provided with reports of other subjects' introspections, in as detailed a form as possible, they still rate those other introspections as unreliable while treating their own as reliable. Although the hypothesis of an introspection illusion informs some psychological research, the existing evidence is arguably inadequate to decide how reliable introspection is in normal circumstances. Correction for the bias may be possible through education about the bias and its unconscious nature.

Components

The phrase "introspection illusion" was coined by Emily Pronin. Pronin describes the illusion as having four components:
  1. People give a strong weighting to introspective evidence when assessing themselves.
  2. They do not give such a strong weight when assessing others.
  3. People disregard their own behavior when assessing themselves (but not others).
  4. People weight their own introspections more highly than others'. It is not just that people lack access to each other's introspections: they regard only their own as reliable.

Unreliability of introspection

[I]ntrospection does not provide a direct pipeline to nonconscious mental processes. Instead, it is best thought of as a process whereby people use the contents of consciousness to construct a personal narrative that may or may not correspond to their nonconscious states.
– Timothy D. Wilson and Elizabeth W. Dunn (2004)

A 1977 paper by psychologists Richard Nisbett and Timothy D. Wilson challenged the directness and reliability of introspection, thereby becoming one of the most cited papers in the science of consciousness. Nisbett and Wilson reported on experiments in which subjects verbally explained why they had a particular preference, or how they arrived at a particular idea. On the basis of these studies and existing attribution research, they concluded that reports on mental processes are confabulated. They wrote that subjects had "little or no introspective access to higher order cognitive processes". They distinguished between mental contents (such as feelings) and mental processes, arguing that while introspection gives us access to contents, processes remain hidden.

Research continues to find that humans evolved only limited abilities to introspect

Although some other experimental work followed from the Nisbett and Wilson paper, difficulties with testing the hypothesis of introspective access meant that research on the topic generally stagnated. A ten-year-anniversary review of the paper raised several objections, questioning the idea of "process" they had used and arguing that unambiguous tests of introspective access are hard to achieve.

Updating the theory in 2002, Wilson admitted that the 1977 claims had been too far-reaching. He instead relied on the theory that the adaptive unconscious does much of the moment-to-moment work of perception and behaviour. When people are asked to report on their mental processes, they cannot access this unconscious activity. However, rather than acknowledging their lack of insight, they confabulate a plausible explanation and so seem "unaware of their unawareness".

The idea that people can be mistaken about their inner functioning is one applied by eliminative materialists. These philosophers suggest that some concepts, including "belief" or "pain", will turn out to be quite different from what is commonly expected as science advances.

The faulty guesses that people make to explain their thought processes have been called "causal theories". The causal theories provided after an action often serve only to justify the person's behaviour and relieve cognitive dissonance. That is, a person may not have noticed the real reasons for their behaviour, even when trying to provide explanations. The result is an explanation that mostly just makes them feel better. An example might be a man who discriminates against homosexuals because he is embarrassed that he himself is attracted to other men. He may not admit this to himself, instead claiming that his prejudice stems from a belief that homosexuality is unnatural.

A study conducted by philosopher Eric Schwitzgebel and psychologist Russell T. Hurlburt was set up to measure the extent of introspective accuracy by gathering introspective reports from a single individual, given the pseudonym "Melanie". Melanie was given a beeper which sounded at random moments, and when it did she had to note what she was currently feeling and thinking. After analyzing the reports, the authors had mixed views about the results, the correct interpretation of Melanie's claims, and her introspective accuracy. Even after long discussion the two authors still disagreed in their closing remarks, Schwitzgebel being pessimistic and Hurlburt optimistic about the reliability of introspection.

Factors in accuracy

Nisbett and Wilson conjectured about several factors that contribute to the accuracy of introspective self-reports on cognition.
  • Availability: Stimuli that are highly salient (either due to recency or being very memorable) are more likely to be recalled and considered for the cause of a response.
  • Plausibility: Whether a person finds a stimulus to be a sufficiently likely cause for an effect determines the influence it has on their reporting of the stimulus.
  • Removal in time: The greater the distance in time since an event occurred, the less available it is and the harder it is to recall accurately.
  • Mechanics of judgment: People do not recognize the influence that judgment factors (e.g., position effects) have on them, leading to inaccuracies in self-reporting.
  • Context: Focusing on the context of an object distracts from evaluation of that object and can lead people to falsely believe that their thoughts about the object are represented by the context.
  • Non-events: The absence of an occurrence is naturally less salient and available than an occurrence itself, leading nonevents to have little influence on reports.
  • Nonverbal behaviour: While people receive a large amount of information about others via nonverbal cues, the verbal nature of relaying information and the difficulty of translating nonverbal behaviour into verbal form lead to its lower reporting frequency.
  • Discrepancy between the magnitudes of cause and effect: Because it seems natural to assume that a cause of a certain size will produce a similarly sized effect, connections between causes and effects of different magnitudes are not often drawn.

Unawareness of error

Several hypotheses to explain people's unawareness of their inaccuracies in introspection were provided by Nisbett and Wilson:
  • Confusion between content and process: People are usually unable to access the exact process by which they arrived at a conclusion, but can recall an intermediate step prior to the result. However, this step is still content in nature, not a process. The confusion of these discrete forms leads people to believe that they are able to understand their judgment processes. (Nisbett and Wilson have been criticized for failing to provide a clear definition of the differences between mental content and mental processes.)
  • Knowledge of prior idiosyncratic reactions to a stimulus: An individual's belief that they react in an abnormal manner to a stimulus, which would be unpredictable from the standpoint of an outside observer, seems to support true introspective ability. However, these perceived covariations may actually be false, and truly abnormal covariations are rare.
  • Differences in causal theories between subcultures: The inherent differences between discrete subcultures mean that they hold somewhat different causal theories for any one stimulus. Thus, an outsider would not have the same ability to discern a true cause as would an insider, again making it seem to the introspector that they can understand the judgment process better than another can.
  • Attentional and intentional knowledge: An individual may consciously know that they were not paying attention to a certain stimulus or did not have a certain intent. Again, as insight that an outside observer does not have, this seems indicative of true introspective ability. However, the authors note that such knowledge can actually mislead the individual in the case that it is not as influential as they may think.
  • Inadequate feedback: By its nature, introspection is difficult to disconfirm in everyday life, where there are no tests of it and others tend not to question one's introspections. Moreover, when a person's causal theory of reasoning is seemingly disconfirmed, it is easy for them to produce alternative reasons why the evidence is actually not disconfirmatory at all.
  • Motivational reasons: Regarding one's own insight into one's reasoning as no better than an outsider's is intimidating and a threat to the ego and sense of control. Thus, people do not like to entertain the idea, instead maintaining the belief that they can accurately introspect.

Criticisms

Some evolutionary biologists criticize the claim that confabulated justifications evolved to relieve cognitive dissonance, arguing that it presupposes the prior evolution of a mechanism for feeling dissonance at a lack of justification. If causal theories had no higher predictive accuracy than the prejudices that would have been in place even without them, they argue, there would have been no evolutionary selection for experiencing any form of discomfort from lacking causal theories. Many scholars also criticize the claim that studies in the United States that appear to show a link between homophobia and homosexuality can be explained by an actual such link. Since much homophobia in the United States is due to religious indoctrination and therefore unrelated to personal sexual preferences, they argue that the appearance of a link is due to volunteer bias in erotica research: religious homophobes fear God's judgment but not being recorded as "homosexual" by earthly psychologists, while most non-homophobes are misled by false dichotomies into assuming that the notion that men can be sexually fluid is somehow "homophobic" and "unethical".

Choice blindness

Inspired by the Nisbett and Wilson paper, Petter Johansson and colleagues investigated subjects' insight into their own preferences using a new technique. Subjects saw two photographs of people and were asked which they found more attractive. They were given a closer look at their "chosen" photograph and asked to verbally explain their choice. However, in some trials, the experimenter had slipped them the other photograph rather than the one they had chosen, using sleight of hand. A majority of subjects failed to notice that the picture they were looking at did not match the one they had chosen just seconds before. Many subjects confabulated explanations of their preference. For example, a man might say "I preferred this one because I prefer blondes" when he had in fact pointed to the dark-haired woman, but had been handed a blonde. These must have been confabulated because they explain a choice that was never made. The large proportion of subjects who were taken in by the deception contrasts with the 84% who, in post-test interviews, said that hypothetically they would have detected a switch if it had been made in front of them. The researchers coined the phrase "choice blindness" for this failure to detect a mismatch.

A follow-up experiment involved shoppers in a supermarket tasting two different kinds of jam, then verbally explaining their preferred choice while taking further spoonfuls from the "chosen" pot. However, the pots were rigged so that, when explaining their choice, the subjects were tasting the jam they had actually rejected. A similar experiment was conducted with tea. Another variation involved subjects choosing between two objects displayed on PowerPoint slides, then explaining their choice when the description of what they chose had been altered.

Research by Paul Eastwick and Eli Finkel (described as "one of the leading lights in the realm of relationship psychology") at Northwestern University also undermined the idea that subjects have direct introspective awareness of what attracts them to other people. These researchers examined male and female subjects' reports of what they found attractive. Men typically reported that physical attractiveness was crucial while women identified earning potential as most important. These subjective reports did not predict their actual choices in a speed dating context, or their dating behaviour in a one-month follow-up.

Consistent with choice blindness, Henkel and Mather found that people are easily convinced by false reminders that they chose different options than they actually chose and that they show greater choice-supportive bias in memory for whichever option they believe they chose.

Criticisms

It is not clear, however, to what extent these findings apply to real-life experience, when we have more time to reflect or use actual faces (as opposed to gray-scale photos). As Prof. Kaszniak points out: "although a priori theories are an important component of people's causal explanations, they are not the sole influence, as originally hypothesized by Nisbett & Wilson. Actors also have privileged information access that includes some degree of introspective access to pertinent causal stimuli and thought processes, as well as better access (than observers) to stimulus-response covariation data about their own behaviour". Other criticisms point out that people who volunteer for psychology lab studies are not representative of the general population, and behave in ways that do not reflect how they would behave in real life. For example, people of many different closed political ideologies, despite their enmity toward one another, may share the belief that it is "ethical" to give the appearance of justifying one's beliefs and "unethical" to admit that humans are open-minded in the absence of threats that inhibit critical thinking, leading them to fake justifications.

Attitude change

Studies that ask participants to introspect upon their reasoning (for liking, choosing, or believing something, etc.) tend to see a subsequent decrease in correspondence between attitude and behaviour in the participants. For example, in a study by Wilson et al., participants rated their interest in puzzles that they had been given. Prior to rating, one group had been instructed to contemplate and write down their reasons for liking or disliking the puzzles, while the control group was given no such task. The amount of time participants spent playing with each puzzle was then recorded. The correlation between each puzzle's rating and the time spent playing it was much smaller for the introspection group than for the control group.

A subsequent study was performed to show the generalizability of these results to more "realistic" circumstances. In this study, participants were all involved in a steady romantic relationship. All were asked to rate how well-adjusted their relationship was. One group was asked to list all of the reasons behind their feelings for their partner, while the control group did not do so. Six months later, the experimenters followed up with participants to check whether they were still in the same relationship. Those who had been asked to introspect showed much less attitude-behaviour consistency, based on correlations between the earlier relationship ratings and whether they were still dating their partners. This shows that introspection was not predictive here, though it probably also means that introspecting changed the course of the relationship.

The authors theorize that these effects are due to participants changing their attitudes, when confronted with a need for justification, without changing their corresponding behaviours. The authors hypothesize that this attitude shift is the result of a combination of factors: a desire to avoid feeling foolish for simply not knowing why one feels a certain way; a tendency to make justifications based on cognitive reasons, despite the large influence of emotion; ignorance of mental biases (e.g., halo effects); and self-persuasion that the reasons one has come up with must be representative of one's attitude. In effect, people attempt to supply a "good story" to explain their reasoning, which often leads to convincing themselves that they actually hold a different belief. In studies wherein participants chose an item to keep, their subsequent reports of satisfaction with the item decreased, suggesting that their attitude changes were temporary, returning to the original attitude over time.

Introspection by focusing on feelings

In contrast with introspection that focuses on reasoning, introspection that focuses on feelings has actually been shown to increase attitude-behaviour correlations. This finding suggests that introspecting on one's feelings is not a maladaptive process.

Criticisms

The theory that justification-producing mental processes do not make behaviour more adaptive is criticized by some biologists, who argue that the nutrient cost of brain function selects against any brain mechanism that does not make behaviour better adapted to the environment. They argue that the cost in essential nutrients causes even more difficulty than the cost in calories, especially in social groups of many individuals needing the same scarce nutrients, which imposes substantial difficulty on feeding the group and lowers its potential size. These biologists argue that the evolution of argumentation was driven by the effectiveness of arguments in changing risk perception and life-and-death decisions to a more adaptive state, since "luxury functions" that did not enhance life-and-death survival would lose the evolutionary "tug of war" against the selection for nutritional thrift. While there have been claims of non-adaptive brain functions being selected by sexual selection, these biologists deny that this can apply to the introspection illusion's causal theories, because sexually selected traits are most disabling as a fitness signal during or after puberty, whereas human brains require the greatest amount of nutrients before puberty (enhancing the nerve connections in ways that make adult brains capable of faster and more nutrient-efficient firing).

A priori causal theories

In their classic paper, Nisbett and Wilson proposed that introspective confabulations result from a priori theories, of which they put forth four possible origins:
  • Explicit cultural rules (e.g., stopping at red traffic lights)
  • Implicit cultural theories, with certain schemata for likely stimulus-response relationships (e.g., an athlete only endorses a brand because he is paid to do so)
  • Individual observational experiences that lead one to form a theory of covariation
  • Similar connotation between stimulus and response
The authors note that the use of these theories does not necessarily lead to inaccurate assumptions, but that this frequently occurs because the theories are improperly applied.

Explaining biases

Pronin argues that over-reliance on intentions is a factor in a number of different biases. For example, by focusing on their current good intentions, people can overestimate their likelihood of behaving virtuously.

In perceptions of bias

The bias blind spot is an established phenomenon in which people rate themselves as less susceptible to bias than their peer group. Emily Pronin and Matthew Kugler argue that this phenomenon is due to the introspection illusion. In their experiments, subjects had to make judgments about themselves and about other subjects. They displayed standard biases, for example rating themselves above the others on desirable qualities (demonstrating illusory superiority). The experimenters explained cognitive bias, and asked the subjects how it might have affected their judgment. The subjects rated themselves as less susceptible to bias than others in the experiment (confirming the bias blind spot). When they had to explain their judgments, they used different strategies for assessing their own and others' bias.

Pronin and Kugler's interpretation is that when people decide whether someone else is biased, they use overt behaviour. On the other hand, when assessing whether or not they themselves are biased, people look inward, searching their own thoughts and feelings for biased motives. Since biases operate unconsciously, these introspections are not informative, but people wrongly treat them as reliable indication that they themselves, unlike other people, are immune to bias.

Pronin and Kugler tried to give their subjects access to others' introspections. To do this, they made audio recordings of subjects who had been told to say whatever came into their heads as they decided whether their answer to a previous question might have been affected by bias. Although subjects persuaded themselves they were unlikely to be biased, their introspective reports did not sway the assessments of observers.

When asked what it would mean to be biased, subjects were more likely to define bias in terms of introspected thoughts and motives when it applied to themselves, but in terms of overt behaviour when it applied to other people. When subjects were explicitly told to avoid relying on introspection, their assessments of their own bias became more realistic.

Additionally, Nisbett and Wilson found that asking participants whether biases (such as the position effect in the stocking study) had an effect on their decisions resulted in a negative response, in contradiction with the data.

In perceptions of conformity

Another series of studies by Pronin and colleagues examined perceptions of conformity. Subjects reported being more immune to social conformity than their peers. In effect, they saw themselves as being "alone in a crowd of sheep". The introspection illusion appeared to contribute to this effect. When deciding whether others respond to social influence, subjects mainly looked at those others' behaviour, for example explaining other students' political opinions in terms of following the group. When assessing their own conformity, subjects treated their own introspections as reliable: in their own minds, they found no motive to conform, and so decided that they had not been influenced.

In perceptions of control and free will

Psychologist Daniel Wegner has argued that an introspection illusion contributes to belief in paranormal phenomena such as psychokinesis. He observes that in everyday experience, intention (such as wanting to turn on a light) is followed by action (such as flicking a light switch) in a reliable way, but the processes connecting the two are not consciously accessible. Hence though subjects may feel that they directly introspect their own free will, the experience of control is actually inferred from relations between the thought and the action. This theory, called "apparent mental causation", acknowledges the influence of David Hume's view of the mind. This process for detecting when one is responsible for an action is not totally reliable, and when it goes wrong there can be an illusion of control. This could happen when an external event follows, and is congruent with, a thought in someone's mind, without an actual causal link.

As evidence, Wegner cites a series of experiments on magical thinking in which subjects were induced to think they had influenced external events. In one experiment, subjects watched a basketball player taking a series of free throws. When they were instructed to visualise him making his shots, they felt that they had contributed to his success.

If the introspection illusion contributes to the subjective feeling of free will, then it follows that people will more readily attribute free will to themselves rather than others. This prediction has been confirmed by three of Pronin and Kugler's experiments. When college students were asked about personal decisions in their own and their roommate's lives, they regarded their own choices as less predictable. Staff at a restaurant described their co-workers' lives as more determined (having fewer future possibilities) than their own lives. When weighing up the influence of different factors on behaviour, students gave desires and intentions the strongest weight for their own behaviour, but rated personality traits as most predictive of other people.

However, criticism of Wegner's claims regarding the significance of introspection illusion for the notion of free will has been published.

Criticisms

Research shows that human volunteers can estimate their response times accurately, in fact knowing their "mental processes" well, but only when substantial demands are made on their attention and cognitive resources (i.e., when they are distracted while estimating). Such estimation is likely more than post hoc interpretation and may incorporate privileged information. Mindfulness training can also increase introspective accuracy in some instances. Nisbett and Wilson's findings were criticized by psychologists Ericsson and Simon, among others.

Correcting for the bias

A study that investigated the effect of educating people about unconscious biases on their subsequent self-ratings of susceptibility to bias showed that those who were educated did not exhibit the bias blind spot, in contrast with the control group. This finding provides hope that being informed about unconscious biases such as the introspection illusion may help people to avoid making biased judgments, or at least make them aware that they are biased. Findings from other studies on correction of the bias yielded mixed results. In a later review of the introspection illusion, Pronin suggests that the distinction is that studies that merely provide a warning of unconscious biases do not see a correction effect, whereas those that inform about the bias and emphasize its unconscious nature do yield corrections. Thus, knowledge that bias can operate outside conscious awareness appears to be the defining factor in leading people to correct for it.

Timothy Wilson has tried to find a way out of the introspection illusion, recounted in his book Strangers to Ourselves. He suggests that observing our own behaviours, more than our thoughts, can be one of the keys to clearer introspective knowledge.

Criticisms

Some 21st-century critical rationalists argue that claims of correcting for introspection illusions or other cognitive biases risk immunizing themselves against criticism: if criticism of a psychological theory that posits a cognitive bias can itself be dismissed as a "justification" for that bias, the theory becomes non-falsifiable through the labelling of its critics, and potentially totalitarian. These critical rationalists argue that defending a theory by claiming that it overcomes bias, while alleging that its critics are biased, could defend any pseudoscience from criticism; that the claim "criticism of A is a defense of B" is inherently incapable of being evidence-based; and that any bias actually shared by "most humans" would be shared by most psychologists as well, making psychological claims of bias a way of accusing unbiased criticism of being biased while marketing the biases themselves as the overcoming of bias.

No Big Bang? Quantum Equation Predicts Universe Has No Beginning

The universe may have existed forever, according to a new model that applies quantum correction terms to complement Einstein's theory of general relativity. The model may also account for dark matter and dark energy, resolving multiple problems at once.

The widely accepted age of the universe, as estimated by general relativity, is 13.8 billion years. In the beginning, everything in existence is thought to have occupied a single infinitely dense point, or singularity. Only after this point began to expand in a "Big Bang" did the universe officially begin.

Although the Big Bang singularity arises directly and unavoidably from the mathematics of general relativity, some scientists see it as problematic because the math can explain only what happened immediately after—not at or before—the singularity.

"The Big Bang singularity is the most serious problem of general relativity because the laws of physics appear to break down there," Ahmed Farag Ali at Benha University and the Zewail City of Science and Technology, both in Egypt, told Phys.org.

Ali and coauthor Saurya Das at the University of Lethbridge in Alberta, Canada, have shown in a paper published in Physics Letters B that the Big Bang singularity can be resolved by their new model in which the universe has no beginning and no end.

Old ideas revisited

The physicists emphasize that their quantum correction terms are not applied ad hoc in an attempt to specifically eliminate the Big Bang singularity. Their work is based on ideas by the theoretical physicist David Bohm, who is also known for his contributions to the philosophy of physics. Starting in the 1950s, Bohm explored replacing classical geodesics (the shortest path between two points on a curved surface) with quantum trajectories.

In their paper, Ali and Das applied these Bohmian trajectories to an equation developed in the 1950s by physicist Amal Kumar Raychaudhuri at Presidency University in Kolkata, India. Raychaudhuri was also Das's teacher when Das was an undergraduate student at that institution in the '90s.

Using the quantum-corrected Raychaudhuri equation, Ali and Das derived quantum-corrected Friedmann equations, which describe the expansion and evolution of the universe (including the Big Bang) within the context of general relativity. Although it's not a true theory of quantum gravity, the model does contain elements from both quantum theory and general relativity.
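
For reference, a standard textbook form of the classical Raychaudhuri equation for a congruence of timelike geodesics is shown below; the quantum-corrected version, with geodesics replaced by Bohmian trajectories, is the one actually derived in Ali and Das's paper and is not reproduced here:

    \frac{d\theta}{d\tau} = -\frac{1}{3}\theta^{2} - \sigma_{\mu\nu}\sigma^{\mu\nu} + \omega_{\mu\nu}\omega^{\mu\nu} - R_{\mu\nu}u^{\mu}u^{\nu}

Here \theta is the expansion of the congruence, \sigma_{\mu\nu} the shear, \omega_{\mu\nu} the rotation, R_{\mu\nu} the Ricci tensor, and u^{\mu} the four-velocity. Focusing of the congruence (\theta \to -\infty), which occurs where classical geodesics cross, is what signals a singularity.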

Ali and Das also expect their results to hold even if and when a full theory of quantum gravity is formulated.

No singularities nor dark stuff

In addition to not predicting a Big Bang singularity, the new model does not predict a "big crunch" singularity, either. In general relativity, one possible fate of the universe is that it starts to shrink until it collapses in on itself in a big crunch and becomes an infinitely dense point once again.

Ali and Das explain in their paper that their model avoids singularities because of a key difference between classical geodesics and Bohmian trajectories. Classical geodesics eventually cross each other, and the points at which they converge are singularities.

In contrast, Bohmian trajectories never cross each other, so singularities do not appear in the equations.

In cosmological terms, the scientists explain that the quantum corrections can be thought of as a cosmological constant term (without the need for dark energy) and a radiation term. These terms keep the universe at a finite size, and therefore give it an infinite age. The terms also make predictions that agree closely with current observations of the cosmological constant and density of the universe.
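
For comparison, the classical first Friedmann equation (in units with c = 1, for scale factor a, energy density \rho, spatial curvature k, and cosmological constant \Lambda) already shows where such terms sit; this is the standard form, not the quantum-corrected one from the paper:

    H^{2} = \left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}} + \frac{\Lambda}{3}, \qquad \rho_{\mathrm{radiation}} \propto a^{-4}

The claim is that the quantum corrections generate terms of the \Lambda type and of the a^{-4} (radiation) type without putting them in by hand.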

New gravity particle

In physical terms, the model describes the universe as being filled with a quantum fluid. The scientists propose that this fluid might be composed of gravitons—hypothetical massless particles that mediate the force of gravity. If they exist, gravitons are thought to play a key role in a theory of quantum gravity.

In a related paper, Das and another collaborator, Rajat Bhaduri of McMaster University, Canada, have lent further credence to this model. They show that gravitons can form a Bose-Einstein condensate (named after Einstein and another Indian physicist, Satyendra Nath Bose) at temperatures that were present in the universe at all epochs.

Motivated by the model's potential to resolve the Big Bang singularity and account for dark matter and dark energy, the physicists plan to analyze their model more rigorously in the future. Their future work includes redoing their study while taking into account small inhomogeneous and anisotropic perturbations, but they do not expect small perturbations to significantly affect the results.

"It is satisfying to note that such straightforward corrections can potentially resolve so many issues at once," Das said.

Yes, there is a war between science and religion



As the West becomes more and more secular, and the discoveries of evolutionary biology and cosmology shrink the boundaries of faith, the claims that science and religion are compatible grow louder. If you’re a believer who doesn’t want to seem anti-science, what can you do? You must argue that your faith – or any faith – is perfectly compatible with science.

And so one sees claim after claim from believers, religious scientists, prestigious science organizations and even atheists asserting not only that science and religion are compatible, but also that they can actually help each other. This claim is called “accommodationism.”

But I argue that this is misguided: that science and religion are not only in conflict – even at “war” – but also represent incompatible ways of viewing the world.

Opposing methods for discerning truth

The scientific method relies on observing, testing and replication to learn about the world. Jaron Nix/Unsplash, CC BY

My argument runs like this. I’ll construe “science” as the set of tools we use to find truth about the universe, with the understanding that these truths are provisional rather than absolute. These tools include observing nature, framing and testing hypotheses, trying your hardest to prove that your hypothesis is wrong to test your confidence that it’s right, doing experiments and above all replicating your and others’ results to increase confidence in your inference.

And I’ll define religion as does philosopher Daniel Dennett: “Social systems whose participants avow belief in a supernatural agent or agents whose approval is to be sought.” Of course many religions don’t fit that definition, but the ones whose compatibility with science is touted most often – the Abrahamic faiths of Judaism, Christianity and Islam – fill the bill.

Next, realize that both religion and science rest on “truth statements” about the universe – claims about reality. The edifice of religion differs from science by additionally dealing with morality, purpose and meaning, but even those areas rest on a foundation of empirical claims. You can hardly call yourself a Christian if you don’t believe in the Resurrection of Christ, a Muslim if you don’t believe the angel Gabriel dictated the Qur’an to Muhammad, or a Mormon if you don’t believe that the angel Moroni showed Joseph Smith the golden plates that became the Book of Mormon. After all, why accept a faith’s authoritative teachings if you reject its truth claims?

Indeed, even the Bible notes this: “But if there be no resurrection of the dead, then is Christ not risen: And if Christ be not risen, then is our preaching vain, and your faith is also vain.”

Many theologians emphasize religion’s empirical foundations, agreeing with the physicist and Anglican priest John Polkinghorne:
The question of truth is as central to [religion’s] concern as it is in science. Religious belief can guide one in life or strengthen one at the approach of death, but unless it is actually true it can do neither of these things and so would amount to no more than an illusory exercise in comforting fantasy.
The conflict between science and faith, then, rests on the methods they use to decide what is true, and what truths result: These are conflicts of both methodology and outcome.

In contrast to the methods of science, religion adjudicates truth not empirically, but via dogma, scripture and authority – in other words, through faith, defined in Hebrews 11 as "the substance of things hoped for, the evidence of things not seen." In science, faith without evidence is a vice, while in religion it's a virtue. Recall what Jesus said to "doubting Thomas," who insisted on poking his fingers into the resurrected Savior's wounds: "Thomas, because thou hast seen me, thou hast believed: blessed are they that have not seen, and yet have believed."

Two ways to look at the same thing, never the twain shall meet. Gabriel Lamza/Unsplash, CC BY

And yet, without supporting evidence, Americans believe a number of religious claims: 74 percent of us believe in God, 68 percent in the divinity of Jesus, 68 percent in Heaven, 57 percent in the virgin birth, and 58 percent in the Devil and Hell. Why do they think these are true? Faith.

But different religions make different – and often conflicting – claims, and there’s no way to judge which claims are right. There are over 4,000 religions on this planet, and their “truths” are quite different. (Muslims and Jews, for instance, absolutely reject the Christian belief that Jesus was the son of God.) Indeed, new sects often arise when some believers reject what others see as true. Lutherans split over the truth of evolution, while Unitarians rejected other Protestants’ belief that Jesus was part of God.

And while science has had success after success in understanding the universe, the “method” of using faith has led to no proof of the divine. How many gods are there? What are their natures and moral creeds? Is there an afterlife? Why is there moral and physical evil? There is no one answer to any of these questions. All is mystery, for all rests on faith.

The “war” between science and religion, then, is a conflict about whether you have good reasons for believing what you do: whether you see faith as a vice or a virtue.

Compartmentalizing realms is irrational

So how do the faithful reconcile science and religion? Often they point to the existence of religious scientists, like NIH Director Francis Collins, or to the many religious people who accept science. But I’d argue that this is compartmentalization, not compatibility, for how can you reject the divine in your laboratory but accept that the wine you sip on Sunday is the blood of Jesus?

Can divinity be at play in one setting but not another? Jametlene Reskp/Unsplash, CC BY

Others argue that in the past religion promoted science and inspired questions about the universe. But in the past every Westerner was religious, and it’s debatable whether, in the long run, the progress of science has been promoted by religion. Certainly evolutionary biology, my own field, has been held back strongly by creationism, which arises solely from religion.

What is not disputable is that today science is practiced as an atheistic discipline – and largely by atheists. There’s a huge disparity in religiosity between American scientists and Americans as a whole: 64 percent of our elite scientists are atheists or agnostics, compared to only 6 percent of the general population – more than a tenfold difference. Whether this reflects differential attraction of nonbelievers to science or science eroding belief – I suspect both factors operate – the figures are prima facie evidence for a science-religion conflict.

The most common accommodationist argument is Stephen Jay Gould’s thesis of “non-overlapping magisteria.” Religion and science, he argued, don’t conflict because: “Science tries to document the factual character of the natural world, and to develop theories that coordinate and explain these facts. Religion, on the other hand, operates in the equally important, but utterly different, realm of human purposes, meanings and values – subjects that the factual domain of science might illuminate, but can never resolve.”

This fails on both ends. First, religion certainly makes claims about “the factual character of the universe.” In fact, the biggest opponents of non-overlapping magisteria are believers and theologians, many of whom reject the idea that Abrahamic religions are “empty of any claims to historical or scientific facts.”

Nor is religion the sole bailiwick of “purposes, meanings and values,” which of course differ among faiths. There’s a long and distinguished history of philosophy and ethics – extending from Plato, Hume and Kant up to Peter Singer, Derek Parfit and John Rawls in our day – that relies on reason rather than faith as a fount of morality. All serious ethical philosophy is secular ethical philosophy.

In the end, it’s irrational to decide what’s true in your daily life using empirical evidence, but then rely on wishful-thinking and ancient superstitions to judge the “truths” undergirding your faith. This leads to a mind (no matter how scientifically renowned) at war with itself, producing the cognitive dissonance that prompts accommodationism. If you decide to have good reasons for holding any beliefs, then you must choose between faith and reason. And as facts become increasingly important for the welfare of our species and our planet, people should see faith for what it is: not a virtue but a defect.

Illusory superiority

From Wikipedia, the free encyclopedia

In the field of social psychology, illusory superiority is a condition of cognitive bias wherein a person overestimates their own qualities and abilities, in relation to the same qualities and abilities of other persons. Illusory superiority is one of many positive illusions, relating to the self, that are evident in the study of intelligence, the effective performance of tasks and tests, and the possession of desirable personal characteristics and personality traits. 
 
The term illusory superiority was first used by the researchers Van Yperen and Buunk in 1991. The condition is also known as the above-average effect, the superiority bias, the leniency error, the sense of relative superiority, the primus inter pares effect, and the Lake Wobegon effect.

Effects in different situations

Illusory superiority has been found in individuals' comparisons of themselves with others in a variety of aspects of life, including performance in academic circumstances (such as class performance, exams and overall intelligence), in working environments (for example in job performance), and in social settings (for example in estimating one's popularity, or the extent to which one possesses desirable personality traits, such as honesty or confidence), and in everyday abilities requiring particular skill.

For illusory superiority to be demonstrated by social comparison, two logical hurdles have to be overcome. One is the ambiguity of the word "average". It is logically possible for nearly all of the set to be above the mean if the distribution of abilities is highly skewed. For example, the mean number of legs per human being is slightly lower than two because some people have fewer than two and almost none have more. Hence experiments usually compare subjects to the median of the peer group, since by definition it is impossible for a majority to exceed the median.
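
To make the arithmetic concrete, here is a minimal Python sketch (with made-up numbers, not data from any cited study) of a skewed distribution in which nearly everyone is above the mean, yet no one is above the median:

    # Skewed "number of legs" example: a few low values drag the mean
    # just below two, so almost everyone is "above average".
    from statistics import mean, median

    legs = [2] * 997 + [1, 1, 0]   # 1000 hypothetical people

    m = mean(legs)     # 1.996, slightly below two
    md = median(legs)  # 2

    print(sum(1 for x in legs if x > m))    # 997 people exceed the mean
    print(sum(1 for x in legs if x > md))   # 0 people exceed the median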
A further problem in inferring inconsistency is that subjects might interpret the question in different ways, so it is logically possible that a majority of them are, for example, more generous than the rest of the group each on "their own understanding" of generosity. This possibility was examined in experiments that varied the amount of interpretive freedom: even when subjects evaluated themselves on a specific, well-defined attribute, illusory superiority remained.

Cognitive ability

IQ

One of the main effects of illusory superiority in IQ is the "Downing effect". This describes the tendency of people with a below-average IQ to overestimate their IQ, and of people with an above-average IQ to underestimate their IQ. This tendency was first observed by C. L. Downing, who conducted the first cross-cultural studies on perceived intelligence. His studies also showed that the ability to accurately estimate other people's IQs was proportional to one's own IQ (i.e., the lower the IQ, the less capable of accurately appraising other people's IQs). People with high IQs are better overall at appraising other people's IQs, but when asked about the IQs of people with IQs similar to their own, they are likely to rate them as having higher IQs.

The disparity between actual IQ and perceived IQ has also been noted between genders by British psychologist Adrian Furnham, in whose work there was a suggestion that, on average, men are more likely to overestimate their intelligence by 5 points, while women are more likely to underestimate their IQ by a similar margin.

Memory

Illusory superiority has been found in studies comparing memory self-reports, such as Schmidt, Berg & Deelman's research in older adults. This study involved participants aged 46 to 89 comparing their own memory to that of peers of the same age group, to that of 25-year-olds, and to their own memory at age 25. The research showed that participants exhibited illusory superiority when comparing themselves to both peers and younger adults; however, the researchers asserted that these judgments were only slightly related to age.

Cognitive tasks

In Kruger and Dunning's experiments participants were given specific tasks (such as solving logic problems, analyzing grammar questions, and determining whether jokes were funny), and were asked to evaluate their performance on these tasks relative to the rest of the group, enabling a direct comparison of their actual and perceived performance.

Results were divided into four groups depending on actual performance and it was found that all four groups evaluated their performance as above average, meaning that the lowest-scoring group (the bottom 25%) showed a very large illusory superiority bias. The researchers attributed this to the fact that the individuals who were worst at performing the tasks were also worst at recognizing skill in those tasks. This was supported by the fact that, given training, the worst subjects improved their estimate of their rank as well as getting better at the tasks. The paper, titled "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments", won an Ig Nobel Prize in 2000.

In 2003 Dunning and Joyce Ehrlinger, also of Cornell University, published a study that detailed a shift in people's views of themselves influenced by external cues. Cornell undergraduates were given tests of their knowledge of geography, some intended to positively affect their self-views, others intended to affect them negatively. They were then asked to rate their performance, and those given the positive tests reported significantly better performance than those given the negative.

Daniel Ames and Lara Kammrath extended this work to sensitivity to others, and the subjects' perception of how sensitive they were. Research by Burson, Larrick, and Klayman suggests that the effect is not so obvious and may be due to noise and bias levels.

Dunning, Kruger, and coauthors' latest paper on this subject comes to qualitatively similar conclusions after making some attempt to test alternative explanations.

Academic ability and job performance

In a survey of faculty at the University of Nebraska–Lincoln, 68% rated themselves in the top 25% for teaching ability, and more than 90% rated themselves as above average.

In a similar survey, 87% of Master of Business Administration students at Stanford University rated their academic performance as above the median.

Illusory superiority has also been invoked to explain phenomena such as the large amount of stock market trading (as each trader thinks they are the best, and most likely to succeed) and the number of lawsuits that go to trial (because, due to illusory superiority, many lawyers hold an inflated belief that they will win a case).

Self, friends, and peers

One of the first studies that found illusory superiority was carried out in the United States by the College Board in 1976. A survey was attached to the SAT exams (taken by one million students annually), asking the students to rate themselves relative to the median of the sample (rather than the average peer) on a number of vague positive characteristics. In ratings of leadership, 70% of the students put themselves above the median. In ability to get on well with others, 85% put themselves above the median; 25% rated themselves in the top 1%. 

A 2002 study on illusory superiority in social settings had participants compare themselves to friends and other peers on positive characteristics (such as punctuality and sensitivity) and negative characteristics (such as naivety or inconsistency). The study found that participants rated themselves more favorably than their friends, but rated their friends more favorably than other peers (though several factors moderated the effect).

Research by Perloff and Fetzer, Brown, and Henri Tajfel and John C. Turner also found friends being rated higher than other peers. Tajfel and Turner attributed this to an "ingroup bias" and suggested that this was motivated by the individual's desire for a "positive social identity".

Popularity

In Zuckerman and Jost's study, participants were given detailed questionnaires about their friendships and asked to assess their own popularity. Using social network analysis, they were able to show that participants generally had exaggerated perceptions of their own popularity, especially in comparison to their own friends.

Although most people in the study believed that they had more friends than their friends, a 1991 study by sociologist Scott L. Feld on the friendship paradox shows that, due to sampling bias, most people have fewer friends than their friends have on average.
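
Feld's sampling-bias argument can be illustrated with a short simulation; the Python sketch below (a hypothetical random friendship graph, not Feld's data) counts how many people have fewer friends than their friends have on average:

    # Friendship paradox: popular people are over-represented among
    # "friends", so most people have fewer friends than their friends do.
    import random

    random.seed(0)
    n = 1000
    friends = {i: set() for i in range(n)}
    for _ in range(3000):                  # add random friendships
        a, b = random.sample(range(n), 2)
        friends[a].add(b)
        friends[b].add(a)

    fewer = 0
    for i in range(n):
        if not friends[i]:
            continue  # skip the rare person with no friends
        avg = sum(len(friends[j]) for j in friends[i]) / len(friends[i])
        if len(friends[i]) < avg:
            fewer += 1

    print(f"{fewer} of {n} people have fewer friends than their friends' average")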

Relationship happiness

Researchers have also found illusory superiority in relationship satisfaction. For example, one study found that participants perceived their own relationships as better than others' relationships on average, but thought that the majority of people were happy with their relationships. It also found evidence that the higher the participants rated their own relationship happiness, the more superior they believed their relationship was; illusory superiority also increased their own relationship satisfaction. This effect was pronounced in men, whose satisfaction was especially related to the perception that their own relationship was superior and to the assumption that few others were unhappy in their relationships. Women's satisfaction, on the other hand, was particularly related to the assumption that most people were happy with their relationship. One study found that participants became defensive when their spouse or partner was perceived by others to be more successful in any aspect of their life, and tended to exaggerate their own success and understate their spouse or partner's success.

Health

Illusory superiority was found in a self-report study of health behaviors (Hoorens & Harris, 1998) that asked participants to estimate how often they and their peers carried out healthy and unhealthy behaviors. Participants reported that they carried out healthy behaviors more often than the average peer, and unhealthy behaviors less often. The findings held even for expected future behavior.

Driving ability

Svenson (1981) surveyed 161 students in Sweden and the United States, asking them to compare their driving skills and safety to other people. For driving skills, 93% of the U.S. sample and 69% of the Swedish sample put themselves in the top 50%; for safety, 88% of the U.S. and 77% of the Swedish put themselves in the top 50%.

McCormick, Walkey and Green (1986) found similar results in their study, asking 178 participants to evaluate their position on eight different dimensions of driving skills (examples include the "dangerous–safe" dimension and the "considerate–inconsiderate" dimension). Only a small minority rated themselves as below the median, and when all eight dimensions were considered together it was found that almost 80% of participants had evaluated themselves as being an above-average driver.

One commercial survey showed that 36% of drivers believed they were an above-average driver while texting or sending emails compared to other drivers; 44% considered themselves average, and 18% below average.

Immunity to bias

Subjects describe themselves in positive terms compared to other people, and this includes describing themselves as less susceptible to bias than other people. This effect is called the "bias blind spot" and has been demonstrated independently.

Cultural differences

A vast majority of the literature on illusory superiority originates from studies on participants in the United States. However, research that only investigates the effects in one specific population is severely limited as this may not be a true representation of human psychology. More recent research investigating self-esteem in other countries suggests that illusory superiority depends on culture. Some studies indicate that East Asians tend to underestimate their own abilities in order to improve themselves and get along with others.

Self-esteem

Illusory superiority's relationship with self-esteem is uncertain. The theory that those with high self-esteem maintain this high level by rating themselves highly is not without merit: studies involving non-depressed college students found that they thought they had more control over positive outcomes compared to their peers, even when controlling for performance. Non-depressed students also actively rate peers below themselves rather than rating themselves higher. Students were also able to recall many more negative personality traits about others than about themselves.

However, these studies made no distinction between people with legitimate and illegitimate high self-esteem; other studies have found that the absence of positive illusions mainly coexists with high self-esteem, and that determined individuals bent on growth and learning are less prone to these illusions. Thus it may be that while illusory superiority is associated with undeserved high self-esteem, people with legitimate high self-esteem do not necessarily exhibit it.

Relation to mental health

Psychology has traditionally assumed that generally accurate self-perceptions are essential to good mental health. This was challenged by a 1988 paper by Taylor and Brown, who argued that mentally healthy individuals typically manifest three cognitive illusions—illusory superiority, illusion of control, and optimism bias. This idea rapidly became very influential, with some authorities concluding that it would be therapeutic to deliberately induce these biases. Since then, further research has both undermined that conclusion and offered new evidence associating illusory superiority with negative effects on the individual.

One line of argument was that in the Taylor and Brown paper, the classification of people as mentally healthy or unhealthy was based on self-reports rather than objective criteria. Hence it was not surprising that people prone to self-enhancement would exaggerate how well-adjusted they are. One study claimed that "mentally normal" groups were contaminated by "defensive deniers", who are the most subject to positive illusions. A longitudinal study found that self-enhancement biases were associated with poor social skills and psychological maladjustment. In a separate experiment where videotaped conversations between men and women were rated by independent observers, self-enhancing individuals were more likely to show socially problematic behaviors such as hostility or irritability. A 2007 study found that self-enhancement biases were associated with psychological benefits (such as subjective well-being) but also inter- and intra-personal costs (such as anti-social behavior).

Neuroimaging

The degree to which people view themselves as more desirable than the average person is linked to reduced activation in the orbitofrontal cortex and dorsal anterior cingulate cortex, a finding attributed to the role of these areas in "cognitive control".

Explanations

Noisy mental information processing

A 2012 Psychological Bulletin article suggests that illusory superiority, like other biases, can be explained by a simple information-theoretic generative mechanism that assumes a noisy conversion of objective evidence (observation) into subjective estimates (judgment). The study suggests that the underlying cognitive mechanism is similar to the noisy mixing of memories that can cause the conservatism bias or overconfidence: after taking a test, we readjust our estimates of our own performance more than our estimates of others' performances. As a result, our estimates of others' scores are more conservative (more influenced by the prior expectation) than our estimates of our own performance (more influenced by the new evidence received after the test). The difference in conservatism between the two estimates (a conservative estimate of our own performance, and an even more conservative estimate of the performance of others) is enough to create illusory superiority. Since mental noise is a sufficient explanation, and one far simpler than explanations involving heuristics, behavior, or social interaction, Occam's razor favors it as the underlying generative mechanism: it is the hypothesis that makes the fewest assumptions.
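
To make the mechanism concrete, the following is a minimal simulation sketch of the noisy-conversion idea. It is illustrative only: the update rule, the weights, and all parameter values are assumptions chosen for this example, not the model published in the 2012 article.

    import random

    # Sketch: a subjective estimate blends a prior expectation with a noisy
    # observation of the true score. The weight on new evidence is assumed
    # to be higher for one's own performance than for others' performances,
    # making estimates of others more conservative.
    random.seed(0)

    PRIOR = 50.0      # shared prior expectation of performance (percentile-like)
    W_SELF = 0.8      # assumed weight on new evidence about one's own score
    W_OTHER = 0.4     # assumed (lower) weight on evidence about others' scores
    NOISE_SD = 10.0   # mental noise added when observing a score

    def subjective_estimate(true_score, evidence_weight):
        """Blend the prior with a noisy observation of the true score."""
        observed = random.gauss(true_score, NOISE_SD)
        return (1 - evidence_weight) * PRIOR + evidence_weight * observed

    diffs = []
    for _ in range(100_000):
        true_score = random.gauss(70, 15)  # self and peer truly perform alike
        est_self = subjective_estimate(true_score, W_SELF)
        est_other = subjective_estimate(true_score, W_OTHER)
        diffs.append(est_self - est_other)

    print(sum(diffs) / len(diffs))  # positive on average: illusory superiority

Because the peer estimate stays closer to the prior, any performance above the prior yields a positive self-minus-other gap even though the true scores are identical; when true performance falls below the prior, the same mechanism produces a worse-than-average effect, consistent with the findings on difficult tasks discussed below.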

Selective recruitment

Selective recruitment is the notion that, when making peer comparisons, an individual selects their own strengths and the other's weaknesses so that they appear better on the whole. This theory was first tested by Weinstein (1980), though in an experiment on optimistic bias rather than the better-than-average effect. The study involved participants rating certain behaviors as likely to increase or decrease the chance of a series of life events happening to them. It found that individuals showed less optimistic bias when they were allowed to see others' answers.

Perloff and Fetzer (1986) suggested that when making peer comparisons on a specific characteristic, an individual chooses a comparison target (the peer to whom they are being compared) with lower abilities. To test this theory, Perloff and Fetzer asked participants to compare themselves to specific comparison targets, such as a close friend, and found that illusory superiority decreased when participants were told to envision a specific person rather than a vague construct like "the average peer". These results are not completely reliable, however: individuals like their close friends more than an "average peer" and may as a result rate a friend as above average, so the friend would not be an objective comparison target.

Egocentrism

Another explanation for how the better-than-average effect works is egocentrism: the idea that an individual places greater importance and significance on their own abilities, characteristics, and behaviors than on those of others. Egocentrism is therefore a less overtly self-serving bias. On this account, individuals overestimate themselves relative to others because they believe they have an advantage others lack; an individual weighing their own performance against another's will tend to judge their own as better, even when the two are in fact equal. Kruger (1999) found support for the egocentrism explanation in research involving participants' ratings of their ability on easy and difficult tasks. Individuals consistently rated themselves above the median on tasks classified as "easy" and below the median on tasks classified as "difficult", regardless of their actual ability. In this experiment the better-than-average effect was observed when participants were led to expect success, and a worse-than-average effect when they were led to expect failure.

Focalism

Yet another explanation for the better-than-average effect is "focalism", the idea that greater significance is placed on the object that is the focus of attention. Most studies of the better-than-average effect place greater focus on the self when asking participants to make comparisons (the question is often phrased with the self presented before the comparison target, as in "compare yourself to the average person"). According to focalism, this means that the individual will place greater significance on their own ability or characteristic than on that of the comparison target. It also means that, in theory, if the questions in a better-than-average experiment were phrased so that the self and other were switched (e.g., "compare the average peer to yourself"), the better-than-average effect should be lessened.

Research into focalism has focused primarily on optimistic bias rather than the better-than-average effect. However, two studies found a decreased effect of optimistic bias when participants were asked to compare an average peer to themselves, rather than themselves to an average peer.

Windschitl, Kruger & Simms (2003) conducted research into focalism focusing specifically on the better-than-average effect, and found that participants' estimates of their ability and likelihood of success in a task decreased when they were asked about others' chances of success rather than their own.

"Self versus aggregate" comparisons

This idea, put forward by Giladi and Klar, suggests that when making comparisons, any single member of a group will tend to evaluate themselves as ranking above that group's statistical mean performance level or the median performance level of its members. For example, if an individual is asked to assess their own skill at driving compared to the rest of the group, they are likely to rate themselves as an above-average driver, and the majority of the group is likely to do the same. Research has found this effect in many different areas of human performance and has even generalized it beyond individuals' attempts to draw comparisons involving themselves. Findings of this research therefore suggest that rather than individuals evaluating themselves as above average in a self-serving manner, the better-than-average effect is actually due to a general tendency to evaluate any single person or object as better than average.
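
As a brief statistical aside (not part of Giladi and Klar's study), whether a majority rating themselves "above average" is even arithmetically possible depends on whether "average" means the mean or the median. The sketch below, using invented scores, shows that a skewed distribution lets most members genuinely exceed the mean, whereas no more than half can exceed the median:

    import random

    # Invented scores in which a few very poor performers drag the mean down.
    random.seed(1)
    scores = [random.choice([10, 80, 85, 90]) for _ in range(10_000)]

    mean_score = sum(scores) / len(scores)
    median_score = sorted(scores)[len(scores) // 2]

    above_mean = sum(s > mean_score for s in scores) / len(scores)
    above_median = sum(s > median_score for s in scores) / len(scores)

    print(f"above mean:   {above_mean:.0%}")    # roughly 75% here
    print(f"above median: {above_median:.0%}")  # never more than 50%

A majority above the mean is therefore only necessarily illusory for symmetric distributions or for the median; Giladi and Klar's point is stronger, since the same above-average judgment appears for any single person or object being evaluated.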

Better-than-average heuristic

Alicke and Govorun proposed the idea that, rather than individuals consciously reviewing and thinking about their own abilities, behaviors and characteristics and comparing them to those of others, it is likely that people instead have what they describe as an "automatic tendency to assimilate positively-evaluated social objects toward ideal trait conceptions". For example, if an individual evaluated themselves as honest, they would be likely to then exaggerate their characteristic towards their perceived ideal position on a scale of honesty. Importantly, Alicke noted that this ideal position is not always the top of the scale; for example, with honesty, someone who is always brutally honest may be regarded as rude—the ideal is a balance, perceived differently by different individuals.

Non-social explanations

The better-than-average effect may not have wholly social origins—judgments about inanimate objects suffer similar distortions.

Moderating factors

While illusory superiority has been found to be somewhat self-serving, this does not mean that it will predictably occur—it is not constant. The strength of the effect is moderated by many factors, the main examples of which have been summarized by Alicke and Govorun (2005).

Interpretability/ambiguity of trait

This is a phenomenon that Alicke and Govorun have described as "the nature of the judgement dimension", referring to how subjective (abstract) or objective (concrete) the ability or characteristic being evaluated is. Research by Sedikides & Strube (1997) has found that people are more self-serving (the effect of illusory superiority is stronger) when the event in question is more open to interpretation; for example, social constructs such as popularity and attractiveness are more interpretable than characteristics such as intelligence and physical ability. This has also been attributed in part to the need for a believable self-view.

The idea that ambiguity moderates illusory superiority has empirical research support from a study involving two conditions: in one, participants were given criteria for assessing a trait as ambiguous or unambiguous, and in the other participants were free to assess the traits according to their own criteria. It was found that the effect of illusory superiority was greater in the condition where participants were free to assess the traits.

The effects of illusory superiority have also been found to be strongest when people rate themselves on abilities at which they are totally incompetent. These subjects have the greatest disparity between their actual performance (at the low end of the distribution) and their self-rating (placing themselves above average). This Dunning–Kruger effect is interpreted as a lack of metacognitive ability to recognize their own incompetence.

Method of comparison

The method used in research into illusory superiority has been found to affect the strength of the effect. Most studies of illusory superiority involve a comparison between an individual and an average peer, for which there are two methods: direct comparison and indirect comparison. A direct comparison, which is more commonly used, involves the participant rating themselves and the average peer on the same scale, from "below average" to "above average", and results in participants being far more self-serving. Researchers have suggested that this occurs because of the closer comparison between the individual and the average peer; however, this method makes it impossible to know whether a participant has overestimated themselves, underestimated the average peer, or both.

The indirect method of comparison involves participants rating themselves and the average peer on separate scales; the illusory superiority effect is found by subtracting the average-peer score from the individual's score (a higher score indicating a greater effect). While the indirect comparison method is used less often, it is more informative about whether participants have overestimated themselves or underestimated the average peer, and can therefore provide more information about the nature of illusory superiority.
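
As a sketch of how the indirect method's extra information can be extracted, suppose participants rate themselves and the average peer on a 1-to-7 scale. The data below are invented, and taking the scale midpoint as the reference point is an assumption of this example, not a standard from the literature:

    # Invented ratings from a hypothetical indirect-method study (1-7 scale).
    self_ratings = [6, 5, 7, 4, 6, 5]   # each participant's rating of themselves
    peer_ratings = [4, 4, 5, 4, 3, 5]   # the same participants' ratings of the average peer

    # Indirect-method index: self rating minus average-peer rating.
    indices = [s - p for s, p in zip(self_ratings, peer_ratings)]
    mean_index = sum(indices) / len(indices)

    # Decomposition the direct method cannot provide: does the effect come
    # from inflated self-ratings, deflated peer ratings, or both?
    SCALE_MIDPOINT = 4.0  # assumed reference point on the 1-7 scale
    mean_self = sum(self_ratings) / len(self_ratings)
    mean_peer = sum(peer_ratings) / len(peer_ratings)

    print(f"mean superiority index: {mean_index:+.2f}")
    print(f"self-overestimation:    {mean_self - SCALE_MIDPOINT:+.2f}")
    print(f"peer-underestimation:   {SCALE_MIDPOINT - mean_peer:+.2f}")

In this invented sample, the index is driven almost entirely by inflated self-ratings; a direct comparison would yield only a single "above average" rating per participant, with no way to separate the two components.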

Comparison target

The nature of the comparison target is one of the most fundamental moderating factors of the effect of illusory superiority, and there are two main issues relating to the comparison target that need to be considered. 

First, research into illusory superiority is distinct in terms of the comparison target because an individual compares themselves with a hypothetical average peer rather than a tangible person. Alicke et al. (1995) found that the effect of illusory superiority was still present but was significantly reduced when participants compared themselves with real people (also participants in the experiment, who were seated in the same room), as opposed to when participants compared themselves with an average peer. This suggests that research into illusory superiority may itself be biasing results and finding a greater effect than would actually occur in real life.

Further research into differences between comparison targets involved four conditions in which participants were at varying proximity to an interview with the comparison target: watching live in the same room; watching on tape; reading a written transcript; or making self-other comparisons with an average peer. The effect of illusory superiority was greater when the participant was further removed from the interview situation (in the tape-observation and transcript conditions). Researchers asserted that these findings suggest that the effect of illusory superiority is reduced by two main factors: individuation of the target and live contact with the target.

Second, Alicke et al.'s (1995) studies investigated whether the negative connotations of the word "average" might affect the extent to which individuals exhibit illusory superiority, namely whether using the word "average" increases the effect. Participants were asked to evaluate themselves, the average peer, and a person they had sat next to in the previous experiment on various dimensions. They placed themselves highest, followed by the real person, followed by the average peer; however, the average peer was consistently placed above the mean point on the scale, suggesting that the word "average" did not negatively affect participants' views of the average peer.

Controllability

An important moderating factor of the effect of illusory superiority is the extent to which an individual believes they are able to control and change their position on the dimension concerned. According to Alicke and Govorun, self-ratings of positive characteristics that individuals believe are within their control are more self-serving, while negative characteristics seen as uncontrollable are less detrimental to self-enhancement. This theory was supported by Alicke's (1985) research, which found that individuals rated themselves higher than an average peer on positive controllable traits and lower than an average peer on negative uncontrollable traits. The idea suggested by these findings, that individuals believe they are responsible for their successes while some other factor is responsible for their failures, is known as the self-serving bias.

Individual differences of judge

Personality characteristics vary widely between people and have been found to moderate the effects of illusory superiority; one of the main examples is self-esteem. Brown (1986) found that in self-evaluations of positive characteristics, participants with higher self-esteem showed greater illusory superiority bias than participants with lower self-esteem. Additionally, another study found that participants pre-classified as having high self-esteem tended to interpret ambiguous traits in a self-serving way, whereas participants pre-classified as having low self-esteem did not.

Worse-than-average effect

In contrast to what is commonly believed, research has found that better-than-average effects are not universal. In fact, much recent research has found the opposite effect in many tasks, especially those that are more difficult.
