
Friday, July 2, 2021

Cargo cult

From Wikipedia, the free encyclopedia

Ceremonial cross of John Frum cargo cult, Tanna island, New Hebrides (now Vanuatu), 1967
 
A cargo cult is an indigenist millenarian belief system in which adherents perform rituals they believe will cause a more technologically advanced society to deliver goods. These cults were first described in Melanesia in the wake of contact with Allied military forces during the Second World War.

Isolated, pre-industrial island cultures that lacked modern technology found soldiers and supplies arriving in large numbers, often by airdrop. The soldiers would trade with the islanders. After the war, the soldiers departed. Cargo cults arose in their wake, attempting to imitate the behaviors of the soldiers in the belief that this would cause the soldiers and their cargo to return. Some cult behaviors involved mimicking the day-to-day activities and dress styles of soldiers, such as performing parade ground drills with wooden or salvaged rifles.

Causes, beliefs, and practices  

Cargo cults are marked by a number of common characteristics, including a "myth-dream" that is a synthesis of indigenous and foreign elements, the expectation of help from the ancestors, charismatic leaders, and lastly, belief in the appearance of an abundance of goods. The indigenous societies of Melanesia were typically characterized by a "big man" political system in which individuals gained prestige through gift exchanges. The more wealth a man could distribute, the more people were in his debt, and the greater his renown. Those who were unable to reciprocate were identified as "rubbish men." Faced, through colonialism, with foreigners who had a seemingly unending supply of goods for exchange, indigenous Melanesians experienced "value dominance": they were dominated by others in terms of their own (not the foreign) value system, and exchange with foreigners left them feeling like rubbish men.

Since the modern manufacturing process is unknown to them, members, leaders, and prophets of the cults maintain that the manufactured goods of the non-native culture have been created by spiritual means, such as through their deities and ancestors. These goods are intended for the local indigenous people, but the foreigners have unfairly gained control of these objects through malice or mistake. Thus, a characteristic feature of cargo cults is the belief that spiritual agents will, at some future time, give much valuable cargo and desirable manufactured products to the cult members.

Symbols associated with Christianity and modern Western society tend to be incorporated into cargo cult rituals; for example, the use of cross-shaped grave markers. Notable examples of cargo cult activity include the setting up of mock airstrips, airports, airplanes, offices, and dining rooms, as well as the fetishization and attempted construction of Western goods, such as radios made of coconuts and straw. Believers may stage "drills" and "marches" with sticks for rifles, and paint military-style and national insignia on their bodies to make themselves look like soldiers, thereby treating the activities of Western military personnel as rituals to be performed for the purpose of attracting the cargo.

Examples

First occurrences

Discussions of cargo cults usually begin with a series of movements that occurred in the late nineteenth and early twentieth centuries. The earliest recorded cargo cult was the Tuka movement that began in Fiji in 1885, at the height of the colonial era's plantation-style economy. The movement began with a promised return to a golden age of ancestral potency. Minor alterations to priestly practices were undertaken to update them and attempt to recover some kind of ancestral efficacy. Colonial authorities saw the movement's leader as a troublemaker and exiled him, although their attempts to keep him from returning proved fruitless.

Cargo cults occurred periodically in many parts of the island of New Guinea, including the Taro Cult in northern Papua New Guinea and the Vailala Madness that arose from 1919 to 1922. The latter was documented by Francis Edgar Williams, one of the first anthropologists to conduct fieldwork in Papua New Guinea. Less dramatic cargo cults have appeared in western New Guinea as well, including in the Asmat and Dani areas.

Pacific cults of World War II

The most widely known period of cargo cult activity occurred among the Melanesian islanders in the years during and after World War II. A small population of indigenous peoples observed, often directly in front of their dwellings, the largest war ever fought by technologically advanced nations. The Japanese arrived first with a great deal of supplies; the Allied forces followed.

The vast amounts of military equipment and supplies that both sides airdropped (or airlifted to airstrips) to troops on these islands meant drastic changes to the lifestyle of the islanders, many of whom had never seen outsiders before. Manufactured clothing, medicine, canned food, tents, weapons, and other goods arrived in great quantities for the soldiers, who often shared some of it with the islanders who were their guides and hosts. This was true of the Japanese Army as well, at least initially, before relations deteriorated in most regions.

The John Frum cult, one of the most widely reported and longest-lived, formed on the island of Tanna, Vanuatu. The movement started before the war and became a cargo cult afterwards. Cult members worshiped certain unspecified Americans, with names such as "John Frum" or "Tom Navy," who they claimed had brought cargo to their island during World War II and whom they identified as the spiritual entities who would provide cargo to them in the future.

Postwar developments

With the end of the war, the military abandoned the airbases and stopped dropping cargo. In response, charismatic individuals developed cults among remote Melanesian populations, promising to bestow on their followers deliveries of food, arms, Jeeps, and the like. The cult leaders explained that the cargo would be gifts from their own ancestors, or other sources, as had occurred with the outsider armies. In attempts to get cargo to fall by parachute or land in planes or ships again, islanders imitated the practices they had seen military personnel use. Cult behaviors usually involved mimicking the day-to-day activities and dress styles of US soldiers, such as performing parade ground drills with wooden or salvaged rifles. The islanders carved headphones from wood and wore them while sitting in fabricated control towers. They waved landing signals while standing on the runways. They lit signal fires and torches to light up runways and lighthouses.

In a form of sympathetic magic, many built life-size replicas of airplanes out of straw and cut new military-style landing strips out of the jungle, hoping to attract more airplanes. The cult members thought that the foreigners had some special connection to the deities and ancestors of the natives, who were the only beings powerful enough to produce such riches.

Cargo cults were typically created by individual leaders, or big men in the Melanesian culture, and it is not at all clear whether these leaders were sincere or were simply running scams on gullible populations. The leaders typically held cult rituals well away from established towns and colonial authorities, thus making reliable information about these practices very difficult to acquire.

Current status

Some cargo cults, such as the John Frum movement on Tanna, are still active.

Theoretical explanations

Anthropologist Anthony F. C. Wallace conceptualized the "Tuka movement" as a revitalization movement. Peter Worsley's analysis of cargo cults placed the emphasis on the economic and political causes of these popular movements. He viewed them as "proto-national" movements by indigenous peoples seeking to resist colonial interventions. He observed a general trend away from millenarianism towards secular political organization through political parties and cooperatives. Theodore Schwartz was the first to emphasize that both Melanesians and Europeans place great value on the demonstration of wealth. "The two cultures met on the common ground of materialistic competitive striving for prestige through entrepreneurial achievement of wealth." Melanesians felt "relative deprivation" in their standard of living, and thus came to focus on cargo as an essential expression of their personhood and agency.

Peter Lawrence was able to add greater historical depth to the study of cargo cults, and observed the striking continuity in the indigenous value systems from pre-cult times to the time of his study. Kenelm Burridge, in contrast, placed more emphasis on cultural change, and on the use of memories of myths to comprehend new realities, including the "secret" of European material possessions. His emphasis on cultural change follows from Worsley's argument on the effects of capitalism; Burridge points out these movements were more common in coastal areas which faced greater intrusions from European colonizers.

Cargo cults often develop during a combination of crises. Under conditions of social stress, such a movement may form under the leadership of a charismatic figure. This leader may have a "vision" (or "myth-dream") of the future, often linked to an ancestral efficacy ("mana") thought to be recoverable by a return to traditional morality. This leader may characterize the present state as a dismantling of the old social order, meaning that social hierarchy and ego boundaries have been broken down.

Contact with colonizing groups brought about a considerable transformation in the way indigenous peoples of Melanesia have thought about other societies. Early theories of cargo cults began from the assumption that practitioners simply failed to understand technology, colonization, or capitalist reform; in this model, cargo cults are a misunderstanding of the systems involved in resource distribution, and an attempt to acquire such goods in the wake of interrupted trade. However, many of these practitioners actually focus on the importance of sustaining and creating new social relationships, with material relations being secondary.

Since the late twentieth century, alternative theories have arisen. For example, some scholars, such as Kaplan and Lindstrom, focus on Europeans' characterization of these movements as a fascination with manufactured goods and what such a focus says about consumerism. Others point to the need to see each movement as reflecting a particularized historical context, even eschewing the term "cargo cult" for them unless there is an attempt to elicit an exchange relationship from Europeans.

The term was first used in print in 1945 by Norris Mervyn Bird, repeating a derogatory description used by planters and businessmen in the Australian Territory of Papua. The term was later adopted by anthropologists and applied retroactively to movements of a much earlier era. In 1964, Peter Lawrence described the term as follows: "Cargo ritual was any religious activity designed to produce goods in this way and assumed to have been taught [to] the leader [of the cargo cult] by the deity."

In recent decades, anthropology has distanced itself from the term "cargo cult," which is now seen as having been reductively applied to many complex and disparate social and religious movements that arose from the stress and trauma of colonialism and that sought goals far more varied and amorphous than material cargo, such as self-determination.

Cargoism: The discourse on cargo cults

More recent work has debated the suitability of the term "cargo cult," arguing that it does not refer to an identifiable empirical reality and that the emphasis on "cargo" says more about Western ideological bias than about the movements concerned. Nancy McDowell argues that the focus on cargo cults isolates the phenomenon from the wider social and cultural field (such as politics and economics) that gives it meaning. She states that people experience change as dramatic and complete, rather than as gradual and evolutionary, and that this sense of a dramatic break is expressed through cargo cult ideology.

Lamont Lindstrom takes this analysis one step further through his examination of "cargoism", the discourse of the West about cargo cults. His analysis is concerned with Western fascination with the phenomenon in both academic and popular writing. In his opinion, the name "cargo cult" is deeply problematic because of its pejorative connotation of backwardness, since it imputes a goal (cargo) obtained through the wrong means (cult); the actual goal is not so much obtaining material goods as creating and renewing social relationships under threat. Martha Kaplan thus argues in favor of erasing the term altogether.

Other uses

Russian political analyst Yekaterina Shulman coined the term "reverse cargo-cult" to describe the Russian view that Western institutions are just as hypocritical, merely more skilled at hiding it. According to Shulman, "Cargo-cult is a belief that mock airplanes made of manure and straw-bale may summon the real airplanes that bring canned beef. Reverse cargo-cult is used by the political elites in countries lagging behind, who proclaim that, in the developed world, airplanes are also made of manure and straw-bale, and there is also a shortage of canned beef."

Reality

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Reality

Reality is the sum or aggregate of all that is real or existent within a system, as opposed to that which is only imaginary. The term is also used to refer to the ontological status of things, indicating their existence. In physical terms, reality is the totality of a system, known and unknown. Philosophical questions about the nature of reality or existence or being are considered under the rubric of ontology, which is a major branch of metaphysics in the Western philosophical tradition. Ontological questions also feature in diverse branches of philosophy, including the philosophy of science, philosophy of religion, philosophy of mathematics, and philosophical logic. These include questions about whether only physical objects are real (i.e., Physicalism), whether reality is fundamentally immaterial (e.g., Idealism), whether hypothetical unobservable entities posited by scientific theories exist, whether God exists, whether numbers and other abstract objects exist, and whether possible worlds exist.

Related concepts

World views and theories

A common colloquial usage would have reality mean "perceptions, beliefs, and attitudes toward reality", as in "My reality is not your reality." This is often used just as a colloquialism indicating that the parties to a conversation agree, or should agree, not to quibble over deeply different conceptions of what is real. For example, in a religious discussion between friends, one might say (attempting humor), "You might disagree, but in my reality, everyone goes to heaven."

Reality can be defined in a way that links it to worldviews or parts of them (conceptual frameworks): Reality is the totality of all things, structures (actual and conceptual), events (past and present) and phenomena, whether observable or not. It is what a world view (whether it be based on individual or shared human experience) ultimately attempts to describe or map.

Certain ideas from physics, philosophy, sociology, literary criticism, and other fields shape various theories of reality. One such belief is that there simply and literally is no reality beyond the perceptions or beliefs we each have about reality. Such attitudes are summarized in the popular statement, "Perception is reality" or "Life is how you perceive reality" or "reality is what you can get away with" (Robert Anton Wilson), and they indicate anti-realism – that is, the view that there is no objective reality, whether acknowledged explicitly or not.

Many of the concepts of science and philosophy are often defined culturally and socially. This idea was elaborated by Thomas Kuhn in his book The Structure of Scientific Revolutions (1962). The Social Construction of Reality, a book about the sociology of knowledge written by Peter L. Berger and Thomas Luckmann, was published in 1966. It explained how knowledge is acquired and used for the comprehension of reality. Among all realities, the reality of everyday life is the most important one, since our consciousness requires us to be completely aware and attentive to the experience of everyday life.

Western philosophy

Philosophy addresses two different aspects of the topic of reality: the nature of reality itself, and the relationship between the mind (as well as language and culture) and reality.

On the one hand, ontology is the study of being, and the central topic of the field is couched, variously, in terms of being, existence, "what is", and reality. The task in ontology is to describe the most general categories of reality and how they are interrelated. If a philosopher wanted to proffer a positive definition of the concept "reality", it would be done under this heading. Some philosophers draw a distinction between reality and existence. In fact, many analytic philosophers today tend to avoid the terms "real" and "reality" in discussing ontological issues. But for those who would treat "is real" the same way they treat "exists", one of the leading questions of analytic philosophy has been whether existence (or reality) is a property of objects. It has been widely held by analytic philosophers that it is not a property at all, though this view has lost some ground in recent decades.

On the other hand, particularly in discussions of objectivity that have feet in both metaphysics and epistemology, philosophical discussions of "reality" often concern the ways in which reality is, or is not, in some way dependent upon (or, to use fashionable jargon, "constructed" out of) mental and cultural factors such as perceptions, beliefs, and other mental states, as well as cultural artifacts, such as religions and political movements, on up to the vague notion of a common cultural world view, or Weltanschauung.

The view that there is a reality independent of any beliefs, perceptions, etc., is called realism. More specifically, philosophers are given to speaking about "realism about" this and that, such as realism about universals or realism about the external world. Generally, where one can identify any class of object whose existence or essential characteristics are said not to depend on perceptions, beliefs, language, or any other human artifact, one can speak of "realism about" that object.

One can also speak of anti-realism about the same objects. Anti-realism is the latest in a long series of terms for views opposed to realism. Perhaps the first was idealism, so called because reality was said to be in the mind, or a product of our ideas. Berkeleyan idealism is the view, propounded by the Irish empiricist George Berkeley, that the objects of perception are actually ideas in the mind. On this view, one might be tempted to say that reality is a "mental construct"; this is not quite accurate, however, since, in Berkeley's view, perceptual ideas are created and coordinated by God. By the 20th century, views similar to Berkeley's were called phenomenalism. Phenomenalism differs from Berkeleyan idealism primarily in that Berkeley believed that minds, or souls, are not merely ideas nor made up of ideas, whereas varieties of phenomenalism, such as that advocated by Russell, tended to go further, saying that the mind itself is merely a collection of perceptions, memories, and the like, and that there is no mind or soul over and above such mental events. Finally, anti-realism became a fashionable term for any view which held that the existence of some object depends upon the mind or cultural artifacts. The view that the so-called external world is really merely a social or cultural artifact, called social constructionism, is one variety of anti-realism. Cultural relativism is the view that social issues such as morality are not absolute, but are at least partially a cultural artifact.

A correspondence theory of knowledge about what exists claims that "true" knowledge of reality represents accurate correspondence of statements about and images of reality with the actual reality that the statements or images are attempting to represent. For example, the scientific method can verify that a statement is true based on the observable evidence that a thing exists. Many humans can point to the Rocky Mountains and say that this mountain range exists, and continues to exist even if no one is observing it or making statements about it.

Being

The nature of being is a perennial topic in metaphysics. For instance, Parmenides taught that reality was a single unchanging Being, whereas Heraclitus wrote that all things flow. The 20th-century philosopher Heidegger thought previous philosophers had lost sight of the question of Being (qua Being) in favour of the questions of beings (existing things), so that a return to the Parmenidean approach was needed. An ontological catalogue is an attempt to list the fundamental constituents of reality. The question of whether or not existence is a predicate has been discussed since the Early Modern period, not least in relation to the ontological argument for the existence of God. Existence, that something is, has been contrasted with essence, the question of what something is. Since existence without essence seems blank, it has been associated with nothingness by philosophers such as Hegel. Nihilism represents an extremely negative view of being; the absolute, a positive one.

Perception

The question of direct or "naïve" realism, as opposed to indirect or "representational" realism, arises in the philosophy of perception and of mind out of the debate over the nature of conscious experience: the epistemological question of whether the world we see around us is the real world itself or merely an internal perceptual copy of that world generated by neural processes in our brain. Naïve realism is known as direct realism when developed to counter indirect or representative realism, also known as epistemological dualism, the philosophical position that our conscious experience is not of the real world itself but of an internal representation, a miniature virtual-reality replica of the world.

Timothy Leary coined the influential term Reality Tunnel, by which he meant a kind of representative realism. The theory states that, through a subconscious set of mental filters formed from their beliefs and experiences, every individual interprets the same world differently, hence "truth is in the eye of the beholder". His ideas influenced the work of his friend Robert Anton Wilson.

Abstract objects and mathematics

The status of abstract entities, particularly numbers, is a topic of discussion in mathematics.

In the philosophy of mathematics, the best known form of realism about numbers is Platonic realism, which grants them abstract, immaterial existence. Other forms of realism identify mathematics with the concrete physical universe.

Anti-realist stances include formalism and fictionalism.

Some approaches are selectively realistic about some mathematical objects but not others. Finitism rejects infinite quantities. Ultra-finitism accepts finite quantities up to a certain amount. Constructivism and intuitionism are realistic about objects that can be explicitly constructed, but reject the use of the principle of the excluded middle to prove existence by reductio ad absurdum.
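
A standard textbook illustration of the kind of argument constructivists reject (given here for concreteness, not drawn from this article) is the nonconstructive proof that there exist irrational numbers $a$ and $b$ with $a^b$ rational:

    Let $x = \sqrt{2}^{\sqrt{2}}$. By the law of the excluded middle, $x$ is either rational or irrational.
    Case 1: $x$ is rational. Take $a = b = \sqrt{2}$.
    Case 2: $x$ is irrational. Take $a = x$ and $b = \sqrt{2}$; then $a^b = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{2} = 2$, which is rational.

Either way such a pair exists, yet the proof never determines which case holds; a constructivist declines to accept it precisely because it exhibits no particular pair.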

The traditional debate has focused on whether an abstract (immaterial, intelligible) realm of numbers has existed in addition to the physical (sensible, concrete) world. A recent development is the mathematical universe hypothesis, the theory that only a mathematical world exists, with the finite, physical world being an illusion within it.

An extreme form of realism about mathematics is the mathematical multiverse hypothesis advanced by Max Tegmark. Tegmark's sole postulate is: All structures that exist mathematically also exist physically. That is, "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world". The hypothesis suggests that worlds corresponding to different sets of initial conditions, physical constants, or altogether different equations should be considered real. The theory can be considered a form of Platonism in that it posits the existence of mathematical entities, but can also be considered a mathematical monism in that it denies that anything exists except mathematical objects.

Properties

The problem of universals is an ancient problem in metaphysics about whether universals exist. Universals are general or abstract qualities, characteristics, properties, kinds or relations, such as being male/female, solid/liquid/gas or a certain colour, that can be predicated of individuals or particulars or that individuals or particulars can be regarded as sharing or participating in. For example, Scott, Pat, and Chris have in common the universal quality of being human or humanity.

The realist school claims that universals are real – they exist and are distinct from the particulars that instantiate them. There are various forms of realism. Two major forms are Platonic realism and Aristotelian realism. Platonic realism is the view that universals are real entities and they exist independent of particulars. Aristotelian realism, on the other hand, is the view that universals are real entities, but their existence is dependent on the particulars that exemplify them.

Nominalism and conceptualism are the main forms of anti-realism about universals.

Time and space

A traditional realist position in ontology is that time and space have existence apart from the human mind. Idealists deny or doubt the existence of objects independent of the mind. Some anti-realists whose ontological position is that objects outside the mind do exist, nevertheless doubt the independent existence of time and space.

Kant, in the Critique of Pure Reason, described time as an a priori notion that, together with other a priori notions such as space, allows us to comprehend sense experience. Kant denies that either space or time is a substance, an entity in itself, or learned by experience; he holds rather that both are elements of a systematic framework we use to structure our experience. Spatial measurements are used to quantify how far apart objects are, and temporal measurements are used to quantitatively compare the interval between (or duration of) events. Although space and time are held to be transcendentally ideal in this sense, they are also empirically real, i.e. not mere illusions.

Idealist writers such as J. M. E. McTaggart in The Unreality of Time have argued that time is an illusion.

As well as differing about the reality of time as a whole, metaphysical theories of time can differ in their ascriptions of reality to the past, present and future separately.

  • Presentism holds that the past and future are unreal, and only an ever-changing present is real.
  • The block universe theory, also known as Eternalism, holds that past, present and future are all real, but the passage of time is an illusion. It is often said to have a scientific basis in relativity.
  • The growing block universe theory holds that past and present are real, but the future is not.

Time, and the related concepts of process and evolution, are central to the system-building metaphysics of A. N. Whitehead and Charles Hartshorne.

Possible worlds

The term "possible world" goes back to Leibniz's theory of possible worlds, used to analyse necessity, possibility, and similar modal notions. Modal realism is the view, notably propounded by David Kellogg Lewis, that all possible worlds are as real as the actual world. In short: the actual world is regarded as merely one among an infinite set of logically possible worlds, some "nearer" to the actual world and some more remote. Other theorists may use the Possible World framework to express and explore problems without committing to it ontologically. Possible world theory is related to alethic logic: a proposition is necessary if it is true in all possible worlds, and possible if it is true in at least one. The many worlds interpretation of quantum mechanics is a similar idea in science.

Theories of everything (TOE) and philosophy

The philosophical implications of a physical TOE are frequently debated. For example, if philosophical physicalism is true, a physical TOE will coincide with a philosophical theory of everything.

The "system building" style of metaphysics attempts to answer all the important questions in a coherent way, providing a complete picture of the world. Plato and Aristotle could be said to be early examples of comprehensive systems. In the early modern period (17th and 18th centuries), the system-building scope of philosophy is often linked to the rationalist method of philosophy, that is the technique of deducing the nature of the world by pure a priori reason. Examples from the early modern period include the Leibniz's Monadology, Descartes's Dualism, Spinoza's Monism. Hegel's Absolute idealism and Whitehead's Process philosophy were later systems.

Other philosophers do not believe philosophy's techniques can aim so high. Some scientists think a more mathematical approach than philosophy is needed for a TOE; for instance, Stephen Hawking wrote in A Brief History of Time that even if we had a TOE, it would necessarily be a set of equations. He wrote, "What is it that breathes fire into the equations and makes a universe for them to describe?"

Phenomenology

On a much broader and more subjective level, private experiences, curiosity, inquiry, and the selectivity involved in personal interpretation of events shape reality as seen by one and only one person; this reality is therefore called phenomenological. While this form of reality might be common to others as well, it could at times also be so unique to oneself as to never be experienced or agreed upon by anyone else. Much of the kind of experience deemed spiritual occurs on this level of reality.

Phenomenology is a philosophical method developed in the early years of the twentieth century by Edmund Husserl and a circle of followers at the universities of Göttingen and Munich in Germany. Subsequently, phenomenological themes were taken up by philosophers in France, the United States, and elsewhere, often in contexts far removed from Husserl's work.

The word phenomenology comes from the Greek phainómenon, meaning "that which appears", and lógos, meaning "study". In Husserl's conception, phenomenology is primarily concerned with making the structures of consciousness, and the phenomena which appear in acts of consciousness, objects of systematic reflection and analysis. Such reflection was to take place from a highly modified "first person" viewpoint, studying phenomena not as they appear to "my" consciousness, but to any consciousness whatsoever. Husserl believed that phenomenology could thus provide a firm basis for all human knowledge, including scientific knowledge, and could establish philosophy as a "rigorous science".

Husserl's conception of phenomenology has also been criticised and developed by his student and assistant Martin Heidegger, by existentialists like Maurice Merleau-Ponty and Jean-Paul Sartre, and by other philosophers, such as Paul Ricoeur, Emmanuel Levinas, and Dietrich von Hildebrand.

Skeptical hypotheses

A brain in a vat that believes it is walking

Skeptical hypotheses in philosophy suggest that reality is very different from what we think it is, or at least that we cannot prove that it is not. One example is the "brain in a vat" scenario pictured above, in which experiences of an external world are fed to a disembodied brain.

Jain philosophy

Jain philosophy postulates that seven tattva (truths or fundamental principles) constitute reality. These seven tattva are:

  1. Jīva – The soul which is characterized by consciousness.
  2. Ajīva – The non-soul.
  3. Asrava – Influx of karma.
  4. Bandha – The bondage of karma.
  5. Samvara – Obstruction of the inflow of karmic matter into the soul.
  6. Nirjara – Shedding of karmas.
  7. Moksha – Liberation or Salvation, i.e. the complete annihilation of all karmic matter (bound with any particular soul).

Physical sciences

Scientific realism

Scientific realism is, at the most general level, the view that the world (the universe) described by science (perhaps ideal science) is the real world, as it is, independent of what we might take it to be. Within philosophy of science, it is often framed as an answer to the question "how is the success of science to be explained?" The debate over what the success of science involves centers primarily on the status of entities that are not directly observable but are discussed by scientific theories. Generally, scientific realists state that one can make reliable claims about these entities (viz., that they have the same ontological status as directly observable entities), as opposed to instrumentalism. On this view, the most widely used and studied scientific theories today state more or less the truth.

Realism and locality in physics

Realism in the sense used by physicists does not equate to realism in metaphysics. The latter is the claim that the world is mind-independent: that even if the results of a measurement do not pre-exist the act of measurement, that does not require that they are the creation of the observer. Furthermore, a mind-independent property does not have to be the value of some physical variable such as position or momentum. A property can be dispositional (or potential), i.e. it can be a tendency: in the way that glass objects tend to break, or are disposed to break, even if they do not actually break. Likewise, the mind-independent properties of quantum systems could consist of a tendency to respond to particular measurements with particular values with ascertainable probability. Such an ontology would be metaphysically realistic, without being realistic in the physicist's sense of "local realism" (which would require that a single value be produced with certainty).

A closely related term is counterfactual definiteness (CFD), used to refer to the claim that one can meaningfully speak of the definiteness of results of measurements that have not been performed (i.e. the ability to assume the existence of objects, and properties of objects, even when they have not been measured).

Local realism is a significant feature of classical mechanics, of general relativity, and of electrodynamics, but quantum mechanics has shown that quantum entanglement is possible. Entanglement was rejected by Einstein, who proposed the EPR paradox to challenge it, but the conflict with local realism was subsequently quantified by Bell's inequalities. If Bell's inequalities are violated, either local realism or counterfactual definiteness must be incorrect; but some physicists dispute that experiments have demonstrated Bell violations, on the grounds that the sub-class of inhomogeneous Bell inequalities has not been tested, or due to experimental limitations in the tests. Different interpretations of quantum mechanics violate different parts of local realism and/or counterfactual definiteness.
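
The most commonly tested member of this family is the CHSH inequality, quoted here in its standard textbook form rather than from this article. For measurement settings $a, a'$ on one system and $b, b'$ on another, with $E$ denoting the correlation of paired outcomes, local realism requires

    $S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,$

whereas quantum mechanics predicts $|S|$ up to $2\sqrt{2}$ for suitably entangled states. Measuring $|S| > 2$ is what a "violation of Bell's inequalities" means experimentally.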

Role of the observer in quantum mechanics

The quantum mind–body problem refers to the philosophical discussions of the mind–body problem in the context of quantum mechanics. Since quantum mechanics involves quantum superpositions, which are not perceived by observers, some interpretations of quantum mechanics place conscious observers in a special position.

The founders of quantum mechanics debated the role of the observer, and of them, Wolfgang Pauli and Werner Heisenberg believed that it was the observer that produced collapse. This point of view, which was never fully endorsed by Niels Bohr, was denounced as mystical and anti-scientific by Albert Einstein. Pauli accepted the term, and described quantum mechanics as lucid mysticism.

Heisenberg and Bohr always described quantum mechanics in logical positivist terms. Bohr also took an active interest in the philosophical implications of quantum theories, such as his complementarity. He believed quantum theory offers a complete description of nature, albeit one that is simply ill-suited for everyday experiences, which are better described by classical mechanics and probability. Bohr never specified a demarcation line above which objects cease to be quantum and become classical. He believed that it was not a question of physics, but one of philosophy.

Eugene Wigner reformulated the "Schrödinger's cat" thought experiment as "Wigner's friend" and proposed that the consciousness of an observer is the demarcation line which precipitates collapse of the wave function, independent of any realist interpretation. Commonly known as "consciousness causes collapse", this interpretation of quantum mechanics states that observation by a conscious observer is what makes the wave function collapse.

Multiverse

The multiverse is the hypothetical set of multiple possible universes (including the historical universe we consistently experience) that together comprise everything that exists: the entirety of space, time, matter, and energy as well as the physical laws and constants that describe them. The term was coined in 1895 by the American philosopher and psychologist William James. In the many-worlds interpretation (MWI), one of the mainstream interpretations of quantum mechanics, there are an infinite number of universes and every possible quantum outcome occurs in at least one universe.

The structure of the multiverse, the nature of each universe within it and the relationship between the various constituent universes, depend on the specific multiverse hypothesis considered. Multiverses have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology and fiction, particularly in science fiction and fantasy. In these contexts, parallel universes are also called "alternative universes", "quantum universes", "interpenetrating dimensions", "parallel dimensions", "parallel worlds", "alternative realities", "alternative timelines", and "dimensional planes", among others.

Scientific theories of everything

A theory of everything (TOE) is a putative theory of theoretical physics that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle. The theory of everything is also called the final theory. Many candidate theories of everything have been proposed by theoretical physicists during the twentieth century, but none have been confirmed experimentally. The primary problem in producing a TOE is that general relativity and quantum mechanics are hard to unify. This is one of the unsolved problems in physics.

Initially, the term "theory of everything" was used with an ironic connotation to refer to various overgeneralized theories. For example, a great-grandfather of Ijon Tichy, a character from a cycle of Stanisław Lem's science fiction stories of the 1960s, was known to work on the "General Theory of Everything". Physicist John Ellis claims to have introduced the term into the technical literature in an article in Nature in 1986. Over time, the term stuck in popularizations of quantum physics to describe a theory that would unify or explain through a single model the theories of all fundamental interactions and of all particles of nature: general relativity for gravitation, and the standard model of elementary particle physics – which includes quantum mechanics – for electromagnetism, the two nuclear interactions, and the known elementary particles.

Current candidates for a theory of everything include string theory, M theory, and loop quantum gravity.

Technology

Virtual reality and cyberspace

Virtual reality (VR) is a computer-simulated environment that can simulate physical presence in places in the real world, as well as in imaginary worlds.

Reality-virtuality continuum.

The virtuality continuum is a continuous scale ranging between the completely virtual, a virtuality, and the completely real: reality. The reality–virtuality continuum therefore encompasses all possible variations and compositions of real and virtual objects. It has been described as a concept in new media and computer science, but in fact it could be considered a matter of anthropology. The concept was first introduced by Paul Milgram.

The area between the two extremes, where both the real and the virtual are mixed, is the so-called mixed reality. This in turn is said to consist of both augmented reality, where the virtual augments the real, and augmented virtuality, where the real augments the virtual. Cyberspace, the world's computer systems considered as an interconnected whole, can be thought of as a virtual reality; for instance, it is portrayed as such in the cyberpunk fiction of William Gibson and others. Second Life and MMORPGs such as World of Warcraft are examples of artificial environments or virtual worlds (falling some way short of full virtual reality) in cyberspace.

"RL" in internet culture

On the Internet, "real life" refers to life in the real world. It generally references life or consensus reality, in contrast to an environment seen as fiction or fantasy, such as virtual reality, lifelike experience, dreams, novels, or movies. Online, the acronym "IRL" stands for "in real life", with the meaning "not on the Internet". Sociologists engaged in the study of the Internet have determined that someday, a distinction between online and real-life worlds may seem "quaint", noting that certain types of online activity, such as sexual intrigues, have already made a full transition to complete legitimacy and "reality". The abbreviation "RL" stands for "real life". For example, one can speak of "meeting in RL" someone whom one has met in a chat or on an Internet forum. It may also be used to express an inability to use the Internet for a time due to "RL problems"

Confirmation bias

From Wikipedia, the free encyclopedia

Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values. People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs. Confirmation bias cannot be eliminated entirely, but it can be managed, for example, by education and training in critical thinking skills.

Confirmation bias is a broad construct covering a number of explanations. Biased search for information, biased interpretation of this information, and biased memory recall, have been invoked to explain four specific effects: 1) attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence); 2) belief perseverance (when beliefs persist after the evidence for them is shown to be false); 3) the irrational primacy effect (a greater reliance on information encountered early in a series); and 4) illusory correlation (when people falsely perceive an association between two events or situations).

A series of psychological experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives (myside bias, an alternative name for confirmation bias). In general, current explanations for the observed biases reveal the limited human capacity to process the complete set of information available, leading to a failure to investigate in a neutral, scientific way.

Flawed decisions due to confirmation bias have been found in political, organizational, financial and scientific contexts. These biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. For example, confirmation bias produces systematic errors in scientific research based on inductive reasoning (the gradual accumulation of supportive evidence). Similarly, a police detective may identify a suspect early in an investigation, but then may only seek confirming rather than disconfirming evidence. A medical practitioner may prematurely focus on a particular disorder early in a diagnostic session, and then seek only confirming evidence. In social media, confirmation bias is amplified by the use of filter bubbles, or "algorithmic editing", which display to individuals only information they are likely to agree with, while excluding opposing views.

Definition and context

Confirmation bias, a phrase coined by English psychologist Peter Wason, is the tendency of people to favor information that confirms or strengthens their beliefs or values, and is difficult to dislodge once affirmed. Confirmation bias is an example of a cognitive bias.

Confirmation bias (or confirmatory bias) has also been termed myside bias. "Congeniality bias" has also been used.

Confirmation biases are effects in information processing. They differ from what is sometimes called the behavioral confirmation effect, commonly known as self-fulfilling prophecy, in which a person's expectations influence their own behavior, bringing about the expected result.

Some psychologists restrict the term "confirmation bias" to selective collection of evidence that supports what one already believes while ignoring or rejecting evidence that supports a different conclusion. Others apply the term more broadly to the tendency to preserve one's existing beliefs when searching for evidence, interpreting it, or recalling it from memory.

Confirmation bias is a result of automatic, unintentional strategies rather than deliberate deception. Confirmation bias cannot be avoided or eliminated entirely, but only managed by improving education and critical thinking skills.

Confirmation bias is a broad construct that has a number of possible explanations, namely: hypothesis-testing by falsification, hypothesis testing by positive test strategy, and information processing explanations.

Types of confirmation bias

Biased search for information

Confirmation bias has been described as an internal "yes man", echoing back a person's beliefs like Charles Dickens' character Uriah Heep

Experiments have found repeatedly that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with their current hypothesis. Rather than searching through all the relevant evidence, they phrase questions to receive an affirmative answer that supports their theory. They look for the consequences that they would expect if their hypothesis were true, rather than what would happen if it were false. For example, someone using yes/no questions to find a number they suspect to be the number 3 might ask, "Is it an odd number?" People prefer this type of question, called a "positive test", even when a negative test such as "Is it an even number?" would yield exactly the same information. However, this does not mean that people seek tests that guarantee a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic.
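
A minimal sketch of why the two questions are equally informative, using hypothetical code rather than any study's actual materials: both questions split the candidate numbers into the same two groups, so either answer eliminates exactly the same possibilities.

    # Hypothetical illustration: a positive test and its negative
    # counterpart induce the same partition of the hypothesis space.
    hypotheses = set(range(1, 11))          # candidate numbers 1..10

    def partition(question):
        """Split the hypotheses by a yes/no question."""
        yes = {h for h in hypotheses if question(h)}
        return yes, hypotheses - yes

    is_odd = lambda n: n % 2 == 1           # "Is it an odd number?"
    is_even = lambda n: n % 2 == 0          # "Is it an even number?"

    yes_odd, no_odd = partition(is_odd)
    yes_even, no_even = partition(is_even)

    # Identical split; only the "yes"/"no" labels are swapped.
    assert yes_odd == no_even and no_odd == yes_even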

The preference for positive tests in itself is not a bias, since positive tests can be highly informative. However, in combination with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true. In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior. Thus any search for evidence in favor of a hypothesis is likely to succeed. One illustration of this is the way the phrasing of a question can significantly change the answer. For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"

Even a small change in a question's wording can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case. Participants read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take them away for long periods of time. When asked, "Which parent should have custody of the child?" the majority of participants chose Parent B, looking mainly for positive attributes. However, when asked, "Which parent should be denied custody of the child?" they looked for negative attributes and the majority answered that Parent B should be denied custody, implying that Parent A should have custody.

Similar studies have demonstrated how people engage in a biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests. In an initial experiment, participants rated another person on the introversion–extroversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them. A later version of the experiment gave the participants less presumptive questions to choose from, such as, "Do you shy away from social interactions?" Participants preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.

Personality traits influence and interact with biased search processes. Individuals vary in their abilities to defend their attitudes from external attacks in relation to selective exposure. Selective exposure occurs when individuals search for information that is consistent, rather than inconsistent, with their personal beliefs. An experiment examined the extent to which individuals could refute arguments that contradicted their personal beliefs. People with high confidence levels more readily seek out information that contradicts their personal position in order to form an argument. Individuals with low confidence levels do not seek out contradictory information and prefer information that supports their personal position. People generate and evaluate evidence in arguments that are biased towards their own beliefs and opinions. Heightened confidence levels decrease preference for information that supports individuals' personal beliefs.

Another experiment gave participants a complex rule-discovery task that involved moving objects simulated by a computer. Objects on the computer screen followed specific laws, which the participants had to figure out. To test their hypotheses, participants could "fire" objects across the screen. Despite making many attempts over a ten-hour session, none of the participants figured out the rules of the system. They typically attempted to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing objective evidence that refuted their working hypotheses, they frequently continued doing the same tests. Some of the participants were taught proper hypothesis-testing, but these instructions had almost no effect.

Biased interpretation of information

Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.

Michael Shermer

Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.

A team at Stanford University conducted an experiment involving participants who felt strongly about capital punishment, with half in favor and half against it. Each participant read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the participants were asked whether their opinions had changed. Then, they read a more detailed account of each study's procedure and had to rate whether the research was well-conducted and convincing. In fact, the studies were fictional. Half the participants were told that one kind of study supported the deterrent effect and the other undermined it, while for other participants the conclusions were swapped.

The participants, whether supporters or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Participants described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways. Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, "The research didn't cover a long enough period of time," while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented." The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as "disconfirmation bias", has been supported by other experiments.

Another study of biased interpretation occurred during the 2004 U.S. presidential election and involved participants who reported having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not each individual's statements were inconsistent. There were strong differences in these evaluations, with participants much more likely to interpret statements from the candidate they opposed as contradictory.

An MRI scanner allowed researchers to examine how the human brain deals with dissonant information

In this experiment, the participants made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As participants evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the participants were actively reducing the cognitive dissonance induced by reading about their favored candidate's irrational or hypocritical behavior.

Biases in belief interpretation are persistent, regardless of intelligence level. Participants in an experiment took the SAT test (a college admissions test used in the United States) to assess their intelligence levels. They then read information regarding safety concerns for vehicles, and the experimenters manipulated the national origin of the car. American participants gave their opinion on whether the car should be banned on a six-point scale, where one indicated "definitely yes" and six indicated "definitely no". Participants first evaluated whether they would allow a dangerous German car on American streets and a dangerous American car on German streets. Participants believed that the dangerous German car on American streets should be banned more quickly than the dangerous American car on German streets. There was no difference across intelligence levels in the rate at which participants would ban a car.

Biased interpretation is not restricted to emotionally significant topics. In another experiment, participants were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.

Biased memory recall of information

People may remember evidence selectively to reinforce their expectations, even if they gather and interpret evidence in a neutral manner. This effect is called "selective recall", "confirmatory memory", or "access-biased memory". Psychological theories differ in their predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled than information that does not match. Some alternative approaches say that surprising information stands out and so is memorable. Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.

In one study, participants read a profile of a woman which described a mix of introverted and extroverted behaviors. They later had to recall examples of her introversion and extroversion. One group was told this was to assess the woman for a job as a librarian, while a second group was told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" group recalling more extroverted behavior. A selective memory effect has also been shown in experiments that manipulate the desirability of personality types. In one of these, a group of participants was shown evidence that extroverted people are more successful than introverts. Another group was told the opposite. In a subsequent, apparently unrelated study, participants were asked to recall events from their lives in which they had been either introverted or extroverted. Each group of participants provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.

Changes in emotional state can also influence memory recall. Participants rated how they had felt when they first learned that O.J. Simpson had been acquitted of murder charges, describing their emotional reactions and confidence regarding the verdict one week, two months, and one year after the trial. Results indicated that participants' assessments of Simpson's guilt changed over time. The more participants' opinion of the verdict had changed, the less stable their memories of their initial emotional reactions were. When participants recalled those reactions two months and a year later, their past appraisals closely resembled their current appraisals of emotion. People demonstrate sizable myside bias when discussing their opinions on controversial topics; memory recall and the construction of experiences are revised in line with current emotional states.

Myside bias has been shown to influence the accuracy of memory recall. In one experiment, widows and widowers rated the intensity of their grief six months and five years after the deaths of their spouses. Participants reported more intense grief at six months than at five years. Yet when asked after five years how they had felt six months after the death of their significant other, the intensity of grief participants recalled correlated highly with their current level of grief. Individuals appear to use their current emotional states to infer how they must have felt when experiencing past events; emotional memories are reconstructed from current emotional states.

One study showed how selective memory can maintain belief in extrasensory perception (ESP). Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, participants recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.

Individual differences

Myside bias was once believed to be correlated with intelligence; however, studies have shown that it is more strongly influenced by the ability to think rationally than by level of intelligence. Myside bias can cause an inability to effectively and logically evaluate the opposite side of an argument. Studies suggest that myside bias reflects an absence of "active open-mindedness", meaning the active search for reasons why an initial idea may be wrong. In empirical studies, myside bias is typically operationalized as the quantity of evidence participants use in support of their own side compared with the opposite side.

One study found individual differences in myside bias, investigating differences that are acquired through learning in a cultural context and are mutable. The researcher found important individual differences in argumentation. Studies have suggested that individual differences such as deductive reasoning ability, the ability to overcome belief bias, epistemological understanding, and thinking disposition are significant predictors of reasoning and of generating arguments, counterarguments, and rebuttals.

A study by Christopher Wolfe and Anne Britt also investigated how participants' views of "what makes a good argument?" can be a source of myside bias that influences the way a person formulates their own arguments. The study investigated individual differences of argumentation schema and asked participants to write essays. The participants were randomly assigned to write essays either for or against their preferred side of an argument and were given research instructions that took either a balanced or an unrestricted approach. The balanced-research instructions directed participants to create a "balanced" argument, i.e., that included both pros and cons; the unrestricted-research instructions included nothing on how to create the argument.

Overall, the results revealed that the balanced-research instructions significantly increased the incidence of opposing information in arguments. These data also reveal that personal belief is not in itself a source of myside bias; however, participants who believe that a good argument is one based on facts are more likely to exhibit myside bias than other participants. This evidence is consistent with the claims proposed in Baron's article: people's opinions about what makes good thinking can influence how arguments are generated.

Discovery

Informal observations

Engraved portrait of Francis Bacon

Before psychological research on confirmation bias, the phenomenon had been observed throughout history. The Greek historian Thucydides (c. 460 BC – c. 395 BC) wrote of misguided reason in The Peloponnesian War: "... for it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy".[44] The Italian poet Dante Alighieri (1265–1321) noted it in the Divine Comedy, in which St. Thomas Aquinas cautions Dante when they meet in Paradise: "opinion—hasty—often can incline to the wrong side, and then affection for one's own opinion binds, confines the mind".[45] Ibn Khaldun noticed the same effect in his Muqaddimah:

Untruth naturally afflicts historical information. There are various reasons that make this unavoidable. One of them is partisanship for opinions and schools. [...] if the soul is infected with partisanship for a particular opinion or sect, it accepts without a moment's hesitation the information that is agreeable to it. Prejudice and partisanship obscure the critical faculty and preclude critical investigation. The result is that falsehoods are accepted and transmitted.

In the Novum Organum, English philosopher and scientist Francis Bacon (1561–1626) noted that biased assessment of evidence drove "all superstitions, whether in astrology, dreams, omens, divine judgments or the like". He wrote:

The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.]

In the second volume of The World as Will and Representation (1844), the German philosopher Arthur Schopenhauer observed that "An adopted hypothesis gives us lynx-eyes for everything that confirms it and makes us blind to everything that contradicts it." In his 1897 essay "What Is Art?", the Russian novelist Leo Tolstoy wrote:

I know that most men—not only those considered clever, but even those who are very clever, and capable of understanding most difficult scientific, mathematical, or philosophic problems—can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty—conclusions of which they are proud, which they have taught to others, and on which they have built their lives.

Hypothesis-testing (falsification) explanation (Wason)

In Peter Wason's initial experiment published in 1960 (which does not mention the term "confirmation bias"), he repeatedly challenged participants to identify a rule applying to triples of numbers. They were told that (2,4,6) fits the rule. They generated triples, and the experimenter told them whether or not each triple conformed to the rule.

The actual rule was simply "any ascending sequence", but participants had great difficulty in finding it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last". The participants seemed to test only positive examples—triples that obeyed their hypothesized rule. For example, if they thought the rule was, "Each number is two greater than its predecessor," they would offer a triple that fitted (confirmed) this rule, such as (11,13,15) rather than a triple that violated (falsified) it, such as (11,12,19).
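The structure of the task can be made concrete with a short simulation. The following Python sketch (illustrative code, not from Wason's paper) shows why the positive-test strategy can never falsify a hypothesis that is a strict subset of the true rule:

```python
import random

def true_rule(t):
    """The experimenter's actual rule: any strictly ascending triple."""
    return t[0] < t[1] < t[2]

def hypothesis(t):
    """The participant's narrower guess: each number is two greater
    than its predecessor."""
    return t[1] == t[0] + 2 and t[2] == t[1] + 2

random.seed(0)
# Positive-test strategy: propose only triples that fit the hypothesis.
for _ in range(5):
    a = random.randint(1, 100)
    triple = (a, a + 2, a + 4)
    print(triple, "->", "fits" if true_rule(triple) else "does not fit")

# Every answer comes back "fits": because the hypothesis is a strict
# subset of the true rule, positive tests can never falsify it. Only a
# triple that violates the hypothesis can expose the broader rule:
print((11, 12, 19), "->", "fits" if true_rule((11, 12, 19)) else "does not fit")
```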

Wason interpreted his results as showing a preference for confirmation over falsification, hence he coined the term "confirmation bias". Wason also used confirmation bias to explain the results of his selection task experiment. Participants repeatedly performed badly on various forms of this test, in most cases ignoring information that could potentially refute (falsify) the specified rule.

Hypothesis testing (positive test strategy) explanation (Klayman and Ha)

Klayman and Ha's 1987 paper argues that the Wason experiments do not actually demonstrate a bias towards confirmation, but instead a tendency to make tests consistent with the working hypothesis. They called this the "positive test strategy". This strategy is an example of a heuristic: a reasoning shortcut that is imperfect but easy to compute. Klayman and Ha used Bayesian probability and information theory as their standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, each answer to a question yields a different amount of information, which depends on the person's prior beliefs. Thus a scientific test of a hypothesis is one that is expected to produce the most information. Since the information content depends on initial probabilities, a positive test can either be highly informative or uninformative. Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability. In this case, positive tests are usually more informative than negative tests. However, in Wason's rule discovery task the answer—three numbers in ascending order—is very broad, so positive tests are unlikely to yield informative answers. Klayman and Ha supported their analysis by citing an experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". This avoided implying that the aim was to find a low-probability rule. Participants had much more success with this version of the experiment.
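Klayman and Ha's point about informativeness can be illustrated numerically. The Python sketch below uses an invented toy prior over three candidate rules (none of these details are from their paper) and measures a test's expected information as the entropy of its predicted yes/no outcome:

```python
import math

# Illustrative candidate rules; a uniform prior over them is assumed.
rules = {
    "steps of two": lambda t: t[1] == t[0] + 2 and t[2] == t[1] + 2,
    "ascending":    lambda t: t[0] < t[1] < t[2],
    "all even":     lambda t: all(x % 2 == 0 for x in t),
}

def expected_information(triple):
    """Expected information (in bits) from asking whether `triple` fits
    the unknown rule. Each candidate rule predicts the yes/no outcome
    with certainty, so the expected gain equals the outcome's entropy."""
    yes = sum(1 for rule in rules.values() if rule(triple))
    p_yes = yes / len(rules)  # uniform prior over the candidate rules
    if p_yes in (0.0, 1.0):
        return 0.0  # the outcome is certain in advance: the test teaches nothing
    return -(p_yes * math.log2(p_yes) + (1 - p_yes) * math.log2(1 - p_yes))

print(expected_information((2, 4, 6)))     # 0.0 bits: every candidate says "yes"
print(expected_information((11, 13, 15)))  # ~0.92 bits: separates "all even" out
```

Both (2,4,6) and (11,13,15) are positive tests of the "steps of two" hypothesis, yet one yields no information and the other nearly a full bit: whether a positive test is informative depends on prior beliefs, not on the test being positive.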

If the true rule (T) encompasses the current hypothesis (H), then positive tests (examining an H to see if it is T) will not show that the hypothesis is false.
If the true rule (T) overlaps the current hypothesis (H), then either a negative test or a positive test can potentially falsify H.
When the working hypothesis (H) includes the true rule (T), then positive tests are the only way to falsify H.

In light of this and other critiques, the focus of research moved away from confirmation versus falsification of a hypothesis to examining whether people test hypotheses in an informative way, or in an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.

Information processing explanations

There are currently three main information processing explanations of confirmation bias, plus a recent addition.

Cognitive versus motivational

Happy events are more likely to be remembered.

According to Robert MacCoun, most biased evidence processing occurs through a combination of "cold" (cognitive) and "hot" (motivated) mechanisms.

Cognitive explanations for confirmation bias are based on limitations in people's ability to handle complex tasks, and on the shortcuts, called heuristics, that they use. For example, people may judge the reliability of evidence by using the availability heuristic, that is, by how readily a particular idea comes to mind. It is also possible that people can focus on only one thought at a time, and so find it difficult to test alternative hypotheses in parallel. Another heuristic is the positive test strategy identified by Klayman and Ha, in which people test a hypothesis by examining cases where they expect a property or event to occur. This heuristic avoids the difficult or impossible task of working out how diagnostic each possible question will be. However, it is not universally reliable, so people can overlook challenges to their existing beliefs.

Motivational explanations involve an effect of desire on belief. It is known that people prefer positive thoughts over negative ones in a number of ways: this is called the "Pollyanna principle". Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true. According to experiments that manipulate the desirability of the conclusion, people demand a high standard of evidence for unpalatable ideas and a low standard for preferred ideas. In other words, they ask, "Can I believe this?" for some suggestions and, "Must I believe this?" for others. Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information. Social psychologist Ziva Kunda combines the cognitive and motivational theories, arguing that motivation creates the bias, but cognitive factors determine the size of the effect.

Cost-benefit

Explanations in terms of cost-benefit analysis assume that people do not just test hypotheses in a disinterested way, but assess the costs of different errors. Using ideas from evolutionary psychology, James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors. For example, employers might ask one-sided questions in job interviews because they are focused on weeding out unsuitable candidates. Yaacov Trope and Akiva Liberman's refinement of this theory assumes that people compare the two different kinds of error: accepting a false hypothesis or rejecting a true hypothesis. For instance, someone who underestimates a friend's honesty might treat him or her suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way. When someone gives an initial impression of being introverted or extroverted, questions that match that impression come across as more empathic. This suggests that when talking to someone who seems to be an introvert, it is a sign of better social skills to ask, "Do you feel awkward in social situations?" rather than, "Do you like noisy parties?" The connection between confirmation bias and social skills was corroborated by a study of how college students get to know other people. Highly self-monitoring students, who are more sensitive to their environment and to social norms, asked more matching questions when interviewing a high-status staff member than when getting to know fellow students.

Exploratory versus confirmatory

Psychologists Jennifer Lerner and Philip Tetlock distinguish two different kinds of thinking process. Exploratory thought neutrally considers multiple points of view and tries to anticipate all possible objections to a particular position, while confirmatory thought seeks to justify a specific point of view. Lerner and Tetlock say that when people expect to justify their position to others whose views they already know, they will tend to adopt a similar position to those people, and then use confirmatory thought to bolster their own credibility. However, if the external parties are overly aggressive or critical, people will disengage from thought altogether, and simply assert their personal opinions without justification. Lerner and Tetlock say that people only push themselves to think critically and logically when they know in advance they will need to explain themselves to others who are well-informed, genuinely interested in the truth, and whose views they don't already know. Because those conditions rarely exist, they argue, most people are using confirmatory thought most of the time.

Make-believe

Developmental psychologist Eve Whitmore has argued that beliefs and biases involved in confirmation bias have their roots in childhood coping through make-believe, which becomes "the basis for more complex forms of self-deception and illusion into adulthood." The friction brought on by questioning as an adolescent with developing critical thinking can lead to the rationalization of false beliefs, and the habit of such rationalization can become unconscious over the years.

Real-world effects

Social media

In social media, confirmation bias is amplified by the use of filter bubbles, or "algorithmic editing", which displays to individuals only information they are likely to agree with, while excluding opposing views. Some have argued that confirmation bias is the reason why society can never escape from filter bubbles, because individuals are psychologically hardwired to seek information that agrees with their preexisting values and beliefs. Others have further argued that the mixture of the two is degrading democracy—claiming that this "algorithmic editing" removes diverse viewpoints and information—and that unless filter bubble algorithms are removed, voters will be unable to make fully informed political decisions. 

The rise of social media has contributed greatly to the rapid spread of fake news, that is, false and misleading information that is presented as credible news from a seemingly reliable source. Confirmation bias (selecting or reinterpreting evidence to support one's beliefs) is one of three main hurdles cited as to why critical thinking goes astray in these circumstances. The other two are shortcut heuristics (when overwhelmed or short of time, people rely on simple rules such as group consensus or trusting an expert or role model) and social goals (social motivation or peer pressure can interfere with objective analysis of facts at hand).

In combating the spread of fake news, social media sites have considered turning toward "digital nudging". Two forms are currently in use: nudging of information and nudging of presentation. Nudging of information entails a social media site providing a disclaimer or label that questions or warns users about the validity of a source, while nudging of presentation exposes users to new information which they may not have sought out, but which could introduce them to viewpoints that counter their own confirmation biases.

Science and scientific research

A distinguishing feature of scientific thinking is the search for confirming or supportive evidence (inductive reasoning) as well as falsifying evidence (deductive reasoning). Inductive research in particular can have a serious problem with confirmation bias.

Many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data. The assessment of the quality of scientific studies seems to be particularly vulnerable to confirmation bias. Several studies have shown that scientists rate studies that report findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with their previous beliefs.

However, assuming that the research question is relevant, the experimental design adequate and the data are clearly and comprehensively described, the empirical data obtained should be important to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions. In practice, researchers may misunderstand, misinterpret, or not read at all studies that contradict their preconceptions, or wrongly cite them anyway as if they actually supported their claims.

Further, confirmation biases can sustain scientific theories or research programs in the face of inadequate or even contradictory evidence. The discipline of parapsychology is often cited as an example in the context of whether it is a protoscience or a pseudoscience.

An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias. For example, experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias.

The social process of peer review aims to mitigate the effect of individual scientists' biases, even though the peer review process itself may be susceptible to such biases. Confirmation bias may thus be especially harmful to objective evaluations regarding nonconforming results since biased individuals may regard opposing evidence to be weak in principle and give little serious thought to revising their beliefs. Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review.

Media and fact-checking

Sensationalist newspapers of the 1850s and later led to a gradual need for more factual media. Colin Dickey has described the subsequent evolution of fact-checking. Key elements were the establishment of the Associated Press in the 1850s (short factual material was needed), Ralph Pulitzer of the New York World (his Bureau of Accuracy and Fair Play, 1912), Henry Luce and Time magazine (original working title: Facts), and the famous fact-checking department of The New Yorker. More recently, the mainstream media has come under severe economic threat from online startups. In addition, the rapid spread of misinformation and conspiracy theories via social media is slowly creeping into mainstream media. One solution is for more media staff to be assigned a fact-checking role, as at The Washington Post. Independent fact-checking organisations such as PolitiFact have also become prominent.

However, the fact-checking of media reports and investigations is subject to the same confirmation bias as that for peer review of scientific research. This bias has been little studied so far. For example, a fact-checker with progressive political views might be more critical than necessary of a factual report from a conservative commentator. Another example is that facts are often explained with ambiguous words, so that progressives and conservatives may interpret the words differently according to their own beliefs.

Finance

Confirmation bias can lead investors to be overconfident, ignoring evidence that their strategies will lose money. In studies of political stock markets, investors made more profit when they resisted bias. For example, participants who interpreted a candidate's debate performance in a neutral rather than partisan way were more likely to profit. To combat the effect of confirmation bias, investors can try to adopt a contrary viewpoint "for the sake of argument". In one technique, they imagine that their investments have collapsed and ask themselves why this might happen.

Medicine and health

Cognitive biases are important variables in clinical decision-making by medical general practitioners (GPs) and medical specialists. Two important ones are confirmation bias and the overlapping availability bias. A GP may make a diagnosis early on during an examination, and then seek confirming evidence rather than falsifying evidence. This cognitive error is partly caused by the availability of evidence about the supposed disorder being diagnosed. For example, the client may have mentioned the disorder, or the GP may have recently read a much-discussed paper about the disorder. The basis of this cognitive shortcut or heuristic (termed anchoring) is that the doctor does not consider multiple possibilities based on evidence, but prematurely latches on (or anchors to) a single cause. In emergency medicine, because of time pressure, there is a high density of decision-making, and shortcuts are frequently applied. The potential failure rate of these cognitive decisions needs to be managed by education about the 30 or more cognitive biases that can occur, so as to set in place proper debiasing strategies. Confirmation bias may also cause doctors to perform unnecessary medical procedures due to pressure from adamant patients.

Raymond Nickerson, a psychologist, blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine. If a patient recovered, medical authorities counted the treatment as successful, rather than looking for alternative explanations such as that the disease had run its natural course. Biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.

Cognitive therapy was developed by Aaron T. Beck in the early 1960s and has become a popular approach. According to Beck, biased information processing is a factor in depression. His approach teaches people to treat evidence impartially, rather than selectively reinforcing negative outlooks. Phobias and hypochondria have also been shown to involve confirmation bias for threatening information.

Politics, law and policing

Mock trials allow researchers to examine confirmation biases in a realistic setting

Nickerson argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to. Since the evidence in a jury trial can be complex, and jurors often reach decisions about the verdict early on, it is reasonable to expect an attitude polarization effect. The prediction that jurors will become more extreme in their views as they see more evidence has been borne out in experiments with mock trials. Both inquisitorial and adversarial criminal justice systems are affected by confirmation bias.

Confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position. On the other hand, confirmation bias can result in people ignoring or misinterpreting the signs of an imminent or incipient conflict. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that U.S. Navy Admiral Husband E. Kimmel showed confirmation bias when playing down the first signs of the Japanese attack on Pearl Harbor.

A two-decade study of political pundits by Philip E. Tetlock found that, on the whole, their predictions were not much better than chance. Tetlock divided experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. In general, the hedgehogs were much less accurate. Tetlock blamed their failure on confirmation bias, and specifically on their inability to make use of new information that contradicted their existing theories.

In police investigations, a detective may identify a suspect early in an investigation but then seek mainly supporting or confirming evidence, ignoring or downplaying falsifying evidence.

Social psychology

Social psychologists have identified two tendencies in the way people seek or interpret information about themselves. Self-verification is the drive to reinforce the existing self-image and self-enhancement is the drive to seek positive feedback. Both are served by confirmation biases. In experiments where people are given feedback that conflicts with their self-image, they are less likely to attend to it or remember it than when given self-verifying feedback. They reduce the impact of such information by interpreting it as unreliable. Similar experiments have found a preference for positive feedback, and the people who give it, over negative feedback.

Mass delusions

Confirmation bias can play a key role in the propagation of mass delusions. Witch trials are frequently cited as an example.

Another example is the Seattle windshield pitting epidemic, in which windshields appeared to be suffering damage from an unknown cause. As news of the apparent wave of damage spread, more and more people checked their windshields, discovered damage, and thus had their belief in the supposed epidemic confirmed. In fact, the windshields had been damaged all along, but the damage went unnoticed until people inspected their windshields as the delusion spread.

Paranormal beliefs

One factor in the appeal of alleged psychic readings is that listeners apply a confirmation bias which fits the psychic's statements to their own lives. By making a large number of ambiguous statements in each sitting, the psychic gives the client more opportunities to find a match. This is one of the techniques of cold reading, with which a psychic can deliver a subjectively impressive reading without any prior information about the client. Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective recall of the "hits".

As a striking illustration of confirmation bias in the real world, Nickerson mentions numerological pyramidology: the practice of finding meaning in the proportions of the Egyptian pyramids. There are many different length measurements that can be made of, for example, the Great Pyramid of Giza and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.

Recruitment and selection

Unconscious cognitive bias (including confirmation bias) in job recruitment affects hiring decisions and can potentially undermine a diverse and inclusive workplace. A variety of unconscious biases affect recruitment decisions, but confirmation bias is one of the major ones, especially during the interview stage. The interviewer will often select a candidate who confirms their own beliefs, even though other candidates are equally or better qualified.

Associated effects and outcomes

Polarization of opinion

When people with opposing views interpret new information in a biased way, their views can move even further apart. This is called "attitude polarization". The effect was demonstrated by an experiment that involved drawing a series of red and black balls from one of two concealed "bingo baskets". Participants knew that one basket contained 60 percent black and 40 percent red balls; the other, 40 percent black and 60 percent red. The experimenters looked at what happened when balls of alternating color were drawn in turn, a sequence that does not favor either basket. After each ball was drawn, participants in one group were asked to state out loud their judgments of the probability that the balls were being drawn from one or the other basket. These participants tended to grow more confident with each successive draw—whether they initially thought the basket with 60 percent black balls or the one with 60 percent red balls was the more likely source, their estimate of the probability increased. Another group of participants were asked to state probability estimates only at the end of a sequence of drawn balls, rather than after each ball. They did not show the polarization effect, suggesting that it does not necessarily occur when people simply hold opposing positions, but rather when they openly commit to them.
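A minimal Bayesian sketch of this setup (in Python; the 60/40 basket compositions come from the paragraph above, while the 50/50 prior and the exact sequence length are assumed for illustration) shows why the alternating sequence should not move a rational observer:

```python
p_black = {"A": 0.6, "B": 0.4}  # basket A: 60% black; basket B: 40% black

def posterior_A(draws, prior_A=0.5):
    """P(basket A | draws), by Bayes' rule with an assumed 50/50 prior."""
    like_A = like_B = 1.0
    for color in draws:
        like_A *= p_black["A"] if color == "black" else 1 - p_black["A"]
        like_B *= p_black["B"] if color == "black" else 1 - p_black["B"]
    return prior_A * like_A / (prior_A * like_A + (1 - prior_A) * like_B)

draws = ["black", "red"] * 5  # balls of alternating color, as in the study
for i in range(1, len(draws) + 1):
    print(draws[i - 1], round(posterior_A(draws[:i]), 3))
# The posterior oscillates between 0.6 and 0.5 and ends at exactly 0.5:
# the alternating sequence is neutral evidence, so steadily growing
# confidence in either basket reflects polarization, not rational updating.
```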

A less abstract study was the Stanford biased interpretation experiment, in which participants with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the participants reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes. In later experiments, participants also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real. Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases, and it was prompted not only by considering mixed evidence, but by merely thinking about the topic.

Charles Taber and Milton Lodge argued that the Stanford team's result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. The Taber and Lodge study used the emotionally charged topics of gun control and affirmative action. They measured the attitudes of their participants towards these issues before and after reading arguments on each side of the debate. Two groups of participants showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, participants chose which information sources to read, from a list prepared by the experimenters. For example, they could read the National Rifle Association's and the Brady Anti-Handgun Coalition's arguments on gun control. Even when instructed to be even-handed, participants were more likely to read arguments that supported their existing attitudes than arguments that did not. This biased search for information correlated well with the polarization effect.

The backfire effect is a name for the finding that, when given evidence against their beliefs, people can reject the evidence and believe even more strongly. The phrase was coined by Brendan Nyhan and Jason Reifler in 2010. However, subsequent research has failed to replicate the findings supporting the backfire effect. One study conducted at Ohio State University and George Washington University tested 10,100 participants on 52 different issues expected to trigger a backfire effect. While the findings confirmed that individuals are reluctant to embrace facts that contradict their already-held ideology, no cases of backfire were detected. The backfire effect has since been noted to be a rare phenomenon rather than a common occurrence.

Persistence of discredited beliefs

[B]eliefs can survive potent logical or empirical challenges. They can survive and even be bolstered by evidence that most uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction of their original evidential bases.

—Lee Ross and Craig Anderson

Confirmation biases provide one plausible explanation for the persistence of beliefs when the initial evidence for them is removed or when they have been sharply contradicted.[1]:187 This belief perseverance effect was first demonstrated experimentally by Festinger, Riecken, and Schachter. These psychologists spent time with a cult whose members were convinced that the world would end on December 21, 1954. After the prediction failed, most believers still clung to their faith. Their book describing this research is aptly named When Prophecy Fails.

The term "belief perseverance," however, was coined in a series of experiments using what is called the "debriefing paradigm": participants read fake evidence for a hypothesis, their attitude change is measured, then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level.

A common finding is that at least some of the initial belief remains even after a full debriefing. In one experiment, participants had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, participants were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.

In another study, participants read job performance ratings of two firefighters, along with their responses to a risk aversion test. This fictional data was arranged to show either a negative or positive association: some participants were told that a risk-taking firefighter did better, while others were told they did less well than a risk-averse colleague. Even if these two case studies were true, they would have been scientifically poor evidence for a conclusion about firefighters in general. However, the participants found them subjectively persuasive. When the case studies were shown to be fictional, participants' belief in a link diminished, but around half of the original effect remained. Follow-up interviews established that the participants had understood the debriefing and taken it seriously. Participants seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.

The continued influence effect is the tendency to believe previously learned misinformation even after it has been corrected. Misinformation can still influence inferences one generates after a correction has occurred.

Preference for early information

Experiments have shown that information is weighted more strongly when it appears early in a series, even when the order is unimportant. For example, people form a more positive impression of someone described as "intelligent, industrious, impulsive, critical, stubborn, envious" than when they are given the same words in reverse order. This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace. Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.

One demonstration of irrational primacy used colored chips supposedly drawn from two urns. Participants were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them. In fact, the colors appeared in a prearranged order. The first thirty draws favored one urn and the next thirty favored the other. The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, participants favored the urn suggested by the initial thirty.
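Rationally, the conclusion after all sixty draws should depend only on the totals, not on the order in which they arrived, since the likelihood is a product over draws. A brief Python sketch (with assumed 60/40 urn compositions, as the exact proportions are not given above) makes this explicit:

```python
from math import prod

p_red = {"A": 0.6, "B": 0.4}  # assumed urn compositions, for illustration

def posterior_A(draws):
    """P(urn A | draws) under a uniform prior; the likelihood is a
    product over the draws, so the order of draws cannot matter."""
    like = {u: prod(p_red[u] if d == "red" else 1 - p_red[u] for d in draws)
            for u in ("A", "B")}
    return like["A"] / (like["A"] + like["B"])

first_thirty = ["red"] * 20 + ["blue"] * 10   # thirty draws favoring urn A
next_thirty  = ["red"] * 10 + ["blue"] * 20   # thirty draws favoring urn B
print(round(posterior_A(first_thirty + next_thirty), 3))  # 0.5
print(round(posterior_A(next_thirty + first_thirty), 3))  # 0.5: order is irrelevant
```

Participants who stuck with the urn suggested by the first thirty draws were therefore departing from this order-independence.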

Another experiment involved a slide show of a single object, seen as just a blur at first and in slightly better focus with each succeeding slide. After each slide, participants had to state their best guess of what the object was. Participants whose early guesses were wrong persisted with those guesses, even when the picture was sufficiently in focus that the object was readily recognizable to other people.

Illusory association between events

Illusory correlation is the tendency to see non-existent correlations in a set of data. This tendency was first demonstrated in a series of experiments in the late 1960s. In one experiment, participants read a set of psychiatric case studies, including responses to the Rorschach inkblot test. The participants reported that the homosexual men in the set were more likely to report seeing buttocks, anuses or sexually ambiguous figures in the inkblots. In fact the fictional case studies had been constructed so that the homosexual men were no more likely to report this imagery or, in one version of the experiment, were less likely to report it than heterosexual men. In a survey, a group of experienced psychoanalysts reported the same set of illusory associations with homosexuality.

Another study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.

Example (cell entries are numbers of days)

               Rain    No rain
Arthritis       14        6
No arthritis     7        2

This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior. In judging whether two events, such as illness and bad weather, are correlated, people rely heavily on the number of positive-positive cases: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather). This parallels the reliance on positive tests in hypothesis testing. It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.
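For the table above, the intuition driven by the 14 rain-and-pain days can be checked against the phi coefficient, a standard measure of association for a 2×2 table. A short Python sketch:

```python
import math

# 2x2 table from the example above (counts of days)
rain_pain, norain_pain = 14, 6   # days with arthritis pain
rain_ok,   norain_ok   = 7, 2    # days without arthritis pain

# Phi coefficient: equivalent to Pearson's r for two binary variables
phi = ((rain_pain * norain_ok - norain_pain * rain_ok)
       / math.sqrt((rain_pain + norain_pain) * (rain_ok + norain_ok)
                   * (rain_pain + rain_ok) * (norain_pain + norain_ok)))
print(round(phi, 2))  # -0.08: virtually no association, and slightly
# negative, despite the salient cell of 14 rain-and-pain days
```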
