
Sunday, November 23, 2025

Hard problem of consciousness

From Wikipedia, the free encyclopedia

In the philosophy of mind, the "hard problem" of consciousness is to explain why and how humans (and other organisms) have qualia, phenomenal consciousness, or subjective experience. It is contrasted with the "easy problems" of explaining why and how physical systems give a human being the ability to discriminate, to integrate information, and to perform behavioural functions such as watching, listening, speaking (including generating an utterance that appears to refer to personal behaviour or belief), and so forth. The easy problems are amenable to functional explanation—that is, explanations that are mechanistic or behavioural—since each physical system can be explained purely by reference to the "structure and dynamics" that underpin the phenomenon.

Proponents of the hard problem propose that it is categorically different from the easy problems since no mechanistic or behavioural explanation could explain the character of an experience, not even in principle. Even after all the relevant functional facts are explicated, they argue, there will still remain a further question: "why is the performance of these functions accompanied by experience?" To bolster their case, proponents of the hard problem frequently turn to various philosophical thought experiments, involving philosophical zombies, or inverted qualia, or the ineffability of colour experiences, or the unknowability of foreign states of consciousness, such as the experience of being a bat.

David Chalmers on stage for an Alan Turing Year event at De La Salle University, Manila, 27 March 2012

The terms "hard problem" and "easy problems" were coined by the philosopher David Chalmers in a 1994 talk given at The Science of Consciousness conference held in Tucson, Arizona. The following year, the main talking points of Chalmers' talk were published in The Journal of Consciousness Studies. The publication gained significant attention from consciousness researchers and became the subject of a special volume of the journal, which was later published into a book. In 1996, Chalmers published The Conscious Mind, a book-length treatment of the hard problem, in which he elaborated on his core arguments and responded to counterarguments. His use of the word easy is "tongue-in-cheek". As the cognitive psychologist Steven Pinker puts it, they are about as easy as going to Mars or curing cancer. "That is, scientists more or less know what to look for, and with enough brainpower and funding, they would probably crack it in this century."

The existence of the hard problem is disputed. It has been accepted by some philosophers of mind such as Joseph Levine, Colin McGinn, and Ned Block, and by cognitive neuroscientists such as Francisco Varela, Giulio Tononi, and Christof Koch. On the other hand, its existence is rejected by other philosophers of mind, such as Daniel Dennett, Massimo Pigliucci, Thomas Metzinger, Patricia Churchland, and Keith Frankish, and by cognitive neuroscientists such as Stanislas Dehaene, Bernard Baars, Anil Seth, and Antonio Damasio. Clinical neurologist and sceptic Steven Novella has dismissed it as "the hard non-problem". According to a 2020 PhilPapers survey, a majority (62.42%) of the philosophers surveyed said they believed that the hard problem is a genuine problem, while 29.76% said that it does not exist.

There are a number of other philosophical problems related to the hard problem. Ned Block believes that there exists a "Harder Problem of Consciousness", due to the possibility of different physical and functional neurological systems potentially having phenomenal overlap. Another related problem, closely connected to Benj Hellie's vertiginous question and dubbed the "Even Harder Problem of Consciousness", asks why a given individual has their own particular personal identity, as opposed to existing as someone else.

Overview

Cognitive scientist David Chalmers first formulated the hard problem in his paper "Facing up to the problem of consciousness" (1995) and expanded upon it in The Conscious Mind (1996). His works provoked comment. Some, such as philosopher David Lewis and Steven Pinker, have praised Chalmers for his argumentative rigour and "impeccable clarity". Pinker later said, in 2018, "In the end I still think that the hard problem is a meaningful conceptual problem, but agree with Dennett that it is not a meaningful scientific problem. No one will ever get a grant to study whether you are a zombie or whether the same Captain Kirk walks on the deck of the Enterprise and the surface of Zakdorn. And I agree with several other philosophers that it may be futile to hope for a solution at all, precisely because it is a conceptual problem, or, more accurately, a problem with our concepts." Daniel Dennett and Patricia Churchland, among others, believe that the hard problem is best seen as a collection of easy problems that will be solved through further analysis of the brain and behaviour.

Consciousness is an ambiguous term. It can be used to mean self-consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel's definition of consciousness: "the feeling of what it is like to be something." Consciousness, in this sense, is synonymous with experience.

Chalmers' formulation

. . .even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?

— David Chalmers, Facing up to the problem of consciousness

The problems of consciousness, Chalmers argues, are of two kinds: the easy problems and the hard problem.

Easy problems

The easy problems are amenable to reductive enquiry. They are a logical consequence of lower-level facts about the world, similar to how a clock's ability to tell time is a logical consequence of its clockwork and structure, or how a hurricane is a logical consequence of the structures and functions of certain weather patterns. A clock, a hurricane, and the easy problems are all the sum of their parts (as are most things).

The easy problems relevant to consciousness concern mechanistic analysis of the neural processes that accompany behaviour. Examples of these include how sensory systems work, how sensory data is processed in the brain, how that data influences behaviour or verbal reports, the neural basis of thought and emotion, and so on. They are problems that can be analysed through "structures and functions".

Hard problem

The hard problem, in contrast, is the problem of why and how those processes are accompanied by experience. It may further include the question of why these processes are accompanied by this or that particular experience, rather than some other kind of experience. In other words, the hard problem is the problem of explaining why certain mechanisms are accompanied by conscious experience. For example, why should neural processing in the brain lead to the felt sensation of, say, hunger? And why should those neural firings lead to feelings of hunger rather than some other feeling (such as thirst)?
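
To make the contrast concrete, here is a minimal, hypothetical sketch in Python of the kind of purely functional characterisation of hunger that an answer to the easy problems might deliver. The state names, threshold, and behaviours are invented for illustration; the point is only that the state is defined entirely by what typically causes it and what it typically causes.

# Toy functional specification of "hunger": the state is characterised
# only by its typical causes (low blood sugar, not having just eaten)
# and its typical effects (food-seeking behaviour). Nothing in the
# specification refers to how the state feels.
from enum import Enum, auto

class State(Enum):
    SATIATED = auto()
    HUNGRY = auto()

def transition(state: State, blood_sugar: float, just_ate: bool) -> State:
    if just_ate:
        return State.SATIATED
    if blood_sugar < 0.3:        # illustrative threshold
        return State.HUNGRY
    return state

def behaviour(state: State) -> str:
    return "seek food" if state is State.HUNGRY else "carry on"

state = transition(State.SATIATED, blood_sugar=0.2, just_ate=False)
print(state, "->", behaviour(state))   # State.HUNGRY -> seek food

On Chalmers' view, even an arbitrarily detailed specification of this functional kind would answer only easy problems; the hard problem is why occupying such a state is accompanied by the felt sensation of hunger at all.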

Chalmers argues that it is conceivable that the relevant behaviours associated with hunger, or any other feeling, could occur even in the absence of that feeling. This suggests that experience is irreducible to physical systems such as the brain. This is the topic of the next section.

Chalmers believes that the hard problem is irreducible to the easy problems: solving the easy problems will not lead to a solution to the hard problem. This is because the easy problems pertain to the causal structure of the world while the hard problem pertains to consciousness, and facts about consciousness include facts that go beyond mere causal or structural description.

For example, suppose someone were to stub their toe and yelp. In this scenario, the easy problems are mechanistic explanations that involve the activity of the nervous system and brain and its relation to the environment (such as the propagation of nerve signals from the toe to the brain, the processing of that information and how it leads to yelping, and so on). The hard problem is the question of why these mechanisms are accompanied by the feeling of pain, or why these feelings of pain feel the particular way that they do. Chalmers argues that facts about the neural mechanisms of pain, and pain behaviours, do not lead to facts about conscious experience. Facts about conscious experience are, instead, further facts, not derivable from facts about the brain.

The hard problem is often illustrated by appealing to the logical possibility of inverted visible spectra. If there is no logical contradiction in supposing that one's colour vision could be inverted, it follows that mechanistic explanations of visual processing do not determine facts about what it is like to see colours.

An explanation for all of the relevant physical facts about neural processing would leave unexplained facts about what it is like to feel pain. This is in part because functions and physical structures of any sort could conceivably exist in the absence of experience. Alternatively, they could exist alongside a different set of experiences. For example, it is logically possible for a perfect replica of Chalmers to have no experience at all, or for it to have a different set of experiences (such as an inverted visible spectrum, so that the blue-yellow and red-green axes of its visual field are flipped).

The same cannot be said about clocks, hurricanes, or other physical things. In those cases, a structural or functional description is a complete description. A perfect replica of a clock is a clock, a perfect replica of a hurricane is a hurricane, and so on. The difference is that physical things are nothing more than their physical constituents. For example, water is nothing more than H2O molecules, and understanding everything about H2O molecules is to understand everything there is to know about water. But consciousness is not like this. Knowing everything there is to know about the brain, or any physical system, is not to know everything there is to know about consciousness. Consciousness, then, must not be purely physical.

Implications for physicalism

Chalmers's idea contradicts physicalism, sometimes labelled materialism. This is the view that everything that exists is a physical or material thing, so everything can be reduced to microphysical things. For example, the rings of Saturn are a physical thing because they are nothing more than a complex arrangement of a large number of subatomic particles interacting in a certain way. According to physicalism, everything, including consciousness, can be explained by appeal to its microphysical constituents. Chalmers's hard problem presents a counterexample to this view, since it suggests that consciousness, unlike other macroscopic phenomena such as swarms of birds, cannot be reductively explained by appealing to its physical constituents. Thus, if the hard problem is a real problem then physicalism must be false, and if physicalism is true then the hard problem must not be a real problem.

Though Chalmers rejects physicalism, he is still a naturalist.

Christian List argues that the existence of first-person perspectives and the inability of physicalism to answer Hellie's vertiginous question are evidence against physicalism, since first-personal facts cannot supervene on physical third-personal facts. List also claims that there is a "quadrilemma" for metaphysical theories of consciousness: of the claims of first-person realism, non-solipsism, non-fragmentation, and one-world, at least one must be false. List has proposed a model he calls the "many-worlds theory of consciousness" in order to accommodate the subjective nature of consciousness without lapsing into solipsism.

Historical precedents

The hard problem of consciousness has scholarly antecedents considerably earlier than Chalmers. Chalmers himself notes that "a number of thinkers in the recent and distant past" have "recognised the particular difficulties of explaining consciousness." He states that all his original 1995 paper contributed to the discussion was "a catchy name, a minor reformulation of philosophically familiar points".

Among others, thinkers who have made arguments similar to Chalmers' formulation of the hard problem include Isaac Newton, John Locke, Gottfried Wilhelm Leibniz, John Stuart Mill, and Thomas Henry Huxley. Likewise, Asian philosophers like Dharmakirti and Guifeng Zongmi discussed the problem of how consciousness arises from unconscious matter. The Tattva Bodha, an eighth-century text attributed to Adi Shankara from the Advaita Vedanta school of Hinduism, describes consciousness as anubhati, or self-revealing, illuminating all objects of knowledge without itself being a material object.

The mind–body problem

The mind–body problem is the problem of how the mind and the body relate. The mind-body problem is more general than the hard problem of consciousness, since it is the problem of discovering how the mind and body relate in general, thereby implicating any theoretical framework that broaches the topic. The hard problem, in contrast, is often construed as a problem uniquely faced by physicalist or materialist theories of mind.

"What Is It Like to Be a Bat?"

The philosopher Thomas Nagel posited in his 1974 paper "What Is It Like to Be a Bat?" that experiences are essentially subjective (accessible only to the individual undergoing them—i.e., felt only by the one feeling them), while physical states are essentially objective (accessible to multiple individuals). So he argued we have no idea what it could mean to claim that an essentially subjective state just is an essentially non-subjective state (i.e., that a felt state is nothing but a functional state). In other words, we have no idea of what reductivism amounts to. He believes "every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view."

Explanatory gap

In 1983, the philosopher Joseph Levine proposed that there is an explanatory gap between our understanding of the physical world and our understanding of consciousness.

Levine's argument concerns the reduction of conscious states to neuronal or brain states. He uses the example of pain (as an example of a conscious state) and its reduction to the firing of c-fibers (a kind of nerve cell). The difficulty is as follows: even if consciousness is physical, it is not clear which physical states correspond to which conscious states. The bridges between the two levels of description will be contingent, rather than necessary. This is significant because in most contexts, relating two scientific levels of description (such as physics and chemistry) is done with the assurance of necessary connections between the two theories (for example, chemistry follows with necessity from physics).

Levine illustrates this with a thought experiment: Suppose that humanity were to encounter an alien species, and suppose it is known that the aliens do not have any c-fibers. Even if one knows this, it is not obvious that the aliens do not feel pain: that would remain an open question. This is because the fact that aliens do not have c-fibers does not entail that they do not feel pain (in other words, feelings of pain do not follow with logical necessity from the firing of c-fibers). Levine thinks such thought experiments demonstrate an explanatory gap between consciousness and the physical world: even if consciousness is reducible to physical things, consciousness cannot be explained in terms of physical things, because the link between physical things and consciousness is a contingent link.

Levine does not think that the explanatory gap means that consciousness is not physical; he is open to the idea that the explanatory gap is only an epistemological problem for physicalism. In contrast, Chalmers thinks that the hard problem of consciousness does show that consciousness is not physical.

Philosophical zombies

Philosophical zombies are a thought experiment commonly used in discussions of the hard problem. They are hypothetical beings physically identical to humans but that lack conscious experience. Philosophers such as Chalmers, Joseph Levine, and Saul Kripke take zombies as impossible within the bounds of nature but possible within the bounds of logic. This would imply that facts about experience are not logically entailed by the "physical" facts. Therefore, consciousness is irreducible. In Chalmers' words, "after God (hypothetically) created the world, he had more work to do." Daniel Dennett, a philosopher of mind, criticised the field's use of "the zombie hunch" which he deems an "embarrassment" that ought to "be dropped like a hot potato".

Knowledge argument

The knowledge argument, also known as Mary's Room, is another common thought experiment: A hypothetical neuroscientist named Mary has lived her whole life in a black-and-white room and has never seen colour before. She also happens to know everything there is to know about the brain and colour perception. Chalmers believes that when Mary sees the colour red for the first time, she gains new knowledge — the knowledge of "what red looks like" — which is distinct from, and irreducible to, her prior physical knowledge of the brain or visual system. A stronger form of the knowledge argument claims not merely that Mary would lack subjective knowledge of "what red looks like," but that she would lack knowledge of an objective fact about the world: namely, "what red looks like," a non-physical fact that can be learned only through direct experience (qualia). Others, such as Thomas Nagel, who take a "physicalist" position, disagree with the argument in both its stronger and weaker forms. For example, Nagel put forward a "speculative proposal" of devising a language that could "explain to a person blind from birth what it is like to see." The knowledge argument implies that such a language could not exist.

Philosophical responses

David Chalmers' formulation of the hard problem of consciousness provoked considerable debate within philosophy of mind, as well as scientific research.

A diagram showing the relationships among various views concerning how consciousness relates to the physical world

The hard problem is considered a problem primarily for physicalist views of the mind (the view that the mind is a physical object or process), since physical explanations tend to be functional or structural. Because of this, some physicalists have responded to the hard problem by seeking to show that it dissolves upon analysis. Other researchers accept the problem as real and seek to develop a theory of the place of consciousness in the world that can solve it, by either modifying physicalism or abandoning it in favour of an alternative ontology (such as panpsychism or dualism). A third response has been to accept the hard problem as real but deny that human cognitive faculties can solve it.

PhilPapers is an organisation that archives academic philosophy papers and periodically surveys professional philosophers about their views. These surveys can be used to gauge professional attitudes towards the hard problem. In the 2020 survey, a majority of the philosophers surveyed (62.42%) agreed that the hard problem is real, while a substantial minority (29.76%) disagreed.

Attitudes towards physicalism also differ among professionals. In the 2009 PhilPapers survey, 56.5% of philosophers surveyed subscribed to physicalism and 27.1% of philosophers surveyed rejected physicalism. 16.4% fell into the "other" category. In the 2020 PhilPapers survey, 51.93% of philosophers surveyed indicated that they "accept or lean towards" physicalism and 32.08% indicated that they reject physicalism. 6.23% were "agnostic" or "undecided".

Different solutions have been proposed to the hard problem of consciousness. The sections below taxonomize the various responses to the hard problem. The shape of this taxonomy was first introduced by Chalmers in a 2003 literature review on the topic. The labelling convention of this taxonomy has been incorporated into the technical vocabulary of analytic philosophy, being used by philosophers such as Adrian Boutel, Raamy Majeed, Janet Levin, Pete Mandik & Josh Weisberg, Roberto Pereira, and Helen Yetter-Chappell.

Type-A Materialism

Type-A materialism (also known as reductive materialism or a priori physicalism) is a view characterised by a commitment to physicalism and a full rejection of the hard problem. By this view, the hard problem either does not exist or is just another easy problem, because every fact about the mind is a fact about the performance of various functions or behaviours. So, once all the relevant functions and behaviours have been accounted for, there will not be any facts left over in need of explanation. Thinkers who subscribe to type-A materialism include Paul and Patricia Churchland, Daniel Dennett, Keith Frankish, and Thomas Metzinger.

Some type-A materialists believe in the reality of phenomenal consciousness but believe it is nothing extra in addition to certain functions or behaviours. This view is sometimes referred to as strong reductionism. Other type-A materialists may reject the existence of phenomenal consciousness entirely. This view is referred to as eliminative materialism or illusionism.

Strong reductionism

Many philosophers have disputed that there is a hard problem of consciousness distinct from what Chalmers calls the easy problems of consciousness. Some among them, who are sometimes termed strong reductionists, hold that phenomenal consciousness (i.e., conscious experience) does exist but that it can be fully understood as reducible to the brain.

Broadly, strong reductionists accept that conscious experience is real but argue it can be fully understood in functional terms as an emergent property of the material brain. In contrast to weak reductionists (see below), strong reductionists reject ideas used to support the existence of a hard problem (that the same functional organization could exist without consciousness, or that a blind person who understood vision through a textbook would not know everything about sight) as simply mistaken intuitions.

A notable family of strong reductionist accounts are the higher-order theories of consciousness. In 2005, the philosopher Peter Carruthers wrote about "recognitional concepts of experience", that is, "a capacity to recognize [a] type of experience when it occurs in one's own mental life," and suggested that such a capacity could explain phenomenal consciousness without positing qualia. On the higher-order view, since consciousness is a representation, and representation is fully functionally analysable, there is no hard problem of consciousness.

The philosophers Glenn Carruthers and Elizabeth Schier said in 2012 that the main arguments for the existence of a hard problem—philosophical zombies, Mary's room, and Nagel's bats—are only persuasive if one already assumes that "consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem." Hence, the arguments beg the question. The authors suggest that "instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments."

The philosopher Massimo Pigliucci argued in 2013 that the hard problem is misguided, resulting from a "category mistake". He said: "Of course an explanation isn't the same as an experience, but that's because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you."

In 2017, the philosopher Marco Stango, in a paper on John Dewey's approach to the problem of consciousness (which preceded Chalmers' formulation of the hard problem by over half a century), noted that Dewey's approach would see the hard problem as the consequence of an unjustified assumption that feelings and functional behaviours are not the same physical process: "For the Deweyan philosopher, the 'hard problem' of consciousness is a 'conceptual fact' only in the sense that it is a philosophical mistake: the mistake of failing to see that the physical can be had as an episode of immediate sentiency."

The philosopher Thomas Metzinger likens the hard problem of consciousness to vitalism, a formerly widespread view in biology which was not so much solved as abandoned. Brian Jonathan Garrett has also argued that the hard problem suffers from flaws analogous to those of vitalism.

The philosopher Peter Hacker argues that the hard problem is misguided in that it asks how consciousness can emerge from matter, whereas in fact sentience emerges from the evolution of living organisms. He states: "The hard problem isn't a hard problem at all. The really hard problems are the problems the scientists are dealing with. [...] The philosophical problem, like all philosophical problems, is a confusion in the conceptual scheme." Hacker's critique extends beyond Chalmers and the hard problem, being directed against contemporary philosophy of mind and neuroscience more broadly. Along with the neuroscientist Max Bennett, he has argued that most of contemporary neuroscience remains implicitly dualistic in its conceptualisations and is predicated on the mereological fallacy of ascribing psychological concepts to the brain that can properly be ascribed only to the person as a whole. Hacker further states that "consciousness studies", as it exists today, is "literally a total waste of time" and that "the conception of consciousness which they have is incoherent".

Eliminative materialism / Illusionism

Eliminative materialism or eliminativism is the view that many or all of the mental states used in folk psychology (i.e., common-sense ways of discussing the mind) do not, upon scientific examination, correspond to real brain mechanisms. According to the 2020 PhilPapers survey, 4.51% of philosophers surveyed subscribe to eliminativism.

While Patricia Churchland and Paul Churchland have famously applied eliminative materialism to propositional attitudes, philosophers including Daniel Dennett, Georges Rey, and Keith Frankish have applied it to qualia or phenomenal consciousness (i.e., conscious experience). On their view, it is mistaken not only to believe there is a hard problem of consciousness, but to believe phenomenal consciousness exists at all.

This stance has recently taken on the name of illusionism: the view that phenomenal consciousness is an illusion. The term was popularized by the philosopher Keith Frankish. Frankish argues that "illusionism" is preferable to "eliminativism" for labelling the view that phenomenal consciousness is an illusion. More substantively, Frankish argues that illusionism about phenomenal consciousness is preferable to realism about phenomenal consciousness. He states: "Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist." Frankish concludes that illusionism "replaces the hard problem with the illusion problem—the problem of explaining how the illusion of phenomenality arises and why it is so powerful."

The philosopher Daniel Dennett was another prominent figure associated with illusionism. After Frankish published a paper in the Journal of Consciousness Studies titled Illusionism as a Theory of Consciousness, Dennett responded with his own paper humorously titled Illusionism as the Obvious Default Theory of Consciousness. Dennett had been arguing for the illusory status of consciousness since early on in his career. For example, in 1979 he published a paper titled On the Absence of Phenomenology (where he argues for the nonexistence of phenomenal consciousness). Similar ideas have been explicated in his 1991 book Consciousness Explained. Dennett argues that the so-called "hard problem" will be solved in the process of solving what Chalmers terms the "easy problems". He compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things. To show how people might be commonly fooled into overstating the accuracy of their introspective abilities, he describes a phenomenon called change blindness, a visual process that involves failure to detect scenery changes in a series of alternating images. He accordingly argues that consciousness need not be what it seems to be based on introspection. To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behaviour, which can also be referred to as the easy problems of consciousness. Thus, Dennett argues that the hard problem of experience is included among—not separate from—the easy problems, and therefore they can only be explained together as a cohesive unit.

Eliminativists differ on the role they believe intuitive judgement plays in creating the apparent reality of consciousness. The philosopher Jacy Reese Anthis takes the position that this issue is born of an overreliance on intuition, calling philosophical discussions on the topic of consciousness a form of "intuition jousting". He argues that when the issue is tackled with "formal argumentation" and "precise semantics", the hard problem will dissolve. The philosopher Elizabeth Irvine, in contrast, can be read as having the opposite view, since she argues that phenomenal properties (that is, properties of consciousness) do not exist in our common-sense view of the world. She states that "the hard problem of consciousness may not be a genuine problem for non-philosophers (despite its overwhelming obviousness to philosophers)."

A complete illusionist theory of consciousness must include the description of a mechanism by which the illusion of subjective experience is had and reported by people. Various philosophers and scientists have proposed possible theories. For example, in his book Consciousness and the Social Brain neuroscientist Michael Graziano advocates what he calls attention schema theory, in which our perception of being conscious is merely an error in perception, held by brains which evolved to hold erroneous and incomplete models of their own internal workings, just as they hold erroneous and incomplete models of their own bodies and of the external world.

Criticisms

The main criticisms of eliminative materialism and illusionism hinge on the counterintuitive nature of the view. Arguments of this form are called Moorean Arguments. A Moorean argument seeks to undermine the conclusion of an argument by asserting that the negation of that conclusion is more certain than the premises of the argument.

The roots of the Moorean Argument against illusionism extend back to Augustine of Hippo who stated that he could not be deceived regarding his own existence, since the very act of being deceived secures the existence of a being there to be the recipient of that deception.

In the early modern era, these arguments were repopularized by René Descartes, who coined the now-famous phrase "Je pense, donc je suis" ("I think, therefore I am"). Descartes argued that even if he were maximally deceived (because, for example, an evil demon was manipulating all his senses) he would still know with certainty that his mind exists, because the state of being deceived requires a mind as a prerequisite.

This same general argumentative structure is still in use today. For example, in 2002 David Chalmers published an explicitly Moorean argument against illusionism. The argument goes like this: The reality of consciousness is more certain than any theoretical commitments (to, for example, physicalism) that may be motivating the illusionist to deny the existence of consciousness. This is because we have direct "acquaintance" with consciousness, but we do not have direct acquaintance with anything else (including anything that could inform our beliefs in consciousness being an illusion). In other words: consciousness can be known directly, so the reality of consciousness is more certain than any philosophical or scientific theory that says otherwise. Chalmers concludes that "there is little doubt that something like the Moorean argument is the reason that most people reject illusionism and many find it crazy."

Eliminative materialism and illusionism have been the subject of criticism within the popular press. One highly cited example comes from the philosopher Galen Strawson who wrote an article in the New York Review of Books titled "The Consciousness Deniers". In it, Strawson describes illusionism as the "silliest claim ever made", next to which "every known religious belief is only a little less sensible than the belief that the grass is green." Another notable example comes from Christof Koch (a neuroscientist and one of the leading proponents of Integrated Information Theory) in his popular science book The Feeling of Life Itself. In the early pages of the book, Koch describes eliminativism as the "metaphysical counterpart to Cotard's syndrome, a psychiatric condition in which patients deny being alive." Koch takes the prevalence of eliminativism as evidence that "much of twentieth-century analytic philosophy has gone to the dogs".

Frankish has responded to such criticisms by asserting that "qualia realists" have to conceive of qualia as being either observational or theoretical in nature. If conceived of as observational, then realists cannot claim that illusionists are leaving anything out of their theories of consciousness, as such a claim would presuppose qualia as having certain theoretical components. If conceived of as theoretical, then illusionists are simply denying the theoretical components of qualia but not the mere fact that they exist, which is what they're attempting to explain in the first place.

Type-B Materialism

Type-B Materialism, also known as Weak Reductionism or A Posteriori Physicalism, is the view that the hard problem stems from human psychology, and is therefore not indicative of a genuine ontological gap between consciousness and the physical world. Like Type-A Materialists, Type-B Materialists are committed to physicalism. Unlike Type-A Materialists, however, Type-B Materialists accept the conceivability arguments often cited in support of the hard problem, but with a key caveat: such arguments give us insight only into how the human mind tends to conceptualize the relationship between mind and matter, not into what the true nature of this relationship actually is. According to this view, there is a gap between two ways of knowing (introspection and neuroscience) that will not be resolved by understanding all the underlying neurobiology, but consciousness and neurobiology are nonetheless held to be one and the same in reality.

While Type-B Materialists all agree that intuitions about the hard problem are psychological rather than ontological in origin, they differ as to whether our intuitions about the hard problem are innate or culturally conditioned. This has been dubbed the "hard-wired/soft-wired distinction." In relation to Type-B Materialism, those who believe that our intuitions about the hard problem are innate (and therefore common to all humans) subscribe to the "hard-wired view". Those that believe our intuitions are culturally conditioned subscribe to the "soft-wired view". Unless otherwise specified, the term Type-B Materialism refers to the hard-wired view.

Notable philosophers who subscribe to Type-B Materialism include David Papineau, Joseph Levine, and Janet Levin.

The "hard-wired view"

Joseph Levine (who formulated the notion of the explanatory gap) states: "The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature." He nevertheless contends that full scientific understanding will not close the gap, and that analogous gaps do not exist for other identities in nature, such as that between water and H2O. The philosophers Ned Block and Robert Stalnaker agree that facts about what a conscious experience is like to the one experiencing it cannot be deduced from knowing all the facts about the underlying physiology, but by contrast argue that such gaps of knowledge are also present in many other cases in nature, such as the distinction between water and H2O.

To explain why these two ways of knowing (i.e. third-person scientific observation and first-person introspection) yield such different understandings of consciousness, weak reductionists often invoke the phenomenal concepts strategy, which argues the difference stems from our inaccurate phenomenal concepts (i.e., how we think about consciousness), not from the nature of consciousness itself. By this view, the hard problem of consciousness stems from a dualism of concepts, not from a dualism of properties or substances.

The "soft-wired view"

Some consciousness researchers have argued that the hard problem is a cultural artifact, unique to contemporary Western culture. This is similar to Type-B Materialism, but it makes the further claim that the psychological facts that cause us to intuit the hard problem are not innate but culturally conditioned. Notable researchers who hold this view include Anna Wierzbicka, Hakwan Lau, and Matthias Michel.

Wierzbicka (who is a linguist) argues that the vocabulary used by consciousness researchers (including words like experience and consciousness) is not universally translatable and is "parochially English." Wierzbicka calls David Chalmers out by name for using these words, arguing that if philosophers "were to use panhuman concepts expressed in crosstranslatable words" (such as know, think, or feel) then the hard problem would dissolve. David Chalmers has responded to these criticisms by saying that he will not "apologise for using technical terms in an academic article . . . they play a key role in efficient communication in every discipline, including Wierzbicka's".

Type-C Materialism

Type-C materialists acknowledge a distinction between knowledge and experience without asserting a more complete explanation for the experiential phenomenon. One taking this view would admit that there is an explanatory gap for which no answer to date may be satisfactory, but trust that inevitably the gap will be closed. This is described by analogy to progression in other areas of science, such as mass-energy equivalence which would have been unfathomable in ancient times, abiogenesis which was once considered paradoxical from an evolutionary framework, or a suspected future theory of everything combining relativity and quantum mechanics. Similarly, type-C materialism posits that the problem of consciousness is a consequence of our ignorance but just as resolvable as any other question in neuroscience.

Because type-C materialism sets aside the explanatory question of consciousness, it does not presuppose an answer to the descriptive question, for instance whether there is any self-consciousness, wakefulness, or even sentience in a rock. Principally, the basis for the argument arises from the apparently high correlation of consciousness with living brain tissue, thereby rejecting panpsychism without explicitly formulating physical causation. More specifically, this position denies the existence of philosophical zombies, for which there is an absence of data and no proposed method of testing. Whether via the inconceivability or the actual nonexistence of zombies, a contradiction is exposed that nullifies the premise of the consciousness problem's "hardness".

Type-C materialism is compatible with several cases and could collapse into one of these other metaphysical views depending on scientific discovery and its interpretation. With evidence of emergence, it resolves to strong reductionism under type A. With a different, possibly cultural paradigm for understanding consciousness, it resolves to type-B materialism. If consciousness is explained by the quantum mind, then it resolves to property dualism under type D. With characterisation of intrinsic properties in physics extending beyond structure and dynamics, it could resolve to type-F monism.

Richard Brown has defended an unorthodox form of type-C materialism which states that the hard problem cannot be decided a priori and the two major positions (physicalism and dualism) can only be vindicated empirically, i.e. through scientific advances. His version of type-C materialism is unorthodox because he claims that it does not collapse into the other positions. He uses "reverse zombie" and "reverse knowledge" thought experiments (anti-dualist versions of the standard anti-physicalist arguments) to show that a priori arguments beg the question and are only useful for revealing one's own intuitions, whether physicalist or dualist. The only reason why such thought experiments, both anti-physicalist and anti-dualist, seem intuitive is because they are prima facie conceivable but not ideally conceivable, where ideal conceivability involves knowledge of the completed science and thus the ability to deduce a priori the discovered identities, in the same way that "water is H₂O" was discovered empirically but the identity is deducible a priori.

Type-D and Type-E Dualism

Dualism views consciousness as either a non-physical substance separate from the brain or a non-physical property of the physical brain. Dualism is the view that the mind is irreducible to the physical body. There are multiple dualist accounts of the causal relationship between the mental and the physical, of which interactionism and epiphenomenalism are the most common today. Interactionism posits that the mental and physical causally impact one another, and is associated with the thought of René Descartes (1596–1650). Epiphenomenalism holds the mental is causally dependent on the physical, but does not in turn causally impact it.

In contemporary philosophy, interactionism has been defended by philosophers including Martine Nida-Rümelin, while epiphenomenalism has been defended by philosophers including Frank Jackson (although Jackson later changed his stance to physicalism). Chalmers has also defended versions of both positions as plausible. Traditional dualists such as Descartes believed the mental and the physical to be two separate substances, or fundamental types of entities (hence "substance dualism"); some more recent dualists, however, accept only one substance, the physical, but state it has both mental and physical properties (hence "property dualism").

Type-F Monism

Meanwhile, panpsychism and neutral monism, broadly speaking, view consciousness as intrinsic to matter. In its most basic form, panpsychism holds that all physical entities have minds (though its proponents take more qualified positions), while neutral monism, in at least some variations, holds that entities are composed of a substance with mental and physical aspects—and is thus sometimes described as a type of panpsychism.

Forms of panpsychism and neutral monism were defended in the early twentieth century by the psychologist William James, the philosopher Alfred North Whitehead, the physicist Arthur Eddington, and the philosopher Bertrand Russell, and interest in these views has been revived in recent decades by philosophers including Thomas Nagel, Galen Strawson, Philip Goff, and David Chalmers. Chalmers describes his overall view as "naturalistic dualism", but he says panpsychism is in a sense a form of physicalism, as does Strawson. Proponents of panpsychism argue it solves the hard problem of consciousness parsimoniously by making consciousness a fundamental feature of reality.

Idealism and cosmopsychism

A traditional solution to the hard problem is idealism, according to which consciousness is fundamental and not simply an emergent property of matter. It is claimed that this avoids the hard problem entirely. Objective idealism and cosmopsychism consider mind or consciousness to be the fundamental substance of the universe. Proponents claim that this approach is immune to both the hard problem of consciousness and the combination problem that affects panpsychism.

From an idealist perspective, matter is a representation or image of mental processes. Supporters suggest that this avoids the problems associated with the materialist view of mind as an emergent property of a physical brain. Critics argue that this then leads to a decombination problem: how is it possible to split a single, universal conscious experience into multiple, distinct conscious experiences? In response, Bernardo Kastrup claims that nature hints at a mechanism for this in dissociative identity disorder (previously known as multiple personality disorder). Kastrup proposes dissociation as an example from nature showing that multiple minds with their own individual subjective experience could develop within a single universal mind.

Cognitive psychologist Donald D. Hoffman uses a mathematical model based around conscious agents, within a fundamentally conscious universe, to support conscious realism as a description of nature—one that falls within the objective idealism approaches to the hard problem: "The objective world, i.e., the world whose existence does not depend on the perceptions of a particular conscious agent, consists entirely of conscious agents."

David Chalmers calls this form of idealism one of "the handful of promising approaches to the mind–body problem."

New mysterianism

New mysterianism, most significantly associated with the philosopher Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness. McGinn draws on Noam Chomsky's distinction between problems, which are in principle solvable, and mysteries, which human cognitive faculties are unequipped to ever understand, and places the mind–body problem in the latter category. His position is that a naturalistic explanation does exist but that the human mind is cognitively closed to it due to its limited range of intellectual abilities. He cites Jerry Fodor's concept of the modularity of mind in support of cognitive closure.

While in McGinn's strong form, new mysterianism states that the relationship between consciousness and the material world can never be understood by the human mind, there are also weaker forms that argue it cannot be understood within existing paradigms but that advances in science or philosophy may open the way to other solutions (see above). The ideas of Thomas Nagel and Joseph Levine fall into the second category. Steven Pinker has also endorsed this weaker version of the view, summarising it as follows:

And then there is the theory put forward by philosopher Colin McGinn that our vertigo when pondering the Hard Problem is itself a quirk of our brains. The brain is a product of evolution, and just as animal brains have their limitations, we have ours. Our brains can't hold a hundred numbers in memory, can't visualize seven-dimensional space and perhaps can't intuitively grasp why neural information processing observed from the outside should give rise to subjective experience on the inside. This is where I place my bet, though I admit that the theory could be demolished when an unborn genius—a Darwin or Einstein of consciousness—comes up with a flabbergasting new idea that suddenly makes it all clear to us.

Commentary on the problem's explanatory targets

Philosopher Raamy Majeed argued in 2016 that the hard problem is associated with two "explanatory targets":

  1. [PQ] Physical processing gives rise to experiences with a phenomenal character.
  2. [Q] Our phenomenal qualities are thus-and-so.

The first fact concerns the relationship between the physical and the phenomenal (i.e., how and why are some physical states felt states?), whereas the second concerns the very nature of the phenomenal itself (i.e., what does the felt state feel like?).

Wolfgang Fasching argues that the hard problem is not about qualia, but about the what-it-is-like-ness of experience in Nagel's sense—about the givenness of phenomenal contents:

Today there is a strong tendency to simply equate consciousness with the qualia. Yet there is clearly something not quite right about this. The "itchiness of itches" and the "hurtfulness of pain" are qualities we are conscious of. So philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore the problem of consciousness does not pertain so much to some alleged "mysterious, nonpublic objects", i.e. objects that seem to be only "visible" to the respective subject, but rather to the nature of "seeing" itself (and in today's philosophy of mind astonishingly little is said about the latter).

Relationship to scientific frameworks

Most neuroscientists and cognitive scientists believe that Chalmers' alleged "hard problem" will be solved, or be shown to not be a real problem, in the course of the solution of the so-called "easy problems", although a significant minority disagrees.

Neural correlates of consciousness

Since 1990, researchers including the molecular biologist Francis Crick and the neuroscientist Christof Koch have made significant progress toward identifying which neurobiological events occur concurrently with the experience of subjective consciousness. These postulated events are referred to as neural correlates of consciousness or NCCs. However, this research arguably addresses the question of which neurobiological mechanisms are linked to consciousness but not the question of why they should give rise to consciousness at all, the latter being the hard problem of consciousness as Chalmers formulated it. In "On the Search for the Neural Correlate of Consciousness", Chalmers said he is confident that, granting the principle that something such as what he terms "global availability" can be used as an indicator of consciousness, the neural correlates will be discovered "in a century or two". Nevertheless, he stated regarding their relationship to the hard problem of consciousness:

One can always ask why these processes of availability should give rise to consciousness in the first place. As yet we cannot explain why they do so, and it may well be that full details about the processes of availability will still fail to answer this question. Certainly, nothing in the standard methodology I have outlined answers the question; that methodology assumes a relation between availability and consciousness, and therefore does nothing to explain it. [...] So the hard problem remains. But who knows: Somewhere along the line we may be led to the relevant insights that show why the link is there, and the hard problem may then be solved.

The neuroscientist and Nobel laureate Eric Kandel wrote that locating the NCCs would not solve the hard problem, but rather one of the so-called easy problems to which the hard problem is contrasted. Kandel went on to note Crick and Koch's suggestion that once the binding problem—understanding what accounts for the unity of experience—is solved, it will be possible to solve the hard problem empirically. However, neuroscientist Anil Seth argued that emphasis on the so-called hard problem is a distraction from what he calls the "real problem": understanding the neurobiology underlying consciousness, namely the neural correlates of various conscious processes. This more modest goal is the focus of most scientists working on consciousness. Psychologist Susan Blackmore believes, by contrast, that the search for the neural correlates of consciousness is futile and itself predicated on an erroneous belief in the hard problem of consciousness.

Computational cognition

A functionalist view in cognitive science holds that the mind is an information processing system, and that cognition and consciousness together are a form of computation. Cognition, as distinct from consciousness, is explained by neural computation in the computational theory of cognition. The computational theory of mind goes further and asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. Although in the human case the computational system is realised by neurons rather than electronics, on this view it would in principle be possible for an artificial intelligence to be conscious.

Integrated information theory

Integrated information theory (IIT), developed by the neuroscientist and psychiatrist Giulio Tononi in 2004 and more recently also advocated by Koch, is one of the most discussed models of consciousness in neuroscience and elsewhere. The theory proposes an identity between consciousness and integrated information, with the latter item (denoted as Φ) defined mathematically and thus in principle measurable. The hard problem of consciousness, write Tononi and Koch, may indeed be intractable when working from matter to consciousness. However, because IIT inverts this relationship and works from phenomenological axioms to matter, they say it could be able to solve the hard problem. In this vein, proponents have said the theory goes beyond identifying human neural correlates and can be extrapolated to all physical systems. Tononi wrote (along with two colleagues):

While identifying the "neural correlates of consciousness" is undoubtedly important, it is hard to see how it could ever lead to a satisfactory explanation of what consciousness is and how it comes about. As will be illustrated below, IIT offers a way to analyse systems of mechanisms to determine if they are properly structured to give rise to consciousness, how much of it, and of which kind.
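
As a rough, schematic illustration only (the precise definition varies across versions of IIT, and the choice of divergence measure and the computation of cause-effect repertoires are considerably more involved than shown here), Φ can be glossed as the information lost when the system's causal structure is cut along its weakest partition: a system whose dynamics factorise cleanly into independent parts scores near zero, while a highly integrated system scores high. In LaTeX notation, something like:

\Phi(X) \;\approx\; \min_{P \in \mathcal{P}(X)} D\!\left( p(X_t \mid X_{t-1}) \,\middle\|\, \prod_{M \in P} p(M_t \mid M_{t-1}) \right)

where \mathcal{P}(X) ranges over partitions of the system X into parts M, and D is a divergence between the whole system's transition probabilities and the product of its parts' transition probabilities considered in isolation.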

As part of a broader critique of IIT, Michael Cerullo suggested that the theory's proposed explanation is in fact for what he dubs (following Scott Aaronson) the "Pretty Hard Problem" of methodically inferring which physical systems are conscious—but would not solve Chalmers' hard problem. "Even if IIT is correct," he argues, "it does not explain why integrated information generates (or is) consciousness." Chalmers agrees that IIT, if correct, would solve the "Pretty Hard Problem" rather than the hard problem.

Global workspace theory

Global workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. Baars explains the theory with the metaphor of a theatre, with conscious processes represented by an illuminated stage. This theatre integrates inputs from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit "audience"). The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene.
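
The theatre metaphor can be made concrete with a minimal, hypothetical sketch in Python (the class and parameter names are invented; this is a toy illustration of the general architecture, not Baars's or Dehaene's actual models): specialist processors compete by posting candidate messages, the most salient message wins access to the workspace, and its content is then broadcast back to every processor.

# Toy sketch of a global-workspace-style architecture. Illustrative only;
# names and numbers are invented, not part of Baars's or Dehaene's models.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Message:
    source: str      # which specialist proposed the content
    content: str     # what the content is about
    salience: float  # how strongly it competes for the workspace

class Specialist:
    """An unconscious, autonomous processor that can propose content
    and receives whatever is broadcast from the workspace."""
    def __init__(self, name: str, propose: Callable[[dict], Optional[Message]]):
        self.name = name
        self.propose = propose
        self.received: List[str] = []
    def receive(self, content: str) -> None:
        self.received.append(content)

class GlobalWorkspace:
    """The 'stage': one winning message per cycle, broadcast to all."""
    def __init__(self, specialists: List[Specialist]):
        self.specialists = specialists
    def cycle(self, stimulus: dict) -> Optional[Message]:
        # Specialists compete by proposing messages about the stimulus.
        candidates = [m for s in self.specialists if (m := s.propose(stimulus))]
        if not candidates:
            return None
        # The most salient message gains access to the workspace...
        winner = max(candidates, key=lambda m: m.salience)
        # ...and its content is broadcast globally (to the "audience").
        for s in self.specialists:
            s.receive(winner.content)
        return winner

vision = Specialist("vision", lambda s: Message("vision", f"saw {s['image']}", 0.6)
                    if "image" in s else None)
pain = Specialist("pain", lambda s: Message("pain", "sharp pain in toe", 0.9)
                  if s.get("injury") else None)
workspace = GlobalWorkspace([vision, pain])
winner = workspace.cycle({"image": "red apple", "injury": True})
print(winner)           # the pain message wins this cycle
print(vision.received)  # its content reaches even the vision specialist

The design feature that matters for the theory is the single broadcast bottleneck: only one message at a time becomes globally available, and it is this globally broadcast content that GWT identifies with the contents of consciousness.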

In his original paper outlining the hard problem of consciousness, Chalmers discussed GWT as a theory that only targets one of the "easy problems" of consciousness. In particular, he said GWT provided a promising account of how information in the brain could become globally accessible, but argued that "now the question arises in a different form: why should global accessibility give rise to conscious experience? As always, this bridging question is unanswered." J. W. Dalton similarly criticised GWT on the grounds that it provides, at best, an account of the cognitive function of consciousness, and fails to explain its experiential aspect. By contrast, A. C. Elitzur argued: "While [GWT] does not address the 'hard problem', namely, the very nature of consciousness, it constrains any theory that attempts to do so and provides important insights into the relation between consciousness and cognition."

For his part, Baars writes (along with two colleagues) that there is no hard problem of explaining qualia over and above the problem of explaining causal functions, because qualia are entailed by neural activity and themselves causal. Dehaene, in his 2014 book Consciousness and the Brain, rejected the concept of qualia and argued that Chalmers' "easy problems" of consciousness are actually the hard problems. He further stated that the "hard problem" is based only upon ill-defined intuitions that are continually shifting as understanding evolves:

Once our intuitions are educated by cognitive neuroscience and computer simulations, Chalmers' hard problem will evaporate. The hypothetical concept of qualia, pure mental experience, detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism... [Just as science dispatched vitalism] the science of consciousness will keep eating away at the hard problem of consciousness until it vanishes.

Meta-problem

In 2018, Chalmers highlighted what he calls the "meta-problem of consciousness", another problem related to the hard problem of consciousness:

The meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a [hard] problem of consciousness.

In his "second approximation", he says it is the problem of explaining the behaviour of "phenomenal reports", and the behaviour of expressing a belief that there is a hard problem of consciousness.

Explaining its significance, he says:

Although the meta-problem is strictly speaking an easy problem, it is deeply connected to the hard problem. We can reasonably hope that a solution to the meta-problem will shed significant light on the hard problem. A particularly strong line holds that a solution to the meta-problem will solve or dissolve the hard problem. A weaker line holds that it will not remove the hard problem, but it will constrain the form of a solution.

In other words, the 'strong line' holds that the solution to the meta-problem would provide an explanation of our beliefs about consciousness that is independent of consciousness. That might debunk our beliefs about consciousness, in the same way that (Chalmers suggests) explaining beliefs about god in evolutionary terms may provide arguments against theism itself.

Tom Stoppard's play The Hard Problem, first produced in 2015, is named after the hard problem of consciousness, which Stoppard defines as having "subjective First Person experiences".

Sentience

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Sentience
Determining which animals can experience sensations is challenging, but scientists generally agree that vertebrates, as well as many invertebrate species, are likely sentient.

Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes. Some theorists define sentience exclusively as the capacity for valenced (positive or negative) mental experiences, such as pain and pleasure.

Sentience is an important concept in ethics, as the ability to experience happiness or suffering often forms a basis for determining which entities deserve moral consideration, particularly in utilitarianism.

The word "sentience" has been used to translate a variety of concepts in Asian religions. In science fiction, "sentience" is sometimes used interchangeably with "sapience", "self-awareness", or "consciousness".

Sentience in philosophy

"Sentience" was first coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentiens (feeling). In philosophy, different authors draw different distinctions between consciousness and sentience. According to Antonio Damasio, sentience is a minimalistic way of defining consciousness, which otherwise commonly and collectively describes sentience plus further features of the mind and consciousness, such as creativity, intelligence, sapience, self-awareness, and intentionality (the ability to have thoughts about something). These further features of consciousness may not be necessary for sentience, which is the capacity to feel sensations and emotions.

Consciousness

According to Thomas Nagel in his paper "What Is It Like to Be a Bat?", consciousness can refer to the ability of any entity to have subjective perceptual experiences, or as some philosophers refer to them, "qualia"—in other words, the ability to have states that it feels like something to be in. Some philosophers, notably Colin McGinn, believe that the physical process causing consciousness to happen will never be understood, a position known as "new mysterianism". They do not deny that most other aspects of consciousness are subject to scientific investigation, but they argue that qualia will never be explained. Other philosophers, such as Daniel Dennett, argue that the very notion of qualia is not meaningful.

Regarding animal consciousness, the Cambridge Declaration on Consciousness, publicly proclaimed on 7 July 2012 at Cambridge University, states that many non-human animals possess the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states, and can exhibit intentional behaviors. The declaration notes that all vertebrates (including fish and reptiles) have this neurological substrate for consciousness, and that there is strong evidence that many invertebrates also have it.

Phenomenal vs. affective consciousness

David Chalmers argues that sentience is sometimes used as shorthand for phenomenal consciousness, the capacity to have any subjective experience at all, but sometimes refers to the narrower concept of affective consciousness, the capacity to experience subjective states that have affective valence (i.e., a positive or negative character), such as pain and pleasure.

Sentience quotient

The sentience quotient concept was introduced by Robert A. Freitas Jr. in the late 1970s. It defines sentience as the relationship between the information processing rate of each individual processing unit (neuron), the weight/size of a single unit, and the total number of processing units (expressed as mass). It was proposed as a measure for the sentience of all living beings and computers from a single neuron up to a hypothetical being at the theoretical computational limit of the entire universe. On a logarithmic scale it runs from −70 up to +50.
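
Freitas's quotient is usually written as SQ = log₁₀(I/M), where I is the total information-processing rate in bits per second and M is the mass of the processing system in kilograms. The short sketch below plugs in rough, order-of-magnitude values (the figures for the human brain, the mass of the universe, and the quantum processing limit are illustrative assumptions, not Freitas's exact numbers) to show how the −70 to +50 range arises.

```python
# Sentience quotient: SQ = log10(I / M), with I in bits per second and M in kg.
# All numeric values below are rough, illustrative orders of magnitude.
from math import log10

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    return log10(bits_per_second / mass_kg)

# A human brain: very roughly 10**13 bit/s in about 1.5 kg of tissue.
print(round(sentience_quotient(1e13, 1.5)))    # ~ +13

# Lower end of the scale: the mass of the observable universe (~10**52 kg)
# processing a single bit over its age (~10**18 s, i.e. ~10**-18 bit/s).
print(round(sentience_quotient(1e-18, 1e52)))  # -70

# Upper end: one kilogram processing at a quantum (Bremermann-type) limit
# of roughly 10**50 bit/s.
print(round(sentience_quotient(1e50, 1.0)))    # +50
```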

Eastern religions

Eastern religions including Hinduism, Buddhism, Sikhism, and Jainism recognise non-humans as sentient beings. The term sentient beings is translated from various Sanskrit terms (jantu, bahu jana, jagat, sattva) and "conventionally refers to the mass of living things subject to illusion, suffering, and rebirth (Saṃsāra)". It is related to the concept of ahimsa, non-violence toward other beings.

In Jainism, many things are endowed with a soul, jīva, which is sometimes translated as 'sentience'. Some things are without a soul, ajīva, such as a chair or spoon. There are different rankings of jīva based on the number of senses it has. Water, for example, is a sentient being of the first order, as it is considered to possess only one sense, that of touch.

Sentience in Buddhism is the state of having senses. In Buddhism, there are six senses, the sixth being the subjective experience of the mind. Sentience is simply awareness prior to the arising of Skandha. Thus, an animal qualifies as a sentient being. According to Buddhism, sentient beings made of pure consciousness are possible. In Mahayana Buddhism, which includes Zen and Tibetan Buddhism, the concept is related to the Bodhisattva, an enlightened being devoted to the liberation of others. The first vow of a Bodhisattva states, "Sentient beings are numberless; I vow to free them." In traditional Tibetan Buddhism, plants, stones and other inanimate objects are described as possessing spiritual vitality or a form of 'sentience'.

Animal welfare, rights, and sentience

An octopus traveling with shells collected for protection. Despite evolving independently from humans for over 600 million years, octopuses show various signs of sentience. Octopuses, along with all other cephalopod molluscs and decapod crustaceans, were recognized as sentient by the United Kingdom in 2023.

Sentience has been a central concept in the animal rights movement, tracing back to the well-known writing of Jeremy Bentham in An Introduction to the Principles of Morals and Legislation: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"

Richard D. Ryder defines sentientism broadly as the position according to which an entity has moral status if and only if it is sentient. In David Chalmers's more specific terminology, Bentham is a narrow sentientist, since his criterion for moral status is not the ability to experience any phenomenal consciousness at all, but specifically the ability to experience conscious states with negative affective valence (i.e. suffering). Animal welfare and rights advocates often invoke similar capacities. For example, the documentary Earthlings argues that while animals do not have all the desires and the capacity for comprehension that humans have, they do share the desires for food and water, shelter and companionship, freedom of movement and avoidance of pain.

Animal welfare advocates typically argue that sentient beings should be protected from unnecessary suffering, whereas animal rights advocates propose a set of basic rights for animals, such as the right to life, liberty, and freedom from suffering.

Gary Francione also bases his abolitionist theory of animal rights, which differs significantly from Peter Singer's, on sentience. He asserts that, "All sentient beings, humans or nonhuman, have one right: the basic right not to be treated as the property of others."

Andrew Linzey, a British theologian, considers that Christianity should regard sentient animals according to their intrinsic worth, rather than their utility to humans.

In 1997 the concept of animal sentience was written into the basic law of the European Union. The legally binding protocol annexed to the Treaty of Amsterdam recognises that animals are "sentient beings", and requires the EU and its member states to "pay full regard to the welfare requirements of animals".

Indicators of sentience

Experiments suggest that bees can display an optimistic mood, engage in playful behavior, and strategically avoid threats or harmful situations unless the reward is significant.

Nociception is the process by which the nervous system detects and responds to potentially harmful stimuli, leading to the sensation of pain. It involves specialized receptors called nociceptors that sense damage or threat and send signals to the brain. Nociception is widespread among animals, even among insects.

The presence of nociception indicates an organism's ability to detect harmful stimuli. A further question is whether the way these noxious stimuli are processed within the brain leads to a subjective experience of pain. To address that, researchers often look for behavioral cues. For example, "if a dog with an injured paw whimpers, licks the wound, limps, lowers pressure on the paw while walking, learns to avoid the place where the injury happened and seeks out analgesics when offered, we have reasonable grounds to assume that the dog is indeed experiencing something unpleasant." Avoiding painful stimuli unless the reward is significant can also provide evidence that pain avoidance is not merely an unconscious reflex (similarly to how humans "can choose to press a hot door handle to escape a burning building").

Sentient animals

Animals such as pigs, chickens, and fish are typically recognized as sentient. There is more uncertainty regarding insects, and findings on certain insect species may not be applicable to others.

Historically, fish were not considered sentient, and their behaviors were often viewed as "reflexes or complex, unconscious species-typical responses" to their environment. Their dissimilarity to humans, including the absence of a direct equivalent of the neocortex in their brain, was used as an argument against sentience. Jennifer Jacquet suggests that the belief that fish do not feel pain originated in response to a 1980s policy aimed at banning catch-and-release fishing. The range of animals regarded by scientists as sentient or conscious has progressively widened, and now includes animals such as fish, lobsters and octopuses.

Digital sentience

Digital sentience (or artificial sentience) means the sentience of artificial intelligences. The question of whether artificial intelligences can be sentient is controversial.

The AI research community does not consider sentience (that is, the "ability to feel sensations") an important research goal, unless it can be shown that consciously "feeling" a sensation can make a machine more intelligent than merely receiving input from sensors and processing it as information. Stuart Russell and Peter Norvig wrote in 2021: "We are interested in programs that behave intelligently. Individual aspects of consciousness—awareness, self-awareness, attention—can be programmed and can be part of an intelligent machine. The additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on." Indeed, leading AI textbooks do not mention "sentience" at all.

Digital sentience is of considerable interest to the philosophy of mind. Functionalist philosophers consider that sentience is about "causal roles" played by mental states, which involve information processing. In this view, the physical substrate of this information processing does not need to be biological, so there is no theoretical barrier to the possibility of sentient machines. According to type physicalism however, the physical constitution is important; and depending on the types of physical systems required for sentience, it may or may not be possible for certain types of machines (such as electronic computing devices) to be sentient.

The discussion of the alleged sentience of artificial intelligence was reignited in 2022 by claims made about Google's LaMDA (Language Model for Dialogue Applications) artificial intelligence system, namely that it is "sentient" and has a "soul". LaMDA is an artificial intelligence system that creates chatbots—AI robots designed to communicate with humans—by gathering vast amounts of text from the internet and using algorithms to respond to queries in the most fluid and natural way possible. The transcripts of conversations between scientists and LaMDA reveal that the AI system excels at this, providing answers to challenging questions about the nature of emotions, generating Aesop-style fables on cue, and even describing its alleged fears.

Nick Bostrom considers that while LaMDA is probably not sentient, being very sure of it would require understanding how consciousness works, having access to unpublished information about LaMDA's architecture, and finding how to apply the philosophical theory to the machine. He also said about LLMs that "it's not doing them justice to say they're simply regurgitating text", noting that they "exhibit glimpses of creativity, insight and understanding that are quite impressive and may show the rudiments of reasoning". He thinks that "sentience is a matter of degree".

In 2022, philosopher David Chalmers gave a talk on whether large language models (LLMs) can be conscious, encouraging more research on the subject. He suggested that current LLMs are probably not conscious, but that the limitations are temporary and that future systems could be serious candidates for consciousness.

According to Jonathan Birch, "measures to regulate the development of sentient AI should run ahead of what would be proportionate to the risks posed by current technology, considering also the risks posed by credible future trajectories." He is concerned that AI sentience would be particularly easy to deny, and that even if achieved, humans might nevertheless continue to treat AI systems as mere tools. He notes that the linguistic behaviour of LLMs is not a reliable way to assess whether they are sentient. He suggests applying theories of consciousness, such as global workspace theory, to the algorithms implicitly learned by LLMs, but notes that this technique requires advances in AI interpretability to understand what happens inside them. He also mentions other pathways that may lead to AI sentience, such as brain emulation of sentient animals.

Hate speech

From Wikipedia, the free encyclopedia

Hate speech is a term with varied meaning and has no single, consistent definition. Cambridge Dictionary defines hate speech as "public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation". The Encyclopedia of the American Constitution states that hate speech is "usually thought to include communications of animosity or disparagement of an individual or a group on account of a group characteristic such as race, color, national origin, sex, disability, religion, or sexual orientation". Hate speech can include incitement based on social class or political beliefs. There is no single definition of what constitutes "hate" or "disparagement". Legal definitions of hate speech vary from country to country.

There has been much debate over freedom of speech, hate speech, and hate speech legislation. The laws of some countries describe hate speech as speech, gestures, conduct, writing, or displays that incite violence or prejudicial actions against a group or individuals on the basis of their membership in the group, or that disparage or intimidate a group or individuals on the basis of their membership in the group. The law may identify protected groups based on certain characteristics. In some countries, a victim of hate speech may seek redress under civil law, criminal law, or both. In the United States, what is usually labelled "hate speech" is constitutionally protected.

Hate speech is generally accepted to be one of the prerequisites for mass atrocities such as genocide. Incitement to genocide is an extreme form of hate speech, and has been prosecuted in international courts such as the International Criminal Tribunal for Rwanda.

History

Early hate speech laws were enacted in the 1820s in France and 1851 in Prussia.

Starting in the 1940s and 50s, various American civil rights groups responded to the atrocities of World War II by advocating for restrictions on hateful speech targeting groups on the basis of race and religion. These organizations used group libel as a legal framework for describing hate speech and addressing its harm. In his discussion of the history of criminal libel, scholar Jeremy Waldron states that these laws helped "vindicate public order, not just by preempting violence, but by upholding against attack a shared sense of the basic elements of each person's status, dignity, and reputation as a citizen or member of society in good standing". A key legal victory for this view came in 1952 when group libel law was affirmed by the United States Supreme Court in Beauharnais v. Illinois. However, the group libel approach lost ground due to a rise in support for individual rights within civil rights movements during the 60s. Critiques of group defamation laws are not limited to defenders of individual rights. Some legal theorists, such as critical race theorist Richard Delgado, support legal limits on hate speech, but claim that defamation is too narrow a category to fully counter hate speech. Ultimately, Delgado advocates a legal strategy that would establish a specific section of tort law for responding to racist insults, citing the difficulty of receiving redress under the existing legal system.

Internet

The rise of the internet and social media has presented a new medium through which hate speech can spread. Hate speech on the internet can be traced back to its earliest years: a 1983 bulletin board system created by neo-Nazi George Dietz is considered the first instance of hate speech online. As the internet evolved, hate speech continued to spread and expand its footprint; the first hate speech website, Stormfront, was launched in 1996, and hate speech has since become one of the central challenges for social media platforms.

The structure and nature of the internet contribute to both the creation and persistence of hate speech online. Widespread access to the internet gives hatemongers an easy way to spread their message to large audiences with little cost and effort. According to the International Telecommunication Union, approximately 66% of the world population has access to the internet. Additionally, the pseudo-anonymous nature of the internet emboldens many to make statements constituting hate speech that they would not otherwise make for fear of social or real-life repercussions. While some governments and companies attempt to combat this type of behavior by leveraging real-name systems, difficulties in verifying identities online, public opposition to such policies, and sites that do not enforce these policies leave large spaces in which this behavior persists.

Because the internet crosses national borders, comprehensive government regulation of online hate speech can be difficult to implement and enforce. Governments that want to regulate hate speech contend with issues around lack of jurisdiction and conflicting viewpoints from other countries. In an early example of this, the case of Yahoo! Inc. v. La Ligue Contre Le Racisme et l'Antisemitisme saw a French court hold Yahoo! liable for allowing Nazi memorabilia auctions to be visible to the public. Yahoo! refused to comply with the ruling and ultimately won relief in a U.S. court, which found that the ruling was unenforceable in the U.S. Disagreements like these make national-level regulation difficult, and while there are international efforts and laws that attempt to regulate hate speech and its online presence, as with most international agreements the implementation and interpretation of these treaties vary by country.

Much of the regulation of online hate speech is performed voluntarily by individual companies. Many major tech companies have adopted terms of service that outline what content is allowed on their platforms, often banning hate speech. In a notable step, on 31 May 2016, Facebook, Google, Microsoft, and Twitter jointly agreed to a European Union code of conduct obligating them to review "[the] majority of valid notifications for removal of illegal hate speech" posted on their services within 24 hours. Techniques employed by these companies to regulate hate speech include user reporting, artificial-intelligence flagging, and manual review of content by employees. Major search engines like Google Search also tweak their algorithms to try to suppress hateful content in their results. Despite these efforts, however, hate speech remains a persistent problem online: according to a 2021 study by the Anti-Defamation League, 33% of Americans were the target of identity-based harassment in the preceding year, a figure that has not noticeably shifted downwards despite increasing self-regulation by companies.
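
As a rough illustration of how those techniques can fit together, the sketch below models a hypothetical review pipeline in which user reports and an assumed automated classifier score both feed a prioritised human-review queue; the names, threshold, and data model are invented for illustration and do not correspond to any platform's actual system.

```python
# Hypothetical moderation pipeline: user reports + automated flagging -> human review.
from dataclasses import dataclass

FLAG_THRESHOLD = 0.8   # assumed classifier score above which content is auto-flagged

@dataclass
class Post:
    post_id: int
    text: str
    reports: int = 0          # number of user reports received
    model_score: float = 0.0  # output of an assumed hate-speech classifier, 0..1

def needs_human_review(post: Post) -> bool:
    # Either signal alone is enough to queue the post for manual review.
    return post.reports > 0 or post.model_score >= FLAG_THRESHOLD

def review_queue(posts):
    # Highest-risk content first: combine both signals into a rough priority.
    flagged = [p for p in posts if needs_human_review(p)]
    return sorted(flagged, key=lambda p: (p.model_score, p.reports), reverse=True)

posts = [
    Post(1, "example post A", reports=3, model_score=0.20),
    Post(2, "example post B", reports=0, model_score=0.95),
    Post(3, "example post C", reports=0, model_score=0.10),
]
for p in review_queue(posts):
    print(p.post_id)   # 2, then 1; post 3 is never queued for review
```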

State-sanctioned hate speech

A number of states and state-aligned actors, including Saudi Arabia, Iran, Hutu factions in Rwanda, parties to the Yugoslav Wars, and Ethiopia, have been described as spreading official hate speech or incitement to genocide.

Hate speech laws

After World War II, Germany criminalized Volksverhetzung ("incitement of popular hatred") to prevent a resurgence of Nazism. Hate speech on the basis of sexual orientation and gender identity is also banned in Germany. Most European countries have likewise implemented various laws and regulations regarding hate speech, and the European Union's Framework Decision 2008/913/JHA requires member states to criminalize hate crimes and hate speech (though individual implementation and interpretation of this framework varies by state).

International human rights law, interpreted by bodies such as the United Nations Human Rights Committee, protects freedom of expression. One of the most fundamental documents is the Universal Declaration of Human Rights (UDHR), adopted by the U.N. General Assembly in 1948. Article 19 of the UDHR states that "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."

While there are fundamental laws in place designed to protect freedom of expression, there are also multiple international laws that expand on the UDHR and pose limitations and restrictions, specifically concerning the safety and protection of individuals.

Most developed democracies have laws that restrict hate speech, including Australia, Canada, Denmark, France, Germany, India, Ireland, South Africa, Sweden, New Zealand, and the United Kingdom. The United States does not have hate speech laws, because the U.S. Supreme Court has repeatedly ruled that they violate the guarantee to freedom of speech contained in the First Amendment to the U.S. Constitution.

Laws against hate speech can be divided into two types: those intended to preserve public order and those intended to protect human dignity. The laws designed to protect public order require that a higher threshold be violated, so they are not often enforced. For example, a 1992 study found that only one person was prosecuted in Northern Ireland in the preceding 21 years for violating a law against incitement to religious violence. The laws meant to protect human dignity have a much lower threshold for violation, so those in Canada, Denmark, France, Germany and the Netherlands tend to be more frequently enforced.

Criticism

Several activists and scholars have criticized the practice of limiting hate speech. Kim Holmes, Vice President of the conservative Heritage Foundation and a critic of hate speech theory, has argued that it "assumes bad faith on the part of people regardless of their stated intentions" and that it "obliterates the ethical responsibility of the individual". Rebecca Ruth Gould, a professor of Islamic and Comparative Literature at the University of Birmingham, argues that laws against hate speech constitute viewpoint discrimination (which is prohibited by the First Amendment in the United States) as the legal system punishes some viewpoints but not others. Other scholars, such as Gideon Elford, argue instead that "insofar as hate speech regulation targets the consequences of speech that are contingently connected with the substance of what is expressed then it is viewpoint discriminatory in only an indirect sense." John Bennett argues that restricting hate speech relies on questionable conceptual and empirical foundations and is reminiscent of efforts by totalitarian regimes to control the thoughts of their citizens.

Civil libertarians say that hate speech laws have been used, in both developing and developed nations, to persecute minority viewpoints and critics of the government. Former ACLU president Nadine Strossen says that, while efforts to censor hate speech have the goal of protecting the most vulnerable, they are ineffective and may have the opposite effect: disadvantaged and ethnic minorities being charged with violating laws against hate speech. Journalist Glenn Greenwald says that hate speech laws in Europe have been used to censor left-wing views as much as they have been used to combat hate speech.

Miisa Kreandner and Eriz Henze argue that hate speech laws are arbitrary, as they protect only some categories of people and not others. Henze argues that the only way to resolve this problem without abolishing hate speech laws would be to extend them to all conceivable categories, which, he contends, would amount to totalitarian control over speech.

Michael Conklin argues that there are benefits to hate speech that are often overlooked. He contends that allowing hate speech provides a more accurate view of the human condition, provides opportunities to change people's minds, and identifies certain people who may need to be avoided in certain circumstances. According to one psychological research study, a high degree of psychopathy is "a significant predictor" of involvement in online hate activity, while none of the seven other potential factors examined was found to have statistically significant predictive power.

Political philosopher Jeffrey W. Howard considers the popular framing of hate speech as "free speech vs. other political values" to be a mischaracterization. He refers to this as the "balancing model", and says it seeks to weigh the benefit of free speech against other values such as dignity and equality for historically marginalized groups. Instead, he believes that the crux of the debate should be whether or not freedom of expression is inclusive of hate speech. Research indicates that when people support censoring hate speech, they are motivated more by concerns about the effects the speech has on others than about its effects on themselves. Women are somewhat more likely than men to support censoring hate speech, owing to the greater perceived harm of hate speech, which some researchers believe may be due to gender differences in empathy towards the targets of hate speech.

Ethnocentrism

From Wikipedia, the free encyclopedia
Polish sociologist Ludwig Gumplowicz is believed to have coined the term "ethnocentrism" in the 19th century, although he may have merely popularized it

Ethnocentrism in social science and anthropology—as well as in colloquial English discourse—is the application of one's own culture or ethnicity as a frame of reference to judge other cultures, practices, behaviors, beliefs, and people, instead of using the standards of the particular culture involved. Since this judgment is often negative, some people also use the term to refer to the belief that one's culture is superior to, or more correct or normal than, all others—especially regarding the distinctions that define each ethnicity's cultural identity, such as language, behavior, customs, and religion. In common usage, it can also simply mean any culturally biased judgment. For example, ethnocentrism can be seen in the common portrayals of the Global South and the Global North.

Ethnocentrism is sometimes related to racism, stereotyping, discrimination, or xenophobia. However, the term "ethnocentrism" does not necessarily involve a negative view of the others' race or indicate a negative connotation. The opposite of ethnocentrism is cultural relativism, a guiding philosophy stating that the best way to understand a different culture is through their perspective rather than judging them from the subjective viewpoints shaped by one's own cultural standards.

The term "ethnocentrism" was first applied in the social sciences by American sociologist William G. Sumner. In his 1906 book, Folkways, Sumner describes ethnocentrism as "the technical name for the view of things in which one's own group is the center of everything, and all others are scaled and rated with reference to it." He further characterized ethnocentrism as often leading to pride, vanity, the belief in one's own group's superiority, and contempt for outsiders.

Over time, the concept of ethnocentrism developed alongside the progression of social understanding by thinkers such as the social theorist Theodor W. Adorno. In The Authoritarian Personality, Adorno and his colleagues of the Frankfurt School established a broader definition of the term as a result of "in group-out group differentiation", stating that ethnocentrism "combines a positive attitude toward one's own ethnic/cultural group (the in-group) with a negative attitude toward the other ethnic/cultural group (the out-group)." Both of these juxtaposed attitudes are also a result of processes known as social identification and social counter-identification.

Origins and development

The term ethnocentrism derives from two Greek words: "ethnos", meaning nation, and "kentron", meaning center. Scholars believe the term was coined by Polish sociologist Ludwig Gumplowicz in the 19th century, although alternative theories suggest that he only popularized the concept rather than inventing it. He saw ethnocentrism as a phenomenon similar to the delusions of geocentrism and anthropocentrism, defining ethnocentrism as "the reasons by virtue of which each group of people believed it had always occupied the highest point, not only among contemporaneous peoples and nations, but also in relation to all peoples of the historical past."

Subsequently, in the 20th century, American social scientist William G. Sumner proposed two different definitions in his 1906 book Folkways. Sumner stated that "Ethnocentrism is the technical name for this view of things in which one's own group is the center of everything, and all others are scaled and rated with reference to it." In War and Other Essays (1911), he wrote that "the sentiment of cohesion, internal comradeship, and devotion to the in-group, which carries with it a sense of superiority to any out-group and readiness to defend the interests of the in-group against the out-group, is technically known as ethnocentrism." According to Boris Bizumic, it is a popular misunderstanding that Sumner originated the term ethnocentrism; in actuality, he brought ethnocentrism into the mainstream of anthropology, social science, and psychology through his English-language publications.

The social and psychological understanding of ethnocentrism has been reinforced by several theories, including Adorno's Authoritarian Personality Theory (1950), Donald T. Campbell's Realistic Group Conflict Theory (1972), and Henri Tajfel's Social Identity Theory (1986). These theories have helped to frame ethnocentrism as a means of better understanding the behaviors caused by in-group and out-group differentiation throughout history and society.

Ethnocentrism in social sciences

William Graham Sumner

In the social sciences, ethnocentrism means judging another culture based on the standards of one's own culture instead of the standards of that other culture. When people use their own culture as a parameter to measure other cultures, they often tend to think that their culture is superior and to see other cultures as inferior and bizarre. Ethnocentrism can be explained at different levels of analysis: at the intergroup level, it is seen as a consequence of conflict between groups, while at the individual level, in-group cohesion and out-group hostility can be explained by personality traits. Ethnocentrism can also help to explain the construction of identity: it can form the basis of one's identity by excluding the out-group, which becomes the target of ethnocentric sentiments and is used as a way of distinguishing oneself from other groups, which may be more or less tolerated. This practice creates social boundaries in social interactions, boundaries that define and mark off the group one wants to be associated with or belong to. In this way, ethnocentrism is not limited to anthropology but can also be applied to other fields of the social sciences, such as sociology and psychology. Ethnocentrism may be particularly heightened in the presence of interethnic competition, hostility and violence. It may also negatively influence expatriate workers' performance.

A more recent interpretation of ethnocentrism, which expands upon the work of Claude Lévi-Strauss, highlights its positive dimension. Political sociologist Audrey Alejandro of the London School of Economics argues that, while ethnocentrism does produce social hierarchies, it also produces diversity by maintaining the different dispositions, practices, and knowledge of identity groups. Diversity is thus both fostered and undermined by ethnocentrism. Ethnocentrism, for Alejandro, is therefore neither something to be suppressed nor something to be celebrated uncritically. Rather, observers can cultivate a "balanced ethnocentrism", allowing themselves to be challenged and transformed by difference whilst still protecting difference.

Anthropology

The classifications of ethnocentrism originate in the study of anthropology. With its omnipresence throughout history, ethnocentrism has always been a factor in how different cultures and groups related to one another. Examples include how, historically, foreigners were characterized as "barbarians", and how such trends appear in complex societies: the Jews considered themselves the "chosen people", the Greeks deemed all foreigners "barbarians", and China believed its country to be "the centre of the world". However, ethnocentric interpretation took place most notably in the 19th century, when anthropologists began to describe and rank various cultures according to the degree to which they had developed significant milestones, such as monotheistic religions, technological advancements, and other historical progressions.

Most rankings were strongly influenced by colonization and the colonizers' belief that they were improving the societies they colonized, ranking cultures according to the progress of their own Western societies and what they classified as milestones. Comparisons were mostly based on what the colonists believed to be superior and what their Western societies had accomplished. The Victorian-era politician and historian Thomas Macaulay once claimed that "one shelf of a Western library" held more knowledge than the centuries of text and literature written by Asian cultures. Ideas developed by Western scientists such as Herbert Spencer, including the concept of the "survival of the fittest", contained ethnocentric ideals, influencing the belief that "superior" societies were most likely to survive and prosper. Edward Said's concept of Orientalism described how Western reactions to non-Western societies rested on an "unequal power relationship" that the Western world had developed through its history of colonialism and the influence it held over non-Western societies.

The ethnocentric classification "primitive" was also used by 19th- and 20th-century anthropologists and reflected how a lack of cultural and religious understanding shaped reactions to non-Western societies. The 19th-century anthropologist Edward Burnett Tylor wrote about "primitive" societies in Primitive Culture (1871), creating a "civilization" scale in which it was implied that ethnic cultures preceded civilized societies. The classification "savage", today usually replaced by "tribal" or "pre-literate", was generally used as a derogatory term as the "civilization" scale became more common. Examples that demonstrate this lack of understanding include European travellers judging languages they could not understand and reacting negatively to them, and the intolerance displayed by Westerners when exposed to unknown religions and symbolism. Georg Wilhelm Friedrich Hegel, a German philosopher, justified Western imperialism by reasoning that, since non-Western societies were "primitive" and "uncivilized", their culture and history were not worth conserving and they should therefore welcome Westernization.

Franz Boas

Anthropologist Franz Boas saw the flaws in this formulaic approach to ranking and interpreting cultural development and committed himself to overthrowing this inaccurate reasoning, arguing that cultures must be understood in terms of their many individual characteristics. With his methodological innovations, Boas sought to show the error of the proposition that race determined cultural capacity. In his 1911 book The Mind of Primitive Man, Boas wrote that:

It is somewhat difficult for us to recognize that the value which we attribute to our own civilization is due to the fact that we participate in this civilization, and that it has been controlling all our actions from the time of our birth; but it is certainly conceivable that there may be other civilizations, based perhaps on different traditions and on a different equilibrium of emotion and reason, which are of no less value than ours, although it may be impossible for us to appreciate their values without having grown up under their influence.

Together, Boas and his colleagues propagated the view that there are no inferior races or cultures. This egalitarian approach introduced the concept of cultural relativism to anthropology, a methodological principle for investigating and comparing societies in as unprejudiced a way as possible and without using the developmental scale that anthropologists of the time were applying. Boas and the anthropologist Bronisław Malinowski argued that any human science had to transcend the ethnocentric views that could blind any scientist's ultimate conclusions.

Both also urged anthropologists to conduct ethnographic fieldwork in order to overcome their ethnocentrism. To this end, Malinowski developed the theory of functionalism as a guide for producing non-ethnocentric studies of different cultures. Classic examples of anti-ethnocentric anthropology include Margaret Mead's Coming of Age in Samoa (1928), which has since met with severe criticism for its incorrect data and generalisations, Malinowski's The Sexual Life of Savages in North-Western Melanesia (1929), and Ruth Benedict's Patterns of Culture (1934). Mead and Benedict were two of Boas's students.

Scholars generally agree that Boas developed his ideas under the influence of the German philosopher Immanuel Kant. Legend has it that, on a field trip to Baffin Island in 1883, Boas would pass the frigid nights reading Kant's Critique of Pure Reason. In that work, Kant argued that human understanding could not be described according to the laws that applied to the operations of nature, and that its operations were therefore free, not determined, and that ideas regulated human action, sometimes independently of material interests. Following Kant, Boas pointed to starving Eskimos who, because of their religious beliefs, would not hunt seals to feed themselves, showing that no pragmatic or material calculus determined their values.

Causes

Ethnocentrism is believed to be a learned behavior embedded into a variety of beliefs and values of an individual or group.

Due to enculturation, individuals in in-groups have a deeper sense of loyalty and are more likely to follow the group's norms and to develop relationships with associated members. In relation to enculturation, ethnocentrism is said to be a transgenerational problem, since stereotypes and similar perspectives can be enforced and encouraged as time progresses. Although loyalty can increase approval within the in-group, limited interaction with other cultures can prevent individuals from understanding and appreciating cultural differences, resulting in greater ethnocentrism.

The social identity approach suggests that ethnocentric beliefs are caused by a strong identification with one's own culture that directly creates a positive view of that culture. It is theorized by Henri Tajfel and John C. Turner that to maintain that positive view, people make social comparisons that cast competing cultural groups in an unfavorable light.

Encountering alternative or opposing perspectives can lead individuals to develop naïve realism and be subject to limits in their understanding. These characteristics can also make individuals subject to ethnocentrism, when referring to out-groups, and to the black sheep effect, in which personal perspectives contradict those of fellow in-group members.

Realistic conflict theory assumes that ethnocentrism happens due to "real or perceived conflict" between groups. This also happens when a dominant group perceives new members as a threat. Scholars have recently demonstrated that individuals are more likely to develop in-group identification and out-group negativity in response to intergroup competition, conflict, or threat.

Although the causes of ethnocentric beliefs and actions can have varying roots of context and reason, ethnocentrism has had both negative and positive effects throughout history. Its most detrimental effects have included genocide, apartheid, slavery, and many violent conflicts. Historical examples of these negative effects of ethnocentrism are the Holocaust, the Crusades, the Trail of Tears, and the internment of Japanese Americans. These events were a result of cultural differences reinforced inhumanely by a majority group that thought of itself as superior. In his 1976 book on evolution, The Selfish Gene, evolutionary biologist Richard Dawkins writes that "blood-feuds and inter-clan warfare are easily interpretable in terms of Hamilton's genetic theory." Simulation-based experiments in evolutionary game theory have attempted to provide an explanation for the selection of ethnocentric-strategy phenotypes.
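
One widely cited example of such a simulation is Hammond and Axelrod's lattice model, in which each agent carries a visible group tag together with separate "help my own tag" and "help other tags" strategy bits, so that an ethnocentric agent cooperates with its in-group and defects against out-groups. The sketch below is a simplified, unofficial reconstruction using commonly reported parameter values; the published model differs in details, but the basic structure (immigration, local helping, local reproduction with mutation, and death) is the same.

```python
# Simplified sketch of a Hammond & Axelrod-style ethnocentrism simulation.
# Parameter values follow commonly reported settings but are illustrative only.
import random
from collections import Counter

SIZE = 50                     # lattice is SIZE x SIZE (torus)
N_TAGS = 4                    # number of visible group "colours"
COST, BENEFIT = 0.01, 0.03    # cost of helping, benefit to the recipient
BASE_PTR = 0.12               # baseline probability of reproduction
MUTATION = 0.005              # per-trait mutation rate
DEATH = 0.10                  # per-period death probability

grid = {}                     # (x, y) -> agent dict; empty cells are absent

def random_agent():
    return {"tag": random.randrange(N_TAGS),
            "coop_in": random.random() < 0.5,    # help same-tag neighbours?
            "coop_out": random.random() < 0.5}   # help different-tag neighbours?

def neighbours(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (x + dx) % SIZE, (y + dy) % SIZE

def mutate(agent):
    child = dict(agent)
    if random.random() < MUTATION:
        child["tag"] = random.randrange(N_TAGS)
    for key in ("coop_in", "coop_out"):
        if random.random() < MUTATION:
            child[key] = not child[key]
    return child

def step():
    # 1. Immigration: one agent with random traits appears on a random empty cell.
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if (x, y) not in grid]
    if empties:
        grid[random.choice(empties)] = random_agent()
    # 2. Interaction: helping a neighbour costs the donor and benefits the recipient.
    ptr = {pos: BASE_PTR for pos in grid}
    for pos, agent in grid.items():
        for npos in neighbours(*pos):
            other = grid.get(npos)
            if other is None:
                continue
            same = agent["tag"] == other["tag"]
            if (same and agent["coop_in"]) or (not same and agent["coop_out"]):
                ptr[pos] -= COST
                ptr[npos] += BENEFIT
    # 3. Reproduction: clone (with mutation) into a random empty neighbouring cell.
    for pos in list(grid):
        if random.random() < ptr[pos]:
            free = [n for n in neighbours(*pos) if n not in grid]
            if free:
                grid[random.choice(free)] = mutate(grid[pos])
    # 4. Death.
    for pos in list(grid):
        if random.random() < DEATH:
            del grid[pos]

def strategy_counts():
    label = {(True, False): "ethnocentric", (True, True): "humanitarian",
             (False, False): "selfish", (False, True): "traitorous"}
    return Counter(label[(a["coop_in"], a["coop_out"])] for a in grid.values())

if __name__ == "__main__":
    for _ in range(2000):
        step()
    # In the published model, ethnocentric agents typically come to dominate.
    print(strategy_counts())
```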

Positive developments throughout history have aimed to curb the callousness of ethnocentrism and to broaden perspectives beyond living in a single culture. Examples of such efforts include the formation of the United Nations, aimed at maintaining international relations, and the Olympic Games, a celebration of sports and friendly competition between cultures.

Effects

A study in New Zealand compared how individuals associate with in-groups and out-groups, and how this relates to discrimination. Strong in-group favoritism benefits the dominant groups and is distinct from out-group hostility or punishment. A suggested solution is to limit the perceived threat from the out-group, which also decreases the likelihood that those who favour the in-group will react negatively.

Ethnocentrism also influences which goods consumers prefer to purchase. A study that used several in-group and out-group orientations has shown a correlation between national identity, consumer cosmopolitanism, consumer ethnocentrism, and the way consumers choose their products, whether imported or domestic. Countries with high levels of nationalism and isolationism are more likely to exhibit consumer ethnocentrism and to show a significant preference for domestically produced goods.

Ethnocentrism and racism

Ethnocentrism is often associated with racism. However, as noted above, ethnocentrism does not necessarily carry a negative connotation or imply a negative view of others' race. In European research, the term racism is not usually linked to ethnocentrism because Europeans avoid applying the concept of race to humans, whereas American researchers do use the term. Since ethnocentrism implies a strong identification with one's in-group, it often leads almost automatically to negative feelings toward, and stereotyping of, members of the out-group, which can be confused with racism. Scholars agree that avoiding stereotypes is an indispensable prerequisite for overcoming ethnocentrism, and that the mass media play a key role in this regard. The differences between cultures can also cause groups to hinder one another, leading to ethnocentrism and racism. A Canadian study established differences between French Canadian and English Canadian respondents in the products they would purchase, attributed to ethnocentrism and racism. As the world has become more diverse, the term cultural diversity is sometimes misinterpreted, with ethnocentrism being used to create controversy among cultures.

Effects of ethnocentrism in the media

Film

As the United States leads the film industry in worldwide revenue, ethnocentric views can be transmitted through character tropes and underlying themes. The 2003 film The Last Samurai has been analyzed as having strong ethnocentric themes, such as in-group preference and a tendency to pass judgement on those in the out-group. The film was also criticized for historical inaccuracies and for perpetuating a "white savior" narrative, reflecting a form of ethnocentrism centered on the United States.

Social media

Approximately 67.1% of the global population use the internet regularly, with 63.7% of the population being social media users. In a 2023 study, researchers found that social media can enable its users to become more tolerant of other people, bridge the gap between cultures, and contribute to global knowledge. In a similar study of social media use by Kenyan teens, researchers found that when social media use is limited to a certain group, it can increase ethnocentric views and ideologies.

Problem of other minds

From Wikipedia, the free encyclopedia

The problem of other minds is a philosophical problem traditionally stated as the following epistemological question: "Given that I can only observe the behavior of others, how can I know that others have minds?" The problem is that knowledge of other minds is always indirect. The problem of other minds does not negatively impact social interactions due to people having a "theory of mind" – the ability to spontaneously infer the mental states of others – supported by innate mirror neurons, a theory of mind mechanism, or a tacit theory. There has also been an increase in evidence that behavior results from cognition, which in turn requires a brain, and often involves consciousness.

It is central to the philosophical idea known as solipsism: the notion that for any person, only one's own mind is known to exist. The problem of other minds maintains that no matter how sophisticated someone's behavior is, that behavior does not reasonably guarantee that the same presence of thought occurs within them as occurs within oneself when one engages in behavior. Phenomenology studies the subjective experience of human life resulting from consciousness. The specific subject within phenomenology that studies other minds is intersubjectivity.

In 1953, Karl Popper suggested that a test for the other minds problem is whether one would seriously argue with the other person or machine: "This, I think, would solve the problem of 'other minds'....In arguing with other people (a thing which we have learnt from other people), for example about other minds, we cannot but attribute to them intentions, and this means mental states. We do not argue with a thermometer."

Philosophers such as Christian List have argued that there exists a connection between the problem of other minds and Benj Hellie's vertiginous question, i.e. why people exist as themselves and not as someone else. List argues that there exists a "quadrilemma" for metaphysical consciousness theories where at least one of the following must be false: 'first-person realism', 'non-solipsism', 'non-fragmentation', and 'one world'. List proposes a philosophical model he calls the "many-worlds theory of consciousness" in order to reconcile the subjective nature of consciousness without lapsing into solipsism.

Caspar Hare has argued for a weak form of solipsism with the concept of egocentric presentism, in which other persons can be conscious, but their experiences are simply not present in the way one's own current experience is. A related concept is perspectival realism, in which things within one's perceptual awareness have a defining intrinsic property that exists absolutely and not relative to anything; several other philosophers have written reviews of these ideas. Vincent Conitzer has argued for similar ideas on the basis of a connection between the A-theory of time and the nature of the self. He argues that one's current perspective could be "metaphysically privileged", on the grounds that the arguments for the A-theory are stronger when taken as arguments for both the A-theory and a metaphysically privileged self, and that arguments against the A-theory are ineffective against this combined position.

Microbiome

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Micro...