Monday, May 31, 2021

Psychological nativism

From Wikipedia, the free encyclopedia

In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs. This factor contributes to the ongoing nature versus nurture dispute, one born of the current difficulty of reverse engineering the subconscious operations of the brain, especially the human brain.

Some nativists believe that specific beliefs or preferences are "hard wired". For example, one might argue that some moral intuitions are innate or that color preferences are innate. A less established argument is that nature supplies the human mind with specialized learning devices. This latter view differs from empiricism only to the extent that the algorithms that translate experience into information may be more complex and specialized in nativist theories than in empiricist theories. However, empiricists largely remain open to the nature of learning algorithms and are by no means restricted to the historical associationist mechanisms of behaviorism.

In philosophy

Nativism has a history in philosophy, particularly as a reaction to the straightforwardly empiricist views of John Locke and David Hume. Hume had given persuasive logical arguments that people cannot infer causality from perceptual input. The most one could hope to infer is that two events happen in succession or simultaneously. One response to this argument involves positing that concepts not supplied by experience, such as causality, must exist prior to any experience and hence must be innate.

The philosopher Immanuel Kant (1724–1804) argued in his Critique of Pure Reason that the human mind knows objects in innate, a priori ways. Kant claimed that humans, from birth, must experience all objects as being successive (time) and juxtaposed (space). His list of inborn categories describes predicates that the mind can attribute to any object in general. Arthur Schopenhauer (1788–1860) agreed with Kant, but reduced the number of innate categories to one—causality—which presupposes the others.

Modularity

Modern nativism is most associated with the work of Jerry Fodor (1935–2017), Noam Chomsky (b. 1928), and Steven Pinker (b. 1954), who argue that humans from birth have certain cognitive modules (specialised genetically inherited psychological abilities) that allow them to learn and acquire certain skills, such as language. For example, children demonstrate a facility for acquiring spoken language but require intensive training to learn to read and write. This poverty of the stimulus observation became a principal component of Chomsky's argument for a "language organ"—a genetically inherited neurological module that confers a somewhat universal understanding of syntax that all neurologically healthy humans are born with, which is fine-tuned by an individual's experience with their native language. In The Blank Slate (2002), Pinker similarly cites the linguistic capabilities of children, relative to the amount of direct instruction they receive, as evidence that humans have an inborn facility for speech acquisition (but not for literacy acquisition).

A number of other theorists have disagreed with these claims. Instead, they have outlined alternative theories of how modularization might emerge over the course of development, as a result of a system gradually refining and fine-tuning its responses to environmental stimuli.

Language

Research on the human capacity for language aims to provide support for a nativist view. Language is a species characteristic of humans: No human society has ever been discovered that does not employ a language, and all medically able children acquire at least one language in early childhood. The typical five-year-old can already use most, if not all, of the grammatical structures that are found in the language of the surrounding community. Yet the knowledge of grammar is tacit: Neither the five-year-old nor the adults in the community can easily articulate the principles of the grammar they are following. Experimental evidence shows that infants come equipped with presuppositions that allow them to acquire the rules of their language.

The term universal grammar (or UG) is used for the purported innate biological properties of the human brain, whatever exactly they turn out to be, that are responsible for children's successful acquisition of a native language during the first few years of life. The person most strongly associated with the hypothesising of UG is Noam Chomsky, although the idea of Universal Grammar has clear historical antecedents at least as far back as the 1300s, in the form of the Speculative Grammar of Thomas of Erfurt.

In generative grammar the principles and parameters (P&P) framework was the dominant formulation of UG before Chomsky's current Minimalist Program. In the P&P framework, a principle is a grammatical requirement that is meant to apply to all languages, and a parameter is a tightly constrained point of variation. In the early 1980s parameters were often conceptualized as switches in a switchbox (an idea attributed to James Higginbotham). In more recent research on syntax, parameters are often conceptualized as options for the formal features of functional heads.
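The "switches in a switchbox" picture lends itself to a small illustration. The following Python sketch is purely illustrative and does not implement any actual generative-grammar formalism; the parameter names (head_initial, pro_drop) and the toy word-order rule are assumptions chosen for the example.

```python
# Illustrative sketch of the "switchbox" picture of principles and parameters.
# The parameter names and the toy ordering rule below are assumptions chosen
# for this example; they do not implement any real linguistic formalism.

from dataclasses import dataclass

@dataclass
class GrammarParameters:
    head_initial: bool = True   # verb precedes its object (English-like) vs. follows it (Japanese-like)
    pro_drop: bool = False      # whether a pronominal subject may be left unpronounced (Spanish-like)

def order_verb_phrase(params: GrammarParameters, verb: str, obj: str) -> str:
    """A 'principle' (every verb phrase pairs a verb with its object) whose
    surface order is fixed by a single binary parameter setting."""
    return f"{verb} {obj}" if params.head_initial else f"{obj} {verb}"

english_like = GrammarParameters(head_initial=True, pro_drop=False)
japanese_like = GrammarParameters(head_initial=False, pro_drop=True)

print(order_verb_phrase(english_like, "read", "books"))   # -> read books
print(order_verb_phrase(japanese_like, "read", "books"))  # -> books read
```

Flipping one "switch" changes the surface pattern across the grammar, which is the intuition behind the switchbox metaphor; the more recent feature-based view mentioned above would instead attach such options to individual functional heads.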

The hypothesis that UG plays an essential role in normal child language acquisition arises from species differences: for example, children and household pets may be exposed to quite similar linguistic input, but by the age of three years, the child's ability to comprehend multi-word utterances vastly outstrips that of the dog or cat. This evidence is all the more impressive when one considers that most children do not receive reliable correction for grammatical errors. Indeed, even children who for medical reasons cannot produce speech, and therefore have no possibility of producing an error in the first place, have been found to master both the lexicon and the grammar of their community's language perfectly. The fact that children succeed at language acquisition even when their linguistic input is severely impoverished, as it is when no corrective feedback is available, is related to the argument from the poverty of the stimulus, and is another claim for a central role of UG in child language acquisition.

Relation to neuroscience

Neuroscientists working on the Blue Brain Project found that neurons transmit signals independently of an individual's experience. It had previously been assumed that neuronal circuits form only when an individual's experience is imprinted in the brain, creating memories. Researchers at Blue Brain discovered networks of about fifty neurons which they believed were building blocks of more complex knowledge: they contained basic innate knowledge that could be combined in different, more complex ways to give rise to acquired knowledge, such as memory.

Scientists ran tests on the neuronal circuits of several rats and ascertained that if the neuronal circuits had only been formed based on an individual's experience, the tests would bring about very different characteristics for each rat. However, the rats all displayed similar characteristics, which suggests that their neuronal circuits must have been established prior to their experiences. The Blue Brain Project research suggests that some of the "building blocks" of knowledge are genetic and present at birth.

Criticism

Nativism is sometimes perceived as being too vague to be falsifiable, as there is no fixed definition of when an ability is supposed to be judged "innate". (As Jeffrey Elman and colleagues pointed out in Rethinking Innateness, it is unclear exactly how the supposedly innate information might actually be coded for in the genes.) Further, modern nativist theory makes few specific falsifiable and testable predictions, and has been compared by some empiricists to a pseudoscience or nefarious brand of "psychological creationism". Influential psychologist Henry L. Roediger III remarked that "Chomsky was and is a rationalist; he had no uses for experimental analyses or data of any sort that pertained to language, and even experimental psycholinguistics was and is of little interest to him".

Some researchers argue that the premises of linguistic nativism were motivated by outdated considerations and need reconsidering. For example, nativism was at least partially motivated by the perception that statistical inferences made from experience were insufficient to account for the complex languages humans develop. In part, this was a reaction to the failure of behaviorism and behaviorist models of the era to easily account for how something as complex and sophisticated as a full-blown language could ever be learned. Indeed, several nativist arguments were inspired by Chomsky's assertion that children could not learn complicated grammar based on the linguistic input they typically receive, and must therefore have an innate language-learning module, or language acquisition device. However, Chomsky's poverty of the stimulus argument is controversial within linguistics.

Many empiricists are now also trying to apply modern learning models and techniques to the question of language acquisition, with marked success. Similarity-based generalization marks another avenue of recent research, which suggests that children may be able to rapidly learn how to use new words by generalizing about the usage of similar words that they already know (see also the distributional hypothesis).

Paul Griffiths, in "What is Innateness?", argues that innateness is too confusing a concept to be fruitfully employed as it confuses "empirically dissociated" concepts. In a previous paper, Griffiths argued that innateness specifically confuses these three distinct biological concepts: developmental fixity, species nature, and intended outcome. Developmental fixity refers to how insensitive a trait is to environmental input, species nature reflects what it is to be an organism of a certain kind, and intended outcome is how an organism is meant to develop.

Instinct

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Instinct#In_humans

Instinct is the inherent inclination of a living organism towards a particular complex behaviour, containing both innate (inborn) and learnt elements. The simplest example of an instinctive behavior is a fixed action pattern (FAP), in which a very short to medium-length sequence of actions, without variation, is carried out in response to a corresponding clearly defined stimulus.

A baby leatherback turtle makes its way to the open ocean.

Any behavior is instinctive if it is performed without being based upon prior experience (that is, in the absence of learning), and is therefore an expression of innate biological factors. Sea turtles, newly hatched on a beach, will instinctively move toward the ocean. A marsupial climbs into its mother's pouch upon being born. Honeybees communicate by dancing in the direction of a food source without formal instruction. Other examples include animal fighting, animal courtship behavior, internal escape functions, and the building of nests. Though an instinct is defined by its invariant innate characteristics, details of its performance can be changed by experience; for example, a dog can improve its fighting skills by practice.

An instinctive behavior of shaking water from wet fur

Instincts are inborn complex patterns of behavior that exist in most members of the species, and should be distinguished from reflexes, which are simple responses of an organism to a specific stimulus, such as the contraction of the pupil in response to bright light or the spasmodic movement of the lower leg when the knee is tapped. The absence of volitional capacity must not be confused with an inability to modify fixed action patterns. For example, people may be able to modify a stimulated fixed action pattern by consciously recognizing the point of its activation and simply stopping it, whereas animals without a sufficiently strong volitional capacity may not be able to disengage from their fixed action patterns once activated.

Instinctual behaviour in humans has been studied, and is a controversial topic.

History

In animal biology

Jean Henri Fabre (1823–1915), an entomologist, considered instinct to be any behavior which did not require cognition or consciousness to perform. Fabre's inspiration was his intense study of insects, some of whose behaviors he wrongly considered fixed and not subject to environmental influence.

Instinct as a concept fell out of favor in the 1920s with the rise of behaviorism and of thinkers such as B. F. Skinner, who held that most significant behavior is learned.

An interest in innate behaviors arose again in the 1950s with Konrad Lorenz and Nikolaas Tinbergen, who made the distinction between instinct and learned behaviors. Our modern understanding of instinctual behavior in animals owes much to their work. For instance, there exists a sensitive period for a bird in which it learns the identity of its mother. Konrad Lorenz famously had a goose imprint on his boots. Thereafter the goose would follow whoever wore the boots. This suggests that the identity of the goose's mother was learned, but the goose's behavior towards what it perceived as its mother was instinctive.

In psychology

The term "instinct" in psychology was first used in the 1870s by Wilhelm Wundt. By the close of the 19th century, most repeated behavior was considered instinctual. In a survey of the literature at that time, one researcher chronicled 4,000 human "instincts," having applied this label to any behavior that was repetitive. In the early twentieth century, there was recognized a "union of instinct and emotion". William McDougall held that many instincts have their respective associated specific emotions. As research became more rigorous and terms better defined, instinct as an explanation for human behavior became less common. In 1932, McDougall argued that the word 'instinct' is more suitable for describing animal behaviour, while he recommended the word 'propensity' for goal directed combinations of the many innate human abilities, which are loosely and variably linked, in a way that shows strong plasticity. In a conference in 1960, chaired by Frank Beach, a pioneer in comparative psychology, and attended by luminaries in the field, the term 'instinct' was restricted in its application. During the 1960s and 1970s, textbooks still contained some discussion of instincts in reference to human behavior. By the year 2000, a survey of the 12 best selling textbooks in Introductory Psychology revealed only one reference to instincts, and that was in regard to Sigmund Freud's referral to the "id" instincts. In this sense, the term 'instinct' appeared to have become outmoded for introductory textbooks on human psychology.

Sigmund Freud considered mental images of bodily needs, expressed in the form of desires, to be instincts.

In the 1950s, the psychologist Abraham Maslow argued that humans no longer have instincts because we have the ability to override them in certain situations. He felt that what is called instinct is often imprecisely defined, and really amounts to strong drives. For Maslow, an instinct is something which cannot be overridden, and therefore while the term may have applied to humans in the past, it no longer does.

The book Instinct: An Enduring Problem in Psychology (1961) collected a range of writings about the topic.

In a classic paper published in 1972, the psychologist Richard Herrnstein wrote: "A comparison of McDougall's theory of instinct and Skinner's reinforcement theory — representing nature and nurture — shows remarkable, and largely unrecognized, similarities between the contending sides in the nature-nurture dispute as applied to the analysis of behavior."

F.B. Mandal proposed a set of criteria by which a behavior might be considered instinctual: a) be automatic, b) be irresistible, c) occur at some point in development, d) be triggered by some event in the environment, e) occur in every member of the species, f) be unmodifiable, and g) govern behavior for which the organism needs no training (although the organism may profit from experience and to that degree the behavior is modifiable).

In Information behavior: An Evolutionary Instinct (2010, pp. 35–42), Amanda Spink notes that "currently in the behavioral sciences instinct is generally understood as the innate part of behavior that emerges without any training or education in humans." She claims that the viewpoint that information behavior has an instinctive basis is grounded in the latest thinking on human behavior. Furthermore, she notes that "behaviors such as cooperation, sexual behavior, child rearing and aesthetics are [also] seen as 'evolved psychological mechanisms' with an instinctive basis." Spink adds that Steven Pinker similarly asserts that language acquisition is instinctive in humans in his book The Language Instinct (1994). In 1908, William McDougall wrote about the "instinct of curiosity" and its associated "emotion of wonder", though Spink's book does not mention this.

M.S. Blumberg in 2017 examined the use of the word instinct, and found it varied significantly.

In humans

The existence of the simplest instincts in humans is a widely debated topic. Among possible examples of instinct-influenced behavior in humans are the following.

  1. Congenital fear of snakes and spiders was found in six-month-old babies.
  2. The infant cry is believed to be a manifestation of instinct. The infant cannot otherwise protect itself for survival during its long period of maturation. The maternal and paternal bonds manifest particularly in response to the infant cry. The mechanism has been partly elucidated by observations with functional MRI of the parent's brain.
  3. The herd instinct is found in human children and young chimpanzees, but is apparently absent in young orangutans.
  4. Hormones are linked to specific forms of human behavior, such as sexuality. However, the topic remains debatable as human behavior was shown to influence hormonal levels. High levels of testosterone are often associated in a person (male or female) with aggressiveness, while its decrease is associated with nurturing and protective behavior. Decrease in testosterone level after the birth of a child was found among fathers.
  5. Hygiene behavior in humans was suggested to be partly instinctive, based on emotions such as disgust.

Reflexes

A gecko hunts the pointer of a computer mouse, mistaking it for prey.

Examples of behaviors that do not require conscious will include many reflexes. The stimulus in a reflex may not require brain activity but instead may travel to the spinal cord as a message that is then transmitted back through the body, tracing a path called the reflex arc. Reflexes are similar to fixed action patterns in that most reflexes meet the criteria of a FAP. However, a fixed action pattern can be processed in the brain as well; a male stickleback's instinctive aggression towards anything red during his mating season is such an example. Examples of instinctive behaviors in humans include many of the primitive reflexes, such as rooting and suckling, behaviors which are present in mammals. In rats, it has been observed that innate responses are related to specific chemicals, and these chemicals are detected by two organs located in the nose: the vomeronasal organ (VNO) and the main olfactory epithelium (MOE).

Maturational

Some instinctive behaviors depend on maturational processes to appear. For instance, we commonly refer to birds "learning" to fly. However, young birds have been experimentally reared in devices that prevent them from moving their wings until they reached the age at which their cohorts were flying. These birds flew immediately and normally when released, showing that their improvement resulted from neuromuscular maturation and not true learning.

In evolution

Imprinting provides one example of instinct. This complex response may involve visual, auditory, and olfactory cues in the environment surrounding an organism. In some cases, imprinting attaches an offspring to its parent, which is a reproductive benefit to offspring survival. If an offspring has attachment to a parent, it is more likely to stay nearby under parental protection. Attached offspring are also more likely to learn from a parental figure when interacting closely. (Reproductive benefits are a driving force behind natural selection.)

Environment is an important factor in how innate behavior has evolved. A hypothesis of Michael McCullough, a positive psychologist, explains that environment plays a key role in human behaviors such as forgiveness and revenge. This hypothesis theorizes that various social environments cause either forgiveness or revenge to prevail. McCullough relates his theory to game theory. In a tit-for-tat strategy, cooperation and retaliation are comparable to forgiveness and revenge. The choice between the two can be beneficial or detrimental, depending on what the partner-organism chooses. Though this psychological example of game theory does not have such directly measurable results, it offers an interesting theoretical perspective. From a more biological standpoint, the brain's limbic system operates as the main control-area for response to certain stimuli, including a variety of instinctual behavior. The limbic system processes external stimuli related to emotions, social activity, and motivation, which propagates a behavioral response. Some behaviors include maternal care, aggression, defense, and social hierarchy. These behaviors are influenced by sensory input — sight, sound, touch, and smell.
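As a concrete illustration of the tit-for-tat strategy mentioned above, here is a minimal sketch in Python. It is illustrative only: the payoff values, function names, and round count are assumptions made for the example, not taken from McCullough's work.

```python
# Minimal illustrative sketch of the tit-for-tat strategy from game theory,
# the analogy used above for forgiveness (cooperation) and revenge (retaliation).
# Payoff values and names are assumptions chosen for this example.

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return COOPERATE if not opponent_history else opponent_history[-1]

def payoff(my_move, their_move):
    # Prisoner's-dilemma-style payoffs (assumed values).
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    return table[(my_move, their_move)]

def play(strategy_a, strategy_b, rounds=5):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees only the other's past moves
        move_b = strategy_b(history_a)
        score_a += payoff(move_a, move_b)
        score_b += payoff(move_b, move_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda history: DEFECT
print(play(tit_for_tat, tit_for_tat))     # mutual "forgiveness": (15, 15)
print(play(tit_for_tat, always_defect))   # retaliation after the first loss: (4, 9)
```

In repeated play, two tit-for-tat players keep "forgiving" each other and both score well, while meeting an unconditional defector produces one exploited round followed by sustained "revenge", mirroring the trade-off the hypothesis describes.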

Within the circuitry of the limbic system, there are various places where evolution could have taken place, or could take place in the future. For example, many rodents have receptors in the vomeronasal organ that respond explicitly to predator stimuli that specifically relate to that individual species of rodent. The reception of a predatory stimulus usually creates a response of defense or fear. Mating in rats follows a similar mechanism. The vomeronasal organ and the main olfactory epithelium, together called the olfactory system, detect pheromones from the opposite sex. These signals then travel to the medial amygdala, which disperses the signal to a variety of brain parts. The pathways involved with innate circuitry are extremely specialized and specific. Various organs and sensory receptors play parts in this complex process.

Instinct is a phenomenon that can be investigated from a multitude of angles: genetics, limbic system, nervous pathways, and environment. Researchers can study levels of instincts, from molecular to groups of individuals. Extremely specialized systems have evolved, resulting in individuals which exhibit behaviors without learning them.

Is–ought problem

From Wikipedia, the free encyclopedia
 
David Hume raised the is–ought problem in his Treatise of Human Nature.

The is–ought problem, as articulated by the Scottish philosopher and historian David Hume, arises when one makes claims about what ought to be that are based solely on statements about what is. Hume found that there seems to be a significant difference between positive statements (about what is) and prescriptive or normative statements (about what ought to be), and that it is not obvious how one can coherently move from descriptive statements to prescriptive ones. Hume's law or Hume's guillotine is the thesis that, if a reasoner only has access to non-moral and non-evaluative factual premises, the reasoner cannot logically infer the truth of moral statements.

A similar view is defended by G. E. Moore's open-question argument, intended to refute any identification of moral properties with natural properties. This so-called naturalistic fallacy stands in contrast to the views of ethical naturalists.

The is–ought problem is closely related to the fact–value distinction in epistemology. Though the terms are often used interchangeably, academic discourse concerning the latter may encompass aesthetics in addition to ethics.

Overview

Hume discusses the problem in book III, part I, section I of his book, A Treatise of Human Nature (1739):

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, it's necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.

Hume calls for caution against such inferences in the absence of any explanation of how the ought-statements follow from the is-statements. But how exactly can an "ought" be derived from an "is"? The question, prompted by Hume's small paragraph, has become one of the central questions of ethical theory, and Hume is usually assigned the position that such a derivation is impossible.

In modern times, "Hume's law" often denotes the informal thesis that, if a reasoner only has access to non-moral factual premises, the reasoner cannot logically infer the truth of moral statements; or, more broadly, that one cannot infer evaluative statements (including aesthetic statements) from non-evaluative statements. An alternative definition of Hume's law is that "If P implies Q, and Q is moral, then P is moral". This interpretation-driven definition avoids a loophole arising from the principle of explosion. Other versions state that the is–ought gap can technically be bridged formally without a moral premise, but only in ways that are formally "vacuous" or "irrelevant", and that provide no "guidance". For example, one can infer from "The Sun is yellow" that "Either the Sun is yellow, or it is wrong to murder". But this provides no relevant moral guidance; absent a contradiction, one cannot deductively infer that "it is wrong to murder" from non-moral premises alone, adherents argue.
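The "vacuous" bridging move described above is a single application of disjunction introduction. A minimal sketch of the derivation, with S standing for "The Sun is yellow" and W for "It is wrong to murder" (letters chosen here only for illustration), is:

```latex
% Illustrative sketch only: S = "The Sun is yellow" (non-moral premise),
%                           W = "It is wrong to murder" (moral sentence).
% 1. Premise:                   S
% 2. Disjunction introduction:  from S, infer S \lor W
\[
  \frac{S}{\;S \lor W\;}\;(\lor\text{-introduction})
\]
```

The conclusion mentions the moral sentence W only inside a disjunction; without a further premise (a moral one, or a contradiction that triggers explosion), nothing about the truth of W can be detached from it, which is why such derivations are said to provide no guidance.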

Implications

The apparent gap between "is" statements and "ought" statements, when combined with Hume's fork, renders "ought" statements of dubious validity. Hume's fork is the idea that all items of knowledge are based either on logic and definitions, or else on observation. If the is–ought problem holds, then "ought" statements do not seem to be known in either of these two ways, and it would seem that there can be no moral knowledge. Moral skepticism and non-cognitivism work with such conclusions.

Responses

Oughts and goals

Ethical naturalists contend that moral truths exist, and that their truth value relates to facts about physical reality. Many modern naturalistic philosophers see no impenetrable barrier in deriving "ought" from "is", believing it can be done whenever we analyze goal-directed behavior. They suggest that a statement of the form "In order for agent A to achieve goal B, A reasonably ought to do C" exhibits no category error and may be factually verified or refuted. "Oughts" exist, then, in light of the existence of goals. A counterargument to this response is that it merely pushes the 'ought' back to the subjectively valued 'goal', and thus provides no fundamentally objective basis for one's goals and, consequently, no basis for distinguishing the moral value of fundamentally different goals.

This is similar to work done by moral philosopher Alasdair MacIntyre, who attempts to show that because ethical language developed in the West in the context of a belief in a human telos—an end or goal—our inherited moral language, including terms such as good and bad, has functioned, and functions, to evaluate the way in which certain behaviors facilitate the achievement of that telos. In an evaluative capacity, therefore, good and bad carry moral weight without committing a category error. For instance, a pair of scissors that cannot easily cut through paper can legitimately be called bad since it cannot fulfill its purpose effectively. Likewise, if a person is understood as having a particular purpose, then behaviour can be evaluated as good or bad in reference to that purpose. In plainer words, a person is acting well when that person fulfills that person's purpose.

Even if the concept of an "ought" is meaningful, this need not involve morality. This is because some goals may be morally neutral, or (if they exist) against what is moral. A poisoner might realize his victim has not died and say, for example, "I ought to have used more poison," since his goal is to murder. The next challenge of a moral realist is thus to explain what is meant by a "moral ought".

Discourse ethics

Proponents of discourse ethics argue that the very act of discourse implies certain "oughts", that is, certain presuppositions that are necessarily accepted by the participants in discourse, and can be used to further derive prescriptive statements. They therefore argue that it is incoherent to argumentatively advance an ethical position on the basis of the is–ought problem, which contradicts these implied assumptions.

Moral oughts

As MacIntyre explained, someone may be called a good person if people have an inherent purpose. Many ethical systems appeal to such a purpose. This is true of some forms of moral realism, which states that something can be wrong, even if every thinking person believes otherwise (the idea of brute fact about morality). The ethical realist might suggest that humans were created for a purpose (e.g. to serve God), especially if they are an ethical non-naturalist. If the ethical realist is instead an ethical naturalist, they may start with the fact that humans have evolved and pursue some sort of evolutionary ethics (which risks “committing” the moralistic fallacy). Not all moral systems appeal to a human telos or purpose. This is because it is not obvious that people even have any sort of natural purpose, or what that purpose would be. Although many scientists do recognize teleonomy (a tendency in nature), few philosophers appeal to it (this time, to avoid the naturalistic fallacy).

Goal-dependent oughts run into problems even without an appeal to an innate human purpose. Consider cases where one has no desire to be good, whatever being good turns out to involve. If, for instance, a person wants to be good, and being good means washing one's hands, then it seems one morally ought to wash one's hands. The bigger problem for moral philosophy is what happens if someone does not want to be good, whatever the origins of that goal. Put simply, in what sense ought we to hold the goal of being good? It seems one can ask, "How am I rationally required to hold 'good' as a value, or to pursue it?"

The issue mentioned above results from an important ethical relativist critique. Even if "oughts" depend on goals, the ought seems to vary with the person's goal. This is the conclusion of the ethical subjectivist, who says a person can only be called good according to whether they fulfill their own, self-assigned goal. Alasdair MacIntyre himself suggests that a person's purpose comes from their culture, making him a sort of ethical relativist. Ethical relativists acknowledge local, institutional facts about what is right, but these are facts that can still vary by society. Thus, without an objective "moral goal", a moral ought is difficult to establish. G. E. M. Anscombe was particularly critical of the word "ought" for this reason; understood as "We need such-and-such, and will only get it this way", it fails because somebody may need something immoral, or else find that their noble need requires immoral action. Anscombe went as far as to suggest that "the concepts of obligation, and duty—moral obligation and moral duty, that is to say—and of what is morally right and wrong, and of the moral sense of 'ought,' ought to be jettisoned if this is psychologically possible".

If moral goals depend on private assumptions or public agreement, so may morality as a whole. For example, Canada might call it good to maximize global welfare, while a citizen, Alice, calls it good to focus on herself, then her family, and finally her friends (with little empathy for strangers). It does not seem that Alice can be objectively or rationally bound, without regard to her personal values or those of other people, to act a certain way. In other words, we may not be able to say "You just should do this". Moreover, persuading her to help strangers would necessarily mean appealing to values she already possesses (or else we would never even have a hope of persuading her). This is another interest of normative ethics: questions of binding forces.

There may be responses to the above relativistic critiques. As mentioned above, ethical realists that are non-natural can appeal to God's purpose for humankind. On the other hand, naturalistic thinkers may posit that valuing people's well-being is somehow "obviously" the purpose of ethics, or else the only relevant purpose worth talking about. This is the move made by natural law, scientific moralists and some utilitarians.

Institutional facts

John Searle also attempts to derive "ought" from "is". He tries to show that the act of making a promise places one under an obligation by definition, and that such an obligation amounts to an "ought". This view is still widely debated, and to answer criticisms, Searle has further developed the concept of institutional facts, for example, that a certain building is in fact a bank and that certain paper is in fact money, which would seem to depend upon general recognition of those institutions and their value.

Indefinables

Indefinables are concepts so global that they cannot be defined; rather, in a sense, they themselves, and the objects to which they refer, define our reality and our ideas. Their meanings cannot be stated in a true definition, but their meanings can be referred to instead by being placed with their incomplete definitions in self-evident statements, the truth of which can be tested by whether or not it is impossible to think the opposite without a contradiction. Thus, the truth of indefinable concepts and propositions using them is entirely a matter of logic.

An example of the above is that of the concepts "finite parts" and "wholes"; they cannot be defined without reference to each other and thus with some amount of circularity, but we can make the self-evident statement that "the whole is greater than any of its parts", and thus establish a meaning particular to the two concepts.

These two notions being granted, it can be said that statements of "ought" are measured by their prescriptive truth, just as statements of "is" are measured by their descriptive truth; and the descriptive truth of an "is" judgment is defined by its correspondence to reality (actual or in the mind), while the prescriptive truth of an "ought" judgment is defined according to a more limited scope—its correspondence to right desire (conceivable in the mind and able to be found in the rational appetite, but not in the more "actual" reality of things independent of the mind or rational appetite).

To some, this may immediately suggest the question: "How can we know what is a right desire if it is already admitted that it is not based on the more actual reality of things independent of the mind?" The beginning of the answer is found when we consider that the concepts "good", "bad", "right" and "wrong" are indefinables. Thus, right desire cannot be defined properly, but a way to refer to its meaning may be found through a self-evident prescriptive truth.

That self-evident truth which the moral cognitivist claims to exist upon which all other prescriptive truths are ultimately based is: One ought to desire what is really good for one and nothing else. The terms "real good" and "right desire" cannot be defined apart from each other, and thus their definitions would contain some degree of circularity, but the stated self-evident truth indicates a meaning particular to the ideas sought to be understood, and it is (the moral cognitivist might claim) impossible to think the opposite without a contradiction. Thus combined with other descriptive truths of what is good (goods in particular considered in terms of whether they suit a particular end and the limits to the possession of such particular goods being compatible with the general end of the possession of the total of all real goods throughout a whole life), a valid body of knowledge of right desire is generated.

Functionalist counterexamples

Several counterexamples have been offered by philosophers claiming to show that there are cases when an "ought" logically follows from an "is." First of all, Hilary Putnam, by tracing back the quarrel to Hume's dictum, claims fact/value entanglement as an objection, since the distinction between them entails a value. A. N. Prior points out, from the statement "He is a sea captain," it logically follows, "He ought to do what a sea captain ought to do." Alasdair MacIntyre points out, from the statement "This watch is grossly inaccurate and irregular in time-keeping and too heavy to carry about comfortably," the evaluative conclusion validly follows, "This is a bad watch." John Searle points out, from the statement "Jones promised to pay Smith five dollars," it logically follows that "Jones ought to pay Smith five dollars." The act of promising by definition places the promiser under obligation.

Moral realism

Philippa Foot adopts a moral realist position, criticizing the idea that when evaluation is superposed on fact there has been a "committal in a new dimension." She introduces, by analogy, the practical implications of using the word "injury." Not just anything counts as an injury. There must be some impairment. When we suppose a man wants the things the injury prevents him from obtaining, haven't we fallen into the old naturalist fallacy?

It may seem that the only way to make a necessary connection between "injury" and the things that are to be avoided, is to say that it is only used in an "action-guiding sense" when applied to something the speaker intends to avoid. But we should look carefully at the crucial move in that argument, and query the suggestion that someone might happen not to want anything for which he would need the use of hands or eyes. Hands and eyes, like ears and legs, play a part in so many operations that a man could only be said not to need them if he had no wants at all.

Foot argues that the virtues, like hands and eyes in the analogy, play so large a part in so many operations that it is implausible to suppose that a committal in a non-naturalist dimension is necessary to demonstrate their goodness.

Philosophers who have supposed that actual action was required if "good" were to be used in a sincere evaluation have got into difficulties over weakness of will, and they should surely agree that enough has been done if we can show that any man has reason to aim at virtue and avoid vice. But is this impossibly difficult if we consider the kinds of things that count as virtue and vice? Consider, for instance, the cardinal virtues, prudence, temperance, courage and justice. Obviously any man needs prudence, but does he not also need to resist the temptation of pleasure when there is harm involved? And how could it be argued that he would never need to face what was fearful for the sake of some good? It is not obvious what someone would mean if he said that temperance or courage were not good qualities, and this not because of the "praising" sense of these words, but because of the things that courage and temperance are.

Misunderstanding

Hilary Putnam argues that philosophers who accept Hume's "is–ought" distinction reject his reasons for making it, and thus undermine the entire claim.

Various scholars have also indicated that, in the very work where Hume argues for the is–ought problem, Hume himself derives an "ought" from an "is". Such seeming inconsistencies in Hume have led to an ongoing debate over whether Hume actually held to the is–ought problem in the first place, or whether he meant that ought inferences can be made but only with good argumentation.

Absurdism

From Wikipedia, the free encyclopedia

Sisyphus, the symbol of the absurdity of existence, painting by Franz Stuck (1920)

In philosophy, "the Absurd" refers to the conflict between the human tendency to seek inherent value and meaning in life and the human inability to find these with any certainty. The universe and the human mind do not each separately cause the Absurd; rather, the Absurd arises from the contradictory nature of the two existing simultaneously.

The absurdist philosopher Albert Camus stated that individuals should embrace the absurd condition of human existence.

Absurdism shares some concepts, and a common theoretical template, with existentialism and nihilism. It has its origins in the work of the 19th-century Danish philosopher Søren Kierkegaard, who chose to confront the crisis that humans face with the Absurd by developing his own existentialist philosophy. Absurdism as a belief system was born of the European existentialist movement that ensued, specifically when Camus rejected certain aspects of that philosophical line of thought and published his essay The Myth of Sisyphus. The aftermath of World War II provided the social environment that stimulated absurdist views and allowed for their popular development, especially in the devastated country of France.

Overview

... in spite of or in defiance of the whole of existence he wills to be himself with it, to take it along, almost defying his torment. For to hope in the possibility of help, not to speak of help by virtue of the absurd, that for God all things are possible – no, that he will not do. And as for seeking help from any other – no, that he will not do for all the world; rather than seek help he would prefer to be himself – with all the tortures of hell, if so it must be.

Søren Kierkegaard, The Sickness Unto Death

In absurdist philosophy, the Absurd arises out of the fundamental disharmony between the individual's search for meaning and the meaninglessness of the universe. In absurdist philosophy, there are also two certainties that permeate human existence. The first is that humans are constantly striving towards the acquisition of, or identification with, meaning and significance: something inherent in human nature seems to urge the individual to define meaning in their life. The second certainty is that the universe's silence and indifference to human life give the individual no assurance of any such meaning, leading to an existential dread. According to Camus, the absurd is highlighted when the desire to find meaning and the lack of meaning collide. The question then raised is whether we should resign ourselves to this despair. As beings looking for meaning in a meaningless world, humans have three ways of resolving the dilemma. Kierkegaard and Camus describe the solutions in their works, The Sickness Unto Death (1849) and The Myth of Sisyphus (1942), respectively:

  • Suicide (or, "escaping existence"): a solution in which a person ends one's own life. Both Kierkegaard and Camus dismiss the viability of this option. Camus states that it does not counter the Absurd. Rather, in the act of ending one's existence, one's existence only becomes more absurd.
  • Religious, spiritual, or abstract belief in a transcendent realm, being, or idea: a solution in which one believes in the existence of a reality that is beyond the Absurd, and, as such, has meaning. Kierkegaard stated that a belief in anything beyond the Absurd requires an irrational but perhaps necessary religious "leap" into the intangible and empirically unprovable (now commonly referred to as a "leap of faith"). However, Camus regarded this solution, and others, as "philosophical suicide".
  • Acceptance of the Absurd: a solution in which one accepts the Absurd and continues to live in spite of it. Camus endorsed this solution, believing that by accepting the Absurd, one can achieve the greatest extent of one's freedom. By recognizing no religious or other moral constraints, and by rebelling against the Absurd (through meaning-making) while simultaneously accepting it as unstoppable, one could find contentment through the transient personal meaning constructed in the process. Kierkegaard, on the other hand, regarded this solution as "demoniac madness": "He rages most of all at the thought that eternity might get it into its head to take his misery from him!"

Relationship to existentialism and nihilism

Absurdism originated from (as well as alongside) the 20th-century strains of existentialism and nihilism; it shares some prominent starting points with both, though also entails conclusions that are uniquely distinct from these other schools of thought. All three arose from the human experience of anguish and confusion stemming from the Absurd: the apparent meaninglessness in a world in which humans, nevertheless, are compelled to find or create meaning. The three schools of thought diverge from there. Existentialists have generally advocated the individual's construction of their own meaning in life as well as the free will of the individual. Nihilists, on the contrary, contend that "it is futile to seek or to affirm meaning where none can be found." Absurdists, following Camus's formulation, hesitantly allow the possibility for some meaning or value in life, but are neither as certain as existentialists are about the value of one's own constructed meaning nor as nihilists are about the total inability to create meaning. Absurdists following Camus also devalue or outright reject free will, encouraging merely that the individual live defiantly and authentically in spite of the psychological tension of the Absurd.

Camus himself passionately worked to counter nihilism, as he explained in his essay "The Rebel", while he also categorically rejected the label of "existentialist" in his essay "Enigma" and in the compilation The Lyrical and Critical Essays of Albert Camus, though he was, and still is, often broadly characterized by others as an existentialist. Both existentialism and absurdism entail consideration of the practical applications of becoming conscious of the truth of existential nihilism: i.e., how a driven seeker of meaning should act when suddenly confronted with the seeming concealment, or downright absence, of meaning in the universe. Camus's own understanding of the world (e.g., "a benign indifference", in The Stranger), and every vision he had for its progress, however, sets him apart from the general existentialist trend.

Basic relationships between existentialism, absurdism and nihilism


1. There is such a thing as meaning or value:
  • Monotheistic existentialism: Yes
  • Atheistic existentialism: Yes
  • Absurdism: It is a logical possibility.
  • Nihilism: No

2. There is inherent meaning in the universe:
  • Monotheistic existentialism: Yes, but the individual must have come to the knowledge of God.
  • Atheistic existentialism: No
  • Absurdism: No
  • Nihilism: No

3. The pursuit of meaning may have meaning in itself:
  • Monotheistic existentialism: Yes
  • Atheistic existentialism: Yes
  • Absurdism: Such a pursuit can and should generate meaning for an individual, but death still renders the activity "ultimately" meaningless.
  • Nihilism: No

4. The individual's construction of any type of meaning is possible:
  • Monotheistic existentialism: Yes, though this meaning would eventually incorporate God, being the creator of the universe and the "meaning" itself.
  • Atheistic existentialism: Yes, meaning-making in a world without inherent meaning is the goal of existentialism.
  • Absurdism: Yes, though it must face up to the Absurd, which means embracing the transient, personal nature of our meaning-making projects and the way they are nullified by death.
  • Nihilism: No

5. There is resolution to the individual's desire to seek meaning:
  • Monotheistic existentialism: Yes, the creation of one's own meaning involving God.
  • Atheistic existentialism: Yes, the creation of one's own meaning.
  • Absurdism: Embracing the absurd can allow one to find joy and meaning in one's own life, but the only "resolution" is in eventual annihilation by death.
  • Nihilism: No

Such a chart represents some of the overlap and tensions between existentialist and absurdist approaches to meaning. While absurdism can be seen as a kind of response to existentialism, it can be debated exactly how substantively the two positions differ from each other. The existentialist, after all, doesn't deny the reality of death. But the absurdist seems to reaffirm the way in which death ultimately nullifies our meaning-making activities, a conclusion the existentialists seem to resist through various notions of posterity or, in Sartre's case, participation in a grand humanist project.

Søren Kierkegaard

Kierkegaard designed the relationship framework based (in part) on how a person reacts to despair. Absurdist philosophy fits into the 'despair of defiance' rubric.

A century before Camus, the 19th-century Danish philosopher Søren Kierkegaard wrote extensively about the absurdity of the world. In his journals, Kierkegaard writes about the absurd:

What is the Absurd? It is, as may quite easily be seen, that I, a rational being, must act in a case where my reason, my powers of reflection, tell me: you can just as well do the one thing as the other, that is to say where my reason and reflection say: you cannot act and yet here is where I have to act... The Absurd, or to act by virtue of the absurd, is to act upon faith ... I must act, but reflection has closed the road so I take one of the possibilities and say: This is what I do, I cannot do otherwise because I am brought to a standstill by my powers of reflection.

— Kierkegaard, Søren, Journals, 1849

Here is another example of the Absurd from his writings:

What, then, is the absurd? The absurd is that the eternal truth has come into existence in time, that God has come into existence, has been born, has grown up, etc., has come into existence exactly as an individual human being, indistinguishable from any other human being, in as much as all immediate recognizability is pre-Socratic paganism and from the Jewish point of view is idolatry.
—Kierkegaard, Concluding Unscientific Postscript, 1846, Hong 1992, p. 210

How can this absurdity be held or believed? Kierkegaard says:

I gladly undertake, by way of brief repetition, to emphasize what other pseudonyms have emphasized. The absurd is not the absurd or absurdities without any distinction (wherefore Johannes de Silentio: "How many of our age understand what the absurd is?"). The absurd is a category, and the most developed thought is required to define the Christian absurd accurately and with conceptual correctness. The absurd is a category, the negative criterion, of the divine or of the relationship to the divine. When the believer has faith, the absurd is not the absurd — faith transforms it, but in every weak moment it is again more or less absurd to him. The passion of faith is the only thing which masters the absurd — if not, then faith is not faith in the strictest sense, but a kind of knowledge. The absurd terminates negatively before the sphere of faith, which is a sphere by itself. To a third person the believer relates himself by virtue of the absurd; so must a third person judge, for a third person does not have the passion of faith. Johannes de Silentio has never claimed to be a believer; just the opposite, he has explained that he is not a believer — in order to illuminate faith negatively.
Journals of Søren Kierkegaard X6B 79

Kierkegaard provides an example in Fear and Trembling (1843), which was published under the pseudonym Johannes de Silentio. In the story of Abraham in the Book of Genesis, Abraham is told by God to kill his son Isaac. Just as Abraham is about to kill Isaac, an angel stops Abraham from doing so. Kierkegaard believes that through virtue of the absurd, Abraham, defying all reason and ethical duties ("you cannot act"), got back his son and reaffirmed his faith ("where I have to act").

Another instance of absurdist themes in Kierkegaard's work appears in The Sickness Unto Death, which Kierkegaard signed with pseudonym Anti-Climacus. Exploring the forms of despair, Kierkegaard examines the type of despair known as defiance. In the opening quotation reproduced at the beginning of the article, Kierkegaard describes how such a man would endure such a defiance and identifies the three major traits of the Absurd Man, later discussed by Albert Camus: a rejection of escaping existence (suicide), a rejection of help from a higher power and acceptance of his absurd (and despairing) condition.

According to Kierkegaard in his autobiography The Point of View of My Work as an Author, most of his pseudonymous writings are not necessarily reflective of his own opinions. Nevertheless, his work anticipated many absurdist themes and provided its theoretical background.

Albert Camus

Though the notion of the 'absurd' pervades all Albert Camus's writing, The Myth of Sisyphus is his chief work on the subject. In it, Camus considers absurdity as a confrontation, an opposition, a conflict or a "divorce" between two ideals. Specifically, he defines the human condition as absurd, as the confrontation between man's desire for significance, meaning and clarity on the one hand – and the silent, cold universe on the other. He continues that there are specific human experiences evoking notions of absurdity. Such a realization or encounter with the absurd leaves the individual with a choice: suicide, a leap of faith, or recognition. He concludes that recognition is the only defensible option.

For Camus, suicide is a "confession" that life is not worth living; it is a choice that implicitly declares that life is "too much." Suicide offers the most basic "way out" of absurdity: the immediate termination of the self and its place in the universe.

The absurd encounter can also arouse a "leap of faith," a term derived from one of Kierkegaard's early pseudonyms, Johannes de Silentio (although the term was not used by Kierkegaard himself), where one believes that there is more than the rational life (aesthetic or ethical). To take a "leap of faith," one must act with the "virtue of the absurd" (as Johannes de Silentio put it), where a suspension of the ethical may need to exist. This faith has no expectations, but is a flexible power initiated by a recognition of the absurd. (Although at some point, one recognizes or encounters the existence of the Absurd and, in response, actively ignores it.) However, Camus states that because the leap of faith escapes rationality and defers to abstraction over personal experience, the leap of faith is not absurd. Camus considers the leap of faith as "philosophical suicide," rejecting both this and physical suicide.

Lastly, a person can choose to embrace the absurd condition. According to Camus, one's freedom – and the opportunity to give life meaning – lies in the recognition of absurdity. If the absurd experience is truly the realization that the universe is fundamentally devoid of absolutes, then we as individuals are truly free. "To live without appeal," as he puts it, is a philosophical move to define absolutes and universals subjectively, rather than objectively. The freedom of humans is thus established in a human's natural ability and opportunity to create their own meaning and purpose; to decide (or think) for him- or herself. The individual becomes the most precious unit of existence, representing a set of unique ideals that can be characterized as an entire universe in its own right. In acknowledging the absurdity of seeking any inherent meaning, but continuing this search regardless, one can be happy, gradually developing meaning from the search alone.

Camus states in The Myth of Sisyphus: "Thus I draw from the absurd three consequences, which are my revolt, my freedom, and my passion. By the mere activity of consciousness I transform into a rule of life what was an invitation to death, and I refuse suicide." "Revolt" here refers to the refusal of suicide and search for meaning despite the revelation of the Absurd; "Freedom" refers to the lack of imprisonment by religious devotion or others' moral codes; "Passion" refers to the most wholehearted experiencing of life, since hope has been rejected, and so he concludes that every moment must be lived fully.

Nihilism

From Wikipedia, the free encyclopedia

Nihilism (/ˈnaɪ(h)ɪlɪzəm, ˈniː-/; from Latin nihil 'nothing') is a philosophy, or family of views within philosophy, expressing the negation (i.e., denial) of general aspects of life that are widely accepted within humanity as objectively real, such as knowledge, existence, and the meaning of life. Different nihilist positions hold variously that human values are baseless, that life is meaningless, that knowledge is impossible, or that some set of entities do not exist.

The study of nihilism may regard it as merely a label that has been applied to various separate philosophies, or as a distinct historical concept arising out of nominalism, skepticism, and philosophical pessimism, as well as possibly out of Christianity itself. Contemporary understanding of the idea stems largely from the Nietzschean 'crisis of nihilism', from which derive two central concepts: the destruction of higher values and the opposition to the affirmation of life. Earlier forms of nihilism, however, may be more selective in negating specific hegemonies of social, moral, political and aesthetic thought.

The term is sometimes used in association with anomie to explain the general mood of despair at a perceived pointlessness of existence or arbitrariness of human principles and social institutions. Nihilism has also been described as conspicuous in or constitutive of certain historical periods. For example, Jean Baudrillard and others have characterized postmodernity as a nihilistic epoch or mode of thought. Likewise, some theologians and religious figures have stated that postmodernity and many aspects of modernity represent nihilism by a negation of religious principles. Nihilism has, however, been widely ascribed to both religious and irreligious viewpoints.

In popular use, the term commonly refers to forms of existential nihilism, according to which life is without intrinsic value, meaning, or purpose. Other prominent positions within nihilism include the rejection of all normative and ethical views (§ Moral nihilism), the rejection of all social and political institutions (§ Political nihilism), the stance that no knowledge can or does exist (§ Epistemological nihilism), and a number of metaphysical positions, which assert that non-abstract objects do not exist (§ Metaphysical nihilism), that composite objects do not exist (§ Mereological nihilism), or even that life itself does not exist.

Etymology, terminology and definition

The etymological origin of nihilism is the Latin root word nihil, meaning 'nothing', which is similarly found in the related terms annihilate, meaning 'to bring to nothing', and nihility, meaning 'nothingness'. The term nihilism emerged in several places in Europe during the 18th century, notably in the German form Nihilismus, though it was also in use during the Middle Ages to denote certain forms of heresy. The concept itself first took shape within Russian and German philosophy, which respectively represented the two major currents of discourse on nihilism prior to the 20th century. The term likely entered English from either the German Nihilismus, Late Latin nihilismus, or French nihilisme.

Early examples of the term's use are found in German publications. In 1733, German writer Friedrich Lebrecht Goetz used it as a literary term in combination with noism (German: Neinismus). In the period surrounding the French Revolution, the term was also a pejorative for certain value-destructive trends of modernity, namely the negation of Christianity and European tradition in general. Nihilism first entered philosophical study within a discourse surrounding Kantian and post-Kantian philosophies, notably appearing in the writings of Swiss esotericist Jacob Hermann Obereit in 1787 and German philosopher Friedrich Heinrich Jacobi in 1799. As early as 1824, the term began to take on a social connotation with German journalist Joseph von Görres attributing it to a negation of existing social and political institutions. The Russian form of the word, nigilizm (Russian: нигилизм), entered publication in 1829 when Nikolai Nadezhdin used it synonymously with skepticism. In Russian journalism the word continued to have significant social connotations.

From the time of Jacobi, the term fell almost completely out of use throughout Europe until it was revived by Russian author Ivan Turgenev, who brought the word into popular use with his 1862 novel Fathers and Sons, leading many scholars to believe he coined the term. The nihilist characters of the novel define themselves as those who "deny everything", who do "not take any principle on faith, whatever reverence that principle may be enshrined in", and who hold that "at the present time, negation is the most useful of all". Despite Turgenev's own anti-nihilistic leanings, many of his readers likewise took up the name of nihilist, thus giving the Russian nihilist movement its name. Returning to German philosophy, nihilism was further discussed by German philosopher Friedrich Nietzsche, who used the term to describe the disintegration of traditional morality in the Western world. For Nietzsche, nihilism applied both to the modern trends of value-destruction expressed in the 'death of God' and to what he saw as the life-denying morality of Christianity. Under Nietzsche's profound influence, the term was then further treated within French philosophy and continental philosophy more broadly, while the influence of nihilism in Russia arguably continued well into the Soviet era.

Religious scholars such as Altizer have stated that nihilism must necessarily be understood in relation to religion, and that the study of core elements of its character requires fundamentally theological consideration.

History

Buddhism

The concept of nihilism was discussed by the Buddha (563 B.C. to 483 B.C.), as recorded in the Theravada and Mahayana Tripiṭaka. The Tripiṭaka, originally written in Pali, refers to nihilism as natthikavāda and the nihilist view as micchādiṭṭhi. Various sutras within it describe a multiplicity of views held by different sects of ascetics while the Buddha was alive, some of which were viewed by him to be morally nihilistic. In the "Doctrine of Nihilism" in the Apannaka Sutta, the Buddha describes moral nihilists as holding the following views:

  • Giving produces no beneficial results;
  • Good and bad actions produce no results;
  • After death, beings are not reborn into the present world or into another world; and
  • There is no one in the world who, through direct knowledge, can confirm that beings are reborn into this world or into another world.

The Buddha further states that those who hold these views will fail to see the virtue in good mental, verbal, and bodily conduct and the corresponding dangers in misconduct, and will therefore tend towards the latter.

Nirvana and nihilism

The culmination of the path that the Buddha taught was nirvana, "a place of nothingness…nonpossession and…non-attachment…[which is] the total end of death and decay." Ajahn Amaro, an ordained Buddhist monk of more than 40 years, observes that in English nothingness can sound like nihilism. However, the word could be emphasized in a different way, so that it becomes no-thingness, indicating that nirvana is not a thing you can find, but rather a state where you experience the reality of non-grasping.

In the Alagaddupama Sutta, the Buddha describes how some individuals feared his teaching because they believed that their self would be destroyed if they followed it. He describes this as an anxiety caused by the false belief in an unchanging, everlasting self. All things are subject to change, and taking any impermanent phenomenon to be a self causes suffering. Nonetheless, his critics called him a nihilist who teaches the annihilation and extermination of an existing being. The Buddha's response was that he only teaches the cessation of suffering. When an individual has given up craving and the conceit of 'I am', their mind is liberated, they no longer come into any state of 'being', and are no longer born again.

The Aggi-Vacchagotta Sutta records a conversation between the Buddha and an individual named Vaccha that further elaborates on this. In the sutta, Vaccha asks the Buddha to confirm one of the following, with respect to the existence of the Buddha after death:

  • After death a Buddha reappears somewhere else
  • After death a Buddha does not reappear
  • After death a Buddha both does and does not reappear
  • After death a Buddha neither does nor does not reappear

To all four questions, the Buddha answers that the terms "appear," "not appear," "does and does not reappear," and "neither does nor does not reappear" do not apply. When Vaccha expresses puzzlement, the Buddha asks Vaccha a counter question to the effect of: if a fire were to go out and someone were to ask you whether the fire went north, south, east or west, how would you reply? Vaccha replies that the question does not apply and that an extinguished fire can only be classified as 'out'.

Ṭhānissaro Bhikkhu elaborates on the classification problem around the words 'reappear,' etc. with respect to the Buddha and Nirvana by stating that a "person who has attained the goal [nirvana] is thus indescribable because [they have] abandoned all things by which [they] could be described." The Suttas themselves describe the liberated mind as 'untraceable' or as 'consciousness without feature', making no distinction between the mind of a liberated being that is alive and the mind of one that is no longer alive.

Despite the Buddha's explanations to the contrary, Buddhist practitioners may, at times, still approach Buddhism in a nihilistic manner. Ajahn Amaro illustrates this by retelling the story of a Buddhist monk, Ajahn Sumedho, who in his early years took a nihilistic approach to Nirvana. A distinct feature of Nirvana in Buddhism is that an individual attaining it is no longer subject to rebirth. Ajahn Sumedho, during a conversation with his teacher Ajahn Chah, comments that he is "determined above all things to fully realize Nirvana in this lifetime…deeply weary of the human condition and…[is] determined not to be born again." To this, Ajahn Chah replies: "what about the rest of us, Sumedho? Don't you care about those who'll be left behind?" Ajahn Amaro comments that Ajahn Chah could detect that his student had a nihilistic aversion to life rather than true detachment.

Jacobi

The term nihilism was first introduced by Friedrich Heinrich Jacobi (1743–1819), who used it to characterize rationalism, and in particular Spinoza's determinism and the Aufklärung, in order to carry out a reductio ad absurdum according to which all rationalism (philosophy as criticism) reduces to nihilism—and thus should be avoided and replaced with a return to some type of faith and revelation. Bret W. Davis writes, for example:

The first philosophical development of the idea of nihilism is generally ascribed to Friedrich Jacobi, who in a famous letter criticized Fichte's idealism as falling into nihilism. According to Jacobi, Fichte's absolutization of the ego (the 'absolute I' that posits the 'not-I') is an inflation of subjectivity that denies the absolute transcendence of God.

A related but oppositional concept is fideism, which sees reason as hostile and inferior to faith.

Kierkegaard

Unfinished sketch, c. 1840, of Søren Kierkegaard by his cousin Niels Christian Kierkegaard

Søren Kierkegaard (1813–1855) posited an early form of nihilism, which he referred to as leveling. He saw leveling as the process of suppressing individuality to a point where an individual's uniqueness becomes non-existent and nothing meaningful in one's existence can be affirmed:

Levelling at its maximum is like the stillness of death, where one can hear one's own heartbeat, a stillness like death, into which nothing can penetrate, in which everything sinks, powerless. One person can head a rebellion, but one person cannot head this levelling process, for that would make him a leader and he would avoid being levelled. Each individual can in his little circle participate in this levelling, but it is an abstract process, and levelling is abstraction conquering individuality.

— The Present Age, translated by Alexander Dru, with Foreword by Walter Kaufmann, 1962, pp. 51–53

Kierkegaard, an advocate of a philosophy of life, generally argued against levelling and its nihilistic consequences, although he believed it would be "genuinely educative to live in the age of levelling [because] people will be forced to face the judgement of [levelling] alone." George Cotkin asserts Kierkegaard was against "the standardization and levelling of belief, both spiritual and political, in the nineteenth century," and that Kierkegaard "opposed tendencies in mass culture to reduce the individual to a cipher of conformity and deference to the dominant opinion." In his day, tabloids (like the Danish magazine Corsaren) and apostate Christianity were instruments of levelling and contributed to the "reflective apathetic age" of 19th century Europe. Kierkegaard argues that individuals who can overcome the levelling process are stronger for it, and that it represents a step in the right direction towards "becoming a true self." Because levelling must be overcome, Hubert Dreyfus and Jane Rubin argue that Kierkegaard's interest, "in an increasingly nihilistic age, is in how we can recover the sense that our lives are meaningful."

Russian nihilism

Portrait of a nihilist student by Ilya Repin

In the period 1860–1917, Russian nihilism was both a nascent form of nihilist philosophy and a broad cultural movement that overlapped with certain revolutionary tendencies of the era, for which it was often wrongly characterized as a form of political terrorism. Russian nihilism centered on the dissolution of existing values and ideals, incorporating theories of hard determinism, atheism, materialism, positivism, and rational egoism, while rejecting metaphysics, sentimentalism, and aestheticism. Leading philosophers of this school of thought included Nikolay Chernyshevsky and Dmitry Pisarev.

The intellectual origins of the Russian nihilist movement can be traced back to 1855, and perhaps earlier, when it was principally a philosophy of extreme moral and epistemological skepticism. However, it was not until 1862 that the name nihilism was first popularized, when Ivan Turgenev used the term in his celebrated novel Fathers and Sons to describe the disillusionment of the younger generation towards both the progressives and traditionalists that came before them, as well as its manifestation in the view that negation and value-destruction were most necessary to the present conditions. The movement very soon adopted the name, despite the novel's initially harsh reception among both the conservatives and the younger generation.

Though philosophically both nihilistic and skeptical, Russian nihilism did not unilaterally negate ethics and knowledge as may be assumed, nor did it espouse meaninglessness unequivocally. Even so, contemporary scholarship has challenged the equating of Russian nihilism with mere skepticism, instead identifying it as a fundamentally Promethean movement. As passionate advocates of negation, the nihilists sought to liberate the Promethean might of the Russian people which they saw embodied in a class of prototypal individuals, or new types in their own words. These individuals, according to Pisarev, in freeing themselves from all authority become exempt from moral authority as well, and are distinguished above the rabble or common masses.

Later interpretations of nihilism were heavily influenced by works of anti-nihilistic literature, such as those of Fyodor Dostoevsky, which arose in response to Russian nihilism. "In contrast to the corrupted nihilists [of the real world], who tried to numb their nihilistic sensitivity and forget themselves through self-indulgence, Dostoevsky's figures voluntarily leap into nihilism and try to be themselves within its boundaries", writes contemporary scholar Nishitani. "The nihility expressed in 'if there is no God, everything is permitted', or 'après moi, le déluge', provides a principle whose sincerity they try to live out to the end. They search for and experiment with ways for the self to justify itself after God has disappeared."

Nietzsche

Nihilism is often associated with the German philosopher Friedrich Nietzsche, who provided a detailed diagnosis of nihilism as a widespread phenomenon of Western culture. Though the notion appears frequently throughout Nietzsche's work, he uses the term in a variety of ways, with different meanings and connotations.

Karen L. Carr describes Nietzsche's characterization of nihilism "as a condition of tension, as a disproportion between what we want to value (or need) and how the world appears to operate." When we find out that the world does not possess the objective value or meaning that we want it to have or have long since believed it to have, we find ourselves in a crisis. Nietzsche asserts that with the decline of Christianity and the rise of physiological decadence, nihilism is in fact characteristic of the modern age, though he implies that the rise of nihilism is still incomplete and that it has yet to be overcome. Though the problem of nihilism becomes especially explicit in Nietzsche's notebooks (published posthumously), it is mentioned repeatedly in his published works and is closely connected to many of the problems mentioned there.

Nietzsche characterized nihilism as emptying the world, and especially human existence, of meaning, purpose, comprehensible truth, or essential value. This observation stems in part from Nietzsche's perspectivism, or his notion that "knowledge" is always by someone of some thing: it is always bound by perspective, and it is never mere fact. Rather, there are interpretations through which we understand the world and give it meaning. Interpreting is something we cannot go without; in fact, it is a condition of subjectivity. One way of interpreting the world is through morality, one of the fundamental ways in which people make sense of the world, especially in regard to their own thoughts and actions. Nietzsche distinguishes a morality that is strong or healthy, meaning that the person in question is aware that he constructs it himself, from weak morality, where the interpretation is projected onto something external.

Nietzsche discusses Christianity, one of the major topics in his work, at length in the context of the problem of nihilism in his notebooks, in a chapter entitled "European Nihilism." Here he states that the Christian moral doctrine provides people with intrinsic value, belief in God (which justifies the evil in the world) and a basis for objective knowledge. In this sense, in constructing a world where objective knowledge is possible, Christianity is an antidote against a primal form of nihilism, against the despair of meaninglessness. However, it is exactly the element of truthfulness in Christian doctrine that is its undoing: in its drive towards truth, Christianity eventually finds itself to be a construct, which leads to its own dissolution. This is why Nietzsche states that we have outgrown Christianity "not because we lived too far from it, rather because we lived too close". As such, the self-dissolution of Christianity constitutes yet another form of nihilism. Because Christianity was an interpretation that posited itself as the interpretation, Nietzsche states that this dissolution leads beyond skepticism to a distrust of all meaning.

Stanley Rosen identifies Nietzsche's concept of nihilism with a situation of meaninglessness, in which "everything is permitted." According to him, the loss of higher metaphysical values that exist in contrast to the base reality of the world, or merely human ideas, gives rise to the idea that all human ideas are therefore valueless. Rejecting idealism thus results in nihilism, because only similarly transcendent ideals live up to the previous standards that the nihilist still implicitly holds. The inability of Christianity to serve as a source of value for the world is reflected in Nietzsche's famous aphorism of the madman in The Gay Science. The death of God, in particular the statement that "we killed him", is similar to the self-dissolution of Christian doctrine: due to the advances of the sciences, which for Nietzsche show that man is the product of evolution, that Earth has no special place among the stars and that history is not progressive, the Christian notion of God can no longer serve as a basis for a morality.

One such reaction to the loss of meaning is what Nietzsche calls passive nihilism, which he recognizes in the pessimistic philosophy of Schopenhauer. Schopenhauer's doctrine, which Nietzsche also refers to as Western Buddhism, advocates separating oneself from will and desires in order to reduce suffering. Nietzsche characterizes this ascetic attitude as a "will to nothingness", whereby life turns away from itself, as there is nothing of value to be found in the world. This moving away from all value in the world is characteristic of the nihilist, although in this, the nihilist appears inconsistent: this "will to nothingness" is still a form of valuation or willing. He describes this as "an inconsistency on the part of the nihilists":

A nihilist is a man who judges of the world as it is that it ought not to be, and of the world as it ought to be that it does not exist. According to this view, our existence (action, suffering, willing, feeling) has no meaning: the pathos of 'in vain' is the nihilists' pathos – at the same time, as pathos, an inconsistency on the part of the nihilists.

— Friedrich Nietzsche, KSA 12:9 [60], taken from The Will to Power, section 585, translated by Walter Kaufmann

Nietzsche's relation to the problem of nihilism is a complex one. He approaches the problem of nihilism as deeply personal, stating that this predicament of the modern world is a problem that has "become conscious" in him. According to Nietzsche, it is only when nihilism is overcome that a culture can have a true foundation upon which to thrive. He wished to hasten its coming only so that he could also hasten its ultimate departure.

He states that there is at least the possibility of another type of nihilist in the wake of Christianity's self-dissolution, one that does not stop after the destruction of all value and meaning and succumb to the following nothingness. This alternative, 'active' nihilism, on the other hand, destroys in order to level the field for constructing something new. This form of nihilism is characterized by Nietzsche as "a sign of strength," a willful destruction of the old values to wipe the slate clean and lay down one's own beliefs and interpretations, contrary to the passive nihilism that resigns itself to the decomposition of the old values. This willful destruction of values and the overcoming of the condition of nihilism by the construction of new meaning, this active nihilism, could be related to what Nietzsche elsewhere calls a free spirit or the Übermensch from Thus Spoke Zarathustra and The Antichrist, the model of the strong individual who posits his own values and lives his life as if it were his own work of art. It may be questioned, though, whether "active nihilism" is indeed the correct term for this stance, and some question whether Nietzsche takes the problems nihilism poses seriously enough.

Heideggerean interpretation of Nietzsche

Martin Heidegger's interpretation of Nietzsche influenced many postmodern thinkers who investigated the problem of nihilism as put forward by Nietzsche. Only recently has Heidegger's influence on Nietzschean nihilism research faded. As early as the 1930s, Heidegger was giving lectures on Nietzsche's thought. Given the importance of Nietzsche's contribution to the topic of nihilism, Heidegger's influential interpretation of Nietzsche is important for the historical development of the term nihilism.

Heidegger's method of researching and teaching Nietzsche is explicitly his own. He does not specifically try to present Nietzsche as Nietzsche. He rather tries to incorporate Nietzsche's thoughts into his own philosophical system of Being, Time and Dasein. In his Nihilism as Determined by the History of Being (1944–46), Heidegger tries to understand Nietzsche's nihilism as the attempt to achieve a victory through the devaluation of what had, until then, been the highest values. The principle of this devaluation is, according to Heidegger, the will to power. The will to power is also the principle of every earlier valuation of values. How does this devaluation occur, and why is it nihilistic? One of Heidegger's main critiques of philosophy is that philosophy, and more specifically metaphysics, has forgotten to discriminate between investigating the notion of a being (seiende) and Being (Sein). According to Heidegger, the history of Western thought can be seen as the history of metaphysics. Moreover, because metaphysics has forgotten to ask about the notion of Being (what Heidegger calls Seinsvergessenheit), it is a history about the destruction of Being. That is why Heidegger calls metaphysics nihilistic. This makes Nietzsche's metaphysics not a victory over nihilism, but a perfection of it.

Heidegger, in his interpretation of Nietzsche, has been inspired by Ernst Jünger. Many references to Jünger can be found in Heidegger's lectures on Nietzsche. For example, in a letter to the rector of Freiburg University of November 4, 1945, Heidegger, inspired by Jünger, tries to explain the notion of "God is dead" as the "reality of the Will to Power." Heidegger also praises Jünger for defending Nietzsche against a too biological or anthropological reading during the Nazi era.

Heidegger's interpretation of Nietzsche influenced a number of important postmodernist thinkers. Gianni Vattimo points at a back-and-forth movement in European thought, between Nietzsche and Heidegger. During the 1960s, a Nietzschean 'renaissance' began, culminating in the work of Mazzino Montinari and Giorgio Colli. They began work on a new and complete edition of Nietzsche's collected works, making Nietzsche more accessible for scholarly research. Vattimo explains that with this new edition of Colli and Montinari, a critical reception of Heidegger's interpretation of Nietzsche began to take shape. Like other contemporary French and Italian philosophers, Vattimo does not want, or only partially wants, to rely on Heidegger for understanding Nietzsche. On the other hand, Vattimo judges Heidegger's intentions authentic enough to keep pursuing them. Among the philosophers Vattimo cites as part of this back-and-forth movement are the French philosophers Deleuze, Foucault and Derrida, and the Italian philosophers Cacciari, Severino and Vattimo himself. Jürgen Habermas, Jean-François Lyotard and Richard Rorty are also philosophers influenced by Heidegger's interpretation of Nietzsche.

Deleuzean interpretation of Nietzsche

Gilles Deleuze's interpretation of Nietzsche's concept of nihilism is different from – in some sense diametrically opposed to – the usual definition (as outlined in the rest of this article). Nihilism is one of the main topics of Deleuze's early book Nietzsche and Philosophy (1962). There, Deleuze repeatedly interprets Nietzsche's nihilism as "the enterprise of denying life and depreciating existence". Nihilism thus defined is therefore not the denial of higher values, or the denial of meaning, but rather the depreciation of life in the name of such higher values or meaning. Deleuze therefore (with, he claims, Nietzsche) says that Christianity and Platonism, and with them the whole of metaphysics, are intrinsically nihilist.

Postmodernism

Postmodern and poststructuralist thought has questioned the very grounds on which Western cultures have based their 'truths': absolute knowledge and meaning, a 'decentralization' of authorship, the accumulation of positive knowledge, historical progress, and certain ideals and practices of humanism and the Enlightenment.

Derrida

Jacques Derrida, whose deconstruction is perhaps most commonly labeled nihilistic, did not himself make the nihilistic move that others have claimed. Derridean deconstructionists argue that this approach rather frees texts, individuals or organizations from a restrictive truth, and that deconstruction opens up the possibility of other ways of being. Gayatri Chakravorty Spivak, for example, uses deconstruction to create an ethics of opening up Western scholarship to the voice of the subaltern and to philosophies outside of the canon of western texts. Derrida himself built a philosophy based upon a 'responsibility to the other'. Deconstruction can thus be seen not as a denial of truth, but as a denial of our ability to know truth. That is to say, it makes an epistemological claim, as opposed to nihilism's ontological claim.

Lyotard

Lyotard argues that, rather than relying on an objective truth or method to prove their claims, philosophers legitimize their truths by reference to a story about the world that can't be separated from the age and system the stories belong to—referred to by Lyotard as meta-narratives. He then goes on to define the postmodern condition as characterized by a rejection both of these meta-narratives and of the process of legitimation by meta-narratives. This concept of the instability of truth and meaning leads in the direction of nihilism, though Lyotard stops short of embracing the latter. In lieu of meta-narratives we have created new language-games in order to legitimize our claims which rely on changing relationships and mutable truths, none of which is privileged over the others in speaking to ultimate truth.

Baudrillard

Postmodern theorist Jean Baudrillard wrote briefly of nihilism from the postmodern viewpoint in Simulacra and Simulation. He focused mainly on interpretations of the real world over the simulations of which the real world is composed. The uses of meaning were an important subject in Baudrillard's discussion of nihilism:

The apocalypse is finished, today it is the precession of the neutral, of forms of the neutral and of indifference...all that remains, is the fascination for desertlike and indifferent forms, for the very operation of the system that annihilates us. Now, fascination (in contrast to seduction, which was attached to appearances, and to dialectical reason, which was attached to meaning) is a nihilistic passion par excellence, it is the passion proper to the mode of disappearance. We are fascinated by all forms of disappearance, of our disappearance. Melancholic and fascinated, such is our general situation in an era of involuntary transparency.

— Jean Baudrillard, Simulacra and Simulation, "On Nihilism", trans. 1995

Positions

Since the 20th century, nihilism has encompassed a range of positions within various fields of philosophy. Each of these, as the Encyclopædia Britannica states, "denied the existence of genuine moral truths or values, rejected the possibility of knowledge or communication, and asserted the ultimate meaninglessness or purposelessness of life or of the universe."

  • Cosmic nihilism is the position that reality or the cosmos is either wholly or significantly unintelligible and that it provides no foundation for human aims and principles. Particularly, it may regard the cosmos as distinctly hostile or indifferent to humanity. It is often related to both epistemological and existential nihilism, as well as cosmicism.
  • Epistemological nihilism is a form of philosophical skepticism according to which knowledge does not exist, or, if it does exist, it is unattainable for human beings. It should not be confused with epistemological fallibilism, according to which all knowledge is uncertain.
  • Existential nihilism is the position that life has no intrinsic meaning or value. With respect to the universe, existential nihilism posits that a single human or even the entire human species is insignificant, without purpose, and unlikely to change in the totality of existence. The meaninglessness of life is largely explored in the philosophical school of existentialism, where one can create their own subjective meaning or purpose. In popular use, "nihilism" now most commonly refers to forms of existential nihilism.
  • Metaphysical nihilism is the position that concrete objects and physical constructs might not exist in some possible world, or that, even if there exist possible worlds that contain some concrete objects, there is at least one that contains only abstract objects.
    • Extreme metaphysical nihilism, also sometimes called ontological nihilism, is the position that nothing actually exists at all. The American Heritage Medical Dictionary defines one form of nihilism as "an extreme form of skepticism that denies all existence." A similar skepticism concerning the concrete world can be found in solipsism. However, despite the fact that both views deny the certainty of objects' true existence, the nihilist would deny the existence of self, whereas the solipsist would affirm it. Both of these positions are considered forms of anti-realism.
    • Mereological nihilism, also called compositional nihilism, is the metaphysical position that objects with proper parts do not exist. This position applies to objects in space, and also to objects existing in time, which are posited to have no temporal parts. Rather, only basic building blocks without parts exist, and thus the world we see and experience, full of objects with parts, is a product of human misperception (i.e., if we could see clearly, we would not perceive compositive objects). This interpretation of existence must be based on resolution: The resolution with which humans see and perceive the "improper parts" of the world is not an objective fact of reality, but is rather an implicit trait that can only be qualitatively explored and expressed. Therefore, there is no arguable way to surmise or measure the validity of mereological nihilism. For example, an ant can get lost on a large cylindrical object because the circumference of the object is so large with respect to the ant that the ant effectively feels as though the object has no curvature. Thus, the resolution with which the ant views the world it exists "within" is an important determining factor in how the ant experiences this "within the world" feeling.
  • Moral nihilism, also called ethical nihilism, is the meta-ethical position that no morality or ethics exists whatsoever; therefore, no action is ever morally preferable to any other. Moral nihilism is distinct from both moral relativism and expressivism in that it does not acknowledge socially constructed values as personal or cultural moralities. It may also differ from other moral positions within nihilism that, rather than argue there is no morality, hold that if it does exist, it is a human construction and thus artificial, wherein any and all meaning is relative for different possible outcomes. An alternative scholarly perspective is that moral nihilism is a morality in itself. Cooper writes, "In the widest sense of the word 'morality', moral nihilism is a morality."
  • Passive and active nihilism, the former of which is also equated to philosophical pessimism, refer to two approaches to nihilist thought; passive nihilism sees nihility as an end in itself, whereas active nihilism attempts to surpass it. For Nietzsche, passive nihilism further encapsulates the "will to nothing" and the modern condition of resignation or unawareness towards the dissolution of higher values brought about by the 19th century.
  • Political nihilism is the position holding no political goals whatsoever, except for the complete destruction of all existing political institutions—along with the principles, values, and social institutions that uphold them. Though often related to anarchism, it may differ in that it presents no method of social organisation after a negation of the current political structure has taken place. An analysis of political nihilism is further presented by Leo Strauss.
  • Therapeutic nihilism, also called medical nihilism, is the position that the effectiveness of medical intervention is dubious or without merit. Dealing with the philosophy of science as it relates to the contextualized demarcation of medical research, Jacob Stegenga applies Bayes' theorem to medical research and argues for the premise that "even when presented with evidence for a hypothesis regarding the effectiveness of a medical intervention, we ought to have low confidence in that hypothesis." (A sketch of the Bayesian form of this argument appears after this list.)
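
To make the Bayesian structure of that last argument concrete, the following is a minimal sketch using Bayes' theorem in its standard form; the symbols are illustrative assumptions (H for the hypothesis that an intervention is effective, E for the trial evidence) and are not drawn from Stegenga's own notation:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

Read this way, if the prior probability P(H) that a given intervention is effective is low, and the evidence E only weakly discriminates between effectiveness and ineffectiveness (P(E | H) is not much larger than P(E | ¬H), for example because trials are small or susceptible to bias), then the posterior P(H | E) remains low, which is the form of the conclusion quoted above.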

In culture and the arts

Dada

The term Dada was first used by Richard Huelsenbeck and Tristan Tzara in 1916. The movement, which lasted from approximately 1916 to 1923, arose during World War I, an event that influenced the artists. The Dada Movement began in the old town of Zürich, Switzerland—known as the "Niederdorf" or "Niederdörfli"—at the Cabaret Voltaire. The Dadaists claimed that Dada was not an art movement, but an anti-art movement, sometimes using found objects in a manner similar to found poetry.

The "anti-art" drive is thought to have stemmed from a post-war emptiness. This tendency toward devaluation of art has led many to claim that Dada was an essentially nihilistic movement. Given that Dada created its own means for interpreting its products, it is difficult to classify alongside most other contemporary art expressions. Due to perceived ambiguity, it has been classified as a nihilistic modus vivendi.

Literature

The term "nihilism" was actually popularized in 1862 by Ivan Turgenev in his novel Fathers and Sons, whose hero, Bazarov, was a nihilist and recruited several followers to the philosophy. He found his nihilistic ways challenged upon falling in love.

Anton Chekhov portrayed nihilism when writing Three Sisters. The phrase "what does it matter", or variants of it, is often spoken by several characters in response to events; the significance of some of these events suggests that these characters subscribe to nihilism as a type of coping strategy.

The philosophical ideas of the French author, the Marquis de Sade, are often noted as early examples of nihilistic principles.

 

Marriage in Islam

From Wikipedia, the free encyclopedia ...