Human ethology is the study of human behavior.
Ethology as a discipline is generally thought of as a sub-category of biology, though psychological theories have been developed based on ethological ideas (e.g. sociobiology, evolutionary psychology, attachment theory, and theories about human universals
such as gender differences, incest avoidance, mourning, hierarchy and
pursuit of possession). This bridging of the biological and
social sciences underpins human ethology. The International Society for Human Ethology is dedicated to advancing the study and understanding of human ethology.
History
Ethology has its roots in the study of evolution,
especially after the theory's rise in popularity following Darwin's
detailed observations. It became a distinct discipline in the 1930s with
the zoologists Konrad Lorenz, Niko Tinbergen and Karl von Frisch. These three scientists are
regarded as the fathers or founders of ethology and as major contributors to human ethology.
Konrad Lorenz and Niko Tinbergen rejected theories that relied on
stimuli and learning alone, and elaborated on concepts that had not been
well understood, such as instinct.
They promoted the theory that evolution had placed within creatures
innate abilities and responses to certain stimuli that promoted the
survival of the species. Konrad Lorenz also indicated in his earlier
works that animal behavior can be a major reference for human behavior,
believing that research findings on animal behavior could lead
to findings about human behavior as well. In 1943, Lorenz devoted much of
his book, Die angeborenen Formen möglicher Erfahrung,
to human behavior. He held that one of the most important tasks
of ethology was to test hypotheses derived from animal behavioral
studies against human behavior. Because Lorenz promoted the
similarities between studying animal and human behavior, human ethology
grew out of the study of animal behavior. Together with Lorenz, the other founders of ethology, Niko Tinbergen and Karl von Frisch, shared the Nobel Prize in Physiology or Medicine
in 1973 for their discoveries concerning the
organization and elicitation of individual and social behavior patterns.
Many developmental psychologists
were eager to incorporate ethological principles into their theories as
a way of explaining observable phenomena in babies that could not
necessarily be explained by learning or other concepts. John Bowlby and Mary Ainsworth used ethology prominently to explain aspects of infant-caretaker attachment theory. Several important attachment concepts are related to evolution:
Attachment has evolved because it promotes the survival of
helpless infants. Primates and other animals reflexively attach
themselves physically to their parent, and have some calls that elicit
parental attention. Human babies have adaptively developed signaling
mechanisms such as crying, babbling, and smiling. These are seen as
innate and not learned behaviors, because even children born blind and
deaf begin to smile socially at 6 weeks, and cry and babble. These
behaviors facilitate contact with the caregiver and increase the
likelihood of infant survival.
Early signaling behaviors and the baby's tendency to look at faces
rather than objects lead to attachment between the caretaker and baby
that solidifies around 6–9 months of age. Bowlby theorized that this
attachment was evolutionarily fundamental to human survival and is the
basis for all relationships, even into adulthood.
Adults are also adaptively predisposed toward attachment with infants.
Typical "baby-ish" features, such as a large head and eyes in proportion
to the body, and round cheeks, are features that elicit affection in
adults. Many parents also form a "bond" with their newborn baby within
hours of its birth, leading to a deep sense of emotional attachment with
one's own offspring and increased behaviors that promote infant
survival.
Many of Bowlby's early methods relied heavily on ethological observations of children in their natural environments.
In later years, ethology played a large role in sociobiological
theory and ultimately, in evolutionary psychology, which is a relatively
new field of study. Evolutionary psychology combines ethology,
primatology, anthropology, and other fields to compare modern human
behavior with adaptive ancestral human behaviors.
View on human nature
Humans
are social animals. Just as wolves and lions create packs or hunting
groups for self-preservation, humans create complex social structures,
including families and nations.
Humans are "biological organisms that have evolved within a particular environmental niche".
Intelligence, language, social attachment, aggression, and altruism
are part of human nature because they "serve or once served a purpose in
the struggle of the species to survive".
Children's developmental level is defined in terms of biologically based behaviors.
Humans' needs evolve based on their current environment. Humans must
adapt in order to survive. Cognitive thinking and communication arose
as a result of a need for cooperation amongst individuals for survival.
Views on human nature vary across ethological theorists
Lorenz
believed that much human behavior is automatic and elicited, as when
certain stimuli trigger fixed action patterns. His theory
developed from the reflex model and the hydraulic or "flush toilet"
model, which conceptualized the motivation behind behavior patterns. Certain
fixed action patterns developed out of motivation for survival. Instincts
are examples of fixed action patterns: a behavior is instinctive if
it is performed in the absence of learning. Reflexes can be instincts.
For example, a newborn baby instinctively knows to search for and suckle
its mother's breast for nourishment.
Bowlby (and many other modern ethological theorists) believed that
humans spontaneously act to meet the demands of their environment. They
are active participants who seek out a parent, food, or a mate (i.e. an
infant will seek to remain within sight of a caretaker).
Vygotsky believed that the way humans think is based on the culture
they are raised in and the language they are surrounded by. He
emphasized that children grow up in the symbols of their culture,
especially linguistic symbols. These linguistic symbols categorize and
organize the world around them. This organization of the world is
internalized, which influences the way they think.
Human behavior tends to change based on the environment and the
surrounding challenges that individuals begin to face. Two evolutionary
advances in human behavior began as a way to allow humans to communicate
and collaborate. Infrastructure theorists Mead and Wittgenstein
theorized that collaboration arose in human foraging. This
collaboration created social goals amongst people and also created a
common ground. To coordinate their common goals, humans evolved a new
type of cooperative communication. This communication was based on
gestures that allowed humans to cooperate amongst themselves in order to
achieve their desired goals.[5]
This change in behavior is attributed to the evolution of the
environment: the environment demands survival, and humans adapted their
behavior in order to survive. This is known as the shared intentionality
hypothesis. According to this hypothesis, human thinking evolved from a
self-focused, individual intentionality as an adaptation for "dealing
with problems of social coordination, specifically, problems presented
by individuals' attempts to collaborate and communicate with others."
This evolution happened in two steps, one leading from individual to
"joint intentionality" and the other from joint intentionality to
"collective intentionality".
Mechanistic theories view behavior as passive, arguing
that human behavior is driven passively by physiological drives and
emotional stimuli. Unlike mechanistic theories, organismic theories view
behavior as active. An organismic theory argues that an organism is
active in its behavior, meaning that it decides how it behaves and
initiates its own behaviors. Humans have intrinsic needs that they
desire to meet. These needs energize humans to act upon
them, rather than merely reacting to stimuli.
The active theory on human behavior treats stimuli not as a cause of
behavior, but as opportunities humans can utilize to meet their demands.
Human ethology topics
As
applied to human behavior, in the majority of cases, topical behavior
results from motivational states and the intensity of a specific
external stimulus. Behavior in which an organism with a high inner
motivational state seeks out such a stimulus is called appetitive behavior. Other important concepts
of zooethology—e.g., territoriality, hierarchy, sensitive periods in ontogenesis—are also useful when discussing human behavior. Irenäus Eibl-Eibesfeldt's book Human Ethology is the most important account of how these concepts are applied to human behavior.
Human ethology has contributed in two particular ways to our understanding of the ontogeny
of behavior in humans. This has resulted, first, from the application
of techniques for the precise observation, description and
classification of naturally occurring behavior and, secondly, from the
ethological approach to the study of behavior, especially the
development of behavior in terms of evolution. Of particular interest
are questions relating to the function of a particular kind of behavior
(e.g., attachment behavior) and its adaptive value. The description of
the behavioral repertoire of a species, the recognition of patterns of
behavioral development and the classification of established behavioral
patterns are prerequisites for any comparison between different species
or between organisms of a single species. The ethological approach is
the study of the interaction between the organism with certain innate
species-specific structures and the environment for which the organism
is genetically programmed.
Invariant behavior patterns have a morphological basis, mainly in neuronal structures common to all members of a species and, depending on the kind of behavior, may also be common to a genus or family or a whole order, e.g., primates, or even to a whole class, e.g., mammals. In such structures we can retrace and follow the evolutionary process by which the environment produced structures, especially nervous systems and brains,
which generate adaptive behavior. In organisms with a high level of
organization, the processes in which the ethologist is especially
interested are those genetically preprogrammed motor and perceptual
processes that facilitate social interaction and communication, such as
facial expression and vocalization. If we consider the most highly developed means of communication, language and speech,
which is found in humans alone, the question arises as to the
biological foundation of this species-specific behavior and perceptual
skill. The ethologist examines this question primarily from the point of
view of ontogenetic development.
The main strength of human ethology has been its application of
established interpretive patterns to new problems. Based on theories,
concepts and methods that have proved successful in animal ethology, it
looks at human behavior from a new viewpoint. The essence of this is the
evolutionary perspective. But since ethologists have been relatively
unaffected by the long history of the humanities, they often refer to
facts and interpretations neglected by other social sciences. If we look
back at the history of the relationship between the life sciences and the social sciences,
we find two prevailing modes of theoretical orientation: on the one
hand, reductionism, i.e., attempts to reduce human action to
non-cognitive behavior; and on the other, attempts to separate human
action and human society from the animal world completely. The advent of
the theory of evolution in the 19th century brought no easy solution to
the problem of nature and nurture,
since it could still be "solved" in either a continuous or
discontinuous manner. Human ethology as much as any other discipline
significantly contributes to the obsolescence of such simple dichotomies.
Human ethology has an increasing influence on the dialogue
between the human sciences and the humanities, as shown for example in the book Being Human - Bridging the Gap between the Sciences of Body and Mind.
Methodology
Ethologists
study behavior using two general methods: naturalistic observation and
laboratory experimentation. Ethologists' insistence on observing
organisms in their natural environment differentiates ethology from
related disciplines such as evolutionary psychology and sociobiology,
and their naturalistic observation "ranks as one of their main
contributions to psychology".
Naturalistic observation
Ethologists believe that in order to study
species-specific behaviors, a species must be observed in its natural
environment. One can only understand the function of a behavior by
seeing how it specifically fits into the species' natural environment
to fulfill a specific need. Ethologists follow a specific set of steps
when studying an organism:
Ethogram
A detailed description of the behavior of a species in its natural environment
Classification
Classify behaviors according to their function (how they encourage survival).
Compare
Compare how a behavior functions in different species and how different behaviors may serve the same function in other species.
Laboratory Experiments
Determine the immediate causes of the behavior described in the first three steps.
These steps fall in line with Tinbergen's "On Aims and Methods of Ethology", in which he states that every study of behavior must answer four questions to be considered legitimate (a sketch of how these might be recorded follows the list):
function (adaptation)
evolution (phylogeny)
causation (mechanism)
development (ontogeny)
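To make these steps concrete, the following is a minimal, purely illustrative sketch (in Python) of how a single behavior might be recorded against the ethogram workflow and Tinbergen's four questions. The record layout, field names, and the example entry are hypothetical and not taken from the ethological literature.

from dataclasses import dataclass, field

@dataclass
class BehaviorRecord:
    # Hypothetical record for one behavior in an ethogram.
    name: str                  # the behavior, e.g. "social smiling"
    description: str           # precise description from naturalistic observation
    function: str              # classification: how the behavior encourages survival
    compared_species: list = field(default_factory=list)  # cross-species comparison
    # Tinbergen's four questions
    adaptation: str = ""       # function (adaptation)
    phylogeny: str = ""        # evolution (phylogeny)
    mechanism: str = ""        # causation (mechanism)
    ontogeny: str = ""         # development (ontogeny)

smiling = BehaviorRecord(
    name="social smiling",
    description="infant smiles directed at a caregiver's face",
    function="elicits caregiver attention and care",
    compared_species=["chimpanzee play face"],
    adaptation="increases the likelihood of infant survival",
    phylogeny="shared primate facial signalling",
    mechanism="innate response to face-like stimuli",
    ontogeny="appears around 6 weeks, even in infants born blind and deaf",
)
print(smiling.name, "->", smiling.adaptation)

Such a record is only a bookkeeping device; the substantive work remains the observation, classification, comparison, and experimentation described above.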
Diversity
Diversity is an important concept in ethology and evolutionary theory, both genetically and culturally.
Genetic diversity serves as a way for populations to adapt to
changing environments. With more variation, it is more likely that some
individuals in a population will possess variations of alleles that are
suited for the environment. Those individuals are more likely to survive
to produce offspring bearing that allele. The population will continue
for more generations because of the success of these individuals.
Population genetics includes several hypotheses and theories regarding
genetic diversity. The neutral theory of evolution proposes that
diversity is the result of the accumulation of neutral substitutions.
Diversifying selection is the hypothesis that two subpopulations of a
species live in different environments that select for different alleles
at a particular locus. This may occur, for instance, if a species has a
large range relative to the mobility of individuals within it.
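As a rough illustration of the selection process described above, the following toy simulation (a sketch with arbitrary, hypothetical parameters rather than empirical values) tracks the frequency of an allele that is slightly better suited to the environment; over generations its frequency tends to rise, while random sampling in each generation supplies drift.

import random

def simulate_selection(p0=0.1, fitness_a=1.05, fitness_b=1.0,
                       pop_size=1000, generations=50, seed=0):
    # Toy Wright-Fisher-style model: allele A starts rare but confers a
    # small fitness advantage, so its frequency tends to increase.
    random.seed(seed)
    p = p0  # current frequency of allele A
    for _ in range(generations):
        # selection: weight the allele frequencies by relative fitness
        mean_fitness = p * fitness_a + (1 - p) * fitness_b
        p_selected = p * fitness_a / mean_fitness
        # drift: binomial sampling of the next generation's gene pool
        p = sum(random.random() < p_selected for _ in range(pop_size)) / pop_size
    return p

print(simulate_selection())  # frequency of allele A after 50 generations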
Cultural diversity is also important. From a cultural
transmission standpoint, humans are the only animals to pass down
cumulative cultural knowledge to their offspring. While chimpanzees can
learn to use tools by watching other chimps around them, humans are
able to pool their cognitive resources to create increasingly more
complex solutions to problems and more complex ways of interacting with
their environments.
humans are shaped by their environments, and also interact with
environments to shape them as well. Cultural diversity arises from
different human adaptations to different environmental factors, which in
turn shapes the environment, which in turn again shapes human behavior.
This cycle results in diverse cultural representations that ultimately
add to the survival of the human species. This approach is important as a
way to build a bridge between biological and social sciences, which
creates a better understanding of human ethology.
One example of human diversity is sexual orientation. Ethologists
have long noted that there are over 250 species of animals which
display homosexual
behaviors. Although no offspring are directly created from homosexual
behaviors, a closer look reveals how the genes for homosexuality can
persist. Homosexuality could decrease competition for heterosexual
mates. Homosexual family members could increase the resources available
to the children of their siblings without producing offspring to compete
for those resources (the gay uncle theory), thus creating better
chances for their siblings' offspring to survive. These related
offspring share the homosexual individual's genes—although to a lesser
extent than direct offspring would—including genes for homosexuality.
This causes a small but stable chance for future generations to be gay
as well, even if the gay family member produces no direct descendants.
Modularity of mind
Modularity of mind is the notion that a mind may, at least in part, be composed of innate neural structures or mental modules which have distinct, established, and evolutionarily developed functions. However, different definitions of "module" have been proposed by different authors. According to Jerry Fodor, the author of Modularity of Mind, a system can be considered 'modular' if its functions are made of multiple dimensions or units to some degree. One example of modularity in the mind is binding.
When one perceives an object, one takes in not only its individual
features but also the integration of those features, which can be
processed in sync or independently, into a whole. Instead of just seeing red, round, plastic, and moving, the subject experiences a rolling red ball. Binding suggests that the mind may be modular because it takes multiple cognitive processes to perceive one thing.
Early investigations
Historically, questions regarding the functional architecture
of the mind have been divided into two different theories of the nature
of the faculties. The first can be characterized as a horizontal view
because it refers to mental processes as if they are interactions
between faculties such as memory, imagination, judgement, and
perception, which are not domain specific
(e.g., a judgement remains a judgement whether it refers to a
perceptual experience or to the conceptualization/comprehension
process). The second can be characterized as a vertical view because it
claims that the mental faculties are differentiated on the basis of
domain specificity, are genetically determined, are associated with
distinct neurological structures, and are computationally autonomous.
The vertical view goes back to the 19th-century movement called phrenology and its founder Franz Joseph Gall.
Gall claimed that the individual mental faculties could be associated
precisely, in a one-to-one correspondence, with specific physical areas
of the brain. For example, someone's level of intelligence could be literally "read
off" from the size of a particular bump on his posterior parietal lobe.
Phrenology's practice was debunked scientifically by Pierre Flourens in
the 19th century. He created lesions by destroying parts of pigeons' and
dogs' brains and studied the organisms' resulting dysfunction. He
concluded that while the brain localizes some functions, it
also works as a unit and is not as localized as earlier phrenologists
thought. In the early 20th century, Edward Bradford Titchener studied the
modules of the mind through introspection. He tried to determine the
original, raw perceptual experiences of his subjects. For example, if
he wanted his subjects to perceive an apple, they would need to talk
about the spatial characteristics of the apple and the different hues that
they saw without mentioning the apple.
Fodor's Modularity of Mind
In the 1980s, however, Jerry Fodor revived the idea of the modularity of mind, although without the notion of precise physical localizability. Drawing from Noam Chomsky's idea of the language acquisition device and other work in linguistics as well as from the philosophy of mind and the implications of optical illusions, he became a major proponent of the idea with the 1983 publication of Modularity of Mind.
According to Fodor, a module falls somewhere between the behaviorist and cognitivist views of lower-level processes.
Behaviorists
tried to replace the mind with reflexes, which are, according to Fodor,
encapsulated (cognitively impenetrable or unaffected by other cognitive
domains) and non-inferential (straight pathways with no information
added). Low-level processes are unlike reflexes in that they can be
inferential. This can be demonstrated by the poverty of the stimulus
argument, which posits that children do not learn language from
their environment alone, but are innately programmed with low-level processes
that help them seek and learn language. The proximate stimulus, that
which is initially received by the brain (such as the 2D image received
by the retina), cannot account for the resulting output (for example,
our 3D perception of the world), thus necessitating some form of
computation.
In contrast, cognitivists
saw lower-level processes as continuous with higher-level processes,
being inferential and cognitively penetrable (influenced by other
cognitive domains, such as beliefs). The latter has been shown to be
untrue in some cases, such as the Müller-Lyer illusion, which can persist despite a person's awareness of its existence. This is taken to indicate that other domains, including one's beliefs, cannot influence such processes.
Fodor arrives at the conclusion that such processes are
inferential like higher-order processes and encapsulated in the same
sense as reflexes.
Although he argued for the modularity of "lower level" cognitive processes in Modularity of Mind, he also argued that higher-level cognitive processes are not modular since they have dissimilar properties. The Mind Doesn't Work That Way, a reaction to Steven Pinker's How the Mind Works, is devoted to this subject.
Fodor (1983) states that modular systems must—at least to "some interesting extent"—fulfill certain properties:
Domain specificity: modules only operate on certain kinds of inputs—they are specialised
Obligatory firing: modules process in a mandatory manner
Limited accessibility: what central processing can access from input system representations is limited
Fast speed: probably due to the fact that they are encapsulated
(thereby needing only to consult a restricted database) and mandatory
(time need not be wasted in determining whether or not to process
incoming input)
Informational encapsulation: modules need not refer to other psychological systems in order to operate
Shallow outputs: the output of modules is very simple
Specific breakdown patterns
Characteristic ontogeny: there is a regularity of development
Fixed neural architecture.
Pylyshyn
(1999) has argued that while these properties tend to occur with
modules, one—information encapsulation—stands out as being the real
signature of a module; that is, the encapsulation of the processes inside
the module from both cognitive influence and cognitive access. One example is that conscious awareness that the Müller-Lyer illusion is an illusion does not correct visual processing.
Evolutionary psychology and massive modularity
The definition of module
has caused confusion and dispute. In J. A. Fodor's view, modules can be
found in peripheral and low-level visual processing, but not in central
processing. Later, he narrowed the two essential features to domain-specificity and information encapsulation.
According to Frankenhuis and Ploeger, domain-specificity means that "a
given cognitive mechanism accepts, or is specialized to operate on, only
a specific class of information". Information encapsulation means that information processing in the
module cannot be affected by information in the rest of the brain. One
example is that the effects of an optical illusion, created by low-level
processes, persist despite high-level processing caused by conscious
awareness of the illusion itself.
Other perspectives on modularity come from evolutionary psychology. Evolutionary psychologists propose that the mind is made up of genetically influenced and domain-specific mental algorithms or computational modules, designed to solve specific evolutionary problems of the past. Modules are also used for central processing. This theory is sometimes referred to as massive modularity. Leda Cosmides and John Tooby
claimed that modules are units of mental processing that evolved in
response to selection pressures. To them, each module was a complex
computer that innately processed distinct parts of the world, like
facial recognition, recognizing human emotions, and problem-solving. On this view, much modern human psychological activity is rooted in adaptations that occurred earlier in human evolution, when natural selection was forming the modern human species.
A 2010 review by evolutionary psychologists Confer et al.
suggested that domain-general theories, such as for "rationality", have
several problems: 1. Evolutionary theories using the idea of numerous
domain-specific adaptations have produced testable predictions that have
been empirically confirmed; the theory of domain-general rational
thought has produced no such predictions or confirmations. 2. The
rapidity of responses such as jealousy due to infidelity indicates a
domain-specific dedicated module rather than a general, deliberate,
rational calculation of consequences. 3. Reactions may occur
instinctively (consistent with innate knowledge) even if a person has
not learned such knowledge. One example is that in the ancestral
environment it is unlikely that males learned during development that
infidelity (usually secret) may cause paternal uncertainty (by
observing the phenotypes of children born many months later and drawing a
statistical conclusion from the phenotypic dissimilarity to the
cuckolded fathers). With respect to general-purpose problem solvers, Barkow, Cosmides, and Tooby (1992) have suggested in The Adapted Mind: Evolutionary Psychology and The Generation of Culture that a purely general problem-solving mechanism is impossible to build due to the frame problem.
Clune et al. (2013) have argued that computer simulations of the
evolution of neural nets suggest that modularity evolves because,
compared to non-modular networks, connection costs are lower.
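The following toy sketch (in Python) illustrates the kind of selection pressure described here; it is not Clune et al.'s actual model, and the network encoding, fitness terms, and parameter values are invented for illustration. Networks are reduced to lists of on/off connection flags, and fitness rewards task performance while penalizing wiring cost.

import random

def connection_cost(network):
    # Count the active connections in a toy network encoded as 0/1 flags.
    return sum(network)

def task_performance(network):
    # Stand-in for task performance: more connections help, but with
    # diminishing returns. In actual simulations, performance would be
    # measured by evaluating the evolved network on its task.
    return connection_cost(network) ** 0.5

def fitness(network, cost_weight=0.2):
    # Selection rewards performance and penalizes wiring cost; the argument
    # is that this connection-cost pressure favors sparsely connected,
    # and hence more modular, architectures.
    return task_performance(network) - cost_weight * connection_cost(network)

random.seed(0)
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
best = max(population, key=fitness)
print("connections used by the fittest toy network:", connection_cost(best))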
Several groups of critics, including psychologists working within evolutionary frameworks, argue that the massively modular theory of mind does little to explain
adaptive psychological traits. Proponents of other models of the mind
argue that the computational theory of mind
is no better at explaining human behavior than a theory with mind
entirely a product of the environment. Even within evolutionary
psychology there is discussion about the degree of modularity, either as
a few generalist modules or as many highly specific modules. Other critics suggest that there is little empirical support in favor of the domain-specific theory beyond performance on the Wason selection task, a task critics state is too limited in scope to test all relevant aspects of reasoning. Moreover, critics argue that Cosmides and Tooby's conclusions contain
several inferential errors and that the authors use untested
evolutionary assumptions to eliminate rival reasoning theories.
Criticisms of the notion of modular minds also come from genetics. One
argument is that innate modularity of mind would require more genetic
information than the genome can carry. The possible amount of functional,
information-carrying genetic information is limited by the number of
mutations per generation, which leads to the prediction that only a
small part of the human genome can be functional in an
information-carrying way if an impossibly high rate of lethal mutations
is to be avoided; selection against lethal mutations would have stopped
and reversed any increase in the amount of functional DNA long before it
reached the amount required for modularity of mind. It is argued that
proponents of modularity conflate this criticism with the straw-man
claim that non-protein-coding DNA has no function at all, pointing to
discoveries that some parts of non-coding DNA have regulatory functions.
The actual argument, however, acknowledges that some parts of non-coding
DNA can have functions while placing bounds on the total amount of
information-bearing genetic material, whether or not it codes for
proteins; the discovered regulatory functions extend only to parts of
non-coding DNA and cannot be generalized to all DNA that does not code
for proteins. The maximum amount of information-carrying heredity is
argued to be too small to form modular brains.
Wallace (2010) observes that the evolutionary psychologists' definition of "mind" has been heavily influenced by cognitivism and/or information processing definitions of the mind.[20]
Critics point out that these assumptions underlying evolutionary
psychologists' hypotheses are controversial and have been contested by
some psychologists, philosophers, and neuroscientists. For example, Jaak Panksepp,
an affective neuroscientist, points to the "remarkable degree of
neocortical plasticity within the human brain, especially during
development" and states that "the developmental interactions among
ancient special-purpose circuits and more recent general-purpose brain
mechanisms can generate many of the "modularized" human abilities that
evolutionary psychology has entertained."
Philosopher David Buller agrees with the general argument that
the human mind has evolved over time but disagrees with the specific
claims evolutionary psychologists make. He has argued that the
contention that the mind consists of thousands of modules, including
sexually dimorphic jealousy and parental investment modules, is
unsupported by the available empirical evidence. He has suggested that the "modules" result from the brain's developmental plasticity and that they are adaptive responses to local conditions, not past evolutionary environments. However, Buller has also stated that even if massive modularity is
false, this does not necessarily have broad implications for evolutionary
psychology. Evolution may create innate motives even without innate
knowledge.
In contrast to modular mental structure, some theories posit domain-general processing,
in which mental activity is distributed across the brain and cannot be
decomposed, even abstractly, into independent units. A staunch defender
of this view is William Uttal, who argues in The New Phrenology
(2003) that there are serious philosophical, theoretical, and
methodological problems with the entire enterprise of trying to localise
cognitive processes in the brain. Part of this argument is that a successful taxonomy of mental processes has yet to be developed.
Merlin Donald argues that over evolutionary time the mind has gained adaptive advantage from being a general problem solver. The mind, as described by Donald, includes module-like "central"
mechanisms, in addition to more recently evolved "domain-general"
mechanisms.
Pragmatism
Pragmatism is a philosophical tradition that views language and thought as tools for prediction, problem solving, and action, rather than describing, representing, or mirroring reality.
Pragmatists contend that most philosophical topics—such as the nature
of knowledge, language, concepts, meaning, belief, and science—are best
viewed in terms of their practical uses and successes.
Pragmatism began in the United States in the 1870s. Its origins are often attributed to philosophers Charles Sanders Peirce, William James and John Dewey. In 1878, Peirce described it in his pragmatic maxim:
"Consider the practical effects of the objects of your conception.
Then, your conception of those effects is the whole of your conception
of the object."
Origins
Charles Peirce: the American polymath who first identified pragmatism
Pragmatism as a philosophical movement began in the United States around 1870. Charles Sanders Peirce (and his pragmatic maxim) is given credit for its development, along with later 20th-century contributors, William James and John Dewey. Its direction was determined by The Metaphysical Club members Peirce, Dewey, James, Chauncey Wright and George Herbert Mead.
The word pragmatic has existed in English since the 1500s,
borrowed from French and derived from Greek via Latin. The Greek word pragma, meaning business, deed or act, is a noun derived from the verb prassein, to do. The first use in print of the name pragmatism was in 1898 by James, who credited Peirce with coining the term during the early 1870s. James regarded Peirce's "Illustrations of the Logic of Science" series—including "The Fixation of Belief" (1877), and especially "How to Make Our Ideas Clear" (1878)—as the foundation of pragmatism.Peirce in turn wrote in 1906 that Nicholas St. John Green had been instrumental by emphasizing the importance of applying Alexander Bain's
definition of belief, which was "that upon which a man is prepared to
act". Peirce wrote that "from this definition, pragmatism is scarce more
than a corollary; so that I am disposed to think of him as the
grandfather of pragmatism". John Shook has said, "Chauncey Wright also
deserves considerable credit, for as both Peirce and James recall, it
was Wright who demanded a phenomenalist and fallibilist empiricism as an alternative to rationalistic speculation."
Peirce developed the idea that inquiry depends on real doubt, not mere verbal or hyperbolic doubt, and said that, in order to understand a conception in a fruitful way,
"Consider the practical effects of the objects of your conception. Then,
your conception of those effects is the whole of your conception of the
object", which he later called the pragmatic maxim.
It equates any conception of an object to the general extent of the
conceivable implications for informed practice of that object's effects.
This is the heart of his pragmatism as a method of experimentational
mental reflection arriving at conceptions in terms of conceivable
confirmatory and disconfirmatory circumstances—a method hospitable to
the generation of explanatory hypotheses, and conducive to the
employment and improvement of verification. Typical of Peirce is his
concern with inference to explanatory hypotheses as outside the usual
foundational alternative between deductivist rationalism and inductivist
empiricism, although he was a mathematical logician and a founder of statistics.
Peirce lectured and further wrote on pragmatism to make clear his
own interpretation. While framing a conception's meaning in terms of
conceivable tests, Peirce emphasized that, since a conception is
general, its meaning, its intellectual purport, equates to its
acceptance's implications for general practice, rather than to any
definite set of real effects (or test results); a conception's clarified
meaning points toward its conceivable verifications, but the outcomes
are not meanings, but individual upshots. Peirce in 1905 coined the new
name pragmaticism "for the precise purpose of expressing the original definition", saying that "all went happily" with James's and F. C. S. Schiller's
variant uses of the old name "pragmatism" and that he nonetheless
coined the new name because of the old name's growing use in "literary
journals, where it gets abused". Yet in a 1906 manuscript, he cited as
causes his differences with James and Schiller and, in a 1908 publication, his differences with James as well as literary author Giovanni Papini.
Peirce regarded his own views, that truth is immutable and infinity is
real, as being opposed by the other pragmatists, but he remained allied
with them about the falsity of necessitarianism and about the reality of generals and habits, understood in terms of potential concrete effects even if unactualized.
Pragmatism enjoyed renewed attention after Willard Van Orman Quine and Wilfrid Sellars used a revised pragmatism to criticize logical positivism in the 1960s. Inspired by the work of Quine and Sellars, a brand of pragmatism known sometimes as neopragmatism gained influence through Richard Rorty, the most influential of the late 20th century pragmatists along with Hilary Putnam and Robert Brandom. Contemporary pragmatism may be broadly divided into a strict analytic tradition and a "neo-classical" pragmatism (such as Susan Haack) that adheres to the work of Peirce, James, and Dewey.
Core tenets
A few of the various but often interrelated positions characteristic of philosophers working from a pragmatist approach include:
Epistemology (justification): a coherentist
theory of justification that rejects the claim that all knowledge and
justified belief rest ultimately on a foundation of noninferential
knowledge or justified belief. Coherentists hold that justification is
solely a function of some relationship between beliefs, none of which
are privileged beliefs in the way maintained by foundationalist theories of justification.
Epistemology (truth): a deflationary or pragmatic
theory of truth; the former is the epistemological claim that
assertions that predicate the truth of a statement do not attribute a
property called truth to such a statement while the latter is the
epistemological claim that assertions that predicate the truth of a
statement attribute the property of useful-to-believe to such a
statement.
Metaphysics: a pluralist view that there is more than one sound way to conceptualize the world and its content.
Philosophy of science: an instrumentalist and scientific anti-realist
view that a scientific concept or theory should be evaluated by how
effectively it explains and predicts phenomena, as opposed to how
accurately it describes objective reality.
Philosophy of language: an anti-representationalist view that rejects analyzing the semantic meaning
of propositions, mental states, and statements in terms of a
correspondence or representational relationship and instead analyzes
semantic meaning in terms of notions like dispositions to action,
inferential relationships, and/or functional roles (e.g. behaviorism and inferentialism). Not to be confused with pragmatics, a sub-field of linguistics with no relation to philosophical pragmatism.
Dewey in The Quest for Certainty
criticized what he called "the philosophical fallacy": Philosophers
often take categories (such as the mental and the physical) for granted
because they don't realize that these are nominal concepts that were invented to help solve specific problems. This causes metaphysical and conceptual confusion. Various examples are the "ultimate Being" of Hegelian philosophers, the belief in a "realm of value",
the idea that logic, because it is an abstraction from concrete
thought, has nothing to do with the action of concrete thinking.
David L. Hildebrand summarized the problem: "Perceptual
inattention to the specific functions comprising inquiry led realists
and idealists alike to formulate accounts of knowledge that project the
products of extensive abstraction back onto experience."
Naturalism and anti-Cartesianism
From
the outset, pragmatists wanted to reform philosophy and bring it more
in line with the scientific method as they understood it. They argued
that idealist and realist philosophy had a tendency to present human
knowledge as something beyond what science could grasp. They held that
these philosophies then resorted either to a phenomenology inspired by
Kant or to correspondence theories of knowledge and truth. Pragmatists criticized the former for its a priorism, and the latter because it takes correspondence as an unanalyzable fact. Pragmatism instead tries to explain the relation between knower and known.
In 1868,[16]
C.S. Peirce argued that there is no power of intuition in the sense of a
cognition unconditioned by inference, and no power of introspection,
intuitive or otherwise, and that awareness of an internal world is by
hypothetical inference from external facts. Introspection and intuition
were staple philosophical tools at least since Descartes. He argued that
there is no absolutely first cognition in a cognitive process; such a
process has its beginning but can always be analyzed into finer
cognitive stages. That which we call introspection does not give
privileged access to knowledge about the mind—the self is a concept that
is derived from our interaction with the external world and not the
other way around. At the same time he held persistently that pragmatism and epistemology
in general could not be derived from principles of psychology understood
as a special science: what we do think is too different from what we should think; in his "Illustrations of the Logic of Science" series, Peirce formulated both pragmatism and principles of statistics as aspects of scientific method in general. This is an important point of disagreement with most other pragmatists,
who advocate a more thorough naturalism and psychologism.
Richard Rorty expanded on these and other arguments in Philosophy and the Mirror of Nature
in which he criticized attempts by many philosophers of science to
carve out a space for epistemology that is entirely unrelated to—and
sometimes thought of as superior to—the empirical sciences. W.V. Quine, who was instrumental in bringing naturalized epistemology back into favor with his essay "Epistemology Naturalized", also criticized "traditional" epistemology and its "Cartesian dream" of
absolute certainty. The dream, he argued, was impossible in practice as
well as misguided in theory, because it separates epistemology from
scientific inquiry.
Reconciliation of anti-skepticism and fallibilism
Hilary Putnam has suggested that the reconciliation of anti-skepticism and fallibilism is the central goal of American pragmatism. Although all human knowledge is partial, with no ability to take a
"God's-eye-view", this does not necessitate a globalized skeptical
attitude, a radical philosophical skepticism (as distinguished from that which is called scientific skepticism). Peirce insisted that (1) in reasoning, there is the presupposition, and at least the hope, that truth and the real are discoverable and would be discovered,
sooner or later but still inevitably, by investigation taken far enough, and (2) contrary to Descartes's famous and influential methodology in the Meditations on First Philosophy,
doubt cannot be feigned or created by verbal fiat to motivate fruitful
inquiry, and much less can philosophy begin in universal doubt. Doubt, like belief, requires justification. Genuine doubt irritates and
inhibits, in the sense that belief is that upon which one is prepared
to act. It arises from confrontation with some specific recalcitrant matter of
fact (which Dewey called a "situation"), which unsettles our belief in
some specific proposition. Inquiry is then the rationally
self-controlled process of attempting to return to a settled state of
belief about the matter. Note that anti-skepticism is a reaction to
modern academic skepticism in the wake of Descartes. The pragmatist
insistence that all knowledge is tentative is quite congenial to the
older skeptical tradition.
Pragmatism was not the first to apply evolution to theories of knowledge: Schopenhauer
advocated a biological idealism in which what is useful for an organism to
believe might differ wildly from what is true. Here knowledge and action
are portrayed as two separate spheres with an absolute or transcendental
truth above and beyond any sort of inquiry organisms used to cope with
life. Pragmatism challenges this idealism by providing an "ecological"
account of knowledge: inquiry is how organisms can get a grip on their
environment. Real and true are functional labels in inquiry and cannot be understood outside of this context. It is not realist in a traditionally robust sense of realism (what Hilary Putnam later called metaphysical realism), but it is realist in how it acknowledges an external world which must be dealt with.
Many of James' best-turned phrases—"truth's cash value" and "the true is only the expedient in our way of thinking" [28]—were
taken out of context and caricatured in contemporary literature as
representing the view where any idea with practical utility is true.
William James wrote:
It is high time to urge the use of a
little imagination in philosophy. The unwillingness of some of our
critics to read any but the silliest of possible meanings into our
statements is as discreditable to their imaginations as anything I know
in recent philosophic history. Schiller says the truth is that which
"works." Thereupon he is treated as one who limits verification to the
lowest material utilities. Dewey says truth is what gives
"satisfaction"! He is treated as one who believes in calling everything
true which, if it were true, would be pleasant.
In reality, James asserts, the theory is a great deal more subtle.
The role of belief in representing reality is widely debated in
pragmatism. Is a belief valid when it represents reality? "Copying is
one (and only one) genuine mode of knowing". Are beliefs dispositions which qualify as true or false depending on
how helpful they prove in inquiry and in action? Is it only in the
struggle of intelligent organisms with the surrounding environment that
beliefs acquire meaning? Does a belief only become true when it succeeds
in this struggle? In James's pragmatism nothing practical or useful is
held to be necessarily true
nor is anything which helps to survive merely in the short term. For
example, to believe my cheating spouse is faithful may help me feel
better now, but it is certainly not useful from a more long-term
perspective because it doesn't accord with the facts (and is therefore
not true).
In other fields
While
pragmatism started simply as a criterion of meaning, it quickly
expanded to become a full-fledged epistemology with wide-ranging
implications for the entire philosophical field. Pragmatists who work in
these fields share a common inspiration, but their work is diverse and
there are no received views.
Philosophy of science
In the philosophy of science, instrumentalism
is the view that concepts and theories are merely useful instruments
and progress in science cannot be couched in terms of concepts and
theories somehow mirroring reality. Instrumentalist philosophers often
define scientific progress as nothing more than an improvement in
explaining and predicting phenomena. Instrumentalism does not state that
truth does not matter, but rather provides a specific answer to the
question of what truth and falsity mean and how they function in
science.
One of C. I. Lewis' main arguments in Mind and the World Order: Outline of a Theory of Knowledge
(1929) was that science does not merely provide a copy of reality but
must work with conceptual systems and that those are chosen for
pragmatic reasons, that is, because they aid inquiry. Lewis' own
development of multiple modal logics is a case in point. Lewis is sometimes called a proponent of conceptual pragmatism because of this.
Another development is the cooperation of logical positivism and pragmatism in the works of Charles W. Morris and Rudolf Carnap. The influence of pragmatism on these writers is mostly limited to the incorporation of the pragmatic maxim into their epistemology. Pragmatists with a broader conception of the movement do not often refer to them.
W. V. Quine's paper "Two Dogmas of Empiricism",
published in 1951, is one of the most celebrated papers of 20th-century
philosophy in the analytic tradition. The paper is an attack on two
central tenets of the logical positivists' philosophy. One is the
distinction between analytic statements (tautologies and contradictions)
whose truth (or falsehood) is a function of the meanings of the words
in the statement ('all bachelors are unmarried'), and synthetic
statements, whose truth (or falsehood) is a function of (contingent)
states of affairs. The other is reductionism, the theory that each
meaningful statement gets its meaning from some logical construction of
terms which refers exclusively to immediate experience. Quine's argument
brings to mind Peirce's insistence that axioms are not a priori truths
but synthetic statements.
Logic
Later in his life Schiller became famous for his attacks on logic in his textbook, Formal Logic. By then, Schiller's pragmatism had become the nearest of any of the classical pragmatists to an ordinary language philosophy.
Schiller sought to undermine the very possibility of formal logic, by
showing that words only had meaning when used in context. The least
famous of Schiller's main works was the constructive sequel to his
destructive book Formal Logic. In this sequel, Logic for Use, Schiller attempted to construct a new logic to replace the formal logic that he had criticized in Formal Logic.
What he offers is something philosophers would recognize today as a
logic covering the context of discovery and the hypothetico-deductive
method.
Whereas Schiller dismissed the possibility of formal logic, most
pragmatists are critical rather of its pretension to ultimate validity
and see logic as one logical tool among others—or perhaps, considering
the multitude of formal logics, one set of tools among others. This is
the view of C. I. Lewis. C. S. Peirce developed multiple methods for
doing formal logic.
Stephen Toulmin's The Uses of Argument inspired scholars in informal logic and rhetoric studies (although it is an epistemological work).
Metaphysics
James and Dewey were empirical
thinkers in the most straightforward fashion: experience is the
ultimate test and experience is what needs to be explained. They were
dissatisfied with ordinary empiricism because, in the tradition dating
from Hume, empiricists had a tendency to think of experience as nothing
more than individual sensations. To the pragmatists, this went against
the spirit of empiricism: we should try to explain all that is given in
experience including connections and meaning, instead of explaining them
away and positing sense data as the ultimate reality. Radical empiricism,
or Immediate Empiricism in Dewey's words, wants to give a place to
meaning and value instead of explaining them away as subjective
additions to a world of whizzing atoms.
The
"Chicago Club" including Mead, Dewey, Angell, and Moore. Pragmatism is
sometimes called American pragmatism because so many of its proponents
were and are Americans.
William James gives an interesting example of this philosophical shortcoming:
[A young graduate] began by saying
that he had always taken for granted that when you entered a philosophic
classroom you had to open relations with a universe entirely distinct
from the one you left behind you in the street. The two were supposed,
he said, to have so little to do with each other, that you could not
possibly occupy your mind with them at the same time. The world of
concrete personal experiences to which the street belongs is
multitudinous beyond imagination, tangled, muddy, painful and perplexed.
The world to which your philosophy-professor introduces you is simple,
clean and noble. The contradictions of real life are absent from it. ...
In point of fact it is far less an account of this actual world than a
clear addition built upon it ... It is no explanation of our concrete
universe.
F. C. S. Schiller's first book Riddles of the Sphinx
was published before he became aware of the growing pragmatist movement
taking place in America. In it, Schiller argues for a middle ground
between materialism and absolute metaphysics. These opposites are
comparable to what William James called tough-minded empiricism and
tender-minded rationalism. Schiller contends on the one hand that
mechanistic naturalism cannot make sense of the "higher" aspects of our
world. These include free will, consciousness, purpose, universals and,
some would add, God. On the other hand, abstract metaphysics cannot make
sense of the "lower" aspects of our world (e.g. the imperfect, change,
physicality). While Schiller is vague about the exact sort of middle
ground he is trying to establish, he suggests that metaphysics is a tool
that can aid inquiry, but that it is valuable only insofar as it does
help in explanation.
In the second half of the 20th century, Stephen Toulmin
argued that the need to distinguish between reality and appearance only
arises within an explanatory scheme and therefore that there is no
point in asking what "ultimate reality" consists of. More recently, a
similar idea has been suggested by the postanalytic philosopher Daniel Dennett,
who argues that anyone who wants to understand the world has to
acknowledge both the "syntactical" aspects of reality (i.e., whizzing
atoms) and its emergent or "semantic" properties (i.e., meaning and
value).
Radical empiricism gives answers to questions about the limits of
science, the nature of meaning and value and the workability of reductionism. These questions feature prominently in current debates about the relationship between religion and science, where it is often assumed—most pragmatists would disagree—that science degrades everything that is meaningful into "merely" physical phenomena.
Philosophy of mind
Both John Dewey in Experience and Nature (1929) and, half a century later, Richard Rorty in his Philosophy and the Mirror of Nature
(1979) argued that much of the debate about the relation of the mind to
the body results from conceptual confusions. They argue instead that
there is no need to posit the mind or mindstuff as an ontological category.
Pragmatists disagree over whether philosophers ought to adopt a
quietist or a naturalist stance toward the mind-body problem. The
former, including Rorty, want to do away with the problem because they
believe it is a pseudo-problem, whereas the latter believe that it is a
meaningful empirical question.
Ethics
Pragmatism sees no fundamental difference between practical and
theoretical reason, nor any ontological difference between facts and
values. Pragmatist ethics is broadly humanist
because it sees no ultimate test of morality beyond what matters for us
as humans. Good values are those for which we have good reasons, viz.
the good reasons approach.
The pragmatist formulation pre-dates those of other philosophers who
have stressed important similarities between values and facts such as Jerome Schneewind and John Searle.
William
James tried to show the meaningfulness of (some kinds of) spirituality
but, like other pragmatists, did not see religion as the basis of
meaning or morality.
William James' contribution to ethics, as laid out in his essay The Will to Believe,
has often been misunderstood as a plea for relativism or irrationality.
On its own terms it argues that ethics always involves a certain degree
of trust or faith and that we cannot always wait for adequate proof
when making moral decisions.
Moral questions immediately present
themselves as questions whose solution cannot wait for sensible proof. A
moral question is a question not of what sensibly exists, but of what
is good, or would be good if it did exist. ... A social organism of any
sort whatever, large or small, is what it is because each member
proceeds to his own duty with a trust that the other members will
simultaneously do theirs. Wherever a desired result is achieved by the
co-operation of many independent persons, its existence as a fact is a
pure consequence of the precursive faith in one another of those
immediately concerned. A government, an army, a commercial system, a
ship, a college, an athletic team, all exist on this condition, without
which not only is nothing achieved, but nothing is even attempted.
Of the classical pragmatists, John Dewey wrote most extensively about morality and democracy. In his classic article "Three Independent Factors in Morals", he tried to integrate three basic philosophical perspectives on
morality: the right, the virtuous and the good. He held that while all
three provide meaningful ways to think about moral questions, the
possibility of conflict among the three elements cannot always be easily
solved.
Dewey also criticized the dichotomy between means and ends which
he saw as responsible for the degradation of our everyday working lives
and education, both conceived as merely a means to an end. He stressed
the need for meaningful labor and a conception of education that viewed it not as a preparation for life but as life itself.
Dewey was opposed to other ethical philosophies of his time, notably the emotivism of Alfred Ayer.
Dewey envisioned the possibility of ethics as an experimental
discipline, and thought values could best be characterized not as
feelings or imperatives, but as hypotheses about what actions will lead
to satisfactory results or what he termed consummatory experience.
An additional implication of this view is that ethics is a fallible
undertaking because human beings are frequently unable to know what
would satisfy them.
During the late 1990s and the first decade of the 2000s, pragmatism was embraced by many in the field of bioethics, led by the philosophers John Lachs and his student Glenn McGee, whose 1997 book The Perfect Baby: A Pragmatic Approach to Genetic Engineering (see designer baby) garnered praise from within classical American philosophy and criticism from bioethics for its development of a theory of pragmatic bioethics and its rejection of the principlism theory then in vogue in medical ethics. An anthology published by the MIT Press titled Pragmatic Bioethics
included the responses of philosophers to that debate, including Micah
Hester, Griffin Trotter, and others, many of whom developed their own
theories based on the work of Dewey, Peirce, Royce and others. Lachs
developed several applications of pragmatism to bioethics independent of
but extending from the work of Dewey and James.
A recent pragmatist contribution to meta-ethics is Todd Lekan's Making Morality. Lekan argues that morality is a fallible but rational practice and that
it has traditionally been misconceived as based on theory or
principles. Instead, he argues, theory and rules arise as tools to make
practice more intelligent.
Aesthetics
John Dewey's Art as Experience,
based on the William James lectures he delivered at Harvard University,
was an attempt to show the integrity of art, culture and everyday
experience (IEP). Art, for Dewey, is or should be a part of
everyone's creative lives and not just the privilege of a select group
of artists. He also emphasizes that the audience is more than a passive
recipient. Dewey's treatment of art was a move away from the transcendental approach to aesthetics in the wake of Immanuel Kant,
who emphasized the unique character of art and the disinterested nature
of aesthetic appreciation. A notable contemporary pragmatist
aesthetician is Joseph Margolis.
He defines a work of art as "a physically embodied, culturally emergent
entity", a human "utterance" that isn't an ontological quirk but in
line with other human activity and culture in general. He emphasizes
that works of art are complex and difficult to fathom, and that no
determinate interpretation can be given.
Philosophy of religion
Both Dewey and James investigated the role that religion can still play in contemporary society, the former in A Common Faith and the latter in The Varieties of Religious Experience.
From a general point of view, for William James, something is
true only insofar as it works. Thus, the statement, for example, that
prayer is heard may work on a psychological level but (a) may not help to bring about the things you pray for, and (b) may be better explained by referring to its soothing effect than by claiming that prayers are heard. As
such, pragmatism is not antithetical to religion but it is not an
apologetic for faith either. James's metaphysical position, however, leaves open the possibility that the ontological claims of religions may be true. As he observed at the end of the Varieties, his position does not amount to a denial of the existence of transcendent realities.
Quite the contrary, he argued for the legitimate epistemic right to
believe in such realities, since such beliefs do make a difference in an
individual's life and refer to claims that cannot be verified or
falsified either on intellectual or common sensorial grounds.
Joseph Margolis in Historied Thought, Constructed World
(California, 1995) makes a distinction between "existence" and
"reality". He suggests using the term "exists" only for those things
which adequately exhibit Peirce's Secondness: things which offer
brute physical resistance to our movements. In this way, things which affect us, like numbers, may be said to be "real", although they
do not "exist". Margolis suggests that God, in such a linguistic usage,
might very well be "real", causing believers to act in such and such a
way, but might not "exist".
Education
Pragmatic pedagogy is an educational philosophy
that emphasizes teaching students knowledge that is practical for life
and encourages them to grow into better people. American philosopher John Dewey is considered one of the main thinkers of the pragmatist educational approach.
Neopragmatism
Neopragmatism is a broad contemporary category used for various thinkers who incorporate important insights of, and yet significantly diverge from,
the classical pragmatists. This divergence may occur either in their
philosophical methodology (many of them are loyal to the analytic
tradition) or in conceptual formation: for example, conceptual
pragmatist C. I. Lewis was very critical of Dewey; neopragmatist Richard Rorty disliked Peirce.
Neopragmatist thinkers who are more loyal to classical pragmatism include Sidney Hook and Susan Haack (known for the theory of foundherentism).
Many pragmatist ideas (especially those of Peirce) find a natural
expression in the decision-theoretic reconstruction of epistemology
pursued in the work of Isaac Levi. Nicholas Rescher advocated his version of methodological pragmatism, based on construing pragmatic efficacy not as a replacement for truth but as a means of its evidentiation. Rescher was also a proponent of pragmatic idealism.
Not all pragmatists are easily characterized. With the advent of postanalytic philosophy
and the diversification of Anglo-American philosophy, many philosophers
were influenced by pragmatist thought without necessarily publicly
committing themselves to that philosophical school. Daniel Dennett, a student of Quine's, falls into this category, as does Stephen Toulmin, who arrived at his philosophical position via Wittgenstein, whom he calls "a pragmatist of a sophisticated kind". Another example is Mark Johnson, whose embodied philosophy shares its psychologism, direct realism, and anti-Cartesianism with
pragmatism. Conceptual pragmatism is a theory of knowledge originating
with the work of the philosopher and logician Clarence Irving Lewis. The epistemology of conceptual pragmatism was first formulated in the 1929 book Mind and the World Order: Outline of a Theory of Knowledge.
Philosophers John R. Shook and Tibor Solymosi said that "each new
generation rediscovers and reinvents its own versions of pragmatism by
applying the best available practical and scientific methods to
philosophical problems of contemporary concern".
Legacy and contemporary relevance
In the 20th century, the movements of logical positivism and ordinary language philosophy had similarities with pragmatism. Like pragmatism, logical positivism
provides a verification criterion of meaning that is supposed to rid us
of nonsense metaphysics; however, logical positivism doesn't stress
action as pragmatism does. The pragmatists rarely used their maxim of
meaning to rule out all metaphysics as nonsense. Usually, pragmatism was
put forth to correct metaphysical doctrines or to construct empirically
verifiable ones rather than to provide a wholesale rejection.
According to Gilbert Ryle, James' pragmatism was "one minor source of the Principle of Verifiability".
Ordinary language philosophy is closer to pragmatism than other philosophy of language because of its nominalist character (although Peirce's pragmatism is not nominalist)
and because it takes the broader functioning of language in an
environment as its focus instead of investigating abstract relations
between language and world.
Pragmatism has ties to process philosophy. Much of the classical pragmatists' work developed in dialogue with process philosophers such as Henri Bergson and Alfred North Whitehead, who aren't usually considered pragmatists because they differ so much on other points. Nonetheless, philosopher Donovan Irven argues there's a strong
connection between Henri Bergson, pragmatist William James, and the
existentialist Jean-Paul Sartre regarding their theories of truth.
Behaviorism and functionalism
in psychology and sociology also have ties to pragmatism, which is not
surprising considering that James and Dewey were both scholars of
psychology and that Mead became a sociologist.
Pragmatism emphasizes the connection between thought and action. Applied fields like public administration, political science, leadership studies, international relations, conflict resolution, and research methodology have incorporated the tenets of pragmatism. Often this
connection is made using Dewey and Addams's expansive notion of
democracy.
Increasing attention is being given to pragmatist epistemology in
other branches of the social sciences, which have struggled with
divisive debates over the status of social scientific knowledge.
Proponents suggest that pragmatism offers an approach that is both pluralist and practical.
Effects on public administration
The classical pragmatism of John Dewey, William James, and Charles Sanders Peirce
has influenced research in the field of public administration. Scholars
claim classical pragmatism had a profound influence on the origin of
the field of public administration. At the most basic level, public administrators are responsible for
making programs "work" in a pluralistic, problems-oriented environment.
Public administrators are also responsible for the day-to-day work with
citizens. Dewey's participatory democracy
can be applied in this environment. Dewey and James's notion of theory as a tool helps administrators craft theories to resolve policy and
administrative problems. Further, the birth of American public administration coincides closely with the period of greatest influence of the classical pragmatists.
Which pragmatism (classical pragmatism or neo-pragmatism) makes
the most sense in public administration has been the source of debate.
The debate began when Patricia M. Shields introduced Dewey's notion of the Community of Inquiry. Hugh Miller objected to one of its three elements (problematic situation, scientific attitude, participatory democracy): the scientific attitude. A debate followed that included responses from a practitioner, an economist, a planner, other public administration scholars, and noted philosophers. Miller and Shields also responded.
In addition, applied scholarship of public administration that assesses charter schools, contracting out or outsourcing, financial management, performance measurement, urban quality of life initiatives, and urban planning in part draws on the ideas of classical pragmatism in the development of the conceptual framework and focus of analysis.
Health sector administrators' use of pragmatism has, however, been criticized as incomplete in its pragmatism: according to the classical pragmatists, knowledge is always shaped by human interests. The administrators' focus on "outcomes" simply advances their own interest, and this focus on outcomes often undermines the interests of citizens, who are often more concerned with process. On the other hand, David Brendel argues that pragmatism's ability to bridge dualisms, focus on practical problems, include multiple perspectives, incorporate participation from interested parties (patient, family, health team), and its provisional nature make it well suited to address problems in this area.
Effects on feminism
Since the mid-1990s, feminist philosophers have rediscovered classical
pragmatism as a source of feminist theories. Works by Seigfried, Duran, Keith, and Whipps explore the historic and philosophic links between feminism and
pragmatism. The connection between pragmatism and feminism took so long
to be rediscovered because pragmatism itself was eclipsed by logical
positivism during the middle decades of the twentieth century. As a
result, it was lost from feminist discourse. Feminists now consider
pragmatism's greatest strength to be the very features that led to its
decline. These are "persistent and early criticisms of positivist
interpretations of scientific methodology; disclosure of value dimension
of factual claims"; viewing aesthetics as informing everyday
experience; subordinating logical analysis to political, cultural, and
social issues; linking the dominant discourses with domination;
"realigning theory with praxis; and resisting the turn to epistemology
and instead emphasizing concrete experience".
Feminist philosophers point to Jane Addams as a founder of classical pragmatism. Mary Parker Follett was also an important feminist pragmatist concerned with organizational operation during the early decades of the 20th century. In addition, the ideas of Dewey, Mead, and James are consistent with
many feminist tenets. Jane Addams, John Dewey, and George Herbert Mead
developed their philosophies as all three became friends, influenced
each other, and were engaged in the Hull House experience and women's rights causes.
Criticisms
In the 1908 essay "The Thirteen Pragmatisms", Arthur Oncken Lovejoy argued that there is a significant ambiguity between the effects of a proposition's being true and the effects of believing in that proposition, and that many pragmatists had failed to recognize this distinction. He identified thirteen different philosophical positions that were each labeled pragmatism.
The Franciscan friar Celestine Bittle presented multiple criticisms of pragmatism in his 1936 book Reality and the Mind: Epistemology. He argued that, in William James's pragmatism, truth is entirely subjective and does not meet the widely accepted definition of truth as correspondence to reality. For Bittle, defining truth as what is useful is a "perversion of language". With truth reduced essentially to what is good, it is no longer an object of the intellect. Therefore, the problem of knowledge posed by the intellect is not solved, but rather renamed. Redefining truth as a product of the will cannot, according to Bittle, solve the problems of the intellect. Bittle also cited what he saw as contradictions in pragmatism, such as using objective facts to prove that truth does not emerge from objective fact; this, he argued, reveals that pragmatists do recognize truth as objective fact, and not, as they claim, what is useful. Bittle
argued there are also some statements that cannot be judged on human
welfare at all. Such statements (for example the assertion that "a car
is passing") are matters of "truth and error" and do not affect human
welfare.
British philosopher Bertrand Russell devoted a chapter each to James and Dewey in his 1945 book A History of Western Philosophy; Russell pointed out areas in which he agreed with them but also ridiculed James's views on truth and Dewey's views on inquiry. Hilary Putnam later argued that Russell "presented a mere caricature" of James's views and a "misreading of James", while Tom Burke argued at length that Russell presented "a skewed characterization of Dewey's point of view". Elsewhere, in Russell's book The Analysis of Mind, Russell praised James's radical empiricism, to which Russell's own account of neutral monism was indebted. Dewey, in The Bertrand Russell Case, defended Russell against an attempt to remove Russell from his chair at the College of the City of New York in 1940.
Neopragmatism as represented by Richard Rorty has been criticized as relativistic both by other neopragmatists such as Susan Haack and by many analytic philosophers. Rorty's early analytic work, however, differs notably from his later work, which some, including Rorty, consider to be closer to literary criticism
than to philosophy, and which attracts the brunt of criticism from his
detractors. Rorty has defended his views against charges of relativism
by claiming that such charges simply beg the question. Those who accuse him of being a relativist, he claims, presuppose the very dualisms (relative–absolute, appearance–reality, made–found, and so on) whose rejection is a defining feature of pragmatism to begin with. For the pragmatist, relativism about truth makes as little sense as absolutism about truth, since pragmatists do not believe in a metaphysical, extra-linguistic Truth which exists outside human vocabularies. Rorty instead believes that
scientific, philosophical, and moral progress is made through
conversation about which vocabularies are best at solving society's
problems.