Thursday, May 10, 2018

Origin of language

From Wikipedia, the free encyclopedia
The evolutionary emergence of language in the human species has been a subject of speculation for several centuries. The topic is difficult to study because of the lack of direct evidence. Consequently, scholars wishing to study the origins of language must draw inferences from other kinds of evidence such as the fossil record, archaeological evidence, contemporary language diversity, studies of language acquisition, and comparisons between human language and systems of communication existing among animals (particularly other primates). Many argue that the origins of language probably relate closely to the origins of modern human behavior, but there is little agreement about the implications and directionality of this connection.

This shortage of empirical evidence has led many scholars to regard the entire topic as unsuitable for serious study. In 1866, the Linguistic Society of Paris banned any existing or future debates on the subject, a prohibition which remained influential across much of the western world until late in the twentieth century.[1] Today, there are various hypotheses about how, why, when, and where language might have emerged.[2] Despite this, there is scarcely more agreement today than a hundred years ago, when Charles Darwin's theory of evolution by natural selection provoked a rash of armchair speculation on the topic.[3] Since the early 1990s, however, a number of linguists, archaeologists, psychologists, anthropologists, and others have attempted to address with new methods what some consider one of the hardest problems in science.[4]

Approaches

One can sub-divide approaches to the origin of language according to some underlying assumptions:[5]
  • "Continuity theories" build on the idea that language exhibits so much complexity that one cannot imagine it simply appearing from nothing in its final form; therefore it must have evolved from earlier pre-linguistic systems among our primate ancestors.
  • "Discontinuity theories" take the opposite approach—that language, as a unique trait which cannot be compared to anything found among non-humans, must have appeared fairly suddenly during the course of human evolution.
  • Some theories see language mostly as an innate faculty—largely genetically encoded.
  • Other theories regard language as a mainly cultural system—learned through social interaction.
Noam Chomsky, a prominent proponent of discontinuity theory, argues that a single chance mutation occurred in one individual on the order of 100,000 years ago, installing the language faculty (a component of the mind–brain) in "perfect" or "near-perfect" form.[6] As of 2018, a majority of linguistic scholars hold continuity-based theories, but they vary in how they envision language development. Among those who see language as mostly innate, some—notably Steven Pinker[7]—avoid speculating about specific precursors in nonhuman primates, stressing simply that the language faculty must have evolved in the usual gradual way.[8] Others in this intellectual camp—notably Ib Ulbæk[5]—hold that language evolved not from primate communication but from primate cognition, which is significantly more complex.

Those who see language as a socially learned tool of communication, such as Michael Tomasello, see it developing from the cognitively controlled aspects of primate communication, these being mostly gestural as opposed to vocal.[9][10] Where vocal precursors are concerned, many continuity theorists envisage language evolving from early human capacities for song.[11][12][13][14][15]

Transcending the continuity-versus-discontinuity divide, some scholars view the emergence of language as the consequence of some kind of social transformation[16] that, by generating unprecedented levels of public trust, liberated a genetic potential for linguistic creativity that had previously lain dormant.[17][18][19] "Ritual/speech coevolution theory" exemplifies this approach.[20][21] Scholars in this intellectual camp point to the fact that even chimpanzees and bonobos have latent symbolic capacities that they rarely—if ever—use in the wild.[22] Objecting to the sudden mutation idea, these authors argue that even if a chance mutation were to install a language organ in an evolving bipedal primate, it would be adaptively useless under all known primate social conditions. A very specific social structure—one capable of upholding unusually high levels of public accountability and trust—must have evolved before or concurrently with language to make reliance on "cheap signals" (words) an evolutionarily stable strategy.

Because the emergence of language lies so far back in human prehistory, the relevant developments have left no direct historical traces; neither can comparable processes be observed today. Despite this, the emergence of new sign languages in modern times—Nicaraguan Sign Language, for example—may potentially offer insights into the developmental stages and creative processes necessarily involved.[23] Another approach inspects early human fossils, looking for traces of physical adaptation to language use.[24][25] In some cases, when the DNA of extinct humans can be recovered, the presence or absence of genes considered to be language-relevant—FOXP2, for example—may prove informative.[26] Another approach, this time archaeological, involves invoking symbolic behavior (such as repeated ritual activity) that may leave an archaeological trace—such as mining and modifying ochre pigments for body-painting—while developing theoretical arguments to justify inferences from symbolism in general to language in particular.[27][28][29]

The time range for the evolution of language and/or its anatomical prerequisites extends, at least in principle, from the phylogenetic divergence of the hominin lineage from Pan (5 to 6 million years ago), through the emergence of the genus Homo (2.3 to 2.4 million years ago), to the emergence of full behavioral modernity some 150,000–50,000 years ago. Few dispute that Australopithecus probably lacked vocal communication significantly more sophisticated than that of great apes in general,[30] but scholarly opinions vary as to the developments since the appearance of Homo some 2.5 million years ago. Some scholars assume the development of primitive language-like systems (proto-language) as early as Homo habilis, while others place the development of symbolic communication only with Homo erectus (1.8 million years ago) or with Homo heidelbergensis (0.6 million years ago) and the development of language proper with Homo sapiens, currently estimated at less than 200,000 years ago.

Using statistical methods to estimate the time required to achieve the current spread and diversity in modern languages, Johanna Nichols—a linguist at the University of California, Berkeley—argued in 1998 that vocal languages must have begun diversifying in our species at least 100,000 years ago.[31] A further study by Q. D. Atkinson[12] suggests that successive population bottlenecks occurred as our African ancestors migrated to other areas, leading to a decrease in genetic and phenotypic diversity. Atkinson argues that these bottlenecks also affected culture and language, suggesting that the further away a particular language is from Africa, the fewer phonemes it contains. By way of evidence, Atkinson claims that today's African languages tend to have relatively large numbers of phonemes, whereas languages from areas in Oceania (the last place to which humans migrated) have relatively few. Relying heavily on Atkinson's work, a subsequent study has explored the rate at which phonemes develop naturally, comparing this rate to some of Africa's oldest languages. The results suggest that language first evolved around 350,000–150,000 years ago, which is around the time when modern Homo sapiens evolved.[32] Estimates of this kind are not universally accepted, but jointly considering genetic, archaeological, palaeontological and much other evidence indicates that language probably emerged somewhere in sub-Saharan Africa during the Middle Stone Age, roughly contemporaneous with the speciation of Homo sapiens.[33]
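
Atkinson's proposal amounts to a serial founder effect applied to phoneme inventories: each successive migration carries only a sample of the parent population's sound contrasts. The following toy simulation sketches that idea; the inventory size, number of migration hops, and retention rate are illustrative assumptions, not Atkinson's fitted values, and real inventories can of course also regrow:

    import random

    def founder_chain(initial_phonemes=50, hops=10, retention=0.9, seed=42):
        """Toy serial founder effect: each migration 'hop' away from the
        homeland keeps only a sample of the parent group's phonemes."""
        rng = random.Random(seed)
        inventory = set(range(initial_phonemes))
        sizes = [len(inventory)]
        for _ in range(hops):
            kept = max(1, int(len(inventory) * retention))  # ~90% survive each hop
            inventory = set(rng.sample(sorted(inventory), kept))
            sizes.append(len(inventory))
        return sizes

    print(founder_chain())  # monotone decline, e.g. [50, 45, 40, 36, 32, ...]

Under these assumptions the inventory shrinks monotonically with distance from the point of origin, which is the qualitative pattern Atkinson reports.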

Language origin hypotheses

Early speculations

I cannot doubt that language owes its origin to the imitation and modification, aided by signs and gestures, of various natural sounds, the voices of other animals, and man's own instinctive cries.
— Charles Darwin, 1871. The Descent of Man, and Selection in Relation to Sex[34]
In 1861, historical linguist Max Müller published a list of speculative theories concerning the origins of spoken language:[35]
  • Bow-wow. The bow-wow or cuckoo theory, which Müller attributed to the German philosopher Johann Gottfried Herder, saw early words as imitations of the cries of beasts and birds.
  • Pooh-pooh. The pooh-pooh theory saw the first words as emotional interjections and exclamations triggered by pain, pleasure, surprise, etc.
  • Ding-dong. Müller suggested what he called the ding-dong theory, which states that all things have a vibrating natural resonance, echoed somehow by man in his earliest words.
  • Yo-he-ho. The yo-he-ho theory claims language emerged from collective rhythmic labor, the attempt to synchronize muscular effort resulting in sounds such as heave alternating with sounds such as ho.
  • Ta-ta. This did not feature in Max Müller's list, having been proposed in 1930 by Sir Richard Paget.[36] According to the ta-ta theory, humans made the earliest words by tongue movements that mimicked manual gestures, rendering them audible.
Most scholars today consider all such theories not so much wrong—they occasionally offer peripheral insights—as comically naïve and irrelevant.[37][38] The problem with these theories is that they are so narrowly mechanistic.[citation needed] They assume that once our ancestors had stumbled upon the appropriate ingenious mechanism for linking sounds with meanings, language automatically evolved and changed.

Problems of reliability and deception

From the perspective of modern science, the main obstacle to the evolution of language-like communication in nature is not a mechanistic one. Rather, it is the fact that symbols—arbitrary associations of sounds or other perceptible forms with corresponding meanings—are unreliable and may well be false.[39] As the saying goes, "words are cheap".[40] The problem of reliability was not recognized at all by Darwin, Müller or the other early evolutionary theorists.

Animal vocal signals are, for the most part, intrinsically reliable. When a cat purrs, the signal constitutes direct evidence of the animal's contented state. We trust the signal, not because the cat is inclined to be honest, but because it just cannot fake that sound. Primate vocal calls may be slightly more manipulable, but they remain reliable for the same reason—because they are hard to fake.[41] Primate social intelligence is "Machiavellian"—self-serving and unconstrained by moral scruples. Monkeys and apes often attempt to deceive each other, while at the same time remaining constantly on guard against falling victim to deception themselves.[42][43] Paradoxically, it is theorized that primates' resistance to deception is what blocks the evolution of their signalling systems along language-like lines. Language is ruled out because the best way to guard against being deceived is to ignore all signals except those that are instantly verifiable. Words automatically fail this test.[20]

Words are easy to fake. Should they turn out to be lies, listeners will adapt by ignoring them in favor of hard-to-fake indices or cues. For language to work, then, listeners must be confident that those with whom they are on speaking terms are generally likely to be honest.[44] A peculiar feature of language is "displaced reference", which means reference to topics outside the currently perceptible situation. This property prevents utterances from being corroborated in the immediate "here" and "now". For this reason, language presupposes relatively high levels of mutual trust in order to become established over time as an evolutionarily stable strategy. This stability is born of a longstanding mutual trust and is what grants language its authority. A theory of the origins of language must therefore explain why humans could begin trusting cheap signals in ways that other animals apparently cannot (see signalling theory).

The 'mother tongues' hypothesis

The "mother tongues" hypothesis was proposed in 2004 as a possible solution to this problem.[45] W. Tecumseh Fitch suggested that the Darwinian principle of 'kin selection'[46]—the convergence of genetic interests between relatives—might be part of the answer. Fitch suggests that languages were originally 'mother tongues'. If language evolved initially for communication between mothers and their own biological offspring, extending later to include adult relatives as well, the interests of speakers and listeners would have tended to coincide. Fitch argues that shared genetic interests would have led to sufficient trust and cooperation for intrinsically unreliable signals—words—to become accepted as trustworthy and so begin evolving for the first time.

Critics of this theory point out that kin selection is not unique to humans.[47] Other primate mothers also share genes with their progeny, as do all other animals, so why is it only humans who speak? Furthermore, it is difficult to believe that early humans restricted linguistic communication to genetic kin: the incest taboo must have forced men and women to interact and communicate with more distant relatives. So even if we accept Fitch's initial premises, the extension of the posited 'mother tongue' networks from close relatives to more distant relatives remains unexplained.[47] Fitch argues, however, that the extended period of physical immaturity of human infants and the postnatal growth of the human brain give the human-infant relationship a different and more extended period of intergenerational dependency than that found in any other species.[45]

Another criticism of Fitch's theory is that language today is not predominantly used to communicate to kin. Although Fitch's theory can potentially explain the origin of human language, it cannot explain the evolution of modern language.[45]

The 'obligatory reciprocal altruism' hypothesis

Ib Ulbæk[5] invokes another standard Darwinian principle—'reciprocal altruism'[48]—to explain the unusually high levels of intentional honesty necessary for language to evolve. 'Reciprocal altruism' can be expressed as the principle that if you scratch my back, I'll scratch yours. In linguistic terms, it would mean that if you speak truthfully to me, I'll speak truthfully to you. Ordinary Darwinian reciprocal altruism, Ulbæk points out, is a relationship established between frequently interacting individuals. For language to prevail across an entire community, however, the necessary reciprocity would have needed to be enforced universally instead of being left to individual choice. Ulbæk concludes that for language to evolve, society as a whole must have been subject to moral regulation.

Critics point out that this theory fails to explain when, how, why or by whom 'obligatory reciprocal altruism' could possibly have been enforced.[21] Various proposals have been offered to remedy this defect.[21] A further criticism is that language doesn't work on the basis of reciprocal altruism anyway. Humans in conversational groups don't withhold information from all except listeners likely to offer valuable information in return. On the contrary, they seem to want to advertise to the world their access to socially relevant information, broadcasting that information without expectation of reciprocity to anyone who will listen.[49]

The gossip and grooming hypothesis

Gossip, according to Robin Dunbar in his book Grooming, Gossip and the Evolution of Language, does for group-living humans what manual grooming does for other primates—it allows individuals to service their relationships and so maintain their alliances on the basis of the principle: if you scratch my back, I'll scratch yours. Dunbar argues that as humans began living in increasingly larger social groups, the task of manually grooming all one's friends and acquaintances became so time-consuming as to be unaffordable.[50] In response to this problem, humans developed 'a cheap and ultra-efficient form of grooming'—vocal grooming. To keep allies happy, one now needs only to 'groom' them with low-cost vocal sounds, servicing multiple allies simultaneously while keeping both hands free for other tasks. Vocal grooming then evolved gradually into vocal language—initially in the form of 'gossip'.[50] Dunbar's hypothesis seems to be supported by the fact that the structure of language shows adaptations to the function of narration in general.[51]
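
Dunbar's argument is at bottom a time-budget calculation: if bonding time grows roughly linearly with group size, one-on-one grooming becomes unaffordable well before human-scale groups are reached. A back-of-envelope sketch, in which the minutes-per-partner figure and the linear scaling are illustrative assumptions rather than Dunbar's fitted primate data:

    def grooming_share(group_size, minutes_per_partner=2.0, waking_minutes=12 * 60):
        """Fraction of a 12-hour waking day spent grooming every other
        group member one-on-one, assuming a fixed time cost per partner."""
        return (group_size - 1) * minutes_per_partner / waking_minutes

    for n in (20, 50, 150):
        print(n, f"{grooming_share(n):.0%}")  # 20 -> 5%, 50 -> 14%, 150 -> 41%

Even on these modest assumptions, a group of 150 would demand roughly 40% of the waking day, far beyond what any primate spends grooming, whereas vocal 'grooming' can service several partners at once.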

Critics of this theory point out that the very efficiency of 'vocal grooming'—the fact that words are so cheap—would have undermined its capacity to signal commitment of the kind conveyed by time-consuming and costly manual grooming.[52] A further criticism is that the theory does nothing to explain the crucial transition from vocal grooming—the production of pleasing but meaningless sounds—to the cognitive complexities of syntactical speech.

Ritual/speech coevolution

The ritual/speech coevolution theory was originally proposed by social anthropologist Roy Rappaport[17] before being elaborated by anthropologists such as Chris Knight,[20] Jerome Lewis,[53] Nick Enfield,[54] Camilla Power[44] and Ian Watts.[29] Cognitive scientist and robotics engineer Luc Steels[55] is another prominent supporter of this general approach, as is biological anthropologist/neuroscientist Terrence Deacon.[56]

These scholars argue that there can be no such thing as a 'theory of the origins of language'. This is because language is not a separate adaptation but an internal aspect of something much wider—namely, human symbolic culture as a whole.[19] Attempts to explain language independently of this wider context have spectacularly failed, say these scientists, because they are addressing a problem with no solution. Can we imagine a historian attempting to explain the emergence of credit cards independently of the wider system of which they are a part? Using a credit card makes sense only if you have a bank account institutionally recognized within a certain kind of advanced capitalist society—one where electronic communications technology and digital computers have already been invented and fraud can be detected and prevented. In much the same way, language would not work outside a specific array of social mechanisms and institutions. For example, it would not work for a nonhuman ape communicating with others in the wild. Not even the cleverest nonhuman ape could make language work under such conditions.
Lie and alternative, inherent in language ... pose problems to any society whose structure is founded on language, which is to say all human societies. I have therefore argued that if there are to be words at all it is necessary to establish The Word, and that The Word is established by the invariance of liturgy.
— Roy Rappaport[57]
Advocates of this school of thought point out that words are cheap. As digital hallucinations, they are intrinsically unreliable. Should an especially clever nonhuman ape, or even a group of articulate nonhuman apes, try to use words in the wild, they would carry no conviction. The primate vocalizations that do carry conviction—those they actually use—are unlike words, in that they are emotionally expressive, intrinsically meaningful and reliable because they are relatively costly and hard to fake.

Language consists of digital contrasts whose cost is essentially zero. As pure social conventions, signals of this kind cannot evolve in a Darwinian social world—they are a theoretical impossibility.[39] Being intrinsically unreliable, language works only if you can build up a reputation for trustworthiness within a certain kind of society—namely, one where symbolic cultural facts (sometimes called 'institutional facts') can be established and maintained through collective social endorsement.[58] In any hunter-gatherer society, the basic mechanism for establishing trust in symbolic cultural facts is collective ritual.[59] Therefore, the task facing researchers into the origins of language is more multidisciplinary than is usually supposed. It involves addressing the evolutionary emergence of human symbolic culture as a whole, with language an important but subsidiary component.

Critics of the theory include Noam Chomsky, who terms it the 'non-existence' hypothesis—a denial of the very existence of language as an object of study for natural science.[60] Chomsky's own theory is that language emerged in an instant and in perfect form,[61] prompting his critics in turn to retort that only something that does not exist—a theoretical construct or convenient scientific fiction—could possibly emerge in such a miraculous way.[18] The controversy remains unresolved.

Tool culture resilience and grammar in early Homo

While tools like those made by early Homo can be copied by imitation when demonstration is possible, research on primate tool cultures shows that non-verbal cultures are vulnerable to environmental change. In particular, if the environment in which a skill can be used disappears for a longer period of time than an individual ape's or early human's lifespan, the skill will be lost if the culture is imitative and non-verbal. Chimpanzees, macaques and capuchin monkeys are all known to lose tool techniques under such circumstances. Researchers on primate culture vulnerability therefore argue that since early Homo species as far back as Homo habilis retained their tool cultures despite many climate change cycles lasting centuries to millennia each, these species had sufficiently developed language abilities to verbally describe complete procedures, and therefore grammar and not only a two-word "proto-language".[62][63]

The theory that early Homo species had sufficiently developed brains for grammar is also supported by researchers who study brain development in children, noting that grammar is developed while connections across the brain are still significantly lower than adult level. These researchers argue that these lowered system requirements for grammatical language make it plausible that the genus Homo had grammar at connection levels in the brain that were significantly lower than those of Homo sapiens and that more recent steps in the evolution of the human brain were not about language.[64][65]

Chomsky's single step theory

According to Chomsky's single mutation theory, the emergence of language resembled the formation of a crystal: digital infinity was the seed crystal in a super-saturated primate brain, on the verge of blossoming into the human mind, by physical law, once evolution added a single small but crucial keystone.[66][61] Whilst some suggest it follows from this theory that language appeared rather suddenly within the history of human evolution, Chomsky, writing with computational linguist and computer scientist Robert C. Berwick, suggests it is completely compatible with modern biology. They note "none of the recent accounts of human language evolution seem to have completely grasped the shift from conventional Darwinism to its fully stochastic modern version—specifically, that there are stochastic effects not only due to sampling like directionless drift, but also due to directed stochastic variation in fitness, migration, and heritability—indeed, all the "forces" that affect individual or gene frequencies. ... All this can affect evolutionary outcomes—outcomes that as far as we can make out are not brought out in recent books on the evolution of language, yet would arise immediately in the case of any new genetic or individual innovation, precisely the kind of scenario likely to be in play when talking about language's emergence."

Citing evolutionary geneticist Svante Pääbo, they concur that a substantial difference must have occurred to differentiate Homo sapiens from Neanderthals to "prompt the relentless spread of our species who had never crossed open water up and out of Africa and then on across the entire planet in just a few tens of thousands of years. ... What we do not see is any kind of "gradualism" in new tool technologies or innovations like fire, shelters, or figurative art." Berwick and Chomsky therefore suggest language emerged approximately between 200,000 years ago and 60,000 years ago (between the arrival of the first anatomically modern humans in southern Africa, and the last exodus from Africa, respectively). "That leaves us with about 130,000 years, or approximately 5,000–6,000 generations of time for evolutionary change. This is not 'overnight in one generation' as some have (incorrectly) inferred—but neither is it on the scale of geological eons. It's time enough—within the ballpark for what Nilsson and Pelger (1994) estimated as the time required for the full evolution of a vertebrate eye from a single cell, even without the invocation of any 'evo-devo' effects."[67]
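
The generation count in the quoted passage is simple arithmetic; a quick check, treating the 22-26-year generation times as assumptions:

    window_years = 130_000                 # Berwick and Chomsky's figure
    for gen_years in (22, 24, 26):         # assumed human generation times
        print(gen_years, window_years // gen_years)
    # 22 -> 5909, 24 -> 5416, 26 -> 5000: roughly 5,000-6,000 generations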

Gestural theory

The gestural theory states that human language developed from gestures that were used for simple communication.

Two types of evidence support this theory.
  1. Gestural language and vocal language depend on similar neural systems. The regions on the cortex that are responsible for mouth and hand movements border each other.
  2. Nonhuman primates can use gestures or symbols for at least primitive communication, and some of their gestures resemble those of humans, such as the "begging posture", with the hands stretched out, which humans share with chimpanzees.[68]
Research has found strong support for the idea that verbal language and sign language depend on similar neural structures. Patients who used sign language, and who suffered from a left-hemisphere lesion, showed the same disorders with their sign language as vocal patients did with their oral language.[69] Other researchers found that the same left-hemisphere brain regions were active during sign language as during the use of vocal or written language.[70]

Primate gesture is at least partially genetic: different nonhuman apes will perform gestures characteristic of their species, even if they have never seen another ape perform that gesture. For example, gorillas beat their breasts. This shows that gestures are an intrinsic and important part of primate communication, which supports the idea that language evolved from gesture.[71]

Further evidence suggests that gesture and language are linked. In humans, manually gesturing has an effect on concurrent vocalizations, thus creating certain natural vocal associations of manual efforts. Chimpanzees move their mouths when performing fine motor tasks. These mechanisms may have played an evolutionary role in enabling the development of intentional vocal communication as a supplement to gestural communication. Voice modulation could have been prompted by preexisting manual actions.[71]

There is also the fact that, from infancy, gestures both supplement and predict speech.[72][73] This addresses the idea that gestures quickly change in humans from a sole means of communication (from a very young age) to a supplemental and predictive behavior that we use despite being able to communicate verbally. This too serves as a parallel to the idea that gesture developed first and language subsequently built upon it.

Two possible scenarios have been proposed for the development of language,[74] one of which supports the gestural theory:
  1. Language developed from the calls of our ancestors.
  2. Language was derived from gesture.
The first perspective, that language evolved from the calls of our ancestors, seems logical because both humans and animals make sounds or cries. One evolutionary reason to doubt it is that, anatomically, the center that controls calls in monkeys and other animals is located in a completely different part of the brain than in humans. In monkeys, this center is located in the depths of the brain, in regions associated with emotion. In the human system, it is located in an area unrelated to emotion; humans can communicate simply to communicate, without emotion. So, anatomically, this scenario does not work.[74] We are therefore left with the idea that language was derived from gesture (we communicated by gesture first and sound was attached later).

The important question for gestural theories is why there was a shift to vocalization. Various explanations have been proposed:
  1. Our ancestors started to use more and more tools, meaning that their hands were occupied and could no longer be used for gesturing.[75]
  2. Manual gesturing requires that speakers and listeners be visible to one another. In many situations, they might need to communicate, even without visual contact—for example after nightfall or when foliage obstructs visibility.
  3. A composite hypothesis holds that early language took the form of part gestural and part vocal mimesis (imitative 'song-and-dance'), combining modalities because all signals (like those of nonhuman apes and monkeys) still needed to be costly in order to be intrinsically convincing. In that event, each multi-media display would have needed not just to disambiguate an intended meaning but also to inspire confidence in the signal's reliability. The suggestion is that only once community-wide contractual understandings had come into force[76] could trust in communicative intentions be automatically assumed, at last allowing Homo sapiens to shift to a more efficient default format. Since vocal distinctive features (sound contrasts) are ideal for this purpose, it was only at this point—when intrinsically persuasive body-language was no longer required to convey each message—that the decisive shift from manual gesture to our current primary reliance on spoken language occurred.[18][20][77]
A comparable hypothesis states that in 'articulate' language, gesture and vocalisation are intrinsically linked, as language evolved from equally intrinsically linked dance and song.[15] Humans still use manual and facial gestures when they speak, especially when people meet who have no language in common.[78] There are also, of course, a great number of sign languages still in existence, commonly associated with deaf communities. These sign languages are equal in complexity, sophistication, and expressive power to any oral language[citation needed]. The cognitive functions are similar and the parts of the brain used are similar. The main difference is that the "phonemes" are produced on the outside of the body, articulated with hands, body, and facial expression, rather than inside the body, articulated with tongue, teeth, lips, and breathing.[citation needed] (Compare the motor theory of speech perception.)

Critics of gestural theory note that it is difficult to name serious reasons why the initial pitch-based vocal communication (which is present in primates) would be abandoned in favor of the much less effective non-vocal, gestural communication.[citation needed] However, Michael Corballis has pointed out that primate vocal communication (such as alarm calls) is thought to be beyond conscious control, unlike hand movement, and is thus not credible as a precursor to human language; primate vocalization is instead homologous to, and continued in, involuntary reflexes (connected with basic human emotions) such as screams or laughter (the fact that these can be faked does not disprove the fact that genuine involuntary responses to fear or surprise exist).[citation needed] Also, gesture is not generally less effective, and depending on the situation can even be advantageous, for example in a loud environment or where it is important to be silent, such as on a hunt. Other challenges to the "gesture-first" theory have been presented by researchers in psycholinguistics, including David McNeill.[citation needed]

Tool-use associated sound in the evolution of language

Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. The tool-use sound hypothesis suggests that the production and perception of sound also contributed substantially, particularly incidental sound of locomotion (ISOL) and tool-use sound (TUS).[79] Human bipedalism resulted in rhythmic and more predictable ISOL. That may have stimulated the evolution of musical abilities, auditory working memory, and the ability to produce complex vocalizations and to mimic natural sounds.[80] Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor-processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use.[79] A new way to communicate about tools, especially when out of sight, would have had a selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved.

Mirror neurons and language origins

In humans, functional MRI studies have reported finding areas homologous to the monkey mirror neuron system in the inferior frontal cortex, close to Broca's area, one of the language regions of the brain. This has led to suggestions that human language evolved from a gesture performance/understanding system implemented in mirror neurons. Mirror neurons have been said to have the potential to provide a mechanism for action-understanding, imitation-learning, and the simulation of other people's behavior.[81] This hypothesis is supported by some cytoarchitectonic homologies between monkey premotor area F5 and human Broca's area.[82] Rates of vocabulary expansion link to the ability of children to vocally mirror non-words and so to acquire the new word pronunciations. Such speech repetition occurs automatically, quickly[83] and separately in the brain from speech perception.[84][85] Moreover, such vocal imitation can occur without comprehension, as in speech shadowing[86] and echolalia.[82][87] Further evidence for this link comes from a recent study in which the brain activity of two participants was measured using fMRI while they were gesturing words to each other in a game of charades—a modality that some have suggested might represent the evolutionary precursor of human language. Analysis of the data using Granger causality revealed that the mirror-neuron system of the observer indeed reflects the pattern of activity in the motor system of the sender, supporting the idea that the motor concept associated with the words is indeed transmitted from one brain to another using the mirror system.[88]
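
Granger causality, the analysis used in the charades study, asks whether the past of one time series improves prediction of another. A self-contained sketch on synthetic data (not the study's fMRI recordings), using the standard test from the statsmodels library:

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 500
    sender = rng.standard_normal(n)        # stand-in for the gesturer's motor signal
    observer = np.empty(n)                 # stand-in for the observer's mirror activity
    observer[0] = rng.standard_normal()
    for t in range(1, n):
        # Observer activity partly driven by the sender's previous state.
        observer[t] = 0.6 * sender[t - 1] + 0.3 * rng.standard_normal()

    # Test whether column 1 (sender) helps predict column 0 (observer).
    results = grangercausalitytests(np.column_stack([observer, sender]), maxlag=2)
    p_value = results[1][0]["ssr_ftest"][1]
    print(f"lag-1 F-test p = {p_value:.3g}")  # tiny p: sender 'Granger-causes' observer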

Not all linguists agree with the above arguments, however. In particular, supporters of Noam Chomsky argue against the possibility that the mirror neuron system can play any role in the hierarchical recursive structures essential to syntax.[89]

Putting the baby down theory

According to Dean Falk's 'putting the baby down' theory, vocal interactions between early hominid mothers and infants sparked a sequence of events that led, eventually, to our ancestors' earliest words.[90] The basic idea is that evolving human mothers, unlike their counterparts in other primates, couldn't move around and forage with their infants clinging onto their backs. Loss of fur in the human case left infants with no means of clinging on. Frequently, therefore, mothers had to put their babies down. As a result, these babies needed to be reassured that they were not being abandoned. Mothers responded by developing 'motherese'—an infant-directed communicative system embracing facial expressions, body language, touching, patting, caressing, laughter, tickling and emotionally expressive contact calls. The argument is that language somehow developed out of all this.[90]

In The Mental and Social Life of Babies, psychologist Kenneth Kaye noted that no usable adult language could have evolved without interactive communication between very young children and adults. "No symbolic system could have survived from one generation to the next if it could not have been easily acquired by young children under their normal conditions of social life."[91]

Grammaticalisation theory

'Grammaticalisation' is a continuous historical process in which free-standing words develop into grammatical appendages, while these in turn become ever more specialized and grammatical. An initially 'incorrect' usage, in becoming accepted, leads to unforeseen consequences, triggering knock-on effects and extended sequences of change. Paradoxically, grammar evolves because, in the final analysis, humans care less about grammatical niceties than about making themselves understood.[92] If this is how grammar evolves today, according to this school of thought, we can legitimately infer similar principles at work among our distant ancestors, when grammar itself was first being established.[93][94][95]

In order to reconstruct the evolutionary transition from early language to languages with complex grammars, we need to know which hypothetical sequences are plausible and which are not. In order to convey abstract ideas, the first recourse of speakers is to fall back on immediately recognizable concrete imagery, very often deploying metaphors rooted in shared bodily experience.[96] A familiar example is the use of concrete terms such as 'belly' or 'back' to convey abstract meanings such as 'inside' or 'behind'. Equally metaphorical is the strategy of representing temporal patterns on the model of spatial ones. For example, English speakers might say 'It is going to rain,' modeled on 'I am going to London.' This can be abbreviated colloquially to 'It's gonna rain.' Even when in a hurry, we don't say 'I'm gonna London'—the contraction is restricted to the job of specifying tense. From such examples we can see why grammaticalization is consistently unidirectional—from concrete to abstract meaning, not the other way around.[93]
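
The restriction on 'gonna' can be captured as a toy rule: contraction is licensed only where 'going to' functions as a tense marker, that is, before a verb, not where it expresses literal motion toward a place. A hypothetical sketch, with a small hard-coded verb list standing in for real part-of-speech tagging:

    import re

    VERBS = {"rain", "leave", "win", "eat"}  # toy lexicon; a real system would use a POS tagger

    def contract(sentence):
        """Rewrite 'going to' as 'gonna' only in its tense-marking use."""
        def repl(match):
            nxt = match.group(1)
            return f"gonna {nxt}" if nxt in VERBS else match.group(0)
        return re.sub(r"going to (\w+)", repl, sentence)

    print(contract("It is going to rain"))   # 'It is gonna rain'
    print(contract("I am going to London"))  # unchanged: motion sense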

Grammaticalization theorists picture early language as simple, perhaps consisting only of nouns.[95]p. 111 Even under that extreme theoretical assumption, however, it is difficult to imagine what would realistically have prevented people from using, say, 'spear' as if it were a verb ('Spear that pig!'). People might have used their nouns as verbs or their verbs as nouns as occasion demanded. In short, while a noun-only language might seem theoretically possible, grammaticalization theory indicates that it cannot have remained fixed in that state for any length of time.[93][97]

Creativity drives grammatical change.[97] This presupposes a certain attitude on the part of listeners. Instead of punishing deviations from accepted usage, listeners must prioritize imaginative mind-reading. Imaginative creativity—emitting a leopard alarm when no leopard was present, for example—is not the kind of behavior which, say, vervet monkeys would appreciate or reward.[98] Creativity and reliability are incompatible demands; for 'Machiavellian' primates as for animals generally, the overriding pressure is to demonstrate reliability.[99] If humans escape these constraints, it is because in our case, listeners are primarily interested in mental states.

To focus on mental states is to accept fictions—inhabitants of the imagination—as potentially informative and interesting. Take the use of metaphor. A metaphor is, literally, a false statement.[100] Think of Romeo's declaration, 'Juliet is the sun!' Juliet is a woman, not a ball of plasma in the sky, but human listeners are not (or not usually) pedants insistent on point-by-point factual accuracy. They want to know what the speaker has in mind. Grammaticalization is essentially based on metaphor. To outlaw its use would be to stop grammar from evolving and, by the same token, to exclude all possibility of expressing abstract thought.[96][101]

A criticism of all this is that while grammaticalization theory might explain language change today, it does not satisfactorily address the really difficult challenge—explaining the initial transition from primate-style communication to language as we know it. Rather, the theory assumes that language already exists. As Bernd Heine and Tania Kuteva acknowledge: "Grammaticalization requires a linguistic system that is used regularly and frequently within a community of speakers and is passed on from one group of speakers to another".[95] Outside modern humans, such conditions do not prevail.

Evolution-Progression Model

Human language is used for self-expression; however, expression displays different stages. The consciousness of self and feelings represents the stage immediately prior to the external, phonetic expression of feelings in the form of sound, i.e., language. Intelligent animals such as dolphins, Eurasian magpies, and chimpanzees live in communities, wherein they assign themselves roles for group survival and show emotions such as sympathy.[102] When such animals view their reflection (mirror test), they recognize themselves and exhibit self-consciousness.[103] Notably, humans evolved in a quite different environment than that of these animals. The human environment accommodated the development of interaction, self-expression, and tool-making as survival became easier with the advancement of tools, shelters, and fire-making.[104] Increasing brain size allowed advanced provisioning and tools, and the technological advances of the Palaeolithic era, which built upon the earlier evolutionary innovations of bipedalism and hand versatility, allowed the development of human language.[citation needed]

Self-domesticated ape theory

According to a study investigating the song differences between the white-rumped munia and its domesticated counterpart (the Bengalese finch), the wild munias use a highly stereotyped song sequence, whereas the domesticated ones sing a highly unconstrained song. In wild finches, song syntax is subject to female preference—sexual selection—and remains relatively fixed. However, in the Bengalese finch, natural selection is replaced by breeding, in this case for colorful plumage, and thus, decoupled from selective pressures, stereotyped song syntax is allowed to drift. It is replaced, supposedly within 1,000 generations, by a variable and learned sequence. Wild finches, moreover, are thought incapable of learning song sequences from other finches.[105] In the field of bird vocalization, brains capable of producing only an innate song have very simple neural pathways: the primary forebrain motor center, called the robust nucleus of the arcopallium, connects to midbrain vocal outputs, which in turn project to brainstem motor nuclei. By contrast, in brains capable of learning songs, the arcopallium receives input from numerous additional forebrain regions, including those involved in learning and social experience. Control over song generation has become less constrained, more distributed, and more flexible.[106]
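
The contrast the study describes can be pictured as two Markov chains over the same syllable set: the wild song has deterministic transitions, the domesticated song distributed ones. The syllables and probabilities below are invented for illustration:

    import random

    WILD = {"a": {"b": 1.0}, "b": {"c": 1.0}, "c": {"d": 1.0}, "d": {"a": 1.0}}
    DOMESTIC = {
        "a": {"b": 0.5, "c": 0.3, "d": 0.2},
        "b": {"a": 0.4, "c": 0.4, "d": 0.2},
        "c": {"a": 0.3, "b": 0.3, "d": 0.4},
        "d": {"a": 0.6, "b": 0.2, "c": 0.2},
    }

    def sing(chain, start="a", length=12, seed=1):
        """Generate a song by following the chain's transition weights."""
        rng = random.Random(seed)
        song, state = [start], start
        for _ in range(length - 1):
            state = rng.choices(list(chain[state]), weights=chain[state].values())[0]
            song.append(state)
        return "".join(song)

    print(sing(WILD))      # 'abcdabcdabcd': rigid, fully predictable
    print(sing(DOMESTIC))  # variable, weakly constrained sequence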

One way to think about human evolution is that we are self-domesticated apes. Just as domestication relaxed selection for stereotypic songs in the finches—mate choice was supplanted by choices made by the aesthetic sensibilities of bird breeders and their customers—so might our cultural domestication have relaxed selection on many of our primate behavioral traits, allowing old pathways to degenerate and reconfigure. Given the highly indeterminate way that mammalian brains develop—they basically construct themselves "bottom up", with one set of neuronal interactions setting the stage for the next round of interactions—degraded pathways would tend to seek out and find new opportunities for synaptic hookups. Such inherited de-differentiations of brain pathways might have contributed to the functional complexity that characterizes human language. And, as exemplified by the finches, such de-differentiations can occur in very rapid time-frames.[107]

Speech and language for communication

A distinction can be drawn between speech and language. Language is not necessarily spoken: it might alternatively be written or signed. Speech is among a number of different methods of encoding and transmitting linguistic information, albeit arguably the most natural one.[108]

Some scholars view language as an initially cognitive development, its 'externalisation' to serve communicative purposes occurring later in human evolution. According to one such school of thought, the key feature distinguishing human language is recursion[109] (in this context, the iterative embedding of phrases within phrases). Other scholars—notably Daniel Everett—deny that recursion is universal, citing certain languages (e.g. Pirahã) which allegedly lack this feature.[110]

The ability to ask questions is considered by some to distinguish language from non-human systems of communication.[111] Some captive primates (notably bonobos and chimpanzees), having learned to use rudimentary signing to communicate with their human trainers, proved able to respond correctly to complex questions and requests. Yet they failed to ask even the simplest questions themselves.[citation needed] Conversely, human children are able to ask their first questions (using only question intonation) during the babbling period of their development, long before they start using syntactic structures. Although babies from different cultures acquire native languages from their social environment, all languages of the world without exception—tonal, non-tonal, intonational and accented—use similar rising "question intonation" for yes–no questions.[112][113] This fact is strong evidence of the universality of question intonation. In general, according to some authors, sentence intonation/pitch is pivotal in spoken grammar and is the basic information used by children to learn the grammar of whatever language.[15]

Cognitive development and language

One of the intriguing abilities that language users have is that of high-level reference (or deixis), the ability to refer to things or states of being that are not in the immediate realm of the speaker. This ability is often related to theory of mind, or an awareness of the other as a being like the self with individual wants and intentions. According to Chomsky, Hauser and Fitch (2002), there are six main aspects of this high-level reference system:
  • Theory of mind
  • Capacity to acquire non-linguistic conceptual representations, such as the object/kind distinction
  • Referential vocal signals
  • Imitation as a rational, intentional system
  • Voluntary control over signal production as evidence of intentional communication
  • Number representation[109]

Theory of mind

Simon Baron-Cohen (1999) argues that theory of mind must have preceded language use, based on evidence[clarification needed] of use of the following characteristics as much as 40,000 years ago: intentional communication, repairing failed communication, teaching, intentional persuasion, intentional deception, building shared plans and goals, intentional sharing of focus or topic, and pretending. Moreover, Baron-Cohen argues that many primates show some, but not all, of these abilities.[citation needed] Call and Tomasello's research on chimpanzees supports this, in that individual chimps seem to understand that other chimps have awareness, knowledge, and intention, but do not seem to understand false beliefs. Many primates show some tendencies toward a theory of mind, but not a full one as humans have.[citation needed] Ultimately, there is some consensus within the field that a theory of mind is necessary for language use. Thus, the development of a full theory of mind in humans was a necessary precursor to full language use.[citation needed]

Number representation

In one particular study, rats and pigeons were required to press a button a certain number of times to get food. The animals showed very accurate distinction for numbers less than four, but as the numbers increased, the error rate increased.[109] Matsuzawa (1985) attempted to teach chimpanzees Arabic numerals. The difference between primates and humans in this regard was very large, as it took the chimps thousands of trials to learn 1–9 with each number requiring a similar amount of training time; yet, after learning the meaning of 1, 2 and 3 (and sometimes 4), children easily comprehend the value of greater integers by using a successor function (i.e. 2 is 1 greater than 1, 3 is 1 greater than 2, 4 is 1 greater than 3; once 4 is reached it seems most children have an "a-ha!" moment and understand that the value of any integer n is 1 greater than the previous integer). Put simply, other primates learn the meaning of numbers one by one, similar to their approach to other referential symbols, while children first learn an arbitrary list of symbols (1, 2, 3, 4...) and then later learn their precise meanings.[114] These results can be seen as evidence for the application of the "open-ended generative property" of language in human numeral cognition.[109]
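
The successor-function step can be written out as a tiny recursive definition: once the child grasps that each numeral in the memorized list means 'one more than the previous one', every later numeral's value comes for free. A minimal sketch:

    NUMERALS = ("one", "two", "three", "four", "five", "six")

    def value(numeral):
        """A numeral's value is 1 for the first item, otherwise one
        greater than the value of its predecessor in the list."""
        i = NUMERALS.index(numeral)
        return 1 if i == 0 else value(NUMERALS[i - 1]) + 1

    print(value("five"))  # 5, derived rather than separately memorized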

Linguistic structures

Lexical-phonological principle

Hockett (1966) details a list of features regarded as essential to describing human language.[115] In the domain of the lexical-phonological principle, two features of this list are most important:
  • Productivity: users can create and understand completely novel messages.
    • New messages are freely coined by blending, analogizing from, or transforming old ones.
    • Either new or old elements are freely assigned new semantic loads by circumstances and context. This says that in every language, new idioms constantly come into existence.
  • Duality (of Patterning): a large number of meaningful elements are made up of a conveniently small number of independently meaningless yet message-differentiating elements.
The sound system of a language is composed of a finite set of simple phonological items. Under the specific phonotactic rules of a given language, these items can be recombined and concatenated, giving rise to morphology and the open-ended lexicon. A key feature of language is that a simple, finite set of phonological items gives rise to an infinite lexical system wherein rules determine the form of each item, and meaning is inextricably linked with form. Phonological syntax, then, is a simple combination of pre-existing phonological units. Related to this is another essential feature of human language: lexical syntax, wherein pre-existing units are combined, giving rise to semantically novel or distinct lexical items.[citation needed]
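
Duality of patterning is easy to demonstrate: a handful of meaningless phonological units, combined under a simple phonotactic rule, already yields thousands of distinct potential word forms. The inventory and the consonant-vowel syllable rule below are invented for illustration:

    from itertools import product

    CONSONANTS = "ptkmns"   # six meaningless consonant units
    VOWELS = "aiu"          # three meaningless vowel units

    def word_forms(n_syllables):
        """All word forms under a toy phonotactic rule: words are
        sequences of consonant-vowel (CV) syllables."""
        syllables = ["".join(cv) for cv in product(CONSONANTS, VOWELS)]
        return ["".join(w) for w in product(syllables, repeat=n_syllables)]

    for n in (1, 2, 3):
        print(n, len(word_forms(n)))  # 18, 324, 5832 forms from nine phonemes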

Certain elements of the lexical-phonological principle are known to exist outside of humans. While all (or nearly all) have been documented in some form in the natural world, very few coexist within the same species. Bird-song, singing nonhuman apes, and the songs of whales all display phonological syntax, combining units of sound into larger structures apparently devoid of enhanced or novel meaning. Certain other primate species do have simple phonological systems with units referring to entities in the world. However, in contrast to human systems, the units in these primates' systems normally occur in isolation, betraying a lack of lexical syntax. There is new evidence to suggest that Campbell's monkeys also display lexical syntax, combining two calls (a predator alarm call with a "boom", the combination of which denotes a lessened threat of danger); however, it is still unclear whether this is a lexical or a morphological phenomenon.[citation needed]

Pidgins and creoles

Pidgins are significantly simplified languages with only rudimentary grammar and a restricted vocabulary. In their early stages, pidgins consist mainly of nouns, verbs, and adjectives, with few or no articles, prepositions, conjunctions or auxiliary verbs. Often the grammar has no fixed word order and the words have no inflection.[116]

If contact is maintained between the groups speaking the pidgin for long periods of time, the pidgins may become more complex over many generations. If the children of one generation adopt the pidgin as their native language it develops into a creole language, which becomes fixed and acquires a more complex grammar, with fixed phonology, syntax, morphology, and syntactic embedding. The syntax and morphology of such languages may often have local innovations not obviously derived from any of the parent languages.

Studies of creole languages around the world have suggested that they display remarkable similarities in grammar and are developed uniformly from pidgins in a single generation. These similarities are apparent even when creoles do not share any common language origins. In addition, creoles share similarities despite being developed in isolation from each other. Syntactic similarities include subject–verb–object word order. Even when creoles are derived from languages with a different word order they often develop the SVO word order. Creoles tend to have similar usage patterns for definite and indefinite articles, and similar movement rules for phrase structures even when the parent languages do not.[116]

Evolutionary timeline

Primate communication

Field primatologists can give us useful insights into great ape communication in the wild.[30] An important finding is that nonhuman primates, including the other great apes, produce calls that are graded, as opposed to categorically differentiated, with listeners striving to evaluate subtle gradations in signalers' emotional and bodily states. Nonhuman apes seemingly find it extremely difficult to produce vocalizations in the absence of the corresponding emotional states.[41] In captivity, nonhuman apes have been taught rudimentary forms of sign language or have been persuaded to use lexigrams—symbols that do not graphically resemble the corresponding words—on computer keyboards. Some nonhuman apes, such as Kanzi, have been able to learn and use hundreds of lexigrams.[117][118]

Broca's and Wernicke's areas in the primate brain are responsible for controlling the muscles of the face, tongue, mouth, and larynx, as well as recognizing sounds. Primates are known to make "vocal calls", and these calls are generated by circuits in the brainstem and limbic system.[119]

In the wild, the communication of vervet monkeys has been the most extensively studied.[116] They are known to make up to ten different vocalizations. Many of these are used to warn other members of the group about approaching predators. They include a "leopard call", a "snake call", and an "eagle call".[120] Each call triggers a different defensive strategy in the monkeys who hear the call and scientists were able to elicit predictable responses from the monkeys using loudspeakers and prerecorded sounds. Other vocalizations may be used for identification. If an infant monkey calls, its mother turns toward it, but other vervet mothers turn instead toward that infant's mother to see what she will do.[121][122]
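
Functionally, the vervet system behaves like a fixed lookup table: one call, one stereotyped response, with none of the recombination seen in language. A sketch, with responses paraphrased from field descriptions:

    VERVET_RESPONSES = {
        "leopard call": "run up a tree",
        "snake call": "stand tall and scan the ground",
        "eagle call": "look up and dive into cover",
    }

    def respond(call):
        """Calls map one-to-one onto responses; they are never combined
        or modified to form new messages."""
        return VERVET_RESPONSES.get(call, "no stereotyped response")

    print(respond("eagle call"))  # 'look up and dive into cover'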

Similarly, researchers have demonstrated that chimpanzees (in captivity) use different "words" in reference to different foods. They recorded vocalizations that chimps made in reference, for example, to grapes, and then other chimps pointed at pictures of grapes when they heard the recorded sound.[123][124]

Ardipithecus ramidus

A study published in Homo: Journal of Comparative Human Biology in 2017 claims that A. ramidus, a hominin dated at approximately 4.5 Ma, shows the first evidence of an anatomical shift in the hominin lineage suggestive of increased vocal capability.[125] This study compared the skull of A. ramidus with twenty-nine chimpanzee skulls of different ages and found that in numerous features A. ramidus clustered with the infant and juvenile measures as opposed to the adult measures. Significantly, this affinity with the shape dimensions of infant and juvenile chimpanzee skull architecture, the authors argued, may have resulted in greater vocal capability. This assertion was based on the notion that the chimpanzee vocal tract ratios that prevent speech are a result of growth factors associated with puberty—growth factors absent in A. ramidus ontogeny. A. ramidus was also found to have a degree of cervical lordosis more conducive to vocal modulation when compared with chimpanzees, as well as cranial base architecture suggestive of increased vocal capability.

Significantly, the study observed that the changes in skull architecture that correlate with reduced aggression are the same changes necessary for the evolution of early hominin vocal ability. In integrating data on the anatomical correlates of primate mating and social systems with studies of the skull and vocal tract architecture that facilitate speech production, the authors argue that paleoanthropologists have to date failed to grasp the important relationship between early hominin social evolution and language capacity.

In the paleoanthropological literature, these changes in early hominin skull morphology [reduced facial prognathism and lack of canine armoury] have to date been analysed in terms of a shift in mating and social behaviour, with little consideration given to vocally mediated sociality. Similarly, in the literature on language evolution there is a distinct lacuna regarding links between craniofacial correlates of social and mating systems and vocal ability. These are surprising oversights given that pro-sociality and vocal capability require identical alterations to the common ancestral skull and skeletal configuration. We therefore propose a model which integrates data on whole organism morphogenesis with evidence for a potential early emergence of hominin socio-vocal adaptations. Consequently, we suggest vocal capability may have evolved much earlier than has been traditionally proposed. Instead of emerging in the genus Homo, we suggest the palaeoecological context of late Miocene and early Pliocene forests and woodlands facilitated the evolution of hominin socio-vocal capability. We also propose that paedomorphic morphogenesis of the skull via the process of self-domestication enabled increased levels of pro-social behaviour, as well as increased capacity for socially synchronous vocalisation to evolve at the base of the hominin clade.[125]

While the skull of A. ramidus, according to the authors, lacks the anatomical impediments to speech evident in chimpanzees, it is unclear what the vocal capabilities of this early hominin were. While they suggest A. ramidus—based on similar vocal tract ratios—may have had vocal capabilities equivalent to a modern human infant or very young child, they concede this is obviously a debatable and speculative hypothesis. However, they do claim that changes in skull architecture through processes of social selection were a necessary prerequisite for language evolution. As they write:
We propose that as a result of paedomorphic morphogenesis of the cranial base and craniofacial morphology Ar. ramidus would have not been limited in terms of the mechanical components of speech production as chimpanzees and bonobos are. It is possible that Ar. ramidus had vocal capability approximating that of chimpanzees and bonobos, with its idiosyncratic skull morphology not resulting in any significant advances in speech capability. In this sense the anatomical features analysed in this essay would have been exapted in later more voluble species of hominin. However, given the selective advantages of pro-social vocal synchrony, we suggest the species would have developed significantly more complex vocal abilities than chimpanzees and bonobos.[125]

Early Homo

Regarding articulation, there is considerable speculation about the language capabilities of early Homo (2.5 to 0.8 million years ago). Anatomically, some scholars believe features of bipedalism, which developed in australopithecines around 3.5 million years ago, would have brought changes to the skull, allowing for a more L-shaped vocal tract. The shape of the tract and a larynx positioned relatively low in the neck are necessary prerequisites for many of the sounds humans make, particularly vowels.[citation needed] Other scholars believe that, based on the position of the larynx, not even Neanderthals had the anatomy necessary to produce the full range of sounds modern humans make.[126][127] It was earlier proposed that differences between Homo sapiens and Neanderthal vocal tracts could be seen in fossils, but the finding that the Neanderthal hyoid bone (see below) was identical to that found in Homo sapiens has weakened these theories. Still another view considers the lowering of the larynx as irrelevant to the development of speech.[128]

Archaic Homo sapiens

Steven Mithen proposed the term Hmmmmm for the pre-linguistic system of communication used by archaic Homo, beginning with Homo ergaster and reaching its highest sophistication in the Middle Pleistocene with Homo heidelbergensis and Homo neanderthalensis. Hmmmmm is an acronym for holistic (non-compositional), manipulative (utterances are commands or suggestions, not descriptive statements), multi-modal (acoustic as well as gestural and facial), musical, and mimetic.[129]

Homo heidelbergensis

Homo heidelbergensis was a close relative (most probably a migratory descendant) of Homo ergaster. Some researchers believe this species to have been the first hominin to make controlled vocalizations, possibly mimicking animal vocalizations,[129] and that, as it developed a more sophisticated culture, it proceeded from this point and possibly developed an early form of symbolic language.

Homo neanderthalensis

The discovery in 1989 of the (Neanderthal) Kebara 2 hyoid bone suggests that Neanderthals may have been anatomically capable of producing sounds similar to modern humans.[130][131] The hypoglossal nerve, which passes through the hypoglossal canal, controls the movements of the tongue, which may have enabled voicing for size exaggeration (see size exaggeration hypothesis below) or may reflect speech abilities.[25][132][133][134][135][136]

However, although Neanderthals may have been anatomically able to speak, Richard G. Klein in 2004 doubted that they possessed a fully modern language. He bases his doubts largely on the fossil record of archaic humans and their stone tool kit. For the 2 million years following the emergence of Homo habilis, the stone tool technology of hominins changed very little. Klein, who has worked extensively on ancient stone tools, describes the crude stone tool kit of archaic humans as impossible to break down into categories based on function, and reports that Neanderthals seem to have had little concern for the final aesthetic form of their tools. Klein argues that the Neanderthal brain may not have reached the level of complexity required for modern speech, even if the physical apparatus for speech production was well developed.[137][138] The issue of Neanderthals' level of cultural and technological sophistication remains controversial.

Computer simulations of the evolution of language suggest three stages in the evolution of syntax. Neanderthals are thought to have been at stage 2, meaning they had something more evolved than a proto-language but not yet as complex as the language of modern humans.[139]

Homo sapiens

Anatomically modern humans begin to appear in the fossil record in Ethiopia some 200,000 years ago.[140] Although there is still much debate as to whether behavioural modernity emerged in Africa at around the same time, a growing number of archaeologists nowadays invoke the southern African Middle Stone Age use of red ochre pigments—for example at Blombos Cave—as evidence that modern anatomy and behaviour co-evolved.[141] These archaeologists argue strongly that if modern humans at this early stage were using red ochre pigments for ritual and symbolic purposes, they probably had symbolic language as well.[142]

According to the recent African origins hypothesis, from around 60,000 – 50,000 years ago[143] a group of humans left Africa and began migrating to occupy the rest of the world, carrying language and symbolic culture with them.[144]

The descended larynx

The larynx or voice box is an organ in the neck housing the vocal folds, which are responsible for phonation. In humans, the larynx is descended. Our species is not unique in this respect: goats, dogs, pigs and tamarins lower the larynx temporarily, to emit loud calls.[145] Several deer species have a permanently lowered larynx, which may be lowered still further by males during their roaring displays.[146] Lions, jaguars, cheetahs and domestic cats also do this.[147] However, laryngeal descent in nonhumans (according to Philip Lieberman) is not accompanied by descent of the hyoid; hence the tongue remains horizontal in the oral cavity, preventing it from acting as a pharyngeal articulator.[148]

[Figure: Anatomy of the larynx, anterolateral view]


Despite all this, scholars remain divided as to how "special" the human vocal tract really is. It has been shown that the larynx does descend to some extent during development in chimpanzees, followed by hyoidal descent.[149] As against this, Philip Lieberman points out that only humans have evolved permanent and substantial laryngeal descent in association with hyoidal descent, resulting in a curved tongue and two-tube vocal tract with 1:1 proportions. Uniquely in the human case, simple contact between the epiglottis and velum is no longer possible, disrupting the normal mammalian separation of the respiratory and digestive tracts during swallowing. Since this entails substantial costs—increasing the risk of choking while swallowing food—we are forced to ask what benefits might have outweighed those costs. The obvious benefit—so it is claimed—must have been speech. But this idea has been vigorously contested. One objection is that humans are in fact not seriously at risk of choking on food: medical statistics indicate that accidents of this kind are extremely rare.[150] Another objection is that in the view of most scholars, speech as we know it emerged relatively late in human evolution, roughly contemporaneously with the emergence of Homo sapiens.[32] A development as complex as the reconfiguration of the human vocal tract would have required much more time, implying an early date of origin. This discrepancy in timescales undermines the idea that human vocal flexibility was initially driven by selection pressures for speech, leaving open the possibility that it was initially selected for something else, such as improved singing ability.

The size exaggeration hypothesis

To lower the larynx is to increase the length of the vocal tract, in turn lowering formant frequencies so that the voice sounds "deeper"—giving an impression of greater size. John Ohala argues that the function of the lowered larynx in humans, especially males, is probably to enhance threat displays rather than speech itself.[151] Ohala points out that if the lowered larynx were an adaptation for speech, we would expect adult human males to be better adapted in this respect than adult females, whose larynges descend considerably less. In fact, females invariably outperform males in verbal tests[citation needed], falsifying this whole line of reasoning. W. Tecumseh Fitch likewise argues that size exaggeration was the original selective advantage of laryngeal lowering in our species. Although (according to Fitch) the initial lowering of the larynx in humans had nothing to do with speech, the increased range of possible formant patterns was subsequently co-opted for speech. Size exaggeration remains the sole function of the extreme laryngeal descent observed in male deer. Consistent with the size exaggeration hypothesis, a second descent of the larynx occurs at puberty in humans, although only in males. In response to the objection that the larynx is descended in human females, Fitch suggests that mothers vocalising to protect their infants would also have benefited from this ability.[152]
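The acoustics behind this can be sketched with the standard quarter-wave resonator model, a textbook idealization that treats the vocal tract as a uniform tube closed at the glottis. This is not an analysis from the cited papers, and the tract lengths below are round illustrative numbers:

```python
# Quarter-wave resonator model of the vocal tract (an idealization):
# a uniform tube of length L closed at one end resonates at
#   F_n = (2n - 1) * c / (4 * L),
# so lengthening the tube lowers every formant proportionally,
# which is what makes a voice sound "deeper" and its owner larger.

C_SOUND = 35_000.0  # approximate speed of sound in warm, humid air (cm/s)

def formants(tract_length_cm: float, n: int = 3) -> list[float]:
    """First n formant frequencies (Hz) of an idealized uniform tube."""
    return [(2 * k - 1) * C_SOUND / (4 * tract_length_cm)
            for k in range(1, n + 1)]

print(formants(17.0))  # shorter tract: ~[515, 1544, 2574] Hz
print(formants(20.0))  # lowered larynx, longer tract: ~[438, 1313, 2188] Hz
```

Every formant drops in proportion to the added length, which is exactly the acoustic signature listeners associate with a larger animal.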

Phonemic diversity

In 2011, Quentin Atkinson published a survey of phonemes from 500 different languages as well as language families and compared their phonemic diversity by region, number of speakers and distance from Africa. The survey revealed that African languages had the largest number of phonemes, and Oceania and South America had the smallest number. After allowing for the number of speakers, the phonemic diversity was compared to over 2000 possible origin locations. Atkinson's "best fit" model is that language originated in central and southern Africa between 80,000 and 160,000 years ago. This predates the hypothesized southern coastal peopling of Arabia, India, southeast Asia, and Australia. It would also mean that the origin of language occurred at the same time as the emergence of symbolic culture.[153]
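The logic of such a "best fit" search can be illustrated with a small sketch: for each candidate origin, correlate each language's phonemic diversity with its distance from that origin, then keep the origin yielding the strongest negative correlation, as a serial founder effect would predict. The coordinates and phoneme counts below are toy values, not Atkinson's data, and his actual model also controlled for speaker population size:

```python
import math
import numpy as np

def great_circle_km(a, b):
    """Haversine distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# (lat, lon, phoneme inventory size) -- illustrative toy values only
languages = [(-1.0, 15.0, 60), (30.0, 31.0, 45), (28.0, 77.0, 38),
             (-6.0, 107.0, 30), (-25.0, 135.0, 24), (-10.0, -75.0, 22)]

def diversity_distance_correlation(origin):
    """Correlation of distance-from-origin with phonemic diversity."""
    d = np.array([great_circle_km(origin, (lat, lon)) for lat, lon, _ in languages])
    p = np.array([n for _, _, n in languages])
    return np.corrcoef(d, p)[0, 1]  # a founder effect predicts r < 0

# candidate origins: roughly central Africa, Europe, east Asia
candidates = [(0.0, 20.0), (48.0, 10.0), (35.0, 105.0)]
best = min(candidates, key=diversity_distance_correlation)
print(best)  # the origin with the most strongly negative correlation
```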

History

In religion and mythology

The search for the origin of language has a long history rooted in mythology. Most mythologies do not credit humans with the invention of language but speak of a divine language predating human language. Mystical languages used to communicate with animals or spirits, such as the language of the birds, are also common, and were of particular interest during the Renaissance.
Vāc is the Hindu goddess of speech, or "speech personified". As Brahman's "sacred utterance", she has a cosmological role as the "Mother of the Vedas". An Aztec story maintains that only a man, Coxcox, and a woman, Xochiquetzal, survived a flood, having floated on a piece of bark. They found themselves on land and begat many children who were at first born unable to speak, but subsequently, upon the arrival of a dove, were endowed with language, although each one was given a different speech such that they could not understand one another.[154]

In the Old Testament, the Book of Genesis (11) says that God prevented the Tower of Babel from being completed through a miracle that made its construction workers start speaking different languages. After this, they migrated to other regions, grouped together according to which of the newly created languages they spoke, explaining the origins of languages and nations outside of the Fertile Crescent.

Historical experiments

History contains a number of anecdotes about people who attempted to discover the origin of language by experiment. The first such tale was told by Herodotus (Histories 2.2). He relates that Pharaoh Psammetichus (probably Psammetichus I, 7th century BC) had two children raised by a shepherd, with the instructions that no one should speak to them, but that the shepherd should feed and care for them while listening to determine their first words. When one of the children cried "bekos" with outstretched arms, the shepherd concluded that the word was Phrygian, because that was the sound of the Phrygian word for "bread". From this, Psammetichus concluded that the first language was Phrygian. King James V of Scotland is said to have tried a similar experiment; his children were supposed to have spoken Hebrew.[155]

Both the medieval Holy Roman Emperor Frederick II and the Mughal emperor Akbar are said to have tried similar experiments; the children involved in these experiments did not speak. The situation of deaf people points in the same direction.

History of research

Modern linguistics did not begin until the late 18th century, and the Romantic or animist theses of Johann Gottfried Herder and Johann Christoph Adelung remained influential well into the 19th century. The question of language origin seemed inaccessible to methodical approaches, and in 1866 the Linguistic Society of Paris famously banned all discussion of the origin of language, deeming it to be an unanswerable problem. An increasingly systematic approach to historical linguistics developed in the course of the 19th century, reaching its culmination in the Neogrammarian school of Karl Brugmann and others.[citation needed]

However, scholarly interest in the question of the origin of language has only gradually been rekindled from the 1950s on (and then controversially) with ideas such as universal grammar, mass comparison and glottochronology.[citation needed]

The "origin of language" as a subject in its own right emerged from studies in neurolinguistics, psycholinguistics and human evolution. The Linguistic Bibliography introduced "Origin of language" as a separate heading in 1988, as a sub-topic of psycholinguistics. Dedicated research institutes of evolutionary linguistics are a recent phenomenon, emerging only in the 1990s.[citation needed]

Evolution of morality

From Wikipedia, the free encyclopedia

The evolution of morality refers to the emergence of human moral behavior over the course of human evolution. Morality can be defined as a system of ideas about right and wrong conduct. In everyday life, morality is typically associated with human behavior, and not much thought is given to the social conduct of other creatures. The emerging fields of evolutionary biology and in particular sociobiology have argued that, though human social behaviors are complex, the precursors of human morality can be traced to the behaviors of many other social animals. Sociobiological explanations of human behavior are still controversial. The traditional view of social scientists has been that morality is a construct, and is thus culturally relative, although others argue that there is a science of morality.

Animal sociality

Though animals may not possess what humans perceive as moral behavior, all social animals have had to modify or restrain their behaviors for group living to be worthwhile. Typical examples of behavioral modification can be found in the societies of ants, bees, and termites. Ant colonies may possess millions of individuals. E. O. Wilson argues that the single most important factor behind the success of ant colonies is the existence of a sterile worker caste. This caste of females is subservient to the needs of their mother, the queen, and its members have given up their own reproduction in order to raise brothers and sisters. The existence of sterile castes among these social insects significantly restricts competition for mating and in the process fosters cooperation within a colony. Cooperation among ants is vital, because a solitary ant has little chance of long-term survival and reproduction, whereas a colony can thrive for decades. As a consequence, ants are among the most successful animal families on the planet, accounting for a biomass that rivals that of the human species.[1][2]

The basic reason that social animals live in groups is that opportunities for survival and reproduction are much better in groups than living alone. The social behaviors of mammals are more familiar to humans. Highly social mammals such as primates and elephants have been known to exhibit traits that were once thought to be uniquely human, like empathy and altruism.[3][4]

Primate sociality

Humanity’s closest living relatives are common chimpanzees and bonobos. These primates share with humans a common ancestor that lived four to six million years ago. It is for this reason that chimpanzees and bonobos are viewed as the best available surrogate for this common ancestor. Barbara King argues that while primates may not possess morality in the human sense, they do exhibit some traits that would have been necessary for the evolution of morality. These traits include high intelligence, a capacity for symbolic communication, a sense of social norms, realization of "self", and a concept of continuity.[5][6][7] Frans de Waal and Barbara King both view human morality as having grown out of primate sociality. Many social animals such as primates, dolphins, and whales have been shown to exhibit what Michael Shermer refers to as premoral sentiments. According to Shermer, the following characteristics are shared by humans and other social animals, particularly the great apes:
attachment and bonding, cooperation and mutual aid, sympathy and empathy, direct and indirect reciprocity, altruism and reciprocal altruism, conflict resolution and peacemaking, deception and deception detection, community concern and caring about what others think about you, and awareness of and response to the social rules of the group.[8]
Shermer argues that these premoral sentiments evolved in primate societies as a method of restraining individual selfishness and building more cooperative groups. For any social species, the benefits of being part of an altruistic group should outweigh the benefits of individualism. For example, lack of group cohesion could make individuals more vulnerable to attack from outsiders. Being part of a group may also improve the chances of finding food. This is evident among animals that hunt in packs to take down large or dangerous prey.
Social Evolution of Humans[9]

Period (years ago)   Society type   Number of individuals
6,000,000            Bands          10s
100,000–10,000       Bands          10s–100s
10,000–5,000         Tribes         100s–1,000s
5,000–4,000          Chiefdoms      1,000s–10,000s
4,000–3,000          States         10,000s–100,000s
3,000–present        Empires        100,000–1,000,000s

All social animals have hierarchical societies in which each member knows its own place.[citation needed] Social order is maintained by certain rules of expected behavior and dominant group members enforce order through punishment. However, higher order primates also have a sense of reciprocity. Chimpanzees remember who did them favors and who did them wrong.[citation needed] For example, chimpanzees are more likely to share food with individuals who have previously groomed them.[10] Vampire bats also demonstrate a sense of reciprocity and altruism. They share blood by regurgitation, but do not share randomly. They are most likely to share with other bats who have shared with them in the past or who are in dire need of feeding.[11]

Animals such as capuchin monkeys[12] and dogs[13] also display an understanding of fairness, refusing to cooperate when presented with unequal rewards for the same behaviors.

Chimpanzees live in fission-fusion groups that average 50 individuals. It is likely that early ancestors of humans lived in groups of similar size. Based on the size of extant hunter-gatherer societies, recent Paleolithic hominids lived in bands of a few hundred individuals. As community size increased over the course of human evolution, greater enforcement to achieve group cohesion would have been required. Morality may have evolved in these bands of 100 to 200 people as a means of social control, conflict resolution and group solidarity. This numerical limit is theorized to be hard-coded in our genes, since even modern humans have difficulty maintaining stable social relationships with more than 100–200 people. According to de Waal, human morality has two extra levels of sophistication that are not found in primate societies. Humans enforce their society's moral codes much more rigorously with rewards, punishments and reputation building. People also apply a degree of judgment and reason not seen in the animal kingdom[citation needed].

The punishment problems

While groups may benefit from avoiding certain behaviors, those behaviors are equally harmful whether or not the offending individuals are aware of them.[14] Since individuals can often increase their own reproductive success through many such behaviors, any characteristic that lets them act with impunity is positively selected.[15] Punishing only those individuals who are aware of breaching the rules would select against that very awareness, precluding the coevolution, within one species, of conscious choice and of a sense that such choice is the basis for moral and penal liability.[16]

Human social intelligence

The social brain hypothesis, detailed by R. I. M. Dunbar in the article "The Social Brain Hypothesis and Its Implications for Social Evolution", challenges the view that the brain evolved primarily to process factual information: recognizing patterns, perceiving speech, developing strategies to circumvent ecologically based problems such as foraging for food, and seeing in color. Instead, a large brain reflects the large cognitive demands of complex social systems. In humans and other primates, the neocortex is held responsible for reasoning and consciousness; in social animals it therefore came under intense selection to increase in size and so improve social cognitive abilities. Social animals such as humans are capable of two important behaviors: coalition formation, or group living, and tactical deception, the presentation of false information to others. The fundamental importance of animal social skills lies in the ability to manage relationships and, in turn, not just to commit information to memory but to manipulate it as well.[17] An adaptive response to the challenges of social interaction and living is theory of mind, defined by M. Brune as the ability to infer another individual's mental states or emotions.[18] A strong theory of mind is tied closely to possessing advanced social intelligence. Group living requires cooperation but also generates conflict, and it puts strong evolutionary selection pressure on acquiring social intelligence because living in groups has advantages: protection from predators, and the fact that groups generally outperform the sum of their individual members' performances. But group living also has disadvantages, such as competition within the group for resources and mates. This sets the stage for something of an evolutionary arms race within the species.

Within populations of social animals, altruism, or acts of behavior that are disadvantageous to one individual while benefiting other group members, has evolved. This notion seems contradictory to evolutionary thought, since an organism's fitness and success are defined by its ability to pass genes on to the next generation. According to E. Fehr, in the article "The Nature of Human Altruism", the evolution of altruism can be accounted for when kin selection and inclusive fitness are taken into account: reproductive success depends not just on the number of offspring an individual produces, but also on the number of offspring that related individuals produce.[19] Outside of familial relationships altruism is also seen, but in a different manner typically analyzed through the prisoner's dilemma, a game developed at RAND by Merrill Flood and Melvin Dresher and formalized by Albert W. Tucker, who supplied the famous framing in which the incentives are years in jail. The prisoner's dilemma serves to define cooperation and defection with and against individuals driven by incentives. In evolutionary terms, one of the most successful strategies for the iterated prisoner's dilemma is tit-for-tat: an individual cooperates as long as others are cooperating, and does not defect until another individual defects against them (see the sketch below). At their core, complex social interactions are driven by the need to distinguish sincere cooperation from defection.
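Tit-for-tat is simple enough to state in a few lines of code. The sketch below, using the conventional payoff values, is purely illustrative; it shows how tit-for-tat sustains cooperation with a fellow cooperator while limiting its losses against a habitual defector:

```python
# Iterated prisoner's dilemma with the conventional payoff matrix:
# mutual cooperation pays 3 each, mutual defection 1 each, and a
# lone defector gets 5 while the exploited cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_moves, their_moves):
    # Cooperate first; afterwards copy the opponent's last move.
    return "C" if not their_moves else their_moves[-1]

def always_defect(my_moves, their_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```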

Brune details that theory of mind has been traced back to primates, but it is not observed to the extent that it is in the modern human. The emergence of this unique trait is perhaps where the divergence of the modern human begins, along with our acquisition of language. Humans use metaphors and imply much of what we say. Phrases such as "You know what I mean?" are not uncommon and are direct results of the sophistication of the human theory of mind. Failure to understand another's intentions and emotions can yield inappropriate social responses and is often associated with human mental conditions such as autism, schizophrenia, bipolar disorder, some forms of dementia, and psychopathy. This is especially true for autism spectrum disorders, where social disconnect is evident but non-social intelligence can be preserved or, in some cases, even augmented, as with savants.[18] The need for social intelligence surrounding theory of mind is a possible answer to the question of why morality has evolved as a part of human behavior.

Evolution of religion

Psychologist Matt J. Rossano muses that religion emerged after morality and built upon morality by expanding the social scrutiny of individual behavior to include supernatural agents. By including ever watchful ancestors, spirits and gods in the social realm, humans discovered an effective strategy for restraining selfishness and building more cooperative groups.[20] The adaptive value of religion would have enhanced group survival.[21][22]

The Wason selection task

In the Wason selection task, an experiment in which subjects must apply a conditional rule, researchers have found that humans, like some other animals, have a strong innate ability to reason about social exchanges: subjects solve the task far more reliably when the rule is framed as a social contract (detecting cheaters) than when it is framed abstractly. This ability is believed to be intuitive, since the logical rules do not seem to be accessible to the individuals for use in situations without moral overtones.[23]
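The underlying logic of the task is small enough to spell out. For a rule of the form "if P then Q", only the P card and the not-Q card can falsify it; the sketch below uses the classic abstract letters-and-numbers version as an illustration:

```python
# Rule: "if a card has a vowel on one side, it has an even number on
# the other." Each card shows a letter on one side and a number on
# the other; only a visible vowel (which might hide an odd number)
# or a visible odd number (which might hide a vowel) can falsify it.

faces = ["A", "K", "4", "7"]  # the visible face of each card

def is_vowel(face):
    return face.isalpha() and face in "AEIOU"

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

must_turn = [f for f in faces if is_vowel(f) or is_odd_number(f)]
print(must_turn)  # ['A', '7'] -- the P card and the not-Q card
```

Most subjects fail this abstract version, typically turning "A" and "4", yet reliably solve logically identical versions framed as social contracts, such as "if someone is drinking beer, they must be over the legal age".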

Emotion

Disgust, one of the basic emotions, may have an important role in certain forms of morality. Disgust is argued to be a specific response to certain things or behaviors that are dangerous or undesirable from an evolutionary perspective. One example is things that increase the risk of infectious disease, such as spoiled foods, dead bodies, other forms of microbiological decomposition, a physical appearance suggesting sickness or poor hygiene, and various body fluids such as feces, vomit, phlegm, and blood. Another example is disgust at evolutionarily disadvantageous mating, such as incest (the incest taboo) or unwanted sexual advances.[4] Still another example is behaviors that may threaten group cohesion or cooperation, such as cheating, lying, and stealing. MRI studies have found that such situations activate areas in the brain associated with disgust.[24]

New Atheism

From Wikipedia, the free encyclopedia

New Atheism is a term coined in 2006 by the agnostic journalist Gary Wolf to describe the positions promoted by some atheists of the twenty-first century.[1][2] This modern-day atheism is advanced by a group of thinkers and writers who advocate the view that superstition, religion and irrationalism should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever their influence arises in government, education, and politics.[3][4] In his book Why I Am Not a Christian, published in 1927, the philosopher Bertrand Russell put forward positions similar to those later espoused by the New Atheists; according to Richard Ostling, this suggests that there are no substantive differences between traditional atheism and New Atheism.[5]

New Atheism lends itself to and often overlaps with secular humanism and antitheism, particularly in its criticism of what many New Atheists regard as the indoctrination of children and the perpetuation of ideologies founded on belief in the supernatural. Some critics of the movement characterise it pejoratively as "militant atheism" or "fundamentalist atheism".[a][6][7][8][9]

History

Early history

The Harvard botanist Asa Gray, a believing Christian and one of the first supporters of Charles Darwin's theory of evolution, commented in 1868 that the more worldly Darwinists in England had "the English-materialistic-positivistic line of thought".[10] Darwin's supporter Thomas Huxley was openly skeptical, as the biographer Janet Browne describes:
Huxley was rampaging on miracles and the existence of the soul. A few months later, he was to coin the word "agnostic" to describe his own position as neither a believer nor a disbeliever, but one who considered himself free to inquire rationally into the basis of knowledge, a philosopher of pure reason [...] The term fitted him well [...] and it caught the attention of the other free thinking, rational doubters in Huxley's ambit, and came to signify a particularly active form of scientific rationalism during the final decades of the 19th century. [...] In his hands, agnosticism became as doctrinaire as anything else--a religion of skepticism. Huxley used it as a creed that would place him on a higher moral plane than even bishops and archbishops. All the evidence would nevertheless suggest that Huxley was sincere in his rejection of the charge of outright atheism against himself. He refused to be "a liar". To inquire rigorously into the spiritual domain, he asserted, was a more elevated undertaking than slavishly to believe or disbelieve. "A deep sense of religion is compatible with the entire absence of theology," he had told [Anglican clergyman] Charles Kingsley back in 1860. "Pope Huxley", the [magazine] Spectator dubbed him. The label stuck. —Janet Browne[11]

Recent history

The 2004 publication of The End of Faith: Religion, Terror, and the Future of Reason by Sam Harris, a bestseller in the United States, was joined over the next couple of years by a series of popular best-sellers by atheist authors.[12] Harris was motivated by the events of September 11, 2001, which he laid directly at the feet of Islam, while also directly criticizing Christianity and Judaism.[13] Two years later Harris followed up with Letter to a Christian Nation, which was also a severe criticism of Christianity.[14] Also in 2006, following his television documentary The Root of All Evil?, Richard Dawkins published The God Delusion, which was on the New York Times best-seller list for 51 weeks.[15]

In a 2010 column entitled "Why I Don't Believe in the New Atheism", Tom Flynn contends that what has been called "New Atheism" is neither a movement nor new, and that what was new was the publication of atheist material by big-name publishers, read by millions, and appearing on bestseller lists.[16]

Prominent figures

The "Four Horsemen"

The 'Four Horsemen of Atheism': clockwise from top left: Richard Dawkins, Christopher Hitchens, Daniel Dennett, and Sam Harris.
Dawkins has said, "We are all atheists about most of the gods that societies have ever believed in. Some of us just go one god further."[17]

On September 30, 2007 four prominent atheists (Richard Dawkins, Sam Harris, Christopher Hitchens and Daniel Dennett) met at Hitchens' residence in Washington, D.C., for a private two-hour unmoderated discussion. The event was videotaped and titled "The Four Horsemen".[18] During "The God Debate" in 2010 featuring Christopher Hitchens vs Dinesh D'Souza, the men were collectively referred to as the "Four Horsemen of the Non-Apocalypse",[19] an allusion to the biblical Four Horsemen of the Apocalypse from the Book of Revelation.[20] The four have been described disparagingly as "evangelical atheists".[21]

Sam Harris is the author of the bestselling non-fiction books The End of Faith, Letter to a Christian Nation, The Moral Landscape, and Waking Up: A Guide to Spirituality Without Religion, as well as two shorter works, initially published as e-books, Free Will[22] and Lying.[23] Harris is a co-founder of the Reason Project.

Richard Dawkins is the author of The God Delusion,[24] which was preceded by a Channel 4 television documentary titled The Root of all Evil?. He is the founder of the Richard Dawkins Foundation for Reason and Science. He wrote: "I don't object to the horseman label, by the way. I'm less keen on 'new atheist': it isn't clear to me how we differ from old atheists."[25]

Christopher Hitchens was the author of God Is Not Great[26] and was named among the "Top 100 Public Intellectuals" by Foreign Policy and Prospect magazine. In addition, Hitchens served on the advisory board of the Secular Coalition for America. In 2010 Hitchens published his memoir Hitch-22 (a nickname provided by close personal friend Salman Rushdie, whom Hitchens always supported during and following The Satanic Verses controversy).[27] Shortly after its publication, Hitchens was diagnosed with esophageal cancer, which led to his death in December 2011.[28] Before his death, Hitchens published a collection of essays and articles in his book Arguably;[29] a short edition, Mortality,[30] was published posthumously in 2012. These publications and numerous public appearances provided Hitchens with a platform to remain an outspoken atheist during his illness, speaking specifically on the culture of deathbed conversions and condemning as "bad taste" attempts to convert the terminally ill.[31][32]

Daniel Dennett, author of Darwin's Dangerous Idea,[33] Breaking the Spell[34] and many others, has also been a vocal supporter of The Clergy Project,[35] an organization that provides support for clergy in the US who no longer believe in God and cannot fully participate in their communities any longer.[36]

The "Four Horsemen" video, convened by Dawkins' Foundation, can be viewed free online at his web site: Part 1, Part 2.

"Plus one horse-woman"

After the death of Hitchens, Ayaan Hirsi Ali (who attended the 2012 Global Atheist Convention, which Hitchens had been scheduled to attend) was referred to as the "plus one horse-woman", since she had originally been invited to the 2007 meeting of the "Horsemen" atheists but had to cancel at the last minute.[37] Hirsi Ali was born in Mogadishu, Somalia, and fled to the Netherlands in 1992 to escape an arranged marriage.[38] She became involved in Dutch politics, rejected faith, and became vocal in opposing Islamic ideology, especially concerning women, as exemplified by her books Infidel and The Caged Virgin.[39] Hirsi Ali was later involved in the production of the film Submission, for which her friend Theo Van Gogh was murdered with a death threat to Hirsi Ali pinned to his chest.[40] This event drove Hirsi Ali into hiding and later to immigrate to the United States, where she now resides. She remains a prolific critic of Islam[41] and of the treatment of women in Islamic doctrine and society,[42] and a proponent of free speech and the freedom to offend.[43][44]


Perspective

Many contemporary atheists write from a scientific perspective. Unlike previous writers, many of whom thought that science was indifferent to, or even incapable of dealing with, the "God" concept, Dawkins argues to the contrary, claiming the "God Hypothesis" is a valid scientific hypothesis,[56] having effects in the physical universe, and like any other hypothesis can be tested and falsified. Other contemporary atheists, such as Victor Stenger, propose that the personal Abrahamic God is a scientific hypothesis that can be tested by standard methods of science. Both Dawkins and Stenger conclude that the hypothesis fails any such tests,[57] and argue that naturalism is sufficient to explain everything we observe in the universe, from the most distant galaxies to the origin of life, species, and the inner workings of the brain and consciousness. Nowhere, they argue, is it necessary to introduce God or the supernatural to understand reality. New Atheists also reject Jesus' divinity.[58]

Scientific testing of religion

Non-believers assert that many religious or supernatural claims (such as the virgin birth of Jesus and the afterlife) are scientific claims in nature. They argue, as do deists and Progressive Christians, that the issue of Jesus' supposed parentage is not a question of "values" or "morals" but a question of scientific inquiry.[59] Rational thinkers believe science is capable of investigating at least some, if not all, supernatural claims.[60] Institutions such as the Mayo Clinic and Duke University have attempted to find empirical support for the healing power of intercessory prayer.[61] According to Stenger, these experiments have found no evidence that intercessory prayer works.[62]

Logical arguments

Stenger also argues in his book, God: The Failed Hypothesis, that a God having omniscient, omnibenevolent and omnipotent attributes, which he termed a 3O God, cannot logically exist.[63] A similar series of logical disproofs of the existence of a God with various attributes can be found in Michael Martin and Ricki Monnier's The Impossibility of God,[64] or Theodore M. Drange's article, "Incompatible-Properties Arguments".[65]
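One generic shape such an incompatible-properties argument can take (a schematic illustration, not necessarily Stenger's or Drange's exact formulation) is the deductive argument from evil:

\[
(O_{\text{pot}} \land O_{\text{ben}}) \rightarrow \neg E, \qquad E \;\;\therefore\;\; \neg (O_{\text{pot}} \land O_{\text{ben}})
\]

where \(O_{\text{pot}}\) stands for "an omnipotent God exists", \(O_{\text{ben}}\) for "an omnibenevolent God exists", and \(E\) for "evil exists": if a being that both could and would eliminate all evil existed, evil would not exist; since evil is observed, modus tollens denies the conjunction of attributes. Debate then centers on whether the first premise is true.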

Views on non-overlapping magisteria

Richard Dawkins has been particularly critical of the conciliatory view that science and religion are not in conflict, noting, for example, that the Abrahamic religions constantly deal in scientific matters. In a 1998 article published in Free Inquiry magazine,[59] and later in his 2006 book The God Delusion, Dawkins expresses disagreement with the view advocated by Stephen Jay Gould that science and religion are two non-overlapping magisteria (NOMA) each existing in a "domain where one form of teaching holds the appropriate tools for meaningful discourse and resolution". In Gould's proposal, science and religion should be confined to distinct non-overlapping domains: science would be limited to the empirical realm, including theories developed to describe observations, while religion would deal with questions of ultimate meaning and moral value. Dawkins contends that NOMA does not describe empirical facts about the intersection of science and religion, "it is completely unrealistic to claim, as Gould and many others do, that religion keeps itself away from science's turf, restricting itself to morals and values. A universe with a supernatural presence would be a fundamentally and qualitatively different kind of universe from one without. The difference is, inescapably, a scientific difference. Religions make existence claims, and this means scientific claims."

Science and morality

Sam Harris has popularized the view that science, and the currently unknown objective facts it may uncover, can instruct human morality in a globally comparable way. Harris' book The Moral Landscape[66] and the accompanying TED Talk "How Science Can Determine Moral Values"[67] propose that human well-being, and conversely suffering, may be thought of as a landscape with peaks and valleys representing numerous ways to achieve extremes in human experience, and that there are objective states of well-being.

The politics of new atheism

New atheism is politically engaged in a variety of ways. These include campaigns to draw attention to the privileged position religion holds and to reduce its influence in the public sphere, attempts to promote cultural change (centering, in the United States, on the mainstream acceptance of atheism), and efforts to promote the idea of an "atheist identity". Internal strategic divisions over these issues have also been notable, as are questions about the diversity of the movement in terms of its gender and racial balance.[68]

Criticisms

Accusations of Evangelicalism within New Atheism

The theologians Jeffrey Robbins and Christopher Rodkey take issue with what they regard as "the evangelical nature of the new atheism, which assumes that it has a Good News to share, at all cost, for the ultimate future of humanity by the conversion of as many people as possible." They believe they have found similarities between new atheism and evangelical Christianity and conclude that the all-consuming nature of both "encourages endless conflict without progress" between both extremities.[69]

Accusations of Atheist Fundamentalism

Sociologist William Stahl said, "What is striking about the current debate is the frequency with which the New Atheists are portrayed as mirror images of religious fundamentalists."[70]

The atheist philosopher of science Michael Ruse has claimed that Richard Dawkins would fail "introductory" courses on the study of "philosophy or religion" (such as courses on the philosophy of religion), courses offered at colleges and universities around the world.[71][72] Ruse also claims that the movement of New Atheism, which he perceives as a "bloody disaster", makes him ashamed, as a professional philosopher of science, to be among those holding an atheist position, particularly as New Atheism does science a "grave disservice" and does a "disservice to scholarship" at a more general level.[71][72]

Paul Kurtz, editor-in-chief of Free Inquiry and founder of Prometheus Books, was critical of many of the new atheists.[8] He said, "I consider them atheist fundamentalists... They're anti-religious, and they're mean-spirited, unfortunately. Now, they're very good atheists and very dedicated people who do not believe in God. But you have this aggressive and militant phase of atheism, and that does more damage than good".[9]

Jonathan Sacks, author of The Great Partnership: Science, Religion, and the Search for Meaning, feels the new atheists miss the target by believing the "cure for bad religion is no religion, as opposed to good religion". He wrote:
Atheism deserves better than the new atheists whose methodology consists of criticizing religion without understanding it, quoting texts without contexts, taking exceptions as the rule, confusing folk belief with reflective theology, abusing, mocking, ridiculing, caricaturing, and demonizing religious faith and holding it responsible for the great crimes against humanity. Religion has done harm; I acknowledge that. But the cure for bad religion is good religion, not no religion, just as the cure for bad science is good science, not the abandonment of science.[73]

Political biases

The philosopher Massimo Pigliucci feels that the new atheist movement overlaps with scientism, which he feels is philosophically unsound. He writes: "What I do object to is the tendency, found among many New Atheists, to expand the definition of science to pretty much encompassing anything that deals with “facts,” loosely conceived..., it seems clear to me that most of the New Atheists (except for the professional philosophers among them) pontificate about philosophy very likely without having read a single professional paper in that field.... I would actually go so far as to charge many of the leaders of the New Atheism movement (and, by implication, a good number of their followers) with anti-intellectualism, one mark of which is a lack of respect for the proper significance, value, and methods of another field of intellectual endeavor."[74]

Atheist professor Jacques Berlinerblau has criticised the New Atheists' mocking of religion as being inimical to their goals and claims that they haven't achieved anything politically.[75]


Faith and rationality

From Wikipedia, the free encyclopedia

Faith and rationality are two ideologies that exist in varying degrees of conflict or compatibility. Rationality is based on reason or facts. Faith is belief in inspiration, revelation, or authority. The word faith sometimes refers to a belief that is held with lack of reason or evidence, a belief that is held in spite of or against reason or evidence, or it can refer to belief based upon a degree of evidential warrant.

Although the words faith and belief are sometimes erroneously conflated[citation needed] and used as synonyms, faith properly refers to a particular type (or subset) of belief, as defined above.

Broadly speaking, there are two categories of views regarding the relationship between faith and rationality:
  1. Rationalism holds that truth should be determined by reason and factual analysis, rather than faith, dogma, tradition or religious teaching.
  2. Fideism holds that faith is necessary, and that beliefs may be held without any evidence or reason and even in conflict with evidence and reason.
The Catholic Church has also taught that true faith and correct reason can and must work together and, viewed properly, can never be in conflict with one another, as both have their origin in God, as stated in the papal encyclical letter Fides et Ratio ("[On] Faith and Reason") issued by Pope John Paul II.

Relationship between faith and reason

From at least the days of the Greek philosophers, the relationship between faith and reason has been hotly debated. Plato argued that knowledge is simply memory of the eternal. Aristotle set down rules by which knowledge could be discovered by reason.

Rationalists point out that many people hold irrational beliefs, for many reasons. There may be evolutionary causes for irrational beliefs — irrational beliefs may increase our ability to survive and reproduce. Or, according to Pascal's Wager, it may be to our advantage to have faith, because faith may promise infinite rewards, while the rewards of reason are seen by many as finite. One more reason for irrational beliefs can perhaps be explained by operant conditioning. For example, in one study by B. F. Skinner in 1948, pigeons were given grain at regular time intervals regardless of their behaviour. The result was that each pigeon developed its own idiosyncratic response, which had become associated with the consequence of receiving grain.[1]
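Pascal's Wager, as invoked here, is at bottom an expected-utility calculation. The toy sketch below uses placeholder numbers; the argument turns on the infinite reward swamping any finite cost, not on the particular prior chosen:

```python
# Expected utility of belief vs. disbelief under Pascal's Wager.
# Any nonzero prior for God's existence makes the infinite payoff
# dominate every finite cost or reward.

P_GOD = 0.001          # placeholder prior; any value > 0 works
INF = float("inf")

def expected_utility(payoff_if_god_exists, payoff_if_not):
    return P_GOD * payoff_if_god_exists + (1 - P_GOD) * payoff_if_not

print(expected_utility(INF, -10))   # believe:    infinite expected reward
print(expected_utility(-INF, 10))   # disbelieve: infinitely bad in expectation
```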

Believers in faith — for example those who believe salvation is possible through faith alone — frequently suggest that everyone holds beliefs arrived at by faith, not reason.[citation needed] The belief that the universe is a sensible place and that our minds allow us to arrive at correct conclusions about it, they argue, is itself held through faith. Rationalists contend that this belief is arrived at because they have observed the world being consistent and sensible, not because they have faith that it is.

Beliefs held "by faith" may be seen existing in a number of relationships to rationality:
  • Faith as underlying rationality: In this view, all human knowledge and reason is seen as dependent on faith: faith in our senses, faith in our reason, faith in our memories, and faith in the accounts of events we receive from others. Accordingly, faith is seen as essential to and inseparable from rationality. According to René Descartes, rationality is built first upon the realization of the absolute truth "I think therefore I am", which requires no faith. All other rationalizations are built outward from this realization, and are subject to falsification at any time with the arrival of new evidence.
  • Faith as addressing issues beyond the scope of rationality: In this view, faith is seen as covering issues that science and rationality are inherently incapable of addressing, but that are nevertheless entirely real. Accordingly, faith is seen as complementing rationality, by providing answers to questions that would otherwise be unanswerable.
  • Faith as contradicting rationality: In this view, faith is seen as those views that one holds despite evidence and reason to the contrary. Accordingly, faith is seen as pernicious with respect to rationality, as it interferes with our ability to think, and inversely rationality is seen as the enemy of faith by interfering with our beliefs.
  • Faith and reason as essential together: This is the Catholic view that faith without reason leads to superstition, while reason without faith leads to nihilism and relativism.
  • Faith as based on warrant: In this view some degree of evidence provides warrant for faith. "To explain great things by small."[2]

Views of the Roman Catholic Church

St. Thomas Aquinas, the most important doctor of the Catholic Church, was the first to write a full treatment of the relationship, differences, and similarities between faith—an intellectual assent[3]—and reason,[4] predominantly in his Summa Theologica, De Veritate, and Summa contra Gentiles.[5]

The Council of Trent's catechism—the Roman Catechism—was written during the Catholic Church's Counter-Reformation to combat Protestantism and Martin Luther's antimetaphysical tendencies.[6][7]

Dei Filius was a dogmatic constitution of the First Vatican Council on the Roman Catholic faith. It was adopted unanimously on 24 April 1870 and was influenced by the philosophical conceptions of Johann Baptist Franzelin, who had written a great deal on the topic of faith and rationality.[8]

Because the Roman Catholic Church does not disparage reason, but rather affirms its veracity and utility, there have been many Catholic scientists over the ages.

Twentieth-century Thomist philosopher Étienne Gilson wrote about faith and reason[9] in his 1922 book Le Thomisme.[10] His contemporary Jacques Maritain wrote about it in his The Degrees of Knowledge.[11]

Fides et Ratio is an encyclical promulgated by Pope John Paul II on 14 September 1998. It deals with the relationship between faith and reason.

Pope Benedict XVI's 12 September 2006 Regensburg Lecture was about faith and reason.

Lutheran epistemology

Some have asserted that Martin Luther taught that faith and reason were antithetical, in the sense that questions of faith could not be illuminated by reason. Contemporary Lutheran scholarship, however, has found a different reality in Luther. Luther rather seeks to separate faith and reason in order to honor the separate spheres of knowledge that each understands. Bernhard Lohse, for example, has demonstrated in his classic work Fides und Ratio that Luther ultimately sought to put the two together. More recently, Hans-Peter Großhans has demonstrated that Luther's work on Biblical criticism stresses the need for external coherence in right exegetical method: for Luther it is more important that the Bible be reasonable according to the reality outside the scriptures than that it be internally coherent. The right tool for understanding the world outside the Bible is, for Luther, none other than reason, which for him denoted science, philosophy, history, and empirical observation. This presents a different picture of Luther, one who deeply valued both faith and reason and held them in dialectical partnership. His concern in separating them was thus to honor their different epistemological spheres.

Reformed epistemology

Faith as underlying rationality

The view that faith underlies all rationality holds that rationality is dependent on faith for its coherence. Under this view, there is no way to comprehensively prove that we are actually seeing what we appear to be seeing, that what we remember actually happened, or that the laws of logic and mathematics are actually real. Instead, all beliefs depend for their coherence on faith in our senses, memory, and reason, because the foundations of rationalism cannot be proven by evidence or reason. Rationally, one cannot prove that anything one sees is real, but one can prove that one is oneself real, and the rationalist position is that the world may be believed consistent until something demonstrates inconsistency. This differs from faith-based belief, where one holds one's world view to be consistent no matter what inconsistencies the world presents.

Rationalist point of view

In this view, there are many beliefs that are held by faith alone, which rational thought would force the mind to reject. As an example, many people believe the Biblical story of Noah's flood: that the entire Earth was covered by water for forty days. But it is objected that most plants cannot survive being covered by water for that length of time, that a boat of that magnitude could not have been built of wood, and that there would be no way for two of every animal (penguins, for example) to survive on the ship and migrate back to their places of origin. Although Christian apologists offer answers to these and similar issues,[12][13][14] under the premise that such responses are insufficient one must choose between accepting the story on faith and rejecting reason, or rejecting the story by reason and thus rejecting faith.

Within the rationalist point of view, there remains the possibility of multiple rational explanations. For example, considering the biblical story of Noah's flood, one making rational determinations about the probability of the events does so via interpretation of modern evidence. Two observers of the story may provide different plausible explanations for the life of plants, construction of the boat, species living at the time, and migration following the flood. Some see this as meaning that a person is not strictly bound to choose between faith and reason.

Evangelical views

American biblical scholar Archibald Thomas Robertson stated that the Greek word pistis used for faith in the New Testament (over two hundred forty times), and rendered "assurance" in Acts 17:31 (KJV), is "an old verb to furnish, used regularly by Demosthenes for bringing forward evidence."[15] Likewise Tom Price (Oxford Centre for Christian Apologetics) affirms that when the New Testament talks about faith positively it only uses words derived from the Greek root [pistis] which means "to be persuaded."[16]

In contrast to faith meaning blind trust, in the absence of evidence, even in the teeth of evidence, Alister McGrath quotes Oxford Anglican theologian W. H. Griffith-Thomas, (1861-1924), who states faith is "not blind, but intelligent" and "commences with the conviction of the mind based on adequate evidence...", which McGrath sees as "a good and reliable definition, synthesizing the core elements of the characteristic Christian understanding of faith."[17]

Alvin Plantinga holds that faith may be the result of evidence testifying to the reliability of the source of truth claims, but that, although it may involve this, faith is the result of hearing the truth of the gospel together with the internal persuasion of the Holy Spirit moving and enabling the hearer to believe. "Christian belief is produced in the believer by the internal instigation of the Holy Spirit, endorsing the teachings of Scripture, which is itself divinely inspired by the Holy Spirit. The result of the work of the Holy Spirit is faith."[18]

Jewish philosophy

The 14th-century Jewish philosopher Levi ben Gerson tried to reconcile faith and reason. He wrote, "The Torah cannot prevent us from considering to be true that which our reason urges us to believe."[19] His contemporary Hasdai ben Abraham Crescas argued the contrary view: that reason is weak and faith strong, and that only through faith can we discover the fundamental truth that God is love; through faith alone can we endure the suffering that is the common lot of God's chosen people.
