
Saturday, July 31, 2021

Neuroethology

From Wikipedia, the free encyclopedia
 
Echolocation in bats is one model system in neuroethology.

Neuroethology is the evolutionary and comparative approach to the study of animal behavior and its underlying mechanistic control by the nervous system. It is an interdisciplinary science that combines both neuroscience (study of the nervous system) and ethology (study of animal behavior in natural conditions). A central theme of neuroethology, which differentiates it from other branches of neuroscience, is its focus on behaviors that have been favored by natural selection (e.g., finding mates, navigation, locomotion, and predator avoidance) rather than on behaviors that are specific to a particular disease state or laboratory experiment.

Neuroethologists hope to uncover general principles of the nervous system from the study of animals with exaggerated or specialized behaviors. They endeavor to understand how the nervous system translates biologically relevant stimuli into natural behavior. For example, many bats are capable of echolocation, which is used for prey capture and navigation. The auditory system of bats is often cited as an example of how acoustic properties of sounds can be converted into a sensory map of behaviorally relevant features of sounds.

Philosophy

Neuroethology is an integrative approach to the study of animal behavior that draws upon several disciplines. Its approach stems from the theory that animals' nervous systems have evolved to address problems of sensing and acting in certain environmental niches, and that nervous systems are best understood in the context of the problems they have evolved to solve. In accordance with Krogh's principle, neuroethologists often study animals that are "specialists" in the behavior the researcher wishes to study, e.g. honeybee social behavior, bat echolocation, or owl sound localization.

The scope of neuroethological inquiry was summarized by Jörg-Peter Ewert, a pioneer of neuroethology, in the questions he considered central to the field in his 1980 introductory text:

  1. How are stimuli detected by an organism?
  2. How are environmental stimuli in the external world represented in the nervous system?
  3. How is information about a stimulus acquired, stored and recalled by the nervous system?
  4. How is a behavioral pattern encoded by neural networks?
  5. How is behavior coordinated and controlled by the nervous system?
  6. How can the ontogenetic development of behavior be related to neural mechanisms?

Often central to addressing questions in neuroethology are comparative methodologies, drawing upon knowledge about related organisms' nervous systems, anatomies, life histories, behaviors and environmental niches. While it is not unusual for many types of neurobiology experiments to give rise to behavioral questions, neuroethologists often begin their research programs by observing a species' behavior in its natural environment. Other approaches to understanding nervous systems include the systems identification approach, popular in engineering. The idea is to stimulate the system using a non-natural stimulus with certain properties; the system's response to the stimulus may then be used to analyze its operation. Such an approach works well for linear systems, but the nervous system is notoriously nonlinear, and neuroethologists argue that the approach is therefore limited. This argument is supported by experiments in the auditory system, which show that neural responses to complex sounds, like social calls, cannot be predicted from the knowledge gained by studying responses to pure tones (one of the non-natural stimuli favored by auditory neurophysiologists). This is because of the nonlinearity of the system.
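
As a minimal illustration of why this fails, the following Python sketch (not part of the original article) passes two pure tones through a toy nonlinearity, a half-wave rectifier standing in for any nonlinear stage, and shows that the response to the combined tones differs from the sum of the responses to each tone alone. The frequencies and the particular nonlinearity are illustrative assumptions.

    import numpy as np

    # Toy "system": a static half-wave rectifier, a stand-in for any nonlinearity.
    def system(x):
        return np.maximum(x, 0.0)

    t = np.linspace(0.0, 0.1, 4410)          # 100 ms sampled at 44.1 kHz
    tone1 = np.sin(2 * np.pi * 300 * t)      # 300 Hz pure tone
    tone2 = np.sin(2 * np.pi * 470 * t)      # 470 Hz pure tone

    # Linear-systems prediction: response to the sum equals the sum of responses.
    predicted = system(tone1) + system(tone2)
    actual = system(tone1 + tone2)

    # For a nonlinear system the prediction fails badly.
    error = np.mean(np.abs(actual - predicted)) / np.mean(np.abs(actual))
    print(f"relative prediction error: {error:.2f}")   # substantially greater than zero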

Modern neuroethology is largely shaped by the research techniques used. Neural approaches are necessarily very diverse, as is evident in the variety of questions asked, measuring techniques used, relationships explored, and model systems employed. Techniques utilized since 1984 include the use of intracellular dyes, which make maps of identified neurons possible, and the use of brain slices, which make vertebrate brains more accessible to intracellular electrodes (Hoyle 1984). Currently, other fields toward which neuroethology may be headed include computational neuroscience, molecular genetics, neuroendocrinology and epigenetics. The existing field of neural modeling may also expand into neuroethological terrain, due to its practical uses in robotics. In all this, neuroethologists must choose the right level of simplicity to effectively guide research toward accomplishing the goals of neuroethology.

Critics of neuroethology might consider it a branch of neuroscience concerned with 'animal trivia'. Though neuroethological subjects tend not to be traditional neurobiological model systems (i.e. Drosophila, C. elegans, or Danio rerio), neuroethological approaches emphasizing comparative methods have uncovered many concepts central to neuroscience as a whole, such as lateral inhibition, coincidence detection, and sensory maps. The discipline of neuroethology has also discovered and explained the only vertebrate behavior for which the entire neural circuit has been described: the electric fish jamming avoidance response. Beyond its conceptual contributions, neuroethology makes indirect contributions to advancing human health. By understanding simpler nervous systems, many clinicians have used concepts uncovered by neuroethology and other branches of neuroscience to develop treatments for devastating human diseases.

History

Neuroethology owes part of its existence to the establishment of ethology as a unique discipline within zoology. Although animal behavior had been studied since the time of Aristotle (384–322 BC), it was not until the early twentieth century that ethology finally became distinguished from natural history (a strictly descriptive field) and ecology. The main catalysts behind this new distinction were the research and writings of Konrad Lorenz and Niko Tinbergen.

Konrad Lorenz was born in Austria in 1903 and is widely known for his theory of fixed action patterns (FAPs): endogenous, instinctive behaviors involving a complex sequence of movements that are triggered ("released") by a certain kind of stimulus. The sequence always proceeds to completion, even if the original stimulus is removed. It is also species-specific and is performed by nearly all members of the species. Lorenz constructed his famous "hydraulic model" to help illustrate this concept, as well as the concept of action-specific energy, or drives.

Niko Tinbergen was born in the Netherlands in 1907 and worked closely with Lorenz in the development of the FAP theory; their studies focused on the egg-retrieval response of nesting geese. Tinbergen performed extensive research on the releasing mechanisms of particular FAPs, and used the bill-pecking behavior of baby herring gulls as his model system. This led to the concept of the supernormal stimulus. Tinbergen is also well known for the four questions he believed ethologists should ask about any given animal behavior; among these is the question of the mechanism of the behavior at the physiological, neural and molecular level, which can in many regards be considered the keystone question of neuroethology. Tinbergen also emphasized the need for ethologists and neurophysiologists to work together, a unity that has become a reality in the field of neuroethology.

Unlike behaviorism, which studies animals' reactions to non-natural stimuli in artificial, laboratory conditions, ethology sought to categorize and analyze the natural behaviors of animals in a field setting. Similarly, neuroethology asks questions about the neural bases of naturally occurring behaviors, and seeks to mimic the natural context as much as possible in the laboratory.

Although the development of ethology as a distinct discipline was crucial to the advent of neuroethology, equally important was the development of a more comprehensive understanding of neuroscience. Contributors to this new understanding were the Spanish neuroanatomist Santiago Ramón y Cajal (born in 1852), and the physiologists Charles Sherrington, Edgar Adrian, Alan Hodgkin, and Andrew Huxley. Charles Sherrington, who was born in Great Britain in 1857, is famous for his work on the nerve synapse as the site of transmission of nerve impulses, and for his work on reflexes in the spinal cord. His research also led him to hypothesize that every muscular activation is coupled to an inhibition of the opposing muscle. He was awarded a Nobel Prize for his work in 1932 along with Edgar Adrian, who made the first physiological recordings of neural activity from single nerve fibers.

Alan Hodgkin and Andrew Huxley (born 1914 and 1917, respectively, in Great Britain), are known for their collaborative effort to understand the production of action potentials in the giant axons of squid. The pair also proposed the existence of ion channels to facilitate action potential initiation, and were awarded the Nobel Prize in 1963 for their efforts.

As a result of this pioneering research, many scientists then sought to connect the physiological aspects of the nervous and sensory systems to specific behaviors. These scientists – Karl von Frisch, Erich von Holst, and Theodore Bullock – are frequently referred to as the "fathers" of neuroethology. Neuroethology did not really come into its own, though, until the 1970s and 1980s, when new, sophisticated experimental methods allowed researchers such as Masakazu Konishi, Walter Heiligenberg, Jörg-Peter Ewert, and others to study the neural circuits underlying verifiable behavior.

Modern neuroethology

The present discipline of neuroethology is represented by the International Society for Neuroethology, which was founded on the occasion of the NATO Advanced Study Institute "Advances in Vertebrate Neuroethology" (August 13–24, 1981), organized by J.-P. Ewert, D.J. Ingle and R.R. Capranica and held at the University of Kassel in Hofgeismar, Germany (cf. report in Trends in Neurosciences 5:141–143, 1982). Its first president was Theodore H. Bullock. The society has met every three years since its first meeting in Tokyo in 1986.

Its membership draws from many research programs around the world; many of its members are students and faculty from medical schools and neurobiology departments at various universities. Modern advances in neurophysiological techniques have enabled more exacting approaches in an ever-increasing number of animal systems, as size limitations are being dramatically overcome. A survey of the symposium topics from the most recent (2007) ISN congress gives some idea of the field's breadth:

  • Comparative aspects of spatial memory (rodents, birds, humans, bats)
  • Influences of higher processing centers in active sensing (primates, owls, electric fish, rodents, frogs)
  • Animal signaling plasticity over many time scales (electric fish, frogs, birds)
  • Song production and learning in passerine birds
  • Primate sociality
  • Optimal function of sensory systems (flies, moths, frogs, fish)
  • Neuronal complexity in behavior (insects, computational)
  • Contributions of genes to behavior (Drosophila, honeybees, zebrafish)
  • Eye and head movement (crustaceans, humans, robots)
  • Hormonal actions in brain and behavior (rodents, primates, fish, frogs, and birds)
  • Cognition in insects (honeybee)

Application to technology

Neuroethology can help create advancements in technology through an advanced understanding of animal behavior. Model systems were generalized from the study of simple and related animals to humans. For example, the neuronal cortical space map discovered in bats, specialized champions of hearing and navigation, elucidated the concept of a computational space map. In addition, the discovery of the space map in the barn owl led to the first neuronal example of the Jeffress model. This understanding is translatable to spatial localization in humans, mammalian relatives of the bat. Today, knowledge learned from neuroethology is being applied in new technologies. For example, Randall Beer and his colleagues used algorithms learned from insect walking behavior to create robots designed to walk on uneven surfaces (Beer et al.). Neuroethology and technology thus contribute to one another bidirectionally.
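
The Jeffress model mentioned above can be conveyed with a short, schematic sketch: an array of coincidence detectors, each reached through a different axonal delay, converts an interaural time difference (ITD) into a place code for sound direction. The Python sketch below is a toy illustration under simplifying assumptions (a pure-tone input and a one-sided delay line), not a model of the barn owl circuit itself.

    import numpy as np

    # A schematic of the Jeffress model: each coincidence detector receives the
    # left-ear input through a different axonal delay; the detector whose delay
    # compensates the interaural time difference (ITD) responds most strongly.
    # (The anatomical model uses antiparallel delay lines from both ears; a
    # one-sided delay keeps the arithmetic transparent here.)

    FS = 100_000          # sampling rate, Hz
    FREQ = 500.0          # pure-tone stimulus frequency, Hz

    def detector_activity(itd_s, internal_delays_s):
        t = np.arange(0, 0.02, 1 / FS)                   # 20 ms window (10 tone cycles)
        left = np.sin(2 * np.pi * FREQ * t)
        right = np.sin(2 * np.pi * FREQ * (t - itd_s))   # right ear lags by the ITD
        activity = []
        for d in internal_delays_s:
            delayed_left = np.roll(left, int(round(d * FS)))  # axonal delay on the left line
            activity.append(np.mean(delayed_left * right))    # coincidence ~ correlation
        return np.array(activity)

    delays = np.linspace(0, 600e-6, 13)                  # candidate internal delays, 0-600 us
    act = detector_activity(itd_s=300e-6, internal_delays_s=delays)
    print("detector with best-matching delay (s):", delays[np.argmax(act)])  # ~300 us

The location of the most active detector thus encodes the ITD as a position in an array, which is the essence of a computational space map.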

Neuroethologists seek to understand the neural basis of a behavior as it would occur in an animal's natural environment, but the techniques for neurophysiological analysis are lab-based and cannot be performed in the field. This dichotomy between field and lab studies poses a challenge for neuroethology. From the neurophysiology perspective, experiments must be designed for controls and objective rigor, which contrasts with the ethology perspective that the experiment should be applicable to the animal's natural condition, which is uncontrolled and subject to the dynamics of the environment. An early example of this is when Walter Rudolf Hess developed the focal brain stimulation technique to examine how a cat's brain controls vegetative functions as well as other behaviors. Even though this was a breakthrough in technological abilities and technique, it was not used by many neuroethologists originally because it compromised the cat's natural state and, therefore, in their minds, devalued the experiments' relevance to real situations.

When intellectual obstacles like this were overcome, a golden age of neuroethology followed, focused on simple and robust forms of behavior and applying modern neurobiological methods to explore the entire chain of sensory and neural mechanisms underlying these behaviors (Zupanc 2004). New technology allows neuroethologists to attach electrodes to even very sensitive parts of an animal, such as its brain, while it interacts with its environment. The founders of neuroethology ushered in this understanding by incorporating new technology and creative experimental design. Since then, even indirect technological advancements such as battery-powered and waterproofed instruments have allowed neuroethologists to mimic natural conditions in the lab while studying behaviors objectively. In addition, the electronics required for amplifying neural signals and for transmitting them over a certain distance have enabled neuroscientists to record from behaving animals performing activities in naturalistic environments. Emerging technologies can complement neuroethology, augmenting the feasibility of this valuable perspective of natural neurophysiology.

Another challenge, and perhaps part of the beauty of neuroethology, is experimental design. The value of neuroethological criteria speaks to the reliability of these experiments, because the discoveries represent behavior in the environments in which it evolved. Neuroethologists foresee future advancements through the use of new technologies and techniques, such as computational neuroscience, neuroendocrinology, and molecular genetics, that can mimic natural environments.

Case studies

Jamming avoidance response

In 1963, Akira Watanabe and Kimihisa Takeda discovered the jamming avoidance response in the knifefish Eigenmannia sp. The behavior was further characterized in collaboration with T. H. Bullock and colleagues, and the work of W. Heiligenberg finally expanded it into a full neuroethological study by examining the series of neural connections that lead to the behavior. Eigenmannia is a weakly electric fish that can generate electric discharges through electrocytes in its tail. Furthermore, it has the ability to electrolocate by analyzing perturbations in its electric field. However, when the frequency of a neighboring fish's discharge is very close (less than a 20 Hz difference) to its own, the fish will avoid having their signals interfere through a behavior known as the jamming avoidance response. If the neighbor's frequency is higher than the fish's own discharge frequency, the fish will lower its frequency, and vice versa. The sign of the frequency difference is determined by analyzing the "beat" pattern of the incoming interference, which consists of the combination of the two fish's discharge patterns.
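
The behavioral rule just described can be summarized in a few lines of code. The following Python sketch implements only that rule (shift the discharge frequency away from a nearby neighbor until the difference exceeds a threshold); the threshold and step size are illustrative values, and the sketch does not model how the fish actually extracts the sign of the frequency difference from the beat pattern.

    def jamming_avoidance_step(own_hz, neighbor_hz, threshold_hz=20.0, step_hz=0.5):
        """One update of a toy jamming avoidance response.

        Implements the behavioral rule described above: if the neighbor's
        discharge frequency is within `threshold_hz` of the fish's own, shift
        the own frequency away from the neighbor's (down if the neighbor is
        higher, up if lower). `threshold_hz` and `step_hz` are illustrative.
        """
        df = neighbor_hz - own_hz
        if abs(df) >= threshold_hz:
            return own_hz                      # no jamming risk, keep discharging as-is
        return own_hz - step_hz if df > 0 else own_hz + step_hz

    # A fish discharging at 400 Hz next to a neighbor at 404 Hz drifts downward
    # until the frequency difference exceeds the threshold.
    own, neighbor = 400.0, 404.0
    for _ in range(40):
        own = jamming_avoidance_step(own, neighbor)
    print(f"own frequency after avoidance: {own:.1f} Hz (difference {neighbor - own:.1f} Hz)")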

Neuroethologists performed several experiments under Eigenmannia's natural conditions to study how it determines the sign of the frequency difference. They manipulated the fish's discharge by injecting it with curare, which prevented its natural electric organ from discharging. Then an electrode was placed in its mouth and another at the tip of its tail. Likewise, the neighboring fish's electric field was mimicked using another set of electrodes. This experiment allowed neuroethologists to manipulate different discharge frequencies and observe the fish's behavior. From the results, they were able to conclude that the electric field frequency, rather than an internal frequency measure, was used as a reference. This experiment is significant in that it not only reveals a crucial neural mechanism underlying the behavior but also demonstrates the value neuroethologists place on studying animals in their natural habitats.

Feature analysis in toad vision

The recognition of prey and predators in the toad was first studied in depth by Jörg-Peter Ewert (Ewert 1974; see also 2004). He began by observing the natural prey-catching behavior of the common toad (Bufo bufo) and concluded that the animal followed a sequence consisting of stalking, binocular fixation, snapping, swallowing and mouth-wiping. Initially, however, the toad's actions depended on specific features of the sensory stimulus: whether it exhibited a worm or anti-worm configuration. The worm configuration, which signaled prey, was produced by movement along the object's long axis, whereas the anti-worm configuration, which signaled a predator, was produced by movement along the short axis (Zupanc 2004).
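
Ewert's configural rule can be stated compactly in code: motion along the object's long axis signals prey, motion along the short axis signals a predator. The Python sketch below is a toy classifier illustrating that rule; the vector representation and the angular tolerance are assumptions made for the example, not parameters from the experiments.

    import numpy as np

    def classify_moving_bar(long_axis, velocity, angle_tolerance_deg=30.0):
        """Classify a moving bar stimulus by the configural rule described above.

        `long_axis` and `velocity` are 2-D direction vectors. Motion roughly
        parallel to the bar's long axis is the "worm" (prey) configuration;
        motion roughly perpendicular to it is the "anti-worm" (predator)
        configuration. The angular tolerance is illustrative, not measured.
        """
        long_axis = np.asarray(long_axis, dtype=float)
        velocity = np.asarray(velocity, dtype=float)
        cos_angle = abs(np.dot(long_axis, velocity)) / (
            np.linalg.norm(long_axis) * np.linalg.norm(velocity)
        )
        angle_deg = np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))
        if angle_deg <= angle_tolerance_deg:
            return "worm (prey)"
        if angle_deg >= 90.0 - angle_tolerance_deg:
            return "anti-worm (predator)"
        return "ambiguous"

    print(classify_moving_bar(long_axis=(1, 0), velocity=(1, 0)))   # worm (prey)
    print(classify_moving_bar(long_axis=(1, 0), velocity=(0, 1)))   # anti-worm (predator)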

Ewert and coworkers adopted a variety of methods to study the predator-versus-prey behavioral response. They conducted recording experiments in which they inserted electrodes into the brain while the toad was presented with worm or anti-worm stimuli. This technique was repeated at different levels of the visual system and also allowed feature detectors to be identified. A central discovery was that of prey-selective neurons in the optic tectum, whose axons could be traced to the snapping pattern-generating cells in the hypoglossal nucleus. The discharge patterns of prey-selective tectal neurons in response to prey objects – in freely moving toads – "predicted" prey-catching reactions such as snapping. Another approach, the stimulation experiment, was carried out in freely moving toads. Focal electrical stimuli were applied to different regions of the brain, and the toad's response was observed. When the thalamic-pretectal region was stimulated, the toad exhibited escape responses, but when the tectum was stimulated in an area close to prey-selective neurons, the toad engaged in prey-catching behavior (Carew 2000). Furthermore, neuroanatomical experiments were carried out in which the toad's thalamic-pretectal/tectal connection was lesioned and the resulting deficit noted: the prey-selective properties were abolished both in the responses of prey-selective neurons and in the prey-catching behavior. These and other experiments suggest that prey selectivity results from pretecto-tectal influences.

Ewert and coworkers showed in toads that there are stimulus-response mediating pathways that translate perception (of visual sign stimuli) into action (adequate behavioral responses). In addition, there are modulatory loops that initiate, modify or specify this mediation (Ewert 2004). Regarding the latter, for example, the telencephalic caudal ventral striatum is involved in a loop gating the stimulus-response mediation in the manner of directed attention. The telencephalic ventral medial pallium ("primordium hippocampi"), by contrast, is involved in loops that either modify prey selection as a result of associative learning or specify prey selection as a result of non-associative learning.

Computational neuroethology

Computational neuroethology (CN or CNE) is concerned with the computer modelling of the neural mechanisms underlying animal behaviors. Together with the term "artificial ethology," the term "computational neuroethology" was first published in the literature by Achacoso and Yamamoto in the spring of 1990, based on their pioneering work on the connectome of C. elegans in 1989, with further publications in 1992. Computational neuroethology was argued for in depth later in 1990 by Randall Beer and by Dave Cliff, both of whom acknowledged the strong influence of Michael Arbib's Rana computatrix computational model of neural mechanisms for visual guidance in frogs and toads.

CNE systems work within a closed-loop environment; that is, they perceive their (perhaps artificial) environment directly, rather than through human input, as is typical in AI systems. For example, Barlow et al. developed a time-dependent model for the retina of the horseshoe crab Limulus polyphemus on a Connection Machine (Model CM-2). Instead of feeding the model retina with idealized input signals, they exposed the simulation to digitized video sequences made underwater, and compared its response with those of real animals.
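
The closed-loop character of such models can be conveyed with a schematic sketch: the simulated nervous system senses its (artificial) environment, produces motor output, and its own actions change what it senses next. The Python toy below implements simple phototaxis with a crossed, Braitenberg-style rule; it is a generic illustration of the loop under invented assumptions, not a reconstruction of the Limulus retina model or of Beer's controllers.

    import random

    # A minimal closed-loop sketch in the spirit of computational neuroethology:
    # the model "nervous system" senses its (artificial) environment directly,
    # produces motor output, and its own actions change what it senses next.
    # Everything is schematic: a 1-D world, a light source, and a two-sensor agent.

    def sense(agent_pos, light_pos):
        # Two sensors report light intensity falling off with distance on each side.
        left = 1.0 / (1.0 + abs((agent_pos - 0.5) - light_pos))
        right = 1.0 / (1.0 + abs((agent_pos + 0.5) - light_pos))
        return left, right

    def act(left, right, step=0.1):
        # A crossed, Braitenberg-style rule: move toward the more strongly lit side.
        return step if right > left else -step

    agent, light = 0.0, 7.0
    for _ in range(200):
        l, r = sense(agent, light)
        agent += act(l, r)                    # the action changes the next percept
        light += random.uniform(-0.05, 0.05)  # the world also changes on its own

    print(f"agent ended near the light: agent={agent:.2f}, light={light:.2f}")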

Model systems

 

Animal language

From Wikipedia, the free encyclopedia
 

Animal languages are forms of non-human animal communication that show similarities to human language. Animals communicate by using a variety of signs such as sounds or movements. Such signing may be considered complex enough to be called a form of language if the inventory of signs is large, the signs are relatively arbitrary, and the animals seem to produce them with a degree of volition (as opposed to relatively automatic conditioned behaviors or unconditioned instincts, usually including facial expressions). In experimental tests, animal communication may also be evidenced through the use of lexigrams (as used by chimpanzees and bonobos).

Many researchers argue that animal communication lacks a key aspect of human language: the creation of new patterns of signs under varied circumstances. (In contrast, for example, humans routinely produce entirely new combinations of words.) Some researchers, including the linguist Charles Hockett, argue that human language and animal communication differ so much that the underlying principles are unrelated. Accordingly, the linguist Thomas A. Sebeok has proposed not to use the term "language" for animal sign systems.[2] Marc Hauser, Noam Chomsky, and W. Tecumseh Fitch assert that an evolutionary continuum exists between animal communication methods and human language.

Aspects of human language

Human and chimp, in this case Claudine André with a bonobo.

The following properties of human language have been argued to separate it from animal communication:

  • Arbitrariness: there is usually no rational relationship between a sound or sign and its meaning. For example, there is nothing intrinsically house-like about the word "house".
  • Discreteness: language is composed of small, repeatable parts (discrete units) that are used in combination to create meaning.
  • Displacement: languages can be used to communicate ideas about things that are not in the immediate vicinity either spatially or temporally.
  • Duality of patterning: the smallest meaningful units (words, morphemes) consist of sequences of units without meaning. This is also referred to as double articulation.
  • Productivity: users can understand and create an indefinitely large number of utterances.
  • Semanticity: specific signals have specific meanings.

Research with apes, like that of Francine Patterson with Koko (gorilla) or Allen and Beatrix Gardner with Washoe (chimpanzee), suggested that apes are capable of using language that meets some of these requirements such as arbitrariness, discreteness, and productivity.

In the wild, chimpanzees have been seen "talking" to each other when warning about approaching danger. For example, if one chimpanzee sees a snake, he makes a low, rumbling noise, signaling for all the other chimps to climb into nearby trees. In this case, the chimpanzees' communication does not indicate displacement, as it is entirely contained to an observable event.

Arbitrariness has been noted in meerkat calls; bee dances demonstrate elements of spatial displacement; and cultural transmission has possibly occurred between the celebrated bonobos Kanzi and Panbanisha.

Human language may not be completely "arbitrary." Research has shown that almost all humans naturally demonstrate limited crossmodal perception (e.g. synesthesia) and multisensory integration, as illustrated by the Kiki and Booba study. Other recent research has tried to explain how the structure of human language emerged, comparing two different aspects of hierarchical structure present in animal communication and proposing that human language arose out of these two separate systems.

Claims that animals have language skills akin to humans, however, are extremely controversial. As Steven Pinker illustrates in his book The Language Instinct, claims that chimpanzees can acquire language are exaggerated and rest on very limited or specious data.

The American linguist Charles Hockett theorized that there are sixteen features of human language that distinguish human communication from that of animals. He called these the design features of language. The features mentioned below have so far been found in all spoken human languages, and at least one is missing from every other known animal communication system.

  • Vocal-auditory channel: sounds emitted from the mouth and perceived by the auditory system. This applies to many animal communication systems, but there are many exceptions. For example, an alternative to vocal-auditory communication is visual communication, as when cobras extend the ribs behind their heads to send a message of intimidation or of feeling threatened. In humans, sign languages provide many examples of fully formed languages that use a visual channel.
  • Broadcast transmission and directional reception: this requires that the recipient can tell the direction that the signal comes from and thus the originator of the signal.
  • Rapid fading (transitory nature): Signal lasts a short time. This is true of all systems involving sound. It does not take into account audio recording technology and is also not true for written language. It tends not to apply to animal signals involving chemicals and smells which often fade slowly. For example, a skunk's smell, produced in its glands, lingers to deter a predator from attacking.
  • Interchangeability: All utterances that are understood can be produced. This is different from some communication systems where, for example, males produce one set of behaviours and females another and they are unable to interchange these messages so that males use the female signal and vice versa. For example, Heliothine moths have differentiated communication: females are able to send a chemical to indicate preparedness to mate, while males cannot send the chemical.
  • Total feedback: The sender of a message is aware of the message being sent.
  • Specialization: The signal produced is intended for communication and is not due to another behavior. For example, dog panting is a natural reaction to being overheated, but is not produced to specifically relay a particular message.
  • Semanticity: There is some fixed relationship between a signal and a meaning.

Primates: studied examples

Humans are able to distinguish real words from fake words based on the phonological order of the word itself. In a 2013 study, baboons were shown to have this skill as well. The discovery has led researchers to believe that reading is not as advanced a skill as previously believed, but instead builds on the ability to recognize and distinguish letters from one another. The experimental setup involved six young adult baboons; results were measured by having the animals use a touch screen to select whether the displayed string was a real word or a nonword such as "dran" or "telk." The study lasted six weeks, with approximately 50,000 trials completed in that time. The experimenters explain the results in terms of bigrams, which are combinations of two (usually different) letters: the bigrams used in the nonwords are rare, while the bigrams used in real words are more common. Further studies will attempt to teach baboons how to use an artificial alphabet.
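
The bigram statistic behind this interpretation can be illustrated with a short Python sketch. The tiny word list below is a stand-in, not the study's stimuli; the point is only that bigrams drawn from real words tend to be common in a reference lexicon, while those in nonwords such as "telk" or "dran" tend to be rare.

    from collections import Counter

    # Toy reference lexicon; illustrative only, not the study's word list.
    lexicon = ["think", "thing", "there", "other", "water", "after", "later", "alter"]

    bigram_counts = Counter(
        word[i:i + 2] for word in lexicon for i in range(len(word) - 1)
    )

    def bigram_score(string):
        """Average frequency of a string's bigrams in the reference lexicon."""
        bigrams = [string[i:i + 2] for i in range(len(string) - 1)]
        return sum(bigram_counts[b] for b in bigrams) / len(bigrams)

    # Real words score noticeably higher than nonwords on this toy measure.
    for candidate in ["there", "water", "telk", "dran"]:
        print(candidate, round(bigram_score(candidate), 2))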

In a 2016 study, a team of biologists from several universities concluded that macaques possess vocal tracts physically capable of speech, "but lack a speech-ready brain to control it".

Non-primates: studied examples

Among the most studied examples of animal languages are:

Birds

  • Bird songs: Songbirds can be very articulate. Grey parrots are famous for their ability to mimic human language, and at least one specimen, Alex, appeared able to answer a number of simple questions about objects he was presented with. Parrots, hummingbirds and songbirds display vocal learning.

Insects

  • Bee dance: Used to communicate the direction and distance of a food source in many species of bees; a toy decoding of this mapping is sketched below.
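
As an illustration of that mapping, the Python sketch below converts a waggle run's angle (relative to vertical on the comb) and duration into an approximate bearing and distance. The rule that the dance angle corresponds to the food source's bearing relative to the sun's azimuth is the standard description; the roughly one second of waggling per kilometre used here is a commonly quoted approximation, not a constant.

    def decode_waggle_dance(waggle_angle_deg, waggle_duration_s, sun_azimuth_deg,
                            seconds_per_km=1.0):
        """Decode a waggle run into an approximate bearing and distance.

        The angle of the waggle run relative to vertical on the comb is taken to
        equal the bearing of the food source relative to the sun's azimuth, and
        waggle duration is taken to grow roughly linearly with distance. The
        1 s/km conversion is a rough rule of thumb, not a constant.
        """
        bearing_deg = (sun_azimuth_deg + waggle_angle_deg) % 360
        distance_km = waggle_duration_s / seconds_per_km
        return bearing_deg, distance_km

    # A run angled 40 degrees from vertical, lasting 2.5 s, with the sun at
    # azimuth 180 degrees, would indicate food at roughly bearing 220 degrees
    # and about 2.5 km away.
    print(decode_waggle_dance(40, 2.5, 180))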

Mammals

  • African forest elephants: Cornell University's Elephant Listening Project began in 1999 when Katy Payne began studying the calls of African forest elephants in Dzanga National Park in the Central African Republic. Andrea Turkalo has continued Payne's work in Dzanga National Park observing elephant communication. For nearly 20 years, Turkalo has spent the majority of her time using a spectrogram to record the noises that the elephants make. After extensive observation and research, she has been able to recognize elephants by their voices. Researchers hope to translate these voices into an elephant dictionary, but that will likely not occur for many years. Because elephant calls are often made at very low frequencies, this spectrogram is able to detect lower frequencies that human ears are unable to hear, allowing Turkalo to get a better idea of what she perceives the elephants to be saying. Cornell's research on African forest elephants has challenged the idea that humans are considerably better at using language and that animals only have a small repertoire of information that they can convey to others. As Turkalo explained on 60 Minutes' "The Secret Language of Elephants," "Many of their calls are in some ways similar to human speech."
  • Mustached bats: Since these animals spend most of their lives in the dark, they rely heavily on their auditory system to communicate. This acoustic communication includes echolocation or using calls to locate each other in the darkness. Studies have shown that mustached bats use a wide variety of calls to communicate with one another. These calls include 33 different sounds, or "syllables," that the bats then either use alone or combine in various ways to form "composite" syllables.
  • Prairie dogs: Dr. Con Slobodchikoff studied prairie dog communication and discovered:
    • different alarm calls for different species of predators;
    • different escape behaviors for different species of predators;
    • transmission of semantic information, in that playbacks of alarm calls in the absence of predators lead to escape behaviors that are appropriate to the type of predator which elicited the alarm calls;
    • alarm calls containing descriptive information about the general size, color, and speed of travel of the predator.

Aquatic mammals

  • Bottlenose dolphins: Dolphins can hear one another up to 6 miles apart underwater. One National Geographic article described the success of a mother dolphin communicating with her baby using a telephone. Researchers noted that it appeared that both dolphins knew who they were speaking with and what they were speaking about. Not only do dolphins communicate via nonverbal cues, but they also seem to chatter and respond to other dolphins' vocalizations.
Spectrogram of humpback whale vocalizations. Detail is shown for the first 24 seconds of the 37-second recording of a humpback whale "song". The ethereal whale "songs" and echolocation "clicks" are visible as horizontal striations and vertical sweeps, respectively.
  • Whales: Two groups of whales, the humpback whale and a subspecies of blue whale found in the Indian Ocean, are known to produce repetitious sounds at varying frequencies known as whale song. Male humpback whales perform these vocalizations only during the mating season, so it is surmised that the purpose of the songs is to aid sexual selection. Humpbacks also make a sound called a feeding call, five to ten seconds long and of near-constant frequency. Humpbacks generally feed cooperatively by gathering in groups, swimming underneath shoals of fish and lunging up vertically through the fish and out of the water together. Prior to these lunges, whales make their feeding call. The exact purpose of the call is not known, but research suggests that fish react to it. When the sound was played back to them, a group of herring responded by moving away from the call, even though no whale was present.
  • Sea lions: Beginning in 1971 and continuing to the present day, Dr. Ronald J. Schusterman and his research associates have studied sea lions' cognitive ability. They have discovered that sea lions are able to recognize relationships between stimuli based on similar functions or connections made with their peers, rather than only on the stimuli's common features. This is called "equivalence classification". The ability to recognize equivalence may be a precursor to language. Research is currently being conducted at the Pinniped Cognition & Sensory Systems Laboratory to determine how sea lions form these equivalence relationships. Sea lions have also been shown to understand simple syntax and commands when taught an artificial sign language similar to the one used with primates. The sea lions studied were able to learn and use a number of syntactic relations between the signs they were taught, such as how the signs should be arranged in relation to each other. However, the sea lions rarely used the signs semantically or logically. In the wild it is thought that sea lions use the reasoning skills associated with equivalence classification to make important decisions that can affect their rate of survival (e.g. recognizing friends and family or avoiding enemies and predators). Sea lions display their communication in the following ways:
    • Sea lions use their bodies in various postural positions to display communication.
    • Sea lions' vocal cords limit their ability to convey sounds to a range of barks, chirps, clicks, moans, growls and squeaks.
    • There has yet to be an experiment which proves for certain that sea lions use echolocation as a means of communication.

The effects of learning on auditory signaling in these animals are of special interest. Several investigators have pointed out that some marine mammals appear to have an extraordinary capacity to alter both the contextual and structural features of their vocalizations as a result of experience. Janik and Slater (2000) have stated that learning can modify the emission of vocalizations in one of two ways: (1) by influencing the context in which a particular signal is used and/or (2) by altering the acoustic structure of the call itself. Male California sea lions can learn to inhibit their barking in the presence of any male dominant to them, but vocalize normally when dominant males are absent. Recent work on gray seals shows that different call types can be selectively conditioned and placed under the biased control of different cues (Schusterman, in press), and that food reinforcement can also modify vocal emissions. "Hoover", a captive male harbor seal, demonstrated a convincing case of vocal mimicry, although similar observations have not been reported since. Still, this shows that under the right circumstances pinnipeds may use auditory experience, in addition to environmental consequences such as food reinforcement and social feedback, to modify their vocal emissions.

In a 1992 study, Robert Gisiner and Ronald J. Schusterman conducted experiments in which they attempted to teach Rocky, a female California sea lion, syntax. Rocky was taught signed words and was then asked to perform various tasks dependent on word order after viewing a signed instruction. It was found that Rocky was able to determine relations between signs and words, and to use a basic form of syntax. A 1993 study by Ronald J. Schusterman and David Kastak found that the California sea lion was capable of understanding abstract concepts such as symmetry, sameness and transitivity. This provides strong backing to the theory that equivalence relations can form without language.

The distinctive sound of sea lions is produced both above and below water. To mark territory, sea lions "bark", with non-alpha males making more noise than alphas. Although females also bark, they do so less frequently and most often in connection with birthing pups or caring for their young. Females produce a highly directional bawling vocalization, the pup attraction call, which helps mother and pup locate one another. As noted in Animal Behavior, their amphibious lifestyle has made acoustic communication necessary for social organization while on land.

Sea lions can hear frequencies as low as 100 Hz and as high as 40,000 Hz, and vocalize in the range of 100 to 10,000 Hz.

Mollusks

  • Caribbean reef squid have been shown to communicate using a variety of color, shape, and texture changes. Squid are capable of rapid changes in skin color and pattern through nervous control of chromatophores. In addition to camouflage and appearing larger in the face of a threat, squids use color, patterns, and flashing to communicate with one another in various courtship rituals. Caribbean reef squid can send one message via color patterns to a squid on their right, while they send another message to a squid on their left.

Comparison of the terms "animal language" and "animal communication"

It is worth distinguishing "animal language" from "animal communication", although there is some comparative interchange in certain cases (e.g. Cheney & Seyfarth's vervet monkey call studies). Thus "animal language" typically does not include bee dancing, bird song, whale song, dolphin signature whistles, prairie dog calls, or the communicative systems found in most social mammals. The features of language listed above are a dated formulation by Hockett in 1960. Through this formulation Hockett made one of the earliest attempts to break down the features of human language for the purpose of applying Darwinian gradualism. Although it influenced early animal language efforts (see below), this formulation is today not considered the key architecture at the core of "animal language" research.

"Clever Hans", an Orlov Trotter horse that was claimed to have been able to perform arithmetic and other intellectual tasks.

Animal language results are controversial for several reasons. (For a related controversy, see also Clever Hans.) In the 1970s John Lilly was attempting to "break the code": to fully communicate ideas and concepts with wild populations of dolphins so that we could "speak" to them and share our cultures, histories, and more. This effort failed. Early chimpanzee work was with chimpanzee infants raised as if they were human, a test of the nature vs. nurture hypothesis. Chimpanzees have a laryngeal structure very different from that of humans, and it has been suggested that chimpanzees are not capable of voluntary control of their breathing, although better studies are needed to confirm this accurately. This combination is thought to make it very difficult for chimpanzees to reproduce the vocal intonations required for human language. Researchers eventually moved towards a gestural (sign language) modality, as well as "keyboard" devices laden with buttons adorned with symbols (known as "lexigrams") that the animals could press to produce artificial language. Other chimpanzees learned by observing human subjects performing the task. This latter group of researchers studying chimpanzee communication through symbol recognition (keyboard) as well as through the use of sign language (gestural) is at the forefront of communicative breakthroughs in the study of animal language, and its members are familiar with their subjects on a first-name basis: Sarah, Lana, Kanzi, Koko, Sherman, Austin and Chantek.

Perhaps the best known critic of "Animal Language" is Herbert Terrace. Terrace's 1979 criticism using his own research with the chimpanzee Nim Chimpsky was scathing and basically spelled the end of animal language research in that era, most of which emphasized the production of language by animals. In short, he accused researchers of over-interpreting their results, especially as it is rarely parsimonious to ascribe true intentional "language production" when other simpler explanations for the behaviors (gestural hand signs) could be put forth. Also, his animals failed to show generalization of the concept of reference between the modalities of comprehension and production; this generalization is one of many fundamental ones that are trivial for human language use. The simpler explanation according to Terrace was that the animals had learned a sophisticated series of context-based behavioral strategies to obtain either primary (food) or social reinforcement, behaviors that could be over-interpreted as language use.

In 1984, during this anti-animal-language backlash, Louis Herman published an account of artificial language in the bottlenosed dolphin in the journal Cognition. A major difference between Herman's work and previous research was his emphasis on a method of studying language comprehension only (rather than both comprehension and production by the animals), which enabled rigorous controls and statistical tests, largely because he limited his researchers to evaluating the animals' physical behaviors (in response to sentences) with blinded observers, rather than attempting to interpret possible language utterances or productions. The dolphins' names here were Akeakamai and Phoenix. Irene Pepperberg used the vocal modality for language production and comprehension in a grey parrot named Alex, and Sue Savage-Rumbaugh continues to study bonobos such as Kanzi and Panbanisha. R. Schusterman duplicated many of the dolphin results in his California sea lions ("Rocky"), and came from a more behaviorist tradition than Herman's cognitive approach. Schusterman's emphasis is on the importance of a learning structure known as "equivalence classes".

However, overall there has not been any meaningful dialogue between the linguistics and animal language spheres, despite the subject's capturing the public's imagination in the popular press. The growing field of language evolution is another source of future interchange between these disciplines. Most primate researchers tend to show a bias toward a shared pre-linguistic ability between humans and chimpanzees dating back to a common ancestor, while dolphin and parrot researchers stress the general cognitive principles underlying these abilities. More recent related controversies regarding animal abilities include the closely linked areas of Theory of mind, Imitation (e.g. Nehaniv & Dautenhahn, 2002), Animal Culture (e.g. Rendell & Whitehead, 2001), and Language Evolution (e.g. Christiansen & Kirby, 2003).

Recent animal language research has contested the idea that animal communication is less sophisticated than human communication. Denise Herzing has done research on dolphins in the Bahamas in which she created two-way communication via a submerged keyboard. The keyboard allows divers to communicate with wild dolphins. Using the sounds and symbols on each key, the dolphins can either press a key with their nose or mimic the whistling sound emitted in order to ask humans for a specific prop. This ongoing experiment has shown that rapid, flexible thinking occurs in non-linguistic creatures, despite previous conceptions of animal communication. Further research done with Kanzi using lexigrams has strengthened the idea that animal communication is much more complex than was once thought.

Psycholinguistics

From Wikipedia, the free encyclopedia

Psycholinguistics or psychology of language is the study of the interrelation between linguistic factors and psychological aspects. The discipline is mainly concerned with the mechanisms by which language is processed and represented in the mind and brain; that is, the psychological and neurobiological factors that enable humans to acquire, use, comprehend, and produce language.

Psycholinguistics is concerned with the cognitive faculties and processes that are necessary to produce the grammatical constructions of language. It is also concerned with the perception of these constructions by a listener.

Initial forays into psycholinguistics were largely philosophical and educational, due mainly to the field's location in departments other than the applied sciences and to the lack of cohesive data on how the human brain functioned. Modern research makes use of biology, neuroscience, cognitive science, linguistics, and information science to study how the mind-brain processes language, drawing less on the social sciences, human development, communication theory, and infant development, among other fields.

There are several subdisciplines with non-invasive techniques for studying the neurological workings of the brain. For example: neurolinguistics has become a field in its own right; and developmental psycholinguistics, as a branch of psycholinguistics, concerns itself with a child's ability to learn language.

Areas of study

Psycholinguistics is an interdisciplinary field that consists of researchers from a variety of different backgrounds, including psychology, cognitive science, linguistics, speech and language pathology, and discourse analysis. Psycholinguists study how people acquire and use language, according to the following main areas:

  1. language acquisition: how do children acquire language?
  2. language comprehension: how do people comprehend language?
  3. language production: how do people produce language?
  4. second language acquisition: how do people who already know one language acquire another one?

A researcher interested in language comprehension may study word recognition during reading, to examine the processes involved in the extraction of orthographic, morphological, phonological, and semantic information from patterns in printed text. A researcher interested in language production might study how words are prepared to be spoken starting from the conceptual or semantic level (this concerns connotation, and possibly can be examined through the conceptual framework concerned with the semantic differential). Developmental psycholinguists study infants' and children's ability to learn and process language.

Psycholinguists further divide their studies according to the different components that make up human language.

Linguistics-related areas include:

  • Phonetics and phonology are concerned with the study of speech sounds. Within psycholinguistics, research focuses on how the brain processes and understands these sounds.
  • Morphology is the study of word structures, especially between related words (such as dog and dogs) and the formation of words based on rules (such as plural formation).
  • Syntax is the study of how words are combined to form sentences.
  • Semantics deals with the meaning of words and sentences. Where syntax is concerned with the formal structure of sentences, semantics deals with the actual meaning of sentences.
  • Pragmatics is concerned with the role of context in the interpretation of meaning.

History

In seeking to understand the properties of language acquisition, psycholinguistics has roots in debates regarding innate versus acquired behaviors (both in biology and psychology). For some time, the concept of an innate trait was something that was not recognized in studying the psychology of the individual. However, with the redefinition of innateness as time progressed, behaviors considered innate could once again be analyzed as behaviors that interacted with the psychological aspect of an individual. After the diminished popularity of the behaviorist model, ethology reemerged as a leading train of thought within psychology, allowing the subject of language, an innate human behavior, to be examined once more within the scope of psychology.

Origin of "psycholinguistics"

The theoretical framework for psycholinguistics began to be developed before the end of the 19th century as the "Psychology of Language". The science of psycholinguistics, so called, began in 1936 when Jacob Kantor, a prominent psychologist at the time, used the term "psycholinguistic" as a description within his book An Objective Psychology of Grammar.

However, the term "psycholinguistics" only came into widespread usage in 1946, when Kantor's student Nicholas Pronko published an article entitled "Psycholinguistics: A Review". Pronko's desire was to unify myriad related theoretical approaches under a single name. The term was used to describe an interdisciplinary science "that could be coherent", and it was also the title of Psycholinguistics: A Survey of Theory and Research Problems, a 1954 book by Charles E. Osgood and Thomas A. Sebeok.

Theories

Language acquisition

Though there is still much debate, there are two primary theories on childhood language acquisition:

  • the behaviorist perspective, whereby all language must be learned by the child; and
  • the innatist perspective, which holds that the abstract system of language cannot be learned, but that humans possess an innate language faculty, or an access to what has been called "universal grammar".

The innatist perspective began in 1959 with Noam Chomsky's highly critical review of B.F. Skinner's Verbal Behavior (1957). This review helped start what has been called the cognitive revolution in psychology. Chomsky posited that humans possess a special, innate ability for language, and that complex syntactic features, such as recursion, are "hard-wired" in the brain. These abilities are thought to be beyond the grasp of even the most intelligent and social non-humans. Chomsky argued that children acquiring a language face a vast search space among all possible human grammars, yet there was no evidence that children receive sufficient input to learn all the rules of their language. Hence, there must be some other innate mechanism that endows humans with the ability to learn language. According to the "innateness hypothesis", such a language faculty is what defines human language and makes it different from even the most sophisticated forms of animal communication.

The fields of linguistics and psycholinguistics have since been defined by pro-and-con reactions to Chomsky. The view in favor of Chomsky holds that the human ability to use language (specifically the ability to use recursion) is qualitatively different from any sort of animal ability. This ability may have resulted from a favorable mutation or from an adaptation of skills that originally evolved for other purposes.

The view that language must be learned was especially popular before 1960 and is well represented by the mentalistic theories of Jean Piaget and the empiricist Rudolf Carnap. Likewise, the behaviorist school of psychology puts forth the point of view that language is a behavior shaped by conditioned response; hence it is learned. The view that language can be learned has had a recent resurgence inspired by emergentism. This view challenges the "innate" view as scientifically unfalsifiable; that is to say, it cannot be tested. With the increase in computer technology since the 1980s, researchers have been able to simulate language acquisition using neural network models.

Language comprehension

The structures and uses of language are related to the formation of ontological insights. Some see this system as "structured cooperation between language-users" who use conceptual and semantic deference in order to exchange meaning and knowledge, as well as give meaning to language, thereby examining and describing "semantic processes bound by a 'stopping' constraint which are not cases of ordinary deferring." Deferring is normally done for a reason, and a rational person is always disposed to defer if there is good reason.

The theory of the "semantic differential" supposes universal distinctions such as the following; a brief illustration of comparing concepts on such bipolar scales follows the list:

  • Typicality: that included scales such as "regular–rare", "typical–exclusive";
  • Reality: "imaginary–real", "evident–fantastic", "abstract–concrete";
  • Complexity: "complex–simple", "unlimited–limited", "mysterious–usual";
  • Improvement or Organization: "regular–spasmodic", "constant–changeable", "organized–disorganized", "precise–indefinite";
  • Stimulation: "interesting–boring", "trivial–new".
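
As a toy illustration of how such bipolar scales are used, the Python sketch below scores a few concepts on invented scales and compares their rating profiles by distance; the scales, concepts, and numbers are illustrative only, not data from any study.

    import math

    # Each concept is scored on a few bipolar scales (here from -3 to +3); concepts
    # can then be compared by the distance between their rating profiles.
    scales = ["regular-rare", "imaginary-real", "complex-simple", "interesting-boring"]

    ratings = {
        "dream":   [-1, -3, -2, -3],
        "invoice": [ 3,  3,  2,  3],
        "myth":    [-2, -2, -2, -2],
    }

    def profile_distance(a, b):
        """Euclidean distance between two rating profiles."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(ratings[a], ratings[b])))

    print("dream vs myth:   ", round(profile_distance("dream", "myth"), 2))
    print("dream vs invoice:", round(profile_distance("dream", "invoice"), 2))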

Reading

One question in the realm of language comprehension is how people understand sentences as they read (i.e., sentence processing). Experimental research has spawned several theories about the architecture and mechanisms of sentence comprehension. These theories are typically concerned with the types of information contained in the sentence that the reader can use to build meaning, and with the point in reading at which that information becomes available to the reader. Issues such as "modular" versus "interactive" processing have been theoretical divides in the field.

A modular view of sentence processing assumes that the stages involved in reading a sentence function independently as separate modules. These modules have limited interaction with one another. For example, one influential theory of sentence processing, the "garden-path theory", states that syntactic analysis takes place first. Under this theory, as the reader is reading a sentence, he or she creates the simplest structure possible, to minimize effort and cognitive load. This is done without any input from semantic analysis or context-dependent information. Hence, in the sentence "The evidence examined by the lawyer turned out to be unreliable", by the time the reader gets to the word "examined" he or she has committed to a reading of the sentence in which the evidence is examining something because it is the simplest parsing. This commitment is made even though it results in an implausible situation: evidence cannot examine something. Under this "syntax first" theory, semantic information is processed at a later stage. It is only later that the reader will recognize that he or she needs to revise the initial parsing into one in which "the evidence" is being examined. In this example, readers typically recognize their mistake by the time they reach "by the lawyer" and must go back and reevaluate the sentence. This reanalysis is costly and contributes to slower reading times.

In contrast to the modular view, an interactive theory of sentence processing, such as a constraint-based lexical approach assumes that all available information contained within a sentence can be processed at any time. Under an interactive view, the semantics of a sentence (such as plausibility) can come into play early on to help determine the structure of a sentence. Hence, in the sentence above, the reader would be able to make use of plausibility information in order to assume that "the evidence" is being examined instead of doing the examining. There are data to support both modular and interactive views; which view is correct is debatable.
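
The contrast between the two accounts can be made concrete with a toy reading-cost simulation. In the Python sketch below, the per-word costs and the reanalysis penalty are arbitrary illustrative numbers, not fits to reading-time data; the sketch only encodes the qualitative difference described above.

    # Toy contrast of the two accounts, using the example sentence from the text.
    sentence = "The evidence examined by the lawyer turned out to be unreliable".split()
    BASE_COST = 1.0           # nominal cost of reading any word
    REANALYSIS_PENALTY = 3.0  # extra cost of revising a committed parse

    def syntax_first_costs(words):
        # The simplest parse treats "examined" as a main verb; the error is only
        # detected, and repaired, at the disambiguating word "by".
        costs = []
        for word in words:
            cost = BASE_COST
            if word == "by":
                cost += REANALYSIS_PENALTY
            costs.append(cost)
        return costs

    def constraint_based_costs(words):
        # Plausibility ("evidence" cannot examine anything) is used immediately,
        # so the reduced-relative parse is adopted early and no reanalysis occurs.
        return [BASE_COST for _ in words]

    print("syntax-first total cost:    ", sum(syntax_first_costs(sentence)))
    print("constraint-based total cost:", sum(constraint_based_costs(sentence)))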

When reading, saccades can cause the mind to skip over words because it does not register them as important to the sentence; the mind either omits such a word entirely or supplies the wrong word in its stead. This can be seen in "Paris in the the Spring", a common psychological test in which the mind often skips the second "the", especially when there is a line break between the two.

Language production

Language production refers to how people produce language, either in written or spoken form, in a way that conveys meanings comprehensible to others. One of the most effective ways to explain how people represent meanings using rule-governed languages is by observing and analyzing instances of speech errors, which include speech disfluencies like false starts, repetition, reformulation and constant pauses in between words or sentences, as well as slips of the tongue, such as blendings, substitutions, exchanges (e.g. spoonerisms), and various pronunciation errors.

These speech errors have significant implications for understanding how language is produced, in that they reflect that:

  1. Speech is planned in advance: speech errors such as substitutions and exchanges show that one does not plan an entire sentence before speaking. Rather, the language faculty is tapped continuously during the speech production process, which is accounted for by the limitations of working memory. In particular, errors involving exchanges imply that one plans one's sentence ahead, but only with regard to its significant ideas (e.g. the words that constitute the core meaning) and only to a certain extent.
  2. The lexicon is organized semantically and phonologically: substitution and pronunciation errors show that the lexicon is organized not only by meaning but also by form.
  3. Morphologically complex words are assembled: errors involving blending within a word suggest that a rule governs the construction of words in production (and likely also in the mental lexicon). In other words, speakers generate morphologically complex words by merging morphemes rather than retrieving them as chunks.

It is useful to differentiate between three separate phases of language production:

  1. conceptualization: "determining what to say";
  2. formulation: "translating the intention to say something into linguistic form";
  3. execution: "the detailed articulatory planning and articulation itself".

Psycholinguistic research has largely concerned itself with the study of formulation because the conceptualization phase remains largely elusive and mysterious.

Methodologies

Behavioral tasks

Many of the experiments conducted in psycholinguistics, especially early on, are behavioral in nature. In these types of studies, subjects are presented with linguistic stimuli and asked to respond. For example, they may be asked to make a judgment about a word (lexical decision), reproduce the stimulus, or say a visually presented word aloud. Reaction times to respond to the stimuli (usually on the order of milliseconds) and proportion of correct responses are the most often employed measures of performance in behavioral tasks. Such experiments often take advantage of priming effects, whereby a "priming" word or phrase appearing in the experiment can speed up the lexical decision for a related "target" word later.

As an example of how behavioral methods can be used in psycholinguistics research, Fischler (1977) investigated word encoding, using a lexical-decision task. He asked participants to make decisions about whether two strings of letters were English words. Sometimes the strings would be actual English words requiring a "yes" response, and other times they would be non-words requiring a "no" response. A subset of the licit words were related semantically (e.g., cat–dog) while others were unrelated (e.g., bread–stem). Fischler found that related word pairs were responded to faster, compared to unrelated word pairs, which suggests that semantic relatedness can facilitate word encoding.
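
A minimal sketch of how data from such a priming experiment might be summarized is given below; the reaction times and word pairs are invented for illustration and are not Fischler's actual data.

    # Hypothetical lexical-decision trials: reaction time (ms) for each target,
    # grouped by whether the preceding prime was semantically related.
    from statistics import mean

    trials = [
        {"prime": "cat",   "target": "dog",    "related": True,  "rt_ms": 512},
        {"prime": "nurse", "target": "doctor", "related": True,  "rt_ms": 498},
        {"prime": "bread", "target": "stem",   "related": False, "rt_ms": 587},
        {"prime": "table", "target": "moon",   "related": False, "rt_ms": 601},
    ]

    related_rts = [t["rt_ms"] for t in trials if t["related"]]
    unrelated_rts = [t["rt_ms"] for t in trials if not t["related"]]

    print(f"Mean RT, related pairs:   {mean(related_rts):.0f} ms")
    print(f"Mean RT, unrelated pairs: {mean(unrelated_rts):.0f} ms")
    # A positive difference is the priming effect: related primes speed responses.
    print(f"Priming effect: {mean(unrelated_rts) - mean(related_rts):.0f} ms")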

Eye-movements

Recently, eye tracking has been used to study online language processing. Beginning with Rayner (1978), the importance of understanding eye-movements during reading was established. Later, Tanenhaus et al. (1995) used a visual-world paradigm to study the cognitive processes related to spoken language. Assuming that eye movements are closely linked to the current focus of attention, language processing can be studied by monitoring eye movements while a subject is listening to spoken language.
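
A rough sketch of how visual-world eye-tracking data are commonly summarized is shown below, using invented fixation records rather than data from Tanenhaus et al.: the proportion of fixations to the named object is computed in successive time bins after word onset.

    # Hypothetical visual-world fixations: which object a listener looks at,
    # recorded every 100 ms after the onset of a spoken target word.
    # (Invented data; real studies use many trials and finer sampling.)
    fixations = [
        # (time after word onset in ms, object fixated)
        (100, "distractor"), (200, "distractor"), (300, "competitor"),
        (400, "target"), (500, "target"), (600, "target"),
    ]

    BIN_MS = 300
    bins = {}
    for t, obj in fixations:
        bin_start = (t // BIN_MS) * BIN_MS
        bins.setdefault(bin_start, []).append(obj)

    for bin_start in sorted(bins):
        objs = bins[bin_start]
        prop_target = objs.count("target") / len(objs)
        print(f"{bin_start}-{bin_start + BIN_MS} ms: "
              f"proportion of fixations to target = {prop_target:.2f}")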

Language production errors

The analysis of systematic errors in speech, as well as in written and typed language, can provide evidence of the processes that generate it. Errors of speech, in particular, grant insight into how the mind produces language while a speaker is mid-utterance. Speech errors tend to occur in the lexical, morpheme, and phoneme encoding steps of language production, as can be seen from the ways the errors manifest themselves.

The types of speech errors, with some examples, include:

  • Substitutions (phoneme and lexical) — replacing a sound with an unrelated sound, or a word with its antonym, such as saying "verbal outfit" instead of "verbal output", or "He rode his bike tomorrow" instead of "...yesterday", respectively;
  • Blends — mixing two synonyms and saying "my stummy hurts" in place of either "stomach" or "tummy";
  • Exchanges (phoneme [aka spoonerisms] and morpheme) — swapping two onset sounds or two root words, and saying "You hissed my mystery lectures" instead of "You missed my history lectures", or "They're Turking talkish" instead of "They're talking Turkish", respectively;
  • Morpheme shifts — moving a function morpheme such as "-ly" or "-ed" to a different word and saying "easy enoughly" instead of "easily enough";
  • Perseveration — incorrectly starting a word with a sound that was a part of the previous utterance, such as saying "John gave the goy a ball" instead of "John gave the boy a ball";
  • Anticipation — replacing a sound with one that belongs later in the utterance, such as saying "She drank a cot cup of tea" instead of "She drank a hot cup of tea".

Speech errors usually occur in the stages that involve lexical, morpheme, or phoneme encoding, and usually not in the first stage, semantic encoding. This is because at that stage the speaker is still forming the idea of what to say, and unless the speaker changes their mind, that idea itself cannot be mistaken for what they wanted to say.

Neuroimaging

Until the recent advent of non-invasive medical techniques, brain surgery was the preferred way for language researchers to discover how language affects the brain. For example, severing the corpus callosum (the bundle of nerves that connects the two hemispheres of the brain) was at one time a treatment for some forms of epilepsy. Researchers could then study the ways in which the comprehension and production of language were affected by such drastic surgery. Where an illness made brain surgery necessary, language researchers had an opportunity to pursue their research.

Newer, non-invasive techniques now include brain imaging by positron emission tomography (PET); functional magnetic resonance imaging (fMRI); event-related potentials (ERPs) in electroencephalography (EEG) and magnetoencephalography (MEG); and transcranial magnetic stimulation (TMS). Brain imaging techniques vary in their spatial and temporal resolution: fMRI localizes activity to voxels a few millimetres across (each containing many thousands of neurons) but samples slowly, whereas ERPs have millisecond temporal accuracy but coarse spatial localization. Each methodology has advantages and disadvantages for the study of psycholinguistics.

Computational modeling

Computational modelling is another methodology: it refers to the practice of setting up cognitive models in the form of executable computer programs. Such programs are useful because they require theorists to be explicit in their hypotheses and because they can be used to generate accurate predictions for theoretical models that are so complex that discursive analysis is unreliable. Examples include the DRC (dual-route cascaded) model of reading and word recognition proposed by Max Coltheart and colleagues, McClelland and Elman's TRACE model of speech perception, and Franklin Chang's Dual-Path model of sentence production.
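
As a loose illustration of the interactive-activation style used by models such as TRACE (this is not an implementation of TRACE, DRC, or the Dual-Path model; the lexicon and all parameters are invented), the sketch below lets word units accumulate activation from matching input phonemes while competing with one another through lateral inhibition.

    # Minimal interactive-activation sketch: word units gain activation from
    # matching input phonemes and inhibit competing words. Illustrative only.
    LEXICON = {"cat": ["k", "a", "t"], "cap": ["k", "a", "p"], "dog": ["d", "o", "g"]}
    EXCITATION = 0.1   # assumed bottom-up support per matching phoneme
    INHIBITION = 0.05  # assumed lateral inhibition between word units
    DECAY = 0.3        # assumed passive decay toward rest

    def recognize(input_phonemes, steps=10):
        activation = {word: 0.0 for word in LEXICON}
        for _ in range(steps):
            updated = {}
            for word, phonemes in LEXICON.items():
                overlap = sum(p in phonemes for p in input_phonemes)
                bottom_up = EXCITATION * overlap
                competition = INHIBITION * sum(a for w, a in activation.items() if w != word)
                value = activation[word] * (1 - DECAY) + bottom_up - competition
                updated[word] = min(1.0, max(0.0, value))  # clamp to [0, 1]
            activation = updated
        return activation

    print(recognize(["k", "a", "t"]))  # "cat" should end up most active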

Areas for further research

Psycholinguistics is concerned with the nature of the processes that the brain undergoes in order to comprehend and produce language. For example, the cohort model seeks to describe how words are retrieved from the mental lexicon when an individual hears or sees linguistic input. Using new non-invasive imaging techniques, recent research seeks to shed light on the areas of the brain involved in language processing.

Another unanswered question in psycholinguistics is whether the human ability to use syntax originates from innate mental structures or social interaction, and whether or not some animals can be taught the syntax of human language.

Two other major subfields of psycholinguistics investigate first language acquisition, the process by which infants acquire language, and second language acquisition. It is much more difficult for adults to acquire second languages than it is for infants to learn their first language (infants are able to learn more than one native language easily). Thus, sensitive periods may exist during which language can be learned readily. A great deal of research in psycholinguistics focuses on how this ability develops and diminishes over time. It also seems to be the case that the more languages one knows, the easier it is to learn more.

The field of aphasiology deals with language deficits that arise because of brain damage. Studies in aphasiology can offer both advances in therapy for individuals suffering from aphasia and further insight into how the brain processes language.

