
Wednesday, May 30, 2018

Language acquisition

From Wikipedia, the free encyclopedia
Language acquisition is the process by which humans acquire the capacity to perceive and comprehend language, as well as to produce and use words and sentences to communicate. Language acquisition is one of the quintessential human traits,[1] because non-humans do not communicate by using language.[2] Language acquisition usually refers to first-language acquisition, which studies infants' acquisition of their native language, whether that is a spoken language or a signed language (as in the case of prelingual deafness). This is distinguished from second-language acquisition, which deals with the acquisition (in both children and adults) of additional languages. Beyond speech itself, learning to read and write a language with an entirely different script compounds the complexities of true foreign-language literacy.

Linguists interested in child language acquisition have for many years questioned how language is acquired. Lidz et al. state, "The question of how these structures are acquired, then, is more properly understood as the question of how a learner takes the surface forms in the input and converts them into abstract linguistic rules and representations."[3] Language acquisition thus involves structures, rules, and representations. The capacity to use language successfully requires one to acquire a range of tools, including phonology, morphology, syntax, semantics, and an extensive vocabulary. Language can be vocalized, as in speech, or manual, as in sign. Human language capacity is represented in the brain. Even though human language capacity is finite, one can say and understand an infinite number of sentences, which is based on a syntactic principle called recursion. Evidence suggests that every individual has three recursive mechanisms that allow sentences to be extended indefinitely. These three mechanisms are relativization, complementation, and coordination.[4] Furthermore, there are two main guiding principles in first-language acquisition: speech perception always precedes speech production, and the gradually evolving system by which a child learns a language is built up one step at a time, beginning with the distinction between individual phonemes.[5]
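
To make the recursion point concrete, here is a minimal sketch in Python (my own illustration, not from the article): three toy rewrite functions, one per mechanism, that can extend a sentence without bound. All vocabulary and rules here are invented.

```python
import random

# Three recursive mechanisms that let finite rules generate indefinitely
# long sentences (toy illustration; vocabulary and rules are hypothetical).

def relativize(np):
    # Relativization: embed a relative clause inside a noun phrase.
    return np + " that " + random.choice(["the cat saw", "the dog chased"])

def complement(sentence):
    # Complementation: embed a sentence under a verb of saying/thinking.
    return random.choice(["Mary thinks that ", "John said that "]) + sentence

def coordinate(s1, s2):
    # Coordination: join two sentences with a conjunction.
    return s1 + " and " + s2

sentence = "the mouse ran"
for _ in range(3):                       # apply complementation repeatedly
    sentence = complement(sentence)
print(sentence)                          # e.g. "Mary thinks that John said that ..."
print(coordinate(relativize("the mouse"), "it escaped"))
```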

History


Learning box for language acquisition

Philosophers in ancient societies were interested in how humans acquired the ability to understand and produce language well before empirical methods for testing those theories were developed, but for the most part they seemed to regard language acquisition as a subset of man's ability to acquire knowledge and learn concepts.[6] Some early observation-based ideas about language acquisition were proposed by Plato, who felt that word-meaning mapping in some form was innate. Additionally, Sanskrit grammarians debated for over twelve centuries whether humans' ability to recognize the meaning of words was god-given (possibly innate) or passed down by previous generations and learned from already established conventions: a child learning the word for cow by listening to trusted speakers talking about cows.[7]

In a more modern context, empiricists like Thomas Hobbes and John Locke argued that knowledge (and, for Locke, language) emerges ultimately from abstracted sense impressions. These arguments lean towards the "nurture" side of the argument: that language is acquired through sensory experience. This line of thought led to Rudolf Carnap's Aufbau, an attempt to derive all knowledge from sense data, using the notion of "remembered as similar" to bind them into clusters, which would eventually map into language.[8]

Proponents of behaviorism argued that language may be learned through a form of operant conditioning. In Verbal Behavior (1957), B. F. Skinner suggested that the successful use of a sign, such as a word or lexical unit, given a certain stimulus, reinforces its "momentary" or contextual probability. Since operant conditioning is contingent on reinforcement by rewards, a child would learn that a specific combination of sounds stands for a specific thing through repeated successful associations made between the two. A "successful" use of a sign would be one in which the child is understood (for example, a child saying "up" when he or she wants to be picked up) and rewarded with the desired response from another person, thereby reinforcing the child's understanding of the meaning of that word and making it more likely that he or she will use that word in a similar situation in the future. Some empiricist theories of language acquisition include the statistical learning theory of language acquisition associated with Charles F. Hockett, relational frame theory, functionalist linguistics, social interactionist theory, and usage-based language acquisition.
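
As a rough illustration of the operant-conditioning account (a sketch under my own assumptions; the update rule and numbers are invented, not Skinner's), reinforcement can be modeled as nudging up the probability of a rewarded utterance:

```python
import random

# Sketch under Skinner-style assumptions: each reinforced use of "up"
# raises its "momentary" probability of recurring in the same context.
# Learning rate and initial probability are illustrative assumptions.

prob_say_up = 0.1          # initial chance of saying "up" when wanting to be held
learning_rate = 0.2

for trial in range(20):
    says_up = random.random() < prob_say_up
    if says_up:
        # the parent understands and picks the child up: reinforcement
        prob_say_up += learning_rate * (1.0 - prob_say_up)

print(f"probability of 'up' after 20 trials: {prob_say_up:.2f}")
```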

Skinner's behaviorist idea was strongly attacked by Noam Chomsky in a review article in 1959, which called it "largely mythology" and a "serious delusion."[9] Arguments against Skinner's idea of language acquisition through operant conditioning include the fact that children often ignore language corrections from adults. Instead, children typically follow a pattern of using an irregular form of a word correctly, making errors later on, and eventually returning to the proper use of the word. For example, a child may correctly learn the word "gave" (past tense of "give"), later use the form "gived", and eventually return to the correct word, "gave". This pattern is difficult to attribute to Skinner's idea of operant conditioning as the primary way that children acquire language. Chomsky argued that if language were acquired solely through behavioral conditioning, children would be unlikely to learn the proper use of a word and then suddenly begin using it incorrectly.[10] Chomsky believed that Skinner failed to account for the central role of syntactic knowledge in language competence. Chomsky also rejected the term "learning", which Skinner used to claim that children "learn" language through operant conditioning.[11] Instead, Chomsky argued for a mathematical approach to language acquisition, based on a study of syntax.

General approaches


A lesson at Kituwah Academy on the Qualla Boundary in North Carolina. The language immersion school, operated by the Eastern Band of Cherokee Indians, teaches the same curriculum as other American primary schools, but the Cherokee language is the medium of instruction from pre-school on up and students learn it as a first language. Such schools have proven instrumental in the preservation and perpetuation of the Cherokee language.

A major debate in understanding language acquisition is how these capacities are picked up by infants from the linguistic input.[12] Input in the linguistic context is defined as "All words, contexts, and other forms of language to which a learner is exposed, relative to acquired proficiency in first or second languages". Nativists such as Noam Chomsky have focused on the hugely complex nature of human grammars, the finiteness and ambiguity of the input that children receive, and the relatively limited cognitive abilities of an infant. From these characteristics, they conclude that the process of language acquisition in infants must be tightly constrained and guided by the biologically given characteristics of the human brain. Otherwise, they argue, it is extremely difficult to explain how children, within the first five years of life, routinely master the complex, largely tacit grammatical rules of their native language.[13] Additionally, the evidence of such rules in their native language is all indirect—adult speech to children cannot encompass what children know by the time they've acquired their native language.[14]

Other scholars, however, have resisted the possibility that infants' routine success at acquiring the grammar of their native language requires anything more than the forms of learning seen with other cognitive skills, including such mundane motor skills as learning to ride a bike. In particular, there has been resistance to the possibility that human biology includes any form of specialization for language. This conflict is often referred to as the "nature and nurture" debate. Of course, most scholars acknowledge that certain aspects of language acquisition must result from the specific ways in which the human brain is "wired" (a "nature" component, which accounts for the failure of non-human species to acquire human languages) and that certain others are shaped by the particular language environment in which a person is raised (a "nurture" component, which accounts for the fact that humans raised in different societies acquire different languages). The as-yet unresolved question is the extent to which the specific cognitive capacities in the "nature" component are also used outside of language.

Social interactionism

Social interactionist theory is an explanation of language development emphasizing the role of social interaction between the developing child and linguistically knowledgeable adults. It is based largely on the socio-cultural theories of Soviet psychologist Lev Vygotsky, and made prominent in the Western world by Jerome Bruner.[15]

Unlike other approaches, it emphasizes the role of feedback and reinforcement in language acquisition. Specifically, it asserts that much of a child's linguistic growth stems from modeling of and interaction with parents and other adults, who very frequently provide instructive correction.[16] It is thus somewhat similar to behaviorist accounts of language, though it differs substantially in that it posits the existence of a social-cognitive model and other mental structures within children (a sharp contrast to the "black box" approach of classical behaviorism).

Another key idea within the theory of social interactionism is that of the zone of proximal development. Briefly, this is a theoretical construct denoting the set of tasks a child is capable of performing with guidance, but not alone.[17] As applied to language, it describes the set of linguistic tasks (proper syntax, suitable vocabulary usage, etc.) a child cannot carry out on their own at a given time, but can learn to carry out if assisted by an able adult.

Relational frame theory

Relational frame theory (RFT; Hayes, Barnes-Holmes, & Roche, 2001) provides a wholly selectionist/learning account of the origin and development of language competence and complexity. Based upon the principles of Skinnerian behaviorism, RFT posits that children acquire language purely through interacting with the environment. RFT theorists introduced the concept of functional contextualism in language learning, which emphasizes the importance of predicting and influencing psychological events, such as thoughts, feelings, and behaviors, by focusing on manipulable variables in their context. RFT distinguishes itself from Skinner's work by identifying and defining a particular type of operant conditioning known as derived relational responding, a learning process that, to date, appears to occur only in humans possessing a capacity for language. Empirical studies supporting the predictions of RFT suggest that children learn language via a system of inherent reinforcements, challenging the view that language acquisition is based upon innate, language-specific cognitive capacities.[18]

Emergentism

Emergentist theories, such as MacWhinney's competition model, posit that language acquisition is a cognitive process that emerges from the interaction of biological pressures and the environment. According to these theories, neither nature nor nurture alone is sufficient to trigger language learning; both of these influences must work together in order to allow children to acquire a language. The proponents of these theories argue that general cognitive processes subserve language acquisition and that the end result of these processes is language-specific phenomena, such as word learning and grammar acquisition. The findings of many empirical studies support the predictions of these theories, suggesting that language acquisition is a more complex process than many believe.[19]

Syntax and morphology

As syntax began to be studied more closely in the early 20th century in relation to language learning, it became apparent to linguists, psychologists, and philosophers that knowing a language was not merely a matter of associating words with concepts, but that a critical aspect of language involves knowledge of how to put words together; sentences are usually needed in order to communicate successfully, not just isolated words.[6] A child will use short expressions such as Bye-bye Mummy or All-gone milk, which actually are combinations of individual nouns and an operator,[20] before beginning to use gradually more complex sentences. In the 1990s, within the principles and parameters framework, this hypothesis was extended into a maturation-based structure-building model of child language regarding the acquisition of functional categories. In this model, children are seen as gradually building up more and more complex structures, with lexical categories (like noun and verb) being acquired before functional-syntactic categories (like determiner and complementizer).[21] It is also often found, in languages such as English, that the most frequently used verbs are irregular verbs.[citation needed] Young children first learn the past tense of verbs individually; however, once they acquire a "rule", such as adding -ed to form the past tense, they begin to exhibit occasional overgeneralization errors (e.g. "runned", "hitted") alongside correct past-tense forms. One influential proposal regarding the origin of these errors is as follows: the adult state of grammar stores each irregular verb form in memory as well as a "block" on the use of the regular rule for forming that type of verb. In the developing child's mind, retrieval of that "block" may fail, causing the child to erroneously apply the regular rule instead of retrieving the irregular form.[22][23]
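
A hedged sketch of this blocking-and-retrieval proposal (my toy rendering; the retrieval probability and verb list are invented assumptions):

```python
import random

# Blocking-and-retrieval sketch: stored irregular forms ("ran") block the
# regular -ed rule, but retrieval sometimes fails, yielding
# overgeneralizations ("runned"). The retrieval probability is hypothetical.

IRREGULARS = {"run": "ran", "hit": "hit", "give": "gave"}
retrieval_prob = 0.7              # grows toward 1.0 as the child matures

def past_tense(verb):
    if verb in IRREGULARS and random.random() < retrieval_prob:
        return IRREGULARS[verb]   # irregular form retrieved; regular rule blocked
    return verb + "ed"            # retrieval failed: regular rule applies

print([past_tense("run") for _ in range(5)])   # mixture of "ran" and "runned"
```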

Generativism

Generative grammar, associated especially with the work of Noam Chomsky, is currently one of the approaches to children's acquisition of syntax.[24] The leading idea is that human biology imposes narrow constraints on the child's "hypothesis space" during language acquisition. In the principles and parameters framework, which has dominated generative syntax since Chomsky's (1980) Lectures on Government and Binding: The Pisa Lectures, the acquisition of syntax resembles ordering from a menu: the human brain comes equipped with a limited set of choices, from which the child selects the correct options by using the parents' speech, in combination with the context.[25]
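
The menu metaphor can be sketched as follows (my illustration; the single head-direction parameter and its trigger are drastically simplified assumptions, not the actual theory's inventory):

```python
# Parameter-setting sketch: the learner flips a small set of innate binary
# switches based on input. One hypothetical "head-direction" parameter,
# set from a single invented trigger sentence.

PARAMETERS = {"head_initial": None}   # unset at birth in this sketch

def observe(utterance):
    # Hypothetical trigger: if the verb precedes its object, set head-initial.
    words = utterance.split()
    if "eats" in words:
        verb_pos = words.index("eats")
        PARAMETERS["head_initial"] = verb_pos < len(words) - 1

observe("the child eats apples")      # verb-object order, as in English
print(PARAMETERS)                      # {'head_initial': True}
```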

An important argument in favor of the generative approach is the poverty of the stimulus argument. The child's input (a finite number of sentences encountered by the child, together with information about the context in which they were uttered) is, in principle, compatible with an infinite number of conceivable grammars. Moreover, few, if any, children can rely on corrective feedback from adults when they make a grammatical error, because adults generally respond regardless of whether a child's utterance was grammatical, and children have no way of discerning whether a response was intended as a correction. Additionally, when children do understand that they are being corrected, they do not always reproduce accurate restatements. Yet, barring situations of medical abnormality or extreme privation, all the children in a given speech community converge on very much the same grammar by the age of about five years. An especially dramatic example is provided by children who, for medical reasons, are unable to produce speech and therefore can never be corrected for a grammatical error, yet who nonetheless converge on the same grammar as their typically developing peers, according to comprehension-based tests of grammar.[28][29]

Considerations such as those have led Chomsky, Jerry Fodor, Eric Lenneberg and others to argue that the types of grammar the child needs to consider must be narrowly constrained by human biology (the nativist position).[30] These innate constraints are sometimes referred to as universal grammar, the human "language faculty", or the "language instinct".[31]

Empiricism

Although Chomsky's theory of generative grammar has been enormously influential in the field of linguistics since the 1950s, many criticisms of the basic assumptions of generative theory have been put forth by cognitive-functional linguists, who argue that language structure is created through language use.[32] These linguists argue that the concept of a language acquisition device (LAD) is unsupported by evolutionary anthropology, which tends to show a gradual adaptation of the human brain and vocal cords to the use of language, rather than a sudden appearance of a complete set of binary parameters delineating the whole spectrum of possible grammars ever to have existed and ever to exist.[33] On the other hand, cognitive-functional theorists use this anthropological data to show how human beings have evolved the capacity for grammar and syntax to meet our demand for linguistic symbols. (Binary parameters are common to digital computers, but may not be applicable to neurological systems such as the human brain.)[citation needed]

Further, the generative theory has several constructs (such as movement, empty categories, complex underlying structures, and strict binary branching) that cannot possibly be acquired from any amount of linguistic input. It is unclear that human language is actually anything like the generative conception of it. Since language, as imagined by nativists, is unlearnably complex,[citation needed] subscribers to this theory argue that it must, therefore, be innate.[34] Nativists hypothesize that some features of syntactic categories exist before a child is exposed to any experience: categories onto which children map the words of their language as they learn their native language.[35] A different theory of language, however, may yield different conclusions. While all theories of language acquisition posit some degree of innateness, they vary in how much value they place on this innate capacity to acquire language. Empiricism places less value on innate knowledge, arguing instead that the input, combined with both general and language-specific learning capacities, is sufficient for acquisition.[36]

Since 1980, linguists studying children, such as Melissa Bowerman,[37] and psychologists following Jean Piaget, like Elizabeth Bates[38] and Jean Mandler, have come to suspect that there may indeed be many learning processes involved in the acquisition process, and that ignoring the role of learning may have been a mistake.[citation needed]

In recent years, the debate surrounding the nativist position has centered on whether the inborn capabilities are language-specific or domain-general, such as those that enable the infant to visually make sense of the world in terms of objects and actions. The anti-nativist view has many strands, but a frequent theme is that language emerges from usage in social contexts, using learning mechanisms that are a part of a general cognitive learning apparatus (which is what is innate). This position has been championed by David M. W. Powers,[39] Elizabeth Bates,[40] Catherine Snow, Anat Ninio, Brian MacWhinney, Michael Tomasello,[41] Michael Ramscar,[42] William O'Grady,[43] and others. Philosophers, such as Fiona Cowie[44] and Barbara Scholz with Geoffrey Pullum[45] have also argued against certain nativist claims in support of empiricism.

The new field of cognitive linguistics has emerged as a specific counter to Chomskian Generative Grammar and Nativism.

Statistical learning

Some language acquisition researchers, such as Elissa Newport, Richard Aslin, and Jenny Saffran, emphasize the possible roles of general learning mechanisms, especially statistical learning, in language acquisition. The development of connectionist models that are able to successfully learn words and syntactical conventions[46] supports the predictions of statistical learning theories of language acquisition, as do empirical studies of children's detection of word boundaries.[47]

Statistical learning theory suggests that, when learning language, a learner would use the natural statistical properties of language to deduce its structure, including sound patterns, words, and the beginnings of grammar.[48] That is, language learners are sensitive to how often syllable combinations or words occur in relation to other syllables.[49][50][51] Infants between 21 months and 23 months old are also able to use statistical learning to develop "lexical categories", such as an animal category, which infants might later map to newly learned words in the same category. These findings suggest that early experience listening to language is critical to vocabulary acquisition.[52]
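
A minimal sketch of this idea, loosely in the spirit of the classic syllable-segmentation experiments (my simplification; the artificial three-syllable lexicon is invented): transitional probabilities are high inside words and low across word boundaries, so a learner can posit boundaries where predictability drops.

```python
import random
from collections import Counter

# Build a continuous syllable stream from an invented artificial lexicon,
# then compute transitional probabilities P(b | a) between adjacent syllables.

WORDS = [("go", "la", "bu"), ("ti", "ba", "do"), ("pa", "ku", "ri")]
stream = [syll for _ in range(300) for syll in random.choice(WORDS)]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_prob(a, b):
    # P(b | a): how predictable syllable b is after syllable a
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_prob("go", "la"))   # ~1.0: within-word transition
print(transitional_prob("bu", "ti"))   # ~0.33: across a word boundary
```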

The statistical abilities are effective, but also limited by what qualifies as input, what is done with that input, and by the structure of the resulting output.[48] One should also note that statistical learning (and more broadly, distributional learning) can be accepted as a component of language acquisition by researchers on either side of the "nature and nurture" debate. From the perspective of that debate, an important question is whether statistical learning can, by itself, serve as an alternative to nativist explanations for the grammatical constraints of human language.

Chunking

Chunking theories of language acquisition constitute a group of theories related to statistical learning theories, in that they assume the input from the environment plays an essential role; however, they postulate different learning mechanisms. The central idea of these theories is that language development occurs through the incremental acquisition of meaningful chunks of elementary constituents, which can be words, phonemes, or syllables. Recently, this approach has been highly successful in simulating several phenomena in the acquisition of syntactic categories[53] and the acquisition of phonological knowledge.[54] The approach has several features that make it unique: the models are implemented as computer programs, which enables clear-cut and quantitative predictions to be made; they learn from naturalistic input, made of actual child-directed utterances; they produce actual utterances, which can be compared with children's utterances; and they have simulated phenomena in several languages, including English, Spanish, and German.[citation needed]
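
As a rough sketch of the chunking idea (my toy model, not any of the cited implementations; the corpus and threshold are invented), frequently co-occurring word pairs can be stored as reusable chunks:

```python
from collections import Counter

# Tally adjacent word pairs over child-directed-style utterances and keep
# the frequent ones as "chunks". Corpus and threshold are illustrative.

corpus = ["do you want it", "do you see it", "you want the ball",
          "do you want the ball", "you see the ball"]

pair_counts = Counter()
for utterance in corpus:
    words = utterance.split()
    pair_counts.update(zip(words, words[1:]))

CHUNK_THRESHOLD = 3
chunks = {pair for pair, n in pair_counts.items() if n >= CHUNK_THRESHOLD}
print(chunks)    # {('do', 'you'), ('you', 'want'), ('the', 'ball')}
```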

Researchers at the Max Planck Institute for Evolutionary Anthropology have developed a computer model analyzing early toddler conversations to predict the structure of later conversations. They showed that toddlers develop their own individual rules for speaking with slots, into which they could put certain kinds of words. A significant outcome of the research was that rules inferred from toddler speech were better predictors of subsequent speech than traditional grammars.[55]

Representation in the brain

Recent advances in functional neuroimaging technology have allowed for a better understanding of how language acquisition is manifested physically in the brain. Language acquisition almost always occurs in children during a period of rapid increase in brain volume. At this point in development, a child has many more neural connections than he or she will have as an adult, allowing the child to learn new things more readily than would be possible in adulthood.[56]

Sensitive period

Language acquisition has been studied from the perspective of developmental psychology and neuroscience,[57] which look at learning to use and understand language in parallel with a child's brain development. It has been determined, through empirical research on developmentally normal children, as well as through some extreme cases of language deprivation, that there is a "sensitive period" of language acquisition in which human infants have the ability to learn any language. Several studies have found that from birth until the age of six months, infants can discriminate the phonetic contrasts of all languages. Researchers believe that this gives infants the ability to acquire the language spoken around them. After this age, the child is able to perceive only the phonemes specific to the language being learned. The reduced phonemic sensitivity enables children to build phonemic categories and recognize stress patterns and sound combinations specific to the language they are acquiring.[58] As Wilder Penfield noted, "Before the child begins to speak and to perceive, the uncommitted cortex is a blank slate on which nothing has been written. In the ensuing years much is written, and the writing is normally never erased. After the age of ten or twelve, the general functional connections have been established and fixed for the speech cortex." According to the sensitive or critical period models, the age at which a child acquires the ability to use language is a predictor of how well he or she is ultimately able to use language.[59] However, there may be an age at which becoming a fluent and natural user of a language is no longer possible; Penfield and Roberts (1959) cap their sensitive period at nine years old.[60] Our brains may be automatically wired to learn languages,[citation needed] but this ability does not last into adulthood in the same way that it exists during development.[citation needed] By the onset of puberty (around age 12), language acquisition has typically been solidified, and it becomes more difficult to learn a language in the same way a native speaker would.[citation needed] Deaf children go through the same critical period as children who speak vocally. Deaf children who acquire their first language later in life show lower performance in complex aspects of grammar.[61] At that point, it is usually a second language that a person is trying to acquire, not a first.[13]

Assuming that children are exposed to language during the critical period,[62] cognitively normal children almost never fail to acquire it; humans are so well prepared to learn language that it becomes almost impossible not to. Researchers are unable to test experimentally the effects of the sensitive period of development on language acquisition, because it would be unethical to deprive children of language until this period is over. However, case studies of abused, language-deprived children show that they were extremely limited in their language skills, even after instruction.[63]

At a very young age, children can already distinguish between different sounds but cannot yet produce them. During infancy, children begin to babble. Deaf babies babble in the same developmental order as hearing babies do, showing that babbling is not caused by babies simply imitating certain sounds, but is actually a natural part of the process of language development. However, deaf babies often babble less than hearing babies, and they begin to babble later in infancy (at around 11 months as compared to 6 months).[64]

Prelinguistic abilities that are crucial for language acquisition have been observed even earlier than infancy. Many studies have examined different modes of language acquisition prior to birth. The study of language acquisition in fetuses began in the late 1980s, when several researchers discovered that very young infants could discriminate their native language from other languages. In Mehler et al. (1988),[65] infants underwent discrimination tests, and it was shown that infants as young as four days old could discriminate utterances in their native language from an unfamiliar language, but could not discriminate between two languages when neither was native to them. These results suggest that there are mechanisms for fetal auditory learning, and other researchers have found further behavioral evidence to support this notion. Fetal auditory learning through habituation to the environment has been observed in a variety of forms, such as fetal learning of familiar melodies (Hepper, 1988),[66] story fragments (DeCasper & Spence, 1986),[67] recognition of the mother's voice (Kisilevsky, 2003),[68] and other evidence of fetal adaptation to native linguistic environments (Moon, Cooper & Fifer, 1993).[69]

Prosody is the property of speech that conveys the emotional state of the utterance, as well as the intended form of speech (whether it be a question, statement, or command). Some researchers in the field of developmental neuroscience argue that fetal auditory learning mechanisms are due solely to discrimination of prosodic elements. Although this would hold merit from an evolutionary psychology perspective (i.e. recognition of the mother's voice or of familiar group language from emotionally valent stimuli), some theorists argue that there is more than prosodic recognition in elements of fetal learning. Newer evidence shows that fetuses not only react to the native language differently from nonnative languages, but that fetuses can accurately discriminate between native and nonnative vowels (Moon, Lagercrantz, & Kuhl, 2013).[70] Furthermore, a 2016 study showed that newborn infants encode the edges of multisyllabic sequences better than the internal components of the sequence (Ferry et al., 2016).[71] Together, these results suggest that newborn infants have learned important properties of syntactic processing in utero, as demonstrated by infants' knowledge of native-language vowels and the sequencing of heard multisyllabic phrases. This ability to sequence specific vowels gives newborn infants some of the fundamental mechanisms needed to learn the complex organization of a language. From a neuroscientific perspective, neural correlates have been found that demonstrate human fetal learning of the speech-like auditory stimuli that most other studies have analyzed (Partanen et al., 2013).[72] In a study conducted by Partanen et al. (2013),[72] researchers presented fetuses with certain word variants and observed that these fetuses exhibited higher brain activity in response to those word variants, compared with controls. In the same study, "a significant correlation existed between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure," pointing to important learning mechanisms present before birth that are fine-tuned to features of speech (Partanen et al., 2013).[72]


The phases of language acquisition in children

Vocabulary acquisition

The capacity to acquire and pronounce new words depends upon many factors. Before anything else, the learner needs to be able to hear what they are attempting to pronounce. Another factor is the capacity to engage in speech repetition.[73][74][75][76] Children with reduced abilities to repeat nonwords (a marker of speech repetition abilities) show a slower rate of vocabulary expansion than children for whom this is easy.[77] It has been proposed that the elementary units of speech have been selected to enhance the ease with which sound and visual input can be mapped into motor vocalization.[78] Several computational models of vocabulary acquisition have been proposed;[79][80][81][82][83][84][85] one such approach is sketched below. Various studies have shown that the size of a child's vocabulary by the age of 24 months correlates with the child's future development and language skills. A lack of language richness by this age has detrimental and long-term effects on the child's cognitive development, which is why it is so important for parents to engage their infants in language. If a child knows fifty words or fewer by the age of 24 months, he or she is classified as a late talker, and future language development, like vocabulary expansion and the organization of grammar, is likely to be slower and stunted.[citation needed]
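
One family of such models, cross-situational word learning, can be sketched as follows (my simplified illustration, not any specific cited model; the mini-corpus of scenes and utterances is invented): a word's meaning is whichever referent co-occurs with it most consistently across situations.

```python
from collections import defaultdict

# Cross-situational learning sketch: tally word-referent co-occurrences
# across scenes and pick the most consistent pairing. Data are invented.

situations = [
    (("ball", "dog"), "look at the ball"),   # (visible referents, utterance)
    (("ball", "cup"), "the ball rolls"),
    (("dog", "cup"), "the dog barks"),
]

scores = defaultdict(lambda: defaultdict(int))
for referents, utterance in situations:
    for word in utterance.split():
        for referent in referents:
            scores[word][referent] += 1      # tally co-occurrences

best = max(scores["ball"], key=scores["ball"].get)
print("ball ->", best)                        # 'ball': present in both ball scenes
```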

Two more crucial elements of vocabulary acquisition are word segmentation and statistical learning (described above). Word segmentation, or the segmentation of words and syllables from fluent speech, can be accomplished by eight-month-old infants.[49] By the time infants are 17 months old, they are able to link meaning to segmented words.[50]

Recent evidence also suggests that motor skills and experiences may influence vocabulary acquisition during infancy. Specifically, learning to sit independently between 3 and 5 months of age has been found to predict receptive vocabulary at both 10 and 14 months of age,[86] and independent walking skills have been found to correlate with language skills around 10 to 14 months of age.[87][88] These findings show that language acquisition is an embodied process that is influenced by a child's overall motor abilities and development. Studies have also shown a correlation between socioeconomic status and vocabulary acquisition.[89]

Meaning

Children learn, on average, ten to fifteen new word meanings each day, but only one of these words can be accounted for by direct instruction.[90] The other nine to fourteen word meanings need to be picked up in some other way. It has been proposed that children acquire these meanings with the use of processes modeled by latent semantic analysis; that is, when they meet an unfamiliar word, children can use information in its context to correctly guess its rough area of meaning.[90] A child may expand the meaning and use of certain words that are already part of its mental lexicon in order to denominate anything that is somehow related but for which it does not know the specific words yet. For instance, a child may broaden the use of mummy and dada in order to indicate anything that belongs to its mother or father, or perhaps every person who resembles its own parents, or say rain while meaning I don't want to go out.[91]
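
A hedged sketch of the context-based inference idea (my simplification: raw co-occurrence counts and cosine similarity stand in for full SVD-based latent semantic analysis; the mini-corpus and the novel word "wug" are invented):

```python
import numpy as np

# Place an unfamiliar word ("wug") near familiar words that share its
# contexts, approximating its rough area of meaning. Corpus is invented.

corpus = ["the cow eats grass", "the pig eats grass", "the cow gives milk",
          "the wug eats grass"]          # "wug" is the unfamiliar word

vocab = sorted({w for sent in corpus for w in sent.split()})
index = {w: i for i, w in enumerate(vocab)}
vectors = np.zeros((len(vocab), len(vocab)))

for sent in corpus:
    words = sent.split()
    for w in words:
        for c in words:                  # co-occurrence within a sentence
            if c != w:
                vectors[index[w], index[c]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for w in ["cow", "pig", "milk"]:
    print(w, round(cosine(vectors[index["wug"]], vectors[index[w]]), 2))
# "wug" comes out most similar to "pig" and "cow": roughly the right meaning area
```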

There is also reason to believe that children use various heuristics to properly infer the meaning of words. Markman and others have proposed that children assume words to refer to objects with similar properties ("cow" and "pig" might both be "animals") rather than to objects that are thematically related ("cow" and "milk" are probably not both "animals").[92] Children also seem to adhere to the "whole object assumption" and think that a novel label refers to an entire entity rather than one of its parts.[92] This assumption along with other resources, such as grammar and morphological cues or lexical constraints, may help aid the child in acquiring word-meaning, but they also conflict some of the time.[93]

Neurocognitive research

According to several linguists, neurocognitive research has confirmed many standards of language learning, such as: "learning engages the entire person (cognitive, affective, and psychomotor domains), the human brain seeks patterns in its searching for meaning, emotions affect all aspects of learning, retention and recall, past experience always affects new learning, the brain's working memory has a limited capacity, lecture usually results in the lowest degree of retention, rehearsal is essential for retention, practice [alone] does not make perfect, and each brain is unique" (Sousa, 2006, p. 274). In terms of genetics, the gene ROBO1 has been associated with phonological buffer integrity or length.[94]

Although it is difficult to determine without invasive measures which exact parts of the brain become most active and important for language acquisition, fMRI and PET technology has allowed some conclusions to be made about where language may be centered. Kuniyoshi Sakai proposed, based on several neuroimaging studies, that there may be a "grammar center" in the left lateral premotor cortex (located near the precentral sulcus and the inferior frontal sulcus), where language is primarily processed. Additionally, these studies suggested that first-language and second-language acquisition may be represented differently in the cortex.[13] In a study conducted by Newman et al., the relationship between cognitive neuroscience and language acquisition was examined through a standardized test procedure involving native speakers of English and native Spanish speakers who had all had a similar amount of exposure to the English language (averaging about 26 years). Even the number of times an examinee blinked was taken into account during the examination process. It was concluded that the brain does in fact process languages differently, but that, instead of being directly related to proficiency levels, the difference is more about how the brain processes language itself.[95]

During early infancy, language processing seems to occur over many areas in the brain. However, over time, it gradually becomes concentrated into two areas – Broca's area and Wernicke's area. Broca's area is in the left frontal cortex and is primarily involved in the production of the patterns in vocal and sign language. Wernicke's area is in the left temporal cortex and is primarily involved in language comprehension. The specialization of these language centers is so extensive that damage to them results in a critical condition known as aphasia.[96]

Artificial intelligence

Language acquisition can be modeled as a machine learning process using grammar induction algorithms.[97][98]
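
A minimal sketch of grammar induction (my illustration using one of the simplest techniques, a prefix-tree acceptor of the kind used as the starting point of state-merging algorithms such as RPNI; the example strings are invented):

```python
# Build a prefix-tree acceptor from positive examples; each trie state
# becomes a nonterminal of a regular grammar. A later merging step (not
# shown) would collapse equivalent states, generalizing to (ab)+.

examples = ["ab", "abab", "ababab"]     # positive examples of (ab)+

def build_prefix_tree(strings):
    # states are prefixes; transitions follow single symbols
    transitions, accepting = {}, set()
    for s in strings:
        for i, ch in enumerate(s):
            transitions[(s[:i], ch)] = s[:i + 1]
        accepting.add(s)
    return transitions, accepting

transitions, accepting = build_prefix_tree(examples)
for (state, symbol), target in sorted(transitions.items()):
    print(f"<{state or 'START'}> -> {symbol} <{target}>")
```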

As a typically human phenomenon

The capacity to acquire and use language is a key aspect that distinguishes humans from other beings. Although it is difficult to pin down what aspects of language are uniquely human, there are a few design features that can be found in all known forms of human language, but that are missing from forms of animal communication. For example, many animals are able to communicate with each other by signaling to the things around them, but this kind of communication lacks the arbitrariness of human vernaculars (in that there is nothing about the sound of the word "dog" that would hint at its meaning). Other forms of animal communication may utilize arbitrary sounds, but are unable to combine those sounds in different ways to create completely novel messages that can then be automatically understood by another. Hockett called this design feature of human language "productivity". It is crucial to the understanding of human language acquisition that we are not limited to a finite set of words, but, rather, must be able to understand and utilize a complex system that allows for an infinite number of possible messages. So, while many forms of animal communication exist, they differ from human languages in that they have a limited range of vocabulary tokens, and the vocabulary items are not combined syntactically to create phrases.[41]

Prelingual deafness

Prelingual deafness is defined as hearing loss present at birth or occurring before an individual has learned to speak. In the United States, 2 to 3 out of every 1000 children are born deaf or hard of hearing. Even though it might be presumed that deaf children acquire language in different ways, since they do not receive the same auditory input as hearing children, many research findings indicate that deaf children acquire language in the same way that hearing children do and, when given the proper language input, understand and express language just as well as their hearing peers. Babies who learn sign language produce signs or gestures that are more regular and more frequent than the gestures of hearing babies acquiring spoken language. Just as hearing babies babble, deaf babies acquiring sign language will babble with their hands, otherwise known as manual babbling. Therefore, as many studies have shown, language acquisition by deaf children parallels the acquisition of a spoken language by hearing children, because humans are biologically equipped for language regardless of the modality.

Signed Language Acquisition

Deaf children's visual-manual language acquisition not only parallels spoken language acquisition; by the age of 30 months, most deaf children who were exposed to a visual language had a more advanced grasp of subject-pronoun copy rules than hearing children. Their vocabulary at the ages of 12-17 months exceeds that of a hearing child's, though it does even out when they reach the two-word stage. The use of space for absent referents and the more complex handshapes in some signs prove to be difficult for children between 5 and 9 years of age, because of motor development and the complexity of remembering the spatial use. Despite certain myths about deaf children signing, their acquisition of a signed language not only develops normally, it exceeds that of a hearing child's at certain points.

Cochlear Implants

Other options besides sign language for children with prelingual deafness include the use of hearing aids to strengthen remaining sensory cells, or cochlear implants to stimulate the hearing nerve directly. Cochlear implants are hearing devices that are placed behind the ear and contain a receiver and electrodes which are placed under the skin and inside the cochlea. Despite these developments, there is still a risk that prelingually deaf children may not develop good speech and speech-reception skills. Although cochlear implants produce sounds, these are unlike typical hearing, and deaf and hard-of-hearing people must undergo intensive therapy in order to learn how to interpret the sounds. They must also learn how to speak given whatever range of hearing they may or may not have. However, deaf children of deaf parents tend to do better with language, even though they are isolated from sound and speech, because their language uses a mode of communication that is accessible to them: the visual modality.

Although cochlear implants were initially approved for adults, there is now pressure to implant children early in order to maximize auditory skills for mainstream learning, which in turn has created controversy around the topic. Due to recent advances in technology, cochlear implants allow some deaf people to acquire some sense of hearing. The device has an interior component that is surgically implanted and an exposed exterior component. Those who receive cochlear implants earlier in life show more improvement in speech comprehension and language. Spoken language development does vary widely for those with cochlear implants, though, due to a number of different factors, including age at implantation and the frequency, quality, and type of speech training. Some evidence suggests that speech processing occurs at a more rapid pace in some prelingually deaf children with cochlear implants than in those with traditional hearing aids. However, cochlear implants may not always work.

Research shows that people develop better language with a cochlear implant when they have a solid first language to rely on in understanding the second language they are learning. In the case of prelingually deaf children with cochlear implants, a signed language such as American Sign Language would be an accessible language for them to learn, helping support the use of the cochlear implant as they learn a spoken language as their L2. Without a solid, accessible first language, these children run the risk of language deprivation, especially in the case that a cochlear implant fails to work. They would then have no access to sound, and therefore no access to the spoken language they are supposed to be learning. If neither a signed language nor a spoken language has been established as a strong language for them, they have no access to any language and run the risk of missing their critical period.

Intelligence

From Wikipedia, the free encyclopedia
Intelligence has been defined in many different ways to include the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, and problem solving. It can be more generally described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

Intelligence is most widely studied in humans but has also been observed in both non-human animals and in plants. Intelligence in machines is called artificial intelligence, which is commonly implemented in computer systems using program software.

History of the term

The term intelligence derives from the Latin nouns intelligentia or intellēctus, which in turn stem from the verb intelligere, to comprehend or perceive. In the Middle Ages, intellectus became the scholarly technical term for understanding, and a translation for the Greek philosophical term nous. This term, however, was strongly linked to the metaphysical and cosmological theories of teleological scholasticism, including theories of the immortality of the soul and the concept of the Active Intellect (also known as the Active Intelligence). This entire approach to the study of nature was strongly rejected by early modern philosophers such as Francis Bacon, Thomas Hobbes, John Locke, and David Hume, all of whom preferred the word "understanding" (instead of "nous", "intellectus" or "intelligence") in their English philosophical works.[1][2] Hobbes, for example, in his Latin De Corpore, used "intellectus intelligit" (translated in the English version as "the understanding understandeth") as a typical example of a logical absurdity.[3] The term "intelligence" has therefore become less common in English-language philosophy, but it was later taken up (with the scholastic theories that it now implies) in more contemporary psychology.[4]

Definitions

The definition of intelligence is controversial.[5] Some groups of psychologists have suggested the following definitions:

From "Mainstream Science on Intelligence" (1994), an op-ed statement in the Wall Street Journal signed by fifty-two researchers (out of 131 total invited to sign):[6]
A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do.[7]
From "Intelligence: Knowns and Unknowns" (1995), a report published by the Board of Scientific Affairs of the American Psychological Association:
Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.[8]
Besides those definitions, psychology and learning researchers also have suggested definitions of intelligence such as:

Selected researchers and their definitions:

Alfred Binet: Judgment, otherwise called "good sense", "practical sense", "initiative", the faculty of adapting one's self to circumstances ... auto-critique.[9]

David Wechsler: The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.[10]

Lloyd Humphreys: "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills".[11]

Howard Gardner: To my mind, a human intellectual competence must entail a set of skills of problem solving — enabling the individual to resolve genuine problems or difficulties that he or she encounters and, when appropriate, to create an effective product — and must also entail the potential for finding or creating problems — and thereby laying the groundwork for the acquisition of new knowledge.[12]

Linda Gottfredson: The ability to deal with cognitive complexity.[13]

Sternberg & Salter: Goal-directed adaptive behavior.[14]

Reuven Feuerstein: The theory of Structural Cognitive Modifiability describes intelligence as "the unique propensity of human beings to change or modify the structure of their cognitive functioning to adapt to the changing demands of a life situation".[15]

Legg & Hutter: A synthesis of 70+ definitions from psychology, philosophy, and AI researchers: "Intelligence measures an agent's ability to achieve goals in a wide range of environments",[5] which has been mathematically formalized[16] (see the sketch after this list).

Alexander Wissner-Gross: F = T ∇ S_τ.[17] "Intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength T, with the diversity of possible accessible futures, S, up to some future time horizon, τ. In short, intelligence doesn't like to get trapped".
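
For reference, Legg & Hutter's formalization can be stated roughly as follows (quoted from memory and offered as an assumption; see [16] for the authoritative statement): the intelligence of an agent is its expected value in each computable environment, weighted by that environment's simplicity.

```latex
% Legg & Hutter's universal intelligence measure, as I recall it (check [16];
% notation may differ in the source): agent \pi's value V^{\pi}_{\mu} in each
% computable environment \mu, weighted by simplicity 2^{-K(\mu)}, where K is
% Kolmogorov complexity and E is the set of environments.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}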

Human intelligence

Human intelligence is the intellectual power of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness.[18] Intelligence enables humans to remember descriptions of things and use those descriptions in future behaviors. It is a cognitive process. It gives humans the cognitive abilities to learn, form concepts, understand, and reason, including the capacities to recognize patterns, comprehend ideas, plan, solve problems, and use language to communicate. Intelligence enables humans to experience and think.
Note that much of the above definition applies also to the intelligence of non-human animals.

In animals

The common chimpanzee can use tools. This chimpanzee is using a stick to get food.

Although humans have been the primary focus of intelligence researchers, scientists have also attempted to investigate animal intelligence, or more broadly, animal cognition. These researchers are interested in studying both mental ability in a particular species, and comparing abilities between species. They study various measures of problem solving, as well as numerical and verbal reasoning abilities. Some challenges in this area are defining intelligence so that it has the same meaning across species (e.g. comparing intelligence between literate humans and illiterate animals), and also operationalizing a measure that accurately compares mental ability across different species and contexts.

Wolfgang Köhler's research on the intelligence of apes is an example of research in this area. Stanley Coren's book, The Intelligence of Dogs is a notable book on the topic of dog intelligence.[19] (See also: Dog intelligence.) Non-human animals particularly noted and studied for their intelligence include chimpanzees, bonobos (notably the language-using Kanzi) and other great apes, dolphins, elephants and to some extent parrots, rats and ravens.

Cephalopod intelligence also provides important comparative study. Cephalopods appear to exhibit characteristics of significant intelligence, yet their nervous systems differ radically from those of backboned animals. Vertebrates such as mammals, birds, reptiles and fish have shown a fairly high degree of intellect that varies according to each species. The same is true with arthropods.

g factor in non-humans

Evidence of a general factor of intelligence has been observed in non-human animals. The general factor of intelligence, or g factor, is a psychometric construct that summarizes the correlations observed between an individual’s scores on a wide range of cognitive abilities. First described in humans, the g factor has since been identified in a number of non-human species.[20]
Cognitive ability and intelligence cannot be measured using the same, largely verbally dependent, scales developed for humans. Instead, intelligence is measured using a variety of interactive and observational tools focusing on innovation, habit reversal, social learning, and responses to novelty. Studies have shown that g is responsible for 47% of the individual variance in cognitive ability measures in primates[20] and between 55% and 60% of the variance in mice (Locurto). These values are similar to the accepted variance in IQ explained by g in humans (40-50%).[21]
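
To illustrate what "variance explained by g" means, here is a synthetic sketch (my illustration; the data are simulated, not from the cited studies): scores on several tasks share one latent factor, and the first principal component's share of total variance is reported.

```python
import numpy as np

# Simulate task scores driven by one latent general ability plus noise,
# then measure the variance share of the largest factor. All numbers invented.

rng = np.random.default_rng(0)
n_subjects, n_tasks = 500, 6
g = rng.normal(size=(n_subjects, 1))            # latent general ability
loadings = rng.uniform(0.5, 0.9, size=(1, n_tasks))
scores = g @ loadings + 0.7 * rng.normal(size=(n_subjects, n_tasks))

scores -= scores.mean(axis=0)
cov = np.cov(scores, rowvar=False)
eigenvalues = np.linalg.eigvalsh(cov)           # ascending order
share = eigenvalues[-1] / eigenvalues.sum()     # variance explained by "g"
print(f"first factor explains {share:.0%} of the variance")
```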

In plants

It has been argued that plants should also be classified as intelligent based on their ability to sense and model external and internal environments and adjust their morphology, physiology and phenotype accordingly to ensure self-preservation and reproduction.[22][23]
A counter argument is that intelligence is commonly understood to involve the creation and use of persistent memories, as opposed to computation that does not involve learning. If this is accepted as definitive of intelligence, then it includes the artificial intelligence of robots capable of "machine learning", but excludes those purely autonomic sense-reaction responses that can be observed in many plants. Plants are not limited to automated sensory-motor responses, however; they are capable of discriminating positive and negative experiences and of "learning" (registering memories of) their past experiences. They are also capable of communication, accurately computing their circumstances, using sophisticated cost–benefit analysis, and taking tightly controlled actions to mitigate and control diverse environmental stressors.[24][25][26]

Artificial intelligence

Artificial intelligence (or AI) is both the intelligence of machines and the branch of computer science which aims to create it, through "the study and design of intelligent agents"[27] or "rational agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[28] Achievements in artificial intelligence include constrained and well-defined problems such as games, crossword-solving and optical character recognition and a few more general problems such as autonomous cars.[29] General intelligence or strong AI has not yet been achieved and is a long-term goal of AI research.
Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception, and the ability to move and to manipulate objects.[27][28] In the field of artificial intelligence there is no consensus on how closely the brain should be simulated.

Intelligent agent

From Wikipedia, the free encyclopedia
In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes through sensors and acts upon an environment using actuators (i.e. it is an agent) and directs its activity towards achieving goals (i.e. it is "rational", as defined in economics[1]). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent.[2]

Intelligent agents are often described schematically as an abstract functional system similar to a computer program. For this reason, intelligent agents are sometimes called abstract intelligent agents (AIA)[citation needed] to distinguish them from their real world implementations as computer systems, biological systems, or organizations. Some definitions of intelligent agents emphasize their autonomy, and so prefer the term autonomous intelligent agents. Still others (notably Russell & Norvig (2003)) considered goal-directed behavior as the essence of intelligence and so prefer a term borrowed from economics, "rational agent".

Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations.

Intelligent agents are also closely related to software agents (autonomous computer programs that carry out tasks on behalf of users). In computer science, the term intelligent agent may be used to refer to a software agent that has some intelligence, even if it is not a rational agent by Russell and Norvig's definition. For example, autonomous programs used for operator assistance or data mining (sometimes referred to as bots) are also called "intelligent agents".

A variety of definitions

Intelligent agents have been defined many different ways.[3] According to Nikola Kasabov,[4] AI systems should exhibit the following characteristics:
  • Accommodate new problem-solving rules incrementally
  • Adapt online and in real time
  • Be able to analyze themselves in terms of behavior, error, and success
  • Learn and improve through interaction with the environment (embodiment)
  • Learn quickly from large amounts of data
  • Have memory-based exemplar storage and retrieval capacities
  • Have parameters to represent short- and long-term memory, age, forgetting, etc.

Structure of agents

A simple agent program can be defined mathematically as an agent function[5] which maps every possible percept sequence to a possible action the agent can perform, or to a coefficient, feedback element, function, or constant that affects eventual actions:
f : P* → A
The agent function is an abstract concept, as it may incorporate various principles of decision making, such as calculating the utility of individual options, deduction over logical rules, fuzzy logic, etc.[6]

The agent program, in contrast, maps every possible percept to an action[citation needed].

We use the term percept to refer to the agent's perceptual inputs at any given instant. In what follows, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
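To make the distinction concrete, the following Python sketch (all percepts and actions are invented placeholders) contrasts the abstract agent function, which sees the whole percept sequence, with an agent program, which sees only the current percept.

# Sketch contrasting the abstract agent function f : P* -> A with an
# agent program; the percepts and actions are invented for illustration.
def agent_function(percept_sequence):
    # Maps the entire percept history to an action.
    return "retreat" if percept_sequence[-3:] == ["bump"] * 3 else "advance"

def agent_program(percept):
    # Maps only the current percept to an action.
    return "retreat" if percept == "bump" else "advance"

print(agent_function(["clear", "bump", "bump", "bump"]))  # -> retreat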

Architectures

Weiss (2013) suggests that we should consider four classes of agents:
  • Logic-based agents – in which the decision about what action to perform is made via logical deduction;
  • Reactive agents – in which decision making is implemented in some form of direct mapping from situation to action;
  • Belief-desire-intention agents – in which decision making depends upon the manipulation of data structures representing the beliefs, desires, and intentions of the agent; and finally,
  • Layered architectures – in which decision making is realized via various software layers, each of which is more or less explicitly reasoning about the environment at different levels of abstraction.

Classes

Diagrams (not reproduced here) illustrate the simple reflex agent, the model-based reflex agent, the model-based goal-based agent, the model-based utility-based agent, and a general learning agent.

Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability:[7]
  1. simple reflex agents
  2. model-based reflex agents
  3. goal-based agents
  4. utility-based agents
  5. learning agents

Simple reflex agents

Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: if condition then action.
This agent function succeeds only when the environment is fully observable. Some reflex agents can also contain information about their current state, which allows them to disregard conditions whose actuators are already triggered.

Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Note: If the agent can randomize its actions, it may be possible to escape from infinite loops.
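A minimal Python sketch of this behavior follows; the rule table and percepts are invented examples, and the random fallback illustrates the loop-escaping note above.

import random

# Simple reflex agent: condition-action rules over the current percept only.
# The rules and percepts are invented; random.choice on a rule miss is one
# way to escape the infinite loops mentioned above.
RULES = {"dirty": "suck", "at_left_wall": "move_right", "at_right_wall": "move_left"}

def simple_reflex_agent(percept):
    if percept in RULES:                        # if condition then action
        return RULES[percept]
    return random.choice(list(RULES.values()))  # randomized fallback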

Model-based reflex agents

A model-based agent can handle partially observable environments. It maintains an internal state, a structure of some kind that describes the part of the world which cannot be seen. This knowledge about "how the world works" is called a model of the world, hence the name "model-based agent".

A model-based reflex agent should maintain some sort of internal model that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. The percept history and the impact of actions on the environment can be determined using this internal model. The agent then chooses an action in the same way as a reflex agent.
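As a rough sketch (the state representation, percepts, and actions are invented for illustration), a model-based reflex agent in Python might look like this:

class ModelBasedReflexAgent:
    # The internal state summarizes the percept history, standing in for
    # unobserved parts of the world; the model here is deliberately trivial.
    def __init__(self):
        self.last_seen = None

    def update_model(self, percept):
        # "How the world works": remember the most recent observation.
        if percept is not None:
            self.last_seen = percept

    def act(self, percept):
        self.update_model(percept)
        # Then choose an action as a reflex agent would, but over the model.
        return "approach" if self.last_seen == "target" else "search"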

Goal-based agents

Goal-based agents further expand on the capabilities of model-based agents by using "goal" information. Goal information describes situations that are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals.
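As an illustration of the search side, the following Python sketch finds an action sequence that reaches a goal state via breadth-first search; the transition model is supplied by the caller and all names are invented for this example.

from collections import deque

# Breadth-first search for a plan (action sequence) reaching a goal state.
# successors(state) must yield (action, next_state) pairs; states must be
# hashable. This is an illustrative sketch, not a full planner.
def find_plan(start, goal, successors):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan                      # action sequence achieving the goal
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                              # goal unreachable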

Utility-based agents

Goal-based agents distinguish only between goal states and non-goal states. It is, however, possible to define a measure of how desirable a particular state is. This measure can be obtained through a utility function, which maps each state to a measure of its utility. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent; the term utility can be used to describe how "happy" the agent is.

A rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes; that is, the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.
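A small Python sketch of this decision rule follows; the probabilities and utilities are invented placeholders.

# Rational choice under uncertainty: pick the action with maximal expected
# utility. The outcome models are invented placeholders for illustration.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action.
    return sum(p * u for p, u in outcomes)

def choose_action(action_models):
    # action_models: dict mapping action -> list of (probability, utility).
    return max(action_models, key=lambda a: expected_utility(action_models[a]))

models = {"risky": [(0.3, 10.0), (0.7, -5.0)],  # EU = -0.5
          "safe":  [(1.0, 2.0)]}                # EU =  2.0
print(choose_action(models))  # -> safe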

Learning agents

Learning has the advantage that it allows an agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow. The most important distinction is between the "learning element", which is responsible for making improvements, and the "performance element", which is responsible for selecting external actions.

The learning element uses feedback from the "critic" on how the agent is doing and determines how the performance element should be modified to do better in the future. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions.

The last component of the learning agent is the "problem generator". It is responsible for suggesting actions that will lead to new and informative experiences.
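The four components can be sketched structurally in Python; the stub logic below is invented and only mirrors the division of labor described above.

class LearningAgent:
    def __init__(self):
        self.policy = {}  # knowledge the performance element acts on

    def performance_element(self, percept):
        # Takes in percepts and decides on external actions.
        return self.policy.get(percept, "default_action")

    def critic(self, reward):
        # Provides feedback on how well the agent is doing.
        return reward

    def learning_element(self, percept, action, feedback):
        # Uses the critic's feedback to improve the performance element.
        if feedback > 0:
            self.policy[percept] = action

    def problem_generator(self):
        # Suggests actions that lead to new and informative experiences.
        return "explore"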

Hierarchies of agents

To perform their functions, intelligent agents today are normally organized in a hierarchical structure containing many "sub-agents". Intelligent sub-agents process and perform lower-level functions. Taken together, the intelligent agent and its sub-agents create a complete system that can accomplish difficult tasks or goals with behaviors and responses that display a form of intelligence.

Applications

An example of an automated online assistant providing automated customer service on a webpage.

Intelligent agents are applied as automated online assistants, where they function to perceive the needs of customers and to perform individualized customer service. Such an agent may consist of a dialog system, an avatar, and an expert system that provides specific expertise to the user.[8] They can also be used to optimize the coordination of human groups online.[9]

Tuesday, May 29, 2018

Cognitive architecture

From Wikipedia, the free encyclopedia

A cognitive architecture can refer to a theory about the structure of the human mind. One of the main goals of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model. However, the results must be formalized to the extent that they can serve as the basis of a computer program. The formalized models can be used to further refine a comprehensive theory of cognition and, more immediately, as a commercially usable model. Successful[citation needed] cognitive architectures include ACT-R (Adaptive Control of Thought, ACT) and SOAR.

The Institute of Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments."[1]

History

Herbert A. Simon, one of the founders of the field of artificial intelligence, stated that EPAM, the 1960 thesis of his student Ed Feigenbaum, provided a possible "architecture for cognition"[2] because it included some commitments about how more than one fundamental aspect of the human mind worked (in EPAM's case, human memory and human learning).

John R. Anderson started research on human memory in the early 1970s, and his 1973 thesis with Gordon H. Bower provided a theory of human associative memory.[3] He incorporated more aspects of his research on long-term memory and thinking processes into this work and eventually designed a cognitive architecture he called ACT. He and his students used the term "cognitive architecture" in his lab to refer to the ACT theory as embodied in a collection of papers and designs, since they did not yet have any sort of complete implementation at the time.

In 1983 John R. Anderson published the seminal work in this area, entitled The Architecture of Cognition.[4] One can distinguish between the theory of cognition and the implementation of that theory. The theory of cognition outlined the structure of the various parts of the mind and made commitments to the use of rules, associative networks, and other aspects; the cognitive architecture implements the theory on computers. The software used to implement the cognitive architectures was also called a "cognitive architecture". Thus, a cognitive architecture can also refer to a blueprint for intelligent agents: it proposes (artificial) computational processes that act like a certain cognitive system, most often like a person, or that act intelligently under some definition. Cognitive architectures form a subset of general agent architectures. The term 'architecture' implies an approach that attempts to model not only behavior but also the structural properties of the modelled system.

Distinctions

Cognitive architectures can be symbolic, connectionist, or hybrid.[5][6] Some cognitive architectures or models are based on a set of generic rules, as, e.g., the Information Processing Language (e.g., Soar, based on the unified theory of cognition, or similarly ACT-R). Many of these architectures are based on the-mind-is-like-a-computer analogy. In contrast, subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g., nodes). Hybrid architectures combine both types of processing (such as CLARION). A further distinction is whether the architecture is centralized, with a neural correlate of a processor at its core, or decentralized (distributed). The decentralized flavor became popular under the name of parallel distributed processing in the mid-1980s and as connectionism, a prime example being neural networks. A further design issue is the decision between holistic and atomistic, or (more concretely) modular, structure. By analogy, this extends to issues of knowledge representation.

In traditional AI, intelligence is often programmed from above: the programmer is the creator who makes something and imbues it with intelligence, though many traditional AI systems were also designed to learn (e.g., improving their game-playing or problem-solving competence). Biologically inspired computing, on the other hand, sometimes takes a more bottom-up, decentralized approach; bio-inspired techniques often involve specifying a set of simple generic rules or a set of simple nodes, from the interaction of which the overall behavior emerges. The hope is to build up complexity until the end result is something markedly complex (see complex systems). However, it is also arguable that systems designed top-down, on the basis of observations of what humans and other animals can do rather than on observations of brain mechanisms, are biologically inspired as well, though in a different way.

Notable examples

A comprehensive review of implemented cognitive architectures was undertaken in 2010 by Samsonovich et al.[7] and is available as an online repository.[8] Some well-known cognitive architectures, in alphabetical order:

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...