Friday, July 13, 2018

Limbic system

From Wikipedia, the free encyclopedia
 
[Figure: Cross section of the human brain showing parts of the limbic system from below, from Traité d'Anatomie et de Physiologie (1786).]
[Figure: The limbic lobe; the limbic system largely consists of what was previously known as the limbic lobe.]
Identifiers: Latin Systema limbicum; MeSH D008032; NeuroNames 2055; FMA 242000.

The limbic system is a set of brain structures located on both sides of the thalamus, immediately beneath the cerebrum. It has also been referred to as the paleomammalian cortex. It is not a separate system but a collection of structures from the telencephalon, diencephalon, and mesencephalon. It includes the olfactory bulbs, hippocampus, hypothalamus, amygdala, anterior thalamic nuclei, fornix, columns of fornix, mammillary body, septum pellucidum, habenular commissure, cingulate gyrus, parahippocampal gyrus, entorhinal cortex, and limbic midbrain areas.

The limbic system supports a variety of functions including emotion, behavior, motivation, long-term memory, and olfaction.[4] Emotional life is largely housed in the limbic system, and it has a great deal to do with the formation of memories.

Although the term only originated in the 1940s, some neuroscientists, including Joseph LeDoux, have suggested that the concept of a functionally unified limbic system should be abandoned as obsolete because it is grounded mainly in historical concepts of brain anatomy that are no longer accepted as accurate.[5]

Structure

Anatomical components of the limbic system

The limbic system was originally defined by Paul D. MacLean as a series of cortical structures surrounding the limit between the cerebral hemispheres and the brainstem: the border, or limbus, of the brain. These structures were known together as the limbic lobe.[6] Further studies began to associate these areas with emotional and motivational processes and linked them to subcortical components that were then grouped into the limbic system.[7] The idea of such a system as an isolated entity responsible for the neurological regulation of emotion has fallen into disuse, and it is now considered one of the many parts of the brain that regulate visceral, autonomic processes.[8]

Therefore, the definition of the anatomical structures considered part of the limbic system is a controversial subject. The structures listed in the introduction above are, or have at some point been considered, part of the limbic system.[9][10]

Function

The structures of the limbic system are involved in motivation, emotion, learning, and memory. The limbic system is where the subcortical structures meet the cerebral cortex.[1] The limbic system operates by influencing the endocrine system and the autonomic nervous system. It is highly interconnected with the nucleus accumbens, which plays a role in sexual arousal and the "high" derived from certain recreational drugs. These responses are heavily modulated by dopaminergic projections from the limbic system. In 1954, Olds and Milner found that rats with metal electrodes implanted into their nucleus accumbens, as well as their septal nuclei, repeatedly pressed a lever activating this region, and did so in preference to eating and drinking, eventually dying of exhaustion.[11]

The limbic system also includes the basal ganglia, a set of subcortical structures that direct intentional movements. The basal ganglia are located near the thalamus and hypothalamus; they receive input from the cerebral cortex and send output to the motor centers in the brain stem. A part of the basal ganglia called the striatum controls posture and movement. Recent studies indicate that an inadequate supply of dopamine affects the striatum, which can lead to the visible behavioral symptoms of Parkinson's disease.[1]

The limbic system is also tightly connected to the prefrontal cortex. Some scientists contend that this connection is related to the pleasure obtained from solving problems. To treat severe emotional disorders, this connection was sometimes surgically severed, a psychosurgical procedure called a prefrontal lobotomy (this name is actually a misnomer). Patients who underwent this procedure often became passive and lacked all motivation.

The limbic system is often classified as a “cerebral structure”. It is closely linked to olfaction, emotions, drives, autonomic regulation, and memory, and pathologically to encephalopathy, epilepsy, psychotic symptoms, and cognitive defects.[12] The limbic system has proven functionally relevant to many different processes, such as affects/emotions, memory, sensory processing, time perception, attention, consciousness, instincts, autonomic/vegetative control, and actions/motor behavior. Some of the disorders associated with the limbic system are epilepsy and schizophrenia.[13]

Hippocampus

Various processes of cognition involve the hippocampus.

Spatial memory

The first and most widely researched area concerns memory, spatial memory in particular. Spatial memory was found to involve several sub-regions of the hippocampus, such as the dentate gyrus (DG) in the dorsal hippocampus, the left hippocampus, and the parahippocampal region. The dorsal hippocampus was found to be an important component for the generation of new neurons, called adult-born granule cells (GC), in adolescence and adulthood.[14] These new neurons contribute to pattern separation in spatial memory, increase the firing in cell networks, and overall cause stronger memory formations.

While the dorsal hippocampus is involved in spatial memory formation, the left hippocampus is a participant in the recall of these spatial memories. Eichenbaum[15] and his team found, when studying hippocampal lesions in rats, that the left hippocampus is “critical for effectively combining the ‘what,’ ‘when,’ and ‘where’ qualities of each experience to compose the retrieved memory.” This makes the left hippocampus a key component in the retrieval of spatial memory. However, Spreng[16] found that the left hippocampus is, in fact, a general region for binding together bits and pieces of memory composed not only by the hippocampus, but also by other areas of the brain, to be recalled at a later time. Eichenbaum’s research in 2007 also demonstrates that the parahippocampal area of the hippocampus is another region specialized for the retrieval of memories, just like the left hippocampus.

Learning

The hippocampus, over the decades, has also been found to have a huge impact on learning. Curlik and Shors[17] examined the effects of neurogenesis in the hippocampus and its effects on learning. These researchers employed many different types of mental and physical training on their subjects, and found that the hippocampus is highly responsive to these latter tasks. They discovered an upsurge of new neurons and neural circuits in the hippocampus as a result of the training, causing an overall improvement in learning of the task. This neurogenesis contributes to the creation of adult-born granule cells (GC), cells also described by Eichenbaum[15] in his own research on neurogenesis and its contributions to learning. The creation of these cells exhibited "enhanced excitability" in the dentate gyrus (DG) of the dorsal hippocampus, impacting the hippocampus and its contribution to the learning process.[15]

Hippocampus damage

Damage to the hippocampal region of the brain has been reported to have vast effects on overall cognitive functioning, particularly on memory such as spatial memory. As previously mentioned, spatial memory is a cognitive function greatly intertwined with the hippocampus. While damage to the hippocampus may be a result of a brain injury or other injuries of that sort, researchers particularly investigated the effects that high emotional arousal and certain types of drugs had on recall ability in this specific memory type. In particular, in a study performed by Packard,[18] rats were given the task of correctly making their way through a maze. In the first condition, rats were stressed by shock or restraint, which caused high emotional arousal. When completing the maze task, these rats showed impaired hippocampal-dependent memory compared to the control group. Then, in a second condition, a group of rats was injected with anxiogenic drugs. The results were similar to those of the first condition: hippocampal-dependent memory was again impaired. Studies such as these reinforce the impact that the hippocampus has on memory processing, in particular the recall function of spatial memory. Furthermore, impairment of the hippocampus can occur from prolonged exposure to stress hormones such as glucocorticoids (GCs), which target the hippocampus and cause disruption in explicit memory.[19]

In an attempt to curtail life-threatening epileptic seizures, 27-year-old Henry Gustav Molaison underwent bilateral removal of almost all of his hippocampus in 1953. Over the course of fifty years he participated in thousands of tests and research projects that provided specific information on exactly what he had lost. Semantic and episodic events faded within minutes, having never reached his long term memory, yet emotions, unconnected from the details of causation, were often retained. Dr. Suzanne Corkin, who worked with him for 46 years until his death, described the contribution of this tragic "experiment" in her 2013 book.[20]

Amygdala

Episodic-autobiographical memory (EAM) networks

Another integrative part of the limbic system, the amygdala is involved in many cognitive processes. Like the hippocampus, processes in the amygdala seem to impact memory; however, it is not spatial memory as in the hippocampus but episodic-autobiographical memory (EAM) networks. Markowitsch's[21] amygdala research shows it encodes, stores, and retrieves EAM memories. To delve deeper into these types of processes by the amygdala, Markowitsch[21] and his team provided extensive evidence through investigations that the "amygdala's main function is to charge cues so that mnemonic events of a specific emotional significance can be successfully searched within the appropriate neural nets and re-activated." These cues for emotional events created by the amygdala encompass the EAM networks previously mentioned.

Attentional and emotional processes

Besides memory, the amygdala also seems to be an important brain region involved in attentional and emotional processes. In cognitive terms, attention is the ability to home in on some stimuli while ignoring others, and the amygdala seems to be an important structure in this ability. Historically, however, this structure was thought to be linked to fear, allowing the individual to take action in response to that fear. As time has gone by, researchers such as Pessoa[22] generalized this concept with help from evidence of EEG recordings, and concluded that the amygdala helps an organism to define a stimulus and therefore respond accordingly. Nevertheless, the early view that the amygdala is linked to fear opened the way for research on its role in emotional processes. Kheirbek[14] demonstrated that emotional processing also involves the ventral hippocampus, which he described as having a role in neurogenesis and the creation of adult-born granule cells (GC). These cells not only were a crucial part of neurogenesis and the strengthening of spatial memory and learning in the hippocampus but also appear to be an essential component of amygdala function. A deficit of these cells, as Pessoa (2009) predicted in his studies, would result in low emotional functioning, leading to a higher rate of mental illnesses such as anxiety disorders.

Social processing

Social processing, specifically the evaluation of faces in social processing, is an area of cognition specific to the amygdala. In a study done by Todorov,[23] fMRI tasks were performed with participants to evaluate whether the amygdala was involved in the general evaluation of faces. After the study, Todorov concluded from his fMRI results that the amygdala did indeed play a key role in the general evaluation of faces. However, in a study performed by Koscik[24] and his team, the trait of trustworthiness was particularly examined in the evaluation of faces. Koscik and his team demonstrated that the amygdala was involved in evaluating the trustworthiness of an individual. They investigated how brain damage to the amygdala played a role in trustworthiness, and found that individuals who had suffered damage tended to confuse trust and betrayal, and thus placed trust in those who had done them wrong. Furthermore, Rule,[25] along with his colleagues, expanded on the amygdala's role in evaluating trustworthiness in others by performing a study in 2009 in which he examined the amygdala's role in evaluating general first impressions and relating them to real-world outcomes. Their study involved first impressions of CEOs. Rule demonstrated that while the amygdala did play a role in the evaluation of trustworthiness, as observed by Koscik in his own research two years later in 2011, the amygdala also played a generalized role in the overall evaluation of first impressions of faces. This latter conclusion, along with Todorov's study on the amygdala’s role in general evaluations of faces and Koscik’s research on trustworthiness and the amygdala, further solidified evidence that the amygdala plays a role in overall social processing.

Evolution

Paul D. MacLean, as part of his triune brain theory, hypothesized that the limbic system is older than other parts of the forebrain, and that it developed to manage circuitry attributed to the fight-or-flight response, first identified by Hans Selye[26] in his report of the General Adaptation Syndrome in 1936. It may be considered a part of survival adaptation in reptiles as well as mammals (including humans). MacLean postulated that the human brain comprises three components that evolved successively, with more recent components developing at the top/front. These components are, respectively:
  1. The archipallium or primitive ("reptilian") brain, comprising the structures of the brain stem – medulla, pons, cerebellum, mesencephalon, the oldest basal nuclei – the globus pallidus and the olfactory bulbs.
  2. The paleopallium or intermediate ("old mammalian") brain, comprising the structures of the limbic system.
  3. The neopallium, also known as the superior or rational ("new mammalian") brain, comprises almost the whole of the hemispheres (made up of a more recent type of cortex, called neocortex) and some subcortical neuronal groups. It corresponds to the brain of the superior mammals, thus including the primates and, as a consequence, the human species. Similar development of the neocortex in mammalian species unrelated to humans and primates has also occurred, for example in cetaceans and elephants; thus the designation of "superior mammals" is not an evolutionary one, as it has occurred independently in different species. The evolution of higher degrees of intelligence is an example of convergent evolution, and is also seen in non-mammals such as birds.
According to MacLean, each of the components, although connected with the others, retained "their peculiar types of intelligence, subjectivity, sense of time and space, memory, mobility and other less specific functions".

However, while the categorization into structures is reasonable, the recent studies of the limbic system of tetrapods, both living and extinct, have challenged several aspects of this hypothesis, notably the accuracy of the terms "reptilian" and "old mammalian". The common ancestors of reptiles and mammals had a well-developed limbic system in which the basic subdivisions and connections of the amygdalar nuclei were established.[27] Further, birds, which evolved from the dinosaurs, which in turn evolved separately but around the same time as the mammals, have a well-developed limbic system. While the anatomic structures of the limbic system are different in birds and mammals, there are functional equivalents.

Clinical significance

Damage to the structures of the limbic system results in conditions like Alzheimer's disease, anterograde amnesia, retrograde amnesia, and Klüver-Bucy syndrome.

Society and culture

Etymology and history

The term limbic comes from the Latin limbus, for "border" or "edge", or, particularly in medical terminology, a border of an anatomical component. Paul Broca coined the term based on its physical location in the brain, sandwiched between two functionally different components.

The term limbic system was introduced in 1949 by the American physician and neuroscientist Paul D. MacLean.[28][29] The French physician Paul Broca had first called this part of the brain le grand lobe limbique in 1878.[6] He examined the differentiation between deeply recessed cortical tissue and the underlying, subcortical nuclei.[30] However, most of its putative role in emotion was developed only in 1937, when the American physician James Papez described his anatomical model of emotion, the Papez circuit.[31]

The first evidence that the limbic system was responsible for the cortical representation of emotions was discovered in 1939, by Heinrich Klüver and Paul Bucy. Klüver and Bucy, after much research, demonstrated that the bilateral removal of the temporal lobes in monkeys created an extreme behavioral syndrome. After the temporal lobectomy, the monkeys showed a decrease in aggression. The animals revealed a reduced threshold to visual stimuli, and were thus unable to recognize objects that were once familiar.[32]

MacLean expanded these ideas to include additional structures in a more dispersed "limbic system", more along the lines of the system described above.[29] MacLean developed the intriguing theory of the "triune brain" to explain its evolution and to try to reconcile rational human behavior with its more primal and violent side. He became interested in the brain's control of emotion and behavior. After initial studies of brain activity in epileptic patients, he turned to cats, monkeys, and other models, using electrodes to stimulate different parts of the brain in conscious animals and recording their responses.[33] In the 1950s, he began to trace individual behaviors like aggression and sexual arousal to their physiological sources. He analyzed the brain's center of emotions, the limbic system, and described an area that includes structures called the hippocampus and amygdala. Developing observations made by Papez, he determined that the limbic system had evolved in early mammals to control fight-or-flight responses and react to both emotionally pleasurable and painful sensations. The concept is now broadly accepted in neuroscience.[34] Additionally, MacLean said that the idea of the limbic system leads to a recognition that its presence "represents the history of the evolution of mammals and their distinctive family way of life."

In the 1960s, Dr. MacLean enlarged his theory to address the human brain's overall structure and divided its evolution into three parts, an idea that he termed the triune brain. In addition to identifying the limbic system, he pointed to a more primitive brain called the R-complex, related to reptiles, which controls basic functions like muscle movement and breathing. The third part, the neocortex, controls speech and reasoning and is the most recent evolutionary arrival.[35] The concept of the limbic system has since been further expanded and developed by Walle Nauta, Lennart Heimer and others.

Academic dispute

There is controversy over the use of the term limbic system, with scientists such as LeDoux arguing that the term be considered obsolete and abandoned.[36] Originally, the limbic system was believed to be the emotional center of the brain, with cognition being the business of the neocortex. However, cognition depends on acquisition and retention of memories, in which the hippocampus, a primary limbic structure, is involved: hippocampus damage causes severe cognitive (memory) deficits. More important, the "boundaries" of the limbic system have been repeatedly redefined because of advances in neuroscience. Therefore, while it is true that limbic structures are more closely related to emotion, the brain can be thought of as an integrated whole.

What Is Artificial Intelligence?

February 21, 2001 by John McCarthy
Original link:  http://www.kurzweilai.net/what-is-artificial-intelligence

What exactly is “artificial intelligence” (AI)? Stanford University Professor of Computer Science Dr. John McCarthy, a pioneer in AI, answers this question in depth for beginners.

Originally published April 4, 2000 as an academic paper, Stanford University. Published on KurzweilAI.net February 22, 2001.

This article for the layman answers basic questions about artificial intelligence. The opinions expressed here are not all consensus opinion among researchers in AI.
Extinguished philosophies lie about the cradle of every science as the strangled snakes beside that of Hercules.–adapted from T. H. Huxley

Basic Questions

Q. What is artificial intelligence?

A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

Q. Yes, but what is intelligence?

A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.

Q. Isn’t there a solid definition of intelligence that doesn’t depend on relating it to human intelligence?

A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.

Q. Is intelligence a single thing so that one can ask a yes or no question “Is this machine intelligent or not?”?

A. No. Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered “somewhat intelligent”.

Q. Isn’t AI about simulating human intelligence?

A. Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.

Q. What about IQ? Do computer programs have IQs?

A. No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child’s age. The scale is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life, but making computers that can score high on IQ tests would be weakly correlated with their usefulness. For example, the ability of a child to repeat back a long sequence of digits correlates well with other intellectual abilities, perhaps because it measures how much information the child can compute with at once. However, “digit span” is trivial for even extremely limited computers.

However, some of the problems on IQ tests are useful challenges for AI.

Q. What about other comparisons between human and computer intelligence?

A. Arthur R. Jensen [Jen98], a leading researcher in human intelligence, suggests “as a heuristic hypothesis” that all normal humans have the same intellectual mechanisms and that differences in intelligence are related to “quantitative biochemical and physiological conditions”. I see them as speed, short term memory, and the ability to form accurate and retrievable long term memories.

Whether or not Jensen is right about human intelligence, the situation in AI today is the reverse.

Computer programs have plenty of speed and memory but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Some abilities that children normally don’t develop till they are teenagers may be in, and some abilities possessed by two year olds are still out. The matter is further complicated by the fact that the cognitive sciences still have not succeeded in determining exactly what the human abilities are. Very likely the organization of the intellectual mechanisms for AI can usefully be different from that in people.

Whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently.

Q. When did AI research start?

A. After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.

Q. Does AI aim to put the human mind into the computer?

A. Some researchers say they have that objective, but maybe they are using the phrase metaphorically. The human mind has a lot of peculiarities, and I’m not sure anyone is serious about imitating all of them.

Q. What is the Turing test?

A. Alan Turing’s 1950 article Computing Machinery and Intelligence [Tur50] discussed conditions for considering a machine to be intelligent. He argued that if the machine could successfully pretend to be human to a knowledgeable observer then you certainly should consider it intelligent. This test would satisfy most people but not all philosophers. The observer could interact with the machine and a human by teletype (to avoid requiring that the machine imitate the appearance or voice of the person), and the human would try to persuade the observer that it was human and the machine would try to fool the observer.

The Turing test is a one-sided test. A machine that passes the test should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate a human.

Daniel Dennett’s book Brainchildren [Den98] has an excellent discussion of the Turing test and the various partial Turing tests that have been implemented, i.e. with restrictions on the observer’s knowledge of AI and the subject matter of questioning. It turns out that some people are easily led into believing that a rather dumb program is intelligent.

Q. Does AI aim at human-level intelligence?

A. Yes. The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans. However, many people involved in particular research areas are much less ambitious.

Q. How far is AI from reaching human-level intelligence? When will it happen?

A. A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge.

However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human level intelligence will be achieved.

Q. Are computers the right kind of machine to be made intelligent?

A. Computers can be programmed to simulate any kind of machine.

Many researchers invented non-computer machines, hoping that they would be intelligent in different ways than the computer programs could be. However, they usually simulate their invented machines on a computer and come to doubt that the new machine is worth building. Because many billions of dollars have been spent in making computers faster and faster, another kind of machine would have to be very fast to perform better than a program on a computer simulating the machine.

Q. Are computers fast enough to be intelligent?

A. Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.

Q. What about parallel machines?

A. Machines with many processors are much faster than single processors can be. Parallelism itself presents no advantages, and parallel machines are somewhat awkward to program. When extreme speed is required, it is necessary to face this awkwardness.

Q. What about making a “child machine” that could improve by reading and by learning from experience?

A. This idea has been proposed many times, starting in the 1940s. Eventually, it will be made to work. However, AI programs haven’t yet reached the level of being able to learn much of what a child learns from physical experience. Nor do present programs understand language well enough to learn much by reading.

Q. Might an AI system be able to bootstrap itself to higher and higher level intelligence by thinking about AI?

A. I think yes, but we aren’t yet at a level of AI at which this process can begin.

Q. What about chess?

A. Alexander Kronrod, a Russian AI researcher, said “Chess is the Drosophila of AI.” He was making an analogy with geneticists’ use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.

Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.

Q. What about Go?

A. The Chinese and Japanese game of Go is also a board game in which the players take turns moving. Go exposes the weakness of our present understanding of the intellectual mechanisms involved in human game playing. Go programs are very bad players, in spite of considerable effort (not as much as for chess). The problem seems to be that a position in Go has to be divided mentally into a collection of subpositions which are first analyzed separately followed by an analysis of their interaction. Humans use this in chess also, but chess programs consider the position as a whole. Chess programs compensate for the lack of this intellectual mechanism by doing thousands or, in the case of Deep Blue, many millions of times as much computation.

Sooner or later, AI research will overcome this scandalous weakness.

Q. Don’t some people say that AI is a bad idea?

A. The philosopher John Searle says that the idea of a non-biological machine being intelligent is incoherent. The philosopher Hubert Dreyfus says that AI is impossible. The computer scientist Joseph Weizenbaum says the idea is obscene, anti-human and immoral. Various people have said that since artificial intelligence hasn’t reached human level by now, it must be impossible. Still other people are disappointed that companies they invested in went bankrupt.

Q. Aren’t computability theory and computational complexity the keys to AI? [Note to the layman and beginners in computer science: These are quite technical branches of mathematical logic and computer science, and the answer to the question has to be somewhat technical.]

A. No. These theories are relevant but don’t address the fundamental problems of AI.

In the 1930s mathematical logicians, especially Kurt Gödel and Alan Turing, established that there did not exist algorithms that were guaranteed to solve all problems in certain important mathematical domains. Whether a sentence of first order logic is a theorem is one example, and whether a polynomial equation in several variables has integer solutions is another. Humans solve problems in these domains all the time, and this has been offered as an argument (usually with some decorations) that computers are intrinsically incapable of doing what people do. However, people can’t guarantee to solve arbitrary problems in these domains either.

In the 1960s computer scientists, especially Steve Cook and Richard Karp developed the theory of NP-complete problem domains. Problems in these domains are solvable, but seem to take time exponential in the size of the problem. Which sentences of propositional calculus are satisfiable is a basic example of an NP-complete problem domain. Humans often solve problems in NP-complete domains in times much shorter than is guaranteed by the general algorithms, but can’t solve them quickly in general.
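
To make the exponential cost concrete, here is a small, deliberately naive brute-force satisfiability checker (a sketch in Python; the clause encoding is invented for this example, not a standard format). With n propositional variables it may have to try all 2**n truth assignments, which is exactly the exponential behavior described above.

from itertools import product

def is_satisfiable(clauses, variables):
    # Each clause is a list of (variable, polarity) literals; a clause is true
    # if at least one literal matches the assignment. Try all 2**n assignments.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == polarity for v, polarity in clause)
               for clause in clauses):
            return True
    return False

# (x or y) and (not x or y) and (x or not y)
clauses = [[("x", True), ("y", True)],
           [("x", False), ("y", True)],
           [("x", True), ("y", False)]]
print(is_satisfiable(clauses, ["x", "y"]))  # True: x = y = True satisfies all clauses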

What is important for AI is to have algorithms as capable as people at solving problems. The identification of subdomains for which good algorithms exist is important, but a lot of AI problem solvers are not associated with readily identified subdomains.

The theory of the difficulty of general classes of problems is called computational complexity. So far this theory hasn’t interacted with AI as much as might have been hoped. Success in problem solving by humans and by AI programs seems to rely on properties of problems and problem solving methods that neither the complexity researchers nor the AI community have been able to identify precisely.

Algorithmic complexity theory as developed by Solomonoff, Kolmogorov and Chaitin (independently of one another) is also relevant. It defines the complexity of a symbolic object as the length of the shortest program that will generate it. Proving that a candidate program is the shortest or close to the shortest is an unsolvable problem, but representing objects by short programs that generate them should often be illuminating even when you can’t prove that the program is the shortest.
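
The shortest-program length itself cannot be computed, but a general-purpose compressor gives a crude, computable upper bound on this kind of complexity. The Python sketch below (an illustration, not part of the theory's formal apparatus) shows that a highly regular string compresses to far fewer bytes than a pseudo-random string of the same length.

import random
import zlib

def description_length(s):
    # Length of a zlib-compressed encoding: a rough upper bound, up to constants,
    # on the length of the shortest program that generates the string.
    return len(zlib.compress(s.encode("utf-8"), 9))

regular = "ab" * 500                       # generated by a very short program
random.seed(0)
irregular = "".join(random.choice("ab") for _ in range(1000))

print(description_length(regular))         # small: the regularity is easy to describe
print(description_length(irregular))       # larger: no short description is found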

Branches of AI

Q. What are the branches of AI?

A. Here’s a list, but some branches are surely missing, because no-one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.

logical AI

What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96] lists some of the concepts involved in logical AI. [Sha97] is an important text.
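
A minimal sketch of this style of program, using propositional facts and simple (premises, conclusion) rules with forward chaining; the toy facts, rule format, and function names are invented for illustration, and systems in the spirit of [McC59] use far richer logics.

# Facts about the specific situation, and rules of the form (premises, conclusion).
facts = {"at(home)", "hungry"}
rules = [
    ({"hungry", "at(home)"}, "should(cook)"),
    ({"should(cook)", "no_food"}, "should(shop)"),
]

def forward_chain(facts, rules):
    # Repeatedly derive every conclusion whose premises are all already known.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# The program "decides what to do" by inferring which actions are appropriate.
derived = forward_chain(facts, rules)
print(sorted(c for c in derived if c.startswith("should(")))  # ['should(cook)']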

search

AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.

pattern recognition

When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
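
A toy version of the simplest case is matching a fixed template against a small binary "image"; the grid and the "eyes and nose" template below are invented for illustration.

def find_pattern(image, pattern):
    # Slide the template over the image and record every exact match position.
    matches = []
    ph, pw = len(pattern), len(pattern[0])
    for r in range(len(image) - ph + 1):
        for c in range(len(image[0]) - pw + 1):
            if all(image[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                matches.append((r, c))
    return matches

image = [[0, 0, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 1, 0],
         [0, 1, 0, 1]]
pattern = [[1, 0, 1],
           [0, 1, 0]]                 # a crude "two eyes above a nose" template
print(find_pattern(image, pattern))   # [(1, 1)]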

representation

Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.

inference

From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonic increasing function of the premises.
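
The bird example can be written out in a few lines of Python (a sketch; the predicate names are invented). The point is that adding a premise removes a conclusion, which monotonic deduction never does.

def conclusions(facts):
    # Default rule: a bird flies unless something we know blocks the default.
    derived = set(facts)
    if "bird" in derived and "penguin" not in derived:
        derived.add("flies")               # defeasible conclusion
    if "penguin" in derived:
        derived.add("does_not_fly")
    return derived

print(conclusions({"bird"}))               # {'bird', 'flies'}
print(conclusions({"bird", "penguin"}))    # 'flies' has been withdrawn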

common sense knowledge and reasoning

This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.

learning from experience

Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.

planning

Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
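
A toy sketch of the simplest case: each action has preconditions, facts it adds, and facts it deletes, and a breadth-first search over world states returns a shortest sequence of actions reaching the goal. The actions and fact names below are invented for illustration.

from collections import deque

# Each action: name, preconditions, facts added, facts deleted.
ACTIONS = [
    ("walk_to_shop", {"at_home"}, {"at_shop"}, {"at_home"}),
    ("buy_food", {"at_shop", "has_money"}, {"has_food"}, {"has_money"}),
    ("walk_home", {"at_shop"}, {"at_home"}, {"at_shop"}),
]

def plan(initial, goal):
    # Breadth-first search over sets of facts; returns a shortest action sequence.
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:
            return actions
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                new_state = frozenset((state - delete) | add)
                if new_state not in visited:
                    visited.add(new_state)
                    frontier.append((new_state, actions + [name]))
    return None

print(plan({"at_home", "has_money"}, {"has_food", "at_home"}))
# ['walk_to_shop', 'buy_food', 'walk_home']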

epistemology

This is a study of the kinds of knowledge that are required for solving problems in the world.

ontology

Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology begins in the 1990s.

heuristics

A heuristic is a way of trying to discover something or an idea imbedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful. [My opinion].
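
One common use of a heuristic function is to steer search toward a goal. The sketch below (an invented example) runs greedy best-first search on a small grid, always expanding the node that the Manhattan-distance heuristic rates as closest to the goal.

import heapq

def manhattan(a, b):
    # Heuristic: estimated distance from node a to the goal b.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_best_first(start, goal, blocked, size=5):
    # Always expand the frontier node the heuristic says is closest to the goal.
    frontier = [(manhattan(start, goal), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in visited):
                visited.add(nxt)
                heapq.heappush(frontier, (manhattan(nxt, goal), nxt, path + [nxt]))
    return None

print(greedy_best_first((0, 0), (4, 4), blocked={(2, 2), (3, 2), (2, 3)}))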

genetic programming

Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations.
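
A stripped-down sketch of the idea in Python: selection plus random mutation of small arithmetic expression trees. Real genetic programming also recombines programs by crossover, which this toy omits, and the hidden target function and all parameters here are invented for illustration.

import random

random.seed(1)
OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def random_tree(depth=3):
    # Grow a random expression tree over the variable x and small constants.
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.7 else random.randint(0, 3)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Squared error against the hidden target x*x + x on a few sample points.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace a randomly chosen subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth=2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:50]                     # keep the fittest programs
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print(best, fitness(best))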

Applications of AI

Q. What are the applications of AI?

A. Here are some.

game playing

You can buy machines that can play master level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation–looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
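
What "brute force" means here is exhaustive search of the game tree. Chess is far too large to show, so the sketch below (an invented toy, not a chess engine) applies minimax to a trivial take-away game: players alternately remove 1 or 2 objects from a pile, and whoever takes the last object wins.

def minimax(pile, maximizing):
    # Value of the position from the maximizing player's point of view:
    # +1 if the maximizing player can force a win, -1 otherwise.
    if pile == 0:
        # The previous player took the last object and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

for pile in range(1, 10):
    result = "win" if minimax(pile, True) == 1 else "loss"
    print(pile, "objects:", result, "for the player to move")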

speech recognition

In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information by a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.

understanding natural language

Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.

computer vision

The world is composed of three-dimensional objects, but the inputs to the human eye and computers’ TV cameras are two dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.

expert systems

A “knowledge engineer” interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.

heuristic classification

One of the most feasible kinds of expert system given the present knowledge of AI is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
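
A toy sketch of the credit-card example: several sources of information are combined by hand-written rules and weights to place the purchase into one of a fixed set of categories. The field names, weights, and thresholds below are invented for illustration, not taken from any deployed system.

def classify_purchase(purchase):
    # Combine several information sources into one of three fixed categories.
    score = 0
    if purchase["owner_payment_record"] == "good":
        score += 2
    if purchase["amount"] > 5 * purchase["owner_typical_amount"]:
        score -= 2                              # unusually large for this cardholder
    if purchase["prior_fraud_at_merchant"]:
        score -= 3
    if score >= 2:
        return "accept"
    if score >= 0:
        return "refer to a human reviewer"
    return "decline"

print(classify_purchase({
    "owner_payment_record": "good",
    "amount": 40.0,
    "owner_typical_amount": 55.0,
    "prior_fraud_at_merchant": False,
}))                                             # accept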

More questions

Q. How is AI research done?

A. AI research has both theoretical and experimental sides. The experimental side has both basic and applied aspects.

There are two main lines of research. One is biological, based on the idea that since humans are intelligent, AI should study humans and imitate their psychology or physiology. The other is phenomenal, based on studying and formalizing common sense facts about the world and the problems that the world presents to the achievement of goals. The two approaches interact to some extent, and both should eventually succeed. It is a race, but both racers seem to be walking.

Q. What should I study before or while learning AI?

A. Study mathematics, especially mathematical logic. The more you learn about science in general the better. For the biological approaches to AI, study psychology and the physiology of the nervous system. Learn some programming languages, at least C, Lisp and Prolog. It is also a good idea to learn one basic machine language. Jobs are likely to depend on knowing the languages currently in fashion. In the late 1990s, these include C++ and Java.

Q. What is a good textbook on AI?

A. Artificial Intelligence by Stuart Russell and Peter Norvig (Prentice Hall) was the most commonly used textbook in 1997. The general views expressed there do not exactly correspond to those of this essay. Artificial Intelligence: A New Synthesis by Nils Nilsson (Morgan Kaufmann) may be easier to read.

Q. What organizations and publications are concerned with AI?

A. The American Association for Artificial Intelligence (AAAI), the European Coordinating Committee for Artificial Intelligence (ECCAI) and the Society for Artificial Intelligence and Simulation of Behavior (AISB) are scientific societies concerned with AI research. The Association for Computing Machinery (ACM) has a special interest group on artificial intelligence SIGART.

The International Joint Conference on AI (IJCAI) is the main international conference. The AAAI runs a US National Conference on AI. Electronic Transactions on Artificial Intelligence, Artificial Intelligence, and Journal of Artificial Intelligence Research, and IEEE Transactions on Pattern Analysis and Machine Intelligence are four of the main journals publishing AI research papers. I have not yet found everything that should be in this paragraph.
 
Page of Positive Reviews lists papers that experts have found important.
 
Funding a Revolution: Government Support for Computing Research, by a committee of the National Research Council, covers support for AI research in Chapter 9.

References


Den98 Daniel Dennett. Brainchildren: Essays on Designing Minds. MIT Press, 1998.

Jen98 Arthur R. Jensen. Does IQ matter? Commentary, pages 20-21, November 1998. The reference is just to Jensen’s comment-one of many.

McC59 John McCarthy. Programs with Common Sense. In Mechanisation of Thought Processes, Proceedings of the Symposium of the National Physical Laboratory, pages 77-84, London, U.K., 1959. Her Majesty’s Stationery Office. Reprinted in McC90.

McC89 John McCarthy. Artificial Intelligence, Logic and Formalizing Common Sense. In Richmond Thomason, editor, Philosophical Logic and Artificial Intelligence. Kluwer Academic, 1989.

McC96 John McCarthy. Concepts of Logical AI, 1996. Web only for now but may be referenced.

Mit97 Tom Mitchell. Machine Learning. McGraw-Hill, 1997.

Sha97 Murray Shanahan. Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. MIT Press, 1997.

Tur50 Alan Turing. Computing Machinery and Intelligence. Mind, 59:433-460, 1950.

Cognitive psychology

From Wikipedia, the free encyclopedia

Cognitive psychology is the study of mental processes such as "attention, language use, memory, perception, problem solving, creativity, and thinking". Much of the work derived from cognitive psychology has been integrated into various other modern disciplines of psychological study, including educational psychology, social psychology, personality psychology, abnormal psychology, developmental psychology, and economics.

History

Philosophically, ruminations of the human mind and its processes have been around since the times of the ancient Greeks. In 387 BCE, Plato is known to have suggested that the brain was the seat of the mental processes.[2] In 1637, René Descartes posited that humans are born with innate ideas, and forwarded the idea of mind-body dualism, which would come to be known as substance dualism (essentially the idea that the mind and the body are two separate substances).[3] From that time, major debates ensued through the 19th century regarding whether human thought was solely experiential (empiricism), or included innate knowledge (nativism). Some of those involved in this debate included George Berkeley and John Locke on the side of empiricism, and Immanuel Kant on the side of nativism.[4]

With the philosophical debate continuing, the mid to late 19th century was a critical time in the development of psychology as a scientific discipline. Two discoveries that would later play substantial roles in cognitive psychology were Paul Broca's discovery of the area of the brain largely responsible for language production,[3] and Carl Wernicke's discovery of an area thought to be mostly responsible for comprehension of language.[5] Both areas were subsequently formally named after their discoverers, and disruptions of an individual's language production or comprehension due to trauma or malformation in these areas have come to be commonly known as Broca's aphasia and Wernicke's aphasia.

From the 1920s to the 1950s, the main approach to psychology was behaviorism. Initially, its adherents viewed mental events such as thoughts, ideas, attention, and consciousness as unobservables, hence outside the realm of a science of psychology. One pioneer of cognitive psychology, who worked outside the boundaries (both intellectual and geographical) of behaviorism was Jean Piaget. From 1926 to the 1950s and into the 1980s, he studied the thoughts, language, and intelligence of children and adults.[6]

In the mid-20th century, three main influences arose that would inspire and shape cognitive psychology as a formal school of thought:
  • With the development of new warfare technology during WWII, the need for a greater understanding of human performance came to prominence. Problems such as how to best train soldiers to use new technology and how to deal with matters of attention while under duress became areas of need for military personnel. Behaviorism provided little if any insight into these matters and it was the work of Donald Broadbent, integrating concepts from human performance research and the recently developed information theory, that forged the way in this area.[4]
  • Developments in computer science would lead to parallels being drawn between human thought and the computational functionality of computers, opening entirely new areas of psychological thought. Allen Newell and Herbert Simon spent years developing the concept of artificial intelligence (AI) and later worked with cognitive psychologists regarding the implications of AI. This encouraged a conceptualization of mental functions patterned on the way that computers handled such things as memory storage and retrieval,[4] and it opened an important doorway for cognitivism.
  • Noam Chomsky's 1959 critique[7] of behaviorism, and empiricism more generally, initiated what would come to be known as the "cognitive revolution". Inside psychology, in criticism of behaviorism, J. S. Bruner, J. J. Goodnow and G. A. Austin wrote "A Study of Thinking" in 1956. In 1960, G. A. Miller, E. Galanter and K. Pribram wrote their famous "Plans and the Structure of Behavior". The same year, Bruner and Miller founded the Harvard Center for Cognitive Studies, which institutionalized the revolution and launched the field of cognitive science.
  • Formal recognition of the field involved the establishment of research institutions such as George Mandler's Center for Human Information Processing in 1964. Mandler described the origins of cognitive psychology in a 2002 article in the Journal of the History of the Behavioral Sciences.[8]
Ulric Neisser put the term "cognitive psychology" into common use through his book Cognitive Psychology, published in 1967.[9] Neisser's definition of "cognition" illustrates the then-progressive concept of cognitive processes:
The term "cognition" refers to all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation, as in images and hallucinations. ... Given such a sweeping definition, it is apparent that cognition is involved in everything a human being might possibly do; that every psychological phenomenon is a cognitive phenomenon. But although cognitive psychology is concerned with all human activity rather than some fraction of it, the concern is from a particular point of view. Other viewpoints are equally legitimate and necessary. Dynamic psychology, which begins with motives rather than with sensory input, is a case in point. Instead of asking how a man's actions and experiences result from what he saw, remembered, or believed, the dynamic psychologist asks how they follow from the subject's goals, needs, or instincts.[9]

Mental processes

The main focus of cognitive psychologists is on the mental processes that affect behavior. Those processes include, but are not limited to, the following:

Attention

The psychological definition of attention is "a state of focused awareness on a subset of the available perceptual information".[10] A key function of attention is to identify irrelevant data and filter it out, enabling significant data to be distributed to the other mental processes.[4] For example, the human brain may simultaneously receive auditory, visual, olfactory, taste, and tactile information. The brain is able to handle only a small subset of this information, and this is accomplished through the attentional processes.[4]
Attention can be divided into two major attentional systems: exogenous control and endogenous control.[11] Exogenous control works in a bottom-up fashion and is responsible for alertness, arousal, the orienting reflex, spotlight attention, and pop-out effects.[11] Endogenous control works top-down and is the more deliberate attentional system, responsible for selective attention, divided attention, local and global attention, and conscious processing.[11]

Attention tends to be either visual or auditory. One major focal point relating to attention within the field of cognitive psychology is the concept of divided attention. A number of early studies dealt with the ability of a person wearing headphones to discern meaningful conversation when presented with different messages in each ear; this is known as the dichotic listening task.[4] Key findings involved an increased understanding of the mind's ability to focus on one message while still being somewhat aware of information taken in by the ear that is not consciously attended to. For example, participants (wearing earphones) may be told that they will be hearing separate messages in each ear and that they are expected to attend only to information related to basketball. When the experiment starts, the message about basketball will be presented to the left ear and non-relevant information will be presented to the right ear. At some point the message related to basketball will switch to the right ear and the non-relevant information to the left ear. When this happens, the listener is usually able to repeat the entire message at the end, having attended to the left or right ear only when it was appropriate.[4] The ability to attend to one conversation in the face of many is known as the cocktail party effect.

Other major findings are that participants cannot comprehend both passages; that, when shadowing one passage, they cannot report the content of the unattended message; and that they can shadow a message better if the pitches in the two ears are different.[12] However, while deep processing doesn't occur, early sensory processing does. Subjects did notice if the pitch of the unattended message changed or if it ceased altogether, and some even oriented to the unattended message if their name was mentioned.[12]

Memory

The two main types of memory are short-term memory and long-term memory; however, short-term memory has increasingly come to be understood as working memory. Cognitive psychologists often study memory in terms of working memory.

Working memory

Though working memory is often thought of as just short-term memory, it is more clearly defined as the ability to remember information in the face of distraction. The famous memory capacity of 7 plus or minus 2 items reflects a combination of working memory and long-term memory.

One of the classic experiments is by Ebbinghaus, who found the serial position effect, in which information from the beginning and end of a list of random words is better recalled than information in the middle.[13] This primacy and recency effect varies in intensity based on list length.[13] Its typical U-shaped curve can be disrupted by an attention-grabbing word; this is known as the Von Restorff effect.

The Baddeley & Hitch Model of Working Memory

Many models of working memory have been made. One of the most highly regarded is the Baddeley and Hitch model of working memory. It takes into account both visual and auditory stimuli, long-term memory to use as a reference, and a central processor to combine and understand it all.

A large part of memory is forgetting, and there is a large debate among psychologists of decay theory versus interference theory.

Long-term memory

Modern conceptions of memory are usually about long-term memory and break it down into three main sub-classes. These three classes are somewhat hierarchical in nature, in terms of the level of conscious thought related to their use.[14]
  • Procedural memory is memory for the performance of particular types of action. It is often activated on a subconscious level, or at most requires a minimal amount of conscious effort. Procedural memory includes stimulus-response-type information, which is activated through association with particular tasks, routines, etc. A person is using procedural knowledge when they seemingly "automatically" respond in a particular manner to a particular situation or process.[14] An example is driving a car.
  • Semantic memory is the encyclopedic knowledge that a person possesses. Knowledge such as what the Eiffel Tower looks like, or the name of a friend from sixth grade, represents semantic memory. Access to semantic memory ranges from slightly to extremely effortful, depending on a number of variables including, but not limited to, recency of encoding of the information, the number of associations it has to other information, frequency of access, and levels of meaning (how deeply it was processed when it was encoded).[14]
  • Episodic memory is the memory of autobiographical events that can be explicitly stated. It contains all memories that are temporal in nature, such as when one last brushed one's teeth or where one was when one heard about a major news event. Episodic memory typically requires the deepest level of conscious thought, as it often pulls together semantic memory and temporal information to formulate the entire memory.[14]
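As mentioned above, a brief sketch can make the three sub-classes concrete. The dictionary below is an invented toy taxonomy; the example entries are illustrative only, and the look-up function is not a model of actual retrieval.

    # Toy sketch distinguishing the three long-term memory sub-classes.
    # The example entries are invented for illustration only.

    long_term_memory = {
        "procedural": {            # largely subconscious, stimulus-response
            "drive a car": "sequence of well-practised motor routines",
        },
        "semantic": {              # encyclopedic knowledge; effort varies with encoding depth
            "Eiffel Tower": "iron lattice tower in Paris",
        },
        "episodic": {              # autobiographical, temporally tagged events
            "last brushed teeth": "this morning, before leaving the house",
        },
    }

    def recall(memory_type, cue):
        """Look up a cue within one sub-class of long-term memory."""
        return long_term_memory.get(memory_type, {}).get(cue, "not recalled")

    print(recall("semantic", "Eiffel Tower"))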

Perception

Perception involves both the physical senses (sight, smell, hearing, taste, touch, and proprioception) as well as the cognitive processes involved in interpreting those senses. Essentially, it is how people come to understand the world around them through interpretation of stimuli.[15] Early psychologists like Edward B. Titchener began to work with perception in their structuralist approach to psychology. Structuralism dealt heavily with trying to reduce human thought (or "consciousness," as Titchener would have called it) into its most basic elements by gaining understanding of how an individual perceives particular stimuli.[16]

Current perspectives on perception within cognitive psychology tend to focus on particular ways in which the human mind interprets stimuli from the senses and how these interpretations affect behavior. An example of the way in which modern psychologists approach the study of perception is the research being done at the Center for Ecological Study of Perception and Action at the University of Connecticut (CESPA). One study at CESPA concerns ways in which individuals perceive their physical environment and how that influences their navigation through that environment.[17]

Language

Psychologists have had an interest in the cognitive processes involved with language that dates back to the 1870s, when Carl Wernicke proposed a model for the mental processing of language.[18] Current work on language within the field of cognitive psychology varies widely. Cognitive psychologists may study language acquisition,[19] individual components of language formation (like phonemes),[20] how language use is involved in mood,[21] or numerous other related areas.

Broca's and Wernicke's areas of the brain, which are critical in language

Significant work has been done recently with regard to understanding the timing of language acquisition and how it can be used to determine whether a child has, or is at risk of developing, a learning disability. A study from 2012 showed that while this can be an effective strategy, it is important that those making evaluations include all relevant information when making their assessments. Factors such as individual variability, socioeconomic status, short-term and long-term memory capacity, and others must be included in order to make valid assessments.[19]

Metacognition

Metacognition, in a broad sense, is the thoughts that a person has about their own thoughts. More specifically, metacognition includes things like:
  • How effective a person is at monitoring their own performance on a given task (self-regulation).
  • A person's understanding of their capabilities on particular mental tasks.
  • The ability to apply cognitive strategies.[22]
Much of the current study regarding metacognition within the field of cognitive psychology deals with its application within the area of education. Being able to increase a student's metacognitive abilities has been shown to have a significant impact on their learning and study habits.[23] One key aspect of this concept is the improvement of students' ability to set goals and self-regulate effectively to meet those goals. As a part of this process, it is also important to ensure that students are realistically evaluating their personal degree of knowledge and setting realistic goals (another metacognitive task).[24]

Common phenomena related to metacognition include:
  • Déjà vu: the feeling of a repeated experience
  • Cryptomnesia: generating a thought believing it to be unique when it is actually a memory of a past experience; also known as unconscious plagiarism
  • False fame effect: non-famous names can come to be judged as famous
  • Validity effect: statements seem more valid upon repeated exposure
  • Imagination inflation: imagining an event that did not occur increases confidence that it did occur

Modern

Modern perspectives on cognitive psychology generally address cognition as a dual process theory, expounded upon by Daniel Kahneman in 2011.[25] Kahneman further differentiated the two styles of processing, calling them intuition and reasoning. Intuition (or system 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning is based on formed habits and is very difficult to change or manipulate. Reasoning (or system 2) is slower and much more volatile, being subject to conscious judgments and attitudes.[25]
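One schematic way to picture the distinction is as two routes with different costs: a fast associative lookup that answers from habit, and a slower deliberate computation engaged only when the quick route has nothing to offer. The following sketch is purely illustrative; the stored associations, example questions, and function names are invented and do not come from Kahneman's work.

    # Illustrative sketch of dual-process routing: fast associative answers
    # (system 1) versus slow, deliberate computation (system 2).
    # The stored associations and example questions are invented.

    habitual_associations = {
        "capital of France": "Paris",   # over-learned, answered automatically
        "2 + 2": "4",
    }

    def system1(question):
        """Fast, automatic: return a habitual association if one exists."""
        return habitual_associations.get(question)

    def system2(question):
        """Slow, deliberate: work the answer out step by step."""
        if question == "17 * 24":
            partial = 17 * 20           # effortful decomposition stands in for reasoning
            return str(partial + 17 * 4)
        return "needs further deliberation"

    def answer(question):
        quick = system1(question)
        return quick if quick is not None else system2(question)

    print(answer("capital of France"))  # handled by the fast route
    print(answer("17 * 24"))            # falls through to the slow route

The only point of the routing function is that the fast route answers first and the slow route is consulted when habit fails, which is the core contrast the dual-process view draws.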

Applications

Abnormal psychology

Following the cognitive revolution, and as a result of many of the principal discoveries to come out of the field of cognitive psychology, the discipline of cognitive therapy evolved. Aaron T. Beck is generally regarded as the father of cognitive therapy.[26] His work in the areas of recognition and treatment of depression has gained worldwide recognition. In his 1987 book Cognitive Therapy of Depression, Beck puts forth three salient points in arguing for treating depression with therapy, or with therapy and antidepressants, rather than with a pharmacological approach alone:
1. Despite the prevalent use of antidepressants, the fact remains that not all patients respond to them. Beck cites (in 1987) that only 60 to 65% of patients respond to antidepressants, and recent meta-analyses (a statistical breakdown of multiple studies) show very similar numbers.[27]
2. Many of those who do respond to antidepressants end up not taking their medications, for various reasons. They may develop side-effects or have some form of personal objection to taking the drugs.
3. Beck posits that the use of psychotropic drugs may lead to an eventual breakdown in the individual's coping mechanisms. His theory is that the person essentially becomes reliant on the medication as a means of improving mood and fails to practice the coping techniques typically used by healthy individuals to alleviate the effects of depressive symptoms. By failing to do so, once the patient is weaned off the antidepressants, they are often unable to cope with normal levels of depressed mood and feel driven to resume use of the antidepressants.[28]

Social psychology

Many facets of modern social psychology have roots in research done within the field of cognitive psychology. Social cognition is a specific sub-set of social psychology that concentrates on processes that have been of particular focus within cognitive psychology, specifically applied to human interactions. Gordon B. Moskowitz defines social cognition as "... the study of the mental processes involved in perceiving, attending to, remembering, thinking about, and making sense of the people in our social world".[29]

The development of multiple social information processing (SIP) models has been influential in studies involving aggressive and anti-social behavior. Kenneth Dodge's SIP model is among the most empirically supported models relating to aggression. In his research, Dodge posits that children who possess a greater ability to process social information more often display higher levels of socially acceptable behavior. His model asserts that there are five steps an individual proceeds through when evaluating interactions with other individuals, and that how the person interprets cues is key to their reactionary process.[30]
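Such a staged model can be pictured as a simple pipeline. The sketch below uses one common textbook summary of the five steps (encoding cues, interpreting them, clarifying goals, generating candidate responses, and deciding on one); the function bodies and the example situation are invented placeholders rather than part of Dodge's formal model.

    # Illustrative pipeline based on a common five-step summary of a
    # social information processing (SIP) model; bodies are placeholders.

    def encode_cues(situation):
        return {"cue": situation}

    def interpret_cues(state):
        # e.g. deciding whether an ambiguous bump was hostile or accidental
        state["interpretation"] = "probably accidental"
        return state

    def clarify_goals(state):
        state["goal"] = "keep the interaction friendly"
        return state

    def generate_responses(state):
        state["options"] = ["ignore it", "ask what happened", "push back"]
        return state

    def decide_response(state):
        # a child who reads the cue as accidental tends to pick a prosocial option
        return "ask what happened" if state["interpretation"] == "probably accidental" else "push back"

    steps = [encode_cues, interpret_cues, clarify_goals, generate_responses]
    state = "another child bumps into you in the hallway"
    for step in steps:
        state = step(state)
    print(decide_response(state))

The decision step depends on the interpretation made earlier in the sequence, which illustrates the model's claim that how cues are interpreted is key to the eventual reaction.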

Developmental psychology

Many of the prominent names in the field of developmental psychology base their understanding of development on cognitive models. One of the major paradigms of developmental psychology, the Theory of Mind (ToM), deals specifically with the ability of an individual to effectively understand and attribute cognition to those around them. This concept typically becomes fully apparent in children between the ages of 4 and 6. Essentially, before the child develops ToM, they are unable to understand that those around them can have different thoughts, ideas, or feelings than themselves. The development of ToM is a matter of metacognition, or thinking about one's thoughts. The child must be able to recognize that they have their own thoughts and in turn, that others possess thoughts of their own.[31]


One of the foremost minds with regard to developmental psychology, Jean Piaget, focused much of his attention on cognitive development from birth through adulthood. Though there have been considerable challenges to parts of his stages of cognitive development, they remain a staple in the realm of education. Piaget's concepts and ideas predated the cognitive revolution but inspired a wealth of research in the field of cognitive psychology and many of his principles have been blended with modern theory to synthesize the predominant views of today.[32]

Educational psychology

Modern theories of education have applied many concepts that are focal points of cognitive psychology. Some of the most prominent concepts include:
  • Metacognition: Metacognition is a broad concept encompassing all manners of one's thoughts and knowledge about their own thinking. A key area of educational focus in this realm is related to self-monitoring, which relates highly to how well students are able to evaluate their personal knowledge and apply strategies to improve knowledge in areas in which they are lacking.[33]
  • Declarative knowledge and procedural knowledge: Declarative knowledge is a person's 'encyclopedic' knowledge base, whereas procedural knowledge is specific knowledge relating to performing particular tasks. The application of these cognitive paradigms to education attempts to augment a student's ability to integrate declarative knowledge into newly learned procedures in an effort to facilitate accelerated learning.[33]
  • Knowledge organization: Applications of cognitive psychology's understanding of how knowledge is organized in the brain have been a major focus within the field of education in recent years. The hierarchical method of organizing information, and how well that maps onto the brain's memory, are concepts that have proven extremely beneficial in classrooms (a brief sketch of such a hierarchy follows the list).[33]
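As referenced in the last item, hierarchical organization can be pictured as a concept tree in which specific concepts inherit the properties of more general ones. The tree below is an invented toy example; the categories, properties, and function are illustrative only, loosely in the spirit of classic semantic-network demonstrations.

    # Toy sketch of hierarchically organized knowledge (a concept tree).
    # The categories and properties are invented for illustration.

    knowledge = {
        "animal": {
            "properties": ["breathes", "moves"],
            "children": {
                "bird": {
                    "properties": ["has wings", "lays eggs"],
                    "children": {
                        "canary": {"properties": ["is yellow", "sings"], "children": {}},
                    },
                },
            },
        },
    }

    def properties_of(concept, tree=knowledge, inherited=None):
        """Collect a concept's own properties plus those inherited from ancestors."""
        inherited = inherited or []
        for name, node in tree.items():
            if name == concept:
                return inherited + node["properties"]
            deeper = properties_of(concept, node["children"], inherited + node["properties"])
            if deeper:
                return deeper
        return []

    print(properties_of("canary"))  # inherits from "bird" and "animal"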

Personality psychology

Cognitive therapeutic approaches have received considerable attention in the treatment of personality disorders in recent years. The approach focuses on the formation of what it holds to be faulty schemata, centered on judgmental biases and general cognitive errors.[34]

Cognitive psychology vs. cognitive science

The line between cognitive psychology and cognitive science can be blurry. The differentiation between the two is best understood in terms of cognitive psychology's relationship to applied psychology, and the understanding of psychological phenomena. Cognitive psychologists are often heavily involved in running psychological experiments involving human participants, with the goal of gathering information related to how the human mind takes in, processes, and acts upon inputs received from the outside world.[35] The information gained in this area is then often used in the applied field of clinical psychology.

One of the paradigms of cognitive psychology derived in this manner is that every individual develops schemata which motivate them to think or act in a particular way in the face of a particular circumstance. For example, most people have a schema for waiting in line. When approaching some type of service counter where people are waiting their turn, most people do not simply walk to the front of the line and butt in; their schema for that situation tells them to get in the back of the line. This applies to the field of abnormal psychology because individuals sometimes develop faulty schemata which lead them to consistently react in a dysfunctional manner. If a person has a schema that says "I am no good at making friends", they may become so reluctant to pursue interpersonal relationships that they become prone to seclusion.
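One way to picture a schema is as a stored script that maps a recognized situation onto an expected sequence of behaviour. The fragment below is an invented illustration of that idea rather than a formal model; the situations and scripts, including the deliberately faulty one, are made up for the example.

    # Invented illustration of schemata as situation-to-script mappings.

    schemata = {
        "service counter with a queue": [
            "find the back of the line",
            "wait your turn",
            "step forward when called",
        ],
        # A faulty schema leads to a consistently dysfunctional script.
        "opportunity to make a friend": [
            "assume it will go badly",
            "avoid the interaction",
        ],
    }

    def act(situation):
        """Return the behavioural script a schema supplies for a situation."""
        return schemata.get(situation, ["no schema: improvise"])

    print(act("service counter with a queue"))
    print(act("opportunity to make a friend"))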

Cognitive science is better understood as predominantly concerned with gathering data through research. It encompasses a much broader scope, with links to philosophy, linguistics, anthropology, neuroscience, and particularly artificial intelligence. It could be said that cognitive science provides the database of information that fuels the theory from which cognitive psychologists operate.[36] Cognitive scientists' research sometimes involves non-human subjects, allowing them to delve into areas which would come under ethical scrutiny if performed on human participants. For example, they may implant devices in the brains of rats to track the firing of neurons while the rat performs a particular task. Cognitive science is highly involved in the area of artificial intelligence and its application to the understanding of mental processes.

Criticisms

In the early years of cognitive psychology, behaviorist critics held that the empiricism it pursued was incompatible with the concept of internal mental states. Cognitive neuroscience, however, continues to gather evidence of direct correlations between physiological brain activity and putative mental states, endorsing the basis for cognitive psychology.[37]

Some observers have suggested that as cognitive psychology became a movement during the 1970s, the intricacies of the phenomena and processes it examined meant it also began to lose cohesion as a field of study. In Psychology: Pythagoras to Present, for example, John Malone writes: "Examinations of late twentieth-century textbooks dealing with "cognitive psychology", "human cognition", "cognitive science" and the like quickly reveal that there are many, many varieties of cognitive psychology and very little agreement about exactly what may be its domain."[3] This lack of cohesion produced competing models, such as those from decision making and behavioral science, that questioned information-processing approaches to cognitive functioning.

Operator (computer programming)

From Wikipedia, the free encyclopedia