Cognitive neuroscience is an academic field concerned with the scientific study of biological substrates underlying cognition,[1] with a specific focus on the neural substrates of mental processes. It addresses the question of how psychological/cognitive functions are produced by neural circuits in the brain. Cognitive neuroscience is a branch of both psychology and neuroscience, overlapping with disciplines such as physiological psychology, cognitive psychology and neuropsychology.[2] Cognitive neuroscience relies upon theories in cognitive science coupled with evidence from neuropsychology and computational modeling.[2]
Due to its multidisciplinary nature, cognitive neuroscientists may have various backgrounds. Other than the associated disciplines just mentioned, cognitive neuroscientists may have backgrounds in neurobiology, bioengineering, psychiatry, neurology, physics, computer science, linguistics, philosophy and mathematics.
Methods employed in cognitive neuroscience include experimental paradigms from psychophysics and cognitive psychology, functional neuroimaging, electrophysiology, cognitive genomics and behavioral genetics. Studies of patients with cognitive deficits due to brain lesions constitute an important aspect of cognitive neuroscience.
Theoretical approaches include computational neuroscience and cognitive psychology.
Cognitive neuroscience can look at the effects of damage to the brain and the subsequent changes in thought processes that result from altered neural circuitry. Cognitive abilities grounded in brain development are studied under the subfield of developmental cognitive neuroscience.
Historical origins
Cognitive neuroscience is an interdisciplinary area of study that has emerged from many other fields, perhaps most significantly neuroscience, psychology, and computer science.[3] There were several stages in these disciplines that changed the way researchers approached their investigations and that led to the field becoming fully established.
Although the task of cognitive neuroscience is to describe how the brain creates the mind, historically it has progressed by investigating how a certain area of the brain supports a given mental faculty. However, early efforts to subdivide the brain proved problematic. The phrenologist movement failed to supply a scientific basis for its theories and has since been rejected. The aggregate field view, meaning that all areas of the brain participated in all behavior,[4] was also rejected as a result of brain mapping, which began with Hitzig and Fritsch’s experiments[5] and eventually developed through methods such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI).[6] Gestalt theory, neuropsychology, and the cognitive revolution were major turning points in the creation of cognitive neuroscience as a field, bringing together ideas and techniques that enabled researchers to make more links between behavior and its neural substrates.
Origins in philosophy
Philosophers have always been interested in the mind. For example, Aristotle thought the brain was the body’s cooling system and that the capacity for intelligence was located in the heart. It has been suggested that the first person to believe otherwise was the Roman physician Galen in the second century AD, who declared that the brain was the source of mental activity,[7] although this has also been credited to Alcmaeon.[8] Psychology, a major contributing field to cognitive neuroscience, emerged from philosophical reasoning about the mind.[9]
19th century
Phrenology
One of the predecessors to cognitive neuroscience was phrenology, a pseudoscientific approach that claimed behavior could be determined from the shape of the skull. In the early 19th century, Franz Joseph Gall and J. G. Spurzheim believed that the human brain was localized into approximately 35 different sections. In his book, The Anatomy and Physiology of the Nervous System in General, and of the Brain in Particular, Gall claimed that a larger bump in one of these areas meant that that area of the brain was used more frequently by that person. This theory gained significant public attention, leading to the publication of phrenology journals and the creation of phrenometers, which measured the bumps on a human subject's head. While phrenology remained a fixture at fairs and carnivals, it did not enjoy wide acceptance within the scientific community.[10] The major criticism of phrenology is that researchers were not able to test its theories empirically.[3]
Localizationist view
The localizationist view was concerned with mental abilities being localized to specific areas of the brain rather than with what the characteristics of the abilities were or how to measure them.[3] Studies performed in Europe, such as those of John Hughlings Jackson, supported this view. Jackson studied patients with brain damage, particularly those with epilepsy. He discovered that epileptic patients often made the same clonic and tonic muscle movements during their seizures, leading him to believe that the seizures must originate in the same place every time. Jackson proposed that specific functions were localized to specific areas of the brain,[11] which was critical to the future understanding of the brain lobes.
Aggregate field view
According to the aggregate field view, all areas of the brain participate in every mental function.[4] Pierre Flourens, a French experimental physiologist, challenged the localizationist view using animal experiments.[3] He discovered that removing the cerebellum in rabbits and pigeons affected their sense of muscular coordination, and that all cognitive functions were disrupted in pigeons when the cerebral hemispheres were removed. From this he concluded that the cerebral cortex, cerebellum, and brainstem functioned together as a whole.[12] His approach has been criticised on the basis that the tests were not sensitive enough to detect selective deficits had they been present.[3]
Emergence of neuropsychology
Perhaps the first serious attempts to localize mental functions to specific locations in the brain were made by Broca and Wernicke, mostly by studying the effects of injuries to different parts of the brain on psychological functions.[13] In 1861, French neurologist Paul Broca came across a man who was able to understand language but unable to speak. The man could only produce the sound "tan". It was later discovered that the man had damage to an area of his left frontal lobe now known as Broca's area. Carl Wernicke, a German neurologist, found a patient who could speak fluently but nonsensically. The patient had been the victim of a stroke and could not understand spoken or written language. This patient had a lesion in the area where the left parietal and temporal lobes meet, now known as Wernicke's area. These cases, which suggested that lesions caused specific behavioral changes, strongly supported the localizationist view.
Mapping the brain
In 1870, German physicians Eduard Hitzig and Gustav Fritsch published their findings about the behavior of animals. Hitzig and Fritsch ran an electrical current through the cerebral cortex of a dog, causing different muscles to contract depending on which areas of the brain were electrically stimulated. This led to the proposition that individual functions are localized to specific areas of the brain rather than to the cerebrum as a whole, as the aggregate field view suggests.[5] Brodmann was also an important figure in brain mapping; his experiments based on Franz Nissl’s tissue staining techniques divided the brain into fifty-two areas.
20th century
Cognitive revolution
At the start of the 20th century, attitudes in America were characterised by pragmatism, which led to a preference for behaviorism as the primary approach in psychology. J.B. Watson was a key figure with his stimulus-response approach. By conducting experiments on animals, he aimed to be able to predict and control behaviour. Behaviourism eventually failed because it could not provide a realistic psychology of human action and thought: it was too grounded in physical concepts to explain phenomena like memory and thought. This led to what is often termed the "cognitive revolution".[14]
Neuron doctrine
In the late 19th and early 20th centuries, Santiago Ramón y Cajal and Camillo Golgi worked on the structure of the neuron. Golgi developed a silver staining method that could entirely stain several cells in a particular area, leading him to believe that neurons were directly connected with each other in one cytoplasm. Cajal challenged this view after staining areas of the brain that had less myelin and discovering that neurons were discrete cells. Cajal also discovered that cells transmit electrical signals down the neuron in one direction only. Both Golgi and Cajal were awarded the Nobel Prize in Physiology or Medicine in 1906 for this work on the neuron doctrine.[15]
Mid-late 20th century
Several findings in the 20th century continued to advance the field, such as the discovery of ocular dominance columns, the recording of single nerve cells in animals, and the coordination of eye and head movements. Experimental psychology was also significant in the foundation of cognitive neuroscience. Some particularly important results were the demonstration that some tasks are accomplished via discrete processing stages, the study of attention, and the notion that behavioural data by themselves do not provide enough information to explain mental processes. As a result, some experimental psychologists began to investigate the neural bases of behaviour. Wilder Penfield built up maps of primary sensory and motor areas of the brain by stimulating the cortices of patients during surgery. Sperry and Gazzaniga’s work on split-brain patients in the 1950s and 1960s was also instrumental in the progress of the field.[7]
Brain mapping
New brain mapping technology, particularly fMRI and PET, allowed researchers to investigate the experimental strategies of cognitive psychology by observing brain function. Although this is often thought of as a new method (most of the technology is relatively recent), the underlying principle goes back as far as 1878, when blood flow was first associated with brain function.[6] Angelo Mosso, a 19th-century Italian physiologist, had monitored the pulsations of the adult brain through neurosurgically created bony defects in the skulls of patients. He noted that when the subjects engaged in tasks such as mathematical calculations, the pulsations of the brain increased locally. Such observations led Mosso to conclude that blood flow of the brain followed function.[6]
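The "blood flow follows function" principle is also the basis of the standard forward model in modern fMRI analysis. As a rough illustration (a sketch, not a method described in this article's sources), the snippet below models neural activity during a task block as a boxcar and convolves it with a canonical double-gamma hemodynamic response function (HRF) to predict the sluggish blood-oxygenation (BOLD) signal a scanner would measure; the HRF parameters are conventional defaults, not values from the article.

```python
import numpy as np
from math import gamma

def hrf(t, a1=6.0, a2=16.0, b1=1.0, b2=1.0, c=1.0 / 6.0):
    """Canonical double-gamma HRF sampled at times t (seconds).
    Parameter values are conventional defaults, not from the article."""
    peak = (t ** (a1 - 1) * b1 ** a1 * np.exp(-b1 * t)) / gamma(a1)
    undershoot = (t ** (a2 - 1) * b2 ** a2 * np.exp(-b2 * t)) / gamma(a2)
    return peak - c * undershoot

tr = 1.0                            # sampling interval (seconds)
t = np.arange(0, 32, tr)            # HRF support: ~32 s
task = np.zeros(120)                # a 120 s run
task[20:40] = 1.0                   # one 20 s task block (e.g. mental arithmetic)
bold = np.convolve(task, hrf(t))[: task.size]  # predicted BOLD time course
print(bold.round(2))
```

A predicted time course like this is what a general linear model regresses against each voxel's measured signal to localize task-related activity.

Emergence of a new discipline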
Birth of cognitive science
On September 11, 1956, a large-scale meeting of cognitivists took place at the Massachusetts Institute of Technology. George A. Miller presented his "The Magical Number Seven, Plus or Minus Two" paper, while Noam Chomsky and Newell & Simon presented their findings on computer science. Ulric Neisser commented on many of the findings at this meeting in his 1967 book Cognitive Psychology. Use of the term "psychology" waned in the 1950s and 1960s, and the emerging field came to be referred to as "cognitive science". Researchers such as Miller began to focus on the representation of language rather than general behavior. David Marr concluded that any cognitive process should be understood at three levels of analysis: the computational, algorithmic/representational, and physical levels.[16]
Combining neuroscience and cognitive science
Before the 1980s, interaction between neuroscience and cognitive science was scarce.[17] The term "cognitive neuroscience" was coined by George Miller and Michael Gazzaniga toward the end of the 1970s.[17] Cognitive neuroscience began to integrate the theoretical ground newly laid in cognitive science between the 1950s and 1960s with approaches from experimental psychology, neuropsychology and neuroscience. (Neuroscience was not established as a unified discipline until 1971.[18]) In the very late 20th century, new technologies evolved that are now the mainstay of the methodology of cognitive neuroscience, including TMS (1985) and fMRI (1991). Earlier methods used in cognitive neuroscience include EEG (human EEG, 1920) and MEG (1968). Occasionally cognitive neuroscientists utilize other brain imaging methods such as PET and SPECT. An emerging technique is NIRS, which uses light absorption to calculate changes in oxy- and deoxyhemoglobin in cortical areas. In some animals, single-unit recording can be used. Other methods include microneurography, facial EMG, and eye tracking. Integrative neuroscience attempts to consolidate data in databases and to form unified descriptive models from various fields and scales: biology, psychology, anatomy, and clinical practice.[19]
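As a sketch of how a NIRS measurement is turned into hemoglobin concentration changes (an illustration, not a procedure described in this article's sources): the modified Beer-Lambert law relates the optical density change at each wavelength to the two chromophores, so measuring at two wavelengths gives two equations in two unknowns. All numbers below (extinction coefficients, pathlength factor, optical densities) are placeholder values for illustration, not tabulated constants.

```python
import numpy as np

# Modified Beer-Lambert law: dOD(lambda) = (eps_HbO*dHbO + eps_HbR*dHbR) * d * DPF.
# Two wavelengths are used, one below and one above the ~805 nm isosbestic point.
eps = np.array([[1.5, 3.8],          # [eps_HbO, eps_HbR] at 760 nm (placeholders)
                [2.5, 1.8]])         # [eps_HbO, eps_HbR] at 850 nm (placeholders)
d, dpf = 3.0, 6.0                    # source-detector distance (cm), differential pathlength factor
delta_od = np.array([0.012, 0.020])  # hypothetical measured optical density changes

# Solve (eps * d * DPF) @ [dHbO, dHbR] = dOD for the concentration changes.
d_hbo, d_hbr = np.linalg.solve(eps * d * dpf, delta_od)
print(f"delta HbO = {d_hbo:+.2e}, delta HbR = {d_hbr:+.2e}")
```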
Recent trends
Recently, the focus of research has expanded from the localization of brain areas for specific functions in the adult brain using a single technology; studies have been diverging in several different directions,[20] such as monitoring REM sleep via polygraphy, a technique for recording the electrical activity of the sleeping brain. Advances in non-invasive functional neuroimaging and associated data analysis methods have also made it possible to use highly naturalistic stimuli and tasks, such as feature films depicting social interactions, in cognitive neuroscience studies.[21]
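One widely used analysis for such naturalistic film stimuli (named here as an assumption; the article does not specify a method) is inter-subject correlation: each subject's response time course is correlated with the average time course of all other subjects, so that only stimulus-locked activity yields high values. A minimal sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 10, 300
shared = rng.standard_normal(n_timepoints)      # stimulus-driven component shared by everyone
noise = 2.0 * rng.standard_normal((n_subjects, n_timepoints))
data = shared + noise                           # each row: one subject's response time course

# Leave-one-out inter-subject correlation.
isc = [np.corrcoef(data[s], np.delete(data, s, axis=0).mean(axis=0))[0, 1]
       for s in range(n_subjects)]
print(f"mean ISC = {np.mean(isc):.2f}")
```

Topics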
- Attention
- Change blindness
- Consciousness
- Decision-making
- Learning
- Memory
- Language
- Mirror neurons
- Social cognition
- Emotions
Methods
Experimental methods of specific psychology fields include:
- Psychophysics
- Functional magnetic resonance imaging
- Electroencephalography
- Electrocorticography
- Transcranial magnetic stimulation
- Computational modeling