
Tuesday, March 26, 2024

Brain

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Brain
 
The brain of a chimpanzee

Identifiers: Latin encephalon; MeSH D001921; NeuroNames 21; TA98 A14.1.03.001; TA2 5415

The brain is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. In vertebrates, a small part of the brain called the hypothalamus is the neural control center for all endocrine systems. The brain is the largest cluster of neurons in the body and is typically located in the head, usually near organs for special senses such as vision, hearing and olfaction. It is the most energy-consuming organ of the body, and the most specialized, responsible for endocrine regulation, sensory perception, motor control, and the development of intelligence.

While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates.

In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons, typically communicating with one another via root-like protrusions called dendrites and long fiber-like extensions called axons, which are usually myelinated and carry trains of rapid micro-electric signal pulses called action potentials to target specific recipient cells in other areas of the brain or distant parts of the body. The prefrontal cortex, which controls executive functions, is particularly well developed in humans.

Physiologically, brains exert centralized control over a body's other organs. They act on the rest of the body both by generating patterns of muscle activity and by driving the secretion of chemicals called hormones. This centralized control allows rapid and coordinated responses to changes in the environment. Some basic types of responsiveness such as reflexes can be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful control of behavior based on complex sensory input requires the information integrating capabilities of a centralized brain.

The operations of individual brain cells are now understood in considerable detail, but the way they cooperate in ensembles of millions has yet to be worked out. Recent models in modern neuroscience treat the brain as a biological computer, very different in mechanism from a digital computer, but similar in the sense that it acquires information from the surrounding world, stores it, and processes it in a variety of ways.

This article compares the properties of brains across the entire range of animal species, with the greatest attention to vertebrates. It deals with the human brain insofar as it shares the properties of other brains. The ways in which the human brain differs from other brains are covered in the human brain article. Several topics that might be covered here are instead covered there because much more can be said about them in a human context. The most important of these are brain disease and the effects of brain damage.

Anatomy

Cross section of the olfactory bulb of a rat, stained in two different ways at the same time: one stain shows neuron cell bodies, the other shows receptors for the neurotransmitter GABA.

The shape and size of the brain vary greatly between species, and identifying common features is often difficult. Nevertheless, there are a number of principles of brain architecture that apply across a wide range of species. Some aspects of brain structure are common to almost the entire range of animal species; others distinguish "advanced" brains from more primitive ones, or distinguish vertebrates from invertebrates.

The simplest way to gain information about brain anatomy is by visual inspection, but many more sophisticated techniques have been developed. Brain tissue in its natural state is too soft to work with, but it can be hardened by immersion in alcohol or other fixatives, and then sliced apart for examination of the interior. Visually, the interior of the brain consists of areas of so-called grey matter, with a dark color, separated by areas of white matter, with a lighter color. Further information can be gained by staining slices of brain tissue with a variety of chemicals that bring out areas where specific types of molecules are present in high concentrations. It is also possible to examine the microstructure of brain tissue using a microscope, and to trace the pattern of connections from one brain area to another.

Cellular structure

Neurons generate electrical signals that travel along their axons. When a pulse of electricity reaches a junction called a synapse, it causes a neurotransmitter chemical to be released, which binds to receptors on other cells and thereby alters their electrical activity.

The brains of all species are composed primarily of two broad classes of cells: neurons and glial cells. Glial cells (also known as glia or neuroglia) come in several types, and perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Neurons, however, are usually considered the most important cells in the brain. The property that makes neurons unique is their ability to send signals to specific target cells over long distances. They send these signals by means of an axon, which is a thin protoplasmic fiber that extends from the cell body and projects, usually with numerous branches, to other areas, sometimes nearby, sometimes in distant parts of the brain or body. The length of an axon can be extraordinary: for example, if a pyramidal cell (an excitatory neuron) of the cerebral cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become a cable a few centimeters in diameter, extending more than a kilometer. These axons transmit signals in the form of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along the axon at speeds of 1–100 meters per second. Some neurons emit action potentials constantly, at rates of 10–100 per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst of action potentials.

Axons transmit signals to other neurons by means of specialized junctions called synapses. A single axon may make as many as several thousand synaptic connections with other cells. When an action potential, traveling along an axon, arrives at a synapse, it causes a chemical called a neurotransmitter to be released. The neurotransmitter binds to receptor molecules in the membrane of the target cell.

Synapses are the key functional elements of the brain. The essential function of the brain is cell-to-cell communication, and synapses are the points at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses; even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are excitatory (exciting the target cell); others are inhibitory; others work by activating second messenger systems that change the internal chemistry of their target cells in complex ways. A large number of synapses are dynamically modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory.
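The ~100 trillion figure can be sanity-checked with simple arithmetic; the neuron count and per-neuron figure below are round illustrative numbers, not measurements:

```python
# Back-of-envelope check of the synapse estimate above, assuming
# ~86 billion neurons and ~1,000 synapses per neuron (the low end
# of "several thousand"; both are round illustrative numbers).
neurons = 86e9
synapses_per_neuron = 1_000
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.1e}")  # 8.6e+13 -- the same order of
                                # magnitude as the 100 trillion estimate
```

With the higher per-neuron figures sometimes quoted, the product lands even closer to 10^14, which is why the estimate is usually stated only to order of magnitude.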

Most of the space in the brain is taken up by axons, which are often bundled together in what are called nerve fiber tracts. A myelinated axon is wrapped in a fatty insulating sheath of myelin, which serves to greatly increase the speed of signal propagation. (There are also unmyelinated axons). Myelin is white, making parts of the brain filled exclusively with nerve fibers appear as light-colored white matter, in contrast to the darker-colored grey matter that marks areas with high densities of neuron cell bodies.

Evolution

Generic bilaterian nervous system

Nervous system of a generic bilaterian animal, in the form of a nerve cord with segmental enlargements, and a "brain" at the front

Except for a few primitive organisms such as sponges (which have no nervous system) and cnidarians (which have a diffuse nervous system consisting of a nerve net), all living multicellular animals are bilaterians, meaning animals with a bilaterally symmetric body plan (that is, left and right sides that are approximate mirror images of each other). All bilaterians are thought to have descended from a common ancestor that appeared late in the Cryogenian period, 700–650 million years ago, and it has been hypothesized that this common ancestor had the shape of a simple tubeworm with a segmented body. At a schematic level, that basic worm-shape continues to be reflected in the body and nervous system architecture of all modern bilaterians, including vertebrates. The fundamental bilateral body form is a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The brain is small and simple in some species, such as nematode worms; in other species, such as vertebrates, it is a large and very complex organ. Some types of worms, such as leeches, also have an enlarged ganglion at the back end of the nerve cord, known as a "tail brain".

There are a few types of existing bilaterians that lack a recognizable brain, including echinoderms and tunicates. It has not been definitively established whether the existence of these brainless species indicates that the earliest bilaterians lacked a brain, or whether their ancestors evolved in a way that led to the disappearance of a previously existing brain structure.

Invertebrates

Fruit flies (Drosophila) have been extensively studied to gain insight into the role of genes in brain development.

This category includes tardigrades, arthropods, molluscs, and numerous types of worms. The diversity of invertebrate body plans is matched by an equal diversity in brain structures.

Two groups of invertebrates have notably complex brains: arthropods (insects, crustaceans, arachnids, and others), and cephalopods (octopuses, squids, and similar molluscs). The brains of arthropods and cephalopods arise from twin parallel nerve cords that extend through the body of the animal. Arthropods have a central brain, the supraesophageal ganglion, with three divisions and large optic lobes behind each eye for visual processing. Cephalopods such as the octopus and squid have the largest brains of any invertebrates.

There are several invertebrate species whose brains have been studied intensively because they have properties that make them convenient for experimental work:

  • Fruit flies (Drosophila), because of the large array of techniques available for studying their genetics, have been a natural subject for studying the role of genes in brain development. In spite of the large evolutionary distance between insects and mammals, many aspects of Drosophila neurogenetics have been shown to be relevant to humans. The first biological clock genes, for example, were identified by examining Drosophila mutants that showed disrupted daily activity cycles. A search in the genomes of vertebrates revealed a set of analogous genes, which were found to play similar roles in the mouse biological clock—and therefore almost certainly in the human biological clock as well. Studies on Drosophila also show that most neuropil regions of the brain are continuously reorganized throughout life in response to specific living conditions.
  • The nematode worm Caenorhabditis elegans, like Drosophila, has been studied largely because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model organism for studying the way that genes control development. One of the advantages of working with this worm is that the body plan is very stereotyped: the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed each one under an electron microscope, then visually matched fibers from section to section to map out every neuron and synapse in the entire body. The result was the complete neuronal wiring diagram of C. elegans, its connectome. Nothing approaching this level of detail is available for any other organism, and the information gained has enabled a multitude of studies that would otherwise not have been possible.
  • The sea slug Aplysia californica was chosen by Nobel Prize-winning neurophysiologist Eric Kandel as a model for studying the cellular basis of learning and memory, because of the simplicity and accessibility of its nervous system, and it has been examined in hundreds of experiments.

Vertebrates

The brain of a shark

The first vertebrates appeared over 500 million years ago (Mya), during the Cambrian period, and may have resembled the modern hagfish in form. Jawed fish appeared by 445 Mya, amphibians by 350 Mya, reptiles by 310 Mya and mammals by 200 Mya (approximately). Each species has an equally long evolutionary history, but the brains of modern hagfishes, lampreys, sharks, amphibians, reptiles, and mammals show a gradient of size and complexity that roughly follows the evolutionary sequence. All of these brains contain the same set of basic anatomical components, but many are rudimentary in the hagfish, whereas in mammals the foremost part (the telencephalon) is greatly elaborated and expanded.

Brains are most commonly compared in terms of their size. The relationship between brain size, body size and other variables has been studied across a wide range of vertebrate species. As a rule, brain size increases with body size, but not in a simple linear proportion. In general, smaller animals tend to have larger brains, measured as a fraction of body size. For mammals, the relationship between brain volume and body mass essentially follows a power law with an exponent of about 0.75. This formula describes the central tendency, but every family of mammals departs from it to some degree, in a way that reflects in part the complexity of their behavior. For example, primates have brains 5 to 10 times larger than the formula predicts. Predators tend to have larger brains than their prey, relative to body size.
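The power law above can be sketched numerically. Only the 0.75 exponent comes from the text; the scaling constant below is an illustrative placeholder, not a fitted value:

```python
# Allometric scaling sketch: predicted brain mass = C * body_mass ** 0.75.
# C is an illustrative constant, not a measured one; the 0.75 exponent
# is the one given for mammals in the text.
def predicted_brain_mass(body_mass_kg, c=0.06, exponent=0.75):
    return c * body_mass_kg ** exponent

# Because the exponent is below 1, brain mass as a *fraction* of body
# mass falls as animals get bigger -- the "smaller animals have
# relatively larger brains" pattern described above.
small_fraction = predicted_brain_mass(1.0) / 1.0
large_fraction = predicted_brain_mass(1000.0) / 1000.0
print(small_fraction > large_fraction)  # True
```

A species whose actual brain mass sits above this central-tendency curve (as primates do, by a factor of 5 to 10) is said to be relatively large-brained.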

The main subdivisions of the embryonic vertebrate brain (left), which later differentiate into structures of the adult brain (right)

All vertebrate brains share a common underlying form, which appears most clearly during early stages of embryonic development. In its earliest form, the brain appears as three swellings at the front end of the neural tube; these swellings eventually become the forebrain, midbrain, and hindbrain (the prosencephalon, mesencephalon, and rhombencephalon, respectively). At the earliest stages of brain development, the three areas are roughly equal in size. In many classes of vertebrates, such as fish and amphibians, the three parts remain similar in size in the adult, but in mammals the forebrain becomes much larger than the other parts, and the midbrain becomes very small.

The brains of vertebrates are made of very soft tissue. Living brain tissue is pinkish on the outside and mostly white on the inside, with subtle variations in color. Vertebrate brains are surrounded by a system of connective tissue membranes called meninges that separate the skull from the brain. Blood vessels enter the central nervous system through holes in the meningeal layers. The cells in the blood vessel walls are joined tightly to one another, forming the blood–brain barrier, which blocks the passage of many toxins and pathogens (though at the same time blocking antibodies and some drugs, thereby presenting special challenges in treatment of diseases of the brain).

Neuroanatomists usually divide the vertebrate brain into six main regions: the telencephalon (cerebral hemispheres), diencephalon (thalamus and hypothalamus), mesencephalon (midbrain), cerebellum, pons, and medulla oblongata. Each of these areas has a complex internal structure. Some parts, such as the cerebral cortex and the cerebellar cortex, consist of layers that are folded or convoluted to fit within the available space. Other parts, such as the thalamus and hypothalamus, consist of clusters of many small nuclei. Thousands of distinguishable areas can be identified within the vertebrate brain based on fine distinctions of neural structure, chemistry, and connectivity.

The main anatomical regions of the vertebrate brain, shown for shark and human. The same parts are present, but they differ greatly in size and shape.

Although the same basic components are present in all vertebrate brains, some branches of vertebrate evolution have led to substantial distortions of brain geometry, especially in the forebrain area. The brain of a shark shows the basic components in a straightforward way, but in teleost fishes (the great majority of existing fish species), the forebrain has become "everted", like a sock turned inside out. In birds, there are also major changes in forebrain structure. These distortions can make it difficult to match brain components from one species with those of another species.

Here is a list of some of the most important vertebrate brain components, along with a brief description of their functions as currently understood:

  • The medulla, along with the spinal cord, contains many small nuclei involved in a wide variety of sensory and involuntary motor functions such as vomiting, heart rate and digestive processes.
  • The pons lies in the brainstem directly above the medulla. Among other things, it contains nuclei that control simple, often voluntary acts such as sleep, respiration, swallowing, bladder function, equilibrium, eye movement, facial expressions, and posture.
  • The hypothalamus is a small region at the base of the forebrain, whose complexity and importance belies its size. It is composed of numerous small nuclei, each with distinct connections and neurochemistry. The hypothalamus is engaged in additional involuntary or partially voluntary acts such as sleep and wake cycles, eating and drinking, and the release of some hormones.
  • The thalamus is a collection of nuclei with diverse functions: some are involved in relaying information to and from the cerebral hemispheres, while others are involved in motivation. The subthalamic area (zona incerta) seems to contain action-generating systems for several types of "consummatory" behaviors such as eating, drinking, defecation, and copulation.
  • The cerebellum modulates the outputs of other brain systems, whether motor-related or thought-related, to make them certain and precise. Removal of the cerebellum does not prevent an animal from doing anything in particular, but it makes actions hesitant and clumsy. This precision is not built-in but learned by trial and error. The muscle coordination learned while riding a bicycle is an example of a type of neural plasticity that may take place largely within the cerebellum. The cerebellum makes up about 10% of the brain's total volume yet contains roughly 50% of all its neurons.
  • The optic tectum allows actions to be directed toward points in space, most commonly in response to visual input. In mammals, it is usually referred to as the superior colliculus, and its best-studied function is to direct eye movements. It also directs reaching movements and other object-directed actions. It receives strong visual inputs, but also inputs from other senses that are useful in directing actions, such as auditory input in owls and input from the thermosensitive pit organs in snakes. In some primitive fishes, such as lampreys, this region is the largest part of the brain. The superior colliculus is part of the midbrain.
  • The pallium is a layer of grey matter that lies on the surface of the forebrain and is the most complex and most recent evolutionary development of the brain as an organ. In reptiles and mammals, it is called the cerebral cortex. Multiple functions involve the pallium, including smell and spatial memory. In mammals, where it becomes so large as to dominate the brain, it takes over functions from many other brain areas. In many mammals, the cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci. The folds increase the surface area of the cortex and therefore increase the amount of gray matter and the amount of information that can be stored and processed.
  • The hippocampus, strictly speaking, is found only in mammals. However, the area it derives from, the medial pallium, has counterparts in all vertebrates. There is evidence that this part of the brain is involved in complex events such as spatial memory and navigation in fishes, birds, reptiles, and mammals.
  • The basal ganglia are a group of interconnected structures in the forebrain. The primary function of the basal ganglia appears to be action selection: they send inhibitory signals to all parts of the brain that can generate motor behaviors, and in the right circumstances can release the inhibition, so that the action-generating systems are able to execute their actions. Reward and punishment exert their most important neural effects by altering connections within the basal ganglia.
  • The olfactory bulb is a special structure that processes olfactory sensory signals and sends its output to the olfactory part of the pallium. It is a major brain component in many vertebrates, but is greatly reduced in humans and other primates (whose senses are dominated by information acquired by sight rather than smell).

Reptiles

Anatomical comparison between the brain of a lizard (A and C) and the brain of a turkey (B and D). Abbreviations: Olf, olfactory lobes; Hmp, cerebral hemispheres; Pn, pineal gland; Mb, optic lobes of the middle brain; Cb, cerebellum; MO, medulla oblongata; ii, optic nerves; iv and vi, nerves for the muscles of the eye; Py, pituitary body.
Comparison of Vertebrate Brains: Mammalian, Reptilian, Amphibian, Teleost, and Ammocoetes. CB., cerebellum; PT., pituitary body; PN., pineal body; C. STR., corpus striatum; G.H.R., right ganglion habenulæ. I., olfactory; II., optic nerves.

Modern reptiles and mammals diverged from a common ancestor around 320 million years ago. The number of extant reptile species far exceeds the number of mammalian species, with 11,733 recognized species of reptiles compared to 5,884 extant mammals. Along with this species diversity, reptiles have diverged in external morphology, from limbless forms to tetrapod gliders to armored chelonians, reflecting adaptive radiation to a diverse array of environments.

Morphological differences are reflected in the nervous system phenotype: for example, the absence of lateral motor column neurons in snakes, which innervate limb muscles controlling limb movements; the absence of motor neurons that innervate trunk muscles in tortoises; and the presence of innervation from the trigeminal nerve to the pit organs responsible for infrared detection in snakes. Variation in size, weight, and shape of the brain can be found within reptiles. For instance, crocodilians have the largest brain volume to body weight proportion, followed by turtles, lizards, and snakes. Reptiles also vary in how much they invest in different brain sections. Crocodilians have the largest telencephalon, while snakes have the smallest. Turtles have the largest diencephalon per body weight, whereas crocodilians have the smallest. Lizards, on the other hand, have the largest mesencephalon.

Yet their brains share several characteristics revealed by recent anatomical, molecular, and ontogenetic studies. Vertebrates show the highest levels of similarity during embryological development, which is controlled by conserved transcription factors and signaling centers and is reflected in gene expression and in morphological and cell-type differentiation. High levels of these transcription factors are found in all areas of the brain in reptiles and mammals, and shared neuronal clusters shed light on brain evolution. Conserved transcription factors indicate that evolution acted on different areas of the brain by either retaining similar morphology and function or diversifying them.

Anatomically, the reptilian brain has fewer subdivisions than the mammalian brain, but it has numerous conserved features, including the organization of the spinal cord and cranial nerves, as well as an elaborated pattern of brain organization. Elaborated brains are characterized by neuronal cell bodies that have migrated away from the periventricular matrix, the region of neuronal development, to form organized nuclear groups. Aside from reptiles and mammals, other vertebrates with elaborated brains include hagfish, galeomorph sharks, skates, rays, teleosts, and birds. Overall, elaborated brains are subdivided into forebrain, midbrain, and hindbrain.

The hindbrain coordinates and integrates sensory and motor inputs and outputs responsible for, among other things, walking, swimming, and flying. It contains input and output axons that interconnect the spinal cord, midbrain, and forebrain, transmitting information from the external and internal environments. The midbrain links the sensory, motor, and integrative components received from the hindbrain, connecting it to the forebrain. The tectum, which includes the optic tectum and torus semicircularis, receives auditory, visual, and somatosensory inputs, forming integrated maps of the sensory and visual space around the animal. The tegmentum receives incoming sensory information and forwards motor responses to and from the forebrain. The isthmus connects the hindbrain with the midbrain. The forebrain region is particularly well developed and is further divided into the diencephalon and telencephalon. The diencephalon is involved in regulating eye and body movement in response to visual stimuli, sensory information, circadian rhythms, olfactory input, and the autonomic nervous system. The telencephalon is involved in the control of movement, contains the neurotransmitters and neuromodulators responsible for integrating inputs and transmitting outputs, and supports sensory systems and cognitive functions.

Birds
Brains of an emu, a kiwi, a barn owl, and a pigeon, with visual processing areas labelled

The avian brain is the central organ of the nervous system in birds. Birds possess large, complex brains, which process, integrate, and coordinate information received from the environment and make decisions on how to respond with the rest of the body. As in all chordates, the avian brain is contained within the skull bones of the head.

The bird brain is divided into a number of sections, each with a different function. The cerebrum or telencephalon is divided into two hemispheres and controls higher functions. The telencephalon is dominated by a large pallium, which corresponds to the mammalian cerebral cortex and is responsible for the cognitive functions of birds. The pallium is made up of several major structures: the hyperpallium, a dorsal bulge of the pallium found only in birds, as well as the nidopallium, mesopallium, and arcopallium. The bird telencephalon has a nuclear structure, in which neurons are distributed in three-dimensionally arranged clusters with no large-scale separation of white matter and grey matter, though layer-like and column-like connections exist. Structures in the pallium are associated with perception, learning, and cognition. Beneath the pallium are the two components of the subpallium, the striatum and pallidum. The subpallium connects different parts of the telencephalon and plays major roles in a number of critical behaviours. To the rear of the telencephalon are the thalamus, midbrain, and cerebellum. The hindbrain connects the rest of the brain to the spinal cord.

The size and structure of the avian brain enables prominent behaviours of birds such as flight and vocalization. Dedicated structures and pathways integrate the auditory and visual senses, strong in most species of birds, as well as the typically weaker olfactory and tactile senses. Social behaviour, widespread among birds, depends on the organisation and functions of the brain. Some birds exhibit strong abilities of cognition, enabled by the unique structure and physiology of the avian brain.
Mammals

The most obvious difference between the brains of mammals and other vertebrates is in terms of size. On average, a mammal has a brain roughly twice as large as that of a bird of the same body size, and ten times as large as that of a reptile of the same body size.

Size, however, is not the only difference: there are also substantial differences in shape. The hindbrain and midbrain of mammals are generally similar to those of other vertebrates, but dramatic differences appear in the forebrain, which is greatly enlarged and also altered in structure. The cerebral cortex is the part of the brain that most strongly distinguishes mammals. In non-mammalian vertebrates, the surface of the cerebrum is lined with a comparatively simple three-layered structure called the pallium. In mammals, the pallium evolves into a complex six-layered structure called neocortex or isocortex. Several areas at the edge of the neocortex, including the hippocampus and amygdala, are also much more extensively developed in mammals than in other vertebrates.

The elaboration of the cerebral cortex carries with it changes to other brain areas. The superior colliculus, which plays a major role in visual control of behavior in most vertebrates, shrinks to a small size in mammals, and many of its functions are taken over by visual areas of the cerebral cortex. The cerebellum of mammals contains a large portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates.

Primates
Encephalization Quotient
Species EQ
Human 7.4–7.8
Common chimpanzee 2.2–2.5
Rhesus monkey 2.1
Bottlenose dolphin 4.14
Elephant 1.13–2.36
Dog 1.2
Horse 0.9
Rat 0.4

The brains of humans and other primates contain the same structures as the brains of other mammals, but are generally larger in proportion to body size. The encephalization quotient (EQ) is used to compare brain sizes across species. It takes into account the nonlinearity of the brain-to-body relationship. Humans have an average EQ in the 7-to-8 range, while most other primates have an EQ in the 2-to-3 range. Dolphins have values higher than those of primates other than humans, but nearly all other mammals have EQ values that are substantially lower.
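As a rough illustration of how EQ works, the quotient can be computed as the ratio of actual brain mass to the brain mass predicted by an allometric baseline. The sketch below uses Jerison's classic mammalian baseline (expected brain mass of roughly 0.12 times body mass to the two-thirds power, masses in grams); the constants and the human figures (about 1350 g brain, 65 kg body) are illustrative, and published baselines vary.

```python
def expected_brain_mass(body_mass_g: float) -> float:
    """Predicted brain mass (g) for a typical mammal of this body mass,
    using Jerison's allometric baseline (constants are illustrative)."""
    return 0.12 * body_mass_g ** (2 / 3)

def eq(brain_mass_g: float, body_mass_g: float) -> float:
    """Encephalization quotient: actual over expected brain mass."""
    return brain_mass_g / expected_brain_mass(body_mass_g)

# Rough human figures: ~1350 g brain, ~65 kg body.
human_eq = eq(1350, 65_000)
print(f"human EQ is roughly {human_eq:.1f}")
```

With these figures the result lands near 7, consistent with the 7-to-8 range quoted above; different baselines shift the absolute numbers but not the rank ordering of species.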

Most of the enlargement of the primate brain comes from a massive expansion of the cerebral cortex, especially the prefrontal cortex and the parts of the cortex involved in vision. The visual processing network of primates includes at least 30 distinguishable brain areas, with a complex web of interconnections. It has been estimated that visual processing areas occupy more than half of the total surface of the primate neocortex. The prefrontal cortex carries out functions that include planning, working memory, motivation, attention, and executive control. It takes up a much larger proportion of the brain for primates than for other species, and an especially large fraction of the human brain.

Development

Brain of a human embryo in the sixth week of development

The brain develops in an intricately orchestrated sequence of stages. It changes in shape from a simple swelling at the front of the nerve cord in the earliest embryonic stages, to a complex array of areas and connections. Neurons are created in special zones that contain stem cells, and then migrate through the tissue to reach their ultimate locations. Once neurons have positioned themselves, their axons sprout and navigate through the brain, branching and extending as they go, until the tips reach their targets and form synaptic connections. In a number of parts of the nervous system, neurons and synapses are produced in excessive numbers during the early stages, and then the unneeded ones are pruned away.

For vertebrates, the early stages of neural development are similar across all species. As the embryo transforms from a round blob of cells into a wormlike structure, a narrow strip of ectoderm running along the midline of the back is induced to become the neural plate, the precursor of the nervous system. The neural plate folds inward to form the neural groove, and then the lips that line the groove merge to enclose the neural tube, a hollow cord of cells with a fluid-filled ventricle at the center. At the front end, the ventricles and cord swell to form three vesicles that are the precursors of the prosencephalon (forebrain), mesencephalon (midbrain), and rhombencephalon (hindbrain). At the next stage, the forebrain splits into two vesicles called the telencephalon (which will contain the cerebral cortex, basal ganglia, and related structures) and the diencephalon (which will contain the thalamus and hypothalamus). At about the same time, the hindbrain splits into the metencephalon (which will contain the cerebellum and pons) and the myelencephalon (which will contain the medulla oblongata). Each of these areas contains proliferative zones where neurons and glial cells are generated; the resulting cells then migrate, sometimes for long distances, to their final positions.

Once a neuron is in place, it extends dendrites and an axon into the area around it. Axons, because they commonly extend a great distance from the cell body and need to reach specific targets, grow in a particularly complex way. The tip of a growing axon consists of a blob of protoplasm called a growth cone, studded with chemical receptors. These receptors sense the local environment, causing the growth cone to be attracted or repelled by various cellular elements, and thus to be pulled in a particular direction at each point along its path. The result of this pathfinding process is that the growth cone navigates through the brain until it reaches its destination area, where other chemical cues cause it to begin generating synapses. Considering the entire brain, thousands of genes create products that influence axonal pathfinding.
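A toy model can make the gradient-following idea concrete. The sketch below, a loose illustration rather than anything from the article, lets a simulated growth cone sample an attractant field at positions around its tip and step toward the strongest signal. The exponential field, the set of receptor angles, and the step size are all invented for illustration.

```python
import math

def concentration(x, y, source=(8.0, 5.0)):
    """Attractant released at `source`, decaying with distance (toy field)."""
    d = math.hypot(x - source[0], y - source[1])
    return math.exp(-d)

def grow(start, steps=200, step_size=0.1):
    """Follow the attractant gradient by receptor comparison at the tip."""
    x, y = start
    path = [(x, y)]
    angles = [k * math.pi / 8 for k in range(16)]
    for _ in range(steps):
        # Sample the field slightly ahead in several directions and
        # move toward the strongest signal.
        _, best_angle = max(
            (concentration(x + step_size * math.cos(a),
                           y + step_size * math.sin(a)), a)
            for a in angles
        )
        x += step_size * math.cos(best_angle)
        y += step_size * math.sin(best_angle)
        path.append((x, y))
    return path

path = grow((0.0, 0.0))
end = path[-1]
print(f"growth cone reached ({end[0]:.1f}, {end[1]:.1f})")
```

The cone homes in on the source and then hovers near it, which is the essential behavior: local chemical sensing, iterated, produces long-range pathfinding without any global map.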

The synaptic network that finally emerges is only partly determined by genes, though. In many parts of the brain, axons initially "overgrow", and then are "pruned" by mechanisms that depend on neural activity. In the projection from the eye to the midbrain, for example, the structure in the adult contains a very precise mapping, connecting each point on the surface of the retina to a corresponding point in a midbrain layer. In the first stages of development, each axon from the retina is guided to the right general vicinity in the midbrain by chemical cues, but then branches very profusely and makes initial contact with a wide swath of midbrain neurons. The retina, before birth, contains special mechanisms that cause it to generate waves of activity that originate spontaneously at a random point and then propagate slowly across the retinal layer. These waves are useful because they cause neighboring neurons to be active at the same time; that is, they produce a neural activity pattern that contains information about the spatial arrangement of the neurons. This information is exploited in the midbrain by a mechanism that causes synapses to weaken, and eventually vanish, if activity in an axon is not followed by activity of the target cell. The result of this sophisticated process is a gradual tuning and tightening of the map, leaving it finally in its precise adult form.

Similar things happen in other brain areas: an initial synaptic matrix is generated as a result of genetically determined chemical guidance, but then gradually refined by activity-dependent mechanisms, partly driven by internal dynamics, partly by external sensory inputs. In some cases, as with the retina-midbrain system, activity patterns depend on mechanisms that operate only in the developing brain, and apparently exist solely to guide development.

In humans and many other mammals, new neurons are created mainly before birth, and the infant brain contains substantially more neurons than the adult brain. There are, however, a few areas where new neurons continue to be generated throughout life. The two areas for which adult neurogenesis is well established are the olfactory bulb, which is involved in the sense of smell, and the dentate gyrus of the hippocampus, where there is evidence that the new neurons play a role in storing newly acquired memories. With these exceptions, however, the set of neurons that is present in early childhood is the set that is present for life. Glial cells are different: as with most types of cells in the body, they are generated throughout the lifespan.

There has long been debate about whether the qualities of mind, personality, and intelligence can be attributed to heredity or to upbringing. Although many details remain to be settled, neuroscience shows that both factors are important. Genes determine both the general form of the brain and how it reacts to experience, but experience is required to refine the matrix of synaptic connections, resulting in greatly increased complexity. The presence or absence of experience is critical at key periods of development, and the quantity and quality of experience also matter. For example, animals raised in enriched environments develop thicker cerebral cortices, indicating a higher density of synaptic connections, than animals raised with restricted levels of stimulation.

Physiology

The functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses.

Neurotransmitters and receptors

Neurotransmitters are chemicals that are released at synapses when the local membrane is depolarized and Ca2+ enters the cell, typically when an action potential arrives at the synapse. Neurotransmitters attach themselves to receptor molecules on the membrane of the synapse's target cell (or cells), and thereby alter the electrical or chemical properties of the receptor molecules. With few exceptions, each neuron in the brain releases the same chemical neurotransmitter, or combination of neurotransmitters, at all the synaptic connections it makes with other neurons; this rule is known as Dale's principle. Thus, a neuron can be characterized by the neurotransmitters that it releases. The great majority of psychoactive drugs exert their effects by altering specific neurotransmitter systems. This applies to drugs such as cannabinoids, nicotine, heroin, cocaine, alcohol, fluoxetine, chlorpromazine, and many others.

The two neurotransmitters that are most widely found in the vertebrate brain are glutamate, which almost always exerts excitatory effects on target neurons, and gamma-aminobutyric acid (GABA), which is almost always inhibitory. Neurons using these transmitters can be found in nearly every part of the brain. Because of their ubiquity, drugs that act on glutamate or GABA tend to have broad and powerful effects. Some general anesthetics act by reducing the effects of glutamate; most tranquilizers exert their sedative effects by enhancing the effects of GABA.

There are dozens of other chemical neurotransmitters that are used in more limited areas of the brain, often areas dedicated to a particular function. Serotonin, for example—the primary target of many antidepressant drugs and many dietary aids—comes exclusively from a small brainstem area called the raphe nuclei. Norepinephrine, which is involved in arousal, comes exclusively from a nearby small area called the locus coeruleus. Other neurotransmitters such as acetylcholine and dopamine have multiple sources in the brain but are not as ubiquitously distributed as glutamate and GABA.

Electrical activity

Brain electrical activity recorded from a human patient during an epileptic seizure

As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity, called beta and gamma waves, when the animal is actively engaged in a task. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology.
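The band names used above carve the frequency axis into conventional ranges; the exact boundaries (and the theta band between delta and alpha, not mentioned above) differ slightly between sources. A minimal mapping with typical cutoffs:

```python
def eeg_band(freq_hz: float) -> str:
    """Map a dominant EEG frequency to its conventional band name.
    Boundaries are typical values; sources vary slightly."""
    if freq_hz < 4:
        return "delta"   # large slow waves of deep sleep
    if freq_hz < 8:
        return "theta"
    if freq_hz < 13:
        return "alpha"   # awake but inattentive
    if freq_hz < 30:
        return "beta"    # active engagement
    return "gamma"       # active engagement, higher frequencies

print(eeg_band(2), eeg_band(10), eeg_band(40))
```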

Metabolism

All vertebrates have a blood–brain barrier that allows metabolism inside the brain to operate differently from metabolism in other parts of the body. The neurovascular unit regulates cerebral blood flow so that activated neurons can be supplied with energy. Glial cells play a major role in brain metabolism by controlling the chemical composition of the fluid that surrounds neurons, including levels of ions and nutrients.

Brain tissue consumes a large amount of energy in proportion to its volume, so large brains place severe metabolic demands on animals. The need to limit body weight in order, for example, to fly, has apparently led to selection for a reduction of brain size in some species, such as bats. Most of the brain's energy consumption goes into sustaining the electric charge (membrane potential) of neurons. Most vertebrate species devote between 2% and 8% of basal metabolism to the brain. In primates, however, the percentage is much higher—in humans it rises to 20–25%. The energy consumption of the brain does not vary greatly over time, but active regions of the cerebral cortex consume somewhat more energy than inactive regions; this forms the basis for the functional brain imaging methods of PET, fMRI, and NIRS. The brain typically gets most of its energy from oxygen-dependent metabolism of glucose (i.e., blood sugar), but ketones provide a major alternative source, together with contributions from medium chain fatty acids (caprylic and heptanoic acids), lactate, acetate, and possibly amino acids.
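The 20–25% share translates into a modest absolute power. The sketch below does the arithmetic with an illustrative basal metabolic rate of about 1400 kcal/day for an adult human; the resulting wattage depends entirely on the figures assumed.

```python
def brain_power_watts(basal_kcal_per_day: float, brain_share: float) -> float:
    """Average power devoted to the brain, in watts."""
    KCAL_TO_JOULES = 4184
    SECONDS_PER_DAY = 86_400
    return basal_kcal_per_day * brain_share * KCAL_TO_JOULES / SECONDS_PER_DAY

# Illustrative: ~1400 kcal/day basal rate, 20-25% brain share as quoted above.
for share in (0.20, 0.25):
    print(f"{share:.0%} of basal metabolism is about "
          f"{brain_power_watts(1400, share):.1f} W")
```

On these assumptions the brain runs on the order of 15 watts, roughly a dim light bulb, which helps explain why large brains impose real selective costs.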

Function

Model of a neural circuit in the cerebellum, as proposed by James S. Albus

Information from the sense organs is collected in the brain. There it is used to determine what actions the organism is to take. The brain processes the raw data to extract information about the structure of the environment. Next it combines the processed information with information about the current needs of the animal and with memory of past circumstances. Finally, on the basis of the results, it generates motor response patterns. These signal-processing tasks require intricate interplay between a variety of functional subsystems.

The function of the brain is to provide coherent control over the actions of an animal. A centralized brain allows groups of muscles to be co-activated in complex patterns; it also allows stimuli impinging on one part of the body to evoke responses in other parts, and it can prevent different parts of the body from acting at cross-purposes to each other.

Perception

Diagram of signal processing in the auditory system

The human brain is provided with information about light, sound, the chemical composition of the atmosphere, temperature, the position of the body in space (proprioception), the chemical composition of the bloodstream, and more. In other animals additional senses are present, such as the infrared heat-sense of snakes, the magnetic field sense of some birds, or the electric field sense mainly seen in aquatic animals.

Each sensory system begins with specialized receptor cells, such as photoreceptor cells in the retina of the eye, or vibration-sensitive hair cells in the cochlea of the ear. The axons of sensory receptor cells travel into the spinal cord or brain, where they transmit their signals to a first-order sensory nucleus dedicated to one specific sensory modality. This primary sensory nucleus sends information to higher-order sensory areas that are dedicated to the same modality. Eventually, via a way-station in the thalamus, the signals are sent to the cerebral cortex, where they are processed to extract the relevant features, and integrated with signals coming from other sensory systems.

Motor control

Motor systems are areas of the brain that are involved in initiating body movements, that is, in activating muscles. Except for the muscles that control the eye, which are driven by nuclei in the midbrain, all the voluntary muscles in the body are directly innervated by motor neurons in the spinal cord and hindbrain. Spinal motor neurons are controlled both by neural circuits intrinsic to the spinal cord, and by inputs that descend from the brain. The intrinsic spinal circuits implement many reflex responses, and contain pattern generators for rhythmic movements such as walking or swimming. The descending connections from the brain allow for more sophisticated control.

The brain contains several motor areas that project directly to the spinal cord. At the lowest level are motor areas in the medulla and pons, which control stereotyped movements such as walking, breathing, or swallowing. At a higher level are areas in the midbrain, such as the red nucleus, which is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex, a strip of tissue located at the posterior edge of the frontal lobe. The primary motor cortex sends projections to the subcortical motor areas, but also sends a massive projection directly to the spinal cord, through the pyramidal tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements. Other motor-related brain areas exert secondary effects by projecting to the primary motor areas. Among the most important secondary areas are the premotor cortex, supplementary motor area, basal ganglia, and cerebellum. In addition to all of the above, the brain and spinal cord contain extensive circuitry to control the autonomic nervous system which controls the movement of the smooth muscle of the body.

Major areas involved in controlling movement
Area Location Function
Ventral horn Spinal cord Contains motor neurons that directly activate muscles
Oculomotor nuclei Midbrain Contains motor neurons that directly activate the eye muscles
Cerebellum Hindbrain Calibrates precision and timing of movements
Basal ganglia Forebrain Action selection on the basis of motivation
Motor cortex Frontal lobe Direct cortical activation of spinal motor circuits
Premotor cortex Frontal lobe Groups elementary movements into coordinated patterns
Supplementary motor area Frontal lobe Sequences movements into temporal patterns
Prefrontal cortex Frontal lobe Planning and other executive functions

Sleep

Many animals alternate between sleeping and waking in a daily cycle. Arousal and alertness are also modulated on a finer time scale by a network of brain areas. A key component of the sleep system is the suprachiasmatic nucleus (SCN), a tiny part of the hypothalamus located directly above the point at which the optic nerves from the two eyes cross. The SCN contains the body's central biological clock. Neurons there show activity levels that rise and fall with a period of about 24 hours (circadian rhythms): these activity fluctuations are driven by rhythmic changes in expression of a set of "clock genes". The SCN continues to keep time even if it is excised from the brain and placed in a dish of warm nutrient solution, but it ordinarily receives input from the optic nerves, through the retinohypothalamic tract (RHT), that allows daily light-dark cycles to calibrate the clock.

The SCN projects to a set of areas in the hypothalamus, brainstem, and midbrain that are involved in implementing sleep-wake cycles. An important component of the system is the reticular formation, a group of neuron-clusters scattered diffusely through the core of the lower brain. Reticular neurons send signals to the thalamus, which in turn sends activity-level-controlling signals to every part of the cortex. Damage to the reticular formation can produce a permanent state of coma.

Sleep involves great changes in brain activity. Until the 1950s it was generally believed that the brain essentially shuts off during sleep, but this is now known to be far from true; activity continues, but patterns become very different. There are two types of sleep: REM sleep (with dreaming) and NREM (non-REM, usually without dreaming) sleep, which repeat in slightly varying patterns throughout a sleep episode. Three broad types of distinct brain activity patterns can be measured: REM, light NREM and deep NREM. During deep NREM sleep, also called slow wave sleep, activity in the cortex takes the form of large synchronized waves, whereas in the waking state it is noisy and desynchronized. Levels of the neurotransmitters norepinephrine and serotonin drop during slow wave sleep, and fall almost to zero during REM sleep; levels of acetylcholine show the reverse pattern.

Homeostasis

Cross-section of a human head, showing location of the hypothalamus

For any animal, survival requires maintaining a variety of parameters of bodily state within a limited range of variation: these include temperature, water content, salt concentration in the bloodstream, blood glucose levels, blood oxygen level, and others. The ability of an animal to regulate the internal environment of its body—the milieu intérieur, as the pioneering physiologist Claude Bernard called it—is known as homeostasis (Greek for "standing still"). Maintaining homeostasis is a crucial function of the brain. The basic principle that underlies homeostasis is negative feedback: any time a parameter diverges from its set-point, sensors generate an error signal that evokes a response that causes the parameter to shift back toward its optimum value. (This principle is widely used in engineering, for example in the control of temperature using a thermostat.)
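The thermostat analogy can be written down directly. The loop below is a minimal proportional negative-feedback controller with invented gain and disturbance values; the structure (sensor error, corrective response) is what matters, not the numbers.

```python
def regulate_step(value, set_point, gain=0.3, disturbance=0.0):
    """One cycle of negative feedback: the error signal drives a
    correction proportional to the deviation from the set point."""
    error = set_point - value        # sensor: deviation from set point
    correction = gain * error        # effector response opposes the error
    return value + correction + disturbance

temp = 30.0                          # core temperature, starting below target
set_point = 37.0
for _ in range(40):
    # A constant disturbance (steady heat loss) pulls the value down
    # every cycle; feedback continually pushes it back.
    temp = regulate_step(temp, set_point, disturbance=-0.5)
print(f"settled near {temp:.1f} degrees")
```

Note that a pure proportional controller with a constant disturbance settles slightly below the set point; engineered controllers add an integral term to remove that residual offset.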

In vertebrates, the part of the brain that plays the greatest role is the hypothalamus, a small region at the base of the forebrain whose size does not reflect its complexity or the importance of its function. The hypothalamus is a collection of small nuclei, most of which are involved in basic biological functions. Some of these functions relate to arousal or to social interactions such as sexuality, aggression, or maternal behaviors; but many of them relate to homeostasis. Several hypothalamic nuclei receive input from sensors located in the lining of blood vessels, conveying information about temperature, sodium level, glucose level, blood oxygen level, and other parameters. These hypothalamic nuclei send output signals to motor areas that can generate actions to rectify deficiencies. Some of the outputs also go to the pituitary gland, a tiny gland attached to the brain directly underneath the hypothalamus. The pituitary gland secretes hormones into the bloodstream, where they circulate throughout the body and induce changes in cellular activity.

Motivation

Components of the basal ganglia, shown in two cross-sections of the human brain. Blue: caudate nucleus and putamen. Green: globus pallidus. Red: subthalamic nucleus. Black: substantia nigra.

Individual animals need to express survival-promoting behaviors, such as seeking food, water, shelter, and a mate. The motivational system in the brain monitors the current state of satisfaction of these goals, and activates behaviors to meet any needs that arise. The motivational system works largely by a reward–punishment mechanism. When a particular behavior is followed by favorable consequences, the reward mechanism in the brain is activated, which induces structural changes inside the brain that cause the same behavior to be repeated later, whenever a similar situation arises. Conversely, when a behavior is followed by unfavorable consequences, the brain's punishment mechanism is activated, inducing structural changes that cause the behavior to be suppressed when similar situations arise in the future.

Most organisms studied to date use a reward–punishment mechanism: for instance, worms and insects can alter their behavior to seek food sources or to avoid dangers. In vertebrates, the reward-punishment system is implemented by a specific set of brain structures, at the heart of which lie the basal ganglia, a set of interconnected areas at the base of the forebrain. The basal ganglia are the central site at which decisions are made: the basal ganglia exert a sustained inhibitory control over most of the motor systems in the brain; when this inhibition is released, a motor system is permitted to execute the action it is programmed to carry out. Rewards and punishments function by altering the relationship between the inputs that the basal ganglia receive and the decision-signals that are emitted. The reward mechanism is better understood than the punishment mechanism, because its role in drug abuse has caused it to be studied very intensively. Research has shown that the neurotransmitter dopamine plays a central role: addictive drugs such as cocaine, amphetamine, and nicotine either cause dopamine levels to rise or cause the effects of dopamine inside the brain to be enhanced.
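The reward mechanism described above resembles simple reinforcement learning. The toy below is an illustrative analogy rather than a model of the basal ganglia: it keeps a learned value for each candidate action, selects the most valuable one (with occasional exploration), and nudges values with a dopamine-like reward prediction error. The action names and payoffs are invented.

```python
import random

random.seed(1)

actions = ["forage", "hide", "rest"]
value = {a: 0.0 for a in actions}
ALPHA = 0.1                                # learning rate

def select(value, epsilon=0.1):
    """Pick an action: usually the best-valued one, sometimes a random
    one (exploration). Releasing the best action is loosely analogous
    to the basal ganglia lifting inhibition on one motor program."""
    if random.random() < epsilon:
        return random.choice(list(value))
    return max(value, key=value.get)

def update(value, action, reward):
    """Dopamine-like update: move the value toward the received reward
    in proportion to the reward prediction error."""
    error = reward - value[action]
    value[action] += ALPHA * error

# Invented environment: foraging pays off, hiding is neutral, resting hurts.
payoff = {"forage": 1.0, "hide": 0.0, "rest": -0.5}
for _ in range(500):
    a = select(value)
    update(value, a, payoff[a])

print(max(value, key=value.get))           # the rewarded behavior dominates
```

After a few hundred trials the rewarded action is selected almost exclusively, while punished actions acquire negative values and are suppressed, mirroring the behavioral description above.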

Learning and memory

Almost all animals are capable of modifying their behavior as a result of experience—even the most primitive types of worms. Because behavior is driven by brain activity, changes in behavior must somehow correspond to changes inside the brain. Already in the late 19th century theorists like Santiago Ramón y Cajal argued that the most plausible explanation is that learning and memory are expressed as changes in the synaptic connections between neurons. Until 1970, however, experimental evidence to support the synaptic plasticity hypothesis was lacking. In 1971 Tim Bliss and Terje Lømo published a paper on a phenomenon now called long-term potentiation: the paper showed clear evidence of activity-induced synaptic changes that lasted for at least several days. Since then technical advances have made these sorts of experiments much easier to carry out, and thousands of studies have been made that have clarified the mechanism of synaptic change, and uncovered other types of activity-driven synaptic change in a variety of brain areas, including the cerebral cortex, hippocampus, basal ganglia, and cerebellum. Brain-derived neurotrophic factor (BDNF) and physical activity appear to play a beneficial role in the process.

Neuroscientists currently distinguish several types of learning and memory that are implemented by the brain in distinct ways:

  • Working memory is the ability of the brain to maintain a temporary representation of information about the task that an animal is currently engaged in. This sort of dynamic memory is thought to be mediated by the formation of cell assemblies—groups of activated neurons that maintain their activity by constantly stimulating one another.
  • Episodic memory is the ability to remember the details of specific events. This sort of memory can last for a lifetime. Much evidence implicates the hippocampus in playing a crucial role: people with severe damage to the hippocampus sometimes show amnesia, that is, inability to form new long-lasting episodic memories.
  • Semantic memory is the ability to learn facts and relationships. This sort of memory is probably stored largely in the cerebral cortex, mediated by changes in connections between cells that represent specific types of information.
  • Instrumental learning is the ability for rewards and punishments to modify behavior. It is implemented by a network of brain areas centered on the basal ganglia.
  • Motor learning is the ability to refine patterns of body movement by practicing, or more generally by repetition. A number of brain areas are involved, including the premotor cortex, basal ganglia, and especially the cerebellum, which functions as a large memory bank for microadjustments of the parameters of movement.

Research

The Human Brain Project is a large scientific research project, started in 2013, which aims to simulate the complete human brain.

The field of neuroscience encompasses all approaches that seek to understand the brain and the rest of the nervous system. Psychology seeks to understand mind and behavior, and neurology is the medical discipline that diagnoses and treats diseases of the nervous system. The brain is also the most important organ studied in psychiatry, the branch of medicine that works to study, prevent, and treat mental disorders. Cognitive science seeks to unify neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial intelligence and similar fields) and philosophy.

The oldest method of studying the brain is anatomical, and until the middle of the 20th century, much of the progress in neuroscience came from the development of better cell stains and better microscopes. Neuroanatomists study the large-scale structure of the brain as well as the microscopic structure of neurons and their components, especially synapses. Among other tools, they employ a plethora of stains that reveal neural structure, chemistry, and connectivity. In recent years, the development of immunostaining techniques has allowed investigation of neurons that express specific sets of genes. Also, functional neuroanatomy uses medical imaging techniques to correlate variations in human brain structure with differences in cognition or behavior.

Neurophysiologists study the chemical, pharmacological, and electrical properties of the brain: their primary tools are drugs and recording devices. Thousands of experimentally developed drugs affect the nervous system, some in highly specific ways. Recordings of brain activity can be made using electrodes, either glued to the scalp as in EEG studies, or implanted inside the brains of animals for extracellular recordings, which can detect action potentials generated by individual neurons. Because the brain does not contain pain receptors, it is possible using these techniques to record brain activity from animals that are awake and behaving without causing distress. The same techniques have occasionally been used to study brain activity in human patients with intractable epilepsy, in cases where there was a medical necessity to implant electrodes to localize the brain area responsible for epileptic seizures. Functional imaging techniques such as fMRI are also used to study brain activity; these techniques have mainly been used with human subjects, because they require a conscious subject to remain motionless for long periods of time, but they have the great advantage of being noninvasive.

Design of an experiment in which brain activity from a monkey was used to control a robotic arm

Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior.

Computational neuroscience encompasses two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation. On one hand, it is possible to write a computer program to simulate the operation of a group of neurons by making use of systems of equations that describe their electrochemical activity; such simulations are known as biologically realistic neural networks. On the other hand, it is possible to study algorithms for neural computation by simulating, or mathematically analyzing, the operations of simplified "units" that have some of the properties of neurons but abstract out much of their biological complexity. The computational functions of the brain are studied both by computer scientists and neuroscientists.
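As a concrete illustration of the first approach, the behavior of a neuron can be simulated by numerically integrating equations for its membrane potential. The sketch below uses a leaky integrate-and-fire model, one standard simplification of the "biologically realistic" equation-based simulations described above (the specific model and all parameter values are illustrative choices, not taken from the text):

```python
# Minimal leaky integrate-and-fire (LIF) neuron, integrated with Euler steps.
# All parameters are illustrative, not fitted to any real cell.

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Return spike times (ms) for a sequence of input currents (nA)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # Membrane potential decays toward rest and is driven by the input.
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_threshold:          # action potential: record time and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 2 nA input for 100 ms drives the unit across threshold repeatedly,
# producing a regular spike train; zero input produces no spikes.
spike_times = simulate_lif([2.0] * 1000)
print(len(spike_times), "spikes in 100 ms")
```

Richer simulations replace the single linear equation with the coupled ion-channel equations of Hodgkin and Huxley, but the integrate-threshold-reset loop above captures the basic structure of such programs.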

Computational neurogenetic modeling is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes.

Recent years have seen increasing applications of genetic and genomic techniques to the study of the brain and a focus on the roles of neurotrophic factors and physical activity in neuroplasticity. The most common subjects are mice, because of the availability of technical tools. It is now possible with relative ease to "knock out" or mutate a wide variety of genes, and then examine the effects on brain function. More sophisticated approaches are also being used: for example, using Cre-Lox recombination it is possible to activate or deactivate genes in specific parts of the brain, at specific times.

History

Illustration by René Descartes of how the brain implements a reflex response

The oldest brain to have been discovered was found in Armenia in the Areni-1 cave complex. The brain, estimated to be over 5,000 years old, was found in the skull of a 12 to 14-year-old girl. Although the brain was shriveled, it was well preserved due to the climate inside the cave.

Early philosophers were divided as to whether the seat of the soul lies in the brain or heart. Aristotle favored the heart, and thought that the function of the brain was merely to cool the blood. Democritus, the inventor of the atomic theory of matter, argued for a three-part soul, with intellect in the head, emotion in the heart, and lust near the liver. The unknown author of On the Sacred Disease, a medical treatise in the Hippocratic Corpus, came down unequivocally in favor of the brain, writing:

Men ought to know that from nothing else but the brain come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations. ... And by the same organ we become mad and delirious, and fears and terrors assail us, some by night, and some by day, and dreams and untimely wanderings, and cares that are not suitable, and ignorance of present circumstances, desuetude, and unskillfulness. All these things we endure from the brain, when it is not healthy...

— On the Sacred Disease, attributed to Hippocrates
Andreas Vesalius' Fabrica, published in 1543, showing the base of the human brain, including optic chiasma, cerebellum, olfactory bulbs, etc.

The Roman physician Galen also argued for the importance of the brain, and theorized in some depth about how it might work. Galen traced out the anatomical relationships among brain, nerves, and muscles, demonstrating that all muscles in the body are connected to the brain through a branching network of nerves. He postulated that nerves activate muscles mechanically by carrying a mysterious substance he called pneumata psychikon, usually translated as "animal spirits". Galen's ideas were widely known during the Middle Ages, but not much further progress came until the Renaissance, when detailed anatomical study resumed, combined with the theoretical speculations of René Descartes and those who followed him. Descartes, like Galen, thought of the nervous system in hydraulic terms. He believed that the highest cognitive functions are carried out by a non-physical res cogitans, but that the majority of behaviors of humans, and all behaviors of animals, could be explained mechanistically.

The first real progress toward a modern understanding of nervous function, though, came from the investigations of Luigi Galvani (1737–1798), who discovered that a shock of static electricity applied to an exposed nerve of a dead frog could cause its leg to contract. Since that time, each major advance in understanding has followed more or less directly from the development of a new technique of investigation. Until the early years of the 20th century, the most important advances were derived from new methods for staining cells. Particularly critical was the invention of the Golgi stain, which (when correctly used) stains only a small fraction of neurons, but stains them in their entirety, including cell body, dendrites, and axon. Without such a stain, brain tissue under a microscope appears as an impenetrable tangle of protoplasmic fibers, in which it is impossible to determine any structure. In the hands of Camillo Golgi, and especially of the Spanish neuroanatomist Santiago Ramón y Cajal, the new stain revealed hundreds of distinct types of neurons, each with its own unique dendritic structure and pattern of connectivity.

A drawing on yellowing paper with an archiving stamp in the corner. A spidery tree branch structure connects to the top of a mass. A few narrow processes follow away from the bottom of the mass.
Drawing by Santiago Ramón y Cajal of two types of Golgi-stained neurons from the cerebellum of a pigeon

In the first half of the 20th century, advances in electronics enabled investigation of the electrical properties of nerve cells, culminating in work by Alan Hodgkin, Andrew Huxley, and others on the biophysics of the action potential, and the work of Bernard Katz and others on the electrochemistry of the synapse. These studies complemented the anatomical picture with a conception of the brain as a dynamic entity. Reflecting the new understanding, in 1942 Charles Sherrington visualized the workings of the brain waking from sleep:

The great topmost sheet of the mass, that where hardly a light had twinkled or moved, becomes now a sparkling field of rhythmic flashing points with trains of traveling sparks hurrying hither and thither. The brain is waking and with it the mind is returning. It is as if the Milky Way entered upon some cosmic dance. Swiftly the head mass becomes an enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns.

— Sherrington, 1942, Man on his Nature

The invention of electronic computers in the 1940s, along with the development of mathematical information theory, led to a realization that brains can potentially be understood as information processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated the brain as essentially a digital computer in disguise, as for example in John von Neumann's 1958 book, The Computer and the Brain. Over the years, though, accumulating information about the electrical responses of brain cells recorded from behaving animals has steadily moved theoretical concepts in the direction of increasing realism.

One of the most influential early contributions was a 1959 paper titled What the frog's eye tells the frog's brain: the paper examined the visual responses of neurons in the retina and optic tectum of frogs, and came to the conclusion that some neurons in the tectum of the frog are wired to combine elementary responses in a way that makes them function as "bug perceivers". A few years later David Hubel and Torsten Wiesel discovered cells in the primary visual cortex of monkeys that become active when sharp edges move across specific points in the field of view—a discovery for which they won a Nobel Prize. Follow-up studies in higher-order visual areas found cells that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract types of cognition such as space.

Theorists have worked to understand these response patterns by constructing mathematical models of neurons and neural networks, which can be simulated using computers. Some useful models are abstract, focusing on the conceptual structure of neural algorithms rather than the details of how they are implemented in the brain; other models attempt to incorporate data about the biophysical properties of real neurons. No model on any level is yet considered to be a fully valid description of brain function, though. The essential difficulty is that sophisticated computation by neural networks requires distributed processing in which hundreds or thousands of neurons work cooperatively—current methods of brain activity recording are only capable of isolating action potentials from a few dozen neurons at a time.
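The abstract "units" used in such models keep only the idea that a neuron sums weighted inputs and responds nonlinearly, discarding the biophysical detail. A minimal sketch (weights, inputs, and the logistic nonlinearity are arbitrary illustrative choices):

```python
import math

def unit_output(inputs, weights, bias=0.0):
    """Logistic activation of a weighted input sum: a simplified 'unit'."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # output in (0, 1), like a firing rate

# Three input activities and three synaptic weights produce one output rate.
rate = unit_output([1.0, 0.5, -1.0], [0.8, 0.4, 0.3])
print(round(rate, 3))
```

Networks of such units are cheap enough to analyze mathematically or train at scale, which is exactly the trade-off against biophysical realism that the paragraph above describes.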

Furthermore, even single neurons appear to be complex and capable of performing computations. So, brain models that do not reflect this are too abstract to be representative of brain operation; models that do try to capture this are very computationally expensive and arguably intractable with present computational resources. However, the Human Brain Project is trying to build a realistic, detailed computational model of the entire human brain. The wisdom of this approach has been publicly contested, with high-profile scientists on both sides of the argument.

In the second half of the 20th century, developments in chemistry, electron microscopy, genetics, computer science, functional brain imaging, and other fields progressively opened new windows into brain structure and function. In the United States, the 1990s were officially designated as the "Decade of the Brain" to commemorate advances made in brain research, and to promote funding for such research.

In the 21st century, these trends have continued, and several new approaches have come into prominence, including multielectrode recording, which allows the activity of many brain cells to be recorded all at the same time; genetic engineering, which allows molecular components of the brain to be altered experimentally; genomics, which allows variations in brain structure to be correlated with variations in DNA properties; and neuroimaging.

Society and culture

As food

Gulai otak, beef brain curry from Indonesia

Animal brains are used as food in numerous cuisines.

In rituals

Some archaeological evidence suggests that the mourning rituals of European Neanderthals also involved the consumption of the brain.

The Fore people of Papua New Guinea are known to eat human brains. In funerary rituals, those close to the dead would eat the brain of the deceased to create a sense of immortality. A prion disease called kuru has been traced to this.

Encoding (memory)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Encoding_(memory)

Memory has the ability to encode, store and recall information. Memories give an organism the capability to learn and adapt from previous experiences as well as build relationships. Encoding allows a perceived item of use or interest to be converted into a construct that can be stored within the brain and recalled later from long-term memory. Working memory stores information for immediate use or manipulation, aided by hooking onto previously archived items already present in an individual's long-term memory.

History

Hermann Ebbinghaus
Hermann Ebbinghaus (1850–1909)

The study of encoding is still relatively new and unexplored, but its origins date back to ancient philosophers such as Aristotle and Plato. A major figure in the history of encoding is Hermann Ebbinghaus (1850–1909), a pioneer in the field of memory research. Using himself as a subject, he studied how we learn and forget information by repeating a list of nonsense syllables to the rhythm of a metronome until they were committed to his memory. These experiments led him to suggest the learning curve. He used these relatively meaningless words so that prior associations between meaningful words would not influence learning. He found that lists that allowed associations to be made, and in which semantic meaning was apparent, were easier to recall. Ebbinghaus' results paved the way for experimental psychology in memory and other mental processes.

During the 1900s, further progress in memory research was made. Ivan Pavlov began research pertaining to classical conditioning. His research demonstrated the ability to create a semantic relationship between two unrelated items. In 1932, Frederic Bartlett proposed the idea of mental schemas. This model proposed that whether new information would be encoded was dependent on its consistency with prior knowledge (mental schemas). This model also suggested that information not present at the time of encoding would be added to memory if it was based on schematic knowledge of the world. In this way, encoding was found to be influenced by prior knowledge. With the advance of Gestalt theory came the realization that memory for encoded information was often perceived as different from the stimuli that triggered it. It was also influenced by the context that the stimuli were embedded in.

With advances in technology, the field of neuropsychology emerged and with it a biological basis for theories of encoding. In 1949, Donald Hebb looked at the neuroscience aspect of encoding and stated that "neurons that fire together wire together," implying that encoding occurred as connections between neurons were established through repeated use. The 1950s and 1960s saw a shift to the information-processing approach to memory, based on the invention of computers, followed by the initial suggestion that encoding was the process by which information is entered into memory. In 1956, George Armitage Miller wrote his paper on how short-term memory is limited to seven items, plus or minus two, called The Magical Number Seven, Plus or Minus Two. This number was amended when studies done on chunking revealed that seven, plus or minus two, could also refer to seven "packets of information". In 1974, Alan Baddeley and Graham Hitch proposed their model of working memory, which consists of the central executive, visuo-spatial sketchpad, and phonological loop as a method of encoding. In 2000, Baddeley added the episodic buffer. Simultaneously, Endel Tulving (1983) proposed the idea of encoding specificity, whereby context was again noted as an influence on encoding.

Types

There are two main approaches to coding information: the physiological approach, and the mental approach. The physiological approach looks at how a stimulus is represented by neurons firing in the brain, while the mental approach looks at how the stimulus is represented in the mind.

There are many types of mental encoding in use, such as visual, elaborative, organizational, acoustic, and semantic. This is not an exhaustive list, however.

Visual encoding

Visual encoding is the process of converting images and visual sensory information to memory stored in the brain. This means that people can convert new information into mental pictures (Harrison & Semin, 2009, p. 222). Visual sensory information is temporarily stored within our iconic memory and working memory before being encoded into permanent long-term storage. Baddeley's model of working memory suggests that visual information is stored in the visuo-spatial sketchpad. The visuo-spatial sketchpad is connected to the central executive, which is a key area of working memory. The amygdala is another complex structure that has an important role in visual encoding. It accepts visual input in addition to input from other systems and encodes the positive or negative values of conditioned stimuli.

Elaborative encoding

Elaborative encoding is the process of actively relating new information to knowledge that is already in memory. Memories are a combination of old and new information, so the nature of any particular memory depends as much on the old information already in our memories as it does on the new information coming in through our senses. In other words, how we remember something depends on how we think about it at the time. Many studies have shown that long-term retention is greatly enhanced by elaborative encoding.

Semantic encoding

Semantic encoding is the processing and encoding of sensory input that has particular meaning or can be applied to a context. Various strategies, such as chunking and mnemonics, can be applied to aid encoding and, in some cases, to allow deep processing and optimize retrieval.

Words studied under semantic or deep encoding conditions are better recalled than those under both easy and hard groupings of nonsemantic or shallow encoding conditions, with response time being the deciding variable. Brodmann's areas 45, 46, and 47 (the left inferior prefrontal cortex or LIPC) showed significantly more activation during semantic encoding conditions compared to nonsemantic encoding conditions, regardless of the difficulty of the nonsemantic encoding task presented. The same area showing increased activation during initial semantic encoding will also display decreasing activation with repetitive semantic encoding of the same words. This suggests that the decrease in activation with repetition is process-specific, occurring when words are semantically reprocessed but not when they are nonsemantically reprocessed. Lesion and neuroimaging studies suggest that the orbitofrontal cortex is responsible for initial encoding and that activity in the left lateral prefrontal cortex correlates with the semantic organization of encoded information.

Acoustic encoding

Acoustic encoding is the encoding of auditory impulses. According to Baddeley, processing of auditory information is aided by the phonological loop, which allows input within our echoic memory to be subvocally rehearsed in order to facilitate remembering. When we hear a word, we do so by hearing individual sounds, one at a time. Hence the memory of the beginning of a new word is stored in our echoic memory until the whole sound has been perceived and recognized as a word. Studies indicate that lexical, semantic and phonological factors interact in verbal working memory. The phonological similarity effect (PSE) is modified by word concreteness. This emphasizes that verbal working memory performance cannot be attributed exclusively to phonological or acoustic representation but also involves an interaction with linguistic representation. What remains to be seen is whether linguistic representation is expressed at the time of recall or whether the representational methods used (such as recordings, videos, symbols, etc.) play a more fundamental role in the encoding and preservation of information in memory. The brain relies primarily on acoustic (i.e., phonological) encoding for short-term storage and primarily on semantic encoding for long-term storage.

Other senses

Tactile encoding is the processing and encoding of how something feels, normally through touch. Neurons in the primary somatosensory cortex (S1) react to vibrotactile stimuli by activating in synchronization with each series of vibrations. Odors and tastes may also lead to encoding.

Organizational encoding is the process of classifying information according to the associations among a sequence of terms.

Long-Term Potentiation

Early LTP Mechanism

Encoding is a biological event that begins with perception. All perceived and striking sensations travel to the brain's thalamus, where they are combined into a single experience. The hippocampus is responsible for analyzing these inputs and ultimately deciding whether they will be committed to long-term memory; these various threads of information are stored in various parts of the brain. However, the exact way in which these pieces are identified and recalled later remains unknown.

Encoding is achieved using a combination of chemicals and electricity. Neurotransmitters are released when an electrical pulse crosses the synapse, which serves as a connection from nerve cells to other cells. The dendrites receive these impulses with their feathery extensions. A phenomenon called long-term potentiation allows a synapse to increase in strength as the number of signals transmitted between the two neurons increases. For that to happen, NMDA receptors, which influence the flow of information between neurons by controlling the initiation of long-term potentiation in most hippocampal pathways, need to come into play. For these NMDA receptors to be activated, two conditions must be met. First, glutamate has to be released and bound to the NMDA receptor sites on postsynaptic neurons. Second, excitation has to take place in postsynaptic neurons. These cells also organize themselves into groups specializing in different kinds of information processing. Thus, with new experiences the brain creates more connections and may 'rewire'. The brain organizes and reorganizes itself in response to one's experiences, creating new memories prompted by experience, education, or training. Therefore, the use of a brain reflects how it is organized. This ability to reorganize is especially important if a part of the brain ever becomes damaged. Scientists are unsure whether the stimuli we do not recall are filtered out at the sensory phase or after the brain examines their significance.
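The two conditions for NMDA receptor activation make it behave like a coincidence detector: the receptor conducts only when presynaptic glutamate release and postsynaptic excitation occur together. A deliberately simplified sketch (real receptor kinetics are graded, not boolean):

```python
# Toy NMDA "coincidence detector": a molecular AND gate over the two
# conditions described in the text. Function name and types are illustrative.

def nmda_open(glutamate_bound: bool, postsynaptic_excited: bool) -> bool:
    """The receptor conducts only when both conditions hold simultaneously."""
    return glutamate_bound and postsynaptic_excited

assert nmda_open(True, True) is True     # both conditions met: LTP can begin
assert nmda_open(True, False) is False   # glutamate alone is not enough
assert nmda_open(False, True) is False   # excitation alone is not enough
```

This coincidence requirement is why LTP strengthens precisely those synapses whose activity contributed to firing the postsynaptic cell.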

Mapping Activity

Positron emission tomography (PET) demonstrates a consistent functional anatomical blueprint of hippocampal activation during episodic encoding and retrieval. Activation in the hippocampal region associated with episodic memory encoding has been shown to occur in the rostral portion of the region whereas activation associated with episodic memory retrieval occurs in the caudal portions. This is referred to as the Hippocampal memory encoding and retrieval model or HIPER model.

One study used PET to measure cerebral blood flow during encoding and recognition of faces in both young and older participants. Young people displayed increased cerebral blood flow in the right hippocampus and the left prefrontal and temporal cortices during encoding, and in the right prefrontal and parietal cortex during recognition. Elderly people showed no significant activation in the areas activated in young people during encoding; however, they did show right prefrontal activation during recognition. Thus it may be concluded that as we grow old, failing memories may be the consequence of a failure to adequately encode stimuli, as demonstrated by the lack of cortical and hippocampal activation during the encoding process.

Recent findings in studies focusing on patients with post traumatic stress disorder demonstrate that amino acid transmitters, glutamate and GABA, are intimately implicated in the process of factual memory registration, and suggest that amine neurotransmitters, norepinephrine-epinephrine and serotonin, are involved in encoding emotional memory.

Molecular Perspective

The process of encoding is not yet well understood; however, key advances have shed light on the nature of these mechanisms. Encoding begins with any novel situation, as the brain will interact and draw conclusions from the results of this interaction. These learning experiences have been known to trigger a cascade of molecular events leading to the formation of memories. These changes include the modification of neural synapses, modification of proteins, creation of new synapses, activation of gene expression and new protein synthesis. One study found that high central nervous system levels of acetylcholine during wakefulness aided in new memory encoding, while low levels of acetylcholine during slow-wave sleep aided in the consolidation of memories. However, encoding can occur on different levels. The first step is short-term memory formation, followed by the conversion to a long-term memory, and then a long-term memory consolidation process.

Synaptic Plasticity

Synaptic plasticity is the ability of the brain to strengthen, weaken, destroy, and create neural synapses, and it is the basis for learning. Molecular distinctions identify and indicate the strength of each neural connection. The effect of a learning experience depends on its content. Reactions that are favored will be reinforced, and those deemed unfavorable will be weakened. This shows that synaptic modifications can operate in either direction, allowing changes over time depending on the organism's current situation. In the short term, synaptic changes may include the strengthening or weakening of a connection by modifying the preexisting proteins, leading to a modification in synapse connection strength. In the long term, entirely new connections may form, or the number of synapses at a connection may be increased or reduced.
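The strengthening and weakening described above can be sketched with a toy rate-based Hebbian rule: a synapse grows stronger when pre- and postsynaptic activity coincide, and decays otherwise. The learning rate, decay constant, and activity values below are arbitrary illustrations, not measured quantities:

```python
# Toy Hebbian plasticity for a single synapse. Coincident activity increases
# the weight; in the absence of activity, the weight passively decays.

def hebbian_update(weight, pre, post, lr=0.1, decay=0.01):
    """One plasticity step (pre/post are activity levels in [0, 1])."""
    return weight + lr * pre * post - decay * weight

w = 0.5
for _ in range(50):                 # correlated activity: potentiation
    w = hebbian_update(w, pre=1.0, post=1.0)
w_potentiated = w

for _ in range(50):                 # silence: the connection slowly weakens
    w = hebbian_update(w, pre=0.0, post=0.0)
w_decayed = w

print(round(w_potentiated, 3), round(w_decayed, 3))
```

The two phases mirror the bidirectional modification the text describes: the same update rule produces strengthening under correlated firing and weakening when the activity stops.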

The Encoding Process

A significant short-term biochemical change is the covalent modification of pre-existing proteins in order to modify synaptic connections that are already active. This allows data to be conveyed in the short term, without consolidating anything for permanent storage. From here a memory or an association may be chosen to become a long-term memory, or forgotten as the synaptic connections eventually weaken. The switch from short-term to long-term memory is the same for both implicit memory and explicit memory. This process is regulated by a number of inhibitory constraints, primarily the balance between protein phosphorylation and dephosphorylation. Finally, long-term changes occur that allow consolidation of the target memory. These changes include new protein synthesis, the formation of new synaptic connections, and finally the activation of gene expression in accordance with the new neural configuration. The encoding process has been found to be partially mediated by serotonergic interneurons, specifically in regard to sensitization, as blocking these interneurons prevented sensitization entirely. However, the ultimate consequences of these discoveries have yet to be identified. Furthermore, the learning process has been known to recruit a variety of modulatory transmitters in order to create and consolidate memories. These transmitters cause the nucleus to initiate processes required for neuronal growth and long-term memory, mark specific synapses for the capture of long-term processes, regulate local protein synthesis, and even appear to mediate attentional processes required for the formation and recall of memories.

Encoding and Genetics

Human memory, including the process of encoding, is known to be a heritable trait that is controlled by more than one gene. In fact, twin studies suggest that genetic differences are responsible for as much as 50% of the variance seen in memory tasks. Proteins identified in animal studies have been linked directly to a molecular cascade of reactions leading to memory formation, and a sizable number of these proteins are encoded by genes that are expressed in humans as well. In fact, variations within these genes appear to be associated with memory capacity and have been identified in recent human genetic studies.

Complementary Processes

The idea that the brain is separated into two complementary processing networks (task-positive and task-negative) has recently become an area of increasing interest. The task-positive network deals with externally oriented processing, whereas the task-negative network deals with internally oriented processing. Research indicates that these networks are not exclusive, and some tasks overlap in their activation. A 2009 study showed that encoding-success and novelty-detection activity within the task-positive network overlap significantly, and concluded that both reflect a common association with externally oriented processing. It also demonstrated that encoding-failure and retrieval-success activity share significant overlap within the task-negative network, indicating a common association with internally oriented processing. Finally, the low overlap between encoding-success and retrieval-success activity, and between encoding-failure and novelty-detection activity, indicates opposing modes of processing. In sum, the task-positive and task-negative networks can have common associations during the performance of different tasks.

Depth of Processing

Different levels of processing influence how well information is remembered. This idea was first introduced by Craik and Lockhart (1972). They claimed that how well information is encoded depends on the depth at which it is processed, distinguishing mainly between shallow processing and deep processing. According to Craik and Lockhart, the encoding of sensory information would be considered shallow processing, as it is highly automatic and requires very little focus. Deeper processing requires more attention to the stimulus and engages more cognitive systems to encode the information. An exception to deep processing is if the individual has been exposed to the stimulus frequently and it has become common in the individual's life, such as the person's name. These levels of processing can be illustrated by maintenance and elaborative rehearsal.

Maintenance and Elaborative Rehearsal

Maintenance rehearsal is a shallow form of processing information that involves focusing on an object without thought to its meaning or its association with other objects. For example, the repetition of a series of numbers is a form of maintenance rehearsal. In contrast, elaborative or relational rehearsal is a process in which you relate new material to information already stored in long-term memory. It is a deep form of processing information and involves thinking about the object's meaning as well as making connections between the object, past experiences, and the other objects of focus. Using the example of numbers, one might associate them with personally significant dates, such as your parents' birthdays (past experiences), or you might see a pattern in the numbers that helps you to remember them.

American Penny

Due to the deeper level of processing that occurs with elaborative rehearsal, it is more effective than maintenance rehearsal at creating new memories. This has been demonstrated by people's lack of knowledge of the details of everyday objects. For example, in one study where Americans were asked about the orientation of the face on their country's penny, few recalled this with any degree of certainty. Despite being a detail that is often seen, it is not remembered because there is no need: the color alone discriminates the penny from other coins. The ineffectiveness of maintenance rehearsal (simply being repeatedly exposed to an item) in creating memories has also been found in people's lack of memory for the layout of the digits 0–9 on calculators and telephones.

Maintenance rehearsal has been demonstrated to be important in learning, but its effects can only be demonstrated using indirect methods such as lexical decision tasks and word-stem completion, which are used to assess implicit learning. In general, however, previous learning by maintenance rehearsal is not apparent when memory is tested directly or explicitly with questions like "Is this the word you were shown earlier?"

Intention to Learn

Studies have shown that the intention to learn has no direct effect on memory encoding. Instead, encoding depends on how deeply each item is processed, which can be affected by the intention to learn, but not exclusively. That is, the intention to learn can lead to more effective learning strategies and, consequently, better memory encoding; but if something is learned incidentally (i.e., without the intention to learn) yet still processed effectively, it will be encoded just as well as something learned with intention.

The effects of elaborative rehearsal or deep processing can be attributed to the number of connections made while encoding that increase the number of pathways available for retrieval.

Optimal Encoding

Organization

Organization is key to memory encoding. Researchers have discovered that the mind naturally organizes information if the information it receives is not already organized. One natural way to organize information is through hierarchies. For example, grouping animals into mammals, reptiles, and amphibians forms a hierarchy within the animal kingdom.

Depth of processing is also related to the organization of information. The connections made between a to-be-remembered item, other to-be-remembered items, previous experiences, and context generate retrieval paths for that item and can act as retrieval cues. These connections impose organization on the to-be-remembered item, making it more memorable.

Visual Images

Another method used to enhance encoding is to associate images with words. Gordon Bower and David Winzenz (1970) demonstrated the use of imagery in encoding in their research on paired-associate learning. They gave participants a list of 15 word pairs, showing each pair for 5 seconds. One group was told to create a mental image of the two words in each pair in which the two items were interacting. The other group was told to use maintenance rehearsal to remember the information. When participants were later asked to recall the second word in each pair, those who had created visual images of the items interacting remembered over twice as many word pairs as those who had used maintenance rehearsal.

Mnemonics

Red Orange Yellow Green Blue Indigo Violet
The mnemonic "Roy G. Biv" can be used to remember the colors of the rainbow

When memorizing simple material such as lists of words, mnemonics may be the best strategy, while "material already in long-term store [will be] unaffected". Mnemonic strategies are an example of how finding organization within a set of items helps those items be remembered. In the absence of any apparent organization within a group, organization can be imposed with the same memory-enhancing results. An example of a mnemonic strategy that imposes organization is the peg-word system, which associates the to-be-remembered items with a list of easily remembered items. Another commonly used mnemonic device is the first-letter-of-every-word system, or acronym. When learning the colors of the rainbow, most students learn the first letter of every color and impose their own meaning by associating the letters with a name such as Roy G. Biv, which stands for red, orange, yellow, green, blue, indigo, violet. In this way, mnemonic devices help encode not only specific items but also their sequence. For more complex concepts, understanding is the key to remembering. In a 1974 study, Wiseman and Neisser presented participants with a picture of a Dalmatian rendered in a pointillism-like style that made the image difficult to see. They found that memory for the picture was better if the participants understood what was depicted.

Chunking

Chunking is a memory strategy used to maximize the amount of information held in short-term memory by combining it into small, meaningful sections. When objects are organized into meaningful sections, those sections are remembered as units rather than as separate objects. As larger sections are analyzed and connections made, information is woven into meaningful associations and combined into fewer, but larger and more significant, pieces of information. This increases the amount of information that can be held in short-term memory. More specifically, chunking can increase recall from five to eight items to 20 items or more as associations are made between the items.

Words are an example of chunking: instead of simply perceiving letters, we perceive and remember their meaningful wholes, words. Chunking increases the number of items we are able to remember by creating meaningful "packets" in which many related items are stored as one. Chunking is also seen with numbers; one of the most common everyday forms is the phone number. Generally speaking, phone numbers are separated into sections, for example 909 200 5890, in which the digits are grouped together to make up one whole. Grouping numbers in this manner allows them to be recalled more easily because the groups are familiar and comprehensible.
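The phone-number grouping above can be sketched in a few lines of Python; the 3-3-4 split is the familiar US phone-number convention, used here purely as an illustration:

```python
def chunk(digits, sizes=(3, 3, 4)):
    """Split a digit string into familiar groups, e.g. a US phone number."""
    groups, i = [], 0
    for size in sizes:
        groups.append(digits[i:i + size])
        i += size
    return " ".join(groups)

print(chunk("9092005890"))  # -> 909 200 5890
```

The same idea applies to any grouping scheme: the `sizes` tuple encodes how the unorganized stream is cut into meaningful units.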

State-Dependent Learning

For optimal encoding, connections are formed not only between the items themselves and past experiences, but also between the internal state or mood of the encoder and the situation they are in. The connections formed between the encoder's internal state or situation and the items to be remembered are state-dependent. In a 1975 study, Godden and Baddeley showed the effects of state-dependent learning. They asked deep-sea divers to learn various materials while either under water or at the side of the pool. They found that those who were tested in the same condition in which they had learned the information were better able to recall it; that is, those who learned the material under water did better when tested on that material under water than when tested on land. The context had become associated with the material they were trying to recall and was therefore serving as a retrieval cue. Similar results have been found when certain smells are present at encoding.

However, although the external environment at the time of encoding is important in creating multiple pathways for retrieval, other studies have shown that simply recreating the internal state present at the time of encoding is sufficient to serve as a retrieval cue. Being in the same mindset as at the time of encoding therefore helps recall in the same way that being in the same situation does. This effect, called context reinstatement, was demonstrated by Fisher and Craik in 1977 when they matched retrieval cues with the way information was memorized.

Transfer-Appropriate Processing

Transfer-appropriate processing is a strategy for encoding that leads to successful retrieval. An experiment conducted by Morris and coworkers in 1977 showed that successful retrieval results from matching the type of processing used during encoding. Their main finding was that an individual's ability to retrieve information was heavily influenced by whether the task at encoding matched the task at retrieval. In the first task, subjects in the rhyming group were given a target word and then asked to review a different set of words, judging whether each new word rhymed with the target word; they focused solely on sound rather than on the meaning of the words. In the second task, individuals were also given a target word followed by a series of new words, but rather than identifying the ones that rhymed, they focused on meaning. As it turned out, the rhyming group was able to recall more words on the later test than the meaning group, suggesting that encoding was more efficient when the processing at encoding matched the processing required at retrieval. In transfer-appropriate processing, the procedure thus has two stages: in the first, exposure to the stimuli is manipulated to induce a particular kind of processing; in the second, retrieval succeeds to the extent that the test matches how the stimuli were processed in the first stage.

Encoding Specificity

An ambiguous figure that can be perceived as either a vase or a pair of faces.

The context of learning shapes how information is encoded. For instance, Kanizsa (1979) showed a picture that could be interpreted either as a white vase on a black background or as two faces facing each other on a white background. Participants were primed to see the vase. Later they were shown the picture again, but this time they were primed to see the faces. Although it was the same picture they had seen before, when asked whether they had seen it, participants said no. Because they had been primed to see the vase the first time the picture was presented, it was unrecognizable the second time as two faces. This demonstrates that a stimulus is understood within the context in which it is learned, as well as the general rule that good learning is revealed by tests that test material in the same way it was learned. Therefore, to be truly efficient at remembering information, one must consider the demands that future recall will place on the information and study in a way that matches those demands.

Generation Effect

Another principle that may aid encoding is the generation effect: learning is enhanced when individuals generate information or items themselves rather than simply reading the content. The key to properly applying the generation effect is to generate information rather than passively select from information already available, as in choosing an answer to a multiple-choice question. In 1978, researchers Slamecka and Graf conducted an experiment to better understand this effect. Participants were assigned to one of two groups, a read group or a generate group. Participants in the read group were asked simply to read a list of related word pairs, for example, horse-saddle. Participants in the generate group were asked to fill in the blank letters of one of the related words in the pair; for instance, given the word horse, they would need to fill in the last four letters of the word saddle. The researchers discovered that the group asked to fill in the blanks had better recall for the word pairs than the group asked simply to remember them.

Self-Reference Effect

Research illustrates that the self-reference effect aids encoding. The self-reference effect is the finding that individuals encode information more effectively if they can personally relate to it. For example, some people may find that certain birth dates of family members and friends are easier to remember than others, and some researchers attribute this to the self-reference effect: dates are easier to recall if they are close to one's own birth date or to other dates one deems important, such as anniversaries.

Research has shown that, after encoding, the self-reference effect is more effective for recalling information than semantic encoding. Researchers have found that the self-reference effect goes hand in hand with elaborative rehearsal, which is more often than not positively correlated with improved retrieval of information from memory. Studies have also concluded that the self-reference effect can be used to encode information at all ages, although older adults are more limited in their use of it than younger adults.

Salience

When an item or idea is considered "salient", it noticeably stands out. Salient information may be encoded in memory more efficiently than information that does not stand out to the learner. With regard to encoding, any event involving survival may be considered salient, and research has shown that survival relevance may be related to the self-reference effect through evolutionary mechanisms. Researchers have discovered that words high in survival value are encoded better than words ranked lower in survival value, supporting the evolutionary claim that humans preferentially remember content associated with survival. A replication of one such experiment further suggested that survival-related content has an encoding advantage over other content.

Retrieval Practice

Studies have shown that an effective tool to increase encoding during learning is to create and take practice tests. Using retrieval to enhance performance is called the testing effect, as it actively involves creating and recreating the material one intends to learn and increases one's exposure to it. It is also a useful tool for connecting new information to information already stored in memory, given the close association between encoding and retrieval. Creating practice tests thus allows the individual to process the information at a deeper level than simply rereading the material or using a pre-made test. The benefits of retrieval practice were demonstrated in a study in which college students were asked to read a passage for seven minutes and were then given a two-minute break, during which they completed math problems. One group of participants was then given seven minutes to write down as much of the passage as they could remember, while the other group was given another seven minutes to reread the material. Later, all participants were given a recall test at various intervals (five minutes, two days, and one week) after the initial learning. The results showed that those who had taken a recall test on the first day of the experiment retained more information than those who had simply reread the text, demonstrating that retrieval practice is a useful tool for encoding information into long-term memory.

Computational Models of Memory Encoding

Computational models of memory encoding have been developed in order to better understand and simulate the mostly expected, yet sometimes wildly unpredictable, behaviors of human memory. Different models have been developed for different memory tasks, which include item recognition, cued recall, free recall, and sequence memory, in an attempt to accurately explain experimentally observed behaviors.

Item recognition

In item recognition, one is asked whether or not a given probe item has been seen before. It is important to note that the recognition of an item can include context: one can be asked whether an item has been seen in a study list. So even if one has seen the word "apple" sometime during one's life, if it was not on the study list, it should not be recognized.

Item recognition can be modeled using multiple trace theory and the attribute-similarity model. In brief, every item one sees can be represented as a vector of the item's attributes, which is extended by a vector representing the context at the time of encoding and stored in a memory matrix of all items ever seen. When a probe item is presented, the sum of its similarities to each item in the matrix is computed (similarity is inversely related to the distance between the probe vector and each item in the memory matrix). If the summed similarity is above a threshold value, one responds, "Yes, I recognize that item." Given that context continually drifts by the nature of a random walk, more recently seen items, whose context vectors are similar to the context vector at the time of the recognition task, are more likely to be recognized than items seen longer ago.
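A minimal sketch of this summed-similarity decision rule follows. The exponential similarity function, the toy three-dimensional attribute-plus-context vectors, and the threshold value are illustrative assumptions, not part of any specific published model:

```python
import math

def similarity(probe, trace):
    # Similarity decays exponentially with the Euclidean distance
    # between attribute vectors (an illustrative choice).
    dist = math.sqrt(sum((p - t) ** 2 for p, t in zip(probe, trace)))
    return math.exp(-dist)

def recognize(probe, memory, threshold=1.0):
    # Sum the probe's similarity to every stored trace and respond
    # "yes" when the total exceeds the threshold.
    total = sum(similarity(probe, trace) for trace in memory)
    return total >= threshold

# Each trace = item attributes extended by context at encoding (toy vectors).
memory = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3], [0.0, 1.0, 0.8]]
print(recognize([1.0, 0.0, 0.25], memory))  # close match -> True
print(recognize([5.0, 5.0, 5.0], memory))   # dissimilar probe -> False
```

Context drift can be modeled by letting the context components of new traces wander over time, which makes recent traces more similar to the probe's context and hence more likely to push the sum over the threshold.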

Cued Recall

In cued recall, an individual is presented with stimuli, such as a list of words, and then asked to remember as many of them as possible. They are then given cues, such as categories, to help them remember what the stimuli were. For example, a subject might be asked to memorize the words meteor, star, space ship, and alien, and then be given the cue "outer space" to remind them of the list. Giving subjects cues, even ones never originally mentioned, helps them recall the stimuli much better, guiding them to stimuli they could not remember on their own. Cues can essentially be anything that helps a memory deemed forgotten to resurface. An experiment conducted by Tulving showed that when subjects were given cues, they were able to recall more of the previously presented stimuli.

Cued recall can be explained by extending the attribute-similarity model used for item recognition. Because in cued recall, a wrong response can be given for a probe item, the model has to be extended accordingly to account for that. This can be achieved by adding noise to the item vectors when they are stored in the memory matrix. Furthermore, cued recall can be modeled in a probabilistic manner such that for every item stored in the memory matrix, the more similar it is to the probe item, the more likely it is to be recalled. Because the items in the memory matrix contain noise in their values, this model can account for incorrect recalls, such as mistakenly calling a person by the wrong name.
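One way to sketch this extension is to add Gaussian noise when traces are stored and then retrieve the label of the most similar trace. The noise level, the similarity function, and the winner-take-all retrieval rule are all illustrative assumptions:

```python
import math
import random

random.seed(0)  # reproducible noise for the demonstration

def encode(item, noise=0.05):
    # Store each attribute with Gaussian noise, so retrieval can err.
    return [a + random.gauss(0, noise) for a in item]

def cued_recall(cue, memory):
    # Return the label of the stored trace most similar to the cue;
    # similarity decays exponentially with Euclidean distance.
    def sim(a, b):
        d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return math.exp(-d)
    return max(memory, key=lambda entry: sim(cue, entry[1]))[0]

# Two people represented as attribute vectors; noisy traces mean a cue
# can in principle retrieve the wrong name, modeling wrong-name errors.
memory = [("Alice", encode([1.0, 0.0])), ("Bob", encode([0.0, 1.0]))]
print(cued_recall([0.9, 0.1], memory))  # -> Alice
```

A fully probabilistic version would sample a response with probability proportional to similarity rather than always taking the maximum, which produces occasional wrong-name recalls even for clear cues.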

Free Recall

In free recall, one is allowed to recall learned items in any order; for example, one could be asked to name as many countries in Europe as possible. Free recall can be modeled using SAM (Search of Associative Memory), which is based on the dual-store model first proposed by Atkinson and Shiffrin in 1968. SAM consists of two main components: the short-term store (STS) and the long-term store (LTS). In brief, when an item is seen, it is pushed into the STS, where it resides with other items until it is displaced and put into the LTS. The longer an item has been in the STS, the more likely it is to be displaced by a new item. When items co-reside in the STS, the links between them are strengthened. Furthermore, SAM assumes that items in the STS are always available for immediate recall.

SAM explains both primacy and recency effects. Probabilistically, items at the beginning of the list are more likely to remain in the STS and thus have more opportunities to strengthen their links to other items; as a result, they are more likely to be recalled in a free-recall task (the primacy effect). Because items in the STS are assumed to be always available for immediate recall, items at the end of the list can be recalled very well, provided there were no significant distractors between learning and recall (the recency effect).
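The buffer dynamics described above can be sketched as follows. The buffer size, random displacement, and unit link increments are illustrative assumptions; for the demonstration the buffer is large enough that nothing is displaced, so the run is deterministic:

```python
import random

def sam_study(items, buffer_size=4):
    # Toy SAM-style study phase: a fixed-capacity short-term store (STS).
    # An arriving item may displace a random resident, and at each time
    # step every co-resident pair strengthens its link.
    sts, strength = [], {}
    for item in items:
        if len(sts) == buffer_size:
            sts.remove(random.choice(sts))  # displace a random resident
        sts.append(item)
        for i in range(len(sts)):
            for j in range(i + 1, len(sts)):
                pair = frozenset((sts[i], sts[j]))
                strength[pair] = strength.get(pair, 0) + 1
    return sts, strength

sts, links = sam_study(["a", "b", "c", "d"], buffer_size=4)
# Early items co-reside longest, so their links are strongest
# (a primacy-like advantage), while the final STS contents are
# available for immediate recall (recency).
print(links[frozenset(("a", "b"))])  # -> 3
print(links[frozenset(("c", "d"))])  # -> 1
```

With a smaller buffer and random displacement, early items are probabilistically retained longer, reproducing the primacy gradient the text describes.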

Studies have shown that free recall is one of the most effective study methods for transferring information from short-term memory to long-term memory, compared with item recognition and cued recall, because greater relational processing is involved.

Incidentally, the idea of STS and LTS was motivated by the architecture of computers, which contain short-term and long-term storage.

Sequence Memory

Sequence memory is responsible for how we remember lists of things in which ordering matters; for example, telephone numbers are ordered lists of one-digit numbers. There are currently two main computational memory models that can be applied to sequence encoding: associative chaining and positional coding.

Associative chaining theory states that every item in a list is linked to its forward and backward neighbors, with forward links being stronger than backward links and links to closer neighbors stronger than links to farther neighbors. Associative chaining thus predicts the tendency toward transposition errors, which occur most often between items in nearby positions. An example of a transposition error would be recalling the sequence "apple, orange, banana" instead of "apple, banana, orange."
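A sketch of these link strengths, with the forward/backward asymmetry and distance decay encoded as simple weights (the specific numbers 1.0, 0.5, and 1/distance are illustrative assumptions):

```python
def chain_links(items):
    # Build pairwise link strengths between list items:
    # closer neighbors get stronger links, and forward links
    # are stronger than backward links.
    links = {}
    for i, a in enumerate(items):
        for j, b in enumerate(items):
            if i == j:
                continue
            base = 1.0 / abs(i - j)            # nearer -> stronger
            direction = 1.0 if j > i else 0.5  # forward > backward
            links[(a, b)] = base * direction
    return links

links = chain_links(["apple", "banana", "orange"])
print(links[("apple", "banana")])  # forward neighbor  -> 1.0
print(links[("banana", "apple")])  # backward neighbor -> 0.5
```

Transposition errors fall out of the gradient: when the correct successor fails to be retrieved, the next-strongest links point to nearby positions, so nearby items are the most likely intrusions.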

Positional coding theory suggests that every item in a list is associated with its position in the list. For example, if the list is "apple, banana, orange, mango", apple is associated with list position 1, banana with 2, orange with 3, and mango with 4. Furthermore, each item is also associated, more weakly, with its index +/- 1, even more weakly with +/- 2, and so forth. So banana is associated not only with its actual index 2, but also with 1, 3, and 4, with varying degrees of strength. Positional coding can also explain the recency and primacy effects: because items at the beginning and end of a list have fewer close neighbors than items in the middle, they face less competition for correct recall.
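The same positional gradient can be sketched directly; the geometric decay factor of 0.5 per position is an illustrative assumption:

```python
def positional_strengths(items, decay=0.5):
    # Associate each item with every list position: strength 1 at its
    # own position, decaying geometrically with positional distance.
    strengths = {}
    for i, item in enumerate(items):
        for pos in range(len(items)):
            strengths[(item, pos + 1)] = decay ** abs(pos - i)
    return strengths

s = positional_strengths(["apple", "banana", "orange", "mango"])
# banana is tied most strongly to position 2, more weakly to 1 and 3:
print(s[("banana", 2)])  # -> 1.0
print(s[("banana", 1)])  # -> 0.5
print(s[("banana", 4)])  # -> 0.25
```

Edge items have fewer near neighbors, so fewer competing items share a strong association with their positions, which is the mechanism behind the primacy and recency predictions in the text.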

Although the associative chaining and positional coding models can explain a great deal of the behavior seen in sequence memory, they are far from perfect. For example, neither model properly captures the details of the Ranschburg effect, the finding that sequences containing repeated items are harder to reproduce than sequences without repeats. Associative chaining predicts that recall of lists containing repeated items is impaired, because recalling any repeated item cues not only its true successor but also the successors of all other instances of that item. However, experimental data have shown that spaced repetition of items impaired recall of the second occurrence of the repeated item while having no measurable effect on the recall of the items that followed it, contradicting the prediction of associative chaining. Positional coding predicts that repeated items will have no effect on recall, since the positions of items in the list act as independent cues, including for the repeated items; that is, the model treats repeated items no differently from any others. This, again, is inconsistent with the data.

Because no comprehensive model of sequence memory has yet been defined, it remains an interesting area of research.
