
Thursday, July 12, 2018

Self-organizing map

From Wikipedia, the free encyclopedia

A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map, and is therefore a method of dimensionality reduction. Self-organizing maps differ from other artificial neural networks in that they apply competitive learning as opposed to error-correction learning (such as backpropagation with gradient descent), and in that they use a neighborhood function to preserve the topological properties of the input space.

A self-organizing map showing U.S. Congress voting patterns. The input data was a table with a row for each member of Congress, and columns for certain votes containing each member's yes/no/abstain vote. The SOM algorithm arranged these members in a two-dimensional grid placing similar members closer together. The first plot shows the grouping when the data are split into two clusters. The second plot shows average distance to neighbours: larger distances are darker. The third plot predicts Republican (red) or Democratic (blue) party membership. The other plots each overlay the resulting map with predicted values on an input dimension: red means a predicted 'yes' vote on that bill, blue means a 'no' vote. The plot was created in Synapse.

This makes SOMs useful for visualization by creating low-dimensional views of high-dimensional data, akin to multidimensional scaling. The artificial neural network introduced by the Finnish professor Teuvo Kohonen in the 1980s is sometimes called a Kohonen map or network.[1][2] The Kohonen net is a computationally convenient abstraction building on biological models of neural systems from the 1970s[3] and morphogenesis models dating back to Alan Turing in the 1950s.[4]

Although this type of network structure is often thought of as related to feedforward networks, in which the nodes are visualized as being attached, it is fundamentally different in both arrangement and motivation.

Useful extensions include using toroidal grids where opposite edges are connected and using large numbers of nodes.

It has been shown that while self-organizing maps with a small number of nodes behave in a way that is similar to K-means, larger self-organizing maps rearrange data in a way that is fundamentally topological in character.

It is also common to use the U-Matrix.[5] The U-Matrix value of a particular node is the average distance between the node's weight vector and that of its closest neighbors.[6] In a square grid, for instance, we might consider the closest 4 or 8 nodes (the Von Neumann and Moore neighborhoods, respectively), or six nodes in a hexagonal grid.
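For a rectangular grid, the U-Matrix computation is a short loop over nodes and their grid neighbors. The following is a minimal NumPy sketch, assuming the trained weights are stored in an array of shape (rows, cols, dim); the function name u_matrix and the choice of the 4-node Von Neumann neighborhood are illustrative assumptions, not part of the original description.

import numpy as np

def u_matrix(weights):
    """U-Matrix value per node: the average Euclidean distance between the node's
    weight vector and the weight vectors of its Von Neumann (4-connected) neighbors."""
    rows, cols, _ = weights.shape
    u = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            neighbor_dists = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    neighbor_dists.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            u[i, j] = np.mean(neighbor_dists)  # edge and corner nodes simply have fewer neighbors
    return u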

Large SOMs display emergent properties. In maps consisting of thousands of nodes, it is possible to perform cluster operations on the map itself.[7]

Structure and operations

Like most artificial neural networks, SOMs operate in two modes: training and mapping. "Training" builds the map using input examples (a competitive process, also called vector quantization), while "mapping" automatically classifies a new input vector.

The visible part of a self-organizing map is the map space, which consists of components called nodes or neurons. The map space is defined beforehand, usually as a finite two-dimensional region where nodes are arranged in a regular hexagonal or rectangular grid.[8] Each node is associated with a "weight" vector, which is a position in the input space; that is, it has the same dimension as each input vector. While nodes in the map space stay fixed, training consists of moving weight vectors toward the input data without spoiling the topology induced from the map space. Thus, the self-organizing map describes a mapping from a higher-dimensional input space to a lower-dimensional map space. Once trained, the map can classify a vector from the data space by finding the node whose weight vector is closest (smallest distance metric) to the data space vector.

Learning algorithm

The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory or other sensory information is handled in separate parts of the cerebral cortex in the human brain.[9]

An illustration of the training of a self-organizing map. The blue blob is the distribution of the training data, and the small white disc is the current training datum drawn from that distribution. At first (left) the SOM nodes are arbitrarily positioned in the data space. The node (highlighted in yellow) which is nearest to the training datum is selected. It is moved towards the training datum, as (to a lesser extent) are its neighbors on the grid. After many iterations the grid tends to approximate the data distribution (right).

The weights of the neurons are initialized either to small random values or sampled evenly from the subspace spanned by the two largest principal component eigenvectors. With the latter alternative, learning is much faster because the initial weights already give a good approximation of SOM weights.[10]

The network must be fed a large number of example vectors that represent, as close as possible, the kinds of vectors expected during mapping. The examples are usually administered several times as iterations.

The training utilizes competitive learning. When a training example is fed to the network, its Euclidean distance to all weight vectors is computed. The neuron whose weight vector is most similar to the input is called the best matching unit (BMU). The weights of the BMU and neurons close to it in the SOM grid are adjusted towards the input vector. The magnitude of the change decreases with time and with the grid-distance from the BMU. The update formula for a neuron v with weight vector Wv(s) is
W_v(s+1) = W_v(s) + θ(u, v, s) · α(s) · (D(t) − W_v(s)),
where s is the step index, t is an index into the training sample, u is the index of the BMU for the input vector D(t), α(s) is a monotonically decreasing learning coefficient, and θ(u, v, s) is the neighborhood function, which depends on the grid distance between the BMU (neuron u) and neuron v at step s.[11] Depending on the implementation, t can scan the training data set systematically (t is 0, 1, 2...T−1, then repeats, T being the training sample's size), be drawn randomly from the data set (bootstrap sampling), or follow some other sampling method (such as jackknifing).

The neighborhood function Θ(u, v, s) depends on the grid-distance between the BMU (neuron u) and neuron v. In the simplest form it is 1 for all neurons close enough to BMU and 0 for others, but a Gaussian function is a common choice, too. Regardless of the functional form, the neighborhood function shrinks with time.[9] At the beginning when the neighborhood is broad, the self-organizing takes place on the global scale. When the neighborhood has shrunk to just a couple of neurons, the weights are converging to local estimates. In some implementations the learning coefficient α and the neighborhood function Θ decrease steadily with increasing s, in others (in particular those where t scans the training data set) they decrease in step-wise fashion, once every T steps.
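A common concrete choice, given here only as an illustrative example (the exponential radius schedule and the symbols σ₀ and τ are assumptions, not taken from the text above), is a Gaussian centered on the BMU whose width shrinks as training proceeds:

θ(u, v, s) = exp( −d(u, v)² / (2 σ(s)²) ),    σ(s) = σ₀ · exp(−s / τ),

where d(u, v) is the grid distance between neurons u and v, σ₀ is the initial neighborhood radius, and τ is a decay time constant.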

This process is repeated for each input vector for a (usually large) number of cycles λ. The network winds up associating output nodes with groups or patterns in the input data set. If these patterns can be named, the names can be attached to the associated nodes in the trained net.

During mapping, there will be one single winning neuron: the neuron whose weight vector lies closest to the input vector. This can be simply determined by calculating the Euclidean distance between input vector and weight vector.
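As a minimal sketch of this mapping step (assuming the trained weights are stored in a NumPy array of shape (rows, cols, dim); the function name map_input is illustrative):

import numpy as np

def map_input(weights, x):
    """Return the (row, col) index of the winning neuron for input vector x,
    i.e. the node whose weight vector is closest in Euclidean distance."""
    dists = np.linalg.norm(weights - x, axis=-1)   # distance from x to every node's weight vector
    return tuple(np.unravel_index(np.argmin(dists), dists.shape))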

While representing input data as vectors has been emphasized in this article, it should be noted that any kind of object which can be represented digitally, which has an appropriate distance measure associated with it, and in which the necessary operations for training are possible can be used to construct a self-organizing map. This includes matrices, continuous functions or even other self-organizing maps.

Variables

These are the variables needed, with vectors in bold:
  • s is the current iteration
  • λ is the iteration limit
  • t is the index of the target input data vector in the input data set D
  • D(t) is a target input data vector
  • W_v is the current weight vector of node v
  • v is the index of the node in the map
  • u is the index of the best matching unit (BMU) in the map
  • θ(u, v, s) is a restraint due to distance from BMU, usually called the neighborhood function, and
  • α(s) is a learning restraint due to iteration progress.

Algorithm

  1. Randomize the node weight vectors in a map
  2. Randomly pick an input vector D(t)
  3. Traverse each node in the map
    1. Use the Euclidean distance formula to find the similarity between the input vector and the map's node's weight vector
    2. Track the node that produces the smallest distance (this node is the best matching unit, BMU)
  4. Update the weight vectors of the nodes in the neighborhood of the BMU (including the BMU itself) by pulling them closer to the input vector

    1. W_v(s+1) = W_v(s) + θ(u, v, s) · α(s) · (D(t) − W_v(s))
  5. Increase s and repeat from step 2 while s < λ
A variant algorithm:
  1. Randomize the map's nodes' weight vectors
  2. Traverse each input vector in the input data set
    1. Traverse each node in the map
      1. Use the Euclidean distance formula to find the similarity between the input vector and the map's node's weight vector
      2. Track the node that produces the smallest distance (this node is the best matching unit, BMU)
    2. Update the nodes in the neighborhood of the BMU (including the BMU itself) by pulling them closer to the input vector

      1. W_v(s+1) = W_v(s) + θ(u, v, s) · α(s) · (D(t) − W_v(s))
  3. Increase s and repeat from step 2 while s < λ
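The steps above translate almost directly into code. The following NumPy sketch follows the first (random-sampling) variant; the linearly decaying schedules for α(s) and the neighborhood radius, the Gaussian form of θ, and the name train_som are illustrative assumptions rather than a reference implementation.

import numpy as np

def train_som(data, rows, cols, n_iters, alpha0=0.1, sigma0=None, seed=None):
    """Train a rectangular SOM on data of shape (n_samples, dim);
    return the weight grid of shape (rows, cols, dim)."""
    rng = np.random.default_rng(seed)
    if sigma0 is None:
        sigma0 = max(rows, cols) / 2.0                         # initial neighborhood radius
    weights = rng.random((rows, cols, data.shape[1]))          # step 1: random weight vectors
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for s in range(n_iters):
        x = data[rng.integers(len(data))]                      # step 2: random input D(t)
        dists = np.linalg.norm(weights - x, axis=-1)           # step 3: distance to every node
        bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best matching unit u
        frac = s / n_iters
        alpha = alpha0 * (1.0 - frac)                          # decaying learning coefficient α(s)
        sigma = sigma0 * (1.0 - frac) + 1e-3                   # shrinking neighborhood width
        grid_dist_sq = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        theta = np.exp(-grid_dist_sq / (2.0 * sigma ** 2))     # Gaussian neighborhood θ(u, v, s)
        weights += (theta * alpha)[..., None] * (x - weights)  # step 4: pull weights toward D(t)
    return weights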

SOM Initialization

Selection of a good initial approximation is a well-known problem for all iterative methods of learning neural networks. Kohonen[12] used random initialization of SOM weights. More recently, principal component initialization, in which initial map weights are chosen from the space of the first principal components, has become popular due to the exact reproducibility of the results.[13]

Careful comparison of the random initialization approach to principal component initialization for one-dimensional SOM (models of principal curves) demonstrated that the advantages of principal component SOM initialization are not universal. The best initialization method depends on the geometry of the specific dataset. Principal component initialization is preferable (in dimension one) if the principal curve approximating the dataset can be univalently and linearly projected onto the first principal component (quasilinear sets). For nonlinear datasets, however, random initialization performs better.[14]
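One way to realize principal component initialization is to lay the initial weights on a regular grid in the plane spanned by the first two principal components. The sketch below uses a singular value decomposition; the ±2 standard deviation extent and the function name pca_initialize are illustrative assumptions.

import numpy as np

def pca_initialize(data, rows, cols):
    """Spread initial SOM weights evenly over the plane spanned by the first
    two principal components of the (centered) data."""
    mean = data.mean(axis=0)
    centered = data - mean
    _, svals, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt are principal directions
    scale = svals[:2] / np.sqrt(len(data) - 1)                   # standard deviation along PC1 and PC2
    a = np.linspace(-2.0, 2.0, rows)[:, None, None]              # grid coordinates in units of std dev
    b = np.linspace(-2.0, 2.0, cols)[None, :, None]
    return mean + a * scale[0] * vt[0] + b * scale[1] * vt[1]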

Examples

Fisher's Iris Flower Data

Consider an n×m array of nodes, each of which contains a weight vector and is aware of its location in the array. Each weight vector is of the same dimension as the node's input vector. The weights may initially be set to random values.

Now we need input to feed the map. Colors can be represented by their red, green, and blue components. Consequently, we will represent colors as vectors in the unit cube of the free vector space over ℝ generated by the basis:
R = <255, 0, 0>
G = <0, 255, 0>
B = <0, 0, 255>
The diagram shown (captioned "Self organizing maps (SOM) of three and eight colors with U-Matrix") compares the results of training on the data sets[Note 1]
threeColors = [255, 0, 0], [0, 255, 0], [0, 0, 255]
eightColors = [0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0], [0, 255, 255], [255, 0, 255], [255, 255, 255]
and the original images. Note the striking resemblance between the two.
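Using the train_som and u_matrix sketches from earlier (hypothetical helpers, not the code that produced the figure), the eight-color experiment might be set up roughly as follows; the iteration count is an arbitrary choice for illustration.

import numpy as np

# The eight-color training set from above, scaled to the unit cube.
eight_colors = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255],
                         [255, 255, 0], [0, 255, 255], [255, 0, 255],
                         [255, 255, 255]], dtype=float) / 255.0

weights = train_som(eight_colors, rows=40, cols=40, n_iters=10000, alpha0=0.1)
u = u_matrix(weights)   # neighbor distances, ready to plot as the U-Matrix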

Similarly, after training a 40×40 grid of neurons for 250 iterations with a learning rate of 0.1 on Fisher's Iris, the map can already detect the main differences between species.

Self organizing map (SOM) of Fisher's Iris Flower Data Set with U-Matrix. Top left: a color image formed by the first three dimensions of the four-dimensional SOM weight vectors. Top Right: a pseudo-color image of the magnitude of the SOM weight vectors. Bottom Left: a U-Matrix (Euclidean distance between weight vectors of neighboring cells) of the SOM. Bottom Right: An overlay of data points (red: I. setosa, green: I. versicolor and blue: I. virginica) on the U-Matrix based on the minimum Euclidean distance between data vectors and SOM weight vectors.

Interpretation

Cartographical representation of a self-organizing map (U-Matrix) based on Wikipedia featured article data (word frequency). Distance is inversely proportional to similarity. The "mountains" are edges between clusters. The red lines are links between articles.
 
One-dimensional SOM versus principal component analysis (PCA) for data approximation. SOM is a red broken line with squares, 20 nodes. The first principal component is presented by a blue line. Data points are the small grey circles. For PCA, the fraction of variance unexplained in this example is 23.23%, for SOM it is 6.86%.[15]

There are two ways to interpret a SOM. Because in the training phase weights of the whole neighborhood are moved in the same direction, similar items tend to excite adjacent neurons. Therefore, SOM forms a semantic map where similar samples are mapped close together and dissimilar ones apart. This may be visualized by a U-Matrix (Euclidean distance between weight vectors of neighboring cells) of the SOM.[5][6][16]

The other way is to think of neuronal weights as pointers to the input space. They form a discrete approximation of the distribution of training samples. More neurons point to regions with high training sample concentration and fewer where the samples are scarce.

SOM may be considered a nonlinear generalization of Principal components analysis (PCA).[17] It has been shown, using both artificial and real geophysical data, that SOM has many advantages[18][19] over the conventional feature extraction methods such as Empirical Orthogonal Functions (EOF) or PCA.

Originally, SOM was not formulated as a solution to an optimisation problem. Nevertheless, there have been several attempts to modify the definition of SOM and to formulate an optimisation problem which gives similar results.[20] For example, Elastic maps use the mechanical metaphor of elasticity to approximate principal manifolds:[21] the analogy is an elastic membrane and plate.

Alternatives

  • The generative topographic map (GTM) is a potential alternative to SOMs. In the sense that a GTM explicitly requires a smooth and continuous mapping from the input space to the map space, it is topology preserving. However, in a practical sense, this measure of topological preservation is lacking.[22]
  • The time adaptive self-organizing map (TASOM) network is an extension of the basic SOM. The TASOM employs adaptive learning rates and neighborhood functions. It also includes a scaling parameter to make the network invariant to scaling, translation and rotation of the input space. The TASOM and its variants have been used in several applications including adaptive clustering, multilevel thresholding, input space approximation, and active contour modeling.[23] Moreover, a Binary Tree TASOM or BTASOM, resembling a binary natural tree having nodes composed of TASOM networks has been proposed where the number of its levels and the number of its nodes are adaptive with its environment.[24]
  • The growing self-organizing map (GSOM) is a growing variant of the self-organizing map. The GSOM was developed to address the issue of identifying a suitable map size in the SOM. It starts with a minimal number of nodes (usually four) and grows new nodes on the boundary based on a heuristic. By using a value called the spread factor, the data analyst has the ability to control the growth of the GSOM.
  • The elastic maps approach[25] borrows from the spline interpolation the idea of minimization of the elastic energy. In learning, it minimizes the sum of quadratic bending and stretching energy with the least squares approximation error.
  • The conformal approach[26][27] uses conformal mapping to interpolate each training sample between grid nodes in a continuous surface. A one-to-one smooth mapping is possible in this approach.


Artificial neuron

From Wikipedia, the free encyclopedia
An artificial neuron is a mathematical function conceived as a model of biological neurons. Artificial neurons are the elementary units in an artificial neural network. The artificial neuron receives one or more inputs (representing excitatory postsynaptic potentials and inhibitory postsynaptic potentials at neural dendrites) and sums them to produce an output (or activation, representing a neuron's action potential which is transmitted along its axon). Usually each input is separately weighted, and the sum is passed through a non-linear function known as an activation function or transfer function. The transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable and bounded. The thresholding function has inspired the building of logic gates referred to as threshold logic, applicable to building logic circuits resembling brain processing. For example, new devices such as memristors have been extensively used to develop such logic in recent times.

The artificial neuron transfer function should not be confused with a linear system's transfer function.

Basic structure

For a given artificial neuron, let there be m + 1 inputs with signals x0 through xm and weights w0 through wm. Usually, the x0 input is assigned the value +1, which makes it a bias input with wk0 = bk. This leaves only m actual inputs to the neuron: from x1 to xm.

The output of the kth neuron is:
y_k = φ( Σ_{j=0}^{m} w_{kj} x_j ),
where φ (phi) is the transfer function.
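As a small sketch of this computation (the function name and the logistic default for φ are illustrative assumptions):

import math

def neuron_output(weights, inputs, phi=lambda u: 1.0 / (1.0 + math.exp(-u))):
    """Output of the k-th neuron: y_k = phi(sum over j of w_kj * x_j).
    Prepend x0 = 1 to inputs (and the bias b_k to weights) to include the bias input."""
    u = sum(w * x for w, x in zip(weights, inputs))
    return phi(u)

# Example: a neuron with bias -0.5 and two weighted inputs.
y = neuron_output([-0.5, 0.8, 0.3], [1.0, 0.6, 0.9])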


The output is analogous to the axon of a biological neuron, and its value propagates to the input of the next layer, through a synapse. It may also exit the system, possibly as part of an output vector.

It has no learning process as such. Its transfer function weights are calculated and its threshold value is predetermined.

Types

Depending on the specific model used they may be called a semi-linear unit, Nv neuron, binary neuron, linear threshold function, or McCulloch–Pitts (MCP) neuron.
Simple artificial neurons, such as the McCulloch–Pitts model, are sometimes described as "caricature models", since they are intended to reflect one or more neurophysiological observations, but without regard to realism.[2]

Biological models

Artificial neurons are designed to mimic aspects of their biological counterparts.
  • Dendrites – In a biological neuron, the dendrites act as the input vector. These dendrites allow the cell to receive signals from a large (>1000) number of neighboring neurons. As in the above mathematical treatment, each dendrite is able to perform "multiplication" by that dendrite's "weight value." The multiplication is accomplished by increasing or decreasing the ratio of synaptic neurotransmitters to signal chemicals introduced into the dendrite in response to the synaptic neurotransmitter. A negative multiplication effect can be achieved by transmitting signal inhibitors (i.e. oppositely charged ions) along the dendrite in response to the reception of synaptic neurotransmitters.
  • Soma – In a biological neuron, the soma acts as the summation function, seen in the above mathematical description. As positive and negative signals (exciting and inhibiting, respectively) arrive in the soma from the dendrites, the positive and negative ions are effectively added in summation, by simple virtue of being mixed together in the solution inside the cell's body.
  • Axon – The axon gets its signal from the summation behavior which occurs inside the soma. The opening to the axon essentially samples the electrical potential of the solution inside the soma. Once the soma reaches a certain potential, the axon will transmit an all-or-nothing signal pulse down its length. In this regard, the axon provides the means to connect our artificial neuron to other artificial neurons.
Unlike most artificial neurons, however, biological neurons fire in discrete pulses. Each time the electrical potential inside the soma reaches a certain threshold, a pulse is transmitted down the axon. This pulsing can be translated into continuous values. The rate (activations per second, etc.) at which an axon fires converts directly into the rate at which neighboring cells get signal ions introduced into them. The faster a biological neuron fires, the faster nearby neurons accumulate electrical potential (or lose electrical potential, depending on the "weighting" of the dendrite that connects to the neuron that fired). It is this conversion that allows computer scientists and mathematicians to simulate biological neural networks using artificial neurons which can output distinct values (often from −1 to 1).

Encoding

Research has shown that unary coding is used in the neural circuits responsible for birdsong production.[3][4] The use of unary in biological networks is presumably due to the inherent simplicity of the coding. Another contributing factor could be that unary coding provides a certain degree of error correction.[5]

History

The first artificial neuron was the Threshold Logic Unit (TLU), or Linear Threshold Unit,[6] first proposed by Warren McCulloch and Walter Pitts in 1943. The model was specifically targeted as a computational model of the "nerve net" in the brain.[7] As a transfer function, it employed a threshold, equivalent to using the Heaviside step function. Initially, only a simple model was considered, with binary inputs and outputs, some restrictions on the possible weights, and a more flexible threshold value. From the beginning it was noticed that any boolean function could be implemented by networks of such devices, which is easily seen from the fact that one can implement the AND and OR functions and use them in the disjunctive or the conjunctive normal form. Researchers also soon realized that cyclic networks, with feedback through neurons, could define dynamical systems with memory, but most of the research concentrated (and still does) on strictly feed-forward networks because they present a smaller difficulty.

One important and pioneering artificial neural network that used the linear threshold function was the perceptron, developed by Frank Rosenblatt. This model already considered more flexible weight values in the neurons, and was used in machines with adaptive capabilities. The representation of the threshold values as a bias term was introduced by Bernard Widrow in 1960 – see ADALINE.

In the late 1980s, when research on neural networks regained strength, neurons with more continuous shapes started to be considered. The possibility of differentiating the activation function allows the direct use of gradient descent and other optimization algorithms for the adjustment of the weights. Neural networks also started to be used as a general function approximation model. The best known training algorithm, called backpropagation, has been rediscovered several times, but its first development goes back to the work of Paul Werbos.[8][9]

Types of transfer functions

The transfer function of a neuron is chosen to have a number of properties which either enhance or simplify the network containing the neuron. Crucially, for instance, any multilayer perceptron using a linear transfer function has an equivalent single-layer network; a non-linear function is therefore necessary to gain the advantages of a multi-layer network.
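The reason can be written in one line: stacking two linear layers with weight matrices W₁, W₂ and biases b₁, b₂ yields

y = W₂(W₁x + b₁) + b₂ = (W₂W₁)x + (W₂b₁ + b₂),

which is again a single linear (affine) layer, so depth adds no representational power without a non-linearity.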

Below, u refers in all cases to the weighted sum of all the inputs to the neuron, i.e. for n inputs,
u = Σ_{i=1}^{n} w_i x_i
where w is a vector of synaptic weights and x is a vector of inputs.

Step function

The output y of this transfer function is binary, depending on whether the input meets a specified threshold, θ. The "signal" is sent, i.e. the output is set to one, if the activation meets the threshold.
y = 1 if u ≥ θ, and 0 if u < θ
This function is used in perceptrons and often shows up in many other models. It performs a division of the space of inputs by a hyperplane. It is especially useful in the last layer of a network intended to perform binary classification of the inputs. It can be approximated from other sigmoidal functions by assigning large values to the weights.

Linear combination

In this case, the output unit is simply the weighted sum of its inputs plus a bias term. A number of such linear neurons perform a linear transformation of the input vector. This is usually more useful in the first layers of a network. A number of analysis tools exist based on linear models, such as harmonic analysis, and they can all be used in neural networks with this linear neuron. The bias term allows us to make affine transformations to the data.

Sigmoid

A fairly simple non-linear function, the sigmoid function such as the logistic function also has an easily calculated derivative, which can be important when calculating the weight updates in the network. It thus makes the network more easily manipulable mathematically, and was attractive to early computer scientists who needed to minimize the computational load of their simulations. It was previously commonly seen in multilayer perceptrons. However, recent work has shown sigmoid neurons to be less effective than rectified linear neurons. The reason is that the gradients computed by the backpropagation algorithm tend to diminish towards zero as activations propagate through layers of sigmoidal neurons, making it difficult to optimize neural networks using multiple layers of sigmoidal neurons.
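The three transfer functions discussed in this section can be written compactly as follows; this is an illustrative sketch, with the threshold and bias parameters as assumptions.

import math

def step(u, theta=0.0):
    """Heaviside step: output 1 when the weighted sum u reaches the threshold theta."""
    return 1 if u >= theta else 0

def linear(u, bias=0.0):
    """Linear combination: the weighted sum plus a bias term."""
    return u + bias

def sigmoid(u):
    """Logistic sigmoid; its derivative, sigmoid(u) * (1 - sigmoid(u)), is cheap to evaluate."""
    return 1.0 / (1.0 + math.exp(-u))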

Pseudocode algorithm

The following is a simple pseudocode implementation of a single TLU which takes boolean inputs (true or false), and returns a single boolean output when activated. An object-oriented model is used. No method of training is defined, since several exist. If a purely functional model were used, the class TLU below would be replaced with a function TLU with input parameters threshold, weights, and inputs that returned a boolean value.

 class TLU defined as:
  data member threshold : number
  data member weights : list of numbers of size X
  function member fire( inputs : list of booleans of size X ) : boolean defined as:
   variable T : number
    T ← 0
   for each i in 1 to X :
    if inputs(i) is true :
      T ← T + weights(i)
    end if
   end for each
   if T > threshold :
    return true
   else:
    return false
   end if
  end function
 end class
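For comparison, a direct Python rendering of the pseudocode above (names mirror the pseudocode; this is a sketch, and like the original it defines no training method):

class TLU:
    """Threshold logic unit over boolean inputs."""

    def __init__(self, threshold, weights):
        self.threshold = threshold   # firing threshold
        self.weights = weights       # one numeric weight per boolean input

    def fire(self, inputs):
        """Sum the weights of the inputs that are True and compare to the threshold."""
        t = sum(w for w, x in zip(self.weights, inputs) if x)
        return t > self.threshold

# Example: with threshold 1.5 and unit weights, this unit computes logical AND.
and_gate = TLU(1.5, [1.0, 1.0])
assert and_gate.fire([True, True]) and not and_gate.fire([True, False])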

Spindle neuron

From Wikipedia, the free encyclopedia

Spindle neuron
Cartoon of a spindle cell (right) compared to a normal pyramidal cell (left).
Details
Location: Anterior cingulate cortex and fronto-insular cortex
Shape: Unique spindle-shaped projection neuron
Function: Global firing rate regulation and regulation of emotional state
Presynaptic connections: Local input to ACC and FI
Postsynaptic connections: Frontal and temporal cortex
Micrograph showing a spindle neuron of the cingulate. HE-LFB stain.

Spindle neurons, also called von Economo neurons (VENs), are a specific class of neurons that are characterized by a large spindle-shaped soma (or body), gradually tapering into a single apical axon in one direction, with only a single dendrite facing opposite. Other neurons tend to have many dendrites, and the polar-shaped morphology of spindle neurons is unique. A neuron's dendrites receive signals, and its axon sends them.

Spindle neurons are found in two very restricted regions in the brains of hominids—the family of species comprising humans and other great apes—the anterior cingulate cortex (ACC) and the fronto-insular cortex (FI). Recently they have been discovered in the dorsolateral prefrontal cortex of humans.[1] Spindle cells are also found in the brains of humpback whales, fin whales, killer whales, sperm whales,[2][3] bottlenose dolphins, Risso's dolphins, beluga whales,[4] African and Asian elephants,[5] and to a lesser extent in macaque monkeys[6] and raccoons.[7] The appearance of spindle neurons in distantly related clades suggests that they represent convergent evolution, specifically an adaptation to larger brains.

Austrian psychiatrist and neurologist Constantin von Economo (1876–1931) discovered spindle neurons and described them in 1929, which is why they are sometimes called von Economo neurons.[8]

Function

Spindle neurons are relatively large cells that may allow rapid communication across the relatively large brains of great apes, elephants, and cetaceans. Although rare in comparison to other neurons, spindle neurons are abundant, and large, in humans. However, the concentration of spindle cells has been measured to be three times higher in cetaceans in comparison to humans.[3][9] They have only been found thus far in the anterior cingulate cortex (ACC), fronto-insular cortex (FI), and the dorsolateral prefrontal cortex.

Evolutionary significance

The observation that spindle neurons only occur in a highly significant group of animals (from a human point of view) has led to speculation that they are of great importance in human evolution and/or brain function. Their restriction (among the primates) to great apes leads to the hypothesis that they developed no earlier than 15–20 million years ago, prior to the divergence of orangutans from the African great apes. The discovery of spindle neurons in diverse whale species[3][4] has led to the suggestion that they are "a possible obligatory neuronal adaptation in very large brains, permitting fast information processing and transfer along highly specific projections and that evolved in relation to emerging social behaviors."[4]p. 254 Their presence in the brains of these species supports this theory, pointing towards the existence of these specialized neurons only in highly intelligent mammals, and may be an example of convergent evolution.[10] Recently, primitive forms of spindle neurons have also been discovered in macaque monkey brains[11] and raccoons.[7]

ACC spindle neurons

In 1999, Professor John Allman, a neuroscientist, and colleagues at the California Institute of Technology first published a report[12] on spindle neurons found in the anterior cingulate cortex (ACC) of hominids, but not in any other species. Neuronal volumes of ACC spindle neurons were larger in humans and bonobos (Pan paniscus) than the spindle neurons of the common chimpanzee, gorilla, and orangutan.

Allman and his colleagues[13] have delved beyond the level of brain infrastructure to investigate how spindle neurons function at the superstructural level, focusing on their role as 'air traffic controllers' for emotions. Allman's team proposes that spindle neurons help channel neural signals from deep within the cortex to relatively distant parts of the brain.

Specifically, Allman's team[14] found signals from the ACC are received in Brodmann's area 10, in the frontal polar cortex, where regulation of cognitive dissonance (disambiguation between alternatives) is thought to occur. According to Allman, this neural relay appears to convey motivation to act, and concerns the recognition of error. Self-control – and avoidance of error – is thus facilitated by the executive gatekeeping function of the ACC, as it regulates the interference patterns of neural signals between these two brain regions.

In humans, intense emotion activates the anterior cingulate cortex, as it relays neural signals transmitted from the amygdala (a primary processing center for emotions) to the frontal cortex, perhaps by functioning as a sort of lens to focus the complex texture of neural signal interference patterns[citation needed]. The ACC is also active during demanding tasks requiring judgment and discrimination, and when errors are detected by an individual. During difficult tasks, or when experiencing intense love, anger, or lust, activation of the ACC increases. In brain imaging studies, the ACC has specifically been found to be active when mothers hear infants cry, underscoring its role in affording a heightened degree of social sensitivity.

The ACC is a relatively ancient cortical region, and is involved with many autonomic functions, including motor and digestive functions, while also playing a role in the regulation of blood pressure and heart rate. Significant olfactory and gustatory capabilities of the ACC and fronto-insular cortex appear to have been usurped, during recent evolution, to serve enhanced roles related to higher cognition – ranging from planning and self-awareness to role playing and deception. The diminished olfactory function of humans, compared to other primates, may be related to the fact that spindle cells located at crucial neural network hubs have only two dendrites rather than many, resulting in reduced neurological integration.

Fronto-insular spindle neurons

At a Society for Neuroscience meeting in 2003, Allman reported on spindle cells his team found in another brain region, the fronto-insular cortex, a region which appears to have undergone significant evolutionary adaptations in mankind – perhaps as recently as 100,000 years ago.

This fronto-insular cortex is closely connected to the insula, a region that is roughly the size of a thumb in each hemisphere of the human brain. The insula and fronto-insular cortex are part of the insular cortex, wherein the elaborate circuitry associated with spatial awareness are found, and where self-awareness and the complexities of emotion are thought to be generated and experienced. Moreover, this region of the right hemisphere is crucial to navigation and perception of three-dimensional rotations.

Spindle neuron concentrations

ACC

The largest number of ACC spindle neurons are found in humans, fewer in the gracile great apes, and fewest in the robust great apes. In both humans and bonobos they are often found in clusters of 3 to 6 neurons. They are found in humans, bonobos, common chimpanzees, gorillas, orangutans, some cetaceans, and elephants.[15]:245 While total quantities of ACC spindle neurons were not reported by Allman in his seminal research report (as they were in a later report describing their presence in the frontoinsular cortex, below), his team's initial analysis of the ACC layer V in hominids revealed an average of ~9 spindle neurons per section for orangutans (rare, 0.6% of section cells), ~22 for gorillas (frequent, 2.3%), ~37 for chimpanzees (abundant, 3.8%), ~68 for bonobos (abundant/clusters, 4.8%), ~89 for humans (abundant/clusters, 5.6%).[16]

Fronto-insula

All of the primates examined had more spindle cells in the fronto-insula of the right hemisphere than in the left. In contrast to the higher number of spindle cells found in the ACC of the gracile bonobos and chimpanzees, the number of fronto-insular spindle cells was far higher in the cortex of robust gorillas (no data for Orangutans was given). An adult human had 82,855 such cells, a gorilla had 16,710, a bonobo had 2,159, and a chimpanzee had a mere 1,808 – despite the fact that chimpanzees and bonobos are great apes most closely related to humans.

Dorsolateral PFC

Von Economo neurons have been located in the dorsolateral prefrontal cortex of humans[1] and elephants.[5] In humans they have been observed in higher concentration in Brodmann area 9 (BA9) – mostly isolated or in clusters of 2 – while in Brodmann area 24 (BA24) they have been found mostly in clusters of 2–4.[1]

Clinical significance

Abnormal spindle neuron development may be linked to several psychotic disorders, typically those characterized by distortions of reality, disturbances of thought, disturbances of language, and withdrawal from social contact. Altered spindle neuron states have been implicated in both schizophrenia and autism, but research into these correlations remains at a very early stage. Frontotemporal dementia involves the loss of mostly spindle neurons.[17] An initial study suggested that Alzheimer's disease specifically targeted von Economo neurons; this study was performed with end-stage Alzheimer brains in which cell destruction was widespread, but it was later found that Alzheimer's disease does not affect the spindle neurons.

Neuron

This schematic shows an anatomically accurate single pyramidal neuron, the primary excitatory neuron of cerebral cortex, with a synaptic connection from an incoming axon onto a dendritic spine.

A neuron, also known as a neurone (British spelling) and nerve cell, is an electrically excitable cell that receives, processes, and transmits information through electrical and chemical signals. These signals between neurons occur via specialized connections called synapses. Neurons can connect to each other to form neural circuits. Neurons are the primary components of the central nervous system, which includes the brain and spinal cord, and of the peripheral nervous system, which comprises the autonomic nervous system and the somatic nervous system.

There are many types of specialized neurons. Sensory neurons respond to one particular type of stimulus, such as touch, sound, or light, or to other stimuli affecting the cells of the sensory organs, and convert it into an electrical signal via transduction, which is then sent to the spinal cord or brain. Motor neurons receive signals from the brain and spinal cord to cause everything from muscle contractions to glandular outputs. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord in neural networks.

A typical neuron consists of a cell body (soma), dendrites, and an axon. The term neurite is used to describe either a dendrite or an axon, particularly in its undifferentiated stage. Dendrites are thin structures that arise from the cell body, often extending for hundreds of micrometers and branching multiple times, giving rise to a complex "dendritic tree". An axon (also called a nerve fiber) is a special cellular extension (process) that arises from the cell body at a site called the axon hillock and travels for a distance, as far as 1 meter in humans or even more in other species. Most neurons receive signals via the dendrites and send out signals down the axon. Numerous axons are often bundled into fascicles that make up the nerves in the peripheral nervous system (like strands of wire make up cables). Bundles of axons in the central nervous system are called tracts. The cell body of a neuron frequently gives rise to multiple dendrites, but never to more than one axon, although the axon may branch hundreds of times before it terminates. At the majority of synapses, signals are sent from the axon of one neuron to a dendrite of another. There are, however, many exceptions to these rules: for example, neurons can lack dendrites, or have no axon, and synapses can connect an axon to another axon or a dendrite to another dendrite.

All neurons are electrically excitable, due to maintenance of voltage gradients across their membranes by means of metabolically driven ion pumps, which combine with ion channels embedded in the membrane to generate intracellular-versus-extracellular concentration differences of ions such as sodium, potassium, chloride, and calcium. Changes in the cross-membrane voltage can alter the function of voltage-dependent ion channels. If the voltage changes by a large enough amount, an all-or-none electrochemical pulse called an action potential is generated and this change in cross-membrane potential travels rapidly along the cell's axon, and activates synaptic connections with other cells when it arrives.

In most cases, neurons are generated by special types of stem cells during brain development and childhood. Neurons in the adult brain generally do not undergo cell division. Astrocytes are star-shaped glial cells that have also been observed to turn into neurons by virtue of the stem cell characteristic pluripotency. Neurogenesis largely ceases during adulthood in most areas of the brain. However, there is strong evidence for generation of substantial numbers of new neurons in two brain areas, the hippocampus and olfactory bulb.[1][2]

Overview

Structure of a typical neuron (peripheral nervous system).
Drawing of neurons in the pigeon cerebellum, by Spanish neuroscientist Santiago Ramón y Cajal in 1899. (A) denotes Purkinje cells and (B) denotes granule cells, both of which are multipolar.

A neuron is a specialized type of cell found in the bodies of all eumetazoans. Only sponges and a few other simpler animals lack neurons. The features that define a neuron are electrical excitability[3] and the presence of synapses, which are complex membrane junctions that transmit signals to other cells. The body's neurons, plus the glial cells that give them structural and metabolic support, together constitute the nervous system. In vertebrates, the majority of neurons belong to the central nervous system, but some reside in peripheral ganglia, and many sensory neurons are situated in sensory organs such as the retina and cochlea.

A typical neuron is divided into three parts: the soma or cell body, dendrites, and axon. The soma is usually compact; the axon and dendrites are filaments that extrude from it. Dendrites typically branch profusely, getting thinner with each branching, and extending their farthest branches a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock, and can extend for great distances, giving rise to hundreds of branches. Unlike dendrites, an axon usually maintains the same diameter as it extends. The soma may give rise to numerous dendrites, but never to more than one axon. Synaptic signals from other neurons are received by the soma and dendrites; signals to other neurons are transmitted by the axon. A typical synapse, then, is a contact between the axon of one neuron and a dendrite or soma of another. Synaptic signals may be excitatory or inhibitory. If the net excitation received by a neuron over a short period of time is large enough, the neuron generates a brief pulse called an action potential, which originates at the soma and propagates rapidly along the axon, activating synapses onto other neurons as it goes.

Many neurons fit the foregoing schema in every respect, but there are also exceptions to most parts of it. There are no neurons that lack a soma, but there are neurons that lack dendrites, and others that lack an axon. Furthermore, in addition to the typical axodendritic and axosomatic synapses, there are axoaxonic (axon-to-axon) and dendrodendritic (dendrite-to-dendrite) synapses.

The key to neural function is the synaptic signaling process, which is partly electrical and partly chemical. The electrical aspect depends on properties of the neuron's membrane. Like all animal cells, the cell body of every neuron is enclosed by a plasma membrane, a bilayer of lipid molecules with many types of protein structures embedded in it. A lipid bilayer is a powerful electrical insulator, but in neurons, many of the protein structures embedded in the membrane are electrically active. These include ion channels that permit electrically charged ions to flow across the membrane and ion pumps that actively transport ions from one side of the membrane to the other. Most ion channels are permeable only to specific types of ions. Some ion channels are voltage gated, meaning that they can be switched between open and closed states by altering the voltage difference across the membrane. Others are chemically gated, meaning that they can be switched between open and closed states by interactions with chemicals that diffuse through the extracellular fluid. The interactions between ion channels and ion pumps produce a voltage difference across the membrane, typically a bit less than 1/10 of a volt at baseline. This voltage has two functions: first, it provides a power source for an assortment of voltage-dependent protein machinery that is embedded in the membrane; second, it provides a basis for electrical signal transmission between different parts of the membrane.

Neurons communicate by chemical and electrical synapses in a process known as neurotransmission, also called synaptic transmission. The fundamental process that triggers the release of neurotransmitters is the action potential, a propagating electrical signal that is generated by exploiting the electrically excitable membrane of the neuron. This is also known as a wave of depolarization.

Anatomy and histology

Diagram of a typical myelinated vertebrate motor neuron

Neurons are highly specialized for the processing and transmission of cellular signals. Given their diversity of functions performed in different parts of the nervous system, there is a wide variety in their shape, size, and electrochemical properties. For instance, the soma of a neuron can vary from 4 to 100 micrometers in diameter.[4]
  • The soma is the body of the neuron. As it contains the nucleus, most protein synthesis occurs here. The nucleus can range from 3 to 18 micrometers in diameter.[5]
  • The dendrites of a neuron are cellular extensions with many branches. This overall shape and structure is referred to metaphorically as a dendritic tree. This is where the majority of input to the neuron occurs via the dendritic spine.
  • The axon is a finer, cable-like projection that can extend tens, hundreds, or even tens of thousands of times the diameter of the soma in length. The axon carries nerve signals away from the soma (and also carries some types of information back to it). Many neurons have only one axon, but this axon may—and usually will—undergo extensive branching, enabling communication with many target cells. The part of the axon where it emerges from the soma is called the axon hillock. Besides being an anatomical structure, the axon hillock is also the part of the neuron that has the greatest density of voltage-dependent sodium channels. This makes it the most easily excited part of the neuron and the spike initiation zone for the axon: in electrophysiological terms, it has the most negative action potential threshold. While the axon and axon hillock are generally involved in information outflow, this region can also receive input from other neurons.
  • The axon terminal contains synapses, specialized structures where neurotransmitter chemicals are released to communicate with target neurons.
The accepted view of the neuron attributes dedicated functions to its various anatomical components; however, dendrites and axons often act in ways contrary to their so-called main function.

Axons and dendrites in the central nervous system are typically only about one micrometer thick, while some in the peripheral nervous system are much thicker. The soma is usually about 10–25 micrometers in diameter and often is not much larger than the cell nucleus it contains. The longest axon of a human motor neuron can be over a meter long, reaching from the base of the spine to the toes.

Sensory neurons can have axons that run from the toes to the posterior column of the spinal cord, over 1.5 meters in adults. Giraffes have single axons several meters in length running along the entire length of their necks. Much of what is known about axonal function comes from studying the squid giant axon, an ideal experimental preparation because of its relatively immense size (0.5–1 millimeters thick, several centimeters long).

Fully differentiated neurons are permanently postmitotic;[6] however, research starting around 2002 shows that additional neurons throughout the brain can originate from neural stem cells through the process of neurogenesis. These are found throughout the brain, but are particularly concentrated in the subventricular zone and subgranular zone.[7]

Histology and internal structure

Golgi-stained neurons in human hippocampal tissue
 
Actin filaments in a mouse Cortical Neuron in culture

Numerous microscopic clumps called Nissl substance (or Nissl bodies) are seen when nerve cell bodies are stained with a basophilic ("base-loving") dye. These structures consist of rough endoplasmic reticulum and associated ribosomal RNA. Named after German psychiatrist and neuropathologist Franz Nissl (1860–1919), they are involved in protein synthesis and their prominence can be explained by the fact that nerve cells are very metabolically active. Basophilic dyes such as aniline or (weakly) haematoxylin [8] highlight negatively charged components, and so bind to the phosphate backbone of the ribosomal RNA.

The cell body of a neuron is supported by a complex mesh of structural proteins called neurofilaments, which are assembled into larger neurofibrils. Some neurons also contain pigment granules, such as neuromelanin (a brownish-black pigment that is a byproduct of the synthesis of catecholamines) and lipofuscin (a yellowish-brown pigment), both of which accumulate with age. Other structural proteins that are important for neuronal function are actin and the tubulin of microtubules. Actin is predominantly found at the tips of axons and dendrites during neuronal development. There the actin dynamics can be modulated via an interplay with microtubules.[12]

There are different internal structural characteristics between axons and dendrites. Typical axons almost never contain ribosomes, except some in the initial segment. Dendrites contain granular endoplasmic reticulum or ribosomes, in diminishing amounts as the distance from the cell body increases.

Classification

Image of pyramidal neurons in mouse cerebral cortex expressing green fluorescent protein. The red staining indicates GABAergic interneurons.[13]
 
SMI32-stained pyramidal neurons in cerebral cortex

Neurons exist in a number of different shapes and sizes and can be classified by their morphology and function.[14] The anatomist Camillo Golgi grouped neurons into two types; type I with long axons used to move signals over long distances and type II with short axons, which can often be confused with dendrites. Type I cells can be further divided by where the cell body or soma is located. The basic morphology of type I neurons, represented by spinal motor neurons, consists of a cell body called the soma and a long thin axon covered by the myelin sheath. Around the cell body is a branching dendritic tree that receives signals from other neurons. The end of the axon has branching terminals (axon terminal) that release neurotransmitters into a gap called the synaptic cleft between the terminals and the dendrites of the next neuron.

Structural classification

Polarity


Most neurons can be anatomically characterized as:
  • Unipolar: only 1 process
  • Bipolar: 1 axon and 1 dendrite
  • Multipolar: 1 axon and 2 or more dendrites
    • Golgi I: neurons with long-projecting axonal processes; examples are pyramidal cells, Purkinje cells, and anterior horn cells.
    • Golgi II: neurons whose axonal process projects locally; the best example is the granule cell.
  • Anaxonic: where the axon cannot be distinguished from the dendrite(s).
  • Pseudounipolar: 1 process which then serves as both an axon and a dendrite

Other

Furthermore, some unique neuronal types can be identified according to their location in the nervous system and distinct shape. Some examples are:

Functional classification

Direction

Afferent and efferent refer generally to neurons that, respectively, bring information to or send information from the brain.

Action on other neurons

A neuron affects other neurons by releasing a neurotransmitter that binds to chemical receptors. The effect upon the postsynaptic neuron is determined not by the presynaptic neuron or by the neurotransmitter, but by the type of receptor that is activated. A neurotransmitter can be thought of as a key, and a receptor as a lock: the same type of key can be used to open many different types of locks. Receptors can be classified broadly as excitatory (causing an increase in firing rate), inhibitory (causing a decrease in firing rate), or modulatory (causing long-lasting effects not directly related to firing rate).

The two most common neurotransmitters in the brain, glutamate and GABA, have actions that are largely consistent. Glutamate acts on several different types of receptors, and has effects that are excitatory at ionotropic receptors and modulatory at metabotropic receptors. Similarly, GABA acts on several different types of receptors, but all of them have effects (in adult animals, at least) that are inhibitory. Because of this consistency, it is common for neuroscientists to simplify the terminology by referring to cells that release glutamate as "excitatory neurons", and cells that release GABA as "inhibitory neurons". Since over 90% of the neurons in the brain release either glutamate or GABA, these labels encompass the great majority of neurons. There are also other types of neurons that have consistent effects on their targets, for example, "excitatory" motor neurons in the spinal cord that release acetylcholine, and "inhibitory" spinal neurons that release glycine.

The distinction between excitatory and inhibitory neurotransmitters is not absolute, however. Rather, it depends on the class of chemical receptors present on the postsynaptic neuron. In principle, a single neuron, releasing a single neurotransmitter, can have excitatory effects on some targets, inhibitory effects on others, and modulatory effects on others still. For example, photoreceptor cells in the retina constantly release the neurotransmitter glutamate in the absence of light. So-called OFF bipolar cells are, like most neurons, excited by the released glutamate. However, neighboring target neurons called ON bipolar cells are instead inhibited by glutamate, because they lack the typical ionotropic glutamate receptors and instead express a class of inhibitory metabotropic glutamate receptors.[15] When light is present, the photoreceptors cease releasing glutamate, which relieves the ON bipolar cells from inhibition, activating them; this simultaneously removes the excitation from the OFF bipolar cells, silencing them.

It is possible to identify the type of inhibitory effect a presynaptic neuron will have on a postsynaptic neuron, based on the proteins the presynaptic neuron expresses. Parvalbumin-expressing neurons typically dampen the output signal of the postsynaptic neuron in the visual cortex, whereas somatostatin-expressing neurons typically block dendritic inputs to the postsynaptic neuron.[16]

Discharge patterns

Neurons have intrinsic electroresponsive properties like intrinsic transmembrane voltage oscillatory patterns.[17] So neurons can be classified according to their electrophysiological characteristics:
  • Tonic or regular spiking. Some neurons are typically constantly (or tonically) active. Example: interneurons in neurostriatum.
  • Phasic or bursting. Neurons that fire in bursts are called phasic.
  • Fast spiking. Some neurons are notable for their high firing rates, for example some types of cortical inhibitory interneurons, cells in globus pallidus, retinal ganglion cells.[18][19]

Classification by neurotransmitter production

  • Cholinergic neurons—acetylcholine. Acetylcholine is released from presynaptic neurons into the synaptic cleft. It acts as a ligand for both ligand-gated ion channels and metabotropic (GPCR) muscarinic receptors. Nicotinic receptors are pentameric ligand-gated ion channels composed of alpha and beta subunits that bind nicotine. Ligand binding opens the channel, causing Na+ influx and depolarization, and increases the probability of presynaptic neurotransmitter release. Acetylcholine is synthesized from choline and acetyl coenzyme A.
  • GABAergic neurons—gamma aminobutyric acid. GABA is one of the two main inhibitory neurotransmitters in the central nervous system (CNS), the other being glycine. GABA has a function homologous to that of ACh, gating anion channels that allow Cl− ions to enter the postsynaptic neuron. Cl− causes hyperpolarization within the neuron, decreasing the probability of an action potential firing as the voltage becomes more negative (recall that for an action potential to fire, a positive voltage threshold must be reached). GABA is synthesized from glutamate by the enzyme glutamate decarboxylase.
  • Glutamatergic neurons—glutamate. Glutamate is one of two primary excitatory amino acid neurotransmitters, the other being aspartate. Glutamate receptors fall into one of four categories, three of which are ligand-gated ion channels and one of which is a G-protein coupled receptor (often referred to as a GPCR):
  1. AMPA and kainate receptors both function as cation channels permeable to Na+, mediating fast excitatory synaptic transmission.
  2. NMDA receptors are another type of cation channel, one that is more permeable to Ca2+. The function of NMDA receptors depends on glycine binding as a co-agonist within the channel pore; NMDA receptors do not function unless both ligands are present.
  3. Metabotropic receptors (GPCRs) modulate synaptic transmission and postsynaptic excitability.
Glutamate can cause excitotoxicity when blood flow to the brain is interrupted, resulting in brain damage. When blood flow is suppressed, glutamate is released from presynaptic neurons, causing greater NMDA and AMPA receptor activation than under normal conditions, so that elevated levels of Ca2+ and Na+ enter the postsynaptic neuron and damage the cell. Glutamate is synthesized from the amino acid glutamine by the enzyme glutaminase.
  • Dopaminergic neurons—dopamine. Dopamine is a neurotransmitter that acts on D1-type (D1 and D5) Gs-coupled receptors, which increase cAMP and PKA activity, and on D2-type (D2, D3, and D4) Gi-coupled receptors, which decrease cAMP and PKA activity. Dopamine is connected to mood and behavior and modulates both presynaptic and postsynaptic neurotransmission. Loss of dopaminergic neurons in the substantia nigra has been linked to Parkinson's disease. Dopamine is synthesized from the amino acid tyrosine: tyrosine is converted into levodopa (L-DOPA) by tyrosine hydroxylase, and levodopa is then converted into dopamine by aromatic amino acid decarboxylase.
  • Serotonergic neurons—serotonin. Serotonin (5-hydroxytryptamine, 5-HT) can act as either an excitatory or an inhibitory transmitter. Of its seven 5-HT receptor classes, six are GPCRs and one is a ligand-gated cation channel. Serotonin is synthesized from tryptophan by tryptophan hydroxylase and then converted further by aromatic amino acid decarboxylase. A lack of 5-HT at postsynaptic neurons has been linked to depression. Drugs that block the presynaptic serotonin transporter, such as Prozac and Zoloft, are used for treatment.

Connectivity

A signal propagating down an axon to the cell body and dendrites of the next cell
 
Chemical synapse

Neurons communicate with one another via synapses, where the axon terminal or en passant bouton (a type of terminal located along the length of the axon) of one cell contacts another neuron's dendrite, soma or, less commonly, axon. Neurons such as Purkinje cells in the cerebellum can have over 1000 dendritic branches, making connections with tens of thousands of other cells; other neurons, such as the magnocellular neurons of the supraoptic nucleus, have only one or two dendrites, each of which receives thousands of synapses. Synapses can be excitatory or inhibitory and either increase or decrease activity in the target neuron, respectively. Some neurons also communicate via electrical synapses, which are direct, electrically conductive junctions between cells.

In a chemical synapse, the process of synaptic transmission is as follows: when an action potential reaches the axon terminal, it opens voltage-gated calcium channels, allowing calcium ions to enter the terminal. Calcium causes synaptic vesicles filled with neurotransmitter molecules to fuse with the membrane, releasing their contents into the synaptic cleft. The neurotransmitters diffuse across the synaptic cleft and activate receptors on the postsynaptic neuron. High cytosolic calcium in the axon terminal also triggers mitochondrial calcium uptake, which, in turn, activates mitochondrial energy metabolism to produce ATP to support continuous neurotransmission.[20]

An autapse is a synapse in which a neuron's axon connects to its own dendrites.

The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5 × 10^14 synapses (100 to 500 trillion).[21]
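
As a quick sanity check on these figures, multiplying the neuron count by the average number of connections per neuron gives the same order of magnitude as the quoted adult estimates. The short Python sketch below simply restates that arithmetic; the inputs are the rounded figures from the paragraph above, not new measurements.

    # Back-of-the-envelope check of the synapse estimates quoted above.
    # The inputs are the rounded figures from the text, not measured data.
    neurons = 1e11               # ~10^11 neurons in the adult human brain
    synapses_per_neuron = 7_000  # average synaptic connections per neuron

    total_synapses = neurons * synapses_per_neuron
    print(f"Estimated synapse count: {total_synapses:.1e}")
    # Prints ~7.0e+14, the same order of magnitude as the quoted
    # adult range of 10^14 to 5 x 10^14 synapses.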
 
An annotated diagram of the stages of an action potential propagating down an axon including the role of ion concentration and pump and channel proteins.

Mechanisms for propagating action potentials

In 1937, John Zachary Young suggested that the squid giant axon could be used to study neuronal electrical properties.[22] Being larger than, but similar in nature to, human neurons, squid cells were easier to study. By inserting electrodes into squid giant axons, researchers made accurate measurements of the membrane potential.

The cell membranes of the axon and soma contain voltage-gated ion channels that allow the neuron to generate and propagate an electrical signal (an action potential). These signals are generated and propagated by charge-carrying ions including sodium (Na+), potassium (K+), chloride (Cl−), and calcium (Ca2+).

There are several stimuli that can activate a neuron leading to electrical activity, including pressure, stretch, chemical transmitters, and changes of the electric potential across the cell membrane.[23] Stimuli cause specific ion-channels within the cell membrane to open, leading to a flow of ions through the cell membrane, changing the membrane potential.
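
A very rough computational abstraction of this threshold behavior is the leaky integrate-and-fire model: stimulus drive pushes the membrane potential toward a firing threshold, a spike is emitted when the threshold is crossed, and the potential is then reset. The Python sketch below illustrates the idea only; it omits the voltage-gated channel dynamics described here, and all parameter values are illustrative assumptions rather than measured constants.

    # Minimal leaky integrate-and-fire sketch (illustrative abstraction only;
    # real neurons rely on the voltage-gated channels described in the text).
    resting = -70.0    # mV, assumed resting potential
    threshold = -55.0  # mV, assumed firing threshold
    reset = -75.0      # mV, assumed post-spike reset
    tau = 10.0         # ms, assumed membrane time constant
    dt = 0.1           # ms, integration step

    v = resting
    spike_times = []
    for step in range(2000):                          # simulate 200 ms
        t = step * dt
        drive = 20.0 if 50.0 <= t < 150.0 else 0.0    # stimulus applied between 50 and 150 ms
        v += ((resting - v) + drive) / tau * dt       # leak toward rest plus stimulus drive
        if v >= threshold:                            # all-or-none: cross threshold, spike, reset
            spike_times.append(t)
            v = reset

    print(f"{len(spike_times)} spikes during the stimulus")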

Thin neurons and axons require less metabolic expense to produce and carry action potentials, but thicker axons convey impulses more rapidly. To minimize metabolic expense while maintaining rapid conduction, many neurons have insulating sheaths of myelin around their axons. The sheaths are formed by glial cells: oligodendrocytes in the central nervous system and Schwann cells in the peripheral nervous system. The sheath enables action potentials to travel faster than in unmyelinated axons of the same diameter, whilst using less energy. The myelin sheath in peripheral nerves normally runs along the axon in sections about 1 mm long, punctuated by unsheathed nodes of Ranvier, which contain a high density of voltage-gated ion channels. Multiple sclerosis is a neurological disorder that results from demyelination of axons in the central nervous system.

Some neurons do not generate action potentials, but instead generate a graded electrical signal, which in turn causes graded neurotransmitter release. Such nonspiking neurons tend to be sensory neurons or interneurons, because they cannot carry signals long distances.

Neural coding

Neural coding is concerned with how sensory and other information is represented in the brain by neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses, and the relationships amongst the electrical activities of the neurons within the ensemble.[24] It is thought that neurons can encode both digital and analog information.[25]
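
One simple hypothesis studied in neural coding is rate coding, in which information is carried by the firing rate rather than by individual spike timings. The toy Python snippet below estimates a firing rate from a list of spike times; the spike train and the observation window are invented purely for illustration.

    # Toy rate-coding example: estimate a firing rate from spike times (ms).
    # The spike train below is invented for illustration, not recorded data.
    spike_times_ms = [12.0, 30.5, 47.2, 61.8, 90.3, 104.9, 131.4]
    window_ms = 150.0

    rate_hz = len(spike_times_ms) / (window_ms / 1000.0)
    print(f"Estimated firing rate: {rate_hz:.1f} Hz over a {window_ms:.0f} ms window")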

All-or-none principle

The conduction of nerve impulses is an example of an all-or-none response. In other words, if a neuron responds at all, it must respond completely. Greater intensity of stimulation does not produce a stronger signal but can produce a higher frequency of firing. There are different types of receptor responses to stimuli: slowly adapting or tonic receptors respond to a steady stimulus and produce a steady rate of firing. These tonic receptors most often respond to increased stimulus intensity by increasing their firing frequency, usually as a power function of stimulus intensity plotted against impulses per second. This can be likened to an intrinsic property of light: to get greater intensity at a specific frequency (color) there must be more photons, as photons cannot become "stronger" for a specific frequency.
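
The power-function relationship mentioned above can be written as rate ≈ k · I^n for stimulus intensity I. The short Python sketch below illustrates the shape of such a curve; the constants k and n are arbitrary assumed values, not taken from any particular receptor.

    # Illustrative power-law relation between stimulus intensity and firing rate.
    # k and n are assumed constants chosen only for demonstration.
    k = 12.0   # impulses per second at unit intensity (assumed)
    n = 0.6    # exponent (assumed; real receptors vary)

    def firing_rate(intensity):
        """Tonic-receptor firing rate (impulses/s) as a power function of intensity."""
        return k * intensity ** n

    for intensity in (1, 2, 4, 8, 16):
        print(f"intensity {intensity:>2} -> {firing_rate(intensity):5.1f} impulses/s")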

There are a number of other receptor types, called quickly adapting or phasic receptors, in which firing decreases or stops with a steady stimulus. For example, the neurons of the skin fire when an object first touches it, but if the object maintains even pressure against the skin, the neurons stop firing. The neurons of the skin and muscles that are responsive to pressure and vibration have filtering accessory structures that aid their function.

The pacinian corpuscle is one such structure. It has concentric layers like an onion, which form around the axon terminal. When pressure is applied and the corpuscle is deformed, mechanical stimulus is transferred to the axon, which fires. If the pressure is steady, there is no more stimulus; thus, typically these neurons respond with a transient depolarization during the initial deformation and again when the pressure is removed, which causes the corpuscle to change shape again. Other types of adaptation are important in extending the function of a number of other neurons.[26]

History

Drawing by Camillo Golgi of a hippocampus stained using the silver nitrate method
 
Drawing of a Purkinje cell in the cerebellar cortex done by Santiago Ramón y Cajal, demonstrating the ability of Golgi's staining method to reveal fine detail

The neuron's place as the primary functional unit of the nervous system was first recognized in the late 19th century through the work of the Spanish anatomist Santiago Ramón y Cajal.[27]

To make the structure of individual neurons visible, Ramón y Cajal improved a silver staining process that had been developed by Camillo Golgi.[27] The improved process involves a technique called "double impregnation" and is still in use today.

In 1888 Ramón y Cajal published a paper about the bird cerebellum. In this paper, he stated that he could not find evidence for anastomosis between axons and dendrites and called each nervous element "an absolutely autonomous canton."[27][28] This became known as the neuron doctrine, one of the central tenets of modern neuroscience.[27]

In 1891 the German anatomist Heinrich Wilhelm Waldeyer wrote a highly influential review about the neuron doctrine in which he introduced the term neuron to describe the anatomical and physiological unit of the nervous system.[29][30]

Silver impregnation stains are an extremely useful method for neuroanatomical investigations because, for reasons unknown, they stain only a very small percentage of cells in a tissue, so one is able to see the complete microstructure of individual neurons without much overlap from other cells in the densely packed brain.[31]

Neuron doctrine

The neuron doctrine is the now fundamental idea that neurons are the basic structural and functional units of the nervous system. The theory was put forward by Santiago Ramón y Cajal in the late 19th century. It held that neurons are discrete cells (not connected in a meshwork), acting as metabolically distinct units.

Later discoveries yielded a few refinements to the simplest form of the doctrine. For example, glial cells, which are not considered neurons, play an essential role in information processing.[32] Also, electrical synapses are more common than previously thought,[33] meaning that there are direct, cytoplasmic connections between neurons. In fact, there are examples of neurons forming even tighter coupling: the squid giant axon arises from the fusion of multiple axons.[34]

Ramón y Cajal also postulated the Law of Dynamic Polarization, which states that a neuron receives signals at its dendrites and cell body and transmits them, as action potentials, along the axon in one direction: away from the cell body.[35] The Law of Dynamic Polarization has important exceptions; dendrites can serve as synaptic output sites of neurons[36] and axons can receive synaptic inputs.[37]

Neurons in the brain

The number of neurons in the brain varies dramatically from species to species.[38] The adult human brain contains about 85-86 billion neurons,[38][39] of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum.[39] By contrast, the nematode worm Caenorhabditis elegans has just 302 neurons, making it an ideal experimental subject as scientists have been able to map all of the organism's neurons. The fruit fly Drosophila melanogaster, a common subject in biological experiments, has around 100,000 neurons and exhibits many complex behaviors. Many properties of neurons, from the type of neurotransmitters used to ion channel composition, are maintained across species, allowing scientists to study processes occurring in more complex organisms in much simpler experimental systems.

Neurological disorders

Charcot–Marie–Tooth disease (CMT) is a heterogeneous inherited disorder of nerves (neuropathy) that is characterized by loss of muscle tissue and touch sensation, predominantly in the feet and legs but also in the hands and arms in the advanced stages of disease. Presently incurable, this disease is one of the most common inherited neurological disorders, with 36 in 100,000 affected.[40]

Alzheimer's disease (AD), also known simply as Alzheimer's, is a neurodegenerative disease characterized by progressive cognitive deterioration, together with declining activities of daily living and neuropsychiatric symptoms or behavioral changes.[41] The most striking early symptom is loss of short-term memory (amnesia), which usually manifests as minor forgetfulness that becomes steadily more pronounced with illness progression, with relative preservation of older memories. As the disorder progresses, cognitive (intellectual) impairment extends to the domains of language (aphasia), skilled movements (apraxia), and recognition (agnosia), and functions such as decision-making and planning become impaired.[42][43]

Parkinson's disease (PD), also known as Parkinson disease, is a degenerative disorder of the central nervous system that often impairs the sufferer's motor skills and speech.[44] Parkinson's disease belongs to a group of conditions called movement disorders.[45] It is characterized by muscle rigidity, tremor, a slowing of physical movement (bradykinesia), and in extreme cases, a loss of physical movement (akinesia). The primary symptoms are the results of decreased stimulation of the motor cortex by the basal ganglia, normally caused by the insufficient formation and action of dopamine, which is produced in the dopaminergic neurons of the brain. Secondary symptoms may include high level cognitive dysfunction and subtle language problems. PD is both chronic and progressive.

Myasthenia gravis is a neuromuscular disease leading to fluctuating muscle weakness and fatigability during simple activities. Weakness is typically caused by circulating antibodies that block acetylcholine receptors at the post-synaptic neuromuscular junction, inhibiting the stimulative effect of the neurotransmitter acetylcholine. Myasthenia is treated with immunosuppressants, cholinesterase inhibitors and, in selected cases, thymectomy.

Demyelination

Guillain–Barré syndrome – demyelination

Demyelination is the loss of the myelin sheath insulating the nerves. When myelin degrades, conduction of signals along the nerve can be impaired or lost, and the nerve eventually withers. This leads to certain neurodegenerative disorders such as multiple sclerosis and chronic inflammatory demyelinating polyneuropathy.

Axonal degeneration

Although most injury responses include a calcium influx signaling to promote resealing of severed parts, axonal injuries initially lead to acute axonal degeneration, the rapid separation of the proximal and distal ends within 30 minutes of injury. Degeneration follows with swelling of the axolemma and eventually leads to bead-like formations. Granular disintegration of the axonal cytoskeleton and inner organelles occurs after axolemma degradation. Early changes include accumulation of mitochondria in the paranodal regions at the site of injury. The endoplasmic reticulum degrades, and mitochondria swell up and eventually disintegrate. The disintegration is dependent on ubiquitin and calpain proteases (activated by the influx of calcium ions), suggesting that axonal degeneration is an active process. Thus the axon undergoes complete fragmentation. The process takes roughly 24 hours in the peripheral nervous system (PNS) and longer in the CNS. The signaling pathways leading to axolemma degeneration are currently unknown.

Neurogenesis

It has been demonstrated that neurogenesis can sometimes occur in the adult vertebrate brain, a finding that led to controversy in 1999.[46] Later studies of the age of human neurons suggest that this process occurs only for a minority of cells, and a vast majority of neurons composing the neocortex were formed before birth and persist without replacement.[2]
The body contains a variety of stem cell types that have the capacity to differentiate into neurons. A report in Nature suggested that researchers had found a way to transform human skin cells into working nerve cells using a process called transdifferentiation in which "cells are forced to adopt new identities".[47]

Nerve regeneration

It is often possible for peripheral axons to regrow if they are severed,[48] but a neuron cannot be functionally replaced by one of another type (Llinás' law).

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...