
Thursday, September 21, 2023

Neural coding

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neural_coding

Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between a stimulus and the responses of individual neurons or neuronal ensembles, and with the relationships among the electrical activities of the neurons within an ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is thought that neurons can encode both digital and analog information.

Overview

Neurons are remarkable among the cells of the body in their ability to propagate signals rapidly over large distances. They do this by generating characteristic electrical pulses called action potentials: voltage spikes that can travel down axons. Sensory neurons change their activities by firing sequences of action potentials in various temporal patterns in response to external sensory stimuli, such as light, sound, taste, smell and touch. It is known that information about the stimulus is encoded in this pattern of action potentials and transmitted into and around the brain, but this is not the only method. Specialized neurons, such as those of the retina, can communicate more information through graded potentials. These differ from action potentials in that information about the strength of a stimulus directly correlates with the strength of the neuron's output. The signal decays much faster for graded potentials, necessitating short inter-neuron distances and high neuronal density. The advantage of graded potentials is higher information rates, capable of encoding more states (i.e. higher fidelity) than spiking neurons.

Although action potentials can vary somewhat in duration, amplitude and shape, they are typically treated as identical stereotyped events in neural coding studies. If the brief duration of an action potential (about 1 ms) is ignored, an action potential sequence, or spike train, can be characterized simply by a series of all-or-none point events in time. The lengths of interspike intervals (ISIs) between two successive spikes in a spike train often vary, apparently randomly. The study of neural coding involves measuring and characterizing how stimulus attributes, such as light or sound intensity, or motor actions, such as the direction of an arm movement, are represented by neuron action potentials or spikes. To describe and analyze neuronal firing, statistical methods and methods from probability theory and the theory of stochastic point processes have been widely applied.

With the development of large-scale neural recording and decoding technologies, researchers have begun to crack the neural code and have already provided the first glimpse into the real-time neural code as memory is formed and recalled in the hippocampus, a brain region known to be central for memory formation. Neuroscientists have initiated several large-scale brain decoding projects.

Encoding and decoding

The link between stimulus and response can be studied from two opposite points of view. Neural encoding refers to the map from stimulus to response. The main focus is to understand how neurons respond to a wide variety of stimuli, and to construct models that attempt to predict responses to other stimuli. Neural decoding refers to the reverse map, from response to stimulus, and the challenge is to reconstruct a stimulus, or certain aspects of that stimulus, from the spike sequences it evokes.

Hypothesized coding schemes

A sequence, or 'train', of spikes may contain information based on different coding schemes. In some neurons the strength with which a postsynaptic partner responds may depend solely on the 'firing rate', the average number of spikes per unit time (a 'rate code'). At the other end, a complex 'temporal code' is based on the precise timing of single spikes. These may be locked to an external stimulus, as in the visual and auditory systems, or be generated intrinsically by the neural circuitry.

Whether neurons use rate coding or temporal coding is a topic of intense debate within the neuroscience community, even though there is no clear definition of what these terms mean.

Rate coding

The rate coding model of neuronal firing communication states that as the intensity of a stimulus increases, the frequency or rate of action potentials, or "spike firing", increases. Rate coding is sometimes called frequency coding.

Rate coding is a traditional coding scheme, assuming that most, if not all, information about the stimulus is contained in the firing rate of the neuron. Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are typically treated statistically or probabilistically. They may be characterized by firing rates, rather than as specific spike sequences. In most sensory systems, the firing rate increases, generally non-linearly, with increasing stimulus intensity. Under a rate coding assumption, any information possibly encoded in the temporal structure of the spike train is ignored. Consequently, rate coding is inefficient but highly robust with respect to the ISI 'noise'.

In rate coding, precisely calculating the firing rate is very important. In fact, the term "firing rate" has a few different definitions, which refer to different averaging procedures, such as an average over time (the rate as a single-neuron spike count) or an average over several repetitions of the experiment (the rate of the PSTH).

In rate coding, learning is based on activity-dependent synaptic weight modifications.

Rate coding was originally shown by Edgar Adrian and Yngve Zotterman in 1926. In this simple experiment different weights were hung from a muscle. As the weight of the stimulus increased, the number of spikes recorded from sensory nerves innervating the muscle also increased. From these original experiments, Adrian and Zotterman concluded that action potentials were unitary events, and that the frequency of events, and not individual event magnitude, was the basis for most inter-neuronal communication.

In the following decades, measurement of firing rates became a standard tool for describing the properties of all types of sensory or cortical neurons, partly due to the relative ease of measuring rates experimentally. However, this approach neglects all the information possibly contained in the exact timing of the spikes. During recent years, more and more experimental evidence has suggested that a straightforward firing rate concept based on temporal averaging may be too simplistic to describe brain activity.

Spike-count rate (average over time)

The spike-count rate, also referred to as the temporal average, is obtained by counting the number of spikes that appear during a trial and dividing by the duration of the trial. The length T of the time window is set by the experimenter and depends on the type of neuron recorded from and on the stimulus. In practice, to get sensible averages, several spikes should occur within the time window. Typical values are T = 100 ms or T = 500 ms, but the duration may also be longer or shorter (Chapter 1.5 in the textbook 'Spiking Neuron Models').
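As a concrete illustration, the following is a minimal sketch in Python of the spike-count rate as a temporal average; the spike times and the window length T are made up for the example.

# Spike-count rate: number of spikes in one trial divided by the window length T.
spike_times = [0.012, 0.087, 0.151, 0.203, 0.342, 0.410, 0.466]  # hypothetical spike times (s)
T = 0.5  # window length chosen by the experimenter (s)

spike_count_rate = len(spike_times) / T  # spikes per second
print(spike_count_rate)  # 14.0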

The spike-count rate can be determined from a single trial, but at the expense of losing all temporal resolution about variations in the neural response during the course of the trial. Temporal averaging can work well in cases where the stimulus is constant or slowly varying and does not require a fast reaction of the organism — and this is the situation usually encountered in experimental protocols. Real-world input, however, is hardly stationary, but often changes on a fast time scale. For example, even when viewing a static image, humans perform saccades, rapid changes of the direction of gaze. The image projected onto the retinal photoreceptors therefore changes every few hundred milliseconds (Chapter 1.5 in 'Spiking Neuron Models').

Despite its shortcomings, the concept of a spike-count rate code is widely used not only in experiments, but also in models of neural networks. It has led to the idea that a neuron transforms information about a single input variable (the stimulus strength) into a single continuous output variable (the firing rate).

There is a growing body of evidence that in Purkinje neurons, at least, information is not simply encoded in firing but also in the timing and duration of non-firing, quiescent periods. There is also evidence from retinal cells, that information is encoded not only in the firing rate but also in spike timing. More generally, whenever a rapid response of an organism is required a firing rate defined as a spike-count over a few hundred milliseconds is simply too slow.

Time-dependent firing rate (averaging over several trials)

The time-dependent firing rate is defined as the average number of spikes (averaged over trials) appearing during a short interval between times t and t+Δt, divided by the duration of the interval. It works for stationary as well as for time-dependent stimuli. To experimentally measure the time-dependent firing rate, the experimenter records from a neuron while stimulating with some input sequence. The same stimulation sequence is repeated several times and the neuronal response is reported in a Peri-Stimulus-Time Histogram (PSTH). The time t is measured with respect to the start of the stimulation sequence. The Δt must be large enough (typically in the range of one or a few milliseconds) so that there is a sufficient number of spikes within the interval to obtain a reliable estimate of the average. The number of spike occurrences n_K(t; t+Δt), summed over all repetitions of the experiment and divided by the number K of repetitions, is a measure of the typical activity of the neuron between time t and t+Δt. A further division by the interval length Δt yields the time-dependent firing rate r(t) of the neuron, which is equivalent to the spike density of the PSTH (Chapter 1.5 in 'Spiking Neuron Models').

For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t+Δt over multiple trials. If Δt is small, there will never be more than one spike within the interval between t and t+Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval.
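The procedure above translates directly into code. The following is a minimal sketch, assuming hypothetical spike-time arrays for each trial: the spikes are binned, summed over the K repetitions, and divided by K and by the bin width Δt.

import numpy as np

# Time-dependent firing rate r(t) estimated from K repeated trials (a PSTH).
def time_dependent_rate(spike_trains, t_start, t_stop, dt):
    K = len(spike_trains)                        # number of repetitions
    edges = np.arange(t_start, t_stop + dt, dt)  # bin edges of width dt
    counts = np.zeros(len(edges) - 1)
    for spikes in spike_trains:
        counts += np.histogram(spikes, bins=edges)[0]  # n_K(t; t+dt)
    return counts / (K * dt)                     # spikes per second

# Example with made-up data: 3 trials, 1 ms bins over a 50 ms window.
trials = [np.array([0.004, 0.012, 0.030]),
          np.array([0.005, 0.013, 0.029]),
          np.array([0.004, 0.014, 0.031])]
r = time_dependent_rate(trials, 0.0, 0.05, 0.001)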

As an experimental procedure, the time-dependent firing rate measure is a useful method to evaluate neuronal activity, in particular in the case of time-dependent stimuli. The obvious problem with this approach is that it cannot be the coding scheme used by neurons in the brain: neurons cannot wait for a stimulus to be presented repeatedly in exactly the same manner before generating a response.

Nevertheless, the experimental time-dependent firing rate measure can make sense, if there are large populations of independent neurons that receive the same stimulus. Instead of recording from a population of N neurons in a single run, it is experimentally easier to record from a single neuron and average over N repeated runs. Thus, the time-dependent firing rate coding relies on the implicit assumption that there are always populations of neurons.

Temporal coding

When precise spike timing or high-frequency firing-rate fluctuations are found to carry information, the neural code is often identified as a temporal code. A number of studies have found that the temporal resolution of the neural code is on a millisecond time scale, indicating that precise spike timing is a significant element in neural coding. Such codes, which communicate via the time between spikes, are also referred to as interpulse interval codes, and have been supported by recent studies.

Neurons exhibit high-frequency fluctuations of firing rates, which could be noise or could carry information. Rate coding models suggest that these irregularities are noise, while temporal coding models suggest that they encode information. If the nervous system only used rate codes to convey information, a more consistent, regular firing rate would have been evolutionarily advantageous, and neurons would have utilized this code over other less robust options. Temporal coding supplies an alternate explanation for the "noise," suggesting that it actually encodes information and affects neural processing. To model this idea, binary symbols can be used to mark the spikes: 1 for a spike, 0 for no spike. Temporal coding allows the sequence 000111000111 to mean something different from 001100110011, even though the mean firing rate is the same for both sequences (6 spikes across the 12 time bins shown). Until recently, scientists had put the most emphasis on rate encoding as an explanation for post-synaptic potential patterns. However, functions of the brain are more temporally precise than the use of only rate encoding seems to allow. In other words, essential information could be lost due to the inability of the rate code to capture all the available information of the spike train. In addition, responses are different enough between similar (but not identical) stimuli to suggest that the distinct patterns of spikes contain a higher volume of information than is possible to include in a rate code.
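A minimal sketch of this point, using the two binary sequences above and an assumed 1 ms bin width: the mean rates are identical, so a pure rate code cannot distinguish the sequences, while the patterns themselves clearly differ.

# Two spike sequences with identical mean rate but different temporal structure.
a = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
b = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1]
bin_s = 0.001  # assumed bin width of 1 ms

rate_a = sum(a) / (len(a) * bin_s)  # spikes per second
rate_b = sum(b) / (len(b) * bin_s)
print(rate_a == rate_b)  # True: indistinguishable under a rate code
print(a == b)            # False: distinguishable under a temporal code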

Temporal codes (also called spike codes) employ those features of the spiking activity that cannot be described by the firing rate. For example, time-to-first-spike after the stimulus onset, phase-of-firing with respect to background oscillations, characteristics based on the second and higher statistical moments of the ISI probability distribution, spike randomness, or precisely timed groups of spikes (temporal patterns) are candidates for temporal codes. As there is no absolute time reference in the nervous system, the information is carried either in terms of the relative timing of spikes in a population of neurons (temporal patterns) or with respect to an ongoing brain oscillation (phase of firing). One way in which temporal codes are decoded, in the presence of neural oscillations, is that spikes occurring at specific phases of an oscillatory cycle are more effective in depolarizing the post-synaptic neuron.

The temporal structure of a spike train or firing rate evoked by a stimulus is determined both by the dynamics of the stimulus and by the nature of the neural encoding process. Stimuli that change rapidly tend to generate precisely timed spikes (and rapidly changing firing rates in PSTHs) no matter what neural coding strategy is being used. Temporal coding in the narrow sense refers to temporal precision in the response that does not arise solely from the dynamics of the stimulus, but that nevertheless relates to properties of the stimulus. The interplay between stimulus and encoding dynamics makes the identification of a temporal code difficult.

In temporal coding, learning can be explained by activity-dependent synaptic delay modifications. The modifications can themselves depend not only on spike rates (rate coding) but also on spike timing patterns (temporal coding), i.e., can be a special case of spike-timing-dependent plasticity.

The issue of temporal coding is distinct and independent from the issue of independent-spike coding. If each spike is independent of all the other spikes in the train, the temporal character of the neural code is determined by the behavior of time-dependent firing rate r(t). If r(t) varies slowly with time, the code is typically called a rate code, and if it varies rapidly, the code is called temporal.

Temporal coding in sensory systems

For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Due to the density of information about the abbreviated stimulus contained in this single spike, it would seem that the timing of the spike itself would have to convey more information than simply the average frequency of action potentials over a given period of time. This model is especially important for sound localization, which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information based on a relatively short neural response. Additionally, if low firing rates on the order of ten spikes per second must be distinguished from arbitrarily close rates encoding different stimuli, then a neuron trying to discriminate these two stimuli may need to wait for a second or more to accumulate enough information. This is not consistent with the fact that numerous organisms are able to discriminate between stimuli on a time scale of milliseconds, suggesting that a rate code is not the only model at work.

To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency time between stimulus onset and first action potential, also called latency to first spike or time-to-first-spike. This type of temporal coding has also been shown in the auditory and somatosensory systems. The main drawback of such a coding scheme is its sensitivity to intrinsic neuronal fluctuations. In the primary visual cortex of macaques, the timing of the first spike relative to the start of the stimulus was found to provide more information than the interval between spikes. However, the interspike interval could be used to encode additional information, which is especially important when the spike rate reaches its limit, as in high-contrast situations. For this reason, temporal coding may play a part in coding defined edges rather than gradual transitions.
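A minimal sketch of a latency-to-first-spike readout, with made-up spike times and stimulus onset; the coded feature is simply the time from stimulus onset to the first subsequent spike on each trial.

import numpy as np

# Latency to first spike: time from stimulus onset to the first subsequent spike.
stimulus_onset = 0.1  # s (hypothetical)
trials = [np.array([0.112, 0.140, 0.190]),
          np.array([0.125, 0.170]),
          np.array([0.109, 0.150, 0.220])]
latencies = [float(t[t > stimulus_onset].min() - stimulus_onset) for t in trials]
# roughly [0.012, 0.025, 0.009] seconds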

The mammalian gustatory system is useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism. Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium). In this way, both rate coding and temporal coding may be used in the gustatory system – rate for basic tastant type, temporal for more specific differentiation. Research on the mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and this information is different from that which is determined by rate coding schemes. Groups of neurons may synchronize in response to a stimulus. In studies dealing with the front cortical portion of the brain in primates, precise patterns with short time scales only a few milliseconds in length were found across small populations of neurons which correlated with certain information processing behaviors. However, little information could be determined from the patterns; one possible theory is that they represented the higher-order processing taking place in the brain.

As with the visual system, in mitral/tufted cells in the olfactory bulb of mice, first-spike latency relative to the start of a sniffing action seemed to encode much of the information about an odor. This strategy of using spike latency allows for rapid identification of and reaction to an odorant. In addition, some mitral/tufted cells have specific firing patterns for given odorants. This type of extra information could help in recognizing a certain odor, but is not completely necessary, as average spike count over the course of the animal's sniffing was also a good identifier. Along the same lines, experiments done with the olfactory system of rabbits showed distinct patterns which correlated with different subsets of odorants, and a similar result was obtained in experiments with the locust olfactory system.

Temporal coding applications

The specificity of temporal coding requires highly refined technology to measure informative, reliable, experimental data. Advances made in optogenetics allow neurologists to control spikes in individual neurons, offering electrical and spatial single-cell resolution. For example, blue light causes the light-gated ion channel channelrhodopsin to open, depolarizing the cell and producing a spike. When blue light is not sensed by the cell, the channel closes, and the neuron ceases to spike. The pattern of the spikes matches the pattern of the blue light stimuli. By inserting channelrhodopsin gene sequences into mouse DNA, researchers can control spikes and therefore certain behaviors of the mouse (e.g., making the mouse turn left). Researchers, through optogenetics, have the tools to effect different temporal codes in a neuron while maintaining the same mean firing rate, and thereby can test whether or not temporal coding occurs in specific neural circuits.

Optogenetic technology also has the potential to enable the correction of spike abnormalities at the root of several neurological and psychological disorders. If neurons do encode information in individual spike timing patterns, key signals could be missed by attempting to crack the code while looking only at mean firing rates. Understanding any temporally encoded aspects of the neural code and replicating these sequences in neurons could allow for greater control and treatment of neurological disorders such as depression, schizophrenia, and Parkinson's disease. Regulation of spike intervals in single cells more precisely controls brain activity than the addition of pharmacological agents intravenously.

Phase-of-firing code

Phase-of-firing code is a neural coding scheme that combines the spike count code with a time reference based on oscillations. This type of code assigns a time label to each spike according to the phase of local ongoing oscillations at low or high frequencies.

It has been shown that neurons in some cortical sensory areas encode rich naturalistic stimuli in terms of their spike times relative to the phase of ongoing network oscillatory fluctuations, rather than only in terms of their spike count. The local field potential signals reflect population (network) oscillations. The phase-of-firing code is often categorized as a temporal code although the time label used for spikes (i.e. the network oscillation phase) is a low-resolution (coarse-grained) reference for time. As a result, often only four discrete values for the phase are enough to represent all the information content in this kind of code with respect to the phase of oscillations in low frequencies. The phase-of-firing code is loosely based on the phase precession phenomenon observed in place cells of the hippocampus. Another feature of this code is that neurons adhere to a preferred order of spiking within a group of sensory neurons, resulting in a firing sequence.
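A minimal sketch, under assumed parameters, of how such a phase label might be computed: extract the instantaneous phase of an ongoing oscillation (here a stand-in 8 Hz signal in place of a band-passed local field potential), look up the phase at each spike time, and coarse-grain it into four discrete phase quadrants.

import numpy as np
from scipy.signal import hilbert

fs = 1000.0                        # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
lfp = np.sin(2 * np.pi * 8 * t)    # stand-in for a band-passed network oscillation
phase = np.angle(hilbert(lfp))     # instantaneous phase in [-pi, pi]

spike_times = np.array([0.050, 0.210, 0.480, 0.730])   # hypothetical spike times (s)
spike_phase = phase[(spike_times * fs).astype(int)]
quadrant = np.minimum(((spike_phase + np.pi) // (np.pi / 2)).astype(int), 3)  # labels 0..3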

The phase code has also been shown in visual cortex to involve high-frequency oscillations. Within a cycle of a gamma oscillation, each neuron has its own preferred relative firing time. As a result, an entire population of neurons generates a firing sequence that has a duration of up to about 15 ms.

Population coding

Population coding is a method to represent stimuli by using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. From the theoretical point of view, population coding is one of a few mathematically well-formulated problems in neuroscience. It grasps the essential features of neural coding and yet is simple enough for theoretical analysis. Experimental studies have revealed that this coding paradigm is widely used in the sensory and motor areas of the brain.

For example, in the visual area medial temporal (MT), neurons are tuned to the direction of motion. In response to an object moving in a particular direction, many neurons in MT fire with a noise-corrupted and bell-shaped activity pattern across the population. The moving direction of the object is retrieved from the population activity, making the readout immune to the fluctuations present in a single neuron's signal. When monkeys are trained to move a joystick towards a lit target, a single neuron will fire for multiple target directions. However, it fires fastest for one direction and more slowly depending on how close the target is to the neuron's "preferred" direction. If each neuron represents movement in its preferred direction, and the vector sum of all neurons is calculated (each neuron has a firing rate and a preferred direction), the sum points in the direction of motion. In this manner, the population of neurons codes the signal for the motion. This particular population code is referred to as population vector coding.
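A minimal sketch of population vector decoding with made-up preferred directions and firing rates: each neuron contributes a unit vector along its preferred direction, weighted by its rate, and the vector sum points in the decoded direction of motion.

import numpy as np

preferred_deg = np.array([0.0, 45.0, 90.0, 135.0, 180.0, 225.0, 270.0, 315.0])
rates_hz = np.array([12.0, 30.0, 55.0, 32.0, 10.0, 4.0, 2.0, 5.0])  # hypothetical rates

theta = np.deg2rad(preferred_deg)
pop_vector = np.array([np.sum(rates_hz * np.cos(theta)),
                       np.sum(rates_hz * np.sin(theta))])
decoded_deg = np.rad2deg(np.arctan2(pop_vector[1], pop_vector[0])) % 360
print(decoded_deg)  # close to 90 degrees, the direction driving the population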

Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for the neural representation of auditory acoustic stimuli. These exploit both the place, or tuning, within the auditory nerve and the phase-locking within each auditory nerve fiber. The first ALSR representation was for steady-state vowels; ALSR representations of pitch and formant frequencies in complex, non-steady-state stimuli were later demonstrated for voiced pitch and for formant representations in consonant-vowel syllables. The advantage of such representations is that global features such as pitch or formant transition profiles can be represented as global features across the entire nerve simultaneously via both rate and place coding.

Population coding has a number of other advantages as well, including reduction of uncertainty due to neuronal variability and the ability to represent a number of different stimulus attributes simultaneously. Population coding is also much faster than rate coding and can reflect changes in the stimulus conditions nearly instantaneously. Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus.

Typically an encoding function has a peak value such that activity of the neuron is greatest if the perceptual value is close to the peak value, and becomes reduced accordingly for values less close to the peak value. It follows that the actual perceived value can be reconstructed from the overall pattern of activity in the set of neurons. Vector coding is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of maximum likelihood based on a multivariate distribution of the neuronal responses. These models can assume independence, second-order correlations, or even more detailed dependencies such as higher-order maximum entropy models, or copulas.

Correlation coding

The correlation coding model of neuronal firing claims that correlations between action potentials, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the total mutual information present in the two spike trains about a stimulus feature. However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign. Correlations can also carry information not present in the average firing rate of two pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.

Independent-spike coding

The independent-spike coding model of neuronal firing claims that each individual action potential, or "spike", is independent of each other spike within the spike train.

Position coding

Plot of typical position coding

A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response. However, the noise inherent in neural responses means that a maximum likelihood estimation function is more accurate.
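A minimal sketch of this comparison, assuming independent Poisson spiking and Gaussian tuning curves with illustrative parameters: the stimulus can be read out either as the preferred value of the most active neuron or, more accurately, by maximum likelihood.

import numpy as np

prefs = np.linspace(-40.0, 40.0, 17)     # preferred stimulus values (assumed)
sigma, r_max, dt = 10.0, 50.0, 0.2       # tuning width, peak rate (Hz), window (s)

def tuning(s):
    return r_max * np.exp(-0.5 * ((s - prefs) / sigma) ** 2)

def ml_decode(counts, candidates=np.linspace(-50.0, 50.0, 1001)):
    # Poisson log-likelihood of the observed counts under each candidate stimulus.
    ll = [np.sum(counts * np.log(tuning(s) * dt + 1e-12) - tuning(s) * dt)
          for s in candidates]
    return candidates[int(np.argmax(ll))]

rng = np.random.default_rng(0)
counts = rng.poisson(tuning(12.0) * dt)   # one noisy population response to stimulus 12
print(prefs[int(np.argmax(counts))])      # readout from the single most active neuron
print(ml_decode(counts))                  # maximum-likelihood readout, near 12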

Neural responses are noisy and unreliable.

This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as in grid cells that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.

Sparse coding

A sparse code is one in which each item is encoded by the strong activation of a relatively small set of neurons. For each item to be encoded, this is a different subset of all available neurons. In contrast to sensor-sparse coding, sensor-dense coding implies that all information from possible sensor locations is known.

As a consequence, sparseness may be focused on temporal sparseness ("a relatively small number of time periods are active") or on the sparseness in an activated population of neurons. In this latter case, it may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations since, compared to traditional computers, information is massively distributed across neurons. Sparse coding of natural images produces wavelet-like oriented filters that resemble the receptive fields of simple cells in the visual cortex. The capacity of sparse codes may be increased by the simultaneous use of temporal coding, as found in the locust olfactory system.
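A minimal sketch of the population-sparseness definition above, computed for a single time window with made-up firing rates and an illustrative activity threshold.

# Population sparseness in one time window: fraction of neurons that are active.
rates_hz = [0.0, 12.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0]   # hypothetical firing rates
active = sum(r > 1.0 for r in rates_hz)                 # illustrative threshold of 1 Hz
sparseness = active / len(rates_hz)                     # 0.25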

Given a potentially large set of input patterns, sparse coding algorithms (e.g. sparse autoencoder) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.

Linear generative model

Most models of sparse coding are based on the linear generative model. In this model, the symbols are combined in a linear fashion to approximate the input.

More formally, given a k-dimensional set of real-valued input vectors ξ ∈ R^k, the goal of sparse coding is to determine n k-dimensional basis vectors b_1, ..., b_n, along with a sparse n-dimensional vector of weights or coefficients s for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector: ξ ≈ s_1 b_1 + ... + s_n b_n.
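A minimal sketch of this linear generative model with illustrative dimensions: an input vector is approximated as a sparse linear combination of basis vectors, with only a few nonzero coefficients.

import numpy as np

k, n = 4, 8                       # input dimension, number of basis vectors (illustrative)
rng = np.random.default_rng(1)
B = rng.normal(size=(k, n))       # columns are the n basis vectors
s = np.zeros(n)
s[[2, 5]] = [1.5, -0.7]           # sparse coefficients: only 2 of 8 are nonzero
x_hat = B @ s                     # approximation of the input vector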

The codings generated by algorithms implementing a linear generative model can be classified into codings with soft sparseness and those with hard sparseness. These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth Gaussian-like distribution, but peakier than Gaussian, with few zero values, many small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, no or hardly any small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.

Another measure of coding is whether it is critically complete or overcomplete. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is overcomplete. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise. The human primary visual cortex is estimated to be overcomplete by a factor of 500, so that, for example, a 14 x 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons.

Other models are based on matching pursuit, a sparse approximation algorithm which finds the "best matching" projections of multidimensional data, and dictionary learning, a representation learning method which aims to find a sparse matrix representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.
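A minimal sketch of matching pursuit as described above, assuming a dictionary D with unit-norm columns: greedily select the atom that best matches the current residual, record its coefficient, subtract its contribution, and repeat.

import numpy as np

def matching_pursuit(x, D, n_iters=5):
    # D: dictionary whose unit-norm columns are the atoms; returns sparse s with x ≈ D @ s.
    residual = np.asarray(x, dtype=float).copy()
    s = np.zeros(D.shape[1])
    for _ in range(n_iters):
        projections = D.T @ residual
        j = int(np.argmax(np.abs(projections)))   # best-matching atom
        s[j] += projections[j]
        residual -= projections[j] * D[:, j]
    return s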

Biological evidence

Sparse coding may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specific associative memories in which only a few neurons out of a population respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli.

Theoretical work on sparse distributed memory has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations. Experimentally, sparse representations of sensory information have been observed in many systems, including vision, audition, touch, and olfaction. However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain.

In the Drosophila olfactory system, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories. Sparseness is controlled by a negative feedback circuit between Kenyon cells and GABAergic anterior paired lateral (APL) neurons. Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories.

Non-spike ultramicro-coding (for advanced intelligence)

Whatever the merits and ubiquity of action-potential/synaptic ("spike") signalling and its coding, it seems unable to offer any plausible account of higher intelligence such as human abstract thought. Hence there has been a search for an alternative capable of reliable digital performance, and the only plausible candidate seemed to be the use of 'spare' RNA (not involved in protein coding, hence "ncRNA"). That ncRNA would offer the "written-down" static coding. Such ultramicro sites could not routinely intercommunicate using action potentials, but they would almost certainly have to use infra-red or nearby optical wavelengths. Such wavelengths would conveniently fit the diameters of myelinated nerve fibres, here seen as coaxial cables, thus offering a second, fast signalling system (with significantly different properties) operating simultaneously with the traditional system, on the same axons whenever appropriate.

Even if we accept it as true, such activity is mostly unobservable for practical reasons, so the extent to which one should accept this model depends on one's philosophy of science. The model is based on a considerable quantity of mutually supporting interdisciplinary evidence, so scientific realism should presumably accept it (just as it does for unseen black holes or neutrinos), at least until some observed disproof arises, while instrumentalism could be expected to mix disbelief with a willingness to simply use the model as practically useful, given that it does answer several mysteries.

But then additionally there are two minor items of direct evidence in the form of fulfilled predictions: (i) (more a hope than a prediction) that there would be enough spare RNA available — a doubt which was dispelled when Mattick disclosed that (in humans) only about 3% of RNA was used for protein-making, so 97% was available for other tasks. (ii) The feasibility of the coaxial-cable sub-hypothesis was justified by experiments showing that infra-red and other light-frequencies can be transmitted via axons. This non-spike mode is envisaged as operating exclusively within the brain proper — as advanced-thought mechanisms (in the higher vertebrates) — leaving the conventional “spike” signals to do all the intercommunication with the outside world, and do other routine tasks including Hebbian maintenance.

Surprisingly though, there has been some suggestion that a similar mode may have evolved independently in insects (thus accounting for their extraordinary performance abilities despite their tiny brains). Indeed, as there is a case that the spines and antennae of moths and other insects may receive infra-red signals directly from the environment (reviewed elsewhere), there is a further possibility that there might sometimes be a dedicated feed-in of these signals directly into the insect's nervous system (without the usually expected 'spike' sensory mechanisms). That is merely conjectural at this stage, but it might offer scope for some easy and economical experimentation.

Yet another non-spike signal-mode: There is also indirect evidence for a third signal-mode for the axon! This mode is very much slower but capable of carrying “much bigger documents” in the form of already-formatted ncRNA-schemas of the above-mentioned static coding — carried as axonal transport by kinesin within the axon (just like the known transport of mRNA, with which it may have been confused in laboratory studies).

Abandoned village

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Abandoned_village
Abandoned village in Russia
The remains of a fieldstone church in Dangelsdorf, Germany, from the 14th century
Moggessa di Qua near Moggio Udinese / Italy
Glanzenberg, a 13th-century town in Unterengstringen, Switzerland
Villa Epecuén (Argentina)

An abandoned village is a village that has, for some reason, been deserted. In many countries, and throughout history, thousands of villages have been deserted for a variety of causes. Abandonment of villages is often related to epidemic, famine, war, climate change, economic depressions, environmental destruction, or deliberate clearances.

Armenia and Azerbaijan

Hundreds of villages in Nagorno-Karabakh were deserted following the First Nagorno-Karabakh War. Between 1988 and 1993, 400,000 ethnic Azeris and Kurds fled the area, and nearly 200 villages in Armenia itself populated by Azeris and Kurds had been abandoned by 1991. Likewise, nearly 300,000 Armenians fled from Azerbaijan between 1988 and 1993, and 50 villages populated by Armenians in northern Nagorno-Karabakh were abandoned. Some of the Armenian settlements and churches outside Armenia and the Nagorno-Karabakh Republic have either been destroyed or damaged, including those in Nakhichevan.

Australia

In Australia, the government requires operators of mining towns to remove all traces of the town when it is abandoned. This has occurred in the cases of Mary Kathleen, Goldsworthy and Shay Gap, but not in cases such as Wittenoom and Big Bell. Some towns have been lost or moved when dams were built; others were abandoned for any number of other reasons, such as recurring natural disasters (for example bushfires) or changed circumstances. In Australia, an abandoned settlement that has infrastructure remaining is synonymous with a ghost town.

Belarus

In 1988, two years after the Chernobyl disaster, the Belarusian government created the Polesie State Radioecological Reserve, a 1,313 km2 (507 sq mi) exclusion zone to protect people against the effects of radiation. Twenty-two thousand people lived there in the 96 settlements that were abandoned, including Aravichy and Dzernavichy, and the area has since been expanded by a further 849 km2 (328 sq mi).

Belgium

In 1968 a building ban was implemented in the town of Doel so that the Port of Antwerp could expand. An economic crisis then occurred and this plan for expansion was halted. In 1998 another expansion plan for the Port of Antwerp was released and most of the inhabitants left.

China

Many villages in remote parts of the New Territories, Hong Kong, usually in valleys or on islands, have been abandoned due to inaccessibility. Residents go to live in urban areas with better job opportunities. Some villages have been moved to new sites to make way for reservoirs or new town development. See also walled villages of Hong Kong and list of villages in Hong Kong.

Cyprus

Villages have been abandoned as a result of the Cyprus dispute. Some of these are reported to be landmined.

Finland

On the western edge of Vantaa's Ilola district, there is an illegal village called Simosenkylä, where the houses are mainly dilapidated, some completely abandoned.

France

A number of villages, mainly in the northern and north-western areas of the country, were destroyed during World War I and World War II. Some of them were rebuilt next to the original sites, with the original villages remaining in a ruined state.

Germany

Winnefeld church ruin

There are hundreds of abandoned villages, known as Wüstungen, in Germany. Geographer Kurt Scharlau categorized the different types in the 1930s, making distinctions between temporary and permanent Wüstung, settlements used for different purposes (farms or villages), and the extent of abandonment (partial or total). His scheme has been expanded, and has been criticized for not taking into account expansion and regression. Archaeologists commonly distinguish between Flurwüstungen (farmed areas) and Ortswüstungen (sites where buildings formerly stood). The most drastic period of abandonment in modern times was during the 14th and 15th centuries—before 1350, there were about 170,000 settlements in Germany, and this had been reduced by nearly 40,000 by 1450. As in Britain, the Black Death played a large role in this, as did the growth of large villages and towns, the Little Ice Age, the introduction of crop rotation, and war (in Germany, particularly the Thirty Years' War). In later times, the German Empire demolished villages to create training grounds for the military. As a result of the Potsdam Conference, the northern part of East Prussia became the Kaliningrad Oblast, with the majority of its villages permanently destroyed after the German population had been driven out. The same applied to villages of ethnic Germans along the prewar borders between what is now the Czech Republic and Germany or Austria, as all ethnic Germans were expelled from then-Czechoslovakia.

Hungary

Hundreds of villages were abandoned during the Ottoman wars in the Kingdom of Hungary in the 16th–17th centuries. Many of them were never repopulated, and they generally left few visible traces. Real ghost towns are rare in present-day Hungary, except the abandoned villages of Derenk (left in 1943) and Nagygéc (left in 1970). Due to the decrease in rural population beginning in the 1980s, dozens of villages are now threatened with abandonment. The first village officially declared as "died out" was Gyűrűfű at the end of the 1970s, but it was later repopulated as an "eco-village". Sometimes depopulated villages were successfully saved as small rural resorts like Kán, Tornakápolna, Szanticska, Gorica and Révfalu.

India

One significant event of abandonment in Indian history was due to the Bengal famine of 1770. About ten million people, approximately one-third of the population of the affected area, are estimated to have died in the Bengal famine of 1770. Regions where the famine occurred included especially the modern Indian states of Bihar and West Bengal, but the famine also extended into Odisha and Jharkhand as well as modern Bangladesh. Among the worst affected areas were Birbhum and Murshidabad in Bengal, and Tirhut, Champaran and Bettiah in Bihar. As a result of the famine, these large areas were depopulated and returned to jungle for decades to come as the survivors migrated en masse in a search for food. Many cultivated lands were abandoned—much of Birbhum, for instance, returned to jungle and was virtually impassable for decades afterwards. From 1772 on, bands of bandits and thugs became an established feature of Bengal, and were only brought under control by punitive actions in the 1780s.

Indonesia

Due to natural disasters, many villages have been destroyed and abandoned.

Ireland

Several villages in Ireland have been abandoned during the Middle Ages or later; Oliver Goldsmith's poem "The Deserted Village" (1770) is a famous commentary on rural depopulation.

Smaller rural settlements, known as clachans, were also abandoned in large numbers during the Great Famine (1845–50).

In 1940 the town of Ballinahown in West Wicklow was evacuated for the construction of the Blessington Lakes and Poulaphouca Reservoir.

Territory of the former British Mandate of Palestine

As a consequence of the 1948 Palestinian expulsion and flight during the 1948 Palestine war, around 720,000 Palestinian Arabs were displaced, leaving around 400 Palestinian Arab towns and villages depopulated in what became Israel. In addition, several Jewish communities in what became the West Bank and Gaza Strip were also depopulated.

In August 2005, Israel evacuated Gush Katif and all other Jewish settlements in the Gaza Strip. Some structures in these settlements, including greenhouses and synagogues, were left standing after the withdrawal.

Malta

Ruins of Tal-Baqqari, an abandoned village near Żurrieq

Many small villages around Malta were abandoned between the 14th and 18th centuries. They were abandoned for several reasons, including corsair raids (such as the raids of 1429 and 1551), slow population decline, migration to larger villages as well as political changes such as the transfer of the capital from Mdina to Birgu in 1530, and to Valletta in 1571. Many villages were depopulated after a plague epidemic in 1592–93.

Of Malta's ten original parishes in 1436, two (Ħal Tartarni and Bir Miftuħ) no longer exist, while others such as Mellieħa were abandoned but rebuilt at a later stage. The existence of many of the other villages is known only from militia lists, ecclesiastical or notarial documents, or lists of lost villages compiled by scholars such as Giovanni Francesco Abela.

The villages usually consisted of a chapel surrounded by a number of farmhouses and other buildings. In some cases, such as Ħal-Millieri and Bir Miftuħ, the village disappeared but the chapel still exists.

North Africa

Oases and villages in North Africa have been abandoned due to the expansion of the Sahara desert.

Romania

Many Saxon villages in Transylvania became depopulated or abandoned when their German-speaking inhabitants emigrated to Germany in the 1990s.

Russia

Abandoned village in the Tver Oblast of Russia

Thousands of abandoned villages are scattered across Russia.

Narmeln, the westernmost point of Russia, was a German village on the Vistula Spit until it became depopulated in 1945 during World War II. The Vistula Spit was split between Poland and the Soviet Union after the war, with Narmeln as the only settlement on the Soviet side. Narmeln was never repopulated as the Soviet side was made into an exclusion zone.

Spain

The abandoned village of Merades, Spain; part of the northernmost section of the ruins

Large zones of the mountainous Iberian System and the Pyrenees have undergone heavy depopulation since the early 20th century. In Spain there are many ghost towns scattered across mountain areas especially in Teruel Province.

The traditional agricultural practices such as sheep and goat rearing on which the village economy was based were not taken over by the local youth after the lifestyle changes that swept over rural Spain during the second half of the 20th century. The exodus from the rural mountainous areas in Spain rose steeply after General Franco's Plan de Estabilización in 1959. The population declined steeply as people emigrated towards the industrial areas of the large cities and the coastal towns where tourism grew exponentially.

The abandonment of agricultural land-use practices drives the natural establishment of forests through ecological succession in Spain. This spontaneous forest establishment has several consequences for society and nature, such as an increase in fire risk and frequency, and biodiversity loss. Regarding biodiversity loss, research findings from the Mediterranean show that this risk is very site-dependent. More recently, the abandonment of land has also been discussed by some as an opportunity for rewilding in rural areas of Spain.

Syria

The Dead Cities are a group of abandoned villages in Northern Syria dating back to the times of Late Antiquity and the Byzantine Empire. They are a World Heritage Site.

Following the occupation of the Golan Heights by Israel after its victory in the Six-Day War, more than 130,000 Syrians were expelled, and two towns as well as 163 villages were abandoned and destroyed.

In the 2010s, as a result of the Syrian civil war, many villages in Syria, both in areas under government control and under rebel control, have been depopulated. For example, the town of Darayya in Rural Damascus Governorate, with a pre-war population of 225,000, was completely depopulated during the war, and since its return to government control in 2016, only between 10% and 30% of its population have returned. Further north, in Idlib Governorate, the two villages of Al-Fu'ah and Kafriya, for example, were depopulated completely as their Twelver Shia population was evacuated.

Ukraine

Abandoned village near Chernobyl

Following the 1986 Chernobyl disaster, a 2,600 km2 (1,000 sq mi) zone of exclusion was created and the entire population was evacuated to prevent exposure to radiation. Since then, a limited number of people have been allowed to return: 197 lived in the zone in 2012, down from 328 in 2007 and 612 in 1999. However, all of the villages and the main city of the region, Pripyat, are falling into decay. The only inhabited settlement is Chernobyl, which houses maintenance staff and scientists working at the nuclear power plant, although they can only live there for short periods of time.

United Kingdom

St Mary Church in West Tofts, Norfolk, a village which had its population relocated and was then incorporated into the Stanford Training Area facility.

Many villages in the United Kingdom have been abandoned throughout history. Some cases were the result of natural events, such as rivers changing course or silting up, or coastal and estuarine erosion.

Sometimes villages were deliberately cleared: the Harrying of the North caused widespread devastation in the winter of 1069–1070. In the 12th and 13th centuries, many villages were removed to make way for monasteries, and in the 18th century, it became fashionable for land-owning aristocrats to live in large mansions set in large landscaped parklands. Villages that obstructed the view were removed, although by the early 19th century it had become common to provide replacements.

In modern times, a few villages have been abandoned due to reservoirs being built and the location being flooded. These include Capel Celyn in Gwynedd, Wales, Mardale Green in the English Lake District and two villages—Ashopton and Derwent—drowned by the Ladybower Reservoir in Derbyshire. In other cases, such as Tide Mills, East Sussex, Imber and Tyneham, the village lands have been converted to military training areas. Villages in Northumberland have been demolished to make way for open-cast mines. Hampton-on-Sea was abandoned due to coastal erosion thought to have been exacerbated by the building of a pier. Several other villages had their populations relocated to make way for military installations; these include a group of villages in the vicinity of Thetford, Norfolk, which were emptied in 1942 to allow for the establishment of the Stanford Training Area, which incorporates the villages as part of the facility's training areas.

Deserted medieval villages

In the United Kingdom, a deserted medieval village (DMV) is a settlement that was abandoned during the Middle Ages, typically leaving no trace apart from earthworks or cropmarks. If there are three or fewer inhabited houses, the convention is to regard the site as deserted; if there are more than three houses, it is regarded as shrunken. The commonest causes of DMVs include failure of marginal agricultural land and clearance and enclosure following depopulation after the Black Death. The study of the causes of each settlement's desertion is an ongoing field of research.

England has an estimated 3,000 DMVs. One of the best known is Wharram Percy in North Yorkshire, where extensive archaeological excavations were conducted between 1948 and 1990. Its ruined church and former fishpond are still visible. Some other examples are Gainsthorpe in Lincolnshire, and Old Wolverton in Milton Keynes.

Cosmic dust

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cosmic_dust
Porous chondrite dust particle

Cosmic dust – also called extraterrestrial dust, space dust, or star dust – is dust that occurs in outer space or has fallen onto Earth. Most cosmic dust particles measure between a few molecules and 0.1 mm (100 μm) across, such as micrometeoroids. Larger particles are called meteoroids. Cosmic dust can be further distinguished by its astronomical location: intergalactic dust, interstellar dust, interplanetary dust (as in the zodiacal cloud), and circumplanetary dust (as in a planetary ring). There are several methods for measuring space dust.

In the Solar System, interplanetary dust causes the zodiacal light. Solar System dust includes comet dust, planetary dust (like from Mars), asteroidal dust, dust from the Kuiper belt, and interstellar dust passing through the Solar System. Thousands of tons of cosmic dust are estimated to reach Earth's surface every year, with most grains having a mass between 10^−16 kg (0.1 pg) and 10^−4 kg (0.1 g). The density of the dust cloud through which the Earth is traveling is approximately 10^−6 dust grains/m^3.

Cosmic dust contains some complex organic compounds (amorphous organic solids with a mixed aromatic-aliphatic structure) that could be created naturally, and rapidly, by stars. A smaller fraction of dust in space is "stardust", consisting of larger refractory minerals that condensed as matter left by stars.

Interstellar dust particles were collected by the Stardust spacecraft and samples were returned to Earth in 2006.

Study and importance

Artist's impression of dust formation around a supernova explosion.

Cosmic dust was once solely an annoyance to astronomers, as it obscures objects they wish to observe. When infrared astronomy began, the dust particles were observed to be significant and vital components of astrophysical processes. Their analysis can reveal information about phenomena like the formation of the Solar System. For example, cosmic dust can drive the mass loss when a star is nearing the end of its life, play a part in the early stages of star formation, and form planets. In the Solar System, dust plays a major role in the zodiacal light, Saturn's B Ring spokes, the outer diffuse planetary rings of Jupiter, Saturn, Uranus and Neptune, and comets.

Zodiacal light caused by cosmic dust.

The interdisciplinary study of dust brings together different scientific fields: physics (solid-state, electromagnetic theory, surface physics, statistical physics, thermal physics), fractal mathematics, surface chemistry on dust grains, meteoritics, as well as every branch of astronomy and astrophysics. These disparate research areas can be linked by the following theme: the cosmic dust particles evolve cyclically (chemically, physically and dynamically). The evolution of dust traces out paths in which the Universe recycles material, in processes analogous to the daily recycling steps with which many people are familiar: production, storage, processing, collection, consumption, and discarding.

Observations and measurements of cosmic dust in different regions provide an important insight into the Universe's recycling processes: in the clouds of the diffuse interstellar medium, in molecular clouds, in the circumstellar dust of young stellar objects, and in planetary systems such as the Solar System, where astronomers consider dust to be in its most recycled state. Astronomers accumulate observational ‘snapshots’ of dust at different stages of its life and, over time, form a more complete movie of the Universe's complicated recycling steps.

Parameters such as a dust particle's initial motion, material properties, and the intervening plasma and magnetic field determine its arrival at the dust detector. Slightly changing any of these parameters can give significantly different dust dynamical behavior. Therefore, one can learn where that particle came from and what lies in the intervening medium.

Detection methods

Cosmic dust of the Andromeda Galaxy as revealed in infrared light by the Spitzer Space Telescope.

A wide range of methods is available to study cosmic dust. Cosmic dust can be detected by remote sensing methods that utilize the radiative properties of cosmic dust particles, cf. zodiacal light measurements.

Cosmic dust can also be detected directly ('in-situ') using a variety of collection methods and from a variety of collection locations. Estimates of the daily influx of extraterrestrial material entering the Earth's atmosphere range between 5 and 300 tonnes.

NASA collects samples of stardust particles in the Earth's atmosphere using plate collectors under the wings of aircraft flying in the stratosphere. Dust samples are also collected from surface deposits on the large Earth ice-masses (Antarctica and Greenland/the Arctic) and in deep-sea sediments.

Don Brownlee at the University of Washington in Seattle first reliably identified the extraterrestrial nature of collected dust particles in the late 1970s. Another source is meteorites, from which stardust can be extracted. Stardust grains are solid refractory pieces of individual presolar stars. They are recognized by their extreme isotopic compositions, which can only have arisen within evolved stars, prior to any mixing with the interstellar medium. These grains condensed from the stellar matter as it cooled while leaving the star.

Cosmic dust of the Horsehead Nebula as revealed by the Hubble Space Telescope.

In interplanetary space, dust detectors on planetary spacecraft have been built and flown; some are presently flying, and more are being built to fly. The large orbital velocities of dust particles in interplanetary space (typically 10–40 km/s) make intact particle capture problematic. Instead, in-situ dust detectors are generally devised to measure parameters associated with the high-velocity impact of dust particles on the instrument, and then derive physical properties of the particles (usually mass and velocity) through laboratory calibration (i.e. impacting accelerated particles with known properties onto a laboratory replica of the dust detector). Over the years dust detectors have measured, among other things, the impact light flash, acoustic signal and impact ionisation. Recently the dust instrument on Stardust captured particles intact in low-density aerogel.
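
To make the calibration step concrete, the sketch below shows the kind of inversion an impact-ionisation detector relies on. It is a minimal illustration, not any specific instrument's calibration: a power law of the form Q ≈ k·m·v^β is commonly used for impact charge yields, but the constants k and β here are placeholders chosen only so the example produces a plausible-looking grain mass.

# Illustrative only: invert an assumed impact-charge calibration Q = k * m * v**beta
# to recover a dust grain's mass from the measured impact charge and an estimated
# impact speed (e.g. derived from the signal rise time).

K_CALIB = 1.0   # placeholder calibration constant (C per kg, for v in km/s) -- not a real instrument value
BETA = 3.5      # placeholder velocity exponent; impact charge yields rise steeply with speed

def grain_mass_from_impact(charge_coulomb: float, speed_km_per_s: float) -> float:
    """Return the grain mass in kg implied by the assumed calibration."""
    return charge_coulomb / (K_CALIB * speed_km_per_s ** BETA)

# Example: a 3.6e-11 C impact charge at 20 km/s implies ~1e-15 kg under these assumptions.
print(grain_mass_from_impact(3.6e-11, 20.0))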

Dust detectors in the past flew on the HEOS 2, Helios, Pioneer 10, Pioneer 11, Giotto, Galileo, Ulysses and Cassini space missions, on the Earth-orbiting LDEF, EURECA, and Gorid satellites, and some scientists have utilized the Voyager 1 and 2 spacecraft as giant Langmuir probes to directly sample the cosmic dust. Presently dust detectors are flying on the Ulysses, Proba, Rosetta, Stardust, and New Horizons spacecraft. Dust collected at Earth, or collected further out in space and returned by sample-return missions, is then analyzed by dust scientists in their respective laboratories all over the world. One large storage facility for cosmic dust exists at NASA's Johnson Space Center in Houston.

Infrared light can penetrate cosmic dust clouds, allowing us to peer into regions of star formation and the centers of galaxies. NASA's Spitzer Space Telescope was the largest infrared space telescope before the launch of the James Webb Space Telescope. During its mission, Spitzer obtained images and spectra by detecting the thermal radiation emitted by objects in space between wavelengths of 3 and 180 micrometres. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground. Findings from Spitzer have revitalized the studies of cosmic dust. One report showed some evidence that cosmic dust is formed near a supermassive black hole.
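
A short Wien's-law estimate (a sketch, not from the article) shows why this 3 to 180 micrometre range targets dust: thermal emission from grains at typical dust temperatures peaks at exactly these infrared wavelengths.

# Wien's displacement law: peak emission wavelength (micrometres) ~ 2898 / T (kelvin).
WIEN_CONSTANT_UM_K = 2898.0

def peak_wavelength_um(temperature_k: float) -> float:
    return WIEN_CONSTANT_UM_K / temperature_k

# Assumed, representative dust temperatures from hot circumstellar to cold interstellar grains
for temp_k in (1000, 300, 100, 20):
    print(f"T = {temp_k:>4} K  ->  thermal emission peaks near {peak_wavelength_um(temp_k):6.1f} um")
# Output runs from roughly 3 um (hot dust) to about 145 um (cold dust), inside
# the 3-180 um band covered by Spitzer.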

Another detection mechanism is polarimetry. Dust grains are not spherical and tend to align to interstellar magnetic fields, preferentially polarizing starlight that passes through dust clouds. In nearby interstellar space, where interstellar reddening is not intense enough to be detected, high precision optical polarimetry has been used to glean the structure of dust within the Local Bubble.

In 2019, researchers found interstellar dust in Antarctica, which they relate to the Local Interstellar Cloud. The detection was made by measuring the radionuclides Fe-60 and Mn-53 with highly sensitive accelerator mass spectrometry.

Radiation properties

HH 151 is a bright jet of glowing material trailed by an intricate, orange-hued plume of gas and dust.

A dust particle interacts with electromagnetic radiation in a way that depends on its cross section, the wavelength of the electromagnetic radiation, and on the nature of the grain: its refractive index, size, etc. The radiation process for an individual grain is called its emissivity, dependent on the grain's efficiency factor. Further specifications regarding the emissivity process include extinction, scattering, absorption, or polarisation. In the radiation emission curves, several important signatures identify the composition of the emitting or absorbing dust particles.

Dust particles can scatter light nonuniformly. Forward scattered light is light that is redirected slightly off its path by diffraction, and back-scattered light is reflected light.

The scattering and extinction ("dimming") of the radiation gives useful information about the dust grain sizes. For example, if the object(s) in one's data is many times brighter in forward-scattered visible light than in back-scattered visible light, then it is understood that a significant fraction of the particles are about a micrometer in diameter.
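
The link between scattering behaviour and grain size can be made explicit through the dimensionless size parameter x = 2πa/λ: grains scatter nearly isotropically when x is much less than 1 (the Rayleigh regime) and increasingly concentrate light in the forward direction once x approaches or exceeds 1. The sketch below is a rough indicator only, not a full Mie-scattering calculation, and the grain radii are assumed example values.

import math

VISIBLE_WAVELENGTH_M = 550e-9   # ~550 nm, middle of the visible band

def size_parameter(radius_m: float, wavelength_m: float = VISIBLE_WAVELENGTH_M) -> float:
    """Dimensionless size parameter x = 2*pi*a / lambda."""
    return 2.0 * math.pi * radius_m / wavelength_m

# Assumed example radii, in micrometres
for radius_um in (0.01, 0.1, 0.5, 1.0):
    x = size_parameter(radius_um * 1e-6)
    regime = "forward scattering dominates" if x >= 1.0 else "nearly isotropic (Rayleigh)"
    print(f"a = {radius_um:>4} um  ->  x = {x:5.2f}  ({regime})")
# Micrometre-sized grains give x of order 10 in visible light, which is why a
# strong forward/back brightness contrast points to particles about a micrometre across.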

The scattering of light from dust grains in long exposure visible photographs is quite noticeable in reflection nebulae, and gives clues about the individual particle's light-scattering properties. In X-ray wavelengths, many scientists are investigating the scattering of X-rays by interstellar dust, and some have suggested that astronomical X-ray sources would possess diffuse haloes, due to the dust.

Stardust

Stardust grains (also called presolar grains by meteoriticists) are contained within meteorites, from which they are extracted in terrestrial laboratories. Stardust was a component of the dust in the interstellar medium before its incorporation into meteorites. The meteorites have stored those stardust grains ever since the meteorites first assembled within the planetary accretion disk more than four billion years ago. So-called carbonaceous chondrites are especially fertile reservoirs of stardust. Each stardust grain existed before the Earth was formed. Stardust is a scientific term referring to refractory dust grains that condensed from cooling ejected gases from individual presolar stars and were incorporated into the cloud from which the Solar System condensed.

Many different types of stardust have been identified by laboratory measurements of the highly unusual isotopic composition of the chemical elements that comprise each stardust grain. These refractory mineral grains may earlier have been coated with volatile compounds, but those are lost in the dissolving of meteorite matter in acids, leaving only insoluble refractory minerals. Finding the grain cores without dissolving most of the meteorite has been possible, but difficult and labor-intensive (see presolar grains).

Many new aspects of nucleosynthesis have been discovered from the isotopic ratios within the stardust grains. An important property of stardust is the hard, refractory, high-temperature nature of the grains. Prominent are silicon carbide, graphite, aluminium oxide, aluminium spinel, and other such solids that would condense at high temperature from a cooling gas, such as in stellar winds or in the decompression of the inside of a supernova. They differ greatly from the solids formed at low temperature within the interstellar medium.

Also important are their extreme isotopic compositions, which are expected to exist nowhere in the interstellar medium. This also suggests that the stardust condensed from the gases of individual stars before the isotopes could be diluted by mixing with the interstellar medium. These compositions allow the source stars to be identified. For example, the heavy elements within the silicon carbide (SiC) grains are almost pure s-process isotopes, fitting their condensation within AGB red giant winds, inasmuch as the AGB stars are the main source of s-process nucleosynthesis and have atmospheres observed by astronomers to be highly enriched in dredged-up s-process elements.

Another dramatic example is given by the so-called supernova condensates, usually shortened to the acronym SUNOCON (from SUperNOva CONdensate) to distinguish them from other stardust condensed within stellar atmospheres. SUNOCONs contain in their calcium an excessively large abundance of 44Ca, demonstrating that they condensed containing abundant radioactive 44Ti, which has a 65-year half-life. The outflowing 44Ti nuclei were thus still "alive" (radioactive) when the SUNOCON condensed, roughly a year after the explosion, within the expanding supernova interior, but the 44Ti would have become an extinct radionuclide (having decayed to 44Ca) after the time required for mixing with the interstellar gas. Its discovery proved the prediction from 1975 that it might be possible to identify SUNOCONs in this way. The SiC SUNOCONs (from supernovae) are only about 1% as numerous as SiC stardust from AGB stars.
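
A short decay calculation makes the 44Ti argument quantitative. Using the 65-year half-life quoted in the text, nearly all of the 44Ti is still radioactive about a year after the explosion, when the SUNOCON condenses, but essentially none survives the far longer interval before mixing with interstellar gas; the 10,000-year figure below is an assumed illustrative lower bound on that mixing time, not a value from the article.

HALF_LIFE_YEARS = 65.0   # 44Ti half-life as quoted in the text above

def surviving_fraction(elapsed_years: float) -> float:
    """Fraction of the original 44Ti still undecayed after the given time."""
    return 0.5 ** (elapsed_years / HALF_LIFE_YEARS)

print(f"after 1 year:       {surviving_fraction(1.0):.3f}")   # ~0.989 -> still 'alive' when the grain condenses
print(f"after 10,000 years: {surviving_fraction(1e4):.1e}")   # ~5e-47 -> effectively extinct, now present as 44Ca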

Stardust itself (SUNOCONs and AGB grains that come from specific stars) is but a modest fraction of the condensed cosmic dust, forming less than 0.1% of the mass of total interstellar solids. The high interest in stardust derives from new information that it has brought to the sciences of stellar evolution and nucleosynthesis.

Laboratories have studied solids that existed before the Earth was formed. This was once thought impossible, especially in the 1970s when cosmochemists were confident that the Solar System began as a hot gas virtually devoid of any remaining solids, which would have been vaporized by high temperature. The existence of stardust proved this historic picture incorrect.

Some bulk properties

Smooth chondrite interplanetary dust particle.

Cosmic dust is made of individual dust grains and of aggregates of grains that form larger dust particles. These particles are irregularly shaped, with porosity ranging from fluffy to compact. The composition, size, and other properties depend on where the dust is found, and conversely, a compositional analysis of a dust particle can reveal much about the dust particle's origin. General diffuse interstellar medium dust, dust grains in dense clouds, planetary ring dust, and circumstellar dust are each different in their characteristics. For example, grains in dense clouds have acquired a mantle of ice and on average are larger than dust particles in the diffuse interstellar medium. Interplanetary dust particles (IDPs) are generally larger still.

Major elements of 200 stratospheric interplanetary dust particles.

Most of the influx of extraterrestrial matter that falls onto the Earth is dominated by meteoroids with diameters in the range 50 to 500 micrometers, of average density 2.0 g/cm3 (with porosity about 40%). The densities of most IDPs captured in the Earth's stratosphere range between 1 and 3 g/cm3, with an average of about 2.0 g/cm3.
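
As a quick consistency check (a sketch using only the figures above), the mass of a single meteoroid in the quoted 50 to 500 micrometre size range follows directly from the quoted average density of 2.0 g/cm3, and falls comfortably within the 10^−16 to 10^−4 kg mass range given earlier for dust reaching Earth.

import math

AVERAGE_DENSITY = 2000.0   # kg/m^3, i.e. the quoted 2.0 g/cm^3 average

def spherical_grain_mass_kg(diameter_um: float) -> float:
    """Mass of a spherical grain of the given diameter at the average density."""
    radius_m = diameter_um * 1e-6 / 2.0
    return AVERAGE_DENSITY * (4.0 / 3.0) * math.pi * radius_m ** 3

for diameter_um in (50, 100, 500):
    print(f"{diameter_um:>3} um meteoroid  ->  {spherical_grain_mass_kg(diameter_um):.1e} kg")
# Roughly 1e-10 kg to 1e-7 kg across the quoted size range.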

Other specific dust properties: in circumstellar dust, astronomers have found molecular signatures of CO, silicon carbide, amorphous silicate, polycyclic aromatic hydrocarbons, water ice, and polyformaldehyde, among others (in the diffuse interstellar medium, there is evidence for silicate and carbon grains). Cometary dust is generally different (with overlap) from asteroidal dust. Asteroidal dust resembles carbonaceous chondritic meteorites. Cometary dust resembles interstellar grains which can include silicates, polycyclic aromatic hydrocarbons, and water ice.

In September 2020, evidence was presented of solid-state water in the interstellar medium, and particularly, of water ice mixed with silicate grains in cosmic dust grains.

Dust grain formation

The large grains in interstellar space are probably complex, with refractory cores that condensed within stellar outflows topped by layers acquired during incursions into cold dense interstellar clouds. That cyclic process of growth and destruction outside of the clouds has been modeled to demonstrate that the cores live much longer than the average lifetime of dust mass. Those cores mostly start with silicate particles condensing in the atmospheres of cool, oxygen-rich red giants and carbon grains condensing in the atmospheres of cool carbon stars. Red giants are stars that have evolved off the main sequence and entered the giant phase of their evolution; they are the major source of refractory dust grain cores in galaxies. Those refractory cores are also called stardust (section above), which is a scientific term for the small fraction of cosmic dust that condensed thermally within stellar gases as they were ejected from the stars. Several percent of refractory grain cores have condensed within expanding interiors of supernovae, a type of cosmic decompression chamber. Meteoriticists who study refractory stardust (extracted from meteorites) often call it presolar grains, but the stardust within meteorites is only a small fraction of all presolar dust. Stardust condenses within the stars via considerably different condensation chemistry than that of the bulk of cosmic dust, which accretes cold onto preexisting dust in dark molecular clouds of the galaxy. Those molecular clouds are very cold, typically less than 50 K, so that ices of many kinds may accrete onto grains, in some cases only to be destroyed or split apart by radiation and sublimation back into a gas component. Finally, as the Solar System formed, many interstellar dust grains were further modified by coalescence and chemical reactions in the planetary accretion disk. The history of the various types of grains in the early Solar System is complicated and only partially understood.

Astronomers know that the dust is formed in the envelopes of late-evolved stars from specific observational signatures. In infrared light, emission at 9.7 micrometres is a signature of silicate dust in cool evolved oxygen-rich giant stars. Emission at 11.5 micrometres indicates the presence of silicon carbide dust in cool evolved carbon-rich giant stars. These help provide evidence that the small silicate particles in space came from the ejected outer envelopes of these stars.

Conditions in interstellar space are generally not suitable for the formation of silicate cores; this would take excessive time to accomplish, even if it might be possible. The argument is that, given an observed typical grain diameter and the temperature and density of interstellar gas, the time for a grain to grow to the observed size would be considerably longer than the age of the Universe. On the other hand, grains are seen to have recently formed in the vicinity of nearby stars, in nova and supernova ejecta, and in R Coronae Borealis variable stars, which seem to eject discrete clouds containing both gas and dust. So mass loss from stars is unquestionably where the refractory cores of grains formed.
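
The timescale argument can be made concrete with an order-of-magnitude estimate. The sketch below assumes typical diffuse-interstellar-medium values (gas density, temperature, fraction of condensable atoms), none of which come from the article, and asks how long a grain would need to grow to 0.1 micrometre by sticking collisions alone.

import math

BOLTZMANN = 1.38e-23          # J/K
GAS_TEMPERATURE = 100.0       # K                (assumed diffuse-ISM value)
HYDROGEN_DENSITY = 1e6        # atoms per m^3    (~1 per cm^3, assumed)
CONDENSABLE_FRACTION = 1e-4   # condensable ("metal") atoms per hydrogen atom (assumed)
ATOM_MASS = 3.3e-26           # kg, a ~20 amu condensable atom (assumed)
STICKING_PROBABILITY = 1.0    # optimistic upper limit: every hit sticks
GRAIN_DENSITY = 3000.0        # kg/m^3, silicate-like solid (assumed)
TARGET_RADIUS = 1e-7          # m, i.e. 0.1 micrometre

thermal_speed = math.sqrt(3.0 * BOLTZMANN * GAS_TEMPERATURE / ATOM_MASS)

# Radius growth rate for a grain sweeping up condensable atoms from the gas
growth_rate = (STICKING_PROBABILITY * CONDENSABLE_FRACTION * HYDROGEN_DENSITY
               * ATOM_MASS * thermal_speed) / (4.0 * GRAIN_DENSITY)

growth_time_years = TARGET_RADIUS / growth_rate / 3.15e7
print(f"time to grow to 0.1 um: ~{growth_time_years:.1e} years")
# Roughly 3e10 years with these assumptions -- longer than the ~1.4e10-year age
# of the Universe, which is the point of the argument above.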

Most dust in the Solar System is highly processed dust, recycled from the material out of which the Solar System formed and subsequently collected in the planetesimals, and leftover solid material such as comets and asteroids, and reformed in each of those bodies' collisional lifetimes. During the Solar System's formation history, the most abundant element was (and still is) hydrogen, present mostly as H2. The metallic elements (magnesium, silicon, and iron), which are the principal ingredients of rocky planets, condensed into solids at the highest temperatures of the planetary disk. Some molecules, such as CO, N2, NH3, and free oxygen, existed in a gas phase. Some species, for example graphite (C) and SiC, would condense into solid grains in the planetary disk; but carbon and SiC grains found in meteorites are presolar, based on their isotopic compositions, rather than products of the planetary disk's formation. Some molecules also formed complex organic compounds and some molecules formed frozen ice mantles, either of which could coat the "refractory" (Mg, Si, Fe) grain cores. Stardust once more provides an exception to the general trend, as it appears to be totally unprocessed since its thermal condensation within stars as refractory crystalline minerals. The condensation of graphite occurs within supernova interiors as they expand and cool, and does so even in gas containing more oxygen than carbon, a surprising carbon chemistry made possible by the intense radioactive environment of supernovae. This special example of dust formation has merited specific review.

Planetary disk formation of precursor molecules was determined, in large part, by the temperature of the solar nebula. Since the temperature of the solar nebula decreased with heliocentric distance, scientists can infer a dust grain's origin(s) with knowledge of the grain's materials. Some materials could only have been formed at high temperatures, while other grain materials could only have been formed at much lower temperatures. The materials in a single interplanetary dust particle often show that the grain elements formed in different locations and at different times in the solar nebula. Most of the matter present in the original solar nebula has since disappeared; drawn into the Sun, expelled into interstellar space, or reprocessed, for example, as part of the planets, asteroids or comets.

Due to their highly processed nature, IDPs (interplanetary dust particles) are fine-grained mixtures of thousands to millions of mineral grains and amorphous components. We can picture an IDP as a "matrix" of material with embedded elements which were formed at different times and places in the solar nebula and before the solar nebula's formation. Examples of embedded elements in cosmic dust are GEMS, chondrules, and CAIs.

From the solar nebula to Earth

A dusty trail from the early Solar System to carbonaceous dust today.

The arrows in the adjacent diagram show one possible path from a collected interplanetary dust particle back to the early stages of the solar nebula.

We can follow the trail to the right in the diagram to the IDPs that contain the most volatile and primitive elements. The trail takes us first from interplanetary dust particles to chondritic interplanetary dust particles. Planetary scientists classify chondritic IDPs in terms of their diminishing degree of oxidation so that they fall into three major groups: the carbonaceous, the ordinary, and the enstatite chondrites. As the name implies, the carbonaceous chondrites are rich in carbon, and many have anomalies in the isotopic abundances of H, C, N, and O. From the carbonaceous chondrites, we follow the trail to the most primitive materials. They are almost completely oxidized and contain the lowest condensation temperature elements ("volatile" elements) and the largest amount of organic compounds. Therefore, dust particles with these elements are thought to have been formed in the early life of the Solar System. The volatile elements have never seen temperatures above about 500 K, therefore, the IDP grain "matrix" consists of some very primitive Solar System material. Such a scenario is true in the case of comet dust. The provenance of the small fraction that is stardust (see above) is quite different; these refractory interstellar minerals thermally condense within stars, become a small component of interstellar matter, and therefore remain in the presolar planetary disk. Nuclear damage tracks are caused by the ion flux from solar flares. Solar wind ions impacting on the particle's surface produce amorphous radiation damaged rims on the particle's surface. And spallogenic nuclei are produced by galactic and solar cosmic rays. A dust particle that originates in the Kuiper Belt at 40 AU would have many more times the density of tracks, thicker amorphous rims and higher integrated doses than a dust particle originating in the main-asteroid belt.

Based on 2012 computer model studies, the complex organic molecules necessary for life (extraterrestrial organic molecules) may have formed in the protoplanetary disk of dust grains surrounding the Sun before the formation of the Earth. According to the computer studies, this same process may also occur around other stars that acquire planets.

In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics – "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."

In February 2014, NASA announced a greatly upgraded database for detecting and monitoring polycyclic aromatic hydrocarbons (PAHs) in the universe. According to NASA scientists, over 20% of the carbon in the Universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are abundant in the Universe, and are associated with new stars and exoplanets.

In March 2015, NASA scientists reported that, for the first time, complex DNA and RNA organic compounds of life, including uracil, cytosine and thymine, have been formed in the laboratory under outer space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the Universe, may have been formed in red giants or in interstellar dust and gas clouds, according to the scientists.

Some "dusty" clouds in the universe

The Solar System has its own interplanetary dust cloud, as do extrasolar systems. There are different types of nebulae with different physical causes and processes: diffuse nebulae, infrared (IR) reflection nebulae, supernova remnants, molecular clouds, H II regions, photodissociation regions, and dark nebulae.

The distinctions between these types of nebulae are that different radiation processes are at work. For example, H II regions, like the Orion Nebula, where a lot of star formation is taking place, are characterized as thermal emission nebulae. Supernova remnants, on the other hand, like the Crab Nebula, are characterized by nonthermal emission (synchrotron radiation).

Some of the better known dusty regions in the Universe are the diffuse nebulae in the Messier catalog, for example: M1, M8, M16, M17, M20, M42, M43.

Some larger dust catalogs are Sharpless (1959) A Catalogue of HII Regions, Lynds (1965) Catalogue of Bright Nebulae, Lynds (1962) Catalogue of Dark Nebulae, van den Bergh (1966) Catalogue of Reflection Nebulae, Green (1988) Rev. Reference Cat. of Galactic SNRs, The National Space Sciences Data Center (NSSDC), and CDS Online Catalogs.

Dust sample return

The Discovery program's Stardust mission was launched on 7 February 1999 to collect samples from the coma of comet Wild 2, as well as samples of cosmic dust. It returned samples to Earth on 15 January 2006. In 2007, the recovery of particles of interstellar dust from the samples was announced.

Inhalant

From Wikipedia, the free encyclopedia https://en.wikipedia.org/w...