
Monday, February 24, 2025

Anti-art

 


From Wikipedia, the free encyclopedia
Artist's Shit (Italian: Merda d'artista) is a 1961 artwork by the Italian artist Piero Manzoni, which consists of 90 tin cans, each reportedly filled with 30 grams (1.1 oz) of faeces. One of his friends, Enrico Baj, said that the cans were meant as "an act of defiant mockery of the art world, artists, and art criticism".

Anti-art is a loosely used term applied to an array of concepts and attitudes that reject prior definitions of art and question art in general. Somewhat paradoxically, anti-art tends to conduct this questioning and rejection from the vantage point of art. The term is associated with the Dada movement and is generally attributed to Marcel Duchamp in the period before World War I, around 1914, when he began to use found objects as art. It was used to describe revolutionary forms of art. The term was used later by the Conceptual artists of the 1960s to describe the work of those who claimed to have retired altogether from the practice of art, from the production of works which could be sold.

An expression of anti-art may or may not take traditional form or meet the criteria for being defined as a work of art according to conventional standards. Works of anti-art may express an outright rejection of conventionally defined criteria as a means of defining what art is and what it is not. Anti-artworks may reject conventional artistic standards altogether, or focus criticism only on certain aspects of art, such as the art market and high art. Some anti-artworks may reject individualism in art, whereas some may reject "universality" as an accepted factor in art. Additionally, some forms of anti-art reject art entirely, or reject the idea that art is a separate realm or specialization. Anti-artworks may also reject art based upon a consideration of art as being oppressive of a segment of the population.

Anti-art artworks may articulate a disagreement with the generally supposed notion that there is a separation between art and life. Anti-art artworks may question whether "art" really exists at all. "Anti-art" has been referred to as a "paradoxical neologism", in that its obvious opposition to art has been observed to concur with staples of twentieth-century art or "modern art", in particular art movements that have self-consciously sought to transgress traditions or institutions. Anti-art itself is not a distinct art movement, however; this is suggested by the span of time it covers, which is longer than that usually spanned by art movements. Some art movements, though, are labeled "anti-art". The Dada movement is generally considered the first anti-art movement; the term anti-art itself is said to have been coined by Dadaist Marcel Duchamp around 1914, and his readymades have been cited as early examples of anti-art objects. Theodor W. Adorno in Aesthetic Theory (1970) stated that "...even the abolition of art is respectful of art because it takes the truth claim of art seriously".

Anti-art has become generally accepted by the artworld to be art, although some people still reject Duchamp's readymades as art, for instance the Stuckist group of artists, who are "anti-anti-art".

Forms

Marcel Duchamp, Fountain, 1917. Photograph by Alfred Stieglitz

Anti-art can take the form of art or not. It is posited that anti-art need not even take the form of art, in order to embody its function as anti-art. This point is disputed. Some of the forms of anti-art which are art strive to reveal the conventional limits of art by expanding its properties.

Some instances of anti-art are suggestive of a reduction to what might seem to be fundamental elements or building blocks of art. Examples of this sort of phenomenon might include monochrome paintings, empty frames, silence as music, chance art. Anti-art is also often seen to make use of highly innovative materials and techniques, and well beyond—to include hitherto unheard of elements in visual art. These types of anti-art can be readymades, found object art, détournement, combine paintings, appropriation (art), happenings, performance art, and body art.

Anti-art can involve the renouncement of making art entirely. This can be accomplished through an art strike and this can also be accomplished through revolutionary activism. An aim of anti-art can be to undermine or understate individual creativity. This may be accomplished through the utilization of readymades. Individual creativity can be further downplayed by the use of industrial processes in the making of art. Anti-artists may seek to undermine individual creativity by producing their artworks anonymously. They may refuse to show their artworks. They may refuse public recognition. Anti-artists may choose to work collectively, in order to place less emphasis on individual identity and individual creativity. This can be seen in the instance of happenings. This is sometimes the case with "supertemporal" artworks, which are by design impermanent. Anti-artists will sometimes destroy their works of art. Some artworks made by anti-artists are purposely created to be destroyed. This can be seen in auto-destructive art.

André Malraux has developed a concept of anti-art quite different from that outlined above. For Malraux, anti-art began with the 'Salon' or 'Academic' art of the nineteenth century which rejected the basic ambition of art in favour of a semi-photographic illusionism (often prettified). Of Academic painting, Malraux writes, 'All true painters, all those for whom painting is a value, were nauseated by these pictures – "Portrait of a Great Surgeon Operating" and the like – because they saw in them not a form of painting, but the negation of painting'. For Malraux, anti-art is still very much with us, though in a different form. Its descendants are commercial cinema and television, and popular music and fiction. The 'Salon', Malraux writes, 'has been expelled from painting, but elsewhere it reigns supreme'.

Theory

Anti-art is also a tendency in the theoretical understanding of art and fine art.

In Art, an Enemy of the People, the philosopher Roger Taylor puts forward the view that art is a bourgeois ideology that has its origins with capitalism. Holding a strong anti-essentialist position, he also states that art has not always existed and is not universal but peculiar to Europe.

The Invention of Art: A Cultural History by Larry Shiner is an art history book which fundamentally questions our understanding of art. "The modern system of art is not an essence or a fate but something we have made. Art as we have generally understood it is a European invention barely two hundred years old." (Shiner 2003, p. 3) Shiner presents (fine) art as a social construction that has not always existed throughout human history and could also disappear in its turn.

History

Pre World War I

Jean-Jacques Rousseau rejected the separation between performer and spectator, life and theatre. Karl Marx posited that art was a consequence of the class system and therefore concluded that, in a communist society, there would only be people who engage in the making of art and no "artists".

Illustration from Le rire (1887) by Arthur Sapeck (Eugène Bataille), first shown in 1883 at an "Incohérents" exhibition.

Arguably the first movement that deliberately set itself in opposition to established art was the Incoherents in late 19th-century Paris. Founded by Jules Lévy in 1882, the Incoherents organized charitable art exhibitions intended to be satirical and humoristic; they presented "...drawings by people who can't draw..." and held masked balls with artistic themes, all in the greater tradition of Montmartre cabaret culture. While short-lived – the last Incoherent show took place in 1896 – the movement was popular for its entertainment value. In their commitment to satire, irreverence and ridicule they produced a number of works that show remarkable formal similarities to creations of the avant-garde of the 20th century: ready-mades, monochromes, empty frames and silence as music.

Dada and constructivism

Beginning in Switzerland, during World War I, much of Dada, and some aspects of the art movements it inspired, such as Neo-Dada, Nouveau réalisme, and Fluxus, is considered anti-art. Dadaists rejected cultural and intellectual conformity in art and more broadly in society. For everything that art stood for, Dada was to represent the opposite.

Where art was concerned with traditional aesthetics, Dada ignored aesthetics completely. If art was to appeal to sensibilities, Dada was intended to offend. Through their rejection of traditional culture and aesthetics the Dadaists hoped to destroy traditional culture and aesthetics. Because they were more politicized, the Berlin dadas were the most radically anti-art within Dada. In 1919, in the Berlin group, the Dadaist revolutionary central council outlined the Dadaist ideals of radical communism.

Beginning in 1913 Marcel Duchamp's readymades challenged individual creativity and redefined art as a nominal rather than an intrinsic object.

Tristan Tzara indicated: "I am against systems; the most acceptable system is on principle to have none." In addition, Tzara, who once stated that "logic is always false", probably approved of Walter Serner's vision of a "final dissolution". A core concept in Tzara's thought was that "as long as we do things the way we think we once did them we will be unable to achieve any kind of livable society."

Originating in Russia in 1919, constructivism rejected art in its entirety, that is, art as a specific activity creating a universal aesthetic, in favour of practices directed towards social purposes, "useful" to everyday life, such as graphic design, advertising and photography. In 1921, exhibiting at the 5x5=25 exhibition, Alexander Rodchenko created monochromes and proclaimed the end of painting. For artists of the Russian Revolution, Rodchenko's radical action was full of utopian possibility. It marked the end of art along with the end of bourgeois norms and practices. It cleared the way for the beginning of a new Russian life, a new mode of production, a new culture.

Surrealism

Beginning in the early 1920s, many Surrealist artists and writers regarded their work as an expression of the philosophical movement first and foremost, with the works themselves being artifacts of it. Surrealism as a political force developed unevenly around the world: in some places more emphasis was put on artistic practices, in others political practice outweighed the arts, and in others still Surrealist praxis looked to overshadow both. Politically, Surrealism was ultra-leftist, communist, or anarchist. The split from Dada has been characterised as a split between anarchists and communists, with the Surrealists as communists. In 1925, the Bureau of Surrealist Research declared their affinity for revolutionary politics. By the 1930s many Surrealists had strongly identified themselves with communism. Breton and his comrades supported Leon Trotsky and his International Left Opposition for a while, though there was an openness to anarchism that manifested more fully after World War II.

Leader André Breton was explicit in his assertion that Surrealism was above all a revolutionary movement. Breton believed the tenets of Surrealism could be applied in any circumstance of life and were not merely restricted to the artistic realm. Breton's followers, along with the Communist Party, were working for the "liberation of man." However, Breton's group refused to prioritize the proletarian struggle over radical creation, and their struggles with the Party made the late 1920s a turbulent time for both. Many individuals closely associated with Breton, notably Louis Aragon, left his group to work more closely with the Communists. In 1929, Breton asked Surrealists to assess their "degree of moral competence", and theoretical refinements included in the Second manifeste du surréalisme excluded anyone reluctant to commit to collective action.

By the end of World War II the surrealist group led by André Breton decided to explicitly embrace anarchism. In 1952 Breton wrote "It was in the black mirror of anarchism that surrealism first recognised itself."

Letterism and the Situationist International

In 1956, recalling the infinitesimals of Gottfried Wilhelm Leibniz, quantities which could not actually exist except conceptually, the founder of Lettrism, Isidore Isou, developed the notion of a work of art which, by its very nature, could never be created in reality, but which could nevertheless provide aesthetic rewards by being contemplated intellectually. Related to this, and arising out of it, is excoördism, the current incarnation of the Isouian movement, defined as the art of the infinitely large and the infinitely small.

In 1960, Isidore Isou created supertemporal art: a device for inviting and enabling an audience to participate in the creation of a work of art. In its simplest form, this might involve nothing more than the inclusion of several blank pages in a book, for the reader to add his or her own contributions.

In Japan in the late 1950s, Group Kyushu was an edgy, experimental and rambunctious art group. They ripped and burned canvasses, stapled corrugated cardboard, nails, nuts, springs, metal drill shavings, and burlap to their works, assembled many unwieldy junk assemblages, and were best known for covering much of their work in tar. They also occasionally covered their work in urine and excrement. They tried to bring art closer to everyday life, by incorporating objects from daily life into their work, and also by exhibiting and performing their work outside on the street for everyone to see.

Other similar anti-art groups included Neo-Dada (Neo-Dadaizumu Oganaizazu), Gutai (Gutai Bijutsu Kyokai), and Hi-Red-Center. Influenced in various ways by L'Art Informel, these groups and their members worked to foreground material in their work: rather than seeing the art work as representing some remote referent, the material itself and the artists' interaction with it became the main point. The freeing up of gesture was another legacy of L'Art Informel, and the members of Group Kyushu took to it with great verve, throwing, dripping, and breaking material, sometimes destroying the work in the process.

Beginning in the 1950s in France, the Letterist International, and afterwards the Situationist International, developed a dialectical viewpoint, seeing their task as superseding art, abolishing the notion of art as a separate, specialized activity and transforming it so that it became part of the fabric of everyday life. From the Situationists' viewpoint, art is revolutionary or it is nothing. In this way, the Situationists saw their efforts as completing the work of both Dada and surrealism while abolishing both. The Situationists renounced the making of art entirely.

The members of the Situationist International liked to think they were probably the most radical, politicized, well organized, and theoretically productive anti-art movement, reaching their apex with the student protests and general strike of May 1968 in France, a view endorsed by others including the academic Martin Puchner.

In 1959 Giuseppe Pinot-Gallizio proposed Industrial Painting as an "industrial-inflationist art".

Neo-Dada and later

Similar to Dada, in the 1960s, Fluxus included a strong current of anti-commercialism and an anti-art sensibility, disparaging the conventional market-driven art world in favor of an artist-centered creative practice. Fluxus artists used their minimal performances to blur the distinction between life and art.

In 1962 Henry Flynt began to campaign for an anti-art position. Flynt wanted avant-garde art to be superseded by the terms veramusement and brend, neologisms meaning approximately pure recreation.

In 1963 George Maciunas advocated revolution, "living art, anti-art" and "non art reality to be grasped by all peoples". Maciunas strived to uphold his stated aims of demonstrating the artist's 'non-professional status...his dispensability and inclusiveness' and that 'anything can be art and anyone can do it.'

In the 1960s, the Dada-influenced art group Black Mask declared that revolutionary art should be "an integral part of life, as in primitive society, and not an appendage to wealth". Black Mask disrupted cultural events in New York by giving made up flyers of art events to the homeless with the lure of free drinks. Later, the Motherfuckers were to grow out of a combination of Black Mask and another group called Angry Arts.

The BBC aired an interview with Duchamp conducted by Joan Bakewell in 1966 which expressed some of Duchamp's more explicit anti-art ideas. Duchamp compared art with religion, stating that he wished to do away with art the same way many have done away with religion. Duchamp goes on to explain to the interviewer that "the word art etymologically means to do", that art means activity of any kind, and that it is our society that creates "purely artificial" distinctions of being an artist.

During the 1970s, King Mob was responsible for various attacks on art galleries along with the art inside. According to the philosopher Roger Taylor the concept of art is not universal but is an invention of bourgeois ideology helping to promote this social order. He compares it to a cancer that colonises other forms of life so that it becomes difficult to distinguish one from the other.

Stewart Home called for an Art Strike between 1990 and 1993. Unlike earlier art-strike proposals such as that of Gustav Metzger in the 1970s, it was not intended as an opportunity for artists to seize control of the means of distributing their own work, but rather as an exercise in propaganda and psychic warfare aimed at smashing the entire art world rather than just the gallery system. As Black Mask had done in the 1960s, Stewart Home disrupted cultural events in London in the 1990s by giving made up flyers of literary events to the homeless with the lure of free drinks.

The K Foundation was an art foundation that published a series of Situationist-inspired press adverts and extravagant subversions in the art world. Most notoriously, when their plans to use banknotes as part of a work of art fell through, they burnt a million pounds in cash.

Punk has developed anti-art positions. Some "industrial music" bands describe their work as a form of "cultural terrorism" or as a form of "anti-art". The term is also used to describe other intentionally provocative art forms, such as nonsense verse.

As art

Paradoxically, most forms of anti-art have gradually been completely accepted by the art establishment as normal and conventional forms of art. Even the movements which rejected art with the most virulence are now collected by the most prestigious cultural institutions.

Duchamp's readymades are still regarded as anti-art by the Stuckists, who also say that anti-art has become conformist, and describe themselves as anti-anti-art.

Neural coding

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neural_coding

Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.

Overview

Neurons have an ability uncommon among the cells of the body to propagate signals rapidly over large distances by generating characteristic electrical pulses called action potentials: voltage spikes that can travel down axons. Sensory neurons change their activities by firing sequences of action potentials in various temporal patterns in the presence of external sensory stimuli, such as light, sound, taste, smell and touch. Information about the stimulus is encoded in this pattern of action potentials and transmitted into and around the brain. Beyond this, specialized neurons, such as those of the retina, can communicate more information through graded potentials. These differ from action potentials in that information about the strength of a stimulus directly correlates with the strength of the neurons' output. The signal decays much faster for graded potentials, necessitating short inter-neuron distances and high neuronal density. The advantage of graded potentials is higher information rates capable of encoding more states (i.e. higher fidelity) than spiking neurons.

Although action potentials can vary somewhat in duration, amplitude and shape, they are typically treated as identical stereotyped events in neural coding studies. If the brief duration of an action potential (about 1 ms) is ignored, an action potential sequence, or spike train, can be characterized simply by a series of all-or-none point events in time. The lengths of interspike intervals (ISIs) between two successive spikes in a spike train often vary, apparently randomly. The study of neural coding involves measuring and characterizing how stimulus attributes, such as light or sound intensity, or motor actions, such as the direction of an arm movement, are represented by neuron action potentials or spikes. In order to describe and analyze neuronal firing, statistical methods and methods of probability theory and stochastic point processes have been widely applied.

With the development of large-scale neural recording and decoding technologies, researchers have begun to crack the neural code and have already provided the first glimpse into the real-time neural code as memory is formed and recalled in the hippocampus, a brain region known to be central for memory formation. Neuroscientists have initiated several large-scale brain decoding projects.

Encoding and decoding

The link between stimulus and response can be studied from two opposite points of view. Neural encoding refers to the map from stimulus to response. The main focus is to understand how neurons respond to a wide variety of stimuli, and to construct models that attempt to predict responses to other stimuli. Neural decoding refers to the reverse map, from response to stimulus, and the challenge is to reconstruct a stimulus, or certain aspects of that stimulus, from the spike sequences it evokes.

Hypothesized coding schemes

A sequence, or 'train', of spikes may contain information based on different coding schemes. In some neurons the strength with which a postsynaptic partner responds may depend solely on the 'firing rate', the average number of spikes per unit time (a 'rate code'). At the other end, a complex 'temporal code' is based on the precise timing of single spikes. They may be locked to an external stimulus such as in the visual and auditory system or be generated intrinsically by the neural circuitry.

Whether neurons use rate coding or temporal coding is a topic of intense debate within the neuroscience community, even though there is no clear definition of what these terms mean.

Rate code

The rate coding model of neuronal firing communication states that as the intensity of a stimulus increases, the frequency or rate of action potentials, or "spike firing", increases. Rate coding is sometimes called frequency coding.

Rate coding is a traditional coding scheme, assuming that most, if not all, information about the stimulus is contained in the firing rate of the neuron. Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are typically treated statistically or probabilistically. They may be characterized by firing rates, rather than as specific spike sequences. In most sensory systems, the firing rate increases, generally non-linearly, with increasing stimulus intensity. Under a rate coding assumption, any information possibly encoded in the temporal structure of the spike train is ignored. Consequently, rate coding is inefficient but highly robust with respect to the ISI 'noise'.

In rate coding, precisely calculating the firing rate is very important. In fact, the term "firing rate" has a few different definitions, which refer to different averaging procedures, such as an average over time (rate as a single-neuron spike count) or an average over several repetitions of the experiment (rate of the PSTH).

In rate coding, learning is based on activity-dependent synaptic weight modifications.

Rate coding was originally shown by Edgar Adrian and Yngve Zotterman in 1926. In this simple experiment different weights were hung from a muscle. As the weight of the stimulus increased, the number of spikes recorded from sensory nerves innervating the muscle also increased. From these original experiments, Adrian and Zotterman concluded that action potentials were unitary events, and that the frequency of events, and not individual event magnitude, was the basis for most inter-neuronal communication.

In the following decades, measurement of firing rates became a standard tool for describing the properties of all types of sensory or cortical neurons, partly due to the relative ease of measuring rates experimentally. However, this approach neglects all the information possibly contained in the exact timing of the spikes. During recent years, more and more experimental evidence has suggested that a straightforward firing rate concept based on temporal averaging may be too simplistic to describe brain activity.

Spike-count rate (average over time)

The spike-count rate, also referred to as temporal average, is obtained by counting the number of spikes that appear during a trial and dividing by the duration of the trial. The length T of the time window is set by the experimenter and depends on the type of neuron recorded from and on the stimulus. In practice, to get sensible averages, several spikes should occur within the time window. Typical values are T = 100 ms or T = 500 ms, but the duration may also be longer or shorter (Chapter 1.5 in the textbook 'Spiking Neuron Models').
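
As a minimal sketch of this calculation, the Python snippet below counts hypothetical spike times within a trial window and divides by the window duration; the spike times and window length are invented purely for illustration.

import numpy as np

# Hypothetical spike times (in seconds) recorded during one trial.
spike_times = np.array([0.012, 0.043, 0.088, 0.121, 0.197, 0.256, 0.311, 0.402])
T = 0.5  # trial duration in seconds (a 500 ms window)

# Spike-count rate: number of spikes divided by the trial duration.
spike_count_rate = len(spike_times) / T
print(f"spike-count rate: {spike_count_rate:.1f} Hz")  # 16.0 Hz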

The spike-count rate can be determined from a single trial, but at the expense of losing all temporal resolution about variations in neural response during the course of the trial. Temporal averaging can work well in cases where the stimulus is constant or slowly varying and does not require a fast reaction of the organism — and this is the situation usually encountered in experimental protocols. Real-world input, however, is hardly stationary, but often changing on a fast time scale. For example, even when viewing a static image, humans perform saccades, rapid changes of the direction of gaze. The image projected onto the retinal photoreceptors changes therefore every few hundred milliseconds.

Despite its shortcomings, the concept of a spike-count rate code is widely used not only in experiments, but also in models of neural networks. It has led to the idea that a neuron transforms information about a single input variable (the stimulus strength) into a single continuous output variable (the firing rate).

There is a growing body of evidence that in Purkinje neurons, at least, information is not simply encoded in firing but also in the timing and duration of non-firing, quiescent periods. There is also evidence from retinal cells, that information is encoded not only in the firing rate but also in spike timing.[19] More generally, whenever a rapid response of an organism is required a firing rate defined as a spike-count over a few hundred milliseconds is simply too slow.

Time-dependent firing rate (averaging over several trials)

The time-dependent firing rate is defined as the average number of spikes (averaged over trials) appearing during a short interval between times t and t+Δt, divided by the duration of the interval. It works for stationary as well as for time-dependent stimuli. To experimentally measure the time-dependent firing rate, the experimenter records from a neuron while stimulating with some input sequence. The same stimulation sequence is repeated several times and the neuronal response is reported in a Peri-Stimulus-Time Histogram (PSTH). The time t is measured with respect to the start of the stimulation sequence. The interval Δt must be large enough (typically in the range of one or a few milliseconds) so that there is a sufficient number of spikes within the interval to obtain a reliable estimate of the average. The number of occurrences of spikes nK(t;t+Δt) summed over all repetitions of the experiment divided by the number K of repetitions is a measure of the typical activity of the neuron between time t and t+Δt. A further division by the interval length Δt yields the time-dependent firing rate r(t) of the neuron, which is equivalent to the spike density of the PSTH (Chapter 1.5 in 'Spiking Neuron Models').
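
The sketch below illustrates this averaging procedure in Python, assuming spike times are available from K repetitions of the same stimulation sequence; the spike trains are simulated from an invented rate profile, so the numbers are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
K = 50        # number of repetitions of the stimulation sequence
T = 1.0       # duration of each repetition, in seconds
dt = 0.005    # bin width Δt, in seconds (5 ms)

def simulate_trial():
    # Hypothetical trial: spikes drawn in 1 ms steps from a time-varying rate.
    t = np.arange(0, T, 0.001)
    rate = 20 + 30 * np.sin(2 * np.pi * t) ** 2   # spikes per second
    return t[rng.random(t.size) < rate * 0.001]

trials = [simulate_trial() for _ in range(K)]

# PSTH: spike count per bin summed over all repetitions, divided by K * Δt,
# which yields the time-dependent firing rate r(t).
edges = np.arange(0, T + dt, dt)
counts = sum(np.histogram(spikes, bins=edges)[0] for spikes in trials)
r_t = counts / (K * dt)
print(r_t[:5])   # estimated rate (in Hz) for the first few bins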

For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t+Δt over multiple trials. If Δt is small, there will never be more than one spike within the interval between t and t+Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval.

As an experimental procedure, the time-dependent firing rate measure is a useful method to evaluate neuronal activity, in particular in the case of time-dependent stimuli. The obvious problem with this approach is that it cannot be the coding scheme used by neurons in the brain. Neurons cannot wait for stimuli to be presented repeatedly in exactly the same manner before generating a response.

Nevertheless, the experimental time-dependent firing rate measure can make sense, if there are large populations of independent neurons that receive the same stimulus. Instead of recording from a population of N neurons in a single run, it is experimentally easier to record from a single neuron and average over N repeated runs. Thus, the time-dependent firing rate coding relies on the implicit assumption that there are always populations of neurons.

Temporal coding

When precise spike timing or high-frequency firing-rate fluctuations are found to carry information, the neural code is often identified as a temporal code. A number of studies have found that the temporal resolution of the neural code is on a millisecond time scale, indicating that precise spike timing is a significant element in neural coding. Such codes, that communicate via the time between spikes are also referred to as interpulse interval codes, and have been supported by recent studies.

Neurons exhibit high-frequency fluctuations of firing rates which could be noise or could carry information. Rate coding models suggest that these irregularities are noise, while temporal coding models suggest that they encode information. If the nervous system only used rate codes to convey information, a more consistent, regular firing rate would have been evolutionarily advantageous, and neurons would have utilized this code over other less robust options. Temporal coding supplies an alternate explanation for the "noise", suggesting that it actually encodes information and affects neural processing. To model this idea, binary symbols can be used to mark the spikes: 1 for a spike, 0 for no spike. Temporal coding allows the sequence 000111000111 to mean something different from 001100110011, even though the mean firing rate is the same for both sequences (six spikes each).
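
The toy example below makes this concrete: the two sequences from the text have identical spike counts, so a pure rate code cannot distinguish them, while a temporal code can.

# Two binary spike trains with the same spike count but different timing.
a = "000111000111"
b = "001100110011"

rate_a = a.count("1") / len(a)   # fraction of bins containing a spike
rate_b = b.count("1") / len(b)

print(rate_a == rate_b)   # True: the mean rates are identical
print(a == b)             # False: the temporal patterns differ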

Until recently, scientists had put the most emphasis on rate encoding as an explanation for post-synaptic potential patterns. However, functions of the brain are more temporally precise than the use of only rate encoding seems to allow. In other words, essential information could be lost due to the inability of the rate code to capture all the available information of the spike train. In addition, responses are different enough between similar (but not identical) stimuli to suggest that the distinct patterns of spikes contain a higher volume of information than is possible to include in a rate code.

Temporal codes (also called spike codes), employ those features of the spiking activity that cannot be described by the firing rate. For example, time-to-first-spike after the stimulus onset, phase-of-firing with respect to background oscillations, characteristics based on the second and higher statistical moments of the ISI probability distribution, spike randomness, or precisely timed groups of spikes (temporal patterns) are candidates for temporal codes. As there is no absolute time reference in the nervous system, the information is carried either in terms of the relative timing of spikes in a population of neurons (temporal patterns) or with respect to an ongoing brain oscillation (phase of firing). One way in which temporal codes are decoded, in presence of neural oscillations, is that spikes occurring at specific phases of an oscillatory cycle are more effective in depolarizing the post-synaptic neuron.

The temporal structure of a spike train or firing rate evoked by a stimulus is determined both by the dynamics of the stimulus and by the nature of the neural encoding process. Stimuli that change rapidly tend to generate precisely timed spikes (and rapidly changing firing rates in PSTHs) no matter what neural coding strategy is being used. Temporal coding in the narrow sense refers to temporal precision in the response that does not arise solely from the dynamics of the stimulus, but that nevertheless relates to properties of the stimulus. The interplay between stimulus and encoding dynamics makes the identification of a temporal code difficult.

In temporal coding, learning can be explained by activity-dependent synaptic delay modifications. The modifications can themselves depend not only on spike rates (rate coding) but also on spike timing patterns (temporal coding), i.e., can be a special case of spike-timing-dependent plasticity.

The issue of temporal coding is distinct and independent from the issue of independent-spike coding. If each spike is independent of all the other spikes in the train, the temporal character of the neural code is determined by the behavior of time-dependent firing rate r(t). If r(t) varies slowly with time, the code is typically called a rate code, and if it varies rapidly, the code is called temporal.

Temporal coding in sensory systems

For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Due to the density of information about the abbreviated stimulus contained in this single spike, it would seem that the timing of the spike itself would have to convey more information than simply the average frequency of action potentials over a given period of time. This model is especially important for sound localization, which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information based on a relatively short neural response. Additionally, if low firing rates on the order of ten spikes per second must be distinguished from arbitrarily close rate coding for different stimuli, then a neuron trying to discriminate these two stimuli may need to wait for a second or more to accumulate enough information. This is not consistent with numerous organisms which are able to discriminate between stimuli in the time frame of milliseconds, suggesting that a rate code is not the only model at work.

To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency time between stimulus onset and first action potential, also called latency to first spike or time-to-first-spike. This type of temporal coding has been shown also in the auditory and somato-sensory system. The main drawback of such a coding scheme is its sensitivity to intrinsic neuronal fluctuations. In the primary visual cortex of macaques, the timing of the first spike relative to the start of the stimulus was found to provide more information than the interval between spikes. However, the interspike interval could be used to encode additional information, which is especially important when the spike rate reaches its limit, as in high-contrast situations. For this reason, temporal coding may play a part in coding defined edges rather than gradual transitions.

The mammalian gustatory system is useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism. Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium). In this way, both rate coding and temporal coding may be used in the gustatory system – rate for basic tastant type, temporal for more specific differentiation.

Research on mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and this information is different from that which is determined by rate coding schemes. Groups of neurons may synchronize in response to a stimulus. In studies dealing with the front cortical portion of the brain in primates, precise patterns with short time scales only a few milliseconds in length were found across small populations of neurons which correlated with certain information processing behaviors. However, little information could be determined from the patterns; one possible theory is they represented the higher-order processing taking place in the brain.

As with the visual system, in mitral/tufted cells in the olfactory bulb of mice, first-spike latency relative to the start of a sniffing action seemed to encode much of the information about an odor. This strategy of using spike latency allows for rapid identification of and reaction to an odorant. In addition, some mitral/tufted cells have specific firing patterns for given odorants. This type of extra information could help in recognizing a certain odor, but is not completely necessary, as average spike count over the course of the animal's sniffing was also a good identifier. Along the same lines, experiments done with the olfactory system of rabbits showed distinct patterns which correlated with different subsets of odorants, and a similar result was obtained in experiments with the locust olfactory system.

Temporal coding applications

The specificity of temporal coding requires highly refined technology to measure informative, reliable, experimental data. Advances made in optogenetics allow neurologists to control spikes in individual neurons, offering electrical and spatial single-cell resolution. For example, blue light causes the light-gated ion channel channelrhodopsin to open, depolarizing the cell and producing a spike. When blue light is not sensed by the cell, the channel closes, and the neuron ceases to spike. The pattern of the spikes matches the pattern of the blue light stimuli. By inserting channelrhodopsin gene sequences into mouse DNA, researchers can control spikes and therefore certain behaviors of the mouse (e.g., making the mouse turn left). Researchers, through optogenetics, have the tools to effect different temporal codes in a neuron while maintaining the same mean firing rate, and thereby can test whether or not temporal coding occurs in specific neural circuits.

Optogenetic technology also has the potential to enable the correction of spike abnormalities at the root of several neurological and psychological disorders. If neurons do encode information in individual spike timing patterns, key signals could be missed by attempting to crack the code while looking only at mean firing rates. Understanding any temporally encoded aspects of the neural code and replicating these sequences in neurons could allow for greater control and treatment of neurological disorders such as depression, schizophrenia, and Parkinson's disease. Regulation of spike intervals in single cells more precisely controls brain activity than the addition of pharmacological agents intravenously.

Phase-of-firing code

Phase-of-firing code is a neural coding scheme that combines the spike count code with a time reference based on oscillations. This type of code takes into account a time label for each spike according to a time reference based on phase of local ongoing oscillations at low or high frequencies.

It has been shown that neurons in some cortical sensory areas encode rich naturalistic stimuli in terms of their spike times relative to the phase of ongoing network oscillatory fluctuations, rather than only in terms of their spike count. The local field potential signals reflect population (network) oscillations. The phase-of-firing code is often categorized as a temporal code although the time label used for spikes (i.e. the network oscillation phase) is a low-resolution (coarse-grained) reference for time. As a result, often only four discrete values for the phase are enough to represent all the information content in this kind of code with respect to the phase of oscillations in low frequencies. The phase-of-firing code is loosely based on the phase precession phenomenon observed in place cells of the hippocampus. Another feature of this code is that neurons adhere to a preferred order of spiking within a group of sensory neurons, resulting in a firing sequence.

Phase code has been shown in visual cortex to involve also high-frequency oscillations. Within a cycle of gamma oscillation, each neuron has its own preferred relative firing time. As a result, an entire population of neurons generates a firing sequence that has a duration of up to about 15 ms.
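
As a rough sketch of how a phase label might be attached to each spike, the snippet below extracts the instantaneous phase of a simulated oscillation with a Hilbert transform and coarse-grains it into four discrete values, as described above; the oscillation frequency, spike times and binning are invented for illustration and are not taken from the cited studies.

import numpy as np
from scipy.signal import hilbert

fs = 1000.0                              # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
lfp = np.sin(2 * np.pi * 8 * t)          # idealized 8 Hz network oscillation
phase = np.angle(hilbert(lfp))           # instantaneous phase in [-pi, pi]

spike_times = np.array([0.10, 0.23, 0.41, 0.77, 1.32])   # hypothetical spike times, s
spike_phases = phase[(spike_times * fs).astype(int)]

# Coarse-grain the phase into four discrete labels (the spike's "time label").
labels = np.digitize(spike_phases, bins=[-np.pi / 2, 0, np.pi / 2])
print(labels)   # a value in 0..3 for each spike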

Population coding

Population coding is a method to represent stimuli by using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. From the theoretical point of view, population coding is one of a few mathematically well-formulated problems in neuroscience. It grasps the essential features of neural coding and yet is simple enough for theoretic analysis. Experimental studies have revealed that this coding paradigm is widely used in the sensory and motor areas of the brain.

For example, in the visual area medial temporal (MT), neurons are tuned to the direction of object motion. In response to an object moving in a particular direction, many neurons in MT fire with a noise-corrupted and bell-shaped activity pattern across the population. The moving direction of the object is retrieved from the population activity, and is thereby immune to the fluctuations existing in any single neuron's signal. When monkeys are trained to move a joystick towards a lit target, a single neuron will fire for multiple target directions. However, it fires fastest for one direction and more slowly depending on how close the target was to the neuron's "preferred" direction. If each neuron represents movement in its preferred direction, and the vector sum of all neurons is calculated (each neuron has a firing rate and a preferred direction), the sum points in the direction of motion. In this manner, the population of neurons codes the signal for the motion. This particular population code is referred to as population vector coding.
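
A minimal sketch of this population vector computation, with invented cosine tuning curves and firing rates: each neuron contributes a unit vector along its preferred direction, weighted by its firing rate, and the vector sum points approximately in the true movement direction.

import numpy as np

rng = np.random.default_rng(1)
n = 100
preferred = rng.uniform(0, 2 * np.pi, n)   # preferred direction of each neuron (radians)
true_direction = np.deg2rad(60)

# Hypothetical cosine tuning: firing is highest when movement matches the preferred direction.
rates = 20 + 15 * np.cos(true_direction - preferred) + rng.normal(0, 2, n)

# Population vector: rate-weighted sum of unit vectors along the preferred directions.
pop_x = np.sum(rates * np.cos(preferred))
pop_y = np.sum(rates * np.sin(preferred))
estimate = np.arctan2(pop_y, pop_x)
print(np.rad2deg(estimate))   # close to 60 degrees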

Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for the neural representation of auditory acoustic stimuli. This exploits both the place, or tuning, within the auditory nerve, as well as the phase-locking within each auditory nerve fiber. The first ALSR representation was for steady-state vowels; ALSR representations of pitch and formant frequencies in complex, non-steady-state stimuli were later demonstrated for voiced pitch and for formant representations in consonant-vowel syllables. The advantage of such representations is that global features such as pitch or formant transition profiles can be represented as global features across the entire nerve simultaneously via both rate and place coding.

Population coding has a number of other advantages as well, including reduction of uncertainty due to neuronal variability and the ability to represent a number of different stimulus attributes simultaneously. Population coding is also much faster than rate coding and can reflect changes in the stimulus conditions nearly instantaneously. Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus.

Typically an encoding function has a peak value such that activity of the neuron is greatest if the perceptual value is close to the peak value, and becomes reduced accordingly for values less close to the peak value.  It follows that the actual perceived value can be reconstructed from the overall pattern of activity in the set of neurons. Vector coding is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of maximum likelihood based on a multivariate distribution of the neuronal responses. These models can assume independence, second order correlations,  or even more detailed dependencies such as higher order maximum entropy models, or copulas.

Correlation coding

The correlation coding model of neuronal firing claims that correlations between action potentials, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the total mutual information present in the two spike trains about a stimulus feature. However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign. Correlations can also carry information not present in the average firing rate of two pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons.

Independent-spike coding

The independent-spike coding model of neuronal firing claims that each individual action potential, or "spike", is independent of each other spike within the spike train.

Position coding

Plot of typical position coding

A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response. However, the noise inherent in neural responses means that a maximum likelihood estimation function is more accurate.

Neural responses are noisy and unreliable.

This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as in grid cells that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision.
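
The sketch below illustrates the decoding idea for such a position code, assuming Gaussian tuning curves and Poisson spike counts (all parameters are hypothetical): the maximum likelihood estimate is the stimulus value whose predicted population response best explains the observed spike counts.

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
n = 64
means = np.linspace(0, 10, n)     # tuning-curve peaks tiling the stimulus range
sigma, r_max = 1.0, 20.0          # tuning width and peak spike count per window

def tuning(s):
    # Expected spike count of every neuron for stimulus value s.
    return r_max * np.exp(-0.5 * ((s - means) / sigma) ** 2)

true_s = 4.3
counts = rng.poisson(tuning(true_s))   # one noisy population response

# Maximum likelihood decoding: evaluate the Poisson log-likelihood on a grid.
grid = np.linspace(0, 10, 1001)
log_lik = [poisson.logpmf(counts, tuning(s)).sum() for s in grid]
print(grid[np.argmax(log_lik)])   # estimate, close to 4.3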

Sparse coding

A sparse code is one in which each item is encoded by the strong activation of a relatively small set of neurons. For each item to be encoded, this is a different subset of all available neurons. In contrast to sensor-sparse coding, sensor-dense coding implies that all information from possible sensor locations is known.

As a consequence, sparseness may be focused on temporal sparseness ("a relatively small number of time periods are active") or on the sparseness in an activated population of neurons. In this latter case, this may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations since compared to traditional computers, information is massively distributed across neurons. Sparse coding of natural images produces wavelet-like oriented filters that resemble the receptive fields of simple cells in the visual cortex. The capacity of sparse codes may be increased by simultaneous use of temporal coding, as found in the locust olfactory system.

Given a potentially large set of input patterns, sparse coding algorithms (e.g. sparse autoencoder) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.

Linear generative model

Most models of sparse coding are based on the linear generative model. In this model, the symbols are combined in a linear fashion to approximate the input.

More formally, given a k-dimensional set of real-numbered input vectors ξ ∈ R^k, the goal of sparse coding is to determine n k-dimensional basis vectors b_1, ..., b_n ∈ R^k, corresponding to neuronal receptive fields, along with a sparse n-dimensional vector of weights or coefficients s for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector: ξ ≈ Σ_j s_j b_j.
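
A small sketch of this linear generative model, using a plain iterative soft-thresholding (ISTA) loop to find sparse coefficients for a random dictionary; the basis vectors and input here are random placeholders rather than a learned neural code.

import numpy as np

rng = np.random.default_rng(3)
k, n = 20, 50                       # input dimension k, number of basis vectors n (overcomplete)
B = rng.normal(size=(k, n))
B /= np.linalg.norm(B, axis=0)      # columns of B are the basis vectors b_j

x = rng.normal(size=k)              # an input vector ξ
lam = 0.1                           # sparsity penalty
L = np.linalg.norm(B, 2) ** 2       # Lipschitz constant of the quadratic term

# ISTA: minimize 0.5 * ||x - B s||^2 + lam * ||s||_1 over the coefficients s.
s = np.zeros(n)
for _ in range(500):
    grad = B.T @ (B @ s - x)
    s = s - grad / L
    s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)   # soft-thresholding step

print(np.count_nonzero(s), "active coefficients out of", n)
print(np.linalg.norm(x - B @ s))    # reconstruction error of the sparse code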

The codings generated by algorithms implementing a linear generative model can be classified into codings with soft sparseness and those with hard sparseness. These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth Gaussian-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, no or hardly any small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.

Another measure of coding is whether it is critically complete or overcomplete. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is overcomplete. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise. The human primary visual cortex is estimated to be overcomplete by a factor of 500, so that, for example, a 14 x 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons.

Other models are based on matching pursuit, a sparse approximation algorithm which finds the "best matching" projections of multidimensional data, and dictionary learning, a representation learning method which aims to find a sparse matrix representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.

Biological evidence

Sparse coding may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specific associative memories in which only a few neurons out of a population respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli.

Theoretical work on sparse distributed memory has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations. Experimentally, sparse representations of sensory information have been observed in many systems, including vision, audition, touch, and olfaction. However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain.

In the Drosophila olfactory system, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories. Sparseness is controlled by a negative feedback circuit between Kenyon cells and GABAergic anterior paired lateral (APL) neurons. Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories.

Molecular paleontology

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Molecular_paleontology

Molecular paleontology refers to the recovery and analysis of DNA, proteins, carbohydrates, or lipids, and their diagenetic products, from ancient human, animal, and plant remains. The field of molecular paleontology has yielded important insights into evolutionary events, species' diasporas, and the discovery and characterization of extinct species.

In shallow time, advancements in the field of molecular paleontology have allowed scientists to pursue evolutionary questions on a genetic level rather than relying on phenotypic variation alone. By applying molecular analytical techniques to DNA in recent animal remains, one can quantify the level of relatedness between any two organisms for which DNA has been recovered. Using various biotechnological techniques such as DNA isolation, amplification, and sequencing, scientists have been able to acquire and expand insights into the divergence and evolutionary history of countless recently extinct organisms. In February 2021, scientists reported, for the first time, the sequencing of DNA from animal remains, a mammoth in this instance, over a million years old, the oldest DNA sequenced to date.

In deep time, compositional heterogeneities in carbonaceous remains of a diversity of animals, ranging in age from the Neoproterozoic to the Recent, have been linked to biological signatures encoded in modern biomolecules via a cascade of oxidative fossilization reactions. The macromolecular composition of carbonaceous fossils, some Tonian in age, preserve biological signatures reflecting original biomineralization, tissue types, metabolism, and relationship affinities (phylogeny).

History

The study of molecular paleontology is said to have begun with the discovery by Abelson of 360-million-year-old amino acids preserved in fossil shells. However, Svante Pääbo is often considered to be the founder of the field of molecular paleontology.

The field of molecular paleontology has had several major advances since the 1950s and is a continuously growing field. Below is a timeline showing notable contributions that have been made.

Timeline

A timeline demonstrating important dates in molecular paleontology. All of these dates are listed and specifically sourced in the History section under Timeline.

mid-1950s: Abelson found preserved amino acids in fossil shells that were about 360 million years old. This produced the idea of comparing fossil amino acid sequences with those of living organisms so that molecular evolution could be studied.

1970s: Fossil peptides are studied by amino acid analysis. Researchers start to use whole peptides and immunological methods.

Late 1970s: Palaeobotanists (also spelled paleobotanists) studied molecules from well-preserved fossil plants.

1984: The first successful DNA sequencing of an extinct species, the quagga, a zebra-like species.

1991: An article is published on the successful extraction of proteins from the fossil bone of a dinosaur, specifically Seismosaurus.

2005: Scientists resurrect extinct 1918 influenza virus.

2006: Neanderthal nuclear DNA sequence segments begin to be analyzed and published.

2007: Scientists synthesize entire extinct human endogenous retrovirus (HERV-K) from scratch.

2010: A new group of early hominins, the Denisovans, is discovered from mitochondrial and nuclear genomes recovered from a bone found in a cave in Siberia. Analysis showed that the Denisovan specimen lived approximately 41,000 years ago, and shared a common ancestor with both modern humans and Neanderthals approximately 1 million years ago in Africa.

2013: The first entire Neanderthal genome is successfully sequenced. More information can be found at the Neanderthal genome project.

2013: Remnant mitochondrial DNA from a 400,000-year-old specimen is sequenced, and the specimen, assigned to Homo heidelbergensis, is proposed as a common ancestor of Neanderthals and Denisovans.

2013: Mary Schweitzer and colleagues propose the first chemical mechanism explaining the potential preservation of vertebrate cells and soft tissues into the fossil record. The mechanism proposes that free oxygen radicals, potentially produced by redox-active iron, induce biomolecule crosslinking. This crosslinking mechanism is somewhat analogous to the crosslinking that occurs during histological tissue fixation, such as with formaldehyde. The authors also suggest the source of iron to be the hemoglobin from the deceased organism.

2015: A 110,000-year-old fossil tooth containing DNA from Denisovans was reported.

2018: Molecular paleobiologists link polymers of N-, O-, S-heterocycle composition (AGEs/ALEs, as referred to in the cited publication, Wiemann et al. 2018) in carbonaceous fossil remains mechanistically to structural biomolecules in original tissues. Through oxidative crosslinking, a process similar to the Maillard reaction, nucleophilic amino acid residues condense with Reactive Carbonyl Species derived from lipids and sugars. The processes of biomolecule fossilization, identified via Raman spectroscopy of modern and fossil tissues, experimental modelling, and statistical data evaluation, include Advanced Glycosylation and Advanced Lipoxidation.

2019: An independent laboratory of Molecular Paleontologists confirms the transformation of biomolecules through Advanced Glycosylation and Lipoxidation during fossilization. The authors use Synchrotron Fourier-Transform Infrared spectroscopy.

2020: Wiemann and colleagues identify biological signatures reflecting original biomineralization, tissue types, metabolism, and relationship affinity (phylogeny) in preserved compositional heterogeneities of a diversity of carbonaceous animal fossils. This is the first large-scale analysis of fossils ranging in age from the Neoproterozoic to the Recent, and the first published record of biological signals found in complex organic matter. The authors rely on statistical analyses of a uniquely large Raman spectroscopy data set.

2021: Geochemists find tissue type signals in the composition of carbonaceous fossils dating back to the Tonian, and apply these signals to identify epibionts. The authors use Raman spectroscopy.

2022: Raman spectroscopy data revealing patterns in the fossilization of structural biomolecules have been replicated with Fourier-Transform Infrared spectroscopy and a diversity of different Raman instruments, filters, and excitation sources.

2023: The first in-depth chemical description of how original, biological cells and tissues fossilize is published. Importantly, the study shows that the free oxygen radical hypothesis (proposed by Mary Schweitzer and colleagues in 2013) is in many cases identical to the AGE/ALE formation hypothesis (proposed by Jasmina Wiemann and colleagues in 2018). The combined hypotheses, along with thermal maturation and carbonization, form a loose framework for biological cell and tissue fossilization.

The quagga

The first successful DNA sequencing of an extinct species was in 1984, from a 150-year-old museum specimen of the quagga, a zebra-like species. Mitochondrial DNA (also known as mtDNA) was sequenced from desiccated muscle of the quagga, and was found to differ by 12 base substitutions from the mitochondrial DNA of a mountain zebra. It was concluded that these two species had a common ancestor 3-4 million years ago, which is consistent with known fossil evidence of the species.

Denisovans

The Denisovans of Eurasia, a hominin group related to Neanderthals and modern humans, were discovered as a direct result of DNA sequencing of a 41,000-year-old specimen recovered in 2008. Analysis of the mitochondrial DNA from a retrieved finger bone showed the specimen to be genetically distinct from both humans and Neanderthals. Two teeth and a toe bone were later found to belong to different individuals from the same population. Analysis suggests that both the Neanderthals and Denisovans were already present throughout Eurasia when modern humans arrived. In November 2015, scientists reported finding a fossil tooth containing DNA from Denisovans, and estimated its age at 110,000 years.

Mitochondrial DNA analysis

Neanderthal DNA extraction. Working in a clean room, researchers at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, took extensive precautions to avoid contaminating Neanderthal DNA samples extracted from bones with DNA from any other source, including modern humans. NHGRI researchers are part of the international team that sequenced the genome of the Neanderthal, Homo neanderthalensis.

The mtDNA from the Denisovan finger bone differs from that of modern humans by 385 bases (nucleotides) in the mtDNA strand out of approximately 16,500, whereas the difference between modern humans and Neanderthals is around 202 bases. In contrast, the difference between chimpanzees and modern humans is approximately 1,462 mtDNA base pairs. This suggested a divergence time around one million years ago. The mtDNA from a tooth bore a high similarity to that of the finger bone, indicating they belonged to the same population. From a second tooth, an mtDNA sequence was recovered that showed an unexpectedly large number of genetic differences compared to that found in the other tooth and the finger, suggesting a high degree of mtDNA diversity. These two individuals from the same cave showed more diversity than seen among sampled Neanderthals from all of Eurasia, and were as different as modern-day humans from different continents.
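
As a rough illustration of how such counts feed into divergence estimates, the short sketch below simply re-uses the figures quoted above, expressing each comparison as differences per site and as a fraction of the human–chimpanzee mtDNA difference. It is illustrative only: converting these fractions into absolute dates requires a calibrated molecular clock and an explicit substitution model, which is how the original studies arrived at their published estimates.

```python
# Illustrative arithmetic using the mtDNA difference counts quoted above.
# These are not the methods of the original studies, only a way to see the
# relative magnitudes of the comparisons.

MT_LENGTH = 16_500  # approximate mtDNA length in bases, as quoted above

comparisons = {
    "modern human vs Neanderthal": 202,
    "modern human vs Denisovan": 385,
    "modern human vs chimpanzee": 1_462,
}

chimp_diffs = comparisons["modern human vs chimpanzee"]
for pair, diffs in comparisons.items():
    per_site = diffs / MT_LENGTH          # divergence per site
    relative = diffs / chimp_diffs        # relative to the deepest comparison
    print(f"{pair}: {per_site:.4f} differences per site "
          f"({relative:.0%} of the human-chimpanzee difference)")
```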

Nuclear genome analysis

Isolation and sequencing of nuclear DNA has also been accomplished from the Denisova finger bone. This specimen showed an unusual degree of DNA preservation and a low level of contamination. The researchers were able to achieve near-complete genomic sequencing, allowing a detailed comparison with Neanderthals and modern humans. From this analysis, they concluded that, in spite of the apparent divergence of its mitochondrial sequence, the Denisovan population shared with Neanderthals a common branch off the lineage leading to modern African humans. The estimated average time of divergence between Denisovan and Neanderthal sequences is 640,000 years ago, and the time between both of these and the sequences of modern Africans is 804,000 years ago. The authors suggest that the divergence of the Denisovan mtDNA results either from the persistence of a lineage purged from the other branches of humanity through genetic drift, or else from introgression from an older hominin lineage.

Homo heidelbergensis

Homo heidelbergensis Cranium 5, one of the most important discoveries from the Sima de los Huesos, Atapuerca (Spain). The mandible of this cranium was found, nearly intact, some years after the cranium itself, close to the same location.

Homo heidelbergensis was first discovered in 1907 near Heidelberg, Germany, and later also found elsewhere in Europe, Africa, and Asia. However, it was not until 2013 that a specimen with retrievable DNA was found, in a ~400,000-year-old femur from the Sima de los Huesos cave in Spain. The femur was found to contain both mtDNA and nuclear DNA. Improvements in DNA extraction and library preparation techniques allowed mtDNA to be successfully isolated and sequenced; however, the nuclear DNA was found to be too degraded in the observed specimen, and was also contaminated with DNA from an ancient cave bear (Ursus deningeri) present in the cave. The mtDNA analysis found a surprising link between the specimen and the Denisovans, and this finding raised many questions. Several scenarios were proposed in a January 2014 paper titled "A mitochondrial genome sequence of a hominin from Sima de los Huesos", reflecting the lack of consensus in the scientific community on how Homo heidelbergensis is related to other known hominin groups. One plausible scenario the authors proposed was that H. heidelbergensis was an ancestor to both Denisovans and Neanderthals. Completely sequenced nuclear genomes from both Denisovans and Neanderthals suggest a common ancestor approximately 700,000 years ago, and one leading researcher in the field, Svante Pääbo, has suggested that this new hominin group may be that early ancestor.

Applications

Discovery and characterization of new species

Molecular paleontology techniques applied to fossils have contributed to the discovery and characterization of several new species, including the Denisovans and Homo heidelbergensis. These analyses have improved understanding of the path humans took as they populated the Earth, and of which species were present during this diaspora.

De-extinction

The Pyrenean ibex was temporarily brought back from extinction by cloning in 2003.

It is now possible to revive extinct species using molecular paleontology techniques. This was first accomplished via cloning in 2003 with the Pyrenean ibex, a type of wild goat that became extinct in 2000. Nuclei from the Pyrenean ibex's cells were injected into goat eggs emptied of their own DNA, and implanted into surrogate goat mothers. The offspring lived only seven minutes after birth, due to defects in its lungs. Other cloned animals have been observed to have similar lung defects.

There are many species that have gone extinct as a direct result of human activity. Some examples include the dodo, the great auk, the Tasmanian tiger, the Chinese river dolphin, and the passenger pigeon. An extinct species could also be revived by allelic replacement in the genome of a closely related species that is still living. Because only a few genes would need to be replaced within an existing genome, rather than building the extinct species' genome from scratch, it could be possible to bring back several species in this way, even Neanderthals.

The ethics surrounding the re-introduction of extinct species are very controversial. Critics contend that bringing extinct species back to life would divert limited money and resources from protecting the world's existing biodiversity. With current extinction rates estimated at 100 to 1,000 times the background extinction rate, it is feared that a de-extinction program might lessen public concern over the current mass extinction crisis, if people believe that these species can simply be brought back to life. As the editors of a Scientific American article on de-extinction ask: should we bring back the woolly mammoth only to let elephants become extinct in the meantime? The main driving factor for the extinction of most species in this era (post 10,000 BC) is the loss of habitat, and temporarily bringing back an extinct species will not recreate the environment it once inhabited.

Proponents of de-extinction, such as George Church, speak of many potential benefits. Reintroducing an extinct keystone species, such as the woolly mammoth, could help re-balance the ecosystems that once depended on them. Some extinct species could create broad benefits for the environments they once inhabited, if returned. For example, woolly mammoths may be able to slow the melting of the Russian and Arctic tundra in several ways such as eating dead grass so that new grass can grow and take root, and periodically breaking up the snow, subjecting the ground below to the arctic air. These techniques could also be used to reintroduce genetic diversity in a threatened species, or even introduce new genes and traits to allow the animals to compete better in a changing environment.

Research and technology

When a new potential specimen is found, scientists normally first analyze it for cell and tissue preservation using histological techniques, and test the conditions for the survivability of DNA. They will then attempt to isolate a DNA sample using the techniques described below, and conduct a PCR amplification of the DNA to increase the amount of DNA available for testing. This amplified DNA is then sequenced. Care is taken to verify that the sequence matches the phylogenetic traits of the organism. A technique called amino acid dating can also be used to estimate the age of the remains. It inspects the degree of racemization of aspartic acid, leucine, and alanine within the tissue. As time passes, the D/L ratio (where "D" and "L" are mirror-image forms of the amino acid) increases from 0 toward 1. In samples where the D/L ratio of aspartic acid is greater than 0.08, ancient DNA sequences cannot be retrieved (as of 1996).
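
A minimal sketch of this screening step is shown below. The 0.08 aspartic-acid threshold comes from the text above; the linearizing transform ln((1 + D/L)/(1 - D/L)) is the standard way racemization extent is expressed, but converting it to an absolute age needs rate constants calibrated for temperature and burial history, which are not modelled here, so the example only flags whether DNA retrieval looks plausible.

```python
# A minimal sketch of the aspartic-acid racemization screening rule described
# above. The threshold is the value quoted in the text; age calibration is
# deliberately omitted because it depends on site-specific rate constants.
import math

ASP_DL_THRESHOLD = 0.08  # rule of thumb quoted in the text (as of 1996)

def racemization_extent(dl_ratio: float) -> float:
    """Standard linearizing transform ln((1 + D/L) / (1 - D/L)); grows with age."""
    return math.log((1 + dl_ratio) / (1 - dl_ratio))

def dna_retrieval_plausible(asp_dl_ratio: float) -> bool:
    """True if the sample passes the aspartic-acid D/L screening threshold."""
    return asp_dl_ratio <= ASP_DL_THRESHOLD

for sample_dl in (0.03, 0.08, 0.15):
    print(f"D/L = {sample_dl:.2f}: extent = {racemization_extent(sample_dl):.3f}, "
          f"DNA retrieval plausible: {dna_retrieval_plausible(sample_dl)}")
```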

Mitochondrial DNA vs. nuclear DNA

Unlike nuclear DNA, mitochondrial DNA is inherited only from the maternal lineage.

Mitochondrial DNA (mtDNA) is separate from nuclear DNA. It is present in organelles called mitochondria in each cell. Unlike nuclear DNA, which is inherited from both parents and rearranged every generation, an exact copy of mitochondrial DNA is passed down from a mother to her sons and daughters. One benefit of performing DNA analysis with mitochondrial DNA is its higher mutation rate relative to nuclear DNA, which generates more variation over short timescales and makes tracking lineages on the scale of tens of thousands of years much easier. Knowing the base mutation rate for mtDNA (in humans this rate is also known as the human mitochondrial molecular clock), one can estimate the amount of time any two lineages have been separated. Another advantage of mtDNA is that thousands of copies of it exist in every cell, whereas only two copies of each nuclear chromosome exist in each cell. All eukaryotes, a group which includes all plants, animals, and fungi, have mtDNA. A disadvantage of mtDNA is that only the maternal line is represented. For example, a child will inherit roughly one eighth of its DNA from each of its eight great-grandparents, yet it will inherit an exact clone of its maternal great-grandmother's mtDNA. This is analogous to a child inheriting only his paternal great-grandfather's surname, and not a mix of all eight surnames.

Isolation

There are many things to consider when isolating a substance. First, depending upon what it is and where it is located, there are protocols that must be carried out in order to avoid contamination and further degradation of the sample. Handling of the materials is then usually done in a physically isolated work area and under specific conditions (e.g., controlled temperature and moisture), also to avoid contamination and further loss of sample.

Once the material has been obtained, depending on what it is, there are different ways to isolate and purify it. DNA extraction from fossils is one of the more popular practices, and there are different steps that can be taken to get the desired sample. DNA extracted from amber-entombed fossils can be taken from small samples and mixed with different substances, centrifuged, incubated, and centrifuged again. On the other hand, DNA extraction from insects can be done by grinding the sample, mixing it with buffer, and purifying it through glass-fiber columns. In the end, regardless of how the sample was isolated, the DNA must be able to undergo amplification.

Amplification

The replication process of the polymerase chain reaction (PCR).

The field of molecular paleontology benefited greatly from the invention of the polymerase chain reaction (PCR), which allows one to make billions of copies of a DNA fragment from just a single preserved copy of the DNA. One of the biggest challenges up until this point was the extreme scarcity of recovered DNA because of its degradation over time.
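
The arithmetic behind that amplification is simple exponential doubling, as the toy calculation below illustrates. The per-cycle efficiency value is an assumption for illustration; real reactions with damaged ancient templates fall short of perfect doubling.

```python
# A toy calculation (illustrative, not a protocol) of why PCR matters for
# scarce ancient DNA: under ideal conditions the number of copies of the
# target fragment roughly doubles each thermal cycle.

def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Expected copy number after `cycles` rounds with per-cycle efficiency in [0, 1]."""
    return initial_copies * (1 + efficiency) ** cycles

for cycles in (10, 20, 30):
    ideal = pcr_copies(1, cycles)                     # one surviving template, perfect doubling
    realistic = pcr_copies(1, cycles, efficiency=0.8) # assumed 80% per-cycle efficiency
    print(f"{cycles} cycles: ideal = {ideal:,.0f} copies, "
          f"80% efficiency = {realistic:,.0f} copies")
```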

Sequencing

DNA sequencing is done to determine the order of nucleotides and genes. There are many different materials from which DNA can be extracted. In animals, the mitochondrial genome can be used for molecular study. In plants, chloroplast DNA can be studied as a primary source of sequence data.

An evolutionary tree of mammals

In the end, the sequences generated are used to build evolutionary trees. Methods for fitting trees to the data include: maximum likelihood; minimum evolution (commonly implemented via neighbor-joining), which searches for the tree with the shortest overall length; and maximum parsimony, which finds the tree requiring the fewest character-state changes. The groups of species defined within a tree can later be evaluated by statistical tests, such as the bootstrap method, to see if they are indeed significant.
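
As a concrete illustration of the distance-based approach, the sketch below uses Biopython's neighbor-joining constructor on a small, entirely hypothetical distance matrix. The taxa and distance values are invented for illustration only; real analyses derive distances from aligned sequences and then assess clade support with bootstrapping.

```python
# A minimal sketch of distance-based tree building with Biopython's
# neighbor-joining implementation (pip install biopython). The taxa and the
# pairwise distances are hypothetical, chosen only to show the workflow.
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

names = ["human", "neanderthal", "denisovan", "chimpanzee"]
# Lower-triangular matrix of hypothetical pairwise distances (substitutions per site).
dm = DistanceMatrix(names, matrix=[
    [0.0],
    [0.012, 0.0],
    [0.023, 0.020, 0.0],
    [0.089, 0.088, 0.090, 0.0],
])

tree = DistanceTreeConstructor().nj(dm)  # neighbor-joining tree
Phylo.draw_ascii(tree)                   # print the tree topology as ASCII art
```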

Limitations and challenges

Ideal environmental conditions for preserving DNA, where the organism was desiccated and uncovered, are difficult to come by, as is maintaining the specimen's condition until analysis. Nuclear DNA normally degrades rapidly after death through endogenous hydrolytic processes, UV radiation, and other environmental stressors.

Interactions with the organic breakdown products of surrounding soil have also been found to help preserve biomolecular materials. However, they create the additional challenge of separating the various components so that the proper analysis can be conducted on them. Some of these breakdown products have also been found to interfere with the action of some of the enzymes used during PCR.

Finally, one of the largest challenges in extracting ancient DNA, particularly ancient human DNA, is contamination during PCR. Small amounts of modern human DNA can contaminate the reagents used for extraction and PCR of ancient DNA. These problems can be overcome by rigorous care in the handling of all solutions, as well as the glassware and other tools used in the process. It can also help if only one person performs the extractions, to minimize the number of different DNA sources present.

Relative permittivity

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Relative_permittivity   ...