

Extinction event

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Extinction_event
Marine extinction intensity during the Phanerozoic: apparent percentage of marine animal genera going extinct (%) versus time (millions of years ago), spanning the Cambrian through the Neogene.
The blue graph shows the apparent percentage (not the absolute number) of marine animal genera becoming extinct during any given time interval. It does not represent all marine species, just those that are readily fossilized. The traditional "Big Five" extinction events and the more recently recognised Capitanian mass extinction event are labelled. The two extinction events occurring in the Cambrian (far left) are very large in percentage magnitude, but small in absolute numbers of known taxa due to the relative scarcity of fossil-producing life at that time.

An extinction event (also known as a mass extinction or biotic crisis) is a widespread and rapid decrease in the biodiversity on Earth. Such an event is identified by a sharp change in the diversity and abundance of multicellular organisms. It occurs when the rate of extinction increases with respect to the background extinction rate and the rate of speciation. Estimates of the number of major mass extinctions in the last 540 million years range from as few as five to more than twenty. These differences stem from disagreement as to what constitutes a "major" extinction event, and the data chosen to measure past diversity.

The "Big Five" mass extinctions

In a landmark paper published in 1982, Jack Sepkoski and David M. Raup identified five particular geological intervals with excessive diversity loss. They were originally identified as outliers on a general trend of decreasing extinction rates during the Phanerozoic, but as more stringent statistical tests have been applied to the accumulating data, it has been established that multicellular animal life has experienced at least five major and many minor mass extinctions. The "Big Five" cannot be so clearly defined, but rather appear to represent the largest (or some of the largest) of a relatively smooth continuum of extinction events. An even earlier extinction event at the end of the Ediacaran has been speculated to precede these five.

  1. Ordovician–Silurian extinction events (End Ordovician or O–S): 445–444 Ma, just prior to and at the Ordovician–Silurian transition. Two events occurred that killed off 27% of all families, 57% of all genera and 85% of all species. Together they are ranked by many scientists as the second-largest of the five major extinctions in Earth's history in terms of percentage of genera that became extinct. In May 2020, studies suggested that the causes of the mass extinction were global warming, related to volcanism, and anoxia, and not, as considered earlier, cooling and glaciation. However, this is at odds with numerous previous studies, which have indicated global cooling as the primary driver. Most recently, the deposition of volcanic ash has been suggested to be the trigger for reductions in atmospheric carbon dioxide leading to the glaciation and anoxia observed in the geological record.
  2. Late Devonian extinctions: 372–359 Ma, occupying much of the Late Devonian up to the Devonian–Carboniferous transition. The Late Devonian was an interval of high diversity loss, concentrated into two extinction events. The largest extinction was the Kellwasser Event (Frasnian-Famennian, or F-F, 372 Ma), an extinction event at the end of the Frasnian, about midway through the Late Devonian. This extinction annihilated coral reefs and numerous tropical benthic (seabed-living) animals such as jawless fish, brachiopods, and trilobites. Another major extinction was the Hangenberg Event (Devonian–Carboniferous, or D-C, 359 Ma), which brought an end to the Devonian as a whole. This extinction wiped out the armored placoderm fish and nearly led to the extinction of the newly evolved ammonoids. These two closely spaced extinction events collectively eliminated about 19% of all families, 50% of all genera and at least 70% of all species. Sepkoski and Raup (1982) did not initially consider the Late Devonian extinction interval (Givetian, Frasnian, and Famennian stages) to be statistically significant. Regardless, later studies have affirmed the strong ecological impacts of the Kellwasser and Hangenberg Events.
  Trilobites were highly successful marine animals until the Permian–Triassic extinction event wiped them all out.
  3. Permian–Triassic extinction event (End Permian): 252 Ma, at the Permian–Triassic transition. Earth's largest extinction killed 53% of marine families, 84% of marine genera, about 81% of all marine species and an estimated 70% of terrestrial vertebrate species. This is also the largest known extinction event for insects. The highly successful marine arthropod, the trilobite, became extinct. The evidence regarding plants is less clear, but new taxa became dominant after the extinction. The "Great Dying" had enormous evolutionary significance: on land, it ended the primacy of early synapsids. The recovery of vertebrates took 30 million years, but the vacant niches created the opportunity for archosaurs to become ascendant. In the seas, the percentage of animals that were sessile (unable to move about) dropped from 67% to 50%. The whole late Permian was a difficult time, at least for marine life, even before the P–T boundary extinction. More recent research has indicated that the End-Capitanian extinction event that preceded the "Great Dying" likely constitutes a separate event from the P–T extinction; if so, it would be larger than some of the "Big Five" extinction events, and would perhaps merit a separate place in this list immediately before this one.
  4. Triassic–Jurassic extinction event (End Triassic): 201.3 Ma, at the Triassic–Jurassic transition. About 23% of all families, 48% of all genera (20% of marine families and 55% of marine genera) and 70% to 75% of all species became extinct. Most non-dinosaurian archosaurs, most therapsids, and most of the large amphibians were eliminated, leaving dinosaurs with little terrestrial competition. Non-dinosaurian archosaurs continued to dominate aquatic environments, while non-archosaurian diapsids continued to dominate marine environments. The temnospondyl lineage of large amphibians also survived until the Cretaceous in Australia (e.g., Koolasuchus).
  Badlands near Drumheller, Alberta, where erosion has exposed the Cretaceous–Paleogene boundary.
  5. Cretaceous–Paleogene extinction event (End Cretaceous, K–Pg extinction, or formerly K–T extinction): 66 Ma, at the Cretaceous (Maastrichtian) – Paleogene (Danian) transition. The event was formerly called the Cretaceous–Tertiary or K–T extinction or K–T boundary; it is now officially named the Cretaceous–Paleogene (or K–Pg) extinction event. About 17% of all families, 50% of all genera and 75% of all species became extinct. In the seas, all the ammonites, plesiosaurs and mosasaurs disappeared, and the percentage of sessile animals was reduced to about 33%. All non-avian dinosaurs became extinct during that time. The boundary event was severe, with a significant amount of variability in the rate of extinction between and among different clades. Mammals and birds, the former descended from the synapsids and the latter from theropod dinosaurs, emerged as dominant terrestrial animals.

Despite the popularization of these five events, there is no definite line separating them from other extinction events; using different methods of calculating an extinction's impact can lead to other events featuring in the top five.

Older fossil records are more difficult to interpret. This is because:

  • Older fossils are harder to find as they are usually buried at a considerable depth.
  • Dating of older fossils is more difficult.
  • Productive fossil beds are researched more than unproductive ones, therefore leaving certain periods unresearched.
  • Prehistoric environmental events can disturb the deposition process.
  • The preservation of fossils varies on land, but marine fossils tend to be better preserved than their sought-after land-based counterparts.

It has been suggested that the apparent variations in marine biodiversity may actually be an artifact, with abundance estimates directly related to quantity of rock available for sampling from different time periods. However, statistical analysis shows that this can only account for 50% of the observed pattern, and other evidence such as fungal spikes (geologically rapid increase in fungal abundance) provides reassurance that most widely accepted extinction events are real. A quantification of the rock exposure of Western Europe indicates that many of the minor events for which a biological explanation has been sought are most readily explained by sampling bias.

Sixth mass extinction

Research completed after the seminal 1982 paper (Sepkoski and Raup) has concluded that a sixth mass extinction event is ongoing due to human activities:

  • Holocene extinction: currently ongoing. Extinctions have occurred at over 1000 times the background extinction rate since 1900, and the rate is increasing. The mass extinction is a result of human activity (an ecocide) driven by population growth and overconsumption of Earth's natural resources. The 2019 global biodiversity assessment by IPBES asserts that out of an estimated 8 million species, 1 million plant and animal species are currently threatened with extinction. In late 2021, WWF Germany suggested that over a million species could go extinct within a decade in the "largest mass extinction event since the end of the dinosaur age." A 2023 study published in PNAS concluded that at least 73 genera of animals have gone extinct since 1500. If humans had never existed, it would have taken 18,000 years for the same genera to have disappeared naturally, the report states (a rough version of this arithmetic is sketched below).
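
As a rough illustration of how such rate multipliers can be derived from the figures cited above, consider the simple division below. The arithmetic is ours, not the study's method; the published work estimates per-taxon rates far more carefully.

```python
# Rough illustration using figures quoted in the text above (2023 PNAS study).
observed_genera_lost = 73         # genera extinct since 1500, per the study
years_observed = 2023 - 1500      # ~523 years of observation
background_years_needed = 18_000  # years the same loss would take naturally

# Implied acceleration of the genus-level extinction rate:
multiplier = background_years_needed / years_observed
print(f"Extinction rate roughly {multiplier:.0f}x the background rate")
# -> roughly 34x for genera; species-level estimates since 1900 run far
#    higher (the >1000x figure quoted above).
```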

Extinctions by severity

Extinction events can be tracked by several methods, including geological change, ecological impact, extinction vs. origination (speciation) rates, and most commonly diversity loss among taxonomic units. Most early papers used families as the unit of taxonomy, based on compendiums of marine animal families by Sepkoski (1982, 1992). Later papers by Sepkoski and other authors switched to genera, which are more precise than families and less prone to taxonomic bias or incomplete sampling relative to species. The table below summarizes several major papers' estimates of diversity loss or ecological impact for fifteen commonly discussed extinction events. The different methods used by these papers are described in the following section. The "Big Five" mass extinctions are bolded.

Extinction proportions (diversity loss) of marine genera, or ecological impact, in estimates of mass extinction severity

| Extinction name | Age (Ma) | Sepkoski (1996), multiple-interval genera | Bambach (2006) | McGhee et al. (2013), taxonomic loss | McGhee et al. (2013), ecological ranking | Stanley (2016) |
|---|---|---|---|---|---|---|
| **Late Ordovician (Ashgillian / Hirnantian)** | 445–444 | ~49% | 57% (40%, 31%) | 52% | 7 | 42–46% |
| Lau event (Ludfordian) | 424 | ~23% | – | 9% | 9 | – |
| Kačák Event (Eifelian) | ~388 | ~24% | – | 32% | 9 | – |
| Taghanic Event (Givetian) | ~384 | ~30% | 28.5% | 36% | 8 | – |
| **Late Devonian / Kellwasser event (Frasnian)** | 372 | ~35% | 34.7% | 40% | 4 | 16–20% |
| End-Devonian / Hangenberg event (Famennian) | 359 | ~28% | 31% | 50% | 7 | <13% |
| Serpukhovian | ~330–325 | ~23% | 31% | 39% | 6 | 13–15% |
| Capitanian | 260 | ~47% | 48% | 25% | 5 | 33–35% |
| **Permian–Triassic (Changhsingian)** | 252 | ~58% | 55.7% | 83% | 1 | 62% |
| **Triassic–Jurassic (Rhaetian)** | 201 | ~37% | 47% | 73% | 3 | N/A |
| Pliensbachian–Toarcian | 186–178 | ~14% | 25%, 20% | – | – | – |
| End-Jurassic (Tithonian) | 145 | ~18% | 20% | – | – | – |
| Cenomanian–Turonian | 94 | ~15% | 25% | – | – | – |
| **Cretaceous–Paleogene (Maastrichtian)** | 66 | ~39% | 40–47% | 40% | 2 | 38–40% |
| Eocene–Oligocene | 34 | ~11% | 15.6% | – | – | – |

The study of major extinction events

Breakthrough studies in the 1980s–1990s

Luis (left) and Walter Alvarez (right) at the K-Pg boundary in Gubbio, Italy in 1981. This team discovered geological evidence for an asteroid impact causing the K-Pg extinction, spurring a wave of public and scientific interest in mass extinctions and their causes.

For much of the 20th century, the study of mass extinctions was hampered by insufficient data. Mass extinctions, though acknowledged, were considered mysterious exceptions to the prevailing gradualistic view of prehistory, where slow evolutionary trends define faunal changes. The first breakthrough was published in 1980 by a team led by Luis Alvarez, who discovered trace metal evidence for an asteroid impact at the end of the Cretaceous period. The Alvarez hypothesis for the end-Cretaceous extinction gave mass extinctions, and catastrophic explanations, newfound popular and scientific attention.

Changes in diversity among genera and families, according to Sepkoski (1997). The "Big Five" mass extinctions are labelled with arrows, and taxa are segregated into Cambrian- (Cm), Paleozoic- (Pz), and Modern- (Md) type faunas.

Another landmark study came in 1982, when a paper written by David M. Raup and Jack Sepkoski was published in the journal Science. This paper, originating from a compendium of extinct marine animal families developed by Sepkoski, identified five peaks of marine family extinctions which stand out among a backdrop of decreasing extinction rates through time. Four of these peaks were statistically significant: the Ashgillian (end-Ordovician), Late Permian, Norian (end-Triassic), and Maastrichtian (end-Cretaceous). The remaining peak was a broad interval of high extinction smeared over the latter half of the Devonian, with its apex in the Frasnian stage.

Through the 1980s, Raup and Sepkoski continued to elaborate and build upon their extinction and origination data, defining a high-resolution biodiversity curve (the "Sepkoski curve") and successive evolutionary faunas with their own patterns of diversification and extinction. Though these interpretations formed a strong basis for subsequent studies of mass extinctions, Raup and Sepkoski also proposed a more controversial idea in 1984: a 26-million-year periodic pattern to mass extinctions. Two teams of astronomers linked this to a hypothetical brown dwarf in the distant reaches of the solar system, inventing the "Nemesis hypothesis" which has been strongly disputed by other astronomers.

Around the same time, Sepkoski began to devise a compendium of marine animal genera, which would allow researchers to explore extinction at a finer taxonomic resolution. He began to publish preliminary results of this in-progress study as early as 1986, in a paper which identified 29 extinction intervals of note. By 1992, he also updated his 1982 family compendium, finding minimal changes to the diversity curve despite a decade of new data. In 1996, Sepkoski published another paper which tracked marine genera extinction (in terms of net diversity loss) by stage, similar to his previous work on family extinctions. The paper filtered its sample in three ways: all genera (the entire unfiltered sample size), multiple-interval genera (only those found in more than one stage), and "well-preserved" genera (excluding those from groups with poor or understudied fossil records). Diversity trends in marine animal families were also revised based on his 1992 update.

Revived interest in mass extinctions led many other authors to re-evaluate geological events in the context of their effects on life. A 1995 paper by Michael Benton tracked extinction and origination rates among both marine and continental (freshwater & terrestrial) families, identifying 22 extinction intervals and no periodic pattern. Overview books by O.H. Walliser (1996) and A. Hallam and P.B. Wignall (1997) summarized the new extinction research of the previous two decades. One chapter in the former source lists over 60 geological events which could conceivably be considered global extinctions of varying sizes. These texts, and other widely circulated publications in the 1990s, helped to establish the popular image of mass extinctions as a "big five" alongside many smaller extinctions through prehistory.

New data on genera: Sepkoski's compendium

Major Phanerozoic extinctions tracked via proportional genera extinctions by Bambach (2006)

Though Sepkoski passed away in 1999, his marine genera compendium was formally published in 2002. This prompted a new wave of studies into the dynamics of mass extinctions. These papers utilized the compendium to track origination rates (the rate that new species appear or speciate) parallel to extinction rates in the context of geological stages or substages. A review and re-analysis of Sepkoski's data by Bambach (2006) identified 18 distinct mass extinction intervals, including 4 large extinctions in the Cambrian. These fit Sepkoski's definition of extinction, as short substages with large diversity loss and overall high extinction rates relative to their surroundings.

Bambach et al. (2004) considered each of the "Big Five" extinction intervals to have a different pattern in the relationship between origination and extinction trends. Moreover, background extinction rates were broadly variable and could be separated into more severe and less severe time intervals. Background extinctions were least severe relative to the origination rate in the middle Ordovician-early Silurian, late Carboniferous-Permian, and Jurassic-recent. This argues that the Late Ordovician, end-Permian, and end-Cretaceous extinctions were statistically significant outliers in biodiversity trends, while the Late Devonian and end-Triassic extinctions occurred in time periods which were already stressed by relatively high extinction and low origination.

Computer models run by Foote (2005) determined that abrupt pulses of extinction fit the pattern of prehistoric biodiversity much better than a gradual and continuous background extinction rate with smooth peaks and troughs. This strongly supports the utility of rapid, frequent mass extinctions as a major driver of diversity changes. Pulsed origination events are also supported, though to a lesser degree which is largely dependent on pulsed extinctions.

Similarly, Stanley (2007) used extinction and origination data to investigate turnover rates and extinction responses among different evolutionary faunas and taxonomic groups. In contrast to previous authors, his diversity simulations show support for an overall exponential rate of biodiversity growth through the entire Phanerozoic.

Tackling biases in the fossil record

An illustration of the Signor-Lipps effect, a sampling bias in the fossil record; increased fossil sampling helps to better constrain the time at which an organism truly went extinct.

As data continued to accumulate, some authors began to re-evaluate Sepkoski's sample using methods meant to account for sampling biases. As early as 1982, a paper by Phillip W. Signor and Jere H. Lipps noted that the true sharpness of extinctions was diluted by the incompleteness of the fossil record. This phenomenon, later called the Signor-Lipps effect, notes that a species' true extinction must occur after its last fossil, and that origination must occur before its first fossil. Thus, species which appear to die out just prior to an abrupt extinction event may instead be victims of that event, even though the fossil record alone suggests a gradual decline. A model by Foote (2007) found that many geological stages had artificially inflated extinction rates due to Signor-Lipps "backsmearing" from later stages with extinction events.
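
A minimal simulation of the effect, with invented preservation probabilities and timescales, shows how incomplete sampling alone can make an abrupt extinction look gradual:

```python
import random

# Every species truly dies at the same instant (time step 100), but because
# fossils are preserved incompletely, last occurrences are smeared backwards
# in time. All parameters are illustrative.
random.seed(1)
TRUE_EXTINCTION = 100     # all species die at this time step
PRESERVATION_PROB = 0.03  # chance a living species leaves a fossil per step

last_occurrences = []
for _ in range(200):
    fossils = [t for t in range(TRUE_EXTINCTION)
               if random.random() < PRESERVATION_PROB]
    if fossils:
        last_occurrences.append(max(fossils))

# The apparent "extinctions" (last fossil occurrences) trail off well before
# the true event, mimicking a gradual decline.
last_occurrences.sort()
print("earliest apparent extinction:", last_occurrences[0])
print("median apparent extinction:  ", last_occurrences[len(last_occurrences) // 2])
print("true extinction:             ", TRUE_EXTINCTION)
```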

Estimated extinction rates among genera through time. From Foote (2007), top, and Kocsis et al. (2019), bottom

Other biases include the difficulty of assessing taxa with high turnover rates or restricted occurrences, which cannot be evaluated directly due to a lack of fine-scale temporal resolution. Many paleontologists opt to assess diversity trends by randomized sampling and rarefaction of fossil abundances rather than raw temporal range data, in order to account for all of these biases. But that solution is influenced by biases related to sample size. One major bias in particular is the "Pull of the recent", the fact that the fossil record (and thus known diversity) generally improves closer to the modern day. This means that biodiversity and abundance for older geological periods may be underestimated from raw data alone.

Alroy (2010) attempted to circumvent sample-size-related biases in diversity estimates using a method he called "shareholder quorum subsampling" (SQS). In this method, fossils are sampled from a "collection" (such as a time interval) to assess the relative diversity of that collection. Every time a new species (or other taxon) enters the sample, it brings over all other fossils belonging to that species in the collection (its "share" of the collection). For example, a skewed collection with half its fossils from one species will immediately reach a sample share of 50% if that species is the first to be sampled. This continues, adding up the sample shares until a "coverage" or "quorum" is reached, referring to a pre-set desired sum of share percentages. At that point, the number of species in the sample is counted. A more diverse collection is expected to require more species to reach its quorum, allowing the relative diversity change between two collections to be compared without relying on the biases inherent to sample size.
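
A sketch of the core SQS loop as described above. The quorum level, toy data, and random seed are illustrative, and the corrections Alroy applies for singletons and dominant taxa are omitted:

```python
import random
from collections import Counter

def sqs_richness(occurrences, quorum=0.6, seed=0):
    """Draw fossils until the sampled species' combined 'shares' of the
    collection reach the quorum; return the number of species sampled.
    In practice the draw is repeated many times and the counts averaged."""
    rng = random.Random(seed)
    counts = Counter(occurrences)
    total = len(occurrences)
    pool = list(occurrences)
    rng.shuffle(pool)

    coverage = 0.0
    seen = set()
    for fossil in pool:
        if fossil not in seen:
            seen.add(fossil)
            coverage += counts[fossil] / total  # this species' share
            if coverage >= quorum:
                break
    return len(seen)

# A skewed collection: species 'A' holds half of all fossils, so it can
# contribute a 50% share the moment it is drawn.
collection = ["A"] * 50 + ["B"] * 20 + ["C"] * 15 + ["D"] * 10 + ["E"] * 5
print(sqs_richness(collection, quorum=0.6))
```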

Alroy also elaborated on three-timer algorithms, which are meant to counteract biases in estimates of extinction and origination rates. A given taxon is a "three-timer" if it can be found before, after, and within a given time interval, and a "two-timer" if it overlaps with a time interval on one side. Counting "three-timers" and "two-timers" on either end of a time interval, and sampling time intervals in sequence, can together be combined into equations to predict extinction and origination with less bias. In subsequent papers, Alroy continued to refine his equations to improve lingering issues with precision and unusual samples.
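
The bookkeeping behind these counts can be sketched as follows. Only the counting and the standard sampling-completeness statistic (three-timers divided by three-timers plus part-timers) are shown, since the exact rate equations vary across Alroy's papers:

```python
def three_timer_counts(samples, i):
    """samples: {taxon: set of time-interval indices where it was sampled}.
    Returns (three_timers, two_timers, part_timers) for interval i."""
    three_t = two_t = part_t = 0
    for found in samples.values():
        before, within, after = (i - 1 in found), (i in found), (i + 1 in found)
        if before and within and after:
            three_t += 1                  # found before, within, and after i
        elif within and (before or after):
            two_t += 1                    # overlaps i on one side only
        elif before and after:
            part_t += 1                   # gap: straddles i but unsampled in it
    return three_t, two_t, part_t

# Toy occurrence data (interval indices are arbitrary):
samples = {"a": {0, 1, 2}, "b": {0, 1}, "c": {1, 2}, "d": {0, 2}}
t3, t2, pt = three_timer_counts(samples, 1)

# Part-timers estimate how often interval i fails to sample a taxon that was
# truly present, giving a sampling-completeness correction:
completeness = t3 / (t3 + pt)
print(t3, t2, pt, completeness)  # -> 1 2 1 0.5
```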

McGhee et al. (2013), a paper which primarily focused on ecological effects of mass extinctions, also published new estimates of extinction severity based on Alroy's methods. Many extinctions were significantly more impactful under these new estimates, though some were less prominent.

Stanley (2016) was another paper which attempted to remove two common errors in previous estimates of extinction severity. The first error was the unjustified removal of "singletons", genera unique to only a single time slice. Their removal would mask the influence of groups with high turnover rates or lineages cut short early in their diversification. The second error was the difficulty in distinguishing background extinctions from brief mass extinction events within the same short time interval. To circumvent this issue, background rates of diversity change (extinction/origination) were estimated for stages or substages without mass extinctions, and then assumed to apply to subsequent stages with mass extinctions. For example, the Santonian and Campanian stages were each used to estimate diversity changes in the Maastrichtian prior to the K-Pg mass extinction. Subtracting background extinctions from extinction tallies had the effect of reducing the estimated severity of the six sampled mass extinction events. This effect was stronger for mass extinctions which occurred in periods with high rates of background extinction, like the Devonian.
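
A minimal sketch of this background-subtraction idea, with invented numbers (the hypothetical "reference" and "event" stages below are not Stanley's actual data):

```python
def excess_extinction(event_losses, event_duration_myr,
                      background_losses, background_duration_myr):
    """Return event-stage losses minus the losses expected from background
    extinction alone, given rates from a quiet reference stage."""
    background_rate = background_losses / background_duration_myr  # genera/Myr
    expected_background = background_rate * event_duration_myr
    return event_losses - expected_background

# e.g. a Campanian-like reference stage vs. a Maastrichtian-like event stage:
print(excess_extinction(event_losses=500, event_duration_myr=6.0,
                        background_losses=180, background_duration_myr=12.0))
# -> 410.0 genera attributable to the mass extinction itself
```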

Uncertainty in the Proterozoic and earlier eons

Because most diversity and biomass on Earth is microbial, and thus difficult to measure via fossils, recorded extinction events are those that affected the easily observed, biologically complex component of the biosphere rather than the total diversity and abundance of life. For this reason, well-documented extinction events are confined to the Phanerozoic eon, before which all living organisms were either microbial or at most soft-bodied; the sole exception is the Great Oxidation Event in the Proterozoic. Perhaps due to the absence of a robust microbial fossil record, mass extinctions seem mainly to be a Phanerozoic phenomenon, with apparent extinction rates being low before large complex organisms arose.

Extinction occurs at an uneven rate. Based on the fossil record, the background rate of extinctions on Earth is about two to five taxonomic families of marine animals every million years. Marine fossils are mostly used to measure extinction rates because of their superior fossil record and stratigraphic range compared to land animals.

The Great Oxidation Event, which occurred around 2.45 billion years ago in the Paleoproterozoic, was probably the first major extinction event. Since the Cambrian explosion, five further major mass extinctions have significantly exceeded the background extinction rate. The most recent and best-known, the Cretaceous–Paleogene extinction event, which occurred approximately 66 Ma (million years ago), was a large-scale mass extinction of animal and plant species in a geologically short period of time. In addition to the five major Phanerozoic mass extinctions, there are numerous minor ones as well, and the ongoing mass extinction caused by human activity is sometimes called the sixth extinction.

Evolutionary importance

Mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the newly dominant group is "superior" to the old, but usually because an extinction event eliminates the old, dominant group and makes way for the new one, whose subsequent diversification into the vacated niches is known as adaptive radiation.

For example, mammaliaformes ("almost mammals") and then mammals existed throughout the reign of the dinosaurs, but could not compete in the large terrestrial vertebrate niches that dinosaurs monopolized. The end-Cretaceous mass extinction removed the non-avian dinosaurs and made it possible for mammals to expand into the large terrestrial vertebrate niches. The dinosaurs themselves had been beneficiaries of a previous mass extinction, the end-Triassic, which eliminated most of their chief rivals, the crurotarsans.

Another point of view put forward in the Escalation hypothesis predicts that species in ecological niches with more organism-to-organism conflict will be less likely to survive extinctions. This is because the very traits that keep a species numerous and viable under fairly static conditions become a burden once population levels fall among competing organisms during the dynamics of an extinction event.

Furthermore, many groups that survive mass extinctions do not recover in numbers or diversity, and many of these go into long-term decline; such groups are often referred to as "Dead Clades Walking". However, clades that survive for a considerable period of time after a mass extinction, and which were reduced to only a few species, are likely to have experienced a rebound effect called the "push of the past".

Darwin was firmly of the opinion that biotic interactions, such as competition for food and space – the 'struggle for existence' – were of considerably greater importance in promoting evolution and extinction than changes in the physical environment. He expressed this in The Origin of Species:

"Species are produced and exterminated by slowly acting causes ... and the most import of all causes of organic change is one which is almost independent of altered ... physical conditions, namely the mutual relation of organism to organism – the improvement of one organism entailing the improvement or extermination of others".

Patterns in frequency

Various authors have suggested that extinction events occurred periodically, every 26 to 30 million years, or that diversity fluctuates episodically about every 62 million years. Various ideas, mostly regarding astronomical influences, attempt to explain the supposed pattern, including the presence of a hypothetical companion star to the Sun, oscillations in the galactic plane, or passage through the Milky Way's spiral arms. However, other authors have concluded that the data on marine mass extinctions do not fit with the idea that mass extinctions are periodic, or that ecosystems gradually build up to a point at which a mass extinction is inevitable. Many of the proposed correlations have been argued to be spurious or lacking statistical significance. Others have argued that there is strong evidence supporting periodicity in a variety of records, and additional evidence in the form of coincident periodic variation in nonbiological geochemical variables such as strontium isotopes, flood basalts, anoxic events, orogenies, and evaporite deposition. One explanation for this proposed cycle is carbon storage and release by oceanic crust, which exchanges carbon between the atmosphere and mantle.
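
As a hedged illustration of how such claims can be tested, the sketch below applies a Rayleigh-style clustering statistic to approximate event ages taken from this article. It is not a reproduction of any published analysis, and with so few events its statistical power is very low:

```python
import cmath

# Approximate "Big Five" plus Capitanian and Hangenberg ages (Ma) from above:
event_ages_ma = [445, 372, 359, 260, 252, 201, 66]

def rayleigh_r(ages, period):
    """Mean resultant length of event phases for a candidate period
    (0 = uniformly scattered phases, 1 = perfect clustering)."""
    phases = [cmath.exp(2j * cmath.pi * (a / period)) for a in ages]
    return abs(sum(phases)) / len(phases)

for period in (26, 30, 62):
    print(f"period {period} Myr: R = {rayleigh_r(event_ages_ma, period):.2f}")

# Judging significance would require comparing R against a null distribution
# built from randomized event times, which is where published analyses differ.
```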

Phanerozoic biodiversity as shown by the fossil record: thousands of genera (vertical axis) versus millions of years ago (horizontal axis), distinguishing all genera, "well-defined" genera, a trend line, and the "Big Five" and other mass extinctions.

Mass extinctions are thought to result when a long-term stress is compounded by a short-term shock. Over the course of the Phanerozoic, individual taxa appear to have become less likely to suffer extinction, which may reflect more robust food webs, as well as fewer extinction-prone species, and other factors such as continental distribution. However, even after accounting for sampling bias, there does appear to be a gradual decrease in extinction and origination rates during the Phanerozoic. This may represent the fact that groups with higher turnover rates are more likely to become extinct by chance; or it may be an artefact of taxonomy: families tend to become more speciose, therefore less prone to extinction, over time; and larger taxonomic groups (by definition) appear earlier in geological time.

It has also been suggested that the oceans have gradually become more hospitable to life over the last 500 million years, and thus less vulnerable to mass extinctions, but susceptibility to extinction at a taxonomic level does not appear to make mass extinctions more or less probable.

Causes

There is still debate about the causes of all mass extinctions. In general, large extinctions may result when a biosphere under long-term stress undergoes a short-term shock. An underlying mechanism appears to be present in the correlation of extinction and origination rates to diversity. High diversity leads to a persistent increase in extinction rate; low diversity to a persistent increase in origination rate. These presumably ecologically controlled relationships likely amplify smaller perturbations (asteroid impacts, etc.) to produce the global effects observed.

Identifying causes of specific mass extinctions

A good theory for a particular mass extinction should:

  • explain all of the losses, not just focus on a few groups (such as dinosaurs);
  • explain why particular groups of organisms died out and why others survived;
  • provide mechanisms that are strong enough to cause a mass extinction but not a total extinction;
  • be based on events or processes that can be shown to have happened, not just inferred from the extinction.

It may be necessary to consider combinations of causes. For example, the marine aspect of the end-Cretaceous extinction appears to have been caused by several processes that partially overlapped in time and may have had different levels of significance in different parts of the world.

Arens and West (2006) proposed a "press / pulse" model in which mass extinctions generally require two types of cause: long-term pressure on the ecosystem ("press") and a sudden catastrophe ("pulse") towards the end of the period of pressure. Their statistical analysis of marine extinction rates throughout the Phanerozoic suggested that neither long-term pressure alone nor a catastrophe alone was sufficient to cause a significant increase in the extinction rate.

Most widely supported explanations

MacLeod (2001) summarized the relationship between mass extinctions and events that are most often cited as causes of mass extinctions, using data from Courtillot, Jaeger & Yang et al. (1996), Hallam (1992) and Grieve & Pesonen (1992):

  • Flood basalt events (giant volcanic eruptions): 11 occurrences, all associated with significant extinctions. However, Wignall (2001) concluded that only five of the major extinctions coincided with flood basalt eruptions and that the main phase of extinctions started before the eruptions.
  • Sea-level falls: 12, of which seven were associated with significant extinctions.
  • Asteroid impacts: one large impact is associated with a mass extinction, that is, the Cretaceous–Paleogene extinction event; there have been many smaller impacts but they are not associated with significant extinctions, or cannot be dated precisely enough. The impact that created the Siljan Ring either occurred just before the Late Devonian extinction or coincided with it.

The most commonly suggested causes of mass extinctions are listed below.

Flood basalt events

The scientific consensus is that the main cause of the End-Permian extinction event was the large amount of carbon dioxide emitted by the volcanic eruptions that created the Siberian Traps, which elevated global temperatures.

The formation of large igneous provinces by flood basalt events could have:

  • produced dust and particulate aerosols, which inhibited photosynthesis and thus caused food chains to collapse both on land and at sea
  • emitted sulfur oxides that were precipitated as acid rain and poisoned many organisms, contributing further to the collapse of food chains
  • emitted carbon dioxide, thus possibly causing sustained global warming once the dust and particulate aerosols dissipated.

Flood basalt events occur as pulses of activity punctuated by dormant periods. As a result, they are likely to cause the climate to oscillate between cooling and warming, but with an overall trend towards warming as the carbon dioxide they emit can stay in the atmosphere for hundreds of years.

Flood basalt events have been implicated as the cause of many major extinction events. It is speculated that massive volcanism caused or contributed to the Kellwasser Event, the End-Guadalupian Extinction Event, the End-Permian Extinction Event, the Smithian-Spathian Extinction, the Triassic-Jurassic Extinction Event, the Toarcian Oceanic Anoxic Event, the Cenomanian-Turonian Oceanic Anoxic Event, the Cretaceous-Palaeogene Extinction Event, and the Palaeocene-Eocene Thermal Maximum. The correlation between gigantic volcanic events expressed in the large igneous provinces and mass extinctions was shown for the last 260 million years. Recently, this possible correlation was extended across the whole Phanerozoic Eon.

Sea-level fall

These are often clearly marked by worldwide sequences of contemporaneous sediments that show all or part of a transition from sea-bed to tidal zone to beach to dry land – and where there is no evidence that the rocks in the relevant areas were raised by geological processes such as orogeny. Sea-level falls could reduce the continental shelf area (the most productive part of the oceans) sufficiently to cause a marine mass extinction, and could disrupt weather patterns enough to cause extinctions on land. But sea-level falls are very probably the result of other events, such as sustained global cooling or the sinking of the mid-ocean ridges.

Sea-level falls are associated with most of the mass extinctions, including all of the "Big Five"—End-Ordovician, Late Devonian, End-Permian, End-Triassic, and End-Cretaceous, along with the more recently recognised Capitanian mass extinction of comparable severity to the Big Five.

A 2008 study, published in the journal Nature, established a relationship between the speed of mass extinction events and changes in sea level and sediment. The study suggests changes in ocean environments related to sea level exert a driving influence on rates of extinction, and generally determine the composition of life in the oceans.

Extraterrestrial threats

Impact events
Meteoroid entering the atmosphere with fireball.
An artist's rendering of an asteroid a few kilometers across colliding with the Earth. Such an impact can release the equivalent energy of several million nuclear weapons detonating simultaneously.

The impact of a sufficiently large asteroid or comet could have caused food chains to collapse both on land and at sea by producing dust and particulate aerosols and thus inhibiting photosynthesis. Impacts on sulfur-rich rocks could have emitted sulfur oxides precipitating as poisonous acid rain, contributing further to the collapse of food chains. Such impacts could also have caused megatsunamis and/or global forest fires.

Most paleontologists now agree that an asteroid did hit the Earth about 66 Ma, but there is lingering dispute over whether the impact was the sole cause of the Cretaceous–Paleogene extinction event. Nonetheless, in October 2019, researchers reported that the Chicxulub asteroid impact, which resulted in the extinction of non-avian dinosaurs 66 Ma, also rapidly acidified the oceans, producing ecological collapse and long-lasting effects on the climate, and was a key reason for the end-Cretaceous mass extinction.

The Permian-Triassic extinction event has also been hypothesised to have been caused by the asteroid impact that formed the Araguainha crater, as the estimated date of the crater's formation overlaps with the end-Permian extinction event. However, this hypothesis has been widely challenged, and the impact explanation is rejected by most researchers.

According to the Shiva hypothesis, the Earth is subject to increased asteroid impacts about once every 27 million years because of the Sun's passage through the plane of the Milky Way galaxy, thus causing extinction events at 27 million year intervals. Some evidence for this hypothesis has emerged in both marine and non-marine contexts. Alternatively, the Sun's passage through the higher density spiral arms of the galaxy could coincide with mass extinction on Earth, perhaps due to increased impact events. However, a reanalysis of the effects of the Sun's transit through the spiral structure based on maps of the spiral structure of the Milky Way in CO molecular line emission has failed to find a correlation.

A nearby nova, supernova or gamma ray burst

A nearby gamma-ray burst (less than 6000 light-years away) would be powerful enough to destroy the Earth's ozone layer, leaving organisms vulnerable to ultraviolet radiation from the Sun. Gamma ray bursts are fairly rare, occurring only a few times in a given galaxy per million years. It has been suggested that a gamma ray burst caused the End-Ordovician extinction, while a supernova has been proposed as the cause of the Hangenberg event.

Global cooling

Sustained and significant global cooling could kill many polar and temperate species and force others to migrate towards the equator; reduce the area available for tropical species; often make the Earth's climate more arid on average, mainly by locking up more of the planet's water in ice and snow. The glaciation cycles of the current ice age are believed to have had only a very mild impact on biodiversity, so the mere existence of a significant cooling is not sufficient on its own to explain a mass extinction.

It has been suggested that global cooling caused or contributed to the End-Ordovician, Permian–Triassic, Late Devonian extinctions, and possibly others. Sustained global cooling is distinguished from the temporary climatic effects of flood basalt events or impacts.

Global warming

This would have the opposite effects: expand the area available for tropical species; kill temperate species or force them to migrate towards the poles; possibly cause severe extinctions of polar species; often make the Earth's climate wetter on average, mainly by melting ice and snow and thus increasing the volume of the water cycle. It might also cause anoxic events in the oceans (see below).

Global warming as a cause of mass extinction is supported by several recent studies.

The most dramatic example of sustained warming is the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. It has also been suggested to have caused the Triassic–Jurassic extinction event, during which 20% of all marine families became extinct. Furthermore, the Permian–Triassic extinction event has been suggested to have been caused by warming.

Clathrate gun hypothesis

Clathrates are composites in which a lattice of one substance forms a cage around another. Methane clathrates (in which water molecules are the cage) form on continental shelves. These clathrates are likely to break up rapidly and release the methane if the temperature rises quickly or the pressure on them drops quickly—for example in response to sudden global warming or a sudden drop in sea level or even earthquakes. Methane is a much more powerful greenhouse gas than carbon dioxide, so a methane eruption ("clathrate gun") could cause rapid global warming or make it much more severe if the eruption was itself caused by global warming.

The most likely signature of such a methane eruption would be a sudden decrease in the ratio of carbon-13 to carbon-12 in sediments, since methane clathrates are low in carbon-13; but the change would have to be very large, as other events can also reduce the percentage of carbon-13.
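
For reference, the ratio in question is conventionally reported as δ13C, the per-mille (‰) deviation of a sample's 13C/12C ratio from that of a reference standard (VPDB); a clathrate-derived methane release would drive sedimentary δ13C sharply negative:

```latex
\delta^{13}\mathrm{C} =
\left(
  \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\text{sample}}}
       {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\text{standard}}} - 1
\right) \times 1000\ \text{‰}
```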

It has been suggested that "clathrate gun" methane eruptions were involved in the end-Permian extinction ("the Great Dying") and in the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions.

Anoxic events

Anoxic events are situations in which the middle and even the upper layers of the ocean become deficient or totally lacking in oxygen. Their causes are complex and controversial, but all known instances are associated with severe and sustained global warming, mostly caused by sustained massive volcanism.

It has been suggested that anoxic events caused or contributed to the Ordovician–Silurian, late Devonian, Capitanian, Permian–Triassic, and Triassic–Jurassic extinctions, as well as a number of lesser extinctions (such as the Ireviken, Lundgreni, Mulde, Lau, Smithian-Spathian, Toarcian, and Cenomanian–Turonian events). On the other hand, there are widespread black shale beds from the mid-Cretaceous that indicate anoxic events but are not associated with mass extinctions.

Declines in the bio-availability of essential trace elements (in particular selenium) to potentially lethal lows have been shown to coincide with, and likely have contributed to, at least three mass extinction events in the oceans: at the end of the Ordovician, during the Middle and Late Devonian, and at the end of the Triassic. During periods of low oxygen concentrations, very soluble selenate (Se6+) is converted into much less soluble selenide (Se2-), elemental Se, and organo-selenium complexes. Bio-availability of selenium during these extinction events dropped to about 1% of the current oceanic concentration, a level that has been proven lethal to many extant organisms.

British oceanographer and atmospheric scientist Andrew Watson explained that, while the Holocene epoch exhibits many processes reminiscent of those that have contributed to past anoxic events, full-scale ocean anoxia would take "thousands of years to develop".

Hydrogen sulfide emissions from the seas

Kump, Pavlov and Arthur (2005) have proposed that during the Permian–Triassic extinction event the warming also upset the oceanic balance between photosynthesising plankton and deep-water sulfate-reducing bacteria, causing massive emissions of hydrogen sulfide, which poisoned life on both land and sea and severely weakened the ozone layer, exposing much of the life that still remained to fatal levels of UV radiation.

Oceanic overturn

Oceanic overturn is a disruption of thermohaline circulation that lets surface water (which is more saline than deep water because of evaporation) sink straight down, bringing anoxic deep water to the surface and therefore killing most of the oxygen-breathing organisms that inhabit the surface and middle depths. It may occur either at the beginning or the end of a glaciation, although an overturn at the start of a glaciation is more dangerous because the preceding warm period will have created a larger volume of anoxic water.

Unlike other oceanic catastrophes such as regressions (sea-level falls) and anoxic events, overturns do not leave easily identified "signatures" in rocks and are theoretical consequences of researchers' conclusions about other climatic and marine events.

It has been suggested that oceanic overturn caused or contributed to the late Devonian and Permian–Triassic extinctions.

Geomagnetic reversal

One theory is that periods of increased geomagnetic reversals weaken Earth's magnetic field for long enough to expose the atmosphere to the solar wind, causing oxygen ions to escape the atmosphere at a rate increased by 3–4 orders of magnitude, resulting in a disastrous decrease in oxygen.

Plate tectonics

Movement of the continents into some configurations can cause or contribute to extinctions in several ways: by initiating or ending ice ages; by changing ocean and wind currents and thus altering climate; by opening seaways or land bridges that expose previously isolated species to competition for which they are poorly adapted (for example, the extinction of most of South America's native ungulates and all of its large metatherians after the creation of a land bridge between North and South America). Occasionally continental drift creates a super-continent that includes the vast majority of Earth's land area, which in addition to the effects listed above is likely to reduce the total area of continental shelf (the most species-rich part of the ocean) and produce a vast, arid continental interior that may have extreme seasonal variations.

Another theory is that the creation of the super-continent Pangaea contributed to the End-Permian mass extinction. Pangaea was almost fully formed at the transition from mid-Permian to late-Permian, and the "Marine genus diversity" diagram at the top of this article shows a level of extinction starting at that time, which might have qualified for inclusion in the "Big Five" if it were not overshadowed by the "Great Dying" at the end of the Permian.

Other hypotheses

Many species of plants and animals are at high risk of extinction due to the destruction of the Amazon rainforest

Many other hypotheses have been proposed, such as the spread of a new disease, or simple out-competition following an especially successful biological innovation. But all have been rejected, usually for one of the following reasons: they require events or processes for which there is no evidence; they assume mechanisms that are contrary to the available evidence; they are based on other theories that have been rejected or superseded.

The Late Pleistocene saw extinctions of numerous predominantly megafaunal species, coinciding in time with the early human migrations across continents.

Scientists have been concerned that human activities could cause more plants and animals to become extinct than at any point in the past. Along with human-made changes in climate (see above), some of these extinctions could be caused by overhunting, overfishing, invasive species, or habitat loss. A study published in May 2017 in Proceedings of the National Academy of Sciences argued that a "biological annihilation" akin to a sixth mass extinction event is underway as a result of anthropogenic causes, such as over-population and over-consumption. The study suggested that as much as 50% of the animal individuals that once lived on Earth have already been lost, threatening the basis for human existence, too.

Future biosphere extinction/sterilization

The eventual warming and expanding of the Sun, combined with the eventual decline of atmospheric carbon dioxide, could actually cause an even greater mass extinction, having the potential to wipe out even microbes (in other words, the Earth would be completely sterilized): rising global temperatures caused by the expanding Sun would gradually increase the rate of weathering, which would in turn remove more and more CO2 from the atmosphere. When CO2 levels get too low (perhaps at 50 ppm), most plant life will die out, although simpler plants like grasses and mosses can survive much longer, until CO2 levels drop to 10 ppm.

With all photosynthetic organisms gone, atmospheric oxygen can no longer be replenished, and it is eventually removed by chemical reactions in the atmosphere, perhaps from volcanic eruptions. Eventually the loss of oxygen will cause all remaining aerobic life to die out via asphyxiation, leaving behind only simple anaerobic prokaryotes. When the Sun becomes 10% brighter in about a billion years, Earth will suffer a moist greenhouse effect resulting in its oceans boiling away, while the Earth's liquid outer core cools due to the inner core's expansion and causes the Earth's magnetic field to shut down. In the absence of a magnetic field, charged particles from the Sun will deplete the atmosphere and further increase the Earth's temperature to an average of around 420 K (147 °C, 296 °F) in 2.8 billion years, causing the last remaining life on Earth to die out. This is the most extreme instance of a climate-caused extinction event. Since this will only happen late in the Sun's life, it would represent the final mass extinction in Earth's history (albeit a very long extinction event).

Effects and recovery

The effects of mass extinction events varied widely. After a major extinction event, usually only weedy species survive due to their ability to live in diverse habitats. Later, species diversify and occupy empty niches. Generally, it takes millions of years for biodiversity to recover after extinction events. In the most severe mass extinctions it may take 15 to 30 million years.

The worst Phanerozoic event, the Permian–Triassic extinction, devastated life on Earth, killing over 90% of species. Life seemed to recover quickly after the P-T extinction, but this was mostly in the form of disaster taxa, such as the hardy Lystrosaurus. The most recent research indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to successive waves of extinction that inhibited recovery, as well as prolonged environmental stress that continued into the Early Triassic. Recent research indicates that recovery did not begin until the start of the mid-Triassic, four to six million years after the extinction; and some writers estimate that the recovery was not complete until 30 million years after the P-T extinction, that is, in the late Triassic. Subsequent to the P-T extinction, there was an increase in provincialization, with species occupying smaller ranges – perhaps removing incumbents from niches and setting the stage for an eventual rediversification.

The effects of mass extinctions on plants are somewhat harder to quantify, given the biases inherent in the plant fossil record. Some mass extinctions (such as the end-Permian) were equally catastrophic for plants, whereas others, such as the end-Devonian, did not affect the flora.

Flame detector

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Flame_detector

A flame detector is a sensor designed to detect and respond to the presence of a flame or fire, allowing flame detection. Responses to a detected flame depend on the installation, but can include sounding an alarm, deactivating a fuel line (such as a propane or a natural gas line), and activating a fire suppression system. When used in applications such as industrial furnaces, their role is to confirm that the furnace is working properly; they can be used to turn off the ignition system, though in many cases they take no direct action beyond notifying the operator or control system. A flame detector can often respond faster and more accurately than a smoke or heat detector because of the mechanisms it uses to detect the flame.

Optical flame detectors

Spectral regions covered by the different types of optical flame detector.

Ultraviolet detector

Ultraviolet (UV) detectors work by detecting the UV radiation emitted at the instant of ignition. While capable of detecting fires and explosions within 3–4 milliseconds, a time delay of 2–3 seconds is often included to minimize false alarms, which can be triggered by other UV sources such as lightning, arc welding, radiation, and sunlight. UV detectors typically operate at wavelengths shorter than 300 nm to minimize the effects of natural background radiation. Detectors operating in this solar-blind UV band are, however, easily blinded by oily contaminants.
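
A minimal sketch of such a confirmation delay, assuming a hypothetical read_uv_sensor() hardware hook and illustrative threshold values:

```python
import time

CONFIRM_SECONDS = 2.5  # confirmation window from the text (2-3 s)
THRESHOLD = 0.8        # normalized UV intensity treated as "flame-like"

def read_uv_sensor() -> float:
    """Placeholder for real hardware; returns normalized UV intensity."""
    return 0.0

def uv_alarm_loop():
    # Alarm only if UV hits persist for the whole window, so a millisecond
    # lightning flash or a briefly passing welding arc does not trip it.
    first_hit = None
    while True:
        if read_uv_sensor() > THRESHOLD:
            first_hit = first_hit or time.monotonic()
            if time.monotonic() - first_hit >= CONFIRM_SECONDS:
                print("ALARM: sustained UV consistent with flame")
                return
        else:
            first_hit = None   # signal dropped out: treat as a transient
        time.sleep(0.01)       # ~100 Hz polling
```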

Near IR array

Near infrared (IR) array flame detectors (0.7 to 1.1 μm), also known as visual flame detectors, employ flame recognition technology to confirm fire by analyzing near IR radiation using a charge-coupled device (CCD). A near IR sensor is especially able to monitor flame phenomena without too much hindrance from water and water vapour. Pyroelectric sensors operating at this wavelength can be relatively cheap. Multiple-channel or pixel-array sensors monitoring flames in the near IR band are arguably the most reliable technologies available for detection of fires. Light emission from a fire forms an image of the flame at a particular instant, and digital image processing can be used to recognize flames by analyzing the video created from the near IR images.
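
One simple way to capture the idea, assuming synthetic frame data and illustrative thresholds (real visual flame detectors use far more elaborate recognition algorithms):

```python
import numpy as np

def flame_like_regions(frames: np.ndarray,
                       bright_thresh: float = 0.8,
                       flicker_thresh: float = 0.05) -> np.ndarray:
    """frames: (num_frames, height, width) array of normalized NIR intensity.
    Flames appear as regions that are both bright and flickering over time,
    unlike hot but steady objects. Returns a boolean pixel mask."""
    mean_img = frames.mean(axis=0)
    flicker = frames.std(axis=0)   # temporal variation per pixel
    return (mean_img > bright_thresh) & (flicker > flicker_thresh)

# Synthetic 32-frame clip: a steady hot spot vs. a flickering "flame" patch.
rng = np.random.default_rng(0)
frames = np.zeros((32, 64, 64))
frames[:, 10:20, 10:20] = 0.9                              # steady hot object
frames[:, 40:50, 40:50] = 0.9 + 0.1 * rng.standard_normal((32, 10, 10))
print(flame_like_regions(frames).sum(), "flame-like pixels")
```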

Infrared

Infrared (IR) or wideband infrared (1.1 μm and higher) flame detectors monitor the infrared spectral band for specific patterns given off by hot gases. These are sensed using a specialized fire-fighting thermal imaging camera (TIC), a type of thermographic camera. False alarms can be caused by other hot surfaces and background thermal radiation in the area. Water on the detector's lens will greatly reduce the accuracy of the detector, as will exposure to direct sunlight. A frequency range of special interest is 4.3 to 4.4 μm, a resonance frequency of CO2. When a hydrocarbon burns (for example, wood or fossil fuels such as oil and natural gas), much heat and CO2 are released. The hot CO2 emits much energy at its resonance frequency of 4.3 μm, causing a peak in the total radiation emission that can be detected well. Moreover, the "cold" CO2 in the atmosphere filters out sunlight and other IR radiation in this band, making the sensor at this frequency "solar blind"; sensitivity is nonetheless reduced by sunlight. By observing the flicker frequency of a fire (1 to 20 Hz), the detector is made less sensitive to false alarms caused by heat radiation, for example from hot machinery.
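
A sketch of the flicker check, with an assumed sample rate, window length, and band-energy threshold:

```python
import numpy as np

def looks_like_flame(signal: np.ndarray, sample_rate_hz: float = 200.0,
                     band=(1.0, 20.0), min_band_fraction: float = 0.6):
    """Compare signal power in the 1-20 Hz flame-flicker band against total
    (non-DC) power; thresholds are illustrative assumptions."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return total > 0 and spectrum[in_band].sum() / total >= min_band_fraction

# A 4.3 um channel flickering at ~10 Hz should pass; a steady hot machine
# (here with a little 50 Hz mains ripple) should not.
t = np.arange(0, 2.0, 1 / 200.0)
flame = 1.0 + 0.3 * np.sin(2 * np.pi * 10 * t)   # 10 Hz flicker
steady = np.ones_like(t) + 0.01 * np.sin(2 * np.pi * 50 * t)
print(looks_like_flame(flame), looks_like_flame(steady))  # True False
```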

A severe disadvantage is that almost all radiation can be absorbed by water or water vapour; this is particularly true for infrared flame detection in the 4.3 to 4.4 μm region. From approximately 3.5 μm and higher, absorption by water or ice is practically 100%, which makes infrared sensors very unresponsive to fires in outdoor applications. The biggest problem is that this blinding can go unnoticed: some infrared detectors have an (automatic) detector-window self-test, but this self-test only monitors for the presence of water or ice on the detector window.

A salt film is also harmful, because salt absorbs water, and water vapour, fog, or light rain can likewise make the sensor almost blind without the user knowing. The effect is similar to the protection a firefighter uses when approaching a hot fire: a screen of water vapour against the enormous infrared heat radiation. A layer of water vapour, fog, or light rain will likewise "protect" the detector, causing it to miss the fire. Visible light, however, is transmitted through a water-vapour screen, as can easily be seen by the fact that a human can still see the flames through it.

The usual response time of an IR detector is 3–5 seconds.

Infrared thermal cameras

MWIR (mid-wave infrared) cameras can be used to detect heat, and with particular algorithms can detect hot spots within a scene as well as flames, for both detection and prevention of fire and fire risks. These cameras can be used in complete darkness and operate both indoors and outdoors.

UV/IR

These detectors are sensitive to both UV and IR wavelengths, and detect flame by comparing the threshold signal of both ranges. This helps minimize false alarms.

IR/IR flame detection

Dual IR (IR/IR) flame detectors compare the threshold signal in two infrared ranges. Often one sensor looks at the 4.4 micrometer carbon dioxide (CO2) band, while the other sensor looks at a reference frequency. Sensing the CO2 emission is appropriate for hydrocarbon fuels; for non-carbon-based fuels, e.g., hydrogen, the broadband water bands are sensed instead.
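
As a rough illustration of this comparison logic, the sketch below alarms only when a hypothetical CO2-channel reading is strong both in absolute terms and relative to the reference channel. The function name and both thresholds are assumptions, not values from any real detector.

```python
# Minimal sketch of dual-IR (IR/IR) decision logic: compare the signal in the
# 4.4 um CO2 channel with a reference channel just outside the CO2 band.
def dual_ir_alarm(co2_signal, ref_signal,
                  min_co2=0.5,      # assumed minimum absolute CO2-channel level
                  min_ratio=2.0):   # assumed minimum CO2/reference ratio for a flame
    """Flame if the CO2 channel is strong both absolutely and relative to reference."""
    if co2_signal < min_co2:
        return False                       # too weak to be a flame at all
    return co2_signal / max(ref_signal, 1e-9) > min_ratio

print(dual_ir_alarm(co2_signal=3.0, ref_signal=0.8))  # flame-like -> True
print(dual_ir_alarm(co2_signal=3.0, ref_signal=2.9))  # broadband hot object -> False
```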

IR3 flame detection

Multi-infrared detectors use algorithms to suppress the effects of background (blackbody) radiation, though sensitivity is still reduced by this radiation.

Triple-IR flame detectors compare three specific wavelength bands within the IR spectral region and their ratios to each other. In this case, one sensor looks at the 4.4 micrometer range while the other sensors look at reference wavelengths above and below 4.4 μm. This allows the detector to distinguish between non-flame IR sources and actual flames, which emit hot CO2 in the combustion process. As a result, both the detection range and the immunity to false alarms can be significantly increased. IR3 detectors can detect a 0.1 m² (1 ft²) gasoline pan fire at up to 65 m (215 ft) in less than 5 seconds. Triple IRs, like other IR detector types, are susceptible to blinding by a layer of water on the detector's window.

Most IR detectors are designed to ignore constant background IR radiation, which is present in all environments. Instead they are designed to detect suddenly changing or increasing sources of the radiation. When exposed to changing patterns of non-flame IR radiation, IR and UV/IR detectors become more prone to false alarms, while IR3 detectors become somewhat less sensitive but are more immune to false alarms.

3IR+UV flame detection

Multi-infrared (multi-IR/3IR) detectors use algorithms to determine the presence of fire and to tell it apart from background noise known as blackbody radiation, which generally reduces the range and accuracy of the detector. Blackbody radiation is constantly present in all environments, but is given off especially strongly by objects at high temperature. This makes high-temperature environments, or areas where high-temperature material is handled, especially challenging for IR-only detectors. An additional UV-C band sensor is therefore sometimes included in flame detectors to add another layer of confirmation, since blackbody radiation does not affect UV sensors unless the temperature is extremely high, as with the plasma glow from an arc-welding machine.

Multi-wavelength detectors vary in sensor configuration: 1 IR + UV (UVIR) is the most common and lowest-cost; 2 IR + UV is a compromise between cost and false-alarm immunity; and 3 IR + UV combines past 3IR technology with the additional layer of identification from the UV sensor.

Multi-wavelength or multi-spectral detectors such as 3IR+UV and UVIR are an improvement over their IR-only counterparts, which have been known either to false-alarm or to lose sensitivity and range in the presence of strong background noise such as direct or reflected light sources, or even sun exposure. IR detectors have often relied on bulk growth of infrared energy as their primary criterion for fire detection, declaring an alarm when the sensors exceed a given range and ratio. This approach, however, is prone to triggering on non-fire noise, whether from blackbody radiation, high-temperature environments, or simply changes in ambient lighting. Alternatively, IR-only detectors designed to alarm only on perfect conditions and clear signal matches may miss the fire when there is too much noise, such as when looking into the sunset.
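
The sketch below illustrates one plausible 3IR+UV vote, combining the IR3 ratio idea described above (the CO2 band must dominate the reference bands on both sides of it) with a UV-C confirmation channel. All band names, ratios and thresholds are illustrative assumptions.

```python
# Hedged sketch of a 3IR+UV vote: three IR bands (below, on, and above the
# 4.4 um CO2 peak) must show a flame-like "peaked" pattern, AND the UV-C
# channel must confirm.  Thresholds are assumptions, not vendor values.
def triple_ir_uv_alarm(ir_low, ir_co2, ir_high, uv_counts,
                       peak_ratio=1.5,   # assumed dominance of the CO2 band
                       min_uv=5):        # assumed minimum UV-C photon count
    """Alarm only when the CO2 band dominates both reference bands and UV agrees."""
    peaked = (ir_co2 > peak_ratio * ir_low) and (ir_co2 > peak_ratio * ir_high)
    return peaked and uv_counts >= min_uv

print(triple_ir_uv_alarm(0.4, 2.0, 0.5, uv_counts=12))  # flame-like -> True
print(triple_ir_uv_alarm(1.8, 2.0, 1.9, uv_counts=0))   # blackbody-like -> False
```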

Modern flame detectors may also use high-speed sensors, which capture the flickering movement of flame and monitor the pattern and ratios of the spectral output for signatures unique to fire. Higher-speed sensors allow not only faster reaction times but also more data per second, increasing the level of confidence in fire identification or false-alarm rejection.

Visible sensors

A visible light sensor (for example a camera: 0.4 to 0.7 μm) is able to present an image which can be understood by a human being. Furthermore, complex image-processing analysis can be executed by computers, which can recognize a flame or even smoke. A camera, however, can be blinded, like a human, by heavy smoke and by fog. It is also possible to combine visible-light information with UV or infrared information, in order to discriminate better against false alarms or to improve the detection range. The corona camera is an example of such equipment, in which the information from a UV camera is mixed with visible-image information. It is used for tracing defects in high-voltage equipment and for fire detection over long distances.

In some detectors, a sensor for visible radiation (light) is added to the design.

Video

Closed-circuit television or a web camera can be used for visual detection of flames (wavelengths between 0.4 and 0.7 μm). Smoke or fog can limit their effective range, since they operate solely in the visible spectrum.

Other types

Ionization current flame detection

The intense ionization within the body of a flame can be measured by means of the phenomenon of flame rectification, whereby an AC current flows more easily in one direction when a voltage is applied. This current can be used to verify flame presence and quality. Such detectors can be used in large industrial process gas heaters and are connected to the flame control system. They usually act as both flame quality monitors and for flame failure detection. They are also common in a variety of household gas furnaces and boilers.
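
As a minimal sketch of how a flame-control system might use this current, the check below compares a measured DC rectification current against a dropout threshold. The 1.0 µA figure is a typical order of magnitude but is an assumption here, not a value from this article.

```python
# Illustrative flame-proving check based on flame rectification.  A real
# control measures the DC microamp current produced when AC is applied
# across the flame rod; the dropout level below is an assumption.
DROPOUT_UA = 1.0  # assumed minimum rectification current in microamps

def flame_proven(rectification_current_ua):
    """True if the measured flame-rod current indicates a flame is present."""
    return rectification_current_ua >= DROPOUT_UA

# A falling current can also flag a dirty rod or a lifting flame before
# the burner actually locks out.
for reading in (4.2, 1.3, 0.2):
    print(f"{reading:.1f} uA ->", "flame OK" if flame_proven(reading) else "flame failure")
```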

Problems with boilers failing to stay lit can often be due to dirty flame sensors or to a poor burner surface with which to complete the electrical circuit. A poor flame or one that is lifting off the burner may also interrupt the continuity.

Flame igniter (top) and flame sensor

Thermocouple flame detection

Thermocouples are used extensively for monitoring flame presence in combustion heating systems and gas cookers. A common use in these installations is to cut off the supply of fuel if the flame fails, in order to prevent unburned fuel from accumulating. These sensors measure heat and are therefore commonly used to detect the absence of a flame, for example to verify the presence of a pilot flame.
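
A minimal sketch of such a fuel cut-off follows, assuming a hypothetical millivolt-level thermocouple reading and a 10 mV dropout threshold (an assumed order of magnitude, not a manufacturer value).

```python
# Sketch of thermocouple pilot supervision: if the thermocouple EMF drops
# below a dropout threshold, the fuel valve is closed.  Values are assumed.
DROPOUT_MV = 10.0  # assumed dropout voltage in millivolts

def supervise_pilot(emf_mv, valve_open):
    """Return the new valve state given the thermocouple EMF in millivolts."""
    if emf_mv < DROPOUT_MV and valve_open:
        print("Pilot flame lost: closing fuel valve")
        return False
    return valve_open

valve = True
for emf in (25.0, 18.0, 4.0):       # a cooling thermocouple as the pilot fails
    valve = supervise_pilot(emf, valve)
print("valve open:", valve)
```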

Applications

UV/IR flame detectors are used in:

Emission of radiation

A fire emits radiation which the human eye experiences as visible yellow-red flames and heat. In fact, during a fire relatively little UV and visible-light energy is emitted compared with the emission of infrared radiation. A non-hydrocarbon fire, for example one from hydrogen, does not show a CO2 peak at 4.3 μm, because no CO2 is released when hydrogen burns. The 4.3 μm CO2 peak in the picture is exaggerated; in reality it is less than 2% of the total energy of the fire. A multi-frequency detector with sensors for UV, visible light, near IR and/or wideband IR thus has much more sensor data to work with, and is therefore able to detect more types of fire and to detect them better: hydrogen, methanol, ether or sulphur. The emission looks like a static picture, but in reality the energy fluctuates, or flickers. This flickering is caused by the fact that the aspirated oxygen and the combustible present burn while new oxygen and new combustible material are aspirated concurrently. These little explosions cause the flickering of the flame.

Sunlight

The sun emits an enormous amount of energy which would be harmful to human beings were it not filtered by the vapours and gases in the atmosphere, like water (clouds), ozone, and others. In the figure it can clearly be seen that "cold" CO2 filters the solar radiation around 4.3 μm; an infrared detector which uses this frequency is therefore solar blind. Not all manufacturers of flame detectors use sharp filters for the 4.3 μm radiation, and such detectors still pick up quite an amount of sunlight; these cheap flame detectors are hardly usable for outdoor applications. Between 0.7 μm and approximately 3 μm there is relatively strong absorption of sunlight, and hence this frequency range is used for flame detection by a few flame-detector manufacturers (in combination with other sensors like ultraviolet, visible light, or near infrared). The big economic advantage is that detector windows can be made of quartz instead of expensive sapphire. These electro-optical sensor combinations also enable the detection of non-hydrocarbon fires, such as hydrogen, without the risk of false alarms caused by artificial light or electric welding.

Heat radiation

Infrared flame detectors suffer from infrared heat radiation which is not emitted by a possible fire; one could say that the fire can be masked by other heat sources. All objects with a temperature above absolute zero (0 K or −273.15 °C) emit energy, and at room temperature (300 K) this heat is already a problem for infrared flame detectors with the highest sensitivity: sometimes a moving hand is sufficient to trigger an IR flame detector. At 700 K a hot object (black body) starts to emit visible light (glow). Dual- or multi-infrared detectors suppress the effects of heat radiation by means of sensors which detect just off the CO2 peak, for example at 4.1 μm. It is then necessary that there be a large difference in output between the applied sensors (for example sensors S1 and S2 in the picture). A disadvantage is that the radiation energy of a possible fire must be much bigger than the background heat radiation present; in other words, the flame detector becomes less sensitive. Every multi-infrared flame detector is negatively influenced by this effect, regardless of how expensive it is.
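
A small Planck-law calculation illustrates why the off-peak reference sensor works: for a pure blackbody, the radiance at 4.3 μm and at a nearby reference wavelength such as 4.1 μm stays of the same order at any temperature, so demanding a large difference between the two sensors rejects hot objects while still accepting the strong CO2 line emission of a flame. Treating the sources as blackbodies, and the chosen temperatures, are assumptions of this sketch.

```python
# Planck-law comparison of the 4.3 um (CO2 band) and 4.1 um (reference band)
# radiance for a ~1400 K hot region versus a 300 K room.  Hot CO2 emission is
# really a molecular band, not a continuum, so this is only a rough proxy.
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance in W / (m^2 * sr * m)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / (math.exp(x) - 1)

# For blackbody sources the band ratio stays near 1 at any temperature, so a
# detector requiring a large S1/S2 difference ignores them; only line emission
# from hot CO2 can push the 4.3 um channel far above the 4.1 um reference.
for temp in (1400.0, 300.0):
    ratio = planck(4.3e-6, temp) / planck(4.1e-6, temp)
    print(f"T = {temp:6.0f} K: 4.3 um / 4.1 um blackbody ratio = {ratio:.2f}")
```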

Cone of vision (field of view)

The cone of vision of a flame detector is determined by the shape and size of the window and the housing, and by the location of the sensor in the housing. For infrared sensors, the lamination of the sensor material also plays a part; it limits the cone of vision of the flame detector. A wide cone of vision does not automatically mean that the flame detector is better. For some applications the flame detector must be aligned precisely so that it does not detect potential background radiation sources. The cone of vision of the flame detector is three-dimensional and is not necessarily perfectly round. The horizontal and vertical angles of vision often differ; this is mostly caused by the shape of the housing and by mirroring parts (meant for the self test). Different combustibles can even have different angles of vision in the same flame detector. Very important is the sensitivity at angles of 45°: at least 50% of the maximum sensitivity on the central axis must be achieved there. Some flame detectors achieve 70% or more; in fact these flame detectors have a total horizontal angle of vision of more than 90°, but most manufacturers do not mention this. A high sensitivity at the edges of the angle of vision provides advantages when planning the placement of a flame detector.

Detection range

The range of a flame detector is determined to a large extent by the mounting location. When making a projection, one should imagine what the flame detector "sees". A rule of thumb is that the mounting height of the flame detector should be twice the height of the highest object in its field of view. The accessibility of the flame detector must also be taken into account, for maintenance and repairs; a rigid light-mast with a pivot point is recommended for this reason. A "roof" on top of the flame detector (30 × 30 cm, 1 × 1 ft) prevents rapid fouling in outdoor applications. The shadow effect must also be considered; it can be minimized by mounting a second flame detector opposite the first. A second advantage of this approach is redundancy: the second flame detector covers for the first in case it fails or is blinded. In general, when mounting several flame detectors, one should let them "look" at each other rather than at the walls. Following this procedure, blind spots (caused by the shadow effect) can be avoided and better redundancy can be achieved than if the flame detectors "looked" from a central position into the area to be protected. The range of flame detectors to the 30 × 30 cm (1 × 1 ft) industry-standard fire is stated in the manufacturer's data sheets and manuals; this range can be reduced by the previously described de-sensitizing effects of sunlight, water, fog, steam and blackbody radiation.

The square law

If the distance between the flame and the flame detector is large compared with the dimensions of the fire, the square law applies: if a flame detector can detect a fire of area A at a certain distance, then a four times larger flame area is necessary if the distance between the flame detector and the fire is doubled. In short:

Twice the distance requires four times the flame area (fire).

This law is equally valid for all optical flame detectors, including video-based ones. The maximum sensitivity can be characterized by dividing the maximum detectable flame area A by the square of the distance d between the fire and the flame detector: c = A/d². With this constant c, for the same flame detector and the same type of fire, the maximum distance or the minimum fire area can be calculated: A = c·d² and d = √(A/c).

It must be emphasized, however, that the square law is in reality no longer valid at very large distances, where other parameters play a significant part, such as the occurrence of water vapour and of cold CO2 in the air. For a very small flame, on the other hand, the decreasing flicker of the flame plays an increasing part.

A more exact relation - valid when the distance between the flame and the flame detector is small - between the radiation density, E, at the detector and the distance, D, between the detector and a flame of effective radius, R, emitting energy density, M, is given by

E = M·R²/(R² + D²)

When R ≪ D, the relation reduces to the (inverse) square law

E = M·R²/D²
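
A short numerical check of these relations, using assumed example values for M, R and the distances, shows the exact expression approaching the inverse-square form once R ≪ D.

```python
# Numerical comparison of the exact relation E = M*R^2 / (R^2 + D^2) with the
# inverse-square approximation E = M*R^2 / D^2.  M, R and the distances are
# assumed example values for illustration only.
def radiation_density_exact(m, r, d):
    return m * r**2 / (r**2 + d**2)

def radiation_density_square_law(m, r, d):
    return m * r**2 / d**2

M, R = 1000.0, 0.2          # assumed energy density and effective flame radius (m)
for d in (0.5, 2.0, 10.0, 50.0):
    exact = radiation_density_exact(M, R, d)
    approx = radiation_density_square_law(M, R, d)
    print(f"D = {d:5.1f} m: exact {exact:10.4f}, square law {approx:10.4f}")
```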

Magnetopause

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Magnetopause
Artistic rendition of the Earth's magnetopause. The magnetopause is where the pressure from the solar wind and the planet's magnetic field are equal. The position of the Sun would be far to the left in this image.

The magnetopause is the abrupt boundary between a magnetosphere and the surrounding plasma. For planetary science, the magnetopause is the boundary between the planet's magnetic field and the solar wind. The location of the magnetopause is determined by the balance between the magnetic pressure of the planetary field and the dynamic pressure of the solar wind. As the solar wind pressure increases and decreases, the magnetopause moves inward and outward in response. Waves (ripples and flapping motion) along the magnetopause move in the direction of the solar wind flow in response to small-scale variations in the solar wind pressure and to the Kelvin–Helmholtz instability.

The solar wind is supersonic and passes through a bow shock where the direction of flow is changed so that most of the solar wind plasma is deflected to either side of the magnetopause, much like water is deflected before the bow of a ship. The zone of shocked solar wind plasma is the magnetosheath. At Earth and all the other planets with intrinsic magnetic fields, some solar wind plasma succeeds in entering and becoming trapped within the magnetosphere. At Earth, the solar wind plasma which enters the magnetosphere forms the plasma sheet. The amount of solar wind plasma and energy that enters the magnetosphere is regulated by the orientation of the interplanetary magnetic field, which is embedded in the solar wind.

The Sun and other stars with magnetic fields and stellar winds have a solar magnetopause or heliopause where the stellar environment is bounded by the interstellar environment.

Characteristics

Schematic representation of a planetary dipole magnetic field in a vacuum (right side) deformed by a region of plasma with infinite conductivity. The Sun is to the left. The configuration is equivalent to an image dipole (green arrow) being placed at twice the distance from the planetary dipole to the interaction boundary.

Prior to the age of space exploration, interplanetary space was considered to be a vacuum. The coincidence of the first observation of a solar flare with the geomagnetic storm of 1859 was evidence that plasma was ejected from the Sun during the flare event. Chapman and Ferraro proposed that plasma was emitted by the Sun in a burst as part of a flare event, disturbing the planet's magnetic field in a manner known as a geomagnetic storm. The collision frequency of particles in the plasma of the interplanetary medium is very low, and the electrical conductivity is so high that the plasma can be approximated as an infinite conductor. A magnetic field in a vacuum cannot penetrate a volume with infinite conductivity. Chapman and Bartels (1940) illustrated this concept by postulating a plate with infinite conductivity placed on the dayside of a planet's dipole, as shown in the schematic. The field lines on the dayside are bent. At low latitudes, the magnetic field lines are pushed inward. At high latitudes, the magnetic field lines are pushed backwards and over the polar regions. The boundary between the region dominated by the planet's magnetic field (i.e., the magnetosphere) and the plasma in the interplanetary medium is the magnetopause. The configuration equivalent to a flat, infinitely conductive plate is achieved by placing an image dipole (green arrow at left of schematic) at twice the distance from the planet's dipole to the magnetopause along the planet-Sun line. Since the solar wind is continuously flowing outward, the magnetopause above, below and to the sides of the planet is swept backward into the geomagnetic tail, as shown in the artist's concept. The region (shown in pink in the schematic) which separates field lines from the planet that are pushed inward from those that are pushed backward over the poles is an area of weak magnetic field, or day-side cusp. Solar wind particles can enter the planet's magnetosphere through the cusp region. Because the solar wind exists at all times, and not just during solar flares, the magnetopause is a permanent feature of the space near any planet with a magnetic field.

The magnetic field lines of the planet's magnetic field are not stationary. They are continuously joining or merging with magnetic field lines of the interplanetary magnetic field. The joined field lines are swept back over the poles into the planetary magnetic tail. In the tail, the field lines from the planet's magnetic field are re-joined and start moving toward the night side of the planet. The physics of this process was first explained by Dungey (1961).

If one assumed that the magnetopause were just a boundary between a magnetic field in a vacuum and a plasma with a weak magnetic field embedded in it, then the magnetopause would be defined by electrons and ions penetrating one gyroradius into the magnetic-field domain. Since the gyro-motion of electrons and ions is in opposite directions, an electric current flows along the boundary. The actual magnetopause is much more complex.

Estimating the standoff distance to the magnetopause

If the pressure from particles within the magnetosphere is neglected, it is possible to estimate the distance to the part of the magnetosphere that faces the Sun. The condition governing this position is that the dynamic ram pressure from the solar wind is equal to the magnetic pressure from the Earth's magnetic field:

ρv² = B(r)²/(2μ₀)

where ρ and v are the density and velocity of the solar wind, and B(r) is the magnetic field strength of the planet in SI units (B in T, μ₀ in H/m).

Since the dipole magnetic field strength varies with distance as 1/r³, the magnetic field strength can be written as B(r) = μ/r³, where μ is the planet's magnetic moment, expressed in T·m³.

Solving this equation for r leads to an estimate of the standoff distance

r = (μ² / (2μ₀ρv²))^(1/6)

The distance from Earth to the subsolar magnetopause varies over time due to solar activity, but typical distances range from 6 to 15 Earth radii. Empirical models using real-time solar wind data can provide a real-time estimate of the magnetopause location. A bow shock stands upstream from the magnetopause; it serves to decelerate and deflect the solar wind flow before it reaches the magnetopause.
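
A back-of-the-envelope evaluation of the standoff formula above, using typical assumed solar-wind values (about 6 protons per cm³ at 400 km/s) and Earth's approximate dipole moment, lands in this 6 to 15 Earth-radii range.

```python
# Order-of-magnitude standoff-distance estimate from the pressure balance
# above: r = (mu^2 / (2*mu0*rho*v^2))**(1/6).  The solar-wind density and
# speed are typical assumed values; mu is Earth's approximate dipole moment.
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, H/m
MU_EARTH = 8.0e15             # Earth's magnetic moment, T*m^3 (approximate)
R_EARTH = 6.371e6             # Earth radius, m

n = 6.0e6                     # assumed solar-wind number density, protons/m^3
v = 4.0e5                     # assumed solar-wind speed, m/s
rho = n * 1.673e-27           # mass density, kg/m^3

r = (MU_EARTH**2 / (2 * MU0 * rho * v**2)) ** (1 / 6)
print(f"standoff distance ~ {r / R_EARTH:.1f} Earth radii")  # roughly 8
```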

Solar System magnetopauses

Overview of the Solar System magnetopauses
Planet  | Number | Magnetic moment (Earth = 1) | Magnetopause distance (planetary radii) | Observed magnetosphere size (planetary radii) | Variance of magnetosphere size
Mercury | 1 | 0.0004 | 1.5 | 1.4  | 0
Venus   | 2 | 0      | 0   | 0    | 0
Earth   | 3 | 1      | 10  | 10   | 2
Mars    | 4 | 0      | 0   | 0    | 0
Jupiter | 5 | 20000  | 42  | 75   | 25
Saturn  | 6 | 600    | 19  | 19   | 3
Uranus  | 7 | 50     | 25  | 18   | 0
Neptune | 8 | 25     | 24  | 24.5 | 1.5

Research on the magnetopause is conducted using the LMN coordinate system, a set of local Cartesian axes analogous to XYZ: N points along the outward normal to the magnetopause, toward the magnetosheath; L lies along the projection of the dipole axis onto the magnetopause (positive northward); and M completes the right-handed triad by pointing dawnward.
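
A small numpy sketch shows how such a triad might be constructed from an outward normal and a dipole-axis direction; the example vectors (subsolar normal along +x, dipole axis along +z, roughly GSM-like) are illustrative assumptions.

```python
# Sketch of building an LMN triad from an outward magnetopause normal and the
# dipole-axis direction, both given in some common frame (e.g. GSM).
import numpy as np

def lmn_triad(normal, dipole_axis):
    """Return unit vectors (L, M, N) from an outward normal and the dipole axis."""
    n = normal / np.linalg.norm(normal)
    # L: projection of the dipole axis onto the magnetopause plane, normalized.
    l = dipole_axis - np.dot(dipole_axis, n) * n
    l = l / np.linalg.norm(l)
    m = np.cross(n, l)            # completes the right-handed triad (dawnward)
    return l, m, n

# Subsolar-point example: normal sunward (+x), dipole axis roughly northward (+z).
L, M, N = lmn_triad(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print("L =", L, "M =", M, "N =", N)   # M = -y, i.e. dawnward in a GSM-like frame
```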

Venus and Mars do not have a planetary magnetic field and do not have a magnetopause. The solar wind interacts with the planet's atmosphere and a void is created behind the planet. In the case of the Earth's moon and other bodies without a magnetic field or atmosphere, the body's surface interacts with the solar wind and a void is created behind the body.

Cellular automaton

From Wikipedia, the free encyclopedia https://en.wikipedi...