Monday, April 24, 2017

Global catastrophic risk

From Wikipedia, the free encyclopedia
 
Artist's impression of a major asteroid impact. An asteroid with an impact strength of a billion atomic bombs may have caused the extinction of the dinosaurs.[1]

A global catastrophic risk is a hypothetical future event that has the potential to damage human well-being on a global scale.[2] Some events could cripple or destroy modern civilization. Any event that could cause human extinction is known as an existential risk.[3]

Potential global catastrophic risks include anthropogenic risks (technology risks, governance risks) and natural or external risks. Examples of technology risks are hostile artificial intelligence, biotechnology risks, and nanotechnology weapons. Insufficient global governance creates risks in the social and political domain (potentially leading to a global war with or without a nuclear holocaust, bioterrorism using genetically modified organisms, cyberterrorism destroying critical infrastructure like the electrical grid, or the failure to manage a natural pandemic) as well as problems and risks in the domain of earth system governance (with risks resulting from global warming, environmental degradation, mineral resource exhaustion, fossil energy exhaustion, or famine as a result of non-equitable resource distribution, human overpopulation, crop failures and non-sustainable agriculture). Examples of non-anthropogenic risks are an asteroid impact event, a supervolcanic eruption, a lethal gamma-ray burst, a geomagnetic storm destroying all electronic equipment, natural long-term climate change, or extraterrestrial life impacting life on Earth.

Classifications

Scope/intensity grid from Bostrom's paper "Existential Risk Prevention as Global Priority"[4]

Global catastrophic vs. existential

Philosopher Nick Bostrom classifies risks according to their scope and intensity.[4] A "global catastrophic risk" is any risk that is at least "global" in scope, and is not subjectively "imperceptible" in intensity. Those that are at least "trans-generational" (affecting all future generations) in scope and "terminal" in intensity are classified as existential risks. While a global catastrophic risk may kill the vast majority of life on Earth, humanity could still potentially recover. An existential risk, on the other hand, is one that either destroys humanity (and, presumably, all but the most rudimentary species of non-human lifeforms and/or plant life) entirely or at least prevents any chance of civilization recovering. Bostrom considers existential risks to be far more significant.[5]

Similarly, in Catastrophe: Risk and Response, Richard Posner singles out and groups together events that bring about "utter overthrow or ruin" on a global, rather than a "local or regional" scale. Posner singles out such events as worthy of special attention on cost-benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole.[6] Posner's events include meteor impacts, runaway global warming, grey goo, bioterrorism, and particle accelerator accidents.

Researchers have difficulty studying human extinction directly, since humanity has never been destroyed before.[7] While this does not mean that it will not happen in the future, it does make modelling existential risks difficult, due in part to survivorship bias.

Other classifications

Bostrom identifies four types of existential risk. "Bangs" are sudden catastrophes, which may be accidental or deliberate. He thinks the most likely sources of bangs are malicious use of nanotechnology, nuclear war, and the possibility that the universe is a simulation that will end. "Crunches" are scenarios in which humanity survives but civilization is irreversibly destroyed. The most likely causes of this, he believes, are exhaustion of natural resources, a stable global government that prevents technological progress, or dysgenic pressures that lower average intelligence. "Shrieks" are undesirable futures. For example, if a single mind enhances its powers by merging with a computer, it could dominate human civilization. Bostrom believes that this scenario is most likely, followed by flawed superintelligence and a repressive totalitarian regime. "Whimpers" are the gradual decline of human civilization or current values. He thinks the most likely cause would be evolution changing moral preference, followed by extraterrestrial invasion.[3]

Likelihood

Some risks, such as that from asteroid impact, with a one-in-a-million chance of causing humanity's extinction in the next century,[8] have had their probabilities predicted with considerable precision (although some scholars claim the actual rate of large impacts could be much higher than originally calculated).[9] Similarly, the frequency of volcanic eruptions of sufficient magnitude to cause catastrophic climate change, similar to the Toba Eruption, which may have almost caused the extinction of the human race,[10] has been estimated at about 1 in every 50,000 years.[11] The 2016 annual report by the Global Challenges Foundation estimates that an average American is more than five times more likely to die during a human-extinction event than in a car crash.[12][13]
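To see how a comparison of this kind is constructed, the sketch below works through the arithmetic with round, purely hypothetical numbers (they are not the figures used in the Global Challenges Foundation report): an individual's chance of dying in an extinction-level event is roughly the chance that such an event occurs during their lifetime, since everyone alive would die.

    # Illustrative sketch only: the annual extinction probability and the
    # lifetime car-crash risk below are assumed round numbers, not figures
    # taken from the Global Challenges Foundation report.
    annual_extinction_prob = 0.001    # assumed 0.1% chance per year
    lifetime_years = 80
    lifetime_car_crash_risk = 0.009   # assumed ~0.9% lifetime risk

    # Chance that at least one extinction-level event occurs during a lifetime.
    lifetime_extinction_risk = 1 - (1 - annual_extinction_prob) ** lifetime_years

    print(f"Lifetime extinction risk: {lifetime_extinction_risk:.1%}")    # roughly 7.7%
    print(f"Ratio to car-crash risk: {lifetime_extinction_risk / lifetime_car_crash_risk:.1f}x")

With these particular assumptions the ratio comes out above five, which is the shape of the comparison the report makes; the report's own inputs may differ.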

The relative danger posed by other threats is much more difficult to calculate. In 2008, a small but illustrious group of experts on different global catastrophic risks at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction over the next century. The conference report cautions that the results should be taken "with a grain of salt".[14]
Estimated probability for human extinction before 2100 (2008 expert survey):
  Overall probability                19%
  Molecular nanotechnology weapons    5%
  Superintelligent AI                 5%
  Non-nuclear wars                    4%
  Engineered pandemic                 2%
  Nuclear wars                        1%
  Nanotechnology accident           0.5%
  Natural pandemic                 0.05%
  Nuclear terrorism                0.03%
Table source: Future of Humanity Institute, 2008.[14]
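A probability quoted for the period up to 2100 can also be read as an implied constant annual hazard rate. The short Python sketch below is an illustration of that conversion (it is not part of the survey) and assumes, for simplicity, independent years over the 92 years from 2008 to 2100:

    # Convert a "probability before 2100" into the constant annual probability
    # that would produce it, assuming independence across 92 years (2008-2100).
    def implied_annual_probability(total_prob, years=92):
        return 1 - (1 - total_prob) ** (1 / years)

    survey_figures = {"Overall": 0.19, "Engineered pandemic": 0.02, "Nuclear wars": 0.01}
    for risk, prob in survey_figures.items():
        print(f"{risk}: about {implied_annual_probability(prob):.3%} per year")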
There are significant methodological challenges in estimating these risks with precision. Most attention has been given to risks to human civilization over the next 100 years, but forecasting for this length of time is difficult. The types of threats posed by nature may prove relatively constant, though new risks could be discovered. Anthropogenic threats, however, are likely to change dramatically with the development of new technology; while volcanoes have been a threat throughout history, nuclear weapons have only been an issue since the 20th century. Historically, the ability of experts to predict the future over these timescales has proved very limited. Man-made threats such as nuclear war or nanotechnology are harder to predict than natural threats, due to the inherent methodological difficulties in the social sciences. In general, it is hard to estimate the magnitude of the risk from this or other dangers, especially as both international relations and technology can change rapidly.
Existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the failure of a complete extinction event to occur in the past is not evidence against their likelihood in the future, because every world that has experienced such an extinction event has no observers, so regardless of their frequency, no civilization observes existential risks in its history.[7] These anthropic issues can be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or directly evaluating the likely impact of new technology.[4]

Moral importance of existential risk

Some scholars have strongly favored reducing existential risk on the grounds that it greatly benefits future generations. Derek Parfit argues that extinction would be a great loss because our descendants could potentially survive for four billion years before the expansion of the Sun makes the Earth uninhabitable.[15][16] Nick Bostrom argues that there is even greater potential in colonizing space. If future humans colonize space, they may be able to support a very large number of people on other planets, potentially lasting for trillions of years.[5] Therefore, reducing existential risk by even a small amount would have a very significant impact on the expected number of people who will exist in the future.
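The structure of this expected-value argument can be made concrete with a toy calculation; the population figure below is an arbitrary placeholder rather than an estimate from Bostrom or Parfit, used only to show how even a tiny reduction in extinction probability scales:

    # Toy expected-value sketch. The count of potential future people is an
    # arbitrary placeholder, not a figure from the cited authors.
    potential_future_people = 1e16
    risk_reduction = 1e-6    # lowering extinction probability by one millionth

    # Expected number of additional future lives attributable to the reduction.
    expected_gain = risk_reduction * potential_future_people
    print(f"Expected additional future lives: {expected_gain:,.0f}")    # 10,000,000,000

On these placeholder numbers, a one-in-a-million reduction in risk corresponds in expectation to ten billion future lives, which is the intuition behind the claim.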

Exponential discounting might make these future benefits much less significant. However, Gaverick Matheny has argued that such discounting is inappropriate when assessing the value of existential risk reduction.[8]
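The effect Matheny is responding to is easy to see in a minimal sketch of exponential discounting, here with an arbitrary 3% annual discount rate: a benefit t years in the future is divided by (1 + r)^t, so very long-term benefits shrink towards zero.

    # Present value of a fixed future benefit under exponential discounting.
    # The 3% rate and the unit benefit are illustrative choices, not values
    # from Matheny's analysis.
    def present_value(benefit, years, rate=0.03):
        return benefit / (1 + rate) ** years

    for years in (10, 100, 1000):
        print(f"A unit benefit {years} years out is worth {present_value(1.0, years):.2e} today")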

Some economists have discussed the importance of global catastrophic risks, though not existential risks. Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage.[17] Richard Posner has argued that we are doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.[18]

Numerous cognitive biases can influence people's judgment of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect.[19]

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people are roughly as concerned about 200,000 birds getting stuck in oil as they are about 2,000.[20] Similarly, people are often more concerned about threats to individuals than to larger groups.[19]

There are economic reasons that can explain why so little effort is going into existential risk reduction. It is a global good, so even if a large nation decreases it, that nation will only enjoy a small fraction of the benefit of doing so. Furthermore, the vast majority of the benefits may be enjoyed by far future generations, and though these quadrillions of future people would in theory perhaps be willing to pay massive sums for existential risk reduction, no mechanism for such a transaction exists.[4]

Potential sources of risk

Some sources of catastrophic risk are natural, such as meteor impacts or supervolcanoes. Some of these have caused mass extinctions in the past.

On the other hand, some risks are man-made, such as global warming,[21] environmental degradation, engineered pandemics and nuclear war. According to the Future of Humanity Institute, human extinction is more likely to result from anthropogenic causes than natural causes.[4][22]

Anthropogenic

In 2012, Cambridge University created The Cambridge Project for Existential Risk which examines threats to humankind caused by developing technologies.[23] The stated aim is to establish within the University a multidisciplinary research centre, Centre for the Study of Existential Risk, dedicated to the scientific study and mitigation of existential risks of this kind.[23]

The Cambridge Project states that the "greatest threats" to the human species are man-made; they are artificial intelligence, global warming, nuclear war, and rogue biotechnology.[24]

Artificial intelligence

It has been suggested that learning computers that rapidly become superintelligent may take unforeseen actions, or that robots would out-compete humanity (one technological singularity scenario).[25] Because of its exceptional scheduling and organizational capability and the range of novel technologies it could develop, it is possible that the first Earth superintelligence to emerge could rapidly become matchless and unrivaled: conceivably it would be able to bring about almost any possible outcome, and to foil virtually any attempt that threatened to prevent it achieving its objectives.[26] It could eliminate any challenging rival intellects, wiping them out if it chose; alternatively it might manipulate or persuade them to change their behavior towards its own interests, or it might merely obstruct their attempts at interference.[26] In his book Superintelligence: Paths, Dangers, Strategies, Bostrom defines this as the control problem.[27]
Vernor Vinge has suggested that a moment may come when computers and robots are smarter than humans. He calls this "the Singularity."[28] He suggests that it may be somewhat or possibly very dangerous for humans.[29] This is discussed by a philosophy called Singularitarianism.

Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[30]

In 2009, experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.[28] Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns.[31][32]

Eliezer Yudkowsky believes that risks from artificial intelligence are harder to predict than any other known risks. He also argues that research into artificial intelligence is biased by anthropomorphism. Since people base their judgments of artificial intelligence on their own experience, he claims that they underestimate the potential power of AI. He distinguishes between risks due to technical failure of AI, which means that flawed algorithms prevent the AI from carrying out its intended goals, and philosophical failure, which means that the AI is programmed to realize a flawed ideology.[33]

Biotechnology

Biotechnology can pose a global catastrophic risk in the form of bioengineered organisms (viruses, bacteria, fungi, plants or animals). In many cases the organism will be a pathogen of humans, livestock or crops. However, any organism able to catastrophically disrupt ecosystem functions poses a biotechnology risk. An example could be a highly competitive weed that outcompetes essential crops.
A catastrophe may be brought about by usage of biological agents in biological warfare, bioterrorism attacks or by accident.[34] Terrorist applications of biotechnology have historically been infrequent.[34] To what extent this is due to a lack of capabilities or motivation is not resolved.[34]

A biotechnology catastrophe may be caused accidentally, either by a genetically engineered organism escaping or by the planned release of an organism that turns out to have unforeseen and catastrophic interactions with essential natural or agro-ecosystems.

Exponential growth has been observed in the biotechnology sector and Noun and Chyba predict that this will lead to major increases in biotechnological capabilities in the coming decades.[34] They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control (especially as the technological capabilities are becoming available even to individual users).[34]

Given current development, more risk from novel, engineered pathogens is to be expected in the future.[34] Pathogens may be intentionally or unintentionally genetically modified to change virulence and other characteristics.[34] For example, a group of Australian researchers unintentionally changed characteristics of the mousepox virus while trying to develop a virus to sterilize rodents.[34] The modified virus became highly lethal even in vaccinated and naturally resistant mice.[35][36] The technological means to genetically modify virus characteristics are likely to become more widely available in the future if not properly regulated.[34]

Noun and Chyba propose three categories of measures to reduce risks from biotechnology and natural pandemics: Regulation or prevention of potentially dangerous research, improved recognition of outbreaks and developing facilities to mitigate disease outbreaks (e.g. better and/or more widely distributed vaccines).[34]

Global warming

Global warming refers to the warming caused by human technology since the 19th century or earlier. Global warming reflects abnormal variations to the expected climate within the Earth's atmosphere and subsequent effects on other parts of the Earth. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency and severity of some extreme weather events and weather-related disasters. Effects of global warming include loss of biodiversity, stresses to existing food-producing systems, increased spread of known infectious diseases such as malaria, and rapid mutation of microorganisms.
It has been suggested that runaway global warming (runaway climate change) might cause Earth to become searingly hot like Venus. In less extreme scenarios, it could cause the end of civilization as we know it.[37]

Environmental disaster

An environmental or ecological disaster, such as world crop failure and collapse of ecosystem services, could be induced by the present trends of overpopulation, economic development,[38] and non-sustainable agriculture. Most of these scenarios involve one or more of the following: Holocene extinction event, scarcity of water that could lead to approximately one half of the Earth's population being without safe drinking water, pollinator decline, overfishing, massive deforestation, desertification, climate change, or massive water pollution episodes. A very recent threat in this direction is colony collapse disorder,[39] a phenomenon that might foreshadow the imminent extinction[40] of the Western honeybee. As the bee plays a vital role in pollination, its extinction would severely disrupt the food chain.

Mineral resource exhaustion

Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and the paradigm founder of ecological economics, has argued that the carrying capacity of Earth — that is, Earth's capacity to sustain human populations and consumption levels — is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse, leading to the demise of human civilisation itself.[41]:303f Ecological economist and steady-state theorist Herman Daly, a student of Georgescu-Roegen, has propounded the same argument by asserting that "... all we can do is to avoid wasting the limited capacity of creation to support present and future life [on Earth]."[42]:370

Ever since Georgescu-Roegen and Daly published these views, various scholars in the field have been discussing the existential impossibility of distributing Earth's finite stock of mineral resources evenly among an unknown number of present and future generations. This number of generations is likely to remain unknown to us, as there is little way of knowing in advance if or when mankind will eventually face extinction. In effect, any conceivable intertemporal distribution of the stock will inevitably end up with universal economic decline at some future point.[43]:253–256 [44]:165 [45]:168–171 [46]:150–153 [47]:106–109 [48]:546–549 [49]:142–145
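A minimal numerical sketch, with an arbitrary stock size and no claim to represent the cited models, shows the form of the argument: however a fixed stock is divided, the share available per generation must shrink as the number of generations drawing on it grows.

    # Illustrative only: an even division of a fixed mineral stock among an
    # increasing number of generations. The stock size is arbitrary.
    finite_stock = 1.0

    for generations in (10, 100, 1_000, 10_000):
        share = finite_stock / generations
        print(f"{generations:>6} generations -> share per generation: {share:.6f}")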

Experimental technology accident

Nick Bostrom suggested that in the pursuit of knowledge, humanity might inadvertently create a device that could destroy Earth and the Solar System.[50] Investigations in nuclear and high-energy physics could create unusual conditions with catastrophic consequences. For example, scientists worried that the first nuclear test might ignite the atmosphere.[51][52] More recently, others worried that the RHIC[53] or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. These particular concerns have been refuted,[54][55][56][57] but the general concern remains.
Biotechnology could lead to the creation of a pandemic, chemical warfare could be taken to an extreme, nanotechnology could lead to grey goo in which out-of-control self-replicating robots consume all living matter on earth while building more of themselves—in both cases, either deliberately or by accident.[58]

Nanotechnology

Many nanoscale technologies are in development or currently in use.[59] The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision.[60] Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories of desktop proportions.[59][60] When nanofactories gain the ability to produce other nanofactories, production may only be limited by relatively abundant factors such as input materials, energy and software.[59]
Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons.[59] Being equipped with compact computers and motors these could be increasingly autonomous and have a large range of capabilities.[59]

Phoenix and Treder classify catastrophic risks posed by nanotechnology into three categories:
  1. From augmenting the development of other technologies such as AI and biotechnology.
  2. By enabling mass-production of potentially dangerous products that cause risk dynamics (such as arms races) depending on how they are used.
  3. From uncontrolled self-perpetuating processes with destructive effects.
At the same time, nanotechnology may be used to alleviate several other global catastrophic risks.[59]

Several researchers state that the bulk of risk from nanotechnology comes from the potential to lead to war, arms races and destructive global government.[35][59][61] Several reasons have been suggested why the availability of nanotech weaponry may with significant likelihood lead to unstable arms races (compared to e.g. nuclear arms races):
  1. A large number of players may be tempted to enter the race since the threshold for doing so is low;[59]
  2. The ability to make weapons with molecular manufacturing will be cheap and easy to hide;[59]
  3. Therefore, lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes;[59][62]
  4. Molecular manufacturing may reduce dependency on international trade,[59] a potential peace-promoting factor;
  5. Wars of aggression may pose a smaller economic threat to the aggressor since manufacturing is cheap and humans may not be needed on the battlefield.[59]
Since self-regulation by all state and non-state actors seems hard to achieve,[63] measures to mitigate war-related risks have mainly been proposed in the area of international cooperation.[59][64] International infrastructure may be expanded giving more sovereignty to the international level. This could help coordinate efforts for arms control. International institutions dedicated specifically to nanotechnology (perhaps analogously to the International Atomic Energy Agency IAEA) or general arms control may also be designed.[64] One may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour.[59] The Center for Responsible Nanotechnology also suggests some technical restrictions.[65] Improved transparency regarding technological capabilities may be another important facilitator for arms-control.

A grey goo is another catastrophic scenario, which was proposed by Eric Drexler in his 1986 book Engines of Creation[66] and has been a theme in mainstream media and fiction.[67][68] This scenario involves tiny self-replicating robots that consume the entire biosphere using it as a source of energy and building blocks. Nowadays, however, nanotech experts - including Drexler - discredit the scenario. According to Chris Phoenix a "so-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident".[69]

Warfare and mass destruction

The scenarios that have been explored most frequently are nuclear warfare and doomsday devices. Although the probability of a nuclear war per year is slim, Professor Martin Hellman has described it as inevitable in the long run: unless the probability approaches zero, there will inevitably come a day when civilization's luck runs out.[70] During the Cuban missile crisis, U.S. president John F. Kennedy estimated the odds of nuclear war as being "somewhere between one out of three and even".[71] The United States and Russia have a combined arsenal of 14,700 nuclear weapons,[72] and there is an estimated total of 15,700 nuclear weapons in existence worldwide.[72]
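Hellman's point can be restated numerically: if the annual probability of nuclear war is any fixed value above zero, the chance of at least one war approaches certainty as the horizon lengthens. The sketch below uses an assumed 1% annual probability purely for illustration, not as an estimate.

    # Probability of at least one nuclear war within n years, assuming a
    # constant, independent annual probability (the 1% figure is illustrative).
    annual_prob = 0.01

    for years in (10, 50, 100, 500):
        at_least_one = 1 - (1 - annual_prob) ** years
        print(f"Within {years:>3} years: {at_least_one:.1%}")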

While popular perception sometimes takes nuclear war as "the end of the world", experts assign low probability to human extinction from nuclear war.[73][74] In 1982, Brian Martin estimated that a US–Soviet nuclear exchange might kill 400–450 million directly, mostly in the United States, Europe and Russia and maybe several hundred million more through follow-up consequences in those same areas.[73]

Nuclear war could yield unprecedented human death tolls and habitat destruction. Detonating such a large amount of nuclear weaponry would have a long-term effect on the climate, causing cold weather and reduced sunlight[75] that may generate significant upheaval in advanced civilizations.[76]
Beyond nuclear weapons, other threats to humanity include biological warfare (BW) and bioterrorism. By contrast, chemical warfare, while able to create multiple local catastrophes, is unlikely to create a global one.

World population and agricultural crisis

The 20th century saw a rapid increase in human population due to medical developments and massive increases in agricultural productivity[77] such as the Green Revolution.[78] Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The Green Revolution in agriculture helped food production to keep pace with worldwide population growth, or actually enabled population growth. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation.[79] David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), place the maximum U.S. population for a sustainable economy at 200 million in their 1994 study Food, Land, Population and the U.S. Economy. The study concludes that to achieve a sustainable economy and avert disaster, the United States must reduce its population by at least one-third and world population will have to be reduced by two-thirds.[80]

The authors of this study believe that the mentioned agricultural crisis will begin to impact us after 2020, and will become critical after 2050. Geologist Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.[81][82]

Wheat is humanity's third most-produced cereal. Extant fungal infections such as Ug99[83] (a kind of stem rust) can cause 100% crop losses in most modern varieties. Little or no treatment is possible, and the infection spreads on the wind. Should the world's large grain-producing areas become infected, there would be a crisis in wheat availability, leading to price spikes and shortages in other food products.[84]

Non-anthropogenic

Asteroid impact

Several asteroids have collided with Earth in recent geological history. The Chicxulub asteroid, for example, is theorized to have caused the extinction of the non-avian dinosaurs 66 million years ago at the end of the Cretaceous. If such an object struck Earth, it could have a serious impact on civilization. It is even possible that humanity would be completely destroyed. For this to occur, the asteroid would need to be at least 1 km (0.62 mi) in diameter, but probably between 3 and 10 km (2–6 miles).[85] Asteroids with a 1 km diameter have impacted the Earth on average once every 500,000 years.[85] Larger asteroids are less common. Small near-Earth asteroids are regularly observed.
In 1.4 million years, the star Gliese 710 is expected to start causing an increase in the number of meteoroids in the vicinity of Earth when it passes within 1.1 light years of the Sun, perturbing the Oort cloud. Dynamic models by García-Sánchez predict a 5% increase in the rate of impact.[86] Objects perturbed from the Oort cloud take millions of years to reach the inner Solar System.

Extraterrestrial invasion

Extraterrestrial life could invade Earth[87] either to exterminate and supplant human life, enslave it under a colonial system, steal the planet's resources, or destroy the planet altogether.
Although evidence of alien life has never been documented, scientists such as Carl Sagan have postulated that the existence of extraterrestrial life is very likely. In 1969, the "Extra-Terrestrial Exposure Law" was added to the United States Code of Federal Regulations (Title 14, Section 1211) in response to the possibility of biological contamination resulting from the U.S. Apollo Space Program. It was removed in 1991.[88] Scientists consider such a scenario technically possible, but unlikely.[89]

Natural climate change

Climate change refers to a lasting change in the Earth's climate. The climate has ranged from ice ages to warmer periods when palm trees grew in Antarctica. It has been hypothesized that there was also a period called "snowball Earth" when all the oceans were covered in a layer of ice. These global climatic changes occurred slowly, prior to the rise of human civilization about 10 thousand years ago near the end of the last Major Ice Age when the climate became more stable. However, abrupt climate change on the decade time scale has occurred regionally. Since civilization originated during a period of stable climate, a natural variation into a new climate regime (colder or hotter) could pose a threat to civilization.

In the history of the Earth, many ice ages are known to have occurred. Further ice ages are possible at intervals of 40,000–100,000 years. An ice age would have a serious impact on civilization because vast areas of land (mainly in North America, Europe, and Asia) could become uninhabitable. It would still be possible to live in the tropical regions, but with possible loss of humidity and water. Currently, the world is in an interglacial period within a much older glacial event. The last glacial expansion ended about 10,000 years ago, and all civilizations evolved later than this. Scientists do not predict that a natural ice age will occur anytime soon.

Cosmic threats

A number of astronomical threats have been identified. Massive objects, e.g. a star, large planet or black hole, could be catastrophic if a close encounter occurred in the Solar System. In April 2008, it was announced that two simulations of long-term planetary movement, one at Paris Observatory and the other at University of California, Santa Cruz indicate a 1% chance that Mercury's orbit could be made unstable by Jupiter's gravitational pull sometime during the lifespan of the Sun. Were this to happen, the simulations suggest a collision with Earth could be one of four possible outcomes (the others being Mercury colliding with the Sun, colliding with Venus, or being ejected from the Solar System altogether). If Mercury were to collide with Earth, all life on Earth could be obliterated: an asteroid 15 km wide is believed to have caused the extinction of the non-avian dinosaurs, whereas Mercury is 4,879 km in diameter.[90]

Another threat might come from gamma ray bursts.[91] Both threats are very unlikely in the foreseeable future.[92]

A similar threat is a hypernova, produced when a hypergiant star explodes and then collapses, sending vast amounts of radiation sweeping across hundreds of lightyears. Hypernovas have never been observed; however, a hypernova may have been the cause of the Ordovician–Silurian extinction events. The nearest hypergiant is Eta Carinae, approximately 8,000 light-years distant.[93] The hazards from various astrophysical radiation sources were reviewed in 2011.[94]

If the Solar System were to pass through a dark nebula, a cloud of cosmic dust, severe global climate change would occur.[95]

A powerful solar flare or solar superstorm, which is a drastic and unusual decrease or increase in the Sun's power output, could have severe consequences for life on Earth.

If our universe lies within a false vacuum, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light, destroying all that we know without forewarning.[96] Such an occurrence is called a vacuum metastability event.

Geomagnetic reversal

The magnetic poles of the Earth shifted many times in geologic history. The duration of such a shift is still debated. Theories exist that during such times, the Earth's magnetic field would be substantially weakened, threatening civilization by allowing radiation from the Sun, especially solar wind, solar flares or cosmic radiation, to reach the surface. These theories have been somewhat discredited, as statistical analysis shows no evidence for a correlation between past reversals and past extinctions.[97][98]

Global pandemic

Numerous historical examples of pandemics[99] have had a devastating effect on large numbers of people. The present, unprecedented scale and speed of human movement make it more difficult than ever to contain an epidemic through local quarantines. A global pandemic has become a realistic threat to human civilization.
Naturally evolving pathogens will ultimately develop an upper limit to their virulence.[100] Pathogens with the highest virulence, quickly killing their hosts, reduce their chances of spreading the infection to new hosts or carriers.[101] This simple model predicts that - if virulence and transmission are not genetically linked - pathogens will evolve towards low virulence and rapid transmission. However, this is not necessarily a safeguard against a global catastrophe, for the following reasons:

1. The fitness advantage of limited virulence is primarily a function of a limited number of hosts. Any pathogen with high virulence, a high transmission rate and a long incubation time may have already caused a catastrophic pandemic before virulence is ultimately limited through natural selection.

2. In models where virulence level and rate of transmission are related, high levels of virulence can evolve.[102] Virulence is instead limited by the existence of complex populations of hosts with different susceptibilities to infection, or by some hosts being geographically isolated.[100] The size of the host population and competition between different strains of pathogens can also alter virulence.[103]

3. A pathogen that infects humans as a secondary host and primarily infects another species (a zoonosis) has no constraints on its virulence in people, since the accidental secondary infections do not affect its evolution.[104]

Naturally arising pathogens and Neobiota

In a similar scenario to biotechnology risks, naturally evolving organisms can disrupt essential ecosystem functions.

An example of a pathogen able to threaten global food security is the wheat rust Ug99.

Other examples are neobiota, i.e. organisms that become disruptive to ecosystems once transported - often as a result of human activity - to a new geographical region. Normally the risk is a local disruption. If it becomes coupled with serious crop failures and a global famine it may, however, pose a global catastrophic risk.

Megatsunami

A remote possibility is a megatsunami. It has been suggested that a megatsunami caused by the collapse of a volcanic island could, for example, destroy the entire East Coast of the United States, but such predictions are based on incorrect assumptions and the likelihood of this happening has been greatly exaggerated in the media.[105] While none of these scenarios are likely to destroy humanity completely, they could regionally threaten civilization. There have been two recent high-fatality tsunamis—after the 2011 Tōhoku earthquake and the 2004 Indian Ocean earthquake. A megatsunami could have astronomical origins as well, such as an asteroid impact in an ocean.[106]

Volcanism

A geological event such as massive flood basalt, volcanism, or the eruption of a supervolcano[107] could lead to a so-called volcanic winter, similar to a nuclear winter. One such event, the Toba eruption,[108] occurred in Indonesia about 71,500 years ago. According to the Toba catastrophe theory,[109] the event may have reduced human populations to only a few tens of thousands of individuals. Yellowstone Caldera is another such supervolcano, having undergone 142 or more caldera-forming eruptions in the past 17 million years.[110] A massive volcanic eruption would eject extraordinary volumes of volcanic dust, toxic gases and greenhouse gases into the atmosphere, with serious effects on global climate (towards extreme global cooling: volcanic winter if short term, and ice age if long term) or global warming (if greenhouse gases were to prevail).
When the supervolcano at Yellowstone last erupted 640,000 years ago, the magma and ash ejected from the caldera covered most of the United States west of the Mississippi river and part of northeastern Mexico.[111] Another such eruption could threaten civilization.

Research published in 2011 finds evidence that massive volcanic eruptions caused massive coal combustion, supporting models for significant generation of greenhouse gases. Researchers have suggested that massive volcanic eruptions through coal beds in Siberia would generate significant greenhouse gases and cause a runaway greenhouse effect.[112] Massive eruptions can also throw enough pyroclastic debris and other material into the atmosphere to partially block out the sun and cause a volcanic winter, as happened on a smaller scale in 1816 following the eruption of Mount Tambora, the so-called Year Without a Summer. Such an eruption might cause the immediate deaths of millions of people several hundred miles from the eruption, and perhaps billions of deaths worldwide due to failure of the monsoon, resulting in major crop failures and starvation on a massive scale.[113]

A much more speculative concept is the Verneshot: a hypothetical volcanic eruption caused by the buildup of gas deep underneath a craton. Such an event may be forceful enough to launch an extreme amount of material from the crust and mantle into a sub-orbital trajectory.

Precautions and prevention

Planetary management and respecting planetary boundaries have been proposed as approaches to preventing ecological catastrophes. Within the scope of these approaches, the field of geoengineering encompasses the deliberate large-scale engineering and manipulation of the planetary environment to combat or counteract anthropogenic changes in atmospheric chemistry. Space colonization is a proposed alternative to improve the odds of surviving an extinction scenario.[114] Solutions of this scope may require megascale engineering. Food storage has been proposed globally, but the monetary cost would be high. Furthermore, it would likely contribute to the current millions of deaths per year due to malnutrition. David Denkenberger and Joshua Pearce have proposed in Feeding Everyone No Matter What a variety of alternate foods for global catastrophic risks such as nuclear winter, volcanic winter, asteroid/comet impact, and abrupt climate change.[115] The alternate foods convert fossil fuels or biomass (e.g. trees and wood) into food.[116] However, significantly more research is needed in this field to make it viable for the entire global population to survive using these methods.[117] Asteroid deflection has been proposed to reduce impact risk. Nuclear disarmament has been proposed to reduce the nuclear winter risk. Precautions being taken include:
  • Some survivalists stocking survival retreats with multiple-year food supplies.
  • The Svalbard Global Seed Vault, which is a vault buried 400 feet (120 m) inside a mountain in the Arctic with over ten tons of seeds from all over the world. 100 million seeds from more than 100 countries were placed inside as a precaution to preserve all the world’s crops. A prepared box of rice originating from 104 countries was the first to be deposited in the vault, where it will be kept at −18 °C (0 °F). Thousands more plant species will be added as organizers attempt to get specimens of every agricultural plant in the world. Cary Fowler, executive director of the Global Crop Diversity Trust said that by preserving as many varieties as possible, the options open to farmers, scientists and governments were maximized. “The opening of the seed vault marks a historic turning point in safeguarding the world’s crop diversity,” he said. Even if the permafrost starts to melt, the seeds will be safe inside the vault for up to 200 years. Some of the seeds will even be viable for a millennium or more, including barley, which can last 2,000 years, wheat (1,700 years), and sorghum (almost 20,000 years).[118]

Organizations

The Bulletin of the Atomic Scientists (est. 1945) is one of the oldest global risk organizations, founded after the public became alarmed by the potential of atomic warfare in the aftermath of WWII. It studies risks associated with nuclear war and energy and famously maintains the Doomsday Clock established in 1947. The Foresight Institute (est. 1986) examines the risks of nanotechnology and its benefits. It was one of the earliest organizations to study the unintended consequences of otherwise harmless technology gone haywire at a global scale. It was founded by K. Eric Drexler who postulated "grey goo".[119][120]

Beginning after 2000, a growing number of scientists, philosophers and tech billionaires created organizations devoted to studying global risks both inside and outside of academia.[121]

Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence and the Singularity.[122] The top donors include Peter Thiel and Jed McCaleb.[123] The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe.[124] Most of the research money funds projects at universities.[125] The Global Catastrophic Risk Institute (est. 2011) is a think tank covering all aspects of catastrophic risk. It is funded by the NGO Social and Environmental Entrepreneurs. The Global Challenges Foundation (est. 2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases an annual report on the state of global risks.[12][13] The Future of Life Institute (est. 2014) aims to support research and initiatives for safeguarding life, considering new technologies and challenges facing humanity.[126] Elon Musk is one of its biggest donors.[127] The Nuclear Threat Initiative seeks to reduce global threats from nuclear, biological and chemical weapons, and to contain damage after an event.[128] It maintains a nuclear material security index.[129]

University-based organizations include the Future of Humanity Institute (est. 2005), which researches questions about humanity's long-term future, particularly existential risk. It was founded by Nick Bostrom and is based at Oxford University. The Centre for the Study of Existential Risk (est. 2012) is a Cambridge-based organization which studies four major technological risks: artificial intelligence, biotechnology, global warming and warfare. All are man-made risks, as Huw Price explained to the AFP news agency: "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology". He added that when this happens "we're no longer the smartest things around," and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."[130] Stephen Hawking is an acting adviser. The Millennium Alliance for Humanity & The Biosphere is a Stanford University-based organization focusing on many issues related to global catastrophe by bringing together members of academia in the humanities.[131][132] It was founded by Paul Ehrlich among others.[133] Stanford University also has the Center for International Security and Cooperation, which focuses on political cooperation to reduce global catastrophic risk.[134]

Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called Global Alert and Response (GAR) which monitors and responds to global epidemic crises.[135] GAR helps member states with training and coordination of response to epidemics.[136] The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program, which aims to prevent and contain naturally generated pandemics at their source.[137] The Lawrence Livermore National Laboratory has a division called the Global Security Principal Directorate which researches issues such as bio-security and counter-terrorism on behalf of the government.[138]

Sunday, April 23, 2017

IPCC Shows Growth in Arctic Ice Caps

Note that, since Antarctic ice is about 10X the mass of Arctic ice, this means that from 1979 to 2012 there was a 1.1-1.15%/decade growth in the combined ice caps over this period.

Continuing the trends reported in AR4, there is very high confidence that the Arctic sea ice extent (annual, multi-year and perennial) decreased over the period 1979–2012 (Figure TS.1). The rate of the annual decrease was very likely between 3.5 and 4.1% per decade (range of 0.45 to 0.51 million km2 per decade). The average decrease in decadal extent of annual Arctic sea ice has been most rapid in summer and autumn (high confidence), but the extent has decreased in every season, and in every successive decade since 1979 (high confidence).

The extent of Arctic perennial and multi-year ice decreased between 1979 and 2012 (very high confidence). The rates are very likely 11.5 [9.4 to 13.6]% per decade (0.73 to 1.07 million km2 per decade) for the sea ice extent at summer minimum (perennial ice) and very likely 13.5 [11 to 16] % per decade for multi-year ice. There is medium confidence from reconstructions that the current (1980–2012) Arctic summer sea ice retreat was unprecedented and SSTs were anomalously high in the perspective of at least the last 1,450 years. {4.2.2, 5.5.2}

It is likely that the annual period of surface melt on Arctic perennial sea ice lengthened by 5.7 [4.8 to 6.6] days per decade over the period 1979–2012. Over this period, in the region between the East Siberian Sea and the western Beaufort Sea, the duration of ice-free conditions increased by nearly 3 months. {4.2.2}

There is high confidence that the average winter sea ice thickness within the Arctic Basin decreased between 1980 and 2008. The average decrease was likely between 1.3 m and 2.3 m.

High confidence in this assessment is based on observations from multiple sources: submarine, electromagnetic probes and satellite altimetry; and is consistent with the decline in multi-year and perennial ice extent. Satellite measurements made in the period 2010–2012 show a decrease in sea ice volume compared to those made over the period 2003–2008 (medium confidence). There is high confidence that in the Arctic, where the sea ice thickness has decreased, the sea ice drift speed has increased. {4.2.2}

It is very likely that the annual Antarctic sea ice extent increased at a rate of between 1.2 and 1.8% per decade (0.13 to 0.20 million km2 per decade) between 1979 and 2012 (very high confidence). There was a greater increase in sea ice area, due to a decrease in the percentage of open water within the ice pack. There is high confidence that there are strong regional differences in this annual rate, with some regions increasing in extent/area and some decreasing. There are also contrasting regions around the Antarctic where the ice-free season has lengthened, and others where it has decreased over the satellite period (high confidence). {4.2.3}

Tuesday, April 18, 2017

RNA world

From Wikipedia, the free encyclopedia

A comparison of RNA (left) with DNA (right), showing the helices and nucleobases each employs

The RNA world refers to the self-replicating ribonucleic acid (RNA) molecules hypothesised to have been the precursors to all current life on Earth.[1][2][3] The hypothesis that current life on Earth descends from an RNA world is widely accepted,[4][5] and is supported by several lines of evidence including how ephemeral RNA was protected on the early earth.[6] Alternative chemical paths to life have been proposed,[7] and RNA-based life may not have been the first life to exist.[8][9]

The RNA world would have eventually been replaced by the DNA-RNA-protein world of today, likely through an intermediate stage of ribonucleoprotein enzymes such as the ribosome and ribozymes, since it is argued that proteins large enough to self-fold and have useful activities would only have come about after RNA was available to catalyze peptide ligation or amino acid polymerization.[9] DNA is thought to have taken over the role of data storage due to its increased stability,[10] while proteins, through a greater variety of monomers (amino acids), replaced RNA's role in specialized biocatalysis.

The RNA world hypothesis is supported by many independent lines of evidence, such as the observations that RNA is central to the translation process and that small RNAs can catalyze all of the chemical group and information transfers required for life.[9][11] The structure of the ribosome has been called the "smoking gun," as it showed that the ribosome is a ribozyme, with a central core of RNA and no amino acid side chains within 18 angstroms of the active site where peptide bond formation is catalyzed.[8] Many of the most critical components of cells (those that evolve the slowest) are composed mostly or entirely of RNA. Also, many critical cofactors (ATP, Acetyl-CoA, NADH, etc.) are either nucleotides or substances clearly related to them. This would mean that the RNA and nucleotide cofactors in modern cells could be an evolutionary remnant of an RNA-based enzymatic system that preceded the protein-based one seen in all extant life.

History

One of the challenges in studying abiogenesis is that the system of reproduction and metabolism utilized by all extant life involves three distinct types of interdependent macromolecules (DNA, RNA, and protein). This suggests that life could not have arisen in its current form, and mechanisms have then been sought whereby the current system might have arisen from a simpler precursor system. The concept of RNA as a primordial molecule[9] can be found in papers by Francis Crick[12] and Leslie Orgel,[13] as well as in Carl Woese's 1967 book The Genetic Code.[14] In 1962 the molecular biologist Alexander Rich, of the Massachusetts Institute of Technology, had posited much the same idea in an article he contributed to a volume issued in honor of Nobel-laureate physiologist Albert Szent-Györgyi.[15] Hans Kuhn in 1972 laid out a possible process by which the modern genetic system might have arisen from a nucleotide-based precursor, and this led Harold White in 1976 to observe that many of the cofactors essential for enzymatic function are either nucleotides or could have been derived from nucleotides. He proposed that these nucleotide cofactors represent "fossils of nucleic acid enzymes".[16] The phrase "RNA World" was first used by Nobel laureate Walter Gilbert in 1986, in a commentary on how recent observations of the catalytic properties of various forms of RNA fit with this hypothesis.[17]

Properties of RNA

The properties of RNA make the idea of the RNA world hypothesis conceptually plausible, though its general acceptance as an explanation for the origin of life requires further evidence.[15] RNA is known to form efficient catalysts and its similarity to DNA makes clear its ability to store information. Opinions differ, however, as to whether RNA constituted the first autonomous self-replicating system or was a derivative of a still-earlier system.[9] One version of the hypothesis is that a different type of nucleic acid, termed pre-RNA, was the first one to emerge as a self-reproducing molecule, to be replaced by RNA only later. On the other hand, the recent finding that activated pyrimidine ribonucleotides can be synthesized under plausible prebiotic conditions[18] means that it is premature to dismiss the RNA-first scenarios.[9] Suggestions for 'simple' pre-RNA nucleic acids have included peptide nucleic acid (PNA), threose nucleic acid (TNA) or glycol nucleic acid (GNA).[19][20] Despite their structural simplicity and possession of properties comparable with RNA, the chemically plausible generation of "simpler" nucleic acids under prebiotic conditions has yet to be demonstrated.[21]

RNA as an enzyme

RNA enzymes, or ribozymes, are found in today's DNA-based life and could be examples of living fossils. Ribozymes play vital roles, such as those in the ribosome, which is vital for protein synthesis. Many other ribozyme functions exist; for example, the hammerhead ribozyme performs self-cleavage[22] and an RNA polymerase ribozyme can synthesize a short RNA strand from a primed RNA template.[23]

Among the enzymatic properties important for the beginning of life are:
  • Self-replication. The ability to self-replicate, or synthesize other RNA molecules; relatively short RNA molecules that can synthesize others have been artificially produced in the lab. The shortest was 165 bases long, though it has been estimated that only part of the molecule was crucial for this function. One version, 189 bases long, had an error rate of just 1.1% per nucleotide when synthesizing an 11-nucleotide RNA strand from primed template strands (see the sketch after this list for what such an error rate implies for copying fidelity).[24] This 189-base ribozyme could polymerize a template of at most 14 nucleotides in length, which is too short for self-replication, but it is a potential lead for further investigation. The longest primer extension performed by a ribozyme polymerase was 20 bases.[25] In 2016, researchers reported the use of in vitro evolution to dramatically improve the activity and generality of an RNA polymerase ribozyme by selecting variants that can synthesize functional RNA molecules from an RNA template. Each RNA polymerase ribozyme was engineered to remain linked to its newly synthesized RNA strand, which allowed the team to isolate successful polymerases. The isolated RNA polymerases were again used for another round of evolution. After several rounds of evolution, they obtained one RNA polymerase ribozyme, called 24-3, that was able to copy almost any other RNA, from small catalysts to long RNA-based enzymes. Particular RNAs were amplified up to 10,000 times, a first RNA version of the polymerase chain reaction (PCR). The RNA polymerase is not yet able to make copies of itself.[26]
  • Catalysis. The ability to catalyze simple chemical reactions—which would enhance creation of molecules that are building blocks of RNA molecules (i.e., a strand of RNA which would make creating more strands of RNA easier). Relatively short RNA molecules with such abilities have been artificially formed in the lab.[27][28] A recent study showed that almost any nucleic acid can evolve into a catalytic sequence under appropriate selection. For instance, an arbitrarily chosen 50-nucleotide DNA fragment encoding for the Bos taurus (cattle) albumin mRNA was subjected to test-tube evolution to derive a catalytic DNA (DNAzyme) with RNA-cleavage activity. After only a few weeks, a DNAzyme with significant catalytic activity had evolved.[29] In general, DNA is much more chemically inert than RNA and hence much more resistant to obtaining catalytic properties. If in vitro evolution works for DNA it will happen much more easily with RNA.
  • Amino acid-RNA ligation. The ability to conjugate an amino acid to the 3'-end of an RNA in order to use its chemical groups or provide a long-branched aliphatic side-chain.[30]
  • Peptide bond formation. The ability to catalyse the formation of peptide bonds between amino acids to produce short peptides or longer proteins. This is done in modern cells by ribosomes, a complex of several RNA molecules known as rRNA together with many proteins. The rRNA molecules are thought to be responsible for its enzymatic activity, as no amino acid molecules lie within 18 Å of the enzyme's active site,[15] and, when the majority of the ribosomal proteins were stringently removed, the resulting ribosome retained its full peptidyl transferase activity, remaining fully able to catalyze the formation of peptide bonds between amino acids.[31] A much shorter RNA molecule with the ability to form peptide bonds has been synthesized in the laboratory, and it has been suggested that rRNA evolved from a similar molecule.[32] It has also been suggested that amino acids may initially have been involved with RNA molecules as cofactors enhancing or diversifying their enzymatic capabilities, before evolving into more complex peptides. Similarly, tRNA is suggested to have evolved from RNA molecules that began to catalyze amino acid transfer.[33]
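
The practical meaning of the 1.1% per-nucleotide error rate quoted in the first item can be illustrated with a rough calculation (a minimal sketch, not from the cited studies; it assumes errors at each position are independent):

  # Illustrative fidelity estimate for an RNA polymerase ribozyme.
  # Assumes each position is copied incorrectly with independent
  # probability equal to the 1.1% per-nucleotide error rate quoted above.
  error_rate = 0.011

  def error_free_probability(length, error_rate=error_rate):
      """Probability that a strand of `length` nucleotides is copied without any error."""
      return (1.0 - error_rate) ** length

  print(error_free_probability(11))    # ~0.89 for the 11-nucleotide product described above
  print(error_free_probability(189))   # ~0.12 for a strand as long as the 189-base ribozyme itself

Under these assumptions, most copies of a short product are error-free, but a copy of the full-length ribozyme would usually contain at least one mistake, which is one way to see why accurate self-replication is the hard step.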

RNA in information storage

RNA is a molecule very similar to DNA, with only two chemical differences. The overall structures of RNA and DNA are very similar: one strand of DNA and one of RNA can bind to form a double helix. This makes the storage of information in RNA possible in a way very similar to the storage of information in DNA. However, RNA is less stable.

Comparison of DNA and RNA structure

The major difference between RNA and DNA is the presence of a hydroxyl group at the 2'-position of the ribose sugar in RNA.[15] This group makes the molecule less stable: when not constrained in a double helix, the 2' hydroxyl can chemically attack the adjacent phosphodiester bond and cleave the phosphodiester backbone. The hydroxyl group also forces the ribose into the C3'-endo sugar conformation, unlike the C2'-endo conformation of the deoxyribose sugar in DNA. This forces an RNA double helix to change from a B-DNA structure to one more closely resembling A-DNA.
RNA also uses a different set of bases than DNA: adenine, guanine, cytosine and uracil, instead of adenine, guanine, cytosine and thymine. Chemically, uracil is similar to thymine, differing only by a methyl group, and its production requires less energy.[34] In terms of base pairing, this has no effect: adenine readily binds either uracil or thymine. Uracil is, however, also a product of damage to cytosine (by deamination), which makes RNA particularly susceptible to mutations that can replace a GC base pair with a GU (wobble) or AU base pair.

RNA is thought to have preceded DNA because of their ordering in the biosynthetic pathways: the deoxyribonucleotides used to make DNA are made from ribonucleotides, the building blocks of RNA, by removing the 2'-hydroxyl group. As a consequence, a cell must have the ability to make RNA before it can make DNA.

Limitations of information storage in RNA

The chemical properties of RNA make large RNA molecules inherently fragile; they can easily be broken down into their constituent nucleotides through hydrolysis.[35][36] These limitations do not make the use of RNA as an information-storage system impossible; they simply make it energy-intensive (damaged RNA molecules must be repaired or replaced) and prone to mutation. While this makes RNA unsuitable for current 'DNA-optimised' life, it may have been acceptable for more primitive life.

RNA as a regulator

Riboswitches have been found to act as regulators of gene expression, particularly in bacteria, but also in plants and archaea. Riboswitches alter their secondary structure in response to the binding of a metabolite. This change in structure can result in the formation or disruption of a terminator, truncating or permitting transcription respectively.[37] Alternatively, riboswitches may bind or occlude the Shine-Dalgarno sequence, affecting translation.[38] It has been suggested that these originated in an RNA-based world.[39] In addition, RNA thermometers regulate gene expression in response to temperature changes.[40]

Support and difficulties

The RNA world hypothesis is supported by RNA's ability to store, transmit, and duplicate genetic information, as DNA does. RNA can also act as a ribozyme, a special type of enzyme. Because it can perform the tasks of both DNA and enzymes, RNA is believed to have once been capable of supporting independent life forms.[15] Some viruses use RNA as their genetic material, rather than DNA.[41] Further, while nucleotides were not found in experiments based on the Miller-Urey experiment, their formation in prebiotically plausible conditions has now been reported, as noted above;[18] the purine base adenine is merely a pentamer of hydrogen cyanide. Experiments with basic ribozymes, like Bacteriophage Qβ RNA, have shown that simple self-replicating RNA structures can withstand even strong selective pressures (e.g., opposite-chirality chain terminators).[42]

Since there were no known chemical pathways for the abiogenic synthesis of nucleotides from the pyrimidine nucleobases cytosine and uracil under prebiotic conditions, some have argued that the first nucleic acids did not contain these nucleobases, which are found in life's nucleic acids.[43] The nucleobase cytosine has a half-life in isolation of 19 days at 100 °C (212 °F) and 17,000 years in freezing water, which some argue is too short on the geologic time scale for accumulation.[44] Others have questioned whether ribose and other backbone sugars could be stable enough to be found in the original genetic material,[45] and have raised the issue that all ribose molecules would have had to be the same enantiomer, as any nucleotide of the wrong chirality acts as a chain terminator.[46]
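
To see why these half-lives are considered short on geologic time scales, a simple exponential-decay estimate can be made (illustrative only; it assumes first-order decay with the quoted half-lives and no ongoing resupply of cytosine):

  # Fraction of cytosine remaining after time t, given a half-life,
  # assuming simple first-order (exponential) decay and no fresh synthesis.
  def fraction_remaining(t, half_life):
      return 0.5 ** (t / half_life)

  # At 100 degrees C (half-life ~19 days), almost nothing survives a single year:
  print(fraction_remaining(365, 19))            # ~1.6e-06
  # Even in freezing water (half-life ~17,000 years), a million years leaves:
  print(fraction_remaining(1_000_000, 17_000))  # ~2e-18

Under these assumptions, any prebiotic cytosine inventory would have to be replenished continuously to remain available over geologic time.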

Pyrimidine ribonucleosides and their respective nucleotides have been prebiotically synthesised by a sequence of reactions that bypass free sugars and assemble the molecules in a stepwise fashion using nitrogenous and oxygenous chemistries. In a series of publications, John Sutherland and his team at the School of Chemistry, University of Manchester, have demonstrated high-yielding routes to cytidine and uridine ribonucleotides built from small two- and three-carbon fragments such as glycolaldehyde, glyceraldehyde or glyceraldehyde-3-phosphate, cyanamide and cyanoacetylene. One of the steps in this sequence allows the isolation of enantiopure ribose aminooxazoline if the enantiomeric excess of glyceraldehyde is 60% or greater, which is of possible interest with respect to biological homochirality.[47] This can be viewed as a prebiotic purification step, in which this compound spontaneously crystallised out from a mixture of the other pentose aminooxazolines. Aminooxazolines can react with cyanoacetylene in a mild and highly efficient manner, controlled by inorganic phosphate, to give the cytidine ribonucleotides. Photoanomerization with UV light allows inversion about the 1' anomeric centre to give the correct beta stereochemistry; one problem with this chemistry is the selective phosphorylation of alpha-cytidine at the 2' position.[48] However, in 2009 they showed that the same simple building blocks allow access, via phosphate-controlled nucleobase elaboration, to 2',3'-cyclic pyrimidine nucleotides directly, which are known to be able to polymerise into RNA.[18] Organic chemist Donna Blackmond described this finding as "strong evidence" in favour of the RNA world.[49] However, John Sutherland said that while his team's work suggests that nucleic acids played an early and central role in the origin of life, it did not necessarily support the RNA world hypothesis in the strict sense, which he described as a "restrictive, hypothetical arrangement".[50]

The Sutherland group's 2009 paper also highlighted the possibility for the photo-sanitization of the pyrimidine-2',3'-cyclic phosphates.[18] A potential weakness of these routes is the generation of enantioenriched glyceraldehyde, or its 3-phosphate derivative (glyceraldehyde prefers to exist as its keto tautomer dihydroxyacetone).[citation needed]

On August 8, 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of RNA (adenine, guanine and related organic molecules) may have been formed extraterrestrially in outer space.[51][52][53] On August 29, 2012, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth.[54][55] Because glycolaldehyde is needed to form RNA, this finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.[56]

"Molecular biologist's dream"

"Molecular biologist's dream" is a phrase coined by Gerald Joyce and Leslie Orgel to refer to the problem of emergence of self-replicating RNA molecules, as any movement towards an RNA world on a properly modeled prebiotic early Earth would have been continuously suppressed by destructive reactions.[57] It was noted that many of the steps needed for the nucleotides formation do not proceed efficiently in prebiotic conditions.[58] Joyce and Orgel specifically referred the molecular biologist's dream to "a magic catalyst" that could "convert the activated nucleotides to a random ensemble of polynucleotide sequences, a subset of which had the ability to replicate".[57]

Joyce and Orgel further argued that nucleotides cannot link unless there is some activation of the phosphate group, whereas the only effective activating groups for this, particularly adenosine triphosphate, are "totally implausible in any prebiotic scenario".[57] According to Joyce and Orgel, even with phosphate-group activation, the basic polymer product would have 5',5'-pyrophosphate linkages, while the 3',5'-phosphodiester linkages present in all known RNA would be much less abundant.[57] The associated molecules would also have been prone to the addition of incorrect nucleotides or to reactions with the numerous other substances likely to have been present.[57] The RNA molecules would also have been continuously degraded by destructive processes such as spontaneous hydrolysis, which would have operated on the early Earth.[57] Joyce and Orgel proposed to reject "the myth of a self-replicating RNA molecule that arose de novo from a soup of random polynucleotides"[57][citation needed][unreliable source?] and hypothesised a scenario in which prebiotic processes furnish pools of enantiopure beta-D-ribonucleosides.[59] In a later paper, Joyce describes "The myth of a small RNA molecule that arises de novo and can replicate efficiently and with high fidelity under plausible prebiotic conditions" as a strawman.[60]

Prebiotic RNA synthesis

Nucleotides are the fundamental molecules that combine in series to form RNA. They consist of a nitrogenous base attached to a sugar-phosphate backbone. RNA is made of long stretches of specific nucleotides arranged so that their sequence of bases carries information. The RNA world hypothesis holds that in the primordial soup (or sandwich) there existed free-floating nucleotides. These nucleotides regularly formed bonds with one another, which often broke because the change in energy was so low. However, certain sequences of bases have catalytic properties that lower the energy of their chain being created, enabling them to stay together for longer periods of time. As each chain grew longer, it attracted more matching nucleotides faster, so that chains now formed faster than they broke down.

These chains have been proposed by some as the first, primitive forms of life.[61] In an RNA world, different sets of RNA strands would have had different replication outputs, which would have increased or decreased their frequency in the population, i.e. natural selection. As the fittest sets of RNA molecules expanded their numbers, novel catalytic properties added by mutation, which benefitted their persistence and expansion, could accumulate in the population. Such an autocatalytic set of ribozymes, capable of self-replication in about an hour, has been identified. It was produced by molecular competition (in vitro evolution) of candidate enzyme mixtures.[62]
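
The selection dynamic described above can be made concrete with a toy simulation (purely schematic; the replication rates and generation count are arbitrary illustrative values, not measured quantities): two replicating RNA species with slightly different replication outputs rapidly diverge in frequency.

  # Two self-replicating RNA species with different per-generation
  # replication outputs; the faster replicator comes to dominate.
  fast, slow = 1.0, 1.0          # starting population sizes
  r_fast, r_slow = 2.0, 1.8      # offspring per molecule per generation (arbitrary)

  for generation in range(20):
      fast *= r_fast
      slow *= r_slow

  total = fast + slow
  print(fast / total, slow / total)  # ~0.89 vs ~0.11 after 20 generations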

Competition between RNA molecules may have favored the emergence of cooperation between different RNA chains, opening the way for the formation of the first protocell. Eventually, RNA chains developed with catalytic properties that helped amino acids bind together (a process called peptide bonding). These amino acids could then assist with RNA synthesis, giving those RNA chains that could serve as ribozymes a selective advantage. The ability to catalyze one step in protein synthesis, aminoacylation of RNA, has been demonstrated in a short (five-nucleotide) segment of RNA.[63]

One of the problems with the RNA world hypothesis is discovering the pathway by which RNA-based systems were upgraded to the DNA system. Geoffrey Diemer and Ken Stedman, at Portland State University in Oregon, may have found a solution. While conducting a survey of viruses in a hot acidic lake in Lassen Volcanic National Park, California, they uncovered evidence that a simple DNA virus had acquired a gene from a completely unrelated RNA-based virus. Virologist Luis Villarreal of the University of California, Irvine also suggests that viruses capable of converting an RNA-based gene into DNA and then incorporating it into a more complex DNA-based genome might have been common in the Virus world during the RNA-to-DNA transition some 4 billion years ago.[64][65] This finding bolsters the argument for the transfer of information from the RNA world to the emerging DNA world before the emergence of the Last Universal Common Ancestor. The research also suggests that the diversity of this virus world is still with us.

In March 2015, NASA scientists reported that, for the first time, complex organic compounds of DNA and RNA, including uracil, cytosine and thymine, had been formed in the laboratory under outer-space conditions, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemicals found in the Universe, may have been formed in giant red stars or in interstellar dust and gas clouds, according to the scientists.[66]

Viroids

Additional evidence supporting the concept of an RNA world has come from research on viroids, the first representatives of a novel domain of "subviral pathogens".[67][68] Viroids are mostly plant pathogens, which consist of short stretches (a few hundred nucleobases) of highly complementary, circular, single-stranded, non-coding RNA without a protein coat. Compared with other infectious plant pathogens, viroids are extremely small, ranging from 246 to 467 nucleobases. In comparison, the genomes of the smallest known viruses capable of causing an infection are about 2,000 nucleobases long.[69]

In 1989, Diener proposed that, based on their characteristic properties, viroids are more plausible "living relics" of the RNA world than are introns or other RNAs then so considered.[70] If so, viroids have attained potential significance beyond plant pathology to evolutionary biology, by representing the most plausible macromolecules known capable of explaining crucial intermediate steps in the evolution of life from inanimate matter (see: abiogenesis).

Apparently, Diener's hypothesis lay dormant until 2014, when Flores et al. published a review paper, in which Diener's evidence supporting his hypothesis was summarized.[71] In the same year, a New York Times science writer published a popularized version of Diener's proposal, in which, however, he mistakenly credited Flores et al. with the hypothesis' original conception.[72]

Pertinent viroid properties listed in 1989 are:
  1. their small size, imposed by error-prone replication;
  2. their high guanine and cytosine content, which increases stability and replication fidelity;
  3. their circular structure, which assures complete replication without genomic tags;
  4. existence of structural periodicity, which permits modular assembly into enlarged genomes;
  5. their lack of protein-coding ability, consistent with a ribosome-free habitat; and
  6. replication mediated in some by ribozymes—the fingerprint of the RNA world.[71]
The existence, in extant cells, of RNAs with molecular properties predicted for RNAs of the RNA World constitutes an additional argument supporting the RNA World hypothesis.

Origin of sex

Eigen et al.[73] and Woese[74] proposed that the genomes of early protocells were composed of single-stranded RNA, and that individual genes corresponded to separate RNA segments, rather than being linked end-to-end as in present-day DNA genomes. A protocell that was haploid (one copy of each RNA gene) would be vulnerable to damage, since a single lesion in any RNA segment would be potentially lethal to the protocell (e.g. by blocking replication or inhibiting the function of an essential gene).

Vulnerability to damage could be reduced by maintaining two or more copies of each RNA segment in each protocell, i.e. by maintaining diploidy or polyploidy. Genome redundancy would allow a damaged RNA segment to be replaced by an additional replication of its homolog. However, for such a simple organism, the resources tied up in the genetic material would be a large fraction of the total resource budget. Under limited resource conditions, the protocell's reproductive rate would likely be inversely related to ploidy number. The protocell's fitness would be reduced by the costs of redundancy. Consequently, coping with damaged RNA genes while minimizing the costs of redundancy would likely have been a fundamental problem for early protocells.

A cost-benefit analysis was carried out in which the costs of maintaining redundancy were balanced against the costs of genome damage.[75] This analysis led to the conclusion that, under a wide range of circumstances, the selected strategy would be for each protocell to be haploid, but to periodically fuse with another haploid protocell to form a transient diploid. Retention of the haploid state maximizes the growth rate. The periodic fusions permit mutual reactivation of otherwise lethally damaged protocells. If at least one damage-free copy of each RNA gene is present in the transient diploid, viable progeny can be formed. For two, rather than one, viable daughter cells to be produced, an extra replication of the intact RNA gene homologous to any damaged RNA gene would be required prior to the division of the fused protocell. The cycle of haploid reproduction, with occasional fusion to a transient diploid state followed by splitting back to the haploid state, can be considered the sexual cycle in its most primitive form.[75][76] In the absence of this sexual cycle, haploid protocells with damage in an essential RNA gene would simply die.
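
The qualitative trade-off can be sketched with a toy cost-benefit model (a minimal illustration, not the published analysis in [75]; the gene count, per-copy damage probability, and cost function are all arbitrary assumptions):

  # Toy cost-benefit comparison of ploidy strategies for an RNA protocell.
  # Assumptions (all illustrative): G essential genes; each gene copy is
  # independently damaged with probability p per generation; growth rate
  # scales inversely with the number of genome copies carried.
  G = 10        # number of essential RNA genes
  p = 0.05      # probability that a given gene copy is damaged

  def fitness(ploidy):
      survival = (1 - p ** ploidy) ** G   # every gene needs at least one intact copy
      growth = 1.0 / ploidy               # cost of replicating and carrying extra copies
      return growth * survival

  print(fitness(1))          # haploid: ~0.60
  print(fitness(2))          # permanent diploid: ~0.49
  # Transient fusion of two haploids gives diploid-like damage tolerance
  # while reproduction remains haploid (growth factor close to 1):
  print((1 - p ** 2) ** G)   # ~0.98 chance the fused pair has a full intact gene set

Under these illustrative numbers, permanent diploidy pays too high a replication cost, while haploid cells that occasionally fuse capture most of the damage tolerance, which is the intuition behind the conclusion stated above.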

This model for the early sexual cycle is hypothetical, but it is very similar to the known sexual behavior of the segmented RNA viruses, which are among the simplest organisms known. Influenza virus, whose genome consists of 8 physically separated single-stranded RNA segments,[77] is an example of this type of virus. In segmented RNA viruses, “mating” can occur when a host cell is infected by at least two virus particles. If these viruses each contain an RNA segment with lethal damage, multiple infection can lead to reactivation provided that at least one undamaged copy of each virus gene is present in the infected cell. This phenomenon is known as “multiplicity reactivation”. Multiplicity reactivation has been reported to occur in influenza virus infections after induction of RNA damage by UV irradiation[78] and ionizing radiation.[79]
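
The benefit of multiplicity reactivation can be illustrated with a simple probability estimate (the setup is an assumption for illustration, not taken from the cited studies: k co-infecting particles, with each of the 8 influenza segments independently damaged with probability q in each particle):

  # Probability that an infected cell retains at least one undamaged copy
  # of every one of the 8 influenza RNA segments, given k co-infecting
  # particles and per-segment damage probability q (independence assumed).
  SEGMENTS = 8

  def reactivation_probability(k, q):
      intact_segment = 1 - q ** k         # at least one intact copy of a given segment
      return intact_segment ** SEGMENTS   # ...simultaneously for all 8 segments

  q = 0.5
  for k in (1, 2, 3, 5):
      print(k, round(reactivation_probability(k, q), 3))
  # k=1: 0.004   k=2: 0.1   k=3: 0.344   k=5: 0.776

Even heavily damaged virus populations can thus regenerate infectious genomes when several particles infect the same cell, which mirrors the mutual reactivation proposed for fusing protocells.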

Further developments

Patrick Forterre has been working on a novel hypothesis, called "three viruses, three domains":[80] that viruses were instrumental in the transition from RNA to DNA and in the evolution of Bacteria, Archaea, and Eukaryota. He believes the last common ancestor (specifically, the "last universal cellular ancestor")[80] was RNA-based and evolved RNA viruses. Some of the viruses evolved into DNA viruses to protect their genes from attack. Through the process of viral infection of hosts, the three domains of life evolved.[80][81] Another intriguing proposal is that RNA synthesis might have been driven by temperature gradients, in the process of thermosynthesis.[82] Single nucleotides have been shown to catalyze organic reactions.[83]

Steven Benner has argued that chemical conditions on the planet Mars, such as the presence of boron, molybdenum and oxygen, may have been better suited for initially producing RNA molecules than those on Earth. If so, life-suitable molecules originating on Mars may later have migrated to Earth via panspermia or a similar process.[2][3]

Alternative hypotheses

The hypothesized existence of an RNA world does not exclude a "Pre-RNA world", where a metabolic system based on a different nucleic acid is proposed to pre-date RNA. A candidate nucleic acid is peptide nucleic acid (PNA), which uses simple peptide bonds to link nucleobases.[84] PNA is more stable than RNA, but its ability to be generated under prebiological conditions has yet to be demonstrated experimentally.

Threose nucleic acid (TNA) has also been proposed as a starting point, as has glycol nucleic acid (GNA); like PNA, both lack experimental evidence for their prebiotic generation.
An alternative, or complementary, theory of RNA origin is proposed in the PAH world hypothesis, whereby polycyclic aromatic hydrocarbons (PAHs) mediate the synthesis of RNA molecules.[85] PAHs are the most common and abundant of the known polyatomic molecules in the visible Universe, and are a likely constituent of the primordial sea.[86] PAHs and fullerenes (also implicated in the origin of life)[87] have recently been detected in nebulae.[88]

The iron-sulfur world theory proposes that simple metabolic processes developed before genetic materials did, and these energy-producing cycles catalyzed the production of genes.

Some of the difficulties of producing the precursors on Earth are bypassed by another alternative or complementary theory for their origin: panspermia. This is the hypothesis that the earliest life on this planet was carried here from somewhere else in the galaxy, possibly on meteorites similar to the Murchison meteorite.[89] This does not invalidate the concept of an RNA world, but posits that this world or its precursors originated not on Earth but on another, probably older, planet.

There are hypotheses that are in direct conflict with the RNA world hypothesis. The relative chemical complexity of the nucleotide and the unlikelihood of its arising spontaneously, the limited number of combinations possible among four base forms, and the need for RNA polymers of some length before enzymatic activity appears have led some to reject the RNA world hypothesis in favor of a metabolism-first hypothesis, in which the chemistry underlying cellular function arose first, along with the ability to replicate and facilitate this metabolism.

RNA-peptide coevolution

Another proposal is that the dual-molecule system we see today, in which a nucleotide-based molecule is needed to synthesize protein and a protein-based molecule is needed to make nucleic acid polymers, represents the original form of life.[90] This theory is called RNA-peptide coevolution,[91] or the Peptide-RNA world, and offers a possible explanation for the rapid evolution of high-quality replication in RNA (since proteins are catalysts), with the disadvantage of having to postulate the formation of two complex molecules, an enzyme (from peptides) and an RNA (from nucleotides). In this Peptide-RNA World scenario, RNA would have contained the instructions for life, while peptides (simple protein enzymes) would have accelerated key chemical reactions to carry out those instructions.[92] The study leaves open the question of exactly how those primitive systems managed to replicate themselves, something neither the RNA World hypothesis nor the Peptide-RNA World theory can yet explain, unless polymerases (enzymes that rapidly assemble the RNA molecule) played a role.[92]

A research project completed in March 2015 by the Sutherland group found that a network of reactions beginning with hydrogen cyanide and hydrogen sulfide, in streams of water irradiated by UV light, could produce the chemical components of proteins and lipids, alongside those of RNA.[93][94] The researchers used the term "cyanosulfidic" to describe this network of reactions.[93]

Implications of the RNA world

The RNA world hypothesis, if true, has important implications for the definition of life. For most of the time that followed Watson and Crick's elucidation of DNA structure in 1953, life was largely defined in terms of DNA and proteins: DNA and proteins seemed the dominant macromolecules in the living cell, with RNA only aiding in creating proteins from the DNA blueprint.

The RNA world hypothesis places RNA at center stage when life originated. This has been accompanied by many studies[citation needed] in the last ten years demonstrating important aspects of RNA function not previously known, which support the idea of a critical role for RNA in the mechanisms of life. The RNA world hypothesis is supported by the observation that ribosomes are ribozymes: the catalytic site is composed of RNA, while proteins hold no major structural role there and are of peripheral functional importance. This was confirmed with the deciphering of the 3-dimensional structure of the ribosome in 2001. Specifically, peptide bond formation, the reaction that binds amino acids together into proteins, is now known to be catalyzed by an adenine residue in the rRNA.

Other interesting discoveries demonstrate a role for RNA beyond a simple message or transfer molecule.[95] These include the importance of small nuclear ribonucleoproteins (snRNPs) in the processing of pre-mRNA and RNA editing, RNA interference (RNAi), and reverse transcription from RNA in eukaryotes in the maintenance of telomeres in the telomerase reaction.[96]
