
Sunday, December 28, 2025

Neutrino astronomy

From Wikipedia, the free encyclopedia
An optical module from a neutrino telescope. Neutrino telescopes consist of hundreds to thousands of optical modules distributed over a large volume.

Neutrino astronomy is a branch of astronomy that gathers information about astronomical objects by observing and studying neutrinos emitted by them with the help of neutrino detectors in special Earth observatories. It is an emerging field in astroparticle physics providing insights into the high-energy and non-thermal processes in the universe.

Neutrinos are nearly massless, electrically neutral elementary particles. They are created in certain types of radioactive decay, in nuclear reactions such as those that take place in the Sun and in high-energy astrophysical phenomena, in nuclear reactors, and when cosmic rays hit atoms in the atmosphere. Neutrinos rarely interact with matter (only via the weak nuclear force), travel at nearly the speed of light in straight lines, and pass through large amounts of matter without notable absorption and without being deflected by magnetic fields. Unlike photons, neutrinos rarely scatter along their trajectory; like photons, they are among the most common particles in the universe. Because of this, neutrinos offer a unique opportunity to observe processes that are inaccessible to optical telescopes, such as reactions in the Sun's core. Neutrinos created in the Sun's core are barely absorbed, so a large quantity of them escape from the Sun and reach the Earth. Neutrinos also point back to their sources far more reliably than charged cosmic rays, which are deflected by magnetic fields.

Neutrinos are very hard to detect because they interact so rarely. To detect them, scientists have to shield the detectors from cosmic rays, which can penetrate hundreds of meters of rock. Neutrinos, on the other hand, can pass through the entire planet without being absorbed, like "ghost particles". That is why neutrino detectors are placed many hundreds of meters underground, usually at the bottom of mines. There, a neutrino-detecting liquid such as a chlorine-rich solution is placed; the neutrinos react with a chlorine isotope to create radioactive argon. Gallium-to-germanium conversion has also been used. The IceCube Neutrino Observatory, completed in 2010 at the South Pole, is the biggest neutrino detector. It consists of thousands of optical sensors buried more than a kilometer down in a cubic kilometer of deep, ultra-transparent ice, and it detects light emitted by charged particles that are produced when a single neutrino collides with a proton or neutron inside an atom. The resulting nuclear reaction produces secondary particles traveling at high speeds that give off a blue light called Cherenkov radiation. Super-Kamiokande in Japan and ANTARES and KM3NeT in the Mediterranean are some other important neutrino detectors.

Since neutrinos interact only weakly, neutrino detectors must have large target masses (often thousands of tons). The detectors also must use shielding and effective software to remove background signals. Since neutrinos are very difficult to detect, the only bodies that have been studied in this way so far are the Sun and the supernova SN 1987A, which exploded in 1987. Scientists predicted that supernova explosions would produce bursts of neutrinos, and such a burst was indeed detected from SN 1987A.

In the future neutrino astronomy promises to discover other aspects of the universe, including coincidental gravitational waves, gamma ray bursts, the cosmic neutrino background, origins of ultra-high-energy neutrinos, neutrino properties (such as neutrino mass hierarchy), dark matter properties, etc. It will become an integral part of multi-messenger astronomy, complementing gravitational astronomy and traditional telescopic astronomy.

History

Neutrinos were first recorded in 1956 by Clyde Cowan and Frederick Reines in an experiment employing a nearby nuclear reactor as a neutrino source. Their discovery was acknowledged with a Nobel Prize in Physics in 1995.

This was followed by the first atmospheric neutrino detection in 1965 by two groups almost simultaneously. One was led by Frederick Reines, who operated a liquid scintillator (the Case-Witwatersrand-Irvine, or CWI, detector) in the East Rand gold mine in South Africa at a water-equivalent depth of 8.8 km. The other was a Bombay-Osaka-Durham collaboration that operated in the Indian Kolar Gold Field mine at a water-equivalent depth of 7.5 km. Although the KGF group detected neutrino candidates two months later than Reines's CWI, they were given formal priority because they published their findings two weeks earlier.

In 1968, Raymond Davis, Jr. and John N. Bahcall successfully detected the first solar neutrinos in the Homestake experiment. Davis and the Japanese physicist Masatoshi Koshiba were jointly awarded half of the 2002 Nobel Prize in Physics "for pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos"; the other half went to Riccardo Giacconi for corresponding pioneering contributions which led to the discovery of cosmic X-ray sources.

The first generation of undersea neutrino telescope projects began with the proposal by Moisey Markov in 1960 "...to install detectors deep in a lake or a sea and to determine the location of charged particles with the help of Cherenkov radiation."

The first underwater neutrino telescope began as the DUMAND project (Deep Underwater Muon and Neutrino Detector). The project began in 1976 and, although it was eventually cancelled in 1995, it acted as a precursor to many of the telescopes that followed in subsequent decades.

The Baikal Neutrino Telescope is installed in the southern part of Lake Baikal in Russia. The detector is located at a depth of 1.1 km and began surveys in 1980. In 1993, it was the first to deploy three strings to reconstruct the muon trajectories as well as the first to record atmospheric neutrinos underwater.

AMANDA (Antarctic Muon And Neutrino Detector Array) used the 3 km thick ice layer at the South Pole and was located several hundred meters from the Amundsen-Scott station. Holes 60 cm in diameter were drilled with pressurized hot water, and strings with optical modules were deployed in them before the water refroze. The initial depth proved insufficient for reconstructing particle trajectories because of light scattering on air bubbles. A second group of 4 strings was added in 1995/96 at a depth of about 2000 m, which was sufficient for track reconstruction. The AMANDA array was subsequently upgraded until January 2000, when it consisted of 19 strings with a total of 667 optical modules at depths between 1500 m and 2000 m. AMANDA would eventually be the predecessor to IceCube in 2005.

An example of an early neutrino detector is the Artyomovsk Scintillation Detector (ASD), located in the Soledar Salt Mine in Ukraine at a depth of more than 100 m. It was created in the Department of High Energy Leptons and Neutrino Astrophysics of the Institute of Nuclear Research of the USSR Academy of Sciences in 1969 to study antineutrino fluxes from collapsing stars in the Galaxy, as well as the spectrum and interactions of cosmic-ray muons with energies up to 10^13 eV. A feature of the detector is a 100-ton scintillation tank with dimensions on the order of the length of an electromagnetic shower with an initial energy of 100 GeV.

21st century

After the decline of DUMAND the participating groups split into three branches to explore deep sea options in the Mediterranean Sea. ANTARES was anchored to the sea floor in the region off Toulon at the French Mediterranean coast. It consists of 12 strings, each carrying 25 "storeys" equipped with three optical modules, an electronic container, and calibration devices down to a maximum depth of 2475 m.

NEMO (NEutrino Mediterranean Observatory) was pursued by Italian groups to investigate the feasibility of a cubic-kilometer scale deep-sea detector. A suitable site at a depth of 3.5 km about 100 km off Capo Passero at the South-Eastern coast of Sicily has been identified. From 2007 to 2011 the first prototyping phase tested a "mini-tower" with 4 bars deployed for several weeks near Catania at a depth of 2 km. The second phase as well as plans to deploy the full-size prototype tower will be pursued in the KM3NeT framework.

The NESTOR Project was installed in 2004 at a depth of 4 km and operated for one month until a failure of the cable to shore forced it to be terminated. The data taken still successfully demonstrated the detector's functionality and provided a measurement of the atmospheric muon flux. The proof of concept will be implemented in the KM3NeT framework.

The second generation of deep-sea neutrino telescope projects reach or even exceed the size originally conceived by the DUMAND pioneers. IceCube, located at the South Pole and incorporating its predecessor AMANDA, was completed in December 2010. Consisting of 5160 digital optical modules installed on 86 strings at depths of 1450 to 2550 m in the Antarctic ice, in 2013 it became the first experiment to detect astrophysical (cosmic) neutrinos. KM3NeT in the Mediterranean Sea and the Baikal GVD are in their preparatory/prototyping phases. IceCube instruments 1 km3 of ice. GVD is also planned to cover 1 km3, but at a much higher energy threshold. KM3NeT is planned to cover several km3 and has two components: ARCA (Astroparticle Research with Cosmics in the Abyss) and ORCA (Oscillations Research with Cosmics in the Abyss). Both KM3NeT and GVD have completed at least part of their construction, and it is expected that these two, along with IceCube, will form a global neutrino observatory.

In July 2018, the IceCube Neutrino Observatory announced that it had traced an extremely-high-energy neutrino that hit its Antarctica-based research station in September 2017 back to its point of origin in the blazar TXS 0506+056, located 3.7 billion light-years away in the direction of the constellation Orion. This was the first time that a neutrino detector had been used to locate an object in space and that a source of cosmic rays had been identified. In November 2022, further significant progress towards identifying the origin of cosmic rays came when IceCube reported the observation of 79 neutrinos with energies over 1 TeV originating from the nearby galaxy M77. These findings in a well-known object are expected to help study the active nucleus of this galaxy, as well as serving as a baseline for future observations. In June 2023, IceCube reported the first detection of neutrinos from the galactic plane of the Milky Way.

Detection methods

Neutrinos interact incredibly rarely with matter, so the vast majority of neutrinos will pass through a detector without interacting. If a neutrino does interact, it will only do so once. Therefore, to perform neutrino astronomy, large detectors must be used to obtain enough statistics.

The IceCube Neutrino Detector at the South Pole. The PMTs are under more than a kilometer of ice, and will detect the photons from neutrino interactions within a cubic kilometer of ice

The method of neutrino detection depends on the energy and type of the neutrino. A famous example is that electron antineutrinos can interact with a nucleus in the detector by inverse beta decay and produce a positron and a neutron. The positron immediately annihilates with an electron, producing two 511 keV photons. The neutron attaches to another nucleus and gives off a gamma ray with an energy of a few MeV. In general, neutrinos can interact through neutral-current and charged-current interactions. In neutral-current interactions, the neutrino interacts with a nucleus or electron and retains its original flavor. In charged-current interactions, the neutrino is absorbed by the nucleus and produces a lepton corresponding to the neutrino's flavor (an electron, muon, or tau). If the charged products are moving fast enough, they can create Cherenkov light.
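
As a rough illustration of how the prompt signal relates to the neutrino energy in inverse beta decay, the sketch below (not from the original article) uses the standard approximation that neglects the small neutron recoil to estimate the visible prompt energy.

    # Rough sketch: visible energy from inverse beta decay (anti-nu_e + p -> e+ + n),
    # neglecting the small neutron recoil. All energies in MeV.
    M_N = 939.565   # neutron mass
    M_P = 938.272   # proton mass
    M_E = 0.511     # electron/positron mass

    def prompt_visible_energy(e_nu_mev):
        """Approximate prompt energy seen by the detector for a given antineutrino energy."""
        delta = M_N - M_P                      # ~1.293 MeV neutron-proton mass difference
        e_positron = e_nu_mev - delta          # total (kinetic + rest) energy of the positron
        if e_positron < M_E:
            return None                        # below the ~1.8 MeV reaction threshold
        # The positron deposits its kinetic energy and then annihilates, adding 2 x 511 keV.
        return e_positron + M_E

    print(prompt_visible_energy(3.0))   # ~2.22 MeV of prompt light for a 3 MeV antineutrino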

To observe neutrino interactions, detectors use photomultiplier tubes (PMTs) to detect individual photons. From the timing of the photons, it is possible to determine the time and place of the neutrino interaction. If the neutrino creates a muon during its interaction, then the muon travels in a line, creating a "track" of Cherenkov photons. The data from this track can be used to reconstruct the direction of the muon. For high-energy interactions, the neutrino and muon directions are nearly the same, so it is possible to tell where the neutrino came from. This pointing ability is important for neutrino astronomy beyond the Solar System. Along with time, position, and possibly direction, it is possible to infer the energy of the neutrino from the interaction. The number of photons emitted is related to the neutrino energy, and neutrino energy is important for measuring the fluxes of solar and geo-neutrinos.
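
A common first-guess reconstruction in detectors of this kind is a simple "line fit": the photon hits are treated as points moving along a straight line at constant speed, and a least-squares fit of hit position against hit time gives an approximate track direction. The sketch below is an illustrative implementation with made-up hit data, not the actual reconstruction code of any experiment.

    import numpy as np

    def line_fit(hit_positions, hit_times):
        """First-guess track direction from PMT hits.

        Model each hit as x_i ~ r0 + v * t_i and solve for v by least squares;
        the normalised velocity vector approximates the muon direction.
        """
        t = np.asarray(hit_times)
        x = np.asarray(hit_positions)          # shape (n_hits, 3)
        t_mean = t.mean()
        x_mean = x.mean(axis=0)
        v = ((x - x_mean) * (t - t_mean)[:, None]).sum(axis=0) / ((t - t_mean) ** 2).sum()
        return v / np.linalg.norm(v)           # unit vector along the track

    # Toy example: hits generated along the z-axis with a little position scatter.
    rng = np.random.default_rng(0)
    times = np.linspace(0.0, 100.0, 20)                         # ns
    positions = np.c_[np.zeros(20), np.zeros(20), 0.3 * times]  # m, muon moving in +z
    positions += rng.normal(scale=0.5, size=positions.shape)
    print(line_fit(positions, times))   # close to (0, 0, 1)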

Due to the rarity of neutrino interactions, it is important to maintain a low background signal. For this reason, most neutrino detectors are constructed under a rock or water overburden. This overburden shields against most cosmic rays in the atmosphere; only some of the highest-energy muons are able to penetrate to the depths of the detectors. Detectors must include ways of dealing with data from muons so as not to confuse them with neutrinos. For example, if a muon track is first detected outside of the desired "fiducial" volume, the event is treated as a muon and not considered; more sophisticated rejection methods are also used. Ignoring events outside the fiducial volume also decreases the signal from radiation outside the detector.

Despite shielding efforts, it is inevitable that some background will make it into the detector, often in the form of radioactive impurities within the detector itself. At this point, if it is impossible to differentiate between the background and the true signal, a Monte Carlo simulation must be used to model the background. While it may be unknown whether an individual event is background or signal, it is possible to detect an excess above the background, signifying the existence of the desired signal.
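
One common way to quantify such an excess, sketched below under the simplifying assumption that the simulated background expectation is known exactly, is a Poisson tail probability for observing at least the measured count.

    import math

    def poisson_excess_p_value(n_observed, b_expected):
        """P(N >= n_observed) for a Poisson background with mean b_expected.

        A small p-value means the observed count is unlikely to be background alone.
        (Real analyses also account for uncertainty on the background estimate.)
        """
        # P(N >= n) = 1 - P(N <= n-1)
        cumulative = sum(
            math.exp(-b_expected) * b_expected ** k / math.factorial(k)
            for k in range(n_observed)
        )
        return 1.0 - cumulative

    # Chance that 12 expected background events fluctuate up to 25 or more observed events
    print(poisson_excess_p_value(25, 12.0))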

Applications

When astronomical bodies, such as the Sun, are studied using light, only the surface of the object can be directly observed. Any light produced in the core of a star will interact with gas particles in the outer layers of the star, taking hundreds of thousands of years to make it to the surface, making it impossible to observe the core directly. Since neutrinos are also created in the cores of stars (as a result of stellar fusion), the core can be observed using neutrino astronomy. Other sources of neutrinos, such as those released by supernovae, have also been detected. Several neutrino experiments have formed the Supernova Early Warning System (SNEWS), in which they search for an increase in neutrino flux that could signal a supernova event. There are currently goals to detect neutrinos from other sources, such as active galactic nuclei (AGN), as well as gamma-ray bursts and starburst galaxies. Neutrino astronomy may also indirectly detect dark matter.

Supernova warning

Seven neutrino experiments (Super-K, LVD, IceCube, KamLAND, Borexino, Daya Bay, and HALO) work together as the Supernova Early Warning System (SNEWS). In a core-collapse supernova, about ninety-nine percent of the energy released is carried away by neutrinos. While photons can be trapped in the dense supernova for hours, neutrinos are able to escape on the order of seconds. Since neutrinos travel at roughly the speed of light, they can reach Earth before photons do. If two or more SNEWS detectors observe a coincident increase in neutrino flux, an alert is sent to professional and amateur astronomers to be on the lookout for supernova light. By using the distance between detectors and the time difference between detections, the alert can also include directionality as to the supernova's location in the sky.
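
The coincidence requirement can be illustrated with a small sketch; the detector names reuse those listed above, but the 10-second window and the reported times are invented for the example, and the real SNEWS criteria are more involved.

    def supernova_alert(burst_times, window_seconds=10.0, min_detectors=2):
        """Return True if at least `min_detectors` report a burst within `window_seconds`.

        `burst_times` maps a detector name to the time (in seconds) of its burst candidate.
        """
        times = sorted(burst_times.values())
        for start in times:
            # count detectors whose burst falls inside the window starting at `start`
            in_window = sum(1 for t in times if start <= t <= start + window_seconds)
            if in_window >= min_detectors:
                return True
        return False

    # Hypothetical burst candidate times reported by three experiments (seconds of the day)
    reports = {"Super-K": 41003.2, "IceCube": 41004.1, "LVD": 52710.9}
    print(supernova_alert(reports))   # True: two detectors agree within 10 s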

Stellar processes

The proton-proton fusion chain that occurs within the Sun. This process is responsible for the majority of the Sun's energy.

The Sun, like other stars, is powered by nuclear fusion in its core. The core is incredibly large, meaning that photons produced in the core will take a long time to diffuse outward. Therefore, neutrinos are the only way that we can obtain real-time data about the nuclear processes in the Sun.

There are two main processes for stellar nuclear fusion. The first is the Proton-Proton (PP) chain, in which protons are fused together into helium, sometimes temporarily creating the heavier elements of lithium, beryllium, and boron along the way. The second is the CNO cycle, in which carbon, nitrogen, and oxygen are fused with protons, and then undergo alpha decay (helium nucleus emission) to begin the cycle again. The PP chain is the primary process in the Sun, while the CNO cycle is more dominant in stars more massive than the Sun.

Each step in the process has an allowed energy spectrum for the neutrino (or a discrete energy for electron-capture processes). The relative rates of the Sun's nuclear processes can be determined by observing the neutrino flux at different energies. This sheds light on the Sun's properties, such as its metallicity, the abundance of elements heavier than helium.

Borexino is one of the detectors studying solar neutrinos. In 2018, the collaboration reported a 5σ-significance detection of neutrinos produced when two protons fuse with an electron (pep neutrinos). In 2020, it found the first evidence of CNO neutrinos from the Sun. Improvements in the CNO measurement will be especially helpful in determining the Sun's metallicity.

Composition and structure of Earth

The interior of Earth contains radioactive elements such as potassium-40 and the members of the uranium-238 and thorium-232 decay chains. These elements decay via beta decay, which emits an antineutrino. The energies of these antineutrinos depend on the parent nucleus. Therefore, by detecting the antineutrino flux as a function of energy, we can obtain the relative abundances of these elements and set a limit on the total power output of Earth's geo-reactor. Most of our current data about the core and mantle of Earth comes from seismic data, which does not provide any information about the nuclear composition of these layers.

Borexino has detected these geoneutrinos through inverse beta decay, in which an electron antineutrino interacts with a proton to produce a positron and a neutron. The resulting positron immediately annihilates with an electron and produces two gamma rays, each with an energy of 511 keV (the rest mass of an electron). The neutron is later captured by another nucleus, which leads to a 2.22 MeV gamma ray as the nucleus de-excites; on average this capture occurs about 256 microseconds after the prompt signal. By searching for time and spatial coincidence of these gamma rays, the experimenters can be confident there was an event.
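
A minimal sketch of such a delayed-coincidence search is shown below; the event list, the time window, and the energy windows are illustrative assumptions, not Borexino's actual selection.

    def find_ibd_candidates(events, max_delay_us=800.0):
        """Pair prompt positron-like signals with delayed neutron-capture signals.

        `events` is a time-ordered list of (time_us, energy_mev) tuples.
        A prompt signal (~1-3 MeV here) followed within `max_delay_us` by a ~2.2 MeV
        capture gamma is counted as an inverse-beta-decay candidate.
        """
        candidates = []
        for i, (t1, e1) in enumerate(events):
            if not 1.0 <= e1 <= 3.0:                     # crude prompt-energy window
                continue
            for t2, e2 in events[i + 1:]:
                if t2 - t1 > max_delay_us:
                    break
                if 2.0 <= e2 <= 2.4:                     # crude 2.22 MeV capture window
                    candidates.append((t1, t2))
                    break
        return candidates

    toy_events = [(0.0, 1.8), (250.0, 2.2), (9000.0, 0.3), (9100.0, 2.2)]
    print(find_ibd_candidates(toy_events))   # [(0.0, 250.0)] -- one prompt+delayed pair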

Using over 3,200 days of data, Borexino used geoneutrinos to place constraints on the composition and power output of the mantle. They found that the ratio of thorium to uranium is the same as in chondritic meteorites. The power output from uranium and thorium in Earth's mantle was found to be 14.2 to 35.7 TW with a 68% confidence interval.

Neutrino tomography also provides insight into the interior of Earth. For neutrinos with energies of a few TeV, the interaction probability becomes non-negligible when passing through Earth. The interaction probability depends on the number of nucleons along the neutrino's path, which is directly related to density. If the initial flux is known (as it is in the case of atmospheric neutrinos), then detecting the final flux provides information about the interactions that occurred, and the density can be extrapolated from knowledge of those interactions. This provides an independent check on the information obtained from seismic data.
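
The idea can be sketched numerically. Assuming a simple two-layer toy density model and an illustrative roughly linear scaling of the charged-current cross-section with energy (about the right order of magnitude at TeV energies, not a precision value), the survival probability along a path through Earth falls exponentially with the column depth.

    import math

    N_A = 6.022e23  # nucleons per gram (approximately, since A ~ atomic mass in g/mol)

    def cc_cross_section_cm2(e_gev):
        """Illustrative neutrino-nucleon CC cross-section (cm^2); roughly linear up to ~10 TeV."""
        return 0.7e-38 * e_gev             # assumption: crude parameterisation for this sketch

    def survival_probability(e_gev, path_segments):
        """P(no interaction) for a neutrino crossing segments of (length_cm, density_g_cm3)."""
        column_depth = sum(length * density for length, density in path_segments)  # g/cm^2
        return math.exp(-cc_cross_section_cm2(e_gev) * N_A * column_depth)

    # Toy diametral path: ~6800 km of mantle-like rock plus ~5900 km of denser core material.
    path = [(6.8e8, 4.5), (5.9e8, 11.0)]
    for energy_gev in (1e3, 5e3, 1e4):      # 1, 5 and 10 TeV
        print(energy_gev, survival_probability(energy_gev, path))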

The interior of the Earth as we know it. Currently, our information comes only from seismic data. Neutrinos would be an independent check on this data

In 2018, one year's worth of IceCube data was evaluated to perform neutrino tomography. The analysis studied upward-going muons, which provide both the energy and the directionality of the neutrinos after passing through the Earth. A model of Earth with five layers of constant density was fit to the data, and the resulting density profile agreed with seismic data. The values determined for the total mass of Earth, the mass of the core, and the moment of inertia all agree with seismic and gravitational data. With the current data, the uncertainties on these values are still large, but future data from IceCube and KM3NeT will place tighter constraints on them.

High-energy astrophysical events

Neutrinos can either be primary cosmic rays (astrophysical neutrinos) or be produced by cosmic ray interactions. In the latter case, the primary cosmic ray produces pions and kaons in the atmosphere. As these hadrons decay, they produce neutrinos (called atmospheric neutrinos). At low energies, the flux of atmospheric neutrinos is many times greater than that of astrophysical neutrinos. At high energies, the pions and kaons have longer lifetimes (due to relativistic time dilation) and are then more likely to interact before they decay. Because of this, the astrophysical neutrino flux dominates at high energies (~100 TeV). To perform neutrino astronomy of high-energy objects, experiments rely on the highest-energy neutrinos.
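
To see why a harder spectrum eventually wins, one can compare two power laws. The normalisations and spectral indices below are illustrative placeholders, not measured values; the point is only that a flux falling slowly with energy overtakes a steeply falling one at high enough energy.

    # Illustrative comparison of a steep atmospheric spectrum with a harder astrophysical one.
    # Normalisations and indices are placeholders chosen only to demonstrate the crossover.

    def atmospheric_flux(e_tev):
        return 2.5e-2 * e_tev ** -3.7      # arbitrary units, steeply falling

    def astrophysical_flux(e_tev):
        return 1.0e-4 * e_tev ** -2.5      # arbitrary units, harder spectrum

    energy_tev = 1.0
    while astrophysical_flux(energy_tev) < atmospheric_flux(energy_tev):
        energy_tev *= 1.1
    print(f"astrophysical flux dominates above roughly {energy_tev:.0f} TeV (for these toy parameters)")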

To perform astronomy of distant objects, a strong angular resolution is required. Neutrinos are electrically neutral and interact weakly, so they travel mostly unperturbed in straight lines. If the neutrino interacts within a detector and produces a muon, the muon will produce an observable track. At high energies, the neutrino direction and muon direction are closely correlated, so it is possible to trace back the direction of the incoming neutrino.

These high-energy neutrinos are either the primary or secondary cosmic rays produced by energetic astrophysical processes. Observing neutrinos could provide insights into these processes beyond what is observable with electromagnetic radiation. In the case of the neutrino detected from a distant blazar, multi-wavelength astronomy was used to show spatial coincidence, confirming the blazar as the source. In the future, neutrinos could be used to supplement electromagnetic and gravitational observations, leading to multi-messenger astronomy.

Nucleoside triphosphate

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Nucleoside_triphosphate

A nucleoside triphosphate is a nucleoside containing a nitrogenous base bound to a 5-carbon sugar (either ribose or deoxyribose), with three phosphate groups bound to the sugar. They are the molecular precursors of both DNA and RNA, which are chains of nucleotides made through the processes of DNA replication and transcription. Nucleoside triphosphates also serve as a source of energy for cellular reactions and are involved in signalling pathways.

Nucleoside triphosphates cannot easily cross the cell membrane, so they are typically synthesized within the cell. Synthesis pathways differ depending on the specific nucleoside triphosphate being made, but given the many important roles of nucleoside triphosphates, synthesis is tightly regulated in all cases. Nucleoside analogues may also be used to treat viral infections. For example, azidothymidine (AZT) is a nucleoside analogue used to prevent and treat HIV/AIDS.[8]

Naming

The term nucleoside refers to a nitrogenous base linked to a 5-carbon sugar (either ribose or deoxyribose). Nucleotides are nucleosides covalently linked to one or more phosphate groups. To provide information about the number of phosphates, nucleotides may instead be referred to as nucleoside (mono, di, or tri) phosphates. Thus, nucleoside triphosphates are a type of nucleotide.

Nucleotides are commonly abbreviated with 3 letters (4 or 5 in case of deoxy- or dideoxy-nucleotides). The first letter indicates the identity of the nitrogenous base (e.g., A for adenine, G for guanine), the second letter indicates the number of phosphates (mono, di, tri), and the third letter is P, standing for phosphate. Nucleoside triphosphates that contain ribose as the sugar are conventionally abbreviated as NTPs, while nucleoside triphosphates containing deoxyribose as the sugar are abbreviated as dNTPs. For example, dATP stands for deoxyribose adenosine triphosphate. NTPs are the building blocks of RNA, and dNTPs are the building blocks of DNA.
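
The naming convention can be expressed as a small parser. The sketch below is only an illustration of the convention described above; it assumes well-formed abbreviations such as ATP, dGTP or ddCTP.

    BASES = {"A": "adenine", "G": "guanine", "C": "cytosine", "T": "thymine", "U": "uracil"}
    PHOSPHATES = {"M": 1, "D": 2, "T": 3}   # mono-, di-, tri-

    def parse_nucleotide(abbrev):
        """Split an abbreviation like 'dATP' into sugar, base and phosphate count."""
        name = abbrev
        if name.startswith("dd"):
            sugar, name = "dideoxyribose", name[2:]
        elif name.startswith("d"):
            sugar, name = "deoxyribose", name[1:]
        else:
            sugar = "ribose"
        base_letter, phosphate_letter, p = name[0], name[1], name[2]
        assert p == "P", "last letter should be P for phosphate"
        return {
            "sugar": sugar,
            "base": BASES[base_letter],
            "phosphates": PHOSPHATES[phosphate_letter],
        }

    print(parse_nucleotide("dATP"))   # deoxyribose, adenine, 3 phosphates: a DNA building block
    print(parse_nucleotide("UTP"))    # ribose, uracil, 3 phosphates: an RNA building block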

The carbons of the sugar in a nucleoside triphosphate are numbered around the carbon ring starting from the original carbonyl of the sugar. Conventionally, the carbon numbers in a sugar are followed by the prime symbol (') to distinguish them from the carbons of the nitrogenous base. The nitrogenous base is linked to the 1' carbon through a glycosidic bond, and the phosphate groups are covalently linked to the 5' carbon. The first phosphate group linked to the sugar is termed the α-phosphate, the second is the β-phosphate, and the third is the γ-phosphate; these are linked to one another by two phosphoanhydride bonds.[14]

Schematic showing the structure of nucleoside triphosphates. Nucleosides consist of a 5-carbon sugar (pentose) connected to a nitrogenous base through a 1' glycosidic bond. Nucleotides are nucleosides with a variable number of phosphate groups connected to the 5' carbon. Nucleoside triphosphates are a specific type of nucleotide. This figure also shows the five common nitrogenous bases found in DNA and RNA on the right.

DNA and RNA synthesis

In nucleic acid synthesis, the 3' OH of a growing chain of nucleotides attacks the α-phosphate on the next NTP to be incorporated (blue), resulting in a phosphodiester linkage and the release of pyrophosphate (PPi). This figure shows DNA synthesis, but RNA synthesis occurs through the same mechanism.

The cellular processes of DNA replication and transcription involve DNA and RNA synthesis, respectively. DNA synthesis uses dNTPs as substrates, while RNA synthesis uses rNTPs as substrates. NTPs cannot be converted directly to dNTPs. DNA contains four different nitrogenous bases: adenine, guanine, cytosine and thymine. RNA also contains adenine, guanine, and cytosine, but replaces thymine with uracil. Thus, DNA synthesis requires dATP, dGTP, dCTP, and dTTP as substrates, while RNA synthesis requires ATP, GTP, CTP, and UTP.

Nucleic acid synthesis is catalyzed by either DNA polymerase or RNA polymerase for DNA and RNA synthesis respectively. These enzymes covalently link the free -OH group on the 3' carbon of a growing chain of nucleotides to the α-phosphate on the 5' carbon of the next (d)NTP, releasing the β- and γ-phosphate groups as pyrophosphate (PPi). This results in a phosphodiester linkage between the two (d)NTPs. The release of PPi provides the energy necessary for the reaction to occur. Nucleic acid synthesis occurs exclusively in the 5' to 3' direction.
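
A toy model of template-directed synthesis is sketched below (hypothetical function names; it ignores primers, enzyme kinetics and proofreading, and simply pairs each template base with its complementary dNTP).

    # Toy model of template-directed DNA synthesis.
    # The template is read 3'->5' while the new strand grows 5'->3'.
    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def synthesize_strand(template_3_to_5, dntp_pool):
        """Build the complementary strand, consuming one dNTP per added base."""
        new_strand = []
        for base in template_3_to_5:
            dntp = "d" + COMPLEMENT[base] + "TP"   # e.g. template A needs dTTP
            if dntp_pool.get(dntp, 0) == 0:
                break                              # synthesis stalls without the right substrate
            dntp_pool[dntp] -= 1                   # alpha-phosphate joins the chain; PPi released
            new_strand.append(COMPLEMENT[base])
        return "".join(new_strand)                 # written 5'->3'

    pool = {"dATP": 5, "dTTP": 5, "dGTP": 5, "dCTP": 5}
    print(synthesize_strand("TACGGT", pool))       # ATGCCA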

Nucleoside triphosphate metabolism

Given their importance in the cell, the synthesis and degradation of nucleoside triphosphates is under tight control. This section focuses on nucleoside triphosphate metabolism in humans, but the process is fairly conserved among species. Nucleoside triphosphates cannot be absorbed well, so all nucleoside triphosphates are typically made de novo. The synthesis of ATP and GTP (purines) differs from the synthesis of CTP, TTP, and UTP (pyrimidines). Both purine and pyrimidine synthesis use phosphoribosyl pyrophosphate (PRPP) as a starting molecule.

The conversion of NTPs to dNTPs can only be done in the diphosphate form. Typically an NTP has one phosphate removed to become an NDP, which is then converted to a dNDP by an enzyme called ribonucleotide reductase; a phosphate is then added back to give a dNTP.

Purine synthesis

A nitrogenous base called hypoxanthine is assembled directly onto PRPP.[22] This results in a nucleotide called inosine monophosphate (IMP). IMP is then converted into a precursor of either AMP or GMP. Once AMP or GMP is formed, it can be phosphorylated by ATP to its diphosphate and triphosphate forms.

Purine synthesis is regulated by the allosteric inhibition of IMP formation by the adenine or guanine nucleotides. AMP and GMP also competitively inhibit the formation of their precursors from IMP.

Pyrimidine synthesis

A nitrogenous base called orotate is synthesized independently of PRPP. After orotate is made, it is covalently attached to PRPP. This results in a nucleotide called orotidine monophosphate (OMP). OMP is converted to UMP, which can then be phosphorylated by ATP to UDP and UTP. UTP can then be converted to CTP by a deamination reaction. TTP is not a substrate for nucleic acid synthesis, so it is not synthesized in the cell. Instead, dTTP is made indirectly from either dUDP or dCDP after conversion to their respective deoxyribose forms.

Pyrimidine synthesis is regulated by the allosteric inhibition of orotate synthesis by UDP and UTP. PRPP and ATP are also allosteric activators of orotate synthesis.

Ribonucleotide reductase

Ribonucleotide reductase (RNR) is the enzyme responsible for converting NTPs to dNTPs. Given that dNTPs are used in DNA replication, the activity of RNR is tightly regulated. RNR can only process NDPs, so NTPs are first dephosphorylated to NDPs before conversion to dNDPs. dNDPs are then typically re-phosphorylated. RNR has 2 subunits and 3 sites: the catalytic site, activity (A) site, and specificity (S) site. The catalytic site is where the NDP to dNDP reaction takes place, the activity site determines whether or not the enzyme is active, and the specificity site determines which reaction takes place in the catalytic site.

The activity site can bind either ATP or dATP. When ATP is bound there, RNR is active; when dATP is bound, it is inactive. When ATP or dATP is bound to the S site, RNR will catalyze synthesis of dCDP and dUDP from CDP and UDP. dCDP and dUDP can go on to indirectly make dTTP. When dTTP is bound to the S site, RNR catalyzes synthesis of dGDP from GDP, and when dGTP is bound to the S site, RNR catalyzes synthesis of dADP from ADP. dADP is then phosphorylated to give dATP, which can bind to the A site and turn RNR off.
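
This regulatory logic can be summarised as a small lookup, sketched below using the rules described in this section; it is a simplification of the real enzyme's behaviour.

    def rnr_state(a_site_ligand, s_site_ligand):
        """Summarise RNR behaviour from what is bound at the activity (A) and specificity (S) sites."""
        active = (a_site_ligand == "ATP")          # dATP at the A site switches the enzyme off
        specificity = {
            "ATP":  ["CDP", "UDP"],                # pyrimidine substrates
            "dATP": ["CDP", "UDP"],
            "dTTP": ["GDP"],
            "dGTP": ["ADP"],
        }
        substrates = specificity.get(s_site_ligand, []) if active else []
        return {"active": active, "reduces": substrates}

    print(rnr_state("ATP", "dTTP"))    # active, reduces GDP -> dGDP
    print(rnr_state("dATP", "ATP"))    # inactive: dATP at the A site shuts RNR down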

Other cellular roles

ATP as a source of cellular energy

The energy released during hydrolysis of adenosine triphosphate (ATP), shown here, is frequently coupled with energetically unfavourable cellular reactions.

ATP is the primary energy currency of the cell. Although it can be synthesized through the metabolic pathway described above, ATP is primarily synthesized by ATP synthase during cellular respiration and photosynthesis. ATP synthase couples the synthesis of ATP from ADP and phosphate with an electrochemical gradient generated by the pumping of protons through either the inner mitochondrial membrane (cellular respiration) or the thylakoid membrane (photosynthesis). This electrochemical gradient is necessary because the formation of ATP is energetically unfavourable.

The hydrolysis of ATP to ADP and Pi proceeds as follows:

    ATP + H2O → ADP + Pi

This reaction is energetically favourable and releases 30.5 kJ/mol of energy. In the cell, this reaction is often coupled with unfavourable reactions to provide the energy for them to proceed. GTP is occasionally used for energy-coupling in a similar manner.
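
The arithmetic of energy coupling can be illustrated with the standard free-energy value quoted above plus one textbook example (the +13.8 kJ/mol for glucose phosphorylation is a standard biochemistry figure, not taken from this article).

    # Energy coupling: an unfavourable reaction becomes favourable when paired with ATP hydrolysis.
    DELTA_G_ATP_HYDROLYSIS = -30.5   # kJ/mol, standard free energy change

    def coupled_delta_g(delta_g_unfavourable):
        """Net standard free energy change when a reaction is coupled to ATP hydrolysis."""
        return delta_g_unfavourable + DELTA_G_ATP_HYDROLYSIS

    # Example: glucose + Pi -> glucose-6-phosphate has a standard delta G of about +13.8 kJ/mol.
    net = coupled_delta_g(+13.8)
    print(net, "kJ/mol")             # about -16.7 kJ/mol: the coupled reaction proceeds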

Binding of a ligand to a G protein-coupled receptor allows GTP to bind the G protein. This causes the alpha subunit to leave and act as a downstream effector.

GTP signal transduction

GTP is essential for signal transduction, especially with G proteins. G proteins are coupled with a cell membrane bound receptor. This whole complex is called a G protein-coupled receptor (GPCR). G proteins can bind either GDP or GTP. When bound to GDP, G proteins are inactive. When a ligand binds a GPCR, an allosteric change in the G protein is triggered, causing GDP to leave and be replaced by GTP. GTP activates the alpha subunit of the G protein, causing it to dissociate from the G protein and act as a downstream effector.

Nucleoside analogues

Nucleoside analogues can be used to treat viral infections. They are nucleosides that are structurally similar (analogous) to the nucleosides used in DNA and RNA synthesis. Once these nucleoside analogues enter a cell, they can become phosphorylated by a viral enzyme. The resulting nucleotides are similar enough to the nucleotides used in DNA or RNA synthesis to be incorporated into growing DNA or RNA strands, but they do not have an available 3' OH group to attach the next nucleotide, causing chain termination. This can be exploited therapeutically in viral infections because viral DNA polymerase recognizes certain nucleotide analogues more readily than eukaryotic DNA polymerase does. For example, azidothymidine is used in the treatment of HIV/AIDS. Some less selective nucleoside analogues can be used as chemotherapy agents to treat cancer, such as cytosine arabinoside (ara-C) in the treatment of certain forms of leukemia.
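
The chain-termination effect can be mimicked in a few lines. This is a toy sketch with a made-up convention (lower-case letters mark analogues lacking the 3' OH), not a model of any real polymerase.

    # Toy sketch of chain termination by a nucleoside analogue.
    # Lower-case letters mark analogues that lack a 3' OH group.
    def extend_chain(incoming_nucleotides):
        """Add nucleotides one by one; stop once an analogue without a 3' OH is incorporated."""
        chain = []
        for nucleotide in incoming_nucleotides:
            chain.append(nucleotide.upper())
            if nucleotide.islower():     # analogue incorporated: no 3' OH for the next linkage
                break
        return "".join(chain)

    print(extend_chain("ATGc" + "GATTACA"))   # 'ATGC' -- synthesis terminates at the analogue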

Resistance to nucleoside analogues is common, and is frequently due to a mutation in the enzyme that phosphorylates the nucleoside after entry into the cell. This is common in nucleoside analogues used to treat HIV/AIDS.

Climate risk

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Climate_risk
The risk equation shows that climate risk is a product of hazard, exposure, and climate change vulnerability (where 'x' represents interaction between the components).

Climate risk is the potential for problems for societies or ecosystems from the impacts of climate change. The assessment of climate risk is based on formal analysis of the consequences, likelihoods and responses to these impacts. Societal constraints can also shape adaptation options. There are different values and preferences around risk, resulting in differences of risk perception.

Common approaches to risk assessment and risk management strategies are based on analysing hazards. This can also be applied to climate risk although there are distinct differences: The climate system is no longer staying within a stationary range of extremes. Hence, climate change impacts are anticipated to increase for the coming decades. There are also substantial differences in regional climate projections. These two aspects make it complicated to understand current and future climate risk around the world. Scientists use various climate change scenarios when they carry out climate risk analysis.

The interaction of three risk factors defines the degree of climate risk: hazards, vulnerability and exposure. Financial models, such as those that predict the maximum potential loss from natural disasters, often use approaches like the Generalized Pareto Distribution (GPD) to estimate the worst-case financial impacts over time. This is particularly relevant for sectors like insurance, which must account for both the physical and financial risks posed by climate events.
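
As an illustration of the Generalized Pareto approach, the sketch below uses SciPy to estimate an extreme loss quantile from exceedances over a high threshold; the loss data and fitted parameters are invented for demonstration and do not come from any real portfolio.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(42)

    # Invented annual losses (in millions); in practice these would come from claims or hazard models.
    losses = rng.gamma(shape=2.0, scale=10.0, size=2000)

    # Peaks-over-threshold: model only the exceedances above a high threshold with a GPD.
    threshold = np.quantile(losses, 0.95)
    exceedances = losses[losses > threshold] - threshold

    shape, loc, scale = genpareto.fit(exceedances, floc=0.0)

    # Loss exceeded with 0.1% probability, given the threshold is exceeded 5% of the time.
    p_tail = 0.001 / 0.05
    worst_case = threshold + genpareto.ppf(1.0 - p_tail, shape, loc=loc, scale=scale)
    print(f"estimated 1-in-1000 loss: about {worst_case:.1f} million (toy data)")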

There are various approaches to climate risk management. One example is climate risk insurance. This is a type of insurance designed to mitigate the financial and other risk associated with climate change, especially phenomena like extreme weather.

Understanding the interaction between climate hazards and financial exposure through forecasting is crucial for effective climate risk management, ensuring businesses can adapt and respond effectively to both physical and financial challenges.

Definition

Diagram explaining the relationships between risk, hazard mitigation, resilience, and adaptation

The IPCC Sixth Assessment Report defines climate risk as the potential for negative consequences for society or ecosystems from the impacts of climate change. Risk is used mainly to talk about the potential effects of climate change, but it may also result from the measures that we take to respond to those changes. The definition also recognises the different values and preferences that people have towards the human or ecological systems at risk.

Risk assessment is the qualitative and/or quantitative scientific estimation of risks.

Risk perception is the personal judgement that people make about the characteristics and severity of a risk.

Understanding risks

Climate risks are increasingly felt in all regions of the world, and they are especially visible in the growing number of disasters that are driven by climatic events. Many of these risks and impacts are expected to increase in future, and therefore are an increasing concern. Risk assessments are based on responses of a climate system that is no longer staying within a stationary range of extremes. The Intergovernmental Panel on Climate Change (IPCC) assessment framework is based on the understanding that climate risk emerges from the interaction of three risk factors: hazards, vulnerability and exposure.
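
A highly simplified way to combine the three factors, matching the multiplicative form shown in the figure caption above, is sketched below with invented scores on a 0-1 scale; real assessments use far richer indicators.

    def climate_risk_index(hazard, exposure, vulnerability):
        """Toy multiplicative risk index; each input is a normalised score between 0 and 1."""
        for value in (hazard, exposure, vulnerability):
            if not 0.0 <= value <= 1.0:
                raise ValueError("scores must lie between 0 and 1")
        return hazard * exposure * vulnerability

    # Invented scores for two hypothetical districts facing the same hazard
    print(climate_risk_index(hazard=0.8, exposure=0.9, vulnerability=0.7))  # densely populated floodplain
    print(climate_risk_index(hazard=0.8, exposure=0.3, vulnerability=0.4))  # same hazard, lower exposure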

In this framework, climate risks are also described in five sets of major risks:

  • unique and threatened systems
  • extreme weather events
  • distribution of impacts
  • global aggregate impacts
  • large-scale singular events

Risks and uncertainties

Risks and uncertainties are closely related concepts. Risk is "the potential" for a negative outcome, so it implies uncertainty or incomplete information. However, risks are more often understood in a context-specific way. Each component of climate risk (hazards, exposure and vulnerability) may be uncertain in terms of the magnitude and likelihood of occurrence. Assessment of the risk includes a set of measured uncertainties. These are usually given in terms of a set or range of possible outcomes, which may also include probabilities. The IPCC uses qualitative rating scales for uncertainty which may be based on quantitative results or expert judgement.

Uncertainty is also used in a broader way to describe a general lack of knowledge about the world and about possible outcomes (epistemic uncertainty). Some outcomes are inherently unpredictable (aleatory uncertainty). Uncertainty can also refer to different framings or understandings of the world (ambiguity), including different scientific understandings. There are many types and sources of uncertainty. Unlike risk, uncertainty does not always carry negative connotations. Risk is a subcategory of uncertainty, one that makes potential issues and problems more manageable. Risk is a term used widely across different management practice areas, for example business, economics, environment, finance, information technology, health, insurance, safety, and security.

Vulnerability

Climate change vulnerability is a concept that describes how strongly people or ecosystems are likely to be affected by climate change. Its formal definition is the "propensity or predisposition to be adversely affected" by climate change. It can apply to humans and also to natural systems (or ecosystems). Issues around the capacity to cope and adapt are also part of this concept. Vulnerability is a component of climate risk. It differs within communities and also across societies, regions, and countries. It can increase or decrease over time. Vulnerability is generally a bigger problem for people in low-income countries than for those in high-income countries.

Higher levels of vulnerability will be found in densely populated areas, in particular those affected by poverty, poor governance, and/or conflict. Also, some livelihoods are more sensitive to the effects of climate change than others. Smallholder farming, pastoralism, and fishing are livelihoods that may be especially vulnerable. Further drivers for vulnerability are unsustainable land and ocean use, marginalization, and historical and ongoing patterns of inequity and poor governance.

Management

Responses to risk

Climate change adaptation and climate change mitigation can reduce climate-related risks. These two types of climate action can be complementary and can result in synergies, and thus more successful results.

Adaptation can help decrease climate risk by addressing the three interacting risk factors: hazards, vulnerability, and exposure. It is not possible to directly reduce hazards, because hazards are affected by current and future changes in climate. Instead, adaptation addresses the risks of climate impacts that arise from the way climate-related hazards interact with the exposure and vulnerability of human and ecological systems.

Exposure refers to the presence of people, livelihoods, ecosystems and other assets in places that could suffer negative effects. It is possible to reduce exposure by retreating from areas with high climate risks, such as floodplains. Improving systems for early warnings and evacuations are other ways to reduce exposure.

The IPCC defines climate change vulnerability as "the propensity or predisposition to be adversely affected" by climate change. It can apply to humans but also to natural systems. Human and ecosystem vulnerability are interdependent. According to the IPCC, climate change vulnerability encompasses a variety of concepts and elements, including sensitivity or susceptibility to harm and lack of capacity to cope and adapt. Sensitivity to climate change could be reduced by, for example, increasing the storage capacity of a reservoir or planting crops that are more resistant to climate variability. It is also possible to reduce vulnerability in towns and cities with green garden spaces. These can reduce heat stress and food insecurity for low-income neighbourhoods.

Climate risk management

Climate risk management (CRM) is a term describing the strategies involved in reducing climate risk, drawing on various fields including climate change adaptation, disaster management and sustainable development. Major international conferences and workshops on the topic include the United Nations Framework Convention on Climate Change and the World Meteorological Organization's Living With Climate conference.

Climate risk insurance

Climate risk insurance is a type of insurance designed to mitigate the financial and other risks associated with climate change, especially phenomena like extreme weather. It is often treated as a type of insurance needed for improving the climate resilience of poor and developing communities. It provides post-disaster liquidity for relief and reconstruction measures while also supporting future measures to reduce climate change vulnerability. Insurance is considered an important climate change adaptation measure.

Critics of such insurance say that it places the bulk of the economic burden on the communities responsible for the least carbon emissions. For low-income countries, these insurance programmes can be expensive due to the high start-up costs and infrastructure requirements for data collection. It is theorised that high premiums in high-risk areas facing increased climate threats would discourage settlement in those areas. Payouts from these programmes are also often neither timely nor financially adequate, which creates uncertainty for national budgets. A considerable problem on a micro-level is that weather-related disasters usually affect whole regions or communities at the same time, resulting in a large number of simultaneous claims. This means such insurance needs to be sold on a very large, diversified scale. However, a well-designed climate risk insurance scheme can act as a safety net for countries while improving resilience.

Climate Risk Pooling

A risk pool is a form of risk management that is mostly practiced by insurance companies, which come together to form a pool that provides protection against catastrophic risks such as floods or earthquakes. The term is also used to describe the pooling of similar risks within the concept of insurance. In effect, multiple insurance companies come together to act as one. While risk pooling is necessary for insurance to work, not all risks can be effectively pooled in a voluntary insurance market unless there is a subsidy available to encourage participation.

Disaster risk reduction

Disaster risk reduction aims to make disasters less likely to happen. The approach, also called DRR or disaster risk management, also aims to make disasters less damaging when they do occur. DRR aims to make communities stronger and better prepared to handle disasters. In technical terms, it aims to make them more resilient or less vulnerable. When DRR is successful, it makes communities less vulnerable because it mitigates the effects of disasters. This means DRR can make risky events fewer and less severe. Climate change can increase climate hazards. So development efforts often consider DRR and climate change adaptation together.

By sector

Climate risks can be categorised into natural environment, infrastructure, human health, the built environment, business and international. The IPCC Sixth Assessment Report considers risks within important sectors affected by climate change, like agriculture, water, cities, ecosystems, health and livelihoods. It also considers sets of major risks across these sectors. Risk categories are often assessed in relation to multiple hazards and impacts, but hazard-specific assessments are often also available, e.g. flood risk or heatwave risk assessments.

Ecosystems and their services

The main risks to ecosystems from climate change are biodiversity loss, ecosystem structure change, increased tree mortality, increased wildfire, and ecosystem carbon losses. These risks are linked. Loss of species can increase the risks to ecosystem health. Wildfire is an increasing risk for people as well as to ecosystems in many parts of the world. Wildfires and increased pest infestations due to climate change caused much of the recent tree mortality in North America.

Risks to seas and coastal areas include coral bleaching linked with ocean warming. This can change the composition of ecosystems. Coral bleaching and mortality also increase the risks of flooding on nearby shorelines and islands. Ocean acidification attributed to climate change drives change in coral reefs and other ecosystems such as rocky shores and kelp forests.

Health

Climate change-related risks to health include direct risks from extreme weather such as cold waves, storms, or prolonged high temperatures. There are also indirect risks such as mental health impacts of undernutrition or displacement caused by extreme weather. Similarly there are mental health risks from loss of access to green spaces, reduced air quality, or from anxiety about climate change. There are further risks from changes in conditions for transmission of infectious diseases. Malaria and dengue are particularly climate-sensitive.

Cities

Rising temperatures and heatwaves are key risks for cities. With warmer temperatures the urban heat island effect is likely to get worse. Population growth and land use change will influence human health and productivity risks in cities. Urban flooding is another key risk. This is especially the case in coastal settlements where flood risks are exacerbated by sea-level rise and storm surges. A further set of risks arises from reduced water availability. When supply cannot meet demand from expanding settlements, urban residents become exposed to water insecurity and climate impacts. This is especially so during periods of lower rainfall. These key risks differ greatly between cities, and between different groups of people in the same city.

Livelihoods and communities

Climate change affects livelihoods and living conditions in significant ways. These include access to natural resources and ecosystems, land and other assets. Access to basic infrastructure services such as water and sanitation, electricity, roads and telecommunications is another aspect of the vulnerability of communities and livelihoods to climate change.

The biggest livelihood-related risks stem from losses of agricultural yields, impacts on human health and food security, destruction of homes, and loss of income. There are also risks to fish and livestock that livelihoods depend on. Some communities and livelihoods also face risks of irreversible losses and challenges to development, as well as more complex disaster risks.

The consequences of climate change are the most severe for the poorest populations. These groups are disproportionately exposed to hazards such as temperature extremes and droughts. They usually have fewer resources and assets and less access to funding, support and political influence. There are other forms of disadvantage due to discrimination, gender inequalities and lack of access to resources. This includes people with disabilities or minority groups.

Business risks

In 2020 the World Economic Forum ranked climate change as the biggest risk to economy and society. Companies face reputational risks as well as financial risks. Companies publicly criticised for their environmental policies or high emissions might lose customers because of negative reputation.

Water

Climate change is affecting the overall and seasonal availability of water across regions. Climate change is projected to increase the variability of rain. There will be impacts on water quality as well as quantity. Floods can wash pollutants into water bodies and damage water infrastructure. In many places, particularly in the tropics and sub-tropics, there are longer dry spells and droughts, sometimes over consecutive years. These have contributed to drier soil conditions, lower groundwater tables and reduced or changed flows of rivers. There are risks to ecosystems, and across many water-using sectors of the economy. Agriculture is likely to be affected by changes in water availability, putting food security at risk. Irrigation has often contributed to groundwater depletion and changes in the water cycle. It can sometimes make a drought worse.

International

International climate risks are climate risks that cross national borders. Sometimes the impacts of climate change in one country or region can have further consequences for people in other countries. Risks can spread from one country to a neighbouring country, or from one country to distant regions. Risks can also cascade and have knock-on effects elsewhere, across multiple borders and sectors. For example, an impact of the floods in Thailand in 2011 was disruption to manufacturing supply chains affecting the automotive sector and electronics industry in Japan, Europe and the USA.

The different stages of a supply chain, where risks can be transmitted and managed, are an example of a risk pathway. Risk pathways, via which impacts are transmitted, include trade and finance networks, flows of people, resource flows such as water or food, and ecosystem connections.

International risks could potentially affect small trade-dependent countries, especially those dependent on food imports. They could also affect richer, developed nations that are relatively less exposed to direct risks from climate change. In addition, there are potential consequences from adaptation responses initiated in one country that might transmit or alter risks elsewhere. For example, a decision to pull out of investment in risky markets may increase climate vulnerability for many communities.

National and international risk assessments

International

The Intergovernmental Panel on Climate Change (IPCC) assessment framework is based on the understanding that climate risk emerges from the interaction of three risk factors: hazards, vulnerability and exposure. One of the primary roles of the IPCC, which was created by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988, is to evaluate climate risks, explore strategies for their prevention and publish this knowledge in a series of comprehensive assessment reports. The most recent report to consider the widest set of climate risks across nature and human activity was the Sixth Assessment Report Working Group II report, Impacts, Adaptation and Vulnerability, published in 2022. The assessed levels of risk generally increased compared to previous reports, whilst the impacts were found to have been on the high end of what had been expected.

European Union

The European Climate Risk Assessment (EUCRA) will assess current and future climate change impacts and risks relating to the environment, economy and wider society in Europe. The European Commission's Directorate-General for Climate Action and the EEA lead the preparation. The EUCRA is expected to be published in Spring 2024.

By country

United States

The National Climate Assessment (NCA) is a United States government interagency ongoing effort on climate change science conducted under the auspices of the Global Change Research Act of 1990. The fourth edition 'Volume II: Impacts, Risks, and Adaptation in the United States' was published in 2018.

United Kingdom

The UK Government is required, under the 2008 Climate Change Act, to publish a Climate Change Risk Assessment every five years. This assessment sets out the risks and opportunities facing the UK from climate change. The third assessment published in 2022 identified 61 risks cutting across multiple sectors. These risks were categorised into natural environment, infrastructure, human health, the built environment, business and international.

New Zealand

The Climate Change Response (Zero Carbon) Amendment Act (amended 2019) includes the publication of a National Climate Change Risk Assessment, every six years. The First Assessment (2020) grouped risks according to five value domains: human, natural environment, economy, built environment and governance. The assessment details the 10 most urgent risks overall, among them: risks to potable water supplies (availability and quality), risks to buildings due to extreme weather events, and risks to governments from economic costs of lost productivity, disaster relief and other unforeseen expenditures.
