
Saturday, August 26, 2023

Human epigenome

From Wikipedia, the free encyclopedia

The human epigenome is the complete set of structural modifications of chromatin and chemical modifications of histones and nucleotides (such as cytosine methylation). These modifications vary according to cell type and developmental stage. Various studies have shown that the epigenome also depends on exogenous factors.

Chemical modifications

Different types of chemical modification exist, and the ChIP-seq experimental procedure can be performed to study them. The epigenetic profiles of human tissues reveal the following distinct histone modifications in different functional areas:

  • Active promoters: H3K4me3, H3K27ac
  • Active enhancers: H3K4me1, H3K27ac
  • Transcribed gene bodies: H3K36me3
  • Silenced regions: H3K27me3, H3K9me3
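
As a toy illustration of how the mark-to-state mapping above could be used to annotate ChIP-seq results, the following Python sketch assigns a region the state whose characteristic marks best overlap the marks observed there. The region names, mark sets and function are hypothetical, not part of any standard tool.

```python
# Illustrative sketch only: map observed ChIP-seq histone marks to the
# chromatin states listed above. Mark sets mirror the list in the text.

STATE_MARKS = {
    "active promoter":       {"H3K4me3", "H3K27ac"},
    "active enhancer":       {"H3K4me1", "H3K27ac"},
    "transcribed gene body": {"H3K36me3"},
    "silenced region":       {"H3K27me3", "H3K9me3"},
}

def guess_state(observed_marks):
    """Return the state whose characteristic marks overlap most with the observed marks."""
    observed = set(observed_marks)
    best_state, best_overlap = None, 0
    for state, marks in STATE_MARKS.items():
        overlap = len(marks & observed)
        if overlap > best_overlap:
            best_state, best_overlap = state, overlap
    return best_state

print(guess_state({"H3K4me3", "H3K27ac"}))  # -> active promoter
print(guess_state({"H3K9me3"}))             # -> silenced region
```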

Methylation

DNA functionally interacts with a variety of epigenetic marks, such as cytosine methylation, also known as 5-methylcytosine (5mC). This epigenetic mark is widely conserved and plays major roles in the regulation of gene expression and in the silencing of transposable elements and repeat sequences.

Individuals differ in their epigenetic profiles; for example, the variance in CpG methylation among individuals is about 42%. By contrast, each individual's epigenetic profile (including the methylation profile) is constant over the course of a year, reflecting the constancy of our phenotypes and metabolic traits. The methylation profile, in particular, is quite stable over a 12-month period and appears to change more over decades.

Methylation sites

CoRSIVs are Correlated Regions of Systemic Interindividual Variation in DNA methylation. They span only 0.1% of the human genome, so they are very rare; they can be inter-correlated over long genomic distances (>50 kbp). CoRSIVs are also associated with genes involved in many human disorders, including tumors, mental disorders and cardiovascular diseases. It has been observed that disease-associated CpG sites are 37% enriched in CoRSIVs compared to control regions and 53% enriched in CoRSIVs relative to tDMRs (tissue-specific Differentially Methylated Regions).
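
The enrichment percentages quoted above compare how often disease-associated CpG sites fall inside CoRSIVs with how often they fall inside control regions. A minimal sketch of that calculation, using made-up counts rather than the published data, might look like this:

```python
# Minimal sketch of a percentage-enrichment calculation; all counts are
# hypothetical and chosen only to illustrate the arithmetic.

def pct_enrichment(hits_region, sites_region, hits_control, sites_control):
    """Percentage by which the hit rate in a region exceeds the control rate."""
    rate_region = hits_region / sites_region
    rate_control = hits_control / sites_control
    return 100.0 * (rate_region / rate_control - 1.0)

# e.g. 137 disease-associated CpGs among 10,000 CoRSIV CpGs
# versus 100 among 10,000 control CpGs
print(f"{pct_enrichment(137, 10_000, 100, 10_000):.0f}% enrichment")  # ~37%
```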

Most CoRSIVs are only 200–300 bp long and include 5–10 CpG dinucleotides, while the largest span several kb and involve hundreds of CpGs. These regions tend to occur in clusters, and the two genomic areas of highest CoRSIV density are observed at the major histocompatibility complex (MHC) locus on chromosome 6 and at the pericentromeric region on the long arm of chromosome 20.

CoRSIVs are enriched in intergenic and quiescent regions (e.g. subtelomeric regions) and contain many transposable elements, but few CpG islands (CGI) and transcription factor binding sites. CoRSIVs are under-represented in the proximity of genes, in heterochromatic regions, active promoters, and enhancers. They are also usually not present in highly conserved genomic regions.

CoRSIVs have a useful application: measurements of CoRSIV methylation in one tissue can provide some information about epigenetic regulation in other tissues; indeed, the expression of associated genes can be predicted, because systemic epigenetic variants are generally consistent across tissues and cell types.

Factors affecting methylation pattern

Quantification of the heritable basis underlying population epigenomic variation is also important to delineate its cis- and trans-regulatory architecture. In particular, most studies state that inter-individual differences in DNA methylation are mainly determined by cis-regulatory sequence polymorphisms, probably involving mutations in TFBSs (Transcription Factor Binding Sites) with downstream consequences on the local chromatin environment. The sparsity of trans-acting polymorphisms in humans suggests that such effects are highly deleterious. Indeed, trans-acting effects are expected to arise from mutations in chromatin control genes or other highly pleiotropic regulators. If trans-acting variants do exist in human populations, they probably segregate as rare alleles or originate from somatic mutations and present with clinical phenotypes, as is the case in many cancers.

Correlation between methylation and gene expression

DNA methylation (particularly in CpG regions) can affect gene expression: hypermethylated regions tend to be differentially expressed. In fact, people with similar methylation profiles also tend to have similar transcriptomes. Moreover, one key observation from human methylation data is that most functionally relevant changes in CpG methylation occur in regulatory elements, such as enhancers.

However, differential expression concerns only a small fraction of methylated genes: only one fifth of genes with CpG methylation show variable expression according to their methylation state. It is important to note that methylation is not the only factor affecting gene regulation.
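
A minimal sketch of how such a methylation–expression relationship can be quantified across individuals is shown below; the beta values and expression levels are invented for illustration, and real analyses use many samples, covariates and multiple-testing correction.

```python
# Toy example: correlate promoter CpG methylation (beta values, 0-1) with
# expression of the associated gene across individuals. In this invented
# dataset, hypermethylation coincides with lower expression.
import numpy as np

methylation = np.array([0.10, 0.25, 0.40, 0.60, 0.80, 0.90])  # promoter beta values
expression  = np.array([9.1,  8.5,  7.8,  6.0,  4.2,  3.9])   # log2 expression

r = np.corrcoef(methylation, expression)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # strongly negative for this toy gene
```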

Methylation in embryos

Immunostaining experiments have revealed a global DNA demethylation process in human preimplantation embryos. After fertilisation, the DNA methylation level decreases sharply in the early pronuclei, as a consequence of active DNA demethylation at this stage. Global demethylation is not an irreversible process, however: de novo methylation occurs from the early to the mid-pronuclear stage and from the 4-cell to the 8-cell stage.

The percentage of DNA methylation differs between oocytes and sperm: the mature oocyte has an intermediate level of DNA methylation (72%), whereas sperm has a high level of DNA methylation (86%). Demethylation of the paternal genome occurs quickly after fertilisation, whereas the maternal genome is quite resistant to demethylation at this stage. Maternal differentially methylated regions (DMRs) are more resistant to the preimplantation demethylation wave.

CpG methylation is similar at the germinal vesicle (GV) stage, the intermediate metaphase I (MI) stage and the mature metaphase II (MII) stage, whereas non-CpG methylation continues to accumulate through these stages.

Chromatin accessibility in the germline has been evaluated by different approaches, such as scATAC-seq, sciATAC-seq, scCOOL-seq, scNOMe-seq and scDNase-seq. Stage-specific proximal and distal regions of accessible chromatin have been identified. Global chromatin accessibility gradually decreases from the zygote to the 8-cell stage and then increases. Parental allele-specific analysis shows that the paternal genome becomes more open than the maternal genome from the late zygote stage to the 4-cell stage, which may reflect decondensation of the paternal genome as protamines are replaced by histones.

Sequence-Dependent Allele-Specific Methylation

DNA methylation imbalances between homologous chromosomes show sequence-dependent behavior: differences in the methylation state of neighboring cytosines arise from differences in DNA sequence between the two chromosomes. Whole-genome bisulfite sequencing (WGBS) is used to explore sequence-dependent allele-specific methylation (SD-ASM) at single-chromosome resolution with comprehensive whole-genome coverage. WGBS applied to 49 methylomes revealed CpG methylation imbalances exceeding 30% at 5% of the loci.
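
A minimal sketch of how allele-specific methylation can be called from phased bisulfite read counts is given below; the counts, threshold handling and function names are illustrative only, and real SD-ASM pipelines add coverage filters and statistical testing.

```python
# Sketch: flag a candidate SD-ASM locus from methylated/unmethylated read
# counts on each allele of a heterozygous CpG. Counts are hypothetical.

def methylation_fraction(methylated_reads, unmethylated_reads):
    total = methylated_reads + unmethylated_reads
    return methylated_reads / total if total else float("nan")

allele_a = methylation_fraction(methylated_reads=18, unmethylated_reads=2)   # 0.90
allele_b = methylation_fraction(methylated_reads=6,  unmethylated_reads=14)  # 0.30

imbalance = abs(allele_a - allele_b)
print(f"allelic imbalance = {imbalance:.2f}")
if imbalance > 0.30:  # the 30% difference mentioned above
    print("candidate SD-ASM locus")
```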

At gene regulatory loci bound by transcription factors, random switching between methylated and unmethylated DNA states has been observed. This is also referred to as stochastic switching, and it is linked to selective buffering of gene regulatory circuits against mutations and genetic diseases. Only rare genetic variants show this stochastic type of gene regulation.

A study by Onuchic et al. aimed to construct maps of allelic imbalances in DNA methylation, gene transcription and histone modifications. 71 epigenomes from 36 cell and tissue types, obtained from 13 participant donors, were examined. Stochastic switching occurred at thousands of heterozygous regulatory loci bound by transcription factors. The intermediate methylation state reflects the relative frequencies of methylated and unmethylated epialleles. Epiallele frequency variations correlate with the allele's affinity for transcription factors.

The analysis suggests that the human epigenome carries, on average, approximately 200 adverse SD-ASM variants. The sensitivity of genes with tissue-specific expression patterns provides an opportunity for evolutionary innovation in gene regulation.

A haplotype reconstruction strategy is used to trace chemical modifications of chromatin (using ChIP-seq) in a variety of human tissues. Haplotype-resolved epigenomic maps can trace allelic biases in chromatin configuration, and substantial variation among tissues and individuals is observed. This allows a deeper understanding of the cis-regulatory relationships between genes and control sequences.

Structural modifications

Over the last few years, several methods have been developed to study the structural, and consequently the functional, modifications of chromatin. The first project to use epigenomic profiling to identify regulatory elements in the human genome was ENCODE (Encyclopedia of DNA Elements), which focused on profiling histone modifications in cell lines. A few years later, ENCODE was incorporated into the International Human Epigenome Consortium (IHEC), which aims to coordinate international epigenome studies.

The structural modifications that these projects aim to study can be divided into five main groups:

  • Nucleosome occupancy to detect regions with regulatory genes;
  • Chromatin interactions and domains;

Topologically associating domains (TADs)

Topologically associating domains are a level of structural organization of the genome. They are formed by regions of chromatin, ranging in size from 100 kilobases up to several megabases, that interact strongly with themselves. The domains are linked by other genomic regions which, based on their size, are called either “topological boundary regions” or “unorganized chromatin”. These boundary regions separate the topological domains from heterochromatin and prevent its spreading. Topological domains are widespread in mammals, although similar genome partitions have also been identified in Drosophila.

Topological domains in humans, as in other mammals, have many functions related to gene expression and transcriptional control. Inside these domains, the chromatin is highly intertwined, whereas in the boundary regions chromatin interactions are far less frequent. These boundary areas, in particular, show peculiarities that determine the functions of all the topological domains.

Firstly, they contain insulator regions and barrier elements, both of which inhibit further transcription by the RNA polymerase enzyme. Such elements are characterized by the massive presence of the insulator-binding protein CTCF.

Secondly, boundary regions block heterochromatin spreading, thus preventing the loss of useful genetic information. This conclusion derives from the observation that sequences carrying the heterochromatin mark H3K9me3 are clearly interrupted near boundary sequences.

Thirdly, transcription start sites (TSSs), housekeeping genes and tRNA genes are particularly abundant in boundary regions, indicating that these areas have prolific transcriptional activity, thanks to structural characteristics that differ from those of other topological regions.

Finally, the border areas of the topological domains and their surroundings are enriched in Alu/B1 and B2 SINE retrotransposons. In recent years, these sequences have been reported to alter CTCF binding sites, thus interfering with the expression of some genomic areas.

Further evidence for a role in genetic modulation and transcriptional regulation comes from the strong conservation of the boundary pattern across mammalian evolution, with a dynamic range of small differences between cell types, suggesting that these topological domains take part in cell-type-specific regulatory events.

Correlation between methylation and 3D structure

The 4D Nucleome project aims to produce 3D maps of mammalian genomes in order to develop predictive models that correlate epigenomic modifications with genetic variation. In particular, the goal is to link genetic and epigenomic modifications with the enhancers and promoters they interact with in three-dimensional space, thus discovering gene-set interactomes and pathways as new candidates for functional analysis and therapeutic targeting.

Hi-C is an experimental method used to map the connections between DNA fragments in three-dimensional space on a genome-wide scale. This technique combines chemical crosslinking of chromatin with restriction enzyme digestion and next-generation DNA sequencing.
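
As a rough sketch of what Hi-C data processing involves, the snippet below bins hypothetical contact pairs from one chromosome into a contact matrix; real pipelines additionally filter read pairs and normalise the matrix.

```python
# Illustrative only: bin Hi-C contact pairs into a symmetric contact matrix.
# Chromosome length, bin size and contact pairs are made up.
import numpy as np

BIN_SIZE = 1_000_000           # 1 Mb bins
CHROM_LENGTH = 10_000_000      # toy 10 Mb chromosome
n_bins = CHROM_LENGTH // BIN_SIZE

# Each pair is (position of fragment 1, position of fragment 2).
contact_pairs = [(150_000, 900_000), (2_300_000, 2_800_000), (2_400_000, 7_100_000)]

matrix = np.zeros((n_bins, n_bins), dtype=int)
for pos1, pos2 in contact_pairs:
    i, j = pos1 // BIN_SIZE, pos2 // BIN_SIZE
    matrix[i, j] += 1
    matrix[j, i] += 1   # keep the matrix symmetric

print(f"{len(contact_pairs)} pairs binned into a {n_bins}x{n_bins} matrix")
```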

Studies of this kind are currently limited by the lack or unavailability of raw data.

Nuclear power plant

From Wikipedia, the free encyclopedia
Angra Nuclear Power Plant in Brazil

A nuclear power plant (NPP) is a thermal power station in which the heat source is a nuclear reactor. As is typical of thermal power stations, heat is used to generate steam that drives a steam turbine connected to a generator that produces electricity. As of August 2023, the International Atomic Energy Agency reported there were 412 nuclear power reactors in operation in 31 countries around the world, and 57 nuclear power reactors under construction.

Nuclear plants are very often used for base load, since their operations, maintenance, and fuel costs are at the lower end of the spectrum of costs. However, building a nuclear power plant often takes five to ten years, which can accrue significant financial costs, depending on how the initial investments are financed.

Nuclear power plants have a carbon footprint comparable to that of renewable energy such as solar farms and wind farms, and much lower than that of fossil fuels such as natural gas and coal. Despite some spectacular catastrophes, nuclear power plants are among the safest modes of electricity generation, comparable to solar and wind power plants.

History

The first time that heat from a nuclear reactor was used to generate electricity was on December 21, 1951, at the Experimental Breeder Reactor I, producing enough power to light four light bulbs.

On June 27, 1954, the world's first nuclear power station to generate electricity for a power grid, the Obninsk Nuclear Power Plant, commenced operations in Obninsk, in the Soviet Union. The world's first full scale power station, Calder Hall in the United Kingdom, opened on October 17, 1956. The world's first full scale power station solely devoted to electricity production—Calder Hall was also meant to produce plutonium—the Shippingport Atomic Power Station in Pennsylvania, United States—was connected to the grid on December 18, 1957.

Basic components

Systems

Boiling water reactor (BWR)

The conversion to electrical energy takes place indirectly, as in conventional thermal power stations. The fission in a nuclear reactor heats the reactor coolant. The coolant may be water or gas, or even liquid metal, depending on the type of reactor. The reactor coolant then goes to a steam generator and heats water to produce steam. The pressurized steam is then usually fed to a multi-stage steam turbine. After the steam turbine has expanded and partially condensed the steam, the remaining vapor is condensed in a condenser. The condenser is a heat exchanger which is connected to a secondary side such as a river or a cooling tower. The water is then pumped back into the steam generator and the cycle begins again. The water-steam cycle corresponds to the Rankine cycle.
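
As a back-of-the-envelope illustration of this heat-to-electricity conversion, the sketch below applies a steam-cycle efficiency of roughly one third, a typical figure for light-water reactors; the reactor rating and efficiency value are assumptions for illustration, not data for any specific plant.

```python
# Rough sketch: electrical output and reject heat for a given reactor
# thermal power, assuming ~33% steam-cycle efficiency (typical for LWRs).

def electrical_output_mw(thermal_power_mw, thermal_efficiency=0.33):
    """Electrical power produced for a given reactor thermal power."""
    return thermal_power_mw * thermal_efficiency

reactor_thermal_mw = 3000   # hypothetical reactor thermal rating
electric = electrical_output_mw(reactor_thermal_mw)
print(f"{electric:.0f} MWe generated")                        # ~990 MWe
print(f"{reactor_thermal_mw - electric:.0f} MW rejected to the condenser")
```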

The nuclear reactor is the heart of the station. In its central part, the reactor's core produces heat due to nuclear fission. With this heat, a coolant is heated as it is pumped through the reactor and thereby removes the energy from the reactor. The heat from nuclear fission is used to raise steam, which runs through turbines, which in turn power the electrical generators.

Nuclear reactors usually rely on uranium to fuel the chain reaction. Uranium is a very heavy metal that is abundant on Earth and is found in sea water as well as most rocks. Naturally occurring uranium is found in two different isotopes: uranium-238 (U-238), accounting for 99.3% and uranium-235 (U-235) accounting for about 0.7%. U-238 has 146 neutrons and U-235 has 143 neutrons.

Different isotopes have different behaviors. For instance, U-235 is fissile which means that it is easily split and gives off a lot of energy making it ideal for nuclear energy. On the other hand, U-238 does not have that property despite it being the same element. Different isotopes also have different half-lives. U-238 has a longer half-life than U-235, so it takes longer to decay over time. This also means that U-238 is less radioactive than U-235.
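
The relationship between half-life, decay and radioactivity noted above can be made concrete with a small calculation. The sketch below uses the commonly cited half-lives of about 4.47 billion years for U-238 and about 704 million years for U-235; the comparison over the age of the Earth is purely illustrative.

```python
# Sketch: fraction of the original atoms remaining after a given time,
# using N(t)/N(0) = 0.5 ** (t / half_life).

HALF_LIFE_YEARS = {"U-238": 4.468e9, "U-235": 7.04e8}

def fraction_remaining(isotope, elapsed_years):
    return 0.5 ** (elapsed_years / HALF_LIFE_YEARS[isotope])

age_of_earth_years = 4.5e9
for isotope in HALF_LIFE_YEARS:
    frac = fraction_remaining(isotope, age_of_earth_years)
    print(f"{isotope}: {frac:.3f} of the original atoms remain")
# The longer-lived U-238 decays more slowly, so it is also less radioactive
# per gram than U-235.
```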

Since nuclear fission creates radioactivity, the reactor core is surrounded by a protective shield. This containment absorbs radiation and prevents radioactive material from being released into the environment. In addition, many reactors are equipped with a dome of concrete to protect the reactor against both internal casualties and external impacts.

Pressurized water reactor (PWR)

The purpose of the steam turbine is to convert the heat contained in steam into mechanical energy. The engine house with the steam turbine is usually structurally separated from the main reactor building. It is aligned so as to prevent debris from the destruction of a turbine in operation from flying towards the reactor.

In the case of a pressurized water reactor, the steam turbine is separated from the nuclear system. To detect a leak in the steam generator and thus the passage of radioactive water at an early stage, an activity meter is mounted to track the outlet steam of the steam generator. In contrast, boiling water reactors pass radioactive water through the steam turbine, so the turbine is kept as part of the radiologically controlled area of the nuclear power station.

The electric generator converts mechanical power supplied by the turbine into electrical power. Low-pole AC synchronous generators of high rated power are used. A cooling system removes heat from the reactor core and transports it to another area of the station, where the thermal energy can be harnessed to produce electricity or to do other useful work. Typically the hot coolant is used as a heat source for a boiler, and the pressurized steam from that drives one or more steam turbine driven electrical generators.

In the event of an emergency, safety valves can be used to prevent pipes from bursting or the reactor from exploding. The valves are designed so that they can discharge all of the supplied flow rates with little increase in pressure. In the case of the BWR, the steam is directed into the suppression chamber and condenses there. The chambers on a heat exchanger are connected to the intermediate cooling circuit.

The main condenser is a large cross-flow shell and tube heat exchanger that takes wet vapor, a mixture of liquid water and steam at saturation conditions, from the turbine-generator exhaust and condenses it back into sub-cooled liquid water so it can be pumped back to the reactor by the condensate and feedwater pumps.

Some nuclear reactors make use of cooling towers to condense the steam exiting the turbines. The steam that is released never comes into contact with radioactivity.

In the main condenser, the wet vapor turbine exhaust comes into contact with thousands of tubes that have much colder water flowing through them on the other side. The cooling water typically comes from a natural body of water such as a river or lake. Palo Verde Nuclear Generating Station, located in the desert about 97 kilometres (60 mi) west of Phoenix, Arizona, is the only nuclear facility that does not use a natural body of water for cooling; instead, it uses treated sewage from the greater Phoenix metropolitan area. The water coming from the cooling body of water is either pumped back to the water source at a warmer temperature or returns to a cooling tower, where it either cools for further use or evaporates into water vapor that rises out of the top of the tower.
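
The size of the condenser cooling duty can be illustrated with a simple heat balance, Q = m·cp·ΔT; the reject heat and allowed temperature rise below are assumptions chosen only to show the order of magnitude of the required water flow.

```python
# Rough sketch of the condenser heat balance: cooling-water mass flow needed
# to carry away the reject heat for a given temperature rise. Illustrative
# numbers only, not data for any particular plant.

CP_WATER = 4186.0   # J/(kg*K), specific heat of liquid water

def cooling_flow_kg_per_s(reject_heat_mw, temp_rise_k):
    """Mass flow from Q = m_dot * cp * dT, solved for m_dot."""
    return reject_heat_mw * 1e6 / (CP_WATER * temp_rise_k)

flow = cooling_flow_kg_per_s(reject_heat_mw=2000, temp_rise_k=10)
print(f"~{flow:,.0f} kg/s (roughly {flow/1000:.0f} m^3/s of cooling water)")
```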

The water level in the steam generator and the nuclear reactor is controlled using the feedwater system. The feedwater pump has the task of taking the water from the condensate system, increasing the pressure and forcing it into either the steam generators (in the case of a pressurized water reactor) or directly into the reactor (for boiling water reactors).

Continuous power supply to the plant is critical to ensure safe operation. Most nuclear stations require at least two distinct sources of offsite power for redundancy. These are usually provided by multiple transformers that are sufficiently separated and can receive power from multiple transmission lines. In addition, in some nuclear stations, the turbine generator can power the station's loads while the station is online, without requiring external power. This is achieved via station service transformers which tap power from the generator output before they reach the step-up transformer.

Economics

Bruce Nuclear Generating Station (Canada), one of the largest operational nuclear power facilities in the world.

The economics of nuclear power plants is a controversial subject, and multibillion-dollar investments ride on the choice of an energy source. Nuclear power stations typically have high capital costs but low direct fuel costs, with the costs of fuel extraction, processing, use and spent fuel storage internalized. Therefore, comparison with other power generation methods is strongly dependent on assumptions about construction timescales and capital financing for nuclear stations. In the United States, cost estimates take into account station decommissioning and nuclear waste storage or recycling costs, due to the Price-Anderson Act.
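
Why the comparison hinges on construction timescales and financing can be seen in a simplified levelised-cost sketch: the capital charge is spread over the plant's lifetime output and is very sensitive to the discount rate. All inputs below are hypothetical round numbers, not published cost data.

```python
# Simplified levelised-cost-of-electricity (LCOE) sketch. Hypothetical inputs.

def capital_recovery_factor(rate, years):
    """Annual payment per unit of capital for a given discount rate and lifetime."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe_per_mwh(overnight_cost, capacity_mw, capacity_factor,
                 discount_rate, lifetime_years, fuel_and_om_per_mwh):
    annual_mwh = capacity_mw * capacity_factor * 8760
    annual_capital = overnight_cost * capital_recovery_factor(discount_rate, lifetime_years)
    return annual_capital / annual_mwh + fuel_and_om_per_mwh

# A 1,000 MW plant costing $6 billion, 90% capacity factor, 60-year life,
# $25/MWh for fuel plus operations and maintenance.
for rate in (0.03, 0.07):
    cost = lcoe_per_mwh(6e9, 1000, 0.90, rate, 60, 25)
    print(f"LCOE at {rate:.0%} discount rate: ${cost:.0f}/MWh")
```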

With the prospect that all spent nuclear fuel could potentially be recycled by using future reactors, generation IV reactors are being designed to completely close the nuclear fuel cycle. However, up to now there has not been any actual bulk recycling of waste from an NPP, and on-site temporary storage is still being used at almost all plant sites due to construction problems for deep geological repositories. Only Finland has stable repository plans; therefore, from a worldwide perspective, long-term waste storage costs are uncertain.

Olkiluoto Nuclear Power Plant in Eurajoki, Finland. The site houses one of the most powerful reactor types, known as the EPR.

Construction or capital costs aside, measures to mitigate global warming, such as a carbon tax or carbon emissions trading, increasingly favor the economics of nuclear power. Further efficiencies are hoped to be achieved through more advanced reactor designs: Generation III reactors promise to be at least 17% more fuel efficient and to have lower capital costs, while Generation IV reactors promise further gains in fuel efficiency and significant reductions in nuclear waste.

Unit 1 of the Cernavodă Nuclear Power Plant in Romania

In Eastern Europe, a number of long-established projects are struggling to find financing, notably Belene in Bulgaria and the additional reactors at Cernavodă in Romania, and some potential backers have pulled out. Where cheap gas is available and its future supply relatively secure, this also poses a major problem for nuclear projects.

Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date, all operating nuclear power stations were developed by state-owned or regulated utilities, where many of the risks associated with construction costs, operating performance, fuel price, and other factors were borne by consumers rather than suppliers. Many countries have now liberalized the electricity market, where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by station suppliers and operators rather than consumers; this leads to a significantly different evaluation of the economics of new nuclear power stations.

Following the 2011 Fukushima nuclear accident in Japan, costs are likely to go up for currently operating and new nuclear power stations, due to increased requirements for on-site spent fuel management and elevated design basis threats. However, many designs, such as the AP1000 currently under construction, use passive nuclear safety cooling systems, unlike those of Fukushima I, which required active cooling systems; this largely eliminates the need to spend more on redundant backup safety equipment.

According to the World Nuclear Association, as of March 2020:

  • Nuclear power is cost competitive with other forms of electricity generation, except where there is direct access to low-cost fossil fuels.
  • Fuel costs for nuclear plants are a minor proportion of total generating costs, though capital costs are greater than those for coal-fired plants and much greater than those for gas-fired plants.
  • System costs for nuclear power (as well as coal and gas-fired generation) are very much lower than for intermittent renewables.
  • Providing incentives for long-term, high-capital investment in deregulated markets driven by short-term price signals presents a challenge in securing a diversified and reliable electricity supply system.
  • In assessing the economics of nuclear power, decommissioning and waste disposal costs are fully taken into account.
  • Nuclear power plant construction is typical of large infrastructure projects around the world, whose costs and delivery challenges tend to be under-estimated.

Safety and accidents

Hypothetical number of global deaths which would have resulted from energy production if the world's energy production had been met through a single source, in 2014.

Modern nuclear reactor designs have had numerous safety improvements since the first-generation nuclear reactors. A nuclear power plant cannot explode like a nuclear weapon because the fuel for uranium reactors is not enriched enough, and nuclear weapons require precision explosives to force fuel into a small enough volume to go supercritical. Most reactors require continuous temperature control to prevent a core meltdown, which has occurred on a few occasions through accident or natural disaster, releasing radiation and making the surrounding area uninhabitable. Plants must be defended against theft of nuclear material and attack by enemy military planes or missiles.

The most serious accidents to date have been the 1979 Three Mile Island accident, the 1986 Chernobyl disaster, and the 2011 Fukushima Daiichi nuclear disaster, corresponding to the beginning of the operation of generation II reactors.

Professor of sociology Charles Perrow states that multiple and unexpected failures are built into society's complex and tightly coupled nuclear reactor systems. Such accidents are unavoidable and cannot be designed around. An interdisciplinary team from MIT has estimated that given the expected growth of nuclear power from 2005 to 2055, at least four serious nuclear accidents would be expected in that period. The MIT study does not take into account improvements in safety since 1970.

Controversy

The Ukrainian city of Pripyat, abandoned due to the nuclear accident that took place on 26 April 1986 at the Chernobyl Nuclear Power Plant (seen in the background).

The nuclear power debate about the deployment and use of nuclear fission reactors to generate electricity from nuclear fuel for civilian purposes peaked during the 1970s and 1980s, when it "reached an intensity unprecedented in the history of technology controversies," in some countries.

Proponents argue that nuclear power is a sustainable energy source which reduces carbon emissions and can increase energy security if its use supplants a dependence on imported fuels. Proponents advance the notion that nuclear power produces virtually no air pollution, in contrast to the chief viable alternative of fossil fuel. Proponents also believe that nuclear power is the only viable course to achieve energy independence for most Western countries. They emphasize that the risks of storing waste are small and can be further reduced by using the latest technology in newer reactors, and the operational safety record in the Western world is excellent when compared to the other major kinds of power plants.

Opponents say that nuclear power poses many threats to people and the environment, and that costs do not justify benefits. Threats include health risks and environmental damage from uranium mining, processing and transport, the risk of nuclear weapons proliferation or sabotage, and the problem of radioactive nuclear waste. Another environmental issue is discharge of hot water into the sea. The hot water modifies the environmental conditions for marine flora and fauna. They also contend that reactors themselves are enormously complex machines where many things can and do go wrong, and there have been many serious nuclear accidents. Critics do not believe that these risks can be reduced through new technology, despite rapid advancements in containment procedures and storage methods.

Opponents argue that when all the energy-intensive stages of the nuclear fuel chain are considered, from uranium mining to nuclear decommissioning, nuclear power is not a low-carbon electricity source despite the possibility of refinement and long-term storage being powered by a nuclear facility. Those countries that do not contain uranium mines cannot achieve energy independence through existing nuclear power technologies. Actual construction costs often exceed estimates, and spent fuel management costs are difficult to define.

On 1 August 2020, the UAE launched the Arab region's first-ever nuclear energy plant. Unit 1 of the Barakah plant in the Al Dhafrah region of Abu Dhabi commenced generating heat on the first day of its launch, while the remaining 3 units are being built. However, Nuclear Consulting Group head Paul Dorfman warned that the Gulf nation's investment in the plant risks "further destabilizing the volatile Gulf region, damaging the environment and raising the possibility of nuclear proliferation."

Reprocessing

Nuclear reprocessing technology was developed to chemically separate and recover fissionable plutonium from irradiated nuclear fuel. Reprocessing serves multiple purposes, whose relative importance has changed over time. Originally reprocessing was used solely to extract plutonium for producing nuclear weapons. With the commercialization of nuclear power, the reprocessed plutonium was recycled back into MOX nuclear fuel for thermal reactors. The reprocessed uranium, which constitutes the bulk of the spent fuel material, can in principle also be re-used as fuel, but that is only economic when uranium prices are high or disposal is expensive. Finally, the breeder reactor can employ not only the recycled plutonium and uranium in spent fuel, but all the actinides, closing the nuclear fuel cycle and potentially multiplying the energy extracted from natural uranium by more than 60 times.

Nuclear reprocessing reduces the volume of high-level waste, but by itself does not reduce radioactivity or heat generation and therefore does not eliminate the need for a geological waste repository. Reprocessing has been politically controversial because of the potential to contribute to nuclear proliferation, the potential vulnerability to nuclear terrorism, the political challenges of repository siting (a problem that applies equally to direct disposal of spent fuel), and because of its high cost compared to the once-through fuel cycle. In the United States, the Obama administration stepped back from President Bush's plans for commercial-scale reprocessing and reverted to a program focused on reprocessing-related scientific research.

Accident indemnification

Nuclear power works under an insurance framework that limits or structures accident liabilities in accordance with the Paris Convention on Third Party Liability in the Field of Nuclear Energy, the Brussels supplementary convention, and the Vienna Convention on Civil Liability for Nuclear Damage. However, states with a majority of the world's nuclear power stations, including the U.S., Russia, China and Japan, are not party to international nuclear liability conventions.

United States
In the United States, insurance for nuclear or radiological incidents is covered (for facilities licensed through 2025) by the Price-Anderson Nuclear Industries Indemnity Act.
United Kingdom
Under the energy policy of the United Kingdom, through its 1965 Nuclear Installations Act, liability for nuclear damage for which a UK nuclear licensee is responsible is governed. The Act requires compensation to be paid for damage up to a limit of £150 million by the liable operator for ten years after the incident. Between ten and thirty years afterwards, the Government meets this obligation. The Government is also liable for additional limited cross-border liability (about £300 million) under international conventions (the Paris Convention on Third Party Liability in the Field of Nuclear Energy and the Brussels Convention supplementary to the Paris Convention).

Decommissioning

Nuclear decommissioning is the dismantling of a nuclear power station and decontamination of the site to a state no longer requiring protection from radiation for the general public. The main difference from the dismantling of other power stations is the presence of radioactive material that requires special precautions to remove and safely relocate to a waste repository.

Decommissioning involves many administrative and technical actions. It includes all clean-up of radioactivity and progressive demolition of the station. Once a facility is decommissioned, there should no longer be any danger of a radioactive accident or to any persons visiting it. After a facility has been completely decommissioned it is released from regulatory control, and the licensee of the station no longer has responsibility for its nuclear safety.

Timing and deferral of decommissioning

Generally speaking, nuclear stations were originally designed for a life of about 30 years. Newer stations are designed for a 40 to 60-year operating life. The Centurion Reactor is a future class of nuclear reactor that is being designed to last 100 years.

One of the major limiting wear factors is the deterioration of the reactor's pressure vessel under the action of neutron bombardment. However, in 2018 Rosatom announced that it had developed a thermal annealing technique for reactor pressure vessels which ameliorates radiation damage and extends service life by between 15 and 30 years.

Flexibility

Nuclear stations are used primarily for base load because of economic considerations. The fuel cost of operating a nuclear station is lower than the fuel cost of operating coal or gas plants. Since most of the cost of a nuclear power plant is capital cost, there is almost no cost saving in running it at less than full capacity.

Nuclear power plants are routinely used in load following mode on a large scale in France, although "it is generally accepted that this is not an ideal economic situation for nuclear stations." Unit A at the decommissioned German Biblis Nuclear Power Plant was designed to modulate its output 15% per minute between 40% and 100% of its nominal power.
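
The Biblis figure quoted above translates into a very short ramp: at 15% of nominal power per minute, moving between the 40% and 100% set points takes four minutes, as the trivial calculation below shows.

```python
# Tiny illustration of the quoted load-following capability.

def ramp_time_minutes(start_fraction, end_fraction, ramp_rate_per_minute):
    return abs(end_fraction - start_fraction) / ramp_rate_per_minute

print(ramp_time_minutes(0.40, 1.00, 0.15), "minutes")   # 4.0
```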

Russia has led in the practical development of floating nuclear power stations, which can be transported to the desired location and occasionally relocated or moved for easier decommissioning. In 2022, the United States Department of Energy funded a three-year research study of offshore floating nuclear power generation. In October 2022, NuScale Power and Canadian company Prodigy announced a joint project to bring a North American small modular reactor based floating plant to market.

Insurrectionary anarchism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Insurrectionary_anarchism

Insurrectionary anarchism is a revolutionary theory and tendency within the anarchist movement that emphasizes insurrection as a revolutionary practice. It is critical of formal organizations such as labor unions and federations that are based on a political program and periodic congresses. Instead, insurrectionary anarchists advocate informal organization and small affinity group based organization. Insurrectionary anarchists put value in attack, permanent class conflict and a refusal to negotiate or compromise with class enemies.

Associated closely with the Italian anarchist movement, the theory of insurrectionary anarchism has historically been linked with a number of high-profile assassinations, as well as the bombing campaigns of the Galleanisti and Informal Anarchist Federation (FAI).

History

Development

Among the earliest inspirations for insurrectionary anarchism was Max Stirner's 1845 book The Ego and Its Own, a tract that upheld a kind of proto-individualist anarchism. Stirner distinguished between "revolution" and "insurrection", defining the aims of "revolution" to be a new arrangement of society by a state, while he considered the aims of an "insurrection" to be the rejection of such arrangements and the free self-organisation of individuals.

During the 1870s, the idea of "propaganda of the deed" was initially developed by Italian anarchists to mean small direct actions that would inspire others to themselves carry out acts of insurrection. Insurrectionists viewed every riot or rebellion as a kind of "revolutionary gymnastics" which could lead to a generalised social revolution. Driven by this theory, Italian individualist anarchists carried out a series of high-profile assassinations during the 1890s, killing French President Sadi Carnot, Spanish Prime Minister Antonio Cánovas del Castillo, Austrian Empress Elisabeth Wittelsbach and Italian King Umberto Savoy.

Meanwhile, the question of organisation had divided the Italian anarchist movement into the syndicalists, who advocated for organisation within the labour movement, and the insurrectionists, who emphasised violent and illegal forms of self-organised direct action. The insurrectionary anarchists rejected all forms of formal organisation, including anarchist federations and trade unions, and criticised the movement's reformist and activist tendencies for failing to take "immediate action". Although both tendencies advocated for anarchist communism, pro-organisationalists such as Francesco Saverio Merlino and Errico Malatesta considered the insurrectionists to really constitute a tendency of individualist anarchism, due to their belief in individual sovereignty and natural law.

Galleanist movement

Luigi Galleani, an early leading proponent of insurrectionary anarchism

Contemporaneous with the rise of anarcho-syndicalism, insurrectionary anarchism was promoted in the United States by the Italian immigrant Luigi Galleani, through his newspaper Cronaca Sovversiva. Galleani was a staunch anti-organisationalist, opposing anarchist participation in the labour movement, which he felt displayed reformist tendencies and a receptiveness to corruption. This stance brought Galleani into conflict with the Industrial Workers of the World (IWW) during the 1912 Lawrence textile strike, following which they entered into a fierce polemic. However, outside observers paid little attention to the differences between the anarchist factions, who were generally viewed as part of the same "amorphous inscrutable threat".

Galleani advocated for propaganda of the deed, which was taken up throughout North America by a network of Galleanist cells, usually consisting of close-knit individuals. Following the American entry into World War I and the political repression that ensued, the Galleanists initiated a violent campaign in opposition to the American government. After some Italian anarchists were killed by police for tearing down an American flag, the Galleanists carried out a reprisal attack, which itself triggered a wave of arrests against insurrectionists. When one of the Italian insurrectionists was threatened with deportation, the Galleanists responded with a bombing campaign, sending letter bombs to industrialists, politicians and lawyers. None of the bombs hit their targets, instead injuring a housekeeper and accidentally killing one of the insurrectionist conspirators. Although the conspirators themselves were never caught, Galleani and other Italian insurrectionists were deported, and the bombings were used as justification for repression of the 1919 strike wave.

Aftermath of the Wall Street bombing (1920)

During the subsequent political repression, the Italian anarchists Sacco and Vanzetti were arrested on charges of armed robbery. The Galleanists responded by carrying out the Wall Street bombing, killing 38 people and making the task of exonerating the pair more difficult. Nevertheless, the Galleanists continued their efforts to aid Sacco and Vanzetti, who they considered to have been framed. In 1922, they began publication of L'Adunata dei refrattari, in which they encouraged their readers to break the pair out of prison and carry out retributive violence against the responsible state officials. This further exacerbated the split between the syndicalists and insurrectionists, as the two factions excluded each other from their own campaigns.

Political repression largely drove the insurrectionary anarchist movement underground, with Marcus Graham declaring that they would continue to operate on a conspiratorial basis until they could again agitate in the open. During the late 1920s, Graham moved to San Francisco, where he became involved with insurrectionary anarchists around the Galleanist newspaper L'Emancipazione. As the Great Depression limited their capacity, the paper shifted to publishing in English and invited Graham to be its editor. In January 1933, the group established the newspaper Man!, intended as a means to revive the Galleanism of the previous decade. For Graham and his collaborators, the social revolution was to be built on individuals achieving a form of enlightenment that would break them from "every law, custom and sham creed in which he now finds himself trapped". Like the early insurrectionists, Man! rejected syndicalism and the labour movement, which they considered to be inherently authoritarian, and frequently criticised union officials for corruption. Graham also formulated a criticism of technology and called for the destruction of civilisation, in arguments that were an early precursor to anarcho-primitivism.

Man! and L'Adunata dei refrattari continued to act as the main expressions of insurrectionary anarchism throughout the 1930s, but failed to revive it as a popular tendency. Before long, Man! came under increasing police repression, culminating with Graham's arrest and the subsequent cessation of publication in 1939. By the 1940s, the insurrectionary anarchist movement was only a marginal force, concentrated around L'Adunata dei refrattari in New York. The periodical slowly declined until the early 1970s, when it was finally succeeded by the anti-authoritarian publication Fifth Estate.

Resurgence

Insurrectionary anarchism re-emerged within the Italian anarchist movement during the Years of Lead, when the country was marked by instances of left-wing and right-wing terrorism. In 1977, Alfredo Bonanno published his book Armed Joy, which espoused a critique of work, emphasised the feeling of joy and advocated for the use of revolutionary violence. Although Bonanno was imprisoned for the book's publication and the Italian state ordered all copies be destroyed, he continued to pen insurrectionist manifestos. As the Cold War drew to a close, he called for insurrectionary anarchists to coordinate themselves into an informal "Anti-Authoritarian Insurrectionist International" in order to build contact and exchange ideas, but this project was stillborn.

Logo of the Informal Anarchist Federation (FAI)

During the 1980s, Italian insurrectionary anarchists began carrying out small acts of vandalism against "soft targets" such as telecommunications and electricity infrastructure. These were usually carried out by small informal groups, largely distributed throughout Northern and Central Italy, that focused on localised social conflicts. These attacks escalated into violence during the late 1990s, when insurrectionists began carrying out bombings and assaults. The escalation initially caught the Italian authorities off guard, as they were used to these attacks being carried out without casualties.

Between 1999 and 2003, four insurrectionist groups carried out a series of more than 20 bombing attacks, after which they merged into the Informal Anarchist Federation (FAI) in December 2003. To announce their formation, the FAI carried out a series of bombing attacks against various officials of the European Union, including the European Commission President Romano Prodi, although none of the letter bombs sent out caused any injuries. A further series of letter bomb attacks were carried out by the FAI in 2010 and 2011, during which a number of people were injured. After a cell of the FAI kneecapped an executive of Ansaldo Nucleare in 2012, fears of anarchist terrorism spread rapidly throughout Italy. This led to a wave of arrests against insurrectionary anarchists, including one of the attackers, Alfredo Cospito, which briefly put the FAI into an "operational stasis" before they resumed parcel bomb attacks the following year. Over a decade of active operations, the FAI claimed 50 violent attacks, causing 10 injuries and no deaths.

Anarchist graffiti during the 2008 Greek riots

Since the dissolution of the Red Brigades, insurrectionary anarchists have been considered by the Italian government to be among the most dangerous domestic terrorists in Italy, second only to Islamic terrorists. The FAI's example was followed on an international scale by a number of other insurrectionary anarchist groups, most notably the Conspiracy of Cells of Fire (CCF) in Greece, who joined together with the FAI to launch what they called the "Black International". Parts of Bonanno's insurrectionary programme have also been taken up by anarchist sections of the anti-globalization movement, as well as by the Sardinian nationalist Costantino Cavalleri and the American individualist Wolfi Landstreicher.

Protester facing riot police in the "Battle of Seattle"

In the United States, insurrectionary anarchism had largely been sidelined until the establishment of Up Against the Wall Motherfucker, which promoted the use of violent direct action in solidarity with the King assassination riots. During the mid-2000s, nihilists who were inspired by the rise of insurrectionism in Europe established Anarchy: A Journal of Desire Armed (AJODA), which took up the insurrectionist calls to violence and whose members participated in occupation protests. Insurrectionary anarchists went on to play a leading role in the Occupy movement, although they often clashed with activists who promoted civil disobedience and prefigurative politics, and ultimately failed to develop a long-term strategy for the movement.

Theory and practice

Insurrectionary anarchism generally upholds core anarchist principles, such as anti-authoritarianism, anti-capitalism, anti-clericalism, anti-imperialism, anti-militarism and anti-statism. It has also historically combined with other causes, including radical environmentalism, national liberation struggles and the prison abolition movement.

Direct action

Insurrectionary anarchists generally undertake two basic types of direct action: vandalism of low-profile targets, such as infrastructure or buildings; and violent attacks, often using letter bombs, against political or military targets.

Insurrectionary anarchists often see direct action as a form of emotional release, and participating in action as a source of joy. Militants of the FAI, such as Alfredo Cospito, described their attack against an Italian executive as a moment where they "fully enjoyed my life". Insurrectionists can also see violence as a method of self-empowerment and even, in existential terms, as a means to achieve enlightenment.

Informal organisation

Insurrectionary anarchism shares the anarchist opposition to hierarchical organisation, but goes even further, opposing any form of organisational structure in general. Instead, insurrectionists emphasise small, informal and temporary forms of organisation, such as affinity groups, that can together undertake direct action. Often formed from pre-existing interpersonal relationships, these groups utilise consensus decision-making to collectively elaborate a programme for attacks against the state and capitalism.

The insurrectionist organisational model has been compared to that of "leaderless resistance", which encourages the independent action of small groups and lone wolves, without an overarching centralised hierarchy. This model minimises the risks of espionage and internal debate, while also fostering a degree of ideological pluralism, so long as it does not distract from direct action. The model has been noted both for its capacity to resist infiltration and for its tendencies towards isolation and the development of an unofficial leadership. While informal organisation can allow for a certain amount of flexibility and adaptability, information sharing is hampered by the compartmentalised structure, and the reliance on interpersonal trust can present a barrier to recruitment.

Methanogenesis

From Wikipedia, the free encyclopedia

Methanogenesis or biomethanation is the formation of methane coupled to energy conservation by microbes known as methanogens. Organisms capable of producing methane for energy conservation have been identified only from the domain Archaea, a group phylogenetically distinct from both eukaryotes and bacteria, although many live in close association with anaerobic bacteria. The production of methane is an important and widespread form of microbial metabolism. In anoxic environments, it is the final step in the decomposition of biomass. Methanogenesis is responsible for significant amounts of natural gas accumulations, the remainder being thermogenic.

Biochemistry

Cycle for methanogenesis, showing intermediates.

Methanogenesis in microbes is a form of anaerobic respiration. Methanogens do not use oxygen to respire; in fact, oxygen inhibits the growth of methanogens. The terminal electron acceptor in methanogenesis is not oxygen, but carbon. The two best described pathways involve the use of acetic acid or inorganic carbon dioxide as terminal electron acceptors:

CO2 + 4 H2 → CH4 + 2 H2O
CH3COOH → CH4 + CO2

During anaerobic respiration of carbohydrates, H2 and acetate are formed in a ratio of 2:1 or lower, so H2 contributes only c. 33% to methanogenesis, with acetate contributing the greater proportion. In some circumstances, for instance in the rumen, where acetate is largely absorbed into the bloodstream of the host, the contribution of H2 to methanogenesis is greater.
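
The roughly one-third contribution of H2 follows directly from the stoichiometries above: four H2 yield one CH4, while one acetate yields one CH4. A short worked example, assuming the 2:1 H2-to-acetate ratio mentioned in the text:

```python
# Worked example of the ~33% figure: with H2 and acetate formed 2:1 and the
# stoichiometries 4 H2 -> 1 CH4 and 1 CH3COOH -> 1 CH4.

h2, acetate = 2.0, 1.0             # relative molar amounts from fermentation
ch4_from_h2 = h2 / 4               # four H2 consumed per CH4
ch4_from_acetate = acetate         # one acetate consumed per CH4

total_ch4 = ch4_from_h2 + ch4_from_acetate
print(f"H2 contribution:      {100 * ch4_from_h2 / total_ch4:.0f}%")   # ~33%
print(f"acetate contribution: {100 * ch4_from_acetate / total_ch4:.0f}%")
```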

However, depending on pH and temperature, methanogenesis has been shown to use carbon from other small organic compounds, such as formic acid (formate), methanol, methylamines, tetramethylammonium, dimethyl sulfide, and methanethiol. The catabolism of the methyl compounds is mediated by methyl transferases to give methyl coenzyme M.

Proposed mechanism

The biochemistry of methanogenesis involves the following coenzymes and cofactors: F420, coenzyme B, coenzyme M, methanofuran, and methanopterin.

The mechanism for the conversion of the CH3–S bond into methane involves a ternary complex of methyl coenzyme M and coenzyme B fitted into a channel terminated by the axial site on the nickel of the cofactor F430. One proposed mechanism invokes electron transfer from Ni(I) (to give Ni(II)), which initiates formation of CH4. Coupling of the coenzyme M thiyl radical (RS•) with HS-coenzyme B releases a proton and re-reduces Ni(II) by one electron, regenerating Ni(I).

Reverse methanogenesis

Some organisms can oxidize methane, functionally reversing the process of methanogenesis, also referred to as the anaerobic oxidation of methane (AOM). Organisms performing AOM have been found in multiple marine and freshwater environments including methane seeps, hydrothermal vents, coastal sediments and sulfate-methane transition zones. These organisms may accomplish reverse methanogenesis using a nickel-containing protein similar to methyl-coenzyme M reductase used by methanogenic archaea. Reverse methanogenesis occurs according to the reaction:

SO4^2− + CH4 → HCO3^− + HS^− + H2O

Importance in carbon cycle

Methanogenesis is the final step in the decay of organic matter. During the decay process, electron acceptors (such as oxygen, ferric iron, sulfate, and nitrate) become depleted, while hydrogen (H2) and carbon dioxide accumulate. Light organics produced by fermentation also accumulate. During advanced stages of organic decay, all electron acceptors become depleted except carbon dioxide. Carbon dioxide is a product of most catabolic processes, so it is not depleted like other potential electron acceptors.

Only methanogenesis and fermentation can occur in the absence of electron acceptors other than carbon. Fermentation only allows the breakdown of larger organic compounds, and produces small organic compounds. Methanogenesis effectively removes the semi-final products of decay: hydrogen, small organics, and carbon dioxide. Without methanogenesis, a great deal of carbon (in the form of fermentation products) would accumulate in anaerobic environments.

Natural occurrence

In ruminants

Testing Australian sheep for exhaled methane production (2001), CSIRO

Enteric fermentation occurs in the gut of some animals, especially ruminants. In the rumen, anaerobic organisms, including methanogens, digest cellulose into forms nutritious to the animal. Without these microorganisms, animals such as cattle would not be able to consume grasses. The useful products of methanogenesis are absorbed by the gut, but methane is released from the animal mainly by belching (eructation). The average cow emits around 250 liters of methane per day. In this way, ruminants contribute about 25% of anthropogenic methane emissions. One method of controlling methane production in ruminants is to feed them 3-nitrooxypropanol.
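
To put the 250 litres per day into more familiar units, the sketch below converts the volume to a mass using the ideal-gas molar volume; the result is only an order-of-magnitude estimate, since the measurement conditions are not specified.

```python
# Rough conversion of 250 L/day of methane per cow into an annual mass,
# assuming the volume is measured near 0 degC and 1 atm (ideal gas).

MOLAR_MASS_CH4 = 16.04   # g/mol
MOLAR_VOLUME = 22.4      # L/mol at 0 degC, 1 atm

litres_per_day = 250
grams_per_day = litres_per_day / MOLAR_VOLUME * MOLAR_MASS_CH4
kg_per_year = grams_per_day * 365 / 1000
print(f"~{grams_per_day:.0f} g/day, or ~{kg_per_year:.0f} kg of CH4 per cow per year")
```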

In humans

Some humans produce flatus that contains methane. In one study of the feces of nine adults, five of the samples contained archaea capable of producing methane. Similar results are found in samples of gas obtained from within the rectum.

Even among humans whose flatus does contain methane, the amount is in the range of 10% or less of the total amount of gas.

In plants

Many experiments have suggested that leaf tissues of living plants emit methane. Other research has indicated that the plants are not actually generating methane; they are just absorbing methane from the soil and then emitting it through their leaf tissues.

In soils

Methanogens are observed in anoxic soil environments, contributing to the degradation of organic matter. This organic matter may be placed by humans through landfill, buried as sediment at the bottom of lakes or oceans, or present as residual organic matter in sediments that have formed into sedimentary rock.

In Earth's crust

Methanogens are a notable part of the microbial communities in the continental and marine deep biosphere.

Role in global warming

Atmospheric methane is an important greenhouse gas, with a global warming potential 25 times greater than that of carbon dioxide (averaged over 100 years), and methanogenesis in livestock and in the decay of organic material is thus a considerable contributor to global warming. It may not be a net contributor, in the sense that it works on organic material which used up atmospheric carbon dioxide when it was created, but its overall effect is to convert that carbon dioxide into methane, which is a much more potent greenhouse gas.
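
A CO2-equivalent figure follows directly from the 100-year global warming potential quoted above: each kilogram of methane counts as 25 kilograms of CO2. The emission quantity in the sketch is hypothetical.

```python
# Sketch of a CO2-equivalent calculation using the 100-year GWP of 25.

GWP_100_CH4 = 25   # kg CO2-equivalent per kg of CH4 over 100 years

def co2_equivalent_kg(methane_kg):
    return methane_kg * GWP_100_CH4

# e.g. a hypothetical 65 kg of CH4 emitted per year
print(f"{co2_equivalent_kg(65):,} kg CO2-equivalent per year")
```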

Methanogenesis can also be beneficially exploited, to treat organic waste, to produce useful compounds, and the methane can be collected and used as biogas, a fuel. It is the primary pathway whereby most organic matter disposed of via landfill is broken down.

Extra-terrestrial life

The presence of atmospheric methane has a role in the scientific search for extra-terrestrial life. The justification is that, on an astronomical timescale, methane in the atmosphere of an Earth-like celestial body will quickly dissipate, and that its presence on such a planet or moon therefore indicates that something is replenishing it. If methane is detected (by using a spectrometer, for example), this may indicate that life is, or recently was, present. This was debated when methane was discovered in the Martian atmosphere by M.J. Mumma of NASA's Goddard Space Flight Center, and verified by the Mars Express Orbiter (2004), and in Titan's atmosphere by the Huygens probe (2005). The debate was furthered by the discovery of transient 'spikes of methane' on Mars by the Curiosity rover.

It is argued that atmospheric methane can come from volcanoes or other fissures in the planet's crust and that without an isotopic signature, the origin or source may be difficult to identify.

On 13 April 2017, NASA confirmed that the dive of the Cassini orbiter spacecraft on 28 October 2015 had discovered an Enceladus plume which has all the ingredients for methanogenesis-based life forms to feed on. Previous results, published in March 2015, suggested that hot water is interacting with rock beneath the sea of Enceladus; the new finding supported that conclusion and added that the rock appears to be reacting chemically. From these observations, scientists determined that nearly 98 percent of the gas in the plume is water, about 1 percent is hydrogen, and the rest is a mixture of other molecules including carbon dioxide, methane and ammonia.

Politics of Europe

From Wikipedia, the free encyclopedia ...