
Monday, April 19, 2021

White guilt

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/White_guilt

White guilt is the individual or collective guilt felt by some white people for harm resulting from racist treatment of ethnic minorities such as African Americans and indigenous peoples by other white people, most specifically in the context of the Atlantic slave trade, European colonialism, the genocide of indigenous peoples, the Holocaust, and the legacy of these eras.

In certain regions of the Western world, it can be called white settler guilt, white colonial guilt, and other variations, which refer to the guilt more pointedly in relation to European settlement and colonization, such as in Australia and New Zealand. The concept of white guilt has examples both historically and currently in the United States and to a lesser extent in Canada, South Africa, France and the United Kingdom. White guilt has been described by psychologists such as Lisa B. Spanierman and Mary J. Heppner as one of the psychosocial costs of racism for white individuals along with empathy (sadness and anger) for victims of racism and fear of non-white people.

History

Early use

Judith Katz, the author of the 1978 publication White Awareness: Handbook for Anti-Racism Training, is critical of what she calls self-indulgent white guilt fixations. Her concerns about white guilt led her to move from black-white group encounters to all-white groups in her anti-racism training. She also avoided using non-white people to re-educate white people, she said, because she found this led white people to focus on getting acceptance and forgiveness rather than changing their own actions or beliefs.

A report in The Washington Post from 1978 describes the exploitation of white guilt by con artists: "Telephone and mail solicitors, trading on 'white guilt' and on government pressure to advertise in minority-oriented publications, are inducing thousands of businessmen to buy ads in phony publications."

Academic research

In 1999, academic research conducted at the University of Pennsylvania examined the extent of societal feeling of white guilt, possible guilt-based antecedents, and white guilt's relationship to attitudes towards affirmative action. The four studies revealed that "Even though mean White guilt tended to be low, with the mean being just below the midpoint of the scale, the range and variability confirms the existence of feelings of White guilt for some". The findings also showed that white guilt was directly linked to "more negative personal evaluations" of white people generally, and the extent of an individual's feelings of white guilt independently predicted attitudes towards white privilege, racial discrimination and affirmative action.

In 2003, research at the University of California, Santa Cruz replicated, in its first study, the link between white guilt and strength of belief in white privilege. The second study revealed that white guilt "resulted from seeing European Americans as perpetrators of racial discrimination", and was also predictive of support for compensatory efforts for African Americans.

One academic paper suggests that in France, white guilt may be a common feature of the management of race relations, in contrast to other European countries.

Rumy Hasan, in his book Multiculturalism: some inconvenient truths (2010), examines "the liberal postcolonial sense of guilt".

Regions

In the United States

American civil rights activist Bayard Rustin wrote that reparations for slavery would be an exploitation of white guilt and damage the "integrity of blacks". In 2006, then-Senator Barack Obama wrote in his book The Audacity of Hope that "rightly or wrongly, white guilt has largely exhausted itself in America". His view on the subject was based on an interaction in the US Senate, where he witnessed a white legislator complain about being made to "feel more white" when a black colleague discussed systemic racism with them.

Shelby Steele, a conservative black political writer, discussed the concept in his 2006 book White Guilt: How Blacks and Whites Together Destroyed the Promise of the Civil Rights Era. Steele criticizes "white guilt" saying that it is nothing more than an alternative interpretation of the concept of "black power":

Whites (and American institutions) must acknowledge historical racism to show themselves redeemed by it, but once they acknowledge it, they lose moral authority over everything having to do with race, equality, social justice, poverty and so on. [...] The authority they lose transfers to the 'victims' of historical racism and becomes their great power in society. This is why white guilt is quite literally the same thing as Black power.

George F. Will, a conservative American political columnist, wrote: "[White guilt is] a form of self-congratulation, where whites initiate 'compassionate policies' toward people of color, to showcase their innocence to racism."

In 2015, when it came to light that American civil rights activist Rachel Dolezal had been posing as African American, Washington Post journalist Krissah Thompson described her as "an archetype of white guilt played to its end". Thompson discussed the issue with psychologist Derald Wing Sue, an expert on racial identity, who suggested that Dolezal had become so fascinated by racism and racial justice issues that she "over-identified" with black people. In 2016, the school district of Henrico County, Virginia ceased future use of an educational video, Structural Discrimination: The Unequal Opportunity Race, which visualized white privilege and structural racism. Parents complained, calling it a white guilt video, which led to a ban by the county's superintendent.

Polling since 2016 has found that white liberals rate non-white groups more positively than they rate whites, while every other racial group surveyed rates its own group more positively than it rates other groups.

In October 2018, The Economist proposed that an increase in Americans claiming Native American ancestry, often incorrectly, may be explained by attempts to "absolve them of collective European guilt for the genocide of indigenous people". In 2019, it was reported that liberal white Americans, influenced by white guilt, had been changing patterns of political and social behaviour to be more racially inclusive since the election of Donald Trump. This included the methods by which Democratic nominees were being considered for the 2020 presidential election.

In October 2019, students at a middle school in Massachusetts raised money for the Mashpee Wampanoag Tribe, after learning that the tribe had dealt with the first colonists from the Mayflower. The school director said the lesson had "left all our students with this sense of European guilt", and one student remarked "If we don’t try to repair what our ancestors did, the tribes will die off".

In Australia

Author Sally Morgan's 1987 book My Place, which explores Aboriginal identity, has been criticized for providing European Australians with a narrative of colonization in Australia which, critics argue, too generously assuages white settler guilt. Marcia Langton has described the book as a kind of unearned catharsis for European guilt: "The book is a catharsis. It gives release and relief, not so much to Aboriginal people oppressed by psychotic racism, as to the whites who wittingly and unwittingly participated in it".

In New Zealand

In New Zealand, the legacy of Pākehā settlers has created a localized sense of white guilt relating to the damage done to pre-existing Māori culture and the mistreatment of indigenous people. In 2002, then opposition leader Bill English gave a speech rejecting the "cringing guilt" said to result from the European colonization of Aotearoa and from the Pākehā settlers who enacted it. This was in response to the government's Race Relations Commissioner comparing the impact of British settlement in New Zealand to the Taliban's vandalism of the Buddhas of Bamyan.

Academic Elizabeth Rata has proposed that "without the mirror image of unexpiated guilt, a necessary process in the recognition and validation of a shared reality, Pākehā guilt moved, not onto the next stage of externalised shame, but into an internal and enclosed narcissism". In her analysis, she suggests that the Waitangi Tribunal has been a missed opportunity to reconcile white guilt in New Zealand.

Critical opinions

Commentator Sunny Hundal, writing for The Guardian, stated that it is "reductionist" to assign political opinions to a collective guilt such as "white guilt", and that few people on the left actually hold the views ascribed to them by the conservative writers who expound on the concept of "white guilt" and its implications. Hundal concludes: "Not much annoys me more than the stereotype that to be liberal is to be full of guilt. To be socially liberal, in my view, is to be more mindful of compassion and empathy for others…to label that simply as guilt is just...insulting."

In 2015, Gary Younge explored white guilt's impotence in society, writing: "It won't close the pay gap, the unemployment gap, the wealth gap or the discrepancy between black and white incarceration. It won't bring back Walter Scott, Trayvon Martin or Brandon Moore." Coleman Hughes has suggested that white guilt causes the misdirection of anti-racist efforts, writing that "where white guilt is endemic, demands to redress racism will be strongest, regardless of how much racism actually exists".


Biological effects of radiation on the epigenome

From Wikipedia, the free encyclopedia

Ionizing radiation can cause biological effects which are passed on to offspring through the epigenome. The effects of radiation on cells have been found to depend on the radiation dose, the location of the cell within its tissue, and whether the cell is a somatic or germ-line cell. Generally, ionizing radiation appears to reduce DNA methylation in cells.

Ionizing radiation has been known to damage cellular components such as proteins, lipids, and nucleic acids, and to cause DNA double-strand breaks. Accumulation of double-strand breaks can lead to cell cycle arrest in somatic cells and cause cell death. Because it can induce cell cycle arrest, ionizing radiation is used against abnormal growths in the human body, such as cancer cells, in radiation therapy. Many cancers are treated with some type of radiotherapy; however, some cells, such as cancer stem cells, show recurrence when treated with this type of therapy.

Radiation exposure in everyday life

Non-ionizing radiation, in the form of electromagnetic fields (EMF) such as radiofrequency (RF) or power-frequency radiation, has become very common in everyday life. These exist as low-frequency radiation, which can come from wireless cellular devices or from electrical appliances, which induce extremely low frequency (ELF) radiation. Exposure to these frequencies has been reported to harm the fertility of men, by impacting the DNA of sperm and deteriorating the testes, and to increase the risk of tumor formation in salivary glands. The International Agency for Research on Cancer considers RF electromagnetic fields possibly carcinogenic to humans, although the evidence is limited.

Radiation and medical imaging

Advances in medical imaging have resulted in increased human exposure to low doses of ionizing radiation. Radiation exposure in pediatrics has been shown to have a greater impact, as children's cells are still developing. The radiation received from medical imaging is generally harmful only when delivered repeatedly over a short period of time. Safety measures have been introduced to limit exposure to harmful ionizing radiation, such as the use of protective material during imaging. Lower doses are also used to minimize the possibility of harmful effects from the imaging tools. The National Council on Radiation Protection and Measurements, along with many other scientific committees, has ruled in favor of the continued use of medical imaging, as the benefit far outweighs the minimal risk these techniques carry. If the safety protocols are not followed, there is a potential increase in the risk of developing cancer. This is primarily due to decreased methylation of cell cycle genes, such as those relating to apoptosis and DNA repair. The ionizing radiation from these techniques can cause many other detrimental effects in cells, including changes in gene expression and halting of the cell cycle. However, these outcomes are extremely unlikely if the proper protocols are followed.

Target theory

Target theory concerns the models of how radiation kills biological cells and is based around two main postulates:

  1. "Radiation is considered to be a sequence of random projectiles;
  2. the components of the cell are considered as the targets bombarded by these projectiles"

Several models have been based around the above two points. From the various proposed models three main conclusions were found:

  1. Physical hits obey a Poisson distribution
  2. Failure of radioactive particles to attack sensitive areas of cells allows for survival of the cell
  3. Cell death is an exponential function of the dose of radiation received as the number of hits received is directly proportional to the radiation dose; all hits are considered lethal
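Conclusions 1 and 3 are connected by a short step of reasoning. If the number of hits a cell receives follows a Poisson distribution with mean μ, and μ is proportional to the dose D, then the probability of surviving (receiving zero lethal hits) is

  P(0 hits) = e^(−μ), with μ ∝ D

so the surviving fraction falls off exponentially with dose, exactly as conclusion 3 states.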

Radiation exposure through ionizing radiation (IR) affects a variety of processes inside an exposed cell. IR can cause changes in gene expression, disruption of cell cycle arrest, and apoptotic cell death. The extent to which radiation affects cells depends on the type of cell and the radiation dose. Some irradiated cancer cells have been shown to exhibit altered DNA methylation patterns due to epigenetic mechanisms in the cell. In medicine, diagnostic methods such as CT scans, as well as radiation therapy, expose the individual to ionizing radiation. Irradiated cells can also induce genomic instability in neighboring unirradiated cells via the bystander effect. Radiation exposure can also occur through many channels other than ionizing radiation.

The basic ballistic models

The single-target single-hit model

In this model a single hit on a target is sufficient to kill a cell. The equation used for this model is as follows:
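The equation itself did not survive reproduction here; in standard target-theory notation, and assuming the mean number of hits is the product kmD, the survival probability is

  S(D) = e^(−kmD)

that is, the Poisson probability of receiving zero hits.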

Where k represents a hit on the cell and m represents the mass of the cell.

The n-target single-hit model

In this model the cell has a number of targets n. A single hit on one target is not sufficient to kill the cell but does disable the target. An accumulation of successful hits on various targets leads to cell death. The equation used for this model is as follows:
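A standard reconstruction, under the same assumptions as the single-target model above, is

  S(D) = 1 − (1 − e^(−kmD))ⁿ

the probability that not all n targets have been hit.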

Where n represents number of the targets in the cell.

The linear quadratic model

The equation used for this model is as follows:
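The standard form of the linear quadratic model, which the parameter descriptions below match, is

  S(D) = e^(−αD − βD²)

with the linear term dominating at low doses and the quadratic term at higher doses.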

where αD represents a hit made by a one-particle track, βD² represents a hit made by a two-particle track, and S(D) represents the probability of survival of the cell.

The three lambda model

This model improves the accuracy of the survival description for high or repeated doses.

The equation used for this model is as follows:

The linear-quadratic-cubic model

The equation used for this model is as follows:
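A common form extends the linear quadratic expression with a cubic correction term; the coefficient γ below is part of this reconstruction rather than taken from the text:

  S(D) = e^(−αD − βD² + γD³)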

Sublesions hypothesis models

The repair-misrepair model

This model shows the mean number of lesions before any repair activations in a cell.

The equation used for this model is as follows:
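Given only the parameters described below (U₀, λ and T), the simplest consistent reading is first-order self-repair of lesions over time:

  U(T) = U₀ e^(−λT)

This sketch covers only the linear self-repair component, not the full repair-misrepair formulation.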

where Uo represents the yield of initially induced lesions, with λ being the linear self-repair coefficient, and T equaling time

The lethal-potentially lethal model

This equation explores the hypothesis that a lesion becomes fatal within a given period of time if it is not repaired by repair enzymes.

The equation used for this model is as follows:

T is the radiation duration and tr is the available repair time.

The saturable repair model

This model illustrates the efficiency of the repair system decreasing as the dosage of radiation increases. This is due to the repair kinetics becoming increasingly saturated with the increase in radiation dosage.

The equation used for this model is as follows:
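A plausible reconstruction of the saturable repair kinetics, using the quantities described below, is the second-order rate law

  dn/dt = −k n(t) c(t)

in which repair slows as the pool of repair molecules c(t) is depleted at high doses.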

n(t) is the number of unrepaired lesions, c(t) is the number of repair molecules or enzymes, k is the proportionality coefficient, and T is the time available for repair.

Cellular environment and radiation hormesis

Radiation hormesis

Hormesis is the hypothesis that low levels of a disrupting stimulus can cause beneficial adaptations in an organism. Under this hypothesis, ionizing radiation stimulates repair proteins that are usually not active, and cells use this new stimulus to adapt to the stressors they are being exposed to.

Radiation-Induced Bystander Effect (RIBE)

In biology, the bystander effect is described as changes to nearby non-targeted cells in response to changes in an initially targeted cell by some disrupting agent. In the case of Radiation-Induced Bystander Effect, the stress on the cell is caused by ionizing radiation.

The bystander effect can be broken down into two categories: the long-range bystander effect and the short-range bystander effect. In the long-range bystander effect, the effects of stress are seen farther away from the initially targeted cell. In the short-range bystander effect, the effects of stress are seen in cells adjacent to the target cell.

Both low and high linear energy transfer photons have been shown to produce RIBE. Low linear energy transfer photons were reported to cause increases in mutagenesis and a reduction in the survival of cells in clonogenic assays. X-rays and gamma rays were reported to cause increases in DNA double-strand breaks, methylation, and apoptosis. Further studies are needed to reach a conclusive explanation of any epigenetic impact of the bystander effect.

Radiation and oxidative stress

Formation of ROS

Ionizing radiation produces fast-moving particles which can damage DNA and produce highly reactive free radicals known as reactive oxygen species (ROS). The production of ROS in cells irradiated by LDIR (low-dose ionizing radiation) occurs in two ways: by the radiolysis of water molecules, or by the promotion of nitric oxide synthase (NOS) activity. The resulting nitric oxide reacts with superoxide radicals, generating peroxynitrite, which is toxic to biomolecules. Cellular ROS is also produced with the help of a mechanism involving nicotinamide adenine dinucleotide phosphate (NADPH) oxidase. NADPH oxidase helps form ROS by generating a superoxide anion, transferring electrons from cytosolic NADPH across the cell membrane to extracellular molecular oxygen. This process increases the potential for leakage of electrons and free radicals from the mitochondria. Exposure to LDIR induces electron release from the mitochondria, resulting in more electrons contributing to superoxide formation in cells.

The production of ROS in high quantities results in the degradation of biomolecules such as proteins, DNA, and RNA. In one such instance, ROS are known to create double-stranded and single-stranded breaks in the DNA, which causes the DNA repair mechanisms to try to adapt to the increase in strand breaks. Heritable changes have been observed even though the DNA nucleotide sequence appears unchanged after exposure to LDIR.

Activation of NOS

The formation of ROS is coupled with increased nitric oxide synthase (NOS) activity. NO reacts with superoxide radicals, generating peroxynitrite (ONOO−), so the increase in NOS activity raises peroxynitrite production. Peroxynitrite is a strong oxidant radical and reacts with a wide array of biomolecules such as DNA bases, proteins, and lipids. By affecting the function and structure of these biomolecules, peroxynitrite effectively destabilizes the cell.

Mechanism of oxidative stress and epigenetic gene regulation

Ionizing radiation causes the cell to generate increased ROS, and the increase in these species damages biological macromolecules. To compensate, cells adapt to IR-induced oxidative effects by modifying the mechanisms of epigenetic gene regulation. There are four epigenetic modifications that can take place:

  1. formation of protein adducts inhibiting epigenetic regulation
  2. alteration of genomic DNA methylation status
  3. modification of post translational histone interactions affecting chromatin compaction
  4. modulation of signaling pathways that control transcription factor expression

ROS-mediated protein adduct formation

ROS generated by ionizing radiation chemically modify histones, which can cause a change in transcription. Oxidation of cellular lipid components results in the formation of electrophilic molecules. These electrophilic molecules bind to the lysine residues of histones, forming ketoamide adducts. Ketoamide adduct formation blocks the lysine residues of histones from binding to acetylation proteins, thus decreasing gene transcription.

ROS-mediated DNA methylation changes

DNA hypermethylation is seen in the genome at DNA breaks on a gene-specific basis, such as in the promoters of regulatory genes, while global methylation of the genome shows a hypomethylation pattern during the period of reactive oxygen species stress.

DNA damage induced by reactive oxygen species results in increased gene methylation and ultimately gene silencing. Reactive oxygen species modify the mechanism of epigenetic methylation by inducing DNA breaks which are later repaired and then methylated by DNMTs. DNA damage response genes, such as GADD45A, recruit nuclear proteins such as Np95 to direct histone methyltransferases towards the damaged DNA site. The breaks in DNA caused by ionizing radiation then recruit DNMTs, which repair and further methylate the repair site.

Genome wide hypomethylation occurs due to reactive oxygen species hydroxylating methylcytosines to 5-hydroxymethylcytosine (5hmC). The production of 5hmC serves as an epigenetic marker for DNA damage which is recognizable by DNA repair enzymes. The DNA repair enzymes attracted by the marker convert 5hmC to an unmethylated cytosine base resulting in the hypomethylation of the genome.

Another mechanism that induces hypomethylation is the depletion of S-adenosylmethionine (SAM). The prevalence of superoxide species causes the oxidation of reduced glutathione (GSH) to GSSG, and as a result synthesis of the cosubstrate SAM is halted. SAM is an essential cosubstrate for the normal functioning of DNMTs and histone methyltransferase proteins.

ROS-mediated post-translation modification

Double-stranded DNA breaks caused by exposure to ionizing radiation are known to alter chromatin structure. Double-stranded breaks are primarily repaired by poly(ADP-ribose) (PAR) polymerases, which accumulate at the site of the break, leading to activation of the chromatin remodeling protein ALC1. ALC1 causes the nucleosome to relax, resulting in epigenetic up-regulation of genes. A similar mechanism involves ataxia telangiectasia mutated (ATM), a serine/threonine kinase involved in the repair of double-stranded breaks caused by ionizing radiation. ATM phosphorylates KAP1, which causes the heterochromatin to relax, allowing increased transcription to occur.

The promoter of the DNA mismatch repair gene MSH2 has shown a hypermethylation pattern when exposed to ionizing radiation. Reactive oxygen species induce the oxidation of deoxyguanosine into 8-hydroxydeoxyguanosine (8-OHdG), causing a change in chromatin structure. Gene promoters that contain 8-OHdG deactivate their chromatin by inducing trimethylation of H3K27 in the genome. Other enzymes, such as transglutaminases (TGs), control chromatin remodeling through proteins such as sirtuin 1 (SIRT1). TGs cause transcriptional repression during reactive oxygen species stress by binding to the chromatin and inhibiting the sirtuin 1 histone deacetylase from performing its function.

ROS-mediated loss of epigenetic imprinting

Epigenetic imprinting is lost during reactive oxygen species stress, through disruption of NF-κB signaling. The enhancer-blocking element CCCTC-binding factor (CTCF) binds to the imprint control region of insulin-like growth factor 2 (IGF2), preventing the enhancers from allowing transcription of the gene. NF-κB proteins normally interact with inhibitory IκB proteins, but during oxidative stress IκB proteins are degraded in the cell. With no IκB proteins left to bind them, NF-κB proteins enter the nucleus and bind to specific response elements to counter the oxidative stress. The binding of NF-κB and the corepressor HDAC1 to response elements such as the CCCTC-binding factor causes a decrease in expression of the enhancer-blocking element. This decrease in expression hinders binding to the IGF2 imprint control region, causing loss of imprinting and biallelic IGF2 expression.

Mechanisms of epigenetic modifications

After the initial exposure to ionizing radiation, cellular changes are prevalent in the unexposed offspring of irradiated cells for many cell divisions. One way this non-Mendelian mode of inheritance can be explained is through epigenetic mechanisms.

Ionizing radiation and DNA methylation

Genomic instability via hypomethylation of LINE1

Ionizing radiation exposure affects patterns of DNA methylation. Breast cancer cells treated with fractionated doses of ionizing radiation showed DNA hypomethylation at various gene loci; dose fractionation refers to breaking one dose of radiation into separate, smaller doses. Hypomethylation of these genes correlated with decreased expression of various DNMTs and methyl-CpG-binding proteins. LINE1 transposable elements have been identified as targets of ionizing radiation. Hypomethylation of LINE1 elements results in activation of the elements and thus an increase in LINE1 protein levels. Increased transcription of LINE1 transposable elements results in greater mobilization of the LINE1 loci and therefore increases genomic instability.

Ionizing radiation and histone modification

Irradiated cells can be linked to a variety of histone modifications. Ionizing radiation in breast cancer cells inhibits H4 lysine tri-methylation. Mouse models exposed to high levels of X-ray irradiation exhibited a decrease in both the tri-methylation of H4-Lys20 and the compaction of the chromatin. With the loss of tri-methylation of H4-Lys20, DNA hypomethylation increased, resulting in DNA damage and increased genomic instability.

Loss of methylation via repair mechanisms

Breaks in DNA due to ionizing radiation can be repaired. New DNA synthesis by DNA polymerases is one of the ways radiation induced DNA damage can be repaired. However, DNA polymerases do not insert methylated bases which leads to a decrease in methylation of the newly synthesized strand. Reactive oxygen species also inhibit DNMT activity which would normally add the missing methyl groups. This increases the chance that the demethylated state of DNA will eventually become permanent.

Clinical consequences and applications

Epigenetic effects on the developing brain

Chronic exposure to these types of radiation can affect children from as early as the fetal stage. Multiple cases have been reported of hindered brain development, behavioral changes such as anxiety, and disruption of proper learning and language processing. Increases in cases of ADHD-like and autism-like behavior have been reported to correlate with exposure to EMF waves. The World Health Organization has classified RFR as a possible carcinogen for its epigenetic effects on DNA expression. Consistent 24-hour exposure to EMF waves has been shown to lower the activity of miRNA in the brain, affecting developmental and neuronal activity. This epigenetic change causes the silencing of necessary genes, along with changes in the expression of other genes integral to the normal development of the brain.

MGMT- and LINE1- specific DNA methylation

DNA methylation influences tissue responses to ionizing radiation. Modulation of methylation in the gene MGMT, or in transposable elements such as LINE1, could be used to alter tissue responses to ionizing radiation, potentially opening new areas for cancer treatment.

MGMT serves as a prognostic marker in glioblastoma. Hypermethylation of MGMT is associated with the regression of tumors. Hypermethylation of MGMT silences its transcription, which prevents repair of the DNA damage caused by alkylating agents and thereby helps those agents kill tumor cells. Studies have shown that patients who received radiotherapy, but no chemotherapy, after tumor extraction had an improved response to radiotherapy due to the methylation of the MGMT promoter.

Almost all human cancers include hypomethylation of LINE1 elements. Various studies indicate that the hypomethylation of LINE1 correlates with decreased survival after both chemotherapy and radiotherapy.

Treatment by DNMT inhibitors

DNMT inhibitors are being explored in the treatment of malignant tumors. Recent in-vitro studies show that DNMT inhibitors can increase the effects of other anti-cancer drugs, while their in-vivo effects are still being investigated. The long-term effects of the use of DNMT inhibitors are still unknown.

Radioactivity in the life sciences

From Wikipedia, the free encyclopedia
 
Radioactivity is generally used in life sciences for highly sensitive and direct measurements of biological phenomena, and for visualizing the location of biomolecules radiolabelled with a radioisotope.

All atoms exist as stable or unstable isotopes, and the latter decay at a given half-life ranging from attoseconds to billions of years; radioisotopes useful to biological and experimental systems have half-lives ranging from minutes to months. The hydrogen isotope tritium (half-life = 12.3 years) and carbon-14 (half-life = 5,730 years) derive their importance from the fact that all organic life contains hydrogen and carbon, and they can therefore be used to study countless living processes, reactions, and phenomena. Most short-lived isotopes are produced in cyclotrons, linear particle accelerators, or nuclear reactors, and their relatively short half-lives give them high maximum theoretical specific activities, which is useful for detection in biological systems.
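Since the maximum theoretical specific activity follows directly from the half-life (activity A = λN, with λ = ln 2 / t½), the per-isotope figures quoted later in this article can be reproduced in a few lines. The sketch below is illustrative rather than from the source; the helper function and constants are assumptions of this example.

    import math

    AVOGADRO = 6.02214076e23   # atoms per mole
    BQ_PER_CI = 3.7e10         # disintegrations per second in one curie

    def max_specific_activity(half_life_s):
        """Maximum theoretical specific activity in Ci/mmol, assuming
        every molecule carries exactly one radioactive atom (A = lambda * N)."""
        decay_constant = math.log(2) / half_life_s   # lambda, in 1/s
        atoms_per_mmol = AVOGADRO / 1000.0           # atoms in one millimole
        return decay_constant * atoms_per_mmol / BQ_PER_CI

    YEAR = 365.25 * 86400  # seconds
    DAY = 86400

    print(max_specific_activity(12.3 * YEAR))  # tritium: ~29 Ci/mmol
    print(max_specific_activity(14.3 * DAY))   # P-32: ~9.1e3 Ci/mmol

These outputs match the 28.8 Ci/mmol (tritium) and 9131 Ci/mmol (P-32) figures quoted below to within the rounding of the half-lives used.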

DOTA linked to the monoclonal antibody tacatuzumab and chelating yttrium-90
 
Whole-body PET scan using 18F-FDG showing intestinal tumors and non-specific accumulation in bladder

Radiolabeling is a technique used to track the passage of a molecule that incorporates a radioisotope through a reaction, metabolic pathway, cell, tissue, organism, or biological system. The reactant is 'labeled' by replacing specific atoms with their isotope. Replacing an atom with its own radioisotope is an intrinsic label that does not alter the structure of the molecule. Alternatively, molecules can be radiolabeled by chemical reactions that introduce an atom, moiety, or functional group that contains a radionuclide. For example, radio-iodination of peptides and proteins with biologically useful iodine isotopes is easily done by an oxidation reaction that introduces iodine onto tyrosine and histidine residues. Another example is the use of chelators such as DOTA that can be chemically coupled to a protein; the chelator in turn traps radiometals, thus radiolabeling the protein. This has been used to introduce yttrium-90 onto a monoclonal antibody for therapeutic purposes and gallium-68 onto the peptide octreotide for diagnostic imaging by PET.

Radiolabeling is not necessary for some applications. For some purposes, soluble ionic salts can be used directly without further modification (e.g., gallium-67, gallium-68, and radioiodine isotopes). These uses rely on the chemical and biological properties of the radioisotope itself, to localize it within the organism or biological system.

Molecular imaging is the biomedical field that employs radiotracers to visualize and quantify biological processes using positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging. Again, a key feature of using radioactivity in life science applications is that it is a quantitative technique, so PET/SPECT not only reveals where a radiolabelled molecule is but how much is there.

Radiobiology (also known as radiation biology) is a field of clinical and basic medical sciences that involves the study of the action of radioactivity on biological systems. The controlled action of deleterious radioactivity on living systems is the basis of radiation therapy.

Examples of biologically useful radionuclei

Hydrogen

Tritium (Hydrogen-3) is a very low beta energy emitter that can be used to label proteins, nucleic acids, drugs and almost any organic biomolecule. The maximum theoretical specific activity of tritium is 28.8 Ci/mmol (1.066 PBq/mol). However, there is often more than one tritium atom per molecule: for example, tritiated UTP is sold by most suppliers with carbons 5 and 6 each bonded to a tritium atom.

For tritium detection, liquid scintillation counters have been classically employed, in which the energy of a tritium decay is transferred to a scintillant molecule in solution which in turn gives off photons whose intensity and spectrum can be measured by a photomultiplier array. The efficiency of this process is 4–50%, depending on the scintillation cocktail used. The measurements are typically expressed in counts per minute (CPM) or disintegrations per minute (DPM). Alternatively, a solid-state, tritium-specific phosphor screen can be used together with a phosphorimager to measure and simultaneously image the radiotracer. Measurements/images are digital in nature and can be expressed in intensity or densitometry units within a region of interest (ROI).

Carbon

Carbon-14 has a long half-life of 5,730±40 years. Its maximum specific activity is 0.0624 Ci/mmol (2.31 TBq/mol). It is used in applications such as radiometric dating and drug tests. C-14 labeling is common in drug development for ADME (absorption, distribution, metabolism and excretion) studies in animal models and in human toxicology and clinical trials. Since tritium exchange may occur in some tritium-labeled compounds, whereas this does not happen with C-14, C-14 may be preferred.

Sodium

Sodium-22 and chlorine-36 are commonly used to study ion transporters. However, sodium-22 is hard to screen off and chlorine-36, with a half-life of 300,000 years, has low activity.

Sulfur

Sulfur-35 is used to label proteins and nucleic acids. Cysteine is an amino acid containing a thiol group which can be labeled by S-35. For nucleotides that do not contain a sulfur group, the oxygen on one of the phosphate groups can be substituted with a sulfur. This thiophosphate acts the same as a normal phosphate group, although there is a slight bias against it by most polymerases. The maximum theoretical specific activity is 1,494 Ci/mmol (55.28 PBq/mol).

Phosphorus

Phosphorus-33 is used to label nucleotides. It is less energetic than P-32 and does not require protection with plexiglass. A disadvantage is its higher cost compared to P-32, as most of the bombarded P-31 will have acquired only one neutron, while only some will have acquired two or more. Its maximum specific activity is 5,118 Ci/mmol (189.4 PBq/mol).

Phosphorus-32 is widely used for labeling nucleic acids and phosphoproteins. It has the highest emission energy (1.7 MeV) of all common research radioisotopes. This is a major advantage in experiments for which sensitivity is a primary consideration, such as titrations of very strong interactions (i.e., very low dissociation constant), footprinting experiments, and detection of low-abundance phosphorylated species. 32P is also relatively inexpensive. Because of its high energy, however, its safe use requires a number of engineering controls (e.g., acrylic glass) and administrative controls. The half-life of 32P is 14.2 days, and its maximum specific activity is 9131 Ci/mmol.

Iodine

Iodine-125 is commonly used for labeling proteins, usually at tyrosine residues. Unbound iodine is volatile and must be handled in a fume hood. Its maximum specific activity is 2,176 Ci/mmol (80.51 PBq/mol).

A good example of the difference in energy of the various radionuclei is the detection window ranges used to detect them, which are generally proportional to the energy of the emission but vary from machine to machine: in a PerkinElmer TriLux beta scintillation counter, the H-3 energy window spans channels 5–360; C-14, S-35 and P-33 fall in the window of 361–660; and P-32 falls in the window of 661–1024.

Detection

Autoradiograph of a coronal brain tissue slice, with a radiolabeled GAD67 probe. Most intense signal is seen in subventricular zone.
 
Autoradiograph of Southern blot membrane

Quantitative

In liquid scintillation counting, a small aliquot, filter or swab is added to scintillation fluid and the plate or vial is placed in a scintillation counter to measure the radioactive emissions. Manufacturers have incorporated solid scintillants into multi-well plates to eliminate the need for scintillation fluid and make this into a high-throughput technique.

A gamma counter is similar in format to scintillation counting but it detects gamma emissions directly and does not require a scintillant.

A Geiger counter gives a quick and rough approximation of activity. Lower-energy emitters such as tritium cannot be detected.

Qualitative and quantitative

Autoradiography: A tissue section affixed to a microscope slide or a membrane such as a Northern blot or a hybridized slot blot can be placed against x-ray film or phosphor screens to acquire a photographic or digital image. The density of exposure, if calibrated, can supply exacting quantitative information.

Phosphor storage screen: The slide or membrane is placed against a phosphor screen which is then scanned in a phosphorimager. This is many times faster than film/emulsion techniques and outputs data in a digital form, thus it has largely replaced film/emulsion techniques.

Microscopy

Electron microscopy: The sample is not exposed to a beam of electrons; instead, detectors pick up the electrons expelled from the radionuclei.

Micro-autoradiography: A tissue section, typically cryosectioned, is placed against a phosphor screen as above.

Quantitative whole-body autoradiography (QWBA): Operating at a larger scale than micro-autoradiography, QWBA allows whole animals, typically rodents, to be analyzed for biodistribution studies.

Scientific methods

Schild regression is an analysis used with radioligand binding assays. Radioactive end-labelling of DNA (at the 5' and 3' ends) leaves the nucleic acids intact.

Radioactivity concentration

A vial of radiolabel has a "total activity". Taking as an example γ-32P-ATP from the catalogues of the two major suppliers, Perkin Elmer NEG502H500UC or GE AA0068-500UCI, the total activity in this case is 500 μCi (other typical amounts are 250 μCi or 1 mCi). This is contained in a certain volume, which depends on the radioactive concentration, such as 5 to 10 mCi/mL (185 to 370 TBq/m3); typical volumes include 25 or 50 μL.

Not all molecules in the solution have a P-32 on the last (i.e., gamma) phosphate: the "specific activity" expresses how much of the compound is actually radioactive and depends on the radionuclide's half-life. If every molecule were labelled, the maximum theoretical specific activity would be obtained, which for P-32 is 9131 Ci/mmol. Due to pre-calibration and efficiency issues this number is never seen on a label; the values often found are 800, 3000 and 6000 Ci/mmol. With this number it is possible to calculate the total chemical concentration and the hot-to-cold ratio.
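As a worked illustration of that calculation, here is a minimal sketch using the catalogue-style numbers above; the 3000 Ci/mmol label value and 50 μL volume are assumed for the example, and the variable names are not from any supplier documentation.

    MAX_SA_P32 = 9131.0         # Ci/mmol, maximum theoretical specific activity of P-32

    total_activity_ci = 500e-6  # a 500 uCi vial, as in the example above
    label_sa = 3000.0           # Ci/mmol stated on the label (assumed)
    volume_l = 50e-6            # 50 uL supplied volume (assumed)

    # Total chemical amount, counting labelled ("hot") and unlabelled ("cold")
    # molecules together:
    total_mmol = total_activity_ci / label_sa   # about 1.7e-7 mmol
    conc_molar = total_mmol * 1e-3 / volume_l   # mol/L; about 3.3e-6 M

    # Fraction of molecules actually carrying a P-32 atom:
    hot_fraction = label_sa / MAX_SA_P32        # about 0.33

    print(f"total ATP concentration: {conc_molar * 1e6:.2f} uM")
    print(f"hot fraction: {hot_fraction:.2f}")

So this example vial would hold about 3.3 μM ATP in total, with roughly one molecule in three carrying a P-32 atom.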

"Calibration date" is the date in which the vial’s activity is the same as on the label. "Pre-calibration" is when the activity is calibrated in a future date to compensate for the decay occurred during shipping.

Comparison with fluorescence

Prior to the widespread use of fluorescence over the past three decades, radioactivity was the most common label.

The primary advantage of fluorescence over radiotracers is that it does not require radiological controls and their associated expenses and safety measures. The decay of radioisotopes may limit the shelf life of a reagent, requiring its replacement and thus increasing expenses. Several fluorescent molecules can be used simultaneously (provided their emissions do not overlap, cf. FRET), whereas with radioactivity only two isotopes can be used together (tritium and a low-energy isotope such as 33P, distinguished by their different intensities), and this requires special equipment (for example, a tritium screen plus a regular phosphor-imaging screen, or a specific dual-channel detector).

Fluorescence is not necessarily easier or more convenient to use, because fluorescence requires specialized equipment of its own and because quenching makes absolute and/or reproducible quantification difficult.

The primary disadvantage of fluorescence versus radiotracers is a significant biological problem: chemically tagging a molecule with a fluorescent dye radically changes the structure of the molecule, which in turn can radically change the way that molecule interacts with other molecules. In contrast, intrinsic radiolabeling of a molecule can be done without altering its structure in any way. For example, substituting a H-3 for a hydrogen atom or C-14 for a carbon atom does not change the conformation, structure, or any other property of the molecule, it's just switching forms of the same atom. Thus an intrinsically radiolabeled molecule is identical to its unlabeled counterpart.

Measurement of biological phenomena by radiotracers is always direct. In contrast, many life science fluorescence applications are indirect, consisting of a fluorescent dye increasing, decreasing, or shifting in wavelength emission upon binding to the molecule of interest.

Safety

If good health physics controls are maintained in a laboratory where radionuclides are used, it is unlikely that the overall radiation dose received by workers will be of much significance. Nevertheless, the effects of low doses are mostly unknown, so many regulations exist to avoid unnecessary risks, such as skin or internal exposure. Due to the low penetration power and the many variables involved, it is hard to convert a radioactive concentration to a dose. 1 μCi of P-32 on a square centimetre of skin (through a dead layer 70 μm thick) gives 7961 rads (79.61 grays) per hour. Similarly, a mammogram gives an exposure of 300 mrem (3 mSv) over a larger volume (in the US, the average annual dose is 620 mrem or 6.2 mSv).

 

Radioactive decay

Alpha decay is one type of radioactive decay, in which an atomic nucleus emits an alpha particle, and thereby transforms (or "decays") into an atom with a mass number decreased by 4 and atomic number decreased by 2.

Radioactive decay (also known as nuclear decay, radioactivity, radioactive disintegration or nuclear disintegration) is the process by which an unstable atomic nucleus loses energy by radiation. A material containing unstable nuclei is considered radioactive. Three of the most common types of decay are alpha decay, beta decay, and gamma decay, all of which involve emitting one or more particles or photons. The weak force is the mechanism that is responsible for beta decay, while the other two are governed by the usual electromagnetic and strong forces. Radioactive decay is a stochastic (i.e. random) process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay, regardless of how long the atom has existed. However, for a significant number of identical atoms, the overall decay rate can be expressed as a decay constant or as half-life. The half-lives of radioactive atoms have a huge range; from nearly instantaneous to far longer than the age of the universe.
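For a large population of identical nuclei, this statistical regularity takes a simple form: if N₀ unstable nuclei are present at time zero, the number remaining after time t is

  N(t) = N₀ e^(−λt), with half-life t½ = ln(2)/λ

where λ is the decay constant mentioned above; the half-life is the time for half of any starting population to decay.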

The decaying nucleus is called the parent radionuclide (or parent radioisotope), and the process produces at least one daughter nuclide. Except for gamma decay or internal conversion from a nuclear excited state, the decay is a nuclear transmutation resulting in a daughter containing a different number of protons or neutrons (or both). When the number of protons changes, an atom of a different chemical element is created.

  • Alpha decay occurs when the nucleus ejects an alpha particle (helium nucleus).
  • Beta decay occurs in two ways;
    • (i) beta-minus decay, when the nucleus emits an electron and an antineutrino in a process that changes a neutron to a proton.
    • (ii) beta-plus decay, when the nucleus emits a positron and a neutrino in a process that changes a proton to a neutron; this process is also known as positron emission.
  • In gamma decay a radioactive nucleus first decays by the emission of an alpha or beta particle. The daughter nucleus that results is usually left in an excited state and it can decay to a lower energy state by emitting a gamma ray photon.
  • In neutron emission, extremely neutron-rich nuclei, formed due to other types of decay or after many successive neutron captures, occasionally lose energy by way of neutron emission, resulting in a change from one isotope to another of the same element.
  • In electron capture, the nucleus captures an orbiting electron, causing a proton to convert into a neutron. A neutrino and a gamma ray are subsequently emitted.
  • In cluster decay and nuclear fission, a nucleus heavier than an alpha particle is emitted.

By contrast, there are radioactive decay processes that do not result in a nuclear transmutation. The energy of an excited nucleus may be emitted as a gamma ray in a process called gamma decay, or that energy may be lost when the nucleus interacts with an orbital electron causing its ejection from the atom, in a process called internal conversion. Another type of radioactive decay results in products that vary, appearing as two or more "fragments" of the original nucleus with a range of possible masses. This decay, called spontaneous fission, happens when a large unstable nucleus spontaneously splits into two (or occasionally three) smaller daughter nuclei, and generally leads to the emission of gamma rays, neutrons, or other particles from those products. Decay products from a nucleus with spin, by contrast, may be distributed non-isotropically with respect to that spin direction. The anisotropy may be detectable either because of an external influence such as an electromagnetic field, or because the nucleus was produced in a dynamic process that constrained the direction of its spin; such a parent process could be a previous decay or a nuclear reaction.

For a summary table showing the number of stable and radioactive nuclides in each category, see radionuclide. There are 28 naturally occurring chemical elements on Earth that are radioactive, consisting of 34 radionuclides (6 elements have 2 different radionuclides) that date before the time of formation of the Solar System. These 34 are known as primordial nuclides. Well-known examples are uranium and thorium, but also included are naturally occurring long-lived radioisotopes, such as potassium-40.

Another 50 or so shorter-lived radionuclides, such as radium-226 and radon-222, found on Earth, are the products of decay chains that began with the primordial nuclides, or are the product of ongoing cosmogenic processes, such as the production of carbon-14 from nitrogen-14 in the atmosphere by cosmic rays. Radionuclides may also be produced artificially in particle accelerators or nuclear reactors, resulting in 650 of these with half-lives of over an hour, and several thousand more with even shorter half-lives.

History of discovery

Pierre and Marie Curie in their Paris laboratory, before 1907

Radioactivity was discovered in 1896 by the French scientist Henri Becquerel, while working with phosphorescent materials. These materials glow in the dark after exposure to light, and he suspected that the glow produced in cathode ray tubes by X-rays might be associated with phosphorescence. He wrapped a photographic plate in black paper and placed various phosphorescent salts on it. All results were negative until he used uranium salts. The uranium salts caused a blackening of the plate in spite of the plate being wrapped in black paper. These radiations were given the name "Becquerel Rays".

It soon became clear that the blackening of the plate had nothing to do with phosphorescence, as the blackening was also produced by non-phosphorescent salts of uranium and by metallic uranium. It became clear from these experiments that there was a form of invisible radiation that could pass through paper and was causing the plate to react as if exposed to light.

At first, it seemed as though the new radiation was similar to the then recently discovered X-rays. Further research by Becquerel, Ernest Rutherford, Paul Villard, Pierre Curie, Marie Curie, and others showed that this form of radioactivity was significantly more complicated. Rutherford was the first to realize that all such elements decay in accordance with the same mathematical exponential formula. Rutherford and his student Frederick Soddy were the first to realize that many decay processes resulted in the transmutation of one element to another. Subsequently, the radioactive displacement law of Fajans and Soddy was formulated to describe the products of alpha and beta decay.

The early researchers also discovered that many other chemical elements, besides uranium, have radioactive isotopes. A systematic search for the total radioactivity in uranium ores also guided Pierre and Marie Curie to isolate two new elements: polonium and radium. Except for the radioactivity of radium, the chemical similarity of radium to barium made these two elements difficult to distinguish.

Marie and Pierre Curie's study of radioactivity is an important factor in science and medicine. After their research on Becquerel's rays led them to the discovery of both radium and polonium, they coined the term "radioactivity" to define the emission of ionizing radiation by some heavy elements. (Later the term was generalized to all elements.) Their research on the penetrating rays in uranium and the discovery of radium launched an era of using radium for the treatment of cancer. Their exploration of radium could be seen as the first peaceful use of nuclear energy and the start of modern nuclear medicine.

Early health dangers

Taking an X-ray image with early Crookes tube apparatus in 1896. The Crookes tube is visible in the centre. The standing man is viewing his hand with a fluoroscope screen; this was a common way of setting up the tube. No precautions against radiation exposure are being taken; its hazards were not known at the time.

The dangers of ionizing radiation due to radioactivity and X-rays were not immediately recognized.

X-rays

The discovery of X‑rays by Wilhelm Röntgen in 1895 led to widespread experimentation by scientists, physicians, and inventors. Many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February of that year, Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, of his suffering severe hand and chest burns in an X-ray demonstration, was the first of many other reports in Electrical Review.

Other experimenters, including Elihu Thomson and Nikola Tesla, also reported burns. Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering. Other effects, including ultraviolet rays and ozone, were sometimes blamed for the damage, and many physicians still claimed that there were no effects from X-ray exposure at all.

Despite this, there were some early systematic hazard investigations, and as early as 1902 William Herbert Rollins wrote almost despairingly that his warnings about the dangers involved in the careless use of X-rays were not being heeded, either by industry or by his colleagues. By this time, Rollins had proved that X-rays could kill experimental animals, could cause a pregnant guinea pig to abort, and that they could kill a foetus. He also stressed that "animals vary in susceptibility to the external action of X-light" and warned that these differences be considered when patients were treated by means of X-rays.

Radioactive substances

Radioactivity is characteristic of elements with large atomic numbers. Elements with at least one stable isotope are shown in light blue. Green shows elements of which the most stable isotope has a half-life measured in millions of years. Yellow and orange are progressively less stable, with half-lives in thousands or hundreds of years, down toward one day. Red and purple show highly and extremely radioactive elements where the most stable isotopes exhibit half-lives measured on the order of one day and much less.

However, the biological effects of radiation due to radioactive substances were less easy to gauge. This gave the opportunity for many physicians and corporations to market radioactive substances as patent medicines. Examples were radium enema treatments, and radium-containing waters to be drunk as tonics. Marie Curie protested against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died from aplastic anaemia, likely caused by exposure to ionizing radiation. By the 1930s, after a number of cases of bone necrosis and death of radium treatment enthusiasts, radium-containing medicinal products had been largely removed from the market (radioactive quackery).

Radiation protection

Only a year after Röntgen's discovery of X-rays, the American engineer Wolfram Fuchs (1896) gave what is probably the first protection advice, but it was not until 1925 that the first International Congress of Radiology (ICR) was held and considered establishing international protection standards. The effects of radiation on genes, including the effect on cancer risk, were recognized much later. In 1927, Hermann Joseph Muller published research showing genetic effects and, in 1946, was awarded the Nobel Prize in Physiology or Medicine for his findings.

The second ICR was held in Stockholm in 1928 and proposed the adoption of the röntgen unit, and the International X-ray and Radium Protection Committee (IXRPC) was formed. Rolf Sievert was named Chairman, but a driving force was George Kaye of the British National Physical Laboratory. The committee met in 1931, 1934 and 1937.

After World War II, the increased range and quantity of radioactive substances being handled as a result of military and civil nuclear programs led to large groups of occupational workers and the public being potentially exposed to harmful levels of ionising radiation. This was considered at the first post-war ICR convened in London in 1950, when the present International Commission on Radiological Protection (ICRP) was born. Since then the ICRP has developed the present international system of radiation protection, covering all aspects of radiation hazard.

Units

Graphic showing relationships between radioactivity and detected ionizing radiation

The International System of Units (SI) unit of radioactive activity is the becquerel (Bq), named in honor of the scientist Henri Becquerel. One Bq is defined as one transformation (or decay or disintegration) per second.

An older unit of radioactivity is the curie, Ci, which was originally defined as "the quantity or mass of radium emanation in equilibrium with one gram of radium (element)". Today, the curie is defined as 3.7×1010 disintegrations per second, so that 1 curie (Ci) = 3.7×1010 Bq. For radiological protection purposes, although the United States Nuclear Regulatory Commission permits the use of the curie alongside SI units, the European Union's units of measurement directives required that its use for "public health ... purposes" be phased out by 31 December 1985.

The effects of ionizing radiation are often measured in units of gray for absorbed dose and sievert for equivalent dose (damage to tissue).
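The two units are related by a weighting factor for radiation type: the equivalent dose (sieverts) is obtained from the absorbed dose (grays) as

  H = wR × D

where the ICRP radiation weighting factor wR is 1 for beta and gamma radiation and 20 for alpha particles.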

Types

Alpha particles may be completely stopped by a sheet of paper, beta particles by aluminium shielding. Gamma rays can only be reduced by much more substantial mass, such as a very thick layer of lead.
 
137Cs decay scheme showing half-lives, daughter nuclides, and types and proportion of radiation emitted.

Early researchers found that an electric or magnetic field could split radioactive emissions into three types of beams. The rays were given the names alpha, beta, and gamma, in increasing order of their ability to penetrate matter. Alpha decay is observed only in heavier elements of atomic number 52 (tellurium) and greater, with the exception of beryllium-8 (which decays to two alpha particles). The other two types of decay are observed in all the elements. Lead, atomic number 82, is the heaviest element to have any isotopes stable (to the limit of measurement) to radioactive decay. Radioactive decay is seen in all isotopes of all elements of atomic number 83 (bismuth) or greater. Bismuth-209, however, is only very slightly radioactive, with a half-life greater than the age of the universe; radioisotopes with extremely long half-lives are considered effectively stable for practical purposes.

Transition diagram for decay modes of a radionuclide, with neutron number N and atomic number Z (shown are α, β±, p+, and n0 emissions, EC denotes electron capture).
 
Types of radioactive decay related to neutron and proton numbers

In analysing the nature of the decay products, it was obvious from the direction of the electromagnetic forces applied to the radiations by external magnetic and electric fields that alpha particles carried a positive charge, beta particles carried a negative charge, and gamma rays were neutral. From the magnitude of deflection, it was clear that alpha particles were much more massive than beta particles. Passing alpha particles through a very thin glass window and trapping them in a discharge tube allowed researchers to study the emission spectrum of the captured particles, and ultimately proved that alpha particles are helium nuclei. Other experiments showed beta radiation, resulting from decay and cathode rays, were high-speed electrons. Likewise, gamma radiation and X-rays were found to be high-energy electromagnetic radiation.

The relationship between the types of decays also began to be examined: For example, gamma decay was almost always found to be associated with other types of decay, and occurred at about the same time, or afterwards. Gamma decay as a separate phenomenon, with its own half-life (now termed isomeric transition), was found in natural radioactivity to be a result of the gamma decay of excited metastable nuclear isomers, which were in turn created from other types of decay.

Although alpha, beta, and gamma radiations were most commonly found, other types of emission were eventually discovered. Shortly after the discovery of the positron in cosmic ray products, it was realized that the same process that operates in classical beta decay can also produce positrons (positron emission), along with neutrinos (classical beta decay produces antineutrinos). In a more common analogous process, called electron capture, some proton-rich nuclides were found to capture their own atomic electrons instead of emitting positrons, and subsequently, these nuclides emit only a neutrino and a gamma ray from the excited nucleus (and often also Auger electrons and characteristic X-rays, as a result of the re-ordering of electrons to fill the place of the missing captured electron). These types of decay involve the nuclear capture of electrons or emission of electrons or positrons, and thus act to move a nucleus toward the ratio of neutrons to protons that has the least energy for a given total number of nucleons. This consequently produces a more stable (lower energy) nucleus.

(A theoretical process of positron capture, analogous to electron capture, is possible in antimatter atoms, but has not been observed, as complex antimatter atoms beyond antihelium are not experimentally available. Such a decay would require antimatter atoms at least as complex as beryllium-7, which is the lightest known isotope of normal matter to undergo decay by electron capture.)

Shortly after the discovery of the neutron in 1932, Enrico Fermi realized that certain rare beta-decay reactions immediately yield neutrons as a decay particle (neutron emission). Isolated proton emission was eventually observed in some elements. It was also found that some heavy elements may undergo spontaneous fission into products that vary in composition. In a phenomenon called cluster decay, specific combinations of neutrons and protons other than alpha particles (helium nuclei) were found to be spontaneously emitted from atoms.

Other types of radioactive decay were found to emit previously seen particles but via different mechanisms. An example is internal conversion, which results in an initial electron emission, and then often further characteristic X-rays and Auger electrons emissions, although the internal conversion process involves neither beta nor gamma decay. A neutrino is not emitted, and none of the electron(s) and photon(s) emitted originate in the nucleus, even though the energy to emit all of them does originate there. Internal conversion decay, like isomeric transition gamma decay and neutron emission, involves the release of energy by an excited nuclide, without the transmutation of one element into another.

Rare events that involve a combination of two beta-decay-type events happening simultaneously are known (see below). Any decay process that does not violate the conservation of energy or momentum laws (and perhaps other particle conservation laws) is permitted to happen, although not all have been detected. An interesting example, discussed in a final section, is the bound-state beta decay of rhenium-187. In this process, beta electron decay of the parent nuclide is not accompanied by beta electron emission, because the beta particle has been captured into the K-shell of the emitting atom. An antineutrino is emitted, as in all negative beta decays.

Radionuclides can undergo a number of different reactions. The principal ones are summarized in the following table. A nucleus with mass number A and atomic number Z is represented as (A, Z). The column "Daughter nucleus" indicates the difference between the new nucleus and the original nucleus. Thus, (A − 1, Z) means that the mass number is one less than before, but the atomic number is the same as before.

  Mode of decay          Participating particles                                            Daughter nucleus
  Alpha decay            an alpha particle (A = 4, Z = 2) emitted from the nucleus          (A − 4, Z − 2)
  Proton emission        a proton ejected from the nucleus                                  (A − 1, Z − 1)
  Neutron emission       a neutron ejected from the nucleus                                 (A − 1, Z)
  Spontaneous fission    the nucleus disintegrates into two or more smaller nuclei          (varies)
  Beta-minus decay       the nucleus emits an electron and an electron antineutrino         (A, Z + 1)
  Positron emission      the nucleus emits a positron and an electron neutrino              (A, Z − 1)
  Electron capture       the nucleus captures an orbiting electron and emits a neutrino     (A, Z − 1)
  Isomeric transition    the excited nucleus releases a high-energy photon (gamma ray)      (A, Z)
  Internal conversion    the excited nucleus transfers energy to an ejected orbital electron (A, Z)

If energy circumstances are favorable, a given radionuclide may undergo many competing types of decay, with some atoms decaying by one route, and others decaying by another. An example is copper-64, which has 29 protons and 35 neutrons and decays with a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay to the other particle, which has opposite isospin. This particular nuclide (though not all nuclides in this situation) is almost equally likely to decay through positron emission (18%) or electron capture (43%) as through electron emission (39%). The excited energy states resulting from these decays that fail to end in a ground energy state also produce later internal conversion and gamma decay almost 0.5% of the time.
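A brief sketch of how such branching translates into partial half-lives: each branch's partial decay constant is its branching fraction times the total decay constant, so its partial half-life is the total half-life divided by the fraction. The figures below are the approximate copper-64 values quoted above; the code is illustrative only.

```python
import math

# Partial half-lives for the copper-64 branching described above.
T_HALF_CU64_H = 12.7  # total half-life in hours, as quoted in the text
branches = {"electron emission": 0.39, "electron capture": 0.43, "positron emission": 0.18}

lam_total = math.log(2) / T_HALF_CU64_H  # total decay constant, per hour
for mode, fraction in branches.items():
    # partial half-life = ln(2) / (fraction * lam_total) = total half-life / fraction
    print(f"{mode:18s} partial half-life: {T_HALF_CU64_H / fraction:6.1f} h")
```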

More common in heavy nuclides is competition between alpha and beta decay. The daughter nuclides will then normally decay through beta or alpha, respectively, to end up in the same place.

Radioactive decay results in a reduction of summed rest mass, once the released energy (the disintegration energy) has escaped in some way. Although decay energy is sometimes defined as associated with the difference between the mass of the parent nuclide and the mass of the decay products, this is true only of rest mass measurements, where some energy has been removed from the product system. This is true because the decay energy must always carry mass with it, wherever it appears, according to the formula E = mc^2. The decay energy is initially released as the energy of emitted photons plus the kinetic energy of massive emitted particles (that is, particles that have rest mass). If these particles come to thermal equilibrium with their surroundings and photons are absorbed, then the decay energy is transformed to thermal energy, which retains its mass.

Decay energy, therefore, remains associated with a certain measure of the mass of the decay system, called invariant mass, which does not change during the decay, even though the energy of decay is distributed among decay particles. The energy of photons, the kinetic energy of emitted particles, and, later, the thermal energy of the surrounding matter, all contribute to the invariant mass of the system. Thus, while the sum of the rest masses of the particles is not conserved in radioactive decay, the system mass and system invariant mass (and also the system total energy) is conserved throughout any decay process. This is a restatement of the equivalent laws of conservation of energy and conservation of mass.

Rates

The decay rate, or activity, of a radioactive substance is characterized by:

Constant quantities:

  • The half-life, t1/2, is the time taken for the activity of a given amount of a radioactive substance to decay to half of its initial value.
  • The decay constant, λ ("lambda"), is the reciprocal of the mean lifetime (in s−1), sometimes referred to simply as the decay rate.
  • The mean lifetime, τ ("tau"), is the average lifetime (1/e life) of a radioactive particle before decay.

Although these are constants, they are associated with the statistical behavior of populations of atoms. In consequence, predictions using these constants are less accurate for minuscule samples of atoms.

In principle a half-life, a third-life, or even a (1/√2)-life can be used in exactly the same way as half-life; but the mean life τ and half-life t1/2 have been adopted as standard times associated with exponential decay.

Time-variable quantities:

  • Total activity, A, is the number of decays per unit time of a radioactive sample.
  • Number of particles, N, is the total number of particles in the sample.
  • Specific activity, SA, is the number of decays per unit time per amount of substance of the sample at time set to zero (t = 0). "Amount of substance" can be the mass, volume or moles of the initial sample.

These are related as follows:

    t_{1/2} = \frac{\ln 2}{\lambda} = \tau \ln 2

    A = -\frac{dN}{dt} = \lambda N

    S_A \, a_0 = \left. -\frac{dN}{dt} \right|_{t=0} = \lambda N_0

where N0 is the initial amount of active substance (substance that has the same percentage of unstable particles as when the substance was formed) and a0 is the initial amount of the sample in the chosen unit of mass, volume or moles.

Mathematics

Universal law

The mathematics of radioactive decay depend on a key assumption that a nucleus of a radionuclide has no "memory" or way of translating its history into its present behavior. A nucleus does not "age" with the passage of time. Thus, the probability of its breaking down does not increase with time but stays constant, no matter how long the nucleus has existed. This constant probability may differ greatly between one type of nucleus and another, leading to the many different observed decay rates. However, whatever the probability is, it does not change over time. This is in marked contrast to complex objects that do show aging, such as automobiles and humans. These aging systems do have a chance of breakdown per unit of time that increases from the moment they begin their existence.

Aggregate processes, like the radioactive decay of a lump of atoms, for which the single event probability of realization is very small but in which the number of time-slices is so large that there is nevertheless a reasonable rate of events, are modelled by the Poisson distribution, which is discrete. Radioactive decay and nuclear particle reactions are two examples of such aggregate processes. The mathematics of Poisson processes reduce to the law of exponential decay, which describes the statistical behaviour of a large number of nuclei, rather than one individual nucleus. In the following formalism, the number of nuclei or the nuclei population N, is of course a discrete variable (a natural number)—but for any physical sample N is so large that it can be treated as a continuous variable. Differential calculus is used to model the behaviour of nuclear decay.
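As a concrete illustration of this statistical picture, here is a minimal Monte Carlo sketch: each atom has the same fixed decay probability per small time step, with no memory, yet the surviving population tracks the exponential law. The decay constant, step size, and sample size are arbitrary illustrative values.

```python
import math
import random

random.seed(1)
lam = 0.1    # decay constant (per time unit); illustrative value
dt = 0.1     # time step, chosen so that lam*dt << 1
n0 = 5_000   # initial number of atoms

n, t = n0, 0.0
while t < 10.0:  # simulate roughly one mean lifetime (tau = 1/lam = 10)
    # each surviving atom decays this step with probability lam*dt
    n -= sum(1 for _ in range(n) if random.random() < lam * dt)
    t += dt

print("simulated survivors:", n)
print("exponential law:    ", round(n0 * math.exp(-lam * t)))  # ~n0/e
```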

One-decay process

Consider the case of a nuclide A that decays into another B by some process A → B (emission of other particles, like electron neutrinos νe and electrons e− as in beta decay, is irrelevant in what follows). The decay of an unstable nucleus is entirely random in time, so it is impossible to predict when a particular atom will decay. However, it is equally likely to decay at any instant in time. Therefore, given a sample of a particular radioisotope, the number of decay events −dN expected to occur in a small interval of time dt is proportional to the number of atoms present N, that is

    -\frac{dN}{dt} \propto N

Particular radionuclides decay at different rates, so each has its own decay constant λ. The expected decay −dN/N is proportional to an increment of time, dt:

    -\frac{dN}{N} = \lambda \, dt

The negative sign indicates that N decreases as time increases, as the decay events follow one after another. The solution to this first-order differential equation is the function:

    N(t) = N_0 e^{-\lambda t}

where N0 is the value of N at time t = 0 and λ is the decay constant.

We have for all time t:

    N_A + N_B = N_{\text{total}} = N_{A0}

where Ntotal is the constant number of particles throughout the decay process, which is equal to the initial number of A nuclides since A is the initial substance.

If the number of non-decayed A nuclei is:

    N_A = N_{A0} e^{-\lambda t}

then the number of nuclei of B, i.e. the number of decayed A nuclei, is

    N_B = N_{A0} - N_A = N_{A0} \left(1 - e^{-\lambda t}\right)

The number of decays observed over a given interval obeys Poisson statistics. If the average number of decays is ⟨N⟩, the probability of a given number of decays N is

    P(N) = \frac{\langle N \rangle^N \exp(-\langle N \rangle)}{N!}
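For instance, the probability of seeing N counts in an interval where three decays are expected on average can be evaluated directly from this formula; a short sketch (the mean of 3 is an illustrative value):

```python
import math

# Poisson probability as given above: P(N) = <N>**N * exp(-<N>) / N!
def poisson_probability(n_observed: int, mean_n: float) -> float:
    return mean_n ** n_observed * math.exp(-mean_n) / math.factorial(n_observed)

for n in range(7):
    print(n, round(poisson_probability(n, 3.0), 4))  # peaks at N = 2 and 3
```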

Chain-decay processes

Chain of two decays

Now consider the case of a chain of two decays: one nuclide A decaying into another B by one process, then B decaying into another C by a second process, i.e. A → B → C. The previous equation cannot be applied to the decay chain, but can be generalized as follows. Since A decays into B, then B decays into C, the activity of A adds to the total number of B nuclides in the present sample, before those B nuclides decay and reduce the number of nuclides leading to the later sample. In other words, the number of second generation nuclei B increases as a result of the decay of first generation nuclei A, and decreases as a result of its own decay into the third generation nuclei C. The sum of these two terms gives the law for a decay chain for two nuclides:

    \frac{dN_B}{dt} = -\lambda_B N_B + \lambda_A N_A

The rate of change of NB, that is dNB/dt, is related to the changes in the amounts of A and B; NB can increase as B is produced from A and decrease as B produces C.

Re-writing using the previous results:

    \frac{dN_B}{dt} = -\lambda_B N_B + \lambda_A N_{A0} e^{-\lambda_A t}

The subscripts simply refer to the respective nuclides, i.e. NA is the number of nuclides of type A; NA0 is the initial number of nuclides of type A; λA is the decay constant for A, and similarly for nuclide B. Solving this equation for NB gives:

    N_B = \frac{\lambda_A}{\lambda_B - \lambda_A} N_{A0} \left(e^{-\lambda_A t} - e^{-\lambda_B t}\right)

In the case where B is a stable nuclide (λB = 0), this equation reduces to the previous solution:

    \lim_{\lambda_B \to 0} \frac{\lambda_A}{\lambda_B - \lambda_A} N_{A0} \left(e^{-\lambda_A t} - e^{-\lambda_B t}\right) = N_{A0} \left(1 - e^{-\lambda_A t}\right)

as shown above for one decay. The solution can be found by the integration factor method, where the integrating factor is e^{λB t}. This case is perhaps the most useful since it can derive both the one-decay equation (above) and the equation for multi-decay chains (below) more directly.
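A short numerical sketch of the two-member chain solution above, assuming no B is present at t = 0 and using arbitrary illustrative decay constants (no particular nuclides intended):

```python
import math

# Closed-form A -> B -> C chain as derived above, with N_B(0) = 0.
# This form requires lam_a != lam_b.
def n_a(t: float, n_a0: float, lam_a: float) -> float:
    return n_a0 * math.exp(-lam_a * t)

def n_b(t: float, n_a0: float, lam_a: float, lam_b: float) -> float:
    return (lam_a / (lam_b - lam_a)) * n_a0 * (
        math.exp(-lam_a * t) - math.exp(-lam_b * t)
    )

n_a0, lam_a, lam_b = 1.0e6, 0.02, 0.30  # illustrative values
for t in (0.0, 5.0, 10.0, 50.0):
    print(t, round(n_a(t, n_a0, lam_a)), round(n_b(t, n_a0, lam_a, lam_b)))
```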

Chain of any number of decays

For the general case of any number of consecutive decays in a decay chain, i.e. A1 → A2 ··· → Ai ··· → AD, where D is the number of decays and i is a dummy index (i = 1, 2, 3, ...D), each nuclide population can be found in terms of the previous population. In this case N2 = 0, N3 = 0, ..., ND = 0 initially. Using the above result in a recursive form:

    \frac{dN_j}{dt} = -\lambda_j N_j + \lambda_{j-1} N_{j-1}

The general solution to the recursive problem is given by Bateman's equations:

    N_D(t) = \frac{N_1(0)}{\lambda_D} \sum_{i=1}^{D} \lambda_i c_i e^{-\lambda_i t}, \qquad c_i = \prod_{j=1,\ j \neq i}^{D} \frac{\lambda_j}{\lambda_j - \lambda_i}
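The Bateman sum translates directly into code. The sketch below (written in an equivalent product form, and assuming all decay constants are distinct, as the closed form requires) computes the population of the last chain member; for a two-member chain it reproduces the solution given earlier.

```python
import math

# Bateman's equations for the chain A1 -> A2 -> ... -> AD, starting from
# n1_0 atoms of A1 and none of the rest. Requires distinct decay constants.
def bateman_last_member(n1_0: float, lams: list[float], t: float) -> float:
    prefactor = n1_0
    for lam in lams[:-1]:          # product of lambda_1 .. lambda_{D-1}
        prefactor *= lam
    total = 0.0
    for i, lam_i in enumerate(lams):
        denom = 1.0
        for j, lam_j in enumerate(lams):
            if j != i:             # product of (lambda_j - lambda_i), j != i
                denom *= lam_j - lam_i
        total += math.exp(-lam_i * t) / denom
    return prefactor * total

# Matches the two-member chain value computed in the previous sketch:
print(round(bateman_last_member(1.0e6, [0.02, 0.30], 10.0)))  # ~54920
```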

Alternative modes

In all of the above examples, the initial nuclide decays into just one product. Consider the case of one initial nuclide that can decay into either of two products, that is A → B and A → C in parallel. For example, in a sample of potassium-40, 89.3% of the nuclei decay to calcium-40 and 10.7% to argon-40. We have for all time t:

    N = N_A + N_B + N_C

which is constant, since the total number of nuclides remains constant. Differentiating with respect to time:

    \frac{dN_A}{dt} = -\left(\lambda_B + \lambda_C\right) N_A

defining the total decay constant λ in terms of the sum of the partial decay constants λB and λC:

    \lambda = \lambda_B + \lambda_C

Solving this equation for NA:

    N_A = N_{A0} e^{-\lambda t}

where NA0 is the initial number of nuclide A. When measuring the production of one nuclide, one can only observe the total decay constant λ. The decay constants λB and λC determine the probability for the decay to result in products B or C as follows:

    N_B = \frac{\lambda_B}{\lambda} N_{A0} \left(1 - e^{-\lambda t}\right), \qquad N_C = \frac{\lambda_C}{\lambda} N_{A0} \left(1 - e^{-\lambda t}\right)

because the fraction λB/λ of nuclei decay into B while the fraction λC/λ of nuclei decay into C.
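Applying this to the potassium-40 example above: the branching fractions fix λB and λC once the total decay constant is known. The branching figures come from the text; the half-life used below (about 1.248 × 10^9 years) is an assumed literature value, not given here.

```python
import math

HALF_LIFE_K40_YEARS = 1.248e9             # assumed literature value
lam = math.log(2) / HALF_LIFE_K40_YEARS   # total decay constant, per year

lam_b = 0.893 * lam   # partial constant for the branch to calcium-40
lam_c = 0.107 * lam   # partial constant for the branch to argon-40

print(f"lambda       = {lam:.3e} /yr")
print(f"lambda_B(Ca) = {lam_b:.3e} /yr")
print(f"lambda_C(Ar) = {lam_c:.3e} /yr")
# lam_b/lam and lam_c/lam recover the branching fractions 0.893 and 0.107.
```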

Corollaries of laws

The above equations can also be written using quantities related to the number of nuclide particles N in a sample:

    A = \lambda N \ \text{(activity)}, \qquad n = \frac{N}{L} \ \text{(amount of substance)}, \qquad m = \frac{M N}{L} \ \text{(mass)}

where L = 6.02214076×10^23 mol−1 is the Avogadro constant, M is the molar mass of the substance in kg/mol, and the amount of the substance n is in moles.
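These relations let one estimate activity from a mass. As a sketch, the activity of one gram of radium-226 (its half-life of about 1600 years and molar mass of about 0.226 kg/mol are assumed literature values) should come out near 3.7 × 10^10 Bq, matching the historical definition of the curie given earlier:

```python
import math

AVOGADRO = 6.02214076e23   # mol^-1
SECONDS_PER_YEAR = 3.156e7

half_life_s = 1600 * SECONDS_PER_YEAR      # Ra-226, assumed value
lam = math.log(2) / half_life_s            # decay constant, s^-1

m = 0.001                                  # 1 gram, in kg
M = 0.226                                  # kg/mol, assumed value
n_atoms = (m / M) * AVOGADRO               # N = (m / M) * L

activity = lam * n_atoms                   # A = lambda * N
print(f"{activity:.2e} Bq")                # ~3.7e10 Bq, i.e. about 1 curie
```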

Decay timing: definitions and relations

Time constant and mean-life

For the one-decay solution A → B:

    N = N_0 e^{-\lambda t} = N_0 e^{-t/\tau}

the equation indicates that the decay constant λ has units of t^−1, and can thus also be represented as 1/τ, where τ is a characteristic time of the process called the time constant.

In a radioactive decay process, this time constant is also the mean lifetime for decaying atoms. Each atom "lives" for a finite amount of time before it decays, and it may be shown that this mean lifetime is the arithmetic mean of all the atoms' lifetimes, and that it is τ, which again is related to the decay constant as follows:

    \tau = \frac{1}{\lambda}

This form is also true for two simultaneous decay processes A → B + C; inserting the equivalent value of the decay constant (as given above)

    \lambda = \lambda_B + \lambda_C

into the decay solution leads to:

    \frac{1}{\tau} = \lambda = \lambda_B + \lambda_C = \frac{1}{\tau_B} + \frac{1}{\tau_C}

Simulation of many identical atoms undergoing radioactive decay, starting with either 4 atoms (left) or 400 (right). The number at the top indicates how many half-lives have elapsed.

Half-life

A more commonly used parameter is the half-life T1/2. Given a sample of a particular radionuclide, the half-life is the time taken for half the radionuclide's atoms to decay. For the case of one-decay nuclear reactions:

    N = N_0 e^{-\lambda t} = N_0 e^{-t/\tau}

the half-life is related to the decay constant as follows: set N = N0/2 and t = T1/2 to obtain

    t_{1/2} = \frac{\ln 2}{\lambda} = \tau \ln 2

This relationship between the half-life and the decay constant shows that highly radioactive substances are quickly spent, while those that radiate weakly endure longer. Half-lives of known radionuclides vary widely, from more than 10^24 years for the very nearly stable nuclide 128Te, to 2.3 × 10^−23 seconds for highly unstable nuclides such as 7H.

The factor of ln(2) in the above relations results from the fact that the concept of "half-life" is merely a way of selecting a different base other than the natural base e for the lifetime expression. The time constant τ is the e^−1-life, the time until only 1/e remains, about 36.8%, rather than the 50% remaining after a half-life. Thus, τ is longer than t1/2. The following equation can be shown to be valid:

    N(t) = N_0 e^{-t/\tau} = N_0 \, 2^{-t/t_{1/2}}

Since radioactive decay is exponential with a constant probability, each process could as easily be described with a different constant time period that (for example) gave its "(1/3)-life" (how long until only 1/3 is left) or "(1/10)-life" (a time period until only 10% is left), and so on. Thus, the choice of τ and t1/2 as marker times is only a matter of convenience and convention. They reflect a fundamental principle only insofar as they show that the same proportion of a given radioactive substance will decay during any time period one chooses.

Mathematically, the nth life for the above situation would be found in the same way as above: by setting N = N0/n and t = T1/n, and substituting into the decay solution to obtain

    t_{1/n} = \frac{\ln n}{\lambda} = \tau \ln n

Example for carbon-14

Carbon-14 has a half-life of 5,730 years and a decay rate of 14 disintegrations per minute (dpm) per gram of natural carbon.

If an artifact is found to have radioactivity of 4 dpm per gram of its present carbon, we can find the approximate age of the object using the above equation:

    N = N_0 e^{-t/\tau}

where:

    \frac{N}{N_0} = \frac{4}{14} \approx 0.286,

    \tau = \frac{t_{1/2}}{\ln 2} \approx 8267 \text{ years},

    t = -\tau \ln \frac{N}{N_0} \approx 10356 \text{ years}.
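The same arithmetic in a few lines of Python, as a check on the figures above:

```python
import math

# Activity is proportional to N, so a measured 4 dpm/g against the
# 14 dpm/g of living matter gives N/N0 = 4/14, and t = -tau * ln(N/N0).
HALF_LIFE_C14_YEARS = 5730
tau = HALF_LIFE_C14_YEARS / math.log(2)   # mean lifetime, ~8267 years

ratio = 4.0 / 14.0
age = -tau * math.log(ratio)
print(f"tau = {tau:.0f} years, age = {age:.0f} years")  # ~10356 years
```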

Changing rates

The radioactive decay modes of electron capture and internal conversion are known to be slightly sensitive to chemical and environmental effects that change the electronic structure of the atom, which in turn affects the presence of 1s and 2s electrons that participate in the decay process. A small number of mostly light nuclides are affected. For example, chemical bonds can affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. In 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments. This relatively large effect is because beryllium is a small atom whose valence electrons are in 2s atomic orbitals, which are subject to electron capture in 7Be because (like all s atomic orbitals in all atoms) they naturally penetrate into the nucleus.

In 1992, Jung et al. of the Darmstadt Heavy-Ion Research group observed an accelerated β− decay of 163Dy66+. Although neutral 163Dy is a stable isotope, the fully ionized 163Dy66+ undergoes β− decay into the K and L shells to 163Ho66+ with a half-life of 47 days.

Rhenium-187 is another spectacular example. 187Re normally beta decays to 187Os with a half-life of 41.6 × 109 years, but studies using fully ionised 187Re atoms (bare nuclei) have found that this can decrease to only 32.9 years. This is attributed to "bound-state β decay" of the fully ionised atom – the electron is emitted into the "K-shell" (1s atomic orbital), which cannot occur for neutral atoms in which all low-lying bound states are occupied.

Example of diurnal and seasonal variations in gamma ray detector response.

A number of experiments have found that decay rates of other modes of artificial and naturally occurring radioisotopes are, to a high degree of precision, unaffected by external conditions such as temperature, pressure, the chemical environment, and electric, magnetic, or gravitational fields. Comparison of laboratory experiments over the last century, studies of the Oklo natural nuclear reactor (which exemplified the effects of thermal neutrons on nuclear decay), and astrophysical observations of the luminosity decays of distant supernovae (which occurred far away so the light has taken a great deal of time to reach us), for example, strongly indicate that unperturbed decay rates have been constant (at least to within the limitations of small experimental errors) as a function of time as well.

Recent results suggest the possibility that decay rates might have a weak dependence on environmental factors. It has been suggested that measurements of decay rates of silicon-32, manganese-54, and radium-226 exhibit small seasonal variations (of the order of 0.1%). However, such measurements are highly susceptible to systematic errors, and a subsequent paper has found no evidence for such correlations in seven other isotopes (22Na, 44Ti, 108Ag, 121Sn, 133Ba, 241Am, 238Pu), and sets upper limits on the size of any such effects. The decay of radon-222 was once reported to exhibit large 4% peak-to-peak seasonal variations, which were proposed to be related to either solar flare activity or the distance from the Sun, but detailed analysis of the experiment's design flaws, along with comparisons to other, much more stringent and systematically controlled, experiments refute this claim.

GSI anomaly

An unexpected series of experimental results for the rate of decay of heavy highly charged radioactive ions circulating in a storage ring has provoked theoretical activity in an effort to find a convincing explanation. The rates of weak decay of two radioactive species with half-lives of about 40 s and 200 s were found to have a significant oscillatory modulation, with a period of about 7 s. The observed phenomenon is known as the GSI anomaly, as the storage ring is a facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. As the decay process produces an electron neutrino, some of the proposed explanations for the observed rate oscillation invoke neutrino properties. Initial ideas related to flavour oscillation met with skepticism. A more recent proposal involves mass differences between neutrino mass eigenstates.

Theoretical basis

The neutrons and protons that constitute nuclei, as well as other particles that approach close enough to them, are governed by several interactions. The strong nuclear force, not observed at the familiar macroscopic scale, is the most powerful force over subatomic distances. The electrostatic force is almost always significant, and, in the case of beta decay, the weak nuclear force is also involved.

The combined effects of these forces produce a number of different phenomena in which energy may be released by rearrangement of particles in the nucleus, or else the change of one type of particle into others. These rearrangements and transformations may be hindered energetically so that they do not occur immediately. In certain cases, random quantum vacuum fluctuations are theorized to promote relaxation to a lower energy state (the "decay") in a phenomenon known as quantum tunneling. Radioactive decay half-lives of nuclides have been measured over timescales of 55 orders of magnitude, from 2.3 × 10^−23 seconds (for hydrogen-7) to 6.9 × 10^31 seconds (for tellurium-128). The limits of these timescales are set by the sensitivity of instrumentation only, and there are no known natural limits to how brief or long a decay half-life for radioactive decay of a radionuclide may be.

The decay process, like all hindered energy transformations, may be analogized by a snowfield on a mountain. While friction between the ice crystals may be supporting the snow's weight, the system is inherently unstable with regard to a state of lower potential energy. A disturbance would thus facilitate the path to a state of greater entropy; the system will move towards the ground state, producing heat, and the total energy will be distributable over a larger number of quantum states thus resulting in an avalanche. The total energy does not change in this process, but, because of the second law of thermodynamics, avalanches have only been observed in one direction and that is toward the "ground state" — the state with the largest number of ways in which the available energy could be distributed.

Such a collapse (a gamma-ray decay event) requires a specific activation energy. For a snow avalanche, this energy comes as a disturbance from outside the system, although such disturbances can be arbitrarily small. In the case of an excited atomic nucleus decaying by gamma radiation in a spontaneous emission of electromagnetic radiation, the arbitrarily small disturbance comes from quantum vacuum fluctuations.

A radioactive nucleus (or any excited system in quantum mechanics) is unstable, and can, thus, spontaneously stabilize to a less-excited system. The resulting transformation alters the structure of the nucleus and results in the emission of either a photon or a high-velocity particle that has mass (such as an electron, alpha particle, or other type).

Occurrence and applications

According to the Big Bang theory, stable isotopes of the lightest five elements (H, He, and traces of Li, Be, and B) were produced very shortly after the emergence of the universe, in a process called Big Bang nucleosynthesis. These lightest stable nuclides (including deuterium) survive to today, but any radioactive isotopes of the light elements produced in the Big Bang (such as tritium) have long since decayed. Isotopes of elements heavier than boron were not produced at all in the Big Bang, and these first five elements do not have any long-lived radioisotopes. Thus, all radioactive nuclei are relatively young with respect to the birth of the universe, having formed later in various other types of nucleosynthesis in stars (in particular, supernovae), and also during ongoing interactions between stable isotopes and energetic particles. For example, carbon-14, a radioactive nuclide with a half-life of only 5,730 years, is constantly produced in Earth's upper atmosphere due to interactions between cosmic rays and nitrogen.

Nuclides that are produced by radioactive decay are called radiogenic nuclides, whether they themselves are stable or not. There exist stable radiogenic nuclides that were formed from short-lived extinct radionuclides in the early solar system. The extra presence of these stable radiogenic nuclides (such as xenon-129 from extinct iodine-129) against the background of primordial stable nuclides can be inferred by various means.

Radioactive decay has been put to use in the technique of radioisotopic labeling, which is used to track the passage of a chemical substance through a complex system (such as a living organism). A sample of the substance is synthesized with a high concentration of unstable atoms. The presence of the substance in one or another part of the system is determined by detecting the locations of decay events.

On the premise that radioactive decay is truly random (rather than merely chaotic), it has been used in hardware random-number generators. Because the process is not thought to vary significantly in mechanism over time, it is also a valuable tool in estimating the absolute ages of certain materials. For geological materials, the radioisotopes and some of their decay products become trapped when a rock solidifies, and can then later be used (subject to many well-known qualifications) to estimate the date of the solidification. These qualifications include checking the results of several simultaneous processes and their products against each other, within the same sample. In a similar fashion, and also subject to qualification, because the rate of formation of carbon-14 in various eras is known, the date of formation of organic matter within a certain period related to the isotope's half-life may be estimated, because the carbon-14 becomes trapped when the organic matter grows and incorporates the new carbon-14 from the air. Thereafter, the amount of carbon-14 in organic matter decreases according to decay processes that may also be independently cross-checked by other means (such as checking the carbon-14 in individual tree rings, for example).

Szilard–Chalmers effect

The Szilard–Chalmers effect is the breaking of a chemical bond as a result of kinetic energy imparted by radioactive decay. It operates by the absorption of neutrons by an atom and subsequent emission of gamma rays, often with significant amounts of kinetic energy. This kinetic energy, by Newton's third law, pushes back on the decaying atom, which causes it to move with enough speed to break a chemical bond. This effect can be used to separate isotopes by chemical means.

The Szilard–Chalmers effect was discovered in 1934 by Leó Szilárd and Thomas A. Chalmers. They observed that after bombardment by neutrons, the breaking of a bond in liquid ethyl iodide allowed radioactive iodine to be removed.

Origins of radioactive nuclides

Radioactive primordial nuclides found in the Earth are residues from ancient supernova explosions that occurred before the formation of the solar system. They are the fraction of radionuclides that survived from that time, through the formation of the primordial solar nebula, through planet accretion, and up to the present time. The naturally occurring short-lived radiogenic radionuclides found in today's rocks are the daughters of those radioactive primordial nuclides. Another minor source of naturally occurring radioactive nuclides is cosmogenic nuclides, which are formed by cosmic ray bombardment of material in the Earth's atmosphere or crust. The decay of the radionuclides in rocks of the Earth's mantle and crust contributes significantly to Earth's internal heat budget.

Decay chains and multiple modes

The daughter nuclide of a decay event may also be unstable (radioactive). In this case, it too will decay, producing radiation. The resulting second daughter nuclide may also be radioactive. This can lead to a sequence of several decay events called a decay chain (see this article for specific details of important natural decay chains). Eventually, a stable nuclide is produced. Any decay daughters that are the result of an alpha decay will also result in helium atoms being created.

Gamma-ray energy spectrum of uranium ore (inset). Gamma-rays are emitted by decaying nuclides, and the gamma-ray energy can be used to characterize the decay (which nuclide is decaying to which). Here, using the gamma-ray spectrum, several nuclides that are typical of the decay chain of 238U have been identified: 226Ra, 214Pb, 214Bi.

An example is the natural decay chain of 238U:

  • Uranium-238 decays, through alpha-emission, with a half-life of 4.5 billion years to thorium-234
  • which decays, through beta-emission, with a half-life of 24 days to protactinium-234
  • which decays, through beta-emission, with a half-life of 1.2 minutes to uranium-234
  • which decays, through alpha-emission, with a half-life of 240 thousand years to thorium-230
  • which decays, through alpha-emission, with a half-life of 77 thousand years to radium-226
  • which decays, through alpha-emission, with a half-life of 1.6 thousand years to radon-222
  • which decays, through alpha-emission, with a half-life of 3.8 days to polonium-218
  • which decays, through alpha-emission, with a half-life of 3.1 minutes to lead-214
  • which decays, through beta-emission, with a half-life of 27 minutes to bismuth-214
  • which decays, through beta-emission, with a half-life of 20 minutes to polonium-214
  • which decays, through alpha-emission, with a half-life of 160 microseconds to lead-210
  • which decays, through beta-emission, with a half-life of 22 years to bismuth-210
  • which decays, through beta-emission, with a half-life of 5 days to polonium-210
  • which decays, through alpha-emission, with a half-life of 140 days to lead-206, which is a stable nuclide.
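As a bookkeeping check, the chain above can be encoded as data and walked step by step: eight alpha emissions and six beta emissions take (A = 238, Z = 92) down to (A = 206, Z = 82), i.e. lead-206. A minimal sketch:

```python
# The natural 238U decay chain listed above, as (nuclide, mode) steps.
CHAIN = [
    ("U-238", "alpha"), ("Th-234", "beta"), ("Pa-234", "beta"),
    ("U-234", "alpha"), ("Th-230", "alpha"), ("Ra-226", "alpha"),
    ("Rn-222", "alpha"), ("Po-218", "alpha"), ("Pb-214", "beta"),
    ("Bi-214", "beta"), ("Po-214", "alpha"), ("Pb-210", "beta"),
    ("Bi-210", "beta"), ("Po-210", "alpha"),
]

a, z = 238, 92
for nuclide, mode in CHAIN:
    if mode == "alpha":
        a, z = a - 4, z - 2   # alpha emission: lose 2 protons and 2 neutrons
    else:
        z += 1                # beta-minus emission: a neutron becomes a proton

print(a, z)                   # 206 82 -> lead-206, the stable end point
```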

Some radionuclides may have several different paths of decay. For example, approximately 36% of bismuth-212 decays, through alpha-emission, to thallium-208 while approximately 64% of bismuth-212 decays, through beta-emission, to polonium-212. Both thallium-208 and polonium-212 are radioactive daughter products of bismuth-212, and both decay directly to stable lead-208.

Hazard warning signs
