Search This Blog

Friday, May 26, 2023

Signs and symptoms of Graves' disease

The signs and symptoms of Graves' disease generally result from the direct and indirect effects of hyperthyroidism; some, however, are caused by the autoimmune process itself, such as Graves' ophthalmopathy, goitre and pretibial myxedema. These clinical manifestations can involve virtually every system in the body, and the mechanisms that mediate them are not well understood. The severity of the signs and symptoms of hyperthyroidism is related to the duration of the disease, the magnitude of the thyroid hormone excess, and the patient's age. Although the vast majority of patients enjoy significant improvement and remission after proper medical care, health care providers should be aware of variability in the individual response to hyperthyroidism and in individual sensitivity to thyroid hormone fluctuations generally. Graves' disease patients can also undergo periods of hypothyroidism (inadequate production of thyroid hormone; see symptoms of hypothyroidism), owing to the challenge of finding the right dosage of thyroid hormone suppression and/or supplementation. The body's need for thyroid hormone can also change over time, for instance in the first months after radioactive iodine (RAI) treatment. Thyroid autoimmune disease can also be volatile: hyperthyroidism can alternate with hypothyroidism and euthyroidism.

General symptoms

Effects on the skeleton

Overt hyperthyroidism caused by Graves' disease is associated with accelerated bone remodeling, resulting in increased porosity of cortical bone and reduced volume of trabecular bone. This can lead to reduced bone density and eventually osteoporosis, as well as increased fracture rates; the increased rate of hip fractures later in life in turn causes excess late mortality. The changes in bone metabolism are associated with a negative calcium balance, increased excretion of calcium and phosphorus in the urine (hypercalciuria) and stool, and, rarely, hypercalcemia. In hyperthyroidism, the normal bone-resorption cycle of approximately 200 days is halved, and each cycle is associated with a 9.6 percent loss of mineralized bone. In hypothyroidism, the cycle lengthens to approximately 700 days and is associated with a 17 percent increase in mineralized bone.

The reduction in bone density in most studies is 10–20%. The clinical manifestations on bone differ depending on the age of the patient; postmenopausal women are most sensitive to the accelerated bone loss of thyrotoxicosis. In growing children, accelerated bone maturation can increase ossification in the short term, but it generally results in adults of shorter stature than predicted.

If thyrotoxicosis is treated early, bone loss can be minimized. The level of calcium in the blood can be determined by a simple blood test, and a dual-energy X-ray absorptiometry (DEXA) scan can help determine a patient's bone density relative to the rest of the population. Many medications can help rebuild bone mass and prevent further bone loss, such as bisphosphonates; risedronate treatment, for example, has been shown to help restore bone mass in osteopenia/osteoporosis associated with Graves' disease. Weight-bearing exercise, a balanced diet, a calcium intake of about 1,500 mg a day and adequate vitamin D nevertheless remain the elementary foundations.

Eye symptoms

Hyperthyroidism almost always causes general eye symptoms such as dryness and irritation, regardless of the cause of the hyperthyroid state. These need to be distinguished from Graves' ophthalmopathy, which occurs almost exclusively in patients with Graves' disease (it may also, rarely, be seen in Hashimoto's thyroiditis, primary hypothyroidism, and thyroid cancer).

About 20–25% of patients with Graves' disease will suffer from clinically obvious Graves' ophthalmopathy, and not just from the eye signs of hyperthyroidism; only 3 to 5% will develop severe ophthalmopathy. However, when examined more closely (e.g. by magnetic resonance imaging of the orbits), many more patients show evidence of ophthalmopathy. The estimated annual incidence of Graves' ophthalmopathy is 16 per 100,000 women and 3 per 100,000 men.

Although it is true that in most patients ophthalmopathy, goiter, and symptoms of thyrotoxicosis appear more or less coincidentally, it is also true that in certain cases eye signs may appear long before thyrotoxicosis is evident, or become worse when the thyrotoxicosis is subsiding or has been controlled by treatment. In approximately 20% of ophthalmopathy patients, ophthalmopathy appears before the onset of hyperthyroidism, in about 40% concurrently, and in about 20% in the six months after diagnosis. In the remainder, the eye disease first becomes apparent after treatment of the hyperthyroidism, more often in patients treated with radioiodine.

It can sometimes be difficult to distinguish between eye symptoms due to hyperthyroidism and those due to Graves' antibodies, because the two often occur together. What can make things particularly difficult is that many patients with hyperthyroidism have lid retraction, which leads to stare and lid lag (due to contraction of the levator palpebrae muscles of the eyelids). This stare may give the appearance of protruding eyeballs (proptosis) when none in fact exists; it subsides when the hyperthyroidism is treated.

Due to Graves' ophthalmopathy

Photo showing the classic finding of proptosis and lid retraction in Graves' disease

Graves' ophthalmopathy is characterized by inflammation of the extraocular muscles, orbital fat and connective tissue. It results in the following signs, which can be extremely distressing to the patient:

  • Most frequent are symptoms due to conjunctival or corneal irritation: burning, photophobia, tearing, pain, and a gritty or sandy sensation.
  • Protruding eyeballs (known as proptosis and exophthalmos).
  • Diplopia (double vision) is common.
  • Limitation of eye movement (due to impairment of eye muscle function).
  • Periorbital and conjunctival edema (accumulation of fluid beneath the skin around the eyes).
  • In severe cases, the optic nerve may be compressed and acuity of vision impaired.
  • Occasionally loss of vision.

Due to hyperthyroidism

In the absence of Graves' ophthalmopathy, patients may demonstrate other ophthalmic symptoms and signs due to hyperthyroidism:

  • Dry eyes (due to loss of corneal moisture).
  • A sense of irritation, discomfort, or pain in the eyes.
  • A tingling sensation behind the eyes or the feeling of grit or sand in the eyes.
  • Excessive tearing that is often made worse by exposure to cold air, wind, or bright lights.
  • Swelling or redness of the eyes.
  • Stare
  • Lid lag (Von Graefe's sign)
  • Sensitivity to light
  • Blurring of vision
  • Widened palpebral fissures
  • Infrequent blinking
  • The appearance of lid retraction.

Neuropsychological manifestations

Several studies have suggested a high prevalence of neuropsychiatric disorders and mental symptoms in Graves' disease (and thyroid disease in general), similar to those seen in patients with organic brain disease. These manifestations are diverse, affecting both the central and the peripheral nervous system. The vast majority of patients with hyperthyroidism meet criteria for some psychiatric disorder, and even those with milder presentations are probably not entirely free of mental symptoms such as emotional lability, tension, depression and anxiety. Anxiety syndromes related to hyperthyroidism are typically complicated by major depression and by cognitive decline, such as in memory and attention.

Some studies contradict these psychological findings. For example, a large 2002 study found "no statistical association between thyroid dysfunction, and the presence of depression or anxiety disorder." In one study of hospitalised elderly patients, over half had cognitive impairment with either dementia or confusion. However, a controlled study of 31 Graves' disease patients found that while patients reported subjective cognitive deficits in the toxic phase of Graves' thyrotoxicosis, formal testing found no cognitive impairment, suggesting that the reported symptoms may reflect the affective and somatic manifestations of hyperthyroidism.

A 2006 literature review notes methodological issues in the consistency of the diagnostic criteria for Graves' disease, which might explain the apparently contradictory findings. The same reviewers found many reports of residual complaints in patients who were euthyroid after treatment, with a high prevalence of anxiety disorders and bipolar disorder, as well as elevated scores on scales of anxiety, depression and psychological distress. In a 1992 study, significant proportions of the 137 questioned patients with Graves' disease reported, among other things, increased crying (55%), being easily startled (53%), being tired all the time (47%), a significant decrease in social activity (46%), feelings of being out of control (45%), feelings of hopelessness (43%), loss of sense of humor (41%), loss of interest in things they formerly enjoyed (39%), and not being able to 'connect' with others (34%).

Several studies point out that the severity of psychiatric symptoms could easily result in an inappropriate referral to a psychiatrist prior to the diagnosis of hyperthyroidism. Consequently, undiagnosed hyperthyroidism sometimes results in inappropriate use of psychotropic medications; prompt recognition of hyperthyroidism (or hypothyroidism) through thyroid function screening is therefore recommended in the evaluation of patients with psychiatric symptoms. Naturally, the management of patients would be improved by collaboration between an endocrinologist and a psychiatrist.

Overall, reported symptoms vary from mild to severe aspects of anxiety or depression, and may include psychotic and behavioral disturbances:

  • Varying degrees of anxiety, such as a very active mind, irritability, hyperactivity, agitation, restlessness, nervousness, distractibility and panic attacks. In addition, patients may experience vivid dreams and, occasionally, nightmares.
  • Depressive features, including mental impairment, memory lapses, diminished attention span and fluctuating depression.
  • Emotional lability and, in some patients, hypomania.
  • Pathological well-being (euphoria) or hyperactivity may give way to a state of exhaustion, so that profound fatigue or asthenia comes to dominate the picture.
  • Erratic behaviour may include intermittent rage disorder and mild attention deficit disorder. Some patients become hyperirritable and combative, which can precipitate accidents or even assaultive behaviour.
  • In more extreme cases, features of psychosis, with delusions of persecution or delusions of reference, and pressure of speech may present themselves. Rarely, patients develop visual or auditory hallucinations or a frank psychosis, may appear schizophrenic, lose touch with reality and become delirious. Such psychotic symptoms may not completely clear up after the hyperthyroidism has been treated. Paranoia and paranoid-hallucinatory psychosis in hyperthyroidism usually have a manic disposition, and it is therefore often unclear whether the patient is experiencing a paranoid psychosis with depressive streaks or a depression with paranoid streaks.

Treatment of hyperthyroidism typically leads to improvement in cognitive and behavioral impairments. Agitation, inattention, and frontal lobe impairment may improve more rapidly than other cognitive functions. However, several studies confirm that a substantial proportion of patients with hyperthyroidism have psychiatric disorders or mental symptoms and decreased quality of life even after successful treatment of their hyperthyroidism.

Effects on pre-existing psychiatric disorders

Patients with pre-existing psychiatric disorders may experience a worsening of their usual symptoms, as several studies have observed. A 1999 study found that Graves' disease exacerbated the symptoms of Tourette's disorder and attention-deficit hyperactivity disorder (ADHD), and noted that the missed diagnosis of Graves' disease compromised the efficacy of treatment of these disorders. Patients known to have a convulsive disorder may become more difficult to control with the usual medications, and seizures may appear in patients who have never previously manifested them.

Sub-clinical hyperthyroidism

In sub-clinical hyperthyroidism, serum TSH is abnormally low, but T4 and T3 levels fall within the laboratory reference ranges. It primarily affects the skeleton and the cardiovascular system (abnormalities in other systems have also been reported), in a similar but less severe and less frequent way than overt hyperthyroidism does. It can alter cardiac function, with increased heart rate, increased left ventricular mass index, increased cardiac contractility, diastolic dysfunction, and induction of ectopic atrial beats. Long-term mild excess of thyroid hormone can thus impair cardiac reserve and exercise capacity. In a large 2008 population-based study, the odds of having poorer cognitive function were greater for sub-clinical hyperthyroidism than for stroke, diabetes mellitus, or Parkinson's disease. Sub-clinical hyperthyroidism might modestly increase the risk of cognitive impairment and dementia.
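This biochemical definition amounts to a simple classification rule on the laboratory values. The sketch below is a minimal Python illustration, assuming hypothetical reference ranges (TSH roughly 0.4–4.0 mIU/L, free T4 roughly 9–25 pmol/L) and ignoring T3; actual ranges vary by laboratory and assay, and this is not a diagnostic tool.

```python
# Minimal sketch: classifying thyroid status from TSH and free T4.
# The reference ranges below are illustrative assumptions; real ranges
# vary by laboratory and assay, T3 is ignored, and this is not diagnostic advice.

TSH_LOW, TSH_HIGH = 0.4, 4.0      # mIU/L (assumed reference range)
FT4_LOW, FT4_HIGH = 9.0, 25.0     # pmol/L (assumed reference range)

def thyroid_status(tsh: float, free_t4: float) -> str:
    """Classify a TSH / free T4 pair into a rough biochemical category."""
    if tsh < TSH_LOW:
        if free_t4 > FT4_HIGH:
            return "overt hyperthyroidism (low TSH, high free T4)"
        return "sub-clinical hyperthyroidism (low TSH, free T4 in range)"
    if tsh > TSH_HIGH:
        if free_t4 < FT4_LOW:
            return "overt hypothyroidism (high TSH, low free T4)"
        return "sub-clinical hypothyroidism (high TSH, free T4 in range)"
    return "euthyroid (TSH in range)"

print(thyroid_status(0.05, 14.0))  # -> sub-clinical hyperthyroidism
print(thyroid_status(0.05, 40.0))  # -> overt hyperthyroidism
```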

A possible explanation for the mental symptoms of sub-clinical thyroid disease may lie in the fact that the brain has among the highest expression of thyroid hormone receptors (THRs), and that neurons are often more sensitive than other tissues to thyroid abnormalities, including sub-clinical hyperthyroidism and thyrotoxicosis. In a 1996 survey study, respondents reported a significant decline in memory, attention, planning, and overall productivity from the period two years before the onset of Graves' symptoms to the period when hyperthyroid. Hypersensitivity of the central nervous system to low-grade hyperthyroidism can also result in an anxiety disorder before other Graves' disease symptoms emerge. For example, panic disorder has been reported to precede Graves' hyperthyroidism by four to five years in some cases, although it is not known how frequently this occurs.

However, while clinical hyperthyroidism is associated with frank neuropsychological and affective alterations, the occurrence of such alterations, and their treatment, in mild and sub-clinical hyperthyroidism remains controversial. Whatever the explanation for the inconsistent findings, a 2007 study by Andersen et al. notes that the distinction between sub-clinical and overt thyroid disease is in any case somewhat arbitrary. Sub-clinical hyperthyroidism has been reported in 63% of patients with otherwise euthyroid Graves' disease, but in only 4% of cases in which the disease was in remission.

Children and adolescents

Hyperthyroidism has unique effects on growth and pubertal development in children, for example accelerating epiphyseal maturation. In growing children, accelerated bone growth from hyperthyroidism can increase osteogenesis in the short term, but it generally results in adults of shorter stature than predicted. Pubertal development tends to be delayed or slowed, and girls who have undergone menarche may develop secondary amenorrhea. Hyperthyroidism is associated with high sex hormone-binding globulin (SHBG), which may result in high total serum estradiol levels in girls and testosterone levels in boys, although unbound (free) levels of these hormones are decreased. Hyperthyroidism before the age of four may cause neurodevelopmental delay, and a study by Segni et al. suggests that permanent brain damage can occur as a result of the illness.

Ophthalmopathic findings are more common but less severe in children (severe infiltrative exophthalmos is virtually unknown before mid-adolescence); beyond that, many of the typical clinical features of hyperthyroidism in children and adolescents are similar to those in adults. An important difference between children and adults with Graves' disease is that children are still developing, both psychologically and physiologically, and are far more dependent on their environment. The encephalopathy will therefore have profound effects on the child's developing personality and developing relationship with the environment, and disturbances of bodily development further complicate matters. The consequences for the development and the somatic and psychological well-being of the child can be far-reaching and sometimes irreversible. The earlier a person is affected by thyroid disease, the more the development of personality is affected and the greater the delay relative to their potential developmental level. The child falls behind in cognitive, emotional and sexual development, which in turn also affects his or her ability to cope with the endocrine disease.

Children with hyperthyroidism tend to have greater mood swings and disturbances of behavior, as compared with adults. Their attention span decreases, they are usually hyperactive and distractible, they sleep poorly, and their school performance deteriorates. Because devastating personality and emotional changes often appear in the child or adolescent with Graves' disease, many hyperthyroid children are (similar to many adults) referred to a developmental specialist or child psychiatrist before the presence of hyperthyroidism is suspected.

Older patients

In older patients, emotional instability may be less evident, or depression may occur, and the symptoms and signs are predominantly circulatory. In many, the thyroid is not readily palpable. Symptoms such as rapid heart rate, shortness of breath on exertion, and edema may predominate. Older patients also tend to have more weight loss and less of an increase in appetite; anorexia is therefore fairly frequent in this group, as is constipation. Elderly patients may have what is called "apathetic thyrotoxicosis", a state with fewer and less severe symptoms apart from weakness, depression and lethargy, which makes the condition even more likely to escape diagnosis.

Graves' disease and work

Considering the many signs and symptoms, the generally delayed diagnosis, and the possibility of residual complaints after treatment, it is little wonder that a significant number of people with Graves' disease have difficulty keeping their job. One study of 303 patients successfully treated for hyperthyroidism (77% of whom had Graves' disease) found that 53% reported a lack of energy and that about one-third were unable to resume their customary work, mainly because of persistent mental problems. In their 1986 study of 26 patients, examined 10 years after successful treatment of hyperthyroidism, Perrild et al. note that four patients had been granted disability pensions on the basis of intellectual dysfunction. Between 2006 and 2008, Ponto et al. surveyed 250 Graves' disease patients; 36% had been on sick leave and 5% had even had to take early retirement. In the same study, 34% of 400 questioned physicians reported treating patients with fully impaired earning capacity.

Patients can and do recover with appropriate therapy while continuing to work, but more rapid and certain progress is made if a period away from the usual occupation can be provided. Two important considerations are adequate rest and attention to nutrition.

Left ventricular hypertrophy

From Wikipedia, the free encyclopedia

A heart with left ventricular hypertrophy in short-axis view
Specialty: Cardiology
Complications: Hypertrophic cardiomyopathy, heart failure
Diagnostic method: Echocardiography, cardiovascular MRI
Differential diagnosis: Athletic heart syndrome

Left ventricular hypertrophy (LVH) is thickening of the muscle of the heart's left ventricle, with a resulting increase in left ventricular mass.

Causes

While ventricular hypertrophy occurs naturally as a reaction to aerobic exercise and strength training, it is most frequently referred to as a pathological reaction to cardiovascular disease or high blood pressure. It is one aspect of ventricular remodeling.

While LVH itself is not a disease, it is usually a marker for disease involving the heart. Disease processes that can cause LVH include any disease that increases the afterload the heart has to contract against, and some primary diseases of the heart muscle. Causes of increased afterload that can cause LVH include aortic stenosis, aortic insufficiency and hypertension. Primary diseases of the heart muscle that cause LVH are known as hypertrophic cardiomyopathies, which can lead to heart failure.

Long-standing mitral insufficiency also leads to LVH as a compensatory mechanism.

Associated genes include OGN (osteoglycin).

Diagnosis

The commonly used method to diagnose LVH is echocardiography, with which the thickness of the muscle of the heart can be measured. The electrocardiogram (ECG) often shows signs of increased voltage from the heart in individuals with LVH, so this is often used as a screening test to determine who should undergo further testing.

Echocardiography

Left ventricular hypertrophy grading by posterior wall thickness: mild, 12 to 13 mm; moderate, >13 to 17 mm; severe, >17 mm.

Two-dimensional echocardiography can produce images of the left ventricle. The thickness of the left ventricle as visualized on echocardiography correlates with its actual mass. Left ventricular mass can be further estimated from the measured wall thickness and internal diameter using geometric assumptions about ventricular shape (a rough illustration follows the reference values below). The average thickness of the left ventricle, given as the 95% prediction interval for short-axis images at the mid-cavity level, is:

  • Women: 4 – 8 mm
  • Men: 5 – 9 mm
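As a rough illustration of how mass is estimated from such linear measurements, the sketch below uses the widely cited cube-formula convention (the ASE/Devereux formula); the formula's coefficients and the sample measurements are quoted here only as an assumption for illustration, not as the method any particular laboratory uses. The grading helper mirrors the posterior-wall table above.

```python
# Sketch: estimating left ventricular (LV) mass from linear echo measurements
# using the cube-formula (Devereux) convention. Illustrative only; not a
# substitute for a validated clinical pipeline.

def lv_mass_grams(ivsd_cm: float, lvidd_cm: float, pwtd_cm: float) -> float:
    """LV mass in grams from end-diastolic septal thickness (IVSd),
    internal diameter (LVIDd) and posterior wall thickness (PWTd), in cm."""
    return 0.8 * (1.04 * ((ivsd_cm + lvidd_cm + pwtd_cm) ** 3 - lvidd_cm ** 3)) + 0.6

def grade_posterior_wall(pwtd_mm: float) -> str:
    """Grade hypertrophy by posterior wall thickness (table above)."""
    if pwtd_mm > 17:
        return "severe"
    if pwtd_mm > 13:
        return "moderate"
    if pwtd_mm >= 12:
        return "mild"
    return "within normal range"

# Hypothetical measurements: IVSd 1.3 cm, LVIDd 5.0 cm, PWTd 1.4 cm
print(round(lv_mass_grams(1.3, 5.0, 1.4), 1), "g")   # ~276.4 g
print(grade_posterior_wall(14))                       # -> moderate
```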

CT & MRI

CT- and MRI-based measurements can be used to measure the left ventricle in three dimensions and calculate left ventricular mass directly. MRI-based measurement is considered the “gold standard” for left ventricular mass, though it is usually not readily available in routine practice. In older individuals, age-related remodeling of the left ventricle's geometry can lead to a discrepancy between CT- and echocardiography-based measurements of left ventricular mass.

ECG criteria

Left ventricular hypertrophy with secondary repolarization abnormalities as seen on ECG.
Histopathology of (a) normal myocardium and (b) myocardial hypertrophy; scale bar indicates 50 μm.
Gross pathology of left ventricular hypertrophy; the left ventricle is at right in the image, serially sectioned from apex to near base.

There are several sets of criteria used to diagnose LVH via electrocardiography. None of them are perfect, though by using multiple criteria sets, the sensitivity and specificity are increased.

The Sokolow-Lyon index:

  • S in V1 + R in V5 or V6 (whichever is larger) ≥ 35 mm (≥ 7 large squares)
  • R in aVL ≥ 11 mm

The Cornell voltage criteria for the ECG diagnosis of LVH involve measurement of the sum of the R wave in lead aVL and the S wave in lead V3 (both voltage indices are illustrated in the sketch after this list). The Cornell criteria for LVH are:

  • S in V3 + R in aVL > 28 mm (men)
  • S in V3 + R in aVL > 20 mm (women)
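Both voltage indices are simple sums of lead amplitudes, as the following sketch shows; the example amplitudes (in millimetres at the standard 10 mm/mV calibration) are invented for illustration.

```python
# Sketch: applying the Sokolow-Lyon and Cornell voltage criteria to a set of
# ECG lead amplitudes (in mm at standard 10 mm/mV calibration). The example
# amplitudes below are made up for illustration.

def sokolow_lyon_positive(s_v1: float, r_v5: float, r_v6: float, r_avl: float) -> bool:
    """S in V1 + larger of R in V5/V6 >= 35 mm, or R in aVL >= 11 mm."""
    return (s_v1 + max(r_v5, r_v6)) >= 35 or r_avl >= 11

def cornell_positive(s_v3: float, r_avl: float, male: bool) -> bool:
    """S in V3 + R in aVL > 28 mm (men) or > 20 mm (women)."""
    return (s_v3 + r_avl) > (28 if male else 20)

# Hypothetical tracing: S(V1)=14, R(V5)=23, R(V6)=19, R(aVL)=9, S(V3)=17
print(sokolow_lyon_positive(14, 23, 19, 9))   # True: 14 + 23 = 37 >= 35
print(cornell_positive(17, 9, male=True))     # False: 26 <= 28
```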

The Romhilt-Estes point score system ("diagnostic" >5 points; "probable" 4 points) assigns points as follows (a scoring sketch follows the list):

  • Voltage criteria, any of the following: R or S in limb leads ≥20 mm; S in V1 or V2 ≥30 mm; R in V5 or V6 ≥30 mm (3 points)
  • ST-T vector opposite to QRS without digitalis (3 points), or with digitalis (1 point)
  • Negative terminal P wave in V1, ≥1 mm in depth and ≥0.04 sec in duration, indicating left atrial enlargement (3 points)
  • Left axis deviation, QRS axis of −30° or more (2 points)
  • QRS duration ≥0.09 sec (1 point)
  • Delayed intrinsicoid deflection in V5 or V6 (>0.05 sec) (1 point)
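Because the score is additive, it can be written as a small scoring function. The sketch below mirrors the point values listed above; its boolean inputs stand for findings that a reader of the ECG would supply, and the example is invented.

```python
# Sketch: Romhilt-Estes point score (LVH "probable" at 4 points,
# "diagnostic" at more than 5 points). Each argument is a finding that the
# ECG reader has already judged present or absent.

def romhilt_estes_score(voltage_criteria: bool,
                        st_t_abnormality: bool,
                        on_digitalis: bool,
                        left_atrial_enlargement: bool,
                        left_axis_deviation: bool,
                        qrs_duration_ge_90ms: bool,
                        delayed_intrinsicoid: bool) -> int:
    score = 0
    if voltage_criteria:
        score += 3                      # any limb/precordial voltage criterion
    if st_t_abnormality:
        score += 1 if on_digitalis else 3
    if left_atrial_enlargement:
        score += 3                      # negative terminal P wave in V1
    if left_axis_deviation:
        score += 2                      # QRS axis -30 degrees or more leftward
    if qrs_duration_ge_90ms:
        score += 1
    if delayed_intrinsicoid:
        score += 1                      # intrinsicoid deflection > 0.05 s in V5/V6
    return score

points = romhilt_estes_score(True, True, False, False, True, False, False)
print(points, "-> diagnostic" if points > 5 else "-> probable" if points >= 4 else "")
# 3 + 3 + 2 = 8 -> diagnostic
```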

Other voltage-based criteria for LVH include:

  • Lead I: R wave > 14 mm
  • Lead aVR: S wave > 15 mm
  • Lead aVL: R wave > 12 mm
  • Lead aVF: R wave > 21 mm
  • Lead V5: R wave > 26 mm
  • Lead V6: R wave > 20 mm

Treatment

Treatment typically focuses on resolving the underlying cause of the LVH, and the enlargement is not permanent in all cases; in some cases the growth can regress with the reduction of blood pressure.

LVH may be a factor in determining treatment or diagnosis for other conditions; for example, LVH is used in the staging and risk stratification of non-ischemic cardiomyopathies such as Fabry disease. Patients with LVH may require more complex and precise diagnostic procedures, such as echocardiography or cardiac MRI.

Critical mass

From Wikipedia, the free encyclopedia
A re-creation of the 1945 criticality accident using the Demon core: a plutonium pit is surrounded by blocks of neutron-reflective tungsten carbide. The original experiment was designed to measure the radiation produced when an extra block was added. The mass went supercritical when the block was placed improperly by being dropped.

In nuclear engineering, a critical mass is the smallest amount of fissile material needed for a sustained nuclear chain reaction. The critical mass of a fissionable material depends upon its nuclear properties (specifically, its nuclear fission cross-section), density, shape, enrichment, purity, temperature, and surroundings. The concept is important in nuclear weapon design.

Explanation of criticality

When a nuclear chain reaction in a mass of fissile material is self-sustaining, the mass is said to be in a critical state in which there is no increase or decrease in power, temperature, or neutron population.

A numerical measure of a critical mass is dependent on the effective neutron multiplication factor k, the average number of neutrons released per fission event that go on to cause another fission event rather than being absorbed or leaving the material. When k = 1, the mass is critical, and the chain reaction is self-sustaining.

A subcritical mass is a mass of fissile material that does not have the ability to sustain a fission chain reaction. A population of neutrons introduced to a subcritical assembly will exponentially decrease. In this case, k < 1. A steady rate of spontaneous fissions causes a proportionally steady level of neutron activity. The constant of proportionality increases as k increases.

A supercritical mass is one which, once fission has started, will proceed at an increasing rate. The material may settle into equilibrium (i.e. become critical again) at an elevated temperature/power level or destroy itself. In the case of supercriticality, k > 1.

Due to spontaneous fission, a supercritical mass will undergo a chain reaction. For example, a spherical critical mass of pure uranium-235 (235U), with a mass of about 52 kilograms (115 lb), would experience around 15 spontaneous fission events per second. The probability that one such event will cause a chain reaction depends on how much the mass exceeds the critical mass. If uranium-238 (238U) is present, the rate of spontaneous fission will be much higher. Fission can also be initiated by neutrons produced by cosmic rays.
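The interplay of k, a steady spontaneous-fission source, and the resulting neutron level can be illustrated with a toy generation-by-generation model: with a source of S neutrons per generation, a subcritical assembly settles at S/(1 − k), which is the proportionality described above and which grows as k approaches 1, while a supercritical assembly grows without bound. The numbers below are arbitrary and the model ignores timing and geometry.

```python
# Toy model: neutron population with a steady source S per generation and
# multiplication factor k. Illustrates subcritical equilibrium at S/(1 - k)
# and unbounded growth for k > 1. Numbers are arbitrary.

def neutron_population(k: float, source: float, generations: int) -> float:
    n = 0.0
    for _ in range(generations):
        n = n * k + source      # each neutron yields k successors, plus the source
    return n

S = 100.0
for k in (0.5, 0.9, 0.99):      # subcritical: settles near S / (1 - k)
    print(k, round(neutron_population(k, S, 2000), 1), "expected", round(S / (1 - k), 1))
print(1.1, round(neutron_population(1.1, S, 200), 1))   # supercritical: diverges
```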

Changing the point of criticality

The mass where criticality occurs may be changed by modifying certain attributes such as fuel, shape, temperature, density and the installation of a neutron-reflective substance. These attributes have complex interactions and interdependencies. These examples only outline the simplest ideal cases:

Varying the amount of fuel

It is possible for a fuel assembly to be critical at near zero power. If the perfect quantity of fuel were added to a slightly subcritical mass to create an "exactly critical mass", fission would be self-sustaining for only one neutron generation (fuel consumption then makes the assembly subcritical again).

Similarly, if the perfect quantity of fuel were added to a slightly subcritical mass, to create a barely supercritical mass, the temperature of the assembly would increase to an initial maximum (for example: 1 K above the ambient temperature) and then decrease back to the ambient temperature after a period of time, because fuel consumed during fission brings the assembly back to subcriticality once again.

Changing the shape

A mass may be exactly critical without being a perfect homogeneous sphere. More closely refining the shape toward a perfect sphere will make the mass supercritical. Conversely changing the shape to a less perfect sphere will decrease its reactivity and make it subcritical.

Changing the temperature

A mass may be exactly critical at a particular temperature. Fission and absorption cross-sections increase as the relative neutron velocity decreases. As fuel temperature increases, neutrons of a given energy appear faster relative to the thermally agitated nuclei, and thus fission/absorption is less likely. This is not unrelated to Doppler broadening of the 238U resonances but is common to all fuels, absorbers and configurations. Neglecting the very important resonances, the total neutron cross-section of every material exhibits an inverse relationship with relative neutron velocity. Hot fuel is always less reactive than cold fuel (over/under moderation in LWRs is a different topic). Thermal expansion associated with a temperature increase also contributes a negative coefficient of reactivity, since the fuel atoms move farther apart. A mass that is exactly critical at room temperature would be sub-critical in an environment anywhere above room temperature due to thermal expansion alone.

Varying the density of the mass

The higher the density, the lower the critical mass. The density of a material at a constant temperature can be changed by varying the pressure or tension or by changing crystal structure (see allotropes of plutonium). An ideal mass will become subcritical if allowed to expand or conversely the same mass will become supercritical if compressed. Changing the temperature may also change the density; however, the effect on critical mass is then complicated by temperature effects (see "Changing the temperature") and by whether the material expands or contracts with increased temperature. Assuming the material expands with temperature (enriched uranium-235 at room temperature for example), at an exactly critical state, it will become subcritical if warmed to lower density or become supercritical if cooled to higher density. Such a material is said to have a negative temperature coefficient of reactivity to indicate that its reactivity decreases when its temperature increases. Using such a material as fuel means fission decreases as the fuel temperature increases.

Use of a neutron reflector

Surrounding a spherical critical mass with a neutron reflector further reduces the mass needed for criticality. A common material for a neutron reflector is beryllium metal. This reduces the number of neutrons which escape the fissile material, resulting in increased reactivity.

Use of a tamper

In a bomb, a dense shell of material surrounding the fissile core will contain, via inertia, the expanding fissioning material, which increases the efficiency. This is known as a tamper. A tamper also tends to act as a neutron reflector. Because a bomb relies on fast neutrons (not ones moderated by reflection with light elements, as in a reactor), the neutrons reflected by a tamper are slowed by their collisions with the tamper nuclei, and because it takes time for the reflected neutrons to return to the fissile core, they take rather longer to be absorbed by a fissile nucleus. But they do contribute to the reaction, and can decrease the critical mass by a factor of four. Also, if the tamper is (e.g. depleted) uranium, it can fission due to the high energy neutrons generated by the primary explosion. This can greatly increase yield, especially if even more neutrons are generated by fusing hydrogen isotopes, in a so-called boosted configuration.

Critical size

The critical size is the minimum size of a nuclear reactor core or nuclear weapon that can be made for a specific geometrical arrangement and material composition. The critical size must at least include enough fissionable material to reach critical mass. If the size of the reactor core is less than a certain minimum, too many fission neutrons escape through its surface and the chain reaction is not sustained.

Critical mass of a bare sphere

Top: A sphere of fissile material is too small to allow the chain reaction to become self-sustaining as neutrons generated by fissions can too easily escape.

Middle: By increasing the mass of the sphere to a critical mass, the reaction can become self-sustaining.

Bottom: Surrounding the original sphere with a neutron reflector increases the efficiency of the reactions and also allows the reaction to become self-sustaining.

The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses at normal density of some actinides are listed in the following table. Most information on bare sphere masses is considered classified, since it is critical to nuclear weapons design, but some documents have been declassified.

Nuclide | Half-life (y) | Critical mass (kg) | Diameter (cm)
uranium-233 | 159,200 | 15 | 11
uranium-235 | 703,800,000 | 52 | 17
neptunium-236 | 154,000 | 7 | 8.7
neptunium-237 | 2,144,000 | 60 | 18
plutonium-238 | 87.7 | 9.04–10.07 | 9.5–9.9
plutonium-239 | 24,110 | 10 | 9.9
plutonium-240 | 6,561 | 40 | 15
plutonium-241 | 14.3 | 12 | 10.5
plutonium-242 | 375,000 | 75–100 | 19–21
americium-241 | 432.2 | 55–77 | 20–23
americium-242m | 141 | 9–14 | 11–13
americium-243 | 7,370 | 180–280 | 30–35
curium-243 | 29.1 | 7.34–10 | 10–11
curium-244 | 18.1 | 13.5–30 | 12.4–16
curium-245 | 8,500 | 9.41–12.3 | 11–12
curium-246 | 4,760 | 39–70.1 | 18–21
curium-247 | 15,600,000 | 6.94–7.06 | 9.9
berkelium-247 | 1,380 | 75.7 | 11.8–12.2
berkelium-249 | 0.9 | 192 | 16.1–16.6
californium-249 | 351 | 6 | 9
californium-251 | 900 | 5.46 | 8.5
californium-252 | 2.6 | 2.73 | 6.9
einsteinium-254 | 0.755 | 9.89 | 7.1

The critical mass for lower-grade uranium depends strongly on the grade: with 20% 235U it is over 400 kg; with 15% 235U, it is well over 600 kg.

The critical mass is inversely proportional to the square of the density: if the density is 1% greater, the critical mass is about 2% smaller; the corresponding volume is then 3% smaller and the diameter 1% smaller. The reason is that the probability per centimetre travelled that a neutron hits a nucleus is proportional to the density, so a 1% greater density means that the distance a neutron travels before leaving the system can be 1% shorter. This must be taken into consideration when attempting more precise estimates of critical masses of plutonium isotopes than the approximate values given above, because plutonium metal has a large number of different crystal phases which can have widely varying densities.
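That scaling argument can be checked numerically. The sketch below uses the inverse-square law together with the roughly 52 kg bare-sphere figure for 235U quoted above; the 2× compression case is included only to show the size of the effect.

```python
# Check of the density-scaling argument: critical mass scales as 1/density^2,
# so volume (mass/density) scales as 1/density^3 and diameter as 1/density.
# The 52 kg bare-sphere figure for uranium-235 is taken from the table above.

M0 = 52.0            # kg, bare-sphere critical mass of U-235 at normal density
for factor in (1.01, 2.0):           # 1% denser, and twice as dense
    mass = M0 / factor ** 2
    volume_ratio = (mass / M0) / factor        # V = M / rho
    diameter_ratio = volume_ratio ** (1 / 3)
    print(f"density x{factor}: mass {mass:.1f} kg "
          f"({100 * (1 - mass / M0):.0f}% less), "
          f"volume {100 * (1 - volume_ratio):.0f}% less, "
          f"diameter {100 * (1 - diameter_ratio):.0f}% less")
```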

Note that not all neutrons contribute to the chain reaction. Some escape and others undergo radiative capture.

Let q denote the probability that a given neutron induces fission in a nucleus. Consider only prompt neutrons, and let ν denote the number of prompt neutrons generated in a nuclear fission. For example, ν ≈ 2.5 for uranium-235. Then, criticality occurs when ν·q = 1. The dependence of this upon geometry, mass, and density appears through the factor q.

Given a total interaction cross section σ (typically measured in barns), the mean free path of a prompt neutron is ℓ = 1/(nσ), where n is the nuclear number density. Most interactions are scattering events, so a given neutron obeys a random walk until it either escapes from the medium or causes a fission reaction. So long as other loss mechanisms are not significant, the radius of a spherical critical mass is rather roughly given by the product of the mean free path and the square root of one plus the number of scattering events per fission event (call this s), since the net distance travelled in a random walk is proportional to the square root of the number of steps:

R_c ≈ ℓ √(1 + s) = √(1 + s) / (nσ)

Note again, however, that this is only a rough estimate.

In terms of the total mass M, the nuclear mass m, the density ρ, and a fudge factor f which takes into account geometrical and other effects, criticality corresponds to

1 = f σ ρ R_c / m,  i.e.  M_c = (4π/3) ρ R_c³ = (4π/3) m³ / (f³ σ³ ρ²),

which clearly recovers the aforementioned result that critical mass depends inversely on the square of the density.

Alternatively, one may restate this more succinctly in terms of the areal density of mass, Σ:

1 = f′ σ Σ / m,  i.e.  Σ_c = m / (f′ σ),

where the factor f has been rewritten as f′ to account for the fact that the two values may differ depending upon geometrical effects and how one defines Σ. For example, for a bare solid sphere of 239Pu criticality is at 320 kg/m², regardless of density, and for 235U at 550 kg/m². In any case, criticality then depends upon a typical neutron "seeing" an amount of nuclei around it such that the areal density of nuclei exceeds a certain threshold.

This is applied in implosion-type nuclear weapons where a spherical mass of fissile material that is substantially less than a critical mass is made supercritical by very rapidly increasing ρ (and thus Σ as well) (see below). Indeed, sophisticated nuclear weapons programs can make a functional device from less material than more primitive weapons programs require.
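A rough numerical illustration of why compression works: taking Σ as the sphere's mass divided by its surface area (one convenient reading of the areal density, which reproduces the quoted 239Pu and 235U thresholds to within a few percent) and assuming an alpha-phase plutonium density of about 19,800 kg/m³, a sub-critical sphere crosses the 320 kg/m² threshold once its density is raised enough. Both the definition of Σ and the density value are assumptions made for this sketch.

```python
# Sketch: areal density of a solid plutonium sphere versus the ~320 kg/m^2
# bare-sphere threshold for Pu-239 quoted in the text. Sigma is taken here as
# the sphere's mass divided by its surface area (an assumed convention), and
# the ~19,800 kg/m^3 alpha-phase density is likewise an assumption.
from math import pi

THRESHOLD = 320.0                    # kg/m^2, bare solid sphere of Pu-239

def areal_density(mass_kg: float, density: float) -> float:
    radius = (3 * mass_kg / (4 * pi * density)) ** (1 / 3)
    return mass_kg / (4 * pi * radius ** 2)

mass = 6.0                           # kg of Pu-239, sub-critical at normal density
for compression in (1.0, 1.5, 2.0):  # implosion raises the density
    sigma = areal_density(mass, 19_800 * compression)
    print(f"density x{compression}: Sigma = {sigma:.0f} kg/m^2 "
          f"({'above' if sigma > THRESHOLD else 'below'} threshold)")
```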

Aside from the math, there is a simple physical analog that helps explain this result. Consider diesel fumes belched from an exhaust pipe. Initially the fumes appear black, then gradually you are able to see through them without any trouble. This is not because the total scattering cross section of all the soot particles has changed, but because the soot has dispersed. If we consider a transparent cube of length L on a side, filled with soot, then the optical depth of this medium is inversely proportional to the square of L, and therefore proportional to the areal density of soot particles: we can make it easier to see through the imaginary cube just by making the cube larger.

Several uncertainties contribute to the determination of a precise value for critical masses, including (1) detailed knowledge of fission cross sections and (2) calculation of geometric effects. The latter problem provided significant motivation for the development of the Monte Carlo method in computational physics by Nicholas Metropolis and Stanislaw Ulam. In fact, even for a homogeneous solid sphere, the exact calculation is by no means trivial. Finally, note that the calculation can also be performed by assuming a continuum approximation for the neutron transport, which reduces it to a diffusion problem. However, as the typical linear dimensions are not significantly larger than the mean free path, such an approximation is only marginally applicable.
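The geometric part of the problem is exactly the kind of calculation Monte Carlo methods were invented for: follow individual neutrons through scattering, absorption, fission and escape, and estimate q directly. The sketch below is a toy version; the radius, mean free path and per-collision probabilities are arbitrary illustrative values, not evaluated nuclear data.

```python
# Toy Monte Carlo sketch: estimate q, the probability that a neutron born in a
# bare sphere causes a fission before escaping, and check nu * q against 1.
# All numerical inputs are arbitrary illustrative values.
import math
import random

def simulate_q(radius, mean_free_path, p_fission, p_capture, trials=50_000):
    fissions = 0
    for _ in range(trials):
        # start at a random point inside the sphere (uniform in volume)
        r = radius * random.random() ** (1 / 3)
        x, y, z = r, 0.0, 0.0
        while True:
            # isotropic flight direction, exponentially distributed path length
            costh = random.uniform(-1.0, 1.0)
            phi = random.uniform(0.0, 2.0 * math.pi)
            sinth = math.sqrt(1.0 - costh * costh)
            step = random.expovariate(1.0 / mean_free_path)
            x += step * sinth * math.cos(phi)
            y += step * sinth * math.sin(phi)
            z += step * costh
            if x * x + y * y + z * z > radius * radius:
                break                      # escaped the sphere
            u = random.random()
            if u < p_fission:
                fissions += 1
                break                      # caused a fission
            if u < p_fission + p_capture:
                break                      # absorbed without fission
            # otherwise scattered: continue the random walk
    return fissions / trials

nu = 2.5                                   # prompt neutrons per fission (from the text)
q = simulate_q(radius=8.5, mean_free_path=4.0, p_fission=0.2, p_capture=0.05)
print(f"q ~ {q:.3f}, nu*q ~ {nu * q:.2f}  (critical when nu*q = 1)")
```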

Finally, note that for some idealized geometries, the critical mass might formally be infinite, and other parameters are used to describe criticality. For example, consider an infinite sheet of fissionable material. For any finite thickness, this corresponds to an infinite mass. However, criticality is only achieved once the thickness of this slab exceeds a critical value.

Criticality in nuclear weapon design

If two pieces of subcritical material are not brought together fast enough, nuclear predetonation (fizzle) can occur, whereby a very small explosion will blow the bulk of the material apart.

Until detonation is desired, a nuclear weapon must be kept subcritical. In the case of a uranium gun-type bomb, this can be achieved by keeping the fuel in a number of separate pieces, each below the critical size either because they are too small or unfavorably shaped. To produce detonation, the pieces of uranium are brought together rapidly. In Little Boy, this was achieved by firing a piece of uranium (a 'doughnut') down a gun barrel onto another piece (a 'spike'). This design is referred to as a gun-type fission weapon.

A theoretical 100% pure 239Pu weapon could also be constructed as a gun-type weapon, like the Manhattan Project's proposed Thin Man design. In reality, this is impractical because even "weapons grade" 239Pu is contaminated with a small amount of 240Pu, which has a strong propensity toward spontaneous fission. Because of this, a reasonably sized gun-type weapon would undergo a premature nuclear reaction (predetonation) before the masses of plutonium were in a position for a full-fledged explosion to occur.

Instead, the plutonium is present as a subcritical sphere (or other shape), which may or may not be hollow. Detonation is produced by exploding a shaped charge surrounding the sphere, increasing the density (and collapsing the cavity, if present) to produce a prompt critical configuration. This is known as an implosion type weapon.

Prompt criticality

The event of fission must release, on the average, more than one free neutron of the desired energy level in order to sustain a chain reaction, and each must find other nuclei and cause them to fission. Most of the neutrons released from a fission event come immediately from that event, but a fraction of them come later, when the fission products decay, which may be on the average from microseconds to minutes later. This is fortunate for atomic power generation, for without this delay "going critical" would be an immediately catastrophic event, as it is in a nuclear bomb where upwards of 80 generations of chain reaction occur in less than a microsecond, far too fast for a human, or even a machine, to react. Physicists recognize two points in the gradual increase of neutron flux which are significant: critical, where the chain reaction becomes self-sustaining thanks to the contributions of both kinds of neutron generation, and prompt critical, where the immediate "prompt" neutrons alone will sustain the reaction without need for the decay neutrons. Nuclear power plants operate between these two points of reactivity, while above the prompt critical point is the domain of nuclear weapons and some nuclear power accidents, such as the Chernobyl disaster.
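A rough sense of the time scale quoted above: assuming a fast-neutron generation time on the order of 10 nanoseconds and an effective multiplication factor of about 2 in a strongly supercritical core (both figures are assumptions for illustration), 80 generations elapse in under a microsecond while the neutron population grows by a factor of about 2^80.

```python
# Rough order-of-magnitude sketch of "80 generations in less than a microsecond".
# The 10 ns generation time and k = 2 are assumed illustrative values.

GENERATION_TIME_S = 10e-9     # assumed fast-neutron generation time (~10 ns)
K = 2.0                       # assumed multiplication factor well above prompt critical
GENERATIONS = 80

elapsed = GENERATIONS * GENERATION_TIME_S
growth = K ** GENERATIONS
print(f"{GENERATIONS} generations take ~{elapsed * 1e6:.2f} microseconds")
print(f"neutron population grows by a factor of ~{growth:.2e}")
```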

Nuclear technology

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Nuclear_technology
 
A residential smoke detector is the most familiar piece of nuclear technology for some people

Nuclear technology is technology that involves the nuclear reactions of atomic nuclei. Among the notable nuclear technologies are nuclear reactors, nuclear medicine and nuclear weapons. It is also used, among other things, in smoke detectors and gun sights.

History and scientific background

Discovery

The vast majority of common, natural phenomena on Earth involve only gravity and electromagnetism, not nuclear reactions, because atomic nuclei are generally kept apart: they carry positive electrical charges and therefore repel each other.

In 1896, Henri Becquerel was investigating phosphorescence in uranium salts when he discovered a new phenomenon which came to be called radioactivity. He, Pierre Curie and Marie Curie began investigating the phenomenon. In the process, they isolated the element radium, which is highly radioactive. They discovered that radioactive materials produce intense, penetrating rays of three distinct sorts, which they labeled alpha, beta, and gamma after the first three Greek letters. Some of these kinds of radiation could pass through ordinary matter, and all of them could be harmful in large amounts. All of the early researchers received various radiation burns, much like sunburn, and thought little of it.

The new phenomenon of radioactivity was seized upon by the manufacturers of quack medicine (as the discoveries of electricity and magnetism had been earlier), and a number of patent medicines and treatments involving radioactivity were put forward.

Gradually it was realized that the radiation produced by radioactive decay was ionizing radiation, and that even quantities too small to burn could pose a severe long-term hazard. Many of the scientists working on radioactivity died of cancer as a result of their exposure. Radioactive patent medicines mostly disappeared, but other applications of radioactive materials persisted, such as the use of radium salts to produce glowing dials on meters.

As the atom came to be better understood, the nature of radioactivity became clearer. Some larger atomic nuclei are unstable, and so decay (release matter or energy) after a random interval. The three forms of radiation that Becquerel and the Curies discovered are also more fully understood. Alpha decay is when a nucleus releases an alpha particle, which is two protons and two neutrons, equivalent to a helium nucleus. Beta decay is the release of a beta particle, a high-energy electron. Gamma decay releases gamma rays, which unlike alpha and beta radiation are not matter but electromagnetic radiation of very high frequency, and therefore energy. This type of radiation is the most dangerous and most difficult to block. All three types of radiation occur naturally in certain elements.

It has also become clear that the ultimate source of most terrestrial energy is nuclear, either through radiation from the Sun caused by stellar thermonuclear reactions or by radioactive decay of uranium within the Earth, the principal source of geothermal energy.

Nuclear fission

In natural nuclear radiation, the byproducts are very small compared with the nuclei from which they originate. Nuclear fission is the process of splitting a nucleus into roughly equal parts and releasing energy and neutrons in the process. If these neutrons are captured by other fissionable nuclei, those nuclei can fission as well, leading to a chain reaction. The average number of neutrons released per fission that go on to fission another nucleus is referred to as k. Values of k larger than 1 mean that the fission reaction is releasing more neutrons than it absorbs, and the reaction is then referred to as a self-sustaining chain reaction. A mass of fissile material large enough (and in a suitable configuration) to sustain such a chain reaction is called a critical mass.

When a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. If there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion.

When discovered on the eve of World War II, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb — a weapon which utilized fission reactions to generate far more energy than could be created with chemical explosives. The Manhattan Project, run by the United States with the help of the United Kingdom and Canada, developed multiple fission weapons which were used against Japan in 1945 at Hiroshima and Nagasaki. During the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate electricity.

In 1951, the Experimental Breeder Reactor No. 1 (EBR-1) in Arco, Idaho became the first nuclear fission reactor to generate electricity, ushering in the "Atomic Age" of more intensive human energy use.

However, if the mass is critical only when the delayed neutrons are included, then the reaction can be controlled, for example by the introduction or removal of neutron absorbers. This is what allows nuclear reactors to be built. Fast neutrons are not easily captured by nuclei; they must be slowed (slow neutrons), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. Today, this type of fission is commonly used to generate electricity.

Nuclear fusion

If nuclei are forced to collide, they can undergo nuclear fusion. This process may release or absorb energy. When the resulting nucleus is lighter than that of iron, energy is normally released; when the nucleus is heavier than that of iron, energy is generally absorbed. This process of fusion occurs in stars, which derive their energy from hydrogen and helium. They form, through stellar nucleosynthesis, the light elements (lithium to calcium) as well as some of the heavy elements (beyond iron and nickel, via the S-process). The remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis, the R-process.

Of course, these natural processes of astrophysics are not examples of nuclear "technology". Because of the very strong repulsion of nuclei, fusion is difficult to achieve in a controlled fashion. Hydrogen bombs obtain their enormous destructive power from fusion, but their energy cannot be controlled. Controlled fusion is achieved in particle accelerators; this is how many synthetic elements are produced. A fusor can also produce controlled fusion and is a useful neutron source. However, both of these devices operate at a net energy loss. Controlled, viable fusion power has proven elusive, despite the occasional hoax. Technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world.

Nuclear fusion was initially pursued only in theoretical stages during World War II, when scientists on the Manhattan Project (led by Edward Teller) investigated it as a method to build a bomb. The project abandoned fusion after concluding that it would require a fission reaction to detonate. It took until 1952 for the first full hydrogen bomb to be detonated, so-called because it used reactions between deuterium and tritium. Fusion reactions are much more energetic per unit mass of fuel than fission reactions, but starting the fusion chain reaction is much more difficult.

Nuclear weapons

A nuclear weapon is an explosive device that derives its destructive force from nuclear reactions, either fission or a combination of fission and fusion. Both reactions release vast quantities of energy from relatively small amounts of matter. Even small nuclear devices can devastate a city by blast, fire and radiation. Nuclear weapons are considered weapons of mass destruction, and their use and control has been a major aspect of international policy since their debut.

The design of a nuclear weapon is more complicated than it might seem. Such a weapon must hold one or more subcritical fissile masses stable for deployment, then induce criticality (create a critical mass) for detonation. It also is quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. The procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on Earth in suitable amounts.

One isotope of uranium, namely uranium-235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium-238, which accounts for more than 99% of the weight of natural uranium. Therefore, some method of isotope separation that exploits the small weight difference of three neutrons must be used to enrich (isolate) uranium-235.

Alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. Terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor.

Ultimately, the Manhattan Project manufactured nuclear weapons based on each of these elements. They detonated the first nuclear weapon in a test code-named "Trinity", near Alamogordo, New Mexico, on July 16, 1945. The test was conducted to ensure that the implosion method of detonation would work, which it did. A uranium bomb, Little Boy, was dropped on the Japanese city Hiroshima on August 6, 1945, followed three days later by the plutonium-based Fat Man on Nagasaki. In the wake of unprecedented devastation and casualties from a single weapon, the Japanese government soon surrendered, ending World War II.

Since these bombings, no nuclear weapons have been deployed offensively. Nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. Just over four years later, on August 29, 1949, the Soviet Union detonated its first fission weapon. The United Kingdom followed on October 2, 1952; France, on February 13, 1960; and China, on October 16, 1964. Unlike conventional weapons, blast and heat are not the only deadly components of a nuclear weapon: approximately half of those who died at Hiroshima and Nagasaki died two to five years afterward from radiation exposure.

A radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. Such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. A radiological weapon has never been deployed. While considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism.

There have been over 2,000 nuclear tests conducted since 1945. In 1963, all nuclear and many non-nuclear states signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space; the treaty permitted underground nuclear testing. France continued atmospheric testing until 1974, and China until 1980. The United States conducted its last underground test in 1992, the Soviet Union in 1990 and the United Kingdom in 1991, while France and China continued testing until 1996. After signing the Comprehensive Test Ban Treaty in 1996 (which as of 2011 had not entered into force), all of these states pledged to discontinue all nuclear testing. Non-signatories India and Pakistan last tested nuclear weapons in 1998.

Nuclear weapons are the most destructive weapons known - the archetypal weapons of mass destruction. Throughout the Cold War, the opposing powers had huge nuclear arsenals, sufficient to kill hundreds of millions of people. Generations of people grew up under the shadow of nuclear devastation, portrayed in films such as Dr. Strangelove and The Atomic Cafe.

However, the tremendous energy release in the detonation of a nuclear weapon also suggested the possibility of a new energy source.

Civilian uses

Nuclear power

Nuclear power is a type of nuclear technology involving the controlled use of nuclear fission to release energy for work including propulsion, heat, and the generation of electricity. Nuclear energy is produced by a controlled nuclear chain reaction that creates heat, which is used to boil water, produce steam, and drive a steam turbine. The turbine is used to generate electricity and/or to do mechanical work.

Nuclear power provided approximately 15.7% of the world's electricity in 2004 and is used to propel aircraft carriers, icebreakers and submarines (so far, economics and fears in some ports have prevented the use of nuclear power in transport ships). All nuclear power plants use fission; no man-made fusion reaction has resulted in a viable source of electricity.

Medical applications

The medical applications of nuclear technology are divided into diagnostics and radiation treatment.

Imaging - The largest use of ionizing radiation in medicine is in medical radiography, which makes images of the inside of the human body using X-rays; this is the largest artificial source of radiation exposure for humans. Medical and dental X-ray imagers use cobalt-60 or other X-ray sources. A number of radiopharmaceuticals, sometimes attached to organic molecules, are used as radioactive tracers or contrast agents in the human body. Positron-emitting radionuclides are used for high-resolution, short-time-span imaging in the technique known as positron emission tomography.

Radiation is also used to treat diseases in radiation therapy.

Industrial applications

Since some types of ionizing radiation can penetrate matter, they are used in a variety of measuring methods. X-rays and gamma rays are used in industrial radiography to make images of the inside of solid products, as a means of nondestructive testing and inspection. The piece to be radiographed is placed between the source and a photographic film in a cassette. After a certain exposure time, the film is developed and shows any internal defects of the material.

Gauges - Gauges use the exponential absorption law of gamma rays:

  • Level indicators: Source and detector are placed at opposite sides of a container, indicating the presence or absence of material in the horizontal radiation path. Beta or gamma sources are used, depending on the thickness and the density of the material to be measured. The method is used for containers of liquids or of grainy substances.
  • Thickness gauges: if the material is of constant density, the signal measured by the radiation detector depends on the thickness of the material (see the sketch after this list). This is useful in continuous production, for example of paper or rubber.
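The thickness-gauge idea follows directly from the exponential absorption law: transmitted intensity falls off as I = I0·exp(−μ·x), so a calibrated attenuation coefficient μ lets a measured count rate be inverted to a thickness. The sketch below is illustrative, with made-up values for μ and the intensities.

```python
# Sketch of a radiometric thickness gauge based on exponential attenuation:
#   I = I0 * exp(-mu * x)   =>   x = ln(I0 / I) / mu
# The attenuation coefficient and intensities below are made-up numbers.
from math import exp, log

MU = 0.12          # assumed linear attenuation coefficient, 1/mm, for this material

def transmitted(i0: float, thickness_mm: float) -> float:
    return i0 * exp(-MU * thickness_mm)

def thickness_from_counts(i0: float, i_measured: float) -> float:
    return log(i0 / i_measured) / MU

i0 = 10_000.0                          # counts/s with no material in the beam
i = transmitted(i0, 8.0)               # simulate an 8 mm sheet
print(round(i), "counts/s ->", round(thickness_from_counts(i0, i), 2), "mm")
```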

Electrostatic control - To avoid the build-up of static electricity in production of paper, plastics, synthetic textiles, etc., a ribbon-shaped source of the alpha emitter 241Am can be placed close to the material at the end of the production line. The source ionizes the air to remove electric charges on the material.

Radioactive tracers - Since radioactive isotopes behave chemically much like the inactive element, the behavior of a given chemical substance can be followed by tracing its radioactivity. Examples:

  • Adding a gamma tracer to a gas or liquid in a closed system makes it possible to find a hole in a tube.
  • Adding a tracer to the surface of a motor component makes it possible to measure wear by measuring the activity of the lubricating oil (a numeric sketch of this idea follows the list).
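
Here is a minimal sketch of the wear-measurement idea in the second bullet, assuming the activated surface layer has a known specific activity; all numbers are illustrative.

```python
# Minimal sketch (assumed values): estimating wear from tracer activity
# carried into the lubricating oil by material worn off the labeled surface.

def worn_mass_mg(oil_activity_bq: float, specific_activity_bq_per_mg: float) -> float:
    """Worn mass = activity found in the oil / specific activity of the labeled surface."""
    return oil_activity_bq / specific_activity_bq_per_mg

if __name__ == "__main__":
    oil_activity = 450.0        # Bq measured in an oil sample (assumed)
    specific_activity = 1500.0  # Bq per mg of labeled surface material (assumed)
    print(f"Estimated wear: {worn_mass_mg(oil_activity, specific_activity):.3f} mg")
```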

Oil and gas exploration - Nuclear well logging is used to help predict the commercial viability of new or existing wells. The technique involves lowering a neutron or gamma-ray source and a radiation detector into a borehole to determine properties of the surrounding rock, such as porosity and lithology.

Road construction - Nuclear moisture/density gauges are used to determine the density of soils, asphalt, and concrete. Typically a cesium-137 source is used.

Commercial applications

  • Radioluminescence
  • Tritium illumination: Tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. Some runway markers and building exit signs use the same technology to remain illuminated during blackouts.
  • Betavoltaics
  • Smoke detector: An ionization smoke detector includes a tiny mass of radioactive americium-241, which is a source of alpha radiation. Two ionization chambers are placed next to each other, each containing a small source of 241Am that gives rise to a small constant current. One chamber is sealed and serves as a reference; the other is open to ambient air and has a gridded electrode. When smoke enters the open chamber, the current is disrupted as smoke particles attach to the charged ions and restore them to a neutral electrical state, reducing the current in the open chamber. When that current drops below a certain threshold, the alarm is triggered (a minimal sketch of this logic follows the list).
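
The comparison logic described for the smoke detector can be sketched as follows; the currents and the trigger threshold are assumed example values, not specifications of any real detector.

```python
# Minimal sketch (assumed values): the alarm logic of an ionization smoke
# detector, comparing the open chamber's current with the sealed reference.

def smoke_alarm(open_chamber_na: float, reference_na: float, ratio_threshold: float = 0.8) -> bool:
    """Trigger when the open-chamber current falls below a fraction of the reference."""
    return open_chamber_na < ratio_threshold * reference_na

if __name__ == "__main__":
    reference = 10.0                  # nA in the sealed comparison chamber (assumed)
    for measured in (9.8, 8.5, 6.0):  # nA in the open chamber as smoke enters (assumed)
        print(f"open chamber {measured:4.1f} nA -> alarm: {smoke_alarm(measured, reference)}")
```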

Food processing and agriculture

In biology and agriculture, radiation is used to induce mutations to produce new or improved species, as in atomic gardening. Another use, in insect control, is the sterile insect technique, in which male insects are sterilized by radiation and released so that they have no offspring, reducing the population.

In industrial and food applications, radiation is used for sterilization of tools and equipment. An advantage is that the object may be sealed in plastic before sterilization. An emerging use in food production is the sterilization of food using food irradiation.

The Radura logo is used to show that a food has been treated with ionizing radiation.

Food irradiation is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. The radiation sources used include radioisotope gamma-ray sources, X-ray generators, and electron accelerators. Further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re-hydration. Irradiation is the more general term for the deliberate exposure of materials to radiation to achieve a technical goal (in this context, 'ionizing radiation' is implied). As such, it is also used on non-food items such as medical hardware, plastics, tubes for gas pipelines, hoses for floor heating, shrink-foils for food packaging, automobile parts, wires and cables (insulation), tires, and even gemstones. Compared with the amount of food irradiated, the volume of these everyday applications is huge, but it goes unnoticed by the consumer.

The fundamental effect of processing food with ionizing radiation is damage to DNA, the basic genetic information for life. Microorganisms can no longer proliferate and continue their malignant or pathogenic activities, spoilage-causing micro-organisms can no longer do their work, insects do not survive or become incapable of procreation, and plants cannot continue their natural ripening or aging process. All of these effects are beneficial to the consumer and the food industry alike.

The amount of energy imparted in effective food irradiation is low compared to cooking the same food; even at a typical dose of 10 kGy, most food, which with regard to warming is physically equivalent to water, would warm by only about 2.5 °C (4.5 °F).
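
That temperature figure follows directly from the definition of the gray (1 Gy = 1 J of absorbed energy per kg) and the specific heat of water; the short Python sketch below reproduces the arithmetic and lands close to the value quoted above.

```python
# Minimal sketch: temperature rise of water-like food for a given absorbed
# dose. 1 kGy deposits 1 kJ per kg; water's specific heat is about 4.18 kJ/(kg*K).

def temperature_rise_c(dose_kgy: float, specific_heat_kj_per_kg_k: float = 4.18) -> float:
    """Delta T = energy absorbed per kg / specific heat."""
    return dose_kgy / specific_heat_kj_per_kg_k

if __name__ == "__main__":
    dose = 10.0  # kGy, the typical dose mentioned in the text
    print(f"A {dose:.0f} kGy dose warms water-like food by about {temperature_rise_c(dose):.1f} C")
```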

What is special about processing food with ionizing radiation is that the energy deposited per atomic transition is very high; it can cleave molecules and induce ionization (hence the name), which cannot be achieved by mere heating. This is the reason for new beneficial effects, but at the same time for new concerns. The treatment of solid food with ionizing radiation can provide an effect similar to heat pasteurization of liquids such as milk. However, the use of the term 'cold pasteurization' to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar.

Detractors of food irradiation have concerns about the health hazards of induced radioactivity. A report for the industry advocacy group American Council on Science and Health entitled "Irradiated Foods" states: "The types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. Food undergoing irradiation does not become any more radioactive than luggage passing through an airport X-ray scanner or teeth that have been X-rayed."

Food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500,000 metric tons (490,000 long tons; 550,000 short tons) annually worldwide.

Food irradiation is essentially a non-nuclear technology; it relies on ionizing radiation, which may be generated by electron accelerators (with conversion into bremsstrahlung X-rays) but may also come from gamma rays produced by nuclear decay. There is a worldwide industry for processing by ionizing radiation, the majority of it, both by number of facilities and by processing power, using accelerators. Food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc.

Accidents

Nuclear accidents, because of the powerful forces involved, are often very dangerous. Historically, the first incidents involved fatal radiation exposure. Marie Curie died of aplastic anemia resulting from her high levels of exposure. Two scientists, the American Harry Daghlian and the Canadian Louis Slotin, died after mishandling the same mass of plutonium. Unlike conventional weapons, intense light, heat, and explosive force are not the only deadly components of a nuclear weapon: approximately half of those who died at Hiroshima and Nagasaki died two to five years afterward from radiation exposure.

Civilian nuclear and radiological accidents primarily involve nuclear power plants. Most common are nuclear leaks that expose workers to hazardous material. A nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. The most significant meltdowns occurred at Three Mile Island in Pennsylvania and Chernobyl in the Soviet Ukraine. The earthquake and tsunami of March 11, 2011, caused serious damage to three nuclear reactors and a spent-fuel storage pond at the Fukushima Daiichi nuclear power plant in Japan. Military reactors that experienced similar accidents were Windscale in the United Kingdom and SL-1 in the United States.

Military accidents usually involve the loss or unexpected detonation of nuclear weapons. The Castle Bravo test in 1954 produced a larger yield than expected, contaminated nearby islands and a Japanese fishing boat (with one fatality), and raised concerns about contaminated fish in Japan. From the 1950s through the 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. The last twenty years have seen a marked decline in such accidents.

Examples of environmental benefits

Proponents of nuclear energy note that nuclear-generated electricity avoids roughly 470 million metric tons of carbon dioxide emissions annually that would otherwise come from fossil fuels. Additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by large-scale nuclear energy production facilities or is repurposed/recycled for other energy uses. Proponents also point to the opportunity cost of relying on other forms of electricity. For example, the Environmental Protection Agency estimates that coal kills 30,000 people a year as a result of its environmental impact, while 60 people died in the Chernobyl disaster. A real-world example of impact cited by proponents of nuclear energy is the 650,000-ton increase in carbon emissions in the two months following the closure of the Vermont Yankee nuclear plant.
