Saturday, September 2, 2023

Nuclear weapons testing

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Nuclear_weapons_testing
The mushroom cloud from the Castle Bravo thermonuclear weapon test in 1954, the largest nuclear weapons test ever conducted by the United States

Nuclear weapons tests are experiments carried out to determine the performance, yield, and effects of nuclear weapons. Testing nuclear weapons offers practical information about how the weapons function, how detonations are affected by different conditions, and how personnel, structures, and equipment are affected when subjected to nuclear explosions. However, nuclear testing has often been used as an indicator of scientific and military strength. Many tests have been overtly political in their intention; most nuclear weapons states publicly declared their nuclear status through a nuclear test.

The first nuclear device was detonated as a test by the United States at the Trinity site in New Mexico on July 16, 1945, with a yield approximately equivalent to 20 kilotons of TNT. The first test of an engineered thermonuclear device, codenamed "Ivy Mike", was conducted at Enewetak Atoll in the Marshall Islands on November 1, 1952 (local date), also by the United States. The largest nuclear weapon ever tested was the Soviet Union's "Tsar Bomba", detonated at Novaya Zemlya on October 30, 1961, with an estimated yield of 50–58 megatons.

In 1963, three (UK, US, Soviet Union) of the then four nuclear states and many non-nuclear states signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground nuclear testing. France continued atmospheric testing until 1974, and China continued until 1980. Neither has signed the treaty.

Underground tests conducted by the Soviet Union continued until 1990, the United Kingdom until 1991, the United States until 1992, and both China and France until 1996. In signing the Comprehensive Nuclear-Test-Ban Treaty in 1996, these countries pledged to discontinue all nuclear testing; the treaty has not yet entered into force because of its failure to be ratified by eight countries. Non-signatories India and Pakistan last tested nuclear weapons in 1998. North Korea conducted nuclear tests in 2006, 2009, 2013, 2016, and 2017. The most recent confirmed nuclear test occurred in September 2017 in North Korea.

Types

Four major types of nuclear testing: 1. atmospheric, 2. underground, 3. exoatmospheric, and 4. underwater

Nuclear weapons tests have historically been divided into four categories reflecting the medium or location of the test.

  • Atmospheric testing designates explosions that take place in the atmosphere. Generally, these have occurred as devices detonated on towers, balloons, barges, or islands, or dropped from airplanes, as well as devices buried only deep enough to intentionally create a surface-breaking crater. The United States, the Soviet Union, and China have all conducted tests involving explosions of missile-launched bombs (see List of nuclear weapons tests#Tests of live warheads on rockets). Nuclear explosions close enough to the ground to draw dirt and debris into their mushroom cloud can generate large amounts of nuclear fallout due to irradiation of the debris. This definition of atmospheric is used in the Limited Test Ban Treaty, which banned this class of testing along with exoatmospheric and underwater.
  • Underground testing refers to nuclear tests conducted under the surface of the earth, at varying depths. Underground nuclear testing made up the majority of nuclear tests by the United States and the Soviet Union during the Cold War; other forms of nuclear testing were banned by the Limited Test Ban Treaty in 1963. True underground tests are intended to be fully contained and to emit a negligible amount of fallout. However, these tests occasionally "vent" to the surface, releasing anywhere from negligible to considerable amounts of radioactive debris. Underground testing, almost by definition, causes seismic activity of a magnitude that depends on the yield of the nuclear device and the composition of the medium in which it is detonated, and generally creates a subsidence crater. In 1976, the United States and the USSR agreed to limit the maximum yield of underground tests to 150 kt with the Threshold Test Ban Treaty.
    Underground testing also falls into two physical categories: tunnel tests in generally horizontal tunnel drifts, and shaft tests in vertically drilled holes.
  • Exoatmospheric testing refers to nuclear tests conducted above the atmosphere. The test devices are lifted on rockets. These high-altitude nuclear explosions can generate a nuclear electromagnetic pulse (NEMP) when they occur in the ionosphere, and charged particles resulting from the blast can cross hemispheres following geomagnetic lines of force to create an auroral display.
  • Underwater testing involves nuclear devices being detonated underwater, usually moored to a ship or a barge (which is subsequently destroyed by the explosion). Tests of this nature have usually been conducted to evaluate the effects of nuclear weapons against naval vessels (such as in Operation Crossroads), or to evaluate potential sea-based nuclear weapons (such as nuclear torpedoes or depth charges). Underwater tests close to the surface can disperse large amounts of radioactive particles in water and steam, contaminating nearby ships or structures, though they generally do not create fallout other than very locally to the explosion.
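The seismic signature of underground tests noted above can be illustrated with an empirical magnitude-yield relation. The coefficients below (A = 4.45, B = 0.75) are one commonly cited fit for well-coupled tests in hard rock, not values from this article; geology and coupling change them substantially, so treat this purely as a sketch:

```python
import math

# Illustrative body-wave magnitude vs. yield for a well-coupled
# underground test: mb ~ A + B * log10(yield_kt). The coefficients are
# an assumed example fit; real values depend on test-site geology.
A, B = 4.45, 0.75

def body_wave_magnitude(yield_kt: float) -> float:
    """Estimate body-wave magnitude mb from yield in kilotons."""
    return A + B * math.log10(yield_kt)

def yield_from_magnitude(mb: float) -> float:
    """Invert the relation: estimated yield in kilotons for a given mb."""
    return 10 ** ((mb - A) / B)
```

Under these assumed coefficients, a shot at the 150 kt Threshold Test Ban limit would register around mb 6.1, which is why treaty monitoring leans heavily on seismology.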

Salvo tests

Another way to classify nuclear tests is by the number of explosions that constitute the test. The treaty definition of a salvo test is:

In conformity with treaties between the United States and the Soviet Union, a salvo is defined, for multiple explosions for peaceful purposes, as two or more separate explosions where a period of time between successive individual explosions does not exceed 5 seconds and where the burial points of all explosive devices can be connected by segments of straight lines, each of them connecting two burial points, and the total length does not exceed 40 kilometers. For nuclear weapon tests, a salvo is defined as two or more underground nuclear explosions conducted at a test site within an area delineated by a circle having a diameter of two kilometers and conducted within a total period of time of 0.1 seconds.

The USSR exploded up to eight devices in a single salvo test; Pakistan's second and last official test detonated four devices. Almost all lists in the literature are lists of tests, whereas the lists in Wikipedia are lists of explosions (for example, Operation Cresset has separate items for Cremino and Caerphilly, which together constitute a single test).
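The weapon-test salvo definition above is mechanical enough to check in code. The sketch below encodes the two criteria (all shots within 0.1 s, burial points within a 2 km circle); the function name and event format are my own, and maximum pairwise separation is used as a simple stand-in for the enclosing-circle test, so this is illustrative rather than the legal definition:

```python
import math
from itertools import combinations

def is_weapon_salvo(events, max_span_s=0.1, circle_diameter_km=2.0):
    """events: list of (time_s, x_km, y_km) burial points.

    Returns True if the explosions meet the treaty-style weapon-test
    salvo criteria: total time span <= 0.1 s and all burial points
    close enough to fit a 2 km circle (approximated here by the
    maximum pairwise separation)."""
    times = [t for t, _, _ in events]
    if max(times) - min(times) > max_span_s:
        return False
    for (_, x1, y1), (_, x2, y2) in combinations(events, 2):
        if math.hypot(x2 - x1, y2 - y1) > circle_diameter_km:
            return False
    return True
```

For example, two shots 0.05 s and 700 m apart would count as one salvo, while shots half a second or five kilometres apart would be separate tests.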

Purpose

Separately from these designations, nuclear tests are also often categorized by the purpose of the test itself.

  • Weapons-related tests are designed to garner information about how (and if) the weapons themselves work. Some serve to develop and validate a specific weapon type. Others test experimental concepts or are physics experiments meant to gain fundamental knowledge of the processes and materials involved in nuclear detonations.
  • Weapons effects tests are designed to gain information about the effects of the weapons on structures, equipment, organisms, and the environment. They are mainly used to assess and improve survivability to nuclear explosions in civilian and military contexts, tailor weapons to their targets, and develop the tactics of nuclear warfare.
  • Safety experiments are designed to study the behavior of weapons in simulated accident scenarios. In particular, they are used to verify that a (significant) nuclear detonation cannot happen by accident. They include one-point safety tests and simulations of storage and transportation accidents.
  • Nuclear test detection experiments are designed to improve the capabilities to detect, locate, and identify nuclear detonations, in particular, to monitor compliance with test-ban treaties. In the United States these tests are associated with Operation Vela Uniform before the Comprehensive Test Ban Treaty stopped all nuclear testing among signatories.
  • Peaceful nuclear explosions were conducted to investigate non-military applications of nuclear explosives. In the United States, these were performed under the umbrella name of Operation Plowshare.

Aside from these technical considerations, tests have been conducted for political and training purposes, and can often serve multiple purposes.

Alternatives to full-scale testing

Subcritical experiment at the Nevada National Security Site

Hydronuclear tests study nuclear materials under the conditions of explosive shock compression. They can create subcritical conditions, or supercritical conditions with yields ranging from negligible all the way up to a substantial fraction of full weapon yield.

Critical mass experiments determine the quantity of fissile material required for criticality with a variety of fissile material compositions, densities, shapes, and reflectors. They can be subcritical or supercritical, in which case significant radiation fluxes can be produced. This type of test has resulted in several criticality accidents.

Subcritical (or cold) tests are any type of tests involving nuclear materials and possibly high explosives (like those mentioned above) that purposely result in no yield. The name refers to the lack of creation of a critical mass of fissile material. They are the only type of tests allowed under the interpretation of the Comprehensive Nuclear-Test-Ban Treaty tacitly agreed to by the major atomic powers. Subcritical tests continue to be performed by the United States, Russia, and the People's Republic of China, at least.

Subcritical tests executed by the United States include:

Subcritical Tests
Name Date Time (UT) Location Elevation + Height Notes
A series of 50 tests January 1, 1960 Los Alamos National Lab Test Area 49 35.82289°N 106.30216°W 2,183 metres (7,162 ft) and 20 metres (66 ft) Series of 50 tests during US/USSR joint nuclear test ban.
Odyssey NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft)
Trumpet NTS Area U1a-102D 37.01099°N 116.05848°W 1,222 metres (4,009 ft) and 190 metres (620 ft)
Kismet March 1, 1995 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 293 metres (961 ft) Kismet was a proof of concept for modern hydronuclear tests; it did not contain any SNM (Special Nuclear Material—plutonium or uranium).
Rebound July 2, 1997 10:—:— NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 293 metres (961 ft) Provided information on the behavior of new plutonium alloys compressed by high-pressure shock waves; same as Stagecoach but for the age of the alloys.
Holog September 18, 1997 NTS Area U1a.101A 37.01036°N 116.05888°W 1,222 metres (4,009 ft) and 290 metres (950 ft) Holog and Clarinet may have switched locations.
Stagecoach March 25, 1998 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 290 metres (950 ft) Provided information on the behavior of aged (up to 40 years) plutonium alloys compressed by high-pressure shock waves.
Bagpipe September 26, 1998 NTS Area U1a.101B 37.01021°N 116.05886°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Cimarron December 11, 1998 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 290 metres (950 ft) Plutonium surface ejecta studies.
Clarinet February 9, 1999 NTS Area U1a.101C 37.01003°N 116.05898°W 1,222 metres (4,009 ft) and 290 metres (950 ft) Holog and Clarinet may have switched places on the map.
Oboe September 30, 1999 NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Oboe 2 November 9, 1999 NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Oboe 3 February 3, 2000 NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Thoroughbred March 22, 2000 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 290 metres (950 ft) Plutonium surface ejecta studies, followup to Cimarron.
Oboe 4 April 6, 2000 NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Oboe 5 August 18, 2000 NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Oboe 6 December 14, 2000 NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Oboe 8 September 26, 2001 NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Oboe 7 December 13, 2001 NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Oboe 9 June 7, 2002 21:46:— NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Mario August 29, 2002 19:00:— NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 290 metres (950 ft) Plutonium surface studies (optical analysis of spall). Used wrought plutonium from Rocky Flats.
Rocco September 26, 2002 19:00:— NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 290 metres (950 ft) Plutonium surface studies (optical analysis of spall), followup to Mario. Used cast plutonium from Los Alamos.
Piano September 19, 2003 20:44:— NTS Area U1a.102C 37.01095°N 116.05877°W 1,222 metres (4,009 ft) and 290 metres (950 ft)
Armando May 25, 2004 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 290 metres (950 ft) Plutonium spall measurements using x-ray analysis.
Step Wedge April 1, 2005 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft) April–May 2005, a series of mini-hydronuclear experiments interpreting Armando results.
Unicorn August 31, 2006 01:00:— NTS Area U6c 36.98663°N 116.0439°W 1,222 metres (4,009 ft) and 190 metres (620 ft) "...confirm nuclear performance of the W88 warhead with a newly-manufactured pit." Early pit studies.
Thermos January 1, 2007 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft) February 6 – May 3, 2007, 12 mini-hydronuclear experiments in thermos-sized flasks.
Bacchus September 16, 2010 NTS Area U1a.05? 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft)
Barolo A December 1, 2010 NTS Area U1a.05? 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft)
Barolo B February 2, 2011 NTS Area U1a.05? 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft)
Castor September 1, 2012 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft) Not even a subcritical test; contained no plutonium. A dress rehearsal for Pollux.
Pollux December 5, 2012 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft) A subcritical test with a scaled-down warhead mockup.
Leda June 15, 2014 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft) Like Castor, the plutonium was replaced by a surrogate; a dress rehearsal for the later Lydia. The target was a weapons-pit mock-up.
Lydia ??-??-2015 NTS Area U1a 37.01139°N 116.05983°W 1,222 metres (4,009 ft) and 190 metres (620 ft) Expected to be a plutonium subcritical test with a scaled-down warhead mockup.
Vega December 13, 2017 Nevada test site Plutonium subcritical test with a scaled-down warhead mockup.
Ediza February 13, 2019 NTS Area U1a 37.01139°N 116.05983°W Plutonium subcritical test designed to confirm supercomputer simulations for stockpile safety.
Nightshade A November 2020 Nevada test site Plutonium subcritical test designed to measure ejecta emission.

History

The Phoenix of Hiroshima (foreground), shown in Hong Kong Harbor in 1967, was involved in several famous anti-nuclear protest voyages against nuclear testing in the Pacific.
The 6,900-square-mile (18,000 km²) expanse of the Semipalatinsk Test Site (indicated in red), attached to Kurchatov (along the Irtysh river). The site comprised an area the size of Wales.

The first atomic weapons test was conducted near Alamogordo, New Mexico, on July 16, 1945, during the Manhattan Project, and given the codename "Trinity". The test was intended to confirm that the implosion-type nuclear weapon design was feasible and to give an idea of the actual size and effects of a nuclear explosion before such weapons were used in combat against Japan. While the test gave a good approximation of many of the explosion's effects, it did not give an appreciable understanding of nuclear fallout, which was not well understood by the project scientists until well after the atomic bombings of Hiroshima and Nagasaki.

The United States conducted six atomic tests before the Soviet Union developed its first atomic bomb (RDS-1) and tested it on August 29, 1949. Neither country had many atomic weapons to spare at first, so testing was relatively infrequent (when the U.S. used two weapons for Operation Crossroads in 1946, it detonated over 20% of its current arsenal). However, by the 1950s the United States had established a dedicated test site on its own territory (Nevada Test Site) and was also using a site in the Marshall Islands (Pacific Proving Grounds) for extensive atomic and nuclear testing.

The early tests were used primarily to discern the military effects of atomic weapons (Crossroads had involved the effect of atomic weapons on a navy, and how they functioned underwater) and to test new weapon designs. During the 1950s, these included new hydrogen bomb designs, which were tested in the Pacific, and also new and improved fission weapon designs. The Soviet Union also began testing on a limited scale, primarily in Kazakhstan. During the later phases of the Cold War, though, both countries developed accelerated testing programs, testing many hundreds of bombs over the last half of the 20th century.

In 1954 the Castle Bravo fallout plume spread dangerous levels of radiation over an area over 100 miles (160 km) long, including inhabited islands.

Atomic and nuclear tests can involve many hazards. Some of these were illustrated in the U.S. Castle Bravo test in 1954. The weapon design tested was a new form of hydrogen bomb, and the scientists underestimated how vigorously some of the weapon materials would react. As a result, the explosion—with a yield of 15 Mt—was over twice what was predicted. Aside from this problem, the weapon also generated a large amount of radioactive nuclear fallout, more than had been anticipated, and a change in the weather pattern caused the fallout to spread in a direction not cleared in advance. The fallout plume spread high levels of radiation for over 100 miles (160 km), contaminating a number of populated islands in nearby atoll formations. Though they were soon evacuated, many of the islands' inhabitants suffered from radiation burns and later from other effects such as increased cancer rate and birth defects, as did the crew of the Japanese fishing boat Daigo Fukuryū Maru. One crewman died from radiation sickness after returning to port, and it was feared that the radioactive fish they had been carrying had made it into the Japanese food supply.

Because of concerns about worldwide fallout levels, the Partial Test Ban Treaty was signed in 1963. Above are the per capita thyroid doses (in rads) in the continental United States resulting from all exposure routes from all atmospheric nuclear tests conducted at the Nevada Test Site from 1951 to 1962.

Castle Bravo was the worst U.S. nuclear accident, but many of its component problems—unpredictably large yields, changing weather patterns, unexpected fallout contamination of populations and the food supply—occurred during other atmospheric nuclear weapons tests by other countries as well. Concerns over worldwide fallout rates eventually led to the Partial Test Ban Treaty in 1963, which limited signatories to underground testing. Not all countries stopped atmospheric testing, but because the United States and the Soviet Union were responsible for roughly 86% of all nuclear tests, their compliance cut the overall level substantially. France continued atmospheric testing until 1974, and China until 1980.

A tacit moratorium on testing was in effect from 1958 to 1961 and ended with a series of Soviet tests in late 1961, including the Tsar Bomba, the largest nuclear weapon ever tested. The United States responded in 1962 with Operation Dominic, involving dozens of tests, including the explosion of a missile launched from a submarine.

Almost all new nuclear powers have announced their possession of nuclear weapons with a nuclear test. The only acknowledged nuclear power that claims never to have conducted a test is South Africa (although see the Vela incident), which has since dismantled all of its weapons. Israel is widely thought to possess a sizable nuclear arsenal, though it has never tested, unless it was involved in the Vela incident. Experts disagree on whether states can maintain reliable nuclear arsenals—especially ones using advanced warhead designs, such as hydrogen bombs and miniaturized weapons—without testing, though all agree that developing significant nuclear innovations without testing is very unlikely. One other approach is to use supercomputers to conduct "virtual" tests, but such codes need to be validated against test data.

There have been many attempts to limit the number and size of nuclear tests; the most far-reaching is the Comprehensive Test Ban Treaty of 1996, which has not, as of 2013, been ratified by eight of the "Annex 2 countries" required for it to take effect, including the United States. Nuclear testing has since become a controversial issue in the United States, with a number of politicians saying that future testing might be necessary to maintain the aging warheads from the Cold War. Because nuclear testing is seen as furthering nuclear arms development, many are opposed to future testing as an acceleration of the arms race.

From 1945 to 1992, 520 atmospheric nuclear explosions (including eight underwater) were conducted, with a total yield of 545 megatons; the peak occurred in 1961–1962, when the United States and the Soviet Union detonated 340 megatons in the atmosphere. An estimated 1,352 underground nuclear tests were conducted between 1957 and 1992, with a total yield of 90 Mt.

Yield

The yields of atomic and thermonuclear bombs are typically expressed in different units. Thermonuclear bombs can be hundreds or thousands of times more powerful than their atomic counterparts, so their yields are usually expressed in megatons, the equivalent of 1,000,000 tons of TNT. In contrast, atomic bombs' yields are typically measured in kilotons, the equivalent of 1,000 tons of TNT.

In the US context, it was decided during the Manhattan Project that yield measured in tons of TNT equivalent could be imprecise. This stems from the range of experimental values for the energy content of TNT, from 900 to 1,100 calories per gram (3,800 to 4,600 J/g). There is also the question of which ton to use, as short tons, long tons, and metric tonnes all have different values. It was therefore decided that one kiloton would be defined as 1.0×10^12 calories (approximately 4.2×10^12 J).
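The convention above removes the ambiguity; a minimal sketch of the conversions, assuming the defined 10^12-calorie kiloton and a thermochemical calorie of 4.184 J (the function names are my own):

```python
CAL_TO_J = 4.184        # thermochemical calorie, in joules
KILOTON_CAL = 1.0e12    # adopted definition: 1 kt = 10**12 calories

def kt_to_joules(kt: float) -> float:
    """Convert yield in kilotons of TNT equivalent to joules."""
    return kt * KILOTON_CAL * CAL_TO_J

def mt_to_joules(mt: float) -> float:
    """Convert yield in megatons (1 Mt = 1,000 kt) to joules."""
    return kt_to_joules(mt * 1000.0)

def naive_kt_energy_range(kt: float, cal_per_g=(900.0, 1100.0)):
    """Energy range (in joules) if one instead used literal tons of TNT:
    1 kt = 10**9 g, with the experimental spread of 900-1,100 cal/g."""
    grams = kt * 1.0e9
    return tuple(grams * c * CAL_TO_J for c in cal_per_g)
```

Note that the defined value (about 4.184×10^12 J per kiloton) sits inside the roughly 3.8-4.6×10^12 J spread implied by the experimental TNT values, which is exactly why a fixed convention was chosen.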

Nuclear testing by country

Over 2,000 nuclear tests have been conducted at over a dozen sites around the world. Red: Russia/Soviet Union; blue: France; light blue: United States; violet: Britain; yellow: China; orange: India; brown: Pakistan; green: North Korea; light green: territories exposed to nuclear bombs. The black dot indicates the location of the Vela incident.
"Baker Shot", part of Operation Crossroads, a nuclear test by the United States at Bikini Atoll in 1946

The nuclear powers have conducted more than 2,000 nuclear test explosions (numbers are approximate, as some test results have been disputed).

There have also been at least three alleged but unacknowledged nuclear explosions (see the list of alleged nuclear tests), including the Vela incident.

From the first nuclear test in 1945 until tests by Pakistan in 1998, there was never a period of more than 22 months with no nuclear testing. June 1998 to October 2006 was the longest period since 1945 with no acknowledged nuclear tests.

A summary table of all nuclear testing since 1945 can be found at Worldwide nuclear testing counts and summary.

Treaties against testing

There are many treaties against nuclear explosions, notably the Partial Nuclear Test Ban Treaty and the Comprehensive Nuclear-Test-Ban Treaty. These treaties were proposed in response to growing international concern about environmental damage, among other risks; the effects of nuclear testing on exposed populations also contributed to their formation.

The Partial Nuclear Test Ban Treaty makes it illegal to detonate any nuclear explosion anywhere except underground, in order to reduce atmospheric fallout. Most countries have signed and ratified the treaty, which went into effect in October 1963. Of the nuclear states, France, China, and North Korea have never signed it.

The 1996 Comprehensive Nuclear-Test-Ban Treaty (CTBT) bans all nuclear explosions everywhere, including underground. For that purpose, the Preparatory Commission of the Comprehensive Nuclear-Test-Ban Treaty Organization is building an international monitoring system with 337 facilities located all over the globe. 85% of these facilities are already operational. As of May 2012, the CTBT has been signed by 183 States, of which 157 have also ratified. However, for the Treaty to enter into force it needs to be ratified by 44 specific nuclear technology-holder countries. These "Annex 2 States" participated in the negotiations on the CTBT between 1994 and 1996 and possessed nuclear power or research reactors at that time. The ratification of eight Annex 2 states is still missing: China, Egypt, Iran, Israel and the United States have signed but not ratified the Treaty; India, North Korea and Pakistan have not signed it.

The following is a list of the treaties applicable to nuclear testing:

Name Agreement date In force date In effect today? Notes
Unilateral USSR ban March 31, 1958 March 31, 1958 no USSR unilaterally stops testing provided the West does as well.
Bilateral testing ban August 2, 1958 October 31, 1958 no USA agrees; ban begins on 31 October 1958, 3 November 1958 for the Soviets, and lasts until abrogated by a USSR test on 1 September 1961.
Antarctic Treaty System December 1, 1959 June 23, 1961 yes Bans testing of all kinds in Antarctica.
Partial Nuclear Test Ban Treaty (PTBT) August 5, 1963 October 10, 1963 yes Ban on all but underground testing.
Outer Space Treaty January 27, 1967 October 10, 1967 yes Bans testing on the moon and other celestial bodies.
Treaty of Tlatelolco February 14, 1967 April 22, 1968 yes Bans testing in South America and the Caribbean Sea Islands.
Nuclear Non-proliferation Treaty January 1, 1968 March 5, 1970 yes Bans the proliferation of nuclear technology to non-nuclear nations.
Seabed Arms Control Treaty February 11, 1971 May 18, 1972 yes Bans emplacement of nuclear weapons on the ocean floor outside territorial waters.
Strategic Arms Limitation Treaty (SALT I) January 1, 1972 no A five-year ban on installing launchers.
Anti-Ballistic Missile Treaty May 26, 1972 August 3, 1972 no Restricts ABM development; additional protocol added in 1974; abrogated by the US in 2002.
Agreement on the Prevention of Nuclear War June 22, 1973 June 22, 1973 yes Promises to make all efforts to promote security and peace.
Threshold Test Ban Treaty July 1, 1974 December 11, 1990 yes Prohibits underground tests with yields above 150 kt.
Peaceful Nuclear Explosions Treaty (PNET) January 1, 1976 December 11, 1990 yes Prohibits explosions for peaceful purposes above 150 kt individually, or 1,500 kt in aggregate.
Moon Treaty January 1, 1979 January 1, 1984 no Bans use and emplacement of nuclear weapons on the moon and other celestial bodies.
Strategic Arms Limitation Treaty (SALT II) June 18, 1979 no Limits strategic arms. Kept but not ratified by the US; abrogated in 1986.
Treaty of Rarotonga August 6, 1985 ? Bans nuclear weapons in the South Pacific Ocean and islands. US never ratified.
Intermediate Range Nuclear Forces Treaty (INF) December 8, 1987 June 1, 1988 no Eliminated Intermediate Range Ballistic Missiles (IRBMs). Implemented by 1 June 1991. Both sides alleged the other was in violation of the treaty. Expired following U.S. withdrawal, 2 August 2019.
Treaty on Conventional Armed Forces in Europe November 19, 1990 July 17, 1992 yes Bans categories of weapons, including conventional, from Europe. Russia notified signatories of intent to suspend, 14 July 2007.
Strategic Arms Reduction Treaty I (START I) July 31, 1991 December 5, 1994 no 35-40% reduction in ICBMs with verification. Treaty expired 5 December 2009, renewed (see below).
Treaty on Open Skies March 24, 1992 January 1, 2002 yes Allows for unencumbered surveillance over all signatories.
US unilateral testing moratorium October 2, 1992 October 2, 1992 no George H. W. Bush declares unilateral ban on nuclear testing. Extended several times, not yet abrogated.
Strategic Arms Reduction Treaty (START II) January 3, 1993 January 1, 2002 no Deep reductions in ICBMs. Abrogated by Russia in 2002 in retaliation for US abrogation of the ABM Treaty.
Southeast Asian Nuclear-Weapon-Free Zone Treaty (Treaty of Bangkok) December 15, 1995 March 28, 1997 yes Bans nuclear weapons from southeast Asia.
African Nuclear Weapon Free Zone Treaty (Pelindaba Treaty) January 1, 1996 July 16, 2009 yes Bans nuclear weapons in Africa.
Comprehensive Nuclear Test Ban Treaty (CTBT) September 10, 1996 yes (effectively) Bans all nuclear testing, peaceful and otherwise. Strong detection and verification mechanism (CTBTO). The US has signed and adheres to the treaty, though it has not ratified it.
Treaty on Strategic Offensive Reductions (SORT, Treaty of Moscow) May 24, 2002 June 1, 2003 no Reduces warheads to 1,700–2,200 within ten years. Superseded by New START (the START I renewal below).
START I treaty renewal April 8, 2010 January 26, 2011 yes Same provisions as START I.

Compensation for victims

Over 500 atmospheric nuclear weapons tests were conducted at various sites around the world from 1945 to 1980. As public awareness and concern mounted over the possible health hazards associated with exposure to nuclear fallout, various studies were done to assess the extent of the hazard. A Centers for Disease Control and Prevention/National Cancer Institute study estimated that nuclear fallout might have led to approximately 11,000 excess deaths, most caused by thyroid cancer linked to exposure to iodine-131.

  • United States: Prior to March 2009, the U.S. was the only nation to compensate nuclear test victims. Since the Radiation Exposure Compensation Act of 1990, more than $1.38 billion in compensation has been approved. The money goes to people who took part in the tests, notably at the Nevada Test Site, and to others exposed to the radiation. As of 2017, the U.S. government refused to pay for the medical care of troops who attribute their health problems to the construction of Runit Dome in the Marshall Islands.
  • France: In March 2009, the French Government offered to compensate victims for the first time and legislation is being drafted which would allow payments to people who suffered health problems related to the tests. The payouts would be available to victims' descendants and would include Algerians, who were exposed to nuclear testing in the Sahara in 1960. However, victims say the eligibility requirements for compensation are too narrow.
  • United Kingdom: There is no formal British government compensation program. However, nearly 1,000 veterans of Christmas Island nuclear tests in the 1950s are engaged in legal action against the Ministry of Defence for negligence. They say they suffered health problems and were not warned of potential dangers before the experiments.
  • Russia: Decades later, Russia offered compensation to veterans who were part of the 1954 Totsk test. However, there was no compensation to civilians sickened by the Totsk test. Anti-nuclear groups say there has been no government compensation for other nuclear tests.
  • China: China has undertaken highly secretive atomic tests in remote deserts in a Central Asian border province. Anti-nuclear activists say there is no known government program for compensating victims.

Milestone nuclear explosions

The following list is of milestone nuclear explosions. In addition to the atomic bombings of Hiroshima and Nagasaki, the first nuclear test of a given weapon type for a country is included, as well as tests that were otherwise notable (such as the largest test ever). All yields (explosive power) are given in their estimated energy equivalents in kilotons of TNT (see TNT equivalent). Putative tests (like Vela incident) have not been included.

Date Name Yield (kt) Country Significance
July 16, 1945 Trinity 18–20 United States First fission-device test, first plutonium implosion detonation.
August 6, 1945 Little Boy 12–18 United States Bombing of Hiroshima, Japan, first detonation of a uranium gun-type device, first use of a nuclear device in combat.
August 9, 1945 Fat Man 18–23 United States Bombing of Nagasaki, Japan, second detonation of a plutonium implosion device (the first being the Trinity Test), second and last use of a nuclear device in combat.
August 29, 1949 RDS-1 22 Soviet Union First fission-weapon test by the Soviet Union.
May 8, 1951 George 225 United States First boosted nuclear weapon test, first weapon test to employ fusion in any measure.
October 3, 1952 Hurricane 25 United Kingdom First fission weapon test by the United Kingdom.
November 1, 1952 Ivy Mike 10,400 United States First "staged" thermonuclear weapon, with cryogenic fusion fuel, primarily a test device and not weaponized.
November 16, 1952 Ivy King 500 United States Largest pure-fission weapon ever tested.
August 12, 1953 RDS-6s 400 Soviet Union First fusion-weapon test by the Soviet Union (not "staged").
March 1, 1954 Castle Bravo 15,000 United States First "staged" thermonuclear weapon using dry fusion fuel. A serious nuclear fallout accident occurred. Largest nuclear detonation conducted by United States.
November 22, 1955 RDS-37 1,600 Soviet Union First "staged" thermonuclear weapon test by the Soviet Union (deployable).
May 31, 1957 Orange Herald 720 United Kingdom Largest boosted fission weapon ever tested. Intended as a fallback "in megaton range" in case British thermonuclear development failed.
November 8, 1957 Grapple X 1,800 United Kingdom First (successful) "staged" thermonuclear weapon test by the United Kingdom.
February 13, 1960 Gerboise Bleue 70 France First fission weapon test by France.
October 31, 1961 Tsar Bomba 50,000 Soviet Union Largest thermonuclear weapon ever tested—scaled down from its initial 100 Mt design by 50%.
October 16, 1964 596 22 China First fission-weapon test by the People's Republic of China.
June 17, 1967 Test No. 6 3,300 China First "staged" thermonuclear weapon test by the People's Republic of China.
August 24, 1968 Canopus 2,600 France First "staged" thermonuclear weapon test by France.
May 18, 1974 Smiling Buddha 12 India First fission nuclear explosive test by India.
May 11, 1998 Pokhran-II 45–50 India First potential fusion-boosted weapon test by India; first deployable fission weapon test by India.
May 28, 1998 Chagai-I 40 Pakistan First fission weapon (boosted) test by Pakistan.
October 9, 2006 2006 nuclear test under 1 North Korea First fission-weapon test by North Korea (plutonium-based).
September 3, 2017 2017 nuclear test 200–300 North Korea First "staged" thermonuclear weapon test claimed by North Korea.
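As a sense of scale for the TNT-equivalent convention used in the table above, a short Python sketch converts kilotons to joules (the factor 4.184×10¹² J per kiloton is definitional; the yields plugged in are the table's estimates):

```python
KT_TO_JOULES = 4.184e12  # one kiloton of TNT equivalent, by definition

def yield_joules(kilotons: float) -> float:
    """Convert a TNT-equivalent yield in kilotons to joules."""
    return kilotons * KT_TO_JOULES

# Trinity (~20 kt) versus Tsar Bomba (50 Mt = 50,000 kt), per the table above
print(f"Trinity:    {yield_joules(20):.3e} J")
print(f"Tsar Bomba: {yield_joules(50_000):.3e} J")
print(f"ratio: {yield_joules(50_000) / yield_joules(20):.0f}x")  # 2500x
```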

Electromagnetic compatibility

Anechoic RF chamber used for EMC testing (radiated emissions and immunity). The furniture has to be made of wood or plastic, not metal.
A log-periodic antenna used for outdoor measurements

Electromagnetic compatibility (EMC) is the ability of electrical equipment and systems to function acceptably in their electromagnetic environment, by limiting the unintentional generation, propagation and reception of electromagnetic energy which may cause unwanted effects such as electromagnetic interference (EMI) or even physical damage to operational equipment. The goal of EMC is the correct operation of different equipment in a common electromagnetic environment. It is also the name given to the associated branch of electrical engineering.

EMC pursues three main classes of issue. Emission is the generation of electromagnetic energy, whether deliberate or accidental, by some source and its release into the environment. EMC studies the unwanted emissions and the countermeasures which may be taken in order to reduce them. The second class, susceptibility, is the tendency of electrical equipment, referred to as the victim, to malfunction or break down in the presence of unwanted emissions, which are known as radio-frequency interference (RFI). Immunity is the opposite of susceptibility: the ability of equipment to function correctly in the presence of RFI. The discipline of "hardening" equipment is known equally as susceptibility or immunity engineering. A third class studied is coupling, the mechanism by which emitted interference reaches the victim.

Interference mitigation and hence electromagnetic compatibility may be achieved by addressing any or all of these issues, i.e., quieting the sources of interference, inhibiting coupling paths and/or hardening the potential victims. In practice, many of the engineering techniques used, such as grounding and shielding, apply to all three issues.

History

Origins

The earliest EMC issue was lightning strike (lightning electromagnetic pulse, or LEMP) on ships and buildings. Lightning rods or lightning conductors began to appear in the mid-18th century. With the advent of widespread electricity generation and power supply lines from the late 19th century on, problems also arose with equipment short-circuit failure affecting the power supply, and with local fire and shock hazard when the power line was struck by lightning. Power stations were provided with output circuit breakers. Buildings and appliances would soon be provided with input fuses, and later in the 20th century miniature circuit breakers (MCB) would come into use.

Early twentieth century

It may be said that radio interference and its correction arose with the first spark-gap experiment of Marconi in the late 1800s. As radio communications developed in the first half of the 20th century, interference between broadcast radio signals began to occur and an international regulatory framework was set up to ensure interference-free communications.

Switching devices became commonplace through the middle of the 20th century, typically in petrol powered cars and motorcycles but also in domestic appliances such as thermostats and refrigerators. This caused transient interference with domestic radio and (after World War II) TV reception, and in due course laws were passed requiring the suppression of such interference sources.

ESD problems first arose with accidental electric spark discharges in hazardous environments such as coal mines and when refuelling aircraft or motor cars. Safe working practices had to be developed.

Postwar period

After World War II the military became increasingly concerned with the effects of nuclear electromagnetic pulse (NEMP), lightning strike, and even high-powered radar beams, on vehicle and mobile equipment of all kinds, and especially aircraft electrical systems.

When high RF emission levels from other sources became a potential problem (such as with the advent of microwave ovens), certain frequency bands were designated for Industrial, Scientific and Medical (ISM) use, allowing emission levels limited only by thermal safety standards. Later, the International Telecommunication Union adopted a Recommendation providing limits of radiation from ISM devices in order to protect radiocommunications. A variety of issues such as sideband and harmonic emissions, broadband sources, and the ever-increasing popularity of electrical switching devices and their victims, resulted in a steady development of standards and laws.

From the late 1970s, the popularity of modern digital circuitry rapidly grew. As the technology developed, with ever-faster switching speeds (increasing emissions) and lower circuit voltages (increasing susceptibility), EMC increasingly became a source of concern. Many more nations became aware of EMC as a growing problem and issued directives to the manufacturers of digital electronic equipment, which set out the essential manufacturer requirements before their equipment could be marketed or sold. Organizations in individual nations, across Europe and worldwide, were set up to maintain these directives and associated standards. In 1979, the American FCC published a regulation that required the electromagnetic emissions of all "digital devices" to be below certain limits. This regulatory environment led to a sharp growth in the EMC industry supplying specialist devices and equipment, analysis and design software, and testing and certification services. Low-voltage digital circuits, especially CMOS transistors, became more susceptible to ESD damage as they were miniaturised and, despite the development of on-chip hardening techniques, a new ESD regulatory regime had to be developed.

Modern era

From the 1980s on the explosive growth in mobile communications and broadcast media channels put huge pressure on the available airspace. Regulatory authorities began squeezing band allocations closer and closer together, relying on increasingly sophisticated EMC control methods, especially in the digital communications domain, to keep cross-channel interference to acceptable levels. Digital systems are inherently less susceptible than analogue systems, and also offer far easier ways (such as software) to implement highly sophisticated protection and error-correction measures.

In 1985, the USA released the ISM bands for low-power mobile digital communications, leading to the development of Wi-Fi and remotely-operated car door keys. This approach relies on the intermittent nature of ISM interference and use of sophisticated error-correction methods to ensure lossless reception during the quiet gaps between any bursts of interference.

Concepts

"Electromagnetic interference" (EMI) is defined as the "degradation in the performance of equipment or transmission channel or a system caused by an electromagnetic disturbance" (IEV 161-01-06), while "electromagnetic disturbance" is defined as "an electromagnetic phenomenon that can degrade the performance of a device, equipment or system, or adversely affect living or inert matter" (IEV 161-01-05). The terms "electromagnetic disturbance" and "electromagnetic interference" designate respectively the cause and the effect.

Electromagnetic compatibility (EMC) is an equipment characteristic or property and is defined as "the ability of equipment or a system to function satisfactorily in its electromagnetic environment without introducing intolerable electromagnetic disturbances to anything in that environment" (IEV 161-01-07).

EMC ensures the correct operation, in the same electromagnetic environment, of different equipment items which use or respond to electromagnetic phenomena, and the avoidance of any interference. Another way of saying this is that EMC is the control of EMI so that unwanted effects are prevented.

Besides understanding the phenomena in themselves, EMC also addresses the countermeasures, such as control regimes, design and measurement, which should be taken in order to prevent emissions from causing any adverse effect.

Technical characteristics of interference

Types of interference

EMC is often understood as the control of electromagnetic interference (EMI). Electromagnetic interference divides into several categories according to the source and signal characteristics.

The origin of interference, often called "noise" in this context, can be man-made (artificial) or natural.

Continuous, or continuous wave (CW), interference comprises a given range of frequencies. This type is naturally divided into sub-categories according to frequency range, and as a whole is sometimes referred to as "DC to daylight". One common classification is into narrowband and broadband, according to the spread of the frequency range.

An electromagnetic pulse (EMP), sometimes called a transient disturbance, is a short-duration pulse of energy. This energy is usually broadband by nature, although it often excites a relatively narrow-band damped sine wave response in the victim. Pulse signals divide broadly into isolated and repetitive events.
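The broadband nature of a pulse, and the narrower damped ringing it excites in a victim, can be sketched numerically (an illustrative NumPy example; the sample rate, ring frequency and damping below are assumed values, not from any standard):

```python
import numpy as np

fs = 1e9                      # 1 GS/s sample rate, illustrative
t = np.arange(0, 1e-6, 1 / fs)

pulse = np.zeros_like(t)
pulse[0] = 1.0                # an ideal impulse: energy spread flat across frequency

# Victim "rings" at an assumed 100 MHz resonance with a 50 ns decay constant
ring = np.exp(-t / 50e-9) * np.sin(2 * np.pi * 100e6 * t)

f = np.fft.rfftfreq(len(t), 1 / fs)
P = np.abs(np.fft.rfft(pulse))   # flat (broadband) spectrum
R = np.abs(np.fft.rfft(ring))    # peaked near the resonance

print(f"pulse spectrum flatness (max/mean): {P.max() / P.mean():.2f}")
print(f"ring spectral peak: {f[np.argmax(R)] / 1e6:.0f} MHz")
```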

Coupling mechanisms

The four EMI coupling modes

When a source emits interference, it follows a route to the victim known as the coupling path. There are four basic coupling mechanisms: conductive, capacitive, magnetic or inductive, and radiative. Any coupling path can be broken down into one or more of these coupling mechanisms working together.

Conductive coupling occurs when the coupling path between the source and victim is formed by direct electrical contact with a conducting body.

Capacitive coupling occurs when a varying electrical field exists between two adjacent conductors, inducing a change in voltage on the receiving conductor.

Inductive coupling or magnetic coupling occurs when a varying magnetic field exists between two parallel conductors, inducing a change in voltage along the receiving conductor.

Radiative coupling or electromagnetic coupling occurs when source and victim are separated by a large distance. Source and victim act as radio antennas: the source emits or radiates an electromagnetic wave which propagates across the space in between and is picked up or received by the victim.
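To put rough numbers on capacitive coupling, a minimal sketch of the displacement-current relation i = C·dV/dt (the stray capacitance and edge rate below are illustrative assumptions, not measured values):

```python
def capacitive_coupling_current(c_farads: float, dv_dt: float) -> float:
    """Current injected into the victim conductor: i = C * dV/dt."""
    return c_farads * dv_dt

# Illustrative: 1 pF of stray capacitance between adjacent traces,
# with an aggressor edge swinging 3.3 V in 1 ns
i = capacitive_coupling_current(1e-12, 3.3 / 1e-9)
print(f"coupled current = {i * 1e3:.1f} mA")  # 3.3 mA
```

Even a picofarad of stray capacitance injects milliamps when the aggressor edge is fast, which is why faster switching speeds increase emissions.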

Control

The damaging effects of electromagnetic interference pose unacceptable risks in many areas of technology, and it is necessary to control such interference and reduce the risks to acceptable levels.

The control of electromagnetic interference (EMI) and assurance of EMC comprises a series of related disciplines:

  • Characterising the threat.
  • Setting standards for emission and susceptibility levels.
  • Design for standards compliance.
  • Testing for standards compliance.

The risk posed by the threat is usually statistical in nature, so much of the work in threat characterisation and standards setting is based on reducing the probability of disruptive EMI to an acceptable level, rather than its assured elimination.

For a complex or novel piece of equipment, this may require the production of a dedicated EMC control plan summarizing the application of the above and specifying additional documents required.

Characterisation of the problem requires understanding of:

  • The interference source and signal.
  • The coupling path to the victim.
  • The nature of the victim both electrically and in terms of the significance of malfunction.

Design

A TV tuner card showing many small bypass capacitors and three metal shields: the PCI bracket, the metal box with two coax inputs, and the shield for the S-Video connector

Breaking a coupling path is equally effective at either the start or the end of the path, therefore many aspects of good EMC design practice apply equally to potential sources and to potential victims. A design which easily couples energy to the outside world will equally easily couple energy in and will be susceptible. A single improvement will often reduce both emissions and susceptibility. Grounding and shielding aim to reduce emissions or divert EMI away from the victim by providing an alternative, low-impedance path. Techniques include:

  • Grounding or earthing schemes such as star earthing for audio equipment or ground planes for RF. The scheme must also satisfy safety regulations.
  • Shielded cables, where the signal wires are surrounded by an outer conductive layer that is grounded at one or both ends.
  • Shielded housings. A conductive metal housing will act as an interference shield. In order to access the interior, such a housing is typically made in sections (such as a box and lid); an RF gasket may be used at the joints to reduce the amount of interference that leaks through. RF gaskets come in various types. A plain metal gasket may be either braided wire or a flat strip slotted to create many springy "fingers". Where a waterproof seal is required, a flexible elastomeric base may be impregnated with chopped metal fibers dispersed into the interior or long metal fibers covering the surface or both.

Other general measures include:

  • Decoupling or filtering at critical points such as cable entries and high-speed switches, using RF chokes and/or RC elements. A line filter implements these measures between a device and a line.
  • Transmission line techniques for cables and wiring, such as balanced differential signal and return paths, and impedance matching.
  • Avoidance of antenna structures such as loops of circulating current, resonant mechanical structures, unbalanced cable impedances or poorly grounded shielding.
  • Eliminating spurious rectifying junctions that can form between metal structures around and near transmitter installations. Such junctions in combination with unintentional antenna structures can radiate harmonics of the transmitter frequency.
The spread-spectrum method reduces EMC peaks: frequency spectrum of a switching power supply during its warm-up period, which uses the spread-spectrum method, including a waterfall diagram over a few minutes
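For the RC filtering mentioned above, the -3 dB corner of a first-order low-pass follows f_c = 1/(2πRC); a minimal sketch with illustrative component values (not taken from any particular line-filter design):

```python
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB corner frequency of a first-order RC low-pass: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative filter element: 100 ohm series resistance, 100 nF to ground
print(f"f_c = {rc_cutoff_hz(100, 100e-9):.0f} Hz")  # ~15.9 kHz
```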

Additional measures to reduce emissions include:

  • Avoid unnecessary switching operations. Necessary switching should be done as slowly as is technically possible.
  • Noisy circuits (e. g. with a lot of switching activity) should be physically separated from the rest of the design.
  • High peaks at single frequencies can be avoided by using the spread spectrum method, in which different parts of the circuit emit at different frequencies.
  • Harmonic wave filters.
  • Design for operation at lower signal levels, reducing the energy available for emission.
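The spread-spectrum idea in the list above can be demonstrated numerically: dithering a clock's frequency spreads its energy over many FFT bins, lowering the spectral peak while leaving total energy unchanged. A sketch with assumed, illustrative parameters:

```python
import numpy as np

fs = 1e6                          # sample rate (Hz), illustrative
t = np.arange(0, 0.1, 1 / fs)

# A fixed 10 kHz "clock" versus one whose frequency is dithered by +/-2 %
fixed = np.sign(np.sin(2 * np.pi * 10e3 * t))
f_inst = 10e3 * (1 + 0.02 * np.sin(2 * np.pi * 50 * t))   # slow 50 Hz dither
spread = np.sign(np.sin(2 * np.pi * np.cumsum(f_inst) / fs))

def peak_db(x):
    """Highest single-bin spectral magnitude, in dB."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return 20 * np.log10(spec.max())

print(f"fixed clock peak:  {peak_db(fixed):.1f} dB")
print(f"spread clock peak: {peak_db(spread):.1f} dB")  # several dB lower
```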

Additional measures to reduce susceptibility include:

  • Fuses, trip switches and circuit breakers.
  • Transient absorbers.
  • Design for operation at higher signal levels, reducing the relative noise level in comparison.
  • Error-correction techniques in digital circuitry. These may be implemented in hardware, software or a combination of both.
  • Differential signaling or other common-mode noise techniques for signal routing

Testing

Testing is required to confirm that a particular device meets the required standards. It divides broadly into emissions testing and susceptibility testing. Open-area test sites, or OATS, are the reference sites in most standards. They are especially useful for emissions testing of large equipment systems. However RF testing of a physical prototype is most often carried out indoors, in a specialised EMC test chamber. Types of chamber include anechoic, reverberation and the gigahertz transverse electromagnetic cell (GTEM cell). Sometimes computational electromagnetics simulations are used to test virtual models. Like all compliance testing, it is important that the test equipment, including the test chamber or site and any software used, be properly calibrated and maintained. Typically, a given run of tests for a particular piece of equipment will require an EMC test plan and follow-up test report. The full test program may require the production of several such documents.

Emissions are typically measured for radiated field strength and where appropriate for conducted emissions along cables and wiring. Inductive (magnetic) and capacitive (electric) field strengths are near-field effects, and are only important if the device under test (DUT) is designed for location close to other electrical equipment. For conducted emissions, typical transducers include the LISN (line impedance stabilisation network) or AMN (artificial mains network) and the RF current clamp. For radiated emission measurement, antennas are used as transducers. Typical antennas specified include dipole, biconical, log-periodic, double ridged guide and conical log-spiral designs. Radiated emissions must be measured in all directions around the DUT. Specialized EMI test receivers or EMI analysers are used for EMC compliance testing. These incorporate bandwidths and detectors as specified by international EMC standards. An EMI receiver may be based on a spectrum analyser to measure the emission levels of the DUT across a wide band of frequencies (frequency domain), or on a tunable narrower-band device which is swept through the desired frequency range. EMI receivers along with specified transducers can often be used for both conducted and radiated emissions. Pre-selector filters may also be used to reduce the effect of strong out-of-band signals on the front-end of the receiver. Some pulse emissions are more usefully characterized using an oscilloscope to capture the pulse waveform in the time domain.
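The bookkeeping for a radiated-emission measurement is typically done in decibels: the field strength at the antenna equals the receiver reading plus the antenna factor plus the cable loss. A minimal sketch (the numeric values are illustrative assumptions, not limits from any standard):

```python
def field_strength_dbuv_m(receiver_dbuv: float, antenna_factor_db: float,
                          cable_loss_db: float) -> float:
    """Radiated field strength in dB(uV/m) from a receiver reading.

    Standard dB bookkeeping: E = V_receiver + AF + cable loss.
    """
    return receiver_dbuv + antenna_factor_db + cable_loss_db

# Illustrative: 28 dBuV at the receiver, antenna factor 14 dB/m, 2 dB cable loss
print(field_strength_dbuv_m(28.0, 14.0, 2.0))  # 44.0 dBuV/m
```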

Radiated field susceptibility testing typically involves a high-powered source of RF or EM energy and a radiating antenna to direct the energy at the potential victim or device under test (DUT). Conducted voltage and current susceptibility testing typically involves a high-powered signal generator, and a current clamp or other type of transformer to inject the test signal. Transient or EMP signals are used to test the immunity of the DUT against powerline disturbances including surges, lightning strikes and switching noise. In motor vehicles, similar tests are performed on battery and signal lines. The transient pulse may be generated digitally and passed through a broadband pulse amplifier, or applied directly to the transducer from a specialised pulse generator. Electrostatic discharge testing is typically performed with a piezo spark generator called an "ESD pistol". Higher energy pulses, such as lightning or nuclear EMP simulations, can require a large current clamp or a large antenna which completely surrounds the DUT. Some antennas are so large that they are located outdoors, and care must be taken not to cause an EMP hazard to the surrounding environment.

Legislation

Several organizations, both national and international, work to promote international co-operation on standardization (harmonization), including publishing various EMC standards. Where possible, a standard developed by one organization may be adopted with little or no change by others. This helps for example to harmonize national standards across Europe.

International standards organizations include:

Among the main national organizations are:

Compliance with national or international standards is usually laid down by laws passed by individual nations. Different nations can require compliance with different standards.

In European law, EU directive 2014/30/EU (previously 2004/108/EC) on EMC defines the rules for the placing on the market/putting into service of electric/electronic equipment within the European Union. The Directive applies to a vast range of equipment including electrical and electronic appliances, systems and installations. Manufacturers of electric and electronic devices are advised to run EMC tests in order to comply with compulsory CE-labeling. More are given in the list of EMC directives. Compliance with the applicable harmonised standards whose reference is listed in the OJEU under the EMC Directive gives presumption of conformity with the corresponding essential requirements of the EMC Directive.

In 2019, the USA adopted a program for the protection of critical infrastructure against an electromagnetic pulse, whether caused by a geomagnetic storm or a high-altitude nuclear weapon.

Kidney disease

From Wikipedia, the free encyclopedia
Kidney disease
Other namesRenal disease, nephropathy
Pathologic kidney specimen showing marked pallor of the cortex, contrasting to the darker areas of surviving medullary tissue. The patient died with acute kidney injury.
SpecialtyNephrology, urology 
ComplicationsUremia, death

Kidney disease, or renal disease, technically referred to as nephropathy, is damage to or disease of a kidney. Nephritis is an inflammatory kidney disease and has several types according to the location of the inflammation. Inflammation can be diagnosed by blood tests. Nephrosis is non-inflammatory kidney disease. Nephritis and nephrosis can give rise to nephritic syndrome and nephrotic syndrome respectively. Kidney disease usually causes a loss of kidney function to some degree and can result in kidney failure, the complete loss of kidney function. Kidney failure is known as the end-stage of kidney disease, where dialysis or a kidney transplant is the only treatment option.

Chronic kidney disease is defined as prolonged kidney abnormalities (functional and/or structural in nature) that last for more than three months. Acute kidney disease is now termed acute kidney injury and is marked by the sudden reduction in kidney function over seven days. In 2007, about one in eight Americans had chronic kidney disease. That rate has since risen; as of 2021, about one in seven Americans were estimated to have CKD.

Causes

Deaths due to kidney diseases per million persons in 2012 ranged from 16 to 343 across countries.

Causes of kidney disease include deposition of immunoglobulin A (IgA) antibodies in the glomerulus, administration of analgesics, xanthine oxidase deficiency, toxicity of chemotherapy agents, and long-term exposure to lead or its salts. Chronic conditions that can produce nephropathy include systemic lupus erythematosus, diabetes mellitus and high blood pressure (hypertension), which lead to diabetic nephropathy and hypertensive nephropathy, respectively.

Analgesics

One cause of nephropathy is the long term usage of pain medications known as analgesics. The pain medicines which can cause kidney problems include aspirin, acetaminophen, and nonsteroidal anti-inflammatory drugs (NSAIDs). This form of nephropathy is "chronic analgesic nephritis," a chronic inflammatory change characterized by loss and atrophy of tubules and interstitial fibrosis and inflammation (BRS Pathology, 2nd edition).

Specifically, long-term use of the analgesic phenacetin has been linked to renal papillary necrosis (necrotizing papillitis).

Diabetes

Diabetic nephropathy is a progressive kidney disease caused by angiopathy of the capillaries in the glomeruli. It is characterized by nephrotic syndrome and diffuse scarring of the glomeruli. It is particularly associated with poorly managed diabetes mellitus and is a primary reason for dialysis in many developed countries. It is classified as a small blood vessel complication of diabetes.

Autosomal dominant polycystic kidney disease

Gabow (1990) describes autosomal dominant polycystic kidney disease as a genetic disorder: "Autosomal dominant polycystic kidney disease (ADPKD) is the most common genetic disease, affecting a half million Americans. The clinical phenotype can result from at least two different gene defects. One gene that can cause ADPKD has been located on the short arm of chromosome 16."

Long COVID and Kidney Disease

Yende and Parikh (2021) discuss the effects that COVID-19 can have on people with pre-existing kidney problems: patients with "frailty, chronic diseases, disability and immunodeficiency are at increased risk of kidney disease and progression to kidney failure, and infection with SARS-CoV-2 can further increase this risk" (Long COVID and Kidney Disease, 2021).

Diet

Higher dietary intake of animal protein, animal fat, and cholesterol may increase risk for microalbuminuria, a sign of kidney function decline, and generally, diets higher in fruits, vegetables, and whole grains but lower in meat and sweets may be protective against kidney function decline. This may be because sources of animal protein, animal fat, and cholesterol, and sweets are more acid-producing, while fruits, vegetables, legumes, and whole grains are more base-producing.

IgA nephropathy

IgA nephropathy is the most common glomerulonephritis throughout the world. Primary IgA nephropathy is characterized by deposition of the IgA antibody in the glomerulus. The classic presentation (in 40-50% of the cases) is episodic frank hematuria which usually starts within a day or two of a non-specific upper respiratory tract infection (hence synpharyngitic), as opposed to post-streptococcal glomerulonephritis, which occurs some time (weeks) after initial infection. Less commonly, gastrointestinal or urinary infection can be the inciting agent. All of these infections have in common the activation of mucosal defenses and hence IgA antibody production.

Iodinated contrast media

Kidney disease induced by iodinated contrast media (ICM) is called contrast-induced nephropathy (CIN) or contrast-induced acute kidney injury (AKI). The underlying mechanisms are currently unclear, but there is a body of evidence that several factors, including the induction of apoptosis, play a role.

Lithium

Lithium, a medication commonly used to treat bipolar disorder and schizoaffective disorders, can cause nephrogenic diabetes insipidus; its long-term use can lead to nephropathy.

Lupus

Despite expensive treatments, lupus nephritis remains a major cause of morbidity and mortality in people with relapsing or refractory lupus nephritis.

Xanthine oxidase deficiency

Another possible cause of kidney disease is decreased function of xanthine oxidase in the purine degradation pathway. Xanthine oxidase degrades hypoxanthine to xanthine and then to uric acid. Xanthine is not very soluble in water; an increase in xanthine therefore forms crystals (which can lead to kidney stones) and results in damage to the kidney. Xanthine oxidase inhibitors, such as allopurinol, can cause nephropathy.

Polycystic disease of the kidneys

Another possible cause of nephropathy is the formation of cysts, or fluid-filled pockets, within the kidneys. These cysts enlarge with age, eventually causing renal failure. Cysts may also form in other organs, including the liver, brain, and ovaries. Polycystic kidney disease is a genetic disease caused by mutations in the PKD1, PKD2, and PKHD1 genes, and affects about half a million people in the US. Polycystic kidneys are susceptible to infections and cancer.

Toxicity of chemotherapy agents

Nephropathy can be associated with some therapies used to treat cancer. The most common form of kidney disease in cancer patients is acute kidney injury (AKI), usually due to volume depletion from the vomiting and diarrhea that follow chemotherapy, or occasionally due to the kidney toxicity of chemotherapeutic agents. Kidney failure from the breakdown of cancer cells, usually after chemotherapy, is unique to onconephrology. Several chemotherapeutic agents, for example cisplatin, are associated with acute and chronic kidney injuries. Newer agents such as anti-vascular endothelial growth factor (anti-VEGF) therapies are associated with similar injuries, as well as proteinuria, hypertension and thrombotic microangiopathy.

Diagnosis

The standard diagnostic workup of suspected kidney disease includes a medical history, physical examination, a urine test, and an ultrasound of the kidneys (renal ultrasonography). An ultrasound is essential in the diagnosis and management of kidney disease.

Treatment

Treatment approaches for kidney disease focus on managing the symptoms, controlling the progression, and also treating co-morbidities that a person may have.

Dialysis

Transplantation

Millions of people across the world have kidney disease, and at its end stage several thousand of them will need dialysis or a kidney transplant. In the United States, as of 2008, 16,500 people needed a kidney transplant; of those, 5,000 died while waiting. There is currently a shortage of donors: in 2007 only 64,606 kidney transplants were performed worldwide. This shortage has led some countries to place a monetary value on kidneys. Countries such as Iran and Singapore are eliminating their waiting lists by paying their citizens to donate. The black market also accounts for 5–10 percent of transplants that occur worldwide; buying an organ through the black market is illegal in the United States. To be put on the waiting list for a kidney transplant, patients must first be referred by a physician, and then choose and contact a donor hospital. Once they choose a donor hospital, patients receive an evaluation to make sure they are suitable candidates for a transplant. To be a match for a kidney transplant, patients must have a compatible blood type and matching human leukocyte antigen (HLA) factors with their donors. They must also have no antibody reaction to the donor's kidneys.
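The matching criteria described above (blood-type compatibility, HLA matching, and no antibody reaction against the donor) can be sketched as a simple screening check. This is an illustrative sketch only, not a clinical algorithm; the function name, field names, and the HLA threshold are assumptions made for the example.

```python
# Hypothetical sketch of the screening criteria described above;
# real transplant matching involves far more clinical detail.

# Which recipient blood types can accept a kidney from each donor type.
BLOOD_COMPATIBILITY = {
    "O":  {"O", "A", "B", "AB"},   # O donors can give to anyone
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},
}

def is_potential_match(donor_blood, recipient_blood,
                       shared_hla_count, crossmatch_positive):
    """Screen a donor/recipient pair: blood-type compatibility, some HLA
    sharing, and a negative crossmatch (no recipient antibodies against
    the donor)."""
    if recipient_blood not in BLOOD_COMPATIBILITY[donor_blood]:
        return False
    if shared_hla_count < 1:  # illustrative threshold, not a clinical rule
        return False
    return not crossmatch_positive
```

For instance, an O-type donor passes the blood-type check for an A-type recipient, but not the reverse.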

Prognosis

Kidney disease can have serious consequences if it cannot be controlled effectively. Generally, the progression of kidney disease is from mild to serious. Some kidney diseases can cause kidney failure.

Carbon steel

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Carbon_steel

Carbon steel is a steel with a carbon content from about 0.05 up to 2.1 percent by weight. The American Iron and Steel Institute (AISI) defines carbon steel as steel in which no minimum content is specified for chromium, nickel, molybdenum, or other elements added to obtain an alloying effect, and in which elements such as manganese, silicon, and copper do not exceed specified maximums (for example, 1.65 percent manganese).

The term carbon steel may also be used in reference to steel which is not stainless steel; in this use carbon steel may include alloy steels. High-carbon steel has many different uses, such as milling machines, cutting tools (such as chisels), and high-strength wires. These applications require a much finer microstructure, which improves toughness.

Carbon steel is a popular metal choice for knife-making because its high carbon content gives the blade better edge retention. To get the most out of this type of steel, it is very important to heat treat it properly; otherwise, the knife may end up brittle, or too soft to hold an edge.

As the carbon content percentage rises, steel has the ability to become harder and stronger through heat treating; however, it becomes less ductile. Regardless of the heat treatment, a higher carbon content reduces weldability. In carbon steels, the higher carbon content lowers the melting point.

Uses of carbon steel

  • Carbon steel is used to construct buildings, bridges, and other infrastructure projects.
  • It is also used in producing pipes, fittings, and other components for the oil and gas industry.
  • Carbon steel is an essential material in the automotive industry, where it is used to make parts such as engine blocks, transmission components, and suspension parts.
  • It is also utilised in the production of railway tracks and locomotives.

Properties, characteristics & environmental impact

  • Carbon steel is often divided into two main categories: low-carbon steel and high-carbon steel.
  • Carbon steel may also contain other elements, such as manganese, phosphorus, sulfur, and silicon, which can affect its properties.
  • Carbon steel can be easily machined and welded, making it versatile for various applications. It can also be heat treated to improve its strength and durability.
  • Carbon steel is susceptible to rust and corrosion, especially in environments with high moisture levels and/or salt.
  • Carbon steel can be shielded from corrosion by coating it with paint, varnish, or other protective material.
  • Alternatively, it can be made from a stainless steel alloy that contains chromium, which provides excellent corrosion resistance.
  • Carbon steel is sometimes alloyed with other elements to improve its properties, such as adding chromium and/or nickel to improve its resistance to corrosion and oxidation or adding molybdenum to improve its strength and toughness at high temperatures.
  • Carbon steel is an environmentally friendly material, as it is easily recyclable and can be reused in various applications. It is also energy-efficient to produce, as it requires less energy than other metals such as aluminium and copper.

Type

Mild or low-carbon steel

Mild steel (iron containing a small percentage of carbon, strong and tough but not readily tempered), also known as plain-carbon steel and low-carbon steel, is now the most common form of steel because its price is relatively low while it provides material properties that are acceptable for many applications. Mild steel contains approximately 0.05–0.30% carbon, making it malleable and ductile. Mild steel has a relatively low tensile strength, but it is cheap and easy to form. Surface hardness can be increased with carburization.

In applications where large cross-sections are used to minimize deflection, failure by yield is not a risk so low-carbon steels are the best choice, for example as structural steel. The density of mild steel is approximately 7.85 g/cm3 (7,850 kg/m3; 0.284 lb/cu in) and the Young's modulus is 200 GPa (29×106 psi).
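The deflection trade-off above can be illustrated with the mild-steel Young's modulus quoted (E = 200 GPa) and the standard beam-bending formula. This is a minimal sketch; the load, span, and rectangular cross-section dimensions are assumed values chosen for the example.

```python
# Midspan deflection of a simply supported rectangular beam under a
# central point load: delta = F * L**3 / (48 * E * I), with I = b*h**3/12.
# E is the mild-steel value quoted above; F, L, b, h are assumed.

E = 200e9  # Young's modulus of mild steel, Pa (200 GPa)

def midspan_deflection(force_n, span_m, width_m, height_m):
    """Deflection (m) of a simply supported beam under a central point load."""
    inertia = width_m * height_m**3 / 12  # second moment of area, m^4
    return force_n * span_m**3 / (48 * E * inertia)

# Doubling the section height cuts deflection by a factor of 8 (I scales
# with h^3), which is why large cross-sections keep deflection, not yield,
# as the governing design criterion.
small_section = midspan_deflection(10e3, 2.0, 0.05, 0.10)
large_section = midspan_deflection(10e3, 2.0, 0.05, 0.20)
```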

Low-carbon steels display yield-point runout where the material has two yield points. The first yield point (or upper yield point) is higher than the second and the yield drops dramatically after the upper yield point. If a low-carbon steel is only stressed to some point between the upper and lower yield point then the surface develops Lüder bands. Low-carbon steels contain less carbon than other steels and are easier to cold-form, making them easier to handle. Typical applications of low carbon steel are car parts, pipes, construction, and food cans.

High-tensile steel

High-tensile steels are low-carbon, or steels at the lower end of the medium-carbon range, which have additional alloying ingredients in order to increase their strength, wear properties or specifically tensile strength. These alloying ingredients include chromium, molybdenum, silicon, manganese, nickel, and vanadium. Impurities such as phosphorus and sulfur have their maximum allowable content restricted.

Higher-carbon steels

Carbon steels which can successfully undergo heat treatment have a carbon content in the range of 0.30–1.70% by weight. Trace impurities of various other elements can significantly affect the quality of the resulting steel. Trace amounts of sulfur in particular make the steel red-short, that is, brittle and crumbly at working temperatures. Low-alloy carbon steel, such as A36 grade, contains about 0.05% sulfur and melts at around 1,426–1,538 °C (2,600–2,800 °F). Manganese is often added to improve the hardenability of low-carbon steels. These additions turn the material into a low-alloy steel by some definitions, but AISI's definition of carbon steel allows up to 1.65% manganese by weight. Higher-carbon steels fall into two types: high-carbon steel and ultra-high-carbon steel. High-carbon steel sees limited use because it has extremely poor ductility and weldability and a higher cost of production; the applications best suited to it are in the spring industry, the farm industry, and the production of a wide range of high-strength wires.

AISI classification

Carbon steel is broken down into four classes based on carbon content:

Low-carbon steel

0.05 to 0.15% carbon content (plain carbon steel).

Medium-carbon steel

Approximately 0.3–0.5% carbon content. Balances ductility and strength and has good wear resistance; used for large parts, forging and automotive components.

High-carbon steel

Approximately 0.6 to 1.0% carbon content. Very strong, used for springs, edged tools, and high-strength wires.

Ultra-high-carbon steel

Approximately 1.25–2.0% carbon content. Steels that can be tempered to great hardness. Used for special purposes like (non-industrial-purpose) knives, axles, and punches. Most steels with more than 2.5% carbon content are made using powder metallurgy.
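The four AISI classes above map directly from carbon content. As an illustrative sketch (the function name and the exact cutoffs are assumptions; the ranges quoted in the source leave gaps between classes, so the boundaries below are approximate):

```python
# Illustrative mapping of carbon content (wt%) to the AISI carbon-steel
# classes listed above; the cutoffs between classes are approximate.

def aisi_class(carbon_wt_pct):
    """Return the approximate AISI carbon-steel class for a carbon wt%."""
    if carbon_wt_pct < 0.3:
        return "low-carbon"
    if carbon_wt_pct < 0.6:
        return "medium-carbon"
    if carbon_wt_pct <= 1.0:
        return "high-carbon"
    return "ultra-high-carbon"
```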

Heat treatment

Iron-carbon phase diagram, showing the temperature and carbon ranges for certain types of heat treatments

The purpose of heat treating carbon steel is to change the mechanical properties of steel, usually ductility, hardness, yield strength, or impact resistance. Note that the electrical and thermal conductivity are only slightly altered. As with most strengthening techniques for steel, Young's modulus (elasticity) is unaffected. All treatments of steel trade ductility for increased strength and vice versa. Iron has a higher solubility for carbon in the austenite phase; therefore all heat treatments, except spheroidizing and process annealing, start by heating the steel to a temperature at which the austenitic phase can exist. The steel is then quenched (heat drawn out) at a moderate to low rate, allowing carbon to diffuse out of the austenite and form iron carbide (cementite) while leaving ferrite, or at a high rate, trapping the carbon within the iron and thus forming martensite.

The rate at which the steel is cooled through the eutectoid temperature (about 727 °C or 1,341 °F) affects the rate at which carbon diffuses out of austenite and forms cementite. Generally speaking, cooling swiftly will leave iron carbide finely dispersed and produce a fine-grained pearlite, while cooling slowly will give a coarser pearlite. Cooling a hypoeutectoid steel (less than 0.77 wt% C) results in a lamellar-pearlitic structure of iron carbide layers with α-ferrite (nearly pure iron) between them. If it is a hypereutectoid steel (more than 0.77 wt% C), then the structure is full pearlite with small grains (larger than the pearlite lamellae) of cementite formed on the grain boundaries. A eutectoid steel (0.77% carbon) will have a pearlite structure throughout the grains with no cementite at the boundaries. The relative amounts of constituents are found using the lever rule.

The following is a list of the types of heat treatments possible:

Spheroidizing
Spheroidite forms when carbon steel is heated to approximately 700 °C (1,300 °F) for over 30 hours. Spheroidite can form at lower temperatures but the time needed drastically increases, as this is a diffusion-controlled process. The result is a structure of rods or spheres of cementite within primary structure (ferrite or pearlite, depending on which side of the eutectoid you are on). The purpose is to soften higher carbon steels and allow more formability. This is the softest and most ductile form of steel.
Full annealing
Carbon steel is heated to approximately 40 °C (72 °F) above the upper critical temperature (Ac3 or Acm) for 1 hour; this ensures all the ferrite transforms into austenite (although cementite might still exist if the carbon content is greater than the eutectoid). The steel must then be cooled slowly, in the realm of 20 °C (36 °F) per hour. Usually it is just furnace cooled, where the furnace is turned off with the steel still inside. This results in a coarse pearlitic structure, which means the "bands" of pearlite are thick. Fully annealed steel is soft and ductile, with no internal stresses, which is often necessary for cost-effective forming. Only spheroidized steel is softer and more ductile.
Process annealing
A process used to relieve stress in a cold-worked carbon steel with less than 0.3% C. The steel is usually heated to 550 to 650 °C (1,000 to 1,200 °F) for 1 hour, but sometimes temperatures as high as 700 °C (1,300 °F). The image above shows the process annealing area.
Isothermal annealing
It is a process in which hypoeutectoid steel is heated above the upper critical temperature. This temperature is maintained for a time and then reduced to below the lower critical temperature and is again maintained. It is then cooled to room temperature. This method eliminates any temperature gradient.
Normalizing
Carbon steel is heated to approximately 55 °C (100 °F) above the upper critical temperature (Ac3 or Acm) for 1 hour; this ensures the steel completely transforms to austenite. The steel is then air-cooled, a cooling rate of approximately 38 °C (68 °F) per minute. This results in a fine pearlitic structure and a more uniform structure overall. Normalized steel has a higher strength than annealed steel; it has a relatively high strength and hardness.
Quenching
Carbon steel with at least 0.4 wt% C is heated to normalizing temperatures and then rapidly cooled (quenched) in water, brine, or oil to the critical temperature. The critical temperature is dependent on the carbon content, but as a general rule is lower as the carbon content increases. This results in a martensitic structure; a form of steel that possesses a super-saturated carbon content in a deformed body-centered cubic (BCC) crystalline structure, properly termed body-centered tetragonal (BCT), with much internal stress. Thus quenched steel is extremely hard but brittle, usually too brittle for practical purposes. These internal stresses may cause stress cracks on the surface. Quenched steel is approximately three times harder (four with more carbon) than normalized steel.
Martempering (marquenching)
Martempering is not actually a tempering procedure, hence the term marquenching. It is a form of isothermal heat treatment applied after an initial quench, typically in a molten salt bath, at a temperature just above the "martensite start temperature". At this temperature, residual stresses within the material are relieved and some bainite may be formed from the retained austenite which did not have time to transform into anything else. In industry, this is a process used to control the ductility and hardness of a material. With longer marquenching, the ductility increases with a minimal loss in strength; the steel is held in this solution until the inner and outer temperatures of the part equalize. Then the steel is cooled at a moderate speed to keep the temperature gradient minimal. Not only does this process reduce internal stresses and stress cracks, but it also increases impact resistance.
Tempering
This is the most common heat treatment encountered, because the final properties can be precisely determined by the temperature and time of the tempering. Tempering involves reheating quenched steel to a temperature below the eutectoid temperature and then cooling. The elevated temperature allows very small amounts of spheroidite to form, which restores ductility but reduces hardness. Actual temperatures and times are carefully chosen for each composition.
Austempering
The austempering process is the same as martempering, except the quench is interrupted and the steel is held in the molten salt bath at temperatures between 205 and 540 °C (400 and 1,000 °F), and then cooled at a moderate rate. The resulting steel, called bainite, has an acicular microstructure with great strength (though less than martensite's), greater ductility, higher impact resistance, and less distortion than martensitic steel. The disadvantage of austempering is that it can be used on only a few steels, and it requires a special salt bath.
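The lever-rule calculation mentioned in the overview above can be sketched numerically. The phase compositions used here (ferrite at about 0.022 wt% C and cementite at 6.70 wt% C, just below the eutectoid) are standard textbook values assumed for the example.

```python
# Lever rule: mass fractions of two phases from the overall composition
# and the composition of each phase. All compositions in wt% carbon.

def lever_rule(c_overall, c_alpha, c_beta):
    """Return (fraction of alpha phase, fraction of beta phase)."""
    f_beta = (c_overall - c_alpha) / (c_beta - c_alpha)
    return 1.0 - f_beta, f_beta

# Hypoeutectoid steel with 0.40 wt% C just below the eutectoid:
# ferrite (~0.022 wt% C) versus cementite (~6.70 wt% C).
f_ferrite, f_cementite = lever_rule(0.40, 0.022, 6.70)
```

For this composition the steel is overwhelmingly ferrite by mass (roughly 94%), with the small cementite fraction carrying most of the carbon.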

Case hardening

Case hardening processes harden only the exterior of the steel part, creating a hard, wear-resistant skin (the "case") while preserving a tough and ductile interior. Carbon steels are not very hardenable, meaning they cannot be hardened throughout thick sections. Alloy steels have better hardenability, so they can be through-hardened and do not require case hardening. This property of carbon steel can be beneficial, because it gives the surface good wear characteristics while leaving the core flexible and shock-absorbing.

Forging temperature of steel

Steel type                              Max forging temp        Burning temp
1.5% carbon                             1,920 °F (1,049 °C)     2,080 °F (1,140 °C)
1.1% carbon                             1,980 °F (1,082 °C)     2,140 °F (1,171 °C)
0.9% carbon                             2,050 °F (1,121 °C)     2,230 °F (1,221 °C)
0.5% carbon                             2,280 °F (1,249 °C)     2,460 °F (1,349 °C)
0.2% carbon                             2,410 °F (1,321 °C)     2,680 °F (1,471 °C)
3.0% nickel steel                       2,280 °F (1,249 °C)     2,500 °F (1,371 °C)
3.0% nickel–chromium steel              2,280 °F (1,249 °C)     2,500 °F (1,371 °C)
5.0% nickel (case-hardening) steel      2,320 °F (1,271 °C)     2,640 °F (1,449 °C)
Chromium–vanadium steel                 2,280 °F (1,249 °C)     2,460 °F (1,349 °C)
High-speed steel                        2,370 °F (1,299 °C)     2,520 °F (1,385 °C)
Stainless steel                         2,340 °F (1,282 °C)     2,520 °F (1,385 °C)
Austenitic chromium–nickel steel        2,370 °F (1,299 °C)     2,590 °F (1,420 °C)
Silico-manganese spring steel           2,280 °F (1,249 °C)     2,460 °F (1,350 °C)
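The Celsius figures in the table are conversions of the Fahrenheit values; as a minimal sketch of that conversion (the function name is an assumption, and a few table entries differ from the exact conversion by a degree or two of rounding):

```python
# Fahrenheit-to-Celsius conversion underlying the table's Celsius columns.

def f_to_c(temp_f):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32) * 5 / 9

# e.g., the 1.5%-carbon maximum forging temperature:
max_forge_c = round(f_to_c(1920))  # 1,920 degF -> 1,049 degC
```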

Entropy (statistical thermodynamics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(statistical_thermody...