
Monday, June 27, 2022

History of nuclear weapons

From Wikipedia, the free encyclopedia
 
A nuclear fireball lights up the night in the United States' nuclear test Upshot-Knothole Badger on April 18, 1953.
 

Nuclear weapons possess enormous destructive power from nuclear fission or combined fission and fusion reactions. Building on scientific breakthroughs made during the 1930s, the United States, the United Kingdom, Canada, and free France collaborated during World War II, in what was called the Manhattan Project, to build a fission weapon, also known as an atomic bomb. In August 1945, the atomic bombings of Hiroshima and Nagasaki were conducted by the United States against Japan at the close of that war, standing to date as the only use of nuclear weapons in hostilities. The Soviet Union started development shortly after with their own atomic bomb project, and not long after, both countries were developing even more powerful fusion weapons known as hydrogen bombs. Britain and France built their own systems in the 1950s, and the list of states with nuclear weapons has gradually grown larger in the decades since.

Physics and politics in the 1930s and 1940s

In nuclear fission, the nucleus of a fissile atom (in this case, enriched uranium) absorbs a thermal neutron, becomes unstable and splits into two new atoms, releasing some energy and between one and three new neutrons, which can perpetuate the process.

In the first decades of the 20th century, physics was revolutionized with developments in the understanding of the nature of atoms. In 1898, Pierre and Marie Curie discovered that pitchblende, an ore of uranium, contained a substance—which they named radium—that emitted large amounts of radioactivity. Ernest Rutherford and Frederick Soddy identified that atoms were breaking down and turning into different elements. Hopes were raised among scientists and laymen that the elements around us could contain tremendous amounts of unseen energy, waiting to be harnessed.

H. G. Wells was inspired to write about atomic weapons in a 1914 novel, The World Set Free, which appeared shortly before the First World War. In a 1924 article, Winston Churchill speculated about the possible military implications: "Might not a bomb no bigger than an orange be found to possess a secret power to destroy a whole block of buildings—nay to concentrate the force of a thousand tons of cordite and blast a township at a stroke?"

In January 1933, the Nazis came to power in Germany and suppressed Jewish scientists. Like many others, Leó Szilárd fled to London, where in 1934 he patented the idea of a neutron-induced nuclear chain reaction. The patent also introduced the term critical mass to describe the minimum amount of material required to sustain the chain reaction, and noted its potential to cause an explosion (British patent 630,726). The patent was not about an atomic bomb per se, as the possibility of a chain reaction was still highly speculative. Szilárd subsequently assigned the patent to the British Admiralty so that it could be covered by the Official Secrets Act. In a very real sense, Szilárd was an intellectual father of the atomic bomb. His secret 1934 military patent was well ahead of its time: it described neutron-induced chain reactions, the power that could be drawn from them, the possibility of a nuclear explosion, and the rudimentary mechanics of a power plant, five years before the public discovery of nuclear fission and eight years before Szilárd and his friend Enrico Fermi demonstrated a working uranium reactor at the University of Chicago in 1942. When he conceived the neutron-induced chain reaction, Szilárd was not yet sure which element or isotope would sustain it; although he correctly mentioned uranium and thorium as candidates, he mistakenly emphasized beryllium.

Szilárd later joined with Enrico Fermi in patenting the world's first working nuclear reactor.

In Paris in 1934, Irène and Frédéric Joliot-Curie discovered that artificial radioactivity could be induced in stable elements by bombarding them with alpha particles; in Italy Enrico Fermi reported similar results when bombarding uranium with neutrons.

In December 1938, Otto Hahn and Fritz Strassmann reported that they had detected the element barium after bombarding uranium with neutrons. Lise Meitner and Otto Robert Frisch correctly interpreted these results as being due to the splitting of the uranium atom. Frisch confirmed this experimentally on January 13, 1939. They gave the process the name "fission" because of its similarity to the splitting of a cell into two new cells. Even before it was published, news of Meitner's and Frisch's interpretation crossed the Atlantic.

After learning of the German discovery of fission in 1939, Szilárd concluded that uranium would be the element capable of realizing his 1933 idea of a nuclear chain reaction.

Scientists at Columbia University decided to replicate the experiment and on January 25, 1939, conducted the first nuclear fission experiment in the United States in the basement of Pupin Hall. The following year, they identified the active component of uranium as being the rare isotope uranium-235.

Between 1939 and 1940, Joliot-Curie's team applied for a patent family covering different use cases of atomic energy, one (case III, in patent FR 971,324 - Perfectionnements aux charges explosives, meaning Improvements in Explosive Charges) being the first official document explicitly mentioning a nuclear explosion as a purpose, including for war. This patent was applied for on May 4, 1939, but only granted in 1950, being withheld by French authorities in the meantime.

Uranium appears in nature primarily in two isotopes: uranium-238 and uranium-235. When the nucleus of uranium-235 absorbs a neutron, it undergoes nuclear fission, releasing energy and, on average, 2.5 neutrons. Because uranium-235 releases more neutrons than it absorbs, it can support a chain reaction and so is described as fissile. Uranium-238, on the other hand, is not fissile as it does not normally undergo fission when it absorbs a neutron.
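For illustration, one commonly cited fission channel of uranium-235 is shown below; the exact fragments vary from event to event, and the figures are approximate:

\[
{}^{235}_{92}\mathrm{U} + n \;\rightarrow\; {}^{141}_{56}\mathrm{Ba} + {}^{92}_{36}\mathrm{Kr} + 3n + \sim 200\ \mathrm{MeV}
\]

Because each fission releases more than one neutron on average, the neutron population can grow from generation to generation, which is what makes a chain reaction in uranium-235 possible.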

By the start of the war in September 1939, many scientists likely to be persecuted by the Nazis had already escaped. Physicists on both sides were well aware of the possibility of utilizing nuclear fission as a weapon, but no one was quite sure how it could be engineered. In August 1939, concerned that Germany might have its own project to develop fission-based weapons, Albert Einstein signed a letter to U.S. President Franklin D. Roosevelt warning him of the threat.

Roosevelt responded by setting up the Uranium Committee under Lyman James Briggs but, with little initial funding ($6,000), progress was slow. It was not until the U.S. entered the war in December 1941 that Washington decided to commit the necessary resources to a top-secret high priority bomb project.

Organized research first began in Britain and Canada as part of the Tube Alloys project: the world's first nuclear weapons project. The Maud Committee was set up following the work of Frisch and Rudolf Peierls, who calculated the critical mass of uranium-235 and found it to be much smaller than previously thought, which meant that a deliverable bomb should be possible. In the March 1940 Frisch–Peierls memorandum they stated that: "The energy liberated in the explosion of such a super-bomb...will, for an instant, produce a temperature comparable to that of the interior of the sun. The blast from such an explosion would destroy life in a wide area. The size of this area is difficult to estimate, but it will probably cover the centre of a big city."

Edgar Sengier, a director of Shinkolobwe Mine in the Congo which produced by far the highest quality uranium ore in the world, had become aware of uranium's possible use in a bomb. In late 1940, fearing that it might be seized by the Germans, he shipped the mine's entire stockpile of ore to a warehouse in New York.

For 18 months, British research outpaced the American effort, but by mid-1942 it became apparent that the industrial effort required was beyond Britain's already stretched wartime economy. In September 1942, General Leslie Groves was appointed to lead the U.S. project, which became known as the Manhattan Project. Two of his first acts were to obtain authorization to assign the highest-priority AAA rating to necessary procurements, and to order the purchase of all 1,250 tons of the Shinkolobwe ore. The Tube Alloys project was quickly overtaken by the U.S. effort, and after Roosevelt and Churchill signed the Quebec Agreement in 1943, it was relocated and amalgamated into the Manhattan Project.

Leó Szilárd conceived the ideas of the electron microscope, the linear accelerator, the cyclotron, and the nuclear chain reaction, which he patented in London in 1934.

From Los Alamos to Hiroshima

UC Berkeley physicist J. Robert Oppenheimer led the Allied scientific effort at Los Alamos.
 
Proportions of uranium-238 (blue) and uranium-235 (red) found naturally versus grades that are enriched by separating the two isotopes atom-by-atom using various methods that all require a massive investment in time and money.

American research on nuclear weapons (the Manhattan Project) began with the Einstein–Szilárd letter.

With a scientific team led by J. Robert Oppenheimer, the Manhattan Project brought together some of the top scientific minds of the day, including many exiles from Europe, with the production power of American industry, for the goal of producing fission-based explosive devices before Germany. Britain and the U.S. agreed to pool their resources and information for the project, but the other Allied power, the Soviet Union (USSR), was not informed. The U.S. made a tremendous investment in the project, which at the time was the second largest industrial enterprise ever seen, spread across more than 30 sites in the U.S. and Canada. Scientific development was centralized in a secret laboratory at Los Alamos.

For a fission weapon to operate, there must be sufficient fissile material to support a chain reaction, a critical mass. To separate the fissile uranium-235 isotope from the non-fissile uranium-238, two methods were developed which took advantage of the fact that uranium-238 has a slightly greater atomic mass: electromagnetic separation and gaseous diffusion. Another secret site was erected at rural Oak Ridge, Tennessee, for the large-scale production and purification of the rare isotope, which required considerable investment. At the time, K-25, one of the Oak Ridge facilities, was the world's largest factory under one roof. The Oak Ridge site employed tens of thousands of people at its peak, most of whom had no idea what they were working on.
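To give a sense of why the diffusion plant had to be so large: gaseous diffusion worked on uranium hexafluoride gas, and by Graham's law the ideal single-stage separation factor depends only on the square root of the molecular-mass ratio. The figures below are approximate and purely illustrative:

\[
\alpha \approx \sqrt{\frac{m({}^{238}\mathrm{UF_6})}{m({}^{235}\mathrm{UF_6})}} \approx \sqrt{\frac{352}{349}} \approx 1.0043
\]

With so little enrichment gained per stage, thousands of stages had to be cascaded in series, which is why K-25 grew into the world's largest factory under one roof.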

Electromagnetic U235 separation plant at Oak Ridge, Tenn. Massive new physics machines were assembled at secret installations around the United States for the production of enriched uranium and plutonium.

Although uranium-238 cannot be used for the initial stage of an atomic bomb, when it absorbs a neutron, it becomes uranium-239 which decays into neptunium-239, and finally the relatively stable plutonium-239, which is fissile like uranium-235. After Fermi achieved the world's first sustained and controlled nuclear chain reaction with the creation of the first atomic pile, massive reactors were secretly constructed at what is now known as the Hanford Site to transform uranium-238 into plutonium for a bomb.
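The breeding chain described above can be written out explicitly (half-lives are approximate):

\[
{}^{238}\mathrm{U} + n \;\rightarrow\; {}^{239}\mathrm{U} \;\xrightarrow{\beta^-,\ \sim 23.5\ \text{min}}\; {}^{239}\mathrm{Np} \;\xrightarrow{\beta^-,\ \sim 2.4\ \text{d}}\; {}^{239}\mathrm{Pu}
\]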

The simplest form of nuclear weapon is a gun-type fission weapon, in which one sub-critical mass is fired into another sub-critical mass. The result is a super-critical mass and an uncontrolled chain reaction that creates the desired explosion. The weapons envisaged in 1942 were the two gun-type weapons, Little Boy (uranium) and Thin Man (plutonium), and the Fat Man plutonium implosion bomb.
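A rough, idealized way to see why a super-critical assembly explodes: if each fission neutron causes on average k further fissions, the neutron population after i generations is

\[
N_i = N_0 \, k^{\,i}, \qquad k > 1 \ \text{(super-critical)}
\]

With generation times far shorter than a microsecond in a fast metal assembly, even a modest k greater than one lets the reaction run away in a tiny fraction of a second; the gun design simply assembles the super-critical mass as quickly as possible before the energy released blows the material apart.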

In early 1943 Oppenheimer determined that two projects should proceed: the Thin Man project (plutonium gun) and the Fat Man project (plutonium implosion). The plutonium gun was to receive the bulk of the research effort, as it was the project with the most uncertainty involved. It was assumed that the uranium gun-type bomb could then be adapted from it.

In December 1943 the British mission of 19 scientists arrived in Los Alamos. Hans Bethe became head of the Theoretical Division.

The two fission bomb assembly methods.

In April 1944 it was found by Emilio Segrè that the plutonium-239 produced by the Hanford reactors had too high a level of background neutron radiation, and underwent spontaneous fission to a very small extent, due to the unexpected presence of plutonium-240 impurities. If such plutonium were used in a gun-type design, the chain reaction would start in the split second before the critical mass was fully assembled, blowing the weapon apart with a much lower yield than expected, in what is known as a fizzle.

As a result, development of Fat Man was given high priority. Chemical explosives were used to implode a sub-critical sphere of plutonium, thus increasing its density and making it into a critical mass. The difficulties with implosion centered on the problem of making the chemical explosives deliver a perfectly uniform shock wave upon the plutonium sphere— if it were even slightly asymmetric, the weapon would fizzle. This problem was solved by the use of explosive lenses which would focus the blast waves inside the imploding sphere, akin to the way in which an optical lens focuses light rays.

After D-Day, General Groves ordered a team of scientists to follow eastward-moving victorious Allied troops into Europe to assess the status of the German nuclear program (and to prevent the westward-moving Soviets from gaining any materials or scientific manpower). They concluded that, while Germany had an atomic bomb program headed by Werner Heisenberg, the government had not made a significant investment in the project, and it had been nowhere near success. Similarly, Japan's efforts at developing a nuclear weapon were starved of resources. The Japanese navy lost interest when a committee led by Yoshio Nishina concluded in 1943 that "it would probably be difficult even for the United States to realize the application of atomic power during the war".

Historians claim to have found a rough schematic showing a Nazi nuclear bomb. In March 1945, a German scientific team was directed by the physicist Kurt Diebner to develop a primitive nuclear device in Ohrdruf, Thuringia. Last-ditch research was conducted in an experimental nuclear reactor at Haigerloch.

Decision to drop the bomb

On April 12, after Roosevelt's death, Vice President Harry S. Truman assumed the presidency. At the time of the unconditional surrender of Germany on May 8, 1945, the Manhattan Project was still months away from producing a working weapon.

Because of the difficulties in making a working plutonium bomb, it was decided that there should be a test of the weapon. On July 16, 1945, in the desert north of Alamogordo, New Mexico, the first nuclear test took place, code-named "Trinity", using a device nicknamed "the gadget." The test, a plutonium implosion-type device, released energy equivalent to 22 kilotons of TNT, far more powerful than any weapon ever used before. The news of the test's success was rushed to Truman at the Potsdam Conference, where Churchill was briefed and Soviet Premier Joseph Stalin was informed of the new weapon. On July 26, the Potsdam Declaration was issued containing an ultimatum for Japan: either surrender or suffer "complete and utter destruction", although nuclear weapons were not mentioned.

The atomic bombings of Hiroshima and Nagasaki killed tens of thousands of Japanese combatants and non-combatants and destroyed dozens of military bases and supply depots, as well as hundreds (or thousands) of factories.

After hearing arguments from scientists and military officers over the possible use of nuclear weapons against Japan (though some recommended using them as demonstrations in unpopulated areas, most recommended using them against built-up targets, a euphemistic term for populated cities), Truman ordered the use of the weapons on Japanese cities, hoping it would send a strong message that would end in the capitulation of the Japanese leadership and avoid a lengthy invasion of the islands. Truman and his Secretary of State James F. Byrnes were also intent on ending the Pacific war before the Soviets could enter it, given that Roosevelt had promised Stalin control of Manchuria if he joined the invasion. On May 10–11, 1945, the Target Committee at Los Alamos, led by Oppenheimer, recommended Kyoto, Hiroshima, Yokohama, and Kokura as possible targets. Concerns about Kyoto's cultural heritage led to it being replaced by Nagasaki. In late July and early August 1945, a series of leaflets was dropped over several Japanese cities warning of an imminent destructive attack (though not mentioning nuclear bombs). Evidence suggests that these leaflets were never dropped over Hiroshima and Nagasaki, or were dropped too late, although one testimony contradicts this.

Hiroshima: burns from the intense thermal effect of the atomic bomb.

On August 6, 1945, a uranium-based weapon, Little Boy, was detonated above the Japanese city of Hiroshima, and three days later, a plutonium-based weapon, Fat Man, was detonated above the Japanese city of Nagasaki. To date, Hiroshima and Nagasaki remain the only two instances of nuclear weapons being used in combat. The atomic raids killed at least one hundred thousand Japanese civilians and military personnel outright through heat, radiation, and blast effects. Many tens of thousands would later die of radiation sickness and related cancers. Truman promised a "rain of ruin" if Japan did not surrender immediately, threatening to systematically eliminate Japan's ability to wage war. On August 15, Emperor Hirohito announced Japan's surrender.

Soviet atomic bomb project

The Soviet Union was not invited to share in the new weapons developed by the United States and the other Allies. During the war, information had been pouring in from a number of volunteer spies involved with the Manhattan Project (known in Soviet cables under the code-name Enormoz), and the Soviet nuclear physicist Igor Kurchatov was carefully watching the Allied weapons development. It therefore came as no surprise to Stalin when Truman informed him at the Potsdam Conference that he had a "powerful new weapon"; Truman was shocked at Stalin's lack of interest. Stalin was nonetheless outraged by the situation, more by the Americans' guarded monopoly of the bomb than by the weapon itself. Some historians have argued that Truman authorized the use of nuclear weapons partly as a "negotiating tool" for the early Cold War. In alarm at this monopoly, the Soviets urgently undertook their own atomic program.

The Soviet spies in the U.S. project were all volunteers and none were Soviet citizens. One of the most valuable, Klaus Fuchs, was a German émigré theoretical physicist who had been part of the early British nuclear efforts and the UK mission to Los Alamos. Fuchs had been intimately involved in the development of the implosion weapon, and passed on detailed cross-sections of the Trinity device to his Soviet contacts. Other Los Alamos spies—none of whom knew each other—included Theodore Hall and David Greenglass. The information was kept but not acted upon, as the Soviet Union was still too busy fighting the war in Europe to devote resources to this new project.

In the years immediately after World War II, the issue of who should control atomic weapons became a major international point of contention. Many of the Los Alamos scientists who had built the bomb began to call for "international control of atomic energy," often calling for either control by transnational organizations or the purposeful distribution of weapons information to all superpowers, but due to a deep distrust of the intentions of the Soviet Union, both in postwar Europe and in general, the policy-makers of the United States worked to maintain the American nuclear monopoly.

A half-hearted plan for international control was proposed at the newly formed United Nations by Bernard Baruch (The Baruch Plan), but it was clear both to American commentators—and to the Soviets—that it was an attempt primarily to stymie Soviet nuclear efforts. The Soviets vetoed the plan, effectively ending any immediate postwar negotiations on atomic energy, and made overtures towards banning the use of atomic weapons in general.

The Soviets had put their full industrial might and manpower into the development of their own atomic weapons. The initial problem for the Soviets was primarily one of resources—they had not scouted out uranium resources in the Soviet Union and the U.S. had made deals to monopolise the largest known (and high purity) reserves in the Belgian Congo. The USSR used penal labour to mine the old deposits in Czechoslovakia—now an area under their control—and searched for other domestic deposits (which were eventually found).

Two days after the bombing of Nagasaki, the U.S. government released an official technical history of the Manhattan Project, authored by Princeton physicist Henry DeWolf Smyth, known colloquially as the Smyth Report. The sanitized summary of the wartime effort focused primarily on the production facilities and scale of investment, written in part to justify the wartime expenditure to the American public.

The Soviet program, under the suspicious watch of former NKVD chief Lavrenty Beria (a participant and victor in Stalin's Great Purge of the 1930s), would use the Report as a blueprint, seeking to duplicate as much as possible the American effort. The "secret cities" used for the Soviet equivalents of Hanford and Oak Ridge literally vanished from the maps for decades to come.

At the Soviet equivalent of Los Alamos, Arzamas-16, physicist Yuli Khariton led the scientific effort to develop the weapon. Beria distrusted his scientists, however, and he distrusted the carefully collected espionage information. As such, Beria assigned multiple teams of scientists to the same task without informing each team of the other's existence. If they arrived at different conclusions, Beria would bring them together for the first time and have them debate with their newfound counterparts. Beria used the espionage information as a way to double-check the progress of his scientists, and in his effort for duplication of the American project even rejected more efficient bomb designs in favor of ones that more closely mimicked the tried-and-true Fat Man bomb used by the U.S. against Nagasaki.

On August 29, 1949, the effort bore fruit when the USSR successfully tested its first fission bomb, dubbed "Joe-1" by the U.S. The news of the first Soviet bomb was announced to the world first by the United States, which had detected atmospheric radioactive traces generated from its test site in the Kazakh Soviet Socialist Republic.

The loss of the American monopoly on nuclear weapons marked the first tit-for-tat of the nuclear arms race.

American developments after World War II

With the Atomic Energy Act of 1946, the U.S. Congress established the civilian Atomic Energy Commission (AEC) to take over the development of nuclear weapons from the military, and to develop nuclear power. The AEC made use of many private companies in processing uranium and thorium and in other urgent tasks related to the development of bombs. Many of these companies had very lax safety measures and employees were sometimes exposed to radiation levels far above what was allowed then or now. (In 1974, the Formerly Utilized Sites Remedial Action Program (FUSRAP) of the Army Corps of Engineers was set up to deal with contaminated sites left over from these operations.)

The Atomic Energy Act also established the United States Congress Joint Committee on Atomic Energy, which had broad legislative and executive oversight jurisdiction over nuclear matters and became one of the most powerful congressional committees in U.S. history. Its two early chairmen, Senator Brien McMahon and Senator Bourke Hickenlooper, both pushed for increased production of nuclear materials and a resultant increase in the American atomic stockpile. The size of that stockpile, which had been low in the immediate postwar years, was a closely guarded secret. Indeed, within the U.S. government, including the Departments of State and Defense, there was considerable confusion over who actually knew the size of the stockpile, and some people chose not to know for fear they might disclose the number accidentally.

The first thermonuclear weapons

Hungarian physicist Edward Teller toiled for years trying to discover a way to make a fusion bomb.

The notion of using a fission weapon to ignite a process of nuclear fusion can be dated back to September 1941, when it was first proposed by Enrico Fermi to his colleague Edward Teller during a discussion at Columbia University. At the first major theoretical conference on the development of an atomic bomb hosted by J. Robert Oppenheimer at the University of California, Berkeley in the summer of 1942, Teller directed the majority of the discussion towards this idea of a "Super" bomb.

It was thought at the time that a fission weapon would be quite simple to develop and that perhaps work on a hydrogen bomb (thermonuclear weapon) would be possible to complete before the end of the Second World War. In reality, however, the problem of building a regular atomic bomb was large enough to preoccupy the scientists for the next few years, leaving little capacity for the far more speculative "Super" bomb. Only Teller continued working on the project—against the will of project leaders Oppenheimer and Hans Bethe.

The Joe-1 atomic bomb test by the Soviet Union that took place in August 1949 came earlier than expected by Americans, and over the next several months there was an intense debate within the U.S. government, military, and scientific communities regarding whether to proceed with development of the far more powerful Super.

After the atomic bombings of Japan, many scientists at Los Alamos rebelled against the notion of creating a weapon thousands of times more powerful than the first atomic bombs. For the scientists the question was in part technical—the weapon design was still quite uncertain and unworkable—and in part moral: such a weapon, they argued, could only be used against large civilian populations, and could thus only be used as a weapon of genocide.

Many scientists, such as Bethe, urged that the United States should not develop such weapons and should instead set an example for the Soviet Union. Promoters of the weapon, including Teller, Ernest Lawrence, and Luis Alvarez, argued that such a development was inevitable, and that to deny such protection to the people of the United States—especially when the Soviet Union was likely to create such a weapon itself—was an immoral and unwise act.

Oppenheimer, who was now head of the General Advisory Committee of the successor to the Manhattan Project, the Atomic Energy Commission, presided over a recommendation against the development of the weapon. The reasons were in part because the success of the technology seemed limited at the time (and not worth the investment of resources to confirm whether this was so), and because Oppenheimer believed that the atomic forces of the United States would be more effective if they consisted of many large fission weapons (of which multiple bombs could be dropped on the same targets) rather than the large and unwieldy super bombs, for which there was a relatively limited number of targets of sufficient size to warrant such a development.

What is more, if such weapons were developed by both superpowers, they would be more effective against the U.S. than against the USSR, as the U.S. had far more regions of dense industrial and civilian activity as targets for large weapons than the Soviet Union.

The "Mike" shot in 1952 inaugurated the age of fusion weapons.

In the end, President Truman made the final decision, looking for a proper response to the first Soviet atomic bomb test in 1949. On January 31, 1950, Truman announced a crash program to develop the hydrogen (fusion) bomb. At this point, however, the exact mechanism was still not known: the classical hydrogen bomb, whereby the heat of the fission bomb would be used to ignite the fusion material, seemed highly unworkable. However, an insight by Los Alamos mathematician Stanislaw Ulam showed that the fission bomb and the fusion fuel could be in separate parts of the bomb, and that radiation of the fission bomb could first work in a way to compress the fusion material before igniting it.

Teller pushed the notion further, and used the results of the boosted-fission "George" test (a boosted-fission device using a small amount of fusion fuel to boost the yield of a fission bomb) to confirm the fusion of heavy hydrogen elements before preparing for their first true multi-stage, Teller-Ulam hydrogen bomb test. Many scientists, initially against the weapon, such as Oppenheimer and Bethe, changed their previous opinions, seeing the development as being unstoppable.

The first fusion bomb was tested by the United States in Operation Ivy on November 1, 1952, on Elugelab Island in the Enewetak (or Eniwetok) Atoll of the Marshall Islands, code-named "Mike." Mike used liquid deuterium as its fusion fuel and a large fission weapon as its trigger. The device was a prototype design and not a deliverable weapon: standing over 20 ft (6 m) high and weighing at least 140,000 lb (64 t), with refrigeration equipment adding a further 24,000 lb (11,000 kg), it could not have been dropped from even the largest planes.

Its explosion yielded energy equivalent to 10.4 megatons of TNT—over 450 times the power of the bomb dropped onto Nagasaki—and obliterated Elugelab, leaving an underwater crater 6,240 ft (1.9 km) wide and 164 ft (50 m) deep where the island had once been. Truman had initially tried to create a media blackout about the test—hoping it would not become an issue in the upcoming presidential election—but on January 7, 1953, Truman announced the development of the hydrogen bomb to the world as hints and speculations of it were already beginning to emerge in the press.

Not to be outdone, the Soviet Union exploded its first thermonuclear device, designed by the physicist Andrei Sakharov, on August 12, 1953, labeled "Joe-4" by the West. This created concern within the U.S. government and military, because, unlike Mike, the Soviet device was a deliverable weapon, which the U.S. did not yet have. This first device though was arguably not a true hydrogen bomb, and could only reach explosive yields in the hundreds of kilotons (never reaching the megaton range of a staged weapon). Still, it was a powerful propaganda tool for the Soviet Union, and the technical differences were fairly oblique to the American public and politicians.

Following the Mike blast by less than a year, Joe-4 seemed to validate claims that the bombs were inevitable and to vindicate those who had supported the development of the fusion program. Coming during the height of McCarthyism, the effect was pronounced on the security hearings in early 1954, which revoked former Los Alamos director Robert Oppenheimer's security clearance on the grounds that he was unreliable, had not supported the American hydrogen bomb program, and had long-standing left-wing ties from the 1930s. Edward Teller was the only major scientist to testify against Oppenheimer at the hearing, which resulted in Teller's virtual expulsion from the physics community.

On March 1, 1954, the U.S. detonated its first practical thermonuclear weapon (which used isotopes of lithium as its fusion fuel), known as the "Shrimp" device of the Castle Bravo test, at Bikini Atoll, Marshall Islands. The device yielded 15 megatons, more than twice its expected yield, and became the worst radiological disaster in U.S. history. The combination of the unexpectedly large blast and poor weather conditions caused a cloud of radioactive nuclear fallout to contaminate over 7,000 square miles (18,000 km2). 239 Marshall Island natives and 28 Americans were exposed to significant amounts of radiation, resulting in elevated levels of cancer and birth defects in the years to come.

The crew of the Japanese tuna-fishing boat Lucky Dragon 5, who had been fishing just outside the exclusion zone, returned to port suffering from radiation sickness and skin burns; one crew member was terminally ill. Efforts were made to recover the cargo of contaminated fish but at least two large tuna were probably sold and eaten. A further 75 tons of tuna caught between March and December were found to be unfit for human consumption. When the crew member died and the full results of the contamination were made public by the U.S., Japanese concerns were reignited about the hazards of radiation.

The hydrogen bomb age had a profound effect on the thoughts of nuclear war in the popular and military mind. With only fission bombs, nuclear war was something that possibly could be limited. Dropped by planes and only able to destroy the most built up areas of major cities, it was possible for many to look at fission bombs as a technological extension of large-scale conventional bombing—such as the extensive firebombing of German and Japanese cities during World War II. Proponents brushed aside as grave exaggeration claims that such weapons could lead to worldwide death or harm.

Even in the decades before fission weapons, there had been speculation about the possibility for human beings to end all life on the planet, either by accident or purposeful maliciousness—but technology had not provided the capacity for such action. The great power of hydrogen bombs made worldwide annihilation possible.

The Castle Bravo incident itself raised a number of questions about the survivability of a nuclear war. Government scientists in both the U.S. and the USSR had insisted that fusion weapons, unlike fission weapons, were cleaner, as fusion reactions did not produce the dangerously radioactive by-products of fission reactions. While technically true, this hid a more gruesome point: the last stage of a multi-staged hydrogen bomb often used the neutrons produced by the fusion reactions to induce fissioning in a jacket of natural uranium, and provided around half of the yield of the device itself.

This fission stage made fusion weapons considerably dirtier than they were made out to be. This was evident in the towering cloud of deadly fallout that followed the Bravo test. When the Soviet Union tested its first megaton device in 1955, the possibility of a limited nuclear war seemed even more remote in the public and political mind. Even cities and countries that were not direct targets would suffer fallout contamination. Extremely harmful fission products would disperse via normal weather patterns and embed in soil and water around the planet.

Speculation began to run towards what fallout and dust from a full-scale nuclear exchange would do to the world as a whole, rather than just cities and countries directly involved. In this way, the fate of the world was now tied to the fate of the bomb-wielding superpowers.

Deterrence and brinkmanship

November 1951 nuclear test at the Nevada Test Site, from Operation Buster, with a yield of 21 kilotons. It was the first U.S. nuclear field exercise conducted on land; troops shown are 6 mi (9.7 km) from the blast.

Throughout the 1950s and the early 1960s the U.S. and the USSR both endeavored, in a tit-for-tat approach, to prevent the other power from acquiring nuclear supremacy. This had massive political and cultural effects during the Cold War. As one instance of this mindset, in the early 1950s it was proposed to drop a nuclear bomb on the Moon as a globally visible demonstration of American weaponry.

The first atomic bombs dropped on Hiroshima and Nagasaki on August 6 and 9, 1945, respectively, were large, custom-made devices, requiring highly trained personnel for their arming and deployment. They could be dropped only from the largest bomber planes—at the time the B-29 Superfortress—and each plane could only carry a single bomb in its hold. The first hydrogen bombs were similarly massive and complicated. This ratio of one plane to one bomb was still fairly impressive in comparison with conventional, non-nuclear weapons, but against other nuclear-armed countries it was considered a grave danger.

In the immediate postwar years, the U.S. expended much effort on making the bombs "G.I.-proof"—capable of being used and deployed by members of the U.S. Army, rather than Nobel Prize–winning scientists. In the 1950s, the U.S. undertook a nuclear testing program to improve the nuclear arsenal.

Starting in 1951, the Nevada Test Site (in the Nevada desert) became the primary location for all U.S. nuclear testing (in the USSR, Semipalatinsk Test Site in Kazakhstan served a similar role). Tests were divided into two primary categories: "weapons related" (verifying that a new weapon worked or looking at exactly how it worked) and "weapons effects" (looking at how weapons behaved under various conditions or how structures behaved when subjected to weapons).

In the beginning, almost all nuclear tests were either atmospheric (conducted above ground, in the atmosphere) or underwater (such as some of the tests done in the Marshall Islands). Testing was used as a sign of both national and technological strength, but also raised questions about the safety of the tests, which released nuclear fallout into the atmosphere (most dramatically with the Castle Bravo test in 1954, but in more limited amounts with almost all atmospheric nuclear testing).

Because testing was seen as a sign of technological development (the ability to design usable weapons without some form of testing was considered dubious), halts on testing were often called for as stand-ins for halts in the nuclear arms race itself, and many prominent scientists and statesmen lobbied for a ban on nuclear testing. In 1958, the U.S., USSR, and the United Kingdom (a new nuclear power) declared a temporary testing moratorium for both political and health reasons, but by 1961 the Soviet Union had broken the moratorium and both the USSR and the U.S. began testing with great frequency.

As a show of political strength, the Soviet Union tested the largest-ever nuclear weapon in October 1961, the massive Tsar Bomba, which was tested in a reduced state with a yield of around 50 megatons—in its full state it was estimated to have been around 100 Mt. The weapon was largely impractical for actual military use, but was hot enough to induce third-degree burns at a distance of 62 mi (100 km). In its full, "dirty" design, it would have increased the amount of worldwide fallout since 1945 by 25%.

In 1963, all nuclear and many non-nuclear states signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground tests.

Most tests were considerably more modest, and worked for direct technical purposes as well as their potential political overtones. Weapons improvements took on two primary forms. One was an increase in efficiency and power, and within only a few years fission bombs were developed that were many times more powerful than the ones created during World War II. The other was a program of miniaturization, reducing the size of the nuclear weapons.

Smaller bombs meant that bombers could carry more of them, and also that they could be carried on the new generation of rockets in development in the 1950s and 1960s. U.S. rocket science received a large boost in the postwar years, largely with the help of engineers acquired from the Nazi rocketry program. These included scientists such as Wernher von Braun, who had helped design the V-2 rockets the Nazis launched across the English Channel. An American program, Project Paperclip, had endeavored to move German scientists into American hands (and away from Soviet hands) and put them to work for the U.S.

Weapons improvement

The introduction of nuclear armed rockets, like the MGR-1 Honest John, reflected a change in both nuclear technology and strategy.
 
Long-range bomber aircraft, such as the B-52 Stratofortress, allowed deployment of a wide range of strategic nuclear weapons.
 
A SSM-N-8 Regulus is launched from USS Halibut; prior to the development of the SLBM, the United States employed submarines with Regulus cruise missiles in the submarine-based strategic deterrent role.

Early nuclear armed rockets—such as the MGR-1 Honest John, first deployed by the U.S. in 1953—were surface-to-surface missiles with relatively short ranges (around 15 mi/25 km maximum) and yields around twice the size of the first fission weapons. The limited range meant they could only be used in certain types of military situations. U.S. rockets could not, for example, threaten Moscow with an immediate strike, and could only be used as tactical weapons (that is, for small-scale military situations).

Strategic weapons—weapons that could threaten an entire country—relied, for the time being, on long-range bombers that could penetrate deep into enemy territory. In the U.S., this requirement led, in 1946, to creation of the Strategic Air Command—a system of bombers headed by General Curtis LeMay (who previously presided over the firebombing of Japan during WWII). In operations like Chrome Dome, SAC kept nuclear-armed planes in the air 24 hours a day, ready for an order to attack Moscow.

These technological possibilities enabled nuclear strategy to develop a logic considerably different from previous military thinking. Because the threat of nuclear warfare was so awful, it was first thought that it might make any war of the future impossible. President Dwight D. Eisenhower's doctrine of "massive retaliation" in the early years of the Cold War was a message to the USSR, saying that if the Red Army attempted to invade the parts of Europe not given to the Eastern bloc during the Potsdam Conference (such as West Germany), nuclear weapons would be used against the Soviet troops and potentially the Soviet leaders.

With the development of more rapid-response technologies (such as rockets and long-range bombers), this policy began to shift. If the Soviet Union also had nuclear weapons and a policy of "massive retaliation" was carried out, it was reasoned, then any Soviet forces not killed in the initial attack, or launched while the attack was ongoing, would be able to serve their own form of nuclear retaliation against the U.S. Recognizing that this was an undesirable outcome, military officers and game theorists at the RAND think tank developed a nuclear warfare strategy that was eventually called Mutually Assured Destruction (MAD).

MAD divided potential nuclear war into two stages: first strike and second strike. First strike meant the first use of nuclear weapons by one nuclear-equipped nation against another nuclear-equipped nation. If the attacking nation did not prevent the attacked nation from a nuclear response, the attacked nation would respond with a second strike against the attacking nation. In this situation, whether the U.S. first attacked the USSR or the USSR first attacked the U.S., the result would be that both nations would be damaged to the point of utter social collapse.

According to game theory, because starting a nuclear war was suicidal, no logical country would shoot first. However, if a country could launch a first strike that utterly destroyed the target country's ability to respond, that might give that country the confidence to initiate a nuclear war. The object of a country operating by the MAD doctrine is to deny the opposing country this first strike capability.
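The game-theoretic point can be made concrete with a toy model. The sketch below is purely illustrative: the payoff numbers and the function name are invented assumptions, not anything drawn from RAND's actual analyses, but it shows why a guaranteed second strike removes the incentive to shoot first.

# Toy payoff model of MAD-style deterrence (illustrative only; all numbers
# are invented assumptions, not historical data or an actual RAND model).
#
# One side considers a first strike. If the other side retains a credible
# second-strike capability, striking first triggers retaliation and both
# sides suffer catastrophic losses; if it does not, the striker "wins".

def outcome(strike_first: bool, victim_has_second_strike: bool) -> tuple:
    """Return (striker_payoff, victim_payoff) for a toy nuclear exchange."""
    if not strike_first:
        return (0, 0)            # no war: status quo for both sides
    if victim_has_second_strike:
        return (-100, -100)      # mutual assured destruction
    return (10, -100)            # disarming first strike "succeeds"

for second_strike in (True, False):
    strike_payoff = outcome(True, second_strike)[0]
    hold_payoff = outcome(False, second_strike)[0]
    choice = "hold" if hold_payoff >= strike_payoff else "strike first"
    print(f"victim second-strike capability = {second_strike}: "
          f"strike={strike_payoff}, hold={hold_payoff} -> rational choice: {choice}")

Under these assumed payoffs, holding back is rational only as long as the other side can retaliate, which is exactly why the MAD doctrine treats denying the opponent a disarming first-strike capability as the central objective.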

MAD played on two seemingly opposed modes of thought: cold logic and emotional fear. The English phrase by which MAD was often known, "nuclear deterrence," was translated by the French as "dissuasion" and by the Soviets as "terrorization." This apparent paradox of nuclear war was summed up by British Prime Minister Winston Churchill as "the worse things get, the better they are"—the greater the threat of mutual destruction, the safer the world would be.

This philosophy made a number of technological and political demands on participating nations. For one thing, it said that it should always be assumed that an enemy nation may be trying to acquire first strike capability, which must always be avoided. In American politics this translated into demands to avoid "bomber gaps" and "missile gaps" where the Soviet Union could potentially outshoot the Americans. It also encouraged the production of thousands of nuclear weapons by both the U.S. and the USSR, far more than needed to simply destroy the major civilian and military infrastructures of the opposing country. These policies and strategies were satirized in the 1964 Stanley Kubrick film Dr. Strangelove, in which the Soviets, unable to keep up with the US's first strike capability, instead plan for MAD by building a Doomsday Machine, and thus, after a (literally) mad US General orders a nuclear attack on the USSR, the end of the world is brought about.

With early warning systems, it was thought that the strikes of nuclear war would come from dark rooms filled with computers, not the battlefield of the wars of old.

The policy also encouraged the development of the first early warning systems. Conventional war, even at its fastest, was fought over days and weeks. With long-range bombers, the time from the start of a nuclear attack to its conclusion was mere hours. Rockets could reduce a conflict to minutes. Planners reasoned that conventional command and control systems could not adequately react to a nuclear attack, so great lengths were taken to develop computer systems that could look for enemy attacks and direct rapid responses.

The U.S. poured massive funding into development of SAGE, a system that could track and intercept enemy bomber aircraft using information from remote radar stations. It was the first computer system to feature real-time processing, multiplexing, and display devices, and a direct predecessor of modern networked computers.

Emergence of the anti-nuclear movement

Women Strike for Peace during the Cuban Missile Crisis

The atomic bombings of Hiroshima and Nagasaki and the end of World War II followed quickly after the 1945 Trinity nuclear test: the Little Boy device was detonated over the Japanese city of Hiroshima on 6 August 1945. Exploding with a yield equivalent to 12,500 tonnes of TNT, the blast and thermal wave of the bomb destroyed nearly 50,000 buildings and killed approximately 75,000 people. Subsequently, the world's nuclear weapons stockpiles grew.

Operation Crossroads was a series of nuclear weapon tests conducted by the United States at Bikini Atoll in the Pacific Ocean in the summer of 1946. Its purpose was to test the effect of nuclear weapons on naval ships. To prepare the Bikini atoll for the nuclear tests, Bikini's native residents were evicted from their homes and resettled on smaller, uninhabited islands where they were unable to sustain themselves.

National leaders debated the impact of nuclear weapons on domestic and foreign policy. Also involved in the debate about nuclear weapons policy was the scientific community, through professional associations such as the Federation of Atomic Scientists and the Pugwash Conference on Science and World Affairs. Radioactive fallout from nuclear weapons testing was first drawn to public attention in 1954 when a hydrogen bomb test in the Pacific contaminated the crew of the Japanese fishing boat Lucky Dragon. One of the fishermen died in Japan seven months later. The incident caused widespread concern around the world and "provided a decisive impetus for the emergence of the anti-nuclear weapons movement in many countries". The anti-nuclear weapons movement grew rapidly because for many people the atomic bomb "encapsulated the very worst direction in which society was moving".

Peace movements emerged in Japan and in 1954 they converged to form a unified "Japanese Council Against Atomic and Hydrogen Bombs". Japanese opposition to the Pacific nuclear weapons tests was widespread, and "an estimated 35 million signatures were collected on petitions calling for bans on nuclear weapons". The Russell–Einstein Manifesto was issued in London on July 9, 1955, by Bertrand Russell in the midst of the Cold War. It highlighted the dangers posed by nuclear weapons and called for world leaders to seek peaceful resolutions to international conflict. The signatories included eleven pre-eminent intellectuals and scientists, including Albert Einstein, who signed it just days before his death on April 18, 1955. A few days after the release, philanthropist Cyrus S. Eaton offered to sponsor a conference—called for in the manifesto—in Pugwash, Nova Scotia, Eaton's birthplace. This conference was to be the first of the Pugwash Conferences on Science and World Affairs, held in July 1957.

In the United Kingdom, the first Aldermaston March organised by the Campaign for Nuclear Disarmament took place at Easter 1958, when several thousand people marched for four days from Trafalgar Square, London, to the Atomic Weapons Research Establishment close to Aldermaston in Berkshire, England, to demonstrate their opposition to nuclear weapons. The Aldermaston marches continued into the late 1960s when tens of thousands of people took part in the four-day marches.

In 1959, a letter in the Bulletin of the Atomic Scientists was the start of a successful campaign to stop the Atomic Energy Commission dumping radioactive waste in the sea 19 kilometres from Boston. On November 1, 1961, at the height of the Cold War, about 50,000 women brought together by Women Strike for Peace marched in 60 cities in the United States to demonstrate against nuclear weapons. It was the largest national women's peace protest of the 20th century.

In 1958, Linus Pauling and his wife presented the United Nations with a petition signed by more than 11,000 scientists calling for an end to nuclear-weapon testing. The "Baby Tooth Survey," headed by Dr. Louise Reiss, demonstrated conclusively in 1961 that above-ground nuclear testing posed significant public health risks in the form of radioactive fallout, spread primarily via milk from cows that had ingested contaminated grass. Public pressure and the research results subsequently led to a moratorium on above-ground nuclear weapons testing, followed by the Partial Test Ban Treaty, signed in 1963 by John F. Kennedy and Nikita Khrushchev.

Cuban Missile Crisis

U-2 photographs revealed that the Soviet Union was stationing nuclear missiles on the island of Cuba in 1962, beginning the Cuban Missile Crisis.
 
Submarine-launched ballistic missiles with multiple warheads made defending against nuclear attack impractical.

Bombers and short-range rockets were not reliable: planes could be shot down, and earlier nuclear missiles could cover only a limited range— for example, the first Soviet rockets' range limited them to targets in Europe. However, by the 1960s, both the United States and the Soviet Union had developed intercontinental ballistic missiles, which could be launched from extremely remote areas far away from their target. They had also developed submarine-launched ballistic missiles, which had less range but could be launched from submarines very close to the target without any radar warning. This made any national protection from nuclear missiles increasingly impractical.

The military realities made for a precarious diplomatic situation. The international politics of brinkmanship led leaders to proclaim their willingness to participate in a nuclear war rather than concede any advantage to their opponents, feeding public fears that their generation might be the last. Civil defense programs undertaken by both superpowers, exemplified by the construction of fallout shelters and assurances to civilians that nuclear war was survivable, did little to ease public concerns.

The climax of brinkmanship came in October 1962, when an American U-2 spy plane photographed a series of launch sites for medium-range ballistic missiles being constructed on the island of Cuba, just off the coast of the southern United States, beginning what became known as the Cuban Missile Crisis. The U.S. administration of John F. Kennedy concluded that the Soviet Union, then led by Nikita Khrushchev, was planning to station Soviet nuclear missiles on the island (as a response to the placement of US Jupiter MRBMs in Italy and Turkey), which was under the control of communist Fidel Castro. On October 22, Kennedy announced the discoveries in a televised address. He announced a naval blockade around Cuba that would turn back Soviet nuclear shipments, and warned that the military was prepared "for any eventualities." The missiles had a range of 2,400 miles (4,000 km) and would have allowed the Soviet Union to quickly destroy many major American cities on the Eastern Seaboard if a nuclear war began.

The leaders of the two superpowers stood nose to nose, seemingly poised over the beginnings of a third world war. Khrushchev's ambitions for putting the weapons on the island were motivated in part by the fact that the U.S. had stationed similar weapons in Britain, Italy, and nearby Turkey, and had sponsored a failed invasion of Cuba the year before, the Bay of Pigs Invasion. On October 26, Khrushchev sent a message to Kennedy offering to withdraw all missiles if Kennedy committed to a policy of no future invasions of Cuba. Khrushchev worded the threat of assured destruction eloquently:

"You and I should not now pull on the ends of the rope in which you have tied a knot of war, because the harder you and I pull, the tighter the knot will become. And a time may come when this knot is tied so tight that the person who tied it is no longer capable of untying it, and then the knot will have to be cut. What that would mean I need not explain to you, because you yourself understand perfectly what dreaded forces our two countries possess."

A day later, however, the Soviets sent another message, this time demanding that the U.S. remove its missiles from Turkey before any missiles were withdrawn from Cuba. On the same day, a U-2 plane was shot down over Cuba and another almost intercepted over the Soviet Union, as Soviet merchant ships neared the quarantine zone. Kennedy responded by accepting the first deal publicly, and sending his brother Robert to the Soviet embassy to accept the second deal privately. On October 28, the Soviet ships stopped at the quarantine line and, after some hesitation, turned back towards the Soviet Union. Khrushchev announced that he had ordered the removal of all missiles in Cuba, and U.S. Secretary of State Dean Rusk was moved to comment, "We went eyeball to eyeball, and the other fellow just blinked."

The Crisis was later seen as the closest the U.S. and the USSR ever came to nuclear war, one narrowly averted by last-minute compromise on both sides. Fears of communication difficulties led to the installation of the first hotline, a direct link between the superpowers that allowed them to more easily discuss future military activities and political maneuverings. It had been made clear that missiles, bombers, submarines, and computerized firing systems made escalating any situation to Armageddon far easier than anybody desired.

After stepping so close to the brink, both the U.S. and the USSR worked to reduce their nuclear tensions in the years immediately following. The most immediate culmination of this work was the signing of the Partial Test Ban Treaty in 1963, in which the U.S. and USSR agreed to no longer test nuclear weapons in the atmosphere, underwater, or in outer space. Testing underground continued, allowing for further weapons development, but the worldwide fallout risks were purposefully reduced, and the era of using massive nuclear tests as a form of saber rattling ended.

In December 1979, NATO decided to deploy cruise and Pershing II missiles in Western Europe in response to Soviet deployment of intermediate range mobile missiles, and in the early 1980s, a "dangerous Soviet-US nuclear confrontation" arose. In New York on June 12, 1982, one million people gathered to protest about nuclear weapons, and to support the second UN Special Session on Disarmament. As the nuclear abolitionist movement grew, there were many protests at the Nevada Test Site. For example, on February 6, 1987, nearly 2,000 demonstrators, including six members of Congress, protested against nuclear weapons testing and more than 400 people were arrested. Four of the significant groups organizing this renewal of anti-nuclear activism were Greenpeace, The American Peace Test, The Western Shoshone, and Nevada Desert Experience.

There have been at least four major false alarms, the most recent in 1995, that resulted in the activation of nuclear attack early-warning protocols. They include the accidental loading of a training tape into American early-warning computers; a computer chip failure that appeared to show a random number of attacking missiles; a rare alignment of the Sun, the U.S. missile fields, and a Soviet early-warning satellite that caused the satellite to confuse high-altitude clouds with missile launches; and the launch of a Norwegian research rocket, which resulted in President Yeltsin activating his nuclear briefcase for the first time.

Initial proliferation

In the fifties and sixties, three more countries joined the "nuclear club." The United Kingdom had been an integral part of the Manhattan Project following the Quebec Agreement in 1943. The passing of the McMahon Act by the United States in 1946 unilaterally broke this partnership and prevented the passage of any further information to the United Kingdom. The British Government, under Clement Attlee, determined that a British Bomb was essential. Because of British involvement in the Manhattan Project, Britain had extensive knowledge in some areas, but not in others.

An improved version of 'Fat Man' was developed, and on 26 February 1952, Prime Minister Winston Churchill announced that the United Kingdom also had an atomic bomb; a successful test took place on 3 October 1952. At first these were free-fall bombs, intended for use by the V Force of jet bombers. A Vickers Valiant dropped the first UK nuclear weapon on 11 October 1956 at Maralinga, South Australia. Later came a missile, Blue Steel, intended for carriage by the V Force bombers, and then the Blue Streak medium-range ballistic missile (later canceled). Anglo-American cooperation on nuclear weapons was restored by the 1958 US-UK Mutual Defence Agreement. As a result of this and the Polaris Sales Agreement, the United Kingdom has bought United States designs for submarine missiles and fitted its own warheads. It retains full independent control over the use of the missiles. It no longer possesses any free-fall bombs.

France had been heavily involved in nuclear research before World War II through the work of the Joliot-Curies. This was discontinued after the war because of the instability of the Fourth Republic and lack of finances. However, in the 1950s, France launched a civil nuclear research program, which produced plutonium as a byproduct.

In 1956, France formed a secret Committee for the Military Applications of Atomic Energy and a development program for delivery vehicles. With the return of Charles de Gaulle to the French presidency in 1958, final decisions to build a bomb were made, which led to a successful test in 1960. Since then, France has developed and maintained its own nuclear deterrent independent of NATO.

In 1951, China and the Soviet Union signed an agreement whereby China supplied uranium ore in exchange for technical assistance in producing nuclear weapons. In 1953, China established a research program under the guise of civilian nuclear energy. Throughout the 1950s the Soviet Union provided large amounts of equipment. But as relations between the two countries worsened, the Soviets reduced the amount of assistance and, in 1959, refused to donate a bomb for copying purposes. Despite this, the Chinese made rapid progress. China first gained possession of nuclear weapons in 1964, making it the fifth country to have them. It tested its first atomic bomb at Lop Nur on October 16, 1964 (Project 596); tested a nuclear missile on October 25, 1966; and tested a thermonuclear (hydrogen) bomb (Test No. 6) on June 14, 1967. China ultimately conducted a total of 45 nuclear tests; although the country has never become a signatory to the Limited Test Ban Treaty, it conducted its last nuclear test in 1996. In the 1980s, China's nuclear weapons program was a source of nuclear proliferation, as China transferred its CHIC-4 technology to Pakistan. China became a party to the Non-Proliferation Treaty (NPT) as a nuclear weapon state in 1992, and joined the Nuclear Suppliers Group (NSG) in 2004. As of 2017, the number of Chinese warheads was thought to be in the low hundreds; The Atomic Heritage Foundation notes a 2018 estimate of approximately 260 nuclear warheads, including between 50 and 60 ICBMs and four nuclear submarines. China declared a policy of "no first use" in 1964, the only nuclear weapons state to announce such a policy; this declaration has no effect on its capabilities and there are no diplomatic means of verifying or enforcing this declaration.

Cold War

ICBMs, like the American Minuteman missile, allowed nations to deliver nuclear weapons thousands of miles away with relative ease.
 
On 12 December 1982, 30,000 women held hands around the 6-mile (9.7 km) perimeter of the RAF Greenham Common base, in protest against the decision to site American cruise missiles there.

After World War II, the balance of power between the Eastern and Western blocs and the fear of global destruction prevented the further military use of atomic bombs. This fear was even a central part of Cold War strategy, referred to as the doctrine of Mutually Assured Destruction. So important was this balance to international political stability that a treaty, the Anti-Ballistic Missile Treaty (or ABM treaty), was signed by the U.S. and the USSR in 1972 to curtail the development of defenses against nuclear weapons and the ballistic missiles that carry them. This doctrine resulted in a large increase in the number of nuclear weapons, as each side sought to ensure it possessed the firepower to destroy the opposition in all possible scenarios.

Early delivery systems for nuclear devices were primarily bombers like the United States B-29 Superfortress and Convair B-36, and later the B-52 Stratofortress. Ballistic missile systems, based on Wernher von Braun's World War II designs (specifically the V-2 rocket), were developed by both United States and Soviet Union teams; in the case of the U.S., the effort was directed by German expatriate scientists and engineers, while the Soviet Union also made extensive use of captured German scientists, engineers, and technical data.

These systems were used to launch satellites, such as Sputnik, and to propel the Space Race, but they were primarily developed to create Intercontinental Ballistic Missiles (ICBMs) that could deliver nuclear weapons anywhere on the globe. Development of these systems continued throughout the Cold War—though plans and treaties, beginning with the Strategic Arms Limitation Treaty (SALT I), restricted deployment of these systems until, after the fall of the Soviet Union, system development essentially halted, and many weapons were disabled and destroyed. On January 27, 1967, more than 60 nations signed the Outer Space Treaty, banning nuclear weapons in space.

There have been a number of potential nuclear disasters. Following air accidents, U.S. nuclear weapons have been lost near Atlantic City, New Jersey (1957); Savannah, Georgia (1958) (see Tybee Bomb); Goldsboro, North Carolina (1961); off the coast of Okinawa (1965); in the sea near Palomares, Spain (1966) (see 1966 Palomares B-52 crash); and near Thule, Greenland (1968) (see 1968 Thule Air Base B-52 crash). Most of the lost weapons were recovered, the Spanish device after three months' effort by the DSV Alvin and DSV Aluminaut. Investigative journalist Eric Schlosser discovered that at least 700 "significant" accidents and incidents involving 1,250 nuclear weapons were recorded in the United States between 1950 and 1968.

The Soviet Union was less forthcoming about such incidents, but the environmental group Greenpeace believes that there are around forty non-U.S. nuclear devices that have been lost and not recovered, compared to eleven lost by America, mostly in submarine disasters. The U.S. has tried to recover Soviet devices, notably in the 1974 Project Azorian, which used the specialist salvage vessel Hughes Glomar Explorer to raise a Soviet submarine. After news of the operation leaked, the CIA coined what became a favorite phrase for refusing to disclose sensitive information, known as glomarization: "We can neither confirm nor deny the existence of the information requested but, hypothetically, if such data were to exist, the subject matter would be classified, and could not be disclosed."

The collapse of the Soviet Union in 1991 essentially ended the Cold War. However, the end of the Cold War failed to end the threat of nuclear weapon use, although global fears of nuclear war reduced substantially. In a major move of symbolic de-escalation, Boris Yeltsin, on January 26, 1992, announced that Russia planned to stop targeting United States cities with nuclear weapons.

Cost

Designing, testing, producing, deploying, and defending against nuclear weapons is one of the largest expenditures for the nations which possess nuclear weapons. In the United States during the Cold War years, between "one quarter to one third of all military spending since World War II [was] devoted to nuclear weapons and their infrastructure." According to a retrospective Brookings Institution study published in 1998 by the Nuclear Weapons Cost Study Committee (formed in 1993 by the W. Alton Jones Foundation), the total expenditure for U.S. nuclear weapons from 1940 to 1998 was $5.5 trillion in 1996 dollars.

For comparison, the total public debt at the end of fiscal year 1998 was $5,478,189,000,000 in 1998 dollars, or $5.3 trillion in 1996 dollars. The entire public debt in 1998 was therefore roughly equal to the cost of research, development, and deployment of U.S. nuclear weapons and nuclear weapons-related programs during the Cold War.
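
The dollar-year comparison above can be reproduced with a few lines of arithmetic. This is a minimal sketch; the deflator is simply implied by the figures quoted in this section and is not an official price index.

```python
# Rough reproduction of the comparison above: convert the FY1998 public debt
# from 1998 dollars into 1996 dollars and compare it with the $5.5 trillion
# nuclear-weapons figure (already stated in 1996 dollars).
# The deflator below (~0.967) is implied by the numbers in the text
# (5.3 / 5.478); it is an illustrative assumption, not an official index.

debt_1998_dollars = 5_478_189_000_000        # FY1998 public debt, 1998 dollars
nuclear_cost_1996_dollars = 5.5e12           # 1940-1998 nuclear spending, 1996 dollars

deflator_1998_to_1996 = 5.3e12 / debt_1998_dollars   # implied by the text

debt_1996_dollars = debt_1998_dollars * deflator_1998_to_1996
print(f"Public debt in 1996 dollars: ${debt_1996_dollars/1e12:.1f} trillion")
print(f"Nuclear weapons cost:        ${nuclear_cost_1996_dollars/1e12:.1f} trillion")
```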

Second nuclear age

Large stockpile with global range (dark blue), smaller stockpile with global range (medium blue), small stockpile with regional range (light blue).

The second nuclear age can be regarded as proliferation of nuclear weapons among lesser powers and for reasons other than the American-Soviet-Chinese rivalry.

India embarked relatively early on a program aimed at nuclear weapons capability, but apparently accelerated this after the Sino-Indian War of 1962. India's first atomic-test explosion was in 1974 with Smiling Buddha, which it described as a "peaceful nuclear explosion."

After the collapse of the Eastern Military High Command and the disintegration of Pakistan as a result of the 1971 war with India, Zulfikar Ali Bhutto of Pakistan launched scientific research on nuclear weapons. The Indian test spurred Pakistan's programme, and the ISI conducted successful espionage operations in the Netherlands, while the programme was also developed indigenously. India tested fission and perhaps fusion devices in 1998, and Pakistan successfully tested fission devices that same year, raising concerns that they would use nuclear weapons against each other.

All of the former Soviet bloc countries with nuclear weapons (Belarus, Ukraine, and Kazakhstan) transferred their warheads to Russia by 1996.

South Africa also had an active program to develop uranium-based nuclear weapons, but dismantled its nuclear weapon program in the 1990s. Experts do not believe it actually tested such a weapon, though it later claimed it constructed several crude devices that it eventually dismantled. In the late 1970s American spy satellites detected a "brief, intense, double flash of light near the southern tip of Africa." Known as the Vela Incident, it was speculated to have been a South African or possibly Israeli nuclear weapons test, though some feel that it may have been caused by natural events or a detector malfunction.

Israel is widely believed to possess an arsenal of up to several hundred nuclear warheads, but this has never been officially confirmed or denied (though the existence of its Dimona nuclear facility was confirmed by Mordechai Vanunu in 1986). Several key US scientists involved in the American bomb-making program clandestinely helped the Israelis and thus played an important role in nuclear proliferation; Edward Teller was among them.

In January 2004, Dr A. Q. Khan of Pakistan's programme confessed to having been a key mover in "proliferation activities", seen as part of an international proliferation network of materials, knowledge, and machines from Pakistan to Libya, Iran, and North Korea.

North Korea announced in 2003 that it had several nuclear explosives. The first claimed detonation was the 2006 North Korean nuclear test, conducted on October 9, 2006. On May 25, 2009, North Korea continued nuclear testing, violating United Nations Security Council Resolution 1718. A third test was conducted on 13 February 2013, two tests were conducted in 2016 in January and September, followed by another test a year later, in September 2017.

As part of the Budapest Memorandum on Security Assurances in 1994, the country of Ukraine surrendered its nuclear arsenal, left over from the USSR, in part on the promise that its borders would remain respected if it did so. In 2022, during the 2021–2022 Russo-Ukrainian crisis, Russian President Vladimir Putin, as he had done in the past, alleged that Ukraine was on the path to receiving nuclear weapons. According to Putin, there was a "real danger" that Western allies could help supply Ukraine, which appeared to be on the path to joining NATO, with nuclear arms. Critics labelled Putin's claims as "conspiracy theories" designed to build a case for an invasion of Ukraine.

Application-specific integrated circuit

A tray of application-specific integrated circuit (ASIC) chips

An application-specific integrated circuit (ASIC /ˈeɪsɪk/) is an integrated circuit (IC) chip customized for a particular use, rather than intended for general-purpose use. For example, a chip designed to run in a digital voice recorder or a high-efficiency video codec (e.g. AMD VCE) is an ASIC. Application-specific standard product (ASSP) chips are intermediate between ASICs and industry-standard integrated circuits like the 7400 series or the 4000 series. ASIC chips are typically fabricated using metal-oxide-semiconductor (MOS) technology, as MOS integrated circuit chips.

As feature sizes have shrunk and design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM, flash memory and other large building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs often use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.

Field-programmable gate arrays (FPGAs) are the modern-day technological successor to breadboards; unlike ASICs, they are not made to be application-specific. Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design, even in production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers typically prefer FPGAs for prototyping and devices with low production volume, and ASICs for very large production volumes where NRE costs can be amortized across many devices.
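
To make the amortization argument concrete, here is a minimal break-even sketch in Python. The NRE and per-unit figures are hypothetical placeholders, not industry data; only the structure of the comparison reflects the text above.

```python
# Hypothetical break-even sketch for the FPGA-vs-ASIC tradeoff described above.
# All cost figures are made-up placeholders; only the shape of the comparison
# (a one-time NRE amortized over production volume) reflects the text.

def total_cost(nre: float, unit_cost: float, volume: int) -> float:
    """Total programme cost = one-time NRE + per-device cost * volume."""
    return nre + unit_cost * volume

asic_nre, asic_unit = 2_000_000.0, 5.0    # high NRE, cheap per part (hypothetical)
fpga_nre, fpga_unit = 0.0, 40.0           # no mask NRE, dearer per part (hypothetical)

for volume in (1_000, 10_000, 100_000, 1_000_000):
    asic_cost = total_cost(asic_nre, asic_unit, volume)
    fpga_cost = total_cost(fpga_nre, fpga_unit, volume)
    cheaper = "ASIC" if asic_cost < fpga_cost else "FPGA"
    print(f"{volume:>9,} units: ASIC ${asic_cost:,.0f}  FPGA ${fpga_cost:,.0f}  -> {cheaper}")

# Break-even volume where the ASIC's NRE is recovered by its lower unit cost:
break_even = asic_nre / (fpga_unit - asic_unit)
print(f"Break-even at roughly {break_even:,.0f} units")
```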

History

Early ASICs used gate array technology. By 1967, Ferranti and Interdesign were manufacturing early bipolar gate arrays. In 1967, Fairchild Semiconductor introduced the Micromatrix family of bipolar diode–transistor logic (DTL) and transistor–transistor logic (TTL) arrays.

Complementary metal-oxide-semiconductor (CMOS) technology opened the door to the broad commercialization of gate arrays. The first CMOS gate arrays were developed by Robert Lipp, in 1974 for International Microcircuits, Inc. (IMI).

Metal-oxide-semiconductor (MOS) standard cell technology was introduced by Fairchild and Motorola, under the trade names Micromosaic and Polycell, in the 1970s. This technology was later successfully commercialized by VLSI Technology (founded 1979) and LSI Logic (1981).

A successful commercial application of gate array circuitry was found in the low-end 8-bit ZX81 and ZX Spectrum personal computers, introduced in 1981 and 1982. These were used by Sinclair Research (UK) essentially as a low-cost I/O solution aimed at handling the computer's graphics.

Customization occurred by varying a metal interconnect mask. Gate arrays had complexities of up to a few thousand gates; this is now called mid-scale integration. Later versions became more generalized, with different base dies customized by both metal and polysilicon layers. Some base dies also include random-access memory (RAM) elements.

Standard-cell designs

In the mid-1980s, a designer would choose an ASIC manufacturer and implement their design using the design tools available from the manufacturer. While third-party design tools were available, there was not an effective link from the third-party design tools to the layout and actual semiconductor process performance characteristics of the various ASIC manufacturers. Most designers used factory-specific tools to complete the implementation of their designs. A solution to this problem, which also yielded a much higher density device, was the implementation of standard cells. Every ASIC manufacturer could create functional blocks with known electrical characteristics, such as propagation delay, capacitance and inductance, that could also be represented in third-party tools. Standard-cell design is the utilization of these functional blocks to achieve very high gate density and good electrical performance. Standard-cell design is intermediate between gate-array (semi-custom) design and full-custom design in terms of its non-recurring engineering and recurring component costs as well as performance and speed of development (including time to market).

By the late 1990s, logic synthesis tools became available. Such tools could compile HDL descriptions into a gate-level netlist. Standard-cell integrated circuits (ICs) are designed in the following conceptual stages referred to as electronics design flow, although these stages overlap significantly in practice:

  1. Requirements engineering: A team of design engineers starts with a non-formal understanding of the required functions for a new ASIC, usually derived from requirements analysis.
  2. Register-transfer level (RTL) design: The design team constructs a description of an ASIC to achieve these goals using a hardware description language. This process is similar to writing a computer program in a high-level language.
  3. Functional verification: Suitability for purpose is verified by functional verification. This may include such techniques as logic simulation through test benches, formal verification, emulation, or creating and evaluating an equivalent pure software model, as in Simics. Each verification technique has advantages and disadvantages, and most often several methods are used together for ASIC verification. Unlike most FPGAs, ASICs cannot be reprogrammed once fabricated and therefore ASIC designs that are not completely correct are much more costly, increasing the need for full test coverage.
  4. Logic synthesis: Logic synthesis transforms the RTL design into a large collection of lower-level constructs called standard cells. These constructs are taken from a standard-cell library consisting of pre-characterized collections of logic gates performing specific functions. The standard cells are typically specific to the planned manufacturer of the ASIC. The resulting collection of standard cells and the needed electrical connections between them is called a gate-level netlist (a toy sketch of steps 4–6 follows this list).
  5. Placement: The gate-level netlist is next processed by a placement tool which places the standard cells onto a region of an integrated circuit die representing the final ASIC. The placement tool attempts to find an optimized placement of the standard cells, subject to a variety of specified constraints.
  6. Routing: An electronics routing tool takes the physical placement of the standard cells and uses the netlist to create the electrical connections between them. Since the search space is large, this process will produce a "sufficient" rather than "globally optimal" solution. The output is a file which can be used to create a set of photomasks enabling a semiconductor fabrication facility, commonly called a "fab" or "foundry", to manufacture physical integrated circuits. Placement and routing are closely interrelated and are collectively called place and route in electronics design.
  7. Sign-off: Given the final layout, circuit extraction computes the parasitic resistances and capacitances. In the case of a digital circuit, this will then be further mapped into delay information from which the circuit performance can be estimated, usually by static timing analysis. This, and other final tests such as design rule checking and power analysis collectively called signoff are intended to ensure that the device will function correctly over all extremes of the process, voltage and temperature. When this testing is complete the photomask information is released for chip fabrication.
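
As a rough illustration of what steps 4–6 produce, the toy Python sketch below builds a tiny gate-level netlist, "places" its cells on a 2×2 grid by exhaustive search, and uses total Manhattan wire length as a stand-in for routing cost. The cell names, grid, and cost function are invented for illustration only; real EDA tools operate at vastly larger scale and under many more constraints.

```python
# Toy illustration of steps 4-6 above (synthesis output, placement, routing);
# the "standard cells" and the exhaustive placement are invented for this sketch
# and do not represent any real EDA flow.

from itertools import permutations

# A gate-level netlist: named cell instances and the nets connecting them.
cells = ["nand_1", "nand_2", "inv_1", "dff_1"]
nets = [("nand_1", "inv_1"), ("inv_1", "nand_2"),
        ("nand_2", "dff_1"), ("dff_1", "nand_1")]

# "Placement": assign each cell a slot on a 2x2 grid, minimizing total
# Manhattan wire length (exhaustive search is fine at this toy scale).
slots = [(0, 0), (0, 1), (1, 0), (1, 1)]

def wire_length(placement: dict) -> int:
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

best = min((dict(zip(cells, p)) for p in permutations(slots)), key=wire_length)

print("Placement:", best)
print("Total Manhattan wire length:", wire_length(best))
# A real router would then turn each net into metal-layer geometry; here the
# Manhattan distance merely stands in for that step.
```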

These steps, implemented with a level of skill common in the industry, almost always produce a final device that correctly implements the original design, unless flaws are later introduced by the physical fabrication process.

The design steps, also called the design flow, are also common to standard product design. The significant difference is that standard-cell design uses the manufacturer's cell libraries that have been used in potentially hundreds of other design implementations and therefore are of much lower risk than a full custom design. Standard cells produce a design density that is cost-effective, and they can also integrate IP cores and static random-access memory (SRAM) effectively, unlike gate arrays.

Gate-array and semi-custom design

Microscope photograph of a gate-array ASIC showing the predefined logic cells and custom interconnections. This particular design uses less than 20% of available logic gates.

Gate array design is a manufacturing method in which diffused layers, each consisting of transistors and other active devices, are predefined and electronics wafers containing such devices are "held in stock" or unconnected prior to the metallization stage of the fabrication process. The physical design process defines the interconnections of these layers for the final device. For most ASIC manufacturers, this consists of between two and nine metal layers with each layer running perpendicular to the one below it. Non-recurring engineering costs are much lower than full custom designs, as photolithographic masks are required only for the metal layers. Production cycles are much shorter, as metallization is a comparatively quick process; thereby accelerating time to market.

Gate-array ASICs are always a compromise between rapid design and performance as mapping a given design onto what a manufacturer held as a stock wafer never gives 100% circuit utilization. Often difficulties in routing the interconnect require migration onto a larger array device with a consequent increase in the piece part price. These difficulties are often a result of the layout EDA software used to develop the interconnect.

Pure, logic-only gate-array design is rarely implemented by circuit designers today, having been almost entirely replaced by field-programmable devices. The most prominent of such devices are field-programmable gate arrays (FPGAs) which can be programmed by the user and thus offer minimal tooling charges, non-recurring engineering, only marginally increased piece part cost, and comparable performance.

Today, gate arrays are evolving into structured ASICs that consist of a large IP core like a CPU, digital signal processor units, peripherals, standard interfaces, integrated memories, SRAM, and a block of reconfigurable, uncommitted logic. This shift is largely because ASIC devices are capable of integrating large blocks of system functionality, and systems on a chip (SoCs) require glue logic, communications subsystems (such as networks on chip), peripherals, and other components rather than only functional units and basic interconnection.

As frequently used in the field, the terms "gate array" and "semi-custom" are synonymous when referring to ASICs. Process engineers more commonly use the term "semi-custom", while "gate array" is more commonly used by logic (or gate-level) designers.

Full-custom design

Microscope photograph of custom ASIC (486 chipset) showing gate-based design on top and custom circuitry on bottom

By contrast, full-custom ASIC design defines all the photolithographic layers of the device. Full-custom design is used for both ASIC design and for standard product design.

The benefits of full-custom design include reduced area (and therefore recurring component cost), performance improvements, and also the ability to integrate analog components and other pre-designed—and thus fully verified—components, such as microprocessor cores, that form a system on a chip.

The disadvantages of full-custom design can include increased manufacturing and design time, increased non-recurring engineering costs, more complexity in the computer-aided design (CAD) and electronic design automation systems, and a much higher skill requirement on the part of the design team.

For digital-only designs, however, "standard-cell" cell libraries, together with modern CAD systems, can offer considerable performance/cost benefits with low risk. Automated layout tools are quick and easy to use and also offer the possibility to "hand-tweak" or manually optimize any performance-limiting aspect of the design.

Full-custom design is carried out using basic logic gates, circuits, or layout drawn specifically for the design.

Structured design

Structured ASIC design (also referred to as "platform ASIC design") is a relatively new trend in the semiconductor industry, resulting in some variation in its definition. However, the basic premise of a structured ASIC is that both manufacturing cycle time and design cycle time are reduced compared to cell-based ASIC, by virtue of there being pre-defined metal layers (thus reducing manufacturing time) and pre-characterization of what is on the silicon (thus reducing design cycle time).

Definition from Foundations of Embedded Systems states that:

In a "structured ASIC" design, the logic mask-layers of a device are predefined by the ASIC vendor (or in some cases by a third party). Design differentiation and customization is achieved by creating custom metal layers that create custom connections between predefined lower-layer logic elements. "Structured ASIC" technology is seen as bridging the gap between field-programmable gate arrays and "standard-cell" ASIC designs. Because only a small number of chip layers must be custom-produced, "structured ASIC" designs have much smaller non-recurring expenditures (NRE) than "standard-cell" or "full-custom" chips, which require that a full mask set be produced for every design.

— Foundations of Embedded Systems

This is effectively the same definition as a gate array. What distinguishes a structured ASIC from a gate array is that in a gate array, the predefined metal layers serve to make manufacturing turnaround faster. In a structured ASIC, the use of predefined metallization is primarily to reduce cost of the mask sets as well as making the design cycle time significantly shorter.

For example, in a cell-based or gate-array design the user must often design power, clock, and test structures themselves. By contrast, these are predefined in most structured ASICs and therefore can save time and expense for the designer compared to gate-array-based designs. Likewise, the design tools used for structured ASICs can be substantially lower cost and easier (faster) to use than cell-based tools, because they do not have to perform all the functions that cell-based tools do. In some cases, the structured ASIC vendor requires that customized tools for its device (e.g., custom physical synthesis) be used, which also allows the design to be brought into manufacturing more quickly.

Cell libraries, IP-based design, hard and soft macros

Cell libraries of logical primitives are usually provided by the device manufacturer as part of the service. Although they will incur no additional cost, their release will be covered by the terms of a non-disclosure agreement (NDA) and they will be regarded as intellectual property by the manufacturer. Usually, their physical design will be pre-defined so they could be termed "hard macros".

What most engineers understand as "intellectual property" are IP cores, designs purchased from a third-party as sub-components of a larger ASIC. They may be provided in the form of a hardware description language (often termed a "soft macro"), or as a fully routed design that could be printed directly onto an ASIC's mask (often termed a "hard macro"). Many organizations now sell such pre-designed cores – CPUs, Ethernet, USB or telephone interfaces – and larger organizations may have an entire department or division to produce cores for the rest of the organization. The company ARM (Advanced RISC Machines) only sells IP cores, making it a fabless manufacturer.

Indeed, the wide range of functions now available in structured ASIC design is a result of the phenomenal improvement in electronics in the late 1990s and early 2000s; as a core takes a lot of time and investment to create, its re-use and further development cuts product cycle times dramatically and creates better products. Additionally, open-source hardware organizations such as OpenCores are collecting free IP cores, paralleling the open-source software movement in hardware design.

Soft macros are often process-independent (i.e. they can be fabricated on a wide range of manufacturing processes and different manufacturers). Hard macros are process-limited and usually further design effort must be invested to migrate (port) to a different process or manufacturer.
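
A minimal data-model sketch of the soft/hard macro distinction described above; the class and field names are hypothetical and exist only to illustrate why a soft macro can be retargeted by re-synthesis while a hard macro is tied to its process.

```python
# Illustrative-only data model of the soft/hard macro distinction above;
# the classes, fields, and strings are invented for this sketch.

from dataclasses import dataclass

@dataclass
class SoftMacro:
    name: str
    hdl_source: str                 # synthesizable RTL, e.g. Verilog/VHDL text
    # Process-independent: it can be re-synthesized for any target library.
    def retarget(self, process: str) -> str:
        return f"{self.name}: re-synthesize RTL for {process}"

@dataclass
class HardMacro:
    name: str
    process: str                    # the fab process the layout was drawn for
    layout_file: str = "macro.gds"  # fixed, fully routed geometry
    # Process-limited: moving to another process means porting the layout.
    def retarget(self, process: str) -> str:
        if process == self.process:
            return f"{self.name}: reuse existing layout"
        return f"{self.name}: layout must be ported/redrawn for {process}"

cpu_core = SoftMacro("cpu_core", hdl_source="module cpu_core(...); endmodule")
sram_block = HardMacro("sram_64k", process="130nm")

print(cpu_core.retarget("65nm"))
print(sram_block.retarget("65nm"))
```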

Multi-project wafers

Some manufacturers and IC design houses offer multi-project wafer service (MPW) as a method of obtaining low cost prototypes. Often called shuttles, these MPWs, containing several designs, run at regular, scheduled intervals on a "cut and go" basis, usually with limited liability on the part of the manufacturer. The contract involves delivery of bare dies or the assembly and packaging of a handful of devices. The service usually involves the supply of a physical design database (i.e. masking information or pattern generation (PG) tape). The manufacturer is often referred to as a "silicon foundry" due to the low involvement it has in the process.

Application-specific standard product

Renesas M66591GP: USB2.0 Peripheral Controller

An application-specific standard product or ASSP is an integrated circuit that implements a specific function that appeals to a wide market. As opposed to ASICs that combine a collection of functions and are designed by or for one customer, ASSPs are available as off-the-shelf components. ASSPs are used in all industries, from automotive to communications. As a general rule, if you can find a design in a data book, then it is probably not an ASIC, but there are some exceptions.

For example, two ICs that might or might not be considered ASICs are a controller chip for a PC and a chip for a modem. Both of these examples are specific to an application (which is typical of an ASIC) but are sold to many different system vendors (which is typical of standard parts). ASICs such as these are sometimes called application-specific standard products (ASSPs).

Examples of ASSPs include encoding/decoding chips, Ethernet network interface controller chips, etc.

IEEE used to publish an ASSP magazine, which was renamed to IEEE Signal Processing Magazine in 1990.

Prediabetes

From Wikipedia, the free encyclopedia
 
White hexagons in the image represent glucose molecules, which are increased in the lower image. Hyperglycemia is the only major symptom of prediabetes.
Specialty: Endocrinology
Complications: Diabetic complications

Prediabetes is a component of the metabolic syndrome and is characterized by elevated blood sugar levels that fall below the threshold to diagnose diabetes mellitus. It usually does not cause symptoms but people with prediabetes often have obesity (especially abdominal or visceral obesity), dyslipidemia with high triglycerides and/or low HDL cholesterol, and hypertension. It is also associated with increased risk for cardiovascular disease (CVD). Prediabetes is more accurately considered an early stage of diabetes as health complications associated with type 2 diabetes often occur before the diagnosis of diabetes.

Prediabetes can be diagnosed by measuring hemoglobin A1c, fasting glucose, or glucose tolerance test. Many people may be diagnosed through routine screening tests. The primary treatment approach includes lifestyle changes such as exercise and dietary adjustments. Some medications can be used to reduce the risks associated with prediabetes. There is a high rate of progression to type 2 diabetes but not everyone with prediabetes develops type 2 diabetes. Prediabetes can be a reversible condition with lifestyle changes.

For many people, prediabetes and diabetes are diagnosed through a routine screening at a check-up. However, additional routine screening by dentists, and not only medical doctors, is a new and promising concept that can be very effective in early detection and treatment. The earlier prediabetes is diagnosed, the more likely an intervention will be successful.

Signs and symptoms

Prediabetes typically has no signs or symptoms except high blood sugar. Patients should monitor for signs and symptoms of type 2 diabetes mellitus such as increased thirst, increased urination, and feeling tired.

Causes

The cause of prediabetes is multifactorial and is known to have contributions from lifestyle and genetic factors. Ultimately prediabetes occurs when control of insulin and blood glucose in the body becomes abnormal, also known as insulin resistance. Risk factors for prediabetes include a family history of diabetes, older age, and, in women, a history of gestational diabetes or of delivering high-birth-weight babies (greater than 9 lbs).

The increasing rates of prediabetes and diabetes suggest that lifestyle and/or environmental factors contribute to prediabetes. It remains unclear which dietary components are causative, and risk is likely influenced by genetic background. Lack of physical activity is a risk factor for type 2 diabetes, and physical activity can reduce the risk of progressing to type 2 diabetes.

Pathophysiology

Normal glucose homeostasis is controlled by three interrelated processes: gluconeogenesis (glucose production that occurs in the liver), uptake and utilization of glucose by the peripheral tissues of the body, and insulin secretion by the pancreatic beta islet cells. The presence of glucose in the bloodstream triggers the production and release of insulin from the pancreas' beta islet cells. The main function of insulin is to increase the rate of transport of glucose from the bloodstream into certain cells of the body, such as striated muscles, fibroblasts, and fat cells. It is also necessary for transport of amino acids, glycogen formation in the liver and skeletal muscles, triglyceride formation from glucose, nucleic acid synthesis, and protein synthesis. In individuals with prediabetes, a failure of pancreatic hormone release, a failure of target tissues to respond to the insulin present, or both causes blood glucose to rise to abnormally high levels.

Diagnosis

Prediabetes can be diagnosed with three different types of blood tests:

  • Fasting blood sugar (glucose) level of:
    • 110 to 125 mg/dL (6.1 mmol/L to 6.9 mmol/L) – WHO criteria
    • 100 to 125 mg/dL (5.6 mmol/L to 6.9 mmol/L) – ADA criteria
  • Glucose tolerance test: blood sugar level of 140 to 199 mg/dL (7.8 to 11.0 mM) 2 hours after ingesting a standardized 75 gram glucose solution
  • Glycated hemoglobin (HbA1c) between 5.7 and 6.4 percent, i.e. 38.9 to 46.4 mmol/mol

Levels above these limits would justify a diagnosis of diabetes; the sketch below summarizes these cut-offs.
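
The cut-offs listed above (using the ADA fasting range) can be expressed as a short classifier. This is an illustrative sketch only, not clinical software.

```python
# Sketch of the diagnostic thresholds listed above (ADA fasting range,
# 2-hour OGTT, HbA1c). Illustrative only, not a clinical tool.

def classify(fasting_mg_dl=None, ogtt_2h_mg_dl=None, hba1c_percent=None):
    """Return 'normal', 'prediabetes', or 'diabetes' for whichever
    measurements are supplied, using the most severe category found."""
    categories = []
    if fasting_mg_dl is not None:
        if fasting_mg_dl >= 126:
            categories.append("diabetes")
        elif fasting_mg_dl >= 100:          # ADA criterion; WHO uses 110
            categories.append("prediabetes")
        else:
            categories.append("normal")
    if ogtt_2h_mg_dl is not None:
        if ogtt_2h_mg_dl >= 200:
            categories.append("diabetes")
        elif ogtt_2h_mg_dl >= 140:
            categories.append("prediabetes")
        else:
            categories.append("normal")
    if hba1c_percent is not None:
        if hba1c_percent >= 6.5:
            categories.append("diabetes")
        elif hba1c_percent >= 5.7:
            categories.append("prediabetes")
        else:
            categories.append("normal")
    order = {"normal": 0, "prediabetes": 1, "diabetes": 2}
    return max(categories, key=order.get) if categories else "no data"

print(classify(fasting_mg_dl=112))                      # prediabetes (ADA and WHO)
print(classify(ogtt_2h_mg_dl=150, hba1c_percent=5.9))   # prediabetes
print(classify(fasting_mg_dl=130))                      # diabetes
```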

Impaired fasting glucose

Impaired fasting glycemia or impaired fasting glucose (IFG) refers to a condition in which the fasting blood glucose is elevated above what is considered normal levels but is not high enough to be classified as diabetes mellitus. It is considered a pre-diabetic state, associated with insulin resistance and increased risk of cardiovascular pathology, although of lesser risk than impaired glucose tolerance (IGT). IFG sometimes progresses to type 2 diabetes mellitus.

Fasting blood glucose levels are in a continuum within a given population, with higher fasting glucose levels corresponding to a higher risk for complications caused by the high glucose levels. Some patients with impaired fasting glucose also may be diagnosed with impaired glucose tolerance, but many have normal responses to a glucose tolerance test. Fasting glucose is helpful in identifying prediabetes when positive but has a risk of false negatives.

World Health Organization (WHO) criteria for impaired fasting glucose differ from the American Diabetes Association (ADA) criteria because the normal range of glucose is defined differently by each. Fasting plasma glucose levels of 100 mg/dL (5.6 mmol/L) and higher have been shown to increase complication rates significantly; however, the WHO opted to keep its upper limit of normal at under 110 mg/dL for fear of causing too many people to be diagnosed with impaired fasting glucose, whereas the ADA lowered the upper limit of normal to a fasting plasma glucose under 100 mg/dL.

  • WHO criteria: fasting plasma glucose level from 6.1 mmol/l (110 mg/dL) to 6.9 mmol/L (125 mg/dL)
  • ADA criteria: fasting plasma glucose level from 5.6 mmol/L (100 mg/dL) to 6.9 mmol/L (125 mg/dL)

Impaired glucose tolerance

Impaired glucose tolerance (IGT) is diagnosed with an oral glucose tolerance test. According to the criteria of the World Health Organization and the American Diabetes Association, impaired glucose tolerance is defined as:

  • two-hour glucose levels of 140 to 199 mg per dL (7.8 to 11.0 mmol/L) on the 75-g oral glucose tolerance test. A patient is said to have IGT when he or she has an intermediately raised glucose level after 2 hours, but less than the level that would qualify for type 2 diabetes mellitus. The fasting glucose may be either normal or mildly elevated.

From 10 to 15 percent of adults in the United States have impaired glucose tolerance or impaired fasting glucose.

Hemoglobin A1c

Hemoglobin A1c is a measure of the percentage of hemoglobin that is glycated, or has a glucose molecule attached. This can be used as an indicator of blood glucose level over a longer period of time and is often used to diagnose prediabetes as well as diabetes. HbA1c may not accurately represent blood glucose levels and should not be used in certain medical conditions such as iron-deficiency anemia, vitamin B12 and folate deficiency, pregnancy, hemolytic anemia, an enlarged spleen, and end-stage kidney failure.
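
The percent and mmol/mol units can be interconverted with the widely published NGSP/IFCC master equation. The coefficients below are an assumption taken from that standard relationship rather than from this article, but they reproduce the 5.7% and 6.4% cut-offs quoted above to within rounding.

```python
# Conversion between HbA1c units, using the widely published NGSP/IFCC
# relationship  NGSP(%) = 0.0915 * IFCC(mmol/mol) + 2.15.  The coefficients
# are an assumption taken from that standard equation, not from this article,
# but they approximately reproduce the cut-offs quoted in the Diagnosis section.

def percent_to_mmol_per_mol(ngsp_percent: float) -> float:
    return (ngsp_percent - 2.15) / 0.0915

def mmol_per_mol_to_percent(ifcc_mmol_mol: float) -> float:
    return 0.0915 * ifcc_mmol_mol + 2.15

for pct in (5.7, 6.4):
    print(f"{pct}% -> {percent_to_mmol_per_mol(pct):.1f} mmol/mol")
```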

Fasting Insulin

Estimates of insulin resistance (IR) and insulin sensitivity (%S) according to the homeostatic model assessment (HOMA), modeled as a function of fasting plasma insulin and varying fasting plasma glucose, and calculated using the HOMA Calculator.

Hyperinsulinemia due to insulin resistance may occur in individuals with normal glucose levels and therefore is not diagnosed with the usual tests. Hyperinsulinemia precedes prediabetes and diabetes, which are characterized by hyperglycemia. Insulin resistance can be diagnosed by measures of plasma insulin, either fasting or during a glucose tolerance test. The use of fasting insulin to identify patients at risk has been proposed, but is currently not commonly used in clinical practice.

The implication of hyperinsulinemia is the risk of comorbidities related to diabetes that may precede changes in blood glucose, including cardiovascular diseases.
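
For reference, the original HOMA1 approximation underlying the figure caption above can be written in a few lines; note that the HOMA Calculator mentioned there implements the updated HOMA2 model, so the numbers below are only a rough sketch.

```python
# Minimal sketch of the original HOMA1 approximation related to the figure
# caption above (the HOMA Calculator itself implements the updated HOMA2
# model, which differs).  Units: glucose in mmol/L, insulin in microU/mL.

def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA1 insulin-resistance index."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def homa_sensitivity_percent(fasting_glucose_mmol_l: float,
                             fasting_insulin_uU_ml: float) -> float:
    """HOMA1 insulin sensitivity (%S), the reciprocal of HOMA-IR as a percentage."""
    return 100.0 / homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml)

# Example: normal fasting glucose (5.0 mmol/L) with elevated fasting insulin
# (15 microU/mL), i.e. hyperinsulinemia despite normoglycemia, as discussed above.
print(f"HOMA-IR: {homa_ir(5.0, 15):.2f}")                     # ~3.33
print(f"HOMA %S: {homa_sensitivity_percent(5.0, 15):.0f}%")   # ~30%
```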

Screening

Fasting plasma glucose screening should begin at age 30–45 and be repeated at least every three years. Earlier and more frequent screening should be conducted in at-risk individuals; the relevant risk factors are described in the screening recommendations below.

Prediabetes Screening in a Dental Setting

Screening

The United States Preventive Services Task Force (USPSTF) recommends that adults who are overweight or obese and aged 40–70 years be screened during visits to their regular physician. The American Diabetes Association (ADA) recommends that testing be repeated every three years if results are normal, and recommends that a larger range of people be tested: anyone over the age of 45 regardless of risk, and an adult of any age who is obese or overweight and has one or more risk factors, which include hypertension; a first-degree relative with diabetes; physical inactivity; high-risk race/ethnicity; Asian Americans with a BMI of ≥23 kg/m2; HDL < 35 mg/dL or TG > 250 mg/dL; women who have delivered a child weighing more than 9 lbs or who had gestational diabetes; A1c ≥ 5.7%; or impaired fasting glucose (IFG) or impaired glucose tolerance (IGT).

It has been found that people visit their dentist more regularly than their primary physician for checkups, so the dentist's office becomes a very useful place for potentially checking for diabetes. For people who are unaware of their diabetes risk and fall into the non-White, obese, or ≥45-year-old categories, screening at the dentist's office would have the highest odds of identifying someone at risk. Studies have been done to evaluate the overall effectiveness and value of prediabetes testing in the dental setting, usually at dental schools. One study looked at screening through dental visits followed by intervention programs such as the commercial Weight Watchers, and found it a cost-effective means to identify and treat affected people in the long term. Cost is a factor people at risk might need to consider since, on average, people diagnosed with diabetes have approximately 2.3 times higher medical expenditures than what expenditures would be in the absence of diabetes.

A simple test may be instituted at the dentist's office, in which an oral blood sample is taken during a normal visit. The sample is tested for glycated hemoglobin (HbA1c) levels; HbA1c gives healthcare professionals an idea of blood glucose levels and is the most reliable form of testing for diabetes in asymptomatic patients. Fasting and "acute perturbations" are not needed for the HbA1c test, and it reveals average glycemic control over a three-month period. An HbA1c below 5.7% is considered normal. Glucose status can also be tested through fasting blood sugar (FBS), which requires a blood sample after a patient has fasted for at least eight hours, so it might not be as convenient. Patients who have had oral blood samples taken say it feels like part of a normal procedure, and dentists say it is convenient.

In a study that analyzed data from 10,472 adults from 2013–2014 and 2015–2016, it was revealed that screening for risk of prediabetes in the dental setting has the potential to alert an estimated 22.36 million adults. Diabetes may be asymptomatic for a long time, but since it would be wasteful to test every patient at the dental office, known risk factors can guide who should be tested. Since a history is already taken at a dental office, a few additional questions would help a dentist narrow down for whom the test is recommended. For example, people with a high BMI are at higher risk for diabetes. A study by the School of Dentistry and the Diabetes Research Centre at Mazandaran University of Medical Sciences in Sari, Iran, found a relation between periodontitis and the prediabetic condition, and this could be another tool to help guide who might be recommended to take the test. Periodontal disease occurs when anaerobic bacteria living on the tooth surface cause infections, which can lead to a sustained immune response. Diabetes is a condition in which infections are more easily acquired, and hyperglycemia contributes to the mechanism causing oral complications.

Early Detection and Management

Over half the people who are diagnosed with prediabetes eventually develop type 2 diabetes, and once diagnosed with prediabetes, people experience a range of emotions: distress and fear; denial and downplaying of risks; guilt and self-criticism; and self-compassion. While prediabetes is a reversible condition, reversal requires dietary change and exercise, which may be more difficult for people diagnosed with prediabetes because facing the risk of a chronic condition is associated with negative emotions, which further hinder the self-regulation required to reverse the diagnosis. Still, without taking action, 37% of individuals with prediabetes will develop diabetes in only four years, while lifestyle intervention may decrease the percentage of prediabetic patients in whom diabetes develops to 20%. The National Diabetes Prevention Program (DPP) has a Centers for Disease Control and Prevention (CDC)-recognized lifestyle change program that showed prediabetic people following the structured program can cut their risk of developing type 2 diabetes by 58% (71% for people over 60 years old). Considering both the possibility of recovering from prediabetes and the emotional struggle upon diagnosis, higher-risk patients are encouraged to get tested early. Having an additional screening option in the dental setting may offset some of the emotional struggle because the dentist is visited more regularly, and it therefore has the potential to initiate earlier recognition and intervention.
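
Using only the figures quoted in this paragraph, the standard risk-reduction arithmetic works out as follows; this is a back-of-the-envelope sketch, not a result reported by the cited programs.

```python
# Back-of-the-envelope arithmetic from the figures in the paragraph above:
# 37% of people with prediabetes progress to diabetes in 4 years without
# intervention, versus about 20% with lifestyle intervention.

baseline_risk = 0.37      # 4-year progression without intervention
treated_risk = 0.20       # 4-year progression with lifestyle intervention

arr = baseline_risk - treated_risk          # absolute risk reduction
rrr = arr / baseline_risk                   # relative risk reduction
nnt = 1 / arr                               # number needed to treat

print(f"Absolute risk reduction: {arr:.0%}")    # 17 percentage points
print(f"Relative risk reduction: {rrr:.0%}")    # ~46%
print(f"Number needed to treat:  {nnt:.1f}")    # ~5.9 people over 4 years
```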

Prevention

The American College of Endocrinology (ACE) and the American Association of Clinical Endocrinologists (AACE) have developed lifestyle intervention guidelines for preventing the onset of type 2 diabetes:

  • Healthy diet (a diet with limited refined carbohydrates, added sugars, trans fats, as well as limited intake of sodium and total calories)
  • Physical fitness (30–45 minutes of cardiovascular exercise per day, 3–5 days a week)
  • Weight loss by as little as 5–10 percent may have a significant impact on overall health

Management

There is evidence that prediabetes is a curable disease state. Although some drugs can delay the onset of diabetes, lifestyle modifications play a greater role in the prevention of diabetes. Intensive weight loss and lifestyle intervention, if sustained, may improve glucose tolerance substantially and prevent progression from IGT to type 2 diabetes. The Diabetes Prevention Program (DPP) study found a 16% reduction in diabetes risk for every kilogram of weight loss. Reducing weight by 7% through a low-fat diet and performing 150 minutes of exercise a week is the goal. The ADA guidelines recommend modest weight loss (5–10% body weight), moderate-intensity exercise (30 minutes daily), and smoking cessation.
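
As a purely illustrative sketch of the numbers above: the 7% weight-loss goal for a hypothetical 90 kg patient, and the quoted 16%-per-kilogram risk reduction applied multiplicatively. The multiplicative reading and the 90 kg starting weight are assumptions made for this sketch, not claims about how the DPP analysis was performed.

```python
# Illustrative arithmetic only.  The 7% weight-loss goal and the quoted
# "16% reduction in diabetes risk per kilogram lost" come from the paragraph
# above; treating the per-kilogram reduction as multiplicative, and the 90 kg
# starting weight, are assumptions for this sketch.

body_weight_kg = 90.0
target_loss_kg = 0.07 * body_weight_kg          # the 7% goal stated above
print(f"7% weight-loss target: {target_loss_kg:.1f} kg")

relative_risk = (1 - 0.16) ** target_loss_kg    # multiplicative assumption
print(f"Illustrative relative risk after losing {target_loss_kg:.1f} kg: "
      f"{relative_risk:.2f} (i.e. ~{1 - relative_risk:.0%} lower)")
```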

There are many dietary approaches that can reduce the risk of progression to diabetes. Most involve the reduction of added sugars and fats but there remains a lack of conclusive evidence proving the best approach.

For patients with severe risk factors, prescription medication may be appropriate. This may be considered in patients for whom lifestyle therapy has failed, or is not sustainable, and who are at high risk for developing type 2 diabetes. Metformin and acarbose help prevent the development of diabetes in people with prediabetes, and also have a good safety profile. Evidence also supports thiazolidinediones, but there are safety concerns, and data on newer agents such as GLP-1 receptor agonists, DPP-4 inhibitors or meglitinides are lacking.

Prognosis

The progression to type 2 diabetes mellitus is not inevitable for those with prediabetes. The progression from prediabetes to diabetes mellitus is approximately 25% over three to five years. This increases to a 50% risk of progressing to diabetes over 10 years. Diabetes is a leading cause of morbidity and mortality. The disease may affect larger blood vessels (e.g., atherosclerosis within the larger arteries of the cardiovascular system) or smaller blood vessels, as seen with damage to the retina of the eye, damage to the kidney, and damage to the nerves.

Prediabetes is a risk factor for mortality and there is evidence of cardiovascular disease developing prior to a diagnosis of diabetes.

Epidemiology

Studies conducted from 1988–1994 indicated that of the total population of US in the age group 40–74 years, 34% had IFG, 15% had IGT, and 40% had prediabetes (IFG, IGT, or both). Eighteen million people (6% of the population) had type 2 diabetes in 2002.

The incidence of diabetes is growing. In 2014, 29.1 million people or 9% of the US population had diabetes. In 2011–2012, the prevalence of diabetes in the U.S. using hemoglobin A1C, fasting plasma glucose or the two-hour plasma glucose definition was 14% for total diabetes, 9% for diagnosed diabetes, 5% for undiagnosed diabetes and 38% for prediabetes.

Validity (statistics)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Validity_(statistics)

Validity is the extent to which a concept, conclusion, or measurement is well-founded and likely corresponds accurately to the real world. The word "valid" is derived from the Latin validus, meaning strong. The validity of a measurement tool (for example, a test in education) is the degree to which the tool measures what it claims to measure. Validity is based on the strength of a collection of different types of evidence (e.g. face validity, construct validity, etc.) described in greater detail below.

In psychometrics, validity has a particular application known as test validity: "the degree to which evidence and theory support the interpretations of test scores" ("as entailed by proposed uses of tests").

It is generally accepted that the concept of scientific validity addresses the nature of reality in terms of statistical measures and as such is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the relationship between the premises and conclusion of an argument. In logic, validity refers to the property of an argument whereby if the premises are true then the truth of the conclusion follows by necessity. The conclusion of an argument is true if the argument is sound, which is to say if the argument is valid and its premises are true. By contrast, "scientific or statistical validity" is not a deductive claim that is necessarily truth preserving, but is an inductive claim that remains true or false in an undecided manner. This is why "scientific or statistical validity" is a claim that is qualified as being either strong or weak in its nature; it is never necessary nor certainly true. This has the effect of making claims of "scientific or statistical validity" open to interpretation as to what, in fact, the facts of the matter mean.

Validity is important because it can help determine what types of tests to use, and help to make sure researchers are using methods that are not only ethical and cost-effective, but that also truly measure the ideas or constructs in question.

Test validity

Validity (accuracy)

The validity of an assessment is the degree to which it measures what it is supposed to measure. This is not the same as reliability, which is the extent to which a measurement gives results that are very consistent. Unlike reliability, validity does not require repeated measurements to be similar. However, just because a measure is reliable does not mean it is valid; for example, a scale that is consistently 5 pounds off is reliable but not valid. A test cannot be valid unless it is reliable. Validity also depends on the measurement measuring what it was designed to measure, and not something else instead. Validity (like reliability) is a relative concept; validity is not an all-or-nothing idea. There are many different types of validity.
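
The "scale that is 5 pounds off" example can be simulated in a few lines to show the difference between consistency (reliability) and accuracy (validity); all numbers are invented.

```python
# Toy simulation of the biased-scale example above: a biased scale gives very
# consistent (reliable) readings that are nevertheless systematically wrong
# (not valid).  All numbers are invented for illustration.

import random
import statistics

random.seed(0)
true_weight = 150.0                      # pounds

# Biased but precise scale: +5 lb offset, tiny random noise.
biased_scale = [true_weight + 5 + random.gauss(0, 0.2) for _ in range(10)]

# Unbiased but noisy scale: no offset, larger random noise.
noisy_scale = [true_weight + random.gauss(0, 5) for _ in range(10)]

for name, readings in [("biased (reliable, not valid)", biased_scale),
                       ("noisy (unbiased, less reliable)", noisy_scale)]:
    mean = statistics.mean(readings)
    spread = statistics.stdev(readings)  # low spread = high reliability
    bias = mean - true_weight            # large bias = poor validity
    print(f"{name}: mean={mean:.1f}, spread={spread:.2f}, bias={bias:+.1f}")
```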

Construct validity

Construct validity refers to the extent to which operationalizations of a construct (e.g., practical tests developed from a theory) measure a construct as defined by a theory. It subsumes all other types of validity. For example, the extent to which a test measures intelligence is a question of construct validity. A measure of intelligence presumes, among other things, that the measure is associated with things it should be associated with (convergent validity), not associated with things it should not be associated with (discriminant validity).

Construct validity evidence involves the empirical and theoretical support for the interpretation of the construct. Such lines of evidence include statistical analyses of the internal structure of the test, including the relationships between responses to different test items. They also include relationships between the test and measures of other constructs. As currently understood, construct validity is not distinct from the support for the substantive theory of the construct that the test is designed to measure. As such, experiments designed to reveal aspects of the causal role of the construct also contribute to construct validity evidence.

Content validity

Content validity is a non-statistical type of validity that involves "the systematic examination of the test content to determine whether it covers a representative sample of the behavior domain to be measured" (Anastasi & Urbina, 1997 p. 114). For example, does an IQ questionnaire have items covering all areas of intelligence discussed in the scientific literature?

Content validity evidence involves the degree to which the content of the test matches a content domain associated with the construct. For example, a test of the ability to add two numbers should include a range of combinations of digits. A test with only one-digit numbers, or only even numbers, would not have good coverage of the content domain. Content-related evidence typically involves a subject matter expert (SME) evaluating test items against the test specifications. Experts should pay attention to any cultural differences. For example, when a driving assessment questionnaire is adapted from England (e.g. the DBQ), the experts should take right-hand driving in Britain into account. Some studies have found this to be critical to obtaining a valid questionnaire. Before the final administration of questionnaires, the researcher should check the validity of items against each of the constructs or variables and modify the measurement instruments accordingly on the basis of the SMEs' opinions.

A test has content validity built into it by careful selection of which items to include (Anastasi & Urbina, 1997). Items are chosen so that they comply with the test specification which is drawn up through a thorough examination of the subject domain. Foxcroft, Paterson, le Roux & Herbst (2004, p. 49) note that by using a panel of experts to review the test specifications and the selection of items the content validity of a test can be improved. The experts will be able to review the items and comment on whether the items cover a representative sample of the behavior domain.

Face validity

Face validity is an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. A measure may have high validity, but if the test does not appear to measure what it actually does, it has low face validity. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid. Considering that one may get more honest answers with lower face validity, it is sometimes important to make a test appear to have low face validity while administering the measures.

Face validity is very closely related to content validity. While content validity depends on a theoretical basis for assuming whether a test assesses all domains of a certain criterion (e.g. does assessing addition skills yield a good measure of mathematical skills? To answer this you have to know what different kinds of arithmetic skills mathematical skills include), face validity relates to whether a test appears to be a good measure or not. This judgment is made on the "face" of the test, so it can also be made by an amateur.

Face validity is a starting point, but a test should never be assumed to be valid for any given purpose on that basis alone, as the "experts" have been wrong before. The Malleus Maleficarum (Hammer of Witches) had no support for its conclusions other than the self-imagined competence of two "experts" in "witchcraft detection", yet it was used as a "test" to condemn and burn at the stake tens of thousands of men and women as "witches".

Criterion validity

Criterion validity evidence involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. In other words, it compares the test with other measures or outcomes (the criteria) already held to be valid. For example, employee selection tests are often validated against measures of job performance (the criterion), and IQ tests are often validated against measures of academic performance (the criterion).

If the test data and criterion data are collected at the same time, this is referred to as concurrent validity evidence. If the test data are collected first in order to predict criterion data collected at a later point in time, then this is referred to as predictive validity evidence.

Concurrent validity

Concurrent validity refers to the degree to which the operationalization correlates with other measures of the same construct that are measured at the same time. When the measure is compared to another measure of the same type, they will be related (or correlated). Returning to the selection test example, this would mean that the tests are administered to current employees and then correlated with their scores on performance reviews.
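
As a minimal sketch of this procedure, the correlation below treats current employees' selection-test scores and their performance-review ratings, collected at the same time, as the two measures. All numbers are hypothetical.

```python
# Minimal sketch: concurrent validity for a selection test, estimated as the
# correlation between current employees' test scores and their performance
# review ratings collected at the same time. All data are hypothetical.
from scipy.stats import pearsonr

test_scores   = [72, 85, 90, 65, 78, 88, 70, 95, 60, 82]             # selection test
review_scores = [3.1, 4.0, 4.3, 2.8, 3.5, 4.1, 3.0, 4.6, 2.5, 3.8]   # performance reviews

r, p = pearsonr(test_scores, review_scores)
print(f"concurrent validity coefficient r = {r:.2f} (p = {p:.3f})")
```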

Predictive validity

Predictive validity refers to the degree to which the operationalization can predict (or correlate with) other measures of the same construct that are measured at some time in the future. Again, with the selection test example, this would mean that the tests are administered to applicants, all applicants are hired, their performance is reviewed at a later time, and then their scores on the two measures are correlated.

Predictive validity is also at issue when a measurement is used to predict something else, that is, whether or not some other outcome will occur in the future. High correlation between ex-ante predicted and ex-post actual outcomes is the strongest proof of validity.
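
The sketch below illustrates this with the selection-test example, assuming applicants' scores were recorded at hiring and job performance was measured later; the data and the time interval are hypothetical.

```python
# Minimal sketch: predictive validity, assuming applicants' test scores were
# recorded at hiring and job performance was measured some time later.
# All data are hypothetical.
import numpy as np
from scipy.stats import pearsonr

test_at_hiring    = np.array([55, 78, 62, 90, 71, 84, 66, 95, 58, 80])
performance_later = np.array([2.4, 3.6, 2.9, 4.4, 3.2, 3.9, 3.0, 4.7, 2.6, 3.7])

r, p = pearsonr(test_at_hiring, performance_later)
print(f"predictive validity coefficient r = {r:.2f} (p = {p:.3f})")

# Ex-ante predictions from a simple linear fit, compared with ex-post outcomes.
slope, intercept = np.polyfit(test_at_hiring, performance_later, 1)
predicted = slope * test_at_hiring + intercept
print(f"correlation of predicted vs. actual: {np.corrcoef(predicted, performance_later)[0, 1]:.2f}")
```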

Experimental validity

The validity of the design of experimental research studies is a fundamental part of the scientific method, and a concern of research ethics. Without a valid design, valid scientific conclusions cannot be drawn.

Statistical conclusion validity

Statistical conclusion validity is the degree to which conclusions about the relationship among variables based on the data are correct or ‘reasonable’. This began as being solely about whether the statistical conclusion about the relationship of the variables was correct, but there is now a movement toward ‘reasonable’ conclusions that draw on quantitative, statistical, and qualitative data.

Statistical conclusion validity involves ensuring the use of adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures. As this type of validity is concerned solely with the relationship that is found among variables, the relationship may be solely a correlation.
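
One concrete aspect of "adequate sampling procedures" is ensuring the sample is large enough for the chosen statistical test to detect the effect of interest. The sketch below is one way to do such a power calculation; the effect size, significance level, and power target are assumptions chosen for illustration, not values prescribed by any particular study.

```python
# Minimal sketch: a priori power analysis for a two-sample t-test, one way of
# supporting statistical conclusion validity. The inputs below are assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed medium effect (Cohen's d)
                                   alpha=0.05,        # conventional significance level
                                   power=0.8)         # conventional power target
print(f"required sample size per group: {n_per_group:.0f}")   # roughly 64
```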

Internal validity

Internal validity is an inductive estimate of the degree to which conclusions about causal relationships can be made (e.g. cause and effect), based on the measures used, the research setting, and the whole research design. Good experimental techniques, in which the effect of an independent variable on a dependent variable is studied under highly controlled conditions, usually allow for higher degrees of internal validity than, for example, single-case designs.

Eight kinds of confounding variable can interfere with internal validity (i.e. with the attempt to isolate causal relationships):

  1. History, the specific events occurring between the first and second measurements in addition to the experimental variables
  2. Maturation, processes within the participants as a function of the passage of time (not specific to particular events), e.g., growing older, hungrier, more tired, and so on.
  3. Testing, the effects of taking a test upon the scores of a second testing.
  4. Instrumentation, changes in calibration of a measurement tool or changes in the observers or scorers may produce changes in the obtained measurements.
  5. Statistical regression, operating where groups have been selected on the basis of their extreme scores.
  6. Selection, biases resulting from differential selection of respondents for the comparison groups.
  7. Experimental mortality, or differential loss of respondents from the comparison groups.
  8. Selection-maturation interaction, which occurs, e.g., in multiple-group quasi-experimental designs

External validity

External validity concerns the extent to which the (internally valid) results of a study can be held to be true for other cases, for example for different people, places or times. In other words, it is about whether findings can be validly generalized. If the same research study were conducted in those other cases, would it yield the same results?

A major factor in this is whether the study sample (e.g. the research participants) is representative of the general population along relevant dimensions. Other factors jeopardizing external validity are:

  1. Reactive or interaction effect of testing, a pretest might increase the scores on a posttest
  2. Interaction effects of selection biases and the experimental variable.
  3. Reactive effects of experimental arrangements, which would preclude generalization about the effect of the experimental variable upon persons being exposed to it in non-experimental settings
  4. Multiple-treatment interference, where effects of earlier treatments are not erasable.

Ecological validity

Ecological validity is the extent to which research results can be applied to real-life situations outside of research settings. This issue is closely related to external validity but covers the question of to what degree experimental findings mirror what can be observed in the real world (ecology = the science of interaction between organism and its environment). To be ecologically valid, the methods, materials and setting of a study must approximate the real-life situation that is under investigation.

Ecological validity is partly related to the issue of experiment versus observation. Typically in science there are two domains of research: observational (passive) and experimental (active). The purpose of experimental designs is to test causality, so that you can infer that A causes B or that B causes A. But sometimes ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational; you can only conclude that A occurs together with B. Both techniques have their strengths and weaknesses.

Relationship to internal validity

At first glance, internal and external validity seem to contradict each other: to obtain an experimental design you have to control for all interfering variables, which is why experiments are often conducted in a laboratory setting. While gaining internal validity (excluding interfering variables by keeping them constant) you lose ecological or external validity because you establish an artificial laboratory setting. With observational research, on the other hand, you cannot control for interfering variables (low internal validity), but you can measure in the natural (ecological) environment, at the place where the behavior normally occurs.

The apparent contradiction of internal validity and external validity is, however, only superficial. The question of whether results from a particular study generalize to other people, places or times arises only when one follows an inductivist research strategy. If the goal of a study is to deductively test a theory, one is only concerned with factors which might undermine the rigor of the study, i.e. threats to internal validity. In other words, the relevance of external and internal validity to a research study depends on the goals of the study. Furthermore, conflating research goals with validity concerns can lead to the mutual-internal-validity problem, where theories are able to explain only phenomena in artificial laboratory settings but not the real world.

Diagnostic validity

In psychiatry there is a particular issue with assessing the validity of the diagnostic categories themselves. In this context:

  • content validity may refer to symptoms and diagnostic criteria;
  • concurrent validity may be defined by various correlates or markers, and perhaps also treatment response;
  • predictive validity may refer mainly to diagnostic stability over time;
  • discriminant validity may involve delimitation from other disorders.

Robins and Guze proposed in 1970 what were to become influential formal criteria for establishing the validity of psychiatric diagnoses. They listed five criteria:

  • distinct clinical description (including symptom profiles, demographic characteristics, and typical precipitants)
  • laboratory studies (including psychological tests, radiology and postmortem findings)
  • delimitation from other disorders (by means of exclusion criteria)
  • follow-up studies showing a characteristic course (including evidence of diagnostic stability)
  • family studies showing familial clustering

These were incorporated into the Feighner Criteria and Research Diagnostic Criteria that have since formed the basis of the DSM and ICD classification systems.

Kendler in 1980 distinguished between:

  • antecedent validators (familial aggregation, premorbid personality, and precipitating factors)
  • concurrent validators (including psychological tests)
  • predictive validators (diagnostic consistency over time, rates of relapse and recovery, and response to treatment)

Nancy Andreasen (1995) listed several additional validators – molecular genetics and molecular biology, neurochemistry, neuroanatomy, neurophysiology, and cognitive neuroscience – that are all potentially capable of linking symptoms and diagnoses to their neural substrates.

Kendell and Jablensky (2003) emphasized the importance of distinguishing between validity and utility, and argued that diagnostic categories defined by their syndromes should be regarded as valid only if they have been shown to be discrete entities with natural boundaries that separate them from other disorders.

Kendler (2006) emphasized that to be useful, a validating criterion must be sensitive enough to validate most syndromes that are true disorders, while also being specific enough to invalidate most syndromes that are not true disorders. On this basis, he argues that a Robins and Guze criterion of "runs in the family" is inadequately specific because most human psychological and physical traits would qualify: for example, an arbitrary syndrome comprising a mixture of "height over 6 ft, red hair, and a large nose" will be found to "run in families" and be "hereditary", but this should not be considered evidence that it is a disorder. Kendler has further suggested that "essentialist" gene models of psychiatric disorders, and the hope that we will be able to validate categorical psychiatric diagnoses by "carving nature at its joints" solely as a result of gene discovery, are implausible.

In the United States federal court system, the validity and reliability of evidence are evaluated using the Daubert standard: see Daubert v. Merrell Dow Pharmaceuticals. Perri and Lichtenwald (2010) provide a starting point for a discussion about a wide range of reliability and validity topics in their analysis of a wrongful murder conviction.
