Tuesday, February 20, 2024

Nuclear weapons in popular culture

A nuclear fireball lights up the night in a United States nuclear test.

Since their public debut in August 1945, nuclear weapons and their potential effects have been a recurring motif in popular culture, to the extent that the decades of the Cold War are often referred to as the "atomic age".

Images of nuclear weapons

The now-familiar peace symbol was originally a specifically anti-nuclear weapons icon.

The atomic bombings of Hiroshima and Nagasaki ushered in the "atomic age", and the bleak pictures of the bombed-out cities released shortly after the end of World War II became symbols of the power and destruction of the new weapons. (The first pictures released were taken only from a distance and did not show any human bodies; such pictures would only be released in later years.)

The first pictures released of a nuclear explosion—the blast from the Trinity test—focused on the fireball itself; later pictures would focus primarily on the mushroom cloud that followed. After the United States began a regular program of nuclear testing in the late 1940s, continuing through the 1950s (and matched by the Soviet Union), the mushroom cloud came to serve as a symbol of the weapons themselves.

Pictures of the nuclear weapons themselves (the actual casings) were not made public until 1960, and even those were only mock-ups of the "Fat Man" and "Little Boy" weapons dropped on Japan, not of the more powerful weapons developed since. Diagrams of the general principles of operation of thermonuclear weapons have been available in very general terms since at least 1969 in at least two encyclopedia articles, and open literature research into inertial confinement fusion has been at least richly suggestive of how the "secondary" and interstage components of thermonuclear weapons work.

In general, however, the design of nuclear weapons was the most closely guarded of secrets, and remained so until long after the secrets had been independently developed, or stolen, by all the major powers and a number of lesser ones. It is generally possible to trace US knowledge of foreign progress in nuclear weapons technology by reading the US Department of Energy document "Restricted Data Declassification Decisions—1946 to the Present" (although some nuclear weapons design data have been reclassified since concern about proliferation of nuclear weapons to "nth countries" increased in the late 1970s).

However, two controversial publications breached this silence in ways that made many in the US and allied nuclear weapons community very anxious.

Former nuclear weapons designer Theodore Taylor described to journalist John McPhee how terrorists could, without using any classified information at all, design a working fission weapon; McPhee published this information in the best-selling 1974 book The Curve of Binding Energy.

In 1979 the US Department of Energy sued to suppress the publication of an article by Howard Morland in The Progressive magazine detailing design information on thermonuclear and fission weapons that he had been able to glean in conversations with officials at several DOE contractor plants active in the manufacture of nuclear weapons components. Ray Kidder, a nuclear weapon designer testifying for Morland, identified several open-literature sources for the information Morland repeated in his article, while nuclear weapons historian Chuck Hansen produced a similar document for US Senator Charles Percy. Morland and The Progressive won the case, and Morland published a book, The Secret That Exploded, covering his journalistic research for the article and the trial, with a technical appendix in which he "corrected" what he felt were false assumptions about the design of thermonuclear weapons in his original article. The concepts in Morland's book are widely acknowledged in other popular-audience descriptions of the inner workings of thermonuclear weapons.

During the 1950s, many countries developed large civil-defense programs designed to aid the populace in the event of nuclear warfare. These generally included drills for evacuation to fallout shelters, promoted through media such as the US film Duck and Cover. These drills, with their images of eerily empty streets and of schoolchildren hiding from a nuclear bomb under their desks, would later become symbols of the seemingly inescapable and common fate created by such weapons. Some Americans built back-yard fallout shelters, which would provide little protection from a direct hit but would keep out wind-blown fallout for a few days or weeks. (Switzerland, which never acquired nuclear weapons although it had the technological sophistication to do so long before Pakistan or North Korea, has built nuclear blast shelters that would protect most of its population from a nuclear war.)

After the development of hydrogen bombs in the 1950s, and especially after the massive and widely publicized Castle Bravo test accident by the United States in 1954, which spread nuclear fallout over a large area and resulted in the death of at least one Japanese fisherman, the idea of a "limited" or "survivable" nuclear war was increasingly replaced by the perception that nuclear war meant the potentially instant end of all civilization: indeed, the explicit strategy of the nuclear powers was called Mutual Assured Destruction. Nuclear weapons became synonymous with apocalypse, and as a symbol this resonated through the culture of nations with freedom of the press. Several popular novels, such as Alas, Babylon and On the Beach, portrayed the aftermath of nuclear war, and science-fiction novels such as A Canticle for Leibowitz explored the long-term consequences. Stanley Kubrick's film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb satirically portrayed the events and the thinking that could begin a nuclear war.

Nuclear weapons have also been a central focus of peace organizations. The Campaign for Nuclear Disarmament (CND) was one of the main organisations campaigning against the "Bomb". Its symbol, a combination of the semaphore signals for "N" (nuclear) and "D" (disarmament), entered modern popular culture as an icon of peace.

A limited number of Indian films depicting nuclear weapons and technology have been made, and these mostly show nuclear weapons in a negative light, especially in the hands of non-state actors. Atom Bomb (1947) by Homi Wadia, one of the first Indian films involving nuclear technology, is about a man with enhanced physical strength due to the effects of a nuclear weapons test. Indian films involving non-state actors and nuclear weapons include Agent Vinod (1977) by Deepak Bahry and a 2012 film of the same name by Sriram Raghavan, Vikram (1986) by Rajasekhar, Mr. India (1987) by Shekhar Kapur, Tirangaa (1993) by Mehul Kumar, The Hero: Love Story of a Spy (2003) by Anil Sharma, Fanaa (2006) by Kunal Kohli, and Tiger Zinda Hai (2017) by Ali Abbas Zafar. Other Indian films covering nuclear weapons include Hava Aney Dey (2004) by Partho Sen-Gupta, about a future nuclear war between India and Pakistan, and Parmanu: The Story of Pokhran (2018) by Abhishek Sharma, the first Indian historical film about the Pokhran-II nuclear weapons tests. Sacred Games, an Indian Netflix series based on the novel of the same name, involves the acquisition of a nuclear bomb by an apocalyptic cult that plans to detonate it in Mumbai.

History of biological warfare

From Wikipedia, the free encyclopedia

Before the 20th century, the use of biological agents took three major forms:

  • Deliberate contamination of food and water with poisonous or contagious material
  • Use of microbes, biological toxins, animals, or plants (living or dead) in a weapon system
  • Use of biologically inoculated fabrics and persons

In the 20th century, sophisticated bacteriological and virological techniques allowed the production of significant stockpiles of weaponized bio-agents.

Antiquity

The earliest documented incident of the intention to use biological weapons is possibly recorded in Hittite texts of 1500–1200 BC, in which victims of tularemia were driven into enemy lands, causing an epidemic. Although the Assyrians knew of ergot, a parasitic fungus of rye which produces ergotism when ingested, there is no evidence that they poisoned enemy wells with the fungus, as has been claimed.

According to Homer's epic poems about the legendary Trojan War, the Iliad and the Odyssey, spears and arrows were tipped with poison. During the First Sacred War in Greece, in about 590 BC, Athens and the Amphictionic League poisoned the water supply of the besieged town of Kirrha (near Delphi) with the toxic plant hellebore. According to Herodotus, during the 4th century BC Scythian archers dipped their arrow tips into decomposing cadavers of humans and snakes or in blood mixed with manure, supposedly making them contaminated with dangerous bacterial agents like Clostridium perfringens and Clostridium tetani, and snake venom.

In a naval battle against King Eumenes of Pergamon in 184 BC, Hannibal of Carthage had clay pots filled with venomous snakes and instructed his sailors to throw them onto the decks of enemy ships. The Roman commander Manius Aquillius poisoned the wells of besieged enemy cities in about 130 BC. In about AD 198, the Parthian city of Hatra (near Mosul, Iraq) repulsed the Roman army led by Septimius Severus by hurling clay pots filled with live scorpions at them. Like the Scythian archers, Roman soldiers dipped their swords into excrement and cadavers; victims were commonly infected with tetanus as a result.

The use of bees as guided biological weapons was described in Byzantine written sources, such as Tactica of Emperor Leo VI the Wise in the chapter On Naval Warfare.

There are numerous other instances of the use of plant toxins, venoms, and other poisonous substances to create biological weapons in antiquity.

Post-classical ages

The Mongol Empire established commercial and political connections between the eastern and western regions of the world through the most mobile army yet seen. Its forces, moving rapidly from the steppes of East Asia (where bubonic plague was and remains endemic among small rodents), kept the chain of infection unbroken until they reached, and infected, peoples and rodents that had never encountered the disease. The ensuing Black Death may have killed up to 25 million people, including in China, and roughly a third of the population of Europe over the following decades, changing the course of Asian and European history.

Biological agents were used extensively in many parts of Africa from the sixteenth century AD, most often in the form of poisoned arrows or of powder spread on the battlefront, as well as through the poisoning of the horses and water supplies of enemy forces. In Borgu, there were specific mixtures to kill, to hypnotize, to make the enemy bold, and to act as an antidote against the enemy's own poison. The creation of biologicals was reserved for a specific and professional class of medicine-men. In South Sudan, the people of the Koalit Hills kept their country free of Arab invasions by using tsetse flies as a weapon of war. Several accounts give an idea of the effectiveness of these biologicals. For example, Mockley-Ferryman in 1892 commented on the Dahomean invasion of Borgu, stating that "their (Borgawa) poisoned arrows enabled them to hold their own with the forces of Dahomey notwithstanding the latter's muskets." The same fate befell Portuguese raiders in Senegambia, who were defeated by Mali's Gambian forces, and John Hawkins in Sierra Leone, where he lost a number of his men to poisoned arrows.

During the Middle Ages, victims of the bubonic plague were used for biological attacks, often by flinging fomites such as infected corpses and excrement over castle walls using catapults. Bodies would be tied along with cannonballs and shot towards the city area. In 1346, during the siege of Caffa (now Feodossia, Crimea), the attacking Tartar forces (subjugated by the Mongol empire under Genghis Khan more than a century earlier) used the bodies of Mongol warriors of the Golden Horde who had died of plague as weapons. It has been speculated that this operation may have been responsible for the advent of the Black Death in Europe. At the time, the attackers believed that the stench alone was enough to kill the defenders, though it was the disease that was deadly. (In recent years, however, some scholarship has cast doubt on the use of trebuchets to hurl corpses, given the size of trebuchets and how close to the walls they would have had to be constructed in the hilly landscape around Caffa.)

At the siege of Thun-l'Évêque in 1340, during the Hundred Years' War, the attackers catapulted decomposing animals into the besieged area.

In 1422, during the siege of Karlstein Castle in Bohemia, Hussite attackers used catapults to throw dead (but not plague-infected) bodies and 2000 carriage-loads of dung over the walls.

English Longbowmen usually did not draw their arrows from a quiver; rather, they stuck their arrows into the ground in front of them. This allowed them to nock the arrows faster and the dirt and soil was likely to stick to the arrowheads, thus making the wounds much more likely to become infected.

17th and 18th century

Europe

The last known incident of using plague corpses for biological warfare may have occurred in 1710, when Russian forces attacked Swedish troops by flinging plague-infected corpses over the city walls of Reval (now Tallinn), although this is disputed. However, during the 1785 siege of La Calle, Tunisian forces flung diseased clothing into the city.

North America

During Pontiac's Rebellion, in June 1763 a group of Native Americans laid siege to British-held Fort Pitt. During a parley in the middle of the siege on June 24, Captain Simeon Ecuyer gave representatives of the besieging Delawares, including Turtleheart, two blankets and a handkerchief, enclosed in small metal boxes, that had been exposed to smallpox, in an attempt to spread the disease to the besieging warriors and so end the siege. William Trent, the trader turned militia commander who had come up with the plan, sent an invoice to the British colonial authorities in North America indicating that the purpose of giving the blankets was "to Convey the Smallpox to the Indians." The invoice was approved by General Thomas Gage, then serving as Commander-in-Chief, North America. A reported outbreak that had begun the previous spring left as many as one hundred Native Americans dead in Ohio Country from 1763 to 1764. It is not clear whether the smallpox was a result of the Fort Pitt incident or whether the virus was already present among the Delaware people, as outbreaks occurred on their own every dozen or so years, and the delegates who were met again later had seemingly not contracted smallpox. Trade and combat also provided ample opportunity for transmission of the disease.

A month later, Colonel Henry Bouquet, who was leading a relief attempt towards Fort Pitt, wrote to his superior Sir Jeffery Amherst to discuss the possibility of using smallpox-infested blankets to spread smallpox amongst Natives. Amherst wrote to Bouquet that: "Could it not be contrived to send the small pox among the disaffected tribes of Indians? We must on this occasion use every stratagem in our power to reduce them." Bouquet replied in a letter, writing that "I will try to inocculate [sic] the Indians by means of Blankets that may fall in their hands, taking care however not to get the disease myself. As it is pity to oppose good men against them, I wish we could make use of the Spaniard's Method, and hunt them with English Dogs. Supported by Rangers, and some Light Horse, who would I think effectively extirpate or remove that Vermine." After receiving Bouquet's response, Amherst wrote back to him, stating that "You will Do well to try to Innoculate [sic] the Indians by means of Blankets, as well as to try Every other method that can serve to Extirpate this Execrable Race. I should be very glad your Scheme for Hunting them Down by Dogs could take Effect, but England is at too great a Distance to think of that at present."

New South Wales

Many Aboriginal Australians have claimed that smallpox outbreaks in Australia were a deliberate result of European colonisation, though this possibility has only been raised by historians from the 1980s onwards, when Noel Butlin suggested "there are some possibilities that... disease could have been used deliberately as an exterminating agent."

In 1997, scholar David Day claimed there "remains considerable circumstantial evidence to suggest that officers other than Phillip, or perhaps convicts or soldiers... deliberately spread smallpox among aborigines", and in 2000, John Lambert argued that "strong circumstantial evidence suggests the smallpox epidemic which ravaged Aborigines in 1789, may have resulted from deliberate infection."

Judy Campbell argued in 2002 that it is highly improbable that the First Fleet was the source of the epidemic, as "smallpox had not occurred in any members of the First Fleet"; the only possible source of infection from the Fleet would have been exposure to variolous matter imported for the purposes of inoculation against smallpox. Campbell argued that, while there has been considerable speculation about a hypothetical exposure to the First Fleet's variolous matter, there was no evidence that Aboriginal people were ever actually exposed to it. She pointed to regular contact between fishing fleets from the Indonesian archipelago, where smallpox was always present, and Aboriginal people in Australia's north as a more likely source for the introduction of smallpox. She noted that while these fishermen are generally referred to as 'Macassans', referring to the port of Macassar on the island of Sulawesi from which most of them originated, "some travelled from islands as distant as New Guinea". She noted that there is little disagreement that the smallpox epidemic of the 1860s was contracted from Macassan fishermen and spread through the Aboriginal population by Aborigines fleeing outbreaks and also via their traditional social, kinship and trading networks. She argued that the 1789–90 epidemic followed the same pattern.

These claims are controversial, as it is argued that any smallpox virus brought to New South Wales would probably have been sterilised by the heat and humidity encountered during the voyage of the First Fleet from England, rendering it useless for biological warfare. However, in 2007, Christopher Warren argued that any smallpox which might have been carried on board the First Fleet may still have been viable upon landing in Australia. Since then, some scholars have argued that smallpox in Australia was deliberately spread by the inhabitants of the British penal colony at Port Jackson in 1789.

In 2013, Warren reviewed the issue and argued that smallpox did not spread across Australia before 1824 and showed that there was no smallpox at Macassar that could have caused the outbreak at Sydney. Warren, however, did not address the issue of persons who joined the Macassan fleet from other islands and from parts of Sulawesi other than the port of Macassar. Warren concluded that the British were "the most likely candidates to have released smallpox" near Sydney Cove in 1789. Warren proposed that the British had no choice as they were confronted with dire circumstances when, among other factors, they ran out of ammunition for their muskets; he also used Aboriginal oral tradition and archaeological records from indigenous gravesites to analyse the cause and effect of the spread of smallpox in 1789.

Prior to the publication of Warren's article (2013), John Carmody, a professor of physiology, argued that the epidemic was an outbreak of chickenpox which took a drastic toll on an Aboriginal population without immunological resistance. With regard to how smallpox might have reached the Sydney region, Carmody said: "There is absolutely no evidence to support any of the theories and some of them are fanciful and far-fetched." Warren argued against the chickenpox theory in endnote 3 of Smallpox at Sydney Cove – Who, When, Why?. However, in a 2014 joint paper on historic Aboriginal demography, Carmody and the Australian National University's Boyd Hunter argued that the recorded behaviour of the epidemic ruled out smallpox and indicated chickenpox.

20th century

By the turn of the 20th century, advances in microbiology had made thinking about "germ warfare" part of the zeitgeist. Jack London, in his short story "Yah! Yah! Yah!" (1909), described a punitive European expedition to a South Pacific island deliberately exposing the Polynesian population to measles, of which many of them died. London wrote another science fiction tale the following year, "The Unparalleled Invasion" (1910), in which the Western nations wipe out all of China with a biological attack.

First World War

During the First World War (1914–1918), the German Empire made some early attempts at anti-agriculture biological warfare. Those attempts were made by a special sabotage group headed by Rudolf Nadolny. Using diplomatic pouches and couriers, the German General Staff supplied small teams of saboteurs in the Russian Grand Duchy of Finland and in the then-neutral countries of Romania, the United States, and Argentina. In Finland, saboteurs mounted on reindeer placed ampoules of anthrax in stables of Russian horses in 1916. Anthrax was also supplied to the German military attaché in Bucharest, as was glanders, which was employed against livestock destined for Allied service. German intelligence officer and US citizen Anton Casimir Dilger established a secret lab in the basement of his sister's home in Chevy Chase, Maryland, that produced glanders, which was used to infect livestock in ports and inland collection points including, at least, Newport News, Norfolk, Baltimore, and New York City, and probably St. Louis and Covington, Kentucky. In Argentina, German agents also employed glanders in the port of Buenos Aires and tried to ruin wheat harvests with a destructive fungus. Germany itself also became a victim of similar attacks: horses bound for Germany were infected with Burkholderia by French operatives in Switzerland.

The Geneva Protocol of 1925 prohibited the use of chemical weapons and biological weapons among signatory states in international armed conflicts, but said nothing about experimentation, production, storage, or transfer; later treaties did cover these aspects. Twentieth-century advances in microbiology enabled the first pure-culture biological agents to be developed by World War II.

Interwar period and WWII

In the interwar period, little research on biological warfare was initially done in either Britain or the United States. In the United Kingdom the preoccupation was mainly with withstanding the anticipated conventional bombing attacks that would be unleashed in the event of war with Germany. As tensions increased, Sir Frederick Banting began lobbying the British government to establish a program for the research and development of biological weapons, in order to deter the Germans from launching a biological attack of their own. Banting proposed a number of innovative schemes for the dissemination of pathogens, including aerial-spray attacks and germs distributed through the mail system.

With the onset of hostilities, the Ministry of Supply finally established a biological weapons programme at Porton Down, headed by the microbiologist Paul Fildes. The research was championed by Winston Churchill, and soon tularemia, anthrax, brucellosis, and botulinum toxin had been effectively weaponized. In particular, Gruinard Island in Scotland was contaminated with anthrax during a series of extensive tests and remained so for the next 48 years. Although Britain never used the biological weapons it developed offensively, its program was the first to successfully weaponize a variety of deadly pathogens and bring them into industrial production. Other nations, notably France and Japan, had begun their own biological-weapons programs.

When the United States entered the war, mounting British pressure for a similar research program and for an Allied pooling of resources led to the creation of a large industrial complex at Fort Detrick, Maryland, in 1942 under the direction of George W. Merck. The biological and chemical weapons developed during that period were tested at the Dugway Proving Ground in Utah. Soon there were facilities for the mass production of anthrax spores, brucellosis, and botulinum toxin, although the war was over before these weapons could be of much operational use.

However, the most notorious program of the period was run by the secret Imperial Japanese Army Unit 731 during the war, based at Pingfan in Manchuria and commanded by Lieutenant General Shirō Ishii. This unit did research on BW, conducted often-fatal human experiments on prisoners, and produced biological weapons for combat use. Although the Japanese effort lacked the technological sophistication of the American or British programs, it far outstripped them in its widespread application and indiscriminate brutality. Biological weapons were used against both Chinese soldiers and civilians in several military campaigns. Three veterans of Unit 731 testified in a 1989 interview to the Asahi Shimbun that they contaminated the Horustein river with typhoid near the Soviet troops during the Battle of Khalkhin Gol. In 1940, the Imperial Japanese Army Air Force bombed Ningbo with ceramic bombs full of fleas carrying the bubonic plague. A film showing this operation was seen by the imperial princes Tsuneyoshi Takeda and Takahito Mikasa during a screening made by mastermind Shirō Ishii. During the Khabarovsk war crimes trials, the accused, such as Major General Kiyoshi Kawashima, testified that as early as 1941 some 40 members of Unit 731 air-dropped plague-contaminated fleas on Changde. These operations caused epidemic plague outbreaks.

Many of these operations were ineffective due to inefficient delivery systems, using disease-bearing insects rather than dispersing the agent as a bioaerosol cloud.

Ban Shigeo, a technician at the Japanese Army's 9th Technical Research Institute, left an account of the activities at the Institute which was published in "The Truth About the Army Noborito Institute". Ban included an account of his trip to Nanking in 1941 to participate in the testing of poisons on Chinese prisoners. His testimony tied the Noborito Institute to the infamous Unit 731, which participated in biomedical research.

During the final months of World War II, Japan planned to use plague as a biological weapon against U.S. civilians in San Diego, California, during Operation Cherry Blossoms at Night. They hoped that it would kill tens of thousands of U.S. civilians and thereby dissuade America from attacking Japan. The plan was set to launch on September 22, 1945, at night, but it never came to fruition because of Japan's surrender on August 15, 1945.

When the war ended, the US Army quietly enlisted certain members of Noborito in its efforts against the communist camp in the early years of the Cold War. The head of Unit 731, Shiro Ishii, was granted immunity from war crimes prosecution in exchange for providing information to the United States on the Unit's activities. Allegations were made that a "chemical section" of a US clandestine unit hidden within Yokosuka naval base was operational during the Korean War, and then worked on unspecified projects inside the United States from 1955 to 1959, before returning to Japan to enter the private sector.

Some of the Unit 731 personnel were imprisoned by the Soviets and may have been a source of information on Japanese weaponization.

Postwar period

Considerable research into BW was undertaken throughout the Cold War era by the US, UK and USSR, and probably other major nations as well, although it is generally believed that such weapons were never used.

In Britain, the 1950s saw the weaponization of plague, brucellosis, tularemia and later equine encephalomyelitis and vaccinia viruses. Trial tests at sea were carried out including Operation Cauldron off Stornoway in 1952. The programme was cancelled in 1956, when the British government unilaterally renounced the use of biological and chemical weapons.

The United States initiated its weaponization efforts with disease vectors in 1953, focusing on plague (carried by fleas), EEE (mosquitoes), and yellow fever (mosquitoes; OJ-AP). However, US medical scientists in occupied Japan undertook extensive research on insect vectors, with the assistance of former Unit 731 staff, as early as 1946.

The United States Army Chemical Corps then initiated a crash program to weaponize anthrax (N) in the E61 1/2-lb hour-glass bomblet. Though the program was successful in meeting its development goals, the lack of validation on the infectivity of anthrax stalled standardization. The United States Air Force was also unsatisfied with the operational qualities of the M114/US bursting bomblet and labeled it an interim item until the Chemical Corps could deliver a superior weapon.

Around 1950 the Chemical Corps also initiated a program to weaponize tularemia (UL). Shortly after the E61/N failed to make standardization, tularemia was standardized in the 3.4" M143 bursting spherical bomblet. This was intended for delivery by the MGM-29 Sergeant missile warhead and could produce 50% infection over a 7-square-mile (18 km2) area. Although tularemia is treatable by antibiotics, treatment does not shorten the course of the disease. US conscientious objectors were used as consenting test subjects for tularemia in a program known as Operation Whitecoat. There were also many unpublicized tests carried out in public places with bio-agent simulants during the Cold War.

E120 biological bomblet, developed before the U.S. signed the Biological and Toxin Weapons Convention.

In addition to the use of bursting bomblets for creating biological aerosols, the Chemical Corps started investigating aerosol-generating bomblets in the 1950s. The E99 was the first workable design, but was too complex to be manufactured. By the late 1950s the 4.5" E120 spraying spherical bomblet was developed; a B-47 bomber with a SUU-24/A dispenser could infect 50% or more of the population of a 16-square-mile (41 km2) area with tularemia with the E120. The E120 was later superseded by dry-type agents.

Dry-type biologicals resemble talcum powder, and can be disseminated as aerosols using gas expulsion devices instead of a burster or complex sprayer. The Chemical Corps developed Flettner rotor bomblets and later triangular bomblets for wider coverage due to improved glide angles over Magnus-lift spherical bomblets. Weapons of this type were in advanced development by the time the program ended.

From January 1962, Rocky Mountain Arsenal "grew, purified and biodemilitarized" the plant pathogen wheat stem rust (Agent TX), Puccinia graminis var. tritici, for the Air Force biological anti-crop program. TX-treated grain was grown at the Arsenal from 1962 to 1968 in Sections 23–26. Unprocessed TX was also transported from Beale AFB for purification, storage, and disposal. Trichothecene mycotoxins are toxins that can be extracted from wheat stem rust and rice blast and can kill or incapacitate depending on the concentration used. The "red mold disease" of wheat and barley in Japan is prevalent in the region that faces the Pacific Ocean. Toxic trichothecenes, including nivalenol, deoxynivalenol, and monoacetylnivalenol (fusarenon-X) from Fusarium nivale, can be isolated from moldy grains. In the suburbs of Tokyo, an illness similar to "red mold disease" was described in an outbreak of a foodborne disease, as a result of the consumption of Fusarium-infected rice. Ingestion of moldy grains contaminated with trichothecenes has been associated with mycotoxicosis.

Although there is no evidence that biological weapons were used by the United States, China and North Korea accused the US of large-scale field testing of BW against them during the Korean War (1950–1953). At the time of the Korean War the United States had weaponized only one agent, brucellosis ("Agent US"), which is caused by Brucella suis. The original weaponized form used the M114 bursting bomblet in M33 cluster bombs. While the specific form of the biological bomb was classified until some years after the Korean War, nothing in the various exhibits of biological weapons that North Korea alleged were dropped on the country resembled an M114 bomblet. There were ceramic containers that had some similarity to the Japanese weapons used against the Chinese in World War II, developed by Unit 731.

Cuba also accused the United States of spreading human and animal disease on their island nation.

During the 1947–1949 Palestine war, International Red Cross reports raised suspicion that the Israeli Haganah militia had released Salmonella typhi bacteria into the water supply for the city of Acre, causing an outbreak of typhoid among the inhabitants. Egyptian troops later claimed to have captured disguised Haganah soldiers near wells in Gaza, whom they executed for allegedly attempting another attack. Israel denies these allegations.

Biological and Toxin Weapons Convention

In mid-1969, the UK and the Warsaw Pact, separately, introduced proposals to the UN to ban biological weapons, which would lead to the signing of the Biological and Toxin Weapons Convention in 1972. United States President Richard Nixon signed an executive order in November 1969, which stopped production of biological weapons in the United States and allowed only scientific research of lethal biological agents and defensive measures such as immunization and biosafety. The biological munition stockpiles were destroyed, and approximately 2,200 researchers became redundant.

Special munitions for the United States Special Forces and the CIA, and the Big Five weapons for the military, were destroyed in accordance with Nixon's executive order to end the offensive program. The CIA maintained its collection of biologicals well into 1975, when it became the subject of the Senate Church Committee.

The Biological and Toxin Weapons Convention was signed by the US, UK, USSR and other nations in 1972 as a ban on the "development, production and stockpiling of microbes or their poisonous products except in amounts necessary for protective and peaceful research". The convention bound its signatories to a far more stringent set of regulations than had been envisioned by the 1925 Geneva Protocol. By 1996, 137 countries had signed the treaty; however, it is believed that since the signing of the Convention the number of countries capable of producing such weapons has increased.

The Soviet Union continued research and production of offensive biological weapons in a program called Biopreparat, despite having signed the convention. The United States had no solid proof of this program until Vladimir Pasechnik defected in 1989 and Kanatjan Alibekov, the first deputy director of Biopreparat, defected in 1992. Pathogens developed by the organization were used in open-air trials. It is known that Vozrozhdeniye Island, located in the Aral Sea, was used as a testing site. In 1971, such testing led to the accidental aerosol release of smallpox over the Aral Sea and a subsequent smallpox epidemic.

During the closing stages of the Rhodesian Bush War, the Rhodesian government resorted to using chemical and biological warfare agents. Watercourses at several sites inside the Mozambique border were deliberately contaminated with cholera. These biological attacks had little overall impact on the fighting capability of ZANLA, but resulted in at least 809 recorded deaths of insurgents. They also caused considerable distress to the local population. The Rhodesians also experimented with several other pathogens and toxins for use in their counterinsurgency.

After the 1991 Persian Gulf War, Iraq admitted to the United Nations inspection team to having produced 19,000 liters of concentrated botulinum toxin, of which approximately 10,000 L were loaded into military weapons; the 19,000 liters have never been fully accounted for. This is approximately three times the amount needed to kill the entire current human population by inhalation, although in practice it would be impossible to distribute it so efficiently, and, unless it is protected from oxygen, it deteriorates in storage.

According to the U.S. Congress Office of Technology Assessment eight countries were generally reported as having undeclared offensive biological warfare programs in 1995: China, Iran, Iraq, Israel, Libya, North Korea, Syria and Taiwan. Five countries had admitted to having had offensive weapon or development programs in the past: United States, Russia, France, the United Kingdom, and Canada. Offensive BW programs in Iraq were dismantled by Coalition Forces and the UN after the first Gulf War (1990–91), although an Iraqi military BW program was covertly maintained in defiance of international agreements until it was apparently abandoned during 1995 and 1996.

21st century

On September 18, 2001, and for a few days thereafter, several letters containing intentionally prepared anthrax spores were sent to members of the U.S. Congress and American media outlets; the attack sickened at least 22 people, of whom five died. The identity of the bioterrorist remains unknown, although in 2008 authorities stated that Bruce Ivins was likely the perpetrator. (See 2001 anthrax attacks.)

Suspicions of an ongoing Iraqi biological warfare program were not substantiated in the wake of the March 2003 invasion of that country. Later that year, however, Muammar Gaddafi was persuaded to terminate Libya's biological warfare program. In 2008, according to a U.S. Congressional Research Service report, China, Cuba, Egypt, Iran, Israel, North Korea, Russia, Syria and Taiwan were considered, with varying degrees of certainty, to have some biological warfare capability. According to the same 2008 report by the U.S. Congressional Research Service, "Developments in biotechnology, including genetic engineering, may produce a wide variety of live agents and toxins that are difficult to detect and counter; and new chemical warfare agents and mixtures of chemical weapons and biowarfare agents are being developed . . . Countries are using the natural overlap between weapons and civilian applications of chemical and biological materials to conceal chemical weapon and bioweapon production." By 2011, 165 countries had officially joined the BWC and pledged to disavow biological weapons.

Disease surveillance

From Wikipedia, the free encyclopedia
 
Disease surveillance is an epidemiological practice by which the spread of disease is monitored in order to establish patterns of progression. The main role of disease surveillance is to predict, observe, and minimize the harm caused by outbreak, epidemic, and pandemic situations, as well as increase knowledge about which factors contribute to such circumstances. A key part of modern disease surveillance is the practice of disease case reporting.

In modern times, the reporting of disease outbreaks has been transformed from manual record keeping to instant worldwide internet communication.

Previously, case counts had to be gathered from hospitals (which would be expected to see most of the occurrences), collated, and eventually made public. With the advent of modern communication technology, this has changed dramatically. Organizations like the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) can now report cases and deaths from significant diseases within days, sometimes within hours, of the occurrence. Further, there is considerable public pressure to make this information available quickly and accurately.

Mandatory reporting

Formal reporting of notifiable infectious diseases is a requirement placed upon health care providers by many regional and national governments, and upon national governments by the World Health Organization, to monitor the spread resulting from the transmission of infectious agents. Since 1969, WHO has required that all cases of the following diseases be reported to the organization: cholera, plague, yellow fever, smallpox, relapsing fever and typhus. In 2005, the list was extended to include polio and SARS. Regional and national governments typically monitor a larger set of communicable diseases (around 80 in the U.S.) that can potentially threaten the general population. Tuberculosis, HIV, botulism, hantavirus, anthrax, and rabies are examples of such diseases. The incidence counts of diseases are often used as health indicators to describe the overall health of a population.

World Health Organization

The World Health Organization (WHO) is the lead agency for coordinating the global response to major diseases. The WHO maintains websites for a number of diseases and has active teams in many countries where these diseases occur.

During the SARS outbreak in early 2004, for example, the Beijing staff of the WHO produced updates every few days for the duration of the outbreak. Beginning in January 2004, the WHO has produced similar updates for H5N1. These results are widely reported and closely watched.

WHO's Epidemic and Pandemic Alert and Response (EPR) programme, which works to detect, rapidly verify and respond appropriately to epidemic-prone and emerging disease threats, covers a defined list of such diseases.

Political challenges

As the lead organization in global public health, the WHO occupies a delicate role in global politics. It must maintain good relationships with each of the many countries in which it is active. As a result, it may only report results within a particular country with the agreement of the country's government. Because some governments regard the release of any information on disease outbreaks as a state secret, this can place the WHO in a difficult position.

The WHO-coordinated International Outbreak Alert and Response is designed to ensure that "outbreaks of potential international importance are rapidly verified and information is quickly shared within the Network", but not necessarily with the public; it integrates and coordinates "activities to support national efforts", rather than challenging national authority within a nation, in order to "respect the independence and objectivity of all partners". The commitment that "All Network responses will proceed with full respect for ethical standards, human rights, national and local laws, cultural sensitivities and tradition" ensures each nation that its security, financial, and other interests will be given full weight.

Technical challenges

Testing for a disease can be expensive, and distinguishing between two diseases can be prohibitively difficult in many countries. One standard means of determining if a person has had a particular disease is to test for the presence of antibodies that are particular to this disease. In the case of H5N1, for example, there is a low pathogenic H5N1 strain in wild birds in North America that a human could conceivably have antibodies against. It would be extremely difficult to distinguish between antibodies produced by this strain, and antibodies produced by Asian lineage HPAI A(H5N1). Similar difficulties are common, and make it difficult to determine how widely a disease may have spread.

There is currently little available data on the spread of H5N1 in wild birds in Africa and Asia. Without such data, predicting how the disease might spread in the future is difficult. Information that scientists and decision-makers need in order to make useful medical products and informed health-care decisions, but currently lack, includes:

  • Surveillance of wild bird populations
  • Cell cultures of particular strains of diseases

H5N1

Surveillance of H5N1 in humans, poultry, wild birds, cats and other animals remains very weak in many parts of Asia and Africa. Much remains unknown about the exact extent of its spread.

H5N1 in China is less than fully reported. Blogs have described many discrepancies between official China government announcements concerning H5N1 and what people in China see with their own eyes. Many reports of total H5N1 cases have excluded China due to widespread disbelief in China's official numbers. (See Disease surveillance in China.)

"Only half the world's human bird flu cases are being reported to the World Health Organization within two weeks of being detected, a response time that must be improved to avert a pandemic, a senior WHO official said Saturday. Shigeru Omi, WHO's regional director for the Western Pacific, said it is estimated that countries would have only two to three weeks to stamp out, or at least slow, a pandemic flu strain after it began spreading in humans."

David Nabarro, chief avian flu coordinator for the United Nations, says avian flu has too many unanswered questions.

CIDRAP reported on 25 August 2006 on a new US government website that allows the public to view current information about testing of wild birds for H5N1 avian influenza, which is part of a national wild-bird surveillance plan that "includes five strategies for early detection of highly pathogenic avian influenza. Sample numbers from three of these will be available on HEDDS: live wild birds, subsistence hunter-killed birds, and investigations of sick and dead wild birds. The other two strategies involve domestic bird testing and environmental sampling of water and wild-bird droppings. [...] A map on the new USGS site shows that 9,327 birds from Alaska have been tested so far this year, with only a few from most other states. Last year, officials tested just 721 birds from Alaska and none from most other states, another map shows. The goal of the surveillance program for 2006 is to collect 75,000 to 100,000 samples from wild birds and 50,000 environmental samples, officials have said".

Mathematical modelling of infectious diseases

From Wikipedia, the free encyclopedia

Mathematical models can project how infectious diseases progress to show the likely outcome of an epidemic (including in plants) and help inform public health and plant health interventions. Models use basic assumptions or collected statistics along with mathematics to find parameters for various infectious diseases and use those parameters to calculate the effects of different interventions, like mass vaccination programs. The modelling can help decide which intervention(s) to avoid and which to trial, or can predict future growth patterns, etc.

History

The modelling of infectious diseases is a tool that has been used to study the mechanisms by which diseases spread, to predict the future course of an outbreak and to evaluate strategies to control an epidemic.

The first scientist who systematically tried to quantify causes of death was John Graunt in his book Natural and Political Observations made upon the Bills of Mortality, in 1662. The bills he studied were listings of numbers and causes of deaths published weekly. Graunt's analysis of causes of death is considered the beginning of the "theory of competing risks" which according to Daley and Gani is "a theory that is now well established among modern epidemiologists".

The earliest account of mathematical modelling of spread of disease was carried out in 1760 by Daniel Bernoulli. Trained as a physician, Bernoulli created a mathematical model to defend the practice of inoculating against smallpox. The calculations from this model showed that universal inoculation against smallpox would increase the life expectancy from 26 years 7 months to 29 years 9 months. Daniel Bernoulli's work preceded the modern understanding of germ theory.

In the early 20th century, William Hamer and Ronald Ross applied the law of mass action to explain epidemic behaviour.

The 1920s saw the emergence of compartmental models. The Kermack–McKendrick epidemic model (1927) and the Reed–Frost epidemic model (1928) both describe the relationship between susceptible, infected and immune individuals in a population. The Kermack–McKendrick epidemic model was successful in predicting the behavior of outbreaks very similar to that observed in many recorded epidemics.

Recently, agent-based models (ABMs) have been used in place of simpler compartmental models. For example, epidemiological ABMs have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Despite their complexity and the high computational power they require, epidemiological ABMs have been criticized for their simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures when they are accurately calibrated.

Assumptions

Models are only as good as the assumptions on which they are based. If a model makes predictions that are out of line with observed results and the mathematics is correct, the initial assumptions must change to make the model useful.

  • Rectangular and stationary age distribution, i.e., everybody in the population lives to age L and then dies, and for each age (up to L) there is the same number of people in the population. This is often well-justified for developed countries where there is a low infant mortality and much of the population lives to the life expectancy.
  • Homogeneous mixing of the population, i.e., individuals of the population under scrutiny assort and make contact at random and do not mix mostly within a smaller subgroup. This assumption is rarely justified because social structure is widespread. For example, most people in London only make contact with other Londoners. Further, within London there are smaller subgroups, such as the Turkish community or teenagers (just to give two examples), who mix with each other more than with people outside their group. However, homogeneous mixing is a standard assumption to make the mathematics tractable.

Types of epidemic models

Stochastic

"Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. Stochastic models depend on the chance variations in risk of exposure, disease and other illness dynamics. Statistical agent-level disease dissemination in small or large populations can be determined by stochastic methods.

Deterministic

When dealing with large populations, as in the case of tuberculosis, deterministic or compartmental mathematical models are often used. In a deterministic model, individuals in the population are assigned to different subgroups or compartments, each representing a specific stage of the epidemic.

The transition rates from one class to another are mathematically expressed as derivatives, hence the model is formulated using differential equations. While building such models, it must be assumed that the population size in a compartment is differentiable with respect to time and that the epidemic process is deterministic. In other words, the changes in population of a compartment can be calculated using only the history that was used to develop the model.
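As a representative example (using the compartments of the SIR model described below, with an assumed transmission rate β, recovery rate γ, and population size N), such a system of differential equations takes the form:

    dS/dt = −β S I / N
    dI/dt =  β S I / N − γ I
    dR/dt =  γ I

Each equation expresses the rate of change of one compartment in terms of the current sizes of the compartments.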

Sub-exponential growth

A common explanation for the growth of epidemics holds that 1 person infects 2, those 2 infect 4, and so on, with the number of infected doubling every generation. It is analogous to a game of tag where 1 person tags 2, those 2 tag 4 others who have never been tagged, and so on. As this game progresses it becomes increasingly frenetic as the tagged run past the previously tagged to hunt down those who have never been tagged. This model of an epidemic leads to a curve that grows exponentially until it crashes to zero once the entire population has been infected, with no herd immunity and no peak followed by gradual decline, as is seen in reality.
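In this naive doubling picture, the number of new infections in generation n is 2^n: 1, 2, 4, 8, and so on. After 10 generations roughly a thousand people (2^10 = 1,024) are newly infected in a single generation, and after about 33 generations the count would exceed the world's population, which is why the model must break down well before that point.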

Reproduction number

The basic reproduction number (denoted by R0) is a measure of how transmissible a disease is. It is the average number of people that a single infectious person will infect over the course of their infection. This quantity determines whether the infection will spread, die out, or remain constant: if R0 > 1, then each person on average infects more than one other person so the disease will spread; if R0 < 1, then each person infects fewer than one person on average so the disease will die out; and if R0 = 1, then each person will infect on average exactly one other person, so the disease will become endemic: it will move throughout the population but not increase or decrease.
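As a simple hypothetical illustration: with R0 = 2, a single case leads on average to 2, then 4, then 8 new cases in successive generations; with R0 = 0.5, an initial 100 cases lead on average to 50, then 25, and the outbreak fades out; with R0 = 1, each generation of cases merely replaces itself.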

Endemic steady state

An infectious disease is said to be endemic when it can be sustained in a population without the need for external inputs. This means that, on average, each infected person is infecting exactly one other person (any more and the number of people infected will grow and there will be an epidemic; any less and the disease will die out). In mathematical terms, that is:

    R0 × S = 1

The basic reproduction number (R0) of the disease, assuming everyone is susceptible, multiplied by the proportion of the population that is actually susceptible (S) must be one (since those who are not susceptible do not feature in the calculation, as they cannot contract the disease). Notice that this relation means that for a disease to be in the endemic steady state, the higher the basic reproduction number, the lower the proportion of the population susceptible must be, and vice versa. This expression has limitations concerning the susceptibility proportion: for example, R0 = 0.5 would require S = 2, which is impossible, since a proportion cannot exceed 1.

Assume the rectangular stationary age distribution and let also the ages of infection have the same distribution for each birth year. Let the average age of infection be A, for instance when individuals younger than A are susceptible and those older than A are immune (or infectious). Then it can be shown by an easy argument that the proportion of the population that is susceptible is given by:

    S = A / L

We reiterate that L is the age at which in this model every individual is assumed to die. But the mathematical definition of the endemic steady state can be rearranged to give:

    R0 = 1 / S

Therefore, due to the transitive property:

    R0 = L / A

This provides a simple way to estimate the parameter R0 using easily available data.

For a population with an exponential age distribution,

    R0 = 1 + L / A

This allows the basic reproduction number of a disease to be estimated, given A and L, for either type of population distribution.
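As a worked illustration with hypothetical numbers: if individuals live to L = 70 years and the average age of infection is A = 10 years, the rectangular-age formula gives R0 = L / A = 7, while the exponential-age formula gives R0 = 1 + L / A = 8.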

Compartmental models in epidemiology

Compartmental models may be formulated deterministically, as systems of differential equations, or stochastically, as Markov chains. A classic compartmental model in epidemiology is the SIR model, which may be used as a simple model for modelling epidemics. Multiple other types of compartmental models are also employed.

The SIR model

Diagram of the SIR model, showing the initial compartment sizes and the rates of infection and recovery.
Animation of the SIR model, showing the effect of reducing the rate of infection. If there is no medicine or vaccination available, it is only possible to reduce the infection rate (often referred to as "flattening the curve") by appropriate measures such as social distancing.

In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, S(t); infected, I(t); and recovered, R(t). The compartments used for this model consist of three classes (a minimal numerical sketch follows this list):

  • S(t) is used to represent the individuals not yet infected with the disease at time t, or those of the population susceptible to the disease.
  • I(t) denotes the individuals of the population who have been infected with the disease and are capable of spreading the disease to those in the susceptible category.
  • R(t) is the compartment used for the individuals of the population who have been infected and then removed from the disease, either due to immunization or due to death. Those in this category are not able to be infected again or to transmit the infection to others.
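As an illustration only (not part of the source article), the following minimal Python sketch integrates the SIR equations given under "Deterministic" above using a simple Euler step; the population size, rates, time step, and duration are arbitrary assumed values.

    # Minimal SIR sketch; N, beta, gamma, dt, and days are illustrative assumptions.
    N = 1000.0                 # total population
    beta, gamma = 0.3, 0.1     # assumed infection and recovery rates (per day)
    S, I, R = N - 1.0, 1.0, 0.0
    dt, days = 0.1, 160

    for step in range(int(days / dt)):
        new_inf = beta * S * I / N   # corresponds to dS/dt = -beta*S*I/N
        new_rec = gamma * I          # corresponds to dR/dt = gamma*I
        S -= new_inf * dt
        I += (new_inf - new_rec) * dt
        R += new_rec * dt

    print(f"final susceptible fraction: {S / N:.2f}")

With these assumed values R0 = beta / gamma = 3, so the simulated epidemic rises to a peak and then declines as the susceptible pool is depleted, rather than infecting everyone at once.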

Other compartmental models

There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease where the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR).
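As one hedged example of such a modification (reusing the assumed β, γ, and N from the sketches above): in the SIS model, recovered individuals return directly to the susceptible class, so the equations become

    dS/dt = −β S I / N + γ I
    dI/dt =  β S I / N − γ I

and, with no immunity ever acquired, the infection can settle at an endemic level instead of burning out.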

Infectious disease dynamics

Mathematical models need to integrate the increasing volume of data being generated on host-pathogen interactions. Many theoretical studies of the population dynamics, structure and evolution of infectious diseases of plants and animals, including humans, are concerned with this problem.

Research topics include:

Mathematics of mass vaccination

If the proportion of the population that is immune exceeds the herd immunity level for the disease, then the disease can no longer persist in the population and its transmission dies out. Thus, a disease can be eliminated from a population if enough individuals are immune due to either vaccination or recovery from prior exposure to the disease. Examples include the eradication of smallpox, with the last wild case in 1977, and the certification of the eradication of indigenous transmission of two of the three types of wild poliovirus (type 2 in 2015, after the last reported case in 1999, and type 3 in 2019, after the last reported case in 2012).

The herd immunity level will be denoted q. Recall that, for a stable state:

    R0 × S = 1

In turn,

    S = 1 / R0

which, by the age-of-infection relation derived above, is approximately A / L.

Graph of herd immunity threshold vs basic reproduction number with selected diseases

S will be (1 − q), since q is the proportion of the population that is immune and q + S must equal one (since in this simplified model, everyone is either susceptible or immune). Then:

    R0 × (1 − q) = 1
    1 − q = 1 / R0
    q = 1 − 1 / R0

Remember that this is the threshold level. Transmission will die out only if the proportion of immune individuals exceeds this level as a result of a mass vaccination programme.

We have just calculated the critical immunization threshold (denoted qc). It is the minimum proportion of the population that must be immunized at birth (or close to birth) in order for the infection to die out in the population:

    qc = 1 − 1 / R0
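As an illustrative example with hypothetical numbers: for a disease with R0 = 10, the critical immunization threshold is qc = 1 − 1/10 = 0.9, so roughly 90% of the population must be immune for transmission to die out.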

Because the fraction of the final size of the population p that is never infected can be defined as:

    1 − p = lim(t→∞) S(t) = e^(−R0 p)

Hence,

    p = 1 − e^(−R0 p)

Solving for R0, we obtain:

    R0 = −ln(1 − p) / p
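As a hypothetical worked example: if 80% of the population is eventually infected (p = 0.8), the relation gives R0 = −ln(0.2) / 0.8 ≈ 2.0.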

When mass vaccination cannot exceed the herd immunity threshold

If the vaccine used is insufficiently effective or the required coverage cannot be reached, the program may fail to exceed qc. Such a program will protect vaccinated individuals from disease, but may change the dynamics of transmission.

Suppose that a proportion of the population q (where q < qc) is immunised at birth against an infection with R0 > 1. The vaccination programme changes R0 to Rq, where

    Rq = R0 (1 − q)

This change occurs simply because there are now fewer susceptibles in the population who can be infected. Rq is simply R0 minus those that would normally be infected but that cannot be now since they are immune.

As a consequence of this lower basic reproduction number, the average age of infection A will also change to some new value Aq in those who have been left unvaccinated.

Recall the relation that linked R0, A and L. Assuming that life expectancy has not changed, now:

    Rq = L / Aq

But Rq = R0 (1 − q) and R0 = L / A, so:

    Aq = L / Rq = L / (R0 (1 − q)) = A / (1 − q)

Thus, the vaccination program may raise the average age of infection, and unvaccinated individuals will experience a reduced force of infection due to the presence of the vaccinated group. For a disease that leads to greater clinical severity in older populations, the unvaccinated proportion of the population may experience the disease relatively later in life than would occur in the absence of vaccine.
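As a hypothetical illustration: if half the population is vaccinated (q = 0.5), then Aq = A / (1 − 0.5) = 2A, so the average age of infection among those left unvaccinated doubles.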

When mass vaccination exceeds the herd immunity threshold

If a vaccination program causes the proportion of immune individuals in a population to exceed the critical threshold for a significant length of time, transmission of the infectious disease in that population will stop. If elimination occurs everywhere at the same time, then this can lead to eradication.

Elimination
Interruption of endemic transmission of an infectious disease, which occurs if each infected individual infects less than one other, is achieved by maintaining vaccination coverage to keep the proportion of immune individuals above the critical immunization threshold.
Eradication
Elimination everywhere at the same time such that the infectious agent dies out (for example, smallpox and rinderpest).

Reliability

Models have the advantage of examining multiple outcomes simultaneously, rather than making a single forecast. Models have shown varying degrees of reliability in past pandemics, such as SARS, SARS-CoV-2, swine flu, MERS and Ebola.

Politics of Europe

From Wikipedia, the free encyclopedia ...