
Friday, May 19, 2023

Castle Bravo

From Wikipedia, the free encyclopedia
 
Castle Bravo
Time-lapse of the Bravo detonation and subsequent mushroom cloud.
Country: United States
Test series: Operation Castle
Test site: Bikini Atoll
Date: March 1, 1954
Test type: Atmospheric
Yield: 15 megatonnes of TNT (63 PJ)

Castle Bravo was the first in a series of high-yield thermonuclear weapon design tests conducted by the United States at Bikini Atoll, Marshall Islands, as part of Operation Castle. Detonated on March 1, 1954, the device was the most powerful nuclear device detonated by the United States and its first lithium deuteride fueled thermonuclear weapon. Castle Bravo's yield was 15 megatonnes of TNT (63 PJ), 2.5 times the predicted 6 megatonnes of TNT (25 PJ), due to unforeseen additional reactions involving lithium-7, which led to the unexpected radioactive contamination of areas to the east of Bikini Atoll. At the time, it was the most powerful artificial explosion in history.

Fallout, the heaviest of which was in the form of pulverized surface coral from the detonation, fell on residents of Rongelap and Utirik atolls, while the more particulate and gaseous fallout spread around the world. The inhabitants of the islands were not evacuated until three days later and suffered radiation sickness. Twenty-three crew members of the Japanese fishing vessel Daigo Fukuryū Maru ("Lucky Dragon No. 5") were also contaminated by the heavy fallout, experiencing acute radiation syndrome. The blast incited a strong international reaction over atmospheric thermonuclear testing.

The Bravo Crater is located at 11°41′50″N 165°16′19″E. The remains of the Castle Bravo causeway are at 11°42′6″N 165°17′7″E.

Bomb design

SHRIMP
The SHRIMP device in its shot cab
Type: Teller–Ulam design thermonuclear weapon
Production history
Designer: Ben Diven (project engineer)
Designed: 24 February 1953 (GMT)
Manufacturer: Los Alamos National Laboratory
Unit cost: About $2,666,000 (1954 USD)
Produced: October 1953 (GMT)
No. built: 1
Variants: TX-21C, TX-26
Specifications
Mass: 10,659 kilograms (23,499 lb)
Length: 455.93 centimeters (179.50 in)
Diameter: 136.90 centimeters (53.90 in)
Filling: Lithium-6 deuteride
Filling weight: 400 kilograms (880 lb)
Blast yield: 15 megatons of TNT (63 PJ)

Primary system

The Castle Bravo device was housed in a cylinder that weighed 23,500 pounds (10.7 t) and measured 179.5 inches (456 cm) in length and 53.9 inches (137 cm) in diameter.

The primary device was a COBRA deuterium-tritium gas-boosted atomic bomb made by Los Alamos Scientific Laboratory, a very compact MK 7 device. This boosted fission device was tested in the Upshot–Knothole Climax shot and yielded 61 kilotonnes of TNT (260 TJ), within the expected yield range of 50–70 kt. It was considered successful enough that the planned Operation Domino test series, designed to explore the same question about a suitable primary for thermonuclear bombs, could be canceled. The implosion system was quite lightweight at 410 kg (900 lb), because it eliminated the aluminium pusher shell around the tamper and used the more compact ring lenses, a design feature shared with the Mark 5, 12, 13 and 18 designs. The explosive material of the inner charges in the MK 7 was changed to the more powerful Cyclotol 75/25, instead of the Composition B used in most stockpiled bombs at that time, as Cyclotol 75/25 was denser than Composition B and thus could generate the same amount of explosive force in a smaller volume (it provided 13 percent more compressive energy than Comp B). The composite uranium-plutonium COBRA core was levitated in a type-D pit. COBRA was Los Alamos' most recent product of design work on the "new principles" of the hollow core. A copper pit liner encased within the weapon-grade plutonium inner capsule prevented DT gas diffusion into the plutonium, a technique first tested in Greenhouse Item. The assembled module weighed 830 kg (1,840 lb), measuring 770 mm (30.5 in) across. It was located at the end of the device which, as seen in the declassified film, shows a small cone projecting from the ballistic case. This cone is the part of the paraboloid that was used to focus the radiation emanating from the primary into the secondary.

Deuterium and lithium

The device was called SHRIMP and had the same basic configuration (radiation implosion) as the Ivy Mike wet device, except with a different type of fusion fuel. SHRIMP used lithium deuteride (LiD), which is solid at room temperature; Ivy Mike used cryogenic liquid deuterium (D2), which required elaborate cooling equipment. Castle Bravo was the first test by the United States of a practical deliverable fusion bomb, even though the TX-21 as proof-tested in the Bravo event was not weaponized. The successful test rendered obsolete the cryogenic design used by Ivy Mike and its weaponized derivative, the JUGHEAD, which had been slated to be tested as the initial Castle Yankee. The SHRIMP also used a 7075 aluminium ballistic case 9.5 cm thick. Aluminium was used to drastically reduce the bomb's weight while still providing sufficient radiation confinement time to raise yield, a departure from the heavy stainless steel casing (304L or MIM 316L) employed by contemporary weapon projects.

The SHRIMP was, at least in theory and in many critical aspects, identical in geometry to the RUNT and RUNT II devices later proof-fired in Castle Romeo and Castle Yankee, respectively. On paper it was a scaled-down version of these devices, and its origins can be traced back to the spring and summer of 1953. The United States Air Force stressed the importance of lighter thermonuclear weapons for delivery by the B-47 Stratojet and B-58 Hustler. Los Alamos responded with a follow-up enriched version of the RUNT, scaled down to a 3/4-scale radiation-implosion system called the SHRIMP. The proposed weight reduction (from TX-17's 42,000 pounds (19,000 kg) to TX-21's 25,000 pounds (11,000 kg)) would provide the Air Force with a much more versatile deliverable gravity bomb. The final version tested in Castle used partially enriched lithium as its fusion fuel. Natural lithium is a mixture of lithium-6 and lithium-7 isotopes (with 7.5% of the former). The enriched lithium used in Bravo was nominally 40% lithium-6 (the remainder was the much more common lithium-7, which was incorrectly assumed to be inert). The fuel slugs varied in enrichment from 37 to 40% in 6Li, and the slugs with lower enrichment were positioned at the end of the fusion-fuel chamber, away from the primary. The lower levels of lithium enrichment in the fuel slugs, compared with the ALARM CLOCK and many later hydrogen weapons, were due to shortages in enriched lithium at that time, as the first of the Alloy Development Plants (ADP) only started production by the fall of 1953. The volume of LiD fuel used was approximately 60% of the volume of the fusion fuel filling used in the wet SAUSAGE and dry RUNT I and II devices, or about 500 liters (110 imp gal; 130 U.S. gal), corresponding to about 400 kg of lithium deuteride (as LiD has a density of 0.78201 g/cm3). The mixture cost about 4.54 USD/g at that time. The fusion burn efficiency was close to 25.1%, the highest attained efficiency of the first thermonuclear weapon generation. This efficiency is well within the figures given in a November 1956 statement, when a DOD official disclosed that thermonuclear devices with efficiencies ranging from 15% up to about 40% had been tested. Hans Bethe reportedly stated independently that the first generation of thermonuclear weapons had (fusion) efficiencies varying from as low as 15% up to about 25%.
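
As a quick sanity check, the "about 400 kg" of lithium deuteride and its rough cost follow directly from the volume, density, and per-gram price quoted above. The short Python sketch below is purely illustrative; every input value is taken from this article.

```python
# Back-of-the-envelope check of the lithium deuteride figures quoted above.
# All input values come from the text; the calculation itself is only illustrative.

FUEL_VOLUME_LITERS = 500            # approximate LiD fill volume
LID_DENSITY_G_PER_CM3 = 0.78201     # density of lithium deuteride
COST_USD_PER_GRAM = 4.54            # quoted price of the enriched mixture at the time

volume_cm3 = FUEL_VOLUME_LITERS * 1000          # 1 liter = 1000 cm3
mass_g = volume_cm3 * LID_DENSITY_G_PER_CM3     # ~391,000 g
mass_kg = mass_g / 1000                         # ~391 kg, i.e. "about 400 kg"
cost_usd = mass_g * COST_USD_PER_GRAM           # ~1.8 million (1950s USD)

print(f"LiD mass: about {mass_kg:.0f} kg")
print(f"Approximate fuel cost: ${cost_usd:,.0f}")
```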

The thermonuclear burn would produce (like the fission fuel in the primary) pulsations (generations) of high-energy neutrons, with an average energy of about 14 MeV, through Jetter's cycle.

Jetter's cycle


The Jetter cycle is a combination of reactions involving lithium, deuterium, and tritium. It consumes lithium-6 and deuterium and, in two reactions (with energies of 17.6 MeV and 4.8 MeV, mediated by a neutron and tritium), it produces two alpha particles.

The reaction would produce high-energy neutrons with 14 MeV, and its neutronicity was estimated at ≈0.885 (for a Lawson criterion of ≈1.5).
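
Written out explicitly, the two net reactions of the Jetter cycle described above are (reaction energies as quoted in this article):

```latex
\begin{aligned}
{}^{6}\mathrm{Li} + n &\rightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}\\
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} &\rightarrow {}^{4}\mathrm{He} + n + 17.6\ \mathrm{MeV}
\end{aligned}
```

Of the 17.6 MeV released in the second reaction, about 14.1 MeV is carried by the neutron, which is the ~14 MeV figure mentioned above; that neutron can in turn drive the first reaction, closing the cycle.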

Possible additional tritium for high-yield

As the SHRIMP, the RUNT I, and the ALARM CLOCK were to be high-yield shots required to assure the thermonuclear "emergency capability", their fusion fuel may have been spiked with additional tritium, in the form of 6LiT. All of the high-energy 14 MeV neutrons would cause fission in the uranium fusion tamper wrapped around the secondary and in the spark plug's plutonium rod. The ratio of deuterium (and tritium) atoms burned by 14 MeV neutrons spawned by the burning was expected to vary from 5:1 to 3:1, a standardization derived from Mike, while for these estimations the ratio of 3:1 was predominantly used in ISRINEX. The neutronicity of the fusion reactions harnessed by the fusion tamper would dramatically increase the yield of the device.

SHRIMP's indirect drive

Bravo SHRIMP device shot-cab.

Attached to the cylindrical ballistic case was a natural-uranium liner, the radiation case, that was about 2.5 cm thick. Its internal surface was lined with a copper liner that was about 240 μm thick, and made from 0.08-μm thick copper foil, to increase the overall albedo of the hohlraum. Copper possesses excellent reflecting properties, and its low cost, compared to other reflecting materials like gold, made it useful for mass-produced hydrogen weapons. Hohlraum albedo is a very important design parameter for any inertial-confinement configuration. A relatively high albedo permits higher interstage coupling due to the more favorable azimuthal and latitudinal angles of reflected radiation. The limiting value of the albedo for high-Z materials is reached when the thickness is 5–10 g/cm2, or 0.5–1.0 free paths. Thus, a hohlraum made of uranium much thicker than a free path of uranium would be needlessly heavy and costly. At the same time, the angular anisotropy increases as the atomic number of the scatterer material is reduced. Therefore, hohlraum liners require the use of copper (or, as in other devices, gold or aluminium), as the absorption probability increases with the value of Zeff of the scatterer. There are two sources of X-rays in the hohlraum: the primary's irradiance, which is dominant at the beginning and during the pulse rise; and the wall, which is important during the required radiation temperature's (Tr) plateau. The primary emits radiation in a manner similar to a flash bulb, and the secondary needs constant Tr to properly implode. This constant wall temperature is dictated by the ablation pressure requirements to drive compression, which lie on average at about 0.4 keV (out of a range of 0.2 to 2 keV), corresponding to several million kelvins. Wall temperature depended on the temperature of the primary's core which peaked at about 5.4 keV during boosted-fission. The final wall-temperature, which corresponds to energy of the wall-reradiated X-rays to the secondary's pusher, also drops due to losses from the hohlraum material itself. Natural uranium nails, lined to the top of their head with copper, attached the radiation case to the ballistic case. The nails were bolted in vertical arrays in a double-shear configuration to better distribute the shear loads. This method of attaching the radiation case to the ballistic case was first used successfully in the Ivy Mike device. The radiation case had a parabolic end, which housed the COBRA primary that was employed to create the conditions needed to start the fusion reaction, and its other end was a cylinder, as also seen in Bravo's declassified film.

The space between the uranium fusion tamper, and the case formed a radiation channel to conduct X-rays from the primary to the secondary assembly; the interstage. It is one of the most closely guarded secrets of a multistage thermonuclear weapon. Implosion of the secondary assembly is indirectly driven, and the techniques used in the interstage to smooth the spatial profile (i.e. reduce coherence and nonuniformities) of the primary's irradiance are of utmost importance. This was done with the introduction of the channel filler—an optical element used as a refractive medium, also encountered as random-phase plate in the ICF laser assemblies. This medium was a polystyrene plastic foam filling, extruded or impregnated with a low-molecular-weight hydrocarbon (possibly methane gas), which turned to a low-Z plasma from the X-rays, and along with channeling radiation it modulated the ablation front on the high-Z surfaces; it "tamped" the sputtering effect that would otherwise "choke" radiation from compressing the secondary. The reemitted X-rays from the radiation case must be deposited uniformly on the outer walls of the secondary's tamper and ablate it externally, driving the thermonuclear fuel capsule (increasing the density and temperature of the fusion fuel) to the point needed to sustain a thermonuclear reaction. (see Nuclear weapon design). This point is above the threshold where the fusion fuel would turn opaque to its emitting radiation, as determined from its Rosseland opacity, meaning that the generated energy balances the energy lost to fuel's vicinity (as radiation, particle losses). After all, for any hydrogen weapon system to work, this energy equilibrium must be maintained through the compression equilibrium between the fusion tamper and the spark plug (see below), hence their name equilibrium supers.

SHRIMP device delivered via truck awaiting installation.

Since the ablative process takes place on both walls of the radiation channel, a numerical estimate made with ISRINEX (a thermonuclear explosion simulation program) suggested that the uranium tamper also had a thickness of 2.5 cm, so that an equal pressure would be applied to both walls of the hohlraum. The rocket effect on the surface of the tamper's wall created by the ablation of its several superficial layers would force an equal mass of uranium that rested in the remainder of the tamper to speed inwards, thus imploding the thermonuclear core. At the same time, the rocket effect on the surface of the hohlraum would force the radiation case to speed outwards. The ballistic case would confine the exploding radiation case for as long as necessary. The conclusion that the tamper material was uranium enriched in 235U rests primarily on the final fission reaction fragments detected in the radiochemical analysis, which conclusively showed the presence of 237U, found by the Japanese in the shot debris. The first-generation thermonuclear weapons (MK-14, 16, 17, 21, 22 and 24) all used uranium tampers enriched to 37.5% 235U. The exception was the MK-15 ZOMBIE, which used a 93.5% enriched fission jacket.

The secondary assembly

Bravo secondary fireball
Like the Ivy Mike test of 1952, which used pipes filled with a partial pressure of helium, the 1954 Castle Bravo test was heavily instrumented with line-of-sight (LOS) pipes to better define and quantify the timing and energies of the x-rays and neutrons produced by these early thermonuclear devices. One outcome of this diagnostic work was this graphic depiction of the transport of energetic x-rays and neutrons through a vacuum line, some 2.3 km long, which heated solid matter at the "station 1200" blockhouse and thus generated a secondary fireball.

The secondary assembly was the actual SHRIMP component of the weapon. The weapon, like most contemporary thermonuclear weapons at that time, bore the same codename as the secondary component. The secondary was situated in the cylindrical end of the device, where its end was locked to the radiation case by a type of mortise and tenon joint. The hohlraum at its cylindrical end had an internal projection, which nested the secondary and gave better structural strength to support the secondary assembly, which contained most of the device's mass. One way to visualize this is that the joint looked much like a cap (the secondary) fitted into a cone (the projection of the radiation case). Any other major supporting structure would have interfered with radiation transfer from the primary to the secondary and introduced complex vibrational behavior. With this form of joint bearing most of the structural loads of the secondary, the latter and the hohlraum-ballistic case ensemble behaved as a single mass sharing common eigenmodes. To reduce excessive loading of the joint, especially during deployment of the weapon, the forward section of the secondary (i.e. the thermal blast/heat shield) was anchored to the radiation case by a set of thin wires, which also aligned the center line of the secondary with the primary, as they diminished bending and torsional loads on the secondary, another technique adopted from the SAUSAGE. The secondary assembly was an elongated truncated cone. From its front part (excluding the blast-heat shield) to its aft section it was steeply tapered. Tapering was used for two reasons. First, radiation intensity drops with the square of the distance, so radiation coupling is relatively poor in the aftermost sections of the secondary. This made the use of a higher mass of the then scarce fusion fuel in the rear end of the secondary assembly ineffective and the overall design wasteful. This was also the reason why the lower-enriched slugs of fusion fuel were placed far aft of the fuel capsule. Second, as the primary could not illuminate the whole surface of the hohlraum, in part due to the large axial length of the secondary, relatively small solid angles would be effective in compressing the secondary, leading to poor radiation focusing. By tapering the secondary, the hohlraum could be shaped as a cylinder in its aft section, obviating the need to machine the radiation case to a parabola at both ends. This optimized radiation focusing and enabled a streamlined production line, as it was cheaper, faster and easier to manufacture a radiation case with only one parabolic end. The tapering in this design was much steeper than in its cousins, the RUNT and ALARM CLOCK devices. SHRIMP's tapering and its mounting to the hohlraum apparently made the whole secondary assembly resemble the body of a shrimp. The secondary's length is defined by the two pairs of dark-colored diagnostic hot-spot pipes attached to the middle and left section of the device. These pipe sections were 8 5/8 inches (220 mm) in diameter and 40 feet (12 m) long and were butt-welded end-to-end to the ballistic case, leading out to the top of the shot cab. They would carry the initial reaction's light up to the array of 12 mirror towers built in an arc on the artificial 1-acre (0.40 ha) shot island created for the event. From those pipes, mirrors would reflect early bomb light from the bomb casing to a series of remote high-speed cameras, so that Los Alamos could determine both the simultaneity of the design (i.e. the time interval between the primary's firing and the secondary's ignition) and the thermonuclear burn rate in these two crucial areas of the secondary device.

The secondary assembly contained the lithium deuteride fusion fuel in a stainless-steel canister. Running down the center of the secondary was a 1.3 cm thick hollow cylindrical rod of plutonium, nested in the steel canister. This was the spark plug, a tritium-boosted fission device. It was assembled from plutonium rings and had a hollow volume inside that measured about 0.5 cm in diameter. This central volume was lined with copper, which, like the liner in the primary's fissile core, prevented DT gas diffusion into the plutonium. The spark plug's boosting charge contained about 4 grams of tritium and, imploding together with the secondary's compression, was timed to detonate by the first generations of neutrons that arrived from the primary. Timing was defined by the geometric characteristics of the spark plug (its uncompressed annular radius), which detonated when its criticality, or keff, exceeded 1. Its purpose was to compress the fusion material around it from the inside, applying pressure equal to that of the tamper. The compression factor of the fusion fuel and its adiabatic compression energy determined the minimal energy required for the spark plug to counteract the compression of the fusion fuel and the tamper's momentum. The spark plug weighed about 18 kg, and its initial firing yielded 0.6 kilotonnes of TNT (2.5 TJ). It would then be completely fissioned by the fusion neutrons, contributing about 330 kilotonnes of TNT (1,400 TJ) to the total yield. The energy required by the spark plug to counteract the compression of the fusion fuel was lower than the primary's yield because coupling of the primary's energy in the hohlraum is accompanied by losses due to the difference between the X-ray fireball and the hohlraum temperatures. The neutrons entered the assembly through a small hole in the ≈28 cm thick 238U blast-heat shield, which was positioned in front of the secondary assembly facing the primary. Similar to the tamper-fusion capsule assembly, the shield was shaped as a circular frustum, with its small diameter facing the primary's side and with its large diameter locked by a type of mortise and tenon joint to the rest of the secondary assembly. The shield-tamper ensemble can be visualized as a circular bifrustum. All parts of the tamper were similarly locked together to provide structural support and rigidity to the secondary assembly. Surrounding the fusion-fuel–spark-plug assembly was the uranium tamper, with a standoff air gap about 0.9 cm wide that served to increase the tamper's momentum, a levitation technique used as early as Operation Sandstone and described by physicist Ted Taylor as hammer-on-the-nail impact. Since there were also technical concerns that high-Z tamper material would mix rapidly with the relatively low-density fusion fuel, leading to unacceptably large radiation losses, the stand-off gap also acted as a buffer to mitigate the unavoidable and undesirable Taylor mixing.

Use of boron

Boron was used at many locations in this dry system; it has a high cross-section for the absorption of slow neutrons, which fission 235U and 239Pu, but a low cross-section for the absorption of fast neutrons, which fission 238U. Because of this characteristic, 10B deposited onto the surface of the secondary stage would prevent pre-detonation of the spark plug by stray neutrons from the primary without interfering with the subsequent fissioning of the 238U of the fusion tamper wrapping the secondary. Boron also played a role in increasing the compressive plasma pressure around the secondary by blocking the sputtering effect, leading to higher thermonuclear efficiency. Because the structural foam holding the secondary in place within the casing was doped with 10B, the secondary was compressed more highly, at a cost of some radiated neutrons. (The Castle Koon MORGENSTERN device did not use 10B in its design; as a result, the intense neutron flux from its RACER IV primary predetonated the spherical fission spark plug, which in turn "cooked" the fusion fuel, leading to an overall poor compression.) The plastic, because of its low molecular weight, is unable to implode the secondary's mass. Its plasma pressure is instead confined to the boiled-off sections of the tamper and the radiation case, so that material from neither of these two walls can enter the radiation channel, which has to remain open for the radiation transit.

Detonation

Bravo detonation and fireball.

The device was mounted in a "shot cab" on an artificial island built on a reef off Namu Island, in Bikini Atoll. A sizable array of diagnostic instruments was trained on it, including high-speed cameras sighted through an arc of mirror towers around the shot cab.

The detonation took place at 06:45 on March 1, 1954, local time (18:45 on February 28 GMT).

When Bravo was detonated, it formed a fireball almost 4.5 miles (7.2 km) across within one second. This fireball was visible on Kwajalein Atoll over 250 miles (400 km) away. The explosion left a crater 6,500 feet (2,000 m) in diameter and 250 feet (76 m) deep. Within about a minute the mushroom cloud reached a height of 47,000 feet (14,000 m) and a diameter of 7 miles (11 km); in less than 10 minutes it reached a height of 130,000 feet (40 km) and a diameter of 62 mi (100 km), expanding at more than 100 meters per second (360 km/h; 220 mph). As a result of the blast, the cloud contaminated more than 7,000 square miles (18,000 km2) of the surrounding Pacific Ocean, including some of the surrounding small islands such as Rongerik, Rongelap, and Utirik.

In terms of energy released (usually measured in TNT equivalence), Castle Bravo was about 1,000 times more powerful than each of the atomic bombs dropped on Hiroshima and Nagasaki during World War II. Castle Bravo is the sixth-largest nuclear explosion in history, exceeded by the Soviet Tsar Bomba test at approximately 50 Mt, Test 219 at 24.2 Mt, and three other Soviet tests of roughly 20 Mt each (Test 147, Test 173 and Test 174) conducted in 1962 at Novaya Zemlya.

High yield

Diagram of the tritium bonus provided by the lithium-7 isotope.

The yield of 15 megatons was roughly 2.5 times the 6 Mt predicted by its designers. The cause of the higher yield was an error made by designers of the device at Los Alamos National Laboratory. They considered only the lithium-6 isotope in the lithium-deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert. It was expected that the lithium-6 isotope would absorb a neutron from the fissioning plutonium and emit an alpha particle and a tritium nucleus, and that the tritium would then fuse with the deuterium and increase the yield in a predicted manner. Lithium-6 indeed reacted in this manner.

It was assumed that the lithium-7 would absorb one neutron, producing lithium-8, which decays (through beta decay into beryllium-8) to a pair of alpha particles on a timescale of nearly a second, vastly longer than the timescale of the nuclear detonation. However, when lithium-7 is bombarded with energetic neutrons of energy greater than 2.47 MeV, rather than simply absorbing a neutron it breaks up into an alpha particle, a tritium nucleus, and another neutron. As a result, much more tritium was produced than expected, and the extra tritium fused with deuterium, producing an extra neutron. The extra neutron produced by fusion and the extra neutron released directly by the lithium-7 breakup produced a much larger neutron flux. The result was greatly increased fissioning of the uranium tamper and increased yield.

Summarizing, the reactions involving lithium-6 result in some combination of the two following net reactions:

1n + 6Li → 3H + 4He + 4.783 MeV
6Li + 2H → 2 4He + 22.373 MeV

But when lithium-7 is present, one also has some amounts of the following two net reactions:

7Li + 1n → 3H + 4He + 1n
7Li + 2H → 2 4He + n + 15.123 MeV

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device's explosive output. The test used lithium with a high percentage of lithium-7 only because lithium-6 was then scarce and expensive; the later Castle Union test used almost pure lithium-6. Had sufficient lithium-6 been available, the usability of the common lithium-7 might not have been discovered.
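
The reaction energies discussed above can be checked against atomic mass data. The short Python sketch below is illustrative only; the mass values are standard published figures (rounded), not taken from this article.

```python
# Reproduce the lithium reaction energies discussed above from atomic mass data.
# Mass values (in unified atomic mass units) are standard published figures, rounded.

AMU_TO_MEV = 931.494  # energy equivalent of one atomic mass unit

masses = {
    "n":   1.008665,   # neutron
    "H3":  3.016049,   # tritium
    "He4": 4.002602,   # helium-4
    "Li6": 6.015123,   # lithium-6
    "Li7": 7.016003,   # lithium-7
}

def q_value(reactants, products):
    """Q = (mass of reactants - mass of products) * c^2, expressed in MeV."""
    dm = sum(masses[s] for s in reactants) - sum(masses[s] for s in products)
    return dm * AMU_TO_MEV

# n + 6Li -> 3H + 4He : exothermic, ~4.8 MeV (the 4.783 MeV quoted above)
print(f"{q_value(['n', 'Li6'], ['H3', 'He4']):+.2f} MeV")

# n + 7Li -> 3H + 4He + n : endothermic by ~2.47 MeV, which is why only
# neutrons above roughly that energy split lithium-7, as described above
print(f"{q_value(['n', 'Li7'], ['H3', 'He4', 'n']):+.2f} MeV")
```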

The unexpectedly high yield of the device severely damaged many of the permanent buildings on the control site island on the far side of the atoll. Little of the desired diagnostic data on the shot was collected; many instruments designed to transmit their data back before being destroyed by the blast were instead vaporized instantly, while most of the instruments that were expected to be recovered for data retrieval were destroyed by the blast.

In an additional unexpected event, albeit one of far less consequence, X-rays traveling through line-of-sight (LOS) pipes caused a small second fireball at Station 1200 with a yield of 1 kiloton of TNT (4.2 TJ).

High levels of fallout

The Bravo fallout plume spread dangerous levels of radioactivity over an area over 280 miles (450 km) long, including inhabited islands. The contour lines show the cumulative radiation exposure in roentgens (R) for the first 96 hours after the test. Although widely published, this fallout map is not perfectly correct.

The fission reactions of the natural uranium tamper were quite dirty, producing a large amount of fallout. That, combined with the larger than expected yield and a major wind shift, produced some very serious consequences for those in the fallout range. In the declassified film Operation Castle, the task force commander Major General Percy Clarkson pointed to a diagram indicating that the wind shift was still in the range of "acceptable fallout", although just barely.

The decision to carry out the Bravo test under the prevailing winds was made by Dr. Alvin C. Graves, the Scientific Director of Operation Castle. Graves had total authority over detonating the weapon, above that of the military commander of Operation Castle. Graves appears in the widely available film of the earlier 1952 test "Ivy Mike", which examines the last-minute fallout decisions. The narrator, the western actor Reed Hadley, is filmed aboard the control ship in that film, showing the final conference. Hadley points out that 20,000 people live in the potential area of the fallout. He asks the control panel scientist if the test can be aborted and is told "yes", but it would ruin all their preparations in setting up timed measuring instruments. In Mike, the fallout correctly landed north of the inhabited area but, in the 1954 Bravo test, there was a large amount of wind shear, and the wind that was blowing north the day before the test steadily veered towards the east.

Inhabited islands affected

Radioactive fallout was spread eastward onto the inhabited Rongelap and Rongerik atolls, which were evacuated 48 hours after the detonation. In 1957, the Atomic Energy Commission deemed Rongelap safe to return to and allowed 82 inhabitants to move back to the island. Upon their return, they discovered that their previous staple foods, including arrowroot, makmok, and fish, had either disappeared or caused various illnesses, and they were again removed. Ultimately, 15 islands and atolls were contaminated, and by 1963 Marshall Islands natives began to suffer from thyroid tumors, including 20 of the 29 children who were on Rongelap at the time of Bravo, and many birth defects were reported. The islanders received compensation from the U.S. government, relative to how much contamination they received, beginning in 1956; by 1995 the Nuclear Claims Tribunal reported that it had awarded $43.2 million, nearly its entire fund, to 1,196 claimants for 1,311 illnesses. A medical study, named Project 4.1, studied the effects of the fallout on the islanders.

Map showing points (X) where contaminated fish were caught or where the sea was found to be excessively radioactive. B=original "danger zone" around Bikini announced by the U.S. government. W="danger zone" extended later. xF=position of the Lucky Dragon fishing boat. NE, EC, and SE are equatorial currents.

Although the atmospheric fallout plume drifted eastward, once fallout landed in the water it was carried in several directions by ocean currents, including northwest and southwest.

Daigo Fukuryū Maru

A Japanese fishing boat, Daigo Fukuryū Maru (Lucky Dragon No.5), came in direct contact with the fallout, which caused many of the crew to grow ill due to radiation sickness. One member died of a secondary infection six months later after acute radiation exposure, and another had a child that was stillborn and deformed. This resulted in an international incident and reignited Japanese concerns about radiation, especially as Japanese citizens were once more adversely affected by US nuclear weapons. The official US position had been that the growth in the strength of atomic bombs was not accompanied by an equivalent growth in radioactivity released, and they denied that the crew was affected by radioactive fallout. Japanese scientists who had collected data from the fishing vessel disagreed with this.

Sir Joseph Rotblat, working at St Bartholomew's Hospital, London, demonstrated that the contamination caused by the fallout from the test was far greater than that stated officially. Rotblat deduced that the bomb had three stages and showed that the fission phase at the end of the explosion increased the amount of radioactivity a thousand-fold. Rotblat's paper was taken up by the media, and the outcry in Japan reached such a level that diplomatic relations became strained and the incident was even dubbed by some as a "second Hiroshima". Nevertheless, the Japanese and US governments quickly reached a political settlement, with the transfer to Japan of $15.3 million as compensation, with the surviving victims receiving about ¥2 million each ($5,550 in 1954, or about $56,000 in 2023). It was also agreed that the victims would not be given Hibakusha status.

The device's firing crew was located on Enyu island (variously spelled Eneu island), as depicted here

Bomb test personnel take shelter

Unanticipated fallout and the radiation it emitted also affected many of the vessels and personnel involved in the test, in some cases forcing them into bunkers for several hours. In contrast to the crew of the Lucky Dragon No. 5, who did not anticipate the hazard and therefore did not take shelter in the hold of their ship or refrain from inhaling the fallout dust, the firing crew that triggered the explosion sheltered safely in their firing station when they noticed that the wind was carrying the fallout in the unanticipated direction, towards the island of Enyu on Bikini Atoll where they were located. The firing crew sheltered in place ("buttoned up") for several hours until outside radiation decayed to safer levels; 25 roentgens per hour was recorded above the bunker.

US Navy ships affected

The US Navy tanker USS Patapsco was at Enewetak Atoll in late February 1954. Patapsco lacked a decontamination washdown system and was therefore ordered on 27 February to return to Pearl Harbor at the highest possible speed. A breakdown in her engine systems, namely a cracked cylinder liner, slowed Patapsco to one-third of her full speed, and when the Castle Bravo detonation took place she was still about 180 to 195 nautical miles east of Bikini. Patapsco was in the range of nuclear fallout, which began landing on the ship in the mid-afternoon of 2 March. By this time Patapsco was 565 to 586 nautical miles from ground zero. The fallout was at first thought to be harmless, and there were no radiation detectors aboard, so no decontamination measures were taken. Measurements taken after Patapsco had returned to Pearl Harbor suggested an exposure range of 0.18 to 0.62 R/hr. Total exposure estimates range from 3.3 R to 18 R of whole-body radiation, taking into account the effects of natural washdown from rain and variations between above- and below-deck exposure.

International incident

The fallout spread traces of radioactive material as far as Australia, India and Japan, and even the United States and parts of Europe. Though organized as a secret test, Castle Bravo quickly became an international incident, prompting calls for a ban on the atmospheric testing of thermonuclear devices.

A worldwide network of gummed film stations was established to monitor fallout following Operation Castle. Although meteorological data was poor, a general connection of tropospheric flow patterns with observed fallout was evident. There was a tendency for fallout/debris to remain in tropical latitudes, with incursions into the temperate regions associated with meteorological disturbances of the predominantly zonal flow. Outside of the tropics, the Southwestern United States received the greatest total fallout, about five times that received in Japan.

Stratospheric fallout particles of strontium-90 from the test were later captured with balloon-borne air filters used to sample the air at stratospheric altitudes. This research (Project Ashcan) was conducted to better understand the stratosphere and fallout times, and to arrive at more accurate meteorological models after hindcasting.

The fallout from Castle Bravo and other testing on the atoll also affected islanders who had previously inhabited the atoll and who returned there some time after the tests. This was due to the presence of radioactive caesium-137 in locally grown coconut milk. Plants and trees absorb potassium as part of the normal biological process, but will also readily absorb caesium if it is present, as it belongs to the same group of the periodic table and is therefore chemically very similar. Islanders consuming contaminated coconut milk were found to have abnormally high concentrations of caesium in their bodies and had to be evacuated from the atoll a second time.

The American magazine Consumer Reports warned of the contamination of milk with strontium-90.

Weapon history

The Soviet Union had previously used lithium deuteride in its Sloika design (known as "Joe-4" in the U.S.), tested in 1953. It was not a true hydrogen bomb; fusion provided only 15–20% of its yield, with most coming from boosted fission reactions. Its yield was 400 kilotons, and unlike a true thermonuclear device it could not be scaled up indefinitely.

The Teller–Ulam-based "Ivy Mike" device had a much greater yield of 10.4 Mt, but most of this also came from fission: 77% of the total came from fast fission of its natural-uranium tamper.

Castle Bravo had the greatest yield of any U.S. nuclear test, 15 Mt, though again, a substantial fraction came from fission. In the Teller–Ulam design, the fission and fusion stages were kept physically separate in a reflective cavity. The radiation from the exploding fission primary brought the fuel in the fusion secondary to critical density and pressure, setting off thermonuclear (fusion) chain reactions, which in turn set off a tertiary fissioning of the bomb's 238U fusion tamper and casing. Consequently, this type of bomb is also known as a "fission-fusion-fission" device. The Soviet researchers, led by Andrei Sakharov, developed and tested their first Teller–Ulam device in 1955.

The publication of the Bravo fallout analysis was a militarily sensitive issue, with Joseph Rotblat possibly deducing the staging nature of the Castle Bravo device by studying the ratio and presence of tell-tale isotopes, namely uranium-237, present in the fallout. This information could potentially reveal the means by which megaton-yield nuclear devices achieve their yield. Soviet scientist Andrei Sakharov hit upon what the Soviet Union regarded as "Sakharov's third idea" during the month after the Castle Bravo test, the final piece of the puzzle being the idea that the compression of the secondary can be accomplished by the primary's X-rays before fusion began.

The SHRIMP device design later evolved into the Mark 21 nuclear bomb, of which 275 units were produced, each weighing 17,600 pounds (8,000 kg) and measuring 12.5 feet (3.8 m) long and 58 inches (1.5 m) in diameter. This 18-megaton bomb was produced until July 1956. In 1957, it was converted into the Mark 36 nuclear bomb and entered production again.

Health impacts

Page 36 from the Project 4.1 final report, showing four photographs of exposed Marshallese. Faces blotted out for privacy reasons.

Following the test, the United States Department of Energy estimated that 253 inhabitants of the Marshall Islands were affected by the radioactive fallout. This single test exposed the surrounding populations to varying levels of radiation; the fallout levels attributed to the Castle Bravo test are the highest in history. Populations neighboring the test site were exposed to high levels of radiation, resulting in mild radiation sickness in many (nausea, vomiting, and diarrhea). Several weeks later, many people also began suffering from alopecia (hair loss) and skin lesions.

Exposure to the fallout has been linked to an increased likelihood of several types of cancer, such as leukemia and thyroid cancer. The relationship between iodine-131 levels and thyroid cancer is still being researched. There are also correlations between fallout exposure levels and thyroid diseases such as hypothyroidism. Populations of the Marshall Islands that received significant exposure to radionuclides have a much greater risk of developing cancer.

The female population of the Marshall Islands has a sixty times greater mortality rate from cervical cancer than a comparable mainland United States population. The islands' population also has a five times greater likelihood of mortality from breast or gastrointestinal cancer, and lung cancer mortality is three times that of the mainland population. The mortality rate of the male population of the Marshall Islands from lung cancer is four times greater than the overall United States rate, and oral cancer rates are ten times greater.

There is a presumed association between radiation levels and functioning of the female reproductive system.

In popular culture

The Castle Bravo detonation and the subsequent poisoning of the crew aboard Daigo Fukuryū Maru led to an increase in antinuclear protests in Japan. The test was compared to the bombings of Hiroshima and Nagasaki and frequently figured in the plots of Japanese media, especially in relation to Japan's most widely recognized media icon, Godzilla. In the 2019 film Godzilla: King of the Monsters, Castle Bravo is the call sign for Monarch Outpost 54, located in the Atlantic Ocean near Bermuda.

The Donald Fagen song "Memorabilia" from his 2012 album Sunken Condos mentions both the Castle Bravo and Ivy King nuclear tests.

In 2013, the Defense Threat Reduction Agency published Castle Bravo: Fifty Years of Legend and Lore. The report is a guide to off-site radiation exposures, a narrative history, and a guide to primary historical references concerning the Castle Bravo test. The report focuses on the circumstances that resulted in radioactive exposure of the uninhabited atolls, and makes no attempt to address in detail the effects on or around Bikini Atoll.

Desalination

From Wikipedia, the free encyclopedia
 
Reverse osmosis desalination plant in Barcelona, Spain

Desalination is a process that removes mineral components from saline water. More generally, desalination refers to the removal of salts and minerals from a target substance, as in soil desalination, which is an issue for agriculture. Saltwater (especially sea water) is desalinated to produce water suitable for human consumption or irrigation. The by-product of the desalination process is brine. Desalination is used on many seagoing ships and submarines. Most of the modern interest in desalination is focused on cost-effective provision of fresh water for human use. Along with recycled wastewater, it is one of the few rainfall-independent water resources.

Due to its energy consumption, desalinating sea water is generally more costly than drawing fresh water from surface water or groundwater, water recycling, and water conservation. However, these alternatives are not always available, and depletion of reserves is a critical problem worldwide. Desalination processes use either thermal methods (as in distillation) or membrane-based methods (as in reverse osmosis).

An estimate in 2018 found that "18,426 desalination plants are in operation in over 150 countries. They produce 87 million cubic meters of clean water each day and supply over 300 million people." The energy intensity has improved: It is now about 3 kWh/m3 (in 2018), down by a factor of 10 from 20-30 kWh/m3 in 1970. Nevertheless, desalination represented about 25% of the energy consumed by the water sector in 2016.
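
To put these 2018 figures in perspective, the implied electricity demand and per-person output follow from simple arithmetic. The Python sketch below is illustrative only and uses only the numbers quoted above.

```python
# Rough arithmetic on the 2018 desalination figures quoted above (illustrative only).

daily_output_m3 = 87e6            # cubic meters of fresh water produced per day
people_supplied = 300e6           # people supplied
energy_intensity_kwh_m3 = 3.0     # approximate 2018 energy intensity, kWh per m3

daily_energy_gwh = daily_output_m3 * energy_intensity_kwh_m3 / 1e6
liters_per_person_day = daily_output_m3 * 1000 / people_supplied

print(f"~{daily_energy_gwh:.0f} GWh of energy per day")             # ~261 GWh/day
print(f"~{liters_per_person_day:.0f} liters per supplied person")   # ~290 L/day
```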

Applications

Schematic of a multistage flash desalinator
A – steam in     B – seawater in     C – potable water out
D – brine out (waste)     E – condensate out     F – heat exchange    G – condensation collection (desalinated water)
H – brine heater
The pressure vessel acts as a countercurrent heat exchanger. A vacuum pump lowers the pressure in the vessel to facilitate the evaporation of the heated seawater (brine) which enters the vessel from the right side (darker shades indicate lower temperature). The steam condenses on the pipes on top of the vessel in which the fresh sea water moves from the left to the right.

There are now about 21,000 desalination plants in operation around the globe. The biggest ones are in the United Arab Emirates, Saudi Arabia, and Israel. The world's largest desalination plant is located in Saudi Arabia (Ras Al-Khair Power and Desalination Plant) with a capacity of 1,401,000 cubic meters per day.

Desalination is currently expensive compared to most alternative sources of water, and only a very small fraction of total human use is satisfied by desalination. It is usually only economically practical for high-valued uses (such as household and industrial uses) in arid areas. However, there is growth in desalination for agricultural use and highly populated areas such as Singapore or California. The most extensive use is in the Persian Gulf.

While noting costs are falling, and generally positive about the technology for affluent areas in proximity to oceans, a 2005 study argued, "Desalinated water may be a solution for some water-stress regions, but not for places that are poor, deep in the interior of a continent, or at high elevation. Unfortunately, that includes some of the places with the biggest water problems.", and, "Indeed, one needs to lift the water by 2000 m, or transport it over more than 1600 km to get transport costs equal to the desalination costs."

Thus, it may be more economical to transport fresh water from somewhere else than to desalinate it. In places far from the sea, like New Delhi, or in high places, like Mexico City, transport costs could match desalination costs. Desalinated water is also expensive in places that are both somewhat far from the sea and somewhat high, such as Riyadh and Harare. By contrast, in other locations transport costs are much lower, such as Beijing, Bangkok, Zaragoza, Phoenix, and, of course, coastal cities like Tripoli. After desalination at Jubail, Saudi Arabia, water is pumped 320 km inland to Riyadh. For coastal cities, desalination is increasingly viewed as a competitive choice.

In 2023, Israel was using desalination to replenish the Sea of Galilee's water supply.

Not everyone is convinced that desalination is or will be economically viable or environmentally sustainable for the foreseeable future. Debbie Cook wrote in 2011 that desalination plants can be energy intensive and costly. Therefore, water-stressed regions might do better to focus on conservation or other water supply solutions than invest in desalination plants.

Technologies

Desalination is an artificial process by which saline water (generally sea water) is converted to fresh water. The most common desalination processes are distillation and reverse osmosis.

There are several methods, each with its own advantages and disadvantages. They can be divided into membrane-based (e.g., reverse osmosis) and thermal-based (e.g., multistage flash distillation) methods. The traditional process of desalination is distillation, i.e., boiling and re-condensing seawater to leave salt and impurities behind.

There are currently two technologies with a large majority of the world's desalination capacity: multi-stage flash distillation and reverse osmosis.

Distillation

Solar distillation

Solar distillation mimics the natural water cycle, in which the sun heats sea water enough for evaporation to occur. After evaporation, the water vapor is condensed onto a cool surface. There are two types of solar desalination. The first type uses photovoltaic cells to convert solar energy to electrical energy to power desalination. The second type converts solar energy to heat, and is known as solar thermal powered desalination.

Natural evaporation

Water can evaporate through several other physical effects besides solar irradiation. These effects have been combined in a multidisciplinary desalination methodology in the IBTS Greenhouse. The IBTS is an industrial desalination (power) plant on one side and a greenhouse operating with the natural water cycle (scaled down 1:10) on the other. The various processes of evaporation and condensation are hosted in low-tech utilities, partly underground, and in the architectural shape of the building itself. This integrated biotectural system is most suitable for large-scale desert greening, as it has a footprint of about one square kilometre for water distillation and the same for landscape transformation in desert greening, that is, the regeneration of natural fresh-water cycles.


Vacuum distillation

In vacuum distillation, atmospheric pressure is reduced, thus lowering the temperature required to evaporate the water. Liquids boil when their vapor pressure equals the ambient pressure, and vapor pressure increases with temperature. Effectively, liquids boil at a lower temperature when the ambient pressure is below normal atmospheric pressure. Because of the reduced pressure, low-temperature "waste" heat from electrical power generation or industrial processes can be employed.
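
The effect can be illustrated with the Antoine equation for water's vapor pressure. The sketch below is a rough illustration, not a description of any particular plant; the Antoine constants are standard published values for water over roughly 1–100 °C.

```python
import math

# Boiling point of water versus ambient pressure, via the Antoine equation:
#   log10(P [mmHg]) = A - B / (C + T [degC])
# Constants are standard published values for water (valid roughly 1-100 degC).
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_mmhg):
    """Temperature at which water's vapor pressure equals the given pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

for p_mmhg in (760, 300, 100, 50):   # from atmospheric pressure down to a rough vacuum
    print(f"{p_mmhg:4d} mmHg -> boils at about {boiling_point_c(p_mmhg):5.1f} degC")
```

At 100 mmHg, for example, water boils at roughly 52 °C, which is why low-grade waste heat becomes usable once the pressure is reduced.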

Multi-stage flash distillation

Water is evaporated and separated from sea water through multi-stage flash distillation, which is a series of flash evaporations. Each subsequent flash process utilizes energy released from the condensation of the water vapor from the previous step.

Multiple-effect distillation

Multiple-effect distillation (MED) works through a series of steps called "effects". Incoming water is sprayed onto pipes, which are then heated to generate steam. The steam is then used to heat the next batch of incoming sea water. To increase efficiency, the steam used to heat the sea water can be taken from nearby power plants. Although this method is the most thermodynamically efficient among methods powered by heat, a few limitations exist, such as a maximum temperature and a maximum number of effects.

Vapor-compression distillation

Vapor-compression evaporation involves using either a mechanical compressor or a jet stream to compress the vapor present above the liquid. The compressed vapor is then used to provide the heat needed for the evaporation of the rest of the sea water. Since this system only requires power, it is more cost effective if kept at a small scale.

Wave-powered desalination

Wave powered desalination systems generally convert mechanical wave motion directly to hydraulic power for reverse osmosis. Such systems aim to maximize efficiency and reduce costs by avoiding conversion to electricity, minimizing excess pressurization above the osmotic pressure, and innovating on hydraulic and wave power components. One such example is CETO, a wave power technology that desalinates seawater using submerged buoys. Wave-powered desalination plants began operating on Garden Island in Western Australia in 2013 and in Perth in 2015.

Membrane distillation

Membrane distillation uses a temperature difference across a membrane to evaporate vapor from a brine solution and condense pure water on the colder side. The design of the membrane can have a significant effect on efficiency and durability. One study found that a membrane created via co-axial electrospinning of PVDF-HFP and silica aerogel was able to reject 99.99% of salt after 30 days of continuous use.

Osmosis

Reverse osmosis

Schematic representation of a typical desalination plant using reverse osmosis. Hybrid desalination plants using liquid nitrogen freeze thaw in conjunction with reverse osmosis have been found to improve efficiency.

The leading process for desalination in terms of installed capacity and yearly growth is reverse osmosis (RO). The RO membrane processes use semipermeable membranes and applied pressure (on the membrane feed side) to preferentially induce water permeation through the membrane while rejecting salts. Reverse osmosis plant membrane systems typically use less energy than thermal desalination processes. Energy cost in desalination processes varies considerably depending on water salinity, plant size and process type. At present the cost of seawater desalination, for example, is higher than traditional water sources, but it is expected that costs will continue to decrease with technology improvements that include, but are not limited to, improved efficiency, reduction in plant footprint, improvements to plant operation and optimization, more effective feed pretreatment, and lower cost energy sources.
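
The applied feed pressure has to exceed the seawater's osmotic pressure, which can be estimated with the van 't Hoff relation. The sketch below is a back-of-the-envelope estimate; approximating seawater as about 0.6 mol/L of fully dissociated NaCl is an assumption for illustration, not a figure from this article.

```python
# Back-of-the-envelope estimate of seawater osmotic pressure via van 't Hoff:
#   pi = i * c * R * T
# Seawater is approximated as ~0.6 mol/L of fully dissociated NaCl (assumption).

R_L_BAR = 0.083145   # gas constant, L*bar/(mol*K)
T_KELVIN = 298.0     # about 25 degC
c_molar = 0.6        # approximate NaCl-equivalent concentration of seawater, mol/L
i_vant_hoff = 2      # each NaCl unit dissociates into two ions

osmotic_pressure_bar = i_vant_hoff * c_molar * R_L_BAR * T_KELVIN
print(f"Estimated osmotic pressure: about {osmotic_pressure_bar:.0f} bar")  # ~30 bar
```

This is why seawater RO plants operate with feed pressures well above roughly 30 bar, with the excess over the osmotic pressure driving the permeation rate.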

Reverse osmosis uses a thin-film composite membrane, which comprises an ultra-thin, aromatic polyamide thin-film. This polyamide film gives the membrane its transport properties, whereas the remainder of the thin-film composite membrane provides mechanical support. The polyamide film is a dense, void-free polymer with a high surface area, allowing for its high water permeability. A recent study has found that the water permeability is primarily governed by the internal nanoscale mass distribution of the polyamide active layer.

The reverse osmosis process requires maintenance. Various factors interfere with efficiency: ionic contamination (calcium, magnesium, etc.); dissolved organic carbon (DOC); bacteria; viruses; colloids and insoluble particulates; biofouling and scaling. In extreme cases, the RO membranes are destroyed. To mitigate damage, various pretreatment stages are introduced. Anti-scaling inhibitors include acids and other agents such as the organic polymers polyacrylamide and polymaleic acid, phosphonates, and polyphosphates. Inhibitors for fouling are biocides (as oxidants against bacteria and viruses) such as chlorine, ozone, and sodium or calcium hypochlorite. At regular intervals, depending on membrane contamination and fluctuating seawater conditions, or when prompted by monitoring processes, the membranes need to be cleaned, a procedure known as emergency or shock flushing. Flushing is done with inhibitors in a fresh water solution, and the system must go offline. This procedure is environmentally risky, since contaminated water is diverted into the ocean without treatment. Sensitive marine habitats can be irreversibly damaged.

Off-grid solar-powered desalination units use solar energy to fill a buffer tank on a hill with seawater. The reverse osmosis process receives its pressurized seawater feed during non-sunlight hours by gravity, resulting in sustainable drinking water production without the need for fossil fuels, an electricity grid, or batteries. Nanotubes are also used for the same function (i.e., reverse osmosis).

Forward osmosis

Forward osmosis uses a semi-permeable membrane to effect separation of water from dissolved solutes. The driving force for this separation is an osmotic pressure gradient, such as a "draw" solution of high concentration.

Freeze–thaw

Freeze–thaw desalination (or freezing desalination) uses freezing to remove fresh water from salt water. Salt water is sprayed during freezing conditions into a pad where an ice-pile builds up. When seasonal conditions warm, naturally desalinated melt water is recovered. This technique relies on extended periods of natural sub-freezing conditions.

A different freeze–thaw method, not weather dependent and invented by Alexander Zarchin, freezes seawater in a vacuum. Under vacuum conditions the desalinated ice is melted and diverted for collection, and the salt is collected separately.

Electrodialysis

Electrodialysis utilizes electric potential to move salts through pairs of charged membranes, which trap the salt in alternating channels. Several variants of electrodialysis exist, such as conventional electrodialysis and electrodialysis reversal.

Electrodialysis can simultaneously remove salt and carbonic acid from seawater. Preliminary estimates suggest that the cost of such carbon removal can be paid for in large part if not entirely from the sale of the desalinated water produced as a byproduct.

Microbial desalination

Microbial desalination cells are biological electrochemical systems that use electro-active bacteria to power desalination of water in situ, exploiting the natural anode and cathode gradient of the electro-active bacteria and thus creating an internal supercapacitor.

Design aspects

Energy consumption

The energy consumption of the desalination process depends on the salinity of the water. Brackish water desalination requires less energy than seawater desalination.

The energy intensity of seawater desalination has improved: it was about 3 kWh/m3 as of 2018, down by nearly a factor of 10 from the 20–30 kWh/m3 typical in 1970. This is similar to the energy consumption of other fresh water supplies transported over large distances, but much higher than local fresh water supplies that use 0.2 kWh/m3 or less.

A minimum energy consumption for seawater desalination of around 1 kWh/m3 has been determined, excluding prefiltering and intake/outfall pumping. Under 2 kWh/m3 has been achieved with reverse osmosis membrane technology, down from about 16 kWh/m3 in the 1970s, leaving limited scope for further energy reductions relative to the theoretical minimum.
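
As a rough illustration of where that ~1 kWh/m3 figure comes from, the sketch below estimates the minimum separation work from the feed's osmotic pressure, assuming an ideal NaCl solution and the zero-recovery limit (so it slightly understates practical minima at finite recovery).

```python
# Rough estimate of the thermodynamic minimum energy for seawater desalination.
# In the zero-recovery limit, the minimum work per unit of fresh water equals
# the feed's osmotic pressure; assumes an ideal NaCl solution (van 't Hoff).

R = 8.314                   # J/(mol*K)
T = 298.0                   # K
MOLAR_MASS_NACL = 58.44e-3  # kg/mol

def min_energy_kwh_per_m3(salinity_kg_per_m3):
    molarity = salinity_kg_per_m3 / MOLAR_MASS_NACL   # mol/m^3
    osmotic_pressure_pa = 2 * molarity * R * T        # two ions per NaCl
    return osmotic_pressure_pa / 3.6e6                # Pa = J/m^3 -> kWh/m^3

print(f"Seawater (35 kg/m^3): {min_energy_kwh_per_m3(35):.2f} kWh/m^3")
print(f"Brackish (5 kg/m^3):  {min_energy_kwh_per_m3(5):.2f} kWh/m^3")
```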

Supplying all US domestic water by desalination would increase domestic energy consumption by around 10%, about the amount of energy used by domestic refrigerators. Domestic consumption is a relatively small fraction of the total water usage.

Energy consumption of seawater desalination methods (kWh/m3)

Energy type                              | MSF       | MED     | MVC  | RO
Electrical energy                        | 4–6       | 1.5–2.5 | 7–12 | 3–5.5
Thermal energy                           | 50–110    | 60–110  | none | none
Electrical equivalent of thermal energy  | 9.5–19.5  | 5–8.5   | none | none
Total equivalent electrical energy       | 13.5–25.5 | 6.5–11  | 7–12 | 3–5.5

MSF: multi-stage flash; MED: multi-effect distillation; MVC: mechanical vapor compression; RO: reverse osmosis.

Note: "Electrical equivalent" refers to the amount of electrical energy that could be generated using a given quantity of thermal energy and appropriate turbine generator. These calculations do not include the energy required to construct or refurbish items consumed in the process.

Given the energy intensive nature of desalination, with associated economic and environmental costs, desalination is generally considered a last resort after water conservation. But this is changing as prices continue to fall.

Cogeneration

Cogeneration is the production of usable heat and electricity from a single process. Cogeneration can provide usable heat for desalination in an integrated, or "dual-purpose", facility where a power plant provides the energy for desalination. Alternatively, the facility's energy production may be dedicated to the production of potable water (a stand-alone facility), or excess energy may be produced and fed into the energy grid. Cogeneration takes various forms, and theoretically any form of energy production could be used. However, the majority of current and planned cogeneration desalination plants use either fossil fuels or nuclear power as their source of energy. Most plants are located in the Middle East or North Africa, where petroleum resources are used to offset limited water resources. The advantage of dual-purpose facilities is that they can be more efficient in energy consumption, making desalination more viable.

The Shevchenko BN-350, a former nuclear-heated desalination unit in Kazakhstan

The current trend in dual-purpose facilities is hybrid configurations, in which the permeate from reverse osmosis desalination is mixed with distillate from thermal desalination. Basically, two or more desalination processes are combined along with power production. Such facilities have been implemented in Saudi Arabia at Jeddah and Yanbu.

A typical supercarrier in the US military is capable of using nuclear power to desalinate 1,500,000 L (330,000 imp gal; 400,000 US gal) of water per day.

Alternatives to desalination

Increased water conservation and efficiency remain the most cost-effective approaches in areas with substantial potential to improve water use practices. Wastewater reclamation provides multiple benefits over desalination of saline water, although it typically uses desalination membranes. Urban runoff and storm water capture also provide benefits in treating, restoring and recharging groundwater.

A proposed alternative to desalination in the American Southwest is the commercial importation of bulk water from water-rich areas either by oil tankers converted to water carriers, or pipelines. The idea is politically unpopular in Canada, where governments imposed trade barriers to bulk water exports as a result of a North American Free Trade Agreement (NAFTA) claim.

The California Department of Water Resources and the California State Water Resources Control Board submitted a report to the state legislature recommending that urban water suppliers achieve an indoor water use efficiency standard of 55 US gallons (210 litres) per capita per day by 2023, declining to 47 US gallons (180 litres) per day by 2025, and 42 US gallons (160 litres) by 2030 and beyond.

Costs

Factors that determine the costs for desalination include capacity and type of facility, location, feed water, labor, energy, financing and concentrate disposal. Costs of desalinating sea water (infrastructure, energy, and maintenance) are generally higher than fresh water from rivers or groundwater, water recycling, and water conservation, but alternatives are not always available. Desalination costs in 2013 ranged from US$0.45 to US$1.00/m3. More than half of the cost comes directly from energy cost, and since energy prices are very volatile, actual costs can vary substantially.

The cost of untreated fresh water in the developing world can reach US$5/cubic metre.

Cost Comparison of Desalination Methods

Method                                        | Cost (US$/liter)
Passive solar (30.42% energy efficient)       | 0.034
Passive solar (improved single-slope, India)  | 0.024
Passive solar (improved double-slope, India)  | 0.007
Multi-stage flash (MSF)                       | < 0.001
Reverse osmosis (concentrated solar power)    | 0.0008
Reverse osmosis (photovoltaic power)          | 0.000825
 
Average water consumption and cost of supply by sea water desalination at US$1 per cubic metre (±50%)

Area                    | Consumption (litres/person/day) | Desalinated water cost (US$/person/day)
US                      | 378 | 0.38
Europe                  | 189 | 0.19
Africa                  |  57 | 0.06
UN recommended minimum  |  49 | 0.05
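
The per-person figures in the table follow from simple arithmetic: litres per day converted to cubic metres and multiplied by the assumed US$1 per cubic metre, as in the sketch below.

```python
# Sketch of the arithmetic behind the table above: daily cost per person is
# consumption (litres/day) converted to cubic metres and multiplied by the
# assumed price of US$1 per cubic metre (quoted as accurate to +/-50%).

PRICE_USD_PER_M3 = 1.00  # assumed unit cost of desalinated seawater

def daily_cost_usd(consumption_litres_per_day):
    return consumption_litres_per_day / 1000.0 * PRICE_USD_PER_M3

for area, litres in [("US", 378), ("Europe", 189),
                     ("Africa", 57), ("UN recommended minimum", 49)]:
    print(f"{area:24s} {litres:4d} L/day -> US${daily_cost_usd(litres):.2f}/day")
```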

Desalination stills control pressure, temperature and brine concentrations to optimize efficiency. Nuclear-powered desalination might be economical on a large scale.

In 2014, the Israeli facilities of Hadera, Palmahim, Ashkelon, and Sorek were desalinizing water for less than US$0.40 per cubic meter. As of 2006, Singapore was desalinating water for US$0.49 per cubic meter.

Environmental concerns

Intake

In the United States, cooling water intake structures are regulated by the Environmental Protection Agency (EPA). These structures can have the same impacts on the environment as desalination facility intakes. According to the EPA, water intake structures cause adverse environmental impact by drawing fish and shellfish or their eggs into an industrial system, where the organisms may be killed or injured by heat, physical stress, or chemicals. Larger organisms may be killed or injured when they become trapped against screens at the front of an intake structure. Alternative intake types that mitigate these impacts, such as beach wells, require more energy and incur higher costs.

The Kwinana Desalination Plant opened in the Australian city of Perth, in 2007. Water there and at Queensland's Gold Coast Desalination Plant and Sydney's Kurnell Desalination Plant is withdrawn at 0.1 m/s (0.33 ft/s), which is slow enough to let fish escape. The plant provides nearly 140,000 m3 (4,900,000 cu ft) of clean water per day.

Outflow

Desalination processes produce large quantities of brine, possibly at above-ambient temperature, which contains residues of pretreatment and cleaning chemicals, their reaction byproducts and heavy metals from corrosion (especially in thermal-based plants). Chemical pretreatment and cleaning are a necessity in most desalination plants; they typically address biofouling, scaling, foaming and corrosion in thermal plants, and biofouling, suspended solids and scale deposits in membrane plants.

To limit the environmental impact of returning the brine to the ocean, it can be diluted with another stream of water entering the ocean, such as the outfall of a wastewater treatment or power plant. With medium to large power and desalination plants, the power plant's cooling water flow is likely to be several times larger than that of the desalination plant, reducing the salinity of the combined discharge. Another method is to mix the brine via a diffuser in a mixing zone. For example, once a pipeline containing the brine reaches the sea floor, it can split into many branches, each releasing brine gradually through small holes along its length. Mixing can be combined with power plant or wastewater plant dilution. Furthermore, zero liquid discharge systems can be adopted to treat brine before disposal.
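
A minimal mass-balance sketch of such dilution is shown below; the flow rates and salinities are illustrative assumptions rather than data from any particular plant.

```python
# Minimal mass-balance sketch of brine dilution with a power plant's cooling
# water stream, as described above. Flow rates and salinities are illustrative
# assumptions, not data from a specific plant.

def blended_salinity(q_brine, s_brine, q_cooling, s_cooling):
    """Salinity of the combined discharge (flows in m^3/s, salinities in g/L)."""
    return (q_brine * s_brine + q_cooling * s_cooling) / (q_brine + q_cooling)

# Example: 1 m^3/s of RO brine at ~70 g/L blended with 5 m^3/s of once-through
# cooling water at ambient seawater salinity (~35 g/L).
print(f"Blended discharge salinity: {blended_salinity(1, 70, 5, 35):.1f} g/L")
```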

Another possibility is making the desalination plant movable, so that the brine it continually produces does not build up in a single location. Some such movable (ship-connected) desalination plants have been constructed.

Brine is denser than seawater and therefore sinks to the ocean bottom, where it can damage the ecosystem. Brine plumes have been observed to dilute over time to concentrations with little to no effect on the surrounding environment. However, studies have shown that such dilution measurements can be misleading because of the depth at which they are made: if dilution is observed during the summer, a seasonal thermocline may have prevented the concentrated brine from sinking to the sea floor, shifting the impact from the sea-floor ecosystem to the waters above it. Brine dispersal from desalination plants has been observed to travel several kilometres, so it can potentially harm ecosystems far from the plants. Careful reintroduction with appropriate measures and environmental studies can minimize this problem.

Other issues

Due to the nature of the process, plants must be sited on approximately 25 acres of land on or near the shoreline. For a plant built inland, pipes have to be laid into the ground to allow for intake and outfall. Once the pipes are in the ground, however, they risk leaking into and contaminating nearby aquifers. Aside from environmental risks, the noise generated by certain types of desalination plants can be loud.

Health aspects

Iodine deficiency

Desalination removes iodine from water and could increase the risk of iodine deficiency disorders. Israeli researchers claimed a possible link between seawater desalination and iodine deficiency, finding iodine deficits among adults exposed to iodine-poor water as an increasing proportion of their area's drinking water came from seawater reverse osmosis (SWRO). They later found probable iodine deficiency disorders in a population reliant on desalinated seawater. The same researchers found a high burden of iodine deficiency in the general population of Israel: 62% of school-age children and 85% of pregnant women fall below the WHO's adequacy range. They pointed to the national reliance on iodine-depleted desalinated water, the absence of a universal salt iodization program, and reports of increased use of thyroid medication in Israel as possible reasons for the population's low iodine intake. In the year the survey was conducted, water produced by desalination plants constituted about 50% of the fresh water supplied for all needs and about 80% of the water supplied for domestic and industrial needs in Israel.

Experimental techniques

Other desalination techniques include:

Waste heat

Thermally-driven desalination technologies are frequently suggested for use with low-temperature waste heat sources, as the low temperatures are not useful for process heat needed in many industrial processes, but are ideal for the lower temperatures needed for desalination. In fact, such pairing with waste heat can even improve the electrical process: diesel generators commonly provide electricity in remote areas, and about 40–50% of the energy output is low-grade heat that leaves the engine via the exhaust. Connecting a thermal desalination technology such as a membrane distillation system to the diesel engine exhaust repurposes this low-grade heat for desalination. The system actively cools the diesel generator, improving its efficiency and increasing its electricity output. This results in an energy-neutral desalination solution. An example plant was commissioned by Dutch company Aquaver in March 2014 for Gulhi, Maldives.
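
As a hedged, back-of-the-envelope illustration of this pairing, the sketch below estimates water production from recovered exhaust heat; the heat-recovery fraction and the membrane-distillation thermal demand are assumptions chosen only to show the form of the calculation, not figures for any real system.

```python
# Back-of-the-envelope sketch of pairing membrane distillation (MD) with diesel
# exhaust heat, as described above. All numbers are labeled assumptions for
# illustration; real values depend heavily on the engine and the MD system.

def md_water_from_waste_heat(genset_fuel_power_kw,
                             exhaust_heat_fraction=0.45,        # "40-50%" from the text
                             heat_recovery_fraction=0.5,        # assumed recoverable share
                             md_specific_heat_kwh_per_m3=200):  # assumed MD thermal demand
    """Estimated fresh water production (m^3/day) from recovered exhaust heat."""
    recovered_heat_kw = (genset_fuel_power_kw * exhaust_heat_fraction
                         * heat_recovery_fraction)
    recovered_heat_kwh_per_day = recovered_heat_kw * 24
    return recovered_heat_kwh_per_day / md_specific_heat_kwh_per_m3

# A generator burning fuel at 1 MW (thermal) under the assumptions above:
print(f"~{md_water_from_waste_heat(1000):.0f} m^3/day of desalinated water")
```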

Low-temperature thermal

Originally stemming from ocean thermal energy conversion research, low-temperature thermal desalination (LTTD) takes advantage of the fact that water boils at low pressure, even at ambient temperature. The system uses pumps to create a low-pressure, low-temperature environment in which water boils, driven by a temperature gradient of 8–10 °C (14–18 °F) between two volumes of water. Cool ocean water is supplied from depths of up to 600 m (2,000 ft). This water is pumped through coils to condense the water vapor. The resulting condensate is purified water. LTTD may take advantage of the temperature gradient available at power plants, where large quantities of warm wastewater are discharged, reducing the energy input needed to create a temperature gradient.
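
The sketch below illustrates why this works: the Antoine equation (with standard coefficients for water) shows that at the temperatures involved, water boils once the pressure is reduced to a few kilopascals.

```python
# Sketch showing why LTTD works: under a sufficient vacuum, water boils at
# ambient temperature. Saturation pressure from the Antoine equation with
# standard coefficients for water, valid roughly between 1 and 100 degrees C.

def saturation_pressure_kpa(temp_c):
    """Water vapor (saturation) pressure in kPa via the Antoine equation."""
    A, B, C = 8.07131, 1730.63, 233.426  # Antoine constants, P in mmHg, T in C
    p_mmhg = 10 ** (A - B / (C + temp_c))
    return p_mmhg * 0.133322  # mmHg -> kPa

for t in (10, 18, 28):  # deep-sea intake temperature and warmer surface water
    print(f"{t:2d} C: water boils below {saturation_pressure_kpa(t):.2f} kPa "
          f"(atmospheric pressure is ~101.3 kPa)")
```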

Experiments were conducted in the US and Japan to test the approach. In Japan, a spray-flash evaporation system was tested by Saga University. In Hawaii, the National Energy Laboratory tested an open-cycle OTEC plant with fresh water and power production using a temperature difference of 20 °C (36 °F) between surface water and water at a depth of around 500 m (1,600 ft). LTTD was studied by India's National Institute of Ocean Technology (NIOT) in 2004. Their first LTTD plant opened in 2005 at Kavaratti in the Lakshadweep islands. The plant's capacity is 100,000 L (22,000 imp gal; 26,000 US gal)/day, at a capital cost of INR 50 million (€922,000). The plant uses deep water at a temperature of 10 to 12 °C (50 to 54 °F). In 2007, NIOT opened an experimental, floating LTTD plant off the coast of Chennai, with a capacity of 1,000,000 L (220,000 imp gal; 260,000 US gal)/day. A smaller plant was established in 2009 at the North Chennai Thermal Power Station to prove the LTTD application where power plant cooling water is available.

Thermoionic process

In October 2009, Saltworks Technologies announced a process that uses solar or other thermal heat to drive an ionic current that removes all sodium and chlorine ions from the water using ion-exchange membranes.

Evaporation and condensation for crops

The Seawater greenhouse uses natural evaporation and condensation processes inside a greenhouse powered by solar energy to grow crops in arid coastal land.

Ion concentration polarisation (ICP)

In 2022, using a technique that utilised multiple stages of ion concentration polarisation followed by a single stage of electrodialysis, researchers from MIT managed to create a filterless portable desalination unit capable of removing both dissolved salts and suspended solids. Designed for use by non-experts in remote areas or natural disasters, as well as on military operations, the prototype is the size of a suitcase, measuring 42 × 33.5 × 19 cm and weighing 9.25 kg. The process is fully automated, notifies the user when the water is safe to drink, and can be controlled by a single button or a smartphone app. As it does not require a high-pressure pump, the process is highly energy efficient, consuming only 20 watt-hours per litre of drinking water produced, which makes it possible to power the unit from common portable solar panels. Because it avoids high pressures and replaceable filters, maintenance requirements are significantly reduced, and the device itself is self-cleaning. However, the device is limited to producing 0.33 litres of drinking water per minute, and there are concerns that fouling will affect long-term reliability, especially in water with high turbidity. The researchers are working to increase the efficiency and production rate with the intent of commercialising the product, although a significant limitation is the current design's reliance on expensive materials.
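
The reported figures can be cross-checked with simple arithmetic, as in the sketch below; the only inputs are the 20 Wh/L and 0.33 L/min values quoted above.

```python
# Quick check of the figures reported for the MIT prototype: energy use of
# 20 watt-hours per litre at a production rate of 0.33 litres per minute
# implies the average power draw and daily output shown below.

ENERGY_WH_PER_L = 20    # reported specific energy consumption
RATE_L_PER_MIN = 0.33   # reported production rate

litres_per_hour = RATE_L_PER_MIN * 60
average_power_w = ENERGY_WH_PER_L * litres_per_hour   # Wh/L * L/h = W
litres_per_day = RATE_L_PER_MIN * 60 * 24

print(f"Average power draw: ~{average_power_w:.0f} W")          # ~400 W
print(f"Output if run continuously: ~{litres_per_day:.0f} L/day")
```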

Other approaches

Adsorption-based desalination (AD) relies on the moisture adsorption properties of certain materials, such as silica gel.

Forward osmosis

One process was commercialized by Modern Water PLC using forward osmosis, with a number of plants reported to be in operation.

Hydrogel based desalination

Scheme of the desalination machine: the desalination box contains a gel, separated by a sieve from the outer solution. The box is connected to two large tanks of high and low salinity by two taps that can be opened and closed as desired. The chain of buckets represents fresh water consumption followed by refilling of the low-salinity reservoir with salt water.

The method rests on the fact that when a hydrogel is placed in contact with an aqueous salt solution, it swells, absorbing a solution whose ion composition differs from the original. This solution can easily be squeezed out of the gel by means of a sieve or microfiltration membrane. Compressing the gel in a closed system changes the salt concentration, whereas compressing it in an open system, while the gel exchanges ions with the bulk, changes the number of ions. The sequence of compression and swelling under open- and closed-system conditions mimics the reverse Carnot cycle of a refrigeration machine. The only difference is that instead of heat, this cycle transfers salt ions from a bulk of low salinity to a bulk of high salinity. Like the Carnot cycle, this cycle is fully reversible, so it can in principle operate with ideal thermodynamic efficiency. Because the method does not use osmotic membranes, it can compete with the reverse osmosis method. In addition, unlike reverse osmosis, the approach is not sensitive to the quality of the feed water or its seasonal changes, and allows the production of water of any desired concentration.

Small-scale solar

The United States, France and the United Arab Emirates are working to develop practical solar desalination. AquaDania's WaterStillar has been installed at Dahab, Egypt, and in Playa del Carmen, Mexico. In this approach, a solar thermal collector measuring two square metres can distill from 40 to 60 litres per day from any local water source – five times more than conventional stills. It eliminates the need for plastic PET bottles or energy-consuming water transport. In Central California, a startup company WaterFX is developing a solar-powered method of desalination that can enable the use of local water, including runoff water that can be treated and used again. Salty groundwater in the region would be treated to become freshwater, and in areas near the ocean, seawater could be treated.

Passarell

The Passarell process uses reduced atmospheric pressure rather than heat to drive evaporative desalination. The pure water vapor generated by distillation is then compressed and condensed using an advanced compressor. The compression process improves distillation efficiency by creating the reduced pressure in the evaporation chamber. The compressor centrifuges the pure water vapor after it is drawn through a demister (removing residual impurities) causing it to compress against tubes in the collection chamber. The compression of the vapor increases its temperature. The heat is transferred to the input water falling in the tubes, vaporizing the water in the tubes. Water vapor condenses on the outside of the tubes as product water. By combining several physical processes, Passarell enables most of the system's energy to be recycled through its evaporation, demisting, vapor compression, condensation, and water movement processes.

Geothermal

Geothermal energy can drive desalination. In most locations, geothermal desalination beats using scarce groundwater or surface water, environmentally and economically.

Nanotechnology

Nanotube membranes of higher permeability than the current generation of membranes may eventually reduce the footprint of RO desalination plants. It has also been suggested that the use of such membranes will reduce the energy needed for desalination.

Hermetic, sulphonated nano-composite membranes have been shown to be capable of removing various contaminants to the parts-per-billion level, with little or no susceptibility to high salt concentrations.

Biomimesis

Biomimetic membranes are another approach.

Electrochemical

In 2008, Siemens Water Technologies announced technology that applied electric fields to desalinate one cubic meter of water while using only a purported 1.5 kWh of energy. If accurate, this process would consume one-half the energy of other processes. As of 2012 a demonstration plant was operating in Singapore. Researchers at the University of Texas at Austin and the University of Marburg are developing more efficient methods of electrochemically mediated seawater desalination.

Electrokinetic shocks

A process employing electrokinetic shock waves can be used to accomplish membraneless desalination at ambient temperature and pressure. In this process, anions and cations in salt water are exchanged for carbonate anions and calcium cations, respectively, using electrokinetic shock waves. Calcium and carbonate ions react to form calcium carbonate, which precipitates, leaving fresh water. The theoretical energy efficiency of this method is on par with electrodialysis and reverse osmosis.
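
As an illustration of the stoichiometry implied by this scheme, the sketch below assumes, purely for simplicity, that the dissolved salt is pure NaCl and that the exchange and precipitation go to completion.

```python
# Illustrative stoichiometry for the ion-exchange scheme described above,
# assuming (for simplicity) that the dissolved salt is pure NaCl and that the
# exchange and precipitation go to completion: 2 Na+ -> Ca2+, 2 Cl- -> CO3^2-,
# followed by Ca2+ + CO3^2- -> CaCO3 (solid).

MOLAR_MASS_NACL = 58.44    # g/mol
MOLAR_MASS_CACO3 = 100.09  # g/mol

def caco3_precipitated_g_per_L(nacl_g_per_L):
    mol_nacl = nacl_g_per_L / MOLAR_MASS_NACL
    mol_caco3 = mol_nacl / 2   # one CaCO3 per two NaCl formula units exchanged
    return mol_caco3 * MOLAR_MASS_CACO3

# Seawater-strength salt water (~35 g/L NaCl equivalent):
print(f"~{caco3_precipitated_g_per_L(35):.0f} g of CaCO3 per litre treated")
```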

Temperature swing solvent extraction

Temperature Swing Solvent Extraction (TSSE) uses a solvent instead of a membrane or high temperatures.

Solvent extraction is a common technique in chemical engineering. It can be activated by low-grade heat (less than 70 °C (158 °F)), which may not require active heating. In a study, TSSE removed up to 98.4 percent of the salt in brine. A solvent whose solubility varies with temperature is added to saltwater. At room temperature the solvent draws water molecules away from the salt. The water-laden solvent is then heated, causing the solvent to release the now salt-free water.

It can desalinate extremely salty brine, up to seven times as salty as the ocean. For comparison, current methods can handle brine only about twice as salty.
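
A quick arithmetic check of those figures, taking seawater as roughly 35 g/L (an assumed round number), is sketched below.

```python
# Quick arithmetic on the reported TSSE figures: starting from a brine up to
# seven times as salty as seawater (seawater taken here as ~35 g/L, an assumed
# round number) and removing up to 98.4% of the salt.

SEAWATER_SALINITY_G_PER_L = 35                   # assumed reference value
brine_salinity = 7 * SEAWATER_SALINITY_G_PER_L   # ~245 g/L hypersaline brine
removal_fraction = 0.984                         # reported salt removal

product_salinity = brine_salinity * (1 - removal_fraction)
print(f"Feed brine:    ~{brine_salinity:.0f} g/L")
print(f"Product water: ~{product_salinity:.1f} g/L residual salt")
# Still brackish at this level; further treatment would be needed for potable use.
```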

Wave energy

A small-scale offshore system uses wave energy to desalinate 30–50 m3/day. The system operates with no external power, and is constructed of recycled plastic bottles.

Plants

Trade Arabia reports that Saudi Arabia produces 7.9 million cubic meters of desalinated water daily, or 22% of the world total, as of year-end 2021.

  • Perth began operating a reverse osmosis seawater desalination plant in 2006. The Perth desalination plant is powered partially by renewable energy from the Emu Downs Wind Farm.
  • A desalination plant now operates in Sydney, and the Wonthaggi desalination plant was under construction in Wonthaggi, Victoria. A wind farm at Bungendore in New South Wales was purpose-built to generate enough renewable energy to offset the Sydney plant's energy use, mitigating concerns about harmful greenhouse gas emissions.
  • A January 17, 2008, article in The Wall Street Journal stated, "In November, Connecticut-based Poseidon Resources Corp. won a key regulatory approval to build the $300 million water-desalination plant in Carlsbad, north of San Diego. The facility would produce 190,000 cubic metres of drinking water per day, enough to supply about 100,000 homes." As of June 2012, the cost for the desalinated water had risen to $2,329 per acre-foot. Each $1,000 per acre-foot works out to about $3.06 for 1,000 gallons, or $0.81 per cubic meter (see the conversion sketch below).
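
The acre-foot conversion quoted in the item above can be verified as in the sketch below (the small difference from $3.06 is rounding).

```python
# Verification of the unit conversion quoted above: US$1,000 per acre-foot
# expressed per 1,000 US gallons and per cubic metre.

GALLONS_PER_ACRE_FOOT = 325_851   # US gallons in one acre-foot
GALLONS_PER_M3 = 264.172          # US gallons in one cubic metre

cost_per_acre_foot = 1000.0       # US$
cost_per_gallon = cost_per_acre_foot / GALLONS_PER_ACRE_FOOT
print(f"${cost_per_gallon * 1000:.2f} per 1,000 gallons")           # ~$3.07
print(f"${cost_per_gallon * GALLONS_PER_M3:.2f} per cubic metre")   # ~$0.81
```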

As new technological innovations continue to reduce the capital cost of desalination, more countries are building desalination plants as a small element in addressing their water scarcity problems.

  • Israel desalinizes water for a cost of 53 cents per cubic meter 
  • Singapore desalinizes water for 49 cents per cubic meter  and also treats sewage with reverse osmosis for industrial and potable use (NEWater).
  • China and India, the world's two most populous countries, are turning to desalination to provide a small part of their water needs 
  • In 2007 Pakistan announced plans to use desalination 
  • All Australian capital cities (except Canberra, Darwin, Northern Territory and Hobart) are either in the process of building desalination plants, or are already using them. In late 2011, Melbourne was scheduled to begin using Australia's largest desalination plant, the Wonthaggi desalination plant, to raise low reservoir levels.
  • In 2007 Bermuda signed a contract to purchase a desalination plant 
  • Before 2015, the largest desalination plant in the United States was at Tampa Bay, Florida, which began desalinizing 25 million gallons (95,000 m3) of water per day in December 2007. In the United States the cost of desalination is $3.06 for 1,000 gallons, or 81 cents per cubic meter, and California, Arizona, Texas, and Florida use desalination for only a very small part of their water supply. Since 2015, the Claude "Bud" Lewis Carlsbad Desalination Plant has been producing 50 million gallons of drinking water daily.
  • After being desalinized at Jubail, Saudi Arabia, water is pumped 200 miles (320 km) inland through a pipeline to the capital city of Riyadh.

As of 2008, "World-wide, 13,080 desalination plants produce more than 12 billion gallons of water a day, according to the International Desalination Association." An estimate in 2009 found that the worldwide desalinated water supply will triple between 2008 and 2020.

One of the world's largest desalination hubs is the Jebel Ali Power Generation and Water Production Complex in the United Arab Emirates. It is a site featuring multiple plants using different desalination technologies and is capable of producing 2.2 million cubic meters of water per day.

A typical aircraft carrier in the U.S. military uses nuclear power to desalinize 400,000 US gallons (1,500,000 L) of water per day.

In nature

Mangrove leaf with salt crystals

Evaporation of water over the oceans in the water cycle is a natural desalination process.

The formation of sea ice produces ice with little salt, a much lower salt content than that of seawater.

Seabirds distill seawater using countercurrent exchange in a gland with a rete mirabile. The gland secretes highly concentrated brine stored near the nostrils above the beak. The bird then "sneezes" the brine out. As freshwater is not usually available in their environments, some seabirds, such as pelicans, petrels, albatrosses, gulls and terns, possess this gland, which allows them to drink the salty water from their environments while they are far from land.

Mangrove trees grow in seawater; they secrete salt by trapping it in parts of the root, which are then eaten by animals (usually crabs). Additional salt is removed by storing it in leaves that fall off. Some types of mangroves have glands on their leaves, which work in a similar way to the seabird desalination gland. Salt is extracted to the leaf exterior as small crystals, which then fall off the leaf.

Willow trees and reeds absorb salt and other contaminants, effectively desalinating the water. This is used in artificial constructed wetlands for treating sewage.

History

Desalination has been known to history for millennia, first as a concept and later as a practice, though in a limited form. The ancient Greek philosopher Aristotle observed in his work Meteorology that "salt water, when it turns into vapour, becomes sweet and the vapour does not form salt water again when it condenses," and also noticed that a fine wax vessel would hold potable water after being submerged long enough in seawater, having acted as a membrane to filter the salt. There are numerous other examples of experimentation in desalination throughout Antiquity and the Middle Ages, but desalination was never feasible on a large scale until the modern era. A good example of this experimentation is the observation by Leonardo da Vinci (Florence, 1452), who realized that distilled water could be made cheaply in large quantities by adapting a still to a cookstove. During the Middle Ages elsewhere in Central Europe, work continued on refinements in distillation, although not necessarily directed towards desalination.

However, it is possible that the first major land-based desalination plant may have been installed under emergency conditions on an island off the coast of Tunisia in 1560. It is believed that a garrison of 700 Spanish soldiers was besieged by a large number of Turks and that, during the siege, the captain in charge fabricated a still capable of producing 40 barrels of fresh water per day, though details of the device have not been reported.

Before the Industrial Revolution, desalination was primarily of concern to oceangoing ships, which otherwise needed to keep supplies of fresh water on board. Sir Richard Hawkins (1562–1622), who made extensive travels in the South Seas, reported on his return that he had been able to supply his men with fresh water by means of shipboard distillation. Additionally, during the early 1600s, several prominent figures of the era such as Francis Bacon and Walter Raleigh published reports on water desalination. These reports and others set the stage for the first patent dispute concerning desalination apparatus. The first two patents regarding water desalination date back to 1675 and 1683 (patents No. 184 and No. 226, granted to William Walcot and Robert Fitzgerald (and others), respectively). Nevertheless, neither invention was put into service, owing to technical problems arising from scale-up difficulties. No significant improvements to the basic seawater distillation process were made during the 150 years from the mid-1600s until 1800.

When the frigate Protector was sold to Denmark in the 1780s (as the ship Hussaren) the desalination plant was studied and recorded in great detail. In the newly formed United States, Thomas Jefferson catalogued heat-based methods going back to the 1500s, and formulated practical advice that was publicized to all U.S. ships on the backs of sailing clearance permits.

Beginning about 1800, things started changing very rapidly as a consequence of the appearance of the steam engine and the so-called age of steam. The development of knowledge of the thermodynamics of steam processes, together with the need for a pure water supply for boilers, had a positive effect on distilling systems. Additionally, the spread of European colonialism created a need for fresh water in remote parts of the world, creating the appropriate climate for water desalination.

In parallel with the development and improvement of steam-based systems (multiple-effect evaporators), such devices quickly demonstrated their potential in the field of desalination. In 1852, Alphonse René le Mire de Normandy was issued a British patent for a vertical-tube seawater distilling unit which, thanks to its simplicity of design and ease of construction, quickly gained popularity for shipboard use. Land-based desalting units did not appear in significant numbers until the latter half of the nineteenth century. In the 1860s, the US Army purchased three Normandy evaporators, each rated at 7,000 gallons/day, and installed them on the islands of Key West and the Dry Tortugas. Another important land-based desalting plant was installed at Suakin during the 1880s to provide fresh water to the British troops stationed there; it consisted of six-effect distillers with a capacity of 350 tons/day.

Significant research into improved desalination methods occurred in the United States after World War II. The Office of Saline Water was created in the United States Department of the Interior in 1955 in accordance with the Saline Water Conversion Act of 1952. It was merged into the Office of Water Resources Research in 1974.

The first industrial desalination plant in the United States opened in Freeport, Texas in 1961 with the hope of bringing water security to the region after a decade of drought. Vice-president Lyndon B. Johnson attended the plant's opening on June 21, 1961. President John F. Kennedy recorded a speech from the White House, describing desalination as "a work that in many ways is more important than any other scientific enterprise in which this country is now engaged."

Research took place at state universities in California, at the Dow Chemical Company and at DuPont. Many studies have focused on ways to optimize desalination systems.

The first commercial reverse osmosis desalination plant, Coalinga desalination plant, was inaugurated in California in 1965 for brackish water. A few years later, in 1975, the first sea water reverse osmosis desalination plant came into operation.

Society and culture

Despite the issues associated with desalination processes, public support for its development can be very high. One survey of a Southern California community found 71.9% of respondents in support of desalination plant development in their community. In many cases, high freshwater scarcity corresponds to higher public support for desalination development, whereas areas with low water scarcity tend to have less public support.
