
Sunday, June 26, 2022

Canonical quantization

In physics, canonical quantization is a procedure for quantizing a classical theory, while attempting to preserve the formal structure, such as symmetries, of the classical theory, to the greatest extent possible.

Historically, this was not quite Werner Heisenberg's route to obtaining quantum mechanics, but Paul Dirac introduced it in his 1926 doctoral thesis as the "method of classical analogy" for quantization, and detailed it in his classic text. The word canonical arises from the Hamiltonian approach to classical mechanics, in which a system's dynamics is generated via canonical Poisson brackets, a structure which is only partially preserved in canonical quantization.

This method was further used in the context of quantum field theory by Paul Dirac, in his construction of quantum electrodynamics. In the field theory context, it is also called the second quantization of fields, in contrast to the semi-classical first quantization of single particles.

History

When it was first developed, quantum physics dealt only with the quantization of the motion of particles, leaving the electromagnetic field classical, hence the name quantum mechanics.

Later the electromagnetic field was also quantized, and even the particles themselves became represented through quantized fields, resulting in the development of quantum electrodynamics (QED) and quantum field theory in general. Thus, by convention, the original form of particle quantum mechanics is denoted first quantization, while quantum field theory is formulated in the language of second quantization.

First quantization

Single particle systems

The following exposition is based on Dirac's treatise on quantum mechanics. In the classical mechanics of a particle, there are dynamic variables which are called coordinates (x) and momenta (p). These specify the state of a classical system. The canonical structure (also known as the symplectic structure) of classical mechanics consists of Poisson brackets enclosing these variables, such as {x,p} = 1. All transformations of variables which preserve these brackets are allowed as canonical transformations in classical mechanics. Motion itself is such a canonical transformation.

By contrast, in quantum mechanics, all significant features of a particle are contained in a state $|\psi\rangle$, called a quantum state. Observables are represented by operators acting on a Hilbert space of such quantum states.

The eigenvalue of an operator acting on one of its eigenstates represents the value of a measurement on the particle thus represented. For example, the energy is read off by the Hamiltonian operator $\hat{H}$ acting on a state $|\psi_n\rangle$, yielding

$$\hat{H}|\psi_n\rangle = E_n|\psi_n\rangle,$$

where $E_n$ is the characteristic energy associated to this eigenstate.

Any state could be represented as a linear combination of eigenstates of energy; for example,

$$|\psi\rangle = \sum_n a_n |\psi_n\rangle,$$

where $a_n$ are constant coefficients.
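As a concrete illustration (a sketch of my own, not part of the original text), the following Python snippet discretizes a harmonic-oscillator Hamiltonian on a spatial grid, recovers the eigenvalue relation numerically, and expands an arbitrary state in the resulting energy eigenbasis. The units (ħ = m = ω = 1), grid size, and the example state are arbitrary choices.

```python
import numpy as np

# Discretized harmonic oscillator with hbar = m = omega = 1 (arbitrary choice of units)
N = 800
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

# Finite-difference Hamiltonian: H = -(1/2) d^2/dx^2 + (1/2) x^2
kinetic = -0.5 * (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
                  + np.diag(np.ones(N - 1), -1)) / dx**2
H = kinetic + np.diag(0.5 * x**2)

# H |psi_n> = E_n |psi_n>: eigenvalues E_n and eigenvectors |psi_n>
E, states = np.linalg.eigh(H)
print(E[:4])                      # close to 0.5, 1.5, 2.5, 3.5, i.e. E_n = n + 1/2

# Expand an arbitrary normalized state in the energy eigenbasis: a_n = <psi_n|psi>
psi = np.exp(-(x - 1.0) ** 2)     # a displaced Gaussian, chosen only as an example
psi /= np.linalg.norm(psi)
a = states.T @ psi
print(np.sum(np.abs(a) ** 2))     # completeness: the |a_n|^2 sum to 1
```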

As in classical mechanics, all dynamical operators can be represented by functions of the position and momentum ones, $\hat{x}$ and $\hat{p}$, respectively. The connection between this representation and the more usual wavefunction representation is given by the eigenstate of the position operator $\hat{x}$ representing a particle at position $x$, which is denoted by an element $|x\rangle$ in the Hilbert space, and which satisfies $\hat{x}|x\rangle = x|x\rangle$. Then, $\psi(x) = \langle x|\psi\rangle$.

Likewise, the eigenstates $|p\rangle$ of the momentum operator $\hat{p}$ specify the momentum representation: $\psi(p) = \langle p|\psi\rangle$.

The central relation between these operators is a quantum analog of the above Poisson bracket of classical mechanics, the canonical commutation relation,

$$[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar.$$

This relation encodes (and formally leads to) the uncertainty principle, in the form Δx Δp ≥ ħ/2. This algebraic structure may thus be considered as the quantum analog of the canonical structure of classical mechanics.
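As a small numerical sketch of my own (not part of the original text), the following snippet takes ħ = 1 and an arbitrarily chosen Gaussian wave packet, computes Δx on a grid, passes to the momentum representation with an FFT to compute Δp, and checks that the product comes out close to ħ/2, saturating the bound.

```python
import numpy as np

hbar = 1.0
sigma = 0.7                                # width of the Gaussian wave packet (arbitrary)
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]

# Normalized Gaussian wavefunction psi(x)
psi = (1.0 / (np.pi * sigma**2)) ** 0.25 * np.exp(-x**2 / (2 * sigma**2))

# Position uncertainty from the probability density |psi(x)|^2
prob_x = np.abs(psi) ** 2 * dx
prob_x /= prob_x.sum()
mean_x = np.sum(x * prob_x)
delta_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob_x))

# Momentum-space distribution via FFT; p = hbar * k on the FFT frequency grid
p = hbar * 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
prob_p = np.abs(np.fft.fft(psi)) ** 2
prob_p /= prob_p.sum()
mean_p = np.sum(p * prob_p)
delta_p = np.sqrt(np.sum((p - mean_p) ** 2 * prob_p))

print(delta_x * delta_p, hbar / 2)         # the two numbers should nearly coincide
```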

Many-particle systems

When turning to N-particle systems, i.e., systems containing N identical particles (particles characterized by the same quantum numbers such as mass, charge and spin), it is necessary to extend the single-particle state function $\psi(x)$ to the N-particle state function $\psi(x_1, x_2, \dots, x_N)$. A fundamental difference between classical and quantum mechanics concerns the concept of indistinguishability of identical particles. Only two species of particles are thus possible in quantum physics, the so-called bosons and fermions, which obey the rules:

$$\psi(\dots, x_j, \dots, x_k, \dots) = +\,\psi(\dots, x_k, \dots, x_j, \dots) \quad \text{(bosons)},$$

$$\psi(\dots, x_j, \dots, x_k, \dots) = -\,\psi(\dots, x_k, \dots, x_j, \dots) \quad \text{(fermions)},$$

where we have interchanged two coordinates $x_j$, $x_k$ of the state function. The usual wave function is obtained using the Slater determinant and the theory of identical particles. Using this basis, it is possible to solve various many-particle problems.
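As a minimal illustration of my own (the orbitals are arbitrary choices, here the two lowest harmonic-oscillator states), the following sketch builds a two-fermion wave function as a 2x2 Slater determinant and checks antisymmetry under exchange and the vanishing required by the Pauli exclusion principle.

```python
import numpy as np

# Two single-particle orbitals (harmonic-oscillator ground and first excited states,
# with hbar = m = omega = 1); any orthonormal pair would do
def phi0(x):
    return np.pi ** -0.25 * np.exp(-x**2 / 2)

def phi1(x):
    return np.sqrt(2.0) * np.pi ** -0.25 * x * np.exp(-x**2 / 2)

def slater(x1, x2):
    # 2x2 Slater determinant, normalized with 1/sqrt(2!)
    return (phi0(x1) * phi1(x2) - phi0(x2) * phi1(x1)) / np.sqrt(2.0)

x1, x2 = 0.3, -1.1
print(slater(x1, x2), slater(x2, x1))   # equal magnitude, opposite sign: antisymmetry
print(slater(0.5, 0.5))                 # zero when the coordinates coincide: Pauli exclusion
```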

Issues and limitations

Classical and quantum brackets

Dirac's book details his popular rule of supplanting Poisson brackets by commutators:

$$\{A, B\} \;\longmapsto\; \frac{1}{i\hbar}[\hat{A}, \hat{B}].$$

One might interpret this proposal as saying that we should seek a "quantization map" $Q$ mapping a function $f$ on the classical phase space to an operator $Q_f$ on the quantum Hilbert space such that

$$Q_{\{f,g\}} = \frac{1}{i\hbar}[Q_f, Q_g].$$

It is now known that there is no reasonable such quantization map satisfying the above identity exactly for all functions $f$ and $g$.

Groenewold's theorem

One concrete version of the above impossibility claim is Groenewold's theorem (after Dutch theoretical physicist Hilbrand J. Groenewold), which we describe for a system with one degree of freedom for simplicity. Let us accept the following "ground rules" for the map $Q$. First, $Q$ should send the constant function 1 to the identity operator. Second, $Q$ should take $x$ and $p$ to the usual position and momentum operators $\hat{x}$ and $\hat{p}$. Third, $Q$ should take a polynomial in $x$ and $p$ to a "polynomial" in $\hat{x}$ and $\hat{p}$, that is, a finite linear combination of products of $\hat{x}$ and $\hat{p}$, which may be taken in any desired order. In its simplest form, Groenewold's theorem says that there is no map satisfying the above ground rules and also the bracket condition

$$Q_{\{f,g\}} = \frac{1}{i\hbar}[Q_f, Q_g]$$

for all polynomials $f$ and $g$.

Actually, the nonexistence of such a map occurs already by the time we reach polynomials of degree four. Note that the Poisson bracket of two polynomials of degree four has degree six, so it does not exactly make sense to require a map on polynomials of degree four to respect the bracket condition. We can, however, require that the bracket condition holds when $f$ and $g$ have degree three. Groenewold's theorem can be stated as follows:

Theorem: There is no quantization map $Q$ (following the above ground rules) on polynomials of degree less than or equal to four that satisfies

$$Q_{\{f,g\}} = \frac{1}{i\hbar}[Q_f, Q_g]$$

whenever $f$ and $g$ have degree less than or equal to three. (Note that in this case, $\{f,g\}$ has degree less than or equal to four.)

The proof can be outlined as follows. Suppose we first try to find a quantization map on polynomials of degree less than or equal to three satisfying the bracket condition whenever $f$ has degree less than or equal to two and $g$ has degree less than or equal to two. Then there is precisely one such map, and it is the Weyl quantization. The impossibility result now is obtained by writing the same polynomial of degree four as a Poisson bracket of polynomials of degree three in two different ways. Specifically, we have

$$x^2 p^2 = \frac{1}{9}\{x^3, p^3\} = \frac{1}{3}\{x^2 p,\, x p^2\}.$$
On the other hand, we have already seen that if there is going to be a quantization map on polynomials of degree three, it must be the Weyl quantization; that is, we have already determined the only possible quantization of all the cubic polynomials above.

The argument is finished by computing by brute force that

$$\frac{1}{9\,i\hbar}\,[Q(x^3),\, Q(p^3)]$$

does not coincide with

$$\frac{1}{3\,i\hbar}\,[Q(x^2 p),\, Q(x p^2)],$$

where $Q$ denotes the Weyl quantization. Thus, we have two incompatible requirements for the value of $Q(x^2 p^2)$.
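To make the brute-force step concrete, here is a numerical sketch of my own (not part of the original argument): it represents $\hat{x}$ and $\hat{p}$ as truncated harmonic-oscillator matrices with ħ = 1 and compares the two candidate quantizations of x²p² forced by the bracket condition. The Weyl quantizations of x³ and p³ are simply $\hat{x}^3$ and $\hat{p}^3$, while those of x²p and xp² reduce to the symmetrized products $\hat{x}\hat{p}\hat{x}$ and $\hat{p}\hat{x}\hat{p}$. Away from the truncation edge, the two candidates differ by a constant multiple of the identity (−ħ²/3 in this convention), which is exactly the incompatibility the theorem exploits.

```python
import numpy as np

hbar = 1.0
N = 60                                    # truncation of the harmonic-oscillator basis

# Ladder operators, then X and P satisfying [X, P] = i*hbar away from the truncation edge
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
ad = a.conj().T                           # creation operator
X = np.sqrt(hbar / 2) * (a + ad)
P = 1j * np.sqrt(hbar / 2) * (ad - a)

def comm(A, B):
    return A @ B - B @ A

# Weyl quantizations: x^3 -> X^3, p^3 -> P^3, x^2 p -> XPX, x p^2 -> PXP
Qx3 = np.linalg.matrix_power(X, 3)
Qp3 = np.linalg.matrix_power(P, 3)
Qx2p = X @ P @ X
Qxp2 = P @ X @ P

# Two candidate values for Q(x^2 p^2), using
# x^2 p^2 = (1/9){x^3, p^3} = (1/3){x^2 p, x p^2} and the bracket condition
cand1 = comm(Qx3, Qp3) / (9j * hbar)
cand2 = comm(Qx2p, Qxp2) / (3j * hbar)

# Low-index matrix elements are unaffected by the truncation; there the difference
# is a constant multiple of the identity, not zero
diff = cand1 - cand2
print(diff[0, 0].real, diff[1, 1].real)   # both approximately -hbar**2 / 3
```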

Axioms for quantization

If Q represents the quantization map that acts on functions f in classical phase space, then the following properties are usually considered desirable:

  1. $Q_x = \hat{x}$ and $Q_p = \hat{p}$   (elementary position/momentum operators)
  2. $f \longmapsto Q_f$   is a linear map
  3. $Q_{\{f,g\}} = \frac{1}{i\hbar}[Q_f, Q_g]$   (Poisson bracket)
  4. $Q_{g \circ f} = g(Q_f)$   (von Neumann rule).

However, not only are these four properties mutually inconsistent, any three of them are also inconsistent! As it turns out, the only pairs of these properties that lead to self-consistent, nontrivial solutions are 2 & 3, and possibly 1 & 3 or 1 & 4. Accepting properties 1 & 2, along with a weaker condition that 3 be true only asymptotically in the limit ħ→0 (see Moyal bracket), leads to deformation quantization, and some extraneous information must be provided, as in the standard theories utilized in most of physics. Accepting properties 1 & 2 & 3 but restricting the space of quantizable observables to exclude terms such as the cubic ones in the above example amounts to geometric quantization.

Second quantization: field theory

Quantum mechanics was successful at describing non-relativistic systems with fixed numbers of particles, but a new framework was needed to describe systems in which particles can be created or destroyed, for example, the electromagnetic field, considered as a collection of photons. It was eventually realized that special relativity was inconsistent with single-particle quantum mechanics, so that all particles are now described relativistically by quantum fields.

When the canonical quantization procedure is applied to a field, such as the electromagnetic field, the classical field variables become quantum operators. Thus, the normal modes comprising the amplitude of the field are simple oscillators, each of which is quantized in standard first quantization, above, without ambiguity. The resulting quanta are identified with individual particles or excitations. For example, the quanta of the electromagnetic field are identified with photons. Unlike first quantization, conventional second quantization is completely unambiguous, in effect a functor, since its constituent oscillators are quantized unambiguously.

Historically, quantizing the classical theory of a single particle gave rise to a wavefunction. The classical equations of motion of a field are typically identical in form to the (quantum) equations for the wave-function of one of its quanta. For example, the Klein–Gordon equation is the classical equation of motion for a free scalar field, but also the quantum equation for a scalar particle wave-function. This meant that quantizing a field appeared to be similar to quantizing a theory that was already quantized, leading to the fanciful term second quantization in the early literature, which is still used to describe field quantization, even though the modern interpretation, as detailed above, is different.

One drawback to canonical quantization for a relativistic field is that by relying on the Hamiltonian to determine time dependence, relativistic invariance is no longer manifest. Thus it is necessary to check that relativistic invariance is not lost. Alternatively, the Feynman integral approach is available for quantizing relativistic fields, and is manifestly invariant. For non-relativistic field theories, such as those used in condensed matter physics, Lorentz invariance is not an issue.

Field operators

Quantum mechanically, the variables of a field (such as the field's amplitude at a given point) are represented by operators on a Hilbert space. In general, all observables are constructed as operators on the Hilbert space, and the time-evolution of the operators is governed by the Hamiltonian, which must be a positive operator. A state annihilated by the Hamiltonian must be identified as the vacuum state, which is the basis for building all other states. In a non-interacting (free) field theory, the vacuum is normally identified as a state containing zero particles. In a theory with interacting particles, identifying the vacuum is more subtle, due to vacuum polarization, which implies that the physical vacuum in quantum field theory is never really empty. For further elaboration, see the articles on the quantum mechanical vacuum and the vacuum of quantum chromodynamics. The details of the canonical quantization depend on the field being quantized, and whether it is free or interacting.

Real scalar field

A scalar field theory provides a good example of the canonical quantization procedure. Classically, a scalar field is a collection of an infinity of oscillator normal modes. It suffices to consider a 1+1-dimensional space-time in which the spatial direction is compactified to a circle of circumference 2π, rendering the momenta discrete.

The classical Lagrangian density describes an infinity of coupled harmonic oscillators, labelled by x, which is now a label (and not the displacement dynamical variable to be quantized), denoted by the classical field φ,

$$L(\phi) = \int \left[\tfrac{1}{2}(\partial_t \phi)^2 - \tfrac{1}{2}(\partial_x \phi)^2 - V(\phi)\right] dx,$$

where V(φ) is a potential term, often taken to be a polynomial or monomial of degree 3 or higher. The action functional is

$$S(\phi) = \int L(\phi)\, dt = \int \mathcal{L}(\phi)\, dx\, dt.$$

The canonical momentum obtained via the Legendre transformation using the Lagrangian L is $\pi = \partial_t \phi$, and the classical Hamiltonian is found to be

$$H(\phi, \pi) = \int \left[\tfrac{1}{2}\pi^2 + \tfrac{1}{2}(\partial_x \phi)^2 + V(\phi)\right] dx.$$
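Before quantizing, the picture of the field as a collection of coupled oscillators can be checked directly. The sketch below is my own illustration (lattice size and time step are arbitrary, and the potential is taken to be the simple mass term ½m²φ² that matches the mode frequencies ω_k = √(k² + m²) used below): it discretizes the 1+1-dimensional field on a periodic lattice, integrates the classical equations of motion with a leapfrog scheme, and verifies that the Hamiltonian above is conserved.

```python
import numpy as np

# Free massive scalar field on a periodic 1D lattice (hbar = c = 1, V = m^2 phi^2 / 2)
N, L, m = 64, 2 * np.pi, 1.0          # lattice sites, circumference 2*pi, mass
dx = L / N
dt = 0.02
x = np.arange(N) * dx

# Initial data: a single classical normal mode phi(x, 0) = cos(k x), at rest
k = 3
phi = np.cos(k * x)
pi = np.zeros(N)                      # canonical momentum pi = d(phi)/dt

def hamiltonian(phi, pi):
    grad = (np.roll(phi, -1) - phi) / dx
    return np.sum(0.5 * pi**2 + 0.5 * grad**2 + 0.5 * m**2 * phi**2) * dx

def force(phi):
    # discrete Klein-Gordon equation: d^2(phi)/dt^2 = laplacian(phi) - m^2 phi
    return (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2 - m**2 * phi

E0 = hamiltonian(phi, pi)
for _ in range(2000):                 # leapfrog (kick-drift-kick) integration
    pi += 0.5 * dt * force(phi)
    phi += dt * pi
    pi += 0.5 * dt * force(phi)

print(E0, hamiltonian(phi, pi))       # the conserved energy should agree closely
```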

Canonical quantization treats the variables φ and π as operators with canonical commutation relations at time t = 0, given by

$$[\phi(x), \phi(y)] = 0, \qquad [\pi(x), \pi(y)] = 0, \qquad [\phi(x), \pi(y)] = i\hbar\,\delta(x - y).$$

Operators constructed from φ and π can then formally be defined at other times via the time-evolution generated by the Hamiltonian,

$$\mathcal{O}(t) = e^{itH/\hbar}\, \mathcal{O}\, e^{-itH/\hbar}.$$

However, since φ and π no longer commute, this expression is ambiguous at the quantum level. The problem is to construct a representation of the relevant operators on a Hilbert space $\mathcal{H}$ and to construct a positive operator H as a quantum operator on this Hilbert space in such a way that it gives this evolution for the operators as given by the preceding equation, and to show that $\mathcal{H}$ contains a vacuum state $|0\rangle$ on which H has zero eigenvalue. In practice, this construction is a difficult problem for interacting field theories, and has been solved completely only in a few simple cases via the methods of constructive quantum field theory. Many of these issues can be sidestepped using the Feynman integral as described for a particular V(φ) in the article on scalar field theory.

In the case of a free field, with V(φ) = 0, the quantization procedure is relatively straightforward. It is convenient to Fourier transform the fields, so that

$$\phi_k = \int \phi(x)\, e^{-ikx}\, dx, \qquad \pi_k = \int \pi(x)\, e^{-ikx}\, dx.$$

The reality of the fields implies that

$$\phi_{-k} = \phi_k^\dagger, \qquad \pi_{-k} = \pi_k^\dagger.$$

The classical Hamiltonian may be expanded in Fourier modes as

$$H = \frac{1}{2}\sum_{k=-\infty}^{\infty} \left[\pi_k \pi_k^\dagger + \omega_k^2\, \phi_k \phi_k^\dagger\right],$$

where $\omega_k = \sqrt{k^2 + m^2}$.

This Hamiltonian is thus recognizable as an infinite sum of classical normal mode oscillator excitations φk, each one of which is quantized in the standard manner, so the free quantum Hamiltonian looks identical. It is the φks that have become operators obeying the standard commutation relations, [φk, πk†] = [φk†, πk] = iħ, with all others vanishing. The collective Hilbert space of all these oscillators is thus constructed using creation and annihilation operators constructed from these modes,

$$a_k = \frac{1}{\sqrt{2\hbar\omega_k}}\left(\omega_k \phi_k + i\pi_k\right), \qquad a_k^\dagger = \frac{1}{\sqrt{2\hbar\omega_k}}\left(\omega_k \phi_k^\dagger - i\pi_k^\dagger\right),$$

for which [ak, ak†] = 1 for all k, with all other commutators vanishing.

The vacuum $|0\rangle$ is taken to be annihilated by all of the ak, and $\mathcal{H}$ is the Hilbert space constructed by applying any combination of the infinite collection of creation operators ak† to $|0\rangle$. This Hilbert space is called Fock space. For each k, this construction is identical to a quantum harmonic oscillator. The quantum field is an infinite array of quantum oscillators. The quantum Hamiltonian then amounts to

$$H = \sum_k \hbar\omega_k\, a_k^\dagger a_k = \sum_k \hbar\omega_k\, N_k,$$

where Nk may be interpreted as the number operator giving the number of particles in a state with momentum k.

This Hamiltonian differs from the previous expression by the subtraction of the zero-point energy ħωk/2 of each harmonic oscillator. This satisfies the condition that H must annihilate the vacuum, without affecting the time-evolution of operators via the above exponentiation operation. This subtraction of the zero-point energy may be considered to be a resolution of the quantum operator ordering ambiguity, since it is equivalent to requiring that all creation operators appear to the left of annihilation operators in the expansion of the Hamiltonian. This procedure is known as Wick ordering or normal ordering.
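A small numerical sketch of my own (two modes, an arbitrary truncation of each Fock space, and unit mass) mimics this construction: it builds the Wick-ordered Hamiltonian Σ ħω_k a_k†a_k, checks that it annihilates the vacuum, and confirms that the one-particle states a_k†|0⟩ carry energy ħω_k.

```python
import numpy as np

hbar, m = 1.0, 1.0
n_max = 6                                    # Fock-space truncation per mode (arbitrary)
modes = [1, 2]                               # two discrete momenta k on the circle
omega = {k: np.sqrt(k**2 + m**2) for k in modes}

# Single-mode ladder operator on the truncated Fock space, and the identity
a1 = np.diag(np.sqrt(np.arange(1, n_max)), 1)
I = np.eye(n_max)

# Two-mode annihilation/creation operators via Kronecker (tensor) products
a = {1: np.kron(a1, I), 2: np.kron(I, a1)}
adag = {k: a[k].conj().T for k in modes}

# Normal-ordered Hamiltonian: every creation operator stands to the left
H = sum(hbar * omega[k] * adag[k] @ a[k] for k in modes)

# The vacuum |0> has zero occupation in every mode
vac = np.zeros(n_max**2)
vac[0] = 1.0

print(np.linalg.norm(H @ vac))               # ~0: H annihilates the vacuum
for k in modes:
    one = adag[k] @ vac                      # one-particle state a_k^dagger |0>
    print(k, one @ H @ one, hbar * omega[k]) # its energy equals hbar * omega_k
```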

Other fields

All other fields can be quantized by a generalization of this procedure. Vector or tensor fields simply have more components, and independent creation and destruction operators must be introduced for each independent component. If a field has any internal symmetry, then creation and destruction operators must be introduced for each component of the field related to this symmetry as well. If there is a gauge symmetry, then the number of independent components of the field must be carefully analyzed to avoid over-counting equivalent configurations, and gauge-fixing may be applied if needed.

It turns out that commutation relations are useful only for quantizing bosons, for which the occupancy number of any state is unlimited. To quantize fermions, which satisfy the Pauli exclusion principle, anti-commutators are needed. These are defined by {A,B} = AB+BA.

When quantizing fermions, the fields are expanded in creation and annihilation operators, θk†, θk, which satisfy

$$\{\theta_k, \theta_l^\dagger\} = \delta_{kl}, \qquad \{\theta_k, \theta_l\} = 0, \qquad \{\theta_k^\dagger, \theta_l^\dagger\} = 0.$$

The states are constructed on a vacuum |0> annihilated by the θk, and the Fock space is built by applying all products of creation operators θk† to |0>. Pauli's exclusion principle is satisfied because $(\theta_k^\dagger)^2 |0\rangle = 0$, by virtue of the anti-commutation relations.
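For illustration (my own sketch, using a Jordan-Wigner-style construction that is not described in the article), two fermionic modes can be represented as matrices on a 4-dimensional Fock space; the code checks the anticommutation relations and the nilpotency that enforces the Pauli exclusion principle.

```python
import numpy as np

# Single-mode fermionic annihilation operator and a parity (Pauli-Z) matrix,
# in the occupation basis {|0>, |1>}
c = np.array([[0., 1.], [0., 0.]])
Z = np.diag([1., -1.])
I = np.eye(2)

# Two modes, with a Jordan-Wigner string so that operators on different modes anticommute
theta = {1: np.kron(c, I), 2: np.kron(Z, c)}
thetad = {k: v.conj().T for k, v in theta.items()}

def anti(A, B):
    return A @ B + B @ A

# Canonical anticommutation relations: {theta_k, theta_l^dagger} = delta_kl, others vanish
for k in (1, 2):
    for l in (1, 2):
        print(k, l,
              np.allclose(anti(theta[k], thetad[l]), np.eye(4) if k == l else 0),
              np.allclose(anti(theta[k], theta[l]), 0))

# Pauli exclusion: applying the same creation operator twice gives zero
vac = np.array([1., 0., 0., 0.])                    # the Fock vacuum |0>
print(np.allclose(thetad[1] @ (thetad[1] @ vac), 0))
print(np.linalg.norm(thetad[2] @ thetad[1] @ vac))  # the two-particle state exists, norm 1
```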

Condensates

The construction of the scalar field states above assumed that the potential was minimized at φ = 0, so that the vacuum minimizing the Hamiltonian satisfies ⟨φ⟩ = 0, indicating that the vacuum expectation value (VEV) of the field is zero. In cases involving spontaneous symmetry breaking, it is possible to have a non-zero VEV, because the potential is minimized for a value φ = v. This occurs, for example, if V(φ) = gφ⁴ − 2m²φ² with g > 0 and m² > 0, for which the minimum energy is found at v = ±m/√g. The value of v in one of these vacua may be considered as a condensate of the field φ. Canonical quantization then can be carried out for the shifted field φ(x,t) − v, and particle states with respect to the shifted vacuum are defined by quantizing the shifted field. This construction is utilized in the Higgs mechanism in the standard model of particle physics.

Mathematical quantization

Deformation quantization

The classical theory is described using a spacelike foliation of spacetime with the state at each slice being described by an element of a symplectic manifold with the time evolution given by the symplectomorphism generated by a Hamiltonian function over the symplectic manifold. The quantum algebra of "operators" is an ħ-deformation of the algebra of smooth functions over the symplectic space such that the leading term in the Taylor expansion over ħ of the commutator [A, B] expressed in the phase space formulation is iħ{A, B}. (Here, the curly braces denote the Poisson bracket. The subleading terms are all encoded in the Moyal bracket, the suitable quantum deformation of the Poisson bracket.) In general, for the quantities (observables) involved, and providing the arguments of such brackets, ħ-deformations are highly nonunique; quantization is an "art", and is specified by the physical context. (Two different quantum systems may represent two different, inequivalent, deformations of the same classical limit, ħ → 0.)
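As a concrete check of my own (not from the article), the following symbolic computation expands the Moyal bracket of two phase-space monomials to third order in ħ: the leading term is exactly the Poisson bracket, and the first correction appears at order ħ².

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def dxp(expr, nx, npd):
    # differentiate nx times in x and npd times in p
    out = expr
    if nx:
        out = sp.diff(out, x, nx)
    if npd:
        out = sp.diff(out, p, npd)
    return out

def poisson(f, g):
    return sp.expand(sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x))

def moyal(f, g, order=3):
    # {f, g}_M = (2/hbar) f sin((hbar/2) Lambda) g, with Lambda the Poisson bidifferential
    # operator; the sine series is truncated at the given (odd) order
    total = 0
    for n in range(1, order + 1, 2):
        lam_n = sum(sp.binomial(n, k) * (-1)**k * dxp(f, n - k, k) * dxp(g, k, n - k)
                    for k in range(n + 1))
        total += (-1)**((n - 1) // 2) * (hbar / 2)**n / sp.factorial(n) * lam_n
    return sp.expand(2 / hbar * total)

f = x**2 * p
g = x * p**2
print(poisson(f, g))      # the classical Poisson bracket, 3*x**2*p**2
print(moyal(f, g))        # the same leading term plus a correction of order hbar**2
```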

Now, one looks for unitary representations of this quantum algebra. With respect to such a unitary representation, a symplectomorphism in the classical theory would now deform to a (metaplectic) unitary transformation. In particular, the time evolution symplectomorphism generated by the classical Hamiltonian deforms to a unitary transformation generated by the corresponding quantum Hamiltonian.

A further generalization is to consider a Poisson manifold instead of a symplectic space for the classical theory and perform an ħ-deformation of the corresponding Poisson algebra or even Poisson supermanifolds.

Geometric quantization

In contrast to the theory of deformation quantization described above, geometric quantization seeks to construct an actual Hilbert space and operators on it. Starting with a symplectic manifold $M$, one first constructs a prequantum Hilbert space consisting of the space of square-integrable sections of an appropriate line bundle over $M$. On this space, one can map all classical observables to operators on the prequantum Hilbert space, with the commutator corresponding exactly to the Poisson bracket. The prequantum Hilbert space, however, is clearly too big to describe the quantization of $M$.

One then proceeds by choosing a polarization, that is (roughly), a choice of $n$ variables on the $2n$-dimensional phase space. The quantum Hilbert space is then the space of sections that depend only on the chosen variables, in the sense that they are covariantly constant in the other directions. If the chosen variables are real, we get something like the traditional Schrödinger Hilbert space. If the chosen variables are complex, we get something like the Segal–Bargmann space.

History of the Teller–Ulam design

 
Ivy Mike, the first full test of the Teller–Ulam design (a staged fusion bomb), with a yield of 10.4 megatons (November 1, 1952)

This article chronicles the history and origins of the Teller–Ulam design, the technical concept behind modern thermonuclear weapons, also known as hydrogen bombs. The design, the details of which are military secrets known to only a handful of major nations, is believed to be used in virtually all modern nuclear weapons that make up the arsenals of the major nuclear powers.

History

Teller's "Super"

Physicist Edward Teller was for many years the chief force lobbying for research into developing fusion weapons.

The idea of using the energy from a fission device to begin a fusion reaction was first proposed by the Italian physicist Enrico Fermi to his colleague Edward Teller in the fall of 1941 during what would soon become the Manhattan Project, the World War II effort by the United States and United Kingdom to develop the first nuclear weapons. Teller was soon a participant at Robert Oppenheimer's summer conference on the development of a fission bomb held at the University of California, Berkeley, where he guided discussion towards the idea of creating his "Super" bomb, which would hypothetically be many times more powerful than the yet-undeveloped fission weapon. Teller assumed that creating the fission bomb would be nothing more than an engineering problem, and that the "Super" provided a much more interesting theoretical challenge.

Ivy King, the largest pure fission bomb tested by the US, yielding 500 kt (November 16, 1952)

For the remainder of the war the effort was focused on first developing fission weapons. Nevertheless, Teller continued to pursue the "Super", to the point of neglecting work assigned to him for the fission weapon at the secret Los Alamos lab where he worked. (Much of the work Teller declined to do was given instead to Klaus Fuchs, who was later discovered to be a spy for the Soviet Union.) Teller was given some resources with which to study the "Super", and contacted his friend Maria Göppert-Mayer to help with laborious calculations relating to opacity. The "Super", however, proved elusive, and the calculations were incredibly difficult to perform, especially since there was no existing way to run small-scale tests of the principles involved (in comparison, the properties of fission could be more easily probed with cyclotrons, newly created nuclear reactors, and various other tests).

Even though they had witnessed the Trinity test, after the atomic bombings of Japan scientists at Los Alamos were surprised by how devastating the effects of the weapon had been. Many of the scientists rebelled against the notion of creating a weapon thousands of times more powerful than the first atomic bombs. For the scientists the question was in part technical — the weapon design was still quite uncertain and unworkable — and in part moral: such a weapon, they argued, could only be used against large civilian populations, and could thus only be used as a weapon of genocide. Many scientists, such as Teller's colleague Hans Bethe (who had discovered stellar nucleosynthesis, the nuclear fusion that takes place in stars), urged that the United States should not develop such weapons and set an example towards the Soviet Union. Promoters of the weapon, including Teller and Berkeley physicists Ernest Lawrence and Luis Alvarez, argued that such a development was inevitable, and to deny such protection to the people of the United States — especially when the Soviet Union was likely to create such a weapon itself — was itself an immoral and unwise act. Still others, such as Oppenheimer, simply thought that the existing stockpile of fissile material was better spent in attempting to develop a large arsenal of tactical atomic weapons rather than potentially squandered on the development of a few massive "Supers".

In any case, work slowed greatly at Los Alamos, as some 5,500 of the 7,100 scientists and related staff who had been there at the conclusion of the war left to go back to their previous positions at universities and laboratories. A conference was held at Los Alamos in 1946 to examine the feasibility of building a Super; it concluded that it was feasible, but there were a number of dissenters to that conclusion.

When the Soviet Union exploded their own atomic bomb (dubbed "Joe 1" by the US) in August 1949, it caught Western analysts off guard, and over the next several months there was an intense debate within the US government, military, and scientific communities on whether to proceed with the far-more-powerful Super. On January 31, 1950, US President Harry S. Truman ordered a program to develop a hydrogen bomb.

Many scientists returned to Los Alamos to work on the "Super" program, but the initial attempts still seemed highly unworkable. In the "classical Super," it was thought that the heat alone from the fission bomb would be used to ignite the fusion material, but that proved to be impossible. For a while, many scientists thought (and many hoped) that the weapon itself would be impossible to construct.

Ulam's and Teller's contributions

Classified paper by Teller and Ulam on March 9, 1951: On Heterocatalytic Detonations I: Hydrodynamic Lenses and Radiation Mirrors in which they proposed the staged implosion (Teller–Ulam) design. This declassified version is heavily redacted.

The exact history of the Teller–Ulam breakthrough is not completely known, partly because of numerous conflicting personal accounts and partly because of the continued classification of documents that would reveal which was closer to the truth. Previous models of the "Super" had apparently placed the fusion fuel either surrounding the fission "trigger" (in a spherical formation) or at the heart of it (similar to a "boosted" weapon) in the hope that the closer the fuel was to the fission explosion, the higher the chance that it would be ignited by the sheer force of the heat generated.

In 1951, after many years of fruitless labor on the "Super", a breakthrough idea from the Polish émigré mathematician Stanislaw Ulam was seized upon by Teller and developed into the first workable design for a megaton-range hydrogen bomb. This concept, now called "staged implosion", was first proposed in a classified scientific paper, On Heterocatalytic Detonations I. Hydrodynamic Lenses and Radiation Mirrors, by Teller and Ulam on March 9, 1951. The exact contributions of Ulam and Teller to what became known as the "Teller–Ulam design" are not definitively known in the public domain; the degree of credit assigned to Teller by his contemporaries is almost exactly commensurate with how well they thought of Teller in general. In an interview with Scientific American from 1999, Teller told the reporter:

I contributed; Ulam did not. I'm sorry I had to answer it in this abrupt way. Ulam was rightly dissatisfied with an old approach. He came to me with a part of an idea which I already had worked out and difficulty getting people to listen to. He was willing to sign a paper. When it then came to defending that paper and really putting work into it, he refused. He said, "I don't believe in it."

A view of the Sausage device casing, with its diagnostic and cryogenic equipment attached. The long pipes would receive the first bits of radiation from the primary and secondary ("Teller light") just before the device fully detonated.

The issue is controversial. Bethe in his “Memorandum on the History of the Thermonuclear Program” (1952) cited Teller as the discoverer of an “entirely new approach to thermonuclear reactions”, which “was a matter of inspiration” and was “therefore, unpredictable” and “largely accidental.” At the Oppenheimer hearing, in 1954, Bethe spoke of Teller's “stroke of genius” in the invention of the H-bomb. And finally in 1997 Bethe stated that “the crucial invention was made in 1951, by Teller.” 

Other scientists (antagonistic to Teller, such as J. Carson Mark) have claimed that Teller would have never gotten any closer without the idea of Ulam. The nuclear weapons designer Ted Taylor was clear about assigning credit for the basic staging and compression ideas to Ulam, while giving Teller the credit for recognizing the critical role of radiation as opposed to hydrodynamic pressure.

Teller became known in the press as the "father of the hydrogen bomb", a title which he did not seek to discourage. Many of Teller's colleagues were irritated that he seemed to enjoy taking full credit for something he had only a part in, and in response, with encouragement from Enrico Fermi, Teller authored an article titled "The Work of Many People," which appeared in Science magazine in February 1955, emphasizing that he was not alone in the weapon's development (he would later write in his memoirs that he had told a "white lie" in the 1955 article, and would imply that he should receive full credit for the weapon's invention). Hans Bethe, who also participated in the hydrogen bomb project, once drolly said, "For the sake of history, I think it is more precise to say that Ulam is the father, because he provided the seed, and Teller is the mother, because he remained with the child. As for me, I guess I am the midwife."

The dry-fuel device detonated in the "Castle Bravo" shot demonstrated that the Teller–Ulam design could be made deployable, but also that the final fission stage created large amounts of nuclear fallout.

The Teller–Ulam breakthrough (the details of which are still classified) was apparently the separation of the fission and fusion components of the weapon, and the use of the radiation produced by the fission bomb to first compress the fusion fuel before igniting it. Some sources have suggested that Ulam initially proposed compressing the secondary through the shock waves generated by the primary and that it was Teller who then realized that the radiation from the primary would be able to accomplish the task (hence "radiation implosion"). However, compression alone would not have been enough, and the other crucial idea, staging the bomb by separating the primary and secondary, seems to have been exclusively contributed by Ulam. The elegance of the design impressed many scientists, to the point that some who previously wondered if it were feasible suddenly believed it was inevitable and that it would be created by both the US and the Soviet Union. Even Oppenheimer, who was originally opposed to the project, called the idea "technically sweet." The "George" shot of Operation Greenhouse in 1951 tested the basic concept for the first time on a very small scale (and the next shot in the series, "Item," was the first boosted fission weapon), raising expectations to a near certainty that the concept would work.

On November 1, 1952, the Teller–Ulam configuration was tested in the "Ivy Mike" shot at an island in the Enewetak atoll, with a yield of 10.4 megatons of TNT (44 PJ) (over 450 times more powerful than the bomb dropped on Nagasaki during World War II). The device, dubbed the Sausage, used an extra-large fission bomb as a "trigger" and liquid deuterium, kept in its liquid state by 20 short tons (18 tonnes) of cryogenic equipment, as its fusion fuel, and it had a mass of around 80 short tons (73 tonnes) altogether. An initial press blackout was attempted, but it was soon announced that the US had detonated a megaton-range hydrogen bomb.

Like the Bravo test, Castle Romeo "ran away," producing a much higher yield than originally estimated (11 megatons instead of 4), making it the third largest test ever conducted by the US. The Romeo "shrimp" device derived its lithium deuteride from natural instead of "enriched" lithium.

The elaborate refrigeration plant necessary to keep its fusion fuel in a liquid state meant that the "Ivy Mike" device was too heavy and too complex to be of practical use. The first deployable Teller–Ulam weapon in the US would not be developed until 1954, when the liquid deuterium fuel of the "Ivy Mike" device would be replaced with a dry fuel of lithium deuteride and tested in the "Castle Bravo" shot (the device was codenamed the Shrimp). The dry lithium mixture performed much better than had been expected, and the "Castle Bravo" device that was detonated in 1954 had a yield two-and-a-half times greater than had been expected (at 15 Mt (63 PJ), it was also the most powerful bomb ever detonated by the United States). Because much of the yield came from the final fission stage of its uranium-238 tamper, it generated much nuclear fallout, which caused one of the worst nuclear accidents in US history after unforeseen weather patterns blew it over populated areas of the atoll and Japanese fishermen on board the Daigo Fukuryu Maru.

After an initial period focused on making multi-megaton hydrogen bombs, efforts in the United States shifted towards developing miniaturized Teller–Ulam weapons which could outfit Intercontinental Ballistic Missiles and Submarine Launched Ballistic Missiles. The last major design breakthrough in this respect was accomplished by the mid-1970s, when versions of the Teller–Ulam design were created which could fit on the end of a small MIRVed missile.

Soviet research

In the Soviet Union, the scientists working on their own hydrogen bomb project also ran into difficulties in developing a megaton-range fusion weapon. Because Klaus Fuchs had only been at Los Alamos at a very early stage of the hydrogen bomb design (before the Teller–Ulam configuration had been completed), none of his espionage information was of much use, and the Soviet physicists working on the project had to develop their weapon independently.

The first Soviet fusion design, developed by Andrei Sakharov and Vitaly Ginzburg in 1949 (before the Soviets had a working fission bomb), was dubbed the Sloika, after a Russian layered puff pastry, and was not of the Teller–Ulam configuration, but rather used alternating layers of fissile material and lithium deuteride fusion fuel spiked with tritium (this was later dubbed Sakharov's "First Idea"). Though nuclear fusion was technically achieved, it did not have the scaling property of a "staged" weapon, and their first "hydrogen bomb" test, "Joe 4" is no longer considered to be a "true" hydrogen bomb, and is rather considered a hybrid fission/fusion device more similar to a large boosted fission weapon than a Teller–Ulam weapon (though using an order of magnitude more fusion fuel than a boosted weapon). Detonated in 1953 with a yield equivalent to 400 kt (1,700 TJ) (only 15%-20% from fusion), the Sloika device did, however, have the advantage of being a weapon which could actually be delivered to a military target, unlike the "Ivy Mike" device, though it was never widely deployed. Teller had proposed a similar design as early as 1946, dubbed the "Alarm Clock" (meant to "wake up" research into the "Super"), though it was calculated to be ultimately not worth the effort and no prototype was ever developed or tested.

Attempts to use the Sloika design to achieve megaton-range results proved as unfeasible in the Soviet Union as they had in the calculations done in the US, but its value as a practical weapon, since it was 20 times more powerful than their first fission bomb, should not be underestimated. The Soviet physicists calculated that at best the design might yield a single megaton of energy if it was pushed to its limits. After the US tested the "Ivy Mike" device in 1952, proving that a multimegaton bomb could be created, the Soviets searched for an additional design and continued to work on improving the Sloika (the "First Idea"). The "Second Idea", as Sakharov referred to it in his memoirs, was a previous proposal by Ginzburg in November 1948 to use lithium deuteride in the bomb, which would, under bombardment by neutrons, produce tritium. In late 1953, physicist Viktor Davidenko achieved the first breakthrough, that of keeping the primary and the secondary parts of the bombs in separate pieces ("staging"). The next breakthrough was discovered and developed by Sakharov and Yakov Zeldovich, that of using the X-rays from the fission bomb to compress the secondary before fusion ("radiation implosion"), in the spring of 1954. Sakharov's "Third Idea", as the Teller–Ulam design was known in the Soviet Union, was tested in the shot "RDS-37" in November 1955 with a yield of 1.6 Mt (6.7 PJ).

If the Soviets had been able to analyze the fallout data from either the "Ivy Mike" or "Castle Bravo" tests, they might have been able to discern that the fission primary was being kept separate from the fusion secondary, a key part of the Teller–Ulam device, and perhaps that the fusion fuel had been subjected to high amounts of compression before detonation. (De Geer 1991) One of the key Soviet bomb designers, Yuli Khariton, later said:

At that time, Soviet research was not organized on a sufficiently high level, and useful results were not obtained, although radiochemical analyses of samples of fallout could have provided some useful information about the materials used to produce the explosion. The relationship between certain short-lived isotopes formed in the course of thermonuclear reactions could have made it possible to judge the degree of compression of the thermonuclear fuel, but knowing the degree of compression would not have allowed Soviet scientists to conclude exactly how the exploded device had been made, and it would not have revealed its design.

Fireball of the Tsar Bomba (RDS-220), the largest weapon ever detonated (1961). Dropped from over 10 km and detonated at 4 km high, its fireball would have touched the ground were it not for the shock wave from the explosion reflecting off the ground and striking the bottom of the fireball, and nearly reached as high as the altitude of the deploying Tu-95 bomber. The RDS-220 test demonstrated how "staging" could be used to develop arbitrarily powerful weapons.

Sakharov stated in his memoirs that though he and Davidenko had collected fallout dust in cardboard boxes several days after the "Mike" test with the hope of analyzing it for information, a chemist at Arzamas-16 (the Soviet weapons laboratory) had mistakenly poured the concentrate down the drain before it could be analyzed. Only in the fall of 1952 did the Soviet Union set up an organized system for monitoring fallout data. Nonetheless, the memoirs also say that the yield from one of the American tests, which became an international incident involving Japan, told Sakharov that the US design was much better than theirs, and he decided that they must have exploded a separate fission bomb and somehow used its energy to compress the lithium deuteride. But how, he asked himself, could an explosion to one side be used to compress the ball of fusion fuel to within 5% of symmetry? The answer, he realized, was to focus the X-rays.

The Soviets demonstrated the power of the "staging" concept in October 1961 when they detonated the massive and unwieldy Tsar Bomba, a 50 Mt (210 PJ) hydrogen bomb which derived almost 97% of its energy from fusion rather than fission—its uranium tamper was replaced with one of lead shortly before firing, in an effort to prevent excessive nuclear fallout. Had it been fired in its "full" form, it would have yielded at around 100 Mt (420 PJ). The weapon was technically deployable (it was tested by dropping it from a specially modified bomber), but militarily impractical, and was developed and tested primarily as a show of Soviet strength. It is the largest nuclear weapon developed and tested by any country.

Other countries

United Kingdom

The details of the development of the Teller–Ulam design in other countries are less well known. In any event, the United Kingdom initially had difficulty developing it and failed in its first attempt in May 1957 (its "Grapple I" test failed to ignite as planned, but much of its energy came from fusion in its secondary). However, it succeeded in its second attempt in its November 1957 "Grapple X" test, which yielded 1.8 Mt. The British development of the Teller–Ulam design was apparently independent, but it was allowed to share in some US fallout data, which may have been useful. After the successful detonation of a megaton-range device and thus its practical understanding of the Teller–Ulam design "secret," the United States agreed to exchange some of its nuclear designs with the United Kingdom, which led to the 1958 US-UK Mutual Defence Agreement.

China

The People's Republic of China detonated its first device using a Teller–Ulam design in June 1967 ("Test No. 6"), a mere 32 months after detonating its first fission weapon (the shortest fission-to-fusion development yet known), with a yield of 3.3 Mt. Little is known about the Chinese thermonuclear program.

Development of the bomb was led by Yu Min.

France

Very little is known about the French development of the Teller–Ulam design beyond the fact that it detonated a 2.6 Mt device in the "Canopus" test in August 1968.

India

On 11 May 1998, India announced that it had detonated a hydrogen bomb in its Operation Shakti tests ("Shakti I", specifically). Some non-Indian analysts, using seismographic readings, have suggested that this might not be the case, pointing to the low yield of the test, which they say is close to 30 kilotons (as opposed to the 45 kilotons announced by India).

However, some non-Indian experts agree with India. Dr. Harold M. Agnew, former director of the Los Alamos National Laboratory, said that India's assertion of having detonated a staged thermonuclear bomb was believable. The British seismologist Roger Clarke argued that the seismic magnitudes suggested a combined yield of up to 60 kilotonnes, consistent with India's announced total yield of 56 kilotonnes. Professor Jack Evernden, a US seismologist, has always maintained that for correct estimation of yields, one should "account properly for geological and seismological differences between test sites." His estimates of the yields of the Indian tests concur with those of India.

Indian scientists have argued that some international estimations of the yields of India's nuclear tests are unscientific.

India says that the yields of its tests were deliberately kept low to avoid civilian damage and that it can build staged thermonuclear weapons of various yields up to around 200 kilotons on the basis of those tests. Another cited reason for the low yields was that radioactivity released from yields significantly greater than 45 kilotons might not have been contained fully.

Even low-yield tests can have a bearing on thermonuclear capability, as they can provide information on the behavior of primaries without the full ignition of secondaries.

North Korea

North Korea claimed to have tested its miniaturised thermonuclear bomb on January 6, 2016. North Korea's first three nuclear tests (2006, 2009 and 2013) had relatively low yields and do not appear to have been of a thermonuclear weapon design. In 2013, the South Korean Defense Ministry speculated that North Korea might be trying to develop a "hydrogen bomb" and that such a device might be North Korea's next weapons test. In January 2016, North Korea claimed to have successfully tested a hydrogen bomb, but only a magnitude 5.1 seismic event was detected at the time of the test, a similar magnitude to the 2013 test of a 6–9 kt atomic bomb. Those seismic recordings led scientists worldwide to doubt North Korea's claim that a hydrogen bomb was tested, suggesting instead that it was a non-fusion nuclear test. On September 9, 2016, North Korea conducted its fifth nuclear test, which yielded between 10 and 30 kilotons.

On September 3, 2017, North Korea conducted a sixth nuclear test just a few hours after photographs of North Korean leader Kim Jong-un inspecting a device resembling a thermonuclear weapon warhead were released. Initial estimates in the first few days put the yield between 70 and 160 kilotons; over a week later, the estimates were raised to a range of 250 to over 300 kilotons. Jane's Information Group estimated, based mainly on visual analysis of propaganda pictures, that the bomb might weigh between 250 and 360 kilograms (~550 – 790 lbs.).

Public knowledge

Photographs of warhead casings, such as this one of the W80 nuclear warhead, allow for some speculation as to the relative size and shapes of the primaries and the secondaries in US thermonuclear weapons.

The Teller–Ulam design was for many years considered one of the top nuclear secrets, and even today, it is not discussed in any detail by official publications with origins "behind the fence" of classification. The policy of the US Department of Energy (DOE) has always been not to acknowledge when "leaks" occur, since doing so would acknowledge the accuracy of the supposedly leaked information. Aside from images of warhead casings, but never of the "physics package" itself, most information in the public domain about the design is relegated to a few terse statements and the work of a few individual investigators.

Here is a short discussion of the events that led to the formation of the "public" models of the Teller–Ulam design, with some discussions as to their differences and disagreements with those principles outlined above.

Early knowledge

The general principles of the "classical Super" design were public knowledge even before thermonuclear weapons were first tested. After Truman ordered the crash program to develop the hydrogen bomb in January 1950, the Boston Daily Globe published a cutaway description of a hypothetical hydrogen bomb with the caption "Artist's conception of how H-bomb might work using atomic bomb as a mere 'trigger' to generate enough heat to set up the H-bomb's 'thermonuclear fusion' process."

The fact that a large proportion of the yield of a thermonuclear device stems from the fission of a uranium 238 tamper (fission-fusion-fission principle) was revealed when the Castle Bravo test "ran away," producing a much higher yield than originally estimated and creating large amounts of nuclear fallout.

DOE statements

In 1972, the DOE declassified a statement that "The fact that in thermonuclear (TN) weapons, a fission 'primary' is used to trigger a TN reaction in thermonuclear fuel referred to as a 'secondary'", and in 1979, it added: "The fact that, in thermonuclear weapons, radiation from a fission explosive can be contained and used to transfer energy to compress and ignite a physically separate component containing thermonuclear fuel." To the latter sentence, it specified, "Any elaboration of this statement will be classified." (emphasis in original) The only statement that may pertain to the sparkplug was declassified in 1991: "Fact that fissile and/or fissionable materials are present in some secondaries, material unidentified, location unspecified, use unspecified, and weapons undesignated." In 1998, the DOE declassified the statement that "The fact that materials may be present in channels and the term 'channel filler,' with no elaboration," which may refer to the polystyrene foam (or an analogous substance). (DOE 2001, sect. V.C.)

Whether the statements vindicate some or all of the models presented above is up for interpretation, and official US government releases about the technical details of nuclear weapons have been purposely equivocal in the past (such as the Smyth Report). Other information, such as the types of fuel used in some of the early weapons, has been declassified, but precise technical information has not been.

The Progressive case

Most of the current ideas of the Teller–Ulam design came into public awareness after the DOE attempted to censor a magazine article by the anti-weapons activist Howard Morland in 1979 on the "secret of the hydrogen bomb." In 1978, Morland had decided that discovering and exposing the "last remaining secret" would focus attention onto the arms race and allow citizens to feel empowered to question official statements on the importance of nuclear weapons and nuclear secrecy. Most of Morland's ideas about how the weapon worked were compiled from highly accessible sources; the drawings that most inspired his approach came from the Encyclopedia Americana. Morland also interviewed, often informally, many former Los Alamos scientists (including Teller and Ulam, though neither gave him any useful information), and used a variety of interpersonal strategies to encourage informational responses from them (such as asking questions like "Do they still use sparkplugs?" even though he was unaware what the term specifically referred to). (Morland 1981)

Morland eventually concluded that the "secret" was that the primary and secondary were kept separate and that radiation pressure from the primary compressed the secondary before igniting it. When an early draft of the article, to be published in The Progressive magazine, was sent to the DOE after it had fallen into the hands of a professor who was opposed to Morland's goal, the DOE requested that the article not be published and pressed for a temporary injunction. At a short court hearing, the DOE argued that Morland's information was (1) likely derived from classified sources, (2) if not derived from classified sources, itself counted as "secret" information under the "born secret" clause of the 1954 Atomic Energy Act, and (3) dangerous and would encourage nuclear proliferation. Morland and his lawyers disagreed on all points, but the injunction was granted, as the judge in the case thought that it was safer to grant the injunction and allow Morland, et al., to appeal, which they did in United States v. The Progressive, et al. (1979).

Through a variety of more complicated circumstances, the DOE case began to wane, as it became clear that some of the data it attempted to claim as "secret" had been published in a students' encyclopedia a few years earlier. After another hydrogen bomb speculator, Chuck Hansen, had his own ideas about the "secret" (quite different from Morland's) published in a Wisconsin newspaper, the DOE claimed The Progressive case was moot, dropped its suit, and allowed the magazine to publish, which it did in November 1979. By then, however, Morland had changed his opinion of how the bomb worked, suggesting that a foam medium (the polystyrene), rather than radiation pressure, was used to compress the secondary, and that the secondary also contained a sparkplug of fissile material. He published the changes, based in part on the proceedings of the appeals trial, as a short erratum in The Progressive a month later. In 1981, Morland published a book, The Secret That Exploded, about his experience, describing in detail the train of thought that led him to his conclusions about the "secret."

Because the DOE sought to censor Morland's work, one of the few times that it violated its usual approach of not acknowledging "secret" material that had been released, his account is interpreted as being at least partially correct, but to what degree it lacks information or contains incorrect information is not known with any great confidence. The difficulty which a number of nations had in developing the Teller–Ulam design (even when they understood the design, such as with the United Kingdom) makes it somewhat unlikely that the simple information alone is what provides the ability to manufacture thermonuclear weapons. Nevertheless, the ideas put forward by Morland in 1979 have been the basis for all current speculation on the Teller–Ulam design.
