In nuclear physics, the semi-empirical mass formula (SEMF) (sometimes also called the Weizsäcker formula, Bethe–Weizsäcker formula, or Bethe–Weizsäcker mass formula to distinguish it from the Bethe–Weizsäcker process) is used to approximate the mass and various other properties of an atomic nucleus from its number of protons and neutrons. As the name suggests, it is based partly on theory and partly on empirical measurements. The formula represents the liquid drop model proposed by George Gamow,
which can account for most of the terms in the formula and gives rough
estimates for the values of the coefficients. It was first formulated in
1935 by German physicist Carl Friedrich von Weizsäcker
and although refinements have been made to the coefficients over the
years, the structure of the formula remains the same today.
The formula gives a good approximation for atomic masses and
thereby other effects. However, it fails to explain the existence of
lines of greater binding energy at certain numbers of protons and
neutrons. These numbers, known as magic numbers, are the foundation of the nuclear shell model.
The liquid drop model
Illustration of the terms of the semi-empirical mass formula in the liquid drop model of the atomic nucleus.
The liquid drop model was first proposed by George Gamow and further developed by Niels Bohr and John Archibald Wheeler. It treats the nucleus as a drop of incompressible fluid of very high density, held together by the nuclear force (a residual effect of the strong force); in this respect its structure resembles that of a spherical drop of liquid.
While a crude model, the liquid drop model accounts for the spherical
shape of most nuclei and makes a rough prediction of binding energy.
The corresponding mass formula is defined purely in terms of the
numbers of protons and neutrons it contains. The original Weizsäcker
formula defines five terms:
Volume energy, when an assembly of nucleons of the same
size is packed together into the smallest volume, each interior nucleon
has a certain number of other nucleons in contact with it. So, this
nuclear energy is proportional to the volume.
Surface energy corrects for the previous assumption made that
every nucleon interacts with the same number of other nucleons. This
term is negative and proportional to the surface area, and is therefore
roughly equivalent to liquid surface tension.
Coulomb energy, the potential energy from each pair of protons. As this is a repulsive force, the binding energy is reduced.
Asymmetry energy (also called Pauli Energy), which accounts for the Pauli exclusion principle.
Unequal numbers of neutrons and protons imply filling higher energy
levels for one type of particle, while leaving lower energy levels
vacant for the other type.
Pairing energy, which accounts for the tendency of proton pairs and neutron pairs to occur. An even number of particles is more stable than an odd number due to spin coupling.
The formula
The binding energy per nucleon (in MeV), shown as a function of the neutron number N and atomic number Z as given by the semi-empirical mass formula. A dashed line is included to show nuclides that have been discovered by experiment.
The difference between the predicted and the known binding energies, given in kiloelectronvolts. The phenomena present can be explained by further subtle terms, but the mass formula cannot explain the presence of lines, clearly identifiable by sharp peaks in contours.
The mass of an atomic nucleus, for N neutrons, Z protons, and therefore A = N + Z nucleons, is given by

m = Z m_p + N m_n − E_B(N, Z)/c^2,

where m_p and m_n are the rest mass of a proton and a neutron, respectively, and E_B is the binding energy of the nucleus. The semi-empirical mass formula states the binding energy is:

E_B = a_V A − a_S A^{2/3} − a_C Z(Z − 1)/A^{1/3} − a_A (N − Z)^2/A + δ(A, Z).

The term δ(A, Z) is either zero or ±δ_0, depending on the parity of N and Z, where δ_0 = a_P A^{k_P} for some exponent k_P. Note that as A = N + Z, the numerator of the asymmetry term can be rewritten as (A − 2Z)^2.
Each of the terms in this formula has a theoretical basis. The coefficients a_V, a_S, a_C, a_A, and a_P are determined empirically; while they may be derived from experiment, they are typically derived from a least-squares fit to contemporary data. While typically expressed by its basic five terms, further terms exist to explain additional phenomena. Akin to how changing a polynomial fit will change its coefficients, the interplay between these coefficients as new phenomena are introduced is complex; some terms influence each other, whereas the pairing term is largely independent.
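As a concrete illustration, a minimal Python sketch of the formula follows. The coefficient values used here (roughly a_V = 15.8, a_S = 18.3, a_C = 0.714, a_A = 23.2, a_P = 12 MeV, with k_P = −1/2) are only representative of the order of magnitude of typical fits, not a definitive set.

    # Minimal sketch of the semi-empirical mass formula (all energies in MeV).
    # Coefficients are illustrative of typical least-squares fits, not a definitive set.
    A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0

    def pairing(Z, N):
        """delta(A, Z): +delta_0 for even-even, -delta_0 for odd-odd, 0 for odd A."""
        A = Z + N
        delta_0 = A_P * A ** (-0.5)          # k_P = -1/2 parametrization
        if Z % 2 == 0 and N % 2 == 0:
            return +delta_0
        if Z % 2 == 1 and N % 2 == 1:
            return -delta_0
        return 0.0

    def binding_energy(Z, N):
        """Total binding energy E_B(N, Z) in MeV."""
        A = Z + N
        return (A_V * A
                - A_S * A ** (2 / 3)
                - A_C * Z * (Z - 1) / A ** (1 / 3)
                - A_A * (N - Z) ** 2 / A
                + pairing(Z, N))

    # Iron-56 (Z = 26, N = 30) comes out at roughly 8.8 MeV per nucleon.
    print(binding_energy(26, 30) / 56)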
Volume term
The term a_V A is known as the volume term. The volume of the nucleus is proportional to A, so this term is proportional to the volume, hence the name.
The basis for this term is the strong nuclear force. The strong force affects both protons and neutrons, and as expected, this term is independent of Z. Because the number of pairs that can be taken from A particles is A(A − 1)/2, one might expect a term proportional to A^2.
However, the strong force has a very limited range, and a given nucleon
may only interact strongly with its nearest neighbors and next nearest
neighbors. Therefore, the number of pairs of particles that actually
interact is roughly proportional to A, giving the volume term its form.
The coefficient a_V is smaller than the binding energy possessed by the nucleons with respect to their neighbors (E_b), which is of order of 40 MeV. This is because the larger the number of nucleons in the nucleus, the larger their kinetic energy is, due to the Pauli exclusion principle. If one treats the nucleus as a Fermi ball of A nucleons, with equal numbers of protons and neutrons, then the total kinetic energy is (3/5) A ε_F, with ε_F the Fermi energy, which is estimated as 38 MeV. Thus the expected value of a_V in this model is E_b − (3/5) ε_F ≈ 17 MeV, not far from the measured value.
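Numerically, with the values quoted above, this estimate works out as follows (a quick check in Python):

    # Fermi-gas estimate of the volume coefficient, using the values quoted above.
    E_b = 40.0     # MeV, binding of a nucleon with respect to its neighbors
    eps_F = 38.0   # MeV, Fermi energy
    print(E_b - 3 / 5 * eps_F)   # ~17 MeV, of the same order as fitted values of a_V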
Surface term
The term a_S A^{2/3} is known as the surface term. This term, also based on the strong force, is a correction to the volume term.
The volume term suggests that each nucleon interacts with a constant number of nucleons, independent of A.
While this is very nearly true for nucleons deep within the nucleus,
those nucleons on the surface of the nucleus have fewer nearest
neighbors, justifying this correction. This can also be thought of as a
surface tension term, and indeed a similar mechanism creates surface
tension in liquids.
If the volume of the nucleus is proportional to A, then the radius should be proportional to A^{1/3} and the surface area to A^{2/3}. This explains why the surface term is proportional to A^{2/3}. It can also be deduced that a_S should have a similar order of magnitude to a_V.
Coulomb term
The term a_C Z(Z − 1)/A^{1/3} (or a_C Z^2/A^{1/3}) is known as the Coulomb or electrostatic term.
The basis for this term is the electrostatic repulsion between protons. To a very rough approximation, the nucleus can be considered a sphere of uniform charge density. The potential energy of such a charge distribution can be shown to be

E = (3/5) · (1/(4πε_0)) · Q^2/R,

where Q is the total charge and R is the radius of the sphere. The value of a_C can be approximately calculated by using this equation to calculate the potential energy, using an empirical nuclear radius of R ≈ r_0 A^{1/3} and Q = Ze. However, because electrostatic repulsion will only exist for more than one proton, Z^2 becomes Z(Z − 1):

a_C = (3/5) · e^2/(4πε_0 r_0) = (3/5) · αħc/r_0 = (3/5) · α · (λ̄_p/r_0) · m_p c^2,

where α is the fine-structure constant and r_0 A^{1/3} is the radius of the nucleus, giving r_0 to be approximately 1.25 femtometers; λ̄_p is the proton reduced Compton wavelength, and m_p is the proton mass. This gives a_C an approximate theoretical value of 0.691 MeV, not far from the measured value.
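The numerical value quoted above is easy to reproduce (a quick check in Python, using standard values of the constants):

    # a_C = (3/5) * alpha * (hbar c) / r_0, evaluated with standard constants.
    alpha = 1 / 137.036   # fine-structure constant
    hbar_c = 197.327      # MeV·fm
    r_0 = 1.25            # fm, empirical nuclear radius parameter
    print(3 / 5 * alpha * hbar_c / r_0)   # ~0.69 MeV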
Asymmetry term
The term a_A (N − Z)^2/A is known as the asymmetry term (or Pauli term).
The theoretical justification for this term is more complex. The Pauli exclusion principle states that no two identical fermions can occupy exactly the same quantum state
in an atom. At a given energy level, there are only finitely many
quantum states available for particles. What this means in the nucleus
is that as more particles are "added", these particles must occupy
higher energy levels, increasing the total energy of the nucleus (and
decreasing the binding energy). Note that this effect is not based on
any of the fundamental forces (gravitational, electromagnetic, etc.), only the Pauli exclusion principle.
Protons and neutrons, being distinct types of particles, occupy
different quantum states. One can think of two different "pools" of
states, one for protons and one for neutrons. Now, for example, if there
are significantly more neutrons than protons in a nucleus, some of the
neutrons will be higher in energy than the available states in the
proton pool. If we could move some particles from the neutron pool to
the proton pool, in other words change some neutrons into protons, we
would significantly decrease the energy. The imbalance between the
number of protons and neutrons causes the energy to be higher than it
needs to be, for a given number of nucleons. This is the basis for the asymmetry term.
The actual form of the asymmetry term can again be derived by modeling the nucleus as a Fermi ball of N_p protons and N_n neutrons. Its total kinetic energy is

E_k = (3/5) (N_p ε_F,p + N_n ε_F,n),

where ε_F,p and ε_F,n are the Fermi energies of the protons and neutrons. Since these are proportional to N_p^{2/3} and N_n^{2/3}, respectively, one gets

E_k = C (N_p^{5/3} + N_n^{5/3})

for some constant C.
The leading terms in the expansion in the difference N_n − N_p are then

E_k = (C/2^{2/3}) (A^{5/3} + (5/9) (N_n − N_p)^2 A^{−1/3}) + O((N_n − N_p)^4).

At the zeroth order in the expansion the kinetic energy is just the overall Fermi energy ε_F ≡ ε_F,p = ε_F,n multiplied by (3/5) A. Thus we get

E_k = (3/5) ε_F A + (1/3) ε_F (N_n − N_p)^2/A + O((N_n − N_p)^4).
The first term contributes to the volume term in the semi-empirical
mass formula, and the second term is minus the asymmetry term (remember
the kinetic energy contributes to the total binding energy with a negative sign).
ε_F is 38 MeV, so calculating a_A from the equation above, we get only half the measured value. The
discrepancy is explained by our model not being accurate: nucleons in
fact interact with each other, and are not spread evenly across the
nucleus. For example, in the shell model, a proton and a neutron with overlapping wavefunctions will have a greater strong interaction
between them and stronger binding energy. This makes it energetically
favourable (i.e. having lower energy) for protons and neutrons to have
the same quantum numbers (other than isospin), and thus increase the energy cost of asymmetry between them.
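Numerically, the non-interacting Fermi-gas estimate a_A ≈ ε_F/3 that follows from the expansion above is indeed about half of typical fitted values (a quick check):

    # Fermi-gas estimate of the asymmetry coefficient from the expansion above: a_A ≈ eps_F / 3.
    eps_F = 38.0        # MeV
    print(eps_F / 3)    # ~12.7 MeV, roughly half of fitted values around 23 MeV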
One can also understand the asymmetry term intuitively, as follows. It should be dependent on the absolute difference |N − Z|, and the form (N − Z)^2/A is simple and differentiable, which is important for certain applications of the formula. In addition, small differences between Z and N do not have a high energy cost. The A in the denominator reflects the fact that a given difference |N − Z| is less significant for larger values of A.
Pairing term
Magnitude
of the pairing term in the total binding energy for even-even and
odd-odd nuclei, as a function of mass number. Two fits are shown (blue
and red line). The pairing term (positive for even-even and negative for
odd-odd nuclei) was derived from the binding energy data in: G. Audi et
al., 'The AME2012 atomic mass evaluation', in Chinese Physics C 36
(2012/12) pp. 1287–1602.
The term δ(A, Z) is known as the pairing term (possibly also known as the pairwise interaction). This term captures the effect of spin-coupling. It is given by:

δ(A, Z) = +δ_0 for even Z, N (even A); 0 for odd A; −δ_0 for odd Z, N (even A),

where δ_0 is found empirically to have a value of about 1000 keV, slowly decreasing with mass number A.
The binding energy may be increased by converting one of the odd protons or neutrons into a neutron or proton, so the odd nucleon can form a pair with its odd neighbour, forming an even Z, N. The pair have overlapping wave functions and sit very close together with a bond stronger than any other configuration.
When the pairing term is substituted into the binding energy equation,
for even Z, N, the pairing term adds binding energy and for odd Z, N the
pairing term removes binding energy.
The dependence on mass number is commonly parametrized as δ_0 = a_P A^{k_P}.
The value of the exponent kP is determined
from experimental binding energy data. In the past its value was often
assumed to be −3/4, but modern experimental data indicate that a value
of −1/2 is nearer the mark:
δ_0 = a_P A^{−1/2} or δ_0 = a_P A^{−3/4}.
Due to the Pauli exclusion principle
the nucleus would have a lower energy if the number of protons with
spin up were equal to the number of protons with spin down. This is also
true for neutrons. Only if both Z and N are even can both
protons and neutrons have equal numbers of spin up and spin down
particles. This is a similar effect to the asymmetry term.
The factor
is not easily explained theoretically. The Fermi ball calculation we
have used above, based on the liquid drop model but neglecting
interactions, will give an
dependence, as in the asymmetry term. This means that the actual effect
for large nuclei will be larger than expected by that model. This
should be explained by the interactions between nucleons; For example,
in the shell model, two protons with the same quantum numbers (other than spin) will have completely overlapping wavefunctions and will thus have greater strong interaction
between them and stronger binding energy. This makes it energetically
favourable (i.e. having lower energy) for protons to form pairs of
opposite spin. The same is true for neutrons.
Calculating the coefficients
The
coefficients are calculated by fitting to experimentally measured
masses of nuclei. Their values can vary depending on how they are fitted
to the data and which unit is used to express the mass. Several
examples are as shown below.
The formula does not consider the internal shell structure of the nucleus.
The semi-empirical mass formula therefore provides a good fit to
heavier nuclei, and a poor fit to very light nuclei, especially 4He. For light nuclei, it is usually better to use a model that takes this shell structure into account.
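A minimal sketch of how such a least-squares fit might be set up is shown below. It assumes the proton numbers, neutron numbers, and measured binding energies have already been loaded into NumPy arrays Z, N, and B_exp (for example from an atomic mass evaluation such as AME); loading the data is not shown, and the starting values are only rough guesses.

    # Sketch: least-squares fit of the five SEMF coefficients to measured binding energies.
    # Assumes Z, N, B_exp are NumPy arrays of proton numbers, neutron numbers and
    # experimental binding energies in MeV; obtaining them is omitted here.
    import numpy as np
    from scipy.optimize import curve_fit

    def semf(ZN, aV, aS, aC, aA, aP):
        Z, N = ZN
        A = Z + N
        parity = np.where((Z % 2 == 0) & (N % 2 == 0), 1.0,
                          np.where((Z % 2 == 1) & (N % 2 == 1), -1.0, 0.0))
        return (aV * A - aS * A ** (2 / 3) - aC * Z * (Z - 1) / A ** (1 / 3)
                - aA * (N - Z) ** 2 / A + aP * parity / np.sqrt(A))

    # coefficients, _ = curve_fit(semf, (Z, N), B_exp, p0=[16, 18, 0.7, 23, 12])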
Examples of consequences of the formula
By maximizing Eb(A,Z) with respect to Z, one would find the best neutron–proton ratio N/Z for a given atomic weight A. We get

N/Z ≈ 1 + (a_C / (2 a_A)) A^{2/3}.
This is roughly 1 for light nuclei, but for heavy nuclei the ratio grows in good agreement with experiment.
By substituting the above value of Z back into Eb, one obtains the binding energy as a function of the atomic weight, Eb(A).
Maximizing Eb(A) /A with respect to A gives the nucleus which is most strongly bound, i.e. most stable. The value we get is A = 63 (copper), close to the measured values of A = 62 (nickel) and A = 58 (iron).
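Both results are easy to reproduce numerically; the sketch below reuses the illustrative coefficient values from the earlier snippet, so the exact optimum depends on that choice.

    # Sketch: Z that maximizes E_B for a given A, and the A with the largest E_B/A,
    # using the same illustrative coefficients as the earlier snippet.
    def E_B(Z, N):
        A = Z + N
        if Z % 2 == 0 and N % 2 == 0:
            delta = 12.0 / A ** 0.5
        elif Z % 2 == 1 and N % 2 == 1:
            delta = -12.0 / A ** 0.5
        else:
            delta = 0.0
        return (15.8 * A - 18.3 * A ** (2 / 3)
                - 0.714 * Z * (Z - 1) / A ** (1 / 3)
                - 23.2 * (N - Z) ** 2 / A + delta)

    def best_Z(A):
        return max(range(1, A), key=lambda Z: E_B(Z, A - Z))

    print(best_Z(56), best_Z(238))   # about 26 and about 92: N/Z grows with A
    print(max(range(10, 120), key=lambda A: E_B(best_Z(A), A - best_Z(A)) / A))  # near A ≈ 60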
The liquid drop model also allows the computation of fission barriers for nuclei, which determine the stability of a nucleus against spontaneous fission. It was originally speculated that elements beyond atomic number 104 could not exist, as they would undergo fission with very short half-lives, though this formula did not consider stabilizing effects of closed nuclear shells. A modified formula considering shell effects reproduces known data and the predicted island of stability
(in which fission barriers and half-lives are expected to increase,
reaching a maximum at the shell closures), though also suggests a
possible limit to existence of superheavy nuclei beyond Z = 120 and N = 184.
Spacecraft electric propulsion (or just electric propulsion) is a type of spacecraft propulsion technique that uses electrostatic or electromagnetic fields to accelerate mass to high speed and thus generate thrust to modify the velocity of a spacecraft in orbit.
Electric thrusters typically use much less propellant than chemical rockets because they have a higher exhaust speed (operate at a higher specific impulse) than chemical rockets.
Due to limited electric power the thrust is much weaker compared to
chemical rockets, but electric propulsion can provide thrust for a
longer time.
Electric propulsion was first successfully demonstrated by NASA and is now a mature and widely used technology on spacecraft. American and Russian satellites have used electric propulsion for decades. As of 2019, over 500 spacecraft operated throughout the Solar System use electric propulsion for station keeping, orbit raising, or primary propulsion. In the future, the most advanced electric thrusters may be able to impart a delta-v of 100 km/s (62 mi/s), which is enough to take a spacecraft to the outer planets of the Solar System (with nuclear power), but is insufficient for interstellar travel. An electric rocket with an external power source (for example, power beamed by laser to its photovoltaic panels) is a theoretical possibility for interstellar flight. However, electric propulsion is not suitable for launches from the Earth's surface, as it offers too little thrust.
On a journey to Mars, an electrically powered ship might be able
to carry 70% of its initial mass to the destination, while a chemical
rocket could carry only a few percent.
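As a rough illustration of why, the Tsiolkovsky rocket equation can be evaluated for an assumed mission delta-v with representative specific impulses; the 15 km/s delta-v and the two Isp values below are illustrative assumptions, not figures from any particular mission study.

    # Illustrative rocket-equation comparison; all numbers are assumptions, not mission data.
    import math

    g0 = 9.81              # m/s^2
    delta_v = 15_000.0     # m/s, assumed total mission delta-v
    for name, isp in [("chemical, Isp ~ 450 s", 450.0), ("electric, Isp ~ 3000 s", 3000.0)]:
        mass_fraction = math.exp(-delta_v / (isp * g0))   # final mass / initial mass
        print(f"{name}: about {mass_fraction:.0%} of the initial mass arrives")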
History
The idea of electric propulsion for spacecraft was introduced in 1911 by Konstantin Tsiolkovsky. Earlier, Robert Goddard had noted such a possibility in his personal notebook.
Electrically powered propulsion with a nuclear reactor was considered by Tony Martin for the interstellar Project Daedalus in 1973, but the approach was rejected because of its thrust profile, the weight of equipment needed to convert nuclear energy into electricity, and the resulting small acceleration, which would have taken a century to reach the desired speed.
The first demonstration of electric propulsion was an ion engine carried on board the NASA SERT-1 (Space Electric Rocket Test) spacecraft. It launched on 20 July 1964 and operated for 31 minutes.
A follow-up mission launched on 3 February 1970, SERT-2. It carried two
ion thrusters, one operated for more than five months and the other for
almost three months.
These types of rocket-like reaction engines use electric energy to obtain thrust from propellant. Unlike rocket engines, these kinds of engines do not require nozzles, and thus are not considered true rockets.
Electric propulsion thrusters for spacecraft may be grouped into
three families based on the type of force used to accelerate the ions of
the plasma:
If the acceleration is caused mainly by the Coulomb force (i.e. application of a static electric field in the direction of the acceleration), the device is considered electrostatic.
The electrothermal category groups devices that use electromagnetic fields to generate a plasma
to increase the temperature of the bulk propellant. The thermal energy
imparted to the propellant gas is then converted into kinetic energy by a
nozzle
of either solid material or magnetic fields. Low molecular weight gases
(e.g. hydrogen, helium, ammonia) are preferred propellants for this
kind of system.
An electrothermal engine uses a nozzle to convert heat into
linear motion, so it is a true rocket even though the energy producing
the heat comes from an external source.
Performance of electrothermal systems in terms of specific impulse (Isp) is modest (500 to ~1000 seconds) but exceeds that of cold gas thrusters, monopropellant rockets, and even most bipropellant rockets. In the USSR, electrothermal engines entered use in 1971; the Soviet "Meteor-3", "Meteor-Priroda", "Resurs-O" satellite series and the Russian "Elektro" satellite are equipped with them. Electrothermal systems by Aerojet (MR-510) are currently used on Lockheed Martin A2100 satellites using hydrazine as a propellant.
Electromagnetic thrusters accelerate ions either by the Lorentz force or by the effect of electromagnetic fields where the electric field is not in the direction of the acceleration.
Electrodynamic tethers are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electric energy, or as motors, converting electric energy to kinetic energy.
Electric potential is generated across a conductive tether by its
motion through the Earth's magnetic field. The choice of the metal conductor to be used in an electrodynamic tether is determined by factors such as electrical conductivity and density. Secondary factors, depending on the application, include cost, strength, and melting point.
Controversial
Some proposed propulsion methods apparently violate currently understood laws of physics.
Electric
propulsion systems can be characterized as either steady (continuous
firing for a prescribed duration) or unsteady (pulsed firings
accumulating to a desired impulse). These classifications can be applied to all types of propulsion engines.
Electrically powered rocket engines provide lower thrust compared to chemical rockets by several orders of magnitude because of the limited electrical power available in a spacecraft.
A chemical rocket imparts energy to the combustion products directly,
whereas an electrical system requires several steps. However, the higher exhaust velocity and lower reaction mass expended for the same thrust allow electric rockets to run on less
fuel. This differs from the typical chemical-powered spacecraft, where
the engines require more fuel, requiring the spacecraft to mostly follow
an inertial trajectory.
When near a planet, low-thrust propulsion may not offset the
gravitational force. An electric rocket engine cannot provide enough
thrust to lift the vehicle from a planet's surface, but a low thrust
applied for a long interval can allow a spacecraft to maneuver near a
planet.
Artist's conception of the NASA reference design for the Project Orion starship powered by nuclear propulsion
Project Orion was a study conducted between the 1950s and 1960s by the United States Air Force, DARPA, and NASA for the purpose of identifying the efficacy of a starship directly propelled by a series of explosions of atomic bombs behind the craft via nuclear pulse propulsion.
Early versions of this vehicle were proposed to take off from the
ground; later versions were presented for use only in space. Six
non-nuclear tests were conducted using models. The project was
eventually abandoned for multiple reasons, such as the Partial Test Ban Treaty, which banned nuclear explosions in space, as well as concerns over nuclear fallout.
The idea of rocket propulsion by combustion of explosive substance was first proposed by Russian explosives expert Nikolai Kibalchich in 1881, and in 1891 similar ideas were developed independently by German engineer Hermann Ganswindt. Robert A. Heinlein mentions powering spaceships with nuclear bombs in his 1940 short story "Blowups Happen." Real life proposals of nuclear propulsion were first made by Stanislaw Ulam in 1946, and preliminary calculations were made by F. Reines and Ulam in a Los Alamos memorandum dated 1947. The actual project, initiated in 1958, was led by Ted Taylor at General Atomics and physicist Freeman Dyson, who at Taylor's request took a year away from the Institute for Advanced Study in Princeton to work on the project.
The Orion concept offered high thrust and high specific impulse,
or propellant efficiency, at the same time. The unprecedented extreme
power requirements for doing so would be met by nuclear explosions, of
such power relative to the vehicle's mass as to be survived only by
using external detonations without attempting to contain them in
internal structures. As a qualitative comparison, traditional chemical rockets—such as the Saturn V that took the Apollo program to the Moon—produce high thrust with low specific impulse, whereas electric ion engines
produce a small amount of thrust very efficiently. Orion would have
offered performance greater than the most advanced conventional or
nuclear rocket engines then under consideration. Supporters of Project
Orion felt that it had potential for cheap interplanetary travel, but it lost political approval over concerns about fallout from its propulsion.
The Partial Test Ban Treaty of 1963 is generally acknowledged to have ended the project. However, from Project Longshot to Project Daedalus, Mini-Mag Orion,
and other proposals which reach engineering analysis at the level of
considering thermal power dissipation, the principle of external nuclear pulse propulsion
to maximize survivable power has remained common among serious concepts
for interstellar flight without external power beaming and for very
high-performance interplanetary flight. Such later proposals have
tended to modify the basic principle by envisioning equipment driving
detonation of much smaller fission or fusion pellets, in contrast to
Project Orion's larger nuclear pulse units (full nuclear bombs) based on
less speculative technology.
Basic principles
The Orion Spacecraft – key components
The Orion nuclear pulse drive combines a very high exhaust velocity,
from 19 to 31 km/s (12 to 19 mi/s) in typical interplanetary designs,
with meganewtons of thrust.
Many spacecraft propulsion drives can achieve one of these or the
other, but nuclear pulse rockets are the only proposed technology that
could potentially meet the extreme power requirements to deliver both at
once (see spacecraft propulsion for more speculative systems).
Specific impulse (Isp)
measures how much thrust can be derived from a given mass of fuel, and
is a standard figure of merit for rocketry. For any rocket propulsion,
since the kinetic energy of exhaust goes up with velocity squared (kinetic energy = ½ mv2), whereas the momentum and thrust go up with velocity linearly (momentum = mv), obtaining a particular level of thrust (as in a number of g acceleration) requires far more power each time that exhaust velocity and Isp are much increased in a design goal. (For instance, the most fundamental reason that current and proposed electric propulsion systems of high Isp tend to be low thrust is due to their limits on available power. Their thrust is actually inversely proportional to Isp if power going into exhaust is constant or at its limit from heat dissipation needs or other engineering constraints.)
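That inverse relation can be made explicit: for an idealized thruster, jet power is P_jet = ½ ṁ v_e² and thrust is F = ṁ v_e, so at fixed power F = 2ηP/v_e = 2ηP/(g₀ Isp). The sketch below assumes a 10 kW power budget and 60% efficiency purely for illustration.

    # At fixed power, thrust falls as Isp rises: F = 2 * eta * P / (g0 * Isp).
    g0 = 9.81        # m/s^2
    P = 10_000.0     # W, assumed available electrical power
    eta = 0.6        # assumed overall efficiency
    for isp in (500, 1000, 3000, 10_000):            # seconds
        thrust_mN = 2 * eta * P / (g0 * isp) * 1000  # millinewtons
        print(f"Isp = {isp:>6} s  ->  thrust ~ {thrust_mN:.0f} mN")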
The Orion concept detonates nuclear explosions externally at a rate of
power release which is beyond what nuclear reactors could survive
internally with known materials and design.
Since weight is no limitation, an Orion craft can be extremely
robust. An uncrewed craft could tolerate very large accelerations,
perhaps 100 g. A human-crewed Orion, however, must use some sort of damping system
behind the pusher plate to smooth the near instantaneous acceleration
to a level that humans can comfortably withstand – typically about 2 to 4
g.
The high performance depends on the high exhaust velocity, in
order to maximize the rocket's force for a given mass of propellant. The
velocity of the plasma debris is proportional to the square root of the
change in the temperature (Tc) of the nuclear
fireball. Since such fireballs typically achieve ten million degrees
Celsius or more in less than a millisecond, they create very high
velocities. However, a practical design must also limit the destructive
radius of the fireball. The diameter of the nuclear fireball is
proportional to the square root of the bomb's explosive yield.
The shape of the bomb's reaction mass is critical to efficiency.
The original project designed bombs with a reaction mass made of tungsten. The bomb's geometry and materials focused the X-rays and plasma from the core of nuclear explosive to hit the reaction mass. In effect each bomb would be a nuclear shaped charge.
A bomb with a cylinder of reaction mass expands into a flat,
disk-shaped wave of plasma when it explodes. A bomb with a disk-shaped
reaction mass expands into a far more efficient cigar-shaped wave of
plasma debris. The cigar shape focuses much of the plasma to impinge
onto the pusher-plate. For greatest mission efficiency the rocket equation demands that the greatest fraction of the bomb's explosive force be directed at the spacecraft, rather than being spent isotropically.
The maximum effective specific impulse, Isp, of an Orion nuclear pulse drive generally is equal to:

Isp = C0 · Ve / gn,
where C0 is the collimation factor (what fraction
of the explosion plasma debris will actually hit the impulse absorber
plate when a pulse unit explodes), Ve is the nuclear pulse unit plasma debris velocity, and gn is the standard acceleration of gravity (9.81 m/s2; this factor is not necessary if Isp
is measured in N·s/kg or m/s). A collimation factor of nearly 0.5 can
be achieved by matching the diameter of the pusher plate to the diameter
of the nuclear fireball created by the explosion of a nuclear pulse
unit.
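Using the 19 to 31 km/s effective exhaust velocities quoted earlier for typical interplanetary designs (taken here as already including the collimation factor, which is an assumption), the corresponding specific impulses are roughly 1900 to 3200 seconds:

    # Isp corresponding to the 19-31 km/s effective exhaust velocities quoted above,
    # treating those figures as C0 * Ve (collimation already included) -- an assumption.
    g_n = 9.81                          # m/s^2, standard gravity
    for v_eff in (19_000.0, 31_000.0):  # m/s
        print(f"{v_eff / 1000:.0f} km/s  ->  Isp ~ {v_eff / g_n:.0f} s")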
The smaller the bomb, the smaller each impulse will be, so the higher the rate of impulses and the more of them will be needed to achieve orbit. Smaller impulses also mean less g shock on the pusher plate and less need for damping to smooth out the acceleration.
The optimal Orion drive bomblet yield (for the human crewed 4,000
ton reference design) was calculated to be in the region of 0.15 kt,
with approx 800 bombs needed to orbit and a bomb rate of approx 1 per
second.
Sizes of vehicles
The following can be found in George Dyson's book. The figures for the comparison with Saturn V are taken from this section and converted from metric (kg) to US short tons (abbreviated "t" here).
Image
of the smallest Orion vehicle extensively studied, which could have had
a payload of around 100 tonnes in an 8 crew round trip to Mars. On the left, the 10 meter diameter Saturn V
"Boost-to-orbit" variant, requiring in-orbit assembly before the Orion
vehicle would be capable of moving under its own propulsion system. On
the far right, the fully assembled "lofting" configuration, in which the
spacecraft would be lifted high into the atmosphere before pulse
propulsion began. As depicted in the 1964 NASA document "Nuclear Pulse Space Vehicle Study Vol III - Conceptual Vehicle Designs and Operational Systems."
In late 1958 to early 1959, it was realized that the smallest
practical vehicle would be determined by the smallest achievable bomb
yield. The use of 0.03 kt (sea-level yield) bombs would give vehicle
mass of 880 tons. However, this was regarded as too small for anything
other than an orbital test vehicle and the team soon focused on a 4,000
ton "base design".
At that time, the details of small bomb designs were shrouded in
secrecy. Many Orion design reports had all details of bombs removed
before release. Contrast the above details with the 1959 report by
General Atomics, which explored the parameters of three different sizes of hypothetical Orion spacecraft:

                        "Satellite" Orion    "Midrange" Orion    "Super" Orion
Ship diameter           17–20 m              40 m                400 m
Ship mass               300 t                1000–2000 t         8,000,000 t
Number of bombs         540                  1080                1080
Individual bomb mass    0.22 t               0.37–0.75 t         3000 t
The biggest design above is the "super" Orion design; at 8 million tonnes, it could easily be a city. In interviews, the designers contemplated the large ship as a possible interstellar ark.
This extreme design could be built with materials and techniques that
could be obtained in 1958 or were anticipated to be available shortly
after.
Most of the three thousand tonnes of each of the "super" Orion's propulsion units would be inert material such as polyethylene, or boron
salts, used to transmit the force of the propulsion unit's detonation to
the Orion's pusher plate, and absorb neutrons to minimize fallout. One
design proposed by Freeman Dyson for the "Super Orion" called for the
pusher plate to be composed primarily of uranium or a transuranic element so that upon reaching a nearby star system the plate could be converted to nuclear fuel.
Theoretical applications
The
Orion nuclear pulse rocket design has extremely high performance. Orion
nuclear pulse rockets using nuclear fission type pulse units were
originally intended for use on interplanetary space flights.
Missions that were designed for an Orion vehicle in the original
project included single stage (i.e., directly from Earth's surface) to
Mars and back, and a trip to one of the moons of Saturn.
Freeman Dyson performed the first analysis of what kinds of Orion missions were possible to reach Alpha Centauri, the nearest star system to the Sun. His 1968 paper "Interstellar Transport" (Physics Today, October 1968, pp. 41–45)
retained the concept of large nuclear explosions but Dyson moved away
from the use of fission bombs and considered the use of one megaton deuterium
fusion explosions instead. His conclusions were simple: the debris
velocity of fusion explosions was probably in the 3000–30,000 km/s range
and the reflecting geometry of Orion's hemispherical pusher plate would
reduce that range to 750–15,000 km/s.
To estimate the upper and lower limits of what could be done
using contemporary technology (in 1968), Dyson considered two starship
designs. The more conservative energy limited pusher plate design simply had to absorb all the thermal energy of each impinging explosion (4×10^15 joules, half of which would be absorbed by the pusher plate) without
melting. Dyson estimated that if the exposed surface consisted of copper
with a thickness of 1 mm, then the diameter and mass of the
hemispherical pusher plate would have to be 20 kilometers and 5 million
tonnes, respectively. 100 seconds would be required to allow the copper
to radiatively cool before the next explosion. It would then take on the
order of 1000 years for the energy-limited heat sink Orion design to reach Alpha Centauri.
In order to improve on this performance while reducing size and cost, Dyson also considered an alternative momentum limited
pusher plate design where an ablation coating of the exposed surface is
substituted to get rid of the excess heat. The limitation is then set
by the capacity of shock absorbers to transfer momentum from the
impulsively accelerated pusher plate to the smoothly accelerated
vehicle. Dyson calculated that the properties of available materials
limited the velocity transferred by each explosion to ~30 meters per
second independent of the size and nature of the explosion. If the
vehicle is to be accelerated at 1 Earth gravity (9.81 m/s²) with this velocity transfer, then the pulse rate is one explosion every three seconds. The dimensions and performance of Dyson's vehicles are given in the following table:

                                     "Energy Limited" Orion                                 "Momentum Limited" Orion
Ship diameter                        20,000 m                                               100 m
Mass of empty ship                   10,000,000 t (incl. 5,000,000 t copper hemisphere)     100,000 t (incl. 50,000 t structure + payload)
Number of bombs = total bomb mass
(each 1 Mt bomb weighs 1 tonne)
Later studies indicate that the top cruise velocity that can theoretically be achieved are a few percent of the speed of light (0.08–0.1c). An atomic (fission) Orion can achieve perhaps 9%–11% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range and pure Matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% to 80% of the speed of light. In each case saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail
to decelerate the spacecraft as it approaches its destination has been
discussed as an alternative to using propellant; this would allow the
ship to travel near the maximum theoretical velocity.
At 0.1c, Orion thermonuclear starships would require a
flight time of at least 44 years to reach Alpha Centauri, not counting
time needed to reach that speed (about 36 days at constant acceleration
of 1g or 9.8 m/s2). At 0.1c, an Orion starship would require 100 years to travel 10 light years. The astronomer Carl Sagan suggested that this would be an excellent use for current stockpiles of nuclear weapons.
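These figures are simple to reproduce with a non-relativistic estimate:

    # Non-relativistic check of the acceleration and cruise times quoted above.
    c = 2.998e8        # m/s
    g = 9.8            # m/s^2
    v = 0.1 * c        # cruise speed of 0.1c
    year = 365.25 * 86400
    ly = 9.461e15      # metres in one light year
    print(v / g / 86400)          # ~35 days to reach 0.1c at 1 g
    print(4.37 * ly / v / year)   # ~44 years to Alpha Centauri (about 4.37 ly away)
    print(10 * ly / v / year)     # ~100 years to cover 10 light years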
A concept similar to Orion was designed by the British Interplanetary Society (B.I.S.) in the years 1973–1974. Project Daedalus was to be a robotic interstellar probe to Barnard's Star that would travel at 12% of the speed of light. In 1989, a similar concept was studied by the U.S. Navy and NASA in Project Longshot.
Both of these concepts require significant advances in fusion
technology, and therefore cannot be built at present, unlike Orion.
The expense
of the fissionable materials required was thought to be high, until the
physicist Ted Taylor showed that with the right designs for explosives,
the amount of fissionables used on launch was close to constant for
every size of Orion from 2,000 tons to 8,000,000 tons. The larger bombs
used more explosives to super-compress the fissionables, increasing
efficiency. The extra debris from the explosives also serves as
additional propulsion mass.
The bulk of costs for historical nuclear defense programs have
been for delivery and support systems, rather than for production cost
of the bombs directly (with warheads being 7% of the U.S. 1946–1996
expense total according to one study).
After initial infrastructure development and investment, the marginal
cost of additional nuclear bombs in mass production can be relatively
low. In the 1980s, some U.S. thermonuclear warheads had $1.1 million
estimated cost each ($630 million for 560).
For the perhaps simpler fission pulse units to be used by one Orion
design, a 1964 source estimated a cost of $40,000 or less each in mass
production, which would be up to approximately $0.3 million each in
modern-day dollars adjusted for inflation.
Project Daedalus later proposed fusion explosives (deuterium or tritium pellets) detonated by electron beam inertial confinement. This is the same principle behind inertial confinement fusion. Theoretically, it could be scaled down to far smaller explosions, and require small shock absorbers.
Vehicle architecture
A design for the Orion propulsion module
From 1957 to 1964 this information was used to design a spacecraft
propulsion system called Orion, in which nuclear explosives would be
thrown behind a pusher-plate mounted on the bottom of a spacecraft and
exploded. The shock wave and radiation from the detonation would impact
against the underside of the pusher plate, giving it a powerful push.
The pusher plate would be mounted on large two-stage shock absorbers that would smoothly transmit acceleration to the rest of the spacecraft.
During take-off, there were concerns of danger from fluidic
shrapnel being reflected from the ground. One proposed solution was to
use a flat plate of conventional explosives spread over the pusher
plate, and detonate this to lift the ship from the ground before going
nuclear. This would lift the ship far enough into the air that the first
focused nuclear blast would not create debris capable of harming the
ship.
A design for a pulse unit
A preliminary design for a nuclear pulse unit was produced. It
proposed the use of a shaped-charge fusion-boosted fission explosive.
The explosive was wrapped in a beryllium oxide channel filler, which was surrounded by a uranium radiation mirror. The mirror and channel filler were open ended, and in this open end a flat plate of tungsten
propellant was placed. The whole unit was built into a can with a
diameter no larger than 6 inches (150 mm) and weighed just over 300
pounds (140 kg) so it could be handled by machinery scaled-up from a
soft-drink vending machine; Coca-Cola was consulted on the design.
At 1 microsecond after ignition the gamma bomb plasma and
neutrons would heat the channel filler and be somewhat contained by the
uranium shell. At 2–3 microseconds the channel filler would transmit
some of the energy to the propellant, which vaporized. The flat plate of
propellant formed a cigar-shaped explosion aimed at the pusher plate.
The plasma would cool to 25,200 °F (14,000 °C) as it traversed the 82-foot (25 m) distance to the pusher plate and then reheat to 120,600 °F (67,000 °C) when, at about 300 microseconds, it hit the pusher plate and was recompressed. At this temperature the plasma emits ultraviolet light, which is poorly transmitted through most plasmas. This helps keep the pusher plate cool. The cigar-shaped distribution profile and low density of the plasma reduce the instantaneous shock to the pusher plate.
Because the momentum transferred by the plasma is greatest in the
center, the pusher plate's thickness would decrease by approximately a
factor of 6 from the center to the edge. This ensures the change in
velocity is the same for the inner and outer parts of the plate.
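A quick consistency check on the timing quoted above: plasma that crosses the roughly 25 m standoff distance in about 300 microseconds implies a debris speed on the order of 80 km/s (this back-of-the-envelope figure assumes the plasma leaves the pulse unit essentially at detonation).

    # Implied debris speed from the figures above: ~25 m covered in ~300 microseconds.
    print(25 / 300e-6)   # ~83,000 m/s, i.e. on the order of 80 km/s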
At low altitudes, where the surrounding air is dense, gamma scattering could potentially harm the crew without a radiation shield; a radiation refuge would also be necessary on long missions to survive solar flares. Radiation shielding effectiveness increases exponentially with shield thickness; see gamma ray
for a discussion of shielding. On ships with a mass greater than
2,200,000 pounds (1,000,000 kg) the structural bulk of the ship, its
stores along with the mass of the bombs and propellant, would provide
more than adequate shielding for the crew. Stability was initially
thought to be a problem due to inaccuracies in the placement of the
bombs, but it was later shown that the effects would cancel out.
Numerous model flight tests, using conventional explosives, were conducted at Point Loma, San Diego in 1959. On November 14, 1959 the one-meter model, also known as "Hot Rod" and "putt-putt", first flew using RDX
(chemical explosives) in a controlled flight for 23 seconds to a height
of 184 feet (56 m). Film of the tests has been transcribed to video and were featured on the BBC TV program "To Mars by A-Bomb" in 2003 with comments by Freeman Dyson and Arthur C. Clarke. The model landed by parachute undamaged and is in the collection of the Smithsonian National Air and Space Museum.
The first proposed shock absorber was a ring-shaped airbag. It
was soon realized that, should an explosion fail, the
1,100,000–2,200,000-pound (500,000–1,000,000 kg) pusher plate would tear
away the airbag on the rebound. So a two-stage detuned spring and
piston shock absorber design was developed. On the reference design the
first stage mechanical absorber was tuned to 4.5 times the pulse
frequency whilst the second stage gas piston was tuned to 0.5 times the
pulse frequency. This permitted timing tolerances of 10 ms in each
explosion.
The final design coped with bomb failure by overshooting and
rebounding into a center position. Thus following a failure and on
initial ground launch it would be necessary to start or restart the
sequence with a lower yield device. In the 1950s methods of adjusting bomb yield
were in their infancy and considerable thought was given to providing a
means of swapping out a standard yield bomb for a smaller yield one in a
2 or 3 second time frame or to provide an alternative means of firing
low yield bombs. Modern variable yield devices would allow a single
standardized explosive to be tuned down, configured to a lower yield,
automatically.
The bombs had to be launched behind the pusher plate with enough
velocity to explode 66–98 feet (20–30 m) beyond it every 1.1 seconds.
Numerous proposals were investigated, from multiple guns poking over the
edge of the pusher plate to rocket propelled bombs launched from roller
coaster tracks; however, the final reference design used a simple gas
gun to shoot the devices through a hole in the center of the pusher
plate.
Potential problems
Exposure to repeated nuclear blasts raises the problem of ablation
(erosion) of the pusher plate. Calculations and experiments indicated
that a steel pusher plate would ablate less than 1 mm, if unprotected.
If sprayed with an oil it would not ablate at all (this was discovered
by accident; a test plate had oily fingerprints on it and the
fingerprints suffered no ablation). The absorption spectra of carbon and hydrogen minimize heating. The design temperature of the shockwave, 120,600 °F (67,000 °C), emits ultraviolet
light. Most materials and elements are opaque to ultraviolet,
especially at the 49,000 psi (340 MPa) pressures the plate experiences.
This prevents the plate from melting or ablating.
One issue that remained unresolved at the conclusion of the
project was whether or not the turbulence created by the combination of
the propellant and ablated pusher plate would dramatically increase the
total ablation of the pusher plate. According to Freeman Dyson, in the
1960s they would have had to actually perform a test with a real nuclear
explosive to determine this; with modern simulation technology this
could be determined fairly accurately without such empirical
investigation.
Another potential problem with the pusher plate is that of spalling—shards
of metal—potentially flying off the top of the plate. The shockwave
from the impacting plasma on the bottom of the plate passes through the
plate and reaches the top surface. At that point, spalling may occur,
damaging the pusher plate. For that reason, alternative
substances—plywood and fiberglass—were investigated for the surface
layer of the pusher plate and thought to be acceptable.
If the conventional explosives in the nuclear bomb detonate but a
nuclear explosion does not ignite, shrapnel could strike and
potentially critically damage the pusher plate.
True engineering tests of the vehicle systems were thought to be
impossible because several thousand nuclear explosions could not be
performed in any one place. Experiments were designed to test pusher
plates in nuclear fireballs and long-term tests of pusher plates could
occur in space. The shock-absorber designs could be tested at full-scale
on Earth using chemical explosives.
However, the main unsolved problem for a launch from the surface of the Earth was thought to be nuclear fallout. Freeman Dyson, group leader on the project, estimated back in the 1960s that with conventional nuclear weapons, each launch would statistically cause on average between 0.1 and 1 fatal cancers from the fallout. That estimate is based on no-threshold
model assumptions, a method often used in estimates of statistical
deaths from other industrial activities. Each few million dollars of
efficiency indirectly gained or lost in the world economy may
statistically average lives saved or lost, in terms of opportunity gains
versus costs.
Indirect effects could matter for whether the overall influence of an
Orion-based space program on future human global mortality would be a
net increase or a net decrease, including if change in launch costs and
capabilities affected space exploration, space colonization, the odds of long-term human species survival, space-based solar power, or other hypotheticals.
Danger to human life was not a reason given for shelving the
project. The reasons included lack of a mission requirement, the fact
that no one in the U.S. government could think of any reason to put
thousands of tons of payload into orbit, the decision to focus on
rockets for the Moon mission, and ultimately the signing of the Partial Test Ban Treaty in 1963. The danger to electronic systems on the ground from an electromagnetic pulse
was not considered to be significant from the sub-kiloton blasts
proposed since solid-state integrated circuits were not in general use
at the time.
From many smaller detonations combined, the fallout for the
entire launch of a 12,000,000-pound (5,400,000 kg) Orion is equal to the
detonation of a typical 10 megaton (40 petajoule) nuclear weapon as an air burst, therefore most of its fallout would be the comparatively dilute delayed fallout.
Assuming the use of nuclear explosives with a high portion of total
yield from fission, it would produce a combined fallout total similar to
the surface burst yield of the Mike shot of Operation Ivy, a 10.4-megaton device detonated in 1952. The comparison is not quite perfect as, due to its surface burst location, Ivy Mike created a large amount of early fallout contamination. Historical above-ground nuclear weapon tests included 189 megatons of fission yield and caused average global radiation exposure per person peaking at 0.11 mSv/a (11 mrem/a) in 1963, with a 0.007 mSv/a (0.7 mrem/a) residual in modern times, superimposed upon other sources of exposure, primarily natural background radiation, which averages 2.4 mSv/a (240 mrem/a) globally but varies greatly, such as 6 mSv/a (600 mrem/a) in some high-altitude cities.
Any comparison would be influenced by how population dosage is affected
by detonation locations, with very remote sites preferred.
With special designs of the nuclear explosive, Ted Taylor
estimated that fission product fallout could be reduced tenfold, or even
to zero, if a pure fusion explosive
could be constructed instead. A 100% pure fusion explosive has yet to
be successfully developed, according to declassified US government
documents, although relatively clean PNEs (Peaceful nuclear explosions) were tested for canal excavation by the Soviet Union in the 1970s with 98% fusion yield in the Taiga test's 15 kiloton devices, 0.3 kilotons fission, which excavated part of the proposed Pechora–Kama Canal.
The vehicle's propulsion system and its test program would violate the Partial Test Ban Treaty
of 1963, as currently written, which prohibits all nuclear detonations
except those conducted underground as an attempt to slow the arms race
and to limit the amount of radiation in the atmosphere caused by nuclear
detonations. There was an effort by the US government to put an
exception into the 1963 treaty to allow for the use of nuclear
propulsion for spaceflight but Soviet fears about military applications
kept the exception out of the treaty. This limitation would affect only
the US, Russia, and the United Kingdom. It would also violate the Comprehensive Nuclear-Test-Ban Treaty
which has been signed by the United States and China as well as the de
facto moratorium on nuclear testing that the declared nuclear powers
have imposed since the 1990s.
The launch of such an Orion nuclear bomb rocket from the ground or low Earth orbit would generate an electromagnetic pulse that could cause significant damage to computers and satellites as well as flooding the van Allen belts
with high-energy radiation. Since the EMP footprint would be a few
hundred miles wide, this problem might be solved by launching from very
remote areas. A few relatively small space-based electrodynamic tethers could be deployed to quickly eject the energetic particles from the capture angles of the Van Allen belts.
An Orion spacecraft could be boosted by non-nuclear means to a
safer distance only activating its drive well away from Earth and its
satellites. The Lofstrom launch loop or a space elevator would hypothetically provide excellent solutions; in the case of the space elevator, existing carbon nanotube composites, with the possible exception of Colossal carbon tubes, do not yet have sufficient tensile strength.
All chemical rocket designs are extremely inefficient and expensive
when launching large mass into orbit but could be employed if the result
were cost effective.
Professor Glenn Reynolds
has written that a less-developed country could leapfrog all others in
space by building a massive Orion launcher using 1960s technology.
A test that was similar to the test of a pusher plate occurred as an
accidental side effect of a nuclear containment test called "Pascal-B" conducted on 27 August 1957.
The test's experimental designer Dr. Robert Brownlee performed a highly
approximate calculation that suggested that the low-yield nuclear
explosive would accelerate the massive (900 kg) steel capping plate to
six times escape velocity.
The plate was never found but Dr. Brownlee believes that the plate
never left the atmosphere; for example, it could have been vaporized by
compression heating of the atmosphere due to its high speed. The
calculated velocity was interesting enough that the crew trained a high-speed camera on the plate, which, unfortunately, appeared in only one frame, indicating a very high lower bound for the speed of the plate.
An Orion spaceship features prominently in the science fiction novel Footfall by Larry Niven and Jerry Pournelle.
In the face of an alien siege/invasion of Earth, the humans must resort
to drastic measures to get a fighting ship into orbit to face the alien
fleet.
The opening premise of the show Ascension
is that in 1963 President John F. Kennedy and the U.S. government,
fearing the Cold War will escalate and lead to the destruction of Earth,
launched the Ascension, an Orion-class spaceship, to colonize a planet orbiting Proxima Centauri, assuring the survival of the human race.
Author Stephen Baxter's science fiction novel Ark employs an Orion-class generation ship to escape ecological disaster on Earth.
Towards the conclusion of his Empire Games trilogy, Charles Stross includes a spacecraft modeled after Project Orion. The craft's designers, constrained by a 1960s level of industrial capacity, intend it to be used to explore parallel worlds and to act as a nuclear deterrent, leapfrogging their foes' more contemporary capabilities.