
Friday, January 26, 2018

Bernhard Riemann

From Wikipedia, the free encyclopedia
Bernhard Riemann in 1863.
Born: Georg Friedrich Bernhard Riemann, 17 September 1826, Breselenz, Kingdom of Hanover (modern-day Germany)
Died: 20 July 1866 (aged 39), Selasca, Kingdom of Italy
Residence: Kingdom of Hanover
Nationality: German
Institutions: University of Göttingen
Thesis: Grundlagen für eine allgemeine Theorie der Funktionen einer veränderlichen complexen Größe (1851)
Doctoral advisor: Carl Friedrich Gauss
Notable students: Gustav Roch
Influences: J. P. G. L. Dirichlet
Georg Friedrich Bernhard Riemann (German: [ˈʀiːman] (About this sound listen); 17 September 1826 – 20 July 1866) was a German mathematician who made contributions to analysis, number theory, and differential geometry. In the field of real analysis, he is mostly known for the first rigorous formulation of the integral, the Riemann integral, and his work on Fourier series. His contributions to complex analysis include most notably the introduction of Riemann surfaces, breaking new ground in a natural, geometric treatment of complex analysis. His famous 1859 paper on the prime-counting function, containing the original statement of the Riemann hypothesis, is regarded as one of the most influential papers in analytic number theory. Through his pioneering contributions to differential geometry, Bernhard Riemann laid the foundations of the mathematics of general relativity.

Biography

Early years

Riemann was born on 17 September 1826 in Breselenz, a village near Dannenberg in the Kingdom of Hanover. His father, Friedrich Bernhard Riemann, was a poor Lutheran pastor in Breselenz who fought in the Napoleonic Wars. His mother, Charlotte Ebell, died before her children had reached adulthood. Riemann was the second of six children; he was shy and suffered numerous nervous breakdowns. He exhibited exceptional mathematical skills, such as calculation abilities, from an early age, but suffered from timidity and a fear of speaking in public.

Education

During 1840, Riemann went to Hanover to live with his grandmother and attend the lyceum (middle school). After the death of his grandmother in 1842, he attended high school at the Johanneum Lüneburg. In high school, Riemann studied the Bible intensively, but he was often distracted by mathematics. His teachers were amazed by his ability to perform complicated mathematical operations, in which he often outstripped his instructors' knowledge. In 1846, at the age of 19, he started studying philology and Christian theology in order to become a pastor and help with his family's finances.

During the spring of 1846, his father, after gathering enough money, sent Riemann to the University of Göttingen, where he planned to study towards a degree in theology. However, once there, he began studying mathematics under Carl Friedrich Gauss (specifically his lectures on the method of least squares). Gauss recommended that Riemann give up his theological work and enter the mathematical field; after getting his father's approval, Riemann transferred to the University of Berlin in 1847.[1] Jacobi, Lejeune Dirichlet, Steiner, and Eisenstein were teaching there during his time of study. He stayed in Berlin for two years and returned to Göttingen in 1849.

Academia

Riemann held his first lectures in 1854, which founded the field of Riemannian geometry and thereby set the stage for Einstein's general theory of relativity. In 1857, there was an attempt to promote Riemann to extraordinary professor status at the University of Göttingen. Although this attempt failed, it did result in Riemann finally being granted a regular salary. In 1859, following Lejeune Dirichlet's death, he was promoted to head the mathematics department at Göttingen. He was also the first to suggest using dimensions higher than merely three or four in order to describe physical reality.[2] In 1862 he married Elise Koch and had a daughter.

Austro-Prussian War and death in Italy


Riemann's tombstone in Biganzolo

Riemann fled Göttingen when the armies of Hanover and Prussia clashed there in 1866.[3] He died of tuberculosis during his third journey to Italy, in Selasca (now a hamlet of Verbania on Lake Maggiore), and was buried in the cemetery in Biganzolo (Verbania). Riemann was a dedicated Christian, the son of a Protestant minister, and saw his life as a mathematician as another way to serve God. Throughout his life he held closely to his Christian faith and considered it the most important aspect of his life. At the time of his death, he was reciting the Lord's Prayer with his wife; he died before they finished saying the prayer.[4] Meanwhile, in Göttingen his housekeeper discarded some of the papers in his office, including much unpublished work. Riemann refused to publish incomplete work, and some deep insights may have been lost forever.[3]

Riemann's tombstone in Biganzolo (Italy) refers to Romans 8:28 ("And we know that all things work together for good to them that love God, to them who are called according to his purpose"):

Here rests in God Georg Friedrich Bernhard Riemann
Professor in Göttingen
born in Breselenz, Germany 17 September 1826
died in Selasca, Italy 20 July 1866
For those who love God, all things must work together for the best.[5]

Riemannian geometry

Riemann's published works opened up research areas combining analysis with geometry. These would subsequently become major parts of the theories of Riemannian geometry, algebraic geometry, and complex manifold theory. The theory of Riemann surfaces was elaborated by Felix Klein and particularly Adolf Hurwitz. This area of mathematics is part of the foundation of topology and is still being applied in novel ways to mathematical physics.

In 1853, Gauss asked his student Riemann to prepare a Habilitationsschrift on the foundations of geometry. Over many months, Riemann developed his theory of higher dimensions and delivered his lecture at Göttingen in 1854, entitled "Ueber die Hypothesen welche der Geometrie zu Grunde liegen" ("On the hypotheses which underlie geometry"). It was not published until 1868, two years after his death, when Dedekind brought it to print. Its early reception appears to have been slow, but it is now recognized as one of the most important works in geometry.

The subject founded by this work is Riemannian geometry. Riemann found the correct way to extend into n dimensions the differential geometry of surfaces, which Gauss himself proved in his theorema egregium. The fundamental object is called the Riemann curvature tensor. For the surface case, this can be reduced to a number (scalar), positive, negative, or zero; the non-zero and constant cases being models of the known non-Euclidean geometries.

Riemann's idea was to introduce a collection of numbers at every point in space (i.e., a tensor) which would describe how much it was bent or curved. Riemann found that in four spatial dimensions, one needs a collection of ten numbers at each point to describe the properties of a manifold, no matter how distorted it is. This is the famous construction central to his geometry, known now as a Riemannian metric.
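The count of ten numbers per point can be checked directly: a Riemannian metric is a symmetric array g_ij, so only the entries on and above the diagonal are independent. A minimal sketch (the function name is ours, not standard terminology):

```python
# Independent components of a symmetric n x n metric tensor:
# g_ij = g_ji, so only the upper triangle (including the diagonal) is free.
def metric_components(n):
    return n * (n + 1) // 2

print(metric_components(4))  # 10 numbers per point in four dimensions
print(metric_components(2))  # 3 for a surface
```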

Complex analysis

In his dissertation, he established a geometric foundation for complex analysis through Riemann surfaces, through which multi-valued functions like the logarithm (with infinitely many sheets) or the square root (with two sheets) could become one-to-one functions. Complex functions are harmonic functions (that is, they satisfy Laplace's equation and thus the Cauchy–Riemann equations) on these surfaces and are described by the location of their singularities and the topology of the surfaces. The topological "genus" of the Riemann surfaces is given by g = w/2 − n + 1, where the surface has n leaves coming together at w branch points. For g > 1 the Riemann surface has (3g − 3) parameters (the "moduli").
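The genus formula can be tried on simple cases; a small illustrative sketch (function names are ours):

```python
# Riemann's formula g = w/2 - n + 1 for a surface with n leaves (sheets)
# meeting at w branch points.
def genus(w, n):
    return w // 2 - n + 1

# The square root: two sheets joined at two branch points (0 and infinity)
# gives genus 0, the sphere; four branch points on two sheets give genus 1.
print(genus(2, 2))  # 0
print(genus(4, 2))  # 1

# For g > 1 the surface has 3g - 3 moduli.
def moduli(g):
    return 3 * g - 3
```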

His contributions to this area are numerous. The famous Riemann mapping theorem says that a simply connected domain in the complex plane is "biholomorphically equivalent" (i.e. there is a bijection between them that is holomorphic with a holomorphic inverse) to either the complex plane itself or to the interior of the unit circle. The generalization of the theorem to Riemann surfaces is the famous uniformization theorem, which was proved in the 19th century by Henri Poincaré and Felix Klein. Here, too, rigorous proofs were first given after the development of richer mathematical tools (in this case, topology).

For the proof of the existence of functions on Riemann surfaces he used a minimality condition, which he called the Dirichlet principle. Weierstrass found a gap in the proof: Riemann had not noticed that his working assumption (that the minimum existed) might not hold; the function space might not be complete, and therefore the existence of a minimum was not guaranteed. Through the work of David Hilbert in the calculus of variations, the Dirichlet principle was finally established.

Otherwise, Weierstrass was very impressed with Riemann, especially with his theory of abelian functions. When Riemann's work appeared, Weierstrass withdrew his own paper from Crelle's Journal and did not publish it. They got along well when Riemann visited him in Berlin in 1859. Weierstrass encouraged his student Hermann Amandus Schwarz to find alternatives to the Dirichlet principle in complex analysis, in which he was successful.

An anecdote from Arnold Sommerfeld[6] shows the difficulties which contemporary mathematicians had with Riemann's new ideas. In 1870, Weierstrass had taken Riemann's dissertation with him on a holiday to Rigi and complained that it was hard to understand. The physicist Hermann von Helmholtz assisted him in the work overnight and returned with the comment that it was "natural" and "very understandable".

Other highlights include his work on abelian functions and theta functions on Riemann surfaces. Riemann had been in competition with Weierstrass since 1857 to solve the Jacobian inverse problems for abelian integrals, a generalization of elliptic integrals. Riemann used theta functions in several variables and reduced the problem to the determination of the zeros of these theta functions. Riemann also investigated period matrices and characterized them through the "Riemannian period relations" (symmetric, real part negative). Through work of Ferdinand Georg Frobenius and Solomon Lefschetz, the validity of these relations is equivalent to the embedding of C^n/Ω (where Ω is the lattice of the period matrix) in a projective space by means of theta functions. For certain values of n, this is the Jacobian variety of the Riemann surface, an example of an abelian manifold.

Many mathematicians such as Alfred Clebsch furthered Riemann's work on algebraic curves. These theories depended on the properties of a function defined on Riemann surfaces. For example, the Riemann–Roch theorem (Roch was a student of Riemann) says something about the number of linearly independent differentials (with known conditions on the zeros and poles) of a Riemann surface.

According to Laugwitz,[7] automorphic functions appeared for the first time in an essay about the Laplace equation on electrically charged cylinders. Riemann however used such functions for conformal maps (such as mapping topological triangles to the circle) in his 1859 lecture on hypergeometric functions or in his treatise on minimal surfaces.

Real analysis

In the field of real analysis, he discovered the Riemann integral in his habilitation. Among other things, he showed that every piecewise continuous function is integrable. The Stieltjes integral likewise goes back to the Göttingen mathematician, and the two names are accordingly joined in the Riemann–Stieltjes integral.
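The idea behind the Riemann integral can be illustrated with a direct computation: partition the interval, sample the function once in each piece, and sum the rectangle areas. A minimal sketch:

```python
# Riemann sum: partition [a, b] into n pieces, sample f at a tagged point in
# each subinterval, and sum f(x_i) * width. The Riemann integral is the
# common limit of these sums as the partition is refined.
def riemann_sum(f, a, b, n):
    width = (b - a) / n
    # left-endpoint tags; any choice of tags converges in the limit
    return sum(f(a + i * width) for i in range(n)) * width

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 10_000)
print(approx)  # close to the exact value 1/3
```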

In his habilitation work on Fourier series, where he followed the work of his teacher Dirichlet, he showed that Riemann-integrable functions are "representable" by Fourier series. Dirichlet had shown this for continuous, piecewise-differentiable functions (thus with countably many non-differentiable points). Riemann gave an example of a Fourier series representing a continuous, almost nowhere-differentiable function, a case not covered by Dirichlet. He also proved the Riemann–Lebesgue lemma: if a function is representable by a Fourier series, then the Fourier coefficients go to zero as n grows large.
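The decay of Fourier coefficients asserted by the Riemann–Lebesgue lemma can be observed numerically; the midpoint quadrature below is an illustrative sketch for f(x) = x on [−π, π]:

```python
import math

# Fourier sine coefficient b_n of f(x) = x on [-pi, pi], by midpoint quadrature.
# Exactly, b_n = 2 * (-1)**(n + 1) / n, which tends to 0 as n grows.
def b(n, steps=20_000):
    h = 2 * math.pi / steps
    total = 0.0
    for k in range(steps):
        x = -math.pi + (k + 0.5) * h
        total += x * math.sin(n * x)
    return total * h / math.pi

print(b(1))   # about 2.0
print(b(25))  # about 0.08 -- the coefficients shrink toward zero
```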

Riemann's essay was also the starting point for Georg Cantor's work with Fourier series, which was the impetus for set theory.

He also worked with hypergeometric differential equations in 1857 using complex analytical methods and presented the solutions through the behavior of closed paths about singularities (described by the monodromy matrix). The existence of such differential equations with previously prescribed monodromy matrices is one of the Hilbert problems.

Number theory

He made some famous contributions to modern analytic number theory. In a single short paper, the only one he published on the subject of number theory, he investigated the zeta function that now bears his name, establishing its importance for understanding the distribution of prime numbers. The Riemann hypothesis was one of a series of conjectures he made about the function's properties.

In Riemann's work, there are many more interesting developments. He proved the functional equation for the zeta function (already known to Euler), behind which a theta function lies. He also gave a better approximation for the prime-counting function π(x) than Gauss's function Li(x)[citation needed]. Through the summation of this approximation function over the non-trivial zeros on the line with real part 1/2, he gave an exact, "explicit formula" for π(x).
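As a rough numerical illustration (not Riemann's explicit formula), one can compare an exact prime count with Gauss's Li(x) = ∫ from 2 to x of dt/ln t; the sieve and quadrature below are sketches of our own:

```python
import math

def prime_count(x):
    # sieve of Eratosthenes; returns pi(x), the number of primes <= x
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

def Li(x, steps=100_000):
    # Gauss's logarithmic integral from 2 to x, by the midpoint rule
    h = (x - 2) / steps
    return sum(h / math.log(2 + (k + 0.5) * h) for k in range(steps))

print(prime_count(1000), Li(1000))  # 168 and about 176.6
```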

Riemann knew of Chebyshev's work on the prime number theorem and had visited Dirichlet in 1852, but his own methods were very different.

Writings



  • 1868. On the hypotheses which lie at the foundation of geometry, translated by W. K. Clifford, Nature 8 (1873) 183; reprinted in Clifford's Collected Mathematical Papers, London 1882 (Macmillan); New York 1968 (Chelsea). http://www.emis.de/classics/Riemann/. Also in Ewald, William B., ed., 1996. "From Kant to Hilbert: A Source Book in the Foundations of Mathematics", 2 vols. Oxford Univ. Press: 652–61.
  • 1892. Collected Works of Bernhard Riemann (H. Weber, ed.), in German. Reprinted New York 1953 (Dover).
  • Riemann, Bernhard (2004), Collected papers, Kendrick Press, Heber City, UT, ISBN 978-0-9740427-2-5, MR 2121437.
Wednesday, January 24, 2018

    Earth's energy budget

    From Wikipedia, the free encyclopedia
     
    Earth's climate is largely determined by the planet's energy budget, i.e., the balance of incoming and outgoing radiation. The budget is measured by satellites and expressed in W/m².[1]

    Earth's energy budget accounts for the balance between energy Earth receives from the Sun,[2] and energy Earth radiates back into outer space after having been distributed throughout the five components of Earth's climate system and having thus powered the so-called "Earth’s heat engine".[3] This system is made up of earth's water, ice, atmosphere, rocky crust, and all living things.[4]

    Quantifying changes in these amounts is required to accurately model the Earth's climate.[5]

    Incoming, top-of-atmosphere (TOA) shortwave flux radiation, shows energy received from the sun (Jan 26–27, 2012).
    Outgoing, longwave flux radiation at the top-of-atmosphere (Jan 26–27, 2012). Heat energy radiated from Earth (in watts per square metre) is shown in shades of yellow, red, blue and white. The brightest-yellow areas are the hottest and are emitting the most energy out to space, while the dark blue areas and the bright white clouds are much colder, emitting the least energy.

    Received radiation is unevenly distributed over the planet, because the Sun heats equatorial regions more than polar regions. The atmosphere and ocean work non-stop to even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds, and ocean circulation. Earth is very close to being (though not perfectly) in radiative equilibrium, the situation where the incoming solar energy is balanced by an equal flow of heat to space; under that condition, global temperatures are relatively stable. Globally, over the course of the year, the Earth system (land surfaces, oceans, and atmosphere) absorbs and then radiates back to space an average of about 240 watts of solar power per square meter. Anything that increases or decreases the amount of incoming or outgoing energy will change global temperatures in response.[6]

    However, Earth's energy balance and heat fluxes depend on many factors, such as atmospheric composition (mainly aerosols and greenhouse gases), the albedo (reflectivity) of surface properties, cloud cover and vegetation and land use patterns.

    Changes in surface temperature due to Earth's energy budget do not occur instantaneously, due to the inertia of the oceans and the cryosphere. The net heat flux is buffered primarily by becoming part of the ocean's heat content, until a new equilibrium state is established between radiative forcings and the climate response.[7]

    Energy budget

    A Sankey diagram illustrating the Earth's energy budget described in this section — line thickness is linearly proportional to relative amount of energy.[8]

    In spite of the enormous transfers of energy into and from the Earth, it maintains a relatively constant temperature because, as a whole, there is little net gain or loss: Earth emits via atmospheric and terrestrial radiation (shifted to longer electromagnetic wavelengths) to space about the same amount of energy as it receives via insolation (all forms of electromagnetic radiation).

    To quantify Earth's heat budget or heat balance, let the insolation received at the top of the atmosphere be 100 units (100 units = about 1,360 watts per square meter facing the sun), as shown in the accompanying illustration. Around 35 units, a fraction called the albedo of Earth, are reflected back to space: 27 from the top of clouds, 2 from snow and ice-covered areas, and 6 by other parts of the atmosphere. The 65 remaining units are absorbed: 14 within the atmosphere and 51 by the Earth's surface. These 51 units are returned to space in the form of terrestrial radiation: 17 directly radiated to space and 34 absorbed by the atmosphere (19 through latent heat of condensation, 9 via convection and turbulence, and 6 directly absorbed). The 48 units absorbed by the atmosphere (34 units from terrestrial radiation and 14 from insolation) are finally radiated back to space. These 65 units (17 from the ground and 48 from the atmosphere) balance the 65 units absorbed from the sun, maintaining zero net gain of energy by the Earth.[8]
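    The unit bookkeeping in the paragraph above can be checked in a few lines; variable names are ours, the numbers are the article's:

```python
# Bookkeeping check of the 100-unit budget described above.
insolation = 100
reflected = 27 + 2 + 6              # cloud tops + snow/ice + atmosphere (albedo)
absorbed_atmosphere = 14
absorbed_surface = 51

assert reflected == 35
assert absorbed_atmosphere + absorbed_surface == insolation - reflected  # 65 absorbed

surface_to_space = 17
surface_to_atmosphere = 19 + 9 + 6  # latent heat + convection + absorbed radiation
assert surface_to_space + surface_to_atmosphere == absorbed_surface      # 51 returned

atmosphere_to_space = surface_to_atmosphere + absorbed_atmosphere        # 48
assert surface_to_space + atmosphere_to_space == 65  # outgoing balances absorbed
print("budget balances")
```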

    Incoming radiant energy (shortwave)

    The total amount of energy received per second at the top of Earth's atmosphere (TOA) is measured in watts and is given by the solar constant times the cross-sectional area of the Earth. Because the surface area of a sphere is four times its cross-sectional area (i.e. the area of a circle of the same radius), the average TOA flux is one quarter of the solar constant, approximately 340 W/m².[1][9] Since the absorption varies with location as well as with diurnal, seasonal and annual variations, the numbers quoted are long-term averages, typically averaged from multiple satellite measurements.[1]

    Of the ~340 W/m² of solar radiation received by the Earth, an average of ~77 W/m² is reflected back to space by clouds and the atmosphere and ~23 W/m² is reflected by the surface albedo, leaving ~240 W/m² of solar energy input to the Earth's energy budget. This gives the earth a mean net albedo of 0.29.[1]
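    The arithmetic in the last two paragraphs can be sketched in a few lines (the solar-constant value 1361 W/m² is an assumed round figure, not taken from this article):

```python
solar_constant = 1361.0            # W/m^2 at TOA (assumed round value)
toa_average = solar_constant / 4   # sphere area = 4 x cross-sectional area
print(round(toa_average))          # about 340 W/m^2

reflected = 77 + 23                # clouds/atmosphere + surface albedo, W/m^2
albedo = reflected / 340
print(round(albedo, 2))            # about 0.29
absorbed = 340 - reflected
print(absorbed)                    # about 240 W/m^2 input to the energy budget
```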

    Earth's internal heat and other small effects

    The geothermal heat flux from the Earth's interior is estimated to be 47 terawatts.[10] This comes to 0.087 W/m², only about 0.027% of Earth's total energy budget at the surface, which is dominated by the 173,000 terawatts of incoming solar radiation.[11]
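    The quoted percentage follows from simple division of the two totals:

```python
geothermal_tw = 47.0       # TW, geothermal heat flux
solar_tw = 173_000.0       # TW, incoming solar radiation
share = geothermal_tw / solar_tw * 100
print(round(share, 3))     # about 0.027 percent
```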

    Human production of energy is even lower, at an estimated 18 TW[citation needed].

    Photosynthesis has a larger effect: photosynthetic efficiency converts up to 2% of incoming sunlight into biomass, for a total photosynthetic productivity on Earth of roughly 1,500–2,250 TW (about 1% ± 0.26% of the solar energy reaching the Earth's surface)[12].

    Other minor sources of energy are usually ignored in these calculations, including accretion of interplanetary dust and solar wind, light from stars other than the Sun and the thermal radiation from space. Earlier, Joseph Fourier had claimed that deep space radiation was significant in a paper often cited as the first on the greenhouse effect.[13]

    Longwave radiation

    Longwave radiation is usually defined as outgoing infrared energy leaving the planet. However, the atmosphere absorbs part of it initially, and cloud cover can reflect radiation. Generally, heat energy is transported from the planet's surface layers (land and ocean) to the atmosphere via evapotranspiration and latent heat fluxes or conduction/convection processes.[1] Ultimately, energy is radiated in the form of longwave infrared radiation back into space.
    Recent satellite observations indicate additional precipitation, which is sustained by increased energy leaving the surface through evaporation (the latent heat flux), offsetting increases in longwave flux to the surface.[5]

    Earth's energy imbalance

    If the incoming energy flux is not equal to the outgoing energy flux, the planet gains net heat (when the incoming flux is larger) or loses it (when the outgoing flux is larger).

    Indirect measurement

    An imbalance must show up as warming or cooling somewhere on Earth (depending on the direction of the imbalance), and the ocean, being the largest thermal reservoir on Earth, is a prime candidate for measurements.

    Earth's energy imbalance measurements provided by Argo floats have detected an accumulation of ocean heat content (OHC). The imbalance during the deep solar minimum of 2005–2010 was estimated at 0.58 ± 0.15 W/m².[14] Later research estimated the surface energy imbalance to be 0.60 ± 0.17 W/m².[15]

    Direct measurement

    Several satellites indirectly measure the energy absorbed and radiated by Earth and by inference the energy imbalance. The NASA Earth Radiation Budget Experiment (ERBE) project involves three such satellites: the Earth Radiation Budget Satellite (ERBS), launched October 1984; NOAA-9, launched December 1984; and NOAA-10, launched September 1986.[16]

    Today NASA's CERES satellite instruments, part of the NASA Earth Observing System (EOS), are designed to measure both solar-reflected and Earth-emitted radiation.[17]

    Natural greenhouse effect

    Diagram showing the energy budget of  Earth's atmosphere, which includes the greenhouse effect

    The major atmospheric gases (oxygen and nitrogen) are transparent to incoming sunlight but are also transparent to outgoing thermal (infrared) radiation. However, water vapor, carbon dioxide, methane and other trace gases are opaque to many wavelengths of thermal radiation. The Earth's surface radiates the net equivalent of 17 percent of the incoming solar energy in the form of thermal infrared. However, the amount that directly escapes to space is only about 12 percent of incoming solar energy. The remaining fraction, 5 to 6 percent, is absorbed by the atmosphere by greenhouse gas molecules.[18]

    Atmospheric gases only absorb some wavelengths of energy but are transparent to others. The absorption patterns of water vapor (blue peaks) and carbon dioxide (pink peaks) overlap in some wavelengths. Carbon dioxide is not as strong a greenhouse gas as water vapor, but it absorbs energy in wavelengths (12–15 micrometres) that water vapor does not, partially closing the "window" through which heat radiated by the surface would normally escape to space. (Illustration NASA, Robert Rohde)[19]

    When greenhouse gas molecules absorb thermal infrared energy, their temperature rises. Those gases then radiate an increased amount of thermal infrared energy in all directions. Heat radiated upward continues to encounter greenhouse gas molecules; those molecules also absorb the heat, their temperature rises, and the amount of heat they radiate increases. The atmosphere thins with altitude, and at roughly 5–6 kilometres the concentration of greenhouse gases in the overlying atmosphere is so low that heat can escape to space.[18]

    Because greenhouse gas molecules radiate infrared energy in all directions, some of it spreads downward and ultimately returns to the Earth's surface, where it is absorbed. The Earth's surface temperature is thus higher than it would be if it were heated only by direct solar heating. This supplemental heating is the natural greenhouse effect.[18] It is as if the Earth is covered by a blanket that allows high frequency radiation (sunlight) to enter, but slows the rate at which the low frequency infrared radiant energy emitted by the Earth leaves.

    Climate sensitivity

    A change in the incident radiated portion of the energy budget is referred to as a radiative forcing. Climate sensitivity is the equilibrium change in temperature that results from a change in the energy budget.
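    A minimal sketch of the relationship, under a linear-response assumption that is ours (the sensitivity parameter and forcing values below are commonly cited illustrative numbers, not taken from this article):

```python
# Illustrative only: a linear response model delta_T = lambda_ * delta_F.
# Both numbers below are assumptions for the sketch:
# lambda_ is an assumed climate sensitivity parameter, delta_F an
# often-quoted radiative forcing for doubled CO2.
lambda_ = 0.8   # K per (W/m^2), assumed
delta_F = 3.7   # W/m^2, assumed
delta_T = lambda_ * delta_F
print(round(delta_T, 1))  # about 3.0 K under these assumed numbers
```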

    Climate forcings and global warming

    Expected Earth energy imbalance for three choices of aerosol climate forcing. Measured imbalance, close to 0.6 W/m², implies that aerosol forcing is close to −1.6 W/m². (Credit: NASA/GISS)[14]

    Climate forcings are changes that cause temperatures to rise or fall, disrupting the energy balance. Natural climate forcings include changes in the Sun's brightness, Milankovitch cycles (small variations in the shape of Earth's orbit and its axis of rotation that occur over thousands of years) and volcanic eruptions that inject light-reflecting particles as high as the stratosphere. Man-made forcings include particle pollution (aerosols) that absorb and reflect incoming sunlight; deforestation, which changes how the surface reflects and absorbs sunlight; and the rising concentration of atmospheric carbon dioxide and other greenhouse gases, which decreases the rate at which heat is radiated to space.

    A forcing can trigger feedbacks that intensify (positive feedback) or weaken (negative feedback) the original forcing. For example, loss of ice at the poles makes them less reflective, causing greater absorption of energy and so increasing the rate at which the ice melts: a positive feedback.[19]

    The observed planetary energy imbalance during the recent solar minimum shows that solar forcing of climate, although natural and significant, is overwhelmed by anthropogenic climate forcing.[20]

    In 2012, NASA scientists reported that to stop global warming atmospheric CO2 content would have to be reduced to 350 ppm or less, assuming all other climate forcings were fixed. The impact of anthropogenic aerosols has not been quantified, but individual aerosol types are thought to have substantial heating and cooling effects.[14]

    Fast-neutron reactor

    From Wikipedia, the free encyclopedia
     
    Shevchenko BN350 nuclear fast reactor and desalination plant situated on the shore of the Caspian Sea. The plant generated 135 MWe and provided steam for an associated desalination plant. View of the interior of the reactor hall.

    A fast-neutron reactor or simply a fast reactor is a category of nuclear reactor in which the fission chain reaction is sustained by fast neutrons, as opposed to thermal neutrons used in thermal-neutron reactors. Such a reactor needs no neutron moderator, but must use fuel that is relatively rich in fissile material when compared to that required for a thermal reactor.

    Introduction

    Basic fission concepts

    In order to sustain a fission chain reaction, the neutrons released in fission events have to react with other atoms in the fuel. The chance of this occurring depends on the energy of the neutron; most atoms will only undergo induced fission with high energy neutrons, although a smaller number prefer much lower energies.

    Natural uranium consists of three isotopes: U-238, U-235, and trace quantities of U-234, a decay product of U-238. U-238 accounts for roughly 99.3% of natural uranium and undergoes fission only by neutrons with energies of 5 MeV or greater, the so-called fast neutrons[1]. About 0.7% of natural uranium is U-235, which undergoes fission by neutrons of any energy, but particularly by lower energy neutrons. When either of these isotopes undergoes fission, it releases neutrons with an energy distribution peaking around 1 to 2 MeV. The flux of higher energy fission neutrons (> 2 MeV) is too low to create sufficient fission in U-238, and the flux of lower energy fission neutrons (< 2 MeV) is too low to do so easily in U-235 [2].

    The common solution to this problem is to slow the neutrons from these fast speeds using a neutron moderator, any substance which interacts with the neutrons and slows their speed. The most common moderator is normal water, which slows the neutrons through elastic scattering until they reach thermal equilibrium with the water. The key to reactor design is to carefully lay out the fuel and water so the neutrons have time to slow enough to become highly reactive with the U-235, but not so much as to allow them easy pathways to escape the reactor core entirely.
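    The slowing-down process can be estimated with a standard textbook quantity, the mean logarithmic energy loss per elastic collision (ξ), which is about 1 for hydrogen, the light nucleus that makes water such an effective moderator. The values below are textbook assumptions, not taken from this article:

```python
import math

# Number of elastic collisions needed to slow a fission neutron to thermal
# energy: n = ln(E_fission / E_thermal) / xi, with xi ~ 1 for hydrogen.
xi_hydrogen = 1.0     # mean log energy decrement per collision (assumed)
E_fission = 2.0e6     # eV, a typical fission-neutron energy (assumed)
E_thermal = 0.025     # eV, room-temperature thermal energy (assumed)

collisions = math.log(E_fission / E_thermal) / xi_hydrogen
print(round(collisions))  # roughly 18 collisions to thermalize in water
```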

    Although U-238 will not undergo fission by the neutrons released in fission, thermal neutrons can be captured by the nucleus to transmute the atom into Pu-239. Pu-239 has a neutron cross section very similar to that of U-235, and most of the atoms created this way will undergo fission from the thermal neutrons. In most reactors this accounts for as much as ⅓ of the energy being generated. Not all of the Pu-239 is burned up during normal operation, and the leftover, along with leftover U-238, can be separated out to be used in new fuel during nuclear reprocessing.

    Water is a common moderator for practical reasons, but has its disadvantages. From a nuclear standpoint, the primary problem is that water can absorb a neutron and remove it from the reaction. It does this just enough that the amount of U-235 in natural ore is too low to sustain the chain reaction; the neutrons lost through absorption in the water and U-238, along with those lost to the environment, leave too few in the fuel. The most common solution is to slightly concentrate the amount of U-235 in the fuel to produce enriched uranium, with the leftover U-238 known as depleted uranium. Other designs use different moderators, like heavy water, that are much less likely to absorb neutrons, allowing them to run on unenriched fuel. In either case, the reactor's neutron economy is based on thermal neutrons.

    Fast fission, breeders

    Although U-235 and Pu-239 are less sensitive to higher energy neutrons, they still remain somewhat reactive well into the MeV area. If the fuel is enriched, eventually a threshold will be reached where there are enough fissile atoms in the fuel to maintain a chain reaction even with fast neutrons.

    The primary advantage is that by removing the moderator, the size of the reactor can be greatly reduced, and to some extent the complexity. This is commonly used for shipboard and submarine reactor systems, where size and weight are major concerns. The downside to the fast reaction is that fuel enrichment is an expensive process, so this is generally not suitable for electrical generation or other roles where cost is more important than size.

    There is another advantage to the fast reaction that has led to considerable development for civilian use. Fast reactors lack a moderator, and thus lack one of the systems that remove neutrons from the system. Those running on Pu-239 further increase the number of neutrons, because its most common fission cycle gives off three neutrons rather than the mix of two and three neutrons released from U-235. By surrounding the reactor core with a moderator and then a blanket of U-238, those neutrons can be captured and used to breed more Pu-239. This is the same reaction that occurs internally in conventional designs, but in this case the blanket does not have to sustain a reaction and thus can be made of natural uranium or even depleted uranium.

    Due to the surplus of neutrons from Pu-239 fission, the reactor will actually breed more Pu-239 than it consumes. The blanket material can then be processed to extract the Pu-239 to replace the losses in the reactor, and the surplus is then mixed with other fuel to produce MOX fuel that can be fed into conventional slow neutron reactors. A single fast reactor can thereby feed several slow ones, greatly increasing the amount of energy extracted from the natural uranium, from less than 1% in a normal once-through cycle, to as much as 60% in the best fast reactor cycles.
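    The gain in fuel utilization from recycling can be sketched with a simple geometric model. The burnup and reprocessing-loss figures below are assumed, illustrative values, not data from any specific fuel cycle.

```python
def utilization(per_cycle_burnup=0.10, loss_per_cycle=0.05, cycles=200):
    """Fraction of the mined uranium eventually fissioned when fuel is
    repeatedly recycled through a breeder (breeding ratio > 1, so the
    fissile inventory is sustained from U-238). Each pass fissions
    `per_cycle_burnup` of the remaining material and loses
    `loss_per_cycle` of it to reprocessing waste streams."""
    remaining, burned = 1.0, 0.0
    for _ in range(cycles):
        fissioned = remaining * per_cycle_burnup
        burned += fissioned
        remaining -= fissioned
        remaining *= 1.0 - loss_per_cycle  # reprocessing losses
    return burned

# Once-through: only roughly the 0.7% fissile fraction is usefully burned.
print(utilization(per_cycle_burnup=0.007, cycles=1))  # well under 1%
# Breeder with recycling: most of the uranium is eventually fissioned.
print(utilization())
```

    With these assumed figures, recycling lifts utilization from under 1% to roughly two thirds of the mined uranium, the same order as the 60% figure quoted for the best fast reactor cycles.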

    Given the limited stores of natural uranium ore, and the rate that nuclear power was expected to take over baseload generation, through the 1960s and 70s fast breeder reactors were seen as the solution to the world's energy needs. Using twice-through processing, a fast breeder economy increases the fuel capacity of known ore deposits by as much as 100 times, meaning that even existing ore sources would last hundreds of years. The disadvantage to this approach is that the breeder reactor has to be fed highly enriched fuel, which is very expensive to produce. Even though it breeds more fuel than it consumes, the resulting MOX is still expensive. It was widely expected that this would still be below the price of enriched uranium as demand increased and known resources dwindled.

    Through the 1970s, breeder designs were being widely experimented with, especially in the USA, France and the USSR. However, this coincided with a crash in uranium prices. The expected increase in demand had led mining companies to build up new supply channels, which came online just as the rate of reactor construction stalled in the mid-1970s. The resulting oversupply caused fuel prices to decline from about US$40 per pound in 1980 to less than $20 by 1984. Breeders produced fuel that was much more expensive, on the order of $100 to $160, and the few units that had reached commercial operation proved to be economically disastrous. Interest in breeder reactors was further muted by Jimmy Carter's April 1977 decision to defer construction of breeders in the US due to proliferation concerns, and the terrible operating record of France's Superphénix reactor.

    Advantages

    Fast neutron reactors can reduce the total radiotoxicity of nuclear waste, and dramatically reduce the waste's lifetime.[8] They can also use all or almost all of the fuel in the waste. Fast neutrons have an advantage in the transmutation of nuclear waste. With fast neutrons, the ratio between the splitting and the capture of neutrons by plutonium or the minor actinides is often larger than when the neutrons are slower, at thermal or near-thermal "epithermal" speeds. The transmuted even-numbered actinides (e.g. Pu-240, Pu-242) split nearly as easily as odd-numbered actinides in fast reactors. After they split, the actinides become a pair of "fission products", which have less total radiotoxicity. Since disposal of the fission products is dominated by the most radiotoxic fission product, cesium-137, which has a half-life of 30.1 years,[8] the result is to reduce nuclear waste lifetimes from tens of millennia (from transuranic isotopes) to a few centuries. The processes are not perfect, but the remaining transuranics are reduced from a significant problem to a tiny percentage of the total waste, because most transuranics can be used as fuel.
    • Fast reactors technically solve the "fuel shortage" argument against uranium-fueled reactors without assuming unexplored reserves, or extraction from dilute sources such as ordinary granite or the ocean. They permit nuclear fuels to be bred from almost all the actinides, including known, abundant sources of depleted uranium and thorium, and light water reactor wastes. On average, more neutrons per fission are produced from fissions caused by fast neutrons than from those caused by thermal neutrons. This results in a larger surplus of neutrons beyond those required to sustain the chain reaction. These neutrons can be used to produce extra fuel, or to transmute long half-life waste to less troublesome isotopes, such as was done at the Phénix reactor in Marcoule in France, or some can be used for each purpose. Though conventional thermal reactors also produce excess neutrons, fast reactors can produce enough of them to breed more fuel than they consume. Such designs are known as fast breeder reactors.[citation needed]
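    The waste-lifetime claim follows directly from half-life arithmetic. The sketch below compares cesium-137 with plutonium-239 (half-life 24,100 years) as a representative transuranic; the 0.1% target fraction is an arbitrary illustrative threshold.

```python
import math

def time_to_decay(half_life_years, fraction_remaining):
    """Years for a radionuclide's activity to fall to `fraction_remaining`
    of its initial value, from N(t) = N0 * 2**(-t / half_life)."""
    return half_life_years * math.log2(1.0 / fraction_remaining)

# Decay to 0.1% of initial activity takes about ten half-lives:
print(time_to_decay(30.1, 1e-3))     # Cs-137: roughly 300 years
print(time_to_decay(24_100, 1e-3))   # Pu-239: roughly 240,000 years
```

    This is why fissioning the transuranics shortens the waste problem from tens of millennia to centuries: what remains is dominated by fission products such as Cs-137.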

    Disadvantages

    • Fast-neutron reactors are costly to build and operate, and are not likely to be cost-competitive with thermal neutron reactors unless the price of uranium increases dramatically.[9]
    • Due to the low cross sections of most materials at high neutron energies, critical mass in a fast reactor is much higher than in a thermal reactor. In practice, this means significantly higher enrichment: >20% enrichment in a fast reactor compared to <5% enrichment typical in thermal reactors. This greater enrichment raises nuclear proliferation and nuclear security issues.[citation needed]
  • Sodium is often used as a coolant in fast reactors, because it does not moderate neutron speeds much and has a high heat capacity. However, it burns and foams in air. It has caused difficulties in reactors (e.g. USS Seawolf (SSN-575), Monju), although some sodium-cooled fast reactors have operated safely (notably the Superphénix and EBR-II for 30 years).[citation needed]
  • Since liquid metals other than lithium and beryllium have low moderating ability, the primary interaction of neutrons with fast reactor coolant is the (n,gamma) reaction, which induces radioactivity in the coolant. Neutron irradiation activates a significant fraction of coolant in high-power fast reactors, up to around a terabecquerel of beta decays per kilogram of coolant in steady operation.[10] Boiling in the coolant, e.g. in an accident, would reduce coolant density and thus the absorption rate, such that the reactor has a positive void coefficient, which is dangerous and undesirable from a safety and accident standpoint. This can be avoided with a gas-cooled reactor, since voids do not form in such a reactor during an accident; however, activation in the coolant remains a problem. A helium-cooled reactor would avoid this, since the elastic scattering and total cross sections are approximately equal, i.e. there are very few (n,gamma) reactions in the coolant, and the low density of helium at typical operating conditions means that neutrons have few interactions with the coolant.[citation needed]
    Main article: Nuclear reactor design

    Coolant

    Water, the most common coolant in thermal reactors, is generally not feasible for a fast reactor, because it acts as a neutron moderator. However, the Generation IV reactor known as the supercritical water reactor, with its decreased coolant density, may reach a neutron spectrum hard enough to be considered a fast reactor. Breeding, which is the primary advantage of fast over thermal reactors, may also be accomplished with a thermal, light-water cooled and moderated system using very highly enriched (~90%) uranium.

    All current fast reactors are liquid metal cooled reactors. The early Clementine reactor used mercury coolant and plutonium metal fuel. Sodium-potassium alloy (NaK) coolant is popular in test reactors due to its low melting point. In addition to its toxicity to humans, mercury has a high cross section for the (n,gamma) reaction, causing activation in the coolant and losing neutrons that could otherwise be absorbed in the fuel, which is why it is no longer used or considered as a coolant in reactors. Molten lead cooling has been used in naval propulsion units as well as some other prototype reactors. All large-scale fast reactors have used molten sodium coolant.

    Another proposed fast reactor is a molten salt reactor in which the salt's moderating properties are insignificant. This is typically achieved by replacing the light metal fluorides (e.g. lithium fluoride, LiF; beryllium fluoride, BeF2) in the salt carrier with heavier metal chlorides (e.g. potassium chloride, KCl; rubidium chloride, RbCl; zirconium chloride, ZrCl4). Moltex Energy,[11] based in the UK, proposes to build a fast neutron reactor called the Stable Salt Reactor. In this reactor design the nuclear fuel is dissolved in a molten salt contained in stainless steel tubes similar to those used in solid fuel reactors. The reactor is cooled using the natural convection of another molten salt coolant. Moltex claims that its design will be less expensive to build than a coal-fired power plant and can consume nuclear waste from conventional solid fuel reactors.

    Gas-cooled fast reactors have been the subject of research as well, as helium, the most commonly proposed coolant in such a reactor, has small absorption and scattering cross sections, thus preserving the fast neutron spectrum without significant neutron absorption in the coolant.[citation needed]

    Nuclear fuel

    In practice, sustaining a fission chain reaction with fast neutrons means using relatively highly enriched uranium or plutonium. The reason is that fissile reactions are favored at thermal energies, since the ratio between the Pu-239 fission cross section and the U-238 absorption cross section is ~100 in a thermal spectrum and ~8 in a fast spectrum. Fission and absorption cross sections are low for both Pu-239 and U-238 at high (fast) energies, which means that fast neutrons are likelier than thermal neutrons to pass through fuel without interacting; thus, more fissile material is needed. Therefore, it is impossible to build a fast reactor using only natural uranium fuel. However, it is possible to build a fast reactor that breeds fuel (from fertile material) by producing more fissile material than it consumes. After the initial fuel charge, such a reactor can be refueled by reprocessing: fission products can be replaced by adding natural or even depleted uranium with no further enrichment required. This is the concept of the fast breeder reactor, or FBR.
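    The effect of these cross-section ratios on the required fissile fraction can be seen from a back-of-the-envelope criticality condition. The sketch below ignores leakage, capture in the fissile isotope, and absorption in coolant and structure, so real designs need considerably more enrichment (hence the >20% figure mentioned earlier); nu = 2.9 is a rough figure for neutrons per fast fission of Pu-239.

```python
def min_fissile_fraction(nu, fission_to_capture_ratio):
    """Lower bound on the fissile atom fraction f needed for criticality
    in a bare fissile/fertile mix. A neutron is absorbed in fissile
    material with relative weight f * ratio and in fertile material
    with weight (1 - f); criticality requires
        nu * f * ratio / (f * ratio + 1 - f) >= 1,
    which solves to f >= 1 / (nu * ratio - ratio + 1)."""
    r = fission_to_capture_ratio
    return 1.0 / (nu * r - r + 1.0)

print(min_fissile_fraction(2.9, 100))  # thermal spectrum: ~0.5%
print(min_fissile_fraction(2.9, 8))    # fast spectrum: ~6%
```

    Even in this idealized model, the fast spectrum demands an order of magnitude more fissile material than the thermal one.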

    So far, most fast neutron reactors have used either MOX (mixed oxide) or metal alloy fuel. Soviet fast neutron reactors have been using (high U-235 enriched) uranium fuel. The Indian prototype reactor has been using uranium-carbide fuel.

    While criticality at fast energies may be achieved with uranium enriched to 5.5 weight percent U-235, fast reactor designs have often been proposed with enrichments in the range of 20 percent for a variety of reasons, including core lifetime: if a fast reactor were loaded with only the minimal critical mass, it would become subcritical as soon as the first fissions had occurred. Instead, an excess of fuel is loaded along with reactivity control mechanisms: the control is fully inserted at the beginning of life to bring the reactor from supercritical to critical, and as the fuel is depleted it is withdrawn to compensate for the negative reactivity feedback from fuel depletion and fission product poisons. In a fast breeder reactor the same applies, though the reactivity loss from fuel depletion is also compensated by the breeding of U-233, or of Pu-239 and Pu-241, from Th-232 or U-238, respectively.

    Control

    Like thermal reactors, fast neutron reactors are controlled by keeping the criticality of the reactor reliant on delayed neutrons, with gross control from neutron-absorbing control rods or blades.
    They cannot, however, rely on changes to their moderator, because there is no moderator. Doppler broadening in the moderator, which affects thermal neutrons, is therefore unavailable, as is a negative void coefficient of the moderator. Both techniques are very common in ordinary light water reactors.

    Doppler broadening from the thermal motion of the fuel can provide rapid negative feedback: the movement of the fissionable atoms themselves shifts the fuel's relative speed away from the optimal neutron speed. Thermal expansion of the fuel can also provide quick negative feedback. Small reactors, such as those used in submarines, may use Doppler broadening or thermal expansion of neutron reflectors.
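    The reliance on delayed neutrons for control can be quantified with the standard one-delayed-group point-kinetics approximation. The parameter values below (delayed-neutron fraction beta, effective precursor decay constant lam) are typical textbook figures, not values for any particular reactor.

```python
def stable_period(rho, beta=0.0065, lam=0.08):
    """Approximate stable reactor period in seconds for a small step
    reactivity insertion rho < beta, using the one-delayed-group
    result T ≈ (beta - rho) / (lam * rho). For rho >= beta the reactor
    is prompt critical and power instead rises on the (microsecond)
    prompt neutron lifetime, far too fast for mechanical control."""
    if rho >= beta:
        raise ValueError("rho >= beta: prompt critical")
    return (beta - rho) / (lam * rho)

print(stable_period(0.001))  # tens of seconds: slow enough for control rods
```

    A reactivity insertion of 0.001 gives a period of roughly a minute, which is why criticality is kept reliant on delayed neutrons in fast and thermal reactors alike.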

    Image caption: Shevchenko BN-350 desalination unit, the only nuclear-heated desalination unit in the world.

    History

    A 2008 IAEA proposal for a Fast Reactor Knowledge Preservation System[12] notes that:
    during the past 15 years there has been stagnation in the development of fast reactors in the industrialized countries that were involved, earlier, in intensive development of this area. All studies on fast reactors have been stopped in countries such as Germany, Italy, the United Kingdom and the United States of America and the only work being carried out is related to the decommissioning of fast reactors. Many specialists who were involved in the studies and development work in this area in these countries have already retired or are close to retirement. In countries such as France, Japan and the Russian Federation that are still actively pursuing the evolution of fast reactor technology, the situation is aggravated by the lack of young scientists and engineers moving into this branch of nuclear power.

    List of fast reactors

    Decommissioned reactors

    United States

    • CLEMENTINE, the first fast reactor, built in 1946 at Los Alamos National Laboratory. Plutonium metal fuel, mercury coolant, power 25 kW thermal, used for research, especially as a fast neutron source.
    • EBR-I at Idaho Falls, which in 1951 became the first reactor to generate significant amounts of electrical power. Decommissioned 1964.
    • Fermi 1 near Detroit was a prototype fast breeder reactor that began operating in 1957 and shut down in 1972.
    • EBR-II, prototype for the Integral Fast Reactor, 1965–1995.
    • SEFOR in Arkansas, a 20 MWt research reactor which operated from 1969 to 1972.
    • Fast Flux Test Facility, 400 MWt, operated flawlessly from 1982 to 1992 at Hanford, Washington; now deactivated, with its liquid sodium drained and an argon backfill, under care and maintenance.

    Europe

    • DFR (Dounreay Fast Reactor, 1959–1977, 14 MWe) and PFR (Prototype Fast Reactor, 1974–1994, 250 MWe), in Caithness, in the Highland area of Scotland.
    • Rhapsodie in Cadarache, France, (20 then 40 MW) between 1967 and 1982.
    • Superphénix, in France, 1200 MWe, closed in 1997 due to a political decision and very high costs of operation.
    • Phénix, 1973, France, 233 MWe; restarted in 2003 at 140 MWe for six years of experiments on the transmutation of nuclear waste; ceased power generation in March 2009, though it continued in test operation and in CEA research programs until the end of 2009. Stopped in 2010.
    • KNK-II, Germany

    USSR/Russia

    • Small lead-cooled fast reactors used for naval propulsion, particularly by the Soviet Navy.
    • BR-5 - research fast neutron reactor at the Institute of Physics and Energy in Obninsk. Years of operation 1959-2002.
    • BN-350, constructed by the Soviet Union in Shevchenko (today's Aqtau) on the Caspian Sea, 130 MWe plus 80,000 tons of fresh water per day.
    • IBR-2 - research fast neutron reactor at the Joint Institute of Nuclear Research in Dubna (near Moscow).
    • BN-600 - sodium-cooled fast breeder reactor at the Beloyarsk Nuclear Power Station. Provides 560 MW to the Middle Urals power grid. In operation since 1980.
    • BN-800 - sodium-cooled fast breeder reactor at the Beloyarsk Nuclear Power Station. Designed to generate 880 MW of electrical power. Started producing electricity in October 2014. Achieved full power in August 2016.

    Asia

    • Monju reactor, 300 MWe, in Japan, was closed in 1995 following a serious sodium leak and fire. It was restarted on May 6, 2010, but in August 2010 another accident, involving dropped machinery, shut down the reactor again. As of June 2011, the reactor had generated electricity for only one hour since its first testing two decades prior.

    Never operated

    Currently operating

    • BN-600, 1981, Russia, 600 MWe, scheduled end of life 2010[13] but still in operation.[14]
    • BN-800, Russia, testing began June 27, 2014,[15][16] estimated total power 880 MW. Achieved full power in August 2016.
    • BOR-60 - sodium-cooled reactor at the Research Institute of Atomic Reactors in Dimitrovgrad. In operation since 1980 (experimental purposes).
    • FBTR, 1985, India, 10.5 MWt (experimental purposes)
    • China Experimental Fast Reactor, 65 MWt (experimental purposes), planned 2009, critical 2010[17]

    Under repair

    • Jōyō (常陽), 1977–1997 and 2004–2007, Japan, 140 MWt. Experimental reactor, operated as an irradiation test facility. After an incident in 2007, operation was suspended for repairs; recovery work was planned to be completed in 2014.[18]

    Under construction

    • PFBR, Kalpakkam, India, 500 MWe.
    • CFR-600, China, 600 MWe.

    In design phase

    • BN-1200, Russia, build starting after 2014,[19] operation in 2018–2020[20]
    • Toshiba 4S, being developed in Japan; it was planned to be shipped to Galena, Alaska (USA), but progress has stalled (see Galena Nuclear Power Plant).
    • KALIMER, 600 MWe, South Korea, projected 2030.[21] KALIMER is a continuation of the sodium cooled, metallic fueled, fast neutron reactor in a pool represented by the Advanced Burner Reactor (2006), S-PRISM (1998-present), Integral Fast Reactor (1984-1994), and EBR-II (1965-1995).
    • Generation IV reactors (helium-, sodium- or lead-cooled), a US-proposed international effort, after 2030.
    • JSFR, Japan, a project for a 1500 MWe reactor begun in 1998, but without success.
    • ASTRID, France, project for a 600 MWe sodium-cooled reactor. Planned experimental operation in 2020.[22]
    • MACR, 1 MWe, USA/Mars, projected 2033. MACR (Mars Atmospherically Cooled Reactor) is a gas-cooled (Carbon Dioxide coolant) fast neutron reactor intended to provide power to the planned Mars colonies.

    Planned

    • Future FBR, India, 600 MWe, after 2025[23]
