
Saturday, August 12, 2023

CNO cycle

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/CNO_cycle
[Figure: Logarithm of the relative energy output (ε) of the proton–proton (p–p), CNO, and triple-α fusion processes at different temperatures (T). The dashed line shows the combined energy generation of the p–p and CNO processes within a star.]

The CNO cycle (for carbon–nitrogen–oxygen; sometimes called the Bethe–Weizsäcker cycle after Hans Albrecht Bethe and Carl Friedrich von Weizsäcker) is one of the two known sets of fusion reactions by which stars convert hydrogen to helium, the other being the proton–proton chain reaction (p–p cycle), which is more efficient at the Sun's core temperature. The CNO cycle is hypothesized to be dominant in stars that are more than 1.3 times as massive as the Sun.

Unlike the proton–proton reaction, which consumes all its constituents, the CNO cycle is a catalytic cycle. In the CNO cycle, four protons fuse, using carbon, nitrogen, and oxygen isotopes as catalysts, each of which is consumed at one step of the CNO cycle but regenerated in a later step. The end product is one alpha particle (a stable helium nucleus), two positrons, and two electron neutrinos.

There are various alternative paths and catalysts involved in the CNO cycles, but all these cycles have the same net result:

4 ¹H + 2 e⁻ → ⁴He + 2 e⁺ + 2 e⁻ + 2 ν_e + 3 γ + 24.7 MeV
            → ⁴He + 2 ν_e + 7 γ + 26.7 MeV

The positrons almost instantly annihilate with electrons, releasing energy in the form of gamma rays. The neutrinos escape from the star, carrying away some energy. A single catalyst nucleus cycles through carbon, nitrogen, and oxygen isotopes in a repeating sequence of transformations.

[Figure: Overview of the CNO-I cycle.]

The proton–proton chain is more prominent in stars the mass of the Sun or less. This difference stems from the different temperature dependences of the two reactions; the pp-chain reaction starts at temperatures around 4×10⁶ K (4 megakelvin), making it the dominant energy source in smaller stars. A self-maintaining CNO chain starts at approximately 15×10⁶ K, but its energy output rises much more rapidly with increasing temperature, so that it becomes the dominant source of energy at approximately 17×10⁶ K.
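As a rough illustration of this temperature sensitivity, the following sketch (not from the article) assumes the commonly quoted approximate scalings ε_pp ∝ T⁴ and ε_CNO ∝ T¹⁷ near these temperatures, and normalizes both curves so that they cross at the ~17×10⁶ K crossover quoted above; it then prints which process dominates at a few temperatures.

# Toy comparison of p-p vs CNO energy generation as a function of temperature.
# Assumptions (not from the article): near T ~ 15-20 MK the rates scale roughly
# as eps_pp ~ T^4 and eps_CNO ~ T^17; both curves are normalized so that they
# cross at ~17 MK, matching the crossover quoted in the text.

T_cross = 17.0e6  # K, approximate p-p / CNO crossover from the text

def eps_pp(T):
    """Schematic p-p energy generation rate (arbitrary units)."""
    return (T / T_cross) ** 4

def eps_cno(T):
    """Schematic CNO energy generation rate (arbitrary units)."""
    return (T / T_cross) ** 17

for T in (4e6, 15e6, 15.7e6, 17e6, 20e6, 25e6):
    ratio = eps_cno(T) / eps_pp(T)
    dominant = "CNO" if ratio > 1 else "p-p"
    print(f"T = {T/1e6:5.1f} MK   eps_CNO/eps_pp = {ratio:9.3g}   dominant: {dominant}")

With these assumed exponents the p–p chain dominates at the Sun's core temperature of about 15.7 MK, while the CNO contribution overtakes it just above 17 MK, consistent with the crossover described above.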

The Sun has a core temperature of around 15.7×10⁶ K, and only 1.7% of the ⁴He nuclei produced in the Sun are born in the CNO cycle.

The CNO-I process was independently proposed by Carl von Weizsäcker and Hans Bethe in the late 1930s.

The first reports of the experimental detection of the neutrinos produced by the CNO cycle in the Sun were published in 2020 by the BOREXINO collaboration. This was also the first experimental confirmation that the Sun had a CNO cycle, that the proposed magnitude of the cycle was accurate, and that von Weizsäcker and Bethe were correct.

Cold CNO cycles

Under typical conditions found in stars, catalytic hydrogen burning by the CNO cycles is limited by proton captures. Specifically, the timescale for beta decay of the radioactive nuclei produced is shorter than the timescale for fusion (proton capture). Because of the long timescales involved, the cold CNO cycles convert hydrogen to helium slowly, allowing them to power stars in quiescent equilibrium for many years.

CNO-I

The first proposed catalytic cycle for the conversion of hydrogen into helium was initially called the carbon–nitrogen cycle (CN cycle), also referred to as the Bethe–Weizsäcker cycle in honor of the independent work of Carl Friedrich von Weizsäcker in 1937–38 and Hans Bethe. Bethe's 1939 papers on the CN cycle drew on three earlier papers written in collaboration with Robert Bacher and Milton Stanley Livingston, which came to be known informally as "Bethe's Bible". It was considered the standard work on nuclear physics for many years and was a significant factor in his being awarded the 1967 Nobel Prize in Physics. Bethe's original calculations suggested the CN cycle was the Sun's primary source of energy. This conclusion arose from the belief, now known to be mistaken, that the abundance of nitrogen in the Sun is approximately 10%; it is actually less than half a percent. The CN cycle, so named because it contains no stable isotope of oxygen, involves the following cycle of transformations:

¹²C → ¹³N → ¹³C → ¹⁴N → ¹⁵O → ¹⁵N → ¹²C
This cycle is now understood as being the first part of a larger process, the CNO-cycle, and the main reactions in this part of the cycle (CNO-I) are:

¹²C + ¹H → ¹³N + γ         + 1.95 MeV
¹³N      → ¹³C + e⁺ + ν_e  + 1.20 MeV   (half-life of 9.965 minutes)
¹³C + ¹H → ¹⁴N + γ         + 7.54 MeV
¹⁴N + ¹H → ¹⁵O + γ         + 7.35 MeV
¹⁵O      → ¹⁵N + e⁺ + ν_e  + 1.73 MeV   (half-life of 122.24 seconds)
¹⁵N + ¹H → ¹²C + ⁴He       + 4.96 MeV

where the carbon-12 nucleus used in the first reaction is regenerated in the last reaction. After the two emitted positrons annihilate with two ambient electrons, releasing an additional 2.04 MeV, the total energy released in one cycle is 26.73 MeV. Some texts erroneously include the positron annihilation energy in the beta-decay Q-values and then neglect the equal amount of energy released by annihilation, which can lead to confusion. All values are calculated with reference to the 2003 Atomic Mass Evaluation.
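This bookkeeping can be checked mechanically. The sketch below encodes the six CNO-I steps listed above as plain records (a hypothetical representation chosen for illustration, not a standard library), verifies that the carbon-12 catalyst consumed in the first step is returned by the last one, and adds the 2 × 0.511 MeV released per positron annihilation. Summing the rounded per-step values gives about 26.8 MeV, which matches the quoted 26.73 MeV total to within the rounding of the table.

# CNO-I steps as listed above: (reactants, products, energy released in MeV).
cno1 = [
    (("C12", "H1"), ("N13", "gamma"),      1.95),
    (("N13",),      ("C13", "e+", "nu_e"), 1.20),
    (("C13", "H1"), ("N14", "gamma"),      7.54),
    (("N14", "H1"), ("O15", "gamma"),      7.35),
    (("O15",),      ("N15", "e+", "nu_e"), 1.73),
    (("N15", "H1"), ("C12", "He4"),        4.96),
]

# The catalyst consumed in the first step must be regenerated by the last step.
assert "C12" in cno1[0][0] and "C12" in cno1[-1][1]

q_steps = sum(q for _, _, q in cno1)
n_positrons = sum(products.count("e+") for _, products, _ in cno1)
annihilation = n_positrons * 2 * 0.511   # each positron annihilates with one electron

print(f"sum of per-step energies : {q_steps:.2f} MeV")
print(f"positron annihilation    : {annihilation:.2f} MeV")
print(f"total per cycle          : {q_steps + annihilation:.2f} MeV  (quoted: 26.73 MeV)")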

The limiting (slowest) reaction in the CNO-I cycle is the proton capture on ¹⁴N. In 2006 it was experimentally measured down to stellar energies, revising the calculated age of globular clusters by around 1 billion years.

The neutrinos emitted in beta decay have a spectrum of energies: although momentum is conserved, it can be shared in any proportion between the positron and the neutrino, from one of them being emitted at rest while the other carries the full decay energy to anything in between, so long as all of the energy from the Q-value is accounted for. The total momentum received by the positron and the neutrino is not large enough to cause a significant recoil of the much heavier daughter nucleus, so its contribution to the kinetic energy of the products can be neglected at the precision of the values given here. Thus the neutrino emitted during the decay of nitrogen-13 can have an energy from zero up to 1.20 MeV, and the neutrino emitted during the decay of oxygen-15 can have an energy from zero up to 1.73 MeV. On average, about 1.7 MeV of the total energy output is carried away by neutrinos for each loop of the cycle, leaving about 25 MeV available for producing luminosity.

CNO-II

In a minor branch of the above cycle, occurring in the Sun's core 0.04% of the time, the final reaction involving ¹⁵N shown above does not produce carbon-12 and an alpha particle, but instead produces oxygen-16 and a photon, and continues

¹⁵N → ¹⁶O → ¹⁷F → ¹⁷O → ¹⁴N → ¹⁵O → ¹⁵N
In detail:

¹⁵N + ¹H → ¹⁶O + γ         + 12.13 MeV
¹⁶O + ¹H → ¹⁷F + γ         + 0.60 MeV
¹⁷F      → ¹⁷O + e⁺ + ν_e  + 2.76 MeV   (half-life of 64.49 seconds)
¹⁷O + ¹H → ¹⁴N + ⁴He       + 1.19 MeV
¹⁴N + ¹H → ¹⁵O + γ         + 7.35 MeV
¹⁵O      → ¹⁵N + e⁺ + ν_e  + 2.75 MeV   (half-life of 122.24 seconds)

Like the carbon, nitrogen, and oxygen involved in the main branch, the fluorine produced in the minor branch is merely an intermediate product; at steady state, it does not accumulate in the star.

CNO-III

This subdominant branch is significant only for massive stars. The reactions start when one of the reactions in CNO-II produces fluorine-18 and a photon instead of nitrogen-14 and an alpha particle, and the cycle continues

¹⁷O → ¹⁸F → ¹⁸O → ¹⁵N → ¹⁶O → ¹⁷F → ¹⁷O

In detail:

¹⁷O + ¹H → ¹⁸F + γ         + 5.61 MeV
¹⁸F      → ¹⁸O + e⁺ + ν_e  + 1.656 MeV  (half-life of 109.771 minutes)
¹⁸O + ¹H → ¹⁵N + ⁴He       + 3.98 MeV
¹⁵N + ¹H → ¹⁶O + γ         + 12.13 MeV
¹⁶O + ¹H → ¹⁷F + γ         + 0.60 MeV
¹⁷F      → ¹⁷O + e⁺ + ν_e  + 2.76 MeV   (half-life of 64.49 seconds)

CNO-IV

[Figure: A proton reacts with a nucleus, causing release of an alpha particle.]

Like CNO-III, this branch is significant only in massive stars. The reactions start when one of the reactions in CNO-III produces fluorine-19 and a photon instead of nitrogen-15 and an alpha particle, and the cycle continues

¹⁸O → ¹⁹F → ¹⁶O → ¹⁷F → ¹⁷O → ¹⁸F → ¹⁸O

In detail:

¹⁸O + ¹H → ¹⁹F + γ         + 7.994 MeV
¹⁹F + ¹H → ¹⁶O + ⁴He       + 8.114 MeV
¹⁶O + ¹H → ¹⁷F + γ         + 0.60 MeV
¹⁷F      → ¹⁷O + e⁺ + ν_e  + 2.76 MeV   (half-life of 64.49 seconds)
¹⁷O + ¹H → ¹⁸F + γ         + 5.61 MeV
¹⁸F      → ¹⁸O + e⁺ + ν_e  + 1.656 MeV  (half-life of 109.771 minutes)

In some instances ¹⁸F can combine with a helium nucleus to start a sodium–neon cycle.

Hot CNO cycles

Under conditions of higher temperature and pressure, such as those found in novae and X-ray bursts, the rate of proton captures exceeds the rate of beta-decay, pushing the burning to the proton drip line. The essential idea is that a radioactive species will capture a proton before it can beta decay, opening new nuclear burning pathways that are otherwise inaccessible. Because of the higher temperatures involved, these catalytic cycles are typically referred to as the hot CNO cycles; because the timescales are limited by beta decays instead of proton captures, they are also called the beta-limited CNO cycles.

HCNO-I

The difference between the CNO-I cycle and the HCNO-I cycle is that ¹³N captures a proton instead of decaying, leading to the total sequence

¹²C → ¹³N → ¹⁴O → ¹⁴N → ¹⁵O → ¹⁵N → ¹²C

In detail:

¹²C + ¹H → ¹³N + γ         + 1.95 MeV
¹³N + ¹H → ¹⁴O + γ         + 4.63 MeV
¹⁴O      → ¹⁴N + e⁺ + ν_e  + 5.14 MeV   (half-life of 70.641 seconds)
¹⁴N + ¹H → ¹⁵O + γ         + 7.35 MeV
¹⁵O      → ¹⁵N + e⁺ + ν_e  + 2.75 MeV   (half-life of 122.24 seconds)
¹⁵N + ¹H → ¹²C + ⁴He       + 4.96 MeV

HCNO-II

The notable difference between the CNO-II cycle and the HCNO-II cycle is that ¹⁷F captures a proton instead of decaying, and neon is produced in a subsequent reaction on ¹⁸F, leading to the total sequence

¹⁵N → ¹⁶O → ¹⁷F → ¹⁸Ne → ¹⁸F → ¹⁵O → ¹⁵N

In detail:

¹⁵N + ¹H → ¹⁶O + γ         + 12.13 MeV
¹⁶O + ¹H → ¹⁷F + γ         + 0.60 MeV
¹⁷F + ¹H → ¹⁸Ne + γ        + 3.92 MeV
¹⁸Ne     → ¹⁸F + e⁺ + ν_e  + 4.44 MeV   (half-life of 1.672 seconds)
¹⁸F + ¹H → ¹⁵O + ⁴He       + 2.88 MeV
¹⁵O      → ¹⁵N + e⁺ + ν_e  + 2.75 MeV   (half-life of 122.24 seconds)

HCNO-III

An alternative to the HCNO-II cycle is that ¹⁸F captures a proton, moving towards higher mass and using the same helium-production mechanism as the CNO-IV cycle, as

¹⁸F → ¹⁹Ne → ¹⁹F → ¹⁶O → ¹⁷F → ¹⁸Ne → ¹⁸F

In detail:

¹⁸F + ¹H → ¹⁹Ne + γ        + 6.41 MeV
¹⁹Ne     → ¹⁹F + e⁺ + ν_e  + 3.32 MeV   (half-life of 17.22 seconds)
¹⁹F + ¹H → ¹⁶O + ⁴He       + 8.11 MeV
¹⁶O + ¹H → ¹⁷F + γ         + 0.60 MeV
¹⁷F + ¹H → ¹⁸Ne + γ        + 3.92 MeV
¹⁸Ne     → ¹⁸F + e⁺ + ν_e  + 4.44 MeV   (half-life of 1.672 seconds)

Use in astronomy

While the total number of "catalytic" nuclei is conserved in the cycle, stellar evolution alters their relative proportions. When the cycle is run to equilibrium, the ratio of carbon-12 to carbon-13 nuclei is driven to 3.5, and nitrogen-14 becomes the most numerous nucleus, regardless of the initial composition. During a star's evolution, convective mixing episodes move material within which the CNO cycle has operated from the star's interior to the surface, altering the observed composition of the star. Red giant stars are observed to have lower carbon-12/carbon-13 and carbon-12/nitrogen-14 ratios than main-sequence stars, which is considered convincing evidence for the operation of the CNO cycle.

Nuclear structure

From Wikipedia, the free encyclopedia

Understanding the structure of the atomic nucleus is one of the central challenges in nuclear physics.

Models

The liquid drop model

The liquid drop model is one of the first models of nuclear structure, proposed by Carl Friedrich von Weizsäcker in 1935. It describes the nucleus as a semiclassical fluid made up of neutrons and protons, with an internal repulsive electrostatic force proportional to the number of protons. The quantum mechanical nature of these particles appears via the Pauli exclusion principle, which states that no two nucleons of the same kind can be in the same state. Thus the fluid is actually what is known as a Fermi liquid. In this model, the binding energy of a nucleus with Z protons and N neutrons is given by

E_B = a_V A − a_S A^(2/3) − a_C Z(Z−1)/A^(1/3) − a_A (N−Z)²/A + δ(A,Z)

where A = Z + N is the total number of nucleons (mass number). The terms proportional to A and A^(2/3) represent the volume and surface energy of the liquid drop, the term proportional to Z(Z−1)/A^(1/3) represents the electrostatic energy, the term proportional to (N−Z)²/A represents the Pauli exclusion principle, and the last term δ(A,Z) is the pairing term, which lowers the energy for even numbers of protons or neutrons. The coefficients and the strength of the pairing term may be estimated theoretically or fitted to data. This simple model reproduces the main features of the binding energy of nuclei.
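As a concrete illustration, the sketch below evaluates this formula with one common textbook parameter set; the numerical coefficients are typical fitted values assumed here for illustration, not taken from this article, and other sources quote slightly different numbers.

# Semi-empirical (liquid drop) binding energy, in MeV.
# Coefficients are one common textbook fit; values vary slightly between sources.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, N):
    A = Z + N
    pairing = A_P / A**0.5
    if Z % 2 == 0 and N % 2 == 0:
        delta = +pairing        # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -pairing        # odd-odd nuclei are less bound
    else:
        delta = 0.0             # even-odd nuclei
    return (A_V * A
            - A_S * A ** (2 / 3)
            - A_C * Z * (Z - 1) / A ** (1 / 3)
            - A_A * (N - Z) ** 2 / A
            + delta)

for name, Z, N in [("O-16", 8, 8), ("Fe-56", 26, 30), ("U-238", 92, 146)]:
    B = binding_energy(Z, N)
    print(f"{name:6s}  B = {B:7.1f} MeV   B/A = {B/(Z+N):5.2f} MeV per nucleon")

With these coefficients the binding energy per nucleon comes out near 8 MeV and peaks around iron, reproducing the main trend of the experimental curve, as stated above.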

The description of the nucleus as a drop of Fermi liquid is still widely used in the form of the finite-range droplet model (FRDM), because of its good reproduction of nuclear binding energies across the whole chart of nuclides, with the accuracy necessary for predictions of unknown nuclei.

The shell model

The expression "shell model" is ambiguous in that it refers to two different items. It was previously used to describe the existence of nucleon shells according to an approach closer to what is now called mean field theory. Nowadays, it refers to a formalism analogous to the configuration interaction formalism used in quantum chemistry.

Introduction to the shell concept

[Figure: Difference between experimental binding energies and the liquid drop model prediction, as a function of neutron number, for Z > 7.]

Systematic measurements of the binding energies of atomic nuclei show deviations with respect to those estimated from the liquid drop model. In particular, some nuclei with certain values for the number of protons and/or neutrons are bound more tightly than predicted by the liquid drop model. These nuclei are called singly/doubly magic. This observation led scientists to assume the existence of a shell structure of nucleons (protons and neutrons) within the nucleus, like that of electrons within atoms.

Indeed, nucleons are quantum objects. Strictly speaking, one should not speak of energies of individual nucleons, because they are all correlated with each other. However, as an approximation one may envision an average nucleus, within which nucleons propagate individually. Owing to their quantum character, they may only occupy discrete energy levels. These levels are by no means uniformly distributed; some intervals of energy are crowded, and some are empty, generating a gap in possible energies. A shell is such a set of levels separated from the other ones by a wide empty gap.

The energy levels are found by solving the Schrödinger equation for a single nucleon moving in the average potential generated by all other nucleons. Each level may be occupied by a nucleon, or empty. Some levels accommodate several different quantum states with the same energy; they are said to be degenerate. This occurs in particular if the average nucleus has some symmetry.

The concept of shells allows one to understand why some nuclei are bound more tightly than others. This is because two nucleons of the same kind cannot be in the same state (Pauli exclusion principle). So the lowest-energy state of the nucleus is one where nucleons fill all energy levels from the bottom up to some level. A nucleus with full shells is exceptionally stable, as will be explained.

As with electrons in the electron shell model, protons in the outermost shell are relatively loosely bound to the nucleus if there are only a few protons in that shell, because they are farthest from the center of the nucleus. Therefore, nuclei which have a full outer proton shell will be more tightly bound and have a higher binding energy than other nuclei with a similar total number of protons. The same is true for neutrons.

Furthermore, the energy needed to excite the nucleus (i.e. moving a nucleon to a higher, previously unoccupied level) is exceptionally high in such nuclei. Whenever this unoccupied level is the next after a full shell, the only way to excite the nucleus is to raise one nucleon across the gap, thus spending a large amount of energy. Otherwise, if the highest occupied energy level lies in a partly filled shell, much less energy is required to raise a nucleon to a higher state in the same shell.

Some evolution of the shell structure observed in stable nuclei is expected away from the valley of stability. For example, observations of unstable isotopes have shown shifting and even a reordering of the single particle levels of which the shell structure is composed. This is sometimes observed as the creation of an island of inversion or in the reduction of excitation energy gaps above the traditional magic numbers.

Basic hypotheses

Some basic hypotheses are made in order to give a precise conceptual framework to the shell model:

  • The atomic nucleus is a quantum n-body system.
  • The internal motion of nucleons within the nucleus is non-relativistic, and their behavior is governed by the Schrödinger equation.
  • Nucleons are considered to be pointlike, without any internal structure.

Brief description of the formalism

The general process used in shell model calculations is the following. First, a Hamiltonian for the nucleus is defined. Usually, for computational practicality, only one- and two-body terms are taken into account in this definition. The interaction is an effective theory: it contains free parameters which have to be fitted to experimental data.

The next step consists in defining a basis of single-particle states, i.e. a set of wavefunctions describing all possible nucleon states. Most of the time, this basis is obtained via a Hartree–Fock computation. With this set of one-particle states, Slater determinants are built, that is, wavefunctions for Z proton variables or N neutron variables, which are antisymmetrized products of single-particle wavefunctions (antisymmetrized meaning that under exchange of variables for any pair of nucleons, the wavefunction only changes sign).

In principle, the number of quantum states available for a single nucleon at a finite energy is finite, say n. The number of nucleons in the nucleus must be smaller than the number of available states, otherwise the nucleus cannot hold all of its nucleons. There are thus several ways to choose Z (or N) states among the n possible. In combinatorial mathematics, the number of ways to choose Z objects among n is the binomial coefficient C(n, Z). If n is much larger than Z (or N), this grows roughly like n^Z. In practice, this number becomes so large that every computation is impossible for A = N + Z larger than 8.
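To see how quickly this combinatorial count explodes, the sketch below evaluates C(n, Z) for a few valence-space sizes; the particular values of n, Z, and N are arbitrary examples chosen for illustration, not taken from the article.

from math import comb

# Number of Slater determinants for Z identical nucleons distributed over n
# single-particle states is the binomial coefficient C(n, Z).
for n, Z in [(12, 4), (24, 8), (40, 12), (80, 20)]:
    print(f"n = {n:3d}, Z = {Z:2d}  ->  C(n, Z) = {comb(n, Z):,}")

# With both valence protons and valence neutrons, the basis dimension is the
# product of the two counts, which grows even faster (sizes here are arbitrary).
n_p = n_n = 40
Z_val, N_val = 10, 10
print(f"combined basis dimension ~ {comb(n_p, Z_val) * comb(n_n, N_val):.3e}")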

To obviate this difficulty, the space of possible single-particle states is divided into a core and a valence space, by analogy with chemistry (see core electron and valence electron). The core is a set of single-particle states which are assumed to be inactive, in the sense that they are the well-bound, lowest-energy states, and there is no need to re-examine their situation. They do not appear in the Slater determinants, in contrast to the states in the valence space, which is the space of all single-particle states not in the core but possibly considered in building the Z-body (or N-body) wavefunction. The set of all possible Slater determinants in the valence space defines a basis for Z-body (or N-body) states.

The last step consists in computing the matrix of the Hamiltonian within this basis and diagonalizing it. Despite the reduction in the dimension of the basis achieved by fixing the core, the matrices to be diagonalized easily reach dimensions of the order of 10⁹, and demand specific diagonalization techniques.
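The matrices in question are huge but very sparse, so in practice only a handful of the lowest eigenvalues are extracted with iterative (Lanczos-type) methods rather than by full diagonalization. The toy sketch below illustrates that idea using a small random sparse symmetric matrix as a stand-in for a shell-model Hamiltonian; it is not a shell-model code.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy stand-in for a shell-model Hamiltonian: a sparse, symmetric random matrix.
# Real matrices reach dimensions ~10^9; here the dimension is kept small.
dim = 20_000
rng = np.random.default_rng(0)
off_diag = sp.random(dim, dim, density=1e-4, format="csr", random_state=0)
h = 0.5 * (off_diag + off_diag.T)            # symmetrize the off-diagonal part
h = h + sp.diags(rng.normal(size=dim))       # diagonal "single-particle" energies

# Iterative (Lanczos-type) solver: compute only the few lowest eigenvalues,
# instead of performing a full diagonalization of the matrix.
lowest = eigsh(h, k=5, which="SA", return_eigenvectors=False)
print("five lowest eigenvalues:", np.sort(lowest))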

Shell model calculations in general give an excellent fit to experimental data. However, they depend strongly on two main factors:

  • The way to divide the single-particle space into core and valence.
  • The effective nucleon–nucleon interaction.

Mean field theories

The independent-particle model (IPM)

The interaction between nucleons, which is a consequence of strong interactions and binds the nucleons within the nucleus, exhibits the peculiar behaviour of having a finite range: it vanishes when the distance between two nucleons becomes too large; it is attractive at medium range, and repulsive at very small range. This last property correlates with the Pauli exclusion principle according to which two fermions (nucleons are fermions) cannot be in the same quantum state. This results in a very large mean free path predicted for a nucleon within the nucleus.

The main idea of the Independent Particle approach is that a nucleon moves inside a certain potential well (which keeps it bound to the nucleus) independently from the other nucleons. This amounts to replacing an N-body problem (N particles interacting) by N single-body problems. This essential simplification of the problem is the cornerstone of mean field theories. These are also widely used in atomic physics, where electrons move in a mean field due to the central nucleus and the electron cloud itself.

The independent particle model and mean field theories (we shall see that there exist several variants) have had great success in describing the properties of the nucleus starting from an effective interaction or an effective potential, and are thus a basic part of atomic nucleus theory. They are also quite modular, in that it is easy to extend the model to introduce effects such as nuclear pairing, or collective motions of the nucleons such as rotation or vibration, by adding the corresponding energy terms to the formalism. This implies that in many representations the mean field is only a starting point for a more complete description which introduces correlations reproducing properties like collective excitations and nucleon transfer.

Nuclear potential and effective interaction

A large part of the practical difficulties met in mean field theories is the definition (or calculation) of the potential of the mean field itself. One can very roughly distinguish between two approaches:

  • The phenomenological approach is a parameterization of the nuclear potential by an appropriate mathematical function. Historically, this procedure was applied with the greatest success by Sven Gösta Nilsson, who used as a potential a (deformed) harmonic oscillator potential. The most recent parameterizations are based on more realistic functions, which account more accurately for scattering experiments, for example. In particular the form known as the Woods–Saxon potential can be mentioned.
  • The self-consistent or Hartree–Fock approach aims to deduce mathematically the nuclear potential from an effective nucleon–nucleon interaction. This technique implies solving the Schrödinger equation iteratively, starting from an ansatz wavefunction and improving it variationally, since the potential depends on the very wavefunctions to be determined. The latter are written as Slater determinants.

In the case of the Hartree–Fock approaches, the difficulty is not to find the mathematical function which best describes the nuclear potential, but the one which best describes the nucleon–nucleon interaction. Indeed, in contrast with atomic physics, where the interaction is known (it is the Coulomb interaction), the nucleon–nucleon interaction within the nucleus is not known analytically.

There are two main reasons for this fact. First, the strong interaction acts essentially among the quarks forming the nucleons. The nucleon–nucleon interaction in vacuum is a mere consequence of the quark–quark interaction. While the latter is well understood in the framework of the Standard Model at high energies, it is much more complicated at low energies, due to color confinement and asymptotic freedom. Thus there is as yet no fundamental theory allowing one to deduce the nucleon–nucleon interaction from the quark–quark interaction. Furthermore, even if this problem were solved, there would remain a large difference between the ideal (and conceptually simpler) case of two nucleons interacting in vacuum and that of these nucleons interacting in nuclear matter. To go further, it was necessary to invent the concept of effective interaction. The latter is basically a mathematical function with several arbitrary parameters, which are adjusted to agree with experimental data.

Most modern effective interactions are zero-range, so they act only when the two nucleons are in contact, as introduced by Tony Skyrme.

The self-consistent approaches of the Hartree–Fock type

In the Hartree–Fock approach of the n-body problem, the starting point is a Hamiltonian containing n kinetic energy terms, and potential terms. As mentioned before, one of the mean field theory hypotheses is that only the two-body interaction is to be taken into account. The potential term of the Hamiltonian represents all possible two-body interactions in the set of n fermions. It is the first hypothesis.

The second step consists in assuming that the wavefunction of the system can be written as a Slater determinant of one-particle spin-orbitals. This statement is the mathematical translation of the independent-particle model. This is the second hypothesis.

There remains now to determine the components of this Slater determinant, that is, the individual wavefunctions of the nucleons. To this end, it is assumed that the total wavefunction (the Slater determinant) is such that the energy is minimum. This is the third hypothesis.

Technically, it means that one must compute the mean value of the (known) two-body Hamiltonian on the (unknown) Slater determinant, and impose that its mathematical variation vanishes. This leads to a set of equations where the unknowns are the individual wavefunctions: the Hartree–Fock equations. Solving these equations gives the wavefunctions and individual energy levels of nucleons, and so the total energy of the nucleus and its wavefunction.

This short account of the Hartree–Fock method explains why it is also called the variational approach. At the beginning of the calculation, the total energy is a "function of the individual wavefunctions" (a so-called functional), and everything is then done in order to optimize the choice of these wavefunctions so that the functional has a minimum, hopefully absolute and not only local. To be more precise, it should be mentioned that the energy is a functional of the density, defined as the sum of the individual squared wavefunctions. The Hartree–Fock method is also used in atomic physics and condensed matter physics, in the form of density functional theory (DFT).

The process of solving the Hartree–Fock equations can only be iterative, since they are in fact a Schrödinger equation in which the potential depends on the density, that is, precisely on the wavefunctions to be determined. Practically, the algorithm is started with a set of grossly reasonable individual wavefunctions (in general the eigenfunctions of a harmonic oscillator). These allow one to compute the density, and from it the Hartree–Fock potential. Once this is done, the Schrödinger equation is solved anew, and so on. The calculation stops, i.e. convergence is reached, when the difference between the wavefunctions, or the energy levels, of two successive iterations is less than a fixed value. The mean field potential is then completely determined, and the Hartree–Fock equations become standard Schrödinger equations. The corresponding Hamiltonian is then called the Hartree–Fock Hamiltonian.
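The loop structure itself is easy to sketch. The toy example below is not a nuclear Hartree–Fock code: it solves a one-dimensional single-particle problem on a grid with an arbitrary, made-up density-dependent potential, purely to illustrate the iterate-until-self-consistent scheme described above (guess wavefunctions, build the density, rebuild the potential, re-solve, repeat with simple mixing until the energies stop changing).

import numpy as np

# Schematic self-consistent-field loop on a 1-D grid (hbar = m = 1).
# The density-dependent potential below is an arbitrary toy choice, meant only
# to show the iteration structure, not a realistic nuclear mean field.
n_grid, box, n_orbitals = 200, 10.0, 3
x = np.linspace(-box / 2, box / 2, n_grid)
dx = x[1] - x[0]

# Kinetic energy operator from second-order finite differences.
kinetic = (np.diag(np.full(n_grid, 2.0))
           - np.diag(np.ones(n_grid - 1), 1)
           - np.diag(np.ones(n_grid - 1), -1)) / (2 * dx**2)

def mean_field(rho):
    """Toy 'mean field': a confining well plus a density-dependent attraction."""
    return 0.5 * x**2 - 1.5 * rho

# Initial guess for the density (roughly harmonic-oscillator shaped).
rho = np.exp(-x**2)
rho /= rho.sum() * dx

old = np.zeros(n_orbitals)
for iteration in range(1, 501):
    hamiltonian = kinetic + np.diag(mean_field(rho))
    energies, orbitals = np.linalg.eigh(hamiltonian)
    occupied = orbitals[:, :n_orbitals] / np.sqrt(dx)   # grid-normalized orbitals
    rho = 0.5 * rho + 0.5 * (occupied**2).sum(axis=1)   # new density, with mixing
    if np.max(np.abs(energies[:n_orbitals] - old)) < 1e-8:
        break
    old = energies[:n_orbitals].copy()

print(f"self-consistency reached after {iteration} iterations")
print("lowest single-particle energies:", energies[:n_orbitals])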

The relativistic mean field approaches

First developed in the 1970s with the works of John Dirk Walecka on quantum hadrodynamics, the relativistic models of the nucleus were sharpened towards the end of the 1980s by P. Ring and coworkers. The starting point of these approaches is relativistic quantum field theory. In this context, the nucleon interactions occur via the exchange of virtual particles called mesons. The idea is, in a first step, to build a Lagrangian containing these interaction terms. Second, by an application of the least action principle, one gets a set of equations of motion. The real particles (here the nucleons) obey the Dirac equation, whilst the virtual ones (here the mesons) obey the Klein–Gordon equations.

In view of the non-perturbative nature of the strong interaction, and also in view of the fact that the exact potential form of this interaction between groups of nucleons is relatively poorly known, the use of such an approach in the case of atomic nuclei requires drastic approximations. The main simplification consists in replacing all field terms in the equations (which are operators in the mathematical sense) by their mean values (which are functions). In this way, one gets a system of coupled integro-differential equations, which can be solved numerically, if not analytically.

The interacting boson model

The interacting boson model (IBM) is a model in nuclear physics in which nucleons are represented as pairs, each pair acting as a boson with integer spin of 0, 2 or 4. This makes calculations feasible for larger nuclei. There are several branches of this model: in one of them (IBM-1) all types of nucleons are grouped in pairs; in others (for instance IBM-2) protons and neutrons are paired separately.

Spontaneous breaking of symmetry in nuclear physics

One of the focal points of all physics is symmetry. The nucleon–nucleon interaction and all effective interactions used in practice have certain symmetries. They are invariant by translation (changing the frame of reference so that directions are not altered), by rotation (turning the frame of reference around some axis), or parity (changing the sense of axes) in the sense that the interaction does not change under any of these operations. Nevertheless, in the Hartree–Fock approach, solutions which are not invariant under such a symmetry can appear. One speaks then of spontaneous symmetry breaking.

Qualitatively, these spontaneous symmetry breakings can be explained in the following way: in the mean field theory, the nucleus is described as a set of independent particles. Most additional correlations among nucleons which do not enter the mean field are neglected. They can appear, however, through a breaking of the symmetry of the mean field Hamiltonian, which is only approximate. If the density used to start the Hartree–Fock iterations breaks certain symmetries, the final Hartree–Fock Hamiltonian may break these symmetries, if keeping them broken is advantageous from the point of view of the total energy.

It may also converge towards a symmetric solution. In any case, if the final solution breaks the symmetry, for example, the rotational symmetry, so that the nucleus appears not to be spherical, but elliptic, all configurations deduced from this deformed nucleus by a rotation are just as good solutions for the Hartree–Fock problem. The ground state of the nucleus is then degenerate.

A similar phenomenon happens with the nuclear pairing, which violates the conservation of the number of baryons (see below).

Extensions of the mean field theories

Nuclear pairing phenomenon

The most common extension to mean field theory is nuclear pairing. Nuclei with an even number of nucleons are systematically more bound than those with an odd one. This implies that each nucleon binds with another one to form a pair, and consequently the system cannot be described as independent particles subjected to a common mean field. When the nucleus has an even number of protons and neutrons, each one of them finds a partner. To excite such a system, one must supply at least the energy needed to break a pair. Conversely, in the case of an odd number of protons or neutrons, there exists an unpaired nucleon, which needs less energy to be excited.

This phenomenon is closely analogous to type 1 superconductivity in solid state physics. The first theoretical description of nuclear pairing was proposed at the end of the 1950s by Aage Bohr, Ben Mottelson, and David Pines (work that contributed to Bohr and Mottelson receiving the Nobel Prize in Physics in 1975). It was close to the BCS theory of Bardeen, Cooper and Schrieffer, which accounts for superconductivity in metals. Theoretically, the pairing phenomenon as described by BCS theory combines with the mean field theory: nucleons are subject both to the mean field potential and to the pairing interaction.

The Hartree–Fock–Bogolyubov (HFB) method is a more sophisticated approach, enabling one to consider the pairing and mean field interactions consistently on equal footing. HFB is now the de facto standard in the mean field treatment of nuclear systems.

Symmetry restoration

A peculiarity of mean field methods is the calculation of nuclear properties via explicit symmetry breaking. The calculation of the mean field with self-consistent methods (e.g. Hartree–Fock) breaks rotational symmetry, and the calculation of the pairing properties breaks particle-number conservation.

Several techniques for symmetry restoration by projecting on good quantum numbers have been developed.

Particle vibration coupling

Mean field methods (possibly including symmetry restoration) are a good approximation for the ground state of the system, even though they postulate a system of independent particles. Higher-order corrections take into account the fact that the particles interact with each other by means of correlations. These correlations can be introduced by coupling the independent-particle degrees of freedom to the low-energy collective excitations of systems with an even number of protons and neutrons.

In this way, excited states can be reproduced by means of the random phase approximation (RPA), and corrections to the ground state can also be calculated consistently (e.g. by means of nuclear field theory).
