
Saturday, May 26, 2018

CP violation

From Wikipedia, the free encyclopedia

In particle physics, CP violation is a violation of CP-symmetry (or charge conjugation parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle is interchanged with its antiparticle (C symmetry) while its spatial coordinates are inverted ("mirror" or P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch.

It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present Universe, and in the study of weak interactions in particle physics.

CP-symmetry

CP-symmetry, often called just CP, is the product of two symmetries: C for charge conjugation, which transforms a particle into its antiparticle, and P for parity, which creates the mirror image of a physical system. The strong interaction and electromagnetic interaction seem to be invariant under the combined CP transformation operation, but this symmetry is slightly violated during certain types of weak decay. Historically, CP-symmetry was proposed to restore order after the discovery of parity violation in the 1950s.

The idea behind parity symmetry is that the equations of particle physics are invariant under mirror inversion. This leads to the prediction that the mirror image of a reaction (such as a chemical reaction or radioactive decay) occurs at the same rate as the original reaction. Parity symmetry appears to be valid for all reactions involving electromagnetism and strong interactions. Until 1956, parity conservation was believed to be one of the fundamental geometric conservation laws (along with conservation of energy and conservation of momentum). However, in 1956 a careful critical review of the existing experimental data by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang revealed that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. The first test based on beta decay of cobalt-60 nuclei was carried out in 1956 by a group led by Chien-Shiung Wu, and demonstrated conclusively that weak interactions violate the P symmetry or, as the analogy goes, some reactions did not occur as often as their mirror image.

Overall, the symmetry of a quantum mechanical system can be restored if another symmetry S can be found such that the combined symmetry PS remains unbroken. This rather subtle point about the structure of Hilbert space was realized shortly after the discovery of P violation, and it was proposed that charge conjugation was the desired symmetry to restore order.

Simply speaking, charge conjugation is a symmetry between particles and antiparticles, and so CP-symmetry was proposed in 1957 by Lev Landau as the true symmetry between matter and antimatter. In other words, a process in which all particles are exchanged with their antiparticles was assumed to be equivalent to the mirror image of the original process.

CP violation in the Standard Model

"Direct" CP violation is allowed in the Standard Model if a complex phase appears in the CKM matrix describing quark mixing, or the PMNS matrix describing neutrino mixing. A necessary condition for the appearance of the complex phase is the presence of at least three generations of quarks. If fewer generations are present, the complex phase parameter can be absorbed into redefinitions of the quark fields. A popular rephasing invariant whose vanishing signals absence of CP violation and occurs in most CP violating amplitudes is the Jarlskog invariant, {\displaystyle J=c_{12}c_{13}^{2}c_{23}s_{12}s_{13}s_{23}\sin \delta \approx 3~10^{-5}.}

The reason why such a complex phase causes CP violation is not immediately obvious, but can be seen as follows. Consider any given particles (or sets of particles) a and b, and their antiparticles \bar{a} and \bar{b}. Now consider the processes a \rightarrow b and the corresponding antiparticle process \bar{a} \rightarrow \bar{b}, and denote their amplitudes M and \bar{M} respectively. Before CP violation, these terms must be the same complex number. We can separate the magnitude and phase by writing M = |M| e^{i\theta}. If a phase term is introduced from (e.g.) the CKM matrix, denote it e^{i\phi}. Note that \bar{M} contains the conjugate matrix to M, so it picks up a phase term e^{-i\phi}.

Now the formula becomes:
M = |M| e^{i\theta} e^{i\phi}
\bar{M} = |M| e^{i\theta} e^{-i\phi}
Physically measurable reaction rates are proportional to |M|^2, thus so far nothing is different. However, consider that there are two different routes: a \overset{1}{\longrightarrow} b and a \overset{2}{\longrightarrow} b, or equivalently, two unrelated intermediate states: a \rightarrow 1 \rightarrow b and a \rightarrow 2 \rightarrow b. Now we have:
M = |M_1| e^{i\theta_1} e^{i\phi_1} + |M_2| e^{i\theta_2} e^{i\phi_2}
\bar{M} = |M_1| e^{i\theta_1} e^{-i\phi_1} + |M_2| e^{i\theta_2} e^{-i\phi_2}
Some further calculation gives:
|M|^2 - |\bar{M}|^2 = -4 |M_1| |M_2| \sin(\theta_1 - \theta_2) \sin(\phi_1 - \phi_2)
Thus, we see that a complex phase gives rise to processes that proceed at different rates for particles and antiparticles, and CP is violated.
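
A minimal numerical sketch of this argument (the magnitudes and phases below are arbitrary illustrative choices) compares the directly evaluated rate difference |M|^2 - |\bar{M}|^2 with the closed-form expression above:

import cmath, math

# Illustrative (arbitrary) magnitudes, CP-even phases th_i and CP-odd phases ph_i
M1, M2   = 1.0, 0.6
th1, th2 = 0.3, 1.1            # phases that keep their sign under CP
ph1, ph2 = 0.2, -0.5           # phases that flip sign under CP (e.g. from the CKM matrix)

M    = M1 * cmath.exp(1j * (th1 + ph1)) + M2 * cmath.exp(1j * (th2 + ph2))
Mbar = M1 * cmath.exp(1j * (th1 - ph1)) + M2 * cmath.exp(1j * (th2 - ph2))

direct      = abs(M)**2 - abs(Mbar)**2
closed_form = -4 * M1 * M2 * math.sin(th1 - th2) * math.sin(ph1 - ph2)
print(direct, closed_form)     # the two values agree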

From the theoretical end, the CKM matrix is defined as V_{CKM} = U_u U_d^\dagger, where U_u and U_d are the unitary transformation matrices which diagonalize the fermion mass matrices M_u and M_d, respectively.

Thus, there are two necessary conditions for getting a complex CKM matrix:

1.    At least one of U_u and U_d must be complex, or the CKM matrix will be purely real.
2.    Even if both of them are complex, U_u and U_d must not be identical, i.e. U_u ≠ U_d, or the CKM matrix will be the identity matrix, which is also purely real (a numerical illustration follows below).
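
The role of the third generation can be illustrated numerically: for any 2×2 unitary mixing matrix the rephasing-invariant combination Im(V_11 V_22 V_12* V_21*) vanishes identically, because the single phase can always be absorbed into field redefinitions, while for a generic 3×3 unitary matrix it is nonzero. The sketch below is a rough illustration assuming NumPy; the random-matrix construction is merely a convenient way to generate examples.

import numpy as np

def random_unitary(n, rng):
    # QR decomposition of a complex Gaussian matrix yields a random unitary matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def quartic_invariant(V):
    # rephasing-invariant quartet; a nonzero imaginary part signals CP violation
    return float(np.imag(V[0, 0] * V[1, 1] * np.conj(V[0, 1]) * np.conj(V[1, 0])))

rng = np.random.default_rng(0)
print(quartic_invariant(random_unitary(2, rng)))   # zero up to rounding error
print(quartic_invariant(random_unitary(3, rng)))   # generically nonzero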

Experimental status

Indirect CP violation

In 1964, James Cronin, Val Fitch and coworkers provided clear evidence from kaon decay that CP-symmetry could be broken.[1] This work[2] won them the 1980 Nobel Prize. This discovery showed that weak interactions violate not only the charge-conjugation symmetry C between particles and antiparticles and the P or parity, but also their combination. The discovery shocked particle physics and opened the door to questions still at the core of particle physics and of cosmology today. The lack of an exact CP-symmetry, but also the fact that it is so nearly a symmetry, created a great puzzle.

Only a weaker version of the symmetry could be preserved by physical phenomena, which was CPT symmetry. Besides C and P, there is a third operation, time reversal T, which corresponds to reversal of motion. Invariance under time reversal implies that whenever a motion is allowed by the laws of physics, the reversed motion is also an allowed one and occurs at the same rate forwards and backwards. The combination of CPT is thought to constitute an exact symmetry of all types of fundamental interactions. Because of the CPT symmetry, a violation of the CP-symmetry is equivalent to a violation of the T symmetry. CP violation implied nonconservation of T, provided that the long-held CPT theorem was valid. In this theorem, regarded as one of the basic principles of quantum field theory, charge conjugation, parity, and time reversal are applied together.

Direct CP violation


Kaon oscillation box diagrams: the two box diagrams providing the leading contributions to the amplitude of K0–K̄0 oscillation.

The kind of CP violation discovered in 1964 was linked to the fact that neutral kaons can transform into their antiparticles (in which each quark is replaced with the other's antiquark) and vice versa, but such transformation does not occur with exactly the same probability in both directions; this is called indirect CP violation. Despite many searches, no other manifestation of CP violation was discovered until the 1990s, when the NA31 experiment at CERN suggested evidence for CP violation in the decay process of the very same neutral kaons (direct CP violation). The observation was somewhat controversial, and final proof for it came in 1999 from the KTeV experiment at Fermilab[3] and the NA48 experiment at CERN.[4]

In 2001, a new generation of experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC)[5] and the Belle Experiment at the High Energy Accelerator Research Organisation (KEK)[6] in Japan, observed direct CP violation in a different system, namely in decays of the B mesons.[7] A large number of CP violation processes in B meson decays have now been discovered. Before these "B-factory" experiments, there was a logical possibility that all CP violation was confined to kaon physics. However, this raised the question of why CP violation did not extend to the strong force, and furthermore, why this was not predicted by the unextended Standard Model, despite the model's accuracy for "normal" phenomena.

In 2011, a hint of CP violation in decays of neutral D mesons was reported by the LHCb experiment at CERN using 0.6 fb−1 of Run 1 data.[8] However, the same measurement using the full 3.0 fb−1 Run 1 sample was consistent with CP symmetry.[9]

In 2013, LHCb announced the discovery of CP violation in strange B meson decays.[10]

Strong CP problem

There is no experimentally known violation of the CP-symmetry in quantum chromodynamics. As there is no known reason for it to be conserved in QCD specifically, this is a "fine tuning" problem known as the strong CP problem.

QCD does not violate the CP-symmetry as easily as the electroweak theory; unlike the electroweak theory, in which the gauge fields couple to chiral currents constructed from the fermionic fields, the gluons couple to vector currents. Experiments do not indicate any CP violation in the QCD sector. For example, a generic CP violation in the strongly interacting sector would create an electric dipole moment of the neutron comparable to 10^{-18} e·m, while the experimental upper bound is roughly one-trillionth that size.

This is puzzling because, in the end, there are natural terms in the QCD Lagrangian that are able to break the CP-symmetry.
\mathcal{L} = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu} - \frac{n_f g^2 \theta}{32\pi^2} F_{\mu\nu} \tilde{F}^{\mu\nu} + \bar{\psi} \left( i\gamma^\mu D_\mu - m e^{i\theta' \gamma_5} \right) \psi
For a nonzero choice of the θ angle and the chiral phase of the quark mass θ′ one expects the CP-symmetry to be violated. One usually assumes that the chiral quark mass phase can be converted to a contribution to the total effective \tilde{\theta} angle, but it remains to be explained why this angle is extremely small instead of being of order one; the particular value of the θ angle that must be very close to zero (in this case) is an example of a fine-tuning problem in physics, and is typically solved by physics beyond the Standard Model.

There are several proposed solutions to solve the strong CP problem. The most well-known is Peccei–Quinn theory, involving new scalar particles called axions. A newer, more radical approach not requiring the axion is a theory involving two time dimensions first proposed in 1998 by Bars, Deliduman, and Andreev.[11]

CP violation and the matter–antimatter imbalance

The universe is made chiefly of matter, rather than consisting of equal parts of matter and antimatter as might be expected. It can be demonstrated that, to create an imbalance in matter and antimatter from an initial condition of balance, the Sakharov conditions must be satisfied, one of which is the existence of CP violation during the extreme conditions of the first seconds after the Big Bang. Explanations which do not involve CP violation are less plausible, since they rely on the assumption that the matter–antimatter imbalance was present at the beginning, or on other admittedly exotic assumptions.

The Big Bang should have produced equal amounts of matter and antimatter if CP-symmetry was preserved; as such, there should have been total cancellation of both—protons should have cancelled with antiprotons, electrons with positrons, neutrons with antineutrons, and so on. This would have resulted in a sea of radiation in the universe with no matter. Since this is not the case, after the Big Bang, physical laws must have acted differently for matter and antimatter, i.e. violating CP-symmetry.

The Standard Model contains at least three sources of CP violation. The first of these, involving the Cabibbo–Kobayashi–Maskawa matrix in the quark sector, has been observed experimentally and can only account for a small portion of the CP violation required to explain the matter-antimatter asymmetry. The strong interaction should also violate CP, in principle, but the failure to observe the electric dipole moment of the neutron in experiments suggests that any CP violation in the strong sector is also too small to account for the necessary CP violation in the early universe.

The third source of CP violation is the Pontecorvo–Maki–Nakagawa–Sakata matrix in the lepton sector. The current long-baseline neutrino oscillation experiments, T2K and NOνA, may be able to find evidence of CP violation over a small fraction of possible values of the CP violating Dirac phase, while the proposed next-generation experiments, Hyper-Kamiokande and DUNE, will be sensitive enough to definitively observe CP violation over a relatively large fraction of possible values of the Dirac phase. Further into the future, a neutrino factory could be sensitive to nearly all possible values of the CP violating Dirac phase. If neutrinos are Majorana fermions, the PMNS matrix could have two additional CP violating Majorana phases, leading to a fourth source of CP violation within the Standard Model. The experimental evidence for Majorana neutrinos would be the observation of neutrinoless double-beta decay. The best limits come from the GERDA experiment.

CP violation in the lepton sector generates a matter-antimatter asymmetry through a process called leptogenesis. This could become the preferred explanation in the Standard Model for the matter-antimatter asymmetry of the universe once CP violation is experimentally confirmed in the lepton sector.

If CP violation in the lepton sector is experimentally determined to be too small to account for matter-antimatter asymmetry, some new physics beyond the Standard Model would be required to explain additional sources of CP violation. Fortunately, it is generally the case that adding new particles and/or interactions to the Standard Model introduces new sources of CP violation since CP is not a symmetry of nature.

Sakharov proposed a way to restore CP-symmetry using T-symmetry, extending spacetime before the Big Bang. He described complete CPT reflections of events on each side of what he called the "initial singularity". Because of this, phenomena with an opposite arrow of time at t < 0 would undergo an opposite CP violation, so the CP-symmetry would be preserved as a whole. The anomalous excess of matter over antimatter after the Big Bang in the orthochronous (or positive) sector becomes an excess of antimatter before the Big Bang (the antichronous or negative sector), as charge conjugation, parity, and the arrow of time are all reversed by the CPT reflection of all phenomena across the initial singularity:
We can visualize that neutral spinless maximons (or photons) are produced at t < 0 from contracting matter having an excess of antiquarks, that they pass "one through the other" at the instant t = 0 when the density is infinite, and decay with an excess of quarks when t > 0, realizing total CPT symmetry of the universe. All the phenomena at t < 0 are assumed in this hypothesis to be CPT reflections of the phenomena at t > 0.
— Andrei Sakharov, in Collected Scientific Works (1982).[12]

Hierarchy problem

From Wikipedia, the free encyclopedia
In theoretical physics, the hierarchy problem is the large discrepancy between aspects of the weak force and gravity.[1] There is no scientific consensus on why, for example, the weak force is 10^24 times as strong as gravity.

Technical definition

A hierarchy problem occurs when the fundamental value of some physical parameter, such as a coupling constant or a mass, in some Lagrangian is vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known as renormalization, which applies corrections to it. Typically the renormalized values of parameters are close to their fundamental values, but in some cases it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related to fine-tuning problems and problems of naturalness, and over the past decade many scientists[2][3][4][5][6] have argued that the hierarchy problem is a specific application of Bayesian statistics.

Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics is most important. Because we do not know the precise details of the shortest-distance theory of physics, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning.

Overview

A simple example:

Suppose a physics model requires four parameters to produce a very high-quality working model capable of generating calculations and predictions of some aspect of our physical universe. Suppose we find through experiments that the parameters have values:
  • 1.2
  • 1.31
  • 0.9 and
  • 404,331,557,902,116,024,553,602,703,216.58 (roughly 4 × 10^29).
We might wonder how such figures arise. But in particular we might be especially curious about a theory where three values are close to one and the fourth is so different; in other words, the huge disproportion we seem to find between the first three parameters and the fourth. We might also wonder: if one force is so much weaker than the others that it needs a factor of 4 × 10^29 to allow it to be related to them in terms of effects, how did our universe come to be so exactly balanced when its forces emerged? In current particle physics the differences between some parameters are much larger than this, so the question is even more noteworthy.

One answer given by physicists is the anthropic principle. If the universe came to exist by chance, and perhaps vast numbers of other universes exist or have existed, then life capable of physics experiments only arose in universes that by chance had very balanced forces. All the universes where the forces were not balanced did not develop life capable of asking the question. So if a lifeform like human beings asks such a question, it must have arisen in a universe having balanced forces, however rare that might be. So when we look, that is what we would expect to find, and what we do find.

A second answer is that perhaps there is a deeper understanding of physics, which, if we discovered and understood it, would make clear these aren't really fundamental parameters and there is a good reason why they have the exact values we have found, because they all derive from other more fundamental parameters that are not so unbalanced.

Examples in particle physics

The Higgs mass

In particle physics, the most important hierarchy problem is the question of why the weak force is 10^24 times as strong as gravity.[7] Both of these forces involve constants of nature, Fermi's constant for the weak force and Newton's constant for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and is expected to be closer to Newton's constant, unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it.

Cancellation of the Higgs boson quadratic mass renormalization between fermionic top quark loop and scalar stop squark tadpole Feynman diagrams in a supersymmetric extension of the Standard Model

More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears, unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass.

It should be remarked that the problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings.

One proposed solution, popular amongst many physicists, is that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion.[8] This still leaves open the mu problem, however. Currently the tenets of supersymmetry are being tested at the LHC, although no evidence has been found so far for supersymmetry.

Theoretical solutions

Supersymmetric solution

Each particle that couples to the Higgs field has a Yukawa coupling λ_f. The coupling with the Higgs field for fermions gives an interaction term \mathcal{L}_{\mathrm{Yukawa}} = -\lambda_f \bar{\psi} H \psi, with \psi being the Dirac field and H the Higgs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson couples most strongly to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying the Feynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be:
\Delta m_H^2 = -\frac{|\lambda_f|^2}{8\pi^2} \left[ \Lambda_{\mathrm{UV}}^2 + \cdots \right].
Here \Lambda_{\mathrm{UV}} is called the ultraviolet cutoff and is the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then we have a quadratically divergent correction. However, suppose there existed two complex scalars (taken to be spin 0) such that:

\lambda_S = |\lambda_f|^2 (the couplings to the Higgs are exactly the same).

Then by the Feynman rules, the correction (from both scalars) is:
\Delta m_H^2 = 2 \times \frac{\lambda_S}{16\pi^2} \left[ \Lambda_{\mathrm{UV}}^2 + \cdots \right].
(Note that the contribution here is positive. This is because of the spin-statistics theorem, which means that fermions give a negative contribution and bosons a positive one; it is this sign difference that is exploited.)

This gives a total contribution to the Higgs mass to be zero if we include both the fermionic and bosonic particles. Supersymmetry is an extension of this that creates 'superpartners' for all Standard Model particles.[9]
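
A toy numerical illustration of this cancellation, assuming for concreteness a top-like Yukawa coupling λ_f ≈ 0.94 and a Planck-scale cutoff (all numbers are illustrative only):

import math

lam_f  = 0.94              # illustrative top-like Yukawa coupling
lam_S  = abs(lam_f)**2     # scalar coupling chosen as in the relation above
cutoff = 1.22e19           # GeV; Planck-scale ultraviolet cutoff (illustrative choice)

# leading quadratically divergent pieces of the two corrections quoted above
dm2_fermion = -(abs(lam_f)**2 / (8 * math.pi**2)) * cutoff**2
dm2_scalars = 2 * (lam_S / (16 * math.pi**2)) * cutoff**2

print(f"fermion loop : {dm2_fermion:.3e} GeV^2")   # huge and negative, ~ -1.7e36 GeV^2
print(f"scalar loops : {dm2_scalars:.3e} GeV^2")   # equally huge and positive
print(f"sum          : {dm2_fermion + dm2_scalars:.3e} GeV^2")  # cancels, up to rounding

Either contribution on its own is some 32 orders of magnitude larger than the observed Higgs mass squared of roughly (125 GeV)^2, which is the fine-tuning the cancellation removes.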

Conformal solution

Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model. The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. If the Higgs field had no mass term, then no hierarchy problem would arise. But without a quadratic term in the Higgs field, one must find another way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Weinberg–Coleman mechanism, with terms in the Higgs potential arising from quantum corrections. The mass obtained in this way is far too small with respect to what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai[10] and is currently under scrutiny. But if no further excitation is observed beyond the one seen so far at the LHC, this model would have to be abandoned.

Solution via extra dimensions

If we live in a 3+1-dimensional world, then we calculate the gravitational force via Gauss's law for gravity:
\mathbf{g}(\mathbf{r}) = -Gm \frac{\mathbf{e}_r}{r^2} \qquad (1)
which is simply Newton's law of gravitation. Note that Newton's constant G can be rewritten in terms of the Planck mass:
G = \frac{\hbar c}{M_{\mathrm{Pl}}^2}
If we extend this idea to \delta extra dimensions, then we get:
\mathbf{g}(\mathbf{r}) = -m \frac{\mathbf{e}_r}{M_{\mathrm{Pl},3+1+\delta}^{2+\delta} \, r^{2+\delta}} \qquad (2)
where M_{\mathrm{Pl},3+1+\delta} is the (3+1+\delta)-dimensional Planck mass. So far, however, we have been assuming that these extra dimensions are the same size as the normal 3+1 dimensions. Let us say instead that the extra dimensions are of size n, much smaller than the ordinary dimensions. If we let r \ll n, then we get (2). If instead we let r \gg n, we recover the usual Newton's law: when r \gg n, the flux in the extra dimensions becomes a constant, because there is no extra room for gravitational flux to flow through. Thus the flux will be proportional to n^{\delta}, because this is the flux in the extra dimensions. The formula is:
\mathbf{g}(\mathbf{r}) = -m \frac{\mathbf{e}_r}{M_{\mathrm{Pl},3+1+\delta}^{2+\delta} \, r^2 n^{\delta}}
-m \frac{\mathbf{e}_r}{M_{\mathrm{Pl}}^2 r^2} = -m \frac{\mathbf{e}_r}{M_{\mathrm{Pl},3+1+\delta}^{2+\delta} \, r^2 n^{\delta}}
which gives:
\frac{1}{M_{\mathrm{Pl}}^2 r^2} = \frac{1}{M_{\mathrm{Pl},3+1+\delta}^{2+\delta} \, r^2 n^{\delta}} \;\Rightarrow\; M_{\mathrm{Pl}}^2 = M_{\mathrm{Pl},3+1+\delta}^{2+\delta} \, n^{\delta}.
Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong, but this must be compensated by the number of the extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions.
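
The relation M_{\mathrm{Pl}}^2 = M_{\mathrm{Pl},3+1+\delta}^{2+\delta} \, n^{\delta} can be inverted to estimate how large the extra dimensions would have to be for a given assumed fundamental scale. The rough sketch below uses an illustrative ADD-style fundamental scale of 1 TeV and drops factors of 2π, so the resulting sizes are order-of-magnitude estimates only:

import math

M_pl   = 1.22e19    # four-dimensional Planck mass, GeV
M_star = 1.0e3      # assumed higher-dimensional (fundamental) Planck mass, GeV
hbar_c = 1.973e-16  # GeV*m, to convert lengths from 1/GeV to metres

for delta in (1, 2, 3, 6):
    # invert M_pl^2 = M_star^(2+delta) * n^delta for the size n (in 1/GeV)
    n = (M_pl**2 / M_star**(2 + delta)) ** (1.0 / delta)
    print(f"delta = {delta}: n ~ {n * hbar_c:.1e} m")

With these illustrative inputs, a single extra dimension would have to be astronomically large (and is therefore excluded), while two extra dimensions come out in the millimetre range and larger numbers of extra dimensions at microscopic scales.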

This section adapted from "Quantum Field Theory in a Nutshell" by A. Zee.[11]

Braneworld models

In 1998 Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali proposed the ADD model, also known as the model with large extra dimensions, an alternative scenario to explain the weakness of gravity relative to the other forces.[12][13] This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale.[14]

In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles in which he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem.[15][16][17] It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability.

Subsequently, the closely related Randall–Sundrum scenarios were proposed, offering their own solution to the hierarchy problem.

Finite groups

It has also been noted that the group order of the Baby Monster group is of the right order of magnitude, 4×10^33. It is known that the Monster group is related to the symmetries of a particular[which?] bosonic string theory on the Leech lattice. However, there is no physical reason why the size of the Monster group or its subgroups should appear in the Lagrangian. Most physicists think this is merely a coincidence. Another coincidence is that in reduced Planck units the Higgs mass is approximately 48 |M|^{-1/3}, corresponding to 125.5 GeV, where |M| is the order of the Monster group. This suggests that the smallness of the Higgs mass may be due to a redundancy caused by a symmetry of the extra dimensions, which must be divided out. There are other groups that are also of the right order of magnitude, for example Weyl(E_8 × E_8).

Extra dimensions

Until now, no experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions.[18] However, extra dimensions could explain why the gravity force is so weak, and why the expansion of the universe is faster than expected.[19]

The cosmological constant

In physical cosmology, current observations in favor of an accelerating universe imply the existence of a tiny, but nonzero cosmological constant. This is a hierarchy problem very similar to that of the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections. It is complicated, however, by the necessary involvement of general relativity in the problem and may be a clue that we do not understand gravity on long distance scales (such as the size of the universe today). While quintessence has been proposed as an explanation of the acceleration of the Universe, it does not actually address the cosmological constant hierarchy problem in the technical sense of addressing the large quantum corrections. Supersymmetry does not address the cosmological constant problem, since supersymmetry cancels the M_{\mathrm{Pl}}^4 contribution, but not the M_{\mathrm{Pl}}^2 one (quadratically diverging).

Beta function (physics)

From Wikipedia, the free encyclopedia
In theoretical physics, specifically quantum field theory, a beta function, β(g), encodes the dependence of a coupling parameter, g, on the energy scale, μ, of a given physical process described by quantum field theory. It is defined as
\beta(g) = \frac{\partial g}{\partial \log(\mu)} ~,
and, because of the underlying renormalization group, it has no explicit dependence on μ, so it only depends on μ implicitly through g. This dependence on the energy scale thus specified is known as the running of the coupling parameter, a fundamental feature of scale-dependence in quantum field theory, and its explicit computation is achievable through a variety of mathematical techniques.

Scale invariance

If the beta functions of a quantum field theory vanish, usually at particular values of the coupling parameters, then the theory is said to be scale-invariant. Almost all scale-invariant QFTs are also conformally invariant. The study of such theories is conformal field theory.

The coupling parameters of a quantum field theory can run even if the corresponding classical field theory is scale-invariant. In this case, the non-zero beta function tells us that the classical scale invariance is anomalous.

Examples

Beta functions are usually computed in some kind of approximation scheme. An example is perturbation theory, where one assumes that the coupling parameters are small. One can then make an expansion in powers of the coupling parameters and truncate the higher-order terms (also known as higher loop contributions, due to the number of loops in the corresponding Feynman graphs).

Here are some examples of beta functions computed in perturbation theory:

Quantum electrodynamics

The one-loop beta function in quantum electrodynamics (QED) is
\beta(e) = \frac{e^3}{12\pi^2},
or, equivalently,
\beta(\alpha) = \frac{2\alpha^2}{3\pi},
written in terms of the fine structure constant in natural units, \alpha = e^2/4\pi.

This beta function tells us that the coupling increases with increasing energy scale, and QED becomes strongly coupled at high energy. In fact, the coupling apparently becomes infinite at some finite energy, resulting in a Landau pole. However, one cannot expect the perturbative beta function to give accurate results at strong coupling, and so it is likely that the Landau pole is an artifact of applying perturbation theory in a situation where it is no longer valid.
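
The one-loop equation \beta(\alpha) = 2\alpha^2/3\pi integrates in closed form to 1/\alpha(\mu) = 1/\alpha(\mu_0) - (2/3\pi)\ln(\mu/\mu_0), so the coupling diverges at \mu = \mu_0 \exp(3\pi/2\alpha(\mu_0)). A minimal sketch of this running, taking α(m_e) ≈ 1/137 as the reference value and keeping only the electron loop (an illustrative simplification):

import math

alpha0 = 1.0 / 137.0   # fine-structure constant at the reference scale (illustrative)
mu0    = 0.511e-3      # GeV; electron mass used as the reference scale

def alpha_qed(mu):
    # one-loop solution of d(alpha)/d(ln mu) = 2*alpha^2 / (3*pi)
    return alpha0 / (1.0 - (2.0 * alpha0 / (3.0 * math.pi)) * math.log(mu / mu0))

print(1.0 / alpha_qed(91.2))       # ~134: the coupling has grown slightly by the Z-mass scale
print(mu0 * math.exp(3 * math.pi / (2 * alpha0)))   # Landau-pole scale, absurdly far above any physical scale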

Quantum chromodynamics

The one-loop beta function in quantum chromodynamics with n_f flavours and n_s scalar Higgs bosons is
\beta(g) = -\left(11 - \frac{n_s}{3} - \frac{2n_f}{3}\right) \frac{g^3}{16\pi^2},
or
\beta(\alpha_s) = -\left(11 - \frac{n_s}{3} - \frac{2n_f}{3}\right) \frac{\alpha_s^2}{2\pi},
written in terms of \alpha_s = g^2/4\pi.

If n_f ≤ 16, the ensuing beta function dictates that the coupling decreases with increasing energy scale, a phenomenon known as asymptotic freedom. Conversely, the coupling increases with decreasing energy scale. This means that the coupling becomes large at low energies, and one can no longer rely on perturbation theory.
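
The corresponding one-loop solution, 1/\alpha_s(\mu) = 1/\alpha_s(\mu_0) + (b_0/2\pi)\ln(\mu/\mu_0) with b_0 = 11 - 2n_f/3 (taking n_s = 0), makes both behaviours explicit. A sketch assuming α_s(M_Z) ≈ 0.118 as input and keeping n_f fixed (ignoring flavour thresholds, so the low-scale values are indicative only):

import math

alpha_s_MZ = 0.118     # strong coupling at the Z mass (illustrative input)
MZ         = 91.2      # GeV
n_f        = 5         # active quark flavours, held fixed for simplicity
b0         = 11.0 - 2.0 * n_f / 3.0

def alpha_s(mu):
    # one-loop solution of d(alpha_s)/d(ln mu) = -b0 * alpha_s^2 / (2*pi)
    return alpha_s_MZ / (1.0 + alpha_s_MZ * b0 / (2.0 * math.pi) * math.log(mu / MZ))

for mu in (2.0, 10.0, 91.2, 1000.0):
    print(f"alpha_s({mu:g} GeV) ~ {alpha_s(mu):.3f}")   # grows toward low scales, shrinks toward high scales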

SU(N) Non-Abelian gauge theory

While the (Yang–Mills) gauge group of QCD is SU(3), and determines 3 colors, we can generalize to any number of colors, N_c, with a gauge group G = SU(N_c). Then for this gauge group, with Dirac fermions in a representation R_f of G and with complex scalars in a representation R_s, the one-loop beta function is
\beta(g) = -\left(\frac{11}{3} C_2(G) - \frac{1}{3} n_s T(R_s) - \frac{4}{3} n_f T(R_f)\right) \frac{g^3}{16\pi^2},
where C_2(G) is the quadratic Casimir of G and T(R) is another Casimir invariant defined by \mathrm{Tr}(T^a_R T^b_R) = T(R)\,\delta^{ab} for generators T^{a,b}_R of the Lie algebra in the representation R. (For Weyl or Majorana fermions, replace 4/3 by 2/3, and for real scalars, replace 1/3 by 1/6.) For gauge fields (i.e. gluons), necessarily in the adjoint of G, C_2(G) = N_c; for fermions in the fundamental (or anti-fundamental) representation of G, T(R) = 1/2. Then for QCD, with N_c = 3, the above equation reduces to that listed for the quantum chromodynamics beta function.
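
As a bookkeeping sketch, the coefficient b_0 in \beta(g) = -b_0 \, g^3/(16\pi^2) can be assembled from these group-theory factors; the helper below assumes Dirac fermions and complex scalars in the fundamental representation (T(R) = 1/2) and checks that the QCD case is recovered:

def one_loop_coefficient(N_c, n_f, n_s, T_f=0.5, T_s=0.5):
    # b0 such that beta(g) = -b0 * g^3 / (16*pi^2), for Dirac fermions and complex scalars
    C2_G = N_c   # quadratic Casimir of the adjoint representation of SU(N_c)
    return (11.0 / 3.0) * C2_G - (4.0 / 3.0) * n_f * T_f - (1.0 / 3.0) * n_s * T_s

# QCD check: N_c = 3, six Dirac flavours in the fundamental, no scalars
print(one_loop_coefficient(3, n_f=6, n_s=0))   # 7.0, matching 11 - 2*6/3 from the QCD formula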

This famous result was derived nearly simultaneously in 1973 by Politzer,[1] and by Gross and Wilczek,[2] for which the three were awarded the Nobel Prize in Physics in 2004. Unbeknownst to these authors, G. 't Hooft had announced the result in a comment following a talk by K. Symanzik at a small meeting in Marseilles in June 1972, but he never published it.[3]

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...