
Sunday, June 25, 2023

Equivalence principle

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Equivalence_principle
A falling object falls in exactly the same way on a planet as in an accelerating frame of reference

In the theory of general relativity, the equivalence principle is the equivalence of gravitational and inertial mass, and Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference.

Einstein's statement of the equality of inertial and gravitational mass

A little reflection will show that the law of the equality of the inertial and gravitational mass is equivalent to the assertion that the acceleration imparted to a body by a gravitational field is independent of the nature of the body. For Newton's equation of motion in a gravitational field, written out in full, it is:

(Inertial mass) × (Acceleration) = (Gravitational mass) × (Intensity of the gravitational field).

It is only when there is numerical equality between the inertial and gravitational mass that the acceleration is independent of the nature of the body.
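
Einstein's point can be made concrete in a few lines of code (a sketch with made-up masses; the variable names are illustrative): when each body's gravitational mass equals its inertial mass, the mass cancels and every body accelerates identically.

```python
# Sketch: a = (gravitational mass / inertial mass) * field intensity.
# The acceleration is body-independent only because the two masses are
# numerically equal for every body (illustrative values).
g_field = 9.81  # intensity of the gravitational field, m/s^2

bodies_kg = {"feather": 0.005, "hammer": 1.3, "boulder": 250.0}

for name, mass in bodies_kg.items():
    inertial_mass = mass
    gravitational_mass = mass  # equality of the two masses
    a = (gravitational_mass / inertial_mass) * g_field
    print(f"{name:8s} accelerates at {a:.2f} m/s^2")  # 9.81 for all
```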

Development of gravitational theory

Something like the equivalence principle emerged in the early 17th century, when Galileo demonstrated experimentally that the acceleration of a test mass due to gravitation is independent of the amount of mass being accelerated.

Johannes Kepler, using Galileo's discoveries, showed knowledge of the equivalence principle by accurately describing what would occur if the Moon were stopped in its orbit and dropped towards Earth. This can be deduced without knowing if or in what manner gravity decreases with distance, but requires assuming the equivalence between gravity and inertia.

If two stones were placed in any part of the world near each other, and beyond the sphere of influence of a third cognate body, these stones, like two magnetic needles, would come together in the intermediate point, each approaching the other by a space proportional to the comparative mass of the other. If the moon and earth were not retained in their orbits by their animal force or some other equivalent, the earth would mount to the moon by a fifty-fourth part of their distance, and the moon fall towards the earth through the other fifty-three parts, and they would there meet, assuming, however, that the substance of both is of the same density.

— Johannes Kepler, "Astronomia Nova", 1609

The 1/54 ratio is Kepler's estimate of the Moon–Earth mass ratio, based on their diameters. The accuracy of his statement can be deduced by using Newton's inertia law F = ma and Galileo's gravitational observation that distance D = (1/2)at^2. Setting these accelerations equal for a given mass is the equivalence principle. Noting that the time to collision is the same for each mass gives Kepler's statement that Dmoon/DEarth = MEarth/Mmoon, without knowing the time to collision or how or if the acceleration force from gravity is a function of distance.
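
Kepler's numbers can be checked with a short script (a sketch; the 53:1 mass ratio is Kepler's own estimate, not the modern value). Equal and opposite forces plus equal fall times make each body's distance inversely proportional to its mass:

```python
# Sketch of Kepler's meeting-point argument. The mutual forces are equal
# and opposite, the fall times are equal, so D = (1/2) a t^2 scales as
# 1/mass for each body (Kepler's Earth:Moon mass ratio of 53:1 assumed).
m_earth, m_moon = 53.0, 1.0
separation = 1.0  # Earth-Moon distance, normalized

d_earth = separation * m_moon / (m_earth + m_moon)
d_moon = separation * m_earth / (m_earth + m_moon)
print(d_earth)  # 1/54: the Earth "mounts" a fifty-fourth part
print(d_moon)   # 53/54: the Moon falls the other fifty-three parts
```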

Newton's gravitational theory simplified and formalized Galileo's and Kepler's ideas by recognizing that Kepler's "animal force or some other equivalent" was not needed beyond gravity and inertia, and by deducing from Kepler's planetary laws how gravity weakens with distance.

The equivalence principle was properly introduced by Albert Einstein in 1907, when he observed that the acceleration of bodies towards the center of the Earth at a rate of 1g (g = 9.81 m/s^2 being a standard reference of gravitational acceleration at the Earth's surface) is equivalent to the acceleration of an inertially moving body that would be observed on a rocket in free space being accelerated at a rate of 1g. Einstein stated it thus:

we ... assume the complete physical equivalence of a gravitational field and a corresponding acceleration of the reference system.

— Einstein, 1907

That is, being on the surface of the Earth is equivalent to being inside a spaceship (far from any sources of gravity) that is being accelerated by its engines. The direction or vector of acceleration equivalence on the surface of the earth is "up" or directly opposite the center of the planet while the vector of acceleration in a spaceship is directly opposite from the mass ejected by its thrusters. From this principle, Einstein deduced that free-fall is inertial motion. Objects in free-fall do not experience being accelerated downward (e.g. toward the earth or other massive body) but rather weightlessness and no acceleration. In an inertial frame of reference bodies (and photons, or light) obey Newton's first law, moving at constant velocity in straight lines. Analogously, in a curved spacetime the world line of an inertial particle or pulse of light is as straight as possible (in space and time). Such a world line is called a geodesic and from the point of view of the inertial frame is a straight line. This is why an accelerometer in free-fall doesn't register any acceleration; there isn't any between the internal test mass and the accelerometer's body.

As an example: an inertial body moving along a geodesic through space can be trapped into an orbit around a large gravitational mass without ever experiencing acceleration. This is possible because spacetime is radically curved in close vicinity to a large gravitational mass. In such a situation the geodesic lines bend inward around the center of the mass and a free-floating (weightless) inertial body will simply follow those curved geodesics into an elliptical orbit. An accelerometer on-board would never record any acceleration.

By contrast, in Newtonian mechanics, gravity is assumed to be a force. This force draws objects having mass towards the center of any massive body. At the Earth's surface, the force of gravity is counteracted by the mechanical (physical) resistance of the Earth's surface. So in Newtonian physics, a person at rest on the surface of a (non-rotating) massive object is in an inertial frame of reference. These considerations suggest the following corollary to the equivalence principle, which Einstein formulated precisely in 1911:

Whenever an observer detects the local presence of a force that acts on all objects in direct proportion to the inertial mass of each object, that observer is in an accelerated frame of reference.

Einstein also referred to two reference frames, K and K'. K is a uniform gravitational field, whereas K' has no gravitational field but is uniformly accelerated such that objects in the two frames experience identical forces:

We arrive at a very satisfactory interpretation of this law of experience, if we assume that the systems K and K' are physically exactly equivalent, that is, if we assume that we may just as well regard the system K as being in a space free from gravitational fields, if we then regard K as uniformly accelerated. This assumption of exact physical equivalence makes it impossible for us to speak of the absolute acceleration of the system of reference, just as the usual theory of relativity forbids us to talk of the absolute velocity of a system; and it makes the equal falling of all bodies in a gravitational field seem a matter of course.

— Einstein, 1911

This observation was the start of a process that culminated in general relativity. Einstein suggested that it should be elevated to the status of a general principle, which he called the "principle of equivalence" when constructing his theory of relativity:

As long as we restrict ourselves to purely mechanical processes in the realm where Newton's mechanics holds sway, we are certain of the equivalence of the systems K and K'. But this view of ours will not have any deeper significance unless the systems K and K' are equivalent with respect to all physical processes, that is, unless the laws of nature with respect to K are in entire agreement with those with respect to K'. By assuming this to be so, we arrive at a principle which, if it is really true, has great heuristic importance. For by theoretical consideration of processes which take place relatively to a system of reference with uniform acceleration, we obtain information as to the career of processes in a homogeneous gravitational field.

— Einstein, 1911

Einstein combined (postulated) the equivalence principle with special relativity to predict that clocks run at different rates in a gravitational potential, and light rays bend in a gravitational field, even before he developed the concept of curved spacetime.

So the original equivalence principle, as described by Einstein, concluded that free-fall and inertial motion were physically equivalent. This form of the equivalence principle can be stated as follows. An observer in a windowless room cannot distinguish between being on the surface of the Earth, and being in a spaceship in deep space accelerating at 1g. This is not strictly true, because massive bodies give rise to tidal effects (caused by variations in the strength and direction of the gravitational field) which are absent from an accelerating spaceship in deep space. The room, therefore, should be small enough that tidal effects can be neglected.
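
How small is "small enough" can be estimated directly (a rough sketch using standard values for the Earth; the 3 m room height is an assumption): the tidal difference in g across a room of height h at distance r from a mass M is about 2GMh/r^3.

```python
# Sketch: tidal variation of g across a room at Earth's surface.
# g(r) = GM/r^2, so dg/dr = -2GM/r^3 and delta_g ~ 2*G*M*h / r^3.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # kg
r = 6.371e6         # Earth's radius, m
h = 3.0             # assumed room height, m

delta_g = 2 * G * M_earth * h / r**3
print(delta_g)  # ~9e-6 m/s^2: tiny but nonzero, unlike a deep-space rocket
```

A sensitive enough gravimeter in the room would detect this gradient, while an accelerating spaceship shows none; this is why the principle is stated locally.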

Although the equivalence principle guided the development of general relativity, it is not a founding principle of relativity but rather a simple consequence of the geometrical nature of the theory. In general relativity, objects in free-fall follow geodesics of spacetime, and what we perceive as the force of gravity is instead a result of our being unable to follow those geodesics of spacetime, because the mechanical resistance of Earth's matter or surface prevents us from doing so.

After Einstein developed general relativity, a framework was needed to test the theory against other possible theories of gravity compatible with special relativity. This was developed by Robert Dicke as part of his program to test general relativity. Two new principles were suggested, the so-called Einstein equivalence principle and the strong equivalence principle, each of which assumes the weak equivalence principle as a starting point. They differ only in whether or not they apply to gravitational experiments.

Another clarification needed is that the equivalence principle assumes a constant acceleration of 1g without considering the mechanics of generating 1g. If we do consider the mechanics of it, then we must assume the aforementioned windowless room has a fixed mass. Accelerating it at 1g means there is a constant applied force F = mg, where m is the mass of the windowless room along with its contents (including the observer). Now, if the observer jumps inside the room, an object lying freely on the floor will decrease in weight momentarily, because the acceleration decreases momentarily while the observer pushes back against the floor in order to jump. The object will then gain weight while the observer is in the air, since the decreased mass of the windowless room allows greater acceleration; it will lose weight again when the observer lands and pushes once more against the floor; and it will finally return to its initial weight afterwards. To make all these effects equal those we would measure on a planet producing 1g, the windowless room must be assumed to have the same mass as that planet. Additionally, the windowless room must not cause its own gravity, otherwise the scenario changes even further. These are technicalities, clearly, but practical ones if we wish the experiment to demonstrate more or less precisely the equivalence of 1g gravity and 1g acceleration.
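
The weight bookkeeping in this paragraph can be sketched numerically (all masses below are invented for illustration): with a fixed thrust F, the room's acceleration, and hence the measured weight of the object on the floor, changes while the observer is airborne.

```python
# Sketch: constant-thrust room with a jumping observer (made-up masses).
g = 9.81
M_room = 1000.0  # room plus all contents, including the observer, kg
m_obs = 80.0     # observer, kg
m_obj = 2.0      # object lying on the floor, kg

F = M_room * g   # thrust tuned to give exactly 1g with everyone aboard

a_standing = F / M_room            # observer on the floor: 1g
a_airborne = F / (M_room - m_obs)  # observer in the air: room is lighter

print(m_obj * a_standing)  # object's weight normally: ~19.6 N
print(m_obj * a_airborne)  # object's weight mid-jump: ~21.3 N (heavier)
```

The push-off and landing phases would transiently lower the room's acceleration in the same way, mirroring the momentary weight loss described above.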

Modern usage

Three forms of the equivalence principle are in current use: weak (Galilean), Einsteinian, and strong.

The weak equivalence principle

The weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle, can be stated in many ways. The strong EP, a generalization of the weak EP, includes astronomical bodies with gravitational self-binding energy (e.g., the 1.74-solar-mass pulsar PSR J1903+0327, 15.3% of whose separated mass is absent as gravitational binding energy). The weak EP, by contrast, assumes falling bodies are self-bound by non-gravitational forces only (e.g. a stone). Either way:

  • The trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition and structure.
  • All test particles at the same spacetime point, in a given gravitational field, will undergo the same acceleration, independent of their properties, including their rest mass.
  • All local centers of mass free-fall (in vacuum) along identical (parallel-displaced, same speed) minimum action trajectories independent of all observable properties.
  • The vacuum world-line of a body immersed in a gravitational field is independent of all observable properties.
  • The local effects of motion in a curved spacetime (gravitation) are indistinguishable from those of an accelerated observer in flat spacetime, without exception.
  • Mass (measured with a balance) and weight (measured with a scale) are locally in identical ratio for all bodies (the opening page to Newton's Philosophiæ Naturalis Principia Mathematica, 1687).

Locality eliminates measurable tidal forces originating from a radially divergent gravitational field (e.g., the Earth's) upon finite-sized physical bodies. The "falling" equivalence principle embraces Galileo's, Newton's, and Einstein's conceptualization. The equivalence principle does not deny the existence of measurable effects caused by a rotating gravitating mass (frame dragging), or bear on the measurements of light deflection and gravitational time delay made by non-local observers.

Active, passive, and inertial masses

By definition of active and passive gravitational mass, the force on mass1 due to the gravitational field of mass0 is:

F1 = (mass0^act × mass1^pass) / r^2

Likewise the force on a second object of arbitrary mass2 due to the gravitational field of mass0 is:

F2 = (mass0^act × mass2^pass) / r^2

By definition of inertial mass:

F = mass^inert × a

If mass1 and mass2 are the same distance r from mass0 then, by the weak equivalence principle, they fall at the same rate (i.e. their accelerations are the same):

a1 = F1 / mass1^inert = a2 = F2 / mass2^inert

Hence:

(mass0^act × mass1^pass) / (r^2 × mass1^inert) = (mass0^act × mass2^pass) / (r^2 × mass2^inert)

Therefore:

mass1^pass / mass1^inert = mass2^pass / mass2^inert

In other words, passive gravitational mass must be proportional to inertial mass for all objects.

Furthermore, by Newton's third law of motion:

F1 = (mass0^act × mass1^pass) / r^2

must be equal and opposite to

F0 = (mass1^act × mass0^pass) / r^2

It follows that:

mass0^act / mass0^pass = mass1^act / mass1^pass

In other words, passive gravitational mass must be proportional to active gravitational mass for all objects.
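
The proportionality argument can be verified symbolically (a sketch using the sympy library, with names following the derivation above):

```python
# Sketch: imposing equal free-fall accelerations forces the ratio
# pass/inert to be the same for both bodies (requires sympy).
import sympy as sp

M0_act, M1_pass, M2_pass, m1_inert, m2_inert, r = sp.symbols(
    "M0_act M1_pass M2_pass m1_inert m2_inert r", positive=True)

a1 = (M0_act * M1_pass / r**2) / m1_inert  # a = F / m_inert
a2 = (M0_act * M2_pass / r**2) / m2_inert

sol = sp.solve(sp.Eq(a1, a2), M1_pass)[0]
print(sol)  # M2_pass*m1_inert/m2_inert: the pass/inert ratios are equal
```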

The dimensionless Eötvös parameter η(A, B) is the difference of the ratios of gravitational and inertial masses divided by their average for the two sets of test masses "A" and "B":

η(A, B) = 2 × [(m_g/m_i)_A − (m_g/m_i)_B] / [(m_g/m_i)_A + (m_g/m_i)_B]
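
As a quick illustration (the sample ratios below are invented), η vanishes when both bodies share the same gravitational-to-inertial mass ratio:

```python
# Sketch: the Eotvos parameter for two test bodies.
def eotvos(ratio_a: float, ratio_b: float) -> float:
    """Difference of the m_g/m_i ratios divided by their average."""
    return 2 * (ratio_a - ratio_b) / (ratio_a + ratio_b)

print(eotvos(1.0, 1.0))          # 0.0: perfect equivalence
print(eotvos(1.0 + 1e-13, 1.0))  # ~1e-13: a hypothetical tiny violation
```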

Tests of the weak equivalence principle

Tests of the weak equivalence principle are those that verify the equivalence of gravitational mass and inertial mass. An obvious test is dropping different objects, ideally in a vacuum environment, e.g., inside the Fallturm Bremen drop tower.

Researcher | Year | Method | Result
John Philoponus | 6th century | Said that by observation, two balls of very different weights will fall at nearly the same speed | no detectable difference
Simon Stevin | ~1586 | Dropped lead balls of different masses off the Delft church tower | no detectable difference
Galileo Galilei | ~1610 | Rolled balls of varying weight down inclined planes to slow the speed so that it was measurable | no detectable difference
Isaac Newton | ~1680 | Measured the periods of pendulums of different mass but identical length | difference is less than 1 part in 10^3
Friedrich Wilhelm Bessel | 1832 | Measured the periods of pendulums of different mass but identical length | no measurable difference
Loránd Eötvös | 1908 | Measured the torsion on a wire suspending a balance beam between two nearly identical masses, under the acceleration of gravity and the rotation of the Earth | difference is 10 ± 2 parts in 10^9 (H2O/Cu)
Roll, Krotkov and Dicke | 1964 | Torsion balance experiment, dropping aluminum and gold test masses | difference is less than 3 parts in 10^11
David Scott | 1971 | Dropped a falcon feather and a hammer at the same time on the Moon | no detectable difference (not a rigorous experiment, but very dramatic being the first lunar one)
Braginsky and Panov | 1971 | Torsion balance, aluminum and platinum test masses, measuring acceleration towards the Sun | difference is less than 1 part in 10^12
Eöt-Wash group | 1987– | Torsion balance, measuring acceleration of different masses towards the Earth, Sun and Galactic Center, using several different kinds of masses | see the sensitivity table below


A chronology of these and later experiments, with their sensitivities:

Year | Investigator | Sensitivity | Method
500? | Philoponus | "small" | Drop tower
1585 | Stevin | 5×10^−2 | Drop tower
1590? | Galileo | 2×10^−2 | Pendulum, drop tower
1686 | Newton | 10^−3 | Pendulum
1832 | Bessel | 2×10^−5 | Pendulum
1908 (1922) | Eötvös | 2×10^−9 | Torsion balance
1910 | Southerns | 5×10^−6 | Pendulum
1918 | Zeeman | 3×10^−8 | Torsion balance
1923 | Potter | 3×10^−6 | Pendulum
1935 | Renner | 2×10^−9 | Torsion balance
1964 | Dicke, Roll, Krotkov | 3×10^−11 | Torsion balance
1972 | Braginsky, Panov | 10^−12 | Torsion balance
1976 | Shapiro et al. | 10^−12 | Lunar laser ranging
1981 | Keiser, Faller | 4×10^−11 | Fluid support
1987 | Niebauer et al. | 10^−10 | Drop tower
1989 | Stubbs et al. | 10^−11 | Torsion balance
1990 | Adelberger et al. | 10^−12 | Torsion balance
1999 | Baessler et al. | 5×10^−14 | Torsion balance
2017 | MICROSCOPE | 10^−15 | Earth orbit

Experiments are still being performed at the University of Washington which have placed limits on the differential acceleration of objects towards the Earth, the Sun and towards dark matter in the Galactic Center. Future satellite experiments – STEP (Satellite Test of the Equivalence Principle), and Galileo Galilei – will test the weak equivalence principle in space, to much higher accuracy.

With the first successful production of antimatter, in particular anti-hydrogen, a new approach to test the weak equivalence principle has been proposed. Experiments to compare the gravitational behavior of matter and antimatter are currently being developed.

Proposals that may lead to a quantum theory of gravity such as string theory and loop quantum gravity predict violations of the weak equivalence principle because they contain many light scalar fields with long Compton wavelengths, which should generate fifth forces and variation of the fundamental constants. Heuristic arguments suggest that the magnitude of these equivalence principle violations could be in the 10^−13 to 10^−18 range. Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that non-discovery of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification.

The Einstein equivalence principle

What is now called the "Einstein equivalence principle" states that the weak equivalence principle holds, and that:

The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime.

Here "local" has a very special meaning: not only must the experiment not look outside the laboratory, but it must also be small compared to variations in the gravitational field, tidal forces, so that the entire laboratory is freely falling. It also implies the absence of interactions with "external" fields other than the gravitational field.

The principle of relativity implies that the outcome of local experiments must be independent of the velocity of the apparatus, so the most important consequence of this principle is the Copernican idea that dimensionless physical values such as the fine-structure constant and electron-to-proton mass ratio must not depend on where in space or time we measure them. Many physicists believe that any Lorentz invariant theory that satisfies the weak equivalence principle also satisfies the Einstein equivalence principle.

Schiff's conjecture suggests that the weak equivalence principle implies the Einstein equivalence principle, but it has not been proven. Nonetheless, the two principles are tested with very different kinds of experiments. The Einstein equivalence principle has been criticized as imprecise, because there is no universally accepted way to distinguish gravitational from non-gravitational experiments (see for instance Hadley and Durand).

Tests of the Einstein equivalence principle

In addition to the tests of the weak equivalence principle, the Einstein equivalence principle can be tested by searching for variation of dimensionless constants and mass ratios. The present best limits on the variation of the fundamental constants have mainly been set by studying the naturally occurring Oklo natural nuclear fission reactor, where nuclear reactions similar to ones we observe today have been shown to have occurred underground approximately two billion years ago. These reactions are extremely sensitive to the values of the fundamental constants.

Constant | Year | Method | Limit on fractional change
proton gyromagnetic factor | 1976 | astrophysical | 10^−1
weak interaction constant | 1976 | Oklo | 10^−2
fine-structure constant | 1976 | Oklo | 10^−7
electron-to-proton mass ratio | 2002 | quasars | 10^−4

There have been a number of controversial attempts to constrain the variation of the strong interaction constant. There have been several suggestions that "constants" do vary on cosmological scales. The best known is the reported detection of variation (at the 10^−5 level) of the fine-structure constant from measurements of distant quasars; see Webb et al. Other researchers dispute these findings. Other tests of the Einstein equivalence principle are gravitational redshift experiments, such as the Pound–Rebka experiment, which test the position independence of experiments.
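
For scale, the fractional frequency shift a Pound–Rebka-style experiment must resolve is roughly gh/c^2 (a quick sketch; 22.5 m is the height of the Harvard tower used in 1959):

```python
# Sketch: first-order gravitational redshift over a tower of height h.
g = 9.81     # m/s^2
h = 22.5     # m
c = 2.998e8  # m/s

print(g * h / c**2)  # ~2.5e-15 fractional frequency shift
```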

The strong equivalence principle

The strong equivalence principle suggests the laws of gravitation are independent of velocity and location. In particular,

The gravitational motion of a small test body depends only on its initial position in spacetime and velocity, and not on its constitution.

and

The outcome of any local experiment (gravitational or not) in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime.

The first part is a version of the weak equivalence principle that applies to objects that exert a gravitational force on themselves, such as stars, planets, black holes or Cavendish experiments. The second part is the Einstein equivalence principle (with the same definition of "local"), restated to allow gravitational experiments and self-gravitating bodies. The freely-falling object or laboratory, however, must still be small, so that tidal forces may be neglected (hence "local experiment").

This is the only form of the equivalence principle that applies to self-gravitating objects (such as stars), which have substantial internal gravitational interactions. It requires that the gravitational constant be the same everywhere in the universe and is incompatible with a fifth force. It is much more restrictive than the Einstein equivalence principle.

The strong equivalence principle suggests that gravity is entirely geometrical by nature (that is, the metric alone determines the effect of gravity) and does not have any extra fields associated with it. If an observer measures a patch of space to be flat, then the strong equivalence principle suggests that it is absolutely equivalent to any other patch of flat space elsewhere in the universe. Einstein's theory of general relativity (including the cosmological constant) is thought to be the only theory of gravity that satisfies the strong equivalence principle. A number of alternative theories, such as Brans–Dicke theory, satisfy only the Einstein equivalence principle.

Tests of the strong equivalence principle

The strong equivalence principle can be tested by searching for a variation of Newton's gravitational constant G over the life of the universe, or equivalently, variation in the masses of the fundamental particles. A number of independent constraints, from orbits in the Solar System and studies of Big Bang nucleosynthesis have shown that G cannot have varied by more than 10%.

The strong equivalence principle can also be tested by searching for fifth forces (deviations from the gravitational force-law predicted by general relativity). These experiments typically look for failures of the inverse-square-law behavior of gravity (specifically Yukawa forces or failures of Birkhoff's theorem) in the laboratory. The most accurate tests over short distances have been performed by the Eöt–Wash group. A future satellite experiment, SEE (Satellite Energy Exchange), will search for fifth forces in space and should be able to further constrain violations of the strong equivalence principle. Other limits, looking for much longer-range forces, have been placed by searching for the Nordtvedt effect, a "polarization" of solar system orbits that would be caused by gravitational self-energy accelerating at a different rate from normal matter. This effect has been sensitively tested by the Lunar Laser Ranging Experiment. Other tests include studying the deflection of radiation from distant radio sources by the Sun, which can be accurately measured by very long baseline interferometry. Another sensitive test comes from measurements of the frequency shift of signals to and from the Cassini spacecraft. Together, these measurements have put tight limits on Brans–Dicke theory and other alternative theories of gravity.

In 2014, astronomers discovered a stellar triple system containing a millisecond pulsar PSR J0337+1715 and two white dwarfs orbiting it. The system provided them a chance to test the strong equivalence principle in a strong gravitational field with high accuracy.

In 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect consistent with the external field effect of Modified Newtonian dynamics (MOND), a hypothesis that proposes a modified gravity theory beyond general relativity, and inconsistent with tidal effects in the Lambda-CDM model paradigm, commonly known as the Standard Model of Cosmology.

Challenges

One challenge to the equivalence principle is the Brans–Dicke theory. Self-creation cosmology is a modification of the Brans–Dicke theory.

In August 2010, researchers from the University of New South Wales, Swinburne University of Technology, and Cambridge University published a paper titled "Evidence for spatial variation of the fine-structure constant", whose tentative conclusion is that, "qualitatively, [the] results suggest a violation of the Einstein Equivalence Principle, and could infer a very large or infinite universe, within which our 'local' Hubble volume represents a tiny fraction."

Explanations

Dutch physicist and string theorist Erik Verlinde has produced a self-contained, logical derivation of the equivalence principle based on the starting assumption of a holographic universe. Given this assumption, gravity would not be a true fundamental force as is currently thought, but instead an "emergent property" related to entropy. Verlinde's entropic gravity theory apparently leads naturally to the correct observed strength of dark energy; previous failures to explain its incredibly small magnitude have been described by cosmologist Michael Turner (who is credited with coining the term "dark energy") as "the greatest embarrassment in the history of theoretical physics". These ideas are far from settled and still very controversial.

Variable speed of light

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Variable_speed_of_light

A variable speed of light (VSL) is a feature of a family of hypotheses stating that the speed of light may in some way not be constant, for example, that it varies in space or time, or depending on frequency. Accepted classical theories of physics, and in particular general relativity, predict a constant speed of light in any local frame of reference; in some situations they predict apparent variations of the speed of light depending on the frame of reference, but this article does not refer to such variations as a variable speed of light. Various alternative theories of gravitation and cosmology, many of them non-mainstream, incorporate variations in the local speed of light.

Attempts to incorporate a variable speed of light into physics were made by Robert Dicke in 1957, and by several researchers starting from the late 1980s.

VSL should not be confused with faster-than-light theories, with the speed of light's dependence on a medium's refractive index, or with its measurement in a remote observer's frame of reference in a gravitational potential. In this context, the "speed of light" refers to the limiting speed c of the theory rather than to the velocity of propagation of photons.

Historical proposals

Background

Einstein's equivalence principle, on which general relativity is founded, requires that in any local, freely falling reference frame, the speed of light is always the same. This leaves open the possibility, however, that an inertial observer inferring the apparent speed of light in a distant region might calculate a different value. Spatial variation of the speed of light in a gravitational potential as measured against a distant observer's time reference is implicitly present in general relativity. The apparent speed of light will change in a gravity field and, in particular, go to zero at an event horizon as viewed by a distant observer. In deriving the gravitational redshift due to a spherically-symmetric massive body, a radial speed of light dr/dt can be defined in Schwarzschild coordinates, with t being the time recorded on a stationary clock at infinity. The result is

dr/dt = 1 − 2m/r

where m is MG/c^2 and where natural units are used such that c0 is equal to one.
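
Evaluating this expression shows the behavior described above (a sketch in natural units; the sample radii are arbitrary):

```python
# Sketch: Schwarzschild radial coordinate speed of light, dr/dt = 1 - 2m/r,
# in natural units (c0 = 1, m = MG/c^2). It vanishes at the horizon r = 2m.
m = 1.0  # geometrized mass

def radial_coordinate_speed(r: float) -> float:
    return 1.0 - 2.0 * m / r

for r in [2.0, 3.0, 10.0, 100.0, 1e6]:
    print(r, radial_coordinate_speed(r))  # 0 at the horizon, -> 1 far away
```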

Dicke's proposal (1957)

Robert Dicke, in 1957, developed a VSL theory of gravity, a theory in which (unlike general relativity) the speed of light measured locally by a free-falling observer could vary. Dicke assumed that both frequencies and wavelengths could vary, which, since c = νλ, resulted in a relative change of c. Dicke assumed a refractive index n = c/c0 = 1 + 2GM/(rc^2) (eqn. 5) and proved it to be consistent with the observed value for light deflection. In a comment related to Mach's principle, Dicke suggested that, while the 2GM/(rc^2) term in eqn. 5 is small, the constant part, 1, could have "its origin in the remainder of the matter in the universe".
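
Taking eqn. 5 in the form given above, the index can be evaluated at the solar limb (a sketch with standard solar values; the form of eqn. 5 is as reconstructed here):

```python
# Sketch: Dicke-style refractive index near the Sun, n = 1 + 2GM/(r c^2).
G = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30  # kg
R_sun = 6.963e8   # m
c = 2.998e8       # m/s

n_minus_1 = 2 * G * M_sun / (R_sun * c**2)
print(n_minus_1)  # ~4.2e-6: the small term; the constant "1" is the part
                  # Dicke speculatively tied to the rest of the universe
```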

Given that in a universe with an increasing horizon more and more masses contribute to the above refractive index, Dicke considered a cosmology where c decreased in time, providing an alternative explanation to the cosmological redshift.

Subsequent proposals

Variable speed of light models, including Dicke's, have been developed which agree with all known tests of general relativity.

Other models claim to shed light on the equivalence principle or make a link to Dirac's large numbers hypothesis.

Several hypotheses for varying speed of light, seemingly in contradiction to general relativity theory, have been published, including those of Giere and Tan (1986) and Sanejouand (2009). In 2003, Magueijo gave a review of such hypotheses.

Cosmological models with varying speeds of light have been proposed independently by Jean-Pierre Petit in 1988, John Moffat in 1992, and the team of Andreas Albrecht and JoĆ£o Magueijo in 1998 to explain the horizon problem of cosmology and propose an alternative to cosmic inflation.

Relation to other constants and their variation

Gravitational constant G

In 1937, Paul Dirac and others began investigating the consequences of natural constants changing with time. For example, Dirac proposed a change of only 5 parts in 10^11 per year of the Newtonian constant of gravitation G to explain the relative weakness of the gravitational force compared to other fundamental forces. This has become known as the Dirac large numbers hypothesis.

However, Richard Feynman showed that the gravitational constant most likely could not have changed this much in the past 4 billion years based on geological and solar system observations, although this may depend on assumptions about G varying in isolation. (See also strong equivalence principle.)

Fine-structure constant Ī±

One group, studying distant quasars, has claimed to detect a variation of the fine-structure constant at the level of one part in 10^5. Other authors dispute these results. Other groups studying quasars claim no detectable variation at much higher sensitivities.

The natural nuclear reactor of Oklo has been used to check whether the atomic fine-structure constant α might have changed over the past 2 billion years. That is because α influences the rate of various nuclear reactions. For example, ¹⁴⁹Sm captures a neutron to become ¹⁵⁰Sm, and since the rate of neutron capture depends on the value of α, the ratio of the two samarium isotopes in samples from Oklo can be used to calculate the value of α from 2 billion years ago. Several studies have analysed the relative concentrations of radioactive isotopes left behind at Oklo, and most have concluded that nuclear reactions then were much the same as they are today, which implies α was the same too.

Paul Davies and collaborators have suggested that it is in principle possible to disentangle which of the dimensionful constants (the elementary charge, Planck's constant, and the speed of light) of which the fine-structure constant is composed is responsible for the variation. However, this has been disputed by others and is not generally accepted.

Criticisms of various VSL concepts

Dimensionless and dimensionful quantities

It has to be clarified what a variation in a dimensionful quantity actually means, since any such quantity can be changed merely by changing one's choice of units. John Barrow wrote:

"[An] important lesson we learn from the way that pure numbers like Ī± define the world is what it really means for worlds to be different. The pure number we call the fine-structure constant and denote by Ī± is a combination of the electron charge, e, the speed of light, c, and Planck's constant, h. At first we might be tempted to think that a world in which the speed of light was slower would be a different world. But this would be a mistake. If c, h, and e were all changed so that the values they have in metric (or any other) units were different when we looked them up in our tables of physical constants, but the value of Ī± remained the same, this new world would be observationally indistinguishable from our world. The only thing that counts in the definition of worlds are the values of the dimensionless constants of Nature. If all masses were doubled in value [including the Planck mass mP] you cannot tell because all the pure numbers defined by the ratios of any pair of masses are unchanged."

Any equation of physical law can be expressed in a form in which all dimensional quantities are normalized against like-dimensioned quantities (called nondimensionalization), resulting in only dimensionless quantities remaining. In fact, physicists can choose their units so that the physical constants c, G, ħ = h/(2π), 4πε₀, and kB take the value one, resulting in every physical quantity being normalized against its corresponding Planck unit. For that reason, it has been claimed that specifying the evolution of a dimensional quantity is meaningless. When Planck units are used and such equations of physical law are expressed in this nondimensionalized form, no dimensional physical constants such as c, G, ħ, ε₀, or kB remain, only dimensionless quantities, as predicted by the Buckingham π theorem. Short of their anthropometric unit dependence, there simply is no speed of light, gravitational constant, or Planck constant remaining in mathematical expressions of physical reality to be subject to such hypothetical variation. For example, in the case of a hypothetically varying gravitational constant, G, the relevant dimensionless quantities that potentially vary ultimately become the ratios of the Planck mass to the masses of the fundamental particles. Some key dimensionless quantities (thought to be constant) that are related to the speed of light (among other dimensional quantities such as ħ, e, ε₀), notably the fine-structure constant or the proton-to-electron mass ratio, could in principle have meaningful variance, and their possible variation continues to be studied.
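
The point is easiest to see with Planck units in hand (a sketch using rounded values): once masses are expressed in Planck masses, only a pure number remains.

```python
# Sketch: the proton mass as a dimensionless multiple of the Planck mass.
import math

c = 2.998e8           # m/s
G = 6.674e-11         # m^3 kg^-1 s^-2
hbar = 1.055e-34      # J s
m_proton = 1.673e-27  # kg

m_planck = math.sqrt(hbar * c / G)  # ~2.18e-8 kg, unit-dependent
print(m_proton / m_planck)          # ~7.7e-20, a pure number, unit-free
```

Doubling every mass (including the Planck mass) leaves this ratio, and every observable, unchanged, which is Barrow's point above.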

General critique of varying c cosmologies

From a very general point of view, G. F. R. Ellis and Jean-Philippe Uzan expressed concerns that a varying c would require a rewrite of much of modern physics to replace the current system which depends on a constant c. Ellis claimed that any varying c theory (1) must redefine distance measurements; (2) must provide an alternative expression for the metric tensor in general relativity; (3) might contradict Lorentz invariance; (4) must modify Maxwell's equations; and (5) must be done consistently with respect to all other physical theories. VSL cosmologies remain out of mainstream physics.

Dimensionless physical constant

From Wikipedia, the free encyclopedia

In physics, a dimensionless physical constant is a physical constant that is dimensionless, i.e. a pure number having no units attached and having a numerical value that is independent of whatever system of units may be used. In aerodynamics for example, if one considers one particular airfoil, the Reynolds number value of the laminar–turbulent transition is one relevant dimensionless physical constant of the problem. However, it is strictly related to the particular problem: for example, it is related to the airfoil being considered and also to the type of fluid in which it moves.

The term fundamental physical constant is used to refer to some universal dimensionless constants. Perhaps the best-known example is the fine-structure constant, α, which has an approximate value of 1/137.036.

Terminology

It has been argued the term fundamental physical constant should be restricted to the dimensionless universal physical constants that currently cannot be derived from any other source; this stricter definition is followed here.

However, the term fundamental physical constant has also been used occasionally to refer to certain universal dimensioned physical constants, such as the speed of light c, vacuum permittivity ε₀, Planck constant h, and the gravitational constant G, that appear in the most basic theories of physics. NIST and CODATA have sometimes used the term in this less strict manner.

Characteristics

There is no exhaustive list of such constants, but it does make sense to ask about the minimal number of fundamental constants necessary to determine a given physical theory. Thus, the Standard Model requires 25 physical constants, about half of which are the masses of fundamental particles (which become "dimensionless" when expressed relative to the Planck mass or, alternatively, as coupling strengths with the Higgs field, along with the gravitational constant).

Fundamental physical constants cannot be derived and have to be measured. Developments in physics may lead to either a reduction or an extension of their number: discovery of new particles, or new relationships between physical phenomena, would introduce new constants, while the development of a more fundamental theory might allow the derivation of several constants from a more fundamental constant.

A long-sought goal of theoretical physics is to find first principles (theory of everything) from which all of the fundamental dimensionless constants can be calculated and compared to the measured values.

The large number of fundamental constants required in the Standard Model has been regarded as unsatisfactory since the theory's formulation in the 1970s. The desire for a theory that would allow the calculation of particle masses is a core motivation for the search for "Physics beyond the Standard Model".

History

In the 1920s and 1930s, Arthur Eddington embarked upon extensive mathematical investigation into the relations between the fundamental quantities in basic physical theories, later used as part of his effort to construct an overarching theory unifying quantum mechanics and cosmological physics. For example, he speculated on the potential consequences of the ratio of the electron radius to its mass. Most notably, in a 1929 paper he set out an argument based on the Pauli exclusion principle and the Dirac equation that fixed the value of the reciprocal of the fine-structure constant as α^−1 = 16 + (1/2) × 16 × (16 − 1) = 136. When its value was discovered to be closer to 137, he changed his argument to match that value. His ideas were not widely accepted, and subsequent experiments have shown that they were wrong (for example, none of the measurements of the fine-structure constant suggest an integer value; in 2018 it was measured at α = 1/137.035999046(27)).

Though his derivations and equations were unfounded, Eddington was the first physicist to recognize the significance of universal dimensionless constants, now considered among the most critical components of major physical theories such as the Standard Model and ΛCDM cosmology. He was also the first to argue for the importance of the cosmological constant Λ itself, considering it vital for explaining the expansion of the universe, at a time when most physicists (including its discoverer, Albert Einstein) considered it an outright mistake or mathematical artifact and assumed a value of zero: this at least proved prescient, and a significant positive Λ features prominently in ΛCDM.

Eddington may have been the first to attempt in vain to derive the basic dimensionless constants from fundamental theories and equations, but he was certainly not the last. Many others would subsequently undertake similar endeavors, and efforts occasionally continue even today. None have yet produced convincing results or gained wide acceptance among theoretical physicists.

An empirical relation between the masses of the electron, muon and tau has been discovered by physicist Yoshio Koide, but this formula remains unexplained.

Examples

Dimensionless fundamental physical constants include:

Fine-structure constant

One of the dimensionless fundamental constants is the fine-structure constant:

α = e^2 / (4πε₀ħc) ≈ 1/137.036

where e is the elementary charge, ħ is the reduced Planck constant, c is the speed of light in vacuum, and ε₀ is the permittivity of free space. The fine-structure constant quantifies the strength of the electromagnetic interaction. At low energies, α ≈ 1/137, whereas at the scale of the Z boson, about 90 GeV, one measures α ≈ 1/127. There is no accepted theory explaining the value of α; Richard Feynman elaborates:

There is a most profound and beautiful question associated with the observed coupling constant, e – the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out, without putting it in secretly!
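
The numbers Feynman quotes follow directly from the definition above (a sketch using CODATA-style values):

```python
# Sketch: the fine-structure constant and Feynman's coupling amplitude.
import math

e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)         # ~137.036
print(math.sqrt(alpha))  # ~0.08542, Feynman's "amplitude" e
```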

Standard model

The original standard model of particle physics from the 1970s contained 19 fundamental dimensionless constants describing the masses of the particles and the strengths of the electroweak and strong forces. In the 1990s, neutrinos were discovered to have nonzero mass, and a quantity called the vacuum angle was found to be indistinguishable from zero.

The complete standard model requires 25 fundamental dimensionless constants (Baez, 2011). At present, their numerical values are not understood in terms of any widely accepted theory and are determined only from measurement.

Cosmological constants

The cosmological constant, which can be thought of as the density of dark energy in the universe, is a fundamental constant in physical cosmology that has a dimensionless value of approximately 10^−122. Other dimensionless constants are the measure of homogeneity in the universe, denoted by Q, which is explained below by Martin Rees, the baryon mass per photon, the cold dark matter mass per photon and the neutrino mass per photon.
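
One common way to arrive at the quoted ~10^−122 is to express the dark-energy density in Planck units (a rough sketch with round observational numbers; conventions shift the exponent by an order of magnitude or so):

```python
# Sketch: dark-energy mass density divided by the Planck density.
import math

c, G, hbar = 2.998e8, 6.674e-11, 1.055e-34
rho_lambda = 6.0e-27  # approximate dark-energy mass density, kg/m^3

l_planck = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
m_planck = math.sqrt(hbar * c / G)     # ~2.2e-8 kg
rho_planck = m_planck / l_planck**3    # ~5.2e96 kg/m^3

print(rho_lambda / rho_planck)  # ~1e-123, the order of the quoted value
```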

Barrow and Tipler

Barrow and Tipler (1986) anchor their broad-ranging discussion of astrophysics, cosmology, quantum physics, teleology, and the anthropic principle in the fine-structure constant, the proton-to-electron mass ratio (which they, along with Barrow (2002), call β), and the coupling constants for the strong force and gravitation.

Martin Rees's Six Numbers

Martin Rees, in his book Just Six Numbers, mulls over the following six dimensionless constants, whose values he deems fundamental to present-day physical theory and the known structure of the universe:

  • N ≈ 10^36: the ratio of the strengths of the electrical and gravitational forces between two protons;
  • ε ≈ 0.007: the fraction of mass converted to energy when hydrogen fuses into helium;
  • Ω ≈ 0.3: the density of matter in the universe relative to the critical density;
  • λ ≈ 0.7: the fraction of the universe's energy density contributed by the cosmological constant;
  • Q ≈ 10^−5: the amplitude of the primordial density fluctuations;
  • D = 3: the number of spatial dimensions.

N and ε govern the fundamental interactions of physics. The other constants (D excepted) govern the size, age, and expansion of the universe. These five constants must be estimated empirically. D, on the other hand, is necessarily a nonzero natural number and does not have an uncertainty. Hence most physicists would not deem it a dimensionless physical constant of the sort discussed in this entry.

Any plausible fundamental physical theory must be consistent with these six constants, and must either derive their values from the mathematics of the theory, or accept their values as empirical.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...