Friday, June 8, 2018

Gravity

From Wikipedia, the free encyclopedia

Hammer and feather drop: astronaut David Scott (Apollo 15) on the Moon re-enacting the legend of Galileo's gravity experiment.

Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward (or gravitate toward) one another, including objects ranging from electrons and atoms, to planets, stars, and galaxies. Since energy and mass are equivalent, all forms of energy (including light) cause gravitation and are under the influence of it.[1] On Earth, gravity gives weight to physical objects, and the Moon's gravity causes the ocean tides. The gravitational attraction of the original gaseous matter present in the Universe caused it to begin coalescing, forming stars – and for the stars to group together into galaxies – so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become weaker as objects get farther away.

Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915) which describes gravity not as a force, but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon.[2] However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force which causes any two bodies to be attracted to each other, with the force proportional to the product of their masses and inversely proportional to the square of the distance between them.

Gravity is the weakest of the four fundamental forces of physics, approximately 10³⁸ times weaker than the strong force, 10³⁶ times weaker than the electromagnetic force and 10²⁹ times weaker than the weak force. As a consequence, it has no significant influence at the level of subatomic particles.[3] In contrast, it is the dominant force at the macroscopic scale, and is the cause of the formation, shape and trajectory (orbit) of astronomical bodies. For example, gravity causes the Earth and the other planets to orbit the Sun; it also causes the Moon to orbit the Earth, causes the formation of tides, and drives the formation and evolution of the Solar System, stars and galaxies.
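The weakness of gravity relative to electromagnetism can be illustrated by comparing the two forces between a pair of protons; since both follow an inverse-square law, the separation cancels in the ratio. A quick sketch in Python, using rounded values for the physical constants:

```python
# Compare the gravitational and electrostatic forces between two protons.
# Constants are rounded CODATA values in SI units.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9          # Coulomb constant, N m^2 C^-2
m_p = 1.673e-27      # proton mass, kg
e = 1.602e-19        # elementary charge, C

# Both forces go as 1/r^2, so the distance cancels in the ratio.
ratio = (G * m_p**2) / (k * e**2)
print(f"F_gravity / F_electric = {ratio:.2e}")  # ~8e-37
```

The result is on the order of 10⁻³⁶, consistent with the comparison quoted in the text.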

The earliest instance of gravity in the Universe, possibly in the form of quantum gravity, supergravity or a gravitational singularity, along with ordinary space and time, developed during the Planck epoch (up to 10⁻⁴³ seconds after the birth of the Universe), possibly from a primeval state, such as a false vacuum, quantum vacuum or virtual particle, in a currently unknown manner.[4] Attempts to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three forces of physics, are a current area of research.

History of gravitational theory

Scientific revolution

Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. In his famous (though possibly apocryphal[5]) experiment dropping balls from the Tower of Pisa, and later with careful measurements of balls rolling down inclines, Galileo showed that gravitational acceleration is the same for all objects. This was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.[6] Galileo postulated air resistance as the reason that objects with less mass fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity.[7]

Newton's theory of gravitation


Sir Isaac Newton, an English physicist who lived from 1642 to 1727

In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. In his own words, "I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly."[8] The equation is the following:

F = G m₁m₂ / r²

where F is the force, m₁ and m₂ are the masses of the interacting objects, r is the distance between the centers of the masses and G is the gravitational constant.
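The law translates directly into code. A minimal sketch (the Earth–Moon figures below are rounded illustrative values):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r**2."""
    return G * m1 * m2 / r**2

# Force between the Earth and the Moon (masses in kg, distance in m).
earth, moon, distance = 5.972e24, 7.348e22, 3.844e8
print(f"{gravitational_force(earth, moon, distance):.3e} N")  # ~2.0e20 N
```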

Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of Neptune.

A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. By the end of the 19th century, it was known that its orbit showed slight perturbations that could not be accounted for entirely under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit.

Although Newton's theory has been superseded by Einstein's general relativity, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is simpler to work with and it gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies.

Equivalence principle

The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way, and that the effects of gravity are indistinguishable from certain aspects of acceleration and deceleration. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces (such as air resistance and electromagnetic effects) are negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are planned to test the principle more accurately in space.[9]

Formulations of the equivalence principle include:
  • The weak equivalence principle: The trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition.[10]
  • The Einsteinian equivalence principle: The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime.[11]
  • The strong equivalence principle, which requires both of the above to hold.

General relativity


Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.

In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground.[12][13] In Newtonian physics, however, no such acceleration can occur unless at least one of the objects is being operated on by a force.

Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. Like Newton's first law of motion, Einstein's theory states that if a force is applied to an object, it deviates from a geodesic. For instance, we are no longer following geodesics while standing because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along geodesics in spacetime is considered inertial.

Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime. A metric tensor describes a geometry of spacetime. The geodesic paths for a spacetime are calculated from the metric tensor.

Solutions

Notable solutions of the Einstein field equations include:
  • The Schwarzschild solution, which describes spacetime surrounding a spherically symmetric, non-rotating, uncharged massive object.
  • The Reissner–Nordström solution, which describes a charged, spherically symmetric, non-rotating massive object.
  • The Kerr solution, for rotating massive objects.
  • The Kerr–Newman solution, for charged, rotating massive objects.
  • The cosmological Friedmann–Lemaître–Robertson–Walker solution, which predicts an expanding or contracting Universe.

Tests

The tests of general relativity included the following:[14]
  • General relativity accounts for the anomalous perihelion precession of Mercury.[15]
  • The prediction that time runs slower at lower potentials (gravitational time dilation) has been confirmed by the Pound–Rebka experiment (1959), the Hafele–Keating experiment, and the GPS.
  • The prediction of the deflection of light was first confirmed by Arthur Stanley Eddington from his observations during the Solar eclipse of 29 May 1919.[16][17] Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. However, his interpretation of the results was later disputed.[18] More recent tests using radio interferometric measurements of quasars passing behind the Sun have more accurately and consistently confirmed the deflection of light to the degree predicted by general relativity.[19] See also gravitational lens.
  • The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals.
  • Gravitational radiation has been indirectly confirmed through studies of binary pulsars. On 11 February 2016, the LIGO and Virgo collaborations announced the first observation of a gravitational wave.
  • Alexander Friedmann in 1922 found that Einstein equations have non-stationary solutions (even in the presence of the cosmological constant). In 1927 Georges Lemaître showed that static solutions of the Einstein equations, which are possible in the presence of the cosmological constant, are unstable, and therefore the static Universe envisioned by Einstein could not exist. Later, in 1931, Einstein himself agreed with the results of Friedmann and Lemaître. Thus general relativity predicted that the Universe had to be non-static—it had to either expand or contract. The expansion of the Universe discovered by Edwin Hubble in 1929 confirmed this prediction.[20]
  • The theory's prediction of frame dragging was consistent with the recent Gravity Probe B results.[21]
  • General relativity predicts that light loses energy when traveling away from massive bodies, an effect known as gravitational redshift. This was verified on Earth and in the Solar System around 1960.

Gravity and quantum mechanics

In the decades after the discovery of general relativity, it was realized that general relativity is incompatible with quantum mechanics.[22] It is possible to describe gravity in the framework of quantum field theory like the other fundamental forces, such that the attractive force of gravity arises due to exchange of virtual gravitons, in the same way as the electromagnetic force arises from exchange of virtual photons.[23][24] This reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length,[22] where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required.

Specifics

Earth's gravity


An initially-stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. This image spans half a second and was captured at 20 flashes per second.

Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.


If an object with comparable mass to that of the Earth were to fall towards it, then the corresponding acceleration of the Earth would be observable.

The strength of the gravitational field is numerically equal to the acceleration of objects under its influence.[25] The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities.[26] For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI).

That value, denoted g, is g = 9.80665 m/s² (32.1740 ft/s²).[27][28]

The standard value of 9.80665 m/s² is the one originally adopted by the International Committee on Weights and Measures in 1901 for 45° latitude, even though it has been shown to be too high by about five parts in ten thousand.[29] This value has persisted in meteorology and in some standard atmospheres as the value for 45° latitude even though it applies more precisely to the latitude of 45°32'33".[30]

Assuming the standardized value for g and ignoring air resistance, this means that an object falling freely near the Earth's surface increases its velocity by 9.80665 m/s (32.1740 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.80665 m/s (32.1740 ft/s) after one second, approximately 19.61 m/s (64.3 ft/s) after two seconds, and so on, adding 9.80665 m/s (32.1740 ft/s) to each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time.
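The per-second growth in velocity can be tabulated directly from v = g·t:

```python
g = 9.80665  # standard gravity, m/s^2

# Velocity of a freely falling object (starting from rest, no air
# resistance) increases by g for every second of descent: v = g * t.
for t in range(1, 4):
    print(f"after {t} s: v = {g * t:.2f} m/s")
# after 1 s: v = 9.81 m/s
# after 2 s: v = 19.61 m/s
# after 3 s: v = 29.42 m/s
```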

According to Newton's third law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the object's. If the object does not bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity and prevents further acceleration.
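The scale of the Earth's reciprocal acceleration can be checked with a one-line estimate (the Earth mass below is a rounded value):

```python
g = 9.80665         # standard gravity, m/s^2
M_earth = 5.972e24  # mass of the Earth, kg

# By Newton's third law, a falling object pulls on the Earth with force m*g,
# so the Earth's resulting acceleration a = F / M_earth is vanishingly small.
m = 1.0  # a 1 kg object
a_earth = m * g / M_earth
print(f"Earth's acceleration toward a 1 kg object: {a_earth:.2e} m/s^2")  # ~1.6e-24
```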

The apparent force of gravity on Earth is the resultant (vector sum) of two forces:[31] (a) the gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s² at the Equator to about 9.832 m/s² at the poles.

Equations for a falling body near the surface of the Earth

Under an assumption of constant gravitational attraction, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s² on Earth. This resulting force is the object's weight. The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. The image on the right, spanning half a second, was captured with a stroboscopic flash at 20 flashes per second. During the first 1⁄20 of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2⁄20 it has dropped a total of 4 units; by 3⁄20, 9 units; and so on.

Under the same constant gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression h = v²/(2g) for the maximum height reached by a vertically projected body with initial velocity v is useful for small heights and small initial velocities only.
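These constant-gravity formulas can be collected in a short sketch:

```python
g = 9.80665  # standard gravity, m/s^2

def fall_distance(t):
    """Distance fallen from rest after time t: d = g * t**2 / 2."""
    return 0.5 * g * t**2

def max_height(v):
    """Maximum height of a body projected upward with speed v: h = v**2 / (2g)."""
    return v**2 / (2 * g)

def potential_energy(m, h):
    """E_p = m * g * h, valid only for small heights above the surface."""
    return m * g * h

print(f"{fall_distance(0.5):.2f} m")   # ~1.23 m in half a second
print(f"{max_height(10.0):.2f} m")     # ~5.10 m for v = 10 m/s
```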

Gravity and astronomy


Gravity acts on stars that form the Milky Way.[32]

The application of Newton's law of gravity has enabled the acquisition of much of the detailed information we have about the planets in the Solar System, the mass of the Sun, and details of quasars; even the existence of dark matter is inferred using Newton's law of gravity. Although we have not traveled to all the planets nor to the Sun, we know their masses. These masses are obtained by applying the laws of gravity to the measured characteristics of the orbit. In space an object maintains its orbit because of the force of gravity acting upon it. Planets orbit stars, stars orbit galactic centers, galaxies orbit a center of mass in clusters, and clusters orbit in superclusters. The force of gravity exerted on one object by another is directly proportional to the product of those objects' masses and inversely proportional to the square of the distance between them.
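As an illustration of obtaining a mass from measured orbital characteristics, Kepler's third law (a consequence of Newton's law of gravity for a nearly circular orbit) recovers the Sun's mass from the Earth's orbital radius and period:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
a = 1.496e11            # Earth's mean orbital radius (1 AU), m
T = 365.25 * 24 * 3600  # Earth's orbital period, s

# For a (nearly) circular orbit: G*M/a^2 = 4*pi^2*a/T^2,
# so the central mass is M = 4*pi^2*a^3 / (G*T^2).
M_sun = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Mass of the Sun: {M_sun:.2e} kg")  # ~2e30 kg
```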

Gravitational radiation

According to general relativity, gravitational radiation is generated in situations where the curvature of spacetime is oscillating, as is the case with co-orbiting objects. The gravitational radiation emitted by the Solar System is far too small to measure. However, gravitational radiation has been indirectly observed as an energy loss over time in binary pulsar systems such as PSR B1913+16. It is believed that neutron star mergers and black hole formation may create detectable amounts of gravitational radiation. Gravitational radiation observatories such as the Laser Interferometer Gravitational Wave Observatory (LIGO) have been created to study the problem. On 14 September 2015, LIGO registered gravitational waves for the first time, produced by the collision of two black holes 1.3 billion light-years from Earth; the Advanced LIGO team announced the detection in February 2016.[33][34] This observation confirms the theoretical predictions of Einstein and others that such waves exist, and confirms that binary black holes exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe, including the Big Bang and what happened after it.[35][36]

Speed of gravity

In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light.[37] This means that if the Sun suddenly disappeared, the Earth would keep orbiting it normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in the Chinese Science Bulletin in February 2013.[38]
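The 8-minute figure follows directly from the mean Sun–Earth distance and the speed of light:

```python
c = 2.998e8    # speed of light, m/s
au = 1.496e11  # mean Sun-Earth distance, m

# If gravity propagates at c, a change at the Sun takes this long to reach Earth:
t = au / c
print(f"{t:.0f} s = {t / 60:.1f} min")  # ~499 s, about 8.3 minutes
```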

In October 2017, the LIGO and Virgo detectors received gravitational wave signals within 2 seconds of gamma ray satellites and optical telescopes seeing signals from the same direction. This confirmed that the speed of gravitational waves was the same as the speed of light.[39]

Anomalies and discrepancies

There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.


Rotation curve of a typical spiral galaxy: predicted (A) and observed (B). The discrepancy between the curves is attributed to dark matter.
  • Extra-fast stars: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of normal matter. Galaxies within galaxy clusters show a similar pattern. Dark matter, which would interact through gravitation but not electromagnetically, would account for the discrepancy. Various modifications to Newtonian dynamics have also been proposed.
  • Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity assist maneuvers.
  • Accelerating expansion: The metric expansion of space seems to be speeding up. Dark energy has been proposed to explain this. A recent alternative explanation is that the geometry of space is not homogeneous (due to clusters of galaxies) and that when the data are reinterpreted to take this into account, the expansion is not speeding up after all,[40] however this conclusion is disputed.[41]
  • Anomalous increase of the astronomical unit: Recent measurements indicate that planetary orbits are widening faster than if this were solely through the Sun losing mass by radiating energy.
  • Extra energetic photons: Photons travelling through galaxy clusters should gain energy and then lose it again on the way out. The accelerating expansion of the Universe should stop the photons returning all the energy, but even taking this into account photons from the cosmic microwave background radiation gain twice as much energy as expected. This may indicate that gravity falls off faster than an inverse-square law at certain distance scales.[42]
  • Extra massive hydrogen clouds: The spectral lines of the Lyman-alpha forest suggest that hydrogen clouds are more clumped together at certain scales than expected and, like dark flow, may indicate that gravity falls off slower than an inverse-square law at certain distance scales.[42]

Alternative theories

Historical alternative theories

Modern alternative theories

Weak interaction


Radioactive beta decay is due to the weak interaction, which transforms a neutron into a proton, an electron, and an electron antineutrino.

In particle physics, the weak interaction (the weak force or weak nuclear force) is the mechanism of interaction between sub-atomic particles that causes radioactive decay and thus plays an essential role in nuclear fission. The theory of the weak interaction is sometimes called quantum flavordynamics (QFD), in analogy with the terms quantum chromodynamics (QCD) dealing with the strong interaction and quantum electrodynamics (QED) dealing with the electromagnetic force. However, the term QFD is rarely used, because the weak force is best understood in terms of electroweak theory (EWT).[1]

The weak interaction takes place only at very small, sub-atomic distances, less than the diameter of a proton. It is one of the four known fundamental interactions of nature, alongside the strong interaction, electromagnetism, and gravitation.

Background

The Standard Model of particle physics provides a uniform framework for understanding the electromagnetic, weak, and strong interactions. An interaction occurs when two particles, typically but not necessarily half-integer spin fermions, exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary (e.g. electrons or quarks) or composite (e.g. protons or neutrons), although at the deepest levels, all weak interactions ultimately are between elementary particles.

In the case of the weak interaction, fermions can exchange three distinct types of force carriers known as the W⁺, W⁻, and Z bosons. The mass of each of these bosons is far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. The force is in fact termed weak because its field strength over a given distance is typically several orders of magnitude less than that of the strong nuclear force or the electromagnetic force.

Quarks, which make up composite particles like neutrons and protons, come in six "flavors" – up, down, strange, charm, top and bottom – which give those composite particles their properties. The weak interaction is unique in that it allows for quarks to swap their flavor for another. The swapping of those properties is mediated by the force carrier bosons. For example, during beta minus decay, a down quark within a neutron is changed into an up quark, thus converting the neutron to a proton and resulting in the emission of an electron and an electron antineutrino.

The weak interaction is the only fundamental interaction that breaks parity symmetry, and similarly, the only one to break charge–parity (CP) symmetry.

Other important examples of phenomena involving the weak interaction include beta decay, and the fusion of hydrogen into deuterium that powers the Sun's thermonuclear process. Most fermions will decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence, commonly used in tritium illumination, and in the related field of betavoltaics.[2]

During the quark epoch of the early universe, the electroweak force separated into the electromagnetic and weak forces.

History

In 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi's interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range.[3][4]

However, it is better described as a non-contact force field having a finite range, albeit very short.[citation needed] In 1968, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force.[citation needed]

The existence of the W and Z bosons was not directly confirmed until 1983.[5]

Properties


A diagram depicting the various decay routes due to the weak interaction and some indication of their likelihood. The intensity of the lines is given by the CKM parameters.

The weak interaction is unique in a number of respects:
  • It is the only interaction capable of changing the flavor of quarks (i.e., of changing one type of quark into another).
  • It is the only interaction that violates parity symmetry (P), and similarly the only one that violates charge–parity symmetry (CP).
  • It is propagated by carrier particles that have significant masses.

Due to their large mass (approximately 90 GeV/c²[6]) these carrier particles, termed the W and Z bosons, are short-lived with a lifetime of under 10⁻²⁴ seconds.[7] The weak interaction has a coupling constant (an indicator of interaction strength) of between 10⁻⁷ and 10⁻⁶, compared to the strong interaction's coupling constant of 1 and the electromagnetic coupling constant of about 10⁻²;[8] consequently the weak interaction is weak in terms of strength.[9] The weak interaction has a very short range (around 10⁻¹⁷ to 10⁻¹⁶ m[9]).[8] At distances around 10⁻¹⁸ meters, the weak interaction has a strength of a similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. At distances of around 3×10⁻¹⁷ m, just one and a half orders of magnitude larger, the weak interaction is 10,000 times weaker than the electromagnetic force.[10]
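The connection between the carriers' large mass and the force's short range can be estimated from the reduced Compton wavelength of the bosons, a standard order-of-magnitude argument (a sketch, not a figure from the text's references):

```python
# Estimate the range of the weak force as the reduced Compton wavelength
# of its ~90 GeV/c^2 carrier bosons: lambda = hbar / (m * c).
hbar = 1.055e-34  # reduced Planck constant, J s
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # electron-volt, J

m = 90e9 * eV / c**2          # ~90 GeV/c^2 expressed in kg
print(f"range ~ {hbar / (m * c):.1e} m")  # on the order of 1e-18 m
```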

The weak interaction affects all the fermions of the Standard Model, as well as the Higgs boson; neutrinos interact through gravity and the weak interaction only, and neutrinos were the original reason for the name weak force.[9] The weak interaction does not produce bound states nor does it involve binding energy – something that gravity does on an astronomical scale, that the electromagnetic force does at the atomic level, and that the strong nuclear force does inside nuclei.[11]

Its most noticeable effect is due to its first unique feature: flavor changing. A neutron, for example, is heavier than a proton (its sister nucleon), but it cannot decay into a proton without changing the flavor (type) of one of its two down quarks to an up quark. Neither the strong interaction nor electromagnetism permits flavor changing, so this proceeds by weak decay; without weak decay, quark properties such as strangeness and charm (associated with the quarks of the same name) would also be conserved across all interactions.

All mesons are unstable because of weak decay.[12] In the process known as beta decay, a down quark in the neutron can change into an up quark by emitting a virtual W⁻ boson, which is then converted into an electron and an electron antineutrino.[13] Another example is electron capture, a common variant of radioactive decay, wherein a proton and an electron within an atom interact and are changed to a neutron (an up quark is changed to a down quark) and an electron neutrino is emitted.

Due to the large masses of the W bosons, particle transformations or decays (e.g., flavor change) that depend on the weak interaction typically occur much more slowly than transformations or decays that depend only on the strong or electromagnetic forces. For example, a neutral pion decays electromagnetically, and so has a life of only about 10⁻¹⁶ seconds. In contrast, a charged pion can only decay through the weak interaction, and so lives about 10⁻⁸ seconds, or a hundred million times longer than a neutral pion.[14] A particularly extreme example is the weak-force decay of a free neutron, which takes about 15 minutes.[13]
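The "hundred million times longer" comparison is a simple ratio of the quoted order-of-magnitude lifetimes:

```python
# Lifetime comparison from the text: electromagnetic vs weak decay of pions.
tau_pi0 = 1e-16        # neutral pion (electromagnetic decay), s
tau_pi_charged = 1e-8  # charged pion (weak decay), s

ratio = tau_pi_charged / tau_pi0
print(f"charged pion lives ~{ratio:.0e} times longer")  # ~1e+08
```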

Weak isospin and weak hypercharge

All particles have a property called weak isospin (symbol T3), which serves as a quantum number and governs how that particle behaves in the weak interaction. Weak isospin plays the same role in the weak interaction as does electric charge in electromagnetism, and color charge in the strong interaction. All left-handed fermions have a weak isospin value of either +1⁄2 or −1⁄2. For example, the up quark has a T3 of +1⁄2 and the down quark −1⁄2. A quark never decays through the weak interaction into a quark of the same T3: quarks with a T3 of +1⁄2 only decay into quarks with a T3 of −1⁄2 and vice versa.



π⁺ decay through the weak interaction

In any given interaction, weak isospin is conserved: the sum of the weak isospin numbers of the particles entering the interaction equals the sum of the weak isospin numbers of the particles exiting that interaction. For example, a (left-handed) π⁺, with a weak isospin of +1, normally decays into a νμ (+1⁄2) and a μ⁺ (as a right-handed antiparticle, +1⁄2).[14]

Following the development of the electroweak theory, another property, weak hypercharge, was developed. It is dependent on a particle's electrical charge and weak isospin, and is defined by:

YW = 2(Q − T3)

where YW is the weak hypercharge of a given type of particle, Q is its electrical charge (in elementary charge units) and T3 is its weak isospin. Whereas some particles have a weak isospin of zero, all spin-1⁄2 particles have non-zero weak hypercharge. Weak hypercharge is the generator of the U(1) component of the electroweak gauge group.
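The defining relation YW = 2(Q − T3) can be checked for the left-handed fermions mentioned above; exact fractions avoid floating-point noise:

```python
from fractions import Fraction

def weak_hypercharge(Q, T3):
    """Weak hypercharge Y_W = 2 * (Q - T3), with Q in elementary charge units."""
    return 2 * (Fraction(Q) - Fraction(T3))

# Left-handed fermions: (electric charge, weak isospin) -> hypercharge
up       = weak_hypercharge(Fraction(2, 3),  Fraction(1, 2))   # quarks: Y_W = 1/3
down     = weak_hypercharge(Fraction(-1, 3), Fraction(-1, 2))
electron = weak_hypercharge(-1, Fraction(-1, 2))               # leptons: Y_W = -1
neutrino = weak_hypercharge(0,  Fraction(1, 2))
print(up, down, electron, neutrino)  # 1/3 1/3 -1 -1
```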

Interaction types

There are two types of weak interaction (called vertices). The first type is called the "charged-current interaction" because it is mediated by particles that carry an electric charge (the W⁺ or W⁻ bosons), and is responsible for the beta decay phenomenon. The second type is called the "neutral-current interaction" because it is mediated by a neutral particle, the Z boson.

Charged-current interaction


The Feynman diagram for beta-minus decay of a neutron into a proton, electron and electron antineutrino, via an intermediate heavy W− boson

In one type of charged current interaction, a charged lepton (such as an electron or a muon, having a charge of −1) can absorb a W+ boson (a particle with a charge of +1) and be thereby converted into a corresponding neutrino (with a charge of 0), where the type ("flavor") of neutrino (electron, muon or tau) is the same as the type of lepton in the interaction, for example:

    μ− + W+ → νμ
Similarly, a down-type quark (d, with a charge of −1⁄3) can be converted into an up-type quark (u, with a charge of +2⁄3) by emitting a W− boson or by absorbing a W+ boson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a W+ boson, or absorb a W− boson, and thereby be converted into a down-type quark, for example:

    d → u + W−
    d + W+ → u
    c → s + W+
    c + W− → s
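The superposition of up-type quarks mentioned above can be sketched numerically. The magnitudes below are rounded approximations of the relevant CKM elements (|V_ud|, |V_cd|, |V_td|), assumed here only for illustration; unitarity of the CKM matrix means the squared magnitudes in a column sum to roughly one:

```python
# Approximate CKM magnitudes coupling the d quark to each up-type quark
# (rounded values, an assumption for illustration only).
V = {"u": 0.974, "c": 0.221, "t": 0.0087}  # |V_ud|, |V_cd|, |V_td|

# Squared magnitudes give the probability of the d quark acting as each
# up-type quark in a charged-current interaction.
probabilities = {q: v ** 2 for q, v in V.items()}
total = sum(probabilities.values())
print({q: round(p / total, 4) for q, p in probabilities.items()})

# Unitarity check: the column of squared magnitudes sums to ~1.
assert abs(total - 1.0) < 0.01
```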
The W boson is unstable and so rapidly decays, having a very short lifetime. For example:

    W− → e− + ν̄e
    W+ → e+ + νe
Decay of the W boson to other products can happen, with varying probabilities.[16]

In the so-called beta decay of a neutron (see picture, above), a down quark within the neutron emits a virtual W− boson and is thereby converted into an up quark, converting the neutron into a proton. Because of the energy involved in the process (i.e., the mass difference between the down quark and the up quark), the W− boson can only be converted into an electron and an electron antineutrino.[17] At the quark level, the process can be represented as:

    d → u + e− + ν̄e
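A short mass-budget check makes it clear why only the electron channel is open: the energy released in neutron beta decay is the neutron-proton mass difference, which exceeds the electron mass but falls far short of the muon mass. The masses below are standard rounded values in MeV/c²:

```python
# Standard particle masses, rounded, in MeV/c^2.
M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511
M_MUON = 105.658

# Energy available to the virtual W- in neutron beta decay.
available = M_NEUTRON - M_PROTON  # ~1.293 MeV

assert available > M_ELECTRON  # decay to e- + electron antineutrino is allowed
assert available < M_MUON      # decay to mu- + muon antineutrino is forbidden
```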

Neutral-current interaction

In neutral current interactions, a quark or a lepton (e.g., an electron or a muon) emits or absorbs a neutral Z boson. For example:

    e− → e− + Z0

Like the W boson, the Z boson also decays rapidly,[16] for example:

    Z0 → b + b̄

Electroweak theory

The Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction. This theory was developed around 1968 by Sheldon Glashow, Abdus Salam and Steven Weinberg, and they were awarded the 1979 Nobel Prize in Physics for their work.[18] The Higgs mechanism provides an explanation for the presence of three massive gauge bosons (W+, W− and Z0, the three carriers of the weak interaction) and the massless photon (γ, the carrier of the electromagnetic interaction).[19]

According to the electroweak theory, at very high energies the universe has four components of the Higgs field whose interactions are carried by four massless gauge bosons, each similar to the photon, forming a complex scalar Higgs field doublet. However, at low energies this gauge symmetry is spontaneously broken down to the U(1) symmetry of electromagnetism, since one of the Higgs fields acquires a vacuum expectation value. This symmetry breaking would be expected to produce three massless bosons, but instead these are absorbed by three of the other fields, which thereby acquire mass through the Higgs mechanism. These three absorptions produce the W+, W− and Z0 bosons of the weak interaction. The fourth gauge boson is the photon of electromagnetism and remains massless.[19]

This theory has made a number of predictions, including a prediction of the masses of the Z and W bosons before their discovery. On 4 July 2012, the CMS and ATLAS experimental teams at the Large Hadron Collider independently announced that they had confirmed the formal discovery of a previously unknown boson with a mass between 125 and 127 GeV/c², whose behaviour so far was "consistent with" a Higgs boson, while adding a cautious note that further data and analysis were needed before positively identifying the new boson as a Higgs boson of some type. By 14 March 2013, the Higgs boson was tentatively confirmed to exist.[20]

Violation of symmetry


Left- and right-handed particles: p is the particle's momentum and S is its spin. Note the lack of reflective symmetry between the states.

The laws of nature were long thought to remain the same under mirror reflection. The results of an experiment viewed in a mirror were expected to be identical to the results of a mirror-reflected copy of the experimental apparatus. This so-called law of parity conservation was known to be respected by classical gravitation, electromagnetism and the strong interaction; it was assumed to be a universal law.[21] However, in the mid-1950s Chen-Ning Yang and Tsung-Dao Lee suggested that the weak interaction might violate this law. In 1957, Chien-Shiung Wu and collaborators discovered that the weak interaction does violate parity, earning Yang and Lee the 1957 Nobel Prize in Physics.[22]

Although the weak interaction was once described by Fermi's theory, the discovery of parity violation and renormalization theory suggested that a new approach was needed. In 1957, Robert Marshak and George Sudarshan and, somewhat later, Richard Feynman and Murray Gell-Mann proposed a V−A (vector minus axial vector or left-handed) Lagrangian for weak interactions. In this theory, the weak interaction acts only on left-handed particles (and right-handed antiparticles). Since the mirror reflection of a left-handed particle is right-handed, this explains the maximal violation of parity. Interestingly, the V−A theory was developed before the discovery of the Z boson, so it did not include the right-handed fields that enter in the neutral current interaction.

However, this theory allowed a compound symmetry CP to be conserved. CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964 James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics.[23] In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles,[24] effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics.[25] Unlike parity violation, CP violation occurs in only a small number of instances, but it is widely held to be part of the explanation for the imbalance between matter and antimatter in the universe; it thus forms one of Andrei Sakharov's three conditions for baryogenesis.[26]

Fourth dimension in art

From Wikipedia, the free encyclopedia

An illustration from Jouffret's Traité élémentaire de géométrie à quatre dimensions. The book, which influenced Picasso, was given to him by Princet.

New possibilities opened up by the concept of four-dimensional space (and difficulties involved in trying to visualize it) helped inspire many modern artists in the first half of the twentieth century. Early Cubists, Surrealists, Futurists, and abstract artists took ideas from higher-dimensional mathematics and used them to radically advance their work.[1]

Early influence


French mathematician Maurice Princet was known as "le mathématicien du cubisme" ("the mathematician of cubism").[2] An associate of the School of Paris, a group of avant-gardists including Pablo Picasso, Guillaume Apollinaire, Max Jacob, Jean Metzinger, and Marcel Duchamp, Princet is credited with introducing the work of Henri Poincaré and the concept of the "fourth dimension" to the cubists at the Bateau-Lavoir during the first decade of the 20th century.[3]

Princet introduced Picasso to Esprit Jouffret's Traité élémentaire de géométrie à quatre dimensions (Elementary Treatise on the Geometry of Four Dimensions, 1903),[4] a popularization of Poincaré's Science and Hypothesis in which Jouffret described hypercubes and other complex polyhedra in four dimensions and projected them onto the two-dimensional page. Picasso's Portrait of Daniel-Henry Kahnweiler in 1910 was an important work for the artist, who spent many months shaping it.[5] The portrait bears similarities to Jouffret's work and shows a distinct movement away from the Proto-Cubist fauvism displayed in Les Demoiselles d'Avignon, to a more considered analysis of space and form.[6]

Early cubist Max Weber wrote an article entitled "In The Fourth Dimension from a Plastic Point of View", for Alfred Stieglitz's July 1910 issue of Camera Work. In the piece, Weber states, "In plastic art, I believe, there is a fourth dimension which may be described as the consciousness of a great and overwhelming sense of space-magnitude in all directions at one time, and is brought into existence through the three known measurements."[7]

Another influence on the School of Paris was that of Jean Metzinger and Albert Gleizes, both painters and theoreticians. The first major treatise written on the subject of Cubism was their 1912 collaboration Du "Cubisme", which says that:
"If we wished to relate the space of the [Cubist] painters to geometry, we should have to refer it to the non-Euclidian mathematicians; we should have to study, at some length, certain of Riemann's theorems."[8]
The American modernist painter and photographer Morton Livingston Schamberg wrote two letters to Walter Pach in 1910,[9][10] parts of which were published in a review of the 1913 Armory Show for The Philadelphia Inquirer,[11] about the influence of the fourth dimension on avant-garde painting, describing how the artists employed a "harmonic use of forms" and distinguishing between the "representation or rendering of space and the designing in space":[12]
If we still further add to design in the third dimension, a consideration of weight, pressure, resistance, movement, as distinguished from motion, we arrive at what may legitimately be called design in the fourth dimension, or the harmonic use of what may arbitrarily be called volume. It is only at this point that we can appreciate the masterly productions of such a man as Cézanne.[13]
Cézanne's explorations of geometric simplification and optical phenomena inspired the Cubists to experiment with simultaneity, complex multiple views of the same subject, as observed from differing viewpoints at the same time.[14]

Dimensionist manifesto

In 1936 in Paris, Charles Tamkó Sirató published his Manifeste Dimensioniste,[15] which described how
the Dimensionist tendency has led to:
  1. Literature leaving the line and entering the plane.
  2. Painting leaving the plane and entering space.
  3. Sculpture stepping out of closed, immobile forms.
  4. …The artistic conquest of four-dimensional space, which to date has been completely art-free.
The manifesto was signed by many prominent modern artists worldwide. Hans Arp, Francis Picabia, Kandinsky, Robert Delaunay and Marcel Duchamp amongst others added their names in Paris, then a short while later it was endorsed by artists abroad including László Moholy-Nagy, Joan Miró, David Kakabadze, Alexander Calder, and Ben Nicholson.[15]

Crucifixion (Corpus Hypercubus)

Dalí's 1954 painting Crucifixion (Corpus Hypercubus)

In 1953, the surrealist Salvador Dalí proclaimed his intention to paint "an explosive, nuclear and hypercubic" crucifixion scene.[16][17] He said that "this picture will be the great metaphysical work of my summer".[18] Completed the next year, Crucifixion (Corpus Hypercubus) depicts Jesus Christ upon the net of a hypercube, also known as a tesseract. The unfolding of a tesseract into eight cubes is analogous to unfolding the sides of a cube into six squares. The Metropolitan Museum of Art describes the painting as a "new interpretation of an oft-depicted subject ... [showing] Christ's spiritual triumph over corporeal harm".[19]
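The cube/tesseract analogy is a special case of a simple counting fact: an n-dimensional cube has 2n faces, each an (n−1)-dimensional cube. A minimal sketch:

```python
def facet_count(n):
    """Number of (n-1)-dimensional faces of an n-dimensional cube.

    Each of the n coordinate axes contributes two opposite faces,
    giving 2n facets in total.
    """
    return 2 * n

assert facet_count(3) == 6  # a cube unfolds into 6 squares
assert facet_count(4) == 8  # a tesseract unfolds into 8 cubes
```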

Abstract art

Some of Piet Mondrian's (1872–1944) abstractions and his practice of Neoplasticism are said to be rooted in his view of a utopian universe, with perpendiculars visually extending into another dimension.[20]

Other forms of art

The fourth dimension has been the subject of numerous fictional stories.[21]

Operator (computer programming)

From Wikipedia, the free encyclopedia