An animation of color confinement, a property of the strong interaction. If energy is supplied to the quarks as shown, the gluon tube connecting the quarks elongates until it reaches a point where it "snaps" and the energy added to the system results in the formation of a quark–antiquark pair. Thus single quarks are never seen in isolation.
An animation of the strong interaction between a proton and a neutron, mediated by pions. The small colored double circles inside are gluons.
Most of the mass of a proton or neutron
is the result of the strong interaction energy; the individual quarks
provide only about 1% of the mass of a proton. At a range of 10⁻¹⁵ m (1 femtometer, slightly more than the radius of a nucleon), the strong force is approximately 100 times as strong as electromagnetism, 10⁶ times as strong as the weak interaction, and 10³⁸ times as strong as gravitation.
In the context of atomic nuclei, the force binds protons and neutrons together to form a nucleus and is called the nuclear force (or residual strong force). Because the force is mediated by massive, short-lived mesons
on this scale, the residual strong interaction exhibits a
distance dependence between nucleons that is quite different
from that of the force binding quarks within hadrons. There are also
differences in the binding energies of the nuclear force with regard to nuclear fusion versus nuclear fission. Nuclear fusion accounts for most energy production in the Sun and other stars. Nuclear fission allows for decay of radioactive elements and isotopes,
although it is often mediated by the weak interaction. Artificially,
the energy associated with the nuclear force is partially released in nuclear power and nuclear weapons, both in uranium- or plutonium-based fission weapons and in fusion weapons like the hydrogen bomb.
History
Before
1971, physicists were uncertain as to how the atomic nucleus was bound
together. It was known that the nucleus was composed of protons and neutrons and that protons possessed positive electric charge,
while neutrons were electrically neutral. By the understanding of
physics at that time, positive charges would repel one another and the
positively charged protons should cause the nucleus to fly apart.
However, this was never observed. New physics was needed to explain this
phenomenon.
A stronger attractive force was postulated to explain how the atomic nucleus was bound despite the protons' mutual electromagnetic repulsion. This hypothesized force was called the strong force, which was believed to be a fundamental force that acted on the protons and neutrons that make up the nucleus.
In 1964, Murray Gell-Mann and, separately, George Zweig proposed that baryons, which include protons and neutrons, and mesons
were composed of elementary particles. Zweig called the elementary
particles "aces" while Gell-Mann called them "quarks"; the theory came
to be called the quark model. The strong attraction between nucleons turned out to be a side effect of a more
fundamental force that bound the quarks together into protons and
neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a color charge, although it has no relation to visible color. Quarks with unlike color charge attract one another as a result of the
strong interaction, and the particle that mediates this was called the gluon.
Behavior of the strong interaction
The
strong interaction is observable at two ranges, and mediated by
different force carriers in each one. On a scale less than about 0.8 fm (roughly the radius of a nucleon), the force is carried by gluons and holds quarks together to form protons, neutrons, and other hadrons. On a larger scale, up to about 3 fm, the force is carried by mesons and binds nucleons (protons and neutrons) together to form the nucleus of an atom. In the former context, it is often known as the color force, and is so strong that if hadrons are struck by high-energy particles, they produce jets
of massive particles instead of emitting their constituents (quarks and
gluons) as freely moving particles. This property of the strong force
is called color confinement.
Two layers of strong interaction
Interaction       Range      Held     Carrier   Result
Strong            < 0.8 fm   quark    gluon     hadron
Residual strong   1–3 fm     hadron   meson     nucleus
Within hadrons
The fundamental couplings of the strong interaction, from left to right: (a) gluon radiation, (b) gluon splitting and (c,d) gluon self-coupling.
The word strong is used since the strong interaction is the "strongest" of the four fundamental forces. At a distance of 10⁻¹⁵ m, its strength is around 100 times that of the electromagnetic force, some 10⁶ times as great as that of the weak force, and about 10³⁸ times that of gravitation.
The force carrier particle of the strong interaction is the gluon, a massless gauge boson. Gluons are thought to interact with quarks and other gluons by way of a type of charge called color charge.
Color charge is analogous to electromagnetic charge, but it comes in
three types (±red, ±green, and ±blue) rather than one, which results in
different rules of behavior. These rules are described by quantum chromodynamics (QCD), the theory of quark–gluon interactions.
Unlike the photon
in electromagnetism, which is neutral, the gluon carries a color
charge. Quarks and gluons are the only fundamental particles that carry
non-vanishing color charge, and hence they participate in strong
interactions only with each other. The strong force is the expression of
the gluon interaction with other quark and gluon particles.
All quarks and gluons in QCD interact with each other through the
strong force. The strength of interaction is parameterized by the
strong coupling constant. This strength is modified by the gauge color charge of the particle, a group-theoretical property.
The strong force acts between quarks. Unlike all other forces
(electromagnetic, weak, and gravitational), the strong force does not
diminish in strength with increasing distance between pairs of quarks.
After a limiting distance (about the size of a hadron) has been reached, it remains at a strength of about 10,000 N, no matter how much farther apart the quarks are.
As the separation between the quarks grows, the energy added to the
pair creates new pairs of matching quarks between the original two;
hence it is impossible to isolate quarks. The explanation is that the
amount of work done against a force of 10,000 N
is enough to create particle–antiparticle pairs within a very short
distance. The energy added to the system by pulling two quarks apart
would create a pair of new quarks that will pair up with the original
ones. In QCD, this phenomenon is called color confinement; as a result, only hadrons, not individual free quarks, can be observed. The failure of all experiments that have searched for free quarks is considered to be evidence of this phenomenon.
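A rough order-of-magnitude check of this argument can be made in a few lines of Python. The sketch below assumes only the roughly 10,000 N figure quoted above and the charged-pion rest energy of about 140 MeV/c² (a value supplied here, not taken from this article), and estimates how much work is done per femtometer of separation and how far two quarks would need to be pulled apart to supply one pion's worth of rest energy.

```python
# Rough, order-of-magnitude check: how much energy is stored by pulling two
# quarks apart against a roughly constant ~10,000 N chromodynamic force, and
# how that compares with the rest energy of a light quark-antiquark pair
# (a charged pion, ~140 MeV/c^2). The input numbers are assumptions.

FORCE_N = 1.0e4          # ~constant force quoted above, in newtons
FEMTOMETER = 1.0e-15     # metres
EV_PER_JOULE = 1.0 / 1.602176634e-19

energy_per_fm_J = FORCE_N * FEMTOMETER
energy_per_fm_MeV = energy_per_fm_J * EV_PER_JOULE / 1e6

PION_REST_ENERGY_MEV = 139.6  # charged pion rest energy

print(f"Work done per femtometer of separation: {energy_per_fm_MeV:.0f} MeV")
print(f"Separation needed to supply one pion rest energy: "
      f"{PION_REST_ENERGY_MEV / energy_per_fm_MeV:.1f} fm")
```

The work comes out to roughly 60 MeV per femtometer, so separations of only a few femtometers already supply enough energy to create the lightest hadrons, consistent with the qualitative picture above.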
The elementary quark and gluon particles involved in a high-energy
collision are not directly observable. The interaction produces
jets of newly created hadrons that are observable. Those hadrons are
created, as a manifestation of mass–energy equivalence, when sufficient
energy is deposited into a quark–quark bond, as when a quark in one
proton is struck by a very fast quark of another impacting proton during
a particle accelerator experiment. However, quark–gluon plasmas have been observed.
A Feynman diagram (shown by the animation in the lead) with the individual quark constituents shown, to illustrate how the fundamental strong interaction gives rise to the nuclear force. Straight lines are quarks, while multi-colored loops are gluons (the carriers of the fundamental force).
While color confinement implies that the strong force acts without
distance-diminishment between pairs of quarks in compact collections of
bound quarks (hadrons), at distances approaching or greater than the
radius of a proton, a residual force (described below) remains. It
manifests as a force between the "colorless" hadrons, and is known as
the nuclear force or residual strong force (and historically as the strong nuclear force).
The nuclear force acts between hadrons, known as mesons and baryons. This "residual strong force", acting indirectly, transmits gluons that form part of the virtual π and ρ mesons, which, in turn, transmit the force between nucleons that holds the nucleus (beyond the hydrogen-1 nucleus) together.
The residual strong force is thus a minor residuum of the strong
force that binds quarks together into protons and neutrons. This same
force is much weaker between neutrons and protons, because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (van der Waals forces) are much weaker than the electromagnetic forces that hold electrons in association with the nucleus, forming the atoms.
Unlike the strong force, the residual strong force diminishes
with distance, and does so rapidly. The decrease is approximately as a
negative exponential power of distance, though there is no simple
expression known for this; see Yukawa potential.
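As a concrete illustration of this rapid falloff, the short Python sketch below evaluates a Yukawa-type potential whose range is set by the pion Compton wavelength (about 1.4 fm, computed from standard constants supplied here); the overall coupling is set to 1 purely as a placeholder. The point is only to show how quickly the potential is suppressed relative to a bare 1/r form at separations of a few femtometers.

```python
# Illustrative sketch of the Yukawa form of the residual nuclear potential,
# V(r) ∝ -exp(-r/λ)/r, compared with a bare 1/r (Coulomb-like) falloff.
# λ = ħ/(m_π c) ≈ 1.4 fm is the pion Compton wavelength; the coupling g is
# set to 1 here purely for illustration (an assumed placeholder, not a fit).

import math

HBAR_C_MEV_FM = 197.327     # ħc in MeV·fm
PION_MASS_MEV = 139.6       # charged pion mass (MeV/c^2)
LAMBDA_FM = HBAR_C_MEV_FM / PION_MASS_MEV   # range parameter ≈ 1.41 fm

def yukawa(r_fm, g=1.0):
    """Yukawa potential (arbitrary units) at separation r in femtometers."""
    return -g**2 * math.exp(-r_fm / LAMBDA_FM) / r_fm

for r in (0.5, 1.0, 2.0, 3.0, 5.0):
    ratio = yukawa(r) / (-1.0 / r)   # suppression relative to a pure 1/r potential
    print(f"r = {r:>3.1f} fm   Yukawa/Coulomb ratio = {ratio:.3f}")
```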
The rapid decrease with distance of the attractive residual force and
the less rapid decrease of the repulsive electromagnetic force acting
between protons within a nucleus, causes the instability of larger
atomic nuclei, such as all those with atomic numbers larger than 82 (the element lead).
Although the nuclear force is weaker than the strong interaction itself, it is still highly energetic: transitions produce gamma rays. The mass of a nucleus is significantly different from the summed masses of the individual nucleons. This mass defect is due to the potential energy associated with the nuclear force. Differences between mass defects power nuclear fusion and nuclear fission.
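A standard worked example of such a mass defect is the helium-4 nucleus. The sketch below uses commonly tabulated nucleon and nuclear masses (supplied here as inputs, not taken from this article) to compute the binding energy implied by the missing mass.

```python
# Worked example of a mass defect: the helium-4 nucleus weighs measurably less
# than its two protons and two neutrons taken separately, and the difference,
# via E = Δm c^2, is the binding energy released when the nucleus forms.
# The masses below are standard values in unified atomic mass units (u).

M_PROTON_U = 1.007276
M_NEUTRON_U = 1.008665
M_HE4_NUCLEUS_U = 4.001506      # helium-4 nucleus (not the neutral atom)
MEV_PER_U = 931.494             # energy equivalent of 1 u

mass_defect_u = 2 * M_PROTON_U + 2 * M_NEUTRON_U - M_HE4_NUCLEUS_U
binding_energy_MeV = mass_defect_u * MEV_PER_U

print(f"Mass defect: {mass_defect_u:.6f} u")
print(f"Binding energy: {binding_energy_MeV:.1f} MeV "
      f"({binding_energy_MeV / 4:.1f} MeV per nucleon)")
```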
Unification
The so-called Grand Unified Theories
(GUT) aim to describe the strong interaction and the electroweak
interaction as aspects of a single force, similarly to how the
electromagnetic and weak interactions were unified by the
Glashow–Weinberg–Salam model into electroweak interaction. The strong interaction has a property called asymptotic freedom,
wherein the strength of the strong force diminishes at higher energies
(or temperatures). The theorized energy where its strength becomes equal
to the electroweak interaction is the grand unification energy.
However, no Grand Unified Theory has yet been successfully formulated
to describe this process, and Grand Unification remains an unsolved problem in physics.
Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. A large share of the published output of theoretical physicists
consists of proposals for various forms of "Beyond the Standard Model"
new physics that would modify the Standard Model in ways
subtle enough to be consistent with existing data, yet address its
imperfections materially enough to predict non-Standard Model outcomes
of new experiments that can be proposed.
The Standard Model of elementary particles + hypothetical Graviton
Phenomena not explained
The
Standard Model is inherently an incomplete theory. There are
fundamental physical phenomena in nature that the Standard Model does
not adequately explain:
Gravity. The standard model does not explain gravity. The approach of simply adding a graviton
to the Standard Model does not recreate what is observed experimentally
without other modifications, as yet undiscovered, to the Standard
Model. Moreover, the Standard Model is widely considered to be
incompatible with the most successful theory of gravity to date, general relativity.
Dark matter. Assuming that general relativity and Lambda CDM
are true, cosmological observations tell us the standard model explains
about 5% of the mass-energy present in the universe. About 26% should
be dark matter (the remaining 69% being dark energy) which would behave
just like other matter, but which only interacts weakly (if at all) with
the Standard Model fields. Yet the Standard Model does not supply any
fundamental particles that are good dark matter candidates.
Dark energy.
As mentioned, the remaining 69% of the universe's energy should consist
of the so-called dark energy, a constant energy density for the vacuum.
Attempts to explain dark energy in terms of vacuum energy of the standard model lead to a mismatch of 120 orders of magnitude.
Neutrino oscillations. According to the Standard Model, neutrinos do not oscillate. However, experiments and astronomical observations have shown that neutrino oscillation
does occur. These are typically explained by postulating that neutrinos
have mass. Neutrinos do not have mass in the Standard Model, and mass
terms for the neutrinos can be added to the Standard Model by hand, but
these lead to new theoretical problems. For example, the mass terms need
to be extraordinarily small and it is not clear if the neutrino masses
would arise in the same way that the masses of other fundamental
particles do in the Standard Model. There are also other extensions of
the Standard Model for neutrino oscillations which do not assume massive
neutrinos, such as Lorentz-violating neutrino oscillations.
Matter–antimatter asymmetry.
The universe is made out of mostly matter. However, the standard model
predicts that matter and antimatter should have been created in (almost)
equal amounts if the initial conditions of the universe did not involve
disproportionate matter relative to antimatter. The Standard Model can
incorporate baryogenesis through sphalerons
in a thermodynamic imbalance during the early universe, though the
amount of net baryons (and leptons) thus created may not be sufficient
to account for the present baryon asymmetry. Thus, there might be no
mechanism in the Standard Model to sufficiently explain this asymmetry.
Experimental results not explained
No experimental result is accepted as definitively contradicting the Standard Model at the 5 σ level, widely considered to be the threshold of a discovery in particle
physics. Because every experiment contains some degree of statistical
and systematic uncertainty, and the theoretical predictions themselves are
also almost never calculated exactly and are subject to uncertainties
in measurements of the fundamental constants of the Standard Model (some
of which are tiny and others of which are substantial), it is to be
expected that some of the hundreds of experimental tests of the Standard
Model will deviate from it to some extent, even if there were no new
physics to be discovered.
At any given moment there are several experimental results
standing that significantly differ from a Standard Model-based
prediction. In the past, many of these discrepancies have been found to
be statistical flukes or experimental errors that vanish as more data
has been collected, or when the same experiments were conducted more
carefully. On the other hand, any physics beyond the Standard Model
would necessarily first appear in experiments as a statistically
significant difference between an experiment and the theoretical
prediction. The task is to determine which is the case.
In each case, physicists seek to determine if a result is merely a
statistical fluke or experimental error on the one hand, or a sign of
new physics on the other. More statistically significant results cannot
be mere statistical flukes but can still result from experimental error
or inaccurate estimates of experimental precision. Frequently,
experiments are tailored to be more sensitive to experimental results
that would distinguish the Standard Model from theoretical alternatives.
Some of the most notable examples include the following:
B meson decay etc. – results from the BaBar experiment may suggest a surplus over Standard Model predictions of a type of particle decay (B → D(*) τ⁻ ν̄τ). In this, an electron and positron collide, resulting in a B meson and an antimatter B meson, which then decays into a D meson and a tau lepton as well as a tau antineutrino. While the level of certainty of the excess (3.4 σ
in statistical jargon) is not enough to declare a break from the
Standard Model, the results are a potential sign of something amiss and
are likely to affect existing theories, including those attempting to
deduce the properties of Higgs bosons. In 2015, LHCb reported observing a 2.1 σ excess in the same ratio of branching fractions. The Belle experiment also reported an excess. In 2017 a meta-analysis of all available data reported a cumulative 5 σ deviation from the Standard Model prediction.
Neutron lifetime puzzle
– Free neutrons are not stable but decay after some time. There are
currently two methods used to measure this lifetime ("bottle" versus
"beam") that give values which do not lie within each other's error margins, the bottle-method lifetime being roughly 10 seconds shorter than the beam-method value. This problem may be solved by taking into account neutron scattering,
which decreases the lifetime of the neutrons involved. This error occurs
in the bottle method and the effect depends on the shape of the bottle,
so it might be a systematic error of the bottle method only.
Theoretical predictions not observed
Observation at particle colliders of all of the fundamental particles predicted by the Standard Model has been confirmed. The Higgs boson is predicted by the Standard Model's explanation of the Higgs mechanism,
which describes how the weak SU(2) gauge symmetry is broken and how
fundamental particles obtain mass; it was the last particle predicted by
the Standard Model to be observed. On July 4, 2012, CERN scientists using the Large Hadron Collider announced the discovery of a particle consistent with the Higgs boson, with a mass of about 126 GeV/c².
A Higgs boson was confirmed to exist on March 14, 2013, although
efforts to confirm that it has all of the properties predicted by the
Standard Model are ongoing.
A few hadrons (i.e. composite particles made of quarks)
whose existence is predicted by the Standard Model, but which can be
produced only at very high energies and at very low rates, have not yet
been definitively observed, and "glueballs" (i.e. composite particles made of gluons)
have also not yet been definitively observed. Some very low frequency
particle decays predicted by the Standard Model have also not yet been
definitively observed because insufficient data is available to make a
statistically significant observation.
Unexplained relations
Koide formula – an unexplained empirical equation remarked upon by Yoshio Koide in 1981, and later by others. It relates the masses of the three charged leptons: (m_e + m_μ + m_τ) / (√m_e + √m_μ + √m_τ)² = 2/3.
The Standard Model does not predict lepton masses (they are free
parameters of the theory). However, the value of the Koide formula being
equal to 2/3 within experimental errors of the measured lepton masses
suggests the existence of a theory which is able to predict lepton
masses.
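The relation is easy to check numerically. The sketch below plugs in widely quoted charged-lepton masses (values supplied here, not taken from this article) and evaluates the Koide ratio.

```python
# Numerical check of the Koide relation Q = (m_e + m_mu + m_tau) /
# (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2, using widely quoted
# charged-lepton masses in MeV/c^2 (supplied here as inputs).

from math import sqrt

m_e, m_mu, m_tau = 0.5109989, 105.65837, 1776.86   # MeV/c^2

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2

print(f"Q = {Q:.6f}   (2/3 = {2/3:.6f})")
```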
The CKM matrix,
if interpreted as a rotation matrix in a 3-dimensional vector space,
"rotates" a vector composed of square roots of down-type quark masses into a vector of square roots of up-type quark masses , up to vector lengths, a result due to Kohzo Nishida.
The sum of squares of the Yukawa couplings of all Standard Model
fermions is approximately 0.984, which is very close to 1. To put it
another way, the sum of squares of fermion masses is very close to half
of squared Higgs vacuum expectation value. This sum is dominated by the
top quark.
The sum of squares of boson masses (that is, W, Z, and Higgs bosons)
is also very close to half of squared Higgs vacuum expectation value,
the ratio is approximately 1.004.
Consequently, the sum of squared masses of all Standard Model
particles is very close to the squared Higgs vacuum expectation value,
the ratio is approximately 0.994.
It is unclear if these empirical relationships represent any
underlying physics; according to Koide, the rule he discovered "may be
an accidental coincidence".
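The mass-sum observations above can likewise be checked with a few lines of Python. The sketch below uses commonly quoted particle masses and a Higgs vacuum expectation value of about 246.22 GeV, all supplied here as approximate inputs rather than values from this article; the exact ratios depend slightly on which mass values are used.

```python
# Rough numerical check of the mass-sum observations above, using commonly
# quoted particle masses in GeV/c^2 and a Higgs vacuum expectation value of
# v ≈ 246.22 GeV. Small fermion masses contribute negligibly.

V_HIGGS = 246.22  # GeV

fermions = {"top": 172.7, "bottom": 4.18, "charm": 1.27, "tau": 1.777,
            "muon": 0.1057, "strange": 0.095, "down": 0.0047, "up": 0.0022,
            "electron": 0.000511}
bosons = {"W": 80.38, "Z": 91.19, "Higgs": 125.25}

half_v_sq = V_HIGGS**2 / 2
fermion_sum = sum(m**2 for m in fermions.values())
boson_sum = sum(m**2 for m in bosons.values())

print(f"sum m_f^2 / (v^2/2) = {fermion_sum / half_v_sq:.3f}")   # ≈ 0.98
print(f"sum m_B^2 / (v^2/2) = {boson_sum / half_v_sq:.3f}")     # ≈ 1.00
print(f"total / v^2          = {(fermion_sum + boson_sum) / V_HIGGS**2:.3f}")
```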
Theoretical problems
Some features of the standard model are added in an ad hoc way. These are not problems per se (i.e. the theory works fine with the ad hoc
insertions), but they imply a lack of understanding. These contrived
features have motivated theorists to look for more fundamental theories
with fewer parameters. Some of the contrivances are:
Hierarchy problem – the standard model introduces particle masses through a process known as spontaneous symmetry breaking caused by the Higgs field. Within the standard model, the mass of the Higgs particle gets some very large quantum corrections due to the presence of virtual particles (mostly virtual top quarks). These corrections are much larger than the actual mass of the Higgs. This means that the bare mass parameter of the Higgs in the standard model must be fine-tuned in such a way that it almost completely cancels the quantum corrections. This level of fine-tuning is deemed unnatural by many theorists. The problem cannot be formulated in the strict context of the Standard
Model, for the Higgs mass cannot be calculated. In a sense, the problem
amounts to the worry that a future theory of fundamental particles, in
which the Higgs boson mass will be calculable, should not have excessive
fine-tunings.
Number of parameters – the standard model depends on
19 numerical parameters. Their values are known from experiment, but the
origin of the values is unknown. Some theorists have tried to find relations between different parameters, for example, between the masses of particles in different generations or calculating particle masses, such as in asymptotic safety scenarios.
Quantum triviality
– suggests that it may not be possible to create a consistent quantum
field theory involving elementary scalar Higgs particles. This is
sometimes called the Landau pole problem. A possible solution is that the renormalized value could go to zero as
the cut-off is removed, meaning that the bare value is completely
screened by quantum fluctuations.
Strong CP problem – it can be argued theoretically that the standard model should contain a term in the strong interaction that breaks CP symmetry, causing slightly different interaction rates for matter vs. antimatter.
Experimentally, however, no such violation has been found, implying
that the coefficient of this term – if any – would be suspiciously
close to zero.
The standard model has three gauge symmetries: the colour SU(3), the weak isospin SU(2), and the weak hypercharge U(1) symmetry, corresponding to the three fundamental forces. Due to renormalization the coupling constants of each of these symmetries vary with the energy at which they are measured. Around 10¹⁶ GeV
these couplings become approximately equal. This has led to speculation
that above this energy the three gauge symmetries of the standard model
are unified in one single gauge symmetry with a simple gauge group, and just one coupling constant. Below this energy the symmetry is spontaneously broken to the standard model symmetries. Popular choices for the unifying group are the special unitary group in five dimensions SU(5) and the special orthogonal group in ten dimensions SO(10).
Theories that unify the standard model symmetries in this way are called Grand Unified Theories
(or GUTs), and the energy scale at which the unified symmetry is broken
is called the GUT scale. Generically, grand unified theories predict
the creation of magnetic monopoles in the early universe, and instability of the proton. Neither of these has been observed, and this absence of observation puts limits on the possible GUTs.
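A minimal sketch of the near-unification described above is the textbook one-loop running of the three gauge couplings. The starting values at the Z mass below are commonly quoted numbers supplied here as inputs; note that in the plain Standard Model the three inverse couplings come close together at very high energies but do not meet at exactly one point, which is part of the motivation for extensions such as supersymmetry.

```python
# One-loop running of the three (GUT-normalised) Standard Model gauge couplings,
#   1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2*pi) * ln(mu / M_Z),
# with the standard one-loop coefficients b = (41/10, -19/6, -7).
# Starting values at M_Z are commonly quoted numbers supplied as assumptions.

import math

M_Z = 91.19                       # GeV
ALPHA_INV_MZ = (59.0, 29.6, 8.45) # 1/alpha_1, 1/alpha_2, 1/alpha_3 at M_Z
B = (41/10, -19/6, -7)            # one-loop beta-function coefficients

def alpha_inv(mu_gev):
    t = math.log(mu_gev / M_Z)
    return tuple(a0 - b * t / (2 * math.pi) for a0, b in zip(ALPHA_INV_MZ, B))

for exponent in (4, 8, 12, 14, 16):
    a1, a2, a3 = alpha_inv(10.0**exponent)
    print(f"mu = 10^{exponent} GeV   1/alpha = {a1:6.1f} {a2:6.1f} {a3:6.1f}")
```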
Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders may not be powerful enough to produce them.
Neutrino oscillations are usually explained using massive neutrinos. In the standard model, neutrinos have exactly zero mass, as the standard model only contains left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model. These measurements only give the mass differences between the different
flavours. The best constraint on the absolute mass of the neutrinos
comes from precision measurements of tritium
decay, providing an upper limit of 2 eV, which makes them at least five
orders of magnitude lighter than the other particles in the standard
model. This necessitates an extension of the standard model, which not only
needs to explain how neutrinos get their mass, but also why the mass is
so small.
One approach to add masses to the neutrinos, the so-called seesaw mechanism, is to add right-handed neutrinos and have these couple to left-handed neutrinos with a Dirac mass term. The right-handed neutrinos have to be sterile,
meaning that they do not participate in any of the standard model
interactions. Because they have no charges, the right-handed neutrinos
can act as their own anti-particles, and have a Majorana mass
term. Like the other Dirac masses in the standard model, the neutrino
Dirac mass is expected to be generated through the Higgs mechanism, and
is therefore unpredictable. The standard model fermion masses differ by
many orders of magnitude; the Dirac neutrino mass has at least the same
uncertainty. On the other hand, the Majorana mass for the right-handed
neutrinos does not arise from the Higgs mechanism, and is therefore
expected to be tied to some energy scale of new physics beyond the
standard model, for example the Planck scale. Therefore, any process involving right-handed neutrinos will be
suppressed at low energies. The correction due to these suppressed
processes effectively gives the left-handed neutrinos a mass that is
inversely proportional to the right-handed Majorana mass, a mechanism
known as the seesaw. The presence of heavy right-handed neutrinos thereby explains both the
small mass of the left-handed neutrinos and the absence of the
right-handed neutrinos in observations. However, due to the uncertainty
in the Dirac neutrino masses, the right-handed neutrino masses can lie
anywhere. For example, they could be as light as a keV and serve as dark matter, they could have a mass in the LHC energy range and lead to observable lepton number violation, or they could lie near the GUT scale, linking the right-handed neutrinos to the possibility of a grand unified theory.
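The seesaw scaling itself is simple enough to illustrate numerically. The sketch below evaluates the estimate m_ν ≈ m_D²/M_R for a few illustrative (assumed, not measured) combinations of Dirac and Majorana masses, showing how an electroweak-scale m_D and a very heavy M_R yield sub-eV neutrino masses.

```python
# Order-of-magnitude sketch of the seesaw relation m_nu ≈ m_D^2 / M_R described
# above: a Dirac mass near the electroweak scale and a very heavy right-handed
# Majorana mass yield a tiny left-handed neutrino mass.
# The sample numbers are illustrative assumptions, not measured values.

def seesaw_mass_ev(m_dirac_gev, m_majorana_gev):
    """Light neutrino mass (in eV) from the type-I seesaw estimate m_D^2 / M_R."""
    return (m_dirac_gev**2 / m_majorana_gev) * 1e9   # GeV -> eV

for m_d, m_r in [(100.0, 1e14), (1.0, 1e11), (100.0, 1e15)]:
    print(f"m_D = {m_d:>6.1f} GeV, M_R = {m_r:.0e} GeV  ->  "
          f"m_nu ≈ {seesaw_mass_ev(m_d, m_r):.3f} eV")
```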
The mass terms mix neutrinos of different generations. This mixing is parameterized by the PMNS matrix, which is the neutrino analogue of the CKM quark mixing matrix.
Unlike the quark mixing, which is almost minimal, the mixing of the
neutrinos appears to be almost maximal. This has led to various
speculations of symmetries between the various generations that could
explain the mixing patterns. The mixing matrix could also contain several complex phases that break
CP invariance, although there has been no experimental probe of these.
These phases could potentially create a surplus of leptons over
anti-leptons in the early universe, a process known as leptogenesis.
This asymmetry could then at a later stage be converted into an excess of
baryons over anti-baryons, and explain the matter-antimatter asymmetry
in the universe.
The light neutrinos are disfavored as an explanation for the
observation of dark matter, based on considerations of large-scale
structure formation in the early universe. Simulations of structure formation
show that they are too hot – that is, their kinetic energy is large
compared to their mass – while formation of structures similar to the
galaxies in our universe requires cold dark matter.
The simulations show that neutrinos can at best explain a few percent
of the missing mass in dark matter. However, the heavy, sterile,
right-handed neutrinos are a possible candidate for a dark matter WIMP.
There are however other explanations for neutrino oscillations
which do not necessarily require neutrinos to have masses, such as Lorentz-violating neutrino oscillations.
Preon models
Several preon
models have been proposed to address the unsolved problem concerning
the fact that there are three generations of quarks and leptons. Preon
models generally postulate some additional new particles which are
further postulated to be able to combine to form the quarks and leptons
of the standard model. One of the earliest preon models was the Rishon model.
To date, no preon model is widely accepted or fully verified.
Theoretical physics continues to strive toward a theory of
everything, a theory that fully explains and links together all known
physical phenomena, and predicts the outcome of any experiment that
could be carried out in principle.
In practical terms the immediate goal in this regard is to develop a theory which would unify the Standard Model with General Relativity in a theory of quantum gravity.
Additional features, such as overcoming conceptual flaws in either
theory or accurate prediction of particle masses, would be desired.
The challenges in putting together such a theory are not just conceptual
– they include the experimental aspects of the very high energies
needed to probe exotic realms.
Theories of quantum gravity such as loop quantum gravity
and others are thought by some to be promising candidates for the
mathematical unification of quantum field theory and general relativity,
requiring less drastic changes to existing theories. However, recent work places stringent limits on the putative effects of
quantum gravity on the speed of light, and disfavours some current
models of quantum gravity.
Extensions, revisions, replacements, and reorganizations of the
Standard Model exist in an attempt to correct these and other issues. String theory is one such reinvention, and many theoretical physicists think that such theories are the next theoretical step toward a true Theory of Everything.
Among the numerous variants of string theory, M-theory,
whose mathematical existence was first proposed at a String Conference
in 1995 by Edward Witten, is believed by many to be a proper "ToE" candidate, notably by physicists Brian Greene and Stephen Hawking. Though a full mathematical description is not yet known, solutions to the theory exist for specific cases. Recent works have also proposed alternate string models, some of which lack the various harder-to-test features of M-theory (e.g. the existence of Calabi–Yau manifolds, many extra dimensions, etc.) including works by well-published physicists such as Lisa Randall.
A body remains at rest, or in motion at a constant speed in a straight line, unless it is acted upon by a force.
At any instant of time, the net force on a body is equal to the body's acceleration multiplied by its mass or, equivalently, the rate at which the body's momentum is changing with time.
If two bodies exert forces on each other, these forces have the same magnitude but opposite directions.
The three laws of motion were first stated by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), originally published in 1687. Newton used them to investigate and explain the motion of many physical
objects and systems. In the time since Newton, new insights,
especially around the concept of energy, built the field of classical mechanics
on his foundations. Limitations to Newton's laws have also been
discovered; new theories are necessary when objects move at very high
speeds (special relativity), are very massive (general relativity), or are very small (quantum mechanics).
Prerequisites
Newton's laws are often stated in terms of point or particle
masses, that is, bodies whose volume is negligible. This is a
reasonable approximation for real bodies when the motion of internal
parts can be neglected, and when the separation between bodies is much
larger than the size of each. For instance, the Earth and the Sun can
both be approximated as pointlike when considering the orbit of the
former around the latter, but the Earth is not pointlike when
considering activities on its surface.
The mathematical description of motion, or kinematics,
is based on the idea of specifying positions using numerical
coordinates. Movement is represented by these numbers changing over
time: a body's trajectory is represented by a function that assigns to
each value of a time variable the values of all the position
coordinates. The simplest case is one-dimensional, that is, when a body
is constrained to move only along a straight line. Its position can then
be given by a single number, indicating where it is relative to some
chosen reference point. For example, a body might be free to slide along
a track that runs left to right, and so its location can be specified
by its distance from a convenient zero point, or origin,
with negative numbers indicating positions to the left and positive
numbers indicating positions to the right. If the body's location as a
function of time is s(t), then its average velocity over the time interval from t0 to t1 is Δs/Δt = (s(t1) − s(t0)) / (t1 − t0).[6] Here, the Greek letter Δ (delta) is used, per tradition, to mean "change in". A positive average velocity means that the position coordinate
increases over the interval in question, a negative average velocity
indicates a net decrease over that interval, and an average velocity of
zero means that the body ends the time interval in the same place as it
began. Calculus gives the means to define an instantaneous
velocity, a measure of a body's speed and direction of movement at a
single moment of time, rather than over an interval. One notation for
the instantaneous velocity is to replace Δ with the symbol d, for example, v = ds/dt. This denotes that the instantaneous velocity is the derivative
of the position with respect to time. It can roughly be thought of as
the ratio between an infinitesimally small change in position and the infinitesimally small time interval over which it occurs. More carefully, the velocity and all other derivatives can be defined using the concept of a limit. A function f has a limit of L at a given input value t0 if the difference between f(t) and L can be made arbitrarily small by choosing an input t sufficiently close to t0. One writes lim(t→t0) f(t) = L. Instantaneous velocity can be defined as the limit of the average velocity as the time interval shrinks to zero: v = lim(Δt→0) Δs/Δt. Acceleration is to velocity as velocity is to position: it is the derivative of the velocity with respect to time. Acceleration can likewise be defined as a limit: a = lim(Δt→0) Δv/Δt. Consequently, the acceleration is the second derivative of position, often written d²s/dt².
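The limit definition can be illustrated numerically: for a concrete (arbitrarily chosen) trajectory, the average velocity over ever shorter intervals approaches the derivative. The Python sketch below uses s(t) = 5t² as the example trajectory.

```python
# Numerical illustration of the limit definition of velocity: the average
# velocity Δs/Δt over ever-shorter intervals approaches the derivative ds/dt.
# Here s(t) = 5 t^2 is an arbitrary example trajectory, whose exact derivative
# at t = 2 is 20.

def s(t):
    return 5.0 * t**2

t0 = 2.0
exact_v = 10.0 * t0   # ds/dt = 10 t for this trajectory

for dt in (1.0, 0.1, 0.01, 0.001):
    avg_v = (s(t0 + dt) - s(t0)) / dt
    print(f"Δt = {dt:>6}   average velocity = {avg_v:.4f}")
print(f"instantaneous velocity (limit) = {exact_v}")
```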
Position, when thought of as a displacement from an origin point, is a vector: a quantity with both magnitude and direction.
Velocity and acceleration are vector quantities as well. The
mathematical tools of vector algebra provide the means to describe
motion in two, three or more dimensions. Vectors are often denoted with
an arrow, as in v⃗, or in bold typeface, such as v.
Often, vectors are represented visually as arrows, with the direction
of the vector being the direction of the arrow, and the magnitude of the
vector indicated by the length of the arrow. Numerically, a vector can
be represented as a list; for example, a body's velocity vector might be
(3 metres per second, 4 metres per second),
indicating that it is moving at 3 metres per second along the
horizontal axis and 4 metres per second along the vertical axis. The
same motion described in a different coordinate system will be represented by different numbers, and vector algebra can be used to translate between these alternatives.
The study of mechanics is complicated by the fact that household words like energy are used with a technical meaning. Moreover, words which are synonymous in everyday speech are not so in physics: force is not the same as power or pressure, for example, and mass has a different meaning than weight. The physics concept of force
makes quantitative the everyday idea of a push or a pull. Forces in
Newtonian mechanics are often due to strings and ropes, friction, muscle
effort, gravity, and so forth. Like displacement, velocity, and
acceleration, force is a vector quantity.
Laws
First law
Artificial satellites move along curved orbits, rather than in straight lines, because of the Earth's gravity.
Translated from Latin, Newton's first law reads,
Every object perseveres in its state of rest, or of uniform
motion in a right line, unless it is compelled to change that state by
forces impressed thereon.
Newton's first law expresses the principle of inertia:
the natural behavior of a body is to move in a straight line at
constant speed. A body's motion preserves the status quo, but external
forces can perturb this.
The modern understanding of Newton's first law is that no inertial observer
is privileged over any other. The concept of an inertial observer makes
quantitative the everyday idea of feeling no effects of motion. For
example, a person standing on the ground watching a train go past is an
inertial observer. If the observer on the ground sees the train moving
smoothly in a straight line at a constant speed, then a passenger
sitting on the train will also be an inertial observer: the train
passenger feels no motion. The principle expressed by Newton's
first law is that there is no way to say which inertial observer is
"really" moving and which is "really" standing still. One observer's
state of rest is another observer's state of uniform motion in a
straight line, and no experiment can deem either point of view to be
correct or incorrect. There is no absolute standard of rest. Newton himself believed that absolute space and time existed, but that the only measures of space or time accessible to experiment are relative.
Second law
The change of motion of an object is proportional to the
force impressed; and is made in the direction of the straight line in
which the force is impressed.
By "motion", Newton meant the quantity now called momentum,
which depends upon the amount of matter contained in a body, the speed
at which that body is moving, and the direction in which it is moving. In modern notation, the momentum of a body is the product of its mass and its velocity: p = mv,
where all three quantities can change over time.
Newton's second law, in modern form, states that the time derivative of the momentum is the force: F = dp/dt.
In common cases the mass does not change with time, and the derivative acts only upon the velocity; the force then equals the product of the mass and the time derivative of the velocity, which is the acceleration: F = m dv/dt = ma.
As the acceleration is the second derivative of position with respect to time, this can also be written F = m d²s/dt².
The equation F = dp/dt is valid only for a fixed set of particles; when applied to systems of variable mass, naively expanding the derivative as F = (dm/dt)v + m(dv/dt) can lead to incorrect results. For example, the momentum of a water-jet system must include the momentum of the ejected water.
A free body diagram for a block on an inclined plane, illustrating the normal force perpendicular to the plane (N), the downward force of gravity (mg), and a force f along the direction of the plane that could be applied, for example, by friction or a string
The forces acting on a body add as vectors, and so the total force on a body depends upon both the magnitudes and the directions of the individual forces.
When the net force on a body is equal to zero, then by Newton's second
law, the body does not accelerate, and it is said to be in mechanical equilibrium. A state of mechanical equilibrium is stable if, when the position of the body is changed slightly, the body remains near that equilibrium. Otherwise, the equilibrium is unstable.
A common visual representation of forces acting in concert is the free body diagram, which schematically portrays a body of interest and the forces applied to it by outside influences. For example, a free body diagram of a block sitting upon an inclined plane can illustrate the combination of gravitational force, "normal" force, friction, and string tension.
Newton's second law is sometimes presented as a definition
of force, i.e., a force is that which exists when an inertial observer
sees a body accelerating. This is sometimes regarded as a potential tautology
(acceleration implies force, force implies acceleration). However,
Newton's second law does not merely define force in terms of
acceleration: forces exist separately from the acceleration they produce
in any particular system. The same force that is identified as
producing the acceleration of one object can be applied to any other
object, and the resulting accelerations (from that same force)
will always be inversely proportional to the mass of the object. What
Newton's second law states is that the entire effect of a force on a
system can be reduced to two pieces of information, the magnitude of the
force and its direction; the law then goes on to specify what that effect
is.
Beyond that, an equation detailing the force might also be specified, like Newton's law of universal gravitation. By inserting such an expression for the force F into Newton's second law, an equation with predictive power can be written. Newton's second law has also been regarded as setting out a research
program for physics, establishing that important goals of the subject
are to identify the forces present in nature and to catalogue the
constituents of matter.
However, forces can often be measured directly with no acceleration being involved, such as through weighing scales.
By postulating a physical object that can be directly measured
independently from acceleration, Newton made an objective physical
statement with the second law alone, the predictions of which can be
verified even if no force law is given.
Third law
To every action, there is always opposed an equal reaction;
or, the mutual actions of two bodies upon each other are always equal,
and directed to contrary parts.
Rockets work by creating unbalanced high pressure that pushes the rocket upwards while exhaust gas exits through an open nozzle.
In other words, if one body exerts a force on a second body, the
second body is also exerting a force on the first body, of equal
magnitude in the opposite direction. Overly brief paraphrases of the
third law, like "action equals reaction"
might have caused confusion among generations of students: the "action"
and "reaction" apply to different bodies. For example, consider a book
at rest on a table. The Earth's gravity pulls down upon the book. The
"reaction" to that "action" is not the support force from the table holding up the book, but the gravitational pull of the book acting on the Earth.
Newton's third law relates to a more fundamental principle, the conservation of momentum. The latter remains true even in cases where Newton's statement does not, for instance when force fields as well as material bodies carry momentum, and when momentum is defined properly, in quantum mechanics as well. In Newtonian mechanics, if two bodies have momenta p1 and p2 respectively, then the total momentum of the pair is p = p1 + p2, and the rate of change of p is dp/dt = dp1/dt + dp2/dt.
By Newton's second law, the first term is the total force upon the
first body, and the second term is the total force upon the second body.
If the two bodies are isolated from outside influences, the only force
upon the first body can be that from the second, and vice versa. By
Newton's third law, these forces have equal magnitude but opposite
direction, so they cancel when added, and p is constant. Alternatively, if p is known to be constant, it follows that the forces have equal magnitude and opposite direction.
Candidates for additional laws
Various sources have proposed elevating other ideas used in classical
mechanics to the status of Newton's laws. For example, in Newtonian
mechanics, the total mass of a body made by bringing together two
smaller bodies is the sum of their individual masses. Frank Wilczek has suggested calling attention to this assumption by designating it "Newton's Zeroth Law". Another candidate for a "zeroth law" is the fact that at any instant, a
body reacts to the forces applied to it at that instant. Likewise, the idea that forces add like vectors (or in other words obey the superposition principle), and the idea that forces change the energy of a body, have both been described as a "fourth law".
Moreover, some texts organize the basic ideas of Newtonian
mechanics into different postulates, other than the three laws as
commonly phrased, with the goal of being more clear about what is
empirically observed and what is true by definition.
Examples
The study of the behavior of massive bodies using Newton's laws is
known as Newtonian mechanics. Some example problems in Newtonian
mechanics are particularly noteworthy for conceptual or historical
reasons.
A bouncing ball photographed at 25 frames per second using a stroboscopic flash. In between bounces, the ball's height as a function of time is close to being a parabola, deviating from a parabolic arc because of air resistance, spin, and deformation into a non-spherical shape upon impact.
If a body falls from rest near the surface of the Earth, then in the
absence of air resistance, it will accelerate at a constant rate. This
is known as free fall.
The speed attained during free fall is proportional to the elapsed
time, and the distance traveled is proportional to the square of the
elapsed time. Importantly, the acceleration is the same for all bodies, independently
of their mass. This follows from combining Newton's second law of
motion with his law of universal gravitation. The latter states that the magnitude of the gravitational force from the Earth upon the body is F = GMm/r²,
where m is the mass of the falling body, M is the mass of the Earth, G is Newton's constant, and
r is the distance from the center of the Earth to the body's location,
which is very nearly the radius of the Earth. Setting this equal to ma, the body's mass m cancels from both sides of the equation, leaving an acceleration that depends upon G, M, and r, and which can be taken to be constant. This particular value of acceleration is typically denoted g: g = GM/r².
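As a numerical check, the sketch below computes g = GM/r² from standard values of Newton's constant, the Earth's mass, and the Earth's mean radius (all supplied here as inputs), giving the familiar value of roughly 9.8 m/s².

```python
# Computing the free-fall acceleration g = G*M/r^2 from standard values of
# Newton's constant, the Earth's mass, and the Earth's mean radius.

G = 6.674e-11        # m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

g = G * M_EARTH / R_EARTH**2
print(f"g ≈ {g:.2f} m/s^2")   # ≈ 9.8 m/s^2 near the surface
```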
If the body is not released from rest but instead launched
upwards and/or horizontally with nonzero velocity, then free fall
becomes projectile motion. When air resistance can be neglected, projectiles follow parabola-shaped
trajectories, because gravity affects the body's vertical motion and
not its horizontal. At the peak of the projectile's trajectory, its
vertical velocity is zero, but its acceleration is downwards, as it is at all times. Setting the wrong vector equal to zero is a common confusion among physics students.
Two objects in uniform circular motion, orbiting around the barycenter (center of mass of both objects)
When a body is in uniform circular motion, the force on it changes
the direction of its motion but not its speed. For a body moving in a
circle of radius r at a constant speed v, its acceleration has a magnitude a = v²/r and is directed toward the center of the circle. The force required to sustain this acceleration, called the centripetal force, is therefore also directed toward the center of the circle and has magnitude mv²/r. Many orbits,
such as that of the Moon around the Earth, can be approximated by
uniform circular motion. In such cases, the centripetal force is
gravity, and by Newton's law of universal gravitation has magnitude GMm/r², where M
is the mass of the larger body being orbited. Therefore, the mass of a
body can be calculated from observations of another body orbiting around
it.
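As an illustration, the sketch below estimates the Earth's mass from the Moon's orbital radius and period (standard values supplied here as inputs), treating the orbit as uniform circular motion so that GM/r² = 4π²r/T²; the result slightly overstates the Earth's mass because the Moon's own mass and the orbit's eccentricity are ignored.

```python
# Estimating the mass of the Earth from the Moon's orbit, as described above:
# for (approximately) uniform circular motion, G*M/r^2 = v^2/r = 4*pi^2*r/T^2,
# so M ≈ 4*pi^2*r^3 / (G*T^2).

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
r = 3.844e8            # mean Earth-Moon distance, m
T = 27.32 * 86400      # sidereal month, s

M = 4 * math.pi**2 * r**3 / (G * T**2)
print(f"Estimated Earth mass ≈ {M:.2e} kg")   # accepted value ≈ 5.97e24 kg
```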
Newton's cannonball is a thought experiment
that interpolates between projectile motion and uniform circular
motion. A cannonball that is lobbed weakly off the edge of a tall cliff
will hit the ground in the same amount of time as if it were dropped
from rest, because the force of gravity only affects the cannonball's
momentum in the downward direction, and its effect is not diminished by
horizontal movement. If the cannonball is launched with a greater
initial horizontal velocity, then it will travel farther before it hits
the ground, but it will still hit the ground in the same amount of time.
However, if the cannonball is launched with an even larger initial
velocity, then the curvature of the Earth becomes significant: the
ground itself will curve away from the falling cannonball. A very fast
cannonball will fall away from the inertial straight-line trajectory at
the same rate that the Earth curves away beneath it; in other words, it
will be in orbit (imagining that it is not slowed by air resistance or
obstacles).
Consider a body of mass m able to move along the x axis, and suppose an equilibrium point exists at the position x = 0. That is, at x = 0,
the net force upon the body is the zero vector, and by Newton's second
law, the body will not accelerate. If the force upon the body is
proportional to the displacement from the equilibrium point, and
directed to the equilibrium point, then the body will perform simple harmonic motion. Writing the force as F = −kx, Newton's second law becomes m d²x/dt² = −kx.
This differential equation has the solution x(t) = A cos(ωt) + B sin(ωt),
where the frequency ω is equal to √(k/m), and the constants A and B can be calculated knowing, for example, the position and velocity the body has at a given time, like t = 0.
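The solution can be checked by direct numerical integration of the equation of motion. The sketch below uses arbitrary illustrative values of m, k, and the initial conditions, and compares a simple step-by-step integration of m d²x/dt² = −kx with the analytic cosine solution.

```python
# Checking the simple-harmonic solution x(t) = A cos(ωt) + B sin(ωt) against a
# direct numerical integration of m x'' = -k x, for illustrative values of
# m, k, and initial conditions (all chosen arbitrarily here).

import math

m, k = 2.0, 8.0
omega = math.sqrt(k / m)          # angular frequency, = 2.0 here
x, v = 1.0, 0.0                   # start at x = 1 with zero velocity: A = 1, B = 0
dt, t = 1e-4, 0.0

while t < 3.0:                    # semi-implicit Euler integration of x'' = -(k/m) x
    v += -(k / m) * x * dt
    x += v * dt
    t += dt

analytic = 1.0 * math.cos(omega * t)
print(f"numerical x(3) ≈ {x:.4f},  analytic x(3) = {analytic:.4f}")
```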
One reason that the harmonic oscillator is a conceptually
important example is that it is good approximation for many systems near
a stable mechanical equilibrium. For example, a pendulum
has a stable equilibrium in the vertical position: if motionless there,
it will remain there, and if pushed slightly, it will swing back and
forth. Neglecting air resistance and friction in the pivot, the force
upon the pendulum is gravity, and Newton's second law becomes d²θ/dt² = −(g/L) sin θ, where L is the length of the pendulum and θ is its angle from the vertical. When the angle θ is small, the sine of θ is nearly equal to θ (see small-angle approximation), and so this expression simplifies to the equation for a simple harmonic oscillator with frequency ω = √(g/L).
A harmonic oscillator can be damped, often by friction or
viscous drag, in which case energy bleeds out of the oscillator and the
amplitude of the oscillations decreases over time. Also, a harmonic
oscillator can be driven by an applied force, which can lead to the phenomenon of resonance.
Rockets, like the Space Shuttle Atlantis,
expel mass during operation. This means that the mass being pushed, the
rocket and its remaining onboard fuel supply, is constantly changing.
Newtonian physics treats matter as being neither created nor
destroyed, though it may be rearranged. It can be the case that an
object of interest gains or loses mass because matter is added to or
removed from it. In such a situation, Newton's laws can be applied to
the individual pieces of matter, keeping track of which pieces belong to
the object of interest over time. For instance, if a rocket of mass M(t), moving at velocity v, ejects matter at a velocity u relative to the rocket, then
M dv/dt = F + u dM/dt,
where F is the net external force (e.g., a planet's gravitational pull).
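Integrating this equation with no external force reproduces the classical Tsiolkovsky rocket result, Δv = u_e ln(M0/M1), where u_e is the exhaust speed. The sketch below does this numerically for illustrative (assumed) numbers and compares against the closed-form answer.

```python
# Numerical sketch of the variable-mass (rocket) equation above with no external
# force: M dv/dt = u dM/dt. Burning from M0 down to M1 reproduces the Tsiolkovsky
# result Δv = u_e * ln(M0/M1), where u_e = -u is the exhaust speed.
# All numbers are illustrative assumptions.

import math

u_e = 3000.0             # exhaust speed relative to the rocket, m/s
M0, M1 = 1000.0, 400.0   # initial and final mass, kg
burn_rate = 5.0          # kg/s
dt = 1e-3

M, v = M0, 0.0
while M > M1:
    dM = -burn_rate * dt
    v += (-u_e) * dM / M     # dv = u * dM / M, with u = -u_e (exhaust goes backwards)
    M += dM

print(f"numerical Δv ≈ {v:.1f} m/s")
print(f"Tsiolkovsky  Δv = {u_e * math.log(M0 / M1):.1f} m/s")
```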
Fan and sail
A boat equipped with a fan and a sail
The fan and sail example is a situation studied in discussions of Newton's third law. In the situation, a fan is attached to a cart or a sailboat
and blows on its sail. From the third law, one would reason that the
force of the air pushing the sail forward would cancel out the reaction
force pushing back on the fan, leaving the entire apparatus stationary.
However, because the system is not entirely enclosed, there are
conditions in which the vessel will move; for example, if the sail is
built in a manner that redirects the majority of the airflow back
towards the fan, the net force will result in the vessel moving forward.
Work and energy
The concept of energy
was developed after Newton's time, but it has become an inseparable
part of what is considered "Newtonian" physics. Energy can broadly be
classified into kinetic, due to a body's motion, and potential, due to a body's position relative to others. Thermal energy,
the energy carried by heat flow, is a type of kinetic energy not
associated with the macroscopic motion of objects but instead with the
movements of the atoms and molecules of which they are made. According
to the work-energy theorem, when a force acts upon a body while that body moves along the line of the force, the force does work upon the body, and the amount of work done is equal to the change in the body's kinetic energy. In many cases of interest, the net work done by a force when a body
moves in a closed loop — starting at a point, moving along some
trajectory, and returning to the initial point — is zero. If this is the
case, then the force can be written in terms of the gradient of a function called a scalar potential: F = −∇U.
This is true for many forces including that of gravity, but not for
friction; indeed, almost any problem in a mechanics textbook that does
not involve friction can be expressed in this way. The fact that the force can be written in this way can be understood from the conservation of energy.
Without friction to dissipate a body's energy into heat, the body's
energy will trade between potential and (non-thermal) kinetic forms
while the total amount remains constant. Any gain of kinetic energy,
which occurs when the net force on the body accelerates it to a higher
speed, must be accompanied by a loss of potential energy. So, the net
force upon the body is determined by the manner in which the potential
energy decreases.
A rigid body is an object whose size is too large to neglect and
which maintains the same shape over time. In Newtonian mechanics, the
motion of a rigid body is often understood by separating it into
movement of the body's center of mass and movement around the center of mass.
The total center of mass of the forks, cork, and toothpick is vertically below the pen's tip.
Significant aspects of the motion of an extended body can be
understood by imagining the mass of that body concentrated to a single
point, known as the center of mass. The location of a body's center of
mass depends upon how that body's material is distributed. For a
collection of pointlike objects with masses m_i at positions r_i, the center of mass is located at R = (Σ m_i r_i) / M, where M
is the total mass of the collection. In the absence of a net external
force, the center of mass moves at a constant speed in a straight line.
This applies, for example, to a collision between two bodies. If the total external force is not zero, then the center of mass changes velocity as though it were a point body of mass M.
This follows from the fact that the internal forces within the
collection, the forces that the objects exert upon each other, occur in
balanced pairs by Newton's third law. In a system of two bodies with one
much more massive than the other, the center of mass will approximately
coincide with the location of the more massive body.
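The center-of-mass formula is straightforward to evaluate directly. The sketch below computes R = (Σ m_i r_i)/M for three point masses with arbitrary example values.

```python
# Computing the center of mass R = (Σ m_i r_i) / M for a small collection of
# point masses, as in the formula above. The masses and positions are arbitrary
# example values.

masses = [2.0, 1.0, 3.0]                          # kg
positions = [(0.0, 0.0), (4.0, 0.0), (1.0, 2.0)]  # metres, (x, y)

M = sum(masses)
R = tuple(sum(m * r[i] for m, r in zip(masses, positions)) / M
          for i in range(2))

print(f"total mass M = {M} kg, center of mass R = {R}")
# Making one mass much larger than the others pulls R toward that mass,
# illustrating the remark about very unequal two-body systems.
```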
Rotational analogues of Newton's laws
When Newton's laws are applied to rotating extended bodies, they lead
to new quantities that are analogous to those invoked in the original
laws. The analogue of mass is the moment of inertia, the counterpart of momentum is angular momentum, and the counterpart of force is torque.
Angular momentum is calculated with respect to a reference point. If the displacement vector from a reference point to a body is r and the body has momentum p, then the body's angular momentum with respect to that point is, using the vector cross product, L = r × p. Taking the time derivative of the angular momentum gives dL/dt = (dr/dt) × p + r × (dp/dt). The first term vanishes because dr/dt = v and p = mv point in the same direction. The remaining term is the torque, τ = r × F. When the torque is zero, the angular momentum is constant, just as when the force is zero, the momentum is constant. The torque can vanish even when the force is non-zero, if the body is located at the reference point (r = 0) or if the force and the displacement vector r are directed along the same line.
The angular momentum of a collection of point masses, and thus of
an extended body, is found by adding the contributions from each of the
points. This provides a means to characterize a body's rotation about
an axis, by adding up the angular momenta of its individual pieces. The
result depends on the chosen axis, the shape of the body, and the rate
of rotation.
Animation of three points or bodies attracting to each other
Newton's law of universal gravitation states that any body attracts
any other body along the straight line connecting them. The size of the
attracting force is proportional to the product of their masses, and
inversely proportional to the square of the distance between them.
Finding the shape of the orbits that an inverse-square force law will
produce is known as the Kepler problem. The Kepler problem can be solved in multiple ways, including by demonstrating that the Laplace–Runge–Lenz vector is constant, or by applying a duality transformation to a 2-dimensional harmonic oscillator. However it is solved, the result is that orbits will be conic sections, that is, ellipses (including circles), parabolas, or hyperbolas. The eccentricity
of the orbit, and thus the type of conic section, is determined by the
energy and the angular momentum of the orbiting body. Planets do not
have sufficient energy to escape the Sun, and so their orbits are
ellipses, to a good approximation; because the planets pull on one
another, actual orbits are not exactly conic sections.
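Because the eccentricity is fixed by the energy and the angular momentum, an orbit can be classified directly from initial conditions. The sketch below (plain Python, with roughly Earth-like but still illustrative numbers) uses the standard textbook expression $e = \sqrt{1 + 2EL^2/(mk^2)}$ for an attractive inverse-square force of strength $k$; that formula is assumed here rather than stated in the text above.

```python
import math

# Illustrative setup: a small body orbiting a much larger one under an inverse-square force.
G = 6.674e-11          # gravitational constant (SI)
M_sun = 1.989e30       # mass of the central body (kg)
m = 5.972e24           # mass of the orbiting body (kg)
k = G * M_sun * m      # strength of the inverse-square attraction

r = 1.496e11           # initial distance (m)
v = 2.98e4             # initial speed (m/s), taken perpendicular to the radius for simplicity

E = 0.5 * m * v**2 - k / r                      # total mechanical energy
L = m * r * v                                   # angular momentum about the central body
e = math.sqrt(1 + 2 * E * L**2 / (m * k**2))    # eccentricity of the conic section

kind = "ellipse" if e < 1 else ("parabola" if e == 1 else "hyperbola")
print(f"eccentricity = {e:.4f} ({kind})")       # a small eccentricity: a nearly circular ellipse
```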
If a third mass is added, the Kepler problem becomes the three-body problem, which in general has no exact solution in closed form.
That is, there is no way to start from the differential equations
implied by Newton's laws and, after a finite sequence of standard
mathematical operations, obtain equations that express the three bodies'
motions over time. Numerical methods can be applied to obtain useful, albeit approximate, results for the three-body problem. The positions and velocities of the bodies can be stored in variables
within a computer's memory; Newton's laws are used to calculate how the
velocities will change over a short interval of time, and knowing the
velocities, the changes of position over that time interval can be
computed. This process is looped
to calculate, approximately, the bodies' trajectories. Generally
speaking, the shorter the time interval, the more accurate the
approximation.
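The looping procedure just described can be written out almost verbatim. The following sketch (NumPy, with made-up masses and initial conditions, and units chosen so that G = 1) updates the velocities from Newton's second law over a short time step and then updates the positions from the velocities, over and over.

```python
import numpy as np

G = 1.0                                   # gravitational constant in convenient units (assumption)
masses = np.array([1.0, 1.0, 1.0])        # illustrative masses
pos = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 0.5]])   # initial positions
vel = np.array([[0.0, 0.3], [0.0, -0.3], [0.3, 0.0]])   # initial velocities

def accelerations(pos):
    """Newton's law of gravitation applied to every pair of bodies."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 0.001                             # the shorter the time step, the better the approximation
for step in range(10_000):
    vel += accelerations(pos) * dt     # how the velocities change over a short interval (second law)
    pos += vel * dt                    # knowing the velocities, update the positions

print(pos)                             # approximate positions of the three bodies after the loop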
Three double pendulums, initialized with almost exactly the same initial conditions, diverge over time.
Newton's laws of motion allow the possibility of chaos. That is, qualitatively speaking, physical systems obeying Newton's laws
can exhibit sensitive dependence upon their initial conditions: a
slight change of the position or velocity of one part of a system can
lead to the whole system behaving in a radically different way within a
short time. Noteworthy examples include the three-body problem, the double pendulum, dynamical billiards, and the Fermi–Pasta–Ulam–Tsingou problem.
Newton's laws can be applied to fluids by considering a fluid as composed of infinitesimal pieces, each exerting forces upon neighboring pieces. The Euler momentum equation is an expression of Newton's second law adapted to fluid dynamics. A fluid is described by a velocity field, i.e., a function
that assigns a velocity vector to each point in space and time. A small
object being carried along by the fluid flow can change velocity for
two reasons: first, because the velocity field at its position is
changing over time, and second, because it moves to a new location where
the velocity field has a different value. Consequently, when Newton's
second law is applied to an infinitesimal portion of fluid, the
acceleration has two terms, a combination known as a total or material derivative. The mass of an infinitesimal portion depends upon the fluid density, and there is a net force upon it if the fluid pressure varies from one side of it to another. Accordingly, $\mathbf{a} = \mathbf{F}/m$ becomes
$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla P + \mathbf{f} ,$$
where $\rho$ is the density, $P$ is the pressure, and $\mathbf{f}$ stands for an external influence like a gravitational pull. Incorporating the effect of viscosity turns the Euler equation into a Navier–Stokes equation:
$$\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} = -\frac{1}{\rho}\nabla P + \nu \nabla^2 \mathbf{v} + \mathbf{f} ,$$
where $\nu$ is the kinematic viscosity.
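Solving the full Navier–Stokes equation is well beyond a short example, but the interplay of the material-derivative (advection) term and the viscosity term can be seen in one dimension. The sketch below drops the pressure and external-force terms entirely, leaving what is usually called the viscous Burgers equation, and steps it forward on a periodic grid with illustrative parameter values; it is a cartoon of the structure of the equation, not a fluid solver.

```python
import numpy as np

# Minimal 1D sketch: du/dt + u du/dx = nu d^2u/dx^2  (pressure and external forces omitted).
# Grid size, time step, and viscosity are illustrative choices, not tuned values.
N, L = 200, 1.0
dx = L / N
x = np.arange(N) * dx
u = np.sin(2 * np.pi * x)      # an initial velocity field
nu = 0.01                      # kinematic viscosity
dt = 0.0005

for _ in range(2000):
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)          # spatial derivative (periodic grid)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # second spatial derivative
    u = u + dt * (-u * dudx + nu * d2udx2)                      # advection plus viscous diffusion

print(u[:5])                   # a few values of the evolved velocity field
```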
Singularities
It is mathematically possible for a collection of point masses,
moving in accord with Newton's laws, to launch some of themselves away
so forcefully that they fly off to infinity in a finite time. This unphysical behavior, known as a "noncollision singularity", depends upon the masses being pointlike and able to approach one another arbitrarily closely, as well as the lack of a relativistic speed limit in Newtonian physics.
Relation to other formulations of classical physics
Classical mechanics can be mathematically formulated in multiple
different ways, other than the "Newtonian" description (which itself, of
course, incorporates contributions from others both before and after
Newton). The physical content of these different formulations is the
same as the Newtonian, but they provide different insights and
facilitate different types of calculations. For example, Lagrangian mechanics
helps make apparent the connection between symmetries and conservation
laws, and it is useful when calculating the motion of constrained
bodies, like a mass restricted to move along a curving track or on the
surface of a sphere. Hamiltonian mechanics is convenient for statistical physics, leads to further insight about symmetry, and can be developed into sophisticated techniques for perturbation theory.
Due to the breadth of these topics, the discussion here will be
confined to concise treatments of how they reformulate Newton's laws of
motion.
Lagrangian
Lagrangian mechanics
differs from the Newtonian formulation by considering entire
trajectories at once rather than predicting a body's motion at a single
instant. It is traditional in Lagrangian mechanics to denote position with $q$ and velocity with $\dot{q}$.
The simplest example is a massive point particle, the Lagrangian for which can be written as the difference between its kinetic and potential energies:
$$L(q, \dot{q}) = T - V ,$$
where the kinetic energy is
$$T = \frac{1}{2} m \dot{q}^2$$
and the potential energy is some function of the position, $V(q)$. The physical path that the particle will take between an initial point $q_i$ and a final point $q_f$
is the path for which the integral of the Lagrangian is "stationary".
That is, the physical path has the property that small perturbations of
it will, to a first approximation, not change the integral of the
Lagrangian. Calculus of variations provides the mathematical tools for finding this path. Applying the calculus of variations to the task of finding the path yields the Euler–Lagrange equation for the particle,
$$\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}} \right) = \frac{\partial L}{\partial q} .$$
Evaluating the partial derivatives of the Lagrangian gives
$$\frac{d}{dt} (m \dot{q}) = -\frac{dV}{dq} ,$$
which is a restatement of Newton's second law. The left-hand side is the
time derivative of the momentum, and the right-hand side is the force,
represented in terms of the potential energy.
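The chain from Lagrangian to equation of motion can be checked symbolically. The sketch below uses the SymPy library with an assumed example potential, V(q) = k q^2 / 2 (a harmonic oscillator), builds L = T - V, and evaluates the Euler–Lagrange equation, recovering m q̈ = -dV/dq.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)    # mass and an assumed spring constant
q = sp.Function('q')                       # generalized coordinate q(t)

# Build the Lagrangian with placeholder symbols for position and velocity,
# then substitute the time-dependent coordinate back in before differentiating in time.
x, v = sp.symbols('x v')
V = sp.Rational(1, 2) * k * x**2           # example potential: harmonic oscillator (assumption)
T = sp.Rational(1, 2) * m * v**2           # kinetic energy
L = T - V

subs = {x: q(t), v: sp.diff(q(t), t)}
dL_dv = sp.diff(L, v).subs(subs)           # partial derivative of L with respect to the velocity
dL_dx = sp.diff(L, x).subs(subs)           # partial derivative of L with respect to the position

# Euler–Lagrange equation: d/dt(dL/dv) = dL/dx
euler_lagrange = sp.Eq(sp.diff(dL_dv, t), dL_dx)
print(euler_lagrange)                      # Eq(m*Derivative(q(t), (t, 2)), -k*q(t)), i.e. m q'' = -dV/dq
```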
Landau and Lifshitz
argue that the Lagrangian formulation makes the conceptual content of
classical mechanics more clear than starting with Newton's laws. Lagrangian mechanics provides a convenient framework in which to prove Noether's theorem, which relates symmetries and conservation laws. The conservation of momentum can be derived by applying Noether's
theorem to a Lagrangian for a multi-particle system, and so, Newton's
third law is a theorem rather than an assumption.
Hamiltonian
In Hamiltonian mechanics,
the dynamics of a system are represented by a function called the
Hamiltonian, which in many cases of interest is equal to the total
energy of the system.
The Hamiltonian is a function of the positions and the momenta of all
the bodies making up the system, and it may also depend explicitly upon
time. The time derivatives of the position and momentum variables are
given by partial derivatives of the Hamiltonian, via Hamilton's equations. The simplest example is a point mass $m$ constrained to move in a straight line, under the effect of a potential. Writing $q$ for the position coordinate and $p$ for the body's momentum, the Hamiltonian is
$$\mathcal{H}(p, q) = \frac{p^2}{2m} + V(q) .$$
In this example, Hamilton's equations are
$$\frac{dq}{dt} = \frac{\partial \mathcal{H}}{\partial p}$$
and
$$\frac{dp}{dt} = -\frac{\partial \mathcal{H}}{\partial q} .$$
Evaluating these partial derivatives, the former equation becomes
$$\frac{dq}{dt} = \frac{p}{m} ,$$
which reproduces the familiar statement that a body's momentum is the product of its mass and velocity. The time derivative of the momentum is
$$\frac{dp}{dt} = -\frac{dV}{dq} ,$$
which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again.
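Hamilton's equations also lend themselves to step-by-step numerical solution. The sketch below (plain Python, again assuming the harmonic-oscillator potential V(q) = k q^2 / 2 with illustrative values of m and k) alternately updates the momentum from -dH/dq and the position from dH/dp; updating in that order is a common choice (semi-implicit Euler) because it keeps the computed energy close to constant.

```python
m, k = 1.0, 4.0            # illustrative mass and spring constant
q, p = 1.0, 0.0            # initial position and momentum
dt = 0.001

def dH_dp(p):              # dq/dt =  dH/dp = p/m
    return p / m

def dH_dq(q):              # dp/dt = -dH/dq = -dV/dq = -k*q
    return k * q

for _ in range(10_000):
    p -= dH_dq(q) * dt     # update the momentum from the force
    q += dH_dp(p) * dt     # update the position from the momentum

energy = p**2 / (2 * m) + 0.5 * k * q**2
print(q, p, energy)        # the energy stays close to its initial value of 2.0
```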
As in the Lagrangian formulation, in Hamiltonian mechanics the
conservation of momentum can be derived using Noether's theorem, making
Newton's third law an idea that is deduced rather than assumed.
Among the proposals to reform the standard introductory-physics
curriculum is one that teaches the concept of energy before that of
force, essentially "introductory Hamiltonian mechanics".
Hamilton–Jacobi
The Hamilton–Jacobi equation provides yet another formulation of classical mechanics, one which makes it mathematically analogous to wave optics. This formulation also uses Hamiltonian functions, but in a different
way than the formulation described above. The paths taken by bodies or
collections of bodies are deduced from a function $S(\mathbf{q}, t)$ of positions $\mathbf{q}$ and time $t$. The Hamiltonian is incorporated into the Hamilton–Jacobi equation, a differential equation for $S$. Bodies move over time in such a way that their trajectories are perpendicular to the surfaces of constant $S$,
analogously to how a light ray propagates in the direction
perpendicular to its wavefront. This is simplest to express for the case
of a single point mass, in which $S$ is a function $S(\mathbf{q}, t)$, and the point mass moves in the direction along which $S$ changes most steeply. In other words, the momentum of the point mass is the gradient of $S$:
$$m\mathbf{v} = \nabla S .$$
The Hamilton–Jacobi equation for a point mass is
$$-\frac{\partial S}{\partial t} = \frac{1}{2m} (\nabla S)^2 + V(\mathbf{q}, t) .$$
The relation to Newton's laws can be seen by considering a point mass moving in a time-independent potential $V(\mathbf{q})$, in which case the Hamilton–Jacobi equation becomes
$$-\frac{\partial S}{\partial t} = \frac{1}{2m} (\nabla S)^2 + V(\mathbf{q}) .$$
Taking the gradient of both sides, this becomes
$$-\nabla \frac{\partial S}{\partial t} = \frac{1}{2m} \nabla (\nabla S)^2 + \nabla V .$$
Interchanging the order of the partial derivatives on the left-hand side, and using the power and chain rules on the first term on the right-hand side,
$$-\frac{\partial}{\partial t} \nabla S = \frac{1}{m} (\nabla S \cdot \nabla) \nabla S + \nabla V .$$
Gathering together the terms that depend upon the gradient of $S$,
$$\left[ \frac{\partial}{\partial t} + \frac{1}{m} (\nabla S \cdot \nabla) \right] \nabla S = -\nabla V .$$
This is another re-expression of Newton's second law. The expression in brackets is a total or material derivative as mentioned above, in which the first term indicates how the function being differentiated
changes over time at a fixed location, and the second term captures how
a moving particle will see different values of that function as it
travels from place to place:
$$\frac{d}{dt} = \frac{\partial}{\partial t} + \mathbf{v} \cdot \nabla .$$
Relation to other physical theories
Thermodynamics and statistical physics
A simulation of a larger, but still microscopic, particle (in yellow) surrounded by a gas of smaller particles, illustrating Brownian motion
In statistical physics, the kinetic theory of gases applies Newton's laws of motion to large numbers (typically on the order of the Avogadro number) of particles. Kinetic theory can explain, for example, the pressure
that a gas exerts upon the container holding it as the aggregate of
many impacts of atoms, each imparting a tiny amount of momentum.
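That picture can be turned into a toy simulation: the sketch below (NumPy, with illustrative rather than physically realistic particle numbers, masses, and speeds) follows non-interacting particles bouncing elastically inside a cubical box, tallies the momentum they deliver to one wall, and compares the resulting pressure with the ideal-gas estimate N k_B T / V.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not physical) parameter choices.
N = 2000                   # number of particles
m = 1.0                    # particle mass
Lbox = 1.0                 # side length of the cubical box
pos = rng.uniform(0, Lbox, size=(N, 3))
vel = rng.normal(0.0, 1.0, size=(N, 3))    # Maxwellian-like velocities

dt, steps = 1e-3, 5000
impulse = 0.0              # momentum delivered to the wall at x = Lbox

for _ in range(steps):
    pos += vel * dt
    hit_far = pos[:, 0] > Lbox
    impulse += 2 * m * np.abs(vel[hit_far, 0]).sum()   # each impact imparts 2*m*|vx|
    for axis in range(3):                              # elastic reflection at all six walls
        low, high = pos[:, axis] < 0, pos[:, axis] > Lbox
        vel[low | high, axis] *= -1
        pos[low, axis] *= -1
        pos[high, axis] = 2 * Lbox - pos[high, axis]

pressure = impulse / (steps * dt * Lbox**2)   # force per unit area on the chosen wall
kT = m * (vel**2).mean()                      # per-component m*<vx^2> equals k_B*T for an ideal gas
ideal = N * kT / Lbox**3
print(pressure, ideal)                        # the two estimates should roughly agree
```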
The Langevin equation
is a special case of Newton's second law, adapted for the case of
describing a small object bombarded stochastically by even smaller ones. It can be written
$$m \frac{d\mathbf{v}}{dt} = -\gamma \mathbf{v} + \boldsymbol{\xi} ,$$
where $\gamma$ is a drag coefficient and $\boldsymbol{\xi}$ is a force that varies randomly from instant to instant, representing
the net effect of collisions with the surrounding particles. This is
used to model Brownian motion.
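A direct way to use the Langevin equation is to integrate it with a random kick at each small time step. The sketch below (NumPy, one-dimensional, with arbitrary values for the mass, drag coefficient, and noise strength) produces a trajectory with the characteristic jitter of Brownian motion; the 1/sqrt(dt) scaling of the random force is the usual discretization of idealized white noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not calibrated to any real particle).
m = 1.0            # mass of the Brownian particle
gamma = 2.0        # drag coefficient
sigma = 1.0        # strength of the random force
dt = 1e-3
steps = 50_000

v, x = 0.0, 0.0
xs = np.empty(steps)

for i in range(steps):
    xi = sigma * rng.normal() / np.sqrt(dt)   # random force; 1/sqrt(dt) keeps its effect finite as dt -> 0
    a = (-gamma * v + xi) / m                 # Newton's second law with drag plus the random force
    v += a * dt
    x += v * dt
    xs[i] = x

print(xs[-1])          # final position of the jittering particle
```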
Electromagnetism
Newton's three laws can be applied to phenomena involving electricity and magnetism, though subtleties and caveats exist.
Coulomb's law for the electric force between two stationary, electrically charged
bodies has much the same mathematical form as Newton's law of universal
gravitation: the force is proportional to the product of the charges,
inversely proportional to the square of the distance between them, and
directed along the straight line between them. The Coulomb force that a
charge $q_1$ exerts upon a charge $q_2$ is equal in magnitude to the force that $q_2$ exerts upon $q_1$, and it points in the exact opposite direction. Coulomb's law is thus consistent with Newton's third law.
Electromagnetism treats forces as produced by fields acting upon charges. The Lorentz force law
provides an expression for the force upon a charged body that can be
plugged into Newton's second law in order to calculate its acceleration.
According to the Lorentz force law, a charged body in an electric field
experiences a force in the direction of that field, a force
proportional to its charge and to the strength of the electric field. In addition, a moving
charged body in a magnetic field experiences a force that is also
proportional to its charge, in a direction perpendicular to both the
field and the body's direction of motion. Using the vector cross product, the Lorentz force law is
$$\mathbf{F} = q\mathbf{E} + q\mathbf{v} \times \mathbf{B} .$$
The Lorentz force law in effect: electrons are bent into a circular trajectory by a magnetic field.
If the electric field vanishes ($\mathbf{E} = 0$),
then the force will be perpendicular to the charge's motion, just as in
the case of uniform circular motion studied above, and the charge will
circle (or more generally move in a helix) around the magnetic field lines at the cyclotron frequency $\omega = qB/m$. Mass spectrometry
works by applying electric and/or magnetic fields to moving charges and
measuring the resulting acceleration, which by the Lorentz force law
yields the mass-to-charge ratio.
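Plugging the Lorentz force into Newton's second law gives an equation of motion that is easy to integrate numerically. The sketch below (NumPy, with illustrative charge, mass, and field values) follows a charged body in a uniform magnetic field with no electric field; the computed path is, to good approximation, a circle traversed once in the cyclotron period 2*pi*m/(q*B).

```python
import numpy as np

q, m = 1.0, 1.0                      # illustrative charge and mass
E = np.array([0.0, 0.0, 0.0])        # no electric field
B = np.array([0.0, 0.0, 2.0])        # uniform magnetic field along z

r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])        # initial velocity perpendicular to B

dt = 1e-4
for _ in range(31_416):              # roughly one cyclotron period, T = 2*pi*m/(q*B) ~ 3.1416
    F = q * E + q * np.cross(v, B)   # Lorentz force
    v = v + F / m * dt               # Newton's second law
    r = r + v * dt

print(r)                             # close to the starting point (1, 0, 0) after about one full circle
```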
Collections of charged bodies do not always obey Newton's third
law: there can be a change of one body's momentum without a compensatory
change in the momentum of another. The discrepancy is accounted for by
momentum carried by the electromagnetic field itself. The momentum per
unit volume of the electromagnetic field is proportional to the Poynting vector.
There is subtle conceptual conflict between electromagnetism and Newton's first law: Maxwell's theory of electromagnetism
predicts that electromagnetic waves will travel through empty space at a
constant, definite speed. Thus, some inertial observers seemingly have a
privileged status over the others, namely those who measure the speed of light
and find it to be the value predicted by the Maxwell equations. In
other words, light provides an absolute standard for speed, yet the
principle of inertia holds that there should be no such standard. This
tension is resolved in the theory of special relativity, which revises
the notions of space and time in such a way that all inertial observers will agree upon the speed of light in vacuum.
Special relativity
In special relativity, the rule that Wilczek called "Newton's Zeroth
Law" breaks down: the mass of a composite object is not merely the sum
of the masses of the individual pieces.
Newton's first law, inertial motion, remains true. A form of Newton's
second law, that force is the rate of change of momentum, also holds, as
does the conservation of momentum. However, the definition of momentum
is modified. Among the consequences of this is the fact that the more
quickly a body moves, the harder it is to accelerate, and so, no matter
how much force is applied, a body cannot be accelerated to the speed of
light. Depending on the problem at hand, momentum in special relativity
can be represented as a three-dimensional vector, $\mathbf{p} = m\gamma\mathbf{v}$, where $m$ is the body's rest mass and $\gamma = 1/\sqrt{1 - v^2/c^2}$ is the Lorentz factor, which depends upon the body's speed. Alternatively, momentum and force can be represented as four-vectors.
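As a small numerical illustration of the modified definition, the snippet below (plain Python, with an arbitrarily chosen speed of 0.9c) compares the Newtonian momentum mv with the relativistic momentum m*gamma*v.

```python
import math

c = 2.998e8              # speed of light (m/s)
m = 1.0                  # rest mass (kg), illustrative
v = 0.9 * c              # an arbitrarily chosen speed

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor
p_newton = m * v
p_rel = gamma * m * v

print(gamma)             # about 2.29
print(p_rel / p_newton)  # the relativistic momentum exceeds m*v by the factor gamma
```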
Newton's third law must be modified in special relativity. The
third law refers to the forces between two bodies at the same moment in
time, and a key feature of special relativity is that simultaneity is
relative. Events that happen at the same time relative to one observer
can happen at different times relative to another. So, in a given
observer's frame of reference, action and reaction may not be exactly
opposite, and the total momentum of interacting bodies may not be
conserved. The conservation of momentum is restored by including the
momentum stored in the field that describes the bodies' interaction.
Newtonian mechanics is a good approximation to special relativity when the speeds involved are small compared to that of light.
General relativity
General relativity
is a theory of gravity that advances beyond that of Newton. In general
relativity, the gravitational force of Newtonian mechanics is reimagined
as curvature of spacetime.
A curved path like an orbit, attributed to a gravitational force in
Newtonian mechanics, is not the result of a force deflecting a body from
an ideal straight-line path, but rather the body's attempt to fall
freely through a background that is itself curved by the presence of
other masses. A remark by John Archibald Wheeler
that has become proverbial among physicists summarizes the theory:
"Spacetime tells matter how to move; matter tells spacetime how to
curve." Wheeler himself thought of this reciprocal relationship as a modern, generalized form of Newton's third law. The relation between matter distribution and spacetime curvature is given by the Einstein field equations, which require tensor calculus to express.
The Newtonian theory of gravity is a good approximation to the
predictions of general relativity when gravitational effects are weak
and objects are moving slowly compared to the speed of light.[85]: 327 [95]
Quantum mechanics
Quantum mechanics
is a theory of physics originally developed in order to understand
microscopic phenomena: behavior at the scale of molecules, atoms or
subatomic particles. Generally and loosely speaking, the smaller a
system is, the more an adequate mathematical model will require
understanding quantum effects. The conceptual underpinning of quantum
physics is very different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement
of a chosen type is performed. Quantum mechanics allows the physicist
to calculate the probability that a chosen measurement will elicit a
particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
The Ehrenfest theorem
provides a connection between quantum expectation values and Newton's
second law, a connection that is necessarily inexact, as quantum physics
is fundamentally different from classical. In quantum physics, position
and momentum are represented by mathematical entities known as Hermitian operators, and the Born rule
is used to calculate the expectation values of a position measurement
or a momentum measurement. These expectation values will generally
change over time; that is, depending on the time at which (for example) a
position measurement is performed, the probabilities for its different
possible outcomes will vary. The Ehrenfest theorem says, roughly
speaking, that the equations describing how these expectation values
change over time have a form reminiscent of Newton's second law.
However, the more pronounced quantum effects are in a given situation,
the more difficult it is to derive meaningful conclusions from this
resemblance.
History
Isaac Newton (1643–1727), in a 1689 portrait by Godfrey Kneller
Newton's first and second laws, in Latin, from the original 1687 Principia Mathematica
The concepts invoked in Newton's laws of motion — mass, velocity,
momentum, force — have predecessors in earlier work, and the content of
Newtonian physics was further developed after Newton's time. Newton
combined knowledge of celestial motions with the study of events on
Earth and showed that one theory of mechanics could encompass both.
As noted by scholar I. Bernard Cohen,
Newton's work was more than a mere synthesis of previous results, as he
selected certain ideas and further transformed them, with each in a new
form that was useful to him, while at the same time proving false
certain basic or fundamental principles of scientists such as Galileo Galilei, Johannes Kepler, René Descartes, and Nicolaus Copernicus. He approached natural philosophy with mathematics in a completely novel
way, in that instead of a preconceived natural philosophy, his style
was to begin with a mathematical construct, and build on from there,
comparing it to the real world to show that his system accurately
accounted for it.
The subject of physics is often traced back to Aristotle,
but the history of the concepts involved is obscured by multiple
factors. An exact correspondence between Aristotelian and modern
concepts is not simple to establish: Aristotle did not clearly
distinguish what we would call speed and force, used the same term for density and viscosity,
and conceived of motion as always through a medium, rather than through
space. In addition, some concepts often termed "Aristotelian" might
better be attributed to his followers and commentators upon him. These commentators found that Aristotelian physics had difficulty explaining projectile motion. Aristotle divided motion into two types: "natural" and "violent". The
"natural" motion of terrestrial solid matter was to fall downwards,
whereas a "violent" motion could push a body sideways. Moreover, in
Aristotelian physics, a "violent" motion requires an immediate cause;
separated from the cause of its "violent" motion, a body would revert to
its "natural" behavior. Yet, a javelin continues moving after it leaves
the thrower's hand. Aristotle concluded that the air around the javelin
must be imparted with the ability to move the javelin forward.
Philoponus and impetus
John Philoponus, a Byzantine Greek
thinker active during the sixth century, found this absurd: the same
medium, air, was somehow responsible both for sustaining motion and for
impeding it. If Aristotle's idea were true, Philoponus said, armies
would launch weapons by blowing upon them with bellows. Philoponus
argued that setting a body into motion imparted a quality, impetus, that would be contained within the body itself. As long as its impetus was sustained, the body would continue to move. In the following centuries, versions of impetus theory were advanced by individuals including Nur ad-Din al-Bitruji, Avicenna, Abu'l-Barakāt al-Baghdādī, John Buridan, and Albert of Saxony. In retrospect, the idea of impetus can be seen as a forerunner of the modern concept of momentum. The intuition that objects move according to some kind of impetus persists in many students of introductory physics.
The French philosopher René Descartes introduced the concept of inertia by way of his "laws of nature" in The World (Traité du monde et de la lumière), written 1629–33. However, The World presented a heliocentric worldview, and in 1633 this view had given rise to a great conflict between Galileo Galilei and the Roman Catholic Inquisition. Descartes knew about this controversy and did not wish to get involved. The World was not published until 1664, ten years after his death.
Galileo Galilei (1564–1642)
The modern concept of inertia is credited to Galileo. Based on his
experiments, Galileo concluded that the "natural" behavior of a moving
body was to keep moving, until something else interfered with it. In Two New Sciences (1638) Galileo wrote:
Imagine
any particle projected along a horizontal plane without friction; then
we know, from what has been more fully explained in the preceding pages,
that this particle will move along this same plane with a motion which
is uniform and perpetual, provided the plane has no limits.
René Descartes (1596–1650)
Galileo recognized that in projectile motion, the Earth's gravity affects vertical but not horizontal motion. However, Galileo's idea of inertia was not exactly the one that would
be codified into Newton's first law. Galileo thought that a body moving a
long distance inertially would follow the curve of the Earth. This idea
was corrected by Isaac Beeckman, Descartes, and Pierre Gassendi, who recognized that inertial motion should be motion in a straight line. Descartes published his laws of nature (laws of motion) with this correction in Principles of Philosophy (Principia Philosophiae) in 1644, with the heliocentric part toned down.
Ball in circular motion has string cut and flies off tangentially.
First Law of Nature: Each thing
when left to itself continues in the same state; so any moving body goes
on moving until something stops it.
Second
Law of Nature: Each moving thing if left to itself moves in a straight
line; so any body moving in a circle always tends to move away from the
centre of the circle.
According to American philosopher Richard J. Blackwell, Dutch scientist Christiaan Huygens had worked out his own, concise version of the law in 1656. It was not published until 1703, eight years after his death, in the opening paragraph of De Motu Corporum ex Percussione.
Hypothesis I: Any body already in
motion will continue to move perpetually with the same speed and in a
straight line unless it is impeded.
According to Huygens, this law was already known by Galileo and Descartes among others.
Force and the second law
Christiaan Huygens (1629–1695)
Christiaan Huygens, in his Horologium Oscillatorium
(1673), put forth the hypothesis that "By the action of gravity,
whatever its sources, it happens that bodies are moved by a motion
composed both of a uniform motion in one direction or another and of a
motion downward due to gravity." Newton's second law generalized this
hypothesis from gravity to all forces.
One important characteristic of Newtonian physics is that forces can act at a distance without requiring physical contact. For example, the Sun and the Earth pull on each other gravitationally,
despite being separated by millions of kilometres. This contrasts with
the idea, championed by Descartes among others, that the Sun's gravity
held planets in orbit by swirling them in a vortex of transparent
matter, aether. Newton considered aetherial explanations of force but ultimately rejected them. The study of magnetism by William Gilbert and others created a precedent for thinking of immaterial forces, and unable to find a quantitatively satisfactory explanation of his law
of gravity in terms of an aetherial model, Newton eventually declared, "I feign no hypotheses": whether or not a model like Descartes's vortices could be found to underlie the Principia's theories of motion and gravity, the first grounds for judging them must be the successful predictions they made. And indeed, since Newton's time every attempt at such a model has failed.
Momentum conservation and the third law
Johannes Kepler (1571–1630)
Johannes Kepler
suggested that gravitational attractions were reciprocal — that, for
example, the Moon pulls on the Earth while the Earth pulls on the Moon —
but he did not argue that such pairs are equal and opposite. In his Principles of Philosophy
(1644), Descartes introduced the idea that during a collision between
bodies, a "quantity of motion" remains unchanged. Descartes defined this
quantity somewhat imprecisely by adding up the products of the speed
and "size" of each body, where "size" for him incorporated both volume
and surface area. Moreover, Descartes thought of the universe as a plenum, that is, filled with matter, so all motion required a body to displace a medium as it moved.
During the 1650s, Huygens studied collisions between hard spheres
and deduced a principle that is now identified as the conservation of
momentum. Christopher Wren would later deduce the same rules for elastic collisions that Huygens had, and John Wallis would apply momentum conservation to study inelastic collisions. Newton cited the work of Huygens, Wren, and Wallis to support the validity of his third law.
Newton arrived at his set of three laws incrementally. In a 1684 manuscript written to Huygens,
he listed four laws: the principle of inertia, the change of motion by
force, a statement about relative motion that would today be called Galilean invariance,
and the rule that interactions between bodies do not change the motion
of their center of mass. In a later manuscript, Newton added a law of
action and reaction, while saying that this law and the law regarding
the center of mass implied one another. Newton probably settled on the
presentation in the Principia, with three primary laws and then other statements reduced to corollaries, during 1685.
After the Principia
Page 157 from Mechanism of the Heavens (1831), Mary Somerville's expanded version of the first two volumes of Laplace's Traité de mécanique céleste. Here, Somerville deduces the inverse-square law of gravity from Kepler's laws of planetary motion.
Newton expressed his second law by saying that the force on a body is
proportional to its change of motion, or momentum. By the time he wrote
the Principia, he had already developed calculus (which he called "the science of fluxions"), but in the Principia he made no explicit use of it, perhaps because he believed geometrical arguments in the tradition of Euclid to be more rigorous. Consequently, the Principia does not express acceleration as the second derivative of position, and so it does not give the second law as $\mathbf{F} = m\mathbf{a}$. This form of the second law was written (for the special case of constant force) at least as early as 1716, by Jakob Hermann; Leonhard Euler would employ it as a basic premise in the 1740s. Euler pioneered the study of rigid bodies and established the basic theory of fluid dynamics. Pierre-Simon Laplace's five-volume Traité de mécanique céleste (1798–1825) forsook geometry and developed mechanics purely through algebraic expressions, while resolving questions that the Principia had left open, like a full theory of the tides.
The concept of energy became a key part of Newtonian mechanics in
the post-Newton period. Huygens' solution of the collision of hard
spheres showed that in that case, not only is momentum conserved, but
kinetic energy is as well (or, rather, a quantity that in retrospect we
can identify as one-half the total kinetic energy). The question of what
is conserved during all other processes, like inelastic collisions and
motion slowed by friction, was not resolved until the 19th century.
Debates on this topic overlapped with philosophical disputes between the
metaphysical views of Newton and Leibniz, and variants of the term
"force" were sometimes used to denote what we would call types of
energy. For example, in 1742, Émilie du Châtelet wrote, "Dead force consists of a simple tendency to motion: such is that of a spring ready to relax; living force
is that which a body has when it is in actual motion." In modern
terminology, "dead force" and "living force" correspond to potential
energy and kinetic energy respectively. Conservation of energy was not established as a universal principle
until it was understood that the energy of mechanical work can be
dissipated into heat. With the concept of energy given a solid grounding, Newton's laws could
then be derived within formulations of classical mechanics that put
energy first, as in the Lagrangian and Hamiltonian formulations
described above.
Modern presentations of Newton's laws use the mathematics of
vectors, a topic that was not developed until the late 19th and early
20th centuries. Vector algebra, pioneered by Josiah Willard Gibbs and Oliver Heaviside, stemmed from and largely supplanted the earlier system of quaternions invented by William Rowan Hamilton.