Tuesday, July 15, 2025

Numerical analysis

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Numerical_analysis
Babylonian clay tablet YBC 7289 (c. 1800–1600 BCE) with annotations. The approximation of the square root of 2 is four sexagesimal figures, which is about six decimal figures. 1 + 24/60 + 51/60² + 10/60³ = 1.41421296...

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.

Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid-20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.

The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.

Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which can be applied to real-world measurements only after translation into digits, it gives approximate solutions within specified error bounds.

Applications

The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:

  • Advanced numerical methods are essential in making numerical weather prediction feasible.
  • Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
  • Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
  • In the financial field, hedge funds (private investment funds) and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.
  • Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.
  • Insurance companies use numerical programs for actuarial analysis.

History

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine, but others consider modern numerical analysis to go back to work by E. T. Whittaker in 1912.

NIST publication

To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.

The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications.

Key concepts

Direct and iterative methods

Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).

In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.

Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.

As an example, consider the problem of solving

3x³ + 4 = 28

for the unknown quantity x.

Direct method

                   3x³ + 4 = 28.
Subtract 4:            3x³ = 24.
Divide by 3:            x³ = 8.
Take cube roots:         x = 2.

For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57.

Iterative method
  a        b        mid       f(mid)
  0        3        1.5       −13.875
  1.5      3        2.25       10.17...
  1.5      2.25     1.875      −4.22...
  1.875    2.25     2.0625      2.32...

From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
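
A minimal sketch of this bisection in Python; the stopping tolerance, iteration cap, and helper name are illustrative choices, not part of the example above:

    def bisect(f, a, b, tol=1e-6, max_iter=100):
        """Bisection: repeatedly halve [a, b] while f changes sign across it."""
        fa = f(a)
        for _ in range(max_iter):
            mid = (a + b) / 2
            fmid = f(mid)
            if abs(b - a) < tol:
                break
            if (fa < 0) == (fmid < 0):   # root lies in the right half
                a, fa = mid, fmid
            else:                        # root lies in the left half
                b = mid
        return (a + b) / 2

    # Solve 3x^3 + 4 = 28, i.e. f(x) = 3x^3 - 24 = 0, on [0, 3].
    print(bisect(lambda x: 3 * x**3 - 24, 0.0, 3.0))  # ~2.0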

Conditioning

Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.

Well-conditioned problem: By contrast, evaluating the same function f(x) = 1/(x − 1) near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x).
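
The contrast can be checked directly; a small sketch in Python (the perturbation size 0.001 is an arbitrary illustrative choice):

    def f(x):
        return 1 / (x - 1)

    h = 0.001  # a small perturbation of the input
    for x in (1.001, 10.0):
        change = abs(f(x + h) - f(x))
        print(f"near x = {x}: input change {h} -> output change {change:.4g}")
    # Near x = 1 the tiny input change swings f by hundreds;
    # near x = 10 it barely moves the output at all.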

Discretization

Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum.

Generation and propagation of errors

The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem.

Round-off

Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).

Truncation and discretization error

Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above, computing the solution of 3x³ + 4 = 28 by bisection, the calculated root after ten iterations is roughly 1.99; the truncation error is therefore roughly 0.01.

Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type a + b + c + d + e is even more inexact.

A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen.

Numerical stability and well-posed problems

An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. Conversely, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error. Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible. So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem.
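
As a standard illustration (not taken from the text above): subtracting nearly equal numbers is an unstable step, while an algebraically equivalent rearrangement avoids the cancellation.

    import math

    x = 1e15
    naive  = math.sqrt(x + 1) - math.sqrt(x)        # subtracts nearly equal numbers
    stable = 1 / (math.sqrt(x + 1) + math.sqrt(x))  # algebraically identical, no cancellation

    print(naive)   # loses several significant digits to round-off
    print(stable)  # ~1.5811e-08, accurate to full precision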

Areas of study

The field of numerical analysis includes many sub-disciplines. Some of the major ones are:

Computing values of functions

Interpolation: Observing that the temperature varies from 20 degrees Celsius at 1:00 to 14 degrees at 3:00, a linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30.

Extrapolation: If the gross domestic product of a country has been growing an average of 5% per year and was 100 billion last year, it might be extrapolated that it will be 105 billion this year.

A line through 20 points

Regression: In linear regression, given n points, a line is computed that passes as close as possible to those n points.

How much for a glass of lemonade?

Optimization: Suppose lemonade is sold at a lemonade stand, at $1.00 per glass, that 197 glasses of lemonade can be sold per day, and that for each increase of $0.01, one less glass of lemonade will be sold per day. If $1.485 could be charged, profit would be maximized, but due to the constraint of having to charge a whole-cent amount, charging $1.48 or $1.49 per glass will both yield the maximum income of $220.52 per day.
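
The whole-cent optimum can be verified by brute force; a quick Python check of the arithmetic above:

    # Income at a whole-cent price: one less glass sold per $0.01 above $1.00.
    def income(cents):                 # price in cents
        glasses = 197 - (cents - 100)  # 197 glasses sold at $1.00
        return cents * glasses / 100   # daily income in dollars

    best = max(range(100, 200), key=income)
    print(best, income(best))          # 148 -> 220.52 (149 ties at $220.52)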

Wind direction in blue, true trajectory in black, Euler method in red

Differential equation: If 100 fans are set up to blow air from one end of the room to the other and then a feather is dropped into the wind, what happens? The feather will follow the air currents, which may be very complex. One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again. This is called the Euler method for solving an ordinary differential equation.
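
A minimal sketch of the Euler method in Python; the rotating wind field is an invented illustration, not the feather scenario's actual currents:

    import numpy as np

    def euler(f, y0, t0, t1, n):
        """Advance y' = f(t, y) with n straight-line (Euler) steps."""
        t, y = t0, np.asarray(y0, dtype=float)
        h = (t1 - t0) / n
        for _ in range(n):
            y = y + h * f(t, y)   # move at the current velocity for one step
            t = t + h
        return y

    # Illustrative wind field: the feather is carried in a slowly rotating current.
    wind = lambda t, pos: np.array([-pos[1], pos[0]])
    print(euler(wind, [1.0, 0.0], 0.0, 1.0, 100))  # approximate position after 1 s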

One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
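
A sketch of the Horner scheme in Python; the coefficient list reuses the cubic from the earlier example:

    def horner(coeffs, x):
        """Evaluate a polynomial given coefficients from highest degree down."""
        result = 0.0
        for c in coeffs:
            result = result * x + c   # one multiply and one add per coefficient
        return result

    # 3x^3 + 0x^2 + 0x + 4 at x = 2:
    print(horner([3, 0, 0, 4], 2))    # 28.0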

Interpolation, extrapolation, and regression

Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?

Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.

Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be estimated. The least-squares method is one way to achieve this.
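
A minimal least-squares sketch with NumPy; the sample data are invented for illustration:

    import numpy as np

    # Noisy samples of an unknown underlying line (illustrative data).
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

    # Fit y ~ a*x + b by minimizing the sum of squared residuals.
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(a, b)   # slope ~2, intercept ~1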

Solving equations and systems of equations

Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2x + 5 = 3 is linear while 2x² + 5 = 3 is not.

Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
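
A small sketch contrasting a direct solve with Jacobi iteration built from the splitting A = D + R; the 2×2 system is an invented, diagonally dominant example so that Jacobi converges:

    import numpy as np

    A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant
    b = np.array([1.0, 2.0])

    # Direct: LU-based solve (exact up to round-off).
    print(np.linalg.solve(A, b))

    # Iterative: Jacobi, from the splitting A = D + R.
    D = np.diag(A)                # diagonal entries as a vector
    R = A - np.diag(D)            # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(50):
        x = (b - R @ x) / D       # x_{k+1} = D^{-1} (b - R x_k)
    print(x)                      # converges to the same solution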

Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
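
A minimal sketch of Newton's method in Python, reusing the cubic from the earlier example; the tolerance and iteration cap are illustrative choices:

    def newton(f, df, x0, tol=1e-10, max_iter=50):
        """Newton's method: follow the tangent line to the next estimate."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Same equation as before: 3x^3 - 24 = 0, with derivative 9x^2.
    print(newton(lambda x: 3 * x**3 - 24, lambda x: 9 * x**2, x0=3.0))  # 2.0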

Solving eigenvalue or singular value problems

Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
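
A sketch of rank-k compression via the SVD; the random matrix stands in for an image, and k = 8 is an arbitrary choice:

    import numpy as np

    # Rank-k approximation via the SVD: keep the k largest singular values.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                 # stand-in for a grayscale image

    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    k = 8
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # By the Eckart-Young theorem this is the best rank-k approximation
    # in both the 2-norm and the Frobenius norm.
    print(np.linalg.norm(img - approx) / np.linalg.norm(img))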

Optimization

Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.

The field of optimization is further split in several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.

The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.

Evaluating integrals

Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
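
A sketch comparing a composite Simpson's rule with plain Monte Carlo on the same integrand; the integrand exp(−x²) and the sample sizes are illustrative choices:

    import numpy as np

    f = lambda x: np.exp(-x**2)        # integrand with no elementary antiderivative

    # Composite Simpson's rule on [0, 1] with n subintervals (n even).
    n = 100
    x = np.linspace(0.0, 1.0, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2] = 4                      # odd interior points
    w[2:-1:2] = 2                      # even interior points
    simpson = (1.0 / n / 3) * (w @ f(x))

    # Monte Carlo estimate of the same integral, for comparison.
    rng = np.random.default_rng(0)
    mc = f(rng.random(100_000)).mean()

    print(simpson, mc)                 # both ~0.74682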

Differential equations

Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.

Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
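
A minimal finite difference sketch for the 1D model problem −u″ = 1 with zero boundary values, showing the reduction to an algebraic system; the problem and grid size are illustrative choices:

    import numpy as np

    # -u'' = 1 on (0, 1), u(0) = u(1) = 0, central differences on n interior points.
    n = 99
    h = 1.0 / (n + 1)
    main = 2.0 * np.ones(n)
    off  = -1.0 * np.ones(n - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    b = np.ones(n)

    u = np.linalg.solve(A, b)          # the algebraic system the text mentions
    x = np.linspace(h, 1 - h, n)
    print(np.max(np.abs(u - 0.5 * x * (1 - x))))  # tiny: exact solution is x(1-x)/2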

Software

Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library.

Over the years the Royal Statistical Society published numerous algorithms in its journal Applied Statistics (the "AS" functions); the ACM did likewise in its Transactions on Mathematical Software (the "TOMS" code). The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines.

There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to MATLAB), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.

Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results.

Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis. Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built-in "solver".

Quark model

From Wikipedia, the free encyclopedia
Figure 1: The pseudoscalar meson nonet. Members of the original meson "octet" are shown in green, the singlet in magenta. Although these mesons are now grouped into a nonet, the Eightfold Way name derives from the patterns of eight for the mesons and baryons in the original classification scheme.

In particle physics, the quark model is a classification scheme for hadrons in terms of their valence quarks—the quarks and antiquarks that give rise to the quantum numbers of the hadrons. The quark model underlies "flavor SU(3)", or the Eightfold Way, the successful classification scheme organizing the large number of lighter hadrons that were being discovered starting in the 1950s and continuing through the 1960s. It received experimental verification beginning in the late 1960s and is a valid and effective classification of them to date. The model was independently proposed by physicists Murray Gell-Mann, who dubbed them "quarks" in a concise paper, and George Zweig, who suggested "aces" in a longer manuscript. André Petermann also touched upon the central ideas from 1963 to 1965, without as much quantitative substantiation. Today, the model has essentially been absorbed as a component of the established quantum field theory of strong and electroweak particle interactions, dubbed the Standard Model.

Hadrons are not really "elementary", and can be regarded as bound states of their "valence quarks" and antiquarks, which give rise to the quantum numbers of the hadrons. These quantum numbers are labels identifying the hadrons, and are of two kinds. One set comes from the Poincaré symmetry – J^PC, where J, P and C stand for the total angular momentum, P-symmetry, and C-symmetry, respectively.

The other set is the flavor quantum numbers such as the isospin, strangeness, charm, and so on. The strong interactions binding the quarks together are insensitive to these quantum numbers, so variation of them leads to systematic mass and coupling relationships among the hadrons in the same flavor multiplet.

All quarks are assigned a baryon number of 1/3. Up, charm and top quarks have an electric charge of +2/3, while the down, strange, and bottom quarks have an electric charge of −1/3. Antiquarks have the opposite quantum numbers. Quarks are spin-1/2 particles, and thus fermions. Each quark or antiquark obeys the Gell-Mann–Nishijima formula individually, so any additive assembly of them will as well.
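
As a concrete check of the last sentence, a small sketch applying the Gell-Mann–Nishijima formula Q = I₃ + (B + S)/2, restricted here to the light quarks so that charm and heavier flavor numbers are dropped:

    # Gell-Mann-Nishijima for the light quarks: Q = I3 + (B + S) / 2,
    # with baryon number B = 1/3 and strangeness S.
    from fractions import Fraction as F

    quarks = {            # name: (I3, B, S)
        "up":      (F(1, 2),  F(1, 3),  0),
        "down":    (F(-1, 2), F(1, 3),  0),
        "strange": (0,        F(1, 3), -1),
    }
    for name, (i3, b, s) in quarks.items():
        print(name, i3 + (b + s) / 2)   # up: 2/3, down: -1/3, strange: -1/3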

Mesons are made of a valence quark–antiquark pair (thus have a baryon number of 0), while baryons are made of three quarks (thus have a baryon number of 1). This article discusses the quark model for the up, down, and strange flavors of quark (which form an approximate flavor SU(3) symmetry). There are generalizations to larger number of flavors.

History

Developing classification schemes for hadrons became a timely question after new experimental techniques uncovered so many of them that it became clear that they could not all be elementary. These discoveries led Wolfgang Pauli to exclaim "Had I foreseen that, I would have gone into botany." and Enrico Fermi to advise his student Leon Lederman: "Young man, if I could remember the names of these particles, I would have been a botanist." These new schemes earned Nobel prizes for experimental particle physicists, including Luis Alvarez, who was at the forefront of many of these developments. Constructing hadrons as bound states of fewer constituents would thus organize the "zoo" at hand. Several early proposals, such as the ones by Enrico Fermi and Chen-Ning Yang (1949), and the Sakata model (1956), ended up satisfactorily covering the mesons, but failed with baryons, and so were unable to explain all the data.

The Gell-Mann–Nishijima formula, developed by Murray Gell-Mann and Kazuhiko Nishijima, led to the Eightfold Way classification, invented by Gell-Mann, with important independent contributions from Yuval Ne'eman, in 1961. The hadrons were organized into SU(3) representation multiplets, octets and decuplets, of roughly the same mass, due to the strong interactions; and smaller mass differences linked to the flavor quantum numbers, invisible to the strong interactions. The Gell-Mann–Okubo mass formula systematized the quantification of these small mass differences among members of a hadronic multiplet, controlled by the explicit symmetry breaking of SU(3).

The spin-3/2 Ω⁻ baryon, a member of the ground-state decuplet, was a crucial prediction of that classification. After it was discovered in an experiment at Brookhaven National Laboratory, Gell-Mann received a Nobel Prize in Physics for his work on the Eightfold Way, in 1969.

Finally, in 1964, Gell-Mann and George Zweig, discerned independently what the Eightfold Way picture encodes: They posited three elementary fermionic constituents—the "up", "down", and "strange" quarks—which are unobserved, and possibly unobservable in a free form. Simple pairwise or triplet combinations of these three constituents and their antiparticles underlie and elegantly encode the Eightfold Way classification, in an economical, tight structure, resulting in further simplicity. Hadronic mass differences were now linked to the different masses of the constituent quarks.

It would take about a decade for the unexpected nature—and physical reality—of these quarks to be appreciated more fully (See Quarks). Counter-intuitively, they cannot ever be observed in isolation (color confinement), but instead always combine with other quarks to form full hadrons, which then furnish ample indirect information on the trapped quarks themselves. Conversely, the quarks serve in the definition of quantum chromodynamics, the fundamental theory fully describing the strong interactions; and the Eightfold Way is now understood to be a consequence of the flavor symmetry structure of the lightest three of them.

Mesons

Figure 2: Pseudoscalar mesons of spin-0 form a nonet
Figure 3: Vector mesons of spin-1 form a nonet

The Eightfold Way classification is named after the following fact: If we take three flavors of quarks, then the quarks lie in the fundamental representation, 3 (called the triplet), of flavor SU(3). The antiquarks lie in the complex conjugate representation, 3̄. The nine states (nonet) made out of a pair can be decomposed into the trivial representation, 1 (called the singlet), and the adjoint representation, 8 (called the octet). The notation for this decomposition is 3 ⊗ 3̄ = 8 ⊕ 1.

Figure 1 shows the application of this decomposition to the mesons. If the flavor symmetry were exact (as in the limit that only the strong interactions operate, but the electroweak interactions are notionally switched off), then all nine mesons would have the same mass. However, the physical content of the full theory includes consideration of the symmetry breaking induced by the quark mass differences, and considerations of mixing between various multiplets (such as the octet and the singlet).

N.B. Nevertheless, the mass splitting between the η and the η′ is larger than the quark model can accommodate, and this "η–η′ puzzle" has its origin in topological peculiarities of the strong interaction vacuum, such as instanton configurations.

Mesons are hadrons with zero baryon number. If the quark–antiquark pair are in an orbital angular momentum L state, and have spin S, then

  • |L − S| ≤ J ≤ L + S, where S = 0 or 1,
  • P = (−1)^(L+1), where the 1 in the exponent arises from the intrinsic parity of the quark–antiquark pair,
  • C = (−1)^(L+S) for mesons which have no flavor; flavored mesons have indefinite value of C,
  • for isospin I = 1 and 0 states, one can define a new multiplicative quantum number called the G-parity such that G = (−1)^(I+L+S).

If P = (−1)^J, then it follows that S = 1, thus PC = +1. States with these quantum numbers are called natural parity states, while all other quantum numbers are called exotic (for example, the state J^PC = 0^−−).
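
A small sketch enumerating the J^PC values allowed by the rules above for low orbital angular momentum; the cutoff L ≤ 2 is an arbitrary choice:

    # Enumerate meson J^PC values allowed by the rules above for small L, S.
    def allowed_jpc(l_max=2):
        states = set()
        for L in range(l_max + 1):
            for S in (0, 1):
                P = (-1) ** (L + 1)
                C = (-1) ** (L + S)
                for J in range(abs(L - S), L + S + 1):
                    states.add((J, P, C))
        return sorted(states)

    for J, P, C in allowed_jpc():
        print(f"{J}^{'+' if P > 0 else '-'}{'+' if C > 0 else '-'}")
    # 0^-- never appears: it is one of the exotic combinations.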

Baryons

Figure 4. The S = 1/2 ground state baryon octet
Figure 5. The S = 3/2 baryon decuplet

Since quarks are fermions, the spin–statistics theorem implies that the wavefunction of a baryon must be antisymmetric under the exchange of any two quarks. This antisymmetric wavefunction is obtained by making it fully antisymmetric in color, discussed below, and symmetric in flavor, spin and space put together. With three flavors, the decomposition in flavor is 3 ⊗ 3 ⊗ 3 = 10 ⊕ 8 ⊕ 8 ⊕ 1. The decuplet is symmetric in flavor, the singlet antisymmetric, and the two octets have mixed symmetry. The space and spin parts of the states are thereby fixed once the orbital angular momentum is given.

It is sometimes useful to think of the basis states of quarks as the six states of three flavors and two spins per flavor. This approximate symmetry is called spin-flavor SU(6). In terms of this, the decomposition is 6 ⊗ 6 ⊗ 6 = 56 ⊕ 70 ⊕ 70 ⊕ 20.

The 56 states with symmetric combination of spin and flavor decompose under flavor SU(3) into 56 = 10^(3/2) ⊕ 8^(1/2), where the superscript denotes the spin, S, of the baryon. Since these states are symmetric in spin and flavor, they should also be symmetric in space, a condition that is easily satisfied by making the orbital angular momentum L = 0. These are the ground-state baryons.

The S = 1/2 octet baryons are the two nucleons (p⁺, n⁰), the three Sigmas (Σ⁺, Σ⁰, Σ⁻), the two Xis (Ξ⁰, Ξ⁻), and the Lambda (Λ⁰). The S = 3/2 decuplet baryons are the four Deltas (Δ⁺⁺, Δ⁺, Δ⁰, Δ⁻), the three Sigmas (Σ∗⁺, Σ∗⁰, Σ∗⁻), the two Xis (Ξ∗⁰, Ξ∗⁻), and the Omega (Ω⁻).

For example, the constituent quark model wavefunction for the proton (spin up) is

    |p↑⟩ = (1/√18) [ 2|u↑u↑d↓⟩ − |u↑u↓d↑⟩ − |u↓u↑d↑⟩
                   + 2|u↑d↓u↑⟩ − |u↑d↑u↓⟩ − |u↓d↑u↑⟩
                   + 2|d↓u↑u↑⟩ − |d↑u↑u↓⟩ − |d↑u↓u↑⟩ ]

Mixing of baryons, mass splittings within and between multiplets, and magnetic moments are some of the other quantities that the model predicts successfully.

The group theory approach described above assumes that the quarks are eight components of a single particle, so the anti-symmetrization applies to all the quarks. A simpler approach is to consider the eight flavored quarks as eight separate, distinguishable, non-identical particles. Then the anti-symmetrization applies only to two identical quarks (like uu, for instance).

Then, the proton wavefunction can be written in a simpler form, with the antisymmetrization applying only to the two identical u quarks.

If quark–quark interactions are limited to two-body interactions, then all the successful quark model predictions, including sum rules for baryon masses and magnetic moments, can be derived.

Discovery of color

Color quantum numbers are the characteristic charges of the strong force, and are completely uninvolved in electroweak interactions. They were discovered as a consequence of the quark model classification, when it was appreciated that the spin S = 3/2 Δ⁺⁺ baryon required three up quarks with parallel spins and vanishing orbital angular momentum. Therefore, it could not have an antisymmetric wavefunction (required by the Pauli exclusion principle). Oscar Greenberg noted this problem in 1964, suggesting that quarks should be para-fermions.

Instead, six months later, Moo-Young Han and Yoichiro Nambu suggested the existence of a hidden degree of freedom, which they labeled as the group SU(3)′ (but later called "color"). This led to three triplets of quarks whose wavefunction was antisymmetric in the color degree of freedom. Flavor and color were intertwined in that model: they did not commute.

The modern concept of color completely commuting with all other charges and providing the strong force charge was articulated in 1973, by William Bardeen, Harald Fritzsch, and Murray Gell-Mann.

States outside the quark model

While the quark model is derivable from the theory of quantum chromodynamics, the structure of hadrons is more complicated than this model allows. The full quantum mechanical wavefunction of any hadron must include virtual quark pairs as well as virtual gluons, and allows for a variety of mixings. There may be hadrons which lie outside the quark model. Among these are the glueballs (which contain only valence gluons), hybrids (which contain valence quarks as well as gluons) and exotic hadrons (such as tetraquarks or pentaquarks).

Lorentz factor

From Wikipedia, the free encyclopedia
Definition of the Lorentz factor γ

The Lorentz factor or Lorentz term (also known as the gamma factor) is a dimensionless quantity expressing how much the measurements of time, length, and other physical properties change for an object while it moves. The expression appears in several equations in special relativity, and it arises in derivations of the Lorentz transformations. The name originates from its earlier appearance in Lorentzian electrodynamics – named after the Dutch physicist Hendrik Lorentz.

It is generally denoted γ (the Greek lowercase letter gamma). Sometimes (especially in discussion of superluminal motion) the factor is written as Γ (Greek uppercase-gamma) rather than γ.

Definition

The Lorentz factor γ is defined as

    γ = 1/√(1 − v²/c²) = 1/√(1 − β²)

where:

  • v is the relative velocity between inertial reference frames,
  • c is the speed of light in vacuum,
  • β is the ratio of v to c.

This is the most frequently used form in practice, though not the only one (see below for alternative forms).

To complement the definition, some authors define the reciprocal α = 1/γ = √(1 − β²); see velocity addition formula.

Occurrence

Following is a list of formulae from Special relativity which use γ as a shorthand:

  • The Lorentz transformation: The simplest case is a boost in the x-direction (more general forms including arbitrary directions and rotations are not listed here), which describes how spacetime coordinates change from one inertial frame using coordinates (x, y, z, t) to another (x′, y′, z′, t′) with relative velocity v:

        t′ = γ(t − vx/c²), x′ = γ(x − vt), y′ = y, z′ = z

Corollaries of the above transformations are the results:

  • Time dilation: The time (Δt′) between two ticks as measured in the frame in which the clock is moving is longer than the time (Δt) between these ticks as measured in the rest frame of the clock: Δt′ = γ Δt.
  • Length contraction: The length (Δx′) of an object as measured in the frame in which it is moving is shorter than its length (Δx) in its own rest frame: Δx′ = Δx/γ.

Applying conservation of momentum and energy leads to these results:

  • Relativistic mass: The mass m of an object in motion is dependent on γ and the rest mass m₀: m = γm₀.
  • Relativistic momentum: The relativistic momentum relation takes the same form as for classical momentum, but using the above relativistic mass: p = mv = γm₀v.
  • Relativistic kinetic energy: The relativistic kinetic energy relation takes the slightly modified form E_k = E − E₀ = (γ − 1)m₀c². As γ is a function of v, the non-relativistic limit gives E_k ≈ ½m₀v², as expected from Newtonian considerations.

Numerical values

Lorentz factor γ as a function of speed, expressed as a fraction of the speed of light. Its initial value is 1 (when v = 0); as velocity approaches the speed of light (v → c), γ increases without bound (γ → ∞).
α (the reciprocal of the Lorentz factor) as a function of velocity: a circular arc.

In the table below, the left-hand column shows speeds as different fractions of the speed of light (i.e. in units of c), the middle column shows the corresponding Lorentz factor, and the final column shows its reciprocal.

  Speed (units of c)   Lorentz factor γ   Reciprocal 1/γ
  0.000                1.000              1.000
  0.500                1.155              0.866
  0.600                1.250              0.800
  0.800                1.667              0.600
  0.900                2.294              0.436
  0.990                7.089              0.141
  0.999                22.366             0.045

Log-log plot of Lorentz factor γ (left) and 1/γ (right) vs fraction of speed of light β (bottom) and 1−β (top)
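
The values in the table above follow directly from the definition; a minimal sketch:

    import math

    def gamma(beta: float) -> float:
        """Lorentz factor for speed given as a fraction beta = v/c."""
        return 1.0 / math.sqrt(1.0 - beta**2)

    for beta in (0.0, 0.5, 0.8, 0.9, 0.99, 0.999):
        g = gamma(beta)
        print(f"beta = {beta:5}: gamma = {g:8.4f}, 1/gamma = {1/g:.4f}")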

Alternative representations

There are other ways to write the factor. Above, velocity v was used, but related variables such as momentum and rapidity may also be convenient.

Momentum

Solving the previous relativistic momentum equation for γ leads to γ = √(1 + (p/(m₀c))²). This form is rarely used, although it does appear in the Maxwell–Jüttner distribution.

Rapidity

Applying the definition of rapidity as the hyperbolic angle φ, defined by tanh φ = β, also leads to γ (by use of hyperbolic identities): γ = cosh φ = 1/√(1 − tanh² φ) = 1/√(1 − β²).

Using the property of Lorentz transformation, it can be shown that rapidity is additive, a useful property that velocity does not have. Thus the rapidity parameter forms a one-parameter group, a foundation for physical models.

Bessel function

The Bunney identity represents the Lorentz factor in terms of an infinite series of Bessel functions

Series expansion (velocity)

The Lorentz factor has the Maclaurin series γ = 1 + (1/2)β² + (3/8)β⁴ + (5/16)β⁶ + ⋯, which is a special case of a binomial series.

The approximation γ ≈ 1 + (1/2)β² may be used to calculate relativistic effects at low speeds. It holds to within 1% error for v < 0.4c (v < 120,000 km/s), and to within 0.1% error for v < 0.22c (v < 66,000 km/s).

The truncated versions of this series also allow physicists to prove that special relativity reduces to Newtonian mechanics at low speeds. For example, in special relativity, the following two equations hold:

    p = γmv,  E = γmc²

For γ ≈ 1 and γ ≈ 1 + (1/2)β², respectively, these reduce to their Newtonian equivalents:

    p = mv,  E = mc² + (1/2)mv²

The Lorentz factor equation can also be inverted to yield β = √(1 − 1/γ²). This has an asymptotic form β ≈ 1 − (1/2)γ⁻² − (1/8)γ⁻⁴ − ⋯.

The first two terms are occasionally used to quickly calculate velocities from large γ values. The approximation holds to within 1% tolerance for γ > 2, and to within 0.1% tolerance for γ > 3.5.
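
A quick check of that tolerance claim; a minimal sketch comparing the exact inversion with the first two terms of the series:

    import math

    def beta_exact(gamma):
        return math.sqrt(1.0 - 1.0 / gamma**2)

    def beta_approx(gamma):
        return 1.0 - 1.0 / (2.0 * gamma**2)      # first two terms of the series

    for g in (2.0, 3.5, 10.0):
        print(g, beta_exact(g), beta_approx(g))  # agreement improves as gamma grows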

Applications in astronomy

The standard model of long-duration gamma-ray bursts (GRBs) holds that these explosions are ultra-relativistic (initial γ greater than approximately 100), which is invoked to explain the so-called "compactness" problem: absent this ultra-relativistic expansion, the ejecta would be optically thick to pair production at typical peak spectral energies of a few hundred keV, whereas the prompt emission is observed to be non-thermal.

Muons, a type of subatomic particle, travel at speeds such that they have a relatively high Lorentz factor and therefore experience extreme time dilation. Since muons have a mean lifetime of just 2.2 μs, muons generated from cosmic-ray collisions 10 km (6.2 mi) high in Earth's atmosphere should be undetectable on the ground due to their decay rate. However, roughly 10% of muons from these collisions are still detectable on the surface, thereby demonstrating the effects of time dilation on their decay rate.
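
A rough version of this calculation in Python; the Lorentz factor γ = 20 is an assumed, illustrative value (the text does not give one), so the survival fractions are order-of-magnitude only:

    import math

    tau = 2.2e-6       # muon mean lifetime in seconds (from the text)
    d = 10_000.0       # production altitude in meters (from the text)
    c = 3.0e8          # speed of light, m/s
    gamma = 20.0       # ASSUMED Lorentz factor of a typical cosmic-ray muon

    t_lab = d / c                            # ~33 microseconds of travel time
    print(math.exp(-t_lab / tau))            # without time dilation: ~3e-7 survive
    print(math.exp(-t_lab / (gamma * tau)))  # with dilation: order 0.5 for this gamma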
