Tuesday, August 16, 2022

Nonstandard calculus

From Wikipedia, the free encyclopedia

In mathematics, nonstandard calculus is the modern application of infinitesimals, in the sense of nonstandard analysis, to infinitesimal calculus. It provides a rigorous justification for some arguments in calculus that were previously considered merely heuristic.

Non-rigorous calculations with infinitesimals were widely used before Karl Weierstrass sought to replace them with the (ε, δ)-definition of limit starting in the 1870s. (See history of calculus.) For almost one hundred years thereafter, mathematicians such as Richard Courant viewed infinitesimals as being naive and vague or meaningless.

Contrary to such views, Abraham Robinson showed in 1960 that infinitesimals are precise, clear, and meaningful, building upon work by Edwin Hewitt and Jerzy Łoś. According to Howard Keisler, "Robinson solved a three hundred year old problem by giving a precise treatment of infinitesimals. Robinson's achievement will probably rank as one of the major mathematical advances of the twentieth century."

History

The history of nonstandard calculus began with the use of infinitely small quantities, called infinitesimals, in calculus. The use of infinitesimals can be found in the foundations of calculus independently developed by Gottfried Leibniz and Isaac Newton starting in the 1660s. John Wallis refined earlier techniques of indivisibles of Cavalieri and others by exploiting an infinitesimal quantity he denoted 1/∞ in area calculations, preparing the ground for integral calculus. They drew on the work of such mathematicians as Pierre de Fermat, Isaac Barrow and René Descartes.

In early calculus the use of infinitesimal quantities was criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley in his book The Analyst.

Several mathematicians, including Maclaurin and d'Alembert, advocated the use of limits. Augustin Louis Cauchy developed a versatile spectrum of foundational approaches, including a definition of continuity in terms of infinitesimals and a (somewhat imprecise) prototype of an ε, δ argument in working with differentiation. Karl Weierstrass formalized the concept of limit in the context of a (real) number system without infinitesimals. Following the work of Weierstrass, it eventually became common to base calculus on ε, δ arguments instead of infinitesimals.

This approach formalized by Weierstrass came to be known as the standard calculus. After many years of the infinitesimal approach to calculus having fallen into disuse other than as an introductory pedagogical tool, use of infinitesimal quantities was finally given a rigorous foundation by Abraham Robinson in the 1960s. Robinson's approach is called nonstandard analysis to distinguish it from the standard use of limits. This approach used technical machinery from mathematical logic to create a theory of hyperreal numbers that interpret infinitesimals in a manner that allows a Leibniz-like development of the usual rules of calculus. An alternative approach, developed by Edward Nelson, finds infinitesimals on the ordinary real line itself, and involves a modification of the foundational setting by extending ZFC through the introduction of a new unary predicate "standard".

Motivation

To calculate the derivative f′ of the function y = f(x) = x² at x, both approaches agree on the algebraic manipulations:

Δy/Δx = ((x + Δx)² − x²)/Δx = (2x·Δx + (Δx)²)/Δx = 2x + Δx ≈ 2x.

This becomes a computation of the derivative using the hyperreals if Δx is interpreted as an infinitesimal and the symbol "≈" is the relation "is infinitely close to".

In order to make f′ a real-valued function, the final term Δx is dispensed with. In the standard approach using only real numbers, that is done by taking the limit as Δx tends to zero. In the hyperreal approach, the quantity Δx is taken to be an infinitesimal, a nonzero number that is closer to 0 than to any nonzero real. The manipulations displayed above then show that Δy/Δx is infinitely close to 2x, so the derivative of f at x is then 2x.

Discarding the "error term" is accomplished by an application of the standard part function. Dispensing with infinitesimal error terms was historically considered paradoxical by some writers, most notably George Berkeley.

Once the hyperreal number system (an infinitesimal-enriched continuum) is in place, one has successfully incorporated a large part of the technical difficulties at the foundational level. Thus, the epsilon, delta techniques that some believe to be the essence of analysis can be implemented once and for all at the foundational level, and the students needn't be "dressed to perform multiple-quantifier logical stunts on pretense of being taught infinitesimal calculus", to quote a recent study. More specifically, the basic concepts of calculus such as continuity, derivative, and integral can be defined using infinitesimals without reference to epsilon, delta (see next section).

Keisler's textbook

Keisler's Elementary Calculus: An Infinitesimal Approach defines continuity on page 125 in terms of infinitesimals, to the exclusion of epsilon, delta methods. The derivative is defined on page 45 using infinitesimals rather than an epsilon-delta approach. The integral is defined on page 183 in terms of infinitesimals. Epsilon, delta definitions are introduced on page 282.

Definition of derivative

The hyperreals can be constructed in the framework of Zermelo–Fraenkel set theory, the standard axiomatisation of set theory used elsewhere in mathematics. To give an intuitive idea for the hyperreal approach, note that, naively speaking, nonstandard analysis postulates the existence of positive numbers ε which are infinitely small, meaning that ε is smaller than any standard positive real, yet greater than zero. Every real number x is surrounded by an infinitesimal "cloud" of hyperreal numbers infinitely close to it. To define the derivative of f at a standard real number x in this approach, one no longer needs an infinite limiting process as in standard calculus. Instead, one sets

f′(x) = st( (f*(x + ε) − f*(x)) / ε ),

where st is the standard part function, yielding the real number infinitely close to the hyperreal argument of st, and f* is the natural extension of f to the hyperreals.
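
Although the hyperreals are not a computable data type, the bookkeeping in this formula can be imitated with dual numbers, where a formal ε satisfies ε² = 0 and "taking the standard part" amounts to reading off the real and ε components. The following Python sketch is only an analogy under that assumption (the names Dual and derivative are illustrative, not taken from the literature); it reproduces f′(x) = 2x for f(x) = x².

# A minimal sketch: dual numbers a + b*eps with eps*eps = 0 mimic the recipe
# "compute with an infinitesimal, then take the standard part" for polynomials.
# Dual numbers are NOT the hyperreals; this is only an analogy.
class Dual:
    def __init__(self, real, eps=0.0):
        self.real = real        # plays the role of the standard part
        self.eps = eps          # coefficient of the formal infinitesimal

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, because eps*eps = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__

def derivative(f, x):
    # The eps-coefficient of f(x + eps) equals st((f(x + eps) - f(x)) / eps)
    # for polynomial f, i.e. the quantity in the formula above.
    return f(Dual(x, 1.0)).eps

print(derivative(lambda t: t * t, 3.0))   # prints 6.0, i.e. 2x at x = 3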

Continuity

A real function f is continuous at a standard real number x if for every hyperreal x' infinitely close to x, the value f(x') is also infinitely close to f(x). This captures Cauchy's definition of continuity as presented in his 1821 textbook Cours d'Analyse, p. 34.

Here to be precise, f would have to be replaced by its natural hyperreal extension usually denoted f* (see discussion of Transfer principle in main article at nonstandard analysis).

Using the notation ≈ for the relation of being infinitely close as above, the definition can be extended to arbitrary (standard or nonstandard) points as follows:

A function f is microcontinuous at x if whenever x' ≈ x, one has f*(x') ≈ f*(x).

Here the point x' is assumed to be in the domain of (the natural extension of) f.

The above requires fewer quantifiers than the (ε, δ)-definition familiar from standard elementary calculus:

f is continuous at x if for every ε > 0, there exists a δ > 0 such that for every x', whenever |x − x'| < δ, one has |f(x) − f(x')| < ε.

Uniform continuity

A function f on an interval I is uniformly continuous if its natural extension f* in I* has the following property (see Keisler, Foundations of Infinitesimal Calculus ('07), p. 45):

for every pair of hyperreals x and y in I*, if x ≈ y then f*(x) ≈ f*(y).

In terms of microcontinuity defined in the previous section, this can be stated as follows: a real function is uniformly continuous if its natural extension f* is microcontinuous at every point of the domain of f*.

This definition has a reduced quantifier complexity when compared with the standard (ε, δ)-definition. Namely, the epsilon-delta definition of uniform continuity requires four quantifiers, while the infinitesimal definition requires only two quantifiers. It has the same quantifier complexity as the definition of uniform continuity in terms of sequences in standard calculus, which however is not expressible in the first-order language of the real numbers.

The hyperreal definition can be illustrated by the following three examples.

Example 1: a function f is uniformly continuous on the semi-open interval (0,1] if and only if its natural extension f* is microcontinuous (in the sense of the formula above) at every positive infinitesimal, in addition to continuity at the standard points of the interval.

Example 2: a function f is uniformly continuous on the semi-open interval [0,∞) if and only if it is continuous at the standard points of the interval, and in addition, the natural extension f* is microcontinuous at every positive infinite hyperreal point.

Example 3: similarly, the failure of uniform continuity for the squaring function f(x) = x² is due to the absence of microcontinuity at a single infinite hyperreal point; see below.

Concerning quantifier complexity, the following remarks were made by Kevin Houston:

The number of quantifiers in a mathematical statement gives a rough measure of the statement’s complexity. Statements involving three or more quantifiers can be difficult to understand. This is the main reason why it is hard to understand the rigorous definitions of limit, convergence, continuity and differentiability in analysis as they have many quantifiers. In fact, it is the alternation of the ∀ and ∃ that causes the complexity.

Andreas Blass wrote as follows:

Often ... the nonstandard definition of a concept is simpler than the standard definition (both intuitively simpler and simpler in a technical sense, such as quantifiers over lower types or fewer alternations of quantifiers).

Compactness

A set A is compact if and only if its natural extension A* has the following property: every point in A* is infinitely close to a point of A. Thus, the open interval (0,1) is not compact because its natural extension contains positive infinitesimals which are not infinitely close to any positive real number.

Heine–Cantor theorem

The fact that a continuous function on a compact interval I is necessarily uniformly continuous (the Heine–Cantor theorem) admits a succinct hyperreal proof. Let x, y be hyperreals in the natural extension I* of I. Since I is compact, both st(x) and st(y) belong to I. If x and y were infinitely close, then by the triangle inequality, they would have the same standard part

c = st(x) = st(y).

Since the function is assumed continuous at c,

f(x) ≈ f(c) ≈ f(y),

and therefore f(x) and f(y) are infinitely close, proving uniform continuity of f.

Why is the squaring function not uniformly continuous?

Let f(x) = x² be defined on ℝ. Let N be an infinite hyperreal. The hyperreal number N + 1/N is infinitely close to N. Meanwhile, the difference

f(N + 1/N) − f(N) = 2 + 1/N²

is not infinitesimal. Therefore, f* fails to be microcontinuous at the hyperreal point N. Thus, the squaring function is not uniformly continuous, according to the definition in uniform continuity above.

A similar proof may be given in the standard setting (Fitzpatrick 2006, Example 3.15).
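
Floating-point numbers contain no infinite hyperreal, but the computation above can be shadowed with ever larger finite N: the inputs N and N + 1/N draw together while the outputs stay about 2 apart. A rough numerical sketch of that finite shadow:

# Finite stand-in for the argument above: N and N + 1/N become ever closer,
# yet f(N + 1/N) - f(N) = 2 + 1/N**2 never becomes small.
f = lambda x: x * x

for N in (10.0, 1e3, 1e6):
    input_gap = (N + 1.0 / N) - N            # equals 1/N, shrinks toward 0
    output_gap = f(N + 1.0 / N) - f(N)       # equals 2 + 1/N**2, stays near 2
    print(N, input_gap, output_gap)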

Example: Dirichlet function

Consider the Dirichlet function, equal to 1 at every rational number and to 0 at every irrational number.

It is well known that, under the standard definition of continuity, the function is discontinuous at every point. Let us check this in terms of the hyperreal definition of continuity above; for instance, let us show that the Dirichlet function is not continuous at π. Consider the continued fraction approximation a_n of π. Now let the index n be an infinite hypernatural number. By the transfer principle, the natural extension of the Dirichlet function takes the value 1 at a_n. Note that the hyperrational point a_n is infinitely close to π. Thus the natural extension of the Dirichlet function takes different values (0 and 1) at these two infinitely close points, and therefore the Dirichlet function is not continuous at π.
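
The same contrast can be watched at finite level with the continued-fraction convergents of π: each convergent is rational, so the Dirichlet function equals 1 there, while π itself is irrational and receives the value 0, even though the convergents crowd arbitrarily close to π. A small Python sketch, with the first partial quotients [3; 7, 15, 1, 292] of π hard-coded and an exact Fraction standing in for "rational point":

from fractions import Fraction
from math import pi

def dirichlet(x):
    # Stand-in for the Dirichlet function: exact Fraction inputs count as
    # rational (value 1), everything else as irrational (value 0).
    return 1 if isinstance(x, Fraction) else 0

# Convergents a_n of pi built from its first partial quotients [3; 7, 15, 1, 292].
partial_quotients = [3, 7, 15, 1, 292]
convergents = []
for k in range(1, len(partial_quotients) + 1):
    value = Fraction(partial_quotients[k - 1])
    for a in reversed(partial_quotients[:k - 1]):
        value = a + 1 / value
    convergents.append(value)

for c in convergents:                            # 3, 22/7, 333/106, 355/113, ...
    print(c, abs(float(c) - pi), dirichlet(c))   # distance to pi shrinks, value stays 1
print(dirichlet(pi))                             # 0: the float stand-in for pi is not a Fraction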

Limit

While the thrust of Robinson's approach is that one can dispense with the approach using multiple quantifiers, the notion of limit can be easily recaptured in terms of the standard part function st, namely

lim_{x→a} f(x) = L

if and only if whenever the difference x − a is infinitesimal, the difference f(x) − L is infinitesimal as well, or in formulas:

if st(x) = a  then st(f(x)) = L,

cf. (ε, δ)-definition of limit.

Limit of sequence

Given a sequence of real numbers (x_n), L is the limit of the sequence,

lim_{n→∞} x_n = L,

if and only if for every infinite hypernatural n, st(x_n) = L (here the extension principle is used to define x_n for every hyperinteger n).

This definition has no quantifier alternations. The standard (ε, δ)-style definition, on the other hand, does have quantifier alternations:

for every ε > 0 there exists an N ∈ ℕ such that for every n > N, |x_n − L| < ε.
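
No program contains an infinite hypernatural, but evaluating a sequence at ever larger finite indices is the finite shadow of "st(x_n) = L at every infinite n". A sketch with the illustrative choice x_n = (1 + 1/n)^n, whose limit is e:

import math

x = lambda n: (1.0 + 1.0 / n) ** n        # x_n = (1 + 1/n)**n, with limit e

for n in (10, 10**4, 10**8):
    print(n, x(n), abs(x(n) - math.e))    # the distance to e keeps shrinking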

Extreme value theorem

To show that a real continuous function f on [0,1] has a maximum, let N be an infinite hyperinteger. The interval [0, 1] has a natural hyperreal extension. The function f is also naturally extended to hyperreals between 0 and 1. Consider the partition of the hyperreal interval [0,1] into N subintervals of equal infinitesimal length 1/N, with partition points x_i = i/N as i "runs" from 0 to N. In the standard setting (when N is finite), a point with the maximal value of f can always be chosen among the N + 1 points x_i, by induction. Hence, by the transfer principle, there is a hyperinteger i_0 such that 0 ≤ i_0 ≤ N and f(x_{i_0}) ≥ f(x_i) for all i = 0, …, N (an alternative explanation is that every hyperfinite set admits a maximum). Consider the real point

c = st(x_{i_0}),

where st is the standard part function. An arbitrary real point x lies in a suitable sub-interval of the partition, namely x ∈ [x_i, x_{i+1}], so that st(x_i) = x. Applying st to the inequality f(x_{i_0}) ≥ f(x_i) yields st(f(x_{i_0})) ≥ st(f(x_i)). By continuity of f at c and at x,

st(f(x_{i_0})) = f(st(x_{i_0})) = f(c)   and   st(f(x_i)) = f(st(x_i)) = f(x).

Hence f(c) ≥ f(x), for all x, proving c to be a maximum of the real function f. See Keisler (1986, p. 164).
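
The hyperfinite partition cannot be realized on a computer, but the same pick-the-largest-grid-value argument with a large finite N already locates an approximate maximizer, and refining N pushes it toward a true one. A finite-N sketch (the function and interval are illustrative choices only):

import math

# Finite shadow of the hyperfinite argument: partition [0, 1] into N pieces
# and take the grid point x_i = i/N where f is largest.
f = lambda x: x * (1.0 - x) * math.sin(5.0 * x)

for N in (10, 1000, 100000):
    grid = [i / N for i in range(N + 1)]
    c_N = max(grid, key=f)                # grid point maximizing f
    print(N, c_N, f(c_N))                 # c_N settles down as N grows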

Intermediate value theorem

As another illustration of the power of Robinson's approach, a short proof of the intermediate value theorem (Bolzano's theorem) using infinitesimals runs as follows.

Let f be a continuous function on [a,b] such that f(a)<0 while f(b)>0. Then there exists a point c in [a,b] such that f(c)=0.

The proof proceeds as follows. Let N be an infinite hyperinteger. Consider a partition of [a,b] into N intervals of equal length, with partition points x_i as i runs from 0 to N. Consider the collection I of indices such that f(x_i) > 0. Let i_0 be the least element in I (such an element exists by the transfer principle, as I is a hyperfinite set). Then the real number

c = st(x_{i_0})

is the desired zero of f. Such a proof reduces the quantifier complexity of a standard proof of the IVT.
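
The least-index idea also survives at finite precision: over a fine but finite partition, the first grid point at which f becomes positive lies within one mesh width of a zero. A sketch under the theorem's hypothesis f(a) < 0 < f(b), with an illustrative choice of f:

import math

f = lambda x: x - math.cos(x)             # f(0) < 0 and f(2) > 0; zero near 0.739
a, b = 0.0, 2.0

for N in (10, 1000, 100000):
    xs = [a + i * (b - a) / N for i in range(N + 1)]
    i0 = min(i for i in range(N + 1) if f(xs[i]) > 0)   # least index with f(x_i) > 0
    print(N, xs[i0], f(xs[i0]))           # xs[i0] approaches the zero as N grows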

Basic theorems

If f is a real valued function defined on an interval [a, b], then the transfer operator applied to f, denoted by *f, is an internal, hyperreal-valued function defined on the hyperreal interval [*a, *b].

Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is differentiable at a < x < b if and only if for every non-zero infinitesimal h, the value

st( ( *f(x + h) − *f(x) ) / h )

is independent of h. In that case, the common value is the derivative of f at x.

This fact follows from the transfer principle of nonstandard analysis and overspill.

Note that a similar result holds for differentiability at the endpoints a, b provided the sign of the infinitesimal h is suitably restricted.
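
A rough numerical counterpart of the "independent of h" criterion: for a function differentiable at x, difference quotients taken with several small h nearly agree, while for a function with a corner, such as |x| at 0, they keep disagreeing according to the sign of h. Small floating-point h only stands in for a nonzero infinitesimal, so this is suggestive rather than a proof:

def quotients(f, x, hs):
    # Difference quotients (f(x + h) - f(x)) / h for several small h.
    return [(f(x + h) - f(x)) / h for h in hs]

hs = [1e-5, -1e-5, 1e-7, -1e-7]

print(quotients(lambda t: t * t, 1.0, hs))   # all close to 2.0: "independent of h"
print(quotients(abs, 0.0, hs))               # +1.0 or -1.0 depending on the sign of h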

For the second theorem, the Riemann integral is defined as the limit, if it exists, of a directed family of Riemann sums; these are sums of the form

Σ_{k=0}^{n−1} f(ξ_k)(x_{k+1} − x_k)

where

a = x_0 ≤ ξ_0 ≤ x_1 ≤ ξ_1 ≤ x_2 ≤ … ≤ x_{n−1} ≤ ξ_{n−1} ≤ x_n = b.

Such a sequence of values is called a partition or mesh and

max_k (x_{k+1} − x_k)

the width of the mesh. In the definition of the Riemann integral, the limit of the Riemann sums is taken as the width of the mesh goes to 0.

Theorem: Let f be a real-valued function defined on an interval [a, b]. Then f is Riemann-integrable on [a, b] if and only if for every internal mesh of infinitesimal width, the quantity

st( Σ_{k=0}^{n−1} *f(ξ_k)(x_{k+1} − x_k) )

is independent of the mesh. In this case, the common value is the Riemann integral of f over [a, b].
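
A floating-point Riemann sum over a fine mesh plays the role of the hyperfinite sum whose standard part is the integral: for an integrable f the value settles down as the mesh width shrinks, no matter where the sample points ξ_k are taken. A sketch with the illustrative choice f(x) = x² on [0, 1], whose integral is 1/3, comparing left endpoints and midpoints as two choices of ξ_k:

f = lambda x: x * x          # integral over [0, 1] is 1/3
a, b = 0.0, 1.0

def riemann_sum(n, offset):
    # offset = 0.0 samples left endpoints, offset = 0.5 samples midpoints.
    width = (b - a) / n
    return sum(f(a + (k + offset) * width) * width for k in range(n))

for n in (10, 1000, 100000):
    print(n, riemann_sum(n, 0.0), riemann_sum(n, 0.5))
# Both columns approach 1/3 as the mesh width (b - a)/n shrinks.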

Applications

One immediate application is an extension of the standard definitions of differentiation and integration to internal functions on intervals of hyperreal numbers.

An internal hyperreal-valued function f on [a, b] is S-differentiable at x, provided

st( (f(x + h) − f(x)) / h )

exists and is independent of the infinitesimal h. The value is the S-derivative of f at x.

Theorem: Suppose f is S-differentiable at every point of [a, b], where b − a is a bounded hyperreal. Suppose furthermore that

|f′(x)| ≤ M for a ≤ x ≤ b,

where f′ denotes the S-derivative. Then for some infinitesimal ε,

|f(b) − f(a)| ≤ M(b − a) + ε.

To prove this, let N be a nonstandard natural number. Divide the interval [a, b] into N subintervals by placing N − 1 equally spaced intermediate points:

x_k = a + k·(b − a)/N,   k = 0, …, N.

Then

|f(b) − f(a)| ≤ |f(x_1) − f(x_0)| + … + |f(x_N) − f(x_{N−1})| ≤ Σ_{k=1}^{N} (M + ε_k)·(b − a)/N,

where each ε_k is infinitesimal because f is S-differentiable with S-derivative bounded by M. Now the maximum of any internal set of infinitesimals is infinitesimal. Thus all the ε_k's are dominated by an infinitesimal ε. Therefore,

|f(b) − f(a)| ≤ M(b − a) + ε(b − a),

from which the result follows, since ε(b − a) is infinitesimal whenever b − a is bounded.
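
The telescoping estimate at the heart of the proof can be checked numerically for a concrete function whose derivative bound is known, say f = sin with M = 1; the choices of f, [a, b] and N below are illustrative only:

import math

# Check |f(b) - f(a)| <= sum_k |f(x_k) - f(x_{k-1})| <= M*(b - a), up to a small error,
# for f = sin, whose derivative is bounded by M = 1.
f, M = math.sin, 1.0
a, b = 0.0, 2.0
N = 10000

xs = [a + k * (b - a) / N for k in range(N + 1)]
telescoped = sum(abs(f(xs[k]) - f(xs[k - 1])) for k in range(1, N + 1))

print(abs(f(b) - f(a)), telescoped, M * (b - a))   # about 0.909 <= 1.091 <= 2.0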

 

Positron

From Wikipedia, the free encyclopedia

Positron (antielectron)

Cloud chamber photograph by C. D. Anderson of the first positron ever identified. A 6 mm lead plate separates the chamber. The deflection and direction of the particle's ion trail indicate that the particle is a positron.

Composition: Elementary particle
Statistics: Fermionic
Generation: First
Interactions: Gravity, electromagnetic, weak
Symbol: e+, β+
Antiparticle: Electron
Theorized: Paul Dirac (1928)
Discovered: Carl D. Anderson (1932)
Mass: m_e = 9.1093837015(28)×10⁻³¹ kg = 5.48579909070(16)×10⁻⁴ Da = 0.5109989461(13) MeV/c²
Mean lifetime: Stable (same as electron)
Electric charge: +1 e = +1.602176565(35)×10⁻¹⁹ C
Spin: 1/2 (same as electron)
Weak isospin: LH: 0, RH: 1/2

The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. It has an electric charge of +1 e, a spin of 1/2 (the same as the electron), and the same mass as an electron. When a positron collides with an electron, annihilation occurs. If this collision occurs at low energies, it results in the production of two or more photons.

Positrons can be created by positron emission radioactive decay (through weak interactions), or by pair production from a sufficiently energetic photon interacting with an atom in a material.

History

Theory

In 1928, Paul Dirac published a paper proposing that electrons can have both a positive and negative charge. This paper introduced the Dirac equation, a unification of quantum mechanics, special relativity, and the then-new concept of electron spin to explain the Zeeman effect. The paper did not explicitly predict a new particle but did allow for electrons having either positive or negative energy as solutions. Hermann Weyl then published a paper discussing the mathematical implications of the negative energy solution. The positive-energy solution explained experimental results, but Dirac was puzzled by the equally valid negative-energy solution that the mathematical model allowed. Quantum mechanics did not allow the negative energy solution to simply be ignored, as classical mechanics often did in such equations; the dual solution implied the possibility of an electron spontaneously jumping between positive and negative energy states. However, no such transition had yet been observed experimentally.

Dirac wrote a follow-up paper in December 1929 that attempted to explain the unavoidable negative-energy solution for the relativistic electron. He argued that "... an electron with negative energy moves in an external [electromagnetic] field as though it carries a positive charge." He further asserted that all of space could be regarded as a "sea" of negative energy states that were filled, so as to prevent electrons jumping between positive energy states (negative electric charge) and negative energy states (positive charge). The paper also explored the possibility of the proton being an island in this sea, and that it might actually be a negative-energy electron. Dirac acknowledged that the proton having a much greater mass than the electron was a problem, but expressed "hope" that a future theory would resolve the issue.

Robert Oppenheimer argued strongly against the proton being the negative-energy electron solution to Dirac's equation. He asserted that if it were, the hydrogen atom would rapidly self-destruct. Hermann Weyl in 1931 showed that the negative-energy electron must have the same mass as that of the positive-energy electron. Persuaded by Oppenheimer's and Weyl's argument, Dirac published a paper in 1931 that predicted the existence of an as-yet-unobserved particle that he called an "anti-electron" that would have the same mass and the opposite charge as an electron and that would mutually annihilate upon contact with an electron.

Feynman, and earlier Stueckelberg, proposed an interpretation of the positron as an electron moving backward in time, reinterpreting the negative-energy solutions of the Dirac equation. Electrons moving backward in time would have a positive electric charge. Wheeler invoked this concept to explain the identical properties shared by all electrons, suggesting that "they are all the same electron" with a complex, self-intersecting worldline. Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from the past to the future, or from the future to the past." The backwards in time point of view is nowadays accepted as completely equivalent to other pictures, but it does not have anything to do with the macroscopic terms "cause" and "effect", which do not appear in a microscopic physical description.

Experimental clues and discovery

Wilson cloud chambers used to be very important particle detectors in the early days of particle physics. They were used in the discovery of the positron, muon, and kaon.
 

Several sources have claimed that Dmitri Skobeltsyn first observed the positron long before 1930, or even as early as 1923. They state that while using a Wilson cloud chamber to study the Compton effect, Skobeltsyn detected particles that acted like electrons but curved in the opposite direction in an applied magnetic field, and that he presented photographs of this phenomenon at a conference in Cambridge on 23–27 July 1928. In his 1963 book on the history of the discovery of the positron, Norwood Russell Hanson gave a detailed account of the reasons for this assertion, which may have been the origin of the myth; but he also presented Skobeltsyn's objection to it in an appendix. Later, Skobeltsyn rejected this claim even more strongly, calling it "nothing but sheer nonsense".

Skobeltsyn did pave the way for the eventual discovery of the positron through two important contributions: adding a magnetic field to his cloud chamber (in 1925), and discovering charged-particle cosmic rays, for which he is credited in Carl Anderson's Nobel lecture. Skobeltsyn did observe likely positron tracks on images taken in 1931, but did not identify them as such at the time.

Likewise, in 1929 Chung-Yao Chao, a graduate student at Caltech, noticed some anomalous results that indicated particles behaving like electrons, but with a positive charge, though the results were inconclusive and the phenomenon was not pursued.

Carl David Anderson discovered the positron on 2 August 1932, for which he won the Nobel Prize for Physics in 1936. Anderson did not coin the term positron, but allowed it at the suggestion of the Physical Review journal editor to whom he submitted his discovery paper in late 1932. The positron was the first evidence of antimatter and was discovered when Anderson allowed cosmic rays to pass through a cloud chamber and a lead plate. A magnet surrounded this apparatus, causing particles to bend in different directions based on their electric charge. The ion trail left by each positron appeared on the photographic plate with a curvature matching the mass-to-charge ratio of an electron, but in a direction that showed its charge was positive.

Anderson wrote in retrospect that the positron could have been discovered earlier based on Chung-Yao Chao's work, if only it had been followed up on. Frédéric and Irène Joliot-Curie in Paris had evidence of positrons in old photographs when Anderson's results came out, but they had dismissed them as protons.

The positron had also been contemporaneously discovered by Patrick Blackett and Giuseppe Occhialini at the Cavendish Laboratory in 1932. Blackett and Occhialini had delayed publication to obtain more solid evidence, so Anderson was able to publish the discovery first.

Natural production

Positrons are produced, together with neutrinos, naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle produced by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In research published in 2011 by the American Astronomical Society, positrons were discovered originating above thunderstorm clouds; positrons are produced in gamma-ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen belts around the Earth by the PAMELA module.

Antiparticles, of which the most common are antineutrinos and positrons due to their low mass, are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). During the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, also called baryon asymmetry, is attributed to CP-violation: a violation of the CP-symmetry relating matter to antimatter. The exact mechanism of this violation during baryogenesis remains a mystery.

Positron production from radioactive β+ decay can be considered both artificial and natural production, as the generation of the radioisotope can be natural or artificial. Perhaps the best known naturally occurring radioisotope which produces positrons is potassium-40, a long-lived isotope that occurs as a primordial isotope of potassium. Even though it makes up only a small percentage of potassium (0.0117%), it is the single most abundant radioisotope in the human body. In a human body of 70 kg (150 lb) mass, about 4,400 nuclei of ⁴⁰K decay per second. The activity of natural potassium is 31 Bq/g. About 0.001% of these ⁴⁰K decays produce a positron, amounting to about 4,000 natural positrons per day in the human body. These positrons soon find an electron, undergo annihilation, and produce pairs of 511 keV photons, in a process similar to (but of much lower intensity than) that which happens during a PET scan nuclear medicine procedure.
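
The figures quoted above are easy to cross-check: roughly 4,400 ⁴⁰K decays per second, about 0.001% of which yield a positron, come to a little under 4,000 positrons per day.

# Rough cross-check of the potassium-40 figures quoted above.
decays_per_second = 4400           # 40K decays per second in a ~70 kg body
positron_fraction = 0.001 / 100    # "about 0.001%" of the decays emit a positron
seconds_per_day = 86400

positrons_per_day = decays_per_second * positron_fraction * seconds_per_day
print(round(positrons_per_day))    # about 3800, i.e. "about 4,000" per day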

Recent observations indicate black holes and neutron stars produce vast amounts of positron-electron plasma in astrophysical jets. Large clouds of positron-electron plasma have also been associated with neutron stars.

Observation in cosmic rays

Satellite experiments have found evidence of positrons (as well as a few antiprotons) in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. However, the fraction of positrons in cosmic rays has been measured more recently with improved accuracy, especially at much higher energy levels, and the fraction of positrons has been seen to be greater in these higher energy cosmic rays.

These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed of complex antimatter in the universe (evidence for which is lacking, see below). Rather, the antimatter in cosmic rays appears to consist of only these two elementary particles. Recent theories suggest the source of such positrons may be annihilation of dark matter particles, acceleration of positrons to high energies in astrophysical objects, or production of high-energy positrons in the interactions of cosmic-ray nuclei with interstellar gas.

Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 0.5 GeV to 500 GeV. The positron fraction peaks at a maximum of about 16% of total electron + positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak at about 10 GeV. These results have been interpreted as possibly being due to positron production in annihilation events of massive dark matter particles.

Positrons, like antiprotons, do not appear to originate from any hypothetical "antimatter" regions of the universe. On the contrary, there is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, AMS-01 established an upper limit of 1.1×10⁻⁶ for the antihelium-to-helium flux ratio.

Artificial production

Physicists at the Lawrence Livermore National Laboratory in California have used a short, ultra-intense laser to irradiate a millimeter-thick gold target and produce more than 100 billion positrons. Significant laboratory production of 5 MeV positron-electron beams now allows investigation of multiple characteristics, such as how different elements react to 5 MeV positron interactions or impacts, how energy is transferred to particles, and the shock effect of gamma-ray bursts (GRBs).

Applications

Certain kinds of particle accelerator experiments involve colliding positrons and electrons at relativistic speeds. The high impact energy and the mutual annihilation of these matter/antimatter opposites create a fountain of diverse subatomic particles. Physicists study the results of these collisions to test theoretical predictions and to search for new kinds of particles.

The ALPHA experiment combines positrons with antiprotons to study properties of antihydrogen.

Gamma rays, emitted indirectly by a positron-emitting radionuclide (tracer), are detected in positron emission tomography (PET) scanners used in hospitals. PET scanners create detailed three-dimensional images of metabolic activity within the human body.

An experimental tool called positron annihilation spectroscopy (PAS) is used in materials research to detect variations in density, defects, displacements, or even voids, within a solid material.

Dirac equation

From Wikipedia, the free encyclopedia (https://en.wikipedia.org/wiki/Dirac_equation). In particle physics, ...