

Data validation

From Wikipedia, the free encyclopedia

In computing, data validation or input validation is the process of ensuring data has undergone data cleansing to confirm its quality, that is, that it is both correct and useful. It uses routines, often called "validation rules", "validation constraints", or "check routines", that check for correctness, meaningfulness, and security of data that are input to the system. The rules may be implemented through the automated facilities of a data dictionary, or by the inclusion of explicit validation logic in the application program.

This is distinct from formal verification, which attempts to prove or disprove the correctness of algorithms for implementing a specification or property.

Overview

Data validation is intended to provide certain well-defined guarantees for fitness and consistency of data in an application or automated system. Data validation rules can be defined and designed using various methodologies, and be deployed in various contexts. Their implementation can use declarative data integrity rules, or procedure-based business rules.

The guarantees of data validation do not necessarily include accuracy, and it is possible for data entry errors such as misspellings to be accepted as valid. Other clerical and/or computer controls may be applied to reduce inaccuracy within a system.

Different kinds

In evaluating the basics of data validation, generalizations can be made regarding the different kinds of validation according to their scope, complexity, and purpose.

For example:

  • Data type validation;
  • Range and constraint validation;
  • Code and cross-reference validation;
  • Structured validation; and
  • Consistency validation

Data-type check

Data type validation is customarily carried out on one or more simple data fields.

The simplest kind of data type validation verifies that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types as defined in a programming language or data storage and retrieval mechanism.

For example, an integer field may require input to use only characters 0 through 9.
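A data-type check like this can be sketched with a regular expression (a minimal illustration; the function name is ours, not part of any standard library):

```python
import re

def is_integer_field(value: str) -> bool:
    """Data-type check: accept only strings made up of the digits 0-9."""
    return re.fullmatch(r"[0-9]+", value) is not None

print(is_integer_field("12345"))  # True
print(is_integer_field("12a45"))  # False
print(is_integer_field(""))       # False: at least one digit is required
```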

Simple range and constraint check

Simple range and constraint validation may examine input for consistency with a minimum/maximum range, or consistency with a test for evaluating a sequence of characters, such as one or more tests against regular expressions. For example, a counter value may be required to be a non-negative integer, and a password may be required to meet a minimum length and contain characters from multiple categories.
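Both constraints can be sketched in a few lines (an illustration only; the minimum length and character categories are example policy, not a standard):

```python
import re

def check_counter(value: int) -> bool:
    """Range check: a counter must be a non-negative integer."""
    return isinstance(value, int) and value >= 0

def check_password(pw: str, min_len: int = 8) -> bool:
    """Constraint check: minimum length plus characters from
    several categories (lowercase, uppercase, digit)."""
    return (
        len(pw) >= min_len
        and re.search(r"[a-z]", pw) is not None
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"[0-9]", pw) is not None
    )

print(check_counter(-1))            # False
print(check_password("Secret42x"))  # True
```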

Code and cross-reference check

Code and cross-reference validation includes operations to verify that data is consistent with one or more possibly-external rules, requirements, or collections relevant to a particular organization, context or set of underlying assumptions. These additional validity constraints may involve cross-referencing supplied data with a known look-up table or directory information service such as LDAP.

For example, a user-provided country code might be required to identify a current geopolitical region.
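A cross-reference check against a look-up table might look like this (the allow-list here is a hypothetical stand-in for a full country-code table or a directory service such as LDAP):

```python
# Hypothetical allow-list; a real system might consult ISO 3166 data or LDAP.
VALID_COUNTRY_CODES = {"DE", "FR", "JP", "US"}

def check_country_code(code: str) -> bool:
    """Cross-reference check: the supplied code must exist in the look-up table."""
    return code.upper() in VALID_COUNTRY_CODES

print(check_country_code("de"))  # True
print(check_country_code("ZZ"))  # False
```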

Structured check

Structured validation allows for the combination of other kinds of validation, along with more complex processing. Such complex processing may include the testing of conditional constraints for an entire complex data object or set of process operations within a system.

Consistency check

Consistency validation ensures that data is logical. For example, the delivery date of an order can be prohibited from preceding its shipment date.
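The date rule just described can be expressed directly (a minimal sketch with hypothetical field names):

```python
from datetime import date

def check_order_dates(shipment: date, delivery: date) -> bool:
    """Consistency check: the delivery date may not precede the shipment date."""
    return delivery >= shipment

print(check_order_dates(date(2025, 7, 1), date(2025, 7, 3)))  # True
print(check_order_dates(date(2025, 7, 3), date(2025, 7, 1)))  # False
```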

Example

Multiple kinds of data validation are relevant to 10-digit pre-2007 ISBNs (the 2005 edition of ISO 2108 required ISBNs to have 13 digits from 2007 onwards).

  • Size. A pre-2007 ISBN must consist of 10 digits, with optional hyphens or spaces separating its four parts.
  • Format checks. Each of the first 9 digits must be 0 through 9, and the 10th must be either 0 through 9 or an X.
  • Check digit. To detect transcription errors in which digits have been altered or transposed, the last digit of a pre-2007 ISBN must match the result of a mathematical formula incorporating the other 9 digits (ISBN-10 check digits).
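The three checks above can be combined into one short validator (a sketch; 0-306-40615-2 is a well-known valid ISBN-10):

```python
def is_valid_isbn10(isbn: str) -> bool:
    """Validate a pre-2007 ISBN: size, format, and check digit."""
    digits = isbn.replace("-", "").replace(" ", "")    # hyphens/spaces are optional
    if len(digits) != 10:                              # size check
        return False
    if not digits[:9].isdigit():                       # format check: first 9 digits
        return False
    if not (digits[9].isdigit() or digits[9] == "X"):  # 10th may be 0-9 or X
        return False
    # Check digit: weighted sum (weights 10 down to 1) must be divisible by 11;
    # X stands for the value 10 in the last position.
    total = sum((10 - i) * int(d) for i, d in enumerate(digits[:9]))
    total += 10 if digits[9] == "X" else int(digits[9])
    return total % 11 == 0

print(is_valid_isbn10("0-306-40615-2"))  # True
print(is_valid_isbn10("0-306-40615-3"))  # False: wrong check digit
```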

Validation types

Allowed character checks
Checks to ascertain that only expected characters are present in a field. For example, a numeric field may allow only the digits 0–9, the decimal point, and perhaps a minus sign or commas. A text field such as a personal name might disallow characters used for markup. An e-mail address might require at least one @ sign and various other structural details. Regular expressions are an effective way to implement such checks.
Batch totals
Checks for missing records. Numerical fields may be added together for all records in a batch. The batch total is entered and the computer checks that the total is correct, e.g., add the 'Total Cost' field of a number of transactions together.
Cardinality check
Checks that a record has a valid number of related records. For example, if a contact record is classified as "customer" then it must have at least one associated order (cardinality > 0). This type of rule can be complicated by additional conditions. For example, if a contact record in a payroll database is classified as "former employee" then it must not have any associated salary payments after the separation date (cardinality = 0).
Check digits
Used for numerical data. To support error detection, an extra digit is added to a number which is calculated from the other digits.
Consistency checks
Checks fields to ensure data in these fields correspond, e.g., if expiration date is in the past then status is not "active".
Cross-system consistency checks
Compares data in different systems to ensure it is consistent. Systems may represent the same data differently, in which case comparison requires transformation (e.g., one system may store customer name in a single Name field as 'Doe, John Q', while another uses First_Name 'John' and Last_Name 'Doe' and Middle_Name 'Quality').
Data type checks
Checks input conformance with typed data. For example, an input box accepting numeric data may reject the letter 'O'.
File existence check
Checks that a file with a specified name exists. This check is essential for programs that use file handling.
Format check
Checks that the data is in a specified format (template), e.g., dates have to be in the format YYYY-MM-DD. Regular expressions may be used for this kind of validation.
Presence check
Checks that data is present, e.g., customers may be required to have an email address.
Range check
Checks that the data is within a specified range of values, e.g., a probability must be between 0 and 1.
Referential integrity
Values in two relational database tables can be linked through foreign key and primary key. If values in the foreign key field are not constrained by internal mechanisms, then they should be validated to ensure that the referencing table always refers to a row in the referenced table.
Spelling and grammar check
Looks for spelling and grammatical errors.
Uniqueness check
Checks that each value is unique. This can be applied to several fields in combination (e.g., Address, First Name, Last Name).
Table look up check
A table look up check compares data to a collection of allowed values.
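Several of the checks in this list — presence, range, and batch totals — can be combined in a short sketch (the record layout, field names, and totals are hypothetical):

```python
records = [
    {"id": 1, "email": "a@example.com", "total_cost": 19.99},
    {"id": 2, "email": "b@example.com", "total_cost": 5.00},
]
declared_batch_total = 24.99  # total entered alongside the batch

def validate_batch(records, declared_total):
    errors = []
    for r in records:
        if not r.get("email"):                 # presence check
            errors.append(f"record {r['id']}: missing email")
        if r["total_cost"] < 0:                # range check
            errors.append(f"record {r['id']}: negative total")
    computed = round(sum(r["total_cost"] for r in records), 2)
    if computed != declared_total:             # batch total check
        errors.append(f"batch total mismatch: {computed} != {declared_total}")
    return errors

print(validate_batch(records, declared_batch_total))  # []
```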

Post-validation actions

Enforcement Action
Enforcement action typically rejects the data entry request and requires the input actor to make a change that brings the data into compliance. This is most suitable for interactive use, where a real person is sitting at the computer making the entry. It also works well for batch upload, where a file input may be rejected and a set of messages sent back to the input source explaining why the data was rejected.
Another form of enforcement action involves automatically changing the data and saving a conformant version instead of the original version. This is most suitable for cosmetic changes. For example, converting an all-caps entry to a Pascal-case entry does not need user input. Automatic enforcement is inappropriate where it would lead to loss of business information, for example saving a truncated comment when the input is longer than expected, since this may result in the loss of significant data.
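The cosmetic form of enforcement can be sketched as follows (here normalization means capitalizing each word; a real system would guard against lossy changes):

```python
def normalize_name(text: str) -> str:
    """Cosmetic enforcement: convert an all-caps entry such as
    'JOHN Q DOE' to 'John Q Doe' before saving it."""
    return " ".join(word.capitalize() for word in text.split())

print(normalize_name("JOHN Q DOE"))  # John Q Doe
```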
Advisory Action
Advisory actions typically allow data to be entered unchanged but send a message to the source actor indicating the validation issues that were encountered. This is most suitable for non-interactive systems, for systems where the change is not business critical, for cleansing steps of existing data, and for verification steps of an entry process.
Verification Action
Verification actions are special cases of advisory actions. Here the source actor is asked to verify that the data is what they really want to enter, in light of a suggestion to the contrary: the check step suggests an alternative (e.g., a check of a mailing address returns a different way of formatting that address, or suggests a different address altogether). In this case, the user should be given the option of accepting the recommendation or keeping their version. By design this is not a strict validation process, and it is useful for capturing addresses in a new location or in a location that is not yet supported by the validation databases.
Log of validation
Even in cases where data validation did not find any issues, providing a log of the validations that were conducted and their results is important. This helps identify missing data validation checks in light of data issues, and helps improve the validation process over time.

Validation and security

Failures or omissions in data validation can lead to data corruption or a security vulnerability. Data validation checks that data are fit for purpose, valid, sensible, reasonable and secure before they are processed.

Gravitational interaction of antimatter

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Gravitational_interaction_of_antimatter 

The gravitational interaction of antimatter with matter or antimatter has been observed by physicists. In line with the earlier consensus among physicists, experiments have confirmed that gravity attracts both matter and antimatter at the same rate, within experimental error.

Antimatter's rarity and tendency to annihilate when brought into contact with matter makes its study a technically demanding task. Furthermore, gravity is much weaker than the other fundamental forces, for reasons still of interest to physicists, complicating efforts to study gravity in systems small enough to be feasibly created in lab, including antimatter systems. Most methods for the creation of antimatter (specifically antihydrogen) result in particles and atoms of high kinetic energy, which are unsuitable for gravity-related study.

Antimatter is gravitationally attracted to matter, and the magnitude of the gravitational force is the same. This is predicted by theoretical arguments such as the gravitational equivalence of energy and matter, and has been experimentally verified for antihydrogen. However, the equivalence of the gravitational acceleration of matter toward matter versus antimatter toward matter has so far been verified only to within an error margin of about 20%. Difficulties in creating quantum gravity models have led to the idea that antimatter may respond to gravity with a slightly different magnitude.

Theories of gravitational attraction

When antimatter was first discovered in 1932, physicists wondered how it would react to gravity. Initial analysis focused on whether antimatter should react the same as matter or react oppositely. Several theoretical arguments arose which convinced physicists that antimatter would react the same as normal matter. They inferred that gravitational repulsion between matter and antimatter was implausible as it would violate CPT invariance, conservation of energy, result in vacuum instability, and result in CP violation. It was also theorized that it would be inconsistent with the results of the Eötvös test of the weak equivalence principle. Many of these early theoretical objections were later overturned.

The equivalence principle

The equivalence principle predicts that mass and energy react the same way with gravity, therefore matter and antimatter would be accelerated identically by a gravitational field. From this point of view, matter-antimatter gravitational repulsion is unlikely.

Photon behavior

Photons, which are their own antiparticles in the framework of the Standard Model, have in a large number of astronomical tests (gravitational redshift and gravitational lensing, for example) been observed to interact with the gravitational field of ordinary matter exactly as predicted by the general theory of relativity. This is a feature that any theory that predicts that matter and antimatter repel must explain.[citation needed]

CPT theorem

The CPT theorem implies that the difference between the properties of a matter particle and those of its antimatter counterpart is completely described by C-inversion. Since this C-inversion does not affect gravitational mass, the CPT theorem predicts that the gravitational mass of antimatter is the same as that of ordinary matter. A repulsive gravity is then excluded, since that would imply a difference in sign between the observable gravitational mass of matter and antimatter.

Morrison's argument

In 1958, Philip Morrison argued that antigravity would violate conservation of energy. If matter and antimatter responded oppositely to a gravitational field, then it would take no energy to change the height of a particle–antiparticle pair. However, when moving through a gravitational potential, the frequency and energy of light is shifted. Morrison argued that energy would be created by producing matter and antimatter at one height and then annihilating it higher up, since the photons used in production would have less energy than the photons yielded from annihilation.
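Morrison's energy-accounting argument can be made concrete with a short back-of-the-envelope calculation (assuming a uniform field g and the standard gravitational redshift; this sketch is an illustration, not Morrison's original derivation):

```latex
% Assume antigravity: raising a particle-antiparticle pair through height h
% costs no work, since the two members respond oppositely to the field.
% Step 1: create the pair from photons at height 0:
\[ E_{\text{in}} = 2mc^2 . \]
% Step 2: raise the pair to height h for free and annihilate it into photons:
\[ E_{\text{top}} = 2mc^2 . \]
% Step 3: bring the photons back down; gravitational blueshift gives
\[ E_{\text{out}} = 2mc^2\left(1 + \frac{gh}{c^2}\right) = 2mc^2 + 2mgh > E_{\text{in}} , \]
% so each cycle would create energy 2mgh from nothing,
% violating conservation of energy.
```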

Schiff's argument

Later in 1958, L. Schiff used quantum field theory to argue that antigravity would be inconsistent with the results of the Eötvös experiment. However, the renormalization technique used in Schiff's analysis is heavily criticized, and his work is seen as inconclusive. In 2014 the argument was redone by Marcoen Cabbolet, who concluded however that it merely demonstrates the incompatibility of the Standard Model and gravitational repulsion.

Good's argument

In 1961, Myron L. Good argued that antigravity would result in the observation of an unacceptably high amount of CP violation in the anomalous regeneration of kaons. At the time, CP violation had not yet been observed. However, Good's argument is criticized for being expressed in terms of absolute potentials. By rephrasing the argument in terms of relative potentials, Gabriel Chardin found that it resulted in an amount of kaon regeneration that agrees with observation. He argued in 1992, based on his models of K mesons, that antigravity is a potential explanation for CP violation. Since then, however, studies of CP violation mechanisms in B meson systems have fundamentally invalidated these explanations.

Gerard 't Hooft's argument

According to Gerard 't Hooft, every physicist recognizes immediately what is wrong with the idea of gravitational repulsion: if a ball is thrown high up in the air so that it falls back, then its motion is symmetric under time-reversal; and therefore, the ball falls also down in opposite time-direction. Since a matter particle in opposite time-direction is an antiparticle, this proves according to 't Hooft that antimatter falls down on earth just like "normal" matter. However, Cabbolet replied that 't Hooft's argument is false, and only proves that an anti-ball falls down on an anti-earth – which is not disputed.

Theories of gravitational repulsion

Since repulsive gravity has not been refuted experimentally, it is possible to speculate about physical principles that would bring about such a repulsion. Thus far, three radically different theories have been published.

Kowitt's theory

The first theory of repulsive gravity was a quantum theory published by Mark Kowitt. In this modified Dirac theory, Kowitt postulated that the positron is not a hole in the sea of electrons-with-negative-energy as in usual Dirac hole theory, but instead is a hole in the sea of electrons-with-negative-energy-and-positive-gravitational-mass: this yields a modified C-inversion, by which the positron has positive energy but negative gravitational mass. Repulsive gravity is then described by adding extra terms (mgΦg and mgAg) to the wave equation. The idea is that the wave function of a positron moving in the gravitational field of a matter particle evolves such that in time it becomes more probable to find the positron further away from the matter particle.

Santilli and Villata's theory

Classical theories of repulsive gravity have been published by Ruggero Santilli and Massimo Villata. Both theories are extensions of general relativity, and are experimentally indistinguishable. The general idea remains that gravity is the deflection of a continuous particle trajectory due to the curvature of spacetime, but antiparticles 'live' in an inverted spacetime. The equation of motion for antiparticles is then obtained from the equation of motion of ordinary particles by applying the C, P, and T operators (Villata) or by applying isodual maps (Santilli), which amounts to the same thing: the equation of motion for antiparticles then predicts a repulsion of matter and antimatter. The observed trajectories of antiparticles are taken to be projections onto our spacetime of the true trajectories in the inverted spacetime. However, it has been argued on methodological and ontological grounds that the area of application of Villata's theory cannot be extended to include the microcosmos. These objections were subsequently dismissed by Villata.

Cabbolet's theory

The first non-classical, non-quantum physical principles underlying a matter–antimatter gravitational repulsion have been published by Marcoen Cabbolet. He introduces the Elementary Process Theory, which uses a new language for physics, i.e. a new mathematical formalism and new physical concepts, and which is incompatible with both quantum mechanics and general relativity. The core idea is that nonzero rest mass particles such as electrons, protons, neutrons and their antimatter counterparts exhibit stepwise motion as they alternate between a particlelike state of rest and a wavelike state of motion. Gravitation then takes place in a wavelike state, and the theory allows, for example, that the wavelike states of protons and antiprotons interact differently with the earth's gravitational field.

Analysis

Further authors have used a matter–antimatter gravitational repulsion to explain cosmological observations, but these publications do not address the physical principles of gravitational repulsion.

Experiments

Supernova 1987A

One source of experimental evidence in favor of normal gravity was the observation of neutrinos from Supernova 1987A. In 1987, three neutrino detectors around the world simultaneously observed a cascade of neutrinos emanating from a supernova in the Large Magellanic Cloud. Although the supernova happened about 164,000 light years away, both neutrinos and antineutrinos seem to have been detected virtually simultaneously. If both were actually observed, then any difference in the gravitational interaction would have to be very small. However, neutrino detectors cannot distinguish perfectly between neutrinos and antineutrinos. Some physicists conservatively estimate that there is less than a 10% chance that no regular neutrinos were observed at all. Others estimate even lower probabilities, some as low as 1%. Unfortunately, this accuracy is unlikely to be improved by duplicating the experiment any time soon. The last known supernova to occur at such a close range prior to Supernova 1987A was around 1867.

Cold neutral antihydrogen experiments

Since 2010 the production of cold antihydrogen has become possible at the Antiproton Decelerator at CERN. Antihydrogen, which is electrically neutral, should make it possible to directly measure the gravitational attraction of antimatter particles to the matter of Earth.

Antihydrogen atoms have been trapped at CERN, first by ALPHA and then by ATRAP. In 2013, antihydrogen atoms released from the ALPHA trap were used to set the first direct, free-fall bounds on the gravitational interaction of antimatter with matter. These limits were coarse, measured to within about ±7500% of ordinary gravity, far from a clear statement even about the sign of gravity acting on antimatter. Future experiments, either with beams of antihydrogen (AEgIS) or with trapped antihydrogen (ALPHA and GBAR), need higher precision to make a clear scientific statement about gravity on antimatter.

In 2023 ALPHA achieved the first result that proved that antimatter has the same sign for gravitational free fall acceleration as regular matter.

Antiparticle

From Wikipedia, the free encyclopedia
Illustration of electric charge of particles (left) and antiparticles (right), not to scale. From top to bottom: electron/positron, proton/antiproton, neutron/antineutron.

In particle physics, every type of particle of "ordinary" matter (as opposed to antimatter) is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.

Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle.

Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.

The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. How the formation of matter after the Big Bang produced a universe consisting almost entirely of matter remains an open question, and the explanations offered so far are not fully satisfactory.

Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle (pair production), which can occur in particle accelerators such as the Large Hadron Collider at CERN.

Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z0 bosons, π0 mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct.

History

Experiment

In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field, because a positron has the same magnitude of charge-to-mass ratio as an electron but with opposite sign.

The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.

Dirac hole theory

... the development of quantum field theory made the interpretation of antiparticles as holes unnecessary, even though it lingers on in many textbooks.

Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper A Theory of Electrons and Protons. However, these "negative-energy electrons" turned out to be positrons, and not protons.

This picture implied an infinite negative charge for the universe – a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p+ → γ + γ, in which an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.

Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time.

Elementary antiparticles

Antiquarks

Generation | Name              | Symbol | Spin | Charge (e) | Mass (MeV/c2) | Observed
1          | up antiquark      | ū      | 1/2  | −2/3       | 2.2 +0.6 −0.4 | Yes
1          | down antiquark    | d̄      | 1/2  | +1/3       | 4.6 +0.5 −0.4 | Yes
2          | charm antiquark   | c̄      | 1/2  | −2/3       | 1280 ± 30     | Yes
2          | strange antiquark | s̄      | 1/2  | +1/3       | 96 +8 −4      | Yes
3          | top antiquark     | t̄      | 1/2  | −2/3       | 173100 ± 600  | Yes
3          | bottom antiquark  | b̄      | 1/2  | +1/3       | 4180 +40 −30  | Yes

Antileptons

Generation | Name                  | Symbol | Spin | Charge (e) | Mass (MeV/c2)  | Observed
1          | positron              | e+     | 1/2  | +1         | 0.511          | Yes
1          | electron antineutrino | ν̄e     | 1/2  | 0          | < 0.0000022    | Yes
2          | antimuon              | μ+     | 1/2  | +1         | 105.7          | Yes
2          | muon antineutrino     | ν̄μ     | 1/2  | 0          | < 0.170        | Yes
3          | antitau               | τ+     | 1/2  | +1         | 1776.86 ± 0.12 | Yes
3          | tau antineutrino      | ν̄τ     | 1/2  | 0          | < 15.5         | Yes

Antibosons

Name         | Symbol | Spin | Charge (e) | Mass (GeV/c2)  | Interaction mediated | Observed
anti W boson | W+     | 1    | +1         | 80.385 ± 0.015 | weak interaction     | Yes

Composite antiparticles


Class      | Subclass   | Name        | Symbol | Spin | Charge (e) | Mass (MeV/c2)    | Mass (kg)               | Observed
Antihadron | Antibaryon | antiproton  | p̄      | 1/2  | −1         | 938.27208943(29) | 1.67262192595(52)×10−27 | Yes
Antihadron | Antibaryon | antineutron | n̄      | 1/2  | 0          | 939.56542194(48) | ?                       | Yes

Particle–antiparticle annihilation

An example of a virtual pion pair that influences the propagation of a kaon, causing a neutral kaon to mix with the antikaon. This is an example of renormalization in quantum field theory – the field theory being necessary because of the change in particle number.

If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → γγ (the two-photon annihilation of an electron–positron pair) are an example. The single-photon annihilation of an electron–positron pair, e− + e+ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one-particle quantum state may fluctuate into a two-particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing through processes such as the one pictured here, which is a complicated example of mass renormalization.

Properties

Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation C, parity P and time reversal T. C and P are linear, unitary operators; T is antilinear and antiunitary. If |p, σ, n⟩ denotes the quantum state of a particle n with momentum p and spin J whose component in the z-direction is σ, then one has

CPT |p, σ, n⟩ = (−1)^(J−σ) |p, −σ, n̄⟩,

where n̄ denotes the charge conjugate state, that is, the antiparticle. In particular, a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group, which means the antiparticle has the same mass and the same spin.

If C, P and T can be defined separately on the particles and antiparticles, then

T |p, σ, n⟩ ∝ |−p, −σ, n⟩,
CP |p, σ, n⟩ ∝ |−p, σ, n̄⟩,
CPT |p, σ, n⟩ ∝ |p, −σ, n̄⟩,

where the proportionality sign indicates that there might be a phase on the right-hand side.

As C anticommutes with the charges, CQ = −QC, particle and antiparticle have opposite electric charges q and −q.
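The anticommutation of charge conjugation with the charge operator can be illustrated in a toy two-state model. Everything here (the basis {|particle⟩, |antiparticle⟩} and the 2×2 matrices) is a hypothetical finite-dimensional stand-in for the real operators, chosen only to make the algebra concrete:

```python
# Toy 2-state model: in the basis {|particle>, |antiparticle>},
# the charge operator Q is diagonal with eigenvalues q and -q, and
# charge conjugation C swaps the two states. Then CQ = -QC, so C maps
# a charge-q eigenstate to a charge-(-q) eigenstate.
q = 1.0
Q = [[q, 0.0], [0.0, -q]]     # charge operator
C = [[0.0, 1.0], [1.0, 0.0]]  # charge conjugation (basis swap)

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

CQ = matmul(C, Q)
QC = matmul(Q, C)
anticommutes = all(CQ[i][j] == -QC[i][j] for i in range(2) for j in range(2))
print(anticommutes)  # True
```

Because CQ = −QC, if Q|ψ⟩ = q|ψ⟩ then Q(C|ψ⟩) = −CQ|ψ⟩ = −q(C|ψ⟩): the conjugated state carries the opposite charge.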

Quantum field theory

This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory.

One may try to quantize an electron field without mixing the annihilation and creation operators by writing

ψ(x) = Σₖ uₖ(x) aₖ e^(−iE(k)t),

where we use the symbol k to denote the quantum numbers p and σ of the previous section and the sign of the energy, E(k), and aₖ denotes the corresponding annihilation operator. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian

H = Σₖ E(k) aₖ† aₖ,

then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination aₖ† aₖ of creation and annihilation operators has expectation value 1 or 0.

So one has to introduce the charge conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations

bₖ′ = aₖ†  and  bₖ′† = aₖ,

where k′ has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form

ψ(x) = Σₖ₊ uₖ(x) aₖ e^(−iE(k)t) + Σₖ₋ uₖ(x) bₖ† e^(−iE(k)t),

where the first sum is over positive energy states and the second over those of negative energy. The energy becomes

H = Σₖ₊ E(k) aₖ† aₖ + Σₖ₋ |E(k)| bₖ† bₖ + E₀,

where E₀ is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., aₖ |0⟩ = 0 and bₖ |0⟩ = 0. Then the energy of the vacuum is exactly E₀. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of aₖ and bₖ shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
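The mode-by-mode bookkeeping above can be sketched numerically for a handful of modes. This is a hedged toy model, not field theory: the lists of energies and fermionic occupation numbers are invented, and the point is only that the naive sum over signed energies is unbounded below, while counting occupied negative-energy modes as antiparticle excitations (relative to the vacuum constant E₀) yields a non-negative energy.

```python
# Toy sketch: a few field modes with energies E(k) of both signs and
# fermionic occupation numbers (0 or 1) for the same set of excitations.
energies = [1.0, 2.0, -1.5, -3.0]   # E(k) for four modes
occupations = [1, 0, 1, 1]          # which modes are excited

# Naive quantization: H = sum_k E(k) n_k can be arbitrarily negative,
# since occupying more negative-energy modes lowers the energy.
naive_H = sum(E * n for E, n in zip(energies, occupations))

# After the redefinition: positive-E excitations are particles,
# negative-E excitations are antiparticles contributing +|E(k)|,
# and all energies are measured relative to the vacuum constant E0.
H_minus_E0 = (sum(E * n for E, n in zip(energies, occupations) if E > 0)
              + sum(abs(E) * n for E, n in zip(energies, occupations) if E < 0))

print(naive_H)      # -3.5: unbounded below as more negative modes fill
print(H_minus_E0)   # 5.5: non-negative, as for the true spectrum
```

Adding more occupied negative-energy modes drives `naive_H` down without bound, while `H_minus_E0` only grows, mirroring why the antiparticle reinterpretation is forced.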

This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.

Feynman–Stückelberg interpretation

By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stückelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, antiparticles are shown traveling backward in time relative to normal matter, and vice versa. This technique is the most widespread method of computing amplitudes in quantum field theory today.

Since this picture was first developed by Stückelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stückelberg interpretation of antiparticles to honor both scientists.
