Close-up of a table-top CW dye laser based on rhodamine 6G,
emitting at 580 nm (yellow). The emitted laser beam is visible as faint
yellow lines between the yellow window (center) and the yellow optics
(upper-right), where it reflects down across the image to an unseen
mirror, and back into the dye jet from the lower left corner. The orange
dye solution enters the laser from the left and exits to the right,
still glowing from triplet phosphorescence, and is pumped by a 514 nm
(green) beam from an argon laser. The pump laser can be seen
entering the dye jet, beneath the yellow window.
A dye laser is a laser that uses an organic dye as the lasing medium, usually as a liquid solution. Compared to gases and most solid-state lasing media, a dye can usually be used for a much wider range of wavelengths, often spanning 50 to 100 nanometers or more. The wide bandwidth makes them particularly suitable for tunable lasers
and pulsed lasers. The dye rhodamine 6G, for example, can be tuned from
635 nm (orangish-red) to 560 nm (greenish-yellow), and can produce pulses
as short as 16 femtoseconds.
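As a rough consistency check on these figures, the shortest pulse a given bandwidth can support follows from the time-bandwidth product. The sketch below (Python, illustrative only) assumes a Gaussian pulse (product of about 0.441) and treats the full 560–635 nm tuning range as the usable bandwidth, which is an idealization:

```python
# Transform-limit estimate for the rhodamine 6G tuning range quoted above.
# Assumes a Gaussian pulse shape (time-bandwidth product ~0.441).
c = 3.0e8                                # speed of light [m/s]
lam_lo, lam_hi = 560e-9, 635e-9          # tuning range [m]
lam0 = (lam_lo + lam_hi) / 2             # center wavelength [m]
d_nu = c * (lam_hi - lam_lo) / lam0**2   # bandwidth in frequency [Hz]
t_min = 0.441 / d_nu                     # transform-limited duration [s]
print(f"bandwidth ~ {d_nu / 1e12:.0f} THz, shortest pulse ~ {t_min * 1e15:.0f} fs")
```

The full band would support pulses of roughly 7 fs, so the 16 fs figure quoted above sits comfortably above the transform limit.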
Moreover, the dye can be replaced by another type in order to generate
an even broader range of wavelengths with the same laser, from the
near-infrared to the near-ultraviolet, although this usually requires
replacing other optical components in the laser as well, such as dielectric mirrors or pump lasers.
In addition to the usual liquid state, dye lasers are also available as solid-state dye lasers (SSDL), which use dye-doped organic matrices as the gain medium.
Construction
The internal cavity
of a linear dye-laser, showing the beam path. The pump laser (green)
enters the dye cell from the left. The emitted beam exits to the right
(lower yellow beam) through a cavity dumper
(not shown). A diffraction grating is used as the high-reflector (upper
yellow beam, left side). The two meter beam is redirected several times
by mirrors and prisms, which reduce the overall length, expand or focus
the beam for various parts of the cavity, and eliminate one of two
counter-propagating waves produced by the dye cell. The laser is capable
of continuous-wave operation or ultrashort picosecond pulses (a trillionth of a second, corresponding to a pulse less than 1/3 of a millimeter in length).
A ring dye laser. P: pump laser beam; G: gain dye jet; A: saturable-absorber dye jet; M0, M1, M2: planar mirrors; OC: output coupler; CM1 to CM4: curved mirrors.
A dye laser uses a gain medium consisting of an organic dye, which is a carbon-based, soluble stain that is often fluorescent, such as the dye in a highlighter pen. The dye is mixed with a compatible solvent, allowing the molecules to diffuse
evenly throughout the liquid. The dye solution may be circulated
through a dye cell, or streamed through open air using a dye jet. A high
energy source of light is needed to 'pump' the liquid beyond its lasing threshold. A fast discharge flashtube or an external laser is usually used for this purpose. Mirrors
are also needed to oscillate the light produced by the dye's
fluorescence, which is amplified with each pass through the liquid. The
output mirror is normally around 80% reflective, while all other mirrors
are usually more than 99.9% reflective. The dye solution is usually
circulated at high speeds, to help avoid triplet absorption and to decrease degradation of the dye. A prism or diffraction grating is usually mounted in the beam path, to allow tuning of the beam.
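The quoted reflectivities imply a minimum gain for the laser to oscillate. A minimal sketch, assuming an idealized two-mirror cavity (80% output coupler, 99.9% high reflector, all other losses ignored):

```python
# Round-trip threshold gain implied by the mirror reflectivities above.
import math

R_oc = 0.80   # output coupler reflectivity (from the text)
R_hr = 0.999  # high-reflector reflectivity (from the text)

# At threshold the round-trip gain just offsets the mirror losses:
# G_rt * R_oc * R_hr = 1
G_rt = 1.0 / (R_oc * R_hr)
print(f"required round-trip power gain: {G_rt:.3f}x ({10 * math.log10(G_rt):.2f} dB)")
```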
Because the liquid medium of a dye laser can fit any shape, there
are a multitude of different configurations that can be used. A Fabry–Pérot laser cavity is usually used for flashtube-pumped lasers; it consists of two mirrors, flat or curved, mounted parallel to each other with the laser medium in between. The dye cell is often a
thin tube approximately equal in length to the flashtube, with both
windows and an inlet/outlet for the liquid on each end. The dye cell is
usually side-pumped, with one or more flashtubes running parallel to the
dye cell in a reflector cavity. The reflector cavity is often water
cooled, to prevent thermal shock in the dye caused by the large amounts
of near-infrared radiation which the flashtube produces. Axially pumped lasers have a hollow, annular flashtube that surrounds the dye cell; the flashtube's lower inductance yields a shorter flash and improved transfer efficiency. Coaxially pumped
lasers have an annular dye cell that surrounds the flashtube, for even
better transfer efficiency, but have a lower gain due to diffraction
losses. Flash pumped lasers can be used only for pulsed output
applications.
A ring laser design is often chosen for continuous operation,
although a Fabry–Pérot design is sometimes used. In a ring laser, the
mirrors of the laser are positioned to allow the beam to travel in a
circular path. The dye cell, or cuvette, is usually very small.
Sometimes a dye jet is used to help avoid reflection losses. The dye is
usually pumped with an external laser, such as a nitrogen, excimer, or frequency-doubled Nd:YAG laser. The liquid is circulated at very high speeds to prevent triplet absorption from cutting off the beam. Unlike Fabry–Pérot cavities, a ring laser does not generate standing waves which cause spatial hole burning,
a phenomenon where energy becomes trapped in unused portions of the
medium between the crests of the wave. This leads to a better gain from
the lasing medium.
Operation
The dyes
used in these lasers contain rather large organic molecules which
fluoresce. Most dyes have a very short time between the absorption and
emission of light, referred to as the fluorescence lifetime, which is
often on the order of a few nanoseconds. (In comparison, most
solid-state lasers have a fluorescence lifetime ranging from hundreds of
microseconds to a few milliseconds.) Under standard laser-pumping
conditions, the molecules emit their energy before a population inversion can properly build up, so dyes require rather specialized means of pumping. Liquid dyes have an extremely high lasing threshold. In addition, the large molecules are subject to complex excited state transitions during which the spin can be "flipped", quickly changing from the useful, fast-emitting "singlet" state to the slower "triplet" state.
The incoming light excites the dye molecules into the state of being ready to emit stimulated radiation: the singlet state. In this state, the molecules emit light via fluorescence, and the dye is transparent to the lasing wavelength. Within a microsecond or less, the molecules will change to their triplet state. In the triplet state, light is emitted via phosphorescence,
and the molecules absorb the lasing wavelength, making the dye
partially opaque. Flashlamp-pumped lasers need a flash with an extremely
short duration, to deliver the large amounts of energy necessary to
bring the dye past threshold before triplet absorption overcomes singlet
emission. Dye lasers with an external pump-laser can direct enough
energy of the proper wavelength into the dye with a relatively small
amount of input energy, but the dye must be circulated at high speeds to
keep the triplet molecules out of the beam path. Due to their high
absorption, the pumping energy may often be concentrated into a rather
small volume of liquid.
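A toy rate-equation model makes the timing argument concrete. All numbers below are assumed for illustration (a few-nanosecond singlet lifetime, an intersystem-crossing rate, a long triplet lifetime) rather than taken from any particular dye:

```python
# Toy singlet/triplet populations under steady pumping (explicit Euler).
tau_f = 5e-9   # singlet (fluorescence) lifetime [s], typical few ns
k_isc = 1e7    # intersystem-crossing rate into the triplet [1/s] (assumed)
tau_T = 1e-4   # triplet lifetime [s] (assumed, long)
pump  = 1e24   # pump excitation rate [molecules/s] (arbitrary scale)

S = T = 0.0    # singlet and triplet populations
dt = 1e-9      # time step [s]
for step in range(1, 2001):            # simulate 2 microseconds
    dS = pump - S / tau_f - k_isc * S  # pumped in, emits, crosses over
    dT = k_isc * S - T / tau_T         # fed by crossing, decays slowly
    S += dS * dt
    T += dT * dt
    if step in (100, 1000, 2000):
        print(f"t = {step * dt * 1e6:.1f} us: singlet ~ {S:.2e}, triplet ~ {T:.2e}")
```

Even with a modest crossing rate, the absorbing triplet population overtakes the singlet population within about a microsecond, which is why the liquid must be swept out of the beam path on that timescale.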
Since organic dyes tend to decompose under the influence of
light, the dye solution is normally circulated from a large reservoir. The dye solution can flow through a cuvette, i.e., a glass container, or be formed into a dye jet, i.e., a sheet-like stream in open air from a specially shaped nozzle.
With a dye jet, one avoids reflection losses from the glass surfaces
and contamination of the walls of the cuvette. These advantages come at
the cost of a more-complicated alignment.
Liquid dyes have very high gain
as laser media. The beam needs to make only a few passes through the liquid to reach full design power, which is why the output coupler can have such high transmittance. The high gain also leads to high losses: reflections from the dye-cell walls or the flashlamp reflector cause parasitic oscillations, dramatically reducing the amount of energy available to the beam. Pump cavities are therefore often coated, anodized, or otherwise made of a material that will not reflect at the lasing wavelength while reflecting at the pump wavelength.
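A toy iteration illustrates the "few passes" claim. The single-pass gain and saturation intensity below are assumed values; the 80% output-coupler reflectivity is from the text:

```python
# Intracavity intensity buildup with a simple saturable-gain model.
G0 = 50.0     # unsaturated single-pass power gain (assumed)
R_oc = 0.80   # output coupler reflectivity (from the text)
I_sat = 1.0   # saturation intensity (arbitrary units)
I = 1e-6      # seed intensity from spontaneous emission (assumed)

for rt in range(1, 9):
    gain = 1 + (G0 - 1) / (1 + I / I_sat)  # gain compresses as I grows
    I *= gain * R_oc                       # one pass plus mirror loss
    print(f"round trip {rt}: intracavity intensity ~ {I:.3g}")
```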
A benefit of organic dyes is their high fluorescence efficiency.
The greatest losses in many lasers and other fluorescence devices are not
from the transfer efficiency (absorbed versus reflected/transmitted
energy) or quantum yield
(emitted number of photons per absorbed number), but from the losses
when high-energy photons are absorbed and reemitted as photons of longer
wavelengths. Because the energy of a photon is determined by its
wavelength, the emitted photons will be of lower energy, a phenomenon called the Stokes shift.
The absorption centers of many dyes are very close to the emission
centers. Sometimes the two are close enough that the absorption profile
slightly overlaps the emission profile. As a result, most dyes exhibit
very small Stokes shifts and consequently allow for lower energy losses
than many other laser types due to this phenomenon. The wide absorption
profiles make them particularly suited to broadband pumping, such as
from a flashtube. They also allow a wide range of pump lasers to be used for any given dye and, conversely, many different dyes to be used with a single pump laser.
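For a concrete figure, the Stokes loss can be computed from the pump and emission wavelengths mentioned in this article (the 514 nm argon line and rhodamine 6G's peak output near 590 nm), using E = hc/λ:

```python
# Worked Stokes-shift example for rhodamine 6G pumped at 514 nm.
h = 6.626e-34   # Planck constant [J*s]
c = 3.0e8       # speed of light [m/s]
eV = 1.602e-19  # joules per electronvolt

lam_pump, lam_emit = 514e-9, 590e-9   # wavelengths [m]
E_pump = h * c / lam_pump             # pump photon energy [J]
E_emit = h * c / lam_emit             # emitted photon energy [J]
loss = 1 - lam_pump / lam_emit        # fractional energy lost per photon
print(f"pump {E_pump / eV:.2f} eV, emitted {E_emit / eV:.2f} eV, Stokes loss {loss:.1%}")
```

Only about 13% of each pump photon's energy is left behind in the dye as heat, a comparatively small Stokes penalty.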
A cuvette used in a dye laser. A thin sheet of liquid is passed between the windows at high speeds. The windows are set at Brewster's angle (air-to-glass interface) for the pump laser, and at Brewster's angle (liquid-to-glass interface) for the emitted beam.
Stokes shift
in Rhodamine 6G during broadband absorption/emission. In laser
operation, the Stokes shift is the difference between the pump
wavelength and the output.
CW dye lasers
Continuous-wave (CW) dye lasers
often use a dye jet. CW dye-lasers can have a linear or a ring cavity,
and provided the foundation for the development of femtosecond lasers.
Narrow linewidth dye lasers
Multiple prisms expand the beam in one direction, providing better illumination of a diffraction grating. Depending on the angle, unwanted wavelengths are dispersed away, so such arrangements are used to tune the output of a dye laser, often to a linewidth of a fraction of an angstrom.
Dye lasers' emission is inherently broad. However, tunable narrow
linewidth emission has been central to the success of the dye laser. In
order to produce narrow bandwidth tuning these lasers use many types of
cavities and resonators which include gratings, prisms, multiple-prism grating arrangements, and etalons.
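As an order-of-magnitude illustration of why prism beam expansion narrows the line, the chromatic resolving power of a grating is roughly λ/Δλ = mN, with N the number of illuminated grooves. The groove density, beam width, and expansion factor below are assumed values, not from the text:

```python
# Single-pass linewidth estimate for a prism-expanded grating.
lam = 590e-9      # operating wavelength [m]
grooves = 2400e3  # groove density [lines/m] (assumed)
beam_w = 1e-3     # unexpanded beam width [m] (assumed)
M = 50            # multiple-prism beam expansion factor (assumed)
m_order = 1       # diffraction order

N = M * beam_w * grooves     # illuminated grooves after expansion
d_lam = lam / (m_order * N)  # diffraction-limited passband [m]
print(f"~{N:.0f} grooves illuminated, linewidth ~ {d_lam * 1e10:.3f} angstrom")
```

With these numbers the single-pass passband is a few hundredths of an angstrom, consistent with the "fraction of an angstrom" figure above.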
Rhodamine 6G chloride powder, mixed with methanol, emitting yellow light under the influence of a green laser.
Some of the laser dyes are rhodamine (orange, 540–680 nm), fluorescein (green, 530–560 nm), coumarin (blue, 490–620 nm), stilbene (violet, 410–480 nm), umbelliferone (blue, 450–470 nm), tetracene, malachite green, and others. While some dyes are actually used in food coloring, most dyes are very toxic, and often carcinogenic. Many dyes, such as rhodamine 6G,
(in its chloride form), can be very corrosive to all metals except
stainless steel. Although dyes have very broad fluorescence spectra, the
dye's absorption and emission will tend to center on a certain
wavelength and taper off to each side, forming a tunability curve, with
the absorption center being of a shorter wavelength than the emission
center. Rhodamine 6G, for example, has its highest output around 590 nm,
and the conversion efficiency lowers as the laser is tuned to either
side of this wavelength.
A wide variety of solvents can be used, although most dyes will
dissolve better in some solvents than in others. Some of the solvents
used are water, glycol, ethanol, methanol, hexane, cyclohexane, cyclodextrin,
and many others. Solvents can be highly toxic, and can sometimes be
absorbed directly through the skin, or through inhaled vapors. Many
solvents are also extremely flammable. The various solvents can also
have an effect on the specific color of the dye solution, the lifetime
of the singlet state, either enhancing or quenching the triplet state, and, thus, on the lasing bandwidth and power obtainable with a particular laser-pumping source.
Adamantane is added to some dyes to prolong their life.
Cycloheptatriene and cyclooctatetraene (COT) can be added as triplet quenchers for rhodamine 6G, increasing the laser output power. An output power of 1.4 kilowatts at 585 nm was achieved using rhodamine 6G with COT in a methanol–water solution.
Excitation lasers
Flashlamps and several types of lasers, such as nitrogen, excimer, frequency-doubled Nd:YAG, argon-ion, and copper-vapor lasers, can be used to optically pump dye lasers.
Ultrashort optical pulses
R. L. Fork, B. I. Greene, and C. V. Shank demonstrated, in 1981, the generation of ultrashort laser pulses using a ring dye laser (a dye laser exploiting colliding-pulse mode-locking). This kind of laser is capable of generating laser pulses of ~0.1 ps duration.
The introduction of grating techniques and intra-cavity prismatic pulse compressors eventually resulted in the routine emission of femtosecond dye laser pulses.
Applications
An atomic vapor laser isotope separation
experiment at LLNL. Green light is from a copper vapor pump laser used
to pump a highly tuned dye laser which is producing the orange light.
Dye lasers are very versatile. In addition to their recognized wavelength agility, these lasers can offer very large pulsed energies or very high average powers. Flashlamp-pumped dye lasers have been shown to yield hundreds of joules per pulse, and copper-laser-pumped dye lasers are known to yield average powers in the kilowatt regime.
Dye lasers are used in many applications including:
In laser medicine these lasers are applied in several areas, including dermatology
where they are used to make skin tone more even. The wide range of
wavelengths possible allows very close matching to the absorption lines
of certain tissues, such as melanin or hemoglobin,
while the narrow bandwidth obtainable helps reduce the possibility of
damage to the surrounding tissue. They are used to treat port-wine stains and other blood vessel disorders, scars and kidney stones. They can be matched to a variety of inks for tattoo removal, as well as a number of other applications.
In spectroscopy, dye lasers can be used to study the absorption
and emission spectra of various materials. Their tunability (from the near-infrared to the near-ultraviolet), narrow bandwidth, and high intensity allow a much greater diversity of measurements than other light sources. The
variety of pulse widths, from ultra-short, femtosecond pulses to
continuous-wave operation, makes them suitable for a wide range of
applications, from the study of fluorescence lifetimes and semiconductor
properties to lunar laser ranging experiments.
Tunable lasers are used in swept-frequency metrology to enable measurement of absolute distances with very high accuracy. A two-arm interferometer is set up, and as the laser frequency is swept, the light returning from the fixed arm differs slightly in frequency from the light returning from the distance-measuring arm. This produces a beat
frequency which can be detected and used to determine the absolute
difference between the lengths of the two arms.
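A minimal sketch of the arithmetic behind this scheme (the sweep rate and measured beat frequency below are assumed values):

```python
# Swept-frequency (FMCW-style) ranging: a linear frequency sweep makes the
# light from the two arms beat at a rate proportional to the path difference.
c = 3.0e8          # speed of light [m/s]
sweep_rate = 1e14  # laser frequency sweep rate [Hz/s] (assumed)
f_beat = 2.5e5     # measured beat frequency [Hz] (assumed)

tau = f_beat / sweep_rate  # round-trip delay difference [s]
dL = c * tau / 2           # one-way arm-length difference [m]
print(f"arm-length difference ~ {dL:.4f} m")
```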
In physical cosmology, baryogenesis (also known as baryosynthesis) is the physical process that is hypothesized to have taken place during the early universe to produce baryonic asymmetry: the observation that only matter (baryons), and not antimatter (antibaryons), is detected in the universe, other than in cosmic-ray collisions.
Since it is assumed in cosmology
that the particles we see were created using the same physics we
measure today, and in particle physics experiments today matter and
antimatter are always symmetric, the dominance of matter over antimatter
is unexplained.
A number of theoretical mechanisms are proposed to account for this discrepancy, namely identifying conditions that favour symmetry breaking
and the creation of normal matter (as opposed to antimatter). This
imbalance has to be exceptionally small, on the order of 1 in every 1,630,000,000 (≈2×10⁹) particles a small fraction of a second after the Big Bang.
After most of the matter and antimatter was annihilated, what remained
was all the baryonic matter in the current universe, along with a much
greater number of photons. Experiments reported in 2010 at Fermilab, however, seem to show that this imbalance is much greater than previously assumed.
These experiments involved a series of particle collisions and found
that the amount of generated matter was approximately 1% larger than the
amount of generated antimatter. The reason for this discrepancy is not
yet known.
Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons (X) or massive Higgs bosons (H⁰). The rate at which these events occur is governed largely by the mass of the intermediate X or H⁰
particles, so by assuming these reactions are responsible for the
majority of the baryon number seen today, a maximum mass can be
calculated above which the rate would be too slow to explain the
presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay, which has not been observed. Therefore, the imbalance between matter and antimatter remains a mystery.
The majority of ordinary matter in the universe is found in atomic nuclei, which are made of neutrons and protons.
There is no evidence of primordial antimatter. In the universe about 1
in 10,000 protons are antiprotons, consistent with ongoing production
due to cosmic rays. The existence of antimatter domains in other parts of the universe is inconsistent with the lack of a measurable gamma-radiation background.
Furthermore, accurate predictions of Big Bang nucleosynthesis depend upon the value of the baryon asymmetry factor (see § Relation to Big Bang nucleosynthesis).
The match between the predictions and observations of the
nucleosynthesis model constrains the value of this baryon asymmetry
factor. In particular, if the model is computed with equal amounts of baryons and antibaryons, they annihilate each other so completely that not enough baryons are left to create nucleons.
There are two main interpretations for this disparity: either the universe began with a small preference for matter (total baryonic number
of the universe different from zero), or the universe was originally
perfectly symmetric, but somehow a set of particle physics phenomena
contributed to a small imbalance in favour of matter over time. The goal
of cosmological theories of baryogenesis is to explain the baryon
asymmetry factor using quantum field theory of elementary particles.
Sakharov conditions
In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating
interaction must satisfy to produce matter and antimatter at different
rates. These conditions were inspired by the then-recent discoveries of the cosmic microwave background and CP violation in the neutral kaon system. The three necessary "Sakharov conditions" are:
baryon number violation;
C-symmetry and CP-symmetry violation;
interactions out of thermal equilibrium.
Baryon number violation is a necessary condition to produce an excess
of baryons over anti-baryons. But C-symmetry violation is also needed
so that the interactions which produce more baryons than anti-baryons
will not be counterbalanced by interactions which produce more
anti-baryons than baryons. CP-symmetry violation is similarly required
because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-handed anti-baryons and right-handed baryons.
Finally, the last condition, known as the out-of-equilibrium decay
scenario, states that the rate of a reaction which generates
baryon-asymmetry must be less than the rate of expansion of the
universe. This ensures the particles and their corresponding
antiparticles do not achieve thermal equilibrium due to rapid expansion
decreasing the occurrence of pair annihilation. The interactions must be out of thermal equilibrium at the time the baryon-number- and C/CP-symmetry-violating decays occur in order to generate the asymmetry.
In the Standard Model
The Standard Model
can incorporate baryogenesis, though the amount of net baryons (and
leptons) thus created may not be sufficient to account for the present
baryon asymmetry. Roughly one excess quark per billion quark–antiquark pairs was required in the early universe in order to provide all the observed matter in the universe. This insufficiency has not yet been explained, theoretically or otherwise.
Baryogenesis within the Standard Model requires the electroweak symmetry breaking to be a first-order cosmological phase transition, since otherwise sphalerons
wipe out any baryon asymmetry that happened up to the phase transition.
Beyond this, the remaining amount of baryon non-conserving interactions
is negligible.
The phase transition domain wall breaks the P-symmetry
spontaneously, allowing for CP-symmetry violating interactions to break
C-symmetry on both its sides. Quarks tend to accumulate on the broken
phase side of the domain wall, while anti-quarks tend to accumulate on
its unbroken phase side.
Due to CP-symmetry violating electroweak interactions, some amplitudes
involving quarks are not equal to the corresponding amplitudes involving
anti-quarks, but rather have opposite phase (see CKM matrix and Kaon); since time reversal takes an amplitude to its complex conjugate, CPT-symmetry is conserved in this entire process.
Though some of their amplitudes have opposite phases, both quarks
and anti-quarks have positive energy, and hence acquire the same phase
as they move in space-time. This phase also depends on their mass, which is identical for quark and antiquark but depends both on flavor and on the Higgs VEV, which changes along the domain wall.
Thus certain sums of amplitudes for quarks have different absolute
values compared to those of anti-quarks. In all, quarks and anti-quarks
may have different reflection and transmission probabilities through the
domain wall, and it turns out that more quarks coming from the unbroken
phase are transmitted compared to anti-quarks.
Thus there is a net baryonic flux through the domain wall. Due to
sphaleron transitions, which are abundant in the unbroken phase, the
net anti-baryonic content of the unbroken phase is wiped out as
anti-baryons are transformed into leptons.
However, sphalerons are rare enough in the broken phase as not to wipe
out the excess of baryons there. In total, there is net creation of
baryons (as well as leptons).
In this scenario, non-perturbative electroweak interactions (i.e.
the sphaleron) are responsible for the B-violation, the perturbative
electroweak Lagrangian is responsible for the CP-violation, and the
domain wall is responsible for the lack of thermal equilibrium and the
P-violation; together with the CP-violation it also creates a
C-violation in each of its sides.
The central question of baryogenesis is what causes the preference for matter over antimatter in the universe, as well as the magnitude of this asymmetry. An important quantifier is the asymmetry parameter, given by
η = (nB − n̄B) / nγ,
where nB and n̄B refer to the number density of baryons and antibaryons respectively and nγ is the number density of cosmic background radiation photons.
According to the Big Bang model, matter decoupled from the cosmic background radiation (CBR) at a temperature of roughly 3000 kelvin, corresponding to an average kinetic energy of 3000 K / (10.08×10³ K/eV) = 0.3 eV. After the decoupling, the total
number of CBR photons remains constant. Therefore, due to space-time
expansion, the photon density decreases. The photon density at
equilibrium temperature T is given by
nγ = (2ζ(3)/π²) (kBT/ħc)³,
with kB as the Boltzmann constant, ħ as the Planck constant divided by 2π and c as the speed of light in vacuum, and ζ(3) as Apéry's constant. At the current CBR photon temperature of 2.725 K, this corresponds to a photon density nγ of around 411 CBR photons per cubic centimeter.
Therefore, the asymmetry parameter η, as defined above, is not the "best" parameter. Instead, the preferred asymmetry parameter uses the entropy density s,
ηs = (nB − n̄B) / s,
because the entropy density of the universe remained reasonably constant throughout most of its evolution. The entropy density is
s = S/V = (p + ρ)/T = (2π²/45) g⁎(T) T³ (in units where kB = ħ = c = 1),
with p and ρ as the pressure and density from the energy density tensor Tμν, and g⁎ as the effective number of degrees of freedom for "massless" particles at temperature T (in so far as mc² ≪ kBT holds),
g⁎(T) = Σbosons gi (Ti/T)³ + (7/8) Σfermions gj (Tj/T)³,
for bosons and fermions with gi and gj degrees of freedom at temperatures Ti and Tj respectively. At the present epoch, s = 7.04 nγ.
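Both numerical claims here (411 photons per cubic centimeter and s = 7.04 nγ) can be verified directly from the formulas above. A short check in Python (constant values are standard figures; the g⁎ sum today counts photons plus the three neutrino species, with (Tν/T)³ = 4/11):

```python
import math

kB = 8.617333e-5     # Boltzmann constant [eV/K]
hbar_c = 1.97327e-5  # hbar * c [eV*cm]
zeta3 = 1.2020569    # Apery's constant, zeta(3)

T = 2.725  # present CBR temperature [K]

# Photon number density: n_gamma = (2 zeta(3) / pi^2) * (kB T / hbar c)^3
x = kB * T / hbar_c                      # inverse thermal length [1/cm]
n_gamma = 2 * zeta3 / math.pi**2 * x**3
print(f"n_gamma ~ {n_gamma:.0f} photons/cm^3")   # ~411

# s / n_gamma = pi^4 g* / (45 zeta(3)); today g* = 2 (photons)
# plus (7/8) * 6 * (4/11) from the three neutrino species
g_star = 2 + (7 / 8) * 6 * (4 / 11)
print(f"s / n_gamma ~ {math.pi**4 * g_star / (45 * zeta3):.2f}")  # ~7.04
```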
Other models
B-meson decay
Another possible explanation for the cause of baryogenesis is B-mesogenesis. This mechanism suggests that in the early universe, particles such as the B meson decay into a visible Standard Model baryon as well as a dark antibaryon that is invisible to current observation techniques.
Asymmetric Dark Matter
The asymmetric dark matter proposal investigates mechanisms that would explain the abundance of dark matter but lack of dark antimatter as the consequence of the same effect as would explain baryogenesis.
Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However, this is only somewhat correct because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles, and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons),
and many composites and atoms, are effectively forced to keep a
distance from other particles under everyday conditions; this creates
the property of matter which appears to us as matter taking up space.
For much of the history of the natural sciences,
people have contemplated the exact nature of matter. The idea that
matter was built of discrete building blocks, the so-called particulate theory of matter, appeared in both ancient Greece and ancient India. Early philosophers who proposed the particulate theory of matter include the Indian philosopher Kaṇāda (c. 6th century BCE), and the pre-Socratic Greek philosophers Leucippus (c. 490 BCE) and Democritus (c. 470–380 BCE).
Related concepts
Comparison with mass
Matter is a general term describing any physical substance,
which is sometimes defined in incompatible ways in different fields of
science. Some definitions are based on historical usage from a time when
there was no reason to distinguish mass from simply a quantity of matter. By contrast, mass is not a substance but a well-defined, extensive property of matter and other substances or systems. Various types of mass are defined within physics – including rest mass, inertial mass, and relativistic mass.
In physics, matter is sometimes equated with particles that
exhibit rest mass (i.e., that cannot travel at the speed of light), such
as quarks and leptons. However, in both physics and chemistry, matter
exhibits both wave-like and particle-like properties (the so-called wave–particle duality).
Relation with chemical substance
Steam and liquid water are two different forms of the same pure chemical substance, water.
A chemical substance is a unique form of matter with constant chemical composition and characteristic properties. Chemical substances may take the form of a single element or chemical compounds. If two or more chemical substances can be combined without reacting, they may form a chemical mixture. If a mixture is separated to isolate one chemical substance to a desired degree, the resulting substance is said to be chemically pure.
Chemical substances can exist in several different physical states or phases (e.g. solids, liquids, gases, or plasma) without changing their chemical composition. Substances transition between these phases of matter in response to changes in temperature or pressure. Some chemical substances can be combined or converted into new substances by means of chemical reactions. Chemicals that do not possess this ability are said to be inert.
Pure water is an example of a chemical substance, with a constant composition of two hydrogen atoms bonded to a single oxygen atom (i.e. H2O). The atomic ratio of hydrogen to oxygen is always 2:1 in every molecule of water. Pure water will tend to boil
near 100 °C (212 °F), an example of one of the characteristic
properties that define it. Other notable chemical substances include diamond (a form of the element carbon), table salt (NaCl; an ionic compound), and refined sugar (C12H22O11; an organic compound).
Definition
Based on atoms
A definition of "matter" based on its physical and chemical structure is: matter is made up of atoms. Such atomic matter is also sometimes termed ordinary matter. As an example, deoxyribonucleic acidmolecules
(DNA) are matter under this definition because they are made of atoms.
This definition can be extended to include charged atoms and molecules,
so as to include plasmas (gases of ions) and electrolytes (ionic solutions), which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition.
Based on protons, neutrons and electrons
A definition of "matter" more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules are made of, meaning anything made of positively charged protons, neutral neutrons, and negatively charged electrons. This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that are not simply atoms or molecules, for example, electron beams in an old cathode ray tube television, or white dwarf
matter—typically, carbon and oxygen nuclei in a sea of degenerate
electrons. At a microscopic level, the constituent "particles" of matter
such as protons, neutrons, and electrons obey the laws of quantum
mechanics and exhibit wave-particle duality. At an even deeper level,
protons and neutrons are made up of quarks and the force fields (gluons) that bind them together, leading to the next definition.
Based on quarks and leptons
Under the "quarks and leptons" definition, the elementary and composite particles made of the quarks (in purple) and leptons
(in green) would be matter—while the gauge bosons (in red) would not be
matter. However, the interaction energy inherent to composite particles (for example, the gluons involved in neutrons and protons) contributes to the mass of ordinary matter.
As seen in the above discussion, many early definitions of what can
be called "ordinary matter" were based on its structure or "building
blocks". On the scale of elementary particles, a definition that
follows this tradition can be stated as:
"ordinary matter is everything that is composed of quarks and leptons", or "ordinary matter is everything that is composed of any elementary fermions except antiquarks and antileptons".The connection between these formulations follows.
Leptons (the most famous being the electron), and quarks (of which baryons, such as protons and neutrons, are made) combine to form atoms, which in turn form molecules.
Because atoms and molecules are said to be matter, it is natural to
phrase the definition as: "ordinary matter is anything that is made of
the same things that atoms and molecules are made of". (However, notice
that one also can make from these building blocks matter that is not
atoms or molecules.) Then, because electrons are leptons, and protons
and neutrons are made of quarks, this definition in turn leads to the
definition of matter as being "quarks and leptons", which are two of the
four types of elementary fermions (the other two being antiquarks and
antileptons, which can be considered antimatter as described later).
Carithers and Grannis state: "Ordinary matter is composed entirely of first-generation particles, namely the [up] and [down] quarks, plus the electron and its neutrino." (Higher-generation particles quickly decay into first-generation particles, and thus are not commonly encountered.)
This definition of ordinary matter is more subtle than it first
appears. All the particles that make up ordinary matter (leptons and
quarks) are elementary fermions, while all the force carriers are elementary bosons. The W and Z bosons that mediate the weak force are not made of quarks or leptons, and so are not ordinary matter, even if they have mass. In other words, mass is not something that is exclusive to ordinary matter.
The quark–lepton definition of ordinary matter, however,
identifies not only the elementary building blocks of matter, but also
includes composites made from the constituents (atoms and molecules, for
example). Such composites contain an interaction energy that holds the
constituents together, and may constitute the bulk of the mass of the
composite. As an example, to a great extent, the mass of an atom is
simply the sum of the masses of its constituent protons, neutrons and
electrons. However, digging deeper, the protons and neutrons are made up
of quarks bound together by gluon fields (see dynamics of quantum chromodynamics) and these gluon fields contribute significantly to the mass of hadrons. In other words, most of what composes the "mass" of ordinary matter is due to the binding energy of quarks within protons and neutrons. For example, the sum of the mass of the three quarks in a nucleon is approximately 12.5 MeV/c², which is low compared to the mass of a nucleon (approximately 938 MeV/c²). The bottom line is that most of the mass of everyday objects comes from the interaction energy of their elementary components.
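The quoted masses make the point with one line of arithmetic:

```python
# Fraction of the nucleon mass attributable to interaction energy,
# using the figures quoted in the text.
m_quarks = 12.5    # summed valence-quark mass [MeV/c^2]
m_nucleon = 938.0  # nucleon mass [MeV/c^2]

fraction = 1 - m_quarks / m_nucleon
print(f"~{fraction:.1%} of the nucleon mass is interaction energy")
```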
The Standard Model groups matter particles into three
generations, where each generation consists of two quarks and two
leptons. The first generation is the up and down quarks, the electron and the electron neutrino; the second includes the charm and strange quarks, the muon and the muon neutrino; the third generation consists of the top and bottom quarks and the tau and tau neutrino. The most natural explanation for this would be that quarks and leptons of higher generations are excited states of the first generations. If this turns out to be the case, it would imply that quarks and leptons are composite particles, rather than elementary particles.
This quark–lepton definition of matter also leads to what can be
described as "conservation of (net) matter" laws—discussed later below.
Alternatively, one could return to the mass–volume–space concept of
matter, leading to the next definition, in which antimatter becomes
included as a subclass of matter.
Based on elementary fermions (mass, volume, and space)
A common or traditional definition of matter is "anything that has mass and volume (occupies space)". For example, a car would be said to be made of matter, as it has mass and volume (occupies space).
The observation that matter occupies space goes back to
antiquity. However, an explanation for why matter occupies space is
recent, and is argued to be a result of the phenomenon described in the Pauli exclusion principle, which applies to fermions.
Two particular examples where the exclusion principle clearly relates
matter to the occupation of space are white dwarf stars and neutron
stars, discussed further below.
Thus, matter can be defined as everything composed of elementary
fermions. Although we do not encounter them in everyday life, antiquarks
(such as the antiproton) and antileptons (such as the positron) are the antiparticles
of the quark and the lepton, are elementary fermions as well, and have
essentially the same properties as quarks and leptons, including the
applicability of the Pauli exclusion principle which can be said to
prevent two particles from being in the same place at the same time (in
the same state), i.e. makes each particle "take up space". This
particular definition leads to matter being defined to include anything
made of these antimatter particles as well as the ordinary quark and lepton, and thus also anything made of mesons, which are unstable particles made up of a quark and an antiquark.
In general relativity and cosmology
In the context of relativity,
mass is not an additive quantity, in the sense that one cannot add the
rest masses of particles in a system to get the total rest mass of the
system. In relativity, usually a more general view is that it is not the sum of rest masses, but the energy–momentum tensor
that quantifies the amount of matter. This tensor gives the rest mass
for the entire system. Matter, therefore, is sometimes considered as
anything that contributes to the energy–momentum of a system, that is,
anything that is not purely gravity. This view is commonly held in fields that deal with general relativity such as cosmology. In this view, light and other massless particles and fields are all part of matter.
Structure
In particle physics, fermions are particles that obey Fermi–Dirac statistics. Fermions can be elementary, like the electron—or composite, like the proton and neutron. In the Standard Model, there are two types of elementary fermions: quarks and leptons, which are discussed next.
Quark structure of a proton: 2 up quarks and 1 down quark.
Baryons
are strongly interacting fermions, and so are subject to Fermi–Dirac
statistics. Amongst the baryons are the protons and neutrons, which
occur in atomic nuclei, but many other unstable baryons exist as well.
The term baryon usually refers to triquarks—particles made of three
quarks. Also, "exotic" baryons made of four quarks and one antiquark are
known as pentaquarks, but their existence is not generally accepted.
Baryonic matter is the part of the universe that is made of
baryons (including all atoms). This part of the universe does not
include dark energy, dark matter, black holes or various forms of degenerate matter, such as those that compose white dwarf stars and neutron stars. Microwave light seen by Wilkinson Microwave Anisotropy Probe (WMAP) suggests that only about 4.6% of that part of the universe within range of the best telescopes
(that is, matter that may be visible because light could reach us from
it) is made of baryonic matter. About 26.8% is dark matter, and about
68.3% is dark energy.
The great majority of ordinary matter in the universe is unseen,
since visible stars and gas inside galaxies and clusters account for
less than 10 per cent of the ordinary matter contribution to the
mass–energy density of the universe.
Hadronic
Hadronic matter can refer to 'ordinary' baryonic matter, made from hadrons (baryons and mesons), or quark matter (a generalisation of atomic nuclei), i.e. the 'low' temperature QCD matter. It includes degenerate matter and the result of high energy heavy nuclei collisions.
In physics, degenerate matter refers to the ground state of a gas of fermions at a temperature near absolute zero. The Pauli exclusion principle
requires that only two fermions can occupy a quantum state, one spin-up
and the other spin-down. Hence, at zero temperature, the fermions fill
up sufficient levels to accommodate all the available fermions—and in
the case of many fermions, the maximum kinetic energy (called the Fermi energy)
and the pressure of the gas becomes very large, and depends on the
number of fermions rather than the temperature, unlike normal states of
matter.
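The scale of this effect follows from the non-relativistic Fermi energy, E_F = (ħ²/2mₑ)(3π²n)^(2/3), which depends only on the number density n. A minimal sketch with an assumed, roughly white-dwarf-scale electron density:

```python
# Fermi energy of a cold, dense electron gas (non-relativistic estimate).
import math

hbar = 1.0546e-34  # reduced Planck constant [J*s]
m_e = 9.109e-31    # electron mass [kg]
eV = 1.602e-19     # joules per electronvolt
n = 1e35           # electron number density [1/m^3] (assumed)

E_F = hbar**2 / (2 * m_e) * (3 * math.pi**2 * n) ** (2 / 3)
print(f"Fermi energy ~ {E_F / eV / 1e3:.0f} keV")  # tens of keV, set by n alone
```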
Degenerate matter is thought to occur during the evolution of heavy stars. The demonstration by Subrahmanyan Chandrasekhar that white dwarf stars have a maximum allowed mass because of the exclusion principle caused a revolution in the theory of star evolution.
Degenerate matter includes the part of the universe that is made up of neutron stars and white dwarfs.
Strange matter is a particular form of quark matter, usually thought of as a liquid of up, down, and strange quarks. It is contrasted with nuclear matter, which is a liquid of neutrons and protons
(which themselves are built out of up and down quarks), and with
non-strange quark matter, which is a quark liquid that contains only up
and down quarks. At high enough density, strange matter is expected to
be color superconducting. Strange matter is hypothesized to occur in the core of neutron stars, or, more speculatively, as isolated droplets that may vary in size from femtometers (strangelets) to kilometers (quark stars).
The broader meaning is just quark matter that contains three
flavors of quarks: up, down, and strange. In this definition, there is a
critical pressure and an associated critical density, and when nuclear
matter (made of protons and neutrons)
is compressed beyond this density, the protons and neutrons dissociate
into quarks, yielding quark matter (probably strange matter).
The narrower meaning is quark matter that is more stable than nuclear matter. The idea that this could happen is the "strange matter hypothesis" of Bodmer and Witten. In this definition, the critical pressure is zero: the true ground state of matter is always quark matter. The nuclei that we see in the matter around us, which are droplets of nuclear matter, are actually metastable, and given enough time (or the right external stimulus) would decay into droplets of strange matter, i.e. strangelets.
Leptons are particles of spin-1⁄2, meaning that they are fermions. They carry an electric charge of −1 e (charged leptons) or 0 e (neutrinos). Unlike quarks, leptons do not carry colour charge, meaning that they do not experience the strong interaction. Leptons can also undergo radioactive decay, meaning that they are subject to the weak interaction. Leptons are massive particles, and are therefore subject to gravity.
Phase diagram for a typical substance at a fixed volume
In bulk, matter can exist in several different forms, or states of aggregation, known as phases, depending on ambient pressure, temperature and volume. A phase is a form of matter that has a relatively uniform chemical composition and physical properties (such as density, specific heat, refractive index, and so forth). These phases include the three familiar ones (solids, liquids, and gases), as well as more exotic states of matter (such as plasmas, superfluids, supersolids, Bose–Einstein condensates, ...). A fluid may be a liquid, gas or plasma. There are also paramagnetic and ferromagnetic phases of magnetic materials. As conditions change, matter may change from one phase into another. These phenomena are called phase transitions and are studied in the field of thermodynamics.
In nanomaterials, the vastly increased ratio of surface area to volume
results in matter that can exhibit properties entirely different from
those of bulk material, and not well described by any bulk phase (see nanomaterials for more details).
Phases are sometimes called states of matter, but this term can lead to confusion with thermodynamic states. For example, two gases maintained at different pressures are in different thermodynamic states (different pressures), but in the same phase (both are gases).
Antimatter is matter that is composed of the antiparticles of those that constitute ordinary matter. If a particle and its antiparticle come into contact with each other, the two annihilate; that is, they may both be converted into other particles with equal energy in accordance with Albert Einstein's equation E = mc². These new particles may be high-energy photons (gamma rays)
or other particle–antiparticle pairs. The resulting particles are
endowed with an amount of kinetic energy equal to the difference between
the rest mass
of the products of the annihilation and the rest mass of the original
particle–antiparticle pair, which is often quite large. Depending on
which definition of "matter" is adopted, antimatter can be said to be a
particular subclass of matter, or the opposite of matter.
Antimatter is not found naturally on Earth, except very briefly and in vanishingly small quantities (as the result of radioactive decay, lightning or cosmic rays).
This is because antimatter that came to exist on Earth outside the
confines of a suitable physics laboratory would almost instantly meet
the ordinary matter that Earth is made of, and be annihilated.
Antiparticles and some stable antimatter (such as antihydrogen) can be made in tiny amounts, but not in enough quantity to do more than test a few of its theoretical properties.
There is considerable speculation both in science and science fiction
as to why the observable universe is apparently almost entirely matter
(in the sense of quarks and leptons but not antiquarks or antileptons),
and whether other places are almost entirely antimatter (antiquarks and
antileptons) instead. In the early universe, it is thought that matter
and antimatter were equally represented, and the disappearance of
antimatter requires an asymmetry in physical laws called CP (charge–parity) symmetry violation, which can be obtained from the Standard Model, but at this time the apparent asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. Possible processes by which it came about are explored in more detail under baryogenesis.
Formally, antimatter particles can be defined by their negative baryon number or lepton number, while "normal" (non-antimatter) matter particles have positive baryon or lepton number.[51] These two classes of particles are the antiparticle partners of one another.
In October 2017, scientists reported further evidence that matter and antimatter, equally produced at the Big Bang, are identical in their measured properties, and so should have completely annihilated each other, implying that the universe should not exist.
This implies that there must be something, as yet unknown to
scientists, that either stopped the complete mutual destruction of
matter and antimatter in the early forming universe, or that gave rise
to an imbalance between the two forms.
Conservation
Two quantities that can define an amount of matter in the quark–lepton sense (and antimatter in an antiquark–antilepton sense), baryon number and lepton number, are conserved in the Standard Model. A baryon
such as the proton or neutron has a baryon number of one, and a quark,
because there are three in a baryon, is given a baryon number of 1/3.
So the net amount of matter, as measured by the number of quarks (minus
the number of antiquarks, which each have a baryon number of −1/3),
which is proportional to baryon number, and number of leptons (minus
antileptons), which is called the lepton number, is practically
impossible to change in any process. Even in a nuclear bomb, none of
the baryons (protons and neutrons of which the atomic nuclei are
composed) are destroyed—there are as many baryons after as before the
reaction, so none of these matter particles are actually destroyed and
none are even converted to non-matter particles (like photons of light
or radiation). Instead, nuclear (and perhaps chromodynamic) binding energy is released, as these baryons become bound into mid-size nuclei having less energy (and, equivalently, less mass) per nucleon compared to the original small (hydrogen) and large (plutonium etc.) nuclei. Even in electron–positron annihilation,
there is no net matter being destroyed, because there was zero net
matter (zero total lepton number and baryon number) to begin with before
the annihilation—one lepton minus one antilepton equals zero net lepton
number—and this net amount of matter does not change as it simply remains
zero after the annihilation.
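The bookkeeping in this paragraph is easy to make explicit. A toy sketch (the particle list and quantum-number assignments are chosen for illustration):

```python
# Baryon number B and lepton number L for a few particles, and a check
# that electron-positron annihilation conserves both (zero net matter).
PARTICLES = {
    "proton":   (1, 0),    "neutron":   (1, 0),
    "quark":    (1/3, 0),  "antiquark": (-1/3, 0),
    "electron": (0, 1),    "positron":  (0, -1),
    "photon":   (0, 0),
}

def totals(names):
    B = sum(PARTICLES[p][0] for p in names)
    L = sum(PARTICLES[p][1] for p in names)
    return B, L

before = ["electron", "positron"]
after = ["photon", "photon"]
print("before (B, L):", totals(before), "after (B, L):", totals(after))
```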
In short, matter, as defined in physics, refers to baryons and
leptons. The amount of matter is defined in terms of baryon and lepton
number. Baryons and leptons can be created, but their creation is
accompanied by antibaryons or antileptons; and they can be destroyed by
annihilating them with antibaryons or antileptons. Since
antibaryons/antileptons have negative baryon/lepton numbers, the overall
baryon/lepton numbers are not changed, so matter is conserved. However,
baryons/leptons and antibaryons/antileptons all have positive mass, so
the total amount of mass is not conserved.
Further, outside of natural or artificial nuclear reactions, there is
almost no antimatter generally available in the universe (see baryon asymmetry and leptogenesis), so particle annihilation is rare in normal circumstances.
Dark
Pie chart showing the fractions of energy in the universe contributed by different sources. Ordinary matter is divided into luminous matter (the stars and luminous gases and 0.005% radiation) and nonluminous matter
(intergalactic gas and about 0.1% neutrinos and 0.04% supermassive
black holes). Ordinary matter is uncommon. Modeled after Ostriker and
Steinhardt. For more information, see NASA.
Dark energy (73%)
Dark matter (23%)
Non-luminous matter (3.6%)
Luminous matter (0.4%)
Ordinary matter, in the quarks and leptons definition, constitutes about 4% of the energy of the observable universe. The remaining energy is theorized to be due to exotic forms, of which 23% is dark matter and 73% is dark energy.
Galaxy rotation curve
for the Milky Way. Vertical axis is speed of rotation about the
galactic center. Horizontal axis is distance from the galactic center.
The sun is marked with a yellow ball. The observed curve of speed of
rotation is blue. The predicted curve based upon stellar mass and gas in
the Milky Way is red. The difference is due to dark matter or perhaps a modification of the law of gravity. Scatter in observations is indicated roughly by gray bars.
In astrophysics and cosmology, dark matter
is matter of unknown composition that does not emit or reflect enough
electromagnetic radiation to be observed directly, but whose presence
can be inferred from gravitational effects on visible matter. Observational evidence of the early universe and the Big Bang
theory require that this matter have energy and mass, but not be
composed of ordinary baryons (protons and neutrons). The commonly
accepted view is that most of the dark matter is non-baryonic in nature. As such, it is composed of particles as yet unobserved in the laboratory. Perhaps they are supersymmetric particles, which are not Standard Model particles but relics formed at very high energies in the early phase of the universe and still floating about.
In cosmology, dark energy is the name given to the source of the repelling influence that is accelerating the rate of expansion of the universe.
Its precise nature is currently a mystery, although its effects can
reasonably be modeled by assigning matter-like properties such as energy
density and pressure to the vacuum itself.
Fully 70% of the matter density in
the universe appears to be in the form of dark energy. Twenty-six
percent is dark matter. Only 4% is ordinary matter. So less than 1 part
in 20 is made out of matter we have observed experimentally or described
in the standard model of particle physics. Of the other 96%, apart from the properties just mentioned, we know absolutely nothing.
— Lee Smolin (2007), The Trouble with Physics, p. 16
Exotic matter is a concept of particle physics,
which may include dark matter and dark energy but goes further to
include any hypothetical material that violates one or more of the
properties of known forms of matter. Some such materials might possess
hypothetical properties like negative mass.
In ancient India, the Buddhist, Hindu, and Jain philosophical traditions each posited that matter was made of atoms (paramanu, pudgala)
that were "eternal, indestructible, without parts, and innumerable" and
which associated or dissociated to form more complex matter according
to the laws of nature.
They coupled their ideas of soul, or lack thereof, into their theory of
matter. The strongest developers and defenders of this theory were the Nyaya-Vaisheshika school, with the ideas of the Indian philosopher Kanada being the most followed. Buddhist philosophers also developed these ideas in the late 1st millennium CE, ideas that were similar to the Vaisheshika school, but ones that
did not include any soul or conscience. Jain philosophers included the soul (jiva), adding qualities such as taste, smell, touch, and color to each atom.
They extended the ideas found in early literature of the Hindus and
Buddhists by adding that atoms are either humid or dry, and this quality
cements matter. They also proposed the possibility that atoms combine
because of the attraction of opposites, and the soul attaches to these
atoms, transforms with karma residue, and transmigrates with each rebirth.
In ancient Greece, pre-Socratic philosophers speculated about the underlying nature of the visible world. Thales (c. 624 BCE–c. 546 BCE) regarded water as the fundamental material of the world. Anaximander (c. 610 BCE–c. 546 BCE) posited that the basic material was wholly characterless or limitless: the Infinite (apeiron). Anaximenes (flourished 585 BCE, d. 528 BCE) posited that the basic stuff was pneuma or air. Heraclitus (c. 535 BCE–c. 475 BCE) seems to say the basic element is fire, though perhaps he means that all is change. Empedocles (c. 490–430 BCE) spoke of four elements of which everything was made: earth, water, air, and fire. Meanwhile, Parmenides argued that change does not exist, and Democritus argued that everything is composed of minuscule, inert bodies of all shapes called atoms, a philosophy called atomism. All of these notions had deep philosophical problems.
Aristotle
(384 BCE–322 BCE) was the first to put the conception on a sound
philosophical basis, which he did in his natural philosophy, especially
in Physics book I. He adopted as reasonable suppositions the four Empedoclean elements, but added a fifth, aether.
Nevertheless, these elements are not basic in Aristotle's mind. Rather
they, like everything else in the visible world, are composed of the
basic principles matter and form.
For my definition of matter is just
this—the primary substratum of each thing, from which it comes to be
without qualification, and which persists in the result.
— Aristotle, Physics I:9:192a32
The word Aristotle uses for matter, ὕλη (hyle or hule), can be literally translated as wood or timber, that is, "raw material" for building.
Indeed, Aristotle's conception of matter is intrinsically linked to
something being made or composed. In other words, in contrast to the
early modern conception of matter as simply occupying space, matter for
Aristotle is definitionally linked to process or change: matter is what
underlies a change of substance. For example, a horse eats grass: the
horse changes the grass into itself; the grass as such does not persist
in the horse, but some aspect of it—its matter—does. The matter is not
specifically described (e.g., as atoms),
but consists of whatever persists in the change of substance from grass
to horse. Matter in this understanding does not exist independently
(i.e., as a substance),
but exists interdependently (i.e., as a "principle") with form and only
insofar as it underlies change. It can be helpful to conceive of the
relationship of matter and form as very similar to that between parts
and whole. For Aristotle, matter as such can only receive
actuality from form; it has no activity or actuality in itself, similar
to the way that parts as such only have their existence in a whole (otherwise they would be independent wholes).
French philosopher René Descartes
(1596–1650) originated the modern conception of matter. He was
primarily a geometer. Unlike Aristotle, who deduced the existence of
matter from the physical reality of change, Descartes arbitrarily
postulated matter to be an abstract, mathematical substance that
occupies space:
So, extension in length, breadth,
and depth, constitutes the nature of bodily substance; and thought
constitutes the nature of thinking substance. And everything else
attributable to body presupposes extension, and is only a mode of an
extended thing.
— René Descartes, Principles of Philosophy
For Descartes, matter has only the property of extension, so its only activity aside from locomotion is to exclude other bodies: this is the mechanical philosophy.
Descartes makes an absolute distinction between mind, which he defines
as unextended, thinking substance, and matter, which he defines as
unthinking, extended substance. They are independent things. In contrast, Aristotle defines matter and the formal/forming principle as complementary principles that together compose one independent thing (substance). In short, Aristotle defines matter (roughly speaking) as what things are actually made of (with a potential independent existence), but Descartes elevates matter to an actual independent thing in itself.
The continuity and difference between Descartes's and Aristotle's
conceptions is noteworthy. In both conceptions, matter is passive or
inert. In the respective conceptions matter has different relationships
to intelligence. For Aristotle, matter and intelligence (form) exist
together in an interdependent relationship, whereas for Descartes,
matter and intelligence (mind) are definitionally opposed, independent substances.
Descartes's stated justification for restricting the inherent qualities
of matter to extension was its permanence, but his real criterion was not
permanence (which applies equally to color and resistance), but his
desire to use geometry to explain all material properties.
Like Descartes, Hobbes, Boyle, and Locke argued that the inherent
properties of bodies were limited to extension, and that so-called
secondary qualities, like color, were only products of human perception.
English philosopher Isaac Newton
(1643–1727) inherited Descartes's mechanical conception of matter. In
the third of his "Rules of Reasoning in Philosophy", Newton lists the
universal qualities of matter as "extension, hardness, impenetrability,
mobility, and inertia". Similarly in Optics
he conjectures that God created matter as "solid, massy, hard,
impenetrable, movable particles", which were "...even so very hard as
never to wear or break in pieces".
The "primary" properties of matter were amenable to mathematical
description, unlike "secondary" qualities such as color or taste. Like
Descartes, Newton rejected the essential nature of secondary qualities.
Newton developed Descartes's notion of matter by restoring to
it intrinsic properties in addition to extension (at least on a
limited basis), such as mass. Newton's use of gravitational force, which
worked "at a distance", effectively repudiated Descartes's mechanics,
in which interactions happened exclusively by contact.
Though Newton's gravity would seem to be a power of bodies, Newton himself did not admit it to be an essential property of matter. Carrying the logic forward more consistently, Joseph Priestley (1733–1804) argued that corporeal properties transcend contact mechanics: chemical properties require the capacity for attraction. He argued matter has other inherent powers besides the so-called primary qualities of Descartes, et al.
19th and 20th centuries
Since Priestley's time, there has been a massive expansion in
knowledge of the constituents of the material world (viz., molecules,
atoms, subatomic particles). In the 19th century, following the
development of the periodic table, and of atomic theory, atoms were seen as being the fundamental constituents of matter; atoms formed molecules and compounds.
The common definition in terms of occupying space and having mass
is in contrast with most physical and chemical definitions of matter,
which rely instead upon its structure and upon attributes not
necessarily related to volume and mass. At the turn of the twentieth
century, the knowledge of matter began a rapid evolution.
Aspects of the Newtonian view still held sway. James Clerk Maxwell discussed matter in his work Matter and Motion. He carefully separates "matter" from space and time, and defines it in terms of the object referred to in Newton's first law of motion.
However, the Newtonian picture was not the whole story. In the 19th
century, the term "matter" was actively discussed by a host of
scientists and philosophers, and a brief outline can be found in Levere. A textbook discussion from 1870 suggests matter is what is made up of atoms:
Three divisions of matter are recognized in science: masses, molecules and atoms. A Mass of matter is any portion of matter appreciable by the senses. A Molecule is the smallest particle of matter into which a body can be divided without losing its identity. An Atom is a still smaller particle produced by division of a molecule.
Rather than simply having the attributes of mass and occupying space,
matter was held to have chemical and electrical properties. In 1909 the
famous physicist J. J. Thomson
(1856–1940) wrote about the "constitution of matter" and was concerned
with the possible connection between matter and electrical charge.
In the late 19th century, with the discovery of the electron, and in the early 20th century, with the discovery of the atomic nucleus in the Geiger–Marsden experiment and the birth of particle physics, matter was seen as made up of electrons, protons and neutrons
interacting to form atoms. There then developed an entire literature
concerning the "structure of matter", ranging from the "electrical
structure" in the early 20th century,
to the more recent "quark structure of matter", introduced as early as
1992 by Jacob with the remark: "Understanding the quark structure of
matter has been one of the most important advances in contemporary
physics." In this connection, physicists speak of matter fields, and speak of particles as "quantum excitations of a mode of the matter field".
And here is a quote from de Sabbata and Gasperini: "With the word
'matter' we denote, in this context, the sources of the interactions,
that is spinor fields (like quarks and leptons), which are believed to be the fundamental components of matter, or scalar fields, like the Higgs particles, which are used to introduce mass in a gauge theory (and that, however, could be composed of more fundamental fermion fields)."
Protons and neutrons, however, are not indivisible: they can be divided into quarks. Electrons, meanwhile, are part of a particle family called leptons. Both quarks and leptons are elementary particles, and were regarded in a 2004 undergraduate text as the fundamental constituents of matter.
These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model
of particle physics is currently the best available description of
three of these forces, but despite decades of effort, gravity cannot yet be accounted
for at the quantum level; it is described only by classical physics (see Quantum gravity and Graviton), to the frustration of theoreticians like Stephen Hawking. Interactions between quarks and leptons result from the exchange of force-carrying particles, such as photons.
The force-carrying particles are not themselves building blocks. As one
consequence, mass and energy (which to our present knowledge cannot be
created or destroyed) cannot always be related to matter (which can be
created out of non-matter particles such as photons, or even out of pure
energy, such as kinetic energy). Force mediators are usually not considered matter: the mediators of the electric force (photons) possess energy (see Planck relation) and the mediators of the weak force (W and Z bosons) have mass, yet neither is considered matter. While these quanta are not themselves matter, they do contribute to the total mass of atoms, subatomic particles, and all systems that contain them.
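As a brief worked illustration of the Planck relation just mentioned (a standard textbook calculation; the 500 nm wavelength is chosen arbitrarily for the example), the energy carried by a single photon is

E = hν = hc/λ = (6.626 × 10⁻³⁴ J·s)(2.998 × 10⁸ m/s) / (5.00 × 10⁻⁷ m) ≈ 3.97 × 10⁻¹⁹ J ≈ 2.48 eV,

so a photon carries a definite energy, and thereby contributes to the mass of any system that contains it, without itself counting as matter.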
Summary
The modern conception of matter has been refined many times in history, in light of improvements in knowledge of what the basic building blocks are and how they interact.
The term "matter" is used throughout physics in a wide variety of contexts: for example, one refers to "condensed matter physics", "elementary matter", "partonic" matter, "dark" matter, "anti"-matter, "strange" matter, and "nuclear" matter. In discussions of matter and antimatter, the former has been referred to by Alfvén as koinomatter (Gk. common matter). It is fair to say that in physics,
there is no broad consensus as to a general definition of matter, and
the term "matter" usually is used in conjunction with a specifying
modifier.
The history of the concept of matter is a history of the fundamental length scales
used to define matter. Different building blocks apply depending upon
whether one defines matter on an atomic or elementary particle level.
One may define matter as atoms, as hadrons, or as leptons and quarks, depending upon the scale at which one wishes to define it.