In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator.
The regulator, also known as a "cutoff", models our lack of
knowledge about physics at unobserved scales (e.g. scales of small size
or large energy levels). It compensates for (and requires) the possibility of a separation of scales: "new physics" may be discovered at those scales which the present theory is unable to model, while the current theory can still give accurate predictions as an "effective theory" within its intended scale of use.
It is distinct from renormalization, another technique for controlling infinities that does not assume new physics but instead adjusts for self-interaction feedback.
Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions.
Overview
Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, the minimal distance $\epsilon$ in space, which is useful in case the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, $\epsilon \to 0$), but the virtue of the regulator is that for its finite value the result is finite.
However, the result usually includes terms proportional to expressions like $1/\epsilon$, which are not well defined in the limit $\epsilon \to 0$. Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must usually be followed by a related, but independent technique called renormalization.
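As a minimal worked example (ours, not from the text above): a typical logarithmically divergent one-loop integral becomes finite once a momentum cutoff $\Lambda$ (playing the role of $1/\epsilon$) is imposed,
$$I(\Lambda) = \int^{|k|<\Lambda} \frac{\mathrm{d}^4 k}{(2\pi)^4}\, \frac{1}{(k^2+m^2)^2} = \frac{1}{16\pi^2}\left[\ln\!\left(1+\frac{\Lambda^2}{m^2}\right) - \frac{\Lambda^2}{\Lambda^2+m^2}\right],$$
which is finite for any finite $\Lambda$ but contains the piece $\frac{1}{16\pi^2}\ln(\Lambda^2/m^2)$, a term that, like $1/\epsilon$, is not well defined when the regulator is removed.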
Renormalization is based on the requirement that some physical quantities — expressed by seemingly divergent expressions such as $1/\epsilon$ — are equal to the observed values. Such a constraint allows one to calculate a finite value for many other quantities that looked divergent.
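To see schematically how such a constraint works (our own sketch in cutoff notation, with $\Lambda \sim 1/\epsilon$): the bare mass $m_0$ and the computed correction $\delta m$ each diverge as the cutoff is removed, but their combination is fixed by experiment,
$$m_{\mathrm{phys}} = m_0(\Lambda) + \delta m(\Lambda) = \text{(measured value)},$$
so $m_0(\Lambda)$ is chosen to cancel the divergence of $\delta m(\Lambda)$; other quantities, re-expressed in terms of $m_{\mathrm{phys}}$ instead of $m_0$, then remain finite as $\Lambda \to \infty$.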
The existence of a limit as ε goes to zero and the independence
of the final result from the regulator are nontrivial facts. The
underlying reason for them lies in universality, as shown by Kenneth Wilson and Leo Kadanoff, and the existence of a second-order phase transition. Sometimes, taking the limit as ε goes to zero is not possible. This is the case when we have a Landau pole and for nonrenormalizable couplings like the Fermi interaction. However, even for these two examples, if the regulator only gives reasonable results for $E \ll \Lambda$ (where $\Lambda$ is an upper energy cutoff) and we are working with scales of the order of $1/\Lambda$, regulators with $1/\epsilon \approx \Lambda$
still give pretty accurate approximations. The physical reason why we
can't take the limit of ε going to zero is the existence of new physics
below Λ.
It is not always possible to define a regularization such that
the limit of ε going to zero is independent of the regularization. In
this case, one says that the theory contains an anomaly. Anomalous theories have been studied in great detail and are often founded on the celebrated Atiyah–Singer index theorem or variations thereof (see, for example, the chiral anomaly).
Classical physics example
The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius $r_e$. The mass–energy in the field is
$$m_{\mathrm{em}} = \frac{1}{2}\int E^2 \,\mathrm{d}V = \int_{r_e}^{\infty} \frac{q^2}{8\pi r^4}\, 4\pi r^2 \,\mathrm{d}r = \frac{q^2}{2 r_e},$$
which becomes infinite as $r_e \to 0$. This implies that the point particle would have infinite inertia, making it unable to be accelerated. Incidentally, the value of $r_e$ that makes $m_{\mathrm{em}}$ equal to the electron mass is called the classical electron radius, which (setting $q = e$ and restoring factors of $c$ and $\varepsilon_0$) turns out to be
$$r_e = \frac{e^2}{4\pi\varepsilon_0 m_e c^2} = \alpha\,\frac{\hbar}{m_e c} \approx 2.8 \times 10^{-15}\ \mathrm{m}.$$
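As a quick numerical check of the formula above (a standalone sketch; the constant values are CODATA figures we supply, not part of the text):

```python
import math

# Classical electron radius r_e = e^2 / (4*pi*eps0 * m_e * c^2)
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS_0    = 8.8541878128e-12  # vacuum permittivity, F/m
M_E      = 9.1093837015e-31  # electron mass, kg
C        = 2.99792458e8      # speed of light, m/s

r_e = E_CHARGE**2 / (4 * math.pi * EPS_0 * M_E * C**2)
print(f"classical electron radius: {r_e:.3e} m")  # ~2.818e-15 m
```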
Regularization: Classical physics theory breaks down at small
scales, e.g., the difference between an electron and a point particle
shown above. Addressing this problem requires new kinds of additional
physical constraints. For instance, in this case, assuming a finite
electron radius (i.e., regularizing the electron mass-energy) suffices
to explain the system below a certain size. Similar regularization
arguments work in other renormalization problems. For example, a theory
may hold under one narrow set of conditions, but due to calculations
involving infinities or singularities, it may break down under other
conditions or scales. In the case of the electron, another way to avoid
infinite mass-energy while retaining the point nature of the particle is
to postulate tiny additional dimensions over which the particle could
'spread out' rather than restrict its motion solely over 3D space. This
is precisely the motivation behind string theory and other multi-dimensional models including multiple time dimensions.
Renormalization offers an alternative strategy for resolving infinities in such classical problems: rather than assuming the existence of unknown new physics, it assumes interactions of the particle with other surrounding particles in the environment.
Specific types
Specific types of regularization procedures include dimensional regularization, Pauli–Villars regularization, lattice regularization, zeta-function regularization, causal regularization, and Hadamard regularization.
Conceptual problem
Perturbative predictions by quantum field theory about quantum scattering of elementary particles, implied by a corresponding Lagrangian density, are computed using the Feynman rules, a regularization method to circumvent ultraviolet divergences so as to obtain finite results for Feynman diagrams containing loops, and a renormalization scheme. The regularization method results in regularized n-point Green's functions (propagators), and a suitable limiting procedure (a renormalization scheme) then leads to perturbative S-matrix
elements. These are independent of the particular regularization method
used, and enable one to model perturbatively the measurable physical
processes (cross sections, probability amplitudes, decay widths and
lifetimes of excited states). However, so far no known regularized
n-point Green's functions can be regarded as being based on a physically
realistic theory of quantum-scattering since the derivation of each
disregards some of the basic tenets of conventional physics (e.g., by
not being Lorentz-invariant,
by introducing either unphysical particles with a negative metric or
wrong statistics, or discrete space-time, or lowering the dimensionality
of space-time, or some combination thereof). So the available
regularization methods are understood as formalistic technical devices,
devoid of any direct physical meaning. In addition, there are qualms
about renormalization. For a history and comments on this more than half-a-century old open conceptual problem, see e.g.
Pauli's conjecture
As
it seems that the vertices of non-regularized Feynman series adequately
describe interactions in quantum scattering, it is taken that their
ultraviolet divergences are due to the asymptotic, high-energy behavior
of the Feynman propagators. So it is a prudent, conservative approach to
retain the vertices in Feynman series, and modify only the Feynman
propagators to create a regularized Feynman series. This is the
reasoning behind the formal Pauli–Villars covariant regularization by
modification of Feynman propagators through auxiliary unphysical
particles (cf. the representation of physical reality by Feynman diagrams).
In 1949 Pauli
conjectured there is a realistic regularization, which is implied by a
theory that respects all the established principles of contemporary
physics.
So its propagators (i) do not need to be regularized, and (ii) can be
regarded as such a regularization of the propagators used in quantum
field theories that might reflect the underlying physics. The additional
parameters of such a theory do not need to be removed (i.e. the theory
needs no renormalization) and may provide some new information about the
physics of quantum scattering, though they may turn out experimentally
to be negligible. By contrast, any present regularization method
introduces formal coefficients that must eventually be disposed of by
renormalization.
Opinions
Paul Dirac
was persistently and extremely critical of renormalization procedures. In 1963, he wrote, "… in the renormalization theory we
have a theory that has defied all the attempts of the mathematician to
make it sound. I am inclined to suspect that the renormalization theory
is something that will not survive in the future,…"
He further observed that "One can distinguish between two main
procedures for a theoretical physicist. One of them is to work from the
experimental basis ... The other procedure is to work from the
mathematical basis. One examines and criticizes the existing theory. One
tries to pin-point the faults in it and then tries to remove them. The
difficulty here is to remove the faults without destroying the very
great successes of the existing theory."
Abdus Salam
remarked in 1972, "Field-theoretic infinities first encountered in
Lorentz's computation of electron self-mass have persisted in classical
electrodynamics for seventy and in quantum electrodynamics for some
thirty-five years. These long years of frustration have left in the
subject a curious affection for the infinities and a passionate belief
that they are an inevitable part of nature; so much so that even the
suggestion of a hope that they may after all be circumvented - and
finite values for the renormalization constants computed - is considered
irrational."
However, in Gerard ’t Hooft’s
opinion, "History tells us that if we hit upon some obstacle, even if
it looks like a pure formality or just a technical complication, it
should be carefully scrutinized. Nature might be telling us something,
and we should find out what it is."
The difficulty with a realistic regularization is that so far
there is none, although nothing could be destroyed by its bottom-up
approach; and there is no experimental basis for it.
Minimal realistic regularization
Considering
distinct theoretical problems, Dirac in 1963 suggested: "I believe
separate ideas will be needed to solve these distinct problems and that
they will be solved one at a time through successive stages in the
future evolution of physics. At this point I find myself in disagreement
with most physicists. They are inclined to think one master idea will
be discovered that will solve all these problems together. I think it is
asking too much to hope that anyone will be able to solve all these
problems together. One should separate them one from another as much as
possible and try to tackle them separately. And I believe the future
development of physics will consist of solving them one at a time, and
that after any one of them has been solved there will still be a great
mystery about how to attack further ones."
According to Dirac, "Quantum electrodynamics
is the domain of physics that we know most about, and presumably it
will have to be put in order before we can hope to make any fundamental
progress with other field theories, although these will continue to
develop on the experimental basis."
Dirac’s two preceding remarks suggest that we should start
searching for a realistic regularization in the case of quantum
electrodynamics (QED) in the four-dimensional Minkowski spacetime, starting with the original QED Lagrangian density.
The path-integral formulation provides the most direct way from the Lagrangian density to the corresponding Feynman series in its Lorentz-invariant form.
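For reference, the standard textbook form of that Lagrangian density (supplied here by us; sign conventions for the coupling vary) is
$$\mathcal{L}_{\mathrm{QED}} = \bar\psi\,(i\gamma^\mu \partial_\mu - m)\,\psi \;-\; \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} \;-\; e\,\bar\psi\gamma^\mu\psi\, A_\mu,$$
where the first two terms are the free-field part, which fixes the electron and photon propagators, and the last term is the interaction, which fixes the vertex.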
The free-field part of the Lagrangian density determines the Feynman
propagators, whereas the rest determines the vertices. As the QED
vertices are considered to adequately describe interactions in QED
scattering, it makes sense to modify only the free-field part of the
Lagrangian density so as to obtain such regularized Feynman series that
the Lehmann–Symanzik–Zimmermann
reduction formula provides a perturbative S-matrix that: (i) is
Lorentz-invariant and unitary; (ii) involves only the QED particles;
(iii) depends solely on QED parameters and those introduced by the
modification of the Feynman propagators—for particular values of these
parameters it is equal to the QED perturbative S-matrix; and (iv)
exhibits the same symmetries as the QED perturbative S-matrix. Let us
refer to such a regularization as the minimal realistic regularization, and start searching for the corresponding, modified free-field parts of the QED Lagrangian density.
Transport theoretic approach
According to Bjorken and Drell, it would make physical sense to sidestep ultraviolet divergences by using a more detailed description than can be provided by differential field equations. And Feynman
noted about the use of differential equations: "... for neutron
diffusion it is only an approximation that is good when the distance
over which we are looking is large compared with the mean free path. If
we looked more closely, we would see individual neutrons running
around." And then he wondered, "Could it be that the real world consists
of little X-ons which can be seen only at very tiny distances? And that
in our measurements we are always observing on such a large scale that
we can’t see these little X-ons, and that is why we get the differential
equations? ... Are they [therefore] also correct only as a smoothed-out
imitation of a really much more complicated microscopic world?"
Already in 1938, Heisenberg
proposed that a quantum field theory can provide only an idealized,
large-scale description of quantum dynamics, valid for distances larger
than some fundamental length, expected also by Bjorken and Drell in 1965.
Feynman's preceding remark provides a possible physical reason for its
existence; either that or it is just another way of saying the same
thing (there is a fundamental unit of distance) but having no new
information.
Hints at new physics
The need for regularization terms in any quantum field theory of quantum gravity is a major motivation for physics beyond the standard model. Infinities of the non-gravitational forces in QFT can be controlled via renormalization
only, but additional regularization, and hence new physics, is required
uniquely for gravity. The regularizers model, and work around, the
breakdown of QFT at small scales and thus show clearly the need for some
other theory to come into play beyond QFT at these scales. A. Zee
(Quantum Field Theory in a Nutshell, 2003) considers this to be a
benefit of the regularization framework—theories can work well in their
intended domains but also contain information about their own
limitations and point clearly to where new physics is needed.
The presence of charged particles makes plasma electrically conductive,
with the dynamics of individual particles and macroscopic plasma motion
governed by collective electromagnetic fields and very sensitive to
externally applied fields. The response of plasma to electromagnetic fields is used in many modern devices and technologies, such as plasma televisions or plasma etching.
Depending on temperature and density, a certain number of neutral particles may also be present, in which case plasma is called partially ionized. Neon signs and lightning are examples of partially ionized plasmas.
Unlike the phase transitions
between the other three states of matter, the transition to plasma is
not well defined and is a matter of interpretation and context. Whether a given degree of ionization suffices to call a substance "plasma" depends on the specific phenomenon being considered.
Early history
Plasma was first identified in a laboratory by Sir William Crookes. Crookes presented a lecture on what he called "radiant matter" to the British Association for the Advancement of Science, in Sheffield, on Friday, 22 August 1879.
Systematic studies of plasma began with the research of Irving Langmuir and his colleagues in the 1920s. Langmuir also introduced the term "plasma" as a description of ionized gas in 1928:
Except near the electrodes, where there are sheaths
containing very few electrons, the ionized gas contains ions and
electrons in about equal numbers so that the resultant space charge is
very small. We shall use the name plasma to describe this region containing balanced charges of ions and electrons.
Lewi Tonks
and Harold Mott-Smith, both of whom worked with Langmuir in the 1920s,
recall that Langmuir first used the term by analogy with the blood plasma.
Mott-Smith recalls, in particular, that the transport of electrons from
thermionic filaments reminded Langmuir of "the way blood plasma carries
red and white corpuscles and germs."
Plasma is typically an electrically quasineutral medium of unbound positive and negative particles
(i.e., the overall charge of a plasma is roughly zero). Although these
particles are unbound, they are not "free" in the sense of not
experiencing forces. Moving charged particles generate electric currents, and any movement of a charged plasma particle affects and is affected by the fields created by the other charges. In turn, this governs collective behaviour with many degrees of freedom.
Plasma is distinct from the other states of matter. In
particular, describing a low-density plasma as merely an "ionized gas"
is wrong and misleading, even though it is similar to the gas phase in
that both assume no definite shape or volume. The following table
summarizes some principal differences:
Interactions
Gas: Short-range; two-particle (binary) collisions are the rule.
Plasma: Long-range; collective motion of particles is ubiquitous in plasma, resulting in various waves and other types of collective phenomena.
Electrical conductivity
Gas: Very low; gases are excellent insulators up to electric field strengths of tens of kilovolts per centimetre.
Plasma: Very high; for many purposes, the conductivity of a plasma may be treated as infinite.
Independently acting species
Gas: One; all gas particles behave in a similar way, largely influenced by collisions with one another and by gravity.
Plasma: Two or more; electrons and ions possess different charges and vastly different masses, so that they behave differently in many circumstances, with various types of plasma-specific waves and instabilities emerging as a result.
Ideal plasma
Three factors define an ideal plasma (all three criteria are checked numerically in the sketch after this list):
The plasma approximation: The plasma approximation applies when the plasma parameter Λ, representing the number of charge carriers within the Debye sphere, is much greater than unity.
It can be readily shown that this criterion is equivalent to smallness
of the ratio of the plasma electrostatic and thermal energy densities.
Such plasmas are called weakly coupled.
Bulk interactions: The Debye length
is much smaller than the physical size of the plasma. This criterion
means that interactions in the bulk of the plasma are more important
than those at its edges, where boundary effects may take place. When
this criterion is satisfied, the plasma is quasineutral.
Collisionlessness: The electron plasma frequency (measuring plasma oscillations
of the electrons) is much larger than the electron–neutral collision
frequency. When this condition is valid, electrostatic interactions
dominate over the processes of ordinary gas kinetics. Such plasmas are
called collisionless.
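As indicated above, here is a minimal numerical check of the three criteria (our own sketch; the plasma parameters and the collision frequency are made-up illustrative inputs, and "much greater" is judged by an arbitrary factor of 100):

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS_0    = 8.8541878128e-12  # vacuum permittivity, F/m
M_E      = 9.1093837015e-31  # electron mass, kg
K_B      = 1.380649e-23      # Boltzmann constant, J/K

# Illustrative inputs: a tenuous laboratory plasma
n_e   = 1e16   # electron density, m^-3
T_e   = 1e5    # electron temperature, K (~10 eV)
size  = 0.5    # physical size of the plasma, m
nu_en = 1e6    # assumed electron-neutral collision frequency, s^-1

# Debye length: scale over which charge imbalances are screened
lambda_D = math.sqrt(EPS_0 * K_B * T_e / (n_e * E_CHARGE**2))
# Plasma parameter: number of electrons in a Debye sphere
N_D = n_e * (4.0 / 3.0) * math.pi * lambda_D**3
# Electron plasma frequency (rad/s)
omega_pe = math.sqrt(n_e * E_CHARGE**2 / (EPS_0 * M_E))

print(f"1. plasma approximation: N_D = {N_D:.3g} >> 1?   {N_D > 1e2}")
print(f"2. bulk interactions: lambda_D = {lambda_D:.3g} m << size? {lambda_D < 0.01 * size}")
print(f"3. collisionless: omega_pe = {omega_pe:.3g} >> nu? {omega_pe > 1e2 * nu_en}")
```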
The strength and range of the electric force and the good
conductivity of plasmas usually ensure that the densities of positive
and negative charges in any sizeable region are equal
("quasineutrality"). A plasma with a significant excess of charge
density or, in the extreme case, one composed of a single species, is
called a non-neutral plasma. In such a plasma, electric fields play a dominant role. Examples are charged particle beams, an electron cloud in a Penning trap and positron plasmas.
A dusty plasma
contains tiny charged particles of dust (typically found in space). The
dust particles acquire high charges and interact with each other. A
plasma that contains larger particles is called grain plasma. Under
laboratory conditions, dusty plasmas are also called complex plasmas.
Properties and parameters
Artist's rendition of the Earth's plasma fountain,
showing oxygen, helium, and hydrogen ions that gush into space from
regions near the Earth's poles. The faint yellow area shown above the
north pole represents gas lost from Earth into space; the green area is
the aurora borealis, where plasma energy pours back into the atmosphere.
Density and ionization degree
For plasma to exist, ionization is necessary. The term "plasma density" by itself usually refers to the electron density $n_e$, that is, the number of charge-contributing electrons per unit volume. The degree of ionization is defined as the fraction of neutral particles that are ionized:
$$\alpha = \frac{n_i}{n_i + n_n},$$
where $n_i$ is the ion density and $n_n$ the neutral density (in number of particles per unit volume). In the case of fully ionized matter, $\alpha = 1$. Because of the quasineutrality of plasma, the electron and ion densities are related by $n_e = \langle Z \rangle n_i$, where $\langle Z \rangle$ is the average ion charge (in units of the elementary charge).
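A small worked example of these definitions (our own illustrative numbers):

```python
# Degree of ionization and quasineutral electron density (illustrative values)
n_i = 1.0e18    # ion density, m^-3
n_n = 9.9e19    # neutral density, m^-3
z_avg = 1.0     # average ion charge state <Z>

alpha = n_i / (n_i + n_n)   # degree of ionization
n_e = z_avg * n_i           # quasineutrality: n_e = <Z> * n_i
print(f"ionization degree: {alpha:.2%}, electron density: {n_e:.2e} m^-3")
```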
Temperature
Plasma temperature, commonly measured in kelvin or electronvolts,
is a measure of the thermal kinetic energy per particle. High
temperatures are usually needed to sustain ionization, which is a
defining feature of a plasma. The degree of plasma ionization is
determined by the electron temperature relative to the ionization energy (and more weakly by the density). In thermal equilibrium, the relationship is given by the Saha equation. At low temperatures, ions and electrons tend to recombine into bound states—atoms—and the plasma will eventually become a gas.
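The Saha equation mentioned above can be evaluated numerically. A minimal sketch, assuming a pure hydrogen plasma in thermal equilibrium with $n_e = n_i$ and statistical-weight ratio $g_i/g_n = 1/2$ (the constants and the function name are ours, not a library API):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837e-31     # electron mass, kg
H   = 6.62607015e-34    # Planck constant, J*s
E_ION = 13.6 * 1.602176634e-19  # hydrogen ionization energy, J

def saha_ionization_fraction(T, n_total):
    """Ionization fraction x = n_i / (n_i + n_n) from the Saha equation."""
    # Thermal de Broglie wavelength of the electron
    lam = H / math.sqrt(2 * math.pi * M_E * K_B * T)
    # Saha factor S = n_i * n_e / n_n (units of density), g_i/g_n = 1/2
    S = 2 * (1 / lam**3) * 0.5 * math.exp(-E_ION / (K_B * T))
    # With n_e = n_i = x * n_total: x^2 / (1 - x) = S / n_total
    r = S / n_total
    return (-r + math.sqrt(r * r + 4 * r)) / 2

# Example: a ~1 eV (11,600 K) plasma at total density 1e20 m^-3
print(saha_ionization_fraction(11600.0, 1e20))  # ~0.98: nearly fully ionized
```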
In most cases, the electrons and heavy plasma particles (ions and
neutral atoms) separately have a relatively well-defined temperature;
that is, their energy distribution function is close to a Maxwellian even in the presence of strong electric or magnetic
fields. However, because of the large difference in mass between
electrons and ions, their temperatures may be different, sometimes
significantly so. This is especially common in weakly ionized
technological plasmas, where the ions are often near the ambient temperature while electrons reach thousands of kelvin. The opposite case is the z-pinch plasma where the ion temperature may exceed that of electrons.
Lightning
as an example of plasma present at Earth's surface: Typically,
lightning discharges 30 kiloamperes at up to 100 megavolts, and emits
radio waves, light, X- and even gamma rays. Plasma temperatures can approach 30,000 K and electron densities may exceed $10^{24}\ \mathrm{m}^{-3}$.
Plasma potential
Since plasmas are very good electrical conductors, electric potentials play an important role.
The average potential in the space between charged particles,
independent of how it can be measured, is called the "plasma potential",
or the "space potential". If an electrode is inserted into a plasma,
its potential will generally lie considerably below the plasma potential
due to what is termed a Debye sheath.
The good electrical conductivity of plasmas makes their electric fields
very small. This results in the important concept of "quasineutrality",
which says the density of negative charges is approximately equal to
the density of positive charges over large volumes of the plasma ($n_e \approx \langle Z\rangle n_i$), but on the scale of the Debye length, there can be charge imbalance. In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths.
The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation:
$$n_e \propto e^{e\Phi/k_B T_e}.$$
Differentiating this relation provides a means to calculate the electric field from the density:
$$\vec{E} = -\frac{k_B T_e}{e}\,\frac{\nabla n_e}{n_e}.$$
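A numerical illustration of the differentiated Boltzmann relation (our own sketch, using a made-up Gaussian density profile):

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

T_e = 2.0e4                           # electron temperature, K (assumed)
x = np.linspace(-0.1, 0.1, 401)       # position, m
n_e = 1e17 * np.exp(-(x / 0.03)**2)   # illustrative Gaussian density, m^-3

# E = -(k_B T_e / e) * grad(n_e) / n_e, from the Boltzmann relation
E_field = -(K_B * T_e / E_CHARGE) * np.gradient(n_e, x) / n_e
print(E_field[100])  # electric field, V/m, at x = -0.05 m
```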
It is possible to produce a plasma that is not quasineutral. An
electron beam, for example, has only negative charges. The density of a
non-neutral plasma must generally be very low, or its size must be very small;
otherwise, it will be dissipated by the repulsive electrostatic force.
Magnetization
The existence of charged particles causes the plasma to generate, and be affected by, magnetic fields.
Plasma with a magnetic field strong enough to influence the motion of
the charged particles is said to be magnetized. A common quantitative
criterion is that a particle on average completes at least one gyration
around the magnetic-field line before making a collision, i.e., $\omega_{ce}/\nu_{\mathrm{coll}} > 1$, where $\omega_{ce}$ is the electron gyrofrequency and $\nu_{\mathrm{coll}}$
is the electron collision rate. It is often the case that the electrons
are magnetized while the ions are not. Magnetized plasmas are anisotropic,
meaning that their properties in the direction parallel to the magnetic
field are different from those perpendicular to it. While electric
fields in plasmas are usually small due to the plasma's high conductivity,
the electric field associated with a plasma moving with velocity $\mathbf{v}$ in the magnetic field $\mathbf{B}$ is given by the usual Lorentz formula $\mathbf{E} = -\mathbf{v}\times\mathbf{B}$, and is not affected by Debye shielding.
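A quick check of the magnetization criterion and the motional electric field (our own sketch; the field strength, bulk velocity, and collision rate are illustrative assumptions):

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

B = np.array([0.0, 0.0, 0.01])   # magnetic field, T (10 mT, assumed)
v = np.array([1.0e4, 0.0, 0.0])  # bulk plasma velocity, m/s (assumed)
nu_coll = 1.0e7                  # electron collision rate, s^-1 (assumed)

omega_ce = E_CHARGE * np.linalg.norm(B) / M_E  # electron gyrofrequency, rad/s
print(f"magnetized electrons? {omega_ce > nu_coll}")  # ~1.8e9 rad/s >> 1e7 s^-1

E_motional = -np.cross(v, B)  # E = -v x B, V/m
print(E_motional)             # [0, 100, 0] V/m
```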
Mathematical descriptions
The complex self-constricting magnetic field lines and current paths in a field-aligned Birkeland current that can develop in a plasma.
To completely describe the state of a plasma, all of the particle
locations and velocities that describe the electromagnetic field in the
plasma region would need to be written down. However, it is generally
not practical or necessary to keep track of all the particles in a
plasma. Therefore, plasma physicists commonly use less detailed descriptions, of which there are two main types:
Fluid model
Fluid models describe plasmas in terms of smoothed quantities, like density and averaged velocity around each position (see Plasma parameters). One simple fluid model, magnetohydrodynamics, treats the plasma as a single fluid governed by a combination of Maxwell's equations and the Navier–Stokes equations. A more general description is the two-fluid plasma,
where the ions and electrons are described separately. Fluid models are
often accurate when collisionality is sufficiently high to keep the
plasma velocity distribution close to a Maxwell–Boltzmann distribution.
Because fluid models usually describe the plasma in terms of a single
flow at a certain temperature at each spatial location, they can neither
capture velocity space structures like beams or double layers, nor resolve wave-particle effects.
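For concreteness (standard textbook form, supplied by us), the ideal-MHD single-fluid description combines mass continuity, a momentum equation with the magnetic force, and the induction equation:
$$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0, \qquad \rho\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = \mathbf{J}\times\mathbf{B} - \nabla p, \qquad \frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}),$$
with the current given by Ampère's law, $\mathbf{J} = \nabla\times\mathbf{B}/\mu_0$.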
Kinetic model
Kinetic models describe the particle velocity distribution function
at each point in the plasma and therefore do not need to assume a Maxwell–Boltzmann distribution.
A kinetic description is often necessary for collisionless plasmas.
There are two common approaches to kinetic description of a plasma. One
is based on representing the smoothed distribution function on a grid in
velocity and position. The other, known as the particle-in-cell
(PIC) technique, includes kinetic information by following the
trajectories of a large number of individual particles. Kinetic models
are generally more computationally intensive than fluid models. The Vlasov equation may be used to describe the dynamics of a system of charged particles interacting with an electromagnetic field.
In magnetized plasmas, a gyrokinetic approach can substantially reduce the computational expense of a fully kinetic simulation.
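As a toy illustration of the PIC technique described above (a deliberately minimal 1D electrostatic sketch in normalized units, entirely our own; production codes use higher-order particle weighting, diagnostics, and stability checks):

```python
import numpy as np

# Toy 1D electrostatic particle-in-cell (PIC) loop in normalized units
# (charge/mass/eps0 = 1, uniform neutralizing ion background).
ng, n_part = 64, 20000            # grid cells, particles
L = 2 * np.pi                     # periodic domain length
dx, dt = L / ng, 0.1

rng = np.random.default_rng(0)
x = rng.uniform(0, L, n_part)     # particle positions
v = rng.normal(0.0, 1.0, n_part)  # particle velocities

k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)  # wavenumbers for the field solve

for _ in range(200):
    # 1. Deposit charge: electron density on the grid (nearest grid point)
    idx = (x / dx).astype(int) % ng
    n_e = np.bincount(idx, minlength=ng) * ng / n_part
    rho = 1.0 - n_e                        # ion background (+1) + electrons
    # 2. Field solve: Poisson equation d^2(phi)/dx^2 = -rho, in Fourier space
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:]**2       # skip k=0 (neutral on average)
    E = np.fft.ifft(-1j * k * phi_k).real  # E = -d(phi)/dx
    # 3. Push: interpolate E to particles, advance velocities and positions
    v += -1.0 * E[idx] * dt                # electron charge-to-mass = -1
    x = (x + v * dt) % L
```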
Plasma science and technology
Plasmas are studied by the vast academic field of plasma science or plasma physics, including several sub-disciplines such as space plasma physics.
Plasmas can appear in nature in various forms and locations, with a few examples given in the following table:
Most artificial plasmas are generated by the application of electric
and/or magnetic fields through a gas. Plasma generated in a laboratory
setting and for industrial use can be generally categorized by:
The type of power source used to generate the plasma—DC, AC (typically with radio frequency (RF)) and microwave
The pressure they operate at—vacuum pressure (< 10 mTorr or 1
Pa), moderate pressure (≈1 Torr or 100 Pa), atmospheric pressure
(760 Torr or 100 kPa)
The degree of ionization within the plasma—fully, partially, or weakly ionized
The temperature relationships within the plasma—thermal plasma ($T_e = T_i = T_{\mathrm{gas}}$), non-thermal or "cold" plasma ($T_e \gg T_i \approx T_{\mathrm{gas}}$)
The electrode configuration used to generate the plasma
The magnetization of the particles within the plasma—magnetized (both ion and electrons are trapped in Larmor orbits
by the magnetic field), partially magnetized (the electrons but not the
ions are trapped by the magnetic field), non-magnetized (the magnetic
field is too weak to trap the particles in orbits but may generate Lorentz forces)
Just like the many uses of plasma, there are several means for its
generation. However, one principle is common to all of them: there must
be energy input to produce and sustain it. For this reason, plasma is generated when an electric current is applied across a dielectric gas or fluid (an electrically non-conducting material); a discharge tube provides a simple example (DC is used for simplicity).
The potential difference and subsequent electric field pull the bound electrons (negative) toward the anode (positive electrode) while the cathode (negative electrode) pulls the nucleus. As the voltage increases, the current stresses the material (by electric polarization) beyond its dielectric limit (termed strength) into a stage of electrical breakdown, marked by an electric spark, where the material transforms from being an insulator into a conductor (as it becomes increasingly ionized). The underlying process is the Townsend avalanche,
where collisions between electrons and neutral gas atoms create more
ions and electrons. The
first impact of an electron on an atom results in one ion and two
electrons. Therefore, the number of charged particles increases rapidly
(in the millions) only "after about 20 successive sets of collisions", mainly due to a small mean free path (average distance travelled between collisions).
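The "millions" figure follows from simple doubling (our arithmetic, not the source's): if each set of collisions roughly doubles the free-electron count, then after 20 sets
$$N \approx 2^{20} = 1\,048\,576 \approx 10^6.$$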
Electric arc
Cascade process of ionization. Electrons are "e−", neutral atoms "o", and cations "+".
Avalanche effect between two electrodes. The original ionization event liberates one electron, and each subsequent collision liberates a further electron, so two electrons emerge from each collision: the ionizing electron and the liberated electron.
An electric arc is a continuous electric discharge between two electrodes, similar to lightning.
With ample current density, the discharge forms a luminous arc, where
the inter-electrode material (usually, a gas) undergoes various stages —
saturation, breakdown, glow, transition, and thermal arc. The voltage
rises to its maximum in the saturation stage, and thereafter it
undergoes fluctuations of the various stages, while the current
progressively increases throughout. Electrical resistance along the arc creates heat, which dissociates more gas molecules and ionizes the resulting atoms. Therefore, the electrical energy is given to electrons, which, due to their great mobility and large numbers, are able to disperse it rapidly by elastic collisions to the heavy particles.
Glow discharge plasmas:
non-thermal plasmas generated by the application of DC or low frequency
RF (<100 kHz) electric field to the gap between two metal
electrodes. Probably the most common plasma; this is the type of plasma
generated within fluorescent light tubes.
Capacitively coupled plasma (CCP): similar to glow discharge plasmas, but generated with high frequency RF electric fields, typically 13.56 MHz.
These differ from glow discharges in that the sheaths are much less
intense. These are widely used in the microfabrication and integrated
circuit manufacturing industries for plasma etching and plasma enhanced
chemical vapor deposition.
Inductively coupled plasma (ICP):
similar to a CCP and with similar applications but the electrode
consists of a coil wrapped around the chamber where plasma is formed.
Arc discharge:
this is a high power thermal discharge of very high temperature
(≈10,000 K). It can be generated using various power supplies. It is
commonly used in metallurgical processes. For example, it is used to smelt minerals containing Al₂O₃ to produce aluminium.
Corona discharge: this is a non-thermal discharge generated by the application of high voltage to sharp electrode tips. It is commonly used in ozone generators and particle precipitators.
Dielectric barrier discharge (DBD):
this is a non-thermal discharge generated by the application of high
voltages across small gaps wherein a non-conducting coating prevents the
transition of the plasma discharge into an arc. It is often mislabeled
"Corona" discharge in industry and has similar application to corona
discharges. A common usage of this discharge is in a plasma actuator for vehicle drag reduction. It is also widely used in the web treatment of fabrics.
The application of the discharge to synthetic fabrics and plastics
functionalizes the surface and allows for paints, glues and similar
materials to adhere.
The dielectric barrier discharge was used in the mid-1990s to show that
low temperature atmospheric pressure plasma is effective in
inactivating bacterial cells. This work and later experiments using mammalian cells led to the establishment of a new field of research known as plasma medicine.
The dielectric barrier discharge configuration was also used in the
design of low temperature plasma jets. These plasma jets are produced by
fast propagating guided ionization waves known as plasma bullets.
Capacitive discharge: this is a nonthermal plasma generated by the application of RF power (e.g., 13.56 MHz)
to one powered electrode, with a grounded electrode held at a small
separation distance on the order of 1 cm. Such discharges are commonly
stabilized using a noble gas such as helium or argon.
"Piezoelectric direct discharge plasma:" is a nonthermal plasma
generated at the high side of a piezoelectric transformer (PT). This
generation variant is particularly suited for high efficient and compact
devices where a separate high voltage power supply is not desired.
A worldwide effort was triggered in the 1960s to study magnetohydrodynamic (MHD) converters in order to bring MHD power conversion to market with commercial power plants of a new kind, converting the kinetic energy of a high-velocity plasma into electricity with no moving parts at high efficiency.
Research was also conducted in the field of supersonic and hypersonic
aerodynamics to study plasma interaction with magnetic fields to
eventually achieve passive and even active flow control around vehicles or projectiles, in order to soften and mitigate shock waves, lower thermal transfer and reduce drag.
Such ionized gases used in "plasma technology" ("technological" or "engineered" plasmas) are usually weakly ionized gases in the sense that only a tiny fraction of the gas molecules are ionized.
These kinds of weakly ionized gases are also nonthermal "cold" plasmas.
In the presence of magnetic fields, the study of such magnetized nonthermal weakly ionized gases involves resistive magnetohydrodynamics with low magnetic Reynolds number, a challenging field of plasma physics where calculations require dyadic tensors in a seven-dimensional phase space. When used in combination with a high Hall parameter, a critical value triggers the problematic electrothermal instability, which limited these technological developments.
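As a rough illustration of the regime in question (our own sketch; the conductivity, velocity, and length scale are made-up figures for a weakly ionized seeded gas):

```python
# The magnetic Reynolds number Rm = mu_0 * sigma * V * L compares advection
# of the magnetic field to resistive diffusion; Rm << 1 is the
# "low magnetic Reynolds number" regime mentioned above.
MU_0 = 4e-7 * 3.141592653589793  # vacuum permeability, H/m

def magnetic_reynolds_number(sigma, velocity, length):
    """Rm = mu_0 * sigma * V * L (dimensionless)."""
    return MU_0 * sigma * velocity * length

# sigma ~ 40 S/m, V ~ 1000 m/s, L ~ 0.1 m
print(magnetic_reynolds_number(40.0, 1000.0, 0.1))  # ~5e-3, so Rm << 1
```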
Complex plasma phenomena
Although the underlying equations governing plasmas are relatively
simple, plasma behaviour is extraordinarily varied and subtle: the
emergence of unexpected behaviour from a simple model is a typical
feature of a complex system.
Such systems lie in some sense on the boundary between ordered and
disordered behaviour and cannot typically be described either by simple,
smooth, mathematical functions, or by pure randomness. The spontaneous
formation of interesting spatial features on a wide range of length
scales is one manifestation of plasma complexity. The features are
interesting, for example, because they are very sharp, spatially
intermittent (the distance between features is much larger than the
features themselves), or have a fractal
form. Many of these features were first studied in the laboratory, and
have subsequently been recognized throughout the universe. Examples of complexity and complex structures in plasmas include:
Filamentation
Filamentation also refers to the self-focusing of a high power laser pulse. At high powers, the nonlinear part of the index of refraction
becomes important and causes a higher index of refraction in the center
of the laser beam, where the laser is brighter than at the edges,
causing a feedback that focuses the laser even more. The more tightly focused laser has a higher peak brightness (irradiance) that forms a plasma.
The plasma has an index of refraction lower than one, and causes a
defocusing of the laser beam. The interplay of the focusing index of
refraction, and the defocusing plasma makes the formation of a long
filament of plasma that can be micrometers to kilometers in length.
One interesting aspect of the filamentation generated plasma is the
relatively low ion density due to defocusing effects of the ionized
electrons. (See also Filament propagation)
Impermeable plasma
Impermeable plasma is a type of thermal plasma which acts like an
impermeable solid with respect to gas or cold plasma and can be
physically pushed. Interaction of cold gas and thermal plasma was
briefly studied by a group led by Hannes Alfvén in the 1960s and 1970s for its possible applications in the insulation of fusion plasma from the reactor walls. However, it was later found that the external magnetic fields in this configuration could induce kink instabilities in the plasma and subsequently lead to an unexpectedly high heat loss to the walls.
In 2013, a group of materials scientists reported that they had successfully generated stable impermeable plasma with no magnetic confinement
using only an ultrahigh-pressure blanket of cold gas. While
spectroscopic data on the characteristics of plasma were claimed to be
difficult to obtain due to the high pressure, the passive effect of
plasma on synthesis of different nanostructures
clearly suggested the effective confinement. They also showed that upon
maintaining the impermeability for a few tens of seconds, screening of ions
at the plasma-gas interface could give rise to a strong secondary mode
of heating (known as viscous heating) leading to different kinetics of
reactions and formation of complex nanomaterials.