
Symmetry (physics)

From Wikipedia, the free encyclopedia

The symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation.

A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group).

These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems.

Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is described in special relativity by a group of transformations of the spacetime known as the Poincaré group. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity.

As a kind of invariance

Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in an observer's position within the room.

Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve the shape of its surface from any given vantage point.
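This invariance can be checked numerically. The sketch below (illustrative only: NumPy, a z-axis rotation, and the random sample points are all arbitrary choices) verifies that rotating points on a unit sphere leaves them on the sphere:

```python
import numpy as np

def rotation_z(theta):
    # Rotation about the z-axis by angle theta; any axis would do for a sphere.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Random points projected onto the unit sphere.
rng = np.random.default_rng(0)
points = rng.normal(size=(100, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)

# After rotation, every point still satisfies |p| = 1: the sphere
# looks exactly as it did before the transformation.
rotated = points @ rotation_z(0.7).T
assert np.allclose(np.linalg.norm(rotated, axis=1), 1.0)
```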

Invariance in force

The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well.

For example, an electric field due to an electrically charged wire of infinite length is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the wire will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. This is not true in general for an arbitrary system of charges.

In Newton's theory of mechanics, given two bodies, each with mass m, starting at the origin and moving along the x-axis in opposite directions, one with speed v1 and the other with speed v2, the total kinetic energy of the system (as calculated by an observer at the origin) is 1/2 m(v1² + v2²) and remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis.

The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if v1 and v2 are interchanged.
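The interchange symmetry can be written out directly. A minimal sketch (the mass and speeds are arbitrary illustrative values):

```python
def kinetic_energy(m, v1, v2):
    # Total kinetic energy of two bodies of mass m moving along the x-axis.
    return 0.5 * m * (v1**2 + v2**2)

# Interchanging the velocities (the effect of a reflection in the
# y-axis on the two trajectories) leaves the total unchanged:
assert kinetic_energy(2.0, 3.0, -5.0) == kinetic_energy(2.0, -5.0, 3.0)

# Reversing each velocity (v -> -v) also preserves it:
assert kinetic_energy(2.0, 3.0, -5.0) == kinetic_energy(2.0, -3.0, 5.0)
```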

Local and global

Symmetries may be broadly classified as global or local. A global symmetry is one that keeps a property invariant for a transformation that is applied simultaneously at all points of spacetime, whereas a local symmetry is one that keeps a property invariant when a possibly different symmetry transformation is applied at each point of spacetime; specifically a local symmetry transformation is parameterised by the spacetime coordinates, whereas a global symmetry is not. This implies that a global symmetry is also a local symmetry. Local symmetries play an important role in physics as they form the basis for gauge theories.

Continuous

The two examples of rotational symmetry described above – spherical and cylindrical – are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by transformations that change continuously as a function of their parameterization. An important subclass of continuous symmetries in physics are spacetime symmetries.

Spacetime

Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time.

  • Time translation: A physical system may have the same features over a certain interval of time Δt; this is expressed mathematically as invariance under the transformation t → t + a for any real parameters t and t + a in the interval. For example, in classical mechanics, a particle solely acted upon by gravity will have gravitational potential energy mgh when suspended from a height h above the Earth's surface. Assuming no change in the height of the particle, this will be the total gravitational potential energy of the particle at all times. In other words, by considering the state of the particle at some time t0 and also at t0 + a, the particle's total gravitational potential energy will be preserved.
  • Spatial translation: These spatial symmetries are represented by transformations of the form r → r + a and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room.
  • Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant. The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry.
  • Poincaré transformations: These are spatio-temporal symmetries which preserve distances in Minkowski spacetime, i.e. they are isometries of Minkowski space. They are studied primarily in special relativity. Those isometries that leave the origin fixed are called Lorentz transformations and give rise to the symmetry known as Lorentz covariance.
  • Projective symmetries: These are spatio-temporal symmetries which preserve the geodesic structure of spacetime. They may be defined on any smooth manifold, but find many applications in the study of exact solutions in general relativity.
  • Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant.
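The distinction between proper and improper rotations in the list above can be checked via determinants. A short sketch (the rotation angle is an arbitrary choice):

```python
import numpy as np

theta = 0.9
c, s = np.cos(theta), np.sin(theta)

# A proper rotation about the z-axis: determinant +1.
proper = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])

# Composing it with a spatial inversion (reflection through the
# origin) gives an improper rotation: determinant -1.
inversion = -np.eye(3)
improper = inversion @ proper

assert np.isclose(np.linalg.det(proper), 1.0)
assert np.isclose(np.linalg.det(improper), -1.0)
```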

Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system.

Some of the most important vector fields are Killing vector fields which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries.

Discrete

A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges.
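The square example can be made concrete: only quarter-turn rotations map the vertex set onto itself. A sketch (the vertex coordinates and rounding tolerance are illustrative choices):

```python
import numpy as np

# Vertices of a square centred at the origin.
square = {(1, 1), (-1, 1), (-1, -1), (1, -1)}

def rotate(points, theta):
    c, s = np.cos(theta), np.sin(theta)
    # Round to suppress floating-point noise when comparing sets.
    return {(round(c * x - s * y, 9), round(s * x + c * y, 9))
            for x, y in points}

# Rotations by multiples of a right angle preserve the square;
# other angles do not.
assert rotate(square, np.pi / 2) == square   # 90 degrees: a symmetry
assert rotate(square, np.pi) == square       # 180 degrees: a symmetry
assert rotate(square, np.pi / 4) != square   # 45 degrees: not a symmetry
```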

  • Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation, . For example, Newton's second law of motion still holds if, in the equation , is replaced by . This may be illustrated by recording the motion of an object thrown up vertically (neglecting air resistance) and then playing it back. The object will follow the same parabolic trajectory through the air, whether the recording is played normally or in reverse. Thus, position is symmetric with respect to the instant that the object is at its maximum height.
  • Spatial inversion: These are represented by transformations of the form and indicate an invariance property of a system when the coordinates are 'inverted'. Stated another way, these are symmetries between a certain object and its mirror image.
  • Glide reflection: These are represented by a composition of a translation and a reflection. These symmetries occur in some crystals and in some planar symmetries, known as wallpaper symmetries.

C, P, and T

The Standard Model of particle physics has three related natural near-symmetries. These state that the universe in which we live should be indistinguishable from one where a certain type of change is introduced.

  • C-symmetry (charge symmetry), a universe where every particle is replaced with its antiparticle.
  • P-symmetry (parity symmetry), a universe where everything is mirrored along the three physical axes. This excludes weak interactions as demonstrated by Chien-Shiung Wu.
  • T-symmetry (time reversal symmetry), a universe where the direction of time is reversed. T-symmetry is counterintuitive (the future and the past are not symmetrical) but explained by the fact that the Standard Model describes local properties, not global ones like entropy. To properly reverse the direction of time, one would have to put the Big Bang and the resulting low-entropy state in the "future". Since we perceive the "past" ("future") as having lower (higher) entropy than the present, the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past, and vice versa.

These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics.

Supersymmetry

A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the Standard Model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the Standard Model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. Experiments at the Large Hadron Collider (LHC) continue to search for evidence of supersymmetry.

Generalized symmetries

Generalized symmetries encompass a number of recently recognized generalizations of the concept of a global symmetry. These include higher form symmetries, higher group symmetries, non-invertible symmetries, and subsystem symmetries.

Mathematics of physical symmetry

The transformations describing physical symmetries typically form a mathematical group. Group theory is an important area of mathematics for physicists.

Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (through any angle) about any axis of a sphere forms a Lie group called the special orthogonal group SO(3). (The '3' refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the sphere. The set of all Lorentz transformations forms a group called the Lorentz group (this may be generalised to the Poincaré group).

Discrete groups describe discrete symmetries. For example, the symmetries of an equilateral triangle are characterized by the symmetric group S3.
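The claim that the equilateral triangle's symmetry group is S3 can be verified by brute force: every permutation of the three vertices preserves all pairwise distances, so all 3! = 6 permutations are symmetries. A sketch (the vertex placement is an arbitrary choice):

```python
from itertools import permutations
import math

# Vertices of an equilateral triangle inscribed in the unit circle.
verts = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
         for k in range(3)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# A vertex permutation is a symmetry if it preserves every pairwise distance.
symmetries = [p for p in permutations(range(3))
              if all(math.isclose(dist(verts[p[i]], verts[p[j]]),
                                  dist(verts[i], verts[j]))
                     for i in range(3) for j in range(i + 1, 3))]

# All six permutations qualify, so the symmetry group is the full S3.
assert len(symmetries) == 6
```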

A type of physical theory based on local symmetries is called a gauge theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard Model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.)

Symmetry reduction of the energy functional under the action of a group, together with spontaneous symmetry breaking of the symmetry group's transformations, also elucidates topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology).

Conservation laws and symmetry

The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, spatial translation symmetry (i.e. homogeneity of space) gives rise to conservation of (linear) momentum, and temporal translation symmetry (i.e. homogeneity of time) gives rise to conservation of energy.
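The time-translation case of Noether's theorem can be illustrated in a few lines. The sketch below (an illustrative classical example; the mass, initial conditions and sample times are arbitrary) checks that a projectile's total energy is the same at any two instants, because the equations of motion do not depend on the choice of time origin:

```python
# A projectile under uniform gravity; its dynamics are invariant
# under time translation t -> t + a.
m, g = 2.0, 9.81          # mass (kg) and gravitational acceleration (m/s^2)
v0, h0 = 5.0, 10.0        # initial vertical speed (m/s) and height (m)

def energy(t):
    v = v0 - g * t                        # velocity at time t
    h = h0 + v0 * t - 0.5 * g * t**2      # height at time t
    return 0.5 * m * v**2 + m * g * h     # kinetic + potential energy

# The total energy observed at t0 equals that at t0 + a for any shift a.
assert abs(energy(0.0) - energy(0.37)) < 1e-9
assert abs(energy(0.1) - energy(1.2)) < 1e-9
```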

The following table summarizes some fundamental symmetries and the associated conserved quantity.

Class: Proper orthochronous Poincaré symmetry
  • translation in time (homogeneity) → energy E
  • translation in space (homogeneity) → linear momentum p
  • rotation in space (isotropy) → angular momentum L = r × p
  • Lorentz boost (isotropy) → boost 3-vector N = tp − Er

Class: Discrete symmetry
  • P, coordinate inversion → spatial parity
  • C, charge conjugation → charge parity
  • T, time reversal → time parity
  • CPT → product of parities

Class: Internal symmetry (independent of spacetime coordinates)
  • U(1) transformation → electric charge
  • U(1) transformation → lepton generation number
  • U(1) transformation → hypercharge
  • U(1)Y transformation → weak hypercharge
  • U(2) [U(1) × SU(2)] → electroweak force
  • SU(2) transformation → isospin
  • SU(2)L transformation → weak isospin
  • P × SU(2) → G-parity
  • SU(3) "winding number" → baryon number
  • SU(3) transformation → quark color
  • SU(3) (approximate) → quark flavor
  • U(1) × SU(2) × SU(3) → Standard Model

Mathematics

A continuous symmetry can be specified by showing how an infinitesimal transformation affects the various particle fields of a theory. The commutator of two of these infinitesimal transformations is itself an infinitesimal transformation of the same kind; hence they form a Lie algebra.
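The closure of infinitesimal transformations under the commutator can be seen concretely for rotations. The sketch below (using the standard 3×3 generator matrices of so(3); the representation is an illustrative choice) verifies that the commutator of two infinitesimal rotations is again an infinitesimal rotation:

```python
import numpy as np

# Generators of infinitesimal rotations about the x, y and z axes:
# a basis of the Lie algebra so(3).
Jx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Jy = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
Jz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def commutator(A, B):
    return A @ B - B @ A

# [Jx, Jy] = Jz and cyclic permutations: the algebra closes.
assert np.allclose(commutator(Jx, Jy), Jz)
assert np.allclose(commutator(Jy, Jz), Jx)
assert np.allclose(commutator(Jz, Jx), Jy)
```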

A general coordinate transformation, described by a general vector field h^μ(x) (also known as a diffeomorphism), has an infinitesimal effect on a scalar field φ(x), spinor field ψ(x) or vector field A_μ(x) that can be expressed (using the Einstein summation convention):

  δφ(x) = h^μ(x) ∂_μ φ(x)
  δψ(x) = h^μ(x) ∂_μ ψ(x) + ∂_μ h_ν(x) σ^{μν} ψ(x)
  δA_μ(x) = h^ν(x) ∂_ν A_μ(x) + A_ν(x) ∂_μ h^ν(x)

where σ^{μν} are the spin-rotation generators acting on the spinor. Without gravity only the Poincaré symmetries are preserved, which restricts h^μ(x) to be of the form:

  h^μ(x) = M^{μν} x_ν + P^μ

where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and a spinor field:

  δψ(x) = λ(x) τ ψ(x)
  δA_μ(x) = ∂_μ λ(x)

where τ denotes generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of different types.

Another symmetry which is part of some theories of physics and not in others is scale invariance, which involves Weyl transformations of the following kind:

  g_{μν}(x) → e^{2ω(x)} g_{μν}(x),   φ(x) → e^{−Δω(x)} φ(x)

where Δ is the conformal weight of the field. If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant also. This means that in the absence of gravity h(x) would be restricted to the form:

  h^μ(x) = M^{μν} x_ν + P^μ + D x^μ + K^μ x² − 2 K_ν x^ν x^μ

with D generating scale transformations and K generating special conformal transformations. For example, N = 4 supersymmetric Yang–Mills theory has this symmetry while general relativity does not, although other theories of gravity such as conformal gravity do. The 'action' of a field theory is an invariant under all the symmetries of the theory. Much of modern theoretical physics involves speculating on the various symmetries the Universe may have and finding the invariants to construct field theories as models.

In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries of the string world sheet are equivalent to special transformations which mix an infinite number of fields.

Nuclear physics


Discoveries in nuclear physics have led to applications in many fields such as nuclear power, nuclear weapons, nuclear medicine and magnetic resonance imaging, industrial and agricultural isotopes, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology. Such applications are studied in the field of nuclear engineering.

Particle physics evolved out of nuclear physics and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars and the origin of the chemical elements.

History

Henri Becquerel
Since the 1920s, cloud chambers played an important role as particle detectors and eventually led to the discoveries of the positron, the muon and the kaon.

The history of nuclear physics as a discipline distinct from atomic physics, starts with the discovery of radioactivity by Henri Becquerel in 1896, made while investigating phosphorescence in uranium salts. The discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. At the beginning of the 20th century the accepted model of the atom was J. J. Thomson's "plum pudding" model in which the atom was a positively charged ball with smaller negatively charged electrons embedded inside it.

In the years that followed, radioactivity was extensively investigated, notably by Marie Curie, a Polish physicist whose maiden name was Sklodowska, Pierre Curie, Ernest Rutherford and others. By the turn of the century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays.

The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery and to Marie and Pierre Curie for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his "investigations into the disintegration of the elements and the chemistry of radioactive substances".

In 1905, Albert Einstein formulated the idea of mass–energy equivalence. While the work on radioactivity by Becquerel and Marie Curie predates this, an explanation of the source of the energy of radioactivity would have to wait for the discovery that the nucleus itself was composed of smaller constituents, the nucleons.

Rutherford discovers the nucleus

In 1906, Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter." Hans Geiger expanded on this work in a communication to the Royal Society with experiments he and Rutherford had done, passing alpha particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and further greatly expanded work was published in 1910 by Geiger. In 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it.

The key experiment was performed during 1909 at the University of Manchester, with the results published in 1909 and Rutherford's classical analysis published in May 1911. Ernest Rutherford's assistant, Johannes "Hans" Geiger, and an undergraduate, Marsden, working under Rutherford's supervision, fired alpha particles (helium-4 nuclei) at a thin film of gold foil. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories at most slightly bent. Rutherford instructed his team to look for something that shocked him to observe: a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. The discovery, with Rutherford's analysis of the data in 1911, led to the Rutherford model of the atom, in which the atom had a very small, very dense nucleus containing most of its mass and consisting of heavy positively charged particles with embedded electrons in order to balance out the charge (since the neutron was unknown). As an example, in this model (which is not the modern one) nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons (21 total particles), and the nucleus was surrounded by 7 more orbiting electrons.

Eddington and stellar nuclear fusion

Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.

Studies of nuclear spin

The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons each had a spin of ±1/2. In the Rutherford model of nitrogen-14, 20 of the total 21 nuclear particles should have paired up to cancel each other's spin, and the final odd particle should have left the nucleus with a net spin of 1/2. Rasetti discovered, however, that nitrogen-14 had a spin of 1.

James Chadwick discovers the neutron

In 1932 Chadwick realized that radiation that had been observed by Walther Bothe, Herbert Becker, Irène and Frédéric Joliot-Curie was actually due to a neutral particle of about the same mass as the proton, that he called the neutron (following a suggestion from Rutherford about the need for such a particle). In the same year Dmitri Ivanenko suggested that there were no electrons in the nucleus — only protons and neutrons — and that neutrons were spin-1/2 particles, which explained the mass not due to protons. The neutron spin immediately solved the problem of the spin of nitrogen-14, as the one unpaired proton and one unpaired neutron in this model each contributed a spin of 1/2 in the same direction, giving a final total spin of 1.

With the discovery of the neutron, scientists could at last calculate what fraction of binding energy each nucleus had, by comparing the nuclear mass with that of the protons and neutrons which composed it. Differences between nuclear masses were calculated in this way. When nuclear reactions were measured, these were found to agree with Einstein's calculation of the equivalence of mass and energy to within 1% as of 1934.
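The binding-energy bookkeeping described above is a simple mass comparison. A sketch for helium-4 (the rest masses in MeV/c², rounded to three decimals, are approximate textbook values assumed here for illustration):

```python
# Approximate rest masses in MeV/c^2 (illustrative values, not
# high-precision data).
m_proton  = 938.272
m_neutron = 939.565
m_helium4 = 3727.379   # mass of the bound helium-4 nucleus

# Binding energy = (mass of the free constituents) - (mass of the nucleus).
constituents = 2 * m_proton + 2 * m_neutron
binding_energy = constituents - m_helium4   # roughly 28.3 MeV

# The bound nucleus is lighter than its parts; the deficit is the
# energy that would be needed to pull it apart.
assert 28.0 < binding_energy < 28.6
```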

Proca's equations of the massive vector boson field

Alexandru Proca was the first to develop and report the massive vector boson field equations and a theory of the mesonic field of nuclear forces. Proca's equations were known to Wolfgang Pauli who mentioned the equations in his Nobel address, and they were also known to Yukawa, Wentzel, Taketani, Sakata, Kemmer, Heitler, and Fröhlich who appreciated the content of Proca's equations for developing a theory of the atomic nuclei in Nuclear Physics.

Yukawa's meson postulated to bind nuclei

In 1935 Hideki Yukawa proposed the first significant theory of the strong force to explain how the nucleus holds together. In the Yukawa interaction a virtual particle, later called a meson, mediated a force between all nucleons, including protons and neutrons. This force explained why nuclei did not disintegrate under the influence of proton repulsion, and it also gave an explanation of why the attractive strong force had a more limited range than the electromagnetic repulsion between protons. Later, the discovery of the pi meson showed it to have the properties of Yukawa's particle.

With Yukawa's papers, the modern model of the atom was complete. The center of the atom contains a tight ball of neutrons and protons, which is held together by the strong nuclear force, unless it is too large. Unstable nuclei may undergo alpha decay, in which they emit an energetic helium nucleus, or beta decay, in which they eject an electron (or positron). After one of these decays the resultant nucleus may be left in an excited state, and in this case it decays to its ground state by emitting high-energy photons (gamma decay).

The study of the strong and weak nuclear forces (the latter explained by Enrico Fermi via Fermi's interaction in 1934) led physicists to collide nuclei and electrons at ever higher energies. This research became the science of particle physics, the crown jewel of which is the standard model of particle physics, which describes the strong, weak, and electromagnetic forces.

Modern nuclear physics

A heavy nucleus can contain hundreds of nucleons. This means that with some approximation it can be treated as a classical system, rather than a quantum-mechanical one. In the resulting liquid-drop model, the nucleus has an energy that arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission.
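The liquid-drop picture is commonly quantified by the semi-empirical (Bethe–Weizsäcker) mass formula, whose terms correspond to the volume, surface-tension, Coulomb-repulsion, asymmetry and pairing effects. A sketch (the coefficient values in MeV are typical textbook fits, assumed here for illustration):

```python
# Semi-empirical mass formula coefficients in MeV (typical fitted values).
a_V, a_S, a_C, a_A, a_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, A):
    N = A - Z
    pairing = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        pairing = +a_P / A**0.5        # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -a_P / A**0.5        # odd-odd nuclei are less bound
    return (a_V * A                          # volume term
            - a_S * A**(2 / 3)               # surface tension
            - a_C * Z * (Z - 1) / A**(1 / 3) # Coulomb repulsion of protons
            - a_A * (A - 2 * Z)**2 / A       # neutron-proton asymmetry
            + pairing)

# Binding energy per nucleon of iron-56 comes out near the observed
# ~8.8 MeV peak of the binding-energy curve.
b_per_A = binding_energy(26, 56) / 56
assert 8.5 < b_per_A < 9.0
```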

Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model, developed in large part by Maria Goeppert Mayer and J. Hans D. Jensen. Nuclei with certain "magic" numbers of neutrons and protons are particularly stable, because their shells are filled.

Other more complicated models for the nucleus have also been proposed, such as the interacting boson model, in which pairs of neutrons and protons interact as bosons.

Ab initio methods try to solve the nuclear many-body problem from the ground up, starting from the nucleons and their interactions.

Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy. Nuclei may also have extreme shapes (similar to that of Rugby balls or even pears) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator. Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark–gluon plasma, in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons.

Nuclear decay

Eighty elements have at least one stable isotope which is never observed to decay, amounting to a total of about 251 stable nuclides. However, thousands of isotopes have been characterized as unstable. These "radioisotopes" decay over time scales ranging from fractions of a second to trillions of years. Plotted on a chart as a function of atomic and neutron numbers, the binding energy of the nuclides forms what is known as the valley of stability. Stable nuclides lie along the bottom of this energy valley, while increasingly unstable nuclides lie up the valley walls, that is, have weaker binding energy.
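Radioisotope decay over these time scales follows the exponential law N(t) = N0 · 2^(−t/T), where T is the half-life. A sketch (the 5730-year half-life used below is the familiar carbon-14 value, included for illustration):

```python
# Exponential radioactive decay in terms of the half-life T:
# N(t) = N0 * 2**(-t / T).
def remaining(n0, half_life, t):
    return n0 * 2 ** (-t / half_life)

# After one half-life, half of the sample survives.
assert remaining(1000.0, 5730.0, 5730.0) == 500.0

# After ten half-lives, roughly one part in a thousand remains.
assert abs(remaining(1024.0, 1.0, 10.0) - 1.0) < 1e-9
```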

The most stable nuclei fall within certain ranges or balances of composition of neutrons and protons: too few or too many neutrons (in relation to the number of protons) will cause the nucleus to decay. For example, in beta decay, a nitrogen-16 atom (7 protons, 9 neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is converted by the weak interaction into a proton, an electron and an antineutrino. The element is transmuted to another element, with a different number of protons.

In alpha decay, which typically occurs in the heaviest nuclei, the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4. In many cases this process continues through several steps of this kind, including other types of decays (usually beta decay) until a stable element is formed.

In gamma decay, a nucleus decays from an excited state into a lower energy state, by emitting a gamma ray. The element is not changed to another element in the process (no nuclear transmutation is involved).

Other more exotic decays are possible (see the first main article). For example, in internal conversion decay, the energy from an excited nucleus may eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons but is not beta decay and (unlike beta decay) does not transmute one element to another.

Nuclear fusion

In nuclear fusion, two low-mass nuclei come into very close contact with each other so that the strong force fuses them. It requires a large amount of energy for the strong or nuclear forces to overcome the electrical repulsion between the nuclei in order to fuse them; therefore nuclear fusion can only take place at very high temperatures or high pressures. When nuclei fuse, a very large amount of energy is released and the combined nucleus assumes a lower energy level. The binding energy per nucleon increases with mass number up to nickel-62. Stars like the Sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. A frontier in current research at various institutions, for example the Joint European Torus (JET) and ITER, is the development of an economically viable method of using energy from a controlled fusion reaction. Nuclear fusion is the origin of the energy (including in the form of light and other electromagnetic radiation) produced by the core of all stars including our own Sun.
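The energy released when nuclei fuse is just the mass deficit of the reaction. As an illustration, the deuterium–tritium reaction pursued in devices like JET and ITER (the rest masses in MeV/c² below are approximate values assumed for the sketch):

```python
# Approximate rest masses in MeV/c^2 (illustrative values).
m_deuteron = 1875.613
m_triton   = 2808.921
m_helium4  = 3727.379
m_neutron  = 939.565

# Energy released in D + T -> He-4 + n is the mass difference
# between the reactants and the products.
q_value = (m_deuteron + m_triton) - (m_helium4 + m_neutron)

# The combined nucleus sits at a lower energy level: about 17.6 MeV
# is released per reaction.
assert 17.3 < q_value < 17.9
```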

Nuclear fission

Nuclear fission is the reverse process to fusion. For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones.
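The binding-energy argument behind both fusion and fission can be made concrete with the semi-empirical (Weizsäcker) mass formula. The sketch below uses one common set of fitted coefficients; exact values vary slightly between references:

```python
# Binding energy per nucleon from the semi-empirical mass formula.
# Coefficient values (in MeV) are one common fit; exact numbers
# differ slightly between references.
def binding_energy(Z, A):
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    B = (aV * A                               # volume term
         - aS * A ** (2 / 3)                  # surface term
         - aC * Z * (Z - 1) / A ** (1 / 3)    # Coulomb repulsion
         - aA * (A - 2 * Z) ** 2 / A)         # asymmetry term
    # pairing term: even-even nuclei are slightly more bound,
    # odd-odd nuclei slightly less
    if Z % 2 == 0 and (A - Z) % 2 == 0:
        B += aP / A ** 0.5
    elif Z % 2 == 1 and (A - Z) % 2 == 1:
        B -= aP / A ** 0.5
    return B

for name, Z, A in [("He-4", 2, 4), ("Fe-56", 26, 56),
                   ("Ni-62", 28, 62), ("U-238", 92, 238)]:
    print(f"{name}: {binding_energy(Z, A) / A:.2f} MeV per nucleon")
```

The curve rises steeply for light nuclei and falls gently beyond the iron-nickel region, which is why fusing light nuclei and splitting heavy ones both release energy. (The formula is a smooth fit, so it is least accurate for very light nuclei such as helium-4.)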

The process of alpha decay is in essence a special type of spontaneous nuclear fission. It is a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely.

Several of the heaviest nuclei emit free neutrons when they fission and also readily absorb neutrons to initiate fission; in such nuclei, a self-sustaining, neutron-initiated fission can be obtained in a chain reaction. Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions. The fission or "nuclear" chain reaction, using fission-produced neutrons, is the source of energy for nuclear power plants and fission-type nuclear bombs, such as those detonated over Hiroshima and Nagasaki, Japan, at the end of World War II. Heavy nuclei such as uranium and thorium may also undergo spontaneous fission, but they are much more likely to decay by alpha emission.

For a neutron-initiated chain reaction to occur, there must be a critical mass of the relevant isotope present in a certain space under certain conditions. The conditions for the smallest critical mass require the conservation of the emitted neutrons and also their slowing, or moderation, so that there is a greater cross-section, or probability, of them initiating another fission. In two regions of Oklo, Gabon, natural nuclear fission reactors were active over 1.5 billion years ago. Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain reactions.
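The criticality condition can be sketched with a toy generation-by-generation model. The multiplication factor k below stands in for all the geometry, mass, and moderation effects described above; the specific values are illustrative, not measured:

```python
# Toy model of a fission chain reaction, generation by generation.
# k is the effective multiplication factor: the average number of
# neutrons per fission that go on to cause another fission.
#   k < 1: subcritical, the reaction dies out
#   k = 1: critical, the reaction is self-sustaining
#   k > 1: supercritical, the reaction grows exponentially
def neutron_population(k, generations, n0=1000.0):
    n = n0
    history = [n]
    for _ in range(generations):
        n *= k
        history.append(n)
    return history

sub = neutron_population(0.9, 50)    # subcritical assembly
crit = neutron_population(1.0, 50)   # critical assembly
sup = neutron_population(1.05, 50)   # supercritical assembly
print(f"after 50 generations: sub={sub[-1]:.1f}, "
      f"crit={crit[-1]:.1f}, sup={sup[-1]:.1f}")
```

A reactor is held near k = 1 by moderation and control; a fission weapon is designed to reach k well above 1 for as many generations as possible.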

Production of "heavy" elements

According to the Big Bang theory, as the Universe cooled it eventually became possible for common subatomic particles as we know them (neutrons, protons and electrons) to exist. The most common particles created in the Big Bang which are still easily observable to us today were protons and electrons (in equal numbers). The protons would eventually form hydrogen atoms. Almost all the neutrons created in the Big Bang were absorbed into helium-4 in the first three minutes after the Big Bang, and this helium accounts for most of the helium in the universe today (see Big Bang nucleosynthesis).

Some relatively small quantities of elements beyond helium (lithium, beryllium, and perhaps some boron) were created in the Big Bang as protons and neutrons collided with each other, but all of the "heavier elements" (carbon, element number 6, and elements of greater atomic number) that we see today were created inside stars during a series of fusion stages, such as the proton–proton chain, the CNO cycle and the triple-alpha process. Progressively heavier elements are created during the evolution of a star.

Energy is only released in fusion processes involving atoms smaller than iron, because the binding energy per nucleon peaks in the iron–nickel region (around mass numbers 56–62). Since the creation of heavier nuclei by fusion requires energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either a slow neutron-capture process (the so-called s-process) or a rapid one (the r-process). The s-process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements, lead and bismuth. The r-process is thought to occur in supernova explosions, which provide the necessary conditions of high temperature, high neutron flux and ejected matter. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers).
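The difference between the s- and r-process comes down to a race at each unstable nucleus between capturing another neutron and beta-decaying. A minimal sketch of that competition, using made-up illustrative rates rather than measured values:

```python
# Toy view of the s- vs r-process: at an unstable nucleus, the next
# step is a race between neutron capture (rate proportional to the
# neutron flux) and beta decay. Rates here are illustrative only.
def capture_fraction(capture_rate, beta_rate):
    """Probability that capture happens before beta decay, for two
    competing exponential (Poisson) processes."""
    return capture_rate / (capture_rate + beta_rate)

beta_rate = 1.0      # beta-decay rate (arbitrary units)
s_capture = 1e-3     # low neutron flux: s-process conditions
r_capture = 1e3      # very high flux: r-process conditions

print(f"s-process: capture wins "
      f"{capture_fraction(s_capture, beta_rate):.1%} of the time")
print(f"r-process: capture wins "
      f"{capture_fraction(r_capture, beta_rate):.1%} of the time")
```

Under s-process conditions the nucleus almost always decays back toward stability before the next capture, so the path hugs the valley of stability; under r-process conditions many captures pile up first, producing the very neutron-rich species mentioned above.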

Hard and soft science

From Wikipedia, the free encyclopedia

Hard science and soft science are colloquial terms used to compare scientific fields on the basis of perceived methodological rigor, exactitude, and objectivity. In general, the formal sciences and natural sciences are considered hard science by their practitioners, whereas the social sciences and other sciences are described by them as soft science.

Precise definitions vary, but features often cited as characteristic of hard science include producing testable predictions, performing controlled experiments, relying on quantifiable data and mathematical models, a high degree of accuracy and objectivity, higher levels of consensus, faster progression of the field, greater explanatory success, cumulativeness, replicability, and generally applying a purer form of the scientific method. A closely related idea (originating in the nineteenth century with Auguste Comte) is that scientific disciplines can be arranged into a hierarchy of hard to soft on the basis of factors such as rigor, "development", and whether they are basic or applied.

Philosophers and historians of science have questioned the relationship between these characteristics and perceived hardness or softness. The more "developed" hard sciences do not necessarily have a greater degree of consensus or selectivity in accepting new results. Commonly cited methodological differences are also not a reliable indicator. For example, social sciences such as psychology and sociology use mathematical models extensively, but are usually considered soft sciences. While scientific controls are cited as a methodological difference between hard and soft sciences, in certain natural sciences, like astronomy and geology, it is impossible to perform controlled experiments to test most hypotheses and observation and natural experiments are primarily used instead. Survey data about the replication crisis among researchers strongly suggests that the failure to reproduce published findings has impacted the natural and applied sciences along with psychology and the social sciences. However, there are some observable differences between hard and soft sciences. For example, hard sciences make more extensive use of graphs, and soft sciences are more prone to a rapid turnover of buzzwords.

The metaphor has been criticised for stigmatizing soft sciences, creating an unwarranted imbalance in the public perception, funding, and recognition of different fields.

History of the terms

The origin of the terms "hard science" and "soft science" is obscure. The earliest attested use of "hard science" is found in an 1858 issue of the Journal of the Society of Arts, but the idea of a hierarchy of the sciences can be found earlier, in the work of the French philosopher Auguste Comte (1798–1857). He identified astronomy as the most general science, followed by physics, chemistry, biology, then sociology. This hierarchy, intended to classify fields based on their degree of intellectual development and the complexity of their subject matter, was highly influential.

The modern distinction between hard and soft science is often attributed to a 1964 article published in Science by John R. Platt. He explored why he considered some scientific fields to be more productive than others, though he did not actually use the terms themselves. In 1967, sociologist of science Norman W. Storer specifically distinguished between the natural sciences as hard and the social sciences as soft. He defined hardness in terms of the degree to which a field uses mathematics and described a trend of scientific fields increasing in hardness over time, identifying features of increased hardness as including better integration and organization of knowledge, an improved ability to detect errors, and an increase in the difficulty of learning the subject.

Empirical support

In the 1970s sociologist Stephen Cole conducted a number of empirical studies attempting to find evidence for a hierarchy of scientific disciplines, and was unable to find significant differences in terms of core of knowledge, degree of codification, or research material. Differences that he did find evidence for included a tendency for textbooks in the soft sciences to rely on more recent work, while the material in textbooks from the hard sciences was more consistent over time. After he published his findings in 1983, it was suggested that Cole might have missed some relationships in the data because he studied individual measurements without accounting for the way multiple measurements could trend in the same direction, and because not all the criteria that could indicate a discipline's scientific status were analysed.

In 1984, Cleveland performed a survey of 57 journals and found that natural science journals used many more graphs than journals in mathematics or social science, and that social science journals often presented large amounts of observational data in the absence of graphs. The amount of page area used for graphs ranged from 0% to 31%, and the variation was primarily due to the number of graphs included rather than their sizes. Further analyses by Smith in 2000, based on samples of graphs from journals in seven major scientific disciplines, found that the amount of graph usage correlated "almost perfectly" with hardness (r=0.97). They also suggested that the hierarchy applies to individual fields, and demonstrated the same result using ten subfields of psychology (r=0.93).
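The reported correlations (r = 0.97, r = 0.93) are Pearson correlation coefficients. As a sketch of the statistic itself, the following computes r on hypothetical illustrative numbers, not the data from the cited studies:

```python
# Pearson correlation coefficient r, the statistic behind the
# reported graph-use/hardness correlations. The data below are
# hypothetical illustrative numbers, not values from the studies.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical: hardness rank (1 = softest) vs. % of journal page
# area devoted to graphs
hardness = [1, 2, 3, 4, 5, 6, 7]
graph_area = [1.0, 2.5, 5.0, 9.0, 14.0, 20.0, 28.0]
print(f"r = {pearson_r(hardness, graph_area):.2f}")
```

An r near 1 means the two rankings rise together almost perfectly, which is what "correlated 'almost perfectly' with hardness" expresses.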

In a 2010 article, Fanelli proposed that more positive outcomes should be expected in "softer" sciences because there are fewer constraints on researcher bias. He found that among research papers that tested a hypothesis, the frequency of positive results was predicted by the perceived hardness of the field. For example, the social sciences as a whole had 2.3-fold greater odds of positive results than the physical sciences, with the biological sciences in between. He added that this supports the idea that the social sciences and natural sciences differ only in degree, as long as the social sciences follow the scientific approach.
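"2.3-fold greater odds" refers to an odds ratio, not a simple ratio of percentages. A short sketch of the computation, using hypothetical counts rather than the data from the cited study:

```python
# Odds ratio, the measure behind the reported "2.3-fold" figure.
# The counts below are hypothetical illustrative numbers, not data
# from the cited study.
def odds_ratio(pos_a, neg_a, pos_b, neg_b):
    """Odds of a positive result in group A relative to group B."""
    return (pos_a / neg_a) / (pos_b / neg_b)

# hypothetical paper counts: (positive, negative) hypothesis tests
social = (85, 15)     # social sciences: 85 of 100 papers support
physical = (71, 29)   # physical sciences: 71 of 100 papers support

or_val = odds_ratio(social[0], social[1], physical[0], physical[1])
print(f"odds ratio: {or_val:.2f}")  # about 2.31 for these counts
```

Note that an odds ratio of 2.3 corresponds to a much smaller gap in raw percentages (here 85% versus 71%), which is worth keeping in mind when interpreting the result.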

In 2013, Fanelli tested whether the ability of researchers in a field to "achieve consensus and accumulate knowledge" increases with the hardness of the science, and sampled 29,000 papers from 12 disciplines using measurements that indicate the degree of scholarly consensus. Out of the three possibilities (hierarchy, hard/soft distinction, or no ordering), the results supported a hierarchy, with physical sciences performing the best followed by biological sciences and then social sciences. The results also held within disciplines, as well as when mathematics and the humanities were included.

The perception of hard versus soft science is influenced by gender bias, with a higher proportion of women in a given field leading to a "soft" perception, even within STEM fields. This perception of softness is accompanied by a devaluation of the field's worth.

Criticism

Critics of the concept argue that soft sciences are implicitly considered to be less "legitimate" scientific fields, or simply not scientific at all. An editorial in Nature stated that social science findings are more likely to intersect with everyday experience and may be dismissed as "obvious or insignificant" as a result. Being labelled a soft science can affect the perceived value of a discipline to society and the amount of funding available to it. In the 1980s, mathematician Serge Lang successfully blocked influential political scientist Samuel P. Huntington's admission to the US National Academy of Sciences, describing Huntington's use of mathematics to quantify the relationship between factors such as "social frustration" (Lang asked Huntington if he possessed a "social-frustration meter") as "pseudoscience". During the recessions of the late 2000s, social science was disproportionately targeted for funding cuts compared to mathematics and natural science. Proposals were made for the United States' National Science Foundation to cease funding disciplines such as political science altogether. Both of these incidents prompted critical discussion of the distinction between hard and soft sciences.
