
Wednesday, March 5, 2025

Differential geometry

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Differential_geometry
A triangle immersed in a saddle-shaped surface (a hyperbolic paraboloid), together with two diverging ultraparallel lines

Differential geometry is a mathematical discipline that studies the geometry of smooth shapes and smooth spaces, otherwise known as smooth manifolds. It uses the techniques of single variable calculus, vector calculus, linear algebra and multilinear algebra. The field has its origins in the study of spherical geometry as far back as antiquity. It also relates to astronomy, the geodesy of the Earth, and later the study of hyperbolic geometry by Lobachevsky. The simplest examples of smooth spaces are the plane and space curves and surfaces in the three-dimensional Euclidean space, and the study of these shapes formed the basis for development of modern differential geometry during the 18th and 19th centuries.

Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. A geometric structure is one which defines some notion of size, distance, shape, volume, or other rigidifying structure. For example, in Riemannian geometry distances and angles are specified, in symplectic geometry volumes may be computed, in conformal geometry only angles are specified, and in gauge theory certain fields are given over the space. Differential geometry is closely related to, and is sometimes taken to include, differential topology, which concerns itself with properties of differentiable manifolds that do not rely on any additional geometric structure (see that article for more discussion on the distinction between the two subjects). Differential geometry is also related to the geometric aspects of the theory of differential equations, otherwise known as geometric analysis.

Differential geometry finds applications throughout mathematics and the natural sciences. Most prominently the language of differential geometry was used by Albert Einstein in his theory of general relativity, and subsequently by physicists in the development of quantum field theory and the standard model of particle physics. Outside of physics, differential geometry finds applications in chemistry, economics, engineering, control theory, computer graphics and computer vision, and recently in machine learning.

History and development

The history and development of differential geometry as a subject begins at least as far back as classical antiquity. It is intimately linked to the development of geometry more generally, of the notion of space and shape, and of topology, especially the study of manifolds. In this section we focus primarily on the history of the application of infinitesimal methods to geometry, and later to the ideas of tangent spaces, and eventually the development of the modern formalism of the subject in terms of tensors and tensor fields.

Classical antiquity until the Renaissance (300 BC – 1600 AD)

The study of differential geometry, or at least the study of the geometry of smooth shapes, can be traced back at least to classical antiquity. In particular, much was known about the geometry of the Earth, a spherical geometry, in the time of the ancient Greek mathematicians. Famously, Eratosthenes calculated the circumference of the Earth around 200 BC, and around 150 AD Ptolemy in his Geography introduced the stereographic projection for the purposes of mapping the shape of the Earth. Implicitly throughout this time principles that form the foundation of differential geometry and calculus were used in geodesy, although in a much simplified form. Namely, as far back as Euclid's Elements it was understood that a straight line could be defined by its property of providing the shortest distance between two points, and applying this same principle to the surface of the Earth leads to the conclusion that great circles, which are only locally similar to straight lines in a flat plane, provide the shortest path between two points on the Earth's surface. Indeed, the measurements of distance along such geodesic paths by Eratosthenes and others can be considered a rudimentary measure of arclength of curves, a concept which did not see a rigorous definition in terms of calculus until the 1600s.

Around this time there were only minimal overt applications of the theory of infinitesimals to the study of geometry, a precursor to the modern calculus-based study of the subject. In Euclid's Elements the notion of tangency of a line to a circle is discussed, and Archimedes applied the method of exhaustion to compute the areas of smooth shapes such as the circle, and the volumes of smooth three-dimensional solids such as the sphere, cones, and cylinders.

There was little development in the theory of differential geometry between antiquity and the beginning of the Renaissance. Before the development of calculus by Newton and Leibniz, the most significant development in the understanding of differential geometry came from Gerardus Mercator's development of the Mercator projection as a way of mapping the Earth. Mercator had an understanding of the advantages and pitfalls of his map design, and in particular was aware of the conformal nature of his projection, as well as the difference between the praga, the lines of shortest distance on the Earth, and the directio, the straight line paths on his map. Mercator noted that the praga appear as oblique curves (curvatur) in this projection. This fact reflects the lack of a metric-preserving map of the Earth's surface onto a flat plane, a consequence of the later Theorema Egregium of Gauss.

After calculus (1600–1800)

An osculating circle of a plane curve

The first systematic or rigorous treatment of geometry using the theory of infinitesimals and notions from calculus began around the 1600s when calculus was first developed by Gottfried Leibniz and Isaac Newton. At this time, the recent work of René Descartes introducing analytic coordinates to geometry allowed geometric shapes of increasing complexity to be described rigorously. In particular around this time Pierre de Fermat, Newton, and Leibniz began the study of plane curves and the investigation of concepts such as points of inflection and circles of osculation, which aid in the measurement of curvature. Indeed, already in his first paper on the foundations of calculus, Leibniz notes that the infinitesimal condition d²y = 0 indicates the existence of an inflection point. Shortly after this time the Bernoulli brothers, Jacob and Johann, made important early contributions to the use of infinitesimals to study geometry. In lectures by Johann Bernoulli at the time, later collated by L'Hôpital into the first textbook on differential calculus, the tangents to plane curves of various types are computed using the condition dy = 0, and points of inflection are calculated similarly. At this same time the orthogonality between the osculating circles of a plane curve and the tangent directions was realised, and the first analytical formula for the radius of an osculating circle, essentially the first analytical formula for the notion of curvature, was written down.
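
These conditions translate directly into symbolic computation. The short Python sketch below is an illustration added here (the cubic is chosen arbitrarily): it locates an inflection point from the condition d²y = 0 and evaluates the radius of the osculating circle, the reciprocal of the curvature, for a graph y = f(x).

    import sympy as sp

    x = sp.symbols('x')
    f = x**3 - 3*x                     # an arbitrarily chosen cubic

    d1, d2 = sp.diff(f, x), sp.diff(f, x, 2)
    inflections = sp.solve(sp.Eq(d2, 0), x)            # condition d^2 y = 0  ->  x = 0

    # curvature of a graph y = f(x): kappa = |y''| / (1 + y'^2)^(3/2)
    kappa = sp.Abs(d2) / (1 + d1**2)**sp.Rational(3, 2)
    radius = sp.simplify(1 / kappa.subs(x, 1))         # osculating radius at x = 1 is 1/6

    print(inflections, radius)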

In the wake of the development of analytic geometry and plane curves, Alexis Clairaut began the study of space curves at just the age of 16. In his book Clairaut introduced the notion of tangent and subtangent directions to space curves in relation to the directions which lie along a surface on which the space curve lies. Thus Clairaut demonstrated an implicit understanding of the tangent space of a surface and studied this idea using calculus for the first time. Importantly Clairaut introduced the terminology of curvature and double curvature, essentially the notion of principal curvatures later studied by Gauss and others.

Around this same time, Leonhard Euler, originally a student of Johann Bernoulli, provided many significant contributions not just to the development of geometry, but to mathematics more broadly. In regard to differential geometry, Euler studied the notion of a geodesic on a surface, deriving the first analytical geodesic equation, and later introduced the first set of intrinsic coordinate systems on a surface, beginning the theory of intrinsic geometry upon which modern geometric ideas are based. Around this time Euler's study of mechanics in the Mechanica led to the realization that a mass traveling along a surface not under the effect of any force would traverse a geodesic path, an early precursor to the important foundational ideas of Einstein's general relativity, and also to the Euler–Lagrange equations and the first theory of the calculus of variations, which in modern differential geometry underpins many techniques in symplectic geometry and geometric analysis. This theory was used by Lagrange, a co-developer of the calculus of variations, to derive the first differential equation describing a minimal surface in terms of the Euler–Lagrange equation. In 1760 Euler proved a theorem expressing the curvature of a space curve on a surface in terms of the principal curvatures, known as Euler's theorem.

Later in the 1700s, the new French school led by Gaspard Monge began to make contributions to differential geometry. Monge made important contributions to the theory of plane curves and surfaces, and studied surfaces of revolution and envelopes of plane curves and space curves. Several students of Monge made contributions to this same theory, and for example Charles Dupin provided a new interpretation of Euler's theorem in terms of the principal curvatures, which is the modern form of the equation.

Intrinsic geometry and non-Euclidean geometry (1800–1900)

The field of differential geometry became an area of study in its own right, distinct from the broader idea of analytic geometry, in the 1800s, primarily through the foundational work of Carl Friedrich Gauss and Bernhard Riemann, and also through the important contributions of Nikolai Lobachevsky on hyperbolic and non-Euclidean geometry and, throughout the same period, the development of projective geometry.

In 1827 Gauss produced the Disquisitiones generales circa superficies curvas, detailing the general theory of curved surfaces and often dubbed the single most important work in the history of differential geometry. On the strength of this work and his subsequent papers and unpublished notes on the theory of surfaces, Gauss has been called the inventor of non-Euclidean geometry and the inventor of intrinsic differential geometry. In his fundamental paper Gauss introduced the Gauss map, Gaussian curvature, and the first and second fundamental forms, proved the Theorema Egregium showing the intrinsic nature of the Gaussian curvature, and studied geodesics, computing the area of a geodesic triangle in various non-Euclidean geometries on surfaces.

At this time Gauss was already of the opinion that the standard paradigm of Euclidean geometry should be discarded, and was in possession of private manuscripts on non-Euclidean geometry which informed his study of geodesic triangles. Around this same time János Bolyai and Lobachevsky independently discovered hyperbolic geometry and thus demonstrated the existence of consistent geometries outside Euclid's paradigm. Concrete models of hyperbolic geometry were produced by Eugenio Beltrami later in the 1860s, and Felix Klein coined the term non-Euclidean geometry in 1871, and through the Erlangen program put Euclidean and non-Euclidean geometries on the same footing. Implicitly, the spherical geometry of the Earth that had been studied since antiquity was a non-Euclidean geometry, an elliptic geometry.

The development of intrinsic differential geometry in the language of Gauss was spurred on by his student, Bernhard Riemann, in his Habilitationsschrift, On the hypotheses which lie at the foundation of geometry. In this work Riemann introduced the notion of a Riemannian metric and the Riemannian curvature tensor for the first time, and began the systematic study of differential geometry in higher dimensions. This intrinsic point of view in terms of the Riemannian metric, denoted by ds² by Riemann, was the development of an idea of Gauss's about the linear element ds of a surface. At this time Riemann began to introduce the systematic use of linear algebra and multilinear algebra into the subject, making great use of the theory of quadratic forms in his investigation of metrics and curvature. At this time Riemann did not yet develop the modern notion of a manifold, as even the notion of a topological space had not been encountered, but he did propose that it might be possible to investigate or measure the properties of the metric of spacetime through the analysis of masses within spacetime, linking with the earlier observation of Euler that masses under the effect of no forces would travel along geodesics on surfaces, and predicting Einstein's fundamental observation of the equivalence principle a full 60 years before it appeared in the scientific literature.

In the wake of Riemann's new description, the focus of techniques used to study differential geometry shifted from the ad hoc and extrinsic methods of the study of curves and surfaces to a more systematic approach in terms of tensor calculus and Klein's Erlangen program, and progress increased in the field. The notion of groups of transformations was developed by Sophus Lie and Jean Gaston Darboux, leading to important results in the theory of Lie groups and symplectic geometry. The notion of differential calculus on curved spaces was studied by Elwin Christoffel, who introduced the Christoffel symbols which describe the covariant derivative in 1868, and by others including Eugenio Beltrami who studied many analytic questions on manifolds. In 1899 Luigi Bianchi produced his Lectures on differential geometry which studied differential geometry from Riemann's perspective, and a year later Tullio Levi-Civita and Gregorio Ricci-Curbastro produced their textbook systematically developing the theory of absolute differential calculus and tensor calculus. It was in this language that differential geometry was used by Einstein in the development of general relativity and pseudo-Riemannian geometry.

Modern differential geometry (1900–2000)

The subject of modern differential geometry emerged in the early 1900s in response to the foundational contributions of many mathematicians, including importantly the work of Henri Poincaré on the foundations of topology. At the start of the 1900s there was a major movement within mathematics to formalise the foundational aspects of the subject to avoid crises of rigour and accuracy, known as Hilbert's program. As part of this broader movement, the notion of a topological space was distilled by Felix Hausdorff in 1914, and by 1942 there were many different notions of manifold of a combinatorial and differential-geometric nature.

Interest in the subject was also focused by the emergence of Einstein's theory of general relativity and the importance of the Einstein field equations. Einstein's theory popularised the tensor calculus of Ricci and Levi-Civita and introduced the notation g for a Riemannian metric, and Γ for the Christoffel symbols, both coming from G in Gravitation. Élie Cartan helped reformulate the foundations of the differential geometry of smooth manifolds in terms of exterior calculus and the theory of moving frames, leading in the world of physics to Einstein–Cartan theory.

Following this early development, many mathematicians contributed to the development of the modern theory, including Jean-Louis Koszul who introduced connections on vector bundles, Shiing-Shen Chern who introduced characteristic classes to the subject and began the study of complex manifolds, Sir William Vallance Douglas Hodge and Georges de Rham who expanded understanding of differential forms, Charles Ehresmann who introduced the theory of fibre bundles and Ehresmann connections, and others. Of particular importance was Hermann Weyl who made important contributions to the foundations of general relativity, introduced the Weyl tensor providing insight into conformal geometry, and first defined the notion of a gauge leading to the development of gauge theory in physics and mathematics.

In the middle and late 20th century differential geometry as a subject expanded in scope and developed links to other areas of mathematics and physics. The development of gauge theory and Yang–Mills theory in physics brought bundles and connections into focus, leading to developments in gauge theory. Many analytical results were investigated including the proof of the Atiyah–Singer index theorem. The development of complex geometry was spurred on by parallel results in algebraic geometry, and results in the geometry and global analysis of complex manifolds were proven by Shing-Tung Yau and others. In the latter half of the 20th century new analytic techniques were developed in regards to curvature flows such as the Ricci flow, which culminated in Grigori Perelman's proof of the Poincaré conjecture. During this same period primarily due to the influence of Michael Atiyah, new links between theoretical physics and differential geometry were formed. Techniques from the study of the Yang–Mills equations and gauge theory were used by mathematicians to develop new invariants of smooth manifolds. Physicists such as Edward Witten, the only physicist to be awarded a Fields medal, made new impacts in mathematics by using topological quantum field theory and string theory to make predictions and provide frameworks for new rigorous mathematics, which has resulted for example in the conjectural mirror symmetry and the Seiberg–Witten invariants.

Branches

Riemannian geometry

Riemannian geometry studies Riemannian manifolds, smooth manifolds with a Riemannian metric. This is a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Riemannian geometry generalizes Euclidean geometry to spaces that are not necessarily flat, though they still resemble Euclidean space at each point infinitesimally, i.e. in the first order of approximation. Various concepts based on length, such as the arc length of curves, area of plane regions, and volume of solids all possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended to the notion of a covariant derivative of a tensor. Many concepts of analysis and differential equations have been generalized to the setting of Riemannian manifolds.
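
As a concrete illustration (added here, not taken from the article), the sketch below approximates arc length under the hyperbolic metric ds² = (dx² + dy²)/y² on the upper half-plane, a standard example of a Riemannian metric that differs from the Euclidean one; the helper function and sample curves are assumptions made for the example.

    import numpy as np

    def hyperbolic_length(curve, t0, t1, n=10_000):
        """Arc length of curve(t) = (x(t), y(t)) under ds^2 = (dx^2 + dy^2)/y^2."""
        t = np.linspace(t0, t1, n)
        x, y = curve(t)
        dx, dy = np.gradient(x, t), np.gradient(y, t)
        speed = np.sqrt(dx**2 + dy**2) / y                         # metric factor 1/y
        return np.sum((speed[:-1] + speed[1:]) / 2 * np.diff(t))   # trapezoid rule

    # vertical segment from (0, 1) to (0, e): hyperbolic length is exactly 1
    print(hyperbolic_length(lambda t: (0 * t, np.exp(t)), 0.0, 1.0))   # ~1.0
    # horizontal segment from (0, 1) to (1, 1): hyperbolic length is also 1
    print(hyperbolic_length(lambda t: (t, 1 + 0 * t), 0.0, 1.0))       # ~1.0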

A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e. for small neighborhoods of points. Any two regular curves are locally isometric. However, the Theorema Egregium of Carl Friedrich Gauss showed that for surfaces, the existence of a local isometry imposes that the Gaussian curvatures at the corresponding points must be the same. In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the "ordinary" plane and space considered in Euclidean and non-Euclidean geometry.
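
The intrinsic nature of Gaussian curvature can be checked symbolically. The sketch below is an added illustration: it computes the curvature of a round sphere of radius R from the first fundamental form alone, using the standard formula for an orthogonal metric (F = 0), with (u, v) taken as colatitude and longitude.

    import sympy as sp

    u, v, R = sp.symbols('u v R', positive=True)   # (u, v) = (colatitude, longitude)
    E, G = R**2, R**2 * sp.sin(u)**2               # first fundamental form, F = 0
    W = R**2 * sp.sin(u)                           # sqrt(E*G), valid for 0 < u < pi

    # Gaussian curvature from the metric alone (orthogonal-coordinates formula)
    K = -(sp.diff(sp.diff(G, u) / W, u) + sp.diff(sp.diff(E, v) / W, v)) / (2 * W)
    print(sp.simplify(K))                          # 1/R**2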

Pseudo-Riemannian geometry

Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity.
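
A minimal numerical illustration (added here) of a metric that is not positive-definite is the Minkowski metric of special relativity, under which tangent vectors can have negative, zero, or positive squared length.

    import numpy as np

    eta = np.diag([-1.0, 1.0, 1.0, 1.0])             # Minkowski metric, signature (-,+,+,+)

    def interval(v):
        return v @ eta @ v

    print(interval(np.array([1.0, 0.0, 0.0, 0.0])))  # -1.0 : timelike
    print(interval(np.array([1.0, 1.0, 0.0, 0.0])))  #  0.0 : null (light-like)
    print(interval(np.array([0.0, 1.0, 0.0, 0.0])))  #  1.0 : spacelike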

Finsler geometry

Finsler geometry has Finsler manifolds as the main object of study. This is a differential manifold with a Finsler metric, that is, a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that:

  1. F(x, my) = m F(x, y) for all (x, y) in TM and all m ≥ 0,
  2. F is infinitely differentiable in TM ∖ {0},
  3. The vertical Hessian of F² is positive definite.
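
As a small added check of these conditions, the sketch below takes F to be the Euclidean norm on each tangent space (the Riemannian special case) and verifies homogeneity and the positive-definiteness of the vertical Hessian of F² numerically; the smoothness condition away from the zero section is not something a numerical test can establish.

    import numpy as np

    # F is the Euclidean norm on each tangent space and does not depend on x here
    F = lambda x, y: np.linalg.norm(y)

    x = np.zeros(3)
    y = np.array([1.0, 2.0, 2.0])

    # condition 1: positive homogeneity, F(x, m*y) = m * F(x, y) for m >= 0
    print(np.isclose(F(x, 3.0 * y), 3.0 * F(x, y)))            # True

    # condition 3: F^2 = y . y, so its vertical Hessian is 2*I, positive definite
    hessian_F2 = 2.0 * np.eye(3)
    print(np.all(np.linalg.eigvalsh(hessian_F2) > 0))          # True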

Symplectic geometry

Symplectic geometry is the study of symplectic manifolds. An almost symplectic manifold is a differentiable manifold equipped with a smoothly varying non-degenerate skew-symmetric bilinear form on each tangent space, i.e., a nondegenerate 2-form ω, called the symplectic form. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed: dω = 0.

A diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form and a symplectomorphism is an area-preserving diffeomorphism. The phase space of a mechanical system is a symplectic manifold and they made an implicit appearance already in the work of Joseph Louis Lagrange on analytical mechanics and later in Carl Gustav Jacobi's and William Rowan Hamilton's formulations of classical mechanics.
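
In the linear model of this situation (an added illustration, not from the article), the standard symplectic form on R⁴ is represented by a block matrix Ω, and a linear map M is a symplectomorphism exactly when M^T Ω M = Ω.

    import numpy as np

    n = 2
    Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.eye(n),       np.zeros((n, n))]])    # omega(u, v) = u^T Omega v

    print(np.linalg.det(Omega) != 0)            # True: the form is non-degenerate

    # rotating the q-coordinates and the p-coordinates by the same orthogonal
    # matrix R preserves Omega, so M below is a linear symplectomorphism
    t = 0.7
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    M = np.block([[R, np.zeros((n, n))], [np.zeros((n, n)), R]])
    print(np.allclose(M.T @ Omega @ M, Omega))  # True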

By contrast with Riemannian geometry, where the curvature provides a local invariant of Riemannian manifolds, Darboux's theorem states that all symplectic manifolds are locally isomorphic. The only invariants of a symplectic manifold are global in nature and topological aspects play a prominent role in symplectic geometry. The first result in symplectic topology is probably the Poincaré–Birkhoff theorem, conjectured by Henri Poincaré and then proved by G.D. Birkhoff in 1912. It claims that if an area preserving map of an annulus twists each boundary component in opposite directions, then the map has at least two fixed points.

Contact geometry

Contact geometry deals with certain manifolds of odd dimension. It is close to symplectic geometry and like the latter, it originated in questions of classical mechanics. A contact structure on a (2n + 1)-dimensional manifold M is given by a smooth hyperplane field H in the tangent bundle that is as far as possible from being associated with the level sets of a differentiable function on M (the technical term is "completely nonintegrable tangent hyperplane distribution"). Near each point p, a hyperplane distribution is determined by a nowhere vanishing 1-form α, which is unique up to multiplication by a nowhere vanishing function: H_p = ker α_p ⊂ T_pM.

A local 1-form α on M is a contact form if the restriction of its exterior derivative to H is a non-degenerate two-form and thus induces a symplectic structure on H_p at each point. If the distribution H can be defined by a global one-form α then this form is contact if and only if the top-dimensional form

α ∧ (dα)^n

is a volume form on M, i.e. does not vanish anywhere. A contact analogue of the Darboux theorem holds: all contact structures on an odd-dimensional manifold are locally isomorphic and can be brought to a certain local normal form by a suitable choice of the coordinate system.
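
The standard example is the 1-form α = dz − y dx on R³, whose kernel is a contact structure. The short sketch below (added here as an illustration) evaluates α ∧ dα on the coordinate basis and confirms that it is nonzero, hence a volume form.

    import numpy as np

    def alpha(p, u):                       # alpha_p(u) = u_z - y * u_x
        x, y, z = p
        return u[2] - y * u[0]

    def d_alpha(p, u, v):                  # d(alpha) = dx ^ dy
        return u[0] * v[1] - u[1] * v[0]

    def alpha_wedge_dalpha(p, u, v, w):    # (alpha ^ d alpha)(u, v, w)
        return (alpha(p, u) * d_alpha(p, v, w)
                - alpha(p, v) * d_alpha(p, u, w)
                + alpha(p, w) * d_alpha(p, u, v))

    p = np.array([0.3, -1.2, 2.0])                     # an arbitrary point
    e1, e2, e3 = np.eye(3)
    print(alpha_wedge_dalpha(p, e1, e2, e3))           # 1.0, nonvanishing everywhere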

Complex and Kähler geometry

Complex differential geometry is the study of complex manifolds. An almost complex manifold is a real manifold M, endowed with a tensor of type (1, 1), i.e. a vector bundle endomorphism (called an almost complex structure)

J : TM → TM, such that J² = −1.

It follows from this definition that an almost complex manifold is even-dimensional.

An almost complex manifold is called complex if N_J = 0, where N_J is a tensor of type (2, 1) related to J, called the Nijenhuis tensor (or sometimes the torsion). An almost complex manifold is complex if and only if it admits a holomorphic coordinate atlas. An almost Hermitian structure is given by an almost complex structure J, along with a Riemannian metric g, satisfying the compatibility condition

g(JX, JY) = g(X, Y).

An almost Hermitian structure defines naturally a differential two-form

ω(X, Y) := g(JX, Y).

The following two conditions are equivalent:

  1. N_J = 0 and dω = 0
  2. ∇J = 0,

where ∇ is the Levi-Civita connection of g. In this case, (J, g) is called a Kähler structure, and a Kähler manifold is a manifold endowed with a Kähler structure. In particular, a Kähler manifold is both a complex and a symplectic manifold. A large class of Kähler manifolds (the class of Hodge manifolds) is given by all the smooth complex projective varieties.
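
At a single tangent space these structures reduce to linear algebra. The added sketch below takes the standard complex structure J and the Euclidean metric g on R² and checks J² = −1, the compatibility condition, and the skew-symmetry of the associated two-form ω(X, Y) = g(JX, Y).

    import numpy as np

    J = np.array([[0.0, -1.0], [1.0, 0.0]])       # multiplication by i under R^2 ~ C
    g = np.eye(2)                                  # the Euclidean metric

    print(np.allclose(J @ J, -np.eye(2)))          # True: J^2 = -1

    X, Y = np.array([1.0, 2.0]), np.array([-0.5, 3.0])
    print(np.isclose((J @ X) @ g @ (J @ Y), X @ g @ Y))    # True: g(JX, JY) = g(X, Y)

    omega = lambda A, B: (J @ A) @ g @ B           # the fundamental two-form
    print(np.isclose(omega(X, Y), -omega(Y, X)))   # True: omega is skew-symmetric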

CR geometry

CR geometry is the study of the intrinsic geometry of boundaries of domains in complex manifolds.

Conformal geometry

Conformal geometry is the study of the set of angle-preserving (conformal) transformations on a space.

Differential topology

Differential topology is the study of global geometric invariants without a metric or symplectic form.

Differential topology starts from natural operations such as the Lie derivative of natural vector bundles and the de Rham differential of forms. Besides Lie algebroids, Courant algebroids also start to play a more important role.

Lie groups

A Lie group is a group in the category of smooth manifolds. Besides its algebraic properties, it also enjoys differential-geometric properties. The most obvious construction is that of a Lie algebra, which is the tangent space at the unit endowed with the Lie bracket between left-invariant vector fields. Beside the structure theory there is also the wide field of representation theory.
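
As a small added illustration, the rotation group SO(3) is a Lie group whose Lie algebra consists of the 3×3 skew-symmetric matrices; the sketch below checks numerically that the matrix exponential of a skew-symmetric matrix is a rotation, and that the commutator of two such matrices is again skew-symmetric (closure under the Lie bracket).

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)

    def skew(w):                           # element of the Lie algebra so(3)
        x, y, z = w
        return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

    A, B = skew(rng.normal(size=3)), skew(rng.normal(size=3))

    R = expm(A)                            # exponential map so(3) -> SO(3)
    print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))  # True True

    bracket = A @ B - B @ A                # the Lie bracket [A, B]
    print(np.allclose(bracket, -bracket.T))  # True: the bracket stays in so(3)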

Geometric analysis

Geometric analysis is a mathematical discipline where tools from differential equations, especially elliptic partial differential equations are used to establish new results in differential geometry and differential topology.

Gauge theory

Gauge theory is the study of connections on vector bundles and principal bundles, and arises out of problems in mathematical physics and physical gauge theories which underpin the standard model of particle physics. Gauge theory is concerned with the study of differential equations for connections on bundles, and the resulting geometric moduli spaces of solutions to these equations as well as the invariants that may be derived from them. These equations often arise as the Euler–Lagrange equations describing the equations of motion of certain physical systems in quantum field theory, and so their study is of considerable interest in physics.

Bundles and connections

The apparatus of vector bundles, principal bundles, and connections on bundles plays an extraordinarily important role in modern differential geometry. A smooth manifold always carries a natural vector bundle, the tangent bundle. Loosely speaking, this structure by itself is sufficient only for developing analysis on the manifold, while doing geometry requires, in addition, some way to relate the tangent spaces at different points, i.e. a notion of parallel transport. An important example is provided by affine connections. For a surface in R3, tangent planes at different points can be identified using a natural path-wise parallelism induced by the ambient Euclidean space, which has a well-known standard definition of metric and parallelism. In Riemannian geometry, the Levi-Civita connection serves a similar purpose. More generally, differential geometers consider spaces with a vector bundle and an arbitrary affine connection which is not defined in terms of a metric. In physics, the manifold may be spacetime and the bundles and connections are related to various physical fields.
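
A concrete illustration of parallel transport (added here, assuming the round unit sphere with its Levi-Civita connection) is to carry a tangent vector once around a circle of constant colatitude θ0 and observe that it returns rotated; the deficit angle 2π(1 − cos θ0) equals the solid angle enclosed by the loop, the simplest example of holonomy. The sketch integrates the transport equation numerically.

    import numpy as np

    theta0 = np.pi / 4                    # colatitude of the circle of transport
    steps = 20_000
    dphi = 2 * np.pi / steps
    rate = np.cos(theta0)                 # rotation rate of the components per unit phi

    a = np.array([1.0, 0.0])              # orthonormal components (a_theta, a_phi)
    for _ in range(steps):                # explicit Euler step of the transport equation
        a = a + dphi * np.array([rate * a[1], -rate * a[0]])
        a = a / np.linalg.norm(a)         # parallel transport preserves length

    measured = np.arctan2(a[1], a[0])     # net rotation after one full loop
    expected = np.angle(np.exp(-2j * np.pi * np.cos(theta0)))
    print(measured, expected)             # both ~1.84 rad = 2*pi*(1 - cos(theta0)) mod 2*pi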

Intrinsic versus extrinsic

From the beginning and through the middle of the 19th century, differential geometry was studied from the extrinsic point of view: curves and surfaces were considered as lying in a Euclidean space of higher dimension (for example a surface in an ambient space of three dimensions). The simplest results are those in the differential geometry of curves and differential geometry of surfaces. Starting with the work of Riemann, the intrinsic point of view was developed, in which one cannot speak of moving "outside" the geometric object because it is considered to be given in a free-standing way. The fundamental result here is Gauss's theorema egregium, to the effect that Gaussian curvature is an intrinsic invariant.

The intrinsic point of view is more flexible. For example, it is useful in relativity where space-time cannot naturally be taken as extrinsic. However, there is a price to pay in technical complexity: the intrinsic definitions of curvature and connections become much less visually intuitive.

These two points of view can be reconciled, i.e. the extrinsic geometry can be considered as a structure additional to the intrinsic one. (See the Nash embedding theorem.) In the formalism of geometric calculus both extrinsic and intrinsic geometry of a manifold can be characterized by a single bivector-valued one-form called the shape operator.

Applications

Below are some examples of how differential geometry is applied to other fields of science and mathematics.

Non-ionizing radiation

From Wikipedia, the free encyclopedia
Different types of electromagnetic radiation

Non-ionizing (or non-ionising) radiation refers to any type of electromagnetic radiation that does not carry enough energy per quantum (photon energy) to ionize atoms or molecules—that is, to completely remove an electron from an atom or molecule. Instead of producing charged ions when passing through matter, non-ionizing electromagnetic radiation has sufficient energy only for excitation (the movement of an electron to a higher energy state). Non-ionizing radiation is not a significant health risk. In contrast, ionizing radiation has a higher frequency and shorter wavelength than non-ionizing radiation, and can be a serious health hazard: exposure to it can cause burns, radiation sickness, many kinds of cancer, and genetic damage. Using ionizing radiation requires elaborate radiological protection measures, which in general are not required with non-ionizing radiation.

Non-ionizing radiation is used in various technologies, including radio broadcasting, telecommunications, medical imaging, and heat therapy.

The region at which radiation is considered "ionizing" is not well defined, since different molecules and atoms ionize at different energies. The usual definitions have suggested that radiation with particle or photon energies less than 10 electronvolts (eV) be considered non-ionizing. Another suggested threshold is 33 electronvolts, which is the energy needed to ionize water molecules. The light from the Sun that reaches the earth is largely composed of non-ionizing radiation, since the ionizing far-ultraviolet rays have been filtered out by the gases in the atmosphere, particularly oxygen.
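
These thresholds can be translated into wavelengths with the relation E = hc/λ, i.e. E[eV] ≈ 1239.84 / λ[nm]. The short sketch below (added for illustration, with example wavelengths chosen arbitrarily) classifies a few photons against the 10 eV criterion.

    # E[eV] ~= 1239.84 / wavelength[nm], from E = h*c / lambda
    def photon_energy_ev(wavelength_nm):
        return 1239.84 / wavelength_nm

    examples = [("far UV, 120 nm", 120.0), ("UV-A, 365 nm", 365.0),
                ("green light, 532 nm", 532.0), ("2.45 GHz microwave, ~12.2 cm", 1.22e8)]
    for name, nm in examples:
        e = photon_energy_ev(nm)
        verdict = "ionizing" if e >= 10 else "non-ionizing"
        print(f"{name}: {e:.2e} eV -> {verdict} (10 eV threshold)")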

Mechanisms of interaction with matter, including living tissue

Near ultraviolet, visible light, infrared, microwave, radio waves, and low-frequency radio frequency (very low frequency, extremely low frequency) are all examples of non-ionizing radiation. By contrast, far ultraviolet light, X-rays, gamma-rays, and all particle radiation from radioactive decay are ionizing. Visible and near ultraviolet electromagnetic radiation may induce photochemical reactions, or accelerate radical reactions, such as photochemical aging of varnishes or the breakdown of flavoring compounds in beer to produce the "lightstruck flavor". Near ultraviolet radiation, although technically non-ionizing, may still excite and cause photochemical reactions in some molecules. This happens because at ultraviolet photon energies, molecules may become electronically excited or promoted to free-radical form, even without ionization taking place.

The occurrence of ionization depends on the energy of the individual particles or waves, and not on their number. An intense flood of particles or waves will not cause ionization if these particles or waves do not carry enough energy to be ionizing, unless they raise the temperature of a body to a point high enough to ionize small fractions of atoms or molecules by the process of thermal-ionization. In such cases, even "non-ionizing radiation" is capable of causing thermal-ionization if it deposits enough heat to raise temperatures to ionization energies. These reactions occur at far higher energies than with ionizing radiation, which requires only a single particle to ionize. A familiar example of thermal ionization is the flame-ionization of a common fire, and the browning reactions in common food items induced by infrared radiation, during broiling-type cooking.

The energy of non-ionizing radiation is low, and instead of producing charged ions when passing through matter, it has only sufficient energy to change the rotational, vibrational or electronic valence configurations of molecules and atoms. This produces thermal effects. The possible non-thermal effects of non-ionizing forms of radiation on living tissue have only recently been studied. Much of the current debate is about relatively low levels of exposure to radio frequency (RF) radiation from mobile phones and base stations producing "non-thermal" effects. Some experiments have suggested that there may be biological effects at non-thermal exposure levels, but the evidence for production of health hazard is contradictory and unproven. The scientific community and international bodies acknowledge that further research is needed to improve our understanding in some areas. The consensus is that there is no consistent and convincing scientific evidence of adverse health effects caused by RF radiation at powers sufficiently low that no thermal health effects are produced.

Health risks

Different biological effects are observed for different types of non-ionizing radiation. The upper frequencies of non-ionizing radiation (lower-energy ultraviolet) are capable of non-thermal biological damage, similar to ionizing radiation. Whether non-thermal effects of radiation at much lower frequencies (microwave, millimetre and radio-wave radiation) entail health risks remains to be proven.

Upper frequencies

Exposure to non-ionizing ultraviolet light is a risk factor for developing skin cancer (especially non-melanoma skin cancers), sunburn, premature aging of skin, and other effects. Despite the possible hazards it is beneficial to humans in the right dosage, since Vitamin D is produced due to the biochemical effects of ultraviolet light. Vitamin D plays many roles in the body with the most well known being in bone mineralisation.

Lower frequencies

Non-ionizing radiation hazard sign

In addition to the well-known effect of non-ionizing ultraviolet light causing skin cancer, non-ionizing radiation can produce non-mutagenic effects such as inciting thermal energy in biological tissue that can lead to burns. In 2011, the International Agency for Research on Cancer (IARC) from the World Health Organization (WHO) released a statement adding RF electromagnetic fields (including microwave and millimetre waves) to their list of things which are possibly carcinogenic to humans.

In terms of potential biological effects, the non-ionizing portion of the spectrum can be subdivided into:

  1. The optical radiation portion, where electron excitation can occur (visible light, infrared light)
  2. The portion where the wavelength is smaller than the body (microwave and higher-frequency RF). Heating via induced currents can occur. In addition, there are claims of other adverse biological effects. Such effects are not well understood and even largely denied.
  3. The portion where the wavelength is much larger than the body, and heating via induced currents seldom occurs (lower-frequency RF, power frequencies, static fields).

The above effects have only been shown to be due to heating effects. At low power levels where there is no heating effect, the risk of cancer is not significant.

The International Agency for Research on Cancer recently stated that there could be some risk from non-ionizing radiation to humans. But a subsequent study reported that the basis of the IARC evaluation was not consistent with observed incidence trends. This and other reports suggest that there is virtually no way that the results on which the IARC based its conclusions are correct.


Band | Source | Wavelength | Frequency | Biological effects
UV-A | Black light, sunlight | 319–400 nm | 750–940 THz | Eye: photochemical cataract; skin: erythema, including pigmentation
Visible light | Sunlight, fire, LEDs, light bulbs, lasers | 400–780 nm | 385–750 THz | Eye: photochemical & thermal retinal injury; skin: photoaging
IR-A | Sunlight, thermal radiation, incandescent light bulbs, lasers, remote controls | 780 nm – 1.4 μm | 215–385 THz | Eye: thermal retinal injury, thermal cataract; skin: burn
IR-B | Sunlight, thermal radiation, incandescent light bulbs, lasers | 1.4–3 μm | 100–215 THz | Eye: corneal burn, cataract; skin: burn
IR-C | Sunlight, thermal radiation, incandescent light bulbs, far-infrared laser | 3 μm – 1 mm | 300 GHz – 100 THz | Eye: corneal burn, cataract; heating of body surface
Microwave | Mobile/cell phones, microwave ovens, cordless phones, millimeter waves, airport millimeter scanners, motion detectors, long-distance telecommunications, radar, Wi-Fi | 1 mm – 33 cm | 1–300 GHz | Heating of body tissue
Radio-frequency radiation | Mobile/cell phones, television, FM, AM, shortwave, CB, cordless phones | 33 cm – 3 km | 100 kHz – 1 GHz | Heating of body tissue, raised body temperature
Low-frequency RF | Power lines | >3 km | <100 kHz | Cumulation of charge on body surface; disturbance of nerve & muscle responses
Static field[6] | Strong magnets, MRI | Infinite | 0 Hz (technically static fields are not "radiation") | Electric charge on body surface

Types

Near ultraviolet radiation

Ultraviolet light can cause burns to skin and cataracts to the eyes. Ultraviolet is classified into near, medium and far UV according to energy, where near and medium ultraviolet are technically non-ionizing, but where all UV wavelengths can cause photochemical reactions that to some extent mimic ionization (including DNA damage and carcinogenesis). UV radiation above 10 eV (wavelength shorter than 125 nm) is considered ionizing. However, the rest of the UV spectrum from 3.1 eV (400 nm) to 10 eV, although technically non-ionizing, can produce photochemical reactions that are damaging to molecules by means other than simple heat. Since these reactions are often very similar to those caused by ionizing radiation, often the entire UV spectrum is considered to be equivalent to ionizing radiation in its interaction with many systems (including biological systems).

For example, ultraviolet light, even in the non-ionizing range, can produce free radicals that induce cellular damage, and can be carcinogenic. Photochemistry such as pyrimidine dimer formation in DNA can happen through most of the UV band, including much of the band that is formally non-ionizing. Ultraviolet light induces melanin production from melanocyte cells to cause sun tanning of skin. Vitamin D is produced on the skin by a radical reaction initiated by UV radiation.

Plastic (polycarbonate) sunglasses generally absorb UV radiation. UV overexposure to the eyes causes snow blindness, common to areas with reflective surfaces, such as snow or water.

Visible light

Light, or visible light, is the very narrow range of electromagnetic radiation that is visible to the human eye (about 400–700 nm, or by some definitions 380–750 nm). More broadly, physicists refer to light as electromagnetic radiation of all wavelengths, whether visible or not.

High-energy visible light is blue-violet light with a higher damaging potential.

Infrared

Infrared (IR) light is electromagnetic radiation with a wavelength between 0.7 and 300 micrometers, which equates to a frequency range between approximately 1 and 430 THz. IR wavelengths are longer than those of visible light, but shorter than those of terahertz radiation and microwaves. Bright sunlight provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation.

Microwave

Microwaves are electromagnetic waves with wavelengths ranging from as long as one meter to as short as one millimeter, or equivalently, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), and various sources use different boundaries. In all cases, microwave includes the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3 mm). Applications include cellular (mobile) telephones, radars, airport scanners, microwave ovens, earth remote sensing satellites, and radio and satellite communications.

Radio waves

Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Like all other electromagnetic waves, they travel at the speed of light. Naturally occurring radio waves are made by lightning, or by astronomical objects. Artificially generated radio waves are used for fixed and mobile radio communication, broadcasting, radar and other navigation systems, satellite communication, computer networks and innumerable other applications. Different frequencies of radio waves have different propagation characteristics in the Earth's atmosphere; long waves may cover a part of the Earth very consistently, shorter waves can reflect off the ionosphere and travel around the world, and much shorter wavelengths bend or reflect very little and travel on a line of sight.

Very low frequency (VLF)

Very low frequency or VLF is the range of radio frequencies from 3 to 30 kHz. Since there is not much bandwidth in this band of the radio spectrum, only the very simplest signals are used, such as for radio navigation. This band is also known as the myriametre band or myriametre wave, as the wavelengths range from ten to one myriametre (an obsolete metric unit equal to 10 kilometres).

Extremely low frequency (ELF)

Extremely low frequency (ELF) is the range of radiation frequencies from 300 Hz to 3 kHz. In atmospheric science, an alternative definition is usually given, from 3 Hz to 3 kHz. In the related magnetosphere science, the lower-frequency electromagnetic oscillations (pulsations occurring below ~3 Hz) are considered to be in the ULF range, which is thus also defined differently from the ITU radio bands.

Thermal radiation

Thermal radiation, a common synonym for infrared when it occurs at temperatures commonly encountered on Earth, is the process by which the surface of an object radiates its thermal energy in the form of electromagnetic waves. The infrared radiation that one can feel emanating from a household heater, infrared heat lamp, or kitchen oven is an example of thermal radiation, as are the IR and visible light emitted by a glowing incandescent light bulb (not hot enough to emit the blue high frequencies and therefore appearing yellowish; fluorescent lamps are not thermal and can appear bluer). Thermal radiation is generated when the energy from the movement of charged particles within molecules is converted to the radiant energy of electromagnetic waves. The emitted wave frequency of the thermal radiation is a probability distribution depending only on temperature, and for a black body is given by Planck's law of radiation. Wien's displacement law gives the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the heat intensity (power emitted per unit area).
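
As an added numerical illustration of these laws, the sketch below applies Wien's displacement law (λ_max = b/T) and the Stefan–Boltzmann law (P/A = σT⁴), using standard values of the constants, to roughly room temperature and to the temperature of the Sun's photosphere.

    SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    WIEN_B = 2.897771955e-3     # Wien displacement constant, m K

    for T in (300.0, 5772.0):   # roughly room temperature and the Sun's surface
        peak_um = WIEN_B / T * 1e6
        power = SIGMA * T**4
        print(f"T = {T:6.0f} K: peak ~ {peak_um:5.2f} um, emitted power ~ {power:.3g} W/m^2")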

Parts of the electromagnetic spectrum of thermal radiation may be ionizing, if the object emitting the radiation is hot enough (has a high enough temperature). A common example of such radiation is sunlight, which is thermal radiation from the Sun's photosphere and which contains enough ultraviolet light to cause ionization in many molecules and atoms. An extreme example is the flash from the detonation of a nuclear weapon, which emits a large number of ionizing X-rays purely as a product of heating the atmosphere around the bomb to extremely high temperatures.

As noted above, even low-frequency thermal radiation may cause thermal ionization whenever it deposits sufficient thermal energy to raise temperatures to a high enough level. Common examples of this are the ionization (plasma) seen in common flames, and the molecular changes caused by the "browning" in food-cooking, which is a chemical process that begins with a large component of ionization.

Black-body radiation

Black-body radiation is radiation from an idealized radiator that emits at any temperature the maximum possible amount of radiation at any given wavelength. A black body will also absorb the maximum possible incident radiation at any given wavelength. The radiation emitted covers the entire electromagnetic spectrum and the intensity (power/unit-area) at a given frequency is dictated by Planck's law of radiation. A black body at temperatures at or below room temperature would thus appear absolutely black as it would not reflect any light. Theoretically a black body emits electromagnetic radiation over the entire spectrum from very low frequency radio waves to X-rays. The frequency at which the black-body radiation is at maximum is given by Wien's displacement law.

Is intelligent life a ‘once in a universe’ likelihood?

Daniel Mills, Jason Wright, Jennifer Macalady | March 5, 2025
https://geneticliteracyproject.org/2025/03/05/is-intelligent-life-a-once-in-a-universe-likelihood-recalibrating-the-possibility-of-extraterrestrial-life/?mc_cid=3d708bfe25&mc_eid=539cc5c98c


A popular model of evolution concludes that it was incredibly unlikely for humanity to evolve on Earth, and that extraterrestrial intelligence is vanishingly rare.

But as experts on the entangled history of life and our planet, we propose that the coevolution of life and Earth’s surface environment may have unfolded in a way that makes the evolutionary origin of humanlike intelligence a more foreseeable or expected outcome than generally thought.

The hard-steps model

Brandon Carter, physicist, Laboratoire Univers et Théories. Credit: Brandon Carter/Wikimedia Commons, CC BY-SA

Some of the greatest evolutionary biologists of the 20th century famously dismissed the prospect of humanlike intelligence beyond Earth.

This view, firmly rooted in biology, independently gained support from physics in 1983 with an influential publication by Brandon Carter, a theoretical physicist.

In 1983, Carter attempted to explain what he called a remarkable coincidence: the close approximation between the estimated lifespan of the Sun – 10 billion years – and the time Earth took to produce humans – 5 billion years, rounding up.

He imagined three possibilities. In one, intelligent life like humans generally arises very quickly on planets, geologically speaking – in perhaps millions of years. In another, it typically arises in about the time it took on Earth. And in the last, he imagined that Earth was lucky – ordinarily it would take much longer, say, trillions of years for such life to form.

Carter rejected the first possibility because life on Earth took so much longer than that. He rejected the second as an unlikely coincidence, since there is no reason the processes that govern the Sun’s lifespan – nuclear fusion – should just happen to have the same timescale as biological evolution.

So Carter landed on the third explanation: that humanlike life generally takes much longer to arise than the time provided by the lifetime of a star. To explain why humanlike life took so long to arise, Carter proposed that it must depend on extremely unlikely evolutionary steps, and that the Earth is extraordinarily lucky to have taken them all.

The Sun will likely be able to keep planets habitable for only part of its lifetime – by the time it hits 10 billion years, it will get too hot. NASA/JPL-Caltech

He called these evolutionary steps hard steps, and they had two main criteria. One, the hard steps must be required for human existence – meaning if they had not happened, then humans would not be here. Two, the hard steps must have very low probabilities of occurring in the available time, meaning they usually require timescales approaching 10 billion years.
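
The logic of the model can be made concrete with a toy simulation (an illustration added here; the number of steps and the waiting times are invented). Each hard step is modelled as an exponential waiting time whose mean exceeds the habitable window T, the steps occur in sequence, and one asks how often a planet completes them all in time.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 1.0                     # habitable window (arbitrary units)
    n_steps = 4                 # number of hypothetical hard steps
    mean_wait = 2.0 * T         # each step typically takes longer than the window

    trials = 1_000_000
    waits = rng.exponential(mean_wait, size=(trials, n_steps))
    finish = waits.sum(axis=1)
    success = finish < T

    print(f"fraction of 'Earths' completing all steps in time: {success.mean():.4%}")
    print(f"mean completion time of the lucky ones: {finish[success].mean():.2f} * T")

In runs of this kind only a small fraction of simulated planets succeed, and those that do tend to finish close to the deadline, echoing Carter's point that observers should expect to find themselves late in their star's habitable window.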

Tracing humans’ evolutionary lineage will bring you back billions of years.

Do hard steps exist?

The physicists Frank Tipler and John Barrow predicted that hard steps must have happened only once in the history of life – a logic taken from evolutionary biology.

If an evolutionary innovation required for human existence was truly improbable in the available time, then it likely wouldn’t have happened more than once, although it must have happened at least once, since we exist.

For example, the origin of nucleated – or eukaryotic – cells is one of the most popular hard steps scientists have proposed. Since humans are eukaryotes, humanity would not exist if the origin of eukaryotic cells had never happened.

On the universal tree of life, all eukaryotic life falls on exactly one branch. This suggests that eukaryotic cells originated only once, which is consistent with their origin being unlikely.

In the evolutionary tree of life, organisms that have eukaryotic cells are all on the same branch, suggesting this type of cell evolved only once. VectorMine/iStock via Getty Images Plus

The other most popular hard-step candidates – the origin of life, oxygen-producing photosynthesis, multicellular animals and humanlike intelligence – all share the same pattern. They are each constrained to a single branch on the tree of life.

However, as the evolutionary biologist and paleontologist Geerat Vermeij argued, there are other ways to explain why these evolutionary events appear to have happened only once.

This pattern of apparently singular origins could arise from information loss due to extinction and the incompleteness of the fossil record. Perhaps these innovations each evolved more than once, but only one example of each survived to the modern day. Maybe the extinct examples never became fossilized, or paleontologists haven’t recognized them in the fossil record.

Or maybe these innovations did happen only once, but because they could have happened only once. For example, perhaps the first evolutionary lineage to achieve one of these innovations quickly outcompeted other similar organisms from other lineages for resources. Or maybe the first lineage changed the global environment so dramatically that other lineages lost the opportunity to evolve the same innovation. In other words, once the step occurred in one lineage, the chemical or ecological conditions were changed enough that other lineages could not develop in the same way.

If these alternative mechanisms explain the uniqueness of these proposed hard steps, then none of them would actually qualify as hard steps.

But if none of these steps were hard, then why didn’t humanlike intelligence evolve much sooner in the history of life?

Environmental evolution

Geobiologists reconstructing the conditions of the ancient Earth can easily come up with reasons why intelligent life did not evolve sooner in Earth history.

For example, 90% of Earth’s history elapsed before the atmosphere had enough oxygen to support humans. Likewise, up to 50% of Earth’s history elapsed before the atmosphere had enough oxygen to support modern eukaryotic cells.

All of the hard-step candidates have their own environmental requirements. When the Earth formed, these requirements weren’t in place. Instead, they appeared later on, as Earth’s surface environment changed.

We suggest that as the Earth changed physically and chemically over time, its surface conditions allowed for a greater diversity of habitats for life. And these changes operate on geologic timescales – billions of years – explaining why the proposed hard steps evolved when they did, and not much earlier.

In this view, humans originated when they did because the Earth became habitable to humans only relatively recently. Carter had not considered these points in 1983.

Moving forward

But hard steps could still exist. How can scientists test whether they do?

Earth and life scientists could work together to determine when Earth’s surface environment first became supportive of each proposed hard step. Earth scientists could also forecast how much longer Earth will stay habitable for the different kinds of life associated with each proposed hard step – such as humans, animals and eukaryotic cells.

Evolutionary biologists and paleontologists could better constrain how many times each hard-step candidate occurred. If they did occur only once each, they could see whether this came from their innate biological improbability or from environmental factors.

Lastly, astronomers could use data from planets beyond the solar system to figure out how common life-hosting planets are, and how often these planets have hard-step candidates, such as oxygen-producing photosynthesis and intelligent life.

If our view is correct, then the Earth and life have evolved together in a way that is more typical of life-supporting planets – not in the rare and improbable way that the hard-steps model predicts. Humanlike intelligence would then be a more expected outcome of Earth’s evolution, rather than a cosmic fluke.

Researchers from a variety of disciplines, from paleontologists and biologists to astronomers, can work together to learn more about the probability of intelligent life evolving on Earth and elsewhere in the universe.

If the evolution of humanlike life was more probable than the hard-steps model predicts, then researchers are more likely to find evidence for extraterrestrial intelligence in the future.

Daniel Mills is a Postdoctoral Fellow in Geomicrobiology at Ludwig Maximilian University of Munich. Check out Daniel’s website

Jason Wright is Professor of Astronomy and Astrophysics at Penn State. Follow Jason on X @Astro_Wright

Jennifer Macalady is Professor of Geoscience at Penn State. Follow Jennifer on Bluesky @jmacalad.bsky.social

A version of this article was originally posted at Conversation and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article. Find Conversation on X @Conversation_US

Cancer research

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cancer_research ...