
Thursday, December 18, 2025

Clearing the neighbourhood

From Wikipedia, the free encyclopedia

In celestial mechanics, "clearing the neighbourhood" (or dynamical dominance) around a celestial body's orbit describes the body becoming gravitationally dominant such that there are no other bodies of comparable size other than its natural satellites or those otherwise under its gravitational influence.

"Clearing the neighbourhood" is one of three necessary criteria for a celestial body to be considered a planet in the Solar System, according to the definition adopted in 2006 by the International Astronomical Union (IAU). In 2015, a proposal was made to extend the definition to exoplanets.

In the end stages of planet formation, a planet, as so defined, will have "cleared the neighbourhood" of its own orbital zone, i.e. removed other bodies of comparable size. A large body that meets the other criteria for a planet but has not cleared its neighbourhood is classified as a dwarf planet. This includes Pluto, whose orbit is partly inside Neptune's and shares its orbital neighbourhood with many Kuiper belt objects. The IAU's definition does not attach specific numbers or equations to this term, but all IAU-recognised planets have cleared their neighbourhoods to a much greater extent (by orders of magnitude) than any dwarf planet or candidate for dwarf planet.

The phrase stems from a paper presented to the 2000 IAU general assembly by the planetary scientists Alan Stern and Harold F. Levison. The authors used several similar phrases as they developed a theoretical basis for determining whether an object orbiting a star is likely to "clear its neighboring region" of planetesimals, based on the object's mass and its orbital period. Steven Soter prefers to use the term dynamical dominance, and Jean-Luc Margot notes that such language "seems less prone to misinterpretation".

Prior to 2006, the IAU had no specific rules for naming planets, as no new planets had been discovered for decades, whereas there were well-established rules for naming an abundance of newly discovered small bodies such as asteroids or comets. The naming process for Eris stalled after the announcement of its discovery in 2005, because its size was comparable to that of Pluto. The IAU sought to resolve the naming of Eris by seeking a taxonomical definition to distinguish planets from minor planets.

Criteria

The phrase refers to an orbiting body (a planet or protoplanet) "sweeping out" its orbital region over time, by gravitationally interacting with smaller bodies nearby. Over many orbital cycles, a large body will tend to cause small bodies either to accrete with it, or to be disturbed to another orbit, or to be captured either as a satellite or into a resonant orbit. As a consequence it does not then share its orbital region with other bodies of significant size, except for its own satellites, or other bodies governed by its own gravitational influence. This latter restriction excludes objects whose orbits may cross but that will never collide with each other due to orbital resonance, such as Jupiter and its trojans, Earth and 3753 Cruithne, or Neptune and the plutinos. As to the extent of orbit clearing required, Jean-Luc Margot emphasises "a planet can never completely clear its orbital zone, because gravitational and radiative forces continually perturb the orbits of asteroids and comets into planet-crossing orbits" and states that the IAU did not intend the impossible standard of impeccable orbit clearing.

Stern–Levison's Λ

In their paper, Stern and Levison sought an algorithm to determine which "planetary bodies control the region surrounding them". They defined Λ (lambda), a measure of a body's ability to scatter smaller masses out of its orbital region over a period of time equal to the age of the Universe (Hubble time). Λ is a dimensionless number defined as

Λ = k m² / a^(3/2)

where m is the mass of the body, a is the body's semi-major axis, and k is a function of the orbital elements of the small body being scattered and the degree to which it must be scattered. In the domain of the solar planetary disc, there is little variation in the average values of k for small bodies at a particular distance from the Sun.

If Λ > 1, then the body will likely clear out the small bodies in its orbital zone. Stern and Levison used this discriminant to separate the gravitationally rounded, Sun-orbiting bodies into überplanets, which are "dynamically important enough to have cleared [their] neighboring planetesimals", and unterplanets. The überplanets are the eight most massive solar orbiters (i.e. the IAU planets), and the unterplanets are the rest (i.e. the IAU dwarf planets).
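Because k varies little at a given distance from the Sun, the ratio of Λ between two bodies is nearly independent of k. A minimal sketch, assuming the proportionality Λ ∝ m²/a^(3/2) (the masses and distances below are approximate published values):

```python
def lambda_ratio(m1, a1, m2, a2):
    """Ratio of Stern-Levison Lambda values for two bodies.

    Lambda is proportional to m**2 / a**1.5, so the unknown
    constant k cancels when taking the ratio.
    """
    return (m1 / m2) ** 2 * (a2 / a1) ** 1.5

# Earth (~5.97e24 kg at 1 AU) vs Pluto (~1.30e22 kg at 39.5 AU):
# the ratio is ~5e7, matching the gap between their tabulated
# Lambda values (1.53e5 vs 2.95e-3).
ratio = lambda_ratio(5.97e24, 1.0, 1.30e22, 39.5)
print(ratio)
```

The k-free ratio is why the planets and dwarf planets separate so cleanly: the gap between the two groups spans several orders of magnitude regardless of the exact value of k.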

Soter's μ

Steven Soter proposed an observationally based measure μ (mu), which he called the "planetary discriminant", to separate bodies orbiting stars into planets and non-planets. He defines μ as

μ = M/m

where μ is a dimensionless parameter, M is the mass of the candidate planet, and m is the mass of all other bodies that share an orbital zone, that is, all bodies whose orbits cross a common radial distance from the primary and whose non-resonant periods differ by less than an order of magnitude.

The order-of-magnitude similarity in period requirement excludes comets from the calculation, but the combined mass of the comets turns out to be negligible compared with the other small Solar System bodies, so their inclusion would have little impact on the results. μ is then calculated by dividing the mass of the candidate body by the total mass of the other objects that share its orbital zone. It is a measure of the actual degree of cleanliness of the orbital zone. Soter proposed that if μ > 100, then the candidate body be regarded as a planet.
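A minimal sketch of the μ calculation (the small-body masses below are illustrative placeholders, not a real census of any orbital zone):

```python
def soter_mu(planet_mass_kg, zone_masses_kg):
    """Soter's planetary discriminant: the candidate's mass divided
    by the total mass of the other bodies sharing its orbital zone."""
    return planet_mass_kg / sum(zone_masses_kg)

# Earth (~5.97e24 kg) against a hypothetical zone population of
# small bodies totalling ~3.5e18 kg:
mu = soter_mu(5.97e24, [2.0e18, 1.0e18, 5.0e17])
print(mu > 100)  # True: such a body would qualify as a planet
```

The threshold μ > 100 means the candidate must carry more than 99% of the mass in its orbital zone.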

Margot's Π

Astronomer Jean-Luc Margot has proposed a discriminant, Π (pi), that can categorise a body based only on its own mass, its semi-major axis, and its star's mass. Like Stern–Levison's Λ, Π is a measure of the ability of the body to clear its orbit, but unlike Λ, it is solely based on theory and does not use empirical data from the Solar System. Π is based on properties that are feasibly determinable even for exoplanetary bodies, unlike Soter's μ, which requires an accurate census of the orbital zone.

Π = k m / (M^(5/2) a^(9/8))

where m is the mass of the candidate body in Earth masses, a is its semi-major axis in AU, M is the mass of the parent star in solar masses, and k is a constant chosen so that Π > 1 for a body that can clear its orbital zone. k depends on the extent of clearing desired and the time required to do so. Margot selected an extent of 2√3 times the Hill radius and a time limit of the parent star's lifetime on the main sequence (which is a function of the mass of the star). Then, in the mentioned units and for a main-sequence lifetime of 10 billion years, k = 807. The body is a planet if Π > 1. The minimum mass necessary to clear the given orbit is given when Π = 1.
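A minimal sketch of the discriminant, assuming the form Π = k·m / (M^(5/2)·a^(9/8)) with k = 807 and the units stated above (Earth masses, AU, solar masses):

```python
def margot_pi(mass_earths, a_au, star_mass_suns=1.0, k=807.0):
    """Margot's planetary discriminant: a value > 1 indicates the
    body can clear its orbital zone within the star's
    main-sequence lifetime."""
    return k * mass_earths / (star_mass_suns**2.5 * a_au**1.125)

# Earth gives Pi = 807 by construction of k.
print(margot_pi(1.0, 1.0))
# Jupiter (~317.8 Earth masses at 5.2 AU) clears easily;
# Pluto (~0.0022 Earth masses at 39.5 AU) does not.
print(margot_pi(317.8, 5.2) > 1, margot_pi(0.0022, 39.5) < 1)
```

The same function reproduces the ordering in the "Numerical values" table: all eight planets give Π well above 1 and all dwarf planets give Π well below 1.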

Π is based on a calculation of the number of orbits required for the candidate body to impart enough energy to a small body in a nearby orbit such that the smaller body is cleared out of the desired orbital extent. This is unlike Λ, which uses an average of the clearing times required for a sample of asteroids in the asteroid belt, and is thus biased to that region of the Solar System. Π's use of the main-sequence lifetime means that the body will eventually clear an orbit around the star; Λ's use of a Hubble time means that the star might disrupt its planetary system (e.g. by going nova) before the object is actually able to clear its orbit.

The formula for Π assumes a circular orbit. Its adaptation to elliptical orbits is left for future work, but Margot expects it to be the same as that of a circular orbit to within an order of magnitude.

To accommodate planets in orbit around brown dwarfs, an updated version of the criterion with a uniform clearing time scale of 10 billion years was published in 2024. The values of Π for Solar System bodies remain unchanged.

In 2025, Hwang cited Margot's approach in proposing to define a planet as the celestial body with the largest value of Π.

Numerical values

Below is a list of planets and dwarf planets ranked by Margot's planetary discriminant Π, in decreasing order. For all eight planets defined by the IAU, Π is orders of magnitude greater than 1, whereas for all dwarf planets, Π is orders of magnitude less than 1. Also listed are Stern–Levison's Λ and Soter's μ; again, the planets are orders of magnitude greater than 1 for Λ and 100 for μ, and the dwarf planets are orders of magnitude less than 1 for Λ and 100 for μ. Also shown are the distances where Π = 1 and Λ = 1 (where the body would change from being a planet to being a dwarf planet).

The mass of Sedna is not known; it is very roughly estimated here as 10^21 kg, on the assumption of a density of about 2 g/cm³.

Rank  Name      Margot's Π  Soter's μ  Stern–Levison Λ  Mass (kg)      Type of object  Π = 1 distance (AU)  Λ = 1 distance (AU)
1     Jupiter   40,115      6.25×10^5  1.30×10^9        1.8986×10^27   5th planet      64,000               6,220,000
2     Saturn    6,044       1.9×10^5   4.68×10^7        5.6846×10^26   6th planet      22,000               1,250,000
3     Venus     947         1.3×10^6   1.66×10^5        4.8685×10^24   2nd planet      320                  2,180
4     Earth     807         1.7×10^6   1.53×10^5        5.9736×10^24   3rd planet      380                  2,870
5     Uranus    423         2.9×10^4   3.84×10^5        8.6832×10^25   7th planet      4,100                102,000
6     Neptune   301         2.4×10^4   2.73×10^5        1.0243×10^26   8th planet      4,800                127,000
7     Mercury   129         9.1×10^4   1.95×10^3        3.3022×10^23   1st planet      29                   60
8     Mars      54          5.1×10^3   9.42×10^2        6.4185×10^23   4th planet      53                   146
9     Ceres     0.04        0.33       8.32×10^−4       9.43×10^20     dwarf planet    0.16                 0.024
10    Pluto     0.028       0.08       2.95×10^−3       1.29×10^22     dwarf planet    1.70                 0.812
11    Eris      0.020       0.10       2.15×10^−3       1.67×10^22     dwarf planet    2.10                 1.130
12    Haumea    0.0078      0.02       2.41×10^−4       4.0×10^21      dwarf planet    0.58                 0.168
13    Makemake  0.0073      0.02       2.22×10^−4       ~4.0×10^21     dwarf planet    0.58                 0.168
14    Quaoar    0.0027      0.007      —                1.4×10^21      dwarf planet    —                    —
15    Gonggong  0.0021      0.009      —                1.8×10^21      dwarf planet    —                    —
16    Orcus     0.0014      0.003      —                6.3×10^20      dwarf planet    —                    —
17    Sedna     ~0.0001     <0.07      3.64×10^−7       ?              dwarf planet    —                    —

Disagreement

Orbits of celestial bodies in the Kuiper belt with approximate distances and inclination. Objects marked with red are in orbital resonances with Neptune, with Pluto (the largest red circle) located in the "spike" of plutinos at the 2:3 resonance

Stern, the principal investigator of the New Horizons mission to Pluto, disagreed with the reclassification of Pluto on the basis of its inability to clear a neighbourhood. He argued that the IAU's wording is vague, and that — like Pluto — Earth, Mars, Jupiter and Neptune have not cleared their orbital neighbourhoods either. Earth co-orbits with 10,000 near-Earth asteroids (NEAs), and Jupiter has 100,000 trojans in its orbital path. "If Neptune had cleared its zone, Pluto wouldn't be there", he said.

The IAU category of 'planets' is nearly identical to Stern's own proposed category of 'überplanets'. In the paper proposing Stern and Levison's Λ discriminant, they stated, "we define an überplanet as a planetary body in orbit about a star that is dynamically important enough to have cleared its neighboring planetesimals ..." and a few paragraphs later, "From a dynamical standpoint, our solar system clearly contains 8 überplanets" — including Earth, Mars, Jupiter, and Neptune. Although Stern proposed this to define dynamical subcategories of planets, he rejected it for defining what a planet is, advocating the use of intrinsic attributes over dynamical relationships.

Butterfly effect

From Wikipedia, the free encyclopedia

In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.

The term is closely associated with the work of the mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome.

The idea that small causes may have large effects in weather was earlier acknowledged by the French mathematician and physicist Henri Poincaré. The American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos.

The concept of the butterfly effect has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences.

History

In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole".

Chaos theory and sensitive dependence on initial conditions were described in numerous earlier works, notably Poincaré's analysis of the three-body problem in 1890. He later proposed that such phenomena could be common, for example in meteorology.

In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908.

In 1950, Alan Turing noted: "The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping."

The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury in which a time traveller alters the future by inadvertently treading on a butterfly in the past.

More precisely, though, almost the exact idea and the exact phrasing (a tiny insect's wing affecting the entire atmosphere's winds) was published in a children's book that became extremely successful and well known globally in 1962, the year before Lorenz published:

"...whatever we do affects everything and everyone else, if even in the tiniest way. Why, when a housefly flaps his wings, a breeze goes round the world."

— spoken by the Princess of Pure Reason in Norton Juster's The Phantom Tollbooth

In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full-precision value 0.506127. The result was a completely different weather scenario.

Lorenz wrote:

At one point I decided to repeat some of the computations in order to examine what was happening in greater detail. I stopped the computer, typed in a line of numbers that it had printed out a while earlier, and set it running again. I went down the hall for a cup of coffee and returned after about an hour, during which time the computer had simulated about two months of weather. The numbers being printed were nothing like the old ones. I immediately suspected a weak vacuum tube or some other computer trouble, which was not uncommon, but before calling for service I decided to see just where the mistake had occurred, knowing that this could speed up the servicing process. Instead of a sudden break, I found that the new values at first repeated the old ones, but soon afterward differed by one and then several units in the last [decimal] place, and then began to differ in the next to the last place and then in the place before that. In fact, the differences more or less steadily doubled in size every four days or so, until all resemblance with the original output disappeared somewhere in the second month. This was enough to tell me what had happened: the numbers that I had typed in were not the exact original numbers, but were the rounded-off values that had appeared in the original printout. The initial round-off errors were the culprits; they were steadily amplifying until they dominated the solution.

— E. N. Lorenz, The Essence of Chaos, University of Washington Press, Seattle (1993), page 134

In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called Deterministic Nonperiodic Flow (the calculations were performed on a Royal McBee LGP-30 computer). Elsewhere he stated:

One meteorologist remarked that if the theory were correct, one flap of a sea gull's wings would be enough to alter the course of the weather forever. The controversy has not yet been settled, but the most recent evidence seems to favor the sea gulls.

A Battus polydamas butterfly in Brazil

Following proposals from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted Does the flap of a butterfly's wings in Brazil set off a tornado in Texas? as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely.

The phrase refers to the effect of a butterfly's wings creating tiny changes in the atmosphere that may ultimately alter the path of a tornado, or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado; the term implies that the flap of the wings is part of the initial conditions of an interconnected complex web, and one set of conditions leads to a tornado while the other does not. The flapping wing creates a small change in the initial condition of the system, which cascades into large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different; it is equally possible, however, that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado.

The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions.

Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos.

While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero). This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics.

In The Essence of Chaos, published in 1993, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." This feature is the same as the sensitive dependence of solutions on initial conditions (SDIC). In the same book, Lorenz used the activity of skiing to develop an idealized skiing model revealing the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC.

Illustrations

The butterfly effect in the Lorenz attractor: time evolution for 0 ≤ t ≤ 30, and the z coordinate over time.

These figures show two segments of the three-dimensional evolution of two trajectories (one in blue, the other in yellow) for the same period of time in the Lorenz attractor, starting at two initial points that differ by only 10^−5 in the x-coordinate. Initially the two trajectories seem coincident, as indicated by the small difference between the z coordinates of the blue and yellow trajectories, but for t > 23 the difference is as large as the value of the trajectory itself. The final position of the cones indicates that the two trajectories are no longer coincident at t = 30. An animation of the Lorenz attractor shows the continuous evolution.
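The divergence described above can be reproduced numerically. A minimal sketch using a fixed-step fourth-order Runge–Kutta integrator with the standard parameters σ = 10, ρ = 28, β = 8/3 (step size and initial point are illustrative choices):

```python
def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step."""
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz_rhs(state)
    k2 = lorenz_rhs(shift(state, k1, dt / 2))
    k3 = lorenz_rhs(shift(state, k2, dt / 2))
    k4 = lorenz_rhs(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

# Two initial points differing by 1e-5 in x, integrated to t = 30.
dt, steps = 0.005, 6000
s1, s2 = (1.0, 1.0, 1.0), (1.0 + 1e-5, 1.0, 1.0)
max_sep = 0.0
for _ in range(steps):
    s1, s2 = rk4_step(s1, dt), rk4_step(s2, dt)
    max_sep = max(max_sep, separation(s1, s2))
# The tiny initial offset grows until it is comparable to the
# size of the attractor itself.
print(max_sep > 1.0)
```

The exponential growth saturates once the two trajectories are as far apart as the attractor is wide, which is why the figures show them fully decorrelated by t = 30.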

Theory and mathematical definition

A plot of the Lorenz strange attractor for the values ρ = 28, σ = 10, β = 8/3.

The butterfly effect, or sensitive dependence on initial conditions, is the property of a dynamical system that, starting from any of various arbitrarily close alternative initial conditions on the attractor, the iterated points will become arbitrarily spread out from each other.

Recurrence (the approximate return of a system toward its initial conditions) and sensitive dependence on initial conditions are the two main ingredients of chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather), since it is impossible to measure the starting atmospheric conditions completely accurately.

A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical. Lorenz defined sensitive dependence as follows:

The property characterizing an orbit (i.e., a solution) if most other orbits that pass close to it at some point do not remain close to it as time advances.

If M is the state space for the map f^t, then f^t displays sensitive dependence on initial conditions if, for any x in M and any δ > 0, there are y in M with distance d(x, y) < δ and a time τ such that d(f^τ(x), f^τ(y)) > e^(aτ) d(x, y)

for some positive parameter a. The definition does not require that all points from a neighborhood separate from the base point x, but it requires one positive Lyapunov exponent. In addition to a positive Lyapunov exponent, boundedness is another major feature within chaotic systems.
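As a concrete check on the "one positive Lyapunov exponent" requirement, a sketch estimating the exponent of the chaotic logistic map x → 4x(1 − x), whose exponent is known analytically to be ln 2 ≈ 0.693 (the starting point and iteration count are arbitrary illustrative choices):

```python
import math

def lyapunov_logistic(x0=0.3, n=100_000):
    """Estimate the Lyapunov exponent of x -> 4x(1 - x) as the
    orbit average of log|f'(x)|, where f'(x) = 4 - 8x."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return total / n

# A positive estimate (converging to ln 2) signals chaos.
print(lyapunov_logistic())
```

A positive exponent means nearby orbits separate exponentially, which is exactly the e^(aτ) growth in the definition above.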

The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map:

x_{n+1} = 4 x_n (1 − x_n), 0 ≤ x_0 ≤ 1,

which, unlike most chaotic maps, has a closed-form solution:

x_n = sin²(2^n θ π)

where the initial condition parameter θ is given by θ = (1/π) arcsin(√x_0). For rational θ, after a finite number of iterations x_n maps into a periodic sequence. But almost all θ are irrational, and, for irrational θ, x_n never repeats itself: it is non-periodic. This solution equation clearly demonstrates the two key features of chaos, stretching and folding: the factor 2^n shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps x_n folded within the range [0, 1].
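A sketch verifying the closed-form solution x_n = sin²(2^n θ π) against direct iteration, and showing the 2^n stretching in action:

```python
import math

def iterate(x0, n):
    """Iterate the logistic map x -> 4x(1 - x) n times."""
    x = x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

def closed_form(x0, n):
    """Closed-form solution x_n = sin^2(2^n * theta * pi),
    with theta = arcsin(sqrt(x0)) / pi."""
    theta = math.asin(math.sqrt(x0)) / math.pi
    return math.sin(2.0 ** n * theta * math.pi) ** 2

# The two agree for small n; round-off roughly doubles each step,
# so agreement degrades for large n (the butterfly effect itself).
print(abs(iterate(0.3, 5) - closed_form(0.3, 5)) < 1e-9)

# Sensitivity: two starts differing by 1e-10 eventually decorrelate.
a, b = 0.3, 0.3 + 1e-10
diff = max(abs(iterate(a, n) - iterate(b, n)) for n in range(30, 60))
print(diff > 0.1)
```

The 1e-10 offset needs roughly 30 doublings to reach order one, after which the two orbits bear no resemblance to each other.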

In physical systems

In weather

Overview

The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions. They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat overstated."

Differentiating types of butterfly effects

The concept of the butterfly effect encompasses several phenomena. The two kinds of butterfly effects, the sensitive dependence on initial conditions and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. In Palmer et al., a new type of butterfly effect is introduced, highlighting the potential impact of small-scale processes on finite predictability within the Lorenz 1969 model. Additionally, the identification of ill-conditioned aspects of the Lorenz 1969 model points to a practical form of finite predictability. These two distinct mechanisms suggesting finite predictability in the Lorenz 1969 model are collectively referred to as the third kind of butterfly effect. Later authors have considered Palmer et al.'s suggestions and aimed to present their perspective without raising specific contentions.

The third kind of butterfly effect with finite predictability was primarily proposed based on a convergent geometric series, known as Lorenz's and Lilly's formulas. Ongoing discussions are addressing the validity of these two formulas for estimating predictability limits.

A comparison of the two kinds of butterfly effects and the third kind of butterfly effect has been documented. In recent studies, it was reported that both meteorological and non-meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance.

Recent debates on butterfly effects

The first kind of butterfly effect (BE1), known as SDIC (Sensitive Dependence on Initial Conditions), is widely recognized and demonstrated through idealized chaotic models. However, opinions differ regarding the second kind of butterfly effect, specifically the impact of a butterfly flapping its wings on tornado formation, as indicated in two 2024 articles. In more recent discussions published by Physics Today, it is acknowledged that the second kind of butterfly effect (BE2) has never been rigorously verified using a realistic weather model. While the studies suggest that BE2 is unlikely in the real atmosphere, its invalidity in this context does not negate the applicability of BE1 in other areas, such as pandemics or historical events.

For the third kind of butterfly effect, the limited predictability within the Lorenz 1969 model is explained by scale interactions in one article and by system ill-conditioning in another more recent study.

Finite predictability in chaotic systems

According to Lighthill (1986), the presence of SDIC (commonly known as the butterfly effect) implies that chaotic systems have a finite predictability limit. In a literature review, it was found that Lorenz's perspective on the predictability limit can be condensed into the following statement:

  • (A). The Lorenz 1963 model qualitatively revealed the essence of a finite predictability within a chaotic system such as the atmosphere. However, it did not determine a precise limit for the predictability of the atmosphere.
  • (B). In the 1960s, the two-week predictability limit was originally estimated based on a doubling time of five days in real-world models. Since then, this finding has been documented in Charney et al. (1966) and has become a consensus.
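The arithmetic behind the two-week figure can be sketched: with a five-day doubling time, an initial-condition error grows by a factor of 2^(t/5), so over two weeks a small error is amplified roughly sevenfold, enough for realistic analysis errors to approach the size of the signal (an illustrative calculation, not a reproduction of the original models):

```python
def error_growth_factor(days, doubling_time_days=5.0):
    """Amplification of a small initial error under exponential
    growth with the given doubling time."""
    return 2.0 ** (days / doubling_time_days)

print(round(error_growth_factor(14.0), 2))  # 6.96 over two weeks
```

A shorter doubling time, or a larger initial error, shortens the horizon correspondingly, which is why the limit is an estimate rather than a hard constant.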

Recently, a short video was created to present Lorenz's perspective on the predictability limit.

A recent study refers to the two-week predictability limit, initially calculated in the 1960s with the Mintz-Arakawa model's five-day doubling time, as the "Predictability Limit Hypothesis." Inspired by Moore's Law, this term acknowledges the collaborative contributions of Lorenz, Mintz, and Arakawa under Charney's leadership. The hypothesis supports the investigation into extended-range predictions using both partial differential equation (PDE)-based physics methods and Artificial Intelligence (AI) techniques.

In quantum mechanics

The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics, including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and John B. Delos and co-workers. Random matrix theory and simulations with quantum computers suggest that some versions of the butterfly effect in quantum mechanics do not exist.

Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect". Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos.

The butterfly effect has appeared across media such as literature (for instance, A Sound of Thunder), films and television (such as The Simpsons), video games (such as Life Is Strange), webcomics (such as Homestuck), musical references (such as "Butterfly Effect" by Travis Scott), AI-driven large language models, and more.

Inner core super-rotation

From Wikipedia, the free encyclopedia
Cutaway of the Earth showing the inner core (white) and outer core (yellow)

Inner core super-rotation is a hypothesized eastward rotation of the inner core of Earth relative to its mantle, for a net rotation rate that is usually faster than Earth as a whole. A 1995 model of Earth's dynamo proposed super-rotations of up to 3 degrees per year; the following year, a seismic study claimed that the proposal was supported by observed discrepancies in the time that P-waves take to travel through the inner and outer core. However, the hypothesis of super-rotation has been disputed by later seismic studies.

Seismic support for inner core super-rotation was based on changes in seismic waves that traversed the inner core and on free oscillations of Earth, but the results are inconsistent between studies. A localized temporal change of the inner core surface, discovered in 2006, provided an alternative explanation for the seismic evidence that had been attributed to super-rotation. Recent studies indicate that a super-rotation of the inner core is inconsistent with the seismic data. Some studies propose that inner core super-rotation and localized temporal changes of the inner core surface coexist, together explaining the seismic data consistently, while others indicate that localized temporal changes of the inner core surface alone are enough to explain the data.

Background

At the center of Earth is the core, a ball with a mean radius of 3480 kilometres that is composed mostly of iron. The outer core is liquid while the inner core, with a radius of 1220 km, is solid. Because the outer core has a low viscosity, it could be rotating at a different rate from the mantle and crust. This possibility was first proposed in 1975 to explain a phenomenon of Earth's magnetic field called westward drift: some parts of the field rotate about 0.2 degrees per year westward relative to Earth's surface. In 1981, David Gubbins of Leeds University predicted that a differential rotation of the inner and outer core could generate a large toroidal magnetic field near the shared boundary, accelerating the inner core to the rate of westward drift. This would be in opposition to the Earth's rotation, which is eastwards, so the overall rotation would be slower.

In 1995, Gary Glatzmaier at Los Alamos and Paul Roberts at UCLA published the first "self-consistent" three-dimensional model of the dynamo in the core. The model predicted that the inner core rotates 3 degrees per year faster than the mantle, a phenomenon that became known as super-rotation. In 1996, Xiaodong Song and Paul G. Richards, scientists at the Lamont–Doherty Earth Observatory, presented seismic evidence for a super-rotation of 0.4 to 1.8 degrees per year, while another study estimated the super-rotation to be 3 degrees per year.
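To put the quoted rates in perspective, a short sketch (using only the figures from the text) shows how long the inner core would take to gain one full extra revolution relative to the mantle at each estimate:

```python
# Years for the inner core to gain one full extra revolution over the
# mantle, at the super-rotation rates quoted above (degrees per year).
rates = (0.4, 1.8, 3.0)
lap_times = {rate: 360 / rate for rate in rates}  # years per extra revolution
for rate, years in lap_times.items():
    print(f"{rate} deg/yr -> one extra revolution every {years:.0f} years")
```

Even the smallest estimate implies the inner core laps the mantle in under a millennium, which is what made the claimed rates seismically detectable over a few decades of records.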

Seismic observations

Schematic of PKP(BC) and PKP(DF) waves
Location of the South Sandwich Islands, which are nearly antipodal to Alaska.

The main observational constraints on inner core rotation come from seismology. When an earthquake occurs, two kinds of seismic wave travel down through the Earth: those with ground motion in the direction the wave propagates (p-waves) and those with transverse motion (s-waves). S-waves do not travel through the outer core because they involve shear stress, a type of deformation that cannot occur in a liquid. In seismic notation, a p-wave is represented by the letter P when traveling through the crust and mantle and by the letter K when traveling through the outer core. A wave that travels through the mantle, core and mantle again before reaching the surface is represented by PKP. For geometric reasons, two branches of PKP are distinguished: PKP(AB) through the upper part of the outer core, and PKP(BC) through the lower part. A wave passing through the inner core is referred to as PKP(DF). (Alternate names for these phases are PKP1, PKP2 and PKIKP.) Seismic waves can travel multiple paths from an earthquake to a given sensor.

PKP(BC) and PKP(DF) waves have similar paths in the mantle, so any difference in the overall travel time is mainly due to the difference in wave speeds between the outer and inner core. Song and Richards looked at how this difference changed over time. Waves traveling from south to north (emitted by earthquakes in the South Sandwich Islands and received at Fairbanks, Alaska) had a differential that changed by 0.4 seconds between 1967 and 1995. By contrast, waves traveling near the equatorial plane (e.g., between Tonga and Germany) showed no change.
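The analysis above boils down to fitting a trend to the BC−DF travel-time differential over time. A minimal sketch, using hypothetical readings chosen to match the reported 0.4 s change between 1967 and 1995 (only those endpoints come from the text):

```python
# (year, BC-DF differential in seconds) -- hypothetical values consistent
# with the reported 0.4 s change over 1967-1995 on the South Sandwich
# Islands -> Fairbanks path.
readings = [(1967, 2.0), (1981, 2.2), (1995, 2.4)]

# Least-squares slope: secular rate of change of the differential (s/yr).
n = len(readings)
mean_t = sum(t for t, _ in readings) / n
mean_d = sum(d for _, d in readings) / n
slope = (sum((t - mean_t) * (d - mean_d) for t, d in readings)
         / sum((t - mean_t) ** 2 for t, _ in readings))

print(f"differential drift: {slope:.4f} s/yr")
```

A nonzero slope on polar paths, against a flat trend on equatorial paths, is the signature Song and Richards interpreted as inner core rotation.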

One of the criticisms of the early estimates of super-rotation was that uncertainties about the hypocenters of the earthquakes, particularly those in the earlier records, caused errors in the measurement of travel times. This error can be reduced by using data for doublet earthquakes. These are earthquakes that have very similar waveforms, indicating that the earthquakes were very close to each other (within about a kilometer). Using doublet data from the South Sandwich Islands, a study in 2015 arrived at a new estimate of 0.41° per year.

Seismic observations – in particular "temporal changes between repeated seismic waves that should traverse the same path through the inner core" – were used to reveal a core rotation slow-down around 2009. This is not thought to have major effects and one cycle of the oscillation in rotation is thought to be about seven decades, coinciding with several other geophysical periodicities, "especially the length of day and magnetic field".

Inner core anisotropy

Song and Richards explained their observations in terms of the prevailing model of inner core anisotropy at the time. Waves were observed to travel faster between north and south than along the equatorial plane. A model for the inner core with uniform anisotropy had a direction of fastest travel tilted at an angle of 10° from the spin axis of the Earth. Since then, the model for the anisotropy has become more complex. The top 100 kilometers are isotropic. Below that, there is stronger anisotropy in a "western" hemisphere (roughly centered on the Americas) than in an "eastern" hemisphere (the other half of the globe), and the anisotropy may increase with depth. There may also be a different orientation of anisotropy in an "innermost inner core" (IMIC) with a radius of about 550 kilometers.

A group at the University of Cambridge used travel time differentials to estimate the longitudes of the hemisphere boundaries at depths of up to 90 kilometers below the inner core boundary. Combining this information with an estimate for the rate of growth of the inner core, they obtained a rate of 0.1–1° per million years.

Estimates of the rotation rate based on travel time differentials have been inconsistent. Those based on the Sandwich Island earthquakes have the fastest rates, although they also have a weaker signal, with PKP(DF) barely emerging above the noise. Estimates based on other paths have been lower or even in the opposite direction. By one analysis, the rotation rate is constrained to be less than 0.1° per year.

Heterogeneity

A study in 1997 revisited the Sandwich Islands data and came to a different conclusion about the origin of changes in travel times, attributing them to local heterogeneities in wave speeds. The new estimate for super-rotation was reduced to 0.2–0.3° per year.

Inner core rotation has also been estimated using PKiKP waves, which scatter off the surface of the inner core, rather than PKP(DF) waves. Estimates using this method have ranged from 0.05 to 0.15° per year.

Normal modes

Another way of constraining the inner core rotation is using normal modes (standing waves in Earth), giving a global picture. Heterogeneities in the core split the modes, and changes in the "splitting functions" over time can be used to estimate the rotation rate. However, their accuracy is limited by the shortage of seismic stations in the 1970s and 1980s, and the inferred rotation can be positive or negative depending on the mode. Overall, normal modes are unable to distinguish the rotation rate from zero.

Theory

In the 1995 model of Glatzmaier and Roberts, the inner core is rotated by a mechanism similar to an induction motor. A thermal wind in the outer core gives rise to a circulation pattern with flow from east to west near the inner core boundary. Magnetic fields passing through the inner and outer cores provide a magnetic torque, while viscous torque on the boundary keeps the inner core and the fluid near it rotating at the same rate on average.

The 1995 model did not include the effect of gravitational coupling between density variations in the mantle and topography on the inner core boundary. A 1996 study predicted that it would force the inner core and mantle to rotate at the same rate, but a 1997 paper showed that relative rotation could occur if the inner core was able to change its shape. This would require the viscosity to be less than 1.5 × 10²⁰ pascal-seconds (Pa·s). It also predicted that, if the viscosity were too low (less than 3 × 10¹⁶ Pa·s), the inner core would not be able to maintain its seismic anisotropy. However, the source of the anisotropy is still not well understood. A model of the viscosity of the inner core based on Earth's nutations constrains the viscosity to 2–7 × 10¹⁴ Pa·s.
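The three viscosity bounds above sit at very different orders of magnitude, and a tiny sketch makes the comparison explicit. The classification logic here is illustrative, not taken from the cited papers; only the numerical bounds come from the text:

```python
# Viscosity bounds quoted above, all in Pa·s.
DEFORM_MAX = 1.5e20            # above this, the inner core cannot change shape
ANISO_MIN = 3e16               # below this, anisotropy could not be maintained
NUTATION_RANGE = (2e14, 7e14)  # estimate from Earth's nutations

def classify(viscosity):
    """Return which of the quoted constraints a viscosity value satisfies."""
    notes = []
    if viscosity < DEFORM_MAX:
        notes.append("allows relative rotation via shape change")
    if viscosity < ANISO_MIN:
        notes.append("too low to maintain seismic anisotropy")
    if NUTATION_RANGE[0] <= viscosity <= NUTATION_RANGE[1]:
        notes.append("within the nutation-based estimate")
    return notes

print(classify(5e14))
```

Note the tension this exposes: the nutation-based estimate lies below the anisotropy threshold, consistent with the text's remark that the source of the anisotropy is not well understood.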

Geodynamo models that take into account gravitational locking and changes in the length of day predict a super-rotation rate of only 1° per million years. Some of the inconsistencies between measurements of the rotation may be accommodated if the rotation rate oscillates.

Cataclysmic pole shift hypothesis

From Wikipedia, the free encyclopedia

The cataclysmic pole shift hypothesis is a pseudo-scientific claim that there have been recent, geologically rapid shifts in the axis of rotation of Earth, causing calamities such as floods and tectonic events or relatively rapid climate changes.

There is evidence of precession and changes in axial tilt, but this change is on much longer time-scales and does not involve relative motion of the spin axis with respect to the planet. However, in what is known as true polar wander, the Earth rotates with respect to a fixed spin axis. Research shows that during the last 200 million years a total true polar wander of some 30° has occurred, but that no rapid shifts in Earth's geographic axial pole were found during this period. A characteristic rate of true polar wander is 1° or less per million years. Between approximately 790 and 810 million years ago, when the supercontinent Rodinia existed, two geologically rapid phases of true polar wander may have occurred. In each of these, the magnetic poles of Earth shifted by approximately 55° due to a large shift in the crust.

Definition and clarification

The geographic poles are defined by the points on the surface of Earth that are intersected by the axis of rotation. The pole shift hypothesis describes a change in location of these poles with respect to the underlying surface – a phenomenon distinct from the changes in axial orientation with respect to the plane of the ecliptic that are caused by precession and nutation. The hypothesized shift would be an amplified version of true polar wander: geologically, a shift of the surface rather than of the planet as a whole, enabled by Earth's molten core.

Pole shift hypotheses are not connected with plate tectonics, the well-accepted geological theory that Earth's surface consists of solid plates which shift over a viscous, or semifluid, asthenosphere; nor with continental drift, the corollary to plate tectonics which maintains that locations of the continents have moved slowly over the surface of Earth, resulting in the gradual emergence and breakup of continents and oceans over hundreds of millions of years.

Pole shift hypotheses are not the same as geomagnetic reversal, the occasional reversal of Earth's magnetic field (effectively switching the north and south magnetic poles).

Speculative history

In popular literature, many conjectures have been suggested involving very rapid polar shift. A slow shift in the poles would produce only minor alterations and no destruction. A more dramatic view assumes more rapid changes, with dramatic alterations of geography and localized destruction due to earthquakes and tsunamis.

Early proponents

An early mention of a shifting of Earth's axis can be found in an 1872 article entitled "Chronologie historique des Mexicains" by Charles Étienne Brasseur de Bourbourg, a specialist in Mesoamerican codices who interpreted ancient Mexican myths as evidence for four periods of global cataclysms that had begun around 10,500 BCE. In the 1930s and 1940s, the American psychic Edgar Cayce prophesied cataclysms he called "Earth changes"; Ruth Montgomery would later cite Cayce's prophecies to support her pole shift theories.

In 1948, Hugh Auchincloss Brown, an electrical engineer, advanced a hypothesis of catastrophic pole shift. Brown also argued that accumulation of ice at the poles caused recurring tipping of the axis, identifying cycles of approximately seven millennia.

In his pseudo-scientific 1950 work Worlds in Collision, Immanuel Velikovsky postulated that the planet Venus emerged from Jupiter as a comet. During two proposed near-approaches in about 1450 BCE, he suggested that the direction of Earth's rotation was changed radically, then reverted to its original direction on the next pass. This disruption supposedly caused earthquakes, tsunamis, and the parting of the Red Sea. Further, he said near misses by Mars between 776 and 687 BCE also caused Earth's axis to change back and forth by ten degrees. Velikovsky cited historical records in support of his work, although his studies were generally ridiculed by the scientific community.

Recent conjectures

Several authors have offered pseudoscientific arguments for the hypothesis, including journalist and New Age enthusiast Ruth Shick Montgomery. Skeptics counter that these works combine speculation, the work of psychics, and modern folklore, while making little effort at basic science, such as attempting to disprove their own hypothesis.

Earth crustal displacement hypothesis

Charles Hapgood is now perhaps the best remembered early proponent of the hypothesis that some climate changes and ice ages could be explained by large sudden shifts of the geographic poles. In his books The Earth's Shifting Crust (1958) (which includes a foreword by Albert Einstein) and Path of the Pole (1970), Hapgood speculated that accumulated polar ice mass destabilizes Earth's rotation, causing crustal displacement but not disturbing Earth's axial orientation. Hapgood argued that shifts (of no more than 40 degrees) occurred about every 5,000 years, interrupting 20,000- to 30,000-year periods of polar stability. He cited recent North Pole locations in Hudson Bay (60°N, 73°W), the Atlantic Ocean between Iceland and Norway (72°N, 10°E) and the Yukon (63°N, 135°W). However, in his subsequent work The Path of the Pole, Hapgood conceded Einstein's point that the weight of the polar ice is insufficient to cause polar shift. Instead, Hapgood argued that causative forces must be located below the surface. Hapgood encouraged Canadian librarian Rand Flem-Ath to pursue scientific evidence backing Hapgood's claims. Flem-Ath published the results of this work in 1995 in When the Sky Fell co-written with his wife Rose.

The idea of earth crust displacement is featured in 2012, a 2009 film based on the 2012 phenomenon.

Scientific research

While there are reputable studies showing that true polar wander has occurred at various times in the past, the rates are much smaller (1° per million years or slower) than predicted by the pole shift hypothesis (up to 1° per thousand years). Analysis of the evidence does not lend credence to Hapgood's hypothesized rapid displacement of layers of Earth.
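The mismatch between the measured and hypothesized rates is a factor of roughly a thousand, which a short calculation (using only the figures from the text) makes concrete:

```python
# Measured true-polar-wander rate vs. the rate the cataclysmic pole shift
# hypothesis would require, both in degrees per year.
measured_rate = 1.0 / 1_000_000    # 1° per million years, or slower
hypothesized_rate = 1.0 / 1_000    # up to 1° per thousand years
ratio = hypothesized_rate / measured_rate
print(f"the hypothesis requires a rate ~{ratio:.0f}x the measured one")

# Hapgood's claimed shifts of up to 40° every ~5,000 years imply faster still:
hapgood_rate = 40.0 / 5_000        # degrees per year
print(f"Hapgood's implied rate: {hapgood_rate * 1_000_000:.0f} deg per Myr")
```

Hapgood's scenario thus implies a wander rate thousands of times faster than anything the paleomagnetic record supports.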

Data indicates that the geographical poles have not deviated by more than about 5° over the last 130 million years, contradicting the hypothesis of a cataclysmic polar wander event.

Possible instances of more rapid true polar wander have been measured in the deep past: between 790 and 810 million years ago, true polar wander of approximately 55° may have occurred twice.

Fanaticism

From Wikipedia, the free encyclopedia ...