
Wednesday, December 25, 2019

Three-body problem

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Three-body_problem 
Approximate trajectories of three identical bodies located at the vertices of a scalene triangle and having zero initial velocities. The center of mass, in accordance with the law of conservation of momentum, remains in place.
 
In physics and classical mechanics, the three-body problem is the problem of taking the initial positions and velocities (or momenta) of three point masses and solving for their subsequent motion according to Newton's laws of motion and Newton's law of universal gravitation. The three-body problem is a special case of the n-body problem. Unlike two-body problems, no general closed-form solution exists, as the resulting dynamical system is chaotic for most initial conditions, and numerical methods are generally required.

Historically, the first specific three-body problem to receive extended study was the one involving the Moon, the Earth, and the Sun. In an extended modern sense, a three-body problem is any problem in classical mechanics or quantum mechanics that models the motion of three particles.

Mathematical description

The mathematical statement of the three-body problem can be given in terms of the Newtonian equations of motion for vector positions $\mathbf{r}_i = (x_i, y_i, z_i)$ of three gravitationally interacting bodies with masses $m_i$:

$$\ddot{\mathbf{r}}_1 = -G m_2 \frac{\mathbf{r}_1 - \mathbf{r}_2}{|\mathbf{r}_1 - \mathbf{r}_2|^3} - G m_3 \frac{\mathbf{r}_1 - \mathbf{r}_3}{|\mathbf{r}_1 - \mathbf{r}_3|^3},$$
$$\ddot{\mathbf{r}}_2 = -G m_3 \frac{\mathbf{r}_2 - \mathbf{r}_3}{|\mathbf{r}_2 - \mathbf{r}_3|^3} - G m_1 \frac{\mathbf{r}_2 - \mathbf{r}_1}{|\mathbf{r}_2 - \mathbf{r}_1|^3},$$
$$\ddot{\mathbf{r}}_3 = -G m_1 \frac{\mathbf{r}_3 - \mathbf{r}_1}{|\mathbf{r}_3 - \mathbf{r}_1|^3} - G m_2 \frac{\mathbf{r}_3 - \mathbf{r}_2}{|\mathbf{r}_3 - \mathbf{r}_2|^3},$$

where $G$ is the gravitational constant. This is a set of nine second-order differential equations. The problem can also be stated equivalently in the Hamiltonian formalism, in which case it is described by a set of 18 first-order differential equations, one for each component of the positions $\mathbf{r}_i$ and momenta $\mathbf{p}_i$:

$$\frac{d\mathbf{r}_i}{dt} = \frac{\partial H}{\partial \mathbf{p}_i}, \qquad \frac{d\mathbf{p}_i}{dt} = -\frac{\partial H}{\partial \mathbf{r}_i},$$

where $H$ is the Hamiltonian:

$$H = -\frac{G m_1 m_2}{|\mathbf{r}_1 - \mathbf{r}_2|} - \frac{G m_2 m_3}{|\mathbf{r}_2 - \mathbf{r}_3|} - \frac{G m_3 m_1}{|\mathbf{r}_3 - \mathbf{r}_1|} + \frac{|\mathbf{p}_1|^2}{2 m_1} + \frac{|\mathbf{p}_2|^2}{2 m_2} + \frac{|\mathbf{p}_3|^2}{2 m_3}.$$

In this case $H$ is simply the total energy of the system, gravitational plus kinetic.
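The equations of motion and the Hamiltonian above translate directly into code. A minimal sketch in Python with NumPy (the normalized value of G is an illustrative assumption, not from the text):

```python
import numpy as np

G = 1.0  # gravitational constant in normalized units (illustrative assumption)

def accelerations(r, m):
    """Right-hand side of the Newtonian equations: nine second-order ODEs.

    r: (3, 3) array of position vectors, m: (3,) array of masses.
    """
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[i] - r[j]
                a[i] -= G * m[j] * d / np.linalg.norm(d) ** 3
    return a

def total_energy(r, p, m):
    """Hamiltonian: sum |p_i|^2 / (2 m_i) minus sum_{i<j} G m_i m_j / |r_i - r_j|."""
    kinetic = sum(np.dot(p[i], p[i]) / (2 * m[i]) for i in range(3))
    potential = -sum(G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])
                     for i in range(3) for j in range(i + 1, 3))
    return kinetic + potential
```

A quick sanity check on such a sketch is that the pairwise forces cancel in total (Newton's third law), so the center of mass stays put, exactly as the trajectory figure at the top illustrates.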

Restricted three-body problem

The circular restricted three-body problem is a valid approximation of elliptical orbits found in the Solar System, and this can be visualized as a combination of the potentials due to the gravity of the two primary bodies along with the centrifugal effect from their rotation (Coriolis effects are dynamic and not shown). The Lagrange points can then be seen as the five places where the gradient on the resultant surface is zero (shown as blue lines), indicating that the forces are in balance there.
 
In the restricted three-body problem, a body of negligible mass (the "planetoid") moves under the influence of two massive bodies. Having negligible mass, the planetoid exerts no force on the two massive bodies, which can therefore be described in terms of a two-body motion. Usually this two-body motion is taken to consist of circular orbits around the center of mass, and the planetoid is assumed to move in the plane defined by the circular orbits. 

The restricted three-body problem is easier to analyze theoretically than the full problem. It is of practical interest as well since it accurately describes many real-world problems, the most important example being the Earth-Moon-Sun system. For these reasons, it has occupied an important role in the historical development of the three-body problem.

Mathematically, the problem is stated as follows. Let $m_1, m_2$ be the masses of the two massive bodies, with (planar) coordinates $(x_1, y_1)$ and $(x_2, y_2)$, and let $(x, y)$ be the coordinates of the planetoid. For simplicity, choose units such that the distance between the two massive bodies, as well as the gravitational constant, are both equal to $1$. Then, the motion of the planetoid is given by

$$\frac{d^2 x}{dt^2} = -m_1 \frac{x - x_1}{r_1^3} - m_2 \frac{x - x_2}{r_2^3},$$
$$\frac{d^2 y}{dt^2} = -m_1 \frac{y - y_1}{r_1^3} - m_2 \frac{y - y_2}{r_2^3},$$

where $r_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}$. In this form the equations of motion carry an explicit time dependence through the coordinates $x_i(t), y_i(t)$. However, this time dependence can be removed through a transformation to a rotating reference frame, which is an important simplification in any subsequent analysis.
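The planetoid's equations of motion can be sketched as follows. The circular primary orbits about the center of mass and the mass values in the check are illustrative assumptions; the units are the normalized ones described above:

```python
import numpy as np

def planetoid_acceleration(x, y, t, m1, m2):
    """Acceleration of the massless planetoid in the circular restricted problem.

    Units are chosen so the primaries' separation and G are both 1, so the
    primaries circle the center of mass (at the origin) with angular
    frequency omega = sqrt(m1 + m2).  Their coordinates carry the explicit
    time dependence mentioned in the text.
    """
    omega = np.sqrt(m1 + m2)
    x1, y1 = -m2 / (m1 + m2) * np.cos(omega * t), -m2 / (m1 + m2) * np.sin(omega * t)
    x2, y2 = m1 / (m1 + m2) * np.cos(omega * t), m1 / (m1 + m2) * np.sin(omega * t)
    r1 = np.hypot(x - x1, y - y1)
    r2 = np.hypot(x - x2, y - y2)
    ax = -m1 * (x - x1) / r1**3 - m2 * (x - x2) / r2**3
    ay = -m1 * (y - y1) / r1**3 - m2 * (y - y2) / r2**3
    return ax, ay
```

Far from the pair, the planetoid should feel approximately the pull of a single body of mass m1 + m2 at the origin, which gives a cheap consistency check on the formulas.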

Solutions

General solution

While a system of three bodies interacting gravitationally is chaotic, a system of three bodies interacting elastically is not.

There is no general analytical solution to the three-body problem given by simple algebraic expressions and integrals. Moreover, the motion of three bodies is generally non-repeating, except in special cases.

On the other hand, in 1912 the Finnish mathematician Karl Fritiof Sundman proved that there exists a series solution in powers of $t^{1/3}$ for the 3-body problem. This series converges for all real $t$, except for initial conditions corresponding to zero angular momentum. (In practice the latter restriction is insignificant since such initial conditions are rare, having Lebesgue measure zero.)

An important issue in proving this result is the fact that the radius of convergence for this series is determined by the distance to the nearest singularity. Therefore, it is necessary to study the possible singularities of the 3-body problem. As will be briefly discussed below, the only singularities in the 3-body problem are binary collisions (collisions between two particles at an instant) and triple collisions (collisions between three particles at an instant).

Collisions, whether binary or triple (in fact, any number), are somewhat improbable, since it has been shown that they correspond to a set of initial conditions of measure zero. However, there is no criterion known to be put on the initial state in order to avoid collisions for the corresponding solution. So Sundman's strategy consisted of the following steps:
  1. Using an appropriate change of variables to continue analyzing the solution beyond the binary collision, in a process known as regularization.
  2. Proving that triple collisions only occur when the angular momentum $L$ vanishes. By restricting the initial data to $L \neq 0$, he removed all real singularities from the transformed equations for the 3-body problem.
  3. Showing that if $L \neq 0$, then not only can there be no triple collision, but the system is strictly bounded away from a triple collision. This implies, by using Cauchy's existence theorem for differential equations, that there are no complex singularities in a strip (depending on the value of $L$) in the complex plane centered around the real axis (shades of Kovalevskaya).
  4. Finding a conformal transformation that maps this strip into the unit disc. For example, if $s = t^{1/3}$ (the new variable after the regularization) and if $|\operatorname{Im} s| \le \beta$, then this map is given by

$$\sigma = \frac{e^{\pi s / (2\beta)} - 1}{e^{\pi s / (2\beta)} + 1}.$$

This finishes the proof of Sundman's theorem.

Unfortunately, the corresponding series converges very slowly. That is, obtaining a value of meaningful precision requires so many terms that the solution is of little practical use. Indeed, in 1930, David Beloriszky calculated that if Sundman's series were to be used for astronomical observations, the computations would involve at least $10^{8\,000\,000}$ terms.

Special-case solutions

In 1767, Leonhard Euler found three families of periodic solutions in which the three masses are collinear at each instant.

In 1772, Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with Euler's collinear solutions, these solutions form the central configurations for the three-body problem. These solutions are valid for any mass ratios, and the masses move on Keplerian ellipses. These four families are the only known solutions for which there are explicit analytic formulae. In the special case of the circular restricted three-body problem, these solutions, viewed in a frame rotating with the primaries, become points which are referred to as L1, L2, L3, L4, and L5, and called Lagrangian points, with L4 and L5 being symmetric instances of Lagrange's solution.

In work summarized in 1892–1899, Henri Poincaré established the existence of an infinite number of periodic solutions to the restricted three-body problem, together with techniques for continuing these solutions into the general three-body problem. 

In 1893, Meissel stated what is now called the Pythagorean three-body problem: three masses in the ratio 3:4:5 are placed at rest at the vertices of a 3:4:5 right triangle. Burrau further investigated this problem in 1913. In 1967 Victor Szebehely and C. Frederick Peters established eventual escape for this problem using numerical integration, while at the same time finding a nearby periodic solution.
In the 1970s, Michel Hénon and Roger A. Broucke each found a set of solutions that form part of the same family of solutions: the Broucke–Henon–Hadjidemetriou family. In this family the three objects all have the same mass and can exhibit both retrograde and direct forms. In some of Broucke's solutions two of the bodies follow the same path.

An animation of the figure-8 solution to the three-body problem over a single period T ≃ 6.3259.
 
In 1993, a zero angular momentum solution with three equal masses moving around a figure-eight shape was discovered numerically by physicist Cris Moore at the Santa Fe Institute. Its formal existence was later proved in 2000 by mathematicians Alain Chenciner and Richard Montgomery. The solution has been shown numerically to be stable for small perturbations of the mass and orbital parameters, which raises the intriguing possibility that such orbits could be observed in the physical universe. However, it has been argued that this occurrence is unlikely since the domain of stability is small. For instance, the probability of a binary-binary scattering event resulting in a figure-8 orbit has been estimated to be a small fraction of 1%.

In 2013, physicists Milovan Šuvakov and Veljko Dmitrašinović at the Institute of Physics in Belgrade discovered 13 new families of solutions for the equal-mass zero-angular-momentum three-body problem.

In 2015, physicist Ana Hudomal discovered 14 new families of solutions for the equal-mass zero-angular-momentum three-body problem.

In 2017, researchers Xiaoming Li and Shijun Liao found 669 new periodic orbits of the equal-mass zero-angular-momentum three-body problem. This was followed in 2018 by an additional 1223 new solutions for a zero-momentum system of unequal masses.

In 2018, Li and Liao reported 234 solutions to the unequal-mass "free-fall" three-body problem. The free-fall formulation of the three-body problem starts with all three bodies at rest. Because of this, the masses in a free-fall configuration do not orbit in a closed "loop", but travel forwards and backwards along an open "track". 

Numerical approaches

Using a computer, the problem may be solved to arbitrarily high precision using numerical integration, although high precision requires a large amount of CPU time. In 2019, Breen et al. announced a fast neural-network solver, trained using a numerical integrator.
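A hedged sketch of what such a direct numerical integration might look like, using the velocity-Verlet scheme (a standard symplectic integrator, so the energy error stays bounded rather than drifting); the step size and the Lagrange equilateral-triangle initial condition in the check are illustrative choices:

```python
import numpy as np

def accel(r, m, G=1.0):
    """Gravitational accelerations for n bodies (normalized G is illustrative)."""
    a = np.zeros_like(r)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                d = r[i] - r[j]
                a[i] -= G * m[j] * d / np.linalg.norm(d) ** 3
    return a

def integrate(r, v, m, dt=1e-3, steps=1000):
    """Velocity-Verlet: kick-drift-kick, second-order accurate and symplectic."""
    a = accel(r, m)
    for _ in range(steps):
        v = v + 0.5 * dt * a
        r = r + dt * v
        a = accel(r, m)
        v = v + 0.5 * dt * a
    return r, v

def energy(r, v, m, G=1.0):
    """Total energy, kinetic plus gravitational; conserved by the exact flow."""
    ke = 0.5 * sum(m[i] * np.dot(v[i], v[i]) for i in range(len(m)))
    pe = -sum(G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])
              for i in range(len(m)) for j in range(i + 1, len(m)))
    return ke + pe
```

Monitoring the total energy along the run is the usual way to judge whether the step size is small enough for the chaotic trajectories described above.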

History

The gravitational problem of three bodies in its traditional sense dates in substance from 1687, when Isaac Newton published his "Principia" (Philosophiæ Naturalis Principia Mathematica). In Proposition 66 of Book 1 of the "Principia", and its 22 Corollaries, Newton took the first steps in the definition and study of the problem of the movements of three massive bodies subject to their mutually perturbing gravitational attractions. In Propositions 25 to 35 of Book 3, Newton also took the first steps in applying his results of Proposition 66 to the lunar theory, the motion of the Moon under the gravitational influence of the Earth and the Sun. 

The physical problem was addressed by Amerigo Vespucci and subsequently by Galileo Galilei; in 1499, Vespucci used knowledge of the position of the Moon to determine his position in Brazil. It became of technical importance in the 1720s, as an accurate solution would be applicable to navigation, specifically for the determination of longitude at sea, solved in practice by John Harrison's invention of the marine chronometer. However the accuracy of the lunar theory was low, due to the perturbing effect of the Sun and planets on the motion of the Moon around the Earth. 

Jean le Rond d'Alembert and Alexis Clairaut, who developed a longstanding rivalry, both attempted to analyze the problem in some degree of generality; they submitted their competing first analyses to the Académie Royale des Sciences in 1747. It was in connection with their research, in Paris during the 1740s, that the name "three-body problem" (French: Problème des trois Corps) began to be commonly used. An account published in 1761 by Jean le Rond d'Alembert indicates that the name was first used in 1747.

Other problems involving three bodies

The term 'three-body problem' is sometimes used in the more general sense to refer to any physical problem involving the interaction of three bodies. 

A quantum mechanical analogue of the gravitational three-body problem in classical mechanics is the helium atom, in which a helium nucleus and two electrons interact according to the inverse-square Coulomb interaction. Like the gravitational three-body problem, the helium atom cannot be solved exactly.

In both classical and quantum mechanics, however, there exist nontrivial interaction laws besides the inverse-square force which do lead to exact analytic three-body solutions. One such model consists of a combination of harmonic attraction and a repulsive inverse-cube force. This model is considered nontrivial since it is associated with a set of nonlinear differential equations containing singularities (compared with, e.g., harmonic interactions alone, which lead to an easily solved system of linear differential equations). In these two respects it is analogous to (insoluble) models having Coulomb interactions, and as a result has been suggested as a tool for intuitively understanding physical systems like the helium atom.

The gravitational three-body problem has also been studied using general relativity. Physically, a relativistic treatment becomes necessary in systems with very strong gravitational fields, such as near the event horizon of a black hole. However, the relativistic problem is considerably more difficult than in Newtonian mechanics, and sophisticated numerical techniques are required. Even the full two-body problem (i.e. for arbitrary ratio of masses) does not have a rigorous analytic solution in general relativity.

n-body problem

The three-body problem is a special case of the n-body problem, which describes how n objects will move under one of the physical forces, such as gravity. These problems have a global analytical solution in the form of a convergent power series, as was proven by Karl F. Sundman for n = 3 and by Qiudong Wang for n > 3. However, the Sundman and Wang series converge so slowly that they are useless for practical purposes; therefore, it is currently necessary to approximate solutions by numerical analysis in the form of numerical integration or, for some cases, classical trigonometric series approximations (see n-body simulation). Atomic systems, e.g. atoms, ions, and molecules, can be treated in terms of the quantum n-body problem. Among classical physical systems, the n-body problem usually refers to a galaxy or to a cluster of galaxies; planetary systems, such as stars, planets, and their satellites, can also be treated as n-body systems. Some applications are conveniently treated by perturbation theory, in which the system is considered as a two-body problem plus additional forces causing deviations from a hypothetical unperturbed two-body trajectory.


Dynamical system

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Dynamical_system
 
The Lorenz attractor arises in the study of the Lorenz Oscillator, a dynamical system.
 
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each springtime in a lake.

At any given time, a dynamical system has a state given by a tuple of real numbers (a vector) that can be represented by a point in an appropriate state space (a geometrical manifold). The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. 

In physics, a dynamical system is described as a "particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives." In order to make a prediction about the system's future behavior, an analytical solution of such equations or their integration over time through computer simulation is realized. 

The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept. 

Overview

The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, a difference equation, or an equation on some other time scale.) To determine the state for all future times requires iterating the relation many times, each iteration advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.

Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. 

For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
  • The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
  • The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
  • The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
  • The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.

History

Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now-classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These works included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state. 

Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamic system. 

In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics. 

Stephen Smale made significant advances as well. His first contribution is the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others. 

Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. 
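Sharkovsky's period-3 implication can be observed numerically on the logistic map. The parameter value 3.83 below, which lies inside the map's stable period-3 window, is an illustrative choice:

```python
def logistic(x, r=3.83):
    """Logistic map f(x) = r x (1 - x); near r = 3.83 it has a stable 3-cycle.

    By Sharkovsky's theorem, the existence of a period-3 orbit on the real
    line forces periodic points of every other period (most of them unstable,
    so only the stable 3-cycle is seen by direct iteration).
    """
    return r * x * (1 - x)

def iterate(x, n, r=3.83):
    """Apply the map n times."""
    for _ in range(n):
        x = logistic(x, r)
    return x
```

Iterating from a generic starting point and letting the transient die out lands on the attracting 3-cycle: the point returns to itself after three steps but not after one.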

In the late 20th century, Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.

Basic definitions

A dynamical system is a manifold M called the phase (or state) space, endowed with a family of smooth evolution functions Φt that, for any element t ∈ T, the time, map a point of the phase space back into the phase space. The notion of smoothness changes with applications and the type of manifold. There are several choices for the set T. When T is taken to be the reals, the dynamical system is called a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow. When T is taken to be the integers, it is a cascade or a map; and the restriction to the non-negative integers is a semi-cascade.

Examples

The evolution function Φt is often the solution of a differential equation of motion

$$\dot{x} = v(x).$$

The equation gives the time derivative, represented by the dot, of a trajectory x(t) on the phase space starting at some point x0. The vector field v(x) is a smooth function that at every point of the phase space M provides the velocity vector of the dynamical system at that point. (These vectors are not vectors in the phase space M, but in the tangent space TxM of the point x.) Given a smooth Φt, an autonomous vector field can be derived from it. 

There is no need for higher-order derivatives in the equation, nor for time dependence in v(x), because these can be eliminated by considering systems of higher dimensions. Other types of differential equations can be used to define the evolution rule:

$$G(x, \dot{x}) = 0$$

is an example of an equation that arises from the modeling of mechanical systems with complicated constraints. 

The differential equations determining the evolution function Φ t are often ordinary differential equations; in this case the phase space M is a finite dimensional manifold. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations. In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. 

Further examples


Linear dynamical systems

Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t). 

Flows

For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,

$$\dot{x} = v(x) = A x + b,$$

with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b:

$$x(t) = x_0 + b t.$$

When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,

$$x(t) = e^{A t} x_0.$$

When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
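The matrix-exponential solution can be sketched in a few lines. A truncated power series stands in for a production-grade matrix exponential here (adequate for small, well-scaled matrices), and the stable-node matrix with eigenvalues −1 and −2 is an illustrative choice:

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via its power series: I + M + M^2/2! + ...

    A crude stand-in for a library routine; fine when ||M|| is modest.
    """
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def flow(A, x0, t):
    """Solution of x' = A x with x(0) = x0 is x(t) = exp(A t) x0."""
    return expm(A * t) @ x0

# Both eigenvalues negative: every orbit converges to the origin.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
```

Because A here is diagonal, the exact answer is known component-wise, which makes the eigenvalue-controlled decay easy to verify.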

The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.

Linear vector fields and a few trajectories.
 

Maps

A discrete-time, affine dynamical system has the form of a matrix difference equation:

$$x_{n+1} = A x_n + b,$$

with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)−1 b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system An x0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.

As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point. 
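A small sketch of such an affine map and its fixed point; the matrix (with eigenvalues 0.5 and 0.25, both inside the unit circle) and the vector b are illustrative choices:

```python
import numpy as np

def affine_step(x, A, b):
    """One step of the affine map x -> A x + b."""
    return A @ x + b

def fixed_point(A, b):
    """Fixed point x* = (I - A)^{-1} b (defined when 1 is not an eigenvalue of A).

    The change of coordinates x -> x + (I - A)^{-1} b in the text centers
    the map on this point, turning it into the linear system A^n x0.
    """
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - A, b)
```

Since both eigenvalues lie inside the unit circle, iterating the map from any starting point hops toward the fixed point, illustrating the eigenvalue criterion above.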

There are also many other discrete dynamical systems.

Local dynamics

The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space, and smooth deformations of the phase space cannot alter its being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of the phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible. 

Rectification

A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.

The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches. 

Near periodic orbits

In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit into the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points form a Poincaré section S(γ, x0) of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0.

The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x²), so a change of coordinates h can only be expected to simplify F to its linear part

$$h^{-1} \circ F \circ h(x) = J \cdot x.$$

This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J, they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi − ∑ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem. 

Conjugation results

The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not on the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic, and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.

In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic. 

The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point. 

Bifurcation theory

When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.

Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.

The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.
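For a concrete instance of this eigenvalue criterion (a standard textbook example, sketched here with illustrative parameter values): in the logistic family f_r(x) = r·x·(1 − x), the derivative at the nonzero fixed point x* = 1 − 1/r equals 2 − r, which leaves the unit circle through −1 at r = 3, the first period-doubling bifurcation.

```python
# Eigenvalue criterion for a bifurcation of a one-dimensional map:
# for the logistic family f_r(x) = r*x*(1-x), Df_r at the fixed point
# x* = 1 - 1/r equals 2 - r, crossing modulus 1 at r = 3.

def logistic(r, x):
    return r * x * (1 - x)

def derivative_at_fixed_point(r, h=1e-6):
    # Central difference of Df_r at x* (exact for a quadratic map,
    # up to rounding).
    x_star = 1 - 1/r
    return (logistic(r, x_star + h) - logistic(r, x_star - h)) / (2 * h)

before = derivative_at_fixed_point(2.9)   # about -0.9: fixed point stable
after  = derivative_at_fixed_point(3.1)   # about -1.1: fixed point unstable
```

Below r = 3 the fixed point attracts nearby orbits; above it, a stable period-2 orbit takes over.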

Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.

Ergodic systems

In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φt(A), and invariance of the phase space means that

vol(A) = vol(Φt(A)).
In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.
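Such volume preservation can be checked numerically in a simple example (an illustrative sketch; the Chirikov standard map and the kick strength below are chosen only for the demonstration): the determinant of the map's Jacobian, estimated by finite differences, should equal 1 everywhere.

```python
# Finite-difference check that the Chirikov standard map
#   p' = p + K*sin(theta),  theta' = theta + p'
# preserves area: the determinant of its Jacobian is identically 1.
import math

K = 1.2  # arbitrary kick strength for the illustration

def standard_map(theta, p):
    p_new = p + K * math.sin(theta)
    return theta + p_new, p_new

def jacobian_det(theta, p, h=1e-6):
    # Central differences for the two columns of the Jacobian.
    f = standard_map
    dth = [(f(theta + h, p)[i] - f(theta - h, p)[i]) / (2*h) for i in (0, 1)]
    dp  = [(f(theta, p + h)[i] - f(theta, p - h)[i]) / (2*h) for i in (0, 1)]
    return dth[0]*dp[1] - dth[1]*dp[0]

det = jacobian_det(0.7, 0.3)   # analytically (1 + K*cos t) - K*cos t = 1
```

The analytic Jacobian is [[1 + K·cos θ, 1], [K·cos θ, 1]], whose determinant is exactly 1 regardless of K and θ.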

In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.

For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
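The recurrence can be watched directly in a toy example (illustrative, not from the text): rational initial points of the area-preserving Arnold cat map have periodic orbits, so a trajectory started inside a set A re-enters A over and over. Exact rational arithmetic avoids floating-point drift.

```python
# Poincare recurrence illustrated on the area-preserving Arnold cat map
#   (x, y) -> (2x + y, x + y) mod 1.
# The starting point lies in A = [0, 1/2) x [0, 1/2); we count how
# often the orbit returns to A.  Exact fractions keep the orbit exact.
from fractions import Fraction

def cat_map(x, y):
    return (2*x + y) % 1, (x + y) % 1

x, y = Fraction(1, 5), Fraction(3, 10)   # a point of A with rational coords
returns = 0
for _ in range(2000):
    x, y = cat_map(x, y)
    if x < Fraction(1, 2) and y < Fraction(1, 2):
        returns += 1
```

Because the orbit of a rational point is periodic and visits A several times per period, `returns` grows without bound as the iteration count grows, in line with the theorem.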

One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
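The hypothesis can be tested numerically in the simplest ergodic example (an illustrative sketch; the set A and the rotation number are arbitrary choices): for an irrational rotation of the circle, the fraction of time a trajectory spends in A approaches vol(A).

```python
# Time average vs. space average for an irrational rotation of the
# circle, x -> x + alpha (mod 1): the fraction of time spent in
# A = [0, 0.3) should approach vol(A) = 0.3 (equidistribution).
import math

alpha = (math.sqrt(5) - 1) / 2   # an irrational rotation number
x, hits, n = 0.0, 0, 100_000
for _ in range(n):
    x = (x + alpha) % 1.0
    if x < 0.3:
        hits += 1
time_average = hits / n          # close to vol(A) = 0.3
```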

The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics, and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that associates a number to each point of the phase space (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function Φt. This introduces an operator Ut, the transfer operator:

(Ut a)(x) = a(Φ−t(x)).

By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φt. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φt gets mapped into an infinite-dimensional linear problem involving U.
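The key point, that the induced operator on observables is linear even when the underlying dynamics is not, can be seen in a few lines (an illustrative sketch using forward composition, (U a)(x) = a(T(x)), with the logistic map standing in for the flow):

```python
# The operator induced on observables by composition with a nonlinear
# map T is linear: U(a + c*b) = U a + c * U b pointwise, even though
# T itself is nonlinear.

def T(x):                      # a nonlinear evolution map (logistic, r = 4)
    return 4 * x * (1 - x)

def U(a):                      # the induced (Koopman-style) operator
    return lambda x: a(T(x))

a = lambda x: x * x            # two sample observables
b = lambda x: 1 + x
c = 2.5

lhs = U(lambda x: a(x) + c * b(x))(0.3)   # U applied to a + c*b
rhs = U(a)(0.3) + c * U(b)(0.3)           # U a + c * U b
```

The two evaluations agree at every point, which is what allows the nonlinear problem in Φt to be traded for a linear (but infinite-dimensional) problem in U.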

The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems. 

Nonlinear dynamical systems and chaos

Simple nonlinear dynamical systems, and even piecewise linear systems, can exhibit completely unpredictable behavior that might seem random, despite being fundamentally deterministic. This seemingly unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent space perpendicular to a trajectory can be well separated into two parts: one with the points that converge towards the orbit (the stable manifold) and another with the points that diverge from it (the unstable manifold).
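The stable/unstable splitting can be made concrete for the linearization of Arnold's cat map (an illustrative sketch; the eigenvectors are worked out by hand from the matrix [[2, 1], [1, 1]]): a tangent vector along the unstable eigendirection is stretched by a factor (3 + √5)/2 ≈ 2.618 each step, while one along the stable eigendirection contracts by (3 − √5)/2 ≈ 0.382.

```python
# Stable/unstable splitting for the cat-map Jacobian J = [[2, 1], [1, 1]],
# eigenvalues (3 +/- sqrt 5)/2.  Vectors along the unstable eigendirection
# are stretched; vectors along the stable one are contracted.
import math

SQ5 = math.sqrt(5.0)
lam_u, lam_s = (3 + SQ5) / 2, (3 - SQ5) / 2

def apply_J(v):
    x, y = v
    return (2*x + y, x + y)

def norm(v):
    return math.hypot(*v)

# Eigenvector for eigenvalue lam satisfies y = (lam - 2) * x.
v_u = (1.0, (SQ5 - 1) / 2)     # unstable direction
v_s = (1.0, -(SQ5 + 1) / 2)    # stable direction

def growth(v, steps=10):
    start = norm(v)
    for _ in range(steps):
        v = apply_J(v)
    return norm(v) / start

unstable_growth = growth(v_u)   # about lam_u ** 10, large
stable_growth = growth(v_s)     # about lam_s ** 10, tiny
```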

This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather on answering questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"

Note that the chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The logistic map is only a second-degree polynomial; the horseshoe map is piecewise linear. 
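The logistic map makes the point directly (an illustrative sketch; the initial offset of 1e-10 is an arbitrary small perturbation): at r = 4 two nearby orbits separate to order one within a few dozen iterations, even though the rule is a simple quadratic.

```python
# Sensitive dependence on initial conditions in the logistic map at
# r = 4: orbits started 1e-10 apart become macroscopically different
# within a few dozen iterations.

def f(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10
max_sep = 0.0
for n in range(100):
    x, y = f(x), f(y)
    if n >= 30:                 # give the tiny gap time to amplify
        max_sep = max(max_sep, abs(x - y))
```

Since the separation roughly doubles per step (the Lyapunov exponent is ln 2), the initial 1e-10 gap saturates to order one after about 33 iterations.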

Geometrical definition

A dynamical system is a tuple (T, M, f), with M a manifold (locally a Banach space or Euclidean space), T the domain for time (the non-negative reals, the integers, ...), and f an evolution rule t → f t (with t ∈ T) such that f t is a diffeomorphism of the manifold to itself. So f is a mapping of the time domain T into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism for every time t in the domain T.

Measure theoretical definition

A dynamical system may be defined formally as a measure-preserving transformation of a sigma-algebra, the quadruplet (X, Σ, μ, τ). Here, X is a set, and Σ is a sigma-algebra on X, so that the pair (X, Σ) is a measurable space. μ is a finite measure on the sigma-algebra, so that the triplet (X, Σ, μ) is a probability space. A map τ : X → X is said to be Σ-measurable if and only if, for every σ ∈ Σ, one has τ⁻¹σ ∈ Σ. A map τ is said to preserve the measure if and only if, for every σ ∈ Σ, one has μ(τ⁻¹σ) = μ(σ). Combining the above, a map τ is said to be a measure-preserving transformation of X if it is a map from X to itself, it is Σ-measurable, and it is measure-preserving. The quadruple (X, Σ, μ, τ), for such a τ, is then defined to be a dynamical system.
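A concrete measure-preserving transformation (an illustrative sketch, not from the text) is the doubling map τ(x) = 2x mod 1 on [0, 1) with Lebesgue measure: the preimage of an interval consists of two intervals of half the length, so the total measure of the preimage equals that of the original set.

```python
# The doubling map t(x) = 2x mod 1 preserves Lebesgue measure: the
# preimage of [a, b) in [0, 1) is [a/2, b/2) union [a/2 + 1/2, b/2 + 1/2),
# whose total length is again b - a.

def preimage_length(a, b):
    halves = [(a/2, b/2), (a/2 + 0.5, b/2 + 0.5)]
    return sum(hi - lo for lo, hi in halves)

sigma = (0.2, 0.5)                      # a measurable set (an interval)
mu_sigma = sigma[1] - sigma[0]          # its Lebesgue measure, 0.3
mu_preimage = preimage_length(*sigma)   # measure of the preimage, also 0.3
```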

The map τ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates τⁿ = τ ∘ τ ∘ ⋯ ∘ τ for every integer n are studied. For continuous dynamical systems, the map τ is understood to be a finite-time evolution map and the construction is more complicated.

Multidimensional generalization

Dynamical systems are defined over a single independent variable, usually thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.
