
Thursday, July 29, 2021

Chaos theory

From Wikipedia, the free encyclopedia

A plot of the Lorenz attractor for values r = 28, σ = 10, b = 8/3
An animation of a double-rod pendulum at an intermediate energy showing chaotic behavior. Starting the pendulum from a slightly different initial condition would result in a vastly different trajectory. The double-rod pendulum is one of the simplest dynamical systems with chaotic solutions.

Chaos theory is a branch of mathematics focusing on the study of chaos: dynamical systems whose apparently random states of disorder and irregularities are actually governed by underlying patterns and deterministic laws that are highly sensitive to initial conditions. Chaos theory is an interdisciplinary theory stating that, within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnectedness, constant feedback loops, repetition, self-similarity, fractals, and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning that there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Texas can cause a hurricane in China.

Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:

Chaos: When the present determines the future, but the approximate present does not approximately determine the future.

Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as the stock market and road traffic. This behavior can be studied through the analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, pandemic crisis management, and philosophy. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes.

Introduction

Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time that the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.

Chaos theory is a method of qualitative and quantitative analysis to investigate the behavior of dynamic systems that cannot be explained and predicted by single data relationships, but must be explained and predicted by whole, continuous data relationships.

Chaotic dynamics

The map defined by x → 4 x (1 – x) and y → (x + y) mod 1 displays sensitivity to initial x positions. Here, two series of x and y values diverge markedly over time from a tiny initial difference.
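The following Python sketch (an illustration added here, not part of the original article) iterates this map from two initial conditions differing by one part in a billion and prints how quickly the trajectories separate:

```python
def step(x, y):
    # the map x -> 4x(1 - x), y -> (x + y) mod 1
    return 4 * x * (1 - x), (x + y) % 1

x1, y1 = 0.2, 0.3
x2, y2 = 0.2 + 1e-9, 0.3            # perturb x by one part in a billion

for n in range(1, 31):
    x1, y1 = step(x1, y1)
    x2, y2 = step(x2, y2)
    if n % 10 == 0:
        print(f"n={n:2d}  |dx|={abs(x1 - x2):.3e}  |dy|={abs(y1 - y2):.3e}")
# Within a few dozen iterations the tiny perturbation grows to order 1.
```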

In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:

  1. it must be sensitive to initial conditions,
  2. it must be topologically transitive,
  3. it must have dense periodic orbits.

In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.

If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.

Chaos as a spontaneous breakdown of topological supersymmetry

In continuous time dynamical systems, chaos is the phenomenon of the spontaneous breakdown of topological supersymmetry, which is an intrinsic property of evolution operators of all stochastic and deterministic (partial) differential equations. This picture of dynamical chaos works not only for deterministic models, but also for models with external noise. This is an important generalization from the physical point of view, since in reality all dynamical systems experience influence from their stochastic environments. Within this picture, the long-range dynamical behavior associated with chaotic dynamics (e.g., the butterfly effect) is a consequence of Goldstone's theorem applied to the spontaneous breakdown of topological supersymmetry.

Sensitivity to initial conditions

Lorenz equations used to generate plots for the y variable. The initial conditions for x and z were kept the same but those for y were changed between 1.001, 1.0001 and 1.00001. The values for ρ, σ and β were 45.92, 16 and 4 respectively. As can be seen from the graph, even the slightest difference in initial values causes significant changes after about 12 seconds of evolution in the three cases. This is an example of sensitive dependence on initial conditions.

Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.

Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.

A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we do know with weather that the temperature will not naturally reach 100 °C or fall to −130 °C on earth (during the current geologic era), but that does not mean that we can predict exactly which day will have the hottest temperature of the year.

In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ₀, the two trajectories end up diverging at a rate given by

|δZ(t)| ≈ e^(λt) |δZ₀|,

where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
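As a concrete illustration, the following Python sketch (added here, not part of the original article) estimates the maximal Lyapunov exponent of the logistic map x → 4 x (1 – x) by averaging log|f′(x)| along an orbit; for this map the exact value is ln 2 ≈ 0.693.

```python
import math

def f(x):
    return 4 * x * (1 - x)          # logistic map at r = 4

def fprime(x):
    return 4 - 8 * x                # its derivative

x, total, n = 0.3, 0.0, 100_000
for _ in range(n):
    x = f(x)
    total += math.log(abs(fprime(x)))   # accumulate local stretching rates

print(f"estimated MLE ≈ {total / n:.4f} (exact: ln 2 ≈ {math.log(2):.4f})")
```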

In addition to the above property, other properties related to sensitivity to initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.

Non-periodicity

A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.

Topological mixing

Six iterations of a set of states passed through the logistic map. The first iterate (blue) is the initial condition, which essentially forms a circle. Animation shows the first to the sixth iteration of the circular initial conditions. It can be seen that mixing occurs as we progress in iterations. The sixth iteration shows that the points are almost completely scattered in the phase space. Had we progressed further in iterations, the mixing would have been homogeneous and irreversible. The logistic map has equation x → 4 x (1 – x). To expand the state-space of the logistic map into two dimensions, a second state, y, was created as y → x + y if x + y < 1, and y → x + y – 1 otherwise.
 
The map defined by x → 4 x (1 – x) and y → (x + y) mod 1 also displays topological mixing. Here, the blue region is transformed by the dynamics first to the purple region, then to the pink and red regions, and eventually to a cloud of vertical lines scattered across the space.

Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.

Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
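A short numeric sketch (added here for illustration) makes the contrast concrete: the doubling map separates nearby points exponentially fast, yet its dynamics are trivial.

```python
x1, x2 = 1.0, 1.0 + 1e-9       # two nearby initial values

for n in range(40):
    x1, x2 = 2 * x1, 2 * x2    # repeatedly double both values

print(f"after 40 doublings: x1 = {x1:.3e}, separation = {x2 - x1:.3e}")
# The separation grows like 2^n (sensitive dependence), but orbits never
# revisit bounded regions, so there is no topological mixing and no chaos.
```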

Topological transitivity

A map f is said to be topologically transitive if for any pair of non-empty open sets U, V ⊂ X, there exists k > 0 such that f^k(U) ∩ V ≠ ∅. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.

An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.

Density of periodic orbits

For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, (5 – √5)/8 → (5 + √5)/8 → (5 – √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
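The period-2 orbit quoted above is easy to verify numerically; the short Python check below (an added illustration, not from the original article) iterates the map twice and confirms the cycle closes.

```python
import math

f = lambda x: 4 * x * (1 - x)        # the logistic map

a = (5 - math.sqrt(5)) / 8           # ≈ 0.3454915
b = f(a)                             # should equal (5 + sqrt(5))/8 ≈ 0.9045085
print(b, (5 + math.sqrt(5)) / 8)     # agree to floating-point precision
print(f(b))                          # returns ≈ a, closing the period-2 cycle
```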

Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.

Strange attractors

The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.

Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.

An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.

Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.

Minimum complexity of a chaotic system

Bifurcation diagram of the logistic map x → r x (1 – x). Each vertical slice shows the attractor for a specific value of r. The diagram displays period-doubling as r increases, eventually producing chaos.

Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. The universality of one-dimensional maps with parabolic maxima and the Feigenbaum constants δ = 4.669201… and α = 2.502907… is well visible with the map x → G x (1 – tanh x), proposed as a toy model for discrete laser dynamics, where x stands for the electric field amplitude and G is the laser gain acting as the bifurcation parameter. The gradual increase of G on the interval [0, ∞) changes the dynamics from regular to chaotic, with qualitatively the same bifurcation diagram as that of the logistic map.
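The following Python sketch (illustrative; the plotted gain range and initial condition are assumptions, not taken from the article) sweeps the gain G of this toy laser map and plots the long-run iterates, reproducing the period-doubling route to chaos:

```python
import numpy as np
import matplotlib.pyplot as plt

gains = np.linspace(0.0, 8.0, 2000)      # assumed sweep range for G
x = np.full_like(gains, 0.1)             # one orbit per gain value

for _ in range(500):                      # discard the transient
    x = gains * x * (1 - np.tanh(x))
for _ in range(100):                      # record points on the attractor
    x = gains * x * (1 - np.tanh(x))
    plt.plot(gains, x, ",k", alpha=0.25)

plt.xlabel("gain G")
plt.ylabel("x")
plt.title("Period-doubling route to chaos in the toy laser map")
plt.show()
```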

In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.

The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:

dx/dt = σ(y – x),
dy/dt = x(ρ – z) – y,
dz/dt = xy – βz,

where x, y, and z make up the system state, t is time, and σ, ρ, and β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, only one of them nonlinear, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
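A minimal Python sketch of these equations (an added illustration; the classic parameter values σ = 10, ρ = 28, β = 8/3 and the step size are conventional choices, not prescribed by the text) integrates the system with a fixed-step Runge–Kutta scheme:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt):
    # classical fourth-order Runge-Kutta step
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 1.0, 1.0])            # arbitrary starting state
trajectory = [s]
for _ in range(10_000):
    s = rk4_step(s, 0.01)
    trajectory.append(s)
# Plotting x against z for the stored trajectory traces out the
# familiar two-lobed "butterfly" shape of the Lorenz attractor.
```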

While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can exhibit chaotic behavior. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.

Infinite dimensional maps

The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates the interaction between spatially distributed maps:

ψ_{n+1}(r, t) = ∫ K(r – r′, t) f[ψ_n(r′, t)] dr′,

where the kernel K(r – r′, t) is a propagator derived as a Green function of a relevant physical system, and f[ψ] might be a logistic-map-like function, ψ → G ψ (1 – tanh ψ), or a complex map. For examples of complex maps the Julia set f[ψ] = ψ² or the Ikeda map ψ_{n+1} = A + B ψ_n e^{i(|ψ_n|² + C)} may serve. When wave propagation problems at distance L = ct with wavelength λ = 2π/k are considered, the kernel K may have the form of a Green function for the Schrödinger equation:

K(r – r′, L) = (ik e^{ikL} / 2πL) exp[ik|r – r′|² / 2L].
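As a toy discretization of this idea, the Python sketch below (illustrative only; the ring size, Gaussian kernel width, and logistic local dynamics are assumptions) couples a ring of logistic maps through a circular convolution with a normalized kernel:

```python
import numpy as np

N = 256
rng = np.random.default_rng(0)
x = rng.random(N)                         # random initial field on a ring

offsets = np.arange(N)
dist = np.minimum(offsets, N - offsets)   # periodic distances on the ring
kernel = np.exp(-dist**2 / 8.0)           # assumed Gaussian propagator
kernel /= kernel.sum()                    # normalize the kernel

for _ in range(200):
    fx = 4 * x * (1 - x)                  # local logistic dynamics
    # circular convolution of the mapped field with the kernel (via FFT)
    x = np.real(np.fft.ifft(np.fft.fft(fx) * np.fft.fft(kernel)))
    x = np.clip(x, 0.0, 1.0)              # keep the field inside [0, 1]
# The field x develops irregular spatio-temporal patterns.
```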

Jerk systems

In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form

J(d³x/dt³, d²x/dt², dx/dt, x) = 0

are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first-order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behaviour. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are accordingly called hyperjerk systems.

A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits.

One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler map, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of x is:

d³x/dt³ + A d²x/dt² + dx/dt – |x| + 1 = 0.

Here, A is an adjustable parameter. This equation has a chaotic solution for A = 3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes:

Schematic of a jerk circuit implementing the equation above.

In the above circuit, all resistors are of equal value except R_A = R/A = 5R/3, and all capacitors are of equal size. The dominant frequency is 1/(2πRC). The output of op amp 0 will correspond to the x variable, the output of 1 corresponds to the first derivative of x, and the output of 2 corresponds to the second derivative.

Similar circuits only require one diode or no diodes at all.
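As a numerical counterpart to the circuit, the Python sketch below (an added illustration; the initial state and integration time are arbitrary choices) rewrites the jerk equation above as three first-order equations and integrates it for the chaotic parameter value A = 3/5:

```python
import numpy as np

A = 0.6  # = 3/5, the chaotic parameter value quoted above

def deriv(s):
    x, v, a = s                           # position, velocity, acceleration
    return np.array([v, a, -A * a - v + abs(x) - 1.0])

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2)
    k4 = deriv(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.0, 0.0, 0.0])             # arbitrary initial state
xs = []
for _ in range(100_000):
    s = rk4_step(s, 0.005)
    xs.append(s[0])                       # record x for later plotting
# Plotting x against dx/dt over time shows a bounded, aperiodic orbit
# (if this seed escapes the attractor's basin, try other initial states).
```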

See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system.

Spontaneous order

Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system. Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.

History

Barnsley fern created using the chaos game. Natural forms (ferns, clouds, mountains, etc.) may be recreated through an iterated function system (IFS).

An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.

Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Except for Smale, these studies were all directly inspired by physics: the three-body problem in the case of Birkhoff, turbulence and astronomical problems in the case of Kolmogorov, and radio engineering in the case of Cartwright and Littlewood. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.

Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems.

The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.

Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.

Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz was using a simple digital computer, a Royal McBee LGP-30, to run his weather simulation. He wanted to see a sequence of data again, and to save time he started the simulation in the middle of its course. He did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To his surprise, the weather the machine began to predict was completely different from the previous calculation. Lorenz tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modelling cannot, in general, make precise long-term weather predictions.

In 1963, Benoit Mandelbrot found recurring patterns at every scale in data on cotton prices. Beforehand he had studied information theory and concluded noise was patterned like a Cantor set: on any scale the proportion of noise-containing periods to error-free periods was a constant – thus errors were inevitable and must be planned for by incorporating redundancy. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). This challenged the idea that changes in price were normally distributed. In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory. Biological systems such as the branching of the circulatory and bronchial systems proved to fit a fractal model.

In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.

In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.

In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking disorder among schizophrenics. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.

In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.

Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.

In the same year, James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public, though his history under-emphasized important Soviet contributions. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift presented in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.

The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, and pandemic crisis management.

Applications

A conus textile shell, similar in appearance to Rule 30, a cellular automaton with chaotic behaviour.

Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, algorithmic trading, meteorology, philosophy, anthropology, physics, politics, population dynamics, psychology, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list, as new applications are appearing.

Cryptography

Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a big portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, without loss of generality, the similarities between the chaotic maps and the cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. Many of the DNA-Chaos cryptographic algorithms have been shown to be either insecure, or the technique applied is suggested to be inefficient.

Robotics

Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model. Chaotic dynamics have been exhibited by passive walking biped robots.

Biology

For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.

Economics

It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.

Chaos could be found in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. Then, the same technique was employed to detect transitions from laminar (i.e., regular) to turbulent (i.e., chaotic) phases, as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates, as well as in embedding shocks due to external events such as COVID-19.

Other areas

In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefited greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.

Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.

Researchers have continued to apply chaos theory to psychology. For example, in modeling group behavior in which heterogeneous members may behave as if sharing to different degrees what in Wilfred Bion's theory is a basic assumption, researchers have found that the group dynamic is the result of the individual dynamics of the members: each individual reproduces the group dynamics in a different scale, and the chaotic behavior of the group is reflected in each member.

Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.

In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.

Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.

By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.

Some say the chaos metaphor—used in verbal theories—grounded in mathematical models and psychological aspects of human behavior, provides helpful insights into describing the complexity of small work groups that go beyond the metaphor itself.

The red cars and blue cars take turns to move; the red ones only move upwards, and the blue ones move rightwards. Every time, all the cars of the same colour try to move one step if there is no car in front of it. Here, the model has self-organized in a somewhat geometric pattern where there are some traffic jams and some areas where cars can move at top speed.


Traffic forecasting may benefit from applications of chaos theory. Better predictions of when traffic will occur would allow measures to be taken to disperse it before it would have occurred. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see the plot of the BML traffic model at right).

Chaos theory has been applied to environmental water cycle data (aka hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.

 

Nonlinear system

From Wikipedia, the free encyclopedia

In mathematics and science, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists because most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.

Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one. In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.

As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.

Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:

Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.

Definition

In mathematics, a linear map (or linear function) is one which satisfies both of the following properties:

  • Additivity or superposition principle: f(x + y) = f(x) + f(y);
  • Homogeneity: f(αx) = αf(x).

Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle f(αx + βy) = αf(x) + βf(y).

An equation written as

f(x) = C

is called linear if f is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0.

The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f(x) can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation with respect to x, the result will be a differential equation.
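A trivial numeric check (illustrative only) makes the two properties concrete: f(x) = 3x satisfies both, while g(x) = x² satisfies neither.

```python
f = lambda x: 3 * x       # linear: additive and homogeneous
g = lambda x: x ** 2      # nonlinear: fails both properties

x, y, alpha = 2.0, 5.0, 4.0
print(f(x + y) == f(x) + f(y), f(alpha * x) == alpha * f(x))  # True True
print(g(x + y) == g(x) + g(y), g(alpha * x) == alpha * g(x))  # False False
```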

Nonlinear algebraic equations

Nonlinear algebraic equations, which are also called polynomial equations, are defined by equating polynomials (of degree greater than one) to zero. For example,

x² + x – 1 = 0.

For a single polynomial equation, root-finding algorithms can be used to find solutions to the equation (i.e., sets of values for the variables that satisfy the equation). However, systems of algebraic equations are more complicated; their study is one motivation for the field of algebraic geometry, a difficult branch of modern mathematics. It is even difficult to decide whether a given algebraic system has complex solutions. Nevertheless, in the case of the systems with a finite number of complex solutions, these systems of polynomial equations are now well understood and efficient methods exist for solving them.

Nonlinear recurrence relations

A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.

Nonlinear differential equations

A system of differential equations is said to be nonlinear if it is not a linear system. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.

One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations, however the lack of a superposition principle prevents the construction of new solutions.

Ordinary differential equations

First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation

du/dx = –u²

has u = 1/(x + C) as a general solution (and also u = 0 as a particular solution, corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as

du/dx + u² = 0

and the left-hand side of the equation is not a linear function of u and its derivatives. Note that if the u² term were replaced with u, the problem would be linear (the exponential decay problem).
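A quick numerical sanity check (an added illustration, not from the article) integrates du/dx = –u² with forward Euler and compares against the exact solution u = 1/(x + C) for C = 1:

```python
C = 1.0
u = 1.0 / (0.0 + C)        # initial condition u(0) = 1
x, dx = 0.0, 1e-4
while x < 2.0:
    u += dx * (-u * u)     # Euler step for du/dx = -u^2
    x += dx
print(u, 1.0 / (2.0 + C))  # numeric vs exact: both ≈ 0.3333
```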

Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.

Common methods for the qualitative analysis of nonlinear ordinary differential equations include:

  • Examination of any conserved quantities, especially in Hamiltonian systems
  • Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
  • Linearization via Taylor expansion
  • Change of variables into something easier to study
  • Bifurcation theory
  • Perturbation methods (can be applied to algebraic equations too)

Partial differential equations

The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly even linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.

Another common (though less mathematical) tactic, often seen in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier-Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.

Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.

Pendula

Illustration of a pendulum
Linearizations of a pendulum

A classic, extensively studied nonlinear problem is the dynamics of a pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation

d²θ/dt² + sin(θ) = 0,

where gravity points "downwards" and θ is the angle the pendulum forms with its rest position, as shown in the figure at right. One approach to "solving" this equation is to use dθ/dt as an integrating factor, which would eventually yield

∫ dθ/√(C₀ + 2cos(θ)) = t + C₁,

which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless C₀ = 2).

Another way to approach the problem is to linearize any nonlinearities (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at θ = 0, called the small angle approximation, is

d²θ/dt² + θ = 0,

since sin(θ) ≈ θ for θ ≈ 0. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at θ = π, corresponding to the pendulum being straight up:

d²θ/dt² + π – θ = 0,

since sin(θ) ≈ π – θ for θ ≈ π. The solution to this problem involves hyperbolic sinusoids, and note that unlike the small angle approximation, this approximation is unstable, meaning that θ will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.

One more interesting linearization is possible around θ = π/2, around which sin(θ) ≈ 1:

d²θ/dt² + 1 = 0.

This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact) phase portraits and approximate periods.
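The small angle approximation is easy to probe numerically; the Python sketch below (an added illustration; the initial angle and integrator are arbitrary choices) integrates the full equation d²θ/dt² = –sin(θ) alongside its linearization d²θ/dt² = –θ and shows the two slowly drifting apart:

```python
import math

th, w = 0.5, 0.0        # full nonlinear pendulum: angle, angular velocity
thL, wL = 0.5, 0.0      # small-angle (linearized) pendulum
dt = 1e-3

for _ in range(10_000):                       # integrate to t = 10
    w -= math.sin(th) * dt; th += w * dt      # semi-implicit Euler
    wL -= thL * dt;         thL += wL * dt

print(f"t=10: nonlinear theta={th:+.4f}, linearized theta={thL:+.4f}")
# The two stay close at small amplitudes but drift apart over time,
# because the nonlinear pendulum's period depends on its amplitude.
```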

Types of nonlinear dynamic behaviors

  • Amplitude death – any oscillations present in the system cease due to some kind of interaction with other system or feedback by the same system
  • Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic
  • Multistability – the presence of two or more stable states
  • Solitons – self-reinforcing solitary waves
  • Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted.
  • Self-oscillations – feedback oscillations taking place in open dissipative physical systems.

Examples of nonlinear equations

Internet bot

From Wikipedia, the free encyclopedia

An Internet bot, web robot, robot or simply bot, is a software application that runs automated tasks (scripts) over the Internet. Typically, bots perform tasks that are simple and repetitive much faster than a person could. The most extensive use of bots is for web crawling, in which an automated script fetches, analyzes and files information from web servers. More than half of all web traffic is generated by bots.

Efforts by web servers to restrict bots vary. Some servers have a robots.txt file that contains the rules governing bot behavior on that server. Any bot that does not follow the rules could, in theory, be denied access to, or removed from, the affected website. If the posted text file has no associated program/software/app, then adhering to the rules is entirely voluntary. There would be no way to enforce the rules or to ensure that a bot's creator or implementer reads or acknowledges the robots.txt file. Some bots are "good" – e.g. search engine spiders – while others are used to launch malicious attacks on, for example, political campaigns.
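For illustration, Python's standard library ships a robots.txt parser; a well-behaved crawler would run a check like the sketch below before fetching a page (the URL and user-agent name are placeholders):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()                                     # fetch and parse the rules

page = "https://example.com/some/page"
if rp.can_fetch("MyCrawlerBot", page):        # hypothetical user agent
    print("allowed to fetch", page)           # proceed with the request
else:
    print("disallowed by robots.txt")         # a polite bot skips the page
```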

IM and IRC

Some bots communicate with users of Internet-based services, via instant messaging (IM), Internet Relay Chat (IRC), or other web interfaces such as Facebook bots and Twitter bots. These chatbots may allow people to ask questions in plain English and then formulate a response. Such bots can often handle reporting weather, zip code information, sports scores, currency or other unit conversions, etc. Others are used for entertainment, such as SmarterChild on AOL Instant Messenger and MSN Messenger.

Bots are very commonly used on social media. A user may not be aware that they are interacting with a bot.

Additional roles of an IRC bot may be to listen on a conversation channel, and to comment on certain phrases uttered by the participants (based on pattern matching). This is sometimes used as a help service for new users or to censor profanity.
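A toy sketch of this pattern-matching idea (illustrative only, not based on any particular IRC library) might look like this:

```python
import re

# Each rule pairs a phrase pattern with a canned reply.
RULES = [
    (re.compile(r"\bhelp\b", re.I), "Type !commands for a list of commands."),
    (re.compile(r"\bweather\b", re.I), "Try !weather <zip> for a forecast."),
]

def on_message(text):
    """Return a reply when a watched phrase appears, else stay silent."""
    for pattern, reply in RULES:
        if pattern.search(text):
            return reply
    return None

print(on_message("can someone help me join a channel?"))
```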

Social bots

Social networking bots are sets of algorithms that take on the duties of repetitive sets of instructions in order to establish a service or connection among social networking users. Among the various designs of networking bots, the most common are chat bots, algorithms designed to converse with a human user, and social bots, algorithms designed to mimic human behaviors to converse with patterns similar to those of a human user. The history of social botting can be traced back to Alan Turing in the 1950s and his vision of designing sets of instructional code approved by the Turing test. In the 1960s Joseph Weizenbaum created ELIZA, a natural language processing computer program considered an early indicator of artificial intelligence algorithms. ELIZA inspired computer programmers to design tasked programs that can match behavior patterns to their sets of instruction. As a result, natural language processing has become an influencing factor in the development of artificial intelligence and social bots. And as information and ideas spread en masse on social media websites, innovative technological advancements have followed the same pattern.

Reports of political interference in recent elections, including the 2016 US and 2017 UK general elections, have raised the profile of bots, because of the ethical questions raised between a bot's design and the bot's designer. Emilio Ferrara, a computer scientist from the University of Southern California, writing in Communications of the ACM, said the lack of resources available to implement fact-checking and information verification results in the large volumes of false reports and claims made about these bots on social media platforms. In the case of Twitter, most of these bots are programmed with search filter capabilities that target keywords and phrases favoring political agendas and then retweet them. While bots are programmed to spread unverified information throughout the social media platforms, this is a challenge that programmers face in the wake of a hostile political climate. Ferrara described as the "Bot Effect" the socialization of bots and human users, which creates a vulnerability to the leaking of personal information and polarizing influences outside the ethics of the bot's code; this was confirmed by Guillory Kramer in his study, where he observed the behavior of emotionally volatile users and the impact the bots have on them, altering their perception of reality.

Commercial bots

There has been a great deal of controversy about the use of bots in automated trading. The auction website eBay took legal action in an attempt to stop a third-party company from using bots to look for bargains on its site; this approach backfired on eBay and attracted the attention of further bots. The United Kingdom-based bet exchange Betfair saw such a large amount of traffic coming from bots that it launched a WebService API aimed at bot programmers, through which it can actively manage bot interactions.

Bot farms are known to be used in online app stores, like the Apple App Store and Google Play, to manipulate positions or increase positive ratings/reviews.

A rapidly growing, benign form of internet bot is the chatbot. Since 2016, when Facebook Messenger began allowing developers to place chatbots on its platform, their use has grown rapidly: 30,000 bots were created for Messenger in the first six months, rising to 100,000 by September 2017. Avi Ben Ezra, CTO of SnatchBot, told Forbes that evidence from the use of their chatbot-building platform pointed to a near-future saving of millions of hours of human labor as 'live chat' on websites is replaced with bots.

Companies use internet bots to increase online engagement and streamline communication. Bots also cut costs: instead of employing people to communicate with consumers, companies deploy chatbots to answer customers' questions. For example, Domino's developed a chatbot that can take orders via Facebook Messenger. Chatbots free companies to allocate their employees' time to other tasks.

Malicious bots

One example of the malicious use of bots is the coordination and operation of an automated attack on networked computers, such as a denial-of-service attack by a botnet. Internet bots or web bots can also be used to commit click fraud and, more recently, have appeared around MMORPGs as computer game bots. Another category is represented by spambots, internet bots that attempt to spam large amounts of content on the Internet, usually adding advertising links. More than 94.2% of websites have experienced a bot attack.

There are malicious bots (and botnets) of the following types:

  1. Spambots that harvest email addresses from contact or guestbook pages
  2. Downloader programs that suck bandwidth by downloading entire websites
  3. Website scrapers that grab the content of websites and re-use it without permission on automatically generated doorway pages
  4. Registration bots that sign up a specific email address to numerous services in order to have the confirmation messages flood the email inbox and distract from important messages indicating a security breach.
  5. Viruses and worms
  6. DDoS attacks
  7. Botnets, zombie computers, etc.
  8. Spambots that try to redirect people onto a malicious website, sometimes found in comment sections or forums of various websites
  9. Viewbots that create fake views
  10. Bots that buy up high-demand seats for concerts, particularly on behalf of ticket brokers who resell the tickets. These bots run through the purchase process of entertainment event-ticketing sites and obtain better seats by pulling back as many seats as they can.
  11. Bots that are used in massively multiplayer online role-playing games to farm for resources that would otherwise take significant time or effort to obtain, which can be a concern for online in-game economies.
  12. Bots that increase views for YouTube videos
  13. Bots that increase traffic counts on analytics reporting to extract money from advertisers. A study by Comscore found that over half of ads shown across thousands of campaigns between May 2012 and February 2013 were not served to human users.
  14. Bots used on internet forums to automatically post inflammatory or nonsensical posts to disrupt the forum and anger users.

In 2012, journalist Percy von Lipinski reported that he discovered millions of botted or "pinged" views at CNN iReport. CNN iReport quietly removed millions of views from the account of iReporter Chris Morrow. It is not known whether the ad revenue CNN received from the fake views was ever returned to the advertisers.

The most widely used anti-bot technique is the CAPTCHA. Examples of providers include reCAPTCHA, Minteye, Solve Media, and NuCaptcha. However, CAPTCHAs are not foolproof in preventing bots, as they can often be circumvented by computer character recognition, security holes, and outsourcing of CAPTCHA solving to cheap laborers.

Human interaction with social bots

There are two main concerns with bots: clarity and face-to-face support. The cultural background of human beings affects the way they communicate with social bots. Many people believe that bots are vastly less intelligent than humans and so, in their view, not worthy of respect.

Min-Sun Kim proposed five concerns or issues that may arise when communicating with a social robot: avoiding damage to people's feelings, minimizing impositions, avoiding disapproval from others, clarity issues, and how effectively messages come across.

Social robots may also detract from the genuine formation of human relationships.

Global Consciousness Project

From Wikipedia, the free encyclopedia

The Global Consciousness Project (GCP, also called the EGG Project) is a parapsychology experiment begun in 1998 as an attempt to detect possible interactions of "global consciousness" with physical systems. The project monitors a geographically distributed network of hardware random number generators in a bid to identify anomalous outputs that correlate with widespread emotional responses to sets of world events, or periods of focused attention by large numbers of people. The GCP is privately funded through the Institute of Noetic Sciences and describes itself as an international collaboration of about 100 research scientists and engineers.

Skeptics such as Robert T. Carroll, Claus Larsen, and others have questioned the methodology of the Global Consciousness Project, particularly how the data are selected and interpreted, saying the data anomalies reported by the project are the result of "pattern matching" and selection bias which ultimately fail to support a belief in psi or global consciousness. Others have stated that the open access to the test data "is a testimony to the integrity and curiosity of those involved". But in analyzing the data for 11 September 2001, May et al. concluded that the statistically significant result given by the published GCP hypothesis was fortuitous, and found that, as far as this particular event was concerned, an alternative method of analysis gave only chance deviations throughout.

Background

Roger D. Nelson developed the project as an extrapolation of two decades of experiments from the controversial Princeton Engineering Anomalies Research Lab (PEAR).

In an extension of the laboratory research utilizing hardware random number generators called FieldREG, investigators examined the outputs of REGs in the field before, during and after highly focused or coherent group events. The group events studied included psychotherapy sessions, theater presentations, religious rituals, sports competitions such as the Football World Cup, and television broadcasts such as the Academy Awards.

FieldREG was extended to global dimensions in studies looking at data from 12 independent REGs in the US and Europe during a web-promoted "Gaiamind Meditation" in January 1997, and then again in September 1997 after the death of Diana, Princess of Wales. The project claimed the results suggested it would be worthwhile to build a permanent network of continuously-running REGs. This became the EGG project or Global Consciousness Project.

Comparing the GCP to PEAR, Nelson, referring to the "field" studies with REGs done by PEAR, said the GCP used "exactly the same procedure... applied on a broader scale."[9]

Methodology

The GCP's methodology is based on the hypothesis that events which elicit widespread emotion or draw the simultaneous attention of large numbers of people may affect the output of hardware random number generators in a statistically significant way. The GCP maintains a network of hardware random number generators which are interfaced to computers at 70 locations around the world. Custom software reads the output of the random number generators and records a trial (sum of 200 bits) once every second. The data are sent to a server in Princeton, creating a database of synchronized parallel sequences of random numbers. The GCP is run as a replication experiment, essentially combining the results of many distinct tests of the hypothesis. The hypothesis is tested by calculating the extent of data fluctuations at the time of events. The procedure is specified by a three-step experimental protocol. In the first step, the event duration and the calculation algorithm are pre-specified and entered into a formal registry. In the second step, the event data are extracted from the database and a Z score, which indicates the degree of deviation from the null hypothesis, is calculated from the pre-specified algorithm. In the third step, the event Z-score is combined with the Z-scores from previous events to yield an overall result for the experiment.
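
As a numerical illustration of the arithmetic involved (not the GCP's actual analysis code), the following Python sketch treats each one-second trial as a sum of 200 fair bits, so that under the null hypothesis a trial has mean 100 and variance 50. The standard Stouffer method is assumed here for both combination steps; the project's formal registry specifies the algorithm per event:

    import math
    import random

    MEAN, SD = 100.0, math.sqrt(50.0)  # null hypothesis: Binomial(200, 0.5)

    def trial_z(trial_sum):
        """Standard score of one one-second trial (a sum of 200 bits)."""
        return (trial_sum - MEAN) / SD

    def stouffer(z_scores):
        """Combine Z-scores: sum divided by sqrt(count). Used here for both
        step two (within an event) and step three (across events)."""
        return sum(z_scores) / math.sqrt(len(z_scores))

    # Simulated null data: 300 one-second trials of 200 coin flips each.
    trials = [sum(random.randint(0, 1) for _ in range(200)) for _ in range(300)]
    event_score = stouffer([trial_z(t) for t in trials])
    print(event_score)  # hovers near 0 when only chance is at work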

The remote devices have been dubbed Princeton Eggs, a reference to the coinage electrogaiagram, a portmanteau of electroencephalogram and Gaia. Supporters and skeptics have referred to the aim of the GCP as being analogous to detecting "a great disturbance in the Force."

Claims and criticism of effects from the September 11 terrorist attacks

The GCP has suggested that changes in the level of randomness may have occurred during the September 11, 2001 attacks, when the planes first struck, as well as in the two days following the attacks.

Independent scientists Edwin May and James Spottiswoode conducted an analysis of the data around the September 11 attacks and concluded there was no statistically significant change in the randomness of the GCP data during the attacks, and that the apparent significant deviation reported by Nelson and Radin existed only in their chosen time window. Spikes and fluctuations are to be expected in any random distribution of data, and there is no set time frame for how close a spike has to be to a given event for the GCP to say they have found a correlation. Wolcott Smith said "A couple of additional statistical adjustments would have to be made to determine if there really was a spike in the numbers," referencing the data related to September 11, 2001. Similarly, Jeffrey D. Scargle believes that unless both Bayesian and classical p-value analysis agree and both show the same anomalous effects, the kind of result GCP proposes will not be generally accepted.
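
The point about spikes in random data is easy to demonstrate: if the analysis window is chosen after the fact, pure noise will almost always supply an extreme-looking deviation somewhere. The following simulation sketch illustrates this with arbitrary parameters (one simulated day, hour-long windows):

    import math
    import random

    random.seed(1)
    # One simulated day of null data: per-second Z-scores of pure chance.
    zs = [random.gauss(0.0, 1.0) for _ in range(86_400)]

    WINDOW = 3_600                                # one-hour window
    best = 0.0
    for start in range(0, len(zs) - WINDOW, 60):  # slide in 1-minute steps
        window_z = sum(zs[start:start + WINDOW]) / math.sqrt(WINDOW)
        best = max(best, abs(window_z))

    # Scanning ~1,400 overlapping windows makes an extreme |Z| likely by
    # chance alone; only a window fixed in advance guards against this.
    print("most extreme hourly Z in pure noise: %.2f" % best)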

In 2003, a New York Times article concluded "All things considered at this point, the stock market seems a more reliable gauge of the national—if not the global—emotional resonance."

In 2007 The Age reported that "[Nelson] concedes the data, so far, is not solid enough for global consciousness to be said to exist at all. It is not possible, for example, to look at the data and predict with any accuracy what (if anything) the eggs may be responding to."

Robert Matthews said that while it was "the most sophisticated attempt yet" to prove psychokinesis existed, the unreliability of significant events to cause statistically significant spikes meant that "the only conclusion to emerge from the Global Consciousness Project so far is that data without a theory is as meaningless as words without a narrative".

 

Noosphere

From Wikipedia, the free encyclopedia

The noosphere (alternate spelling noösphere) is a philosophical concept developed and popularized by the Russian-Ukrainian Soviet biogeochemist Vladimir Vernadsky and the French philosopher and Jesuit priest Pierre Teilhard de Chardin. Vernadsky defined the noosphere as the new state of the biosphere and described it as the planetary "sphere of reason". The noosphere represents the highest stage of biospheric development, its defining factor being the development of humankind's rational activities.

The word is derived from the Greek νόος ("mind", "reason") and σφαῖρα ("sphere"), in lexical analogy to "atmosphere" and "biosphere". The concept, however, cannot be attributed to a single author. The founding authors Vernadsky and Teilhard de Chardin developed two related but starkly different concepts, the former grounded in the geological sciences and the latter in theology. Both conceptions of the noosphere share the common thesis that human reason and scientific thought have together created, and will continue to create, the next evolutionary geological layer. This geological layer is part of the evolutionary chain. Second-generation authors, predominantly of Russian origin, have further developed the Vernadskian concept, creating the related concepts of noocenosis and noocenology.

Founding authors

The term noosphere was first used in the publications of Pierre Teilhard de Chardin in 1922, in his Cosmogenesis. Vernadsky was most likely introduced to the term by a common acquaintance, Édouard Le Roy, during a stay in Paris. Some sources claim that Édouard Le Roy actually proposed the term first. Vernadsky himself wrote that he was first introduced to the concept by Le Roy in his 1927 lectures at the Collège de France, and that Le Roy had emphasized a mutual exploration of the concept with Teilhard de Chardin. According to Vernadsky's own letters, he took Le Roy's ideas on the noosphere from Le Roy's article "Les origines humaines et l'évolution de l'intelligence", part III: "La noosphère et l'hominisation", before reworking the concept within his own field, biogeochemistry. The historian Bailes concludes that Vernadsky and Teilhard de Chardin influenced each other, as Teilhard de Chardin also attended Vernadsky's lectures on biogeochemistry before creating the concept of the noosphere.

One account states that Le Roy and Teilhard were unaware of the concept of the biosphere when they developed their notion of the noosphere, and that it was Vernadsky who introduced them to it, giving their conceptualization a grounding in the natural sciences. Both Teilhard de Chardin and Vernadsky base their conceptions of the noosphere on the term 'biosphere', developed by Eduard Suess in 1875. Despite the differing backgrounds, approaches, and focuses of Teilhard and Vernadsky, they have a few fundamental themes in common. Both scientists overstepped the boundaries of natural science and attempted to create all-embracing theoretical constructions founded in philosophy, the social sciences, and authorized interpretations of evolutionary theory. Moreover, both thinkers were convinced of the teleological character of evolution. They also argued that human activity becomes a geological power and that the manner in which it is directed can influence the environment. There are, however, fundamental differences in the two conceptions.

Concept

In the theory of Vernadsky, the noosphere is the third in a succession of phases of development of the Earth, after the geosphere (inanimate matter) and the biosphere (biological life). Just as the emergence of life fundamentally transformed the geosphere, the emergence of human cognition fundamentally transforms the biosphere. In contrast to the conceptions of the Gaia theorists, or the promoters of cyberspace, Vernadsky's noosphere emerges at the point where humankind, through the mastery of nuclear processes, begins to create resources through the transmutation of elements. It is also currently being researched as part of the Global Consciousness Project.

Teilhard perceived a directionality in evolution along an axis of increasing Complexity/Consciousness. For Teilhard, the noosphere is the sphere of thought encircling the earth that has emerged through evolution as a consequence of this growth in complexity/consciousness. The noosphere is therefore as much a part of nature as the barysphere, lithosphere, hydrosphere, atmosphere, and biosphere. As a result, Teilhard sees the "social phenomenon [as] the culmination of and not the attenuation of the biological phenomenon." These social phenomena are part of the noosphere and include, for example, legal, educational, religious, research, industrial, and technological systems. In this sense, the noosphere emerges through and is constituted by the interaction of human minds. The noosphere thus grows in step with the organization of the human mass in relation to itself as it populates the earth. Teilhard argued that the noosphere evolves towards ever greater personalisation, individuation, and unification of its elements. He saw the Christian notion of love as the principal driver of "noogenesis", the evolution of mind. Evolution would culminate in the Omega Point—an apex of thought/consciousness—which he identified with the eschatological return of Christ.

One of the original aspects of the noosphere concept deals with evolution. Henri Bergson, in L'évolution créatrice (1907), was one of the first to propose that evolution is "creative" and cannot necessarily be explained solely by Darwinian natural selection. Creative evolution is upheld, according to Bergson, by a constant vital force which animates life and fundamentally connects mind and body, an idea opposing the dualism of René Descartes. In 1923, C. Lloyd Morgan took this work further, elaborating on an "emergent evolution" which could explain increasing complexity (including the evolution of mind). Morgan found that many of the most interesting changes in living things have been largely discontinuous with past evolution, and therefore did not necessarily evolve through a gradual process of natural selection. Rather, he posited, evolution experiences jumps in complexity (such as the emergence of a self-reflective universe, or noosphere) in a sort of qualitative punctuated equilibrium. Finally, the complexification of human cultures, particularly language, facilitated a quickening of evolution in which cultural evolution occurs more rapidly than biological evolution. Recent understanding of human ecosystems and of human impact on the biosphere has led to the linking of the notion of sustainability with the "co-evolution" and harmonization of cultural and biological evolution.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...