
Monday, December 23, 2024

Entropy (statistical thermodynamics)

From Wikipedia, the free encyclopedia

The concept of entropy was first developed by German physicist Rudolf Clausius in the mid-nineteenth century as a thermodynamic property that predicts that certain spontaneous processes are irreversible or impossible. In statistical mechanics, entropy is formulated as a statistical property using probability theory. The statistical entropy perspective was introduced in 1870 by Austrian physicist Ludwig Boltzmann, who established a new field of physics that provided the descriptive linkage between the macroscopic observation of nature and the microscopic view based on the rigorous treatment of large ensembles of microscopic states that constitute thermodynamic systems.

Boltzmann's principle

Ludwig Boltzmann defined entropy as a measure of the number of possible microscopic states (microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties, which constitute the macrostate of the system. A useful illustration is the example of a sample of gas contained in a container. The easily measurable parameters volume, pressure, and temperature of the gas describe its macroscopic condition (state). At a microscopic level, the gas consists of a vast number of freely moving atoms or molecules, which randomly collide with one another and with the walls of the container. The collisions with the walls produce the macroscopic pressure of the gas, which illustrates the connection between microscopic and macroscopic phenomena.

A microstate of the system is a description of the positions and momenta of all its particles. The large number of particles of the gas provides an infinite number of possible microstates for the sample, but collectively they exhibit a well-defined average configuration, which is exhibited as the macrostate of the system, to which the contribution of each individual microstate is negligibly small. The ensemble of microstates comprises a statistical distribution of probability for each microstate, and the group of most probable configurations accounts for the macroscopic state. Therefore, the system can be described as a whole by only a few macroscopic parameters, called the thermodynamic variables: the total energy E, volume V, pressure P, temperature T, and so forth. However, this description is relatively simple only when the system is in a state of equilibrium.

Equilibrium may be illustrated with a simple example of a drop of food coloring falling into a glass of water. The dye diffuses in a complicated manner, which is difficult to precisely predict. However, after sufficient time has passed, the system reaches a uniform color, a state much easier to describe and explain.

Boltzmann formulated a simple relationship between entropy and the number of possible microstates of a system, which is denoted by the symbol Ω. The entropy S is proportional to the natural logarithm of this number:

S = k_B ln Ω

The proportionality constant k_B is one of the fundamental constants of physics and is named the Boltzmann constant in honor of its discoverer.

Boltzmann's entropy describes the system when all the accessible microstates are equally likely. It is the configuration corresponding to the maximum of entropy at equilibrium. The randomness or disorder is maximal, and so is the lack of distinction (or information) of each microstate.

Entropy is a thermodynamic property just like pressure, volume, or temperature. Therefore, it connects the microscopic and the macroscopic world view.

Boltzmann's principle is regarded as the foundation of statistical mechanics.

Gibbs entropy formula

The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if E_i is the energy of microstate i, and p_i is the probability that it occurs during the system's fluctuations, then the entropy of the system is

S = −k_B Σ_i p_i ln p_i
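
To make the formula concrete, here is a minimal Python sketch (not part of the article) that evaluates S = −k_B Σ_i p_i ln p_i for an arbitrary, made-up probability distribution; the uniform case recovers Boltzmann's k_B ln Ω.

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

    def gibbs_entropy(probabilities):
        """S = -k_B * sum_i p_i ln p_i for a discrete distribution."""
        return -k_B * sum(p * math.log(p) for p in probabilities if p > 0.0)

    # Arbitrary example distribution over four microstates (must sum to 1)
    p = [0.4, 0.3, 0.2, 0.1]
    print(gibbs_entropy(p))           # entropy in J/K
    print(gibbs_entropy([0.25] * 4))  # uniform case: k_B * ln(4), the maximum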

Entropy changes for systems in a canonical state

A system with a well-defined temperature, i.e., one in thermal equilibrium with a thermal reservoir, has a probability p_i of being in a microstate i given by Boltzmann's distribution, p_i = exp(−E_i/(k_B T))/Z, where Z = Σ_i exp(−E_i/(k_B T)) is the partition function.

Changes in the entropy caused by changes in the external constraints are then given by:

dS = −k_B Σ_i d(p_i ln p_i)
   = −k_B Σ_i [(dp_i) ln p_i + dp_i]
   = −k_B Σ_i (dp_i) ln p_i
   = −k_B Σ_i (dp_i) (−E_i/(k_B T) − ln Z)
   = (1/T) Σ_i E_i dp_i
   = (1/T) [Σ_i d(E_i p_i) − Σ_i (dE_i) p_i]

where we have twice used the conservation of probability, Σ_i dp_i = 0.

Now, Σ_i d(E_i p_i) is the expectation value of the change in the total energy of the system.

If the changes are sufficiently slow, so that the system remains in the same microscopic state, but the state slowly (and reversibly) changes, then Σ_i (dE_i) p_i is the expectation value of the work done on the system through this reversible process, δw_rev.

But from the first law of thermodynamics, dE = δw + δq. Therefore,

dS = δq_rev / T

In the thermodynamic limit, the fluctuation of the macroscopic quantities from their average values becomes negligible; so this reproduces the definition of entropy from classical thermodynamics, given above.

The quantity k_B is the Boltzmann constant. The remaining factor of the equation, the entire summation Σ_i p_i ln p_i, is dimensionless, since each value p_i is a probability and therefore dimensionless, and ln is the natural logarithm. Hence the SI derived units on both sides of the equation are the same as those of heat capacity:

[S] = [k_B] = J/K (joules per kelvin)

This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) on which the sum is done is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article).

Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and hence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas.
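
A minimal numerical sketch of this point (not from the article): for a correlated pair of two-state particles, using the product of the marginal distributions, i.e. pretending the particles are independent, gives a larger entropy than the true joint distribution does. The probabilities below, and the choice k_B = 1, are arbitrary.

    import math

    def entropy(ps):
        """Gibbs/Shannon entropy with k_B = 1: -sum p ln p."""
        return -sum(p * math.log(p) for p in ps if p > 0.0)

    # Joint distribution of two correlated two-state particles (A, B)
    joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

    # Marginal distributions of A and of B
    pA = [sum(v for (a, b), v in joint.items() if a == s) for s in (0, 1)]
    pB = [sum(v for (a, b), v in joint.items() if b == s) for s in (0, 1)]

    S_joint = entropy(joint.values())
    S_independent = entropy(pA) + entropy(pB)  # what ignoring correlations gives

    print(S_joint, S_independent)  # the independent estimate is the larger one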

This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum mechanical case.

It has been shown that the Gibbs entropy is equal to the classical "heat engine" entropy characterized by dS = δQ/T, and the generalized Boltzmann distribution is a sufficient and necessary condition for this equivalence. Furthermore, the Gibbs entropy is the only entropy that is equivalent to the classical "heat engine" entropy under the following postulates:

  1. The probability density function is proportional to some function of the ensemble parameters and random variables.
  2. Thermodynamic state functions are described by ensemble averages of random variables.
  3. At infinite temperature, all the microstates have the same probability.

Ensembles

The various ensembles used in statistical thermodynamics are linked to the entropy by the following relations:

  • microcanonical ensemble: S = k_B ln Ω_mic
  • canonical ensemble: S = k_B (ln Z_can + β⟨E⟩)
  • grand canonical ensemble: S = k_B (ln Z_gr + β⟨E⟩ − βμ⟨N⟩)

where Ω_mic is the number of accessible microstates, Z_can and Z_gr are the canonical and grand canonical partition functions, β = 1/(k_B T), and μ is the chemical potential.
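
As a rough consistency check (not from the article): for a canonical (Boltzmann) distribution over a handful of arbitrary energy levels, the Gibbs entropy equals k_B (ln Z + β⟨E⟩), the standard canonical-ensemble relation, to rounding error.

    import math

    k_B = 1.380649e-23  # J/K
    T = 300.0           # arbitrary temperature, K
    beta = 1.0 / (k_B * T)

    E = [0.0, 1.0e-21, 2.5e-21, 4.0e-21]  # arbitrary energy levels, J

    Z = sum(math.exp(-beta * Ei) for Ei in E)      # partition function
    p = [math.exp(-beta * Ei) / Z for Ei in E]     # Boltzmann distribution
    E_mean = sum(pi * Ei for pi, Ei in zip(p, E))  # <E>

    S_gibbs = -k_B * sum(pi * math.log(pi) for pi in p)
    S_canonical = k_B * (math.log(Z) + beta * E_mean)

    print(S_gibbs, S_canonical)  # the two values agree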

Order through chaos and the second law of thermodynamics

We can think of Ω as a measure of our lack of knowledge about a system. To illustrate this idea, consider a set of 100 coins, each of which is either heads up or tails up. In this example, let us suppose that the macrostates are specified by the total number of heads and tails, while the microstates are specified by the facings of each individual coin (i.e., the exact order in which heads and tails occur). For the macrostates of 100 heads or 100 tails, there is exactly one possible configuration, so our knowledge of the system is complete. At the opposite extreme, the macrostate which gives us the least knowledge about the system consists of 50 heads and 50 tails in any order, for which there are 100891344545564193334812497256 (100 choose 50) ≈ 10^29 possible microstates.
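
To put numbers on this (a sketch, not from the article): the multiplicity Ω of each macrostate is a binomial coefficient, and Boltzmann's formula S = k_B ln Ω then assigns zero entropy to the all-heads macrostate and the maximum entropy to the 50–50 macrostate.

    import math

    k_B = 1.380649e-23  # J/K

    def multiplicity(n_coins, n_heads):
        """Number of microstates (coin orderings) for a given macrostate."""
        return math.comb(n_coins, n_heads)

    def boltzmann_entropy(omega):
        return k_B * math.log(omega)

    print(multiplicity(100, 100))                    # 1, so S = 0
    print(multiplicity(100, 50))                     # 100891344545564193334812497256
    print(boltzmann_entropy(multiplicity(100, 50)))  # about 9.2e-22 J/K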

Even when a system is entirely isolated from external influences, its microstate is constantly changing. For instance, the particles in a gas are constantly moving, and thus occupy a different position at each moment of time; their momenta are also constantly changing as they collide with each other or with the container walls. Suppose we prepare the system in an artificially highly ordered equilibrium state. For instance, imagine dividing a container with a partition and placing a gas on one side of the partition, with a vacuum on the other side. If we remove the partition and watch the subsequent behavior of the gas, we will find that its microstate evolves according to some chaotic and unpredictable pattern, and that on average these microstates will correspond to a more disordered macrostate than before. It is possible, but extremely unlikely, for the gas molecules to bounce off one another in such a way that they remain in one half of the container. It is overwhelmingly probable for the gas to spread out to fill the container evenly, which is the new equilibrium macrostate of the system.

This is an example illustrating the second law of thermodynamics:

the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value.

Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems. For example, the Earth is not an isolated system because it is constantly receiving energy in the form of sunlight. In contrast, the universe may be considered an isolated system, so that its total entropy is constantly increasing.

Counting of microstates

In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers. If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining. In the case of the ideal gas, we count two states of an atom as the "same" state if their positions and momenta are within δx and δp of each other. Since the values of δx and δp can be chosen arbitrarily, the entropy is not uniquely defined. It is defined only up to an additive constant. (As we will see, the thermodynamic definition of entropy is also defined only up to a constant.)

To avoid coarse graining one can take the entropy as defined by the H-theorem.

However, this ambiguity can be resolved with quantum mechanics. The quantum state of a system can be expressed as a superposition of "basis" states, which can be chosen to be energy eigenstates (i.e. eigenstates of the quantum Hamiltonian). Usually, the quantum states are discrete, even though there may be an infinite number of them. For a system with some specified energy E, one takes Ω to be the number of energy eigenstates within a macroscopically small energy range between E and E + δE. In the thermodynamic limit, the specific entropy becomes independent of the choice of δE.

An important result, known as Nernst's theorem or the third law of thermodynamics, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its lowest-energy state, or ground state, so that its entropy is determined by the degeneracy of the ground state. Many systems, such as crystal lattices, have a unique ground state, and (since ln(1) = 0) this means that they have zero entropy at absolute zero. Other systems have more than one state with the same, lowest energy, and have a non-vanishing "zero-point entropy". For instance, ordinary ice has a zero-point entropy of 3.41 J/(mol⋅K), because its underlying crystal structure possesses multiple configurations with the same energy (a phenomenon known as geometrical frustration).
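
As a quick check of the quoted figure (not from the article): Pauling's classic ice-rule estimate of this zero-point entropy is R ln(3/2) per mole, close to the 3.41 J/(mol⋅K) cited above.

    import math

    R = 8.314462618  # molar gas constant, J/(mol*K)

    pauling_estimate = R * math.log(3 / 2)  # Pauling's ice-rule estimate
    print(pauling_estimate)                 # ~3.37 J/(mol*K), vs. the measured ~3.41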

The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero (0 K) is zero. This means that nearly all molecular motion should cease. The oscillator equation for predicting quantized vibrational levels shows that even when the vibrational quantum number is 0, the molecule still has vibrational energy:

E_n = hν(n + 1/2)

where h is the Planck constant, ν is the characteristic frequency of the vibration, and n is the vibrational quantum number. Even when n = 0 (the zero-point energy), E_n does not equal 0, in adherence to the Heisenberg uncertainty principle.
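
A small numerical illustration of the zero-point energy (assumptions: the harmonic-oscillator levels E_n = hν(n + 1/2) quoted above and an arbitrary, merely illustrative vibrational frequency of 9×10^13 Hz):

    h = 6.62607015e-34  # Planck constant, J*s
    nu = 9.0e13         # assumed vibrational frequency, Hz (illustrative only)

    def vibrational_energy(n):
        """Harmonic-oscillator levels E_n = h * nu * (n + 1/2)."""
        return h * nu * (n + 0.5)

    print(vibrational_energy(0))  # zero-point energy, ~3e-20 J, not zero
    print(vibrational_energy(1))  # first excited vibrational level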

Thermodynamic diagrams

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Thermodynamic_diagrams

Thermodynamic diagrams are diagrams used to represent the thermodynamic states of a material (typically fluid) and the consequences of manipulating this material. For instance, a temperature–entropy diagram (T–s diagram) may be used to demonstrate the behavior of a fluid as it is changed by a compressor.

Overview

Especially in meteorology, they are used to analyze the actual state of the atmosphere derived from the measurements of radiosondes, usually obtained with weather balloons. In such diagrams, temperature and humidity values (represented by the dew point) are displayed with respect to pressure. Thus the diagram gives at first glance the actual atmospheric stratification and vertical water vapor distribution. Further analysis gives the actual base and top height of convective clouds or possible instabilities in the stratification.

By estimating the amount of energy supplied by solar radiation, it is possible to predict the 2 m (6.6 ft) temperature, humidity, and wind during the day, the development of the boundary layer of the atmosphere, the occurrence and development of clouds, and the conditions for soaring flight during the day.

The main feature of thermodynamic diagrams is the equivalence between the area in the diagram and energy. When air changes pressure and temperature during a process and traces a closed curve within the diagram, the area enclosed by this curve is proportional to the energy that has been gained or released by the air.
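
A minimal numerical sketch of the area–energy equivalence (not from the article): for a closed cycle given as sampled points in a pressure–volume diagram, the net work exchanged equals the signed area enclosed, which can be estimated with the shoelace formula. The rectangular cycle below is an arbitrary example.

    def cycle_work(points):
        """Signed area of a closed polygon of (V, P) points (shoelace formula);
        for a quasi-static cycle this is the net work done by the gas."""
        n = len(points)
        area = 0.0
        for i in range(n):
            v1, p1 = points[i]
            v2, p2 = points[(i + 1) % n]
            area += v1 * p2 - v2 * p1
        return -0.5 * area  # sign chosen so a clockwise P-V cycle gives positive work

    # Arbitrary rectangular cycle: expand at 2e5 Pa, compress back at 1e5 Pa
    cycle = [(0.01, 2e5), (0.03, 2e5), (0.03, 1e5), (0.01, 1e5)]
    print(cycle_work(cycle))  # 0.02 m^3 * 1e5 Pa = 2000 J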

Types of thermodynamic diagrams

General purpose diagrams include:

  • the pressure–volume (P–V) diagram
  • the temperature–entropy (T–s) diagram
  • the enthalpy–entropy (h–s) diagram (Mollier diagram)

Specific to weather services, there are mainly three different types of thermodynamic diagrams used:

  • the emagram
  • the tephigram
  • the skew-T log-P diagram

All three diagrams are derived from the physical P–alpha diagram which combines pressure (P) and specific volume (alpha) as its basic coordinates. The P–alpha diagram shows a strong deformation of the grid for atmospheric conditions and is therefore not useful in atmospheric sciences. The three diagrams are constructed from the P–alpha diagram by using appropriate coordinate transformations.
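
As a hedged sketch of such a coordinate transformation (not from the article): an emagram plots temperature on the horizontal axis against the logarithm of pressure on a reversed vertical axis, so a sounding given as pressure–temperature pairs can be mapped with a one-line transformation. The sounding values below are invented for illustration.

    import math

    # Invented sounding: (pressure in hPa, temperature in deg C)
    sounding = [(1000, 15.0), (850, 8.0), (700, 0.5), (500, -15.0), (300, -45.0)]

    # Emagram-style coordinates: x = temperature, y = -ln(p / p0), so y increases upward
    p0 = 1000.0
    emagram_points = [(t, -math.log(p / p0)) for p, t in sounding]

    for (p, t), (x, y) in zip(sounding, emagram_points):
        print(f"{p:6.0f} hPa  T = {t:6.1f} C  ->  x = {x:6.1f}, y = {y:5.2f}")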

Not a thermodynamic diagram in a strict sense, since it does not display the energy–area equivalence, is the Stüve diagram. But due to its simpler construction it is preferred in education.

Another widely used diagram that does not display the energy–area equivalence is the θ-z diagram (Theta-height diagram), extensively used in boundary layer meteorology.

Characteristics

Thermodynamic diagrams usually show a net of five different lines:

  • isobars = lines of constant pressure
  • isotherms = lines of constant temperature
  • dry adiabats = lines of constant potential temperature representing the temperature of a rising parcel of dry air
  • saturated adiabats or pseudoadiabats = lines representing the temperature of a rising parcel saturated with water vapor
  • mixing ratio = lines representing the dewpoint of a rising parcel

The lapse rates, the dry adiabatic lapse rate (DALR) and the moist adiabatic lapse rate (MALR), are obtained from these lines. With their help, parameters such as the cloud condensation level, the level of free convection, and the onset of cloud formation can be derived from the soundings.

Example

The path, or series of states, through which a system passes from an initial equilibrium state to a final equilibrium state can be viewed graphically on pressure-volume (P-V), pressure-temperature (P-T), and temperature-entropy (T-s) diagrams.

There are an infinite number of possible paths from an initial point to an end point in a process. In many cases the path matters; however, changes in the thermodynamic properties depend only on the initial and final states and not upon the path.

Figure 1

Consider a gas in a cylinder with a free-floating piston resting on top of a volume of gas V1 at a temperature T1. If the gas is heated so that its temperature rises to T2 while the piston is allowed to rise to V2, as in Figure 1, then the pressure is kept the same throughout the process, because the free-floating piston is allowed to rise, making this an isobaric, or constant-pressure, process. This process path is a straight horizontal line from state one to state two on a P-V diagram.

Figure 2

It is often valuable to calculate the work done in a process. The work done in a process is the area beneath the process path on a P-V diagram (Figure 2). If the process is isobaric, then the work done on the piston is easily calculated. For example, if the gas expands slowly against the piston, the work done by the gas to raise the piston is the force F times the distance d. But the force is just the pressure P of the gas times the area A of the piston, F = PA. Thus

  • W = Fd
  • W = PAd
  • W = P(V2 − V1)
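
A minimal worked example of this calculation (the numbers are arbitrary): for an isobaric expansion the work done by the gas is just the constant pressure times the change in volume.

    P = 101_325.0  # constant pressure, Pa (about 1 atm; illustrative)
    V1 = 0.010     # initial volume, m^3
    V2 = 0.015     # final volume, m^3

    W = P * (V2 - V1)  # work done by the gas on the piston
    print(W)           # about 506.6 J
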
Figure 3

Now let's say that the piston is not able to move smoothly within the cylinder due to static friction with the walls of the cylinder. Assuming that the temperature is increased slowly, the process path is no longer straight and no longer isobaric: the gas undergoes an isometric (constant-volume) process until the force exceeds the frictional force, and then an isothermal process back to an equilibrium state. This is repeated until the end state is reached (see Figure 3). The work done on the piston in this case is different because of the additional work required to overcome the friction. The work done due to friction is the difference between the work done along these two process paths.

Many engineers neglect friction at first in order to generate a simplified model. For a more accurate description, the height of the highest point, i.e., the maximum pressure needed to surpass the static friction, would be proportional to the friction coefficient, and the slope going back down to the normal pressure would be the same as for an isothermal process, provided the temperature is increased at a slow enough rate.

Another path in this process is an isometric process, a process in which volume is held constant, which appears as a vertical line on a P-V diagram (Figure 3). Since the piston does not move during this process, no work is done.

State function

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/State_function

In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (that describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not on the path which the system has taken to reach that state. A state function describes equilibrium states of a system, and thus also describes the type of system. A state variable is typically a state function, so the determination of the values of the other state variables at an equilibrium state also determines the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of the other state variables and so is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state.

Internal energy, enthalpy, and entropy are examples of state quantities or state functions because they quantitatively describe an equilibrium state of a thermodynamic system, regardless of how the system has arrived in that state. In contrast, mechanical work and heat are process quantities or path functions because their values depend on a specific "transition" (or "path") between two equilibrium states that a system has taken to reach the final equilibrium state. Exchanged heat (in certain discrete amounts) can be associated with changes of state function such as enthalpy. The description of the system heat exchange is done by a state function, and thus enthalpy changes point to an amount of heat. This can also apply to entropy when heat is compared to temperature. The description breaks down for quantities exhibiting hysteresis.

History

It is likely that the term "functions of state" was used in a loose sense during the 1850s and 1860s by those such as Rudolf Clausius, William Rankine, Peter Tait, and William Thomson. By the 1870s, the term had acquired a use of its own. In his 1873 paper "Graphical Methods in the Thermodynamics of Fluids", Willard Gibbs states: "The quantities v, p, t, ε, and η are determined when the state of the body is given, and it may be permitted to call them functions of the state of the body."

Overview

A thermodynamic system is described by a number of thermodynamic parameters (e.g. temperature, volume, or pressure) which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system (D). For example, a monatomic gas with a fixed number of particles is a simple case of a two-dimensional system (D = 2). Any two-dimensional system is uniquely specified by two parameters. Choosing a different pair of parameters, such as pressure and volume instead of pressure and temperature, creates a different coordinate system in two-dimensional thermodynamic state space but is otherwise equivalent. Pressure and temperature can be used to find volume, pressure and volume can be used to find temperature, and temperature and volume can be used to find pressure. An analogous statement holds for higher-dimensional spaces, as described by the state postulate.
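
A small sketch of this point using the ideal gas law PV = nRT as the equation of state (the numbers are arbitrary): fixing any two of P, V, and T determines the third.

    R = 8.314462618  # molar gas constant, J/(mol*K)
    n = 1.0          # amount of gas, mol (arbitrary)

    def volume(P, T):
        return n * R * T / P

    def pressure(V, T):
        return n * R * T / V

    def temperature(P, V):
        return P * V / (n * R)

    P, T = 101_325.0, 300.0  # choose two state variables...
    V = volume(P, T)         # ...and the equation of state fixes the third
    print(V, pressure(V, T), temperature(P, V))  # consistent round trip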

Generally, a state space is defined by an equation of the form F(P, V, T, …) = 0, where P denotes pressure, T denotes temperature, V denotes volume, and the ellipsis denotes other possible state variables like particle number N and entropy S. If the state space is two-dimensional as in the above example, it can be visualized as a three-dimensional graph (a surface in three-dimensional space). However, the labels of the axes are not unique (since there are more than three state variables in this case), and only two independent variables are necessary to define the state.

When a system changes state continuously, it traces out a "path" in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, whether as a function of time or a function of some other external variable. For example, having the pressure P(t) and volume V(t) as functions of time from time t0 to t1 will specify a path in two-dimensional state space. Any function of time can then be integrated over the path. For example, to calculate the work done by the system from time t0 to time t1, calculate

W = ∫ P dV = ∫ P(t) (dV/dt) dt   (integrated from t0 to t1).

In order to calculate the work W in the above integral, the functions P(t) and V(t) must be known at each time t over the entire path. In contrast, a state function only depends upon the system parameters' values at the endpoints of the path. For example, the following equation can be used to calculate the work plus the integral of V dP over the path:

W + ∫ V dP = ∫ [P (dV/dt) + V (dP/dt)] dt = ∫ [d(P(t)V(t))/dt] dt   (all integrals from t0 to t1).

In the equation, the integrand P dV + V dP can be expressed as the exact differential of the function P(t)V(t). Therefore, the integral can be expressed as the difference in the value of P(t)V(t) at the end points of the integration. The product PV is therefore a state function of the system.
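
A numerical sketch of the distinction (not from the article): integrating P dV along two different paths between the same end states gives different amounts of work, while the change in the state function PV is the same either way. All numbers are arbitrary.

    # Two paths between the same end states A = (P1, V1) and B = (P2, V2):
    # path 1: isobaric at P1 out to V2, then isochoric down to P2
    # path 2: isochoric at V1 down to P2, then isobaric at P2 out to V2
    P1, V1 = 2.0e5, 0.010
    P2, V2 = 1.0e5, 0.030

    W_path1 = P1 * (V2 - V1)  # work is done only on the isobaric leg
    W_path2 = P2 * (V2 - V1)

    dPV = P2 * V2 - P1 * V1   # change in the state function PV: path-independent

    print(W_path1, W_path2)   # 4000.0 J vs 2000.0 J: the work depends on the path
    print(dPV)                # 1000.0 J either way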

The notation d will be used for an exact differential. In other words, the integral of dΦ will be equal to Φ(t1) − Φ(t0). The symbol δ will be reserved for an inexact differential, which cannot be integrated without full knowledge of the path. For example, δW = PdV will be used to denote an infinitesimal increment of work.

State functions represent quantities or properties of a thermodynamic system, while non-state functions represent a process during which the state functions change. For example, the state function PV is proportional to the internal energy of an ideal gas, but the work W is the amount of energy transferred as the system performs work. Internal energy is identifiable; it is a particular form of energy. Work is the amount of energy that has changed its form or location.

List of state functions

The following are considered to be state functions in thermodynamics:

  • internal energy U
  • enthalpy H
  • entropy S
  • Gibbs free energy G
  • Helmholtz free energy A
  • pressure P
  • temperature T
  • volume V
  • mass m
  • particle number (or chemical composition)

Algorithmic information theory

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."

Besides the formalization of a universal measure for irreducible information content of computably generated objects, some main achievements of AIT were to show that: in fact algorithmic complexity follows (in the self-delimited case) the same inequalities (except for a constant) that entropy does, as in classical information theory; randomness is incompressibility; and, within the realm of randomly generated software, the probability of occurrence of any data structure is of the order of the shortest program that generates it when running on a universal machine.

AIT principally studies measures of irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. One of the main motivations behind AIT is the very study of the information carried by mathematical objects as in the field of metamathematics, e.g., as shown by the incompleteness results mentioned below. Other main motivations came from surpassing the limitations of classical information theory for single and fixed objects, formalizing the concept of randomness, and finding a meaningful probabilistic inference without prior knowledge of the probability distribution (e.g., whether it is independent and identically distributed, Markovian, or even stationary). In this way, AIT is known to be basically founded upon three main mathematical concepts and the relations between them: algorithmic complexity, algorithmic randomness, and algorithmic probability.

Overview

Algorithmic information theory principally studies complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers.

Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the most-compressed possible self-contained representation of that string. A self-contained representation is essentially a program—in some fixed but otherwise irrelevant universal programming language—that, when run, outputs the original string.

From this point of view, a 3000-page encyclopedia actually contains less information than 3000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and consonants present.

Unlike classical information theory, algorithmic information theory gives formal, rigorous definitions of a random string and a random infinite sequence that do not depend on physical or philosophical intuitions about nondeterminism or likelihood. (The set of random strings depends on the choice of the universal Turing machine used to define Kolmogorov complexity, but any choice gives identical asymptotic results because the Kolmogorov complexity of a string is invariant up to an additive constant depending only on the choice of universal Turing machine. For this reason the set of random infinite sequences is independent of the choice of universal machine.)

Some of the results of algorithmic information theory, such as Chaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction of Chaitin's constant Ω, a real number that expresses the probability that a self-delimiting universal Turing machine will halt when its input is supplied by flips of a fair coin (sometimes thought of as the probability that a random computer program will eventually halt). Although Ω is easily defined, in any consistent axiomatizable theory one can only compute finitely many digits of Ω, so it is in some sense unknowable, providing an absolute limit on knowledge that is reminiscent of Gödel's incompleteness theorems. Although the digits of Ω cannot be determined, many properties of Ω are known; for example, it is an algorithmically random sequence and thus its binary digits are evenly distributed (in fact it is normal).

History

Algorithmic information theory was founded by Ray Solomonoff, who published the basic ideas on which the field is based as part of his invention of algorithmic probability, a way to overcome serious problems associated with the application of Bayes' rules in statistics. He first described his results at a conference at Caltech in 1960, and in a report of February 1960, "A Preliminary Report on a General Theory of Inductive Inference." Algorithmic information theory was later developed independently by Andrey Kolmogorov in 1965 and by Gregory Chaitin around 1966.

There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974). Per Martin-Löf also contributed significantly to the information theory of infinite sequences. An axiomatic approach to algorithmic information theory based on the Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach encompasses other approaches in the algorithmic information theory. It is possible to treat different measures of algorithmic information as particular cases of axiomatically defined measures of algorithmic information. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to easily deduce all such results from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to algorithmic information theory was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).

Precise definitions

A binary string is said to be random if the Kolmogorov complexity of the string is at least the length of the string. A simple counting argument shows that some strings of any given length are random, and almost all strings are very close to being random. Since Kolmogorov complexity depends on a fixed choice of universal Turing machine (informally, a fixed "description language" in which the "descriptions" are given), the collection of random strings does depend on the choice of fixed universal machine. Nevertheless, the collection of random strings, as a whole, has similar properties regardless of the fixed machine, so one can (and often does) talk about the properties of random strings as a group without having to first specify a universal machine.
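
A sketch of the counting argument (not from the article): there are 2^n binary strings of length n but fewer than 2^(n−c) descriptions shorter than n − c bits, so at most a fraction 2^(−c) of the strings can be compressed by c or more bits.

    def max_fraction_compressible(n, c):
        """Upper bound on the fraction of length-n binary strings whose
        complexity is below n - c: there are only 2**(n - c) - 1 binary
        descriptions shorter than n - c bits."""
        return (2 ** (n - c) - 1) / 2 ** n

    for c in (1, 10, 20):
        print(c, max_fraction_compressible(64, c))
    # Fewer than one string in 2**c can be compressed by c bits,
    # so almost all strings are close to incompressible.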

An infinite binary sequence is said to be random if, for some constant c, for all n, the Kolmogorov complexity of the initial segment of length n of the sequence is at least n − c. It can be shown that almost every sequence (from the point of view of the standard measure—"fair coin" or Lebesgue measure—on the space of infinite binary sequences) is random. Also, since it can be shown that the Kolmogorov complexity relative to two different universal machines differs by at most a constant, the collection of random infinite sequences does not depend on the choice of universal machine (in contrast to finite strings). This definition of randomness is usually called Martin-Löf randomness, after Per Martin-Löf, to distinguish it from other similar notions of randomness. It is also sometimes called 1-randomness to distinguish it from other stronger notions of randomness (2-randomness, 3-randomness, etc.). In addition to Martin-Löf randomness concepts, there are also recursive randomness, Schnorr randomness, and Kurtz randomness etc. Yongge Wang showed that all of these randomness concepts are different.

(Related definitions can be made for alphabets other than the set {0, 1}.)

Specific sequence

Algorithmic information theory (AIT) is the information theory of individual objects, using computer science, and concerns itself with the relationship between computation, information, and randomness.

The information content or complexity of an object can be measured by the length of its shortest description. For instance the string

"0101010101010101010101010101010101010101010101010101010101010101"

has the short description "32 repetitions of '01'", while

"1100100001100001110111101110110011111010010000100101011110010110"

presumably has no simple description other than writing down the string itself.
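
Kolmogorov complexity itself is uncomputable, but an off-the-shelf compressor gives a rough, hedged proxy for the contrast drawn above: the regular string shrinks to far fewer bytes than the irregular one.

    import zlib

    regular = "01" * 32
    irregular = "1100100001100001110111101110110011111010010000100101011110010110"

    for s in (regular, irregular):
        compressed = zlib.compress(s.encode("ascii"), level=9)
        print(len(s), "->", len(compressed), "bytes")
    # A general-purpose compressor only approximates algorithmic complexity,
    # but the ordering, regular well below irregular, is the point.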

More formally, the algorithmic complexity (AC) of a string x is defined as the length of the shortest program that computes or outputs x, where the program is run on some fixed reference universal computer.

A closely related notion is the probability that a universal computer outputs some string x when fed with a program chosen at random. This algorithmic "Solomonoff" probability (AP) is key in addressing the old philosophical problem of induction in a formal way.

The major drawback of AC and AP is their incomputability. Time-bounded "Levin" complexity penalizes a slow program by adding the logarithm of its running time to its length. This leads to computable variants of AC and AP, and universal "Levin" search (US) solves all inversion problems in optimal time (apart from some unrealistically large multiplicative constant).

AC and AP also allow a formal and rigorous definition of randomness of individual strings that does not depend on physical or philosophical intuitions about non-determinism or likelihood. Roughly, a string is algorithmically "Martin-Löf" random (AR) if it is incompressible in the sense that its algorithmic complexity is equal to its length.

AC, AP, and AR are the core sub-disciplines of AIT, but AIT spawns into many other areas. It serves as the foundation of the Minimum Description Length (MDL) principle, can simplify proofs in computational complexity theory, has been used to define a universal similarity metric between objects, solves the Maxwell's demon problem, and many others.

Ice giant

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Ice_giant   Uranus photographed by V...