Wednesday, August 27, 2025

Entropy (information theory)

From Wikipedia, the free encyclopedia

For a discrete random variable $X$, which takes values in the set $\mathcal{X}$ and is distributed according to $p : \mathcal{X} \to [0, 1]$, the entropy is $H(X) = -\sum_{x \in \mathcal{X}} p(x) \log p(x)$, where $\Sigma$ denotes the sum over the variable's possible values. The choice of base for $\log$, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable.
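As a concrete illustration of this formula, the following short Python sketch evaluates the entropy of a finite distribution in a chosen base; the function name shannon_entropy and the example probabilities are illustrative choices, not part of any standard library:

```python
import math

def shannon_entropy(probs, base=2):
    """H = -sum(p * log_b(p)), skipping zero-probability outcomes."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# One fair coin toss carries 1 bit, ~0.693 nats, or ~0.301 hartleys:
print(shannon_entropy([0.5, 0.5], base=2))        # 1.0
print(shannon_entropy([0.5, 0.5], base=math.e))   # ~0.693
print(shannon_entropy([0.5, 0.5], base=10))       # ~0.301
```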

Two bits of entropy: In the case of two fair coin tosses, the information entropy in bits is the base-2 logarithm of the number of possible outcomes— with two coins there are four possible outcomes, and two bits of entropy. Generally, information entropy is the average amount of information conveyed by an event, when considering all possible outcomes.

The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem.

Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, the analogous quantity is differential entropy, which generalizes the definition above.

Introduction

The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the occurrence of a very low probability event.

The information content, also called the surprisal or self-information, of an event $E$ is a function that increases as the probability $p(E)$ of the event decreases. When $p(E)$ is close to 1, the surprisal of the event is low, but if $p(E)$ is close to 0, the surprisal of the event is high. This relationship is described by the function $\log(1/p(E))$, where log is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, log is the only function that satisfies a specific set of conditions defined in section § Characterization.

Hence, we can define the information, or surprisal, of an event $E$ by $I(E) = -\log_2(p(E))$, or equivalently, $I(E) = \log_2(1/p(E))$.

Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability ($p = 1/6$) than each outcome of a coin toss ($p = 1/2$).

Consider a coin with probability p of landing on heads and probability 1 − p of landing on tails. The maximum surprise is when p = 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit (similarly, one trit with equiprobable values contains $\log_2 3$, about 1.58496, bits of information because it can have one of three values). The minimum surprise is when p = 0 (impossibility) or p = 1 (certainty) and the entropy is zero bits. When the entropy is zero bits (sometimes referred to as unity), there is no uncertainty at all, no freedom of choice, and no information. Other values of p give entropies between zero and one bits.
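To make this concrete, a minimal Python sketch of the binary entropy function is shown below; the function name binary_entropy is an illustrative choice, and the convention 0 log 0 = 0 is applied explicitly:

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with the convention 0*log(0) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0 bit: maximum uncertainty (fair coin)
print(binary_entropy(1.0))   # 0.0 bits: a certain outcome carries no information
print(binary_entropy(0.7))   # ~0.881 bits: a biased coin is more predictable
print(math.log2(3))          # ~1.585 bits: one trit with equiprobable values
```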

Example

Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The entropy, as the sum of probability-weighted log probabilities, measures and captures this effect.
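The sketch below works through this example numerically, comparing the entropy of the assumed 70/26/2/2 distribution with the average length of the variable-length code described above; the dictionaries are illustrative, not part of any standard API:

```python
import math

probs = {'A': 0.70, 'B': 0.26, 'C': 0.02, 'D': 0.02}
code_lengths = {'A': 1, 'B': 2, 'C': 3, 'D': 3}   # codes '0', '10', '110', '111'

entropy = -sum(p * math.log2(p) for p in probs.values())
avg_length = sum(probs[s] * code_lengths[s] for s in probs)

print(round(entropy, 3))      # ~1.091 bits/symbol: the theoretical lower bound
print(round(avg_length, 2))   # 1.34 bits/symbol: well under the 2 bits of a fixed-length code
```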

English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message.

Definition

Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable $X$, which takes values in the set $\mathcal{X}$ and is distributed according to $p : \mathcal{X} \to [0, 1]$ such that $p(x) := \mathbb{P}[X = x]$: $H(X) = \mathbb{E}[I(X)] = \mathbb{E}[-\log p(X)]$.

Here $\mathbb{E}$ is the expected value operator, and $I$ is the information content of $X$. $I(X)$ is itself a random variable.

The entropy can explicitly be written as $H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_b p(x)$, where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are bits for b = 2, nats for b = e, and bans for b = 10.

In the case of $p(x) = 0$ for some $x \in \mathcal{X}$, the value of the corresponding summand $0 \log_b(0)$ is taken to be 0, which is consistent with the limit $\lim_{p \to 0^+} p \log(p) = 0$.

One may also define the conditional entropy of two variables $X$ and $Y$ taking values from sets $\mathcal{X}$ and $\mathcal{Y}$ respectively, as $H(X \mid Y) = -\sum_{x, y} p_{X,Y}(x, y) \log \frac{p_{X,Y}(x, y)}{p_Y(y)}$, where $p_{X,Y}(x, y) := \mathbb{P}[X = x, Y = y]$ and $p_Y(y) := \mathbb{P}[Y = y]$. This quantity should be understood as the remaining randomness in the random variable $X$ given the random variable $Y$.
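The following sketch evaluates this formula for a small, hypothetical joint distribution of two binary variables; the particular probabilities are invented for illustration:

```python
import math

# Hypothetical joint distribution p(x, y) for binary X and Y.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal p_Y(y), obtained by summing the joint over x.
p_y = {}
for (x, y), p in joint.items():
    p_y[y] = p_y.get(y, 0.0) + p

# H(X | Y) = -sum_{x,y} p(x,y) * log2( p(x,y) / p_Y(y) )
h_x_given_y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in joint.items() if p > 0)
print(round(h_x_given_y, 4))   # ~0.7219 bits of randomness left in X once Y is known
```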

Measure theory

Entropy can be formally defined in the language of measure theory as follows: Let $(X, \Sigma, \mu)$ be a probability space. Let $A \in \Sigma$ be an event. The surprisal of $A$ is $\sigma_\mu(A) = -\ln \mu(A)$.

The expected surprisal of $A$ is $h_\mu(A) = \mu(A)\,\sigma_\mu(A)$.

A $\mu$-almost partition is a set family $P \subseteq \mathcal{P}(X)$ such that $\mu(\cup P) = 1$ and $\mu(A \cap B) = 0$ for all distinct $A, B \in P$. (This is a relaxation of the usual conditions for a partition.) The entropy of $P$ is $H_\mu(P) = \sum_{A \in P} h_\mu(A)$.

Let $M$ be a sigma-algebra on $X$. The entropy of $M$ is $H_\mu(M) = \sup_{P \subseteq M} H_\mu(P)$. Finally, the entropy of the probability space is $H_\mu(\Sigma)$, that is, the entropy with respect to $\mu$ of the sigma-algebra $\Sigma$ of all measurable subsets of $X$.

Example

Entropy Η(X) (i.e. the expected surprisal) of a coin flip, measured in bits, graphed versus the bias of the coin Pr(X = 1), where X = 1 represents a result of heads.

Here, the entropy is at most 1 bit, and to communicate the outcome of a coin flip (2 possible values) will require an average of at most 1 bit (exactly 1 bit for a fair coin). The result of a fair die (6 possible values) would have entropy $\log_2 6$, about 2.585, bits.

Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modeled as a Bernoulli process.

The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because $H(X) = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1$ bit.

However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then $H(X) = -0.7\log_2 0.7 - 0.3\log_2 0.3 \approx 0.8816$ bits.

Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.

Characterization

To understand the meaning of −Σ pi log(pi), first define an information function I in terms of an event i with probability pi. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information:

  1. I(p) is monotonically decreasing in p: an increase in the probability of an event decreases the information from an observed event, and vice versa.
  2. I(1) = 0: events that always occur do not communicate information.
  3. I(p1·p2) = I(p1) + I(p2): the information learned from independent events is the sum of the information learned from each event.
  4. I(p) is a twice continuously differentiable function of p.

Given two independent events, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn equiprobable outcomes of the joint event. This means that if log2(n) bits are needed to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both.

Shannon discovered that a suitable choice of $I$ is given by $I(p) = \log(1/p) = -\log(p)$.

In fact, the only possible values of $I$ are $I(u) = k \log u$ for $k < 0$. Additionally, choosing a value for k is equivalent to choosing a value $x > 1$, so that x corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties.

The different units of information (bits for the binary logarithm log2, nats for the natural logarithm ln, bans for the decimal logarithm log10 and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides log2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits.
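Since these units differ only by constant factors, converting between them is a one-line computation; the sketch below assumes one fair coin toss as the example event:

```python
import math

bits = 1.0                    # information in one fair coin toss
nats = bits * math.log(2)     # ~0.693 nats
bans = bits * math.log10(2)   # ~0.301 bans (decimal digits)
print(nats, bans)
```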

The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.

Alternative characterization

Another characterization of entropy uses the following properties. We denote pi = Pr(X = xi) and Ηn(p1, ..., pn) = Η(X).

  1. Continuity: H should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.
  2. Symmetry: H should be unchanged if the outcomes xi are re-ordered. That is, $H_n(p_{i_1}, \ldots, p_{i_n}) = H_n(p_1, \ldots, p_n)$ for any permutation $(i_1, \ldots, i_n)$ of $(1, \ldots, n)$.
  3. Maximum: $H_n$ should be maximal if all the outcomes are equally likely, i.e. $H_n(p_1, \ldots, p_n) \le H_n\!\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right)$.
  4. Increasing number of outcomes: for equiprobable events, the entropy should increase with the number of outcomes, i.e. $H_n\!\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) < H_{n+1}\!\left(\tfrac{1}{n+1}, \ldots, \tfrac{1}{n+1}\right)$.
  5. Additivity: given an ensemble of n uniformly distributed elements that are partitioned into k boxes (sub-systems) with b1, ..., bk elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.

Discussion

The rule of additivity has the following consequences: for positive integers bi where b1 + ... + bk = n, $H_n\!\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right) = H_k\!\left(\tfrac{b_1}{n}, \ldots, \tfrac{b_k}{n}\right) + \sum_{i=1}^{k} \tfrac{b_i}{n}\, H_{b_i}\!\left(\tfrac{1}{b_i}, \ldots, \tfrac{1}{b_i}\right)$.

Choosing k = n, b1 = ... = bn = 1 this implies that the entropy of a certain outcome is zero: Η1(1) = 0. This implies that the efficiency of a source set with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory).

The characterization here imposes an additive property with respect to a partition of a set. Meanwhile, the conditional probability is defined in terms of a multiplicative property, $P(A \mid B) \cdot P(B) = P(A \cap B)$. Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities inherit simple relations, in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisals for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, $\log_2$ lends itself to practical interpretations.

Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a "logic of partitions" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on.

Alternative characterization via additivity and subadditivity

Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties:

  1. Subadditivity: $H(X, Y) \le H(X) + H(Y)$ for jointly distributed random variables $X, Y$.
  2. Additivity: $H(X, Y) = H(X) + H(Y)$ when the random variables $X, Y$ are independent.
  3. Expansibility: $H_{n+1}(p_1, \ldots, p_n, 0) = H_n(p_1, \ldots, p_n)$, i.e., adding an outcome with probability zero does not change the entropy.
  4. Symmetry: $H_n(p_1, \ldots, p_n)$ is invariant under permutation of $p_1, \ldots, p_n$.
  5. Small for small probabilities: $\lim_{q \to 0^+} H_2(1 - q, q) = 0$.

Discussion

It was shown that any function $H$ satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector $(p_1, \ldots, p_n)$.

It is worth noting that if we drop the "small for small probabilities" property, then $H$ must be a non-negative linear combination of the Shannon entropy and the Hartley entropy.

Further properties

The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X:

  • Adding or removing an event with probability zero does not contribute to the entropy: $H_{n+1}(p_1, \ldots, p_n, 0) = H_n(p_1, \ldots, p_n)$.
  • The maximal entropy of an event with n different outcomes is logb(n): it is attained by the uniform probability distribution. That is, uncertainty is maximal when all possible events are equiprobable: $H(X) \le \log_b(n)$.
  • The entropy or the amount of information revealed by evaluating (X,Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as $H(X, Y) = H(X \mid Y) + H(Y) = H(Y \mid X) + H(X)$.
  • If $Y = f(X)$ where $f$ is a function, then $H(f(X) \mid X) = 0$. Applying the previous formula to $H(X, f(X))$ yields $H(X) + H(f(X) \mid X) = H(f(X)) + H(X \mid f(X))$, so $H(f(X)) \le H(X)$: the entropy of a variable can only decrease when the latter is passed through a function.
  • If X and Y are two independent random variables, then knowing the value of Y doesn't influence our knowledge of the value of X (since the two don't influence each other by independence): $H(X \mid Y) = H(X)$.
  • More generally, for any random variables X and Y, we have $H(X \mid Y) \le H(X)$.
  • The entropy of two simultaneous events is no more than the sum of the entropies of each individual event, i.e., $H(X, Y) \le H(X) + H(Y)$, with equality if and only if the two events are independent.
  • The entropy $H(p)$ is concave in the probability mass function $p$, i.e. $H(\lambda p_1 + (1 - \lambda) p_2) \ge \lambda H(p_1) + (1 - \lambda) H(p_2)$ for all probability mass functions $p_1, p_2$ and every $0 \le \lambda \le 1$.
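Several of these properties can be checked numerically. The sketch below verifies subadditivity and the chain rule on a small, invented joint distribution; the specific numbers have no significance beyond illustration:

```python
import math

def H(dist):
    """Shannon entropy, in bits, of a dict mapping outcomes to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical joint distribution p(x, y) and its marginals.
joint = {('a', 0): 0.3, ('a', 1): 0.2, ('b', 0): 0.1, ('b', 1): 0.4}
p_x = {'a': 0.5, 'b': 0.5}
p_y = {0: 0.4, 1: 0.6}

h_xy, h_x, h_y = H(joint), H(p_x), H(p_y)
print(h_xy <= h_x + h_y)       # True: subadditivity H(X,Y) <= H(X) + H(Y)
print(round(h_xy - h_y, 4))    # H(X | Y) via the chain rule H(X,Y) = H(X|Y) + H(Y)
print(h_xy - h_y <= h_x)       # True: conditioning never increases entropy
```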

Aspects

Relationship to thermodynamic entropy

The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics.

In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy $S = -k_\text{B} \sum_i p_i \ln p_i$, where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Ludwig Boltzmann (1872).

The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927, $S = -k_\text{B}\,\mathrm{Tr}(\rho \ln \rho)$, where ρ is the density matrix of the quantum mechanical system and Tr is the trace.

At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant kB indicates, the changes in S / kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.

The connection between thermodynamics and what is now known as information theory was first made by Boltzmann and expressed by his equation $S = k_\text{B} \ln W$,

where $S$ is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), W is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and kB is the Boltzmann constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is pi = 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently kB times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate.

In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: maximum entropy thermodynamics). Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient.

Data compression

Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.

If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but is communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger.

A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.

All figures in entropically compressed exabytes
Type of Information     1986      2007
Storage                 2.6       295
Broadcast               432       1,900
Telecommunications      0.281     65

The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories—to store information on a medium, to receive information through one-way broadcast networks, or to exchange information through two-way telecommunications networks.

Entropy as a measure of diversity

Entropy is one of several ways to measure biodiversity and is applied in the form of the Shannon index. A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of ¹D, the true diversity index with parameter equal to 1. The Shannon index is related to the proportional abundances of types.

Entropy of a sequence

There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message:

  • the self-information of an individual message or symbol taken from a given probability distribution (message or sequence seen as an individual event),
  • the joint entropy of the symbols forming the message or sequence (seen as a set of events),
  • the entropy rate of a stochastic process (message or sequence is seen as a succession of events).

(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.

It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate. Shannon himself used the term in this way.

If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.

The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., F(1) = 1, F(2) = 1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.

Limitations of entropy in cryptography

In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) $2^{127}$ guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called guesswork can be used to measure the effort required for a brute force attack.

Other problems may arise from non-uniform distributions used in cryptography. For example, consider a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.
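A toy Python sketch of this second failure mode is shown below, scaled down to 16 bits instead of 1,000,000; the fixed first pad bit is an assumption of the example, and the leak it causes is exactly the one described above:

```python
import secrets

def one_time_pad(message_bits, pad_bits):
    """Encrypt with exclusive or: ciphertext bit = message bit XOR pad bit."""
    return [m ^ k for m, k in zip(message_bits, pad_bits)]

n = 16                                                    # toy size for illustration
message = [secrets.randbits(1) for _ in range(n)]
pad = [0] + [secrets.randbits(1) for _ in range(n - 1)]   # first pad bit fixed, rest random

ciphertext = one_time_pad(message, pad)
print(ciphertext[0] == message[0])   # True: the first plaintext bit is exposed unchanged
```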

Data as a Markov process

A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independent of the last characters), the binary entropy is $H(\mathcal{S}) = -\sum_i p_i \log_2 p_i$,

where pi is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is $H(\mathcal{S}) = -\sum_i p_i \sum_j p_i(j) \log_2 p_i(j)$,

where i is a state (certain preceding characters) and $p_i(j)$ is the probability of j given i as the previous character.

For a second order Markov source, the entropy rate is $H(\mathcal{S}) = -\sum_i p_i \sum_j p_i(j) \sum_k p_{i,j}(k) \log_2 p_{i,j}(k)$.
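As a sketch of the first-order formula, the following Python code computes the entropy rate of a small, hypothetical two-character Markov source; the transition probabilities and the precomputed stationary distribution are assumptions of the example:

```python
import math

# Hypothetical first-order Markov source over {'a', 'b'}:
# transition[i][j] = probability of the next character j given previous character i.
transition = {'a': {'a': 0.9, 'b': 0.1},
              'b': {'a': 0.5, 'b': 0.5}}
stationary = {'a': 5 / 6, 'b': 1 / 6}   # solves pi = pi * P for this particular chain

# entropy rate = -sum_i p_i * sum_j p_i(j) * log2 p_i(j)
rate = -sum(stationary[i] *
            sum(pij * math.log2(pij) for pij in transition[i].values() if pij > 0)
            for i in transition)
print(round(rate, 4))   # ~0.5575 bits/character, below the order-0 entropy of the marginals
```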

Efficiency (normalized entropy)

A source set with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency: $\eta(X) = \frac{H(X)}{H_{\max}} = \frac{-\sum_{i=1}^{n} p(x_i) \log_b p(x_i)}{\log_b(n)}$.

Applying the basic properties of the logarithm, this quantity can also be expressed as $\eta(X) = \sum_{i=1}^{n} \log_n\!\left(p(x_i)^{-p(x_i)}\right)$.

Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy . Furthermore, the efficiency is indifferent to the choice of (positive) base b, as indicated by the insensitivity within the final logarithm above thereto.
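A short sketch of this ratio, with an illustrative four-symbol distribution, shows that the result does not depend on the base used inside the computation:

```python
import math

def efficiency(probs, base=2):
    """Normalized entropy: H(X) divided by the maximum entropy log(n)."""
    n = len(probs)
    h = -sum(p * math.log(p, base) for p in probs if p > 0)
    return h / math.log(n, base)

dist = [0.7, 0.26, 0.02, 0.02]
print(round(efficiency(dist, base=2), 4))    # ~0.5457
print(round(efficiency(dist, base=10), 4))   # same value: the base cancels in the ratio
```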

Entropy for continuous random variables

Differential entropy

The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support $\mathbb{X}$ on the real line is defined by analogy, using the above form of the entropy as an expectation: $h[f] = \mathbb{E}[-\ln f(X)] = -\int_{\mathbb{X}} f(x) \ln f(x)\, dx$.

This is the differential entropy (or continuous entropy). A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann.

Although the analogy between both functions is suggestive, the following question must be set: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably limiting density of discrete points.

To answer this question, a connection must be established between the two functions, in order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by pn. As the continuous domain is generalized, the width must be made explicit.

To do this, start with a continuous function f discretized into bins of size $\Delta$. By the mean-value theorem there exists a value $x_i$ in each bin such that $f(x_i)\Delta = \int_{i\Delta}^{(i+1)\Delta} f(x)\, dx$, and thus the integral of the function f can be approximated (in the Riemannian sense) by $\int_{-\infty}^{\infty} f(x)\, dx = \lim_{\Delta \to 0} \sum_{i=-\infty}^{\infty} f(x_i)\Delta$, where this limit and "bin size goes to zero" are equivalent.

We will denote $H^{\Delta} := -\sum_{i=-\infty}^{\infty} f(x_i)\Delta \log\bigl(f(x_i)\Delta\bigr)$, and expanding the logarithm, we have $H^{\Delta} = -\sum_{i=-\infty}^{\infty} f(x_i)\Delta \log f(x_i) - \sum_{i=-\infty}^{\infty} f(x_i)\Delta \log \Delta$.

As Δ → 0, we have $\sum_{i=-\infty}^{\infty} f(x_i)\Delta \to \int_{-\infty}^{\infty} f(x)\, dx = 1$ and $\sum_{i=-\infty}^{\infty} f(x_i)\Delta \log f(x_i) \to \int_{-\infty}^{\infty} f(x) \log f(x)\, dx$.

Note that $\log \Delta \to -\infty$ as Δ → 0; this requires a special definition of the differential or continuous entropy: $h[f] = \lim_{\Delta \to 0}\bigl(H^{\Delta} + \log \Delta\bigr) = -\int_{-\infty}^{\infty} f(x) \log f(x)\, dx$,

which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for n → ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).
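The infinite offset can be seen numerically: the sketch below discretizes the uniform density on [0, 1), whose differential entropy is 0 bits, and shows that the discrete Shannon entropy grows without bound while the corrected value $H^{\Delta} + \log_2 \Delta$ stays at the differential entropy; the choice of density is an assumption made for simplicity:

```python
import math

# Uniform density on [0, 1): f(x) = 1, so the differential entropy h[f] is 0 bits.
for n_bins in (4, 64, 1024):
    delta = 1.0 / n_bins
    p = delta                                  # each bin has probability f(x_i) * delta
    discrete_H = -n_bins * p * math.log2(p)    # Shannon entropy of the discretized variable
    print(n_bins, round(discrete_H, 4), round(discrete_H + math.log2(delta), 4))
    # discrete_H grows as 2, 6, 10 bits, but the corrected value stays at 0.0
```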

Limiting density of discrete points

It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when x is a dimensioned variable. f(x) will then have the units of 1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If Δ is some "standard" value of x (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as $H = -\int_{-\infty}^{\infty} f(x) \log\bigl(f(x)\,\Delta\bigr)\, dx$, and the result will be the same for any choice of units for x. In fact, the limit of discrete entropy as the number of bins $N \to \infty$ would also include a term of $\log(N)$, which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.

Relative entropy

Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)m(dx) for some non-negative m-integrable function f with m-integral 1, then the relative entropy can be defined as $D_{\mathrm{KL}}(p \,\|\, m) = \int \log\bigl(f(x)\bigr)\, p(dx) = \int f(x) \log\bigl(f(x)\bigr)\, m(dx)$.

In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure m.
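In the discrete case, with a reference probability distribution in place of the measure m, the relative entropy reduces to the familiar Kullback–Leibler sum. The sketch below evaluates it for an illustrative distribution against a uniform reference:

```python
import math

def kl_divergence(p, m):
    """D_KL(p || m) = sum_x p(x) * log2( p(x) / m(x) ) for discrete distributions."""
    return sum(px * math.log2(px / mx) for px, mx in zip(p, m) if px > 0)

p = [0.7, 0.26, 0.02, 0.02]
m = [0.25, 0.25, 0.25, 0.25]           # uniform reference distribution
print(round(kl_divergence(p, m), 4))   # ~0.9093, which equals log2(4) minus H(p)
```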

Use in number theory

Terence Tao used entropy to make a useful connection trying to solve the Erdős discrepancy problem.

Intuitively, the idea behind the proof was that if there is low information, in terms of Shannon entropy, between consecutive random variables (here the random variable X_H is defined using the Liouville function, a useful mathematical function for studying the distribution of primes, over an interval [n, n+H]), then the sum over that interval could become arbitrarily large. For example, a sequence of +1's (values which X_H could take) has trivially low entropy and its sum would become big. The key insight was that showing a reduction in entropy by non-negligible amounts as H expands, leading in turn to unbounded growth of a mathematical object over this random variable, is equivalent to showing the unbounded growth required by the Erdős discrepancy problem.

The proof is quite involved; it brought together breakthroughs not just in the novel use of Shannon entropy, but also in the use of the Liouville function along with averages of modulated multiplicative functions in short intervals. Proving the result also broke the "parity barrier" for this specific problem.

While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction.

Use in combinatorics

Entropy has become a useful quantity in combinatorics.

Loomis–Whitney inequality

A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset $A \subseteq \mathbb{Z}^d$, we have $|A|^{d-1} \le \prod_{i=1}^{d} |P_i(A)|$, where $P_i$ is the orthogonal projection in the ith coordinate: $P_i(x_1, \ldots, x_d) = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d)$.

The proof follows as a simple corollary of Shearer's inequality: if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then $H[(X_1, \ldots, X_d)] \le \frac{1}{r}\sum_{i=1}^{n} H\bigl[(X_j)_{j \in S_i}\bigr]$, where $(X_j)_{j \in S_i}$ is the Cartesian product of random variables Xj with indexes j in Si (so the dimension of this vector is equal to the size of Si).

We sketch how Loomis–Whitney follows from this: Indeed, let X be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) Η(X) = log|A|, where |A| denotes the cardinality of A. Let Si = {1, 2, ..., i−1, i+1, ..., d}. The range of $(X_j)_{j \in S_i}$ is contained in Pi(A) and hence $H\bigl[(X_j)_{j \in S_i}\bigr] \le \log|P_i(A)|$. Now use this to bound the right side of Shearer's inequality and exponentiate both sides of the resulting inequality to obtain the result.

Approximation to binomial coefficient

For integers 0 < k < n let q = k/n. Then $\frac{2^{nH(q)}}{n+1} \le \binom{n}{k} \le 2^{nH(q)}$, where $H(q) = -q \log_2 q - (1-q)\log_2(1-q)$.

A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately $2^{nH(k/n)}$.
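A quick numerical check of this approximation, for an arbitrarily chosen n and k, is sketched below:

```python
import math

n, k = 100, 30
q = k / n
H_q = -q * math.log2(q) - (1 - q) * math.log2(1 - q)

exact = math.log2(math.comb(n, k))   # log2 of the number of length-100 strings with 30 ones
approx = n * H_q
print(round(exact, 2), round(approx, 2))   # ~84.6 vs ~88.1; the gap is at most log2(n + 1)
```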

Use in machine learning

Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty.

Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node. The information gain in decision trees $IG(Y, X)$, which is equal to the difference between the entropy $H(Y)$ of a target variable $Y$ and the conditional entropy $H(Y \mid X)$ of $Y$ given $X$, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute $X$. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally.
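The sketch below computes this quantity for a made-up binary attribute and a binary target; the class proportions are illustrative, not taken from any real dataset:

```python
import math

def H(probs):
    """Shannon entropy, in bits, of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical split: attribute X takes value 0 with probability 0.6, value 1 with 0.4.
p_y = [0.5, 0.5]                         # prior class distribution of the target Y
p_x = [0.6, 0.4]                         # P(X = 0), P(X = 1)
p_y_given_x = [[5/6, 1/6], [0.0, 1.0]]   # class distribution within each branch

h_y = H(p_y)
h_y_given_x = sum(px * H(branch) for px, branch in zip(p_x, p_y_given_x))
print(round(h_y - h_y_given_x, 4))   # information gain IG(Y, X) of about 0.61 bits
```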

Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior.

Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between ground truth and predicted distributions. In general, cross entropy is a measure of the differences between two datasets similar to the KL divergence (also known as relative entropy).

James Clerk Maxwell

From Wikipedia, the free encyclopedia
 
James Clerk Maxwell
Maxwell, c. 1870s
Born: 13 June 1831, Edinburgh, Scotland
Died: 5 November 1879 (aged 48), Cambridge, England
Resting place: Parton, Dumfries and Galloway (55.006693°N 4.039210°W)
Spouse: Katherine Dewar (m. 1858)
Fields: Physics, Mathematics
Academic advisors: William Hopkins
1st Cavendish Professor of Physics, in office 1871–1879; succeeded by Lord Rayleigh

James Clerk Maxwell FRS FRSE (13 June 1831 – 5 November 1879) was a Scottish physicist and mathematician who was responsible for the classical theory of electromagnetic radiation, which was the first theory to describe electricity, magnetism and light as different manifestations of the same phenomenon. Maxwell's equations for electromagnetism achieved the second great unification in physics, where the first one had been realised by Isaac Newton. Maxwell was also key in the creation of statistical mechanics.

With the publication of "A Dynamical Theory of the Electromagnetic Field" in 1865, Maxwell demonstrated that electric and magnetic fields travel through space as waves moving at the speed of light. He proposed that light is an undulation in the same medium that is the cause of electric and magnetic phenomena. The unification of light and electrical phenomena led to his prediction of the existence of radio waves, and the paper contained his final version of his equations, which he had been working on since 1856. As a result of his equations, and other contributions such as introducing an effective method to deal with network problems and linear conductors, he is regarded as a founder of the modern field of electrical engineering. In 1871, Maxwell became the first Cavendish Professor of Physics, serving until his death in 1879.

Maxwell was the first to derive the Maxwell–Boltzmann distribution, a statistical means of describing aspects of the kinetic theory of gases, which he worked on sporadically throughout his career. He is also known for presenting the first durable colour photograph in 1861, and showed that any colour can be produced with a mixture of any three primary colours, those being red, green, and blue, the basis for colour television. He also worked on analysing the rigidity of rod-and-joint frameworks (trusses) like those in many bridges. He devised modern dimensional analysis and helped to establish the CGS system of measurement. He is credited with being the first to understand chaos, and the first to emphasize the butterfly effect. He correctly proposed that the rings of Saturn were made up of many unattached small fragments. His 1868 paper On Governors serves as an important foundation for control theory and cybernetics, and was also the earliest mathematical analysis on control systems. In 1867, he proposed the thought experiment known as Maxwell's demon. In his seminal 1867 paper On the Dynamical Theory of Gases he introduced the Maxwell model for describing the behavior of a viscoelastic material and originated the Maxwell–Cattaneo equation for describing the transport of heat in a medium.

His discoveries helped usher in the era of modern physics, laying the foundations for such fields as relativity, also being the one to introduce the term into physics, and quantum mechanics. Many physicists regard Maxwell as the 19th-century scientist having the greatest influence on 20th-century physics. His contributions to the science are considered by many to be of the same magnitude as those of Isaac Newton and Albert Einstein. On the centenary of Maxwell's birthday, his work was described by Einstein as the "most profound and the most fruitful that physics has experienced since the time of Newton". When Einstein visited the University of Cambridge in 1922, he was told by his host that he had done great things because he stood on Newton's shoulders; Einstein replied: "No I don't. I stand on the shoulders of Maxwell." Tom Siegfried described Maxwell as "one of those once-in-a-century geniuses who perceived the physical world with sharper senses than those around him".

Life

Early life, 1831–1839

Clerk Maxwell's birthplace at 14 India Street in Edinburgh is now the home of the James Clerk Maxwell Foundation.

James Clerk Maxwell was born on 13 June 1831 at 14 India Street, Edinburgh, to John Clerk Maxwell of Middlebie, an advocate, and Frances Cay, daughter of Robert Hodshon Cay and sister of John Cay. (His birthplace now houses a museum operated by the James Clerk Maxwell Foundation.) His father was a man of comfortable means of the Clerk family of Penicuik, holders of the baronetcy of Clerk of Penicuik. His father's brother was the 6th baronet. He had been born "John Clerk", adding "Maxwell" to his own after he inherited (as an infant in 1793) the Middlebie estate, a Maxwell property in Dumfriesshire. James was a first cousin of both the artist Jemima Blackburn (the daughter of his father's sister) and the civil engineer William Dyce Cay (the son of his mother's brother). Cay and Maxwell were close friends and Cay acted as his best man when Maxwell married.

Maxwell's parents met and married when they were well into their thirties; his mother was nearly 40 when he was born. They had had one earlier child, a daughter named Elizabeth, who died in infancy.

When Maxwell was young his family moved to Glenlair, in Kirkcudbrightshire, which his parents had built on the estate which comprised 1,500 acres (610 ha). All indications suggest that Maxwell had maintained an unquenchable curiosity from an early age. By the age of three, everything that moved, shone, or made a noise drew the question: "what's the go o' that?" In a passage added to a letter from his father to his sister-in-law Jane Cay in 1834, his mother described this innate sense of inquisitiveness:

He is a very happy man, and has improved much since the weather got moderate; he has great work with doors, locks, keys, etc., and "show me how it doos" is never out of his mouth. He also investigates the hidden course of streams and bell-wires, the way the water gets from the pond through the wall....

Education, 1839–1847

Recognising the boy's potential, Maxwell's mother Frances took responsibility for his early education, which in the Victorian era was largely the job of the woman of the house. At eight he could recite long passages of John Milton and the whole of the 119th psalm (176 verses). Indeed, his knowledge of scripture was already detailed; he could give chapter and verse for almost any quotation from the Psalms. His mother was taken ill with abdominal cancer and, after an unsuccessful operation, died in December 1839 when he was eight years old. His education was then overseen by his father and his father's sister-in-law Jane, both of whom played pivotal roles in his life. His formal schooling began unsuccessfully under the guidance of a 16-year-old hired tutor. Little is known about the young man hired to instruct Maxwell, except that he treated the younger boy harshly, chiding him for being slow and wayward. The tutor was dismissed in November 1841. James' father took him to Robert Davidson's demonstration of electric propulsion and magnetic force on 12 February 1842, an experience with profound implications for the boy.

In 1841, at age ten, Maxwell was sent to the prestigious Edinburgh Academy. He lodged during term times at the house of his aunt Isabella. During this time his passion for drawing was encouraged by his older cousin Jemima. The young Maxwell, having been raised in isolation on his father's countryside estate, did not fit in well at school. The first year had been full, obliging him to join the second year with classmates a year his senior. His mannerisms and Galloway accent struck the other boys as rustic. Having arrived on his first day of school wearing a pair of homemade shoes and a tunic, he earned the unkind nickname of "Daftie". He never seemed to resent the epithet, bearing it without complaint for many years. Social isolation at the Academy ended when he met Lewis Campbell and Peter Guthrie Tait, two boys of a similar age who were to become notable scholars later in life. They remained lifelong friends.

Maxwell was fascinated by geometry at an early age, rediscovering the regular polyhedra before he received any formal instruction. Despite his winning the school's scripture biography prize in his second year, his academic work remained unnoticed until, at the age of 13, he won the school's mathematical medal and first prize for both English and poetry.

Maxwell's interests ranged far beyond the school syllabus and he did not pay particular attention to examination performance. He wrote his first scientific paper at the age of 14. In it, he described a mechanical means of drawing mathematical curves with a piece of twine, and the properties of ellipses, Cartesian ovals, and related curves with more than two foci. The work, of 1846, "On the description of oval curves and those having a plurality of foci" was presented to the Royal Society of Edinburgh by James Forbes, a professor of natural philosophy at the University of Edinburgh, because Maxwell was deemed too young to present the work himself. The work was not entirely original, since René Descartes had also examined the properties of such multifocal ellipses in the 17th century, but Maxwell had simplified their construction.

University of Edinburgh, 1847–1850

Old College, University of Edinburgh

Maxwell left the Academy in 1847 at age 16 and began attending classes at the University of Edinburgh. He had the opportunity to attend the University of Cambridge, but decided, after his first term, to complete the full course of his undergraduate studies at Edinburgh. The academic staff of the university included some highly regarded names; his first-year tutors included Sir William Hamilton, who lectured him on logic and metaphysics, Philip Kelland on mathematics, and James Forbes on natural philosophy. He did not find his classes demanding, and was, therefore, able to immerse himself in private study during free time at the university and particularly when back home at Glenlair. There he would experiment with improvised chemical, electric, and magnetic apparatus; however, his chief concerns regarded the properties of polarised light. He constructed shaped blocks of gelatine, subjected them to various stresses, and with a pair of polarising prisms given to him by William Nicol, viewed the coloured fringes that had developed within the jelly. Through this practice he discovered photoelasticity, which is a means of determining the stress distribution within physical structures.

At age 18, Maxwell contributed two papers for the Transactions of the Royal Society of Edinburgh. One of these, "On the Equilibrium of Elastic Solids", laid the foundation for an important discovery later in his life, which was the temporary double refraction produced in viscous liquids by shear stress. His other paper was "Rolling Curves" and, just as with the paper "Oval Curves" that he had written at the Edinburgh Academy, he was again considered too young to stand at the rostrum to present it himself. The paper was delivered to the Royal Society by his tutor Kelland instead.

University of Cambridge, 1850–1856

A young Maxwell at Trinity College, Cambridge, holding one of his colour wheels

In October 1850, already an accomplished mathematician, Maxwell left Scotland for the University of Cambridge. He initially attended Peterhouse, but before the end of his first term transferred to Trinity, where he believed it would be easier to obtain a fellowship. At Trinity he was elected to the elite secret society known as the Cambridge Apostles. Maxwell's intellectual understanding of his Christian faith and of science grew rapidly during his Cambridge years. He joined the "Apostles", an exclusive debating society of the intellectual elite, where through his essays he sought to work out this understanding.

Now my great plan, which was conceived of old, ... is to let nothing be wilfully left unexamined. Nothing is to be holy ground consecrated to Stationary Faith, whether positive or negative. All fallow land is to be ploughed up and a regular system of rotation followed. ... Never hide anything, be it weed or no, nor seem to wish it hidden. ... Again I assert the Right of Trespass on any plot of Holy Ground which any man has set apart. ... Now I am convinced that no one but a Christian can actually purge his land of these holy spots. ... I do not say that no Christians have enclosed places of this sort. Many have a great deal, and every one has some. But there are extensive and important tracts in the territory of the Scoffer, the Pantheist, the Quietist, Formalist, Dogmatist, Sensualist, and the rest, which are openly and solemnly Tabooed. ..."

Christianity—that is, the religion of the Bible—is the only scheme or form of belief which disavows any possessions on such a tenure. Here alone all is free. You may fly to the ends of the world and find no God but the Author of Salvation. You may search the Scriptures and not find a text to stop you in your explorations. ...

The Old Testament and the Mosaic Law and Judaism are commonly supposed to be "Tabooed" by the orthodox. Sceptics pretend to have read them and have found certain witty objections ... which too many of the orthodox unread admit, and shut up the subject as haunted. But a Candle is coming to drive out all Ghosts and Bugbears. Let us follow the light.

In the summer of his third year, Maxwell spent some time at the Suffolk home of the Rev. C. B. Tayler, the uncle of a classmate, G. W. H. Tayler. The love of God shown by the family impressed Maxwell, particularly after he was nursed back from ill health by the minister and his wife.

On his return to Cambridge, Maxwell writes to his recent host a chatty and affectionate letter including the following testimony,

... I have the capacity of being more wicked than any example that man could set me, and ... if I escape, it is only by God's grace helping me to get rid of myself, partially in science, more completely in society, —but not perfectly except by committing myself to God ...

In November 1851, Maxwell studied under William Hopkins, whose success in nurturing mathematical genius had earned him the nickname of "senior wrangler-maker".

In 1854, Maxwell graduated from Trinity with a degree in mathematics. He scored second highest in the final examination, coming behind Edward Routh and earning himself the title of Second Wrangler. He was later declared equal with Routh in the more exacting ordeal of the Smith's Prize examination. Immediately after earning his degree, Maxwell read his paper "On the Transformation of Surfaces by Bending" to the Cambridge Philosophical Society. This is one of the few purely mathematical papers he had written, demonstrating his growing stature as a mathematician. Maxwell decided to remain at Trinity after graduating and applied for a fellowship, which was a process that he could expect to take a couple of years. Buoyed by his success as a research student, he would be free, apart from some tutoring and examining duties, to pursue scientific interests at his own leisure.

The nature and perception of colour was one such interest which he had begun at the University of Edinburgh while he was a student of Forbes. With the coloured spinning tops invented by Forbes, Maxwell was able to demonstrate that white light would result from a mixture of red, green, and blue light. His paper "Experiments on Colour" laid out the principles of colour combination and was presented to the Royal Society of Edinburgh in March 1855. Maxwell was this time able to deliver it himself.

Maxwell was made a fellow of Trinity on 10 October 1855, sooner than was the norm, and was asked to prepare lectures on hydrostatics and optics and to set examination papers. The following February he was urged by Forbes to apply for the newly vacant Chair of Natural Philosophy at Marischal College, Aberdeen. His father assisted him in the task of preparing the necessary references, but died on 2 April at Glenlair before either knew the result of Maxwell's candidacy. He accepted the professorship at Aberdeen, leaving Cambridge in November 1856.

Marischal College, Aberdeen, 1856–1860

Maxwell proved that the rings of Saturn were made of numerous small particles.

The 25-year-old Maxwell was a good 15 years younger than any other professor at Marischal. He engaged himself with his new responsibilities as head of a department, devising the syllabus and preparing lectures. He committed himself to lecturing 15 hours a week, including a weekly pro bono lecture to the local working men's college. He lived in Aberdeen with his cousin William Dyce Cay, a Scottish civil engineer, during the six months of the academic year and spent the summers at Glenlair, which he had inherited from his father.

Later, his former student described Maxwell as follows:

In the late 1850s shortly before 9 am any winter's morning you might well have seen the young James Clerk Maxwell, in his mid to late 20s, a man of middling height, with frame strongly knit, and a certain spring and elasticity in his gait; dressed for comfortable ease rather than elegance; a face expressive at once of sagacity and good humour, but overlaid with a deep shade of thoughtfulness; features boldly but pleasingly marked; eyes dark and glowing; hair and beard perfectly black, and forming a strong contrast to the pallor of his complexion.

James Clerk Maxwell and his wife, painted by Jemima Blackburn

He focused his attention on a problem that had eluded scientists for 200 years: the nature of Saturn's rings. It was unknown how they could remain stable without breaking up, drifting away or crashing into Saturn. The problem took on a particular resonance at that time because St John's College, Cambridge, had chosen it as the topic for the 1857 Adams Prize. Maxwell devoted two years to studying the problem, proving that a regular solid ring could not be stable, while a fluid ring would be forced by wave action to break up into blobs. Since neither was observed, he concluded that the rings must be composed of numerous small particles he called "brick-bats", each independently orbiting Saturn. Maxwell was awarded the £130 Adams Prize in 1859 for his essay "On the stability of the motion of Saturn's rings"; he was the only entrant to have made enough headway to submit an entry. His work was so detailed and convincing that when George Biddell Airy read it he commented, "It is one of the most remarkable applications of mathematics to physics that I have ever seen." It was considered the final word on the issue until direct observations by the Voyager flybys of the 1980s confirmed Maxwell's prediction that the rings were composed of particles. It is now understood, however, that the rings' particles are not totally stable, being pulled by gravity onto Saturn. The rings are expected to vanish entirely over the next 300 million years.

In 1857 Maxwell befriended the Reverend Daniel Dewar, who was then the Principal of Marischal. Through him Maxwell met Dewar's daughter, Katherine Mary Dewar. They were engaged in February 1858 and married in Aberdeen on 2 June 1858. On the marriage record, Maxwell is listed as Professor of Natural Philosophy in Marischal College, Aberdeen. Katherine was seven years Maxwell's senior. Comparatively little is known of her, although it is known that she helped in his lab and worked on experiments in viscosity. Maxwell's biographer and friend, Lewis Campbell, adopted an uncharacteristic reticence on the subject of Katherine, though describing their married life as "one of unexampled devotion".

In 1860 Marischal College merged with the neighbouring King's College to form the University of Aberdeen. There was no room for two professors of Natural Philosophy, so Maxwell, despite his scientific reputation, found himself laid off. He was unsuccessful in applying for Forbes's recently vacated chair at Edinburgh, the post going instead to Tait. Maxwell was granted the Chair of Natural Philosophy at King's College, London. After recovering from a near-fatal bout of smallpox in 1860, he moved to London with his wife.

King's College, London, 1860–1865

Commemoration of Maxwell's equations at King's College. Two identical IEEE Milestone Plaques are at Maxwell's birthplace in Edinburgh and the family home at Glenlair.

Maxwell's time at King's was probably the most productive of his career. He was awarded the Royal Society's Rumford Medal in 1860 for his work on colour and was later elected to the Society in 1861. This period of his life would see him display the world's first light-fast colour photograph, further develop his ideas on the viscosity of gases, and propose a system of defining physical quantities—now known as dimensional analysis. Maxwell would often attend lectures at the Royal Institution, where he came into regular contact with Michael Faraday. The relationship between the two men could not be described as close, because Faraday was 40 years Maxwell's senior and showed signs of senility. They nevertheless maintained a strong respect for each other's talents.

This period is especially noteworthy for the advances Maxwell made in the fields of electricity and magnetism. He examined the nature of both electric and magnetic fields in his two-part paper "On physical lines of force", which was published in 1861. In it, he provided a conceptual model for electromagnetic induction, consisting of tiny spinning cells of magnetic flux. Two further parts of the paper were added and published in early 1862. In the first additional part, he discussed the nature of electrostatics and displacement current. In the second additional part, he dealt with the rotation of the plane of polarisation of light in a magnetic field, a phenomenon that had been discovered by Faraday and is now known as the Faraday effect.

Later years, 1865–1879

In 1865 Maxwell resigned the chair at King's College, London, and returned to Glenlair with Katherine. In his paper "On governors" (1868) he mathematically described the behaviour of governors—devices that control the speed of steam engines—thereby establishing the theoretical basis of control engineering. In his paper "On reciprocal figures, frames and diagrams of forces" (1870) he discussed the rigidity of various designs of lattice. He wrote the textbook Theory of Heat (1871) and the treatise Matter and Motion (1876). Maxwell was also the first to make explicit use of dimensional analysis, in 1871.
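As an illustration of the method (the pendulum example here is a standard textbook application, not one taken from Maxwell's own papers): dimensional analysis fixes the form of a physical relation by requiring both sides to carry the same dimensions. For a simple pendulum of length l in a gravitational field g, the only combination of l and g with the dimension of time is \(\sqrt{l/g}\), so the period must satisfy

\[
T = C\,\sqrt{\frac{l}{g}}, \qquad \left[\sqrt{\frac{l}{g}}\right] = \sqrt{\frac{\mathrm{L}}{\mathrm{L}\,\mathrm{T}^{-2}}} = \mathrm{T},
\]

where C is a dimensionless constant (equal to 2\(\pi\) for small oscillations) that the method itself cannot supply.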

Maxwell has been credited as the first to grasp the concept of chaos, recognising the significance of systems that exhibit "sensitive dependence on initial conditions". He also drew attention, in two separate discussions during the 1870s, to the phenomenon now popularly known as the "butterfly effect".

In 1871 he returned to Cambridge to become the first Cavendish Professor of Physics. Maxwell was put in charge of the development of the Cavendish Laboratory, supervising every step in the progress of the building and of the purchase of the collection of apparatus. One of Maxwell's last great contributions to science was the editing (with copious original notes) of the research of Henry Cavendish, from which it appeared that Cavendish researched, amongst other things, such questions as the density of the Earth and the composition of water. He was elected as a member to the American Philosophical Society in 1876.

Death

The gravestone at Parton Kirk (Galloway) of James Clerk Maxwell, his parents and his wife

In April 1879 Maxwell began to have difficulty in swallowing, the first symptom of his fatal illness.

Maxwell died in Cambridge of abdominal cancer on 5 November 1879 at the age of 48. His mother had died at the same age of the same type of cancer. The minister who regularly visited him in his last weeks was astonished at his lucidity and the immense power and scope of his memory, but commented more particularly,

... his illness drew out the whole heart and soul and spirit of the man: his firm and undoubting faith in the Incarnation and all its results; in the full sufficiency of the Atonement; in the work of the Holy Spirit. He had gauged and fathomed all the schemes and systems of philosophy, and had found them utterly empty and unsatisfying—"unworkable" was his own word about them—and he turned with simple faith to the Gospel of the Saviour.

As death approached Maxwell told a Cambridge colleague,

I have been thinking how very gently I have always been dealt with. I have never had a violent shove all my life. The only desire which I can have is like David to serve my own generation by the will of God, and then fall asleep.

Maxwell is buried at Parton Kirk, near Castle Douglas in Galloway close to where he grew up. The extended biography The Life of James Clerk Maxwell, by his former schoolfellow and lifelong friend Professor Lewis Campbell, was published in 1882. His collected works were issued in two volumes by the Cambridge University Press in 1890.

The executors of Maxwell's estate were his physician George Edward Paget, G. G. Stokes, and Colin Mackenzie, who was Maxwell's cousin. Overburdened with work, Stokes passed Maxwell's papers to William Garnett, who had effective custody of the papers until about 1884.

There is a memorial inscription to him near the choir screen at Westminster Abbey.

Personal life

James Clerk Maxwell, painted by Jemima Blackburn

As a great lover of Scottish poetry, Maxwell memorised poems and wrote his own. The best known is "Rigid Body Sings", closely based on "Comin' Through the Rye" by Robert Burns, which he apparently used to sing while accompanying himself on a guitar. It has the opening lines

Gin a body meet a body
Flyin' through the air.
Gin a body hit a body,
Will it fly? And where?

A collection of his poems was published by his friend Lewis Campbell in 1882.

Descriptions of Maxwell note that his exceptional intellectual abilities were matched by social awkwardness.

Maxwell wrote the following aphorism for his own conduct as a scientist:

He that would enjoy life and act with freedom must have the work of the day continually before his eyes. Not yesterday's work, lest he fall into despair, not to-morrow's, lest he become a visionary—not that which ends with the day, which is a worldly work, nor yet that only which remains to eternity, for by it he cannot shape his action. Happy is the man who can recognize in the work of to-day a connected portion of the work of life, and an embodiment of the work of eternity. The foundations of his confidence are unchangeable, for he has been made a partaker of Infinity. He strenuously works out his daily enterprises, because the present is given him for a possession.

Maxwell was an evangelical Presbyterian and in his later years became an Elder of the Church of Scotland. Maxwell's religious beliefs and related activities have been the focus of a number of papers. Attending both Church of Scotland (his father's denomination) and Episcopalian (his mother's denomination) services as a child, Maxwell underwent an evangelical conversion in April 1853. One facet of this conversion may have aligned him with an antipositivist position.

Scientific legacy

Recognition

In a survey of the 100 most prominent physicists conducted by Physics World, Maxwell was voted the third greatest physicist of all time, behind only Newton and Einstein. A separate PhysicsWeb poll of rank-and-file physicists also ranked him third.

Electromagnetism

A postcard from Maxwell to Peter Tait

Maxwell had studied and commented on electricity and magnetism as early as 1855 when his paper "On Faraday's lines of force" was read to the Cambridge Philosophical Society. The paper presented a simplified model of Faraday's work and how electricity and magnetism are related. He reduced all of the current knowledge into a linked set of differential equations with 20 equations in 20 variables. This work was later published as "On Physical Lines of Force" in March 1861.

Around 1862, while lecturing at King's College, Maxwell calculated that the speed of propagation of an electromagnetic field is approximately equal to the speed of light. He considered this to be more than just a coincidence, commenting, "We can scarcely avoid the conclusion that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena."

Working on the problem further, Maxwell showed that the equations predict the existence of waves of oscillating electric and magnetic fields that travel through empty space at a speed that could be predicted from simple electrical experiments; using the data available at the time, Maxwell obtained a velocity of 310,740,000 metres per second (1.0195×10⁹ ft/s). In his 1865 paper "A Dynamical Theory of the Electromagnetic Field", Maxwell wrote, "The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws".
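In modern notation the same conclusion follows from the wave speed implied by the equations, \(c = 1/\sqrt{\mu_0\varepsilon_0}\). A minimal Python sketch of the check, using present-day constant values rather than the 1860s measurements Maxwell worked from:

import math

mu_0 = 4 * math.pi * 1e-7      # vacuum permeability in H/m (classical defined value)
epsilon_0 = 8.8541878128e-12   # vacuum permittivity in F/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"Electromagnetic wave speed: {c:,.0f} m/s")  # about 299,792,458 m/s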

His famous twenty equations, in their modern form of partial differential equations, first appeared in fully developed form in his textbook A Treatise on Electricity and Magnetism in 1873. Most of this work was done by Maxwell at Glenlair during the period between holding his London post and his taking up the Cavendish chair. Oliver Heaviside reduced the complexity of Maxwell's theory down to four partial differential equations, known now collectively as Maxwell's Laws or Maxwell's equations. Although potentials became much less popular in the nineteenth century, the use of scalar and vector potentials is now standard in the solution of Maxwell's equations. His work achieved the second great unification in physics.
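In Heaviside's vector notation, the four equations for the fields in vacuum, with charge density \(\rho\) and current density \(\mathbf{J}\), read

\[
\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0},\qquad
\nabla\cdot\mathbf{B} = 0,\qquad
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t},\qquad
\nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}.
\]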

As Barrett and Grimes (1995) describe:

Maxwell expressed electromagnetism in the algebra of quaternions and made the electromagnetic potential the centerpiece of his theory. In 1881 Heaviside replaced the electromagnetic potential field by force fields as the centerpiece of electromagnetic theory. According to Heaviside, the electromagnetic potential field was arbitrary and needed to be "assassinated". (sic) A few years later there was a debate between Heaviside and [Peter Guthrie] Tate (sic) about the relative merits of vector analysis and quaternions. The result was the realization that there was no need for the greater physical insights provided by quaternions if the theory was purely local, and vector analysis became commonplace.

Maxwell was proved correct, and his quantitative connection between light and electromagnetism is considered one of the great accomplishments of 19th-century mathematical physics.

Maxwell also introduced the concept of the electromagnetic field in comparison to force lines that Faraday described. By understanding the propagation of electromagnetism as a field emitted by active particles, Maxwell could advance his work on light. At that time, Maxwell believed that the propagation of light required a medium for the waves, dubbed the luminiferous aether. Over time, the existence of such a medium, permeating all space and yet apparently undetectable by mechanical means, proved impossible to reconcile with experiments such as the Michelson–Morley experiment. Moreover, it seemed to require an absolute frame of reference in which the equations were valid, with the distasteful result that the equations changed form for a moving observer. These difficulties inspired Albert Einstein to formulate the theory of special relativity; in the process, Einstein dispensed with the requirement of a stationary luminiferous aether.

Einstein acknowledged the groundbreaking work of Maxwell, stating that:

One scientific epoch ended and another began with James Clerk Maxwell.

He also acknowledged the influence that his work had on his relativity theory:

The special theory of relativity owes its origins to Maxwell's equations of the electromagnetic field.

Colour vision

First durable colour photographic image, demonstrated by Maxwell in an 1861 lecture

Along with most physicists of the time, Maxwell had a strong interest in psychology. Following in the steps of Isaac Newton and Thomas Young, he was particularly interested in the study of colour vision. From 1855 to 1872, Maxwell published at intervals a series of investigations concerning the perception of colour, colour-blindness, and colour theory, and was awarded the Rumford Medal for "On the Theory of Colour Vision".

Isaac Newton had demonstrated, using prisms, that white light such as sunlight is composed of a number of monochromatic components which can be recombined into white light. Newton also showed that an orange paint made of yellow and red could look exactly like a monochromatic orange light, despite being composed of two monochromatic lights, yellow and red. Hence the paradox that puzzled physicists of the time: two complex lights (composed of more than one monochromatic light) could look alike yet be physically different; such lights are called metamers. Thomas Young later proposed that this paradox could be explained by colours being perceived through a limited number of channels in the eye, which he proposed to be three, the trichromatic colour theory. Maxwell used the then recently developed methods of linear algebra to prove Young's theory: the response of the three receptors to any monochromatic light can be matched by a suitable mixture of three different monochromatic lights (in fact, by any set of three different lights). He demonstrated that to be the case, inventing colour-matching experiments and colourimetry.
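In modern terms the matching argument is a small linear-algebra problem: if each of three primary lights excites the three receptor types by amounts given by the columns of a 3×3 matrix, then reproducing the receptor response to a target light means solving a linear system for the three primary intensities. A minimal Python sketch with purely illustrative numbers (not Maxwell's measurements):

import numpy as np

# Columns: receptor excitations (long-, medium-, short-wavelength) produced by
# one unit of each of three primary lights. Illustrative values only.
primaries = np.array([
    [0.90, 0.30, 0.02],
    [0.40, 0.80, 0.10],
    [0.05, 0.10, 0.95],
])

# Receptor excitation produced by the target light to be matched.
target = np.array([0.60, 0.50, 0.30])

# Intensities of the primaries giving the same receptor excitation.
weights = np.linalg.solve(primaries, target)
print("Primary intensities:", weights)

A negative weight simply means that primary must be added to the target side of the match, exactly as in real colour-matching experiments.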

Maxwell was also interested in applying his theory of colour perception, namely to colour photography. The idea stemmed directly from his work on colour perception: if a mixture of three suitably chosen lights could reproduce any perceivable colour, then colour photographs could be produced with a set of three coloured filters. In the course of his 1855 paper, Maxwell proposed that, if three black-and-white photographs of a scene were taken through red, green, and blue filters, and transparent prints of the images were projected onto a screen using three projectors equipped with similar filters, the superimposed result would be perceived by the human eye as a complete reproduction of all the colours in the scene.
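The same principle underlies how digital colour images are assembled today: three intensity maps, one per filter, are stacked into a single RGB image. A minimal sketch, assuming three equally sized grayscale arrays (the plate names here are hypothetical):

import numpy as np

def combine_three_colour(red_plate, green_plate, blue_plate):
    """Stack three grayscale exposures (values in [0, 1]) into one RGB image."""
    return np.stack([red_plate, green_plate, blue_plate], axis=-1)

# Stand-in "plates" for demonstration purposes.
rng = np.random.default_rng(0)
red_plate, green_plate, blue_plate = (rng.random((4, 4)) for _ in range(3))
rgb = combine_three_colour(red_plate, green_plate, blue_plate)
print(rgb.shape)  # (4, 4, 3)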

During an 1861 Royal Institution lecture on colour theory, Maxwell presented the world's first demonstration of colour photography by this principle of three-colour analysis and synthesis. Thomas Sutton, inventor of the single-lens reflex camera, took the picture. He photographed a tartan ribbon three times, through red, green, and blue filters, also making a fourth photograph through a yellow filter, which, according to Maxwell's account, was not used in the demonstration. Because Sutton's photographic plates were insensitive to red and barely sensitive to green, the results of this pioneering experiment were far from perfect. It was remarked in the published account of the lecture that "if the red and green images had been as fully photographed as the blue", it "would have been a truly-coloured image of the riband. By finding photographic materials more sensitive to the less refrangible rays, the representation of the colours of objects might be greatly improved." Researchers in 1961 concluded that the seemingly impossible partial success of the red-filtered exposure was due to ultraviolet light, which is strongly reflected by some red dyes, not entirely blocked by the red filter used, and within the range of sensitivity of the wet collodion process Sutton employed.

Kinetic theory and thermodynamics

Maxwell's demon, a thought experiment where entropy decreases

Maxwell also investigated the kinetic theory of gases. Originating with Daniel Bernoulli, this theory was advanced by the successive labours of John Herapath, John James Waterston, James Joule, and particularly Rudolf Clausius, to such an extent as to put its general accuracy beyond a doubt; but it received enormous development from Maxwell, who in this field appeared as an experimenter (on the laws of gaseous friction) as well as a mathematician.

Between 1859 and 1866, he developed the theory of the distributions of velocities in particles of a gas, work later generalised by Ludwig Boltzmann. The formula, called the Maxwell–Boltzmann distribution, gives the fraction of gas molecules moving at a specified velocity at any given temperature. In the kinetic theory, temperatures and heat involve only molecular movement. This approach generalised the previously established laws of thermodynamics and explained existing observations and experiments in a better way than had been achieved previously. His work on thermodynamics led him to devise the thought experiment that came to be known as Maxwell's demon, where the second law of thermodynamics is violated by an imaginary being capable of sorting particles by energy.
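In modern notation, the Maxwell–Boltzmann speed distribution for molecules of mass m at absolute temperature T is

\[
f(v) = 4\pi \left(\frac{m}{2\pi k_{\mathrm B} T}\right)^{3/2} v^{2}\, e^{-\,m v^{2}/(2 k_{\mathrm B} T)},
\]

where \(k_{\mathrm B}\) is the Boltzmann constant; \(f(v)\,dv\) is the fraction of molecules with speeds between v and v + dv, and the most probable speed is \(\sqrt{2 k_{\mathrm B} T/m}\).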

In 1871, he established Maxwell's thermodynamic relations, which are statements of equality among the second derivatives of the thermodynamic potentials with respect to different thermodynamic variables. In 1874, he constructed a plaster thermodynamic visualisation as a way of exploring phase transitions, based on the American scientist Josiah Willard Gibbs's graphical thermodynamics papers.
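The four relations express the equality of mixed second derivatives of the internal energy, enthalpy, Helmholtz and Gibbs potentials; in standard notation,

\[
\left(\frac{\partial T}{\partial V}\right)_{\!S} = -\left(\frac{\partial P}{\partial S}\right)_{\!V},\quad
\left(\frac{\partial T}{\partial P}\right)_{\!S} = \left(\frac{\partial V}{\partial S}\right)_{\!P},\quad
\left(\frac{\partial S}{\partial V}\right)_{\!T} = \left(\frac{\partial P}{\partial T}\right)_{\!V},\quad
\left(\frac{\partial S}{\partial P}\right)_{\!T} = -\left(\frac{\partial V}{\partial T}\right)_{\!P}.
\]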

In his 1867 paper "On the Dynamical Theory of Gases" he introduced the Maxwell model for describing the behaviour of a viscoelastic material and originated the Maxwell–Cattaneo equation for describing the transport of heat in a medium.
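The Maxwell model pictures a viscoelastic material as an elastic spring (modulus E) in series with a viscous dashpot (viscosity \(\eta\)), so the total strain rate is the sum of the elastic and viscous parts, and stress held at constant strain relaxes exponentially with time constant \(\tau = \eta/E\):

\[
\frac{d\varepsilon}{dt} = \frac{1}{E}\frac{d\sigma}{dt} + \frac{\sigma}{\eta}.
\]

The Maxwell–Cattaneo equation modifies Fourier's law of heat conduction in an analogous way, \(\tau\,\partial\mathbf{q}/\partial t + \mathbf{q} = -k\,\nabla T\), so that heat propagates at a finite speed rather than instantaneously.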

Peter Guthrie Tait called Maxwell the "leading molecular scientist" of his time. Another person added after Maxwell's death that "only one man lived who could understand Gibbs's papers. That was Maxwell, and now he is dead."

Control theory

Maxwell published the paper "On governors" in the Proceedings of the Royal Society, vol. 16 (1867–1868). It is considered a central paper of the early days of control theory. Here "governors" refers to the centrifugal governor used to regulate steam engines.
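In modern terms the paper asks whether small deviations of the regulated speed die away or grow. Writing the linearised dynamics of engine plus governor as a differential equation with a characteristic polynomial, say the third-order case

\[
a_3 s^{3} + a_2 s^{2} + a_1 s + a_0 = 0, \qquad a_i > 0,
\]

the motion is stable exactly when every root has a negative real part, which for this cubic reduces to \(a_2 a_1 > a_3 a_0\). Conditions of this kind are what Maxwell derived for the governors he analysed, and they were later generalised by Routh and Hurwitz.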
