The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge is the one with largest entropy, in the context of precisely stated prior data (such as a proposition that expresses testable information).
Another way of stating this: Take precisely stated prior data or
testable information about a probability distribution function.
Consider the set of all trial probability distributions that would
encode the prior data. According to this principle, the distribution
with maximal information entropy is the best choice.
History
The principle was first expounded by E. T. Jaynes in two papers in 1957 where he emphasized a natural correspondence between statistical mechanics and information theory.
In particular, Jaynes offered a new and very general rationale for why the
Gibbsian method of statistical mechanics works. He argued that the entropy of statistical mechanics and the information entropy of information theory are basically the same thing. Consequently, statistical mechanics should be seen just as a particular application of a general tool of logical inference and information theory.
Overview
In most practical cases, the stated prior data or testable information is given by a set of conserved quantities (average values of some moment functions), associated with the probability distribution in question. This is the way the maximum entropy principle is most often used in statistical thermodynamics. Another possibility is to prescribe some symmetries of the probability distribution. The equivalence between conserved quantities and corresponding symmetry groups implies a similar equivalence for these two ways of specifying the testable information in the maximum entropy method.
The maximum entropy principle is also needed to guarantee the
uniqueness and consistency of probability assignments obtained by
different methods, statistical mechanics and logical inference in particular.
The maximum entropy principle makes explicit our freedom in using different forms of prior data. As a special case, a uniform prior probability density (Laplace's principle of indifference,
sometimes called the principle of insufficient reason), may be adopted.
Thus, the maximum entropy principle is not merely an alternative way to
view the usual methods of inference of classical statistics, but
represents a significant conceptual generalization of those methods.
However, these statements do not imply that thermodynamical systems need not be shown to be ergodic to justify treatment as a statistical ensemble.
In ordinary language, the principle of maximum entropy can be
said to express a claim of epistemic modesty, or of maximum ignorance.
The selected distribution is the one that makes the least claim to being
informed beyond the stated prior data, that is to say the one that
admits the most ignorance beyond the stated prior data.
Testable information
The principle of maximum entropy is useful explicitly only when applied to testable information.
Testable information is a statement about a probability distribution
whose truth or falsity is well-defined. For example, the statements
"the expectation of the variable x is 2.87" and "p2 + p3 > 0.6" (where p2 and p3 are probabilities of events) are statements of testable information.
Given testable information, the maximum entropy procedure consists of seeking the probability distribution which maximizes information entropy, subject to the constraints of the information. This constrained optimization problem is typically solved using the method of Lagrange multipliers.
Entropy maximization with no testable information respects the
universal "constraint" that the sum of the probabilities is one. Under
this constraint, the maximum entropy discrete probability distribution
is the uniform distribution,
\[ p_i = \frac{1}{n} \qquad \text{for each of the } n \text{ possible outcomes}. \]
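To see why, one can maximize the entropy with a single Lagrange multiplier for the normalization constraint; a brief sketch of the standard argument:
\[
\mathcal{L}(p, \lambda_0) = -\sum_{i=1}^n p_i \log p_i + \lambda_0 \Big( \sum_{i=1}^n p_i - 1 \Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i} = -\log p_i - 1 + \lambda_0 = 0
\;\Longrightarrow\; p_i = e^{\lambda_0 - 1}.
\]
Since every p_i takes the same value, normalization forces p_i = 1/n.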
Applications
The principle of maximum entropy is commonly applied in two ways to inferential problems:
Prior probabilities
The principle of maximum entropy is often used to obtain prior probability distributions for Bayesian inference.
Jaynes was a strong advocate of this approach, claiming the maximum
entropy distribution represented the least informative distribution.
A large amount of literature is now dedicated to the elicitation of maximum entropy priors and links with channel coding.
Posterior probabilities
Maximum entropy is a sufficient updating rule for radical probabilism. Richard Jeffrey's probability kinematics
is a special case of maximum entropy inference. However, maximum
entropy is not a generalisation of all such sufficient updating rules.
Maximum entropy models
Alternatively,
the principle is often invoked for model specification: in this case
the observed data itself is assumed to be the testable information. Such
models are widely used in natural language processing. An example of such a model is logistic regression, which corresponds to the maximum entropy classifier for independent observations.
Probability density estimation
One of the main applications of the maximum entropy principle is in discrete and continuous density estimation.
Similar to support vector machine estimators, the maximum entropy principle may require the solution of a quadratic programming problem, and thus provides a sparse mixture model as the optimal density estimator. One important advantage of the method is its ability to incorporate prior information into the density estimation.
General solution for the maximum entropy distribution with linear constraints
Discrete case
We have some testable information I about a quantity x taking values in {x1, x2,..., xn}. We assume this information has the form of m constraints on the expectations of the functions fk; that is, we require our probability distribution to satisfy
\[ \sum_{i=1}^n \Pr(x_i)\, f_k(x_i) = F_k, \qquad k = 1, \ldots, m. \]
Furthermore, the probabilities must sum to one, giving the constraint
\[ \sum_{i=1}^n \Pr(x_i) = 1. \]
The probability distribution with maximum information entropy subject to these constraints is
\[ \Pr(x_i) = \frac{1}{Z(\lambda_1, \ldots, \lambda_m)} \exp\!\big[\lambda_1 f_1(x_i) + \cdots + \lambda_m f_m(x_i)\big]. \]
It is sometimes called the Gibbs distribution. The normalization constant is determined by
\[ Z(\lambda_1, \ldots, \lambda_m) = \sum_{i=1}^n \exp\!\big[\lambda_1 f_1(x_i) + \cdots + \lambda_m f_m(x_i)\big], \]
and is conventionally called the partition function. (The Pitman–Koopman theorem states that the necessary and sufficient condition for a sampling distribution to admit sufficient statistics of bounded dimension is that it have the general form of a maximum entropy distribution.)
The λk parameters are Lagrange multipliers whose particular values are determined by the constraints according to
\[ F_k = \frac{\partial}{\partial \lambda_k} \log Z(\lambda_1, \ldots, \lambda_m). \]
These m simultaneous equations do not generally possess a closed-form solution and are usually solved numerically.
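As an illustration, the following is a minimal numerical sketch, assuming NumPy and SciPy are available; the single constraint (a prescribed mean of 4.5 on the support {1, ..., 6}) and the root-finding bracket are arbitrary choices for the example, not part of the general method described above.

```python
# Sketch: maximum entropy distribution on {1, ..., 6} subject to a prescribed
# mean, solved by finding the single Lagrange multiplier numerically.
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)      # support {1, ..., 6}
F = 4.5                  # prescribed expectation E[f(x)] with f(x) = x

def constraint_gap(lam):
    w = np.exp(lam * x)  # unnormalized Gibbs weights exp(lambda * f(x))
    p = w / w.sum()      # dividing by the partition function Z(lambda)
    return p @ x - F     # zero exactly when the mean constraint holds

lam = brentq(constraint_gap, -10.0, 10.0)   # root-find the multiplier
p = np.exp(lam * x)
p /= p.sum()
print(lam, p, p @ x)     # maximum entropy probabilities with mean 4.5
```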
Continuous case
For continuous distributions, the Shannon entropy cannot be used, as it is only defined for discrete probability spaces. Instead, Edwin Jaynes (1963, 1968, 2003) gave the following formula, which is closely related to the relative entropy:
\[ H_c = -\int p(x) \log \frac{p(x)}{m(x)}\, dx, \]
where m(x), which Jaynes called the "invariant measure", is proportional to the limiting density of discrete points. For now, we shall assume that m is known; we will discuss it further after the solution equations are given.
A closely related quantity, the relative entropy, is usually defined as the Kullback–Leibler divergence of p from m,
\[ D_{\mathrm{KL}}(p \,\|\, m) = \int p(x) \log \frac{p(x)}{m(x)}\, dx \]
(although it is sometimes, confusingly, defined as the negative of
this). The inference principle of minimizing this, due to Kullback, is
known as the Principle of Minimum Discrimination Information.
We have some testable information I about a quantity x which takes values in some interval of the real numbers (all integrals below are over this interval). We assume this information has the form of m constraints on the expectations of the functions fk, i.e. we require our probability density function to satisfy
\[ \int p(x)\, f_k(x)\, dx = F_k, \qquad k = 1, \ldots, m. \]
And of course, the probability density must integrate to one, giving the constraint
\[ \int p(x)\, dx = 1. \]
The probability density function with maximum Hc subject to these constraints is
\[ p(x) = \frac{1}{Z(\lambda_1, \ldots, \lambda_m)}\, m(x) \exp\!\big[\lambda_1 f_1(x) + \cdots + \lambda_m f_m(x)\big], \]
with the partition function determined by
\[ Z(\lambda_1, \ldots, \lambda_m) = \int m(x) \exp\!\big[\lambda_1 f_1(x) + \cdots + \lambda_m f_m(x)\big]\, dx. \]
As in the discrete case, the values of the parameters are determined by the constraints according to
\[ F_k = \frac{\partial}{\partial \lambda_k} \log Z(\lambda_1, \ldots, \lambda_m). \]
The invariant measure function m(x) can be best understood by supposing that x is known to take values only in the bounded interval (a, b), and that no other information is given. Then the maximum entropy probability density function is
\[ p(x) = A \cdot m(x), \qquad a < x < b, \]
where A is a normalization constant. The invariant measure
function is actually the prior density function encoding 'lack of
relevant information'. It cannot be determined by the principle of
maximum entropy, and must be determined by some other logical method,
such as the principle of transformation groups or marginalization theory.
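For the continuous case, a similar numerical sketch can be given, assuming SciPy, a uniform invariant measure m(x) on (0, 1), and an arbitrary example mean of 0.7 as the testable information; the solution then has the form p(x) ∝ exp(λx).

```python
# Sketch: continuous maximum entropy density on (0, 1) with uniform m(x) and a
# prescribed mean; the solution has the form p(x) = exp(lambda * x) / Z.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

F = 0.7   # prescribed mean (the testable information); example value

def constraint_gap(lam):
    Z, _ = quad(lambda x: np.exp(lam * x), 0.0, 1.0)             # partition function
    mean, _ = quad(lambda x: x * np.exp(lam * x) / Z, 0.0, 1.0)  # E[x] under p
    return mean - F

lam = brentq(constraint_gap, -50.0, 50.0)
print(lam)   # multiplier that makes the density have mean 0.7
```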
Justifications for the principle of maximum entropy
Proponents
of the principle of maximum entropy justify its use in assigning
probabilities in several ways, including the following two arguments.
These arguments take the use of Bayesian probability as given, and are thus subject to the same postulates.
Information entropy as a measure of 'uninformativeness'
Consider a discrete probability distribution among m mutually exclusive propositions.
The most informative distribution would occur when one of the
propositions was known to be true. In that case, the information entropy
would be equal to zero. The least informative distribution would occur
when there is no reason to favor any one of the propositions over the
others. In that case, the only reasonable probability distribution would
be uniform, and then the information entropy would be equal to its
maximum possible value,
log m. The information entropy can therefore be seen as a
numerical measure which describes how uninformative a particular
probability distribution is, ranging from zero (completely informative)
to log m (completely uninformative).
By choosing to use the distribution with the maximum entropy
allowed by our information, the argument goes, we are choosing the most
uninformative distribution possible. To choose a distribution with lower
entropy would be to assume information we do not possess. Thus the
maximum entropy distribution is the only reasonable distribution.
The Wallis derivation
The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell–Boltzmann statistics in statistical mechanics,
although the conceptual emphasis is quite different. It has the
advantage of being strictly combinatorial in nature, making no reference
to information entropy as a measure of 'uncertainty',
'uninformativeness', or any other imprecisely defined concept. The
information entropy function is not assumed a priori, but rather
is found in the course of the argument; and the argument leads naturally
to the procedure of maximizing the information entropy, rather than
treating it in some other way.
Suppose an individual wishes to make a probability assignment among m mutually exclusive
propositions. She has some testable information, but is not sure how to
go about including this information in her probability assessment. She
therefore conceives of the following random experiment. She will
distribute N quanta of probability (each worth 1/N) at random among the m possibilities. (One might imagine that she will throw N balls into m
buckets while blindfolded. In order to be as fair as possible, each
throw is to be independent of any other, and every bucket is to be the
same size.) Once the experiment is done, she will check if the
probability assignment thus obtained is consistent with her information.
(For this step to be successful, the information must be a constraint
given by an open set in the space of probability measures). If it is
inconsistent, she will reject it and try again. If it is consistent, her
assessment will be
\[ p_i = \frac{n_i}{N}, \]
where pi is the probability of the ith proposition, while ni is the number of quanta that were assigned to the ith proposition (i.e. the number of balls that ended up in bucket i).
Now, in order to reduce the 'graininess' of the probability
assignment, it will be necessary to use quite a large number of quanta
of probability. Rather than actually carry out, and possibly have to
repeat, the rather long random experiment, the protagonist decides to
simply calculate and use the most probable result. The probability of
any particular result is the multinomial distribution,
\[ \Pr(\mathbf{p}) = W \cdot m^{-N}, \]
where
\[ W = \frac{N!}{n_1!\, n_2! \cdots n_m!} \]
is sometimes known as the multiplicity of the outcome.
The most probable result is the one which maximizes the multiplicity W. Rather than maximizing W directly, the protagonist could equivalently maximize any monotonic increasing function of W. She decides to maximize
\[ \frac{1}{N} \log W = \frac{1}{N} \log \frac{N!}{n_1!\, n_2! \cdots n_m!}. \]
At this point, in order to simplify the expression, the protagonist takes the limit as N → ∞, i.e. as the probability levels go from grainy discrete values to smooth continuous values. Using Stirling's approximation, she finds
\[ \lim_{N \to \infty} \frac{1}{N} \log W = -\sum_{i=1}^m p_i \log p_i = H(p_1, p_2, \ldots, p_m). \]
All that remains for the protagonist to do is to maximize entropy
under the constraints of her testable information. She has found that
the maximum entropy distribution is the most probable of all "fair"
random distributions, in the limit as the probability levels go from
discrete to continuous.
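A quick numerical check of this limit is given below; it is an illustrative sketch assuming NumPy and SciPy, with an arbitrary three-proposition assignment, showing (1/N) log W approaching the entropy as the number of quanta grows.

```python
# Check that (1/N) * log W approaches the Shannon entropy H(p) as N grows,
# where W = N! / (n_1! ... n_m!) is the multiplicity of the assignment.
import numpy as np
from scipy.special import gammaln   # log-factorial: log(n!) = gammaln(n + 1)

p = np.array([0.5, 0.3, 0.2])       # an example probability assignment
H = -np.sum(p * np.log(p))          # Shannon entropy in nats

for N in (100, 10_000, 1_000_000):
    n = p * N                       # quanta per proposition (integers for these N)
    log_W = gammaln(N + 1) - np.sum(gammaln(n + 1))
    print(N, log_W / N, H)          # the first column converges to the second
```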
Compatibility with Bayes' theorem
Giffin and Caticha (2007) state that Bayes' theorem
and the principle of maximum entropy are completely compatible and can
be seen as special cases of the "method of maximum relative entropy".
They state that this method reproduces every aspect of orthodox Bayesian
inference methods. In addition this new method opens the door to
tackling problems that could not be addressed by either the maximal
entropy principle or orthodox Bayesian methods individually. Moreover,
recent contributions (Lazar 2003, and Schennach 2005) show that
frequentist relative-entropy-based inference approaches (such as empirical likelihood and exponentially tilted empirical likelihood – see e.g. Owen 2001 and Kitamura 2006) can be combined with prior information to perform Bayesian posterior analysis.
Jaynes stated Bayes' theorem was a way to calculate a
probability, while maximum entropy was a way to assign a prior
probability distribution.
It is, however, possible in concept to solve for a posterior distribution directly from a stated prior distribution using the principle of minimum cross-entropy (or the principle of maximum entropy being a special case of using a uniform distribution as the given prior), independently of any Bayesian considerations, by treating the problem formally as a constrained optimisation problem with the entropy functional as the objective function. For the case of given average values as testable information (averaged over the sought-after probability distribution), the sought-after distribution is formally the Gibbs (or Boltzmann) distribution, the parameters of which must be solved for in order to achieve minimum cross-entropy and satisfy the given testable information.
Relevance to physics
The principle of maximum entropy bears a relation to a key assumption of kinetic theory of gases known as molecular chaos or Stosszahlansatz.
This asserts that the distribution function characterizing particles
entering a collision can be factorized. Though this statement can be
understood as a strictly physical hypothesis, it can also be interpreted
as a heuristic hypothesis regarding the most probable configuration of
particles before colliding.
A key measure in information theory is "entropy". Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy.
Information
theory studies the transmission, processing, extraction, and
utilization of information. Abstractly, information can be thought of as
the resolution of uncertainty. In the case of communication of
information over a noisy channel, this abstract concept was made
concrete in 1948 by Claude Shannon in his paper "A Mathematical Theory of Communication",
in which "information" is thought of as a set of possible messages,
where the goal is to send these messages over a noisy channel, and then
to have the receiver reconstruct the message with low probability of
error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.
Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction
(channel coding) techniques. In the latter case, it took many years to
find the methods Shannon's work proved were possible. A third class of
information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis. See the article ban (unit) for a historical application.
Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even in musical composition.
Prior to Shannon's 1948 paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed,
contains a theoretical section quantifying "intelligence" and the "line
speed" at which it can be transmitted by a communication system, giving
the relation W = K log m (recalling Boltzmann's constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit, scale, or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.
In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that
"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
With it came the bit, a new way of seeing the most fundamental unit of information.
Quantities of information
Information theory is based on probability theory and statistics.
Information theory often concerns itself with measures of information
of the distributions associated with random variables. Important
quantities of information are entropy, a measure of information in a single random variable, and mutual information,
a measure of information in common between two random variables. The
former quantity is a property of the probability distribution of a
random variable and gives a limit on the rate at which data generated by
independent samples with the given distribution can be reliably compressed.
The latter is a property of the joint distribution of two random
variables, and is the maximum rate of reliable communication across a
noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution.
In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because \( \lim_{p \to 0^+} p \log p = 0 \) for any logarithmic base.
Entropy of an information source
Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by
\[ H = -\sum_i p_i \log_2 (p_i), \]
where pi is the probability of occurrence of the i-th
possible value of the source symbol. This equation gives the entropy in
the units of "bits" (per symbol) because it uses a logarithm of base 2,
and this base-2 measure of entropy has sometimes been called the "shannon" in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in "nats"
per symbol and sometimes simplifies the analysis by avoiding the need
to include extra constants in the formulas. Other bases are also
possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.
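A small sketch in Python, with an arbitrary example distribution, showing the same entropy expressed in bits, nats, and hartleys simply by changing the logarithm base:

```python
# Shannon entropy of a discrete source, computed in different logarithm bases.
import math

def entropy(probs, base=2.0):
    """H = -sum_i p_i * log_base(p_i); zero-probability symbols contribute 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p, 2))          # 1.75 bits (shannons) per symbol
print(entropy(p, math.e))     # the same quantity in nats
print(entropy(p, 10))         # and in hartleys (decimal digits)
```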
Intuitively, the entropy H(X) of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known.
The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N·H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N·H.
The entropy of a Bernoulli trial, viewed as a function of the success probability, is often called the binary entropy function, Hb(p). It is maximized at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss.
If one transmits 1000 bits (0s and 1s), and the value of each of
these bits is known to the receiver (has a specific value with
certainty) ahead of transmission, it is clear that no information is
transmitted. If, however, each bit is independently equally likely to
be 0 or 1, 1000 shannons of information (more often called bits) have
been transmitted. Between these two extremes, information can be
quantified as follows. If 𝕏 is the set of all messages {x1, …, xn} that X could be, and p(x) is the probability of some x ∈ 𝕏, then the entropy, H, of X is defined:
\[ H(X) = \mathbb{E}_X[I(x)] = -\sum_{x \in \mathbb{X}} p(x) \log p(x). \]
(Here, I(x) is the self-information, which is the entropy contribution of an individual message, and 𝔼X is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n.
The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit:
\[ H_{\mathrm{b}}(p) = -p \log_2 p - (1 - p) \log_2 (1 - p). \]
Joint entropy
The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies.
For example, if (X, Y) represents the position of a chess piece, with X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece.
Despite similar notation, joint entropy should not be confused with cross entropy.
Conditional entropy (equivocation)
The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y:
\[ H(X \mid Y) = \mathbb{E}_Y[H(X \mid y)] = -\sum_{y} p(y) \sum_{x} p(x \mid y) \log p(x \mid y) = -\sum_{x, y} p(x, y) \log p(x \mid y). \]
Because entropy can be conditioned on a random variable or on that
random variable being a certain value, care should be taken not to
confuse these two definitions of conditional entropy, the former of
which is in more common use. A basic property of this form of
conditional entropy is that:
\[ H(X \mid Y) = H(X, Y) - H(Y). \]
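As an illustration of this identity, the following sketch (using an arbitrary example joint distribution) computes H(X, Y), H(Y), and H(X|Y) directly from a joint probability table:

```python
# Verify H(X|Y) = H(X,Y) - H(Y) for a small example joint distribution p(x, y).
import numpy as np

p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.10, 0.20]])          # rows: values of X, columns: values of Y

def H(probs):
    probs = probs[probs > 0]
    return -np.sum(probs * np.log2(probs))

H_joint = H(p_xy.ravel())                # H(X, Y)
p_y = p_xy.sum(axis=0)                   # marginal distribution of Y
H_y = H(p_y)                             # H(Y)
H_cond = -np.sum(p_xy * np.log2(p_xy / p_y))   # H(X|Y) = -sum p(x,y) log p(x|y)
print(H_cond, H_joint - H_y)             # the two values agree
```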
Mutual information (transinformation)
Mutual information
measures the amount of information that can be obtained about one
random variable by observing another. It is important in communication
where it can be used to maximize the amount of information shared
between sent and received signals. The mutual information of X relative to Y is given by:
\[ I(X; Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}. \]
In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:
\[ I(X; Y) = D_{\mathrm{KL}}\big(p(X, Y) \,\|\, p(X)\, p(Y)\big). \]
Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test:
mutual information can be considered a statistic for assessing
independence between a pair of variables, and has a well-specified
asymptotic distribution.
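Using the same kind of joint probability table (an arbitrary example), mutual information can be computed directly from its definition, which coincides with the Kullback–Leibler form above; a minimal sketch:

```python
# I(X;Y) computed from its definition, i.e. D_KL(p(x,y) || p(x) p(y)).
import numpy as np

p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.10, 0.20]])
p_x = p_xy.sum(axis=1, keepdims=True)    # marginal of X, shape (3, 1)
p_y = p_xy.sum(axis=0, keepdims=True)    # marginal of Y, shape (1, 2)

I = np.sum(p_xy * np.log2(p_xy / (p_x * p_y)))   # sum p(x,y) log p(x,y)/(p(x)p(y))
print(I)                                 # in bits; zero iff X and Y are independent
```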
Kullback–Leibler divergence (information gain)
The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p(X), and an arbitrary probability distribution q(X). If we compress data in a manner that assumes q(X) is the distribution underlying some data, when, in reality, p(X)
is the correct distribution, the Kullback–Leibler divergence is the
number of average additional bits per datum necessary for compression.
It is thus defined as
\[ D_{\mathrm{KL}}\big(p(X) \,\|\, q(X)\big) = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)} = \sum_{x \in X} p(x) \log p(x) - \sum_{x \in X} p(x) \log q(x). \]
Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).
Another interpretation of the KL divergence is the "unnecessary
surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p(x). If Alice knows the true distribution p(x), while Bob believes (has a prior) that the distribution is q(x), then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log
is in base 2. In this way, the extent to which Bob's prior is "wrong"
can be quantified in terms of how "unnecessarily surprised" it is
expected to make him.
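A short sketch with arbitrary example distributions, computing the divergence of Bob's prior q from the true distribution p in bits, and showing that the divergence is not symmetric:

```python
# Kullback-Leibler divergence D(p || q) in bits; note D(p || q) != D(q || p).
import math

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) * log2(p(x) / q(x)); assumes q(x) > 0 where p(x) > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]      # the "true" distribution known to Alice
q = [0.4, 0.4, 0.2]      # Bob's prior
print(kl_divergence(p, q), kl_divergence(q, p))   # the two directions differ
```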
Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. Music and data CDs, for example, are coded using error-correcting codes, and thus can still be read even if they have minor scratches, thanks to error detection and correction.
Data compression (source coding): There are two formulations for the compression problem:
lossless data compression: the data must be reconstructed exactly;
lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory.
Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
This division of coding theory into compression and transmission is
justified by the information transmission theorems, or source–channel
separation theorems that justify the use of bits as the universal
currency for information in many contexts. However, these theorems only
hold in the situation where one transmitting user wishes to communicate
to one receiving user. In scenarios with more than one transmitter (the
multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.
Source theory
Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.
Rate
Information rate
is the average entropy per symbol. For memoryless sources, this is
merely the entropy of each symbol, while, in the case of a stationary
stochastic process, it is
\[ r = \lim_{n \to \infty} H(X_n \mid X_{n-1}, X_{n-2}, \ldots, X_1); \]
that is, the conditional entropy of a symbol given all the previous
symbols generated. For the more general case of a process that is not
necessarily stationary, the average rate is
\[ r = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \ldots, X_n); \]
that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.
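As a concrete example, for a stationary Markov source the entropy rate reduces to the expected conditional entropy of the next symbol given the current one; a sketch with a hypothetical two-state transition matrix:

```python
# Entropy rate of a stationary two-state Markov source:
# r = -sum_i pi_i * sum_j P[i, j] * log2 P[i, j], with stationary distribution pi.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])                      # hypothetical transition matrix

evals, evecs = np.linalg.eig(P.T)               # stationary pi solves pi P = pi
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

rate = -np.sum(pi[:, None] * P * np.log2(P))    # average conditional entropy
print(pi, rate)                                 # entropy rate in bits per symbol
```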
It is common in information theory to speak of the "rate" or
"entropy" of a language. This is appropriate, for example, when the
source of information is English prose. The rate of a source of
information is related to its redundancy and how well it can be compressed, the subject of source coding.
Channel capacity
Communication over a channel, such as an Ethernet cable, is the primary motivation of information theory. As anyone who's ever used a telephone (mobile or landline) knows, however, such channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.
Consider the communications process over a discrete channel, modeled simply as a transmitter sending messages through a noisy channel to a receiver.
Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x),
the marginal distribution of messages we choose to send over the
channel. Under these constraints, we would like to maximize the rate of
information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by:
\[ C = \sup_{f} I(X; Y). \]
This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N
and rate ≥ R and a decoding algorithm, such that the maximal
probability of block error is ≤ ε; that is, it is always possible to
transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.
Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.
A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − Hb(p) bits per channel use, where Hb is the binary entropy function to the base-2 logarithm:
\[ C_{\mathrm{BSC}} = 1 - H_{\mathrm{b}}(p) = 1 + p \log_2 p + (1 - p) \log_2 (1 - p). \]
A binary erasure channel (BEC) with erasure probability p
is a binary input, ternary output channel. The possible channel outputs
are 0, 1, and a third symbol 'e' called an erasure. The erasure
represents complete loss of information about an input bit. The capacity
of the BEC is 1 − p bits per channel use.
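The following sketch evaluates these two textbook capacities; the crossover and erasure probabilities below are arbitrary example values:

```python
# Capacities of the binary symmetric channel and the binary erasure channel.
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p):
    """BSC with crossover probability p: C = 1 - H_b(p) bits per channel use."""
    return 1.0 - binary_entropy(p)

def bec_capacity(p):
    """BEC with erasure probability p: C = 1 - p bits per channel use."""
    return 1.0 - p

print(bsc_capacity(0.11))   # roughly 0.5 bits per channel use
print(bec_capacity(0.5))    # exactly 0.5 bits per channel use
```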
Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers.
The security of all such methods currently comes from the assumption
that no known attack can break them in a practical amount of time.
Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key)
can ensure proper transmission, while the unconditional mutual
information between the plaintext and ciphertext remains zero, resulting
in absolutely secure communications. In other words, an eavesdropper
would not be able to improve his or her guess of the plaintext by
gaining knowledge of the ciphertext but not of the key. However, as in
any other cryptographic system, care must be used to correctly apply
even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.
Pseudorandom number generation
Pseudorandom number generators
are widely available in computer language libraries and application
programs. They are, almost universally, unsuited to cryptographic use as
they do not evade the deterministic nature of modern computer equipment
and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy;
Rényi entropy is also used in evaluating randomness in cryptographic
systems. Although related, the distinctions among these measures mean
that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor, and so for cryptographic uses.
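To illustrate the distinction, a small sketch (with an arbitrary skewed distribution) comparing Shannon entropy with min-entropy, H_min = -log2 max_i p_i:

```python
# Shannon entropy versus min-entropy for a skewed distribution over 100 values.
import math

p = [0.9] + [0.1 / 99] * 99          # one outcome is very likely

shannon = -sum(q * math.log2(q) for q in p)
h_min = -math.log2(max(p))           # min-entropy, the measure used for extractors

print(shannon, h_min)                # about 1.13 bits versus about 0.15 bits
```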
Seismic exploration
One
early commercial application of information theory was in the field of
seismic oil exploration. Work in this field made it possible to strip
off and separate the unwanted noise from the desired seismic signal.
Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods.
Semiotics
Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi
to explain ideology as a form of message transmission whereby a
dominant social class emits its message by using signs that exhibit a
high degree of redundancy such that only one message is decoded among a
selection of competing ones.