Probability interpretations

From Wikipedia, the free encyclopedia

The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical, tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory.

There are two broad categories of probability interpretations which can be called "physical" and "evidential" probabilities. Physical probabilities, which are also called objective or frequency probabilities, are associated with random physical systems such as roulette wheels, rolling dice and radioactive atoms. In such systems, a given type of event (such as a die yielding a six) tends to occur at a persistent rate, or "relative frequency", in a long run of trials. Physical probabilities either explain, or are invoked to explain, these stable frequencies. The two main kinds of theory of physical probability are frequentist accounts (such as those of Venn, Reichenbach and von Mises) and propensity accounts (such as those of Popper, Miller, Giere and Fetzer).

Evidential probability, also called Bayesian probability, can be assigned to any statement whatsoever, even when no random process is involved, as a way to represent its subjective plausibility, or the degree to which the statement is supported by the available evidence. On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical (e.g. Laplace's) interpretation, the subjective interpretation (de Finetti and Savage), the epistemic or inductive interpretation (Ramsey, Cox) and the logical interpretation (Keynes and Carnap). There are also evidential interpretations of probability covering groups, which are often labelled as 'intersubjective' (proposed by Gillies and Rowbottom).

Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing. The physical interpretation, for example, is taken by followers of "frequentist" statistical methods, such as Ronald Fisher, Jerzy Neyman, and Egon Pearson. Statisticians of the opposing Bayesian school typically accept the frequency interpretation when it makes sense (although not as a definition), but there's less agreement regarding physical probabilities. Bayesians consider the calculation of evidential probabilities to be both valid and necessary in statistics. This article, however, focuses on the interpretations of probability rather than theories of statistical inference.

The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. The word "frequentist" is especially tricky. To philosophers it refers to a particular theory of physical probability, one that has more or less been abandoned. To scientists, on the other hand, "frequentist probability" is just another name for physical (or objective) probability. Those who promote Bayesian inference view "frequentist statistics" as an approach to statistical inference that is based on the frequency interpretation of probability, usually relying on the law of large numbers and characterized by what is called 'Null Hypothesis Significance Testing' (NHST). Also the word "objective", as applied to probability, sometimes means exactly what "physical" means here, but is also used of evidential probabilities that are fixed by rational constraints, such as logical and epistemic probabilities.

It is unanimously agreed that statistics depends somehow on probability. But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis.

— (Savage, 1954, p 2)

Philosophy

The philosophy of probability presents problems chiefly in matters of epistemology and the uneasy interface between mathematical concepts and ordinary language as it is used by non-mathematicians. Probability theory is an established field of study in mathematics. It has its origins in correspondence discussing the mathematics of games of chance between Blaise Pascal and Pierre de Fermat in the seventeenth century, and was formalized and rendered axiomatic as a distinct branch of mathematics by Andrey Kolmogorov in the twentieth century. In axiomatic form, mathematical statements about probability theory carry the same sort of epistemological confidence within the philosophy of mathematics as are shared by other mathematical statements.

The mathematical analysis originated in observations of the behaviour of game equipment such as playing cards and dice, which are designed specifically to introduce random and equalized elements; in mathematical terms, they are subjects of indifference. This is not the only way probabilistic statements are used in ordinary human language: when people say that "it will probably rain", they typically do not mean that the outcome of rain versus not-rain is a random factor that the odds currently favor; instead, such statements are perhaps better understood as qualifying their expectation of rain with a degree of confidence. Likewise, when it is written that "the most probable explanation" of the name of Ludlow, Massachusetts "is that it was named after Roger Ludlow", what is meant here is not that Roger Ludlow is favored by a random factor, but rather that this is the most plausible explanation of the evidence, which admits other, less likely explanations.

Thomas Bayes attempted to provide a logic that could handle varying degrees of confidence; as such, Bayesian probability is an attempt to recast the representation of probabilistic statements as an expression of the degree of confidence by which the beliefs they express are held.

Though probability initially had somewhat mundane motivations, its modern influence and use is widespread ranging from evidence-based medicine, through six sigma, all the way to the probabilistically checkable proof and the string theory landscape.

A summary of some interpretations of probability 

Classical. Main hypothesis: principle of indifference. Conceptual basis: hypothetical symmetry. Conceptual approach: conjectural. Single case possible: yes. Precise: yes. Problems: ambiguity in the principle of indifference.
Frequentist. Main hypothesis: frequency of occurrence. Conceptual basis: past data and reference class. Conceptual approach: empirical. Single case possible: no. Precise: no. Problems: circular definition.
Subjective. Main hypothesis: degree of belief. Conceptual basis: knowledge and intuition. Conceptual approach: subjective. Single case possible: yes. Precise: no. Problems: reference class problem.
Propensity. Main hypothesis: degree of causal connection. Conceptual basis: present state of system. Conceptual approach: metaphysical. Single case possible: yes. Precise: yes. Problems: disputed concept.

Classical definition

The first attempt at mathematical rigour in the field of probability, championed by Pierre-Simon Laplace, is now known as the classical definition. Developed from studies of games of chance (such as rolling dice), it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely.

The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.

— Pierre-Simon Laplace, A Philosophical Essay on Probabilities
The classical definition of probability works well for situations with only a finite number of equally-likely outcomes.

This can be represented mathematically as follows: if a random experiment can result in N mutually exclusive and equally likely outcomes and if N_A of these outcomes result in the occurrence of the event A, the probability of A is defined by

P(A) = N_A / N.
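
As a concrete illustration of this formula (a minimal sketch, not part of the original article; the die event below is chosen purely as an example), a classical probability can be computed by enumerating the equally likely outcomes in Python:

    from fractions import Fraction

    # Classical probability by enumeration: P(A) = N_A / N,
    # assuming all outcomes are equally likely (a fair six-sided die).
    outcomes = range(1, 7)
    event_A = {x for x in outcomes if x % 2 == 0}   # "roll an even number"

    N = len(list(outcomes))
    N_A = len(event_A)
    print(Fraction(N_A, N))   # 1/2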

There are two clear limitations to the classical definition. Firstly, it is applicable only to situations in which there is a 'finite' number of possible outcomes; some important random experiments, such as tossing a coin until it lands heads, give rise to an infinite set of outcomes. Secondly, one must determine in advance that all the possible outcomes are equally likely without relying on the notion of probability (which would be circular), for instance by appealing to symmetry considerations.

Frequentism

For frequentists, the probability of the ball landing in any pocket can be determined only by repeated trials in which the observed result converges to the underlying probability in the long run.
 

Frequentists posit that the probability of an event is its relative frequency over time, i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. The events are assumed to be governed by some random physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see determinism), or phenomena which are essentially unpredictable. Examples of the first kind include tossing dice or spinning a roulette wheel; an example of the second kind is radioactive decay. In the case of tossing a fair coin, frequentists say that the probability of getting heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity.

If we denote by n_a the number of occurrences of an event a in n trials, then if lim_{n→∞} n_a/n = p, we say that P(a) = p.
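
The convergence of relative frequencies can be illustrated with a short simulation (an illustrative sketch using numpy, which is not part of the original article):

    import numpy as np

    rng = np.random.default_rng(0)
    tosses = rng.integers(0, 2, size=100_000)   # 1 = heads, 0 = tails
    running_freq = np.cumsum(tosses) / np.arange(1, tosses.size + 1)

    # The relative frequency of heads drifts toward 1/2 as the number of trials grows.
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(n, running_freq[n - 1])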

The frequentist view has its own problems. It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge that we can only measure a probability with some error of measurement attached, we still get into problems, as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular; see, for example, "What is the Chance of an Earthquake?".

Subjectivism

Subjectivists, also known as Bayesians or followers of epistemic probability, give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation. Epistemic or subjective probability is sometimes called credence, as opposed to the term chance for a propensity probability. Some examples of epistemic probability are to assign a probability to the proposition that a proposed law of physics is true or to determine how probable it is that a suspect committed a crime, based on the evidence presented. The use of Bayesian probability raises the philosophical debate as to whether it can contribute valid justifications of belief. Bayesians point to the work of Ramsey (p 182) and de Finetti (p 103) as proving that subjective beliefs must follow the laws of probability if they are to be coherent. Evidence casts doubt on whether humans actually hold coherent beliefs. The use of Bayesian probability involves specifying a prior probability. This may be obtained from consideration of whether the required prior probability is greater or lesser than a reference probability associated with an urn model or a thought experiment. The issue is that for a given problem, multiple thought experiments could apply, and choosing one is a matter of judgement: different people may assign different prior probabilities, known as the reference class problem. The "sunrise problem" provides an example.

Propensity

Propensity theorists think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind or to yield a long run relative frequency of such an outcome. This kind of objective probability is sometimes called 'chance'.

Propensities, or chances, are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate given outcome types at persistent rates, which are known as propensities or chances. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives (see "single case possible" in the table above). In contrast, a propensitist is able to use the law of large numbers to explain the behaviour of long-run frequencies. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will be close to the probability of heads on each single toss. This law allows that stable long-run frequencies are a manifestation of invariant single-case probabilities. In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular time.

The main challenge facing propensity theories is to say exactly what propensity means. (And then, of course, to show that propensity thus defined has the required properties.) At present, unfortunately, none of the well-recognised accounts of propensity comes close to meeting this challenge.

A propensity theory of probability was given by Charles Sanders Peirce. A later propensity theory was proposed by philosopher Karl Popper, who had only slight acquaintance with the writings of C. S. Peirce, however. Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p. For Popper, then, a deterministic experiment would have propensity 0 or 1 for each outcome, since those generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) exist only for genuinely nondeterministic experiments.

A number of other philosophers, including David Miller and Donald A. Gillies, have proposed propensity theories somewhat similar to Popper's.

Other propensity theorists (e.g. Ronald Giere) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argued, for example, that physical magnitudes such as electrical charge cannot be explicitly defined either, in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science.

What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the Principal Principle, a term that philosophers have mostly adopted. For example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What is then the correct price for a gamble that pays $1 if the coin lands heads, and nothing otherwise? According to the Principal Principle, the fair price is 32 cents.

Logical, epistemic, and inductive probability

It is widely recognized that the term "probability" is sometimes used in contexts where it has nothing to do with physical randomness. Consider, for example, the claim that the extinction of the dinosaurs was probably caused by a large meteorite hitting the earth. Statements such as "Hypothesis H is probably true" have been interpreted to mean that the (presently available) empirical evidence (E, say) supports H to a high degree. This degree of support of H by E has been called the logical probability of H given E, or the epistemic probability of H given E, or the inductive probability of H given E.

The differences between these interpretations are rather small, and may seem inconsequential. One of the main points of disagreement lies in the relation between probability and belief. Logical probabilities are conceived (for example in Keynes' Treatise on Probability) to be objective, logical relations between propositions (or sentences), and hence not to depend in any way upon belief. They are degrees of (partial) entailment, or degrees of logical consequence, not degrees of belief. (They do, nevertheless, dictate proper degrees of belief, as is discussed below.) Frank P. Ramsey, on the other hand, was skeptical about the existence of such objective logical relations and argued that (evidential) probability is "the logic of partial belief". (p 157) In other words, Ramsey held that epistemic probabilities simply are degrees of rational belief, rather than being logical relations that merely constrain degrees of rational belief.

Another point of disagreement concerns the uniqueness of evidential probability, relative to a given state of knowledge. Rudolf Carnap held, for example, that logical principles always determine a unique logical probability for any statement, relative to any body of evidence. Ramsey, by contrast, thought that while degrees of belief are subject to some rational constraints (such as, but not limited to, the axioms of probability) these constraints usually do not determine a unique value. Rational people, in other words, may differ somewhat in their degrees of belief, even if they all have the same information.

Prediction

An alternative account of probability emphasizes the role of prediction – predicting future observations on the basis of past observations, not on unobservable parameters. In its modern form, it is mainly in the Bayesian vein. This was the main function of probability before the 20th century, but fell out of favor compared to the parametric approach, which modeled phenomena as a physical system that was observed with error, such as in celestial mechanics.

The modern predictive approach was pioneered by Bruno de Finetti, with the central idea of exchangeability – that future observations should behave like past observations. This view came to the attention of the Anglophone world with the 1974 translation of de Finetti's book, and has since been propounded by such statisticians as Seymour Geisser.

Axiomatic probability

The mathematics of probability can be developed on an entirely axiomatic basis that is independent of any interpretation: see the articles on probability theory and probability axioms for a detailed treatment.

Gamma distribution

From Wikipedia, the free encyclopedia

Probability density function: probability density plots of gamma distributions (figure)
Cumulative distribution function: cumulative distribution plots of gamma distributions (figure)

Parameters: shape k > 0, scale θ > 0 (equivalently, shape α = k > 0 and rate β = 1/θ > 0)
Support: x ∈ (0, ∞)
PDF: f(x) = x^(k−1) e^(−x/θ) / (Γ(k) θ^k)
CDF: F(x) = γ(k, x/θ) / Γ(k)
Mean: kθ = α/β
Median: no simple closed form
Mode: (k − 1)θ for k ≥ 1; 0 for k < 1
Variance: kθ² = α/β²
Skewness: 2/√k
Ex. kurtosis: 6/k
Entropy: k + ln(θ) + ln Γ(k) + (1 − k)ψ(k)
MGF: (1 − θt)^(−k) for t < 1/θ
CF: (1 − θit)^(−k)
Method of moments: k = E[X]²/V[X], θ = V[X]/E[X]

In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-square distribution are special cases of the gamma distribution. There are two different parameterizations in common use:

  1. With a shape parameter k and a scale parameter θ.
  2. With a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter.

In each of these forms, both parameters are positive real numbers.

The gamma distribution is the maximum entropy probability distribution (both with respect to a uniform base measure and with respect to a 1/x base measure) for a random variable X for which E[X] = kθ = α/β is fixed and greater than zero, and E[ln(X)] = ψ(k) + ln(θ) = ψ(α) − ln(β) is fixed (ψ is the digamma function).

Definitions

The parameterization with k and θ appears to be more common in econometrics and certain other applied fields, where for example the gamma distribution is frequently used to model waiting times. For instance, in life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution. See Hogg and Craig for an explicit motivation.

The parameterization with α and β is more common in Bayesian statistics, where the gamma distribution is used as a conjugate prior distribution for various types of inverse scale (rate) parameters, such as the λ of an exponential distribution or a Poisson distribution – or for that matter, the β of the gamma distribution itself. The closely related inverse-gamma distribution is used as a conjugate prior for scale parameters, such as the variance of a normal distribution.

If k is a positive integer, then the distribution represents an Erlang distribution; i.e., the sum of k independent exponentially distributed random variables, each of which has a mean of θ.

Characterization using shape α and rate β

The gamma distribution can be parameterized in terms of a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter. A random variable X that is gamma-distributed with shape α and rate β is denoted

X ~ Γ(α, β) ≡ Gamma(α, β).

The corresponding probability density function in the shape-rate parametrization is

f(x; α, β) = β^α x^(α−1) e^(−βx) / Γ(α)   for x > 0 and α, β > 0,

where Γ(α) is the gamma function. For all positive integers n, Γ(n) = (n − 1)!.

The cumulative distribution function is the regularized gamma function:

F(x; α, β) = γ(α, βx) / Γ(α),

where γ(α, βx) is the lower incomplete gamma function.

If α is a positive integer (i.e., the distribution is an Erlang distribution), the cumulative distribution function has the following series expansion:

F(x; α, β) = 1 − Σ_{i=0}^{α−1} ((βx)^i / i!) e^(−βx).
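
Both parameterizations and the Erlang series can be checked numerically with scipy.stats.gamma, which uses the shape-scale convention (an illustrative sketch; scipy is not referenced in the article, and the rate enters only as scale = 1/β):

    import numpy as np
    from math import factorial
    from scipy.stats import gamma
    from scipy.special import gammainc, gamma as Gamma

    alpha, beta = 3.0, 2.0          # shape and rate
    k, theta = alpha, 1.0 / beta    # equivalent shape and scale
    x = 1.7

    pdf_scipy = gamma.pdf(x, a=k, scale=theta)
    pdf_rate = beta**alpha * x**(alpha - 1) * np.exp(-beta * x) / Gamma(alpha)

    cdf_scipy = gamma.cdf(x, a=k, scale=theta)
    cdf_reg = gammainc(alpha, beta * x)   # regularized lower incomplete gamma P(alpha, beta*x)

    # Erlang series for integer alpha: F(x) = 1 - exp(-beta*x) * sum_{i<alpha} (beta*x)^i / i!
    series = 1 - np.exp(-beta * x) * sum((beta * x)**i / factorial(i) for i in range(int(alpha)))

    print(np.isclose(pdf_scipy, pdf_rate), np.isclose(cdf_scipy, cdf_reg), np.isclose(cdf_scipy, series))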

Characterization using shape k and scale θ

A random variable X that is gamma-distributed with shape k and scale θ is denoted by

X ~ Γ(k, θ) ≡ Gamma(k, θ).

Illustration of the gamma PDF for parameter values over k and x with θ set to 1, 2, 3, 4, 5 and 6. One can see each θ layer by itself here as well as by k and x

The probability density function using the shape-scale parametrization is

f(x; k, θ) = x^(k−1) e^(−x/θ) / (Γ(k) θ^k)   for x > 0 and k, θ > 0.

Here Γ(k) is the gamma function evaluated at k.

The cumulative distribution function is the regularized gamma function:

F(x; k, θ) = γ(k, x/θ) / Γ(k),

where γ(k, x/θ) is the lower incomplete gamma function.

It can also be expressed as follows, if k is a positive integer (i.e., the distribution is an Erlang distribution):

F(x; k, θ) = 1 − Σ_{i=0}^{k−1} ((x/θ)^i / i!) e^(−x/θ).

Both parametrizations are common because either can be more convenient depending on the situation.

Properties

Skewness

The skewness of the gamma distribution only depends on its shape parameter, k, and it is equal to 2/√k.

Higher moments

The nth raw moment is given by

E[X^n] = θ^n Γ(k + n) / Γ(k) = θ^n k (k + 1) ⋯ (k + n − 1).
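
As a quick numerical check of this moment formula (an illustrative sketch using scipy, not part of the original article):

    from scipy.stats import gamma
    from scipy.special import gamma as Gamma

    k, theta = 2.5, 1.8
    for n in range(1, 5):
        closed_form = theta**n * Gamma(k + n) / Gamma(k)
        numeric = gamma(a=k, scale=theta).moment(n)   # n-th raw moment from scipy
        print(n, closed_form, numeric)                # the two columns should agree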

Median approximations and bounds

Bounds and asymptotic approximations to the median of the gamma distribution. The cyan colored region indicates the large gap between published lower and upper bounds.

Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median does not have a closed-form equation. The median for this distribution is defined as the value ν such that

(1 / (Γ(k) θ^k)) ∫_0^ν x^(k−1) e^(−x/θ) dx = 1/2.

A rigorous treatment of the problem of determining an asymptotic expansion and bounds for the median of the gamma distribution was handled first by Chen and Rubin, who proved that (for θ = 1)

k − 1/3 < ν(k) < k,

where μ = k is the mean and ν(k) is the median of the Gamma(k, 1) distribution. For other values of the scale parameter, the mean scales to μ = kθ, and the median bounds and approximations would be similarly scaled by θ.

K. P. Choi found the first five terms in a Laurent series asymptotic approximation of the median by comparing the median to Ramanujan's function. Berg and Pedersen found more terms:

Two gamma distribution median asymptotes which are conjectured to be bounds (upper solid red and lower dashed red), and an interpolation between them that makes an approximation (dotted red) that is exact at k = 1 and has a maximum relative error of about 0.6%. The cyan shaded region is the remaining gap between upper and lower bounds (or conjectured bounds), including these new (as of 2021) conjectured bounds and the proven bounds in the previous figure.
 
Log–log plot of upper (solid) and lower (dashed) bounds to the median of a gamma distribution, and the gaps between them. The green, yellow, and cyan regions represent the gap before the Lyon 2021 paper. The green and yellow narrow that gap with the lower bounds that Lyon proved. The yellow is further narrowed by Lyon's conjectured bounds. Mostly within the yellow, closed-form rational-function-interpolated bounds are plotted, along with the numerically calculated value of the median (dotted). Tighter interpolated bounds exist but are not plotted, as they would not be resolved at this scale.

Partial sums of these series are good approximations for high enough k; they are not plotted in the figure, which is focused on the low-k region that is less well approximated.

Berg and Pedersen also proved many properties of the median, showed that it is a convex function of k,[8] and that the asymptotic behavior near k = 0 is ν(k) ≈ 2^(−1/k) e^(−γ) (where γ is the Euler–Mascheroni constant), and that for all k the median is bounded by k 2^(−1/k) < ν(k) < k.

A closer linear upper bound, for a restricted range of k only, was provided in 2021 by Gaunt and Merkle, relying on the Berg and Pedersen result that the slope of the median ν(k) is everywhere less than 1:

for (with equality at )

which can be extended to a bound for all k by taking the max with the chord shown in the figure, since the median was proved convex.

An approximation to the median that is asymptotically accurate at high k, and reasonable except at quite small k, follows from the Wilson–Hilferty transformation:

ν(k) ≈ k (1 − 1/(9k))³   (for θ = 1),

which goes negative for k < 1/9.
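
A quick comparison of this approximation with the exact median, computed here with scipy's quantile function (an illustrative sketch, not part of the original article):

    from scipy.stats import gamma

    def wilson_hilferty_median(k):
        # Wilson-Hilferty-based approximation to the median of Gamma(k, theta=1).
        return k * (1 - 1 / (9 * k))**3

    for k in (0.5, 1.0, 2.0, 5.0, 20.0):
        exact = gamma.ppf(0.5, a=k)          # median of Gamma(k, 1)
        approx = wilson_hilferty_median(k)
        print(k, exact, approx, abs(approx - exact) / exact)   # relative error shrinks as k grows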

In 2021, Lyon proposed several closed-form approximations of the form . He conjectured closed-form values of and for which this approximation is an asymptotically tight upper or lower bound for all . In particular:

is a lower bound, asymptotically tight as
is an upper bound, asymptotically tight as

Lyon also derived two other lower bounds that are not closed-form expressions, including this one based on solving the integral expression substituting 1 for :

(approaching equality as )

and the tangent line at where the derivative was found to be :

(with equality at )

where Ei is the exponential integral.

Additionally, he showed that interpolations between bounds can provide excellent approximations or tighter bounds to the median, including an approximation that is exact at k = 1 (where the median equals ln 2 for θ = 1) and has a maximum relative error less than 0.6%. Interpolated approximations and bounds are all of the form

where the interpolating function runs monotonically from 0 at low k to 1 at high k, approximating an ideal, or exact, interpolator:

For the simplest interpolating function considered, a first-order rational function

the tightest lower bound has

and the tightest upper bound has

The interpolated bounds are plotted (mostly inside the yellow region) in the log–log plot shown. Even tighter bounds are available using different interpolating functions, but not usually with closed-form parameters like these.

Summation

If Xi has a Gamma(ki, θ) distribution for i = 1, 2, ..., N (i.e., all distributions have the same scale parameter θ), then

X1 + X2 + ⋯ + XN ~ Gamma(k1 + k2 + ⋯ + kN, θ),

provided all Xi are independent.
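
A simulation can illustrate the summation property (a sketch using scipy, not part of the original article; the shapes and scale below are arbitrary examples):

    import numpy as np
    from scipy.stats import gamma, kstest

    rng = np.random.default_rng(1)
    shapes, theta, n = [0.7, 1.5, 3.0], 2.0, 200_000

    # Sum of independent Gamma(k_i, theta) variates with a common scale theta.
    total = sum(gamma.rvs(a=k, scale=theta, size=n, random_state=rng) for k in shapes)

    # Compare against Gamma(sum(k_i), theta): the KS statistic should be tiny.
    print(kstest(total, 'gamma', args=(sum(shapes), 0, theta)))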

For the cases where the Xi are independent but have different scale parameters see Mathai or Moschopoulos.

The gamma distribution exhibits infinite divisibility.

Scaling

If X ~ Gamma(k, θ), then, for any c > 0,

cX ~ Gamma(k, cθ)

by moment generating functions, or equivalently, if X ~ Gamma(α, β) (shape-rate parameterization), then

cX ~ Gamma(α, β/c).

Indeed, we know that if X is an exponential r.v. with rate λ then cX is an exponential r.v. with rate λ/c; the same thing is valid with gamma variates (and this can be checked using the moment-generating function; see, e.g., these notes, 10.4-(ii)): multiplication by a positive constant c divides the rate (or, equivalently, multiplies the scale).
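
The scaling property can be verified through quantiles (an illustrative sketch using scipy, not part of the original article):

    import numpy as np
    from scipy.stats import gamma

    k, theta, c = 2.0, 1.5, 3.0
    q = np.linspace(0.05, 0.95, 7)

    # Quantiles of c*X, with X ~ Gamma(k, theta), match those of Gamma(k, c*theta).
    print(np.allclose(c * gamma.ppf(q, a=k, scale=theta),
                      gamma.ppf(q, a=k, scale=c * theta)))   # True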

Exponential family

The gamma distribution is a two-parameter exponential family with natural parameters k − 1 and −1/θ (equivalently, α − 1 and −β), and natural statistics X and ln(X).

If the shape parameter k is held fixed, the resulting one-parameter family of distributions is a natural exponential family.

Logarithmic expectation and variance

One can show that

E[ln X] = ψ(α) − ln(β),

or equivalently,

E[ln X] = ψ(k) + ln(θ),

where ψ is the digamma function. Likewise,

var[ln X] = ψ₁(α) = ψ₁(k),

where ψ₁ is the trigamma function.

This can be derived using the exponential family formula for the moment generating function of the sufficient statistic, because one of the sufficient statistics of the gamma distribution is ln(x).
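
A Monte Carlo check of these identities (an illustrative sketch using numpy/scipy, not part of the original article):

    import numpy as np
    from scipy.stats import gamma
    from scipy.special import digamma, polygamma

    rng = np.random.default_rng(3)
    k, theta = 2.3, 0.8
    logs = np.log(gamma.rvs(a=k, scale=theta, size=1_000_000, random_state=rng))

    print(logs.mean(), digamma(k) + np.log(theta))   # E[ln X] = psi(k) + ln(theta)
    print(logs.var(), polygamma(1, k))               # var[ln X] = psi_1(k) (trigamma)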

Information entropy

The information entropy is

H(X) = α − ln(β) + ln Γ(α) + (1 − α) ψ(α).

In the k, θ parameterization, the information entropy is given by

H(X) = k + ln(θ) + ln Γ(k) + (1 − k) ψ(k).

Kullback–Leibler divergence

Illustration of the Kullback–Leibler (KL) divergence for two gamma PDFs. Here β = β0 + 1 which are set to 1, 2, 3, 4, 5 and 6. The typical asymmetry for the KL divergence is clearly visible.

The Kullback–Leibler divergence (KL-divergence) of Gamma(αp, βp) ("true" distribution) from Gamma(αq, βq) ("approximating" distribution) is given by

D_KL(αp, βp ; αq, βq) = (αp − αq) ψ(αp) − ln Γ(αp) + ln Γ(αq) + αq (ln(βp) − ln(βq)) + αp (βq − βp)/βp.

Written using the k, θ parameterization, the KL-divergence of Gamma(kp, θp) from Gamma(kq, θq) is given by

D_KL(kp, θp ; kq, θq) = (kp − kq) ψ(kp) − ln Γ(kp) + ln Γ(kq) + kq (ln(θq) − ln(θp)) + kp (θp − θq)/θq.
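
The closed form can be checked against a Monte Carlo estimate of E_p[ln p(X) − ln q(X)] (an illustrative sketch; the helper function name is my own):

    import numpy as np
    from scipy.stats import gamma
    from scipy.special import digamma, gammaln

    def kl_gamma(ap, bp, aq, bq):
        # KL divergence of Gamma(shape=ap, rate=bp) from Gamma(shape=aq, rate=bq).
        return ((ap - aq) * digamma(ap) - gammaln(ap) + gammaln(aq)
                + aq * (np.log(bp) - np.log(bq)) + ap * (bq - bp) / bp)

    ap, bp, aq, bq = 3.0, 2.0, 1.5, 0.7
    closed = kl_gamma(ap, bp, aq, bq)

    rng = np.random.default_rng(4)
    x = gamma.rvs(a=ap, scale=1 / bp, size=1_000_000, random_state=rng)
    mc = np.mean(gamma.logpdf(x, a=ap, scale=1 / bp) - gamma.logpdf(x, a=aq, scale=1 / bq))

    print(closed, mc)   # the two values should agree to a few decimal places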

Laplace transform

The Laplace transform of the gamma PDF is

F(s) = E[e^(−sX)] = (1 + θs)^(−k) = β^α / (β + s)^α.

Related distributions

General

  • Let X1, ..., Xn be independent and identically distributed random variables following an exponential distribution with rate parameter λ; then X1 + ... + Xn ~ Gamma(n, λ) in the shape-rate parameterization, where n is the shape parameter and λ is the rate, while the sample mean (X1 + ... + Xn)/n ~ Gamma(n, nλ), where the rate changes to nλ.
  • If X ~ Gamma(1, 1/λ) (in the shape–scale parametrization), then X has an exponential distribution with rate parameter λ.
  • If X ~ Gamma(ν/2, 2) (in the shape–scale parametrization), then X is identical to χ2(ν), the chi-squared distribution with ν degrees of freedom. Conversely, if Q ~ χ2(ν) and c is a positive constant, then cQ ~ Gamma(ν/2, 2c).
  • If k is an integer, the gamma distribution is an Erlang distribution and is the probability distribution of the waiting time until the kth "arrival" in a one-dimensional Poisson process with intensity 1/θ. If X ~ Gamma(k, θ) with integer k and Y ~ Poisson(x/θ), then P(X > x) = P(Y < k).
  • If X ~ Gamma(k, θ), then ln(X) follows an exponential-gamma (abbreviated exp-gamma) distribution.[14] It is sometimes referred to as the log-gamma distribution. Formulas for its mean and variance are in the section Logarithmic expectation and variance above.
  • If X ~ Gamma(k, θ), then √X follows a generalized gamma distribution with parameters p = 2, d = 2k, and a = √θ.
  • More generally, if X ~ Gamma(k, θ), then X^q for q > 0 follows a generalized gamma distribution with parameters p = 1/q, d = k/q, and a = θ^q.
  • If X ~ Gamma(k, θ) with shape k and scale θ, then 1/X ~ Inv-Gamma(k, θ−1) (see Inverse-gamma distribution for derivation).
  • Parametrization 1: If are independent, then , or equivalently,
  • Parametrization 2: If are independent, then , or equivalently,
  • If X ~ Gamma(α, θ) and Y ~ Gamma(β, θ) are independently distributed, then X/(X + Y) has a beta distribution with parameters α and β, and X/(X + Y) is independent of X + Y, which is Gamma(α + β, θ)-distributed.
  • If Xi ~ Gamma(αi, 1) are independently distributed, then the vector (X1/S, ..., Xn/S), where S = X1 + ... + Xn, follows a Dirichlet distribution with parameters α1, ..., αn.
  • For large k the gamma distribution converges to a normal distribution with mean μ = kθ and variance σ² = kθ².
  • The gamma distribution is the conjugate prior for the precision of the normal distribution with known mean.
  • The Wishart distribution is a multivariate generalization of the gamma distribution (samples are positive-definite matrices rather than positive real numbers).
  • The gamma distribution is a special case of the generalized gamma distribution, the generalized integer gamma distribution, and the generalized inverse Gaussian distribution.
  • Among the discrete distributions, the negative binomial distribution is sometimes considered the discrete analogue of the gamma distribution.
  • Tweedie distributions – the gamma distribution is a member of the family of Tweedie exponential dispersion models.

Compound gamma

If the shape parameter of the gamma distribution is known, but the inverse-scale parameter is unknown, then a gamma distribution for the inverse scale forms a conjugate prior. The compound distribution, which results from integrating out the inverse scale, has a closed-form solution, known as the compound gamma distribution.

If instead the shape parameter is known but the mean is unknown, with the prior of the mean being given by another gamma distribution, then it results in K-distribution.

Statistical inference

Parameter estimation

Maximum likelihood estimation

The likelihood function for N iid observations (x1, ..., xN) is

L(k, θ) = ∏_{i=1}^{N} f(xi; k, θ),

from which we calculate the log-likelihood function

ℓ(k, θ) = (k − 1) Σ_{i=1}^{N} ln(xi) − Σ_{i=1}^{N} xi/θ − Nk ln(θ) − N ln Γ(k).

Finding the maximum with respect to θ by taking the derivative and setting it equal to zero yields the maximum likelihood estimator of the θ parameter:

θ̂ = (1/(kN)) Σ_{i=1}^{N} xi.

Substituting this into the log-likelihood function gives

ℓ(k) = (k − 1) Σ_{i=1}^{N} ln(xi) − Nk − Nk ln( Σ xi / (kN) ) − N ln Γ(k).

Finding the maximum with respect to k by taking the derivative and setting it equal to zero yields

ln(k) − ψ(k) = ln( (1/N) Σ_{i=1}^{N} xi ) − (1/N) Σ_{i=1}^{N} ln(xi),

where ψ is the digamma function. There is no closed-form solution for k. The function is numerically very well behaved, so if a numerical solution is desired, it can be found using, for example, Newton's method. An initial value of k can be found either using the method of moments, or using the approximation

If we let

s = ln( (1/N) Σ_{i=1}^{N} xi ) − (1/N) Σ_{i=1}^{N} ln(xi),

then k is approximately

k ≈ (3 − s + √((s − 3)² + 24s)) / (12s),

which is within 1.5% of the correct value. An explicit form for the Newton–Raphson update of this initial guess is:

k ← k − (ln(k) − ψ(k) − s) / (1/k − ψ′(k)).
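
A compact sketch of this estimation procedure (illustrative; the function name is my own, and numpy/scipy are assumed):

    import numpy as np
    from scipy.special import digamma, polygamma
    from scipy.stats import gamma

    def fit_gamma_mle(x, iters=20):
        # Estimate (k, theta) by maximum likelihood: Newton's method on k,
        # starting from the closed-form approximation above.
        s = np.log(x.mean()) - np.log(x).mean()
        k = (3 - s + np.sqrt((s - 3)**2 + 24 * s)) / (12 * s)
        for _ in range(iters):
            k -= (np.log(k) - digamma(k) - s) / (1 / k - polygamma(1, k))
        return k, x.mean() / k   # theta_hat = sample mean / k

    rng = np.random.default_rng(5)
    sample = gamma.rvs(a=2.5, scale=1.7, size=50_000, random_state=rng)
    print(fit_gamma_mle(sample))   # approximately (2.5, 1.7)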

Closed-form estimators

Consistent closed-form estimators of k and θ exist that are derived from the likelihood of the generalized gamma distribution.

The estimate for the shape k is

k̂ = ( N Σ_{i=1}^{N} xi ) / ( N Σ_{i=1}^{N} xi ln(xi) − Σ_{i=1}^{N} ln(xi) Σ_{i=1}^{N} xi ),

and the estimate for the scale θ is

θ̂ = ( N Σ_{i=1}^{N} xi ln(xi) − Σ_{i=1}^{N} ln(xi) Σ_{i=1}^{N} xi ) / N².

If the rate parameterization is used, the estimate of the rate is β̂ = 1/θ̂.

These estimators are not strictly maximum likelihood estimators, but are instead referred to as mixed type log-moment estimators. They nevertheless have efficiency similar to that of the maximum likelihood estimators.

Although these estimators are consistent, they have a small bias. A bias-corrected variant of the estimator for the scale θ is

A bias correction for the shape parameter k is given as

Bayesian minimum mean squared error

With known k and unknown θ, the posterior density function for θ (using the standard scale-invariant prior for θ, proportional to 1/θ) is

P(θ | k, x1, ..., xN) ∝ (1/θ) ∏_{i=1}^{N} f(xi; k, θ) ∝ θ^(−Nk−1) exp( −Σ_{i=1}^{N} xi / θ ).

Denoting

y ≡ Σ_{i=1}^{N} xi,

Integration with respect to θ can be carried out using a change of variables, revealing that 1/θ is gamma-distributed with parameters α = Nk, β = y.

The moments can be computed by taking the ratio (m by m = 0)

E[θ^m] = ( ∫ θ^(m−Nk−1) e^(−y/θ) dθ ) / ( ∫ θ^(−Nk−1) e^(−y/θ) dθ ) = y^m Γ(Nk − m) / Γ(Nk),

which shows that the mean ± standard deviation estimate of the posterior distribution for θ is

y/(Nk − 1) ± y / ( (Nk − 1) √(Nk − 2) ).

Bayesian inference

Conjugate prior

In Bayesian inference, the gamma distribution is the conjugate prior to many likelihood distributions: the Poisson, exponential, normal (with known mean), Pareto, gamma with known shape σ, inverse gamma with known shape parameter, and Gompertz with known scale parameter.
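
For instance, for Poisson-distributed counts with a Gamma(α, β) prior on the rate (shape-rate parameterization), conjugacy gives a Gamma(α + Σ xi, β + n) posterior. A minimal sketch of this standard update (the numbers below are illustrative):

    import numpy as np
    from scipy.stats import poisson, gamma

    rng = np.random.default_rng(6)
    data = poisson.rvs(4.0, size=30, random_state=rng)   # counts with true rate 4.0

    # Gamma(alpha, rate=beta) prior on the Poisson rate; the posterior is also gamma.
    alpha0, beta0 = 2.0, 1.0
    alpha_post = alpha0 + data.sum()
    beta_post = beta0 + data.size

    print("posterior mean:", alpha_post / beta_post)
    print("95% credible interval:",
          gamma.ppf([0.025, 0.975], a=alpha_post, scale=1 / beta_post))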

The gamma distribution's conjugate prior is:

where Z is the normalizing constant, which has no closed-form solution. The posterior distribution can be found by updating the parameters as follows:

where n is the number of observations, and xi is the ith observation.

Occurrence and applications

The gamma distribution has been used to model the size of insurance claims and rainfalls. This means that aggregate insurance claims and the amount of rainfall accumulated in a reservoir are modelled by a gamma process – much like the exponential distribution generates a Poisson process.

The gamma distribution is also used to model errors in multi-level Poisson regression models, because a mixture of Poisson distributions with gamma distributed rates has a known closed form distribution, called negative binomial.

In wireless communication, the gamma distribution is used to model the multi-path fading of signal power; see also Rayleigh distribution and Rician distribution.

In oncology, the age distribution of cancer incidence often follows the gamma distribution, with the shape and scale parameters predicting, respectively, the number of driver events and the time interval between them.

In neuroscience, the gamma distribution is often used to describe the distribution of inter-spike intervals.

In bacterial gene expression, the copy number of a constitutively expressed protein often follows the gamma distribution, where the scale and shape parameter are, respectively, the mean number of bursts per cell cycle and the mean number of protein molecules produced by a single mRNA during its lifetime.

In genomics, the gamma distribution was applied in peak calling step (i.e. in recognition of signal) in ChIP-chip and ChIP-seq data analysis.

The gamma distribution is widely used as a conjugate prior in Bayesian statistics. It is the conjugate prior for the precision (i.e. inverse of the variance) of a normal distribution. It is also the conjugate prior for the exponential distribution.

Generating gamma-distributed random variables

Given the scaling property above, it is enough to generate gamma variables with θ = 1, as we can later convert to any value of β with simple division (equivalently, to any scale θ by multiplication).

Suppose we wish to generate random variables from Gamma(n + δ, 1), where n is a non-negative integer and 0 < δ < 1. Using the fact that a Gamma(1, 1) distribution is the same as an Exp(1) distribution, and noting the method of generating exponential variables, we conclude that if U is uniformly distributed on (0, 1], then −ln(U) is distributed Gamma(1, 1) (i.e. inverse transform sampling). Now, using the "α-addition" property of the gamma distribution, we expand this result:

−Σ_{k=1}^{n} ln(Uk) ~ Gamma(n, 1),

where Uk are all uniformly distributed on (0, 1] and independent. All that is left now is to generate a variable distributed as Gamma(δ, 1) for 0 < δ < 1 and apply the "α-addition" property once more. This is the most difficult part.

Random generation of gamma variates is discussed in detail by Devroye, who notes that none of the available methods is uniformly fast for all shape parameters. For small values of the shape parameter, the algorithms are often not valid. For arbitrary values of the shape parameter, one can apply the Ahrens and Dieter modified acceptance–rejection method Algorithm GD (shape k ≥ 1), or a transformation method when 0 < k < 1. Also see Cheng and Feast Algorithm GKM 3 or Marsaglia's squeeze method.

The following is a version of the Ahrens-Dieter acceptance–rejection method:

  1. Generate U, V and W as iid uniform (0, 1] variates.
  2. If U ≤ e/(e + δ), then ξ = V^(1/δ) and η = W ξ^(δ−1). Otherwise, ξ = 1 − ln(V) and η = W e^(−ξ).
  3. If η > ξ^(δ−1) e^(−ξ), then go to step 1.
  4. ξ is distributed as Γ(δ, 1).

A summary of this is

θ ( ξ − Σ_{i=1}^{⌊k⌋} ln(Ui) ) ~ Gamma(k, θ),

where ⌊k⌋ is the integer part of k, ξ is generated via the algorithm above with δ = {k} (the fractional part of k), and the Ui are all independent.

While the above approach is technically correct, Devroye notes that it is linear in the value of k and in general is not a good choice. Instead he recommends using either rejection-based or table-based methods, depending on context.

For example, Marsaglia's simple transformation-rejection method relying on one normal variate X and one uniform variate U:

  1. Set d = k − 1/3 and c = 1/√(9d).
  2. Set v = (1 + cX)³.
  3. If v > 0 and ln(U) < X²/2 + d − dv + d ln(v), return dv; else go back to step 2.

With k ≥ 1, this generates a gamma-distributed random number in time that is approximately constant with k. The acceptance rate does depend on k, with an acceptance rate of 0.95, 0.98, and 0.99 for k = 1, 2, and 4. For k < 1, one can use γ_k = γ_{1+k} U^(1/k) (where γ_k denotes a Gamma(k, 1) variate and U is uniform on (0, 1]) to boost k to be usable with this method.
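
A minimal Python sketch of this transformation-rejection method, including the boost for k < 1 (the function name is my own; the steps follow the outline above):

    import numpy as np

    def marsaglia_tsang(k, theta=1.0, rng=None):
        # Draw one Gamma(k, theta) variate via Marsaglia & Tsang's transformation-rejection method.
        rng = rng if rng is not None else np.random.default_rng()
        if k < 1:
            # Boost: a Gamma(k) variate equals a Gamma(k + 1) variate times U^(1/k).
            return marsaglia_tsang(k + 1, theta, rng) * rng.uniform() ** (1 / k)
        d = k - 1 / 3
        c = 1 / np.sqrt(9 * d)
        while True:
            x = rng.standard_normal()
            v = (1 + c * x) ** 3
            u = rng.uniform()
            if v > 0 and np.log(u) < 0.5 * x**2 + d - d * v + d * np.log(v):
                return theta * d * v

    rng = np.random.default_rng(7)
    draws = np.array([marsaglia_tsang(2.5, 2.0, rng) for _ in range(100_000)])
    print(draws.mean(), 2.5 * 2.0)   # sample mean should be close to k * theta = 5.0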
