https://en.wikipedia.org/wiki/Likelihood_principle

In statistics, the likelihood principle is the proposition that, given a statistical model, all the evidence in a sample relevant to model parameters is contained in the likelihood function.

A likelihood function arises from a probability density function considered as a function of its distributional parameterization argument. For example, consider a model which gives the probability density function ƒX(x | θ) of observable random variable X as a function of a parameter θ. Then for a specific value x of X, the function ℒ(θ | x) = ƒX(x | θ) is a likelihood function of θ: it gives a measure of how "likely" any particular value of θ is, if we know that X has the value x. The density function may be a density with respect to counting measure, i.e. a probability mass function.
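As a concrete illustration of this reading, the following Python sketch evaluates a binomial mass function both ways: as a distribution over x for a fixed θ, and as a likelihood function of θ for a fixed observed x. The model and the numbers (n = 10 trials, x = 3 successes) are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: the binomial mass function f_X(x | theta) read two ways.
from math import comb

def binomial_pmf(x: int, n: int, theta: float) -> float:
    """Probability mass function f_X(x | theta) for X ~ Binomial(n, theta)."""
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

def likelihood(theta: float, x: int, n: int) -> float:
    """Likelihood L(theta | x): the same formula, viewed as a function of theta."""
    return binomial_pmf(x, n, theta)

if __name__ == "__main__":
    n, x_observed = 10, 3  # hypothetical data: 3 successes in 10 trials
    for theta in (0.1, 0.3, 0.5, 0.9):
        print(f"L(theta={theta} | x={x_observed}) = {likelihood(theta, x_observed, n):.4f}")
```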

Two likelihood functions are equivalent if one is a scalar multiple of the other. The likelihood principle is this: all information from the data that is relevant to inferences about the value of the model parameters is in the equivalence class to which the likelihood function belongs. The strong likelihood principle applies this same criterion to cases such as sequential experiments, where the available sample results from applying a stopping rule to the observations made earlier in the experiment.
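A hedged sketch of this equivalence, using the classic comparison of a binomial design (a fixed number of trials) with a negative binomial design (sampling until a fixed number of successes): the data assumed here, 3 successes in 12 trials, are illustrative. The two likelihood functions differ only by a constant factor, so they lie in the same equivalence class and lead to the same relative support for values of θ.

```python
# Two likelihood functions that are scalar multiples of one another:
# their ratios between parameter values, and the maximizing theta, agree.
from math import comb

def binomial_lik(theta: float, x: int = 3, n: int = 12) -> float:
    """Likelihood of theta for x successes in a fixed number n of trials."""
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

def neg_binomial_lik(theta: float, x: int = 3, n: int = 12) -> float:
    """Likelihood of theta when sampling continued until x successes,
    which happened to take n trials (a simple stopping rule)."""
    return comb(n - 1, x - 1) * theta**x * (1 - theta) ** (n - x)

if __name__ == "__main__":
    grid = [i / 100 for i in range(1, 100)]
    # The two functions differ only by the constant comb(12, 3) / comb(11, 2) = 4,
    # so they belong to the same equivalence class ...
    print("ratios:", {t: binomial_lik(t) / neg_binomial_lik(t) for t in (0.1, 0.25, 0.5)})
    # ... and they are maximized at the same value of theta (3/12 = 0.25).
    print("binomial MLE:     ", max(grid, key=binomial_lik))
    print("neg-binomial MLE: ", max(grid, key=neg_binomial_lik))
```

Because the stopping rule changes only the constant factor, the strong likelihood principle says it should not change inferences about θ; this is the point developed in the example below.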

Example