Conjugate prior

In Bayesian probability theory, a class of prior probability distributions p(θ) is said to be conjugate to a class of likelihood functions p(x|θ) if the resulting posterior distributions p(θ|x) are in the same family as p(θ). For example, the Gaussian family is conjugate to itself (or self-conjugate): if the likelihood function is Gaussian, choosing a Gaussian prior ensures that the posterior distribution is also Gaussian. The concept, as well as the term "conjugate prior", was introduced by Howard Raiffa and Robert Schlaifer in their work on Bayesian decision theory. A similar concept had been discovered independently by George Alfred Barnard.

Consider the general problem of inferring a distribution for a parameter θ given some datum or data x. From Bayes' theorem, the posterior distribution is calculated from the prior p(θ) and the likelihood function $$\theta \mapsto p(x\mid\theta)\!$$ as


 * $$p(\theta|x) = \frac{p(x|\theta) \, p(\theta)}{\int p(x|\theta) \, p(\theta) \, d\theta}.$$

Let the likelihood function be considered fixed; the likelihood function is usually well-determined from a statement of the data-generating process. It is clear that different choices of the prior distribution p(θ) may make the integral more or less difficult to calculate, and the product p(x|θ) × p(θ) may take one algebraic form or another. For certain choices of the prior, the posterior has the same algebraic form as the prior (generally with different parameters). Such a choice is a conjugate prior.

A conjugate prior is an algebraic convenience: otherwise a difficult numerical integration may be necessary.
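To illustrate the alternative, here is a minimal sketch (a hypothetical example, not from the article) of what that numerical integration looks like when the prior is not conjugate: the posterior must be normalised by approximating the evidence integral on a grid. The Bernoulli likelihood, the cosine-shaped prior, and all function names are our own choices for illustration.

```python
import math

# Bernoulli likelihood for 7 successes and 3 failures (hypothetical data)
def likelihood(theta, s=7, f=3):
    return theta**s * (1 - theta)**f

# an arbitrary non-conjugate prior on [0, 1], proportional to cos^2
def prior(theta):
    return math.cos(math.pi * (theta - 0.5))**2

# midpoint-rule approximation of the evidence ∫ p(x|θ) p(θ) dθ
n = 10_000
grid = [(i + 0.5) / n for i in range(n)]
evidence = sum(likelihood(t) * prior(t) for t in grid) / n

def posterior(theta):
    return likelihood(theta) * prior(theta) / evidence

# sanity check: the normalised posterior integrates to 1
total = sum(posterior(t) for t in grid) / n
print(round(total, 6))  # 1.0
```

With a conjugate prior, no such grid is needed: the normalising constant is known in closed form, as the beta-distribution example below shows.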

All members of the exponential family have conjugate priors. See Gelman et al. for a catalog.

Example
For a random variable which is a Bernoulli trial with unknown probability of success q in [0,1], the usual conjugate prior is the beta distribution with
 * $$p(q=x) = {x^{\alpha-1}(1-x)^{\beta-1} \over \Beta(\alpha,\beta)}$$

where $$\alpha$$ and $$\beta$$ are chosen to reflect any existing belief or information ($$\alpha$$ = 1 and $$\beta$$ = 1 would give a uniform distribution) and Β($$\alpha$$, $$\beta$$) is the Beta function acting as a normalising constant.
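The beta density above can be sketched in a few lines, using the identity Β(α, β) = Γ(α)Γ(β)/Γ(α+β) for the normalising constant (the helper name `beta_pdf` is ours; the parameters α and β are the article's):

```python
import math

def beta_pdf(x, alpha, beta):
    # Beta function via the gamma-function identity B(a, b) = Γ(a)Γ(b)/Γ(a+b)
    B = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
    return x**(alpha - 1) * (1 - x)**(beta - 1) / B

# alpha = beta = 1 recovers the uniform density on [0, 1]
print(beta_pdf(0.25, 1, 1))           # 1.0
print(round(beta_pdf(0.5, 2, 2), 4))  # 1.5
```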

If we then sample this random variable and get s successes and f failures, we have


 * $$P(s,f|q=x) = {s+f \choose s} x^s(1-x)^f, $$
 * $$p(q=x|s,f) = {{{s+f \choose s} x^{s+\alpha-1}(1-x)^{f+\beta-1} / \Beta(\alpha,\beta)} \over \int_{y=0}^1 \left({s+f \choose s} y^{s+\alpha-1}(1-y)^{f+\beta-1} / \Beta(\alpha,\beta)\right) dy} = {x^{s+\alpha-1}(1-x)^{f+\beta-1} \over \Beta(s+\alpha,f+\beta)}, $$

which is another beta distribution, obtained from the prior by the simple parameter update α → α + s, β → β + f. This posterior distribution could then be used as the prior for further samples, with each new observation simply added to the hyperparameters as it arrives.
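The update above can be checked numerically: the closed-form posterior Beta(α + s, β + f) agrees with the posterior obtained by explicitly normalising likelihood × prior on a grid. The prior parameters and data below are hypothetical, chosen only for illustration.

```python
import math

def beta_pdf(x, a, b):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x**(a - 1) * (1 - x)**(b - 1) / B

alpha, beta = 2.0, 3.0   # prior hyperparameters (hypothetical)
s, f = 5, 2              # observed successes and failures (hypothetical)

# closed-form conjugate posterior evaluated at a test point
x = 0.6
closed = beta_pdf(x, alpha + s, beta + f)

# the same posterior by numerically normalising likelihood × prior
def unnorm(t):
    return t**s * (1 - t)**f * beta_pdf(t, alpha, beta)

n = 100_000
grid = [(i + 0.5) / n for i in range(n)]
evidence = sum(unnorm(t) for t in grid) / n
numeric = unnorm(x) / evidence

print(abs(closed - numeric) < 1e-6)  # True
```

The conjugate form replaces the evidence integral with a two-parameter addition, which is exactly the algebraic convenience described earlier.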

Table of conjugate distributions
Let n denote the number of observations