Convergence of random variables

In probability theory, there exist several different notions of convergence of random variables. The convergence (in one of the senses presented below) of sequences of random variables to some limiting random variable is an important concept in probability theory and in its applications to statistics and stochastic processes. For example, if the average of n uncorrelated random variables Yi, i = 1, ..., n, all having the same finite mean and uniformly bounded variances, is given by


 * $$X_n = \frac{1}{n}\sum_{i=1}^n Y_i\,,$$

then as n goes to infinity, Xn converges in probability (see below) to the common mean, &mu;, of the random variables Yi. This result is known as the weak law of large numbers. Other forms of convergence are important in other useful theorems, including the central limit theorem.
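As a numerical sketch of the weak law (an illustration only, not part of the statement; the exponential distribution and all constants below are arbitrary choices), one can estimate Pr(|Xn &minus; &mu;| &ge; &epsilon;) by repeated simulation and watch it shrink as n grows:

```python
import random

random.seed(0)
mu, eps, reps = 5.0, 0.5, 500   # mean, tolerance, repetitions (all arbitrary)

for n in (10, 100, 5000):
    hits = 0
    for _ in range(reps):
        # X_n: average of n i.i.d. Exponential draws with mean mu
        x_n = sum(random.expovariate(1 / mu) for _ in range(n)) / n
        if abs(x_n - mu) >= eps:
            hits += 1
    # estimated Pr(|X_n - mu| >= eps); shrinks toward 0 as n grows
    print(n, hits / reps)
```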

Throughout the following, we assume that (Xn) is a sequence of random variables, and X is a random variable, and all of them are defined on the same probability space (&Omega;, F, P).

Convergence in distribution
Suppose that F1, F2, ... is a sequence of cumulative distribution functions corresponding to random variables X1, X2, ..., and that F is the cumulative distribution function of a random variable X. We say that the sequence Xn converges towards X in distribution if
 * $$\lim_{n\rightarrow\infty} F_n(a) = F(a),$$

for every real number a at which F is continuous. Since F(a) = Pr(X &le; a), this means that the probability that the value of X is in a given range is very similar to the probability that the value of Xn is in that range, provided n is sufficiently large. Convergence in distribution is often denoted by adding the letter $$\mathcal D$$ over an arrow indicating convergence:
 * $$X_n \, \xrightarrow{\mathcal D} \, X$$

A lowercase $$d$$ is also used, although it is less common.

Convergence in distribution is the weakest form of convergence, and is sometimes called weak convergence (main article: weak convergence of measures). It does not, in general, imply any other mode of convergence. However, convergence in distribution is implied by all other modes of convergence mentioned in this article, and hence, it is the most common and often the most useful form of convergence of random variables. It is the notion of convergence used in the central limit theorem and the (weak) law of large numbers.

A useful result, which may be employed in conjunction with the law of large numbers and the central limit theorem, is that if a function g: R &rarr; R is continuous and Xn converges in distribution to X, then g(Xn) converges in distribution to g(X). (This may be proved using Skorokhod's representation theorem.) This fact could be taken as a definition for the convergence in distribution.

Convergence in distribution is also called convergence in law, since the word "law" is sometimes used as a synonym of "probability distribution."
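As an illustrative sketch (assumptions: the Uniform(0, 1) summands, the sample size n = 200, and the Monte Carlo setup below are all arbitrary choices), one can compare an empirical estimate of Fn(a) for the standardized mean in the central limit theorem against the limiting normal CDF F(a):

```python
import math
import random

random.seed(1)

def empirical_fn(n, a, reps=4000):
    """Monte Carlo estimate of F_n(a) = Pr(X_n <= a), where X_n is the
    standardized mean of n i.i.d. Uniform(0, 1) draws (the CLT setting)."""
    mu, sigma = 0.5, math.sqrt(1 / 12)      # mean and sd of Uniform(0, 1)
    count = 0
    for _ in range(reps):
        m = sum(random.random() for _ in range(n)) / n
        z = (m - mu) / (sigma / math.sqrt(n))
        count += z <= a
    return count / reps

def phi(a):
    """Standard normal CDF: the limiting F in the central limit theorem."""
    return 0.5 * (1 + math.erf(a / math.sqrt(2)))

for a in (-1.0, 0.0, 1.0):
    # F_n(a) is close to F(a) at every continuity point once n is large
    print(a, empirical_fn(200, a), phi(a))
```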

Convergence in probability
To say that the sequence Xn converges towards X in probability means


 * $$\lim_{n\rightarrow\infty}\Pr\left(\left|X_n-X\right|\geq\varepsilon\right)=0$$

for every &epsilon; > 0. Formally, pick any &epsilon; > 0 and any &delta; > 0, and let Pn = Pr(|Xn - X| &ge; &epsilon;) be the probability that Xn lies outside the tolerance &epsilon; of X. If Xn converges in probability to X, then there exists a value N such that Pn < &delta; for all n &ge; N.
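A hypothetical concrete instance makes this formulation tangible: if Xn is uniform on [0, 1/n] and X = 0, then Pn = Pr(|Xn &minus; X| &ge; &epsilon;) = max(0, 1 &minus; n&epsilon;) in closed form, and the N promised by the definition can be computed directly (this example and its constants are illustrative choices, not part of the definition):

```python
# Hypothetical example: X_n ~ Uniform(0, 1/n) and X = 0, for which
# P_n = Pr(|X_n - X| >= eps) = max(0, 1 - n * eps) in closed form.
def p_n(n, eps):
    return max(0.0, 1.0 - n * eps)

eps, delta = 0.01, 0.001
# Smallest N such that P_n < delta for all n >= N (P_n is nonincreasing):
N = next(n for n in range(1, 10 ** 6) if p_n(n, eps) < delta)
assert all(p_n(n, eps) < delta for n in range(N, N + 1000))
print(N)   # -> 100, since 1 - n * 0.01 < 0.001 first holds at n = 100
```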

Convergence in probability is often denoted by adding the letter 'P' over an arrow indicating convergence:
 * $$X_n \, \xrightarrow{P} \, X$$

Convergence in probability is the notion of convergence used in the weak law of large numbers. Convergence in probability implies convergence in distribution. To prove this, it is convenient to first establish the following simple lemma:

Lemma
Let X, Y be random variables, c a real number and &epsilon; > 0; then


 * $$\Pr(Y\leq c)\leq \Pr(X\leq c+\varepsilon)+\Pr(\left|Y - X\right|>\varepsilon).$$

Proof of lemma

 * $$\Pr(Y\leq c)=\Pr(Y\leq c,X\leq c+\varepsilon)+\Pr(Y\leq c,X>c+\varepsilon)$$


 * $$\leq \Pr(X\leq c+\varepsilon)+\Pr(Y - X<-\varepsilon)\leq \Pr(X\leq c+\varepsilon)+\Pr(\left|Y - X\right|>\varepsilon),$$

where the first inequality holds because the event $$\{Y\leq c,\, X>c+\varepsilon\}$$ implies $$Y-X<-\varepsilon$$, and the second holds since


 * $$\Pr(\left|Y - X\right|>\varepsilon)=\Pr(Y - X>\varepsilon)+\Pr(Y - X<-\varepsilon)\geq \Pr(Y - X<-\varepsilon).$$
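Although the lemma is proved above, a Monte Carlo sanity check can make the inequality concrete (the Gaussian and uniform distributions below are arbitrary choices; the empirical frequencies satisfy the inequality exactly, since the corresponding indicator inequality holds pointwise on every sample):

```python
import random

random.seed(4)
c, eps, reps = 0.3, 0.25, 20_000   # arbitrary constants

y_le_c = x_le = far = 0
for _ in range(reps):
    x = random.gauss(0.0, 1.0)
    y = x + random.uniform(-0.5, 0.5)   # Y correlated with X (arbitrary)
    y_le_c += y <= c
    x_le += x <= c + eps
    far += abs(y - x) > eps

# Pr(Y <= c) <= Pr(X <= c + eps) + Pr(|Y - X| > eps), empirically
print(y_le_c / reps, x_le / reps + far / reps)
```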

Proof
For every $$\varepsilon > 0$$, due to the preceding lemma, we have:


 * $$\Pr(X_n\leq a)\leq \Pr(X\leq a+\varepsilon)+ \Pr(\left|X_n - X\right|>\varepsilon)$$


 * $$\Pr(X\leq a-\varepsilon)\leq \Pr(X_n \leq a)+\Pr(\left|X_n - X\right|>\varepsilon)$$

So, we have


 * $$\Pr(X\leq a-\varepsilon)-\Pr(\left|X_n - X\right|>\varepsilon)\leq \Pr(X_n \leq a)\leq \Pr(X\leq a+\varepsilon)+\Pr(\left|X_n - X\right|>\varepsilon).$$

Taking the limit for $$n\to\infty$$, we obtain:


 * $$\Pr(X\leq a-\varepsilon)\leq \lim_{n\rightarrow\infty} \Pr(X_n \leq a)\leq \Pr(X\leq a+\varepsilon).$$

But $$\Pr(X\leq a)$$ is the cumulative distribution function $$F_X(a)$$, which is continuous at $$a$$ by hypothesis; that is,


 * $$\lim_{\varepsilon \to {0+}} F_X(a-\varepsilon)=\lim_{\varepsilon \to {0+}} F_X(a+\varepsilon)=F_X(a),$$

and so, taking the limit for $$\varepsilon \to {0+}$$, we obtain


 * $$\lim_{n\to\infty} \Pr(X_n \leq a)=\Pr(X \leq a).$$

Almost sure convergence
To say that the sequence Xn converges almost surely or almost everywhere or with probability 1 or strongly towards X means
 * $$\Pr\left(\lim_{n\rightarrow\infty}X_n=X\right)=1.$$

This means that the values of Xn approach the value of X, in the sense (see almost surely) that events for which Xn does not converge to X have probability 0. Using the probability space (&Omega;, F, P) and the concept of the random variable as a function from &Omega; to R, this is equivalent to the statement


 * $$\Pr\left(\big\{\omega \in \Omega \, | \, \lim_{n \to \infty}X_n(\omega) = X(\omega) \big\}\right) = 1.$$

Almost sure convergence is often denoted by adding the letters a.s. over an arrow indicating convergence:
 * $$X_n \, \xrightarrow{\mathrm{a.s.}} \, X$$

Almost sure convergence implies convergence in probability, and hence implies convergence in distribution. It is the notion of convergence used in the strong law of large numbers.
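Almost sure convergence concerns individual sample paths, which a simulation can make vivid (a sketch; the Uniform(0, 1) summands, the seed, and the path length are arbitrary choices): fix one realization &omega; and follow its running means.

```python
import random

# Almost sure convergence is about individual sample paths: fix one
# realization omega and follow its running means X_n(omega).
random.seed(2)
mu = 0.5                     # mean of Uniform(0, 1); constants are arbitrary
total = 0.0
path = []
for n in range(1, 50_001):
    total += random.random()
    path.append(total / n)   # X_n(omega) for this single path

# Late portion of the path: it has settled near mu, as the strong law
# of large numbers predicts for almost every omega.
print(max(abs(x - mu) for x in path[-1000:]))
```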

Sure convergence
To say that the sequence of random variables (Xn) defined over the same probability space (i.e., a random process) converges surely or everywhere or pointwise towards X means
 * $$\lim_{n\rightarrow\infty}X_n(\omega)=X(\omega), \, \, \forall \omega \in \Omega.$$

where $$\Omega$$ is the sample space of the underlying probability space over which the random variables are defined.

This is the notion of pointwise convergence of a sequence of functions extended to a sequence of random variables (note that random variables are themselves functions). Equivalently, sure convergence says that the set of sample points on which Xn converges to X is all of &Omega;:


 * $$\big\{\omega \in \Omega \, | \, \lim_{n \to \infty}X_n(\omega) = X(\omega) \big\} = \Omega.$$

Sure convergence of a random variable implies all the other kinds of convergence stated above, but there is no payoff in probability theory from using sure convergence rather than almost sure convergence: the two differ only on sets of probability zero. This is why sure convergence of random variables is very rarely used.

Convergence in mean
We say that the sequence Xn converges in the r-th mean or in the Lr norm towards X, if r &ge; 1, E|Xn|r < &infin; for all n, and


 * $$\lim_{n\rightarrow\infty}\mathrm{E}\left(\left|X_n-X\right|^r\right)=0$$

where the operator E denotes the expected value. Convergence in r-th mean tells us that the expectation of the r-th power of the difference between Xn and X converges to zero.

This type of convergence is often denoted by adding the letter Lr over an arrow indicating convergence:
 * $$X_n \, \xrightarrow{L^r} \, X.$$

The most important cases of convergence in r-th mean are:
 * When Xn converges in r-th mean to X for r = 1, we say that Xn converges in mean to X.
 * When Xn converges in r-th mean to X for r = 2, we say that Xn converges in mean square to X.

Convergence in the r-th mean, for r > 0, implies convergence in probability (by Markov's inequality), while if r > s &ge; 1, convergence in r-th mean implies convergence in s-th mean. Hence, convergence in mean square implies convergence in mean.
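As a small worked check (using the hypothetical sequence Xn ~ Uniform(0, 1/n) with X = 0, for which E|Xn &minus; X|^r = 1/((r+1)n^r) in closed form), both modes of mean convergence and the mean-square-implies-mean relation can be verified directly:

```python
# Hypothetical closed-form example: X_n ~ Uniform(0, 1/n) and X = 0, so
# E|X_n - X|^r = 1 / ((r + 1) * n**r), which tends to 0 for every r > 0.
def rth_moment(n, r):
    return 1.0 / ((r + 1) * n ** r)

for n in (1, 10, 100):
    l1 = rth_moment(n, 1)        # r = 1: convergence in mean
    l2 = rth_moment(n, 2)        # r = 2: convergence in mean square
    # Jensen's inequality gives E|X_n| <= (E|X_n|^2)**0.5, matching the
    # implication that mean-square convergence forces convergence in mean.
    assert l1 <= l2 ** 0.5
    print(n, l1, l2)
```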

Implications
The chain of implications between the various notions of convergence is noted in their respective sections. Using the arrow notation, they are:
 * $$ \xrightarrow{\textrm{a.s.}} \quad \Rightarrow \quad \xrightarrow{P} \quad \Rightarrow \quad \xrightarrow{\mathcal D} $$
 * $$ \forall r>0:\quad\xrightarrow{L^r} \quad \Rightarrow \quad \xrightarrow{P} $$
 * $$\forall r>s\geq1:\quad\xrightarrow{L^r} \quad \Rightarrow \quad \xrightarrow{L^s}$$

No implications other than these hold in general, but a number of special cases do permit the converse implications:


 * If Xn converges in distribution to a constant c, then Xn converges in probability to c.


 * If Xn converges in probability to X, and if Pr(|Xn| &le; b) = 1 for all n and some b, then Xn converges in r-th mean to X for all r &ge; 1. In other words, if Xn converges in probability to X and all the Xn are uniformly almost surely bounded, then Xn converges to X in every r-th mean.


 * If for all &epsilon; > 0,


 * $$\sum_n P\left(|X_n - X| > \varepsilon\right) < \infty,$$


 * then we say that Xn converges almost completely, or fast in probability, towards X. When Xn converges almost completely towards X, it also converges almost surely to X. In other words, if Xn converges in probability to X sufficiently quickly (i.e. the above sequence of tail probabilities is summable for all &epsilon; > 0), then Xn also converges almost surely to X. This is a direct consequence of the Borel&ndash;Cantelli lemma.


 * If Sn is the sum of n independent real-valued random variables:


 * $$S_n = X_1+\cdots+X_n$$


 * then Sn converges almost surely if and only if Sn converges in probability.


 * Lévy's convergence theorem gives sufficient conditions for almost sure convergence to imply L1-convergence:

 * $$\left. \begin{array}{c} X_n\xrightarrow{a.s.} X \\ |X_n| < Y \\ \mathrm{E}(Y) < \infty \end{array}\right\} \quad\Rightarrow \quad X_n\xrightarrow{L^1} X$$
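The almost-complete-convergence criterion above can be illustrated with a hypothetical sequence (an arbitrary construction, chosen only because its tail probabilities are summable): let Pr(Xn = 1) = 1/n&sup2; and Xn = 0 otherwise, with X = 0.

```python
import random

# Hypothetical example of almost complete convergence: Pr(X_n = 1) = 1/n**2
# and X_n = 0 otherwise, with X = 0.  For any 0 < eps < 1 the tail
# probabilities Pr(|X_n - X| > eps) = 1/n**2 are summable, so by the
# Borel-Cantelli lemma only finitely many X_n differ from X on almost
# every sample path, i.e. X_n -> X almost surely.
random.seed(3)
tail_sum = sum(1 / n ** 2 for n in range(1, 100_000))
print(tail_sum)   # bounded (the series tends to pi**2 / 6), hence summable

# One simulated path: count how many X_n are nonzero.  The count is
# almost surely finite, and typically small on any given path.
nonzero = sum(1 for n in range(1, 100_000) if random.random() < 1 / n ** 2)
print(nonzero)
```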