Conditional probability

This article defines some terms that characterize probabilities involving two or more events.

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A|B), and is read "the probability of A, given B".

Joint probability is the probability of two events in conjunction, that is, the probability of both events occurring together. The joint probability of A and B is written $$\scriptstyle P(A \cap B)$$ or $$\scriptstyle P(A, B).$$

Marginal probability or prior probability is the probability of one event, regardless of the other event. Marginal probability is obtained by summing (or integrating, more generally) the joint probability over the other event; this is called marginalization. The marginal probability of A is written P(A), and the marginal probability of B is written P(B).
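
To make marginalization concrete, here is a minimal sketch in Python; the joint distribution over two binary events is invented for illustration and is not taken from this article.

```python
from fractions import Fraction

# Marginalization over a small discrete joint distribution; the four
# joint probabilities below are invented for illustration.
joint = {
    (True, True): Fraction(1, 10),    # P(A and B)
    (True, False): Fraction(2, 10),   # P(A and not B)
    (False, True): Fraction(3, 10),   # P(not A and B)
    (False, False): Fraction(4, 10),  # P(not A and not B)
}

# Marginalize: sum the joint probability over the other event.
p_a = sum(p for (a, _), p in joint.items() if a)  # P(A) = 1/10 + 2/10
p_b = sum(p for (_, b), p in joint.items() if b)  # P(B) = 1/10 + 3/10

print(p_a, p_b)  # 3/10 2/5
```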

In these definitions, note that there need not be a causal or temporal relation between A and B. A may precede B or vice versa or they may happen at the same time. A may cause B or vice versa or they may have no causal relation at all. Notice, however, that causal and temporal relations are informal notions, not belonging to the probabilistic framework. They may apply in some examples, depending on the interpretation given to events.

Conditioning of probabilities, i.e. updating them to take account of (possibly new) information, may be achieved through Bayes' theorem.

Definition
Given a probability space $$\scriptstyle (\Omega, F, P)$$ and two events $$\scriptstyle A, B\,\in\, F$$ with P(B) > 0, the conditional probability of A given B is defined by


 * $$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.\,$$

If P(B) = 0, then P(A | B) is not defined by this formula; nevertheless, in some cases one does speak of conditioning on events of probability zero, which necessarily requires a different definition.
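
As a minimal computational sketch of this definition (the numbers and the helper function are purely illustrative), the guard below mirrors the caveat that the formula leaves P(A|B) undefined when P(B) = 0:

```python
def conditional(p_a_and_b: float, p_b: float) -> float:
    """Return P(A|B) = P(A and B) / P(B); an illustrative helper."""
    if p_b == 0:
        # The defining formula does not cover this case.
        raise ValueError("P(A|B) is not defined by this formula when P(B) = 0")
    return p_a_and_b / p_b

print(conditional(0.10, 0.40))  # 0.25
```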


Statistical independence
Two random events A and B are statistically independent if and only if


 * $$P(A \cap B) \ = \ P(A) P(B)$$

Thus, if A and B are independent, then their joint probability can be expressed as a simple product of their individual probabilities.

Equivalently, for two independent events A and B with P(A) > 0 and P(B) > 0,


 * $$P(A|B) \ = \ P(A)$$

and


 * $$P(B|A) \ = \ P(B).$$

In other words, if A and B are independent, then the conditional probability of A given B is simply the individual probability of A alone; likewise, the probability of B given A is simply the probability of B alone.
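
The sketch below checks all three characterizations on a fair-die example; the particular events (an even roll, and a roll of at most 4) are a standard textbook illustration chosen here, not taken from this article.

```python
from fractions import Fraction

# Two events on a fair six-sided die that happen to be independent:
# A = "the roll is even", B = "the roll is at most 4".
omega = range(1, 7)
A = {x for x in omega if x % 2 == 0}   # {2, 4, 6}
B = {x for x in omega if x <= 4}       # {1, 2, 3, 4}

def prob(event):
    return Fraction(len(event), 6)     # uniform measure on the die

assert prob(A & B) == prob(A) * prob(B)   # P(A and B) = P(A) P(B)
assert prob(A & B) / prob(B) == prob(A)   # P(A|B) = P(A)
assert prob(A & B) / prob(A) == prob(B)   # P(B|A) = P(B)
print(prob(A), prob(B), prob(A & B))      # 1/2 2/3 1/3
```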

Mutual exclusivity
Two events A and B are mutually exclusive if and only if $$\scriptstyle A \cap B \,=\, \varnothing$$. Then $$\scriptstyle P(A \cap B)\, =\, 0$$.

Therefore, if P(B) > 0 then $$\scriptstyle P(A\mid B)$$ is defined and equal to 0.
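
A short sketch of this fact, using two disjoint events on a fair die (an illustrative choice, not from this article):

```python
from fractions import Fraction

# Mutually exclusive events on a fair six-sided die:
# A = "roll a 1" and B = "roll a 6" share no outcomes.
A, B = {1}, {6}

def prob(event):
    return Fraction(len(event), 6)

assert A & B == set()            # A and B is empty
assert prob(A & B) == 0          # hence P(A and B) = 0
print(prob(A & B) / prob(B))     # P(A|B) = 0, since P(B) > 0
```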

Other considerations

 * If B is an event and P(B) > 0, then the function Q defined by Q(A) = P(A|B) for all events A is a probability measure (see the sketch after this list).


 * Many models in data mining can calculate conditional probabilities, including decision trees and Bayesian networks.
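
The following sketch numerically checks the first point above on a fair six-sided die; the die and the choice of conditioning event B are illustrative assumptions, not from this article.

```python
from fractions import Fraction

# Check that Q(A) = P(A|B) behaves like a probability measure.
omega = frozenset(range(1, 7))
B = frozenset({1, 2, 3})                   # conditioning event, P(B) = 1/2 > 0

def P(event):
    return Fraction(len(event), 6)         # uniform measure on the die

def Q(event):
    return P(frozenset(event) & B) / P(B)  # Q(A) = P(A and B) / P(B)

assert Q(omega) == 1                       # Q assigns total mass 1
assert Q({1, 2}) == Q({1}) + Q({2})        # additive on disjoint events
print(Q({1}), Q({4}))                      # 1/3 0
```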

The conditional probability fallacy
The conditional probability fallacy is the assumption that P(A|B) is approximately equal to P(B|A). The mathematician John Allen Paulos discusses this in his book Innumeracy (p. 63 et seq.), where he points out that it is a mistake often made even by doctors, lawyers, and other highly educated non-statisticians. It can be overcome by describing the data in actual numbers rather than probabilities.

The relation between P(A|B) and P(B|A) is given by Bayes' theorem:


 * $$P(B \mid A)= P(A \mid B) \cdot \frac{P(B)}{P(A)}.$$
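
A short computational sketch of this relation; the numbers are chosen to anticipate the screening example in the next section and are otherwise arbitrary.

```python
# Bayes' theorem: P(B|A) = P(A|B) * P(B) / P(A), assuming P(A) > 0.
def bayes(p_a_given_b: float, p_b: float, p_a: float) -> float:
    return p_a_given_b * p_b / p_a

# With P(A|B) = 0.99, P(B) = 0.01, P(A) = 0.0198 (the screening numbers below):
print(bayes(0.99, 0.01, 0.0198))  # 0.5
```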

An example
In the following constructed but realistic situation, the difference between P(A|B) and P(B|A) may be surprising, yet it becomes obvious once the numbers are worked out.

In order to identify individuals having a serious disease in an early curable form, one may consider screening a large group of people. While the benefits are obvious, an argument against such screenings is the disturbance caused by false positive screening results: If a person not having the disease is incorrectly found to have it by the initial test, they will most likely be quite distressed until a more careful test shows that they do not have the disease. Even after being told they are well, their lives may be affected negatively.

The magnitude of this problem is best understood in terms of conditional probabilities.

Suppose 1% of the group suffer from the disease, and the rest are well. Choosing an individual at random,


 * $$P(\text{disease})=1\%=0.01$$ and $$P(\text{well})=99\%=0.99.$$

Suppose that when the screening test is applied to a person not having the disease, there is a 1% chance of getting a false positive result, i.e.


 * $$P(\text{positive}|\text{well})=1\%$$ and $$P(\text{negative}|\text{well})=99\%.$$

Finally, suppose that when the test is applied to a person having the disease, there is a 1% chance of a false negative result, i.e.


 * $$P(\text{negative}|\text{disease})=1\%$$ and $$P(\text{positive}|\text{disease})=99\%.$$

Now, calculation shows that:


 * $$P(\text{well}\cap\text{negative})=P(\text{well})\times P(\text{negative}|\text{well})=99\%\times99\%=98.01\%$$

is the fraction of the whole group being well and testing negative.


 * $$P(\text{disease}\cap\text{positive})=P(\text{disease})\times P(\text{positive}|\text{disease})=1\%\times99\%=0.99\%$$

is the fraction of the whole group being ill and testing positive.


 * $$P(\text{well}\cap\text{positive})=P(\text{well})\times P(\text{positive}|\text{well})=99\%\times1\%=0.99\%$$

is the fraction of the whole group having false positive results.


 * $$P(\text{disease}\cap\text{negative})=P(\text{disease})\times P(\text{negative}|\text{disease})=1\%\times1\%=0.01\%$$

is the fraction of the whole group having false negative results. Furthermore,


 * $$P(\text{positive})=P(\text{well}\cap\text{positive}) + P(\text{disease}\cap\text{positive}) = 0.99\%+0.99\%=1.98\%$$

is the fraction of the whole group testing positive.


 * $$P(\text{disease}|\text{positive})=\frac{P(\text{disease}\cap\text{positive})}{P(\text{positive})} = \frac{0.99\%}{1.98\%}= 50\%$$

is the probability that you actually have the disease given that you tested positive. In this example, it should be easy to see the difference between P(positive|disease) = 99% and P(disease|positive) = 50%: the first is the conditional probability that you test positive if you have the disease; the second is the conditional probability that you have the disease if you test positive. With the numbers chosen here, the last result is likely to be deemed unacceptable: half the people testing positive are actually false positives.
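
The whole calculation can be reproduced in a few lines of Python; the sketch below uses exactly the probabilities assumed in the example.

```python
# The screening example: 1% prevalence, 1% false positive rate,
# 1% false negative rate.
p_disease = 0.01
p_well = 0.99
p_pos_given_well = 0.01       # false positive rate
p_neg_given_disease = 0.01    # false negative rate

p_disease_and_pos = p_disease * (1 - p_neg_given_disease)  # 0.0099
p_well_and_pos = p_well * p_pos_given_well                 # 0.0099

p_pos = p_disease_and_pos + p_well_and_pos                 # 0.0198
print(p_disease_and_pos / p_pos)  # 0.5: half the positives are false
```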

Conditioning on a random variable
There is also a concept of the conditional probability of an event given a random variable. Such a conditional probability is a random variable in its own right.

Suppose X is a random variable that can be equal either to 0 or to 1. As above, one may speak of the conditional probability of any event A given the event X = 0, and also of the conditional probability of A given the event X = 1. The former is denoted P(A|X = 0) and the latter P(A|X = 1). Now define a new random variable Y, whose value is P(A|X = 0) if X = 0 and P(A|X = 1) if X = 1. That is



 * $$Y = \begin{cases} P(A\mid X=0) & \text{if }X=0; \\ P(A\mid X=1) & \text{if }X=1. \end{cases}$$

This new random variable is the conditional probability of the event A given the random variable X:


 * $$ Y = P(A\mid X) \,$$

According to the "law of total probability", the expected value of Y is just the marginal (or "unconditional") probability of A.
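
A small sketch verifying E[Y] = P(A) for a binary X; the distribution of X and the conditional probabilities are invented for illustration.

```python
from fractions import Fraction

# Made-up distribution of X and conditional probabilities of A given X.
p_x0, p_x1 = Fraction(2, 5), Fraction(3, 5)
p_a_given_x0, p_a_given_x1 = Fraction(1, 4), Fraction(1, 2)

# Y = P(A|X) takes the value P(A|X=0) with probability P(X=0), etc., so
# E[Y] = P(A|X=0) P(X=0) + P(A|X=1) P(X=1),
# which is exactly the law-of-total-probability expression for P(A).
e_y = p_a_given_x0 * p_x0 + p_a_given_x1 * p_x1
print(e_y)  # 2/5, the marginal probability P(A)
```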

More generally still, it is possible to speak of the conditional probability of an event given a sigma-algebra. See conditional expectation.