Gambler's fallacy

The gambler's fallacy is an informal fallacy. It is the incorrect belief that the likelihood of a random event can be affected by or predicted from other, independent events.

The gambler's fallacy gets its name from the fact that, where the random event is the throw of a die or the spin of a roulette wheel, gamblers will risk money on their belief in "a run of luck" or a mistaken understanding of "the law of averages". It often arises because a similarity between random processes is mistakenly interpreted as a predictive relationship between them. (For instance, two fair dice are similar in that they each have the same chances of yielding each number - but they are independent in that they do not actually influence one another.)

The gambler's fallacy often takes one of these forms:
 * A particular outcome of a random event is more likely to occur because it has happened recently ("run of good luck");
 * A particular outcome is more likely to occur because it has not happened recently ("law of averages" or "it's my turn now");
 * A particular outcome is less likely to occur because it has happened recently ("law of averages" or "exhausted its luck");
 * A particular outcome is less likely to occur because it has not happened recently ("run of bad luck").

A more subtle version of the fallacy is that an "interesting" (non-random-looking) outcome is "unlikely" (e.g. that a sequence of "1, 2, 3, 4, 5, 6" in a lottery result is less likely than any other individual outcome). Even apart from the debate about what constitutes an "interesting" result, this can be seen as a version of the gambler's fallacy because it is saying that a random event is less likely to occur if the result, taken in conjunction with recent events, will produce an "interesting" pattern.

An example: coin-tossing
The gambler's fallacy can be illustrated by considering the repeated toss of a coin. With a fair coin the chances of getting heads are exactly 0.5 (one in two). The chances of it coming up heads twice in a row are 0.5 × 0.5 = 0.25 (one in four). The probability of three heads in a row is 0.5 × 0.5 × 0.5 = 0.125 (one in eight), and so on.
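The multiplication rule for independent tosses can be sketched in a few lines of Python (function name is illustrative):

```python
# For independent tosses the probabilities multiply: a run of n heads
# with a fair coin has probability (1/2) ** n.
def prob_of_run(n, p_heads=0.5):
    return p_heads ** n

print(prob_of_run(1))  # 0.5   (one in two)
print(prob_of_run(2))  # 0.25  (one in four)
print(prob_of_run(3))  # 0.125 (one in eight)
```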

Now suppose that we have just tossed four heads in a row. A believer in the gambler's fallacy might say, "If the next coin flipped were to come up heads, it would generate a run of five successive heads. The probability of a run of five successive heads is (1/2)^5 = 1/32; therefore, the next coin flipped only has a 1 in 32 chance of coming up heads."

This is the fallacious step in the argument. If the coin is fair, then by definition the probability of heads on any given toss is always 0.5, never more or less, and likewise for tails. While the probability of a run of five heads is only 1 in 32 (0.03125), that is the probability before the first toss. After the first four tosses, those results are known and contribute no further uncertainty: five consecutive heads is now exactly as likely as four heads followed by one tail, so tails is no more likely than heads. Indeed, the 1-in-32 figure was itself calculated on the assumption that heads and tails are equally likely at every step. Each of the two possible outcomes has equal probability no matter how many times the coin has been flipped previously and no matter what the results were. The fallacy is the idea that a run of luck in the past somehow influences the odds of a bet in the future, so that the next toss is more likely to be a tail because of the past tosses. The 1-in-32 reasoning would apply only if we had to predict the entire sequence before any tosses were made: a gambler who bets on HHHHH in advance does indeed face only a 1 in 32 chance of success.
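One way to check this numerically is a quick Monte Carlo sketch: among simulated sequences that open with four heads, the following toss comes up heads about half the time, not 1 time in 32 (function name and parameters are illustrative):

```python
import random

def next_toss_after_run(run_len=4, trials=200_000, seed=1):
    """Among simulated sequences that begin with `run_len` heads,
    estimate the probability that the next toss is also heads."""
    rng = random.Random(seed)
    runs = heads_after = 0
    for _ in range(trials):
        # Did this trial open with run_len heads in a row?
        if all(rng.random() < 0.5 for _ in range(run_len)):
            runs += 1
            if rng.random() < 0.5:
                heads_after += 1
    return heads_after / runs

print(next_toss_after_run())  # close to 0.5, not 1/32
```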

As an example, the popular doubling strategy (start with $1, if you lose, bet $2, then $4 etc., until you win) does not work; see Martingale (betting system). Situations like these are investigated in the mathematical theory of random walks. This and similar strategies either trade many small wins for a few huge losses (as in this case) or vice versa. With an infinite amount of working capital, one would come out ahead using this strategy; as it stands, one is better off betting a constant amount if only because it makes it easier to estimate how much one stands to lose in an hour or day of play.
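A rough simulation of the doubling strategy under the finite-bankroll condition described above; the bankroll, bet sizes, and American-roulette win probability of 18/38 are illustrative assumptions:

```python
import random

def martingale(bankroll=100, base_bet=1, p_win=18 / 38, rounds=1000, seed=0):
    """Double the stake after each loss, reset to the base bet after a win.
    Stops early (ruin) once the next doubled bet cannot be covered."""
    rng = random.Random(seed)
    bet = base_bet
    for _ in range(rounds):
        if bet > bankroll:
            break  # ruined: cannot cover the doubled stake
        if rng.random() < p_win:
            bankroll += bet
            bet = base_bet  # win: recover all losses plus base_bet
        else:
            bankroll -= bet
            bet *= 2       # loss: double up and try again
    return bankroll

results = [martingale(seed=s) for s in range(200)]
print(max(results), min(results))  # a few modest gains, frequent ruin
```

With a finite bankroll, a long enough losing streak (here, about seven losses in a row) always arrives eventually, wiping out the many small wins.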

A joke told among mathematicians demonstrates the nature of the fallacy. A man decides that whenever he flies on an airplane, he will always bring a bomb with him. "The chances of an airplane having a bomb on it are very small," he reasons, "and certainly the chances of having two are almost none!"

Some claim that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, and a related phenomenon called pareidolia. There is an argument that we are programmed to look for patterns in chaos ("Is that a tiger half-hidden in the trees?" "Is that a bunch of ripe fruit half-hidden in the leaves?") and are actually biased towards spotting patterns when none exist. An animal that is prone to over-imagining patterns (e.g., never misses real tigers, but sometimes sees imaginary ones) is far more likely to pass on its genes than a cousin which ignores just one real tiger.

Other examples

 * What is the probability of flipping 21 heads in a row, with a fair coin? (Answer: 1 in 2,097,152 = approximately 0.000000477.) What is the probability of doing it, given that you have already flipped 20 heads in a row? (Answer: 0.5.) See Bayes' theorem.
 * Will you eventually come out ahead at roulette by betting double what you lost the previous time, and adding an extra amount? (Answer: given infinite time and funds, yes, you will eventually win on that color in a fair game. Given finite time and even more finite funds, however, there is a real chance that you will exhaust your money before winning. Regardless of how many times a color has won or lost in a row, the probability of the ball landing on that color in a given spin is the number of pockets of that color divided by the total number of pockets. On an American ("Vegas") roulette wheel, the chance of hitting red is 18/38, or about 0.474, regardless of previous results.)
 * Are you more likely to win the lottery jackpot by choosing the same numbers every time or by choosing different numbers every time? (Answer: Either strategy is equally likely to win.)
 * Are you more or less likely to win the lottery jackpot by picking the numbers which won last week, or picking numbers at random? (Answer: Either strategy is equally likely to win.)
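The first question above can be verified directly, since the conditional probability is just the ratio of the two run probabilities:

```python
# Unconditional probability of 21 heads in a row with a fair coin.
p_21 = 0.5 ** 21
print(p_21)  # 1/2097152, approximately 4.77e-07

# Given 20 heads already flipped, the chance of completing the run is
# P(21 heads) / P(20 heads): the known tosses cancel out.
p_next_given_20 = (0.5 ** 21) / (0.5 ** 20)
print(p_next_given_20)  # 0.5
```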

(This does not mean that all possible choices of numbers within a given lottery are equally good. While the odds of winning may be the same regardless of which numbers are chosen, the expected payout is not, because of the possibility of having to share that jackpot with other players. A rational gambler might attempt to predict other players' choices and then deliberately avoid these numbers.)

Non-examples
There are many scenarios where the gambler's fallacy might superficially seem to apply but does not, including:
 * When the probability of different events is not independent, the probability of future events can change based on the outcome of past events. Formally, the system is said to have memory. An example of this is cards drawn without replacement.  For example, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be of another rank. Thus, the odds for drawing a jack, assuming that it was the first card drawn and that there are no jokers, have decreased from 4/52 (7.69%) to 3/51 (5.88%), while the odds for each other rank have increased from 4/52 (7.69%) to 4/51 (7.84%).
 * When the probability of each event is not even, such as with a loaded die or an unbalanced coin. The Chernoff bound is a method of determining how many times a coin must be flipped to determine (with high probability) which side is loaded.  As a run of heads (or, e.g., reds on a roulette wheel) gets longer and longer, the chance that the coin or wheel is loaded increases.
 * The outcome of future events can be affected if external factors are allowed to change the probability of the events (e.g. changes in the rules of a game affecting a sports team's performance levels). Additionally, an inexperienced player's success may decrease after opposing teams discover his or her weaknesses and exploit them. The player must then attempt to compensate and randomize his strategy. See Game Theory.
 * Many riddles trick the reader into believing that they are an example of the gambler's fallacy, such as the Monty Hall problem.
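The card-drawing case in the first bullet above (a system with memory) can be checked with exact fractions:

```python
from fractions import Fraction

# Drawing without replacement: the deck has memory. A standard 52-card
# deck (no jokers) holds 4 cards of each rank.
p_jack_first = Fraction(4, 52)         # 4/52, about 7.69%

# After one jack is removed, 51 cards remain, only 3 of them jacks.
p_jack_second = Fraction(3, 51)        # 3/51, about 5.88%
p_other_rank_second = Fraction(4, 51)  # 4/51, about 7.84%

print(float(p_jack_first), float(p_jack_second), float(p_other_rank_second))
```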