The Law of Large Numbers was established in the 17th century by Jacob Bernoulli, who showed that the larger the sample of an event - like a coin toss - the more closely it represents its true probability. Bettors still struggle with this idea more than 300 years on, and the misunderstanding has become known as the Gambler’s Fallacy. Find out why this mistake can be so costly.
The Law of Large Numbers
Using a fair coin toss as an example (where heads and tails each have an equal 50% chance), Bernoulli calculated that as the number of coin tosses gets larger, the percentage of heads or tails results gets closer to 50%, while the difference between the actual number of heads and tails thrown tends to get larger.
As the number of tosses gets larger, the distribution of heads or tails evens out to 50%
It’s the second part of Bernoulli’s theorem that people have a problem understanding, and that misunderstanding is what has been coined the “Gambler’s Fallacy”. If you tell someone that a coin has been flipped nine times, landing on heads each time, their prediction for the next flip tends to be tails.
This is incorrect, however, as a coin has no memory, so each time it is tossed the probability of heads or tails is the same: 0.5 (a 50% chance).
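As a quick check, here is a minimal simulation sketch (the two-million-flip sample and the nine-heads streak length are illustrative choices): it tallies every toss that immediately follows a run of nine heads and shows that roughly half of them still land heads.

```python
import random

# Sketch: a coin has no memory, so the toss immediately after a run of
# nine heads should still land heads about 50% of the time.
random.seed(1)

STREAK = 9
flips = [random.random() < 0.5 for _ in range(2_000_000)]  # True = heads

next_after_streak = []
run = 0
for i, heads in enumerate(flips[:-1]):
    run = run + 1 if heads else 0
    if run >= STREAK:  # flips[i-8] .. flips[i] were all heads
        next_after_streak.append(flips[i + 1])

share = sum(next_after_streak) / len(next_after_streak)
print(f"Tosses following a {STREAK}-heads run: {len(next_after_streak)}")
print(f"Share that landed heads anyway: {share:.3f}")  # ~0.500, not ~0
```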
Bernoulli’s discovery showed that as a sample of fair coin tosses gets very large – e.g. a million – the distribution of heads and tails evens out to around 50%. Because the sample is so large, however, the expected deviation of the counts from an exact 50/50 split is itself large: around 500 tosses.
The equation for the standard deviation of the number of heads in n fair tosses, √(n × 0.5 × 0.5) = 0.5 × √n, gives us an idea of what we should expect:

0.5 × √1,000,000 = 500
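A short simulation sketch of both halves of Bernoulli’s claim (the sample sizes and seed are arbitrary choices): as n grows, the percentage of heads closes in on 50% even as the raw gap between the head count and n/2 grows on the order of 0.5 × √n.

```python
import math
import random

# Sketch of Bernoulli's two-part claim: the *proportion* of heads converges
# to 50% while the *count* gap from an exact 50/50 split grows like 0.5*sqrt(n).
random.seed(7)

for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    expected_sd = 0.5 * math.sqrt(n)  # = 500 when n = 1,000,000
    print(f"n = {n:>9,}   heads = {heads / n:7.3%}   "
          f"|heads - n/2| = {abs(heads - n / 2):6.0f}   "
          f"0.5*sqrt(n) = {expected_sd:5.0f}")
```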
While the expected deviation is observable for this many tosses, the nine-toss example mentioned earlier isn’t a large enough sample for this to apply.
The nine tosses are therefore like a short extract from the million-toss sequence: the sample is too small to even out in the way Bernoulli showed it would over a million tosses, so a run of identical results can easily form by pure chance.
Applying distribution in betting
There are some clear applications of expected deviation in relation to betting. The most obvious is in casino games like Roulette, where a misplaced belief that sequences of red or black, or odd or even, will even out during a single session of play can leave you out of pocket.
In 1913, a roulette table in a Monte Carlo casino saw black come up 26 times in a row. After the 15th black, bettors were piling onto red, assuming the chances of yet another black number were becoming astronomical, illustrating the irrational belief that one spin somehow influences the next. It is for this reason that the Gambler’s Fallacy is also known as the Monte Carlo Fallacy.
Another example is a slot machine, which is in effect a random number generator with a set RTP (Return to Player). You can often witness players who have pumped considerable sums into a machine without success warning other players away from it, convinced that a big win must logically follow their losing run.
Of course, the RTP is only a long-run average: for actual returns to get reliably close to it, the bettor would have to play an impractically large number of times, and in the meantime each spin remains independent of the losing run that preceded it.
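A minimal sketch of that point, assuming a hypothetical slot that pays 10x the stake with probability 0.094 (an RTP of 94%; both numbers are invented for illustration, not any real machine’s settings): short sessions swing wildly around the RTP, and only enormous samples settle near it.

```python
import random

# Hypothetical slot machine: pays 10x the stake with probability 0.094,
# giving an RTP of 94%. Both numbers are assumptions for illustration.
random.seed(42)

PAYOUT, WIN_PROB = 10.0, 0.094  # RTP = PAYOUT * WIN_PROB = 0.94

def spin() -> float:
    """Payout for a one-unit stake on a single, independent spin."""
    return PAYOUT if random.random() < WIN_PROB else 0.0

for n in (100, 10_000, 1_000_000):
    returned = sum(spin() for _ in range(n))
    print(f"{n:>9,} spins: actual return per unit staked = "
          f"{returned / n:.3f} (RTP = 0.940)")
```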
When he established his law, Jacob Bernoulli asserted that even the stupidest man understands that the larger the sample, the more likely it is to represent the true probability of the observed event. He may have been a little harsh in his assessment, but once you have an understanding of the Law of Large Numbers, and the law (or flaw) of averages is consigned to the rubbish bin, you won’t be one of Bernoulli’s ‘stupid men’.
Source: pinnacle.com