The law of truly large numbers (a statistical adage), attributed to Persi Diaconis and Frederick Mosteller, states that with a large enough number of independent samples, any outrageous (i.e. unlikely in any single sample) thing is likely to be observed. Because we never find it notable when likely events occur, we highlight unlikely events and notice them more. The law is often used to falsify pseudo-scientific claims; as a result, both the law and its application are sometimes criticized by fringe scientists.
The law is meant to make a statement about probabilities and statistical significance: in large enough masses of statistical data, even minuscule fluctuations attain statistical significance. Thus, in truly large numbers of observations it is paradoxically easy to find significant correlations that still do not lead to causal theories (see: spurious correlation) and that, by their sheer number, may lead to obfuscation as well.
The law can be rephrased as "large numbers also deceive", something which is counter-intuitive to a descriptive statistician. More concretely, skeptic Penn Jillette has said, "Million-to-one odds happen eight times a day in New York" (population about 8,000,000).
Example
For a simplified example of the law, assume that a given event happens with a probability of 0.1% in a single trial. Then the probability that this so-called unlikely event does not happen (its improbability) in a single trial is 99.9% (0.999).
Already for a sample of 1000 independent trials, however, the probability that the event does not happen in any of them, even once (improbability), is only 0.999^1000 ≈ 0.3677 = 36.77%. Then, the probability that the event does happen, at least once, in 1000 trials is 1 − 0.999^1000 ≈ 0.6323 or 63.23%. This means that this "unlikely event" has a probability of 63.23% of happening if 1000 independent trials are conducted, or over 99.9% for 10,000 trials.
The probability that it happens at least once in 10,000 trials is 1 − 0.999^10000 ≈ 0.99995 = 99.995%. In other words, given enough independent trials, even a highly unlikely event becomes very likely to occur at least once.
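These figures can be checked directly; the short Python sketch below (an illustrative addition, not part of any cited source) evaluates 1 − (1 − p)^n for the values used above.

```python
def prob_at_least_once(p, n):
    """Probability that an event with per-trial probability p
    occurs at least once in n independent trials."""
    return 1 - (1 - p) ** n

# Event with a 0.1% (p = 0.001) chance per trial
print(prob_at_least_once(0.001, 1_000))   # ≈ 0.6323  (63.23%)
print(prob_at_least_once(0.001, 10_000))  # ≈ 0.99995 (99.995%)
```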
For an event X that occurs with a very low probability of 0.0000001% in any single sample, a "truly large" number of 1,000,000,000 independent samples already gives a probability that X occurs of 1 − 0.999999999^1000000000 ≈ 0.63 = 63%, and a number of independent samples equal to the size of the human population (in 2021, about 7,900,000,000) gives a probability of 1 − 0.999999999^7900000000 ≈ 0.9996 = 99.96%.
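The same formula reproduces these two figures; the sketch below (again an assumed illustration, not from any cited source) uses log1p and expm1 so that rounding error does not swamp a per-sample probability this small.

```python
from math import expm1, log1p

p = 1e-9  # a 0.0000001% chance per sample
# 1 - (1 - p)**n, computed via log1p/expm1 for numerical stability at tiny p
print(-expm1(1_000_000_000 * log1p(-p)))  # ≈ 0.63
print(-expm1(7_900_000_000 * log1p(-p)))  # ≈ 0.9996
```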
These calculations can be generalized and formalized into a mathematical proof that "the probability c for the unlikely event X to happen in N independent trials can become arbitrarily close to 1, no matter how small the probability a of the event X in one single trial is, provided that N is truly large."
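In symbols, this is an elementary limit (a standard restatement, not quoted from the proof itself):

```latex
c = 1 - (1 - a)^{N}, \qquad 0 < a \le 1,
\qquad\text{and since } 0 \le 1 - a < 1,\qquad
\lim_{N \to \infty} (1 - a)^{N} = 0
\;\Longrightarrow\; \lim_{N \to \infty} c = 1 .
```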
In criticism of pseudoscience
The law comes up in criticism of pseudoscience and is sometimes called the Jeane Dixon effect (see also Postdiction). It holds that the more predictions a psychic makes, the better the odds that one of them will "hit". Thus, if one comes true, the psychic expects us to forget the vast majority that did not happen (confirmation bias). Humans can be susceptible to this fallacy.
Another, somewhat similar, manifestation of the law can be found in gambling, where gamblers tend to remember their wins and forget their losses, even if the latter far outnumber the former (though, depending on the person, the opposite may also be true, when they believe they need to analyse their losses further in order to fine-tune their playing system). Mikal Aasved links it with "selective memory bias", which allows gamblers to mentally distance themselves from the consequences of their gambling by holding an inflated view of their real winnings (or losses in the opposite case: "selective memory bias in either direction").