A probability principle which states that a simple mean, range, standard deviation, or relative frequency calculated from a large sample (pool) of observations or data will usually be more accurate than one calculated from a small or limited sample. In other words, as a sample grows in size, it better reflects and represents the population, and its mean approaches ever more closely the average of the entire population. For example, averages computed over a long series of historical stock prices will tend to be more stable and less jumpy than those computed over a short series.
In statistics, this theorem entails that the frequencies of events with the same probability of occurrence even out when the number of experiments, instances, or observations is large enough. As the number of experiments grows, the actual ratio of outcomes gets closer to its theoretical, or expected, central value.
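As an illustration (a minimal simulation sketch, not part of the original definition; the die example, seed, and sample sizes are assumptions), the following Python snippet shows the sample mean of fair-die rolls settling toward the theoretical mean of 3.5 as the number of observations grows:

    # Hypothetical example: sample mean of fair six-sided die rolls
    # converging toward the theoretical mean of 3.5 as n grows.
    import random

    random.seed(42)  # fixed seed so the illustration is reproducible

    theoretical_mean = 3.5  # expected value of a fair six-sided die
    for n in (10, 100, 10_000, 1_000_000):
        rolls = [random.randint(1, 6) for _ in range(n)]
        sample_mean = sum(rolls) / n
        print(f"n = {n:>9}: sample mean = {sample_mean:.4f} "
              f"(theoretical = {theoretical_mean})")

Run with larger and larger n, the printed sample means wander less and less from 3.5, which is exactly the evening-out behavior the theorem describes.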
For example, if a fair coin (even and level on both sides) is tossed a large number of times (e.g., a million times), almost exactly half of the tosses will come up heads and half tails, so the heads-to-tails ratio will be very close to 1:1 (one to one). By contrast, in an experiment where the coin is tossed only 10 times, the ratio is much more likely to be tilted to one side or the other (say, 2-8, 3-7, 1-9, etc.).
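A quick simulation makes the coin example concrete (a hedged sketch; the seed and toss counts below are illustrative choices, not from the source):

    # Hypothetical example: heads-to-tails split for 10 tosses
    # versus 1,000,000 tosses of a fair coin.
    import random

    random.seed(7)

    for n in (10, 1_000_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        tails = n - heads
        print(f"{n:>9} tosses: heads = {heads}, tails = {tails}, "
              f"heads ratio = {heads / n:.4f}")

With only 10 tosses the split can easily be lopsided, while with a million tosses the heads ratio lands very close to 0.5.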
There are two types of this law: the weak law of large numbers and the strong law of large numbers.
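For reference, the two forms are usually stated as follows for independent, identically distributed random variables with common mean mu and sample mean X-bar of the first n observations (standard textbook formulations, not taken from this entry):

    % Weak law: the sample mean converges to mu in probability.
    \[
    \text{Weak LLN:}\quad \lim_{n\to\infty} P\big(\,|\bar{X}_n - \mu| > \varepsilon\,\big) = 0
    \quad \text{for every } \varepsilon > 0.
    \]
    % Strong law: the sample mean converges to mu almost surely.
    \[
    \text{Strong LLN:}\quad P\big(\lim_{n\to\infty} \bar{X}_n = \mu\big) = 1.
    \]

The weak law speaks of convergence in probability, while the strong law makes the stricter claim of almost-sure convergence.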
This theorem is also known as the law of large numbers (LLN).