In statistical terms, the law of large numbers is a theorem stating that as the sample size of a random variable increases, the sample average approaches the theoretical (expected) average. In layman’s terms, the law of large numbers simply says that the more times you roll a die, the closer the average of your rolls will get to 3.5.
If your sample size is one, meaning a single roll, you have a two in six chance of getting a 3 or a 4, both close to the average. But you also have a two in six chance of being as far from the expected average as possible by rolling a 1 or a 6. Also note that there is no way to roll a 3.5, the theoretical average, with a single die.
Roll twice, and the odds of getting snake eyes (two ones) or boxcars (two sixes) drop dramatically: there is only a 2 in 36 chance of either. There is also a far greater likelihood of getting a combination that averages exactly 3.5. A six and a one, a five and a two, and a four and a three all yield an average of 3.5.
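The two-roll arithmetic above is easy to verify by enumerating all 36 equally likely outcomes. A quick sketch in Python:

```python
from itertools import product

# All 36 equally likely outcomes of rolling two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

# Snake eyes or boxcars: the two extreme outcomes.
extremes = [o for o in outcomes if o in ((1, 1), (6, 6))]

# Pairs whose average is exactly 3.5, i.e. whose sum is 7.
average_is_3_5 = [o for o in outcomes if sum(o) == 7]

print(len(extremes), "of", len(outcomes))        # 2 of 36
print(len(average_is_3_5), "of", len(outcomes))  # 6 of 36
```

Note that an average of exactly 3.5 is three times as likely as landing at either extreme, which is the law of large numbers already starting to work with a sample of just two.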
Simply put, the law of large numbers means that you should be wary of inferring characteristics about a population from a small sample.
In Lean, the same tendency people have to use small samples to make judgments holds true. People talk to a small group of people about some of their experiences with Lean, and hear a couple of bad stories. They assume that the bad stories are representative of all Lean activity, and become resistant. Or, they have a bad experience with something early on and assume that all future experiences will also be bad. In both cases, that small sample may be representative of what Lean will be like, but it also may just be a random occurrence.
With process-related outcomes, the law of large numbers also holds true. When a new process is implemented in a kaizen event, the first few units of output may indicate something about its average results, but it is possible that you are seeing something clustering near one of the tails of the distribution. (Unlikely, in any single process improvement, but if you make enough changes, it will eventually happen.) The results may show a very poor or very good process with a small sample, but over time as more data is collected, the average will drift toward the true process capability.
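To illustrate the drift toward true capability, here is a small simulation of a hypothetical process. The 60-second mean and 5-second standard deviation are made-up numbers for the sketch, not figures from any real process:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

# Hypothetical process: cycle times roughly normal around a true mean
# of 60 seconds with a standard deviation of 5 seconds (assumed values).
TRUE_MEAN, SD = 60.0, 5.0

def sample_mean(n):
    """Average cycle time observed over n units of output."""
    return sum(random.gauss(TRUE_MEAN, SD) for _ in range(n)) / n

# The first few units after a kaizen event can mislead; a larger audit
# sample pulls the observed average toward the true process capability.
for n in (3, 30, 3000):
    print(f"{n:>5} units: observed average {sample_mean(n):.2f} s")
```

With three units the observed average can easily sit a couple of seconds off the true mean; by a few thousand units it is essentially on top of it, which is why follow-up audits with larger samples are worth the effort.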
That is one of the reasons that most kaizen facilitators recommend that you do follow-up audits on new processes. The reviews give you an opportunity to see a larger sample and get a better feel for what the true output of the process looks like.
The law of large numbers works against you more often than it works for you. If a process looks better than it really is, you pass on a chance to fix a problem. If it looks worse than it really is, you waste resources.
If an employee sees something bad about Lean early on, you run the risk of them assuming that all Lean things are bad.
Before making any assumptions about small sets of data points, test your theory. Run a few more cycles and see if the average drifts. Have the person with the bad experience try a few more things, or talk to a few more people to get a more representative view of the real situation. The key is to increase the sample size and confirm the results you saw.