The Law of Large Numbers: It’s Not the Central Limit Theorem

I’ve spoken about asymptotics before. It’s the Lego of the modelling world, in my view: interesting, hard, and you can lose years of your life looking for just the right piece that fits into the model you’re trying to build.

The Law of Large Numbers (LLN) is another simple theorem that’s widely misunderstood. Most often it’s conflated with the central limit theorem (CLT), which deals with the studentised sample mean or z-score. The LLN pertains to the sample mean itself.

Like the CLT, the LLN is actually a collection of theorems, strong and weak. I’ll confine myself to the simplest version here, Khinchine’s weak law of large numbers. It states that for a random sample of n independent and identically distributed observations from any distribution with a finite mean µ, the sample mean has a probability limit equal to the population mean, µ. (A finite variance isn’t actually required for this version, though it’s often assumed.) That is, the sample mean is a consistent estimator of the population mean under these conditions.
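In symbols, convergence in probability means that for any tolerance ε > 0, the chance the sample mean strays from µ by more than ε vanishes as n grows:

```latex
\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad
\bar{X}_n \xrightarrow{\;p\;} \mu
\;\iff\;
\lim_{n \to \infty} \Pr\!\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) = 0
\quad \text{for every } \varepsilon > 0.
```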

Put simply, as n gets very big, the sample mean gets arbitrarily close to the population mean, with probability approaching one.

Notice there is nothing here about normal distributions as n gets large. That’s the key difference between the LLN and the CLT: one deals with the sample mean alone, the other with its studentised version. On its own, the distribution of the sample mean collapses onto a single point, µ, as n gets large. That is the implication of the LLN.
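A quick simulation makes the collapse visible. Here’s a minimal sketch, using an exponential distribution purely as an example (any distribution with a finite mean works just as well), showing the running sample mean homing in on µ:

```python
import numpy as np

rng = np.random.default_rng(42)

# Example distribution: exponential with population mean mu = 2.
# Any distribution with a finite mean illustrates the LLN equally well.
mu = 2.0
n = 100_000
draws = rng.exponential(scale=mu, size=n)

# Running sample mean after 1, 2, ..., n observations.
running_mean = np.cumsum(draws) / np.arange(1, n + 1)

for size in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {size:>7,}: sample mean = {running_mean[size - 1]:.4f}")
# As n grows, the printed means cluster ever more tightly around mu = 2.
```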

Appropriately centred and scaled at the correct rate (√n), the studentised sample mean has a standard normal distribution in the limit as n gets large: that’s the CLT.
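For i.i.d. observations with finite variance σ² (the CLT, unlike Khinchine’s law, does need a finite variance), the statement is:

```latex
\sqrt{n}\,\frac{\bar{X}_n - \mu}{\sigma} \xrightarrow{\;d\;} \mathcal{N}(0, 1)
\quad \text{as } n \to \infty .
```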

As usual, here’s an infographic to go with it: put side by side, the two theorems have different results but deal with something quite similar.

[Infographic: CLT vs LLN]
