[Illustration: coins being flipped. Illustration by Sandbox Studio, Chicago]


Sigma is a unit that describes how much a set of experimental data deviates from what’s expected.

Explain it in 60 seconds

When scientists search for new physics, they compare what they observe to what theories predict. If an experiment sees something that doesn't match theory, it could be evidence of something new—or it could be merely a result of random fluctuations in the data.

Scientists use the statistical measure sigma to express the probability of a statistical fluke as large as the observed mismatch between theory and experiment.

Consider a quarter flipped 10 times. If it were to land with the tail side up seven times—even though a fair coin is expected to land that way only five times on average—the mismatch could easily be a statistical fluctuation. But if the quarter were flipped 10,000 times and landed tail-side-up 7000 of those times, that would suggest some unexpected new physics were affecting the outcome.
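The coin-flip intuition can be checked with a short calculation. As a sketch (the article itself gives no code), the 10-flip case can be evaluated exactly with the binomial distribution, while the 10,000-flip case is easier to express as a number of standard deviations from the expected count:

```python
from math import comb, sqrt

# Exact chance of 7 or more tails in 10 flips of a fair coin
# (sum the binomial tail: C(10,k) / 2^10 for k = 7..10)
p10 = sum(comb(10, k) for k in range(7, 11)) / 2**10
print(f"7+ tails in 10 flips: {p10:.3f}")  # ~0.17 -- an unremarkable fluke

# For 7000 tails in 10,000 flips, express the deviation in sigma:
# mean = n*p, standard deviation = sqrt(n*p*(1-p))
n, p = 10_000, 0.5
sigma = (7000 - n * p) / sqrt(n * p * (1 - p))
print(f"7000 tails in 10,000 flips: a {sigma:.0f}-sigma deviation")  # 40 sigma
```

A 17 percent chance is the kind of fluke that happens all the time; a 40-sigma deviation is, for all practical purposes, impossible by chance alone.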

If the mismatch persists as more data is collected, it becomes less likely that statistical fluctuations are to blame. That increasing certainty can be expressed with a higher number of sigma. Three sigmas correspond to a 1-in-740 chance of a statistical quirk, while four sigmas equal a 1-in-32,000 chance and five sigmas a 1-in-3.5 million chance.
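The sigma-to-odds conversions quoted above are the one-sided tail probabilities of a Gaussian distribution, which can be computed with the complementary error function. A minimal sketch (the helper name is an illustration, not from the article):

```python
from math import erfc, sqrt

def one_sided_tail(n_sigma):
    """One-sided probability of a Gaussian fluctuation beyond n_sigma."""
    return 0.5 * erfc(n_sigma / sqrt(2))

for n in (3, 4, 5):
    print(f"{n} sigma: about 1 in {1 / one_sided_tail(n):,.0f}")
# 3 sigma -> about 1 in 741
# 4 sigma -> about 1 in 31,574
# 5 sigma -> about 1 in 3.5 million
```

These reproduce the figures in the text: roughly 1-in-740, 1-in-32,000, and 1-in-3.5 million.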

In particle physics, a three-sigma result usually means that the experimental finding is a promising hint. Four sigmas are considered a sign of a likely discovery. And a definitive discovery generally requires at least a five-sigma result. The more unexpected or important the discovery, or the narrower the scope of the search, the greater the number of sigmas physicists require to convince themselves it's real.