Trout. Artwork by Sandbox Studio, Chicago.

Fine-tuning versus naturalness

When observed parameters seem like they must be finely tuned to fit a theory, some physicists accept it as coincidence. Others want to keep digging.

When physicists saw the Higgs boson for the first time in 2012, they observed its mass to be very small: 125 billion electronvolts, or 125 GeV. The measurement became a prime example of an issue that dogs particle physicists and astrophysicists today: the problem of fine-tuning versus naturalness.

To understand what’s fishy about the observable Higgs mass being so low, first you must know that it is actually the sum of two inputs: the bare Higgs mass (which we don’t know) plus contributions from all the other Standard Model particles, collectively known as “quantum corrections.”

The second number in the equation is enormous and negative, coming in around minus 10¹⁸ GeV. Compared to that, the result of the equation, 125 GeV, is extremely small, close to zero. That means the first number, the bare Higgs mass, must be almost exactly its opposite, so that the two nearly cancel. To some physicists, this is an unacceptably strange coincidence.
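Written out, the bookkeeping is simple (a schematic sketch; the exact size of the correction depends on the energy scale up to which the Standard Model is assumed to hold):

$$m_H^{\text{observed}} = m_H^{\text{bare}} + \Delta m_H^{\text{corrections}}, \qquad \Delta m_H^{\text{corrections}} \sim -10^{18}\ \text{GeV}, \qquad m_H^{\text{observed}} = 125\ \text{GeV}$$

For the sum to land on 125 GeV, the bare mass must match the correction to roughly one part in 10¹⁶ (the 1-in-10³⁴ figure quoted later in this article counts the same tuning in the mass squared, the quantity that actually appears in the calculation).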

Or it could be that it’s not the bare Higgs mass doing the heavy lifting here; it could be that there are additional contributions to the quantum corrections that physicists don’t know about.

Either way, many particle physicists aren’t comfortable with this situation. There’s no known underlying reason for these almost exact cancellations, and insisting that “it is the way it is” is unsatisfying.

Observable parameters that don’t seem to naturally emerge from a theory, but instead must be deliberately manipulated to fit, are called “finely tuned.” 

In a theory, “when you end up with numbers that are very different in size, one can adopt the point of view that this is just a representation of how nature works and there is no special meaning in the size of the numbers,” says Verena Martinez Outschoorn, an assistant professor of physics at the University of Massachusetts, Amherst. “Alternatively, one can propose ways to remedy the fine-tuning, which usually requires adding new particles manually.”

The opposite of fine-tuning is naturalness. “It’s sort of two sides of the same coin,” says theorist Stefania Gori, an assistant professor of physics at the University of California, Santa Cruz. “We say a theory is natural when you can write down this theory with parameters that are all basically of the same order.”
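In rough symbols (an informal paraphrase of that criterion, not a formal definition): if a theory’s parameters, expressed in common units, are $c_1, c_2, \ldots$, naturalness asks that

$$\frac{c_i}{c_j} \sim \mathcal{O}(1) \quad \text{for every pair } i, j$$

whereas in the Higgs case the bare mass and the quantum corrections each sit some sixteen orders of magnitude above the observed mass they combine to produce.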

So how much fine-tuning should we allow in our theories? “This is one of the fundamental debates that may decide the future of particle physics,” says experimentalist Lawrence Lee Jr., a postdoctoral fellow at Harvard University’s Laboratory for Particle Physics and Cosmology who works on the ATLAS experiment at CERN.

What’s my motivation?

Perhaps the earliest writing on fine-tuning versus naturalness appeared in 1937, with Paul Dirac’s “large numbers hypothesis,” an attempt to make sense of huge constants in the universe by comparing their ratios. The discovery of the charm quark was motivated by the quest for naturalness; scientists theorized the existence of this particle to explain the absence of an otherwise expected particle interaction.

“From an experimental point of view, the fine-tuning problem is really useful in a sense of guiding what we should investigate,” says Joseph Haley, an associate professor of physics at Oklahoma State University. 

Sometimes, he explains, a parameter may appear to be fine-tuned (like the Higgs mass) until experiments reveal a hidden, underlying issue—some additional piece of the equation we didn’t know about before. “Once we have a more complete theory, it’s like, ‘Oh, it had to be that value all along, it just wasn't clear why.’”

Lee, also an experimentalist, says his research is strongly motivated by the fine-tuning problem. “In general, what we want from our theories—and in some way, our universe—is that nothing seems too contrived,” he says.

However, not all physicists see situations that are described as fine-tuning as a problem. For them, there doesn’t need to be a reason that, say, two parameters have nearly equal, opposite values that result in a cancellation. After all, coincidences happen.

For example, the sun and moon are roughly the same size in the sky when viewed from Earth. This means that, when we are lined up just right, the moon blocks the sun entirely, resulting in a total solar eclipse. We have accepted that there is no scientific reason for this, and scientists have even calculated the extent to which the sun’s and moon’s matching sizes are fine-tuned: 2%, or 1-in-50. (Lee notes that this happy coincidence is still vastly different from the conundrum with the Higgs mass, which would require fine-tuning on the order of 1-in-10³⁴.)
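The eclipse numbers are easy to check for yourself. Here is a minimal back-of-the-envelope sketch in Python, using mean orbital figures (the real angular sizes vary by a few percent over the elliptical orbits):

```python
import math

# Mean diameters and distances in kilometers; both vary over elliptical orbits.
SUN_DIAMETER = 1_391_400
SUN_DISTANCE = 149_600_000  # mean Earth-Sun distance
MOON_DIAMETER = 3_474.8
MOON_DISTANCE = 384_400     # mean Earth-Moon distance

def angular_size_deg(diameter_km: float, distance_km: float) -> float:
    """Angular diameter in degrees, via the small-angle approximation."""
    return math.degrees(diameter_km / distance_km)

sun = angular_size_deg(SUN_DIAMETER, SUN_DISTANCE)    # ~0.53 degrees
moon = angular_size_deg(MOON_DIAMETER, MOON_DISTANCE) # ~0.52 degrees
print(f"Sun:  {sun:.3f} deg")
print(f"Moon: {moon:.3f} deg")
print(f"Mismatch: {abs(sun - moon) / sun:.1%}")       # a few percent
```

Because the moon’s distance swings between roughly 357,000 and 407,000 kilometers, its apparent size straddles the sun’s, which is why some eclipses are total and others annular.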

Other physicists say it would be nice to get rid of apparent fine-tuning, but doing so isn’t necessarily the main drive in their research. “While naturalness is something that motivates a lot of the work that we do experimentally, it's certainly not the only thing,” says Martinez Outschoorn. She studies supersymmetry, a theory that can solve the fine-tuning problem with the Higgs boson while also providing a dark matter particle candidate.

However bothered they are by apparent fine-tuning, physicists would, in an ideal world, find a final Theory of Everything that explains the underlying cause of every observed parameter in the universe. If physicists ever reach that point, Haley says, “then you’d really know you solved physics.”