Illustration by Sandbox Studio, Chicago with Ariel Davis

Will AI make MC the MVP of particle physics?

Particle physicists are building innovative machine-learning algorithms to enhance Monte Carlo simulations with the power of AI.

Originally developed in the 1940s by physicists studying neutron diffusion, Monte Carlo simulations are mathematical models that use random numbers to simulate different kinds of events. As a simple example of how they work, imagine you have a pair of six-sided dice, and you’d like to determine the probability of the dice landing on any given number.

“You take your dice, and you repeat the same exercise of throwing them on the table, and you look at the outcome,” says Susanna Guatelli, associate professor of physics at the University of Wollongong in Australia.

By repeating the dice-throwing experiment and recording the number of times your dice land on each number, you can build a “probability distribution”—a list giving you the likelihood your dice will land on each possible outcome. 

For the Monte Carlo simulations used in physics, “we repeat the same experiment many, many times,” Guatelli says. “When we use it to solve problems, we have to repeat the same experiment that, of course, is a lot more difficult and complex than throwing the dice.”
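To make the dice example concrete, here is a minimal sketch in Python (not from the article) that builds the probability distribution by brute repetition:

```python
import random
from collections import Counter

def roll_two_dice():
    """Simulate one throw of a pair of six-sided dice and return the total."""
    return random.randint(1, 6) + random.randint(1, 6)

def estimate_distribution(n_throws=100_000):
    """Estimate the probability of each total by repeating the throw many times."""
    counts = Counter(roll_two_dice() for _ in range(n_throws))
    return {total: counts[total] / n_throws for total in range(2, 13)}

if __name__ == "__main__":
    for total, probability in sorted(estimate_distribution().items()):
        print(f"{total:2d}: {probability:.3f}")
```

With enough throws, the estimated probabilities converge on the exact values: a total of 7 is the most likely outcome at 6/36, while 2 and 12 each come in at 1/36.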

Much like the universe itself, Monte Carlo simulations are governed by randomness and chance. This makes them well suited to modeling natural systems. “A Monte Carlo simulation is basically our way of simulating nature,” says Benjamin Nachman, a staff scientist at the US Department of Energy’s Lawrence Berkeley National Laboratory. 

animation representing AI
Illustration by Sandbox Studio, Chicago with Ariel Davis

Particle physicists use Monte Carlo simulations to design new experiments, to plan the construction of equipment, and to predict how that equipment will perform. After researchers run their experiments, they use Monte Carlo simulations to design their analyses, simulating both physical processes predicted by the Standard Model of particle physics and hypothetical processes that go beyond the Standard Model to understand what they would look like if they occurred. 

One reason Monte Carlo simulations have become so useful is that they’re now much more accurate than they were in the past. 

“We in high-energy physics rely on Monte Carlo simulations for almost everything, and this is actually a relatively recent development,” says Kevin Pedro, an associate scientist at Fermi National Accelerator Laboratory. “In the previous generations of experiments, the Monte Carlo tools were a lot less accurate, so people didn't trust them as much…but in the ’90s and 2000s, there was a lot of work done to improve the accuracy.”

Nachman says that work has paid off. “The simulations are so good now, that if you have a full simulation event of, say, a collision at the Large Hadron Collider, and you show [the data] to an expert…most people wouldn't be able to tell you which one's real or which one's fake,” Nachman says. “The Higgs boson would not have been discovered [when it was], probably, without that level of precise simulation that we have available.” 

In the last decade or so, Monte Carlo simulations have become even more powerful, thanks to the support of machine learning.

MC simulations get an ML boost

Monte Carlo simulations allow researchers to analyze events relative to some independent variable, like time or energy.

“One of the defining features of physical processes, or any processes, is that there are different processes happening on different time, energy, or length scales,” Nachman says. “So the idea is that we have [some] particles—or whatever the fundamental unit of object is—inside some simulator. We track those particles as they evolve through time, energy, or whatever the relevant independent quantity is.” 

The end result is a mathematical simulation of experimental data that looks a lot like the real thing. “We want to be able to emulate some data set, and it should look as similar as possible to the data we would see in some experiment,” Nachman says. 
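The article doesn’t spell out a particular simulator, but the tracking idea can be sketched as a toy transport loop, in which every number below is made up purely for illustration:

```python
import random

SLAB_THICKNESS_CM = 10.0       # hypothetical material thickness
MEAN_FREE_PATH_CM = 4.0        # hypothetical average distance between interactions
ABSORPTION_PROBABILITY = 0.3   # hypothetical chance an interaction absorbs the particle

def transport_one_particle():
    """Track one particle; return the depth where it is absorbed, or None if it escapes."""
    depth = 0.0
    while True:
        # Randomly sample the distance to the next interaction.
        depth += random.expovariate(1.0 / MEAN_FREE_PATH_CM)
        if depth > SLAB_THICKNESS_CM:
            return None  # the particle passed through the slab
        if random.random() < ABSORPTION_PROBABILITY:
            return depth  # absorbed at this depth
        # otherwise it scatters (here, simply continues forward) and we keep tracking it

absorbed_depths = []
escaped = 0
for _ in range(100_000):  # repeat the "experiment" many, many times
    result = transport_one_particle()
    if result is None:
        escaped += 1
    else:
        absorbed_depths.append(result)

print(f"escaped: {escaped / 100_000:.3f}, absorbed: {len(absorbed_depths) / 100_000:.3f}")
```

Repeating the toy experiment 100,000 times yields an emulated data set: the fraction of particles absorbed and the distribution of absorption depths stand in for the quantities a real simulation would predict.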

But this method of Monte Carlo simulation also has its limitations. “Though it’s very accurate, it’s kind of slow,” Pedro says.

For researchers, it’s always an open question whether they can simulate enough individual events of the process they want to model for the final simulation to have the same statistical power as the real experimental data.

“And so this is where AI comes in,” Pedro says. 

In collider experiments, for example, the simulations are slow because of all the detail that goes into each one. Particles traverse the detector, and researchers simulate their interactions with the detector material. But as a particle moves, both the material it is passing through and the type of interaction taking place can change.

This means that not just the inputs but the actual computations that the simulation requires are changing with every computational step. Each particle essentially wants to do something different, and that complexity is challenging for modern computers to simulate.

“But if you take just [the final results] ... the idea is you can train some kind of AI algorithm that will reproduce that distribution very accurately,” Pedro says. “And it will do that [in a way that’s] very easy to accelerate on a modern computing architecture.”

In other words, researchers accelerate slow particle detector simulations by replacing part or all of them with machine-learning models. They train these models on data from real detector experiments, or on data produced by earlier, slower simulations. This framework applies to many other areas of particle physics as well, and it can serve to enhance not only the speed but also the accuracy of simulations.

“The basic idea is [always] more or less similar: that there’s some very computationally intensive tasks that you can approximate to a very good degree of fidelity with an AI algorithm if you're careful enough,” Pedro says. 
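As a rough illustration of what such a surrogate can look like (a toy sketch, not any experiment’s actual fast-simulation code), one can train a small neural network to reproduce the output distribution of a stand-in “slow” simulator and then sample from the network instead:

```python
# Toy surrogate: learn the inverse CDF of a slow simulator's output with a small
# neural network, then generate new samples cheaply. All functions and numbers here
# are hypothetical stand-ins for a detailed detector simulation.
import numpy as np
import torch
import torch.nn as nn

def slow_simulation(n):
    # Stand-in for an expensive Monte Carlo producing, say, an energy-deposit spectrum.
    return np.random.gamma(shape=2.0, scale=1.5, size=n)

# 1. Run the slow simulator once to build a training set of empirical quantiles.
samples = np.sort(slow_simulation(20_000))
quantile_levels = (np.arange(len(samples)) + 0.5) / len(samples)
u = torch.tensor(quantile_levels, dtype=torch.float32).unsqueeze(1)
x = torch.tensor(samples, dtype=torch.float32).unsqueeze(1)

# 2. Fit a tiny network mapping a quantile level in (0, 1) to a simulated value.
surrogate = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for step in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(surrogate(u), x)
    loss.backward()
    optimizer.step()

# 3. "Fast simulation": feeding uniform random numbers through the trained network
#    produces samples with approximately the same distribution, without rerunning
#    the expensive simulator.
with torch.no_grad():
    fast_samples = surrogate(torch.rand(100_000, 1)).squeeze().numpy()
print(f"slow mean {samples.mean():.3f}  vs  surrogate mean {fast_samples.mean():.3f}")
```

Learning a one-dimensional inverse CDF is just the simplest version of the idea; the surrogates used in real experiments rely on more sophisticated generative models that can reproduce high-dimensional detector responses.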

Need a better ML model? DIY. 

Of course, just as Monte Carlo methods have their limitations, so too do the machine-learning models that physicists use to speed them up. In part, that’s because so many machine-learning methods come from industry research, where data sets are quite different, and arguably less complex.

“In industry AI research, they tend to look at text, images and video,” Pedro says.

These human-created data formats usually come with simple, regular structures. A sentence is a sequence of words. An image is a grid of pixels. A video is a sequence of grids of pixels. 

“The data we have in particle physics is regular in its own way, but…the relationships between pieces of the data are much more complicated,” Pedro says. “And so often, that's almost the whole problem—just trying to get an existing AI to act on our data efficiently and in a way that makes sense.”

As a result, researchers like Pedro sometimes find themselves taking AI and machine-learning methods from industry and pushing those methods to limits that surpass even traditional computer-science research in terms of the size or complexity of the problems they’re able to tackle.

Pedro mentions a few examples of this from various areas of physics research, including a 2021 paper in which a group of high-energy physicists working on simulations of particle jets developed a novel version of a machine-learning model known as a generative adversarial network, or GAN. The researchers claimed that pre-existing GANs were “inadequate for physics applications.”

By making small adjustments to those models, they were able to develop a novel GAN that they say delivers improved quantitative results across every metric.
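The models in that paper are more elaborate than anything that fits here, but the basic GAN recipe they build on can be sketched on a toy one-dimensional distribution, with all numbers and architectures below chosen purely for illustration:

```python
# Toy GAN: a generator turns random noise into candidate samples, while a
# discriminator tries to tell generated samples from "real" ones. Here the "real"
# data are draws from a simple 1D distribution standing in for a simulated quantity.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # Hypothetical stand-in for samples produced by a full simulation.
    return torch.randn(n, 1) * 0.5 + 3.0

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    # Discriminator update: label real samples 1, generated samples 0.
    real = real_batch(128)
    fake = generator(torch.randn(128, 8)).detach()
    d_loss = (bce(discriminator(real), torch.ones(128, 1))
              + bce(discriminator(fake), torch.zeros(128, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call fakes real.
    fake = generator(torch.randn(128, 8))
    g_loss = bce(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    generated = generator(torch.randn(10_000, 8))
print(f"generated mean {generated.mean():.2f}, std {generated.std():.2f} (target: 3.0, 0.5)")
```

The jet-simulation setting is far harder than this toy case: the data are high-dimensional and carry complicated internal relationships, which is exactly where the physicists found that off-the-shelf architectures fell short.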

But while there are many benefits to integrating AI methods with physics applications, physicists also face a new set of challenges in dealing with the black-box nature of machine-learning algorithms themselves.

Machine-learning algorithms are often good at building internal models that generate correct answers, but bad at coherently explaining what those models are, or why they’re confident in their results. AI researchers in both industry and the sciences are still working to define the full scope of this “interpretability” problem, though some argue it is an especially pressing topic for scientific applications.

“Because we're not just trying to sell a product, right?” Pedro says. “We're actually trying to learn something about the universe.

“You can have an algorithm that might, inside of it, learn a bunch of physics and then it gives you an answer, but that's not really satisfying to us as scientists because we want to learn the physics too,” he says. “How do you coax an algorithm into telling you what it learned in a way that you can understand? This is also still a very open question.”