When you upload a photo of one of your friends to Facebook, you set into motion a complex behind-the-scenes process. An algorithm whirs away, analyzing the pixels in the photo until it spits out your friend’s name. This same cutting-edge technique enables self-driving cars to distinguish pedestrians and other vehicles from the scenery around them.
Can this technology also be used to tell a muon from an electron? Many physicists believe so. Researchers in the field are beginning to adapt it to analyze particle physics data.
Proponents hope that using deep learning will save experiments time, money and manpower, freeing physicists to do other, less tedious work. Others hope it will improve the experiments’ performance, making them better able to identify particles and analyze data than any algorithm used before. And while physicists don’t expect deep learning to be a cure-all, some think it could be key to warding off an impending data-processing crisis.
Neural networks
Up until now, computer scientists have often coded algorithms by hand, a task that requires countless hours of work with complex computer languages. “We still do great science,” says Gabe Perdue, a scientist at Fermi National Accelerator Laboratory. “But I think we could do better science.”
Deep learning, on the other hand, requires a different kind of human input.
One way to conduct deep learning is to use a convolutional neural network, or CNN. CNNs are modeled after human visual perception. Humans process images using a network of neurons in the brain; CNNs process images through layers of interconnected nodes. People train CNNs by feeding them pre-processed, labeled images. Using these examples, an algorithm continuously tweaks the weights on the connections between its nodes and learns to identify patterns and points of interest. As the algorithm refines these weights, it becomes more and more accurate, often outperforming humans.
Convolutional neural networks streamline data processing by tying multiple weights together, a technique known as weight sharing, which means far fewer elements of the algorithm have to be adjusted.
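To make that idea concrete, here is a minimal sketch, written in PyTorch purely as an illustration (the article names no framework), of how few adjustable weights a convolutional layer needs compared with a fully connected layer handling the same small image. The shared filters are the tied-together weights described above.

```python
import torch.nn as nn

# For a 32x32, single-channel image:
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
dense = nn.Linear(32 * 32, 8 * 32 * 32)  # a fully connected layer producing the same output size

# The convolutional layer reuses the same 8 small filters at every pixel
# position, so it has only 8 * (3 * 3) + 8 = 80 parameters to learn.
print(sum(p.numel() for p in conv.parameters()))   # 80

# The fully connected layer assigns a separate weight to every input-output
# pair: roughly 8.4 million parameters for the same-sized output.
print(sum(p.numel() for p in dense.parameters()))  # 8396800
```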
CNNs have been around since the late ’90s. But in recent years, breakthroughs have led to more affordable hardware for processing graphics, bigger data sets for training and innovations in the design of the CNNs themselves. As a result, more and more researchers are starting to use them.
The development of CNNs has led to advances in speech recognition and translation, as well as in other tasks traditionally completed by humans. DeepMind, a London-based company owned by Google, used a CNN to create AlphaGo, a computer program that in March beat the second-ranked international player of Go, a strategy board game far more complex than chess.
CNNs have made it much more feasible to handle amounts of image-based data that were previously prohibitively large, volumes of the kind often seen in high-energy physics.
Reaching the field of physics
CNNs became practical around the year 2006 with the emergence of big data and graphics processing units, which have the necessary computing power to process large amounts of information. “There was a big jump in accuracy, and people have been innovating like wild on top of that ever since,” Perdue says.
Around a year ago, researchers at various high-energy experiments began to consider the possibility of applying CNNs to their experiments. “We’ve turned a physics problem into, ‘Can we tell a car from a bicycle?’” says SLAC National Accelerator Laboratory researcher Michael Kagan. “We’re just figuring out how to recast problems in the right way.”
For the most part, CNNs will be used for particle identification, classification and particle-track reconstruction. A couple of experiments are already using CNNs to analyze particle interactions, with high levels of accuracy. Researchers at the NOvA neutrino experiment, for example, have applied a CNN to their data.
“This thing was really designed for identifying pictures of dogs and cats and people, but it’s also pretty good at identifying these physics events,” says Fermilab scientist Alex Himmel. “The performance was very good—equivalent to 30 percent more data in our detector.”
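To give a flavor of what this looks like in practice, here is a hedged sketch of a training step for such a classifier. The framework, image size and list of particle classes are assumptions made for illustration, not NOvA’s actual configuration.

```python
import torch
import torch.nn as nn

CLASSES = ["muon", "electron", "other"]      # hypothetical particle classes

# A small CNN that turns a 64x64 detector "image" into scores for each class.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """images: (batch, 1, 64, 64) tensor of simulated detector views;
    labels: (batch,) tensor of class indices."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()     # work out how each shared weight should change
    optimizer.step()    # nudge the weights in that direction
    return loss.item()

# Toy batch standing in for labeled, simulated events.
print(training_step(torch.randn(8, 1, 64, 64), torch.randint(0, 3, (8,))))
```

Repeated over large sets of simulated events, steps like this are what gradually turn a generic image classifier into one that separates particle types.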
Scientists on experiments at the Large Hadron Collider hope to use deep learning to make their experiments more autonomous, says CERN physicist Maurizio Pierini. “We’re trying to replace humans on a few tasks. It’s much more costly to have a person watching things than a computer.”
CNNs promise to be useful outside of detector physics as well. On the astrophysics side, some scientists are working on developing CNNs that can discover new gravitational lenses, massive celestial objects such as galaxy clusters that can distort light from distant galaxies behind them. The process of scanning the telescope data for signs of lenses is highly time-consuming, and normal pattern-recognizing programs have a hard time distinguishing their features.
“It’s fair to say we’ve only begun to scratch the surface when it comes to using these tools,” says Alex Radovic, a postdoctoral fellow at The College of William & Mary who works on the NOvA experiment at Fermilab.
The upcoming data flood
Some believe neural networks could help avert what they see as an upcoming data processing crisis.
An upgraded version of the Large Hadron Collider planned for 2025 will produce roughly 10 times as much data. The Dark Energy Spectroscopic Instrument will collect data from about 35 million cosmic objects, and the Large Synoptic Survey Telescope will capture high-resolution video of nearly 40 billion galaxies.
Data streams promise to grow, but the once-exponential growth in the power of computer chips is predicted to falter. That means greater amounts of data will become increasingly expensive to process.
“You may need 100 times more capability for 10 times more collisions,” Pierini says. “We are going toward a dead end for the traditional way of doing things.”
Not all experiments are equally fit for the technology, however.
“I think this'll be the right tool sometimes, but it won’t be all the time,” Himmel says. “The more dissimilar your data is from natural images, the less useful the networks are going to be.”
Most physicists would agree that CNNs are not appropriate for data analysis at experiments that are just starting up, for example—neural networks are not very transparent about how they do their calculations. “It would be hard to convince people that they have discovered things,” Pierini says. “I still think there’s value to doing things with paper and pen.”
In some cases, the challenges of running a CNN will outweigh the benefits. For one, the data need to be converted to image form if they aren’t already. And the networks require huge amounts of data for the training—sometimes millions of images taken from simulations. Even then, simulations aren’t as good as real data. So the networks have to be tested with real data and other cross-checks.
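As an illustration of that first hurdle, here is one simple way raw detector hits might be binned into a pixel grid that a CNN can read. The grid size and coordinate ranges are placeholders, not any experiment’s real geometry.

```python
import numpy as np

def hits_to_image(x, y, energy, shape=(64, 64), x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
    """Bin raw detector hits (positions plus deposited energy) into a 2D
    'image' suitable for a CNN; pixel intensity is the summed energy deposit."""
    image, _, _ = np.histogram2d(
        x, y,
        bins=shape,
        range=[x_range, y_range],
        weights=energy,
    )
    return image.astype(np.float32)

# Toy event: 200 random hits standing in for one simulated particle interaction.
rng = np.random.default_rng(0)
img = hits_to_image(rng.random(200), rng.random(200), rng.exponential(1.0, 200))
print(img.shape)  # (64, 64)
```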
“There’s a high standard for physicists to accept anything new,” says Amir Farbin, an associate professor of physics at The University of Texas, Arlington. “There’s a lot of hoops to jump through to convince everybody this is right.”
Looking to the future
For those who are already convinced, CNNs spawn big dreams for faster physics and the possibility of something unexpected.
Some look forward to using neural networks for detecting anomalies in the data—which could indicate a flaw in a detector or possibly a hint of a new discovery. Rather than trying to find specific signs of something new, researchers looking for new discoveries could simply direct a CNN to work through the data and try to find what stands out. “You don’t have to specify which new physics you’re searching for,” Pierini says. “It’s a much more open-minded way of taking data.”
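The researchers quoted here don’t spell out how such a search would be run; one common approach, sketched below purely as an illustration, is to train a convolutional autoencoder on ordinary events and flag anything it reconstructs poorly.

```python
import torch
import torch.nn as nn

# Train this on ordinary events only; events it cannot rebuild well "stand out".
autoencoder = nn.Sequential(
    # encoder: squeeze the 64x64 image down to a compact summary
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    # decoder: try to rebuild the original image from that summary
    nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
)

def anomaly_score(event_image):
    """Higher score = the event looks less like the data the autoencoder saw."""
    with torch.no_grad():
        reconstruction = autoencoder(event_image)
    return torch.mean((event_image - reconstruction) ** 2).item()

print(anomaly_score(torch.randn(1, 1, 64, 64)))  # toy 64x64 event image
```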
Someday, researchers might even begin to tackle physics data with unsupervised learning. In unsupervised learning, as the name suggests, an algorithm trains on vast amounts of data without human guidance. Scientists would be able to give algorithms data, and the algorithms would figure out for themselves what conclusions to draw from it.
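As a toy illustration of the idea (the method and numbers here are placeholders, not anything proposed by the researchers quoted), a clustering algorithm can group events purely by similarity, without ever being told what the groups mean.

```python
import numpy as np
from sklearn.cluster import KMeans

# 1,000 fake "events", each described by 5 measured quantities; no labels anywhere.
rng = np.random.default_rng(0)
events = rng.normal(size=(1000, 5))

# The algorithm sorts the events into 3 groups on its own.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(events)
print(np.bincount(groups))  # how many events landed in each group
```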
“If you had something smart enough, you could use it to do all types of things,” Perdue says. “If it could infer a new law of nature or something, that would be amazing.”
“But,” he adds, “I would also have to go look for new employment.”