Photo: mock data. Credit: LSST simulation team

Mock data, real science

In scientific circles, “mock” is not always a four-letter word. Through mock-data competitions, astrophysicists check their work.

Data are flooding the fields of astrophysics and cosmology. Thanks to the invention of digital cameras and CCDs, astrophysicists today collect huge amounts of data—data from surveys, data from satellites, data from giant arrays of instruments working in tandem. Researchers are inundated by data, delirious with data, drowning in data. And that’s good news.

“More and more data means measurements are becoming more precise,” says Phil Marshall, a cosmologist with the Kavli Institute for Particle Astrophysics and Cosmology, located at SLAC National Accelerator Laboratory and Stanford University. “Over the last few decades, cosmology as a field has been transitioning to a precision science.”

While precision is important, researchers agree that it needs to go hand-in-hand with accuracy. There is a difference—a big one—between precision and accuracy, Marshall says. “With more data to analyze, error bars on the measurements are getting smaller. But that doesn’t mean accuracy is improving.”

To test that they’re interpreting their massive amounts of data correctly, astrophysicists create even more data: “mock” data. And while that may be counterintuitive at first, it actually makes a surprising amount of sense.

Marshall likens the accuracy problem that mock data are designed to address to measuring a table with a yardstick. “If you measure the table 10 times, you can average all those measurements together and the figure you’ll get is obviously more precise. But what if the yardstick is short by an inch? Your measurement is not going to be very accurate,” he says. This type of error, called systematic error, is not simply a sloppy measurement; it’s embedded within either the system being measured or the tool being used to do the measuring.
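To see the difference in action, here is a minimal Python sketch (not from Marshall's work; the table length, bias and noise level are invented for illustration). It simulates repeated measurements of a 72-inch table with a tool that secretly over-reads by two inches: taking more measurements tightens the average, but never brings it closer to the truth.

    import random

    TRUE_LENGTH = 72.0   # the table really is 72 inches long (invented for illustration)
    BIAS = 2.0           # systematic error: the faulty yardstick over-reads by 2 inches
    NOISE = 0.5          # random scatter on any single reading, in inches

    def measure():
        """One measurement: the true length, plus the hidden bias, plus random noise."""
        return TRUE_LENGTH + BIAS + random.gauss(0.0, NOISE)

    for n in (10, 100, 10_000):
        readings = [measure() for _ in range(n)]
        mean = sum(readings) / n
        # Averaging more readings shrinks the scatter on the mean (precision improves),
        # but the mean never approaches 72 inches: the bias (inaccuracy) remains.
        print(f"n = {n:6d}   average = {mean:.2f} in   (true length: {TRUE_LENGTH} in)")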

Such errors can lead to misinterpretations of data and even nonsensical discoveries, like a star called Methuselah that was originally estimated to be older than the universe itself. (Later analysis determined its age to be comparable to that of the universe.)

The challenge

To check for such unintended systematic error, astrophysicists have taken to fabricating realistic data based on known quantities, like starting with a table that’s exactly six feet long. Then they take this perfect table and effectively shave a centimeter off one end by introducing glitches that can affect measurements. They hand these mocked-up data to other groups to analyze.

What makes these “mock data challenges” so effective is that the teams analyzing the data don’t know what might throw their work off. The data are blinded, a technique borrowed from particle collider experiments, so the groups can’t be biased by expectation into looking for particular values.

For example, Marshall and his colleagues are currently hosting the “Strong Lens Time Delay Challenge.” Their mock data are based on what the Large Synoptic Survey Telescope will see when it starts collecting data toward the end of this decade. Researchers are already designing the algorithms they’ll use to analyze the 20 terabytes of data the LSST will collect every night; Marshall and his team plan to use the challenge to vet those algorithms and make sure what they find out about dark energy and galactic structure is accurate.

As the self-proclaimed “evil team,” they have created a data set representing 1000 quasars: the intensely bright cores of far-distant galaxies, each powered by a very active, energetic black hole. But the light streaming from these quasars is special. A massive astrophysical object, typically a galaxy or a cluster of galaxies, sits between each quasar and Earth, and the quasar’s light bends around it along several different paths. That lensing causes multiple images of the same quasar to appear to observers here on Earth. Even more spectacular, the photons are affected differently as they travel along their different paths, causing them to arrive on Earth at different times, sometimes weeks apart (the “Time Delay” of the challenge title).

Of course, the data created by the evil team have glitches, just as real-world data have glitches. The light can be affected anywhere along its trip between quasar and telescope—by exotic influences such as tiny changes to the light’s path caused by the gravity of individual stars it passes and by more mundane effects such as the temperature at the telescope, atmospheric instabilities or which filter is on the camera at the time the data are captured.
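The flavor of the evil team’s task can be sketched in a few lines of Python. This is only a toy with made-up numbers, not the challenge’s actual simulation code: it builds two images of one quasar’s light curve, hides a known delay between them, and then injects the kinds of glitches described above, including photometric noise, a slow microlensing-like drift and nights lost to weather or filter changes.

    import numpy as np

    rng = np.random.default_rng(42)

    def intrinsic(t):
        """Toy intrinsic quasar variability: two superposed oscillations (illustrative only)."""
        return np.sin(2 * np.pi * t / 90.0) + 0.3 * np.sin(2 * np.pi * t / 17.0)

    # Invented numbers: nightly sampling over one observing season,
    # with a "true" delay baked in by the mock-data makers.
    days = np.arange(0.0, 180.0, 1.0)
    true_delay = 30.0                             # image B lags image A by 30 days
    flux_a = intrinsic(days)
    flux_b = 0.8 * intrinsic(days - true_delay)   # delayed, slightly demagnified copy

    # Inject "glitches": photometric noise, a microlensing-like drift on one image,
    # and gaps where weather or a filter change prevented an observation.
    flux_a += rng.normal(0.0, 0.05, days.size)
    flux_b += rng.normal(0.0, 0.05, days.size) + 0.002 * days
    observed = rng.random(days.size) > 0.3        # roughly 30 percent of nights are lost

    mock_a = np.column_stack([days[observed], flux_a[observed]])
    mock_b = np.column_stack([days[observed], flux_b[observed]])
    print(f"{mock_a.shape[0]} epochs per image; hidden true delay = {true_delay} days")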

The full 1000-quasar challenge data set will be shared with LSST data analysts (called—what else—the “good teams”) on December 1. Then, beginning in July of next year, Marshall and his team will begin analyzing the analyses to see just how close they come to reality. Or, rather, mock reality.

“We don’t expect the measurements to agree,” Marshall says, “but we hope the ways they don’t agree will help us recognize where the problems are,” as well as give the team clues about how to work around those problems. “When we look at people’s results, we want to look for trends that reflect how different problems affect accuracy.”

For example, measuring how far tardy photons lag behind their brethren seems simple, but doing so not only as precisely but as accurately as possible will enable cosmologists to refine important cosmological parameters, such as the Hubble constant, the rate at which galaxies recede from one another as the universe expands; dark energy reveals itself in how that expansion changes over time.
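On the good teams’ side, recovering that lag amounts to sliding one light curve past the other until the two line up. The toy sketch below, with invented numbers and a deliberately simple grid search rather than any contestant’s actual algorithm, shows the basic idea.

    import numpy as np

    rng = np.random.default_rng(0)

    def signal(t):
        """Toy intrinsic variability shared by both lensed images (illustrative only)."""
        return np.sin(2 * np.pi * t / 60.0) + 0.4 * np.sin(2 * np.pi * t / 23.0)

    # Invented numbers: two irregularly sampled, noisy light curves of the same
    # quasar, with image B lagging image A by a delay the analyst must recover.
    true_delay = 25.0
    t_a = np.sort(rng.uniform(0.0, 200.0, 120))
    t_b = np.sort(rng.uniform(0.0, 200.0, 120))
    f_a = signal(t_a) + rng.normal(0.0, 0.05, t_a.size)
    f_b = signal(t_b - true_delay) + rng.normal(0.0, 0.05, t_b.size)

    def mismatch(delay):
        """Shift image B back by a trial delay and compare it with image A."""
        f_b_on_a = np.interp(t_a, t_b - delay, f_b)   # interpolate B onto A's epochs
        return np.mean((f_a - f_b_on_a) ** 2)

    trial_delays = np.arange(0.0, 60.0, 0.25)
    best = trial_delays[np.argmin([mismatch(d) for d in trial_delays])]
    print(f"recovered delay: {best:.2f} days (true value: {true_delay})")
    # For a given lens model, the inferred Hubble constant scales inversely with
    # the measured delay, which is why getting this number right matters so much.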

And there is the real motivation: improving LSST’s ability to probe dark energy. With precise, and accurate, measurements of known properties, what’s left in the data will be the unknown.

That’s nothing to mock.

 
