Wan-Lin Hu talks with accelerator systems operator Kabir Lubana in the lab’s main Accelerator Control Room.
Courtesy of SLAC National Accelerator Laboratory

A day in the life of a human-in-the-loop engineer

Wan-Lin Hu’s job is to improve the way people and artificial intelligence collaborate to run SLAC’s complex machines.

Growing up in Taiwan, Wan-Lin Hu decided in junior high school that she wanted to be an engineer, but not just any type of engineer.

“I wanted to design systems that work without any human input, beyond the usual need to turn them on and off and adjust their settings,” she says with a laugh. “The human decision-making process is quite complex, and articulating and predicting how it works is quite challenging.”

So, when she got to National Taiwan University, she focused on designing automated control systems that reduce the need for human input, and used that approach to build an atomic force microscope and a bio-inspired robot that slithered like a snake, among other projects.

Six years later, though, her path took a surprising turn. Her PhD adviser at Purdue University specialized in human-centered design, and Hu’s expertise in advanced control theory allowed them to join forces to investigate human-machine systems, which fine-tune the way people interact with complex machines so that the combination performs better than either could alone.

Today Hu is an associate staff scientist specializing in “human-in-the-loop engineering” at the Department of Energy’s SLAC National Accelerator Laboratory. The lab is home to things like particle accelerators and electric power systems that are far too complex for people to run on their own, but still need a human touch to keep them on the right track. 

Wan-Lin Hu, an associate staff scientist and human-in-the-loop engineer in SLAC’s Grid Integration Systems and Mobility lab (GISMo).

Courtesy of SLAC National Accelerator Laboratory

“When people design automatic systems, like self-driving cars, they push very hard to make everything work automatically, like magic,” Hu says. “But in reality, humans are still very important. They can’t deal with the same amount of complex data as machines can, but they’re much better at adapting to changing situations.”

Learning from the best

For her current project, Hu is observing the roughly 20 people who operate an incredibly large, complex and delicate machine, the Linac Coherent Light Source X-ray free-electron laser, from banks of monitors, dials and buttons in the lab’s Accelerator Control Room.

The goal is to learn how experienced operators do things, knowledge that takes years to accumulate and can be hard to put into words, and apply that to training novice operators so they can get up to speed more quickly.

She’s also working with Daniel Ratner, head of SLAC’s machine learning initiative, on improving collaboration between artificial intelligence systems and humans in the control room.

“One thing people often criticize about AI is its black box nature,” Hu says. “It typically works, but you don’t know why. So, one thing we want to do is make the interface between them more user-friendly, so operators can do a better job of interpreting the AI and step in when needed to get better results.”

Automation’s downside

Human-in-the-loop engineering grew out of a time when airplanes became so complex that many of their operational functions had to be automated, says David Chassin, group manager for SLAC’s Grid Integration Systems and Mobility lab. The assumption, he says, was that this would make them both safer and easier to operate.

But too much automation can have a downside, Ratner says: It can deprive human operators of the experience they need to take decisive action in a crunch.

“If a system becomes too automated, human operators have less visibility into what’s going on under the hood,” Ratner says. “And that results in systems that are harder to recover when something goes wrong.”

Trust and good feedback

There are also more subtle challenges to human-in-the-loop engineering, Hu says.

One is building trust between the human and the machine. “When a system responds in an unexpected way, its operator can lose confidence in it and hesitate to do certain things,” she says. “On the flip side, the operator may be giving the machine the wrong input based on a false idea of how it should behave.”

Another is giving the human operators feedback in a way that enhances their performance instead of degrading it.

Hu will spend the next nine months observing how SLAC’s accelerator operators do what they do and asking them why they do it that way. In the end, she says, “We will create a model that describes the decision-making and problem-solving process to make knowledge transfer easier.”

Editor's note: A version of this article was originally published by SLAC National Accelerator Laboratory.