Mind-Reading Robots: ISE professors leap into the world of brain-computer interfaces


Researchers at ISE are laying the groundwork for cooperative robots that might be able to read our minds.

Written by Doug Peterson

“I’m not a mind reader, you know.”

This classic line is typically delivered when someone gets angry about being expected to do a chore without even being asked. But what if you actually had an assistant that could read your mind and know when you needed help?

In the future, such a helper may very well be a robot. Researchers in the University of Illinois Department of Industrial and
Systems Engineering are laying the groundwork for cooperative robots—or “cobots”—that might be able to read our minds.

The idea of a mind-reading robot may sound like something from a dystopian movie, but Yao Li, an ISE graduate student, says, “This kind of robot could not read everything you’re thinking.” It would be designed to read your brain signals only in very specific situations.

“We’re trying to see what ways robots can be more friendly, and the first idea we had is a robot that senses if the human needs help,” says ISE Professor Thenkurussi “Kesh” Kesavadas.

Specifically, Kesavadas says they are examining two different approaches to brain-computer interfaces. With one system, people use brain signals to give specific instructions to a robot. In another approach, the robot is in the background, constantly monitoring your thought processes, and it only steps in when it detects that you need help.

To test whether this second approach is feasible, Kesavadas and Li began with an industrial application. They developed an assembly-line robot that can read the electrical signals from a person’s brain to detect when he or she has spotted a defective part.

In their lab, various mechanical parts move along a conveyor belt, where a camera takes video of them passing by. The human operator, meanwhile, sits at a computer screen with electrodes placed on the scalp to read electrical activity in the
brain. The operator is then shown four images of various parts at a time, with each picture flickering at a different frequency. As the operator stares at a photo, his or her brain activity will match the same frequency.

For instance, when the operator stares at a photo that flickers at 6 hertz, brain activity will match that frequency, telling the system which photo the person is staring at. If an operator gazes at the image of a particular part for more than a few seconds, the frequency will tell the robot that the piece is defective. The robot then plucks the faulty part from the conveyor belt without being
told to do it.
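
The frequency-matching step described above resembles a standard steady-state visually evoked potential (SSVEP) technique. The following is a minimal, illustrative sketch of how such detection might work, assuming a single EEG channel sampled at 250 Hz and four flicker frequencies; the frequencies, window length, and dwell threshold are assumptions for illustration, not the lab’s actual parameters.

```python
# Hypothetical sketch of SSVEP-style frequency matching, as described above.
# All names, frequencies, and thresholds here are illustrative assumptions.
import numpy as np

SAMPLE_RATE_HZ = 250
FLICKER_FREQS_HZ = [6.0, 7.0, 8.0, 10.0]   # one flicker rate per on-screen image
DWELL_SECONDS = 3.0                         # a sustained gaze this long flags a defect

def dominant_flicker(eeg_window: np.ndarray) -> float:
    """Return the flicker frequency whose power is strongest in this EEG window."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / SAMPLE_RATE_HZ)
    powers = []
    for f in FLICKER_FREQS_HZ:
        band = (freqs > f - 0.4) & (freqs < f + 0.4)   # narrow band around each flicker rate
        powers.append(spectrum[band].sum())
    return FLICKER_FREQS_HZ[int(np.argmax(powers))]

def defective_part(windows: list[np.ndarray], window_seconds: float = 1.0) -> float | None:
    """If the operator's gaze stays on one image long enough, return its frequency."""
    needed = int(DWELL_SECONDS / window_seconds)
    streak, current = 0, None
    for w in windows:
        f = dominant_flicker(w)
        streak = streak + 1 if f == current else 1
        current = f
        if streak >= needed:
            return current   # cue the robot to remove the part tied to this image
    return None
```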

The focus so far has been on industrial applications, but Kesavadas sees their work as having medical uses as well. For instance, a mind-reading robot in the operating room could detect when a doctor needs a scalpel and then hand it to the surgeon.

The assembly-line robotic system works with about 90% accuracy, Li says, though results vary from user to user. Right now, the greatest limitation is the sensors placed on the outside of a person’s scalp.

“The signals are weaker and more diffuse than when electrodes are implanted in the brain,” Kesavadas says.

Although electrodes directly implanted in the brain provide much better signals, they are only desirable in unique situations, such as when a disabled person wants to regain control of an arm or leg.

“You cannot expect someone who is working in a factory to say, ‘Go ahead and drill a hole in my head,’” Kesavadas says. Therefore, a key to many applications will be improving the sensors placed on the exterior of a person’s scalp.

In another brain-computer interface study, Richard Sowers, an Illinois professor with a joint appointment in ISE and mathematics, is also making use of electrodes attached to a “brain cap”—a cap with 60 sensors that attach to the scalp and detect brain waves through electroencephalography, or EEG.

Sowers has been working with Manuel Hernandez, a U of I professor of kinesiology who directs the Mobility and Fall Prevention Lab, in probing the connection between our perception and the fear of falling—a common problem for the elderly. In fact, the Centers for Disease Control and Prevention reports that “in 2014 alone, older Americans experienced 29 million falls causing 7 million injuries and costing an estimated $31 billion in Medicare costs.” The CDC says that falls are “the number one cause of injuries and deaths from injury among older Americans.”

“When someone is very afraid of falling, it restricts their enjoyment of getting out and around,” Sowers says. The project, which began at the end of 2016, places subjects in a virtual world. Subjects walk on a treadmill while wearing the brain cap and a virtual reality headset that shows them hiking through a world with many sudden drop-offs. The EEG brain cap can detect their anxiety.

“Our students have built a virtual reality world with mountains, and every now and then a deep chasm will appear while you’re walking along this fairly narrow path,” Sowers says. “Just think of your favorite action movie where Indiana Jones is walking over a deep ravine on a narrow path. We’re looking at how the brain reacts to that.”

“We want to create a feedback loop,” Sowers adds. “For example, if the system senses anxiety, the next virtual reality cliff will not be so deep.” The goal, he says, is to understand the connection between what people see and their acrophobic anxiety—the fear of heights. Researchers will then be able to detect the drop-off height that people are comfortable with—a height that will vary
from person to person.
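
One way to picture the feedback loop Sowers describes is as a simple adaptive rule: if the EEG-based anxiety estimate runs high, the next virtual cliff is made shallower. The sketch below is a hedged illustration of that idea; the anxiety score, thresholds, and depth limits are assumptions, not the lab’s actual values.

```python
# Illustrative closed-loop rule: adapt the depth of the upcoming virtual drop-off
# to the subject's measured anxiety. All parameters are hypothetical.

def next_cliff_depth(current_depth_m: float, anxiety_score: float,
                     high: float = 0.7, low: float = 0.3,
                     min_depth_m: float = 1.0, max_depth_m: float = 30.0) -> float:
    """Return the depth of the next cliff given an anxiety estimate in [0, 1]."""
    if anxiety_score > high:         # subject is anxious: ease off
        depth = current_depth_m * 0.8
    elif anxiety_score < low:        # subject is comfortable: probe a deeper drop
        depth = current_depth_m * 1.2
    else:                            # within the comfort band: hold steady
        depth = current_depth_m
    return max(min_depth_m, min(depth, max_depth_m))
```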

“By understanding this, maybe we can help elderly people retrain their brain and build mental defense mechanisms against some of their fears,” Sowers explains. “This will allow them to get out and around more.” 

In a pair of other studies in Kesavadas’s lab, he and Li are trying to improve communication between the computer and the human brain. One study involves a robot performing welding work; another sends a robot along a particular path. They are creating a language that lets people use their thoughts to tell robots to do these complicated tasks without issuing a command for every step along the way.

For instance, when using your thoughts to tell a robot to open a door, you would not have to issue commands for every step in the process—walk to the door, hold the handle, turn the handle, and so on. The robot would automatically know all of the steps based on the context of your command.
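
A minimal sketch of that idea follows, assuming a hypothetical task library that expands one decoded high-level command into a sequence of primitive robot actions; the command names and primitives are illustrative, not the lab’s actual vocabulary.

```python
# Hypothetical expansion of a brain-issued, high-level command into primitive
# robot steps, so the operator never spells out each action individually.

TASK_LIBRARY = {
    "open_door": ["walk_to_door", "grasp_handle", "turn_handle", "push_door"],
    "rotate_weld_part": ["pause_weld", "grip_fixture", "rotate_fixture_90", "resume_weld"],
    "adjust_camera": ["locate_camera", "tilt_camera", "confirm_view"],
}

def execute(command: str, send_to_robot) -> None:
    """Expand a decoded high-level command into primitive steps and dispatch them."""
    steps = TASK_LIBRARY.get(command)
    if steps is None:
        raise ValueError(f"Unknown command: {command}")
    for step in steps:
        send_to_robot(step)   # each primitive is issued without further brain input

# Example: a single decoded "open_door" intent drives the whole sequence.
execute("open_door", send_to_robot=print)
```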

In a medical setting, a surgeon could tell a mind-reading robot to adjust a camera being used to take images inside the body, Li says. And in manufacturing, an operator who needs an extra hand while welding could tell the robot to rotate the part being welded—all using brain activity.

“It’s interactive and it’s powerful,” Kesavadas says. “Nobody else is doing this.”

In the past five years, interest in robots and brain-computer interfaces (BCI) has surged. The National Science Foundation started the National Robotics Initiative in 2011, and Tesla CEO Elon Musk began backing a BCI company, Neuralink, in early 2017.

Kesavadas has been working on robotics and virtual reality since his graduate school days at Penn State in the early 1990s. He developed the Robotic Surgery Simulator, or RoSS—a virtual reality simulator that teaches novice surgeons how to use the da Vinci surgical robot.

RoSS is used in hospitals around the world, and Kesavadas’s groundbreaking work on this medical/engineering interface makes him the ideal person to serve as director of the Health Care Engineering Systems Center at Illinois.

“I have been working on the theme of interfacing with robots for 20 years,” he says. “I’m looking for better and better ways to control robots, whether they are industrial robots or surgical robots.”

Robots will continue to be used in areas where they are better suited than humans, particularly in hazardous environments or in situations that call for a worker who won’t get fatigued, Kesavadas says.

“But this doesn’t mean that one day robots will become so smart that they say, okay now I don’t need my boss. I’ll do it myself,” he notes. “I don’t think they’ll be taking over the world anytime soon.”


This story was published March 9, 2018.