MIT Takes A Step Toward Mind-Controlled Robots
Researchers at MIT's Computer Science and Artificial Intelligence Lab have created a system where humans can guide robots with their brainwaves. This may sound like something out of a sci-fi novel, but seamless human-robot interaction is the next major frontier for robotics research.
For now, the MIT system can handle only simple binary tasks, such as correcting a robot as it sorts objects into two boxes. But CSAIL Director Daniela Rus sees a future where we could control robots in more natural ways, rather than having to program them for specific tasks, such as allowing a supervisor on a factory floor to control a robot without ever pushing a button.
"Imagine you look at the robots, and at some point one robot is not doing the job correctly," Rus explained. "You will think that, you will have that thought, and through this detection you would in fact communicate remotely with the robot to say 'stop.' "
Rus admits the MIT development is a baby step, but she insists it's an important step toward improving the way humans and robots interact.
Currently, most communication with robots requires thinking in a particular way or vocalizing a command, which can be exhausting.
"We would like to change the paradigm," Rus said. "We would like to get the robot to adapt to the human language."
The MIT paper proves it's possible to have a robot read your mind — at least when it comes to a super simplistic task. And Andres F. Salazar-Gomez, a Boston University Ph.D. candidate with the CSAIL research team, says this system could one day help people who can't communicate verbally.
For this study, MIT researchers used a robot named Baxter from Rethink Robotics.
Baxter had a simple task: Put spray paint into the box marked "paint" and a spool of wire into the box labeled "wire." A volunteer wearing an EEG cap, which reads electrical activity in the brain, sat across from Baxter and watched him work. Whenever the volunteer noticed a mistake, their brain naturally emitted a signal known as an "error-related potential."
"You can use [that signal] to tell a robot to stop or you can use that to alter the action of the robot," Rus explained.
The signal is detected in real time. The system then relays it to Baxter, who understands he's wrong, blushes to show he's embarrassed, and corrects his behavior.
The MIT system correctly identified the volunteer's brain signal and then corrected the robot's behavior 70 percent of the time.
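The closed loop described above — detect an error-related potential from an EEG epoch, then flip the robot's binary choice — can be sketched in a few lines of Python. Everything here is illustrative: the function names, the detection window, the threshold, and the toy amplitude-based classifier are assumptions for demonstration, not the actual CSAIL pipeline.

```python
# Hypothetical sketch of an ErrP-driven correction loop for a binary
# sorting task. Names, window, and threshold are illustrative only.

def detect_errp(epoch_uv, window=(25, 60), threshold_uv=3.0):
    """Toy detector: flag an error if the mean amplitude (microvolts)
    in a post-feedback window of the EEG epoch exceeds a threshold."""
    start, end = window
    segment = epoch_uv[start:end]
    mean_amp = sum(segment) / len(segment)
    return mean_amp > threshold_uv

def correct_action(action, errp_detected):
    """Binary task: on a detected ErrP, switch to the other bin."""
    other = {"paint": "wire", "wire": "paint"}
    return other[action] if errp_detected else action

# Simulated epochs: a quiet baseline vs. a deflection inside the window.
quiet = [0.5] * 100
errp = [0.5] * 100
for i in range(25, 60):
    errp[i] = 6.0  # simulated error-related deflection

print(correct_action("paint", detect_errp(quiet)))  # prints "paint"
print(correct_action("paint", detect_errp(errp)))   # prints "wire"
```

A real system would replace the threshold rule with a trained classifier, which is where imperfect accuracy figures like the 70 percent reported above come from.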
Making Robots Effective 'Collaborators'
"I think this is exciting work," said Bin He, a biomedical engineer at the University of Minnesota, who published a paper in December that showed people can control a robotic arm with their minds.
He was not affiliated with the MIT research, but he sees this as a "clever" application in a nascent but growing field.
Researchers say there's an increasing desire to find ways to make robots effective "collaborators," not just obedient servants.
"One key aspect of collaboration is being able ... to know when you're making a mistake," said Siddhartha Srinivasa, a professor at Carnegie Mellon University who was not affiliated with the MIT study. "What this paper shows is how you can use human intuition to boot-strap a robot's learning of what its world looks like and how it can know right from wrong."
Srinivasa says this research could potentially have key implications for prosthetics, but cautions it's an "excellent first step toward solving a harder, much more complicated problem."
"There's a long gray line between not making a mistake and making a mistake," Srinivasa said. "Being able to decode more of the neuronal activity... is really critical."
And Srinivasa says that's a topic that more scientists need to start exploring.
Potential Real-World Applications
As for MIT's Rus, it's not just the toy demonstration that's exciting, it's the potential real-world applications. She imagines a future where anybody can communicate with a robot without any training — a world where this technology could help steer a self-driving car or clean up your home.
"Imagine ... you have your robot pick up all the toys and socks from the floor, and you want the robot to put the socks in the sock bin and put the toys in the toy bin," she said.
She says that would save her a lot of time, but for now the house cleaner that can read your mind is still a dream.
CSAIL's paper detailing these findings was published online this week. It has also been accepted at the IEEE International Conference on Robotics and Automation, which is set for Singapore this May.
This segment aired on March 8, 2017.