
Welcome to the Age of Brain-Controlled Robots

A new system reads your brain waves to issue corrective instructions to a robot in real time.

by Kastalia Medrano
Photo: Jason Dorfman, MIT CSAIL

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University have designed a system that allows humans to correct a robot’s errors in real time.

In addition to moving us further down the path toward seamless, intuitive communication between robots and humans, the technology could have implications for individuals who can’t communicate verbally and would like to.

While we may one day use neural lace to connect our brains to artificial intelligence that makes us super-intelligent, this advancement may lead to humans connecting their brains to powerful machines to do their bidding.

Volunteers for this project wore an electroencephalography monitor, commonly known as an EEG — you’ll recognize the swim-cap-like item covered with detachable electrodes from any movie where someone’s brain waves require monitoring. As the robot performed basic object-sorting tasks, the humans watched; as soon as they noticed an error on the robot’s part, the system classified that perception from their EEG signals and allowed them to essentially telepathically order a correction, which sounds pretty gratifying, tbh. The system was able to correctly identify error-related potentials — the signals our brains generate when they register mistakes, and which are abbreviated as the delightfully onomatopoeic “ErrPs” — 70 percent of the time. With continued finessing of the system, the team expects that figure to rise to 90 percent.

For now, this only works with simple binary tasks. The robot sorts things into bins, and when it sorts a thing into the wrong bin, you can tell it so with the ErrP of your superior brain. Because there’s no intermediary step — no buttons to push, no code to write — the corrective instructions can reach the robot in real time, in just 10 to 20 milliseconds.
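To make that loop concrete, here is a minimal sketch of the idea in Python. To be clear, this is not the CSAIL team’s code: the classifier, the 0.7 confidence threshold, and the bin names are all hypothetical stand-ins, just enough to show the shape of “watch, detect an ErrP, flip the binary choice.”

```python
# Hedged sketch of a closed-loop ErrP pipeline. classify_errp, the
# threshold, and the bins are invented for illustration; the real system
# decodes trained EEG features in real time.
import numpy as np

ERRP_THRESHOLD = 0.7  # hypothetical classifier confidence cutoff


def classify_errp(eeg_window: np.ndarray) -> float:
    """Return a pseudo-probability that this window contains an ErrP.

    Stand-in for a trained classifier: here we just squash the window's
    signal variance through a sigmoid.
    """
    return float(1.0 / (1.0 + np.exp(-(eeg_window.var() - 1.0))))


def closed_loop_step(eeg_window: np.ndarray, robot_choice: str) -> str:
    """If the operator's brain flags an error, flip the binary choice."""
    if classify_errp(eeg_window) > ERRP_THRESHOLD:
        # Binary task: the correction is simply "the other bin."
        return "wire" if robot_choice == "paint" else "paint"
    return robot_choice


rng = np.random.default_rng(0)
quiet = rng.normal(scale=1.0, size=256)  # ordinary background EEG
spike = rng.normal(scale=3.0, size=256)  # exaggerated "error" response
print(closed_loop_step(quiet, "paint"))  # -> "paint" (no correction)
print(closed_loop_step(spike, "paint"))  # -> "wire"  (corrected)
```

The point of the no-intermediary design shows up in the second function: nothing sits between detection and correction, which is how the latency stays in the tens of milliseconds.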

A paper detailing the procedure will be presented in May at the IEEE International Conference on Robotics and Automation (ICRA) in Singapore.

The robot has, of course, been kind of set up to publicly fail here. Add to this the fact that it has an actual face and goes by Baxter, a name generally reserved for Very Good Dogs, and you may be inclined to feel bad, to which I say fuck that. Baxter may have Bambi eyes and cheeks that redden bashfully when it makes a mistake, but it is still not better than a human, and its Etch-a-Sketch-looking head shouldn’t stop you from validating your own intelligence via your ability to discern that a can of spray paint goes in the bin labeled “Paint.”

Besides, Baxter is actually a seasoned pro from Rethink Robotics in Boston, Massachusetts, and it’s been advancing human-robot relations since 2012. ErrPs register with a strength proportional to the severity of the mistake, which means that from binary choices the researchers should be able to progress to multiple choices and eventually, the thinking goes, to complex, fluid, real-time conversation between robots and humans.
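If a stronger ErrP really does track a bigger mistake, one way to imagine the multiple-choice step is to use that graded signal to rank several candidate corrections instead of flipping a binary one. The sketch below is purely speculative, not anything from the paper; the function, its 0.3 cutoff, and the bin list are all invented for illustration.

```python
# Hypothetical sketch: map a graded ErrP score (0.0-1.0) to one of
# several alternative bins, reaching "further" from the robot's current
# choice as the error signal gets stronger.
def rerank_choices(errp_magnitude: float, candidates: list[str],
                   current: str) -> str:
    if errp_magnitude < 0.3:  # weak or no ErrP: keep the current choice
        return current
    others = [c for c in candidates if c != current]
    # Scale the graded signal onto the list of alternatives.
    index = min(int(errp_magnitude * len(others)), len(others) - 1)
    return others[index]


bins = ["paint", "spray cans", "wire", "scrap"]
print(rerank_choices(0.2, bins, "paint"))  # -> "paint" (no correction)
print(rerank_choices(0.9, bins, "paint"))  # -> "scrap" (strong ErrP)
```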

The paper doesn’t get into the potential real-world uses other than to say that there will probably be some, but this system could conceivably one day facilitate control of prosthetics for paraplegics or people who have lost control of their facial muscles due to a stroke. Some of the researchers are already planning its application for individuals with locked-in syndrome, a neurological condition in which a patient is so fully paralyzed they can move only their eyes.

“That’s basically the goal I have in my head,” first author Andres F. Salazar-Gomez tells Inverse. “This technology is all about you finding a mistake in your brain; we’re wired to find mistakes. Most technology locked-in syndrome patients have access to requires training, or has obnoxious visual stimuli like flashing lights… most [of these patients] communicate by blinking already. This would remove the middle man, a computer or a person with a board showing letters.”

That’s still a few years off, though. In the meantime, the team thinks the system could potentially be adapted to self-driving cars as a safety feature, for when a human passenger notices something the system doesn’t. All of which is to say I suppose we should be nice to Baxter, even though in the name of science we may be mean. It’s learning.

Abstract
Communication with a robot using brain activity from a human collaborator could provide a direct and fast feedback loop that is easy and natural for the human, thereby enabling a wide variety of intuitive interaction tasks. This paper explores the application of EEG-measured error-related potentials (ErrPs) to closed-loop robotic control. ErrP signals are particularly useful for robotics tasks because they are naturally occurring within the brain in response to an unexpected error. We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task. We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback. The design and implementation of the complete system is described, and results are presented for realtime closed-loop and open-loop experiments as well as offline analysis of both primary and secondary ErrP signals. These experiments are performed using general population subjects that have not been trained or screened. This work thereby demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control, and moves closer towards the goal of real-time intuitive interaction.