Robots aren’t coming for your jobs — they’re coming for your brains.
According to Houtan Jebelli of Penn State, that could be the future of work.
Jebelli, an assistant professor of engineering, and his team are taking advantage of new advances in brain-computer interfaces to create a sort of cybernetic workforce. Using EEG embedded directly into construction workers’ hard hats, the team has devised a way for robots to read the minds of their human co-workers and adjust their work accordingly.
“This is probably the future of human-machine interaction, especially in field-oriented industries,” Jebelli says.
But Johnny Långstedt, a doctoral candidate in comparative religion at Åbo Akademi University with a research focus on business, tells Inverse that recent research of his suggests that this type of automation may also create just as many problems as it attempts to solve, particularly when it comes to workplace satisfaction.
“We're motivated by a certain set of value priorities,” Långstedt explains to Inverse, such as creativity or order. “And those [values] drive our actions and interpretations of different situations, and also affect which occupations we choose to work in.” Automation changing what values our jobs hold, for example going from monotonous to more creative, is where the trouble begins, he says.
The emotional intelligence this EEG technology is attempting to create walks the line between WALL-E-like endearment and Orwellian panoptic control, but like any modernizing workplace technology, it’s coming for us whether we like it or not.
WHAT IS HUMAN-ROBOT COLLABORATION?
The narrative of workplace automation often dictates that robots or machines will replace humans in the workforce in fields ranging from assembly lines and hair salons to the operating room. But human-robot collaboration (HRC) offers a different viewpoint and proposes that new workplace technology (particularly, robots) will instead work alongside humans, like co-workers.
This is particularly prevalent, write the Penn State researchers in their study published in the journal Automation in Construction this April, in fields like construction where there’s little opportunity for robotic workers and human workers to work separately from one another. To accomplish dynamic tasks, like laying brick, it’s more likely that human workers and robots will have to work together.
“I believe our proposed technology might be powerful for the future.”
But this is much easier said than done, says Jebelli, especially because building trust between humans and robots can be a difficult task.
“When it comes to human-robot collaboration, building that trust, and even providing [HRC] training for the workers is very important,” says Jebelli. Otherwise, it can be easy to misinterpret the intentions of your robot companion, he explains.
Human workers may get annoyed at each other when a task goes south, but this is amplified if their co-worker is a robot because there’s little opportunity to understand or discuss why different choices were made, e.g. speeding through a detail-oriented task.
Creating a “human-in-the-loop” system for these robots that can help them better understand and meet the needs of their human co-workers will be essential to building this trust, says Jebelli. And that’s exactly what they set out to do when they stuck an EEG cap inside a bunch of hard hats and let participants loose with a team of robot workers.
HOW DOES IT ACTUALLY WORK?
The Penn State team developed an HRC framework that used EEG data collected from participants as input for robots to help them interpret the cognitive load of their human coworkers.
The framework worked like this:
- EEG data was collected via the EEG-lined hard hats and then digitally “cleaned” of excess electrical noise (e.g. eye blinking or nearby electronics), explains Yizhi Liu, the study’s first author and a Penn State doctoral student
- A machine-learning algorithm then processed the data to find distinct stress patterns
- These patterns were then classified according to whether the worker’s cognitive load was increasing, decreasing, or holding steady
- The robot would then react accordingly, either slowing down, speeding up, or keeping its pace steady to accommodate the worker
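In rough pseudocode-like terms, the last two steps of that loop can be sketched as below. This is only an illustrative sketch: the function names, the threshold, and the simple baseline comparison are assumptions for clarity, standing in for the study’s actual machine-learning classifier.

```python
from enum import Enum

class CognitiveLoad(Enum):
    INCREASED = "increased"
    DECREASED = "decreased"
    STEADY = "steady"

def classify_load(band_power, baseline, threshold=0.15):
    """Stand-in for the study's ML classifier: compare cleaned EEG
    band power to the worker's own baseline and label the change."""
    delta = (band_power - baseline) / baseline
    if delta > threshold:
        return CognitiveLoad.INCREASED
    if delta < -threshold:
        return CognitiveLoad.DECREASED
    return CognitiveLoad.STEADY

def adjust_robot_pace(pace, load, step=0.1):
    """Slow the robot when the worker's load rises, speed it up when
    the load falls, and hold steady otherwise."""
    if load is CognitiveLoad.INCREASED:
        return max(0.1, pace - step)
    if load is CognitiveLoad.DECREASED:
        return pace + step
    return pace
```

For example, a reading well above baseline (`classify_load(1.3, 1.0)`) would return `CognitiveLoad.INCREASED`, and the robot would shave a step off its working pace.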
One interesting problem for the team to overcome was how to separate general stress from work stress, to ensure the robot was receiving up-to-date cognitive load information. To account for this, Jebelli says they first established baseline EEG stress levels by having the participants wear the EEG-lined hard hats for a few minutes at the beginning of the trial.
“For example, assume I had a bad weekend... like I got a rejection for one of my proposals or something like that,” says Jebelli. “When I go to the job site, this will be added to the daily stress. This is something that our framework will consider.”
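That baseline step might look something like the sketch below. The amplitude threshold, window handling, and function names are assumptions, not details from the paper; the point is that pre-existing stress is folded into the per-worker baseline rather than misread as task-induced load.

```python
import statistics

def clean(samples, artifact_threshold=100.0):
    """Crudely reject artifact samples (eye blinks show up as large
    amplitude spikes); the study uses proper signal-processing filters."""
    return [s for s in samples if abs(s) < artifact_threshold]

def establish_baseline(pre_shift_samples):
    """Average a few minutes of pre-shift readings into a per-worker
    baseline, so that stress a worker brings to the site (a bad
    weekend, a rejected proposal) is not mistaken for task load."""
    return statistics.fmean(clean(pre_shift_samples))
```

Because the classifier then measures deviation from this personal baseline, a worker who arrives stressed is not treated as if the task itself were overloading them.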
In the case of the brick-laying robot analyzed in the study, the robot can use this information to adjust its pace accordingly, explains study co-author and Penn State postdoctoral scholar Mahmoud Habibnezhad.
“When the robot changes its performance the worker understands [why],” says Habibnezhad. “The worker [can] understand that this robot is slowing down because he’s tired ... That instills trust over time.”
The authors write that this approach was more than 80 percent accurate at making these adjustments appropriately.
WHY DO HUMANS WANT THIS?
Dynamic jobs like construction are not only physically demanding but can be dangerous as well. In 2017, construction jobs experienced 971 fatalities in the U.S., the highest death rate for the industry in a decade.
“You would probably see a decline in job satisfaction and engagement since the job doesn't really fulfill [their] basic needs.”
Collaborative automation of this industry can not only relieve some of the physical work, argue the authors, but can also account for instances where mental stress may have led to poor or dangerous decisions.
WHAT ARE THE ETHICAL IMPLICATIONS?
Yet despite its potential benefits, this technology is not without its ethical faults or repercussions. In particular, do workers really want to give access to their thoughts to robots, let alone their bosses?
For now, Jebelli says there are guidelines in place to prevent tampering, including anonymizing worker data, but adds that it’s a concern to keep on top of.
There is also a possibility, however, that this kind of automation will fundamentally change how workers appreciate their job, argues Långstedt.
While not commenting on this study directly, Långstedt points to his own study, published this February in the journal Labour & Industry, which analyzed European survey data covering over 126 occupations and found that automating away rote or repetitive tasks in a workplace can lead to worker dissatisfaction, as the value workers initially found in the job (e.g. order, repetition) may be erased.
“You would probably see a decline in job satisfaction and engagement since the job doesn't really fulfill [their] basic needs,” explains Långstedt. There may be ways to lessen this blow, he suggests in the study, such as teaching children to value creativity more in school, but there will inevitably be a lag between automation and these potential corrections.
WHEN WILL IT AFFECT THE FUTURE?
Jebelli says there isn’t yet a hard deadline for when this kind of mind-reading technology will hit the workplace in earnest, but he says that it’s probably less than a decade away and could be used in workplaces beyond construction, even perhaps on the ISS one day.
Ultimately, he says that this kind of technology isn’t looking to erase human workers, but instead work with them to create a safer workplace.
“The robot by itself it's not even performing the job,” says Jebelli. “It always requires at least one or two workers [to help]... And as there is a need for better human-robot work collaboration, I believe our proposed technology might be powerful for the future.”