You’ve definitely felt it, when that video game opponent has reduced your avatar to ash, or your Roomba shuts itself in the bathroom: robo-rage is real. Now, researchers at Carnegie Mellon have shown that robots really can get in our heads (psychologically speaking). According to the study, the quality of human-robot interaction can impact human performance in a game, whether the robot is on your side or not.
In other words, robots can make you shook.
Primary investigator Aaron M. Roth, now a researcher at the University of Maryland, pitted people against Pepper, a robot made by SoftBank. Roth wanted to see if a humanoid robot with expressive language affected people, and Pepper fit the bill. Its soft curves, white snowbank cheeks, and expressive eyes make it decidedly adorable. Each participant and Pepper played a castle defense game, in which the human spends resources on certain gates in an attempt to reach the goal. Pepper played defense.
Drawing from a library of phrases generated by a natural language processing model, Pepper would either provide supportive commentary to the human player or act antagonistically.
“The things the robot would say would be similar,” Roth says. You seem to be considering your moves in a practiced manner, a positive robot would say. You seem to be considering your moves in a bizarre manner, it taunts when turned antagonistic. Because its AI could only draw from a fixed well of phrases, Pepper would not get too cutting.
Roth’s team could measure how optimally a human played the game. Did they usually make the choices with the highest chance of success? People played better when encouraged by the robot, and worse when the robot gave them the business.
The antagonistic Pepper inspired a range of emotions, like a jawing defensive back does. Some players simply ignored it; some traded barbs back; some got flustered. A few particularly enjoyed the juxtaposition of a cute robot putting them down.
That result doesn’t surprise human-robot interaction researchers. People react to what they hear, says Dawn Tilbury, professor of mechanical engineering, electrical engineering, and computer science at the University of Michigan.
We name (and adore) our robotic Martian rovers, harass mall-wandering automatons, trust them to fly our planes, and anthropomorphize our vacuum cleaners. And we are only going to see more and more of them in our lives, Roth says; human-robot interaction may become a daily norm.
The first industrial robots were made specifically for places people should not be, says Selma Šabanović, an associate professor in the School of Informatics and Computing at Indiana University. But more recent designs are intended to work with and alongside us soft meatbags, making human-robot interactions important to research.
And since not all interactions will be on the job, between mechanized or muscled workers, we need to understand not only the way we work with them, but how we treat them socially.
“The social aspect … is completely necessary,” Šabanović says. “Because humans are such social creatures, and for humans, everything around them is really something that’s social.”
The field is primarily focused on humans and robots acting in concert, Roth says, a point Šabanović and Tilbury echo. But as robots become more common, it seems inevitable that in some cases, our goals will not be aligned.
“Not like robots coming and performing harm, or doing evil,” Roth says — just… not on the same page. Take a robotic salesperson, for example; you’re just browsing, but it only knows one thing: sell. Since human social interaction is not always positive, it’s important to research human-robot interactions that are not exclusively positive, too.
“It’s good to understand these antagonistic relationships,” Tilbury says.
The research could help us to better understand the drawbacks of robots not aligned with our interests, as well as some possible benefits. A robot capable of talking back may be better able to defend itself from harassment and physical harm. Simple zingers or the ability to cheat can even be endearing in some situations.
The insights of research like Roth’s may help designers mitigate — or perhaps optimize — the ability of a robot to get in your head.