A recent study of 89 volunteers found that when faced with the prospect of turning off an artificially intelligent robot, they were less likely to do so if the robot asked them not to.
Nao, described by its manufacturer, SoftBank Robotics, a French company, as “an interactive companion robot,” is 58 cm tall and in its fifth version; approximately 10,000 units have been sold worldwide. The study, published in the journal PLOS ONE, sought to answer the question: “Do a robot’s social skills and its objection discourage interactants from switching the robot off?”
The answer appeared to be yes: a number of participants refused to turn off the robot after it “begged” them not to, saying, “No! Please do not turn me off!” Nao’s stated reason for not wanting to be switched off was that it was afraid of the dark.
Of the 89 participants, 43 heard Nao ask not to be turned off. Thirteen of them complied and left it on; the remaining 30 took twice as long to switch Nao off as the participants who weren’t asked.
The abstract of the study says, “People were given the choice to switch off a robot with which they had just interacted. The style of the interaction was either social (mimicking human behavior) or functional (displaying machinelike behavior). Additionally, the robot either voiced an objection against being switched off or it remained silent.
“Results show that participants rather let the robot stay switched on when the robot objected. After the functional interaction, people evaluated the robot as less likable, which in turn led to a reduced stress experience after the switching off situation.
“Furthermore, individuals hesitated longest when they had experienced a functional interaction in combination with an objecting robot. This unexpected result might be due to the fact that the impression people had formed based on the task-focused behavior of the robot conflicted with the emotional nature of the objection.”
It seems that robots that display human emotions are more likely to elicit a human response from test subjects. Some participants said they “felt sorry” for Nao, while others said they didn’t turn it off simply because it asked them not to.
Aike Horstmann, a student at the University of Duisburg-Essen who led the study, told The Verge that the study wasn’t attempting to show that humans can be easily manipulated by machines, but rather that, as robots become more common in our day-to-day lives, we should be aware that this emotional blind spot exists.
“I hear this worry a lot,” Horstmann said. “But I think it’s just something we have to get used to. The media equation theory suggests we react to [robots] socially because, for hundreds of thousands of years, we were the only social beings on the planet. Now we’re not, and we have to adapt to it. It’s an unconscious reaction, but it can change.”