Nexi with its creators (via Northeastern University)


Nexi, MIT’s MDS (mobile, dexterous, social) robot, was unveiled in 2008 and became a hit on YouTube thanks to its strikingly realistic human facial expressions. Now, Nexi is being put to work by Northeastern University psychology professor David DeSteno, along with researchers from Cornell and MIT’s Media Lab, to identify the physical cues that lead people to trust strangers — or, in Nexi’s case, a sympathetic humanoid robot.



Nexi has fully mobile arms and hands that give it dexterity, and the movements of its head, eyebrows, and eyelids give it the ability to portray emotion. Dr. DeSteno wanted to use Nexi’s movements and social skills to test whether subjects trust someone or something more, or less, based on physical signals that people subconsciously interpret to predict intention.



The first part of the experiment put two subjects into either a face-to-face conversation or a web chat. The conversations were recorded to see how many cues the test subjects picked up under each form of communication. Afterward, the two role-played a scenario in which one person was a prisoner and the other had the ability to help by risking a small amount of real money. If the person to be helped was deemed untrustworthy, the other person could instead keep more money for themselves.



After a group of 86 Northeastern students completed the first phase, it was clear that visual trust was directly related to generosity. The specific non-verbal cues involved, however, were not yet clear, since so much human communication depends on context and on how and when those cues are made.



To see if these cues could be generalized, the Northeastern team repeated the test, this time using Nexi’s controllable, richly articulated body in a Wizard of Oz experiment to attempt to gain the trust of the student subjects. Dr. DeSteno found that cues like leaning away, crossing the arms, and touching the face led to suspicion rather than trust, even between a human and a humanoid robot.



The research showed not only that non-verbal cues contribute to trust, but also that humans extend these interpretations to non-human entities. This finding could prove useful in the future as humans begin to interact with robots regularly. A paper by Dr. DeSteno detailing the experiment is to be published in the journal Psychological Science.