The Situationist

What Can a Robot Teach Us about the Situation of Trust?

Posted by The Situationist Staff on July 13, 2010

From Northeastern University:

What can a wide-eyed, talking robot teach us about trust?

A lot, according to Northeastern psychology professor David DeSteno and his colleagues, who are conducting innovative research to determine how humans decide to trust strangers — and whether those decisions are accurate.

(Read a Boston Globe article about this research.)

The interdisciplinary research project, funded by the National Science Foundation (NSF), is being conducted in collaboration with Cynthia Breazeal, director of the MIT Media Lab’s Personal Robots Group, Robert Frank, an economist, and David Pizarro, a psychologist, both from Cornell.

The researchers are examining whether nonverbal cues and gestures could affect our trustworthiness judgments. “People tend to mimic each other’s body language,” said DeSteno, “which might help them develop intuitions about what other people are feeling — intuitions about whether they’ll treat them fairly.”

The project tests these theories by having humans interact with the social robot Nexi and attempt to judge her trustworthiness. Unbeknownst to participants, Nexi has been programmed to make certain gestures while speaking with selected participants — gestures the team hypothesizes could determine whether she is deemed trustworthy.

“Using a humanoid robot whose every expression and gesture we can control will allow us to better identify the exact cues and psychological processes that underlie humans’ ability to accurately predict if a stranger is trustworthy,” said DeSteno.

During the first part of the experiment, Nexi makes small talk with her human counterpart for 10 minutes, asking and answering questions about topics such as traveling, where they are from, and what they like most about living in Boston.

“The goal was to simulate a normal conversation with accompanying movements to see what the mind would intuitively glean about the trustworthiness of another,” said DeSteno.

The participants then play an economic game called "Give Some," which asks them to predict how much money Nexi will give them at the expense of her own profit. Simultaneously, they decide how much, if anything, they will give to Nexi. The rules of the game allow for two distinct outcomes: a higher individual profit for one partner and a loss for the other, or relatively smaller but equal profits for both partners.
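The payoff structure described above can be sketched as a small simulation. The post does not specify the game's token counts or exchange values, so the numbers below (four tokens each, worth 1 unit to the keeper and 2 units to the receiver) are illustrative assumptions chosen only to reproduce the two outcomes the article names: mutual giving yields smaller, equal profits, while one-sided giving yields a higher profit for the defector and a loss for the giver.

```python
# Illustrative sketch of a "Give Some"-style exchange game.
# Token counts and values are assumptions, not the study's actual parameters.

def payoffs(give_a, give_b, tokens=4, keep_value=1, give_value=2):
    """Return (payoff_a, payoff_b) when each player simultaneously
    decides how many of their tokens to hand over. Kept tokens are
    worth keep_value to their owner; given tokens are worth
    give_value to the receiver."""
    payoff_a = (tokens - give_a) * keep_value + give_b * give_value
    payoff_b = (tokens - give_b) * keep_value + give_a * give_value
    return payoff_a, payoff_b

# Mutual cooperation: smaller but equal profits for both partners.
print(payoffs(4, 4))  # (8, 8)

# One-sided defection: the defector profits, the trusting giver loses out.
print(payoffs(0, 4))  # (12, 0)
```

Under these assumed values, trusting a cooperative partner beats mutual hoarding (8 vs. 4 units each), but trusting a defector leaves the giver with nothing — which is why accurately reading a stranger's trustworthiness matters before committing.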

“Trust might not be determined by one isolated gesture, but rather a ‘dance’ that happens between the strangers, which leads them to trust or not trust the other,” said DeSteno, who, with his colleagues, will continue testing their theories by seeing if Nexi can be taught to predict the trustworthiness of human partners.

* * *

For a sample of related Situationist posts, see "The Interior Situation of Honesty (and Dishonesty)," "The Situation of Trust," "The Situation of Lying," "The Facial Obviousness of Lying," "Denial," "Cheating Doesn't Pay . . . So Why So Much of It?," "Unclean Hands," and "The Situation of Imitation and Mimickry."


One Response to “What Can a Robot Teach Us about the Situation of Trust?”

  1. [...] Experiments and social surveys and game theory are all so terribly passé.  The new way to study trust is by… building a giant trustworthiness robot with big creepy lidless eyes? [...]
