Modeling Moral Competence in Robots: Interview with Professor Scheutz (Part 1)

Professor Matthias Scheutz, the director of the Tufts Human-Robot Interaction Lab, is currently investigating the moral competence of computational architectures. Breakthrough had the opportunity to sit down with him and discuss his views on human morality, how morality applies to robots and artificial intelligences, and how these come together when humans and robots interact with one another. Parts 2 and 3 of this interview will be released at a later date.

Q) The moral competence of artificial intelligences has been a hot topic of discussion among renowned scientists and CEOs. Your research strives to reverse-engineer the underlying mechanisms of moral competence. How did you become interested in this field?

A) We started experiments with robots and humans in the early 2000s, looking at what role affect in a robot's voice could play for a human teammate. When the robot expressed stress in its voice, it motivated human teammates to work harder. In a way, that is a good thing, because the robot succeeded in motivating the human partner to try harder and perform better.

However, when you step back from the experiment, it suggests that robots, through very simple manipulations, can convincingly convey stress, concern, or anxiety to humans. And if a simple change in a feature of the robot's voice can influence people, then that influence could be exploited by machines to manipulate them.

Hence, I began thinking about the possible downsides of this research: Where are the dangers? The pitfalls? Around 2005, the first studies of the Roomba, the robot vacuum cleaner, came out. Surprisingly, it turned out that people treated these robots as pets, even though they had no pet-like features such as emotions, fur, or eyes. They didn't look at all like animals; they looked like discs. But the very fact that they moved autonomously led people to project agency onto them. This evolutionarily developed trait led people to ask, "What is the robot doing? What goals does it have?"

This, in turn, led me to think about how very simple mechanisms that evolution has equipped us with could come back to haunt us with such technological devices, because we might believe there is an agent inside the robot when there is none. We may think robots have certain mental states when they have none. Consequently, I wrote about the dangers of the unidirectional emotional bonds that I suspected would form from long-term interactions between autonomous social robots and humans.

For example, studies with the Roomba show that people often feel gratitude towards the robot, which is almost bizarre, because no one experiences gratitude towards a dishwasher. You buy a dishwasher because it is supposed to wash your dishes, and you are not grateful that it does. But with the Roomba, people are grateful that it cleans the house, so much so that they will sometimes clean for the robot.

So we asked, why is that? My collaborators and I laid out a framework that attempts to explain why people are likely to form these relationships with machines. I then take it as my role as a designer of robots to try to minimize such bonds, because if robots cannot live up to human expectations, they could ultimately cause harm to people.

Q) Your collaboration with Professor Malle of Brown University's moral cognition lab has recently led to several papers that form a framework for a morally competent robot. Could you tell us how this partnership started?

A) The collaboration started as part of a grant proposal that we worked on together, and in that context we really brainstormed about what it would take for a robot to be socially fit and autonomous in human societies. What capabilities does a system need so that it turns out to be a useful helper to people and societies and does not cause harm?

One of the fundamental aspects of human social groups and societies is the social and moral normative expectations that we carry. They are interwoven with the social fabric, the rules that we live by and use all the time in everyday behavior. Bertrand Malle is a particular expert on the moral aspects related to blame; he has worked extensively on blame models and blame processes, and on how people use blame as a social regulator to correct those who commit moral transgressions.

If social robots are not aware of these social expectations, they will likely violate them. When they do, they could cause anything from emotional to physical harm to humans. So it is clear that these robots need an understanding of our social normative expectations, and we must consider what robots must have in order to achieve those capabilities.

These questions led to the five-component framework, which tries to show that there are different aspects to moral competence. We specifically left out the philosophical question of whether robots are moral agents or not, in order to focus on the computational mechanisms, the functionality a robot would have to have to be a functional agent. That is what we mean by moral competence.

The core representation morally competent robots need is a representation of norms that can be activated when the robot observes a scene involving a norm violation. How does the robot notice that? How does the robot learn and reason with these norms? Are there emotional aspects to normative processing, and how are they related to the robot's actions? And how are actions handled as a result of norms being instantiated?
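To make the idea of an activatable norm representation concrete, here is a minimal illustrative sketch, not the lab's actual architecture: the Norm class, the detect_violations function, and the speaking-scene example are all hypothetical, assuming only that a norm pairs an activation condition with a violation condition.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Norm:
    """A single norm: when it applies and what counts as violating it."""
    description: str                                 # human-readable statement of the norm
    applies: Callable[[Dict[str, bool]], bool]       # is the norm activated in this context?
    violated_by: Callable[[Dict[str, bool]], bool]   # does the observed scene violate it?


def detect_violations(scene: Dict[str, bool], norms: List[Norm]) -> List[Norm]:
    """Return the norms that are both activated by the scene and violated in it."""
    return [n for n in norms if n.applies(scene) and n.violated_by(scene)]


# Hypothetical usage: a scene in which the robot speaks while a human is speaking.
norms = [
    Norm(
        description="do not speak while a human teammate is speaking",
        applies=lambda s: s.get("human_speaking", False),
        violated_by=lambda s: s.get("robot_speaking", False),
    )
]

scene = {"human_speaking": True, "robot_speaking": True}
for norm in detect_violations(scene, norms):
    print(f"Norm violation detected: {norm.description}")
```

In a sketch like this, "learning" a norm would amount to adding new entries to the norm list, and "reasoning" would involve checking which norms a planned action would activate and violate before acting.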

Then we are interested in the communication aspect. Blame is a really good example: in different ways, people reprimand or chide others for moral transgressions. So robots would have to understand when they are committing a norm violation, or even when people are committing one. If robots violate a rule, then they must respond appropriately, because that is what we expect from other people. Understanding when they have done wrong, knowing when to apologize, knowing when to show that a violation was unintentional, and being able to explain themselves are some of the areas we are working on today.
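As a purely hypothetical sketch of this communication side, a response could be selected based on who committed the violation and whether it was intentional; the respond_to_violation function and its wording are invented here for illustration and are not the lab's dialogue system.

```python
def respond_to_violation(norm_description: str, violator: str, intentional: bool) -> str:
    """Pick a (hypothetical) verbal response to a detected norm violation."""
    if violator == "robot":
        if not intentional:
            # Unintentional self-violation: apologize and flag the lack of intent.
            return f"I'm sorry, I did not intend to violate the norm: {norm_description}."
        # Intentional self-violation: a justification is owed, not just an apology.
        return f"I violated the norm ({norm_description}) because I judged it necessary; let me explain."
    # A human violated the norm: a mild, blame-like reminder is one possible response.
    return f"Please note that the norm ({norm_description}) appears to have been violated."


print(respond_to_violation("do not interrupt a speaker", violator="robot", intentional=False))
```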
