Modeling Moral Competence in Robots: Interview with Professor Scheutz (Part 2)


This is part two of a three-part series taken from an interview between Professor Matthias Scheutz, the director of the Tufts Human-Robot Interaction Lab, and Breakthrough’s Josh Lee. You can read Part One here.

Q) Your lab is made up of many researchers who hold interdisciplinary degrees. What does this tell us about interdisciplinary research collaborations, which seem to be becoming the norm today?

A) I don’t think anyone from a single discipline could really tackle these problems, which span multiple fields. They start from the conceptual philosophical problems about the structure of ethical theories, such as “what does it mean to be good?” Are we looking for virtues? Utilities? Or rules? What is it that we are looking for? From the legal aspect, we consider the law that we all abide by. From the psychological aspect, we ask what happens to people when they operate under these conditions. From the computational and robotic aspects, we work out what we would need not only to make this happen in a robot, but also to make it run in real time on a robotic platform. Then we get to the linguistic aspect of asking, “how do you communicate with the robot?”

All of these come together, and it is not possible for a single person to do it all. And critically, if these fields don’t talk to each other, you are not going to be able to build an integrated system that lives up to all of these aspects. So the project that we got funded has people from very different interdisciplinary orientations. We have a social psychologist who also has a background in linguistics and philosophy. We have a legal expert with a philosophy background. We have a philosophical logician with a research background in Artificial Intelligence, and we have myself, the director of the Human-Robot Interaction Lab, with degrees in philosophy, formal logic, computer science, and cognitive science, and an interest in linguistics.

So it’s a very interdisciplinary team, and as a result we are able to look at the whole picture of how all these aspects come together.

Q) The robot that objects to its owner by saying “No” has received much attention within the Tufts community. Could you elaborate on this research, and tell us what implications it will have in the future?

A) Yes, it has received a lot of attention in the news, where people said that Tufts University’s research programs robots to disobey orders and that robots have learned the most important word in the English language, “No.” All of this is very flashy, and I can understand why it got reported in the news that way, but the underlying research is very serious and necessary for a very simple reason.

When robots become instructable in natural language and are built to carry out instructions, you don’t want them to blindly follow those orders. Not because people will necessarily give them bad instructions or have bad intentions, though that may also be the case, but simply because there may be accidents or a lack of knowledge.

For example, you could instruct a robot to pour out the oil it is holding, maybe in a can or a bottle. You might be thinking that the robot is by the dining room table and want it to pour the oil over the salad in the bowl, when it is really in the kitchen next to the stove, about to pour the oil over an open flame. In that context, you are mistaken, and if the robot blindly carried out the pouring action, it would start a fire in the apartment. Hence, in this case, what you want the robot to do is ask the user to confirm whether it really should pour the oil over the hot stove. The user will then be notified of the error and say “of course not.”

But for the robot to be able to detect that, it has to work out what the outcomes of its actions might be. And not only the immediate outcomes, but also the outcomes that come several steps after that. As it reasons through those outcomes and realizes they could cause harm, it should say so and not immediately carry out the action.

This is what we have started implementing: algorithms that allow robots to reason through the consequences of instructed actions. If the consequences could cause harm, then the robot is permitted not to carry out the command, and that is what we demonstrated.

However, when the person provided evidence that it was safe to carry out the action, the robot did indeed carry it out. So it didn’t refuse indefinitely; it simply needed evidence that the action wasn’t going to be dangerous.
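To make the idea concrete, here is a minimal sketch of that “reason before acting” loop in Python. It is not the lab’s actual implementation; the action names, the toy consequence model, the `HARMFUL` set, and the `operator_confirms_safety` callback are all hypothetical, chosen only to mirror the oil-pouring example above.

```python
# Hypothetical sketch of the refuse-unless-safe logic described in the
# interview. Not the Tufts HRI Lab's implementation; names and the harm
# model are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Action:
    name: str        # e.g. "pour(oil, stove)"
    effects: list    # predicted immediate effects of performing the action


# Toy world knowledge: which effects lead to which further consequences.
CONSEQUENCES = {
    "oil_on_stove": ["fire"],   # oil over an open flame can ignite
    "oil_on_salad": [],         # dressing the salad is harmless
}

HARMFUL = {"fire"}              # outcomes the robot must not bring about


def project_outcomes(effects, depth=3):
    """Follow the chain of consequences several steps ahead, not just one."""
    outcomes = set(effects)
    frontier = list(effects)
    for _ in range(depth):
        frontier = [c for e in frontier for c in CONSEQUENCES.get(e, [])]
        outcomes.update(frontier)
    return outcomes


def execute_instruction(action, operator_confirms_safety):
    """Carry out the action only if no projected outcome is harmful, or if
    the operator, after being warned, confirms that it is in fact safe."""
    harmful = project_outcomes(action.effects) & HARMFUL
    if harmful and not operator_confirms_safety(action, harmful):
        return f"Refusing '{action.name}': could lead to {sorted(harmful)}"
    return f"Executing '{action.name}'"


# The mistaken "pour the oil" instruction: the operator does not vouch
# for safety, so the robot refuses and explains why.
pour_on_stove = Action("pour(oil, stove)", ["oil_on_stove"])
print(execute_instruction(pour_on_stove,
                          operator_confirms_safety=lambda a, h: False))
```

Run as written, the sketch refuses the command and names the projected harm; if the confirmation callback returned `True` (the operator providing evidence that the action is safe), the action would be carried out, mirroring the behavior described above.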
