Modeling Moral Competence in Robots: Interview with Professor Scheutz (Part 3)


This is part three of a three-part series taken from an interview between Professor Matthias Scheutz, the director of the Tufts Human-Robot Interaction Lab, and Breakthrough’s Josh Lee. You can read Part One here and Part Two here.

Q) Socially autonomous robots with moral competence, as you have stated in your papers, seem to be computationally possible and could indeed be realized in the future. What advancements have you made recently, and what are some of the current challenges your lab is trying to address? Which problems have you solved, and which are the hard ones?

A) This is, for us, very much a project in its early stages, so we are still trying to figure out very basic questions about how to represent normative principles, for instance, the case where the robot is instructed to do something but, realizing that it could cause harm, decides not to do it. How can we represent this in a way that is easily accessible for the user? How can we enable the robot to recall it at the right time? How can the robot reason through it, even when it clashes with other principles, as human norms often do? Norms often clash with one another, forcing the person to prioritize. So we are still in the stage of designing that part of the system.
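To make that representation problem concrete, here is a minimal sketch (illustrative only, not the lab's actual system) of norms as prohibitions and obligations with contexts and priorities, where a higher-priority prohibition can override a lower-priority obligation such as "follow the instruction":

```python
# Illustrative sketch only (not the Tufts HRI Lab's architecture): normative
# principles as prohibitions and obligations with a context and a priority,
# plus a simple arbitration rule that lets the higher-priority norm win
# when two norms clash over the same action.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    name: str
    kind: str       # "forbid" or "oblige"
    action: str     # the action the norm is about
    context: str    # situation in which the norm applies ("any" = always)
    priority: int   # higher = more important

NORMS = [
    Norm("avoid_harm",     "forbid", "hand_over_knife", "human_at_risk", priority=10),
    Norm("follow_command", "oblige", "hand_over_knife", "any",           priority=5),
]

def decide(action: str, context: str) -> bool:
    """Return True if the robot should perform `action` in `context`.

    Applicable norms that disagree are resolved by priority, so a
    high-priority prohibition overrides a lower-priority obligation.
    """
    applicable = [n for n in NORMS if n.action == action and n.context in (context, "any")]
    if not applicable:
        return True  # nothing normatively at stake
    winner = max(applicable, key=lambda n: n.priority)
    return winner.kind == "oblige"

# The instructed-but-harmful case from the interview:
print(decide("hand_over_knife", "human_at_risk"))  # False: prohibition outranks the order
print(decide("hand_over_knife", "kitchen_demo"))   # True: only the obligation applies
```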

There is clearly a very challenging problem here, which is figuring out how to handle these normative inconsistencies. If two principles are inconsistent in a logic then, logically, everything follows. But we don't want that. I want to be able to work with the inconsistency locally, within its context, and not make everything else absurd. How to do that is not clear.
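For readers unfamiliar with why inconsistency is so corrosive in classical logic, the textbook derivation, known as ex falso quodlibet, is sketched below; this global explosion is exactly what a local, context-bound treatment of conflicting norms is meant to avoid.

```latex
% Ex falso quodlibet: once two norms yield an outright contradiction,
% classical logic licenses every conclusion.
%   1.  p              (what norm A requires)
%   2.  \neg p         (what norm B requires)
%   3.  p \lor q       (from 1, disjunction introduction)
%   4.  q              (from 2 and 3, disjunctive syllogism)
\[
  \{\, p,\ \neg p \,\} \vdash q \qquad \text{for arbitrary } q .
\]
```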

There are other questions: how could we learn norms, which contexts are good to learn them in and which are not, and how do we perceive morally charged situations? It's a very difficult problem. How do we detect that something is wrong? How do we trigger these norms? How far should we reason before we carry out an action? Do we limit it to the local context, or do we allow it to project out?
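One way to make the question of how far to reason concrete is depth-limited projection: simulate the consequences of a candidate action only a few steps ahead and check each projected state against the triggered norms. The following sketch is illustrative only; the toy world model, the violation check, and the three-step horizon are assumptions, not components described in the interview.

```python
# Hypothetical sketch of depth-limited moral projection: simulate the
# consequences of an action only `horizon` steps ahead and stop as soon
# as a projected state triggers a norm violation. The world model and
# the violation check are stand-ins, not the lab's actual components.
from typing import Dict

State = Dict[str, bool]

def project(state: State, action: str) -> State:
    """Toy one-step world model: carrying out `action` updates the state."""
    nxt = dict(state)
    if action == "drive_forward":
        nxt["near_pedestrian"] = state.get("pedestrian_ahead", False)
    return nxt

def violates(state: State) -> bool:
    """Toy norm trigger: a state is morally charged if a pedestrian is endangered."""
    return state.get("near_pedestrian", False)

def safe_within_horizon(state: State, action: str, horizon: int = 3) -> bool:
    """Accept the action only if no violation shows up within the horizon."""
    for _ in range(horizon):
        state = project(state, action)
        if violates(state):
            return False
    return True

print(safe_within_horizon({"pedestrian_ahead": True}, "drive_forward"))   # False
print(safe_within_horizon({"pedestrian_ahead": False}, "drive_forward"))  # True
```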

If the Google car is driving down the street and somebody runs across in front of it and it cannot brake in time, it cannot look into futures that are hours ahead; it cannot simulate any of this. It needs to decide within 20 milliseconds whether to brake or to swerve. So how to do all of this remains very much open. How to acquire this knowledge and these norms also divides people. Some are proponents of machine learning, some say the norms need to be engineered. What kind of framework do you use? Some will say we need utilitarian approaches, while others will say, no, we need deontological approaches: explicit representations of moral rules, rules framed in terms of obligations and permissions.
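To illustrate the contrast, a deontological layer can be thought of as explicit rules that veto actions outright, while a utilitarian layer scores whatever survives the veto, all inside a hard real-time budget. The sketch below is only an illustration under an assumed 20-millisecond deadline; the action names and harm estimates are made up for the example.

```python
# Illustrative only: a deontological veto (explicit rules over what is
# never permitted) followed by utilitarian scoring of the remaining
# options, inside a hard real-time budget. The 20 ms deadline and the
# harm numbers are assumptions for the example.
import time

FORBIDDEN = {"swerve_into_crowd"}            # deontological: never permitted
EXPECTED_HARM = {"brake": 0.2, "swerve_right": 0.4, "swerve_into_crowd": 0.9}

def choose(options: list[str], deadline_s: float = 0.020) -> str:
    start = time.monotonic()
    # Rule-based filter first; default to braking if everything is forbidden.
    permitted = [a for a in options if a not in FORBIDDEN] or ["brake"]
    best, best_harm = permitted[0], float("inf")
    for action in permitted:                 # utilitarian ranking of what remains
        if time.monotonic() - start > deadline_s:
            break                            # out of time: use the best found so far
        harm = EXPECTED_HARM.get(action, 1.0)
        if harm < best_harm:
            best, best_harm = action, harm
    return best

print(choose(["brake", "swerve_right", "swerve_into_crowd"]))  # "brake"
```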

So a lot of this is still unclear. There are a lot of challenging algorithmic problems: designing the appropriate data structures and representations, but also designing the appropriate algorithms and making them work fast enough, with good enough outcomes, in real time on a system like a robot.


Q) With technological determinism, the idea that technology drives the development of social structures and cultural values, gaining force as technology grows so quickly, what are some things the public should be aware of? How should we not only adapt to this changing world, but also take full advantage of this technological growth?

A) I believe that robotics has enormous potential to do good things for the world; otherwise I wouldn't be working in this area, and we should embrace it. However, as with all technologies, it comes with potential dangers, and the key to making sure the technology will be safe is to discuss them now, not once all sorts of systems are already out there.

In my view, the most critical aspect of robotics is the autonomy of these systems, especially when they are autonomous but not aware of the legal and normative restrictions that we impose on anything that could wrong society. That is a discussion we need to have now, and it is one we have started at my lab.

The other danger that could come down the road is unrestricted machine learning. We are not sure that is something we want on these autonomous systems. One way to think about it: would you rather fly on an airplane with an autopilot that has been formally proven correct, so that you can prove it will never leave a particular control envelope, meaning it will not do something stupid in a dangerous situation, or with an autopilot that was trained using machine learning algorithms, where you only have statistical guarantees?

So that's a very general question and a general challenge that the field of artificial intelligence will face, and robotics is no exception. The chances are we will need to look for hybrid solutions to that problem. Our approach is to design computational architectures in such a way that the system can still learn, but is never allowed to violate any of its principles. That way, we can create an architecture that constrains exactly what the learning algorithms can do to the system.
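The pattern of letting a system learn while never letting it violate its principles is similar to what the safe reinforcement learning literature calls a shield: the learned policy proposes an action, and a fixed normative filter decides whether it may be executed. The sketch below is a generic illustration of that pattern with assumed names; it is not the architecture the lab is building.

```python
# Generic "shielded learning" sketch (assumed names, not the lab's design):
# a learned policy proposes actions, but a fixed set of inviolable
# principles filters them before anything reaches the actuators, so
# whatever the learner picks up can never override the constraints.
import random
from typing import Callable, Sequence

def shielded_policy(learned_policy: Callable[[str], str],
                    permitted: Callable[[str, str], bool],
                    fallback: str,
                    actions: Sequence[str]) -> Callable[[str], str]:
    def act(state: str) -> str:
        proposal = learned_policy(state)
        if permitted(proposal, state):
            return proposal
        # The learner's choice violates a principle: fall back to any
        # permitted alternative, or to a designated safe action.
        alternatives = [a for a in actions if permitted(a, state)]
        return random.choice(alternatives) if alternatives else fallback
    return act

# Toy usage with stand-in components.
actions = ["advance", "wait", "shove_person"]
learned = lambda state: "shove_person"            # a badly trained policy
rules = lambda a, s: a != "shove_person"          # inviolable principle
policy = shielded_policy(learned, rules, fallback="wait", actions=actions)
print(policy("crowded_hallway"))                  # never "shove_person"
```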

We don’t have the system yet, but we are currently working on it. This is the work of the future.
