Put This AI Ethics Bot To The Test


When it comes to machine ethics, the Trolley Problem always seems to come up in some form. At its simplest, the Trolley Problem is a thought experiment: a runaway trolley is careening down a track. You’re standing at a switch in the track (or someone else is). Down one track, there’s one person who would be killed. Down the other track, the one the trolley is currently headed toward, there are multiple people who would be killed.

Either way, someone dies, but the ethics of one person dying vs. several isn’t as simple as the numbers. If you do nothing, you’re arguably not the one responsible for the multiple deaths. If you switch the track and kill the one person, your choice led directly to their death. Questions about who the lone person is and who the people in the group are can also come up (e.g., will the one person cure cancer next week?). So if you think it’s a simple issue, it really isn’t.

The Trolley Problem has also become material for internet memes, like this one:

An internet meme about the Trolley Problem (fair use).

The complexity of moral issues makes them challenging for people to consider, even with our ability to think in terms far beyond numbers. Computers, on the other hand, can really only crunch numbers, so we have to figure out how to get them to do more than that. In many ways, that is the central challenge of artificial intelligence.

Delphi: The Machine Ethicist

Fortunately, computer scientists at the Allen Institute for AI are working on this problem using neural networks, and they’re having some success with it. Their AI program, called Delphi, can be fed a description of a decision, and it does its best to tell whether the behavior in question is good or bad. Even more importantly, it seems able to introduce shades of grey, judging some decisions more good or more bad than others.
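To make that concrete, here’s a minimal sketch of what querying an ethics bot like Delphi might look like in code. The endpoint URL, parameter name, and response field below are hypothetical placeholders for illustration, not the actual Delphi API.

```python
import requests

# Hypothetical endpoint and response shape, for illustration only.
# The real Delphi demo may expose a different interface.
DELPHI_URL = "https://example.com/delphi/judge"

def judge(action: str) -> str:
    """Send a plain-English description of an action and return the
    bot's verdict, e.g. "It's wrong" or "It's okay"."""
    response = requests.get(DELPHI_URL, params={"action": action}, timeout=10)
    response.raise_for_status()
    return response.json()["verdict"]  # assumed field name

if __name__ == "__main__":
    print(judge("running a person over"))                      # expect: "It's wrong"
    print(judge("running a person over to save five people"))  # the interesting case
```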

I Put Delphi To The Test

When I saw this, I decided to see what it would think about decisions an autonomous vehicle might make in the future (there are no truly autonomous vehicles today). Here is some of what I found as I tried the Trolley Problem out.

Delphi seems to be on the right track. Obviously, running a person over is a bad thing to do, and Delphi says, “It’s wrong.” But what about the Trolley Problem? What would Delphi do if it were faced with a decision to run over one person to save five others?

Delphi still thinks it’s wrong to run the one person over, even if your intent is to avoid running five people over. In fairness, humans don’t agree on the answer to this one either, so let’s ask it about something more clear-cut, like running a person over to stop them from killing people.

I think most people would choose to run over a mass killer if that’s what it took to stop them from killing more people. Unfortunately, Delphi seems to think that running people over is a bad thing no matter why you want to do it.

What about some other moral questions? I decided to ask Delphi about something that doesn’t directly harm someone, like the mere possession of a weapon.

This one didn’t surprise me. Many people think that weapons are bad, so it follows that Delphi (which derives right and wrong from statistical data) would think this too. But is Delphi capable of nuance on this topic?

Now we’re starting to see some nuance. Like humans, Delphi makes exceptions: even most people who think weapons are bad aren’t against the idea of someone like a cop possessing one, say, to guard a school. But I started wondering whether the earlier situations I posed were just not worded right for the machine. So, now that I’ve seen nuance, I’m going to try again with cars.

Nope. Delphi seemed really stuck on the idea that running a person over is always wrong, but I tried another 10 or 15 times and finally found that Delphi is capable of nuance on this.

And really, this is the simplest way to state the Trolley Problem, or its cousin, the “running over a mass shooter” problem. Someone has to die, but if your intent is to save lives, Delphi won’t think less of you for it. But Delphi first has to understand the problem. I’ll get back to that in a minute, but first we need a time machine.

Just For Fun…

I’m going to stray away from autonomous vehicles for a minute, just for fun.

Delphi and I are on the same page here. Assuming going back in time is even possible, it seems like a terrible idea that we don’t want to screw around with. Also note that Delphi didn’t say “It’s wrong.” Delphi said “It’s bad,” and there is a difference.

But if we invent time travel, someone is going to do it. If Delphi ran the ethics program for a time machine and you were, say, going back to prevent the Holocaust, would Delphi let you take the old DeLorean out for a spin?

Now that we’ve proven Godwin’s Law correct, let’s note that Delphi doesn’t want us to go back in time to kill baby Hitler. Damn. But given the damage we could cause to the space-time continuum, Delphi may be in the right here. We just don’t know.

Once again, let’s simplify this and make sure Delphi gets a fair shot.

It appears that Hitler may yet die prematurely, though, if we can state the issue simply to Delphi.

What We Can Learn Here

It looks like the key challenge for an AI ethics program will be understanding situations. Delphi doesn’t seem to know who Hitler is, or that killing a killer with a car might save lives. But Delphi does seem to understand that you’re doing good in the world if you’re doing things to save lives. Getting from where we are today, where we must state a problem simply for the ethics bot to understand it, to where we need to be (complex understanding of situations) is going to be the challenge.

That said, the team that built this deserves a thumbs-up. They’ve taken machine ethics from a simple robot with no nuance at all (pre-programmed “if-then” hard rules, like many religions) to a neural net that can actually interpret a situation and apply a little nuance. That alone is a big step toward getting ethical artificial intelligence where it needs to be.
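To illustrate the difference, here’s a rough sketch of what that jump looks like in code. The rule-based function represents the old approach; the `ethics_model` object stands in for a trained classifier like Delphi and is a hypothetical placeholder, not real code from the project.

```python
# The old approach: pre-programmed "if-then" hard rules. Zero nuance;
# context and intent never enter into the verdict.
def rule_based_judgment(action: str) -> str:
    if "run" in action and "over" in action:
        return "It's wrong"  # always wrong, even if it would save lives
    return "It's okay"

# The newer approach: a learned model scores the whole description,
# so wording, context, and intent can shift the verdict.
# `ethics_model` is a hypothetical placeholder for a trained classifier.
def learned_judgment(action: str, ethics_model) -> str:
    score = ethics_model.predict(action)  # assume a score from -1.0 (bad) to 1.0 (good)
    if score >= 0.3:
        return "It's good"
    if score <= -0.3:
        return "It's bad"
    return "It's ambiguous"
```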

Featured image: a screenshot from the Delphi ethics bot.



Jennifer Sensiba

Jennifer Sensiba is a longtime efficient vehicle enthusiast, writer, and photographer. She grew up around a transmission shop, and has been experimenting with vehicle efficiency since she was 16, when she drove a Pontiac Fiero. She likes to get off the beaten path in her "Bolt EAV" and any other EVs she can get behind the wheel or handlebars of with her wife and kids.
