[SystemSafety] The Moral Math of Robots

From: Les Chambers < >
Date: Sun, 13 Mar 2016 22:01:41 +1000


All

My hometown, Brisbane, is currently hosting the World Science Festival. This afternoon I attended a thought-provoking session entitled The Moral Math of Robots. The session addressed the question, "Can machines learn right from wrong? As the first generation of driverless cars and battlefield war bots filters into society, scientists are working to develop moral decision-making skills in robots. Brake or swerve? Shoot or stand down? ..."

An interesting thought came up during the session. It was triggered by a variation on the well-known trolley problem (https://www.washingtonpost.com/news/innovations/wp/2015/12/29/will-self-driving-cars-ever-solve-the-famous-and-creepy-trolley-problem/).

Picture this:

You are in your driverless car, progressing down a two-lane road with oncoming traffic. Without warning, a pedestrian moves from behind a large oak tree growing on the footpath and steps right in front of your vehicle. At the same time, a vehicle is heading your way in the oncoming lane. The laws of physics dictate that your vehicle cannot stop in time to avoid the pedestrian. There are three options:

  1. Mow down the pedestrian.
  2. Swerve into the oncoming lane with the certainty of a head-on collision.
  3. Swerve onto the footpath with the certainty of a collision with the oak tree.

What ups the ante on this hitherto academic problem is that it is now real. Worse, a driverless-car systems engineer has already made the decision for us about the control actions to be taken in this class of scenario.

The best of a bad lot of solutions is probably the collision with the oak tree. Given that the vehicle will have airbags, the probability of harm is reduced.
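Purely to make the shape of the problem concrete, here is a minimal sketch of the kind of expected-harm-minimising policy a vendor might bake in. Everything in it is hypothetical: the action names, the harm estimates and the weighting scheme are invented for illustration, and no vendor has published anything like them.

```python
# Hypothetical sketch only: a crude expected-harm minimiser for the
# three-option scenario above. The harm probabilities are invented
# for illustration; real vendors' policies and numbers are unknown.

ACTIONS = {
    # action: {party: assumed probability of serious harm}
    "brake_straight":   {"pedestrian": 0.9, "passenger": 0.1, "third_party": 0.0},
    "swerve_oncoming":  {"pedestrian": 0.0, "passenger": 0.8, "third_party": 0.8},
    "swerve_into_tree": {"pedestrian": 0.0, "passenger": 0.3, "third_party": 0.0},
}

def expected_harm(outcomes, weights):
    """Sum harm probabilities, weighted by how much each party 'counts'."""
    return sum(p * weights.get(party, 1.0) for party, p in outcomes.items())

def choose_action(actions, weights):
    """Pick the action with the lowest weighted expected harm."""
    return min(actions, key=lambda a: expected_harm(actions[a], weights))

# Equal weighting favours the oak tree; a 'protect the customer' weighting
# flips the choice to mowing down the pedestrian -- which is exactly the
# ethical question at issue.
print(choose_action(ACTIONS, {"pedestrian": 1.0, "passenger": 1.0, "third_party": 1.0}))
print(choose_action(ACTIONS, {"pedestrian": 1.0, "passenger": 5.0, "third_party": 1.0}))
```

The point of the sketch is that the "right" answer is not in the algorithm at all; it is entirely in the weights someone chooses, which is why the dealer-showroom conversation below matters.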

But it doesn't end there. Whether this obvious solution is a good one is a matter of opinion.

Picture this: you go down to your local driverless car dealer, ready to pony up money for your first shiny new robotic chauffeur, and you ask the salesperson this giant-killing question: "Given {the above scenario}, is this car programmed to sacrifice me or the pedestrian?"

An honest person might answer, "Well, you of course; it's the only logical thing to do."

A salesperson on track for sales leader of the year might answer, "Why, the pedestrian of course; he was careless, he had it coming."

Are you going to buy a car that is programmed to sacrifice you?

Are you going to buy a car that is programmed to mow down pedestrians in an emergency?  

Personally, I don't like either solution. I'd put my money in my pocket and go home (I'll stick with my own decision-making process for mowing people down).

Referring to a previous discussion, is this not a case for "unambiguous standards"?

The solution to this problem cannot be left in the hands of individual vendors. This is international standards territory. We need an international ethical consensus on whether we should mow down pedestrians or sacrifice passengers. Unless this question is settled, no vendor will be able to sell these vehicles.  

Ladies and gentlemen, we are embarked; which will it be?

Cheers

Les  



Les Chambers
Director
Chambers & Associates Pty Ltd
www.chambers.com.au

Blog: www.systemsengineeringblog.com

Twitter: www.twitter.com/chambersles M: 0412 648 992
Intl M: +61 412 648 992
Ph: +61 7 3870 4199
Fax: +61 7 3870 4220
les_at_xxxxxx


 


The System Safety Mailing List
systemsafety_at_xxxxxx

Received on Mon Mar 14 2016 - 08:33:30 CET
