Should Driverless Cars Solve the Trolley Problem?

  • 12 Replies
  • 8047 Views


Madness

  • Administrator
  • Old Name
  • Conversational Batman
  • Posts: 5275
  • Strength on the Journey - Journey Well
« on: November 06, 2015, 02:50:55 pm »
Topic says it all. I've been seeing versions of this article going around the Infosphere.

The Trolley Problem, as every one of these articles highlights, is a classic philosophy problem: you're a bystander who can pull a switch to divert a runaway trolley away from five people it would otherwise hit and kill, but diverting it will kill a single person on the other track. Should you do it? The overwhelming response, polling any given undergrad class, is yes. However, since that response became so common, Trolley Problem 1.1 adjusts the setup and asks whether you would instead push that single person off a bridge, stopping the trolley before it hits the five people further down the track. There the overwhelming response is no.

And now we're at a human crux where we have to decide how driverless cars will make this decision, in scenarios where the car could react by killing its own passengers to minimize loss of life. I felt compelled to query our noosphere.
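For concreteness, the "minimize loss of life" rule these articles imagine boils down to a few lines. A toy sketch in Python; Outcome and choose_action are invented names for illustration, not anything a real car runs:

Code:
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str           # e.g. "stay_course" or "swerve"
    expected_deaths: int  # casualties this action is predicted to cause

def choose_action(outcomes):
    # The utilitarian rule: pick whichever action kills the fewest people.
    return min(outcomes, key=lambda o: o.expected_deaths)

# Trolley-style scenario: staying on course kills five, swerving kills one.
best = choose_action([Outcome("stay_course", 5), Outcome("swerve", 1)])
print(best.action)  # -> swerve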
The Existential Scream
Weaponizing the Warrior Pose - Declare War Inwardly
carnificibus: multus sanguis fluit
Die Better
The Theory-Killer

SilentRoamer

  • The Smiling Knife
  • Great Name
  • Posts: 480
« Reply #1 on: November 06, 2015, 04:26:01 pm »
I feel you Madness.

Do we really want to leave moral decisions (morals being based primarily on our emotional responses) to non-emotional beings?

For example, ask most people to choose who has to die, a 90-year-old or a newborn - how would deterministic logic play a role in that?

Worth remembering the old clock chestnut: a naive accuracy metric rates a stopped clock as more correct than one running a second slow - because the stopped clock is exactly right twice a day, whereas the slow clock is never exactly right! That's the kind of scoring a purely deterministic system can end up with.
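To see the arithmetic behind that, here's a toy sketch (Python; the function and labels are invented for illustration, not from any particular software):

Code:
def exact_hits_per_day(clock):
    # How many times per day the clock face reads exactly the true time.
    if clock == "stopped":
        return 2  # a stopped 12-hour face coincides with the true time twice a day
    if clock == "one_second_slow":
        return 0  # a running clock that's off by any amount never reads exactly right

print(exact_hits_per_day("stopped"))          # 2 -> scored "more accurate"
print(exact_hits_per_day("one_second_slow"))  # 0 -> scored "always wrong"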

Wilshire

  • Administrator
  • Old Name
  • Enshoiya
  • Posts: 5935
  • One of the other conditions of possibility
« Reply #2 on: November 06, 2015, 05:40:59 pm »
Quote
Do we really want to leave moral decisions (morals being based primarily on our emotional response) to non emotional beings?

I think so

Quote
For example ask most people if they were to choose between who had to die, a 90 year old and a newborn - how would deterministic logic play a role?

Will Smith's I, Robot: survival chance. Bad example though. How about a young child versus a healthy adult? I still say the adult, as they are more likely to survive. I agree with the robot's decision.

It's called triage, and it happens all the time.

Plenty of things are already decided by statistics and hard numbers; we just don't have to look at it. You can't save everyone all the time. Loss of life is inevitable. If the value of an individual's life can't be quantified, then you have to choose the option that saves more lives, or the lives of those who are more likely to actually live.
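As a toy illustration of that triage arithmetic (Python; the names and probabilities are invented, not from any real system):

Code:
def expected_survivors(survival_probs):
    # Sum of survival probabilities = expected number of people who live.
    return sum(survival_probs)

options = {
    "treat_healthy_adult": [0.95],  # very likely to survive treatment
    "treat_frail_patient": [0.50],  # much less likely to survive
}
best = max(options, key=lambda name: expected_survivors(options[name]))
print(best)  # -> treat_healthy_adult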

The only difference is that if it were a human making the decision, it's unlikely their choice would be as heavily scrutinized as a computer's. Which is stupid imo. Better to remove human biases like racism and sexism from the equation.

Let the cars kill who they will.

edit: ps
in the trolley situation, it's likely the computer would have a better handle on events and be able to react - like stopping the train - faster and more reliably than a sleepy driver who missed something that led to the situation in the first place. 0 deaths instead of 1 or 5.

There are auto-braking systems in cars, and there is a reason why they override the driver's human error to stop the car safely. It's not a philosophical debate; it's reaction time and data crunching.
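A back-of-envelope sketch of that reaction-time gap (Python; the 1.5 s human reaction time and 7 m/s^2 braking deceleration are typical textbook figures, not measurements):

Code:
def stopping_distance_m(speed_mps, reaction_s, decel_mps2=7.0):
    # Distance covered while reacting, plus kinematic braking distance v^2 / (2a).
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

v = 27.0  # about 60 mph, in m/s
print(round(stopping_distance_m(v, reaction_s=1.5)))  # ~93 m for a sleepy human
print(round(stopping_distance_m(v, reaction_s=0.1)))  # ~55 m for automated braking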
« Last Edit: November 06, 2015, 05:43:38 pm by Wilshire »
One of the other conditions of possibility.

locke

  • The Afflicted Few
  • Old Name
  • Posts: 648
« Reply #3 on: November 06, 2015, 08:30:41 pm »
It's not really the trolley problem: a human just slams the brakes as hard as possible and blindly jerks the wheel away (it's a stretch to call it evasive driving). Driverless cars can do that better, faster, and more informed than a human. Only a moron would program one to make a triage calculation. You just program it to stop as safely and swiftly as possible; then the black box will show the car did its job correctly, and the human owner, who signed the assume-all-liability clause when they engaged the driverless system, is responsible for anything further.

No car will ever be programmed to kill you to save a schoolbus. It's going to be programmed to stop as swiftly as possible.

Wilshire

« Reply #4 on: November 06, 2015, 08:43:14 pm »
That's what I was getting at, but I felt it kind of side-steps the problem - thus triage.

If the car HAD to choose between killing 1 or 5, it would kill 1, and regardless, as you mentioned, the liability would go to whoever had better contract terms and paid more for lawyers.

locke

« Reply #5 on: November 06, 2015, 08:57:28 pm »
It doesn't choose. At all. It just stops the car. It doesn't know what a human is. It doesn't know what life is. It just stops the car; it doesn't make any moral calculation.

Do robot doctors drain all the blood from people donating blood because they could save more lives by doing so?

No, because they are never programmed to make stupid absolutist moral calculations that no human would even consider.

Wilshire

« Reply #6 on: November 07, 2015, 06:58:45 pm »
Sure it does. It chooses exactly how it was programmed to choose. Simple as that.

In your scenario, the robot doctor would indeed do that if that's how it was programmed to do things. It absolutely would make moral calculations if that's what it was told to do.

But again, you're just dodging the scenario. If you want to play a different game, we can make a new topic about how one wouldn't program a computer to make moral decisions. But I think this topic is specifically about computers making moral decisions.
« Last Edit: November 07, 2015, 07:02:50 pm by Wilshire »
One of the other conditions of possibility.

locke

  • *
  • The Afflicted Few
  • Old Name
  • *****
  • Posts: 648
    • View Profile
« Reply #7 on: November 08, 2015, 06:18:56 am »
And no one is going to write that code for robot doctors, exactly like no one is going to write the equivalent code for robot drivers.

Wilshire

« Reply #8 on: November 08, 2015, 05:04:39 pm »
Sidebar for driverless trains.

At least in the United States, the major railway corporations that are still around are highly entrenched and very rich. It's potentially noteworthy, since we were talking about liability, that it's really difficult to win a liability case against a railway corporation. They have all kinds of rights that pertain to their tracks and the 10 or so feet around them. Basically, if you get hurt by a train, it's your own fault (legally speaking).

Back to auto-trains. They aren't even programmed to stop. One, because stopping a train is really hard - mass and momentum and all that, not tough to figure out. Two, because time is money. If a person or a whole herd of people are on the tracks, the train goes straight through. It doesn't even have sensors to stop. They are programmed to go from point A to point B and assume nothing is in their way. They talk to other trains, sure, but that's mainly on the controller end back at the stations developing routes.
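To put the "mass and momentum" point in numbers, a rough sketch (Python; the deceleration figures are ballpark assumptions, not from any railroad):

Code:
def braking_distance_m(speed_mps, decel_mps2):
    # Kinematics: distance to stop from speed v at constant deceleration a.
    return speed_mps**2 / (2 * decel_mps2)

v = 22.0  # about 80 km/h, in m/s
print(round(braking_distance_m(v, 0.3)))  # ~807 m for a loaded freight train
print(round(braking_distance_m(v, 7.0)))  # ~35 m for a car at the same speed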

So the answer to the Trolley question, for trains at least, is that it runs over whichever group is in the way of the fastest prescribed route :P

jamesA01

  • Guest
« Reply #9 on: January 05, 2016, 08:04:22 pm »
They should be programmed to execute all humans, or at least those that try and take over controls.
« Last Edit: January 05, 2016, 10:48:15 pm by jamesA01 »

jamesA01

« Reply #10 on: January 05, 2016, 10:51:02 pm »
I'll pretend that last post was a joke and not an honest statement of my beliefs.

I am in favour of making human driving illegal as soon as possible. I have no problem with people dying in fiery agony as their corporeality is exploded by mangled metal; the problem is that they tend to inflict these deaths on others. I am very excited that we are approaching a point where the inferiority of the human brain, in contrast to manufactured technics, will become a matter of statistical and factual record, and deniers will drown in their insurance premiums.

Wilshire

« Reply #11 on: January 06, 2016, 02:39:46 pm »
I'm with you, james. Driving is a deathtrap; I'll happily turn over the controls, and I'll feel safer when others do as well.

Recently found out that Teslas have Autopilot driving, allowing the car to stay within its own lane on the highway and not hit things that stop suddenly in front of it. Hope they offer it in the Model 3 whenever it comes out.

H

  • The Zero-Mod
  • Old Name
  • The Honourable H
  • Posts: 2893
  • The Original No-God Apologist
« Reply #12 on: January 08, 2016, 02:18:37 pm »
I really despise driving, so I'd be all for a self-driving car.  It would be amazing to be able to sit back and read while on a road trip...
I am a warrior of ages, Anasurimbor. . . ages. I have dipped my nimil in a thousand hearts. I have ridden both against and for the No-God in the great wars that authored this wilderness. I have scaled the ramparts of great Golgotterath, watched the hearts of High Kings break for fury. -Cet'ingira