The ethical dilemma of self-driving cars

I watched a great video this morning exploring the moral dilemmas of self-driving cars and the decisions that have to be programmed into their software in advance, decisions that determine who lives and who dies if a fatal accident occurs.

Let’s say at some point in the not so distant future, you’re barreling down the highway in your self-driving car, and you find yourself boxed in on all sides by other cars. Suddenly, a large, heavy object falls off the truck in front of you. Your car can’t stop in time to avoid the collision, so it needs to make a decision:

1. Go straight and hit the object
2. Swerve left into an SUV
3. Swerve right into a motorcycle

Should it prioritize your safety by hitting the motorcycle, minimize danger to others by not swerving (even if that means hitting the large object and sacrificing your life), or take the middle ground by hitting the SUV, which has a high passenger safety rating? So, what should the self-driving car do?

If we were driving that boxed-in car in manual mode, whichever way we reacted would be understood as just that, a reaction, not a deliberate decision. It would be an instinctive, panicked move with no forethought or malice. But if a programmer were to instruct the car to make the same move, given conditions it may sense in the future, well, that looks more like premeditated homicide.

It's going to be like the I, Robot movie :wink:

The self-driving car (as I understand it) wouldn't be allowed (programmed) to drive close enough for this to happen.

This /thread

If I was driving, I'd slam on the brakes to minimise the impact; the car should be able to do the same.

That would depend on the object and how it falls. If it falls onto the road, then by all means brake hard and hit it, if that seems the least-worst option. But if it's still falling through the air when you hit it? I'd swerve. Braking hard might mean decapitation.

As they say in the video, self-driving cars are predicted to dramatically reduce traffic accidents and fatalities by removing human error from the driving equation. Plus, there may be all sorts of other benefits: eased road congestion, decreased harmful emissions, and minimized unproductive and stressful driving time.

But accidents can and will still happen, and when they do, their outcomes may be determined months or years in advance by programmers or policy makers.

And they’ll have some difficult decisions to make. It’s tempting to offer up general decision-making principles, like minimize harm, but even that quickly leads to morally murky decisions.

But this would mean that the car would need to be able to catalogue and identify every single object in its sights and carry out a risk assessment for every single object it sees, possibly hundreds or thousands of objects every few seconds. That is a huge load on its processing power.
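Just to illustrate the scale of what that implies, here's a deliberately simplified, hypothetical sketch (the names and numbers are made up; this isn't how any real autonomous-driving stack works) of the kind of per-frame loop being described: every detected object has to be classified and given a risk score before the next sensor frame arrives.

```python
# Purely illustrative: a toy per-frame risk-assessment loop.
# In a real system, detection and classification (not shown) would
# dominate the compute cost; this only sketches the scoring step.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str            # e.g. "SUV", "motorcycle", "debris"
    distance_m: float    # estimated distance from our car, in metres
    rel_speed_ms: float  # closing speed in m/s (positive = approaching)

def collision_risk(obj: DetectedObject) -> float:
    """Toy risk score: closer, faster-closing objects score higher."""
    if obj.rel_speed_ms <= 0:
        return 0.0
    time_to_impact = obj.distance_m / obj.rel_speed_ms
    return 1.0 / max(time_to_impact, 0.1)

def assess_frame(objects: list[DetectedObject]) -> list[tuple[DetectedObject, float]]:
    # Every object in view gets scored, every frame, highest risk first.
    return sorted(((o, collision_risk(o)) for o in objects),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # One made-up sensor frame; a real frame could contain hundreds of
    # objects, arriving tens of times per second.
    frame = [DetectedObject("debris", 12.0, 8.0),
             DetectedObject("SUV", 2.0, 0.5),
             DetectedObject("motorcycle", 2.5, 0.3)]
    for obj, risk in assess_frame(frame):
        print(f"{obj.kind}: risk={risk:.2f}")
```

Even this toy loop runs continuously; the real perception work feeding it is orders of magnitude heavier, which is the processing load being described.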

I was replying to Danny’s specific post, not the general topic. I’ve added context to make that clearer.

As for the general point re: self-driving cars - finally, a practical application for the trolley problem! :laughing:

I’ve just thought of something worse:
The trolley problem combined with predictive modelling of the statistical likelihood of committing a crime.
Now that's some nightmare fuel (as if Brexit wasn't enough :see_no_evil:)
EDIT to the nightmare fuel:
Add photo-recognition software that trawls through social media to find your LinkedIn account in order to predict your potential/real salary and your supposed value to society. (For full disclosure, I don't agree that salary shows someone's value to society or anything like that; I'm merely postulating.)
Something interesting to look at is:

I think the car should be able to be brought "out" of auto-pilot for these scenarios, and in all other cases, should the driver not decide to take control, it should not do anything.

Surely an omission (like in @NeilM’s trolley problem) is always better than an action which could cause death?

Either way, a proper moral dilemma!

I think that's a super bad idea, coming out of auto-pilot mode just before an incident. Situations can happen very quickly, and handing over control to a driver at that moment will just cause the inevitable head-on collision.

But isn’t that what planes do? Doesn’t a pilot have to take back control of the plane should something catastrophic happen whilst on auto-pilot?

A good question to add: if, when buying a self-driving car, you could choose between a model that would always save as many lives as possible in an accident and one that would save you at any cost, which would you buy?

I know I would go for saving me at any cost.

I can't comment on whether or not that's the case, but if it's true, it makes sense, because a situation like a collision course with the ground takes minutes to play out, not fractions of a second.

Kinda defeats the point of having self-driving cars if people still need to learn to drive/operate them.

It's a weird comparison to make, I know. But maybe a split second of human action could save that plane, or get us out of the scenario you posted above with no fatalities or injuries.

I think the minute you start programming for "save all", "save the driver", or "make a judgement call", you start playing with things too much. For me, omissions are better than any action in these scenarios.

Planes are all but auto-piloted apart from take-off and landing. I know it isn't really relevant, but I wonder why that is.

Aren’t all the current “driverless” cars still able to be manually driven?

That's interesting, but would your answer change when you put yourself into this situation?

Is it fair for the car to hand over control to you in a split second when you are hurtling down the motorway at 70 miles per hour while you are attending to your child or playing Angry Birds?

I see what you are saying, and I agree from the point of view that it's not me sitting behind the wheel.