The sorry state of the autonomous car discussion
As is evident across the web (for example in this article), the current discussion fueled by Google’s self-driving car news, and by the possible development plans of other companies large and small, often concerns itself with the morals of a software-driven car¹. Which is, frankly, unfortunate.
I think the only people who should really be bothered by all this “who should the autonomous car kill (in case of an accident)” talk are the programmers hard at work at Google and other companies, who are suddenly held to a much higher moral standard than is expected of programmers who are responsible, today, for hundreds of lives at a time – such as the programmers of railway systems and passenger jet flight control software.
When you look at the problem from the perspective of autonomous transport control software that is already being used to safely transport millions of humans daily, it’s obvious that the designers’ main concern is a quantitatively better response than a human’s (more consistent and faster, in that order) in adverse situations, not a qualitatively better one – that is, the systems do not pretend to make morally better decisions than a human would in any given situation; they just perform better at the exact same actions the human they replace would have taken anyway.
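To make that “quantitatively better, not qualitatively better” point concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration of mine (the names, the threshold, the timings), not taken from any real vehicle codebase: the automated controller applies the same action a typical human driver would, only faster and with no variance in reaction time.

```python
# Illustrative sketch only: the automated controller does not invent a "more
# moral" action; it applies the same action a typical human driver would,
# just faster and more consistently. All names and numbers are hypothetical.

import random

def human_driver_action(obstacle_distance_m: float) -> str:
    """The baseline policy: the action an average human driver takes."""
    return "brake_hard" if obstacle_distance_m < 30.0 else "keep_driving"

def human_reaction_delay_s() -> float:
    """Humans react after a variable delay (roughly 0.7 to 2.0 seconds)."""
    return random.uniform(0.7, 2.0)

def autonomous_action(obstacle_distance_m: float):
    """Same decision as the human policy, applied with a fixed, short delay."""
    action = human_driver_action(obstacle_distance_m)  # qualitatively identical choice
    reaction_delay_s = 0.05                            # quantitatively better execution
    return action, reaction_delay_s

if __name__ == "__main__":
    distance_m = 25.0
    print("human driver:  ", human_driver_action(distance_m),
          f"after ~{human_reaction_delay_s():.2f}s")
    print("autonomous car:", *autonomous_action(distance_m))
```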
So when a Google self-driving car programmer comes to answer the “trolley problem” or the “fat man problem” discussed in the linked article, they should not be held to a higher moral standard than the average driver, because that is who they are replacing.
¹ that is, immediately after the “ooh, technology is so awesome” debate [↩]
But the question remains. Let’s take the truly extreme example: it’s just you, driving your car within parameters. Then a situation arises (never mind the actual scenario) where you have to make a choice, and you know the exact outcomes of that choice: you either get yourself killed or you kill another person (or persons). Let’s assume that if there is any way to preserve everyone’s life, it will be coded for. But in this situation, would you kill a stranger (or a group of strangers), or tumble to your death?
And if you really want a scenario, here’s a wild one: you’re driving down a single-lane mountain road. Your brakes go out. Suddenly you spot a person standing in the middle of the road (they were walking there because the road is narrow), frozen in shock, and you can either plow through them or steer off the cliff.
I don’t think anyone, in this situation, decides to fall to their death (except maybe highly suicidal people). This is exactly the point I’m trying to make: in the real world, an autonomous system in the exact same situation (assuming it has the same data and the same capabilities as the human, just a quantitatively better decision-making process) must make the exact same choice as the human. This is common sense, but also a question of liability (which, in the real world, is the seriously interesting question).
Now, in the theoretical philosophical world of moral decisions, where we can assume a computer can make a qualitatively better decision, there is an interesting discussion to have and I’m fine with having it – just don’t pretend it has any relation to the real world.
So, in that theoretical world, would I prefer to die or to kill a stranger? As a moral decision, having no other knowledge of the stranger and having enough time for a calculated moral decision, I must flip a coin – I have zero information as to which is the better moral decision, so either I’m being selfish (which is an easily defensible moral position) or I’m flipping for it.
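As a toy sketch of that last point (again, purely hypothetical, and not anything a real system would ship): with zero information distinguishing the outcomes morally, the “calculated” decision collapses into either a selfish default or a literal coin flip.

```python
# Toy illustration of the zero-information case described above.
# Nothing here reflects real autonomous-vehicle software.

import random

def zero_information_choice(selfish: bool) -> str:
    """With no moral information distinguishing the outcomes, the choice is
    either a selfish default or a coin flip; there is nothing else to compute."""
    if selfish:
        return "save_self"  # the easily defensible selfish default
    return random.choice(["save_self", "save_stranger"])  # the coin flip

print(zero_information_choice(selfish=True))   # always "save_self"
print(zero_information_choice(selfish=False))  # either outcome, 50/50
```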
The article I referenced at the top poses a more interesting problem – suppose you do have knowledge about the moral implications of your decision, and you can’t be selfish (because you are software, and software has no self-preservation instinct): what kind of process can you follow to make a better moral decision?