The sorry state of the autonomous car discussion

As is evident across the web (for example in this article), the current discussion, fueled by Google’s self-driving car news and the possible development plans of other small and large companies, often concerns itself with the morals of a software-driven car1. Which is, frankly, unfortunate.

I think the only people who should really be bothered by all this talk of “who should the autonomous car kill (in case of an accident)” are the programmers hard at work at Google and other companies, who are suddenly held to a much higher moral standard than is expected of programmers who are responsible, today, for hundreds of lives at a time, such as the programmers of railway systems and passenger jet flight control software.

When you look at the problem from the perspective of existing autonomous transport control software, which is already being used to safely transport millions of humans daily, it’s obvious that the designers’ main concern is a quantitatively better response than a human’s (more consistent and faster, in that order) in adverse situations, not a qualitatively better one. That is, the systems do not pretend to make decisions that are morally better than a human’s in any given situation; they just perform better on the exact same actions that the human they replace would have taken anyway.

So when a Google self-driving car programmer comes to answer the “trolley problem” or the “fat man problem” discussed in the linked article, they should not be held to a higher moral standard than the average driver, because that is who they are replacing.

 

  1. That is, immediately after the “ooh, technology is so awesome” debate.