I don’t think anyone, in this situation, decides to fall to their death (except maybe highly suicidal people). This is exactly the point I’m trying to make: in the real world, an autonomous system in the exact same situation (assuming it has the same data and the same capabilities as the human, just a quantitatively better decision-making process) must make the exact same choice as the human. This is common sense, but it is also a question of liability (which, in the real world, is the genuinely interesting question).

Now, in the theoretical, philosophical world of moral decisions, where we can assume a computer can make a qualitatively better decision, there is an interesting discussion to have, and I’m fine with having it – just don’t pretend it has any relation to the real world.

So, in that theoretical world, would I prefer to die or to kill a stranger? As a moral decision, having no other knowledge of the stranger and having enough time for a calculated choice, I must flip a coin – I have zero information about which outcome is morally better, so either I’m being selfish (which is an easily defensible moral position) or I’m flipping for it.
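A minimal sketch of what that decision procedure looks like in code (the outcomes and the scoring function here are purely hypothetical, just to illustrate the zero-information case):

```python
import random

def choose_outcome(outcomes, moral_value):
    """Pick the outcome with the highest estimated moral value.

    `moral_value` maps an outcome to an estimated score; when the
    estimates are indistinguishable (zero information), the only
    unbiased fallback is a uniform random choice, i.e. a coin flip.
    """
    scores = {outcome: moral_value(outcome) for outcome in outcomes}
    best = max(scores.values())
    candidates = [o for o, s in scores.items() if s == best]
    return random.choice(candidates)

# With no knowledge of the stranger, both outcomes score the same,
# so the choice degenerates to a coin flip.
print(choose_outcome(
    ["I die", "the stranger dies"],
    moral_value=lambda outcome: 0.0,  # zero information: every outcome is equal
))
```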

The article I referenced at the top poses a more interesting problem – suppose you do have knowledge about the moral implications of your decision, and you can’t be selfish (because you are software, and software has no self-preservation instinct). What kind of process can you follow to make a better moral decision?