I was reading an article about machine morality, and it reminded me of first hearing about the Trolley Problem a few years ago. The conclusion I think everyone should come to is that there really is no best solution. That might sound like stating the obvious, but in a scenario like this, the time for good decisions has already come and gone. All the emphasis on finding the "best" choice is, I would say, pointless.
Do you kill the old people or the young people? Neither: just don't speed so much, and you wouldn't have brought about the situation in the first place.
Most of these problems are dilemmas with no solution and, in my humble opinion, a big waste of time. Maybe you can treat morality in this mathematical way when you are presented with a situation where you must shoot a criminal to save a hostage, but in an accidental circumstance like this, what progress is there to make? No good option exists.
There is often an impulse to save the young and the old first, and especially women (at least that was the practice on the Titanic, I believe). I would be curious how an AI would weigh those factors.
Luckily, most of us are not doomed to life-or-death situations like this, as long as we don't put our lives in danger in some absurd way. What I am especially interested in is how an AI would fare acting as a judge in a legal context.