The “brains” of self-driving cars are today being programmed to handle the millions of scenarios that routinely confront human drivers. But what happens when lives are at stake?
Real-life situations can and do arise in which humans must make split-second decisions about where to point their multi-ton vehicles, and those decisions can cost (or save) lives.
Despite the best efforts of well-intentioned autonomous vehicle designers and artificial intelligence experts, “Accidents will happen, and a computer must be programmed to react in those situations, sometimes when death is inevitable. In those instances, it’s succinct to say that we’ll have to program computers to kill.”
So how should we build the rule sets that govern the behavior of self-driving cars (SDCs)?
Isaac Asimov’s venerable “Three Laws of Robotics” seem applicable, but even they don’t cover lose-lose “Kobayashi Maru” situations where lives are at stake.
For an exploration of these liability and philosophical issues (in a Star Trek-themed context!), check out Andrew Heikkila’s thought-provoking article, “Self-Driving Cars And The Kobayashi Maru”.