Session: “IN DEFENSE OF KILLING PEDESTRIANS: AUTONOMOUS VEHICLE ADOPTION & INEVITABLE COLLISIONS”
How should autonomous vehicles be programmed to handle inevitable collision situations – situations in which they must collide with something or someone? This question, more than any other about autonomous vehicles, has captured popular and scholarly interest. I argue for a counter-intuitive answer: autonomous vehicles should be programmed to protect their passengers at all costs, even if that means producing more harm than would otherwise occur. My advocacy of this position, however, rests on the same basic moral commitment as the view that autonomous vehicles should be programmed to minimize harm, even at the cost of sacrificing passengers: our goal should be to minimize vehicular harm. I see this goal as supporting programming decisions that hasten the adoption of autonomous vehicles, since autonomous vehicles are widely expected to be significantly safer than human-driven vehicles. And because we have ample empirical evidence that people are more likely to accept autonomous vehicles that will protect them, we will get more rapid adoption if autonomous vehicles are programmed in just that way.