Thought Experiment: Will Autonomous Cars Try to Kill Us?
I came across a headline on Medium this morning that grabbed my attention. The subheading softened it a bit, but nonetheless, I was intrigued. The article, titled "A Self-Driving Car Might Decide You Should Die", used a series of well-known ethics questions to reach the author's ultimate point. That point, at least as I understood it, was this: if you build an algorithm that removes human error by calculating the optimal outcome of a life-threatening situation, how do you assign value to human life so you can choose who lives and who dies? It is in the programming of this algorithm that our values are truly reflected, and that is, in fact, very scary.
Let's take a step back and think about this for a minute. The author's calculation in his Infinite Trolley Problem seems quite sound (read the article for more detail). It's easy (or at least easier) to choose between saving 5 people by sacrificing 1, with or without intentionally causing harm (version 1 versus version 2 of the Trolley Problem). The Infinite Trolley Problem, however, asks us to posit what an acceptable loss of human life is, with the understanding that you aren't saving another life, but simply avoiding inconvenience to another group of people. When the author tested this problem, others objected that it wasn't a real-world example, but his allegory to our dependency on automobiles makes perfect sense.
To recap, he states that there are roughly 250 billion automobile trips made by Americans every year. As a result of those trips, there are 30,000 automobile-related deaths. That means:
250,000,000,000 trips / 30,000 deaths ≈ 8,333,333 trips per death
The argument then goes that the convenience of automobile travel costs us 1 human life for every 8.3 million trips. Or put another way, we (and I mean us as a society) are willing to sacrifice a human life every 8.3 million car trips and deem that an acceptable loss in order to maintain the convenience of automobile travel. I can't help but agree.
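For the curious, here's the back-of-the-envelope math as a quick sketch. The figures are the article's rough estimates, not official statistics, and the flip side of the ratio is worth seeing: the per-trip probability of death.

```python
# Back-of-the-envelope calculation using the article's rough figures
# (250 billion trips, 30,000 deaths per year -- assumed, not official data).
trips_per_year = 250_000_000_000
deaths_per_year = 30_000

trips_per_death = trips_per_year / deaths_per_year
death_probability_per_trip = deaths_per_year / trips_per_year

print(f"Trips per death: {trips_per_death:,.0f}")              # ~8,333,333
print(f"P(death) per trip: {death_probability_per_trip:.2e}")  # ~1.2e-07
```

Framed that way, every time we get in the car we accept roughly a 1-in-8-million chance of dying, and collectively we've decided that's fine.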
So now the answer to the Infinite Trolley Problem is abundantly clear, whether we want it to be or not. Our values are laid bare, even if we can't internalize them. Now the question becomes: if we are to apply these same values to autonomous machines, what value do we place on human life in order to avoid inconveniencing people? This is the question that perhaps no one wants to answer.
There are plenty of implications of these types of algorithms and scoring systems. For the moment, though, let's focus on autonomous (self-driving) cars. I was in a car accident a few years ago while driving my 5-year-old daughter home from ballet class (#dadlife). We were driving on the highway, and the driver two cars ahead of me realized too late that the car in front of them was stopping. That driver made the split-second decision to swerve into the right lane, almost hitting another car in the process. The driver directly in front of me (an inexperienced one) slammed on his brakes, coming to a complete stop in the middle of the highway, even though traffic had begun to move again. I saw the first car swerve into the right lane, and I saw the car in front of me hit its brakes. Now I had to make a decision.
There was a concrete median to my left. There were cars to my right. I had several car lengths between me and the stopped car in front of me, but I was traveling 50 mph. In that moment I decided that my best option was to step on my brake and try to stop in time. My speed, the road conditions, my reaction time, the other vehicles, etc., were all variables I had to process in a split second to make my decision. I've replayed it a thousand times in my head, and the alternatives don't seem much better. I could have swerved into the right-hand lane, and either the car to my right would have reacted and we both would have been fine, or I would have clipped that car just right, flipping my car and killing me and my daughter. There is no way my mind could have calculated all of those outcomes in the time I had to react, but an autonomous car could have. And that is scary.
Think about this. Even though I wasn't driving an autonomous vehicle, the car actually made several decisions for me. For example, while I was trying to push my brake pedal through the floor, I could feel the stutter of the anti-lock braking system (ABS). The car decided, given my speed and the force I was applying to the brake, that pulsing the brakes would be more effective than simply letting them lock up. When I finally smashed into the vehicle in front of me, the car decided that the force of the impact was sufficient to deploy my airbag. To a lesser extent, the mechanism in my seatbelt decided to lock the shoulder strap on impact. I guess I can't complain. Both my daughter and I walked away relatively unscathed.
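At their core, those built-in decisions are threshold rules. Here is a minimal sketch of that kind of logic; the signals and thresholds are invented for illustration, and real ABS and airbag controllers are far more sophisticated than this.

```python
# Illustrative threshold logic only -- real ABS and airbag controllers
# use much more sophisticated sensor fusion; all values here are invented.

def abs_should_modulate(wheel_speed: float, vehicle_speed: float) -> bool:
    """Pulse the brakes if a wheel is slowing much faster than the car,
    i.e. it is about to lock up."""
    slip_ratio = 1.0 - wheel_speed / max(vehicle_speed, 0.1)
    return slip_ratio > 0.2  # hypothetical lock-up threshold

def airbag_should_deploy(deceleration_g: float) -> bool:
    """Deploy only above a crash-severity threshold."""
    return deceleration_g > 20.0  # hypothetical threshold in g

print(abs_should_modulate(wheel_speed=10.0, vehicle_speed=22.0))  # True
print(airbag_should_deploy(deceleration_g=35.0))                  # True
```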
The decisions made by my car are the result of years of testing based on probability calculations. Now apply that same approach to a truly autonomous car. Perhaps my car would have calculated that simply applying the brakes had a 95% chance of impact but only a 5% chance of death, while swerving into the right-hand lane had only a 50% chance of impact but a 15% chance of death. What does the car decide to do then? Maybe that one's simple math. But what about other factors? What about the value of the vehicles? What about the fact that the highway I was on was only two lanes? Would swerving into the right-hand lane cause more of a traffic backup than hitting the car in the left lane? Is protecting human life the only factor that we (and yes, I mean us as a society) are willing to use as part of our algorithm? I'd like to think so, but I'm not that naïve.
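To make the "simple math" concrete, here is a minimal sketch of an expected-harm comparison using the hypothetical probabilities above. The options, probabilities, and cost weights are all illustrative assumptions, not any manufacturer's actual algorithm, and the weight assigned to a death is precisely the value judgment the article worries about.

```python
# Hypothetical expected-harm comparison -- the options, probabilities,
# and cost weights below are illustrative assumptions, not real data.

COST_IMPACT = 1.0
COST_DEATH = 1000.0  # hypothetical: one death "costs" 1,000 impacts

options = {
    # option: (P(impact), P(death)) -- numbers from the scenario above
    "brake_hard":   (0.95, 0.05),
    "swerve_right": (0.50, 0.15),
}

def expected_harm(p_impact: float, p_death: float) -> float:
    """Weighted sum of outcome probabilities -- a simple expected cost."""
    return p_impact * COST_IMPACT + p_death * COST_DEATH

best = min(options, key=lambda name: expected_harm(*options[name]))
for name, probs in options.items():
    print(f"{name}: expected harm = {expected_harm(*probs):.2f}")
print(f"Chosen action: {best}")
# With these weights, braking wins (50.95 vs 150.50). Change COST_DEATH
# and the choice can flip -- that is exactly the values question.
```

Notice that the algorithm itself is trivial; everything hard is hidden in the cost weights.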
Regardless of the algorithm, a hyper-connected autonomous transportation system would undoubtedly save lives. I'm just not sure I want to answer the author's ultimate question: what does it mean if we are frightened by our own values?