
The Difficult Path to A Moral Code for Self-driving Cars

by Kari Kohn

The non-rival nature of ideas allows humans to continuously share and build upon them, and in so doing, to make life better. But this great opportunity and optimism must be accompanied by a sense of responsibility. We often assume that the complicated part is coming up with new technologies, because it is hard to imagine something that does not yet exist. In reality, the more difficult challenge is coming up with the right rules. Rules are made up of laws, norms, and conventions, and norms, the socially determined notions about the right and wrong ways of doing things, are the hardest to get right. Norms change over time; they can influence laws and vice versa. A shift in norms is now under way as self-driving (autonomous or driverless) cars enter society.

At first, we were skeptical that a self-driving car was even technologically possible, but, as Nicholas Carr wrote recently in The Atlantic on the delicate relationship between humans and machines, that problem has been resolved.

Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed to the example of driving a car, which requires not only the instantaneous interpretation of a welter of visual signals but also the ability to adapt seamlessly to unanticipated situations. “Executing a left turn across oncoming traffic,” two prominent economists wrote in 2004, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” Just six years later, in October 2010, Google announced that it had built a fleet of seven “self-driving cars,” which had already logged more than 140,000 miles on roads in California and Nevada.

Many think that over time the benefits of self-driving cars will far outweigh the costs, but getting the rules right will be challenging. Citing an October 2013 report (pdf) by the Eno Center for Transportation, Brad Plumer of Wonkblog has highlighted some of the policy challenges self-driving cars will face, many stemming from the interconnection of laws and norms. Imagine the future Gary Marcus described in a November 2012 post for The New Yorker:

Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.

Getting to that day will require specifying certain rules and, thus, shaping norms. Issues related to morality will be particularly interesting to follow. For example, writing in the MIT Technology Review, Sven Beiker poses this hypothetical:

Should an automated car drive its passengers off a cliff to avoid crashing into a peloton of cyclists on a mountain road?

To begin thinking about this, perhaps it's worth looking at the work of Joshua Greene, which Robert Wright recently discussed in a piece for The Atlantic. Although the piece focuses primarily on conflict and strife among humans, its insights on morality can help inform the debate over the norms and rules for self-driving cars. Greene's work centers on a classic thought experiment known as the trolley problem, which he began studying in 1999; here is the summary from his website:

First, we have the switch dilemma:  A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one?  Most people say “Yes.”

Then we have the footbridge dilemma:  Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley.  Is that morally permissible?  Most people say “No.”
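To make the contrast concrete, here is a minimal sketch of what a strictly utilitarian rule, keep as many people alive as possible, would look like if it were actually written down. The scenarios, casualty counts, and function names are hypothetical illustrations, not anything proposed by Greene or by any carmaker. Notice that such a rule gives the same answer in all three cases, including Beiker's mountain road, even though most people's judgments diverge.

# Purely illustrative sketch: a strictly utilitarian decision rule that
# picks whichever action is expected to cost the fewest lives.
# Scenarios and numbers are hypothetical, not real vehicle logic.

def utilitarian_choice(options):
    """Return the option whose expected death toll is lowest."""
    return min(options, key=lambda name: options[name])

# The switch dilemma: divert the trolley (1 death) or do nothing (5 deaths).
switch = {"divert the trolley": 1, "do nothing": 5}

# The footbridge dilemma: push the man (1 death) or do nothing (5 deaths).
footbridge = {"push the man": 1, "do nothing": 5}

# Beiker's hypothetical: swerve off the cliff (passengers die) or hit the peloton.
mountain_road = {"drive off the cliff": 1, "hit the cyclists": 5}

for name, scenario in [("switch", switch), ("footbridge", footbridge),
                       ("mountain road", mountain_road)]:
    print(name, "->", utilitarian_choice(scenario))

# A pure body count treats all three cases identically; most people's
# intuitions do not, and that divergence is what Greene set out to explain.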

Wright summarizes Greene's findings:

Greene’s inspiration was to do brain scans of people while they thought about the trolley problem. The results suggested that people who refused to save five lives by pushing an innocent bystander to his death were swayed by emotional parts of their brains, whereas people who chose the more utilitarian solution—keep as many people alive as possible—showed more activity in parts of the brain associated with logical thought.

...Greene’s diagnosis is, at its foundation, Darwinian: the impulses and inclinations that shape moral discourse are, by and large, legacies of natural selection, rooted in our genes. Specifically, many of them are with us today because they helped our ancestors realize the benefits of cooperation. As a result, people are pretty good at getting along with one another, and at supporting the basic ethical rules that keep societies humming.

...He says the problem is that we were designed to get along together in a particular context—relatively small hunter-gatherer societies. So our brains are good at reconciling us to groups we’re part of, but they’re less good at getting groups to make compromises with one another. “Morality did not evolve to promote universal cooperation,” he writes.

Greene seems to think this wouldn’t be such a big problem if we were still living in the Stone Age, back when sparse population meant that groups didn’t bump into one another much—and when, anyway, a neighboring village might share your language and your culture and maybe even include some of your kin. But the modern world smushes groups together, and to further complicate things, they have different values.

“...they have incompatible visions of what a moral society should be.”

Given Wright's analysis and conclusion, it certainly seems more difficult to envision a set of universally accepted rules for self-driving cars than it does to imagine a society in which self-driving cars are the norm. At a minimum, though, perhaps these moral dilemmas, and the diversity of thought surrounding their resolutions, can begin the discussion. As Wright goes on to note:

If this diversity of moral codes is indeed the big problem, one solution suggests itself: get rid of the diversity. We need “a common currency, a unified system for weighing values,” Greene writes.

...One question you confront if you’re arguing for a single planetary moral philosophy: Which moral philosophy should we use?
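That question is not rhetorical. A small, purely hypothetical follow-on sketch shows why: the same "common currency" machinery, a single scoring function over outcomes, gives opposite answers to Beiker's cliff-versus-peloton question depending on the value weights it is handed, and choosing those weights is precisely the normative work that engineering alone cannot settle.

# Purely illustrative: a "common currency" scorer whose answer depends
# entirely on the value weights it is given. The weights and the scenario
# are hypothetical, not anything proposed by Greene or by carmakers.

def best_action(actions, weights):
    """Score each action as a weighted sum of its consequences; return the least bad one."""
    def cost(consequences):
        return sum(weights[k] * v for k, v in consequences.items())
    return min(actions, key=lambda name: cost(actions[name]))

# Beiker's hypothetical, described by who bears the harm.
actions = {
    "swerve off the cliff": {"passenger_deaths": 1, "bystander_deaths": 0},
    "hit the peloton":      {"passenger_deaths": 0, "bystander_deaths": 5},
}

# Two hypothetical "moral codes": one counts every life equally,
# the other weighs the car's own passengers far more heavily.
impartial        = {"passenger_deaths": 1.0,  "bystander_deaths": 1.0}
partial_to_owner = {"passenger_deaths": 10.0, "bystander_deaths": 1.0}

print(best_action(actions, impartial))         # -> "swerve off the cliff"
print(best_action(actions, partial_to_owner))  # -> "hit the peloton"

# Same machinery, different weights, opposite choices: the hard part is
# agreeing on the weights, that is, which moral philosophy to encode.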
