Liability When Self-Driving Cars Make Moral Decisions

The law and morality are closely intertwined in the world of personal injury law. The law strives to do what is right for the individual while also balancing the needs of society. When it comes to self-driving cars, emerging technology is raising serious moral questions that could have equally serious legal consequences.

The Dilemma of Self-Driving Cars

We have written in the past about potential legal issues related to self-driving cars and about who has liability when people are killed in accidents as a result of the failure of a self-driving car to react properly or quickly enough. A recent article has raised the very interesting question about what happens when a self-driving car is purposely set to injure or kill someone.

Why would this be the case? Programmers who work on these vehicles must train computers to make instantaneous decisions in emergency situations where there are two choices as to where to direct the car, and both could cause injury or death.

Star Trek’s Spock once said that the “needs of the many outweigh the needs of the few.” Should self-driving cars adhere to that idea?

Assume that you are in a self-driving car. Three pedestrians wander into the road. The car must be programmed to decide whether to collide with, and potentially injure, the pedestrians, or whether to veer off the road, potentially injuring or killing the car’s passengers. For that matter, the car may have to decide whether to hit the pedestrians or veer into busy sidewalks or other areas that would put people at risk.

People Have Conflicting Opinions

When asked, most people say they want driverless cars to make decisions that protect the maximum number of people, even if that means sacrificing themselves or their own families. Yet most of those same people also said they would not buy a self-driving car that was actually programmed to save the maximum number of people, thus jeopardizing the people in the car.

That is not to say that self-driving cars are unsafe; many believe that they will reduce fatalities, and of course, most accidents do not involve decisions or comparisons about how many people to run over or put into danger.

While the law tries to do what is best for society, it obviously protects the individual as well. A sole individual can sue a multinational corporation or the government itself, sue the Governor, or challenge a law in court. The protection of individual rights has led to great advancements in safety and security for society in general.

That means that makers of self-driving cars may not be able to avoid liability to passengers simply because sacrificing the passengers was the “right decision,” or the one that led to the least overall harm. Allowing self-driving cars to make these kinds of decisions is akin to admitting that a manufacturer is knowingly and purposely selling a product that will not only injure you, but which is, under the right circumstances, specifically programmed to injure you.

Cars Should Not Make Decisions About Negligence

Even if the legislature were to insulate self-driving car manufacturers from liability when the car opts to injure its occupants to save people outside the car, that does not mean the manufacturer is absolutely absolved of liability.

For example, we know that in pedestrian accidents there are many factors that determine who is at fault. When people are hit by cars, experts and detailed facts must be brought forward to prove that the driver was negligent in hitting the person in the road. In many cases, the pedestrian is blamed for the accident, or at least held comparatively negligent in part.

When pedestrians are hit by cars, questions are asked about whether the pedestrian was crossing at the right place in the road, or whether there was a condition on the road that prevented the driver from seeing the pedestrian. A pedestrian can be drunk and reckless with his own behavior just as a car driver can.

How will a self-driving car be programmed to sacrifice its occupants to save people outside the car, if the car cannot evaluate who is at fault? If a drunk pedestrian darts into the road, and the car veers to avoid him, killing its own occupants, many would argue that the pedestrian was at fault, and thus, the occupants should have been spared. But a car or a computer cannot be programmed to make determinations of who is or is not negligent or obeying traffic laws.

What if the pedestrian could have seen and avoided the car? A car may be programmed to veer off the road to protect the pedestrian, injuring the passengers, even though the pedestrian had time to avoid the car and prevent anyone from being injured.

That means that even if laws shielded a manufacturer from being sued for deciding whom to save, it could still be sued for making the wrong decision.

For a population that would buy self-driving cars for greater protection, and for the security of knowing that a computer is assisting in making safety decisions, learning that the computer could intentionally take action to harm them could be a deal breaker and could lead to lawsuits against car manufacturers.

If you are injured as a result of a car accident or a pedestrian accident, you deserve full compensation for the value of your injuries. Contact Brill & Rinaldi today about a free consultation to discuss whether you may have a lawsuit available to you.