
Self-Driving Cars and the Nirvana Fallacy


“A single death is a tragedy, a million deaths is a statistic.”

Though this is the first time I've quoted Joseph Stalin, his observation seems apropos to the ongoing debate about regulating self-driving cars.

On March 18, 2018, a self-driving Uber car struck and killed a pedestrian, Elaine Herzberg. No question, Herzberg’s death was a calamity. Yet, on the same day, roughly 3,700 other people around the world lost their lives in auto accidents. How many of those made international news?

Fredrick Kunkle, in a Washington Post piece, compared Herzberg's death to that of Bridget Driscoll, the first pedestrian killed by an automobile (in 1896). To my mind, that's the right comparison. When we weigh the risks of autonomous vehicles, it would be a mistake to compare real-world outcomes with a hypothetical utopia in which these vehicles never cause harm to person or property. If such an idealized world is to be our standard, we might also compare our world to a universe where autonomous cars never break down, overheat, make a wrong turn, or need any fuel. While we're at it, why not also make them free, and have them rain (safely) from the heavens whenever we desire transport?

To make any of those obviously silly comparisons would be to commit an error that Harold Demsetz once warned us against: the Nirvana Fallacy. When someone condemns the real world, filled as it is with human imperfection and constrained as it is by scarcity, by comparing it to a hypothetical utopia beset by neither human foibles nor imperfect information, they are committing the Nirvana Fallacy.

Ours is not a world peopled with drivers who are perfectly vigilant or alert. Nor is our world one where current technology ensures that self-driving vehicles never make a misstep. Reality therefore condemns us to choose between two imperfect worlds: a world of distracted, angry, tired drivers whose peripheral vision is flawed, and a world of self-driving cars that occasionally malfunction, misjudge, and break down.

Discussions of how regulation could "get in front of" self-driving cars are therefore incomplete and may, ultimately, cost lives. According to the National Highway Traffic Safety Administration, over 42,000 people perished on U.S. roads in 2021. That implies self-driving cars would be an improvement if, with autonomous vehicles widely prevalent, "only" 41,000 people were to perish in car accidents.

To put this even more starkly: were those numbers accurate, every year that regulators delay because driverless cars are not yet perfectly safe would cost a thousand lives on net.
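To make that back-of-envelope arithmetic explicit, here is a minimal sketch. The 42,000 figure is the NHTSA count cited above; the 41,000 figure is purely hypothetical, used only to illustrate the argument.

```python
# Back-of-envelope illustration of the trade-off described above.
# The status-quo figure is NHTSA's 2021 count; the autonomous-vehicle
# figure is a hypothetical assumption, not a prediction.
deaths_status_quo = 42_000  # U.S. road deaths in 2021 (NHTSA)
deaths_with_avs = 41_000    # hypothetical toll with self-driving cars widespread

net_lives_lost_per_year_of_delay = deaths_status_quo - deaths_with_avs
print(net_lives_lost_per_year_of_delay)  # -> 1000
```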

My point is not that I know what these numbers are, nor am I an expert on the regulatory hurdles these vehicular innovations must overcome. Rather, I wish to make the more general, conceptual point that regulators' insistence on making self-driving cars safer before permitting them may, on net, cost lives.

Ex ante regulation of the type being discussed for driverless vehicles stipulates ahead of time the specifications a product must comply with. It necessarily invokes an arbitrary set of safety standards. It also short-circuits the local, tacit knowledge that producers have about how to make their products or production processes safer. Ironically, safety regulation can make us less safe, for precisely this reason.

I don’t know how to navigate the trade-offs inherent in creating a risky product (i.e. any product). Neither do you. But markets do.

Adam Thierer's apt term, "permissionless innovation," is relevant here. Instead of relying on ex ante regulation, we could imagine innovations that may lawfully come to market without any bureaucrat's permission.

What about the real harms that driverless cars would inevitably cause? Well, how are car accidents handled now? A robust tort system, coupled with insurance, works these things out and, more importantly, provides an incentive for precaution in driving. Why not hold owners of driverless vehicles similarly accountable for any damage they cause?

This approach would have at least two advantages. First, when producers know how to make cars safer, they wouldn't be beholden to the opinion of an uninformed Washington bureaucrat. Second, without the need to "ask for permission," innovations like driverless cars would hit the streets sooner. While these cars may not be perfect, that would only mean they are well suited to planet earth, where perfection exists only among the Platonic forms, and in the minds of D.C. regulators.

Caleb S. Fuller

Caleb S. Fuller is associate professor of economics at Grove City College. His research interests include organizational economics, the economics of privacy, and the relationship between institutions and entrepreneurship. He has published papers in Public Choice, the International Review of Law and Economics, and the Review of Austrian Economics among other outlets. He earned his BA in economics from Grove City College and his PhD in economics from George Mason University.
