
The human side of AI: Who drives the driverless cars?

Artificial Intelligence is opening up new frontiers, whether in health care, business or (only slightly alarmingly) warfare, promising a new generation of ethical conundrums. The quandaries caused by driverless cars may require grappling with sooner than most.

“It is not the ferocity of automobiles that is to be feared, but the ferocity of those who drive them. Until humans intervene, they are usually harmless.” (Georgia Appeals Court, 1909)

It may not be long at all until driving a car becomes like making jam: you don’t need to do it, but some strange folk may enjoy doing it anyway.  Self-driving cars are no longer sci-fi. They’re on the cusp of being gently unleashed on public roads.

All of the world’s major car makers – and tech giants like Apple and Google – are ploughing serious cash into self-driving cars.  This could be revolutionary stuff.

But beneath the angular futurism lies squishy, human ethics.  Handing over control of the wheel to algorithms inevitably means handing over decisions about life and death. If the brakes fail on a driverless car, what should it do? Protect its passengers at all costs? Swerve into the cycle lane? Or sacrifice itself by driving off a bridge rather than ram into the back of a school bus?

Perhaps we’re overthinking it. Google suggests automated cars will need to inherit the same crisis response human drivers generally display – ramming on the brakes. And in practice, getting public acceptance of driverless vehicles creates a huge imperative for those cars to put the safety of their passengers above all else. Mercedes-Benz executive Christoph von Hugo notoriously admitted what research has shown: no-one will set foot in an automated car if they think the blessed thing will sacrifice them to the greater good.

But grey areas remain. Brakes will still fail. In an impending crash, humans take subjective, hopelessly imperfect, inconsistent decisions. We’re not computers. We can point to the impossibility of that decision to excuse whatever terrible consequences ensue. But algorithms – programmed by humans to follow a set of conditions – can’t. Particularly not with access to a new richness of objective data that could – depending on your point of view – sharpen, or further complicate, that decision.

What if the passenger of the car has a terminal illness? What if the pedestrian in harm’s way is a criminal on the run? Does that change anything? Should it? And is the ability to make fast, accurate, finely balanced calls about who lives and dies in a way humans never could at least as much of an opportunity as a threat?

With regulators struggling to stand up to the power of tech giants, and public debate lagging way behind the technological frontier, society has to find a way to take control of the new rules of the road. When computers take the wheel, who will decide what drives what they do in a crisis?

David Powell is Environment Lead at the New Economics Foundation


David Powell, 9 October 2017
