Human morality can be modelled, study finds, paving way for morally aware self-driving cars

A groundbreaking study has found that, contrary to popular opinion, human morality can be modelled, meaning that – at least in principle – future self-driving cars will be able to make human-like moral and ethical decisions.

The study, conducted by scientists from the Institute of Cognitive Science at the University of Osnabrück and published in the journal Frontiers in Behavioral Neuroscience, used VR to simulate road traffic scenarios, which participants enacted as if they were driving, reacting to dilemma situations in which they had to choose whether to spare humans, animals or inanimate objects. The resulting data was fitted to statistical models, from which defined rules were derived, yielding a numerical value-of-life for objects, animals and humans.

The study flies in the face of the widely held assumption that such moral dilemmas are too reliant on context to be statistically modelled.

“We found quite the opposite,” explained study first author Leon Sütfeld, from the University of Osnabrück. “Human behaviour in dilemma situations can be modelled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”

Firefly, one of the self-driving cars developed by Google spinoff Waymo

This resulting value could form the basis of an algorithm, allowing it to be applied by machines such as self-driving cars.

As a result, it should be possible to enable autonomous vehicles to make their own on-the-fly moral decisions in the same way as humans.
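The value-of-life approach the researchers describe can be sketched in code. The weights and function names below are purely illustrative assumptions – the study does not publish specific values – but they show the basic idea: assign each class of entity a numeric value, then choose the action that preserves the most value.

```python
# Hypothetical illustration of a value-of-life decision rule.
# The weights are invented for this sketch; the study derives its
# values empirically from participants' choices in VR dilemmas.

VALUE_OF_LIFE = {
    "human": 100.0,   # assumed weight, not from the study
    "animal": 10.0,   # assumed weight, not from the study
    "object": 1.0,    # assumed weight, not from the study
}

def trajectory_cost(entities_hit):
    """Total value lost if the vehicle takes a path hitting these entities."""
    return sum(VALUE_OF_LIFE[e] for e in entities_hit)

def choose_trajectory(options):
    """Pick the option (name -> list of entities hit) that loses the least value."""
    return min(options, key=lambda name: trajectory_cost(options[name]))

# Example dilemma: swerving hits an animal, staying the course hits a human.
options = {"swerve": ["animal"], "straight": ["human"]}
print(choose_trajectory(options))  # -> swerve
```

In this framing, the ethical debate the researchers raise comes down to who sets the weights: whether they are fitted to human behaviour, as in the study, or prescribed by an ethical theory such as utilitarianism.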

However, this raises its own moral question: should machines adopt human morality, or should they be designed to follow a different moral framework, such as utilitarianism?

“We need to ask whether autonomous systems should adopt moral judgements,” said study senior author Professor Gordon Pipa. “If yes, should they imitate moral behaviour by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”

While currently not programmed with morality, Waymo’s vehicles could in the future make moral decisions as a result of this research. Images courtesy of Waymo

With autonomous vehicles such as Google’s Waymo cars poised to be running on roads within years, this is a decision that will need to be made fairly quickly.

However, it is not just driverless cars where such issues need to be considered. Robots and other AI-run systems are becoming increasingly commonplace, and will continue to be as time progresses. These decisions, the scientists argue, will need to be made for all machines soon, if we are to avoid problems later.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” said study senior author Professor Peter König. “Firstly, we have to decide whether moral values should be included in guidelines for machine behaviour and secondly, if they are, should machines act just like humans?”

Former US presidential candidate Ralph Nader warns against over-hyping driverless cars

Former presidential candidate Ralph Nader has said that unsubstantiated claims from driverless car enthusiasts are distracting authorities from improving transport links and road and rail infrastructure.

In a blog post, Nader argues that while the many advantages of a possible driverless future have been reported by the media, they have not been properly scrutinised, and that the technology is draining much-needed funds that should go to mass transit services and the industry’s own vehicle safety upgrades.

“The mass media took the bait and over-reported each company’s sensationalised press releases, announcing breakthroughs without disclosing the underlying data,” said Nader.

“The arrogance of the algorithms, among many other variables, bypassed simple daily realities such as bustling traffic in cities like New York.”

Image courtesy of Don LaVange

Nader claims that the predicted decline in car sales has led car companies to promote their high-ticket driverless cars, which, he points out, are already being marketed as “computers on wheels”.

However, Nader argues that no explanation has been given for how autonomous vehicles would be integrated into people’s daily lives, and that the problems of cars being hacked or requiring humans to take over remain unresolved.

“The industry, from Silicon Valley to Detroit, argues safety. Robotic systems do not get drunk, fall asleep at the wheel or develop poor driving skills. But computers fail often; they are often susceptible to hacking, whether by the manufacturers, dealers or deadly actors,” said Nader.

“Already, Level Three—an autonomous vehicle needing emergency replacement by the surrogate human driver—is being viewed as unworkable by specialists at MIT and elsewhere. The human driver, lulled and preoccupied, can’t take back control in time.”

Nader also makes the point in his blog post that driverless cars are diverting funding away from making cars we already have safer, more efficient and less polluting.

It is Nader’s opinion that we shouldn’t wait for what he terms a “technological will-o’-the-wisp”, and we should instead make changes to the cars we already have, as well as improving public transportation and infrastructure.

“The driverless car is bursting forth without a legal, ethical and priorities framework. Already asking for public subsidies, companies can drain much-needed funds for available mass transit services and the industry’s own vehicle safety upgrades,” said Nader.

“Why won’t we concentrate on what can be improved and expanded to get safer, efficient, less polluting mobility?”