A groundbreaking study has found that, contrary to popular opinion, human morality can be modelled, meaning that – at least in principle – future self-driving cars will be able to make human-like moral and ethical decisions.
The study, conducted by scientists from the Institute of Cognitive Science at the University of Osnabrück and published in the journal Frontiers in Behavioral Neuroscience, used VR to simulate road traffic scenarios in which participants drove through dilemma situations, forced to choose whether to spare humans, animals or inanimate objects. The resulting data were fitted with statistical models, which were used to produce defined rules, yielding a numerical value-of-life for objects, animals and humans.
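To make the modelling idea concrete, here is a minimal sketch, in Python, of one way a value-of-life score could be fitted from binary dilemma choices. The entity categories, the made-up trial data and the Bradley-Terry-style choice rule are assumptions for illustration; they are not the study's actual data or procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical entity categories; the study's actual set differs.
ENTITIES = ["adult", "child", "dog", "deer", "trash_can"]

# Each trial: (left entity index, right entity index, 1 if the left entity
# was spared, 0 otherwise). These choices are invented for illustration.
trials = [
    (0, 4, 1),  # adult vs trash can -> adult spared
    (1, 0, 1),  # child vs adult    -> child spared
    (2, 3, 1),  # dog vs deer       -> dog spared
    (0, 2, 1),  # adult vs dog      -> adult spared
    (3, 4, 1),  # deer vs trash can -> deer spared
    (1, 2, 1),  # child vs dog      -> child spared
]

def neg_log_likelihood(free_values):
    # Pin the last entity's value at 0: only differences between values
    # affect the choice probabilities, so one value must anchor the scale.
    values = np.append(free_values, 0.0)
    nll = 0.0
    for left, right, spared_left in trials:
        # Bradley-Terry-style choice rule: P(spare left) = sigmoid(v_left - v_right)
        p_left = 1.0 / (1.0 + np.exp(values[right] - values[left]))
        nll -= np.log(p_left if spared_left else 1.0 - p_left)
    # A small ridge penalty keeps the estimates finite when the example
    # choices are perfectly consistent.
    return nll + 0.1 * np.sum(free_values ** 2)

result = minimize(neg_log_likelihood, np.zeros(len(ENTITIES) - 1))
for name, value in zip(ENTITIES, np.append(result.x, 0.0)):
    print(f"{name}: value-of-life = {value:+.2f}")
```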
The study flies in the face of the widely held assumption that such moral dilemmas are too reliant on context to be statistically modelled.
“We found quite the opposite,” explained study first author Leon Sütfeld, from the University of Osnabrück. “Human behaviour in dilemma situations can be modelled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”
These resulting values could form the basis of an algorithm for use by machines such as self-driving cars.
As a result, it should be possible to enable autonomous vehicles to make their own on-the-fly moral decisions in the same way as humans.
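As a purely hypothetical sketch of how a vehicle might act on such values at run time, the snippet below picks the collision path whose obstacles carry the lowest combined value-of-life. The scores and the simple summation rule are illustrative assumptions, not anything prescribed by the study.

```python
# Hypothetical value-of-life scores, e.g. taken from a fitted model like the
# sketch above; the numbers are illustrative, not the study's results.
VALUE_OF_LIFE = {"adult": 10.0, "child": 12.0, "dog": 4.0, "deer": 3.0, "trash_can": 0.1}

def choose_path(paths):
    """Pick the path whose unavoidable obstacles sum to the lowest value-of-life."""
    return min(paths, key=lambda name: sum(VALUE_OF_LIFE[o] for o in paths[name]))

# Example dilemma: swerving hits a deer; staying on course hits a dog and a bin.
options = {"swerve": ["deer"], "stay_on_course": ["dog", "trash_can"]}
print(choose_path(options))  # -> "swerve" (3.0 < 4.1)
```

Even in this toy form, the design choices are ethical commitments: whether to sum values, weight a child more heavily, or refuse to trade lives at all is exactly the kind of question the researchers say society must now answer.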
However, this raises a moral question of its own: should machines adopt human morality, or should they be designed to follow a different ethical framework, such as utilitarianism?
“We need to ask whether autonomous systems should adopt moral judgements,” said study senior author Professor Gordon Pipa. “If yes, should they imitate moral behaviour by imitating human decisions? Should they behave along ethical theories, and if so, which ones? And, critically, if things go wrong, who or what is at fault?”
With autonomous vehicles such as Google’s Waymo cars poised to be on public roads within a few years, this is a decision that will need to be made fairly quickly.
However, it is not just driverless cars where such issues need to be considered. Robots and other AI-run systems are becoming increasingly commonplace, and will only become more so. These decisions, the scientists argue, will need to be made for all machines soon if we are to avoid problems later.
“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” said study senior author Professor Peter König. “Firstly, we have to decide whether moral values should be included in guidelines for machine behaviour and secondly, if they are, should machines act just like humans?”