Germany is gearing up to lay down the ethical foundations for self-driving cars, banning A.I. from making decisions that could harm one group of people over another. Transport minister Alexander Dobrindt has outlined the basis for future legal guidelines for driverless vehicles, rules that echo Isaac Asimov's three laws of robotics and that manufacturers will be expected to work towards ahead of formal legislation.
(1) “It is clear that property damage always takes precedence over personal injury.”
(2) “There must be no classification of people, for example, based on size, age, and the like.”
(3) “If something happens, the manufacturer is liable.”
Dobrindt has created an ethics commission to work out the specifics in terms of regulation, but the above rules will serve as a starting point for future laws.
The third rule may seem to suggest that the manufacturer cannot depend on the driver stepping in during an emergency, but Dobrindt indicated that drivers will be expected to maintain a basic level of awareness at all times. In practice, this will likely mean sleeping at the wheel is forbidden, but reading a book is allowed. A black box will record whether the machine or the driver was in control at the time of an accident.
There is dispute in the sector over how much awareness a driver should be required to maintain. While the UK’s first self-driving car policy explicitly bans drivers from devoting anything less than their full attention to the road, Michigan has taken steps towards allowing self-driving cars with no human drivers at all.
While Germany’s rules appear straightforward, manufacturers will need to consider how to implement them when dealing with A.I. that may react unexpectedly in certain situations. On Wednesday, researcher Stuart Armstrong explained that, when it comes to teaching rules to A.I., it’s more effective to instill values through machine learning than to hard-code rules. Nonetheless, Dobrindt’s rules give an outline of what a future A.I. code of ethics for self-driving cars may look like.
Photos via Moral Machine/MIT