Autonomous vehicles present serious questions of liability. When they are the decision makers, who is at fault in a crash?
When it comes to autonomous vehicles, we are faced with an ethical dilemma: In the seconds before an accident, should an autonomous vehicle do anything it can to protect the passengers, even if it means harming other motorists or pedestrians?
When humans are behind the wheel, collateral damage, as terrible as it is, doesn’t pose much of an ethical problem. A person in danger can’t be faulted when their survival instincts make them swerve into a pedestrian. But when machines are the decision-makers, does a pedestrian harmed in an accident have a case against the car manufacturer? Does a driver injured in an accident have a case against the manufacturer?
As a European Commission report on ethical dilemmas inherent in IoT technology stated: “People are not used to objects having an identity or acting on their own, especially if they act in unexpected ways.”
Hany Atlam, a PhD researcher at the University of Southampton, wrote in a recent piece: “The social acceptance of IoT applications and services is strongly depending on the trustworthiness of information and the protection of private data. Since the IoT is a complex, distributed and heterogeneous system in nature, it faces several challenges regarding security and privacy. Currently, building an effective and reliable security technique is one of the highest priorities to consider. Although a number of researchers have introduced several solutions to the security and privacy issues, a reliable security technique for the IoT is still in demand to satisfy requirements of data confidentiality, integrity, privacy and trust.”