Artificial Driving Intelligence and Moral Agency: Examining the Decision Ontology of Unavoidable Road Traffic Accidents through the Prism of the Trolley Dilemma

Authors: Martin Cunneen, Martin Mullins, Finbarr Murphy, Seán Gaines Cooke

Date: 01.01.2019

Journal: Applied Artificial Intelligence


Abstract

The question of the capacity of artificial intelligence to make moral decisions has been a key focus of investigation in robotics for decades. This question has now become pertinent to automated vehicle technologies, as a question of understanding the capacity of artificial driving intelligence to respond to unavoidable road traffic accidents. Artificial driving intelligence will make a calculated decision that could equate to deciding who lives and who dies. In calculating such important decisions, does the driving intelligence require moral intelligence and a capacity to make informed moral decisions? Artificial driving intelligence will be determined, at the very least, by state laws, driving codes, and codes of conduct relating to driving behaviour and safety. Does it also need to be informed by ethical theories, human values, and human rights frameworks? If so, how can this be achieved, and how can we ensure there are no moral biases in the moral decision-making algorithms? The question of moral capacity is complex and has become the ethical focal point of this technology. Research has centred on applying Philippa Foot’s famous trolley dilemma. We claim that before applications attempt to engage moral theories, it is first necessary to utilise the trolley dilemma as an ontological experiment. The trolley dilemma succinctly identifies important ontological differences between human driving intelligence and artificial driving intelligence. In this paper, we argue that when the trolley dilemma is focused on ontology, it has the potential to become an important elucidatory tool. It can act as a prism through which one can perceive different ontological aspects of driving intelligence and assess response decisions to unavoidable road traffic accidents. Identifying these ontological differences is integral to understanding the underlying variances that support human and artificial driving decisions. Ontologically differentiating between these two contexts allows for a more complete interrogation of the moral decision-making capacity of artificial driving intelligence.

BibTeX

@article{cunneen2019artificial,
  title    = {Artificial Driving Intelligence and Moral Agency: Examining the Decision Ontology of Unavoidable Road Traffic Accidents through the Prism of the Trolley Dilemma},
  author   = {Martin Cunneen and Martin Mullins and Finbarr Murphy and Seán Gaines Cooke},
  journal  = {Applied Artificial Intelligence},
  volume   = {33},
  pages    = {267--293},
  year     = {2019},
  keywords = {Decision making, Highway traffic control, Ontology, Philosophical aspects, Prisms, Roads and streets},
  doi      = {10.1080/08839514.2018.1560124},
}