Prolegomena to any future artificial moral agent

Authors: Allen, C.; Varner, G.; Zinser, J.

Abstract:
As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from computational limits to the implementation of such theories. In this paper the ethical disputes are surveyed, the possibility of a 'moral Turing Test' is considered and the computational difficulties accompanying the different types of approach are assessed. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata.

Keywords: moral agency; computational ethics; moral Turing Test

DOI: 10.1080/09528130050111428

Source: Journal of Experimental & Theoretical Artificial Intelligence
