Angels and artifacts: Moral agents in the age of computers and networks

DOI: https://doi.org/10.1108/14779960580000269
Pages: 151-157
Published: 31 August 2005
Authors: Keith Miller, David Larson
Subject: Information & knowledge management
AUTOMATED ANGELS: FLORIDI’S CRITERIA FOR ARTIFICIAL MORAL AGENCY
Floridi and Sanders’ (2004) recent piece on the possibility of artificial moral agents proposes criteria that, they contend, can be used to distinguish those few computer programs and systems that should be regarded as moral agents. (They do not elaborate on how this might impact on whether or not such programs would be moral patients, and we’ll follow their lead in this paper by not dealing with that issue either.) Their three criteria and definitions are as follows:
Interactivity: “the agent and its environment (can) act upon each other.”

Autonomy: “the agent is able to change state without direct response to interaction: it can perform internal transitions to change its state.”

Adaptability: “the agent’s interactions (can)

Info, Comm & Ethics in Society (2005) 3: 151-157
© 2005 Troubador Publishing Ltd.

Keith Miller
Dept. of Computer Science, University of Illinois at Springfield,
One University Plaza, Springfield, IL 62703, USA
Email: miller.keith@uis.edu

David Larson
Dept. of Management Information Systems, University of Illinois at Springfield,
One University Plaza, Springfield, IL 62703, USA
Email: larson.david@uis.edu
Traditionally, philosophers have ascribed moral agency almost exclusively to humans (Eshleman, 2004). Early writing
about moral agency can be traced to Aristotle (Louden, 1989) and Aquinas (1997). In addition to human moral agents,
Aristotle discussed the possibility of moral agency of the Greek gods and Aquinas discussed the possibility of moral
agency of angels. In the case of angels, the difficulty was that angels were suspected of lacking sufficient independence from God for their choices to count as genuinely moral. Recently, new candidates
have been suggested for non-human moral agency. Floridi and Sanders (2004) suggest that artificial intelligence (AI) programs that meet certain criteria may attain the status of moral agents; they propose a redefinition of moral agency
to clarify the relationship between artificial and human agents. Other philosophers, as well as scholars in Science and
Technology Studies, are studying the possibility that artifacts that are not designed to mimic human intelligence still
embody a kind of moral agency. For example, there has been a lively discussion about the moral intent and the consequential effects of speed bumps (Latour, 1994; Keulartz et al., 2004). The connections and distributed intelligence of a network are another candidate being considered for moral agency (Allen, Varner & Zinser, 2000). These philosophical
arguments may have practical consequences for software developers, and for the people affected by computing. In this
paper, we will examine ideas about artificial moral agency from the perspective of a software developer.
Keywords: Computer ethics; artificially intelligent programs as moral agents; computer networks as moral agents