Just say “no!” to lethal autonomous robotic weapons

Pages: 299-313
DOI: https://doi.org/10.1108/JICES-12-2014-0065
Published date: 10 August 2015
Author: William M. Fleischman
Subject Matter: Information & knowledge management, Information management & governance
William M. Fleischman
Department of Computing Sciences, Villanova University,
Villanova, Pennsylvania, USA
Abstract
Purpose – The purpose of this paper is to consider the question of equipping fully autonomous robotic
weapons with the capacity to kill. Current ideas concerning the feasibility and advisability of
developing and deploying such weapons, including the proposal that they be equipped with a so-called
“ethical governor”, are reviewed and critiqued. The perspective adopted for this study includes software
engineering practice as well as ethical and legal aspects of the use of lethal autonomous robotic
weapons.
Design/methodology/approach – In the paper, the author surveys and critiques the applicable
literature.
Findings – In the current paper, the author argues that fully autonomous robotic weapons with the
capacity to kill should neither be developed nor deployed, that research directed toward equipping such
weapons with a so-called “ethical governor” is immoral and serves as an “ethical smoke-screen” to
legitimize research and development of these weapons and that, as an ethical duty, engineers and
scientists should condemn and refuse to participate in their development.
Originality/value – This is a new approach to the argument for banning autonomous lethal robotic
weapons based on classical work of Joseph Weizenbaum, Helen Nissenbaum and others.
Keywords Autonomous lethal robotic weapons, Computational ethics,
Software engineering complexity
Paper type Research paper
1. Introduction
In Wired for War: The Robotic Revolution and Conict in the 21st Century, Peter W. Singer
comments that when it comes to giving robotic weapons lethal capabilities, ofcial
military policy seems to be clear and emphatic: “Humans must be kept in the loop”
(Singer, 2009). One imagines this to mean that before any robotic weapon can re on a
human target, a human must give authorization. However, Singer makes the quizzical
observation that whenever this matter is raised in serious discussion, the result is
averted eyes and a change in topic. The subtitle of this paper makes reference to a
kindred and puzzling phenomenon the author has observed, most recently at
ETHICOMP 2013, among ethicists whose area of concern is problematic applications of
computing technology.
In fact, there has been an active discussion among philosophers, legal theorists and
engineers of the question of creating and deploying autonomous robotic weapons with
lethal capability. New contributions to this discussion appear with regularity. It has
stimulated an initiative to ban weapons of this type and, in turn, has spawned critical
commentary on the supposedly premature or wrongheaded nature of this effort.
Received 24 December 2014
Revised 9 March 2015
Accepted 9 June 2015
Journal of Information, Communication and Ethics in Society
Vol. 13 No. 3/4, 2015
pp. 299-313
© Emerald Group Publishing Limited
ISSN 1477-996X
DOI 10.1108/JICES-12-2014-0065