Technology and moral vacuums in just war theorising

Elke Schwarz
University of Leicester, UK

Journal of International Political Theory, 2018, Vol. 14(3), 280–298
Published: 1 October 2018
DOI: 10.1177/1755088217750689
© The Author(s) 2018
Corresponding author: Elke Schwarz, University of Leicester, University Road, Leicester LE1 7RH, UK. Email: es304@leicester.ac.uk
Abstract
Our contemporary condition is deeply infused with scientific-technological rationales.
These influence and shape our ethical reasoning on war, including the moral status of
civilians and the moral choices available to us. In this article, I discuss how technology
shapes and directs the moral choices available to us by setting parameters for moral
deliberation. I argue that technology has moral significance for just war thinking, yet
this is often overlooked in attempts to assess who is liable to harm in war and to what
extent. This omission produces an undue deference to technological authority, reducing
combatants, civilians and scenarios to data points. If we are to develop a maximally
restrictive framework for harming civilians in war, which in my view should be a goal
of just war thinking, then it is imperative that the scientific-technological dimension of
contemporary war is given due attention.
Keywords
Applied ethics, ethics of technology, ethics of war, just war theory
Introduction
On 7 July 2016, former army reservist Micah Xavier Johnson opened fire on police offic-
ers in Dallas, killing five and wounding an additional nine officers. The shooter was
subsequently chased by responding officers and eventually cornered in a college build-
ing where a 2-hour-long standoff ensued. During this time, Johnson threatened to shoot
anyone in his path, growing increasingly erratic and manic. As efforts to negotiate
Johnson’s surrender seemed to deteriorate, the Dallas Police Department (PD) resorted
to unusual measures: it sent in a lethal robot. Dallas Police Chief David Brown ordered
two officers to mount a charge of C-4 explosive on a bomb-disposal robot, which the PD
had acquired a few months earlier. The department sent the armed robot in to where Johnson was
sequestered, and remotely detonated the charge once the robot reached its target. Johnson
died in the blast. Brown insisted that the action was necessary to prevent further harm to
officers: ‘We had no choice, in my mind, but to use all tools necessary’ (quoted in Chan,
2016). The case attracted considerable attention in the media and in academic circles for
its unusual use of remote-control technology and for the high level of force applied to kill
a single attacker. A key concern was the moral implications of police use of remotely
operated lethal technology. More pointedly, might the availability of such technology
somehow serve to ‘reframe our perceptions of what is “necessary” when it comes to the
projection of lethal force’ (Ian Kerr, quoted in Lin, 2016)?
The Dallas PD’s use of explosives and their delivery-by-robot raises important ques-
tions about the use of lethal force and the role of technology in directing our moral delib-
erations.1 In particular, it should give us pause to consider how we relate to technology
in our ethical thinking and, more concretely, how new technologies of killing might
shape how we think through moral choices. This question of technology is relevant not
only to the future direction of police work but also to warfare. While the use of remotely
administered lethal force in a police context is seen as redrawing the boundaries of law
enforcement in possibly significant ways, the practice has become normalised in warfare
through the use of weaponised unmanned systems, more commonly known as ‘drones’.
The worry – in policing as in war – is that remotely administered technological force
opens up a vacuum in law and policy, which ‘has the potential to lead to overuse of
machines that can be used to injure or kill suspects’ (Sullivan et al., 2016). As societies
and militaries become ever more technologically sophisticated, enabling (lethal) force to
be deployed from greater distances and with fewer risks to the parties in possession of
such technologies, it becomes necessary to examine the technological condition that
underwrites contemporary moral dilemmas in thinking through the permissibility of
harm in war. It is to this task that I turn my attention in this article. Against a background
of ongoing debates about the ethical implications of using armed drones to kill terrorist
suspects and the moral challenges autonomous lethal weapons systems raise, I examine
whether technologically sophisticated weapons systems direct or impact our moral delib-
erations. My core argument is that advanced technologies of violence influence moral
decision-making in ways that are significant to just war thinking but often remain
neglected. This, in turn, risks producing vacuums for moral decision-making about
inflicting harm in war.
I begin by spotlighting the moral significance of technology in discussions about the
permissibility of violence, arguing that technology shapes our moral decision-making in
ways that are significant to just war thinking but rarely acknowledged. To do this, I con-
sider both the hypothetical cases advanced in (revisionist) just war theory and the real-
world cases presented by technologically mediated conflict today. More specifically, I
draw attention to how the seemingly unproblematic presence of different tools of harm
in analytical case scenarios can shape intuitions and perceptions about when it is morally
permissible to harm others. I then discuss how the use of drones for lethal strikes in the
war on terrorism recasts the elasticity of criteria such as
‘necessity’ and ‘imminence’ in ways that alter moral deliberations, shifting focus towards
moral calculation and technological risk management, for which technology provides a
