Deciding on Appropriate Use of Force: Human-machine Interaction in Weapons Systems and Emerging Norms

Published: 1 September 2019
Author: Hendrik Huelss
DOI: http://doi.org/10.1111/1758-5899.12692
Hendrik Huelss
University of Kent
Abstract
This article considers the role of norms in the debate on autonomous weapons systems (AWS). It argues that the academic and political discussion is largely dominated by considerations of how AWS relate to norms institutionalised in international law. While this debate on AWS has produced insights on legal and ethical norms and sounded out options for a possible regulation or ban, it neglects to investigate how complex human-machine interactions in weapons systems can set standards of appropriate use of force that are politically and normatively relevant but take place outside of formal, deliberative law-setting. While such procedural norms are already emerging in the practice of contemporary warfare, the increasing technological complexity of AI-driven weapons will add to their political-normative relevance. I argue that public deliberation about, and political oversight and accountability of, the use of force are at risk of being consumed and normalised by functional procedures and perceptions. This can have a profound impact on the future of remote warfare and security policy.
1. Autonomous weapons systems and the question of norms
The policy of extensive deployment of armed drones over the last 15 years, representing an era of remote warfare featuring long-standing interventions with only limited ground troop deployment, is approaching its next stage: the development of autonomous weapons systems (AWS). The political debate on their possible regulation and prohibition has attracted attention in recent years in the academic and political communities and even among the wider public. While states parties to the UN's Convention on Certain Conventional Weapons (UN-CCW) in Geneva have discussed the case of AWS since 2014, the broader political implications of these systems remain understudied.
The political and academic debates on AWS focus predominantly on how AWS challenge international law (Asaro, 2012; Grut, 2013; Kastan, 2013; Noone and Noone, 2015; Sehrawat, 2017) as well as ethics (Heyns, 2016; Johnson and Axinn, 2013; Leveringhaus, 2016; Sharkey, 2008). Both dimensions are interrelated: arguments for why AWS are legally problematic in terms of International Humanitarian Law (IHL) are also motivated by ethical concerns, such as human dignity and the question of whether machines should ultimately have the decision-making power to end human life. Yet the current debate clearly takes place within the margins of international law. Certainly, cases such as the Nuclear Test Ban Treaty, the Treaty on the Prohibition of Nuclear Weapons, the Mine Ban Treaty (Ottawa Convention), or the Protocol on Blinding Laser Weapons underline the importance of legal norms for defining when and how it is appropriate to use force. They also demonstrate the ability of the international community to have a significant impact on the trajectory of how force is used.
However, the international community has made slow progress in its consideration of what AWS are and do. This is widely criticised not only by NGOs, for example the Campaign to Stop Killer Robots, but also by the 28 UN-CCW states parties calling for a prohibition of AWS (Campaign To Stop Killer Robots, 2018). A main problem of the UN-CCW process is its inability to find a shared definition of autonomy as the crucial, qualifying feature of AWS in relation to human agency. Likewise, conceptualising 'meaningful human control' (MHC) (Crootof, 2016; Moyes, 2016; Roff and Moyes, 2016), a term currently gaining currency, proves to be very difficult, not least because control and autonomy are interrelated, multi-dimensional issues. The discussion is hence dominated by attempts to find common ground for formulating and institutionalising norms governing the use of force, by providing guidelines on when and how the use of force is appropriate if specific weapons technologies are used. In contrast to the contestation of drone warfare (Kaag and Kreps, 2014), technical aspects are therefore now at the centre of interest.
But the focus on the seeming uniqueness of a new generation of autonomous weapons, and the resulting problem of human control, overshadows the importance of (existing) practices of human-machine interaction in the use of force. The importance for the US military, for instance, of developing and testing artificial intelligence (AI) solutions funded by multi-billions of US$ in the 'AI Next' campaign (see