The Rule of Law and Automation of Government Decision-Making

Monika Zalnieriute, Lyria Bennett Moses and George Williams

Monika Zalnieriute is a Postdoctoral Research Fellow, Allens Hub for Technology, Law and Innovation, Faculty of Law, UNSW Sydney; Lyria Bennett Moses is Director, Allens Hub for Technology, Law and Innovation, Faculty of Law, UNSW Sydney; George Williams is Dean, Anthony Mason Professor and Scientia Professor, Faculty of Law, UNSW Sydney, and a Barrister, New South Wales Bar. The authors thank Gabrielle Appleby and the anonymous referees for their comments on an earlier draft, and Adam Yu and Leah Grolman for their research assistance.

(2019) 82(3) Modern Law Review 425–455
DOI: 10.1111/1468-2230.12412
Published: 1 May 2019
Governments around the world are deploying automation tools in making decisions that affect
rights and entitlements. The interests affected are very broad, ranging from time spent in
detention to the receipt of social security benefits. This article focusses on the impact on rule
of law values of automation using: (1) pre-programmed rules (for example, expert systems);
and (2) predictive inferencing, whereby rules are derived from historical data (such as by applying
supervised machine learning). The article examines the use of these systems across a range of
nations. It explores the tension between the rule of law and rapid technological change and
concludes with observations on how the automation of government decision-making can both
enhance and detract from rule of law values.
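
The contrast between these two forms of automation can be made concrete with a short sketch. The following Python fragment is illustrative only: it uses a hypothetical benefits-eligibility decision, with invented thresholds and data rather than any system the article discusses. The first approach encodes its rule explicitly in advance; the second derives a rule from historic outcomes.

```python
# A minimal, purely illustrative sketch (hypothetical eligibility criteria and
# data; not any system discussed in the article).

from sklearn.linear_model import LogisticRegression

# (1) Pre-programmed rules: the decision logic is authored in advance, so it
# can be read, audited and traced back to the criteria it encodes.
def eligible_by_rule(income: float, dependants: int) -> bool:
    # Hypothetical thresholds standing in for statutory criteria.
    return income < 30_000 or (dependants >= 2 and income < 45_000)

# (2) Predictive inferencing: the "rule" is whatever pattern the model extracts
# from historic decisions, so its logic lives in learned weights, not in text.
historic_cases = [[25_000, 1], [52_000, 0], [28_000, 3], [61_000, 0], [33_000, 2]]
past_outcomes = [1, 0, 1, 0, 1]  # 1 = benefit granted, 0 = refused

model = LogisticRegression().fit(historic_cases, past_outcomes)

print(eligible_by_rule(29_000, 2))      # True: follows the written rule
print(model.predict([[29_000, 2]])[0])  # outcome inferred from past cases
```

The difference matters for what follows: the pre-programmed rule can be read and checked against the law on its face, whereas the inferred rule must be reconstructed from the trained model.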
INTRODUCTION
Automation promises to improve a wide range of processes. The introduction of controlled procedures and systems in place of human labour can enhance efficiency as well as certainty and consistency. Given this, it is unsurprising that automation is being embraced by the private sector in fields including pharmaceuticals, retail, banking and transport. Automation also promises benefits to government. It has the potential to make governments – and even whole democratic systems – more accurate, more efficient and fairer. As a result, several nations have become enthusiastic adopters of automation in fields such as welfare allocation and the criminal justice system. Though not a recent development, the use of automated systems to support or replace human decision-making in government is growing.
The rapid deployment of automation is attracting conflicting narratives. On the one hand, the transformative potential of technologies such as machine learning has been lauded for its economic benefits. On the other, it has become customary to acknowledge the risks that these technologies pose to rights such as
privacy1 and equality.2 The question of how automation interacts with foundational legal concepts and norms is also attracting attention among theorists working at the intersection of legal theory, technology and philosophy.3 These scholars examine possibilities such as the potential of automation and artificial intelligence to displace traditional legal concerns with prediction,4 and indeed to challenge the normative structure underlying our understanding of law.5 Others have interrogated the relationship between legal values and data-driven
1 For automation, data protection and privacy, see, for example, A. Roig, ‘Safeguards for the Right Not to be Subject to a Decision Based Solely on Automated Processing (Article 22 GDPR)’ (2017) 8 European Journal of Law and Technology 1; S. Wachter, B. Mittelstadt and L. Floridi, ‘Why a Right to Explanation of Automated Decision-Making does not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76; S. Wachter, B. Mittelstadt and C. Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2017) 31 Harvard Journal of Law & Technology 841; I. Mendoza and L. A. Bygrave, ‘The Right Not to Be Subject to Automated Decisions Based on Profiling’ in T. Synodinou et al (eds), EU Internet Law: Regulation and Enforcement (Cham: Springer, 2017); G. Malgieri and G. Comandé, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 243; B. Goodman and S. Flaxman, ‘European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”’ (2017) 38 AI Magazine 50. See also UN Office of the High Commissioner for Human Rights (OHCHR), A Human Rights-Based Approach to Data: Leaving No One Behind in the 2030 Development Agenda (2016); United Nations Development Group, Big Data for Achievement of the 2030 Agenda: Data Privacy, Ethics and Protection – Guidance Note (2017) at https://undg.org/document/data-privacy-ethics-and-protection-guidance-note-on-big-data-for-achievement-of-the-2030-agenda/ (last accessed 27 November 2018).
2 For automation and equality, see, for example, S. Barocas and A. D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671; M. B. Zafar et al, ‘Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment’ in Proceedings of the 26th International Conference on World Wide Web (International World Wide Web Conferences Steering Committee, 2017) at https://doi.org/10.1145/3038912.3052660 (last accessed 10 September 2018); A. Chouldechova, ‘Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments’ (2017) 5 Big Data 153; S. Goel et al, ‘Combatting Police Discrimination in the Age of Big Data’ (2017) 20 New Criminal Law Review 181. See also ‘The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems’ 16 May 2018 at https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/ (last accessed 27 November 2018).
3 See, for example, the recent special issue ‘Artificial Intelligence, Technology, and the Law’ (2018) 68 supp 1 University of Toronto Law Journal 1, focused on legal theory, automation and technology beyond government decision-making. See also K. Yeung, ‘Algorithmic Regulation: A Critical Interrogation’ (2017) Regulation & Governance at https://doi.org/10.1111/rego.12158 (last accessed 10 September 2018); A. Rouvroy and B. Stiegler, ‘The Digital Regime of Truth: From the Algorithmic Governmentality to a New Rule of Law’ A. Nony and B. Dillet (tr), 2016, 3 La Deleuziana 6 at http://www.ladeleuziana.org/wp-content/uploads/2016/12/Rouvroy-Stiegler_eng.pdf (last accessed 10 September 2018); E. Benvenisti, ‘EJIL Foreword – Upholding Democracy Amid the Challenges of New Technology: What Role for the Law of Global Governance?’ (2018) 29 European Journal of International Law 9; M. Hildebrandt and B. Koops, ‘The Challenges of Ambient Law and Legal Protection in the Profiling Era’ (2010) 73 MLR 428.
4 F. Pasquale and G. Cashwell, ‘Prediction, Persuasion, and the Jurisprudence of Behaviourism’
(2018) 68 supp 1 University of Toronto Law Journal 63.
5 M. Hildebrandt, ‘Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics’ (2018) 68 supp 1 University of Toronto Law Journal 12; B. Sheppard, ‘Warming Up to Inscrutability: How Technology Could Challenge Our Concept of Law’ (2018) 68 supp 1 University of Toronto Law Journal 36, 37; M. Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Cheltenham: Edward Elgar, 2015).
