Proceduralizing control and discretion: Human oversight in artificial intelligence policy

Author: Riikka Koulu
Published: 1 December 2020
DOI: http://doi.org/10.1177/1023263X20978649
Riikka Koulu*
Abstract
This article examines human oversight in EU policy for controlling algorithmic systems in
automated legal decision making. Despite the shortcomings of human control over complex
technical systems, human oversight is advocated as a solution against the risks of increasing reliance
on algorithmic tools. For law, human oversight provides an attractive, easily implementable and
observable procedural safeguard. However, without awareness of its inherent limitations, human
oversight is in danger of becoming a value in itself, an empty procedural shell used as a stand-in
justification for algorithmization but failing to provide protection for fundamental rights. By
complementing socio-legal analysis with Science and Technology Studies, critical algorithm studies,
organization studies and human-computer interaction research, the author explores the importance of keeping the human in the loop and asks what the human element at the core of legal
decision making is. Through algorithmization it is made visible how law conceptualises decision
making through human actors, personalises legal decision making through the decision-maker’s
discretionary power that provides proportionality and common sense, prevents gross miscarriages
of justice and establishes the human encounter deemed essential for the feeling of being heard. The
analysis demonstrates the necessary human element embedded in legal decision making, against
which the meaningfulness of human oversight needs to be examined.
Keywords
Algorithmic decision making, AI ethics, AI regulation, automation, human oversight, EU policy
* University of Helsinki, Helsinki, Finland
Corresponding author: Riikka Koulu, University of Helsinki Legal Tech Lab, Faculties of Social Sciences and Law, University of Helsinki. E-mail: riikka.koulu@helsinki.fi
Maastricht Journal of European and Comparative Law, 2020, Vol. 27(6), p. 720–735. © The Author(s) 2020.

Introduction
Following the recent advancements in artificial intelligence (AI), algorithmic decision making (ADM) systems are being increasingly deployed to support or completely automate legal decision making across the public domain, including courts and public administration. Defined as encoded
procedures for solving problems by transforming input data into a desired output and producing recommendations on this basis,1 algorithmic systems are said to contribute to the 'algorithmisation' of governance, a distinct form of social ordering that becomes entwined with autonomous software.2
A range of ADM systems are used across our societies to facilitate or automate decision
making, examples ranging from online activities such as curation of search engine results, targeted
advertising and content moderation to organisational processes such as recruitment decisions,
managerial surveillance and resource allocation.
Algorithmic systems also increasingly contribute to decisions in public administration: whether a person is entitled to a social benefit, whether a family will be in need of child protection services, or whether an immigrant is granted refugee status or citizenship. Great hopes for AI deployment in the judiciary have also been expressed.
The concern for fundamental rights has created a global push towards AI ethics, producing a plethora of ethical guidelines meant to limit the risks and negative consequences associated with the algorithmisation of society.3 Sometimes framed as 'ethics washing', the instruments have been criticised for their non-binding nature, blurry scope of application and lack of clear implementation guidelines for programmers and administrators of justice that could be implemented easily through checklists, all of which contributes to their limited ability to regulate AI systems.4 However, the problem representations as well as the solutions proposed are bound to influence the emergent hard-law approaches as the juridification of AI regulation proceeds.5
Currently, human oversight is advocated by a range of actors as a focal ethical principle for AI development and deployment. For example, the EU Commission's Communication in 2019 portrayed human agency and oversight as the first of seven key requirements AI applications must follow to be considered trustworthy.6 The risks and challenges hoped to be addressed by human oversight include dangers to human autonomy, lack of transparency and opaque algorithmic models, privacy and data protection issues, as well as discrimination.7 Similarly, the ethical guidelines developed by the Council of Europe's European Commission for the Efficiency of Justice
1. T. Gillespie, 'The Relevance of Algorithms', in T. Gillespie, P. J. Boczkowski and K. A. Foot (eds.), Media Technologies: Essays on Communication, Materiality, and Society (MIT Press, 2014), p. 167.
2. A. Aneesh, 'Global Labor: Algocratic Modes of Organization', 27 Sociological Theory (2009), p. 347; K. Yeung and M. Lodge (eds.), Algorithmic Regulation (Oxford University Press, 2019).
3. The German non-profit organisation AlgorithmWatch maintains an online global inventory of AI ethics guidelines, listing over 80 documents at the time of writing in April 2020. See 'AI Ethics Guidelines Global Inventory', AlgorithmWatch (2020), https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/.
4. See e.g. T. Hagendorff, 'The Ethics of AI Ethics: An Evaluation of Guidelines', 30 Minds and Machines (2020), https://arxiv.org/abs/1903.03425; B. Mittelstadt et al., 'The Ethics of Algorithms: Mapping the Debate', 3 Big Data & Society (2016), p. 1; D. Greene, A. L. Hoffmann and L. Stark, 'Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning', Proceedings of the 52nd Hawaii International Conference on System Sciences (2019), http://hdl.handle.net/10125/59651.
5. It should be noted that for the time being the regulatory landscape regarding the use of ADM systems in society at large or in the legal domain remains unclear, although the urgent need for socio-legal research is widely acknowledged. See K. Yeung and M. Lodge (eds.), Algorithmic Regulation (Oxford University Press, 2019); J. Cohen, Between Truth and Power (Oxford University Press, 2019).
6. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Building Trust in Human-Centric Artificial Intelligence, COM/2019/168, p. 3.
7. See e.g. R. Koulu, 'Human control over automation: AI ethics and EU policy', 12 European Journal of Legal Studies (2020), p. 9.