Around the Black Box: Applying the Carltona Principle
to Challenge Machine Learning Algorithms in Public
Sector Decision-Making
Elena Casale*
ABSTRACT
For the first time, important public sector decisions are being taken in the absence of an accountable and identifiable human being. Instead, they are increasingly outsourced to machine learning algorithms (MLAs) to cut costs, save time, and, in theory, improve the quality of decisions made. However, MLAs also pose new risks to fair and legitimate decision-making, such as bias and rigidity. These risks are often obfuscated by 'intrinsic opacity': the complex interplay between extremely large datasets and code which makes it impossible to trace the decision pathway of an MLA. This 'black box problem' frustrates the review of a public sector decision made by an MLA, as the court is unable to trace the decision-making process and so determine its lawfulness in judicial review. In such cases, it is proposed that the principles of non-devolution surrounding the Carltona principle - the doctrine that allows departmental officials to exercise powers vested in a minister - offer a promising way of 'getting around' the issue of intrinsic opacity. By conceptualising the outsourcing of a decision to an MLA as an act of devolution, the law can effectively regulate the slippage of democratic accountability that the use of an MLA necessarily entails.
* LLM (BPP) ‘21. BA in English (University of Oxford) ‘20. The author warmly thanks
Emma Williams for her guidance and enthusiasm whilst supervising this work.
INTRODUCTION
The daunting consequences of autonomous technology, long apprehended in science fiction, will soon become a routine consideration in judicial review as machine learning algorithms (MLAs) transform the way decisions are made in the UK public sector. The use of MLAs has already been documented in several departments (benefit calculation assessments,1 child welfare services,2 and policing3), and 2020 saw the first challenge to an MLA in court in the Bridges case.4 At their best, MLAs can be an innovative tool for more efficient public service: cutting costs, raising standards, saving time, and improving the quality of decisions made. However, their use poses new challenges for justice in the form of bias, faulty cross-correlation, inaccuracy, rigidity, automation bias and opacity. When MLAs start to control some of the most important administrative decisions in our lives - 'which neighbourhoods get policed, which families attain much needed resources, who is short-listed for employment and who is investigated for fraud'5 - it is vital that existing structures of judicial review can adequately hold this new form of power to account.
Though even the most advanced MLAs are still far from a state of
complete autonomy,6 one of their features does allow them to operate beyond
human oversight: intrinsic opacity.7 Commonly referred to as the ‘black box’
1 Sarah Marsh, ‘One in Three Councils Using Algorithms to Make Welfare Decisions’ The
Guardian (London, 15 October 2019)
<www.theguardian.com/society/2019/oct/15/councils-using-algorithms-make-welfare-
decisions-benefits> accessed 13 October 2021.
2 Joanna Redden, Lina Dencik and Harry Warne, ‘Datafied Child Welfare Services:
Unpacking Politics, Economics and Power’ (2020) 41 Policy Studies 507, 509.
3 Marion Oswald, ‘Algorithmic Risk Assessment Policing Models: Lessons from the
Durham HART Model and 'Experimental' Proportionality’ (2018) 27 Information &
Communications Technology Law 223, 223.
4 R (Bridges) v Chief Constable of South Wales [2019] EWHC 2341.
5 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the
Poor (1st edn, St. Martin’s Press, 2018) 8.
6 Tim W Dornis, 'Of "Authorless Works" and "Inventions without Inventor": The Muddy Waters of "AI Autonomy" in Intellectual Property Doctrine' (2021) 43(9) European Intellectual Property Review 570.
7 Burrell has classified the three types of algorithmic opacity as intentional, illiterate, and
intrinsic. Intentional opacity is the obfuscation of whether an algorithm has been used in
the first place, and illiterate opacity refers to the fact that most people, including public