Tuning EU equality law to algorithmic discrimination: Three pathways to resilience

Raphaële Xenidis*
Published: 1 December 2020
DOI: 10.1177/1023263X20982173
Abstract
Algorithmic discrimination poses an increased risk to the legal principle of equality. Scholarly
accounts of this challenge are emerging in the context of EU equality law, but the question of the
resilience of the legal framework has not yet been addressed in depth. Exploring three central
incompatibilities between the conceptual map of EU equality law and algorithmic discrimination, this
article investigates how purposively revisiting selected conceptual and doctrinal tenets of EU non-
discrimination law offers pathways towards enhancing its effectiveness and resilience. First, I argue
that predictive analytics are likely to give rise to intersectional forms of discrimination, which
challenge the unidimensional understanding of discrimination prevalent in EU law. Second, I show
how proxy discrimination in the context of machine learning questions the grammar of EU non-
discrimination law. Finally, I address the risk that new patterns of systemic discrimination emerge in
the algorithmic society. Throughout the article, I show that looking at the margins of the conceptual
and doctrinal map of EU equality law offers several pathways to tackling algorithmic discrimination.
This exercise is particularly important with a view to securing a technology-neutral legal framework
robust enough to provide an effective remedy to algorithmic threats to fundamental rights.
Keywords
Algorithmic discrimination, non-discrimination law, equality, algorithms, machine learning, artificial intelligence, profiling, predictive analytics, European Union, legal resilience
* Lecturer in European Union Law at Edinburgh University, School of Law, Old College, South Bridge, Edinburgh EH8 9YL,
United Kingdom and Marie Curie Fellow at iCourts, Copenhagen University, Faculty of Law, Karen Blixens Plads 16, 2300
Copenhagen, Denmark.
Corresponding author:
Raphaële Xenidis, University of Copenhagen, Faculty of Law, Karen Blixens Plads 16, 2300 København, Denmark.
E-mail: rxenidis@ed.ac.uk
Maastricht Journal of European and Comparative Law, 2020, Vol. 27(6), p. 736–758
© The Author(s) 2020
1. Introduction
In 2019, researchers based in the US conducted a revealing experiment on optimization in online
advertising: they created job ads and asked Facebook to distribute them among users, targeting the
same audience. The results? In extreme cases, cashier positions in supermarkets ended up being
distributed to an 85% female audience, taxi driver positions to a 75% black audience and lumberjack positions to an audience that was 90% male and 72% white.[1] The discriminatory potential
of such distribution of information on professional opportunities is obvious. At the structural level,
such targeting not only perpetuates gender and racial segregation within the labour market by
ascribing stereotypical affinities to certain protected groups, but it also directly affects individuals
by shaping their own professional horizons through exposure to, or exclusion from, information.
Behind this and other types of online profiling are various types of Artificial Intelligence (AI), among which machine-learning algorithms feature most prominently. While AI doubtless offers increased opportunities in many areas, it is now well established that it also poses enhanced risks of discrimination.[2] 'Algorithmic bias' has been the subject of a growing strand of literature in various disciplines such as computer science, ethics, social sciences and law. In legal research, commentators have pondered the ability of various non-discrimination law frameworks to address data-driven discrimination.[3] Although a majority of those scholarly contributions focus on the US, gaps and weaknesses have also been described as problematic in the realm of EU non-discrimination law.[4]
A consensus is emerging that algorithmically induced discrimination poses challenges to non-discrimination law. Yet the extent to which the legal framework in place can adequately address these challenges and effectively redress the ensuing discriminatory harms is less clear. In view of the rapid evolution of technology, there is value in exploring the technology neutrality of the legal framework in place, that is, whether it is robust and flexible enough to tackle
1. M. Ali et al., ‘Discrimination Through Optimization: How Facebook’s ad Delivery Can Lead to Skewed Outcomes’,
Proceedings of the ACM on Human-Computer Interaction (2019), https://arxiv.org/pdf/1904.02095.pdf, p. 2.
2. See e.g. R. Allen and D. Masters, 'Artificial Intelligence: the right to protection from discrimination caused by algorithms, machine learning and automated decision-making', 20 ERA Forum (2020), p. 585; R. Allen and D. Masters, 'Regulating for an equal AI: A New Role for Equality Bodies – Meeting the new challenges to equality and non-discrimination from increased digitisation and the use of Artificial Intelligence', Equinet (2020), https://equineteurope.org/wp-content/uploads/2020/06/ai_report_digital.pdf; J. Gerards and R. Xenidis, Algorithmic Discrimination in Europe: Challenges and Opportunities for EU Gender Equality and Non-Discrimination Law (Publication Office of the European Union, 2020 (forthcoming)).
3. See e.g. T.Z. Zarsky, 'Understanding discrimination in the scored society', 89 Washington Law Review (2014); T.Z. Zarsky, 'An Analytic Challenge: Discrimination Theory in the Age of Predictive Analytics', 14 ISJLP (2017); S. Barocas and A.D. Selbst, 'Big Data's Disparate Impact', 104 California Law Review (2016); F.J. Zuiderveen Borgesius, 'Strengthening legal protection against discrimination by algorithms and artificial intelligence', The International Journal of Human Rights (2020); J. Kleinberg et al., 'Discrimination in the age of algorithms', NBER Working Paper No. 25548 (2019).
4. For a mapping, see R. Xenidis and L. Senden, 'EU Non-discrimination Law in the Era of Artificial Intelligence: Mapping the Challenges of Algorithmic Discrimination', in U. Bernitz et al. (eds.), General Principles of EU Law and the EU Digital Order (Wolters Kluwer, 2019); J. Gerards and R. Xenidis, Algorithmic Discrimination in Europe: Challenges and Opportunities for EU Gender Equality and Non-Discrimination Law (Publication Office of the European Union, 2020 (forthcoming)); P. Hacker, 'Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law', 55 Common Market Law Review (2018); F. Zuiderveen Borgesius, Discrimination, artificial intelligence, and algorithmic decision-making (Council of Europe, Directorate General of Democracy, 2018).