Why the algorithmic recruiter discriminates: The causal challenges of data-driven discrimination
| Published date | 01 June 2024 |
| DOI | http://doi.org/10.1177/1023263X241248474 |
| Author | Christine Carter |
Christine Carter*
Abstract
Automated decision-making systems are commonly used by human resources departments to automate recruitment decisions. Most automated decision-making systems utilize machine learning to screen, assess, and give recommendations on candidates. Algorithmic bias and prejudice are common side-effects of these technologies that result in data-driven discrimination. However, proof of this is often unavailable due to the statistical complexities and operational opacities of machine learning, which interfere with the ability of complainants to meet the requisite causal requirements of the EU equality directives. In direct discrimination, the use of machine learning prevents complainants from demonstrating a prima facie case. In indirect discrimination, the problems mainly manifest once the burden has shifted to the respondent, where causation operates as a quasi-defence by reference to objectively justified factors unrelated to the discrimination. This paper argues that causation must be understood as an informational challenge that can be addressed in three ways: first, through the fundamental rights lens of the EU Charter of Fundamental Rights; second, through data protection measures such as the General Data Protection Regulation; and third, through the future liabilities that may arise under incoming legislation such as the Artificial Intelligence Act and the Artificial Intelligence Liability Directive proposal.
Keywords
Non-discrimination law, machine learning, automated decision-making, recruitment
*
University of Cambridge, Cambridge, UK
Corresponding author:
Christine Carter, Faculty of Law, University of Cambridge, The David Williams Building, 10 West Rd, Cambridge CB3 9DZ, UK.
Email: mcc90@cam.ac.uk
Article
Maastricht Journal of European and Comparative Law, 2024, Vol. 31(3) 333–359
© The Author(s) 2024
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1023263X241248474
maastrichtjournal.sagepub.com
1. Introduction
Alongside the rise of densely interconnected information-based societies and platform economies, the term ‘algorithmic management’ has aptly arisen to describe Artificial Intelligence (‘AI’) and machine learning (‘ML’) as a technological infrastructure that shapes the nature of work and the status of workers in the modern workplace.1 Within this infrastructure, automated decision-making systems (‘ADMs’) assume a central role in human resources (‘HR’) and are standard in most recruitment practices.2 Many ADMs include advanced ML algorithms that operate on various predictive and inferential analytics, natural language and audio-visual processing systems.3 These screen candidates through asynchronous video interviews or gamified assessments,4 as well as compile profiles that forecast the behaviour and success chances of a candidate at a given company, and in some cases make recommendations to HR on whether to accept or reject the application.5
Bias, discrimination, inequality and unfair treatment are common side effects of recruitment ADMs.6 Female candidates seeking technical roles at Amazon suffered such a fate when the ADMs penalized résumés containing words with female connotations, because the algorithmic model was trained on past résumés submitted to Amazon by predominantly male candidates. One year after launch, the model was abandoned for systematically discriminating against female candidates.7 Recently, in Filcams VGIL Bologna and others v. Deliveroo Italia SRL, Italian trade unions sued Deliveroo for using ADMs that scored drivers on their participation and reliability in the company’s booking system.8 Drivers who were less active, or had previously cancelled bookings, were ranked lower and were less likely to be allocated work. The Bologna Labour Tribunal found that the ADMs failed to consider the reasons behind the drivers’ non-participation or cancellation. Without consideration of these reasons, it was likely that protected groups would suffer a particular disadvantage, which the court found to be indirectly discriminatory in a first-of-its-kind judgment.
Concerns about the general impact of AI on work, and particularly on the ability of individuals to access labour, have consequently surged in the global legal community. Data-driven discrimination is one of the main challenges that has drawn the European Commission to question the technological viability of the protections against discrimination available to workers under the EU equality directives.9 Similar sentiments are expressed in the United States, where researchers metaphorically describe the regulation of these matters within US anti-discrimination law as the act of forcibly
1. International Labour Organisation, ‘The Algorithmic Management of Work and its Implications in Different Contexts’ (2022), p. 5–7.
2. Trade Union Congress, ‘Technology Managing People: The Worker Experience Report’ (2020), p. 17–36.
3. Forbes, ‘Artificial Intelligence in Talent Acquisition: How Machine Learning is Influencing Recruitment’, 10 October 2023.
4. P. Tambe, P. Cappelli and V. Yakubovich, ‘Artificial Intelligence in Human Resources Management: Challenges and a Path Forward’, 61 California Management Review (2019).
5. Article 29 Data Protection Working Party, ‘Opinion 03/2013 on Purpose Limitation’, 00569/13/EN, WP203 (2013), p. 47.
6. Centre for Data Ethics and Innovation, ‘Review into Bias in Algorithmic Decision-making’ (2020), p. 39–48; Trade Union Congress, ‘Technology Managing People: The Worker Experience Report’ (2020), p. 17–36, 49.
7. Reuters, ‘Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women’, 11 October 2018.
8. Filcams VGIL Bologna and others v. Deliveroo Italia SRL, Court of Bologna, RG 2949/2019, ord. 12.31.2020.
9. European Commission, ‘Algorithmic Discrimination in Europe: Challenges and Opportunities for Gender Equality and Non-discrimination Law’ (2021), p. 55–58.