Detecting racial bias in algorithms and machine learning
Pages | 252-260 |
Published date | 13 August 2018 |
DOI | https://doi.org/10.1108/JICES-06-2018-0056 |
Author | Nicol Turner Lee |
Subject Matter | Information & knowledge management, Information management & governance, Information & communications technology |
Detecting racial bias in algorithms
and machine learning
Nicol Turner Lee
Center for Technology Innovation, Brookings Institution, Washington,
District of Columbia, USA
Abstract
Purpose – The online economy has not resolved the issue of racial bias in its applications. While algorithms are procedures that facilitate automated decision-making, or a sequence of unambiguous instructions, bias is a byproduct of these computations, bringing harm to historically disadvantaged populations. This paper argues that algorithmic biases explicitly and implicitly harm racial groups and lead to forms of discrimination. Relying upon sociological and technical research, the paper offers commentary on the need for more workplace diversity within high-tech industries and public policies that can detect or reduce the likelihood of racial bias in algorithmic design and execution.
Design/methodology/approach – The paper shares examples in the USA where algorithmic biases have been reported and the strategies for explaining and addressing them.
Findings – The findings of the paper suggest that explicit racial bias in algorithms can be mitigated by existing laws, including those governing housing, employment, and the extension of credit. Implicit, or unconscious, biases are harder to redress without more diverse workplaces and public policies that take an approach to bias detection and mitigation.
Research limitations/implications – The major implication of this research is the need for further study. Increasing the scholarly research in this area will be a major contribution to understanding how emerging technologies create disparate and unfair treatment for certain populations.
Practical implications – The practical implications of the work point to areas within industries and the government that can tackle the question of algorithmic bias, fairness and accountability, especially as it affects African-Americans.
Social implications – The social implications are that emerging technologies are not devoid of societal influences that constantly define positions of power, values, and norms.
Originality/value – The paper adds to a scarcity of existing research, especially in the area that intersects race and algorithmic development.
Keywords Artificial intelligence, Communication, Advertising, Computer ethics, Civil society,
Civil race relations, Race and political rights
Paper type Research paper
Introduction
The online economy has not resolved the issue of racial bias in its applications. In 2013, online search results for "black-sounding" names were more likely to link arrest records with profiles, even when false (Lee, 2013). Two years later, Google apologized for an algorithm that automatically tagged and labeled two African-Americans as "gorillas" after an innocuous online word search (Kasperkevic, 2015). The online photo-shopping application, FaceApp, was later found to be lightening the darker skin tones of African-Americans because European faces dominated the training data, thereby defining the standard of beauty for the algorithm (Morse, 2017).
Algorithms are procedures that facilitate automated problem-solving, or a sequence of unambiguous instructions (C.T., 2017). In their controversies, Google explained their biases
Received 22 June 2018
Revised 22 June 2018
Accepted 22 June 2018
Journal of Information, Communication and Ethics in Society
Vol. 16 No. 3, 2018, pp. 252-260
© Emerald Publishing Limited
ISSN 1477-996X