Algorithmic justice: Algorithms and big data in criminal justice settings

Aleš Završnik
University of Ljubljana, Slovenia

European Journal of Criminology, 2021, Vol. 18(5) 623–642
Published: 1 September 2021
DOI: 10.1177/1477370819876762
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
journals.sagepub.com/home/euc

Corresponding author: Aleš Završnik, Institute of Criminology at the Faculty of Law, University of Ljubljana, Poljanski nasip 2, Ljubljana, SI-1000, Slovenia. Email: ales.zavrsnik@pf.uni-lj.si
Abstract
The article focuses on big data, algorithmic analytics and machine learning in criminal justice
settings, where mathematics is offering a new language for understanding and responding to crime.
It shows how these new tools are blurring contemporary regulatory boundaries, undercutting the
safeguards built into regulatory regimes, and abolishing subjectivity and case-specific narratives.
After presenting the context for ‘algorithmic justice’ and existing research, the article shows
how specific uses of big data and algorithms change knowledge production regarding crime. It
then examines how a specific understanding of crime and acting upon such knowledge violates
established criminal procedure rules. It concludes with a discussion of the socio-political context
of algorithmic justice.
Keywords
Algorithm, bias, criminal justice, machine learning, sentencing
Algorithmic governance: The context
Our world runs on big data, algorithms and artificial intelligence (AI), as social networks
suggest whom to befriend, algorithms trade our stocks, and even romance is no longer a
statistics-free zone (Webb, 2013). In fact, automated decision-making processes already
influence how decisions are made in banking (O’Hara and Mason, 2012), payment sectors (Gefferie, 2018) and the financial industry (McGee, 2016), as well as in insurance
(Ambasna-Jones, 2015; Meek, 2015), education (Ekowo and Palmer, 2016; Selingo,
2017) and employment (Cohen et al., 2015; O’Neil, 2016). Applied to social platforms,
they have contributed to the distortion of democratic processes, such as general
elections, with ‘political contagion’, similar to the ‘emotional contagion’ of the infamous
Facebook experiment (Kramer et al., 2014) involving hundreds of millions of individuals
for various political ends, as revealed by the Cambridge Analytica whistle-blowers in
2018 (Lewis and Hilder, 2018).
This trend is a part of ‘algorithmic governmentality’ (Rouvroy and Berns, 2013) and
the increased influence of mathematics on all spheres of our lives (O’Neil, 2016). It is a
part of ‘solutionism’, whereby tech companies offer technical solutions to all social problems, including crime (Morozov, 2013). Despite the strong influence of mathematics and
statistical modelling on all spheres of life, the question of ‘what, then, do we talk about
when we talk about “governing algorithms”?’ (Barocas et al., 2013) remains largely unanswered in the criminal justice domain. How does the justice sector reflect the trend of the
‘algorithmization’ of society and what are the risks and perils of this? The importance of
this issue has triggered an emerging new field of enquiry, epitomized by critical algorithm
studies. They analyse algorithmic biases, filter bubbles and other aspects of how society
is affected by algorithms. Discrimination in social service programmes (Eubanks, 2018)
and discrimination in search engines, where they have been called ‘algorithms of oppression’ (Noble, 2018), are but some examples of the concerns that algorithms, big data and
machine learning trigger in the social realm. Predictive policing and algorithmic justice
are part of the larger shift towards ‘algorithmic governance’.
Big data, coupled with algorithms and machine learning, has become a central theme
of intelligence, security, defence, anti-terrorist and crime policy efforts, as computers
help the military find its targets and intelligence agencies justify carrying out massive
pre-emptive surveillance of public telecommunications networks. Several actors in the
‘crime and security domain’ are using the new tools:1 (1) intelligence agencies (see, for
example, the judgment of the European Court of Human Rights in Zakharov v. Russia in
2015, No. 47143/06, or the revelations of Edward Snowden in 2013); (2) law enforcement agencies, which are increasingly using crime prediction software such as PredPol
(Santa Cruz, California), HunchLab (Philadelphia), Precobs (Zürich, Munich) and
Maprevelation (France) (see Egbert, 2018; Ferguson, 2017; Wilson, 2018); and (3) criminal courts and probation commissions (see Harcourt, 2015a; Kehl and Kessler, 2017).
The use of big data and algorithms for intelligence agencies’ ‘dragnet’ investigations
raised considerable concern after Snowden’s revelations regarding the National Security
Agency’s access to the content and traffic data of Internet users. Predictive policing has
attracted an equal level of concern among scholars, who have addressed ‘the rise of predictive policing’ (Ferguson, 2017) and the ‘algorithmic patrol’ (Wilson, 2018) as the new
predominant method of policing, which thus impacts other methods of policing. Country-specific studies of predictive policing exist in Germany (Egbert, 2018), France (Polloni,
2015), Switzerland (Aebi, 2015) and the UK (Stanier, 2016). A common concern is predictive policing’s allure of objectivity, set against the creative role police still have in producing the inputs for automated calculations of future crime: ‘Their choices, priorities, and even
omissions become the inputs algorithms use to forecast crime’ (Joh, 2017a). Scholars
have shown how public concerns are superseded by the market-oriented motivations and
aspirations of companies that produce the new tools (Joh, 2017b). Human rights advocates have raised numerous concerns regarding predictive policing (see Robinson and
Koepke, 2016). For instance, a coalition of 17 civil rights organizations has listed several
