“It would be pretty immoral to choose a random algorithm”: opening up algorithmic interpretability and transparency

Journal of Information, Communication and Ethics in Society, Vol. 17 No. 2, 2019, pp. 210-228
Received 30 November 2018; revised 25 January 2019; accepted 29 January 2019; published 13 May 2019
DOI: https://doi.org/10.1108/JICES-11-2018-0092
Authors: Helena Webb, Menisha Patel, Michael Rovatsos, Alan Davoust, Sofia Ceppi, Ansgar Koene, Liz Dowthwaite, Virginia Portillo, Marina Jirotka and Monica Cano
(Author affiliations can be found at the end of the article)
Abstract
Purpose – The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms more fair, they should be made more transparent, but exactly how this can be achieved remains unclear.
Design/methodology/approach – An empirical study was conducted to begin unpacking issues around algorithmic interpretability and transparency. The study involved discussion-based experiments centred around a limited resource allocation scenario which required participants to select their most and least preferred algorithms in a particular context. In addition to collecting quantitative data about preferences, qualitative data captured participants' expressed reasoning behind their selections.
Findings – Even when provided with the same information about the scenario, participants made different algorithm preference selections and rationalised their selections differently. The study results revealed diversity in participant responses but consistency in the emphasis they placed on normative concerns and the importance of context when accounting for their selections. The issues raised by participants as important to their selections resonate closely with values that have come to the fore in current debates over algorithm prevalence.
Originality/value – This work developed a novel empirical approach that demonstrates the value in pursuing algorithmic interpretability and transparency while also highlighting the complexities surrounding their accomplishment.
Keywords: Algorithms, Transparency, Interpretability
Paper type: Research paper

Acknowledgements: The authors would like to acknowledge the contribution of all research participants who took part in this study. The research undertaken in this study formed part of the EPSRC-funded study UnBias: Emancipating users against algorithmic biases for a trusted digital economy. EPSRC reference EP/N02785X/1.
1. Introduction
In recent years, a significant amount of public concern has emerged over the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives (Floridi and Sanders, 2004; Koene et al., 2016; Binns, 2018). A number of high-profile cases have suggested that algorithms may inadvertently influence public opinion or produce
outcomes that systematically disadvantage certain groups in society. Key examples include
controversies over the roles played by bots and algorithms in the 2016 US presidential election
(Howard et al., 2018) and the placement of online advertisements for criminal background
checks alongside searches for African-American sounding names (Sweeney, 2013).
What perpetuates these concerns and adds to their problematic nature is the lack of
transparency surrounding the development of these algorithmic systems and their use
(Pasquale, 2015). Algorithms developed by large corporations are widely used and yet
proprietary, with their inner workings remaining hidden from direct scrutiny. In addition,
because of the complexities of the problems they work on, many of the algorithms that now provide important services are inherently complex in their formulation. As a result, they are often only fully understandable to those who have specific technical knowledge and interest in them. This means that most of us are largely uninformed users, experiencing algorithms
on a daily basis and yet unaware either of the issues or of how to overcome them. Where
there is a lack of transparency there is typically also a lack of accountability (Koene et al.,
2017;Oswald, 2018). The use of algorithmic risk assessment scores to aid sentencing in US
criminal courts has been accompanied by a number of controversies; one concerned the
rejection of an appeal from a defendant to scrutinise the process through which his risk
score and subsequent sentence had been produced (SCOTUSblog, 2017). It was ruled that
knowing the outcome of the score was sufficient and that the defendant and his legal team did not have rights to access the proprietary risk assessment instrument itself.
The research reported in this paper is motivated by the desire to open up these algorithmic
processes to make them more interpretable, transparent and subject to oversight. Some have
argued for a “society in the loop” Artificial Intelligence governance framework, where societal
values would be embedded into algorithmic decision-making (Rahwan, 2018), comparable to the
ways in which human judgement (from individuals) is used to train or control machine learning
systems. Similarly, responsible research and innovation approaches (Owen, Macnaghten and
Stilgoe, 2012) advocate opening processes of innovation to include voices from across society.
These perspectives highlight the need to elicit a collective judgement regarding particular
algorithmic processes. Precisely how this can be achieved is challenging. How can we open up the
“black box” of algorithms to make them available for scrutiny by different groups with varying levels of technical literacy? On what basis should algorithms be judged? How does our judgement
balance the interests of the different stakeholders affected by these processes and their outcomes?
This paper reports on empirical work to elicit the opinions of research participants
regarding an algorithm to be used in a specific context. Presented with a limited resource allocation problem and several possible algorithms to solve it, participants were asked to choose their preferred and least preferred algorithms for the task. They were also given the
opportunity to discuss these choices. Analysis of their choices and discussions shows that
the participants made different preference selections but consistently invoked normative
concerns when accounting for their choices. They also attended to their selections as strongly dependent on the context. This discussion-based format provided a highly useful approach to begin opening up algorithmic interpretability and transparency.
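To make the kind of choice participants faced more concrete, the short sketch below contrasts two simple allocation rules for a limited resource scenario: a purely random lottery and a score-based ranking. This is an illustrative sketch only; the paper's actual experimental materials and candidate algorithms are not reproduced in this excerpt, and the function names, scoring scheme and Python framing are assumptions introduced here for clarity.

# Hypothetical illustration: two contrasting rules for allocating a limited
# number of slots among applicants. These are NOT the algorithms used in the
# study; they simply show the kind of alternatives participants might compare.
import random
from typing import Dict, List

def lottery_allocation(applicants: List[str], slots: int, seed: int = 0) -> List[str]:
    """Allocate slots purely at random (a 'lottery' rule)."""
    rng = random.Random(seed)
    return rng.sample(applicants, min(slots, len(applicants)))

def score_based_allocation(scores: Dict[str, float], slots: int) -> List[str]:
    """Allocate slots to the highest-scoring applicants (a 'merit' rule)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:slots]

if __name__ == "__main__":
    applicants = ["A", "B", "C", "D", "E"]
    scores = {"A": 0.9, "B": 0.4, "C": 0.75, "D": 0.6, "E": 0.3}
    print("Lottery rule:", lottery_allocation(applicants, slots=2))
    print("Score-based rule:", score_based_allocation(scores, slots=2))

The contrast matters for the discussion that follows: a lottery treats all applicants identically but ignores need or merit, while a score-based rule rewards measurable criteria but inherits any bias in how the scores are produced.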
2. Background: exploring algorithmic transparency
It may be that to make algorithms more fair in their contemporary use, they should be made
more transparent. So, how would this be achieved? Engendering transparency is no simple
feat and many complexities exist. The notion of transparency itself has been explored
extensively, with both the positive and more problematic sides in making “the invisible more visible” revealed (for example, see Strathern, 2000). More specifically, in regard to
transparency and algorithms, there exists a tension between the proprietary nature of