AI: from rational agents to socially responsible agents

Date: 13 May 2019
DOI: https://doi.org/10.1108/DPRG-08-2018-0049
Published date: 13 May 2019
Pages: 291-304
Author: Antonio Vetrò, Antonio Santangelo, Elena Beretta, Juan Carlos De Martin
Subject Matter: Information & knowledge management, Information management & governance, Information policy
Abstract
Purpose This paper aims to analyze the limitations of the mainstream definition of artificial intelligence (AI) as a rational agent, which currently drives the development of most AI systems. The authors advocate the need for a wider range of driving ethical principles for designing more socially responsible AI agents.
Design/methodology/approach The authors follow an experience-based line of reasoning by argument to identify the limitations of the mainstream definition of AI, which is based on the concept of rational agents that select, among their designed actions, those which produce the maximum expected utility in the environment in which they operate. The problem of biases in the data used by AI is taken as an example, and a small proof of concept with real datasets is provided.
Findings The authors observe that bias measurements on the datasets are sufficient to demonstrate potential risks of discrimination when using those data in AI rational agents. Starting from this example, the authors discuss other open issues connected to AI rational agents and provide a few general ethical principles derived from the White Paper "AI at the service of the citizen", recently published by AgID, the agency of the Italian Government which designs and monitors the evolution of the IT systems of the Public Administration.
Originality/value The paper contributes to the scientific debate on the governance and the ethics of AI with a critical analysis of the mainstream definition of AI.
Keywords Artificial intelligence, Data ethics, Digital technologies and society
Paper type Conceptual paper
1. What kind of rationality for artificial intelligence systems?
The expression "artificial intelligence" (AI) is gaining considerable attention from both the private and public sectors (Runkin, 2018; Roy, 2018). The hype is very high and, as often happens in such situations, all this attention has generated confusion, even among experts, who refer to AI when talking about very different things.
We refer to AI following the mainstream definition of Russell and Norvig (2010): it is "the study of designing and building intelligent agents" (p. 30), where an "agent" is "anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators" (p. 34). An intelligent agent "takes the best possible action in a situation" (p. 30), i.e. it is a rational agent: one which, for each possible percept sequence, is supposed to "select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has" (p. 37).
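This definition can be made concrete with a minimal sketch. The code below is illustrative only: the percepts, actions, outcome model and utilities are invented for a toy vacuum-cleaner scenario, not part of Russell and Norvig's formalization; what the sketch shows is the core selection rule, i.e. picking the action with the highest expected performance measure given the percept sequence.

```python
def expected_utility(action, percepts, outcome_model, utility):
    """Average the utility of each possible outcome of `action`,
    weighted by its probability given the percepts seen so far."""
    return sum(prob * utility(outcome)
               for outcome, prob in outcome_model(action, percepts))

def rational_agent(percepts, actions, outcome_model, utility):
    """A rational agent in the Russell-Norvig sense: select the
    action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(
        a, percepts, outcome_model, utility))

# Toy, invented example: a vacuum agent that has just perceived dirt.
def outcome_model(action, percepts):
    # Returns a list of (outcome, probability) pairs.
    if action == "suck" and percepts[-1] == "dirty":
        return [("clean", 0.9), ("dirty", 0.1)]
    return [(percepts[-1], 1.0)]  # other actions leave the state unchanged

utility = {"clean": 1.0, "dirty": 0.0}.get

best = rational_agent(["dirty"], ["suck", "move"], outcome_model, utility)
```

Here `best` is `"suck"`, because its expected utility (0.9) exceeds that of `"move"` (0.0). Every quantity the maximization depends on, including the designed action set and the performance measure, is fixed by the designer, which is precisely the point the rest of the paper questions.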
An advantage of this definition is that the few concepts above are the building blocks for
designing AI systems with scalable complexity, e.g. from a “simple” vocal translator to an
autonomous vehicle. However, this definition of AI is based on a very precise, and in a
sense narrow, vision of the concept of intelligence, which is bound to a particular type of
rationality. In fact, if the actions undertaken by an agent must always maximize a
Antonio Vetrò, Antonio Santangelo, Elena Beretta and Juan Carlos De Martin are all based at the Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy.
Received 31 August 2018
Revised 29 November 2018
Accepted 19 December 2018
The authors would like to thank Dr Eleonora Bassi for her precious advice on the legal aspects of the discussion section.
DOI 10.1108/DPRG-08-2018-0049, VOL. 21 NO. 3 2019, pp. 291-304, © Emerald Publishing Limited, ISSN 2398-5038, Digital Policy, Regulation and Governance