Editorial
Robo-liability: The European Union in search of the best way to deal with liability for damage caused by artificial intelligence
Caroline Cauffman*
Robotics is no longer a theme reserved for science fiction movies and technological research
institutes. Although most of us do not yet possess a human-looking machine that takes care of
our household, robots already play an important part in our daily lives, in the form of search robots,
virtual assistants such as Siri or Alexa, programmes that suggest products or services based on our
previous purchases or searches, and so on.
It is difficult to define exactly what a robot is. The concept may refer to machines that carry out
identical and repetitive actions. These types of robots have been widely used since the industrial
revolution and our current law is fit for dealing with them. More problematic, however, are the
robots that possess artificial intelligence (AI), enabling them to ‘learn’ from the information they
are programmed with and the actions they perform, and to use this ‘knowledge’ to make decisions
in subsequent cases. It is these types of robots that challenge the present legal framework, inter alia
in the field of liability law.
Search engines and virtual shopping assistants may cause economic damage to certain traders
by steering potential customers to their competitors; they may also harm consumers when their
suggestions are inaccurate or fail to meet their needs or preferences. The risks and damage
caused by self-driving cars or healthcare AI applications, however, may be significantly larger.
Moreover, the self-learning capacity of AI-driven robots makes it difficult for the developer/
producer to predict the actions the robot may undertake in the future. In addition, once the robot is
sold to a third party, the developer will typically no longer have control over the use of the robot
and/or the circumstances from which the robot will learn.
* Maastricht University, Maastricht, Netherlands
Corresponding author:
Caroline Cauffman, Maastricht University, Minderbroedersberg 4-6, 6211 LK Maastricht, Netherlands.
E-mail: caroline.cauffman@maastrichtuniversity.nl
Maastricht Journal of European and Comparative Law
2018, Vol. 25(5) 527–532
© The Author(s) 2018
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1023263X18812333
maastrichtjournal.sagepub.com