Responding to uncertainty in emotion recognition

DOI: https://doi.org/10.1108/JICES-07-2019-0080
Pages: 299-303
Published: 12 August 2019
Author: Björn Schuller
Subject matter: Information & knowledge management
Responding to uncertainty in emotion recognition
Björn Schuller
Chair of Embedded Intelligence for Health Care and Wellbeing,
University of Augsburg, Augsburg, Germany and GLAM Group on Language,
Audio and Music, Imperial College London, London, UK
Abstract
Purpose: Uncertainty is an under-respected issue when it comes to automatic assessment of human emotion by machines. The purpose of this paper is to highlight the existent approaches towards such measurement of uncertainty, and to identify further research need.
Design/methodology/approach: The discussion is based on a literature review.
Findings: Technical solutions towards measurement of uncertainty in automatic emotion recognition (AER) exist but need to be extended to respect a range of so far underrepresented sources of uncertainty. These then need to be integrated into systems available to general users.
Research limitations/implications: Not all sources of uncertainty in automatic emotion recognition (AER), including emotion representation and annotation, can be touched upon in this communication.
Practical implications: AER systems shall be enhanced by more meaningful and complete information provision on the uncertainty underlying their estimates. Limitations of their applicability should be communicated to users.
Social implications: Users of automatic emotion recognition technology will become aware of its limitations, potentially leading to a fairer usage in crucial application contexts.
Originality/value: There is no previous discussion including the technical viewpoint on extended uncertainty measurement in automatic emotion recognition.
Keywords: Uncertainty, Confidence, Affective computing, Sentiment analysis, Trustworthiness, Automatic emotion recognition
Paper type: Viewpoint
Introduction
Automatic recognition of emotion, such as of humans or animals, is increasingly being applied in crucial contexts such as assessment in job interviews or security applications. It is essentially a pattern recognition problem. Such problems are usually solved by some sort of (statistical) machine learning, that is, learning from data so as to generalise towards new, unseen data based on experience. What makes automatic emotion recognition (AER) more difficult in comparison to many related pattern recognition tasks, such as automatic speech recognition or person recognition in images, is the lack of a solid ground-truth learning target. This is because the actual emotion inherent in a human or animal usually cannot be measured. In the future, brain scans or other means may allow for the establishment of such reliable ground-truth emotion information. However, current AER systems are usually based on (subjective) human labelling of the perceived emotion as expressed by others. This requires data such as audio, video or text, but also physiological data, to be annotated according to humanly perceivable information such as acoustic or linguistic cues in (spoken) language and visual cues such as facial expression and body posture. The learning target for AER systems is thereby usually formed by a potentially-
Received: 23 July 2019
Revised: 23 July 2019
Accepted: 23 July 2019
Journal of Information, Communication and Ethics in Society, Vol. 17 No. 3, 2019, pp. 299-303
© Emerald Publishing Limited, ISSN 1477-996X
DOI: 10.1108/JICES-07-2019-0080
