Event‐based knowledge extraction from free‐text descriptions for art images by using semantic role labeling approaches
Date: 11 April 2008
Pages: 215-225
DOI: https://doi.org/10.1108/02640470810864109
Authors: Chia-Hung Lin, Chia-Wei Yen, Jen-Shin Hong, Samuel Cruz-Lara
Subject: Information & knowledge management; Library & information science
Chia-Hung Lin, Chia-Wei Yen and Jen-Shin Hong
Department of Computer Science and Information Engineering,
National Chi Nan University, Puli, Taiwan, and
Samuel Cruz-Lara
Department of Computer Science, University of Nancy, Nancy, France
Abstract
Purpose – Previous studies have demonstrated that non-professional users prefer event-based conceptual descriptions, such as "a woman wearing a hat", when describing and searching for images. In many art image archives, these conceptual descriptions are manually annotated in free-text fields. This study aims to explore technologies for automating event-based knowledge extraction from such free-text image descriptions.
Design/methodology/approach – This study presents an approach based on semantic role labeling technologies for automatically extracting event-based knowledge, including subject, verb, object, location and temporal information, from free-text image descriptions. A query expansion module is applied to further improve retrieval recall. The effectiveness of the proposed approach is evaluated by measuring retrieval precision and recall in experiments with real-life art image collections from museums.
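The core idea of the approach, mapping semantic roles in a description to event fields, can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: it assumes PropBank-style role labels (ARG0, V, ARG1, ARGM-LOC, ARGM-TMP) as would be produced by a semantic role labeler, and hard-codes one labeled description rather than calling a real SRL parser.

```python
def roles_to_event(roles):
    """Map PropBank-style semantic role labels to event-based index fields
    (field names here are illustrative, not the paper's schema)."""
    mapping = {
        "ARG0": "subject",       # agent of the predicate
        "V": "verb",             # the predicate itself
        "ARG1": "object",        # patient/theme
        "ARGM-LOC": "location",  # locative modifier
        "ARGM-TMP": "time",      # temporal modifier
    }
    return {field: roles[label] for label, field in mapping.items() if label in roles}

# Assumed SRL output for the description "a woman wearing a hat in the garden"
srl_output = {
    "ARG0": "a woman",
    "V": "wearing",
    "ARG1": "a hat",
    "ARGM-LOC": "in the garden",
}

event = roles_to_event(srl_output)
print(event)
# {'subject': 'a woman', 'verb': 'wearing', 'object': 'a hat', 'location': 'in the garden'}
```

The resulting event record can then be indexed field-by-field, so a query such as "woman wearing hat" matches on subject, verb and object rather than on bag-of-words keywords alone.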
Findings – Evaluation results indicate that the proposed method can achieve substantially higher retrieval precision than conventional keyword-based approaches. The proposed methodology is highly applicable to large-scale collections where image retrieval precision is more critical than recall.
Originality/value – The study provides the first attempt in the literature to automate the extraction of event-based knowledge from free-text image descriptions. The effectiveness and ease of implementation of the proposed approach make it feasible for practical applications.
Keywords Information retrieval, Semantics, Library automation, Visual databases, Archives management
Paper type Technical paper
Introduction
Image indexing and retrieval in digital libraries have been extensively studied for decades. Chen (2007) conducted an evaluation experiment indicating that different users often exhibit diverse usage behaviours with art images, as they have varying needs and interests. Researchers in the digital library field have explored
The authors would like to thank the National Science Council of the Republic of China, Taiwan,
for financially supporting this research under Contract No. NSC-94-2422-H-260-002.
Received 19 March 2007; Revised 4 May 2007; Accepted 7 May 2007
The Electronic Library, Vol. 26 No. 2, 2008, pp. 215-225
© Emerald Group Publishing Limited, 0264-0473
DOI 10.1108/02640470810864109