Two fast prototypes of web-based augmented reality enhancement for books

Publication Date: 02 Dec 2019
Author: Dan Lou
Subject: Library & information science, Librarianship/library management, Library technology, Library & information services
As author and envisioneer M. Pell nicely put it, augmented reality (AR) is an incredible medium for "surfacing the invisible" (DFREE, 2018). AR is an interactive experience that blends computer-generated objects seamlessly into the view of the real world. In recent years, we have seen various AR applications that visualize previously invisible data in different fields, from health care (PBS NewsHour, 2019), education and real estate (Young Entrepreneur Council, 2019) to indoor/outdoor navigation (Grey).
This makes one wonder what type of
invisible data a library collection
possesses that may appeal to our
customers, as well as what could serve
as the most cost-effective way to make
that data visible through AR.
Palo Alto City Library started to design and implement Web-based AR in 2018. At a regional conference hosted by the Library, attendees were invited to open AR web pages on their smartphones and scan 2-D markers around the room to experience different AR scenes. The author explained how that AR solution was developed in a recent Code4Lib article (Lou, 2019), where she also suggested adapting the solution to enhance a library collection. But it is quite a different story when one tries to scale up from an activity with only one AR marker and a few AR objects to the collection level, with thousands of books to be associated with AR content. The major obstacle lies in finding a scalable and painless way to create links between AR content and books. At the title level, books already have a unique identifier: the ISBN. Unfortunately, for technical reasons marker-based AR technology only accepts two-dimensional (2-D) objects as markers, not the one-dimensional (1-D) EAN barcode (or ISBN barcode) used on books.
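Because an ISBN barcode is simply an EAN-13 symbol, a scanned code can be sanity-checked in a few lines of JavaScript before it is used as an identifier. The sketch below validates the EAN-13 check digit (alternating weights of 1 and 3, with the digit sum required to be a multiple of 10); the function name is the author's own illustration, not part of any library discussed in the article.

```javascript
// Validate an ISBN-13 / EAN-13 string by its check digit.
// Digits are weighted 1, 3, 1, 3, ... from left to right; a valid
// code's weighted sum (including the check digit) is divisible by 10.
function isValidIsbn13(isbn) {
  if (!/^\d{13}$/.test(isbn)) return false; // must be exactly 13 digits
  let sum = 0;
  for (let i = 0; i < 13; i++) {
    sum += Number(isbn[i]) * (i % 2 === 0 ? 1 : 3);
  }
  return sum % 10 === 0;
}
```

A check like this lets a Web-based reader discard misreads from a noisy camera frame before looking up any AR content.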
An explanation of augmented reality and augmented reality markers
Generally speaking, AR technology can be divided into two major categories: marker-based AR and markerless AR (, 2017). Marker-based AR needs a specific marker to trigger an AR scene. With this method, the association between the marker and the AR scene is predefined. This is the technology used in the AR activity at Palo Alto City Library mentioned in the previous paragraph.
In contrast, markerless AR doesn't need special markers to identify the place where a virtual object should appear. Instead, it uses a technique called Simultaneous Localization and Mapping (SLAM) (Wikipedia Contributors, 2019) to map an unknown environment in real time and to position virtual objects in it accordingly. Hence, markerless AR can render an AR scene realistically in almost any real-world environment. Pokémon Go is probably the best-known example.
Based on this understanding, if a specific AR scene is designed to be triggered by a specific book, marker-based AR seems to be the more appropriate option at first glance: the association between a book and an AR scene should be unique and needs to be predefined.
Unfortunately, it is a daunting task to adopt this solution for a library collection. From a technical point of view, marker-based AR can only accept 2-D objects as markers, such as 2-D images, 2-D barcodes, etc. This is because a 2-D object, like a QR code, presents rich geometric patterns that contain the information about the real world required for AR display: the position, the scale and the rotation. Based on such information returned from a 2-D object, an AR application can then construct a believable AR scene. In contrast, the ISBN barcode is a 1-D barcode representing information in parallel lines instead of geometric patterns, so it is impossible to capture all the information required to build an AR scene (jetmarkingadmin, 2018). In the scenario of adopting marker-based AR for this project, not only would a unique AR scene have to be developed for every book (or group of books) in a collection, but an additional AR marker would also have to be placed on every book to associate it with an AR scene. This could quickly turn into a time-consuming, labor-intensive and unsustainable project.
The ideal solution, on the other hand, is expected to function in a quite different way. A Web-based barcode reader scans the ISBN barcode on a book, and the returned ISBN information, instead of the data from a 2-D marker, is used to trigger a specific Web-based AR scene. This ideal solution has three components: a Web-based barcode reader, some attractive Web-based AR content for every book, and a connection between the ISBN information returned by the barcode reader and the target AR content.
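The third component, the connection between an ISBN and its AR content, can be as simple as a lookup table keyed by ISBN. The sketch below is the author's own minimal illustration of that idea; the table contents and the example.org URL pattern are hypothetical placeholders, not part of the prototypes described in the article.

```javascript
// Hypothetical mapping from ISBN (as decoded by the barcode reader)
// to the URL of a Web-based AR scene for that title.
const arContentByIsbn = {
  "9780306406157": "https://example.org/ar/scenes/9780306406157.html"
};

// Return the AR scene URL for a known ISBN, or null when no AR
// content has been linked to this title yet.
function arSceneFor(isbn) {
  return arContentByIsbn[isbn] || null;
}
```

In a real deployment the table would live on a server (or be generated from catalog records) rather than be hard-coded, but the lookup itself stays this simple, which is what makes the approach scale to thousands of titles.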
A Web-based barcode reader
The Web-based barcode reader used in the two prototypes is called QuaggaJS (, 2017). QuaggaJS is a barcode reader written entirely in JavaScript that can be embedded in a Web page. It supports real-time scanning of various types of barcodes, including the EAN format used for ISBNs.
Library Hi Tech News, Number 10 2019, pp. 19-24, © Emerald Publishing Limited, 0741-9058, DOI 10.1108/LHTN-08-2019-0057
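To make the described pipeline concrete, the sketch below shows one plausible way to wire QuaggaJS to a live camera stream and hand the decoded ISBN off to an AR page. Quagga.init, Quagga.start, Quagga.stop and Quagga.onDetected are real QuaggaJS calls; the "#scanner" element id and the example.org URL pattern are the author's hypothetical illustrations, and the typeof guard simply makes the snippet a no-op outside a browser where QuaggaJS is loaded.

```javascript
// Hypothetical URL scheme for the AR page triggered by a scanned ISBN.
function arUrlFor(isbn) {
  return "https://example.org/ar/?isbn=" + encodeURIComponent(isbn);
}

if (typeof Quagga !== "undefined") {       // only runs where QuaggaJS is loaded
  Quagga.init({
    inputStream: {
      type: "LiveStream",                  // read frames from the device camera
      target: document.querySelector("#scanner"), // container for the video preview
      constraints: { facingMode: "environment" }  // prefer the rear camera
    },
    decoder: { readers: ["ean_reader"] }   // ISBN barcodes are EAN-13 symbols
  }, function (err) {
    if (err) { console.error(err); return; }
    Quagga.start();
  });

  Quagga.onDetected(function (result) {
    const isbn = result.codeResult.code;   // decoded 13-digit string
    Quagga.stop();                         // stop scanning after the first hit
    window.location.href = arUrlFor(isbn); // hand off to the AR scene
  });
}
```

In practice one would also validate the check digit of the decoded string before redirecting, since live camera scanning occasionally yields misreads.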
