Understanding credibility judgements for web search snippets

Markus Kattenbeck and David Elsweiler
Department of Information Science, University of Regensburg, Regensburg, Germany

Aslib Journal of Information Management, Vol. 71 No. 3, 2019, pp. 368-391
DOI: https://doi.org/10.1108/AJIM-07-2018-0181
Received 30 July 2018; revised 31 October 2018 and 22 January 2019; accepted 23 January 2019; published 20 May 2019
Subject: Library & information science
Abstract
Purpose – It is well known that information behaviour can be biased in countless ways and that users of web search engines have difficulty in assessing the credibility of results. Yet, little is known about how search engine result page (SERP) listings are used to judge credibility and in which way, if any, such judgements are biased. The paper aims to discuss these issues.
Design/methodology/approach – Two studies are presented. The first collects data by means of a controlled, web-based user study (N = 105). Studying judgements for three controversial topics, the paper examines the extent to which users agree on credibility, the extent to which judgements relate to those applied by objective assessors and to what extent judgements can be predicted by the users' position on and prior knowledge of the topic. A second, qualitative study (N = 9) utilises the same setup; however, transcribed think-aloud protocols provide an understanding of the cues participants use to estimate credibility.
Findings – The first study reveals that users are very uncertain when assessing credibility and their impressions often diverge from objective judges who have fact checked the sources. Little evidence is found indicating that judgements are biased by prior beliefs or knowledge, but differences are observed in the accuracy of judgements across topics. Qualitatively analysing the participants' think-aloud transcripts reveals ten categories of cues, which participants used to determine the credibility of results. Despite short listings, participants utilised diverse cues for the same listings. Even when the same cues were identified and utilised, different participants often interpreted these differently. Example transcripts show how participants reach varying conclusions, illustrate common mistakes made and highlight problems with existing SERP listings.
Originality/value – This study offers a novel perspective on how the credibility of SERP listings is interpreted when assessing search results. Especially striking is how the same short snippets provide diverse informational cues and how these cues can be interpreted differently depending on the user and his or her background. This finding is significant in terms of how search engine results should be presented and opens up the new challenge of discovering technological solutions, which allow users to better judge the credibility of information sources on the web.
Keywords Credibility, Information seeking behaviour, Biases, Search engine result page, Web searchers
Paper type Research paper
1. Introduction
Making credibility judgements in information environments, such as the web, is challenging
(Schwarz and Morris, 2011) because, in contrast to traditional print and broadcast media,
often no quality control mechanism exists (Rieh, 2002). This places increased emphasis on
user information literacy skills, including their ability to evaluate information critically.
These are skills that people, regardless of education level, tend to overestimate (Gross and
Latham, 2012). Moreover, information retrieval (IR) research has emphasised that users'
actions while seeking information, as well as their perceptions of the information found, are biased
in countless ways (e.g. Nickerson, 1998; White, 2013). Although credibility judgements for
web pages (Fogg, 2003; McKnight and Kacmar, 2007) and web search engine result page
(SERP) listings have been studied (Schwarz and Morris, 2011), we still know relatively little
about how such evaluations are made and how they may be affected by user biases.
Designing systems to support the critical evaluation of information should be informed by a
better understanding of both the processes and biases involved.
The focus of our work is on how judgements are made from result listings alone, that is,
without examining the result web page itself. This means, using Rieh's terminology, we
have focused on predictive rather than evaluative quality judgements (Rieh, 2002).
The role that search listings play in establishing the credibility of results has been subject
to little study, but is important because the evidence suggests that decisions are being
made on the SERPs. Users typically click on few, high-ranking results (Joachims et al.,
2007; Granka et al., 2004) and specific features of the listings, such as the presence of
query terms in the snippet or title, the readability of the description and the length of the
URL, can have a strong influence on which results are clicked (Clarke et al., 2007).
Moreover, past work suggests that while reading web pages typically helps people form
a view, it often does not change one (White, 2013). Past work has also highlighted
the importance of search listings and how result descriptions can be misleading when
users derive relevance judgements (Lewandowski, 2008). Lewandowski recommends that
search engine companies not only work to improve the precision of their results, but also
consider how well result descriptions correspond to the results themselves and how this
might influence behaviour.
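To make these listing-level signals concrete, the following minimal sketch illustrates how the kinds of features associated with click behaviour (query terms in the title or snippet, a readability proxy, URL length) might be extracted from a single SERP entry. It is not code from any of the cited studies; the field names, the query-term counts and the sentence-length readability proxy are our own illustrative choices.

```python
# Illustrative sketch: crude listing features of the kind discussed by Clarke et al. (2007).
# Field names and the readability proxy are assumptions made for this example.
import re
from dataclasses import dataclass


@dataclass
class Listing:
    title: str
    snippet: str
    url: str


def listing_features(listing: Listing, query: str) -> dict:
    terms = [t.lower() for t in query.split()]
    title_lower = listing.title.lower()
    snippet_lower = listing.snippet.lower()
    words = re.findall(r"[a-z']+", snippet_lower)
    sentences = max(1, len(re.findall(r"[.!?]+", listing.snippet)))
    return {
        # presence of query terms in title and snippet
        "query_terms_in_title": sum(t in title_lower for t in terms),
        "query_terms_in_snippet": sum(t in snippet_lower for t in terms),
        # rough readability proxy: average sentence length in words
        "avg_sentence_length": len(words) / sentences,
        # URL length in characters
        "url_length": len(listing.url),
    }


example = Listing(
    title="Is homeopathy effective? - Example Health Portal",
    snippet="Studies disagree on whether homeopathy works. Read the evidence before deciding.",
    url="https://www.example.org/health/homeopathy-evidence",
)
print(listing_features(example, "homeopathy effective"))
```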
Thus, decisions made based on listings alone may be difficult to reverse. As yet, we know
little about how credibility judgements based on search listings are made, what influences
the processes involved and how successful users are and why. We investigate these issues
in this work. In particular, we study credibility judgements for real listings for three
controversial topics using two approaches.
First, we apply a quantitative method to investigate the extent to which users (N = 105)
agree on credibility, how judgements relate to those applied by objective assessors who fact
checked the full page content and to what extent judgements can be predicted by biases[1],
such as users' position on and prior knowledge of the topic.
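As a rough illustration of the three questions this first study asks, the sketch below computes per-listing rating spread (agreement), the correspondence between mean user ratings and objective judgements, and the correlation between a prior-stance variable and judgement accuracy. It is not the authors' analysis code; the toy data, variable names and choice of statistics are assumptions for illustration only.

```python
# Illustrative sketch with toy data; the paper may use different measures entirely.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# rows = 105 users, columns = SERP listings; cells = 1-5 credibility ratings (toy data)
ratings = rng.integers(1, 6, size=(105, 12))
objective = rng.integers(1, 6, size=12)        # fact-checked judgement per listing (toy)
prior_stance = rng.uniform(-1, 1, size=105)    # each user's position on the topic (toy)

# (1) How much do users agree? Spread of ratings per listing as a simple proxy.
per_listing_sd = ratings.std(axis=0)
print("mean rating SD per listing:", round(per_listing_sd.mean(), 2))

# (2) How do user judgements relate to the objective assessors?
mean_user_rating = ratings.mean(axis=0)
rho, p = spearmanr(mean_user_rating, objective)
print(f"Spearman rho, users vs. objective judgements: {rho:.2f} (p={p:.3f})")

# (3) Can judgement accuracy be predicted from prior stance (a possible bias)?
accuracy = -np.abs(ratings - objective).mean(axis=1)   # higher = closer to objective
rho_bias, p_bias = spearmanr(prior_stance, accuracy)
print(f"Spearman rho, stance vs. accuracy: {rho_bias:.2f} (p={p_bias:.3f})")
```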
A second data collection phase complements the results of the first study by reporting on
a qualitative investigation, whereby a smaller number of participants (N = 9) explain their
reasoning while completing the same task.
Before presenting the studies and the resulting findings, in the following section, we review
two bodies of related work. The first briefly summarises research on information seeking with a
particular focus on cognitive aspects of search and how these can be biased, while the second
details work on assessing the credibility of information sources.
2. Related work
Information Science has a rich tradition of studying how people acquire information
(see Case and Given, 2016 for a general overview). It is well understood that this behaviour is
complex and context-dependent, that is, the exact action, sequence of actions or strategy
employed will vary depending on a large number of variables, from the type and
complexity of the task (Marchionini, 2006; Byström and Järvelin, 1995) to the expertise
(Aula et al., 2005; White et al., 2009), cognitive abilities (Brennan et al., 2014) or personality of
the user (Heinström, 2003). Moreover, information seeking behaviour may be
simultaneously shaped by immediate influences, such as friends, family and other trusted
small world sources, as well as by wider socio-cultural influences, including media,
technology and politics (Burnett and Jaeger, 2011).
In the information seeking community, the cognitive processes in search have been well
studied. Models have been developed describing user motivation, the recognition and
progression of information needs (Taylor, 1968), the reduction of uncertainty (Kuhlthau,
1991) and the fact that found information is assimilated into the users existing knowledge
structures (Brookes, 1975; Ingwersen, 1992). Such models, while representing cognitive
processes, have typically ignored cognitive biases, whereby individuals behave differently
from what an objective observer would ordinarily expect or from what verifiable facts
would suggest (Kahneman, 2011). In psychology, it is accepted that behaviour is influenced in
countless ways, including that people prefer information supporting their own views