Disinformation and misinformation triangle: a conceptual model for the “fake news” epidemic, causal factors and interventions
Publication date: 09 September 2019
Author: Victoria L. Rubin
Subjects: Library & information science; Records management & preservation; Document management; Classification & cataloguing; Information behaviour & retrieval; Collection building & management; Scholarly communications/publishing; Information & knowledge management; Information management & governance; Information management; Information & communications technology; Internet
Victoria L. Rubin
Language and Information Technology Research Lab (LiT.RL),
Faculty of Information and Media Studies,
University of Western Ontario, London, Canada
Purpose: The purpose of this paper is to treat disinformation and misinformation (intentionally deceptive and unintentionally inaccurate or misleading information, respectively) as a socio-cultural, technology-enabled epidemic in digital news, propagated via social media.
Design/methodology/approach: The proposed disinformation and misinformation triangle is a conceptual model that identifies the three minimal causal factors occurring simultaneously to facilitate the spread of the epidemic at the societal level.
Findings: Following the epidemiological disease triangle model, the three interacting causal factors are translated into the digital news context: the virulent pathogens are falsifications, clickbait, satirical fakes and other deceptive or misleading news content; the susceptible hosts are information-overloaded, time-pressed news readers lacking media literacy skills; and the conducive environments are polluted, poorly regulated social media platforms that propagate and encourage the spread of various fakes.
Originality/value: Three types of interventions (automation, education and regulation) are proposed as a set of holistic measures to reveal, and potentially control, predict and prevent further proliferation of the epidemic. Partial automated solutions with natural language processing, machine learning and various automated detection techniques are currently available, as exemplified here briefly. Automated solutions assist (but do not replace) human judgments about whether news is truthful and credible. Information literacy efforts require further in-depth understanding of the phenomenon and interdisciplinary collaboration outside of traditional library and information science, incorporating media studies, journalism, interpersonal psychology and communication perspectives.
Keywords Newspapers, Internet, Deception, Disinformation
Paper type Conceptual paper
This conceptual work[1] offers an interdisciplinary perspective on the current state of disinformation and misinformation in digital news, or what is cumulatively called the “fake news” problem. The two essential questions I ask are: what are the roots of the problem, and what is to be done about it? The insights offered here draw on years of investigations in deception research at the intersection of several disciplines (library and information science, media studies, journalism, interpersonal psychology and communication) with the purpose of informing natural language processing (NLP) and machine learning (ML) applications to find appropriate interventions for disinformation in the news. Prior evidence-based research increases our grasp of the socio-cultural, technology-enabled phenomena beyond computational tasks. Much research is still needed
Journal of Documentation
Vol. 75 No. 5, 2019
pp. 1013-1034
© Emerald Publishing Limited
DOI 10.1108/JD-12-2018-0209
Received 16 December 2018
Revised 14 May 2019
Accepted 16 May 2019
The current issue and full text archive of this journal is available on Emerald Insight at:
This research has been funded by the Government of Canada Social Sciences and Humanities Research Council (SSHRC) Insight Grant (No. 435-2015-0065) awarded for the project entitled “Digital deception detection: identifying deliberate misinformation in online news”. Many thanks to Ben Rubin, a Forest Ecologist, for the extensive conversations on the topic, and to Yimin Chen, Sarah Cornwell, Toluwase Asubiaro and Chris Brogly, doctoral students at LiT.RL, Western, for their helpful suggestions.
to better understand the phenomenon (Tucker et al., 2018), but key findings from previous
research from disparate fields require wider dissemination for greater public awareness, and
the adoption of a unified mental model of key contributing factors.
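As a deliberately simplified illustration of the kind of automated detection technique mentioned above, the sketch below scores a headline against a handful of surface cues commonly associated with clickbait. The cue list and threshold are illustrative assumptions for this sketch, not a model endorsed or evaluated in this paper.

```python
import re

# Surface cues often associated with clickbait headlines. This list and
# the scoring threshold below are illustrative assumptions only.
CLICKBAIT_CUES = [
    r"\byou won'?t believe\b",
    r"\bwhat happen(s|ed) next\b",
    r"\bthis one (weird )?trick\b",
    r"\btop \d+\b",
    r"\bshocking\b",
    r"\b\d+ (reasons|things|ways|facts)\b",
]

def clickbait_score(headline: str) -> int:
    """Count how many clickbait cues fire on a headline (case-insensitive)."""
    text = headline.lower()
    return sum(1 for cue in CLICKBAIT_CUES if re.search(cue, text))

def looks_like_clickbait(headline: str, threshold: int = 1) -> bool:
    """Flag a headline once at least `threshold` cues are present."""
    return clickbait_score(headline) >= threshold
```

A realistic NLP/ML pipeline would replace the hand-written cue list with features learned from labelled corpora, but the point stands either way: some deception signals are machine-checkable at scale, which is precisely why automation can assist (though not replace) human judgment about news credibility.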
As many scholars and practitioners in the field of library and information science and technology (LIS&T) would agree, the discipline incorporates several relevant sub-disciplines such as information organization, information retrieval (IR), human-computer interaction (HCI) and data sciences including NLP and ML (The Association for Library and Information Science Education, 2016). The LIS&T literature should not be overlooked in finding solutions to disrupting the disinformation and misinformation epidemic, since much of the conceptual work in our inherently interdisciplinary field pre-dates the general public's greater awareness of the issue of disinformation and misinformation. For instance, in his 1983 pre-internet-explosion work, Fox (1983) distinguished information from misinformation. Such a distinction was of both theoretical and practical concern to libraries and librarians, who are traditionally tasked with collecting, organizing and selecting credible information from trustworthy sources to enable their patrons' efficient access to information. (For instance, The Library of Parliament, Canada (2018) refers to delivering “authoritative, reliable and relevant information and knowledge” as one of its missions.) While some respected authorities, such as Harvard librarian Matthew Connor Sullivan (2018), in his provocatively titled “Why Librarians Can't Fight Fake News,” argue that librarians do not adequately grasp the deeper psychological conditions of the disinformation and misinformation problem, LIS&T is not strictly limited to the allegedly naïve, ill-informed practices of librarians.
Paper objectives and layout
I will start with a review of pertinent literature emanating primarily from LIS&T. To broaden the discussion to an interdisciplinary perspective, the paper will proceed with three sections, each describing an alarming trend. The first (perhaps obvious) trend is the proliferation of various kinds of “fakes” in the news, which I exemplify with a falsified news story, a satirical fake and a clickbait story from the public health threat context. They are contrasted with a legitimate news story in the same domain, with the intent to highlight how disorienting it may be (at least to some readers) to appropriately identify a fake, let alone the type of fake. The second trend concerns the current practices of digital news production, dissemination and propagation, and their role in the creation of toxic online environments. The third tendency is in the changing news reading practices that make readers susceptible to being mis-/disinformed. The paper culminates in proposing a conceptual model, adapted in the spirit of interdisciplinarity from the field of epidemiology, to account for the determinants of this socio-cultural, technology-enabled epidemic of disinformation and misinformation. The disinformation and misinformation triangle unifies the three alarming trends (“fakes” as virulent pathogens, online media as conducive environments and gullible readers as susceptible hosts) as the key determinants of the dis-/misinformation epidemic, and predicts that their interaction is what enables its occurrence. I conclude by suggesting three interventions (automation, education and regulation) to disrupt the interaction of these factors.
Highlights from the pertinent literature
LIS&T and related fields offer conceptualizations and best practices for establishing the credibility of information, ascertaining relevant, reliable, authoritative expert sources, and vetting information for quality. Libraries have historically verified their sources for cognitive authority (Wilson, 1983) within a particular sphere, as well as for credibility (defined as perceived trustworthiness, or goodness and morality of the source (Fogg, 2003)) in combination with expertise (perceived knowledge, skill and experience of the source, e.g. Hovland et al. (1953) and Tseng and Fogg (1999)). Such filtering diminishes (if not eliminates) the possibility of unreliable, poor-quality information being offered
