Online Information Review, Vol. 43 No. 1, 2019, pp. 53-71
DOI: https://doi.org/10.1108/OIR-02-2018-0065
Received 2 March 2018; revised 7 July 2018 and 18 September 2018; accepted 19 September 2018; published 11 February 2019
© Emerald Publishing Limited, 1468-4527
This paper forms part of a special section, "Social media mining for journalism."
What the fake? Assessing the extent of networked political spamming and bots in the propagation of #fakenews on Twitter
Ahmed Al-Rawi
Simon Fraser University, Burnaby, Canada, and
Jacob Groshek and Li Zhang
Boston University College of Communication, Boston, Massachusetts, USA
Abstract
Purpose – The purpose of this paper is to examine one of the largest data sets on the hashtag use of
#fakenews, comprising over 14m tweets sent by more than 2.4m users.
Design/methodology/approach – Tweets referencing the hashtag (#fakenews) were collected for a period
of over one year, from January 3, 2017 to May 7, 2018. Bot detection tools were employed, and the most retweeted
posts, most mentions and most hashtags, as well as the top 50 most active users in terms of the frequency of
their tweets, were analyzed.
Findings – The majority of the top 50 Twitter users are more likely to be automated bots, while certain users'
posts, such as those sent by President Donald Trump, dominate the most retweeted posts, which consistently associate
mainstream media with fake news. The most used words and hashtags show that major news organizations
are frequently referenced, with a focus on CNN, which is often mentioned in negative ways.
Research limitations/implications – The research study is limited to the examination of Twitter data, while
ethnographic methods like interviews or surveys are further needed to complement these findings. Though the
data reported here do not prove direct effects, the implications of the research provide a vital framework for
assessing and diagnosing the networked spammers and main actors that have been pivotal in shaping discourses
around fake news on social media. These discourses, which are sometimes assisted by bots, can create a potential
influence on audiences and their trust in mainstream media and understanding of what fake news is.
Originality/value – This paper offers results from one of the first empirical research studies on the propagation
of fake news discourse on social media by shedding light on the most active Twitter users who discuss and
mention the term #fakenews in connection to other news organizations, parties and related figures.
Keywords Twitter, Fake news, Bots, Networked political spamming
Paper type Research paper
Introduction
This study sheds light on the most active Twitter users who discuss and mention the term
#fakenews in connection to other news organizations, parties and related figures. It also
investigates whether these users are more likely to be humans or bots in order to better
understand the nature of the dissemination of discourses surrounding fake news
on social media. In this regard, there is also another category called cyborg that combines
both artificial and human activity. For example, Daniel John Sobieski, a conservative
activist on Twitter with the username @gerfingerpoken, uses algorithms to post over 1,000
messages a day in order to further his agenda and reach a wider online public. This is just
one of the actions that cyborgs can perform, and in this case Sobieski uses "schedulers
which work through stacks of his own prewritten posts in repetitive loops" (Timberg,
2017). Further, political bots tend to be "developed and deployed in sensitive political
moments when public opinion is polarized" (Kollanyi et al., 2016, p. 1). For example, one
study on Twitter found that almost 50% of traffic is generated and propagated by a
rapidly growing bot population (Gilani et al., 2017).
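To make the scheduling behavior described above concrete, the following sketch is a hypothetical illustration in Python, not drawn from Timberg's reporting or modeled on any specific account: it shows how a cyborg-style operator could work through a stack of prewritten posts in a repetitive loop. The post_update() stub is an assumption standing in for whatever posting client would actually be used.

```python
import itertools
import time

# Hypothetical illustration: a minimal "cyborg"-style scheduler that cycles
# through a stack of prewritten posts in a repetitive loop. post_update() is a
# stand-in for a real posting client and simply prints the message.

PREWRITTEN_POSTS = [
    "Prewritten post A #fakenews",
    "Prewritten post B #fakenews",
    "Prewritten post C #fakenews",
]

POSTS_PER_DAY = 1000
SECONDS_BETWEEN_POSTS = 86_400 / POSTS_PER_DAY  # roughly one post every 86 seconds


def post_update(text: str) -> None:
    """Placeholder for the call that would publish `text` to a timeline."""
    print(f"posted: {text}")


def run_scheduler(max_posts: int) -> None:
    """Cycle over the prewritten stack, posting at a fixed cadence."""
    for count, text in enumerate(itertools.cycle(PREWRITTEN_POSTS), start=1):
        post_update(text)
        if count >= max_posts:
            break
        time.sleep(SECONDS_BETWEEN_POSTS)


if __name__ == "__main__":
    run_scheduler(max_posts=3)  # short demonstration run
```

The point of the sketch is simply that very modest automation suffices to reach the posting volumes reported above, which is why such accounts are hard to distinguish from either fully automated bots or fully human users.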
In the contemporary media environment, fake news is becoming more important than
perhaps ever before as political actors and governments worldwide have begun using bots
to "manipulate public opinion, choke off debate, and muddy political issues" (Forelle et al.,
2015, p. 1). Indeed, fake news has become a highly partisan issue in the USA, so associating
certain political figures or news organizations with making or spreading it can lead to
undermining their credibility. This study attempts to examine the way some active Twitter
users connect certain figures, parties and sides with fake news, which can be regarded as a
part of their political spamming activities that are meant to discredit their ideological
opponents. There is no doubt that there is an increasing interest by the general public in the
issue of fake news, especially due to its importance in influencing campaigns, shaping the
perception of reality and potentially altering citizens' political decision making. In general,
there seems to be a systematic and well-calculated attack on mainstream media by many
political sides in the way it is associated with fake news (Cadwalladr, 2017).
The main issue here is that most social media sites like Twitter and Facebook allow bots to
be used, which boost and enhance spamming or posting messages by repeatedly sending them
to as many other users as possible (Chu et al., 2010). For example, Donald Trump's first
presidential address was initially identified as the most tweeted event in history, but it has
been observed that this online attention was partly due to the use of pro-Trump bots. To wit,
"Even before they started trending […], the official hashtags #JointAddress and
#JointSession accumulated decidedly inorganic traffic, including from some accounts that
had never tweeted about any other topic" (Musgrave, 2017). Some of these accounts are not
totally automated, as there seem to be cyborgs, or human spammers combined with bot activity, as
explained above, for such accounts are "often bots that see occasional human curation, or they
are actively maintained by people who employ scheduling algorithms and other applications
for automating social media communication" (Kollanyi et al., 2016, p. 2). According to the Pew
Research Center, it has been estimated that two-thirds of tweeted links to popular websites are
posted by bots, which share roughly "41% of links to political sites shared primarily by liberals
and 44% of links to political sites shared primarily by conservatives" (Wojcik et al., 2018).
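As a rough illustration of why posting volume and regularity are treated as bot signals, the sketch below implements a minimal frequency-based screen in Python. It is a hypothetical example only and is not the bot detection tooling used in this study; dedicated detection services score many more behavioral features. The Account structure, field names and thresholds are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import pstdev
from typing import List

# Hypothetical illustration: a crude screen for accounts that tweet at
# machine-like rates or with machine-like regularity.

@dataclass
class Account:
    handle: str
    tweet_times: List[datetime]  # timestamps of the account's tweets


def looks_automated(acct: Account,
                    max_daily_rate: float = 500.0,
                    min_interval_stdev: float = 5.0) -> bool:
    """Flag an account whose volume or spacing exceeds plausible human limits."""
    times = sorted(acct.tweet_times)
    if len(times) < 2:
        return False

    # Average number of tweets per day over the observed span.
    span_days = max((times[-1] - times[0]).total_seconds() / 86_400, 1 / 86_400)
    daily_rate = len(times) / span_days

    # Seconds between consecutive tweets; near-constant gaps suggest scheduling.
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]

    return daily_rate > max_daily_rate or pstdev(gaps) < min_interval_stdev
```

Such heuristics only flag candidates; in practice, follower-to-friend ratios, content similarity, profile metadata and network position are also weighed before an account is labeled a likely bot.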
Theoretical framework
Since this study deals with online information, it is relevant to begin with the theoretical
concept of political spamming, which we define as an overflow of politically oriented online
messages that are widely disseminated to serve the interest of a certain political party or figure.
In the context of this study, spamming occurs in the way news organizations, political
figures and entities are repeatedly associated with fake news on Twitter. Further, we introduce
here the concept of networked political spamming activity which is manifested in the way
many active Twitter users collaboratively disseminate posts by retweeting political or
ideological messages that often include hyperlinks in order to serve a certain agenda or political
purpose. The majority of previous studies on political spamming did not offer a clear
conceptual definition of this online activity, while the networked and collaborative aspect has
been largely overlooked. This is a "networked activity" because there is a collective collaboration
in disseminating spam, and those involved might not always be aware of their spamming
activity. Though spam is not always defined as a form of false information, it is somewhat
similar to the spread of misinformation, which refers to the inadvertent sharing of wrong
information when users are not aware of the nature of messages they disseminate (Born and
Edgington, 2017; Jackson, 2017). In other words, networked political spamming includes the
intentional and unintentional spread of spam messages by social media users whose general
aim is to serve a particular political side and attack or silence the opponent(s).
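One way to make the "networked" aspect of this definition visible, offered here purely as an illustrative sketch rather than the procedure used in this study, is to count how often pairs of accounts retweet the same source tweets; unusually high co-retweet counts point to tightly clustered, and possibly coordinated, amplification. The field names (user, retweeted_id) in the Python example below are assumptions about how the tweet records might be stored.

```python
from collections import defaultdict
from itertools import combinations
from typing import Dict, List, Tuple

# Hypothetical illustration: surface pairs of accounts that repeatedly
# retweet the same messages, a simple proxy for networked amplification.

def co_retweet_pairs(tweets: List[dict]) -> Dict[Tuple[str, str], int]:
    """Count how many distinct source tweets each pair of users both retweeted."""
    retweeters = defaultdict(set)  # source tweet id -> set of retweeting users
    for t in tweets:
        if t.get("retweeted_id"):
            retweeters[t["retweeted_id"]].add(t["user"])

    pair_counts = defaultdict(int)
    for users in retweeters.values():
        for u, v in combinations(sorted(users), 2):
            pair_counts[(u, v)] += 1
    return dict(pair_counts)


# Tiny worked example: two accounts retweeting the same two source tweets.
sample = [
    {"user": "acct1", "retweeted_id": "900"},
    {"user": "acct2", "retweeted_id": "900"},
    {"user": "acct1", "retweeted_id": "901"},
    {"user": "acct2", "retweeted_id": "901"},
]
print(co_retweet_pairs(sample))  # {('acct1', 'acct2'): 2}
```

High pair counts do not prove intent; as noted above, users caught up in networked spamming may be unaware that they are amplifying it, which is why such measures describe structure rather than motive.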