Combating the challenges of social media hate speech in a polarized society: a Twitter ego lexalytics approach

Publication date: 3 September 2019
Authors: Collins Udanor, Chinatu C. Anyanwu
Subjects: Library & information science; Librarianship/library management; Library technology; Information behaviour & retrieval; Information & knowledge management; Information & communications technology; Internet
Collins Udanor and Chinatu C. Anyanwu
Department of Computer Science, University of Nigeria Nsukka, Nsukka, Nigeria
Purpose: Hate speech has in recent times become a troubling development. It has different meanings to different people in different cultures. The anonymity and ubiquity of social media provide a breeding ground for hate speech and make combating it seem like a lost battle. However, what may constitute hate speech in a culturally or religiously neutral society may not be perceived as such in a polarized, multi-cultural and multi-religious society like Nigeria. Defining hate speech, therefore, may be contextual. Hate speech in Nigeria may be perceived along ethnic, religious and political boundaries. The purpose of this paper is to check for the presence of hate speech on social media platforms like Twitter and, where it is present, to what degree it is prevalent. It also intends to find out what monitoring mechanisms social media platforms like Facebook and Twitter have put in place to combat hate speech. Lexalytics is a term coined by the authors from the words lexical and analytics, for the purpose of opinion mining unstructured texts like tweets.
Design/methodology/approach: This research developed a Python software called polarized opinions sentiment analyzer (POSA), adopting an ego social network analytics technique in which an individual's behavior is mined and described. POSA uses a customized Python N-gram dictionary of local, context-based terms that may be considered hate terms. It then applied the Twitter API to stream tweets from popular and trending Nigerian Twitter handles in politics, ethnicity, religion, social activism, racism, etc., and filtered the tweets against the custom dictionary using unsupervised classification of the texts as either positive or negative sentiments. The outcome is visualized using tables, pie charts and word clouds. A similar implementation was also carried out using R-Studio code, the two sets of results were compared, and a t-test was applied to determine whether there was a significant difference between them. The research methodology can be classified as both qualitative and quantitative: qualitative in terms of data classification, and quantitative in terms of being able to identify the results as either negative or positive from the computation of text to vector.
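The dictionary-filtering step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' POSA code: tweets are tokenized, their unigrams and bigrams are matched against a custom list of context-based hate terms, and any match labels the tweet negative. The dictionary entries below are hypothetical placeholders.

```python
from typing import List, Set

def ngrams(tokens: List[str], n: int) -> Set[str]:
    """Return the set of n-grams (joined by spaces) in a token list."""
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def classify(tweet: str, hate_terms: Set[str]) -> str:
    """Label a tweet 'negative' if any dictionary term appears, else 'positive'."""
    tokens = tweet.lower().split()
    grams = ngrams(tokens, 1) | ngrams(tokens, 2)
    return "negative" if grams & hate_terms else "positive"

# Hypothetical dictionary entries, for illustration only.
hate_terms = {"go back", "your kind"}
print(classify("Why don't you go back home", hate_terms))       # negative
print(classify("Great turnout at the rally today", hate_terms))  # positive
```

In a full pipeline, the `classify` step would sit behind the Twitter streaming API, with each incoming tweet scored and tallied per handle before visualization.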
Findings: The findings from two sets of experiments on POSA and R are as follows. In the first experiment, the POSA software found that the Twitter handles analyzed contained between 33 and 55 percent hate content, while the R results show hate content ranging from 38 to 62 percent. A t-test on both positive and negative scores for POSA and R-Studio reveals p-values of 0.389 and 0.289, respectively, at an α value of 0.05, implying that there is no significant difference between the results from POSA and R. From the second experiment, performed on 11 local handles with 1,207 tweets, the authors deduce the following: the percentage of hate content classified by POSA is 40 percent, while the percentage classified by R is 51 percent; the accuracy of hate speech classification predicted by POSA is 87 percent, and 86 percent for free speech; and the accuracy of hate speech classification predicted by R is 65 percent, and 74 percent for free speech. This study reveals that neither Twitter nor Facebook has an automated monitoring system for hate speech, and that no benchmark is set to decide the level of hate content allowed in a text. The monitoring is instead done by humans, whose assessment is usually subjective and sometimes inconsistent.
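The significance check used above (a two-sample t-test between POSA and R scores) can be sketched with standard-library Python. The score vectors here are made-up placeholders, not the paper's data, and 2.228 is the two-tailed critical value of t at α = 0.05 with 10 degrees of freedom, taken from a t-table.

```python
import math

def t_statistic(a, b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical per-handle hate-content fractions from the two tools.
posa_scores = [0.41, 0.35, 0.52, 0.47, 0.39, 0.44]
r_scores = [0.48, 0.40, 0.58, 0.51, 0.43, 0.50]

t = t_statistic(posa_scores, r_scores)
critical = 2.228  # t(0.025, df = 10), two-tailed, alpha = 0.05
print("significant" if abs(t) > critical else "not significant")  # not significant
```

If |t| stays below the critical value, the null hypothesis of equal means is not rejected, which is the sense in which the paper reports "no significant difference" between POSA and R.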
Research limitations/implications: This study establishes the fact that hate speech is on the increase on social media. It also shows that hate mongers can actually be pinned down with the contents of their messages. The POSA system can be used as a plug-in by Twitter to detect and stop hate speech on its platform. The study was limited to public Twitter handles only. N-grams are effective features for word-sense disambiguation, but when using N-grams the feature vector can take on enormous proportions, which in turn increases the sparsity of the feature vectors.
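The sparsity limitation noted above can be made concrete with a small sketch: adding bigrams to the feature set inflates the vocabulary, so each document's count vector is mostly zeros. The corpus and whitespace tokenizer below are simplified placeholders for illustration.

```python
from itertools import chain

def features(text, n_max=2):
    """Extract all 1..n_max-grams from a whitespace-tokenized text."""
    tokens = text.lower().split()
    feats = []
    for n in range(1, n_max + 1):
        feats += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return feats

corpus = [
    "the election was rigged",
    "peaceful protest in the capital",
    "the capital was calm",
]

# Build the vocabulary and one count vector per document.
vocab = sorted(set(chain.from_iterable(features(t) for t in corpus)))
vectors = [[f.count(term) for term in vocab] for f in (features(t) for t in corpus)]

zeros = sum(v.count(0) for v in vectors)
total = len(vocab) * len(corpus)
print(f"vocabulary size: {len(vocab)}, sparsity: {zeros / total:.0%}")
```

Even on three short sentences, most vector entries are zero; on thousands of tweets the bigram vocabulary grows far faster than any single tweet's feature count, which is the sparsity problem the authors describe.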
Practical implications: The findings of this study show that if urgent measures are not taken to combat hate speech, there could be dire consequences, especially in highly polarized societies that are always heated up along religious and ethnic sentiments. On a daily basis, tempers flare on social media over comments made by participants. This study has also demonstrated that it is possible to implement a technology that can track and terminate hate speech in a micro-blog like Twitter. This can also be extended to other social media platforms.
Social implications: This study will help to promote a more positive society, ensuring that social media is used positively for the benefit of mankind.
Data Technologies and Applications
Vol. 53 No. 4, 2019, pp. 501-527
© Emerald Publishing Limited
DOI 10.1108/DTA-01-2019-0007
Received 15 January 2019; Revised 1 May 2019 and 20 June 2019; Accepted 7 July 2019
Originality/value: The findings can be used by social media companies to monitor user behavior and pin hate crimes to specific persons. Governments and law enforcement bodies can also use the POSA application to track down hate peddlers.
Keywords Social media, Twitter, Sentiment analysis, Hate speech, Lexalytics, POSA
Paper type Research paper
1. Introduction
The result of the general elections held in Nigeria in 2015 clearly showed that the country
was polarized along religious and ethnic boundaries (Udanor et al., 2016), with the majority
of the Northern Muslim populace voting for the winner, President Muhammadu Buhari, a
Muslim, and the predominantly Christian southern part voting for the former president,
Goodluck Jonathan, who lost as the incumbent. Since that time, the polity in the country has
been heated up, with many taking to social media to vent their misgivings, bordering on
political, ethnic or religious issues.
The ubiquitousness of social media has brought about many challenges that have
manifested themselves in a number of variations; hate speech is one example of such
challenges (Leondro et al., 2016). According to Jeremy (2012), hate speech may not have a
specific definition, but in broad terms it can be seen as any form of activity that has no
meaning other than communication expressing anger toward a person or group of
individuals, bordering on their gender, sex or race. Social media, being an avenue where
people easily pop in to make new friends and express their diverse opinions on trending
issues all over the world, has recently become a place where people express their anger
and hatred toward other people or the government in power. Sentiments in these media are
being expressed in the form of name-calling, insinuations of race or tribal superiority,
religious bigotry, abuses, or posting of inciting comments, images and videos, especially on
WhatsApp. The one posting the sentiments may call it fighting a cause, while the one who
receives it and is offended may call it hate speech. Some of the problems caused by hate
speech may include the promotion of violence, discrimination, disintegration, communal
wars and ultimately, loss of lives and properties.
Hate speech may be a spoken or written word that is offensive, threatening to an
individual or a group based on a particular attribute of the persons being targeted. Hate
speech is considered a crime in some countries (Anna and Michael, 2017); for example, all
western European states currently prohibit various forms of racist, sexist, anti-religious,
homophobic or other intolerant speech (The Modern Law Review, 2006). The Penal Code in
India also provides for the prosecution of hate speech (Noorani, 1992), as does Nigerian law
(The Nation Nigeria, 2017). Hate speech is criminalized because it constitutes incitement to
violence, discrimination and hostility. It is used to silence unfavorable opinions and
suppress debate, and it is a popular tool for disseminating hatred, especially where religion
is concerned; propaganda built on hate speech often goes viral.
Unfortunately, identifying the sources of the messages and holding the users responsible for
the act might be difficult. This is because of the decentralized, anonymous and interactive
structure of the new media which makes it an ideal platform to spread hate speeches
(Aondover, 2018). Hate speech is characterized by the degree of attention given to the
content and tone of the expression on social media, such as provocative words (racial or
gender), through threats, defamation which includes libel and slander, feelings or attitudes
of hatred, obscenity and so on (Jeremy, 2012).
Physical violence sometimes springs up as a result of assimilating hate contents, either
spoken or written. One incident of hate speech violence that is still fresh is that of the white
supremacist Dylann Roof, who in 2015 was reported to have killed nine African Americans
in Charleston, just after he was welcomed by a Bible study group at a South Carolina church.
