'Eyes wide open to the context of content': Reimagining the hate speech policies of social media platforms through a substantive equality lens.

Date: 22 June 2021
Author: Bartolo, Louisa

The killing of George Floyd, a 46-year-old Black man, at the hands of police in May 2020 gave renewed urgency and media prominence to the 'Black Lives Matter' (BLM) movement, both across the US, where the movement first emerged, and globally. At the same time, countervailing calls for 'neutrality' and 'colour-blindness', assertions that 'All Lives Matter', have (re-)emerged. (1) The slogan 'All Lives Matter' illustrates just how damaging seemingly benign appeals to neutrality can be. In this case, 'neutrality' means ignoring the very real racial disparities and discrimination that blight our societies. It reframes the calls of BLM activists for equal treatment as calls for 'special' treatment, making them easier to dismiss. 'Neutral' treatment in the context of a deeply unequal world can and often does privilege those in positions of power and further undermine those already bearing the brunt of the world's injustices. As a result, a healthy dose of scepticism is required when considering policies that are designed and applied in a facially neutral way.

In a social media setting, the limits of facially neutral policies in contexts of systemic racism were vividly demonstrated when Twitter deleted Cambridge academic Priyamvada Gopal's tweet 'White lives don't matter. As white lives'. (2) Gopal's tweet was a reaction to a particular incident: a banner stating 'White lives matter Burnley' which was flown over a Premier League football ground at the height of BLM protests in the UK in 2020. As Gopal noted at the time, her tweets were an attack on white supremacy and an attempt to provoke 'a critical look at whiteness'. (3) Gopal's case is an example of the kind of anti-racist online speech that civil society groups worry is being silenced by 'neutral' platform policies that moderate race-based discourse without considering wider power structures. (4)

In this article, I focus on questions of neutrality as they apply to the content moderation policies of social media platforms, and particularly to issues of race and racism. Content moderation policies are the rules that social media companies create and then use to determine which content posted on their platforms should be removed or made less visible. Because social media companies are our 'new governors', the way that they conceptualise and then operationalise ideas of legitimate speech or conduct is critically important. (5) When designing and enforcing their moderation policies, do platforms aim for 'neutral' treatment, or do they acknowledge that their userbase exists in a fundamentally unequal world--one that requires far more carefully calibrated policy-making?

Platforms' rules around hate speech have tended to be designed on the basis of formal equality principles. I argue that substantive equality principles offer a more valuable framework for working through the challenges of platform hate speech policy. In the simplest terms, a formal equality approach means that platforms prohibit attacks on individuals or groups based on identity factors such as race. However, 'race' is treated as a singular, undifferentiated category: an attack on white people would be treated in the same way as an attack on Black people, for example. This is in contrast to principles of substantive equality, which would require platforms to consider existing systemic inequality between racial groups in policy design and implementation. It is arguable that, in some instances, speech or conduct attacking powerful groups may be considered a legitimate form of counter-speech. A focus on substantive equality can help to distinguish between 'assaultive speech' and speech (or conduct) that constitutes 'dissent'. (6)

There is a growing need to think differently about online safety and the enforcement of hate speech policies in contexts of deep-rooted systemic inequality. In what follows, I briefly draw on examples from Twitter, Facebook and Reddit's policy work to reveal sites of tension between formal and substantive equality approaches to the platform governance of hate speech. These sites of tension are not just interesting; they are also generative, providing opportunities for us to consider how existing policies could be reimagined to better serve those most vulnerable to the effects of hate speech: individuals and groups that have faced and continue to face systemic discrimination, including racial discrimination. To be sure, platforms wield significant power here. Nevertheless, UK policymakers have a critical role to play in shaping how platform policies are designed and enforced, not least through the impending Online Safety Bill, which will introduce new regulatory obligations for large platforms like Facebook and Twitter to deal with harmful content, including racist speech. Accordingly, I end the article by reflecting on how policymakers and broader movements on the political left might usefully adopt substantive equality principles to push for an approach to online safety that advances wider goals of social justice, including racial justice.

The current approach of social media platforms to hate speech moderation

Platforms currently moderate hate speech at high rates, but we lack an understanding of patterns in their moderation decisions: for example, how hate speech content removal varies by region, or by target group. This makes it impossible to evaluate platforms' moderation decisions 'at a systems level'--including to identify whether certain regions of the world or protected groups are being under-served by existing moderation systems. (7) It is also impossible to assess the extent to which platforms' content moderation systems are silencing speech that challenges injustice through confronting language targeting those in positions of power, for example by flagging and removing posts like 'White men are so fragile', (8) or even posts that simply use the slogan 'Black Lives Matter'. (9)

Digital platforms, and particularly the mainstream players (like Facebook, Instagram, Twitter and YouTube), generally prohibit hate speech. (10) Each platform defines 'hate speech' slightly differently, but broadly, it refers to content (text, images, videos, multimodal content) that vilifies, dehumanises or encourages violence against an individual or group based on identity markers such as race, ethnicity, sexual orientation, gender identity and disability. Platforms also have a variety of sophisticated enforcement systems built around these policies. Platforms' rules against hateful content are made publicly available on their sites, and users are encouraged to report material that they think fits the description using tools provided by the platforms. Tech companies are also investing in automated detection systems to proactively pick up content for moderator review without users having to report it at all. The largest platforms, like Facebook, have thousands of human content moderators who review the user-reported or automatically detected material to determine whether it meets the company's definition of hate speech. Moderators are asked to follow implementation guidelines that are (unsurprisingly) far more complex than the rules available to the public on platforms' sites, and these guidelines are kept outside public view.

Recently published 'Transparency Reports' give some indication of the scale of hateful content being dealt with. In reporting periods of three months, Facebook had removed over 22 million pieces of content, Instagram had taken down over 3 million pieces of material and YouTube had removed over 100,000 videos for breaking the platforms' respective rules around hate speech. (11) The growing emphasis on swift removal of problematic content is at least partly a response to pressure from regulators in Europe, whose laws around hate speech are stricter than those in the US, where many of these...
