This year, companies will spend $124B to not fix hate speech

27-Sep-2019 | Timothy Quinn

Hate speech and security


When tens of thousands of inflammatory posts appeared on Facebook and Twitter in the months preceding the 2016 US presidential election, an embittered rural electorate seemed a likely cause. Few suspected Russian state interference, although in hindsight the warnings were there: identical posts by alphabetically consecutive usernames, profile photos repurposed from other users, suspicious IP addresses, the stilted Felonious Gru English. Why didn’t it occur to content moderators that a coordinated operation was underway to impersonate legitimate users and manipulate algorithms designed to reflect trending conversation? Part of the blame belongs to the philosophical firewall that exists within many organizations between content and infrastructure, between speech and action.
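The irony is that several of those signals were trivially machine-checkable. As a purely illustrative sketch (the function names, field names and thresholds below are invented, not drawn from any platform’s actual systems), a few lines of Python could group byte-identical post bodies and flag runs of alphabetically consecutive usernames:

```python
from collections import defaultdict

def consecutive_usernames(names, min_run=3):
    """Flag runs of usernames that differ only in their final character
    after sorting, e.g. troll_aa, troll_ab, troll_ac (invented examples)."""
    if not names:
        return []
    ordered = sorted(names)
    runs, current = [], [ordered[0]]
    for prev, name in zip(ordered, ordered[1:]):
        # Treat a name as "consecutive" if it shares all but its last
        # character with its alphabetical predecessor.
        if len(name) == len(prev) and name[:-1] == prev[:-1]:
            current.append(name)
        else:
            if len(current) >= min_run:
                runs.append(current)
            current = [name]
    if len(current) >= min_run:
        runs.append(current)
    return runs

def coordinated_posts(posts, min_accounts=3):
    """Group posts by identical body text: many distinct accounts posting
    byte-identical text is a classic coordination signal."""
    by_text = defaultdict(set)
    for post in posts:
        by_text[post["text"]].add(post["user"])
    return {
        text: {"accounts": sorted(users),
               "runs": consecutive_usernames(users)}
        for text, users in by_text.items()
        if len(users) >= min_accounts
    }
```

A real system would need fuzzier matching and would drown in edge cases, but the point stands: the raw signals were sitting in the data.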

Online harassment, discrimination, libel and manipulation: to many organizations, these are content problems, challenges to be solved by automated filters or, too often, by jaded human moderators working for subsistence wages in loosely regulated megacities. When content is posted on a social network or online forum, it generally passes through some sort of rudimentary content filter, which might catch a fraction of a percent of the vitriol it scans every second of every day. Most content filters are hopelessly outmatched by the vagaries and vulgarities of human language, and since the consequences of a false positive can be embarrassing (YouTube famously claimed copyright infringement on a cat), there’s often a reluctance to increase a content filter’s sensitivity. Intentional obfuscation of discriminatory content further decreases the efficacy of automated filtering, pushing the responsibility down to human beings on the bottom rung of the gig economy.
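To see how little it takes to slip past literal matching, here is a deliberately naive sketch; the blocklist stand-in and substitution table are invented for illustration, and production filters are far more elaborate:

```python
import unicodedata

BLOCKLIST = {"vermin"}  # stand-in for an actual slur list, for illustration

# A tiny, invented leetspeak table; real obfuscation is open-ended.
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "$": "s", "@": "a"})

def naive_match(text):
    """Literal word matching: the rudimentary filter described above."""
    return any(word in BLOCKLIST for word in text.lower().split())

def normalized_match(text):
    """Fold Unicode compatibility forms (e.g. fullwidth 'ｖ' to 'v'),
    undo common leetspeak substitutions, then re-check the blocklist."""
    folded = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return any(word in BLOCKLIST
               for word in folded.translate(LEET).lower().split())

print(naive_match("those vermin"))       # True: exact match caught
print(naive_match("those v3rm1n"))       # False: leetspeak slips through
print(normalized_match("those v3rm1n"))  # True, but only after normalization
```

Even the normalized pass is beaten by Cyrillic look-alikes, deliberate misspellings and coded euphemisms, which is exactly where the work gets pushed onto humans.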

If content moderation is trench warfare, platform security is Special Forces. Armed with packet sniffers and vulnerability auditors, and trained in industry-standardized protocols and certifications, security engineers work to ensure that unauthorized actors are prevented from accessing underlying functionality or breaching the logical privacy barriers built into the system. Unlike content moderators, security engineers are highly paid and highly sought after, and are more likely to be found in upmarket campus-like office parks than in exurban Manila cube farms.

As the Russian election hack demonstrated, this fortress paradigm is out of date. Software isn’t a wall we build around content. Software and content are intertwined.

