This year, companies will spend $124B to not fix hate speech

27-Sep-2019 | Timothy Quinn

Hate speech and security


When tens of thousands of inflammatory posts appeared on Facebook and Twitter in the months preceding the 2016 US presidential election, an embittered rural electorate seemed a likely cause. Few suspected Russian state interference, although in hindsight the warnings were there: identical posts by alphabetically consecutive usernames, profile photos repurposed from other users, suspicious IP addresses, the stilted Felonious Gru English. Why didn’t it occur to content moderators that a coordinated operation was underway to impersonate legitimate users and manipulate algorithms designed to reflect trending conversation? Part of the blame belongs to the philosophical firewall that exists within many organizations between content and infrastructure, between speech and action.

Online harassment, discrimination, libel and manipulation: to many organizations, these are content problems, a challenge to be solved by automated filters or, too often, jaded human moderators working for subsistence wages in loosely regulated megacities. When content is posted on a social network or online forum, it generally passes through some sort of rudimentary content filter which might catch a fraction of a percent of the vitriol it scans every second of every day. Most content filters are hopelessly outmatched by the vagaries and vulgarities of human language, and since the consequences of a false positive can be embarrassing (YouTube famously claimed copyright infringement on a cat), there’s often a reluctance to increase a content filter’s sensitivity. Intentional obfuscation of discriminatory content further decreases the efficacy of automated filtering, pushing the responsibility down to human beings at the bottom rung of the gig economy.
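To make that mismatch concrete, here is a minimal sketch (in Python, with a purely hypothetical blocklist and sample posts) of the sort of rudimentary keyword filter described above, and two trivial obfuscations that slip past it:

```python
# A minimal sketch of a rudimentary keyword filter; the blocklist and
# sample posts are hypothetical, for illustration only.
import re

BLOCKLIST = {"vermin", "subhuman"}  # stand-ins for a real moderation lexicon

def naive_filter(post: str) -> bool:
    """Flag a post if any whole word matches the blocklist."""
    words = re.findall(r"[a-z]+", post.lower())
    return any(word in BLOCKLIST for word in words)

print(naive_filter("these people are vermin"))       # True: exact match caught
print(naive_filter("these people are v3rm1n"))       # False: leetspeak slips through
print(naive_filter("these people are v.e.r.m.i.n"))  # False: punctuation defeats tokenization
```

Every workaround of this kind pushes another post past automation and onto a human moderator's queue.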

If content moderation is trench warfare, platform security is Special Forces. Armed with packet sniffers and vulnerability auditors, and trained in industry-standardized protocols and certifications, security engineers work to ensure that unauthorized actors are prevented from accessing underlying functionality or breaching the logical privacy barriers built into the system. Unlike content moderators, security engineers are highly paid and highly sought after, and are more likely to be found in upmarket campus-like office parks than exurban Manila cube farms.

As the Russian election hack demonstrated, this fortress paradigm is out of date. Software isn’t a wall we build around content. Software and content are intertwined — in a sense, they are each other’s Achilles’ heel.

To begin posting gibberish at the 3.2 billion people currently connected to the internet, all you need is an email address. At present, there are anywhere from a few thousand to tens of thousands of darkmail providers offering extrajudicial anonymity. If you want to tailor your musings on Hillary Clinton’s pizzeria side hustle to a specific purpose, the bar doesn’t get much higher — with a modest advertising allowance (perhaps carved out of Vladimir Putin’s marine plumbing budget), you can circumvent millions of dollars of SOC 2 compliance. The Moscow-based Internet Research Agency reportedly spent $1.25M per month to expose 150M users to inflammatory content on Facebook and Instagram, thereby impacting, perhaps decisively, the results of a presidential election in the most economically and militarily powerful country in the world. By way of comparison, Gartner estimates that businesses will cumulatively spend $124B this year on information security, the bulk of which will go to critical infrastructure.

Although the notion of ideas as weapons can be dated to the Arthashastra and Sun Tzu’s Art of War, we still visualize threats as splinters of code or industrial firmware with the password set to "admin". Online hate speech and disinformation have a well-documented history of wreaking offline havoc. Attacks on Roma in France earlier this year were triggered by stereotypes of child abduction spread on Facebook, not unlike last year’s lynchings in India, which were inflamed by abduction hysteria spread through WhatsApp. Wall Street protects global markets by spending billions of dollars a year on infrastructure security, yet in 2013 a single tweet falsely claiming Barack Obama had been injured in an explosion cost the economy $130B.

Few information security protocols even mention content risk, let alone classify, quantify or mitigate it. The Open Web Application Security Project (OWASP), whose guidance anchors much of the industry’s tactical security practice, dedicates several of its 214 current best practices to input validation and communication security, but limits its recommendations to graphemes: a problem of characters, not words.
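To illustrate that distinction, here is a sketch (not an actual OWASP control) of a character-level sanitizer of the kind those practices describe: it defuses a script injection and waves a hateful sentence straight through, because every character in the sentence is "safe".

```python
# A sketch of validation at the level of characters rather than words.
# The sanitizer below is illustrative, not a real OWASP control.
import html

def sanitize(user_input: str) -> str:
    """Escape characters that could carry markup or script injection."""
    return html.escape(user_input, quote=True)

hostile_markup = '<script>alert("pwned")</script>'
hostile_speech = "Those people are vermin and should be driven out"

print(sanitize(hostile_markup))  # angle brackets escaped: the injection is defused
print(sanitize(hostile_speech))  # returned verbatim: no character trips the check
```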

There are several good reasons to think of content as a security issue rather than a policy issue, no different in impact from the threat of DDoS or code injection. At minimum, the two threat landscapes overlap like a Venn diagram.

While some threats will skate silently beneath the public URLs and web forms of a busy online ecosystem, others will operate in plain sight, turning online forums into a Mos Eisley of unmoderated user interactions. Tackling the content problem with the rigor of enterprise security is an opportunity for organizations to understand and neutralize some of those threats before they burrow into databases and network infrastructure, whereas ignoring them (or bleeping them with hearts, as Steam’s Community Forums do for some unimaginable reason) is an invitation to trespass.

There are telling similarities in the pathologies of public abusers and surreptitious attackers. Both employ anonymity to their advantage. Multiple accounts are a favorite ploy. Both plant malware, whether links in comment threads or payloads left in compromised file structures. Both are adept at social engineering.
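Exploiting that overlap does not require exotic tooling. As a sketch (with hypothetical event types, field names and thresholds), correlating moderation flags with security telemetry by source surfaces an address that quietly runs multiple accounts and seeds malware links alongside its flagged posts:

```python
# A sketch of correlating content flags with security telemetry.
# Event types, field names and the threshold are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    account: str
    ip: str
    kind: str  # e.g. "flagged_post", "malware_link", "suspicious_login"

def correlate(events: list[Event], min_signals: int = 2) -> set[str]:
    """Return IPs whose traffic trips a content flag plus at least one other signal."""
    kinds_by_ip = defaultdict(set)
    for event in events:
        kinds_by_ip[event.ip].add(event.kind)
    return {ip for ip, kinds in kinds_by_ip.items()
            if "flagged_post" in kinds and len(kinds) >= min_signals}

events = [
    Event("user_a", "203.0.113.7", "flagged_post"),
    Event("user_b", "203.0.113.7", "malware_link"),   # second account, same source
    Event("user_c", "198.51.100.2", "flagged_post"),  # content signal only
]
print(correlate(events))  # {'203.0.113.7'}
```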

If studying one threat can yield insight into another — if there’s even a marginal benefit to applying greater rigor to the problem of content moderation — why don’t organizations expand their security portfolios to incorporate online abuse? One reason is that identifying and monitoring hate speech is difficult. Language is notoriously fungible; even when you find a word you suspect to be hate speech (which itself can be challenging), the term may have a different meaning or different intent depending on the context in which it was used, or on the identities of the person using it, the person being addressed and the person being discussed. Not only do many hate speech terms have a non-hateful doppelgänger, but language can be structured to communicate hateful context using sarcasm, double entendre, innuendo, euphemism, metaphor and other forms of rhetorical nuance.
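A toy example of that ambiguity (the term list and sentences are hypothetical stand-ins): a keyword match flags the dehumanizing use and the history lesson alike, and misses the threat that uses no keyword at all.

```python
# A sketch of why keyword matching cannot settle questions of intent.
# The term list and sample sentences are hypothetical.
AMBIGUOUS_TERMS = {"plague", "vermin", "invasion"}

def keyword_flag(post: str) -> bool:
    return any(term in post.lower() for term in AMBIGUOUS_TERMS)

print(keyword_flag("Migrants are a plague on this country"))    # True: dehumanizing use
print(keyword_flag("The plague killed a third of Europe"))      # True: false positive
print(keyword_flag("They should go back where they came from")) # False: hateful, no keyword
```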

Another reason many security engineers hesitate to include content within their purview is that, unlike key loggers and password crackers, hate speech is political: it’s tricky to define, it’s complicated by users’ notions of identity and culture, and it’s increasingly a stalking horse for the reappropriated defense of free speech by the embarrassing side of libertarianism. It’s a hot potato that’s more likely to break a mid-level security manager’s career than make it.

Palatable or not, a broader approach to security that counteracts multiple types of threat — including content — will decrease an organization’s exposed attack surfaces. Locking down an ecosystem against hate speech, fraud, revenge porn, disinformation, malware, clickjacking and other abusive behaviors can be the tide that raises all boats, not just a panacea for the compliance team policing passwords on Post-it notes. At best, fixing hate speech can provide intelligence and heightened awareness of threats with the potential to metastasize from the online world into the offline one, and that are worthy of greater rigor for that reason alone.

Cover photo by Alexander Popov

