By Adam G. Klein
Fear, more than hate, feeds online bigotry and real-world violence, according to Adam G. Klein.
When a U.S. senator asked Facebook CEO Mark Zuckerberg, “Can you define hate speech?” he was raising arguably the most important question social networks face: how to identify extremism inside their communities.
Hate crimes in the 21st century follow a familiar pattern in which an online tirade escalates into violent action. Before opening fire in the Tree of Life synagogue in Pittsburgh, the accused gunman had vented on the far-right social network Gab about Honduran migrants traveling toward the U.S. border, and the alleged Jewish conspiracy behind it all. Then he declared, “I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” The pattern of extremists unloading their intolerance online has been a disturbing feature of some recent hate crimes. But most online hate isn’t that flagrant, or that easy to spot.
As I found in my 2017 study of extremism in social networks and political blogs, most online hate looks less like overt bigotry and more like fear. It’s not expressed in racial slurs or calls for confrontation, but rather in unfounded allegations of Hispanic invaders pouring into the country, of black-on-white crime, or of Sharia law infiltrating American cities. Hysterical narratives such as these have become the preferred vehicle for today’s extremists, and they may be more effective at provoking real-world violence than stereotypical hate speech.
The ease of spreading fear
On Twitter, a popular meme traveling around recently depicts the “Islamic Terrorist Network” spread across a map of the United States, while a Facebook account called “America Under Attack” shares an article with its 17,000 followers about the “Angry Young Men and Gangbangers” marching toward the border. And on Gab, countless profiles talk of Jewish plans to sabotage American culture, sovereignty and the president.
While not overtly antagonistic, these notes play well to an audience that has found in social media a place where they can express their intolerance openly, as long as they color within the lines. They can avoid the exposure that traditional hate speech attracts. Whereas the white nationalist gathering in Charlottesville was high-profile and revealing, social networks can be anonymous and discreet, and therefore liberating for the undeclared racist. That presents a stark challenge to platforms like Facebook, Twitter and YouTube.
Of course, this is not just a challenge for social media companies. The public at large is facing the complex question of how to respond to inflammatory, prejudiced narratives that are stoking racial fear and subsequent hostility. But social networks have a unique capacity to turn down the volume on intolerance if they determine that a user has breached their terms of service. For instance, in April 2018, Facebook removed two pages associated with white nationalist Richard Spencer. A few months later, Twitter suspended several accounts associated with the far-right group the Proud Boys for violating its policy “prohibiting violent extremist groups.”
Still, some critics argue that the networks are not moving fast enough. Pressure is mounting for these websites to police the extremism that has flourished in their spaces, or else be policed themselves. A recent HuffPost/YouGov survey found that two-thirds of Americans wanted social networks to prevent users from posting “hate speech or racist content.”
In response, Facebook has stepped up its anti-extremism efforts, reporting in May that it had removed “2.5 million pieces of hate speech,” over a third of which was identified using artificial intelligence, with the rest flagged by human monitors or by users. But even as Zuckerberg promised more action in November 2018, the company acknowledged that teaching its technology to identify hate speech is extremely difficult, because of all the contexts and nuances that can drastically alter a message’s meaning.
Moreover, public consensus about what actually constitutes hate speech is ambiguous at best. The libertarian Cato Institute found broad disagreement among Americans about what kind of speech should qualify as hate speech, offensive speech, or fair criticism. These discrepancies raise the obvious question: How can an algorithm identify hate speech if we humans can barely define it ourselves?
Fear lights the fuse
The ambiguity of what constitutes hate speech provides ample cover for modern extremists to infuse cultural anxieties into popular networks. Therein lies perhaps the clearest danger: Priming people’s racial paranoia can be extremely powerful at spurring hostility.
The late communication scholar George Gerbner found that, contrary to popular belief, heavy exposure to media violence did not make people more violent. Rather, it made them more fearful of others doing violence to them, which often leads to corrosive distrust and cultural resentment. That’s precisely what today’s racists are tapping into, and what social networks must learn to spot.
The posts that speak of Jewish plots to destroy America, or black-on-white crime, are not directly calling for violence, but they are amplifying prejudiced views that can inflame followers to act. That’s precisely what happened in advance of the deadly assaults at a historic black church in Charleston in 2015, and the Pittsburgh synagogue last month.
For social networks, the challenge is twofold. They must first decide whether to continue hosting non-violent racists like Richard Spencer, who has called for “peaceful ethnic cleansing” and remains active on Twitter, or, for that matter, Nation of Islam leader Louis Farrakhan, who recently compared Jews to termites and continues to post to his Facebook page.
When Twitter and Facebook let these profiles remain active, the companies lend the credibility of their online communities to these provocateurs of racism or anti-Semitism. But they also signal that their definitions of hate may be too narrow.
The most dangerous hate speech is apparently no longer broadcast in ethnic slurs or delusional rhetoric about white supremacy. Rather, it’s all over social media, in plain sight, carrying hashtags like #WhiteGenocide, #BlackCrimes, #MigrantInvasion and #AmericaUnderAttack. These hashtags create the illusion of imminent threat that radicals thrive on, and to which the violence-inclined among them have responded.
This article was originally published on The Conversation and has been republished under a Creative Commons license.
Adam G. Klein is an Assistant Professor of Communication Studies at Pace University.
Disclaimer: The ideas expressed in this article reflect the author’s views and not necessarily the views of The Big Q.