After Facebook banned the Ku Klux Klan, a cluster of Ukrainian users continued to post on VKontakte, a Russian social media and networking site. When the Ukrainian government banned VKontakte, that cluster moved back to Facebook, where they referred to the KKK in Cyrillic, making it difficult for Facebook’s English-language detection algorithm to catch their posts and allowing them to spread hate to other users.
Researchers outlined this phenomenon in a study published Wednesday in Nature. In it, they observe that hate spreads online like a diseased flea, jumping from one body to the next.
A team of physicists and computer scientists from George Washington University and the University of Miami observed this hopping behavior by tracking the movement of right-wing hate clusters that originate on Facebook and VKontakte and then cross the boundaries set by internet platforms to applications like Instagram, Snapchat, and WhatsApp. They saw that the users — unrestrained by geographic boundaries — are better equipped to regroup when banned from any one location.
They describe these interconnected hate clusters as “global hate highways.”
“The analogy is no matter how much weed killer you place in a yard, the problem will come back potentially more aggressively,” said first author Neil Johnson, Ph.D., a professor of physics at GWU. “In the online world, all yards in the neighborhood are interconnected in a highly complex way — almost like wormholes.”
“This is why individual social media platforms like Facebook need new analysis such as ours to figure out new approaches to push them ahead of the curve,” he added.
Currently, the strategies used to stop online hate include a “microscopic approach,” which identifies individual “bad” users and bans them, and a “macroscopic approach” that involves banning entire ideologies. The latter often results in allegations of stifling free speech and, accordingly, is difficult to enact.
The team's mathematical mapping model showed that both of these policing techniques can actually make matters worse. That's because hate clusters thrive globally not at the micro or macro scale but at the meso scale: clusters interconnect to form networks across platforms, countries, and languages, and can quickly regroup or reshape after a single user is banned or after a group is banned from a single platform. They self-organize around a common interest and come together to remove trolls, bots, and adverse opinions.
They can also recruit new members through subtle yet frequent messaging. The researchers write:
For example, neo-Nazi clusters with membership drawn from the United Kingdom, Canada, United States, Australia, and New Zealand feature material about English football, Brexit, and skinhead imagery while also promoting black music genres. So although the hate may be pure, the rationale given is not, which suggests that this online ecology acts like a global fly-trap that can quickly capture new recruits from any platform, country, and language, particularly if they do not yet have a clear focus for their hate.
A better way to curb the spread of hate, the researchers posit, would involve randomly banning a small fraction of individuals across platforms, which is more likely to cause global clusters to disconnect. They also advise platforms to send organized groups of anti-hate advocates into hate-filled spaces, along with individual users who can influence others to question their stance.
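The random-ban idea the researchers describe behaves like node percolation on a network. The toy sketch below is not the authors' model — the graph size, link count, and ban fractions are invented for illustration — but it shows the underlying intuition: randomly removing even a modest fraction of nodes from a sparse cluster network shrinks and eventually fragments its largest connected component.

```python
import random

def largest_component(nodes, edges):
    """Return the size of the largest connected component (depth-first search)."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:  # skip edges touching banned nodes
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            n = stack.pop()
            size += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        best = max(best, size)
    return best

random.seed(42)
N = 1000
nodes = list(range(N))
# Hypothetical sparse network of hate clusters: ~3 random links per cluster.
edges = [(random.randrange(N), random.randrange(N)) for _ in range(3 * N)]

sizes = {}
for ban_fraction in (0.0, 0.1, 0.3):
    kept = [n for n in nodes if random.random() >= ban_fraction]
    sizes[ban_fraction] = largest_component(kept, edges)
print(sizes)  # largest component shrinks as the ban fraction grows
```

With no bans, nearly all clusters sit in one giant connected component; as the random ban fraction rises, the surviving component gets smaller, which is the disconnection effect the researchers are after.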
The goal is to prevent hate-filled online pits that radicalize individuals like the Christchurch shooter, an Australian who attacked mosques in New Zealand, covered his guns with the names of other violent white supremacists and citations of ancient European victories, and posted a 74-page racist manifesto on the website 8chan.
“Social media platforms seem to be losing the battle against online hate and urgently need new insights,” the researchers explain. “Our analysis of online clusters does not require any information about individuals, just as information about a specific molecule of water is not required to describe the bubbles that form in boiling water.”
They add:

Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature. We provide quantitative assessments for the effects of each intervention. This policy matrix also offers a tool for tackling a broader class of illicit online behaviors, such as financial fraud.