Online hate groups are a scourge that, according to the researchers, thrives due to the formation of networked “cluster groups.” Traditional studies have focused on the individuals who make up hate groups or the ideologies they support, but this study focused on the “network of networks” that binds hate groups together regardless of their geographic location.

The results of their comprehensive study of hate groups on Facebook and VKontakte indicate that, because of these clusters, hate groups are incredibly hard to weed out.

According to a Nature review of the team’s paper, the research takes a deep dive into the nature of these clusters at the mathematical level and demonstrates how, without knowing any personal data about the members of these groups, weaknesses in the ecology of these hate groups could be exploited to eliminate them. To this end, the researchers have come up with four distinct policies (as outlined in that Nature review) that could, if executed properly, strike at the very core of what allows hate groups to thrive: their ability to exploit the rules of the platforms they exist on in order to stay one step ahead of admins and moderators.

Policy One:

Here the researchers suggest we snip out smaller groups before they absorb or combine with other groups, rather than focusing our efforts on taking down the largest groups. We’d all like to believe that if you chop off the beast’s head it will die, but that simply isn’t true. Hate groups aren’t composed of drooling sycophants marching behind their fearless leaders like fantasy orcs; they’re made up of people who generally think they’re in on a secret that the ‘normies’ don’t understand. If you remove the biggest offenders, it creates a power vacuum that sucks smaller groups towards a single point, essentially galvanizing them. Executing this policy at the platform level could look like focusing large-scale bans (of entire groups) on outlier groups that are gaining popularity, while hitting larger hate groups with myriad individual bans for specific posts.
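To make that concrete, here’s a minimal sketch of what “snip the small clusters first” could look like if the ecosystem were modeled as a simple graph, with groups as nodes and an edge wherever two groups share members or cross-post each other. This is not the researchers’ actual model or any platform’s tooling; the group names, edges, and size cutoff are all made up for illustration.

```python
# Illustrative sketch only -- not the paper's model or a real moderation API.
# Groups are nodes; an edge means two groups share members or link to each other.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    # one large, established cluster
    ("group_a", "group_b"), ("group_b", "group_c"),
    ("group_c", "group_d"), ("group_d", "group_e"),
    # two small clusters that haven't merged with anything yet
    ("group_f", "group_g"),
    ("group_h", "group_i"),
])

SMALL_CLUSTER_MAX = 3  # hypothetical cutoff for "small enough to snip early"

# Each connected component is a cluster of linked groups.
clusters = [set(c) for c in nx.connected_components(G)]

# Policy One: prioritize the small clusters before they combine,
# rather than pouring all the effort into dismantling the biggest one.
targets = [c for c in clusters if len(c) <= SMALL_CLUSTER_MAX]

for cluster in sorted(targets, key=len):
    print("review/remove candidate cluster:", cluster)  # stand-in for a real action
```

The ordering is the whole point: picking off the two-node clusters is cheap, and it keeps them from ever feeding the big one.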

Policy Two:

This one’s a bit more difficult to imagine as an action item. Conventional wisdom states that moderators should act on anything that qualifies as hate speech the moment it happens. The idea is that the sooner the speech or the person saying it is banned, the less opportunity it has to reach and radicalize others. But that’s a bit naive, right? Here, I believe the researchers could be suggesting that banning members of online hate groups at random, rather than simply conducting massive sweeps, serves as a disruptive force with a greater long-term payoff. The people invested in hate groups consider bans to be both a minor, temporary inconvenience and a badge of honor; random bans could make it harder for hate groups to rally behind banned members.
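Purely as a toy illustration of that reading (ongoing random removals versus a single mass sweep), here’s a sketch along the same lines; the roster, the sampling rate, and the ban() stand-in are all hypothetical and don’t correspond to any real platform tool.

```python
# Toy illustration only -- hypothetical roster and a print() stand-in for a ban.
import random

members = [f"user_{i}" for i in range(200)]  # made-up membership of one hate cluster

def ban(user):
    print("banned:", user)  # placeholder for whatever a platform actually does

def sweep(roster):
    """The 'massive sweep': ban everyone at once. Highly visible, easy to
    rally around, and the cluster tends to re-form somewhere else."""
    for user in list(roster):
        ban(user)
    roster.clear()

def random_attrition(roster, fraction=0.02):
    """The reading of Policy Two above: quietly remove a small random sample
    each cycle, so there's no single martyr moment to organize around."""
    batch = random.sample(roster, max(1, int(len(roster) * fraction)))
    for user in batch:
        ban(user)
        roster.remove(user)

# e.g. run the random version once per moderation cycle instead of one big purge
for week in range(4):
    random_attrition(members)
```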

Policy Three:

The current paradigm for platforms that host both hate and anti-hate groups is that AI-powered solutions can’t really tell the difference, and most companies are scared stiff that they’ll end up looking like they’re supporting a left-wing hate group against a right-wing hate group. So they tend to treat hate and anti-hate groups the same. It’d take one helluva courageous set of executives to start promoting groups that exist solely to speak out against hate groups on their own platform. This feels more like a grassroots, power-to-the-people kind of thing. But creating a mechanism for like-minded anti-hate groups to form their own clusters could have a spillover effect that counteracts the non-targeted recruitment efforts of extremist groups.

Policy Four:

Now we’re talking. We’ve all heard that you can’t fight fire with fire, but whoever coined that phrase probably never tried to explain how hate-group ideologies like “red pilling” (convincing a mark that your particular brand of extremist ideology is right and everyone else either just doesn’t get it or is in on the conspiracy) are straight out of the generic-brand cult followers’ handbook. Policy four says that the administrators of the hosting platform should create seek-and-destroy groups that target hate groups by exposing them to each other’s differing viewpoints. I liken this to undercover instigators who drive hate groups towards discussions of their differences rather than letting them huddle together in support of each other’s similarities.

It’s important to keep in mind that the researchers have done the math here. This isn’t a group of activists compiling sources; it’s actual scientists studying boatloads of data representing the exact nature of specific hate groups’ networks and how this “network of networks” forms an incredibly complex ecosystem. What their work tells us is that the online hate ecosystem is not a giant balloon filled with vitriol; we cannot poke it with the needle of truth and expect it to shrivel. Getting rid of hate groups is a matter of killing the roots beneath the surface, not just throwing away the rotten fruit.

Of course, for their part, the researchers advise caution to platforms and individuals considering deploying these policies. As Nature’s Noemi Derzy writes:
