Why the Fight Against Online Extremism Keeps Failing


When we read about yet another shooting linked to online hate, or another violent network spreading across social media, the common refrain is that “social media platforms must do more.”

Indeed, my own research on online extremism and content moderation shows that while takedowns have soared on major sites over the past several years, extremists still find plenty of digital spaces to recruit, organize, and call for violence. So perhaps the question we should be asking isn’t whether platforms are doing enough in isolation, but whether we are addressing a problem that is bigger than any one site can handle.


Our approach to fighting online hate and extremism focuses too often on individual platforms—Facebook, X, YouTube, or TikTok—and too little on the fragmentation of content moderation across the internet. Historically, when governments scrutinize “Big Tech” and platforms tighten their moderation rules, extremist movements disperse to smaller or alternative platforms. Fewer rules and smaller trust-and-safety teams mean more opportunity to radicalize a dedicated audience while still testing ways to sneak back onto bigger platforms.

Recently, this has become easier, with some major platforms loosening their content moderation rules under the guise of free speech. Under Elon Musk’s ownership, for example, X (formerly Twitter) sharply reduced its trust-and-safety teams, reinstated banned extremist accounts, and relaxed enforcement of hateful content. Similarly, Meta, which owns Facebook and Instagram, ended its third-party fact-checking program and redefined its hate speech policies so that certain rhetoric once disallowed is now permitted. And because these platforms offer the widest reach, extremists not only regain access to mainstream audiences but also re-enter the cycle of radicalization, recruitment, and mobilization that smaller platforms struggle to sustain.

My research, based on extensive multi-platform datasets and case studies of actors across the ideological spectrum, shows how extremists build resilience in this uneven landscape. Their strategy is both deliberate and dynamic. They use fringe sites or encrypted messaging apps to post the most incendiary or violent material, bypassing stricter enforcement. Then they craft “toned-down” messages for mainstream platforms: hateful, perhaps, but not quite hateful enough to trigger mass takedowns. They harness the resentment of users who feel they’ve been censored on mainstream social media, turning that grievance into part of their rallying cry. This cycle thrives in the cracks of what I like to call the “inconsistent enforcement system”: an ecosystem that, inadvertently or not, allows extremists to adapt, evade bans, and rebuild across platforms.

But this piecemeal approach also means that extremist movements are never truly dismantled, only temporarily displaced. Instead of weakening these networks, it teaches them to evolve, making future enforcement even harder.


Trying to fix this with platform-by-platform crackdowns is like plugging a single hole in a bucket riddled with leaks. As soon as you patch one, water pours out through the others. That’s why we need a more ecosystem-wide approach. In certain categories—where the content is nearly universally deemed harmful, such as explicit calls for violence—more consistent moderation across multiple platforms is our best bet.

If platforms coordinate their standards (and not just in vague statements but in specific enforcement protocols), that consistency starts to remove the “arbitrage” extremists rely on. Analyses of 60 platforms show that in places where there’s real policy convergence, violent groups find fewer safe havens because they can no longer exploit enforcement gaps to maintain a presence online. When platforms apply similar rules and coordinate enforcement, extremists have fewer places to regroup and less opportunity to shift from one site to another when bans take effect.

Coordinating in this manner isn’t straightforward—content moderation raises concerns about free speech, censorship, and potential abuse by governments or private firms. Nonetheless, for the narrow slice of content that most of us agree is beyond the pale—terrorist propaganda and hate speech advocating violence—aligning standards would close many of the gaping holes.

Building robust trust-and-safety capabilities isn’t cheap or simple, especially for smaller platforms that can’t hire hundreds of moderators and legal experts. Enter a new wave of third-party initiatives aiming to fill exactly that gap: ROOST, for example, is funded by a coalition of philanthropic foundations and tech companies including Google, OpenAI, and Roblox. It provides open-source software and shared databases so that platforms, large or small, can better identify and remove extremist material known to incite real-world harm. Projects like this promise a path toward greater convergence without forcing companies to reinvent moderation from scratch.

Of course, some of the biggest barriers remain political. We still lack consensus on where to draw the line between harmful extremist speech and legitimate political expression. The topic has become deeply polarized, with different actors and stakeholders holding sharply contrasting views on what should be considered harmful. But extremist violence isn’t a partisan issue: from synagogue shootings to the livestreamed violence in Christchurch to a series of Islamist-inspired attacks linked to online radicalization, we’ve already witnessed enough atrocities to know that hate and terror thrive in the seams between platforms.

Yes, we will continue debating the boundaries of harmful content. But most Americans can agree that explicit calls for violence, hate-based harassment, and terror propaganda warrant swift and serious intervention. That shared ground is where multi-platform initiatives like ROOST, or collaborative databases led by the Global Internet Forum to Counter Terrorism, can make real headway.

Until we address the systemic incentives that enable the migration, coordination, and reemergence of extremist content across platforms, we will keep wondering, after every horrific attack: Why does this keep happening? The answer is that we have built a fragmented system—one where each platform fights its own battle, while extremists exploit the seams.

It’s time to demand not just that “Big Tech do more,” but that all online spaces commit to a more unified stance against extremism. Only then can we begin to plug the countless leaks that keep feeding digital hate.