Facebook’s relaxed moderation policy leads to surge in harmful content

Chaitanyesh
  • Harmful content on Facebook is rising amid reduced moderation
  • Content removals and fake account bans have decreased significantly
  • AI and crowd-sourced moderation are key components of Meta’s new approach

Meta’s recent decision to ease proactive moderation on Facebook has led to a rise in harmful content across its platforms, raising alarms. In its first Integrity Report since January’s policy change, Meta acknowledged an increase in violent, graphic, and harassing posts on Facebook, even as enforcement actions dropped sharply. CEO Mark Zuckerberg framed the change as a way to reduce moderation errors and allow more political expression, but the early results point to growing risks.


The prevalence of violent content on Facebook increased from 0.06-0.07% in late 2024 to 0.09% in early 2025. Bullying and harassment also ticked up slightly, driven mainly by a surge in March. At the same time, Facebook removed fewer posts for hate speech, 3.4 million, its lowest total since 2018. Fake account removals declined from 1.4 billion to 1 billion, and spam removals were nearly halved, pointing to looser enforcement overall.

Meta now focuses proactive enforcement on the most serious violations, such as child exploitation and terrorism. Discussion of sensitive topics like race and gender is policed less strictly and treated as political speech, and the definition of hate speech has been narrowed to cover only direct attacks.

Meta has also ended third-party fact-checking in the U.S., replacing it with Community Notes, a crowd-sourced system that faces questions about its reliability and bias.

Despite the rise in harmful content, Meta reports a 50% drop in moderation errors and says it will maintain protections for teen users. The company is increasingly relying on AI tools to manage content at scale.
