Thursday, 27 February 2025
Meta Issues Apology After Instagram Glitch Floods Feeds with Graphic Content

Meta has apologized for a technical error that caused Instagram users worldwide to see a flood of violent and graphic videos in their Reels feeds. The glitch caused disturbing content, including footage of shootings and accidents, to appear even for users who had Instagram’s “Sensitive Content Control” setting enabled.

“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake,” a Meta spokesperson said. The company has not disclosed the cause of the issue or the number of users affected.

The surge of graphic content sparked outrage on social media, with users sharing their frustration over the disturbing videos appearing in their feeds. Some reported seeing content from accounts with names such as “BlackPeopleBeingHurt” and “PeopleDyingHub,” despite not following them. View counts on these posts skyrocketed, suggesting Instagram’s recommendation algorithm had significantly boosted their reach.

The incident comes at a time when Meta is adjusting its content moderation approach. The company has been scaling back its automated systems, shifting its focus to “high-severity violations” like terrorism and child exploitation while relying more on user reports for less severe issues. This change followed criticism that Meta’s AI-driven moderation was overly aggressive in removing posts.

Meta’s policies prohibit violent and graphic content unless it serves to raise awareness about issues like human rights abuses or conflict. However, some users claimed that the posts they encountered had no such context. Others noted that while some of the videos carried warning labels, many did not.

Critics argue that the company’s new policies, along with layoffs in its trust and safety teams, may have contributed to this failure. Meta has cut thousands of jobs in recent years, reducing the number of staff responsible for content moderation. Some believe these cuts have weakened the company’s ability to detect and prevent harmful content from spreading.

Despite the company’s insistence that the error was unrelated to its evolving moderation policies, many remain skeptical. Some see the incident as part of a larger trend of Meta struggling to balance free expression with user safety, a challenge that has led to controversies in the past.

The timing of the glitch has also raised eyebrows, coming shortly after CEO Mark Zuckerberg announced plans to change Meta’s approach to fact-checking and content recommendations, with human fact-checkers being replaced by user-led moderation in the U.S.

While the issue has now been fixed, the incident highlights the ongoing difficulties of content moderation on social media platforms. As Meta continues to tweak its policies, it remains to be seen whether these changes will improve user experience or lead to further problems.
