Meta Ends Fact-Checking on Facebook and Instagram


Facebook is one of the most popular social media platforms in the world, connecting billions of people across countries and cultures. Launched in 2004 by Mark Zuckerberg, Facebook has evolved into a powerful tool for communication, content sharing, business promotion, and community building. From personal profiles and pages to groups, events, and live video, Facebook offers a wide range of features that keep users engaged and informed. Whether you’re using it for social networking or digital marketing, Facebook remains a central part of the online experience.




What Meta Announced

On January 7, 2025, Meta announced it is ending its third-party fact-checking program, a system in place since 2016 that partnered with independent organizations such as AP, Snopes, PolitiFact, and Agence France-Presse to review, label, and demote false or misleading content. In its place, Meta will roll out a crowdsourced “Community Notes” feature, similar to the system on X (formerly Twitter), across Facebook, Instagram, and Threads, first in the U.S. and then globally.



Why Meta Made the Change

Meta’s stated reasons:

  • To emphasize free speech and reduce perceived censorship and bias, especially on politically charged topics.
  • Zuckerberg and other executives argue the fact-checking model introduced too many errors, citing estimates that up to 20% of moderation decisions were mistaken.
  • A broader policy shift that deprioritizes moderation of anything short of illegal or extreme violations (terrorism, fraud, and other legal wrongdoing) while relaxing restrictions on mainstream civic debates around immigration, gender identity, and more.
  • Additionally, Meta is relocating its Trust & Safety team from California to Texas, a move it frames as addressing concerns about cultural and political bias.

How “Community Notes” Will Work

  • Inspired by Elon Musk’s X, Community Notes invites users to add contextual notes to posts they deem misleading. A note must be rated helpful by a cross-ideological group of contributors before it goes public (see the sketch after this list).
  • Meta plans a staged rollout over several months, with a subtler notification system, less aggressive labels, and minimal feed demotion.
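To make the mechanism concrete, here is a minimal sketch of the kind of cross-ideological agreement rule described above. It is illustrative only: the rater data, cluster labels, and thresholds are all hypothetical, and Meta has not published its actual algorithm; the sketch is loosely modeled on the “bridging” idea behind X’s Community Notes.

```python
from collections import defaultdict

# Hypothetical rater data: (rater_id, cluster, rating), where cluster is a
# coarse ideological grouping inferred from past rating behavior, and
# rating is 1 (helpful) or 0 (not helpful).
ratings = [
    ("u1", "A", 1), ("u2", "A", 1), ("u3", "B", 1),
    ("u4", "B", 1), ("u5", "B", 0),
]

def note_goes_public(ratings, min_raters=4, threshold=0.6):
    """Publish a note only if raters from *different* clusters agree.

    A toy version of bridging-based ranking: each cluster's helpfulness
    rate is computed separately, and the note needs broad support in
    every cluster that weighed in, not just a high raw vote count.
    """
    by_cluster = defaultdict(list)
    for _, cluster, rating in ratings:
        by_cluster[cluster].append(rating)
    if len(ratings) < min_raters or len(by_cluster) < 2:
        return False  # too few raters, or only one ideological group rated
    return all(sum(r) / len(r) >= threshold for r in by_cluster.values())

print(note_goes_public(ratings))  # True: both clusters mostly found it helpful
```

The design choice worth noting is that raw vote totals never decide publication on their own: a note that is popular inside only one cluster stays hidden, which is what distinguishes this from ordinary upvoting.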

Criticisms and Concerns

Misinformation Risks

  • Fact-checkers like Angie Drobnic Holan (IFCN) warn this removes a “speed‑bump” against false content—previously shown to curb hoaxes and conspiracies.
  • Experts from the Union of Concerned Scientists, Friends of the Earth, and others highlight risks of science, climate, health, and gender-related misinformation.
  • Harvard’s Joan Donovan points out that consensus-based, user-driven moderation is slow and burdensome, hardly a replacement for professional oversight.

Effects on Vulnerable Communities

  • Removal of hate-speech safeguards could lead to more online harassment of LGBTQ+, immigrants, and other protected groups.
  • Groups such as the Human Rights Campaign have voiced alarm, warning of offline consequences from increased digital hostility.

Political and Financial Pressures

  • Critics see the timing as aligned with Trump’s anticipated 2025 return; some of Meta’s moves, such as new appointments (Joel Kaplan, Dana White) and the moderation changes, are viewed as attempts to appease conservatives.
  • NPR reports Meta will maintain fact-checking outside the U.S. (e.g., Australia) through 2025, suggesting a politically calculated, phased approach.

Broader Implications

  • This marks a major shift in content policy—platforms are moving moderation burdens onto users—ahead of evolving regulations like Europe’s Digital Services Act.
  • Debate continues over whether Community Notes, without professional fact-checkers, can maintain factual fidelity.
  • Many fact-checking organizations rely heavily on Meta’s contracts, which accounted for nearly 45% of their revenue in 2023; the loss threatens the viability of dozens of these outlets.
  • Community Notes have shown mixed results: effective in some domains (e.g. COVID‑19) but inconsistent and unevenly displayed.
  • Overall, the move raises an urgent question: can decentralized crowd moderation really match professional fact-checking?

What Users Need to Know

  • You will stop seeing fact-check flags manually applied by professionals.
  • Users must actively engage with Community Notes or verify claims independently, especially for content that evokes strong emotion.
  • Posts on immigration, gender, politics, etc., may reappear in your feed with minimal moderation.
  • Content that violates the law or Meta’s high-severity policies (e.g. terrorism, CSAM) will still be removed.

In Summary

Meta’s pivot from professional fact-checking to a user-driven Community Notes model represents a dramatic redefinition of content responsibility, prioritizing free speech and decentralized trust over authoritative interventions. While supporters call it empowering and less biased, critics warn it risks opening the floodgates to misinformation, hate speech, and politically motivated manipulation.


What This Means for Misinformation During Elections

With major elections scheduled around the world in 2025, Meta’s decision comes at a pivotal time. Experts warn that:

  • Election misinformation—including fake claims about voting procedures, ballot fraud, and fabricated candidate quotes—could now spread more easily without authoritative checks.
  • In previous cycles (2020 U.S., 2019 India), Meta’s partnerships with fact-checkers helped reduce virality of false claims by 80–95%, according to IFCN data.
  • The new Community Notes system may not act fast enough to correct viral falsehoods in real time, especially when harmful content spreads within minutes.

Critics argue this decision undermines voter trust and could further polarize political discourse.


Can Crowdsourcing Replace Experts?

Meta’s approach follows a broader Silicon Valley philosophy: “The crowd is smarter than the expert.” But this idea faces scrutiny:

  • Speed & expertise: Fact-checkers work with scientists, historians, and journalists. Users—even well-meaning ones—often lack the domain expertise or tools to verify claims about vaccines, law, or geopolitics.
  • Coordination & manipulation: Bad actors (like troll farms or political bots) may game the Community Notes system, upvoting misleading notes to push false narratives.
  • Polarization risks: Notes that require “cross-ideological” agreement might be too mild or vague to be effective, especially on hot-button issues like climate, gender, and immigration.

A Stanford study (2024) found that while Community Notes helped correct obvious hoaxes, they were far less successful in moderating complex or partisan claims.


How Fact-Checking Actually Worked at Meta Before

Under the old system:

  • Meta worked with over 90 certified fact-checkers in more than 40 languages and countries.
  • When a fact-checker rated a post as “false” or “misleading”, its reach in the feed was reduced significantly (by ~80%) and a warning label appeared (a sketch of this rule follows the list).
  • Pages repeatedly flagged could be demonetized or suspended.
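For illustration, here is a minimal sketch of how a rating-driven demotion like the one described above could be wired into feed ranking. All names and types here are assumptions made for the example; only the ~80% reach reduction and the warning label come from the description above, not from Meta’s actual code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    id: str
    base_score: float              # feed-ranking score before any penalty
    verdict: Optional[str] = None  # fact-checker rating, e.g. "false"

# Ratings that trigger demotion, mapped to the share of reach retained.
# 0.2 mirrors the ~80% reach reduction described above (hypothetical value).
DEMOTION = {"false": 0.2, "misleading": 0.2}

def rank_score(post: Post) -> float:
    """Scale down the ranking score of posts flagged by fact-checkers."""
    return post.base_score * DEMOTION.get(post.verdict, 1.0)

def warning_label(post: Post) -> Optional[str]:
    """Return the label shown on flagged posts, or None if unflagged."""
    if post.verdict in DEMOTION:
        return f"Rated {post.verdict} by independent fact-checkers"
    return None

flagged = Post("p1", base_score=100.0, verdict="false")
print(rank_score(flagged))     # 20.0 -> roughly 80% less reach
print(warning_label(flagged))  # Rated false by independent fact-checkers
```

A separate strike counter, tracking how often a page is flagged, could then feed the demonetization or suspension step mentioned in the last bullet.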

This system helped Meta earn praise from WHO, UNESCO, and Reuters Institute for reducing misinformation during COVID-19 and election seasons.

Now, with that infrastructure gone, responsibility shifts entirely to the public—who may not always be prepared for it.


Global Implications

Outside the U.S.

Meta says the fact-checking program will continue in select countries until late 2025, but no long-term commitment exists. Regions with weaker information ecosystems—such as Southeast Asia, Sub-Saharan Africa, and parts of Latin America—face heightened risks.

Fake news in these areas has previously triggered mob violence, vaccine refusal, and electoral chaos, especially in regions where WhatsApp and Facebook are primary news sources.

Human Rights Warnings

Several international organizations—including Human Rights Watch and UN rapporteurs—have urged Meta to reconsider its strategy. They argue:

  • Platforms with billions of users can’t rely solely on voluntary, community-driven moderation.
  • The burden of truth shouldn’t fall on marginalized users, who often lack the tools or safety to push back against propaganda or hate speech.

Inside Meta: What Employees Are Saying

Internal leaks and whistleblowers have revealed:

  • Internal pushback from Meta’s remaining Trust & Safety teams, many of whom view this change as a “corporate retreat” from responsibility.
  • Some long-serving fact-checking liaisons were laid off or reassigned before the public announcement.
  • A few engineers reportedly expressed concern on internal forums that AI content moderation alone is not ready to fill the gap.

One internal message read:

“This isn’t decentralization—it’s abdication.”


How to Protect Yourself as a User

As Meta shifts moderation responsibilities onto users, here’s how you can stay informed:

  • Cross-check claims before sharing—use sites like Snopes, AP Fact Check, or BBC Verify.
  • Follow reputable sources directly rather than relying on viral posts or influencers.
  • Learn to spot manipulation tactics: emotionally charged language, fake screenshots, and AI-generated images.
  • Contribute to Community Notes if you’re eligible—but understand that it’s not a silver bullet.

Meta’s move to end fact-checking marks a watershed moment in the history of social media. For better or worse, it signals the end of an era when platforms bore at least partial responsibility for verifying the content users consumed.

Whether this results in more open, honest dialogue—or opens the floodgates to chaos—depends on how actively and critically users choose to participate.

In a world where misinformation can influence elections, health decisions, and social conflict, the stakes couldn’t be higher.


