How fact-checking works

UPDATED 7 APR 2025
In the coming months, we will end the current third-party fact-checking programme in the United States and begin moving to a community-based programme called Community Notes. We are starting by rolling out Community Notes in the US and will continue to improve it over the course of the year before expanding to other countries.
Today, in the rest of the world, we rely on fact-checkers who are independent from Meta and certified through the non-partisan International Fact-Checking Network (IFCN) or, in Europe, the European Fact-Checking Standards Network (EFCSN) to address misinformation on Facebook, Instagram and Threads. While fact-checkers focus on the legitimacy and accuracy of information, we focus on taking action by informing people when content has been rated.
Here's how it works outside of the United States.

Identifying misinformation
In many countries, our technology can detect posts that are likely to be misinformation based on various signals, including how people are responding. It also considers whether people on Facebook, Instagram and Threads flag a piece of content as "false information", as well as comments on posts that express disbelief. Fact-checkers can also identify content to review on their own.
Content predicted to be misinformation may be temporarily shown lower in Feed before it is reviewed.

Reviewing content
Fact-checkers will review a piece of content and rate its accuracy. This process occurs independently from Meta and may include calling sources, consulting public data, authenticating images and videos and more.
The ratings that fact-checkers can use are False, Altered, Partly false, Missing context, Satire and True. These ratings are fully defined here.
The actions that we take based on these ratings are described below. Content rated as False or Altered is the most inaccurate, so it draws our strongest actions, with lighter actions for Partly false and Missing context. Content rated as Satire or True won't have labels or restrictions.
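The tiered relationship between ratings and actions can be pictured as a simple lookup. The sketch below is purely illustrative: the rating names come from this page, but the action names and tiers are hypothetical, not Meta's implementation.

```python
# Illustrative mapping from fact-check ratings to enforcement tiers.
# Rating names are from the article; "label" and "demote" values are
# hypothetical labels for the behaviour described, not a real API.

RATING_ACTIONS = {
    "False":           {"label": True,  "demote": "strong"},
    "Altered":         {"label": True,  "demote": "strong"},
    "Partly false":    {"label": True,  "demote": "moderate"},
    "Missing context": {"label": True,  "demote": "none"},
    "Satire":          {"label": False, "demote": "none"},
    "True":            {"label": False, "demote": "none"},
}

def actions_for(rating: str) -> dict:
    """Return the illustrative action set for a given rating."""
    return RATING_ACTIONS[rating]
```

For example, `actions_for("Partly false")` would indicate a label plus a moderate demotion, while Satire and True trigger neither.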

Clearly labelling misinformation and informing people about it
When content has been rated by fact-checkers, we add a notice to it so that people can read additional context. Content rated Satire or True won't be labelled, but a fact-check article will be appended to the post on Facebook. We also notify people before they try to share this content or if they shared it in the past.
  • We use our technology to detect content that is the same or almost exactly the same as that rated by fact-checkers, and add notices to that content as well.
  • We generally do not add notices to content that makes a similar claim rated by fact-checkers, if the content is not identical. This is because small differences in how a claim is phrased might change whether it is true or false.
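One simple way to picture the distinction above — propagating a notice only to near-identical copies, not to merely similar claims — is a high similarity threshold. This is a sketch under assumed details only: Meta's actual matching technology is not public, and the threshold here is invented.

```python
# Hypothetical sketch: propagate a fact-check notice only when a
# candidate post is almost exactly the same as the rated content.
# The threshold is illustrative, not Meta's.
from difflib import SequenceMatcher

NEAR_IDENTICAL = 0.95  # hypothetical cut-off

def should_propagate_label(rated_text: str, candidate: str) -> bool:
    """True if candidate is the same or almost exactly the same text."""
    ratio = SequenceMatcher(None, rated_text.lower(), candidate.lower()).ratio()
    return ratio >= NEAR_IDENTICAL
```

A strict cut-off like this reflects the reasoning in the second bullet: a reworded claim falls below the threshold, because small phrasing changes might change whether the claim is true.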

Ensuring that fewer people see misinformation
Once a fact-checker has rated a piece of content as False, Altered or Partly false, or once we detect a near-identical copy, it may receive reduced distribution on Facebook, Instagram and Threads. We dramatically reduce the distribution of False and Altered posts, and reduce the distribution of Partly false posts to a lesser extent. For Missing context, we focus on surfacing more information from fact-checkers. Meta also stops suggesting content to users once a fact-checker has rated it, which significantly reduces the number of people who see it.
We also reject ads with content that has been rated by fact-checkers as False, Altered, Partly false or Missing context and we do not recommend this content.

Taking action against repeat offenders
Pages, groups, profiles, websites and Instagram accounts that repeatedly share content rated as False or Altered will face restrictions for a period of time. These include removal from the recommendations that we show people, reduced distribution, and loss of the ability to monetise, advertise and register as a news Page.
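The repeat-offender rule above can be sketched as a strike count over a time window. Everything specific here is assumed for illustration — the strike limit, the window and the function names are hypothetical, since Meta does not publish these thresholds.

```python
# Hypothetical repeat-offender sketch: an account that shares too much
# False- or Altered-rated content within a window faces restrictions.
# STRIKE_LIMIT and WINDOW are invented for illustration.
from datetime import datetime, timedelta

STRIKE_RATINGS = {"False", "Altered"}
STRIKE_LIMIT = 3                 # hypothetical
WINDOW = timedelta(days=90)      # hypothetical

def is_restricted(share_events, now):
    """share_events: list of (timestamp, rating) pairs for an account's shares.

    Returns True if the account has reached the strike limit within
    the window and should face the restrictions described above.
    """
    recent_strikes = sum(
        1 for ts, rating in share_events
        if rating in STRIKE_RATINGS and now - ts <= WINDOW
    )
    return recent_strikes >= STRIKE_LIMIT
```

Note that only False and Altered ratings count as strikes in this sketch, matching the page's statement that those two ratings drive repeat-offender penalties.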