Fact-checkers are independent from Meta and certified through the non-partisan International Fact-Checking Network (IFCN) or, in Europe, the European Fact-Checking Standards Network (EFCSN). We work with them to address misinformation on Facebook, Instagram, and Threads. While fact-checkers focus on the legitimacy and accuracy of information, we focus on taking action by informing people when content has been rated. Here’s how it works.
In many countries, our technology can detect posts that are likely to be misinformation based on various signals, including how people are responding and how fast the content is spreading. It also takes into account whether people on Facebook, Instagram, and Threads flag a piece of content as “false information,” as well as comments on posts that express disbelief. Fact-checkers also identify content to review on their own.
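The signals above could be combined into a simple prioritization score. The sketch below is purely illustrative: the signal names, caps, and weights are invented for this example, and a production system would use learned models rather than a fixed weighted sum.

```python
# Hypothetical sketch: combine engagement signals to prioritize content
# for fact-checker review. All names, caps, and weights are illustrative.
def misinformation_score(share_velocity, false_info_flags, disbelief_comments):
    """Return a score in [0, 1]; higher means higher review priority."""
    # Normalize each signal to [0, 1] with simple caps (invented values).
    velocity = min(share_velocity / 1000.0, 1.0)    # shares per hour
    flags = min(false_info_flags / 50.0, 1.0)       # "false information" reports
    comments = min(disbelief_comments / 100.0, 1.0) # comments expressing disbelief
    # Weighted combination; a real system would learn these weights.
    return 0.5 * velocity + 0.3 * flags + 0.2 * comments

# A fast-spreading, heavily flagged post outranks a quiet one.
assert misinformation_score(2000, 60, 10) > misinformation_score(10, 1, 0)
```

The key property is that no single signal dominates: rapid spread alone raises priority, but user flags and disbelief comments compound it.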
Content predicted to be misinformation may be temporarily shown lower in Feed before it is reviewed.
Fact-checkers review a piece of content and rate its accuracy. This process occurs independently from Meta and may include calling sources, consulting public data, authenticating images and videos, and more.
The ratings fact-checkers can use are False, Altered, Partly False, Missing Context, Satire, and True. These ratings are fully defined here.
The actions we take based on these ratings are described below. Content rated False or Altered is the most inaccurate and therefore draws our most aggressive actions, with lesser actions for content rated Partly False or Missing Context. Content rated Satire or True won’t have labels or restrictions.
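The rating-to-action relationship described above and in the sections that follow can be summarized as a lookup table. This is a paraphrase of the policy for illustration only, not Meta's actual implementation; the field names and values are invented.

```python
# Illustrative mapping from fact-check ratings to enforcement actions,
# paraphrasing the policy text. Not Meta's actual system or data.
ACTIONS = {
    "False":           {"label": True,  "demotion": "strong",   "reject_ads": True},
    "Altered":         {"label": True,  "demotion": "strong",   "reject_ads": True},
    "Partly False":    {"label": True,  "demotion": "moderate", "reject_ads": True},
    "Missing Context": {"label": True,  "demotion": "none",     "reject_ads": True},
    "Satire":          {"label": False, "demotion": "none",     "reject_ads": False},
    "True":            {"label": False, "demotion": "none",     "reject_ads": False},
}

def actions_for(rating):
    """Look up the enforcement actions for a fact-check rating."""
    return ACTIONS[rating]
```

Note the asymmetry: Missing Context content is labeled and barred from ads but not demoted (the focus is surfacing more information), while Satire and True carry no labels or restrictions at all.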
When content has been rated by fact-checkers, we add a notice to it so people can read additional context. Content rated Satire or True won’t be labeled, but a fact-check article will be appended to the post on Facebook. We also notify people before they try to share this content, or if they shared it in the past.
We use our technology to detect content that is the same or almost exactly the same as that rated by fact-checkers, and add notices to that content as well.
We generally do not add notices to content that makes a similar claim rated by fact-checkers, if the content is not identical. This is because small differences in how a claim is phrased might change whether it is true or false.
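The distinction between "near-identical" and merely "similar" content can be sketched with a similarity threshold. Real systems use media and text fingerprinting; the sketch below uses Python's standard-library `difflib` and an invented threshold purely to illustrate why labels propagate only to close copies.

```python
# Hypothetical sketch: propagate a fact-check label only to near-identical
# text. The threshold and matching method are illustrative, not Meta's.
from difflib import SequenceMatcher

NEAR_IDENTICAL = 0.97  # invented threshold

def should_propagate_label(rated_text, candidate_text):
    """True when the candidate is close enough to the rated content."""
    ratio = SequenceMatcher(None, rated_text.lower(),
                            candidate_text.lower()).ratio()
    return ratio >= NEAR_IDENTICAL

rated = "The footage in this video was filmed in a studio."
exact_copy = "The footage in this video was filmed in a studio."
similar = "Some say the footage may have been staged in a studio."

assert should_propagate_label(rated, exact_copy)
# A reworded claim is NOT labeled: small phrasing changes can flip truth value.
assert not should_propagate_label(rated, similar)
```

The conservative threshold encodes the policy's reasoning: an exact or near-exact copy inherits the rating, while a rephrased claim must be reviewed on its own.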
Once a fact-checker rates a piece of content as False, Altered, or Partly False, or we detect it as near-identical, it may receive reduced distribution on Facebook, Instagram, and Threads. We dramatically reduce the distribution of False and Altered posts, and reduce the distribution of Partly False posts to a lesser extent. For Missing Context, we focus on surfacing more information from fact-checkers. Meta does not suggest content to users once it is rated by a fact-checker, which significantly reduces the number of people who see it.
We also reject ads with content that has been rated by fact-checkers as False, Altered, Partly False, or Missing Context and we do not recommend this content.
Pages, Groups, Profiles, websites, and Instagram accounts that repeatedly share content rated False or Altered will be placed under restrictions for a period of time. This includes removing them from the recommendations we show people, reducing their distribution, removing their ability to monetize and advertise, and removing their ability to register as a news Page.
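The repeat-sharer policy above resembles a strike system. The sketch below is a guess at the shape of such a mechanism: the strike limit, duration, and function names are all invented, and the actual thresholds are not published in this document.

```python
# Hypothetical strike-counting sketch for repeat sharers of False/Altered
# content. Threshold and duration are invented; the restriction list
# paraphrases the policy text above.
from collections import defaultdict

STRIKE_LIMIT = 3      # invented threshold
RESTRICTED_DAYS = 90  # invented duration

strikes = defaultdict(int)

def record_strike(account):
    """Count a False/Altered share; return True once restrictions apply."""
    strikes[account] += 1
    return strikes[account] >= STRIKE_LIMIT

def restrictions():
    """Restrictions paraphrased from the policy."""
    return ["removed from recommendations", "reduced distribution",
            "no monetization or advertising", "no news Page registration"]
```

Under these invented numbers, an account crosses the limit on its third strike and stays restricted for the configured window.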