Identifying misinformation
In many countries, our technology can detect posts that are likely to be misinformation based on various signals, including how people respond to them. These signals include whether people on Facebook, Instagram and Threads flag a piece of content as "false information" and whether comments on a post express disbelief. Fact-checkers can also identify content to review on their own.
Content predicted to be misinformation may be temporarily shown lower in Feed before it is reviewed.
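The process above can be sketched in a few lines. Meta's actual signals, scoring rule and demotion factor are not public, so the function names, weights and thresholds below are invented for illustration only:

```python
# Illustrative sketch only: the article says detection combines signals such
# as "false information" flags and disbelief comments, but the scoring rule,
# weights and threshold here are assumptions, not Meta's implementation.
def predicted_misinformation(false_info_flags: int,
                             disbelief_comments: int,
                             total_comments: int) -> bool:
    """Combine user signals into a hypothetical misinformation score."""
    disbelief_rate = disbelief_comments / total_comments if total_comments else 0.0
    score = false_info_flags * 0.6 + disbelief_rate * 100 * 0.4
    return score >= 10.0  # assumed threshold

def feed_rank(base_rank: float, post_signals: dict) -> float:
    """Posts predicted to be misinformation may be temporarily shown lower
    in Feed while they await fact-checker review."""
    if predicted_misinformation(**post_signals):
        return base_rank * 0.5  # assumed temporary demotion factor
    return base_rank
```

For example, a post with many "false information" flags and a high share of disbelieving comments would be ranked lower until a fact-checker reviews it.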
Reviewing content
Fact-checkers review a piece of content and rate its accuracy. This process occurs independently of Meta and may include calling sources, consulting public data, authenticating images and videos, and more.
The ratings that fact-checkers can use are False, Altered, Partly false, Missing context, Satire and True. These ratings are fully defined here, and the actions that we take based on them are described below. Content rated False or Altered is the most seriously inaccurate and therefore triggers our strongest actions, with lesser actions for Partly false and Missing context. Content rated Satire or True won't have labels or restrictions.
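The mapping from ratings to actions can be summarised in a short sketch. The rating names come from the article, but the `Rating` enum, the `actions_for` function and the action labels are hypothetical, not Meta's actual systems:

```python
# Illustrative sketch: the six rating names are from the article above; the
# enum, function and action vocabulary are assumptions made for this example.
from enum import Enum

class Rating(Enum):
    FALSE = "False"
    ALTERED = "Altered"
    PARTLY_FALSE = "Partly false"
    MISSING_CONTEXT = "Missing context"
    SATIRE = "Satire"
    TRUE = "True"

def actions_for(rating: Rating) -> dict:
    """Map a fact-check rating to the enforcement the article describes."""
    if rating in (Rating.FALSE, Rating.ALTERED):
        # Strongest actions for the most inaccurate content
        return {"label": True, "demotion": "strong", "ads_rejected": True}
    if rating is Rating.PARTLY_FALSE:
        return {"label": True, "demotion": "moderate", "ads_rejected": True}
    if rating is Rating.MISSING_CONTEXT:
        # Focus is on surfacing more information from fact-checkers
        return {"label": True, "demotion": "surface context", "ads_rejected": True}
    # Satire and True: no label or restrictions
    return {"label": False, "demotion": None, "ads_rejected": False}
```

A caller would look up `actions_for(Rating.PARTLY_FALSE)` and apply the returned actions to the post.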
Clearly labelling misinformation and informing people about it
When content has been rated by fact-checkers, we add a notice to it so that people can read additional context. Content rated Satire or True won't be labelled, but a fact-check article will be appended to the post on Facebook. We also notify people before they try to share this content, and if they have shared it in the past.
- We use our technology to detect content that is the same or almost exactly the same as that rated by fact-checkers, and add notices to that content as well.
- We generally do not add notices to content that makes a similar claim rated by fact-checkers, if the content is not identical. This is because small differences in how a claim is phrased might change whether it is true or false.
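The distinction the two bullets draw is between near-identical copies, which inherit the notice, and merely similar claims, which do not. How Meta actually matches content is not public; a minimal sketch, assuming a simple text-similarity threshold set deliberately high, might look like this:

```python
# Hypothetical sketch of "same or almost exactly the same" matching.
# Meta's real matching technology is not public; the normalisation step
# and the threshold are assumptions made for this example.
from difflib import SequenceMatcher

NEAR_IDENTICAL = 0.95  # assumed threshold, kept high on purpose so that
                       # merely similar claims (whose small wording changes
                       # may change their truth value) are NOT matched

def normalise(text: str) -> str:
    """Lowercase and collapse whitespace before comparing."""
    return " ".join(text.lower().split())

def is_near_identical(candidate: str, rated: str) -> bool:
    """True only for copies that are the same or almost exactly the same
    as content already rated by fact-checkers."""
    ratio = SequenceMatcher(None, normalise(candidate), normalise(rated)).ratio()
    return ratio >= NEAR_IDENTICAL
```

Under these assumptions, a repost that differs only in capitalisation or punctuation would match, while a reworded version of the claim would fall below the threshold and be left for fact-checkers to review on its own.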
Ensuring that fewer people see misinformation
Once a fact-checker has rated a piece of content as False, Altered or Partly false, or we detect it as near-identical to rated content, it may receive reduced distribution on Facebook, Instagram and Threads. We dramatically reduce the distribution of False and Altered posts, and reduce the distribution of Partly false posts to a lesser extent. For Missing context, we focus on surfacing more information from fact-checkers. Meta also stops recommending content once it has been rated by a fact-checker, which significantly reduces the number of people who see it.
We also reject ads with content that fact-checkers have rated False, Altered, Partly false or Missing context.
Taking action against repeat offenders
Pages, groups, profiles, websites and Instagram accounts that repeatedly share content rated False or Altered will face restrictions for a set period of time. These include removal from the recommendations that we show people, reduced distribution, loss of the ability to monetise and advertise, and loss of the ability to register as a news Page.
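The repeat-offender policy amounts to counting strikes per account and applying the listed restrictions once a threshold is crossed. The threshold, the class design and the restriction names below are assumptions for illustration; the article does not publish Meta's actual values:

```python
# Hypothetical sketch of repeat-offender tracking. The strike threshold and
# restriction names are assumptions; only the rule that False and Altered
# ratings count, and the kinds of restrictions, come from the article.
from collections import defaultdict

STRIKE_THRESHOLD = 3  # assumed number of strikes before restrictions apply
RESTRICTIONS = ("no_recommendations", "reduced_distribution",
                "no_monetisation", "no_advertising", "no_news_page")

class RepeatOffenderTracker:
    def __init__(self):
        self.strikes = defaultdict(int)

    def record_rating(self, account_id: str, rating: str) -> tuple:
        """Record a fact-check rating against an account and return any
        restrictions now in force for it."""
        # Only False and Altered ratings count towards repeat-offender status.
        if rating in ("False", "Altered"):
            self.strikes[account_id] += 1
        if self.strikes[account_id] >= STRIKE_THRESHOLD:
            return RESTRICTIONS
        return ()
```

In this sketch, an account's third False or Altered rating within the tracked period triggers the full set of restrictions, while Partly false ratings accumulate no strikes.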