JUL 29, 2024
On December 6, 2022, the Oversight Board published its policy advisory opinion (PAO) on Meta's cross-check policies. Given the number of recommendations, we've agreed with the board to review and respond within 90 days. To learn more about the cross-check system, visit our Transparency Center post.
Overview of Cross-Check
Facebook and Instagram users create billions of pieces of content each day. Moderating content at this scale presents challenges, including tradeoffs between important values and goals. We seek to quickly review potentially violating content, and remove it if it violates our policies. But we must balance this goal against the risk of “false positives” (erroneous removal of non-violating content) to protect users' voice. (Here, we refer to the “removal” of content, which we are using to describe integrity actions more generally. These can also include, for example, the use of warning screens or removal of pages.)
To balance these considerations, Meta implemented the cross-check system, which identifies content that presents a greater risk of false positives and provides additional levels of review to mitigate that risk. Cross-check applies to certain content that our internal systems flag as violating (via automation or human review), with the goal of preventing or minimizing the highest-risk false-positive moderation errors that might otherwise occur, for example where nuance or context is needed to reach the right decision. (Here, we refer to "content" that is reviewed through our cross-check system; we also use cross-check to review other actions, such as removing a page or profile.) While cross-check provides additional levels of review, reviewers apply the same Community Standards that apply to all other content on Facebook. (Cross-check also applies to Instagram. Where we reference "Community Standards" on this page, it includes the Instagram Community Guidelines as well.)
The cross-check system plays a crucial role in helping to protect human rights. For instance, cross-check covers entities and posts from journalists reporting from conflict zones and community leaders raising awareness of instances of hate or violence. Cross-check reviews take into account the context needed to action this content correctly. Cross-check reviews may also apply to civic entities, where users have a heightened interest in seeing what their leaders are saying.
In addition, cross-check serves an important role in managing Meta’s relationships with many of our business partners. Incorrectly removing content posted by a page or profile with a large following, for instance, can result in negative experiences for both Meta’s business partners and the significant number of users who follow them. We also apply cross-check to some very large Groups, where an error can impact hundreds of thousands or millions of users. Cross-check does not exempt Meta’s business partners or Groups from our content policies, but it does sometimes provide additional levels of review to ensure those policies are applied accurately.
Facebook and Instagram users post billions of pieces of content each day. Even with thousands of dedicated reviewers around the world, it is not possible to manually review every piece of content that potentially violates our Community Standards. The vast majority of violating content that we remove is proactively detected by our technology before anyone reports it. When someone posts on Facebook or Instagram, our technology checks to see if the content may violate the Community Standards. In many cases, identification is a simple matter. The post either clearly violates our policies or it doesn’t. But in other cases, the content is escalated to a human reviewer for further evaluation.
Our primary review systems use technology to prioritize high-severity content, which includes "viral" content that spreads quickly. When the systems flag content for escalation, our reviewers make difficult and often nuanced judgment calls about whether content should remain on the platform. While we always aim to make the right decisions, we recognize that false positives do occur and some content is set for removal for violating Meta's policies when it actually does not violate them. Meta has therefore invested in mistake prevention to identify and mitigate these false positives. Cross-check is one of these mistake-prevention strategies.
Cross-check is a system used to help ensure that enforcement decisions are made accurately and with additional levels of human review. If during cross-check a reviewer confirms that content violates our Community Standards, we enforce those policies and address the violating content accordingly. Depending on the complexity of the content, we may apply multiple levels of review, including in rare instances review by leadership. If the final reviewer determines that the content at issue does not violate our Community Standards, the reviewer can “overturn” the initial action and leave the content on the platform.
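To make the mechanics concrete, here is a minimal sketch of that escalating review flow, assuming a simplified model in which each review level returns a violating/non-violating judgment and the final level's decision controls. The type alias, function names, and data structures below are hypothetical illustrations, not Meta's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# A review level takes the content and returns True if the reviewer
# judges it to violate the Community Standards (hypothetical model).
ReviewLevel = Callable[[str], bool]

@dataclass
class CrossCheckOutcome:
    violating: bool    # final determination
    overturned: bool   # True if the final review reversed the initial action

def cross_check_review(content: str,
                       initially_flagged_violating: bool,
                       levels: List[ReviewLevel]) -> CrossCheckOutcome:
    """Apply successive review levels; the last reviewer's decision is final."""
    decision = initially_flagged_violating
    for review in levels:
        decision = review(content)
    return CrossCheckOutcome(
        violating=decision,
        overturned=initially_flagged_violating and not decision,
    )
```

In this toy model, a non-violating final decision "overturns" the initial action and the content stays on the platform, mirroring the description above.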
Historical Cross-Check Practices
We first implemented the system now known as cross-check in 2013. The details of the system have evolved over the years and, where possible, we have provided dates and date ranges explaining when these changes occurred.
To determine what content or entities received cross-check review, our teams identified and compiled lists of users or entities perceived to carry a higher risk of false-positive actions against them. "False positive risk" refers to the risk of incorrect enforcement against content or entities that do not actually violate our Community Standards. To determine which users and entities were associated with a higher false positive risk, our teams applied a variety of criteria, including the type of user or entity (e.g., an elected official, journalist, significant business partner, human rights organization), the number of followers, and the subject matter of the entity. (Entity is a general term for where content could originate or appear, such as a user account, page, or group.)
When users or entities identified on those lists posted content or took actions that our systems flagged as potentially violating our policies, we would add the content or entity to a queue for cross-check review.
Beginning in 2020, we made changes so that most content in the queue was prioritized using a risk framework, which assigned a level of false-positive risk that could result if Meta incorrectly removed that content. This risk framework generally relied on three factors: (1) the sensitivity of the entity, (2) the severity of the alleged violation, and (3) the severity of the potential enforcement action.
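As an illustration only, the sketch below shows one way the three published factors could be combined into a prioritization score. The field names, 0-to-1 scales, equal weighting, and function names are assumptions made for the example; Meta has not published the actual framework.

```python
from dataclasses import dataclass

@dataclass
class QueuedItem:
    entity_sensitivity: float     # (1) sensitivity of the entity (assumed 0-1)
    violation_severity: float     # (2) severity of the alleged violation (assumed 0-1)
    enforcement_severity: float   # (3) severity of the potential enforcement action (assumed 0-1)

def false_positive_risk(item: QueuedItem) -> float:
    """Toy score: average of the three factors (equal weights assumed)."""
    return (item.entity_sensitivity
            + item.violation_severity
            + item.enforcement_severity) / 3.0

def prioritize_queue(queue: list[QueuedItem]) -> list[QueuedItem]:
    # Items with the highest estimated false-positive risk are reviewed first.
    return sorted(queue, key=false_positive_risk, reverse=True)
```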
Current Cross-Check Practices
As with all of our policies and processes, we continually look for ways to improve, and we are constantly making changes. Earlier this year, we identified additional opportunities to improve the cross-check system. One structural change is that the cross-check system is now made up of two components: General Secondary Review (GSR) and Sensitive Entity Secondary Review (SSR). We will continue to use the list-based approach described above to include a percentage of certain users and entities in SSR. With GSR, we are in the process of ensuring that content from all users and entities on Facebook and Instagram is eligible for cross-check review, based on a dynamic prioritization system called the "cross-check ranker."
GSR involves contract reviewers and people from our regions team who perform a secondary review of content and entities that may violate our policies before an enforcement action is taken. This review does not rely solely on the identity of a user or entity to determine what content receives cross-check review. The cross-check ranker ranks content based on false positive risk using criteria such as topic sensitivity (how trending/sensitive the topic is), enforcement severity (the severity of the potential enforcement action), false positive probability, predicted reach, and entity sensitivity (based largely on the compiled lists, described above). The cross-check ranker is already used for the majority of cross-check reviews today.
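For illustration, the sketch below scores content on the five criteria named above and selects the highest-scoring items for secondary review. The weights, 0-to-1 normalization, and capacity cutoff are assumptions made for the example; the real ranker's model and weighting are not public.

```python
from dataclasses import dataclass

@dataclass
class FlaggedContent:
    topic_sensitivity: float           # how trending/sensitive the topic is (assumed 0-1)
    enforcement_severity: float        # severity of the potential enforcement action (assumed 0-1)
    false_positive_probability: float  # estimated chance the initial decision is wrong (assumed 0-1)
    predicted_reach: float             # expected audience, normalized (assumed 0-1)
    entity_sensitivity: float          # based largely on the compiled lists (assumed 0-1)

# Hypothetical weights chosen only to make the example concrete.
WEIGHTS = {
    "topic_sensitivity": 0.20,
    "enforcement_severity": 0.20,
    "false_positive_probability": 0.30,
    "predicted_reach": 0.15,
    "entity_sensitivity": 0.15,
}

def ranker_score(c: FlaggedContent) -> float:
    """Weighted combination of the five criteria into one priority score."""
    return (WEIGHTS["topic_sensitivity"] * c.topic_sensitivity
            + WEIGHTS["enforcement_severity"] * c.enforcement_severity
            + WEIGHTS["false_positive_probability"] * c.false_positive_probability
            + WEIGHTS["predicted_reach"] * c.predicted_reach
            + WEIGHTS["entity_sensitivity"] * c.entity_sensitivity)

def select_for_gsr(candidates: list[FlaggedContent], daily_capacity: int) -> list[FlaggedContent]:
    # Route the highest-priority items to secondary review, up to reviewer capacity.
    return sorted(candidates, key=ranker_score, reverse=True)[:daily_capacity]
```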
SSR is similar to the legacy cross-check system. To determine which content or entities receive SSR, we continue to maintain lists of users and entities whose enforcements receive additional cross-check review if flagged as potentially violating the Community Standards. We have, however, added controls to the process of compiling and revising these lists. Prior to September 2020, most employees had the ability to add a user or entity to the cross-check list. Since September 2020, while any employee can request that a user or entity be added to cross-check lists, only a designated group of employees has the authority to make additions to the list.
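A minimal sketch of that two-step control, assuming a simple role check: the role name, data structures, and function names are hypothetical and only illustrate the request/approve split described above.

```python
# Hypothetical role identifier; the actual designated group is internal to Meta.
DESIGNATED_LIST_EDITORS = {"cross_check_list_governance"}

def request_ssr_addition(requester: str, entity_id: str,
                         pending_requests: list[tuple[str, str]]) -> None:
    """Any employee can request that an entity be added to the SSR lists."""
    pending_requests.append((requester, entity_id))

def approve_ssr_addition(approver_role: str, entity_id: str,
                         ssr_list: set[str]) -> bool:
    """Only the designated group can actually add an entity to the list."""
    if approver_role in DESIGNATED_LIST_EDITORS:
        ssr_list.add(entity_id)
        return True
    return False
```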
Governance responsibilities for the SSR list currently sit within our Global Operations organization, with support from our Legal and Partnerships teams, which contribute their specialized knowledge and experience. While the reporting structure of our Global Operations team is separate from Meta's Public Policy team, the Public Policy team is consulted for input on cross-check decisions, as it is in many areas of content moderation across the company. In these instances, our Operations team may leverage expertise from Meta's Public Policy team, in combination with our regional experts and language-agnostic specialized reviewers, to enhance local and cultural perspectives. The separate reporting structures help ensure that review is independent from political or economic influence. To maintain lists that are relevant and appropriate, we have also developed a diligent annual review process to audit entities on SSR lists for continued eligibility, which we will continue to refine over time.
In recent months, Meta has reviewed an average of several thousand cross-check jobs per day, the large majority of them completed in GSR. (Relative to the millions of pieces of content flagged and actioned daily for violating our Community Standards, this is a small proportion.) SSR now makes up the minority of these daily reviews. We anticipate that a growing share of cross-check review jobs will come from GSR prioritization through the end of 2021 and into 2022.
If a piece of content is from an individual or entity included in SSR, it is typically first reviewed by the regions team. The escalations team then reviews it to confirm whether the content is violating; in general, if the regions team finds that the content does not violate our policies, the escalations team does not review it. If a piece of content is prioritized by the cross-check ranker, contractors or the regions team typically review it, unless there is additional escalations team capacity. As with legacy cross-check, high-complexity issues may receive additional review, including in rare instances review by leadership. If the final review finds that the content violates our Community Standards, we remove it; if not, we leave it up.
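A rough sketch of that routing logic, under simplifying assumptions (one flag per piece of content, and decisions reduced to booleans); the enum values and function below are illustrative, not an actual Meta system.

```python
from enum import Enum, auto
from typing import Optional

class ReviewPath(Enum):
    REGIONS_ONLY = auto()              # SSR: regions team found no violation
    REGIONS_THEN_ESCALATIONS = auto()  # SSR: escalations team confirms the violation
    CONTRACTORS_OR_REGIONS = auto()    # GSR (ranker-prioritized) default
    ESCALATIONS = auto()               # GSR when escalations capacity is free

def route_review(on_ssr_list: bool,
                 regions_found_violating: Optional[bool] = None,
                 escalations_capacity_available: bool = False) -> ReviewPath:
    """Choose a review path following the description above (simplified)."""
    if on_ssr_list:
        # SSR content goes to the regions team first; escalations generally
        # review only if the regions team thinks the content is violating.
        if regions_found_violating:
            return ReviewPath.REGIONS_THEN_ESCALATIONS
        return ReviewPath.REGIONS_ONLY
    # Ranker-prioritized content is handled by contractors or the regions
    # team unless there is spare escalations capacity.
    if escalations_capacity_available:
        return ReviewPath.ESCALATIONS
    return ReviewPath.CONTRACTORS_OR_REGIONS
```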
As of October 16, 2021, approximately 660,000 users and entities have actions that require some form of SSR based on inclusion on the lists described above. This number changes regularly as we add users and entities to the lists, or remove them, based on evolving criteria for inclusion. Examples of users and entities eligible for SSR include, but are not limited to:
Entities related to escalation responses or high-risk events. Currently, there is an informal process in place where teams preparing for a high-risk event identify entities at high risk of over-enforcement. For instance, if a user’s controversial content is going viral (e.g., live video of police violence), we may identify that user for SSR to prevent erroneous removal.
Entities included for legal compliance purposes. We use SSR in certain instances to comply with legal or regulatory requirements.
High-visibility public figures and publishers. We identify entities for SSR because over-enforcement may result in a negative experience for a large segment of users.
Marginalized populations. We identify human rights defenders, political dissidents, and others who we believe may be targeted by state-sponsored or other adversarial harassment, brigading, or mass reporting in order to protect against these attacks.
Civic Entities. We follow objective criteria and the expertise of our in-region policy teams to identify politicians, government officials, institutions, organizations, advocacy groups, and civic influencers. We include these entities for SSR in order to prevent mistakes that would limit non-violating political speech and inadvertently impact discussion of civic topics like elections, public policy, and social issues. We aim to ensure parity across a country’s civic entities—for example, if we include a national cabinet ministry in SSR, we would include all ministries in that country’s government in SSR.
Businesses. We identify advertisers of high value, as well as those who have experienced over-enforcement, to protect revenue and build long-term trust in our platform.
We are currently reviewing how to improve the criteria for identifying entities who should receive SSR. For instance, we are exploring evolving our criteria in areas such as the number of followers, the number of previous false positive enforcements, legal/regulatory requirements, as well as important political/societal issues. Users may request that they not be included in the SSR list through this form. Meta does not confirm whether users are on or have been removed from these lists. However, we strongly believe in user autonomy and will review each request as soon as possible.
In addition to the two components of the cross-check system, we also have an internal mistake-prevention system called Dynamic Multi Review (DMR). This system enables us to send reviewed cases back for re-review to get a majority vote on a decision (e.g., if a majority of reviewers agree on the decision, the case is closed), so that we have higher confidence in its correctness. We use this system to improve the quality and accuracy of human review and mitigate the risk of incorrect decisions by adjusting the number of reviews required for a final decision based on a number of factors, such as virality, number of views, and potential to contribute to harm (e.g., potential violations of our policies on Sexual Exploitation or Dangerous Individuals and Organizations).
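For illustration, the sketch below shows one way a case could be assigned a review count based on those factors and then resolved by majority vote. The thresholds, odd review counts, and function names are assumptions for the example; they are not DMR's actual parameters.

```python
from typing import Callable

def required_reviews(virality: float, views: int, harm_potential: float) -> int:
    """Pick how many independent reviews a case needs (illustrative thresholds)."""
    if harm_potential > 0.8 or virality > 0.9 or views > 1_000_000:
        return 5
    if harm_potential > 0.5 or views > 100_000:
        return 3
    return 1   # low-risk cases keep a single review

def dynamic_multi_review(case_id: str,
                         n_reviews: int,
                         review_fn: Callable[[str], bool]) -> bool:
    """Collect n independent votes and close the case on the majority decision."""
    votes = [review_fn(case_id) for _ in range(n_reviews)]
    return sum(votes) > n_reviews / 2   # True = violating, by majority vote
```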
Future Cross-Check Transparency Interventions
In response to the Oversight Board’s December 2022 decision on the cross-check policy advisory opinion referral, we have also committed to a series of mistake-prevention transparency interventions. These interventions include:
Structured and robust engagement with our internal Human and Civil Rights teams, our Trusted Partners, and other external civil society organizations to explore ways to inform the criteria we use to identify public interest entities for cross-check lists.
Exploring a more formal cross-check list nomination process from global, regional, and local civil society groups.
Investing in quality review and training resources as we move to staff all cross-check decisions with reviewers who speak the language and have regional expertise wherever possible.
Implementing robust Service-Level Agreements (SLAs) for review decisions across our mistake-prevention systems, allowing us to optimize our current reviewer staffing model for the quickest in-language review possible.
Although we have made significant improvements to the cross-check system, we are still exploring ways to ensure that this system appropriately balances our goal of removing content that violates our Community Standards with minimizing the enforcement mistakes that have the greatest impact.