2023-022-IG-UA
Today, the Oversight Board selected a case appealed by an Instagram user regarding a photo containing a list of false claims about the Holocaust that, taken together, imply Holocaust denial. These included speculation about whether Allied leaders mentioned the Holocaust and the false suggestion that the infrastructure for carrying out the Holocaust was not built until after the war.
Upon initial review, Meta left this content up. However, upon further review, we determined that the content did in fact violate our Hate Speech policy, as laid out in the Facebook Community Standards and Instagram Community Guidelines, and had been left up in error. We therefore removed the content.
Meta removes content that contains hate speech, including “harmful stereotypes linked to intimidation, exclusion, or violence on the basis of a protected characteristic” such as Holocaust denial. Holocaust denial includes content that “denies, calls into doubt, or minimizes the fact that the Holocaust happened, the number of victims, or the mechanisms of destruction used.”
We will implement the Board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the Board’s website for the decision when it is issued.
We welcome the Oversight Board’s decision today on this case. The Board overturned Meta’s original decision to leave this content up. Meta had previously removed this content.
When it is technically and operationally possible to do so, we will also take action on content that is identical and made in the same context.
After conducting a review of any recommendations provided by the Board, we will update this post with initial responses to those recommendations.
To ensure that the Holocaust denial policy is accurately enforced, Meta should take the technical steps to ensure that it is sufficiently and systematically measuring the accuracy of its enforcement of Holocaust denial content. This includes gathering more granular details about its enforcement of this content, as Meta has done in implementing the Mention of the Taliban in News Reporting recommendation no. 5.
The Board will consider this recommendation implemented when Meta provides the Board with its first analysis of enforcement accuracy of Holocaust denial content.
Our commitment: We will continue to measure the accuracy of our enforcement on content that contains harmful stereotypes, which includes Holocaust denial content. We will also conduct an analysis of the accuracy of our enforcement on Holocaust denial content and share this analysis directly with the Board.
Considerations: Ensuring that our policies are accurately enforced is a major priority for our company. For that reason, we continuously monitor and assess the accuracy of our enforcement measures. For Holocaust denial content, we measure our enforcement as part of our harmful stereotypes policy. Our Community Standards categorize Holocaust denial content as a Tier 1 violation under the Hate Speech policy, falling under “harmful stereotypes linked to intimidation, exclusion, or violence on the basis of a protected characteristic.” We remove content containing these harmful stereotypes and routinely measure the accuracy of this enforcement. We apply the same label to all harmful stereotypes because it improves the performance of our classifiers.
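As a rough illustration of what this kind of accuracy measurement can look like, the sketch below computes removal precision and recall for a single policy label from a human-audited sample of enforcement decisions. This is not Meta's actual pipeline; the data structure and all field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AuditedDecision:
    # One enforcement decision re-reviewed by a human auditor.
    # All field names are hypothetical, for illustration only.
    policy_label: str              # e.g. "harmful_stereotypes"
    enforced_removal: bool         # whether the content was removed
    auditor_found_violation: bool  # the auditor's ground-truth judgment

def enforcement_metrics(sample: list[AuditedDecision], label: str) -> dict:
    """Compute removal precision and recall for one policy label
    over a human-audited sample of enforcement decisions."""
    decisions = [d for d in sample if d.policy_label == label]
    tp = sum(d.enforced_removal and d.auditor_found_violation for d in decisions)
    fp = sum(d.enforced_removal and not d.auditor_found_violation for d in decisions)
    fn = sum(not d.enforced_removal and d.auditor_found_violation for d in decisions)
    return {
        "n": len(decisions),
        "precision": tp / (tp + fp) if (tp + fp) else None,  # removals that were correct
        "recall": tp / (tp + fn) if (tp + fn) else None,     # violations that were caught
    }
```

Because a single shared label covers all harmful stereotypes, metrics computed this way describe the whole label; measuring Holocaust denial specifically requires tagging that subset, which is the more granular work the recommendation asks for.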
In response to the Board’s request, our teams will pull data from select relevant markets to analyze the prevalence and accuracy of our enforcement on Holocaust denial content. Because this data is collected and analyzed by market, several regional analyses are more feasible than a single global review, so our aim is to select representative markets for the analysis. This analysis requires extensive work by a number of internal teams, including data validation and legal and privacy review. We will endeavor to complete this analysis as quickly as possible, and will provide updates on this process as it progresses.
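For context on why the work proceeds market by market, prevalence-style metrics are typically estimated from a labeled sample of content views within each market. A minimal sketch, with hypothetical market names and numbers:

```python
import math

def prevalence_with_ci(violating_views: int, sampled_views: int, z: float = 1.96):
    """Estimate prevalence (share of sampled content views that violate)
    with a normal-approximation 95% confidence interval."""
    p = violating_views / sampled_views
    half_width = z * math.sqrt(p * (1 - p) / sampled_views)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# Hypothetical per-market audit samples: market -> (violating views, sampled views)
samples = {"market_a": (12, 50_000), "market_b": (4, 30_000)}
for market, (violating, total) in samples.items():
    p, (low, high) = prevalence_with_ci(violating, total)
    print(f"{market}: prevalence {p:.4%} (95% CI {low:.4%} to {high:.4%})")
```

Each market's estimate stands on its own sample, which is why a handful of representative regional analyses can be completed well before a global one.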
We are working with relevant teams to align on the parameters for analyzing this subset of Holocaust denial content and will provide more granular details about enforcement of this content directly to the Board. We will provide an update on our progress in future Oversight Board updates.
To provide greater transparency that Meta’s appeals capacity is restored to pre-pandemic levels, Meta should publicly confirm whether it has fully ended all COVID-19 automation policies put in place during the COVID-19 pandemic.
The Board will consider this recommendation implemented when Meta publishes information publicly on each COVID-19 automation policy and when each was ended or will end.
The Oversight Board also reiterates the importance of its previous recommendations calling for alignment of the Instagram Community Guidelines and Facebook Community Standards, noting the relevance of these recommendations to the issue of Holocaust denial (recommendations no. 7 and 9 from the Breast Cancer Symptoms and Nudity case; recommendation no. 10 from the Öcalan’s Isolation case; recommendation no. 1 from the Ayahuasca Brew case; and recommendation no. 9 from the Sharing Private Residential Information policy advisory opinion). In line with those recommendations, Meta should continue to communicate delays in aligning these rules, and it should implement any short-term solutions to bring clarity to Instagram users.
Our commitment: We no longer apply automation to address the limited review capacity that resulted from the COVID-19 pandemic. We continue to rely on automated systems as an important tool for content moderation at scale; however, those systems are unrelated to early pandemic constraints.
Considerations: Our Transparency Center details how our content review systems use technology to rank incoming content so that our review teams can prioritize it in order of importance.
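A minimal sketch of this style of review prioritization is a priority queue keyed on a combined score. The scoring factors shown here (predicted violation likelihood, severity, virality) and all names are illustrative assumptions, not Meta's actual ranking features:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewJob:
    sort_key: float = field(init=False)  # negated so the min-heap pops highest first
    violation_score: float               # model-estimated likelihood of violating
    severity: float                      # weight for potential real-world harm
    virality: float                      # weight for how quickly content is spreading
    content_id: str = field(compare=False)

    def __post_init__(self):
        self.sort_key = -(self.violation_score * self.severity * self.virality)

review_queue: list[ReviewJob] = []
heapq.heappush(review_queue, ReviewJob(0.92, 3.0, 1.5, "post_1"))
heapq.heappush(review_queue, ReviewJob(0.40, 1.0, 1.0, "post_2"))

next_job = heapq.heappop(review_queue)
print(next_job.content_id)  # "post_1": the highest-scored item is reviewed first
```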
During the pandemic, we introduced temporary COVID-19-specific automation to address reduced human reviewer capacity, which included auto-closing certain appeal jobs that were not prioritized for review. The configuration of this automation has since changed; however, we retained the legacy COVID-19 label internally because it was already built into our systems and would have been technically difficult to change. We are working with internal teams to explore the feasibility of renaming this label to avoid confusion about its purpose going forward. Our responses to the Board’s questions in this case could have been clearer on this point. To clarify previous responses to the Board: the label is internal-only, and we no longer show COVID-19-related messaging to users when their appeals are actioned through this technology. Instead, they receive a message explaining that this is a standard decision made by our technology, operating as intended.
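To make the distinction concrete, the sketch below shows how a legacy internal label can persist in a system while being fully decoupled from what users see. Everything here, including the label string and the function, is hypothetical; it illustrates the pattern, not Meta's systems:

```python
# Legacy label kept for compatibility with existing pipelines and metrics;
# the name no longer reflects why an appeal is auto-closed. (Hypothetical.)
LEGACY_AUTOCLOSE_LABEL = "covid19_capacity_autoclose"

# User-facing messaging is resolved independently of the internal label,
# so the label's name never changes what users see.
STANDARD_AUTOCLOSE_MESSAGE = "This is a standard decision made by our technology."

def close_appeal(appeal_id: str) -> dict:
    """Auto-close an appeal, recording the internal label for
    measurement while returning only the standard user message."""
    return {
        "appeal_id": appeal_id,
        "internal_label": LEGACY_AUTOCLOSE_LABEL,  # never shown to users
        "user_message": STANDARD_AUTOCLOSE_MESSAGE,
    }
```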
In response to the Punjabi concern over the RSS in India case in 2021, we noted our efforts to restore human review to pre-pandemic levels while better prioritizing human review of appeals of our content moderation decisions. We’ve since further improved our technology to better prioritize human review of appeals where necessary. This combination of technology and human review enables us to consistently optimize our capacity for reviewing appeals. We will continue to consider how to adjust our internal labels to more accurately reflect our automated enforcement processes and will detail our progress in future Oversight Board updates.