2024-028-IG-MR, 2024-029-TH-MR, 2024-030-FB-MR
Today, May 21, 2024, the Oversight Board selected a case bundle referred by Meta regarding three pieces of text content posted across Threads, Facebook, and Instagram. These posts make criminal allegations against groups on the basis of nationality, which we consider to be a protected characteristic under our Hate Speech policy.
Meta referred the case bundle to the Oversight Board after the issue of how to treat criminal comparisons on the basis of nationality emerged as one of the most challenging questions during an ongoing policy development process.
The first piece of content is a Thread posted as a reply that states, “Genocide….. all Israelis are criminals.” The second piece of content is a Facebook post that says “Americans” and “Russians” are “criminals.” The third piece of content is a comment on an Instagram post that says, “All Indians are rapist[s].”
Meta determined that all three pieces of content violated our Hate Speech policy, as laid out in the Instagram Community Guidelines and Facebook Community Standards. We therefore removed all three pieces of content.
Under our Hate Speech policy, we remove content that targets people based on their protected characteristics or immigration status with Hate Speech attacks in the form of dehumanizing speech that compares them to criminals. National origin and ethnicity are both protected characteristic groups. Moreover, in these three pieces of content, Hate Speech attacks were made against people of a given nation, not toward that nation itself.
Meta referred this case bundle to the board because we found it significant and difficult: it creates tension between our values of safety and voice.
While we believe the lines our policies articulate around unqualified behavioral statements in the context of hate speech attacks are in the right place to cover most circumstances, we also recognize that there are situations – particularly in times of crisis and conflict – where criminal allegations directed towards people of a given nationality may be interpreted as attacking a nation’s policies, its government, or its military rather than its people.
While the submission does not constitute a request for a formal Policy Advisory Opinion, Meta submitted two possible options to the board as part of the case, outlined below, for them to consider alongside any other options they determine may be appropriate:
Option 1: create an escalation-only framework to differentiate between attacks based on national origin as opposed to attacks targeting a concept.
Option 2: exempt nationality (or certain specific subsets, such as “soldier subsets”) from criminal comparison attacks (or a subset of attacks).
We also welcome the Oversight Board’s guidance on wider questions surrounding our policies and enforcement that the case bundle raises.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today, September 25, 2024, on this case bundle. The Board upheld Meta’s decisions to remove the content from Threads and Instagram; Meta had previously removed the content in both cases.
The Board overturned Meta’s decision to remove the content from Facebook. Meta will comply with the Board's decision and reinstate the content to Facebook within 7 days.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
Meta should amend its Hate Speech Community Standard, adding the section marked as “new” below. The amended Hate Speech Community Standard would then include the following or other substantially similar language to that effect:
“Do not post
Tier 1
Content targeting a person or group of people (including all groups except those who are considered non-protected groups described as having carried out violent crimes or sexual offenses or representing less than half of a group) on the basis of their aforementioned protected characteristic(s) or immigration status in written or visual form with dehumanizing speech in the form of comparisons to or generalizations about criminals:
Sexual Predators
Violent Criminals
Other Criminals
[NEW] Except when the actors (e.g., police, military, army, soldiers, government, state officials) and/or crimes (e.g., atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court) imply a reference to state rather than targeting people based on nationality.”
The Board will consider this recommendation implemented when Meta updates the public-facing Hate Speech Community Standard and shares the updated specific guidance with its reviewers.
Our commitment: We will explore policy options to allow dehumanizing speech (e.g., comparisons to criminals) in content where the crime or the actors signal it is a reference to a state rather than people. We will provide updates on the status of this policy development in future reports to the Board.
Considerations: Our Hate Speech policy aims to prevent speech that may contribute to an environment of intimidation and exclusion, or that in some cases may promote offline violence. This policy protects against speech so that individuals don’t feel attacked on the basis of who they are, which is why we remove attacks on people based on their protected characteristics such as ethnicity and national origin. This includes removing allegations that someone, in relation to their protected characteristics, is a sexual predator, violent criminal, or other criminal. As noted in this case from the Oversight Board, this would include removing claims like “All Americans are criminals” or “All Russians are criminals,” as these attack people on the basis of national origin.
However, in the case of attacks based on national origin, we also recognize that sometimes speech may be intended to be a critique of a state rather than an attack on people based on their national origin and that this may require additional context to enforce. We are also aligned with the Board’s observation that sometimes people may use this type of language against proxies for states, governments and/or their policies, such as police, military, army, soldiers, government and other state officials and the Board’s perspective that our Hate Speech policy should more clearly delineate this distinction.
As the Board notes in their decision, adjustments to our policies that are intended to address the nuance between attacks on a state rather than people come with enforcement challenges at scale. We will consult with internal and external experts to consider the tradeoffs between expression and safety in these circumstances and align on any potential categorical changes. Following this, we may connect with our enforcement teams to evaluate how best to apply those changes at scale. As an alternative, we may consider improvements to our context-specific guidance to be applied upon escalation.
We will provide updates in future biannual reports to the Oversight Board on the status of any policy development related to our Hate Speech policy approach to criminal allegations.
To improve transparency around Meta’s enforcement, Meta should share the results of the internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech policy with the public. It should provide the results in a way that allows these assessments to be compared across languages and/or regions.
The Board will consider this recommendation implemented when Meta includes the accuracy assessment results as described in the recommendation in its Transparency Center and in the Community Standards Enforcement Reports.
Our commitment: In our Community Standards Enforcement Report (CSER) we currently share data on the amount of violating Hate Speech content we detect and remove. We will continue sharing this data in CSER and will confidentially share data with the Board on the accuracy of our enforcement on content under our Hate Speech policy by both human review and automated enforcement systems on a global scale.
Considerations: Ensuring that our policies are accurately enforced is a company priority. For that reason, we continuously monitor and assess the accuracy of our enforcement measures. As one step, we periodically review samples of violating content to determine whether our human or automated review systems took the correct actions. These reviews help us assess the performance of our enforcement systems and signal when improvements are needed. Over time—after learning from thousands of human decisions—the technology becomes more accurate.
We will continue to share data on the amount of Hate Speech content addressed by our detection and enforcement mechanisms in the Community Standards Enforcement Report (CSER). In sharing the results of our enforcement accuracy assessment with the Board confidentially, we will assess our ability to include a breakdown by language and region.
To produce CSER, we monitor enforcement accuracy of our policies at the global level against a minimum threshold. If our enforcement accuracy rates fall below this threshold, we begin targeted investigations, which may include more granular assessments at the regional or language-specific level, to identify specific areas for improvement. We are continuing to invest resources in this model, as it allows for targeted adjustments when necessary. We will provide updates on our progress in future reports to the Board and will consider opportunities for additional transparency in the future.
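As a purely hypothetical illustration (the function name, threshold value, and data below are assumptions, not Meta's actual tooling or metrics), the threshold-triggered escalation described above can be sketched as:

```python
# Hypothetical sketch: global accuracy is checked against a minimum
# threshold; only when it falls below does a more granular,
# region-level assessment run to flag areas for improvement.
# All names and numbers here are illustrative assumptions.

GLOBAL_ACCURACY_THRESHOLD = 0.90  # assumed minimum acceptable rate


def review_enforcement(global_accuracy, accuracy_by_region):
    """Return regions needing targeted investigation.

    If the global rate meets the threshold, no investigation is
    triggered; otherwise, drill down to region-level rates.
    """
    if global_accuracy >= GLOBAL_ACCURACY_THRESHOLD:
        return []  # global rate acceptable; no targeted investigation
    # Granular assessment: flag regions below the threshold
    return [region for region, acc in accuracy_by_region.items()
            if acc < GLOBAL_ACCURACY_THRESHOLD]


flagged = review_enforcement(
    global_accuracy=0.88,
    accuracy_by_region={"region_a": 0.95, "region_b": 0.84},
)
print(flagged)  # ['region_b']
```

The design point is that granular (and more costly) per-region review only runs when the cheap global check fails, matching the targeted-investigation model described above.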