2023-023-FB-UA
Today, the Oversight Board selected a case appealed by a Facebook user regarding a photo of a curtain, resembling the Trans Pride flag, hanging over a window. The photo has a text overlay that includes the words “New technology. Curtains which hang themselves” and is accompanied by a caption with the words “spring cleaning” and a heart symbol.
Upon initial review, Meta left this content up. On further review, however, we determined that the content did in fact violate our Hate Speech policy, as laid out in the Facebook Community Standards, and had been left up in error. We therefore removed the content.
Meta prohibits content containing “direct attack[s] against people – rather than concepts or institutions – on the basis of what we call protected characteristics” such as sex or gender identity. Hate speech attacks include “content targeting a person or group of people on the basis of their aforementioned protected characteristics… with violent speech.” Violent speech includes “statements advocating or supporting death, disease, or harm.”
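To illustrate the structure of that test, the sketch below is a hypothetical Python model, not Meta’s actual enforcement code; all names are invented. It shows how the two quoted conditions (a target defined by a protected characteristic rather than a concept, and a violent-speech attack) combine into a single violation decision:

```python
from dataclasses import dataclass

# Hypothetical subset of protected characteristics named in the policy text above.
PROTECTED_CHARACTERISTICS = {"sex", "gender identity", "sexual orientation"}

# "Violent speech" covers statements advocating or supporting death,
# disease, or harm.
VIOLENT_SPEECH = "violent_speech"

@dataclass
class ContentAssessment:
    targets_people: bool            # people, rather than concepts or institutions
    targeted_characteristic: str    # the basis on which the target is defined
    attack_type: str                # e.g. "violent_speech"

def violates_hate_speech_policy(assessment: ContentAssessment) -> bool:
    """Both conditions must hold: a target defined by a protected
    characteristic AND a violent-speech attack."""
    return (
        assessment.targets_people
        and assessment.targeted_characteristic in PROTECTED_CHARACTERISTICS
        and assessment.attack_type == VIOLENT_SPEECH
    )

# The case above: the Trans Pride flag identifies a group by gender identity,
# and "curtains which hang themselves" advocates death, i.e. violent speech.
case = ContentAssessment(
    targets_people=True,
    targeted_characteristic="gender identity",
    attack_type=VIOLENT_SPEECH,
)
assert violates_hate_speech_policy(case)
```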
We will implement the Board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the Board’s website for the decision once it is issued.
We welcome the Oversight Board’s decision today on this case. The Board overturned Meta’s original decision to leave this content up. Meta previously removed this content from Facebook.
When it is technically and operationally possible to do so, we will also take action on content that is identical and in the same context.
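As a rough illustration of what “identical and in the same context” could mean operationally, here is a hypothetical sketch; Meta’s actual matching systems are not described here, and the hashing scheme and all names are assumptions. It fingerprints removed content by the exact media bytes plus a normalized caption, so only truly identical posts in the same context are matched:

```python
import hashlib

# Fingerprints of content already removed for policy violations
# (hypothetical store; a production system would be far more involved).
removed_fingerprints: set[str] = set()

def content_fingerprint(media_bytes: bytes, caption: str) -> str:
    """Exact-match fingerprint of the media plus its normalized caption.
    Hashing the caption together with the media means a match requires
    both identical content and the same surrounding context."""
    digest = hashlib.sha256()
    digest.update(media_bytes)
    digest.update(caption.strip().lower().encode("utf-8"))
    return digest.hexdigest()

def record_removal(media_bytes: bytes, caption: str) -> None:
    removed_fingerprints.add(content_fingerprint(media_bytes, caption))

def matches_removed_content(media_bytes: bytes, caption: str) -> bool:
    return content_fingerprint(media_bytes, caption) in removed_fingerprints

# A re-post of the same photo with the same caption matches...
record_removal(b"<photo bytes>", "spring cleaning")
assert matches_removed_content(b"<photo bytes>", "Spring cleaning")
# ...but the same photo in a different context does not.
assert not matches_removed_content(b"<photo bytes>", "my new curtains")
```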
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
Meta's Suicide and Self-injury policy page should clarify that the policy forbids content that promotes or encourages suicide aimed at an identifiable group of people.
The Board will consider this implemented when the public-facing language of the Suicide and Self-injury Community Standard reflects the proposed change.
Our commitment: We are in the process of updating our Hate Speech Community Standards across languages to make clear that we do not allow calls for the self-injury or suicide of a person or group of people based on their protected characteristics. While this update falls within the Hate Speech policy rather than the Suicide, Self-Injury, and Eating Disorders policy, which already explicitly prohibits the celebration or promotion of self-injury or suicide, we believe it is consistent with the spirit of the Board’s recommendation.
Considerations: In December 2023, we updated our Hate Speech policy to reflect in our Community Standards that we remove content for violating our Hate Speech policy when that content calls for self-injury or suicide. Specifically, we do not allow any content that targets a person or group of people based on their protected characteristics with “calls for action or statements of intent to inflict, aspirational or conditional statements about, or statements advocating for or supporting harm,” which includes “calls for self-injury and suicide.” Under our Hate Speech policy, protected characteristics include sexual orientation and gender identity. Moreover, our Suicide, Self-Injury, and Eating Disorders policy explicitly prohibits content that promotes or encourages suicide. This includes both content that promotes or encourages suicide generally and content that promotes or encourages suicide directed toward individuals or groups of people.
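To make the overlap between the two policies concrete, here is a toy sketch (hypothetical, not Meta’s review logic) of the division of coverage described above: general promotion of suicide falls under the Suicide, Self-Injury, and Eating Disorders policy, while calls for suicide aimed at a protected-characteristic group additionally violate the Hate Speech policy:

```python
def policies_violated(promotes_or_encourages_suicide: bool,
                      directed_at_protected_group: bool) -> list[str]:
    """Toy routing of the overlap described above: the SSI policy covers
    suicide promotion generally; the Hate Speech policy additionally
    covers calls for suicide aimed at a protected-characteristic group."""
    violated = []
    if promotes_or_encourages_suicide:
        violated.append("Suicide, Self-Injury, and Eating Disorders")
        if directed_at_protected_group:
            violated.append("Hate Speech")
    return violated

# General promotion of suicide violates only the SSI policy...
assert policies_violated(True, False) == ["Suicide, Self-Injury, and Eating Disorders"]
# ...while promotion aimed at a protected group violates both.
assert "Hate Speech" in policies_violated(True, True)
```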
Both our Hate Speech and our Suicide, Self-Injury, and Eating Disorders policies have been developed and continue to be updated in consultation with external subject matter experts.
At this time, we are rolling out the policy changes across languages in our Community Standards and expect to have the updated Hate Speech language reflected across our Community Standards soon. This guidance will also be reflected in our internal processes for how content is reviewed.
Meta's internal guidance for at-scale reviewers should be modified to ensure that flag-based visual depictions of gender identity that do not contain a human figure are understood as representations of a group defined by the gender identity of its members. This modification would clarify instructions for enforcement of this form of content at scale, whenever it contains a violating attack.
The Board will consider this implemented when Meta provides the Board with the changes to its internal guidance.
Our commitment: We are assessing Hate Speech policy guidance related to “visual depictions” of protected characteristics more generally, and will consider ways to incorporate this recommendation into that work.
Considerations: Our Hate Speech policy allows speech about concepts, but not attacks on people based on their protected characteristics. Currently, as the Board notes, we consider a number of factors to determine whether a protected characteristic is referenced in content, including whether gender identity is depicted by visual signals, which can include flags accompanied by other indicators. Additionally, as part of training internal reviewers, we specifically train them on how to better identify indicators of visual depictions of protected-characteristic groups, including LGBTQ+ people.
We previously conducted policy development to align on which content may be interpreted as an attack on “people” as opposed to an attack on “concepts,” which led to the implementation of context-specific policy guidance. Our Hate Speech policy allows attacks on concepts or institutions generally because of the importance of allowing legitimate speech. It also recognizes that attacks on people on the basis of protected characteristics such as national origin, race, ethnicity, disability, religious affiliation, caste, sexual orientation, sex, gender identity, or serious disease can create an environment of intimidation, so we remove content that targets a person or group of people with a hate speech attack based on protected characteristics. When there are signals that content attacking concepts, institutions, ideas, practices, or beliefs associated with a protected characteristic could contribute to discrimination, intimidation, or imminent physical harm, we may also remove that content on escalation.
More generally, we are in the process of reviewing some of our policy guidance related to visual depictions and plan to conduct policy development related to some forms of visual depiction indicators. We will assess ways to include this feedback and recommendation in that workstream. That said, we also acknowledge that, at scale, defining and enforcing solely on visual elements like flags, without certain additional contextual clues, as representations of a person or group with protected characteristics could lead to the removal of legitimate speech. We will continue to provide updates on the upcoming policy work related to visual depiction indicators in future Oversight Board reports.
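As a purely illustrative sketch of the trade-off described above (hypothetical signal names, not the internal reviewer guidance), requiring a corroborating contextual indicator alongside a flag is one way to distinguish displaying a flag as a concept from using it to represent a group of people:

```python
# Hypothetical signal-combination sketch: a flag alone is treated as a
# depiction of a *concept*; a flag plus a corroborating contextual
# indicator is treated as representing *people*.

FLAG_SIGNALS = {"trans_pride_flag", "rainbow_flag"}
CONTEXT_INDICATORS = {"references_to_persons", "targeting_caption", "human_figure"}

def depicts_protected_group(signals: set[str]) -> bool:
    has_flag = bool(signals & FLAG_SIGNALS)
    corroborating = signals & CONTEXT_INDICATORS
    # Requiring at least one additional indicator avoids removing
    # legitimate speech that merely displays a flag.
    return has_flag and len(corroborating) >= 1

assert not depicts_protected_group({"rainbow_flag"})  # concept only
assert depicts_protected_group({"trans_pride_flag", "targeting_caption"})
```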