JUN 12, 2023
2021-007-FB-UA
Today, the Oversight Board selected a case appealed by a Facebook user regarding a post that criticizes the current situation in Myanmar following the 2021 military coup d’état and suggests ways to limit financing to the military. The post proposes legal and financial consequences for companies that support the military and includes several terms identified as referring to China, possibly with profanity.
Facebook took down this content for violating our policy on hate speech, as laid out in our Facebook Community Standards. We do not allow content that uses “Profane terms or phrases with the intent to insult” targeted at people on account of their “Race, ethnicity, [or] national origin.”
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision once it is issued.
We welcome the Oversight Board’s decision today on this case. Meta has acted to comply with the board’s decision immediately, and this content has been reinstated.
In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly. For more information, please see our Newsroom post about how we implement the board’s decisions.
After conducting a review of the recommendation the board provided alongside its decision, we will update this post.
Last month, the board issued its binding decision overturning our initial decision in this case. At that time, the board also issued one non-binding recommendation, which we respond to in the table below.
On September 10, 2021, Meta responded to the board's recommendation for this case.
Meta should ensure that its Internal Implementation Standards are available in the language in which content moderators review content. If necessary to prioritize, Meta should focus first on contexts where the risks to human rights are more severe.
Our commitment: Our content moderators are all fluent in English. They rely on the Community Standards, internal policy guidelines (which are available in English), and supplementary lists of context-specific terms and phrases to ensure standardized global enforcement of our policies.
Considerations: Our Community Standards apply to everyone, all around the world, and to all types of content. We aim to publish the Community Standards in the languages that our users speak. We provide content reviewers with a set of internal policy guidelines, which also apply globally.
Our content reviewers are all fluent in English. They also speak a wide range of the languages used in regions across the globe and bring particular regional and cultural knowledge to the content they review. As we explained in our response to 2021-003-FB-UA-1, we currently publish the Community Standards and Community Guidelines in over 40 languages, which are available to our content reviewers. Our content reviewers are also supported by teams with regional and linguistic expertise when reviewing content.
There may be offensive words or phrases particular to another language and cultural context, and we account for this in our guidance to reviewers. While we enforce our policy on slurs consistently, reviewers need to know the colloquial language that, for example, is considered an attack on a protected group in their region. Our Content Policy team, in consultation with regional experts from the Global Operations Team, maintains lists of context-specific terms and phrases for this purpose.
Next steps: We will have no further updates on this recommendation.