2021-010-FB-UA
Today, the Oversight Board selected a case appealed by a Facebook user regarding a post shared by the page of a regional news outlet in Colombia. The post, originally shared by a verified page, contains a short video with text expressing admiration for the protesters in the video, who are chanting about the President of Colombia, tax reform, and the strike in Colombia. The chanting includes a term identified as a slur.
Facebook took down this content for violating our policy on hate speech, as laid out in the Facebook Community Standards. We do not allow content that “describes or negatively targets people with slurs, where slurs are defined as words that are inherently offensive and used as insulting labels” on the basis of protected characteristics, including sexual orientation.
We will implement the board’s decision once it has finished deliberating and will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today on this case. Facebook has acted to comply with the board’s decision immediately, and this content has been reinstated.
In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly. For more information, please see our Newsroom post about how we implement the board’s decisions.
After conducting a review of the recommendations the board provided alongside its decision, we will update this post.
On October 27, 2021, Meta responded to the board’s recommendations for this case. We are fully implementing one recommendation, assessing feasibility on two, and the remaining recommendation covers work Meta already does.
Publish illustrative examples from the list of slurs it has designated as violating under its Hate Speech Community Standard. These examples should be included in the Community Standard and include edge cases involving words which may be harmful in some contexts but not others, describing when their use would be violating. Facebook should clarify to users that these examples do not constitute a complete list.
Our commitment: We will include additional information in our Hate Speech Community Standards that better explains our approach to edge cases involving words that may be harmful in certain contexts, but not necessarily in others. Although we recognize the need for transparency and clarity in our Community Standards, we are still considering the extent to which we will publish specific slur words, given that the mere mention of these terms can be demeaning and can lead to an environment of intimidation and exclusion.
Considerations: While we recognize the need for clarity and transparency in our Community Standards, we do not want those reading our Community Standards to see potentially harmful or hateful content. We’ve avoided publishing the terms that we prohibit from the platform on the basis that they may create an environment of intimidation and exclusion, or be used to demean people. As a result of this recommendation, however, we are assessing the potential implications of sharing specific examples.
Next steps: In the first half of 2022, we will update the Hate Speech section of the Community Standards with an explanation about our approach to edge cases involving words that may be harmful in some contexts, but not others. We will also provide an update on our assessment of whether to publish examples of slurs in our first Quarterly Update in 2022.
Link the short explanation of the newsworthiness allowance provided in the introduction to the Community Standards to the more detailed Transparency Center explanation of how this policy applies. The company should supplement this explanation with illustrative examples from a variety of contexts, including reporting on large scale protests.
Our commitment: We will include a link to the more detailed explanation of the newsworthiness allowance in the introduction of the Community Standards. We will also add examples of newsworthy content to that explanation in the Transparency Center.
Considerations: Our approach to determining the potential newsworthiness of a piece of content is rooted in international human rights standards, as reflected in our Corporate Human Rights Policy, and involves conducting a careful balancing test that weighs the public interest against the risk of harm.
Given the often subjective nature of what may be considered newsworthy, we use a number of factors to make this determination. We assign special value to content that surfaces threats to public health or safety or that gives voice to perspectives currently being debated as part of a political process. We also consider other factors, such as the circumstances of a specific country, the nature of the speech, and the political structure of the country, including the extent to which it has a free press.
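This balancing test is a human judgment, not an algorithm, but as a purely illustrative sketch the factors described above could be modeled as follows. Every name, weight, and threshold here is hypothetical and is not drawn from any actual Meta system:

```python
# Hypothetical model of the newsworthiness balancing test described above.
# All factors, weights, and the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class NewsworthinessContext:
    surfaces_health_or_safety_threat: bool  # content we assign special value
    voices_active_political_debate: bool    # perspectives in a political process
    country_has_free_press: bool            # political structure of the country
    estimated_harm: float                   # 0.0 (none) to 1.0 (severe), per reviewer judgment

def public_interest_score(ctx: NewsworthinessContext) -> float:
    """Accumulate hypothetical public-interest weight from the stated factors."""
    score = 0.0
    if ctx.surfaces_health_or_safety_threat:
        score += 0.5
    if ctx.voices_active_political_debate:
        score += 0.4
    if not ctx.country_has_free_press:
        # Speech may be one of the few remaining channels for these perspectives.
        score += 0.2
    return score

def allow_as_newsworthy(ctx: NewsworthinessContext) -> bool:
    # The allowance applies only when public interest outweighs the risk of harm.
    return public_interest_score(ctx) > ctx.estimated_harm
```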
The introduction of the Community Standards currently includes an explanation of instances where we may allow content as part of our commitment to voice, including if this content is newsworthy. To further clarify and improve the accessibility of our explanation of this allowance, we will implement the board’s recommendation to link from this introduction to the more detailed explanation of our approach to newsworthy content. In addition, we will add examples to the newsworthiness allowance section of the Transparency Center.
Next steps: We anticipate completing this work in the first half of 2022.
Develop and publicize clear criteria for content reviewers to escalate for additional review public interest content that potentially violates the Community Standards but may be eligible for the newsworthiness allowance.
Our commitment: Our content reviewers can escalate any potentially newsworthy content to our specialized review teams for additional consideration of a newsworthiness allowance. We have published in the Transparency Center a description of the criteria we use to assess escalated content as potentially newsworthy.
Considerations: As we describe in the Transparency Center and in our response to recommendation #2 above, our approach to determining the potential newsworthiness of a piece of content is rooted in international human rights standards, and involves conducting a balancing test that weighs the public interest against the risk of harm. We outline the contextual factors we use in making this determination in the Transparency Center, including political issues and the extent to which there is a free press.
The training we provide to content reviewers makes clear that they can escalate content they identify as potentially newsworthy to our specialized teams for further assessment. These reviewers speak a wide variety of languages from regions across the globe and bring regional and cultural knowledge to the content they review, which enables them to take contextual factors into account when assessing content that is potentially newsworthy. Because of the subjective nature of newsworthiness allowances, we rely on specialized teams to conduct these balancing tests, reducing bias and subjectivity in the analysis.
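As an illustrative sketch only, the escalation path described above might be represented like this; the record fields and routing logic are assumptions for illustration, not Meta’s actual tooling:

```python
# Hypothetical escalation record for potentially newsworthy content.
from dataclasses import dataclass

@dataclass
class EscalationTicket:
    content_id: str
    suspected_violation: str   # e.g. "Hate Speech"
    reviewer_market: str       # reviewer's language/region expertise
    contextual_notes: str      # why the reviewer sees potential newsworthiness

def escalate_if_potentially_newsworthy(ticket: EscalationTicket,
                                       specialist_queue: list) -> None:
    """Route to a specialized team instead of deciding at first review,
    so the balancing test is run centrally and consistently."""
    specialist_queue.append(ticket)

# Example usage with invented values:
queue: list[EscalationTicket] = []
escalate_if_potentially_newsworthy(
    EscalationTicket("123", "Hate Speech", "es-CO",
                     "protest footage; slur chanted during political strike"),
    queue,
)
```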
Next steps: We will have no further updates on this recommendation.
Notify all users who reported content assessed as violating but left on the platform for public interest reasons that the newsworthiness allowance was applied to the post. The notice should link to the Transparency Center explanation of the newsworthiness allowance.
Our commitment: We will assess the feasibility of developing a notification to send to users who report a piece of content that violates the Community Standards, explaining that we left it up under our newsworthiness allowance.
Considerations: We have begun building product features that we can use to inform users when a piece of content that violates the Community Standards is left up under our newsworthiness allowance. We need time to assess whether the features under development would meet this recommendation’s objective of notifying the specific users who reported such content, or whether another feature would need to be developed.
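As a hypothetical sketch of the kind of notification under assessment; the message text, link, and delivery helper below are invented for illustration and do not describe any shipped feature:

```python
# Hypothetical reporter notification for a newsworthiness allowance.
TRANSPARENCY_CENTER_URL = "https://transparency.fb.com"  # assumed landing page

def send_notification(user_id: str, message: str) -> None:
    # Stand-in for whatever internal delivery mechanism would actually be used.
    print(f"[notify {user_id}] {message}")

def notify_reporters_of_newsworthiness_allowance(content_id: str,
                                                 reporter_ids: list[str]) -> None:
    """Tell each user who reported the content why it was left up."""
    message = (
        f"The content you reported (ID {content_id}) violates our Community "
        "Standards but was left up under our newsworthiness allowance. "
        f"Learn more: {TRANSPARENCY_CENTER_URL}"
    )
    for user_id in reporter_ids:
        send_notification(user_id, message)
```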
Next steps: We anticipate completing our assessment and providing an update in the first half of 2022.