JUN 12, 2023
2021-005-FB-UA
On March 2, 2021, the Oversight Board selected a case appealed by someone on Facebook regarding a comment with a meme depicting Turkey having to choose between “The Armenian Genocide is a lie” and “The Armenians were terrorists who deserved it.”
Meta took down this content for violating our policy on hate speech, as laid out in the Facebook Community Standards. We do not allow hate speech on Facebook, even in the context of satire, because it creates an environment of intimidation and exclusion, and in some cases, may promote real-world violence.
We welcome the Oversight Board's decision today on this case. Meta has acted immediately to comply with the board's decision, and this content has been reinstated.
In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly. For more information, please see our Newsroom post about how we will implement the board’s decisions. We will update this post again once any further action is taken on other identical content with parallel context.
After reviewing the recommendation the board provided alongside its decision, we will update this post.
On June 17, 2021, Meta responded to the board’s recommendation for this case. We are committing to take action on the recommendation.
Meta should make technical arrangements to ensure that notice to users refers to the Community Standard enforced by the company.
Our commitment: We agree that providing people with accurate information about why we have taken down their content is important. We are assessing how best to do so.
Considerations: Our content moderators assess a piece of content against all of our Community Standards. If content is found to violate our Community Standards, content moderators select which policy was violated, but at this time they can only select one violation type, even if the content violates multiple Community Standards. On appeal, if a content moderator finds that the content should instead be marked for violating a different Community Standard, the reviewer assigns a new violation to reflect the correct one.
We need to explore the benefit to user experience that could come from informing users of multiple violations and multiple appeal opportunities resulting from a single piece of content. Additionally, changing the technical ability, process, and training for how content moderators select policy violations for a piece of content, and the appeals that may follow, creates new operational complexity that we need to evaluate.
Next steps: We plan to complete our assessment and update on our progress by the end of the year.
Meta should include the satire exception, which is currently not communicated to users, in the public language of the Hate Speech Community Standard.
Our commitment: We’ll add information to the Community Standards that makes it clear where we consider satire as part of our assessment of context-specific decisions.
Considerations: This change will allow teams to consider satire when assessing potential Hate Speech violations.
Next steps: We plan to complete this update by the end of this year.
Meta should make sure that it has adequate procedures in place to assess satirical content and relevant context properly, including by providing content moderators with additional resources.
Our commitment: We commit to giving regional and escalations teams the ability to evaluate content for satire through a new satire framework. We are also assessing how to apply this review at scale.
Considerations: As stated in our response to recommendation 2, we will add information to the Community Standards that makes it clear where we consider satire as part of our assessment of context-specific decisions. This work will include implementing a new satire framework, which our teams will use for evaluating potential satire exceptions. Regional teams will be able to provide satire assessments, as well as escalate pieces of content to specialized teams for an additional review when necessary.
We previously began developing a framework for assessing humor and satire and are prioritizing completing it based on the board’s recommendation. This work included over 20 engagements with academic experts, journalists, comedians, representatives of satirical publications, and advocates for freedom of expression. Stakeholders noted that humor and satire are highly subjective across people and cultures, underscoring the importance of human review by individuals with cultural context. Stakeholders also told us that “intent is key,” though it can be tough to assess. Further, true satire does not “punch down”: the target of humorous or satirical content is often an indicator of intent. And if content is simply derogatory, not layered, complex, or subversive, it is not satire. Indeed, humor can be an effective mode of communicating hateful ideas.
Given the context-specific nature of satire, we are not immediately able to scale this kind of assessment or additional consultation to our content moderators. We need time to assess the tradeoffs of identifying and escalating more content that may qualify for our satire exception: doing so could compete with escalations for our highest-severity policies, increase the volume of content that would be escalated, and slow review times among our content moderators.
Next steps: We are completing a new satire framework that regional and escalations teams will use to evaluate content for satire. We are assessing how to apply this review at scale. We plan to complete our assessment and update on our progress by the end of the year.
Meta should let users indicate in their appeal that their content falls into one of the exceptions to the Hate Speech policy.
Our commitment: We will continue working on our appeal process to allow users to provide more specific information about their appeal, including that they believe it qualifies under one of the policy exceptions. As a result of this recommendation, we are evaluating how best to provide people with the ability to indicate that their content falls into one of the exceptions of the Hate Speech policy.
Considerations: We’re continuously working to improve our appeal process, both for the benefit of user experience and for accuracy of enforcement. We have been looking into ways to give people the ability to provide additional context with their appeal. Based on the board’s recommendation, we will now explore including the ability to identify a specific policy exception in the Community Standards that a user believes applies to their content.
There are operational challenges associated with increasing the amount of information our teams review as part of the appeals process. We need time to assess whether the additional context adds to the accuracy of review and quality of user experience. We also need to consider the extent to which additional context may slow down review time, limiting the number of appeals we can review at scale.
Next steps: We plan to complete our assessment and update on our progress by the end of the year.
Meta should ensure appeals based on policy exceptions are prioritized for human review.
Our commitment: We need time to assess whether policy exceptions, along with other user-provided context, should factor into how we prioritize appeals for human review.
Considerations: We currently prioritize appeals for human review based on a variety of factors, including how recently the post was taken down, how large the audience of the post is, and how confident we are that the initial decision was right. We are continuously working to improve our strategies for responding to appeals quickly and accurately. As described in our response to recommendation 4, we are exploring ways to give people the ability to provide additional information with their appeal, which we may be able to use to improve the quality of our appeals process.
Prioritizing a certain appeal may mean that it gets reviewed more quickly, but does not necessarily affect the accuracy of that review. We need time to analyze how changes to our appeals process and additional user-provided context affect both speed and accuracy of our scaled review.
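The prioritization described above can be pictured as a scoring function that combines the stated factors into a single ranking value. The sketch below is purely illustrative: the weights, the `cites_policy_exception` signal, and all field names are hypothetical assumptions for explanation, not Meta's actual system.

```python
from dataclasses import dataclass

@dataclass
class Appeal:
    hours_since_takedown: float   # how recently the post was taken down
    audience_size: int            # how large the post's audience is
    decision_confidence: float    # 0.0-1.0: confidence the initial decision was right
    cites_policy_exception: bool  # hypothetical signal from a user's appeal

def priority_score(appeal: Appeal) -> float:
    """Return a score where higher means review sooner.

    Weights are illustrative only; a real system would tune them empirically.
    """
    recency = 1.0 / (1.0 + appeal.hours_since_takedown)   # fresher takedowns rank higher
    reach = min(appeal.audience_size / 10_000, 1.0)       # cap audience contribution
    doubt = 1.0 - appeal.decision_confidence              # uncertain decisions rank higher
    exception_boost = 0.25 if appeal.cites_policy_exception else 0.0
    return 0.3 * recency + 0.3 * reach + 0.4 * doubt + exception_boost

# An appeal citing a policy exception on a low-confidence decision
# outranks a high-confidence decision with the same reach and recency.
routine = Appeal(1.0, 5_000, 0.9, False)
flagged = Appeal(1.0, 5_000, 0.5, True)
```

Under this framing, the open question in the text above maps to whether `exception_boost` improves not just review speed for those appeals but also overall accuracy across the queue.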
Next steps: We plan to complete our assessment and update on our progress by the end of the year.