Standard Case 2023-032-IG-UA
Today, November 16, 2023, the Oversight Board selected a case appealed by an Instagram user regarding a video of an unveiled Iranian woman being confronted by a man who identifies himself as a judicial officer and accuses the woman of being a criminal for failing to wear a hijab. The video is accompanied by a caption stating that the video is “evidence of your bastardness and the courage of Iranian women,” and that “It is not far to make you into pieces.”
Upon initial review, Meta took down this content for violating our policy on Violence and Incitement, as laid out in our Instagram Community Guidelines and Facebook Community Standards. However, upon additional review, we determined that we had removed this content in error and reinstated the post. Additional input from our regional teams, as well as the Board’s intervening decision in the Call for Women to Join Political Protest in Cuba Case, helped us determine that the most likely interpretation of the language in the caption was a political critique directed toward either the regime as a whole or people who support the mandatory hijab requirement more generally, rather than a literal threat.
We will implement the Board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the Board’s website for the decision once it is issued.
We welcome the Oversight Board’s decision today, March 7, 2024, on this case. The Board overturned Meta’s original decision to remove the content from Instagram. Meta previously restored the content to Instagram.
When it is technically and operationally possible to do so, we will also take action on content that is identical and in the same context.
After reviewing the recommendation provided by the Board, we will update this page.
To ensure respect for users' freedom of expression and assembly in an environment of systematic state repression, Meta should add a policy lever to the Crisis Policy Protocol providing that figurative (or not literal) statements, not intended to, and not likely to, incite violence, do not violate the Violence and Incitement policy line prohibiting threats of violence in relevant contexts. This should include developing criteria for at-scale moderators on how to identify such statements in the relevant context.
The Board will consider this recommendation implemented when Meta both shares with the Board the methods for implementing the policy lever and the resulting criteria for moderation in Iran.
Our commitment: Our Violence and Incitement policy explains that we may allow speech that does not constitute a credible threat. We may consider additional context to determine whether speech is used “figuratively” rather than as a credible threat. We are working with internal teams to refine our Violence and Incitement policy to enable more nuanced enforcement. However, this work is distinct from our Crisis Policy Protocol (CPP), which is used to assess the risk of imminent harm on and off our platforms so we can respond with specific policy and product actions that will help keep people safe. We believe this recommendation fits better under the ongoing work to refine the Violence and Incitement policy, rather than the CPP, and we will therefore address it through that workstream.
Considerations: We agree with the Board that people sometimes use speech that would otherwise violate our Violence and Incitement policy figuratively, rather than to make threats. Our Violence and Incitement policy has been developed and refined with the input of external stakeholders and expertise from internal teams. This year, we are continuing to work with internal teams to refine our Violence and Incitement policy to enable more nuanced enforcement. Because the CPP, which was developed following a recommendation from the Oversight Board, is a tool to enable enhanced, more nuanced application of our policies in crisis contexts, refinements to policies like the Violence and Incitement Community Standard are subsequently reflected in how the CPP is applied.
In the case of figurative speech, it can be difficult to understand the intent and impact of a phrase because this type of speech is highly contextual. When it is unclear whether a threat is “figurative” or not, we assess the overall context as well as the implications for voice and safety. Based on how users are engaging with the speech, its extent and reach, and the sensitivities surrounding the situation, we may lean toward safety when enforcing our policies at scale. However, when there are indicators that the speech is political and does not contain credible threats, we may allow that speech on our platforms on escalation. As we noted in our responses to the Board’s Iran Protest Decision, we may also weigh public interest value and risk of harm when considering a scaled newsworthy allowance for certain phrases or statements. We have updated our newsworthy guidance for the Violence and Incitement policy and will continue to improve it so we can adequately assess allowances for content with strong political speech or public interest value.
Our Crisis Policy Protocol is used to assess risks of imminent harm both on and off our platforms so we can respond with specific policy and product actions. While Meta already reviews content that people post to assess whether or not it violates our policies, during crises the risks may be higher and different responses may become necessary. The CPP framework helps us mount a consistent global response while allowing the flexibility to adapt to quickly changing conditions. It also helps us anticipate risks and is informed by past crises to make sure that key learnings are incorporated. Development of the protocol included original research and consultations with over 50 global external experts in national security, conflict prevention, hate speech, humanitarian response and human rights. The CPP was also developed in alignment with the principles outlined in the Rabat Plan of Action. With these features in mind, adding a CPP lever that provides at-scale moderators with criteria for identifying statements that use threatening language figuratively is not aligned with the role the CPP plays in crises.
Crises are varied, and the implications of speech in each crisis are unique to that particular context. While we recognize the importance of figurative speech in moments of political protest, it is difficult to determine whether a phrase is figurative or a credible threat. A decision to allow figurative language in moments of crisis is difficult to scale, and we want to maintain the ability to respond quickly to crises after fully considering the context and risk of harm associated with the content in those particular situations. We will provide the Board with updates on the progress of our broader work on Violence and Incitement in future reporting.