2024-031-IG-MR
Today, May 28, 2024, the Oversight Board selected a case referred by Meta regarding a video posted to Instagram. The video shows a Pakistani politician, a candidate in the 2024 Pakistan General Election, delivering a political speech. In the video, the politician states, as translated: “the only entity after God is Nawaz Sharif.” Sharif is a Pakistani businessman and politician who previously served as Prime Minister of Pakistan from 1990-1993, 1997-1999, and 2013-2017. The text overlay on the video states, as translated, that the candidate is “crossing all limits of faithlessness.” Untranslated, the text uses the Arabic term “kufr,” which roughly means “faithlessness” or “non-belief” and can be understood as the rejection or denial of God and his teachings under Islam.
Meta took down this content for violating our Coordinating Harm and Promoting Crime policy, as laid out in the Instagram Community Guidelines and Facebook Community Standards.
Meta referred this case to the board because we found it significant and difficult: it creates tension between our values of safety and voice.
The text overlay constitutes an allegation of blasphemy, suggesting that the politician has committed shirk – the belief in more than one God, or holding up anything or anyone as equal to God – which violates Islamic law and may be construed as violating Pakistan’s blasphemy laws. While there is public interest value in allowing debate and critique of politicians during elections, we ultimately determined that these types of allegations carry a high risk of offline violence, which warrants removal under our policies.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today, September 19, 2024, on this case. The Board upheld Meta’s decision to remove the content from Instagram.
After reviewing the recommendations provided by the Board, we will update this post with our initial responses to those recommendations.
To ensure safety for targets of blasphemy accusations, Meta should update the Coordinating Harm and Promoting Crime policy to make clear that users must not post accusations of blasphemy against identifiable individuals in locations where blasphemy is a crime and/or there are significant safety risks to persons accused of blasphemy.
The Board will consider this recommendation implemented when Meta updates its public-facing Coordinating Harm and Promoting Crime Community Standard to reflect the change.
Our commitment: We will update our Coordinating Harm and Promoting Crime Community Standards to provide clearer definitions and explanations of the types of content we may remove when it could reveal the identities of members of an outing-risk group. These definitions will include details about targets of blasphemy accusations.
Considerations: Our Coordinating Harm and Promoting Crime policy removes content that may expose the identity or location of someone who is part of what our policy refers to as an “outing-risk group.” Under this policy, we may also remove content that exposes such information about someone who has a familial or romantic relationship with, or has performed professional activities in support of, a member of an outing-risk group. We remove this content due to concerns about potential offline violence associated with certain allegations, including blasphemy, in parts of the world.
Currently, our Community Standards do not share an exhaustive list of “outing-risk groups” or a definition of the concept, due to concerns about potential offline harm for certain groups and attempts to circumvent our policies. However, we also recognize that providing clarifying examples of some types of outing-risk groups may make our policies more transparent and understandable. In line with a recent Board recommendation in the Homophobic Violence in West Africa decision, we will include examples of the types of outing-risk groups covered by this policy, specifically individuals facing allegations of blasphemy in regions where such allegations may lead to offline violence. We will share updates in our next report for the Board.
To ensure adequate enforcement of the Coordinating Harm and Promoting Crime policy line against blasphemy accusations in locations where such accusations pose an imminent risk of harm to the person accused, Meta should train at-scale reviewers covering such locations and provide them with more specific enforcement guidance to effectively identify and consider nuance and context in posts containing blasphemy allegations.
The Board will consider this recommendation implemented when Meta provides updated internal documents demonstrating that the training of at-scale reviewers to better detect this type of content occurred.
Our commitment: We will conduct additional reviewer training on our existing reviewer guidance for enforcing on accusations of blasphemy under the Coordinating Harm and Promoting Crime policy. We will share further progress on this reviewer training with the Board at a later date.
Considerations: We prioritize the safety of our users and take seriously content that could lead to a risk of physical harm, including allegations of blasphemy. Our existing reviewer guidance under the Coordinating Harm and Promoting Crime policy covers accusations of blasphemy, atheism, apostasy, and conversion in several geographic regions. This guidance is specific and actionable for at-scale reviewers, who apply the necessary context and nuance when reviewing content.
To increase reviewer awareness of this kind of content and reinforce our existing guidance, we commit to conducting additional training to help reviewers enforce specifically on allegations of blasphemy. This additional training will include examples specific to different regions where accusations of blasphemy may require different contextual considerations, clarifying how to enforce on this type of content in varied circumstances.
The training will conclude with assessments that reviewers must pass to demonstrate their comprehension of the policy’s nuances. We will update the Board on the implementation of the training in future reports.