2020-007-FB-FBR
On December 3, 2020, the Oversight Board selected a case referred by Meta regarding a post in a group that appears to exist for Muslims in India. The post contains a statement about a sword being taken from its scabbard if people speak against the prophet. The post also references President Emmanuel Macron of France. Meta deemed this post a veiled threat, and we took it down for violating our policy on violence and incitement, as laid out in the Facebook Community Standards.
Meta referred this case to the board as an example of a challenging decision about statements that may incite violence even when not explicit. It also highlights an important tension we face when addressing religious speech that could be interpreted as a threat of violence.
On February 12, 2021, the board overturned Meta's decision on this case. Meta acted to comply with the board’s decision, and this content has been reinstated.
On March 11, 2021, Meta responded to the board’s recommendation for this case. We are committing to take action on the recommendation.
Provide people with additional information regarding the scope and enforcement of restrictions on veiled threats. This would help people understand what content is allowed in this area. Meta should make its enforcement criteria public, and these criteria should consider the intent and identity of the person, as well as their audience and the wider context.
Our commitment: We commit to adding language to the Violence and Incitement Community Standard to make it clearer when we remove content for containing veiled threats.
Considerations: Facebook removes explicit statements that incite violence under our Violence and Incitement Community Standard. Facebook also removes statements that are not explicit when they act as veiled or implicit threats. The language we will add to our Community Standards will elaborate on the criteria we use in this policy to evaluate whether a statement is a coded attempt to incite violence.
In its enforcement of this policy, Facebook currently does not directly use the identity of the person who shared the content or the content’s full audience as criteria for assessing whether speech constitutes a veiled threat, so the added language will not include such criteria. As the board notes, we rely on our trusted partner network to tell us when content is potentially threatening or likely to contribute to imminent violence or physical harm, so it is possible that these partners use such signals in their assessments.
Next steps: We will add the language described above to the Violence and Incitement Community Standard within a few weeks.