2023-028-FB-UA
Today, the Oversight Board selected a case appealed by a user concerning an image posted by a Facebook Page. The image depicts a diagram showing the inside and outside of a firearm cartridge, accompanied by a caption in Arabic with instructions for disassembling a cartridge and using its explosive components to create a Molotov cocktail. The caption concludes with a statement of support for the Sudanese Armed Forces.
Meta took down this content for violating our Violence and Incitement policy, as laid out in the Facebook Community Standards.
Meta does not allow content that provides “instructions on how to make or use explosives, unless there is a clear context that the content is for a non-violent purpose.” We also do not allow people to share instructions concerning “how to make or use weapons if there is evidence of a goal to seriously injure or kill people through: language explicitly stating that goal, or photos or videos that show or simulate the end result (serious injury or death) as part of the instruction.” In this case, the caption suggests that the Molotov cocktail is intended to be thrown at a target, and the content was shared in the context of an armed conflict.
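As a rough illustration of how these two rules compose, the sketch below models the quoted conditions as boolean checks over the kinds of signals a reviewer would look for. It is a hypothetical reading of the policy text, not Meta's enforcement tooling; every name and field is invented for illustration.

```python
# A minimal sketch of the policy logic quoted above, assuming hypothetical
# names throughout. This is not Meta's actual enforcement code.
from dataclasses import dataclass

@dataclass
class ContentSignals:
    has_explosives_instructions: bool  # e.g. how to build a Molotov cocktail
    has_weapons_instructions: bool     # how to make or use a weapon
    clear_nonviolent_context: bool     # e.g. recreational self-defense, military training
    states_intent_to_harm: bool        # language explicitly stating a goal to injure or kill
    depicts_end_result: bool           # imagery showing or simulating serious injury or death

def violates_violence_and_incitement(c: ContentSignals) -> bool:
    """Return True if the content violates the rules quoted above."""
    # Explosives instructions violate unless there is clear non-violent context.
    if c.has_explosives_instructions and not c.clear_nonviolent_context:
        return True
    # Weapons instructions violate when paired with evidence of a goal
    # to seriously injure or kill.
    if c.has_weapons_instructions and (c.states_intent_to_harm or c.depicts_end_result):
        return True
    return False

# The case at hand: Molotov-cocktail instructions shared in an armed-conflict
# context, with a caption suggesting the device is meant to be thrown at a target.
case = ContentSignals(
    has_explosives_instructions=True,
    has_weapons_instructions=True,
    clear_nonviolent_context=False,
    states_intent_to_harm=True,
    depicts_end_result=False,
)
assert violates_violence_and_incitement(case)  # removed, as described above
```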
We will implement the Board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the Board’s website for the decision when they issue it.
We welcome the Oversight Board's decision on this case today, February 13th, 2024. The Board upheld Meta's decision to remove the content.
After conducting a review of the recommendations provided by the board, we will update this post.
To better inform users of what content is prohibited on its platforms, Meta should amend its Violence and Incitement policy to include a definition of “recreational self-defense” and “military training” as exceptions to its rules prohibiting users from providing instructions on making or using weapons, and clarify that it does not allow any self-defense exception for instructions on how to make or use weapons in the context of an armed conflict.
The Board will consider this implemented when the Violence and Incitement Community Standard is updated to reflect these changes.
Our commitment: We are in the process of streamlining and clarifying our policy prohibiting content that provides instructions on how to make or use weapons and explosives. This policy update will aim to clarify allowances related to recreational self-defense and military training. This includes updating the terminology to ‘non-violent purposes’, resolving overlaps and ambiguities in policy language, and providing examples.
Considerations: Our Violence and Incitement policy aims to prevent potential offline violence that may be related to content on our platforms. As the Board notes in its decision, our Violence and Incitement policy does not allow people to share instructions on how to make or use weapons outside of certain specific contexts. We consider context to determine whether language or imagery that might otherwise violate our policies is shared in the permissible contexts of recreational self-defense or military training.
In December 2023, we updated our Violence and Incitement Community Standards to clarify the specific contexts in which we allow content providing instructions on how to make or use weapons. Based on the Board’s recommendation, we will further refine the language in our internal guidance, resolve ambiguities, and better define these allowances. We will also explore further ways to clarify our approach to this content in our Community Standards.
To make sure users are able to understand which policies their content was enforced against, Meta should develop tools to rectify mistakes in its user messaging notifying the user about the Community Standard they violated.
The Board will consider this implemented when the related review and notification systems are updated accordingly.
Our commitment: We consistently work to improve the accuracy and granularity of user messaging following enforcement actions on our platforms. We will explore opportunities, via appeal transparency pathways, to rectify user notifications in situations where the violation type has changed but the enforcement action remains the same.
Considerations: We continue to focus on improving user notifications to optimize experiences for people on our platforms as described in our responses to Aftermath of an Attack on a Church in Nigeria #2 and Post Depicting Indigenous Artwork and Discussing Residential Schools #1.
At times, a single piece of content may violate multiple Community Standards. While an enforcement action may stay the same, the cause for a violation may vary upon assessment by different reviewers. If multiple violation types are identified, reviewers are instructed to enforce against the most severe violation type, while tracking other relevant violation types. However, when content is re-reviewed after initial enforcement, we do not send an updated user notification if the reviewer selects a different violation type without changing the enforcement action.

As noted in our guidance on taking down violating content, we are aware that despite our best efforts, it is inevitable that we will make some errors when taking down content that violates our Community Standards. As such, we continue to strive toward increasing transparency, and will explore an avenue that allows us to do this through appeals outcome messaging. Where a user has appealed an enforcement decision using the pathway described in our guidance and our reviewers determine that the reason for enforcement is different from the one previously communicated to the user, we will provide an updated notice and update the information in the Integrity Hub or Account Status page, clarifying the relevant Community Standards violation. We will also explore opportunities to further clarify with users when these changes have taken place outside of appeal pathways, for example, if the content is escalated by internal teams for re-review.
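To make that flow concrete, here is a minimal sketch of the appeal re-review logic under the commitment described above. All names are hypothetical and this is not Meta's implementation; the key branch is the middle one, where the violation type changes while the enforcement action does not, which today produces no new notification and under this commitment would.

```python
# A minimal sketch of the appeal re-review flow described above. All names
# (ReviewResult, handle_appeal, the messaging stubs) are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"
    NO_ACTION = "no_action"

@dataclass
class ReviewResult:
    action: Action
    violation_types: list[str]  # all identified violations, most severe first

def most_severe(result: ReviewResult) -> str:
    # Reviewers enforce against the most severe violation type while
    # tracking the other relevant ones.
    return result.violation_types[0]

def send_notification(message: str) -> None:
    print("notify user:", message)  # stand-in for the real messaging system

def update_account_status_page(violation: str) -> None:
    print("Account Status now cites:", violation)  # stand-in for the Account Status page

def handle_appeal(original: ReviewResult, rereview: ReviewResult) -> None:
    """Notify the user when the cited Community Standard changes on appeal,
    even if the enforcement action itself stays the same."""
    if rereview.action != original.action:
        send_notification(f"Enforcement changed to: {rereview.action.value}")
    elif most_severe(rereview) != most_severe(original):
        # Same action, different violation type: send an updated notice and
        # refresh the Account Status page with the correct Community Standard.
        send_notification(f"Updated violation type: {most_severe(rereview)}")
        update_account_status_page(most_severe(rereview))

# Example: the removal is upheld on appeal, but the reviewer attributes it to
# a different Community Standard, so the user gets a corrected notice.
before = ReviewResult(Action.REMOVE, ["dangerous_organizations"])
after = ReviewResult(Action.REMOVE, ["violence_and_incitement"])
handle_appeal(before, after)
```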
User notification remains an ongoing priority across our teams, and we continue to iterate on improvements in this area. We are committed to ensuring that users receive the most accurate information regarding content takedowns and will expand on our progress in future updates to the Board.