JUN 12, 2023
2022-011-IG-UA
Today, the Oversight Board selected a case based on a user appeal that centers on a video purporting to have been filmed shortly after a mass shooting in a church in Nigeria. The video shows motionless, bloodied bodies coupled with the sounds of wailing and screaming in the background.
Initially, Meta’s systems placed a warning screen on the video to mark it as disturbing. The user later added a caption to the post with several hashtags. We subsequently removed the content for violating our policy on Violent and Graphic Content, as laid out in our Instagram Community Guidelines and Facebook Community Standards. Meta’s policies prohibit any content that “glorifies violence or celebrates the suffering or humiliation of others.”
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to remove the content from the platform. Meta has acted immediately to comply with the board’s decision, and this content has been reinstated with a warning screen.
In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly. For more information, please see our Newsroom post about how we implement the board’s decisions.
After we review the recommendations the board provided alongside its decision, we will update this page.
Meta should review the public-facing language in the Violent and Graphic Content policy to ensure that it is better aligned with the company's internal guidance on how the policy is to be enforced. The Board will consider this recommendation implemented when the policy has been updated with a definition and examples, in the same way as Meta explains concepts such as "praise" in the Dangerous Individuals and Organisations policy.
Our commitment: We will review our Violent and Graphic Content policy to ensure it is aligned with the internal guidance on how to enforce the policy and consider including additional public details about the policy.
Considerations: We are exploring ways to incorporate this recommendation into our Violent and Graphic Content policy. Our existing policy outlines a number of details about our approach to violent and graphic content. This includes details about how we enforce on “sadistic remarks,” which the board notes in its decision are prohibited under the policy but defined broadly in internal guidance for moderators.
We agree that there may be opportunities to provide further details and clarification about this policy. We will look for ways to incorporate these details that improve understanding without leaving our platforms more susceptible to abuse by bad actors. We expect to provide further updates in future Quarterly Updates.
Meta should notify Instagram users when a warning screen is applied to their content and provide the specific policy rationale for doing so. The Board will consider this recommendation implemented when Meta confirms that notifications are provided to Instagram users in all languages supported by the platform.
Our commitment: We are improving the availability and granularity of information shared in user messaging across all violation areas and enforcement types. We are prioritizing a variety of safety actions in light of regulatory requirements and will cover warning screen applications in due course.
Considerations: We have continued to dedicate efforts toward improving the experience when we take enforcement decisions or safety actions on user content. This work has been highlighted in our commitments and responses to recommendations, including the case on breast cancer symptoms and nudity, a post depicting indigenous artwork and discussing residential schools, and the case regarding support of Abdullah Öcalan, founder of the PKK, among others. We are committed to increasing transparency for people using our platforms, in alignment with regulatory requirements and in collaboration with our various internal teams. We will fulfill this commitment by methodically working from the most severe content and violation types.
Given the large scope of enforcement decisions and their varying severity levels, we are constantly balancing priorities to optimize the experience of our users. We will continue to implement efforts that notify users of content moderation decisions across our products. In light of this prioritization, and of our efforts to ensure that we successfully meet our regulatory obligations, we do not have a definitive timeframe for this implementation. We will share updates in a future Quarterly Update.