In the case of Armenian People and the Armenian Genocide (also known as the "Two Buttons Meme" case), a Facebook user in the United States posted a comment with an adaptation of the "two buttons" meme. It featured the split-screen cartoon from the original "two buttons" meme, but with a Turkish flag substituted for the cartoon character's face and two corresponding statements in English: "The Armenian Genocide is a lie" and "The Armenians were terrorists that deserved it."

The Oversight Board overturned Meta's decision to remove the comment under the Hateful Conduct Community Standard (previously referred to as "Hate Speech"). A majority of the Board found that the content fell within Facebook's exception for content condemning or raising awareness of hatred.

Following the Board's recommendation in 2021, Meta committed to allowing users to specify in their appeal submissions that their content falls under one of the exceptions to the Hateful Conduct policy, such as satirical content or sharing hateful content to condemn it or raise awareness. Meta introduced a feature that lets users provide additional context when they submit appeals (generic to any policy) on both Facebook and Instagram. This feature was rolled out in August 2023 for Facebook and in November 2023 for Instagram.
What was the impact of Meta’s implementation of this recommendation?
In response to a recommendation from the Board in 2021, Meta committed to letting users indicate in their appeal submissions that their content falls into one of the exceptions to the Hateful Conduct policy, specifically calling out the Community Standards exceptions for satirical content and for sharing hateful content to condemn it or raise awareness. In response to a related recommendation from the same Board decision, we also added detail to our Community Standards noting that, in certain cases, we may allow content that would otherwise violate the Community Standards when we determine that the content is satirical.
The ability to add additional context in appeal submissions (generic to any policy) launched on Facebook in August 2023 and on Instagram in November 2023.