NOV 12, 2024
We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. In rare cases, we allow content that may violate the Community Standards, if it’s newsworthy and if keeping it visible is in the public interest.
We do this only after conducting a thorough review that weighs the public interest against the risk of harm. We look to international human rights standards, as reflected in our Corporate Human Rights Policy, to help make these judgments.
We introduced our newsworthiness allowance in October 2016 after receiving global criticism for removing the iconic “Napalm Girl” photo, which, as a result of this allowance, is visible across our platforms today.
We’ve found that determining the newsworthiness of a piece of content can be highly subjective. People often disagree about what standards should be in place to ensure a community is both safe and open to expression. We conduct a thorough assessment of any potentially newsworthy content and our reviewers consider a number of factors prior to escalating to our Content Policy team.
When making a newsworthy determination, we assess whether that content surfaces an imminent threat to public health or safety, or gives voice to perspectives currently being debated as part of a political process. We also consider other factors, such as:
Country-specific circumstances (for example, whether there is an election underway, or the country is at war)
The nature of the speech, including whether it relates to governance or politics
The political structure of the country, including whether it has a free press
We remove content, even if it has some degree of newsworthiness, when leaving it up presents a risk of harm, such as physical, emotional and financial harm, or a direct threat to public safety. For content we allow that may be sensitive or disturbing, we include a warning screen. In these cases, we can also limit the ability to view the content to adults, ages 18 and older.
Content from all sources, including news outlets, politicians, or other people, is eligible for a newsworthy allowance. While the speaker may factor into the balancing test, we do not presume that any person’s speech is inherently newsworthy, including that of politicians.
Newsworthy allowances can be “narrow,” in which case the allowance applies to a single piece of content, or “scaled,” in which case it may apply more broadly to something like a specific phrase.
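To make the process described above concrete, here is a minimal sketch of the decision flow in Python. It is only an illustration of the balancing test and the narrow-versus-scaled distinction covered in this section; the class names, numeric scores, and thresholds are assumptions introduced for the example and do not reflect Meta’s internal tooling.

```python
from dataclasses import dataclass
from enum import Enum


class Scope(Enum):
    """Scope of a granted allowance, as described above."""
    NARROW = "narrow"   # applies to a single piece of content
    SCALED = "scaled"   # applies more broadly, e.g. to a specific phrase


class Outcome(Enum):
    REMOVE = "remove"                     # risk of harm outweighs public interest
    ALLOW = "allow"                       # content left up as-is
    ALLOW_WITH_WARNING = "allow_18_plus"  # left up behind a warning screen, adults 18+ only


@dataclass
class Assessment:
    """Illustrative inputs to the balancing test (field names and scores are assumptions)."""
    public_interest: float         # weight of factors such as an election underway, speech about governance, free-press context
    risk_of_harm: float            # weight of physical, emotional or financial harm, or threats to public safety
    sensitive_or_disturbing: bool  # graphic or otherwise disturbing material


def newsworthiness_decision(a: Assessment) -> Outcome:
    """Weigh public interest against risk of harm; the comparison here is illustrative only."""
    if a.risk_of_harm >= a.public_interest:
        # Removed even if the content has some degree of newsworthiness.
        return Outcome.REMOVE
    if a.sensitive_or_disturbing:
        # Allowed, but behind a warning screen and limited to adults, ages 18 and older.
        return Outcome.ALLOW_WITH_WARNING
    return Outcome.ALLOW
```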
Newsworthy Data
August 2024
From June 1, 2023 through June 1, 2024, we documented a total of 32 newsworthy decisions
14 (~44%) of those documented allowances were issued for politicians
Of the 32 newsworthy decisions, we documented a total of 6 scaled newsworthy decisions
August 2023
From June 1, 2022 through June 1, 2023, we documented 69 newsworthiness allowances
9 (~13%) of those documented allowances were issued for posts by politicians
Of the 69 allowances, we documented a total of 17 scaled allowances
August 2022
From June 1, 2021 through June 1, 2022, we documented 68 newsworthiness allowances
13 (~20%) of those documented allowances were issued for posts by politicians
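For reference, the politician shares quoted in this section follow directly from the documented counts. The short snippet below recomputes them; the period labels are included only for readability.

```python
# Recompute the share of documented allowances issued for politicians,
# using only the counts reported above.
periods = {
    "Jun 2023 - Jun 2024": (14, 32),
    "Jun 2022 - Jun 2023": (9, 69),
    "Jun 2021 - Jun 2022": (13, 68),
}

for label, (politicians, total) in periods.items():
    print(f"{label}: {politicians}/{total} = {politicians / total:.0%}")

# Output:
# Jun 2023 - Jun 2024: 14/32 = 44%
# Jun 2022 - Jun 2023: 9/69 = 13%
# Jun 2021 - Jun 2022: 13/68 = 19%   (reported above as ~20%)
```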
Examples
Hate speech present in background of media report on alleged police brutality during protest in Colombia
Media outlets reported an incident of alleged police brutality during the ongoing transport protests in Colombia. The outlets often shared a video in which a slur can incidentally be heard in the background, thereby violating our Hate Speech policy. We issued a newsworthy allowance for this video because its public interest value outweighed the risk of harm from the incidental use of a slur. However, because the video also included graphic content, we placed a warning screen over it and limited its availability to adults ages 18 and older.
Post by Ukrainian Defense Ministry depicting charred bodies
This video, originally shared by the Ukrainian Defense Ministry, very briefly depicts an unidentified charred body. Though we typically remove this type of content under our Violent and Graphic Content policy, we determined that the video qualified for a newsworthy allowance because it documented an ongoing armed conflict. Because of the graphic nature of the content, we placed a warning screen over it and limited its availability to adults ages 18 and older.
Brazilian politician's post depicting female nipples while discussing government policy towards cinema
The content was posted by a federal deputy in Brazil and discusses a fire in one of the country’s historic cinemas that destroyed important pieces of Brazilian cinematography. The post criticizes the government for allegedly neglecting the maintenance of the cinema and for a lack of funding for the cinema industry. The post includes a short clip from the movie Xica da Silva, which depicts uncovered female nipples. A newsworthy allowance was granted because the post discusses the artistic value of the work and contains political speech criticizing the government for not properly funding the cinema industry, so the public interest value outweighed the potential for harm.
Video depicting mistreatment of prisoners of war in an armed conflict
The video, posted to a page, depicts men in military uniforms being beaten and mocked by a group of men wearing different military uniforms. The caption claimed the perpetrators were Azerbaijani soldiers. Under the outing provision of our Coordinating Harm and Promoting Crime policy, Meta removes content revealing the identity of prisoners of war in the context of an armed conflict, where exposure of identity can present a risk of offline harm. However, there was no evidence that videos like this were being used in the conflict to further mistreat the detainees, and similar content was already available through other sources, so the safety risks were not high. At the same time, the videos raised awareness about the conditions of the prisoners and potential violations of international humanitarian law against them, and they are highly relevant to campaigns and legal proceedings seeking accountability for serious crimes. Ultimately, after weighing the risk to the safety and dignity of the prisoners of war against the public interest value of sharing this imagery, we allowed the video with a warning screen as a newsworthy decision. The Oversight Board issued a decision upholding Meta’s decision to leave up the content.