MAR 13, 2025
One way Meta promotes a safe, authentic community is by informing people that content might be sensitive or misleading, even if it doesn’t explicitly violate the Community Standards. In these instances, we include additional context about the content to help people decide what to read, trust or share.
By providing people with specific and relevant context when they come across a flagged post, we can help them be more informed about what they see and read. Here are some ways we provide context on relevant pieces of content that may be sensitive, misleading, or confusing:
Our goal is to protect people from viewing potentially sensitive content.
People value the ability to discuss important and often difficult issues online, but they also have different sensitivities to certain kinds of content. Therefore, we include a warning screen over potentially sensitive content on Facebook, such as:
Violent or graphic imagery.
Posts that contain descriptions of bullying or harassment, if shared to raise awareness.
Some forms of nudity.
Posts related to suicide or suicide attempts.
To help people avoid coming across content they’d rather not see, we limit the visibility of certain posts that are flagged by people on Instagram for containing sensitive or graphic material. Photos and videos containing such content will appear with a warning screen to inform people about the content before they view it. This warning screen appears when viewing a post in feed or on someone's profile.
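As a rough illustration only, a minimal sketch of how a warning-screen decision could key off the content categories listed above. This is not Meta's implementation, and every label name here is a hypothetical placeholder:

```python
# Hypothetical labels standing in for the sensitive categories listed above.
SENSITIVE_CATEGORIES = {
    "violent_or_graphic",
    "bullying_awareness",  # bullying/harassment shared to raise awareness
    "some_nudity",
    "suicide_related",
}

def needs_warning_screen(content_labels: set[str]) -> bool:
    """Return True if any label on the post falls into a sensitive category."""
    return bool(content_labels & SENSITIVE_CATEGORIES)

# A post classified as graphic gets a warning screen; an ordinary photo does not.
print(needs_warning_screen({"violent_or_graphic", "news"}))  # True
print(needs_warning_screen({"landscape_photo"}))             # False
```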
Our goal is to help people feel confident about the content and accounts they interact with.
To combat impersonation and help people avoid scammers who pretend to be high-profile people, Meta provides verified badges on Pages and profiles to indicate a verified account. This means we’ve confirmed the authentic presence of the public figure, celebrity or global brand that the account represents.
Our goal is to make it easier for people to identify content that’s timely, reliable and most valuable to them.
To give people more context about a news article before they share it on Facebook, Meta includes a notification screen if the article is more than 90 days old. People can still share the article if they choose; the notification simply helps them understand how old a given news article is, and its source.
To ensure we don’t slow the spread of credible information, especially in the health space, content posted by government health authorities and recognized global health organizations does not have this notification screen.
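The rule described in the two paragraphs above can be summarized in a short sketch. This is not Meta's code; the exempt-publisher list is a hypothetical stand-in for "government health authorities and recognized global health organizations":

```python
from datetime import date, timedelta

NOTIFICATION_AGE = timedelta(days=90)
# Hypothetical allowlist of exempt health-authority publishers.
EXEMPT_PUBLISHERS = {"who.int", "cdc.gov"}

def show_age_notification(published: date, publisher_domain: str,
                          today: date) -> bool:
    """Decide whether the pre-share notification screen appears."""
    if publisher_domain in EXEMPT_PUBLISHERS:
        return False  # don't slow the spread of credible health information
    return today - published > NOTIFICATION_AGE

# A two-year-old article from a non-exempt outlet triggers the screen.
print(show_age_notification(date(2023, 3, 1), "example-news.com",
                            today=date(2025, 3, 13)))  # True
# The same-age article from an exempt health authority does not.
print(show_age_notification(date(2023, 3, 1), "who.int",
                            today=date(2025, 3, 13)))  # False
```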
Our goal is to draw on a broad range of voices that exist on our platform to decide what content is potentially misleading or confusing and could benefit from additional information.
In the United States, Community Notes contributors can write and submit a note on posts they think are potentially misleading or confusing. A note may include background information, a tip or an insight people might find useful. For a note to be published on a post, contributors who normally disagree, based on how they’ve rated notes in the past, must agree that the note is helpful. Notes are not added to content when there is no such agreement or when people agree a note is not helpful.
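Meta has not published the exact algorithm, so the following is only a hedged sketch of the cross-perspective agreement idea described above. It assumes raters have already been clustered by how similarly they rated past notes (so different clusters tend to disagree), and the threshold value is hypothetical:

```python
from statistics import mean

HELPFUL_THRESHOLD = 0.6  # hypothetical cutoff, not a published number

def note_is_published(ratings_by_group: dict[str, list[int]]) -> bool:
    """ratings_by_group maps a rater cluster (formed from past rating
    similarity) to that cluster's 0/1 helpfulness votes on this note."""
    if not ratings_by_group or any(not v for v in ratings_by_group.values()):
        return False  # agreement cannot be established without ratings
    # Publish only if every cluster, on average, rates the note helpful.
    return all(mean(votes) >= HELPFUL_THRESHOLD
               for votes in ratings_by_group.values())

# Clusters that normally disagree both find this note helpful: published.
print(note_is_published({"cluster_a": [1, 1, 0, 1], "cluster_b": [1, 1, 1]}))  # True
# One cluster rates it unhelpful, so there is no cross-cluster agreement.
print(note_is_published({"cluster_a": [1, 1, 1], "cluster_b": [0, 0, 1]}))     # False
```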
Nearly anyone can sign up today for the opportunity to become a Community Notes contributor. Though the program is currently available only in the United States, our intention is ultimately to roll out this new approach to our users all over the world.