APR 2, 2024
One way Meta promotes a safe, authentic community is by informing people that content might be sensitive or misleading, even if it doesn't explicitly violate the Facebook Community Standards or Instagram Community Guidelines. In these cases, we include additional context about the content to help people decide what to read, trust or share.
By providing people with specific and relevant context when they come across a flagged post, we can help them be more informed about what they see and read. Here are some ways we provide context on relevant pieces of content that may be sensitive or misleading:
Our goal is to protect people from viewing potentially sensitive content.
People value the ability to discuss important and often difficult issues online, but they also have different sensitivities to certain kinds of content. Therefore, we include a warning screen over potentially sensitive content on Facebook, such as:
Violent or graphic imagery.
Posts that contain descriptions of bullying or harassment, if shared to raise awareness.
Some forms of nudity.
Posts related to suicide or suicide attempts.
To help people avoid coming across content they’d rather not see, we limit the visibility of certain posts that are flagged by people on Instagram for containing sensitive or graphic material. Photos and videos containing such content will appear with a warning screen to inform people about the content before they view it. This warning screen appears when viewing a post in feed or on someone's profile.
Our goal is to help people feel confident about the content and accounts they interact with.
To combat impersonation and help people avoid scammers who pretend to be high-profile people, Meta provides verified badges on Pages and profiles that indicate a verified account. This means we've confirmed the authentic presence of the public figure, celebrity or global brand that the account represents.
Our goal is to make it easier for people to identify content that’s timely, reliable and most valuable to them.
To give people more context about a news article before they share it on Facebook, Meta includes a notification screen if the article is more than 90 days old. People can then continue sharing the article if they choose. This notification helps people understand how old a given news article is and where it comes from.
To ensure we don’t slow the spread of credible information, especially in the health space, content posted by government health authorities and recognized global health organizations does not have this notification screen.