NOV 12, 2024
Meta uses technology to enforce the Community Standards. Our teams work together to build and train the technology. Here’s how it works.
The process begins with our artificial intelligence teams. They build machine learning models that can perform tasks, such as recognizing what’s in a photo or understanding text. Then, our integrity teams—who are responsible for scaling the detection and enforcement of our policies—build upon these models to create more specific models that make predictions about people and content. These predictions help us enforce our policies.
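As a rough illustration of that layering, here is a minimal PyTorch sketch in which a general-purpose text encoder is reused as-is and a policy-specific classification head is trained on top of it. The module names, dimensions and two-label setup are hypothetical stand-ins, not Meta's actual architecture.

```python
import torch
import torch.nn as nn

class BaseTextEncoder(nn.Module):
    """Stand-in for a general-purpose model that turns text into an embedding."""
    def __init__(self, vocab_size: int = 10_000, dim: int = 128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        # One embedding per piece of text.
        return self.embed(token_ids, offsets)

class PolicyClassifier(nn.Module):
    """Policy-specific model built on top of the shared base encoder."""
    def __init__(self, encoder: BaseTextEncoder, num_labels: int = 2):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # reuse the base model; train only the head
        self.head = nn.Linear(128, num_labels)

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(token_ids, offsets))

# Score one post (the token IDs are placeholders, not a real tokenizer's output).
model = PolicyClassifier(BaseTextEncoder())
token_ids = torch.tensor([12, 7, 431, 9])     # fake token IDs for a single post
offsets = torch.tensor([0])                   # one post starting at position 0
scores = torch.softmax(model(token_ids, offsets), dim=-1)
```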
For example, an AI model predicts whether a piece of content contains hate speech or violent and graphic content. A separate system, our enforcement technology, determines whether to take an action, such as deleting the content, demoting it or sending it to a human review team.
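A minimal sketch of that second step might look like the following, where a prediction score for one policy area is mapped to one of the possible actions. The policy names and thresholds are illustrative only; they are not Meta's actual values.

```python
from enum import Enum

class Action(Enum):
    DELETE = "delete"
    DEMOTE = "demote"
    HUMAN_REVIEW = "send_to_human_review"
    NO_ACTION = "no_action"

# Illustrative thresholds; a real system would tune these per policy.
THRESHOLDS = {
    "hate_speech":         {"delete": 0.97, "demote": 0.85, "review": 0.60},
    "violent_and_graphic": {"delete": 0.95, "demote": 0.80, "review": 0.55},
}

def decide(policy: str, score: float) -> Action:
    """Map a model's prediction score for one policy to an enforcement action."""
    t = THRESHOLDS[policy]
    if score >= t["delete"]:
        return Action.DELETE
    if score >= t["demote"]:
        return Action.DEMOTE
    if score >= t["review"]:
        return Action.HUMAN_REVIEW   # low confidence: let people make the call
    return Action.NO_ACTION

print(decide("hate_speech", 0.72))   # Action.HUMAN_REVIEW
```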
When we first build new technology for content enforcement, we train it to look for certain signals. For example, some technology looks for nudity in photos, while other technology learns to understand text. At first, a new type of technology might have low confidence about whether a piece of content violates our policies.
Review teams can then make the final call, and our technology can learn from each human decision. Over time—after learning from thousands of human decisions—the technology becomes more accurate.
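A simplified sketch of that feedback loop, assuming a hypothetical classifier and review interface: posts the model scores with low confidence are routed to reviewers, and the reviewers' decisions become new labelled examples the model can be retrained on. Everything here is a placeholder for illustration.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ToyClassifier:
    """Placeholder for a policy model; returns a violation score in [0, 1]."""
    training_data: list = field(default_factory=list)

    def score(self, content: str) -> float:
        return random.random()          # stand-in for a real prediction

    def retrain(self) -> None:
        # A real system would update the model's weights from the new labels.
        print(f"retraining on {len(self.training_data)} human-labelled examples")

REVIEW_BAND = (0.40, 0.90)              # illustrative "low confidence" score range

def human_review(content: str) -> bool:
    """Placeholder for a reviewer's final call (True = violating)."""
    return False

def process(model: ToyClassifier, content: str) -> None:
    s = model.score(content)
    if REVIEW_BAND[0] <= s < REVIEW_BAND[1]:
        label = human_review(content)                   # people make the final call
        model.training_data.append((content, label))    # the decision becomes training data

model = ToyClassifier()
for post in ["post 1", "post 2", "post 3"]:
    process(model, post)
model.retrain()
```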
Our policies also evolve over time to keep up with changes in our products, social norms and language. As a result, training both our technology and our review teams is a gradual, iterative process.
Technology is very good at detecting the same content over and over—millions of times, if necessary. Our technology will take action on a new piece of content if it matches or comes very close to another piece of violating content. This is particularly helpful for viral misinformation campaigns, memes and other content that can spread extremely quickly.
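One common way to do this kind of matching is to keep a bank of content that has already been found violating and compare each new post against it, catching exact copies and near-copies. The sketch below uses a plain hash plus word-shingle similarity purely as an illustration; the article does not describe Meta's production matching systems.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Exact-match fingerprint: identical re-posts hash to the same value."""
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

def shingles(text: str, n: int = 3) -> set:
    """Overlapping word n-grams, used to compare near-identical posts."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over word 3-grams: close to 1.0 for near-copies."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Bank of content that already violated a policy (illustrative).
known_violating = ["misleading headline about public health safety"]

def matches_known_violation(post: str, threshold: float = 0.8) -> bool:
    if any(fingerprint(post) == fingerprint(v) for v in known_violating):
        return True                       # exact re-post
    return any(similarity(post, v) >= threshold for v in known_violating)
```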
Technology can find and remove the same content over and over. But it's a much bigger challenge to get a machine to understand nuances in word choice, or how small differences can change the meaning of a post. Consider three versions of the same piece of content.
The first image is the original piece of misleading content, which includes misinformation about public health safety.
The second image is a screenshot of the first image, this time with the computer’s menu bar at the top.
Finally, the third image looks extremely similar to the first and second images, but it has two small word changes that make the headline accurate and no longer false.
This is fairly easy for humans to understand, but hard for technology to get right. There's a risk of erring too far in either direction. If the technology matches too aggressively, it will remove millions of non-violating posts, including the corrected version. If it doesn't match aggressively enough, it will treat the screenshot with the menu bar as different from the original and fail to take action on it.
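A toy illustration of that tradeoff, using made-up match scores for the three images described above plus one unrelated post, and sweeping a hypothetical matching threshold:

```python
# Illustrative (match_score, actually_violating) pairs for candidate posts.
examples = [
    (0.99, True),   # exact re-post of the misleading article
    (0.97, True),   # screenshot with the menu bar added
    (0.95, False),  # near-identical post whose two-word edit makes it accurate
    (0.30, False),  # unrelated post
]

for threshold in (0.90, 0.96, 0.98):
    removed = [(s, v) for s, v in examples if s >= threshold]
    wrongly_removed = sum(1 for _, v in removed if not v)            # over-enforcement
    missed = sum(1 for s, v in examples if v and s < threshold)      # under-enforcement
    print(f"threshold={threshold:.2f}  wrongly removed={wrongly_removed}  missed={missed}")
```

With these made-up numbers, a threshold of 0.90 removes the corrected post, 0.98 misses the screenshot, and only the middle setting gets both right, which is the balance the previous paragraph describes.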
We spend a lot of time working on this. Over the last few years, we've made several investments to help our technology get better at detecting subtle distinctions in content. It gets more precise every day as it continues to learn.