Building models and making predictions
The process begins with our artificial intelligence teams. They build machine learning models that can perform tasks such as recognising what's in a photo or understanding text. Then, our integrity teams – who are responsible for scaling the detection and enforcement of our policies – build upon these models to create more specific models that make predictions about people and content. These predictions help us enforce our policies. For example, one AI model predicts whether a piece of content contains hate speech or violent and graphic content. A separate system – our enforcement technology – determines whether to take action, such as deleting, demoting or sending the content to a human review team.
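To make the two-layer design concrete, here is a minimal sketch of how a classifier's prediction might be turned into an enforcement decision. The function names, policy labels and thresholds are all hypothetical illustrations, not the actual system.

```python
# Hypothetical sketch: a policy classifier produces a confidence score,
# and a separate enforcement layer maps that score to an action.

from dataclasses import dataclass


@dataclass
class Prediction:
    policy: str        # e.g. "hate_speech" or "violent_graphic" (illustrative labels)
    score: float       # model confidence that the content violates the policy


def decide_action(prediction: Prediction,
                  delete_threshold: float = 0.95,
                  demote_threshold: float = 0.80,
                  review_threshold: float = 0.50) -> str:
    """Map a model's confidence score to an enforcement action (thresholds are made up)."""
    if prediction.score >= delete_threshold:
        return "delete"            # high confidence: remove automatically
    if prediction.score >= demote_threshold:
        return "demote"            # likely violating: reduce distribution
    if prediction.score >= review_threshold:
        return "human_review"      # uncertain: send to a review team
    return "no_action"             # low confidence: leave the content up


# Example: a classifier is fairly sure but not certain, so the content is demoted.
print(decide_action(Prediction(policy="hate_speech", score=0.87)))  # -> "demote"
```

The point of separating prediction from enforcement is that the same model score can drive different actions as policies or thresholds change, without retraining the model itself.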
Learning by repetition, verified by humans
When we first build new technology for content enforcement, we train it to look for certain signals. For example, some technology looks for nudity in photos, while other technology learns to understand text. At first, a new type of technology might have low confidence about whether a piece of content violates our policies.
Review teams can then make the final call, and our technology can learn from each human decision. Over time – after learning from thousands of human decisions – the technology becomes more accurate.
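The loop described above can be sketched in a few lines: predictions the model is unsure about are routed to reviewers, and each reviewer decision becomes a new training label for the next version of the model. The data, confidence threshold and choice of classifier below are hypothetical stand-ins.

```python
# Hypothetical sketch of human-in-the-loop training: route low-confidence
# items to reviewers, collect their decisions as labels, then retrain.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy feature vectors for incoming content and the ground truth a reviewer would see.
features = rng.normal(size=(200, 5))
true_labels = (features[:, 0] + features[:, 1] > 0).astype(int)

# Start from a model trained on a small initial labelled set.
model = LogisticRegression().fit(features[:20], true_labels[:20])
labelled_X, labelled_y = list(features[:20]), list(true_labels[:20])

for x, truth in zip(features[20:], true_labels[20:]):
    confidence = model.predict_proba([x])[0].max()
    if confidence < 0.8:                      # the model is unsure
        human_decision = truth                # a reviewer makes the final call
        labelled_X.append(x)
        labelled_y.append(human_decision)     # the decision becomes a training label

# Periodically retrain on all accumulated human decisions.
model = LogisticRegression().fit(np.array(labelled_X), np.array(labelled_y))
```

Over many such cycles the pool of human-labelled examples grows, which is what "learning from thousands of human decisions" means in practice.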
Our policies also evolve over time to keep up with changes in our product, social norms and language. As a result, training both our technology and our review teams is a gradual, iterative process.
Detecting repeat violations
Technology is very good at detecting the same content over and over – millions of times, if necessary. Our technology will take action on a new piece of content if it matches or comes very close to another piece of violating content. This is particularly helpful for viral misinformation campaigns, memes and other content that can spread extremely quickly.
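A simple way to picture this kind of matching is fingerprinting: an exact hash catches identical re-uploads, and a looser similarity measure catches lightly edited copies. The similarity measure and threshold below are hypothetical, chosen only to illustrate the idea of near-duplicate matching against a bank of known violating content.

```python
# Hypothetical sketch of matching new content against known violating content:
# exact match via a hash, near match via character-trigram Jaccard similarity.

import hashlib


def fingerprint(text: str) -> str:
    """Exact-match fingerprint of a normalised piece of text."""
    return hashlib.sha256(text.lower().strip().encode()).hexdigest()


def ngrams(text: str, n: int = 3) -> set[str]:
    text = text.lower()
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}


def similarity(a: str, b: str) -> float:
    """Jaccard similarity of character trigrams: a crude near-duplicate signal."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)


known_violations = ["example of a violating meme caption"]   # illustrative bank
known_hashes = {fingerprint(t) for t in known_violations}


def matches_known_violation(text: str, threshold: float = 0.8) -> bool:
    if fingerprint(text) in known_hashes:          # identical re-upload
        return True
    return any(similarity(text, v) >= threshold    # slightly edited copy
               for v in known_violations)


print(matches_known_violation("Example of a violating meme caption!"))  # -> True
```

Because the check compares new content against previously actioned content rather than re-running a full classifier, the same violation can be caught at very large scale as it spreads.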