JAN 19, 2022
Sometimes, the meaning of a piece of content is immediately obvious to a person but less clear to technology. To keep people safe, Meta needs to train artificial intelligence to detect violating posts.
For example, the following content combines text and images. Two of the images are good-natured; the other two are potentially mean-spirited.
Without proper training, most AI struggles to make these distinctions. It either reads the text and takes the words at their literal meaning, or it looks at the image and judges only the photo's subject. People, on the other hand, instinctively pair the text and image together to understand the content.
One way we address this is by training our technology to first look at all the components of a post and only then determine its true meaning. This can go a long way toward helping AI more accurately detect what a person sees when viewing the same post.
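To make the idea concrete, here is a minimal sketch of that kind of fusion in PyTorch. The post doesn't describe Meta's actual architecture, so everything here (the encoders, the dimensions, and the FusionClassifier name) is illustrative; the key point is that the classification head sees the text and image representations together, rather than judging either one alone.

```python
# A minimal sketch of multimodal fusion. The encoders below are stand-ins;
# a production system would use large pretrained text and vision models.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=128, image_dim=128, hidden_dim=256):
        super().__init__()
        # Placeholder encoders mapping raw features to compact embeddings.
        self.text_encoder = nn.Sequential(nn.Linear(300, text_dim), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(2048, image_dim), nn.ReLU())
        # The head operates on the *joint* vector, so the same caption can
        # score as benign over one photo and as violating over another.
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # violating vs. benign
        )

    def forward(self, text_features, image_features):
        fused = torch.cat(
            [self.text_encoder(text_features), self.image_encoder(image_features)],
            dim=-1,
        )
        return self.head(fused)

# Toy usage with random vectors standing in for real embeddings.
model = FusionClassifier()
text = torch.randn(4, 300)    # e.g., pooled word embeddings
image = torch.randn(4, 2048)  # e.g., pooled CNN features
logits = model(text, image)   # shape: (4, 2)
```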
We also use a system that guides AI to learn directly from millions of current pieces of content and helps pick training data that reflects our goals. This is different from typical AI systems, which rely on a fixed dataset for training. Using this method helps us better protect people from hate speech and content that incites violence.
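The post doesn't say how that selection works, but one common way to realize the idea is active learning: continually score fresh content by how much the current model would learn from it, and route the most informative examples into the next training round. The sketch below, which reuses the hypothetical FusionClassifier from the previous example, uses prediction entropy as that score; the function names and the selection budget are illustrative.

```python
# A hedged sketch of data selection over live content: instead of training
# once on a fixed dataset, score fresh posts and keep the ones the current
# model is least sure about, so each round targets the model's weaknesses.
import torch

def uncertainty(model, text, image):
    """Entropy of the model's prediction; high means the example is informative."""
    with torch.no_grad():
        probs = torch.softmax(model(text, image), dim=-1)
    return -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)

def select_training_batch(model, candidate_stream, budget=256):
    """Pick the `budget` most uncertain examples from a stream of fresh posts."""
    scored = []
    for text, image in candidate_stream:
        scored.append((uncertainty(model, text, image).item(), text, image))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [(t, i) for _, t, i in scored[:budget]]

# Toy usage: score 1,000 fresh posts and keep the 256 most informative.
model = FusionClassifier()  # from the previous sketch
stream = [(torch.randn(1, 300), torch.randn(1, 2048)) for _ in range(1000)]
batch = select_training_batch(model, stream, budget=256)
```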
We still have work to do, but this training will help our technology continue to improve and better understand the true meaning of multimodal content.