At Meta, we’re committed to giving people a voice and keeping them safe.

Since 2016, we've used a strategy called "remove, reduce, inform" to manage content across Meta technologies.
This means we remove harmful content that goes against our policies, reduce the distribution of problematic content that doesn’t violate our policies, and inform people with additional context so they can decide what to click, read or share.
To help with this strategy, we have policies that describe what is and isn’t allowed on our technologies. Our teams work together to develop our policies and enforce them. Here’s how it works.

1. We collaborate with global experts in technology, public safety and human rights to create and update our policies.
2. We build features for safety, so people can report content and block, hide or unfollow accounts.
3. We enforce our policies using technology and human review.
4. We keep people safe and let them hold us accountable by sharing our policies, enforcement actions and transparency reports.
Our policies
Our policies define what is and isn’t allowed on Meta technologies. If content goes against our policies, we take action on it.
Our enforcement
We use technology and review teams to detect, review and take action on millions of pieces of content every day on Facebook and Instagram.
Our transparency reports
We publish regular transparency reports to show people how we’re doing at enforcing our policies.
Our approach to explaining ranking
Artificial intelligence (AI) systems inform the ranking of content for many experiences on Meta’s products, such as viewing Facebook Feed, watching reels on Instagram or browsing Facebook Marketplace.