Policy details

Policy Rationale

In order to maintain a safe environment and empower free expression, we restrict or remove accounts that are harmful to the community. We have built a combination of automated and manual systems to restrict and remove accounts that are used to egregiously or persistently violate our policies across any of our products.

Because account removal is a serious action, whenever possible, we aim to give our community opportunities to learn our rules and follow our Community Standards. For example, a notification is issued each time we remove content, and in most cases we also provide people with information about the nature of the violation and any restrictions that are applied. Our enforcement actions are designed to be proportional to the severity of the violation, the history of violations on the account, and the risk or harm posed to the community. Continued violations, despite repeated warnings and restrictions, or violations that pose severe safety risks will lead to an account being disabled.

Learn more about how Meta enforces its policies and restricts accounts in the Transparency Center.

We may restrict or disable accounts, other entities (Pages, groups, events) or business assets (Business Managers, ad accounts) that:

  • Violate our Community Standards involving egregious harms, including those we refer to law enforcement due to the risk of imminent harm to individual or public safety
  • Violate our Community Standards involving any harms that warrant referral to law enforcement due to the risk of imminent harm to individual or public safety
  • Violate our Advertising Standards involving deceptive or dangerous business harms
  • Persistently violate our Community Standards by posting violating content and/or managing violating entities or business assets
  • Persistently violate our Advertising Standards
  • Engage in activity or behavior indicative of a clear violating purpose

We may restrict or disable accounts, other entities (Pages, groups, events) or business assets (Business Managers, ad accounts) that are:

  • Owned by the same person or entity as an account that has been disabled
  • Created or repurposed to evade a previous account or entity removal, including those assessed to have common ownership and content as previously removed accounts or entities
  • Created to contact a user who has blocked an account
  • Otherwise used to evade our enforcement actions or review processes

We may restrict or disable accounts, other entities (Pages, groups, events) or business assets (Business Managers, ad accounts) that demonstrate:

  • Close linkage with a network of accounts or other entities that violate or evade our policies
  • Coordination within a network of accounts or other entities that persistently or egregiously violate our policies
  • Activity or behavior indicative of a clear violating purpose through a network of accounts

We may restrict or disable accounts, other entities (Pages, groups, events) or business assets (Business Managers, ad accounts) that engage in off-platform activity that can lead to harm on our platform, including those:

  • Owned by a convicted sex offender
  • Owned by a Designated Entity or run on their behalf
  • Prohibited from receiving our products, services or software under applicable laws

In the following scenarios, we may request additional information about an account to ascertain ownership and/or permissible activity:

  • Compromised accounts
  • Creating or using an account or other entity through automated means, such as scripting (unless the scripting activity occurs through authorized routes and does not otherwise violate our policies)
  • Empty accounts with prolonged dormancy

User experiences

See some examples of what enforcement looks like for people on Facebook, such as what it looks like to report something you don’t think should be on Facebook, to be told you’ve violated our Community Standards, or to see a warning screen over certain content.

Note: We’re always improving, so what you see here may be slightly outdated compared to what we currently use.

Reporting
1. Universal entry point

We have an option to report, whether it's on a post, comment, story, message, profile or something else.

2. Get started

We help people report things that they don’t think should be on our platform.

3. Select a problem

We ask people to tell us more about what’s wrong. This helps us send the report to the right place.

4. Check your report

Make sure the details are correct before you click Submit. It’s important that the problem selected truly reflects what was posted.

5. Report submitted

After these steps, we submit the report. We also lay out what people should expect next.

6. More options

We remove things if they go against our Community Standards, but you can also Unfollow, Block or Unfriend to avoid seeing posts in the future.

Post-report communication
1. Update via notifications

After we’ve reviewed the report, we’ll send the reporting user a notification.

2. More detail in the Support Inbox

We’ll share more details about our review decision in the Support Inbox. We’ll notify people that this information is there and send them a link to it.

3. Appeal option

If people think we got the decision wrong, they can request another review.

4. Post-appeal communication

We’ll send a final response after we’ve re-reviewed the content, again to the Support Inbox.

Takedown experience
1. Immediate notification

When someone posts something that doesn't follow our rules, we’ll tell them.

2. Additional context

We’ll also address common misperceptions and explain why we made the decision to enforce.

3. Policy explanation

We’ll give people easy-to-understand explanations about the relevant rule.

4. Option for review

If people disagree with the decision, they can ask for another review and provide more information.

5. Final decision

We set expectations about what will happen after the review has been submitted.

Warning screens
1. Warning screens in context

We cover certain content in News Feed and other surfaces, so people can choose whether to see it.

2. More information

In this example, we explain why we’ve covered the photo, with additional context from independent fact-checkers.

Enforcement

We have the same policies around the world, for everyone on Facebook.

Review teams

Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.

Stakeholder engagement

Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.

Get help with account integrity and authentic identity

Learn what you can do if you see something on Facebook that goes against our Community Standards.