Spam

Policy details

CHANGE LOG
Today
26 Jun 2024
30 Jun 2022
17 Dec 2020
30 Oct 2019
Policy rationale
We do not allow content that is designed to deceive, mislead or overwhelm users in order to artificially increase viewership. This content detracts from people's ability to engage authentically on our platforms and can threaten the security, stability and usability of our services. We also seek to prevent abusive tactics, such as spreading deceptive links to draw unsuspecting users in through misleading functionality or code, or impersonating a trusted domain.
Online spam is a lucrative industry. Our policies and detection must constantly evolve to keep up with emerging spam trends and tactics. In taking action to combat spam, we seek to balance raising the costs for its producers and distributors on our platforms, with protecting the vibrant, authentic activity of our community.
We do not allow:
  • Posting, sharing, engaging with content or creating accounts, groups, Pages, events or other assets, either manually or automatically, at very high frequencies.
    • We may place restrictions on accounts that are acting at lower frequencies when other indicators of spam (e.g. posting repetitive content) or signals of inauthenticity are present.
  • Attempting to or successfully selling, buying or exchanging platform assets, such as accounts, groups and Pages.
  • Attempting to or successfully selling, buying or exchanging site privileges, such as admin or moderator roles, or permission to post in specific spaces.
  • Attempting to or successfully selling, buying or exchanging content for something of monetary value, except clearly identified Branded Content, as defined by our Branded Content Policy.
  • Attempting to or successfully selling, buying or exchanging engagement, such as likes, shares, views, follows, clicks and use of specific hashtags. This includes:
    • Offering giveaways (i.e. offering others a chance to win) of cash or cash equivalents in exchange for engagement (e.g. "Anyone that likes my Page will be entered to win USD 500").
    • Offering to provide anything of monetary value in exchange for engagement (e.g. "If you like my Page, I will give you an iPhone!").
  • Requiring or claiming that users are required to engage with content (e.g. liking, sharing) before they are able to view or interact with promised content.
  • Sharing deceptive or misleading URLs, domains or applications, including:
    • Cloaking: Any attempt to circumvent our content policies by intentionally presenting different off-platform content, such as URLs or applications, to our integrity systems versus what is shown to users (see the sketch after this list).
    • Misleading links: Content containing a link that promises one type of content, but delivers something substantially different. This can include content in a promised app or software.
    • Deceptive redirect behaviour: Websites that require an action (e.g. completing a captcha, watching an ad or clicking through) before showing the expected landing page content and then change the URL's domain name once the action is complete, or that automatically redirect users to a substantially different domain without any user action (see the sketch after this list).
    • Like/share-gating: Requiring users to engage (in the form of likes, shares, follows or any other public-facing form of engagement) to gain access to specific, exclusive content.
    • Deceptive platform functionality: Mimicking the features or functionality of our services, such as fundraising tools, in-line polls, play buttons or the Like button, where that functionality does not exist or does not function as expected, in order to get a user to follow a link.
    • Deceptive landing page functionality: Websites with a misleading user interface that generates accidental traffic (e.g. pop-ups/pop-unders and clickjacking). This includes tactics such as trapping, where irrelevant pop-ups appear when a person attempts to leave the landing page.
    • Landing page or domain impersonation: An off-platform landing page, URL, website or domain that pretends to be a reputable brand or service, for example by using typos, misspellings or a look-alike landing page to impersonate a well-known, trusted site.
    • Other deceptive uses of URLs or links that are substantially similar to the above.
  • Notwithstanding the above, we do not prohibit:
    • Cross-promotion that is not triggered by payment to a third party.
    • Transferring admin or moderation responsibilities for a Page or group to another user based on their interest in the Page or group, rather than an exchange of value.
    • Posting or sharing clearly identified branded content.
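Two of the behaviours above, cloaking and deceptive redirects, are mechanical enough to illustrate in code. The sketch below is not Meta's detection pipeline; it is a minimal Python illustration, with assumed user-agent strings, an arbitrary similarity threshold and a naive hostname comparison, of how such signals could be surfaced: fetch a shared link once as a link crawler and once as an ordinary browser and compare what comes back, then check whether the domain a user finally lands on matches the domain in the link they were shown.

```python
import requests
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative user-agent strings (assumptions, not Meta's real probes).
# "facebookexternalhit" is the public user agent of Facebook's link
# crawler; a cloaking site keys off headers like these to show integrity
# systems different content than it shows real users.
CRAWLER_UA = "facebookexternalhit/1.1"
BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/120.0.0.0 Safari/537.36")


def fetch(url: str, user_agent: str) -> requests.Response:
    """Fetch a URL with the given user agent, following redirects."""
    return requests.get(url, headers={"User-Agent": user_agent},
                        timeout=10, allow_redirects=True)


def cloaking_signal(url: str, threshold: float = 0.5) -> bool:
    """Fetch the same URL as a crawler and as a browser and compare the
    two bodies. A low similarity ratio is one (noisy) signal that the
    site serves integrity systems different content than users see.
    The 0.5 threshold is arbitrary; a real system would use far richer
    features than raw text similarity."""
    as_crawler = fetch(url, CRAWLER_UA).text
    as_browser = fetch(url, BROWSER_UA).text
    return SequenceMatcher(None, as_crawler, as_browser).ratio() < threshold


def redirect_signal(url: str) -> bool:
    """Flag links whose final landing hostname differs from the hostname
    that was shared. A production system would compare registrable
    domains (eTLD+1) rather than raw hostnames, and would also replay
    the user action (captcha, ad view) that triggers a second hop."""
    final_url = fetch(url, BROWSER_UA).url
    return urlparse(url).hostname != urlparse(final_url).hostname
```

On their own, both checks are weak signals: legitimate sites vary content by device, and many honest links resolve through URL shorteners, so the first hop changes the hostname. That is one reason the rules above weigh multiple indicators of spam together rather than relying on any single signal.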
User experiences
See examples of what enforcement looks like for people on Facebook: reporting something that you don't think should be on Facebook, being told that you've violated our Community Standards and seeing a warning screen over certain content.
Note: We're always improving, so what you see here may be slightly outdated compared to what we currently use.
  • Reporting
  • Post-report communication
  • Takedown experience
  • Warning screens
Data
View the latest Community Standards Enforcement Report
Enforcement
We have the same policies around the world, for everyone on Facebook.
Review teams
Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.
Stakeholder engagement
Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.
Get help with spam
Learn what you can do if you see something on Facebook that goes against our Community Standards.
Visit our Help Centre