Child Sexual Exploitation, Abuse, and Nudity

Policy details

Change log

Oct 3, 2024
Jul 26, 2024
Jul 9, 2024
Jan 12, 2024
Dec 6, 2023
Aug 3, 2023
Dec 23, 2022
Apr 29, 2022
Nov 25, 2021
Oct 29, 2021
Aug 27, 2021
May 5, 2021
Jan 29, 2021
Nov 19, 2020
Jun 23, 2020
Dec 29, 2018
Policy Rationale
We do not allow content or activity that sexually exploits or endangers children. When we become aware of apparent child exploitation, we report it to the National Center for Missing and Exploited Children (NCMEC), in compliance with applicable law. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images.

We also work with external experts, including the Meta Safety Advisory Board, to discuss and improve our policies and enforcement around online safety issues, especially with regard to children. Learn more about the technology we’re using to fight against child exploitation.

Do not post:

Child sexual exploitation

Content, activity, or interactions that threaten, depict, praise, support, provide instructions for, make statements of intent, admit participation in, or share links of the sexual exploitation of children (including real minors, toddlers, or babies, or non-real depictions with a human likeness, such as in art, AI-generated content, fictional characters, dolls, etc.). This includes but is not limited to:

  • Sexual intercourse
    • Explicit sexual intercourse or oral sex, defined as mouth or genitals entering or in contact with another person's genitals or anus, when at least one person's genitals or anus is visible.
    • Implied sexual intercourse or oral sex, including when contact is imminent or not directly visible.
    • Stimulation of genitals or anus, including when activity is imminent or not directly visible.
    • Any of the above involving an animal.
  • Children with sexual elements, including but not limited to:
    • Restraints
    • Signs of arousal
    • Focus on genitals or anus
    • Presence of aroused adult
    • Presence of sex toys or use of any object for sexual stimulation, gratification, or sexual abuse
    • Sexualized costume
    • Stripping
    • Staged environment (for example, on a bed) or professionally shot (quality/focus/angles)
    • Open-mouth kissing
    • Stimulation of human nipples or squeezing of female breast (EXCEPT in the context of breastfeeding)
    • Presence of by-products of sexual activity
  • Content involving children in a sexual fetish context
  • Content that supports, promotes, advocates or encourages participation in pedophilia unless it is discussed neutrally in a health context
  • Content that identifies or mocks alleged victims of child sexual exploitation by name or image

Solicitation

Content that solicits sexual content or activity depicting or involving children, defined as:

  • Child Sexual Abuse Material (CSAM)
  • Nude imagery of real or non-real children
  • Sexualized imagery of real or non-real children

Content that solicits sexual encounters with children

Inappropriate interactions with children

Content that constitutes or facilitates inappropriate interactions with children, such as:

  • Arranging or planning real-world sexual encounters with children
  • Purposefully exposing children to sexually explicit language or sexual material
  • Engaging in implicitly sexual conversations in private messages with children
  • Obtaining or requesting sexual material from children in private messages

Exploitative intimate imagery and sextortion

Content that attempts to exploit real children by:

  • Coercing money, favors or intimate imagery with threats to expose real or non-real intimate imagery or information
  • Sharing, threatening, or stating an intent to share private sexual conversations or real or non-real intimate imagery

Sexualization of children

  • Content (including photos, videos, real-world art, digital content, and verbal depictions) that sexualizes real or non-real children
  • Groups, Pages, and profiles dedicated to sexualizing real or non-real children

Child nudity

Content that depicts real or non-real child nudity where nudity is defined as:

  • Close-ups of real or non-real children’s genitalia
  • Real or non-real nude toddlers, showing:
    • Visible genitalia, even when covered or obscured by transparent clothing
    • Visible anus and/or fully nude close-up of buttocks
  • Real or non-real nude minors, showing:
    • Visible genitalia (including genitalia obscured only by pubic hair or transparent clothing)
    • Visible anus and/or fully nude close-up of buttocks
    • Uncovered female nipples
    • No clothes from neck to knee - even if no genitalia or female nipples are showing
  • Unless the non-real imagery is for health purposes or is a non-sexual depiction of child nudity in real-world art

Non-sexual child abuse

Videos or photos that depict real or non-real non-sexual child abuse regardless of sharing intent, unless the imagery is from real-world art, cartoons, movies or video games

Content that praises, supports, promotes, advocates for, provides instructions for or encourages participation in non-sexual child abuse

In addition to removing accounts that violate our Child Sexual Exploitation, Abuse and Nudity (CSEAN) policies, our reviewers and automated systems consider a broad spectrum of signals to help prevent potentially unwanted or unsafe interactions.
  • We may restrict access to products and features (e.g., the ability to message certain other users) for adults based on their interactions with other accounts, searches for or interactions with violating content, or membership in communities (e.g. Groups) we have removed for violating our policies.
For the following content, we include a warning screen so that people are aware the content may be disturbing and limit the ability to view the content to adults ages eighteen and older:
  • Videos or photos that depict police officers or military personnel committing non-sexual child abuse
  • Videos or photos of non-sexual child abuse, when law enforcement, child protection agencies, or trusted safety partners request that we leave the content on the platform for the express purpose of bringing a child back to safety
For the following content, we include a sensitivity screen so that people are aware the content may be upsetting to some:
  • Videos or photos of violent immersion of a child in water in the context of religious rituals
For the following Community Standards, we require additional information and/or context to enforce:

For the following content, we include a warning label so that people are aware that the content may be sensitive:

  • Imagery posted by a news agency that depicts child nudity in the context of famine, genocide, war crimes, or crimes against humanity, unless accompanied by a violating caption or shared in a violating context, in which case the content is removed

We may remove imagery depicting the aftermath of non-sexual child abuse when reported by news media partners, NGOs, or other trusted safety partners.

We may remove content that identifies alleged victims of child sexual exploitation through means other than name or image if content includes information that is likely to lead to the identification of the individual.

We may remove content created for the purpose of identifying a private minor if there is a risk to the minor’s safety, when requested by law enforcement, a government agency, or a trusted partner, or when the content is self-reported by the minor or the minor’s parent/legal guardian.

User experiences

See some examples of what enforcement looks like for people on Facebook, such as what it looks like to report something you don’t think should be on Facebook, to be told you’ve violated our Community Standards, and to see a warning screen over certain content.

Note: We’re always improving, so what you see here may be slightly outdated compared to what we currently use.

Data

  • Prevalence: Percentage of times people saw violating content
  • Content actioned: Number of pieces of violating content we took action on
  • Proactive rate: Percentage of violating content we found before people reported it
  • Appealed content: Number of pieces of content people appealed after we took action on it
  • Restored content: Number of pieces of content we restored after we originally took action on it
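
To make the rate-style metrics concrete, here is a minimal illustrative sketch in Python. The counts and variable names are hypothetical, invented only for the example; they are not Meta data and do not reflect the actual measurement methodology.

# Illustrative only: hypothetical counts, not actual enforcement data.
content_actioned = 1_000      # pieces of violating content we took action on
found_proactively = 950       # pieces we found before anyone reported them
sampled_views = 5_000_000     # sampled content views
violating_views = 250         # sampled views that included violating content

proactive_rate = found_proactively / content_actioned  # share found before a report
prevalence = violating_views / sampled_views            # share of views that were violating

print(f"Proactive rate: {proactive_rate:.1%}")   # Proactive rate: 95.0%
print(f"Prevalence: {prevalence:.4%}")           # Prevalence: 0.0050%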

Reporting
1. Universal entry point

We have an option to report, whether it's on a post, comment, story, message, profile or something else.

2. Get started

We help people report things that they don’t think should be on our platform.

3. Select a problem

We ask people to tell us more about what’s wrong. This helps us send the report to the right place.

4. Check your report

Make sure the details are correct before you click Submit. It’s important that the problem selected truly reflects what was posted.

5. Report submitted

After these steps, we submit the report. We also lay out what people should expect next.

6. More options

We remove things if they go against our Community Standards, but you can also Unfollow, Block or Unfriend to avoid seeing posts in future.

Post-report communication
1. Update via notifications

After we’ve reviewed the report, we’ll send the reporting user a notification.

2. More detail in the Support Inbox

We’ll share more details about our review decision in the Support Inbox. We’ll notify people that this information is there and send them a link to it.

3. Appeal option

If people think we got the decision wrong, they can request another review.

4. Post-appeal communication

We’ll send a final response after we’ve re-reviewed the content, again to the Support Inbox.

Takedown experience
1. Immediate notification

When someone posts something that doesn't follow our rules, we’ll tell them.

2. Additional context

We’ll also address common misperceptions and explain why we made the decision to enforce.

3. Policy explanation

We’ll give people easy-to-understand explanations about the relevant rule.

4. Option for review

If people disagree with the decision, they can ask for another review and provide more information.

5. Final decision

We set expectations about what will happen after the review has been submitted.

Warning screens
1. Warning screens in context

We cover certain content in News Feed and other surfaces, so people can choose whether to see it.

2. More information

In this example, we explain why we’ve covered the photo, with additional context from independent fact-checkers.

Enforcement

We have the same policies around the world, for everyone on Facebook.

Review teams

Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.

Stakeholder engagement

Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.

Get help with child sexual exploitation, abuse and nudity

Learn what you can do if you see something on Facebook that goes against our Community Standards.