2024-007-IG-UA, 2024-008-FB-UA
Today, April 16, 2024, the Oversight Board selected a case bundle regarding two pieces of content containing AI-generated explicit images that were appealed by Facebook and Instagram users.
Meta took down both pieces of content for violating our policy on Bullying and Harassment, which prohibits “derogatory sexualized photoshops or drawings,” as laid out in the Facebook Community Standards and Instagram Community Guidelines. For one piece of content, Meta also determined that it violated our Adult Nudity and Sexual Activity policy, as laid out in our Facebook Community Standards.
We will implement the Board’s decision once it has finished deliberating, and will update this post accordingly. Please see the Board’s website for the decision when it is issued.
We welcome the Oversight Board’s decision today, July 25, 2024, on this case bundle. The Board overturned Meta’s original decision to leave up the content in the first case. The Board upheld Meta’s decision to take down the content in the second case. Since Meta previously removed the content for both cases, we will take no further action related to this bundle or the content.
When it is technically and operationally possible to do so, we will also take action on content that is identical to, and shared in the same context as, the content in the first case. For more information, please see our Newsroom post about how we implement the Board’s decisions.
After conducting a review of the recommendations provided by the Board, we will update this page.
Move the prohibition on “derogatory sexualized photoshop” into the Adult Sexual Exploitation Community Standard.
The Board will consider this recommendation implemented when this section is removed from the Bullying and Harassment policy and is included in the publicly available Adult Sexual Exploitation policy.
Our commitment: We will consider approaches to potentially harmful manipulated media under our Adult Sexual Exploitation policy, which we discuss in our response to recommendation #4, to better capture and remove harmful content that includes manipulated media. At this time, we do not expect that this will result in moving the prohibition on “derogatory sexualized photoshop” from our Bullying and Harassment Community Standard to our Adult Sexual Exploitation Community Standard, because the two policies address different types of harms. However, we do intend to revisit and potentially revise sections of our Adult Sexual Exploitation Community Standard to capture the spirit of this recommendation.
Considerations: Our prohibition against “derogatory sexualized photoshop” is included in our Bullying and Harassment policy because it is intended to remove content that degrades or disparages the person(s) depicted, in line with that policy’s rationale. We have other policies, such as our Adult Sexual Exploitation (ASE) policy and our Adult Nudity and Sexual Activity (ANSA) policy, under which we may also remove some forms of nudity or sexualized content on the platform.
These different policies serve different purposes, and when multiple policies apply, we typically apply the policy with the most stringent level of enforcement. If manipulated sexualized imagery violates both the Bullying and Harassment and the Adult Sexual Exploitation policies, we apply the stricter penalties associated with the Adult Sexual Exploitation policy.
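As a rough illustration only, and not a description of Meta’s internal systems, the precedence described above can be sketched as follows. The policy names come from the discussion above; the data structure, severity ordering, and function are hypothetical.

```python
# Hypothetical sketch: when content matches several policies, enforce under
# the one with the most stringent penalties. Names and severity values are
# illustrative assumptions, not Meta's internal representation.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class PolicyMatch:
    policy: str    # e.g. "Bullying and Harassment", "Adult Sexual Exploitation"
    severity: int  # higher means stricter penalties (assumed ordering)

def select_enforcement(matches: List[PolicyMatch]) -> Optional[PolicyMatch]:
    """Return the matched policy that carries the strictest penalties."""
    if not matches:
        return None
    return max(matches, key=lambda m: m.severity)

# Example: manipulated sexualized imagery flagged under two policies.
matches = [
    PolicyMatch("Bullying and Harassment", severity=1),
    PolicyMatch("Adult Sexual Exploitation", severity=2),
]
print(select_enforcement(matches).policy)  # "Adult Sexual Exploitation"
```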
Given that multiple policies may address this type of imagery, we believe that maintaining a line in the Bullying and Harassment policy aimed at removing sexualized imagery meant to degrade, while considering potential revisions to the Adult Sexual Exploitation policy, will address the spirit of this recommendation. We believe maintaining this policy line as part of the Bullying and Harassment policy is important because not all “derogatory” and “sexualized” content is necessarily exploitative. For example, some political and social commentary may include elements that are “sexualized” but not necessarily exploitative. In addition, the more restrictive penalties applied to violations of our Adult Sexual Exploitation policy may be disproportionate for some of the content captured under our prohibition against “derogatory sexualized photoshop.”
That said, we understand the Board’s concerns regarding capturing and enforcing on the most harmful forms of this content, and intend to address this in our commitment to recommendation #4. We will provide updates alongside our response to recommendation #4 in future reports to the Board.
Change the word “derogatory” in the prohibition on “derogatory sexualized photoshop” to “non-consensual.”
The Board will consider this recommendation implemented when the word “non-consensual” replaces the word “derogatory” in the prohibition on derogatory sexualized content in the publicly available Community Standards.
Our commitment: We are conducting broader work, in conjunction with recommendation #4, that we expect will address elements of this recommendation, though we do not expect to replace “derogatory” with “non-consensual” in our existing approach to “derogatory sexualized photoshop” under the Bullying and Harassment policy. We address non-consensual intimate imagery (NCII) as part of our Adult Sexual Exploitation policy, which differs from “derogatory sexualized” content, but we agree with the Board that there may be opportunities to clarify these policy distinctions.
Considerations: We do not require specific signals indicating a lack of consent for content to violate our policy line on derogatory sexualized manipulated imagery. Requiring signals that a party has not consented to the content being posted would narrow the scope and enforcement of the policy.
Separately, our approach to non-consensual intimate imagery (NCII) under our Adult Sexual Exploitation policy has been developed with input from external and internal experts and extensive research in this space. We do not allow sharing, threatening to share, stating an intent to share, or offering or asking for NCII. We currently define NCII as imagery that is non-commercial or produced in a private setting, in which the person depicted is nude or near-nude, engaged in sexual activity, or in a sexually suggestive pose, and for which there are signals indicating a lack of consent to share the imagery. These signals include reports from the person depicted, or from other sources, that there is a lack of consent or that the content is being shared in a vengeful context.
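As an illustrative sketch only, the definition above can be read as three conditions that must all hold. The field names and structure below are assumptions made for clarity and do not reflect Meta’s internal review tooling.

```python
# Hypothetical sketch of the NCII definition described above: all three
# criteria must be met. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ImageReport:
    non_commercial_or_private: bool  # non-commercial or produced in a private setting
    intimate_depiction: bool         # (near) nude, sexual activity, or sexually suggestive pose
    lack_of_consent_signal: bool     # e.g. report from the person depicted, or a vengeful sharing context

def is_ncii(report: ImageReport) -> bool:
    """All three criteria from the definition above must hold."""
    return (
        report.non_commercial_or_private
        and report.intimate_depiction
        and report.lack_of_consent_signal
    )
```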
We will consider ways to clarify this within our Bullying and Harassment and Adult Sexual Exploitation policies as part of recommendation #4, and will provide updates in future Oversight Board reports.
Replace the word “photoshop” in the prohibition on “derogatory sexualized photoshop” with a more generalized term for manipulated media.
The Board will consider this recommendation implemented when the word “photoshop” is removed from the prohibition on “derogatory sexualized” content and replaced with a more generalized term, such as “manipulated media.”
Our commitment: We will update our existing approach to “derogatory sexualized photoshop” to more broadly encapsulate manipulated media.
Considerations: We agree with the Board that the tools for creating altered content that is intended to be derogatory and sexualized have expanded beyond methods such as photoshop. In our announcements about approaches to addressing content created with generative AI more broadly, we noted that we will continue to remove content, whether it is AI-generated or not, when it otherwise violates our Community Standards. With this in mind, we will refine our existing Bullying and Harassment policy line about “photoshop” to better capture all methods for altering imagery. We will provide updates in future reports to the Board.
Harmonize its policies on non-consensual content by adding a new signal for lack of consent in the Adult Sexual Exploitation policy: context that content is AI-generated or manipulated. For content with this specific context, the policy should also specify that it need not be “non-commercial or produced in a private setting” to be violating.
The Board will consider this recommendation implemented when both the public-facing and private internal guidelines are updated to reflect this change.
Our commitment: We will assess the feasibility of adding a new signal for lack of consent in the Adult Sexual Exploitation policy when there is context that content is AI-generated or manipulated. As part of this, we will also explore updating the policy to clarify the existing requirement that content be “non-commercial or produced in a private setting” to violate.
Considerations: As the Board notes in its decision, our current Adult Sexual Exploitation policy does not allow sharing of non-consensual intimate imagery (“NCII”). We consider imagery to be NCII based on certain criteria, including whether the imagery is non-commercial or produced in a private setting and whether the person in the imagery is nude or near-nude, engaged in sexual activity, or in a sexual pose. There must also be a signal that there is no consent to share the imagery. Currently, these signals include direct reports from the person depicted in the imagery, other contextual elements, or information from independent sources.
Given the changing landscape and speed at which manipulated imagery may now be created, we agree with the Board that we should evaluate whether to update some of these signals. Our approach to addressing this changing landscape will also take into account the potential impact on user voice. People may share imagery that is created or edited using AI as a means of empowerment, art, or entertainment. We want to ensure our policies address harmful and unwanted imagery while leaving space for this type of creative expression.
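To make the change under evaluation concrete, here is a hypothetical sketch, building on the illustrative criteria above with assumed names and structure, of how context that content is AI-generated or manipulated could serve as a signal of lack of consent and remove the need for the “non-commercial or produced in a private setting” criterion. It is a sketch of the Board’s recommendation, not an implemented policy.

```python
# Hypothetical sketch of recommendation #4: AI-generated/manipulated context
# counts as a lack-of-consent signal, and for that context the
# non-commercial/private-setting criterion is not required. All names are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ImageContext:
    non_commercial_or_private: bool
    intimate_depiction: bool
    explicit_lack_of_consent_signal: bool  # e.g. report from the person depicted
    ai_generated_or_manipulated: bool      # new contextual signal under recommendation #4

def is_ncii_under_recommendation_4(ctx: ImageContext) -> bool:
    consent_lacking = (
        ctx.explicit_lack_of_consent_signal or ctx.ai_generated_or_manipulated
    )
    setting_requirement_met = (
        ctx.non_commercial_or_private or ctx.ai_generated_or_manipulated
    )
    return ctx.intimate_depiction and consent_lacking and setting_requirement_met

# Example: AI-generated intimate imagery of a real person, with no direct
# report from that person, would still meet these illustrative criteria.
example = ImageContext(
    non_commercial_or_private=False,
    intimate_depiction=True,
    explicit_lack_of_consent_signal=False,
    ai_generated_or_manipulated=True,
)
print(is_ncii_under_recommendation_4(example))  # True
```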
Earlier this year, we made an announcement about new approaches to AI-generated content, including the use of labels based on our detection of industry-shared signals of AI images or on people self-disclosing that they’re uploading AI-generated content. We are continuing to evolve our policies and approaches to media generated with AI in this space, and have provided the Board with updates in relation to its previous decision on the Altered Video of Biden case. Separately, we updated a number of our Community Standards with clarifications about how we treat digital imagery, including recent updates to our Adult Nudity and Sexual Activity policy that clarify that we remove AI-generated or manipulated imagery depicting nudity or sexual activity, with a few exceptions, regardless of whether it looks “photorealistic” (that is, like a real person).
This work is distinct from the previously discussed effort to harmonize our policies across Meta surfaces so that they are more standardized and clear, which, in turn, can improve enforcement accuracy. This effort has included aligning definitions across policies to create more efficient and clear internal guidance for policy enforcement, and we are in the final stages of these updates.
We will begin to assess the feasibility of changing our policy language and approach, considering how we can adapt our signal of “a private or non-commercial setting” to better reflect the changing nature of how potentially altered, sexualized content is created and shared. We will also continue considering ways in which some people may consensually share altered imagery that does not otherwise violate our Adult Sexual Exploitation, Adult Nudity and Sexual Activity, or Bullying and Harassment policies, while accounting for the fact that non-consensually shared intimate imagery is a severe policy violation that requires at-scale removal from our services, in line with Meta’s underlying values of safety and dignity. We will provide updates on this work in regular reports to the Board.