Harmonize its policies on non-consensual content by adding a new signal for lack of consent in the Adult Sexual Exploitation policy: context that content is AI-generated or manipulated. For content with this specific context, the policy should also specify that it need not be “non-commercial or produced in a private setting” to be violating.
The Board will consider this recommendation implemented when both the public-facing and private internal guidelines are updated to reflect this change.
Our commitment: We will assess the feasibility of adding a new signal for lack of consent in the Adult Sexual Exploitation Policy when there is context that content is AI-generated or manipulated. As part of this, we will also explore updating the policy to clarify the existing requirement that content be “non-commercial or produced in a private setting” to be violating.
Considerations: As the Board notes in its decision, our current Adult Sexual Exploitation policy does not allow sharing of non-consensual intimate imagery (“NCII”). We consider imagery to be NCII based on certain criteria, including if the imagery is non-commercial or produced in a private setting, and the person in the imagery is near-nude, nude, engaged in sexual activity, or in a sexual pose. There must also be a signal that there is no consent to share the imagery. Currently, these signals include direct reports from the person depicted in the imagery, other contextual elements, or information from independent sources.
Given the changing landscape and speed at which manipulated imagery may now be created, we agree with the Board that we should evaluate whether to update some of these signals. Our approach to addressing this changing landscape will also take into account the potential impact on user voice. People may share imagery that is created or edited using AI as a means of empowerment, art, or entertainment. We want to ensure our policies address harmful and unwanted imagery while leaving space for this type of creative expression.
Earlier this year, we made an announcement about new approaches to AI-generated content, including the use of labels based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content. We are continuing to evolve our policies and approaches to media generated with AI in this space and have provided the Board with updates in relation to their previous decision on the Altered Video of Biden case. Separately, we updated a number of our Community Standards with clarifications about how we treat digital imagery, including recent updates to our Adult Nudity and Sexual Activity policy, which clarify that we remove AI-generated or manipulated imagery depicting nudity or sexual activity regardless of whether it looks “photorealistic” (as in, it looks like a real person), with a few exceptions.
This work is distinct from the previously discussed effort to harmonize our policies across Meta surfaces so that they are more standardized and clear, which, in turn, can improve enforcement accuracy. This effort has included aligning definitions across policies to create more efficient and clear internal guidance for policy enforcement, and we are in the final stages of these updates.
We will begin to assess the feasibility of changing our policy language and approach, considering how we can adapt our signal of “a private or non-commercial setting” to better reflect the changing nature of how potentially altered, sexualized content is created and shared. We will also continue considering ways in which some people may consensually share altered imagery that does not otherwise violate our Adult Sexual Exploitation, Adult Nudity and Sexual Activity, or Bullying and Harassment policies. At the same time, we will account for the fact that non-consensually shared intimate imagery is a severe policy violation and requires at-scale removal from our services, in line with Meta’s underlying values of safety and dignity. We will provide updates on this work in regular reports to the Board.