To better combat misleading manipulated celebrity endorsements, Meta should enforce at scale its Fraud, Scams and Deceptive Practices policy prohibition on content that “attempts to establish a fake persona or to pretend to be a famous person in an attempt to scam or defraud” by providing reviewers with indicators to identify this content. Such indicators could include, for example, the presence of media manipulation watermarks and metadata, or clear factors such as video-audio mismatch.
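As an illustration only, the sketch below shows how indicators like those named in the recommendation (a manipulation watermark, provenance metadata, video-audio mismatch) could be surfaced to a human reviewer as simple flags. All field names, thresholds, and logic here are hypothetical and do not describe Meta's actual detection tooling.

```python
# Hypothetical sketch: aggregate reviewer-facing indicators of manipulated media.
# Field names and the 200 ms threshold are illustrative assumptions, not Meta's systems.
from dataclasses import dataclass, field


@dataclass
class MediaSignals:
    has_manipulation_watermark: bool = False               # a synthetic-media watermark was detected
    provenance_metadata: dict = field(default_factory=dict)  # e.g. edit-history / provenance tags
    audio_video_offset_ms: float = 0.0                     # measured audio-video (lip-sync) offset


def reviewer_indicators(signals: MediaSignals) -> list[str]:
    """Return human-readable indicators a reviewer could weigh; not a removal decision."""
    indicators = []
    if signals.has_manipulation_watermark:
        indicators.append("media manipulation watermark present")
    if signals.provenance_metadata.get("generated_or_edited"):
        indicators.append("metadata indicates generated or edited media")
    if signals.audio_video_offset_ms > 200:  # arbitrary illustrative threshold
        indicators.append("noticeable video-audio mismatch")
    return indicators


# Example: a clip flagged by a watermark detector with a large audio-video offset
print(reviewer_indicators(MediaSignals(has_manipulation_watermark=True, audio_video_offset_ms=350)))
```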
The Board will consider this recommendation implemented when both the public-facing and private internal guidelines are updated to reflect this change.
Commitment Statement: We are assessing the feasibility of implementing several updates to our Fraud, Scams, and Deceptive Practices Policy that we believe will address the spirit of this recommendation. These updates include work to build tools that, combined with regional context and existing enforcement approaches, will help us address “celeb-bait” content.
Considerations: What constitutes a ‘fake persona’ may differ across regions, and identifying celeb-bait, and celebrities in general, can be difficult at scale. As such, we are working with our engineering and product teams on tools that human reviewers can use to identify and face-match harmful celeb-bait content at scale. Given the complexity of this work, we are continuously improving the tools used to detect and prevent scams.
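For illustration, a minimal face-matching step of the kind referenced above could compare a face embedding against a reference index of known public figures. The embeddings, the 0.8 threshold, and the index below are stand-in assumptions; this only sketches the general technique, not Meta's detection systems.

```python
# Minimal sketch, assuming face embeddings are produced by some upstream model.
# Threshold and index contents are hypothetical; real systems use learned embeddings.
from typing import Optional

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_celebrity(face_embedding: np.ndarray,
                    celebrity_index: dict,
                    threshold: float = 0.8) -> Optional[str]:
    """Return the best-matching reference name above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, reference in celebrity_index.items():
        score = cosine_similarity(face_embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name


# Example with random stand-in vectors
rng = np.random.default_rng(0)
index = {"celebrity_a": rng.normal(size=128), "celebrity_b": rng.normal(size=128)}
print(match_celebrity(index["celebrity_a"] + rng.normal(scale=0.05, size=128), index))
```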
Our Fraud, Scams, and Deceptive Practices Community Standard explains that we aim to protect users and businesses from being deceived out of money, property, or personal information by removing content and addressing behavior that purposefully employs deceptive means. Those means may include misrepresentation, exaggerated claims, or the use of stolen information. As part of our existing approach to this content, we remove content that attempts to establish a fake persona or to pretend to be a famous person in order to scam or defraud, including manipulated audio and visual content created with these intentions. When content is reported, it is also assessed against our other Community Standards and policies, and we may remove it or apply a label if the manipulated content violates other policies, such as Bullying and Harassment or Misinformation.
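A heavily simplified sketch of the enforcement flow described above follows. The policy names come from the text, but the function, its inputs, and its outputs are hypothetical simplifications rather than internal enforcement logic.

```python
# Illustrative sketch only: candidate actions for a reported piece of content,
# following the paragraph above. Inputs and outputs are hypothetical.
def candidate_actions(violates_fake_persona_scam: bool,
                      other_violated_policies: list[str]) -> list[str]:
    """List the actions described for a reported piece of content."""
    actions = []
    if violates_fake_persona_scam:
        # Fake persona / impersonating a famous person to scam or defraud -> removal,
        # including manipulated audio and visual content with that intent
        actions.append("remove")
    for policy in other_violated_policies:
        # Other Community Standards (e.g. Bullying and Harassment, Misinformation)
        # may lead to removal or a label on manipulated content
        actions.append(f"remove or label under {policy}")
    return actions


print(candidate_actions(False, ["Misinformation"]))
```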
We currently enforce this fake persona policy on escalation, and we are assessing ways to enforce against this content more broadly at scale. As the Board notes in its decision, determining whether content is ‘fake’ or features a ‘celebrity’ is a highly context-dependent assessment that can require regional expertise.
We will provide regular updates to the Board in upcoming reports on the status of implementing this recommendation.