Content Targeting Human Rights Defender in Peru
Updated July 25, 2025
2025-012-FB-UA
Today, January 14, 2025, the Oversight Board selected a case appealed by a Facebook user regarding an image posted to Facebook that depicts a digitally altered headshot of the leader of a human rights organization in Peru. The image of the human rights defender appears to be AI-manipulated, showing their face covered with blood that is dripping downward. The accompanying caption in Spanish insinuates financial wrongdoing by non-governmental organizations (NGOs) and accuses NGOs of encouraging violent protests. The post was shared around the time of demonstrations in Peru’s capital when citizens protested against the government.
Meta determined that this content did not violate our policies on Violence and Incitement or Bullying and Harassment, as laid out in the Facebook Community Standards.
We will implement the Board’s decision once it has finished deliberating, and will update this post accordingly. Please see the Board’s website for the decision when they issue it.
Read the Board’s case selection summary
Case decision
We welcome the Oversight Board's decision today, May 27, 2025, on this case. The Board overturned Meta's original decision to leave up the content on Facebook. Since Meta previously disabled the account containing this content, the post is no longer on Facebook, so there will be no further action on this case.
When it is technically and operationally possible to do so, we will also take action on content that is identical and in the same context as this case. For more information, please see our Newsroom post about how we implement the Board's decisions.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
Recommendations
Recommendation #1 (Assessing Feasibility)
To ensure that its Violence and Incitement Community Standard clearly captures how veiled threats can occur across text and imagery, Meta should clarify that threats made out of “coded statements,” even “where the method of violence is not clearly articulated,” are prohibited in written, visual and verbal form.
The Board will consider this implemented when the public-facing language of the Violence and Incitement Community Standard reflects the proposed change.
Commitment Statement: We will assess ways to further clarify our approach to written, visual, and verbal signals in our assessment of veiled threats.
Considerations: We are continuing to assess updates to our Violence and Incitement Policy, including exploring updates to our external Community Standards language. As part of this, we will also explore how we can explain our approach to text and imagery in our assessments of coded statements, including veiled threats. In part due to previous Board recommendations (1 and 2), we include detailed information about our approach to “coded statements” in our Community Standards, which reflects the internal framework used by specialized teams to review veiled threats. These statements include veiled or implicit threats where the method of violence is not clearly articulated but where we have identified both other threat signals and contextual signals.
We will assess how we can more clearly incorporate written, visual, and verbal signals into our existing veiled threat assessments of coded statements. We will provide updates in future reports to the Oversight Board.
Recommendation #2 (Assessing Feasibility)
To ensure that potential veiled threats are more accurately assessed, in light of Meta’s incorrect interpretation of this content on escalation, the Board recommends that Meta produce an annual assessment of accuracy for this problem area. This should include a specific focus on false negative rates of detection and removal for threats against human rights defenders, and false positive rates for political speech (e.g., Iran Protest Slogan). As part of this process, Meta should investigate opportunities to improve the accurate detection of high-risk (low-prevalence, high-impact) threats at scale.
The Board will consider this implemented when Meta shares the results of this assessment, including how these results will inform improvements to enforcement operations and policy development.
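For illustration only, the sketch below shows how the false negative and false positive rates named in this recommendation could be computed from a human-labeled audit sample. The data structure, field names, and sample data are assumptions made for explanation; they do not reflect Meta's actual review schema or measurement systems.

```python
# Illustrative only: computing false negative and false positive rates from a
# hypothetical, human-labeled audit sample. All names here are assumptions.

from dataclasses import dataclass

@dataclass
class AuditItem:
    is_violating: bool   # ground-truth label from expert review
    was_removed: bool    # what enforcement actually did
    category: str        # e.g., "threat_against_hrd" or "political_speech"

def false_negative_rate(sample: list, category: str) -> float:
    """Share of truly violating items in a category that were NOT removed."""
    violating = [i for i in sample if i.category == category and i.is_violating]
    if not violating:
        return 0.0
    missed = [i for i in violating if not i.was_removed]
    return len(missed) / len(violating)

def false_positive_rate(sample: list, category: str) -> float:
    """Share of non-violating items in a category that WERE removed."""
    benign = [i for i in sample if i.category == category and not i.is_violating]
    if not benign:
        return 0.0
    wrongly_removed = [i for i in benign if i.was_removed]
    return len(wrongly_removed) / len(benign)

# Example: false negative rate for threats against human rights defenders,
# false positive rate for political speech (e.g., protest slogans).
sample = [
    AuditItem(True, False, "threat_against_hrd"),   # missed threat
    AuditItem(True, True, "threat_against_hrd"),    # correctly removed
    AuditItem(False, True, "political_speech"),     # wrongly removed slogan
    AuditItem(False, False, "political_speech"),    # correctly left up
]
print(false_negative_rate(sample, "threat_against_hrd"))  # 0.5
print(false_positive_rate(sample, "political_speech"))    # 0.5
```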
Commitment Statement: Conducting an “accuracy” assessment is challenging because the final determination is the result of complex factors that may be specific to a regional, historical, or otherwise situational context, unlike direct or explicit threats of violence, which can be reviewed at scale. However, we will work with our enforcement teams to assess ways that we can refine how content is surfaced for veiled threat assessment.
Considerations: In part due to prior recommendations from the Oversight Board, such as recommendation #2 in A Veiled Threat of Violence Based on Lyrics from a Drill Rap Song, we have refined the language in our Violence and Incitement Community Standard related to coded statements, including veiled or implicit threats. This refined language outlines that, on escalation and with additional context, we may remove coded statements where the method of violence is not clearly articulated but the threat is veiled or implicit, as indicated by a number of signals.
These signals, which are coded or indirect versions of the signals our reviewers use to determine whether or not to escalate a threat, may include references to specific locations or historical incidents of violence, for example. We also share examples to help explain each of these signals. In addition to these signals, we also require a contextual signal, such as a local context of imminent violence, a report of the content by the target or an authorized representative such as a local NGO, or context indicating that the target is a child.
Assessment of these instances requires nuance and situational awareness, which is why we apply these assessments only on escalation by specialized teams and with contextual information, as opposed to direct threats, which may be reviewed at scale.
For example, if one user shares a coded statement in a retaliatory context that implies another person ‘will pay for what they have done, I know where to find you,’ our escalation teams will review the post in its entirety, taking into account any contextual signals around the post, as well as any known context from our internal and external stakeholders about the user or the person they are referring to, to determine whether the statement is indeed a veiled threat.
While reviewers at scale may assess direct threats based on the presence of a target, a method, and a few other clear signals, escalation teams will require further investigation and collaboration with internal experts before enforcing on potential veiled threats.
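To make the distinction above more concrete, the following is a minimal, hypothetical sketch of how such routing logic could be modeled. The names, signal labels, and structure are assumptions made purely for illustration and are not Meta's internal implementation.

```python
# Illustrative sketch only: a simplified model of the routing described above.
# All names and signals are assumptions for explanation, not Meta's systems.

from dataclasses import dataclass, field

@dataclass
class Post:
    has_target: bool                        # a target is identified
    has_explicit_method: bool               # method of violence is clearly articulated
    coded_threat_signals: list = field(default_factory=list)   # e.g., "reference_to_past_violence"
    contextual_signals: list = field(default_factory=list)     # e.g., "target_reported_content"

def route_for_review(post: Post) -> str:
    # Direct threats (a target plus a clearly articulated method) can be
    # assessed at scale by standard review.
    if post.has_target and post.has_explicit_method:
        return "review_at_scale"
    # Coded statements require BOTH an indirect threat signal and a
    # contextual signal, and are assessed only on escalation by
    # specialized teams with additional context.
    if post.coded_threat_signals and post.contextual_signals:
        return "escalate_to_specialized_team"
    return "no_violence_and_incitement_action"

# Example: a retaliatory "you will pay for what you did" post that the target
# has reported would be escalated rather than actioned at scale.
example = Post(
    has_target=True,
    has_explicit_method=False,
    coded_threat_signals=["retaliatory_statement"],
    contextual_signals=["target_reported_content"],
)
print(route_for_review(example))  # escalate_to_specialized_team
```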
As we assess the feasibility of this recommendation, our Policy team will partner with escalation teams to evaluate samples of content escalated for potential veiled threats to consider opportunities to refine the application of this framework.