To ensure its Dangerous Organizations and Individuals Community Standard is tailored to advance its aims, Meta should allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not personally identifiable victims when shared in news reporting, condemnation and awareness-raising contexts. The Board will consider this recommendation implemented when Meta updates the public-facing Dangerous Organizations and Individuals Community Standard in accordance with the above.
Our commitment: We will assess the feasibility of introducing a “Mark as Disturbing” warning screen option for third-party imagery of a designated event depicting the moment of attack when it is shared in the context of news reporting, condemnation, or awareness raising and does not include personally identifiable victims. This will require an assessment of the technical feasibility of implementing this option at scale, as well as an assessment of the potential impact of this option on our ability to respond quickly in moments of crisis.
Considerations: Over the past several years, we’ve invested in improving the experience for people when we remove their content, and we have teams dedicated to continuing these improvements. As part of this work, we updated our notifications to inform people under which Community Standard a post was taken down (for example, Hate Speech, Adult Nudity and Sexual Activity, etc.), but we agree with the Board that we’d like to provide more.
As part of our Dangerous Organizations and Individuals Community Standard, we define Violating Violent Events (VVEs) as an attempt or an intentional act of high-severity violence by a non-state actor against civilian targets outside the context of armed conflict or war. We designate these events, such as terrorist events or multiple-victim violence, when we determine the required signals are met and the totality of the circumstances surrounding the event warrants event designation enforcement. Upon designation, we prohibit all References, Glorification, Support, or Representation of the event or its perpetrators, and prohibit sharing certain kinds of imagery associated with the attack.

We recently conducted policy development on our approach to VVEs, which included a Policy Forum discussion that the Board attended. Our policy development included consultation with global experts, research, and discussions with internal teams that respond to these events in order to align on changes to our previous approach to violating events. We also reviewed our commitments with the Global Internet Forum to Counter Terrorism, and considered all of our Community Standards so that we can proactively address and respond to violent incidents by removing content before it goes viral or encourages copycat behavior. However, we also weighed the importance of expression and of adopting proportionate penalties for sharing content that intends to condemn or raise awareness about these events. In instances where victims may be visible, we also considered our Community Standards value of dignity. During our Policy Forum, we evaluated an option to allow third-party content with a Mark as Disturbing screen. This option raised concerns about the possibility of the content being repurposed by adversarial actors to glorify attacks or attackers, or to normalize acts of violence.
However, we acknowledge the Board’s recommendation to further consider these potential trade-offs, and, as we note in our response to recommendation 2, we have implemented several changes to the VVE definition following our Policy Forum.
We will assess further approaches to violating events that balance voice, safety, and dignity in the aftermath of these events. Given the recency of our policy development on violating events, the complexity of adding a Mark as Disturbing option for a Community Standards area that does not use this enforcement option at scale, and other key considerations, we expect that this assessment will take time to fully complete. Due to the scope and complexity of this work, we expect that we will be able to provide a more detailed update on the status of this recommendation in 2026. We will share updates in future reports to the Oversight Board.