
Footage of Terrorist Attack in Moscow Bundle

Updated January 17, 2025
2024-038-FB-UA, 2024-039-FB-UA, 2024-040-FB-UA
Today, July 11, 2024, the Oversight Board selected a case bundle appealed by Facebook users regarding three pieces of content. Each piece of content contains a video that depicts the moment of a terrorist attack on visible victims at a concert venue in Moscow with a caption that condemns the attack or expresses support for the victims.
In each instance, Meta took down this content for violating our Dangerous Organizations and Individuals policy, as laid out in the Facebook Community Standards.
Under our Dangerous Organizations and Individuals policy, “we do not allow content that glorifies, supports, or represents events that Meta designates as violating violent events.” Meta internally designated the Moscow attack as a violating violent event (a terrorist attack) on March 22, 2024. As a result, we remove “any third party imagery depicting the moment of the attack on visible victims,” even if it is shared to raise awareness, neutrally discuss, or condemn the attack.
We will implement the board’s decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when they issue it.
Read the board’s case selection summary

Case decision
We welcome the Oversight Board’s decision today, November 19, 2024, on this case. The Board overturned Meta’s decisions to remove all three pieces of content. Meta will act to comply with the Board’s decision and reinstate the content to Facebook, with warning screens, within 7 days.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
Read the board’s case decision

Recommendations
On January 17, 2025, Meta responded to the board’s recommendations for this case. We are still assessing the feasibility of one recommendation and are implementing the other in full.

Recommendation 1 (assessing feasibility)
To ensure its Dangerous Organizations and Individuals Community Standard is tailored to advance its aims, Meta should allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not personally identifiable victims when shared in news reporting, condemnation and awareness-raising contexts.
The Board will consider this recommendation implemented when Meta updates the public-facing Dangerous Organizations and Individuals Community Standard in accordance with the above.
Our commitment: We will assess the feasibility of introducing a “Mark as Disturbing” warning screen option when third-party imagery of a designated event depicting the moment of attack is shared in the context of news reporting, condemnation, or awareness raising and does not include personally identifiable victims. This will require an assessment of the technical feasibility of implementing this option at scale, as well as an assessment of the potential impact of this option on our ability to respond quickly in moments of crisis.
Considerations: Over the past several years, we’ve invested in improving the experience for people when we remove their content, and we have teams dedicated to continuing these improvements. As part of this work, we updated our notifications to tell people under which Community Standard a post was taken down (for example, Hate Speech, Adult Nudity and Sexual Activity, etc.), but we agree with the board that we’d like to provide more.
As part of our Dangerous Organizations and Individuals Community Standard, we define Violating Violent Events (VVEs) as an attempt or an intentional act of high-severity violence by a non-state actor against civilian targets outside the context of armed conflict or war. We designate these events, such as terrorist events or multiple-victim violence, when we determine the required signals are met and the totality of the circumstances surrounding the event warrant event designation enforcement. Upon designation, we prohibit all References, Glorification, Support, or Representation of the event or its perpetrators, and prohibit sharing certain kinds of imagery associated with the attack.
We recently conducted policy development on our approach to VVEs, which included a Policy Forum discussion that the Board attended. Our policy development included consultation with global experts, research, and discussions with internal teams that respond to these events, in order to align on changes to our previous approach to violating events. We also reviewed our commitments with the Global Internet Forum to Counter Terrorism, and considered how all of our Community Standards can proactively address and respond to violent incidents by removing content before it goes viral or encourages copycat behavior. However, we also weighed the importance of expression and of adopting proportionate penalties for sharing content that intends to condemn or raise awareness of these events. In instances where victims may be visible, we also considered our Community Standards value of dignity.
During our Policy Forum, we evaluated an option to allow third-party content with a Mark as Disturbing screen. This option raised concerns about the possibility of the content being repurposed by adversarial actors to glorify attacks or attackers, or to normalize acts of violence. However, we acknowledge the Board’s recommendation to further consider these potential tradeoffs, and, as we note in our response to recommendation 2, we have implemented several changes to the VVE definition following our Policy Forum.
We will assess further approaches to violating events that balance voice, safety, and dignity in the aftermath of these events. Given the recency of our policy development on violating events, the complexity of adding a Mark as Disturbing option to a Community Standards area that does not use this enforcement option at scale, and other key considerations, we expect this assessment will take time to complete. Due to the scope and complexity of this work, we expect to be able to provide a more detailed update on the status of this recommendation in 2026. We will share updates in future reports to the Oversight Board.

Recommendation 2 (implementing fully)
To ensure clarity, Meta should include a rule under the “We remove” section of the Dangerous Organizations and Individuals Community Standard and move the explanation of how Meta treats content depicting designated events out of the policy rationale section and into this section.
The Board will consider this recommendation implemented when Meta updates the public-facing Dangerous Organizations and Individuals Community Standard moving the rule on footage of designated events to the “We remove” section of the policy.
Our commitment: We plan to update our Community Standards with further details explaining our approach to Violating Violent Events, and we will consider this recommendation implemented in full later this year.
Considerations: This year we plan to update our Community Standards with our definition of Violating Violent Events (VVEs). As noted above, we define a VVE as an attempt or an intentional act of high-severity violence by a non-state actor against civilian targets outside the context of armed conflict or war. This external update, and the updates to our internal approach to VVEs, were the result of extensive policy development and a Policy Forum discussion earlier in the year. Our policy development focused on the treatment of imagery from a violating event, resulting in updates to our overall approach to content in the aftermath of these events. Once this change is implemented, we will provide an update in a future report to the Board.
