Holocaust Denial Post

Updated March 21, 2024
2023-022-IG-UA
Today, the Oversight Board selected a case appealed by an Instagram user concerning a photo containing a list of false claims about the Holocaust that, taken together, imply Holocaust denial. These false claims included speculation about mentions of the Holocaust by Allied leaders and the false suggestion that the infrastructure used to carry out the Holocaust was not built until after the war.
Upon initial review, Meta left this content up. However, upon further review, we determined the content did in fact violate our policy on Hate Speech, as laid out in the Facebook Community Standards and Instagram Community Guidelines, and was left up in error. We therefore removed the content.
Meta removes content that contains hate speech, including “harmful stereotypes linked to intimidation, exclusion, or violence on the basis of a protected characteristic” such as Holocaust denial. Holocaust denial includes content that “denies, calls into doubt, or minimizes the fact that the Holocaust happened, the number of victims, or the mechanisms of destruction used.”
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision once it is issued.
Read the board’s case selection summary

Case decision
We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to leave this content up. Meta previously removed this content.
When it is technically and operationally possible to do so, we will also take action on content that is identical and made in the same context.
After conducting a review of any recommendations provided by the Board, we will update this post with initial responses to those recommendations.
Read the Board’s case decision

Recommendations

Recommendation 1 (Implementing in Part)
To ensure that the Holocaust denial policy is accurately enforced, Meta should take the technical steps to ensure that it is sufficiently and systematically measuring the accuracy of its enforcement of Holocaust denial content. This includes gathering more granular details about its enforcement of this content, as Meta has done in implementing the Mention of the Taliban in News Reporting recommendation no. 5.
The Board will consider this recommendation implemented when Meta provides the Board with its first analysis of enforcement accuracy of Holocaust denial content.
Our commitment: We will continue to measure the accuracy of our enforcement on content that contains harmful stereotypes, which includes Holocaust denial content. We will also conduct an analysis of the accuracy of our enforcement on Holocaust denial content and share this analysis directly with the Board.
Considerations: Ensuring that our policies are accurately enforced is a major priority of our company. For that reason, we continuously monitor and assess the accuracy of our enforcement measures. For Holocaust denial content, we measure our enforcement as part of our harmful stereotypes policy. Our Community Standards categorize content that includes Holocaust denial as Tier 1 violations under the Hate Speech policy, falling under “harmful stereotypes linked to intimidation, exclusion, or violence on the basis of a protected characteristic.” We remove content containing these harmful stereotypes and routinely measure the accuracy of this enforcement. We apply the same label to all harmful stereotypes because it improves the performance of our classifiers.
In response to the Board’s request, our teams will pull data from select relevant markets to analyze the prevalence and accuracy of our enforcement on Holocaust denial content. Because this data is collected and analyzed by market, several regional analyses are more feasible than a global review, so our aim is to select representative markets for analysis. This analysis requires extensive work by a number of internal teams, including data validation processes and legal and privacy review. We will endeavor to complete this analysis as quickly as possible, and will provide updates on this process as it progresses.
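As a rough illustration of what a per-market accuracy analysis of this kind can look like, the sketch below computes enforcement precision and recall from a sample of decisions re-labeled by expert human review, grouped by market. The metric choices, data shape, and field names are assumptions for illustration only, not Meta's actual pipeline or schema.

```python
from collections import defaultdict

def enforcement_accuracy(samples):
    """Compute per-market precision/recall for enforcement decisions.

    Each sample is a (market, enforced, violating) tuple, where
    `enforced` is the automated enforcement decision and `violating`
    is the ground-truth label from expert human re-review. All names
    and the tuple layout are illustrative assumptions.
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for market, enforced, violating in samples:
        c = counts[market]
        if enforced and violating:
            c["tp"] += 1          # correct removal
        elif enforced and not violating:
            c["fp"] += 1          # over-enforcement
        elif not enforced and violating:
            c["fn"] += 1          # under-enforcement
    report = {}
    for market, c in counts.items():
        denom_p = c["tp"] + c["fp"]
        denom_r = c["tp"] + c["fn"]
        report[market] = {
            "precision": c["tp"] / denom_p if denom_p else None,
            "recall": c["tp"] / denom_r if denom_r else None,
        }
    return report
```

Grouping by market mirrors the regional approach described above: each market's sample yields its own precision (how often removals were correct) and recall (how much violating content was caught).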
We are in the process of aligning on the parameters of analyzing the subset of Holocaust denial content with relevant teams and will provide more granular details about enforcement of this content directly to the Board. We will provide an update on our progress in future Oversight Board updates.

Recommendation 2 (Implementing in Full)
To provide greater transparency that Meta’s appeals capacity is restored to pre-pandemic levels, Meta should publicly confirm whether it has fully ended all COVID-19 automation policies put in place during the COVID-19 pandemic.
The Board will consider this recommendation implemented when Meta publishes information publicly on each COVID-19 automation policy and when each was ended or will end.
The Oversight Board also reiterates the importance of its previous recommendations calling for alignment of the Instagram Community Guidelines and Facebook Community Standards, noting the relevance of these recommendations to the issue of Holocaust denial (recommendations no. 7 and no. 9 from the Breast Cancer Symptoms and Nudity case; recommendation no. 10 from the Öcalan’s Isolation case; recommendation no. 1 from the Ayahuasca Brew case; and recommendation no. 9 from the Sharing Private Residential Information policy advisory opinion). In line with those recommendations, Meta should continue to communicate delays in aligning these rules, and it should implement any short-term solutions to bring clarity to Instagram users.
Our commitment: We no longer apply automation to address the limited review capacity that resulted from the COVID-19 pandemic. We continue to rely on automation systems as an important tool for content moderation at scale; however, those systems are unrelated to early pandemic constraints.
Considerations: Our Transparency Center details how our content review systems are structured using technology to rank content so that our review teams can prioritize incoming content in order of importance.
During the pandemic, we introduced temporary COVID-19-specific automation to address reduced human reviewer capacity, which included auto-closing certain appeal jobs that were not prioritized for review. The configuration of this automation has since changed; however, we internally retained the legacy COVID-19 label because it was already built into our systems and would have been technically difficult to change. We are working with internal teams to explore the feasibility of updating this classifier name to avoid confusion about its purpose going forward. Our responses to the Board’s questions in this case could have been clearer on this point. To clarify previous responses to the Board: the label is internal-only, and we no longer share COVID-19-related messaging with users when their appeals are actioned through this technology. Instead, they receive a message stating that this is a standard decision made by our technology, operating as intended.
In response to the Punjabi Concern Over the RSS in India case in 2021, we noted our efforts to restore human review to pre-pandemic levels while better prioritizing human review of appeals on our content moderation decisions. We have since further improved our technology to better prioritize human review of appeals where necessary. This combination of enhanced technology and human review enables us to consistently optimize capacity for reviewing appeals. We will continue to consider how to adjust our internal labels to more accurately reflect our automated enforcement processes, and we will detail our progress in future Oversight Board updates.
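The prioritization pattern described above — ranking incoming appeals so that limited human-review capacity goes to the highest-priority items while the remainder are resolved by automation — can be sketched minimally as below. The function name, the severity-score input, and the capacity parameter are all hypothetical stand-ins; this is not Meta's implementation.

```python
import heapq

def route_appeals(appeals, reviewer_capacity):
    """Split appeals into human-review and automated queues by priority.

    `appeals` is a list of (appeal_id, severity_score) pairs; the score
    is a hypothetical stand-in for whatever ranking signal a real
    system would use. The top `reviewer_capacity` appeals by score go
    to human review; the rest are handled by automation.
    """
    top = heapq.nlargest(reviewer_capacity, appeals, key=lambda a: a[1])
    human_ids = {appeal_id for appeal_id, _ in top}
    human_queue = [a for a in appeals if a[0] in human_ids]
    automated_queue = [a for a in appeals if a[0] not in human_ids]
    return human_queue, automated_queue
```

The design point is simply that capacity, not the appeal backlog, bounds human review: as reviewer capacity grows, fewer appeals fall through to the automated path.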
