Symbols Adopted By Dangerous Organizations

Updated August 11, 2025
2025-015-IG-MR, 2025-016-IG-MR, 2025-017-IG-MR
Today, February 13, 2025, the Oversight Board selected a case bundle referred by Meta regarding three pieces of content posted to Instagram, all involving symbols that are often used by hate groups but can also have other uses.
The first piece of content was an image of a woman with a part of her face covered by a scarf. The words “Slavic Army” and a kolovrat symbol, a type of swastika used both by neo-Nazis and some pagans without apparent extremist intent, were superimposed on the scarf. The image was accompanied by a caption that expressed the user’s pride in being Slavic and stated that kolovrat is a symbol of faith, war, peace, hate, and love.
The second piece of content was a carousel of images depicting a woman in various poses wearing an iron cross necklace and a t-shirt printed with an AK-47 assault rifle and the words “Defend Europe.” The Fraktur typeface on the t-shirt and the Odal (or Othala) rune in the caption – a symbol from the runic alphabet used in Europe prior to its replacement by the Latin alphabet – are both associated with Nazis and neo-Nazis. The caption also contained the hashtag #DefendEurope, a slogan used by white supremacists and other extremist organizations opposing immigration.
The third piece of content was also a carousel of images: drawings of an Odal rune wrapped around a sword, accompanied by a quote about blood and fate from a German author and soldier who fought in both world wars. The caption shares a selective early history of the rune without mentioning its Nazi and neo-Nazi appropriation, and concludes that the rune is about “heritage, homeland, and family.” The caption also states that prints of the image are for sale.
Meta determined that the first two pieces of content violated our Dangerous Organizations and Individuals policy, as laid out in the Instagram Community Guidelines and Facebook Community Standards. Meta determined that the third piece of content did not violate our policies and left the content up.
Meta referred this case to the Board because we found it significant and difficult, as it creates tension between our values of safety and voice.
While these symbols and others like them may be used to promote dangerous organizations and individuals, to help members of these groups identify themselves, or to show support for a group’s objectives, prohibiting them entirely could limit discussions of history, linguistics, and art.
We will implement the Board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the Board’s website for the decision once it is issued.
Read the Board’s case selection summary

Case decision
We welcome the Oversight Board’s decision today, June 12, 2025, on this case bundle. The Board upheld Meta's decision to remove the content in the first two cases and to leave up the content in the third case.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.

Recommendations

Recommendation 1 (implementing fully)
To provide more clarity to users, Meta should make public the internal definition of “references” and define its subcategories under the Dangerous Organizations and Individuals Community Standard.
The Board will consider this recommendation implemented when Meta updates the public-facing Dangerous Organizations and Individuals Community Standard.
Commitment Statement: We will include an update to how we approach “references” as part of other ongoing updates we are planning to make to our Dangerous Organizations and Individuals (DOI) Community Standard to complete commitments to the Oversight Board.
Considerations: As the Board notes in its decision, we provide a number of explanations in our Community Standards for Dangerous Organizations and Individuals to detail how we define and approach glorification, representation, and support of a DOI. Our policy also notes that we may remove content about a designated DOI when it is unclear or lacks context to help us understand the user’s intent. These “references” to a designated DOI may include symbols, positive references, incidental depictions, captionless photos, or unclear satire/humor that do not directly support or glorify a designated DOI but lack clear intent. However, as our Community Standards note, we allow content when users’ intent clearly condemns, neutrally discusses, or reports on designated DOIs.
We will update our Community Standards to more clearly describe our approach to references. We expect that this may take some time to fully implement as we plan to incorporate it as part of an ongoing workstream. We will provide updates in future reports to the Board.

Recommendation 2 (assessing feasibility)
To ensure that the list of designated symbols under the Dangerous Organizations and Individuals policy does not include symbols that no longer meet Meta’s criteria for inclusion, Meta should introduce a clear and evidence-based process to determine how symbols are added to the groups and which group each designated symbol is added to, and periodically audit all designated symbols, ensuring the list covers all relevant symbols globally and removing those no longer satisfying published criteria, as outlined in section 5.2 of this decision.
The Board will consider this recommendation implemented when Meta has established this process and provides the Board with the documentation and the results of its first audit based on these new rules.
Commitment Statement: In line with our previous commitments, we will assess the feasibility of introducing a clearer, evidence-based process for designating and de-designating symbols that may be associated with Dangerous Organizations and Individuals (DOI).
Considerations: We will pursue next steps to better establish a dynamic, global, and evidence-based process to inform how we address DOI symbols. As part of this refined process, we will also consider an evaluation of existing symbols and will assess if there is a clear process for de-designating symbols when they no longer meet criteria to be considered associated with certain entities and ideologies.
As the Board recommends, we may consider relevant research findings, including research into symbol usage trends on the company’s platforms across languages and regions to review the list of designated symbols. Also as the Board notes, we acknowledge that the use of symbols may change over time and will consider ways to further address the risks of potential under- or overenforcement, based on the evolving uses of certain symbols. We will provide updates in future reports to the Board.

Recommendation 3 (assessing feasibility)
To address potential false positives involving designated symbols under the Dangerous Organizations and Individuals Community Standard, Meta should develop a system to automatically identify and flag instances where designated symbols lead to “spikes” that suggest a large volume of non-violating content is being removed, similar to the system the company created in response to the Board’s recommendation no. 2 in Colombian Police Cartoon. This system will allow Meta to analyze “spikes” involving designated symbols and inform the company’s future actions, including amending its practices to be more accurate and precise.
The Board will consider this recommendation implemented when Meta develops this system and informs the Board of the actions taken to avoid potential overenforcement detected by the system.
Commitment Statement: We will continue using our existing system to automatically identify and flag instances where our banks may be generating false positives. As we assess the feasibility of recommendation no. 2 from this decision, we will also consider whether we can provide additional policy guidance on Dangerous Organizations and Individuals (DOI) symbols to teams that add content to DOI-specific banks in order to further reduce the risk of false positives.
Considerations: Media Matching Service (MMS) banks are collections of content used primarily to detect and take action on media (e.g., images, videos) across Facebook and Instagram that violates Meta’s Community Standards. Content that has been determined to be violating or non-violating is stored in these banks, which then detect matching content across Facebook and Instagram. Banks are created to align with specific Community Standards policies, such as Dangerous Organizations and Individuals.
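To illustrate the general idea of a media-matching bank, here is a minimal sketch of a lookup based on perceptual hashes and a Hamming-distance threshold. All names (BankEntry, MediaBank, match) and the hash-matching approach are illustrative assumptions, not a description of Meta’s actual MMS implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BankEntry:
    media_hash: int   # 64-bit perceptual hash of the banked image or video frame
    label: str        # "violating" or "non_violating"
    policy: str       # e.g. "dangerous_organizations_and_individuals"

@dataclass
class MediaBank:
    entries: List[BankEntry] = field(default_factory=list)
    max_distance: int = 6  # Hamming-distance threshold counted as a "match"

    def add(self, entry: BankEntry) -> None:
        self.entries.append(entry)

    def match(self, media_hash: int) -> Optional[BankEntry]:
        """Return the closest banked entry within the distance threshold, if any."""
        best, best_distance = None, self.max_distance + 1
        for entry in self.entries:
            distance = bin(media_hash ^ entry.media_hash).count("1")
            if distance < best_distance:
                best, best_distance = entry, distance
        return best

# Example: a new upload whose hash nearly matches a banked violating symbol
bank = MediaBank()
bank.add(BankEntry(media_hash=0x9F3A55C210DE77AB, label="violating",
                   policy="dangerous_organizations_and_individuals"))
hit = bank.match(0x9F3A55C210DE77AF)  # differs by one bit
if hit and hit.label == "violating":
    print("route for enforcement / review")
```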
In a recent update to the Board, we described an improvement to this system that helps operational teams identify banked media generating large volumes of false positives by automatically detecting spikes in matches against non-violating content. When we identify these issues, teams re-review the content to evaluate whether it should be removed from the MMS bank. This applies to all content in our DOI banks, including DOI symbols. As a result, if a designated symbol is generating a large number of false positives in our DOI banks, this system would indicate that we should consider removing the content from the bank.
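As a rough illustration of the kind of spike detection described above, the sketch below flags days on which a banked item’s false-positive rate (matches later judged non-violating) rises far above its typical baseline. The thresholds, function name, and median-based baseline are assumptions for illustration only, not Meta’s actual detection logic.

```python
from statistics import median
from typing import List

def flag_false_positive_spikes(daily_matches: List[int],
                               daily_false_positives: List[int],
                               rate_multiplier: float = 5.0,
                               min_matches: int = 50) -> List[int]:
    """Return indices of days whose false-positive rate spikes well above the
    typical rate for this banked item, signalling it may need human re-review."""
    rates = [fp / m if m else 0.0
             for fp, m in zip(daily_false_positives, daily_matches)]
    baseline = median(rates)
    flagged = []
    for day, (rate, matches) in enumerate(zip(rates, daily_matches)):
        # Only flag days with enough match volume and a rate far above baseline.
        if matches >= min_matches and rate > rate_multiplier * max(baseline, 0.001):
            flagged.append(day)
    return flagged

# Example: a sudden jump in matches later judged non-violating on the last day
matches = [200, 180, 220, 210, 6000]
false_positives = [4, 3, 5, 4, 900]
print(flag_false_positive_spikes(matches, false_positives))  # -> [4]
```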
Our Policy team will work with the DOI banking teams in Global Operations to assess whether, in addition to the system in place to flag false positives, additional policy guidance on designated symbols could mitigate over-enforcement when these symbols are shared in a non-violating context. We will report further progress on clarifying DOI symbol banking guidance in future updates to the Board.

Recommendation 4 (assessing feasibility)
To provide more transparency to users, Meta should publish a clear explanation on how it creates and enforces its designated symbols list under the Dangerous Organizations and Individuals Community Standard. This explanation should include the processes and criteria for designating the symbols and how the company enforces against different symbols, including information on strikes and any other enforcement actions taken against designated symbols.
The Board will consider this recommendation implemented when the information is published in the Transparency Center and is hyperlinked in the public-facing Dangerous Organizations and Individuals Community Standard.
Commitment Statement: As we assess the feasibility of implementing recommendation 2, we will also consider the level of detail that would be most appropriate to publish on any process that we may implement, balancing our commitment to transparency while protecting the integrity of our internal systems.
Considerations: We will consider publishing details about any process that we implement in conjunction with recommendations 2 and 3. Currently, on our Transparency Center page about account restrictions, we note that we may consider certain violations of our Dangerous Organizations and Individuals policy “severe.” Our Dangerous Organizations and Individuals (DOI) Community Standard also explains that we do not allow symbols that represent DOIs to be used on our platforms, and we share further details about other designation processes on a page published in part due to prior Board recommendations. We will provide updates on this work in a future report to the Board.