Criticism of EU Migration Policies and Immigrants Bundle

Last updated June 20, 2025
2025-003-FB-UA, 2025-004-FB-UA
On October 17, 2024, the Oversight Board selected a case bundle appealed by Facebook users regarding discussions centered on the European Union’s Pact on Migration and Asylum.
The first piece of content is an image that shows Polish Prime Minister Donald Tusk looking through a peephole of a door, with a black man walking up behind him. The accompanying text criticizes the Tusk government’s support of the Pact and suggests that it has resulted in bringing in “murzynów,” a word used to describe black people that is considered offensive by some and subject to debate in Poland. The caption encourages others to oppose the Pact before the European Parliament.
The second piece of content is an image depicting a blond-haired, blue-eyed woman holding up her hand in a stop gesture, with both a stop sign and German flag in the background. German text over the image states that people should no longer come to Germany as they don’t need any more “gang-rape specialists.”
Meta determined that neither piece of content violated our Hate Speech policies as laid out in the Facebook Community Standards, and left both pieces of content up.
We will implement the Board’s decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when they issue it.
Read the board’s case selection summary

Case decision
We welcome the Oversight Board’s decision, issued on April 23, 2025, on this case. The Board overturned Meta’s decision to leave up the content in both cases. Meta will act to comply with the Board's decision and remove the content in both cases within 7 days.
When it is technically and operationally possible to do so, we will also take action on content that is identical to and in the same context as the first case. For more information, please see our Newsroom post about how we implement the Board’s decisions.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.

Recommendations

Recommendation 1 (Assessing Feasibility)
As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of immigrants, in particular refugees and asylum seekers, with a focus on markets where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity.
The Board will consider this recommendation implemented when Meta provides the Board with robust data and analysis on the effectiveness of its prevention or mitigation measures on the cadence outlined above, and when Meta reports on this publicly.
Commitment Statement: We will assess the feasibility of this multi-part recommendation.
Considerations: Meta conducts ongoing, integrated human rights due diligence to identify, prevent, mitigate and address potential adverse human rights impacts related to our policies, products and operations, in line with the UNGPs, related guidance, and our human rights policy. Ahead of the January 7, 2025 changes, we assessed their risks and took into account relevant mitigations, such as the availability of other policies and user reports to address potentially harmful content.
We will assess the feasibility of implementing this recommendation and provide updates in future reports to the Oversight Board. We will also bundle future updates for this recommendation under recommendation #1 in the Gender Identity Debate Videos case.

Recommendation 2 (Assessing Feasibility)
Meta should add the term “murzyn” to its Polish market slur list.
The Board will consider this recommendation implemented when Meta informs the Board this has been done.
Commitment Statement: We will follow our slur designation process to assess if the term ‘murzyn’ should be included in the Polish slur list.
Considerations: We have initiated our slur designation process to assess whether or not to add the term ‘murzyn’ to the Polish slur list. As we detail in our Transparency Center page linked above, our slur designation process involves a number of teams with regional expertise including policy, stakeholder engagement, and local markets teams. Regional teams conduct both qualitative and quantitative analysis to understand how a word is used on the platform. These teams also gather information on any additional definitions or uses as well as how a term may be locally and colloquially used in a particular region.
This designation process takes time to ensure that we are not removing speech unnecessarily. Even once a slur is designated, we may still allow its use when it is used self-referentially, in a news reporting context, or to condemn its use. We will provide updates on the status of this process in our next biannual report to the Oversight Board.
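
As a rough illustration of how these contextual exceptions interact with a designated slur, the minimal sketch below shows one possible decision rule. The function name, context labels, and placeholder slur list are assumptions made for this illustration only; they do not reflect Meta's internal enforcement tooling or data.

# Minimal sketch, assuming hypothetical context labels; not Meta's internal tooling.

ALLOWED_CONTEXTS = {"self_referential", "news_reporting", "condemnation"}

def violates_slur_policy(term: str, market_slur_list: set, context: str) -> bool:
    """Flag a term only if it is a designated slur used outside the allowed contexts."""
    if term.lower() not in market_slur_list:
        return False
    return context not in ALLOWED_CONTEXTS

# Example: a designated term quoted in news reporting is not flagged,
# while the same term used as an attack is.
slur_list = {"designated_term"}  # placeholder entry, not a real designation
print(violates_slur_policy("designated_term", slur_list, "news_reporting"))  # False
print(violates_slur_policy("designated_term", slur_list, "attack"))          # True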

Recommendation 3 (Implementing in Full)
When Meta audits its slur lists, it should ensure it carries out broad external engagement with relevant stakeholders. This should include consulting with impacted groups and civil society.
The Board will consider this recommendation implemented when Meta amends its explanation of how it audits and updates its market-specific slur lists on its Transparency Center.
Commitment Statement: We regularly engage with stakeholders, including civil society, to maintain accurate lists of slurs across global regions. We are committed to formalizing this process to ensure that teams who manage our partnerships with external policy stakeholders and civil society groups will be involved at an early stage in our annual audit process to maximize opportunities for external input.
Considerations: Our current process for auditing slurs takes place on an annual basis, with additional audits taking place around elections and during crisis events. In this process, Global Operations teams, including regional experts, conduct a full review of our slurs lists for each region and provide samples of potential new slurs to add or remove from our lists based on new trends. Then, Content and Public Policy teams partner with Global Operations teams to carry out additional reviews and conduct outreach with external experts—including Trusted Partners. In considering the designation of new slurs, our teams consider both the harm associated with the use of these terms and the potential for over-enforcement and limitations on legitimate speech, particularly in the context of elections and discourse on issues of political significance.
As we standardize the process of engaging with external stakeholders, our Global Operations teams will partner with Trusted Partners to provide an early opportunity to review the lists if significant changes are proposed. This will take place before lists are finalized, so that external inputs may be holistically considered. We will update our Transparency Center page on “bringing local context to global standards” to include information on civil society’s new role in this process.

Recommendation 4 (No Further Action)
To reduce instances of content that violates its Hateful Conduct policy, Meta should update its internal guidance to make it clear that Tier 1 attacks (including those based on immigration status) are prohibited, unless it is clear from the content that it refers to a defined subset of less than half of the group. This would reverse the current presumption that content refers to a minority unless it specifically states otherwise.
The Board will consider this recommendation implemented when Meta provides the Board with the updated internal rules.
Commitment Statement: We do not anticipate reversing our current approach under the Hateful Conduct Community Standard, which allows content that does not clearly refer to more than half of a group, as a reversal would likely restrict legitimate speech on our platforms.
Considerations: Our Hateful Conduct policy aims to remove content that directly attacks people on the basis of their protected characteristics. We remove what we define as Tier 1 attacks against people, but allow content when it is unclear whether the attack refers to more than half, or the majority, of a particular group of people. This means that when content refers to “some” or “lots of” a particular group of people, we allow that content, even if it is coupled with a Tier 1 Hate Speech attack, because it is not clearly targeting an entire group and may be related to more nuanced debate or legitimate speech that would otherwise be restricted by enforcement at scale. While this allowed speech may at times still be considered offensive by some, reversing the existing approach could place an undue expectation on users to explain their positions. Given these considerations, we do not at this time expect to reverse the current approach to this speech on our platforms, and we will provide no further updates on this recommendation in our next report to the Board.
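
To make the difference between the two presumptions concrete, the sketch below contrasts the current approach described above with the Board's proposed reversal. The scope categories, field names, and functions are assumptions for illustration only; they are not Meta's internal guidance or enforcement logic.

# Hypothetical sketch contrasting the two presumptions; illustration only.
from dataclasses import dataclass

# How clearly the content scopes the targeted group:
#   "whole_group" - clearly refers to the entire group (e.g. "all ...")
#   "subset"      - clearly refers to less than half (e.g. "some ...", "lots of ...")
#   "ambiguous"   - no clear indication either way

@dataclass
class Post:
    has_tier1_attack: bool
    scope: str  # "whole_group", "subset", or "ambiguous"

def removed_under_current_approach(post: Post) -> bool:
    # Current presumption: ambiguous content is treated as referring to a
    # subset, so only attacks that clearly target the whole group are removed.
    return post.has_tier1_attack and post.scope == "whole_group"

def removed_under_board_recommendation(post: Post) -> bool:
    # Proposed reversal: ambiguous content is treated as targeting the whole
    # group, so only attacks clearly limited to a subset would be allowed.
    return post.has_tier1_attack and post.scope != "subset"

# The two rules diverge only on ambiguous content:
ambiguous_post = Post(has_tier1_attack=True, scope="ambiguous")
print(removed_under_current_approach(ambiguous_post))      # False: left up today
print(removed_under_board_recommendation(ambiguous_post))  # True: would come down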
