Case on a comment related to the January 2021 protests in Russia

Last updated June 12, 2023
2021-004-FB-UA
On March 2, 2021, the Oversight Board selected a case appealed by someone on Facebook regarding a comment they made on a post containing pictures, a video and text about the January 2021 protests in support of Alexei Navalny held in Saint Petersburg, Russia. The commenter called another user a “common and cowardly bot” (as translated from Russian) over comments the other person had made against the ongoing protests.
Facebook took down this content for violating our policy on bullying and harassment, as laid out in the Facebook Community Standards. For private individuals, we “remove content that’s meant to degrade or shame,” and in some instances we require self-reporting, as happened in this case, so we can better understand whether the individual is feeling bullied or harassed.
Read the board’s case selection summary

Case decision
We welcome the Oversight Board’s decision today on this case. Meta has acted to comply with the board’s decision immediately, and this content has been reinstated.
In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly. For more information, please see our Newsroom post about how we will implement the board’s decisions. We will update this post again once any further action is taken on other identical content with parallel context.
After reviewing the recommendations the board provided alongside its decision, we will update this post.
Read the board’s case decision

Recommendations
On June 25, 2021, Meta responded to the board’s recommendations for this case. We are committing to take action on each recommendation.

Recommendation 1 (implementing in part)
Explain the relationship between its Bullying and Harassment policy rationale and the “Do nots” as well as the other rules restricting content that follow it.
Our commitment: We’ll explain the meaning of the Policy Rationale section in the introduction of the Community Standards.
Considerations: We begin every section of the Community Standards with a “Policy Rationale,” followed by our specific policies. Based on the board’s recommendation, we’ll explain the Policy Rationale and its relationship to other provisions of the Community Standards in the introduction to the Community Standards.
Next steps: We plan to add this information to the introduction of the Community Standards by the end of the year.

Recommendation 2 (assessing feasibility)
Differentiate between bullying and harassment and provide definitions that distinguish the two acts. The Community Standard should also clearly explain to users how bullying and harassment differ from speech that only causes offense and may be protected under international human rights law.
Our commitment: We’ll research key points to differentiate between bullying and harassment and potentially update our Community Standards based on our findings.
Considerations: We currently treat “bullying” and “harassment” as a single violation area. Although the two can be considered distinct types of abuse, our experience in enforcing our policies has shown that addressing both under one policy area is clearer for users and effective at reducing harm. To offer more clarity, we’re looking into how we can differentiate between them in a way that does not negatively impact enforcement.
Next steps: We’ll complete our research in the next few months and provide an update once we are done.

Recommendation 3 (implementing fully)
Clearly define its approach to different target user categories and provide illustrative examples of each target category (i.e. who qualifies as a public figure). Format the Community Standard on Bullying and Harassment by user categories currently listed in the policy.
Our commitment: We’ll reformat the Bullying and Harassment Community Standard to clarify the policy differences for public figures and private individuals. We’ll also detail our Bullying and Harassment enforcement approach, including definitions and examples of these user categories.
Considerations: Currently, the Policy Rationale section of the Bullying and Harassment policy contains an overview of our approach to distinguishing between public figures and private individuals. In response to this recommendation, we’ll reformat the Bullying and Harassment Community Standard to explain our approach to public figures and private individuals more clearly. In addition, we’ll provide details and examples of how we define “public figures” and “private individuals.”
Next steps: We plan to reformat the Bullying and Harassment Community Standard as well as publish the details of our enforcement approach later this year.

Recommendation 4 (implementing in part)
Include illustrative examples of violating and non-violating content in the Bullying and Harassment Community Standard to clarify the policy lines drawn and how these distinctions can rest on the identity status of the target.
Our commitment: In addition to the actions we are taking in response to recommendation 3, we will publish examples of violating and non-violating content later this year.
Considerations: As discussed in our response to recommendation 3, we’ll publish the details of our approach to Bullying and Harassment enforcement, including examples of the kinds of content that violate these policies, as well as content that is non-violating. These examples will clarify for people what content is and isn’t allowed under this policy.
Next steps: We plan to publish the details of our Bullying and Harassment enforcement approach later this year.

Recommendation 5 (assessing feasibility)
When assessing content including a “negative character claim” against a private adult, Meta should amend the Community Standard to require an assessment of the social and political context of the content. Meta should reconsider the enforcement of this rule in political or public debates where the removal of the content would stifle debate.
Our commitment: We’re assessing whether it is feasible to provide ways to escalate content for additional review based on political and social context.
Considerations: This recommendation proposes that we scale the ability to moderate potentially violating content differently depending on the social or political context within which a user posts. By its nature, though, content moderation at scale requires principled criteria designed to ensure that our content moderators’ decisions are fast, accurate, consistent, and non-arbitrary. Although our content moderators are familiar with the context of the content they review, because they are trained in relevant languages and in the regions where the content is posted, certain contextual indicators are not necessarily available at every stage of the review process. For instance, content moderators working at scale have a more limited ability to assess intent or subtext than specialized teams do.
Given the specific nature of certain social and political content, adding more context-specific guidance could introduce too much subjectivity into the scaled enforcement of our Community Standards, undercutting our ability to enforce consistently at a global scale. Moreover, there are operational challenges associated with increasing the amount of information our content moderators review, which may slow reviews without improving their accuracy. Therefore, we will assess whether it is possible and impactful to escalate content for additional review based on political and social context.
Next steps: We plan to complete this assessment and share an update on our progress in the next few months.

Recommendation 6 (assessing feasibility)
Whenever Facebook removes content because of a negative character claim that is only a single word or phrase in a larger post, it should promptly notify the user of that fact, so that the user can repost the material without the negative character claim.
Our commitment: We’re exploring ways of notifying users of specific violating words under multiple sections of the Community Standards before we take an enforcement action.
Considerations: We’re exploring ways of increasing transparency and using automation to help users self-remediate. Currently, when our automated systems detect with high confidence a potential Bullying and Harassment violation in content a user is about to post, we may inform the user that their post might violate the policy. This gives users an opportunity to modify the content or decide not to post it at all. The notification is currently active in English and being tested in additional languages. However, this process does not include the specificity the board recommends, because (1) the feature is not available after the moment of posting, and (2) it does not notify the user of the specific words or phrases that may violate the policy.
Even before the board’s recommendation, we had been exploring how to use automation to highlight specific words or phrases that violate our Hate Speech policies, potentially allowing users to edit and repost previously violating content. This automation may eventually be used in other policy areas, for instance to identify specific words or phrases that violate our Bullying and Harassment policy.
We do not, however, have this preemptive detection capability for human review. In this case, a user reported a comment, which our content moderators reactively reviewed. Based on this recommendation, we will assess whether we can highlight specific violating words and phrases for users as a result of human review.
Next steps: We need time to build and test the tools necessary for this work. We plan to complete our assessment and share an update on our progress in the first half of 2022.
