
First Bundled Case About Violence Against Women

Last updated: September 11, 2023
2023-002-IG-UA
Today, the Oversight Board selected a case appealed by an Instagram user regarding a video in Swedish containing a woman’s testimony about her experience in a violent intimate relationship. The caption discusses the nature of gender-based violence inflicted by men upon women, claiming that men physically and mentally abuse women “all the time, every day.”
Upon initial review, Meta took down this content for violating our policy on Hate Speech, as laid out in our Instagram Community Guidelines and Facebook Community Standards. However, upon additional review, we determined that we had removed this content in error: subject matter experts concluded that the caption was a qualified behavioral statement that raises awareness of gender-based violence against women, and we reinstated the post.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
Read the board’s case selection summary

Case decision
We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to remove the content from the platform. Meta previously reinstated this content, so no further action will be taken on it.
After reviewing the recommendations provided by the board alongside its decision, we will update this page.
Read the board’s case decision

Recommendations

Recommendation 1 (Implementing in Part)
To allow users to condemn and raise awareness of gender-based violence, Meta should include the exception for allowing content that condemns or raises awareness of gender-based violence in the public language of the Hate Speech policy. The Board will consider this recommendation implemented when the public-facing language of the Hate Speech Community Standard reflects the proposed change.
Our commitment: Both our Violence and Incitement policy and our Hate Speech policy allow content to be shared in a condemning or awareness-raising context, including for gender-based violence. We are currently working to clarify both policies and will consider opportunities to articulate this allowance more clearly in our public-facing language.
Considerations: Under both our Violence and Incitement policy and our Hate Speech policy, we allow content that is shared in order to condemn or raise awareness. For example, if someone were to share a video of gender-based violence with a caption condemning the actions depicted, we would allow it unless the video or caption contained additional violating content. If, however, the content also included a direct attack against people based on their protected characteristics, it would be removed under our Hate Speech policy.
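As a rough illustration of this allowance logic, a minimal sketch follows; the function and signal names are hypothetical and do not reflect Meta’s internal implementation:

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    """Hypothetical signals a classifier or reviewer might attach to a post."""
    depicts_gender_based_violence: bool
    condemns_or_raises_awareness: bool   # e.g. the caption condemns the acts shown
    contains_direct_attack: bool         # attack based on a protected characteristic

def allow_under_hate_speech_policy(signals: ContentSignals) -> bool:
    """Sketch of the allowance described above: a condemning or
    awareness-raising context permits the content, unless the post
    also contains an independent Hate Speech violation."""
    if signals.contains_direct_attack:
        return False  # a direct attack violates regardless of framing
    if signals.depicts_gender_based_violence:
        return signals.condemns_or_raises_awareness
    return True

# A condemning caption over gender-based-violence footage is allowed...
print(allow_under_hate_speech_policy(ContentSignals(True, True, False)))  # True
# ...but the same post with a direct protected-characteristic attack is not.
print(allow_under_hate_speech_policy(ContentSignals(True, True, True)))   # False
```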
We are refining and clarifying our Community Standards as part of a holistic review of the overlaps and differences between our policies for organic content and ads, and we will consider ways to more clearly articulate where we may allow speech that condemns or raises awareness of gender-based violence.

Recommendation 2 (Implementing in Part)
To ensure that content condemning and raising awareness of gender-based violence is not removed in error, Meta should update guidance to its at-scale moderators with specific attention to rules around qualification. This is important because the current guidance makes it virtually impossible for moderators to make the correct decisions even when Meta states that the first post should be allowed on the platform. The Board will consider this recommendation implemented when Meta provides the Board with updated internal guidance that shows what indicators it provides to moderators to grant allowances when considering content that may otherwise be removed under the Hate Speech policy.
Our commitment: Under our Hate Speech Community Standards, we remove broad generalizations and unqualified behavioral statements about a group or groups of people when they constitute an attack based on protected characteristics. We are pursuing work to clarify our internal guidance on behavioral statements, generalizations, and qualified behavioral statements. This includes long-term work to increase alignment in our approach to potentially violating content within our Hate Speech policy area and across policy areas.
Considerations: We are currently scoping work to refine the guidance in our Hate Speech policy around behavioral statements, generalizations, and qualified behavioral statements. We recognize that there may be room to provide additional nuance and context for when we allow content shared in a condemning or awareness-raising context, and we are exploring ways to make this update. We will provide additional detail on our progress in future Quarterly Updates.

Recommendation 3 (Implementing in Full)
To improve the accuracy of decisions made upon secondary review, Meta should assess how its current review routing protocol impacts accuracy. The Board believes Meta would increase accuracy by sending secondary review jobs to different reviewers than those who previously assessed the content. The Board will consider this implemented when Meta publishes a decision, informed by research on the potential impact on accuracy, on whether to adjust its secondary review routing.
Our commitment: We have ongoing monitoring mechanisms in place that assess how our review routing protocols and enforcement decisions impact accuracy across reviewers. We are continuously working to refine and improve how these systems affect our full set of enforcement metrics, including accuracy.
Considerations: Meta has review protocols in place to ensure, to the extent feasible, that secondary review is allocated to a different reviewer than the one who conducted the initial review. As shared in our response to recommendation #31 of the Policy Advisory Opinion (PAO) on Meta’s Cross-Check Policies, we have an internal system called Dynamic Multi Review (DMR) that enables us to review certain content multiple times, by different reviewers, before making a final decision. This helps ensure the quality and accuracy of human review upon secondary review, taking into account factors such as virality and potential for harm.
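To make the routing idea concrete, here is a minimal sketch of how a multi-review allocator might avoid re-assigning content to reviewers who already assessed it; the data structures and fallback rule are assumptions for illustration, as DMR’s actual implementation is not public:

```python
import random

def allocate_reviewers(content_id: str,
                       prior_reviewers: set[str],
                       reviewer_pool: list[str],
                       num_reviews: int) -> list[str]:
    """Pick `num_reviews` distinct reviewers for a re-review of
    `content_id`, preferring reviewers who have not seen it before.
    Falls back to prior reviewers only if the pool is too small."""
    fresh = [r for r in reviewer_pool if r not in prior_reviewers]
    random.shuffle(fresh)
    chosen = fresh[:num_reviews]
    if len(chosen) < num_reviews:  # capacity-constrained fallback
        seen = [r for r in reviewer_pool if r in prior_reviewers]
        random.shuffle(seen)
        chosen += seen[:num_reviews - len(chosen)]
    return chosen

# A viral post gets two additional reviews, neither from initial reviewer "r1".
print(allocate_reviewers("post-123", {"r1"},
                         ["r1", "r2", "r3", "r4"], num_reviews=2))
```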
We have a dedicated global operations measurement team that monitors enforcement decisions across all content types. This team monitors the accuracy and quality of our review decisions in order to develop integrity metrics that validate our review processes. We do this through protocols such as audit validation, which ensures that our accuracy metrics can be trusted on an ongoing basis and remain aligned with the source of truth. Our operational measurement teams also engage with scaled reviewers to maintain validation across our metrics, monitor tooling and triaging to report and address malfunctions on an ongoing basis, and generate insights to consistently improve review accuracy.
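One simple way such audit validation can work in practice is to score reviewer decisions against an expert-audited sample that serves as the source of truth. The sketch below is hypothetical; the decision labels and scoring function are illustrative, not Meta’s internal metrics:

```python
def reviewer_accuracy(decisions: dict[str, str],
                      audited_truth: dict[str, str]) -> float:
    """Compare a reviewer's decisions ("remove"/"allow", keyed by
    content id) against an expert-audited sample treated as the
    source of truth; return the share of matching decisions."""
    overlap = decisions.keys() & audited_truth.keys()
    if not overlap:
        raise ValueError("no audited items to score against")
    correct = sum(decisions[c] == audited_truth[c] for c in overlap)
    return correct / len(overlap)

decisions = {"a": "remove", "b": "allow", "c": "remove"}
audit     = {"a": "remove", "b": "remove", "c": "remove"}
print(f"accuracy vs. audit: {reviewer_accuracy(decisions, audit):.0%}")  # 67%
```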
In practice, decisions that are escalated to secondary review are processed through channels that ensure at least one reviewer other than the initial assessor is allocated. Often more than one additional reviewer is allocated, depending on factors such as the violation type, associated account tags, and accumulated views. We have also built robust appeals processes, which recently enabled users to appeal decisions directly to the board, per our response to recommendation #25 of the PAO on Meta’s Cross-Check Policies. Additionally, our appeals processes default to secondary review by reviewers who are different from those who previously assessed the content, unless capacity constraints require otherwise.
We are constantly iterating on our routing protocols and monitoring our accuracy metrics, and we continue to work to ensure that our secondary review routing strengthens our enforcement accuracy. We now consider this recommendation complete and will have no further updates on this work.

Recommendation 4 (Implementing in Part)
To provide greater transparency to users and allow them to understand the consequences of their actions, Meta should update its Transparency Center with information on what penalties are associated with the accumulation of strikes on Instagram. The Board appreciates that Meta has provided additional information about strikes for Facebook users in response to Board recommendations. It believes this should be done for Instagram users as well. The Board will consider this implemented when the Transparency Center contains this information.
Our commitment: We remove content from Instagram if it violates our policies, and we may also disable accounts that repeatedly violate our policies, as we note on our Restricting Accounts page in the Transparency Center. We do not apply the same restrictions (such as read-only feature blocks) on Instagram as we do on Facebook, so the same penalties are not associated with the accumulation of strikes for our users. We will work to represent this information more clearly in our Transparency Center.
Considerations: We provide details about our approach to strikes and penalties in the Transparency Center, highlighting where these strikes and related penalties apply specifically to Facebook. Aside from account disables and restrictions on live video, however, these restrictions do not apply to Instagram because the two platforms offer different features and experiences. Facebook users may also use groups and Pages: if a person posts violating content to a Page or group that they manage, the strike may also count against that Page or group. Instagram does not have these features, so the same restrictions do not apply. Live video, however, is a feature on both Facebook and Instagram; if a user accrues enough strikes on Instagram, we temporarily limit their access to that feature, just as we would if they accrued the same number of strikes on Facebook.
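The platform-specific mapping described above could be modeled like this; the penalty names in this sketch are illustrative assumptions, not Meta’s published penalty catalog:

```python
# Hypothetical penalty table: which strike-driven penalties each platform applies.
PENALTIES_BY_PLATFORM = {
    "facebook":  {"read_only_feature_block", "live_video_limit",
                  "group_page_strike", "account_disable"},
    "instagram": {"live_video_limit", "account_disable"},
}

def applicable_penalty(platform: str, penalty: str) -> bool:
    """True if `penalty` can result from strike accumulation on `platform`."""
    return penalty in PENALTIES_BY_PLATFORM[platform]

# Live video limits apply on both platforms...
assert applicable_penalty("facebook", "live_video_limit")
assert applicable_penalty("instagram", "live_video_limit")
# ...but read-only feature blocks are Facebook-only.
assert not applicable_penalty("instagram", "read_only_feature_block")
```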
In our Instagram Help Center, we provide details about how a user can check their Account Status, which lets them find out whether content they posted was removed for violating the Community Guidelines and whether that removal may lead to account restrictions and limitations. This includes details on how to appeal content, and it allows a user to see whether there are any features they may temporarily be unable to use as a result of their violations of the Instagram Community Guidelines. For increased accessibility, people can also check their account status and identify any enforcement actions taken against their content via an in-product feature. Ultimately, if a user repeatedly violates our policies on Facebook, Instagram, or Threads, or commits a more severe violation, we will disable the account. In addition to this shared approach across Facebook and Instagram, the Instagram Help Center details other restrictions that may be placed on an account to limit things like spam or inauthentic activity, including limits on how many messages an account can send or on approving follower requests.
We will incorporate language into our Transparency Center to clarify how penalties apply on Instagram and will share updates on this work in a future Quarterly Update.
