Sudan’s Rapid Support Forces Video Captive

UPDATED

JUN 10, 2024

2023-039-FB-UA

Today, January 9, 2024, the Oversight Board selected a case appealed by a Facebook user regarding a video showing several men holding firearms. An individual in the video self-identifies as a member of the Rapid Support Forces (RSF) in Sudan and describes actions the RSF has taken, such as the capture of an Egyptian “infiltrator” in Khartoum. The individual also refers to “our leader Mohamed Hamdan,” the Lieutenant General who commands the RSF. The caption for the video mentions the presence of “foreigners from our evil neighbor.”

Upon initial review, Meta left this content up. On further review, however, we determined that the content did in fact violate our Dangerous Organizations and Individuals policy, as laid out in the Facebook Community Standards, and had been left up in error. We therefore removed the content.

Meta does not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on our platforms. Meta also removes any content that contains “praise, substantive support and representation” of designated entities, including “organizations themselves, their activities, and their members.” Substantive support includes “channeling information or resources, including official communications, on behalf of a designated entity or event.” In this case, the user who posted the video is channeling information about the RSF – a designated terrorist organization (Tier 1) under our Dangerous Organizations and Individuals policy – by posting a video in which someone self-identifies as a member of the RSF and describes actions the group has taken, without a caption that “condemns, neutrally discusses, or is a part of news reporting.”

We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when it is issued.

Case decision

We welcome the Oversight Board’s decision today, April 11, 2024, on this case. The Board overturned Meta’s original decision to leave this content up on Facebook. Meta previously removed this content.

When it is technically and operationally possible to do so, we will also take action on content that is identical and made in the same context.

After conducting a review of the recommendations provided by the Board, we will update this page with initial responses to those recommendations.

Recommendations

Recommendation 1 (Implementing in Part)

To ensure effective protection of detainees under international humanitarian law, Meta should develop a scalable solution to enforce the Coordinating Harm and Promoting Crime policy that prohibits outing prisoners of war within the context of armed conflict. Meta should set up a protocol for the duration of a conflict that establishes a specialized team to prioritize and proactively identify content outing prisoners of war.

The Board will consider this implemented when Meta shares with the Board data on the effectiveness of this protocol in identifying content outing prisoners of war in armed conflict settings and provides updates on the effectiveness of this protocol every six months.

Our commitment: We will scale our Coordinating Harm and Promoting Crime policy in Sudan so that scaled reviewers have guidance to remove prisoner of war (POW) content, balancing considerations of voice and safety in the current environment. Additionally, we will consider ways to scale this guidance in future conflict situations where POW content may be prevalent. We will also continue investing in our crisis response mechanisms to ensure our enforcement teams can proactively identify POW content when necessary during armed conflicts.

Considerations: Under our Coordinating Harm and Promoting Crime policy, we remove content that exposes a person’s identity or location along with their affiliation with an outing-risk group. In the context of armed conflict, prisoners of war (POWs) are one such outing-risk group. We remove content that shares imagery and/or information identifying POWs, such as names and identities, in the interest of their dignity and safety.
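As a minimal illustration, the rule described above combines two signals: the content exposes who or where a person is, and it ties that person to an outing-risk group. The sketch below expresses that condition in Python; the function, parameter names, and group list are hypothetical assumptions for illustration, not Meta’s internal tooling.

```python
# Hedged sketch: the outing-policy condition described above, expressed as a
# reviewer-facing decision aid. OUTING_RISK_GROUPS and the parameter names
# are hypothetical assumptions, not Meta's internal systems.
OUTING_RISK_GROUPS = {"prisoner_of_war"}  # one such group in armed conflict

def violates_outing_policy(exposes_identity_or_location: bool,
                           affiliated_group: str | None) -> bool:
    """Content violates when it exposes a person's identity or location
    AND ties them to an outing-risk group such as POWs."""
    return exposes_identity_or_location and affiliated_group in OUTING_RISK_GROUPS

# Example: a video naming a captured soldier and identifying him as a POW.
assert violates_outing_policy(True, "prisoner_of_war") is True
assert violates_outing_policy(True, None) is False
```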

Enforcing on content that may out the identity of a POW requires context, which makes it difficult to do accurately and consistently at scale. In certain conflicts, videos or images may be used to raise awareness about issues including serious human rights abuses or violations of international humanitarian law. A policy that removes all potential POW content at scale may not strike the appropriate balance between voice, dignity, and safety. However, as the Board notes, there may be opportunities to scale our approach to POW content in certain regions when there is an active conflict as part of our overall crisis response.

In regions like Sudan, where we have activated the Crisis Policy Protocol, we use methods such as running proactive searches for content that could expose POWs and expediting enforcement. Scaling this policy in this market will also enable internal teams to provide guidance to scaled reviewers to remove such content when it is reported by other users or surfaced by our automated systems, which detect potentially violating content more generally.

There are some inherent technical challenges with more broadly scaling a policy that requires highly contextualized review, even in the context of a crisis situation. Although we sometimes use classifier data to guide other proactive searches, this type of content is often too nuanced for classifier detection. As an alternative, we sometimes rely on keywords when conducting these proactive searches, which yield narrower results.
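To make the tradeoff concrete, here is a minimal sketch of a keyword-driven proactive search of the kind described above. Everything in it is an assumption made for illustration: the Post schema, the CRISIS_KEYWORDS list, and the proactive_search function are hypothetical, not Meta’s internal systems.

```python
# Hedged sketch: a keyword-based proactive search that surfaces candidates
# for expedited human review. All names here (Post, CRISIS_KEYWORDS,
# proactive_search) are hypothetical illustrations, not Meta's systems.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    region: str

# A hypothetical keyword list, curated by regional experts for an active crisis.
CRISIS_KEYWORDS = {"prisoner", "captive", "captured soldier"}

def proactive_search(posts: list[Post], region: str,
                     keywords: set[str] = CRISIS_KEYWORDS) -> list[Post]:
    """Return posts from the crisis region that match any crisis keyword.

    Keyword matching yields narrower results than a trained classifier,
    but it requires no labeled data and can be deployed quickly when a
    crisis protocol is activated.
    """
    matches = []
    for post in posts:
        text = post.text.lower()
        if post.region == region and any(kw in text for kw in keywords):
            matches.append(post)
    return matches

# Matches are queued for human review rather than removed automatically,
# because outing a POW turns on context that keywords alone cannot establish.
```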

Given these technical challenges and the context required to evaluate the content, scaling this policy is not always possible or appropriate, even in a crisis situation. However, we agree with the Board that there are certain crisis situations where we should do our best to scale this policy, including in Sudan, and we will work to implement this recommendation in those contexts. We will continue to explore additional improvements to the signals we leverage to conduct proactive searches, including pathways for users to report content that may violate our Coordinating Harm and Promoting Crime policy.

Recommendation 2 (Implementing in Full)

To enhance its automated detection and prioritization of content potentially violating the Dangerous Organizations and Individuals policy for human review, Meta should audit the training data used in its video content understanding classifier to evaluate whether it has sufficiently diverse examples of content supporting designated organizations in the context of armed conflicts, including different languages, dialects, regions and conflicts.

The Board will consider this recommendation implemented when Meta provides the Board with detailed results of its audit and the necessary improvements that the company will implement as a result.

Our commitment: We will engage product teams and regional experts to conduct an internal review of the training data for our existing Dangerous Organizations and Individuals (DOI) classifiers, evaluating how diversified that data is across languages, dialects, regions, and conflicts. We will share the outcomes of this review, and any changes made to improve the scope of the training data, in a future confidential update to the Board.

Considerations: Currently, the data supporting our DOI classifiers that target video content is sourced across various languages, dialects, regions, and conflicts, based on regional prevalence, capacity for human labeling, and value to the classifier’s development. In addition, regional teams proactively bank content from their respective regions and conflicts, which is later leveraged as training data. We then sample to ensure that new data is distinct from previous training data. In line with this recommendation, we will conduct a more comprehensive review of our training data to assess the diversity of examples specific to designated organizations. We will share the results of this analysis, and any changes implemented as a result, in a future confidential update to the Board.
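As a rough illustration of what such a diversity review could look like, the sketch below tallies training examples along one metadata dimension and flags underrepresented strata. The example schema, the min_share threshold, and the audit_training_data function are assumptions made for illustration; they are not Meta’s internal audit tooling.

```python
# Hedged sketch: auditing training-data diversity along one dimension.
# The example schema and the 2% threshold are hypothetical assumptions.
from collections import Counter

def audit_training_data(examples: list[dict], dimension: str,
                        min_share: float = 0.02) -> dict:
    """Report the share of examples per value of `dimension` (e.g.
    'language' or 'conflict') and flag values below `min_share`."""
    counts = Counter(ex[dimension] for ex in examples)
    total = sum(counts.values())
    report = {}
    for value, count in counts.most_common():
        share = count / total
        report[value] = {
            "count": count,
            "share": round(share, 4),
            "underrepresented": share < min_share,
        }
    return report

# Toy usage with hypothetical metadata:
examples = [
    {"language": "ar-SD", "region": "Sudan", "conflict": "Sudan 2023"},
    {"language": "ar-SD", "region": "Sudan", "conflict": "Sudan 2023"},
    {"language": "en", "region": "US", "conflict": None},
]
print(audit_training_data(examples, "language"))
```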

Recommendation 3 (Implementing in Full)

To provide more clarity to users, Meta should hyperlink the U.S. Foreign Terrorist Organizations and Specially Designated Global Terrorists lists in its Community Standards, where these lists are mentioned.

The Board will consider this recommendation implemented when Meta makes these changes to the Community Standards.

Our commitment: We will update our Dangerous Organizations and Individuals Community Standards to include links to the U.S. Foreign Terrorist Organizations and Specially Designated Global Terrorists lists.

Considerations: The Dangerous Organizations and Individuals section of our Community Standards notes that we remove Glorification, Support, and Representation of Tier 1 entities, their leaders, founders, or prominent members, as well as unclear references to them. This includes entities and individuals designated by the United States government as Foreign Terrorist Organizations (FTOs) or Specially Designated Global Terrorists (SDGTs). In recent years, in part due to recommendations from the Oversight Board, we have updated our Community Standards to include examples of the types of content we remove.

As we consider and implement other updates to our DOI policy, we commit to updating our Community Standards to link to these lists.