To ensure effective protection of detainees under international humanitarian law, Meta should develop a scalable solution to enforce the Coordinating Harm and Promoting Crime policy that prohibits outing prisoners of war within the context of armed conflict. Meta should set up a protocol for the duration of a conflict that establishes a specialized team to prioritize and proactively identify content outing prisoners of war.
The Board will consider this implemented when Meta shares with the Board data on the effectiveness of this protocol in identifying content outing prisoners of war in armed conflict settings and provides updates on the effectiveness of this protocol every six months.
Our commitment: We will scale our Coordinating Harm and Promoting Crime policy in Sudan so that scaled reviewers have guidance to remove prisoner of war (POW) content, balancing considerations of voice and safety in the current environment. Additionally, we will consider ways to scale this guidance in future conflict situations where POW content may be prevalent. We will also continue investing in our crisis response mechanisms so that our enforcement teams can proactively identify POW content when necessary during armed conflicts.
Considerations: Under our Coordinating Harm and Promoting Crime policy, we remove content that exposes a person's identity or location together with their affiliation with an outing-risk group. In the context of an armed conflict, prisoners of war are one such outing-risk group. In the interest of their dignity and safety, we remove content that shares imagery and/or information identifying POWs, such as their names or other identifying details.
Enforcing against content that may out the identity of a POW requires context, which makes it difficult to do accurately and consistently at scale. In certain conflicts, videos or images may be used to raise awareness about issues including serious human rights abuses or violations of international humanitarian law. A policy that removes all potential POW content at scale may not strike the appropriate balance between voice, dignity, and safety. However, as the Board notes, there may be opportunities to scale our approach to POW content in certain regions when there is an active conflict, as part of our overall crisis response.
In regions like Sudan, where we have activated the Crisis Policy Protocol, we use methods such as running proactive searches for content that could expose POWs and expediting enforcement. Scaling this policy in that market will also enable internal teams to give scaled reviewers guidance to remove this content when it is reported by other users or surfaced by our automated systems, which detect potentially violating content more generally.
There are inherent technical challenges in more broadly scaling a policy that requires highly contextualized review, even during a crisis. Although we sometimes use classifier data to guide other proactive searches, this type of content is often too nuanced for classifier detection. As an alternative, we sometimes rely on keywords when conducting these proactive searches, and keyword-based searches yield narrower results.
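To make that trade-off concrete, the minimal sketch below shows how a keyword-based proactive search might be combined with a broader classifier signal to route posts for expedited review. It is illustrative only: the Post structure, the classifier_score field, the keyword list, the threshold, and the flag_for_expedited_review function are all hypothetical assumptions, not a description of Meta's actual detection systems.

```python
# Illustrative sketch only. All names, keywords, and thresholds are
# hypothetical assumptions, not Meta's actual systems or data.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    classifier_score: float  # assumed "potentially violating" score in [0, 1]


# Hypothetical keyword list a crisis-response team might curate for a
# specific conflict; a real list would be market- and language-specific.
CRISIS_KEYWORDS = ["prisoner of war", "captured soldier"]

CLASSIFIER_THRESHOLD = 0.8  # assumed cutoff for classifier-guided review


def flag_for_expedited_review(post: Post) -> bool:
    """Return True if a post should be routed to expedited human review.

    Keyword matches are narrow but higher precision; the classifier score
    is a broader, noisier signal used as a fallback.
    """
    text = post.text.lower()
    keyword_hit = any(keyword in text for keyword in CRISIS_KEYWORDS)
    classifier_hit = post.classifier_score >= CLASSIFIER_THRESHOLD
    return keyword_hit or classifier_hit


if __name__ == "__main__":
    sample = Post("123", "Video shows a captured soldier being identified", 0.4)
    print(flag_for_expedited_review(sample))  # True, via the keyword match
```

The narrowness of the keyword approach is visible here: content that outs a POW without using any listed phrase would be missed unless the classifier score alone exceeded the threshold, which is why human, context-aware review remains central.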
Given these technical challenges and the context required to evaluate the content, scaling this policy is not always possible or appropriate, even in a crisis situation. However, we agree with the Board that there are certain crisis situations where we should do our best to scale this policy, including in Sudan, and we will work to implement this recommendation in those contexts. We will continue to explore improvements to the signals we leverage for proactive searches, including pathways for users to report content that may violate our Coordinating Harm and Promoting Crime policy.