Provide people with additional information regarding the scope and enforcement of restrictions on veiled threats. This would help people understand what content is allowed in this area. Meta should make its enforcement criteria public. These criteria should consider the intent and identity of the person, as well as their audience and the wider context.
Our commitment: We commit to adding language to the Violence and Incitement Community Standard to make it clearer when we remove content for containing veiled threats.
Considerations: Facebook removes explicit statements that incite violence under our Violence and Incitement Community Standard. Facebook also removes statements that are not explicit when they act as veiled or implicit threats. The language we will add to our Community Standards will elaborate on the criteria we use in this policy to evaluate whether a statement is a coded attempt to incite violence.
In its enforcement of this policy, Facebook currently does not directly use the identity of the person who shared the content or the content's full audience as criteria for assessing whether speech constitutes a veiled threat, so the added language will not include such criteria. As the board notes, we rely on our trusted partner network to tell us when content is potentially threatening or likely to contribute to imminent violence or physical harm, so it is possible that these partners use such signals in their assessments.
Next steps: We will add the language described above to the Violence and Incitement Community Standard within a few weeks.