Meta should provide greater detail in the language of its Hate Speech Community Standard about how it distinguishes immigration-related discussions from harmful speech targeting people on the basis of their migratory status. This includes explaining how the company handles content spreading hateful conspiracy theories. This is necessary for users to understand how Meta protects political speech on immigration while addressing the potential offline harms of hateful conspiracy theories.
The Board will consider this implemented when Meta publishes an update explaining how it is approaching immigration debates in the context of the Great Replacement Theory, and links to the update prominently in its Transparency Center.
Our commitment:
Our Hate Speech policy prohibits attacks against people based on their protected characteristics, which include immigration and migratory status. We are beginning to gather new insights to inform our approach to content that claims people are threats to the safety, health, or survival of others based on their personal characteristics, including their immigration status, but that does not otherwise violate our Hate Speech policy.
Considerations: Our Hate Speech policy removes attacks against people but allows discussion of concepts and institutions. We allow speech related to immigration on our platforms, including commentary on or criticism of immigration policies.
We define hate speech as direct attacks against people, rather than concepts or institutions, on the basis of what we call Protected Characteristics (PCs). PCs include race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease. Our enforcement against attacks based on national origin and ethnicity is not limited to references to a specific country; it also covers references to people from a particular continent, such as “Africans” or “Europeans.”
We remove the most severe forms of attack, which we describe as Tier 1 attacks in our Community Standards, when they are made against refugees, migrants, immigrants, and asylum seekers. For example, we remove content that attacks people based on their national origin or immigrant status with dehumanizing speech, such as comparisons to animals or claims that they are violent criminals. Additionally, based on local nuance, we sometimes treat certain words or phrases as proxy terms for PC groups and remove hate speech directed at those proxies. This can include proxy terms for immigrants.
Separately, our Dangerous Organizations and Individuals policy does not allow certain ideologies inherently tied to violence, or attempts to organize people around calls for violence or exclusion based on their protected characteristics. We remove explicit Glorification, Support, and Representation of these ideologies, which include White Supremacy, White Nationalism, and White Separatism. Additionally, as the Board notes in its decision, we do not allow designated Violence-Inducing Conspiracy Networks.
We believe that our existing approach to hate speech, which removes attacks on people, is the most workable way to address this kind of content at scale. While we continue to iterate on improvements across our policies, it is not feasible to create a scalable policy that meets the tests of legality, necessity, and proportionality for content that does not clearly violate our Hate Speech policy. Removing this type of content, including coded references, risks removing non-violating political speech and thereby restricting expression.
However, we are assessing our approach to content that claims people are threats to the safety, health, or survival of others based on their personal characteristics, including immigration status. Given the complexity of this work, we expect it will take some time, and we will continue to provide updates in future reports to the Board.