2023-033-FB-UA
Today, November 28, 2023, the Oversight Board selected a case appealed by a Facebook user regarding a video posted to French politician Eric Zemmour’s official Facebook Page. The video depicts Zemmour speaking to an interviewer about demographic changes in Europe and Africa. Zemmour claims that at the beginning of the 20th Century, there were 100 million people living in Africa and 400 million in Europe, whereas today there are 1.5 billion people living in Africa and 400 million in Europe. He then says “when there were 4 Europeans for 1 African, what did we do, we've colonized Africa. Now that there are 4 Africans for 1 European, what's happening? Africa is colonizing Europe, and specifically France.”
Meta determined that the content did not violate our policies on Hate Speech, as laid out in our Facebook Community Standards, and left the content up.
Under our Hate Speech policy, Meta prohibits any “direct attack against people — rather than concepts or institutions — on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease,” as well as any content that calls for the exclusion of members of a protected characteristic group. However, we do not consider the allegation that one group is “colonizing” a place to be an attack in and of itself so long as it does not amount to a call for exclusion. This is because discussions about colonization and colonialism can be complicated and nuanced, and we want to allow citizens to discuss the laws and policies of their nations so long as this discussion does not constitute attacks against vulnerable groups. In this instance, the claims about population change and its purported connection to colonization do not contain an attack and, furthermore, do not even identify a protected characteristic group but instead make statements about a continent.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today, March 12, 2024, on this case. The Board upheld Meta's decision to leave the content on Facebook.
After reviewing the Board’s recommendation, we will update this post with our initial response.
Meta should provide greater detail in the language of its Hate Speech Community Standard about how it distinguishes immigration-related discussions from harmful speech targeting people on the basis of their migratory status. This includes explaining how the company handles content spreading hateful conspiracy theories. This is necessary for users to understand how Meta protects political speech on immigration while addressing the potential offline harms of hateful conspiracy theories.
The Board will consider this implemented when Meta publishes an update explaining how it is approaching immigration debates in the context of the Great Replacement Theory, and links to the update prominently in its Transparency Center.
Our commitment: Our Hate Speech policy prohibits attacks against people based on their protected characteristics, which include immigration and migratory status. We are beginning to gather new insights to inform our approach to content that claims people are threats to the safety, health, or survival of others based on their personal characteristics, including their immigration status, but does not otherwise violate our Hate Speech policy.
Considerations: Our Hate Speech policy removes attacks against people but allows discussions of concepts and institutions. We allow speech related to immigration on our platforms including commentary on or criticism of immigration policies.
We define hate speech as direct attacks against people — rather than concepts or institutions — on the basis of what we call Protected Characteristics (PCs). PCs include race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease. Our enforcement of national origin and ethnicity is not restricted to references to a country, and includes references to people from a particular continent, including terms such as “Africans” or “Europeans.”
We remove the most severe forms of attacks, which we describe as Tier 1 attacks in our Community Standards, that are made against refugees, migrants, immigrants, and asylum seekers. For example, we remove content that attacks people based on their national origin or immigrant status with dehumanizing speech such as comparisons to animals or claims that they are violent criminals. Additionally, sometimes, based on local nuance, we consider certain words or phrases as proxy terms for PC groups and will remove hate speech when it is directed at those proxies. This can include proxy terms for immigrants.
Separately, our Dangerous Organizations and Individuals policy does not allow certain ideologies inherently tied to violence and attempts to organize people around calls for violence or exclusion based on their protected characteristics. We remove explicit Glorification, Support, and Representation of these ideologies, which include White Supremacy, White Nationalism, and White Separatism. Additionally, as the Board notes in its decision, we also do not allow designated Violence-Inducing Conspiracy Networks.
We believe that the existing approach to hate speech, which removes attacks on people, is the most operable way to address this kind of content at scale. While we continue to iterate on improvements across our policies, it is not feasible to create a scalable policy that meets standards of legality, necessity, and proportionality for content that does not clearly violate our Hate Speech policy. Removing this type of content, including as a coded reference, has the potential to remove non-violating political speech and therefore restrict expression.
However, we are assessing our approach to content that claims people are threats to the safety, health, or survival of others based on their personal characteristics, including immigration status. Given its complexity, we expect this work will take some time, and we will continue to provide updates in future reports to the Board.