AUG 12, 2022
Our Community Standards define a slur as a word that is inherently offensive and used as an insult for a protected characteristic. Multiple teams, including policy, markets, and stakeholder engagement teams, are involved in designating a slur. To create these lists, our regional teams conduct ongoing qualitative and quantitative analysis of the language and culture of their region or community (which we call a market). This includes reviewing how a word is used locally and colloquially, the prevalence of the word on our platforms, and the meaning associated with it when it is used. They may draw on cultural knowledge from news articles, academic studies, and other linguistic research. Our regional teams are assisted by other experts on our policies and operational processes. To propose a designation, a market team provides cultural context (news articles, academic articles, etc.) and collects and assesses at least 50 pieces of content containing the term. Once that analysis is complete, policy teams review everything provided by local markets so that the content is assessed against the relevant Meta policy. Market teams are responsible for keeping their slur lists as exhaustive and up to date as possible.
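To make that designation workflow concrete, here is a minimal, hypothetical Python sketch. The names (SlurCandidate, ready_for_policy_review) and structure are ours for illustration, not Meta's internal tooling; it captures only the two stated requirements before policy review: attached cultural context and at least 50 assessed content samples.

```python
from dataclasses import dataclass, field

MIN_CONTENT_SAMPLES = 50  # the process above requires at least 50 assessed pieces of content

@dataclass
class SlurCandidate:
    """A term a regional market team proposes for its market's slur list (illustrative)."""
    term: str
    market: str                                                # e.g. "Southern Cone"
    context_sources: list[str] = field(default_factory=list)   # news articles, academic studies, etc.
    content_samples: list[str] = field(default_factory=list)   # assessed on-platform uses of the term

    def ready_for_policy_review(self) -> bool:
        # Escalate to policy teams only once cultural context is attached
        # and the minimum number of content samples has been assessed.
        return bool(self.context_sources) and len(self.content_samples) >= MIN_CONTENT_SAMPLES
```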
We also analyze the ways certain words are used on our platforms to determine the extent to which they meet our slur definition. For example, usage on our platforms may reveal previously unidentified variations of a slur, or related terms that should be considered, and we analyze the use of slurs across our platforms to identify those instances. Additionally, slur lists and policies include guidance on circumstances in which a given slur might be used in a permissible way: when it is used in a clearly self-referential way, when it is used in an alternative meaning, when its use is being discussed, when it is being reported on, when its use is being condemned, or when it is used in an explicitly positive way.
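As a rough illustration of how those permissible-use categories could feed a review decision, here is a hypothetical sketch; the enum values and function are ours, not Meta's review system, and real classification is done by trained reviewers, not a boolean flag.

```python
from enum import Enum, auto
from typing import Optional

class PermissibleUse(Enum):
    """Permissible-use categories from the guidance above (names are ours)."""
    SELF_REFERENTIAL = auto()      # clearly self-referential use
    ALTERNATIVE_MEANING = auto()   # benign alternative sense of the word
    DISCUSSING_USE = auto()        # discussing the use of the slur
    REPORTING = auto()             # reporting on the slur
    CONDEMNING = auto()            # condemning the use of the slur
    EXPLICITLY_POSITIVE = auto()   # used in an explicitly positive way

def is_violating(contains_listed_slur: bool, assigned_use: Optional[PermissibleUse]) -> bool:
    # Content containing a listed slur violates unless a reviewer
    # identifies one of the permissible-use categories above.
    return contains_listed_slur and assigned_use is None
```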
A language can be shared across nations and cultures, but slurs are often specific to a region or community (which we call a market). This is why we use slur lists that are specific to markets, not just languages. Across all violation areas, we have reviewers who cover multiple regions across multiple languages (to cover all dialects as much as possible). These reviewers are assigned to queues based on language expertise and violation-type skill set, so they have an informed sense of which slur lists will be most relevant for their respective content queues. Our content moderation routing incorporates both language and region to determine the appropriate reviewer(s) for content, but generally, language plays the larger role in that complex routing. For example, Southern Cone market queues encompass content that comes from Chile, Uruguay, Argentina, and Paraguay and for which the primary language is Spanish. Each market's queuing algorithm also has a condition called a "catch all." This condition allows jobs in a language that is not assigned to the countries the market covers to automatically fall into the market most relevant for that language, as sketched below. For example, French-language jobs that geographically originate in the Southern Cone market (Argentina, for instance) would fall into the French market's review queues, and vice versa. When a slur appears in a language different from the rest of a piece of content, our scaled review technology highlights that it appears on a different market's slur list, so it is flagged at scale and in all market queues as potentially violating language.
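Here is a minimal sketch of that language-plus-region routing with the "catch all" condition, assuming hypothetical market definitions; only two markets are shown, and the names and structure are illustrative, not Meta's production configuration.

```python
# Illustrative market definitions: each market covers a set of languages and countries.
MARKETS = {
    "southern_cone": {"languages": {"es"}, "countries": {"CL", "UY", "AR", "PY"}},
    "french":        {"languages": {"fr"}, "countries": {"FR", "BE", "CH"}},
}

def route_to_market(language: str, country: str) -> str:
    # Preferred route: a market covering both the job's language and country.
    for name, market in MARKETS.items():
        if language in market["languages"] and country in market["countries"]:
            return name
    # "Catch all": a job whose language is not assigned to the countries of the
    # geographic market falls into the market most relevant for that language.
    for name, market in MARKETS.items():
        if language in market["languages"]:
            return name
    return "global_fallback"  # hypothetical default queue

# The example from the text: a French-language job originating in Argentina
# routes to the French market's review queues, not the Southern Cone queues.
assert route_to_market("fr", "AR") == "french"
assert route_to_market("es", "AR") == "southern_cone"
```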
Queuing algorithms account for both language and country because slurs can have caveats (or benign alternative uses) that depend on current events and market context. Market context is important for reviewers in determining whether a word appears in a permissible use case. If context is lacking, and in the absence of any other permissible use case, we err toward treating the content as violating.
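That default can be expressed as a simple decision rule; the sketch below is our paraphrase of the stated policy, not Meta's actual review logic.

```python
def review_outcome(slur_on_list: bool, permissible_use_identified: bool) -> str:
    """Sketch of the default decision rule described above (names are ours)."""
    if not slur_on_list:
        return "non-violating"
    if permissible_use_identified:
        return "non-violating"
    # When context is lacking and no permissible use applies,
    # err toward treating the content as violating.
    return "violating"
```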
We conduct an annual audit of our slur lists. This is done by our operational teams in collaboration with our regional market teams, who together review each slur and decide whether the word retains the offensive character that initially qualified it for the list. We also encourage our regional teams, including at-scale review partners, to continually monitor the linguistic development of their market and, based on this, propose new slurs to add to their market list or suggest revisions to existing words on the list. Finally, we ask civil society and non-governmental organizations with whom we engage to provide input on which words should be considered slurs.
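As a small illustration of the annual-audit cadence, a sketch like the following could flag terms due for re-assessment; the function and data shape are hypothetical, and the real audit is a qualitative review by operations and market teams, not a date check.

```python
from datetime import date, timedelta

def terms_due_for_audit(last_reviewed: dict[str, date], today: date) -> list[str]:
    """Return terms whose last review is more than a year old and are
    therefore due for the annual market/operations re-assessment."""
    cutoff = today - timedelta(days=365)
    return [term for term, reviewed in last_reviewed.items() if reviewed < cutoff]
```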