OCT 20, 2023
2023J
Today, the Oversight Board selected a case bundle, appealed by Facebook and Instagram users, concerning three videos (two posted to Facebook and one to Instagram). Each video depicts a member of Turkey’s Justice and Development Party criticizing a prominent member of the Republican People’s Party over their response to the February 6, 2023 earthquake centered in Kahramanmaraş province. In each of the videos, the criticism includes a phrase (“İngiliz uşağı”) which Meta translates to “Servant of the English.”
Upon initial review, Meta took down this content for violating our policy on Hate Speech, as laid out in our Instagram Community Guidelines and Facebook Community Standards. At the time, Meta considered the phrase “İngiliz uşağı” to be a slur in the Turkish market. However, upon additional review, we determined that the phrase “İngiliz uşağı” is not currently used as a slur. The phrase “İngiliz uşağı” has since been removed from our slur list and the content has been restored.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today on this case bundle. The board overturned Meta’s original decisions to remove the three pieces of content from Facebook and Instagram. Meta will act to comply with the board's decision and reinstate the content within 7 days.
Once we have reviewed the recommendations the board provided alongside its decision, we will update this page.
To ensure media organizations can more freely report on topics of public interest, Meta should revise the Hate Speech Community Standard to explicitly protect journalistic reporting on slurs, when such reporting, in particular in electoral contexts, does not create an atmosphere of exclusion and/or intimidation. This exception should be made public, and be separate from the “raising awareness” and “condemning” exceptions. There should be appropriate training for moderators, especially those working in languages other than English, to ensure respect for journalism, including local media. The reporting exception should make clear to users, in particular those in the media, how such content should be contextualized, and internal guidance for reviewers should be consistent with this. The Board will consider this recommendation implemented when the Community Standards are updated, and internal guidelines for Meta’s human reviewers are updated to reflect these changes.
Our commitment: We are in the process of refining our definitions across all our policies and intend to use this process to provide greater clarity in the Transparency Center. This includes clarifying how we approach journalistic reporting on slurs. We will consider ways to share more details externally as part of this ongoing work.
Considerations: As outlined in our Community Standards, we remove content that uses slurs to attack people on the basis of their protected characteristics which include race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, and gender identity. However, we also recognize that there may be cases in which users share content that includes slurs in order to condemn or raise awareness of this sort of hate speech. This may also include reporting on the use of a slur in order to raise awareness.
In those instances, as outlined in our Community Standards, we require the person sharing the content to clearly indicate their intent. We recognize that there may be opportunities to more effectively articulate and define this allowance for our content moderators and people who use our platforms, especially with respect to how we treat journalistic reporting on slurs. As such, we will work to clarify this guidance externally in the Community Standards and internally for reviewers.
To ensure greater clarity about when slur use is permitted, Meta should ensure the Hate Speech Community Standard has clearer explanations of each exception, with illustrative examples. Situational examples can be provided in the abstract to avoid repeating hate speech terms. The Board will consider this implemented when Meta restructures its Hate Speech Community Standard and adds illustrative examples.
Our commitment: We will continue to find additional ways to clarify policy allowances around the use of slurs on our platforms. However, as shared previously, we will not publish illustrative examples because we do not want to include potentially harmful or hateful content in our Community Standards.
Considerations: We will continue to refine our definitions and clarify our approach to slurs in the Community Standards, including sharing more public details around when we may allow the use of a term that may otherwise be considered a slur.
We will not, however, include illustrative examples of slur allowances in our Transparency Center. While we’ve previously shared details about our approach to slurs in response to Board recommendations in other cases, we have declined to share examples of slurs because we do not want to share potentially harmful or hateful content in our Community Standards. We recognize that some slurs may be used self-referentially, or by someone to condemn or raise awareness of the use of a term, and we explicitly indicate in our Community Standards that we allow this type of content.
We recently completed a policy development process on how we can better define and designate slurs, which resulted in a new slurs definition and updates to our Hate Speech policy in May 2023. The details of this Policy Forum are available on our Policy Forum Transparency Center page and were presented to the Board. Our new slurs definition is informed by research, better connects our policy around these terms with the potential harms that slurs can elicit, and ties our slurs definition back to historical discrimination. So while we will not add examples of slurs, we have already undertaken efforts to improve the clarity of our policy and will consider additional ways to more clearly articulate the contexts where slurs may be allowed on our platforms. We will share updates on our progress in future Quarterly Updates on the Oversight Board.
To ensure fewer errors in the enforcement of its Hate Speech policy, Meta should expedite audits of its slur lists in countries with elections in the second half of 2023 and early 2024, with the goal of identifying and removing terms mistakenly added to the company’s slur lists. The Board will consider this implemented when Meta provides an updated list of designated slurs following the audit, and a list of terms de-designated, per market, following the new audits.
Our commitment: We will prioritize slur audits for countries with imminent elections. In identifying these markets, we will consider factors like election timelines, risk, and region to ensure that the audits are most impactful. These efforts will be completed in time to support imminent elections as feasible and will inform our approach going forward.
Considerations: Meta conducts yearly audits of our slur lists, with standardized intake periods to determine when an audit should be conducted for a given market. However, we may also conduct ad hoc and risk-based slur audits based on urgency and/or other relevant changes within a market. It may not be necessary to conduct a separate, ad hoc audit in response to an upcoming election when we conducted an annual audit earlier the same year.
Although Meta has an established and standardized slur audit process, each language supported by Meta typically has its own designated slur list that is informed by regional and market expertise. Given the significant variability in slur lists across languages, differences in local language capacity, and competing company priorities, the timing and duration of each audit may vary.
We agree with the Board that new insights and trends may emerge in the run-up to an election that warrant a more urgent re-review of our slur list in addition to the standardized review process. In such cases, we may consider adjusting the timing of our audit for the affected languages in that market.
We will also conduct audits in markets with upcoming elections where time and resources allow, and have kicked off pilots of this approach for imminent elections. We will provide updates on this work in future Quarterly Updates and, as with previous lists, will consider ways to share this information with the Board.