2025-003-FB-UA, 2025-004-FB-UA
Today, October 17, 2024, the Oversight Board selected a case bundle, appealed by Facebook users, concerning discussions of the European Union’s Pact on Migration and Asylum.
The first piece of content is an image showing Polish Prime Minister Donald Tusk looking through a peephole of a door while a Black man walks up behind him. The accompanying text criticizes the Tusk government’s support of the Pact and suggests that it has resulted in bringing in “murzynów,” a word used to describe Black people that some consider offensive and whose status remains subject to debate in Poland. The caption encourages others to oppose the Pact before the European Parliament.
The second piece of content is an image depicting a blond-haired, blue-eyed woman holding up her hand in a stop gesture, with both a stop sign and German flag in the background. German text over the image states that people should no longer come to Germany as they don’t need any more “gang-rape specialists.”
Meta determined that neither piece of content violated our policies on Hate Speech as laid out in the Facebook Community Standards, and left both pieces of content up.
We will implement the Board’s decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when they issue it.
We welcome the Oversight Board's decision today, April 23, 2025, on this case. The Board overturned Meta’s decision to leave up the content in both cases. Meta will act to comply with the Board's decision and remove the content in both cases within 7 days.
When it is technically and operationally possible to do so, we will also take action on content that is identical to, and shared in the same context as, the content in the first case. For more information, please see our Newsroom post about how we implement the Board’s decisions.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of immigrants, in particular refugees and asylum seekers, with a focus on markets where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity.
The Board will consider this recommendation implemented when Meta provides the Board with robust data and analysis on the effectiveness of its prevention or mitigation measures on the cadence outlined above, and when Meta reports on this publicly.
Commitment Statement: We will assess the feasibility of this multi-part recommendation.
Considerations: Meta conducts ongoing, integrated, human rights due diligence to identify, prevent, mitigate and address potential adverse human rights impacts related to our policies, products and operations in line with the UNGPs, related guidance, and our human rights policy. Ahead of the January 7th changes, we assessed the risks of the changes and took into account relevant mitigations, such as the availability of other policies and user reports to address potentially harmful content.
We will assess the feasibility of implementing this recommendation and provide updates in future reports to the Oversight Board. We will also bundle future updates for this recommendation under recommendation #1 in the Gender Identity Debate Videos case.
Meta should add the term “murzyn” to its Polish market slur list.
The Board will consider this recommendation implemented when Meta informs the Board this has been done.
Commitment Statement: We will follow our slur designation process to assess if the term ‘murzyn’ should be included in the Polish slur list.
Considerations: We have initiated our slur designation process to assess whether or not to add the term ‘murzyn’ to the Polish slur list. As we detail in our Transparency Center page linked above, our slur designation process involves a number of teams with regional expertise including policy, stakeholder engagement, and local markets teams. Regional teams conduct both qualitative and quantitative analysis to understand how a word is used on the platform. These teams also gather information on any additional definitions or uses as well as how a term may be locally and colloquially used in a particular region.
This designation process takes time to ensure that we do not remove speech unnecessarily. Even once a slur is designated, we may still allow the term when it is used self-referentially, in a news reporting context, or to condemn its use. We will provide updates on the status of this process in our next biannual report to the Oversight Board.
When Meta audits its slur lists, it should ensure it carries out broad external engagement with relevant stakeholders. This should include consulting with impacted groups and civil society.
The Board will consider this recommendation implemented when Meta amends the explanation on its Transparency Center of how it audits and updates its market-specific slur lists.
Commitment Statement: We regularly engage with stakeholders, including civil society, to maintain accurate lists of slurs across global regions. We are committed to formalizing this process to ensure that teams who manage our partnerships with external policy stakeholders and civil society groups will be involved at an early stage in our annual audit process to maximize opportunities for external input.
Considerations: Our current process for auditing slurs takes place on an annual basis, with additional audits taking place around elections and during crisis events. In this process, Global Operations teams, including regional experts, conduct a full review of our slurs lists for each region and provide samples of potential new slurs to add or remove from our lists based on new trends. Then, Content and Public Policy teams partner with Global Operations teams to carry out additional reviews and conduct outreach with external experts—including Trusted Partners. In considering the designation of new slurs, our teams consider both the harm associated with the use of these terms and the potential for over-enforcement and limitations on legitimate speech, particularly in the context of elections and discourse on issues of political significance.
As we standardize the process of engaging with external stakeholders, our Global Operations teams will partner with Trusted Partners to provide an early opportunity to review the lists if significant changes are proposed. This will take place before lists are finalized, so that external inputs may be holistically considered. We will update our Transparency Center page on “bringing local context to global standards” to include information on civil society’s new role in this process.
To reduce instances of content that violates its Hateful Conduct policy, Meta should update its internal guidance to make it clear that Tier 1 attacks (including those based on immigration status) are prohibited, unless it is clear from the content that it refers to a defined subset of less than half of the group. This would reverse the current presumption that content refers to a minority unless it specifically states otherwise.
The Board will consider this recommendation implemented when Meta provides the Board with the updated internal rules.
Commitment Statement: We do not anticipate reversing our current approach under the Hateful Conduct Community Standard, which allows content that does not clearly refer to more than half of a group, as a reversal is likely to restrict legitimate speech on our platforms.
Considerations: Our Hateful Conduct policy aims to remove content that directly attacks people on the basis of their protected characteristics. We remove what we define as Tier 1 attacks against people, but allow content when it is unclear whether the attack refers to more than half, or the majority, of a particular group of people. This means that when content refers to “some” or “lots of” a particular group of people, we allow that content, even if it is coupled with a Tier 1 attack, because it does not clearly target the entire group and may be part of more nuanced debate or legitimate speech that would otherwise be restricted by enforcement at scale. While this allowed speech may at times be offensive to some, reversing the existing approach could place an undue expectation on users to explain their positions. Given these considerations, we do not expect to reverse the current approach at this time and will provide no further updates on this recommendation in our next report to the Board.