2024-046-FB-UA, 2024-047-IG-UA
Today, August 29, 2024, the Oversight Board announced that it has selected a case bundle from two user appeals about content posted to Facebook and Instagram. The first piece of content is a video showing a transgender woman being confronted for using the women's bathroom. The second piece of content is a video of a transgender girl winning a female sports competition in the United States, with some spectators vocally disapproving of the result.
Meta determined that neither video violated our policies on Hate Speech or Bullying and Harassment, as laid out in our Facebook Community Standards and Instagram Community Guidelines, and left both pieces of content up.
Under our Hate Speech policy, Meta removes any calls for exclusion of members of a protected characteristic group. We generally allow people to criticize concepts because we want to allow discussion about the ideas, institutions, and policies that are a central part of any society or cultural community.
In both cases, even if the content included a call for exclusion, we determined upon escalation in our content review process that the posts should nonetheless be allowed, given their newsworthiness. Both transgender people's access to bathrooms that correspond to their gender identity and transgender athletes' participation in female sports competitions are subjects of considerable political debate in the United States.
Under our Bullying and Harassment policy, Meta removes certain attacks targeted at a private individual; in both instances, however, we determined that there was no explicit call for exclusion present in the posts.
We will implement the Board's decision once it has finished deliberating, and we will update this post accordingly. Please see the Board's website for the decision once it is issued.
We welcome the Oversight Board's decision today, April 29, 2025, on this case. The Board upheld Meta’s decision to leave the content up in both cases.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of LGBTQIA+ people, including minors, especially where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity.
The Board will consider this recommendation implemented when Meta provides the Board with robust data and analysis on the effectiveness of its prevention or mitigation measures on the cadence outlined above and when Meta reports on this publicly.
Commitment Statement: We will assess the feasibility of this multi-part recommendation.
Considerations: Meta conducts ongoing, integrated human rights due diligence to identify, prevent, mitigate and address potential adverse human rights impacts related to our policies, products and operations, in line with the UN Guiding Principles on Business and Human Rights (UNGPs), related guidance, and our human rights policy. Ahead of the January 7, 2025 changes, we assessed the risks of the updates and took relevant mitigations into account, such as the availability of other policies and user reports to address potentially harmful content.
We will assess the feasibility of implementing this recommendation and provide updates in future reports to the Oversight Board. We will also bundle future updates for this recommendation along with recommendation #1 from the cases on Posts Displaying South Africa’s Apartheid-Era Flag and Criticism of EU Migration Policies and Immigrants.
To ensure Meta’s content policies are framed neutrally and in line with international human rights standards, Meta should remove the term “transgenderism” from the Hateful Conduct policy and corresponding implementation guidance.
The Board will consider this recommendation implemented when the term no longer appears in Meta’s content policies or implementation guidance.
Commitment Statement: We will consider ways to update the terminology in our Hateful Conduct policy in order to best explain the types of discussions and content the policy allows.
Considerations: We are evaluating how best to explain what content is and is not allowed on our platforms under the Hateful Conduct policy. Our goal in the Community Standards is to explain our policy approach to content clearly. Achieving that clarity and transparency in our public explanations may sometimes require including language that some consider offensive.
As we consider this change to our Hateful Conduct policy, we plan to incorporate feedback from a variety of stakeholders to ensure that this Community Standard remains clear for the billions of people who use our platforms. We are assessing a number of possible updates to the policy language and will share our progress in a future Biannual Report to the Board.
To reduce the reporting burden on targets of Bullying and Harassment, Meta should allow users to designate connected accounts that can flag, on their behalf, potential Bullying and Harassment violations that would otherwise require self-reporting.
The Board will consider this recommendation implemented when Meta makes these features available and easily accessible to all users via their account settings.
Commitment Statement: We will evaluate the feasibility of allowing people connected to a user to report, on that user's behalf, content that requires self-reporting, and we will look for opportunities to foster partnerships that expand the ability of designated entities to report potentially violating content, particularly on behalf of youth.
Considerations: Ensuring the safety of people on our platforms is consistently a high priority, and to that end we strive to iterate on and improve their ability to report or escalate content such as bullying and harassment. Our Bullying and Harassment policy applies certain protections to everyone, regardless of reporting context. However, for less severe tiers of the policy, we apply different protections to different individuals, such as adult public figures and private individuals. To allow discussion such as banter among friends or neutral commentary, we may require self-reporting because it provides context that helps us understand whether the person reporting the content feels bullied or harassed.
In response to this recommendation, we will assess whether we can leverage existing reporting tools while maintaining self-reporting as a key contextual signal for distinguishing content an individual experiences as bullying or harassment from legitimate discussion and speech. Allowing others to report on behalf of a person is technically difficult given the way our review systems function at scale, and it may be subject to abuse, but we will explore options and provide an update on this work in a future report.
Beyond the context of self-reporting, we have also recently taken steps to prioritize certain reports more generally for review under our Community Standards. Earlier this year, following our launch of Instagram teen accounts, we introduced the School Partnership Program for Instagram, a program that partners directly with schools and teachers to address bullying. Through this program, reports submitted by school partners about content that may violate Instagram's Community Standards are prioritized for review. However, policy areas that require self-reporting will still need a match between the target and the reporter. Additionally, schools receive status updates on their reports and are notified as soon as Instagram takes action on them. The program is currently open to middle and high schools in the US. As part of our standard process, we also allow parents to request the removal of violating content on behalf of children under 13 years old.
We are committed to exploring additional opportunities to provide services in instances where users may suddenly become public figures or highly visible on our platforms. This will require collaboration across our Product, Policy, Partnerships, and Operations teams to identify possible avenues for expansion.
We will provide updates on the status of this recommendation in future reports to the Board.
To ensure there are fewer enforcement errors on Bullying and Harassment violations that require self-reporting, Meta should ensure that when one report is chosen to represent multiple reports on the same content, it is the report with the highest likelihood of a match between the reporter and the content's target. In doing this, Meta should guarantee that any technological solutions account for potential adverse impacts on at-risk groups.
The Board will consider this recommendation implemented when Meta provides sufficient data to validate the efficacy of improvements in the enforcement of self-reports of Bullying and Harassment violations as a result of this change.
Commitment Statement: We are assessing solutions to improve the identification and review of enforcement errors across all Community Standards, including those related to self-reporting of Bullying and Harassment violations. This assessment includes reevaluating existing tooling and ongoing deliberation about how we prioritize content for human review.
Considerations: We are continuously working to improve and standardize our review and enforcement processes across all violation areas, and we will assess the feasibility of implementing this recommendation in line with this ongoing work. Currently, our system mitigates the risk of missing a self-report by ensuring that, if multiple people report content at different times, humans review the content multiple times before we begin automatically marking it non-violating. This increases the chance that, if there is a self-report, one of the human reviews will capture it. In cases where automated enforcement has previously been applied to reported content deemed non-violating, we also enable periodic human reviews at set intervals to re-examine content that receives frequent reports.
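To make the mechanism described above easier to follow, the sketch below models, in purely illustrative terms, how repeated reports on the same piece of content might be routed: several human reviews before automated non-violating decisions apply, with periodic re-review of content that continues to receive reports. The names, thresholds (HUMAN_REVIEWS_BEFORE_AUTOMATION, REPORTS_BETWEEN_PERIODIC_REVIEWS), and structure are assumptions made for illustration only; they do not describe Meta's actual systems.

```python
# Illustrative sketch only: a simplified model of the review flow described
# above. All names and thresholds are hypothetical and do not describe
# Meta's production systems.
from dataclasses import dataclass

HUMAN_REVIEWS_BEFORE_AUTOMATION = 3    # hypothetical threshold
REPORTS_BETWEEN_PERIODIC_REVIEWS = 50  # hypothetical interval

@dataclass
class ReportedContent:
    content_id: str
    human_reviews: int = 0           # completed human reviews of this content
    reports_since_review: int = 0    # reports received since the last human review

def route_report(item: ReportedContent, reporter_is_target: bool) -> str:
    """Decide how to handle a new report on content that has been reported before."""
    item.reports_since_review += 1

    # A self-report carries the contextual signal the policy relies on,
    # so it always goes to human review.
    if reporter_is_target:
        item.human_reviews += 1
        item.reports_since_review = 0
        return "human_review"

    # Early reports are routed to human review, which increases the chance
    # that a self-report among them is captured before automation begins.
    if item.human_reviews < HUMAN_REVIEWS_BEFORE_AUTOMATION:
        item.human_reviews += 1
        item.reports_since_review = 0
        return "human_review"

    # After repeated non-violating decisions, automation applies, but content
    # that keeps receiving reports is periodically re-examined by humans.
    if item.reports_since_review >= REPORTS_BETWEEN_PERIODIC_REVIEWS:
        item.human_reviews += 1
        item.reports_since_review = 0
        return "periodic_human_review"

    return "automated_non_violating"
```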
Ongoing assessments include exploring how advancements in our enforcement technology can further improve the enforcement accuracy of highly viral content reported by multiple users. We will continue to evaluate the feasibility of using these enhancements to address this recommendation and will provide updates in future reports.