We’re publishing our fourth quarter reports for 2024, including the Community Standards Enforcement Report, Adversarial Threat Report, Widely Viewed Content Report and our Oversight Board Report for H2 2024. All of the reports are available in our Transparency Center. Some report highlights include:
Community Standards Enforcement Report
Prevalence remained consistent across a range of violation types. In addition:
Prevalence decreased on Facebook and Instagram for Adult Nudity and Sexual Activity due to adjustments made to our proactive detection technology.
Prevalence increased on Instagram for Violent & Graphic Content as we made adjustments to our proactive detection technology.
Content actioned on Instagram for Restricted Goods and Services (Drugs) decreased as a result of changes we made to address over-enforcement and reduce enforcement mistakes.
Content actioned on Instagram for Child Sexual Exploitation returned to previous levels after a spike in violating viral content.
This report covers Q4 2024 and does not include any data related to the policy or enforcement changes made in January 2025. However, we have been monitoring those changes, and so far we have not seen any meaningful impact on the prevalence of violating content, despite no longer proactively removing certain content. In addition, enforcement mistakes have measurably decreased under this new approach.
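For context on the metric cited above: prevalence estimates the percentage of content views that were of violating content, based on sampling views and labeling them. The sketch below is a minimal illustration of that kind of sampled-proportion estimate, not Meta’s actual methodology; the sample counts are hypothetical.

```python
import math

def estimate_prevalence(violating_views: int, sampled_views: int) -> tuple[float, float]:
    """Point estimate and 95% margin of error for prevalence,
    modeled as a simple binomial proportion over a labeled sample of views."""
    p = violating_views / sampled_views
    # Normal-approximation confidence interval half-width (z = 1.96 for 95%).
    margin = 1.96 * math.sqrt(p * (1 - p) / sampled_views)
    return p, margin

# Hypothetical sample: 100,000 labeled views, 12 of which were found violating.
p, moe = estimate_prevalence(violating_views=12, sampled_views=100_000)
print(f"Estimated prevalence: {p:.3%} ± {moe:.3%}")  # e.g. 0.012% ± 0.007%
```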
You can read the full Q4 report here.
Adversarial Threat Report
In our Q4 Adversarial Threat Report, we’re sharing threat research into three new covert influence operations we took down, originating in Benin, Ghana, and China. All of them targeted people across the internet, including on our apps, Telegram, X (formerly Twitter), YouTube, TikTok, Blogspot, and their own websites. We detected and removed these campaigns from our apps before they were able to build authentic audiences on our platforms. We also include an update on Doppelganger, the most persistent Russian influence operation we’ve disrupted since 2017. You can read the full report here.
H2 2024 Oversight Board Report
In today’s H2 2024 report, we provide an update on our implementation work across 87 open recommendations, along with details on 7 case referrals we sent to the Oversight Board during this period, 4 of which the Board selected for review. We respond to every Oversight Board recommendation publicly and have committed to implementing or exploring the feasibility of 80% of its 292 recommendations to date.
Building on the progress highlighted in our H1 2024 report, during this past half:
The Oversight Board Trust created Appeals Centre Europe (ACE), a DSA-certified out-of-court dispute settlement body that accepts content moderation appeals relating to Facebook, YouTube and TikTok. This extends the Oversight Board’s impact beyond Meta platforms and helps establish new pathways for external review of content decisions in the wider online ecosystem.
The Board published its first set of impact assessments using Meta Content Library API data. These assessments provide an independent check on how we implement the Board’s recommendations and demonstrate the direct impact those recommendations have on our community in ways that protect speech and promote free expression on our platforms.
We also created an Oversight Board Impact Tracker, available in our Transparency Center, to capture some of the Board’s most influential recommendations and how they have shaped our platforms and our users’ experiences. We undertook this alongside, and in response to, the Board’s own impact assessments to expand our transparency and strengthen external accountability for how content is governed across Meta’s platforms. We will continue to share impact assessments in these semiannual reports and in the tracker on an ongoing basis. Read the full report here.
NCMEC CyberTips
As part of our ongoing work to give young people safe, positive online experiences, we’re continuing to provide more transparency into our efforts to find and report child exploitation to the National Center for Missing & Exploited Children (NCMEC).
In Q4 2024, Facebook, Instagram and Threads sent the following CyberTips to NCMEC:
Over 2 million CyberTip reports for child sexual exploitation in total.
Of these reports, over 150,000 involved inappropriate interactions with children. These CyberTips may include an adult soliciting child sexual abuse material (CSAM) directly from a minor, online enticement of a minor, minor sex trafficking, or attempts to meet and cause harm to a child in person, as well as cases where a child is in apparent imminent danger.
Over 1.8 million reports related to child sexual exploitation content, including shared or re-shared photos and videos that contain CSAM.
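Taken together, the two categories above roughly account for the overall total. A quick sanity check over the approximate figures, treating each “over” figure as a floor:

```python
# Approximate Q4 2024 CyberTip figures from the bullets above; each is a
# floor ("over X"), so these are illustrative lower bounds, not exact counts.
total_reports = 2_000_000
inappropriate_interactions = 150_000  # solicitation, enticement, trafficking, in-person harm
csam_content = 1_800_000              # shared or re-shared photos and videos containing CSAM

accounted = inappropriate_interactions + csam_content
print(f"Categorized: {accounted:,} of ~{total_reports:,} ({accounted / total_reports:.0%})")
# Categorized: 1,950,000 of ~2,000,000 (98%)
```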