Integrity Reports, H1 2026

UPDATED MAR 19, 2026
We're publishing our H1 2026 integrity reports, the first under our new semiannual reporting cadence. The H1 publication includes the H2 2025 Community Standards Enforcement Report, the Q4 2025 Widely Viewed Content Report, the H2 2025 Content Restrictions Based on Local Law transparency report, and our Oversight Board Report for H2 2025. Because of the shift to semiannual reporting periods, the Government Requests for User Data report covering H2 2025 will be published next half. Our latest Adversarial Threat Report, which covers the first part of 2026, was published on March 11. All of the reports are available in our Transparency Center.
In Q3, we reported that we'd been testing more advanced AI systems to deliver more accurate content enforcement. As we begin to deploy these systems across Meta platforms, we're also using AI to improve a number of other integrity processes, including how we measure the prevalence of violating content. The use of AI tools for that measurement will be reflected in our next reports.
Some highlights of this report include:
Community Standards Enforcement Report
Last quarter we began providing new data on global enforcement precision, which showed that of the hundreds of billions of pieces of content produced on Facebook and Instagram globally during the quarter, less than 0.1% was removed incorrectly. Our mistake rate remained consistent during Q4 2025, with the exception of false positives, which increased by more than 100% during a very brief period of the quarter due to a bug.
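To make the relationship between these figures concrete, here is a minimal Python sketch of the underlying arithmetic. All counts are invented for illustration, and the function names are our own; Meta's actual measurement pipeline is not described in the report and is certainly more involved.

```python
# Hypothetical illustration of the enforcement-precision arithmetic
# described above. All counts below are invented; Meta's actual
# measurement methodology is not public.

def mistake_rate(incorrect_removals: int, total_content: int) -> float:
    """Share of ALL content produced that was removed incorrectly.

    The "less than 0.1%" figure in the report is of this form:
    incorrect removals divided by all content produced in the quarter.
    """
    return incorrect_removals / total_content


def false_positive_share(incorrect_removals: int, total_removals: int) -> float:
    """Share of REMOVALS that were incorrect (i.e., 1 - precision)."""
    return incorrect_removals / total_removals


if __name__ == "__main__":
    TOTAL_CONTENT = 300_000_000_000   # "hundreds of billions" of pieces (hypothetical)
    TOTAL_REMOVALS = 2_000_000_000    # hypothetical removal volume
    INCORRECT_REMOVALS = 150_000_000  # hypothetical incorrect removals

    print(f"mistake rate: {mistake_rate(INCORRECT_REMOVALS, TOTAL_CONTENT):.4%}")
    print(f"false positives as share of removals: "
          f"{false_positive_share(INCORRECT_REMOVALS, TOTAL_REMOVALS):.2%}")
```

Note that the two denominators differ: a bug can double the false-positive count while barely moving the mistake rate, because the latter is measured against everything produced on the platform.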
Meanwhile, prevalence remained consistent across most problem areas, with a few exceptions (a sketch of how ranges like these can be estimated follows the list):
  • Prevalence of violent and graphic content decreased on Facebook (from 0.19%-0.20% to 0.15%-0.16%) due to adjustments made to our proactive detection technology during the quarter.
  • On Instagram, prevalence of adult nudity and sexual activity appeared to increase (from 0.06%-0.07% to 0.09%-0.11%) due to changes made during the quarter to improve reviewer training and enhance review workflows, which affect how samples are labeled when measuring prevalence.
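The ranges above are estimates with uncertainty bounds rather than exact counts. As a minimal sketch, not Meta's published methodology, one standard way to produce such a range is to sample content views, have reviewers label each sampled view as violating or not, and report a binomial confidence interval; the function and sample counts below are hypothetical.

```python
import math

def prevalence_interval(violating: int, sampled: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% binomial confidence interval for prevalence.

    `violating` is the number of sampled content views labeled as violating;
    `sampled` is the total number of labeled views. This is a textbook
    interval, shown purely for illustration.
    """
    p = violating / sampled
    half_width = z * math.sqrt(p * (1.0 - p) / sampled)
    return max(0.0, p - half_width), p + half_width

# Hypothetical sample: 1,750 violating views out of 1,000,000 labeled views.
low, high = prevalence_interval(violating=1_750, sampled=1_000_000)
print(f"estimated prevalence: {low:.2%} - {high:.2%}")  # about 0.17% - 0.18%
```

Under a sampling approach like this, changes to reviewer training or review workflows can shift how sampled views are labeled, and therefore move the reported range without any change in user behavior, which is consistent with the Instagram note above.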
We are beginning to share data on our efforts to combat fraud and scams on our platforms, and that data will be more comprehensive in future reports. In 2025, we removed over 159 million scam ads, 92% of which we took down before anyone reported them.
You can read the full report here.
Adversarial Threat Report
We published our H1 2026 Adversarial Threat Report on March 11, detailing research and enforcement operations against a range of adversarial behaviors, including those associated with scam centers. In 2025, we took down 10.9 million accounts associated with scam centers in Southeast Asia and the Middle East. Throughout these investigations, we saw how these operations have grown more sophisticated and industrialized.
Increasing sophistication and tradecraft was also a theme in our disruption of covert influence operations across networks from Russia, Iran and China, which continue to be the three leading sources of foreign influence operations globally. One Iranian operation began targeting English-speaking audiences in the summer of 2025; we detected and disrupted it during Q4, before it had time to build an authentic audience on our platforms. The network used a two-tiered structure of "creator" personas, such as fake political scientists, and "amplifiers" to spread narratives critical of Israel and US foreign policy. Other disruptions included: a Russian network targeting Sub-Saharan Africa with AI-generated content posing as grassroots news in an attempt to shape sentiment toward the West; a Chinese network targeting Taiwan that solicited individuals to submit grievances about domestic affairs; and an additional Iranian operation targeting Azerbaijanis in Iran and Germany. Elsewhere, a domestic Pakistani network was notable for its extensive use of AI for target identification, multilingual content, and photorealistic video generation in support of nationalist narratives promoting Pakistan's central government.
We also continue to tackle the concerning growth of so-called 'nudify' apps, which use AI to create fake non-consensual nude or sexually explicit images. Between November 2025 and January 2026, we removed over 344,000 ads across Facebook and Instagram that attempted to promote these apps. We also sent cease and desist letters to 46 companies that violated our policies by advertising such apps on our platforms.
You can read the full report here.
Widely Viewed Content Report
In Q4 2025, the top 20 most widely viewed domains collectively accounted for about 0.3% of all Feed content views, consistent with Q3 2025.
You can read the full report here.
Content Restrictions Based on Local Law
Meta remains committed to a high level of transparency when we take action on content that violates local law but does not go against our Community Standards. For many years, we've published these semiannual reports as part of our commitment to transparency and our obligations under the Global Network Initiative (GNI).
This report includes data from a limited set of countries where we are obligated to automatically restrict content at scale to comply with local law, which is reflected in those countries' comparatively higher volumes of content restrictions.
During this reporting period, the volume of content restrictions based on local law on Facebook and Instagram decreased globally, from over 35 million in H1 2025 to over 27 million in H2 2025, driven by a reduction in automated geoblocks in Indonesia, which peaked in H1 2024 and have since declined and stabilized.
We continually review our processes and protocols to help ensure the accuracy of our reporting. You can read more here.
Oversight Board Report for H2 2025
Now in its sixth year, the Oversight Board continues to scale its impact, from improving individual user experiences to shaping system-level approaches to governance. As the Oversight Board's role continues to evolve, we're sharing more on this progress as well as our work to implement the Board's recommendations. We publicly respond to every recommendation the Oversight Board issues and have committed to implementing, or exploring the feasibility of implementing, 80% of the 326 recommendations Meta has responded to as of December 31, 2025.
This past half, the Board's partnership with Meta drove progress in several areas:
  • The Board accepted a Policy Advisory Opinion request on the global expansion of Community Notes, Meta's crowd-sourced moderation tool, and for the first time will review Meta's approach to disabling accounts, bringing greater transparency to account-level enforcement and giving users new ways to shape their experience on our platforms.
  • For the first time, the Board issued a decision on a case concerning Meta's Fraud, Scams, and Deceptive Practices Community Standard as it relates to AI-manipulated content.
  • Following the Board's guidance, we increased the specificity of notifications provided to users whose content is removed in response to formal government reports.
The Board's work ensures that our approach remains guided by independent, principled oversight. Taken together, these updates demonstrate a governance model built to evolve alongside emerging technologies.
You can read the full report here.
NCMEC CyberTips
As part of our ongoing work to provide young people with safe, positive online experiences, we continue to provide transparency into our efforts to find and report child exploitation to the National Center for Missing and Exploited Children (NCMEC).
  • In Q4 2025, Facebook, Instagram, and Threads sent over 2.6 million CyberTip reports for child sexual exploitation.
  • Of these reports:
    • Over 660,000 involved inappropriate interactions with children. CyberTips relating to inappropriate interactions with children may include an adult soliciting child sexual abuse material (CSAM) directly from a minor, online enticement of a minor, minor sex trafficking, or attempting to meet and cause harm to a child in person. These CyberTips also include cases where a child is in apparent imminent danger.
    • Over 2 million reports related to shared or re-shared photos and videos that contain CSAM.
Other Integrity Updates
  • Policy Update: We'll be updating our definition of Violence Inducing Entities to streamline it, and shifting enforcement against Violence Inducing Conspiracy Networks to our Violence and Incitement policy. Going forward, we will remove QAnon and Antifa content when it is combined with content-level threat signals.