Integrity Reports, Third Quarter 2025
UPDATED DEC 11, 2025
We’re publishing our third quarter reports for 2025, including the Community Standards Enforcement Report, Adversarial Threat Report, Widely Viewed Content Report and the semiannual Transparency Report consisting of Government Requests for User Data and Content Restrictions Based on Local Law. All of the reports are available in our Transparency Center.
In response to increased regulatory reporting requirements, including publication of the semiannual Very Large Online Platforms Transparency Reports under the EU’s Digital Services Act, these transparency reports will be released on a semiannual basis beginning in 2026 to better align with the timing of those requirements.
Some Q3 report highlights include:
Community Standards Enforcement Report
In January we announced a series of steps to allow more speech on our platforms, and in Q1 and Q2 we shared our progress in reducing mistakes in the US. That trend has continued into Q3 with more targeted deployment of automated systems. For the first time, we are providing new data on global enforcement precision that measures our progress in making fewer mistakes and will hold us accountable for continued improvement going forward.
Of the hundreds of billions of pieces of content produced on Facebook and Instagram globally in Q3, less than 1% was removed for violating our policies, and less than 0.1% was removed incorrectly. For the content that was removed, we measured our enforcement precision, that is, the percentage of correct removals out of all removals, to be more than 90% on Facebook and more than 87% on Instagram. In other words, roughly 1 out of every 10 pieces of content removed, and fewer than 1 out of every 1,000 pieces of content produced overall, was removed in error.
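The relationship between removal volume, precision and the overall error rate can be sketched with a few lines of arithmetic. The figures below are hypothetical, chosen only to match the orders of magnitude described in the report, not actual enforcement data:

```python
# Hypothetical illustration of how enforcement precision relates to the
# overall error rate; all figures are made up for the sake of the arithmetic.
total_content = 100_000_000_000          # pieces of content produced
removed = 900_000_000                    # policy removals (< 1% of total)
precision = 0.90                         # share of removals that were correct

incorrect_removals = removed * (1 - precision)

# Error rate among removals: about 1 in 10 when precision is 90%.
error_rate_of_removals = incorrect_removals / removed

# Error rate across all content produced: below 1 in 1,000.
error_rate_overall = incorrect_removals / total_content

print(f"{error_rate_of_removals:.0%} of removals in error")
print(f"{error_rate_overall:.2%} of all content removed in error")
```

This is why a precision in the high-80s or low-90s still translates to an overall mistaken-removal rate well under 0.1% of everything produced: the error rate among removals is diluted by the fact that only a small fraction of content is removed at all.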
These improvements came as prevalence remained consistent across most problem areas, with a few exceptions:
- On both Facebook and Instagram, prevalence increased for adult nudity and sexual activity and for violent and graphic content, and on Facebook it increased for bullying and harassment. This is largely due to changes made during the quarter to improve reviewer training and enhance review workflows, which affect how samples are labeled when measuring prevalence.
We continue to focus our enforcement efforts on high-severity violations and illegal activity. You can read the full report here.
Adversarial Threat Report
In this Q2-Q3 report, we're sharing new threat research into Coordinated Inauthentic Behavior (CIB) operations. We're highlighting updated attribution for a persistent influence operation known as “Endless Mayfly,” which we have attributed to Iran’s International Union of Virtual Media (IUVM), and discussing Russia's use of unwitting freelance social media managers in Africa to advance violations of our CIB policy. We also provide case summaries for five other CIB operations that target India, Poland and Moldova. In addition, we're sharing new insights on the AI security threat landscape, including how adversarial actors use AI to violate our Community Standards, how we use AI to defend our platforms from adversarial abuse, and how we are working to secure our AI models from adversarial interference. The report also covers the actions we are taking to combat scams, highlighting our work against criminal scam syndicates and outlining our whole-of-society approach to tackling fraud and scams, detailed through our Fraud Attack chain.
Finally, we are updating our Inauthentic Behavior (IB) Community Standards to simplify and refine our IB and CIB policies and to help uninvolved authentic communities, Pages, and Groups that are targeted, managed, or co-opted by CIB operations remain on our services. You can read the full report here.
Widely Viewed Content Report
In Q3 2025, the top 20 widely viewed domains collectively accounted for about 0.4% of all Feed content views overall, unchanged from the previous quarter. You can read the full report here.
Government Requests for User Data
During the first half of 2025, global government requests for user data increased by 16.3%, from 322,062 to 374,516. India continues to be the top requester, with a 31.9% increase in requests, followed by the United States with an 8.6% increase, and then Brazil, Germany and France.
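As a quick sanity check, the cited global increase can be reproduced from the two totals above (both numbers are taken directly from the report text):

```python
# Reproduce the cited 16.3% growth in global government requests for user data.
prev_half = 322_062   # global requests, H2 2024
this_half = 374_516   # global requests, H1 2025

pct_increase = (this_half - prev_half) / prev_half * 100
print(f"{pct_increase:.1f}% increase")  # 16.3%
```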
In the US, we received 81,064 requests in the first half of 2025, an increase of 8.6%; 77.3% of these included non-disclosure orders prohibiting Meta from notifying the target user. Emergency requests accounted for 6% of total requests in the US.
Additionally, as a result of transparency updates introduced in the 2015 USA Freedom Act, the US government lifted non-disclosure orders on 14 National Security Letters (“NSLs”). These NSLs, along with the US government’s authorization letters, are available here.
Content Restrictions Based on Local Law
Meta remains committed to upholding high levels of transparency when we take action on content that violates local law but does not go against our Community Standards. For many years, we've published these semiannual reports as part of our commitment to transparency and our Global Network Initiative (GNI) commitments.
This report includes data from a limited number of countries where we are obligated to automatically restrict content at scale to comply with local law; this is reflected in the comparatively higher volumes of content restrictions reported for those countries.
During this reporting period, the volume of content restrictions based on local law for Facebook and Instagram decreased globally from over 84.6 million in H2 2024 to 35 million in H1 2025, driven by a reduction in automated geoblocks in Indonesia, which peaked in H1 2024 but have since declined and stabilized.
We continually review our processes and protocols to help ensure the accuracy of our reporting. You can read more here.
NCMEC CyberTips
As part of our ongoing work to provide young people with safe, positive online experiences, we're continuing to provide transparency into our efforts to find and report child exploitation to the National Center for Missing and Exploited Children (NCMEC).
In Q3 2025, Facebook, Instagram and Threads sent over 2 million CyberTip reports for child sexual exploitation. Of these reports:
- Over 486,000 involved inappropriate interactions with children. CyberTips relating to inappropriate interactions with children may include an adult soliciting child sexual abuse material (CSAM) directly from a minor, online enticement of a minor, minor sex trafficking, or attempting to meet and cause harm to a child in person. These CyberTips also include cases where a child is in apparent imminent danger.
- Over 1.6 million reports related to shared or re-shared photos and videos that contain CSAM.
Other Integrity Updates
- Enhanced AI for better content enforcement: For years we’ve used AI as part of our content enforcement. Recently we’ve been testing significantly more advanced AI models, with promising results. In one test, the latest generation of large language models outperformed our current automation and human review systems in detecting celebrity impersonations - a common scammer tactic. These enhanced models deliver more accurate and comprehensive enforcement than our current systems, reducing mistakes and finding and removing harmful content more effectively, so people see less of it on our platforms. Over the next few years we’ll transition to these enhanced models, improving our content review across reports, requiring less human review in areas that can be reliably automated or handled by enhanced LLMs, and focusing human expertise on the most sensitive and difficult areas of enforcement. We’ll continue to invest in this technology to help keep people safe, reduce mistakes and support better experiences for everyone who uses our apps.
- Ongoing review of banned accounts: In addition to conducting our regular audits of the accounts banned under our Dangerous Organizations and Individuals policies, we'll continue reviewing banned accounts on Facebook, Instagram and Threads to ensure entries remain relevant and up-to-date, that they still qualify for removal under our policies, and that we're not enforcing on accounts based on policies that no longer exist.