We’re publishing our second quarter reports for 2024, including the Community Standards Enforcement Report, Adversarial Threat Report, Widely Viewed Content Report, and our Oversight Board Report for H1 2024. All of the reports are available on our Transparency Center.
Some report highlights include:
Community Standards Enforcement Report
In Q2, prevalence remained relatively consistent across most areas. Other highlights include:
The one increase in prevalence was for Violent and Graphic Content on Facebook. We estimate prevalence by sampling content and labeling each sample for violations (a simplified sketch of this sampling approach follows this list). We improved our training to label violations more accurately, which increased the measured value of prevalence. This increase does not indicate that there is more violating content on the platform; rather, our ability to identify such content has improved. We expect these efforts to help reduce prevalence in the future.
An increase in content actioned for Child Endangerment: Nudity and Physical Abuse Content on Facebook as a result of our proactive detection technology identifying and taking down violating viral content.
An increase in content actioned for Dangerous Organizations and Individuals (Organized Hate) on Facebook and Instagram due to continued updates made to our proactive detection technology.
A decrease in content actioned for Spam on Facebook due to changes in our proactive enforcement following policy updates. While fluctuations in enforcement metrics for spam are expected given the highly adversarial nature of this space, we also saw an increase in content appeals due to a bug that led to over-enforcement.
A decrease in content actioned for Violence and Incitement on Facebook and Instagram due to updates made to our proactive detection technology to improve accuracy.
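To make the sampling-based prevalence measurement described above concrete, here is a minimal sketch in Python. It illustrates only the general statistical idea, not Meta's production methodology; the function names, labeler behavior, and the 2% borderline rate are all assumptions made for the example.

```python
import random

def estimate_prevalence(view_sample, labeler, confidence_z=1.96):
    """Estimate prevalence as the share of sampled content views that a
    labeler marks as violating, with a normal-approximation confidence
    interval. A simplified illustration, not Meta's production method."""
    labels = [labeler(item) for item in view_sample]  # True = violating
    n = len(labels)
    p = sum(labels) / n
    margin = confidence_z * (p * (1 - p) / n) ** 0.5
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical sample: ~2% of sampled views are borderline violations.
sample = [{"borderline": random.random() < 0.02} for _ in range(10_000)]

# A labeler that misses borderline cases vs. one trained to catch them.
lenient = lambda item: False
strict = lambda item: item["borderline"]

print(estimate_prevalence(sample, lenient))  # measured prevalence ~0%
print(estimate_prevalence(sample, strict))   # measured prevalence ~2%
```

The second call shows the dynamic described above: a labeler trained to catch borderline violations reports a higher measured prevalence over the same sample, even though the underlying content mix has not changed.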
Adversarial Threat Report
In our Q2 Adversarial Threat report, we’re sharing threat research into six new covert influence operations that we took down in Russia, Vietnam and the United States. We detected and removed many of these cross-internet campaigns early in their audience building efforts. We also include an update on Doppelganger, the most persistent Russian influence operation we’ve disrupted since 2017. Finally, as we look ahead to a number of elections, including in the US, we’re sharing some key insights into the global threat landscape and what we expect to see through the rest of this year. You can read the full report here.
Widely Viewed Content Report
In Q2 2024, we continued to see a majority of views (73.5%) coming from people’s friends and the groups they follow. A bug is causing the report to show 0% of Feed content views coming from Page Followers; this will be addressed in next quarter’s report. You can read the full report here.
H1 2024 Oversight Board Report
In today’s H1 2024 report, we provide a comprehensive update for our implementation work across 91 recommendations and further details on the seven content referrals we sent to the Board in this period – two of which (content referrals #3 and #4) were selected by the Board for review. We respond to every Oversight Board recommendation publicly and have committed to implementing or exploring the feasibility of implementing 79% of recommendations to date.
Thanks to the Oversight Board's recommendations, in the first half of 2024 we:
Began providing new information about AI-generated content, including new labels and an updated approach to related policies. This followed a series of recommendations from the Oversight Board where we agreed that providing transparency and additional context is an effective way to address these types of content at scale, while avoiding the risk of unnecessarily restricting speech.
Completed development of a new approach to retaining potential evidence of war crimes and serious violations of international human rights law.
Committed to clarifying our Coordinating Harm & Promoting Crime policy to share how we define content encouraging illegal participation in voting or census processes.
We are also sharing impact assessments in this report, which demonstrate how the Board’s recommendations create far-reaching change beyond individual cases. For example, one recommendation led to the creation of a new pathway for users to provide us with additional context in their appeal submissions when they disagree with an enforcement decision. This helps our content reviewers understand when policy exceptions may apply, and enables users to better advocate for their appeals. The impact assessment shows that in just one month during H1, users provided additional context in approximately 82% of appeal submissions across Facebook and Instagram.
Read the full report here.
NCMEC Cybertips
As part of our ongoing work to provide young people with safe, positive online experiences, we’re continuing to provide more transparency into our efforts to find and report child exploitation to the National Center for Missing and Exploited Children (NCMEC). We began sharing this data last year.
In Q2 2024, we reported the following number of CyberTips to NCMEC from Facebook and Instagram:
Facebook and Instagram sent over 2.8 million NCMEC CyberTip Reports for child sexual exploitation.
Of these reports, over 80 thousand involved inappropriate interactions with children. CyberTips relating to inappropriate interactions with children may include an adult soliciting child sexual abuse material (CSAM) directly from a minor or attempting to meet and cause harm to a child in person. These CyberTips also include cases where a child is in apparent imminent danger.
Over 2.7 million reports related to shared or re-shared photos and videos that contain CSAM.
This quarter we streamlined our reporting to NCMEC and, as a result, submitted fewer CyberTips than in previous quarters. Earlier this year, based on feedback from NCMEC, we began grouping identical or near-identical copies of viral or meme content into a single CyberTip. For example, if the same piece of viral or meme content is shared 100 times in 24 hours, we now file one CyberTip with that information rather than 100 separate CyberTips. NCMEC and law enforcement still receive all the same content and information, but grouping reports this way helps them manage reports more easily and prioritize those that represent an imminent risk of harm.
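As an illustration of this kind of grouping, here is a minimal sketch in Python. It assumes exact-hash matching and fixed 24-hour windows, both simplifications: real systems typically use perceptual hashing to catch near-identical copies, and none of the function or field names below reflect Meta's actual pipeline.

```python
import hashlib
from datetime import datetime, timedelta

def content_key(media_bytes: bytes) -> str:
    # Exact content hash; a production system would use perceptual
    # hashing so that near-identical copies share a key.
    return hashlib.sha256(media_bytes).hexdigest()

def group_into_cybertips(shares, window=timedelta(hours=24)):
    """Group shares of identical content within a time window into one
    aggregated report instead of one report per share. `shares` is a
    list of (timestamp, media_bytes, metadata) tuples; all names here
    are illustrative."""
    reports = {}
    for ts, media, meta in shares:
        key = (content_key(media), int(ts.timestamp() // window.total_seconds()))
        report = reports.setdefault(key, {"first_seen": ts, "share_count": 0, "shares": []})
        report["share_count"] += 1
        report["shares"].append(meta)  # underlying information is retained
    return list(reports.values())

# 100 shares of the same viral image within 24 hours -> 1 grouped report.
viral_image = b"<image bytes>"
start = datetime(2024, 6, 1, 12, 0)
shares = [(start + timedelta(minutes=i), viral_image, {"share_id": i})
          for i in range(100)]
tips = group_into_cybertips(shares)
print(len(tips), tips[0]["share_count"])  # 1 100
```

Under these assumptions, the 100 shares collapse into a single aggregated report while every underlying share's metadata is preserved, matching the stated goal of reducing report volume without losing content or information.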
Other Integrity Updates
2023 Newsworthy Allowance: In 2022, based on a recommendation from the Oversight Board, we published details on our newsworthiness allowance, including the total number of documented newsworthy allowances and the number of those allowances issued for posts by politicians. From June 2023 to June 2024, we documented 32 newsworthiness allowances, of which 14 were issued for posts by politicians. This updated data is now available in our Transparency Center.
One Set of Community Standards: As part of our work to give users a streamlined experience, starting later this year we’ll have one set of policies, the Community Standards, for Facebook, Instagram, Messenger and Threads. Our policies and how we enforce them aren’t changing; rather, our goal is to make our policies easier to understand and easier to access for everyone. Our Community Standards help you understand what we do and don’t allow on all four apps, and we believe this change will make things more seamless for our users.
Improvements to User Reporting: As part of our continuous work to improve our products and enhance our users’ experiences, we’ve been working on and testing improvements to the process by which users report content they believe violates our Community Standards. We’ll share more details in the coming months.