We’re publishing our third quarter reports for 2023, including the Community Standards Enforcement Report, Adversarial Threat Report, Oversight Board Quarterly Update, Widely Viewed Content Report, and the Transparency Report for the first half of 2023 (Government Requests for User Data and Content Restrictions Based on Local Law). All of the reports are available in our Transparency Center.
Some report highlights include:
Community Standards Enforcement Report
In Q3, we continued to make progress on removing content that violates our Community Standards, with prevalence remaining relatively consistent across a wide range of violation areas. In addition, there was:
An increase in appeals across a range of violations due to updates made to fulfill the European Union’s Digital Services Act (DSA) requirement for appeals to be accessible to users for 6 months after an enforcement action is taken.
An increase in our proactive rate across multiple violation types as we updated our methodology. As of Q3 2023, we count comments as user reported only if they were reported directly by users; previously, when a post was reported by users, we also counted the comments on that post as user reported (see the sketch after this list).
A decrease in content actioned for spam due to multiple factors, including a bug in enforcement that has since been addressed.
A 135% increase in content actioned for Child Sexual Exploitation following improvements in our detection systems and numerous network disruptions by our specialist investigative teams. We hire specialists with backgrounds in law enforcement and online child safety to find predatory networks and remove them. These specialists monitor evolving behaviors exhibited by these networks, such as new coded language, to not only remove them but also to inform the technology we use to proactively find them. Between 2020 and 2023, our teams disrupted 32 abusive networks and removed more than 160,000 accounts associated with those networks.
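As referenced above, here is a minimal sketch of how the comment reclassification affects the proactive rate, using hypothetical numbers (the real measurement pipeline is far more involved):

```python
# Proactive rate = content found by our systems before any user report,
# divided by all content actioned. Numbers below are hypothetical.

def proactive_rate(proactive: int, user_reported: int) -> float:
    return proactive / (proactive + user_reported)

detected_proactively = 900          # items our systems found first
reported_directly = 60              # items users reported themselves
comments_on_reported_posts = 40     # comments never reported directly

# Old methodology: comments on a user-reported post count as user reported.
old = proactive_rate(detected_proactively,
                     reported_directly + comments_on_reported_posts)

# New methodology (Q3 2023): only directly reported comments count as
# user reported, so those 40 comments move to the proactive bucket.
new = proactive_rate(detected_proactively + comments_on_reported_posts,
                     reported_directly)

print(f"old: {old:.1%}, new: {new:.1%}")  # old: 90.0%, new: 94.0%
```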
You can read the full report here.
Adversarial Threat Report
In our Q3 Adversarial Threat Report, we’re sharing findings about three separate covert influence operations we took down for violating our policy against Coordinated Inauthentic Behavior (CIB). Two of them originated in China, and one in Russia. We are also publishing our insights into the global threat landscape ahead of the many elections taking place around the world next year. This includes our latest research into deceptive activities originating in Russia, Iran and China, the most prolific geographic sources of foreign influence operations to date. We’re also including current trends we see in the information environment, including challenges posed by generative artificial intelligence (AI) that we’re working to tackle alongside governments, researchers, and our industry peers. You can read the full report here.
The Oversight Board Quarterly Update
The Oversight Board continues to hold us accountable by helping drive important changes to our policies, operations and products. This quarter, the Board issued 16 recommendations across 5 cases, one of which was a Meta referral. We completed implementation of 16 recommendations and implemented 7 of those in full, meaning that we complied fully with the Board’s direction in each of those instances. We respond to every Oversight Board recommendation publicly and have committed to implementing, or exploring the feasibility of implementing, 77% of the Board’s total recommendations to date.
In Q3 2023, as a result of the Board’s recommendations, we:
Updated our Violence and Incitement Community Standard to clarify our approach to veiled threats;
Updated our Dangerous Organizations and Individuals policy to provide new details on how we approach news reporting as well as neutral and condemning discussion; and
Updated our public Approach to Newsworthy Content page to report the number of newsworthiness allowances applied this year to date, and include new information about scaled versus narrow newsworthiness allowances.
You can read the full report here.
Widely Viewed Content Report (WVCR)
In Q3 2023, we continued to see a majority of views (65%) coming from people’s friends and Groups or Pages they follow. You can read the full report here.
Government Requests for User Data
During the first half of 2023, global government requests for user data increased 13.5% from 239,388 to 271,691. The US and India submitted the largest number of requests by volume, followed by Germany, Brazil and France.
In the US, we received 73,956 requests, an increase of 15.3% from the second half of 2022. The share of requests accompanied by non-disclosure orders prohibiting Meta from notifying the user decreased to 63.75%. Emergency requests accounted for 5.7% of total requests in the US, an increase of 13.5% from the second half of 2022.
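As a quick arithmetic check, the 13.5% global increase follows directly from the two request counts reported above; a minimal sketch:

```python
# Reproducing the reported change in global government requests for user data.
h2_2022 = 239_388  # requests received July-December 2022
h1_2023 = 271_691  # requests received January-June 2023

change = (h1_2023 - h2_2022) / h2_2022
print(f"{change:.1%}")  # 13.5%
```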
Additionally, as a result of transparency updates introduced in the 2015 USA Freedom Act, the US government lifted non-disclosure orders on 11 National Security Letters, which we received between 2014 and 2021. These requests, along with the US government’s authorization letters, are available here.
As we have said in prior reports, we always scrutinize every government request we receive to make sure it is legally valid, no matter which government makes the request. We comply with government requests for user information only where we have a good-faith belief that the law requires us to do so. In addition, we assess whether a request is consistent with internationally recognized standards on human rights, including due process, privacy, free expression and the rule of law. When we do comply, we only produce information that is narrowly tailored to that request. If we determine that a request appears to be deficient or overly broad, we push back and will fight in court, if necessary. We do not provide governments with “back doors” to people’s information. For more information about how we review and respond to government requests for user data and the safeguards we apply, please refer to our FAQs.
You can read more about requests made in H1 2023 here.
Content Restrictions Based on Local Law
For many years, we’ve published biannual transparency reports, which include the volume of content restrictions we make when content is reported as violating local law but doesn’t go against our Community Standards. During this reporting period, the volume of content restrictions based on local law increased globally by 39%, from 89,368 in H1 2022 to 123,971 in H2 2022, and by a further 61% to 200,121 in H1 2023, driven mainly by increases in requests from Colombia, Brazil and Taiwan.
In March 2022, we committed to participating in Lumen, an independent research project hosted by Harvard’s Berkman Klein Center for Internet and Society. The project enables researchers to study content takedown requests from governments and private actors concerning online content. Today, the first set of takedown requests that Meta submitted to Lumen is available in their database. This will further enable the global community’s efforts to analyze, report on and advocate for the digital rights of internet users.
NCMEC CyberTips
As part of our ongoing work to provide young people with safe, positive online experiences, as first reported in Q2, we’re providing more transparency into our efforts to find and report child exploitation to the National Center for Missing and Exploited Children (NCMEC).
In Q3 2023, we reported the following numbers of CyberTips to NCMEC from Facebook and Instagram:
Facebook and Instagram sent over 7.6 million CyberTip reports for child sexual exploitation to NCMEC.
Of these reports, over 56,000 involved inappropriate interactions with children. CyberTips relating to inappropriate interactions with children may include an adult soliciting child sexual abuse material (CSAM) directly from a minor or attempting to meet and cause harm to a child in person. These CyberTips also include cases where a child is in apparent imminent danger.
Over 7.5 million related to shared or re-shared photos and videos that contain CSAM.
Intellectual Property
We have paused the publication of the intellectual property report this half. We will resume publishing this data in the first half of 2024.
Other Integrity Updates:
Bringing Together Community Standards and Advertising Standards: Our Community Standards are a living set of guidelines, and it’s important that they remain consistent across all surfaces on Facebook. That’s why we have begun a process to reduce duplication, strengthen uniformity and better clarify how and where our organic-content Community Standards and our Advertising Standards apply. We are starting this rollout with a small number of Community Standards and Advertising Standards, focusing first on unifying those that have overlapped most to date. While these Advertising Standards were already being enforced, these updates are intended to help advertisers better understand our policies. Each of these Advertising Standards now links directly to the corresponding Community Standard and includes any ad-specific differences that apply for that policy.
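One way to picture this unification is as a mapping from each Advertising Standard to its corresponding Community Standard plus a list of ad-specific differences. The sketch below is purely illustrative; the field names and the example entry are hypothetical, not Meta’s internal schema:

```python
from dataclasses import dataclass, field

@dataclass
class AdvertisingStandard:
    """Hypothetical record: an ad policy that defers to an organic
    Community Standard and tracks only its ad-specific differences."""
    name: str
    community_standard: str                     # the policy it links to
    ad_specific_differences: list[str] = field(default_factory=list)

# Illustrative entry only; not an actual policy mapping.
example = AdvertisingStandard(
    name="Restricted Goods and Services (Advertising Standard)",
    community_standard="Restricted Goods and Services (Community Standard)",
    ad_specific_differences=[
        "Additional restrictions that apply only to paid content",
    ],
)
```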
Expansion of Whole Post Integrity Embeddings (WPIE): AI has become one of the most effective tools for improving the precision of enforcement and reducing the prevalence of violating content, or the amount of violating content that people see on our platforms. Integrity AI systems have typically been single-purpose, each designed for a specific content type, language and policy violation, and each requiring its own training data and infrastructure. As shared previously, we created the WPIE AI framework to address a range of policy violations. Deploying this cross-problem AI enables us to use efficient classifiers, improve our detection capabilities and replace dozens of individual models. This approach allows our integrity systems to learn from training data across all policy violations, which helps cover gaps that a single-purpose model may have on its own. Our cross-problem AI systems now use an upgraded version of WPIE that expands our multimodal integrity system to encompass organic and paid policies for a deeper and richer understanding of content.
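As a rough illustration of the cross-problem idea, and assuming nothing about Meta’s actual architecture: a single shared encoder produces one whole-post embedding from multimodal inputs, and a lightweight per-policy head scores each violation type from that shared representation, so training signal for one policy can benefit the others. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class CrossProblemClassifier(nn.Module):
    """Illustrative sketch of a WPIE-style cross-problem model:
    one shared whole-post embedding, one small head per policy."""

    def __init__(self, text_dim: int, image_dim: int,
                 embed_dim: int, policies: list[str]):
        super().__init__()
        # Fuse modalities into a single whole-post embedding.
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, embed_dim),
            nn.ReLU(),
        )
        # One lightweight scoring head per policy violation type,
        # all trained on the same shared representation.
        self.heads = nn.ModuleDict(
            {p: nn.Linear(embed_dim, 1) for p in policies}
        )

    def forward(self, text_feats, image_feats):
        post = self.fuse(torch.cat([text_feats, image_feats], dim=-1))
        return {p: torch.sigmoid(h(post)).squeeze(-1)
                for p, h in self.heads.items()}

# Hypothetical usage: score one post against several policies at once.
model = CrossProblemClassifier(text_dim=768, image_dim=512, embed_dim=256,
                               policies=["spam", "hate_speech", "drugs"])
scores = model(torch.randn(1, 768), torch.randn(1, 512))
print({p: float(s) for p, s in scores.items()})
```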
Digitally Created or Altered Social Issue and Election Ads: Starting in the new year, advertisers will have to disclose when they use AI or other digital techniques to create or alter a political or social issue ad in certain cases. These include ads that contain a photorealistic image or video, or realistic-sounding audio, digitally created or altered to depict a real person as saying or doing something they did not say or do. They also include ads that depict a realistic-looking person who does not exist or a realistic-looking event that did not happen, alter footage of a real event, or depict a realistic event that allegedly occurred but that is not a true image, video, or audio recording of the event.
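The disclosure conditions above amount to a simple rule set. Purely as an illustration (the field names below are hypothetical, not an actual Meta API), an advertiser-side check might look like this:

```python
from dataclasses import dataclass

@dataclass
class IssueAd:
    """Hypothetical description of a political or social issue ad."""
    digitally_created_or_altered: bool
    fabricates_real_person_statement_or_action: bool
    depicts_nonexistent_realistic_person_or_event: bool
    alters_footage_of_real_event: bool
    misrepresents_alleged_real_event: bool

def disclosure_required(ad: IssueAd) -> bool:
    """Sketch of the disclosure conditions described in the policy text."""
    if not ad.digitally_created_or_altered:
        return False
    return (ad.fabricates_real_person_statement_or_action
            or ad.depicts_nonexistent_realistic_person_or_event
            or ad.alters_footage_of_real_event
            or ad.misrepresents_alleged_real_event)
```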
High Risk Drug Policy Update: As part of Meta’s ongoing work to help combat the drug epidemic on and off our platforms, we’ve added a new section to our Restricted Goods and Services policy to address the sale of high-risk drugs, starting with fentanyl, cocaine and heroin. A single violation of this type will result in the account being disabled.