On the heels of releasing our Q4 Adversarial Threat Report, we are publishing the remainder of our fourth quarter reports for 2023, including the Community Standards Enforcement Report, Oversight Board Quarterly Update, and the Widely Viewed Content Report. All of the reports are available on our Transparency Center.
Some report highlights include:
Community Standards Enforcement Report
In Q4, we continued to make progress on removing content that violates our Community Standards, with prevalence remaining relatively consistent across most areas (see the illustrative sketch below). In addition, we saw:
An increase in actioned content for Adult Nudity and Sexual Activity, due to updates to our proactive detection technology that improved accuracy on Live Videos.
An increase in actioned content and appeals for Dangerous Organizations and Individuals and for Violent and Graphic Content, due to the Israel-Hamas war.
An increase in actioned content, on both Facebook and Instagram, for violations of our Regulated Goods and Services policy, largely related to fireworks and firearms.
A decrease in appealed content related to Bullying and Harassment, due to accuracy improvements in our proactive detection technology.
You can read the full report here.
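For readers unfamiliar with the metrics above: prevalence estimates the share of content views that were of violating content, and the proactive rate captures how much actioned content our systems found before anyone reported it. The sketch below is a minimal, hypothetical illustration of how such rates are computed from sampled counts; the numbers are made up and this is not Meta's internal methodology.

```python
# Illustrative only: hypothetical sampled counts, not Meta's internal data.

def prevalence(violating_views: int, sampled_views: int) -> float:
    """Estimated share of sampled content views that were of violating content."""
    return violating_views / sampled_views

def proactive_rate(proactively_actioned: int, total_actioned: int) -> float:
    """Share of actioned content found by detection systems before a user report."""
    return proactively_actioned / total_actioned

# Hypothetical quarter: 41 violating views in 100,000 sampled views (~0.04%),
# and 9,600 of 10,000 content actions initiated proactively (96%).
print(f"prevalence:     {prevalence(41, 100_000):.4%}")
print(f"proactive rate: {proactive_rate(9_600, 10_000):.1%}")
```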
Adversarial Threat Report
In our Q4 Adversarial Threat Report, we shared our third annual update on our work against the surveillance-for-hire industry, including our call to action against spyware with specific policy recommendations for a broader whole-of-society response. Our report also included new Q4 takedowns of Coordinated Inauthentic Behavior (CIB) networks in China, Myanmar and Ukraine, as well as our key insights into the adversarial trends we’ve identified in the two years since Russia began its full-scale war against Ukraine. This threat research helps inform our defenses as we work to protect public debate ahead of the many elections taking place around the world this year. You can read the full report here.
The Oversight Board Quarterly Update
The Oversight Board continues to help drive important changes to our policies, operations and products to hold us accountable. In Q4 2023, we submitted 15 content referrals to the Board and completed work on 16 recommendations spanning our operations, policies and products, contributing to broad and meaningful improvements across the company and our global community. We respond to every Oversight Board recommendation publicly and have committed to implementing or exploring the feasibility of implementing 77% of the Board’s total recommendations to date.
Q4 2023 marked the Oversight Board’s first expedited decisions, which the bylaws provide for in “exceptional circumstances, including when content could result in urgent real-world consequences.” In the wake of the October 7th terror attacks by Hamas in Israel, and Israel’s response in Gaza, the Board selected the Al-Shifa Hospital and Hostages Kidnapped from Israel cases. In swiftly overturning our original decisions in both cases, the Board provided feedback that continues to shape our response to the ongoing conflict. These decisions also demonstrated the potential of expedited case decisions in enabling the Board to provide critical guidance during high-risk, fast-moving situations.
In February 2024, the Oversight Board announced that its scope now includes content appeals from Threads in addition to Facebook and Instagram. As on Facebook and Instagram, people will be able to refer content decisions on Threads to the Board, whether the content is left up or taken down. The Oversight Board’s charter is set up for its scope to grow and evolve over time, and since launching in 2020 its scope has expanded to include reporter appeals, bundled decisions, summary decisions and expedited decisions.
As both Meta and the Board work to deliver more output more efficiently, each institution has independently decided to shift from a quarterly to a semiannual reporting cadence for updates on our work with the Oversight Board. You can read more on the Board’s impact here.
Widely Viewed Content Report (WVCR)
In Q4 2023, we continued to see a majority of views (62.7%) coming from people’s friends and Groups or Pages they follow. You can read the full report here.
NCMEC CyberTips
As part of our ongoing work to provide young people with safe, positive online experiences, and as first reported in Q2 and continued in Q3, we’re providing more transparency into our efforts to find and report child exploitation to the National Center for Missing and Exploited Children (NCMEC).
In Q4 2023, we reported the following number of CyberTips to NCMEC from Facebook and Instagram:
Facebook and Instagram sent over 6 million CyberTip reports for child sexual exploitation.
Of these reports, over 100,000 involved inappropriate interactions with children. CyberTips relating to inappropriate interactions with children may include an adult soliciting child sexual abuse material (CSAM) directly from a minor or attempting to meet and cause harm to a child in person. These CyberTips also include cases where a child is in apparent imminent danger.
Over 5.9 million reports related to shared or re-shared photos and videos that contain CSAM.
Other Integrity Updates
Preparing for EU Elections: On Monday 26 February, we published a newsroom post outlining the measures we are taking to prepare for the EU Parliament Elections, including our approach to combating misinformation, tackling influence operations and countering the abuse of GenAI technologies. More information can be found here.
Labeling AI-Generated Images on Facebook, Instagram and Threads: We’re working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads. We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app. We’re taking this approach through the next year, during which a number of important elections are taking place around the world. During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward. More information can be found here.
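As a rough illustration of what detecting such a signal could look like, the sketch below scans an image file’s embedded XMP metadata for the IPTC DigitalSourceType value used to mark generative-AI imagery. This is a simplified, hypothetical check, not Meta’s actual pipeline; production systems would also validate C2PA manifests and other markers, and the file path shown is an assumption.

```python
# Illustrative sketch, not Meta's pipeline: look for the IPTC "trained
# algorithmic media" DigitalSourceType value inside an image's XMP packet.
from pathlib import Path

# Standard IPTC NewsCodes URI used to label generative-AI imagery.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_generated_signal(image_path: str) -> bool:
    """Return True if the file's embedded XMP metadata declares the image AI-generated."""
    data = Path(image_path).read_bytes()
    start = data.find(b"<?xpacket begin")
    end = data.find(b"<?xpacket end")
    if start == -1 or end == -1:
        return False  # no XMP packet found
    return AI_SOURCE_TYPE in data[start:end]

if __name__ == "__main__":
    # "upload.jpg" is a hypothetical path used purely for illustration.
    print(has_ai_generated_signal("upload.jpg"))
```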
Our Continuous Efforts to Keep Organized Hate Groups Off Our Platforms: Most of our enforcement against terrorism and organized hate comes from routine content removal. However, there are times when, as we face an especially determined or adversarial group, content enforcement alone is not enough. That’s when we turn to an important tactic called a strategic network disruption (SND), which helps us take down an entire network at once. As part of that ongoing work, we recently conducted strategic network disruptions of two US hate organizations, the National Justice Party and the KKK. SNDs are an important tool in our work to keep our platforms and communities safe and to counter malicious groups when they attempt to abuse our platforms to cause offline harm and violence.
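To make “taking down an entire network at once” concrete, below is a minimal sketch, under hypothetical inputs, of how a disruption can expand from a few confirmed accounts through affiliation links (such as co-administered Pages or shared infrastructure) to cover a whole connected cluster. The graph, account names and seed set are all illustrative assumptions; real strategic network disruptions rely on much richer behavioral signals and expert human review.

```python
# Minimal sketch of a network-level takedown: breadth-first expansion from
# confirmed accounts across affiliation edges, actioning the whole component.
# All accounts and edges below are hypothetical.
from collections import deque

# Hypothetical affiliation graph: account -> linked accounts
# (e.g., co-administered Pages, shared infrastructure).
affiliations = {
    "acct_a": {"acct_b", "acct_c"},
    "acct_b": {"acct_a", "acct_d"},
    "acct_c": {"acct_a"},
    "acct_d": {"acct_b"},
    "acct_e": {"acct_f"},  # unrelated cluster, left untouched
    "acct_f": {"acct_e"},
}

def network_to_disrupt(seeds: set) -> set:
    """Return every account reachable from the confirmed seed accounts."""
    to_visit, network = deque(seeds), set(seeds)
    while to_visit:
        current = to_visit.popleft()
        for neighbor in affiliations.get(current, ()):
            if neighbor not in network:
                network.add(neighbor)
                to_visit.append(neighbor)
    return network

# One confirmed account is enough to surface the full four-account cluster:
# acct_a, acct_b, acct_c and acct_d.
print(network_to_disrupt({"acct_a"}))
```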