Corrections and adjustments

UPDATED 2 OCT 2024
Below, we detail specific adjustments identified through our information quality practices. We will update this page in accordance with our measurement processes.

8/2024: Resolved bug in measuring fake account prevalence on Facebook
We discovered and resolved a bug in the query used to calculate the prevalence of fake accounts on Facebook in Q1 2024. Because of this bug, we slightly overestimated prevalence in Q1 2024.

5/2024: Resolved bug in sampling methodology for prevalence on Facebook and Instagram
Prevalence of violating content is estimated using samples of content views from across Facebook or Instagram. We fixed a bug that caused a small amount of irrelevant data to be sampled; the fix did not result in any significant change to prevalence.
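For illustration only, the sketch below shows how a prevalence estimate could be computed from a labelled sample of content views. The names and structure are hypothetical simplifications, not our production measurement pipeline.

```python
# Illustrative sketch: estimate prevalence of violating content from a
# random sample of content views. Field and function names are hypothetical.
from dataclasses import dataclass

@dataclass
class SampledView:
    content_id: str
    is_violating: bool  # label assigned to the sampled view during review

def estimate_prevalence(sampled_views: list[SampledView]) -> float:
    """Share of sampled content views that were labelled as violating."""
    if not sampled_views:
        return 0.0
    violating = sum(1 for view in sampled_views if view.is_violating)
    return violating / len(sampled_views)

# Toy example: 6 violating views in a sample of 10,000 views
# gives an estimated prevalence of 0.06%.
sample = [SampledView(f"c{i}", is_violating=(i < 6)) for i in range(10_000)]
print(f"Estimated prevalence: {estimate_prevalence(sample):.2%}")
```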

11/2023: Updated calculations for proactive rate on Facebook and Instagram; Phishing is no longer part of spam
The proactive rate increased for multiple violation types because we updated our methodology. Prior to this change, when a post was reported by users, we also considered the comments on the post as user-reported. As of Q3 2023, we changed the methodology to consider comments as user-reported only if they were reported directly by users.
In addition, metrics for phishing will no longer be counted as part of spam in Community Standards enforcement reports as of Q3 2023 to align with a previous change in Meta's policy.
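For illustration only, the sketch below contrasts the old and new comment-attribution rules described above (it does not cover the phishing change). The field and function names are hypothetical simplifications.

```python
# Illustrative sketch: how a comment on a reported post is attributed
# before and after Q3 2023. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Comment:
    directly_reported: bool      # a user reported this comment itself
    parent_post_reported: bool   # users reported the post it sits under

def is_user_reported_old(comment: Comment) -> bool:
    # Before Q3 2023: comments on a user-reported post were also
    # counted as user-reported.
    return comment.directly_reported or comment.parent_post_reported

def is_user_reported_new(comment: Comment) -> bool:
    # From Q3 2023: a comment counts as user-reported only if it was
    # reported directly by users.
    return comment.directly_reported

comment = Comment(directly_reported=False, parent_post_reported=True)
print(is_user_reported_old(comment), is_user_reported_new(comment))  # True False
```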

8/2023: Updated methodology for fake accounts on Facebook and Instagram
Per the European Union's Digital Services Act (DSA), appeals need to be accessible to users for six months after an enforcement action is taken, thus increasing the appeal window. This means that to meaningfully represent the user experience in our fake account metrics, we have updated our accounting of fake accounts to align with this new time period. The increase that we observed between Q1 and Q2 in fake account removals is due to this accounting change.

2/2023: Updated methodology for proactive rate on Facebook and Instagram
As part of our work to constantly refine and improve the metrics that we share in this report, we updated our proactive rate methodology starting in Q4 2022 to count enforcement actions as "proactive" only if we find and action the violating content before users report it to us. The old methodology counted actions as "proactive" if proactive detection happened first in scenarios where both detection causes were present (i.e. the content was both proactively detected and reported to us by users). The new methodology takes the existence of user reports into account and now counts these instances as "reactive" instead of "proactive". While this change does not materially change the metrics, it contributed to minor quarter-over-quarter differences in the proactive rate metric. The table in this document provides a comparison of the metrics as measured with both the old and new methodology for the Q3/Q4 2022 reporting periods.
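For illustration only, the sketch below shows one way the old and new classification rules could be expressed. The data structure and function names are hypothetical simplifications, not our production enforcement or measurement systems.

```python
# Illustrative sketch: classify an enforcement action as proactive or
# reactive under the old and new methodologies. Names are hypothetical.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class EnforcementAction:
    proactively_detected_at: Optional[datetime]  # when our systems flagged the content
    first_user_report_at: Optional[datetime]     # when a user first reported it, if ever

def is_proactive_old(action: EnforcementAction) -> bool:
    # Old methodology: proactive if proactive detection happened first,
    # even when a user report was also present.
    if action.proactively_detected_at is None:
        return False
    if action.first_user_report_at is None:
        return True
    return action.proactively_detected_at < action.first_user_report_at

def is_proactive_new(action: EnforcementAction) -> bool:
    # New methodology (from Q4 2022), as described above: proactive only if
    # the content was found and actioned before any user report existed.
    return (action.proactively_detected_at is not None
            and action.first_user_report_at is None)

def proactive_rate(actions: list[EnforcementAction], new: bool = True) -> float:
    classify = is_proactive_new if new else is_proactive_old
    return sum(classify(a) for a in actions) / len(actions) if actions else 0.0
```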

8/2022: Updated methodology for appealed content on Facebook and Instagram
Starting in Q1 2020, due to a temporary reduction in our review capacity as a result of COVID-19, we could not always offer people the option to appeal, but we still gave them the option to tell us if they disagreed with our decision. As reflected in our data between Q1 2020 and Q1 2022, we did not count these instances in our appeals metric, because they provide valuable user feedback but do not qualify as appeals without the opportunity for review. Over the last year, we've been improving and developing these appeal experiences, and as our operations have stabilised, we now review many of these instances.
As part of our work to constantly refine and improve the metrics that we share in this report, we updated our appeals methodology starting in Q2 2022 to account for all instances where content was submitted for additional review, including when people told us that they disagreed with our decision. We are still excluding instances where content is not submitted for additional review, even if people told us that they disagreed with our decision, such as in many cases of spam.
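For illustration only, the sketch below shows which instances would count towards the appeals metric under the methodology in place from Q2 2022. The field and function names are hypothetical simplifications.

```python
# Illustrative sketch: an instance counts towards the appeals metric only if
# the content was submitted for additional review. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class DecisionFeedback:
    user_disagreed: bool        # the person told us they disagreed with our decision
    submitted_for_review: bool  # the content was submitted for additional review

def counts_as_appeal(feedback: DecisionFeedback) -> bool:
    # Counted: any instance submitted for additional review, whether it came
    # in as a formal appeal or as disagreement feedback that we later reviewed.
    # Not counted: disagreement feedback with no additional review
    # (for example, many cases of spam).
    return feedback.submitted_for_review

instances = [
    DecisionFeedback(user_disagreed=True, submitted_for_review=True),   # counted
    DecisionFeedback(user_disagreed=True, submitted_for_review=False),  # not counted
    DecisionFeedback(user_disagreed=False, submitted_for_review=True),  # counted
]
print(sum(counts_as_appeal(f) for f in instances))  # 2
```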

2/2022: Content actioned, content appealed, proactive rate and content restored for terrorism on Facebook and Instagram
In Q4, we identified and reclassified actions that we'd taken on terrorist content on Facebook and Instagram. This affected the numbers that we'd previously shared for content actioned, proactive rate, appealed content and restored content for Q3 2021, and we've adjusted the numbers accordingly.

11/2021: Content actioned for suicide and self-injury on Instagram, and restored content for child nudity and sexual exploitation on Facebook
In Q2 2020, some of the content that we actioned against our policy for violent and graphic content was later found to be in violation of our specific policy for suicide and self-injury. We reclassified this content accordingly, which affected numbers that we previously shared for content actioned on Instagram in Q2 2020. Additionally, we made slight adjustments to our restored content numbers for child nudity and sexual exploitation on Instagram in Q3 2020, due to previously miscategorised entities. We will continue to update historical numbers as we update our policies and continue to improve our systems and accounting.

8/2021: Content actioned for spam and for suicide and self-injury; proactive rate for bullying and harassment and for suicide and self-injury; content restored for adult nudity and sexual activity
This quarter, we made refinements to our metrics for spam, suicide and self-injury, which led to minimal changes from previously reported numbers. We also made methodology adjustments that led to some small shifts in proactive rates for bullying and harassment, and suicide and self-injury. Finally, we reclassified some content actioned for spam under adult nudity and sexual activity, which affected metrics for content that we restored.

5/2021: Content actioned for suicide and self-injury on Facebook
In 2020, some of the content that we actioned against our policy for violent and graphic content was later found to be violating our policy for suicide and self-injury. We reclassified this content accordingly, which affected numbers that we previously shared for content actioned on Facebook in 2020.

2/2021: Restored content for adult nudity and sexual activity on Facebook; prevalence and content actioned for violent and graphic content on Facebook; content actioned for suicide and self-injury on Facebook; and content restored on Instagram
In Q4, we introduced clarifications on certain classes of images for our policy on adult nudity and sexual activity on Facebook. We restored some previously actioned content based on the latest policy, which affected the numbers that we previously shared for restored content on Facebook in Q3.
For violent and graphic content on Facebook, prevalence was previously reported in the Community Standards enforcement report for November 2020 as between 0.05% and 0.06% of views. In the February 2021 report, we updated prevalence for violent and graphic content to about 0.07% of views in Q3.
In Q2, some of the content that we actioned against our policy for violent and graphic content was later found to be in violation of our specific policy for suicide and self-injury, after we regained some manual review capacity in early September. We reclassified this content accordingly, which affected numbers that we previously shared for content actioned on Facebook in Q3.
Additionally, we adjusted our restored content numbers for Q1 and Q2 on Instagram to account for previously unreported comments we restored. This has resulted in minimal changes across most policy areas on Instagram, and we adjusted previously shared data accordingly. We will continue to update historical numbers as we update our policies and continue to improve our systems and accounting.

11/2020: Updated adjustments to content actioned, proactive rate, content appealed by users and content restored on Facebook and Instagram
In Q3, we made an update that recategorised previously actioned cruel and insensitive content so that it is no longer considered hate speech. This update affected the numbers that we had previously shared for content actioned, proactive rate, appealed content and restored content for Q4 2019, Q1 2020 and Q2 2020, and we've adjusted the numbers accordingly. We also updated our policy to remove more types of graphic suicide and self-injury content, and recategorised some violent and graphic content that we had previously marked as disturbing in Q2.
Additionally, we adjusted our restored content numbers for Q1 and Q2 on Instagram to account for previously unreported comments that we restored, in addition to an issue with our data source for the August 2020 report. This has resulted in minimal changes across most policy areas on Facebook and Instagram, and we are adjusting previously shared data accordingly. We will continue to update historical numbers as we update our policies, and continue to improve our systems and accounting.

8/2020: Content actioned for violent and graphic content on Instagram
In Q1 2020, we identified and corrected an issue with the accounting of actions taken by our proactive detection technology for violent and graphic content on Instagram, and we were able to update our full reporting systems in Q2. For violent and graphic content on Instagram, content actioned in Q1 2020 was previously reported in the May 2020 report as 2.3 million pieces of content, and has been updated to 2.8 million in the August 2020 report.

5/2020: Updated adjustments to content actioned, proactive content actioned, content appealed by users and content restored on Facebook and Instagram
At the time of our last update in November 2019, we made a number of improvements to our systems and accounting. These improvements allowed us to estimate the largest impacts and adjust our metrics at that time. Following the November 2019 report, we further refined these improvements.
Because of this work, in the fifth edition of the Community Standards Enforcement Report for May 2020, we are adjusting previously shared data. Most categories for 2019 are only minimally affected, and any adjustments to data amount to no more than a 3% change in content actioned. We will continue to update historical numbers as we reclassify previously removed content for different violations based on existing and changing protocols, and continue to improve our systems and accounting.

11/2019: Content actioned, proactive rate for spam on Facebook
At Meta, different systems take action on different types of content to improve efficiency and reliability for the billions of actions happening every quarter. One of these systems, which acts mainly on content with links, did not log our actions for certain content if no one tried to view it within seven days of it being created, even though this content was removed from the platform.
While we know that this undercounts the true number of actions taken on content containing external links, mainly affecting our spam metrics for content with malicious links, we are not currently able to retrospectively size this undercounting. As such, the numbers currently reflected in the Community Standards Enforcement Report represent a minimum estimate of both content actioned and proactive rate for the affected period. Updates about this issue will be posted here when available.

11/2019: Content actioned, proactive content actioned, content appealed by users and content restored on Facebook
When we shared the second edition of the Community Standards Enforcement Report in November 2018, we updated our method for counting how we take actions on content. We did this so that the metrics better reflected what happens on Facebook when we take action on content for violating our Community Standards. For example, if we find that a post containing one photo violates our policies, we want our metric to reflect that we took action on one piece of content – not two separate actions for removing the photo and the post.
However, in July 2019, we found that the systems logging and counting these actions did not correctly log the actions taken. This was largely due to the difficulty of counting multiple actions that take place within a few milliseconds without missing, or overstating, any of the individual actions taken. Because our logging system for measurement purposes is distinct from our operations to enforce our policies, the issue with our accounting did not affect how we enforced our policies or how we informed people about those actions; it only affected how we counted the actions that we took. As soon as we discovered this issue, we worked to fix it, identified any incorrect metrics previously shared and established a more robust set of checks in our processes to ensure the accuracy of our accounting. In total, we found that this issue affected the numbers that we previously shared for content actioned, proactive rate, appealed content and restored content for Q3 2018, Q4 2018 and Q1 2019.
The fourth edition of the Community Standards Enforcement Report includes the correct metrics for the affected quarters, and the table linked above provides the previously reported metrics and their corrections.
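For illustration only, the sketch below shows the content-level counting described above, where removing a post together with its embedded photo is recorded as one piece of content actioned rather than two. The names are hypothetical simplifications.

```python
# Illustrative sketch: count content actioned at the level of a piece of
# content, not per removed element. Names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Removal:
    content_id: str              # the violating post
    attachment_ids: tuple = ()   # photos or videos removed as part of the same action

def count_content_actioned(removals: list[Removal]) -> int:
    # Each removed post counts once, regardless of how many attachments
    # were removed along with it.
    return len({removal.content_id for removal in removals})

removals = [Removal("post_1", ("photo_1",))]
print(count_content_actioned(removals))  # 1, not 2
```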