We are continually assessing our metrics to learn how we can improve the way we measure enforcement in our Community Standards Enforcement Report.
We also continue to review our policies and processes and the methodologies behind them. Changes to any of these inherently change the metric calculations themselves, so shifts in the numbers may reflect methodology or process changes in addition to trends indicating that we’re getting better or worse at mitigating violations.
As our measurement processes mature, we regularly review and validate our metrics. We have also established a set of standards that govern how we identify, correct and publicly report any adjustments to previously released data.
We identify potential issues with our data using a range of regular quality checks on our datasets, measurement tools and logging systems. When a potential issue is identified, relevant teams at Meta undergo a series of steps to investigate, mitigate and identify long-term fixes for the issue.
Once the issue has been addressed, Meta will update data in the Community Standards Enforcement Report. Where such corrections are meaningful, Meta will describe the issue, metrics affected and the time periods impacted.
Corrections and adjustments

We are committed to transparently sharing our metrics as well as the processes we use to calculate and improve them. To streamline and better govern the release of adjustments and corrections to our methodologies and metrics, we developed an information quality procedure to identify, rectify, and publicly report any adjustments we make to previously released information. This is a common practice among large statistical agencies and in federal agencies' public reports, and it was developed in line with data reporting best practices in both the public and private sectors. The reviews and procedures we developed will be critical in maintaining the accuracy and integrity of our reporting going forward.
We constantly evaluate and validate our metrics and make sure the information we are sharing is accurate and our methodologies to generate this data are sound. As part of this work, when we update our methodologies or adjust metrics, we’ll share those changes here.
We’re constantly refining our processes and methodologies in order to provide the most meaningful and accurate numbers on how we’re enforcing our policies. Over the summer of 2019, we implemented information quality processes that create further checks and balances in order to make sure we share valid and consistent metrics.
We identify different dimensions of each metric and develop a risk-based prioritization of the segments that could significantly affect the metrics. For the segments in this prioritized list, we implement multiple checks to make sure the information for those segments is captured accurately.
For example, we break our content actioned metrics into multiple dimensions to review: whether our automated systems or human reviewers took the action, what led us to take action, and what type of content (photos, text, video) we took action on.
With these different dimensions, we then assess how much bias would be introduced into our measurement if that dimension was not correctly represented in the metric (for example, if we didn’t include video content in our metrics). These assessments allow us to identify dimensions that might impact the metric (such as whether humans took action).
Then, we figure out how much the metric could be impacted if that dimension was wrong (say we didn’t log any of the content humans took action on). We then prioritize the biggest risk scenarios to do additional cross-checks. For these high-risk combinations, we develop additional tracking and cross-check systems to ensure these metrics are estimated correctly.
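To make this concrete, the following is a minimal sketch of how segments of a metric, broken out along several dimensions, could be scored and prioritized for additional cross-checks. It is illustrative only: the segment names, shares, and error likelihoods are hypothetical assumptions, not Meta's internal tooling or data.

```python
# Minimal sketch (hypothetical data and names): prioritizing which metric
# segments need extra cross-checks, based on how much of the "content
# actioned" metric each segment contributes and how likely its logging
# is to be wrong.

from dataclasses import dataclass


@dataclass
class Segment:
    """One slice of the content actioned metric, e.g. 'video actioned by automation'."""
    name: str
    share_of_metric: float   # fraction of total actions this segment represents
    error_likelihood: float  # assumed 0-1 estimate that its logging is incorrect


def prioritize(segments: list[Segment], top_n: int = 3) -> list[Segment]:
    """Rank segments by how much the metric could be off if their logging were wrong."""
    return sorted(
        segments,
        key=lambda s: s.share_of_metric * s.error_likelihood,
        reverse=True,
    )[:top_n]


if __name__ == "__main__":
    segments = [
        Segment("automation / video", share_of_metric=0.40, error_likelihood=0.10),
        Segment("human review / photo", share_of_metric=0.15, error_likelihood=0.30),
        Segment("automation / text", share_of_metric=0.35, error_likelihood=0.05),
        Segment("human review / video", share_of_metric=0.10, error_likelihood=0.40),
    ]
    for s in prioritize(segments):
        print(f"{s.name}: risk score {s.share_of_metric * s.error_likelihood:.3f}")
```

The highest-scoring combinations in a ranking like this would be the candidates for the additional tracking and cross-check systems described above.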
We have also implemented consistency checks to add more validation for our metrics. These include the following:
We periodically measure our actions with a separate, independent system that measures content actions, and we regularly compare these independent metrics against our reported figures. These checks are intended to identify large errors in our accounting.
We conduct a range of random spot checks to verify the accuracy of our measurement systems in near real-time. This includes checking various outcomes that happen later in our system to double check upstream outcomes. For example, we confirm that content that is appealed is also logged as content that has been actioned since content must be actioned in order to be appealed. Many of these checks are intended to identify large errors such as content that is appealed but was never removed.
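The appeal example above can be expressed as a simple automated check. The sketch below is a hypothetical illustration, assuming content identifiers are available in separate appeal and action logs; it is not Meta's actual measurement system.

```python
# Minimal sketch (hypothetical log format): a consistency check verifying that
# every piece of content logged as appealed also appears in the actioned log,
# since content must be actioned before it can be appealed.

def check_appeals_consistency(actioned_ids: set[str], appealed_ids: set[str],
                              tolerance: float = 0.001) -> bool:
    """Flag large discrepancies between appeal logs and action logs."""
    orphaned = appealed_ids - actioned_ids  # appealed but never logged as actioned
    rate = len(orphaned) / max(len(appealed_ids), 1)
    if rate > tolerance:
        print(f"ALERT: {len(orphaned)} appealed items ({rate:.2%}) missing from action logs")
        return False
    return True


# Example: one orphaned record out of three appeals exceeds the tolerance and raises an alert.
check_appeals_consistency(actioned_ids={"a1", "a2"}, appealed_ids={"a1", "a2", "a3"})
```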
As with all aspects of our standards enforcement reporting, we will continue to evolve and improve our validity and consistency review processes over time.
We also established procedures to identify and correct information previously shared in our enforcement report, which we will regularly review and update. When we identify potential issues in metrics shared in the Community Standards Enforcement Report, we follow these steps (a simplified sketch of this workflow appears after the list):
Reporting. If a potential issue is discovered, our teams immediately file an incident report that alerts the relevant teams to begin investigating the issue.
Investigating and mitigating. The relevant teams review the potential issue, making immediate changes to prevent further consistency issues where necessary and developing solutions to avoid the issue in the future.
Sizing the issue. The relevant teams assess the scope and impact of the issue, including which metrics are affected and over which time periods, to determine whether previously reported data needs to be adjusted.
Post-mortem incident review. Once the issue is mitigated, we conduct a detailed internal review to identify the root causes and full impact of the issue. This allows us to identify broader risks to the validity of our measurement so we can prevent or minimize them.
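The sketch below illustrates how an incident of this kind could be tracked through those four steps, recording the fields a later public correction would need (the affected metrics and time periods). The structure, field names, and example values are assumptions for illustration, not Meta's internal systems.

```python
# Minimal sketch (illustrative only): tracking a metrics incident through the
# reporting, investigation, sizing, and post-mortem steps described above.

from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    REPORTED = "reported"
    INVESTIGATING = "investigating and mitigating"
    SIZING = "sizing the issue"
    POST_MORTEM = "post-mortem review"
    RESOLVED = "resolved"


@dataclass
class MetricIncident:
    description: str
    stage: Stage = Stage.REPORTED
    metrics_affected: list[str] = field(default_factory=list)
    quarters_affected: list[str] = field(default_factory=list)
    root_cause: str = ""

    def advance(self, next_stage: Stage) -> None:
        """Move the incident to the next step of the workflow."""
        self.stage = next_stage


# Example: a hypothetical logging gap affecting one metric for two quarters.
incident = MetricIncident(description="Appealed content missing from action logs")
incident.advance(Stage.INVESTIGATING)
incident.advance(Stage.SIZING)
incident.metrics_affected = ["content actioned"]
incident.quarters_affected = ["Q3 2019", "Q4 2019"]
incident.advance(Stage.POST_MORTEM)
incident.root_cause = "logging pipeline dropped events from one review tool"
incident.advance(Stage.RESOLVED)
print(incident.stage.value, incident.metrics_affected, incident.quarters_affected)
```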
Once we identify an issue and adjust the affected metric, we will publicly report the correction by updating this post at the time of the subsequent release of the Community Standards Enforcement Report. In such an update, we will describe the issue, the metrics impacted, and the time periods impacted. When feasible, the data for the previously affected quarters in the Community Standards Enforcement Report itself will include the adjusted metrics, to ensure comparisons over time are meaningful.
In addition to the work we do internally to evaluate and improve our metrics, we also look for external input on our methodologies and expand the metrics we report on to give a more robust picture of how we’re doing at enforcing our policies.
To ensure our methods are transparent and based on sound principles, we seek out analysis and input from subject matter experts on areas such as whether the metrics we provide are informative.
In order to ensure our approach to measuring content enforcement was meaningful and accurate, we worked with the Data Transparency Advisory Group (DTAG), an external group of international academic experts in measurement, statistics, criminology, and governance. In May 2019, they provided their independent, public assessment of whether the metrics we share in the Community Standards Enforcement Report provide accurate and meaningful measures of how we enforce our policies, as well as the challenges we face in this work, and what we do to address them. Overall, they found our metrics to be reasonable ways of measuring violations and in line with best practices. They also provided a number of recommendations for how we can continue to be more transparent about our work, which we discussed in detail and continue to explore. In addition to this, Meta has committed to an independent audit of the metrics shared in this report in 2021.