Getting better at measurement


JAN 19, 2022

We are continually assessing our metrics to learn how we can improve the ways we measure in our Community Standards Enforcement Report.

We also continue to review our policies and processes and the methodologies behind them. Changes to any of these inherently change how the metrics are calculated, so shifts in a metric may reflect methodology or process changes in addition to genuine trends in how well we are mitigating violations.

Information quality, updates and corrections

As our measurement processes mature, we regularly review and validate our metrics. We have also established a set of standards that govern how we identify, correct and publicly report any adjustments to previously released data.

We identify potential issues with our data using a range of regular quality checks on our datasets, measurement tools and logging systems. When a potential issue is identified, relevant teams at Meta follow a series of steps to investigate, mitigate and identify long-term fixes for the issue.

Once the issue has been addressed, Meta will update data in the Community Standards Enforcement Report. Where such corrections are meaningful, Meta will describe the issue, metrics affected and the time periods impacted.

Corrections and adjustments

Why we developed information quality processes

We are committed to transparently sharing our metrics as well as the processes we use to calculate and improve them. To streamline and better govern the release of adjustments and corrections to our methodologies and metrics, we developed an information quality procedure to identify, rectify, and publicly report any adjustments we make to previously released information. This is common practice among large statistical agencies and in federal agency public reports, and was developed in line with data reporting best practices in both the public and private sectors. The reviews and procedures we developed will be critical to maintaining the accuracy and integrity of our reporting going forward.

We constantly evaluate and validate our metrics and make sure the information we are sharing is accurate and our methodologies to generate this data are sound. As part of this work, when we update our methodologies or adjust metrics, we’ll share those changes here.

How we evaluate and improve our metrics

We’re constantly refining our processes and methodologies in order to provide the most meaningful and accurate numbers on how we’re enforcing our policies. Over the summer of 2019, we implemented information quality processes that create further checks and balances in order to make sure we share valid and consistent metrics.

Identifying priority scenarios to confirm validity

We identify different dimensions of each metric and develop a risk-prioritization of segments that may significantly affect the metrics. For the segments in this prioritized list, we implement multiple checks to make sure these segments are capturing information accurately.

For example, we break our content actioned metric out into multiple dimensions for review: whether our automated systems or human reviewers took the action, what led us to take action, and what type of content (photos, text, video) we took action on.

With these different dimensions, we then assess how much bias would be introduced into our measurement if that dimension was not correctly represented in the metric (for example, if we didn’t include video content in our metrics). These assessments allow us to identify dimensions that might impact the metric (such as whether humans took action).

Then, we figure out how much the metric could be impacted if that dimension was wrong (say we didn’t log any of the content humans took action on). We then prioritize the biggest risk scenarios to do additional cross-checks. For these high-risk combinations, we develop additional tracking and cross-check systems to ensure these metrics are estimated correctly.
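The prioritization described above can be sketched as a simple expected-impact calculation. This is a minimal illustration only: the segment names, shares, and error probabilities below are hypothetical assumptions, not values from Meta's actual systems.

```python
# Hypothetical risk-prioritization sketch: score each metric segment by
# (share of the metric it represents) x (estimated chance its logging is
# wrong), then rank segments so the riskiest get additional cross-checks.
# All names and numbers here are illustrative assumptions.

dimensions = [
    # (segment, share_of_metric, estimated_error_probability)
    ("automated_action_video", 0.30, 0.05),
    ("human_action_photo",     0.15, 0.10),
    ("automated_action_text",  0.40, 0.01),
    ("human_action_video",     0.15, 0.20),
]

def risk_score(share, error_prob):
    """Expected impact on the metric if this segment is mislogged."""
    return share * error_prob

# Highest-risk segments first; these would get extra tracking systems.
ranked = sorted(dimensions, key=lambda d: risk_score(d[1], d[2]), reverse=True)

for segment, share, err in ranked:
    print(f"{segment}: risk={risk_score(share, err):.3f}")
```

In this toy example, a small segment with a high chance of logging errors can outrank a much larger segment that is reliably logged, which matches the idea of prioritizing the biggest risk scenarios rather than simply the biggest segments.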

Validity and consistency checks

We have also implemented consistency checks to add more validation for our metrics. These include the following:

Consistency checks

We periodically measure our actions with a separate, independent system that measures content actions. On a regular basis, we compare these independent measurements; the comparisons are intended to identify large errors in our accounting.
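A cross-system check of this kind can be sketched as comparing counts from two independent pipelines against a tolerance. The function, tolerance, and counts below are illustrative assumptions, not Meta's actual thresholds.

```python
# Sketch of a cross-system consistency check: compare action counts from
# two independently maintained logging pipelines and flag large
# discrepancies. The 5% tolerance is an illustrative assumption.

def consistent(primary_count, independent_count, rel_tolerance=0.05):
    """Return True if the two systems agree within rel_tolerance."""
    if independent_count == 0:
        return primary_count == 0
    return abs(primary_count - independent_count) / independent_count <= rel_tolerance

# A small gap between systems passes; a large gap flags a potential
# logging error for investigation.
print(consistent(10_400, 10_000))
print(consistent(15_000, 10_000))
```

A check like this will not catch small, correlated errors in both systems, but it is cheap to run regularly and catches the large accounting errors the text describes.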

Auditing and debugging our measurement tools

We conduct a range of random spot checks to verify the accuracy of our measurement systems in near real-time. This includes checking outcomes that occur later in our systems to validate upstream outcomes. For example, we confirm that content that is appealed is also logged as content that has been actioned, since content must be actioned in order to be appealed. Many of these checks are intended to identify large errors, such as content that is appealed but was never removed.
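The appeal example above is an invariant check: the set of appealed content must be a subset of the set of actioned content. A minimal sketch, using made-up content IDs:

```python
# Sketch of an invariant spot check: every appealed content ID must also
# appear in the actioned log, since content can only be appealed after it
# has been actioned. The IDs here are illustrative.

actioned = {"c1", "c2", "c3", "c4"}
appealed = {"c2", "c5"}  # "c5" violates the invariant

# Any appealed ID missing from the actioned log indicates a logging error
# (e.g., content appealed but never recorded as actioned).
violations = appealed - actioned
if violations:
    print(f"Invariant violated for: {sorted(violations)}")
```

In a production setting this comparison would run over logged event streams rather than in-memory sets, but the downstream-validates-upstream logic is the same.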

As with all aspects of our standards enforcement reporting, we will continue to evolve and improve our validity and consistency review processes over time.

Internal review and correction procedures

We also established procedures to identify and correct information previously shared in our enforcement report, which we will regularly review and update. When we identify potential issues in metrics shared in the Community Standards Enforcement Report, we follow these steps:

  • Reporting. If a potential issue is discovered, our teams immediately file an incident report that alerts the relevant teams to begin investigating the issue.

  • Investigating and mitigating. The relevant teams review the potential issue, making immediate changes to prevent further consistency issues where necessary and developing solutions to avoid the issue in the future.

  • Sizing the issue. The relevant teams determine the scope of the issue, including which metrics are affected, by how much, and over which time periods.

  • Post-mortem incident review. Once the issue is mitigated, we conduct a detailed internal review to identify the root causes and full impact of the issue. This allows us to identify broader risks to the validity of our measurement so we can prevent or minimize them.

Reporting adjustments

Once we identify an issue and adjust the affected metric, we will publicly report the correction by updating this post at the time of the subsequent release of the Community Standards Enforcement Report. In such an update, we will describe the issue, the metrics affected, and the time periods impacted. The data for the previously affected quarters in the Community Standards Enforcement Report itself will include any adjusted metrics when feasible, to ensure comparisons over time are meaningful.

Seeking input and expanding the metrics we report

In addition to the work we do internally to evaluate and improve our metrics, we also look for external input on our methodologies and expand the metrics we report on to give a more robust picture of how we’re doing at enforcing our policies.

Methodology assessment and input

To ensure our methods are transparent and based on sound principles, we seek analysis and input from subject matter experts on questions such as whether the metrics we provide are informative.

In order to ensure our approach to measuring content enforcement was meaningful and accurate, we worked with the Data Transparency Advisory Group (DTAG), an external group of international academic experts in measurement, statistics, criminology, and governance. In May 2019, they provided their independent, public assessment of whether the metrics we share in the Community Standards Enforcement Report provide accurate and meaningful measures of how we enforce our policies, as well as the challenges we face in this work, and what we do to address them. Overall, they found our metrics to be reasonable ways of measuring violations and in line with best practices. They also provided a number of recommendations for how we can continue to be more transparent about our work, which we discussed in detail and continue to explore. In addition to this, Meta has committed to an independent audit of the metrics shared in this report in 2021.