JUN 12, 2023
2022-004-FB-UA
Today, the Oversight Board selected a case appealed by a Facebook user regarding a comment containing a cartoon depicting violence by the police in Colombia. The comment includes a cartoon resembling the official crest of the National Police of Colombia, with figures wearing police uniforms committing violent acts against another figure.
Upon initial review, Meta took down this content for violating our policy on Dangerous Individuals and Organizations. Upon second review, after the user appealed the initial decision, we upheld the removal, this time under our policy on Violence and Incitement, as laid out in the Facebook Community Standards. However, upon further review, we determined that we had removed this content in error and reinstated it.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today on this case. Meta previously reinstated this content because it did not violate our policies and had been removed in error, so no further action will be taken on it.
After conducting a review of the recommendations provided by the board in addition to its decision, we will update this page.
To improve Meta’s ability to remove non-violating content from banks programmed to identify or automatically remove violating content, Meta should ensure that content with high rates of appeal and high rates of successful appeal is re-assessed for possible removal from its Media Matching Service banks. The Board will consider this recommendation implemented when Meta: (i) discloses to the Board the rates of appeal and successful appeal that trigger a review of Media Matching Service-banked content, and (ii) confirms publicly that these reassessment mechanisms are active for all its banks that target violating content.
Our commitment: We will implement this recommendation using a gradual approach based on the complexity of the governance, enforcement and maturity levels of individual banks. Some banking teams are already implementing this recommendation, while others are newer and still in the process of training their auditing systems. Across all Media Matching Service (MMS) banks, we plan to implement product and governance innovations to more effectively and efficiently remove incorrectly banked content.
Considerations: MMS banks are collections of “hashed” content, used primarily to detect and take scaled action on media (e.g. images, videos) across Facebook and Instagram that violates Meta’s Community Standards. Content identified as violating is stored in banks so that additional occurrences can be detected across Facebook and Instagram for scaled enforcement. In other cases, non-violating content is banked to prevent it from being removed and to increase review capacity. Banks can be configured either to detect and take action only on newly uploaded content or to scan existing content on the platform.
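To make the mechanics concrete, here is a minimal sketch of how a hash-based bank lookup could work, assuming a generic perceptual-hash library (`imagehash`) and an illustrative distance threshold; the names, threshold, and structure are assumptions for illustration, not Meta’s actual implementation.

```python
# Hypothetical sketch of a Media Matching Service-style bank lookup.
# Assumes a perceptual hash whose Hamming distance approximates visual
# similarity; all names and thresholds here are illustrative.
from dataclasses import dataclass

import imagehash  # third-party: pip install ImageHash
from PIL import Image

MATCH_DISTANCE = 6  # max Hamming distance counted as a match (assumed)

@dataclass
class Bank:
    name: str
    action: str  # e.g. "remove" for enforcement banks, "ignore" for non-violating content
    hashes: set  # previously banked perceptual hashes

def match_banks(image_path: str, banks: list) -> str:
    """Return the action of the first bank whose stored hash is near the upload's hash."""
    upload_hash = imagehash.phash(Image.open(image_path))
    for bank in banks:
        if any(upload_hash - banked <= MATCH_DISTANCE for banked in bank.hashes):
            return bank.action
    return "no_match"  # fall through to other classifiers or human review
```

In such a design, “ignore” banks would be ordered before enforcement banks in the `banks` list, mirroring the false-positive-prevention role described above.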
We regularly evaluate the performance of MMS banks in correctly identifying violating content. For example, if bank precision declines, as often measured through user feedback, we conduct analyses to understand the root cause and identify solutions, including technical issue fixes and updates to reviewer training. We also measure the accuracy of the reviewers who determine where content should be banked.
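As a rough illustration of that kind of measurement, bank precision could be estimated from appeal outcomes, treating successful appeals as confirmed false positives; the precision floor and function names below are hypothetical.

```python
# Hypothetical bank-level precision estimate driven by user feedback
# (appeal outcomes); the 0.95 floor is an illustrative assumption.
def estimate_bank_precision(enforcements: int, successful_appeals: int) -> float:
    """Treat each successful appeal as a confirmed false positive."""
    if enforcements == 0:
        return 1.0
    return 1.0 - successful_appeals / enforcements

def needs_root_cause_analysis(enforcements: int, successful_appeals: int,
                              precision_floor: float = 0.95) -> bool:
    """Flag the bank for root-cause analysis when precision drops below the floor."""
    return estimate_bank_precision(enforcements, successful_appeals) < precision_floor
```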
These quality assessments are often conducted at the individual bank level and are not always based on appeal rates. Because no single banking team can implement a uniform solution across all the areas where banking occurs at Meta, implementing this recommendation will require a gradual approach based on the complexity of the governance and maturity levels of individual banks.
Automated systems and alerts run across most banks to detect anomalies or spikes in appeal rates, and spike detection also allows us to automatically limit false positives generated by incorrectly banked content. Teams review and investigate these alerts to make future banking more accurate. When anomaly alerts catch clusters of content that trigger a high number of appeals, they send notifications to our Global Operations teams, which review and ultimately remove any incorrectly banked content from enforcement banks. For content that is non-violating, these teams also move it to “ignore” banks to prevent future false positives.
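A minimal sketch of such spike detection, assuming a simple z-score over a trailing window of daily appeal counts (the detection logic Meta actually runs is not public):

```python
# Illustrative spike detector over daily appeal counts for one bank or
# cluster; a trailing-window z-score stands in for the real anomaly logic.
from statistics import mean, stdev

def appeal_spike(daily_appeals: list, window: int = 14, z_threshold: float = 3.0) -> bool:
    """Alert when the latest day's appeal count is an outlier versus the trailing window."""
    if len(daily_appeals) <= window:
        return False  # not enough history to judge
    history, today = daily_appeals[-window - 1:-1], daily_appeals[-1]
    sigma = stdev(history) or 1.0  # guard against flat history
    return (today - mean(history)) / sigma > z_threshold
```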
Some of our banking teams are working on a new automated tool to detect whether precision issues stem from technical mistakes (e.g. algorithmic false positives) or human error (e.g. a reviewer banking inaccurate content). As a result of this recommendation and our ongoing commitment to banking accuracy, more teams are likely to launch this tool in the near future. Similarly, we expect more banking teams to adopt an “appeals circuit breaker” feature, which pauses a cluster of content that has a small number of successful appeals and a reasonable ratio of successful to unsuccessful appeals, so that the content can be re-reviewed and removed from banks as appropriate. Teams that do not already have these features in place are developing pipelines to flag this type of content for re-review. In line with this recommendation, we are also in the early stages of testing and rolling out a “hygiene sweep” capability based on appeals signals, which will allow us to better identify clear patterns in the root causes of false positives. We will provide further updates on our progress in future Quarterly Updates.
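The circuit-breaker condition, as described, might reduce to a check like the following; the minimum count and ratio are illustrative assumptions, not Meta’s actual thresholds.

```python
# Hypothetical "appeals circuit breaker": pause enforcement on a banked
# cluster once it accumulates a few successful appeals AND a high enough
# ratio of successful to total appeals. Thresholds are assumed.
def should_pause_cluster(successful: int, unsuccessful: int,
                         min_successful: int = 3, min_ratio: float = 0.5) -> bool:
    total = successful + unsuccessful
    if total == 0:
        return False
    return successful >= min_successful and successful / total >= min_ratio
```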
To ensure that inaccurately banked content is quickly removed from Meta’s Media Matching Service banks, Meta should set and adhere to standards that limit the time between when banked content is identified for re-review and when, if deemed non-violating, it is removed from the bank. The Board will consider this recommendation implemented when Meta: (i) sets and discloses to the Board its goal time between when a re-review is triggered and when the non-violating content is restored, and (ii) provides the Board with data demonstrating its progress in meeting this goal over the next year.
Our commitment: While many of our individual MMS banking teams already have strict standards for the re-review and potential removal of flagged content within their banks, we will continue to work towards a more cohesive governance model for MMS banking to ensure that such standards exist for every bank. Because of widely differing use cases, policy types and banking strategies, however, these standards will likely remain specific to each individual bank and will not be universal. We will also aim to share both existing and newly established time-to-review standards with the board.
Considerations: MMS banks are created to align with specific Community Standards policies, such as Dangerous Individuals and Organizations, Hate Speech, and Child Sexual Exploitation. For some policies there may be multiple banks, because each one is specific to a particular section of the policy. If a piece of content is identified as violating one of the policies associated with MMS banks, it is triaged to content reviewers, who review it and may send it to a relevant MMS bank. The makeup of individual banks can vary widely by policy, in part because some policies require teams to bank content reactively in response to specific events, or proactively to prevent harmful content from spreading on the platform during periods we know will be particularly high-risk (e.g. elections).
We typically have a set time period for removing content from banks, which varies based on the internal guidelines and the type of enforcement needed for each policy area. For some of our established banks, Integrity teams monitor appeal rates and, within 48 hours, extract jobs that trigger a high rate of appeals for potential re-review and removal from banks.
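Under an assumed data model and appeal-rate threshold, that 48-hour extraction could look roughly like this sketch (field names and the threshold are hypothetical):

```python
# Sketch of a 48-hour extraction window: pull banked jobs whose appeal
# rate crossed a threshold and that were flagged within the SLA window,
# then queue them for re-review. Data model and thresholds are assumed.
from datetime import datetime, timedelta, timezone

SLA = timedelta(hours=48)
APPEAL_RATE_THRESHOLD = 0.10  # assumed; real thresholds would vary per bank

def extract_for_rereview(jobs: list, now: datetime = None) -> list:
    """Each job is a dict: {"id", "flagged_at" (datetime), "appeals", "enforcements"}."""
    now = now or datetime.now(timezone.utc)
    due = []
    for job in jobs:
        rate = job["appeals"] / max(job["enforcements"], 1)
        if rate >= APPEAL_RATE_THRESHOLD and now - job["flagged_at"] <= SLA:
            due.append(job)
    return due
```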
Based on this recommendation and the others from this case, we will work to create a more cohesive approach to MMS banking governance, allowing for more alignment and consistent process updates across individual banks (to the extent possible, given the varying approaches necessitated by subject matter and by proactive or reactive banking strategies). We will also work to disclose both existing and newly established time-to-review standards to the board. We will provide further updates on our progress in future Quarterly Updates.
To enable the establishment of metrics for improvement, Meta should publish the error rates for content mistakenly included in Media Matching Service banks of violating content, broken down by each content policy, in its transparency reporting. This reporting should include information on how content enters the banks and the company’s efforts to reduce errors in the process. The Board will consider this recommendation implemented when Meta includes this information in its Community Standards Enforcement Report.
Our commitment: While we are committed to sharing more information on our enforcement accuracy as part of previous recommendations, providing this information in the manner prescribed here would not give a holistic and accurate picture of our content moderation systems. We continue to work towards public reporting of new metrics that will provide comprehensive insights into our enforcement systems, including efforts to reduce errors in the process, but we will have no further updates on this recommendation.
Considerations: As part of our Community Standards Enforcement Reporting (CSER) and transparency efforts, we have previously committed to gathering and sharing accuracy and precision metrics around our content moderation systems. We expect this work to shed more light on our media matching systems as well. However, sharing metrics for individual MMS banks without sufficient context may create confusion and would not provide a complete picture of the accuracy of our automated enforcement actions. MMS is one component of a larger enforcement system, and media matching does not always work in isolation. Each moderation component, such as MMS banking, supports our enforcement efforts overall; these systems work together, with interdependencies that aim to increase overall accuracy. For example, an error flagged during step one of a process may be resolved later in step two or three. While we are committed to sharing more information on our enforcement accuracy and efforts to reduce errors as part of previous recommendations, providing this information in the manner prescribed here would not give a holistic and accurate picture that addresses the board’s stated concerns. We will continue to work towards greater public reporting of metrics that will provide comprehensive insights into our enforcement systems, but because of the factors described above, we will have no further updates on this recommendation.