Oversight Board Selects a PAO on Meta’s Cross-Check Policies

UPDATED

OCT 3, 2024

2021-002-FB-PAO

The Oversight Board announced its selection of a policy advisory opinion referral from Meta regarding how we can continue to improve our cross-check system.

Background information about our cross-check system can be found in our Transparency Center.

In this policy advisory opinion referral, Meta is asking for guidance on the following questions:

  • Because of the complexities of content moderation at scale, how should Meta balance its desire to fairly and objectively apply our Community Standards with our need for flexibility, nuance, and context-specific decisions within cross-check?

  • What improvements should Meta make to how we govern our Early Response (“ER”) Secondary Review cross-check system to fairly enforce our Community Standards while minimizing the potential for over-enforcement, retaining business flexibility, and promoting transparency in the review process?

  • What criteria should Meta use to determine who is included in ER Secondary Review, and how should that inclusion be prioritized as one of many factors by our cross-check ranker?

Once the board has finished deliberating, we will consider and publicly respond to its recommendations within 30 days, and will update this post accordingly. Please see the board’s website for the recommendations when it issues them.



Disclaimer

Effective April 25, 2024, the program previously known as Early Response Secondary Review (ERSR) has been renamed Secondary Sensitive Entity Review (SSR). All references to this program from that date forward will use the name SSR.

Case decision

We welcome the Oversight Board’s decision today on this policy advisory opinion referral.

After conducting a review of the recommendations provided by the board, we will update this post.

Recommendations

Recommendation 0 (Implementing Fully)

Meta should provide information about its implementation work in its quarterly reports to the Board. Additionally, Meta should convene a biannual meeting of high-level responsible officials to brief the Board on its work to implement the policy advisory opinion recommendations.

Our commitment: We will provide updates on the progress and implementation of our commitments to the board in our Quarterly Updates. Additionally, we will convene a biannual meeting of key stakeholders to brief the board on the recommendation implementation status for this policy advisory opinion (PAO).

Considerations: We are committed to maintaining transparency with the board and the public as we continue to execute on the commitments we are making in response to recommendations. We also welcome the board's request for quarterly updates and biannual briefings as structural mechanisms of accountability.

Recommendation 1 (Implementing in Part)

Meta should split, either by distinct pathways or prioritization, any list-based over-enforcement prevention program into separate systems: one to protect expression in line with Meta’s human rights responsibilities, and one to protect expression that Meta views as a business priority that falls outside that category.

Measure of Implementation: Meta provides the Board with information detailing how both inclusion and operation are split for these categories of entities. Meta publicizes the details about these systems in its Transparency Center.

Our commitment: Wherever possible, we will work to separate the entities that are included to respect freedom of expression in line with our human rights responsibilities from those that are included purely for business interests, while recognizing that an absolute distinction between the two categories will not always be possible. To do this, our Product and Policy teams, including our Human Rights team, will partner to consider categorization options that better distinguish business interests from public interests, and our Global Risk Operations team will continue to hone and develop a globally representative public interest list. We will also work with our Human Rights team to continue fine-tuning our criteria for inclusion to ensure the public interest lists are as inclusive as possible.

Considerations: Giving people voice is core to both our business interests and our human rights responsibilities. The line between these interests and responsibilities is not always easy to draw. However, we will take steps to clearly distinguish the two where possible (e.g., entities added to Early Response Secondary Review (ERSR) lists to protect important business partnerships should be distinguishable from human rights defenders or other entities added to ERSR because of our commitment to human rights). This will allow us to better understand and evaluate the impact cross-check has on Meta’s business interests versus our human rights and public interest responsibilities. We will update the board on the status of this work in future Quarterly Updates.

Recommendation 2 (Implementing in Part)

Meta should ensure that the review pathway and decision-making structure for content with human rights or public interest implications, including its escalation paths, is devoid of business considerations. Meta should take steps to ensure that the team in charge of this system does not report to public policy or government relations teams or those in charge of relationship management with any affected users.

Measure of Implementation: Meta provides the Board with information detailing the decision-making pathways and teams involved in the content moderation of content with human rights or public interest implications.

Our commitment: We will continue to ensure that our content moderation decisions are made as consistently and accurately as possible, without bias or external pressure. While we acknowledge that business considerations will always be inherent to the overall thrust of our activities, we will continue to refine guardrails and processes to prevent bias and error in all our review pathways and decision making structures.

Considerations: As a public company, we acknowledge that business considerations will always be inherent to our overall business operations, but in the realm of content moderation we have guardrails and processes in place which we will continue to refine to prevent bias and errors. Protecting communities and people on our platforms remains the north star for all of our teams, and the processes we have established are designed to enable content review and policy development that prioritizes these protections. When updating our policies, our Content Policy team relies on input from many organizations across the company (including Operations, Engineering, Legal, Human Rights, Civil Rights, Safety, Comms and Public Policy). Separately, our ERSR teams may escalate difficult decisions for input by these same stakeholders. In these instances, Public Policy is just one of many organizations consulted. And, while the perspective of Global Public Policy is key to understanding local context, no one team’s interests or views determine the outcome of any content moderation decision.

At each level of the review process, we have guardrails in place to consider cultural, political, linguistic, and regional contexts as factors alongside our policies in decision making. These factors better allow for important context while simultaneously preventing bias in our content moderation. We will update the board on our efforts towards refining our guardrails as well as preventing bias and error in future Quarterly Updates.

Recommendation 3 (Implementing Fully)

Meta should improve how its workflow dedicated to meeting Meta’s human rights responsibilities incorporates context and language expertise on enhanced review, specifically at decision-making levels.

Measure of Implementation: Meta provides the Board with information detailing how it has improved upon its current process to include language and context expertise at the moment that context-based decisions and policy exceptions are being considered.

Our commitment: We will move to staff all cross-check decisions with reviewers who speak the language and have regional expertise wherever possible. Separately, where we can guarantee the high quality and consistency of the review, we will explore enabling General Secondary Review (GSR) reviewers (not just ERSR reviewers) to apply context-specific decisions.

Considerations: We see in-language content review (as opposed to content review based on translations), or sub-marketization, as critical to accuracy across our global content moderation operations, as it allows us to integrate linguistic subtleties and regional and cultural context. There may be times, however, when in-language review would significantly delay a content review decision because of where in-language reviewers are based, and tradeoffs may be necessary. There may also be occasions when we do not have language resources available and need to review content agnostically using translation tools.

Historically, we have not scaled context-specific decisions to larger groups of reviewers because quality and consistency of review suffers. We will explore opportunities to expand context-specific decisions to all decision-making levels of cross-check, though we will only do so where we can first demonstrate consistency and quality of review. This will require additional investment in quality review and training resources, and we may find that some context-specific policies are more easily applied by a larger group of reviewers than others. We will update the board on our progress in future Quarterly Updates.

Recommendation 4 (Implementing in Part)

Meta should establish clear and public criteria for list-based mistake-prevention eligibility. These criteria should differentiate between users who merit additional protection from a human rights perspective and those included for business reasons.

Measure of Implementation: Meta releases a report or Transparency Center update detailing the criteria for list-based enhanced review eligibility for the different categories of users the program will enroll.

Our commitment: We have established clear criteria for membership in list-based mistake prevention systems at Meta. In line with recommendation #1, we will refine those criteria to draw a clear distinction between business interests and those that advance human rights concerns. We will also publicly share more specific definitions of the entity categories we include in list-based mistake prevention systems.

Considerations: After the creation of our Early Response Secondary Review (ERSR) list-based mistake prevention system in 2022, we began publishing the categories of entities we enroll in the ERSR system to our Transparency Center. We will continue to do so.

We will also revisit both the categories and specific criteria used to populate our list-based mistake prevention systems to better distinguish between those categories of users who have been added because of a business relationship and those who have been added on account of human rights interests. However, as mentioned in our response to recommendation #1, that line may be blurry as entities can sometimes qualify for multiple categories of membership.

While we will share additional information at a high level about what makes an entity cross-check eligible, we have no current plans to publish all of the more specific criteria we use to determine whether a user qualifies for membership in any of the ERSR categories because doing so could make the system more vulnerable to manipulation and coordinated inauthentic behavior. Maintaining an updated set of publicly available criteria would also be difficult, as we will continue to refine and iterate on the criteria for inclusion in ERSR based on the issues and challenges that surface on our platforms. We will provide further updates on the progress of this work in a future Quarterly Update.

Recommendation 5 (No Further Action)

Meta should establish a process for users to apply for overenforcement mistake-prevention protections should they meet the company’s publicly articulated criteria. State actors should be eligible to be added or apply based on these criteria and terms but given no other preference.

Measure of Implementation: Meta implements a publicly and easily accessible, transparent application system for any list-based overenforcement protection, detailing what purposes the system serves and how the company assesses applications. Meta includes the number of entities that successfully enrolled in mistake prevention through application, their country and category each year in its Transparency Center.

Our commitment: While we believe the risk from bad actors and the poor signal-to-noise ratio of an open application process would make a program like this operationally unfeasible, we agree with the board’s determination that more external inputs to the development of overenforcement lists could make them more equitable and representative. We will take no further action on this recommendation, but instead aim to address its spirit by implementing recommendation #7.

Considerations: A publicly and easily available application system for inclusion on our overenforcement prevention lists could lead to myriad unintended consequences, making it both unfeasible and unsustainable. It would significantly increase the risk from individuals who seek to target, game, and compromise these programs, thereby diverting resources away from a strategic approach to global list development and auditing.

By implementing recommendation #7, we will engage with external parties to help surface candidates for inclusion in our list-based overenforcement programs. Additionally, we will continue to review opportunities to mitigate potential regional bias in the makeup of these lists via deeper engagement with internal and external global market representatives. We will have no further updates on this recommendation.

Recommendation 6 (No Further Action)

Meta should ensure that the process for list-based inclusion, regardless of who initiated the process (the entity itself or Meta), involves, at minimum:

(1) an additional, explicit, commitment by the user to follow Meta’s content policies;

(2) an acknowledgement of the program’s particular rules; and

(3) a system by which changes to the platform’s content policies are proactively shared with them.

Measure of Implementation: Meta provides the Board with the complete user experience for onboarding into any list-based system, including how users commit to content policy compliance and how they are notified of policy changes.

Our commitment: This recommendation would require disclosing which entities are enrolled in the cross-check program. For the security reasons outlined below and in response to recommendation #12, we will take no further action on this recommendation. Engagement on our platforms requires adherence to our Community Standards regardless of inclusion on cross-check lists. We believe the commitment made by people using our platforms should be the same across the board. Relatedly, our systems for educating users about updates to the Community Standards are consistent across all users, whether included in the program or not.

Considerations: When someone sets up an account with Meta, we inform them of our policies and Community Standards. This process obligates each user on our platform to abide by those rules. Cross-check exists to improve adherence to these company-wide policies, but we do not believe that requiring a separate commitment would serve our online communities well. Rather, it could create confusion about who is required to abide by our policies.

Additionally, people who use our platforms are proactively updated when relevant, high-signal policy changes occur. We will continue to share information about our policies and important policy updates with all users, but do not plan to create mechanisms to inform certain groups of users more proactively than others. Updating some users more regularly or through additional means would offer them preferential treatment, which is not the goal of the program. We will have no further updates on this recommendation.

Recommendation 7 (Implementing in Part)

Meta should strengthen its engagement with civil society for the purposes of list creation and nomination. Users and trusted civil society organizations should be able to nominate others that meet the criteria. This is particularly urgent in countries where the company’s limited presence does not allow it to identify candidates for inclusion independently.

Measure of Implementation: Meta provides information to the Board on how the company engages with civil society to determine list-based eligibility. Meta provides data in its Transparency Center, disaggregated by country, on how many entities are added as a result of civil society engagement as opposed to proactive selection by Meta.

Our commitment: In addition to our internal Human and Civil Rights teams, we will work directly with our Trusted Partners and other external civil society organizations to explore ways to inform the criteria we use to identify public interest entities for cross-check lists. As part of this, we will also explore a more formal nomination process from civil society groups and will continue to work with these organizations to inform our policies and enforcement practices.

Considerations: As part of our work to understand the impact of our platform and policies around the world, we regularly engage with global, regional, and local civil society organizations. Furthermore, our Trusted Partners program currently includes over 400 human rights defenders, researchers, humanitarian agencies, and non-governmental organizations representing 113 countries. The organizations in this program offer feedback, questions, concerns, and global expertise.

The organizations that currently participate in this program often have experience in social media monitoring and are committed to promoting safe online communities and equitable content moderation for all people on our platforms. We are constantly looking for ways to strengthen these partnerships and, through continued engagement with Trusted Partners and other civil society organizations that represent both global and local perspectives, we will solicit input to determine the best approach to ensuring a more equitable cross-check program. We will report on the progress of these collaborations in future Quarterly Updates.

Recommendation 8 (Implementing in Part)

Meta should use specialized teams, independent from political or economic influence, including from Meta’s public policy teams, to evaluate entities for list inclusion. To ensure criteria are met, specialized staff, with the benefit of local input, should ensure objective application of inclusion criteria.

Measure of Implementation: Meta will provide the Board with internal documents detailing which teams handle list creation and where they sit in the organization.

Our commitment: Governance responsibilities for the Early Response Secondary Review (ERSR) list currently sit within our Global Operations organization. These responsibilities include assessing eligibility for inclusion in three of the six categories of ERSR lists and conducting list audits. Eligibility for inclusion in the other three ERSR lists is assessed by our Legal and Partnerships teams—based on their specialized knowledge and experience. Global Operations is a separate organization, with a different reporting structure, from Meta’s Public Policy teams. Where possible, we will expand these governance responsibilities to members of Global Operations’ regional teams who have both language and cultural expertise.

Considerations: Meta’s company-wide organizational structure means that all teams ultimately ladder up to the same leadership regardless of individual reporting structures. Our policies and operational structures are developed to ensure frontline risk decisions are made responsibly and with only relevant considerations. Because we are a business, economic considerations will inevitably be a factor in the work that we do, but we have thoughtful guardrails and processes in place to mitigate these concerns.

While the reporting structure of our Global Operations team is separate from Meta's Public Policy team, the Public Policy team is consulted for input in cross-check decisions—as they are in many areas of content moderation across the company. In these instances, our Operations team may leverage expertise from Meta’s Public Policy team, in combination with our regional experts and language-agnostic specialized reviewers, to enhance local and cultural perspectives. The separate reporting structures help to ensure review is independent from political or economic influence. Where possible, we will expand these governance responsibilities to members of Global Operations’ regional teams who have both language and cultural expertise and report on our progress in future Quarterly Updates.

Recommendation 9 (Implementing in Part)

Meta should require that more than one employee be involved in the final process of adding new entities to any lists for false positive mistake-prevention systems. These people should work on different but related teams.

Measure of Implementation: Meta provides the Board with information detailing the process by which new entities are added to lists, including how many employees must approve inclusion and what teams they belong to.

Our commitment: We will continue to improve the integrity of our mistake prevention lists through a more mature governance process – including regular list audits and quality checks for those involved in list development, conducted by several different teams.

Considerations: We interpret the goal of this recommendation to be ensuring that employees adding entities to cross-check lists do so without bias and with a high degree of quality and consistency. All individuals responsible for adding entities to cross-check lists are trained on the same eligibility criteria and operational processes. Adding more employees from our Operations team to the process would come at a high cost, as it would divert resources from other escalations and time-sensitive situations. In the past year, we have enhanced our systems so that decisions are made more quickly and the risk of future backlogs is reduced. This matters because our teams often respond to quickly evolving situations in which they need to make a fast decision to add an entity to cross-check to prevent negative experiences for our community.

We agree with the goal of this recommendation and believe we can achieve its intended outcomes through other means, some of which we have committed to in other recommendations, including yearly list audits and regular quality checks for those involved in the governance process. We will continue tracking these efforts and report on their respective progress in future Quarterly Updates.

Recommendation 10 (Implementing in Part)

Meta should establish clear criteria for removal. One criterion should be the amount of violating content posted by the entity. Disqualifications should be based on a transparent strike system, in which users are warned that continued violation may lead to removal from the system and/or Meta’s platforms. Users should have the opportunity to appeal such strikes through a fair and easily accessible process.

Measure of Implementation: Meta provides the Board with information detailing the threshold of enforcement actions against entities at which their protection under a list-based program is revoked, including notifications sent to users when they receive strikes against their eligibility, when they are disqualified, and their options for appeal. It should also provide the Board with data about how many entities are removed each year for posting violating content.

Our commitment: We have established clear criteria for removal from the ERSR program and are continuing to refine these criteria. When an entity no longer meets our eligibility criteria, it no longer qualifies for the program and is removed from the lists during regular audits.

We will use the number of violations an entity has incurred as a signal that it should be prioritized for an earlier audit. Regardless of whether an entity is in the ERSR program, crossing our strike thresholds generally triggers a feature limit or account disable, which will itself prompt a cross-check review. We are already working towards making appeals available for more content removals, including those conducted in cross-check review.

Considerations: The overarching goal of the cross-check program is to remove violating content while protecting voice and reducing the risk of high impact false positives (i.e., overenforcement mistakes). While we do apply context-specific policies through cross-check that are not applied at scale, entities receiving cross-check review are not exempt from our Community Standards. All entities on Facebook and Instagram, including those whose content is removed through cross-check, are subject to a strike threshold and are disabled once that threshold is reached. Additionally, having previous violations does not negate the potential for cases that are at higher risk for mistakes or where the potential impact of a mistake is especially severe for the community or Meta’s business interests. For these reasons, we do not feel the number of violations alone is enough reason to remove an entity from ERSR. We will update the board on our progress of implementation in future Quarterly Updates.
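
To make the interaction described above more concrete, the following is a minimal, hypothetical sketch in Python. The thresholds, field names, and functions are illustrative assumptions only; they do not describe Meta’s actual enforcement systems or strike values.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; real strike thresholds and
# enforcement actions are not specified in this post.
STRIKE_THRESHOLD_FEATURE_LIMIT = 3
STRIKE_THRESHOLD_DISABLE = 5

@dataclass
class Entity:
    entity_id: str
    strikes: int     # confirmed violations on record
    in_ersr: bool    # enrolled in Early Response Secondary Review

def enforcement_action(entity: Entity) -> str:
    """The same strike thresholds apply whether or not the entity is in ERSR."""
    if entity.strikes >= STRIKE_THRESHOLD_DISABLE:
        return "disable_account"
    if entity.strikes >= STRIKE_THRESHOLD_FEATURE_LIMIT:
        return "feature_limit"
    return "no_action"

def prompts_cross_check_review(action: str) -> bool:
    """Feature limits and account disables themselves prompt a cross-check review."""
    return action in {"feature_limit", "disable_account"}

def audit_priority(entity: Entity) -> int:
    """More recorded violations means the entity is prioritized for an earlier audit."""
    return entity.strikes

# Example: an ERSR-listed entity that crosses the disable threshold is still disabled.
e = Entity(entity_id="example_page", strikes=5, in_ersr=True)
action = enforcement_action(e)
print(action, prompts_cross_check_review(action), audit_priority(e))
```

The sketch reflects only the relationships stated above: list membership does not change the thresholds, severe actions prompt an additional review, and violation counts feed audit prioritization.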

Recommendation 11 (Implementing in Part)

Meta should establish clear criteria and processes for audit. Should entities no longer meet the eligibility criteria, they should be promptly removed from the system. Meta should review all included entities in any mistake prevention system at least yearly. There should also be clear protocols to shorten that period where warranted.

Measure of Implementation: Meta provides the Board with data on the amount, type of entity, and reason for removal from entity lists as a result of audits, along with a timeline for conducting audits periodically.

Our commitment: We have established eligibility criteria and processes for audit that we will continue to refine over time. The majority of the entities on ERSR lists are currently audited on a yearly basis. In those audits, entities are removed if they no longer meet the eligibility criteria. We will also establish protocols for entity audits or automatic removals for cases in which a year may be too long.

Considerations: In order to maintain lists that are relevant and appropriate, we have developed a diligent annual review process to audit entities on ERSR lists for continued eligibility. This process applies to the majority of our ERSR lists. However, a very small portion of our politics and government list, which focuses on former government officials and other civic influencers (categories that tend to be more stable and consistent), is currently audited every three years because of the consistency of their reach and newsworthiness. We will continue to evaluate the right cadence for auditing this particular group to ensure it best serves the goals of the program and protects audiences on our platforms.

We will establish protocols for entity audits or automatic removals for cases in which a year may be too long. For example, entities added to a list because of their involvement in a significant world event may be removed earlier once that event has concluded, such as an Olympic athlete added for the Tokyo 2021 Olympic Games and removed shortly after the Games ended. We will update the board on our progress in future Quarterly Updates.
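
Purely as an illustration of the cadences described above, here is a hypothetical sketch; the intervals, category flag, and function names are assumptions and may differ from how audits are actually scheduled.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical cadences for illustration; real audit schedules are governed internally.
DEFAULT_AUDIT_INTERVAL = timedelta(days=365)            # most ERSR lists: yearly
STABLE_CIVIC_AUDIT_INTERVAL = timedelta(days=3 * 365)   # small, stable civic subset

def next_audit_date(last_audit: date, stable_civic_subset: bool) -> date:
    """Yearly audits by default; a longer cadence for the small, stable civic subset."""
    interval = STABLE_CIVIC_AUDIT_INTERVAL if stable_civic_subset else DEFAULT_AUDIT_INTERVAL
    return last_audit + interval

def remove_event_based_entity(event_end: Optional[date], today: date) -> bool:
    """Entities added for a specific world event can be removed once it has concluded,
    without waiting for the next scheduled audit."""
    return event_end is not None and today > event_end

# Example: an athlete added for a time-bound event is flagged for removal shortly
# after the event ends, ahead of the annual audit cycle.
print(next_audit_date(date(2023, 1, 15), stable_civic_subset=False))
print(remove_event_based_entity(event_end=date(2021, 8, 8), today=date(2021, 9, 1)))
```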

Recommendation 12 (No Further Action)

Meta should publicly mark the pages and accounts of entities receiving list-based protection in the following categories: all state actors and political candidates, all business partners, all media actors, and all other public figures included because of the commercial benefit to the company in avoiding false positives. Other categories of users may opt to be identified.

Measure of Implementation: Meta marks all entities in these categories as benefiting from an entity-based mistake prevention program and announces the change in its Transparency Center.

Our commitment: Meta remains committed to transparency for our users while balancing other considerations, including safety, security, and the risk of gamification. In light of this, we will not be implementing this recommendation.

Considerations: We understand the need to provide people who use our platforms with a positive holistic experience. To achieve this, cross-check serves as an additional level of review on content: a safety net.

Marking profiles publicly identifies them as potential targets for bad actors. Though Meta maintains a rigorous process for protecting accounts, public demarcations like this may increase the targeting of these account types by bad actors. Systems like cross-check aim to reduce the amount of content taken down based on false reports. To do this effectively and efficiently, we first seek to reduce the backlog of current reports while optimizing our systems. This work will be prioritized as part of recommendation #18.

While we think the value of the cross-check program outweighs overall concerns about gamification of the system, which can be mitigated, calling attention to a user’s inclusion may tip that balance and lead to an increase in harmful content. This would not only increase the backlog in our systems; it would also unjustly expose potentially thousands of users to harmful content.

Weighing the potential harm to individual accounts, increased targeting by bad actors, an increase in false reports, and the exposure of other users to potentially harmful content against the benefits of this recommendation leads us to decline it. We will have no further updates on this recommendation.

Recommendation 13 (No Further Action)

Meta should notify users who report content posted by an entity publicly identified as benefiting from additional review that special procedures will apply, explaining the steps and potentially longer time to resolution.

Measure of Implementation: Meta provides the Board with the notifications shown to users who report content from users identified as benefiting from additional review, confirms global implementation, and provides data showing that these notifications are consistently shown to users.

Our commitment: For the security reasons outlined in recommendation #12 and below, we will take no further action on this recommendation.

Considerations: Allowing users who report content to be notified that the content in question belongs to a cross-check enrolled account would expose this account to the potential targeting and gamification outlined in recommendation #12. At this time, we will take no further action on this recommendation and will have no further updates.

Recommendation 14 (Assessing Feasibility)

Meta should notify all entities that it includes on lists to receive enhanced review and provide them with an opportunity to decline inclusion.

Measure of Implementation:

- (1) Meta provides the Board with the notifications sent to users informing them of their inclusion in a list-based enhanced review program and offering them the option to decline; and

- (2) Meta publicly reports annual numbers in its Transparency Center on the number of entities, per country, that declined inclusion.

Our commitment: For the reasons outlined in recommendation #12, we do not plan to directly notify users or provide an in-product option for users to remove themselves from ERSR. However, we understand that there may be instances in which some users may not wish to be included on such a list, even if for their benefit. We will collaborate with our Human Rights and Civil Rights teams to assess options to address this issue, in an effort to enhance user autonomy regarding cross-check.

Considerations: We believe that marking or informing the entities included on cross-check lists would lead to serious security concerns and could impact the integrity of the program, as outlined in recommendation #12. However, we also recognize the importance of transparency and user autonomy. Based on continued consultation with our Human Rights and Civil Rights teams, we plan to explore the feasibility of implementing this recommendation via creative solutions that would mitigate those same unintended consequences. We do feel it is important to note that the cross-check program does not only serve the entities included on these lists; if it did, it might be more straightforward to enable users to opt out. Rather, cross-check allows us to make more accurate decisions on complex content so that people who use our platforms can access accurate information in critical societal moments.

In addition to exploring options to address the spirit of this recommendation, we believe that our commitments to recommendations #1 (increasing the public transparency of entity types), #4 (publicly publishing cross-check criteria), #10 (defining the criteria for removal), and #26 (using data to improve the experiences of over-enforced entities on our platform) will help meet the spirit of this recommendation. We will provide an update on our assessment in a future Quarterly Update.

Recommendation 15 (Implementing in Part)

Meta should consider reserving a minimum amount of review capacity by teams that can apply all content policies (e.g., the Early Response Team) to review content flagged through content-based mistake-prevention systems.

Measure of Implementation: Meta provides the Board with documentation showing its process of consideration of this recommendation and the rationale for its decision on whether to implement it, and publishes this justification in its Transparency Center.

Our commitment: We will explore staffing all decision-making levels of cross-check with reviewers who can apply all content policies (e.g., context-specific decisions) and surface cases for context-specific decisions.

Considerations: Historically, we have not scaled context-specific decisions to larger groups of reviewers because the quality and consistency of content review suffers. In response to the board’s recommendation, we will explore opportunities to expand context-specific decisions to all decision making levels of cross-check, though we will only do so where we can first demonstrate consistency and quality of review. We will provide updates on our progress in a future Quarterly Update.

Recommendation 16 (Implementing in Part)

Meta should take measures to ensure that additional review decisions for mistake-prevention systems that delay enforcement are taken as quickly as possible. Investments and structural changes should be made to expand the review teams so that reviewers are available and working in relevant time zones whenever content is flagged for any enhanced human review.

Measure of Implementation: Meta provides the Board with data that demonstrates a quarter-over-quarter reduction in time-to-decision for all content receiving enhanced review, disaggregated by category for inclusion and country.

Our commitment: We will implement and aim to adhere to robust Service-Level Agreements (SLAs) for review decisions across our mistake-prevention systems. We will also develop contingency plans to be implemented in the case of surges in volumes that prevent us from reaching these SLAs.

Considerations: Our ultimate goal is to review content as quickly as possible with reviewers who speak the language and have cultural expertise. However, it may not be possible to staff in-language reviewers across every timezone 24/7. We will audit our current staffing model and make changes to optimize for the quickest in-language review possible, but there may need to be trade-offs between in-language review and time to review. We will continue sharing our progress in future Quarterly Updates.

Recommendation 17 (Implementing Fully)

Meta should not delay all action on content identified as potentially severely violating and should explore applying interstitials or removals pending any enhanced review. The difference between removal or hiding and downranking should be based on an assessment of harm, and may be based, for example, on the content policy that has possibly been violated. If content is hidden on these grounds, a notice indicating that it is pending review should be provided to users in its place.

Measure of Implementation: Meta updates its Transparency Center with its new approach to enforcement action during the time when content receives enhanced review and provides the Board with information detailing the enforcement consequences it will apply based on content-specific criteria. Meta shares with the Board data on the application of these measures and their impact.

Our commitment: We already remove some extremely high severity content while it is pending enhanced review, and reduce visibility of some other potential harms. We’ll continue to operate this system, and also explore additional options for how to best treat content while it is awaiting further review.

Considerations: We agree with the board’s recommendation not to delay the removal of content identified as potentially severely violating. In fact, we already remove extremely high severity content while it is pending review. We will continue this practice while also exploring further options to best protect the users of our platform from harm while content flagged for certain violations is pending cross-check review.

As mentioned in recommendation #18, we are also working with our operations teams to reduce backlogs in cross-check reviews and the duration for which cross-checked content is pending review. This will also help reduce the risk that users are exposed to violating content while it is pending cross-check review. We will share further updates on our progress in a future Quarterly Update.

Recommendation 18 (Implementing Fully)

Meta should not operate these programs at a backlog. Meta should not, however, achieve gains in relative review capacity by artificially raising the ranker threshold or having its algorithm select less content.

Measure of Implementation: Meta provides the Board with data that demonstrates a quarter-over-quarter reduction in total backlogged content and amount of days with a backlog for cross-check review queues.

Our commitment: We are working to reduce the current backlog within our systems, and will keep the Oversight Board informed as we consider new methods to decrease review times and ERSR backlog.

Considerations: Cross-check is composed of two systems that work together: Early Response Secondary Review (ERSR) and General Secondary Review (GSR). We agree that these programs should not operate at a backlog, and that artificially raising the threshold should not be used to achieve this goal. Recognizing the importance of this recommendation, we will begin re-evaluating the current strategy for how our systems prioritize content for review and commit to eliminating the current ERSR backlog.

The volume of total content identified through cross-check is one of the most significant factors creating a backlog. There are also constraints around external factors, such as moment-in-time events, that may lead to spikes in content needing to be reviewed. A key way we avoid a backlog in GSR is by cascading decisions to the previous review decision after a certain time period. We will explore implementing this cascading approach to ERSR content as well in the event of volume spikes.
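
To illustrate the cascading approach in the abstract, the following is a minimal, hypothetical sketch; the time window, decision labels, and function are assumptions rather than a description of Meta’s systems.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical window for illustration; the actual period after which a decision
# cascades is not specified in this post.
CASCADE_WINDOW = timedelta(hours=48)

def resolve_queued_item(enqueued_at: datetime,
                        prior_decision: str,
                        secondary_decision: Optional[str],
                        now: datetime) -> str:
    """Decide what applies to an item awaiting secondary review.

    Use the secondary decision if it exists. If review has been pending longer
    than the cascade window, fall back to the previous review decision so the
    queue does not accumulate a backlog. Otherwise keep waiting.
    """
    if secondary_decision is not None:
        return secondary_decision
    if now - enqueued_at > CASCADE_WINDOW:
        return prior_decision
    return "pending"

# Example: an item pending for 72 hours with no secondary decision cascades to
# the original "leave_up" outcome instead of sitting in a backlog.
print(resolve_queued_item(datetime(2023, 5, 1, 9, 0), "leave_up", None,
                          datetime(2023, 5, 4, 9, 0)))
```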

We will keep the Oversight Board informed as we consider new methods to decrease review times and ERSR backlog, and will report on our progress in a future Quarterly Update.

Recommendation 19 (Implementing in Part)

Meta should not automatically prioritize entity-based secondary review and make a large portion of the algorithmically selected content-based review dependent on extra review capacity.

Measure of Implementation: Meta provides the Board with internal documents detailing the distribution of review time and volume between entity based and content-based systems.

Our commitment: As we committed to in recommendations #3 and #15, we will explore staffing both GSR and ERSR with reviewers who can apply all content policies and surface cases for potential context-specific decisions that require escalation. We will do so where we are first able to demonstrate consistency and quality of review.

Considerations: While currently only entities on the ERSR list receive the benefit of automatic cross-check review, the vast majority of content moving through the cross-check system is algorithmically selected, content-based review (also known as General Secondary Review). We understand the board’s recommendation to mean that we should not prioritize ERSR review over GSR review unless we can expand the review capabilities of general review channels.

Historically, there have been limits on our ability to scale certain context-specific review decisions to larger groups of reviewers because quality and consistency suffers. We will explore opportunities to expand review of more context-specific decisions that require escalation to all decision making levels of cross-check, though we will only do so where we can first demonstrate consistency and quality of review. We will provide further updates on this work in a future Quarterly Update.

Recommendation 20 (Implementing Fully)

Meta should ensure that content that receives any kind of enhanced review because it is important from a human rights perspective, including content of public importance, is reviewed by teams that can apply exceptions and context.

Measure of Implementation: Meta provides the Board with information that shows the percentage of content receiving review by teams that can apply exceptions and context because it has been posted by an entitled entity or because it has been identified algorithmically as meriting enhanced review, disaggregated by mistake prevention system (e.g., GSR vs. ERSR).

Our commitment: We will work to ensure that content that receives any kind of enhanced review because it is important from a human rights perspective, including content of public importance, is reviewed by teams that can apply the context-specific considerations that inform our policies.

Considerations: We deploy systems that ensure that content that enters our processes through our mistake prevention systems receives multilayer review. Once content has passed initial review, it is triaged to relevant teams for market and context-specific review. The various teams reviewing content are trained to consider any necessary assessments or applicable context-specific considerations that inform our policies. If content is deemed violating following the review and auditing from our various operations teams, the content is actioned and the entity is notified of the decision. Following enforcement, the entity may enter an appeal process which will involve the same iterative processes described above.

Content that is likely to be forwarded for secondary review because of its relevance to human rights or the public interest is broadly covered by our ERSR categories (i.e., Civic and Government; Media Organizations). Additionally, we have specialized reviewers who monitor significant world events, historically over-enforced entities, and entities escalated for higher context-specific review. We are always exploring and implementing ways to get better at this work, and we will continue to prioritize improvements and share these developments in future Quarterly Updates.

Recommendation 21 (Implementing Fully)

Meta should establish clear criteria for the application of any automatic bars to enforcement (‘technical corrections’), and not permit such bars for high severity content policy violations. At least two teams with separate reporting structures should participate in granting technical corrections to provide for cross-team vetting.

Measure of Implementation: Meta publishes the number of entities currently benefiting from a “technical correction” on an annual basis, with indication of what content policies are barred from enforcement.

Our commitment: We will refine our eligibility criteria for the technical corrections system, ensuring that those criteria are well-governed and regularly updated to assess their suitability. Further, we will publish the number of entities currently enrolled in the technical corrections policy on an annual basis. We will also continue our efforts to ensure that the technical corrections system is not used to exempt any users from very high severity violations.

Considerations: The technical corrections system was launched in early 2022 and helps us prevent enforcement mistakes in specific, narrowly defined scenarios where we know a violation is highly unlikely, for example some reputable publishers that we know are incorrectly flagged by our spam classifier due to their frequent posting.

The process for creating new technical corrections is governed by multiple teams at Meta, including an in-depth review from Legal and Policy teams. We will continue to refine this practice and will also publish the number of entities enrolled in technical corrections in order to promote transparency and trust with our users and the board. We will track the progress of implementation in future Quarterly Updates.

Recommendation 22 (Implementing in Part)

Meta should conduct periodic audits to ensure that entities benefitting from automatic bars to enforcement (‘technical corrections’) meet all criteria for inclusion. At least two teams with separate reporting structures should participate in these audits to provide for cross-team vetting.

Measure of Implementation: Meta provides information to the Board on its periodic list auditing processes.

Our commitment: We will continuously improve our automated and manual auditing processes so that entities benefiting from technical corrections meet all criteria for inclusion. Furthermore, we will implement additional governance controls on the currently established audit periods during technical corrections renewal.

Considerations: We interpret the goal of the second part of this recommendation to be to ensure employees adding entities to technical correction lists are doing so without bias and with high degrees of quality and consistency. We agree with this goal but aim to achieve it through slightly different approaches.

All individuals who are responsible for adding entities to technical correction lists are required to adhere to the same eligibility criteria and operational processes. As explained in our response to recommendation #9, in the past year we have enhanced our systems to allow escalation decisions to be made more quickly and to prevent future backlogs. This means, however, that adding more employees from our Operations team to this process would come at a high cost to our human review resources, which are needed for these decisions and for operational agility in time-sensitive situations.

We believe we can achieve the goal of reducing bias in technical correction list creation through additional means, some of which we have committed to in other recommendations. This includes yearly list audits and regular quality checks for those involved in the governance process. We will share further updates on our progress on this recommendation and related commitments in a future Quarterly Update.

Recommendation 23 (Implementing Fully)

Meta should conduct periodic multi-team audits to proactively and periodically search for unexpected or unintentional bars to enforcement that may result from system error.

Measure of Implementation: Meta publishes information annually on any unexpected enforcement bars it has found, their impact, and the steps taken to remedy the root cause.

Our commitment: We are committed to conducting periodic audits and the publication of relevant findings, and will publish a report on these efforts with existing or planned mitigation measures in the future.

Considerations: We recognize the importance of identifying and addressing risks posed to our community of users and beyond. We already have structures and processes to identify, manage, and mitigate risks in various ways.

We will publish a public report on these efforts with existing or planned mitigation measures in the future. This report will specify key findings around content moderation and the functioning of our enforcement systems. As part of this, we will include information on unexpected enforcement bars found and the steps taken to address these.

This work will be an iterative process that will likely evolve as we gain insight into the process. As we continue to develop this process, we will provide appropriate information related to unexpected errors and current or future remedies for those errors. We will continue to track these developments in future Quarterly Updates.

Recommendation 24 (Implementing in Part)

Meta should ensure that all content that does not reach the highest level of internal review is able to be appealed to Meta.

Measure of Implementation: Meta publishes information on the number of content decisions made through enhanced review pathways that were not eligible for appeal. This yearly data, disaggregated by country, should be broken down in a way that explains what, if any, percentage of the content did not get an appeal because it reached global leadership review.

Our commitment: We are committed to expanding appeal options to a wider range of content moderation decisions. As shared in our response to recommendation #3 in the Veiled Threat of Violence Based on Lyrics from a Drill Rap Song case, this undertaking will begin with users in the EU, UK, and India by the end of 2023.

Considerations: As shared in prior updates, we have been working on expanding the scope of enforcement actions eligible for appeals and the regions where such appeals are available. As explained previously, there are certain sensitive or illicit content types that we cannot allow appeals for, such as violations under our Child Sexual Exploitation Policy.

As capacity evaluations continue and our technology evolves, we will work towards completing this recommendation in the long term. In the interim, however, we will continue to enable more appeal types in 2023 and share ongoing progress in future Quarterly Updates.

Recommendation 25 (Implementing Fully)

Meta must guarantee that it is providing an opportunity to appeal to the Board for all content the Board is empowered to review under its governing documents, regardless of whether the content reached the highest levels of review within Meta.

Measure of Implementation: Meta publicly confirms that all content covered under the Board’s governing documents is receiving Oversight Board appeal IDs to submit a complaint to the Board, providing documentation to demonstrate where steps have been taken to close appeal availability gaps. Meta creates an accessible channel for users to achieve prompt redress when they do not receive an Oversight Board appeal ID.

Our commitment: We have efforts underway to allow our users to appeal eligible escalations within the EU, UK, and India both internally and to the Oversight Board by the end of 2023. We are also developing a new global solution allowing users to appeal eligible content decisions directly to the Oversight Board.

Considerations: As stated in our response to recommendation #3 in the Veiled Threat of Violence Based on Lyrics from a Drill Rap Song case, our teams have work underway to enable people on our platforms in the EU, UK, and India to appeal eligible escalations internally and to the Oversight Board by the end of 2023.

Additionally, our internal teams are building an automated solution that will allow people in the rest of the world to appeal escalation decisions directly to the Oversight Board. Users will be able to appeal a decision directly to the board when the enforcement decision does not result in a Meta appeal option, and, when a Meta appeal option is available, after that appeal has been rejected. We expect to roll this out in the second quarter of 2023.

Future updates to this recommendation will be bundled with recommendation #3 in the Veiled Threat of Violence Based on Lyrics from a Drill Rap Song case. We will share our progress in future Quarterly Updates.

Recommendation 26 (Implementing Fully)

Meta should use the data it compiles to identify “historically over-enforced entities” to inform how to improve its enforcement practices at scale. Meta should measure over-enforcement of these entities and it should use that data to help identify other over-enforced entities. Reducing over-enforcement should be an explicit and high-priority goal for the company.

Measure of Implementation: Meta provides data to the public that shows quarter-over-quarter declines in over-enforcement and documentation that shows that the analysis of content from “historically over-enforced” entities is being used to reduce over-enforcement rates more generally.

Our commitment: We view this recommendation as a long-term investment in the success of the cross-check program and will continue to prioritize improvements to our enforcement practices at scale, including measurement and reduction of overenforcement and increased equity in our content moderation.

Considerations: Overenforcement is a critical concern that requires reflection and deliberate action. Meta views protecting our users' voices and respecting human rights, including freedom of expression, as a long-term goal. Mistakes in this area may disproportionately impact historically over-enforced entities, which is why we are committed to taking deliberate action.

We will work with our Human Rights and Civil Rights teams to conduct an analysis of the current state and impacted entities. This work will also be done in conjunction with recommendation #1.

As we explore potential mitigations in this area, we recognize that a metrics-only model is not sufficient. Therefore, we will also explore non-data-based analysis to assess our enforcement practices at scale and share our progress in future Quarterly Updates.

Recommendation 27 (Implementing in Part)

Meta should use trends in overturn rates to inform whether to default to the original enforcement within a shorter time frame or what other enforcement action to apply pending review. If overturn rates are consistently low for particular subsets of policy violations or content in particular languages, for example, Meta should continually calibrate how quickly and how intrusive an enforcement measure it should apply.

Measure of Implementation: Meta provides the Board with data detailing the rates at which queued content remains up or is taken down, broken out by country, policy area, and other relevant metrics, and describes changes made on an annual basis.

Our commitment: We will regularly analyze data such as overturn rates to measure the impact of cross-check and will identify opportunities to make decisions on content faster and/or with fewer layers of review. However, we do not believe it is in the best interest of our users, nor consistent with equitable content moderation, to commit to basing current decisions on historical data such as overturn rates, especially data that might generalize on the basis of region, for example. We will, of course, continue to evaluate patterns of enforcement decisions for larger improvement within our systems.

Considerations: In addition to the overturn rates, we will also analyze other metrics such as the accuracy of the review to get a full picture of the impact of cross-check and continue to improve the review time while optimizing review quality.

We will not, however, implement the element of the recommendation asking us to consider historical overturn rates or other data when making new enforcement decisions. Doing so could create an assumption of a violation before the review is conducted. We will continue to make enforcement decisions in cross-check that are in accordance with our policies and not influenced by previous decisions or generalizations, such as rate of previous violations in a certain language. We will share further updates on our implementation progress in a future Quarterly Update.
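
As an aggregate-analysis illustration only, the sketch below shows how overturn rates might be computed by segment to inform process improvements without prejudging any individual decision. The record fields and sample data are hypothetical assumptions, not Meta’s data or methodology.

```python
from collections import defaultdict

# Synthetic example records; fields and values are illustrative assumptions.
reviews = [
    {"policy_area": "spam", "language": "en", "initial": "remove", "final": "restore"},
    {"policy_area": "spam", "language": "en", "initial": "remove", "final": "remove"},
    {"policy_area": "hate", "language": "pt", "initial": "remove", "final": "restore"},
]

def overturn_rates(records, key):
    """Share of secondary reviews whose final decision differs from the initial
    decision, grouped by a field such as policy_area or language."""
    totals, overturned = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        if r["final"] != r["initial"]:
            overturned[r[key]] += 1
    return {k: overturned[k] / totals[k] for k in totals}

# Aggregate rates can flag where review layers add little value or where processes
# need improvement; each new piece of content is still assessed on its own merits
# against policy, not against these historical rates.
print(overturn_rates(reviews, "policy_area"))
```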

Recommendation 28 (Implementing Fully)

Meta should conduct periodic reviews of different aspects of its enhanced review system, including content with the longest time to resolution and high-profile violating content left on the platform.

Measure of Implementation: Meta publishes the results of reviews to the cross-check system on an annual basis, including summaries of changes made as a result of these reviews.

Our commitment: We will conduct periodic reviews of the cross-check system, including content with the longest time to resolution. We already conduct periodic sampling to determine where high-profile violating content is left on the platform. We will use this analysis to inform improvements to the program overall.

Considerations: We agree with the board’s recommendation and we will conduct periodic reviews of the cross-check system, including identifying content with the longest time to resolution. These periodic reviews will help inform our efforts to reduce backlogs in cross-check queues and reduce the time required to review cross-checked content.

As part of our current scaled content review operations, we regularly conduct periodic sampling to determine how much violating content is left on Meta’s platforms. We use this data to refine and improve our efforts to detect violating content.

Due to the complexity of conducting and reviewing these analyses, full implementation of this recommendation is a long-term goal. We will share the results of these analyses with the board once they are complete and will report on our progress in a future Quarterly Update.
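As background on the sampling mentioned above, the sketch below shows one standard way a prevalence estimate with a rough confidence interval can be computed from a random sample of views. The sample figures and the interval method are illustrative assumptions, not Meta's actual methodology.

```python
import math

def estimate_prevalence(violating_views_in_sample: int, total_views_in_sample: int):
    """Estimate the share of sampled content views that contained violating content,
    with an approximate 95% confidence interval (normal approximation)."""
    p = violating_views_in_sample / total_views_in_sample
    margin = 1.96 * math.sqrt(p * (1 - p) / total_views_in_sample)
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# Example: 45 violating views found in a random sample of 100,000 sampled views
prevalence, interval = estimate_prevalence(45, 100_000)
```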

Recommendation 29 (No Further Action)

Meta should publicly report on metrics that quantify the adverse effects of delayed enforcement as a result of enhanced review systems, such as views accrued on content that was preserved on the platform as a result of mistake-prevention systems but was subsequently found violating. As part of its public reporting, Meta should determine a baseline for these metrics and report on goals to reduce them.

Measure of Implementation: Meta includes one or more key metrics demonstrating the negative consequences of delayed enforcement pending enhanced review mechanisms in the Community Standards Enforcement Report, along with goals to reduce these metrics and progress in meeting those goals.

Our commitment: We currently report on the prevalence of harm across our platforms in our quarterly Community Standards Enforcement Report, and we will not be increasing the granularity of this metric. To supplement this information, however, we will work toward a cross-check-specific transparency report, as explained in our response to recommendation #30.

Considerations: We currently report on the prevalence of harm in our regularly published Community Standards Enforcement Report to gauge our performance against our goal of minimizing the impact of policy violations on people using our platforms.

At this time, we do not have any plans to share this specific data type publicly, but we remain committed to furthering our transparency around our cross-check systems. As explained in our response to recommendation #30, below, we will be re-evaluating our current ongoing reporting processes for the cross-check program with the goal of publishing an annual report containing metrics on the functionality and impact of cross-check on our platforms. While we will have no further updates on this recommendation, we will share future updates on our cross-check data transparency work at large under recommendation #30.

Recommendation 30 (Implementing in Part)

Meta should publish regular transparency reporting focused specifically on delayed enforcement of false-positive prevention systems. Reports should contain data that permits users and the public to understand how these programs function and what their consequences on public discourse may be. At minimum, the Board recommends Meta include:

a. Overturn rates for false positive mistake-prevention systems, disaggregated according to different factors. For example, the Board has recommended that Meta create separate streams for different categories of entities or content based on their expression and risk profile. The overturn rate should be reported for any entity-based and content-based systems, and categories of entities or content included.

b. The total number and percentage of escalation-only policies applied due to false positive mistake-prevention programs relative to total enforcement decisions.

c. Average and median time to final decision for content subject to false-positive mistake prevention programs, disaggregated by country and language.

d. Aggregate data regarding any lists used for mistake-prevention programs, including the type of entity and region.

e. Rate of erroneous removals (false positives) versus all reviewed content, including the total amount of harm generated by these false positives, measured as the predicted total views on the content (i.e., overenforcement).

f. Rate of erroneous keep-up decisions (false negatives) on content, including the total amount of harm generated by these false negatives, measured as the sum of views the content accrued (i.e., underenforcement).

Measure of Implementation: Meta releases annual transparency reporting including these metrics.

Our commitment: We will begin the process of tracking and determining what information can be shared publicly in an annual report aimed at increasing transparency around our cross-check program. This will be a long-term effort as we expand our transparency reporting more broadly, and it will be consistent with regulatory requirements and existing transparency roadmaps.

Considerations: In conjunction with ongoing regulatory obligations, we will begin implementing the recommendations outlined by the Oversight Board. Part of that process will be updating how we track certain entities or log specific data types. In tandem with this, we will re-evaluate our current ongoing reporting processes for the cross-check program.

To ensure this reporting is accurate and thorough, we consider this recommendation a long-term implementation. Additionally, various deployments must occur before it can be realized. For example, sub-part (a) will need to be addressed alongside recommendation #1, where we will consider the level of entity-based or content-based information disaggregated by various factors. Other sub-parts may need to be modified to meet the spirit of the recommendation; for sub-part (c), for example, we may evaluate whether regional data is a better parameter.

At this time, we cannot commit to exactly which metrics will be made public, particularly as our metrics may need to be refined. However, we remain committed to bringing additional transparency, including in this context. We will begin to compile which metrics can be shared publicly and will publish them as soon as they are ready. We will share further updates on our progress in a future Quarterly Update.
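To illustrate what tracking the metrics enumerated in this recommendation could involve, the sketch below computes a few of them from hypothetical review records. The record schema, field names, and harm proxies are assumptions for illustration and may differ from whatever metrics are ultimately published.

```python
import statistics
from dataclasses import dataclass

@dataclass
class CrossCheckReview:
    country: str
    initial_action: str       # "remove" or "keep_up"
    final_action: str         # decision after cross-check review
    hours_to_decision: float
    views_while_pending: int  # views accrued while the content awaited review

def transparency_metrics(reviews):
    """Illustrative only: a few of the metrics sketched in recommendation #30,
    computed over a non-empty list of hypothetical review records."""
    overturned = [r for r in reviews if r.final_action != r.initial_action]
    false_positives = [r for r in overturned if r.initial_action == "remove"]
    false_negatives = [r for r in overturned if r.initial_action == "keep_up"]
    return {
        "overturn_rate": len(overturned) / len(reviews),
        "median_hours_to_decision": statistics.median(r.hours_to_decision for r in reviews),
        "false_positive_rate": len(false_positives) / len(reviews),
        # One possible harm proxy for false negatives: views the violating
        # content accrued while it remained on the platform.
        "false_negative_views_accrued": sum(r.views_while_pending for r in false_negatives),
    }
```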

Recommendation 31 (Implementing Fully)

Meta should provide basic information in its Transparency Center regarding the functioning of any mistake-prevention system it uses that identifies entities or users for additional protections.

Measure of Implementation: A section is added to the Transparency Center explaining its array of mistake prevention systems (the Board understands the potential for user adversarialism to attempt to bypass enforcement, and Meta may choose to summarize some points of its enforcement practices).

Our commitment: We recently published information on how we detect and enforce violations in our Transparency Center, including an overview of our approach to accurately reviewing high-impact content through cross-check. While we cannot publish the full details of these interventions, due to the high risk of adversarial behavior, we will expand upon the existing information by including further details around our mistake prevention systems here and in the “Detecting Violations” section of our Transparency Center.

Considerations: On December 6, 2022, we created a new page on our Transparency Center publicly detailing our processes for identifying high-impact content and providing additional levels of review to mitigate the risk of said content through cross-check. The cross-check system is made up of two components: General Secondary Review (GSR) and Early Response Secondary Review (ERSR). The page provides information on how we detect and enforce violations using these two components, including categorical examples of users and entities that we deem eligible for Early Response Secondary Review as part of our systematic mistake-prevention efforts.

In addition to the systems already described on our Transparency Center, we also have an internal system called Dynamic Multi Review (DMR). This system enables us to review certain content multiple times before making a final decision. We use it to improve the quality and accuracy of human review and to mitigate the risk of incorrect decisions by adjusting the number of reviews required for a final decision based on a number of factors, such as virality, number of views, and potential to contribute to harm (e.g., potential violations of our policies on Sexual Exploitation or Dangerous Individuals and Organizations).

Per our responses to earlier recommendations (#7, #8, #9), we endeavor to strengthen transparency around mistake prevention at Meta through structured and robust engagement with civil society, upskilling our internal teams to ensure specialized review of list enrollment, and protecting the independence of mistake-prevention efforts. We will add an overview of these efforts, alongside a description of the Dynamic Multi Review (DMR) system, to the “Detecting Violations” section of our Transparency Center.
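To make the idea of adjusting the number of reviews more concrete, here is a minimal, purely hypothetical sketch of multi-review logic. The factors, thresholds, and review counts are illustrative assumptions only and do not describe how DMR is actually configured.

```python
def required_review_count(predicted_views: int,
                          is_viral: bool,
                          matches_high_harm_policy: bool) -> int:
    """Illustrative only: decide how many independent human reviews to require before a
    final decision, in the spirit of a multi-review system. All values are hypothetical."""
    reviews = 1
    if is_viral or predicted_views > 100_000:
        reviews += 1  # widely seen content gets an additional review pass
    if matches_high_harm_policy:
        reviews += 1  # potential severe-harm violations get another pass
    return min(reviews, 3)
```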

Given the confidentiality of the processes that govern these efforts and the potential for adversarial behavior if further details of these interventions are publicized, we will only provide a general overview of our mistake prevention systems on our Transparency Center. Any additional specificity could pose too great a risk to our platforms and the people who use them. We will provide further updates on this work in a future Quarterly Update.

Recommendation 32 (Implementing Fully)

Meta should institute a pathway for external researchers to gain access to non-public data about false-positive mistake-prevention programs that would allow them to understand the program more fully through public-interest investigations and provide their own recommendations for improvement. The Board understands that data privacy concerns should require stringent vetting and data aggregation.

Measure of Implementation: Meta discloses a pathway for external researchers to obtain non-public data on false positive mistake-prevention programs.

Our commitment: We will evaluate possible solutions to expand collaborative external research initiatives to include false-positive mistake prevention programs, without compromising user privacy and security.

Considerations: We agree that collaborative research initiatives are a fundamental part of building and improving our systems and products. We have a Research Collaborations team dedicated to partnering with university faculty, post-doctoral researchers, and doctoral students to drive new insights and recommendations.

We are currently exploring the feasibility of expanding these external research initiatives to cover false-positive mistake-prevention programs, in part to meet our regulatory obligations, and any process or procedure will aim to meet those obligations going forward. This recommendation is complex and must be navigated with great deference to user privacy. Defining who would be considered a vetted researcher, for what purposes data could be used, and where and how access would be given all require Meta to complete its due diligence. We will provide an update on the status of this assessment in future Quarterly Updates.