PAO 2023-01
Updated: July 2, 2024
We have completed a feasibility assessment for Recommendations #1 and #3 to determine the best approach to implementing the Board’s recommendations, and we are updating our commitment to the Board from assessing feasibility to implementing in full.
For Recommendations #1 and #3, the Board recommended allowing use of the word “shaheed” in all instances unless content otherwise violates our policies or is shared with one or more of three signals of violence. In our May 2024 responses, we committed to conducting a labeling exercise to examine the types of content that would be allowed on our platforms if we only consider the three signals of violence the Board identified in its Opinion, rather than the broader set of six signals that Meta initially proposed in our request for a Policy Advisory Opinion. The three signals identified by the Board are a visual depiction of an armament/weapon, a statement of intent or advocacy to use or carry an armament/weapon, and a reference to a designated event.
We conducted the labeling assessment to understand how this approach might impact the type of content allowed if we did not implement all six signals we originally proposed. Our assessment included an exercise with our policy and enforcement teams to understand how changes to our policy guidance might impact enforcement in key markets where we see content related to “shaheed” being shared. We did this by testing updated guidance against a sample of real, on-platform content from these markets to determine how the Board’s recommended changes to policy and enforcement, in line with Recommendations #1 and #3, would impact content on the platform.
Initial results from our assessment indicate that continuing to remove content when “shaheed” is paired with otherwise violating content – or when the three signals of violence outlined by the Board are present – captures the most potentially harmful content without disproportionately impacting voice. We will provide updates on our progress towards fully implementing these recommendations in our next report for the Oversight Board.
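To make the shape of this labeling comparison concrete, below is a minimal, hypothetical sketch of how removal outcomes on a labeled sample could be tallied under the Board's three signals versus the broader six-signal set. The data structure and signal names are illustrative assumptions, not Meta's actual tooling or taxonomy.

```python
# Minimal, hypothetical sketch of the labeling comparison (illustrative only).
from dataclasses import dataclass, field

# The Board's three signals of violence, per its Opinion; the "extra" names stand in
# for the broader set of six signals Meta originally proposed and are placeholders.
BOARD_SIGNALS = {"weapon_depiction", "intent_to_use_weapon", "designated_event_reference"}
EXTRA_SIGNALS = {"military_language", "property_destruction", "violence_advocacy"}

@dataclass
class LabeledPost:
    refers_to_designated_individual_as_shaheed: bool
    otherwise_violating: bool            # violates another policy line (e.g., glorification)
    signals: set = field(default_factory=set)

def would_remove(post: LabeledPost, signal_set: set) -> bool:
    """Remove when 'shaheed' references a designated individual and the post is
    otherwise violating or carries at least one signal from signal_set."""
    if not post.refers_to_designated_individual_as_shaheed:
        return False
    return post.otherwise_violating or bool(post.signals & signal_set)

def compare(sample: list[LabeledPost]) -> dict:
    """Tally removals under the six-signal rule set versus the Board's three signals."""
    six_signals = BOARD_SIGNALS | EXTRA_SIGNALS
    return {
        "sample_size": len(sample),
        "removed_under_six_signals": sum(would_remove(p, six_signals) for p in sample),
        "removed_under_three_signals": sum(would_remove(p, BOARD_SIGNALS) for p in sample),
    }
```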
On March 9, 2023, the Oversight Board accepted Meta’s request for a Policy Advisory Opinion (PAO) on our treatment of the word “shaheed” when used to refer to an individual designated under our Dangerous Organizations and Individuals (DOI) policy.
Under our DOI policy, Meta designates and bans from our platforms “organizations or individuals that proclaim a violent mission or are engaged in violence”, like terrorists or hate groups. We also prohibit content that includes “praise, substantive support, or representation” - terms we define in our policy - for these designated organizations and individuals, alive or deceased. Currently, we treat the word “shaheed” as explicit praise when used in reference to a designated individual, and we remove this content when we’re aware of it. We do not remove the word "shaheed" on its own or when used to reference non-designated individuals.
Meta has requested the Oversight Board’s guidance on this approach because, while developed with safety in mind, we know it comes with global challenges. “Shaheed” is used in different ways by many communities around the world and across cultures, religions, and languages. At times, this approach may result in us removing some content at scale that was never intended to support terrorism or praise violence.
We are seeking the Oversight Board’s views on three possible options, which we have outlined below for their consideration, or on any other options they determine may be appropriate (an illustrative sketch contrasting the options follows the list):
Option One: Maintain the status quo, as outlined above
Option Two: Allow content that uses “shaheed” to reference a designated dangerous individual only when (i) it is used in a specific permissible context (for example, news reporting or neutral and academic discussion), (ii) there is no additional praise, substantive support, or representation of a dangerous organization or individual, and (iii) there is no signal of violence in the content (for example, the depiction of weapons, military clothing, or references to real-world violence)
Option Three: Remove content that uses “shaheed” to reference a designated dangerous individual only when there is additional praise, substantive support, representation, or a signal of violence
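The three options above can be read as different enforcement decision rules. The following is a minimal, hypothetical sketch contrasting them; the field names and context values are illustrative assumptions, not production logic.

```python
# Hypothetical decision rules contrasting the three options (illustrative only).

def decision_option_one(post: dict) -> str:
    # Status quo: "shaheed" referencing a designated individual is treated as praise and removed.
    return "remove" if post["shaheed_refers_to_designated_individual"] else "allow"

def decision_option_two(post: dict) -> str:
    # Allow only in a permissible context, with no other praise/support/representation
    # and no signal of violence in the content.
    if not post["shaheed_refers_to_designated_individual"]:
        return "allow"
    permissible_context = post["context"] in {"news_reporting", "neutral_discussion", "academic_discussion"}
    if permissible_context and not post["other_praise_support_representation"] and not post["signals_of_violence"]:
        return "allow"
    return "remove"

def decision_option_three(post: dict) -> str:
    # Remove only when there is additional praise/support/representation or a signal of violence.
    if not post["shaheed_refers_to_designated_individual"]:
        return "allow"
    if post["other_praise_support_representation"] or post["signals_of_violence"]:
        return "remove"
    return "allow"

example = {
    "shaheed_refers_to_designated_individual": True,
    "context": "news_reporting",
    "other_praise_support_representation": False,
    "signals_of_violence": False,
}
# Option One removes this example post; Options Two and Three allow it.
```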
We also welcome the Oversight Board’s guidance on wider questions surrounding our policies and enforcement that the PAO raises.
In assessing our current policies and preparing this PAO request for the board, we reviewed extensive research from academic, non-profit and advocacy researchers, and conducted substantial outreach with over 40 stakeholder individuals and organizations across Europe, the Middle East and North Africa, Sub-Saharan Africa, South Asia, Asia Pacific, and North America. This included linguistics experts, academic scholars, counterterrorism experts, political scientists, freedom of expression advocates, and digital rights organizations, as well as local civil society groups directly impacted by the policy in question.
Once the Board has finished deliberating, we will consider and publicly respond to its recommendations within 60 days, and will update this post accordingly. Please see the Board’s website for the recommendations once they are issued.
We welcome the Oversight Board’s response today, March 26, 2024, on this policy advisory opinion (PAO) referral.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
Meta should stop presuming that the word "shaheed", when used to refer to a designated individual or unnamed members of designated organizations, is always violating and ineligible for policy exceptions. Content referring to a designated individual as "shaheed" should be removed as an "unclear reference" in only two situations. First, when one or more of three signals of violence are present: a visual depiction of an armament/weapon, a statement of intent or advocacy to use or carry an armament/weapon, or a reference to a designated event. Second, when the content otherwise violates Meta's policies (e.g. for glorification or because the reference to a designated individual remains unclear for reasons other than use of "shaheed"). In either scenario, content should still be eligible for the "reporting on, neutrally discussing and condemning" exceptions.
The Board will consider this recommendation implemented when Meta publicly updates its Community Standards to specify that references to designated individuals as "shaheed" are not allowed when one or more of the three listed signals of violence are present.
Our commitment: We welcome the Board's guidance in this recommendation and are in the process of assessing the most feasible approach to implementing it. As part of this assessment, we will examine the types of content that would be allowed on our platforms if we only consider the three signals of violence the Board identified in its Opinion, rather than the broader set of six signals we proposed. Additionally, our assessment will focus on ways to operationalize, at scale, the Board’s recommendation to allow otherwise violating content under the “reporting on, neutrally discussing and condemning” allowance.
Considerations: In our request for a Policy Advisory Opinion (PAO), we included three potential options for addressing the use of “shaheed” when used to refer to an individual designated under our DOI policy. The third option we posited, as the Board notes in its Opinion, is closest to the Board’s recommendation with two distinctions: (1) the Board recommends we only use three of the six signals of violence when deciding whether to remove “shaheed” as an “unclear reference” to a designated individual; and (2) the Board recommends we allow otherwise violating uses of “shaheed” in the context of news reporting, neutral discussion, and condemnation.
Regarding the first distinction between the Board’s recommendation and our proposed third option, the Board recommends that we not use, as signals of violence, references to military language; references to arson, looting, or other destruction of property; and statements of intent, calls to action, or representation, support, or advocacy of violence against people. We included these signals in our third option because, based on our prior policy research and development, we found that their presence is often indicative of violent speech. However, as part of our PAO request, we did not specifically analyze the type of content that may be allowed under our DOI policy if we did not rely on those specific signals of violence. While we recognize that the Board may be correct in its recommendation to not utilize these three signals, we wish to take additional time to analyze what type of content would be allowed under the Board’s approach. This analysis will include reviewing real on-platform examples of content that would be allowed if these signals are not included in our revised policy.
Regarding the second distinction between the Board’s recommendation and our proposed third option, we plan to assess ways to operationalize the Board’s approach. As the Board acknowledges in its Opinion, extending the “reporting on, neutrally discussing, or condemning” allowance to this content “does add some greater complexity to the scalability of the policy proposed by Meta’s third option….” For example, it may be difficult, at scale, to determine whether a post using the word “shaheed” in reference to a designated individual and containing, for instance, a photo of that individual holding a weapon, is actually engaging in neutral discussion. As part of our assessment, we will examine whether the Board’s recommendation here can be consistently and accurately applied at scale, including what changes or additions will be required to our reviewer protocols and processes to effectuate the Board’s recommendation.
We will provide further updates in the next report on the Oversight Board.
To clarify the prohibition on "unclear references", Meta should include several examples of violating content, including a post referring to a designated individual as "shaheed" combined with one or more of the three signals of violence specified in recommendation no. 1.
The Board will consider this recommendation implemented when Meta publicly updates its Community Standards with examples of “unclear references”.
Our commitment: Following the completion of the additional analysis outlined in our response to Recommendation #1, we will update our publicly available Dangerous Organizations and Individuals Policy to include examples of violating content, including unclear references combined with one or more of the signals of violence specified in Recommendation #1.
Considerations: We are aligned with the Board that, given the complexity of our DOI policy, sharing examples as part of this Community Standards section can help clarify our approach. As such, we will work to include examples of “unclear references” in our external policy.
In line with previous recommendations from the Board, we currently include a number of examples in our DOI Policy of the types of content that we may allow as news reporting, neutral discussion, and condemnation. We also include examples of the types of content we might remove for violating our policies concerning Glorification, Support, and Representation. We believe that providing these types of examples is helpful to users in understanding our policies.
Accordingly, we will update our Community Standards alongside updates for Recommendation #1, and will share details on our progress in the next report to the Oversight Board.
Meta's internal policy guidance should also be updated to make clear that referring to designated individuals as "shaheed" is not violating except when accompanied by signals of violence, and that even when those signals are present, the content may still benefit from the "reporting on, neutrally discussing or condemning" exceptions.
The Board will consider this recommendation implemented when Meta updates its guidance to reviewers allowing "shaheed" to benefit from the reporting on, neutrally discussing or condemning exceptions, and shares this revised guidance with the Board.
Our commitment: We are assessing updates to our internal policy guidance to clarify the policy update described in Recommendation #1, including how best to communicate that update to our reviewers.
Considerations: We will update internal guidance alongside our policy updates for Recommendation #1.
To improve the transparency of its designated entities and events list, Meta should explain in more detail the procedure by which entities and events are designated. It should also publish aggregated information on its designation list on a regular basis, including the total number of entities within each tier of its list, as well as how many were added and removed from each tier in the past year.
The Board will consider this implemented when Meta publishes the requested information in its Transparency Center.
Our commitment: We will update our Transparency Center to provide a more detailed explanation of our process for designating and de-designating entities and events. For the reasons discussed below, we will not be able to share aggregate information on our designation list, but we continue to evaluate ways to improve transparency.
Considerations: We recognize the importance of transparency around our approach to content discussing Dangerous Organizations and Individuals. In response to the Oversight Board’s previous recommendations related to this policy, we have updated our Community Standards with a number of definitions and examples. In recent years, we have also undertaken several policy development processes related to our approach to Dangerous Organizations and Individuals. These efforts have led to updates to our approach to content that glorifies, supports, or represents a DOI, and to a new, consistent framework for assessing delisting. In an effort to explain recent changes to our DOI Policy, we also share more about this updated approach to delisting designated entities and organizations, alongside other updates, on a page recently published in our Transparency Center. As part of our continued work to improve transparency regarding this policy, we will provide greater detail in our Transparency Center about the procedure by which entities and events are designated.
We cannot commit to sharing aggregated information on our list of designated entities and individuals on a regular basis at this time. Sharing these metrics without the context of the full list and process has the potential to be misinterpreted and would provide data without meaning.
We will provide an update on the status of this work in future reports on the Board.
To ensure that the Dangerous Organizations and Individuals entity list is up to date and does not include organizations, individuals and events that no longer meet Meta's definition for designation, the company should introduce a clear and effective process for regularly auditing designations and removing those no longer satisfying published criteria.
The Board will consider this implemented when Meta has created such an audit process and explains the process in its Transparency Center.
Our commitment: We are finalizing implementation of a process for auditing and removing designated individuals and entities when they no longer meet certain criteria as part of our DOI policy. We will share more details about this process in our Transparency Center.
Considerations: In March 2023, we initiated policy development on our policy for removing organizations and individuals from our list of designated organizations and individuals. This work resulted from a number of recommendations, including one from an Israel-Palestine Human Rights Impact Assessment that was conducted in relation to an Oversight Board recommendation from the Al Jazeera decision in September 2021. The policy development process was informed by external stakeholder engagements with global experts and conversations with internal teams with expertise in this area. It also involved research on the topic, assessment of operational feasibility of implementing a delisting process, and human rights considerations. We held a Policy Forum to discuss our policy approach in December 2023.
In January 2024, following this policy development, we provided an update in our Transparency Center detailing recent changes to our DOI Policy, including changes to our policy for removing dangerous organizations and individuals from our DOI list. This policy update ensures we have a process that covers all DOI categories and is triggered by an entity's demonstrated behavioral change. To be considered for delisting, an entity must not be designated by the U.S. government as a Specially Designated Narcotics Trafficking Kingpin (SDNTK), Foreign Terrorist Organization (FTO), or Specially Designated Global Terrorist (SDGT); must no longer be involved in violence or hate; and must not serve as a symbol of violence and hate or be used to incite further violence or spread hateful propaganda.
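As an illustration of how these delisting criteria combine, here is a minimal, hypothetical sketch; the field names are assumptions for illustration, and the actual review is a human-led policy process rather than an automated check.

```python
# Hypothetical sketch of the delisting eligibility criteria described above (illustrative only).
US_GOVERNMENT_DESIGNATIONS = {"SDNTK", "FTO", "SDGT"}

def eligible_for_delisting(entity: dict) -> bool:
    """An entity may be considered for delisting only if it holds no relevant US government
    designation, is no longer involved in violence or hate, and does not serve as a symbol
    used to incite violence or spread hateful propaganda. Missing data defaults to
    'not eligible' as a conservative assumption."""
    if US_GOVERNMENT_DESIGNATIONS & set(entity.get("us_designations", [])):
        return False
    if entity.get("involved_in_violence_or_hate", True):
        return False
    if entity.get("symbol_of_violence_or_hate", True):
        return False
    return True
```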
As a result of the Policy Forum, we also adopted the recommended approach to audit our existing DOI lists to determine whether certain entities and individuals no longer meet the criteria for designation. Given the complexity of this work, we are in the process of finalizing implementation. We are also finalizing next steps for sharing details about this work externally, and will share an update in upcoming reports to the Oversight Board once it is complete.
To improve transparency of Meta's enforcement, including regional differences among markets and languages, Meta should explain the methods it uses to assess the accuracy of human review and the performance of automated systems in the enforcement of its Dangerous Organizations and Individuals policy. It should also periodically share the outcome of performance assessments of classifiers used in enforcement of the same policy, providing results in a way that allows these assessments to be compared across languages and/or regions.
The Board will consider this implemented when Meta includes this information in its Transparency Center and in the Community Standards Enforcement Reports.
Our commitment: We explain further details about our auditing process, including methods for assessing the accuracy of human review and the performance of automated systems for our DOI policy, in our Considerations below. At this time, we do not expect to be able to share details about performance assessments broken down by language or region.
Considerations: As part of our review process, we conduct audits to assess the accuracy of our content moderation decisions. For human reviewers, we have a quality assurance team audit performance, which includes an additional layer of review by Meta’s Global Operations team. The outcomes of these audits can inform areas where we may make improvements to our policies and processes. This is coupled with regular communication with reviewers from both Operations and Policy teams to provide guidance, clarification, and support in decision making on specific types of content.
Previously, we assessed accuracy rates across the entire DOI policy area. Recently, we have introduced metrics to assess accuracy for different violation types within our DOI Policy as well, in order to understand, with more granularity, the accuracy of decisions made on potentially violating content. This allows us to regularly audit both automated and human-reviewed decisions at a deeper level than the broad policy area.
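To illustrate what accuracy measured per violation type (rather than across the whole policy area) might look like, here is a minimal, hypothetical sketch; the audit record format is an assumption for illustration.

```python
# Hypothetical per-violation-type accuracy calculation over audit records (illustrative only).
from collections import defaultdict

def accuracy_by_violation_type(audit_records: list[dict]) -> dict[str, float]:
    """Each record is assumed to hold the original decision, the auditor's decision,
    and a violation type (e.g., 'glorification', 'support', 'representation')."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for record in audit_records:
        vtype = record["violation_type"]
        totals[vtype] += 1
        if record["original_decision"] == record["audit_decision"]:
            correct[vtype] += 1
    return {vtype: correct[vtype] / totals[vtype] for vtype in totals}
```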
We intend to continue conducting accuracy assessments for classifiers used in enforcement of this policy; however, due to a number of limitations, we do not intend to collect and share this data. We do not expect to share more details on this recommendation and consider it complete.
To inform stakeholders, Meta should provide explanations in clear language on how classifiers are used to generate predictions of policy violations. Meta should also explain how it sets thresholds for either taking no action, lining content up for human review or removing content by describing the processes through which these thresholds are set. This information should be provided in the company's Transparency Center.
The Board will consider this implemented when Meta publishes the requested information in its Transparency Center.
Our commitment: Our Transparency Center includes clear and comprehensive explanations of how classifiers are used to generate predictions of policy violations, how content gets prioritized for review, and other information related to how we action content. We will continue to consider new opportunities to update our Transparency Center pages with additional information related to these technologies and review thresholds.
Considerations: Meta leverages classifiers to proactively detect violating content on our platforms. These classifiers are trained to check whether content violates the Community Standards. In cases where it is more difficult to detect a violation, people step in to conduct further review. When determining which content our human review teams should review first, we consider three main factors: severity, virality, and the likelihood of a violation.
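The following is a minimal sketch of the kind of routing and prioritization logic described above; the thresholds, weights, and score ranges are assumptions for illustration, not Meta's actual values.

```python
# Hypothetical routing and review-prioritization sketch (illustrative only).
def route(violation_score: float, remove_threshold: float = 0.95, review_threshold: float = 0.60) -> str:
    """High-confidence predictions are actioned automatically; uncertain ones go to people."""
    if violation_score >= remove_threshold:
        return "remove"
    if violation_score >= review_threshold:
        return "human_review"
    return "no_action"

def review_priority(item: dict) -> float:
    """Rank queued items by severity, virality, and likelihood of a violation
    (each assumed normalized to 0-1); the weights are illustrative."""
    return 0.5 * item["severity"] + 0.3 * item["virality"] + 0.2 * item["violation_likelihood"]

# Example: items routed to human review, most pressing first.
queued = [
    {"id": "a", "severity": 0.9, "virality": 0.2, "violation_likelihood": 0.7},
    {"id": "b", "severity": 0.4, "virality": 0.8, "violation_likelihood": 0.9},
]
review_queue = sorted(queued, key=review_priority, reverse=True)
```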
Many of our machine learning (ML) classifiers are automatically reassessed for accuracy after each human review. The content labeling decisions taken by human reviewers are used to train and refine our technology. As a part of this process, the review teams manually label the policy guiding their decision, i.e., they mark the policy that the content, account, or behavior violates. This helps to improve the quality of our artificial intelligence algorithms and our lists of known policy-violating content used by our matching technology.
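As a rough illustration of this feedback loop, the sketch below records a reviewer's decision together with the policy it was made under, and turns accumulated decisions into labels a retraining job could consume; the names and structures are assumptions, not Meta's internal systems.

```python
# Hypothetical sketch of collecting human review decisions as classifier training labels.
from dataclasses import dataclass

@dataclass
class ReviewLabel:
    content_id: str
    violates: bool
    policy: str | None = None   # e.g., "Dangerous Organizations and Individuals"

training_labels: list[ReviewLabel] = []

def record_human_review(content_id: str, violates: bool, policy: str | None = None) -> None:
    """Store the reviewer's decision and the policy cited for it."""
    training_labels.append(ReviewLabel(content_id, violates, policy))

def build_training_set(labels: list[ReviewLabel]) -> list[tuple[str, bool]]:
    """Turn accumulated decisions into (content_id, label) pairs that a retraining
    job could join with content features."""
    return [(label.content_id, label.violates) for label in labels]
```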
Classifier precision fluctuates across policy areas, particularly where there is likely to be a greater volume of nuanced or borderline non-violating content that is at risk of being mistakenly removed by automation. We work hard to prevent these cases and to remediate actions when they occur.
In addition, our enqueueing is intentionally optimized to help improve the classifiers: we deliberately ask humans to review content on which our ML is not performing as well.
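A minimal sketch of that idea, in the spirit of uncertainty sampling: prefer sending content to human review where the classifier is least certain, since those decisions are most informative for improving the model. The score format and selection rule are illustrative assumptions.

```python
# Hypothetical uncertainty-based enqueueing sketch (illustrative only).
def uncertainty(violation_score: float) -> float:
    """Scores near 0.5 (the model is unsure either way) get the highest uncertainty."""
    return 1.0 - abs(violation_score - 0.5) * 2.0

def enqueue_for_review(scored_items: list[dict], budget: int) -> list[dict]:
    """Pick the `budget` items the classifier is least confident about."""
    return sorted(scored_items, key=lambda item: uncertainty(item["score"]), reverse=True)[:budget]
```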
Beyond what we’ve described above, we take other actions as part of our overall Remove, Reduce, Inform framework that we’ve shared previously – e.g. downranking content in news feed, providing additional context, etc.
Over time, we will continue to ensure our Transparency Center accurately reflects the approach we take to integrate machine learning into content moderation efforts.