JUN 12, 2023
2022-013-FB-UA
Today, the Oversight Board selected a case appealed by a Facebook user involving a cartoon of Iran's Supreme Leader Ayatollah Ali Khamenei shared in a public group. The image shows Ayatollah Khamenei's beard forming into a fist grasping a woman in a hijab and is accompanied by a caption calling for "death to" the government and its leader, Khamenei.
Upon initial review, Meta took down this content for violating our policy on Violence and Incitement, as laid out in the Facebook Community Standards. However, upon additional review we decided that although the post violates our policy on Violence and Incitement, the newsworthiness allowance applies. Meta subsequently reinstated the content.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to remove the content from the platform for violating our Violence and Incitement Policy, deeming the newsworthiness allowance unnecessary. Meta previously reinstated this content so no further action will be taken on it.
In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly. For more information, please see our Newsroom post about how we implement the board’s decisions.
After conducting a review of the recommendations provided by the board in addition to their decision, we will update this page.
We appreciate the Oversight Board’s input on this case and the nuanced issue of moderating content that uses the phrase “marg bar Khamenei” (Death to Khamenei). While we’ve always understood that calls for death of heads of state could be rhetorical in certain contexts, given our limited ability to know someone’s intent, it’s difficult for companies like ours to determine when and where to allow these types of statements on our platforms. We accept the Board’s guidance that in Iran this slogan is an integral part of political speech during the ongoing protests, and that it alone is unlikely to pose risk of physical harm.
That’s why, effective today, we are implementing the Board’s recommendation to allow the phrase “marg bar Khamenei” in the context of the ongoing protests in Iran.
According to the Oversight Board's bylaws, Meta has 60 days to respond to the board's recommendations. Recognizing the ongoing protests in Iran and the importance of speech in this context, we are responding to this recommendation ahead of the usual 60-day response deadline in order to provide transparency about the changes we are implementing.
Our responses to the board’s remaining recommendations are due March 10, 2023. We will provide detailed responses to the recommendations at that time.
Meta's Community Standards should accurately reflect its policies. To better inform users of the types of statements that are prohibited, Meta should amend the Violence and Incitement Community Standard to (i) explain that rhetorical threats such as "death to X" statements are generally permitted, except when the target of the threat is a high-risk person; (ii) include an illustrative list of high-risk persons, explaining that they may include heads of state; (iii) provide criteria for when threatening statements directed at heads of state are permitted to protect clearly rhetorical political speech in protest contexts that does not incite to violence, taking language and context into account, in accordance with the principles outlined in this decision. The Board will consider this recommendation implemented when the public-facing language of the Violence and Incitement Community Standard reflects the proposed change, and when Meta shares internal guidelines with the Board that are consistent with the public-facing policy.
Our commitment: We are committed to exploring ways to further clarify our policies and will assess the feasibility of introducing the proposed changes and clarifications.
Considerations: To provide greater transparency to users and enhance our policies on political expression, we are exploring the following approaches:
1. Consolidating our policy related to calls for death against people, which currently is split between our Violence and Incitement and Bullying and Harassment policies. This aims to simplify the policy and clarify to users when threats like “death to X” are prohibited or allowed under our policies;
2. Evaluating the policy on high-risk persons to ensure it is both equitable and transparent; and
3. Exploring policy nuance that strikes a better balance between violent speech and political expression and striving to apply that to rhetorical political speech in protest contexts.
We are still in the early stages of the policy development process, and we will provide an update on the status of these assessments in a future Quarterly Update.
Meta should err on the side of issuing scaled allowances where (i) this is not likely to lead to violence; (ii) when potentially violating content is used in protest contexts; and (iii) where public interest is high. Meta should ensure that their internal processes to identify and review content trends around protests that may require context-specific guidance to mitigate harm to freedom of expression, such as allowances or exemptions, are effective. The Board will consider this recommendation implemented when Meta shares the internal process with the Board and demonstrates through sharing data with the Board that it has minimized incorrect removals of protest slogans.
Our commitment: We aim to scale allowances in instances where content is not likely to lead to violence, when potentially violating content is used in protest contexts, and where public interest is high. However, we are continuing to refine what data we can commit to sharing.
Considerations: Our approach to newsworthy content takes context into account when weighing the public interest value of the content against the risk of harm. One of the examples in our Transparency Center is a newsworthy allowance we granted in connection with protests in Colombia. The protest context is directly relevant to the content’s public interest value, while the likelihood content may contribute to a risk of violence relates to the risk of harm.
To identify and track protest-related content trends that may pose on-platform risk, our teams deploy a wide range of detection initiatives, including automation tools like content classifiers and pipelines. Human reviewers with language and regional expertise work with policy subject matter experts to assess this content, take appropriate action in accordance with our Community Standards, and identify areas for further mitigation or remediation. We also work closely with local teams to identify and protect at-risk users, including civic actors, who may be more likely to be targeted with abuse during protests or whose content may warrant additional consideration for policy allowances or exemptions.
Where possible, we also consider scaling these allowances so that they apply as broadly as appropriate under the circumstances. Criteria for scaling weigh the potential for harm against the public interest value of the content. With these considerations top of mind, we may apply a newsworthiness allowance at scale so that it covers content more broadly across Facebook in certain instances, as illustrated in the sketch below. As explained in our response to recommendation #6, below, we will add further details about our approach to "narrow" and "scaled" newsworthy allowances to our Transparency Center.
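To make that weighing concrete, the following is a minimal, hypothetical sketch of how the considerations described above (public interest value, protest context, and likelihood of violence) could be expressed as a single check. The score names, thresholds, and function are illustrative assumptions for this page, not Meta's actual systems or policy values.

```python
from dataclasses import dataclass

@dataclass
class AllowanceCandidate:
    """Hypothetical inputs for deciding whether to scale a newsworthiness allowance."""
    public_interest_score: float   # 0.0-1.0, e.g. tied to an ongoing protest movement
    violence_risk_score: float     # 0.0-1.0, estimated likelihood of offline harm
    in_protest_context: bool

def should_scale_allowance(candidate: AllowanceCandidate,
                           interest_threshold: float = 0.7,
                           risk_threshold: float = 0.2) -> bool:
    """Illustrative rule: scale an allowance only when public interest is high,
    the content appears in a protest context, and the risk of harm is low.
    The thresholds are placeholders, not real policy values."""
    return (candidate.in_protest_context
            and candidate.public_interest_score >= interest_threshold
            and candidate.violence_risk_score <= risk_threshold)
```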
Pending changes to the Violence and Incitement policy, Meta should issue guidance to its reviewers that "marg bar Khamenei" statements in the context of protests in Iran do not violate the Violence and Incitement Community Standard. Meta should reverse any strikes and feature limits for wrongfully removed content that used the "marg bar Khamenei" slogan. The Board will consider this recommendation implemented when Meta discloses data on the volume of content restored and number of accounts impacted.
Our commitment: As stated in our January 23rd response, we have updated our guidance to allow the phrase “marg bar Khamenei” in the context of ongoing protests in Iran. We have also run sweeps to identify and reverse strikes made on the basis of this type of content, and will continue to do so where possible.
Considerations: We appreciate the Oversight Board’s input on this case and the nuanced issue of moderating content that uses the phrase “marg bar Khamenei” (Death to Khamenei). While we’ve always understood that calls for death of heads of state could be rhetorical in certain contexts, given our limited ability to know someone’s intent, it’s difficult for companies like ours to determine when and where to allow these types of statements on our platforms. We accept the Board’s guidance that in Iran this slogan is an integral part of political speech during the ongoing protests, and that it alone is unlikely to pose risk of physical harm.
That’s why, on January 23, 2023, we fully implemented the Board’s recommendation to allow the phrase “marg bar Khamenei” in the context of the ongoing protests in Iran. We have also run sweeps to identify previous enforcement actions and reverse strikes made on the basis of this type of content and will continue to pursue further reversals where they are applicable and feasible. At this time, we are still assessing capacity for that work and determining which metrics we will be able to share. We will provide an update on our progress in a future Quarterly Update.
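For illustration only, a sweep of this kind can be thought of as a query over past enforcement actions followed by a reversal step. The sketch below assumes a simplified, hypothetical record format; it is not Meta's internal enforcement data model.

```python
def sweep_and_reverse(enforcement_actions, phrase="marg bar khamenei"):
    """Identify strikes applied to content containing the now-allowed phrase
    and mark them as reversed. Each action is assumed to be a dict with
    'action_type', 'matched_text', 'account_id' and 'reversed' keys."""
    affected_accounts = set()
    for action in enforcement_actions:
        if (action["action_type"] == "strike"
                and phrase in action["matched_text"].lower()
                and not action["reversed"]):
            action["reversed"] = True          # reverse the strike
            affected_accounts.add(action["account_id"])
    return affected_accounts

# Example: one qualifying strike is reversed and its account counted.
actions = [{"action_type": "strike", "matched_text": "Marg bar Khamenei",
            "account_id": 42, "reversed": False}]
print(sweep_and_reverse(actions))  # {42}
```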
Meta should revise the indicators that it uses to rank appeals in its review queues and to automatically close appeals without review. The appeals prioritization formula should include, as it does for the cross-check ranker, the factors of topic sensitivity and false-positive probability. The Board will consider this implemented when Meta shares with the Board their appeals prioritization formula and data that shows that it is ensuring review of appeals against the incorrect removal of political expression in protest contexts.
Our commitment: Our internal optimization and appeals experience teams have launched the first iteration of a solution for ranking and automating content takedown appeals. This work will be implemented in a phased approach, beginning with auto-restoring content takedown appeals where we have high confidence, per a designated score, that an enforcement error has occurred. Content that receives a lower score will be reviewed in order of its ranking. We will continue to provide updates on the rollout in future Quarterly Updates.
Considerations: We continue working to understand the tradeoffs of prioritizing certain appeals over others. We have further developed this work since our commitments in the Armenian people and the Armenian Genocide case and the Post Requesting Advice on Pharmaceutical Drugs case. We have now identified integrity metrics that allow us to confidently estimate the likelihood of false positives and quickly restore the highest-ranked content where it has been erroneously removed.
We will initially rank content takedown appeals against various metrics and restore content where we have high confidence that the takedown was incorrect. The appeals ranking process takes into account a variety of metrics; as this work evolves, we will continue to refine that process.
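As a rough illustration of the triage described above, the sketch below auto-restores appeals whose error-likelihood score clears a confidence threshold and sends the rest to human review in ranked order. The score field, threshold value, and function name are assumptions made for this example, not the designated score or formula we use internally.

```python
AUTO_RESTORE_THRESHOLD = 0.95  # placeholder confidence value, not a real threshold

def triage_appeals(appeals):
    """Hypothetical triage of content takedown appeals. Each appeal is assumed
    to be a dict with 'id' and 'error_likelihood', the estimated probability
    that the original removal was an enforcement error."""
    auto_restored, needs_review = [], []
    for appeal in appeals:
        if appeal["error_likelihood"] >= AUTO_RESTORE_THRESHOLD:
            auto_restored.append(appeal["id"])   # restore without human review
        else:
            needs_review.append(appeal)
    # Lower-confidence appeals are reviewed in order of likely error, highest first.
    needs_review.sort(key=lambda a: a["error_likelihood"], reverse=True)
    return auto_restored, [a["id"] for a in needs_review]
```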
We aim to further refine the appeal process throughout 2023. We will continue to share updates on implementation and expansion of our efforts in future Quarterly Updates, including whether it is feasible to share confidential data with the board to verify implementation.
Meta should announce all scaled allowances that it issues, their duration and notice of their expiry, in order to give people who use its platforms notice of policy changes allowing certain expression, alongside comprehensive data on the number of "scaled" and "narrow" allowances granted. The Board will consider this recommendation implemented when Meta demonstrates regular and comprehensive disclosures to the Board.
Our commitment: We will provide data on the total number of scaled allowances on a yearly basis.
Considerations: Due in part to previous recommendations from the Oversight Board, we collect data and information to track the number of newsworthiness allowances we issue. As described in recommendation #6, we will provide further clarity about the types of newsworthy allowances, which include both “narrow” and “scaled” approaches. We will also share the total number of scaled allowances with the board. We continue to mature our efforts in this regard and have committed to improving the scope and accuracy of our newsworthiness allowance tracking across regions and violation areas. As part of our commitment for this recommendation, we will work to provide updates on the total number of scaled allowances on a yearly basis. At this time, we do not plan to share the duration and notice of an allowance’s expiration; however, this is something that we may explore collecting and sharing in the future.
The public explanation of the newsworthiness allowance in the Transparency Centre should (i) explain that newsworthiness allowances can either be scaled or narrow; and (ii) provide the criteria that Meta uses to determine when to scale newsworthiness allowances. The Board will consider this recommendation to be implemented when Meta updates the publicly available explanation of newsworthiness and issues Transparency Reports that include sufficiently detailed information about all applied allowances.
Our commitment: We will share additional details about scaled as well as narrow approaches in our Approach to Newsworthiness page in the Transparency Center.
Considerations: We will add further details about our approach to "narrow" and "scaled" newsworthy allowances to our Transparency Center. Currently, a "narrow" newsworthy allowance applies to a single piece of content, while a "scaled" newsworthy allowance applies more broadly. For instance, if we see a phrase, such as a protest slogan, being used in a way that meets our newsworthiness criteria, we may allow it at scale. We recognize that it may be useful to include these details in our existing article and plan to make these changes before the end of the year.
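To illustrate the distinction, the sketch below represents the two kinds of allowances as a single record type whose fields differ by scope. The structure and field names are assumptions made for this example, not our internal representation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NewsworthinessAllowance:
    """Illustrative record for a newsworthiness allowance."""
    scope: str                                   # "narrow" or "scaled"
    rationale: str
    content_ids: List[str] = field(default_factory=list)  # set for narrow allowances
    matched_phrase: Optional[str] = None                   # set for scaled allowances
    region: Optional[str] = None

narrow = NewsworthinessAllowance(
    scope="narrow",
    rationale="Single newsworthy post documenting a protest",
    content_ids=["post_123"],
)
scaled = NewsworthinessAllowance(
    scope="scaled",
    rationale="Protest slogan allowed in the context of ongoing protests",
    matched_phrase="marg bar khamenei",
    region="IR",
)
```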
Meta should provide a public explanation of the automatic prioritization and closure of appeals, including the criteria for both prioritization and closure. The Board will consider this recommendation implemented when Meta publishes this information in the Transparency Centre.
Our commitment: Our work on the automatic prioritization and closure of appeals is newly developed and changing quickly. Given the nature of this work, we believe that providing ongoing updates on our implementation efforts will suffice, as the criteria involved are still evolving. Building, testing and strengthening automatic prioritization and closure of appeals remains our priority, and we will continue to report on implementation progress as the criteria mature.
Considerations: As shared in our response to recommendation #4, we will further refine the appeal process throughout 2023. We plan to expand this solution to reporter appeals for simple objects, complex objects and demotions appeals in subsequent rollouts.
Our content review prioritization processes are publicly available on the Transparency Center, where we explain that we primarily consider severity, virality and likelihood of violation in determining which content our human review teams should review first. These criteria are embedded in our automatic prioritization framework.
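As a simplified illustration, the three publicly described signals can be combined into a single priority score used to order a review queue. The weights, scales, and function below are assumptions for this sketch, not the actual prioritization formula.

```python
# Illustrative weights only; the real framework and its inputs are not public.
WEIGHTS = {"severity": 0.5, "virality": 0.3, "violation_likelihood": 0.2}

def review_priority(severity: float, virality: float, violation_likelihood: float) -> float:
    """Combine the three signals (each assumed to be scaled to 0.0-1.0)
    into a single score for ordering the human review queue."""
    return (WEIGHTS["severity"] * severity
            + WEIGHTS["virality"] * virality
            + WEIGHTS["violation_likelihood"] * violation_likelihood)

# Example: highly viral content outranks otherwise similar low-virality content.
queue = sorted(
    [("post_a", review_priority(0.4, 0.9, 0.8)),
     ("post_b", review_priority(0.4, 0.2, 0.8))],
    key=lambda item: item[1],
    reverse=True,
)
print(queue[0][0])  # post_a
```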
Since Q1 2022, we have undertaken a multi-stage process to identify key drivers of trust in appeals in order to improve their overall effectiveness. In our Q4 2022 Quarterly Update to the Oversight Board, we reported that we have launched the first iteration of a new appeals prioritization system, which ranks appeals based on the potential impact of an enforcement error. Further development of solutions targeted at appeals ranking based on the severity of enforcement decisions and/or a specific policy exception remains a long-term priority for our teams. As this dynamic process continues, we will surface new insights on how to effectively prioritize appeals and the resulting implications for our processes, accuracy and fairness. With this in mind, we will continue to iterate on our appeals ranking processes and embedded criteria through rigorous testing and prioritization. At this time, we are not planning to publicly disclose the criteria for prioritization and closure, as doing so would be premature. However, once these new processes have reached maturity, we will reassess the best way to increase transparency around the new system.