
Oversight Board recommendations

UPDATED SEP 29, 2025
In addition to binding decisions on content, the Oversight Board can issue recommendations on Meta’s content policies and how we enforce them on Facebook, Instagram, and Threads.
Meta is committed to considering these recommendations as important inputs to our internal policy processes and to publicly responding to each recommendation within 60 days. Unlike the board’s decisions on individual cases, recommendations are not binding on Meta.
Below is a table of the recommendations Meta has received from the Oversight Board so far. For each recommendation, we outline the related case and recommendation number, our commitment level, and the implementation status. All additional recommendation updates and details can be found in Meta’s Quarterly Update on the Oversight Board.
We categorize our responses to the board’s recommendations into the following areas:
  • Implementing in Full: We agree with the recommendation and have implemented it, or will implement it, in full.
  • Implementing in Part: We agree with the overall aim of the recommendation and have implemented, or will implement, work related to the board’s guidance.
  • Assessing Feasibility: We are assessing the feasibility and impact of the recommendation and will provide further updates in the future.
  • Work Meta Already Does: We have already addressed the board’s recommendation through existing work, so no update is required.
  • No Further Action: We will not implement the recommendation, either due to a lack of feasibility or disagreement about how to reach the desired outcome.
The current statuses of our responses to the board’s recommendations include the following:
  • Complete: We have completed full or partial implementation in line with our response to the board’s recommendation, and will have no further updates on the recommendation in the future.
  • In progress: We are continuing to make progress on our response to the board’s recommendation, and will have further updates on the recommendation in the future.
  • No further updates: We will not implement the recommendation or have already addressed the recommendation through an action that we already do, and will have no further updates on the recommendation in the future.

Board Recommendations

Case Name and Rec Number
Recommendation
Date of 60-Day Response
Latest Action
Status
Videos of Teachers Hitting Children Bundle #1
To allow users to condemn, report and raise awareness of non-sexual child abuse, Meta should include an exception in its public-facing Child Sexual Exploitation, Abuse and Nudity Community Standard allowing images and videos of non-sexual child abuse perpetrated by adults, when shared with this intent. Content should be allowed with a "mark as disturbing" warning screen, with visibility restricted to users aged 18 and older. In these cases, children must neither be directly identifiable (by name or image), nor functionally identifiable (when contextual clues are likely to lead to the identification of the individual). This exception should be applied on escalation only.
29 Sep 2025
Assessing Feasibility
In progress
Videos of Teachers Hitting Children Bundle #2
To ensure proportionate and consistent enforcement, Meta should not apply strikes to accounts whose non-sexual child abuse content it removes on escalation where there are clear indicators of the user’s intent to condemn, report or raise awareness.
29 Sep 2025
Implementing in Full
In progress
Alleged Audio Call to Rig Elections in Iraqi Kurdistan #1
To ensure “High Risk” labels are applied consistently to all identical or similarly manipulated content, Meta should apply the relevant label to all content with the same manipulated media on its platforms, including all posts containing the manipulated audio in this case.
22 Aug 2025
Implementing in Part
In progress
Alleged Audio Call to Rig Elections in Iraqi Kurdistan #2
As part of its electoral integrity efforts and to ensure users are informed of manipulated media on Meta’s platforms in the lead-up to an election, Meta should ensure that the informative labels for manipulated media on Facebook, Instagram and Threads are displayed in the same language that the user has selected for the platform. The Board will consider this recommendation implemented when Meta provides information in its Transparency Center about the languages in which manipulated media labels are available to users on its platforms.
22 Aug 2025
Implementing in Part
In progress
Symbols Adopted by Dangerous Organizations #1
To provide more clarity to users, Meta should make public the internal definition of “references” and define its subcategories under the Dangerous Organizations and Individuals Community Standard.
11 Aug 2025
Implementing in Full
In progress
Symbols Adopted by Dangerous Organizations #2
To ensure that the list of designated symbols under the Dangerous Organizations and Individuals policy does not include symbols that no longer meet Meta’s criteria for inclusion, Meta should introduce a clear, evidence-based process for determining how symbols are added and which group each designated symbol is assigned to. Meta should also periodically audit all designated symbols, ensuring the list covers all relevant symbols globally and removing those that no longer satisfy the published criteria, as outlined in section 5.2 of this decision.
11 Aug 2025
Assessing Feasibility
In progress
Symbols Adopted by Dangerous Organizations #3
To address potential false positives involving designated symbols under the Dangerous Organizations and Individuals Community Standard, Meta should develop a system to automatically identify and flag instances where designated symbols lead to “spikes” suggesting that a large volume of non-violating content is being removed, similar to the system the company created in response to the Board’s recommendation no. 2 in Colombian Police Cartoon. This system will allow Meta to analyze “spikes” involving designated symbols and inform the company’s future actions, including amending its practices to be more accurate and precise.
11 Aug 2025
Assessing Feasibility
In progress
Symbols Adopted by Dangerous Organizations #4
To provide more transparency to users, Meta should publish a clear explanation on how it creates and enforces its designated symbols list under the Dangerous Organizations and Individuals Community Standard. This explanation should include the processes and criteria for designating the symbols and how the company enforces against different symbols, including information on strikes and any other enforcement actions taken against designated symbols.
11 Aug 2025
Assessing Feasibility
In progress
AI-Manipulated Video Promoting Gambling Rec #1
To better combat misleading manipulated celebrity endorsements, Meta should enforce at scale its Fraud, Scams and Deceptive Practices policy prohibition on content that “attempts to establish a fake persona or to pretend to be a famous person in an attempt to scam or defraud” by providing reviewers with indicators to identify this content. This could include, for example, the presence of media manipulation watermarks and metadata, or clear factors such as video-audio mismatch.
4 Aug 2025
Assessing Feasibility
In progress
Images of Partially Nude Indigenous Women Rec #1
To better protect expression, while respecting the rights of Indigenous Peoples and their members, Meta should make public its Adult Nudity and Sexual Activity policy exception allowing content depicting bare-chested Indigenous women in some circumstances. This exception should be applied on escalation only, and should allow such nudity where it reflects socially accepted custom and belief, and does not misrepresent these practices.
1 Aug 2025
Assessing Feasibility
In progress
Content Targeting Human Rights Defender in Peru Rec #1
To ensure that its Violence and Incitement Community Standard clearly captures how veiled threats can occur across text and imagery, Meta should clarify that threats made out of “coded statements,” even “where the method of violence is not clearly articulated,” are prohibited in written, visual and verbal form.
25 Jul 2025
Assessing Feasibility
In progress
Content Targeting Human Rights Defender in Peru Rec #2
To ensure that potential veiled threats are more accurately assessed, in light of Meta’s incorrect interpretation of this content on escalation, the Board recommends that Meta produce an annual assessment of accuracy for this problem area. This should include a specific focus on false negative rates of detection and removal for threats against human rights defenders, and false positive rates for political speech (e.g., Iran Protest Slogan). As part of this process, Meta should investigate opportunities to improve the accurate detection of high-risk (low-prevalence, high-impact) threats at scale.
25 Jul 2025
Assessing Feasibility
In progress
Gender Identity Debate Videos Rec #1
As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of LGBTQIA+ people, including minors, especially where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity.
20 Jun 2025
Assessing Feasibility
In progress
Gender Identity Debate Videos Rec #2
To ensure Meta’s content policies are framed neutrally and in line with international human rights standards, Meta should remove the term “transgenderism” from the Hateful Conduct policy and corresponding implementation guidance.
20 Jun 2025
Assessing Feasibility
In progress
Gender Identity Debate Videos Rec #3
To reduce the reporting burden on targets of bullying and harassment, Meta should allow users to designate connected accounts that can flag, on their behalf, potential Bullying and Harassment violations that require self-reporting.
20 Jun 2025
Assessing Feasibility
In progress
Gender Identity Debate Videos Rec #4
To ensure there are fewer enforcement errors on Bullying and Harassment violations requiring self-reporting, Meta should ensure that the one report chosen to represent multiple reports on the same content is selected based on the highest likelihood of a match between the reporter and the content’s target. In doing this, Meta should ensure that any technological solutions account for potential adverse impacts on at-risk groups.
20 Jun 2025
Assessing Feasibility
In progress
Posts Displaying South Africa’s Apartheid-Era Flag Rec #1
As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact populations in global majority regions. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity.
20 Jun 2025
Assessing Feasibility
In progress
Posts Displaying South Africa’s Apartheid-Era Flag Rec #2
To improve the clarity of its Dangerous Organizations and Individuals Community Standard, Meta should adopt a single, clear and comprehensive explanation of how its prohibitions and exceptions under this Community Standard apply to designated hateful ideologies.
20 Jun 2025
Implementing in Full
In progress
Posts Displaying South Africa’s Apartheid-Era Flag Rec #3
To improve the clarity of its Dangerous Organizations and Individuals Community Standard, Meta should list apartheid as a standalone designated hateful ideology in the rules.
20 Jun 2025
Assessing Feasibility
In progress
Posts Displaying South Africa’s Apartheid-Era Flag Rec #4
To improve clarity for reviewers of its Dangerous Organizations and Individuals Community Standard, Meta should provide reviewers with more global examples of prohibited glorification, support and representation of hateful ideologies, including examples that do not directly name the listed ideology.
20 Jun 2025
Implementing in Full
In progress
Criticism of EU Migration Policies and Immigrants Rec #1
As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of immigrants, in particular refugees and asylum seekers, with a focus on markets where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity.
20 Jun 2025
Assessing Feasibility
In progress
Criticism of EU Migration Policies and Immigrants Rec #2
Meta should add the term “murzyn” to its Polish market slur list.
20 Jun 2025
Assessing Feasibility
In progress
Criticism of EU Migration Policies and Immigrants Rec #3
When Meta audits its slur lists, it should ensure it carries out broad external engagement with relevant stakeholders. This should include consulting with impacted groups and civil society.
20 Jun 2025
Implementing in Full
In progress
Criticism of EU Migration Policies and Immigrants Rec #4
To reduce instances of content that violates its Hateful Conduct policy, Meta should update its internal guidance to make it clear that Tier 1 attacks (including those based on immigration status) are prohibited, unless it is clear from the content that it refers to a defined subset of less than half of the group. This would reverse the current presumption that content refers to a minority unless it specifically states otherwise.
20 Jun 2025
No Further Action
No further updates
Posts Supporting UK Riots Bundle Rec #1
To improve the clarity of its Violence and Incitement Community Standard, Meta should specify that all high-severity threats of violence against places, as well as against people, are prohibited.
20 Jun 2025
Assessing Feasibility
In progress
Posts Supporting UK Riots Bundle Rec #2
To improve the clarity of its Hateful Conduct Community Standard, Meta should develop clear and robust criteria for what constitutes allegations of serious criminality, based on protected characteristics, in visual form. These criteria should align with and adapt existing standards for text-based hateful conduct, ensuring consistent application across both text and imagery.
20 Jun 2025
Assessing Feasibility
In progress
Posts Supporting UK Riots Bundle Rec #3
To ensure Meta responds effectively and consistently to crises, the company should revise the criteria it has established to initiate the Crisis Policy Protocol. In addition to the current approach, in which the company has a list of conditions that may or may not result in protocol activation, the company should identify core criteria that, when met, are sufficient for the immediate activation of the protocol.
20 Jun 2025
Assessing Feasibility
In progress
Posts Supporting UK Riots Bundle Rec #4
To ensure accurate enforcement of its Violence and Incitement and Hateful Conduct policies in future crises, Meta’s Crisis Policy Protocol should ensure potential policy violations that could lead to likely and imminent violence are flagged for in-house human reviewers. These reviewers should provide time-bound, context-informed guidance for at-scale reviewers, including for image-based violations.
20 Jun 2025
Implementing in Part
In progress
Posts Supporting UK Riots Bundle Rec #5
As the company rolls out Community Notes, it should undertake continuous assessments of the effectiveness of Community Notes as compared to third-party fact-checking. These assessments should focus on the speed, accuracy and volume of notes or labels being affixed in situations where the rapid dissemination of false information creates risks to public safety.
20 Jun 2025
Assessing Feasibility
In progress
Footage of Moscow Terrorist Attack Rec #1
To ensure its Dangerous Organizations and Individuals Community Standard is tailored to advance its aims, Meta should allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not personally identifiable victims when shared in news reporting, condemnation and awareness-raising contexts.
17 Jan 2025
Assessing Feasibility
In progress
Footage of Moscow Terrorist Attack Rec #2
To ensure clarity, Meta should include a rule under the “We remove” section of the Dangerous Organizations and Individuals Community Standard and move the explanation of how Meta treats content depicting designated events out of the policy rationale section and into this section.
17 Jan 2025
Implementing in Full
In progress
Homophobic violence in West Africa Rec #1
Meta should update the Coordinating Harm and Promoting Crime policy’s at-scale prohibition on “outing” to include illustrative examples of “outing-risk groups,” including LGBTQIA+ people in countries where same-sex relations are forbidden and/or such disclosures create significant safety risks.
13 Dec 2024
Implementing in Full
In progress
Homophobic violence in West Africa Rec #2
To improve implementation of its policy, Meta should conduct an assessment of the enforcement accuracy of the at-scale prohibition on exposing the identity or locations of anyone alleged to be a member of an outing-risk group, under the Coordinating Harm and Promoting Crime Community Standard.
13 Dec 2024
Implementing in Part
In progress
Homophobic violence in West Africa Rec #3
To increase the efficiency and accuracy of content review in unsupported languages, Meta should ensure its language detection systems precisely identify content in unsupported languages and provide accurate translations of that content to language-agnostic reviewers.
13 Dec 2024
Implementing in Part
Complete
Homophobic violence in West Africa Rec #4
Meta should ensure that content containing an unsupported language, even if mixed with supported languages, is routed to language-agnostic review. This includes providing reviewers with the option to re-route content containing an unsupported language to language-agnostic review.
13 Dec 2024
Implementing in Part
Complete
Iranian Make-Up Video for a Child Marriage Rec #1
To ensure clarity for users, Meta should modify the Human Exploitation policy to explicitly state that forced marriages include child marriage.
9 Dec 2024
Implementing in Full
In progress
Iranian Make-Up Video for a Child Marriage Rec #2
To ensure clarity for users, Meta should modify the Human Exploitation policy to define child marriage in line with international human rights standards to include marriage and informal unions of children under 18 years of age.
9 Dec 2024
Implementing in Full
In progress
Iranian Make-Up Video for a Child Marriage Rec #3
Meta should provide explicit guidance to human reviewers about child marriage being included in the definition of forced marriages.
9 Dec 2024
Implementing in Full
In progress
Iranian Make-Up Video for a Child Marriage Rec #4
To protect children’s rights and to avoid Meta’s reliance on the spirit of the policy allowance, the company should expand the definition of facilitation in its internal guidelines to include the provision of any type of material aid (which includes "services") to enable exploitation.
9 Dec 2024
Assessing Feasibility
In progress
Criminal Allegations Based on Nationality Rec #1
Meta should amend its Hate Speech Community Standard, adding the section marked as “new” below. The amended Hate Speech Community Standard would then include the following or other substantially similar language to that effect:
22 Nov 2024
Assessing Feasibility
In progress
Criminal Allegations Based on Nationality Rec #2
To improve transparency around Meta’s enforcement, Meta should publicly share the results of the internal audits it conducts to assess the accuracy of human review and the performance of automated systems in enforcing its Hate Speech policy. It should provide the results in a way that allows these assessments to be compared across languages and/or regions.
22 Nov 2024
Implementing in Part
In progress
Pakistan Political Candidate Accused of Blasphemy Rec #1
To ensure safety for targets of blasphemy accusations, Meta should update the Coordinating Harm and Promoting Crime policy to make clear that users must not post accusations of blasphemy against identifiable individuals in locations where blasphemy is a crime and/or there are significant safety risks to persons accused of blasphemy.
18 Nov 2024
Implementing in Full
In progress
Pakistan Political Candidate Accused of Blasphemy Rec #2
To ensure adequate enforcement of the Coordinating Harm and Promoting Crime policy line against blasphemy accusations in locations where such accusations pose an imminent risk of harm to the person accused, Meta should train at-scale reviewers covering such locations and provide them with more specific enforcement guidance to effectively identify and consider nuance and context in posts containing blasphemy allegations.
18 Nov 2024
Implementing in Part
In progress
Statements About the Japanese Prime Minister Rec #1
Meta should update the Violence and Incitement policy to provide a general definition for "high-risk persons," clarifying that high-risk persons encompass people, like political leaders, who may be at higher risk of assassination or other violence, and provide illustrative examples.
8 Nov 2024
Implementing in Part
In progress
Statements About the Japanese Prime Minister Rec #2
Meta should update its internal guidelines for at-scale reviewers about calls for death using the specific phrase "death to" when directed against high-risk persons. This update should allow posts that, in the local context and language, express disdain or disagreement through non-serious and casual ways of threatening violence.
8 Nov 2024
Assessing Feasibility
In progress
Statements About the Japanese Prime Minister Rec #3
Hyperlink to its Bullying and Harassment definition of public figures in the Violence and Incitement policy, and other relevant Community Standards, where such figures are referenced.
8 Nov 2024
Implementing in Full
In progress
From the River to the Sea Rec #1
Meta should ensure that qualified researchers, civil society organizations and journalists, who previously had access to CrowdTangle, are onboarded to the company’s new Content Library within three weeks of submitting their application.
1 Nov 2024
Assessing Feasibility
In progress
From the River to the Sea Rec #2
Meta should ensure the Meta Content Library is a suitable replacement for CrowdTangle, which provides equal or greater functionality and data access.
1 Nov 2024
Implementing in Part
In progress
From the River to the Sea Rec #3
Meta should implement recommendation no. 16 from the BSR Human Rights Due Diligence of Meta’s Impacts in Israel and Palestine report to develop a mechanism to track the prevalence of content attacking people on the basis of specific protected characteristics (for example, antisemitic, Islamophobic and homophobic content).
1 Nov 2024
Assessing Feasibility
In progress
Explicit AI Images Bundle Rec #1
Move the prohibition on “derogatory sexualized photoshop” into the Adult Sexual Exploitation Community Standard.
20 Sep 2024
No Further Action
No further updates
Explicit AI Images Bundle Rec #2
Change the word “derogatory” in the prohibition on “derogatory sexualized photoshop” to “non-consensual.”
20 Sep 2024
Assessing Feasibility
In progress
Explicit AI Images Bundle Rec #3
Replace the word “photoshop” in the prohibition on “derogatory sexualized photoshop” with a more generalized term for manipulated media.
20 Sep 2024
Implementing in Full
In progress
Explicit AI Images Bundle Rec #4
Harmonize its policies on non-consensual content by adding a new signal for lack of consent in the Adult Sexual Exploitation policy: context that content is AI-generated or manipulated. For content with this specific context, the policy should also specify that it need not be “non-commercial or produced in a private setting” to be violating.
20 Sep 2024
Assessing Feasibility
In progress
News Documentary on Child Abuse in Pakistan Rec #1
To better inform users when policy exceptions could be granted, Meta should create a new section within each Community Standard detailing what exceptions and allowances apply. When Meta has specific rationale for not allowing certain exceptions that apply to other policies (such as news reporting or awareness raising), Meta should include that rationale in this section of the Community Standard.
11 Jul 2024
Assessing Feasibility
In progress
Australian Electoral Commissions Rec #1
To ensure users are fully informed about the types of content prohibited under the “Voter and/or census fraud” section of the Coordinating Harm and Promoting Crime Community Standard, Meta should incorporate its definition of the term “illegal voting” into the public-facing language of the policy prohibiting: “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in a condemning, awareness raising, news reporting, or humorous or satirical contexts.”
8 Jul 2024
Implementing in Full
Complete
Sudan’s Rapid Support Forces Video Captive Rec #1
To ensure effective protection of detainees under international humanitarian law, Meta should develop a scalable solution to enforce the Coordinating Harm and Promoting Crime policy that prohibits outing prisoners of war within the context of armed conflict. Meta should set up a protocol for the duration of a conflict that establishes a specialized team to prioritize and proactively identify content outing prisoners of war.
10 Jun 2024
Implementing in Part
In progress
Sudan’s Rapid Support Forces Video Captive Rec #2
To enhance its automated detection and prioritization of content potentially violating the Dangerous Organizations and Individuals policy for human review, Meta should audit the training data used in its video content understanding classifier to evaluate whether it has sufficiently diverse examples of content supporting designated organizations in the context of armed conflicts, including different languages, dialects, regions and conflicts.
10 Jun 2024
Implementing in Full
In progress
Sudan’s Rapid Support Forces Video Captive Rec #3
To provide more clarity to users, Meta should hyperlink the U.S. Foreign Terrorist Organizations and Specially Designated Global Terrorists lists in its Community Standards, where these lists are mentioned.
10 Jun 2024
Implementing in Full
In progress
Greek 2023 Elections Campaign Bundle Rec #1
To provide greater clarity to users, Meta should clarify the scope of the policy exception under the Dangerous Organizations and Individuals Community Standard, which allows for content “reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities” to be shared in the context of “social and political discourse.” Specifically, Meta should clarify how this policy exception relates to election-related content.
24 May 2024
Implementing in Full
In progress
Shaheed PAO Rec #1
Meta should stop presuming that the word "shaheed", when used to refer to a designated individual or unnamed members of designated organizations, is always violating and ineligible for policy exceptions. Content referring to a designated individual as "shaheed" should be removed as an "unclear reference" in only two situations. First, when one or more of three signals of violence are present: a visual depiction of an armament/weapon, a statement of intent or advocacy to use or carry an armament/weapon, or a reference to a designated event. Second, when the content otherwise violates Meta's policies (e.g. for glorification or because the reference to a designated individual remains unclear for reasons other than use of "shaheed"). In either scenario, content should still be eligible for the "reporting on, neutrally discussing and condemning" exceptions.
24 May 2024
Implementing in Full
In progress
Shaheed PAO Rec #2
To clarify the prohibition on "unclear references", Meta should include several examples of violating content, including a post referring to a designated individual as "shaheed" combined with one or more of the three signals of violence specified in recommendation no. 1.
24 May 2024
Implementing in Full
In progress
Shaheed PAO Rec #3
Meta's internal policy guidance should also be updated to make clear that referring to designated individuals as "shaheed" is not violating except when accompanied by signals of violence, and that even when those signals are present, the content may still benefit from the "reporting on, neutrally discussing or condemning" exceptions.
24 May 2024
Implementing in Full
Complete
Shaheed PAO Rec #4
To improve the transparency of its designated entities and events list, Meta should explain in more detail the procedure by which entities and events are designated. It should also publish aggregated information on its designation list on a regular basis, including the total number of entities within each tier of its list, as well as how many were added and removed from each tier in the past year.
24 May 2024
Implementing in Part
Complete
Shaheed PAO Rec #5
To ensure that the Dangerous Organizations and Individuals entity list is up to date and does not include organizations, individuals and events that no longer meet Meta's definition for designation, the company should introduce a clear and effective process for regularly auditing designations and removing those no longer satisfying published criteria.
24 May 2024
Implementing in Full
Complete
Shaheed PAO Rec #6
To improve transparency of Meta's enforcement, including regional differences among markets and languages, Meta should explain the methods it uses to assess the accuracy of human review and the performance of automated systems in the enforcement of its Dangerous Organizations and Individuals policy. It should also periodically share the outcome of performance assessments of classifiers used in enforcement of the same policy, providing results in a way that allows these assessments to be compared across languages and/or regions.
24 May 2024
Implementing in Part
Complete
Shaheed PAO Rec #7
To inform stakeholders, Meta should provide explanations in clear language on how classifiers are used to generate predictions of policy violations. Meta should also explain how it sets thresholds for either taking no action, lining content up for human review or removing content by describing the processes through which these thresholds are set. This information should be provided in the company's Transparency Center.
24 May 2024
Implementing in Full
Complete
Politician's Comments on Demographic Changes
Meta should provide greater detail in the language of its Hate Speech Community Standard about how it distinguishes immigration-related discussions from harmful speech targeting people on the basis of their migratory status. This includes explaining how the company handles content spreading hateful conspiracy theories. This is necessary for users to understand how Meta protects political speech on immigration while addressing the potential offline harms of hateful conspiracy theories.
10 May 2024
Assessing Feasibility
No further updates
Iranian Woman Confronted on Street Rec #1
To ensure respect for users' freedom of expression and assembly in an environment of systematic state repression, Meta should add a policy lever to the Crisis Policy Protocol providing that figurative (or not literal) statements, not intended to, and not likely to, incite violence, do not violate the Violence and Incitement policy line prohibiting threats of violence in relevant contexts. This should include developing criteria for at-scale moderators on how to identify such statements in the relevant context.
6 May 2024
Implementing in Part
Complete
Weapons Post linked to Sudan Conflict Rec #1
To better inform users of what content is prohibited on its platforms, Meta should amend its Violence and Incitement policy to include a definition of “recreational self-defense” and “military training” as exceptions to its rules prohibiting users from providing instructions on making or using weapons, and clarify that it does not allow any self-defense exception for instructions on how to make or use weapons in the context of an armed conflict.
12 Apr 2024
Implementing in Full
In progress
Weapons Post linked to Sudan Conflict Rec #2
To make sure users are able to understand which policies their content was enforced against, Meta should develop tools to rectify mistakes in the user messaging that notifies users about the Community Standard they violated.
12 Apr 2024
Implementing in Part
In progress
Altered Biden Video Rec #1
To address the harms posed by manipulated media, Meta should reconsider the scope of its Manipulated Media policy in three ways to cover: (1) audio and audiovisual content, (2) content showing people doing things they did not do (as well as saying things they did not say), and (3) content regardless of the method of creation or alteration.
5 Apr 2024
Implementing in Full
In progress
Altered Biden Video Rec #2
To ensure its Manipulated Media policy pursues a legitimate aim, Meta must clearly define in a single unified policy the harms it aims to prevent beyond preventing users from being misled, such as preventing interference with the right to vote and to take part in the conduct of public affairs.
5 Apr 2024
Implementing in Full
In progress
Altered Biden Video Rec #3
To ensure the Manipulated Media policy is proportionate, Meta should stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered and may mislead. The label should be attached to the media (such as a label at the bottom of a video) rather than the entire post, and should be applied to all identical instances of that media on the platform.
5 Apr 2024
Implementing in Full
In progress
Holocaust Denial Rec #1
Take technical steps to ensure that it is sufficiently and systematically measuring the accuracy of its enforcement of Holocaust denial content, to include gathering more granular details.
21 Mar 2024
Implementing in Part
In progress
Holocaust Denial Rec #2
Publicly confirm whether it has fully ended all COVID-19 automation policies put in place during the pandemic.
21 Mar 2024
Implementing in Full
In progress
Polish Post Targeting Trans People Rec #1
Meta's Suicide and Self-injury policy page should clarify that the policy forbids content that promotes or encourages suicide aimed at an identifiable group of people.
15 Mar 2024
Implementing in Full
Complete
Polish Post Targeting Trans People Rec #2
Meta's internal guidance for at-scale reviewers should be modified to ensure that flag-based visual depictions of gender identity that do not contain a human figure are understood as representations of a group defined by the gender identity of its members. This modification would clarify instructions for enforcement of this form of content at scale, whenever it contains a violating attack.
15 Mar 2024
No Further Action
No further updates
Haitian Police Station Video Rec #1
To address the risk of harm, particularly where Meta has no or limited proactive moderation tools, processes, or measures to identify and assess content, Meta should assess the timeliness and effectiveness of its responses to content escalated through the Trusted Partner program.
2 Feb 2024
Implementing in Full
Complete
Fruit Juice Diet Bundle Rec #1
To avoid creating financial incentives for influential users to create harmful content, Meta should restrict extreme and harmful diet-related content in its Content Monetisation Policies. The Board will consider this implemented when Meta's Content Monetisation Policies have been updated to include a definition and examples of what constitutes extreme and harmful diet-related content, in the same way that it defines and explains other restricted categories under the Content Monetisation Policies.
21 Dec 2023
Implementing in Part
Complete
Abortion Content Bundle Rec #1
In order to inform future assessments and recommendations to the Violence and Incitement policy, and enable the Board to undertake its own necessity and proportionality analysis of the trade-offs in policy development, Meta should provide the Board with the data that it uses to evaluate its policy enforcement accuracy. This information should be sufficiently comprehensive to allow the Board to validate Meta’s arguments that the type of enforcement errors in these cases are not a result of any systemic problems with Meta’s enforcement processes. The Board expects Meta to collaborate with it to identify the necessary data (e.g., 500 pieces of content from Facebook and 500 from Instagram in English for US users) and develop the appropriate data sharing arrangements. The Board will consider this implemented when Meta provides the requested data.
3 Nov 2023
Implementing in Part
Complete
Response to the Turkey Earthquake Rec #1
To ensure media organizations can more freely report on topics of public interest, Meta should revise the Hate Speech Community Standard to explicitly protect journalistic reporting on slurs, when such reporting, in particular in electoral contexts, does not create an atmosphere of exclusion and/or intimidation. This exception should be made public, and be separate from the “raising awareness” and “condemning” exceptions. There should be appropriate training for moderators, especially for languages other than English, to ensure respect for journalism, including local media. The reporting exception should make clear to users, in particular those in the media, how such content should be contextualized, and internal guidance for reviewers should be consistent with this. The Board will consider this recommendation implemented when the Community Standards are updated, and internal guidelines for Meta’s human reviewers are updated to reflect these changes.
20 Oct 2023
Implementing in Part
In progress
Response to the Turkey Earthquake Rec #2
To ensure greater clarity of when slur use is permitted, Meta should ensure the Hate Speech Community Standard has clearer explanations of each exception with illustrative examples. Situational examples can be provided in the abstract, to avoid repeating hate speech terms. The Board will consider this implemented when Meta restructures its Hate Speech Community Standard and adds illustrative examples.
20 Oct 2023
Implementing in Part
Complete
Response to the Turkey Earthquake Rec #3
To ensure fewer errors in the enforcement of its Hate Speech policy, Meta should expedite audits of its slur lists in countries with elections in the second half of 2023 and early 2024, with the goal of identifying and removing terms mistakenly added to the company’s slur lists. The Board will consider this implemented when Meta provides an updated list of designated slurs following the audit, and a list of terms de-designated, per market, following the new audits.
20 Oct 2023
Implementing in Part
In progress
Ketamine as a Medical Treatment Rec #1
Meta should clarify the meaning of the "paid partnership" labels in its Transparency Centre and Instagram's Help Centre. That includes explaining the role of business partners in the approval of "paid partnership" labels. The Board will consider this recommendation implemented when Meta's Branded Content policies have been updated to reflect these clarifications.
16 Oct 2023
Implementing in Full
In progress
Ketamine as a Medical Treatment Rec #2
Meta should clarify in the language of the Restricted Goods and Services Community Standard that content that "admits to using or promotes the use of pharmaceutical drugs" is allowed, even where that use may result in a "high," in the context of a "supervised medical setting". Meta should also define what a "supervised medical setting" is and explain under the Restricted Goods and Services Community Standard that medical supervision can be demonstrated by indicators such as a direct mention of a medical diagnosis, a reference to the health service provider's license or to medical staff. The Board will consider this recommendation implemented when Meta's Restricted Goods and Services Community Standard has been updated to reflect these clarifications.
16 Oct 2023
No Further Action
No further updates
Ketamine as a Medical Treatment Rec #3
Meta should improve its review process to ensure that content created as part of a "paid partnership" is properly reviewed against all applicable policies (i.e. Community Standards and Branded Content policies), given that Meta does not currently review all branded content under the Branded Content policies. In particular, Meta should establish a pathway for at-scale content reviewers to route content that potentially violates the Branded Content policies to Meta's specialist teams or automated systems that are trained and able to apply those policies when implicated. The Board will consider this implemented when Meta shares its improved review routing logic, showing how it allows for all relevant platform/content policies to be applied when there is a high likelihood of potential violation of any of the aforementioned policies.
16 Oct 2023
Implementing in Full
Complete
Ketamine as a Medical Treatment Rec #4
Meta should audit the enforcement of policy lines from its Branded Content policies ("we prohibit the promotion of the following [...] 4. Drugs and drug-related products, including illegal or recreational drugs") and Restricted Goods and Services Community Standard ("do not post content that attempts to buy, sell, trade, co-ordinate the trade of, donate, gift or asks for non-medical drugs"). The Board finds that Meta has clear and defensible approaches that impose strong restrictions on the paid promotion of drugs (under its Branded Content policies) and attempts to buy, sell or trade drugs (under its Restricted Goods and Services Community Standard). However, the Board finds some indication that these policies could be inconsistently enforced. To clarify whether this is indeed the case, Meta should engage in an audit of how its Branded Content policies and its Restricted Goods and Services Standard are being enforced with regard to pharmaceutical and non-medical drugs. It should then close any gaps in enforcement. The Board will consider this implemented when Meta has shared the methodology and results of this audit and disclosed how it will close any gaps in enforcement revealed by that audit.
16 Oct 2023
Implementing in Part
In progress
Image of Gender Based Violence Rec #1
To ensure clarity for users, Meta should explain that the term “medical condition,” as used in the Bullying and Harassment Community Standard, includes “serious physical injury.” While the internal guidance explains to content moderators that “medical condition” includes “serious physical injury,” this explanation is not provided to Meta’s users. The Board will consider this recommendation implemented when the public-facing language of the Community Standard is amended to include this clarification.
29 Sep 2023
Implementing in Full
In progress
Image of Gender Based Violence Rec #2
The Board recommends that Meta undertakes a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through praise, justification, celebration or mocking of gender-based violence. The Board understands that Meta is conducting a policy development process which, among other issues, is considering how to address praise of gender-based violence. This recommendation is in support of a more thorough approach to limiting the harms caused by the normalization of gender-based violence.
The Board will consider this recommendation implemented when Meta publishes the findings of this policy development process and updates its Community Standards.
29 Sep 2023
Implementing in Full
In progress
Violence Against Women Case 1 Rec #1 and Violence Against Women Case 2 Rec #1
To allow users to condemn and raise awareness of gender-based violence, Meta should include the exception for allowing content that condemns or raises awareness of gender-based violence in the public language of the Hate Speech policy. The Board will consider this recommendation implemented when the public-facing language of the Hate Speech Community Standard reflects the proposed change.
11 Sep 2023
Implementing in Part
Complete
Violence Against Women Case 1 Rec #2 and Violence Against Women Case 2 Rec #2
To ensure that content condemning and raising awareness of gender-based violence is not removed in error, Meta should update guidance to its at-scale moderators with specific attention to rules around qualification. This is important because the current guidance makes it virtually impossible for moderators to make the correct decisions even when Meta states that the first post should be allowed on the platform. The Board will consider this recommendation implemented when Meta provides the Board with updated internal guidance that shows what indicators it provides to moderators to grant allowances when considering content that may otherwise be removed under the Hate Speech policy.
11 Sep 2023
Implementing in Part
Complete
Violence Against Women Case 1 Rec #3 and Violence Against Women Case 2 Rec #3
To improve the accuracy of decisions made upon secondary review, Meta should assess how its current review routing protocol impacts accuracy. The Board believes Meta would increase accuracy by sending secondary review jobs to different reviewers than those who previously assessed the content. The Board will consider this implemented when Meta publishes a decision, informed by research on the potential impact to accuracy, whether to adjust its secondary review routing.
11 Sep 2023
Implementing in Full
Complete
Violence Against Women Case 1 Rec #4 and Violence Against Women Case 2 Rec #4
To provide greater transparency to users and allow them to understand the consequences of their actions, Meta should update its Transparency Center with information on what penalties are associated with the accumulation of strikes on Instagram. The Board appreciates that Meta has provided additional information about strikes for Facebook users in response to Board recommendations. It believes this should be done for Instagram users as well. The Board will consider this implemented when the Transparency Center contains this information.
11 Sep 2023
Implementing in Part
Complete
Cambodian Prime Minister Rec #1
Meta should clarify that its policy for restricting accounts of public figures applies to contexts in which citizens are under continuing threat of retaliatory violence from their governments. The policy should make it clear that it is not restricted solely to single incidents of civil unrest or violence and that it applies where political expression is pre-emptively suppressed or responded to with violence or threats of violence from the state. The Board will consider this recommendation implemented when Meta's public framework for restricting accounts of public figures is updated to reflect these clarifications.
28 Aug 2023
No Further Action
No further updates
Cambodian Prime Minister Rec #2
Meta should update its newsworthiness allowance policy to state that content that directly incites violence is not eligible for a newsworthiness allowance, subject to existing policy exceptions. The Board will consider this recommendation implemented when Meta publishes an updated policy on newsworthy content explicitly setting out this limitation on the allowance.
28 Aug 2023
Implementing in Full
In progress
Cambodian Prime Minister Rec #3
Meta should immediately suspend the official Facebook Page and Instagram account of Cambodian Prime Minister Hun Sen for a period of at least six months under Meta's policy on restricting accounts of public figures during civil unrest. The Board will consider this recommendation implemented when Meta suspends the accounts and publicly announces that it has done so.
28 Aug 2023
No Further Action
No further updates
Cambodian Prime Minister Rec #4
Meta should update its review prioritization systems to ensure that content from heads of state and senior members of government that potentially violates the Violence and Incitement policy is consistently prioritized for immediate human review. The Board will consider this recommendation implemented when Meta discloses details on the changes to its review-ranking systems and demonstrates how those changes would have ensured review for this and similar content from heads of state and senior members of government.
28 Aug 2023
Implementing in Part
In progress
Cambodian Prime Minister Rec #5
Meta should implement product and/or operational guideline changes that allow more accurate review of long-form video (e.g. use of algorithms for predicting the timestamp of violation, ensuring proportional review time with length of the video, allowing videos to run 1.5 times or 2 times faster). The Board will consider this implemented when Meta shares its new long-form video moderation procedures with the Board, including metrics for showing improvements in review accuracy for long-form videos.
28 Aug 2023
Implementing in Part
In progress
Cambodian Prime Minister Rec #6
In the case of Prime Minister Hun Sen, and in all account-level actions against heads of state and senior members of government, Meta should publicly reveal the extent of the action and the reasoning behind its decision. The Board will consider this recommendation implemented when Meta discloses this information for Hun Sen, and commits to doing so for future enforcements against all heads of state and senior members of government.
28 Aug 2023
Implementing in Part
Complete
Brazilian general’s speech Rec #1
Meta should develop a framework for evaluating the company’s election integrity efforts. This includes creating and sharing metrics for successful election integrity efforts, including those related to Meta’s enforcement of its content policies and the company’s approach to ads. The Board will consider this recommendation implemented when Meta develops this framework (including a description of metrics and goals for those metrics), discloses it in the company’s Transparency Center, starts publishing country-specific reports, and publicly discloses any changes to its general election integrity efforts as a result of this evaluation.
21 Aug 2023
Implementing in Full
Complete
Brazilian general’s speech Rec #2
Meta should clarify in its Transparency Center that, in addition to the Crisis Policy Protocol, the company runs other protocols in its attempt to prevent and address potential risk of harm arising in electoral contexts or other high-risk events. In addition to naming and describing those protocols, the company should also outline their objective, what the points of contact between these different protocols are, and how they differ from each other. The Board will consider this recommendation implemented when Meta publishes the information in its Transparency Center.
21 Aug 2023
Implementing in Part
In progress
Armenian Prisoners of War Rec #1
In line with recommendation no. 14 in the "former President Trump's suspension" case, Meta should commit to preserving and, where appropriate, sharing with competent authorities evidence of atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court, by updating its internal policies to make clear the protocols that it has in place in this regard. The protocol should be attentive to conflict situations. It should explain the criteria, process and safeguards for (1) initiating and terminating preservation, including data retention periods, (2) accepting requests for preservation, and (3) sharing data with competent authorities, including international accountability mechanisms and courts. There must be safeguards for users' rights to due process and privacy in line with international standards and applicable data protection laws. Civil society, academia and other experts in the field should be part of developing this protocol. The Board will consider this recommendation implemented when Meta shares its updated internal documents with the Board.
11 Aug 2023
Implementing in Part
In progress
Armenian Prisoners of War Rec #2
To ensure consistent enforcement, Meta should update the Internal Implementation Standards to provide more specific guidance on applying the newsworthiness allowance to content that identifies or reveals the location of Prisoners of War, consistent with the factors outlined in Section 8 of this decision, to guide both the escalation and assessment of this content for newsworthiness. The Board will consider this recommendation implemented when Meta incorporates this revision and shares the updated guidance with the Board.
11 Aug 2023
Implementing in Part
Complete
Armenian Prisoners of War Rec #3
To provide greater clarity to users, Meta should add to its explanation of the newsworthiness allowance in the Transparency Centre an example of content that revealed the identity or location of Prisoners of War but was left up due to the public interest. The Board will consider this recommendation implemented when Meta updates its newsworthiness page with an example addressing Prisoners of War.
11 Aug 2023
Implementing in Full
In progress
Armenian Prisoners of War Rec #4
Following the development of the protocol on evidence preservation related to atrocity crimes and grave human rights violations, Meta should publicly share this protocol in the Transparency Centre. This should include the criteria for initiating and terminating preservation, data retention periods, as well as the process and safeguards for accepting requests for preservation and for sharing data with competent authorities, including international accountability mechanisms and courts. There must be safeguards for users' rights to due process and privacy in line with international standards and applicable data protection laws. The Board will consider this recommendation implemented when Meta publicly shares this protocol.
11 Aug 2023
Assessing Feasibility (Long Term)
In progress
Covid-19 PAO Rec #1
Given the World Health Organization’s declaration that COVID-19 constitutes a global health emergency and Meta’s insistence on a global approach, Meta should continue its existing approach of globally removing false content about COVID-19 that is "likely to directly contribute to the risk of imminent physical harm". At the same time, it should begin a transparent and inclusive process for robust and periodic reassessment of each of the 80 claims subject to removal to ensure that:
  • Each of the specific claims about COVID-19 that is subject to removal is false and "likely to directly contribute to the risk of imminent physical harm"; and
  • Meta’s human rights commitments are properly implemented (e.g., the legality and necessity principles).
Based on this process of reassessment, Meta should determine whether any claims are no longer false or no longer “likely to directly contribute to the risk of imminent physical harm.” Should Meta find that any claims are no longer false or no longer “likely to directly contribute to the risk of imminent physical harm,” such claims should no longer be subject to removal under this policy. The Board will consider this recommendation implemented when Meta announces a reassessment process and announces any changes to the 80 claims on the Help Center page.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #1A
The company must put a process in place, as soon as feasible, to consider a broader set of perspectives in evaluating whether the removal of each claim is needed by the exigencies of the situation. The experts and organizations consulted should include public health experts, immunologists, virologists, infectious disease researchers, misinformation and disinformation researchers, tech policy experts, human rights organizations, fact-checkers, and freedom of expression experts.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #1B
Meta should establish the timing for this review (e.g. every three or six months) and make this public to ensure notice and input.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #1C
Meta should articulate a clear process for regular review, including means for interested individuals and organizations to challenge an assessment of a specific claim (e.g., by providing a link on the Help Center page for public comments and virtual consultations).
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #1D
Meta’s review of the claims should include the latest research on the spread and impact of such online health misinformation. This should include internal research on the relative effectiveness of various measures available to Meta, including removals, fact-checking, demotions, and neutral labels. The company should consider the status of the pandemic in all regions in which it operates, especially those in which its platforms constitute a primary source of information and where there are less digitally literate communities, weaker civic spaces, a lack of reliable sources of information, and fragile health care systems. Meta should also evaluate the effectiveness of its enforcement of these claims. Meta should gather, if it does not already possess it, information about which claims have systemically resulted in under- and over-enforcement problems. This information should inform whether a claim should continue to be removed or should be addressed through other measures.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #1E
In order to provide transparency on the types of experts consulted, their input, the internal and external research considered and how the information impacted the outcome of the analysis, Meta should provide to the Board a summary of the basis for its decision on each claim. The summary should specifically include the basis for the company’s decision for continuing to remove a claim. Meta should also disclose what role, if any, government personnel or entities played in its decision-making. If the company decides to cease removing a specific claim, the company should explain the basis of that decision (including: (a) what input led the company to determine that the claim is no longer false; (b) what input, from what source, led the company to determine the claim no longer directly contributes to the risk of imminent physical harm, and whether that assessment holds in countries with lowest vaccination rates and under-resourced public health infrastructure; (c) did the company determine that its enforcement system led to over-enforcement on the specific claim; (d) did the company determine that the claim is no longer prevalent on the platform.) The Board will consider this recommendation implemented when Meta shares the assessment of its policy evaluation process. This information should align with the reasons listed publicly in the Help Center post for any changes made to the policy, as outlined in the first paragraph of this recommendation.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #2
Meta should immediately provide a clear explanation of the reasons why each category of removable claims is “likely to directly contribute to the risk of imminent physical harm.”
16 Jun 2023
Work Meta Already Does
No further updates
Covid-19 PAO Rec #3
Meta should clarify its Misinformation about health during public health emergencies policy by explaining that the requirement that information be “false” refers to false information according to the best available evidence at the time the policy was most recently re-evaluated.
16 Jun 2023
Implementing in Full
Complete
Covid-19 PAO Rec #4
Meta should immediately initiate a risk assessment process to identify the necessary and proportionate measures that it should take, consistent with this policy decision and the other recommendations made in this policy advisory opinion, when the WHO lifts the global health emergency for COVID-19, but other local public health authorities continue to designate COVID-19 as a public health emergency. This process should aim to adopt measures addressing harmful misinformation likely to contribute to significant and imminent real-life harm, without compromising the general right to freedom of expression globally. The risk assessment should include:
1) A robust evaluation of the design decisions and various policy and implementation alternatives;
2) Their respective impacts on freedom of expression, the right to health and to life and other human rights; and
3) A feasibility assessment of a localized enforcement approach.
16 Jun 2023
Implementing in Full
Complete
Covid-19 PAO Rec #5
Meta should translate internal implementation guidelines into the working languages of the company’s platforms.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #6
User appeals against a fact-check label should be reviewed by a different fact-checker than the one who made the first assessment. To ensure fairness and promote access to a remedy for users who have their content fact-checked, Meta should amend its process to ensure that a fact-checker who has not already assessed the given claim can evaluate the decision to impose a label.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #7
Meta should allow profiles (not only pages and groups) that have content labeled by third-party fact-checkers enforcing Meta’s misinformation policy to appeal the label to another fact-checker through the in-product appeals feature.
16 Jun 2023
Implementing in Part
Complete
Covid-19 PAO Rec #8
Meta should increase its investments in digital literacy programs across the world, prioritizing countries with low media freedom indicators (e.g. Freedom of the Press score by Freedom House) and high social media penetration. These investments should include tailored literacy training.
16 Jun 2023
Implementing in Full
Complete
Covid-19 PAO Rec #9
For single accounts and networks of Meta entities that repeatedly violate the misinformation policy, Meta should conduct or share existing research on the effects of its newly publicized penalty system, including any data about how this system is designed to prevent these violations. This research should include analysis of accounts amplifying or coordinating health misinformation campaigns. The assessment should evaluate the effectiveness of the demonetization penalties that Meta currently uses, in addressing the financial motivations/benefits of sharing harmful and false or misleading information.
16 Jun 2023
Implementing in Part
In progress
Covid-19 PAO Rec #10
Meta should commission a human rights impact assessment of how Meta’s newsfeed, recommendation algorithms, and other design features amplify harmful health misinformation and its impacts. This assessment should provide information on the key factors in the feed-ranking algorithm that contribute to the amplification of harmful health misinformation, what types of misinformation can be amplified by Meta’s algorithms, and which groups are most susceptible to this type of misinformation (and whether they are particularly targeted by Meta’s design choices). This assessment should also make public any prior research Meta has conducted that evaluates the effects of its algorithms and design choices in amplifying health misinformation.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #11
Meta should add a change log to the Help Center page providing the complete list of claims subject to removal under the company’s misinformation about health during public health emergencies policy.
16 Jun 2023
Implementing in Part
Complete
Covid-19 PAO Rec #12
Meta should provide quarterly enforcement data on misinformation in the Quarterly Enforcement Report, broken down by type of misinformation (i.e., physical harm or violence, harmful health misinformation, voter or census interference, or manipulated media), country, and language. This data should include information on the number of appeals and the number of pieces of content restored.
16 Jun 2023
Implementing in Part
Complete
Covid-19 PAO Rec #13
Meta should create a section in its Community Standards Enforcement Report to report on state actor requests to review content for violations of the Misinformation about health during public health emergencies policy. The report should include details on the number of review and removal requests by country and government agency, and the number of rejections and approvals by Meta.
16 Jun 2023
Implementing in Part
In progress
Covid-19 PAO Rec #14
Meta should ensure existing research tools, such as CrowdTangle and Facebook Open Research and Transparency (FORT), continue to be made available to researchers.
16 Jun 2023
Implementing in Part
Complete
Covid-19 PAO Rec #15
Meta should institute a pathway for external researchers to gain access to non-public data to independently study the effects of policy interventions related to the removal and reduced distribution of COVID-19 misinformation, while ensuring these pathways protect the right to privacy of Meta’s users and the human rights of people on and off the platform. This data should include metrics not previously made available, including the rate of recidivism around COVID-19 misinformation interventions.
16 Jun 2023
Implementing in Full
Complete
Covid-19 PAO Rec #16
Meta should publish the findings of its research on neutral and fact-checking labels that it shared with the Board during the COVID-19 policy advisory opinion process.
16 Jun 2023
No Further Action
No further updates
Covid-19 PAO Rec #17
Meta should ensure equitable data access to researchers around the world. While researchers in Europe will have an avenue to apply for data access through the Digital Services Act (DSA), Meta should ensure it does not over-index on researchers from Global North research universities. Research on prevalence of COVID-19 misinformation and the impact of Meta’s policies will shape general understanding of, and future responses to, harmful health misinformation and future emergencies. If that research is disproportionately focused on the Global North, the response will be too.
16 Jun 2023
Implementing in Full
Complete
Covid-19 PAO Rec #18
Meta should evaluate the impact of the cross-check Early Response Secondary Review (ERSR) system on the effectiveness of its enforcement of the Misinformation policy and ensure that Recommendations 16 and 17 in the Board’s policy advisory opinion on Meta’s cross-check program apply to entities that post content violating the Misinformation about health during a public health emergency policy.
16 Jun 2023
Implementing in Full
In progress
Sri Lanka Pharmaceuticals Rec #1
To provide more clarity to users, Meta should explain in the landing page of the Community Standards, in the same way the company does with the newsworthiness allowance, that allowances to the Community Standards may be made when their rationale, and Meta's values, demand a different outcome than a strict reading of the rules. The company should include a link to a Transparency Centre page which provides information about the "spirit of the policy" allowance. The Board will consider this recommendation implemented when an explanation is added to the Community Standards.
8 May 2023
Implementing in Full
In progress
Sri Lanka Pharmaceuticals Rec #2
To provide more certainty to users, Meta should communicate when reported content benefits from a "spirit of the policy" allowance. In line with Meta's recent work to audit its user notification systems as stated in its response to the Board's recommendation in the "Colombia protests" case (2021-010-FB-UA), Meta should notify all users who reported content which was assessed as violating but left on the platform because a "spirit of the policy" allowance was applied to the post. The notice should include a link to a Transparency Centre page which provides information about the "spirit of the policy" allowance. The Board will consider this recommendation implemented when Meta introduces the notification protocol described in this recommendation.
8 May 2023
No Further Action
No further updates
Sri Lanka Pharmaceuticals Rec #3
In line with the Board's recommendations five and six in the "Iran protest slogan" case (2022-013-FB-UA), the Board specifies that Meta should publish information about the "spirit of the policy" allowance in its Transparency Centre, similar to the information it has published on the newsworthiness allowance. In the Transparency Centre, Meta should: (i) explain that "spirit of the policy" allowances can be either scaled or narrow; (ii) publicize examples of content which benefited from this allowance; (iii) provide criteria Meta uses to determine when to scale "spirit of the policy" allowances; and (iv) include a list of all "spirit of the policy" allowances Meta has issued at scale in the past three years with explanations of why Meta decided to issue and terminate each of them. Meta should keep this list updated as new allowances are issued. The Board will consider this recommendation implemented when Meta makes this information publicly available in the Transparency Centre.
8 May 2023
Implementing in Part
In progress
Sri Lanka Pharmaceuticals Rec #4
In line with the Board's recommendations five and six in the "Iran protest slogan" case (2022-013-FB-UA), the Board specifies that Meta should publicly share aggregated data in its Transparency Centre about the "spirit of the policy" allowances issued, including the number of instances in which they were issued, and the regions and/or languages affected. Meta should keep this information updated as new "spirit of the policy" allowances are issued. The Board will consider this recommendation implemented when Meta makes this information publicly available in the Transparency Centre.
8 May 2023
Implementing in Part
In progress
Gender Identity and Nudity Rec #1
In order to treat all users fairly and provide moderators and the public with a workable standard on nudity, Meta should define clear, objective, rights-respecting criteria to govern the entirety of its Adult Nudity and Sexual Activity policy, ensuring treatment of all people that is consistent with international human rights standards, including without discrimination on the basis of sex or gender identity. Meta should first conduct a comprehensive human rights impact assessment to review the implications of the adoption of such criteria, which includes broadly inclusive stakeholder engagement across diverse ideological, geographic and cultural contexts. To the degree that this assessment should identify any potential harms, implementation of the new policy should include a mitigation plan for addressing them.
17 Mar 2023
Implementing in Part
Complete
Gender Identity and Nudity Rec #2
In order to provide greater clarity to users, Meta should provide users with more explanation of what constitutes an "offer or ask" for sex (including links to third-party websites) and what constitutes sexually suggestive poses in the public Community Standards. The Board will consider this recommendation implemented when an explanation of these terms with examples is added to the Sexual Solicitation Community Standard.
17 Mar 2023
Implementing in Part
Complete
Gender Identity and Nudity Rec #3
In order to ensure that Meta’s internal criteria for its Sexual Solicitation policy do not result in the removal of more content than the public-facing policy indicates and so that non-sexual content is not mistakenly removed, Meta should revise its internal reviewer guidance to ensure that the criteria reflect the public-facing rules and require a clearer connection between the "offer or ask" and the "sexually suggestive element." The Board will consider this implemented when Meta provides the Board with its updated internal guidelines that reflect these revised criteria.
17 Mar 2023
Implementing in Part
Complete
Iran Protest Slogan Rec #1
Meta's Community Standards should accurately reflect its policies. To better inform users of the types of statements that are prohibited, Meta should amend the Violence and Incitement Community Standard to (i) explain that rhetorical threats such as "death to X" statements are generally permitted, except when the target of the threat is a high-risk person; (ii) include an illustrative list of high-risk persons, explaining that they may include heads of state; (iii) provide criteria for when threatening statements directed at heads of state are permitted to protect clearly rhetorical political speech in protest contexts that does not incite to violence, taking language and context into account, in accordance with the principles outlined in this decision. The Board will consider this recommendation implemented when the public-facing language of the Violence and Incitement Community Standard reflects the proposed change, and when Meta shares internal guidelines with the Board that are consistent with the public-facing policy.
10 Mar 2023
Assessing Feasibility
In progress
Iran Protest Slogan Rec #2
Meta should err on the side of issuing scaled allowances where (i) this is not likely to lead to violence; (ii) potentially violating content is used in protest contexts; and (iii) public interest is high. Meta should ensure that its internal processes to identify and review content trends around protests that may require context-specific guidance to mitigate harm to freedom of expression, such as allowances or exemptions, are effective. The Board will consider this recommendation implemented when Meta shares the internal process with the Board and demonstrates through sharing data with the Board that it has minimized incorrect removals of protest slogans.
10 Mar 2023
Implementing in Part
Complete
Iran Protest Slogan Rec #3
Pending changes to the Violence and Incitement policy, Meta should issue guidance to its reviewers that "marg bar Khamenei" statements in the context of protests in Iran do not violate the Violence and Incitement Community Standard. Meta should reverse any strikes and feature limits for wrongfully removed content that used the "marg bar Khamenei" slogan. The Board will consider this recommendation implemented when Meta discloses data on the volume of content restored and number of accounts impacted.
10 Mar 2023
Implementing in Full
Complete
Iran Protest Slogan Rec #4
Meta should revise the indicators that it uses to rank appeals in its review queues and to automatically close appeals without review. The appeals prioritization formula should include, as it does for the cross-check ranker, the factors of topic sensitivity and false-positive probability. The Board will consider this implemented when Meta shares with the Board its appeals prioritization formula and data showing that it is ensuring review of appeals against the incorrect removal of political expression in protest contexts.
10 Mar 2023
Implementing in Part
Complete
Iran Protest Slogan Rec #5
Meta should announce all scaled allowances that it issues, their duration and notice of their expiry, in order to give people who use its platforms notice of policy changes allowing certain expression, alongside comprehensive data on the number of "scaled" and "narrow" allowances granted. The Board will consider this recommendation implemented when Meta demonstrates regular and comprehensive disclosures to the Board.
10 Mar 2023
Implementing in Part
Complete
Iran Protest Slogan Rec #6
The public explanation of the newsworthiness allowance in the Transparency Centre should (i) explain that newsworthiness allowances can either be scaled or narrow; and (ii) provide the criteria that Meta uses to determine when to scale newsworthiness allowances. The Board will consider this recommendation to be implemented when Meta updates the publicly available explanation of newsworthiness and issues Transparency Reports that include sufficiently detailed information about all applied allowances.
10 Mar 2023
Implementing in Full
Complete
Iran Protest Slogan Rec #7
Meta should provide a public explanation of the automatic prioritization and closure of appeals, including the criteria for both prioritization and closure. The Board will consider this recommendation implemented when Meta publishes this information in the Transparency Centre.
10 Mar 2023
No Further Action
No further updates
Cross-Check PAO Rec #0
Meta should provide information about its implementation work in its quarterly reports on the Board. Additionally, Meta should convene a biannual meeting of high-level responsible officials to brief the Board on its work to implement the policy advisory opinion recommendations.
3 Mar 2023
Implementing in Full
Complete
Cross-Check PAO Rec #1
Meta should split, either by distinct pathways or prioritization, any list-based over-enforcement prevention program into separate systems: one to protect expression in line with Meta’s human rights responsibilities, and one to protect expression that Meta views as a business priority that falls outside that category.
3 Mar 2023
Assessing Feasibility
In progress
Cross-Check PAO Rec #2
Meta should ensure that the review pathway and decision-making structure for content with human rights or public interest implications, including its escalation paths, is devoid of business considerations. Meta should take steps to ensure that the team in charge of this system does not report to public policy or government relations teams or those in charge of relationship management with any affected users.
3 Mar 2023
Implementing in Part
Complete
Cross-Check PAO Rec #3
Meta should improve how its workflow dedicated to meeting Meta’s human rights responsibilities incorporates context and language expertise on enhanced review, specifically at decision-making levels.
3 Mar 2023
Implementing in Full
In progress
Cross-Check PAO Rec #4
Meta should establish clear and public criteria for list-based mistake-prevention eligibility. These criteria should differentiate between users who merit additional protection from a human rights perspective and those included for business reasons.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #5
Meta should establish a process for users to apply for over-enforcement mistake-prevention protections should they meet the company’s publicly articulated criteria. State actors should be eligible to be added or apply based on these criteria and terms, but given no other preference.
3 Mar 2023
No Further Action
No further updates
Cross-Check PAO Rec #6
Meta should ensure that the process for list-based inclusion, regardless of who initiated the process (the entity itself or Meta), involves, at minimum:
(1) an additional, explicit, commitment by the user to follow Meta’s content policies;
(2) an acknowledgement of the program’s particular rules; and
(3) a system by which changes to the platform’s content policies are proactively shared with them.
3 Mar 2023
No Further Action
No further updates
Cross-Check PAO Rec #7
Meta should strengthen its engagement with civil society for the purposes of list creation and nomination. Users and trusted civil society organizations should be able to nominate others that meet the criteria. This is particularly urgent in countries where the company’s limited presence does not allow it to identify candidates for inclusion independently.
3 Mar 2023
Implementing in Part
Complete
Cross-Check PAO Rec #8
Meta should use specialized teams, independent from political or economic influence, including from Meta’s public policy teams, to evaluate entities for list inclusion. To ensure criteria are met, specialized staff, with the benefit of local input, should ensure objective application of inclusion criteria.
3 Mar 2023
Assessing Feasibility
In progress
Cross-Check PAO Rec #9
Meta should require that more than one employee be involved in the final process of adding new entities to any lists for false positive mistake-prevention systems. These people should work on different but related teams.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #10
Meta should establish clear criteria for removal. One criterion should be the amount of violating content posted by the entity. Disqualifications should be based on a transparent strike system, in which users are warned that continued violation may lead to removal from the system and/or Meta’s platforms. Users should have the opportunity to appeal such strikes through a fair and easily accessible process.
3 Mar 2023
Implementing in Part
Complete
Cross-Check PAO Rec #11
Meta should establish clear criteria and processes for audit. Should entities no longer meet the eligibility criteria, they should be promptly removed from the system. Meta should review all included entities in any mistake prevention system at least yearly. There should also be clear protocols to shorten that period where warranted.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #12
Meta should publicly mark the pages and accounts of entities receiving list-based protection in the following categories: all state actors and political candidates, all business partners, all media actors, and all other public figures included because of the commercial benefit to the company in avoiding false positives. Other categories of users may opt to be identified.
3 Mar 2023
No Further Action
No further updates
Cross-Check PAO Rec #13
Meta should notify users who report content posted by an entity publicly identified as benefiting from additional review that special procedures will apply, explaining the steps and potentially longer time to resolution.
3 Mar 2023
No Further Action
No further updates
Cross-Check PAO Rec #14
Meta should notify all entities that it includes on lists to receive enhanced review and provide them with an opportunity to decline inclusion.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #15
Meta should consider reserving a minimum amount of review capacity for teams that can apply all content policies (e.g., the Early Response Team) to review content flagged through content-based mistake-prevention systems.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #16
Meta should take measures to ensure that additional review decisions for mistake-prevention systems that delay enforcement are taken as quickly as possible. Investments and structural changes should be made to expand the review teams so that reviewers are available and working in relevant time zones whenever content is flagged for any enhanced human review.
3 Mar 2023
Implementing in Part
Complete
Cross-Check PAO Rec #17
Meta should not delay all action on content identified as potentially severely violating and should explore applying interstitials or removals pending any enhanced review. The difference between removal or hiding and downranking should be based on an assessment of harm, and may be based, for example, on the content policy that has possibly been violated. If content is hidden on these grounds, a notice indicating that it is pending review should be provided to users in its place.
3 Mar 2023
Implementing in Full
Complete
Cross-Check PAO Rec #18
Meta should not operate these programs at a backlog. Meta should not, however, achieve gains in relative review capacity by artificially raising the ranker threshold or having its algorithm select less content.
3 Mar 2023
Implementing in Full
Complete
Cross-Check PAO Rec #19
Meta should not automatically prioritize entity-based secondary review and make a large portion of the algorithmically selected content-based review dependent on extra review capacity.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #20
Meta should ensure that content that receives any kind of enhanced review because it is important from a human rights perspective, including content of public importance, is reviewed by teams that can apply exceptions and context.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #21
Meta should establish clear criteria for the application of any automatic bars to enforcement (‘technical corrections’), and not permit such bars for high severity content policy violations. At least two teams with separate reporting structures should participate in granting technical corrections to provide for cross-team vetting.
3 Mar 2023
Implementing in Full
Complete
Cross-Check PAO Rec #22
Meta should conduct periodic audits to ensure that entities benefitting from automatic bars to enforcement (‘technical corrections’) meet all criteria for inclusion. At least two teams with separate reporting structures should participate in these audits to provide for cross-team vetting.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #23
Meta should conduct periodic multi-team audits to proactively search for unexpected or unintentional bars to enforcement that may result from system error.
3 Mar 2023
Assessing Feasibility
Complete
Cross-Check PAO Rec #24
Meta should ensure that all content that does not reach the highest level of internal review is able to be appealed to Meta.
3 Mar 2023
Implementing in Part
Complete
Cross-Check PAO Rec #25
Meta must guarantee that it is providing an opportunity to appeal to the Board for all content the Board is empowered to review under its governing documents, regardless of whether the content reached the highest levels of review within Meta.
3 Mar 2023
Implementing in Full
Complete
Cross-Check PAO Rec #26
Meta should use the data it compiles to identify “historically over-enforced entities” to inform how to improve its enforcement practices at scale. Meta should measure over-enforcement of these entities and it should use that data to help identify other over-enforced entities. Reducing over-enforcement should be an explicit and high-priority goal for the company.
3 Mar 2023
Implementing in Full
In progress
Cross-Check PAO Rec #27
Meta should use trends in overturn rates to inform whether to default to the original enforcement within a shorter time frame or what other enforcement action to apply pending review. If overturn rates are consistently low for particular subsets of policy violations or content in particular languages, for example, Meta should continually calibrate how quickly it acts and how intrusive an enforcement measure it applies.
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #28
Meta should conduct periodic reviews of different aspects of its enhanced review system, including content with the longest time to resolution and high-profile violating content left on the platform.
3 Mar 2023
Implementing in Full
In progress
Cross-Check PAO Rec #29
Meta should publicly report on metrics that quantify the adverse effects of delayed enforcement as a result of enhanced review systems, such as views accrued on content that was preserved on the platform as a result of mistake-prevention systems but was subsequently found violating. As part of its public reporting, Meta should determine a baseline for these metrics and report on goals to reduce them.
3 Mar 2023
No Further Action
No further updates
Cross-Check PAO Rec #30
Meta should publish regular transparency reporting focused specifically on delayed enforcement of false-positive prevention systems. Reports should contain data that permits users and the public to understand how these programs function and what their consequences on public discourse may be. At minimum, the Board recommends Meta include:
a. Overturn rates for false positive mistake-prevention systems, disaggregated according to different factors. For example, the Board has recommended that Meta create separate streams for different categories of entities or content based on their expression and risk profile. The overturn rate should be reported for any entity-based and content-based systems, and categories of entities or content included.
b. The total number and percentage of escalation-only policies applied due to false positive mistake-prevention programs relative to total enforcement decisions.
c. Average and median time to final decision for content subject to false-positive mistake prevention programs, disaggregated by country and language.
d. Aggregate data regarding any lists used for mistake-prevention programs, including the type of entity and region.
e. Rate of erroneous removals (false positives) versus all reviewed content, including the total amount of harm generated by these false positives, measured as the predicted total views on the content (i.e., over-enforcement).
f. Rate of erroneous keep-up decisions (false negatives) on content, including the total amount of harm generated by these false negatives, measured as the sum of views the content accrued (i.e., under-enforcement).
3 Mar 2023
Implementing in Part
In progress
Cross-Check PAO Rec #31
Meta should provide basic information in its Transparency Center regarding the functioning of any mistake-prevention system it uses that identifies entities or users for additional protections.
3 Mar 2023
Implementing in Full
In progress
Cross-Check PAO Rec #32
Meta should institute a pathway for external researchers to gain access to non-public data about false-positive mistake-prevention programs that would allow them to understand the program more fully through public-interest investigations and provide their own recommendations for improvement. The Board understands that data privacy concerns should require stringent vetting and data aggregation.
3 Mar 2023
Implementing in Full
Complete
India Sexual Harassment Rec #1
Meta should include an exception to the Adult Sexual Exploitation Community Standard for depictions of non-consensual sexual touching, where, based on a contextual analysis, Meta judges that the content is shared to raise awareness, the victim is not identifiable, the content does not involve nudity and is not shared in a sensationalized context, thus entailing minimal risks of harm for the victim. This exception should be applied at escalation only. The Board will consider this recommendation implemented when the text of the Adult Sexual Exploitation Community Standard has been changed.
10 Feb 2023
Implementing in Full
Complete
India Sexual Harassment Rec #2
Meta should update its internal guidance to at-scale reviewers on when to escalate content reviewed under the Adult Sexual Exploitation Community Standard, including guidance to escalate content depicting non-consensual sexual touching, with the above policy exception. The Board will consider this recommendation implemented when Meta shares with the Board the updated guidance to at-scale reviewers.
10 Feb 2023
Implementing in Full
Complete
Nigeria Church Video Rec #1
Meta should review the public-facing language in the Violent and Graphic Content policy to ensure that it is better aligned with the company's internal guidance on how the policy is to be enforced. The Board will consider this recommendation implemented when the policy has been updated with a definition and examples, in the same way as Meta explains concepts such as "praise" in the Dangerous Individuals and Organisations policy.
10 Feb 2023
Assessing Feasibility
In progress
Nigeria Church Video Rec #2
Meta should notify Instagram users when a warning screen is applied to their content and provide the specific policy rationale for doing so. The Board will consider this recommendation implemented when Meta confirms that notifications are provided to Instagram users in all languages supported by the platform.
10 Feb 2023
Implementing in Full
In progress
UK Drill Rap Rec #1
Meta's description of its value of "Voice" should be updated to reflect the importance of artistic and creative expression. The Board will consider this recommendation implemented when Meta's values have been updated.
20 Jan 2023
Implementing in Full
Complete
UK Drill Rap Rec #2
Meta should clarify that for content to be removed as a "veiled threat" under the Violence and Incitement Community Standard, one primary and one secondary signal are required. The list of signals should be divided between primary and secondary signals, in line with the internal Implementation Standards. This will make Meta's content policy in this area easier to understand, particularly for those reporting content as potentially violating. The Board will consider this recommendation implemented when the language in the Violence and Incitement Community Standard has been updated.
20 Jan 2023
Implementing in Full
Complete
UK Drill Rap Rec #3
Meta should provide users with the opportunity to appeal to the Oversight Board for any decisions made through Meta's internal escalation process, including decisions to remove content and to leave content up. This is necessary to provide the possibility of access to remedy to the Board and to enable the Board to receive appeals for "escalation-only" enforcement decisions. This should also include appeals against removals made for Community Standard violations as a result of "trusted flagger" or government actor reports made outside in-product tools. The Board will consider this implemented when it sees user appeals coming from decisions made on escalation and when Meta shares data with the Board showing that for 100% of eligible escalation decisions, users are receiving reference IDs to initiate appeals.
20 Jan 2023
Implementing in Part
Complete
UK Drill Rap Rec #4
Meta should implement and ensure a globally consistent approach to receive requests for content removals (outside in-product reporting tools) from state actors by creating a standardized intake form asking for minimum criteria, for example, the violated policy line, why it has been violated, and a detailed evidential basis for that conclusion, before any such requests are actioned by Meta internally. This contributes to ensuring more organized information collection for transparency reporting purposes. The Board will consider this implemented when Meta discloses the internal guidelines that outline the standardized intake system to the Board and in the Transparency Centre.
20 Jan 2023
Implementing in Part
In progress
UK Drill Rap Rec #5
Meta should mark and preserve any accounts and content that were penalised or disabled for posting content that is subject to an open investigation by the Board. This prevents those accounts from being permanently deleted when the Board may wish to request content that is referred for decision or to ensure that its decisions can apply to all identical content with parallel context that may have been wrongfully removed. The Board will consider this implemented when Board decisions are applicable to the aforementioned entities and Meta discloses the number of said entities affected for each Board decision.
20 Jan 2023
Implementing in Full
Complete
UK Drill Rap Rec #6
Meta should create a section in its Transparency Centre, alongside its "Community Standards Enforcement Report" and "Legal Requests for Content Restrictions Report", to report on state actor requests to review content for Community Standard violations. It should include details on the number of review and removal requests by country and government agency, and the number of rejections by Meta. This is necessary to improve transparency. The Board will consider this implemented when Meta publishes a separate section in its "Community Standards Enforcement Report" on requests from state actors that led to removal for content policy violations.
20 Jan 2023
Implementing in Part
In progress
UK Drill Rap Rec #7
Meta should regularly review the data on its content moderation decisions prompted by state actor content review requests to assess for any systemic biases. Meta should create a formal feedback loop to fix any biases and/or outsized impacts stemming from its decisions on government content takedowns. The Board will consider this recommendation implemented when Meta regularly publishes the general insights derived from these audits and the actions taken to mitigate systemic biases.
20 Jan 2023
Assessing Feasibility
In progress
World War II Poem Rec #1
Meta should add to the public-facing language of its Violence and Incitement Community Standard that the company interprets the policy to allow content containing statements with "neutral reference to a potential outcome of an action or an advisory warning" and content that "condemns or raises awareness of violent threats". The Board expects that this recommendation, if implemented, will require Meta to update the public-facing language of the Violence and Incitement policy to reflect these inclusions.
13 Jan 2023
Implementing in Full
Complete
World War II Poem Rec #2
Meta should add to the public-facing language of its Violent and Graphic Content Community Standard detail from its internal guidelines about how the company determines whether an image "shows the violent death of a person or people by accident or murder". The Board expects that this recommendation, if implemented, will require Meta to update the public-facing language of the Violent and Graphic Content Community Standard to reflect this inclusion.
13 Jan 2023
Implementing in Full
Complete
World War II Poem Rec #3
Meta should assess the feasibility of implementing customisation tools that would allow users over 18 years old to decide whether to see sensitive graphic content with or without warning screens, on Facebook, Instagram, and Threads. The Board expects that this recommendation, if implemented, will require Meta to publish the results of a feasibility assessment.
13 Jan 2023
No Further Action
No further updates
Violence in Ethiopia Rec #1
In line with the Board’s recommendation in the “Former President Trump’s Suspension” decision, as reiterated in the “Sudan Graphic Video” decision, Meta should publish information on its Crisis Policy Protocol. The Board will consider this recommendation implemented when, within six months of this decision being published, information on the Crisis Policy Protocol is available as a separate policy in the Transparency Center, in addition to the Public Policy Forum slide deck.
2 Dec 2022
Implementing in Part
Complete
Violence in Ethiopia Rec #2
To improve enforcement of its content policies during periods of armed conflict, Meta should assess the feasibility of establishing a sustained internal mechanism that provides the expertise, capacity and coordination required to review and respond to content effectively for the duration of a conflict. The Board will consider this recommendation implemented when Meta provides an overview of the feasibility of a sustained internal mechanism to the Board.
2 Dec 2022
Implementing in Full
Complete
Colombian Police Cartoon Rec #1
To improve Meta’s ability to remove non-violating content from banks programmed to identify or automatically remove violating content, Meta should ensure that content with high rates of appeal and high rates of successful appeal is re-assessed for possible removal from its Media Matching Service banks. The Board will consider this recommendation implemented when Meta: (i) discloses to the Board the rates of appeal and successful appeal that trigger a review of Media Matching Service-banked content, and (ii) confirms publicly that these reassessment mechanisms are active for all its banks that target violating content.
14 Nov 2022
Implementing in Part
Complete
Colombian Police Cartoon Rec #2
To ensure that inaccurately banked content is quickly removed from Meta’s Media Matching Service banks, Meta should set and adhere to standards that limit the time between when banked content is identified for re-review and when, if deemed non-violating, it is removed from the bank. The Board will consider this recommendation implemented when Meta: (i) sets and discloses to the Board its goal time between when a re-review is triggered and when the non-violating content is restored, and (ii) provides the Board with data demonstrating its progress in meeting this goal over the next year.
14 Nov 2022
Implementing in Part
In progress
Colombian Police Cartoon Rec #3
To enable the establishment of metrics for improvement, Meta should publish the error rates for content mistakenly included in Media Matching Service banks of violating content, broken down by each content policy, in its transparency reporting. This reporting should include information on how content enters the banks and the company’s efforts to reduce errors in the process. The Board will consider this recommendation implemented when Meta includes this information in its Community Standards Enforcement Report.
14 Nov 2022
No Further Action
No further updates
Mention of the Taliban Rec #1
Meta should investigate why the December 2021 changes to the Dangerous Individuals and Organizations policy were not updated within the target time of six weeks, and ensure such delays or omissions are not repeated. The Board asks Meta to inform the Board within 60 days of the findings of its investigation, and the measures it has put in place to prevent translation delays in future.
14 Nov 2022
Implementing in Full
In progress
Mention of the Taliban Rec #2
Meta should make its public explanation of its two-track strikes system more comprehensive and accessible, especially for “severe strikes.” It should include all policy violations that result in severe strikes, which account features can be limited as a result, and the applicable durations. Policies that result in severe strikes should also be clearly identified in the Community Standards, with a link to the “Restricting Accounts” explanation of the strikes system. The Board asks Meta to inform the Board within 60 days of the updated Transparency Center explanation of the strikes system, and the inclusion of the links to that explanation for all content policies that result in severe strikes.
14 Nov 2022
Implementing in Full
Complete
Mention of the Taliban Rec #3
Meta should narrow the definition of “praise” in the Known Questions guidance for reviewers, by removing the example of content that “seeks to make others think more positively about” a designated entity by attributing to them positive values or endorsing their actions. The Board asks Meta to provide the Board within 60 days with the full version of the updated Known Questions document for Dangerous Individuals and Organizations.
14 Nov 2022
Implementing in Full
Complete
Mention of the Taliban Rec #4
Meta should revise its internal Implementation Standards to make clear that the “reporting” allowance in the Dangerous Individuals and Organizations policy allows for positive statements about designated entities as part of the reporting, and how to distinguish this from prohibited “praise.” The Known Questions document should be expanded to make clear the importance of news reporting in situations of conflict or crisis and provide relevant examples, and that this may include positive statements about designated entities like the reporting on the Taliban in this case.
14 Nov 2022
Implementing in Full
Complete
Mention of the Taliban Rec #5
Meta should assess the accuracy of reviewers enforcing the reporting allowance under the Dangerous Individuals and Organizations policy in order to identify systemic issues causing enforcement errors. The Board asks Meta to inform the Board within 60 days of the detailed results of its review of this assessment, or accuracy assessments Meta already conducts for its Dangerous Individuals and Organizations policy, including how the results will inform improvements to enforcement operations, including for HIPO.
14 Nov 2022
Implementing in Part
Complete
Mention of the Taliban Rec #6
Meta should conduct a review of the HIPO ranker to examine if it can more effectively prioritize potential errors in the enforcement of allowances to the Dangerous Individuals and Organizations Policy. This should include examining whether the HIPO ranker needs to be more sensitive to news reporting content, where the likelihood of false-positive removals that impact freedom of expression appears to be high. The Board asks Meta to inform the Board within 60 days of the results of its review and the improvements it will make to avoid errors of this kind in the future.
14 Nov 2022
Implementing in Part
Complete
Mention of the Taliban Rec #7
Meta should enhance the capacity allocated to HIPO review across languages to ensure that more content decisions that may be enforcement errors receive additional human review. The Board asks Meta to inform the Board within 60 days of the planned capacity enhancements.
14 Nov 2022
Implementing in Full
Complete
Knin cartoon Rec #1
Meta should clarify the Hate Speech Community Standard and the guidance provided to reviewers, explaining that even implicit references to protected groups are prohibited by the policy when the reference would reasonably be understood. The Board will consider this recommendation implemented when Meta updates its Community Standards and Internal Implementation Standards to content reviewers to incorporate this revision.
16 Aug 2022
Implementing in Full
In progress
Knin cartoon Rec #2
In line with Meta’s commitment following the "Wampum belt" case (2021-012-FB-UA), the Board recommends that Meta notify all users who have reported content when, on subsequent review, it changes its initial determination. Meta should also disclose to the public the results of any experiments assessing the feasibility of introducing this change. The Board will consider this recommendation implemented when Meta shares information regarding relevant experiments and, ultimately, the updated notification with the Board and confirms it is in use in all languages.
16 Aug 2022
Implementing in Full
Complete
Reclaiming Arabic Words Rec #1
Meta should translate the Internal Implementation Standards and Known Questions to Modern Standard Arabic. Doing so could reduce over-enforcement in Arabic-speaking regions by helping moderators better assess when exceptions for content containing slurs are warranted. The Board notes that Meta has taken no further action in response to the recommendation in the "Myanmar Bot" case (2021-007-FB-UA) that Meta should ensure that its Internal Implementation Standards are available in the language in which content moderators review content. The Board will consider this recommendation implemented when Meta informs the Board that translation to Modern Standard Arabic is complete.
12 Aug 2022
No Further Action
No further updates
Reclaiming Arabic Words Rec #2
Meta should publish a clear explanation on how it creates its market-specific slur lists. This explanation should include the processes and criteria for designating which slurs and countries are assigned to each market-specific list. The Board will consider this implemented when the information is published in the Transparency Center.
12 Aug 2022
Implementing in Full
Complete
Reclaiming Arabic Words Rec #3
Meta should publish a clear explanation of how it enforces its market-specific slur lists. This explanation should include the processes and criteria for determining precisely when and where the slurs prohibition will be enforced, whether in respect to posts originating geographically from the region in question, originating outside but relating to the region in question, and/or in relation to all users in the region in question, regardless of the geographic origin of the post. The Board will consider this recommendation implemented when the information is published in Meta’s Transparency Center.
12 Aug 2022
Implementing in Full
Complete
Reclaiming Arabic Words Rec #4
Meta should publish a clear explanation on how it audits its market-specific slur lists. This explanation should include the processes and criteria for removing slurs from or keeping slurs on Meta's market-specific lists. The Board will consider this recommendation implemented when the information is published in Meta’s Transparency Center.
12 Aug 2022
Implementing in Full
Complete
Sudan Graphic Video Rec #1
Meta should amend the Violent and Graphic Content Community Standard to allow videos of people or dead bodies when shared for the purpose of raising awareness of or documenting human rights abuses. This content should be allowed with a warning screen so that people are aware that content may be disturbing. The Board will consider this recommendation implemented when Meta updates the Community Standard.
12 Aug 2022
Implementing in Part
Complete
Sudan Graphic Video Rec #2
Meta should undertake a policy development process that develops criteria to identify videos of people or dead bodies when shared for the purpose of raising awareness of or documenting human rights abuses. The Board will consider this recommendation implemented when Meta publishes the findings of the policy development process, including information on the process and criteria for identifying this content at scale.
12 Aug 2022
Implementing in Part
Complete
Sudan Graphic Video Rec #3
Meta should make explicit in its description of the newsworthiness allowance all the actions it may take (for example, restoration with a warning screen) based on this policy. The Board will consider this recommendation implemented when Meta updates the policy.
12 Aug 2022
Implementing in Full
In progress
Sudan Graphic Video Rec #4
To ensure users understand the rules, Meta should notify users when it takes action on their content based on the newsworthiness allowance, including the restoration of content or application of a warning screen. The user notification may link to the Transparency Center explanation of the newsworthiness allowance. The Board will consider this implemented when Meta rolls out this updated notification to users in all markets and demonstrates that users are receiving this notification through enforcement data.
12 Aug 2022
Assessing Feasibility
In progress
Private Residential Info PAO Rec #1
Meta should remove the exception that allows the sharing of private residential information (both images that currently fulfill the Privacy Violations policy’s criteria for takedown and addresses) when considered “publicly available”. The Board will consider this implemented when Meta modifies its Internal Implementation Standards and its content policies.
8 Apr 2022
Implementing in Full
Complete
Private Residential Info PAO Rec #2
Meta should develop and publicize clear criteria for content reviewers to escalate for additional review of public interest content that potentially violates the Community Standards but may be eligible for the newsworthiness exception, as previously recommended in case decision 2021-010-FB-UA. The Board will consider this implemented when Meta publicly shares these escalation criteria.
8 Apr 2022
No Further Action
No further updates
Private Residential Info PAO Rec #3
Meta should allow the sharing of “imagery that displays the external view of private residences” when the property depicted is the focus of the news story, except when shared in the context of organizing protests against the resident. The Board will consider this implemented when Meta modifies its content policies.
8 Apr 2022
Implementing in Full
Complete
Private Residential Info PAO Rec #4
Meta should allow the publication of addresses and imagery of official residences provided to high-ranking government officials. The Board will consider this implemented when Meta modifies its content policies.
8 Apr 2022
Implementing in Full
Complete
Private Residential Info PAO Rec #5
Meta should allow the resharing of private residential addresses when posted by the affected user themselves or when the user consented to its publication. Users should not be presumed to consent to private information posted by others. The Board will consider this implemented when Meta modifies its content policies.
8 Apr 2022
No Further Action
No further updates
Private Residential Info PAO Rec #6
Users should have a quick and effective mechanism to request the removal of private information posted by others. The Board will consider this implemented when Meta demonstrates in its transparency reports that user requests to remove their information are consistently and promptly actioned.
8 Apr 2022
Implementing in Full
Complete
Private Residential Info PAO Rec #7
Meta should clarify in the Privacy Violations policy when disclosing the city where a residence is located will suffice for the content to be removed, and when disclosing its neighborhood would be required. The Board will consider this implemented when Meta modifies its content policies.
8 Apr 2022
Implementing in Full
Complete
Private Residential Info PAO Rec #8
Meta should explain, in the text of Facebook’s Privacy Violations policy, its criteria for assessing whether the resident is sufficiently identified in the content. The Board will consider this implemented when Meta modifies its content policies.
8 Apr 2022
Implementing fully
In progress
Private Residential Info PAO Rec #9
The Board reiterates its recommendation that Meta should explain to users that it enforces the Facebook Community Standards on Instagram, with several specific exceptions. The Board recommends Meta update the introduction to the Instagram Community Guidelines within 90 days to inform users that if content is considered violating on Facebook, it is also considered violating on Instagram, as stated in the company’s Transparency Center, with some exceptions. The Board will consider this implemented when Meta modifies its content policies.
8 Apr 2022
Implementing in part
Complete
Private Residential Info PAO Rec #10
Meta should let users reporting content that may violate the Privacy Violations policy provide additional context about their claim. The Board will consider this implemented when Meta publishes information about its appeal processes that demonstrate users may provide this context in appeals.
8 Apr 2022
Implementing fully
Complete
Private Residential Info PAO Rec #11
Meta should create a specific channel of communications for victims of doxing (available both for users and non-users). Additionally, Meta could provide financial support to organizations that already have hotlines in place. Meta should prioritize action when the impacted person references belonging to a group facing heightened risk to safety in the region where the private residence is located. The Board will consider this implemented when Meta creates the channel and publicly announces how to use it.
8 Apr 2022
No further action
No further updates
Private Residential Info PAO Rec #12
Meta should consider the violation of its Privacy Violations policy as “severe,” prompting temporary account suspension, in cases where the sharing of private residential information is clearly related to malicious action that created a risk of violence or harassment. The Board will consider this implemented when Meta updates its Transparency Center description of the strikes system to make clear that some Privacy Violations are severe and may result in account suspension.
8 Apr 2022
No further action
No further updates
Private Residential Info PAO Rec #13
Meta should give users an opportunity to remove or edit private information within their content following a removal for violation of the Privacy Violations policy. The Board will consider this implemented when Meta publishes information about its enforcement processes that demonstrates users are notified of specific policy violations when content is removed and granted a remedial window before the content is permanently deleted.
8 Apr 2022
Implementing in part
Complete
Private Residential Info PAO Rec #14
Meta should let users indicate in their appeals against content removal that their content falls into one of the exceptions to the Privacy Violations policy. The Board will consider this implemented when Meta publishes information about its appeal processes that demonstrates users may provide this information in appeals.
8 Apr 2022
Implementing fully
Complete
Private Residential Info PAO Rec #15
Meta should publish quantitative data on the enforcement of the Privacy Violations policy in the company’s Community Standards Enforcement Report. The Board will consider this implemented when Meta’s transparency report includes Privacy Violations enforcement data.
8 Apr 2022
No further action
No further updates
Private Residential Info PAO Rec #16
Meta should break down data in its transparency reports to indicate the amount of content removed following privacy-related government requests, even if taken down under the Privacy Violations policy and not under local privacy laws. The Board will consider this implemented when Meta’s transparency reporting includes all government requests that result in content removal for violating the Privacy Violations policy as a separate category.
8 Apr 2022
No further action
No further updates
Private Residential Info PAO Rec #17
Meta should provide users with more detail on the specific rule within the Privacy Violations Community Standard that their content was found to violate, and implement this messaging across all working languages of the company’s platforms. The Board will consider this implemented when Meta publishes information and data about user notifications.
8 Apr 2022
Implementing in part
Complete
Child Sexual Exploitation Rec #1
Meta should define graphic depiction and sexualization in the Child Sexual Exploitation, Abuse and Nudity Community Standard. Meta should make clear that not all explicit language constitutes graphic depiction or sexualization, and explain the difference between legal, clinical or medical terms and graphic content. Meta should also clarify how it distinguishes child sexual exploitation from reporting on child sexual exploitation. The Board will consider the recommendation implemented when language defining key terms and the distinction has been added to the Community Standards.
1 Apr 2022
Implementing fully
In progress
Child Sexual Exploitation Rec #2
Meta should undertake a policy development process, including a discussion in its Policy Forum, to determine whether and how to incorporate a prohibition on the functional identification of child victims of sexual violence into its Community Standards. This process should include stakeholder and expert engagement on functional identification and the rights of the child. The Board will consider this recommendation implemented when Meta publishes the minutes of the Product Policy Forum where this is discussed.
1 Apr 2022
Implementing fully
Complete
Advice on Pharmaceutical Drugs Rec #1
Meta should publish its internal definitions for “non-medical drugs” and “pharmaceutical drugs” in the Facebook Community Standard on Restricted Goods and Services. The published definitions should: (a) make clear that certain substances may fall under either “non-medical drugs” or “pharmaceutical drugs” and (b) explain the circumstances under which a substance would fall into each of these categories. The Board will consider this recommendation implemented when these changes are made in the Community Standard.
1 Apr 2022
Implementing fully
Complete
Advice on Pharmaceutical Drugs Rec #2
Meta should study the consequences and trade-offs of implementing a dynamic prioritization system that orders appeals for human review, and consider whether the fact that an enforcement decision resulted in an account restriction should be a criterion within this system. The Board will consider this recommendation implemented when Meta shares the results of these investigations with the Board and in its quarterly Board transparency report.
1 Apr 2022
Implementing in part
In progress
Advice on Pharmaceutical Drugs Rec #3
Meta should conduct regular assessments of reviewer accuracy rates focused on the Restricted Goods and Services policy. The Board will consider this recommendation implemented when Meta shares the results of these assessments with the Board, including how these results will inform improvements to enforcement operations and policy development, and summarizes the results in its quarterly Board transparency reports. Meta may consider whether these assessments should be extended to reviewer accuracy rates under other Community Standards.
1 Apr 2022
Implementing in part
Complete
Violence in Raya Kobo Rec #1
Meta should rewrite its value of “safety” to reflect that online speech may pose risks to the physical security of persons and the right to life, in addition to the risks of intimidation, exclusion and silencing.
13 Jan 2022
Implementing in part
Complete
Violence in Raya Kobo Rec #2
Facebook’s Community Standards should reflect that, in contexts of war and violent conflict, unverified rumors pose a higher risk to the rights to life and security of persons. This should be reflected at all levels of the moderation process.
13 Jan 2022
No further action
No further updates
Violence in Raya Kobo Rec #3
Meta should commission an independent human rights due diligence assessment related to its work in Ethiopia.
13 Jan 2022
Implementing in part
Complete
Praise of Ayahuasca Rec #1
The Board reiterates its recommendation that Meta should explain to users that it enforces the Facebook Community Standards on Instagram, with several specific exceptions. While Meta may be taking other actions to comply with the recommendations, the Board recommends Meta update the Introduction to the Instagram Community Guidelines (“The Short” Community Guidelines) within 90 days to inform users that if content is considered violating on Facebook, it is also considered violating on Instagram, as stated in the company’s Transparency Center, with some exceptions.
7 Jan 2022
Implementing in part
Complete
Praise of Ayahuasca Rec #2
The Board reiterates its recommendation that Meta should explain to users precisely what rule in a content policy they have violated.
7 Jan 2022
Implementing in part
Complete
Praise of Ayahuasca Rec #3
The Board recommends that Meta modify the Instagram Community Guidelines and Facebook Regulated Goods Community Standard to allow positive discussion of traditional and religious uses of non-medical drugs where there is historic evidence of such use. The Board also recommends that Meta make public all allowances, including existing allowances.
7 Jan 2022
Implementing fully
Complete
Indigenous Artwork Rec #1
Provide users with timely and accurate notice of action being taken on the content their appeal relates to. Where applicable, including in enforcement error cases like this one, the notice to the user should acknowledge that the action was a result of the Oversight Board’s review process. Meta should share the user messaging sent when Board actions impact content decisions appealed by users, to demonstrate it has complied with this recommendation.
7 Jan 2022
Implementing fully
Complete
Indigenous Artwork Rec #2
Study the impacts of modified approaches to secondary review on reviewer accuracy and throughput, including (1) an evaluation of accuracy rates when content moderators are informed that they are engaged in secondary review and (2) an opportunity for users to provide relevant context that may help reviewers evaluate their content, in line with the Board’s previous recommendations. Meta should share the results of these accuracy assessments with the Board and summarize the results in its quarterly Board transparency report to demonstrate it has complied with this recommendation.
7 Jan 2022
Work Meta already does
No further updates
Indigenous Artwork Rec #3
Conduct accuracy assessments focused on Hate Speech policy allowances that cover artistic expression and expression about human rights violations, including how the location of a reviewer impacts the ability of moderators to accurately assess hate speech and counter speech from the same or different regions. Meta should share the results of this assessment with the Board, including how these results will inform improvements to enforcement operations and policy development and whether it plans to run regular reviewer accuracy assessments on these allowances, and summarize the results in its quarterly Board transparency report to demonstrate it has complied with this recommendation.
7 Jan 2022
Implementing in part
Complete
Discussing South Africa While Using Slurs Rec #1
Notify users, in the language in which they use Facebook, of the specific rule within the Hate Speech Community Standard that has been violated, as recommended in case decision 2020-003-FB-UA (Armenians in Azerbaijan) and case decision 2021-002-FB-UA (Depiction of Zwarte Piet). The Board looks forward to Facebook providing information that confirms implementation for English-language users and information about the timeframe for implementation for users of other languages.
27 Oct 2021
Implementing in part
Complete
Protests in Colombia Rec #1
Publish illustrative examples from the list of slurs it has designated as violating under its Hate Speech Community Standard. These examples should be included in the Community Standard and include edge cases involving words which may be harmful in some contexts but not others, describing when their use would be violating. Facebook should clarify to users that these examples do not constitute a complete list.
27 Oct 2021
Implementing in part
Complete
Protests in Colombia Rec #2
Link the short explanation of the newsworthiness allowance provided in the introduction to the Community Standards to the more detailed Transparency Center explanation of how this policy applies. The company should supplement this explanation with illustrative examples from a variety of contexts, including reporting on large scale protests.
27 Oct 2021
Implementing fully
Complete
Protests in Colombia Rec #3
Develop and publicize clear criteria for content reviewers to escalate public interest content for additional review when it potentially violates the Community Standards but may be eligible for the newsworthiness allowance.
27 Oct 2021
Work Meta already does
No further updates
Protests in Colombia Rec #4
Notify all users who reported content that was assessed as violating but left on the platform for public interest reasons, informing them that the newsworthiness allowance was applied to the post. The notice should link to the Transparency Center explanation of the newsworthiness allowance.
27 Oct 2021
Assessing feasibility
In progress
Al Jazeera Post Rec #1
Add criteria and illustrative examples to its Dangerous Individuals and Organizations policy to increase understanding of the exceptions for neutral discussion, condemnation and news reporting.
14 Oct 2021
Assessing feasibility
In progress
Al Jazeera Post Rec #2
Ensure swift translation of updates to the Community Standards into all available languages.
14 Oct 2021
No further action
No further updates
Al Jazeera Post Rec #3
Engage an independent entity not associated with either side of the Israeli-Palestinian conflict to conduct a thorough examination to determine whether Facebook’s content moderation in Arabic and Hebrew, including its use of automation, has been applied without bias. The report and its conclusions should be made public.
14 Oct 2021
Implementing fully
Complete
Al Jazeera Post Rec #4
Formalize a transparent process on how it receives and responds to all government requests for content removal, and ensure that they are included in transparency reporting. The transparency reporting should distinguish government requests that led to removals for violations of the Community Standards from requests that led to removal or geo-blocking for violating local law, in addition to requests that led to no action.
14 Oct 2021
Implementing in part
In progress
Brazilian COVID Lockdown Rec #1
Facebook should conduct a proportionality analysis to identify a range of less intrusive measures than removing the content, including labeling content, introducing friction to posts to prevent interactions or sharing, and downranking. All these enforcement measures should be clearly communicated to all users, and subject to appeal.
17 Sep 2021
Work Meta already does
No further updates
Brazilian COVID Lockdown Rec #2
Given the context of the COVID-19 pandemic, Facebook should make technical arrangements to prioritize fact-checking of potential health misinformation shared by public authorities which comes to the company’s attention, taking into consideration the local context.
17 Sep 2021
Work Meta already does
No further updates
Brazilian COVID Lockdown Rec #3
Facebook should provide more transparency within the False News Community Standard regarding when content is eligible for fact-checking, including whether public institutions' accounts are subject to fact-checking.
17 Sep 2021
Implementing fully
Complete
Situation in Myanmar Rec #1
Facebook should ensure that its Internal Implementation Standards are available in the language in which content moderators review content. If necessary to prioritize, Facebook should focus first on contexts where the risks to human rights are more severe.
10 Sep 2021
No further action
No further updates
Support of PKK Founder Rec #1
Immediately restore the misplaced 2017 guidance to the Internal Implementation Standards and Known Questions (the internal guidance for content moderators), informing all content moderators that it exists and arranging immediate training on it.
6 Aug 2021
Implementing fully
Complete
Support of PKK Founder Rec #2
Evaluate automated moderation processes for enforcement of the Dangerous Individuals and Organizations policy and where necessary update classifiers to exclude training data from prior enforcement errors that resulted from failures to apply the 2017 guidance.
6 Aug 2021
No further action
No further updates
Support of PKK Founder Rec #3
Publish the results of the ongoing review process to determine whether any other policies were lost, including descriptions of all lost policies, the period for which they were lost, and the steps taken to restore them.
6 Aug 2021
Implementing in part
Complete
Support of PKK Founder Rec #4
Reflect in the Dangerous Individuals and Organizations “policy rationale” that respect for human rights and freedom of expression can advance the value of “Safety,” and that it is important for the platform to provide a space for these discussions.
6 Aug 2021
Implementing fully
Complete
Support of PKK Founder Rec #5
Add to the Dangerous Individuals and Organizations policy a clear explanation of what “support” excludes. Users should be free to discuss alleged violations and abuses of the human rights of members of designated organizations. Calls for accountability for human rights violations and abuses should also be protected.
6 Aug 2021
Implementing fully
Complete
Support of PKK Founder Rec #6
Explain in the Community Standards how users can make the intent behind their posts clear to Facebook. This would be assisted by implementing the Board’s existing recommendation to publicly disclose the company’s list of designated individuals and organizations (see: case 2020-005-FB-UA). Facebook should also provide illustrative examples to demonstrate the line between permitted and prohibited content, including in relation to the application of the rule clarifying what “support” excludes.
6 Aug 2021
Implementing in part
Complete
Support of PKK Founder Rec #7
Ensure meaningful stakeholder engagement on the proposed policy change through Facebook’s Product Policy Forum, including through a public call for inputs. Facebook should conduct this engagement in multiple languages across regions, ensuring the effective participation of individuals most impacted by the harms this policy seeks to prevent.
6 Aug 2021
Work Meta already does
No further updates
Support of PKK Founder Rec #8
Ensure internal guidance and training is provided to content moderators on any new policy. Content moderators should be provided adequate resources to be able to understand the new policy, and adequate time to make decisions when enforcing the policy.
6 Aug 2021
Work Meta already does
No further updates
Support of PKK Founder Rec #9
Ensure that users are notified when their content is removed. The notification should state whether the removal is due to a government request, a violation of the Community Standards, or a government claim that national law has been violated (and the jurisdictional reach of any removal).
6 Aug 2021
Implementing fully
Complete
Support of PKK Founder Rec #10
Clarify to Instagram users that Facebook’s Community Standards apply to Instagram in the same way they apply to Facebook, in line with the recommendation in case 2020-004-IG-UA.
6 Aug 2021
Implementing in part
Complete
Support of PKK Founder Rec #11
Include information on the number of requests Facebook receives for content removals from governments that are based on Community Standards violations (as opposed to violations of national law), and the outcome of those requests.
6 Aug 2021
Implementing in part
In progress
Support of PKK Founder Rec #12
Include more comprehensive information on error rates for enforcing rules on “praise” and “support” of dangerous individuals and organizations, broken down by region and language.
6 Aug 2021
No further action
No further updates
2021 Russian Protests Rec #1
Explain the relationship between the policy rationale and the “Do nots” as well as the other rules restricting content that follow it.
25 Jun 2021
Implementing in part
Complete
2021 Russian Protests Rec #2
Differentiate between bullying and harassment and provide definitions that distinguish the two acts. Further, the Community Standard should clearly explain to users how bullying and harassment differ from speech that only causes offense and may be protected under international human rights law.
25 Jun 2021
No further action
No further updates
2021 Russian Protests Rec #3
Clearly define its approach to different target user categories and provide illustrative examples of each category (e.g., who qualifies as a public figure). Organize the Bullying and Harassment Community Standard by the user categories currently listed in the policy.
25 Jun 2021
Implementing in part
Complete
2021 Russian Protests Rec #4
Include illustrative examples of violating and non-violating content in the Bullying and Harassment Community Standard to clarify the policy lines drawn and how these distinctions can rest on the identity status of the target.
25 Jun 2021
Implementing in part
Complete
2021 Russian Protests Rec #5
Facebook should amend the Community Standard to require an assessment of the social and political context of content that includes a ‘negative character claim’ against a private adult. Facebook should reconsider the enforcement of this rule in political or public debates where the removal of the content would stifle debate.
25 Jun 2021
No further action
No further updates
2021 Russian Protests Rec #6
Whenever Facebook removes content because of a negative character claim that is only a single word or phrase in a larger post, it should promptly notify the user of that fact, so that the user can repost the material without the negative character claim.
25 Jun 2021
Implementing in part
Complete
Armenian Genocide Comments Rec #1
Facebook should make technical arrangements to ensure that notice to users refers to the Community Standard enforced by the company.
17 Jun 2021
No further action
No further updates
Armenian Genocide Comments Rec #2
Facebook should include the satire exception, which is currently not communicated to users, in the public language of the Hate Speech Community Standard.
17 Jun 2021
Implementing fully
Complete
Armenian Genocide Comments Rec #3
Facebook should make sure that it has adequate procedures in place to properly assess satirical content and relevant context, including by providing content moderators with additional resources.
17 Jun 2021
Implementing in part
Complete
Armenian Genocide Comments Rec #4
Facebook should let users indicate in their appeal that their content falls into one of the exceptions to the Hate Speech policy.
17 Jun 2021
Implementing in part
Complete
Armenian Genocide Comments Rec #5
Facebook should ensure appeals based on policy exceptions are prioritized for human review.
17 Jun 2021
Assessing feasibility
In progress
Trump Suspension Rec #1
Facebook should act quickly on posts made by influential users that pose a high probability of imminent harm.
4 Jun 2021
Work Meta already does
No further updates
Trump Suspension Rec #2
Facebook should consider the context of posts by influential users when assessing a post’s risk of harm.
4 Jun 2021
Work Meta already does
No further updates
Trump Suspension Rec #3
Facebook should prioritize safety over expression when taking action on a threat of harm from influential users.
4 Jun 2021
Work Meta already does
No further updates
Trump Suspension Rec #4
Facebook should suspend the accounts of high government officials, such as heads of state, if their posts repeatedly pose a risk of harm.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #5
Facebook should suspend accounts of high government officials, such as heads of state, for a determinate period sufficient to protect against imminent harm. Periods of suspension should be long enough to deter misconduct and may, in appropriate cases, include account or page deletion.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #6
Facebook should resist pressure from governments to silence their political opposition and consider the relevant political context, including beyond Facebook and Instagram, when evaluating political speech from highly influential users.
4 Jun 2021
Work Meta already does
No further updates
Trump Suspension Rec #7
Facebook should have a process that draws on regional political and linguistic expertise, with adequate resourcing, when evaluating political speech from highly influential users.
4 Jun 2021
Work Meta already does
No further updates
Trump Suspension Rec #8
Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #9
Facebook should assess the on- and offline risk of harm before lifting an influential user’s account suspension.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #10
Facebook should document any exceptional processes that apply to influential users.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #11
Facebook should more clearly explain its newsworthiness allowance.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #12
With regard to cross-check review for influential users, Facebook should clearly explain the rationale, standards, and processes of review, including the criteria used to determine which pages and accounts are selected for inclusion.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #13
Facebook should report on the relative error rates and thematic consistency of determinations made through the cross-check process, compared with ordinary enforcement procedures.
4 Jun 2021
No further action
No further updates
Trump Suspension Rec #14
Facebook should review its potential role in the election fraud narrative that sparked violence in the United States on January 6, 2021, and report on its findings.
4 Jun 2021
Work Meta already does
No further updates
Trump Suspension Rec #15
Facebook should be clear in its Corporate Human Rights policy how it collects, preserves and shares information related to investigations and potential prosecutions, including how researchers can access that information.
4 Jun 2021
No further action
No further updates
Trump Suspension Rec #16
Facebook should explain in its Community Standards and Guidelines its strikes and penalties process for restricting profiles, pages, groups and accounts on Facebook and Instagram in a clear, comprehensive, and accessible manner.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #17
Facebook should tell users how many violations, strikes, and penalties they have, as well as the consequences of future violations.
4 Jun 2021
Implementing fully
Complete
Trump Suspension Rec #18
In its transparency reporting, Facebook should include numbers of profile, page, and account restrictions, including the reason and manner in which enforcement action was taken, with information broken down by region and country.
4 Jun 2021
Implementing in part
In progress
Trump Suspension Rec #19
Facebook should develop and publish a policy that governs its response to crises or novel situations where its regular processes would not prevent or avoid imminent harm.
4 Jun 2021
Implementing fully
Complete
Depiction of Zwarte Piet Rec #1
Facebook should link the rule in the Hate Speech Community Standard prohibiting blackface to the company’s reasoning for the rule, including harms it seeks to prevent.
13 May 2021
Implementing fully
Complete
Depiction of Zwarte Piet Rec #2
In line with its recommendation in the Armenians in Azerbaijan case, the board said that Facebook should “ensure that users are always notified of the reasons for any enforcement of the Community Standards against them, including the specific rule Facebook is enforcing.” In this case, any notice to users should specify the rule on blackface and link to the above-mentioned resources that explain the harm this rule seeks to prevent. The board asked Facebook to provide a detailed update on its “feasibility assessment” of the prior recommendations on this topic, including the specific nature of any technical limitations and how these can be overcome.
13 May 2021
Implementing in part
Complete
Punjabi Concern Over the RSS Rec #1
Translate its Community Standards and Internal Implementation Standards into Punjabi. Facebook should aim to make its Community Standards accessible in all languages widely spoken by its users.
27 May 2021
Implementing in part
Complete
Punjabi Concern Over the RSS Rec #2
The company should restore human review and access to a human appeals process to pre-pandemic levels as soon as possible while fully protecting the health of Facebook’s staff and contractors.
27 May 2021
Work Meta already does
No further updates
Punjabi Concern Over the RSS Rec #3
Facebook should improve its transparency reporting to increase public information on error rates by making this information viewable by country and language for each Community Standard.
27 May 2021
Implementing in part
In progress
Veiled Threat Based on Religious Beliefs Rec #1
Provide people with additional information regarding the scope and enforcement of restrictions on veiled threats. This would help people understand what content is allowed in this area. Facebook should make its enforcement criteria public. These criteria should consider the intent and identity of the person, as well as their audience and the wider context.
11 Mar 2021
Implementing in part
Complete
Medications to Treat COVID-19 Rec #1
Clarify the Community Standards with respect to health misinformation, particularly with regard to COVID-19. Facebook should set out a clear and accessible policy on health misinformation, consolidating and clarifying existing policies in one place.
25 Feb 2021
Implementing in part
Complete
Medications to Treat COVID-19 Rec #2
Facebook should 1) publish its range of enforcement options within the Community Standards, ranking these options from most to least intrusive based on how they infringe freedom of expression, 2) explain what factors, including evidence-based criteria, the platform will use in selecting the least intrusive option when enforcing its Community Standards to protect public health and 3) make clear within the Community Standards what enforcement option applies to each policy.
25 Feb 2021
Implementing in part
Complete
Medications to Treat COVID-19 Rec #3
To ensure enforcement measures on health misinformation represent the least intrusive means of protecting public health, Facebook should clarify the particular harms it is seeking to prevent and provide transparency about how it will assess the potential harm of particular content.
25 Feb 2021
Work Meta already does
No further updates
Medications to Treat COVID-19 Rec #4
To ensure enforcement measures on health misinformation represent the least intrusive means of protecting public health, Facebook should conduct an assessment of its existing range of tools to deal with health misinformation and consider the potential for development of further tools that are less intrusive than content removals.
25 Feb 2021
Implementing in part
Complete
Medications to Treat COVID-19 Rec #5
Publish a transparency report on how the Community Standards have been enforced during the COVID-19 global health crisis.
25 Feb 2021
Work Meta already does
No further updates
Medications to Treat COVID-19 Rec #6
Conduct a human rights impact assessment with relevant stakeholders as part of its process of rule modification.
25 Feb 2021
Implementing in part
Complete
Medications to Treat COVID-19 Rec #7
In cases where people post information about COVID-19 treatments that contradicts the specific advice of health authorities and where a potential for physical harm is identified but is not imminent, Facebook should adopt a range of less intrusive measures.
25 Feb 2021
No further action
No further updates
Nazi Quote Rec #1
Ensure that users are always notified of the Community Standards Facebook is enforcing.
25 Feb 2021
Implementing in part
Complete
Nazi Quote Rec #2
Explain and provide examples of the application of key terms used in the Dangerous Individuals and Organizations policy. These should align with the definitions used in Facebook’s Internal Implementation Standards.
25 Feb 2021
Implementing in part
Complete
Nazi Quote Rec #3
Provide a public list of the organizations and individuals designated “dangerous” under the policy on dangerous individuals and organizations.
25 Feb 2021
No further action
No further updates
Nagorno-Karabakh Dispute Rec #1
Go beyond naming the policy that Facebook is enforcing, and add more specifics about which part of the Facebook Community Standards the user’s content violated.
25 Feb 2021
Implementing in part
Complete
Breast Cancer Symptoms and Nudity Rec #1
Improve automated detection of images with text overlay so that posts raising awareness of breast cancer symptoms are not wrongly flagged for review. Facebook should also improve its transparency reporting on its use of automated enforcement.
25 Feb 2021
Implementing fully
In progress
Breast Cancer Symptoms and Nudity Rec #2
Revise the Instagram Community Guidelines to specify that female nipples can be shown to raise breast cancer awareness and clarify that where there are inconsistencies between the Community Guidelines and the Community Standards, the latter take precedence.
25 Feb 2021
Implementing in part
Complete
Breast Cancer Symptoms and Nudity Rec #3
When communicating to people about how they violated policies, be clear about the relationship between the Community Guidelines and Community Standards.
25 Feb 2021
Implementing in part
Complete
Breast Cancer Symptoms and Nudity Rec #4
Ensure people can appeal decisions taken by automated systems to human review when their content is found to have violated the policy on adult nudity and sexual activity.
25 Feb 2021
Implementing fully
Complete
Breast Cancer Symptoms and Nudity Rec #5
Inform people when automation is used to take enforcement action against their content, including accessible descriptions of what this means.
25 Feb 2021
Implementing fully
Complete
Breast Cancer Symptoms and Nudity Rec #6
Expand transparency reporting to disclose data on the number of automated removal decisions, and the proportion of those decisions subsequently reversed following human review.
25 Feb 2021
Implementing in part
In progress
Breast Cancer Symptoms and Nudity Rec #7
Revise the “short” explanation of the Instagram Community Guidelines to clarify that the ban on adult nudity is not absolute.
25 Feb 2021
Implementing in part
Complete
