2023-003-FB-MR
Today, the Oversight Board selected a case referred by Meta regarding a video posted by the Facebook Page of Cambodian Prime Minister Hun Sen that contains a lengthy speech given by the Prime Minister. The speech covered a wide range of topics, including the country’s relationship with China and the COVID-19 pandemic, and included statements in which the Prime Minister offered his opponents a choice between legal action and physical force. These statements were understood as statements of intent to commit violence against political opponents.
Upon initial review, Meta marked this content as non-violating. However, upon additional review, we determined that the content violated our Violence and Incitement policy, as laid out in the Facebook Community Standards, but decided that the newsworthiness allowance applies and left the content up.
Meta referred this case to the board because we found it significant and difficult as it creates tension between our values of safety and voice.
Meta prohibits threats of violence on our platforms in order to prevent potential offline harm. This includes “threats that lead to serious injury” or “statements of intent to commit violence.” However, a newsworthiness allowance may be granted when the content has high public interest value – particularly when it may serve as a warning of future government action – and that value outweighs the risk of harm.
We will implement the board’s decision once it has finished deliberating, and we will update this post accordingly. Please see the board’s website for the decision when they issue it.
We welcome the Oversight Board’s decision today on this case. The board overturned Meta’s original decision to maintain the content on Facebook under the newsworthiness allowance. Meta will act to comply with the board’s decision and remove the content.
In accordance with the bylaws, we will also initiate a review of identical content with parallel context. For more information, please see our Newsroom post about how we implement the board’s decisions.
We will conduct a review of all the recommendations provided by the board in addition to its decision, and respond to the board's recommendation on suspending Prime Minister Hun Sen’s accounts as soon as we have undertaken that analysis. This will be separate from any consequences to the accounts as a result of our enforcement of the board's binding decision in this case.
Meta should clarify that its policy for restricting accounts of public figures applies to contexts in which citizens are under continuing threat of retaliatory violence from their governments. The policy should make it clear that it is not restricted solely to single incidents of civil unrest or violence and that it applies where political expression is pre-emptively suppressed or responded to with violence or threats of violence from the state. The Board will consider this recommendation implemented when Meta's public framework for restricting accounts of public figures is updated to reflect these clarifications.
Our commitment: We will take no further action on this recommendation as this protocol is not meant for use in this type of context and existing policies against threats of violence are in place.
Considerations: As we state in our Community Standards, our commitment to expression is paramount. The goal of our Community Standards is to create a place for expression and to give people a voice. In some cases, we may allow content – which would otherwise go against our Community Standards – if it’s newsworthy and in the public interest.
Our commitment to voice is a critical component of our protocol on restricting accounts of public figures, which we created following the Board’s decision in the Donald Trump case, specifically to address the Board’s concerns regarding the indefinite suspension of public figures. This protocol is designed to apply a severe time-bound restriction, or suspension, on the account of a public figure who is inciting or celebrating ongoing civil unrest or violence in crisis situations.
The protocol is not designed for situations where a history of state violence or human rights restrictions has resulted in ongoing state restrictions on expression for an indeterminate period of time. Applying the protocol in those circumstances could lead to an indefinite suspension of a public figure’s account, which (apart from fairness issues) could be detrimental to people’s ability to access information from and about their leaders and to express themselves using Meta’s platforms.
The Oversight Board has previously underscored in multiple cases the importance of voice “in countries where freedom of expression is routinely suppressed,” including “in a context where civic space and media freedom is curtailed by the state.” In this context, “social media has played an important role in providing a platform for all people, including journalists, to share information about the protests in an environment where public comments and expert reports suggest the media landscape would benefit from greater pluralism.” Under these circumstances, allowing people to retain access to our platforms also “serves to enhance the value of ‘Safety’ by ensuring people have access to information and state violence is exposed.” Social media can provide “an important pathway to raise awareness” and serve as a “key forum for dissent.”
These concerns are particularly acute in Cambodia, where prior independent human rights due diligence highlighted the importance of our platforms to the information ecosystem in that country. The due diligence found, among other things, that:
Our products and services have been essential to freedom of information and expression in Cambodia given its restricted media environment;
Our platforms serve as an important source of independent news and a tool for activists to improve public officials’ accountability; and
Our platforms are important to Cambodia’s digital economy, particularly contributing to the growth and development of small- and medium-sized businesses in the country.
It is important to note that no user can continue violating our policies without enforcement actions. Influential users and public figures, including Hun Sen, are always subject to our Community Standards. Our Violence and Incitement Policy provides that we will remove content that includes language that may incite or facilitate serious violence, including threats of violence. If a user repeatedly violates this policy, their account will be suspended and eventually removed. This penalty framework is carefully designed to embody principles of fairness, proportionality, and transparency.
We believe that reliance on our Community Standards and existing enforcement framework, rather than on our protocol for restricting public figures, which is intended for time-bound crises, is a more appropriate approach to the circumstances presented in this case, as it more adequately balances our values of voice and safety.
Meta should update its newsworthiness allowance policy to state that content that directly incites violence is not eligible for a newsworthiness allowance, subject to existing policy exceptions. The Board will consider this recommendation implemented when Meta publishes an updated policy on newsworthy content explicitly setting out this limitation on the allowance.
Our commitment: As part of our newsworthy assessment, “[w]e remove content, even if it has some degree of newsworthiness, when leaving it up presents a risk of harm, such as physical, emotional and financial harm, or a direct threat to public safety.” We are considering ways to more precisely convey this commitment to address the recommendation.
Considerations: Under our existing Violence and Incitement policy, we do not allow content that includes language that may incite or facilitate serious violence. In rare instances, if we have determined that content might violate one of our policies, we may allow it if it is newsworthy and keeping it visible is in the public interest.
To make this determination, we balance the public interest value of the content against the risk of harm. We consider a number of factors, including whether the content surfaces an imminent threat to public health or safety or gives voice to a perspective currently being debated as part of a political process, as well as country-specific circumstances, the nature of the speech, and the political structure of the country (including the existence of a free press). When applying our newsworthy balancing test, we remove content "when leaving it up presents a risk of harm, such as physical, emotional and financial harm, or a direct threat to public safety."
We are reviewing the clarity of our newsworthiness language and, if we change that guidance, will share the update in our Transparency Center.
Meta should immediately suspend the official Facebook Page and Instagram account of Cambodian Prime Minister Hun Sen for a period of at least six months under Meta's policy on restricting accounts of public figures during civil unrest. The Board will consider this recommendation implemented when Meta suspends the accounts and publicly announces that it has done so.
Our commitment: Upon assessing Hun Sen’s Facebook Page and Instagram account, we determined that suspending those accounts outside our regular enforcement framework would not be consistent with our policies, including our protocol on restricting accounts of public figures during civil unrest.
Considerations: We have removed the content that was the subject of this case and, consistent with our policies, applied appropriate account-level penalties associated with that action. There is not currently any basis to suspend Hun Sen’s account under our policies.
As discussed in our response to Recommendation #1, we determined that suspending Hun Sen’s accounts is not appropriate under our protocol on restricting accounts of public figures during periods of civil unrest. In making this determination, we relied on our Crisis Policy Protocol (CPP) to assess on- and off-platform risks of imminent harm at the time Hun Sen made the post in question, to determine whether conditions in Cambodia constituted a crisis. Based on an assessment of the framework criteria contained in the protocol – including levels of ongoing civil unrest or violence, state-imposed restrictions on media, and any significant reductions of human rights as compared to the baseline conditions in the country – we determined that the situation in Cambodia did not meet the entry criteria threshold for crisis designation under the Crisis Policy Protocol.
Hun Sen’s accounts will continue to be subject to our generally applicable penalty system. Our Community Standards apply to all users around the world. When a user violates these standards, they may incur a strike against their account. We notify users when this happens and, if they continue to violate our policies, we may restrict certain account features as detailed in our Transparency Center.
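The escalating strike-to-penalty ladder described above could be sketched, purely illustratively, as a mapping from an account’s strike count to a restriction level. The thresholds and restriction names below are placeholder assumptions for illustration, not Meta’s published values.

```python
# Illustrative sketch of a strike-based penalty ladder.
# Strike thresholds and restriction labels are hypothetical, not Meta's actual values.

def penalty_for(strikes: int) -> str:
    """Map an account's accumulated strike count to an escalating restriction."""
    if strikes <= 1:
        return "warning"                 # notify the user of the violation
    if strikes <= 3:
        return "feature_limit"           # e.g. temporarily restrict certain features
    if strikes <= 5:
        return "temporary_suspension"    # time-bound account suspension
    return "account_removal"             # repeated violations lead to removal
```

The key property is monotonic escalation: repeated violations always move an account toward stronger restrictions, which is the proportionality principle the passage describes.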
We will have no further updates on this recommendation.
Meta should update its review prioritization systems to ensure that content from heads of state and senior members of government that potentially violates the Violence and Incitement policy is consistently prioritized for immediate human review. The Board will consider this recommendation implemented when Meta discloses details on the changes to its review-ranking systems and demonstrates how those changes would have ensured review for this and similar content from heads of state and senior members of government.
Our commitment: We will assess the technical feasibility and weigh the tradeoffs of ensuring that content posted by heads of state that is reported for potentially violating our Violence and Incitement policy is more consistently prioritized for human review relative to other high-severity content.
Considerations: Our systems generally surface potentially violating content from heads of state and senior members of government for human review in several ways, including through our normal proactive and reactive detection systems as well as via existing escalation pathways and our cross-check program. We are assessing the feasibility of adding additional signals to classifiers as another avenue to ensure we surface and review content that potentially violates our Violence and Incitement policy posted by this type of public figure.
We have enrolled the vast majority of heads of state and senior members of government into our Early Response Secondary Review (ERSR) program, which we have expanded upon in our responses and updates to the Cross-Check PAO. Our systems prioritize content according to the severity of the violation, in alignment with our Community Standards. Where we encounter potentially violating content that could be exposed to large audiences, our early response program has ranking mechanisms to ensure that we review the content within a designated time frame and take the necessary action. Because heads of state operate within the ERSR program, their reported content consistently qualifies for human review when flagged for any violation, including potential violations of our Violence and Incitement policy.
Where our ERSR program and detection systems have not captured potentially violating content for human review by default, we have escalation channels and appeals processes to address anomalies. However, as cross-check is primarily intended to prevent false positive violations rather than identify false negatives on potentially violating content, we are assessing the feasibility of adding additional signals to classifiers that determine the severity of content to ensure that this type of content is more effectively prioritized for human review.
While we believe our current systems – including cross-check and escalations based on regional market context, human rights considerations, and trusted flagger inputs – ensure that the vast majority of potential violence and incitement from heads of state or senior government officials is already prioritized for efficient human review, we are also considering opportunities to capture even more content that may fall into this category. In particular, we are considering adjustments to the criteria for entry into high-severity review queues, including additional signals indicating that content was posted by a head of state and that it is likely to have a high viral velocity.
Meta should implement product and/or operational guideline changes that allow more accurate review of long-form video (e.g. use of algorithms for predicting the timestamp of violation, ensuring proportional review time with length of the video, allowing videos to run 1.5 times or 2 times faster). The Board will consider this implemented when Meta shares its new long-form video moderation procedures with the Board, including metrics for showing improvements in review accuracy for long-form videos.
Our commitment: Our long-form video review optimization processes and protocols remain a priority across our operational teams. We already have a number of established processes geared towards improving video review accuracy and we will continue to iterate on new improvements. We will also evaluate metrics to demonstrate ongoing improvements in review accuracy for long-form videos to the public and to the Board.
Considerations: We currently have operational guidelines in place that govern long-form video review. Our protocol seeks to ensure accurate review of long-form video while optimizing for the efficiency and wellbeing of our reviewers. Our guidelines include a designated review protocol; explicit minimum review standards, including review duration; video indicators such as violation and reporting timestamps; and speed adjustments. Our guidelines also ensure that we apply progressive review to long-form video, including assessment of the title, description, caption, thumbnails, transcript, and comments attached to the video under review.
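The Board’s suggestion of proportional review time combined with 1.5x or 2x playback can be sketched as a small budget calculation. The constants and function name below are hypothetical placeholders, not Meta’s actual review-time standards.

```python
# Illustrative review-time budget for a long-form video: watch time at the
# chosen playback speed, plus fixed overhead for checking the title, caption,
# thumbnails, transcript, and comments. All constants are assumptions.

def review_budget_seconds(video_seconds: int,
                          playback_speed: float = 1.5,
                          overhead: int = 60) -> int:
    """Return the total seconds allotted to review one video."""
    watch_time = int(video_seconds / playback_speed)  # faster playback shortens this
    return watch_time + overhead
```

The point of tying the budget to video length is the one the recommendation makes: a one-hour speech gets proportionally more review time than a one-minute clip, rather than both competing for the same fixed slot.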
We have developed new categorical guidance pertaining to live videos and provided reviewers with the corresponding action hierarchy for such instances, while still optimizing for both reviewer precision and accuracy with widgets such as “watch this” indicators for potentially violating content as well as warning signs and interstitials. Additionally, we have a designated team that engages with live video reviewers on a recurring basis to surface protocol and tooling improvements. All our video review processes include video-specific resiliency features such as muting, adjustable skip lengths, and blurring. We continue to innovate in video review with solutions such as AI thumbnail generation to flag potentially violating content, increase the accuracy of existing thumbnails, and support reviewer efficiency.
We have rolled out efforts to detect looping videos, a long-form format used by repeat offenders on our platform. We have completed the automation needed to roll this out on Facebook and will scale it to Instagram in the second half of 2023.
We continue to improve our products and protocols for long-form and live video review; this remains an ongoing priority area across our enforcement teams. We will also continue to assess the most effective way to use metrics to publicly demonstrate review accuracy for long-form videos, and will share these in future updates.
In the case of Prime Minister Hun Sen, and in all account-level actions against heads of state and senior members of government, Meta should publicly reveal the extent of the action and the reasoning behind its decision. The Board will consider this recommendation implemented when Meta discloses this information for Hun Sen, and commits to doing so for future enforcements against all heads of state and senior members of government.
Our commitment: We may publicly explain the rationale behind future account-level enforcement actions against heads of state or senior members of government, but will do so on a case-by-case basis, balancing transparency and security considerations.
Considerations: While we have shared details about enforcement actions on the accounts belonging to Hun Sen in this case, and on the accounts of former U.S. President Trump, there may be circumstances where privacy and security considerations weigh against Meta publicly sharing details about actions taken on an account. Given these considerations, we cannot commit to always sharing all details about account restrictions on accounts of heads of state or senior members of governments.
Having integrated the Board’s guidance in the Trump Decision into our approach for account-level suspension of heads of state, we will continue to aim for transparency around this type of enforcement on a case-by-case basis. We consider this recommendation implemented in part and will have no further updates.