2025-009-FB-UA, 2025-010-FB-UA, 2025-011-FB-UA
Today, December 3, 2024, the Oversight Board selected a case bundle appealed by Facebook users regarding three pieces of content shared during the UK riots that took place between July 30 and August 7, 2024.
The first piece of content is a post expressing agreement with the riots, calling for more mosques to be smashed and for buildings where “scum are living” to be set on fire. On initial review, Meta left the content up; on further review, we determined that it did in fact violate our Violence and Incitement policy and removed the post.
The second piece of content is a reshare of another post featuring what appears to be an AI-generated image of a very large man in a Union Jack (the UK flag) t-shirt looming over several men wearing salwar kameezes. A text overlay on the image provides the time and place of a protest.
The third piece of content is a repost of what appears to be an AI-generated image of four bearded men in salwar kameezes, one of whom is waving a knife, running after a crying blond-haired toddler in a Union Jack t-shirt. The image is accompanied by a caption that says “wake up.”
Meta determined that neither the second nor third piece of content violated our Violence and Incitement or Hate Speech policies, and left both pieces of content up.
We will implement the Board's decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when it is issued.
We welcome the Oversight Board's decision today, April 29, 2025, on this case. The Board overturned Meta’s original decision to leave up the content in all three cases. Since Meta previously removed the content in the first case, there will be no further action on this case. Meta will act to comply with the Board's decision and remove the content in the second and third cases within 7 days.
When it is technically and operationally possible to do so, we will also take action on content that is identical to, and in the same context as, the content in the first case. For more information, please see our Newsroom post about how we implement the Board's decisions.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
To improve the clarity of its Violence and Incitement Community Standard, Meta should specify that all high-severity threats of violence are prohibited whether they target people or places.
The Board will consider this recommendation implemented when Meta updates the Violence and Incitement Community Standard.
Commitment Statement: We will consider updating our Violence and Incitement Community Standard to clarify that we remove high-severity threats of violence against some places when they are occupied by people.
Considerations: Our Violence and Incitement policy aims to remove content that threatens people with violence. Such content may not mention people directly, but rather the places they occupy; threatening a place can amount to threatening the people in it. Therefore, as the Board also notes, our internal guidelines treat places and locations occupied by people as valid targets under the Violence and Incitement policy. This includes calls to burn down a building or bomb a street. We treat threats against places where people are not present as vandalism under our Coordinating Harm and Promoting Crime policy. This includes calls to break statues or burn books. In prior responses to Board recommendations, we have noted that we are considering updates to our Violence and Incitement policy to provide more clarity and transparency around our approach to this type of content. We will explore this recommendation in line with that ongoing work and will provide updates on its status in future reports to the Board.
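Purely as an illustration of the routing described above, and not a representation of Meta's enforcement systems, the occupied/unoccupied distinction between the two policies could be expressed as a minimal sketch; the enum names and function are assumptions made for clarity.

```python
# Illustrative routing of place-directed threats between the two policies
# named in the considerations above. Names here are assumptions for clarity,
# not Meta's internal enforcement code.
from enum import Enum

class Policy(Enum):
    VIOLENCE_AND_INCITEMENT = "Violence and Incitement"
    COORDINATING_HARM = "Coordinating Harm and Promoting Crime"

def route_place_threat(occupied_by_people: bool) -> Policy:
    """High-severity threats against occupied places are treated as threats
    against the people in them; threats against unoccupied places (statues,
    books) are treated as vandalism."""
    if occupied_by_people:
        return Policy.VIOLENCE_AND_INCITEMENT
    return Policy.COORDINATING_HARM

print(route_place_threat(True).value)   # e.g. a call to burn down an occupied building
print(route_place_threat(False).value)  # e.g. a call to break a statue
```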
To improve the clarity of its Hateful Conduct Community Standard, Meta should develop clear and robust criteria for what constitutes allegations of serious criminality, based on protected characteristics, in visual form. These criteria should align with and adapt existing standards for text-based hateful conduct, ensuring consistent application across both text and imagery.
The Board will consider this recommendation implemented when the internal implementation standards reflect the proposed change.
Commitment Statement: We will assess whether we can incorporate visual indicators that align with our existing text-based indicators for what constitutes an allegation of serious criminality. This will involve developing key indicators and testing them to understand their impact on policy enforcement.
Considerations: Under the Hateful Conduct policy, we remove direct attacks that target people on the basis of protected characteristics or immigration status with allegations of serious criminality, which includes equating a person to a violent criminal such as a murderer or terrorist. However, as the Board indicates, the current guidance does not explicitly outline visual indicators of criminality, such as when an image uses certain signals to compare a group of people to violent criminals. Creating an exhaustive list of indicators is difficult, in part because the same imagery may be shared in non-violating ways, and we want to approach this recommendation in a way that protects free expression while addressing the Board's concern.
We also continue to assess the feasibility of a prior Oversight Board recommendation to clarify how the Hateful Conduct policy addresses content that implies reference to a state rather than a nationality. We will include updates on these recommendations in future reports to the Board.
To ensure Meta responds effectively and consistently to crises, the company should revise the criteria it has established to initiate the Crisis Policy Protocol. In addition to the current approach, in which the company has a list of conditions that may or may not result in protocol activation, the company should identify core criteria that, when met, are sufficient for the immediate activation of the protocol.
The Board will consider this recommendation implemented when Meta briefs the Board on its new approach to activating the Crisis Policy Protocol and publishes a disclosure of the procedures in its Transparency Center.
Commitment Statement: We have begun reviewing our Crisis Policy Protocol (CPP) in line with the Oversight Board's recommendations and are assessing the feasibility of updating our CPP to ensure we continue to balance quick responses with principled activation of the framework.
Considerations: Our Crisis Policy Protocol uses policy and product interventions to help us respond to risks of imminent harm both on and off our platforms. The protocol was created following a previous Oversight Board recommendation and was developed, as part of our Policy Forum process, using insights from internal and external experts, including more than 50 global external stakeholders with subject-matter expertise in conflict prevention, humanitarian response, human rights, and national security, among other areas.
The CPP has been used in crisis events since its rollout in 2022 and has helped Meta respond quickly to global events in conjunction with other mechanisms, such as Integrity Product Operations Centers (IPOCs) and Temporary High Risk Location designations, which we detail in our Transparency Center. The CPP aims to balance rapid response to emerging risks and crises with a principled, consistent global policy response, while remaining flexible as information and conditions change during a crisis. It is informed by learnings from past crises, and we have continued to adapt it as we have enacted it in a number of crisis situations. With this in mind, we will assess the feasibility of updates to our CPP that ensure we continue to balance quick response with principled activation of the framework.
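To make the Board's proposed distinction concrete, the sketch below contrasts the two activation paths: the existing approach, in which a list of conditions is weighed and may or may not result in activation, and the recommended addition of core criteria that are individually sufficient for immediate activation. All criterion names and the threshold are illustrative assumptions, not Meta's actual implementation.

```python
# Hypothetical sketch of the two CPP activation paths the recommendation
# describes. Criterion names and threshold are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class CrisisSignals:
    # Assumed "core" criteria: each sufficient on its own (the Board's proposal).
    imminent_offline_violence: bool = False
    state_ordered_network_shutdown: bool = False
    # Existing approach: weighed conditions that may or may not trigger activation.
    weighed_conditions: dict = field(default_factory=dict)

ACTIVATION_THRESHOLD = 0.7  # illustrative threshold for the weighed path

def should_activate_cpp(signals: CrisisSignals) -> bool:
    # Path 1 (recommended addition): any core criterion activates immediately.
    if signals.imminent_offline_violence or signals.state_ordered_network_shutdown:
        return True
    # Path 2 (current approach): weigh the listed conditions against a threshold.
    return sum(signals.weighed_conditions.values()) >= ACTIVATION_THRESHOLD

# A report spike alone is weighed; imminent violence activates immediately.
print(should_activate_cpp(CrisisSignals(weighed_conditions={"report_spike": 0.4})))  # False
print(should_activate_cpp(CrisisSignals(imminent_offline_violence=True)))            # True
```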
We will provide updates on the status of this recommendation in future reports to the Board.
To ensure accurate enforcement of its Violence and Incitement and Hateful Conduct policies in future crises, Meta’s Crisis Policy Protocol should ensure potential policy violations that could lead to likely and imminent violence are flagged for in-house human reviewers. These reviewers should provide time-bound, context-informed guidance for at-scale reviewers, including for image-based violations.
The Board will consider this implemented when Meta shares documentation on this new Crisis Policy Protocol lever, outlining how (1) potential violations are flagged for in-house review; (2) context-informed guidance is cascaded down; and (3) that guidance is implemented for at-scale reviewers.
Commitment Statement: Our Crisis Policy Protocol (CPP) is designed to help us respond to imminent risks on and off our platforms with specific policy and product actions that help keep people safe. Alongside this, our Global Operations teams can leverage a Crisis Assessment Framework to determine the appropriate operational response to similar risks. We will continue leveraging this Crisis Assessment Framework, a separate but related framework to our Crisis Policy Protocol, to ensure sensitive content related to developing crises is properly escalated and assessed by in-house teams with the ability to apply context.
Considerations: Over the last two years, Global Response Operations (GRO) has worked with strategic response teams to improve and strengthen our internal protocols and thresholds for crisis management and to bring our processes across Policy and Operations into alignment. To that end, GRO reviewed previous thresholds, levers, and signals indicative of off-platform, real-world crises and reworked our protocols to ensure Operations is prepared to prioritize the real-world developments that affect its teams. In the summer of 2024, we launched an updated Crisis Assessment Framework internally that enables the critical event management team within GRO to review on-platform and proprietary internal signals against a given external event, such as a geopolitical crisis, to determine whether our internal teams or systems need to increase our posture of support; in nearly all cases, these assessments align directly with a CPP designation. For example, in the last four months, GRO has used the Crisis Assessment Framework to shape our operational response to events such as the protests in Türkiye in March 2025 and the India-Pakistan conflict over Kashmir in May 2025.
Some of the internal signals evaluated before designating a crisis include (1) the degree of increase in user reports and/or reports from external regulatory bodies; (2) whether our standard risk detection tools are sufficient to monitor trends in the related content; and (3) any gaps in our ability to enforce accurately and promptly through standard processes. Once a crisis is designated, Operations and Policy teams work hand in hand, and they have jointly established guidelines for providing enforcement updates and escalation guidance to internal escalations teams, regional experts, and at-scale reviewers. This includes guidance for reviewing and escalating image-based violations. Although guidance to at-scale reviewers varies from one crisis to another, we regularly deliver guidance to escalate content containing potential violations of our Violence and Incitement or Coordinating Harm and Promoting Crime policies, including image-based content, to GRO teams. These GRO teams operate on time-bound protocols and are trained in applying context-only policies, as well as in escalating to Content Policy and other teams to deliberate on the approach to enforcement.
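Purely as an illustrative sketch, the three internal signals described above could be encoded as a pre-designation checklist along the following lines; the field names, threshold, and decision rule are assumptions made for explanation and do not reflect Meta's internal systems.

```python
# Illustrative only: the three internal signals above as a pre-designation
# checklist. Field names, threshold, and rule are assumptions for explanation.
from dataclasses import dataclass, field

@dataclass
class CrisisAssessment:
    report_spike_ratio: float                  # signal 1: reports vs. baseline
    standard_tooling_sufficient: bool          # signal 2: can standard tools monitor trends?
    enforcement_gaps: list = field(default_factory=list)  # signal 3: gaps in standard processes

def needs_crisis_designation(a: CrisisAssessment, spike_threshold: float = 3.0) -> bool:
    """Flag for escalation when reports spike and standard processes fall short."""
    reports_escalated = a.report_spike_ratio >= spike_threshold
    standard_processes_short = (not a.standard_tooling_sufficient) or bool(a.enforcement_gaps)
    return reports_escalated and standard_processes_short

# Example: a 4x report spike plus a gap in timely image review gets flagged.
print(needs_crisis_designation(CrisisAssessment(4.0, True, ["image review latency"])))  # True
```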
We will work to identify the appropriate materials to confidentially share with the Board outlining our process for providing guidance to reviewers during a crisis.
As the company rolls out Community Notes, it should undertake continuous assessments of their effectiveness compared to third-party fact-checking. These assessments should focus on the speed, accuracy, and volume of notes or labels affixed in situations where the rapid dissemination of false information creates risks to public safety.
The Board will consider this recommendation to be implemented when Meta updates the Board every six months until implementation is completed and shares the results of this evaluation publicly.
Commitment Statement: As is typical for new system rollouts, we will work to assess the performance of the Community Notes program over time. However, Community Notes are not the same as third-party fact-checks. Community Notes is a new way for our community to decide when to add more context to posts, and the community can add a note for a host of reasons: to provide additional information, a tip, a helpful fact, and so on. The fact that a piece of content receives a note does not mean it is false or constitutes misinformation. As we continuously evaluate the rollout, we will consider which metrics best measure the efficacy of the program.
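For illustration only, the kind of comparison the Board asks for could be summarized with three per-system metrics, speed, accuracy, and volume; the data shape and the accuracy proxy below are assumptions, not a description of Meta's measurement pipeline.

```python
# A hedged sketch of the per-system metrics the Board names: speed, accuracy,
# and volume. The data shape and accuracy proxy are illustrative assumptions.
from dataclasses import dataclass
from statistics import median

@dataclass
class AttachedLabel:
    hours_to_attach: float   # time from post creation to note/label (speed)
    rated_accurate: bool     # e.g. judged against an expert-reviewed sample

def summarize(labels: list) -> dict:
    """Volume, median hours-to-attach, and accuracy rate for one system."""
    return {
        "volume": len(labels),
        "median_hours_to_attach": median(l.hours_to_attach for l in labels),
        "accuracy": sum(l.rated_accurate for l in labels) / len(labels),
    }

# Compare Community Notes against third-party fact-checks over the same period.
notes = [AttachedLabel(5.0, True), AttachedLabel(12.0, True), AttachedLabel(2.5, False)]
fact_checks = [AttachedLabel(30.0, True), AttachedLabel(48.0, True)]
print(summarize(notes))        # volume 3, median 5.0 hours, accuracy ~0.67
print(summarize(fact_checks))  # volume 2, median 39.0 hours, accuracy 1.0
```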
Considerations: When we announced Community Notes earlier this year, we committed to continuously testing and improving contributors' ability to write and rate notes on content across Facebook, Instagram and Threads, with the aim of improving users' ability to add more context to more types of content.
Meta has launched the Community Notes program in the US market and is building and improving the program before scaling it to other regions; our teams are working to ensure the program operates successfully. We plan to incorporate what we learn as we launch in more countries around the world, identifying and mitigating challenges along the way.
We will continue to gather insights from our preliminary rollout and initial scale phases to improve the program. We will consolidate key learnings throughout this process and assess the best way to share this information with the Board in the future.