Our Approach To Dangerous Organizations and Individuals

UPDATED

NOV 1, 2024

There is no place on our platform for groups or individuals that promote violence, organized crime, hate or terrorism. Over years of work we have developed, and continue to develop, Dangerous Organizations and Individuals (DOI) policies and processes to address this type of content. While we are committed to providing space for people to be able to talk about events that happen around the world that impact their lives, families and communities, our policy exists to draw a line for what is not allowed on our platforms.

Our Policies are Designed to Keep our Platforms Safe

Meta has one of the most comprehensive policies in the industry targeting terrorist organizations, hate groups, organized criminal organizations like cartels, violence-inducing entities, and perpetrators of designated violating events such as mass shootings or terrorist attacks.

Under this DOI policy, we designate and ban individuals and organizations engaged in this activity, and remove glorification and support of them when we become aware of it. We also designate and ban violence-inducing entities—entities that are engaged in preparing or advocating for future violence but have not necessarily engaged in violence to date—and remove glorification and support of them as well.

Our policies are public, and you can read more details about them on our Community Standards page. In addition, we regularly publish details about how much of this content we remove in our quarterly Community Standards Enforcement Report.

How and Why We Designate

We do not allow organizations or individuals that proclaim a violent or hateful mission or are engaged in violence to have a presence on Meta’s platforms. We assess these entities based on their behavior both online and offline, and most significantly, their ties to violence. Under this policy, we designate individuals, organizations, and networks of people. To ensure more effective, proportionate, and consistent enforcement, we have divided the designations into two tiers. You can read more about these tiers, and the types of organizations that each encompasses, in our Community Standards.

We have our own robust internal process — that takes into account many sources of information — that we use to evaluate organizations and individuals for possible designation. Developing our own definitions and process for designation, agnostic to region or ideology, allows us to be robust, fair and proactive in protecting our platforms.

This work is not static. We continuously assess risks and evaluate groups and individuals for designation based on their behavior, changing circumstances, new information, internal expert analysis, and input from external stakeholders. For example, as we announced in January 2024, when a designated organization or individual changes its behavior, it is also eligible for removal from our list—specifically if it is (1) not designated by the U.S. government as a Specially Designated Narcotics Trafficking Kingpin (SDNTK), Foreign Terrorist Organization (FTO), or Specially Designated Global Terrorist (SDGT); (2) no longer involved in violence or hate; and (3) not symbolic of violence and hate, and not used to incite further violence or spread violent or hateful propaganda.

While we deeply value transparency and are continually assessing the tradeoffs, we currently do not share the details of our designation list to mitigate security and legal risks and to prevent these dangerous actors from circumventing our enforcement mechanisms.

How We Enforce Against Dangerous Organizations and Individuals

We invest heavily in people, technology, partnerships, and research to counter DOI activity.

  • Technology: We approach this space through a combination of AI and human intelligence — and also invest in research and work with outside experts and organizations to stay on top of this changing environment. We use AI to detect video, images, audio, text, and even graphics like logos and depictions of violence. We also make open-source tools available across the industry to help partners access technology to fight dangerous organizations and individuals on their own platforms. For example, in December 2022, to help make it easier for every company, across the industry, to keep their platforms free of terrorist content, we made “Hasher Matcher Actioner” (HMA) available — a free-to-use, open-source software tool that helps platforms identify copies of images or videos and take action against them en masse.
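The hash-and-match flow described above can be sketched in a few lines. This is an illustrative toy only, not HMA's actual API: real systems use perceptual hashes (which match near-duplicates of an image or video, not just exact copies), while this sketch uses an exact cryptographic hash purely to show the hasher/matcher/actioner pattern. All names are hypothetical.

```python
import hashlib


class HashMatcher:
    """Toy hasher/matcher/actioner: keeps a bank of known violating-content
    hashes and flags new uploads that match one of them."""

    def __init__(self):
        self.hash_bank = set()  # fingerprints of previously identified content

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Stand-in for a perceptual hash of an image or video frame.
        return hashlib.sha256(content).hexdigest()

    def add_known_violation(self, content: bytes) -> None:
        # "Hasher": fingerprint known violating content into the bank.
        self.hash_bank.add(self.fingerprint(content))

    def action(self, content: bytes) -> str:
        # "Matcher" + "actioner": fingerprint the upload, compare against
        # the bank, and return a decision label.
        if self.fingerprint(content) in self.hash_bank:
            return "remove"
        return "allow"


matcher = HashMatcher()
matcher.add_known_violation(b"previously identified violating image bytes")
print(matcher.action(b"previously identified violating image bytes"))  # remove
print(matcher.action(b"unrelated holiday photo bytes"))                # allow
```

Because only hashes are shared, companies can exchange the bank itself without ever exchanging the underlying violating content.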

  • People: Context and language are often complex, and technology can't always distinguish glorification of a DOI from culturally specific criticism. That’s why Meta has invested billions of dollars and has nearly 40,000 people working on safety and security. Within that, we have a cross-functional team of hundreds of people dedicated to this work, with expertise ranging from law enforcement and national security to counterterrorism intelligence and academic studies of radicalization.

  • Partnerships: As long as harmful activity exists in the world, it will exist on the internet, and no one company can solve these problems on its own. That's why partnerships with others - companies, civil society, researchers and governments - are so crucial. We engage with governments and intergovernmental agencies around the world, and partner with organizations with expertise in terrorism, violent extremism, cyber intelligence, and adversarial behavior online.


    GIFCT: We joined forces with YouTube, Microsoft, and Twitter in 2017 to create the Global Internet Forum to Counter Terrorism (GIFCT), an organization built to prevent terrorists and violent extremists from exploiting digital platforms. GIFCT became an independent NGO in 2019 and coordinates crisis response across the industry in response to attacks.


    Law Enforcement: When we become aware of a specific, imminent and credible threat to human life, we do not hesitate to notify law enforcement. Meta also works with law enforcement around violent attacks, though we always scrutinize every government request we receive to make sure it is legally valid and is consistent with internationally recognized standards on human rights, including due process, privacy, free expression and the rule of law. If we determine that a request appears to be deficient or overly broad, we push back. You can read more about our work with law enforcement here.

  • Independent Research: We commission independent research from think-tanks, academics and NGOs on various topics of violent extremist and terrorist use of the internet in order to help our industry understand and make progress on these important issues. In 2022, Meta announced a research partnership with the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism to analyze evolving trends in violent extremism and effective tools that help communities combat it.

  • Strategic Network Disruptions: Though most of our enforcement against terrorism and organized hate comes from routine content removal, sometimes that’s not enough. This is an adversarial space, and designated dangerous entities sometimes attempt to get around our enforcement and reconstitute networks on our platforms. To combat these networks, we use a key approach called a Strategic Network Disruption (SND). An SND allows us to remove a network of already-banned DOI actors all at once, either when they’re first designated, or as a part of our ongoing work to keep these designated groups off our platform. Disabling these clusters in their entirety makes it more difficult for them to return to the platform. It also sends a clear message that we are aware of their presence and that these groups are not welcome on our platforms. Finally, this technique allows us to study how these DOIs might try to bypass our detection, as well as how they might attempt to return to our apps after we remove their accounts.


    • In June of 2023, the independent academic journal Proceedings of the National Academy of Sciences (PNAS) published a paper written by Meta researchers Daniel Robert Thomas and Laila A. Wahedi studying the efficacy of our SND strategy on disrupting hate groups. What we found is that the tactic works — and helps create a healthier and safer user experience on our platforms.
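The core idea of disabling a cluster in its entirety, rather than account by account, can be sketched as a graph traversal: starting from accounts already designated, walk their connections and collect the whole reachable network for removal in one pass. This is a hypothetical simplification; the account names, connection graph, and single "connections" signal are all illustrative stand-ins for what is in practice a much richer set of behavioral signals.

```python
from collections import deque


def find_network(seed_accounts, connections):
    """Return every account reachable from the seeds via the connection graph
    (a breadth-first search), so the whole cluster can be disabled at once."""
    to_visit = deque(seed_accounts)
    network = set(seed_accounts)
    while to_visit:
        account = to_visit.popleft()
        for neighbor in connections.get(account, []):
            if neighbor not in network:
                network.add(neighbor)
                to_visit.append(neighbor)
    return network


# Toy graph: designated account "a1" is linked to supporting accounts.
connections = {
    "a1": ["a2", "a3"],
    "a2": ["a1", "a4"],
    "a3": ["a1"],
    "a4": ["a2"],
    "a5": ["a6"],  # unrelated cluster; left untouched
}

to_disable = find_network({"a1"}, connections)
print(sorted(to_disable))  # ['a1', 'a2', 'a3', 'a4']
```

Removing the component as a unit, rather than one account at a time, is what makes it harder for the remaining accounts to absorb the loss and rebuild.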

Prevention Work

At Meta, we are dedicated to maintaining the safety and integrity of our platforms. One of the ways we achieve this is through upstream prevention, which involves taking proactive measures to stop the influence of DOIs before it can take root in individuals or communities. This includes using targeted interventions and promoting positive speech. We believe that constructive dialogue is essential, and we strive to create an environment that encourages such exchanges.

To further strengthen our efforts, we actively work with Civil Society Organizations (CSOs) that are dedicated to countering extremism, organized hate, and criminal activities. We support these organizations by building their capacity and providing them with the necessary resources to create counterspeech content. This content challenges extremist narratives and offers alternative perspectives, helping to promote a more positive and inclusive online environment.

Our approach to countering extremism is multi-faceted and proactive, involving a combination of prevention, collaboration, and promotion of positive speech.

  • The Resiliency Initiative is a program that trains CSOs across Asia, Africa and the United States in Prevention/Countering Violent Extremism (P/CVE) campaign strategies and builds their capacity for effective counterspeech campaigns. As part of the Resiliency Initiative, Meta launched a US-based partnership with Search for Common Ground (SFCG) to support community-based partners who are working locally to counter hate-fueled violence and build social and community resilience.

  • Safer Searches: We also, when appropriate, point people to resources when they search for terms associated with dangerous organizations and individuals. When users search for DOI terms on Facebook, Instagram, and Threads, they are directed to educational and support resources instead, which helps deter searches for DOI content.


    We also run a search program in several countries, including Australia, the United States, the United Kingdom, Germany, India, Indonesia, and Pakistan. If someone searches for terms that are specific to each market and typically focus on particular DOI harms, we offer them additional information and resources in collaboration with a local CSO that specializes in offering support services to users who are at risk of extremism.

  • Providing Resources to Community Organizations: We also run a Safety Ads Program, which supports CSO partners in proactively preventing harm by providing them with advertising credits and strategic support when they are running counterspeech programs. By working together with these organizations, we aim to grow our partnerships to combat DOIs.

This work is ongoing, and we know that as long as dangerous organizations and individuals exist in the world, they will exist on the internet, which is why we remain vigilant. These groups adopt new tactics to avoid detection and try to evade our policies and enforcement — this adversarial behavior is why we’re constantly working to stay one step ahead, evaluating and updating our approach.