Misinformation is different from other types of speech addressed in our Community Standards because there is no way to articulate a comprehensive list of what is prohibited. With graphic violence or hate speech, for instance, our policies specify the speech we prohibit, and even people who disagree with those policies can follow them. With misinformation, however, we cannot draw such a clear line. The world is changing constantly, and what is true one minute may not be true the next. People also have different levels of information about the world around them, and may believe something is true when it is not. A policy that simply prohibits “misinformation” would not give useful notice to the people who use our services and would be unenforceable, as we don’t have perfect access to information.
Instead, our policies articulate different categories of misinformation and try to provide clear guidance about how we treat that speech when we see it. For each category, our approach reflects our attempt to balance our values of expression, safety, dignity, authenticity, and privacy.
We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm. We also remove content that is likely to directly contribute to interference with the functioning of political processes. In determining what constitutes misinformation in these categories, we partner with independent experts who possess knowledge and expertise to assess the truth of the content and whether it is likely to directly contribute to the risk of imminent harm. This includes, for instance, partnering with human rights organizations with a presence on the ground in a country to determine the truth of a rumor about civil conflict.
For all other misinformation, we focus on reducing its prevalence or creating an environment that fosters a productive dialogue. We know that people often use misinformation in harmless ways, such as to exaggerate a point (“This team has the worst record in the history of the sport!”) or in humor or satire (“My husband just won Husband of the Year.”) They also may share their experience through stories that contain inaccuracies. In some cases, people share deeply-held personal opinions that others consider false or share information that they believe to be true but others consider incomplete or misleading.
Recognizing how common such speech is, we focus on slowing the spread of hoaxes and viral misinformation, and directing users to authoritative information. As part of that effort, we partner with third-party fact checking organizations to review and rate the accuracy of the most viral content on our platforms (see here to learn more about how our fact-checking program works). We also provide resources to increase media and digital literacy so people can decide what to read, trust, and share themselves. We require people to disclose, using our AI-disclosure tool, whenever they post organic content with photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. We may also add a label to certain digitally created or altered content that creates a particularly high risk of misleading people on a matter of public importance.
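To make the disclosure-and-labeling outcomes described above easier to follow, here is a minimal illustrative sketch in Python. The field names, action names, and decision logic are hypothetical assumptions for illustration only; they mirror the two outcomes the paragraph describes (a possible penalty for a missing disclosure, and a possible label on high-risk altered media), not any actual Meta system.

```python
from dataclasses import dataclass


@dataclass
class Post:
    # Post contains photorealistic video or realistic-sounding audio
    # that was digitally created or altered.
    has_realistic_synthetic_media: bool
    # Creator disclosed this using the AI-disclosure tool.
    creator_disclosed: bool
    # Content creates a particularly high risk of misleading people
    # on a matter of public importance.
    high_risk_public_matter: bool


def illustrative_actions(post: Post) -> list[str]:
    """Hypothetical mapping from the paragraph above to possible outcomes."""
    actions = []
    if post.has_realistic_synthetic_media and not post.creator_disclosed:
        actions.append("may_apply_penalty_for_missing_disclosure")
    if post.has_realistic_synthetic_media and post.high_risk_public_matter:
        actions.append("may_add_context_label")
    return actions
```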
Finally, we prohibit content and behavior in other areas that often overlap with the spread of misinformation. For example, our Community Standards prohibit fake accounts, fraud, and coordinated inauthentic behavior.
As online and offline environments change and evolve, we will continue to evolve these policies. Accounts that repeatedly share the misinformation listed below may, in addition to having enforcement action taken on their content in accordance with this policy, see their distribution decreased, face limits on their ability to advertise, or be removed from our platforms. Additional information on what happens when Meta removes content can be found here.
Misinformation we remove:
We remove the following types of misinformation:
I. Physical Harm or Violence
We remove misinformation or unverifiable rumors that expert partners have determined are likely to directly contribute to a risk of imminent violence or physical harm to people. We define misinformation as content with a claim that is determined to be false by an authoritative third party. We define an unverifiable rumor as a claim whose source expert partners confirm is extremely hard or impossible to trace, for which authoritative sources are absent, that lacks enough specificity to be debunked, or that is too implausible or irrational to be believed.
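As a reading aid for the two definitions above, the sketch below restates them as a small classification function. The criteria names are paraphrases of the paragraph, and the function is purely illustrative; it assumes expert partners and authoritative third parties have already supplied the individual determinations.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ClaimAssessment:
    # Determinations assumed to come from authoritative third parties
    # and expert partners, as described above.
    determined_false_by_authoritative_third_party: bool
    source_extremely_hard_or_impossible_to_trace: bool
    authoritative_sources_absent: bool
    too_unspecific_to_debunk: bool
    too_implausible_or_irrational_to_believe: bool


def classify(claim: ClaimAssessment) -> Optional[str]:
    """Illustrative restatement of the definitions in this section."""
    if claim.determined_false_by_authoritative_third_party:
        return "misinformation"
    if (claim.source_extremely_hard_or_impossible_to_trace
            or claim.authoritative_sources_absent
            or claim.too_unspecific_to_debunk
            or claim.too_implausible_or_irrational_to_believe):
        return "unverifiable rumor"
    return None  # Neither definition applies.
```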
We know that sometimes misinformation that might appear benign could, in a specific context, contribute to a risk of offline harm, including threats of violence that could contribute to a heightened risk of death, serious injury, or other physical harm. We work with a global network of non-governmental organizations (NGOs), not-for-profit organizations, humanitarian organizations, and international organizations that have expertise in these local dynamics.
In countries experiencing a heightened risk of societal violence, we work proactively with local partners to understand which false claims may directly contribute to a risk of imminent physical harm. We then work to identify and remove content making those claims on our platform. For example, in consultation with local experts, we may remove out-of-context media falsely claiming to depict acts of violence, victims or perpetrators of violence, weapons, or military hardware.
II. Harmful Health Misinformation
We consult with leading health organizations to identify health misinformation likely to directly contribute to imminent harm to public health and safety. The harmful health misinformation that we remove includes the following:
III. Voter or Census Interference
In an effort to promote election and census integrity, we remove misinformation that is likely to directly contribute to a risk of interference with people’s ability to participate in those processes. This includes the following:
We have additional policies intended to cover calls for violence, the promotion of illegal participation, and calls for coordinated interference in elections, which are represented in other sections of our Community Standards.
Manipulated Media
Media can be edited in a variety of ways. In many cases, these changes are benign, such as content being cropped or shortened for artistic reasons or music being added. In other cases, the manipulation is not apparent and could mislead.
Below are some examples of what enforcement looks like for people on Facebook: reporting something you don’t think should be on Facebook, being told you’ve violated our Community Standards, and seeing a warning screen over certain content.
Note: We’re always improving, so what you see here may be slightly outdated compared to what we currently use.
There’s an option to report, whether it’s a post, comment, story, message, profile or something else.
We help people report things that they don’t think should be on our platform.
We ask people to tell us more about what’s wrong. This helps us send the report to the right place.
Make sure the details are correct before you click Submit. It’s important that the problem selected truly reflects what was posted.
After these steps, we submit the report. We also lay out what people should expect next.
We remove things if they go against our Community Standards, but you can also Unfollow, Block or Unfriend to avoid seeing their posts in the future.
After we’ve reviewed the report, we’ll send the reporting user a notification.
We’ll share more details about our review decision in the Support Inbox. We’ll notify people that this information is there and send them a link to it.
If people think we got the decision wrong, they can request another review.
We’ll send a final response after we’ve re-reviewed the content, again to the Support Inbox.
When someone posts something that doesn't follow our rules, we’ll tell them.
We’ll also address common misperceptions and explain why we made the decision to enforce.
We’ll give people easy-to-understand explanations about the relevant rule.
If people disagree with the decision, they can ask for another review and provide more information.
We set expectations about what will happen after the review has been submitted.
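The reporting walkthrough above follows a single path from report to optional appeal. Purely as an illustration, the sketch below models those steps as a small state machine; the state names are hypothetical labels for the captions above, not an actual internal system.

```python
from enum import Enum, auto


class ReportState(Enum):
    SUBMITTED = auto()         # report filed on a post, comment, story, message or profile
    UNDER_REVIEW = auto()      # report routed to the right place for review
    DECISION_SENT = auto()     # notification sent; details shared in the Support Inbox
    APPEAL_REQUESTED = auto()  # person asks for another review and provides more information
    FINAL_RESPONSE = auto()    # re-review complete; final response sent to the Support Inbox


# Transitions allowed in this sketch; a report can simply end at DECISION_SENT
# if the person does not request another review.
TRANSITIONS = {
    ReportState.SUBMITTED: {ReportState.UNDER_REVIEW},
    ReportState.UNDER_REVIEW: {ReportState.DECISION_SENT},
    ReportState.DECISION_SENT: {ReportState.APPEAL_REQUESTED},
    ReportState.APPEAL_REQUESTED: {ReportState.FINAL_RESPONSE},
    ReportState.FINAL_RESPONSE: set(),
}


def advance(current: ReportState, nxt: ReportState) -> ReportState:
    """Move to the next state only if the walkthrough above allows it."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.name} to {nxt.name}")
    return nxt
```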
We cover certain content in News Feed and other surfaces, so people can choose whether to see it.
In this example, we explain why we’ve covered the photo, with added context from independent fact-checkers.
We have the same policies around the world, for everyone on Facebook.
Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.
Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.
Learn what you can do if you see something on Facebook that goes against our Community Standards.