2020-006-FB-FBR
On December 1, 2020, the Oversight Board selected a case referred by Facebook regarding a post in a group claiming that hydroxychloroquine and azithromycin are a cure for COVID-19 and criticizing the French government’s response to the pandemic. Facebook took down this content for violating our policy on violence and incitement, as laid out in the Facebook Community Standards.
Facebook referred this case to the board because we found it significant and difficult as it creates tension between our values of voice and safety. While we are committed to preserving people’s ability to discuss and share information about the COVID-19 pandemic, and to debate the efficacy of potential treatments and mitigation strategies, we also want to limit the spread of false information that could lead to harm.
On January 28, 2021, the board overturned Meta's decision on this case. Meta complied with the board’s decision immediately, and the content has been reinstated.
On February 25, 2021, Meta responded to the board’s recommendations for this case. We are committing to take action on six of them and are taking no further action on one.
Clarify the Community Standards with respect to health misinformation, particularly with regard to COVID-19. Meta should set out a clear and accessible policy on health misinformation, consolidating and clarifying existing policies in one place.
Our commitment: In response to the board’s recommendation, we have consolidated information about health misinformation in a Help Center article, which we now link to in the Community Standards. This article includes details about all of our Community Standards related to COVID-19 and vaccines, including how we treat misinformation that is likely to contribute to imminent physical harm. We also added a "Commonly Asked Questions" section to address more nuanced situations (e.g., how humor and satire relate to these policies, and how we handle personal experiences or anecdotes). We have also clarified our health misinformation policy as part of a larger COVID-19 update earlier this month. As part of that update, we added more specificity to our rules, including examples of the types of false claims we will remove.
Considerations: Our policies and principles for enforcement of health misinformation are continuously updated to reflect the feedback we get from our global conversations with health experts.
Next steps: We’ll continue to update the Help Center as necessary as our policies evolve with the pandemic.
Facebook should:
1) Publish its range of enforcement options within the Community Standards, ranking these options from most to least intrusive based on how they infringe freedom of expression.
2) Explain what factors, including evidence-based criteria, the platform will use in selecting the least intrusive option when enforcing its Community Standards to protect public health.
3) Make clear within the Community Standards which enforcement option applies to each policy.
Our commitment: In the coming months, we will launch the Transparency Center. The website will be a destination for people to get more information about our Community Standards and how we enforce them on our platform, including when and why we remove violating content, and when we choose to provide additional context and labeling.
Considerations: As our content moderation practices have grown in sophistication and complexity, our efforts to give people comprehensive yet clear information about our systems must keep pace. The Transparency Center is a step in this effort, building on our Community Standards to help people understand our integrity efforts overall. It will add more detail about what isn’t allowed, as well as how we use interventions like downranking and labels for content that we think may benefit from more context.
Next steps: Launch the Transparency Center in the coming months.
Update May 19, 2021: We launched the Transparency Center, which builds on our Community Standards to help people understand our integrity efforts overall. We will continue to add content to the Transparency Center to explain more about our approach to enforcement. We also are exploring how to share more granularity on the Transparency Center about the different types of enforcement options and how we use those options for different types of content. We will have no further updates to this recommendation.
To ensure enforcement measures on health misinformation represent the least intrusive means of protecting public health, Meta should clarify the particular harms it is seeking to prevent and provide transparency about how it will assess the potential harm of particular content.
Our commitment: In response to the board’s guidance, we updated our Help Center to provide greater detail on the specific harms that our COVID-19 and vaccine policies are intended to address. The Help Center explains that we will “remove misinformation when public health authorities conclude that the information is false and likely to contribute to imminent violence or physical harm.” As noted in the Help Center, some of these examples of imminent physical harm include “increasing the likelihood of exposure to or transmission of the virus, or having adverse effects on the public health system’s ability to cope with the pandemic.”
Considerations: For COVID-19, we assessed harm by working closely with public health authorities, who are better equipped to answer the complex question of causality between online speech and offline harm. We also consulted with experts from around the world with backgrounds in public health, vaccinology, sociology, freedom of expression, and human rights on updates we made to our policies on vaccine misinformation. These experts came from academia, civil society, public health organizations, and elsewhere. We rely on these experts to help us understand whether claims are false and likely to contribute to the risk of increased exposure and transmission or to adverse effects on the public health system. We then remove content that includes these claims.
Next steps: We won’t take any additional action, since we have already updated our Help Center based on the board’s recommendation.
To ensure enforcement measures on health misinformation represent the least intrusive means of protecting public health, Meta should conduct an assessment of its existing range of tools to deal with health misinformation and consider the potential for development of further tools that are less intrusive than content removals.
Our commitment: We will continue to develop a range of tools to connect people to authoritative information as they encounter health content on our platforms, starting with information about COVID-19 vaccines.
Considerations: We continually assess and develop a range of tools, in consultation with public health experts, to address potential health misinformation in the least intrusive way depending on the risk of imminent physical harm (a simplified sketch of this tiered selection follows the list below). Our current range of enforcement tools includes:
Working with independent third-party fact-checking partners to debunk claims that are found to be false, but do not violate our Community Standards. Once third-party fact-checkers rate something as false, we reduce its distribution and inform people about factual information from authoritative sources.
Sending notifications to people who shared false content to let them know it’s since been rated false. We add a notice and an overlay to the post and show a fact-checker’s articles when someone tries to share the content.
Connecting people to authoritative information based on their behavior. For example, if someone searches for “COVID-19” or “vaccines,” we will redirect them to our COVID-19 Info Center on Facebook. And, we may show educational modules to people who we know have interacted with misinformation we removed for violating our Community Standards.
These tools are part of our larger effort to respond proportionally to content, as the board recommends, while keeping people safe on the platform.
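To make the tiered approach above concrete, here is a minimal sketch in Python of how a "least intrusive first" selection might look. All class names, fields, and decision rules here are hypothetical illustrations of the principle described above, not Meta's production systems.

    # Illustrative sketch only: a simplified model of tiered, proportionate
    # enforcement. Names and decision rules are hypothetical assumptions.
    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        REMOVE = "remove"                      # most intrusive: imminent-harm misinformation
        REDUCE_AND_LABEL = "reduce_and_label"  # downrank, add fact-check label, notify sharers
        INFORM = "inform"                      # least intrusive: surface authoritative sources

    @dataclass
    class HealthContent:
        likely_imminent_physical_harm: bool  # per public health authority guidance
        rated_false_by_fact_checker: bool    # independent third-party rating

    def select_enforcement(content: HealthContent) -> Action:
        """Pick the least intrusive action consistent with the assessed risk."""
        if content.likely_imminent_physical_harm:
            return Action.REMOVE
        if content.rated_false_by_fact_checker:
            return Action.REDUCE_AND_LABEL
        return Action.INFORM

    # Example: a false claim with no imminent-harm finding is downranked
    # and labeled rather than removed.
    post = HealthContent(likely_imminent_physical_harm=False,
                         rated_false_by_fact_checker=True)
    assert select_enforcement(post) is Action.REDUCE_AND_LABEL

The point of the sketch is the ordering: removal is reserved for the imminent-harm tier, while lower-risk content gets progressively lighter interventions.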
Next steps: Our immediate focus for this recommendation is to work on tools to connect people with authoritative information about COVID-19 vaccines.
Publish a transparency report on how the Community Standards have been enforced during the COVID-19 global health crisis.
Our commitment: We will continue to look for ways to communicate the efficacy of our efforts to combat COVID-19 misinformation.
Considerations: We regularly publish information on the efforts we are taking to combat COVID-19 misinformation. For example, we have previously shared detailed data points on our response to COVID-19 misinformation, including the number of pieces of content on Facebook and Instagram we removed for violating our COVID-19 misinformation policies, the number of warning labels applied to content about COVID-19 that was rated by independent third-party fact-checkers, the number of visits to the COVID-19 Information Hub, and the number of people who clicked through these notifications to go directly to the authoritative health sources. We have also shared information with the EU Commission’s COVID-19 monitoring programme reports.
Next steps: We began consistently sharing COVID-19 metrics in the spring of 2020, and we will continue to do so for the duration of the pandemic. Given the temporary and unique circumstances of COVID-19, we are not planning to add it to the Community Standards Enforcement Report as an additional policy area.
Conduct a human rights impact assessment with relevant stakeholders as part of Meta’s process of policy modification.
Our commitment: We will ask the board to clarify whether its recommendation relates to all rule modifications or only those related to COVID-19 misinformation. We will explore approaches to strengthen the incorporation of human rights principles into our policy development process.
Considerations: Meta has a dedicated Human Rights Policy Team that consults on policy development and rule changes. Given the frequency with which we update our policies, conducting a full human rights impact assessment for every rule change is not feasible.
The Human Rights Policy Team, informed by authoritative guidance and an independent literature review, advised on access to authoritative health information as part of the right to health and on permissible restrictions to freedom of expression related to public health. It also participated in structuring an extensive global rights holder consultation. These elements were directly incorporated into Meta’s overall strategy for combating misinformation that contributes to the risk of imminent physical harm.
Next steps: Once the board clarifies whether its recommendation relates to all rule modifications or only those related to COVID-19 misinformation, we will assess whether there are opportunities to strengthen the inclusion of human rights principles in our policy development process, including the possibility of additional formal human rights impact assessments.
In cases where people post information about COVID-19 treatments that contradicts the specific advice of health authorities, and where a potential for physical harm is identified but is not imminent, Meta should adopt a range of less intrusive measures.
Our commitment: We agree with the board that less intrusive measures should be used where a potential for physical harm is identified but is not imminent. That said, we disagree with the board that the content implicated in this case does not rise to the level of imminent harm. We will continually evaluate and calibrate our response to content about COVID-19 treatments based on information from public health authorities.
Considerations: Our global expert stakeholder consultations have made clear that, in the context of a health emergency, certain types of health misinformation do lead to imminent physical harm. That is why we remove this content from the platform. We use a wide variety of proportionate measures to support the distribution of authoritative health information. We also partner with independent third-party fact-checkers and label other kinds of health misinformation.
We know from our work with the World Health Organization (WHO) and other public health authorities that if people think there is a cure for COVID-19, they are less likely to follow safe health practices, like social distancing or mask-wearing. And because transmission compounds exponentially, one person’s behavior can lead to the virus reaching thousands of others within a few days.
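As a stylized illustration of that compounding (the reproduction number below is an assumed value for illustration, not a WHO estimate): if each infected person infects R_0 others on average, the number of new cases after n transmission generations is roughly

    % Stylized growth model; R_0 = 3 is an assumed value for illustration only.
    N(n) \approx R_0^{\,n}, \qquad R_0 = 3 \;\Rightarrow\; N(7) \approx 3^{7} = 2187

so a single chain of unsafe behavior can seed thousands of downstream infections within a handful of generations.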
We also note that one reason the board decided to allow this content was that the person who posted the content was based in France, and in France, it is not possible to obtain hydroxychloroquine without a prescription. However, readers of French content may be anywhere in the world, and cross-border flows for medication are well established. The fact that a particular pharmaceutical item is only available via prescription in France should not be a determinative element in decision-making.
Next steps: We’ll take no further action on this recommendation, since we believe we already employ the least intrusive enforcement measures given the likelihood of imminent harm. We restored the content based on the binding power of the board’s decision. We will continue to rely on extensive consultation with leading public health authorities to tell us what is likely to contribute to imminent physical harm. During a global pandemic, this approach will not change.