Facebook should conduct a proportionality analysis to identify a range of less intrusive measures than removing the content, including labeling content, introducing friction to posts to prevent interactions or sharing, and downranking. All these enforcement measures should be clearly communicated to all users, and subject to appeal.
Our commitment: Our global expert stakeholder consultations have made it clear that in the context of a health emergency, certain types of health misinformation do lead to imminent physical harm. Because of this, we remove content from our platform that is likely to lead to harm.
That said, for content containing misinformation related to the COVID-19 pandemic, we’ll continue to use measures less intrusive than removal where a potential for physical harm is identified but is not imminent. We’ll continue working with fact-checkers to assess potential misinformation and to reduce the distribution of false content.
Considerations: In our response to 2020-006-FB-FBR-7, we explained that we remove certain content from the platform because our global stakeholder consultations have made it clear that imminent physical harm can result from misinformation concerning a health emergency like the COVID-19 pandemic. For example, we know from our work with the World Health Organization and other public health authorities that if people think there is a cure for COVID-19, they are less likely to follow safe health practices, like social distancing or mask-wearing. Exponential viral replication rates mean one person’s behavior can transmit the virus to thousands of others within a few days. Proportionality is part of our existing strategy to fight health misinformation. When content spreading COVID-19 misinformation does not reach that threshold of imminent physical harm, our responses are less intrusive than removal. But when content does reach the threshold of imminent harm, we remove it from our platform.
Accordingly, our approach to combating misinformation relating to the COVID-19 pandemic involves a range of proportional measures. First, we label content we believe is related to COVID-19 and COVID-19 vaccines, as well as specific sub-topics such as vaccine safety. In these labels, we provide links to authoritative external health resources, such as the World Health Organization.
In addition, we work with our network of independent third-party fact-checking partners to quickly assess content that may contain misinformation. We take all the following enforcement actions when a fact-checker rates a piece of content as false:
- We apply a warning label to the content that includes a link to the fact-checker's article debunking the misinformation.
- We reduce the content’s distribution so that fewer people see it.
- We notify anyone who previously shared the content, or who tries to share it going forward, that the information has been rated false.
- We use automation and human review to scale the impact of these fact-checkers by detecting identical or nearly identical pieces of content, applying labels to them, and reducing their distribution.
In our Transparency Center, we provide an explanation of our approach to misinformation so users can understand our strategy. In addition, we are continuing to explore how we can provide users with more information when we take actions on their content.
Next steps: We will have no further updates on this recommendation.