Facebook’s Community Standards should reflect that, in contexts of war and violent conflict, unverified rumors pose a higher risk to the rights to life and security of persons. This should be reflected at all levels of the moderation process.
Our commitment: We will continue to work with trusted partners and independent fact-checkers to identify and remove misinformation that may contribute to the risk of imminent harm, while protecting people’s ability to report on events in real time and to receive information.
Considerations: We agree with the board that greater risks to the rights to life and security of persons exist in contexts of war and violent conflict. We also recognize that in these high-risk areas, real-time reports of violence or other information can play a critical role in safety and raise global awareness, especially when journalists cannot access the area due to the ongoing conflict. To balance the critical need to protect people’s voice with their safety, we invest significant resources in safety and security measures for at-risk countries. This includes Ethiopia, which has been one of our highest priorities for country-specific interventions, given the longstanding risks of conflict there. Those safety efforts take context into account and involve, among other things, identifying and removing persistently harmful false claims, improving hate speech enforcement, and expanding our policies on coordinating harm, bullying and harassment, and veiled threats. Each of these actions is highly relevant to contexts of armed conflict. To the extent the board’s recommendation suggests that we take into account the heightened risk associated with war and conflict, this is work we already do.
We updated our policy to address unverifiable rumors in 2019 after carefully considering input from 49 experts globally, including academics, human rights experts and civil society organizations. As stakeholders suggested, we have worked to identify and limit the spread of unverifiable rumors that could contribute to a risk of harm while strengthening work with local partners to understand critical context. We have further developed this work with trusted partners in conflict zones, such as Myanmar, Ethiopia and the Sahel, to identify persistent claims that, if false, are likely to contribute to the risk of imminent physical harm. Identifying such persistently harmful claims speeds up our removal of potentially harmful misinformation.

Our policy, as informed by our stakeholder engagement, intentionally addresses “unverifiable” rather than “unverified” rumors. We do not agree that the appropriate way to balance voice and safety is to remove more reports from conflict zones as “unverified rumors” when we have no signal from a trusted partner or third-party fact-checker that those reports are false or could contribute to a risk of harm. We do not remove merely “unverified” rumors, claims or information, because these may be accurate statements of personal experience or observation. We remove “unverifiable rumors,” which we define as rumors that cannot be confirmed or debunked in a meaningful timeframe, when we have the necessary information or context suggesting they are likely to contribute to a risk of imminent physical harm. Especially in contexts of war and violent conflict, it is often not possible to verify information quickly, and removing everything that is unverified could lead to the removal of accurate claims by observers or victims of crimes against vulnerable people.
Meta is concerned that removing content based on its own judgment about the level of evidentiary support in someone’s posts would lead to arbitrary results and would suppress potentially accurate reports that could protect others and raise awareness of atrocities. The board’s recommendation would impose a journalistic publishing standard on people that could prevent them from raising awareness of atrocities or other hazards in conflict situations where real-time verification is unlikely. We will continue to rely on trusted partners with local knowledge to inform us when information is false or unverifiable and may lead to imminent harm.
Next steps: We will have no further updates on this recommendation.