How We Adapt Our Systems In Response To Heightened Risk

UPDATED DEC 16, 2022
Meta’s services help people express themselves freely, and we take pride in the role we play in fostering people’s ability to exercise important rights around the world. At the same time, we work hard to prevent the spread of potentially harmful content. We do this by developing industry-leading processes and tools to reduce the likelihood that people see content that spreads hate or misinformation or incites violence, for example. These include our global Content Distribution Guidelines, which detail the kinds of distribution limits we place on problematic or low-quality content, as well as our Community Standards and Community Guidelines, which describe content that we may remove from our apps. During periods of heightened risk to safety, either on or off our platforms, we can take additional temporary steps if needed.
Temporary Strategies To Help People Stay Safe
During critical moments such as elections, or in situations with elevated risk of violence or other severe human rights risks, we are especially mindful of the need to carefully tailor our approach to keeping people safe while protecting their ability to express themselves. As such, our teams closely monitor trends on our platforms and investigate situations to determine whether and how best to respond. As appropriate, we may apply limited, proportionate, and time-bound measures that can be quickly implemented to address a specific, emerging risk.
In these moments, we monitor real-world events and track metrics on our platforms, such as how much violating content is on Facebook and whether new forms of abuse are emerging that require us to quickly adjust our response. For example, if we see an increase in violations of a specific policy, we investigate the nature and size of the problem before determining which measures, if any, to use to address it. In some cases, we may reduce the visibility, beyond our standard reductions, of certain types of content that do not necessarily violate our Community Standards on hate speech, violence and incitement, or bullying and harassment, but that come close to the line. To respond to other risks, we may reduce the distribution of content more significantly if it is posted from accounts that have recently and repeatedly posted violating content, or if it is likely to violate our policies but we need additional time to review it. In some circumstances, we may also reduce the distribution of widely shared content to slow its overall spread, which is particularly helpful when the content could be misinformation or incite violence. If our teams determine that a piece of content violates our policies, we will remove it, even if its visibility has already been reduced.
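To make the layering described above concrete, here is a minimal, hypothetical sketch of how temporary risk-period demotions might stack on top of standard distribution reductions. Meta has not published its actual ranking code; every name, score, and threshold below (HeightenedRiskConfig, borderline_score, and so on) is an illustrative assumption, not Meta's implementation.

```python
# Hypothetical sketch: layering temporary, risk-period demotions on top of
# standard content-distribution reductions. All names, scores, and thresholds
# are illustrative assumptions; Meta's real systems are not public.

from dataclasses import dataclass

@dataclass
class HeightenedRiskConfig:
    """Temporary measure: active only while a specific risk is elevated."""
    active: bool = False
    borderline_demotion: float = 0.5       # extra demotion for near-violating content
    repeat_violator_demotion: float = 0.3  # stronger demotion for recent repeat violators
    viral_demotion: float = 0.7            # slow widely shared content awaiting review

@dataclass
class ContentSignals:
    borderline_score: float    # assumed classifier score: closeness to a policy line (0..1)
    likely_violating: bool     # flagged as likely violating, awaiting human review
    author_recent_strikes: int # recent, repeated policy violations by the author
    share_count: int

def distribution_multiplier(signals: ContentSignals,
                            risk: HeightenedRiskConfig,
                            standard_multiplier: float = 1.0) -> float:
    """Return a ranking multiplier; lower means less distribution in Feed."""
    m = standard_multiplier
    if not risk.active:
        return m  # outside a risk period, only standard reductions apply
    # Reduce visibility of borderline content beyond standard reductions.
    if signals.borderline_score > 0.8:
        m *= risk.borderline_demotion
    # Demote content from accounts that recently and repeatedly violated policy.
    if signals.author_recent_strikes >= 3:
        m *= risk.repeat_violator_demotion
    # Slow widely shared, likely-violating content while it awaits review.
    if signals.likely_violating and signals.share_count > 10_000:
        m *= risk.viral_demotion
    return m
```

The multiplicative design mirrors the idea in the text that these measures are additive to, not replacements for, standard reductions, and that removal remains a separate decision made after review.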
We also evaluate features in our apps to see how they could be misused during certain events, and in some cases we temporarily change those features based on the risks we see. For example, during times of heightened risk to public safety, we may take additional steps to safeguard people’s privacy and secure personally identifiable information, such as hiding friends lists on Facebook and follower/following lists for private accounts on Instagram. On Messenger, we may decrease the number of times people can forward messages in an effort to reduce the spread of misinformation during a crisis. There are also times when we limit the type-ahead suggestions people see in the Search bar on Facebook or Instagram, to minimize the chances that people inadvertently encounter higher-risk topics during critical moments. We may also include links to reliable information in relevant searches about certain key events.
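As a toy illustration of two of the feature changes above, the sketch below enforces a tighter forwarding cap and pauses type-ahead suggestions for higher-risk topics. The cap values, denylist, and function names are invented for illustration; they are assumptions, not how Messenger or Search actually work.

```python
# Hypothetical sketches of two temporary feature changes: a tighter
# message-forwarding cap and suppressed type-ahead suggestions for
# higher-risk search topics. All values and names are illustrative assumptions.

NORMAL_FORWARD_CAP = 20   # assumed everyday per-message forward limit
CRISIS_FORWARD_CAP = 5    # assumed tightened limit during a crisis

def can_forward(forward_count: int, crisis_mode: bool) -> bool:
    """Allow another forward only while the message is under the active cap."""
    cap = CRISIS_FORWARD_CAP if crisis_mode else NORMAL_FORWARD_CAP
    return forward_count < cap

# Assumed denylist of higher-risk topics whose suggestions are paused.
SUPPRESSED_PREFIXES = {"example risky topic"}

def typeahead_suggestions(suggestions: list[str], crisis_mode: bool) -> list[str]:
    """Drop suggestions that lead to higher-risk topics while a crisis is active."""
    if not crisis_mode:
        return suggestions
    return [s for s in suggestions
            if not any(s.lower().startswith(p) for p in SUPPRESSED_PREFIXES)]
```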
We do not implement any of these measures lightly. We know there can be unintended consequences, like inadvertently limiting harmless or even helpful content, so we seek to take steps that are proportionate to the risk and minimize this impact. For example, when we lower the number of times people can forward messages in Messenger, that could also affect people’s ability to easily share informative content. And when we take stronger steps to move content lower in Feed because our systems detect it might violate our policies, we may also reduce the distribution of content those systems have flagged incorrectly.
Ongoing Monitoring And Returning To Normal Operations
High-risk situations are often complex, fast-moving, and sometimes adversarial, so there are no one-size-fits-all solutions. Throughout the relevant period, we continue to monitor impacts on our platform and the people on it, in terms of both expression and safety, and we may adjust our measures in response to spikes or changes in the signals we’re tracking. Once those signals return to normal levels and we determine the risk to safety on or off our platform has subsided, we turn off the associated temporary measure.
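One simple way to picture the wind-down logic is a measure that stays active until its tracked signal sits near the pre-crisis baseline for a sustained window. This is a hedged sketch under assumed definitions; the baseline, tolerance, window size, and class names are all invented for illustration.

```python
# Hypothetical sketch of returning to normal operations: a temporary measure
# deactivates once its tracked signal stays near the pre-crisis baseline for
# a sustained window. Thresholds, window size, and names are assumptions.

from collections import deque

class TemporaryMeasure:
    def __init__(self, baseline: float, tolerance: float = 0.10,
                 window: int = 24) -> None:
        self.baseline = baseline             # e.g., normal violating-content prevalence
        self.tolerance = tolerance           # acceptable fraction above baseline
        self.recent = deque(maxlen=window)   # e.g., the last 24 hourly readings
        self.active = True

    def record(self, signal: float) -> None:
        """Record a new reading; deactivate once the whole window is calm."""
        self.recent.append(signal)
        window_full = len(self.recent) == self.recent.maxlen
        calm = all(s <= self.baseline * (1 + self.tolerance) for s in self.recent)
        if window_full and calm:
            self.active = False  # risk has subsided; turn the measure off
```

Requiring a full calm window, rather than a single good reading, reflects the article's point that these situations are fast-moving and that measures are lifted only after signals have genuinely returned to normal levels.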
We know our work requires ongoing vigilance, investment, and a willingness to apply what we learn from each situation to refine and improve our approach. Any temporary measures we take to mitigate safety risks are consistent with our human rights policy and related principles, particularly those of necessity and proportionality. You can learn more about our work to prevent and mitigate human rights risks in our human rights policy, our human rights report, our human rights page, and in the audits we have requested as part of our commitment to independent verification of how we enforce our standards and conduct human rights due diligence.