APR 29, 2024
Meta continually assesses the risks of imminent harm during critical events so we can respond with targeted, time-limited policy and product actions that help keep people safe. This approach includes examining potential risks ahead of and during local, regional and national events, including elections.
In response to high-risk events, Meta can take multiple approaches to address a range of scenarios, for example changing or limiting product features, introducing messaging rate limits, and limiting the distribution of content.
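Messaging rate limits of the kind mentioned above are commonly implemented with a token-bucket algorithm. The sketch below is purely illustrative; the class name, capacities, and refill rates are assumptions for this example, not Meta's actual implementation:

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter (hypothetical parameters)."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)     # start with a full bucket
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one message may be sent now."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: a platform might swap in a tighter limit during a high-risk event.
normal = TokenBucket(capacity=20, refill_per_sec=1.0)
crisis = TokenBucket(capacity=5, refill_per_sec=0.2)
```

Lowering both the burst capacity and the refill rate during a crisis slows mass-messaging behavior without blocking ordinary one-to-one conversation.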
We identify potentially harmful trends by gathering information from public reports, reviewing recommendations from our trusted partners, making ongoing observations of content trends, conducting human rights due diligence, and examining assessments from our intelligence teams. This information, along with our existing Community Standards and enforcement systems, helps inform the types of product and policy mitigations we may use to prevent abuse during crises, including high-risk elections. This includes our Crisis Policy Protocol (CPP), which helps us assess ways to address content risks, and separate measures we may take to adapt our systems in response to heightened risk.
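The signal-gathering described above can be thought of as combining heterogeneous inputs into a single risk assessment that maps to escalating mitigations. The toy sketch below is an assumption for illustration only; the signal names, weights, and thresholds are invented and do not reflect Meta's actual model:

```python
# Hypothetical signal weights; names and values are invented for illustration.
SIGNAL_WEIGHTS = {
    "public_reports": 0.2,
    "trusted_partner_flags": 0.3,
    "content_trend_anomaly": 0.2,
    "human_rights_assessment": 0.2,
    "intelligence_team_rating": 0.1,
}

def risk_score(signals: dict[str, float]) -> float:
    """Combine normalized signals (each in [0, 1]) into a weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * value
               for name, value in signals.items()
               if name in SIGNAL_WEIGHTS)

def suggested_mitigations(score: float) -> list[str]:
    """Map a score to escalating (hypothetical) mitigation tiers."""
    if score >= 0.7:
        return ["limit_product_features", "message_rate_limits", "reduce_distribution"]
    if score >= 0.4:
        return ["reduce_distribution"]
    return []
```

In practice such assessments involve human judgment at every step; the point of the sketch is only that multiple independent signals feed one decision about which mitigations to apply.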
One important potential mitigation is designating places as Temporary High-Risk Locations (THRL), a policy tool used to address specific types of potentially violent content in locations identified as high-risk due to real-world events. In addition to our policy tools and frameworks for mitigating crises, Meta also relies on Integrity Product Operations Centers (IPOCs), which bring together different teams, subject matter experts and capabilities from across the company, as needed and at times in advance of a particular situation, to respond in real time to potential problems or trends.
During a given election, an IPOC will monitor multiple sources, including data analysis tools, inputs from trusted partners, and traditional media. We also monitor trends in user reports and content flagged by our classifiers.
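Monitoring trends in user reports can be sketched as simple anomaly detection over a time series of report counts. This toy example flags hours whose volume spikes well above the recent baseline; the window size and z-score threshold are assumptions, not a description of Meta's actual systems:

```python
from statistics import mean, stdev

def flag_spikes(hourly_reports: list[int],
                window: int = 24,
                z_threshold: float = 3.0) -> list[int]:
    """Return indices of hours whose report count is a z-score outlier
    relative to the preceding `window` hours (illustrative only)."""
    flagged = []
    for i in range(window, len(hourly_reports)):
        baseline = hourly_reports[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: skip rather than divide by zero
        z = (hourly_reports[i] - mu) / sigma
        if z > z_threshold:
            flagged.append(i)
    return flagged
```

A flagged spike would not itself trigger enforcement; it would simply surface an hour of unusual activity for human review.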
Although they are separate undertakings, the THRL designation process and the CPP designation review process draw on some of the same signals of heightened risk of violence or offline harm. While we may consider designating a THRL during a crisis, doing so is not contingent on a CPP designation.
Our teams continuously monitor our platforms to assess the evolving risk environment and to determine any adjustments needed in response to spikes or changes in the signals we're tracking. See here for additional information on the ways we adapt our systems in response to heightened risk.