Meta’s Content Policy Stakeholder Engagement team engages with civil society organizations, academics, and other thought leaders to gather knowledge and experience as we develop our content policies. We work with internal teams to build stakeholder feedback into the policy development process. Our goal is to create policies that reflect broad input from an inclusive stakeholder base.
As part of its Hate Speech Policy, Meta developed a provision to remove harmful stereotypes. Stakeholder engagement helped the Content Policy team build a framework for understanding and addressing these stereotypes. Our team consulted global stakeholders, including academic experts on hate speech, social psychologists, historians, and civil society organizations in fields such as free expression. Stakeholders helped us understand the role of historical discrimination and minority status in the creation of stereotypes. Experts also highlighted that harmful stereotypes make people feel unsafe in public and prevent them from participating fully as citizens.
The focus of our Hate Speech Policy is attacks against people. Conversely, under our policies, we have generally permitted attacks on concepts, ideas, practices, beliefs and institutions, with the goal of allowing broad discussion around such topics. However, we heard from stakeholders and users alike that allowing people to criticize and attack an institution or concept closely tied to people of a particular protected characteristic can, in some circumstances, lead to harm, potentially including violence and intimidation. This feedback led us to initiate policy development around this area of our hate speech standards. We engaged with a broad range of academics and civil society organizations, including experts in dangerous speech and atrocity prevention, human rights practitioners, social psychologists who study issues of personal identity, advocates for freedom of expression, and groups representing religious and non-religious world views. Our revised policy provides that in certain circumstances we will remove “Content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic.” See the section of our Hate Speech Policy that requires additional information and/or context to enforce.
To inform our policy definition of state-controlled media, we consulted global experts specializing in media, governance and human rights. This input was crucial to helping us understand the different ways in which governments may exert editorial control over certain media entities. We've engaged with some of the leading voices on press freedom, including Reporters Without Borders, the Center for International Media Assistance, the European Journalism Centre, Oxford University, the Center for Media, Data and Society (CMDS) at the Central European University, the Council of Europe, UNESCO, the Global Forum for Media Development (GFMD), the African Centre for Media Excellence (ACME), and the SOS Support Public Broadcasting Coalition, to name a few. We know that governments continue to use funding mechanisms to control the media, but funding alone doesn't tell the full story. That's why our definition of state-controlled media extends beyond financial control or ownership to include an assessment of the editorial control exerted by a government.
Stakeholders helped shape our policy on Adult Sexual Exploitation in important ways. For example, in developing our approach to content that identifies adult victims of sexual assault in cases where supporters share victims' stories or amplify their voices, we engaged with a broad range of academics and civil society organizations affected by the policy, including journalists, legal scholars, feminist and campaigning activist groups, and women's rights NGOs. These engagements helped us build a policy that seeks to give voice to social movements and awareness-raising campaigns, while also respecting the dignity and privacy of victims.
Our Human Exploitation Policy has long prohibited users from posting content offering human smuggling services. However, our policies have made certain allowances for content requesting smuggling services. In 2021, we reviewed our approach with external stakeholders, including human rights advocates, transnational crime experts, UN agencies, and NGOs, who noted a difficult trade-off: while solicitations of smuggling services can make it easier for those who misuse our platforms to prey on vulnerable people, removing such posts may prevent people from seeking safety or exercising their right to seek asylum. Ultimately, these experts helped us conclude that we could mitigate the risks of exploitation while respecting users' rights by updating our policy to remove solicitations of human smuggling services and by accompanying removals with an information page. This page, developed in consultation with external experts, contains details about people's rights as refugees and asylum seekers and how they can avoid exploitation.
In 2022, we published our Crisis Policy Protocol (CPP) to codify our content policy response to crises. Developed in response to a recommendation from the Oversight Board, this framework helps us assess crisis situations that may require a specific policy response. In developing the CPP, we consulted global experts with backgrounds in national security, international relations, humanitarian response, conflict prevention, and human rights; we explored how to strengthen existing procedures and add new components, such as criteria for crisis entry and exit. Stakeholders helped surface key signals that should be used to determine whether a crisis threshold has been reached. Our global stakeholders brought perspectives from regions that vary considerably in political stability. Overall, stakeholder input helped ensure that the protocol makes our responses to crises more timely, systematic, and equitable.
As part of our approach to combating bullying and harassment, we engaged with a broad range of stakeholders directly impacted by brigading and mass harassment, including women's rights activists, representatives of the LGBTQI+ community, minority groups, journalists, human rights activists, and public figures. We also consulted experts who study online harassment and state-sponsored influence operations, as well as advocates for freedom of expression. Stakeholders acknowledged that legitimate activism and harmful brigading may exhibit the same online behaviors, such as mass reporting, comment flooding, or hashtag bombing. They therefore recommended that we rely on context-specific factors to differentiate between these use cases, focusing on the nature of the content posted, the impact on the victim, and the behavior's potential for offline harm. The input we received helped us craft the first iteration of our policy on brigading and mass harassment, part of our Bullying and Harassment Policy.