Meta took a first step in developing industry-leading governance by launching the Oversight Board in 2020. You can read more about the work of the Oversight Board here.
More recently, we began exploring additional innovative ways to give people a greater voice in the governance of our platforms and the development of the technologies that affect them. This includes Community Forums, a form of organized public discourse focused on tech’s biggest issues, which we began to pilot in 2022.
The Oversight Board provides Meta with tremendous insight to help ensure decisions about our policies and products are in the best interest of users, and Community Forums will help ensure that our decision-making incorporates perspectives from outside the company—even when we disagree.
We will continue to explore new forms of governance and innovative ways to democratize and distribute power as a means to empower people to have a greater voice in the development of our policies and products.
Community Forums bring together representative groups of people from all over the world to discuss tough issues, consider hard choices, and share their perspectives to improve the experiences people have across Meta’s technologies.
Meta’s Community Forums bring together thousands of global voices to weigh in on some of the tech industry’s toughest questions. In each Community Forum, representative sets of participants receive educational materials to learn about a topic before deliberating in small groups, where they share their perspectives, their lived experiences, and the complex tensions related to the topic. They then have the opportunity to ask clarifying questions of third-party experts before submitting responses to various questions in a survey. Their responses, and the analysis of the results, produce insights into the public’s understanding of and concerns about these emerging technologies, and ultimately inform the development of our products and policies. For example, our Community Forum on the Metaverse played a direct role in Meta adding mute assist, a form of automatic speech detection in public worlds, to the catalog of tools available to creators on Horizon. We invest in Community Forums because it’s important that our products represent the people who use them.
We started by looking at deliberative democratic mechanisms, such as Citizens’ Assemblies, that have been used for years to provide public input into government policies. We ran an initial pilot on our approach to climate misinformation in 2022. Building on those learnings, we explored how we might scale this approach to more people, and launched another Forum on the issue of bullying and harassment in the Metaverse. Both of these showed that Community Forums can provide rich insights for our product and policy development.
Community Forums are especially well suited to providing principle- and values-based insights, which are grounded in people’s own experiences and expectations rather than ours.
Our Community Forum on Generative AI
The novelty of the topic and technology of Generative AI created a unique opportunity for us to inform its development through public input, since AI models are often guided by structures and inputs such as principles and values.
To better understand the values people wanted to see reflected in Generative AI technologies, we structured questions around principles that would underpin chatbots, the most accessible form of Generative AI for the wider public. Questions included how chatbots should provide guidance and advice to users, and how chatbots should interact with people. This Forum included 1,500 participants from four countries. You can read more about our Community Forum on Generative AI here.
Our Community Forum on Bullying and Harassment in the Metaverse
A Community Forum on the Metaverse was conducted in collaboration with Stanford’s Deliberative Democracy Lab on the topic of bullying and harassment and was a first-of-its-kind experiment in global deliberation. We chose to focus on closed virtual spaces so that the forum could advise on policy and product development for virtual experiences such as Horizon Worlds. This Forum included 6,000 participants from 32 countries and functioned as an important pilot to establish proof of concept. Read more here.
Our Community Forum on Climate Misinformation
This Forum deliberated on the challenging topic of misleading climate change content. We brought together over 250 Facebook users across five countries to ensure that we heard from people of different nationalities, ethnicities, socio-economic backgrounds, and political ideologies. This Forum functioned as an important pilot to establish proof of concept. Read more here.
Reflecting the collaborative nature of this work, Meta has spoken to and partnered with a variety of deliberative democracy experts, civil society organizations, government policymakers, and academics to ensure our Forums are constructed in accordance with deliberative democracy best practices and standards. This process helps us mitigate bias while also sharing insights with others in the deliberative democracy community. The design and execution of our Forums to date have been carried out in partnership with Stanford’s Deliberative Democracy Lab and the Behavioural Insights Team.