In 2022, Meta sought a Policy Advisory Opinion (PAO) from the Oversight Board regarding how we handle the sharing of private residential information across our platforms. This issue requires striking a delicate balance between two competing interests: safety and voice. Exposing this information without consent can put an individual's safety at risk and infringe on their privacy. However, sharing residential addresses can be an important tool for journalism, civic activism and other public discourse, particularly when the information may already be publicly accessible elsewhere online. In recommendation #13 of its PAO on Sharing Private Residential Information, the Board recommended that Meta give users an opportunity to remove or edit private information within their content following a removal for violating the Privacy policy, protecting users' right to share and receive information while upholding the right to privacy of people both on and off Meta's platforms.
What was the impact of implementing this recommendation?
In the spirit of this recommendation, Meta launched a dynamic system that offers users an opportunity to remove their first policy violation strike, and any resulting account restrictions, by completing a brief educational exercise. This system is offered not only for the Privacy policy but also for several other policies on the platform. By preventing account penalties, this change helps protect free expression and has resulted in millions of users removing strikes from their accounts.
Today, when a user commits a first-time violation of any eligible policy, they receive a notification detailing the specific policy they breached, along with two options: Appeal the Decision or Remove the Warning. Users who select Remove the Warning are taken to a page to start the educational exercise. By completing this interactive module, users have the strike related to the policy violation removed from their record.
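At a high level, the flow works as described above; the sketch below is purely illustrative and is not our production implementation, and the types and function names in it are hypothetical.

```python
# Illustrative sketch of the first-strike flow described above.
# Not Meta's production implementation; all names here are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class NoticeOption(Enum):
    APPEAL_DECISION = "Appeal the Decision"
    REMOVE_WARNING = "Remove the Warning"


@dataclass
class Account:
    strikes: list = field(default_factory=list)       # policies with active strikes
    restrictions: list = field(default_factory=list)  # restrictions tied to those strikes


def handle_first_violation(account: Account, policy: str, choice: NoticeOption,
                           completed_exercise: bool) -> str:
    """Model a first-time, eligible violation and the user's choice on the notice."""
    account.strikes.append(policy)
    if choice is NoticeOption.APPEAL_DECISION:
        return "routed to appeal review"
    # Remove the Warning: completing the educational exercise clears the strike
    # and any account restrictions that resulted from it.
    if completed_exercise:
        account.strikes.remove(policy)
        account.restrictions = [r for r in account.restrictions if r != policy]
        return "strike and resulting restrictions removed"
    return "exercise not completed; strike remains"
```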
To frame the impact of this feature launch, we are sharing the following metrics for Facebook and Instagram users:
Over a 3-month period from January 12, 2025 to April 10, 2025, over 7.1 million Facebook users and over 730 thousand Instagram users who had content removed for a first-time violation of a non-severe Community Standard, and who were eligible for the educational exercise, opted to view the eligible violation notice.
Of the Facebook users who opted to view the eligible violation notice, over 2.6 million started the educational exercise by selecting the Remove the Warning option. On Instagram, of the users who opted to view the eligible violation notice, over 340 thousand started the educational exercise by selecting the same option. Among these nearly 3 million users across Facebook and Instagram who started the educational exercise, the majority completed it and successfully removed their strike.
On Facebook, over 2.1 million users—over 80% of those who started the exercise—successfully completed it and had their strike and any resulting account restrictions removed.
On Instagram, over 290 thousand users—nearly 85% of those who started the exercise—also completed it and had their strike and any resulting account restrictions removed.
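As a rough arithmetic check, the completion rates cited above follow directly from the rounded counts reported (illustrative only, since the underlying figures are rounded):

```python
# Rough check of the completion rates using the rounded counts reported above.
fb_started, fb_completed = 2_600_000, 2_100_000  # "over 2.6 million" started, "over 2.1 million" completed
ig_started, ig_completed = 340_000, 290_000      # "over 340 thousand" started, "over 290 thousand" completed

print(f"Facebook completion rate:  {fb_completed / fb_started:.0%}")   # ~81%, consistent with "over 80%"
print(f"Instagram completion rate: {ig_completed / ig_started:.0%}")   # ~85%, consistent with "nearly 85%"
```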
Further illustrating the impact of this feature, the majority of users who started the exercise on both Facebook and Instagram went on to fill out a feedback survey explaining what happened, providing a valuable feedback loop that teams at Meta can use to identify ways to improve our products and policies.
On Facebook, more than 80% of users who started the exercise filled out the survey. The most common reasons these users gave were as follows:
Nearly 50% of Facebook respondents mentioned that “I didn't know the rule.”
Over 35% of Facebook respondents mentioned that "You misunderstood my post."
On Instagram, nearly 85% of users who started the exercise filled out the survey. The most common reasons these users gave were as follows:
Over 44% of Instagram respondents mentioned that “I didn’t know the rule.”
Over 36% of Instagram respondents mentioned that “You misunderstood my post.”
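Read alongside the earlier figures, these response rates translate into approximate respondent counts (illustrative only; the figures above are rounded percentages, not exact counts):

```python
# Approximate survey respondent counts derived from the rounded figures above.
# Illustrative only; the report gives percentages rather than exact counts.
fb_started, ig_started = 2_600_000, 340_000

fb_respondents = 0.80 * fb_started  # "more than 80%" of Facebook users who started the exercise
ig_respondents = 0.85 * ig_started  # "nearly 85%" of Instagram users who started the exercise

print(f"Facebook survey respondents:  ~{fb_respondents:,.0f}")   # roughly 2.1 million
print(f"Instagram survey respondents: ~{ig_respondents:,.0f}")   # roughly 290 thousand
```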
The high survey response rate demonstrates that a significant number of users on both Facebook and Instagram are actively engaging with the educational content and providing valuable feedback. The survey results also suggest that a substantial portion of users posted violating content unintentionally because they were unaware of the policy. This highlights the importance of this feature launch, which aims to educate users about our policies, helping them remediate penalties and avoid future violations.
As Meta continues efforts to build trust and foster free expression, we value opportunities to increase understanding of our policies and prevent users from receiving penalties where possible. Removing these strikes when violating content is posted inadvertently can spare users the account restrictions that would otherwise apply once multiple violations accrue over time. This approach underscores our commitment to protecting user voice and upholding freedom of speech. We hope this exercise will improve people's experiences on our apps, and we'll continue to adjust it in the future as needed.
For additional clarity on the metrics reported above, we are providing the following disclaimers: