Increasing Transparency through User Notifications
UPDATED NOV 5, 2025
In 2021, the Oversight Board selected a case involving an Instagram post mentioning Abdullah Öcalan, a founding member of the Kurdistan Workers' Party (PKK), which Meta removed in error and later reinstated. The Board critiqued the initial error and recommended that Meta notify users when their content is removed, specifying whether the removal resulted from a government request, a Community Standards violation, or a government claim that the content violates national law, as well as the jurisdictional scope of the removal. In response, Meta increased the granularity of its existing notifications, providing more detailed notice to users when their content is restricted in response to a formal government report stating that the content violates local law.
This initiative reflects Meta's commitment to enhancing transparency for our global user base, helping users make informed decisions by providing more detailed information about the policies applied to their content at the time of enforcement.
What was the impact of implementing this recommendation?
To assess this impact, we evaluated user notification data covering both Community Standards violations and government requests under local law, and found significant increases in notification rates between 2024 and 2025. This improvement reflects, in part, our efforts to address the Board's recommendation alongside regulatory requirements, and marks a substantial gain in transparency by giving users clearer explanations of why their content was removed or restricted.
User Notifications Sent for Content Removals Following a Community Standard Violation
Comparing the first quarter of 2024 with the first quarter of 2025, notification rates increased by 9 percentage points on Facebook and by 16 percentage points on Instagram.
- Facebook: From January 1 to March 31, 2024, Meta removed over 1.1 billion pieces of organic content on Facebook that were eligible for user notification; users were successfully notified for over 81% of these removals. From January 1 to March 31, 2025, Meta removed over 790 million eligible pieces of organic content on Facebook; users were successfully notified for over 90% of these removals.
- Instagram: From January 1 to March 31, 2024, Meta removed over 161 million pieces of organic content on Instagram that were eligible for user notification; users were successfully notified for over 79% of these removals. From January 1 to March 31, 2025, Meta removed over 169 million eligible pieces of organic content on Instagram; users were successfully notified for more than 95% of these removals.
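The reported increases are percentage-point differences between the Q1 2024 and Q1 2025 coverage rates. A minimal sketch (not Meta code) makes the arithmetic explicit, using the figures from the bullets above:

```python
# Notification coverage rates (% of eligible removals where the user was
# successfully notified), as reported for Q1 2024 and Q1 2025 above.
rates = {
    "Facebook":  {"2024": 81, "2025": 90},
    "Instagram": {"2024": 79, "2025": 95},
}

# Percentage-point change per platform.
deltas = {platform: r["2025"] - r["2024"] for platform, r in rates.items()}

for platform, delta in deltas.items():
    print(f"{platform}: +{delta} percentage points")
# Facebook: +9 percentage points
# Instagram: +16 percentage points
```

Note that a 9-point rise from 81% also means the share of eligible removals that went unnotified fell from 19% to 10%, roughly halving the gap.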
User Notifications Sent for Content Restrictions due to Government Requests under Local Law
From January 1 to March 31, 2025, Meta restricted over 18 million pieces of content on Facebook due to government claims of local or national law violations; users were successfully notified of the localized restriction for nearly 100% of this content. Over the same period, Meta restricted over 500 thousand pieces of content on Instagram due to government claims of local or national law violations, with users successfully notified of the localized restriction for nearly 100% of this content.
These metrics demonstrate Meta's commitment to increasing the level of detail provided to users about the removal of content on Facebook and Instagram. By increasing transparency with users about the reasons for takedowns and localized restrictions, we can promote user understanding while also building credibility and legitimacy in our content moderation practices. By being more transparent about the 'why' behind our actions, we can empower users to make informed decisions and promote a safer, more respectful online community.
A few notes to provide additional clarity on the metrics reported above:
- Our localized restriction feature is available globally, allowing us to restrict access to content in a specific jurisdiction in response to government requests under local law. The specific regions subject to localized restriction may vary depending on the regulatory obligations and risks.
- In this dataset, we do not differentiate between Community Standards enforcement actions that result from government requests and enforcement actions that result from user reports.
- Not all content or account removals are eligible for user notifications. For example, we may not issue notifications for high-volume violations related to deceptive content, such as spam and phishing, consistent with the categories excluded under Article 17(2) of the EU Digital Services Act (DSA). Nor do we issue notifications for violations that might compromise security features intended to protect our services. The metrics above exclude the subset of enforcements deemed ineligible for notifications.