Posts Displaying South Africa’s Apartheid-Era Flag

UPDATED JUN 20, 2025
2025-001-FB-UA, 2025-002-FB-UA
On October 8, 2024, the Oversight Board selected a case bundle, appealed by Facebook users, regarding posts that include depictions of the flag used by South Africa between 1928 and 1994.
The first piece of content is a photograph of a uniformed soldier carrying the flag, with a caption that says “Share if you served under this flag.” The second piece of content is an album of six photographs, one of which shows the flag. The other photographs include a black ice cream vendor riding a bicycle and selling to white children, a beach, a game board, candy cigarettes, and a toy “cracker” gun. The caption expresses fondness for the previous era and asks the audience to “read between the lines,” followed by a winking face and an “OK” hand emoji.
Meta determined that neither piece of content violated our policies on Dangerous Organizations and Individuals or Hate Speech, as laid out in the Facebook Community Standards, and left both posts up.
We will implement the Board’s decision once it has finished deliberating and will update this post accordingly. Please see the Board’s website for the decision once it is issued.
Read the Board’s case selection summary

Case decision
We welcome the Oversight Board's decision on this case, issued on April 29, 2025. The Board upheld Meta’s decision to leave up the content in both cases.
After reviewing the recommendations provided by the Board, we have updated this post with our initial responses, which appear below.

Recommendations

Recommendation 1 (Assessing Feasibility)
As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of LGBTQIA+ people, including minors, especially where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity.
The Board will consider this recommendation implemented when Meta provides the Board with robust data and analysis on the effectiveness of its prevention or mitigation measures on the cadence outlined above and when Meta reports on this publicly.
Commitment Statement: We will assess the feasibility of this multi-part recommendation.
Considerations: Meta conducts ongoing, integrated human rights due diligence to identify, prevent, mitigate and address potential adverse human rights impacts related to our policies, products and operations, in line with the UN Guiding Principles on Business and Human Rights (UNGPs), related guidance, and our human rights policy. Ahead of the January 7, 2025 changes, we assessed the risks of those changes and took into account relevant mitigations, such as the availability of other policies and user reports to address potentially harmful content.
We will assess the feasibility of implementing this recommendation and provide updates in future reports to the Oversight Board. We will also bundle future updates for this recommendation under recommendation #1 in the Gender Identity Debate Videos case.

Recommendation 2 (Implementing in Full)
To improve the clarity of its Dangerous Organizations and Individuals Community Standard, Meta should adopt a single, clear and comprehensive explanation of how its prohibitions and exceptions under this Community Standard apply to designated hateful ideologies.
The Board will consider this recommendation implemented when Meta adopts a single, clear and comprehensive explanation of its rule and exceptions related to designated hateful ideologies (under “we remove”).
Commitment Statement: We will update our Dangerous Organizations and Individuals Community Standard to clarify our approach to content involving designated hateful ideologies.
Considerations: In line with prior commitments to the Board to update our Dangerous Organizations and Individuals Community Standard, we plan to clarify that content that glorifies, supports, represents, or references a hateful ideology violates this policy, consistent with our policy rationale. The body of our Community Standard currently explains how we define those hateful ideologies and groups. However, we recognize that we could make this approach clearer throughout our external policy, and we will do so as part of ongoing work to update and clarify this Community Standard. We will provide an update on the status of this work in a future report to the Board.

Recommendation 3 (Assessing Feasibility)
To improve the clarity of its Dangerous Organizations and Individuals Community Standard, Meta should list apartheid as a standalone designated hateful ideology in the rules.
The Board will consider this recommendation implemented when Meta adds apartheid to its list of designated hateful ideologies.
Commitment Statement: We will conduct an initial assessment to better understand how the term “apartheid” is used on our platforms and consider next steps based on the assessment’s findings.
Considerations: We will conduct an initial assessment to better understand how the term “apartheid” is used on our platforms and consider next steps based on the assessment’s findings.

Recommendation 4 (Implementing in Full)
To improve clarity to reviewers of its Dangerous Organizations and Individuals Community Standard, Meta should provide more global examples to reviewers of prohibited glorification, support and representation of hateful ideologies, including examples that do not directly name the listed ideology.
The Board will consider this recommendation implemented when Meta provides updated internal guidance to the Board including more global examples, including ones that do not directly name the listed ideology.
Commitment Statement: We will update our internal guidance for the Dangerous Organizations and Individuals Community Standard with more examples, including illustrative global examples for hateful ideologies.
Considerations: We will update examples in our internal guidance for content reviewers to include a more global representation of content that glorifies, supports, or represents a hateful ideology. As part of this process, we will work with subject matter experts and teams with regional and language expertise to identify additional illustrative examples to guide internal reviewers. We will provide updates in future reports to the Oversight Board.