
Comment on Kenyan Politics Using a Designated Slur

UPDATED FEB 20, 2026
Today, June 3, 2025, the Oversight Board selected a case appealed by a Facebook user regarding a comment on a post about former Deputy President of Kenya Rigathi Gachagua’s support for former Kenyan Prime Minister Raila Odinga’s candidacy in the election for the African Union’s Chairperson. The post features a photograph of Gachagua with a text overlay claiming he endorsed Odinga, a member of the Luo ethnic group, to appease Odinga’s Luo constituency and advance his own political interests, despite being a member of the Kikuyu ethnic group. The caption also suggests that Luo people are gullible and would vote for anyone who benefits Odinga, even if that person is allegedly responsible for past violence committed against Luo people.
The user’s comment on the post mocks the reaction to Gachagua’s statement and dismisses his explanation as meant for “tugeges,” referring to his Kikuyu supporters, suggesting instead that his endorsement is aimed at an external audience rather than a Kenyan one. Meta translates “tugeges” to mean “retarded Kikuyu.”
Meta took down the comment for violating our Hateful Conduct Community Standard, which prohibits content that “describes or negatively targets people with slurs.” Meta defines slurs as “words that inherently create an atmosphere of exclusion and intimidation against people on the basis of a protected characteristic.”
We will implement the Board's decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when they issue it.
Read the board’s case selection summary
Case decision
We welcome the Oversight Board's decision today, December 9, 2025, on this case. The Board overturned Meta's original decision to remove the content from Facebook. Meta had previously reinstated this content, and as a result, no further action will be taken on the case content.
When it is technically and operationally possible to do so, we will also take action on content that is identical and in the same context as this case. For more information, please see our Newsroom post about how we implement the Board's decisions.
After conducting a review of the recommendation provided by the Board, we will update this post with initial responses.
Recommendations
Recommendation 1 (implementing fully)
To ensure Meta’s current product interventions help users avoid Hateful Conduct policy violations, Meta should provide users with an opportunity for self-remediation comparable to the post-time friction intervention created in response to the Pro-Navalny Protests in Russia decision, recommendation no. 6. If this intervention is no longer in effect, Meta should provide a comparable product intervention.
The Board will consider this recommendation implemented when Meta provides enforcement data that demonstrates the efficacy of these product interventions.
Our commitment: We have deployed a range of mechanisms comparable to post time friction and continue to invest in education, innovation, and technology to support user self-remediation and understanding. We consider this recommendation fully implemented and will maintain our commitment to ongoing improvement.
Considerations: We are constantly experimenting with ways to reduce over-enforcement and optimize the overall user experience on our platforms. We carefully balance these efforts with our commitment to voice and our values of authenticity, safety, privacy, and dignity.
To provide users with ongoing opportunities for self-remediation and education, we have expanded our focus on better explaining our policies by offering users on Facebook and Instagram the opportunity to erase their first strike and any resulting account restrictions by completing a short educational program. We recognize that some users unintentionally violate our Community Standards, often without realizing it, and this feature provides a meaningful opportunity for self-remediation across our Community Standards (including Hateful Conduct) and underscores our commitment to protecting user voice and upholding freedom of expression.
In April 2025, in response to the Board’s PAO on Sharing Private Residential Information recommendation #13, we analyzed the impact of this new self-remediation opportunity—finding that, over a 3-month period from January 12, 2025 to April 10, 2025, over 7.1 million Facebook users and over 730 thousand Instagram users who had content removed for violating a first-time, non-severe Community Standard and were eligible for the educational exercise opted to view the eligible violation notice. Over 80% of users on both platforms who started the exercise completed it and had their strike and any resulting account restrictions removed. Ultimately, we found that user education can serve as a less intrusive self-remediation tool, which can mitigate account restrictions that may be applied upon the accrual of multiple violations over time—especially when users do not adequately understand our policies.
Another way that we aim to help users avoid policy violations, specifically on Instagram, is by providing them with notifications about hurtful comments or content. If a user’s comment or post on Instagram is reported, they may receive a notification informing them that it may have been hurtful to others, and giving them the option to self-remediate by deleting or keeping it. This provides an early remediation function in instances where the user may not have intended to cause harm, as they are made aware that their content may be affecting others negatively without the notification impacting their overall account or recommendation status.
Our efforts to ensure that users have a continuous understanding of our evolving Community Standards, and are empowered with knowledge on how to participate on our platforms in a manner that upholds our values, are dynamic and iterative. We will continue to leverage insights gained from user feedback and new initiatives to refine our existing tools. Given the systems in place and our investment in continuous improvement, we consider this recommendation complete and will have no future updates.