2024-041-FB-UA
Today, July 16, 2024, the Oversight Board selected a case appealed by a Facebook user regarding a video from Nigeria that shows two men who appear to have been beaten and detained for allegedly having sex with one another. In the video, the community is interrogating the men, one of whom identifies himself by name and says he was beaten because he was having sex with a man. The user who posted this video added a caption in English saying that both men were caught having sex and are married. The user’s account is located in a country in which same-sex relationships are criminalized.
Upon initial review, Meta left this content up. However, upon further review, we determined that the content did in fact violate our Coordinating Harm and Promoting Crime policy, as laid out in the Facebook Community Standards, and had been left up in error. We therefore removed the content.
Under our Coordinating Harm and Promoting Crime policy, Meta prohibits content that "outs" individuals by exposing the identity or locations affiliated with anyone alleged to be a member of an outing-risk group, in order to prevent and disrupt offline harm. More specifically, we prohibit only involuntary outing: in the interest of protecting voice, we allow people to declare themselves to be members of an outing-risk group. In this case, however, we determined that this admission was involuntary, given that both men were injured and speaking while being detained.
We will implement the Board’s decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when they issue it.
We welcome the Oversight Board’s decision today, October 15, 2024, on this case. The Board overturned Meta’s original decision to leave up the content on Facebook. Meta previously removed this content.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.
Meta should update the Coordinating Harm and Promoting Crime policy’s at-scale prohibition on “outing” to include illustrative examples of “outing-risk groups,” including LGBTQIA+ people in countries where same-sex relations are forbidden and/or such disclosures create significant safety risks.
The Board will consider this recommendation implemented when the public-facing language of the Coordinating Harm and Promoting Crime policy reflects the proposed change.
Our commitment: We will update our external Coordinating Harm and Promoting Crime policy with additional details about our existing approach to "outing-risk groups." This will include additional clarification that we remove the outing of LGBTQIA+ people in countries where same-sex relations may be forbidden and/or where sharing these identities may create significant safety risks.
Considerations: In line with our updates and commitment to recommendation #1 in the Pakistan Political Candidate Accused of Blasphemy decision, we are pursuing updates to our external Coordinating Harm and Promoting Crime policy. As part of these updates, we are considering additional ways to externally clarify our existing policy lines for when we remove content that exposes the identity of a person and may put them at risk of harm.
In our Community Standards we note that, with additional context, we may remove content that could identify someone as a member of the LGBTQIA+ community when exposing that identity may be harmful. However, we do not currently share details or specifics about how we identify whether there is a risk of harm, or about how this may apply at scale. We will share further details in our Community Standards about our approach to this content, including by providing examples of "outing-risk groups."
To improve implementation of its policy, Meta should conduct an assessment of the enforcement accuracy of the at-scale prohibition on exposing the identity or locations of anyone alleged to be a member of an outing-risk group, under the Coordinating Harm and Promoting Crime Community Standard.
The Board will consider this recommendation implemented when Meta publicly shares the results of the assessment and explains how the company intends to improve enforcement accuracy of this policy.
Our commitment: We will measure the accuracy of our enforcement of the “harm against people” subset of the Coordinating Harm and Promoting Crime Community Standard and will further assess the feasibility of conducting a more granular assessment of our enforcement accuracy for exposing members of outing-risk groups, which lies within the “harm against people” subset.
Considerations: We continuously monitor and assess the accuracy of our enforcement measures in line with our commitment to ensuring that our policies are accurately enforced. Accordingly, we periodically conduct reviews by looking at random samples of violating content, across both human and automated review, and assessing whether we made an accurate at-scale decision. To this end, we are investing in new technology, including classifiers, to improve the enforcement of our Coordinating Harm and Promoting Crime policy. These improvements should also yield gains in enforcement accuracy for policy subsets, including the provision focused on outing-risk groups.
Currently, the various subsets of outing are enforced under the "harm against people" sub-section of our Coordinating Harm and Promoting Crime policy, which addresses the outing of risk groups and their affiliates, among other variations. When reviewers enforce against outing under this policy, they assess the specific policy violation but only apply a label for the broader policy category: "harm against people." To conduct an assessment of the "outing-risk groups" policy line, which is a narrow subset of the prohibitions we maintain against "harm against people," our operational, policy, and product teams will need to identify whether there is an effective combination of technical and procedural adjustments that would enable reviewers to apply more granular labels and allow for more detailed assessments of accuracy. Furthermore, because outing also pertains to other policies, such as Bullying and Harassment, our teams will need to determine the best method to capture the holistic enforcement accuracy of outing across our policies.
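To make the granularity point concrete, the sketch below shows how subset-level accuracy could be estimated from a random sample of audited decisions once reviewers can apply a granular sub-policy label. The field names and labels are hypothetical illustrations, not Meta's internal tooling.

```python
# A minimal sketch of subset-level enforcement-accuracy measurement.
# The labels and fields below are hypothetical; this is not Meta's internal tooling.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    policy_label: str      # broad category applied at review time, e.g. "harm_against_people"
    sub_policy_label: str  # hypothetical granular label, e.g. "outing_risk_groups"
    decision: str          # action taken at scale: "remove" or "leave_up"
    audit_decision: str    # decision reached during the quality audit of the sample

def accuracy_by_subset(sample: list[ReviewedDecision]) -> dict[str, float]:
    """Estimate enforcement accuracy per granular policy line from a random sample.

    Accuracy here is the share of at-scale decisions that match the audit decision.
    """
    matches: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for d in sample:
        totals[d.sub_policy_label] += 1
        if d.decision == d.audit_decision:
            matches[d.sub_policy_label] += 1
    return {label: matches[label] / totals[label] for label in totals}
```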
Overall, our iterative approach to reviews allows us to gather insights on the performance of our enforcement systems, as well as provide a signal when improvements are needed. We will continue to improve our enforcement accuracy review process and share the progress of our efforts in future reports.
To increase the efficiency and accuracy of content review in unsupported languages, Meta should ensure its language detection systems precisely identify content in unsupported languages and provide accurate translations of that content to language-agnostic reviewers.
The Board will consider this recommendation implemented when Meta shares data signaling increased accuracy in the routing and review of content in unsupported languages.
Our commitment: We will continue to expand our efforts to improve our language detection systems and better identify unsupported languages. We will also build on our strategy to improve staffing and automation processes for language-agnostic reviewers.
Considerations: We are committed to providing an equitable experience for users across our platforms and to helping ensure that any enforcement against users considers the necessary elements of language, culture and context. Our continued investment in improving language detection across all languages includes iterations to our language identification models. Together with speech-to-text, speech-to-speech, text-to-text and text-to-speech translation across a growing number of languages, these models help us provide accurate translations of content to language-agnostic reviewers.
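As an illustration of the language-identification step only, the sketch below uses the publicly available fastText language-identification model (lid.176.bin) to flag content in unsupported languages for language-agnostic review. The supported-language set and confidence threshold are assumptions made for the example; this is not Meta's production pipeline.

```python
# Illustrative only: off-the-shelf fastText language identification used to flag
# content in unsupported languages. The threshold and supported-language set are
# assumptions for this sketch, not Meta's configuration.
import fasttext

SUPPORTED = {"en", "es", "pt", "ar", "hi"}   # hypothetical set of supported review languages
CONFIDENCE_THRESHOLD = 0.70                  # hypothetical minimum confidence

model = fasttext.load_model("lid.176.bin")   # pretrained language-ID model covering 176 languages

def detect_language(text: str) -> tuple[str, float]:
    """Return the most likely language code and the model's confidence."""
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    return labels[0].replace("__label__", ""), float(probs[0])

def needs_agnostic_review(text: str) -> bool:
    """Flag content whose language is unsupported or detected with low confidence."""
    lang, confidence = detect_language(text)
    return lang not in SUPPORTED or confidence < CONFIDENCE_THRESHOLD
```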
We have a streamlined process with dedicated staffing and resources to ensure more efficient and expedient review, particularly for high-priority language-agnostic review. Given the specifics of regional context, we are mindful of the limitations of scaling to all languages and dialects, yet we continue to refine our detection processes to better understand local nuance.
We are aware that language-agnostic reviewers often need additional signals beyond accurate translations of the content, and we continue to work toward technology solutions that provide auxiliary support to reviewers. We are innovating across various pathways, including using artificial intelligence to identify the areas of a long-form video or post where language-agnostic reviewers should focus their time. We are also adding AI-generated contextual notes to help reviewers identify words and phrases they might otherwise misinterpret, for example because of locally trending events or terms, slurs, historical context or celebrity bait. In addition, we are refining these functions to help language-agnostic reviewers identify the probable violation by proposing the likely violated policy and the reasoning behind it.
As we continue to improve and refine our efforts on language detection for unsupported languages and translation across all languages for reviewers, we consider the impact and scale of our interventions and will provide further details on these in future updates.
Meta should ensure that content containing an unsupported language, even if mixed with supported languages, is routed to agnostic review. This includes providing reviewers with the option to re-route content containing an unsupported language to agnostic review.
The Board will consider this recommendation implemented when Meta provides the Board with data on the successful implementation of this routing option for reviewers.
Our commitment: We will continue to allow content reviewers to route content to the appropriate language queue, which offers better optimization than a language-agnostic queue. This does not, however, preclude content reviewers from routing content to language-agnostic queues when that is most appropriate. Additionally, we will continue to work on our reviewer location strategy and our language identification and translation technology to improve performance in unsupported languages.
Considerations: Meta has already enabled, in its review tool, the option to route content to other language-review or language-agnostic teams. This allows reviewers to redirect unsupported, mis-routed or mixed-language content to the appropriate team. Currently, when a post contains a mix of languages, reviewers review the parts they understand and route the content onward with a summary and assessment in the notes for other reviewers to consider during their review.
For unsupported languages, where reviewers do not speak the language of the content, it is often more effective to review the content in regional queues, where reviewers still have local regional context. In language-agnostic queues, reviewers would have even less language and regional context and, as a result, would be less effective in reviewing content in unsupported languages. We therefore continue to work toward adequate regional representation in our reviewer location strategy.
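The sketch below illustrates this routing preference (language-specific queue first, then a regional queue, then language-agnostic review). The queue names and inputs are hypothetical and do not describe Meta's production routing logic.

```python
# A minimal sketch of the routing preference described above, with hypothetical
# queue names and inputs; this is not Meta's production routing logic.
def route_for_review(detected_lang: str, region: str,
                     supported_langs: set[str], regional_queues: set[str]) -> str:
    """Pick a review queue: language-specific first, then regional, then agnostic."""
    if detected_lang in supported_langs:
        return f"lang:{detected_lang}"   # reviewers who speak the language
    if region in regional_queues:
        return f"region:{region}"        # reviewers with local regional context
    return "language_agnostic"           # fallback: language-agnostic review

# Example: content in an unsupported language from a region with a dedicated
# regional queue goes to reviewers with regional context.
print(route_for_review("ha", "west_africa", {"en", "es"}, {"west_africa"}))
```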
In addition to continuously improving the accuracy of our translations across languages, we seek to ensure that language-agnostic review is best positioned for accurate enforcement by pursuing efforts to align it with regional relevance. Through our continued assessments of language-agnostic review processes, we have learned that human reviewers with the relevant regional context can leverage their expertise to make enforcement decisions even if they do not speak the language. Configuring our review queues to accurately reflect this nuance is a large undertaking that requires various considerations and extensive investments in resources; however, our teams continue to strategically pursue means to support regionally specialized language-agnostic review through routing tools and updates to reviewer training.
We will also continue to work on our language identification and translation technology, so that the language of content is identified more reliably and reviewers receive accurate translations for assessment.