Poem About Political Protests in Argentina
UPDATED NOV. 25, 2025
Today, June 17, 2025, the Oversight Board selected a case appealed by an Instagram user regarding a carousel of text-only images in Spanish that form a poem about political protest in Argentina. The content was posted during protests against a speech made by President Javier Milei at the World Economic Forum, during which he criticized "radical feminism" and the "LGBT agenda."
The poem includes a broader critique of the government's treatment of marginalized groups and is a call for people to protest. In the second image, the poem references words understood in Latin American countries, including Argentina, to be slurs referring to members of the LGBTQ+ community. The poem uses these terms while appealing to readers to protest for the rights of marginalized groups.
Upon initial review, Meta took down this content for violating our policy on Hateful Conduct, as laid out in the Community Standards. However, upon additional review, we determined we removed this content in error and reinstated the post. While Meta removes content “that describes or negatively targets people with slurs,” we allow the use of slurs in certain circumstances, for example to condemn speech or report on it. When read in the context of the full carousel of images, we decided the slurs in this case were used to condemn the government’s treatment of marginalized groups.
We will implement the Board's decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when they issue it.
Read the Board’s case selection summary
Case decision
We welcome the Oversight Board's decision on this case. The Board overturned Meta's original decision to remove the content from Instagram. Meta previously reinstated this content, and as a result, no further action will be taken on the case content.
When it is technically and operationally possible to do so, we will also take action on content that is identical and in the same context as this case. For more information, please see our Newsroom post about how we implement the Board's decisions.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses.
Recommendations
Recommendation 1 (Assessing Feasibility)
So that the full context of a post can be considered during review, Meta should ensure that, when reviewing content within carousels or multiple-image content types, moderators are able to see all content within the post before making a decision, even when only one image is sent for human review.
The Board will consider this recommendation implemented when Meta shares internal documentation with the Board detailing these changes in the moderator interface.
Our commitment:
We will explore options to support reviewers in accessing additional context when reviewing content within carousels or multiple-image posts. As part of this assessment, we are investigating the volume of content that could be impacted by potential enhancements to the reviewer experience. We are also considering whether these enhancements could be adopted at scale across all review workflows or be reserved for workflows that are known to require more contextualized review.
Considerations:
We design our content review processes and systems to effectively apply our content policies, while also ensuring efficiency in our tools and processes for large-scale review. To maintain efficiency in our review process, we consider that many images within carousels or multiple-image posts can be evaluated on their own for potential violations of our standards. However, we recognize that in some cases, the meaning of a single image may depend on its context within a carousel or multiple-image post, and that this may require access to additional content within the post to determine whether the image may be violating.
Enhancing the information accessible to reviewers examining content within carousels or multiple-image content types in our content review systems presents operational challenges, including increased review time, which could result in backlogs. We will consult with internal experts to evaluate any trade-offs and risks before aligning on any potential enhancements. Following this, we may connect with our enforcement teams to evaluate how best to apply those enhancements at scale. As an alternative, we may consider applying these enhancements only in specific review workflows, such as those handled by teams that specialize in contextualized review.
We will provide updates in future biannual reports to the Oversight Board on any developments in the reviewer experience related to review of content within carousels or multiple-image posts.
Recommendation 2 (Work Meta Already Does)
Meta should develop an integrated process for ensuring that, when a content type is introduced or significantly updated, the company's procedures and tooling allow for moderation in line with the company’s human rights responsibilities. This process should include:
- A pre-launch period where enforcement policies, operational guidelines and reviewer product decisions are set up, tested and red-teamed (i.e., proactively seeking vulnerabilities) by cross-functional teams following a pre-determined methodology.
- A time-bound post-launch period involving periodic live testing, problem identification and mitigation that specifically addresses the different modes of expression enabled by the content type.
The Board will consider this recommendation implemented when Meta shares internal documentation with the Board detailing this process and alerts the Board each time it is activated with an asynchronous update.
Our commitment: Our risk review processes ensure effective assessment, compliance, and continuous improvement across our products, with transparency provided through regular reporting and maintained commitments to human rights. Given our ongoing efforts to standardize and enhance these processes, including for new and updated content types, we consider this recommendation complete and will provide no further updates, while remaining committed to continued iteration.
Considerations:
In the process of reviewing this recommendation, we engaged with product and policy teams to rigorously assess existing requirements, validate our approach to launching new content types, and ensure continuous alignment. By consulting cross-functional partners, we confirmed that the recommended standards are met by existing processes.
Our risk management processes ensure effective assessment and compliance across our integrity and policy programs, which align with our human rights responsibilities. Our risk review process is designed to enable innovation while maintaining rigorous operational standards across integrity and other emerging risk areas.
Tactically, our integrated risk review process coordinates cross-functional teams to establish and test decisions before launch. This process is structured to ensure that our products and practices are vetted through thorough review, which includes identifying associated risks and compliance obligations and leveraging established cross-functional triage and escalation mechanisms where needed. Through these mechanisms, a set of specialized risk team members manages escalations with the relevant internal stakeholders to ensure expediency. Throughout our risk review processes, our product teams continuously consult internal stakeholders to assess the level of risk across functions.

We also continue to review our policies, processes, and the methodologies behind them, and we maintain transparency by publishing our continuous efforts to improve measurement. Testing product and safety features remains an important part of improving our platform, as it helps us build tools to reduce the prevalence of harm. As such, we conduct enforcement testing through various methods, including integrity holdouts and quality metrics. We remain accountable by sharing transparency reports that provide updates on our moderation in line with our policies and various responsibilities.
We publicly discuss our methodology for how content is actioned on our platforms and our approach to counting content and actions. Additionally, we have processes that iteratively uphold our commitment to providing the most accurate representation of our metrics and of our enforcement and moderation of various content types. New content types are continuously considered when determining how review teams are trained, through pre-training, hands-on, and ongoing learning. Beyond this, we remain committed to clarifying how enforcement technology works in tandem with the integrity teams responsible for scaling the detection and enforcement of our policies.
Given the scale and complexity of our products and processes, we are continuously working on designing, standardizing, and monitoring integrated processes for new launches and updates where it is feasible and aligns with our goal of reducing the distribution of problematic content. Because of these ongoing efforts across launches of new products, processes, and updates, we consider this recommendation complete and will provide no further updates, while remaining committed to continuously iterating on this work.