Protest Footage Paired with Pro-Duterte Chants

UPDATED FEB 6, 2026
Today, July 22, 2025, the Oversight Board selected a case appealed by a Facebook user regarding an eight-second video reshared on Facebook. The content was posted shortly after former Philippine President Rodrigo Duterte was arrested to face charges before the International Criminal Court (ICC) for alleged crimes against humanity.
The video shows crowds of people protesting on the street, carrying signs and the Serbian flag, accompanied by audio of people repeatedly chanting "Duterte!" while the patriotic Tagalog song "Bayan Ko" plays over the video. The video contains a text overlay stating, "Netherland," and the original post's caption says "Netherlands supporters." The original video footage appears to be of an anti-corruption protest that took place in Serbia, rather than a pro-Duterte rally in the Netherlands.
Meta determined that this content did not violate our policies on Misinformation, as laid out in the Community Standards, and left the content up. Under our Misinformation policy, Meta removes content directly linked to physical harm, as well as "voter or census interference." For other types of misinformation, the company focuses on "reducing its prevalence" and "foster[ing] a productive dialogue."
We will implement the Board's decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when they issue it.
Read the Board’s case selection summary
Case decision
We welcome the Oversight Board's decision today, November 25, 2025, on this case. The Board upheld Meta’s decision to leave the content up.
After reviewing the recommendations provided by the Board, we will update this post with our initial responses to those recommendations.
Recommendations
Recommendation 1 (Work Meta Already Does)
To better inform users of how the Misinformation Community Standard's manipulated media policy is enforced, Meta should explain the different informative labels that Meta uses for manipulated media, including that the High-Risk label is applied in relation to a critical event, and what counts as a critical event.
The Board will consider this recommendation implemented when Meta updates the language in the Misinformation Community Standard to reflect the change.
Our commitment: Our Misinformation Community Standard already informs users of how the Manipulated Media policy is enforced, including sharing that “for content that does not otherwise violate the Community Standards, we may place an informative label on the face of the content…[when it] creates a particularly high risk of materially deceiving the public on a matter of public importance.” We also shared details about these labels, and other approaches to potentially AI-generated content, in an announcement on this page. This public-facing language encompasses the full scope of situations in which we may apply a label.
Considerations: We currently note in our Misinformation Community Standard that we may apply a label to content that does not otherwise violate our Community Standards when the content is a photorealistic image, video, or realistic-sounding audio that was digitally created or altered and creates a particularly high risk of materially deceiving the public on a matter of public importance. We may consider a number of factors when deciding whether to apply the label, and—while determining a matter of “public importance” may relate to a global, critical event—“critical events” are not the sole factor considered for label application. For this reason, we believe that our existing language in the Community Standards provides a comprehensive description of the informative label that may be applied.
Our announcement of these labels initially coincided with the Oversight Board’s decision in the Biden Manipulated Video case. In this announcement, we signaled our agreement to provide information and context rather than remove this content. We also noted that we’ll continue to review our approach as technology evolves.
While we will not be taking further action on this recommendation to update user-facing language, we will add further detail to our existing internal guidance on applying the more visible informative labels for manipulated media in situations where there is a particularly high risk of deceiving the public on a matter of public importance. This internal guidance will include further details on what constitutes a matter of “public importance.” In making this update, we hope to further clarify internally how these labels may be applied.
Recommendation 2 (Work Meta Already Does)
To enable third-party fact-checkers to efficiently address patterns of misinformation, Meta should build a separate queue within the fact-checking interface that includes content similar, but not identical or near-identical, to content already fact-checked in a given market.
The Board will consider this recommendation implemented when Meta shares information with the Board detailing this new interface feature and how it enables fact-checkers to incorporate new, similar content into existing fact-checks.
Our commitment: In regions where we have third-party fact-checkers, they already have access to a surface that helps them see similar content that might make the same claim, despite not being identical or near-identical. Therefore, we consider this recommendation implemented as work Meta already does.
Considerations: As noted in our Transparency Center, outside of the United States, we rely on fact-checkers who are independent from Meta and certified through the non-partisan International Fact-Checking Network (IFCN) or, in Europe, the European Fact-Checking Standards Network (EFCSN) to address misinformation on Facebook, Instagram, and Threads. Fact-checkers independently decide what content to review and what rating to apply.
Since we want our fact-checking partners to focus as much of their time as possible on original reporting, we have systems in place to find both identical and similar content:
  • Identical or near-identical content: When fact-checkers rate a video or image, we're able to find near-exact duplicates automatically and label them.
  • Similar content: When fact-checkers submit an article to us, we run it through matching models to surface more content to partners that might make the same claim. This helps them debunk a higher volume of content more efficiently.
Users can access more information in our Transparency Center about how our fact-checking process works.