2020-004-IG-UA
On December 1, 2020, the Oversight Board selected a case appealed by someone on Instagram regarding photos with nudity related to breast cancer symptoms.
Meta took down this content for violating our policy on adult nudity and sexual activity, as laid out in the Facebook Community Standards.
On January 28, 2021, the board overturned Facebook's decision on this case. Facebook had already reinstated this content because it did not violate our policies and had been removed in error, so no further action will be taken on this content.
On February 25, 2021, Facebook responded to the board’s recommendations for this case. We are committing to take action on three of them and are still assessing the feasibility of the other three.
Improve automated detection of images with text overlay so that posts raising awareness of breast cancer symptoms are not wrongly flagged for review. Meta should also improve its transparency reporting on its use of automated enforcement.
Our commitment: We agree we can do more to ensure our machine learning models don’t remove the kinds of nudity we allow (e.g., female nipples in the context of breast cancer awareness). We commit to refining these systems by continuing to invest in improving our computer vision signals, sampling more training data for our machine learning, and leveraging manual review when we’re not as confident about the accuracy of our automation.
Considerations: Facebook uses both: 1) automated detection systems to flag potentially violating content and “enqueue” it for a content reviewer, and 2) automated enforcement systems to review content and decide if it violates our policies. We want to avoid both wrongly flagging posts for review and wrongly removing them, but our priority will be to ensure our models don’t remove this kind of content (content wrongly flagged for review is still assessed against our policies before any action is taken).
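To make the distinction between these two kinds of automation concrete, the following is a minimal Python sketch under stated assumptions: the thresholds, score scale, and routing labels are invented for illustration and do not describe Meta's actual models or values.

```python
# Hypothetical sketch of the two kinds of automation described above.
# The thresholds and labels are illustrative only, not Meta's real systems.

ENFORCE_THRESHOLD = 0.97   # auto-remove only when the model is very confident
REVIEW_THRESHOLD = 0.60    # otherwise, flag ("enqueue") for a content reviewer

def route_post(violation_score: float) -> str:
    """Decide what to do with a post given a model's violation score in [0, 1]."""
    if violation_score >= ENFORCE_THRESHOLD:
        return "auto_remove"          # automated enforcement
    if violation_score >= REVIEW_THRESHOLD:
        return "enqueue_for_review"   # automated detection -> human decision
    return "no_action"

# Example: a borderline breast cancer awareness post is sent to a reviewer,
# who then applies the written policy before any action is taken.
print(route_post(0.72))  # -> "enqueue_for_review"
```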
In this case, our automated systems got it wrong by removing this post, but not because they didn’t recognize the words “breast cancer.” Our machine learning works by predicting whether or not a piece of content violates our policies, taking any text overlays into account. We have observed patterns of abuse where people mention “breast cancer” or “cervical cancer” to try to confuse and/or evade our systems, meaning we cannot train our system to, say, ignore everything that says “breast cancer.”
So, our models make predictions about posts like breast cancer awareness after “learning” from a large set of examples that content reviewers have confirmed either do or do not violate our policies. This case was difficult for our systems because the number of breast cancer-related posts on Instagram is very small compared to the overall number of violating nudity-related posts. This means the machine learning system has fewer examples to learn from and may be less accurate.
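The class-imbalance problem described here can be illustrated with a toy example. The sketch below uses placeholder data, invented features, and a generic scikit-learn classifier (none of which reflect Meta's systems) to show one common mitigation: weighting the rare allowed class more heavily so the model does not simply default to predicting a violation.

```python
# Illustrative sketch of why rare classes are harder: with very few
# "allowed medical nudity" examples, a classifier trained on raw counts
# errs toward removal. Upweighting the rare class is one common mitigation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features: [skin_exposure_signal, medical_context_signal]
violating = rng.normal(loc=[0.9, 0.1], scale=0.1, size=(1000, 2))  # common
allowed   = rng.normal(loc=[0.9, 0.8], scale=0.1, size=(20, 2))    # rare

X = np.vstack([violating, allowed])
y = np.array([1] * len(violating) + [0] * len(allowed))  # 1 = violating

# "balanced" weights examples inversely to class frequency, so the 20 allowed
# posts count as much in training as the 1,000 violating ones.
model = LogisticRegression(class_weight="balanced").fit(X, y)

borderline_awareness_post = [[0.9, 0.7]]
print(model.predict_proba(borderline_awareness_post)[0][1])  # P(violating)
```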
Next steps: We will continue to invest in making our machine learning models better at detecting the kinds of nudity we do allow. We will continue to improve our computer vision signals, sample more training data for our machine learning, and increase our use of manual review when we’re less sure about the accuracy of our automation.
Revise the Instagram Community Guidelines to specify that female nipples can be shown to raise breast cancer awareness and clarify that where there are inconsistencies between the Community Guidelines and the Community Standards, the latter take precedence.
Our commitment: In response to the board’s recommendations, we updated the Instagram Community Guidelines on nudity to read: “...photos in the context of breastfeeding, birth-giving and after-birth moments, health-related situations (for example, post-mastectomy, breast cancer awareness or gender confirmation surgery) or an act of protest are allowed.”
We’ll also clarify the overall relationship between Facebook’s Community Standards and Instagram’s Community Guidelines, including in the Transparency Center we’ll be launching in the coming months (see recommendation 2 in the hydroxychloroquine, azithromycin and COVID-19 case for more detail).
Considerations: Our policies are applied uniformly across Facebook and Instagram, with a few exceptions — for example, people may have multiple accounts for different purposes on Instagram, while people on Facebook can only have one account using their authentic identity. We will update Instagram’s Community Guidelines to provide additional transparency about the policies we enforce on the platform. Our teams will need some time to do this holistically (for example, ensuring the changes are reflected in the notifications we send to people and in our Help Center), but we’ll provide updates on our progress.
Next steps: We’ll build more comprehensive Instagram Community Guidelines that provide additional detail on the policies we enforce on Instagram today and provide people with more information on the relationship between Facebook’s Community Standards and Instagram’s Community Guidelines.
When communicating to people about how they violated policies, be clear about the relationship between the Community Guidelines and Community Standards.
Our commitment: We’ll continue to explore how best to provide transparency to people about enforcement actions, within the limits of what is technologically feasible. We’ll start by ensuring consistent communication across Facebook and Instagram, building on our commitment above to clarify the overall relationship between Facebook’s Community Standards and Instagram’s Community Guidelines.
Considerations: Over the past several years, we have invested in improving the way we communicate with people when we remove content, and we have teams dedicated to continuing to research and refine these user experiences. As part of this work, we have updated our notifications to inform people under which of Instagram’s Community Guidelines a post was taken down (for example, whether it was taken down for Hate Speech or for Adult Nudity and Sexual Activity), but we agree with the board that we’d like to provide more detail. As part of our response to the recommendation in the case about Armenians in Azerbaijan, we are working through multiple considerations to explore how we can provide additional transparency.
In addition to confirming the need to provide more specificity about our decisions, the board’s decision also highlighted the need for consistency in how we communicate across Facebook and Instagram. In this case, we did not tell the user that we allow female nipples in health contexts, but the same notification on Facebook would have included this detail. As we clarify the overall relationship between Facebook’s Community Standards and Instagram’s Community Guidelines, we commit to ensuring our notification systems keep up with those changes.
Next steps: We will continue to work toward consistency between Facebook and Instagram and provide updates within the next few months.
Ensure people can appeal decisions taken by automated systems to human review when their content is found to have violated Facebook’s Community Standard on Adult Nudity and Sexual Activity.
Our commitment: Our teams are always working to refine the appropriate balance between manual and automated review. We will continue this assessment for appeals, evaluating whether using manual review would improve accuracy in certain areas, and if so how best to accomplish it.
Considerations: Typically, the majority of appeals are reviewed by content reviewers. Anyone can appeal any decision we make to remove nudity, and that appeal will be reviewed by a content reviewer except in cases where we have capacity constraints related to COVID-19.
That said, automation can also be an important tool in re-reviewing content decisions since we typically launch automated removals only when they are at least as accurate as content reviewers.
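As a rough illustration of that routing logic, here is a small hypothetical sketch. The function name, inputs, and return labels are assumptions chosen for readability, not a description of Meta's appeals infrastructure.

```python
# Hypothetical sketch of the appeal routing described above: appeals default
# to a content reviewer, and automation is used only where it has been
# measured to be at least as accurate as reviewers. Names are illustrative.

def route_appeal(reviewer_capacity_available: bool,
                 automation_at_least_as_accurate: bool) -> str:
    # Default: a person re-reviews the appeal.
    if reviewer_capacity_available:
        return "human_review"
    # Under capacity constraints (e.g., during COVID-19), fall back to
    # automated re-review only where its measured accuracy meets or exceeds
    # that of content reviewers.
    if automation_at_least_as_accurate:
        return "automated_re_review"
    return "queue_for_human_review"

print(route_appeal(reviewer_capacity_available=False,
                   automation_at_least_as_accurate=True))  # -> "automated_re_review"
```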
Next steps: We’ll continue to monitor our enforcement and appeals systems to ensure that there’s an appropriate level of manual review and will make adjustments where needed.
Inform people when automation is used to take enforcement action against their content, including accessible descriptions of what this means.
Our commitment: Our teams will test the impact of telling people whether their content was actioned by automation or manual review.
Considerations: Over the past several years we’ve invested in improving the experience that we provide people when we remove content. We have teams who think about how to best explain our actions and conduct research to help inform how we can do this in a way that’s accessible and supportive to people. We also need to ensure that this experience is consistent for billions of people all over the world who have differing levels of comprehension. From prior research and experimentation, we’ve identified that people have different perceptions and expectations about both manual and automated reviews. While we agree with the board that automated technologies are limited in their ability to understand some context and nuance, we want to ensure that any additional transparency we provide helps all people more accurately understand our systems, rather than creating confusion as a result of pre-existing perceptions. For example, we typically launch automated removal technology when it is at least as accurate as content reviewers. We also don’t want to overrepresent the ability of content reviewers to always get it right.
Additionally, many decisions are made with a combination of manual and automated input. For example, a content reviewer may take action on a piece of content based on our Community Standards, and we may then use automation to detect and enforce on identical copies. We would need to conduct research to identify the best way of explaining these and other permutations to people.
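One simplified way to picture this "one manual decision, many automated enforcements" pattern is the hypothetical sketch below. It uses a plain SHA-256 hash to stand in for whatever matching technology is actually used, an assumption made purely for illustration.

```python
# Hypothetical sketch: a reviewer makes one decision, and automation then
# applies it to byte-identical copies via a hash lookup. The hashing scheme
# here is deliberately simplistic; it is not Meta's matching technology.
import hashlib
from typing import Optional

# Decisions made by content reviewers, keyed by a hash of the media bytes.
reviewed_decisions = {}

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

def record_manual_decision(media_bytes: bytes, decision: str) -> None:
    reviewed_decisions[fingerprint(media_bytes)] = decision

def automated_decision_for_copy(media_bytes: bytes) -> Optional[str]:
    # Return the earlier human decision if this upload is an identical copy.
    return reviewed_decisions.get(fingerprint(media_bytes))

record_manual_decision(b"<original post bytes>", "remove")
print(automated_decision_for_copy(b"<original post bytes>"))   # -> remove
print(automated_decision_for_copy(b"<different post bytes>"))  # -> None
```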
Next steps: We will continue experimentation to understand how we can more clearly explain our systems to people, including specifically testing the impact of telling people more about how an enforcement action decision was made.
Expand transparency reporting to disclose data on the number of automated removal decisions, and the proportion of those decisions subsequently reversed following human review.
Our commitment: We need more time to evaluate the right approach to sharing more about our automated enforcement. Our Community Standards Enforcement Report currently includes our “proactive rate” (the percentage of violating content we took action on that we found before people reported it), but we agree that we can add more information to show the accuracy of our automated review systems.
Considerations: The board uses the term “automation” broadly; however, many decisions are made with a combination of manual and automated input. For example, a content reviewer may take action on a piece of content based on our Community Standards, and we may then use automation to detect and enforce on identical copies. We need to align on the best way to study and report this information.
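For illustration only, the sketch below shows how metrics of this kind might be computed: the proactive rate already published in the Community Standards Enforcement Report, plus the automated-removal and reversal figures the board asked about. The field names and sample numbers are hypothetical, not real enforcement data.

```python
# Illustrative sketch of candidate reporting metrics; all figures are made up.
from dataclasses import dataclass

@dataclass
class QuarterlyEnforcementStats:
    content_actioned: int              # total pieces of content actioned
    found_proactively: int             # found before anyone reported it
    automated_removals: int            # removed by automation without prior human review
    automated_removals_reversed: int   # restored after a human reviewed an appeal

    @property
    def proactive_rate(self) -> float:
        return self.found_proactively / self.content_actioned

    @property
    def automated_reversal_rate(self) -> float:
        return self.automated_removals_reversed / self.automated_removals

stats = QuarterlyEnforcementStats(
    content_actioned=1_000_000,
    found_proactively=950_000,
    automated_removals=700_000,
    automated_removals_reversed=21_000,
)
print(f"Proactive rate: {stats.proactive_rate:.1%}")                    # 95.0%
print(f"Automated reversal rate: {stats.automated_reversal_rate:.1%}")  # 3.0%
```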
Next steps: We’ll continue working on this recommendation to determine the most appropriate and meaningful metrics to report in our Community Standards Enforcement Report, taking into account the complexities of scale, technology, and manual review.