Meta should assess the feasibility of implementing customisation tools that would allow users over 18 years old to decide whether to see sensitive graphic content with or without warning screens, on both Facebook and Instagram. The Board expects that, to demonstrate implementation of this recommendation, Meta will publish the results of a feasibility assessment.
Our commitment: We will continue to explore opportunities to allow adult users to provide input and shape experiences based on what they personally feel comfortable encountering online, but we believe that warning screens are an important tool for allowing users to make their own decisions about what they see in real time.
Considerations: We aim to ensure that everyone feels safe and supported on our platforms. Part of this work includes allowing users to decide whether to view sensitive content through the use of warning screens. Warning screens can be applied to eligible content that does not violate our Community Standards but may still be disturbing or sensitive; this protects the underlying free expression while allowing individuals to choose whether they want to view the content. Internal safety and integrity research strongly supports their use. Warning screens are one of our online community’s preferred integrity interventions, with user research showing that the vast majority of users agree with the use of this soft action. The intent of the warning screen is not to punish the creator, but to protect viewers and give them control over their online experience.
Warning screens are an important tool to help users better curate their experience on our platforms. They allow users to decide whether and when they want to engage with potentially harmful content by providing a warning about the sensitive nature of the content, along with a single click-through option if a user decides to proceed with viewing it. Although research insights from testing and observed product behavior showed a decrease in overall engagement with sensitive content, warning screens preserve the option for users to view that content, empowering them to shape their own experience.
Encountering uncovered sensitive content without warning can be unnecessarily distressing to users, and because harmful content can take so many different forms, a single option to remove all warning screens could leave users vulnerable to types of content they had not anticipated or believed they were choosing to view. Additionally, stakeholders who have provided feedback on warning screens find that this feature strikes a fair balance between protecting users' experiences and allowing sensitive content to remain on the platform.
We see warning screens as more effective for allowing users to make real-time decisions for themselves than a static, one-off selection to remove all warning screens across platforms. Tolerance for potentially harmful or borderline content may also vary based on the environment in which a user views it. In qualitative interviews, for example, US respondents told us that while they tolerate nudity on Instagram and Facebook in some contexts, the same content makes them feel less comfortable when they are scrolling around family members. Warning screens allow users to decide whether to view content in real time, taking into account factors such as their surroundings.
Meta currently has plans to allow users to provide feedback on warning screens applied to content that appears in their feed, including registering disagreement with a warning screen's application and indicating a preference that similar content not carry a warning screen in the future. This work will fold into our broader efforts to ensure users can share feedback about their overall experience on our platforms. We have also conducted broad integrity research on the feasibility of more personalized warning screens and will update the Board on any related product developments.
Additionally, the Oversight Board’s ability to shape our application of warning screens was recently expanded. Now, if the Board determines that content should be restored or remain on our platforms, it can also issue a binding judgment on whether that content qualifies for the application or removal of a warning screen, adding another external accountability tool to ensure we apply warning screens accurately and effectively.