Gender Identity Debate Videos

Updated June 20, 2025
2024-046-FB-UA, 2024-047-IG-UA
Today, August 29, 2024, the Oversight Board announced that it has selected a case bundle from two user appeals about content posted to Facebook and Instagram. The first piece of content is a video showing a transgender woman being confronted for using the women's bathroom. The second piece of content is a video of a transgender girl winning a female sports competition in the United States, with some spectators vocally disapproving of the result.
Meta determined that neither video violated our Hate Speech or Bullying and Harassment policies, as laid out in our Facebook Community Standards and Instagram Community Guidelines, and left both pieces of content up.
Under our Hate Speech policy, Meta removes any calls for exclusion of members of a protected characteristic group. We generally allow people to criticize concepts because we want to allow discussion about the ideas, institutions, and policies that are a central part of any society or cultural community.
In both cases, even if the content included a call for exclusion, we determined that the posts should nonetheless be allowed upon escalation in our content review process, given their newsworthiness. Transgender people’s access to bathrooms that correspond to their gender identity is the subject of considerable political debate in the United States.
While under our Bullying and Harassment policy Meta removes attacks targeted at a private individual, in both instances we determined that there was no explicit call for exclusion in the posts.
We will implement the board’s decision once it has finished deliberating, and will update this post accordingly. Please see the Board's website for the decision when it issues it.
Read the board’s case selection summary

Case decision
We welcome the Oversight Board's decision today, April 29, 2025, on this case. The Board upheld Meta’s decision to leave the content up in both cases.
After conducting a review of the recommendations provided by the Board, we will update this post with initial responses to those recommendations.

Recommendations

Recommendation 1 (Assessing Feasibility)
As part of its ongoing human rights due diligence, Meta should take all of the following steps in respect of the January 7, 2025, updates to the Hateful Conduct Community Standard. First, it should identify how the policy and enforcement updates may adversely impact the rights of LGBTQIA+ people, including minors, especially where these populations are at heightened risk. Second, Meta should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness. Third, Meta should update the Board on its progress and learnings every six months, and report on this publicly at the earliest opportunity.
The Board will consider this recommendation implemented when Meta provides the Board with robust data and analysis on the effectiveness of its prevention or mitigation measures on the cadence outlined above and when Meta reports on this publicly.
Commitment Statement: We will assess the feasibility of this multi-part recommendation.
Considerations: Meta conducts ongoing, integrated, human rights due diligence to identify, prevent, mitigate and address potential adverse human rights impacts related to our policies, products and operations in line with the UNGPs, related guidance, and our human rights policy. Ahead of the January 7th changes, we assessed the risks of the changes and took into account relevant mitigations, such as the availability of other policies and user reports to address potentially harmful content.
We will assess the feasibility of implementing this recommendation and provide updates in future reports to the Oversight Board. We will also bundle future updates for this recommendation along with recommendation #1 from the cases on Posts Displaying South Africa’s Apartheid-Era Flag and Criticism of EU Migration Policies and Immigrants.

Recommendation 2 (Assessing Feasibility)
To ensure Meta’s content policies are framed neutrally and in line with international human rights standards, Meta should remove the term “transgenderism” from the Hateful Conduct policy and corresponding implementation guidance.
The Board will consider this recommendation implemented when the term no longer appears in Meta’s content policies or implementation guidance.
Commitment Statement: We will consider ways to update the terminology in our Hateful Conduct policy in order to best explain the types of discussions and content the policy allows.
Considerations: We are evaluating how best to explain what content is allowed and not allowed on our platforms under the Hateful Conduct policy. Our goal in our Community Standards is to clearly explain our policy approach to content. Achieving clarity and transparency in our public explanations may sometimes require including language considered offensive to some.
As we consider this change in our Hateful Conduct policy, we plan to incorporate feedback from a variety of stakeholders to ensure that our Community Standard continues to be clear for the billions of users on our platforms. We are assessing a number of possible updates to the policy language, and will provide updates in a future Biannual Report to the Board.

Recommendation 3 (Assessing Feasibility)
To reduce the reporting burden on targets of Bullying and Harassment, Meta should allow users to designate connected accounts that can flag, on their behalf, potential Bullying and Harassment violations requiring self-reporting.
The Board will consider this recommendation implemented when Meta makes these features available and easily accessible to all users via their account settings.
Commitment Statement: We will evaluate the feasibility of allowing people connected to a user to report content requiring self-reporting on their behalf, as well as looking for opportunities to foster partnerships expanding the ability of designated entities to report potentially violating content, particularly on behalf of youth.
Considerations: Ensuring the safety of users on our platforms is consistently a high priority, and we strive to iterate on and improve their ability to report or escalate content such as bullying and harassment. Our Bullying and Harassment policy applies certain protections for everyone, regardless of reporting context. However, for less severe tiers of our policy, we apply different protections for different individuals, such as adult public figures and private individuals. In order to allow discussion such as banter among friends or neutral commentary, we may require self-reporting, as it provides context to help us understand whether the person reporting content feels bullied or harassed.
In response to this recommendation, we will assess whether there are ways to leverage existing tools for reporting content while still maintaining self-reporting as a key contextual signal for distinguishing content an individual may consider bullying and harassment from legitimate discussion and speech. Allowing others to report on behalf of a person is technically difficult given the way our review systems function at scale and may be subject to abuse, but we will explore options and provide an update on this work in a future report.
Beyond the context of self-reporting, we have also taken steps recently to prioritize certain reports more generally for review under our Community Standards. Earlier this year, following our launch of Instagram teen accounts, we introduced the School Partnership Program for Instagram, a program partnering directly with schools and teachers to address bullying. Through this program, reports submitted by school partners that may violate Instagram's Community Standards will be prioritized for review. However, policy areas that require self-reporting will still need a match between the target and the reporter. Additionally, schools receive status updates on the reports and notifications as soon as Instagram takes action on the report. The program is currently open to middle and high schools in the US. As part of our standard process, we allow parents to request the removal of violating content on behalf of children under 13 years old.
We are committed to exploring additional opportunities to provide services in instances where users may suddenly become public figures or highly visible on our platforms. This will require collaboration across our Product, Policy, Partnerships, and Operations teams to identify any possible avenues for expansion.
We will provide updates on the status of this recommendation in future reports to the Board.
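To make the delegation model in this recommendation concrete, here is a purely illustrative sketch, not Meta's system: a user designates trusted accounts, and a report counts as a "self-report" only when it comes from the target or one of their designated delegates. All names and functions here are assumptions for illustration.

```python
# Hypothetical sketch of delegated self-reporting; NOT Meta's implementation.
from dataclasses import dataclass, field

@dataclass
class Account:
    user_id: str
    # Accounts this user has designated to report on their behalf.
    delegates: set = field(default_factory=set)

    def designate(self, delegate_id: str) -> None:
        self.delegates.add(delegate_id)

def is_valid_self_report(target: Account, reporter_id: str) -> bool:
    # A report is treated as a self-report if it comes from the target
    # themselves or from an account they have explicitly designated.
    return reporter_id == target.user_id or reporter_id in target.delegates

alice = Account("alice")
alice.designate("bob")  # Bob may now flag bullying content targeting Alice
print(is_valid_self_report(alice, "alice"))  # True: the target herself
print(is_valid_self_report(alice, "bob"))    # True: designated delegate
print(is_valid_self_report(alice, "carol"))  # False: unrelated account
```

The design choice the recommendation implies is that delegation is opt-in per user, which preserves self-reporting as a contextual signal while reducing the burden on the target.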

Recommendation 4 (Assessing Feasibility)
To ensure there are fewer enforcement errors on Bullying and Harassment violations requiring self-reporting, Meta should ensure the one report representing multiple reports on the same content is chosen based on the highest likelihood of a match between the reporter and the content’s target. In doing this Meta should guarantee that any technological solutions account for potential adverse impacts on at-risk groups.
The Board will consider this recommendation implemented when Meta provides sufficient data to validate the efficacy of improvements in the enforcement of self-reports of Bullying and Harassment violations as a result of this change.
Commitment Statement: We are assessing solutions to improve identification and review of enforcement errors across all Community Standards, including those related to self-reporting of Bullying and Harassment. This assessment includes reevaluating existing tooling functions and ongoing deliberation of how we prioritize content for human review.
Considerations: We are continuously working to improve and standardize our review and enforcement processes across all violation areas and will assess the feasibility of implementing this recommendation in line with this ongoing work. Currently, our system mitigates the risk of missing a self-report by ensuring that, if multiple people report content at different times, we have humans review the content multiple times before we begin automatically marking it non-violating. This increases the chance that, if there is a self-report, one of the human reviews will capture it. In cases where automated enforcement has previously been applied to reported content deemed as non-violating, we also enable periodic human reviews at set intervals to re-examine content that receives frequent reports.
Ongoing assessments include exploring how advancements in our enforcement technology can further improve the enforcement accuracy of highly viral content reported by multiple users. We will continue to evaluate the feasibility of using these enhancements to address this recommendation and will provide updates in future reports.
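The selection rule the Board describes, choosing the one representative report most likely to be a reporter/target match, can be sketched as follows. This is an illustrative toy, not Meta's enforcement system; `match_likelihood` and `toy_likelihood` are assumed scoring functions invented for this example.

```python
# Hypothetical sketch of deduplicating multiple reports on the same content
# by keeping the report most likely to come from the content's target,
# so self-reporting-gated policies are not missed. NOT Meta's implementation.

def pick_representative(reports, match_likelihood):
    # reports: list of (reporter_id, content_id) tuples on the same content.
    # Keep the single report whose reporter best matches the likely target.
    return max(reports, key=match_likelihood)

def toy_likelihood(report, target_id="alice"):
    # Toy scorer: assume we know the target and score exact matches highest.
    reporter_id, _ = report
    return 1.0 if reporter_id == target_id else 0.1

reports = [("bob", "post1"), ("alice", "post1"), ("carol", "post1")]
print(pick_representative(reports, toy_likelihood))  # ('alice', 'post1')
```

In a real system the scoring would need to account for the adverse impacts on at-risk groups that the recommendation flags, e.g. by routing low-confidence cases to human review rather than discarding them.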