Study the impacts of modified approaches to secondary review on reviewer accuracy and throughput. This includes an evaluation of accuracy rates when content moderators are informed that they are engaged in secondary review and an opportunity for users to provide relevant context that may help reviewers evaluate their content, in line with the board’s previous recommendations. Meta should share the results of these accuracy assessments with the board and summarize the results in its Quarterly Updates.
Our commitment: We have researched the effects of informing reviewers that they are conducting a secondary review and of giving people the opportunity to provide additional context on appeal. We are exploring additional experiments to refine and build on this research. We've summarized this work below and plan to provide the board with more detailed information about this research.
Considerations: In June 2018, we ran an experiment giving people the opportunity to request a second appeal of our initial enforcement decision. In those secondary appeals, we provided people with a text box to submit freeform commentary about their content, to understand whether providing additional context had an impact on the outcome of the appeal. Reviewers were aware that they were reviewing content that had been previously reviewed.
The results indicated that reviewers generally did not find the additional information useful. Only 2% of the information people provided supported overturning the original enforcement decision. More often than not, the feedback expressed either disagreement with our Community Standards or disagreement that the person had violated them. Reviewers found that many comments contained no useful information, consisting instead of incomplete words or phrases and expressions of frustration or anger. Even when people provided relevant information, reviewers could not always validate it. The researchers who ran the experiment determined that people's feedback would need to be clearer to be useful to reviewers and to produce changes in enforcement outcomes.
We ran another experiment in December 2020, allowing people to select reasons for their appeal from a dropdown menu (for example, “I think my post does follow the Community Standards” and “I think Facebook misunderstood the context or intent of my post”). We also provided a freeform text box in which people could further explain why they disagreed with an enforcement decision. The results suggested that allowing people to provide additional context on appeal could have an impact on enforcement outcomes, but we are still exploring which format is most useful to reviewers and most likely to affect those outcomes.
Following up on the December 2020 experiment, we are exploring additional experiments that would offer people a dropdown menu, rather than a freeform text box, with options to provide more granular context on appeal, grouped by violation type (for example, a dropdown menu with options specific to Hate Speech appeals). We are still exploring the most efficient way to provide reviewers with additional information so that we can (1) maximize the accuracy of their reviews while ensuring consistency and scalability, (2) minimize the review time needed to consider additional context, and (3) minimize any additional training reviewers would need on how to consider this context.
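To illustrate the kind of structure such an experiment might use, the following is a minimal sketch of appeal-reason options grouped by violation type. The general options are quoted from the December 2020 experiment described above; the grouping structure, the function name, and the Hate Speech-specific entries are hypothetical and included only for illustration, not a description of the actual menus under consideration.

```python
# Hypothetical sketch: appeal-reason options grouped by violation type.
# The "general" option text is quoted from the December 2020 experiment above;
# the Hate Speech-specific entries and all names here are assumptions for
# illustration, not the actual menu contents.

APPEAL_REASON_OPTIONS = {
    "general": [
        "I think my post does follow the Community Standards",
        "I think Facebook misunderstood the context or intent of my post",
    ],
    "hate_speech": [
        # Assumed examples of more granular, violation-specific context.
        "My post condemns or reports hate speech rather than endorsing it",
        "My post uses a term self-referentially or in an empowering way",
    ],
}


def appeal_options(violation_type: str) -> list[str]:
    """Return the general appeal reasons plus any options specific to the violation type."""
    return APPEAL_REASON_OPTIONS["general"] + APPEAL_REASON_OPTIONS.get(violation_type, [])


if __name__ == "__main__":
    # Example: options that might be shown for a Hate Speech appeal.
    for option in appeal_options("hate_speech"):
        print("-", option)
```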
Next steps: We are exploring additional experiments to refine and build on this research, and we will share more detailed results of these analyses with the board.