Our goal is to minimize the impact that violations of our policies have on people using our services. We measure the prevalence of violating content to gauge how we’re performing against that goal.
Prevalence considers all the views of content on Facebook or Instagram and measures the estimated percentage of those views that were of violating content. (Learn more about how we define views in “Why we measure the prevalence of views.”) This metric assumes that the impact caused by violating content is proportional to the number of times that content is viewed.
Another way to think of prevalence is how many views of violating content we didn’t prevent — either because we haven’t caught the violations early enough or we missed them altogether.
Prevalence of violating content is estimated using samples of content views from across Facebook or Instagram. We calculate it as the estimated number of views that showed violating content, divided by the estimated number of total content views on Facebook or Instagram. If the prevalence of adult nudity and sexual activity was 0.18% to 0.20%, that would mean that, on average, 18 to 20 of every 10,000 content views were of content that violated our standards for adult nudity and sexual activity.
[Visualization: 10,000 total content views shown as a grid of dots, 1 dot = 10 views, with 20 violating content views highlighted.]
If prevalence was 0.20%, that means for every 10,000 views, 20 views were of violating content. While these numbers can be very low, even the smallest percentage can represent significant impact on people.
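To make the arithmetic concrete, here is a minimal sketch of the calculation described above, assuming a simple random sample of labeled views and a normal-approximation confidence interval. The function name and sample counts are illustrative assumptions, not Meta's production code or real figures.

```python
# Illustrative sketch of the prevalence calculation described above.
# The numbers and function name are hypothetical, not Meta's systems.
import math

def prevalence_estimate(violating_views: int, sampled_views: int,
                        z: float = 1.96) -> tuple[float, float, float]:
    """Estimate prevalence from a simple random sample of content views.

    Returns (point_estimate, lower_bound, upper_bound) as fractions,
    using a normal-approximation 95% confidence interval.
    """
    p = violating_views / sampled_views
    margin = z * math.sqrt(p * (1 - p) / sampled_views)
    return p, max(0.0, p - margin), p + margin

# Example: 1,900 violating views observed in 1,000,000 sampled views.
p, lo, hi = prevalence_estimate(1_900, 1_000_000)
print(f"Prevalence: {p:.2%} (95% interval: {lo:.2%} to {hi:.2%})")
# -> roughly 0.19% (about 0.18% to 0.20%), i.e. 18 to 20 of every 10,000 views
```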
Some types of violations occur very infrequently on our services. The likelihood that people view content that violates these policies is very low, and we remove much of that content before people see it. As a result, we often do not find enough violating samples to precisely estimate prevalence. In these cases, we can estimate an upper limit of how often someone would see content that violates these policies. For example, if the upper limit for terrorist propaganda was 0.04%, that means that out of every 10,000 views on Facebook or Instagram in that time period, we estimate that no more than 4 of those views contained content that violated our terrorist propaganda policy.
It’s important to note that when the prevalence of a violation type is so low that we can only provide upper limits, this limit may change by a few hundredths of a percentage point between reporting periods. However, changes this small may not be statistically significant; in such cases, these small changes do not indicate an actual difference in the prevalence of this violating content on the service.
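For the rare-violation case, a standard statistical result (sometimes called the “rule of three”) illustrates how such an upper limit can be derived: if no violating views turn up among n sampled views, prevalence is below roughly 3/n with 95% confidence. The sketch below is a generic illustration of that idea, not Meta's actual estimator.

```python
# Illustrative one-sided upper bound for a rarely seen violation type.
# "Rule of three": with 0 violating views in n sampled views, a 95%
# upper confidence limit on prevalence is about 3/n. Generic sketch only.
import math

def upper_limit_zero_observed(sampled_views: int,
                              confidence: float = 0.95) -> float:
    """Upper bound on prevalence when no violating views were sampled."""
    return -math.log(1 - confidence) / sampled_views  # ~3/n at 95%

# Example: no terrorist propaganda found in 10,000 sampled views.
limit = upper_limit_zero_observed(10_000)
print(f"Prevalence is below {limit:.2%} with 95% confidence")  # ~0.03%
```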
We estimate how often content is seen rather than the amount of content posted because we want to determine how much that content affected people on Facebook or Instagram. A piece of violating content could be published once but seen 1,000 times, 1 million times or not at all. Measuring views of violating content rather than the amount of violating content published better reflects the impact on the community. A small prevalence number can still correspond to a large amount of impact on our services, due to the large number of overall views of content on our services.
We record a content view when a piece of content appears on a user’s screen. Specifically, a view happens when someone:
Views a post – even if there are multiple pieces of content in that post, the view is assigned to the post
Clicks to enlarge a photo or video player – the view is assigned to the photo or video
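The attribution rules above could be modeled roughly as follows; the event fields and function are hypothetical, purely to illustrate how a view is assigned.

```python
# Hypothetical sketch of view attribution per the rules above;
# the event shape and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ViewEvent:
    post_id: str
    clicked_media_id: str | None = None  # set when a photo/video is enlarged

def attribution_target(event: ViewEvent) -> str:
    # A click-to-enlarge view is assigned to that photo or video;
    # otherwise the view is assigned to the post as a whole.
    return event.clicked_media_id or event.post_id

print(attribution_target(ViewEvent("post123")))              # post123
print(attribution_target(ViewEvent("post123", "video456")))  # video456
```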
We estimate prevalence by sampling content views on Facebook or Instagram.
To do this, we manually review samples of views and the content shown in them. Then we label the samples as violating or not violating according to our policies. The teams who do this sampling review the entire post for violations, even if the sampled view didn’t expose all the content in the post.
Using the portion of these samples that were of violating content, we estimate the percentage of all views that were of violating content. Note that we do not sample from every part of Facebook or Instagram for every violation type.
For certain violation types, we use stratified sampling, which increases the sample rate when the context indicates a content view is more likely to contain a violation. For example, if violations were viewed more frequently in Groups than in Feed, we would sample views in Groups with a higher probability than we sample views in Feed. One reason we do this is to reduce the uncertainty due to sampling. We express this uncertainty by quoting a range of values, for example by saying 18 to 20 out of every 10,000 views were of content that violated our adult nudity and sexual activity standards. This range reflects a 95% confidence window: if we performed this measurement 100 times using different samples each time, we would expect the true number to lie within the range 95 out of 100 times.
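As an illustration of how stratified samples might be combined into one estimate with a confidence window, consider the sketch below. The strata, weights, and counts are made up, and the estimator shown (view-share-weighted per-stratum prevalence) is a textbook construction rather than Meta's exact method.

```python
# Hedged sketch of a stratified prevalence estimate. Strata, weights,
# and counts are placeholder values, not real data.
import math

def stratified_prevalence(strata: dict[str, tuple[float, int, int]],
                          z: float = 1.96) -> tuple[float, float, float]:
    """Each stratum maps name -> (share_of_all_views, sampled, violating).

    Returns (estimate, lower, upper): the view-share-weighted prevalence
    with a 95% normal-approximation confidence window.
    """
    estimate, variance = 0.0, 0.0
    for share, sampled, violating in strata.values():
        p = violating / sampled
        estimate += share * p
        variance += share**2 * p * (1 - p) / sampled
    margin = z * math.sqrt(variance)
    return estimate, max(0.0, estimate - margin), estimate + margin

# Example: sample Groups views at a higher rate than Feed views.
strata = {
    "feed":   (0.80, 200_000, 300),  # 80% of all views
    "groups": (0.20, 100_000, 400),  # 20% of views, sampled more heavily
}
est, lo, hi = stratified_prevalence(strata)
print(f"{est:.2%} (95% window: {lo:.2%} to {hi:.2%})")  # ~0.20% (0.18%-0.22%)
```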
For violation types that are viewed very infrequently, sampling requires a very large number of content samples to estimate a precise prevalence measure. In these cases, we can only estimate the upper limit — meaning, we are confident that the prevalence of violating views is below that limit, but we cannot precisely say how far below. Our confidence window for these upper limits is also 95%.
The people who apply labels to our samples sometimes make mistakes, including labeling violations as non-violating or vice versa. The relative rate of these mistakes could impact the prevalence measurement. For this reason, we may have two people review a sample to ensure accuracy in our labeling, and if there is ever a disagreement, we have a third person act as the tiebreaker.
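The double-review process can be summarized in a few lines; the function below is a hypothetical illustration of the agreement-or-tiebreak logic, not an internal tool.

```python
# Minimal sketch of the double-review-with-tiebreaker labeling described
# above; the function and labels are illustrative assumptions.
from typing import Callable

def resolve_label(first: bool, second: bool,
                  tiebreaker: Callable[[], bool]) -> bool:
    """Return the final violating (True) / non-violating (False) label.

    Two reviewers label a sample independently; a third reviewer is
    consulted only if the first two disagree.
    """
    if first == second:
        return first     # reviewers agree: label stands
    return tiebreaker()  # disagreement: third reviewer decides

# Example: the first two reviewers disagree, so the tiebreaker decides.
print(resolve_label(True, False, tiebreaker=lambda: True))  # True
```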
For areas such as violent and graphic content, where content can be covered with a warning screen marking it as disturbing, our prevalence calculation counts views of that content that occurred before the cover was added.
The current prevalence measurement covers surfaces that contribute to more than 90% of all views on Facebook and Instagram and does not include private conversations on Messenger or Instagram Direct.
To generate a representative measurement of global prevalence, we sample and label content across multiple languages on Facebook and Instagram. We are confident this approach provides a representative global estimate, and we are continually working to expand the metric's coverage.
Our overall content enforcement, through both reviewers and technology, extends to many more languages.
Prevalence for fake accounts on Facebook is an estimate of the percentage of monthly active Facebook accounts that were fake. Unlike prevalence for content violations, fake accounts prevalence assumes the impact on users is proportional to the number of active fake accounts on Facebook, even if people don’t ever see or experience these accounts.
To estimate the prevalence of fake accounts, we sample monthly active users and label them as fake or not. We define a monthly active user (MAU) as a registered Facebook user who logged in and visited Facebook through our website or a mobile device, or used our Messenger application (and is also a registered Facebook user), in the last 30 days as of the date of measurement.
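Conceptually, the fake-account estimate reduces to labeling a sample of monthly active accounts and taking the fraction labeled fake. The sketch below simulates that with made-up numbers; the 5% rate is a placeholder for illustration, not a reported figure.

```python
# Hypothetical sketch of the fake-account prevalence estimate: sample
# monthly active accounts, label each fake or not, take the fraction.
import random

def fake_account_prevalence(labels: list[bool]) -> float:
    """labels[i] is True if sampled monthly active user i was judged fake."""
    return sum(labels) / len(labels)

# Example with simulated labels for 50,000 sampled monthly active users.
random.seed(0)
labels = [random.random() < 0.05 for _ in range(50_000)]  # ~5% fake (made up)
print(f"Estimated fake-account prevalence: {fake_account_prevalence(labels):.1%}")
```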