Facebook removed 2.2 billion fake accounts from its platform during the first quarter of the year — nearly double the number it took action on in the prior quarter. The company says the increase is due to an uptick in automated attacks that create many accounts at once.
The Menlo Park, California-based company on Thursday released the latest iteration of its Community Standards Report, which is meant to help the public understand how it handles content moderation. It's one of a number of efforts Facebook has made to increase transparency and improve its public image after a cascade of scandals, including Cambridge Analytica, Russian interference in the 2016 election, and its platform's role in spreading misinformation. Its practices have come under increased scrutiny around the globe, including from politicians, some of whom have called for Facebook's breakup and questioned how it decides what is and isn't allowed on its platform.
It's a huge jump. In the fourth quarter of 2018, Facebook took down 1.2 billion fake accounts, and the quarter before that, roughly 750 million. During the first quarter of 2018, Facebook took down fewer than 600 million fake accounts.
The company emphasized that most of the fake accounts it’s addressing have been taken down within minutes of being created, and those accounts therefore aren’t included in the metrics it reports, such as monthly active users. It claims that it flags 99.8 percent of fake accounts on its own, before they’re reported.
Facebook said the number of accounts it took action on this quarter increased because of “automated attacks by bad actors who attempt to create large volumes of accounts at one time.” But it acknowledged that because so many automated accounts are being created, more are inevitably slipping past its detection.
On a call with reporters on Thursday, Guy Rosen, vice president of integrity at Facebook, said that in light of this wave of fake account attacks, the company is also sometimes blocking ranges of IP addresses to stop spammers from connecting to its systems altogether.
“The larger quantities of fake accounts are driven by spammers who are constantly trying to evade our systems,” Rosen said, though he noted that some of the accounts Facebook is taking down are also preexisting ones.
Facebook is well aware it has a fake accounts problem
Facebook estimates that 5 percent of its monthly active users are fake, though some have suggested the real number could be much higher.
Earlier this year, a report from a Facebook critic, Aaron Greenspan, claimed that half of the social media giant's users could be fake. The report argued that Facebook may not “ever have an accurate way to measure its fake account problem.” Facebook slammed the report; a spokeswoman told Business Insider at the time that it was “unequivocally wrong.”
Still, Facebook has admitted it’s hard to offer accurate statistics on fake accounts. Jake Nicas at the New York Times in January laid out the complications:
The Silicon Valley company defines fake accounts as profiles that are either designed to break its rules, for example by spammers or scammers impersonating others, or that are misclassified, such as someone setting up a Facebook profile instead of a Facebook page for a business.
Yet the number of Facebook accounts that fit those descriptions is less clear. While the company discloses its estimates of fake accounts, its figures have fluctuated and are confusing. Even Facebook admits its understanding of the numbers is tenuous.
Regardless of the specific data point, Facebook seems to know that its fake account issue is not a good thing. Along with the Community Standards Report on Thursday, it also released an explanation of how it measures fake accounts.
“When it comes to abusive fake accounts, our intent is simple: find and remove as many as we can while removing as few authentic accounts as possible,” Alex Schultz, vice president of Analytics at Facebook, wrote in a post outlining the company’s handling of fake accounts.
On the one hand, it's a good sign that Facebook is stepping up its policing of fake accounts and other bad actors on its platform. Yes, there was an uptick in fake account creation, but the company also caught a lot of those accounts.
But the fact that 2.2 billion fake accounts could be created and removed in a single three-month period is itself a sign of the scope of the problem. That's a big number, and it suggests there may be no way for the company to completely stamp this sort of activity out.
On the call with reporters on Thursday, the company emphasized that spammers and others who create fake accounts are often commercially motivated. But fake accounts can also propagate abusive behavior, spread fake news, and open the door to advertising fraud.
Facebook on Thursday also released a report from the Facebook Data Transparency Advisory Group, an independent group created last year to assess whether the metrics the company is putting out — including its fake account numbers — are accurate and meaningful. The authors of that report noted they were not able to speak directly with the engineers who maintain the day-to-day systems and therefore could not “evaluate the extent that Facebook’s daily operations deviate” from the process Facebook's higher-ups described to them. In other words, they had to take Facebook's word on what's happening.
The same is true of the fake account figures and, more broadly, of the community standards numbers the company releases. Facebook says it's doing better, and maybe it is, but we have to take its word for it.