Twitter’s efforts to suspend fake accounts have doubled since last year

Bots, your days of tweeting politically divisive nonsense might be numbered. The Washington Post reported Friday that in the last few months Twitter has aggressively suspended accounts in an effort to stem the spread of disinformation running rampant on its platform.

The Washington Post reports that Twitter suspended as many as 70 million accounts between May and June of this year, with no signs of slowing down in July. According to data obtained by the Post, the platform suspended 13 million accounts during a weeklong spike of bot-banning activity in mid-May.

Sources tell the Post that the uptick in suspensions is tied to the company’s efforts to address scrutiny from the congressional investigation into Russian disinformation on social platforms. The report adds that Twitter investigates bots and other fake accounts through an internal project known as “Operation Megaphone,” in which it buys suspicious accounts and then investigates their connections.

Twitter declined to provide additional information about The Washington Post report, but pointed us to a blog post from last week in which it disclosed other numbers related to its bot-hunting efforts. In May of 2018, Twitter identified more than 9.9 million suspicious accounts per week, triple the rate from late 2017.

Chart via Twitter

When Twitter identifies an account that it deems suspicious, it then “challenges” that account, giving legitimate Twitter users an opportunity to prove their sentience by confirming a phone number. When an account fails this test it gets the boot, while accounts that pass are reinstated.

As Twitter noted in its recent blog post, bots can make users look good by artificially inflating follower counts.

“As a result of these improvements, some people may notice their own account metrics change more regularly,” Twitter warned. The company noted that cracking down on fake accounts means that “malicious actors” won’t be able to promote their own content and accounts as easily by inflating their numbers. Kicking users off the platform, fake or not, is a risk for a company that regularly reports its monthly active users, though the hit to those numbers should only be temporary.

As the report notes, at least one insider expects Twitter’s Q2 active user numbers to dip, reflecting its shift in enforcement. Still, any short-term dip in user numbers is a nominal price for a platform that should be focused on healthy growth. Facebook faces a similar reckoning in the wake of the Russian bot scandal: the company anticipates that user engagement stats will dip as it moves to emphasize quality user experiences over juiced-up quarterly numbers. In both cases, it’s a worthy trade-off.

Suspicious likes lead to researcher lighting up a 22,000-strong botnet on Twitter

Botnets are fascinating to me. Who creates them? What are they for? And why doesn’t someone delete them? The answers are probably less interesting than I hope, but in the meantime I like to cheer when large populations of bots are exposed. That’s what Andy Patel of security outfit F-Secure did this week after having his curiosity piqued by a handful of strange likes on Twitter.

Curious about the origin of this little cluster of random likes, which he just happened to see roll in one after another, he noticed that the accounts in question all looked… pretty fake. Cute girl avatar, weird truncated bio (“Waiting you”; “You love it harshly”), and a shortened URL which, on inspection, led to “adult dating” sites.

So it was a couple bots designed to lure users to scammy sites. Simple enough. But after seeing that there were a few more of the same type of bot among the followers and likes of these accounts, Patel decided to go a little further down the rabbit hole.

He made a script to scan through the sketchy accounts and find ones with similarly suspicious traits. It did so for a couple days, and… behold!

This fabulous visualization shows the 22,000 accounts the script had scraped when Patel stopped it. Each of those little dots is an account, and they exhibit an interesting pattern. Here’s a close-up:

As you can see, they’re organized in a sort of hierarchical fashion, a hub-and-spoke design where they all follow one central node, which is itself connected to other central nodes.
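Patel hasn’t published his script, but the approach he describes — seed with a few known-bad accounts, walk the follow connections, and flag near-identical profiles — can be sketched roughly like this. Everything here is a hypothetical stand-in (the in-memory profile data, the `looks_suspicious` heuristics, the account handles); it is not Patel’s actual code, and a real crawler would pull profiles from Twitter’s API instead.

```python
from collections import deque

# Hypothetical profile data standing in for API responses. The "follows"
# edges mimic the hub-and-spoke shape described above: leaf bots follow
# a central hub node, which itself follows another hub.
PROFILES = {
    "seed1":  {"bio": "Waiting you", "url": "bit.ly/x1", "follows": ["hub1"]},
    "bot7":   {"bio": "You love it harshly", "url": "bit.ly/x3", "follows": ["hub1"]},
    "hub1":   {"bio": "Come in! 💚", "url": "bit.ly/x2", "follows": ["hub2"]},
    "hub2":   {"bio": "Waiting you 💚", "url": "bit.ly/x4", "follows": []},
    "normal": {"bio": "Dad, runner, engineer at Acme.", "url": "", "follows": ["hub1"]},
}

def looks_suspicious(profile):
    """Crude heuristics of the kind described above: a very short,
    broken-English bio paired with a shortened URL leading off-platform."""
    bio_odd = len(profile["bio"].split()) <= 4
    short_url = profile["url"].startswith(("bit.ly", "t.co", "goo.gl"))
    return bio_odd and short_url

def crawl(seeds):
    """Breadth-first walk over follow edges, collecting accounts that
    match the suspicious-profile heuristics. Only flagged accounts are
    expanded, so the walk stays inside the botnet."""
    seen, flagged = set(), []
    queue = deque(seeds)
    while queue:
        handle = queue.popleft()
        if handle in seen or handle not in PROFILES:
            continue
        seen.add(handle)
        profile = PROFILES[handle]
        if looks_suspicious(profile):
            flagged.append(handle)
            queue.extend(profile["follows"])
    return flagged

print(crawl(["seed1", "bot7", "normal"]))  # flags the bots and both hubs
```

Left running against live follower lists for a couple of days, a walk like this is how a few stray likes can snowball into a 22,000-node map.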

I picked a few at random to check and they all turned out to be exactly as expected. Racy profile pic, random retweets, a couple of strange original tweets, and the obligatory come-hither bio link (“Do you like it gently? Come in! 💚💚💚”). Warning, they’re NSFW.

Patel continued his analysis and found that far from being some botnet-come-lately, some of these accounts — and by some I mean thousands and thousands! — are years old. A handful are about to hit a decade!

The most likely explanation is a slowly growing botnet owned and operated by a single entity that, in aggregate, drives enough traffic to justify itself — yet doesn’t attract enough attention to get rolled up.

But on that account I’m troubled. Why is it that a single savvy security guy can uncover a giant botnet with, essentially, the work of an afternoon, but Twitter has failed to detect it for going on ten years? Considering how obvious bot spam like this is, and how easily a tool or script can be made that walks the connections and finds near-identical spurious accounts, one wonders how hard Twitter can actually be looking.

That said, I don’t want to be ungenerous. It’s a hard problem, and the company is also dealing with the thousands and thousands of new accounts (maybe millions) created every day. And technically bots aren’t against the terms of service, although at some point they probably tip over into nuisance territory. I suppose we should be happy the problem isn’t any worse than it is.