Twitter reveals it closes down 3.2m spam accounts a WEEK

Twitter has revealed the scale of its spam problem for the first time in briefings to U.S. congressional staff investigating online campaigns to influence the 2016 U.S. election.

‘On average, our automated systems catch more than 3.2 million suspicious accounts globally per week — more than double the amount we detected this time last year,’ the social network said.

It also outlined for the first time the measures it uses to deal with the issue, from automatically blocking suspicious attempts to log in to banning ‘bad actors’.


Catches more than 3.2 million suspicious accounts globally per week

Counteracts 130,000 accounts daily trying to influence trends

Catches 450,000 suspicious logins per day

Has suspended more than 117,000 malicious applications since June

Twitter said on Thursday it had suspended hundreds of Russian-linked accounts and would ramp up enforcement of its spam rules.

‘Every online platform has to deal with spam, and there is no silver bullet,’ Twitter said in its submission.

‘For example, the Internet Society estimated in October 2015 that up to 85 percent of all global email is spam – and that’s after decades of every email platform in the world tackling this challenge. 

‘Obviously email is very different from Tweets, but it’s important to understand the scale of what we are dealing with, and that this is a global issue for all platforms.’

It admits Russia and other post-Soviet states have been a primary source of what it calls ‘automated and spammy content’ on Twitter for many years. 

‘Content that violates our rules with respect to automated accounts and spam can have a highly negative effect on user experience, and we have long taken substantial action to stem that flow.

‘As patterns of malicious activity evolve, we’re adapting to meet them head-on.’  

In battling the automated promotion of trending topics, which get displayed to many users, the company said it counteracted 130,000 accounts daily.

Twitter said it would toughen restrictions on suspect spammers, for example by reducing the time that suspicious accounts stay visible during company investigations. 


Stopping logins:  We’ve built systems to identify suspicious attempts to log in to Twitter, including signs that a login may be automated or scripted. These techniques now help us catch about 450,000 suspicious logins per day, a 64% year-over-year increase in suspicious logins we’re able to detect.

Known ‘bad actors’: We’re investing in systems to stop bad content at its source if its point of origin corresponds with a known bad actor. We’re also improving how we detect and cluster accounts that were created by a single entity or a single suspicious source. We used these techniques to stop more than 5.7 million spammy follows from a single source just last week (9/21/2017).

Detecting non-human activity patterns: Using signals like the frequency and timing of Tweets and engagements, we’ve built models that can detect whether an activity on Twitter is likely automated.

Compromised account detection: To stop bad actors from exploiting otherwise healthy accounts to spread malicious content, we’re building systems to detect when login activity is inconsistent with a user’s typical behavior.

Checking suspicious content: Accounts and content detected by our systems are subject to a number of enforcement actions and limitations including: being placed in a ‘read only’ mode pending authentication, having the reach of Tweets limited based on suspicious origin or low quality content, the removal of associated content, and account suspension.

Third-party apps: Since June 2017, we’ve suspended more than 117,000 malicious applications for abusing our API, collectively responsible for more than 1.5 billion low-quality Tweets this year.

Preventing false positives: We typically give users caught by our spam detections an opportunity to verify that they’re legitimate before we suspend them from the platform.

Improving phone verification: We’ve improved our phone reputation system to identify suspicious carriers and numbers and prevent their repeated use to pass verification challenges.

Source: Twitter 
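As a rough illustration of the 'non-human activity patterns' item above, a timing-based check can look at the gaps between an account's posts: very frequent or unnaturally regular gaps suggest a script. The sketch below is a minimal heuristic; the function name, signals and thresholds are illustrative assumptions, not Twitter's actual system.

```python
from statistics import mean, pstdev

def likely_automated(timestamps, min_events=20, max_mean_gap=5.0, max_cv=0.1):
    """Flag an account as likely automated when it posts very frequently
    or at suspiciously regular, machine-like intervals.

    timestamps: posting times in seconds, sorted ascending.
    The thresholds here are illustrative guesses, not Twitter's values.
    """
    if len(timestamps) < min_events:
        return False  # too little activity to judge either way
    # Gaps between consecutive posts are the core timing signal.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg == 0:
        return True  # bursts of simultaneous posts
    # Coefficient of variation: near zero means metronomic, script-like timing.
    cv = pstdev(gaps) / avg
    return avg <= max_mean_gap or cv <= max_cv

# A script posting every two seconds, to the millisecond:
bot = [i * 2.0 for i in range(30)]
# A person posting at irregular intervals over a few hours:
human = [0, 45, 180, 195, 900, 1400, 1410, 2200, 2950, 3000,
         3600, 4000, 4100, 5500, 6000, 6300, 7200, 8000, 9500, 9999]
```

In practice, as Twitter's submission indicates, such timing features would be just one of many signals fed into larger models alongside engagement patterns, login behaviour and account origin.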



Although the company’s disclosures in briefings to U.S. congressional staff and a public blog post were its most detailed to date on the issue, Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, called the company’s statements ‘deeply disappointing.’

Warner, whose panel is investigating alleged Russian interference in the election, said Twitter officials had not answered many questions about the Russian use of the platform and that it was still subject to foreign manipulation.

Twitter has been criticized as being too lax in policing fake or abusive accounts.

Colin Crowell, Twitter’s vice president of public policy, was among company representatives who met behind closed doors with Senate Intelligence Committee aides on Thursday.

The company was also expected to brief the House of Representatives Intelligence Committee on Thursday, according to committee sources.

The intelligence committees on Wednesday asked executives from technology companies including Twitter, Facebook and Alphabet Inc's Google to testify at a public hearing on Nov. 1 about alleged Russian interference.

The San Francisco-based company said Russian media outlet RT, which is close to the Kremlin, had spent $274,100 on Twitter advertisements and promoted 1,823 tweets potentially aimed at the U.S. market.

Those ad buys alone topped the $100,000 that Facebook this month linked to a Russian propaganda operation during the 2016 election cycle, a revelation that prompted calls from some Democrats for new disclosure rules for online political ads.



