Twitter partners with researchers to develop algorithms that can help kill off echo chambers

Twitter is finally taking stock of the quality of conversation happening on its platform. 

The social media giant is partnering with a group of researchers from academic institutions around the world to assess the severity of Twitter’s ‘echo chamber’ problem and study whether or not the site helps to reduce discrimination. 

The move is a part of Twitter’s ongoing effort to curb harassment and toxic behaviors among users. 

 

Twitter is partnering with academic researchers to assess the severity of Twitter’s ‘echo chamber’ problem and study whether or not the site helps to reduce discrimination

In March, Twitter put out a call for proposals from outside experts so that it could gauge the health of the Twittersphere by measuring abuse, spam and manipulation.

‘After months of reviewing fantastic proposals from experts around the world, we’ve selected two partners that will help us measure our work to serve the public conversation,’ Twitter’s Safety unit wrote in a tweet. 

‘…We were overwhelmed by the thoughtful and smart ideas you shared, and are looking for ways we can partner with others in different ways.’

The firm said it created a review committee made up of people from across Twitter to choose from the more than 230 proposals submitted to the company.  

Twitter ultimately decided to form two groups of academic researchers that will focus on separate areas. 

Research collected by the academic groups will be used to create a metrics system that can evaluate the health of public conversations on the site.

‘Partnering with outside experts who can provide thoughtful analysis, external perspective and rigorous review is the best way to measure our work and stay accountable to those who use Twitter,’ the firm wrote in a blog post. 

The first team comprises researchers from the Netherlands’ Leiden University and Delft University of Technology, Syracuse University and Bocconi University in Italy. 

It’s charged with looking at ‘echo chambers and uncivil discourse,’ specifically creating a metric that can evaluate how communities form around political discourse on Twitter and the challenges those discussions can create within such groups. 

‘In the context of growing political polarization, the spread of misinformation, and increases in incivility and intolerance, it is clear that if we are going to effectively evaluate and address some of the most difficult challenges arising on social media, academic researchers and tech companies will need to work together much more closely,’ Dr. Rebekah Tromble, assistant professor of political science at Leiden University and the group’s leader, said in a statement. 

Leiden University’s past findings have indicated that echo chambers, which form when people in a conversation share the same views, can increase ‘hostility and promote resentment towards those not having the same conversation.’ 


Twitter CEO Jack Dorsey (pictured) announced in March that the company was looking for ways to better measure the ‘health of public conversation’ on the site

One set of metrics will assess how much people encounter and engage with diverse viewpoints on the site. 
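Twitter has not said how that metric will be computed. One common starting point for this kind of measurement is the Shannon entropy of the viewpoints a user engages with: a score of zero means everyone they interact with shares one view, and the score rises as exposure becomes more mixed. The sketch below is purely illustrative; the viewpoint labels and the viewpoint_diversity function are hypothetical, not the researchers’ methodology.

```python
from collections import Counter
from math import log2

def viewpoint_diversity(viewpoint_labels):
    """Shannon entropy (in bits) of the viewpoint labels attached to
    the accounts a user engages with: 0.0 is a perfect echo chamber,
    and higher values mean exposure to a wider mix of views."""
    counts = Counter(viewpoint_labels)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Hypothetical engagement data: a user who mostly hears from one camp
print(viewpoint_diversity(["left", "left", "left", "right"]))    # ~0.81
print(viewpoint_diversity(["left", "left", "right", "centre"]))  # 1.5
```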

The group will also develop an algorithm that distinguishes between incivility, which Twitter noted can sometimes be constructive when it comes to politics, and intolerant discourse. 

In contrast to incivility, intolerant discourse might involve themes like hate speech, racism and xenophobia.  
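Neither Twitter nor the research group has published how that algorithm will work. As a rough illustration only, classifiers of this kind are often trained on hand-labelled examples; the sketch below uses scikit-learn with invented labels (‘civil’, ‘uncivil’, ‘intolerant’) and toy training tweets, none of which come from the actual project.

```python
# A minimal three-way discourse classifier, assuming a hand-labelled
# corpus exists; the labels and training tweets here are invented and
# are not the research group's actual data or method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "I see your point, but the data says otherwise",   # civil
    "That take is complete garbage and you know it",   # uncivil, but on-topic
    "People like you don't belong in this country",    # intolerant
]
labels = ["civil", "uncivil", "intolerant"]

# TF-IDF word/bigram features feeding a logistic regression; a real
# system would need thousands of labelled examples to be meaningful.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Your argument is complete nonsense"]))
```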

The second group, which includes researchers from the University of Oxford and the University of Amsterdam, will look at how people use Twitter and whether it does or does not decrease prejudice and discrimination. 

The idea is that by exposing people to different viewpoints – presumably via Twitter – they may be less likely to exhibit prejudice offline. 

HOW DOES TWITTER TAKE ACTION AGAINST OFFENDING ACCOUNTS?

Twitter can go after offending accounts at the tweet, direct message and account level, according to the company’s website.

The company said it will take action against accounts when their behavior violates ‘Twitter Rules’ or it may be in response to a ‘valid and properly scoped’ request from an ‘authorized entity in a given country’.

Twitter may instruct a user to delete a tweet that violates the site’s terms, hide a tweet as it ‘awaits its deletion’ or it may even make a tweet ‘less visible’ on the site by limiting how often it appears in search results, replies or timelines.

Twitter takes a variety of steps to prevent offending accounts from using the site. In the case of its latest purge, Twitter asked suspicious accounts to verify their phone number

The company can also stop a user from direct messaging another user by removing the conversation from the reporting user’s inbox.

If an account violates Twitter’s policies, the company can make certain media unavailable or place the account in read-only mode, removing its ability to post tweets, retweet or like content ‘until calmer heads prevail’.

Twitter may also ask a user to verify account ownership, typically by requesting they verify an email or phone number linked to the account.

In extreme scenarios, Twitter may permanently suspend an account from global view, or bar the violator from creating a new account. 

According to Twitter, the researchers will create a set of text classifiers for language ‘commonly associated with positive sentiment, cooperative emotionality and integrative complexity’ that can be adapted to communications on the platform.
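Twitter has not described these classifiers beyond that quote. One simple illustration of the idea is a lexicon scorer that counts how many of a tweet’s words appear on a list associated with each quality; the word lists and the score_tweet function below are invented for illustration and are not the classifiers Twitter refers to.

```python
# A lexicon-scoring sketch; the word lists are invented for
# illustration and are not the classifiers Twitter describes.
POSITIVE = {"thanks", "agree", "helpful", "appreciate", "great"}
COOPERATIVE = {"we", "together", "share", "listen", "understand"}

def score_tweet(text):
    """Return the fraction of a tweet's words found in each lexicon."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return {"positive": 0.0, "cooperative": 0.0}
    return {
        "positive": sum(w in POSITIVE for w in words) / len(words),
        "cooperative": sum(w in COOPERATIVE for w in words) / len(words),
    }

print(score_tweet("Thanks, I appreciate that we can listen to each other"))
# {'positive': 0.2, 'cooperative': 0.2}
```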

While the move should give Twitter a better idea of the kinds of conversations and behaviors that exist on its platform, some argue that it’s not a solution to some of its biggest problems around harassment and hate speech. 

Some victims of harassment say the company takes little or no action when they report offending accounts.  

The research also won’t immediately give Twitter any new tools to combat harassment, though it’s likely the company will use the data unearthed by researchers to improve its safety features.    


