Twitter rolls out stricter rules on abusive content

Twitter has begun enforcing stricter policies on violent and abusive content like hateful images or symbols, including those attached to user profiles.

The new guidelines, which were first announced one month ago, were put into place Monday.

Monitors at the company will weigh hateful imagery in the same way they do graphic violence and adult content.

This Wednesday, Oct. 26, 2016, photo shows a Twitter sign outside of the company’s headquarters in San Francisco. Twitter will be enforcing stricter policies on violent and abusive content such as hateful images or symbols, including those attached to user profiles, the company announced Monday.

WHAT IS HATEFUL IMAGERY? 

Twitter says it considers hateful imagery to be ‘logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin’.

If a user wants to post symbols or images that might be considered hateful, the post must be marked ‘sensitive media.’ 

Other users would then see a warning that would allow them to decide whether to view the post.

 


‘Hateful imagery will now be considered sensitive media under our media policy,’ the firm said.

‘We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin. 

‘If this type of content appears in header or profile images, we will now accept profile-level reports and require account owners to remove any violating media.’

Twitter is also prohibiting users from abusing or threatening others through their profiles or usernames.

While the new guidelines became official on Monday, the social media company is still developing internal monitoring tools and revamping the appeals process for banned or suspended accounts.

But the company will also begin accepting reports from users.

Users can report profiles or accounts that they consider to be in violation of Twitter policy.

Previously, users could only report individual posts they deemed offensive.

There is, however, no specific list of banned symbols or images.

Rather, the company will review complaints individually to consider the context of the post or profile, including cultural and political considerations.

Twitter is also broadening existing policies intended to reduce threatening content to include imagery that glorifies or celebrates violent acts.

That content will be removed and repeat offenders will be banned. 

THE ROSE MCGOWAN ROW 

Rose McGowan was suspended from Twitter earlier this month after several days of hitting out against the likes of Harvey Weinstein, his brother Bob Weinstein and Ben Affleck.

McGowan, who was allegedly assaulted by Weinstein in 1997, had been crusading on Twitter against the silence over Weinstein’s grim past. However, she took to Instagram to share the news that her account had been shut down.

Rose McGowan took to Instagram to say her Twitter account had been suspended Wednesday night


She wrote: ‘Twitter has suspended me. There are powerful forces at work. Be my voice.’

She also shared a screen grab of Twitter’s suspended account notification, which says: ‘We’ve determined that this account violated the Twitter Rules.’ The message also tells her to ‘Delete the Tweets that violate our rules.’

She tweeted telling Affleck to ‘F*** off,’ and called him a liar after he denounced Weinstein on Tuesday. She also tweeted ‘Now I am allowed to say rapist,’ without mentioning Weinstein specifically.

Twitter CEO Jack Dorsey promised more transparency and more aggressive policies last month after the firm came under fire when McGowan’s account was temporarily blocked for violating Twitter policies.

Twitter later said the action was taken because she posted someone’s phone number, a policy violation, although many users believed it was for her comments on Harvey Weinstein.

 

Beginning Monday, the company will ban accounts affiliated with ‘organizations that use or promote violence against civilians to further their causes.’

While more content is now banned, the company has given itself more leeway after it was criticized for strict rules that resulted in account suspensions.

There was a backlash against Twitter after it suspended the account of actress Rose McGowan, who had opened a public campaign over sexual harassment and abuse, specifically naming Hollywood mogul Harvey Weinstein.

Twitter eventually reinstated McGowan’s account and said that it had been suspended because of a tweet that violated its rules on privacy.

‘In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process,’ Twitter said in its blog post.

Twitter relies in large part on user reports to identify problematic accounts and content, but the company said it is developing ‘internal tools’ to bolster its ability to police content.

Twitter also seeks to improve communications with users about the decisions it makes. 

That includes telling those who have been suspended which rules they had violated.

TWITTER’S NEW RULES IN FULL 

Abusive behavior

We are making it clear that context — including if the behavior is targeted, if a report has been filed and by whom, and if the Tweet itself is newsworthy and in the legitimate public interest — is crucial when evaluating abusive behavior and determining appropriate enforcement actions. Expect more detail on how we review and enforce all of our policies and the range of enforcement options in a separate update on November 14.

Self-harm

We’ve always shared resources with people experiencing suicidal or self-harming thoughts when we learn of such behavior, and removed any Tweets that encourage or promote suicide games. Our updated policy on suicide and self-harm clarifies how strictly we enforce this policy, and how we communicate with anyone promoting or encouraging this type of behavior.

Spam and related behaviors

We are more clearly defining spam, how it behaves on Twitter, and sharing the factors we consider when reviewing accounts that may be spam. As part of this update, we’re also clarifying that when we review accounts that demonstrate spam-like behavior, we focus on behavioral signals, not the factual accuracy of the information they share.

Graphic violence and adult content

We’re providing more specific detail around the types of content we consider to be “graphic violence” or “adult content.” We’re also updating our media policy Help Center page so it includes examples that help set expectations around the types of content covered by this policy. Please note that the media policy will be updated again on November 22, to account for hateful imagery.

 


