Twitter will suspend repeat offenders posting abusive comments on Periscope live streams

As part of Twitter’s attempted crackdown on abusive behavior across its network, the company announced on Friday afternoon a new policy targeting those who repeatedly harass, threaten or otherwise make abusive comments during a Periscope broadcaster’s live stream. According to Twitter, it will begin to more aggressively enforce its Periscope Community Guidelines by reviewing and suspending the accounts of habitual offenders.

The plans were announced via a Periscope blog post and tweet that said everyone should be able to feel safe watching live video.

Currently, Periscope’s comment moderation policy involves group moderation.

That is, when one viewer reports a comment as “abuse” or “spam,” or selects “other reason,” Periscope’s software randomly selects a few other viewers to take a look and decide whether the comment is abuse, spam or looks okay. The randomness prevents a person (or group of people) from using the reporting feature to shut down conversations. Only if a majority of the randomly selected voters agree the comment is spam or abuse does the commenter get suspended.

However, this suspension only disables the offender’s ability to chat during the broadcast itself; it doesn’t prevent them from continuing to watch other live broadcasts and make further abusive remarks in the comments. Though they would risk another temporary ban by doing so, they could still disrupt the conversation and make the video creator, and their community, feel threatened or otherwise harassed.
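
To make the flow above concrete, here is a minimal sketch of majority-vote group moderation. Everything in it (the jury size, the function names, how votes are collected) is an illustrative assumption, not Periscope’s actual implementation:

```python
import random

JURY_SIZE = 3  # assumed jury size; Periscope hasn't published the real number

def review_reported_comment(comment, viewers, reporter, get_vote):
    """Sample a random jury of viewers (excluding the reporter) and
    suspend the commenter's chat only if a majority flags the comment."""
    pool = [v for v in viewers if v != reporter]  # the reporter can't stack the vote
    jury = random.sample(pool, min(JURY_SIZE, len(pool)))
    votes = [get_vote(juror, comment) for juror in jury]
    flagged = sum(v in ("abuse", "spam") for v in votes)
    return flagged > len(jury) / 2  # True -> temporary in-broadcast chat suspension

# Toy usage: a jury of three is drawn from four viewers, three of whom
# would flag the comment, so the majority vote suspends the commenter's chat.
votes_by_viewer = {"v1": "abuse", "v2": "ok", "v3": "abuse", "v4": "spam"}
suspended = review_reported_comment(
    comment="...",
    viewers=list(votes_by_viewer),
    reporter="v5",
    get_vote=lambda juror, _comment: votes_by_viewer.get(juror, "ok"),
)
print("chat suspended:", suspended)
```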

Twitter says that accounts that repeatedly receive these chat suspensions for violating its guidelines will soon be reviewed and may be suspended outright. This enhanced enforcement begins on August 10, and is one of several safety-focused changes Twitter is making across Periscope and Twitter.

To what extent those changes have been working is questionable. Twitter may have policies in place around online harassment and abuse, but its enforcement has been hit-or-miss. But ridding its platform of unwanted accounts, including spam (despite the impact on monthly active user numbers), is something the company must do for its long-term health. The fact that so much hate and abuse is seemingly tolerated or overlooked on Twitter has been an issue for some time, and the problem continues today. And it could be one of the factors in Twitter’s stagnant user growth. After all, who willingly signs up for harassment?

The company is at least attempting to address the problem, most recently by acquiring the anti-abuse technology provider Smyte. Smyte’s transition to Twitter didn’t go so well, but its technology could help Twitter address abuse at greater scale in the future.

Twitter replaces its gun emoji with a water gun

Twitter has now followed Apple’s lead in changing its pistol emoji to a harmless, bright green water gun. And in doing so, the company that has struggled to handle the abuse, hate speech and harassment taking place across its platform, has removed one of the means for online abusers to troll their victims.

The change is one of several rolling out now in Twitter’s emoji update, Twemoji 2.6, which affects Twitter users on the web, mobile web and TweetDeck.

[Image: Apple’s water gun emoji]

[Image: Twitter’s water gun emoji]

The decision to replace an emoji of a weapon with a child’s toy was seen as a political statement when Apple rolled out its own water gun emoji in iOS 10 in 2016. The company had also argued against the addition of a rifle emoji, ultimately leading to the Unicode Consortium’s decision to remove the gun from its list of new emoji candidates that same year.

With these moves, Apple was effectively telling people that a gun didn’t have a place in the pictorial language people commonly use when messaging on mobile devices.

These sorts of changes matter because of emoji’s ability to influence culture and its function as a globally understood form of communication. That’s why so much attention is given to emoji updates that go beyond the cosmetic: updates that offer better representations of human skin tones, show different types of family groupings or relationships, or give various professions (like a police officer or a scientist) both male and female versions, for example.

In the case of the water pistol, Apple set a certain standard that others in the industry have since followed.

Samsung also later replaced its gun with a water gun, as did WhatsApp. Google, meanwhile, didn’t follow Apple’s lead, saying that it believed in cross-platform communication. Many others, including Microsoft, left their realistic gun emoji alone, too.

“The main problem with the different appearances of the pistol emoji has been the potential for confusion when one platform displays this as an innocuous toy, and another shows the same emoji as a weapon. This was particularly an issue in 2016 when Apple changed the pistol emoji out of step with every single other vendor at the time,” notes Jeremy Burge, Emojipedia’s founder and Vice Chair on the Unicode Emoji Subcommittee. “Now we’re seeing multiple vendors all changing to a water pistol image all in a similar timeframe with Samsung and Twitter both changing their design this year,” he says.
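
The confusion Burge describes is possible because every vendor renders the same underlying character: the pistol emoji is a single Unicode code point, U+1F52B, and vendors change only the artwork drawn for it, not the character itself. A quick Python illustration:

```python
# The pistol emoji is one code point, U+1F52B (PISTOL). Whether it renders
# as a toy or a realistic weapon depends entirely on the platform's glyph.
pistol = "\U0001F52B"
print(pistol, hex(ord(pistol)))  # prints the emoji followed by 0x1f52b
```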

On Twitter, however, the updated gun emoji very much comes across as a message about where the company stands (or aims to stand) on abuse and violence. A gun – as opposed to a water gun – can be far more frightening when accompanied with a threat of violence in a tweet.

The change also arrives at a time when Twitter is trying, some would say unsuccessfully, to better manage the bad behavior that takes place on its platform. Most recently, it decided to publicize its rules around abuse to see if people would then choose to follow them. It has also updated its guidelines and policies for how it handles online abusers, to mixed results.

In addition, the change feels even more like a political message than the Apple emoji update did given its timing – in the wake of Parkland, the youth-led #NeverAgain movement, the YouTube shooting, and the increased focus on the NRA’s contributions to politicians.

Twitter confirmed the change in an email to TechCrunch, saying the decision was made for “consistency” with the other vendors that have changed.

However, Emojipedia shows that not all companies have updated to the water gun. Google, Microsoft, Facebook, Messenger, LG, HTC, EmojiOne, emojidex, and Mozilla still offer a realistic pistol, not the green toy.

But Apple and Samsung perhaps carry more weight when it comes to where things are headed.

“I know some users object to what they see as censorship on their emoji keyboard, but I can certainly see why companies today might want to ensure that they aren’t showing a weapon where iPhone and Samsung Galaxy users now have a toy gun,” Burge says. “It’s pretty much the opposite to the issue with Apple being out of step with other vendors in 2016.”


The gun was the most notable change in Twemoji 2.6, but Emojipedia notes that other emoji have been updated as well, including the kitchen knife (which now looks more like a vegetable slicer than a weapon for stabbing), the crystal ball, the alembic (a glass distillation vessel) and the magnifying glass, with more minor tweaks to the coat, the eyes and the emoji faces with horns.

Image credits: Emojipedia; Apple Water Gun: Apple

Twitter will publicize rules around abuse to test if behavior changes

As part of Twitter’s efforts to rid its platform of abuse and hate, the company is teaming up with researchers Susan Benesch, a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University, and J. Nathan Matias, a postdoctoral research associate at Princeton University, to study online abuse. Today, Twitter is starting to test the idea that showing people its rules will improve their behavior.

“In an experiment starting today, Twitter is publicizing its rules, to test whether this improves civility,” Benesch and Matias wrote on Medium. “We proposed this idea to Twitter and designed an experiment to evaluate it.”

The idea is that if people see the rules, their behavior on the platform will improve. The researchers point to evidence that when institutions clearly publish rules, people are more likely to follow them.

The researchers say the privacy of Twitter users will be protected: Twitter will provide them only anonymized, aggregated information.

“Since we will not receive identifying information on any individual person or Twitter account, we cannot and will not mention anyone or their Tweets in our publications,” the researchers wrote.
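
The shape of such a study, as described above, is a randomized experiment whose results are reported only in aggregate. Here is a minimal, purely hypothetical sketch of what that looks like; the bucketing scheme, the salt and the function names are my assumptions, not anything Twitter or the researchers have published:

```python
import hashlib
from collections import Counter

def bucket(user_id: str, salt: str = "rules-experiment") -> str:
    """Deterministically assign a user to the rules-shown group or to control."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "show_rules" if int(digest, 16) % 2 == 0 else "control"

def aggregate_outcomes(outcomes: dict) -> Counter:
    """Report only per-group counts, never individual accounts, mirroring
    the anonymized, aggregated data the researchers describe receiving."""
    counts = Counter()
    for user_id, violated_rules in outcomes.items():
        counts[(bucket(user_id), violated_rules)] += 1
    return counts

# Toy usage: per-user outcomes collapse into anonymous per-group counts.
print(aggregate_outcomes({"u1": False, "u2": True, "u3": False, "u4": False}))
```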

Last month, Twitter began soliciting proposals from the public to help the social network capture, measure and evaluate healthy interactions on the platform. This was part of Twitter’s commitment “to help increase the collective health, openness, and civility of public conversation,” Twitter CEO Jack Dorsey said in a tweet.

It’s not clear how widespread the test will be. I’ve reached out to Twitter and will update this story if I learn more, but it seems that the company won’t be releasing specifics.

In the meantime, holler at me (megan@techcrunch.com) if these rules show up for you.