At a Twitter all-hands meeting on March 22, an employee asked a blunt question: Twitter has largely eradicated Islamic State propaganda from its platform. Why can’t it do the same for white supremacist content?
An executive responded by explaining that Twitter follows the law, and a technical employee who works on machine learning and artificial intelligence issues went up to the mic to add some context. (As Motherboard has previously reported, algorithms are the next great hope for platforms trying to moderate the posts of their hundreds of millions, or billions, of users.)
With every sort of content filter, there is a tradeoff, he explained. When a platform aggressively enforces against ISIS content, for instance, it can flag innocent accounts as well, such as Arabic language broadcasters. Society, in general, accepts that the benefit of banning ISIS is worth inconveniencing some others, he said.
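The tradeoff he described is, in machine learning terms, the balance any automated classifier strikes between coverage and false positives: the more aggressively a model is tuned to catch violating content, the more benign accounts it sweeps up. The sketch below is a minimal, hypothetical illustration in Python (the scores, labels, and thresholds are invented and do not reflect Twitter’s actual systems), showing how lowering the enforcement threshold catches more extremist accounts at the cost of flagging more innocent ones.

```python
# Hypothetical illustration of the tradeoff the employee described: a model
# scores accounts for extremist content, and the enforcement threshold trades
# coverage (how many violating accounts get caught) against collateral damage
# (innocent accounts, e.g. Arabic-language broadcasters, flagged alongside them).
# All scores and labels here are invented for illustration.

# (classifier score, account actually violates policy)
accounts = [
    (0.97, True), (0.91, True), (0.88, False),   # 0.88: an innocent broadcaster
    (0.84, True), (0.79, False), (0.72, True),
    (0.55, False), (0.41, True), (0.22, False),
]

def enforcement_stats(threshold):
    """Return (share of violating accounts caught, innocent accounts flagged)."""
    flagged = [label for score, label in accounts if score >= threshold]
    total_violating = sum(1 for _, label in accounts if label)
    caught = sum(flagged)                        # True counts as 1
    false_positives = len(flagged) - caught
    return caught / total_violating, false_positives

for threshold in (0.9, 0.7, 0.4):
    recall, fps = enforcement_stats(threshold)
    print(f"threshold {threshold}: catches {recall:.0%} of violating accounts, "
          f"flags {fps} innocent account(s)")
```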
In separate discussions verified by Motherboard, that employee said Twitter hasn’t taken the same aggressive approach to white supremacist content because the accounts caught as collateral damage can, in some instances, belong to Republican politicians.
The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Society, he said, would not accept the banning of politicians as a tradeoff for flagging all of the white supremacist propaganda.
There is no indication that this position is an official policy of Twitter, and the company told Motherboard that this “is not [an] accurate characterization of our policies or enforcement—on any level.” But the Twitter employee’s comments highlight a sometimes overlooked debate around content moderation on tech platforms: are moderation issues purely technical and algorithmic, or do societal norms play a greater role than some may acknowledge?
Though Twitter has rules against “abuse and hateful conduct,” civil rights experts, government organizations, and Twitter users say the platform hasn’t done enough to curb white supremacy and neo-Nazis, and its competitor Facebook recently explicitly banned white nationalism. On Wednesday, during a UK parliamentary committee hearing on social media content moderation, MP Yvette Cooper asked Twitter why it hasn’t yet banned former KKK leader David Duke, and “Jack, ban the Nazis” has become a common reply to many of Twitter CEO Jack Dorsey’s tweets. During a recent TED interview in which the public could tweet in questions, the feed was overtaken by people asking Dorsey why the platform hadn’t banned Nazis. Dorsey said “we have policies around violent extremist groups,” but did not give a straightforward answer. He did not respond to two requests for comment sent via Twitter DM.
Do you work at Twitter? We would love to hear from you. Using a non-work computer or phone, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.
Twitter has not publicly explained why it has been able to eradicate ISIS so successfully while it continues to struggle with white nationalism. As a company, Twitter won’t say that it can’t treat white supremacy the same way it treated ISIS. But external experts Motherboard spoke to said that the measures taken against ISIS were so extreme that, if applied to white supremacy, there would certainly be backlash, because algorithms would inevitably flag content that has been tweeted by prominent Republicans—or, at the very least, their supporters. It’s no surprise, then, that employees at the company have come to the same conclusion.
This is because the proactive measures taken against ISIS are more akin to the removal of spam or child porn than to the more nuanced way social media platforms traditionally police content, which can involve using algorithms to surface posts but ultimately relies on humans to review and remove them (or leave them up). A Twitter spokesperson told Motherboard that 91 percent of the company’s terrorism-related suspensions during a six-month period in 2018 came from internal, automated tools.
The argument that external experts made to Motherboard aligns with what the Twitter employee said: Society as a whole uncontroversially and unequivocally demanded that Twitter take action against ISIS in the wake of beheading videos spreading far and wide on the platform. The automated approach Twitter took to eradicating ISIS was successful: “I haven’t seen a legit ISIS supporter on Twitter who lasts longer than 15 seconds for two-and-a-half years,” Amarnath Amarasingam, an extremism researcher at the Institute for Strategic Dialogue, told Motherboard in a phone call. Society and politicians were willing to accept that some accounts were mistakenly suspended in the process (for example, accounts belonging to the hacktivist group Anonymous that were reporting ISIS accounts to Twitter as part of an operation called #OpISIS were themselves banned).
That same eradicate-everything approach, applied to white supremacy, is much more controversial.
“Most people can agree a beheading video or some kind of ISIS content should be proactively removed, but when we try to talk about the alt-right or white nationalism, we get into dangerous territory, where we’re talking about [Iowa Rep.] Steve King or maybe even some of Trump’s tweets, so it becomes hard for social media companies to say ‘this content should be removed,’” Amarasingam said.
In March, King promoted an open white nationalist on Twitter for the third time, quote-tweeting Faith Goldy, a Canadian white nationalist. Earlier this month, Facebook banned Goldy under its new policy against white nationalism; Goldy has 122,000 followers on Twitter and had not been banned at the time of writing. Last year, Twitter banned Republican politician and white nationalist Paul Nehlen for a racist tweet he sent about actress Meghan Markle, but prior to the ban, Nehlen gained a wide following on the platform while openly tweeting white nationalist content about, for example, the “Jewish media.”
Any move that could be perceived as anti-Republican is likely to stir backlash against the company, which has been criticized by President Trump and other prominent Republicans for having an “anti-conservative bias.” On Tuesday, the same day Trump met with Dorsey, the president tweeted that Twitter “[doesn’t] treat me well as a Republican. Very discriminatory,” adding, “No wonder Congress wants to get involved—and they should.”
JM Berger, author of Extremism and a number of reports on ISIS and far-right extremists on Twitter, told Motherboard that in his own research, he has found that “a very large number of white nationalists identify themselves as avid Trump supporters.”
“Cracking down on white nationalists will therefore involve removing a lot of people who identify to a greater or lesser extent as Trump supporters, and some people in Trump circles and pro-Trump media will certainly seize on this to complain they are being persecuted,” Berger said. “There’s going to be controversy here that we didn’t see with ISIS, because there are more white nationalists than there are ISIS supporters, and white nationalists are closer to the levers of political power in the US and Europe than ISIS ever was.”
Twitter currently has no good way of suspending specific white supremacists without human intervention, and so it continues to use human moderators to evaluate tweets. In an email, a company spokesperson told Motherboard that “different content and behaviors require different approaches.”
“For terrorist-related content we’ve [had] a lot of success with proprietary technology but for other types of content that violate our policies—which can often [be] much more contextual—we see the best benefits by using technology and human review in tandem,” the company said.
Twitter hasn’t done a particularly good job of removing white supremacist content and has shown a reluctance to take action of any kind against “world leaders,” even when their tweets violate its rules. But Berger agrees with Twitter that, on a practical level, the problem the company faces with white supremacy is fundamentally different from the one it faced with ISIS.
“With ISIS, the group’s obsessive branding, tight social networks and small numbers made it easier to avoid collateral damage when the companies cracked down (although there was some),” he said. “White nationalists, in contrast, have inconsistent branding, diffuse social networks and a large body of sympathetic people in the population, so the risk of collateral damage might be perceived as being higher, but it really depends on where the company draws its lines around content.”
But just because eradicating white supremacy on Twitter is a hard problem doesn’t mean the company should get a pass. After Facebook explicitly banned white supremacy and white nationalism, Motherboard asked YouTube and Twitter whether they would make similar changes. Neither company would commit to an explicit ban, and both referred Motherboard to their existing rules.
“Twitter has a responsibility to stomp out all voices of hate on its platform,” Brandi Collins-Dexter, senior campaign director at the activist group Color Of Change, told Motherboard in a statement. “Instead, the company is giving a free ride to conservative politicians whose dangerous rhetoric enables the growth of the white supremacist movement into the mainstream and the rise of hate, online and off.”