Facebook is still trying to track down versions of the Christchurch shooter’s live stream, months after the tech giant began frantically removing the gory content.
In the first 24 hours after the video of the March mosque attack went viral, Facebook removed 1.5 million related posts. But a significant number, at least initially, evaded the Silicon Valley giant’s defenses, and users have continued trying to share the content. Some versions even remain live.
The company announced Wednesday that it took down at least 3 million more posts by the end of September.
“When people are sharing billions of things a day, even a tiny fraction is too much,” Facebook CEO Mark Zuckerberg said in a conference call on content moderation.
Facebook previously reported that at least 300,000 of those posts initially made it onto the platform, where they could be copied and tweaked to avoid censors. By Sept. 30, Facebook had removed a total of 4.5 million pieces of content related to the massacre.
The initial lapse appears to have helped fuel yet more attempted uploads — and potentially hundreds of thousands more successful shares — over time. The company is still trying to track them down.
“People have continued to try to spread it and share it, and that’s why our systems continue trying to enforce this,” VP of Integrity Guy Rosen said Wednesday. He added that 97% of Facebook’s takedowns come before users report them.
Facebook boasts a specialized 350-person counterterrorism unit that had focused in previous years on jihadi terrorist groups like the Islamic State group and al-Qaeda. But the white supremacist attack in Christchurch, in which 51 people were killed at two mosques and which helped inspire a similar attack in El Paso, caught it flat-footed.
The team spent the past eight months working furiously to shore up global partnerships between tech companies to share digital fingerprints of terrorist content — also known as “hashes” — and respond to attacks alongside law enforcement. It’s also retrained machine “classifiers,” tools that attempt to evaluate posts for violent propaganda the same way human reviewers would.
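To illustrate what hash-matching means in the simplest terms, here is a minimal Python sketch. It is not Facebook’s actual system: the hash list, function names, and exact-match logic are all illustrative assumptions, and the industry databases in question actually rely on perceptual hashes rather than the cryptographic one used here.

```python
import hashlib

# Hypothetical shared hash list (real entries would come from an industry
# hash-sharing database; this value is a placeholder, not a real flagged hash).
KNOWN_TERROR_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of_file(path: str) -> str:
    """Compute an exact-match digest of an uploaded file, chunk by chunk."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_terror_content(path: str) -> bool:
    """Block re-uploads that byte-for-byte match previously flagged content.

    A cryptographic hash only catches identical copies; a single re-encode,
    crop, or watermark defeats it. That is why platforms favor perceptual
    hashes, which score visual similarity instead of exact equality, and why
    tweaked copies of the video kept slipping through.
    """
    return sha256_of_file(path) in KNOWN_TERROR_HASHES
```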
“If you’re going to train a real classifier, that takes time,” Brian Fishman, head of Facebook’s counterterrorism and dangerous organizations team, told VICE News in August. “And that takes human decisions. And those human decisions have an error rate. A classifier that works well against ISIS may not work so well against a bunch of neo-Nazis.”
The new batch of data released Wednesday offers a glimpse of the scale of the decentralized hate movement that Facebook is up against. It came as part of the company’s semi-annual report on content that violates its rules.
Earlier this year, Facebook began introducing more AI to its enforcement around hate speech as well, which allows machines to not just automatically identify offensive posts, but also remove some. The new AI contributed to 7 million takedowns of such content between July and September, up from 4.4 million in the previous three months.
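As a rough illustration of how automated removal can sit alongside human review, here is a hypothetical routing rule. The thresholds and names are assumptions for the sake of the sketch, not values Facebook has published.

```python
# Hypothetical confidence thresholds; the real values are not public.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

def route_post(hate_speech_score: float) -> str:
    """Route a post based on a classifier's confidence score in [0, 1].

    High-confidence hits are removed automatically; borderline cases go to
    human reviewers. Either path counts as a proactive takedown, since it
    happens before any user files a report.
    """
    if hate_speech_score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # machine acts on its own
    if hate_speech_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # flagged by the machine, decided by a person
    return "allow"
```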
“While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen,” Rosen added in a blog post.
Cover image: A photo tribute for Christchurch mosque shooting victim Tariq Omar lies amid mounds of flowers across the road from the Al Noor mosque in Christchurch, New Zealand Tuesday, March 19, 2019. (AP Photo/Mark Baker)