A newly released cache of documents provides unprecedented insight into how Roblox, one of the world’s most popular gaming platforms, played by half of all children in the U.S., moderates content. The documents show everything from how executives at the company talk about and moderate mass shooting simulators to how big of a “bulge” in player-designed clothes is a problem for the platform. They also give insight into how predators target children on the platform and how Roblox attempts to fight child grooming.
The documents, which were stolen by a hacker and published online, include internal discussions about how to respond to some of the highest-profile content moderation challenges the platform has faced, such as how and when to remove user-made games that simulate mass shootings and how to respond to media inquiries about strip clubs in the game. They also include a spreadsheet with a scoring system for specific banned terms that can be used in chat; the scoring system appears intended to determine the severity of a specific phrase and is broken into categories that include “bullying,” “sexting,” “underage,” “racist,” “grooming,” “subversive,” “self_harm,” “religion,” and more.
“Roblox is a new Society. What kind of Society do we want to be?” one slide included in the hacked documents reads.
Do you work at Roblox? We’d love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.
In one document, a “roadmap vision” for the Safety Group includes introducing new categories such as “grooming.” Another document lists some of the words that Roblox says relate to “grooming” and appears to assign them a score. “Wish you were younger” is scored as a 3. “Can I get a {{media}} of you” is a 5. “Dont tell your {{family_member}}” is also a 5. There are also scores of 6 for “hump kids,” “{{child}} slave,” “{{child}} predator,” and other horrifying terms that clearly relate to child sexual abuse.
Roblox also sorts phrases into other sections such as “sexting,” “bullying,” “racist,” and “religion.”
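The leaked spreadsheet doesn’t explain the matching engine behind these scores, but the {{placeholder}} syntax suggests templates that expand into many concrete phrases. A minimal sketch of how such a severity scorer might work, using the templates and scores quoted above; the placeholder word lists and everything else here are illustrative assumptions, not Roblox’s system:

```python
# Minimal sketch of a template-based phrase severity scorer.
# The placeholder vocabularies below are illustrative assumptions;
# only the templates and scores come from the leaked spreadsheet.
import itertools
import re

PLACEHOLDERS = {
    "media": ["photo", "pic", "video"],
    "family_member": ["mom", "dad", "parents"],
}

# (template, category, severity) rows, modeled on the spreadsheet.
RULES = [
    ("wish you were younger", "grooming", 3),
    ("can i get a {{media}} of you", "grooming", 5),
    ("dont tell your {{family_member}}", "grooming", 5),
]

def expand(template: str) -> list[str]:
    """Expand every {{placeholder}} into its concrete word variants."""
    slots = re.findall(r"\{\{(\w+)\}\}", template)
    if not slots:
        return [template]
    variants = []
    for combo in itertools.product(*(PLACEHOLDERS[s] for s in slots)):
        phrase = template
        for slot, word in zip(slots, combo):
            phrase = phrase.replace("{{" + slot + "}}", word, 1)
        variants.append(phrase)
    return variants

def score_message(message: str):
    """Return the highest-severity (category, score) match, if any."""
    text = message.lower()
    best = None
    for template, category, severity in RULES:
        for phrase in expand(template):
            if phrase in text and (best is None or severity > best[1]):
                best = (category, severity)
    return best

print(score_message("hey can i get a pic of you?"))  # ('grooming', 5)
```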
One issue identified in the documents is that although Roblox’s systems scan 100 percent of submitted abuse reports, only around 10 percent of those are actionable. “It’s hard to identify which 10% are the right ones to review,” the document reads. One improvement listed for 2020 was getting better at identifying actionable abuse reports.
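The document doesn’t say how the company planned to get better at this, but a common approach to the problem is to estimate each report’s likelihood of being actionable and send the highest-scoring reports to human reviewers first. A hypothetical sketch; the features, weights, and ordering here are assumptions, not Roblox’s system:

```python
# Hypothetical sketch of abuse-report triage: every report is scanned,
# but human review time goes first to reports most likely actionable.
from dataclasses import dataclass

@dataclass
class Report:
    report_id: int
    reporter_history_precision: float  # fraction of this reporter's past reports upheld
    matched_filter_severity: int       # 0 if no banned-term match in the reported chat

def actionability(report: Report) -> float:
    """Crude prior: weight reporter track record plus any filter hit."""
    return (0.5 * report.reporter_history_precision
            + 0.1 * min(report.matched_filter_severity, 5))

reports = [
    Report(1, reporter_history_precision=0.9, matched_filter_severity=5),
    Report(2, reporter_history_precision=0.05, matched_filter_severity=0),
]
# Review queue ordered by estimated actionability, highest first.
for r in sorted(reports, key=actionability, reverse=True):
    print(r.report_id, round(actionability(r), 2))
```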
A May 2021 presentation from Roblox’s Safety Group lists the objective to “Reduce the user’s chance of seeing unsafe (Tier 4) content.” According to that presentation, the “Key Result” of this objective is to “Decrease exposure rate of bad games from 1.2% to 0.01% of [weekly active users].”
In September, Rolling Stone reported on strip clubs inside Roblox in which children performed “lap dances” with their in-game avatars in exchange for Robux, Roblox’s virtual currency. This month, local North Carolina outlet WECT 6 reported on a child who was targeted by a predator over Roblox.
The documents also include public relations briefs and strategy documents in which Roblox employees and executives discuss how to respond to reporters asking the company about controversial content on its platform. Some of these exchanges reveal how Roblox talks about whether and how it can remove this content.
One document, for example, shows that in May 2021 a Norwegian journalist emailed Roblox to ask about “extremists exploiting a gaming platform popular with kids,” and referenced two Roblox games: “Utøya 22. July,” named for the location and date of the 2011 mass shooting in Norway, and “Christchurch Simulator,” named after the 2019 Christchurch mosque shootings in New Zealand.
“Game is still up, but there’s nothing offensive in it other than the name,” Jeff Maher, Roblox’s head of product, safety, said in a May 11, 2021 comment about the Utøya game. “It’s just an island with some camp sites in it, and I can’t find any guns (or anything to add to the avatar at all).”
Maher also linked to the game, which has since been deleted. According to Roblox’s site it was visited 858 times.
“Should we consider a text filter block on the word ‘Utøya’?” Maher wrote.
“I think that would be an overreach,” Joel Silk, Roblox’s senior director of moderation, replied. “It would be like blocking Columbine or New York because a horrible thing happened there. We can look at blocking combinations of the name of the island with the date or with Oslo (a bombing occurred there the same day) or when paired with ‘shooting’ or ‘massacre’ or something.”
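Silk is describing a conditional block list: the island’s name alone is allowed, but it gets flagged when paired with related terms. A minimal sketch of that logic; the paired terms come from his comment, while the rule structure itself is an illustrative assumption, not Roblox’s filter code:

```python
# Minimal sketch of the combination block Silk describes: "utøya"
# alone passes, but is flagged when paired with related terms.
TRIGGER = "utøya"
PAIRED_TERMS = {"shooting", "massacre", "oslo", "22. july", "july 22"}

def should_block(text: str) -> bool:
    t = text.lower()
    return TRIGGER in t and any(term in t for term in PAIRED_TERMS)

print(should_block("visiting utøya this summer"))  # False
print(should_block("utøya massacre simulator"))    # True
```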
“Based on the description, can we remove the game? Not going to be easy to defend this if it’s still up…” Teresa Brewer, Roblox’s vice president of corporate communications, said in a May 14 comment.
“This game was removed yesterday. [A Roblox employee] resurrected our redshift tool and added Christchurch related terms. We should see fewer (if any) related games/assets now,” Silk replied on May 15.
According to the document, Maher wasn’t able to find the specific Christchurch Simulator the Norwegian reporter asked about, but he linked to a now-deleted Roblox game, which according to the URL was named “Goku vs. Christchurch.” Goku is one of the main characters in the anime Dragon Ball Z.
“This is an extremely crude / blocky version, where without the name you wouldn’t know it’s really simulating anything,” Maher said. “But you can dress up like ‘Goku’ from DBZ and they give you an AK, you go into a building that is only a single blank room with a few blocky npcs running around. I would remove for copyright violations at the least though.”
“Do we have an IP relationship with DBZ? If not, I’d avoid removal for IP if we didn’t get a takedown,” Eliza Jacobs, Roblox’s director of content policy, replied. “[@Joel Silk] for confirmation there. ‘Christchurch Simulator’ without more content to tie it to the massacre isn’t enough to remove under real life tragedy, from my reading of the policy.”
Roblox told Motherboard in a statement this week that “We do not tolerate racism, discriminatory speech, or content related to tragic events, which is why we have a stringent safety system which we rigorously enforce.”
The company also revealed it has a team of 2,600 people who work on moderation on the platform. “We review every single image, audio file, and video before the content is uploaded with human moderation assisted by automated machine learning technology and have a team of over 2600 dedicated to monitoring for safety 24/7 to detect and swiftly act on any inappropriate content or behavior,” the statement added. “We also work closely with the Middlebury Institute’s Center on Terrorism, Extremism, and Counterterrorism (CTEC), a world leading counterterrorism institute and other governments and NGOs across the world to help us prevent extremist activity on our platform.” (There were 811 people in Roblox’s entire moderation team at the end of Q3 2020, according to one of the documents.)
One of the documents, a quarterly review for Roblox’s Safety Group in 2020, provides a bird’s-eye view of the moderation challenges the company faces across its platform: namely, inappropriate games, assets, and chat messages. The document then lays out in more granular detail some of the moderation efforts Roblox used or planned at the time. Those include using Amazon’s image recognition software, Rekognition, as part of Roblox’s image moderation queue.
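The documents name Rekognition but don’t describe how it is wired into the queue. For reference, a minimal sketch of what calling Rekognition’s image moderation API looks like; the bucket, key, confidence threshold, and routing logic here are placeholder assumptions:

```python
# Minimal sketch of screening a queued image with Amazon Rekognition's
# moderation API. Bucket, key, and threshold are placeholder assumptions;
# the documents don't describe Roblox's actual integration.
import boto3

rekognition = boto3.client("rekognition")

def needs_human_review(bucket: str, key: str, min_confidence: float = 80.0) -> bool:
    """Route the image to a human moderator if Rekognition flags anything."""
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    for label in response["ModerationLabels"]:
        print(label["Name"], label["Confidence"])
    return bool(response["ModerationLabels"])

# Example: check an uploaded asset before it goes live.
# needs_human_review("uploaded-assets", "avatars/shirt_12345.png")
```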
The company also developed its own proof-of-concept for identifying particular objects in images; in this case, Pikachu.
Potentially controversial tactics are also included in the slides, such as “expand the playbook” by looking “for MAC address to ban.” MAC addresses are device-specific hardware identifiers, meaning Roblox could ban particular devices from accessing its services, rather than just specific accounts. A Roblox spokesperson told Motherboard in an email that “We don’t disclose that level of information” when asked if the company blocks users on a MAC address level.
“Safety and civility drive everything we do, and we’re continually working hard to ensure people of all ages have a positive and safe experience on our platform. We enforce our Community Standards using a multi-layered approach to safety that combines the best in artificial intelligence, machine learning and human moderation. We are not providing more detail about the confidential information illegally obtained by criminals who tried to unsuccessfully extort the company,” the spokesperson added. (Motherboard believes much of the information included in the released files is in the public interest and warrants coverage).
Some of the content that is reviewed by Roblox is “tech-assisted” with human moderation, such as the asset reviews and abuse reports. Other content is moderated purely by tech, including “bad games detection” and “IP detection,” such as games Roblox doesn’t want on its platform or those that violate someone else’s intellectual property.
The quarterly review also highlighted issues Roblox predicted it would have to deal with in the future, such as the introduction of more detailed player figures. Traditionally, Roblox’s character models have been low-detail, blocky avatars that can wear only one item of clothing per layer at a time. With the next iteration of characters, Roblox’s plan was to allow them to wear layered clothing, or “LC.” This creates its own issues: “How do we determine what obscenity looks like for layered clothing (when UGC [user generated content])?” one slide reads. Roblox launched layered clothing in April.
“How big of a ‘bulge’ is a problem for us?” it then asks, referring to bulges that users may create with their own clothing creations.
Roblox repeatedly declined to make someone available for an interview on the company’s content moderation strategy. A Roblox spokesperson instead pointed Motherboard to multiple Roblox blog posts, one of which said that “We are always strengthening our systems to make Roblox even safer.”