It’s a normal day on the internet. Twitter is tweeting and faving, Facebook is liking and sharing, and Reddit is humming with all things good and bad online. It was a day like this in 2010 that cartoonist Ben Garrison had the gall to create something political about the Fed and post it online.
The illustration, called “The March of Tyranny,” was later uploaded to 4chan and edited so that the illuminati pyramid was replaced with the “Happy Merchant,” an anti-Semitic meme featuring an unflattering depiction of a Jewish man.
From there things snowballed. Online users turned many of Garrison’s other cartoons into offensive illustrations, and now he has been called the “most trolled cartoonist in the world.” Because his name was left on all of the altered illustrations, when you google him the search results are a barrage of unsavory images and stories about him as a meme.
What’s a guy like Garrison supposed to do when he’s being trolled to death and has no connections to larger organizations that can jump in and set the record straight? What if he has no way to get himself on TV to tell his side of the story—that he didn’t make the offensive cartoons being circulated in his name? Where does one go from there?
The Online Hate Prevention Institute (OHPI) is one of the only organizations of its kind, working within Australia to prevent and remove threatening hate speech online. To better understand the best course of action when you’re being trolled, I spoke over email with Dr. Andre Oboler, CEO of OHPI, as well as Ben Garrison. The two worked together in an effort to reduce the trolling Garrison suffered.
STEP 1: Identify What Kind of Trolling Is Happening
Hate Speech: Hate speech targets people based on prejudice against groups: women, people of color, Jewish people, LGBTQI people—generally anyone who is not a white cis straight man.
Cyberbullying: “Cyberbullying can occur between people who know each other. It’s often spoken of in relation to the schoolyard, but it is also an element of workplace bullying, domestic violence, and other forms of abuse,” Dr. Oboler tells me. Like IRL bullying, cyberbullying is often instigated by someone who personally knows the victim. Tyler Clementi‘s death in 2010 is partially attributed to cyberbullying prompted after his roommate spied on him during an intimate act via webcam and tweeted about it.
Trolling: “Trolling” is harassment marked by anonymity, so the victim has no way to strike back directly against her harassers. Trolling is not defined by the number of harassers, unlike griefing (more on that below). OHPI sees a lot of “RIP trolling” instances, where users post hateful comments on the social media accounts of deceased people. “The reason they had the nerve to [troll],” Garrison says, “was because they were anonymous.”
Griefing: “Griefing is a distributed form of trolling, much like a distributed denial of service attack (DDoS). Some information about the victim is still needed, but it would usually be very basic,” Dr. Oboler tells me. The term “griefing” comes from online gaming, where a group of players would create a bunch of free accounts and then gang up on a single player to ruin their experience. In the context of wider online harassment, griefing is when a lot of people each put in a very minor effort, such as sending one hateful tweet, and the cumulative effect can be devastating.
Harassment categories are not mutually exclusive. In the case of Ben Garrison, trolls used griefing and anti-Semitic hate speech to tarnish Garrison’s reputation. In a similar situation, trolls impersonated feminist writer Caitlin Roper and promoted transphobic hate speech under her name. Women online are disproportionately targeted for the grave sin of being women (see: Gamergate).
STEP 2: Report the Harassment to the Social Network on Which It Occurred
Social media networks like Twitter and Facebook typically maintain “community standards” that disallow threatening or hateful content. Facebook’s standards specifically prohibit hate speech and bullying, as well as a host of other distasteful things, like attacking private figures or using the site to facilitate criminal activity.
Twitter in particular used to be notoriously bad for responding to hate speech. Former CEO Dick Costolo admitted this in a memo leaked to The Verge, saying “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years.”
However, the site recently adjusted its community standards. Whereas in the past a threat had to be “direct and specific,” now any indirect threat is likely to break community standards under the wider umbrella of “promoting violence against others.” Longtime troll Chuck Johnson solicited donations on Twitter to “take out” activist DeRay McKesson and was subsequently permanently suspended. Before the removal of the “direct and specific” clause, Johnson’s tweet may not have been considered a violation of community standards, and it’s possible no action would’ve been taken. “There’s also recently been a 400 percent increase in the number of staff at Twitter responding to users’ reports,” Dr. Oboler notes.
OHPI runs Fight Against Hate, which is not a reporting system, but an accountability system. After a user reports hateful content, they can log that report to Fight Against Hate, and the team at OHPI (including staff, law enforcement, NGOs, and researchers) can track how long it takes for the platform to respond. If they don’t respond, the team at FAH is already clued in and can take action, lifting some of the emotional labor from the victims.
STEP 2.5: Watch for Phoenix Pages
“The problem of phoenix pages is a significant one on Facebook,” Dr. Oboler admits. A “phoenix page” appears when a hate speech fan page or harassing account is removed from Facebook and its creator immediately makes a new one. “[A phoenix page] needs a swift response or we end up playing whack-a-mole as the time to create a page and have it gain a little traction is less than the time it usually takes to get a page removed.”
In Garrison’s case, it was exceedingly hard to get the various pages removed. “Facebook refused to take many of the pages down after repeated complaints. They would eventually take them down, but they made it very difficult to do it.”
Keep an eye out for recreated pages or accounts after your harasser is removed, and report them immediately so your initial efforts are not in vain. But know that it can be a serious time-suck. “I have spent countless hours filling out forms and sending email to image hosting services, Facebook and Twitter to get libel removed,” Garrison tells me. “This is time I could have spent doing something productive.”
STEP 3: Report the Harassment to the Police
“As a general rule, if people are feeling threatened or the online content is harassing them to the point that it negatively impacts on their ability to go about their daily life, that would be the time to contact police,” Dr. Oboler notes.
But what if the police can’t do anything? If the harasser is in one country and you’re in another, too bad. Most likely, nothing can be done. When writer Amanda Hess reported her harassment to the police, the officer’s response was “What is Twitter?” In order for police to do anything about online harassment, they must first understand it. Being forced to continually log and report instances of law-breaking harassment is one way to help them learn.
Twitter has made it simpler to report cases of harassment to law enforcement. On the federal level, Representative Katherine Clark and other members of the House of Representatives have formally urged the Department of Justice to pursue online harassment cases seriously and more often. Local law enforcement may not have the technical knowledge or resources to meaningfully pursue online harassment, as detailed by Jezebel at the end of last year. Consider skipping your local police and reporting harassment to the FBI via the Internet Crime Complaint Center. The IC3 purports to review each case and refer it to local or federal law enforcement as necessary.
Skip the lawyer route. Garrison tried it, and while he had a case, his lawyer eventually refused to pursue it due to the expense, difficulty, and unlikely return of online libel cases. “[U]sually the perpetrators are broke and living in their parents’ basements,” he tells me. “Few lawyers will take on such cases and I suspect part of the reason is they want to avoid getting targeted by the Internet Hate Machine themselves.”
STEP 4: Attempt to Salvage Your Ruined Reputation
The best way to do this is to openly connect yourself with the troll’s impersonations, and say without shame, “This is what’s happening to me.” Caitlin Roper keeps impersonating tweets pinned to the top of her account. Ben Garrison continues to create art and denounces trollish impersonations whenever necessary. “I put a disclaimer on my site and I decided to ignore the trolls. This is what everyone says—just ignore them and they’ll go away.”
But when you’re being griefed, that’s often not enough. “In my case, after two years, they did not go away,” Garrison says.
STEP 5: Discuss Your Experiences with Harassment and, If You Wish, Attempt to Understand the Mentality of People Who Hate You for No Reason
In places like anonymous imageboard 4chan, there’s a Wild West, anything-goes mentality. Some users see the internet as a magical space to say whatever they want for fun with no consequences, and trolling is how sadists indulge their Machiavellian desires. Garrison knows this firsthand. “To anonymous trolls,” he tells me, “There is no such thing as libel, defamation, or harassment. It’s all ‘free speech’ and they consider that to be ‘absolute’ free speech. Anyone who questions that is vilified and ridiculed.”
Garrison held a Q&A session on 8chan, a 4chan clone, to see if he could reason with trolls on their turf. “What I found out was a great many of the young people using that chat board really did like me. Many were respectful to an old man.” If you can reach a harasser in some small way, maybe that smidge of humanity will grant you some greater forgiveness toward the larger group. But results are not guaranteed. Garrison tells me he’d find it hard to forgive Christopher “moot” Poole, former administrator of 4chan, for his refusal to grant Garrison’s DMCA takedown requests, thus supporting the trolls’ efforts. Trolls must be stopped from the top down, not the bottom up.
The best way to stop online harassment is to change the environment of the online communities themselves. Riot Games, creator of League of Legends, effectively curbed in-game harassment by holding tribunals where players were “tried” by the community and suffered consequences for using hate speech. Only when trolls felt real consequences to their own gameplay did their behavior change.
The tide is turning, but historically there’s been an overall sentiment that what happens on the internet isn’t “real.” In today’s interconnected world, it’s almost impossible to simply log off. Protect yourself first, but don’t be unwilling to publish the worst of your harassment, and reach out to your online community for support. Slowly, corporations like Facebook and Twitter are taking notice and beginning to adjust the way they manage their communities.
Follow Kate on Twitter.