By now, you already know how the cycle goes. Someone posts something offensive, usually to a tiny circle of like-minded people. No one outside that circle notices at the time, because the original poster is fairly unimportant. Once entrusted with a new job, affiliation or role, though, the poster’s outed, and promptly cancelled in the eyes of whoever cares. Post, forget, unearth, repeat.
“Problematic” social media opinions crop up so much now that they feel like their own section of news. Before May’s EU Parliament elections, new party Change UK (or whatever their name is now) had to ask their MEP candidates Joseph Russo and Ali Sadjady to stand down. It turned out both had previously tweeted one-liners that could easily be interpreted as sexist, anti-black and xenophobic. Russo and Sadjady’s comments are still easy enough to find, too. Their posts, like countless others, got past potential employers.
Change UK’s then-interim leader, Heidi Allen, blames the party’s failure to spot Russo and Sadjady’s comments on an “unprofessional” vetting company. She told The House magazine: “We were very clear that, as MPs flipping through a list of names, going ‘Oh, I like the sound of that one, I’m not sure about them, I’ll do a quick Google’, that was never going to be good enough, so we paid for professionals to do this for us. So there’s a conversation to be had there because, clearly, they failed on that.” But who are the companies that vet what we share online, acting as de facto arbiters of taste and decency? How do they give clients what they’re after? And how does that affect workers and their rights?
For starters, companies face strict legal constraints when looking to scroll through a potential new employee’s social media accounts. I hear as much from Peter Church, a lawyer at Linklaters whose work concerns the hazy space where technology interfaces with the law. He says: “There are relatively strict rules in place under European data protection law and GDPR, and in terms of the way companies should approach you about information. But they can be difficult to police in practice.”
As a potential employee, you’re likely to only walk smack into the vetting process when you’re looking like a sure thing. “Screening should happen relatively late in the process,” Church continues. “If you get 100 CVs in for a position, you wouldn’t want all of them to be subject to social media verification. I think it would only be the candidates who are close to being appointed who would be screened.”
David D’Souza, director of HR professional association CIPD, explains that screening companies “essentially say: ‘from this person’s online profile and existence are there any triggers or risks the employer might need to be aware of?’” But are the triggers and risks universal? People can say atrocious things on social media. Should they be treated the same way as the person who tweets about smoking weed at home?
That depends, when you look at what various background and identity service businesses have to say. Sterling Talent Solutions’s website notes, without clarifying what “poor ethics” are, that “someone who shares inappropriate pictures implying poor ethics is not a competitive candidate. Likewise, sharing content with grammatical or spelling errors is a red flag.” CBS Screening says its “social media screening requires an applicant’s permission and checks only job-relevant content”. Meanwhile, outsourcer Capita’s Security Watchdog’s site will scan a candidate’s social media profiles for elements as broad as “showcasing undesirable characteristics” that are “likely to have an impact on client relations” or “linked to lobby or advocate/activist groups”.
Bianca Lager is president at Social Intelligence (or SI), a US Federal Trade Commission-regulated company that offers “social media screening for intelligent hiring”. On behalf of the client, and with the candidate’s permission, SI will use their name, email, employment history and education history to find any of their public social media accounts. Once they’ve confidently identified those accounts, Lager tells me, the screening searches for four categories of content: “1. Racism/Intolerance 2. Potentially Violent 3. Potentially Illegal 4. Sexually Explicit.”
AI does only some of the work here, as Lager points out that offensiveness is easier for humans to detect: “There’s slang, different languages, cultures and subcultures and most importantly, intent and context around what people say online. It’s super complex.” More simply, though, SI is only looking for serious stuff, concerning “blatant, obvious and egregious content that could be damaging or risky for an organisation and their current employees.”
SI do a better job, in her view, than the average employer, who “most certainly should not be doing it themselves. At best, they waste a bunch of company time with most content being none of their business and at worst, it’s building an unfair and possibly illegal bias.” Dedicated screening companies act, Lager says, as “an unbiased go-between”, offering what she deems impartiality. She does admit, however, that it’s up to the client which categories they want their candidates screened for. A company employing security guards, for instance, might want to look for potential violence in their candidates’ social media history, but not racism or intolerance. This could create an ethical gap.
As Lager says, social media-vetting companies are responsible for ensuring “candidates are not rejected based on protected things like religion, political beliefs, disability, sexual orientation”. But are all companies as adept at batting away clients’ less illegal, but no less ridiculous requests, effectively excluding candidates who haven’t actually done anything wrong? “I’ve literally had a client tell me they have not hired people in the past for silly things like too many selfies or food pictures,” Lager says, “So it’s astonishing what personal choices are made when someone scrolls through social media.”
D’Souza has a wider definition of what could be classed as silly, too: “Just because your Twitter feed is full of four-letter words, that doesn’t necessarily mean you would conduct yourself that way in a meeting,” he says. “This merging of private and public personas is really problematic. To what extent is it right for organisations to assess people’s fitness for work based on either things that people have expected to be public, or didn’t anticipate would be used by an employer that way?” He suspects that the relevance and intent of social media posts will create some interesting case law in the next few years.
At hiring level, companies have already started to grapple with that quivering line between private and public. “We’d mainly check for if anything they’d said was illegal or their overall tone of voice and opinions, and then mentions of competitors to make sure they’d not been working for rivals,” says an advertising executive who’s worked in influencer marketing. They spoke to VICE on condition of anonymity so as not to disrupt their current work. “I don’t think I’d see any difference between something a public figure had said in an interview with a newspaper, and something they’d published on a public social profile.”
The solution, Church advises, is for candidates to screen themselves before an employer gets there: “You have to be realistic about the prospect of your public social media history being looked at, which means being responsible by looking at your social media footprint and looking at privacy settings. Particularly private posts should be restricted to friends; if there are things that you’re not happy about from your past, maybe you need to try and remove some of that content.” He also suggests using a social media pseudonym.
Not everyone will get this opportunity, however, since D’Souza reckons current employment law favours recruiters rather than employees. So, at least until two years after the recruitment process, “candidates are unlikely to know that they have been excluded from a recruitment process because of their online profile”. Unless you’re a high-profile bubble tea influencer, Love Island hopeful or MP-in-training with an agent, you probably shouldn’t rely on the ethics of a social media screener to see you through. Otherwise, that familiar cycle looms large.