
Airbnb Has Secret ‘Trustworthy Scores’ and This Privacy Group Is Demanding to See Them


Algorithms now shape nearly everything. Facebook’s news and advertising algorithms determine your daily reality, bombarding you with skewed ads and sketchy news that only reinforce your existing worldview. Flawed test-score algorithms determine your career prospects. YouTube’s recommendation algorithm calculates whether you’ll be receptive to white supremacist drivel.

Every day, across a litany of platforms, secretive algorithms are calculating not only what content you’ll see and what ads you’ll respond to, but also your overall trustworthiness as an obedient consumer. Such systems routinely operate with no transparency whatsoever, yet can impact everything from your career trajectory to the quality of customer service you receive.

Airbnb is no exception, and has been under fire recently for a secretive algorithm it uses to determine whether you’re “trustworthy.” Privacy and digital activism groups are now crying foul, demanding the FTC do more to rein in the practice and protect consumers.

In a new complaint filed with the FTC, the Electronic Privacy Information Center (EPIC) argues that Airbnb’s opaque algorithm is “unfair and deceptive” under the FTC Act and violates the Fair Credit Reporting Act. The group also argues that the company’s practices run afoul of the fairness and transparency principles and standards laid out by the international community.

Airbnb’s website is notably vague about how this risk assessment is calculated and how much data the company collects and stores about its users.

“Every Airbnb reservation is scored for risk before it’s confirmed,” the company tells customers. “We use predictive analytics and machine learning to instantly evaluate hundreds of signals that help us flag and investigate suspicious activity before it happens.”

In its complaint, EPIC notes that Airbnb’s secret scoring algorithm draws on an ocean of personal data collected from your behavior all over the internet, ranging from the comments you make to Airbnb hosts on the platform to unrelated comments you may have made on social media or in blog posts.

A recent New York Times report explored how these accumulated profiles can run more than 400 pages, have the potential to impact every aspect of your daily life, and yet remain entirely opaque to the users and communities affected by these automated calculations.

The complaint references a patent developed by and issued to a company Airbnb acquired, describing technology that can track whether a customer “created a false or misleading online profile, provided false or misleading information to the service provider, is involved with drugs or alcohol, is involved with hate websites or organizations, is involved in sex work, perpetrated a crime, is involved in civil litigation, is a known fraudster or scammer, is involved in pornography, has authored online content with negative language, or has interests that indicate negative personality or behavior traits.”

“The patent referenced in this complaint was developed by and issued to a company Airbnb acquired, before we acquired the company, and the methods listed in this patent are not, nor have they ever been, employed by Airbnb,” Airbnb spokesperson Charlie Urbancic said. “Airbnb is committed to earning our community’s trust by striving to keep them safe offline and online – that includes protecting users’ personal information and using it responsibly.”


According to the patent, the system then assigns generalized categories to each consumer, using terms and phrases such as “badness, antisocial tendencies, goodness, conscientiousness, openness, extraversion, agreeableness, neuroticism, narcissism, Machiavellianism, or psychopathy.”

In its complaint, EPIC argues that not only is there no transparency behind these life-impacting determinations, but they also tend to oversimplify complex human behavior using “inherently subjective” criteria. The group also argues that such predictive systems have historically been biased, resulting in harsher outcomes for minority and disadvantaged communities.

“Algorithms used by judges in sentencing to predict future criminal activity have been found to be unreliable and were twice as likely to mislabel black defendants as future criminals than white defendants,” EPIC said. “Policing data is the result of choices that undermine the credibility of the data.”

EPIC recently filed a similar complaint with the FTC over the facial analysis and AI-driven scoring systems the firm HireVue uses to screen job applicants. The group has also petitioned the FTC to conduct a rulemaking tackling “the use of artificial intelligence in commerce.”

The FTC taking any action here remains unlikely.

Industries like telecom routinely tap-dance around the “unfair and deceptive” language in the FTC Act. Studies have also shown the agency is rife with revolving-door conflicts of interest, underfunded, and understaffed: the FTC devotes just 8 percent as many staff to privacy as its UK counterpart, despite the UK having one-fifth as many consumers to protect.

Update: This post has been updated with comment from Airbnb.