Dr. Nathalie Maréchal is a senior research fellow at Ranking Digital Rights, where she studies the impact of information and communication technology companies’ business practices on human rights.
In his testimony to the US Senate last spring, Facebook CEO Mark Zuckerberg emphasized that his company doesn’t sell user data, as if to reassure policymakers and the public. But the reality—that Facebook, Google, Twitter, and other social media companies sell access to our attention—is just as concerning. Actual user information may not change hands, but the advertising business model drives company decision making in ways that are ultimately toxic to society. As sociologist Zeynep Tufekci put it in her 2017 TED talk, “we’re building a dystopia just to make people click on ads.”
Social media companies are advertising companies. This has never been a secret, of course. Google pioneered the targeted advertising business model in the late 90s, and Sheryl Sandberg brought the practice to Facebook in 2008 when she joined the company as chief operating officer. The cash was flowing in, and companies around Silicon Valley and beyond adopted the same basic strategy: first, grow the user base as quickly as possible without worrying about revenue; second, collect as much data as possible about the users; third, monetize that information by performing big data analytics in order to show users advertising that is narrowly tailored to their demographics and revealed interests; fourth, profit.
For a while this seemed like a win-win: people around the world could watch cat videos, see pictures of each other’s babies in Halloween costumes, connect with family, friends, and colleagues across the globe, and more. In return, companies would show them ads that were actually relevant to them. Contextual advertising had supported the print and broadcast media for decades, so this was the logical next step. What could possibly go wrong?
Plenty, as it turns out. From today’s vantage point, the Arab Spring stands out as an iconic cautionary tale of techno-utopianism gone wrong. Sure, would-be revolutionaries, reformers, and human rights defenders were among the first to master the power of what we used to call “Web 2.0,” but authorities caught on quickly and used the new tools to crack down on threats to their grip on power. Similarly, the 2008 Obama campaign was the first to harness online advertising to deliver the right message to the right voters with near-surgical precision, but 10 years later the same techniques are propelling right-wing authoritarians to power in the US, the Philippines, and Brazil, and being used to fan the flames of xenophobia, racial hatred, and even genocide around the world—perhaps most devastatingly in Myanmar. How on Earth did we get here?
It all started with targeted advertising, and with the new economic arrangement that Harvard Business School scholar Shoshana Zuboff calls “surveillance capitalism.” Just as 20th century firms like General Motors and Ford invented mass production and managerial capitalism, Google and Facebook figured out how to commodify “reality” itself by tracking what people (and not just their users) do online (and increasingly offline too), making predictions about what they might do in the future, devising ways to influence behavior from shopping to voting, and selling that power to whoever is willing to pay.
“As societies, we have never agreed that our private experience is available for extraction as behavioral data, much of which is then fed into supply chains for the manufacture of behavioral predictions,” Zuboff told me in a phone interview.
Zuboff’s new book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontiers of Power, examines surveillance capitalism’s 20-year history, from the birth of online advertising in the late 1990s to today’s era of democratic regression. “Surveillance capitalism was invented in the context of targeted advertising,” she said. “This was the material, historical context in which it originated in a moment of financial emergency during the dotcom bust. Google was a fledgling firm, and its investors were threatening to bail—in spite of its superior search product. That’s when Google turned to previously discarded and ignored data logs and repurposed them as a ‘behavioral surplus.’ Instead of being used for product improvement, these behavioral data were directed toward an entirely new goal: predicting user behavior.”
Zuboff predicts that if left unchecked, surveillance capitalism will be just as destructive as previous variants of capitalism have been, though in wholly new ways. “We are talking about the unilateral claiming of private human experience as raw material for product development and market exchange,” she said. “Industrial capitalism claimed nature for itself, and only now are we faced with the consequences of that undertaking. In this new phase of capitalism’s development, it’s the raw material of human nature that drives a new market dynamic, in which predictions of our behavior are told and then sold. The economic imperatives of this new capitalism produce extreme asymmetries of knowledge and the power that accrues from that knowledge. This is unprecedented territory with profound consequences for 21st century society.”
Online tracking is ubiquitous, Tim Libert, of Carnegie Mellon University’s CyLab Security and Privacy Institute, told me in an email. “Across the top one million websites you will be tracked on 91 percent of sites. I’ve been doing these types of scans for years and the results are always the same: you can’t browse the web without being tracked, period. Companies track you when you visit medical websites, pornography websites, websites for lawyers, websites for politicians, newspaper websites, and the same goes for apps. There are very few things that people don’t seek out or share using a computer and nearly all of that is tracked, all the time, by the billion dollar giants you see in the news as well as hundreds of companies you’ve never heard of.”
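To make that claim concrete, here is a minimal sketch, in Python, of the kind of scan Libert describes: fetch a page and list the third-party hosts its HTML pulls resources from, any of which can log the visit. It is illustrative only; the URL is a placeholder, the domain heuristic is crude, and research crawlers like Libert’s webXray execute JavaScript and match requests against curated tracker lists.

```python
import re
import urllib.request
from urllib.parse import urlparse

def third_party_hosts(url: str) -> set:
    """List third-party hosts referenced in a page's static HTML."""
    first_party = urlparse(url).hostname or ""
    # Treat the last two labels as the site's registrable domain (a rough
    # heuristic; real tools use the Public Suffix List).
    base = ".".join(first_party.split(".")[-2:])
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    # Every absolute URL loaded via src= or href= in the raw markup.
    refs = re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html)
    hosts = {urlparse(ref).hostname for ref in refs}
    return {h for h in hosts if h and not h.endswith(base)}

if __name__ == "__main__":
    # Placeholder URL: point it at any news or medical site instead.
    for host in sorted(third_party_hosts("https://example.com/")):
        print(host)
```

Pointed at a typical news homepage, even a crude scan like this surfaces dozens of third-party hosts, most belonging to ad-tech firms the reader has never heard of.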
Companies collect this information in order to monetize it: while we may not see much value in the individual data points about our behavior, in the aggregate they add up to big money.
When you visit a webpage that hosts advertising, the ad network—Google’s DoubleClick, for example—chooses which of several candidate ads to show you. Because Google knows so much about you, your friends, your tastes, your habits, and your purchasing power, DoubleClick can estimate which ad you are most likely to click on, and ultimately which product you’re most likely to purchase.
“That’s all ‘AI’ and ‘machine learning’ is for these companies: getting better at guessing what ads to show you,” Libert said. “Every tiny bit of data increases the chances they show the ‘right’ ad so they never stop, they never sleep, and they never respect your privacy—every single day everybody at Google collectively works to one purpose: getting the percentage of ‘right’ ads shown slightly higher.”
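Stripped of scale, the decision Libert describes fits in a few lines. The sketch below, with invented ads, weights, and a stand-in for the learned model, ranks candidate ads by bid times predicted click probability; everything surveillance capitalism collects exists to sharpen that one probability estimate.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid: float  # dollars the advertiser pays per click

def predicted_ctr(ad: Ad, profile: dict) -> float:
    """Stand-in for the learned model: a real system estimates
    P(click | user, ad) from tracking data on billions of people."""
    return 0.05 if ad.advertiser in profile.get("interests", set()) else 0.005

def choose_ad(ads: list, profile: dict) -> Ad:
    # Rank by expected revenue, bid * P(click): every extra data point
    # that sharpens predicted_ctr() makes this choice more profitable.
    return max(ads, key=lambda ad: ad.bid * predicted_ctr(ad, profile))

ads = [Ad("shoe_store", bid=2.00), Ad("concert_venue", bid=0.80)]
profile = {"interests": {"concert_venue"}}
print(choose_ad(ads, profile).advertiser)  # concert_venue wins
```

The toy numbers make the point: the concert venue wins despite bidding less, because the model thinks you are ten times likelier to click. At scale, that same arithmetic is what makes each additional behavioral data point worth collecting.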
It’s almost impossible to know exactly how this “Digital Influence Machine,” as a recent Data & Society report put it, works in any particular instance, just as it’s impossible to know how much climate change contributed to a specific hurricane. But in the aggregate, the connection is clear and undeniable.
“With advertising technology, political communication has changed dramatically,” Joan Donovan, who researches media manipulation and platform accountability at Data & Society, told me in an email. “If we were looking for a digital revolution, it happened in advertising online. Much of online advertising is completely unregulated and unmanaged. Political strategists understood this new opportunity and capitalized on it by serving up digital disinformation using ads as the delivery system. No politician can campaign ethically under these conditions because they are just outgunned by those who are willing to use these systems to do damage.”
In copying the traditional media’s advertising-based business model, internet companies neglected to adopt a crucial rule: the separation between business operations and editorial decisions. Though the rule was far from universally respected, 20th century journalism’s code of ethics prohibited financial considerations from influencing news coverage. This ethical screen allowed American capitalism to subsidize the press, which in turn helped keep the government and companies honest: checks and balances at work.
This all fell apart with targeted advertising, which stole journalism’s lunch money and used it to sustain platforms whose driving logic isn’t to educate, to inform, or to hold the powerful to account, but to keep people “engaged.” This logic of “engagement” is motivated by the twin needs to collect more data and show more ads, and manifests itself in algorithms that value popularity over quality. In less than 20 years, Silicon Valley has replaced editorial judgment with mathematical measures of popularity, destabilized the democratic systems of checks and balances by hobbling the Fourth Estate, and hammered nail after nail into the coffin of privacy.
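What replacing editorial judgment with popularity metrics looks like in practice is not mysterious. A hypothetical feed-ranking function, sketched below with invented signals and weights, makes the problem plain: the objective rewards engagement and contains no term for accuracy or public value.

```python
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    clicks: int    # engagement signals harvested from tracking
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Popularity is the only objective; note there is no term here for
    # accuracy, originality, or civic value.
    return post.clicks + 5 * post.shares + 10 * post.comments

def rank_feed(posts: list) -> list:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Careful investigative report", clicks=900, shares=40, comments=30),
    Post("Outrage-bait rumor", clicks=2000, shares=400, comments=600),
])
print([p.headline for p in feed])  # the rumor wins
```

Swap in learned engagement predictions and billions of impressions, and this is, in caricature, the ranking logic described above.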
The targeted advertising business model incentivizes companies to amass as much information as they can: what their users do on the platforms themselves and what they do elsewhere on the internet. Google and Facebook even keep tabs on what people who don’t have an account with them do online, and use that information in their data modeling and to serve ads around the Web. More recently, they’ve started buying data about consumers’ credit card purchases and other offline activity. These digital dossiers contain revealing information about each of us individually and all of us collectively. It’s no surprise that governments are eager to get their hands on that data. For example, the 2013 Edward Snowden revelations contained details about several NSA programs, including PRISM, that relied on obtaining data from major technology companies, both with and without executives’ knowledge. Similar relationships between tech companies (including telecom operators) and state actors exist in other countries as well. The consequent human rights harms include extrajudicial surveillance, harassment, and physical harm, as well as the chilling effects that stem from awareness of these risks.
At the group level, targeted advertising automates discrimination and normalizes it by seeming to take individual prejudice out of the equation. As Chris Gilliard explains in a recent essay, “surveillance capitalism turns a profit by making people more comfortable with discrimination.” This is manifested in practices like digital redlining, differential pricing, racist search results, and social media filter bubbles.
Safiya Noble, an associate professor at the University of California, Los Angeles and author of Algorithms of Oppression, told me in an email that “we are dependent upon commercial search engines to sort truth from fiction, yet these too, are unreliable fact-checkers on many social and political issues. In essence, we are witnessing a full-blown failure of trust in online platforms at a time when they are the most influential force in undermining or protecting democratic ideals around the world.”
Targeted advertising causes us to experience the internet, and therefore the world, in different ways based on what the surveillance capitalism assemblage thinks it knows about us. This is not a recipe for fairness, equality, or a just society.
Finally, targeted advertising and the algorithmic curation practices associated with it harm democracy itself. Advertising’s shift to digital has cannibalized the news media’s revenue, thus weakening the entire public sphere. And linking advertising to pageviews incentivizes media organizations to produce articles that perform well, sometimes at the expense of material that educates, entertains, or holds power-holders accountable. Targeted advertising provides tools for political advertisers and propagandists to micro-segment audiences in ways that inhibit a common understanding of reality. This creates a perfect storm for authoritarian populists like Rodrigo Duterte, Donald Trump, and Jair Bolsonaro to seize power, with dire consequences for human rights. Dipayan Ghosh and Ben Scott, authors of the “Digital Deceit” report series, note that “we have permitted technologies that deliver information based on relevance and the desire to maximize attention capture to replace the normative function of editors and newsrooms.”
For decades, thinkers like Hannah Arendt and Karl Polanyi have warned us that fascism is the direct consequence of subordinating human needs to the needs of the market. Having willfully ignored the lessons of history, we have allowed corporate greed to transform our media ecosystem into one that structurally favors authoritarian populism. Saving democracy requires more than reforming internet companies, of course, and the exact recipe for success varies by country. In the United States, we need to reverse 30 years of media deregulation, exponentially increase public support for public interest media, and address the structural inequalities in our electoral system that give power to a party that less than half the electorate supports.
The targeted advertising business model at the heart of surveillance capitalism needs to be restructured, maybe even replaced. But with what, and how do we get there?
Experts disagree on whether the targeted advertising ecosystem can be meaningfully reformed, and whether that will be enough to reverse its harmful impact on society. “Surveillance capitalism is no more limited to targeted advertising than managerial capitalism was limited to the production of the Ford model T,” said Zuboff, whose new book comes out in January. “This logic of accumulation has traveled far beyond its origins to new sectors and new forms of business operations. Like an invasive plant that faced no natural predators, surveillance capitalism has been allowed to take root and flourish in lawless space for two decades.”
Dipayan Ghosh, who studies privacy engineering at Harvard University’s John F. Kennedy School of Government, is more optimistic, but doesn’t rule out regulatory solutions. As the tech companies themselves argue, internet users may find value in targeted advertising that is personalized to their interests and helps them discover opportunities or events that are relevant to them. But the same infrastructure that lets you know when your favorite band is playing in your city also enables bad actors to disseminate nefarious ideas.
“This commercial regime is responsible for huge security threats,” Ghosh, who worked as a tech policy advisor in the Obama White House and was formerly a US privacy and public policy advisor at Facebook, said in an interview. “We will need to treat the business model with policy measures in ways that raise up the good and cut off the bad. And if that doesn’t work, we may have to regulate against targeted advertising. I think we can find ways to allow targeted advertising done by Chanel or the NBA, and cut out the nefarious content pushed by Russian propagandists.”
The onus is on Silicon Valley to demonstrate that firms can guard against surveillance capitalism’s gravest harms without uprooting their business models—or better yet, to find new revenue streams that don’t rely on commodifying people’s private behavior. This is all the more important because people can’t meaningfully opt out.
While Google and Facebook let users opt out of seeing targeted ads, it’s impossible to opt out of being tracked or being included in the datasets used to create targeting algorithms. According to Carnegie Mellon’s Libert, “you may assume if you don’t see a targeted ad for shoes they stopped tracking you, but that’s not the case whatsoever. There are technological ways to prevent some level of tracking, but it’s like taking aspirin to cure your cancer, it may make you feel a little better for a few hours but you’re still dealing with cancer. The only way to root out the cancer of targeted advertising is regulation. Europe is conducting a grand experiment right now with GDPR, and the rest of the world is watching.”
Policymakers around the world, including in Washington, are increasingly aware that privacy and data protection are intimately linked to the basic structure of society. Neither they, nor the public, are likely to accept the status quo for much longer.