What If Technology Belonged to the People?

Since the pandemic began, Apple, Alphabet, Amazon, Facebook, and Microsoft have seen their combined value increase by well over $1.7 trillion. Is it because these companies are offering technologies we all need, or is it because they enjoy a series of monopolies that ensure greater wealth and control during a period of great uncertainty?

With so many people stuck at home, these internet-first companies were of course well-positioned to provide critical services during a pandemic. But they all got there by leveraging the labor of some of the most vulnerable populations in the world, extracting and selling the data of their customers, getting massive tax breaks, and otherwise taking advantage of huge weaknesses in our economic and political systems. With the economy and society falling apart, these massive companies—already monopolies during “normal” times—are becoming monolithic.

What, then, is to be done about these companies and their technologies, which, on the one hand, facilitate unprecedented communication and address once intractable logistics challenges, but, on the other hand, contribute to widespread suffering every day? Can we subordinate these technologies, whether algorithms or the data sets that feed them, to the end of making a fairer social order? Put simply: Can we create technology that is owned by the people who use it, and whose main purpose is to help humanity rather than extract wealth for a small class of individuals?

Digitization and Privatization

The digitization of our society is an aggressive form of privatization.

Early on in his book on digital capitalism, Jathan Sadowski defines smart technology as tech that “is embedded with digital technology for data collection, network connectivity, and enhanced control.” Sadowski uses the example of smart toothbrushes. These products record detailed data about how you brush your teeth and send it to a cloud server maintained by the manufacturer or a third party, where it can be accessed by you and your dentist through an app that not only guides and monitors your brushing, but scores it. It’s not hard to imagine a world where that data is also sold to companies, or analyzed by a dental insurer and used to calculate your monthly premiums—it’s already here.
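To make that data flow concrete, here is a minimal Python sketch of what such a pipeline might look like. The field names, thresholds, and scoring rules are invented for illustration; no actual manufacturer’s device or API is being described.

```python
# A hypothetical sketch of the telemetry pipeline described above.
# Field names and scoring thresholds are invented, not any real product's.
import json
from datetime import datetime, timezone

def build_brushing_event(duration_s: float, pressure_g: float, zones_covered: int) -> dict:
    """Package one brushing session the way a 'smart' toothbrush might before uploading."""
    return {
        "device_id": "toothbrush-0001",           # ties every session to one identifiable user
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "duration_s": duration_s,
        "pressure_g": pressure_g,
        "zones_covered": zones_covered,           # out of a nominal 16 mouth zones
    }

def score_session(event: dict) -> int:
    """Reduce behavior to a single number an app, dentist, or insurer could act on."""
    score = 0
    score += min(event["duration_s"] / 120, 1.0) * 50   # reward the 'recommended' two minutes
    score += min(event["zones_covered"] / 16, 1.0) * 40
    score += 10 if event["pressure_g"] < 250 else 0      # penalize 'brushing too hard'
    return round(score)

event = build_brushing_event(duration_s=95, pressure_g=180, zones_covered=12)
print(json.dumps(event, indent=2))                       # the payload a cloud server would receive
print("brushing score:", score_session(event))
```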

If we look at the digitization of insurance more broadly, we quickly find a prime example of how data extraction and surveillance technologies are being put to troubling use. Here, digital systems not only amplify the industry’s record as “one of the greatest sources of regulatory authority over private life” but are deployed to transform the logic of insurance and its effects on society. Instead of pooling risk at aggregate levels that inform policy options and provide a sort of mutual aid to those in need, companies hope to individually assess risk based on a continuous stream of data on every action an individual takes.
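As a rough illustration of that shift, here is a toy Python comparison, with entirely invented numbers, between a pooled premium and individually “assessed” premiums. It is a sketch of the two logics, not any insurer’s actual pricing model.

```python
# Toy contrast between pooled and individualized pricing; all figures are invented.
expected_losses = {"ana": 200.0, "ben": 200.0, "cal": 1600.0}   # hypothetical annual claims

# Pooled logic: everyone pays the average, so the unlucky are subsidized by the lucky.
pooled_premium = sum(expected_losses.values()) / len(expected_losses)

# Individualized logic: premiums track each person's surveilled behavior one-to-one.
individual_premiums = {name: loss * 1.1 for name, loss in expected_losses.items()}  # 10% margin

print(f"pooled premium for everyone: {pooled_premium:.2f}")
for name, premium in individual_premiums.items():
    print(f"{name} pays {premium:.2f} alone")   # cal now bears nearly the full cost of being 'risky'
```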

The operating principles at work here are clear, as Sadowski lays out: “Any risk that insurers must bear is potential loss and any claim insurers have to pay is lost profit. Preventing such losses means controlling the source of risks and claims: customers.”

There are startups like The Floow that assign drivers a “safety score” based on data collected from their phones (GPS, accelerometer, screen time, etc.) to predict accident risk. There are insurers like Progressive that install devices in cars to record, in granular detail, how and when you speed and brake, and whether you drive through “dangerous” neighborhoods or at odd hours, to ensure your premium matches your “risky” behavior. And there are major corporations like Microsoft that partner with major insurance companies like American Family Insurance to create startup accelerators that aim to turn your home into another networked computer filled with networked devices, constantly sending data back to insurers eager to minimize losses.
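The scoring logic behind such products is proprietary, but a hedged sketch in Python can show its general shape. The features, weights, and thresholds below are assumptions invented for illustration and do not reflect The Floow’s or Progressive’s actual models.

```python
# A hypothetical telematics-style scorer; weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class TripTelemetry:
    hard_brakes: int              # accelerometer events above some deceleration threshold
    avg_speed_over_limit: float   # km/h above posted limits, via GPS map-matching
    night_driving_minutes: int    # minutes driven during hours labeled 'risky'
    phone_screen_minutes: int     # screen-on time while the vehicle is moving

def safety_score(trip: TripTelemetry) -> float:
    """Collapse a trip's surveillance data into a 0-100 score used to price the driver."""
    penalty = (
        4.0 * trip.hard_brakes
        + 1.5 * trip.avg_speed_over_limit
        + 0.2 * trip.night_driving_minutes
        + 0.8 * trip.phone_screen_minutes
    )
    return max(0.0, 100.0 - penalty)

def monthly_premium(base: float, score: float) -> float:
    """Scale a base premium so 'riskier' behavior directly raises the bill."""
    return base * (1.0 + (100.0 - score) / 100.0)

trip = TripTelemetry(hard_brakes=3, avg_speed_over_limit=6.5,
                     night_driving_minutes=40, phone_screen_minutes=5)
score = safety_score(trip)
print(f"score: {score:.1f}, premium: {monthly_premium(80.0, score):.2f}")
```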

Insurers may maintain that such a system will be “fairer” because prices will more accurately reflect individualized risk, but they’re actually just redefining the term. Instead of “spreading risks across a population to hedge against the vagaries of life” we are called on to believe “no one should bear any expense or risk for the benefit of the collective.” It is a form of surveillance and control, pitched to us as being good for the individual as long as the individual sticks to very strict rules determined by corporations and algorithms. It’s “if you don’t have anything to hide, you don’t have anything to worry about,” applied to every single facet of our lives, every product we own, everything that we do.

This line of thought justifies and demands discrimination in the name of fairness—and makes perverse and unequal consequences desirable because they are “fair” under this new logic. This represents a huge failure of imagination, with insurance technologies being deployed to reinforce the status quo of unequal coverage instead of “extending better coverage to broader populations and reducing the collective insecurity that impedes a flourishing society.”

None of this should come as a surprise, however, because capitalism is, as Astra Taylor writes, “an insecurity machine.” Our system “destabilizes by design: market forces capsize communities and disintegrate old ways of life.” Taylor explains that precarity and insecurity are not new by any stretch of the imagination—“security” was “an ancient concept and aspirational ideal”—but digital technologies allow even greater destabilization and dispossession so that even greater levels of private ownership and investment can follow. Capitalism’s insecurity machine has far-reaching consequences. To illustrate this, Taylor explains how surveillance and control are used in housing:

“Opaque systems of information collection and predictive analytics facilitate new forms of discrimination and redlining, marking certain populations as criminal threats or directing them into subprime financial services, predatory mortgages, and exploitative rental markets, increasing housing insecurity,” Taylor writes.

Big business capitalized on the mass evictions and foreclosures of 2008, which disproportionately affected Black and brown borrowers and homeowners. These foreclosures “opened space for new algorithmically enabled land grabs” that let institutional investors buy up property repossessed during the crisis. A new wave of digital technologies was then created not only to mediate and streamline this process for investors, but to deepen the old discriminatory hierarchies that created the original mortgage crisis in the first place.

Algorithms decide which properties to buy based on “neighborhood desirability, proximity to employment centers, transportation corridors, community amenities, construction type, and required ongoing capital needs.” Others decide to lock certain populations out entirely—or to include them so as to better exploit them and realize an outsized return. What ties all of these algorithms together is how finely tuned they are to securing investments (and privileged resident populations) by ensuring the insecurity and dispossession of those deemed too risky.
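A toy sketch, using invented properties, invented weights, and a hypothetical credit-score screen, shows how a weighted “desirability” ranking combined with a single hard threshold can quietly lock whole populations out. It is an illustration of the pattern, not any investor’s actual system.

```python
# Hypothetical acquisition ranking: weighted features plus a hard exclusion screen.
# All properties, weights, and thresholds are invented for illustration.
properties = [
    {"id": "A", "near_jobs": 0.9, "amenities": 0.7, "capital_needs": 0.2, "applicant_credit": 710},
    {"id": "B", "near_jobs": 0.6, "amenities": 0.8, "capital_needs": 0.5, "applicant_credit": 580},
    {"id": "C", "near_jobs": 0.8, "amenities": 0.4, "capital_needs": 0.1, "applicant_credit": 640},
]

WEIGHTS = {"near_jobs": 0.5, "amenities": 0.3, "capital_needs": -0.4}

def acquisition_score(p: dict) -> float:
    """Weighted sum of 'desirability' features; higher means buy first."""
    return sum(WEIGHTS[k] * p[k] for k in WEIGHTS)

# The hard screen is where 'risk management' becomes exclusion: a single threshold
# drops entire applicant pools from consideration.
eligible = [p for p in properties if p["applicant_credit"] >= 620]

for p in sorted(eligible, key=acquisition_score, reverse=True):
    print(p["id"], round(acquisition_score(p), 2))
```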

Digitization allows those with huge sums of capital to replace old systems of oppression with ones based on surveillance, control, and algorithms. A series of networked platforms mediate daily life for a mass of atomized individuals who are treated as consumers in a market bidding for commodities like housing and healthcare, not citizens with certain rights and privileges. It has also allowed companies to “disrupt” public goods designed to help the masses with privatized ones that have no motive other than scale and market dominance.

Consider the ride-hailing company Uber. Its platform grew best when it competed with existing transit options (subways, buses) and with taxi services that had to jump through regulatory hoops Uber decided to simply ignore. Uber’s data extraction yields insights about how drivers and riders use its platform, giving the company more leverage to undermine competitors and encourage driver and customer retention, in direct contrast to public transit’s goal of ensuring universal access and maximum coverage.

This move toward surveillance and control is happening across all industries. It features firms that may style themselves as the epitome of capitalist enterprise, but operate in ways that fly in the face of capitalist logic: they are anti-competitive, enjoy immense subsidies, and actively seek to rewrite regulations in ways that further entrench their power. Many startups have no intention of actually making money any time soon and are focused instead on scale, control, market dominance, and regulatory capture. They are heavily subsidized by venture capitalists happy to take millions or billions of dollars in losses in the hope that, once a monopoly is achieved, they can jack up prices or otherwise monetize consumers and rapidly earn back all the money spent destroying competitors with artificially low prices.

Some may view this development favorably. Viktor Mayer-Schönberger, an Austrian legal scholar who wrote the influential book Big Data, argued that “the massive amounts of data now being harvested and analysed by a few far-sighted firms would produce new business models and destroy existing ones; disruption was imminent, profits assured.” A second book, Reinventing Capitalism in the Age of Big Data, was much more ambitious: “once [Big Data] is efficiently utilized throughout the economy, Big Data will not just reinvent capitalism…but end it,” namely by supplanting the price system.

And yet, what we see is that when Big Data undermines the price system, it ends up furthering the interests of a narrow class of capitalists, not reinventing the system. If anything, the proliferation of digital technology in the marketplace has created a system in which true costs cannot be ascertained—ours is more a successor to the Soviet Union’s Gosplan central planning system than anything else.

A Public Uber?

The important question for the future is whether we can design a better system. Technology, broadly speaking, has the ability to improve life for huge swaths of the population. But Silicon Valley has thus far deployed it in a manner designed to extract wealth from the vulnerable, to crush other businesses by undercutting them with endless VC funding, and to control, surveil, and monetize the behaviors of the masses. We need non-market, publicly owned alternatives to big tech.

Rather than having a ride-hail company like Uber that makes traffic worse, loses endless amounts of money by underpricing its product, and pays its workers poverty-level wages, we should aspire to a transit system that is free and accessible to all, that maximizes coverage for passengers, and that ensures fair working conditions for drivers and operators. If you need to get somewhere, you should not have to worry about whether you can afford it. And if you want a job in transportation, you should not have to worry about whether you can make ends meet while you have it.

We should also aspire to adopt forms of transportation that radically reduce urban pollution, traffic congestion, and accidents, while minimizing those that don’t. Our over-reliance on cars and private transportation shapes how our communities and cities are planned. Federal highways, to take one example, have helped keep communities segregated to this day, thanks to how easily racist views mixed with the social engineering project urban planners took up decades ago to “keep their cities healthy” with massive road systems.

It is hard to imagine any city that can seriously match the deep pockets—or access to computing resources—of major corporations, whether they are Google, Amazon, or Uber. In fact, Amazon’s HQ2 spectacle—when dozens of cities tripped over themselves to convince the world’s richest man to set up shop in their backyard—should leave us doubtful that even a coalition of cities would be able to challenge one company.

And yet, it is within cities that we will have the best luck in building viable alternatives that use digital technologies to take transportation (and other services) out of the hands of the market.

The first place to begin would be to seize the data generated on Uber’s platform by passengers and drivers. European drivers are already suing for access to the data Uber’s algorithms generate about them, but that won’t be enough. Such data would need to be seized by a local, state, or federal authority, or its sharing made a condition of gaining a license to operate. In some of Uber’s largest markets, incredible opportunities to do just this are emerging. In California, where Uber and Lyft are threatening to exit if their drivers are reclassified as employees, and in London, where Uber is on track to lose its license to operate, regulatory authorities could make the company’s return contingent on handing over its data—Uber itself has admitted in its IPO filing documents that it likely cannot survive without these cities.

Such experiments have been underway in Barcelona, where Francesca Bria, a professor at UCL and a long-time technology advisor for the Spanish government, has helped spearhead these efforts for years. There, digital technologies have been used not only to expand participatory democracy by allowing citizens to actively shape the government’s agenda, but to prevent the typical tendency of “smart cities” to become playgrounds where corporations enjoy total control over harvested data that can then be analyzed for insight into which public good or service could be privatized next.

On one front, this means contracts with technology companies that come with clauses demanding that data be owned by the public or a government entity. On another, Bria argues, it involves creating a digital infrastructure that allows people to understand what data they are generating and “what data they want to share, with whom, on what basis, and to do what.” This also means contesting whether private companies get to say they own the instruments that collect and interpret data (not just generate it)—meaning the sensors and algorithms in question. After all, what good is data seized from Uber or forcibly handed over if we cannot use it, let alone make sense of it? Cities will need to find some way to obtain the computational resources necessary to analyze the data (or the “proprietary” algorithms developed by Uber) in order to properly regulate or imitate its services.
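A minimal sketch of what such a citizen-controlled data-sharing record might look like, assuming a hypothetical schema rather than Barcelona’s actual infrastructure, could be as simple as this: each grant names the data stream, the recipient, the legal basis, and the purpose, and anything not explicitly granted is refused.

```python
# A hypothetical citizen-controlled data-sharing record; the schema and checks
# are invented for illustration, not any city's actual system.
from dataclasses import dataclass

@dataclass(frozen=True)
class SharingGrant:
    data_stream: str     # e.g. "trip_origins_anonymized"
    recipient: str       # e.g. "city_transit_agency"
    legal_basis: str     # e.g. "public_interest"
    purpose: str         # e.g. "bus_route_planning"

grants = [
    SharingGrant("trip_origins_anonymized", "city_transit_agency", "public_interest", "bus_route_planning"),
    SharingGrant("energy_usage_hourly", "municipal_utility", "consent", "grid_load_forecasting"),
]

def is_permitted(grants: list, data_stream: str, recipient: str, purpose: str) -> bool:
    """A request is allowed only if a citizen's grant matches stream, recipient, and purpose."""
    return any(
        g.data_stream == data_stream and g.recipient == recipient and g.purpose == purpose
        for g in grants
    )

# The transit agency's stated use is allowed; an ad broker asking for the same data is not.
print(is_permitted(grants, "trip_origins_anonymized", "city_transit_agency", "bus_route_planning"))  # True
print(is_permitted(grants, "trip_origins_anonymized", "ad_broker", "audience_targeting"))            # False
```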

The immediate instinct might be to simply seize Uber’s platform and operate it in the public interest. But Uber’s platform and its algorithms are thoroughly designed to achieve outcomes we are not interested in: increasing demand for private transit, increasing the number of drivers to reduce wait times, increasing the number of passengers to increase revenues, and thus increasing pollution and traffic.

A final question is how much needs to be managed in the first place. Should we use digital technologies to enforce a strict uniformity, or to encourage a diversity of experiments and solutions to problems both universal and specific? Different cities have different gaps in their transportation systems, different demographics that guide traffic patterns, different urban compositions that dictate where traffic may or may not flow, and different geographies altogether. Reducing vehicle ownership and traffic in a city suffering from urban sprawl might demand a different type of public option than in a city where a significant share of the population are commuters. Cities and urban spaces should be unique, as much as makes sense, and so should the data they decide or refuse to collect.

Ultimately, governments and public groups can’t seek to simply imitate Uber or Amazon or Facebook or Google, because these services were designed from the start to extract wealth and monetize users. And in the case of Uber specifically, it was created to displace mass transit with private transit. We must build something entirely different.

Digital technologies need not be tied to corporations or even markets—they can be subordinated to the task of creating explicitly non-market forms of social coordination.

We must free ourselves of the narratives that paint these technologies and their accompanying institutions as the province of markets and capitalist activity. We do not have to choose between a central planning system and a market system. We can choose something else, even if we are not sure what it will look like. In fact, if we are serious about reducing the level of discrimination and insecurity in this world—which is currently a feature, not a bug, of what happens when our digital technologies are deployed in the marketplace—we must actively experiment to find it.

Follow Edward Ongweso Jr. on Twitter.