
Public Outrage Is the Best Regulation of Big Tech We’re Going to Get

YouTube's ban on certain kinds of QAnon content is the latest in a line of arbitrary bans that are probably better than the alternatives.

On Thursday, YouTube announced a policy change that was both mystifyingly vague and startlingly clear. As described in a press release published on its official blog, the company is "expanding both our hate and harassment policies to prohibit content that targets an individual or group with conspiracy theories that have been used to justify real-world violence." That could cover a lot of things; the four references to QAnon, and the one to Pizzagate, in the 399-word release made it evident what the policy is, for now, intended to cover.


There are plenty of grounds on which to criticize this decision. If the idea is to prevent real-world violence, it's not clear why the policy would be applied to QAnon but not to conspiracy theories promulgated by U.S. government officials to justify the barbarous treatment of immigrants. It's not obvious that QAnon can be isolated from the variety of other conspiracy theories that thrive on YouTube, ranging from harmless ones about the influence of space aliens on the progress of man to harmful anti-Semitic ones, which raises questions about whether this policy can possibly be enforced evenly. And it's clear that even targeted enforcement can only be so effective: a Motherboard review of a variety of popular Q-related channels found many of them still up, some promoting ideas far more malignant than those of channels that were taken down, such as the claims that Bill Gates is researching COVID-19 vaccines because he is the anti-Christ, or that it is illegal for people injured by vaccines to seek recourse.

More broadly, though, the decision raises old yet still urgent questions about giant tech platforms defining the bounds of acceptable discourse. Just this week, Facebook banned Holocaust denial but not other forms of genocide denial, and anti-vax advertising but not anti-vax content. Both Facebook and Twitter throttled distribution of a poorly reported New York Post story on the dubious grounds that it was poorly reported and contained information that may have been obtained illicitly. You don't need to be a fan of any of these things to be alarmed by companies that have massive control over the flow of information issuing seemingly arbitrary and perhaps unenforceable edicts about what they will and won't allow.


Part of the problem lies in what these companies are. They have some of the qualities of neutral distribution platforms, like phone networks or chat apps; they also have some of the qualities of editorial outlets, like newspapers. They have to make decisions about what they do and don't promote (and allowing something to be published at all is a form of promotion), and those decisions are fundamentally editorial, even and especially when they are made by algorithms that the general public has no insight into and that even the companies' own programmers cannot fully comprehend. Those editorial decisions conflict with the companies' other role: overseeing products that function as utilities, which people who have no particular opinion on algorithms as products of human design simply use to communicate with one another.

The arbitrary and provisional nature of YouTube's policymaking makes more sense when you consider another part of the problem: it is a company, as Facebook and Twitter are. It is a publisher and a platform, and it serves something like the function of a public utility, but its motive is profit. If it makes erratic and incoherent policy decisions in response to public outcry over its promotion of garbage, that's because it wants people to stop yelling at it and wants advertisers to continue to spend money on it. YouTube would like people to stop yelling at it for publishing videos about Donald Trump fighting a Satanic cabal of Democrats draining fluids from babies, just as Twitter would like people to stop yelling at it for being a vector of Russian disinformation. If people were angry enough about other bizarre conspiracy theories spread on social media (moon landing trutherism, say, or the notion that multiple celebrities have been covertly killed by the government and replaced with pliant body doubles), these companies would address those too, or instead, via some similarly ill-defined and poorly executed policy. They just want the noise to stop and to continue to avoid government regulation.

Whether this is bad is a matter of perspective. It's probably not good that profit-seeking companies with functional control over the discourse view that control principally in terms of public relations and damage control; the ideal state of affairs certainly does not involve them silencing individual theories or stories whose promotion they believe may damage their reputations, for no better reason than that they want the clamor to die down.

A public-relations exercise, though, is in a very real way the public being listened to: the market at work, one might say. Random things being randomly and ineptly marked off as beyond the pale because the public shouted about them isn't good. The alternatives, though, would seem to be these companies allowing absolutely anything at all; submitting to government regulation that would precisely delineate what is and isn't allowed to be said; being subjected to anti-monopoly enforcement that would meaningfully constrain their power; or believing in something other than their own profits enough to take stances proactively, before they become a problem, which would undoubtedly anger their global userbases in an incredibly divided world.

There is not a good and realistic answer here. There are worse and more unrealistic ones.

Follow Tim Marchman on Twitter.