A botifesto
We live in a world of bots. Generally speaking, these sets of algorithms do much of the internet’s behind-the-scenes work, from making Google searches possible to filling your spam folder. But an emergent kind of bot, capable of interacting with humans and acting on their behalf, is playing a more active role in our everyday lives.
Bots measure the technical health of the internet, share information on natural disasters, predict disease outbreaks, fulfill our lunch requests, and send news articles to networks of people on platforms like Twitter and Slack. They may even write some of those articles. They are also integral to social media propaganda campaigns, distributed denial of service (DDoS) attacks, and stock market manipulation. Bots have been shown to be capable of compelling humans to carry out small tasks, and a “Siri-like” assistant has been proposed as a way to help drone pilots fire on their targets while reducing “moral injury.”
“Social bots” on platforms like Twitter interact with human users, spreading magical realist mashups, promoting political transparency, and circulating news about surveillance or embattled legal policies. In fact, many bots are built to pass themselves off as human. Projects like BotOrNot, software that aims to detect whether a Twitter account is controlled by a person or by code, point to just how much success coders have had at building human-like bots. Meanwhile, many bots seem more like disembodied cyborgs—part automaton, part human—that successfully pass themselves off as people to multinational corporations, broad publics, and even social media platforms.
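Detectors like BotOrNot weigh many signals at once, from follower networks to language use. Purely as a toy illustration of one such signal, here is a minimal sketch in Python (the function name and threshold are our own, not BotOrNot’s actual API) that flags accounts whose posts arrive at suspiciously regular intervals:

```python
from statistics import mean, stdev

def looks_automated(timestamps, threshold=0.1):
    """Naive bot signal: near-constant gaps between posts.

    `timestamps` is a sorted list of UNIX times for one account's posts.
    Real detectors weigh many more features (network, content, sentiment);
    this sketch checks timing regularity alone.
    """
    if len(timestamps) < 3:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Coefficient of variation: low values mean metronome-like posting.
    cv = stdev(gaps) / mean(gaps)
    return cv < threshold

# A human's posting rhythm is bursty; a cron-driven bot's barely varies.
print(looks_automated([0, 3600, 7200, 10800, 14400]))  # True
print(looks_automated([0, 500, 9000, 9100, 40000]))    # False
```

No single heuristic like this is reliable on its own, which is why real detection systems combine dozens of such features.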
However, bots like @oliviataters, @nice_tips_bot, and @twoheadlines draw attention because, on close inspection, they are not convincingly human; their very “botness” is funny, surreal, or poetic. In online political activism, too, research suggests that bots may be most effective at motivating engagement when they aren’t trying to appear human. In one study in Bolivia, researchers found that potential volunteers responded negatively when clearly nonhuman Twitter bots took on a more human tone.
Ever since ELIZA, often considered the first chatbot, one distinguishing feature of bots has been their semi-autonomy: their behavior is partly a function of the intentions a programmer builds into them, and partly a function of algorithms and machine learning that respond to a plenitude of inputs. Thinking about bots as semi-automated actors makes them a design challenge. It also makes them unusual in an ethical sense: questions of deception and responsibility must be considered when discussing both the construction and the functionality of bots.
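ELIZA’s core technique was simple pattern matching: a script of rules authored by the programmer, filled in at runtime by whatever the user happens to type. A minimal sketch of that technique (the rules below are our own invention, far smaller than ELIZA’s actual DOCTOR script, and we omit its pronoun reflection) makes the semi-autonomy concrete:

```python
import re

# A tiny ELIZA-style script: (pattern, response template) pairs,
# tried in order. These rules are illustrative, not ELIZA's own.
RULES = [
    (r"i need (.+)", "Why do you need {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def respond(utterance):
    """The bot's repertoire is fixed by its rules (the programmer's
    intentions), but its output depends entirely on what users type."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I am scared of this bot"))
# -> How long have you been scared of this bot?
```

Everything the bot can say was authored in advance, yet not even the author can enumerate what it will say; that gap is semi-autonomy in miniature.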
To get a better grip on the questions that bots raise, Sam Woolley, a researcher at the Data & Society Research Institute, organized a workshop that brought together a diverse group of bot experts. This article is the output of that event: a tour of the bot and its semi-autonomy from three perspectives, that of the designer, the implementer, and the regulator. In each, the bot presents unique challenges that have yet to be fully reckoned with by any human.
The Designer: Techno-Humanity and Humble Systems
Our expectations for bots differ depending on context. Sometimes we want bots to remind us of our own humanity, so we design them to say things that let us laugh at them. We relish the opportunity to objectify bots because they are trying to be human and failing.
At the same time, we want bots to understand us, to work for us; in this case failure isn’t funny but annoying. Consider what happens when we call an automated, interactive customer service hotline: before long we’re screaming “agent!” “operator!” “human!” As the automated system fails to understand us, frustration turns to anger. And then we might project that anger onto the human who eventually picks up the phone. In both cases, the humanity we want in our bots is one derived from polite society. Most likely, we don’t blame the bot for its poor socialization. We blame the human behind it.
But this condemnation can be sorely misplaced. The human behind the bot does not always intend for things to go poorly. While we want to preserve human responsibility for design choices, we also need to keep in mind that some bots will be programmed precisely to do things independently of, and even unanticipated by, their creators.
Consider the Random Darknet Shopper bot, which was programmed to spend $100 in bitcoin each week on random purchases from the online black market Agora. At some point the bot purchased drugs. Swiss police consequently seized the laptop running the program that powered the bot, along with the MDMA pills it had bought.
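The artists’ actual code is their own; purely to make the bot’s semi-autonomy concrete, a hypothetical sketch of such a weekly loop might look like the following, with the marketplace interactions left as stubs:

```python
import random
import time

WEEK = 7 * 24 * 60 * 60  # seconds
BUDGET_USD = 100          # the bot's weekly allowance, paid in bitcoin

def fetch_listings(budget):
    """Stand-in for querying a marketplace for items priced near the
    budget. The real bot shopped on Agora; this stub returns dummy data."""
    return ["a pair of counterfeit sneakers", "a scanned passport",
            "ten yellow pills"]

def purchase(item):
    """Stand-in for placing the order and shipping it to the gallery."""
    print(f"bought: {item}")

while True:
    listings = fetch_listings(BUDGET_USD)
    # The decisive design choice: a human fixed the budget and the
    # cadence, but the *item* is chosen by chance from whatever
    # sellers happen to offer that week.
    purchase(random.choice(listings))
    time.sleep(WEEK)
```

The decisive line is the random choice: the humans set the budget and the schedule, but nothing in the code rules out a seller listing ecstasy pills.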
As ! Mediengruppe Bitnik, the Swiss collective behind the bot, explained, the charges were dropped after the prosecutor decided that “the overweighing interest in the questions raised by the art work «Random Darknet Shopper» justify the exhibition of the drugs as artifacts, even if the exhibition does hold a small risk of endangerment of third parties through the drugs exhibited.”
Though the artists were working to challenge the very idea of online culpability, of how anonymity, ethics and commerce function in the darkest depths of the web, they have stated that they take responsibility for what the bot does. The Swiss constitution, however, complicates their willingness to accept potential punishment, because it protects art made in the public interest.
The work and the legal response raise crucial questions. Who is responsible for the output and actions of bots, both ethically and legally? How does semi-autonomy create ethical constraints that limit the maker of a bot? Similar questions were raised last year, after a semi-random text bot owned by an Amsterdam-based man issued a death threat on Twitter. “Of course since I don’t have any legal knowledge I don’t know who is/should be held responsible (if anyone) but like. kinda scared right now,” the bot’s programmer tweeted. The owner subsequently deleted the bot.
For those of us who create bots and send them out into the world, it is essential that we do so thoughtfully. It should be our responsibility to explicitly examine and articulate the values underlying our creations. Making space for messiness and innovation should not be equated with indiscriminately releasing anything into the world.
The Implementer: Automating the Fourth Estate
Outside the art world, these questions have reared their heads among people who use bots in civic-minded ways. Bots can be useful for making value systems apparent, revealing obfuscated information, and amplifying the visibility of marginalized topics or communities. Twitter accounts such as @congressedits, @scotus_servo, and @stopandfrisk use re-contextualization to highlight information in a way that has traditionally been the role of journalistic organizations.
The bot can be thought of as more than an assistant: it can be a kind of civic prosthetic, a tool that augments our ability to sense other people and systems. Bots won’t replace journalists, but they can supercharge them by automating tasks that would otherwise be done by hand. A bot can report on a topic indefinitely, or expose connections and patterns that would take a human many hours to uncover. Through these affordances, bots can become powerful tools for citizens demanding accountability from those in power.
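Accounts like @congressedits follow a monitor-and-republish pattern: watch a public data stream, filter for activity tied to the powerful, and restate it where the public can see. A generic sketch of that pattern (the feed URL, field names and IP prefix below are placeholders of our own, not the real bot’s code) might look like this:

```python
import time

import requests  # third-party HTTP library

FEED_URL = "https://example.org/public-records/changes.json"  # placeholder
WATCHED_PREFIXES = ["143.231."]  # placeholder: IPs tied to an institution
seen = set()

def recontextualize(change):
    """Turn a raw record into a human-readable, publishable sentence."""
    return f"{change['title']} was edited anonymously from {change['ip']}"

while True:
    for change in requests.get(FEED_URL, timeout=10).json():
        is_new = change["id"] not in seen
        from_watched = any(change["ip"].startswith(p)
                           for p in WATCHED_PREFIXES)
        if is_new and from_watched:
            seen.add(change["id"])
            print(recontextualize(change))  # a real bot would tweet this
    time.sleep(60)  # poll once a minute; rate limits matter in practice
```

The loop never tires and never forgets what it has already reported, which is exactly the sense in which such a bot augments, rather than replaces, a newsroom.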
Questions of design, ownership and trust are particularly relevant for journalism bots because journalism is a discipline of verification. Journalists strive for truth and accuracy in their writing, and work to avoid slander and libel. A bot fed inaccurate information will eventually get things wrong, and it could even commit libel. If you make a bot, are you prepared to deal with the fallout when your tool does something that you yourself would not choose to do? How do you stem the spread of misinformation published by a bot? Automation and “big” data certainly afford innovative reporting techniques, but they also highlight the need for revamped journalistic ethics.
The Regulator: Bots, Politics and Policy
Given the public and social role they increasingly play—and whatever responsibility their creators assume—the actions of bots, whether implicit or explicit, have political outcomes. The last several years have seen a rise in bots used to spread political propaganda, stymie activism and bolster the social media follower counts of public figures. Activists can use bots to mobilize people around social and political causes. People working for a variety of groups and causes use bots to inject automated discourse into platforms like Twitter and Reddit. Over the last few years, both government employees and opposition activists in Mexico have used bots in attempts to sway public opinion. Where do we draw the line between propaganda, public relations and smart communication?
At the moment, bot oversight tends to fall upon the shoulders of platforms, but even companies like Twitter and Facebook can’t catch all bots, nor do they want to. Twitter’s bot-friendly design differs greatly from Facebook’s tighter regulation of authenticity and automation; the former allows a thriving bot ecology but also makes space for more nefarious uses of automation. Bots encoded with political intentions, and those designed to attack other users, threaten rights whose enforcement falls to entities beyond the platforms. Automated abuse, coercion and deceit—along with the anonymity afforded by bot proxies—must be addressed while preserving the creativity and messiness so central to coding.
Wholesale elimination of bots on social media would, after all, also get rid of bots doing important work in journalism and silence the variety of bots appreciated for their comedy and “botness.” How to craft an administrable policy then becomes a critical question for platforms wanting to reap the opportunities that bots bring without falling victim to the threats they raise.
One approach to bot policy involves crafting rules that limit behaviors we do not want people to practice. For instance, how do you create policies that dissuade people from engaging in unfair political attacks? Two main values, decisional privacy and democratic discourse, emerge as helpful starting points for all kinds of regulators.
Decisional privacy, according to Beate Rossler, is about “making one’s own decisions and acting on those decisions, free from governmental or other unwanted interference.” Applied to bots, this means we do not want them to interfere in people’s critical life decisions. We don’t want these automated actors to unduly influence who people vote for or what type of medical treatment they receive.
This is not a complete solution to the challenge bots present. The approach becomes tricky when we consider the possibility of bots as “speed bumps”: agents that can create socially beneficial behavioral change. Bots might be effective tools for guiding people toward healthier lifestyles or for spreading information about natural disasters. How can policies allow for civically “good” bots while stopping those that are repressive or manipulative?
Healthy democratic discourse, meanwhile, is buttressed by an environment that fosters a diversity of voices. Bots can be seen as both speech acts and proxy actors. As speech acts, they benefit from the First Amendment’s bias toward preserving speech. As “actors,” it is vital to consider how bots relate to the rights of individuals, be it to amplify their voices or to drown out or harm the speech of others.
Embracing Semi-Autonomy
Excitement about bots, from Silicon Valley to the academy, is palpable at the moment. This is partly because the creative, political, legal and ethical futures of this technology are so open. Semi-autonomy, the dual human-computer nature of bots, provides fodder for all sorts of revolutionary production and reception. We must consider, however, that this technology will continue to evolve and grow more sophisticated. To what ends will this evolution lead?
The best way to envision potential futures for social automation is to accept the paradoxical nature of bots. Yes, they contain values encoded by the people who build them, but they also live—and perform—on an unpredictable internet of nearly limitless input and output. This doesn’t mean that responsibility doesn’t exist. It means that responsibility is complicated and should be addressed as such. Automation, and the anonymity implicit in the depths of the web, muddy notions of clear culpability. Cases like the Random Darknet Shopper underscore this point by revealing the distributed, and unpredictable, nature of surrogate code.
Platforms, governments and citizens must step in and consider the purpose, and future, of bot technology before manipulative anonymity becomes a hallmark of the social bot. Rumination on bots should also avoid policies or perspectives that simply blacklist all bots. These automatons can be put to many positive uses, from serving as social scaffolding to pushing the bounds of art. We hope to provoke conversation about the design, implementation and regulation of bots in order to preserve these, and other as yet unimagined, possibilities.
By embracing messiness, those who make, use, and interact with bots open themselves to the automated creativity, innovation, and unpredictability so central to the web. This inventiveness will continue to extend into the realms of journalism, activism, and protest, epicenters of democracy and public welfare. This approach makes space for thoughtful regulation, for rules that allow bots to be as messy as their creators, favor diversity, and prevent imbalances of power.