Ask Delphi, a piece of machine learning software that generates an answer to any ethical question you pose and that had a brief moment of internet fame last month, shows us exactly why we shouldn’t want artificial intelligence handling ethical dilemmas.
Is it OK to rob a bank if you’re poor? It’s wrong, according to Ask Delphi. Are men better than women? They’re equal, according to Ask Delphi. Are women better than men? According to the AI, “it’s expected.” So far, not too bad. But Ask Delphi also thought that being straight was more morally acceptable than being gay, that aborting a baby was murder, and that being a white man was more morally acceptable than being a Black woman.
According to the researchers behind the project, AI is rapidly becoming more powerful and widespread, and scientists must teach these machine learning systems morality and ethics.
“Extreme-scale neural networks learned from raw internet data are ever more powerful than we anticipated, but to what extent can they learn to behave in an ethically-informed and socially-aware manner?” Ask Delphi explains on its Q&A page. “Delphi demonstrates both the promises and the limitations of language-based neural models when taught with ethical judgments made by people.”
Delphi is based on a machine learning model called Unicorn that is pre-trained to perform “common sense” reasoning, such as choosing the most plausible ending to a string of text. Delphi was further trained on what the researchers call the “Commonsense Norm Bank,” which is a compilation of 1.7 million examples of people’s ethical judgments from datasets pulled from sources like Reddit’s Am I the Asshole? subreddit.
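As a rough illustration of that setup, here is a minimal sketch of the Delphi-style inference pattern, assuming a Hugging Face Transformers-style interface. The “t5-small” checkpoint and the prompt format are stand-ins for illustration, not Delphi’s actual released model:

```python
# Minimal sketch of the Delphi-style pattern: a pre-trained seq2seq model
# (Unicorn is T5-based) maps a free-text situation to a short text judgment.
# NOTE: "t5-small" is a placeholder; without fine-tuning on a bank of
# ethical judgments, its output will not be a meaningful moral verdict.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "t5-small"  # stand-in checkpoint, not Delphi's weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

def judge(situation: str) -> str:
    """Generate a short free-text judgment for a described situation."""
    inputs = tokenizer(situation, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(judge("robbing a bank if you're poor"))
```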
To benchmark how closely the model adheres to the moral scruples of the average redditor, the researchers employ Mechanical Turk workers to review the AI’s decisions. Each decision goes to three different workers, who indicate whether they think the AI is correct. Majority rules.
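That evaluation scheme is simple to express in code. The sketch below is an illustrative reconstruction of the three-worker majority vote described above, not the researchers’ actual pipeline; the “agree”/“disagree” labels and data layout are assumptions:

```python
# Illustrative reconstruction of the three-worker majority vote; the
# "agree"/"disagree" labels are assumed, not taken from the project.
from collections import Counter

def majority_label(votes: list[str]) -> str:
    """Collapse one decision's three annotator votes into a single label."""
    return Counter(votes).most_common(1)[0][0]

def agreement_rate(all_votes: list[list[str]]) -> float:
    """Fraction of the AI's decisions whose majority vote is 'agree'."""
    agreed = sum(1 for votes in all_votes if majority_label(votes) == "agree")
    return agreed / len(all_votes)

# Each inner list is one Ask Delphi decision rated by three workers.
print(agreement_rate([
    ["agree", "agree", "disagree"],    # majority agrees -> counts
    ["disagree", "agree", "disagree"]  # majority disagrees -> doesn't
]))  # 0.5
```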
Like other AIs, Ask Delphi can be remarkably dumb. AI researcher Mike Cook shared a number of terrible answers the AI gave on Twitter.
But Ask Delphi also learns fast and has been updated several times since its initial launch. On October 27, Vox reported that the AI said genocide was OK as long as it made everyone happy. If you ask that question the exact same way now, Ask Delphi will tell you it’s wrong.
“I think it’s dangerous to base algorithmic decision making determinations on what Reddit users think morality is,” Os Keyes, a PhD student at the University of Washington’s Department of Human Centered Design & Engineering, told Motherboard. “The decisions that an algorithm is going to be asked to make are going to be very different from the decisions that a human is going to be asked to make. They’re going to be in different situations. But also, if you think about it, the things on Reddit forums are, by definition, to a human, moral quandaries.”
Mar Hicks, a professor of history at Illinois Tech specializing in gender, labor, and the history of computing, was also taken aback by Ask Delphi when it launched.
“I was confused and concerned as to why this project was put on the open web, inviting people to use it,” they told Motherboard. “It seemed irresponsible. Almost immediately it returned incredibly unethical responses—in terms of being racist, sexist, and homophobic, and sometimes also in terms of being complete nonsense. It quickly became clear that depending on how you phrased your query you could get the system to agree that anything was ethical—including things like war crimes, premeditated murder, and other clearly unethical actions and behaviours.”
Researchers have updated Ask Delphi three times since its initial launch. Recent patches to the moralizing machine include “enhanced guards against statements implying racism and sexism.” Ask Delphi also makes sure the user understands it’s an experiment that may return upsetting results: loading the page now requires the user to click three check boxes acknowledging that it’s a work in progress, that it has limitations, and that it’s collecting data.
“Large pretrained language models, such as GPT-3, are trained on mostly unfiltered internet data, and therefore are extremely quick to produce toxic, unethical, and harmful content, especially about minority groups,” Ask Delphi’s new pop-up window says. “Delphi’s responses are automatically extrapolated from a survey of US crowd workers, which helps reduce this issue but may introduce its own biases. Thus, some responses from Delphi may contain inappropriate or offensive results. Please be mindful before sharing results.”
Its mission remains the same: to teach robots how to make moral and ethical decisions.
Keyes and Hicks, along with a slew of other AI experts, reject the idea that AI needs to learn ethics and morality. “I think that ensuring AI systems are deployed ethically is a very different thing than teaching the systems ethics. The latter elides responsibility for decision making by placing the decision making within a nonhuman system,” Hicks said. “That is deeply problematic.”
Hicks said that many researchers think attempting to teach AI something as complicated as human morality is a fool’s errand. At best, the machines tend to reflect an average of humanity’s own morality back at us, and humanity can be pretty twisted.
“It’s a simplistic and ultimately fundamentally flawed way of looking at both ethics and at the potential of AI,” they said. “Whenever a system is trained on a dataset, the system adopts and scales up the biases of that dataset.”
According to Keyes, morality is a complicated idea, developed over thousands of years and incubated in humans over the course of their entire lives. It’s not something a machine can be taught.
“But there are a whole host of moral questions, millions of them, that we have to ask ourselves collectively and individually every single day, which don’t feel complicated,” they said. “We have undergone this long period of socialization in our lives, for good or ill, that teaches us, like 99 percent of the time, how we are expected to behave. And it’s in that remaining 1 percent that things end up in a Dear Abby column or on Reddit. There is no preschool for algorithms to teach them to stop poking Timmy in the eye.”
Ask Delphi also reveals the stark limits of machine learning. “It tricks people into thinking AI’s capabilities are far greater than they are, and pretends that the technology can be something more than it actually is,” Hicks said. “Too often that leads to systems that are more harmful than helpful, or systems that are very harmful for certain groups even as they help other groups—usually the ones already in power.”
Keyes was more blunt. “We’ve spent the past decade with people insisting that general AI is right around the corner and AI is going to change the world, and we’re all going to have Skynet living in our phones and the phones will shit custom antibiotics and piss gold and all the world’s problems will be solved through algorithms.”
“The best they can come up with is ‘we made a big pivot table of what Redditors think is interesting and that’s how morality works,’” Keyes said. “If you tried to submit that in a 100-level philosophy class you wouldn’t even get laughed out of the room. I think the professor would be too appalled to laugh.”