
When AI Goes Wrong, We Won’t Be Able to Ask It Why

Software governs much of our daily lives from behind the scenes, from which sorts of information we consume, to who we date. For some, secretive algorithms decide whether they are at risk of committing a future crime. It’s only natural to want to understand how these black boxes accomplish all this, especially when they affect us so directly.

Artificial intelligence is getting better all the time. Google, for example, recently used a technique known as deep learning to kick a human’s ass at Go, an incredibly complex board game invented thousands of years ago. Researchers also believe deep learning could be used to find more effective drugs by processing huge amounts of data more quickly. More relatably, Apple has injected the technique into Siri to make her smarter.


Futurists believe that computer programs may one day make decisions about everything from who gets insurance coverage, to what punishment fits a capital crime. Soon, AI could even be telling soldiers who to shoot on the battlefield. In essence, computers may take on a greater role as our insurance salespeople, our judges, and our executioners.

These concerns are the backdrop for new legislation in the European Union, slated to take effect in 2018, that will ban decisions “based solely on automated processing” if they have an “adverse legal effect,” or a similar negative effect, on the person concerned. The law states that this might include “refusal of an online credit application or e-recruiting practices.”

“As soon as you have a complicated enough machine, it becomes almost impossible to completely explain what it does”

In the event that a machine screws up somebody’s life, experts believe that the new law also opens up the possibility to demand answers. Although a “right to explanation” for algorithmic decisions is not explicit in the law, some academics believe that it would still create one for people who suffer because of something a computer did.

This proposed “right,” although noble, would be impossible to enforce. It illustrates a paradox about where we’re at with the most powerful form of AI around—deep learning.

We’ll get into this in more detail later, but in broad strokes, deep learning systems are “layers” of digital neurons that each run their own computations on input data and adjust their connections as they go. Basically, they “teach” themselves what to pay attention to in a massive stream of information.

Yet even as these programs are put to use in many facets of our daily lives, Google and Apple and all the rest don’t understand how, exactly, these algorithms make their decisions in the first place.

If we can’t explain deep learning, then we have to think about if and how we can control these algorithms, and more importantly, how much we can trust them. Because no legislation, no matter how well-intentioned, can open these black boxes up.

Google’s AlphaGo faces off against Go player Lee Sedol in March of 2016. Image: Flickr/Buster Benson

Yoshua Bengio is 52 years old and has been at the Université de Montréal since 1993. He’s one of a handful of Canadian computer scientists who toiled in obscurity to make the key breakthroughs that transformed deep learning research from a graveyard for promising careers into a multi-billion dollar industry.

I called Bengio in the hopes that he would ease my anxiety over a fact that would make anyone who believes robots will one day kill us all shudder: we don’t really understand how deep learning systems make decisions.

He didn’t exactly chill me out. In fact, he said, it’s exactly because we can’t mathematically pick apart a decision made by deep learning software that it works so well.

“As soon as you have a complicated enough machine, it becomes almost impossible to completely explain what it does,” Bengio said. “Think about another person or an animal—their brain is computing something with hundreds of billions of neurons. Even if you could measure those neurons, it’s not going to be an answer that you can use.”

The math at the core of deep learning systems is really pretty simple, Bengio said, but the problem is this: once they get going, their behaviour becomes too complex to make sense of. You could put all the calculations that went into making a decision into a spreadsheet, Bengio explained, but the result would just be numbers that only a machine can understand.
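To make that concrete, here’s a minimal, entirely hypothetical sketch (random weights, a made-up input, a network far smaller than anything deployed in practice). Every individual step is simple arithmetic, and you can print every intermediate value, but the printout doesn’t read as an explanation of the decision.

```python
# A toy stand-in for "putting all the calculations into a spreadsheet".
# Everything here is hypothetical: random weights, a made-up input.
import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.2, -1.3, 0.5, 0.7])      # the "data" the decision is based on
W1 = rng.normal(size=(4, 8))             # first layer of weights
W2 = rng.normal(size=(8, 1))             # second layer of weights

h = np.maximum(0.0, x @ W1)              # hidden-layer activations (ReLU)
score = 1 / (1 + np.exp(-(h @ W2)))      # final decision score (sigmoid)

# The complete record of the computation: simple math, opaque numbers.
print("hidden activations:", np.round(h, 3))
print("decision score:", np.round(score, 3))
```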

It’s worth emphasizing here that deep learning still runs on computers, and that means we shouldn’t completely mythologize it. Think about it this way: in the past, many people were paid to be human computers. The term “computer” carries that history with it, and it’s a reminder that today’s powerful machines are doing the same basic job as those original human computers, only much faster.

“You don’t understand, in fine detail, the person in front of you, but you trust them”

Consider the Turing machine, a concept of an ideal computer introduced by Alan Turing in the 1930s that is still used today to think through some of the thornier issues surrounding the ethics of intelligent machines. But in Turing’s mind, “a man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine.”

In essence, then, extremely powerful deep learning computers are still only computers, but they’re also savants in a way—data goes in, and outputs come out. We understand the math and the high-level concepts of what makes them tick, but they have an internal logic that’s outstripped our ability to comprehend it.

Bengio argues that trusting a computer is no different, or more dangerous, than trusting another person. “You don’t understand, in fine detail, the person in front of you, but you trust them,” Bengio continued. “It’s the same for complicated human organizations because it’s all interacting in ways nobody has full control over. How do we trust these things? Sometimes we never trust them completely and guard against them.”

This is the core idea behind provisions pertaining to AI in the EU’s new law. Because it bans only decisions made entirely by automated processing that negatively affect a person, a system that employs a human at some stage in the decision-making loop would presumably still be allowed.

But one can imagine situations where some sort of “right to explanation” might still be desirable, even with this caveat: when a self-driving car crashes, for example. The data processing was presumably fully automatic, and likely negatively impacted at least the car’s passenger. Perhaps the car inexplicably decided to turn left instead of right. We could call this a glitch, or we could call it a bad decision.

To satisfy the “right to explanation” and find out how that decision was made, we could tear apart the car’s machine brain, but all the numbers we pull out… Well, they’d just be numbers resulting from billions of individual autonomous calculations, and not any sort of clear explanation for a human tragedy.

Here, and in many other less personally injurious instances of AI behaving badly, there is no satisfying explanation to be had, right or no right. The case will be the same for an autonomous car crash or an autonomous denial of insurance coverage.

Human computers in 1949. Image: Wikimedia

Deep learning systems, or neural networks, are “layers” of nodes that run computations on input data, like millions of cat photos. The connections between these nodes carry weights that start out more or less random and are adjusted, pass after pass, until the network arrives at a useful output—say, the defining visual features of a cat. This process is called training. Google achieved this in 2012 by networking 16,000 computer processors to run a neural network with one billion connections.

These systems make predictions based on what they “know.” Show a trained neural network a cat photo it has never seen before, and it will be able to say with some certainty that this, too, is a cat. Researchers can modify deep learning systems to do different things by training them on different kinds of data—books or human speech instead of cat photos, for example.
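As a rough illustration of that train-then-predict loop, here’s a minimal sketch in plain NumPy. The task is an invented stand-in for cat recognition (deciding whether two numbers add up to something positive) and the network is tiny, but the shape of the process is the same: show the network labelled examples, let the weights adjust, then ask it about an input it has never seen.

```python
# A minimal training sketch on toy data (not a real image classifier).
import numpy as np

rng = np.random.default_rng(0)

# Made-up "training set": 200 points, labelled 1 if their two values sum to something positive.
X = rng.uniform(-1, 1, size=(200, 2))
t = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# A tiny network: 2 inputs -> 4 hidden units -> 1 output probability.
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)

lr = 0.5
for _ in range(2000):                        # "training": repeatedly nudge the weights
    h_pre = X @ W1 + b1
    h = np.maximum(0.0, h_pre)               # hidden activations (ReLU)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # predicted probabilities

    d_logits = (p - t) / len(X)              # gradient of cross-entropy loss
    dW2, db2 = h.T @ d_logits, d_logits.sum(axis=0)
    d_h = (d_logits @ W2.T) * (h_pre > 0)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1           # the network reorganizes itself
    W2 -= lr * dW2; b2 -= lr * db2

# Prediction on an input the network has never seen before.
new_point = np.array([[0.9, 0.3]])
h = np.maximum(0.0, new_point @ W1 + b1)
prob = 1 / (1 + np.exp(-(h @ W2 + b2)))
print("confidence it's 'positive':", round(prob.item(), 3))
```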

After a dozen or so layers and billions of connections, the path toward a given decision becomes too complex to reverse-engineer. It’s not that these systems are magical in any way. They’re governed by math—it’s just too complex. The cake is baked, so to speak, and it can’t be un-baked.

“Imagine if you were an economist, and I told you the detailed buying behaviour of a billion people,” explained Jeff Clune, a computer scientist at the University of Wyoming who works with deep learning systems. “Even if I gave you all of that, still, you’d have no idea what would emerge.”

Even neural networks with a very small number of connections may take years of poking and prodding before they’re fully understood, Clune said.

“I don’t know what that symphony is going to sound like—what the music will sound like”

“I can look at the code of the individual neuron, but I don’t know what that symphony is going to sound like—what the music will sound like,” Clune continued. “I think our future will involve trusting machine learning systems that work very well, but for reasons that we don’t fully understand, or even partially understand.”

None of this is to suggest that researchers aren’t trying to understand neural networks. Clune, for his part, has developed visualization tools that show what each neuron in every layer of the network “sees” when given an input. He and his colleagues have also written an algorithm that generates images specifically designed to maximally activate individual neurons in an effort to determine what they are “looking” for in a stream of input data.
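The second technique is a form of what’s often called activation maximization, and the general idea fits in a few lines: start from noise and climb the gradient of a chosen neuron’s activation, nudging the input toward whatever excites that neuron most. The sketch below runs the procedure on a toy network with random weights rather than anything Clune’s group actually trained, so the “preferred input” it finds means nothing, but the mechanics are the same.

```python
# Activation maximization on a toy, untrained network (illustration only).
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical two-layer network: 16 "pixels" -> 64 hidden units -> 1 output neuron.
W1 = rng.normal(size=(16, 64))
w2 = rng.normal(size=64)

def neuron_activation(x):
    return np.maximum(0.0, x @ W1) @ w2

# Start from near-random noise and climb the gradient of the neuron's output.
x = rng.normal(scale=0.1, size=16)
for _ in range(200):
    active = (x @ W1) > 0                   # which hidden units are currently firing
    grad = W1 @ (w2 * active)               # d(activation) / d(input)
    x = x + 0.1 * grad                      # gradient ascent on the *input*, not the weights
    x = x / np.linalg.norm(x)               # keep the synthetic "image" bounded

print("activation reached:", round(float(neuron_activation(x)), 3))
print("input the neuron 'looks for':", np.round(x, 2))
```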

Google engineers took exactly this kind of backwards approach to understanding what neural networks are actually doing when they built Deep Dream, a program that generated trippy images purely from a neural network’s learned assumptions about the world. “One of the challenges of neural networks is understanding what exactly goes on at each layer,” a Google blog explaining the approach stated.

The end goal isn’t so much to understand the mysterious brain of a super-intelligent being as it is to make these programs work better at a very basic level. Deep Dream itself revealed that computers can have some pretty messed up (not to mention incorrect) ideas about what everyday objects look like.

But the fact remains that deep learning is arguably the most effective form of machine learning we’ve developed to date, and the tech industry knows it. That’s why the technology is already being used, fine-grain understanding be damned.

I called Selmer Bringsjord, computer scientist and chair of the Department of Cognitive Science at Rensselaer Polytechnic Institute, to hear his thoughts on the matter. He told me that all of this means one thing:

“We are heading into a black future, full of black boxes.”

Image: Google Research

Talk of trusting black boxes isn’t very helpful when algorithms are already disproportionately targeting racialized people as being at risk for criminal recidivism. A recent paper also found examples of racist and sexist language in a large, and massively popular, machine learning dataset.

How do we make sure robots don’t turn homicidal or racist? A good first step might be, as the EU’s algorithmic discrimination law suggests, keeping a human in the decision-making loop. But, of course, humans aren’t guaranteed to be free of bias, either, and in some cases—a self-driving car, for example—having a human calling the shots may be impossible or undesirable.

According to Bengio, we should be selective about the data these systems vacuum up. For example, we could ensure a neural network doesn’t have access to Mein Kampf. Instead, we could make computers read The Giving Tree or some W.E.B. Du Bois.

“Once we know the problem, say, we don’t want a machine to rely on certain kinds of information, we can actually train it to ignore that information,” said Bengio.
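The bluntest version of that idea is to withhold the sensitive information before training even begins, as in the hypothetical sketch below (the column layout and numbers are invented). Bengio’s point goes further, since a network can also be trained to actively ignore signals it does see, but the basic move is the same: decide up front what the model is allowed to rely on.

```python
# Withholding a sensitive attribute from a model's training data (toy example).
import numpy as np

# Hypothetical applicant records: [income, debt, sensitive_attribute]
X = np.array([
    [54_000.0, 3_000.0, 1.0],
    [38_000.0, 9_500.0, 0.0],
    [72_000.0, 1_200.0, 1.0],
    [41_000.0, 7_800.0, 0.0],
])

SENSITIVE_COLUMNS = {2}                       # what the model must not rely on
keep = [i for i in range(X.shape[1]) if i not in SENSITIVE_COLUMNS]
X_for_training = X[:, keep]                   # only this view is ever shown to the model

print(X_for_training)                         # income and debt remain; the sensitive column is gone
```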

In other words, banning the underlying technology of deep learning and advanced AI is pointless, but the technology can be directed to benefit people. Doing so requires political will, rather than simply letting corporations chase profits, Bengio added.

Whatever these systems are used for—killing or accounting—and however we corral them, we still won’t understand them. In a future that will be driven by deep learning processes, this makes any “right to explanation” for a decision made by an AI nigh impossible.

The question now is what we’re going to do about it, as the modern tech industry continues to proliferate a technology we don’t fully understand for the simple reason that it works.

We’re either going to have to learn to trust these systems, as Bengio suggests, like we would any other human being, or we can attempt to control them. Because when disaster emerges from the rote functioning of invisible algorithms, we may be left empty-handed when we demand an explanation.

An abbreviated version of this article appears in the May issue of VICE Magazine.