Nick Bostrom.
Nick Bostrom’s job is to dream up increasingly lurid scenarios that could wipe out the human race: Asteroid strikes; high-energy physics experiments that go wrong; global plagues of genetically-modified superbugs; the emergence of all-powerful computers with scant regard for human life—that sort of thing.
In the hierarchy of risk categories, Bostrom’s specialty stands above mere catastrophic risks like climate change, financial market collapse and conventional warfare.
As the Director of the Future of Humanity Institute at the University of Oxford, Bostrom is part of a small but growing network of snappily-named institutions tackling these “existential risks”: the Centre for the Study of Existential Risk at the University of Cambridge; the Future of Life Institute in Boston; and the Machine Intelligence Research Institute in Berkeley, California. Their tools are philosophy, physics and lots and lots of hard math.
Five years ago he started writing a book for the layman on a selection of existential risks, but quickly realized that the chapter dealing with the dangers of artificial intelligence development was getting fatter and fatter and deserved a book of its own. The result is Superintelligence: Paths, Dangers, Strategies. It makes compelling—if scary—reading.
The basic thesis is that developments in artificial intelligence will gather pace, so that within this century it’s conceivable we will achieve human-level machine intelligence (HLMI).
Once HLMI is reached, things move pretty quickly: Intelligent machines will be able to design even more intelligent machines, leading to what the mathematician I.J. Good, back in 1965, called an “intelligence explosion” that will leave human capabilities far behind. We get to relax, safe in the knowledge that the really hard work is being done by supercomputers we have brought into being.
An intelligence explosion.
All sound good? Not really, thanks to the “control” problem. Basically it’s a lot easier to build an artificial intelligence than it is to build one that respects what humans hold dear. As Bostrom says: “There is no reason to think that by default these powerful future machine intelligences would have any human-friendly goals.”
Which brings us to the gorillas. In terms of muscle, gorillas outperform humans. However, our human brains are slightly more sophisticated than theirs, and millennia of tool-making (sharp sticks, iron bars, guns, etc.) have compounded this advantage. Now the future of gorillas depends more on humans than on the gorillas themselves.
In his book, Bostrom argues that once superintelligence is reached, present and future humanity become the gorillas: stalked by a more powerful, more capable agent that sees nothing wrong with imprisoning these docile creatures or wrecking their natural environments as a means of achieving its aims.
“A failure to install the right kind of goals will lead to catastrophe,” says Bostrom. A superintelligent AI could rapidly outgrow the context it was initially designed for, slip the leash and adopt extreme measures to achieve its goals. As Bostrom puts it, there comes a pivot point: “when dumb, smarter is safer; when smart, smarter is more dangerous.”
Bostrom gives the example of a superintelligent AI in a paperclip factory whose top-level goal is to maximize the production of paperclips, and whose intelligence would enable it to acquire resources to increase its capabilities. “If your goal is to make as many paperclips as possible and you are a super-intelligent machine, you may predict that human beings might want to switch off this paperclip machine after a certain amount of paperclips have been made,” he says.
“So for this agent, it may be desirable to get rid of humans. It also would be desirable ultimately to use the material that humans use, including our bodies, our homes and our food to make paperclips.”
“Some of those arbitrary actions that improve paperclip production may involve the destruction of everything that we care about. The point that is actually quite difficult is specifying goals that would not have those consequences.”
Bostrom predicts that the development of a superintelligent AI will either be very good or catastrophically bad for the human race, with little in between.
It’s not all doom though. Bostrom’s contention is that humans have the decisive advantage: We get to make the first move. If we can develop a seed AI that ensures future superintelligences are aligned with human interests, all may be saved. Still, with this silver lining comes a cloud.
“We may only ever get one shot at this,” he says. Once a superintelligence is developed, it will be too sophisticated for us to control effectively.
How optimistic is Bostrom that the control problem can be solved? “It partly depends on how much we get our act together and how many of the cleverest people will work on this problem,” he says. “Part of it depends just how difficult this problem is, but that’s something we will not know until we have solved it. It looks really difficult. But whether it’s just very difficult or super-duper ultra difficult remains to be seen.”
So, across the world’s labs there must be hordes of dweebs chipping away at what Bostrom calls “the essential task of our time,” right? Not quite. “It’s hard to estimate how many exactly, but there’s probably about six people working on it [in the world] now.”
Perhaps this has something to do with the idea that working on an all-powerful AI was the preserve of mouth-breathing eccentrics. “A lot of academics were wary of entering a field where there were a lot of crackpots or crazies. The crackpot factor deterred a lot of people for a long time,” says Bostrom.
One who wasn’t deterred was Daniel Dewey, who left a job at Google to work with Bostrom at the FHI and Oxford University’s Martin School, lured by the prospect of tackling the AI control problem. “I still think that the best people to work with are in academia and non-profits, but that could be changing, as big companies like Google start to deeply consider the future of AI,” says Dewey.
The former Google staffer is optimistic that the altruistic nature of his former colleagues will trump any nefarious intentions connected with AI. “There’s a clear common good here. People in computer science generally want to improve the world as much as they can. There’s a real sense that science and engineering make the world a better place.”
Jaan Tallinn, a co-founder of Skype and co-founder of the CSER, has invested millions in funding research into the AI “control” problem, after his interest was piqued by the realization that, as he puts it, “the default outcome was not good for humans.”
The CSER at Cambridge.
For Tallinn, there’s an added urgency to making sure AI is controlled appropriately. “AI is a kind of meta-risk. If you manage to get AI right then it would help mitigate the other existential risks, whereas the reverse is not true. For example AI could amplify the risks associated with synthetic biology,” he says.
He maintains that we are not at the point where effective regulation can be introduced, but he points out that “these existential risks are fairly new.” Tallinn continues, “Once these topics get more acknowledged worldwide, people in technology companies may put in new kinds of policies to make these technologies safer.”
“The regulations around bio-hazard levels are a good example of off-the-shelf policies that you use if you are dealing with bio-hazards. [In the future] it’d be great to have that for AI.”
Jason Matheny, a programme manager at IARPA, part of the US Office of the Director of National Intelligence, agrees. “We need improved methods for assessing the risks of emerging technologies and the efficacy of safety measures,” he says.
To Matheny, the threat of superintelligence is far worse than any epidemic we have ever experienced. “Some risks that are especially difficult to control have three characteristics: autonomy, self-replication and self-modification. Infectious diseases have these characteristics, and have killed more people than any other class of events, including war. Some computer malware has these characteristics, and can do a lot of damage. But microbes and malware cannot intelligently self-modify, so countermeasures can catch up. A superintelligent system [as outlined by Bostrom] would be much harder to control if it were able to intelligently self-modify.”
Meanwhile, the quiet work of these half-dozen researchers in labs and study rooms across the globe continues. As Matheny puts it: “Existential risk [and superintelligence] is a neglected topic in both the scientific and governmental communities, but it’s hard to think of a topic more important than human survival.”
He quotes Carl Sagan, writing about the costs of nuclear war: “We are talking about [the loss of life of] some 500 trillion people yet to come. There are many other possible measures of the potential loss—including culture and science, the evolutionary history of the planet and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.”
And it all could come from clever computers. You’ve been warned.
Follow James Pallister on Twitter