In Dr. Strangelove, Stanley Kubrick's cult 1964 film, events beyond the control of US and Soviet leaders spiral into apocalypse – the Cuban Missile Crisis with a different ending.
As a third revolution in warfare emerges – after gunpowder and nuclear weapons – experts and campaigners fear that the weaponisation of artificial intelligence, and the drive to develop lethal autonomous weapons systems (Laws), could bring the world closer to such a state than ever before.
“If the current developments of autonomous weapons by the high tech nations are not stopped, then global security could be dramatically destabilised,” Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, tells me. “An accidental skirmish between enemy robots on a border or in international waters could become a war that humans could neither comprehend nor prevent. A conflict could be over in minutes before anyone could stop it, and with mass devastation and loss of life.”
Various precursor semi-autonomous weapons already exist, such as the mechanised sentry guns in the Korean demilitarised zone that can automatically shoot interlopers. But an arms race for Laws – also known as killer robots – is under way, meaning that at some point in the near future the major nations of the world will field swarms of fully autonomous fighter jets and tanks, among other weapons systems, all with little or no human control.
Governments are meeting at the UN this week to discuss whether and how to regulate Laws, and campaigners say time is running out for meaningful action.
“To avoid a future where killer robots, not humans, call the shots, governments need to act now,” says Mary Wareham of Human Rights Watch, coordinator of the Campaign to Stop Killer Robots. “Governments should move to negotiate an international treaty banning fully autonomous weapons. Any lesser measures will be doomed to failure.”
A number of countries have expressed their support for a total ban, including Austria, which on Monday became the first EU state to join their ranks. But a preemptive ban has so far eluded campaigners, despite Elon Musk leading calls from experts for an outright prohibition, and a dozen countries admit to developing Laws.
Autonomy is a self-professed cornerstone of the USA’s future military strategy, and has driven a steep increase in global spending on robotics, from $91.5 billion in 2016 to a projected $188 billion in 2020, as Russia, China and others up their investment so they don’t fall behind. The US is developing the X-47B, a prototype tailless, unmanned aircraft – with ten times the range of a normal F-35 fighter jet – that will be able to take off in all weather conditions, fly in swarms and refuel in mid-air.
It has been specifically designed for use in the Pacific, following a blistering report on US military capability which found that the US has fallen behind China, whose hypersonic missiles could provide it with a decisive advantage in any future war in the South China Sea.
“While admittedly futuristic in vision, one can conceive of scenarios where UUVs sense, track, identify, target and destroy an enemy – all autonomously,” reads a US Department of Defense report. It “envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure”.
The secrets behind Chinese weapons are generally more closely guarded, but China recently exhibited some of its wares at the International Defence Exhibition and Conference in Abu Dhabi, one of the world’s largest arms fairs. The emerging superpower has developed the CH-5, an unmanned combat aerial vehicle that can fly for 60 hours straight and will soon be able to fly continuously for 12,000 miles. Commentators say that breakthroughs in Chinese artificial intelligence could see it operating as part of an autonomous drone swarm in the future.
Russia is also pressing ahead with research and development of Laws. “Whoever leads in AI will rule the world,” said Vladimir Putin last year. “Artificial intelligence is the future, not only for Russia, but for all humankind.”
Kalashnikov, the producer of the AK-74 assault rifle, has developed a fully automated piece of light artillery that uses neural network technologies to identify targets and make decisions. It has also begun serial production of a noiseless drone, which could be weaponised in the future.
The UK is also developing precursor weapons systems that possess autonomy, and is among a number of countries with hi-tech militaries opposed to a preemptive ban, despite Theresa May’s recent proclamation that she wants the UK to be the leader in ethical AI.
The UK’s Taranis combat aircraft is designed to strike distant targets “even in another continent”, and while the Ministry of Defence has stated that humans will remain in the loop, Human Rights Watch say the Taranis exemplifies the move toward increased autonomy.
These weapons can also be hacked, and can behave unpredictably when confronted with unanticipated circumstances; experts point to the failures of the Patriot missile defence system in the First Gulf War as evidence that all sorts of weapons can malfunction in unexpected ways.
“Autonomous weapons systems are entirely computer controlled, with all of the problems inherent in using computers,” continues Sharkey. “The problem is when everybody, or a number of states, has them, we cannot know how they will interact with one another. These are unknown algorithms fighting each other, and that very fact makes them unpredictable and against the laws of war.”