On Thursday, Spot maker Boston Dynamics and other robotics companies signed an open letter pledging not to support the weaponization of their robots or software.
“We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues,” the companies said in the letter.
The companies pledge not to weaponize their robots or the software they run, and not to support others who do so. The letter also states that "when possible" these firms will "carefully review" customer applications to guard against weaponization, alongside a pledge to "explore the development" of ways to minimize those risks.
There has never been a better time to oppose the weaponization of robots. Grenade-dropping commercial drones already exist, and Boston Dynamics-style robot dogs have been spotted with submachine guns and RPG launchers strapped to their backs.
And yet, in the same paragraph where the companies explicitly make this pledge and promise to help mitigate risks, they hedge their position: “To be clear, we are not taking issue with existing technologies that nations and their government agencies use to defend themselves and uphold their laws.” While companies promising to do their best to prevent their robots—which may be sold as rescue units, for example—from becoming weapons is encouraging, these existing technologies are a key part of the problem.
Even if a robot isn't weaponized itself, it can still help kill people. Spot has already been tested by the French Army for a reconnaissance role in combat missions. Boston Dynamics has also worked with the Defense Advanced Research Projects Agency (DARPA) to develop Atlas, a humanoid robot that, for now, is designed for disaster response. It's not hard to see how either robot could be developed in ways that lead to breakthroughs in combat-oriented applications just shy of being "weaponized."
"AI-powered turrets" firing sponge-tipped bullets are deployed to help preserve Israeli apartheid. Palantir's deportation machine is used to target and terrorize migrants. Yemeni families are destroyed time and time again by drone assassinations. The states and agencies that deploy these technologies invoke self-defense or law and order, but that doesn't erase the immorality or barbarity behind them. Claiming the moral high ground while signing off on these technologies puts one on shaky footing from the start.
“Boston Dynamics has made a broader commitment not to weaponize any of our robots, period. This isn’t a new position for us—it’s clearly stated in both our ethical principles, as well as in our terms & conditions of sale, which state that our robots must be used in compliance with the law, cannot be used to harm or intimidate people or animals, and must not be used as a weapon or configured to hold a weapon,” a Boston Dynamics spokesperson told Motherboard. “We also work closely with our customers and partners to ensure they understand our stance on this important issue, and we would take appropriate action to mitigate any misuse in violation of those terms.”
A pledge to halt the weaponization of robots is welcome, at the very least, but it leaves an uncomfortable number of questions open, particularly around how manufacturers will ensure end users don't violate that commitment.
It also raises the question of whether companies beyond the signatories will honor similar commitments. As we've already seen, robotics companies other than Boston Dynamics can build robot dogs, and some have no qualms about turning them into killing machines.