The Next Fear on A.I.: Hollywood’s Killer Robots Become the Military’s Tools

The assassination of Mohsen Fakhrizadeh, Iran's top nuclear scientist, was conducted by Israel's Mossad using an autonomous, A.I.-assisted machine gun mounted in a pickup truck, though there appears to have been a high degree of remote control. Russia said recently that it has begun to manufacture, but has not yet deployed, its undersea Poseidon nuclear torpedo. If it lives up to the Russian hype, the torpedo would be able to travel across an ocean autonomously, evading existing missile defenses, to deliver a nuclear weapon days after it is launched.

So far there are no treaties or international agreements that deal with such autonomous weapons. In an era when arms control agreements are being abandoned faster than they are being negotiated, there is little prospect of such an accord. But the kinds of challenges raised by ChatGPT and its ilk are different, and in some ways more complicated.

In the military, A.I.-infused systems can speed up the tempo of battlefield decisions to such a degree that they create entirely new risks of accidental strikes, or of decisions made on misleading or deliberately false alerts of incoming attacks.

“A core problem with A.I. in the military and in national security is how do you defend against attacks that are faster than human decision-making,” Mr. Schmidt said. “And I think that issue is unresolved. In other words, the missile is coming in so fast that there has to be an automatic response. What happens if it’s a false signal?”

The Cold War was littered with stories of false warnings — once because a training tape, meant to be used for practicing nuclear response, was somehow put into the wrong system and set off an alert of a massive incoming Soviet attack. (Good judgment led to everyone standing down.) Paul Scharre, of the Center for a New American Security, noted in his 2018 book “Army of None” that there were “at least 13 near use nuclear incidents from 1962 to 2002,” which “lends credence to the view that near miss incidents are normal, if terrifying, conditions of nuclear weapons.”

For that reason, when tensions between the superpowers were far lower than they are today, a series of presidents tried to negotiate building more time into nuclear decision-making on all sides, so that no one rushed into conflict. But generative A.I. threatens to push countries in the other direction, toward faster decision-making.

The good news is that the major powers are likely to be careful — because they know what the response from an adversary would look like. But so far there are no agreed-upon rules.

NYT
