Succeeding in this area would mean targeting three main layers of an opponent’s AI processing: its data, its algorithms and its computing capacity.
The article suggested that one possibility was to “pollute” data, or change the distribution of data during warfare, to make an adversary’s language models less accurate. Methods such as camouflaging weapons and feeding fake data onto networks – which the article said foreign militaries had already adopted – would mislead the learning process of an enemy’s models.
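The “pollution” the article describes is essentially what machine-learning research calls data poisoning. As a conceptual sketch only (the toy data, the nearest-centroid classifier and the injected outliers are all illustrative assumptions, not anything from the article), the snippet below shows how mislabelled samples slipped into a training feed can degrade a model trained on it:

```python
import random

random.seed(0)

def make_data(n):
    """Two 1-D clusters: class 0 centred at 0.0, class 1 centred at 2.0."""
    return ([(random.gauss(0.0, 0.5), 0) for _ in range(n)] +
            [(random.gauss(2.0, 0.5), 1) for _ in range(n)])

def centroids(data):
    # Mean position of each class in the training data.
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}

def predict(cent, x):
    # Nearest-centroid rule: assign the class whose centroid is closest.
    return min(cent, key=lambda c: abs(x - cent[c]))

def accuracy(cent, data):
    return sum(predict(cent, x) == y for x, y in data) / len(data)

train = make_data(200)
test = make_data(200)

# "Pollute" the training feed: inject mislabelled outliers that drag
# the learned class-1 centroid far from where class 1 actually lives.
poison = [(-10.0, 1) for _ in range(100)]

acc_clean = accuracy(centroids(train), test)
acc_poisoned = accuracy(centroids(train + poison), test)
```

Trained on the clean feed, the classifier separates the clusters well; trained on the polluted feed, its accuracy collapses, even though the genuine samples are untouched.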
Another option was “logical deception” of algorithms, in which one side operates outside the learning scope, or against the logic, of the adversary’s models. In physical combat operations this could mean directing drone swarms to perform irregular manoeuvres so that foreign forces cannot learn their routes and formations.
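The idea behind such irregular manoeuvres is to deny an observer any learnable pattern. A minimal sketch of that principle (the paths, the re-randomised headings and the adversary’s extrapolation model are all hypothetical assumptions for illustration) compares how well a simple predictor anticipates a predictable route versus a randomised one:

```python
import math
import random

random.seed(1)

def straight_path(steps):
    # Predictable route: constant heading, constant speed.
    x = y = 0.0
    path = []
    for _ in range(steps):
        x += 1.0
        y += 0.5
        path.append((x, y))
    return path

def irregular_path(steps):
    # Irregular manoeuvre: heading re-randomised at every step,
    # so past motion carries no information about the next move.
    x = y = 0.0
    path = []
    for _ in range(steps):
        h = random.uniform(0.0, 2.0 * math.pi)
        x += math.cos(h)
        y += math.sin(h)
        path.append((x, y))
    return path

def extrapolation_error(path):
    # The observing adversary predicts each next position by
    # repeating the last observed displacement.
    errs = []
    for a, b, c in zip(path, path[1:], path[2:]):
        pred = (2 * b[0] - a[0], 2 * b[1] - a[1])
        errs.append(math.dist(pred, c))
    return sum(errs) / len(errs)

err_straight = extrapolation_error(straight_path(200))
err_irregular = extrapolation_error(irregular_path(200))
```

Against the straight route the extrapolator is essentially perfect; against the randomised route its average error stays large no matter how long it observes, which is the effect “logical deception” aims for.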
The People’s Liberation Army has long discussed the idea of such “deceptive warfare”.