China’s military lab AI connects to commercial large language models for the first time to learn more about humans

Meanwhile, one computer scientist has voiced concerns over the move, saying that unless it is handled carefully, it could lead to a situation similar to that depicted in the Terminator films.


How does China’s AI stack up against ChatGPT?


The project was detailed in a peer-reviewed paper published in December 2023 in the Chinese academic journal Command Control & Simulation. In the paper, project scientist Sun Yifeng and his team from the PLA’s Information Engineering University wrote that both humans and machines could benefit from the project.

“The simulation results assist human decision-making … and can be used to refine the machine’s combat knowledge reserve and further improve the machine’s combat cognition level,” they wrote.

This is the first time the Chinese military has publicly confirmed its use of commercial large language models. For security reasons, military information facilities are generally not directly connected to civilian networks. Sun’s team did not give details in the paper of the link between the two systems, but stressed that this work was preliminary and for research purposes.

Sun and his colleagues said their goal was to make military AI more “humanlike”, better understanding the intentions of commanders at all levels and more adept at communicating with humans.

Most existing military AI is based on traditional war-gaming systems. Although their abilities have progressed rapidly, users often find they feel more like machines than living beings.

And when facing cunning and unpredictable human enemies, machines can be deceived. However, commercial large language models, which have studied almost all aspects of society, including literary works, news reports and historical documents, may help military AI gain a deeper understanding of people.

While the researchers have said their work can benefit both machines and humans, one computer scientist not linked to the project has warned caution is needed to avoid a Terminator-like situation. Photo: Paramount Pictures

In the paper, Sun’s team discussed one of their experiments that simulated a US military invasion of Libya in 2011. The military AI provided information about the weapons and deployment of both armies to Ernie. After several rounds of dialogue, Ernie successfully predicted the next move of the US military.

Sun’s team claimed that such predictions could compensate for human weaknesses. “As the highest form of life, humans are not perfect in cognition and often have persistent beliefs, also known as biases,” Sun’s team wrote in the paper. “This can lead to situations of overestimating or underestimating threats on the battlefield. Machine-assisted human situational awareness has become an important development direction.”

Sun’s team also said there were still some issues in the communication between military and commercial models, as the latter were not specifically developed for warfare. For instance, Ernie’s forecasts are sometimes vague, giving only a broad outline of attack strategies without the specifics that military commanders need.

In response, the team experimented with multi-modal communication methods. One such approach involved military AI creating a detailed military map, which was then given to iFlyTek’s Spark for deeper analysis. Researchers found that this illustrative approach significantly improved the performance of the large language models, enabling them to produce analysis reports and predictions that met practical application requirements.

Sun acknowledged in the paper that what his team disclosed was only the tip of the iceberg of this ambitious project. Some important experiments, such as how military and commercial models can learn from past failures and mutually acquire new knowledge and skills, remain shrouded in secrecy.


China is not the only country conducting such research. Many generals from various US military branches have publicly expressed interest in ChatGPT and similar technologies, and have tasked military research institutions and defence contractors with exploring possible applications of generative AI in US military operations, such as intelligence analysis, psychological warfare, drone control and communication code decryption.

But a Beijing-based computer scientist warned that while the military application of AI was inevitable, it warranted extreme caution.

The present generation of large language models is more powerful and sophisticated than ever, posing potential risks if given unrestricted access to military networks and confidential equipment knowledge, said the scientist, who requested anonymity because of the sensitivity of the issue.

“We must tread carefully. Otherwise, the scenario depicted in the Terminator movies may really come true,” he said.

South China Morning Post
