US-China AI race to develop killer robots risks catastrophe, warn peace campaigners

The world’s leading military powers have heavily invested in AI and are seeking to develop autonomous weapons. — SCMP

An artificial intelligence arms race between major powers such as China, Russia and the US could lead to catastrophe for humanity, according to a peace campaign organisation.

The report, by the Dutch-based organisation Pax, called for a pre-emptive ban on so-called killer robots and warned that the development of the technology could leave algorithms deciding whether people would live or die.

“An AI arms race … is more likely to be a no-win situation … (it) would be destabilising and increase the chances of conflict. It would have negative economic, political and societal impacts,” said the report, after examining the AI push in seven nations including the United States, China and Russia.

It said the development of lethal autonomous weapons – designed to shorten reaction times and make strikes more precise – was curtailing the decision-making process regarding the use of force and increasing the risk of mass casualties.

The world’s leading military powers have heavily invested in AI and are seeking to develop autonomous weapons like drones or submarines, which can detect a target enemy and automatically attack.

The US is making AI a priority and set up a Joint Artificial Intelligence Centre last year to oversee the development of the technology by its service branches and defence agencies.

Meanwhile China has set up two major research organisations focused on AI and unmanned systems in its race to keep up with the US.

Articles by PLA personnel suggest Beijing is researching an “algorithm game” which will help predict what happens on the battlefield.

Russia is also prioritising AI, the report said, and its road map for developing the technology is expected to be released within a matter of months.

Daan Kayser, the project leader on autonomous weapons at Pax, urged the international community to establish clear international rules regulating the use of lethal autonomous weapons.

“We are looking at a near-future where AI-enabled weapons take over human roles, selecting and attacking targets on their own.

“Without clear international rules, we may enter an era where algorithms, not people, decide over life and death,” said Kayser.

Frank Slijper, a co-author of the report, called for international cooperation and transparency.

“Transparency will bring security and help prevent this arms race from proceeding,” Slijper said.

Currently, the US is one of the few countries to have a specific policy on lethal autonomous weapon systems designed to reduce the risk that killer robots will attack the wrong targets.

In 2012, the Pentagon ordered that “semi-autonomous weapon systems that are on-board or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorised human operator”. – South China Morning Post