DARPA Successfully Tests AI “Swarming Drones” That Can Make Battlefield “Decisions”

November 27, 2018 in News by RBN Staff


Source: Zero Hedge | by Tyler Durden 

In a report that leaves us thinking that we are all a big step closer to Skynet coming online, the Defense Advanced Research Projects Agency (who else?) has announced it’s breaking new ground in the area of “highly autonomous” and “deeply interconnected drones, jets, ships” which can coordinate strikes and recalibrate changing mission parameters independent of real-time or constant human input.

Artist’s rendition of the CODE program, via DARPA

What could go wrong? Yet apparently the only problem that a Defense One report sees with the DARPA/Pentagon-funded project is that the system, which includes swarming UAVs, could be hacked by the enemy.

According to the report:

But this massive, coordinated strike across air, land, sea and cyberspace is sure to run headfirst into electronic warfare defenses designed to disrupt the networks that make it possible.

DARPA announced that it successfully tested both live and virtual drones capable of “high degrees of autonomy” while under heavy electronic attack to see if they were capable of conducting coordinated missions. The series of simulations was conducted at Arizona’s Yuma Proving Ground last week, according to a DARPA statement.

The DARPA announcement reads: “The [unmanned aerial systems] efficiently shared information, cooperatively planned and allocated mission objectives, made coordinated tactical decisions, and collaboratively reacted to a dynamic, high-threat environment with minimal communication.”

The DoD has long funded experimental programs — the most significant ones through DARPA — involving the implementation of artificial intelligence (AI) and machine learning on the battlefield. But the Achilles’ heel of such experimental AI-related communications technology has been the possibility that electronic countermeasures would disrupt the systems.

The DARPA “CODE” program involves swarming UAVs that could use AI to communicate and coordinate even when sophisticated electronic countermeasures are deployed by the enemy.

The Yuma tests mark the first time the military has successfully demonstrated that AI could find “small areas of spectrum to allow for short bursts of essential communications between military assets—little windows where just enough data can get through to allow for all the components to work together,” according to Defense One.
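To make the idea concrete, here is a minimal, purely hypothetical sketch of what “finding a quiet window” could look like in principle: given simulated interference readings across candidate frequency bands, pick the least-jammed one for a short communication burst. The band names, readings, and function are invented for illustration; the actual method used by the CODE system is not public.

```python
# Hypothetical sketch (not the CODE system's actual logic): choose the
# least-jammed frequency band for a short communication burst.

def quietest_window(noise_by_band):
    """Return the band with the lowest measured interference level."""
    return min(noise_by_band, key=noise_by_band.get)

# Simulated interference readings (dBm) across candidate bands under jamming.
readings = {"band_a": -42.0, "band_b": -71.5, "band_c": -55.3}

print(quietest_window(readings))  # band_b has the lowest noise floor
```

A real system would of course have to sense the spectrum continuously and re-select windows as jamming shifts; this sketch only illustrates the selection step.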

Scott Wierzbanowski, the program manager overseeing the tests, known as the Collaborative Operations in Denied Environment, or CODE, program, said, “The demonstrated behaviors are the building blocks for an autonomous team that can collaborate and adjust to mission requirements and a changing environment.”

Specifically, the test used six live drones alongside 24 virtual, or simulated, drones, which collaborated with one another amid electronic countermeasure attacks and the disabling of their GPS systems. The drones were able to adjust and successfully hit designated targets even as battlefield variables changed, according to the report.

The report concludes that the tests ultimately show that “the U.S. military is well on its way to delegating a lot more decisions to smart weapons on the battlefield” and that this “will have pros and cons, some more foreseeable than others.”

Indeed, it is precisely those unforeseeable consequences that could prove far more costly than anything the report cares to explore.