If AI Thinks George Washington is a Black Woman, Why Are We Letting it Pick Bomb Targets?
source: lewrockwell
Google Gemini’s ridiculous image generator got all of the headlines in the last two weeks, but a more important AI announcement went mostly unnoticed
By Matt Taibbi
Racket News
March 6, 2024
After yesterday’s Racket story about misadventures with Google’s creepy new AI product, Gemini, I got a note about a Bloomberg story from earlier this week. From “US Used AI to Help Find Middle East Targets for Airstrikes”:
The US used artificial intelligence to identify targets hit by air strikes in the Middle East this month, a defense official said, revealing growing military use of the technology for combat… Machine learning algorithms that can teach themselves to identify objects helped to narrow down targets for more than 85 US air strikes on Feb. 2…
The U.S. formally admitting to using AI to target human beings was a first of sorts, but Google’s decision to release a moronic image generator that mass-produces black Popes and Chinese founding fathers was the story that garnered the ink and outrage. The irony is the military tale is equally frightening, and related in unsettling ways:
Bloomberg quoted Schuyler Moore, Chief Technology Officer for U.S. Central Command. She described using AI to identify bombing targets in Iraq and Syria, in apparent retaliation for a January 28th attack in Jordan that killed three U.S. troops and injured 34. According to Moore, it was last year’s Hamas attack that sent the Pentagon over the edge into a willingness to deploy Project Maven, in which AI helps the military identify targets using data from satellites, drones, and other sources.
“October 7th, everything changed,” she said. “We immediately shifted into high gear and a much higher operational tempo than we had previously.”
The idea that the U.S. was so emotionally overcome on October 7th that it had to activate Project Maven seems bizarre at best. The Pentagon has boasted for years about deploying AI, from sending Switchblade drones to Ukraine that are “capable of identifying targets using algorithms,” to the “Replicator” initiative launched with a goal of hitting “1000 targets in 24 hours,” to talk of deploying a “Vast AI Fleet” to counter alleged Chinese AI capability. Nonetheless, it’s rare for someone like Moore to come out and announce that a series of recent air strikes was picked, at least in part, by algorithms that “teach themselves to identify” objects.
Project Maven made headlines in 2018 when, in a rare (but temporary) attack of conscience, Google executives announced they would not renew the firm’s first major Pentagon contract. 4,000 employees signed a group letter that now seems quaint, arguing that building technology to assist the U.S. government in “military surveillance” was “not acceptable.” Employees implored CEO Sundar Pichai to see that “Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.”
But the firm’s kvetching about Don’t Be Evil and squeamishness about cooperating with the Pentagon didn’t last long. It soon began bidding for more DoD work, winning a contract to provide cloud security for the Defense Innovation Unit and a piece of a multibillion-dollar CIA cloud contract, among other things. Six years after its employees denounced building military surveillance and targeting tools as “unacceptable,” former Google CEO and Alphabet executive chairman Eric Schmidt has chaired the Defense Innovation Board, and through efforts like Project Nimbus, as one former exec puts it, Google is essentially helping military forces like Israel’s IDF develop their own AI programs.
The military dresses up justification for programs like Maven in many ways, but if you read between the lines of its own reports, the Pentagon is essentially chasing its own data tail. The sheer quantity of data the armed forces began generating after 9/11 through raids of homes (in Iraq, by the hundreds) and programs like full-motion video (FMV) from drones overwhelmed human analysis. As General Richard Clarke and Fletcher professor Richard Schultz put it, in an essay about Project Maven for West Point’s Modern War Institute:
[Drones] “sent back over 327,000 hours (or 37 years) of FMV footage.” By 2017, it was estimated that the video US Central Command collected that year could amount to “325,000 feature films [approximately 700,000 hours or eighty years].”
The authors added that the “intelligence simply became snowed under by data,” which to them meant “too much real-time intelligence was not being exploited.” This led to the second point: as Google’s former CEO Schmidt put it in 2020, when commenting on the firm’s by-then-fully-revived partnership with the Pentagon, “The way to understand the military is that soldiers spend a great deal of time looking at screens.”
Believing this was not an optimal use of soldier time, executives like Schmidt and military brass began pushing for more automated analysis. Even though early AI programs were “rudimentary with many false detections,” with accuracy “only around 50 percent” and even “the difference between men, women, and children” proving challenging, they plowed ahead.
Without disclosing how accuracy has improved since Maven’s early days, Schultz and Clarke explained the military is now planning a “range of AI/ML applications” to drive “increased efficiency, cost savings, and lethality,” and a larger goal:
To prepare DoD as an institution for future wars—a transformation from a hardware-centric organization to one in which AI and ML software provides timely, relevant mission-oriented data to enable intelligence-driven decisions at speed and scale.
This plan, like so many other things that emerge from Pentagon bureaucracy, is a massive self-licking ice cream cone.
Defense leaders first push to make and deploy ever-increasing quantities of flying data-gathering machines, which in turn capture gazillions of feature films’ worth of surveillance footage per year. As the digital haul from the robot fleet grows, military leaders claim they’re forced to finance AI programs that can identify the things worth shooting at in these mountains of footage, to avoid the horror of opportunities “not being exploited.” This has the advantage of being self-fulfilling in its logic: as we shoot at more targets, we create more “exploding insurgencies,” as Clarke and Schultz put it, in turn creating more targets, and on and on.