The U.S. Air Force (USAF) announced on Wednesday that an artificial intelligence (AI) system had been used successfully during a test flight as part of a simulated mission. The AI algorithm, called ARTUµ, controlled and directed the radar of a U-2S Dragon Lady spy plane, a veteran Lockheed Martin reconnaissance aircraft.
According to the USAF, the aircraft took off from Beale Air Force Base in California on Monday for a simulated reconnaissance mission. ARTUµ’s task was to find possible surface-to-air missile launchers while the pilot searched for enemy aircraft, sharing the same radar system.
“ARTUµ’s groundbreaking flight culminates our three-year journey to becoming a digital force,” said Dr. William Roper, assistant secretary of the Air Force for acquisition, technology and logistics. “Putting AI safely in command of a U.S. military system for the first time ushers in a new age of human-machine teaming and algorithmic competition. Failing to realize AI’s full potential will mean ceding decision advantage to our adversaries.”
“We know that in order to fight and win in a future conflict with a peer adversary, we must have a decisive digital advantage,” said Air Force Chief of Staff Gen. Charles Q. Brown, Jr.
After takeoff, radar control was transferred to the algorithm, which then operated the equipment by drawing on what it had learned over more than half a million computer-simulated training interactions.
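The idea of an agent learning a task purely from simulated interactions, rather than from hand-coded rules, can be illustrated with a toy reinforcement-learning loop. The sketch below is not the Air Force's code and makes no claim about how ARTUµ actually works; it is a minimal, hypothetical example in which an agent learns, over many simulated episodes, to move along a one-dimensional "search corridor" toward a target cell.

```python
import random

# Illustrative only: tabular Q-learning on a toy 1-D "search corridor".
# The agent starts at cell 0 and must learn to reach the target at cell 9,
# discovering the policy purely through simulated trial and error.

N_CELLS = 10          # positions 0..9; target at cell 9
ACTIONS = (-1, +1)    # step left or step right
EPISODES = 500        # number of simulated training episodes

def train(seed=0):
    rng = random.Random(seed)
    # Q-table: estimated return for each (cell, action) pair
    q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
    alpha, gamma, eps = 0.5, 0.9, 0.1
    for _ in range(EPISODES):
        s = 0
        while s != N_CELLS - 1:
            # epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_CELLS - 1)
            # small step cost, reward on reaching the target
            r = 1.0 if s2 == N_CELLS - 1 else -0.01
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

def greedy_path_length(q):
    # Follow the learned policy greedily and count the steps to the target.
    s, steps = 0, 0
    while s != N_CELLS - 1 and steps < 100:
        a = max(ACTIONS, key=lambda act: q[(s, act)])
        s = min(max(s + a, 0), N_CELLS - 1)
        steps += 1
    return steps

q = train()
print(greedy_path_length(q))  # 9: the learned policy heads straight for the target
```

After training, the greedy policy reaches the target in the minimum nine steps. The real system trained over hundreds of thousands of far richer simulated interactions, but the underlying principle is the same: behavior is learned from simulated experience, not programmed explicitly.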
µZero computer program
It took just two months between updating the on-board software and the first test flight with the U-2. To get the algorithm to take over functions normally performed by a human, however, the Air Force laboratory relied on essential help from the µZero program, famous for being able to master chess and other advanced games without prior knowledge of their rules.
The Pentagon considers the use of artificial intelligence in defense systems crucial to keeping its forces at the forefront in the face of military advances in Russia and, above all, China. The goal is for these algorithms not only to assist pilots but also to make decisions autonomously, even without crew input. It remains to be seen whether such programs can grasp the ethics of armed conflict.