
Could AI Actually Harm the US Air Force?

AI may win dogfights in simulations, but in the fog of war, America still needs pilots.

The US Air Force has been clear about its ambitions to integrate artificial intelligence into the cockpit. The most obvious example is the development of “loyal wingman” drones, designed to fly alongside the Lockheed Martin F-35 Lightning II and the forthcoming Next Generation Air Dominance (NGAD) fighter. The wingman drones, known as Collaborative Combat Aircraft (CCA), will be AI-guided, autonomous machines, a clear progression toward a seemingly inevitable conclusion: the end of manned fighters. The logic behind the progression is straightforward enough: AI can process information faster than humans, making split-second targeting decisions regardless of how complex the battle space becomes. But the wholesale outsourcing of human judgment in a domain as instinctual and subjective as air combat is reckless.

A Historical Warning

Artificial intelligence excels in controlled simulations, yes, making the Air Force’s interest understandable. Modern fighters produce torrents of information that a pilot must internalize rapidly before making immediate decisions, at supersonic speeds, under fire, and under shifting rules of engagement. Advocates argue that AI can filter and prioritize that information with a speed and accuracy humans are biologically incapable of matching. That may be true. But removing human agency from consequential military decisions is shortsighted.

Consider the lessons of the Cold War, when nuclear early warning systems occasionally generated false alarms triggered by innocuous everyday events, such as migrating geese or a rising full moon. Fortunately, human operators were always able to override the faulty warnings before a false alarm could set a nuclear holocaust in motion.

Would an AI-guided wingman suffer from similar false alarms? Would a human operator be able to intervene when the wrong decision was made? Not likely. The wingman concept operates on a much faster decision loop than a nuclear early warning system. A wingman drone acting on probabilistic logic could commit to action before a human can intervene, heightening the risk of unintended escalation.

Further, the use of AI blunts a distinctly American advantage: better pilots. The United States has long benefited from a superior pilot training pipeline; US pilots excel in combat thanks to superior skill and improvisation relative to their adversaries. Replacing the pilot with AI may seem like an upgrade in the short term, but over time two problems emerge. One, the pilot corps atrophies completely. Two, adversaries race to field their own AI-pilot systems, and those systems stand a far better chance of reaching parity with US AI than adversary pilots ever had of reaching parity with US pilots, negating the American aerial advantage.

Humans chronically make mistakes, and AI integration is a seductive way to bypass human error. But AI, if offered as a direct one-to-one substitute for the human pilot, risks dependency on a brittle system vulnerable to both adversary manipulation and technical malfunction. AI does have a place in the cockpit; it should be explored as a way to augment human operators and make them even better relative to the competition. But the United States should be cautious before plunging irreversibly into an AI-infused air corps. AI may win dogfights in simulations, but in the fog of war, America still needs pilots.

About the Author: Harrison Kass

Harrison Kass is a senior defense and national security writer at The National Interest. Kass is an attorney and former political candidate who joined the US Air Force as a pilot trainee before being medically discharged. He focuses on military strategy, aerospace, and global security affairs. He holds a JD from the University of Oregon and a master’s in Global Journalism and International Relations from NYU.

Image: DVIDS.
