
Trump’s AI Action Plan Doesn’t Go Far Enough on National Security

The Trump administration’s AI Action Plan seeks to curb lethal AI proliferation, but conflicting priorities on innovation, security, and open-source models risk undermining its effectiveness.

A year ago, we argued in The National Interest that artificial intelligence (AI)-powered autonomous weapons threaten to destabilize the international system. Technological and battlefield developments over the previous year have made the threat even more urgent. The need for US leadership to slow the spread of lethal AI is clearer now than ever. The Trump administration’s AI Action Plan is a welcome step in the right direction, though it suffers from internal tensions that could ultimately render it counterproductive.

The Double-Edged Sword of AI

By lowering the barriers to acquiring high levels of precision firepower and enabling automated kill chains, AI-powered autonomous weapons are rebalancing the battlefield, allowing smaller, poorer actors to better exploit asymmetric advantages. For instance, loitering munitions—first-person view (FPV) drones enhanced by AI to autonomously identify and engage targets even without GPS or a human in the loop—have allowed the Ukrainian military to partially close its firepower gap with Russia. Similarly, cheap drones, including loitering munitions, may end up reinforcing Taiwan’s daunting geography against a Chinese amphibious invasion.

But the digitization of warfare underlying this reordering is a double-edged sword. Low barriers to acquiring and operating precision firepower will empower dangerous actors such as terrorist organizations, criminal networks, and rogue states. Furthermore, actors of all kinds using AI-enabled autonomous weapons—but especially the low-capacity actors that benefit most from the AI weaponry revolution—are likely to instigate more unintended escalations.

A case in point: in late 2024, Russia had already used Iranian Shahed-136 drones to regain the upper hand on the frontlines and devastate Ukrainian civilian infrastructure. Iran, for its part, had begun using these drones to terrorize Israeli civilians.

Global Escalation

The international situation has only gotten more volatile since then.

Take the recent Russian drone incursion into Polish airspace. Let’s assume the incursion was a deliberate attempt to test the North Atlantic Treaty Organization’s (NATO) defensive coordination. The event could still have triggered a direct NATO-Russia conflict that neither side had counted on if the drones, following an autonomous kill chain under duress, had accidentally struck an airport and killed civilians.

Ukraine recently downed an AI-enhanced Shahed drone armed with a thermal imaging targeting system, a visible-spectrum camera, and an Nvidia Jetson AI processor, all of which allowed the drone to identify and acquire targets autonomously during its terminal phase. According to Sibylline, an intelligence analysis firm, organized crime groups across Europe are using drones to smuggle contraband, evade law enforcement, and carry out targeted attacks. Russia and China are increasingly relying on drones for espionage.

Meanwhile, European and Ukrainian defense companies have pioneered AI command-and-control software tools like AI Warfare Adaptive Swarm Platform (AI-WASP) and Mosaic UXS that enable autonomous coordination between loitering munitions in the air, land, and sea domains. Perhaps most concerningly, the Russo-Ukrainian war is likely accelerating the transfer of advanced drone technology to nefarious actors across the globe.

The Sudanese Rapid Support Forces (RSF)—a paramilitary group implicated in the Darfur genocide and now locked in a civil war with the country’s military dictatorship—temporarily disabled airports in the east of the country with drone strikes in May 2025, one year after it nearly assassinated the country’s leading general at a military graduation ceremony. In Myanmar, rebels have discovered ways to 3D print aerial drones en masse on the jungle frontlines of their civil war. And AI-enabled underwater drones could wreak havoc on maritime shipping and undersea telecommunications.

These developments only reinforce the need to slow the spread of algorithms that will make these systems more effective in the hands of hostile or reckless actors. 

The Promise of the AI Action Plan

Electronic warfare systems can only blunt the destabilizing effects of these weapons. Because of their limited scope, states are unlikely to deploy them successfully across their entire airspace, especially against unpredictable internal threats. Moreover, such tools will have little effect on ground-based and underwater drones and cannot fully disable the autonomous command-and-control infrastructure used to coordinate military assets.

The White House’s AI Action Plan contains some important approaches to this problem. It envisions an American-led “AI alliance,” in which allies agree to import American hardware, models, software, applications, and standards. The plan calls for the Department of Commerce (DOC) to facilitate export deals with countries that “meet US-approved security requirements and standards.”

This alliance would prevent the United States and its allies from exporting sensitive AI technology to adversaries like China, Russia, and Iran. It would also exclude countries with inadequate mechanisms for controlling the spread of sensitive technology. Because US firms dominate the AI market in both commercial and military applications, it would incentivize countries to harmonize safety standards with those of the United States.

The AI Action Plan also calls for aggressive export controls on compute hardware, including microchips that use US technology, which are crucial for training advanced AI models and for running them on end-users’ devices. It calls for the creation of a “technology diplomacy strategic plan” to align export control and other safety standards across the AI alliance.

These are positive signs. The White House clearly recognizes the inherent danger in porous borders for advanced AI. The measures described above, if fully implemented, would be a welcome effort to stem the flow of lethal AI to America’s adversaries and to promote uniform safety standards across allied countries.

Contradictions Within the AI Action Plan 

But the AI Action Plan also calls for measures that would hamper this effort.

For one, it laments the “onerous regulatory regime” that would have been necessary to enforce former President Joe Biden’s AI executive order, which President Trump has since rescinded. But any effort to establish and implement rigorous safety standards at home and propagate them abroad—which the AI Action Plan rightly calls for—will require regulation and an enforcement bureaucracy. It will also require experts whose training pipelines the Trump administration has imperiled through drastic cuts to scientific research.

Most concerningly, the plan strongly advocates the open sourcing of advanced AI models, which the White House acknowledges would make them “freely available […] for anyone in the world to download and modify.” This directly contradicts the administration’s stated desire to tightly control the spread of cutting-edge software.

The danger of open source was recently driven home by the release of DeepSeek, a Chinese large language model similar to ChatGPT but trained at a fraction of the usual cost on less advanced hardware. The startup behind DeepSeek drew on open-source code, including from American tech giant Meta. Without additional guardrails, nothing prevents adversaries like China from freely adapting American open-source models with military applications for their own purposes.

The open sourcing of advanced algorithms is thus incompatible with US national security. The case of DeepSeek—along with the discovery of cheap, domestically produced compute hardware running advanced AI models aboard Russian missiles and the Iranian drones discussed above—also underscores the inadequacy of hardware export controls alone. The Trump administration can have a robust system of safeguards against the proliferation of lethal AI, or it can have unfettered open-source AI, but it cannot have both.

The AI Action Plan’s support for the open sourcing of advanced algorithms stems from America’s desire to continue leading the world in AI innovation. But America’s desire to innovate should not inadvertently allow hostile actors to free-ride on its technological breakthroughs, especially in lethal AI.

Policy Recommendations for Controlling AI

This is why we have called for the establishment of a closed forum in which registered developers can share AI Dual Use Research of Concern (AI DURC), a research category that would include military AI applications. The most important targets for restriction would be command-and-control software and swarm intelligence algorithms. Swarm intelligence algorithms mimic how groups like ants solve problems together: each drone executes simple sensing and judgment, but their collective behavior leads to smart, adaptive decisions on the battlefield without the need for a human operator.
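To make that mechanism concrete, the short Python sketch below illustrates the idea under stated assumptions: every name and parameter is hypothetical, and it models no real weapons system. Each agent senses only peers within a fixed radius and applies a simple local movement rule, yet the group converges on a shared objective with no central controller.

import math
import random

NEIGHBOR_RADIUS = 5.0  # local sensing: each agent only sees nearby peers
STEP_SIZE = 0.5

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def neighbors(self, swarm):
        # An agent perceives only peers within its sensing radius.
        return [a for a in swarm if a is not self
                and math.dist((self.x, self.y), (a.x, a.y)) < NEIGHBOR_RADIUS]

    def step(self, swarm, target):
        # Simple local judgment: steer toward the shared objective while
        # also steering toward the centroid of visible neighbors for cohesion.
        near = self.neighbors(swarm)
        cx = sum(a.x for a in near) / len(near) if near else self.x
        cy = sum(a.y for a in near) / len(near) if near else self.y
        dx = 0.7 * (target[0] - self.x) + 0.3 * (cx - self.x)
        dy = 0.7 * (target[1] - self.y) + 0.3 * (cy - self.y)
        norm = math.hypot(dx, dy) or 1.0  # avoid division by zero
        self.x += STEP_SIZE * dx / norm
        self.y += STEP_SIZE * dy / norm

# Ten agents and no central controller: coordination emerges from local rules.
swarm = [Agent(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10)]
for _ in range(200):
    for agent in swarm:
        agent.step(swarm, target=(50.0, 50.0))
print([(round(a.x, 1), round(a.y, 1)) for a in swarm])

The point of this toy sketch is the design property it embodies: no agent holds a global picture, yet coherent group behavior emerges, which is precisely why such algorithms are difficult to counter by disabling any single node.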

The Department of Defense, the Department of Commerce’s Bureau of Industry and Security (BIS), and other federal agencies should define AI DURC as precisely as they can, then create a licensing system that would allow American and other allied researchers to access the forum to share code. The United States would only grant access to researchers from allied countries that can credibly demonstrate the enforcement of mutually acceptable AI DURC safety standards. It could restrict access to members of the AI alliance, for instance.

Congress should also pass legislation requiring hyperscale data centers—those with enough compute power to train the most advanced AI models—to conduct due diligence on their customers. If a customer is using data center services to train potentially lethal AI, the legislation would require that customer to obtain the necessary license and to implement “duty of care” protocols safeguarding its software against unintentionally escalatory behavior, theft, and transfer to malicious or reckless actors.

Congress should also direct BIS to maintain a blacklist of irresponsible, adversarial, and reckless entities and require data centers to bar these actors from their servers. Finally, it should direct the Department of the Treasury’s Office of Foreign Assets Control to block these bad actors from the US financial system, thereby preventing them from doing business with leading American innovators.

The Trump administration should then seek to extend this regulatory framework to partners and allies—especially those operating hyperscale data centers—using tariff threats. This would comport with the administration’s argument that foreign countries have too long reaped the benefits of the American-led world order without contributing their fair share to upholding it.

Balancing Innovation and Security

The White House’s AI Action Plan rightly recognizes America’s overwhelming interest in continuing to lead the world in AI innovation while preventing its cutting-edge technology from leaking to adversaries. It is undoubtedly a step in the right direction. But it does a poor job of balancing the two strategic priorities of innovation and security. If the administration does not confront the contradictions within its plan, it risks compromising both.

About the Authors: Joshua Curtis and Anthony De Luca-Baratta

Joshua Curtis is a member of the NextGen Initiative of Foreign Policy for America and Chairman of the Washington, D.C. chapter of IPF Atid. He holds a master’s degree in international relations from the Johns Hopkins School of Advanced International Studies in Washington, D.C.

Anthony De Luca-Baratta is a contributor to the Center for North American Prosperity and Security, a project of the Macdonald-Laurier Institute, and a Young Voices Contributor based in Montreal. He holds a master’s degree in international relations from the Johns Hopkins School of Advanced International Studies in Washington, D.C.

Image: Brian Jason/Shutterstock
