
Anthropic Picked a Fight With the Pentagon—and Misunderstood Its Place

Anthropic’s dispute with the Pentagon shows the limits of Silicon Valley’s power: the US military will never grant private companies veto authority over warfare.

Although the conflict in Iran quickly wiped it off the headlines, the Trump administration’s defenestration of Anthropic’s Claude is still reverberating across the artificial intelligence (AI) and defense industries. The move bolstered its competitor, OpenAI, which was more than happy to quickly ink a deal with the Pentagon. It is also proving injurious to Anthropic’s bottom line, as defense companies, unwilling to cross the Department of Defense (DOD), are ceasing to work with the beleaguered AI company.

Anthropic’s Clash with the Pentagon over Surveillance and Autonomous Weapons 

Anthropic has been staunchly defending its actions. In an open letter, Anthropic founder and CEO Dario Amodei wrote that the breaking point for his company was the Pentagon’s demand to allow for “mass domestic surveillance” and “fully autonomous weapons.” Anthropic refused the Pentagon’s demand to remove its safeguards, and the company has now been labeled a supply chain risk.

Why the Pentagon’s Reaction to Anthropic Was Predictable 

Anthropic seemed somewhat blindsided by the events. But it shouldn’t have been. The question should not be why the Pentagon did what it did. It should be why Anthropic thought it would happen any other way.

Start with the product itself. AI developers have made no bones about the effect that artificial intelligence will have on the world. In fact, there has been one frequent comparison they’ve made to describe their creations: the advent of nuclear weaponry. OpenAI’s Sam Altman compared his company’s work to the Manhattan Project, saying, “There are these moments in the history of science where you have a group of scientists look at their creation and just say, you know, what have we done?” Elon Musk, before he founded xAI, said that artificial intelligence was “more dangerous than nukes.” And Anthropic’s own Amodei argued that allowing the sale of AI chips to China was akin to “selling nuclear weapons to North Korea.”

The actual Manhattan Project was a United States government operation. But if private industry had somehow gotten there first, is there any doubt that the Roosevelt administration would have taken a keen interest in controlling the research and any resulting bombs?

Who Controls AI in Warfare: Tech Companies or the US Military? 

Then there’s the substance of the debate between Anthropic and the Pentagon. Anthropic claims it is about its two rules—mass surveillance and AI-operated weaponry—but those are arguably tertiary to the real question: who has control over the weaponry American soldiers use. As Secretary of Defense Pete Hegseth wrote, “the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.”

Anthropic, in essence, wants veto power over what its own CEO has compared to nuclear weaponry. While its only red lines now are supposedly mass surveillance and AI-operated weaponry, it is not hard to imagine the company extending those demands down the line.

Even their current demands are questionable. Imagine the Pentagon wants to surveil a building during an ongoing operation, and seeks to, for example, hack into security cameras. What if the AI wrongly categorizes this as “mass surveillance”? AI is wrong constantly. It hallucinates case law that never existed and wrongly labels random musicians as sex criminals. Can we be sure it will not misinterpret safeguards to far exceed whatever mandate Anthropic initially set for it?

There is also evidence that Anthropic may want a greater say than it publicly admits. The entire affair kicked off when someone at Palantir became alarmed after a conversation with an Anthropic official, who seemed to disapprove of the company’s AI, Claude, being used in January’s successful Venezuela incursion. The Palantir official later told the Pentagon what had been said.

Then there’s the question of whether there should be any such mandate at all. When SIG Sauer sells rifles to the United States Army, it does not include kill switches. What happens when an AI-powered rifle shuts off in a soldier’s hands because it determines that, somewhere far down the street behind a threat, there may or may not be a civilian who could be hit? When you sell a bomb or a weapon, you do not get to determine how it is then used.

Silicon Valley’s AI Mindset and Growing National Security Backlash 

Supporters of artificial intelligence, and those who run AI companies, consistently do not understand how they come across to those who live outside (metaphorically or literally) Silicon Valley. Polling shows that people are, by and large, concerned about AI. To this, AI’s supporters have essentially adopted a “deal with it” mentality. At times, they even give off a sharper edge: a belief that they are the future, and that anyone seeking to arrest their progress is arresting the progress of mankind.

This has become such a pervasive attitude that it has even rankled those within the tech space. Alex Karp, CEO of Palantir, recently went on a tear, saying, “If Silicon Valley believes we are going to take away everyone’s white-collar job … and you’re gonna screw the military—if you don’t think that’s gonna lead to nationalization of our technology, you’re retarded.”

The Risk of Nationalization and the Limits of Private AI Power 

Anthropic, and the loudest purveyors of AI, should take Karp’s concerns seriously. Because he’s right: presenting your technology as akin to what ushered in the nuclear era, saying it will take everyone’s jobs, and picking a fight with the American military-industrial complex—arguably one of the most influential forces on Earth—is a fast way to lose all of one’s friends.

For now, nationalization is far from anyone’s mind. The Pentagon has not hinted that it desires it, and the Trump administration has clearly sought to make partners of AI companies, not subjects. If those companies continue to make enemies of everyone on the left and right, however, Karp’s fears could prove prophetic.

The United States military, like the military of any great power, is never going to give a private company veto authority. Anthropic should have understood that. The fact that it didn’t—along with the implicit argument that it should, on some level, have veto authority—is evidence that it is badly misreading its position and its place. Anthropic may view itself as changing human nature, pushing past the ignorant peons who stand in its way.

But to the Pentagon, they’re just another weapons manufacturer. And there are plenty of others out there to choose from.

About the Author: Anthony J. Constantini

Anthony J. Constantini is a policy analyst at the Bull Moose Project and the foreign affairs editor at Upward News. His work has appeared in a variety of domestic and international publications.

