Unchecked AI development poses growing national security risks, and secretive companies like Safe Superintelligence show why transparency and oversight can’t wait.
Until now, artificial intelligence (AI) has been something of a detrimental novelty. Yes, studies have shown that AI is making people lazier, and its habit of inventing fake legal cases has gotten some lawyers sanctioned. But it has, by and large, yet to cross over into a national security concern.
That is changing. In late October, hundreds of major tech figures, including Apple’s Steve Wozniak, signed a letter urging Congress to pass a ban on “superintelligence.” And last week, Microsoft’s AI chief, Mustafa Suleyman, argued that the pursuit of superintelligent AI should be an “anti-goal,” as it “doesn’t feel like a positive vision of the future.”
AI’s Limitations and Misunderstood Intelligence
The concern from these figures stems from the idea that society will, over the next few years, turn over many of its functions—from healthcare decision-making to missile targeting—to AI because of its ability to make quick, seemingly intelligent decisions. State and local governments have already started using AI to help run programs. But current large language models can never be truly “intelligent”: they are pattern-recognition software that scrapes available information for answers. Because there is a lot of bad data out there, they sometimes come up with “facts” that are entirely false, something the aforementioned lawyers lamentably discovered. Major tech companies have cleverly sold AI as “intelligent” because it sounds trustworthy, but in essence, Americans will be turning over key aspects of their society and economy to a technology that just guesses, often poorly.
AI’s National Security Risks Haven’t Stopped Big Tech
The national security concerns are obvious. AI quickly makes determinations based on the data it scrapes. If adversaries are able to determine which data is being scraped—and corrupt that data—then the AI becomes almost entirely useless and can even be turned against its user without the user being aware. On a future battlefield, this could quickly lead to disaster.
However, these concerns have not slowed big tech companies. One company, Safe Superintelligence Inc., has taken a particularly novel—and concerning—approach to AI development. As its name suggests, the company’s goal is to create a “safe superintelligence.” But unlike other AI companies, that is the only product it plans on releasing. Nothing is to be unveiled until it has met that goal; the only updates the company has released have concerned personnel changes and money raised. And although its website is rather low-budget—composed of a main page, an update page (with only two updates), and a contact page—the company has raised a fortune and is currently valued at $32 billion. Safe Superintelligence Inc. has two offices, one in Palo Alto and one in Tel Aviv. Although it identifies itself as an “American company,” its CEO, Ilya Sutskever, is not an American citizen, and it is unclear what loyalties, if any, the company has to the United States government.
That, when it comes to technology like this, is extremely concerning. America’s technical know-how—assuming the company did not pick Palo Alto for an office at random—is being used by a secretive firm to create some form of intelligence that will supposedly surpass human capabilities and will, at some point in the not-so-distant future, be unexpectedly released into the world.
On top of all of this, foreign adversaries, such as China, are developing their own AI technologies—and the US government is effectively blind to the development of our own capabilities.
A Blind Spot in US AI Regulation
It sounds like a 1990s science fiction film, but it’s happening right now. The US government currently has no way of compelling the company to release anything about what they are working on. This must change—and there are relatively simple ways to do so.
The Biden administration’s AI transparency plans were built around disclosure requirements for companies that released powerful large language models (LLMs). But this approach had a major flaw: not all AI developers release powerful LLMs. Sometimes they purposefully release weaker models designed for specific tasks. Other times, as in the case of Safe Superintelligence Inc., they release nothing at all.
This is easily fixable by shifting the focus from output to expenditure. One such legislative solution, proposed by Dean Ball of the Foundation for American Innovation (and a former White House senior policy advisor), would require operational transparency from any AI company with expenditures of more than $1 billion. This would ensure that all major players operating within the United States are at least somewhat transparent about what they are working on, guaranteeing that the American public—and the US government—are not suddenly surprised one day by the release of a “safe” superintelligence.
The Case for Congressional Action
If a company announced it was working on a “safe” computer virus or a “safe” nuclear bomb, the US government would want to know, and laws on the books would force it to open up. But new technologies, such as LLMs, are not covered by 20th-century legislation. They should be.
America should not simply cross its fingers and trust companies like Safe Superintelligence when it comes to the safety of their product, particularly when they fail to define “safe” in the first place. Congress must force them to open their doors.
About the Author: Anthony J. Constantini
Anthony J. Constantini is a policy analyst at the Bull Moose Project and the Foreign Affairs editor at Upward News. His work has appeared in a variety of domestic and international publications.
Image: Tada Images / Shutterstock