The Pentagon’s War Against AI
In the early hours of Saturday morning, the United States, in a joint operation with Israel, launched a large-scale offensive campaign against Iran. The U.S. military struck over 1,000 targets in the first 24 hours of the campaign, relying heavily on artificial intelligence to do so.
The Maven Smart System is the technological advance behind this attack: it takes in classified data from satellites and other intelligence sources, processes it through an AI model, and produces an optimal targeting plan that can then be executed in real-world operations. This is a massive leap in capability, with a single system doing work that once took dozens of analysts. There is a staggering irony behind this tactical success, however. Maven runs on Claude, an artificial intelligence model built by Anthropic. And even as the Pentagon was using Claude to direct a war in the Middle East, the Trump administration had just officially declared Anthropic to be “a radical left, woke company” that is “out-of-control,” consequently directing every federal agency to “immediately cease all use of Anthropic’s technology.”

First used by the U.S. government in November 2024, Claude has been deployed extensively across the Department of Defense and U.S. intelligence agencies, including in the January 3rd extraction of Nicolás Maduro. That operation marked the beginning of the divorce between the United States government and Anthropic, as Anthropic affirmed its support for the model’s use in national security provided it was not used for mass domestic surveillance of Americans or to control fully autonomous weapons. U.S. Secretary of Defense Pete Hegseth refused to accept those terms, insisting that Anthropic abandon any restriction on the use of its model or face backlash and penalties, including designation of the company as a supply-chain risk. That designation would explicitly prohibit any contractor that does business with the U.S. military from doing business with Anthropic, effectively attempting to isolate the company from the American defense and technology sectors and threatening to cut off access to its key resources.
The hypocrisy of this divorce, however, is playing out in real time on the battlefield. Even as Washington uses inflammatory rhetoric to corner Anthropic, the military and the Department of Defense remain tethered to its resources. Because Claude is currently the only AI model cleared for classified data, the Pentagon has been forced to implement a six-month phase-out period, during which it will continue to conduct the most AI-driven offensive in history, powered by a model the President has labeled evil and a national security threat. This power struggle has also unsettled the broader American public: by designating a $380 billion domestic startup as a supply-chain risk, a label traditionally reserved for companies with direct and extensive ties to foreign adversaries, the government has fundamentally changed the rules of public-private engagement, giving pause to any private firm that might pursue contracts with the government.
As Anthropic’s contract winds down, other actors have rushed to fill the vacuum. In particular, OpenAI signed a contract allowing its AI models and tools to be used in the military’s classified spaces, effectively taking Anthropic’s place. OpenAI’s agreement with the Pentagon came with red lines similar to Anthropic’s, stipulating that the technology could not be used for mass domestic surveillance, but notably omitting any restriction on the control of fully autonomous weapons. Furthermore, OpenAI CEO Sam Altman, in an all-hands meeting with his employees, said his company does not get to choose how the military uses its technology. As OpenAI moves to integrate its systems into the Pentagon’s architecture, it does so under a new understanding that the government, not the private sector, has the ultimate say over where the lines of AI usage are drawn. And as the war in Iran continues to prove the lethality of these AI models, the ethical frameworks governing them grow only murkier.