If the Pentagon carries out its threat to blacklist Anthropic’s Claude AI platform, it could be three months or even longer before the U.S. military regains access to such a powerful tool on its classified networks, according to multiple sources familiar with the fight between the Defense Department and the AI maker.
If the Pentagon does designate the San Francisco-based AI startup as a supply-chain risk, it would touch off a lengthy and likely expensive series of protective measures, the people familiar with the matter said.
Operators would have to reconfigure the data inputs they feed into the models, re-examine how to share data in real time with the intelligence community, which also uses Claude widely, and re-validate that replacement models were functioning as the military expected them to, they said.
In July, Anthropic received a $200 million contract to provide its frontier-model tools to the Pentagon, as did the other three U.S. makers of such products: OpenAI, Google, and xAI.
Look at this administration. This is the epitome of reality-averse. They will welcome hallucinations and slop and if they don’t like what it says they’ll have someone rewrite the prompt until they are happy with it. Everything you hate about AI, this administration considers a policy goal.