Anthropic CEO Dario Amodei has reportedly reopened negotiations with the US Department of Defense in a last-minute effort to secure continued access to Pentagon contracts as the company faces the possibility of being labeled a supply chain risk by the Trump administration.
Amodei has been holding discussions with Emil Michael, the US undersecretary of defense for research and engineering, to finalize terms governing the military's use of Anthropic's artificial intelligence models, the Financial Times reported, citing people familiar with the matter.
A new agreement would allow the Pentagon to keep using the company's technology and could prevent a formal designation that would force contractors in the defense supply chain to cut ties with the AI developer, per the report.
The talks follow a sharp breakdown in negotiations last week. Michael reportedly accused Amodei of being a "liar" with a "God complex," and discussions collapsed after the two sides failed to agree on language Anthropic said was essential to prevent misuse of its technology.
Pentagon negotiations stall over bulk data analysis clause
In an internal memo to staff seen by the FT, Amodei reportedly wrote that near the end of negotiations, the Pentagon offered to accept Anthropic's broader terms if the company removed a clause restricting the "analysis of bulk acquired data." He said this phrase was meant to guard against potential mass domestic surveillance, a scenario Anthropic treats as a red line, alongside the use of AI in lethal autonomous weapons.
The dispute escalated after Defense Secretary Pete Hegseth warned that Anthropic could be designated a supply chain risk, a move that would effectively freeze the company out of US military procurement networks.
The standoff came despite Anthropic's recent ties to the defense sector. The company was awarded a contract worth up to $200 million by the US Defense Department in July 2025, and it became the first AI provider whose models were used in classified environments and by national security agencies.
As Cointelegraph reported, the US military even used Anthropic's Claude AI model to support a major air strike on Iran hours after President Donald Trump ordered federal agencies to stop using the company's systems.
Tech groups warn risk label could hurt US AI leadership
Meanwhile, in a Wednesday letter to Trump, tech groups warned that labeling a domestic AI company a supply chain risk could undermine US leadership in AI. The groups argued that treating a US technology company "as a foreign adversary, rather than an asset," could discourage innovation and weaken America's ability to compete with China in the global AI race.
Signatories included the Software & Information Industry Association, TechNet, the Computer & Communications Industry Association and the Business Software Alliance. These organizations represent hundreds of American tech companies, including AI chipmaker Nvidia, Alphabet's Google and Apple.

