In a post on Truth Social, President Trump directed federal agencies to cease use of all Anthropic products after the company's public dispute with the Department of Defense. The president allowed a six-month phase-out period for departments using the products, but emphasized that Anthropic was no longer welcome as a federal contractor.
"We don't want it, we don't need it, and won't do business with them again," the president wrote in the post.
Notably, the president's post didn't mention any plans to designate Anthropic as a supply chain risk, as had previously been raised as a consequence. However, a subsequent tweet from Secretary of Defense Pete Hegseth made good on the threat.
"In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security," Secretary Hegseth wrote. "Effective immediately, no contractor, supplier, or partner that does business with the US military may conduct any commercial activity with Anthropic."
The Pentagon dispute centered on Anthropic's refusal to allow its AI models to be used to power either mass domestic surveillance or fully autonomous weapons, which Secretary Hegseth found unduly restrictive.
CEO Dario Amodei reiterated his stance in a public post on Thursday, refusing to compromise on the two points.
"Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place," Amodei wrote at the time. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
OpenAI has come out in support of Anthropic's decision. Per the BBC, CEO Sam Altman sent a memo to employees on Thursday saying he shared the same "red lines" and that any OpenAI-related defense contracts would also reject uses that were "unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons."
OpenAI co-founder Ilya Sutskever, who very publicly fell out with Altman in November 2023 and has since co-founded his own AI company, also waded into the conversation on Friday, writing on X: "It is extremely good that Anthropic has not backed down, and it is important that OpenAI has taken a similar stance.
In the future, there will be far more complicated situations of this nature, and it will be critical for the relevant leaders to rise to the occasion, for fierce competitors to put their differences aside. Good to see that happen today."
Anthropic, OpenAI and Google each received contract awards from the U.S. Defense Department last July. While some Google employees have come out in support of Anthropic, Google and its parent company have yet to comment.

