Hundreds of tech workers have signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a “supply chain risk.” The letter also calls on Congress to step in and “examine whether the use of these extraordinary authorities against an American technology company is appropriate.”
The letter includes signatories from major technology and venture capital firms, including OpenAI, Slack, IBM, Cursor, Salesforce Ventures, and more. It follows a dispute between the DOD and Anthropic after the AI lab last week refused to give the military unrestricted access to its AI systems.
Anthropic’s two red lines in its negotiations with the Pentagon were that it did not want its technology to be used for mass surveillance of Americans or to power autonomous weapons that make targeting and firing decisions with no human in the loop. The DOD said it had no plans to do either of those things, but that it did not believe it should be restricted by the rules of a vendor.
In response to Anthropic CEO Dario Amodei’s refusal to cave to Hegseth’s threats, President Donald Trump on Friday directed federal agencies to stop using Anthropic’s technology after a six-month transition period. Hegseth said he would make good on his threats and designate Anthropic a supply chain risk, a designation typically reserved for foreign adversaries that would blacklist the AI firm from working with any agency or company that does business with the Pentagon.
In a post on Friday, Hegseth wrote: “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
But a post on X doesn’t automatically make Anthropic a supply chain risk. The government needs to complete a risk assessment and notify Congress before military partners have to cut ties with Anthropic or its products. Anthropic said in a blog post that the designation is “legally unsound” and that it would “challenge any supply chain risk designation in court.”
Many in the industry see the administration’s treatment of Anthropic as harsh and as clear retaliation.
“When two parties cannot agree on terms, the normal course is to part ways and work with a competitor,” the open letter reads. “This situation sets a dangerous precedent. Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation.”
Beyond concern over the government’s harsh treatment of Anthropic, many in the industry remain concerned about potential government overreach and the use of AI for nefarious purposes.
Boaz Barak, an OpenAI researcher, wrote in a social media post on Monday that blocking governments from using AI for mass surveillance is also his “personal red line” and that “it should be all of ours.”
Moments after Trump publicly attacked Anthropic, OpenAI announced it had reached a deal of its own for its models to be deployed in the DOD’s classified environments. OpenAI CEO Sam Altman said last week that the firm has the same red lines as Anthropic.
“If anything good can come out of the events of the last week, it would be if we in the AI industry start treating the issue of using AI for government abuse and surveillance of its own people as a catastrophic risk in its own right,” Barak wrote. “We have done a good job of evaluations, mitigations, and processes for risks such as bioweapons and cybersecurity. Let’s use similar processes here.”

