Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security" and arguing that the government's case rests on technical misunderstandings and claims that were never actually raised during the months of negotiations that preceded the dispute.
The declarations were filed alongside Anthropic's reply brief in its lawsuit against the Department of Defense and come ahead of a hearing this coming Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.
The two people who submitted the declarations are Sarah Heck, Anthropic's Head of Policy, and Thiyagu Ramasamy, the company's Head of Public Sector.
Heck is a former National Security Council official who worked at the White House under the Obama administration before moving to Stripe and then Anthropic, where she runs the company's government relationships and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and the Pentagon's Under Secretary Emil Michael.
In her declaration, Heck calls out what she describes as a central falsehood in the government's filings: that Anthropic demanded some kind of approval role over military operations. That claim, she says, simply isn't true. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote.
She also points out that the Pentagon's concern about Anthropic potentially disabling or altering its technology mid-operation was never raised during negotiations. Instead, she says, it surfaced for the first time in the government's court filings, which gave Anthropic no opportunity to respond.
Another detail in Heck's declaration sure to draw attention is that on March 4, the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic, Under Secretary Michael emailed Amodei to say the two sides were "very close" on the two issues the government now cites as proof that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.
The email, which Heck attaches as an exhibit to her declaration, is worth reading alongside what Michael said publicly in the days that followed. On March 5, Amodei published a statement saying the company had been having "productive conversations" with the Pentagon. The day after that, Michael posted on X that "there is no active Department of War negotiation with Anthropic." A week after that, he told CNBC there was "no chance" of renewed talks.
Heck's point appears to be: if Anthropic's stance on these two issues is what makes it a national security threat, why was the Pentagon's own official saying the two sides were nearly aligned on exactly those issues right after the designation was finalized?
Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he is credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer.
His declaration takes on the government's claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering how it behaves, which Ramasamy says is not technically possible. By his telling, once Claude is deployed inside a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of "operational veto" is a fiction, he suggests, explaining that a change to the model would require the Pentagon's explicit approval and action to install.
Anthropic, he says, cannot even see what government users are typing into the system, let alone extract that data.
Ramasamy also disputes the government's claim that Anthropic's hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting, the same background check process required for access to classified information, adding in his declaration that "to my knowledge," Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.
Anthropic's lawsuit argues that the supply-chain risk designation, the first ever applied to an American company, amounts to government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment.
The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call, not punishment for the company's views.
