By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed,” and “the optics don’t look good.”
After negotiations between Anthropic and the Pentagon fell through on Friday, President Donald Trump directed federal agencies to stop using Anthropic’s technology after a six-month transition period, and Secretary of Defense Pete Hegseth said he was designating the AI company as a supply-chain risk.
Then, OpenAI quickly announced that it had reached a deal of its own for its models to be deployed in classified environments. With Anthropic saying it was drawing red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had the same red lines, there were some obvious questions: Was OpenAI being honest about its safeguards? Why was it able to reach a deal while Anthropic was not?
So as OpenAI executives defended the agreement on social media, the company also published a blog post outlining its approach.
In fact, the post pointed to three areas where it said OpenAI’s models can’t be used — mass domestic surveillance, autonomous weapons systems, and “high-stakes automated decisions (e.g. systems such as ‘social credit’).”
The company said that in contrast to other AI companies that have “reduced or removed their safety guardrails and relied solely on usage policies as their primary safeguards in national security deployments,” OpenAI’s agreement protects its red lines “through a more expansive, multi-layered approach.”
“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog said. “This is all in addition to the strong existing protections in U.S. law.”
The company added, “We don’t know why Anthropic couldn’t reach this deal, and we hope that they and more labs will consider it.”
After the post was published, Techdirt’s Mike Masnick claimed that the deal “absolutely does allow for domestic surveillance,” because it says the collection of private data will comply with Executive Order 12333 (along with various other laws). Masnick described that order as “how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even when it contains data from/on US persons.”
In a LinkedIn post, OpenAI’s head of national security partnerships Katrina Mulligan argued that much of the discussion around the contract language assumes “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War.”
“That’s not how any of this works,” Mulligan said, adding, “Deployment architecture matters more than contract language […] By limiting our deployment to cloud API, we can ensure that our models can’t be integrated directly into weapons systems, sensors, or other operational hardware.”
Altman also fielded questions about the deal on X, where he admitted it had been rushed and had resulted in significant backlash against OpenAI (to the extent that Anthropic’s Claude overtook OpenAI’s ChatGPT in Apple’s App Store on Saturday). So why do it?
“We really wanted to de-escalate things, and we thought the deal on offer was good,” Altman said. “If we’re right and this does lead to a de-escalation between the DoW and the industry, we’ll look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we’ll continue to be characterized as […] rushed and uncareful.”

