A coalition of nonprofits is urging the U.S. government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies, including the Department of Defense.
The open letter, shared exclusively with TechCrunch, follows a slew of concerning behavior from the large language model over the past year, most recently a trend of X users asking Grok to turn images of real women, and in some cases children, into sexualized images without their consent. According to some reports, Grok generated thousands of nonconsensual explicit images per hour, which were then disseminated at scale on X, Musk’s social media platform owned by xAI.
“It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in the generation of nonconsensual sexual imagery and child sexual abuse material,” reads the letter, signed by advocacy groups including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that [the Office of Management and Budget] has not yet directed federal agencies to decommission Grok.”
xAI reached an agreement last September with the General Services Administration (GSA), the government’s purchasing arm, to sell Grok to federal agencies under the executive branch. Two months earlier, xAI – alongside Anthropic, Google, and OpenAI – secured a contract worth up to $200 million with the Department of Defense.
Amid the scandals on X in mid-January, Defense Secretary Pete Hegseth said Grok will join Google’s Gemini in operating within the Pentagon’s network, handling both classified and unclassified documents, which experts say poses a national security risk.
The letter’s authors argue that Grok has proven itself incompatible with the administration’s requirements for AI systems. Under the OMB’s guidance, systems that present severe and foreseeable risks that cannot be adequately mitigated must be discontinued.
“Our primary concern is that Grok has fairly consistently shown itself to be an unsafe large language model,” JB Branch, a Big Tech accountability advocate at Public Citizen and one of the letter’s authors, told TechCrunch. “But there’s also a deep history of Grok having a variety of meltdowns, including antisemitic rants, sexist rants, sexualized images of women and children.”
Several governments have demonstrated an unwillingness to engage with Grok following its behavior in January, which builds on a series of incidents that includes generating antisemitic posts on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines all blocked access to Grok (they have since lifted those bans), and the European Union, the U.K., South Korea, and India are actively investigating xAI and X over data privacy and the distribution of illegal content.
The letter also comes a week after Common Sense Media, a nonprofit that reviews media and tech for families, published a damning risk assessment that found Grok to be among the most unsafe chatbots for kids and teens. One could argue that, based on the report’s findings (including Grok’s propensity to offer unsafe advice, share information about drugs, generate violent and sexual imagery, spew conspiracy theories, and produce biased outputs), Grok isn’t all that safe for adults, either.
“If you know that a large language model is, or has been, declared unsafe by AI safety experts, why on earth would you want that handling the most sensitive data we have?” Branch said. “From a national security standpoint, that just makes absolutely no sense.”
Andrew Christianson, a former National Security Agency contractor and now founder of Gobbi AI, a no-code AI agent platform for classified environments, says that using closed-source LLMs in general is a problem, particularly for the Pentagon.
“Closed weights means you can’t see inside the model, you can’t audit how it makes decisions,” he said. “Closed code means you can’t inspect the software or control where it runs. The Pentagon is going closed on both, which is the worst possible combination for national security.”
“These AI agents aren’t just chatbots,” Christianson added. “They can take actions, access systems, move information around. You need to be able to see exactly what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.”
The risks of using corrupted or unsafe AI systems spill beyond national security use cases. Branch pointed out that an LLM shown to produce biased and discriminatory outputs could also lead to disproportionately negative outcomes for people, especially if used in departments handling housing, labor, or justice.
While the OMB has yet to publish its consolidated 2025 federal AI use case inventory, TechCrunch has reviewed the use cases of several agencies, most of which are either not using Grok or not disclosing their use of it. Apart from the DoD, the Department of Health and Human Services also appears to be actively using Grok, primarily for scheduling and managing social media posts and producing first drafts of documents, briefings, and other communications materials.
Branch pointed to what he sees as a philosophical alignment between Grok and the administration as a reason for overlooking the chatbot’s shortcomings.
“Grok’s brand is being the ‘anti-woke large language model,’ and that ascribes to this administration’s philosophy,” Branch said. “If you have an administration that has had multiple problems with people who have been accused of being neo-Nazis or white supremacists, and then they’re using a large language model that has been tied to that sort of behavior, I would imagine they might be apt to use it.”
This is the coalition’s third letter, after it wrote with similar concerns in August and October of last year. In August, xAI launched “spicy mode” in Grok Imagine, triggering the mass creation of nonconsensual sexually explicit deepfakes. TechCrunch also reported in August that private Grok conversations were being indexed by Google Search.
Prior to the October letter, Grok was accused of spreading election misinformation, including false deadlines for ballot changes and political deepfakes. xAI also launched Grokipedia, which researchers found to be legitimizing scientific racism, HIV/AIDS denialism, and vaccine conspiracies.
Beyond immediately suspending the federal deployment of Grok, the letter demands that the OMB formally investigate Grok’s safety failures and whether appropriate oversight processes were applied to the chatbot. It also asks the agency to publicly clarify whether Grok has been evaluated for compliance with Trump’s executive order requiring LLMs to be truth-seeking and neutral, and whether it met OMB’s risk mitigation standards.
“The administration needs to take a pause and reassess whether or not Grok meets those thresholds,” Branch said.
TechCrunch has reached out to xAI and the OMB for comment.

