The tech world’s nonconsensual, sexualized deepfake problem is now bigger than just X.
In a letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok, several U.S. senators are asking the companies to provide evidence that they have “robust protections and policies” in place and to explain how they plan to curb the rise of sexualized deepfakes on their platforms.
The senators also demanded that the companies preserve all documents and records relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, as well as any related policies.
The letter comes hours after X said it updated Grok to bar it from making edits of real people in revealing clothing and restricted image creation and edits via Grok to paying subscribers. (X and xAI are part of the same company.)
Pointing to media reports about how easily and often Grok generated sexualized and nude images of women and children, the senators noted that platforms’ guardrails to prevent users from posting nonconsensual, sexualized imagery are not enough.
“We acknowledge that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing,” the letter reads.
Grok, and by extension X, has been heavily criticized for enabling this trend, but other platforms are not immune.
Deepfakes first gained popularity on Reddit, when a page displaying synthetic porn videos of celebrities went viral before the platform took it down in 2018. Sexualized deepfakes targeting celebrities and politicians have multiplied on TikTok and YouTube, though they usually originate elsewhere.
Meta’s Oversight Board last year called out two instances of explicit AI images of female public figures, and the platform has had nudify apps advertising on its services, though it did later sue a company called CrushAI. There have been multiple reports of children spreading deepfakes of peers on Snapchat. And Telegram, which isn’t included on the senators’ list, has also become notorious for hosting bots built to undress photos of women.
In response to the letter, X pointed to its announcement regarding its update to Grok.
“We don’t and won’t allow any non-consensual intimate media (NCIM) on Reddit, don’t offer any tools capable of making it, and take proactive measures to find and remove it,” a Reddit spokesperson said in an emailed statement. “Reddit strictly prohibits NCIM, including depictions that have been faked or AI-generated. We also prohibit soliciting this content from others, sharing links to ‘nudify’ apps, or discussing how to create this content on other platforms,” the spokesperson added.
Alphabet, Snap, TikTok, and Meta did not immediately respond to requests for comment.
The letter demands the companies provide:
- Policy definitions of “deepfake” content, “non-consensual intimate imagery,” or similar terms.
- Descriptions of the companies’ policies and enforcement approach for nonconsensual AI deepfakes of people’s bodies, non-nude images, altered clothing, and “digital undressing.”
- Descriptions of current content policies addressing edited media and explicit content, as well as internal guidance provided to moderators.
- How current policies govern AI tools and image generators as they relate to suggestive or intimate content.
- What filters, guardrails, or measures have been implemented to prevent the generation and distribution of deepfakes.
- Which mechanisms the companies use to identify deepfake content and prevent it from being re-uploaded.
- How they prevent users from profiting from such content.
- How the platforms prevent themselves from monetizing nonconsensual AI-generated content.
- How the companies’ terms of service allow them to ban or suspend users who post deepfakes.
- What the companies do to notify victims of nonconsensual sexual deepfakes.
The letter is signed by Senators Lisa Blunt Rochester (D-Del.), Tammy Baldwin (D-Wis.), Richard Blumenthal (D-Conn.), Kirsten Gillibrand (D-NY), Mark Kelly (D-Ariz.), Ben Ray Luján (D-NM), Brian Schatz (D-Hawaii), and Adam Schiff (D-Calif.).
The move comes just a day after xAI’s owner Elon Musk said that he was “not aware of any nude underage images generated by Grok.” Later on Wednesday, California’s attorney general opened an investigation into xAI’s chatbot, following mounting pressure from governments around the world incensed by the lack of guardrails around Grok that allowed this to happen.
xAI has maintained that it takes action to remove “illegal content on X, including [CSAM] and non-consensual nudity,” though neither the company nor Musk has addressed the fact that Grok was allowed to generate such content in the first place.
The problem isn’t confined to nonconsensual manipulated sexualized imagery, either. While not all AI-based image generation and editing services let users “undress” people, they do make it easy to generate deepfakes. To pick just a few examples, OpenAI’s Sora 2 reportedly allowed users to generate explicit videos featuring children; Google’s Nano Banana apparently generated an image showing Charlie Kirk being shot; and racist videos made with Google’s AI video model are garnering millions of views on social media.
The issue grows even more complex when Chinese image and video generators come into the picture. Many Chinese tech companies and apps, particularly those linked to ByteDance, offer easy ways to edit faces, voices, and videos, and those outputs have spread to Western social platforms. China has stronger synthetic content labeling requirements than the U.S., which has none at the federal level; users there instead rely on fragmented and dubiously enforced policies from the platforms themselves.
U.S. lawmakers have already passed some legislation seeking to rein in deepfake pornography, but the impact has been limited. The Take It Down Act, which became federal law in May, is meant to criminalize the creation and dissemination of nonconsensual, sexualized imagery. But several provisions in the law make it difficult to hold image-generating platforms accountable, as they focus most of the scrutiny on individual users instead.
Meanwhile, a number of states are trying to take matters into their own hands to protect users and elections. This week, New York governor Kathy Hochul proposed legislation that would require AI-generated content to be labeled as such and would ban nonconsensual deepfakes in specified periods leading up to elections, including depictions of opposing candidates.

