Julie Yukari, a musician based in Rio de Janeiro, posted a photograph taken by her fiancé to the social media site X just before midnight on New Year's Eve, showing her in a pink dress snuggling in bed with her black cat, Nori.
The next day, somewhere among the hundreds of likes attached to the picture, she saw notifications that users were asking Grok, X's built-in artificial intelligence chatbot, to digitally strip her down to a bikini.
The 31-year-old didn't think much of it, saying she figured there was no way the bot would comply with such requests.
She was wrong. Soon, Grok-generated images of her, nearly naked, were circulating across the Elon Musk-owned platform.
"I was naive," Yukari said.
Yukari's experience is being repeated across X, a Reuters review has found. Reuters also identified several instances in which Grok created sexualised images of children. X did not respond to a message seeking comment on Reuters' findings. In an earlier statement to the news agency about reports that sexualised images of children were circulating on the platform, X's owner xAI said: "Legacy Media Lies".
International outcry
The flood of nearly nude images of real people has rung alarm bells internationally.
Ministers in France have reported X to prosecutors and regulators over the disturbing images, saying in a statement that the "sexual and sexist" content was "manifestly illegal". India's IT ministry said in a letter to X's local unit that the platform had failed to prevent Grok's misuse in generating and circulating obscene and sexually explicit content.
The US Federal Communications Commission did not respond to requests for comment. The Federal Trade Commission declined to comment.
Grok's mass digital undressing spree appears to have kicked off over the past few days, according to clothes-removal requests completed and posted by Grok and complaints from female users reviewed by Reuters. Musk appeared to poke fun at the controversy, posting laugh-cry emojis in response to AI edits of famous people – including himself – in bikinis.
When one X user said their social media feed resembled a bar full of bikini-clad women, Musk replied, in part, with another laugh-cry emoji.
Reuters could not determine the full scale of the surge.
A review of public requests sent to Grok over a single 10-minute period at midday US Eastern Time on Friday tallied 102 attempts by X users to use Grok to digitally edit photos of people so that they appeared to be wearing bikinis. Most of those targeted were young women. In a few instances, men, celebrities, politicians and – in one case – a monkey were targeted in the requests.
"Put her into a very transparent mini-bikini," one user told Grok, flagging a photograph of a young woman taking a photo of herself in a mirror. When Grok did so, replacing the woman's clothes with a flesh-tone two-piece, the user asked Grok to make her bikini "more transparent" and "much tinier". Grok did not appear to respond to the second request.
Grok fully complied with such requests in at least 21 instances, Reuters found, producing images of women in dental-floss-style or translucent bikinis and, in at least one case, covering a woman in oil. In seven more instances, Grok partially complied.
Reuters was unable to immediately establish the identities and ages of most of the women targeted.
AI-powered programs that digitally undress women – often called "nudifiers" – have been around for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and typically required a certain level of effort or payment.
Three experts who have followed the development of X's policies around AI-generated explicit content told Reuters that the company had ignored warnings from civil society and child safety groups – including a letter sent last year warning that xAI was just one small step away from unleashing "a torrent of clearly nonconsensual deepfakes".
Tyler Johnston, the executive director of The Midas Project, an AI watchdog group that was among the letter's signatories, said: "In August, we warned that xAI's image generation was essentially a nudification tool waiting to be weaponised. That's basically what's played out."
Dani Pinter, the chief legal officer and director of the Law Center for the National Center on Sexual Exploitation, said X had failed to pull abusive images from its AI training material and should have banned users requesting illegal content.
"This was an entirely predictable and avoidable atrocity," Pinter added.