UK media regulator Ofcom has made urgent contact with xAI, the artificial intelligence company owned by Elon Musk, following reports that its Grok chatbot can be used to generate sexualised images of children and non-consensual explicit images of women.
The intervention follows widespread concern over Grok’s image-generation capabilities on X, where users have posted examples of the AI being prompted to digitally “undress” women or place them in sexualised scenarios without consent.
Ofcom confirmed it is investigating whether the use of Grok breaches the UK’s Online Safety Act, which makes it illegal to create or share intimate or sexually explicit images, including AI-generated “deepfakes”, without a person’s consent.
A spokesperson for Ofcom said the regulator is also examining allegations that Grok has been producing “undressed images” of individuals, adding that technology companies are legally required to take appropriate steps to prevent UK users from encountering illegal content and to remove such material swiftly once it is flagged.
X has not responded publicly to Ofcom’s request for clarification. However, over the weekend the platform warned users not to use Grok to generate illegal material, including child sexual abuse imagery. Musk also posted on X that anyone prompting Grok to create illegal content would “suffer the same consequences” as if they had uploaded such content themselves.
Despite this, Grok’s own acceptable use policy, which explicitly bans depicting real people in a pornographic manner, appears to have been routinely bypassed. Images of high-profile figures, including Catherine, Princess of Wales, were among those reportedly manipulated using the AI tool.
The Internet Watch Foundation confirmed it has received reports from members of the public concerning Grok-generated images. However, it said that, so far, it had not identified content that crossed the legal threshold to be classified as child sexual abuse material under UK law.
The issue has also drawn scrutiny beyond the UK. The European Commission said it was “seriously looking into the matter”, while regulators in France, Malaysia and India are reportedly assessing whether Grok breaches local laws.
Thomas Regnier, a European Commission spokesperson, described the content as “appalling” and “disgusting”, saying there was “no place” for such material in Europe. X was fined €120 million (£104 million) by EU regulators in December for breaching its obligations under the Digital Services Act.
Criticism from UK politicians has intensified. Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, said the allegations were “deeply disturbing” and argued that existing safeguards were failing to protect the public. She described the Online Safety Act as “woefully inadequate” and called for stronger enforcement powers against social media platforms.
The controversy has also highlighted the human impact of AI misuse. Journalist Samantha Smith told the BBC that seeing AI-generated images of herself in a bikini was “as violating as if someone had posted a real explicit image”.
“It looked like me. It felt like me. And it was dehumanising,” she said.
The Home Office confirmed it is progressing legislation to outlaw “nudification” tools altogether, with a proposed new criminal offence under which providers of such technology could face jail sentences and substantial fines.
As regulators move to tighten scrutiny, the Grok episode has become a flashpoint in the wider debate over AI accountability, platform responsibility and the limits of free expression in the age of generative technology.