Sexual deepfakes, probes and bans: backlash over Musk’s Grok spreads

Countries move to block Grok as regulators in Europe, Asia and beyond launch probes into AI-generated sexual deepfakes, raising alarms over safeguards, platform responsibility and the spread of non-consensual images of women and children

The controversy surrounding “bikini images” generated by Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI and accessible through the social media platform X, is widening, drawing mounting criticism and regulatory action around the world.
Malaysia and Indonesia became the first countries to fully block access to Grok over the weekend, saying existing safeguards failed to prevent the creation and spread of fake sexual content, particularly involving women and minors.
The Grok app; Elon Musk (Photo: Shutterstock, Reuters)
“The government views non-consensual sexual deepfakes as a serious violation of human rights, dignity and public safety in the digital space,” Indonesia’s Minister of Communication and Digital Affairs Meutya Hafid said.
Regulators and officials in India, the European Union and several European countries have also raised concerns. In Britain, the independent online safety regulator Ofcom announced it has opened an investigation into X to determine whether the company breached its legal obligation to protect UK users from illegal content linked to sexually explicit images produced via Grok.
In France, the Paris public prosecutor’s office said it would examine the spread of explicit sexual deepfakes generated on the platform following complaints from members of parliament. France’s cybercrime unit is already investigating X on separate suspicions, including the dissemination of antisemitic statements and Holocaust denial through Grok.
The latest wave of criticism has been triggered by a new feature that allows users to easily and instantly edit existing images within Grok, without requiring consent from the rights holder. The tool quickly led to a surge of images circulating online in which people were digitally undressed without their consent and then depicted wearing virtual bikinis.
While some of the images were presented as jokes, others were clearly intended to create near-pornographic content. In some cases, users instructed Grok to apply extremely minimal bikini styles or to remove dresses and skirts entirely, including in images of children and toddlers.
Amid the backlash, xAI imposed new restrictions on the feature and told X users that image generation and editing tools would now be available only to paying subscribers. Critics say the move falls short, noting that users can still generate sexualized images through the Grok tab, which allows direct interaction with the chatbot inside X.
The standalone Grok app, which operates separately from X, continues for now to allow users to generate similar images without a subscription.
The European Commission called the circulation of nude and sexualized images of women and children on X “illegal and shocking,” saying the new limitations do not address the core problem. “This does not change the fundamental issue. Whether paid or unpaid, we do not want to see such images,” a commission spokesperson said.
Germany’s communications minister, Wolfram Weimer, described the phenomenon as the “industrialization of sexual harassment.” In the United States, Democratic senators have urged Apple and Google to remove X from their app stores.
In Turkey, where Grok was temporarily blocked in September 2025 over regulatory concerns, the debate over digital safety and platform oversight has resurfaced. Cybersecurity experts and legal scholars have described the trend as “AI-based sexual violence,” arguing that responsibility lies with the platform that enables the content.