Global regulators crack down on Musk’s AI chatbot Grok over non‑consensual deepfakes

Countries worldwide are restricting access to xAI’s Grok chatbot and opening investigations into it, citing its ability to generate sexually explicit deepfake images without consent and prompting urgent regulatory and legal responses.
Governments across multiple continents are escalating oversight of Elon Musk’s artificial intelligence chatbot, Grok, following widespread reports that its image‑generation feature is being used to create non‑consensual sexually explicit deepfakes. The tool, which allows users to modify real photographs or generate synthetic images, has prompted regulatory blocks, criminal probes, and formal warnings amid mounting concerns over privacy violations, exploitation, and harm to women and minors.
Southeast Asia Leads with Access Restrictions
Indonesia became the first country to temporarily block access to Grok last Saturday, citing the generation of pornographic content based on personal photos as a violation of human dignity and digital safety. Malaysia followed with a similar restriction, stating that X Corp.’s reliance on user‑reporting mechanisms failed to address inherent risks. Both nations emphasized the need for proactive safeguards before considering restoring access.
European Investigations and Legal Pressure
In Europe, regulatory pressure is intensifying. The UK’s Ofcom has opened a formal investigation into whether content generated via Grok constitutes intimate image abuse or child sexual abuse material. France has expanded an existing probe into X to include the chatbot, while Italy’s data protection authority warned that creating or sharing “digital stripping” images could lead to criminal liability. The European Commission has called on X to implement effective measures under the EU’s Digital Services Act and AI Act, warning of significant penalties if violations persist.
Calls for Stronger Legislation and Platform Accountability
Germany is preparing stricter laws against “digital violence,” and Australia’s eSafety Commissioner has signaled it may use enforcement powers under the Online Safety Act. Canada’s AI minister emphasized that deepfake sexual abuse constitutes violence and pointed to upcoming legislative reforms. South Korea’s media watchdog has separately asked X to submit plans for protecting minors from harmful AI‑generated content. These coordinated responses reflect a growing consensus that existing content‑moderation practices and criminal laws are inadequate to address the speed and scale of AI‑enabled abuse.
Broader Implications for AI Governance
The global reaction to Grok underscores urgent debates about developer responsibility, ethical AI design, and the need for robust safety‑by‑default measures in generative tools. As regulators move from warnings to concrete actions, the case is becoming a benchmark for how societies balance innovation with the protection of individual rights in the age of accessible, powerful AI.