South Korea presses X to shield minors from explicit AI deepfakes via Grok

South Korea’s media regulator has formally requested that social media platform X implement stronger safeguards to protect minors from sexually explicit AI-generated content created by its chatbot Grok.
The Korea Media and Communications Commission delivered the request on Wednesday, calling on X to take urgent steps against non-consensual sexually explicit deepfakes generated through the AI chatbot and citing growing public concern over the misuse of AI tools to create harmful imagery targeting young users.
Regulatory Demand for Specific Safeguards
In a statement, the commission asked X to prevent illegal activities involving Grok and to present concrete measures to shield teenage users from harmful content. Proposed safeguards could include access restrictions or enhanced parental controls for minors. Under South Korean law, social networking service operators—including X—are required to appoint a minor-protection officer and submit an annual compliance report. The commission also emphasized that producing, distributing, or storing sexual deepfake content without consent is a criminal offense under local regulations.
Global Scrutiny of Grok’s AI Features
Grok, launched in 2023 and integrated into X, lets users generate text and images from prompts. Its image-generation capability has drawn alarm globally over its potential to produce manipulated or wholly synthetic non-consensual sexual content. Several countries have recently taken action: Indonesia and Malaysia blocked access to Grok last weekend over similar concerns, while authorities in the UK and France have opened investigations into the platform for possible breaches of online safety and privacy laws.
Balancing Innovation with Protective Regulation
Commission Chair Kim Jong-cheol stated that the agency supports the “healthy and safe development of new technologies” but plans to introduce “reasonable regulations” to counter negative side effects. This includes policies to curb the circulation of illegal information and to mandate that AI service providers implement stronger minor-protection measures. The move reflects a broader international trend toward holding tech platforms accountable for the ethical and legal implications of generative AI tools.