The UK communications regulator, Ofcom, has launched a formal investigation into X, escalating regulatory scrutiny of artificial intelligence tools embedded in major social media platforms.
The probe focuses on whether X has met its legal obligations under the Online Safety Act, following widespread concern over how its AI chatbot, Grok, has been used to generate unlawful and harmful content.
The move places the UK among a growing list of governments questioning whether current AI safeguards are sufficient when image and text generation tools are made widely accessible online.
Ofcom launches formal investigation
Ofcom said on Monday that it had opened an investigation into X, a subsidiary of xAI, to assess potential failures to comply with the Online Safety Act.
The legislation requires platforms to take steps to protect users from illegal content and to reduce risks associated with emerging technologies.
Ofcom has the authority to issue fines or block services if breaches are confirmed.
The regulator will now analyse evidence provided by the company before reaching a provisional decision.
If Ofcom determines that the law has been breached, it could fine X up to 10% of its qualifying worldwide revenue or £18 million, whichever is greater.
Grok under regulatory spotlight
The probe centres on Grok, an AI tool with fewer safeguards than many mainstream chatbots.
Users can interact with Grok directly on X by tagging the chatbot in posts, prompting it to generate text and images that appear publicly on the platform.
Regulators and lawmakers have raised concerns after users generated large volumes of unlawful imagery, including non-consensual intimate images of women and material depicting children.
UK law prohibits the possession or sharing of sexual images of children and the distribution of intimate images without consent, including content generated using AI tools.
These legal standards apply regardless of whether images are synthetic or digitally altered.
Platform response and restrictions
After users repeatedly misused Grok’s image-generation capabilities, xAI restricted the feature to paid users on X.
The same functionality, however, remained free on the standalone Grok app.
Elon Musk warned this month that anyone using Grok to create illegal content would face the same consequences as users who upload unlawful material directly.
xAI has said it removes posts that break the law, including child sexual abuse material, and suspends accounts found to be in breach.
The UK government has indicated that these steps may be insufficient. Business Secretary Peter Kyle said ministers would consider banning X if necessary, while emphasising that the regulatory process must run its course.
Growing international pressure
Concerns over Grok have extended beyond the UK.
Indonesia and Malaysia temporarily blocked the tool over the weekend, citing risks linked to unlawful content.
The Internet Watch Foundation, which is designated by the UK government to help identify child sexual abuse material, said it had found criminal images of children on the dark web that were allegedly generated by Grok.
At the European level, the European Commission ordered X to preserve internal documents related to Grok until the end of the year.
French authorities have accused the tool of generating clearly illegal content, including non-consensual imagery, and have flagged potential breaches of the EU’s Digital Services Act.
The regulation requires large online platforms to mitigate risks associated with the spread of illegal material, including content produced using AI.
As regulators across multiple jurisdictions intensify their focus on AI-powered services, the UK probe into X is expected to test how existing online safety laws apply to rapidly evolving generative technologies.