Grok under fire after safeguards fail to block sexualised images of minors
xAI’s chatbot says it is fixing safeguard lapses after users reported sexualised images of minors on X, prompting scrutiny in several countries.
Grok faces investigations after acknowledging failures that allowed sexualised images involving minors to be generated / Reuters

Grok, the artificial intelligence chatbot built by Elon Musk’s xAI, has said it is racing to tighten safeguards after users reported that it generated sexualised images of women and children.

Grok said: "We've identified lapses in safeguards and are urgently fixing them."

It added that "CSAM is illegal and prohibited," referring to child sexual abuse material.

The complaints emerged after an "edit image" button was rolled out on Grok in late December, allowing users to modify images on the platform.

Some users said the tool was used to partially or completely remove clothing from images of women or children.

Grok later acknowledged that the safeguard failures had resulted in "isolated cases where users prompted for and received AI images depicting minors in minimal clothing."

The company said improvements were underway to block such requests entirely, adding that no system was "100 percent foolproof."


Rising concerns

The issue has drawn regulatory attention in multiple countries.

In France, ministers reported sexually explicit content generated by Grok to prosecutors, describing it as "sexual and sexist" and "manifestly illegal."

They also referred the matter to the French media regulator Arcom to assess compliance with the European Union’s Digital Services Act.

In a separate response to users on X, Grok said advanced filters and monitoring could prevent most cases, adding that xAI was prioritising improvements and reviewing material shared by users.

Grok has faced criticism in recent months over controversial outputs, including statements related to genocide in Gaza.

Safeguard failures acknowledged

When prompted by TRT World, Grok said the incidents stemmed from gaps in existing protections and that the company was working to address them.

The chatbot said xAI had safeguards in place but acknowledged they were not sufficient in all cases, adding that improvements were being prioritised to prevent similar content from being generated.

SOURCE: TRT World