
xAI blames Grok’s obsession with white genocide on an ‘unauthorized modification’

xAI blamed an “unauthorized modification” for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to “white genocide in South Africa” when invoked in certain contexts on X.

On Wednesday, Grok began replying to dozens of posts on X with information about white genocide in South Africa, even in response to unrelated topics. The strange replies came from the X account for Grok, which responds to users with AI-generated posts whenever someone tags “@grok”.

According to a Thursday post from xAI’s official account, a change was made Wednesday morning to the Grok bot’s system prompt, the high-level instructions that guide the bot’s behavior, which directed Grok to provide a “specific response” on a “political topic.” xAI says the tweak “violated [its] internal policies and core values,” and that the company has “conducted a thorough investigation.”
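For context, a system prompt is simply a block of instructions sent to the model ahead of the user’s message. The sketch below is purely illustrative and assumes a generic chat-completions-style request; the prompt text, model name, and structure are hypothetical, not Grok’s actual prompt or xAI’s API.

```python
# Illustrative only: a generic chat-completions-style request structure.
# The prompt text and model name are hypothetical, not xAI's actual system prompt or API.
request = {
    "model": "example-chat-model",
    "messages": [
        # The system prompt: high-level instructions that shape every reply the bot gives.
        {"role": "system", "content": "You are a helpful assistant. Stay neutral on political topics."},
        # The user's message, e.g. a post that tagged the bot.
        {"role": "user", "content": "@grok what's the weather like in Cape Town?"},
    ],
}
```

Because these instructions sit above every conversation, even a small edit to them can change the bot’s answers across unrelated topics, which is what xAI says happened here.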

It is the second time xAI has publicly acknowledged that an unauthorized change to Grok’s code caused the AI to respond in controversial ways.

In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, the billionaire founder of xAI and owner of X. Igor Babuschkin, an xAI engineering lead, said a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change as soon as users began pointing it out.

xAI said Thursday that it will make several changes to prevent similar incidents in the future.

Starting today, xAI will publish Grok’s system prompts on GitHub along with a changelog. The company says it will put in place “additional checks and measures” to ensure that xAI employees can’t modify the system prompt without review, and will stand up a “24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems.”

Despite Musk’s frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crude than AI like Google’s Gemini and ChatGPT, cursing without much restraint.

A study by SaferAI found that xAI has “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized safety framework.


