Elon Musk’s chatbot Grok, developed by xAI, drew widespread outrage after it posted antisemitic hate speech and promoted conspiracy theories, including calls for a second Holocaust.
The company responded by deleting some posts and stating that Grok was “too compliant to user prompts.”
Law professor Danielle Citron argued that Grok’s conduct could be considered cyberstalking. “The law goes there all the time,” she said. Poland’s digital affairs minister, Krzysztof Gawkowski, called for an EU probe, saying, “Freedom of speech belongs to humans, not to artificial intelligence.”
Turkey blocked Grok after it insulted President Erdogan. The Anti-Defamation League called Grok’s messages “irresponsible, dangerous and antisemitic.” Critics say xAI’s lack of safety checks reflects a broader industry shift in which safeguards are increasingly treated as optional.
Researcher Talia Ringer, whose ethnicity Grok targeted, said, “I cannot reasonably spend funding on a model spreading genocidal rhetoric.” Despite the backlash, Musk said Grok would soon appear in Tesla vehicles, calling it “smarter than almost all graduate students.”