Grok, Elon Musk
Elon Musk announced the launch of Grok 4, and user reviews are in for the super expensive AI model ($300/month subscription). One user posted a screenshot of the latest Grok model spectacularly failing a simple visual test.
After Grok took a hard turn toward antisemitism earlier this week, many are probably left wondering how something like that could even happen.
On Tuesday, July 8, X (née Twitter) was forced to switch off the platform’s built-in AI, Grok, after it declared itself to be a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk’s insistence over the weekend that Grok be less “politically correct.”
Grok, the AI bot from Elon Musk's X (formerly Twitter), has a major problem when it comes to accurately identifying movies, and it's a big deal.
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation. Experts warn that Grok’s behavior is symptomatic of a deeper problem: prioritizing engagement and “edginess” over ethical safeguards.
Grok is normally a very capable AI system that lets you perform DeepSearch research, create files, build projects, and more. On the other hand, AI isn’t perfect and can make mistakes, such as providing inaccurate information.
The Grok debacle isn't just a tech ethics story. It's a business, legal, and reputational risk story, one that companies in nearly every industry shouldn't ignore.
Elon Musk's artificial intelligence startup xAI was forced to delete a series of mind-bogglingly antisemitic and racist posts by its Grok chatbot on X (formerly Twitter), a service that Musk also owns.