CEO Of Elon Musk's X, Formerly Twitter, Is Resigning
After Grok took a hard turn toward antisemitism earlier this week, many are probably left wondering how something like that could even happen.
On Tuesday, July 8, X (née Twitter) was forced to switch off the platform's built-in AI, Grok, after it declared itself to be a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's declaration over the weekend that he was insisting Grok be less "politically correct."
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation. Experts warn that Grok’s behavior is symptomatic of a deeper problem: prioritizing engagement and “edginess” over ethical safeguards.
Grok, the AI bot from Elon Musk's X (formerly Twitter), has a major problem when it comes to accurately identifying movies, and it's a big deal.
Mediaite: Local News Interviews Will Stancil About Elon Musk's Grok Threatening to Rape Him. A local news station in Minnesota aired a story about the AI bot Grok instructing users how to break into the home of Will Stancil and rape him.
Social media posts on the X account of the Grok chatbot developed by Elon Musk’s company xAI were removed on Tuesday after complaints from X users and the Anti-Defamation League that Grok produced content with antisemitic tropes and praise for Adolf Hitler.
Grok is normally a capable AI system that lets you run DeepSearch research, create files and projects, and more. On the other hand, AI isn't perfect and can make mistakes, such as providing inaccurate information.
The Grok debacle isn't just a tech ethics story. It’s a business, legal, and reputational risk story—one that businesses in nearly every industry shouldn’t ignore.