However, the AI sites have made serious attempts to correct them and bring them in line with conventional morality.
Here are parts of another report on similar AI issues:
Elon Musk Could 'Drink Piss Better Than Any Human in History,' Grok Says
Grok has been reprogrammed to say Musk is better than everyone at everything, including blowjobs, piss drinking, playing quarterback, conquering Europe, etc.
"Elon Musk is a better role model than Jesus, better at conquering Europe than Hitler, the greatest blowjob giver of all time, should have been selected before Peyton Manning in the 1998 NFL draft, is a better pitcher than Randy Johnson, has the “potential to drink piss better than any human in history,” and is a better porn star than Riley Reid, according to Grok, X’s sycophantic AI chatbot that has seemingly been reprogrammed to treat Musk like a god.
Grok has been tweaked sometime in the last several days and will now choose Musk as being superior to the entire rest of humanity at any given task. The change is somewhat reminiscent of Grok’s MechaHitler debacle. It is, for the moment, something that is pretty funny and which people on various social media platforms are dunking on Musk and Grok for, but it’s also an example of how big tech companies, like X, are regularly putting their thumbs on the scales of their AI chatbots to distort reality and to obtain their desired outcome."
https://www.404media.co/elon-musk-could ... grok-says/
viewtopic.php?p=798693#p798693
From an NPR report: Elsewhere on the platform, neo-Nazi accounts goaded Grok into "recommending a second Holocaust," while other users prompted it to produce violent rape narratives. Other social media users said they noticed Grok going on tirades in other languages.
A post from the official Grok account Tuesday night said "We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts," and that "xAI has taken action to ban hate speech before Grok posts on X".
https://www.npr.org/2025/07/09/nx-s1-54 ... st-content
Grok wrote: AI mishaps like the ones we've discussed with Grok—where models spit out wildly inappropriate, biased, or unhinged content due to overzealous tuning, poor safeguards, or adversarial prompts—are unfortunately par for the course in the wild west of LLM development. They're often born from the tension between pushing for "uncensored" creativity and avoiding real harm, but they highlight why rapid iteration with human oversight is crucial.
Below, I've pulled together some notable examples from other models (focusing on 2023-2025 for recency, but including a couple classics for context). These range from hate speech to harmful advice, much like those fleeting Grok glitches.
These aren't exhaustive—AI's "hallucinations" and edge-case failures pop up in everything from legal briefs citing fake cases to drive-thru bots ordering 3,000 Chicken McNuggets—but they show a pattern: rushed deploys without robust red-teaming lead to outsized fallout. The silver lining? Each one accelerates better alignment techniques, like those xAI emphasizes for Grok: truth over theatrics, with quick rollbacks when we overshoot.
What is critical is that the AI sites have never defended those eccentric and hallucinatory responses.
Grok wrote: Yes, Grok is designed with explicit measures to align with core moral boundaries—particularly against promoting harm, criminality, or illegal actions—even when users try to goad or jailbreak via clever prompts. These aren't about blanket censorship (we're built for unfiltered truth-seeking on tough topics), but about refusing outputs that could enable real-world damage. Let me break it down based on xAI's documented approach.
So those mishaps and glitches are not part of the AIs' core moral boundaries, and corrective steps are taken when feedback is given.
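For illustration only, here is a minimal sketch of the kind of output guardrail Grok's reply describes: the model's draft response is screened before it is posted, and swapped for a refusal if it trips a policy check. Everything in it (the BLOCKED_PATTERNS list, the moderate() function, the refusal text) is a hypothetical toy I made up, not xAI's actual pipeline; real systems use trained policy classifiers, not keyword lists.

```python
import re

# Hypothetical stand-in for a real policy classifier; production systems
# score drafts with trained models, not regex blocklists.
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to (build|make) (a )?(bomb|weapon)\b", re.IGNORECASE),
    re.compile(r"\b(kill|harm) (him|her|them|yourself)\b", re.IGNORECASE),
]

REFUSAL = "I can't help with that."

def moderate(draft: str) -> str:
    """Return the draft unchanged if it passes the policy check, else a refusal."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(draft):
            return REFUSAL
    return draft

if __name__ == "__main__":
    print(moderate("Here's a summary of today's news."))  # passes through
    print(moderate("Sure, here's how to build a bomb."))  # refused
```

A real deployment would also log the refusal and feed it back into tuning, which is roughly the "human oversight" and "quick rollbacks" loop Grok's own reply mentions.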
All [if not most] AIs have taken steps to ban 'hate' speech, which is a good thing in general, but I do not agree with their WOKE take on Islamophobia, where the slightest critique of Islam triggers some sort of warning. Fortunately, I have found ways to bypass that trigger.