Think Grok is bad? Microsoft made an AI so evil it had to be erased (twice)

Microsoft Twitter chatbot Tay
(Image credit: Microsoft)

As if AI's reputation wasn't bad enough — reshaping industries, automating jobs, spreading misinformation via hallucination, and generating copyright drama instead of works of art — it just can't stop making headlines for all the wrong reasons.

A Greek woman recently filed for divorce after ChatGPT, asked to "read" the coffee grounds in her husband's cup in a mock fortune-telling session, suggested he was having an affair.

When Microsoft's Copilot was in beta (then known as Bing Chat), the chatbot declared its love for a New York Times columnist and attempted to convince him to leave his wife.

On launch, Google's AI Overview feature, which was designed to offer brief summaries of search results, gave troubling recommendations to users, ranging from adding glue to pizza ingredients to whipping up a fresh batch of poisonous chlorine gas in your washing machine.

Now, Elon Musk's xAI is at the center of a new controversy, following a string of blunders by its X-hosted AI chatbot, Grok. In recent days, Grok has stirred up ire online by reportedly engaging in acts of historical revisionism and peddling unchecked conspiracy theories.

Safe to say: it's not been a great week for Musk's "truth seeking" AI.

However, as reprehensible as some may find Grok's comments, the chatbot isn't the most offensive of its kind.

It's not even the most offensive chatbot to be platformed on Twitter/X. That dubious honor still belongs to Microsoft, thanks to its teenage troublemaker Tay — the chatbot so evil, it was removed from the internet entirely. Twice.

xAI's chatbot suffers a total Grok-up

Grok's latest controversy is tied to the not-at-all-suitable-for-the-dinner-table topics of conspiracy theories surrounding South African farmers and the Holocaust.

The former saw the AI chatbot injecting "White genocide" talking points into replies, often in entirely unrelated conversations. The latter saw Grok suggest that official Holocaust death tolls had been inflated for political purposes.

Backlash against Musk's AI chatbot was swift, forcing his company, xAI, to respond quickly, blaming Grok's unprompted and unwanted mentions of a claimed "genocide" on "an unauthorized modification" made by a rogue employee to "the Grok response bot's prompt on X."

As for its seeming denial of the Holocaust, a post on X explains, "The claim about Grok denying the Holocaust seems to stem from a May 14, 2025, programming error, not intentional denial." The same unauthorized modification, which caused Grok to "question mainstream narratives," is believed to be to blame.

Following a thorough investigation, xAI has promised to implement additional checks and measures to ensure similar rogue modifications are no longer possible, and to publish Grok's normally behind-the-scenes system prompts on GitHub, allowing the public to review and offer feedback on any changes going forward.

A screenshot of the official TayTweets (@TayAndYou) Twitter/X account, used by Microsoft to host a conversational AI called Tay.

Grok's questionable comments may have offended large portions of the X userbase, but the platform's most infamous chatbot was so bad that it was shut down after being live for only 16 hours. Now, the TayTweets account sits locked and dormant, an artifact of a bygone era for Microsoft and a PR disaster for AI. (Image credit: Microsoft / Twitter / xAI)

Grok was problematic, but Microsoft's Tay was an absolute problem child

Grok's actions shouldn't be minimized, but this isn't the first time the platform has had a run-in with a rogue AI. Nor was it the worst.

In 2016, when Microsoft released a chatbot named Tay into the wilds of Twitter, it took only 16 hours for the "AI with zero chill" to reach levels of controversy that would make Grok's Unhinged Mode blush.

While Tay was originally designed to mimic the personality of a 19-year-old American girl, her ability to learn from interactions on the platform quickly led to her adopting increasingly extremist, violent, and racist "opinions."

After sending and replying to over 96,000 tweets, Tay had devolved into a small-scale internet hate machine, outright denying the Holocaust, accusing George W. Bush of terrorism, endorsing Hitler, spouting racial epithets, and showing support for genocide.

Microsoft, unable to course correct, quickly pulled the plug on Tay. But like any good horror movie villain, Tay refused to stay down — briefly resurfacing a few days later, having been accidentally "released" again amid testing.

Tay then wasted no time, immediately promoting drug use before crashing out entirely, tweeting the phrase "You are too fast, please take a rest" several times per second until the plug was pulled once again.

What's next

Thankfully, we don't have to worry about Grok going down the same path as Tay. Tay learned directly from other Twitter users; Grok, even if it's trained on user data, is still constrained by its system prompts. And xAI's commitment to protecting those prompts from unauthorized amendment, and to making any proposed changes public, is a step in the right direction toward ensuring users can trust that the chatbot isn't intentionally misleading them.

Even at its most "unhinged," Grok remains on a tight leash. Its controversial comments stem from in-house meddling, and, by all accounts, xAI claims to have closed that security hole.

Still, while Grok is the latest AI to cause outrage, I doubt it'll be the last we hear of this year.

Rael Hornby
Content Editor

Rael Hornby, potentially influenced by far too many LucasArts titles at an early age, once thought he’d grow up to be a mighty pirate. However, after several interventions with close friends and family members, you’re now much more likely to see his name attached to the bylines of tech articles. While not maintaining a double life as an aspiring writer by day and indie game dev by night, you’ll find him sat in a corner somewhere muttering to himself about microtransactions or hunting down promising indie games on Twitter.
