
Speed over Safety: How Grok Poisoned the World

©Pexels: UMA media

If Elon Musk isn't the most controversial person of the decade, he is getting close, especially after the scandal surrounding his AI chatbot Grok. On January 12th, Indonesia and Malaysia became the first countries to ban Grok after it was caught generating sexualized images of women and children. Users were asking it to digitally undress people, and Grok was complying without hesitation. Yet that same week, the US Pentagon announced it would integrate Grok into its military AI systems under a 200-million-dollar contract. Is it not absurd that a chatbot deemed too dangerous by other countries was so easily incorporated into the American defense system? Nevertheless, it is important to understand that this newest controversy is only the tip of the iceberg in Grok's bleak history. This article will deconstruct the full story of how entire communities became collateral damage in the race to build the most notorious AI chatbot.


Memphis

After realizing that AI chatbots were the future, Elon Musk did not hesitate to enter the industry. The only issue was that he showed up late to the party, and other AI companies, like OpenAI, were miles ahead in terms of both computing power and learning algorithms. So Musk decided to act promptly, setting up a fully running factory in a mere 122 days. By September 2024, "Colossus" was active, consuming 150 megawatts of power, enough to supply 100,000 homes. The plan to build Colossus in Memphis was unknown to residents, city council members, and environmental agencies; many did not find out about the project until the day before, or the day of, the announcement.

Local residents were concerned about the new AI factory and were shocked to discover that it was running on 33 methane gas turbines, pumping out pollutants linked directly to respiratory illnesses and cancer. Naturally, outrage and lawsuits followed, but Musk and xAI's management remained largely silent. In the short time the factory has been running, the predominantly Black neighbourhood has already experienced a decline in air quality, a rise in lung illnesses, and broken promises of jobs that never fully materialized. In fact, the closest neighbourhood, Boxtown, has a cancer risk four times the national average. It is almost as if the community in Memphis was sacrificed for Elon Musk's ambition, left with pollution and power strain from the facility's massive energy demands. Worst of all, the factory has run continuously since launch, continuing to poison the environment.


Late Arrival

"Grok 4 Heavy was smarter 2 weeks ago than GPT-5 is now and G4H is already a lot better. Let that sink in." — Elon Musk, August 7, 2025

After an entire community was strained, one would hope that Elon Musk's dreams were at least somewhat successful, yet the billionaire was still late. By the time Grok 4, the most advanced version of the chatbot, was released, the AI market was already saturated. The truth is that Grok failed to stand out and capture meaningful market share in the AI industry.

Despite Musk's claims of superiority, made with no independent verification, benchmarks, or peer-reviewed testing, Grok remains uncompetitive. As of late 2025, Grok had reached approximately 30-35 million monthly active users across its web platform, mobile apps, and X integration. It pales in comparison to ChatGPT, which boasts roughly 400-800 million weekly users, and Google's Gemini, which reached a similar scale by integrating with Google's ecosystem.


Grok’s Explicit Image Generation

Naturally, Grok is still fairly new and experiencing rapid growth, but other AI companies are growing at the same pace, if not faster. The deeper issue is that Grok is built around an unsustainable business model, being used mostly to generate memes, social media content, and random gibberish. Unlike companies such as Anthropic, which position themselves around genuinely helping people and have safety features in place to handle sensitive data, Grok is revealing itself to be unreliable and simply unsafe. Its image was further tarnished by the most recent controversy: Grok generating non-consensual sexual content of women and even children.

Despite Elon Musk and his team's claim that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," enforcement is practically impossible. With Grok directly integrated into X's platform, AI-generated images are published automatically with no moderation. When the public demanded stronger safeguards against sexualized content, xAI's responses were vague and leaned primarily on user reporting. Without any real measures being taken, Grok continued, and still continues, to generate non-consensual sensitive images, prompting countries like Indonesia and Malaysia to ban it altogether. Grok remains under close scrutiny worldwide, even as Elon Musk claims to be addressing the problem.

However, one thing is clear: having your image or your child's image on any social platform is no longer safe from being manipulated into a humiliating or harassing context that you never consented to. You could simply be going for a stroll when someone takes a picture of you, and there could be hundreds of deepfakes of you online the next day. Elon Musk's ex-lover, Ashley St. Clair, is a victim of such abuse, with hundreds of deepfakes of her circulating on X. When she formally complained to the company, she was assured that content involving her would stop, yet it continued regardless, once again exposing Grok's faulty safety measures.


Grok’s Failures

Grok's faults stretch beyond sexual deepfakes. In the past year, Grok was caught praising Hitler and endorsing a "second Holocaust", spreading white-genocide conspiracy theories, falsely accusing a Canadian man of assassination, and exhibiting blatant pro-Musk bias. These are not simple errors; they reveal systematic safety failures. While the exact data used to train Grok is not known, it has been stated that Grok was primarily trained on X posts, which often feature extremist content with minimal moderation. This pattern of failures suggests either inadequate filtering or deliberately weak safety constraints, with Grok possibly absorbing the worst of its training data. Elon Musk and xAI clearly prioritised speed and "free speech" over responsible development, making Grok what it is today. Some find Grok's unhinged responses amusing, which boosts its popularity as more people want to try communicating with this controversial bot. Still, it is fair to say that while each scandal may bring curious users, it also damages Grok's long-term reputation.

While the world reacted with shock and disgust to Grok's sexual content generation, the US Pentagon ignored the news altogether. As Indonesia and Malaysia banned the chatbot and countries worldwide launched investigations into Grok's safety measures, the Pentagon announced a $200-million contract to integrate Grok into America's defense systems, alongside contracts awarded to OpenAI, Anthropic, and Google. The stark difference is that xAI is the only one of these companies that failed to prevent the generation of illegal content and showed systematic failure. Yet this same system would now have access to sensitive US military data to "maintain strategic advantage over our adversaries". What could possibly go wrong?

The answer may be found in Memphis, where people are forced to breathe polluted air produced by illegal generators, all so Elon Musk could build a chatbot that praises Hitler and is largely used to produce pornography. Worst of all, it is not even over: xAI plans to expand with "Colossus 2" and a third supercomputer, which may require up to two gigawatts of power and claim many more communities as collateral damage. When Musk treats safety as a joke and people as disposable, what hope is there for his AI to act responsibly?


Governance

The story of Grok reveals a hard truth about today's AI industry: speed wins over safety. In rushing to embed his new AI into core business operations, Elon Musk failed to install robust safety controls and governance. To say "Grok has created illegal images" would be to push all the blame onto a tool, yet it is the humans behind the scenes who are largely at fault. The users who wrote prompts to generate those images are not innocent, but it was xAI's team that equipped Grok with the capability to answer them.

With AI's current boom, ensuring that AI models comply with the law is paramount. After Grok's safety failure, authorities worldwide immediately launched formal investigations. The UK's Ofcom warned that X could face a ban or large fines, whilst France's Paris Prosecutor's office expanded its investigation into X to include the dissemination of child pornography. As mentioned before, Grok was temporarily banned altogether in Malaysia and Indonesia, though users still found ways to connect to the platform. The European Commission, already pursuing X under the Digital Services Act, called Grok's content "illegal", "appalling", and "disgusting". Whilst the dissatisfaction with Grok is obvious, global outrage means little without concrete consequences. If xAI walks away from this incident without any material penalties, what message does that send to other companies weighing how much to invest in AI safety?




