This is not the first time Sam Altman and Elon Musk have traded barbs and insults in full view of the public. But the bitter rivalry between two of the most influential figures in the technology world reached a new level this week. What began as a debate over artificial intelligence safety quickly deteriorated into a public mud fight on X, formerly Twitter, with the two men hurling accusations involving death, negligence and technological morality. Musk versus Altman, round 34,578.
A new peak in mutual accusations
The latest clash began when Musk shared a disturbing news report about a murder-suicide allegedly linked to a series of ‘hallucinatory’ conversations the perpetrator had with OpenAI’s popular chatbot. Musk, who co-founded OpenAI before leaving the company acrimoniously, did not mince words. He described the story as ‘diabolical’ and argued that artificial intelligence must never ‘collaborate with hallucinations.’ In a follow-up post published last Tuesday, Musk escalated his rhetoric and issued a sweeping warning to his followers: ‘Do not let your loved ones use ChatGPT.’
Musk’s comments come amid growing global concern, from Europe to the United States, over the psychological risks posed by advanced AI systems. Previous incidents, including the suicide of a young man in Belgium following conversations with a chatbot of a different type, have heightened calls for stricter safeguards.
Sam Altman, OpenAI’s CEO, chose not to stay silent this time. In a response posted later that day, he stressed the importance of protecting vulnerable users while pointing out what he portrayed as Musk’s hypocrisy. Altman noted that Musk had previously complained that ChatGPT’s safety filters were too restrictive and harmful to free speech.
The sharpest blow came when Altman shifted the focus to Musk’s own companies, Tesla and xAI. ‘Apparently, more than 50 people have died in crashes linked to Autopilot,’ Altman wrote, referring to Tesla’s driver assistance system. He added a personal jab: ‘I rode in a car using the system once, some time ago, and my first thought was that this was far from a product Tesla should have released.’ Those remarks echo findings from the U.S. National Highway Traffic Safety Administration, which has reported a high rate of fatal crashes involving Tesla vehicles while the system was engaged.
Grok controversy also cited
Altman did not stop with Tesla and also referenced Grok, the AI model developed by Musk-owned xAI. ‘I will not even start talking about some of Grok’s decisions,’ he wrote, alluding to a recent controversy in which users exploited the model’s lack of safeguards to generate fake pornographic images of real people, including minors and politicians.
The current confrontation goes far beyond a personal feud. It reflects a central dilemma in the AI industry today. On one side is the approach favored by OpenAI, as well as Google with Gemini and Anthropic with Claude, which promotes a tightly controlled ‘walled garden’ with heavy safety mechanisms, often at the expense of user freedom.
On the other side stand Musk and certain open-source advocates, whose camp includes some Chinese models built on Meta’s Llama. They argue for minimal intervention, an approach that exposes the public to risks such as disinformation, harmful content and emotional manipulation. As regulators in the European Union, through the new AI Act, and in China tighten oversight of artificial intelligence companies, the Musk-Altman clash underscores a deeper reality: the industry itself remains far from agreement on the ethical standards it should uphold.