OpenAI CEO Sam Altman has pushed back against claims by Elon Musk that ChatGPT is dangerous, escalating a public feud between two of the most prominent figures in artificial intelligence as scrutiny of AI safety intensifies worldwide, the New York Post reported Wednesday.
Musk posted on X warning users not to use ChatGPT, linking the chatbot to nine deaths, including suicides.
Altman responded in a series of posts calling the claim misleading and turning the criticism back on Musk by pointing to Tesla's Autopilot system, which has been linked to dozens of fatal crashes.
"Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it's too relaxed," Altman wrote in a post. "Apparently more than 50 people have died from crashes related to Autopilot."
Altman said he had ridden in a vehicle using Autopilot only once but questioned whether the technology was safe enough to have been released publicly.
He also criticized Musk's own artificial intelligence project, Grok, saying the Tesla and SpaceX CEO "shouldn't be talking when it comes to guardrails."
The exchange comes as Musk pursues a lawsuit against OpenAI, alleging the company abandoned its original nonprofit mission in favor of commercial interests, the New York Post reported.
Musk is reportedly seeking as much as $134 billion in damages in a case that has become a central flashpoint in the deteriorating relationship between the former collaborators.
The clash coincides with heightened regulatory and legal scrutiny of AI technologies. Last week, the United Kingdom's Office of Communications, known as Ofcom, launched a formal investigation into Musk's Grok chatbot over the creation of sexualized images of minors.
European regulators are also examining whether Grok violated online safety rules, increasing pressure on Musk as he continues to publicly attack OpenAI.
At the same time, OpenAI faces multiple lawsuits in California alleging that ChatGPT contributed to suicides, psychosis and financial harm. The cases have raised questions about the company's internal safeguards and whether rapid product deployment exposed vulnerable users to risk.
Critics argue that while OpenAI has introduced safety guardrails, lapses in oversight and competitive pressure to release new tools may have left gaps. The company has said it is reviewing the allegations and assessing how its systems are used.
"This is an incredibly heartbreaking situation, and we're reviewing the filings to understand the details," an OpenAI spokesperson said in November.
Industry analysts say the public dispute reflects a broader battle over who gets to define responsible AI development.
Musk has positioned himself as a whistleblower warning about the dangers of artificial intelligence, while Altman has framed OpenAI as a company attempting to manage risk without stifling innovation, the New York Post reported.
How the conflict unfolds could influence OpenAI's standing with regulators and the public, analysts say, particularly as governments move to impose stricter rules on AI systems.
OpenAI was founded in 2015 as a nonprofit research lab and initially counted Musk among its early supporters and board members. Musk left the organization in 2018, and relations have deteriorated sharply since OpenAI launched commercially successful products such as ChatGPT.
Tensions between rapid innovation and safety concerns have been a recurring theme at OpenAI, highlighted by a wave of high-profile resignations in 2024 by researchers who raised concerns about governance and risk management.
Brian Freeman
Brian Freeman, a Newsmax writer based in Israel, has more than three decades of experience writing and editing about culture and politics for newspapers, online outlets and television.