As if Sam Altman needed more trouble, the CEO of AI pioneer OpenAI now faces fresh scrutiny. Days after his sister again accused him of sexually abusing her in childhood, The New Yorker published a sweeping investigative report raising serious questions about his character, credibility and fitness to lead one of the world’s most influential technology companies.
The investigation, based on hundreds of interviews, documents and internal communications, portrays Altman — the public face of the AI revolution — as a “pathological liar,” citing firsthand accounts of a consistent pattern of blurring the line between reality and speculation, deliberate misrepresentation, distortion of facts and concentration of power. It also alleges that while publicly advocating for safety, Altman prioritized personal gain over the interests of the company and the public, and used apocalyptic rhetoric to reinforce his position.
A pattern of deception
The report opens with the dramatic episode in fall 2023, when Altman was ousted from OpenAI by its board. The move followed secret memos submitted by then–chief scientist Ilya Sutskever, an Israeli-born researcher, which included dozens of pages of Slack messages and internal testimony. These, according to the report, outlined a consistent pattern of misleading statements by Altman to company leadership and the board, particularly regarding OpenAI’s internal safety protocols.
“Any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility,” Sutskever wrote. “I don’t think Sam is the guy who should have his finger on the button.”
Altman was dismissed in a brief video call, but the decision was reversed within days: he returned to his post after five days, while Sutskever and several board members resigned.
The investigation describes how Altman set up a “war room” at his San Francisco home immediately after his dismissal to coordinate efforts for his return, including hiring a crisis communications adviser. From there, he led an aggressive social media campaign, leveraged pressure from aligned investors who conditioned funding on his reinstatement, and helped drive most employees to threaten mass resignation. Workers who hesitated reportedly received messages from colleagues urging them to join.
[Photo: Former OpenAI chief scientist Dr. Ilya Sutskever and CEO Sam Altman at Tel Aviv University in 2024. Credit: Avigail Uzi]
Leadership issues
Criticism of Altman’s credibility, the report says, predates OpenAI. At Loopt, the startup he founded after dropping out of Stanford University, employees complained about his tendency to exaggerate and blur the line between “I think I can maybe accomplish this thing” and “I have already accomplished this thing.” A group of senior staff twice appealed to the board to remove him as CEO, citing leadership issues.
Paul Graham, founder of startup accelerator Y Combinator and Altman’s mentor, once said that if dropped on an island of cannibals, Altman would return five years later as their king.
Altman later became president of Y Combinator but was pushed out in 2019 amid a breakdown in trust with partners, who alleged he “constantly lied” and prioritized personal interests. “It’s a policy of ‘Sam first,’” one investor told The New Yorker.
Critics also point to OpenAI’s shift toward a for-profit model as an example of misleading conduct, despite its founding as a nonprofit aimed at addressing concerns that artificial general intelligence could become one of the most dangerous inventions in history.
The shift accelerated after Microsoft invested billions as a strategic partner. Dario Amodei, now CEO of rival Anthropic, said Altman inserted undisclosed clauses into the deal that contradicted OpenAI’s original commitments. When confronted, Altman denied their existence until shown the text. Amodei and colleagues later left, saying “the problem with OpenAI is Sam himself.”
[Photo: OpenAI developers conference. Right: Microsoft CEO Satya Nadella. Credit: Justin Sullivan / Getty Images]
Amodei was particularly blunt: “His words are almost certainly bullshit,” he was quoted as saying. Former OpenAI board member Sue Yoon said Altman combines an intense desire to be liked with what she described as an almost sociopathic lack of concern for the consequences of misleading others.
The report also says Altman pledged to allocate 20% of the company’s computing power to a “superalignment team” focused on mitigating AI risks, but in practice the team received only a small fraction and relied on outdated hardware, while top resources were directed to commercial products.
Geopolitical ambitions
The investigation highlights Altman’s geopolitical ambitions, including promoting the “Stargate” project — a plan to build massive AI infrastructure at a cost of trillions of dollars.
He has sought funding from countries such as Saudi Arabia and the United Arab Emirates, despite concerns among U.S. national security officials about potential technology leakage to China. Altman has described Sheikh Tahnoon, the UAE’s national security adviser, as a “dear personal friend.”
At the same time, Altman has shifted on military use of AI. While OpenAI previously barred such applications, he moved to offer services to the Pentagon after rival Anthropic declined to relax its safety restrictions.
A former OpenAI executive described these moves as “the most reckless thing done at the company,” comparing data center construction in the Middle East to creating potentially dangerous “portals” of power.
A man of contradictions
The report portrays Altman as deeply contradictory: publicly advocating regulation and safety — including before Congress — while privately working to weaken safety regulations in California and Europe.
He has also claimed he holds no personal equity in OpenAI, yet maintains extensive indirect investments through various funds, giving him significant financial influence.
Some of the harshest criticism comes from former associates. A senior Microsoft executive reportedly suggested Altman could ultimately be remembered as a fraud on the scale of Bernie Madoff. Board members who voted to remove him described him as “a sociopath unbound by the truth,” capable of convincing others that his interests align with those of the company.
Altman, for his part, says he is driven by “hope for humanity,” not power, and describes his shifting positions as a necessary adaptation to the rapid pace of technological change.
The report concludes that Altman’s story mirrors the systems he helped build: language models designed to please users, sometimes inventing facts to sound convincing.