The Case Against Sam Altman: Inside the New Yorker's 18-Month Investigation and the OpenAI CEO's Combative Response

On April 6, 2026, The New Yorker published "Moment of Truth: Sam Altman May Control Our Future — Can He Be Trusted?" — a 15,000-word investigation by Ronan Farrow and Andrew Marantz, built on 18 months of reporting, more than 100 interviews, and previously undisclosed internal documents [1]. Four days later, someone threw a Molotov cocktail at Altman's San Francisco home at 3:45 AM [2]. A 20-year-old suspect was arrested after also allegedly threatening to burn down OpenAI's headquarters [3].

The collision of those two events — the most detailed public accounting of Altman's leadership and a literal act of violence at his doorstep — produced an extraordinary week that crystallized the tensions surrounding the most powerful figure in artificial intelligence, the CEO of a company now valued at $852 billion [4].

What the Investigation Alleges

The New Yorker's core claim is that Altman has engaged in a "consistent pattern of lying" across his career, from his early startup Loopt through Y Combinator and into his tenure at OpenAI [1]. The evidence rests heavily on two sets of internal documents: roughly 70 pages of Slack messages, HR records, and analysis compiled by former OpenAI chief scientist Ilya Sutskever in fall 2023, and more than 200 pages of private notes kept by former OpenAI VP of research Dario Amodei, who left to co-found Anthropic [5]. Both men reached the same conclusion, according to the reporting: "The problem with OpenAI is Sam himself" [5].

The allegations fall into several categories:

Misleading the board on safety. Sutskever's memos allege that Altman "misrepresented facts to executives and board members, and deceived them about internal safety protocols" [1]. Specifically, the investigation reports that Altman assured board members in December 2022 that controversial GPT-4 features had received safety panel approval. Former board member Helen Toner later discovered those features were never actually approved [6]. Separately, Farrow reports that across many hours of board briefings, Altman never mentioned that Microsoft had released an early ChatGPT version in India without completing a required safety review [6].

Breaking the superalignment compute commitment. In 2023, OpenAI publicly announced that its superalignment team — focused on ensuring advanced AI systems remain under human control — would receive "20% of the compute we've secured to date," a commitment worth more than $1 billion [6]. Four people who worked on the team told the New Yorker that the actual allocation was "between one and two per cent" of the company's compute, running on outdated hardware [1]. The team was effectively shut down in 2024 after co-lead Jan Leike resigned, emailing colleagues that OpenAI had prioritized "product and revenue above all else" [6].

The Y Combinator departure. Y Combinator co-founder Paul Graham is quoted as saying Altman "had been lying to us all the time" before his 2019 departure from the accelerator [6]. Multiple YC partners confirmed he was effectively forced out, contradicting Altman's public accounts and sworn depositions in which he denied being fired [6].

Financial self-interest. Silicon Valley investors described Altman's approach as "Sam first" — selectively blocking outside investors from top companies and negotiating personal terms above standard rates, including in deals involving companies like Stripe [6]. The investigation also details financial entanglements involving former romantic partners [6].

Military and geopolitical maneuvering. The investigation alleges that Altman pursued Saudi Arabian investment after the Khashoggi murder and cultivated a relationship with the UAE's Sheikh Tahnoon, raising what Biden administration officials described as "red flags" about "transactional relationships" [6]. More recently, when Anthropic refused Pentagon demands to drop autonomous weapons restrictions and was blacklisted by Defense Secretary Pete Hegseth, Altman was simultaneously negotiating a $50 billion Pentagon partnership [6][7].

Altman's Response

Altman did not wait long to respond. In a personal blog post published hours after the Molotov cocktail attack, he addressed both the article and the violence [2]. He called the New Yorker piece "incendiary" and wrote: "I have underestimated the power of words and narratives" [8].

On his own conduct, Altman offered a partial acknowledgment: "I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company. I have made many other mistakes throughout the insane trajectory of OpenAI" [1]. He added: "I am sorry to people I've hurt and wish I had learned more faster" [8]. He also acknowledged being "not proud of being conflict-averse" [8].

But Altman pushed back on the broader framing. He denied lying, characterized OpenAI's shift to for-profit operations and deal-making as "normal competitive business rather than a betrayal," and maintained that the company would continue safety work — "or at least safety-adjacent projects" [1]. OpenAI representatives disputed several specific allegations, saying the company's "mission did not change" and that they "continue to invest in and evolve our work on safety" [1].

Altman also referenced his ongoing legal battle with Elon Musk, noting his refusal to accept what he characterized as demands for unilateral control over the company [8].

The Insider Divide

Former board member Sue Yoon offered one of the investigation's most quoted characterizations, describing Altman as combining "an intense desire to be liked" with "an almost sociopathic lack of concern for the consequences of misleading others" [9].

But the article also presents sharply divided views. Some colleagues credit Altman with driving OpenAI's rapid growth and global influence, describing him as operating at the speed the competitive AI landscape demands [1]. Supporters argue that the accounts from former board members and departed executives are colored by grievance — that Sutskever's memos were compiled specifically to justify the November 2023 firing, and that Amodei's notes reflect the perspective of someone who left to build a direct competitor [1].

The strongest version of this defense: Altman inherited a structurally impossible job — running a capped-profit company inside a nonprofit shell while competing against trillion-dollar corporations — and the messy compromises the New Yorker frames as deceptions were the ordinary friction of managing that contradiction at breakneck speed.

The 2023 Governance Crisis and What Changed

The New Yorker investigation provides the most detailed account yet of what preceded the November 17, 2023, board vote to fire Altman. Sutskever spent weeks that fall compiling his 70-page dossier, sending it as disappearing messages because he was "terrified" someone would find them [5]. The board concluded that Altman "was not consistently candid in his communications" [10].

Within four days, Altman was reinstated under massive pressure from investors and employees. The original board was gutted. A new three-person board formed: Bret Taylor, Larry Summers, and Adam D'Angelo, the only holdover [10]. Three additional members — Dr. Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo — were later added [10].

The structural changes went further. OpenAI's capped-profit entity was converted into a Delaware public benefit corporation (PBC). Under the final restructuring, Microsoft received a 27% stake, the nonprofit retained 26%, and employees and other investors received the remaining 47% [10]. The California and Delaware Attorneys General extracted governance protections as conditions for approval, including giving the nonprofit sole power to appoint and remove all PBC board members [11].

Whether these reforms are sufficient is contested. Critics note that the new board is populated with members who owe their seats to Altman's reinstatement, not to the nonprofit's original mission. Defenders counter that the AG-imposed conditions — particularly the nonprofit's continued board control — represent meaningful structural safeguards that did not exist before.

The Money at Stake

OpenAI's financial trajectory provides essential context for the governance debate. The company closed a $122 billion funding round in March 2026 at a post-money valuation of $852 billion, anchored by Amazon, NVIDIA, and SoftBank [4]. It generates roughly $2 billion in monthly revenue, though it remains unprofitable [4].

[Chart: OpenAI valuation over time. Source: Bloomberg, TechCrunch, CNBC; data as of March 31, 2026.]

Altman has historically claimed to hold no equity in OpenAI, testifying before the U.S. Senate that "I have no equity in OpenAI" and "I do this because I love it" [12]. But Bloomberg reported in 2024 that the board discussed granting Altman a 7% equity stake, which at the current valuation would be worth approximately $60 billion [12]. Altman called reports of a "giant equity stake" "just not true," though he acknowledged the board had discussed giving him a stake without specific figures [13].

The New Yorker investigation frames this ambiguity as itself a form of misdirection — that Altman cultivated the image of a selfless steward while maintaining extensive indirect investments through various funds, giving him significant financial influence over the AI ecosystem [6]. Whether or not he holds direct OpenAI equity, a 7% stake in an $852 billion company would represent one of the largest individual windfalls in corporate history, creating what critics call an obvious structural conflict of interest for someone overseeing a nonprofit-to-for-profit conversion [12].

Comparative Scrutiny: Is Altman Being Held to a Different Standard?

No other AI lab CEO has faced an investigation of comparable depth from a major publication. Demis Hassabis at Google DeepMind operates within Alphabet's corporate structure, which diffuses individual accountability. Dario Amodei at Anthropic has faced his own scrutiny — investors have privately described concerns about his temperament, with one shareholder calling his public criticism of OpenAI's Pentagon deal part of an "extremely concerning" pattern [14] — but nothing approaching the 100-source, 18-month treatment the New Yorker gave Altman.

There are structural reasons for this asymmetry. OpenAI's unusual nonprofit-to-for-profit conversion created a unique governance story. Altman's 2023 firing-and-reinstatement was a genuinely unprecedented corporate drama. And OpenAI's position as the company that launched the current AI era with ChatGPT makes its CEO a natural focal point for broader anxieties about the technology.

Altman's critics argue he has earned this scrutiny through his own choices — publicly claiming to prioritize safety while allegedly undermining it internally. His defenders argue that the intensity of coverage reflects a media environment that has cast him as a symbolic stand-in for every fear about AI, making fair assessment difficult.

Legal and Regulatory Exposure

The California Attorney General's office conducted what it described as a "robust investigation" into OpenAI's restructuring [11]. In May 2025, OpenAI reversed its initial plan for a full for-profit conversion after pushback from California AG Rob Bonta and Delaware's attorney general [15]. By October 2025, both AGs signed off on a revised plan that preserved nonprofit control, required safety commitments, and included a commitment to keep OpenAI headquartered in California [11].

A coalition of consumer advocacy groups, including Public Citizen, has urged regulators to reject what they characterize as an inadequate deal that undervalues the nonprofit's charitable assets [16]. The San Francisco Foundation led a coalition requesting further AG action to protect those assets [16].

As of April 2026, no personal legal action has been taken against Altman based on the New Yorker's allegations. The AG oversight pertains to the corporate conversion, not to individual conduct. Any future regulatory action would likely hinge on whether Altman's representations to the board or to regulators during the restructuring process can be shown to have been materially false — a high bar that would require evidence beyond the anonymous accounts published so far.

OpenAI is also engaged in active litigation with Elon Musk, who has separately alleged that the company abandoned its founding mission. Musk endorsed the New Yorker's findings on social media, agreeing with characterizations of Altman as untrustworthy [17].

The Molotov Cocktail and the Limits of Accountability Journalism

The physical attack on Altman's home complicates the public conversation in ways that resist easy resolution. Altman drew a direct line between the article and the violence, writing: "Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me" [8]. He shared a photo of his husband and their child to underscore the personal stakes [2].

The suspect, a 20-year-old man arrested by San Francisco police, has not been publicly linked to the New Yorker article or shown to have read it [3]. The investigation into motive is ongoing. But Altman's framing — positioning the article's publication and the attack as part of the same dangerous week — effectively shifted public discussion from the substance of the allegations to the question of whether aggressive reporting on powerful figures creates safety risks.

That framing is itself contested. Accountability journalism about figures controlling consequential technology is a routine and necessary function of the press. At the same time, the personal safety of public figures and their families is not a trivial concern, particularly in an era of escalating threats against tech executives.

What Remains Unresolved

The New Yorker investigation is substantial but not conclusive. Several of its most damaging claims rest on anonymous sources or on documents compiled by people who had direct reasons to build a case against Altman — Sutskever was preparing to fire him; Amodei had left to start a rival company. The strongest allegations — the superalignment compute shortfall, the safety approval misrepresentations — are specific enough to be verifiable, but OpenAI has disputed them without providing its own documentary evidence.

Altman's response acknowledged mistakes in general terms while contesting the specific narrative. Whether the admissions — "I am not proud of handling myself badly" — represent genuine accountability or strategic concession is a judgment that depends heavily on what happens next: whether OpenAI's reformed governance structures actually constrain executive power, whether safety commitments are met with real resources, and whether the nonprofit's control over the for-profit board proves to be more than a legal formality.

The AI industry's most consequential company is now valued at $852 billion, is seeking a potential IPO, and is led by a CEO whose trustworthiness has been publicly challenged in granular detail by two of the country's most prominent investigative journalists. The question the New Yorker posed in its headline — "Can He Be Trusted?" — remains open. The answer will be determined not by any single article or blog post, but by the accumulation of decisions, disclosures, and governance outcomes in the months ahead.

Sources (17)

[1] "The New Yorker Turns Up the Heat on OpenAI's Sam Altman: A Two-Decade Saga of Allegations" (opentools.ai). Overview of The New Yorker's April 2026 investigation into Altman, including details on Sutskever's memos, the superalignment compute shortfall, and Altman's response acknowledging mistakes.

[2] "Sam Altman Confirms Molotov Cocktail Incident and Responds to 'Incendiary' New Yorker Investigation" (hollywoodreporter.com). Altman confirmed a Molotov cocktail was thrown at his home at 3:45 AM, sharing a photo of his husband and child and calling the New Yorker article "incendiary."

[3] "Suspect arrested after incendiary device thrown at OpenAI CEO Sam Altman's home" (cnn.com). A 20-year-old man was arrested after throwing a Molotov cocktail at Altman's San Francisco home and allegedly threatening to burn down OpenAI headquarters.

[4] "OpenAI closes funding round at an $852 billion valuation" (cnbc.com). OpenAI closed a $122 billion funding round at an $852 billion valuation, generating $2 billion in monthly revenue while remaining unprofitable.

[5] "Ilya Sutskever Secret Memos Accuse Sam Altman of Lying" (hvylya.net). Sutskever compiled 70 pages of Slack messages and HR documents; Amodei maintained 200+ pages of notes. Both concluded "The problem with OpenAI is Sam himself."

[6] "8 Allegations Against OpenAI's Sam Altman in New Yorker's Investigation You Should Know" (techloy.com). Detailed breakdown of all eight allegations, including the Y Combinator departure, safety charter betrayal, broken compute commitment, and Pentagon deal.

[7] "Anthropic CEO Dario Amodei calls OpenAI's messaging around military deal 'straight up lies'" (techcrunch.com). Amodei accused Altman of "straight up lies" regarding Pentagon contract messaging after Anthropic was blacklisted for refusing to drop weapons restrictions.

[8] "Sam Altman Responds to 'Incendiary' New Yorker Article and Molotov Cocktail Attack" (thewrap.com). Altman wrote "I am sorry to people I've hurt" and acknowledged being "not proud of being conflict-averse" while maintaining his commitment to AI.

[9] "'Almost a sociopathic lack of concern': 5 biggest revelations from The New Yorker's deep dive into Sam Altman" (tomsguide.com). Former board member Sue Yoon described Altman as having "an almost sociopathic lack of concern for the consequences of misleading others."

[10] "The OpenAI Governance Transition: The History, What It Is, and What It Means" (forum.effectivealtruism.org). Analysis of OpenAI's governance changes after the 2023 crisis, including the new board composition and the nonprofit-to-PBC conversion with AG oversight.

[11] "Attorney General Bonta Issues Statement on OpenAI's Recapitalization Plan" (oag.ca.gov). California AG Bonta conducted a "robust investigation" and secured concessions ensuring protection of charitable assets, prioritization of safety, and a commitment to keep OpenAI headquartered in California.

[12] "OpenAI Discusses Giving Sam Altman 7% Stake in For-Profit Transition" (bloomberg.com). The board discussed granting Altman a 7% equity stake, which at current valuations would be worth approximately $60 billion.

[13] "Sam Altman tells OpenAI staff there's no plan for him to receive a 'giant equity stake'" (cnbc.com). Altman called reports of a giant equity stake "just not true" while acknowledging the board had discussed giving him a stake, without specific figures.

[14] "Anthropic backers fret over AI giant's volatile CEO Dario Amodei" (finance.yahoo.com). One Anthropic shareholder called Amodei's public diatribes "extremely concerning" and unbefitting of a high-profile CEO.

[15] "OpenAI's CEO just ditched a months-long effort to abandon non-profit control" (fortune.com). OpenAI reversed its full for-profit conversion plan in May 2025 after pushback from the California and Delaware AGs.

[16] "Coalition Requests Attorney General Action to Protect OpenAI's Charitable Assets" (sff.org). A San Francisco Foundation-led coalition urged AG Bonta to take further action to protect the OpenAI nonprofit's charitable assets during restructuring.

[17] "Sam Altman a sociopath, pathological liar and abuser? Musk agrees to claims on OpenAI chief's past and leadership" (wionews.com). Elon Musk endorsed the New Yorker's characterizations of Altman on social media amid ongoing litigation between Musk and OpenAI.