Sam Altman: The Man Who Would Rule AGI
Can we trust the man who controls the future of artificial intelligence?
That’s the question at the heart of Ronan Farrow’s explosive 16,000-word exposé in The New Yorker, published April 6, 2026. Drawing on leaked memos, internal documents, and interviews with over 100 sources, Farrow’s investigation reveals a pattern of deception, broken promises, and unchecked ambition at the helm of OpenAI—just as the company cements its role as a Pentagon contractor and marches toward a potential $1 trillion IPO.
The portrait that emerges is of a leader who publicly warns of AI’s existential risks while privately courting autocrats, dismantling safety teams, and lobbying against regulation. Below, we distill the most damning revelations—and ask whether Sam Altman’s vision for AI is a path to utopia or unaccountable power.
1. The Blip: Five Days That Shook OpenAI
On a Friday in November 2023, Sam Altman, the CEO of OpenAI, was attending a Formula 1 race in Las Vegas when he received an unexpected invitation: a video call with the company’s board of directors. Ilya Sutskever, OpenAI’s chief scientist and a co-founder, had spent weeks compiling evidence (70 pages of Slack messages, HR documents, and cellphone photos) alleging that Altman had engaged in a “consistent pattern of lying,” manipulated executives, and misled the board about critical safety protocols. When the call began, Sutskever read a brief statement: Sam Altman was no longer an employee of OpenAI.
The board’s public explanation was vague, citing Altman’s lack of “consistent candor.” But the move sent shockwaves through Silicon Valley. Microsoft, which had invested $13 billion in OpenAI, was blindsided. Employees revolted. Within hours, a public letter demanding Altman’s reinstatement began circulating. Over 95% of OpenAI’s staff threatened to quit if he wasn’t brought back. Five days later, the board capitulated. Altman returned as CEO, and the directors who had ousted him, including Sutskever, resigned.
Employees now refer to this period as “the Blip,” a Marvel-esque moment in which Altman disappeared and returned, unchanged, to a company forever altered. But the Blip wasn’t just a corporate coup. It was a test of whether OpenAI’s safeguards could survive the man who built them. They failed.
2. The Ilya Memos: A Pattern of Deception
The memos Ilya Sutskever compiled were never meant to see the light of day. Sent as disappearing messages to avoid leaks, they detailed a litany of allegations against Altman: misrepresenting facts to the board, deceiving executives about safety protocols, and pitting colleagues against one another. One memo began with a damning list: “Sam exhibits a consistent pattern of… Lying.”
Sutskever wasn’t alone in his concerns. Dario Amodei, another OpenAI co-founder who later left to start Anthropic, kept over 200 pages of notes documenting Altman’s broken promises. In one instance, Altman promised the “superalignment” team, tasked with ensuring AI systems remained aligned with human values, 20% of OpenAI’s computing power. The team received just 1–2%, much of it on outdated hardware. When researchers complained, Altman dismissed their concerns. “The problem with OpenAI is Sam himself,” Amodei wrote.
The memos and notes paint a picture of a leader whose word couldn’t be trusted. But in Silicon Valley, where ambition often outpaces accountability, trust is optional when you control the future.
3. The Manhattan Project Analogy: Hypocrisy in Action
Sam Altman has long invoked the Manhattan Project as a metaphor for OpenAI’s mission. In 2015, he warned that artificial general intelligence (AGI) could be “the most powerful, and potentially dangerous, invention in human history.” He positioned OpenAI as a nonprofit bulwark against unchecked corporate power, a guardian of humanity’s future. But behind closed doors, Altman’s actions often contradicted his public rhetoric.
In 2018, OpenAI executives brainstormed a plan dubbed “the Countries Plan.” The idea was to auction AI technology to nations, including China and Russia, to spark a bidding war. The premise was simple: if OpenAI could play world powers against each other, it could secure the funding needed to dominate the field. “We’re talking about potentially the most destructive technology ever invented—what if we sold it to Putin?” one employee recalled thinking. The plan was abandoned only after staff threatened to quit.
Altman’s courtship of Gulf states like Saudi Arabia and the UAE further underscored his willingness to prioritize power over principle. Despite their human rights records, Altman pursued tens of billions of dollars in funding from these regimes, even as employees raised ethical concerns. And when California proposed an AI safety bill in 2024, OpenAI lobbied aggressively to kill it.
Altman didn’t just break his own rules; he rewrote them, turning OpenAI’s mission from “saving humanity” into “winning at all costs.”
4. Safety vs. Profit: The Collapse of OpenAI’s Mission
OpenAI’s founding charter was clear: the company existed to ensure that AGI benefited all of humanity. But as the company grew, its commitment to safety eroded. In 2018, the charter included a “merge and assist” clause, pledging that if another organization neared safe AGI first, OpenAI would stop competing and assist them. By 2023, the clause had been quietly removed.
The dissolution of OpenAI’s safety teams tells the story of this shift. The superalignment team, promised 20% of OpenAI’s computing power, received a fraction of that (1–2%) and was dissolved in 2024. “Safety culture and processes have taken a backseat to shiny products,” wrote Jan Leike, the team’s former leader, in his resignation letter.
The final blow came when OpenAI stepped in to replace Anthropic, a rival AI lab, in a Pentagon contract. Anthropic had refused to comply with demands for mass surveillance and autonomous weapons. OpenAI did not. The Pentagon labeled Anthropic a “supply chain risk” and awarded OpenAI a $50 billion deal to embed its technology in U.S. defense systems.
By 2024, OpenAI’s safety teams were gone, its charter was a relic, and its models were being weaponized. The only question left was whether Altman cared.
5. The Cult of Sam: Power, Money, and Scandal
Sam Altman’s influence extends far beyond OpenAI. His net worth is tied to the company’s $1 trillion+ valuation, though he claims to have “no equity.” His lifestyle is lavish: a $27 million San Francisco mansion, a $20 million McLaren F1, and a penchant for hosting strip poker parties at his Hawaii estate. Politically, Altman has pivoted from backing Biden to advising Trump, donating $1 million to Trump’s inaugural fund.
But Altman’s personal life has also been a magnet for controversy. Elon Musk’s allies circulated dossiers falsely accusing Altman of sex with minors and whistleblower murders. (Ronan Farrow’s investigation found no evidence to support these claims.) More seriously, Altman’s sister, Annie, sued him for childhood sexual abuse, allegations Altman and his family vehemently deny.
Altman’s ties to Gulf states such as the UAE and Saudi Arabia have raised further ethical questions. He has referred to the UAE’s Sheikh Tahnoon as a “dear personal friend” and accepted gifts, including a hypercar, while negotiating a $7 trillion “ChipCo” project to build AI infrastructure in the Middle East.
Altman’s personal life mirrors his professional one: a mix of brilliance, controversy, and a refusal to be constrained by rules or truth.
6. The Future: AGI, IPO, and the Cost of Unchecked Power
OpenAI is now preparing for a potential IPO that could make Altman one of the richest people on Earth. The company’s “Stargate” project, a $500 billion global AI infrastructure network, is moving forward, with funding from Gulf states and a Trump administration eager to deregulate the industry.
Critics warn that OpenAI’s aggressive financial leverage could prove dangerous. Fidji Simo, OpenAI’s AGI Deployment CEO, has been floated as a potential successor, suggesting that even insiders doubt Altman’s longevity. Meanwhile, Altman’s public stance on AI safety has shifted: once a vocal advocate for caution, he now dismisses concerns as “self-inflicted injuries” and praises Trump’s deregulatory approach as “refreshing.”
The stakes couldn’t be higher. OpenAI’s models are already being used in military operations, from autonomous drones to psychological warfare. The company’s IPO could further entrench Altman’s power, with little oversight. As one former board member put it: “The company levered up financially in a way that’s risky and scary.”
Sam Altman didn’t just build an AI company; he built a machine for concentrating power. The question now is whether the world will let him keep it.