On Friday, November 17, 2023, the most powerful AI company in the world fired its CEO without warning.
By the following Wednesday, he was back - and most of the board was gone.
In between: the strangest five days in technology history. A story of power, ideology, governance, and the question of who controls the most consequential technology of our time.
Friday: The Firing
What Happened
At 12:23 PM Pacific, OpenAI's board published a terse statement: Sam Altman was out, effective immediately. The reason given: he "was not consistently candid in his communications with the board."
No specifics. No warning to investors, partners, or employees. The world learned via blog post.
The Immediate Shock
Microsoft's reaction: They found out minutes before the announcement. They'd invested $13 billion in OpenAI. Nobody called.
Employee reaction: Mass confusion. Slack channels exploded. Nobody knew what happened.
Industry reaction: If the CEO of OpenAI can be fired without warning, what did he do? Speculation ranged from safety violations to fraud to AGI achievement.
The Board's Reasoning
The four board members who voted to remove Altman:
- Ilya Sutskever (OpenAI's Chief Scientist and a co-founder)
- Adam D'Angelo (CEO of Quora)
- Tasha McCauley (tech entrepreneur, former CEO of Fellow Robots)
- Helen Toner (AI policy researcher at Georgetown's Center for Security and Emerging Technology)
They later said: their concern was about trust and candor, not a single incident. They felt Altman had systematically undermined the board's ability to govern.
What specifically? The board never gave a full public accounting, though Toner later cited examples: directors learning of ChatGPT's launch from Twitter, Altman's undisclosed ownership of the OpenAI Startup Fund, and inaccurate information about the company's safety processes. Broader speculation included undisclosed business dealings, safety disagreements, and power consolidation.
Saturday-Sunday: The Chaos
The Negotiation Attempts
Saturday morning: Investors, led by Thrive Capital and Tiger Global, pressured the board to reverse course. The board refused.
Saturday afternoon: Microsoft CEO Satya Nadella called board members directly. The board held firm.
Sunday: Altman returned to OpenAI's headquarters for negotiations. The board demanded governance changes; Altman demanded board changes. Talks stalled.
Sunday night: The board named former Twitch CEO Emmett Shear as interim CEO - a surprise outside hire, and a signal that the board would not simply reinstate Altman.
The Employee Revolt
The petition: Over 700 of OpenAI's 770 employees signed a letter threatening to resign unless the board stepped down and Altman returned.
The leverage: These weren't just workers. They were the researchers, engineers, and scientists who built GPT-4. If they left, OpenAI's value would collapse.
The unprecedented solidarity: Even employees who might have agreed with the board's concerns signed. The process, not the substance, was the breaking point.
Monday: The Microsoft Move
The Offer
Satya Nadella announced that Altman and co-founder Greg Brockman would join Microsoft to lead a new AI research team - with resources to hire any OpenAI employee who wanted to follow.
The subtext: If OpenAI's board wanted to destroy the company, Microsoft would simply absorb the pieces.
The pressure: This gave OpenAI employees a path out that preserved their compensation. And it showed the board they couldn't win.
Tuesday-Wednesday: The Resolution
The Deal
Altman returned as CEO. His primary condition was a reconstituted board.
The old board dissolved. Sutskever, McCauley, and Toner stepped down; D'Angelo remained as the sole holdover.
New board members: Bret Taylor (former Salesforce co-CEO) as chair and Larry Summers (former Treasury Secretary), with additional directors - including Altman himself - added in the months that followed.
The power shift: The new board was friendly to Altman. The independent oversight that caused the crisis was replaced with allies.
Ilya's Reversal
Ilya Sutskever, OpenAI's co-founder and chief scientist, had voted to fire Altman. By Monday morning, he had signed the employee letter and posted publicly: "I deeply regret my participation in the board's actions."
What changed? He later said he still believed in the governance concerns but didn't anticipate the chaos his vote would cause.
The interpretation: Either he was pressured into recanting, or he genuinely concluded the process was wrong even if the concerns were right.
The departure: Sutskever left OpenAI in May 2024 to found Safe Superintelligence Inc., a new company devoted entirely to AI safety.
The Governance Questions
What the Saga Revealed
The nonprofit structure was theater. OpenAI was structured as a nonprofit with a capped-profit subsidiary. This was supposed to ensure mission over money. In practice, the roughly $86 billion valuation then being negotiated in an employee tender offer meant market forces dominated.
The board lacked power. A board that can fire the CEO but can't survive doing so isn't really in control. The employees and investors held the real power.
Safety concerns can't survive commercial pressure. If Toner and McCauley did have legitimate safety concerns, the structure provided no way to act on them without destroying the organization.
The Competing Narratives
The Altman narrative: Rogue board members, possibly influenced by competitors or ideologues, tried to destroy a company for unclear reasons. The employees and market corrected the mistake.
The board narrative: Altman had concentrated too much power and was not transparent with the body meant to oversee him. Firing was appropriate; the inability to make it stick was a governance failure, not a validation.
The safety narrative: This was about AI development pace and safety concerns. The board worried Altman was moving too fast. Altman's return meant those concerns would be ignored.
The truth: Probably elements of all three. The full story may never be public.
The Aftermath
At OpenAI
Power consolidated. Altman emerged with more control than before. The board that challenged him was gone.
Commercial acceleration. GPT-4o, the o1 reasoning models, and eventually GPT-5 shipped without the friction a skeptical board might have provided.
Talent departures. Several key researchers left over the following year, including leaders of the safety-focused superalignment team, some citing the governance chaos, others the shift in culture.
For the Industry
The lesson for boards: Don't fire the founder unless you're absolutely certain you can survive the aftermath.
The lesson for founders: Governance structures only matter if they have teeth. OpenAI's didn't.
The lesson for society: The organizations building the most powerful technology have weak oversight mechanisms. The market favors speed over caution.
For AI Safety
The concern: If a board that tried to enforce safety considerations couldn't survive the attempt, what does that mean for AI governance generally?
The hope: The incident sparked broader conversation about AI governance. Maybe it will lead to better structures - external regulation, different organizational models.
The reality: Most AI companies doubled down on commercial structures without even the pretense of nonprofit oversight.
What It Means
For Believers in Corporate Governance
The lesson: Governance works when there's alignment or when there are enforcement mechanisms. At OpenAI, neither existed.
The question: Can any corporate structure provide meaningful oversight of transformational technology when commercial interests are this large?
For Believers in AI Safety
The concern: The organizations building frontier AI are governed by the people building frontier AI. External oversight is weak or absent.
The hope: Maybe governments will step in. Maybe new organizational structures will emerge. Maybe the market will somehow reward caution.
The realism: Don't count on it.
For Everyone Else
The observation: The technology shaping our future is being built by organizations with significant governance weaknesses. The November 2023 saga made this visible.
The question: What should be done about it?
