But the locus of the AI resignation letter, as a kind of industry artifact, is the red-hot startup OpenAI, where major figures, including top executives and safety-minded researchers, have been leaving for the last two years. Some resigned; some were fired; some were described in the press as "forced out" over internal company disputes. Seven left in a short period in the first half of 2024.

With revenue paling in comparison with its massive and growing infrastructure costs, OpenAI recently announced that it would begin incorporating ads into ChatGPT. That decision prompted researcher Zoë Hitzig to quit. This week, she published a resignation letter in the Times, warning about the potential implications of ads becoming part of the substrate of chatbot conversations. "ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda," she wrote. But, she warned, OpenAI seemed prepared to leverage that "archive of human candor" — much as Facebook had done — to target ads and undermine user autonomy. In the service of maximizing engagement, consumers might be manipulated — the classic sin of the modern internet.

If you think you are building a world-changing invention, you need to be able to trust your leadership. That's been a problem at OpenAI. On November 17, 2023, Altman was dramatically fired by the company's board because, it claimed, Altman was "not consistently candid in his communications with the board." Less than a week later, he performed his own boardroom coup and was reinstated, before consolidating his power. The exodus proceeded from there.

On May 14, 2024, OpenAI co-founder Ilya Sutskever announced his resignation. Sutskever was replaced as head of OpenAI's superalignment team by John Schulman, another company co-founder. A few months later, Schulman left OpenAI for Anthropic. Six months later, he announced his move to Thinking Machines Lab, an AI startup founded by former OpenAI CTO Mira Murati, who had replaced Altman as OpenAI's interim CEO during his brief firing.

The day after Sutskever left OpenAI, Jan Leike, who also helped head OpenAI's alignment work, announced on X that he had resigned. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity," Leike wrote, but the company's "safety culture and processes have taken a backseat to shiny products." He thought that "OpenAI must become a safety-first AGI company." Less than two weeks later, Leike was hired by Anthropic. OpenAI and Anthropic did not respond to requests for comment.

At OpenAI, departing researchers have said that the experts concerned with alignment and safety have often been sidelined, pushed out, or scattered among other teams, leaving researchers with the sense that AI companies are sprinting to build an invention they won't be able to control. "In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready" for AGI, wrote Miles Brundage when he resigned from OpenAI's AGI readiness team in 2024. Yet he added that "working at OpenAI is one of the most impactful things that most people could hope to do" and did not directly criticize the company. Brundage now runs AVERI, an AI research institute.

Across the AI industry, the story is much the same. In public pronouncements, top researchers gently chastise or occasionally denounce their employers for pursuing a potentially apocalyptic invention while also emphasizing the necessity of doing that research. Sometimes they offer a "cryptic warning" that leaves AI watchers scratching their heads. A few do seem genuinely alarmed at what's happening. When OpenAI safety researcher Steven Adler left the company in January 2025, he wrote that he was "pretty terrified by the pace of AI development" and wondered if it would wipe out humanity.

Yet in the many AI resignation letters, there's little discussion of how AI is being used right now. Data center construction, resource consumption, mass surveillance, ICE deportations, weapons development, automation, labor disruption, the proliferation of slop, a crisis in education — these are the areas where many people see AI affecting their lives, sometimes for the worse, and the industry's pious resignees don't have much to say about it all. Their warnings about some disaster just beyond the horizon become fodder for the tech press — and de facto cover letters for their next industry job — while failing to reach the broader public.

"Tragedies happen; people get hurt or die; and you suffer and get old," wrote William Stafford in the poem that Mrinank Sharma shared. It's a terrible thing, especially the tones of passivity and inevitability — resignation, you might call it. It can feel as if no single act of protest is enough, or, as Stafford writes in the next line: "Nothing you do can stop time's unfolding."

Jacob Silverman is a contributing writer for Business Insider. He is the author, most recently, of "Gilded Rage: Elon Musk and the Radicalization of Silicon Valley."